http://www.fields.utoronto.ca/programs/scientific/11-12/CalabiYau/abstracts.html
## August 16-25, 2011 Workshop on Arithmetic and Geometry of K3 surfaces and Calabi-Yau threefolds
to be held at the University of Toronto and the Fields Institute
Organizers:
Charles F. Doran, University of Alberta, and PIMS
Shigeyuki Kondō, Nagoya University
James D. Lewis, University of Alberta
Matthias Schütt, Leibniz University Hannover
Noriko Yui, Queen’s University (corresponding organizer)
This workshop is supported by:
The Fields Institute
NSF (1100007 plus individual grants)
JSPS Grant-in-Aid (S), No. 22224001
DFG GRK 1463
### Pedagogical Lectures:
C. Doran (University of Alberta, Canada)
Periods, Picard-Fuchs Equations, and Calabi-Yau Moduli
We introduce and explore the transcendental theory of Calabi-Yau manifolds and its interplay with explicit algebraic moduli. The focus in each lecture will be on Calabi-Yau manifolds of sequentially higher dimension (elliptic curves, K3 surfaces, and Calabi-Yau threefolds). Special attention will be given to hypersurfaces and complete intersections in toric varieties.
____________________________
S. Kondo (Nagoya University, Japan)
K3 and Enriques surfaces
In this introductory lecture, I shall give a survey of moduli and automorphisms of K3 and Enriques surfaces. The related lattice theory and the theory of automorphic forms will also be covered.
____________________________
R. Laza (Stony Brook University, USA)
Degenerations of K3 surfaces and Calabi-Yau threefolds
In these lectures we will review the degenerations of K3 surfaces and Calabi-Yau threefolds from a geometric and Hodge-theoretic point of view. In the first lecture we will focus on K3 surfaces, and we will review the period map and its compactifications. In the second lecture, we will discuss the behavior of the period map near the boundary and the connection to mirror symmetry.
____________________________
J. Lewis (University of Alberta, Canada)
Lectures in Transcendental Algebraic Geometry: Algebraic Cycles with a Special Emphasis on Calabi-Yau Varieties
These lectures serve as an introduction to algebraic cycle groups and their regulators for projective algebraic manifolds. More precisely, after presenting a general overview, we begin with some rudimentary aspects of Hodge theory and algebraic cycles. We then introduce Deligne cohomology, as well as generalized cycles that are connected to higher $K$-theory, and associated regulators. Finally, we specialize to the Calabi-Yau situation, and explain some recent developments in the field.
Lecture Notes
____________________________
M. Schuett (University of Hannover, Germany)
Arithmetic of K3 surfaces
We will review various aspects of the arithmetic of K3 surfaces. Topics will include rational points, Picard numbers and the Tate conjecture, zeta functions, and modularity.
____________________________
N. Yui (Queen's University, Canada)
Modularities of Calabi-Yau varieties: 2011 and beyond
This paper presents the current status of modularities of Calabi-Yau varieties since the last update in 2001. We will focus on Calabi-Yau varieties of dimension at most three. Here modularity refers to at least two different types, arithmetic modularity and geometric modularity. These will include:
(1) the modularity of Galois representations of Calabi-Yau varieties (or motives) defined over $\mathbb{Q}$ or number fields,
(2) the modularity of solutions of Picard-Fuchs differential equations of families of Calabi-Yau varieties, and the modularity of mirror maps (mirror moonshine),
(3) the modularity of generating functions of various invariants counting some quantities on Calabi-Yau varieties, and
(4) the modularity of moduli for families of Calabi-Yau varieties.
Topic (4) is commonly known as geometric modularity.
In this series of talks, I will concentrate on arithmetic modularity, namely topic (1), and possibly on topics (2) and (3) if time permits.
*************************************************************
### Invited Speaker Abstracts:
M. Artebani
Examples of Mori dream Calabi-Yau threefolds
Let $Z$ be a Mori dream space, i.e. a normal projective variety having finitely generated Cox ring $R(Z)$, and let $X$ be a hypersurface of $Z$. In joint work with A. Laface we provided a necessary and sufficient condition for the Cox ring $R(X)$ to be isomorphic to $R(Z)/(f)$, where $f$ is a defining section for $X$. In this talk, after presenting this result, two applications to Calabi-Yau 3-folds will be given. First, we will show that there are five families of Calabi-Yau hypersurfaces in smooth toric Fano fourfolds whose Cox ring is a polynomial ring with one relation. As a second application, we will compute the Cox ring of the generic quintic 3-fold containing a plane.
____________________________
X. Chen (University of Alberta, Canada)
Rational self-maps of K3 surfaces and Calabi-Yau manifolds
It is conjectured that a very general K3 surface does not have any nontrivial dominant rational self-maps. I'll give a proof for this conjecture and also show the same holds for a very general Calabi-Yau complete intersection in projective spaces of higher dimensions by induction.
Slides
____________________________
A. Clingher (Washington University in St. Louis, USA)
On K3 Surfaces of High Picard Rank
I will report on a classification of a certain class of K3 surfaces of Picard rank 16 or higher. In terms of periods, the moduli space of these objects is a quotient of a four-dimensional bounded symmetric domain of type IV. Explicit normal forms will be presented, as well as a discussion of modular forms associated with this family.
____________________________
S. Cynk (Jagiellonian University, Poland)
Arithmetically significant Calabi-Yau threefolds
From the point of view of their arithmetic, the most interesting Calabi-Yau threefolds are those with small Hodge number $h^{1,2}$, especially the rigid ones. I will discuss the most important constructions of such Calabi-Yau threefolds, e.g. the Kummer construction, fiber products of rational elliptic surfaces with section, and their refinements.
____________________________
I. Dolgachev (University of Michigan, USA)
Quartic surfaces and Cremona transformations
I will discuss the following question: when is a birational automorphism of a quartic surface the restriction of a Cremona transformation of the ambient space?
____________________________
N. Elkies (Harvard University, USA)
Even lattices and elliptic fibrations of K3 surfaces I, II
Given a K3 surface $X$, any elliptic fibration with zero-section has an essential lattice $L$ (the orthogonal complement of a hyperbolic plane) whose genus depends only on the Néron-Severi lattice $NS(X)$.
The Kneser-Nishiyama gluing method and related techniques often make it feasible to list all possible $L$, or all $L$ satisfying some additional condition such as nontrivial torsion or large Mordell-Weil rank, and to give explicit equations when one equation for $X$ is known. We illustrate with several examples:
(a) Of the 13 elliptic fibrations of Euler's surface $E_a: xyz(x+y+z)=a$, nine can be defined over $Q(a)$, all with Mordell-Weil rank zero. This may both explain why Euler found it unusually hard to find families of solutions in $Q(a)$ and suggest how he did eventually find one such family. Over an algebraically closed field, the $E_a$ all become isomorphic with the "singular" K3 surface (Picard number $20$, maximal in characteristic zero) with $disc(NS(X)) = -4$.
(b) If $NS(X)$ has rank $20$ and consists entirely of classes defined over $Q$, then $|disc(NS(X))|$ is at most $163$. We use this to show that no elliptic fibration can attain the maximum of $18$ for the Mordell-Weil rank of an elliptic K3 surface over $C(t)$; this, together with an explicit rank $17$ surface over $Q(t)$ (with $\rho=19$), answers a question of Shioda (1994).
(c) Certain families of K3 surfaces with Picard number $19$ are parametrized by Shimura modular curves; this makes it possible to give explicit equations and CM coordinates on these curves that were previously inaccessible, and to find the genus $2$ curves with quaternionic multiplication that the Shimura curves parametrize.
____________________________
R. Girivaru (University of Missouri-St. Louis, USA)
Extension theorems for subvarieties and bundles
Given a subvariety (respectively a vector bundle) on a smooth hyperplane section of a smooth projective variety, it is of interest to know when it is the restriction of a subvariety (resp a bundle) on the ambient variety. I will present some results on this theme.
____________________________
J. W. Hoffman (Louisiana State University, USA)
Picard groups of Siegel modular threefolds and theta lifting
This is a joint work with Hongyu He.
A Siegel modular threefold is a quotient of the Siegel upper half space of genus 2 by a subgroup of finite index in Sp(4, Z). These spaces are moduli spaces for abelian varieties with additional structure, and are examples of Shimura varieties. We discuss the structure of the Picard groups of these; they are groups generated by algebraic cycles of codimension one. We show that these Picard groups are generated by special cycles in the sense of Kudla-Millson. These special cycles are identified with the classically defined Humbert surfaces. The key points are: (1) the theory of special cycles relating geometric cycles to automorphic forms coming from theta-lifting; (2) Weissauer's theorems describing the Picard groups via automorphic forms; (3) results of Howe about the oscillator representation.
____________________________
K. Hulek (University of Hannover, Germany)
Abelian varieties with a singular odd $2$-torsion point on the theta divisor
We study the (closure of the) locus of intermediate Jacobians of cubic threefolds in the perfect cone compactification of the moduli space of principally polarized abelian fivefolds for which we obtain an expression in the tautological Chow ring. As a generalization we consider the locus of principally polarized abelian varieties with a singular odd $2$-torsion point on the theta divisor and their degenerations. This is joint work with S. Grushevsky.
Lecture Notes
____________________________
M. Kerr (Washington University in St. Louis, USA)
Higher Chow cycles on families of K3 surfaces
This talk is a tale of two cycles, both supported on singular fibers of families of elliptically fibered K3's. The first lives on a cover of the $H+E8+E8$-polarized family of Clingher and Doran, and we discuss a direct evaluation of the real regulator (part of joint work with Chen, Doran, and Lewis). The resulting function is related to a kind of "Maass cusp form with pole". For the second cycle, we explain how to use a bit of Tauberian theory to compute the transcendental regulator.
____________________________
J. Keum (KIAS, Korea)
Finite groups acting on K3 surfaces in positive characteristic
A remarkable work of S. Mukai [1988] gives a classification of finite groups which can act on a complex K3 surface leaving invariant its holomorphic 2-form (symplectic automorphism groups). Any such group turns out to be isomorphic to a subgroup of the Mathieu group $M_{23}$ which has at least 5 orbits in its natural action on the set of 24 elements. A list of maximal subgroups with this property consists of 11 groups, each of which can be realized on an explicitly given K3 surface. Different proofs of Mukai's result were given by S. Kondō [1998] and G. Xiao [1996]. None of the 3 proofs extends to the case of K3 surfaces over algebraically closed fields of positive characteristic $p$. In this talk I will outline a recent joint work with I. Dolgachev on extending Mukai's result to the positive characteristic case. In positive characteristic we first have to handle wild automorphisms, the ones whose orders are divisible by the characteristic $p$. It turns out that no wild automorphism of a K3 surface exists in characteristic $p > 11$. Then a classification will be given of the finite groups which may act symplectically on a K3 surface in positive characteristic.
____________________________
R. Kloosterman (Humboldt Universitaet zu Berlin, Germany)
Mordell-Weil ranks, highest degree syzygies and Alexander polynomials
We discuss an approach to calculating the Mordell-Weil rank of an elliptic threefold. We apply this method to a class of elliptic threefolds with constant $j$-invariant 0.
It turns out that in this particular case there is a strong connection between
1. the number of highest degree syzygies of the ideal of a certain subscheme of the singular locus of the discriminant curve,
2. the Mordell-Weil rank of the fibration, and
3. the exponent of $(t^2-t+1)$ in the Alexander polynomial of the discriminant curve.
We used the connection between 1 and 2 to find a nontrivial upper bound for the Mordell-Weil rank.
As an application we use the connection between 1 and 2 to describe all degree 18 plane curves, with only nodes and cusps as singularities, whose deformation space has larger dimension than expected. (In this case the associated elliptic threefold is a degeneration of a Calabi-Yau elliptic threefold.)
We then show that one can recover the Alexander polynomial of any even degree $d$ plane curve $C=Z(f(z_0,z_1,z_2))$ by studying the threefold $W\subset \mathbb{P}(d/2,1,1,1)$ given by $y^2+x^d+f=0$. It turns out that in the case that $C$ has only ADE singularities the Alexander polynomial of $C$ determines the group of Weil Divisors on $W$ modulo $\mathbb{Q}$-Cartier divisors on $W$. One can use this to find a series of subschemes $J_i$ of the singular locus of $C$, such that the number of highest degree syzygies of $J_i$ has a geometric interpretation. We end by giving some higher dimensional examples.
____________________________
S. Kudla (University of Toronto)
Modular generating functions for arithmetic cycles: a survey
In this talk I will give a survey of some recent results on the relations between the Fourier coefficients of modular forms and the classes of certain cycles in arithmetic Chow groups of Shimura varieties. When the generating series for such cycle classes are modular forms, they may be viewed as an exotic type of theta function. The behavior of such forms under natural geometric operations, such as pullback to subvarieties, is of particular interest. I will describe several examples and discuss some open problems.
____________________________
A. Kumar (MIT, USA)
Elliptic fibrations on Kummer surfaces
I will describe computations regarding elliptic fibrations on Kummer surfaces, and some applications, such as explicit algebraic families of K3 surfaces with Shioda-Inose structure.
____________________________
C. Liedtke (Stanford University, USA)
Rational Curves on K3 Surfaces
We show that projective K3 surfaces with odd Picard rank contain infinitely many rational curves. Our proof extends the Bogomolov-Hassett-Tschinkel approach, i.e., uses moduli spaces of stable maps and reduction to positive characteristic. This is joint work with Jun Li.
____________________________
H. Movasati (IMPA, Brazil)
Eisenstein type series for mirror quintic Calabi-Yau varieties
In this talk we introduce an ordinary differential equation associated to the one-parameter family of Calabi-Yau varieties which is mirror dual to the universal family of smooth quintic threefolds. It is satisfied by seven functions written in $q$-expansion form, and the Yukawa coupling turns out to be rational in these functions. We prove that these functions are algebraically independent over the field of complex numbers, and hence the algebra generated by them can be interpreted as the theory of quasi-modular forms attached to the one-parameter family of Calabi-Yau varieties. Our result is a reformulation and realization of a problem of Griffiths from around the seventies on the existence of automorphic functions for the moduli of polarized Hodge structures. It is a generalization of the Ramanujan differential equation satisfied by three Eisenstein series.
____________________________
S. Mukai (RIMS, Japan)
Enriques surfaces and root systems
There are many interesting families of Enriques surfaces which are characterized by the presence of (negative definite) root sublattices of ADE type in their twisted Picard lattices. In this talk I will discuss two such families, (a) Enriques surfaces with many M-semi-symplectic automorphisms and (d) Enriques surfaces of Lieberman type, related to joint work with H. Ohashi, and another kind of family, (e) Enriques surfaces of type $E_7$.
____________________________
V. Nikulin (University of Liverpool, UK, and Steklov Mathematical Institute, Moscow, Russia)
Elliptic fibrations on K3 surfaces
We discuss how many elliptic fibrations and elliptic fibrations with infinite automorphism groups an algebraic K3 surface over an algebraically closed field can have. As examples of applications of the same ideas, we also consider K3 surfaces with exotic structures: with a finite number of Enriques involutions, and with naturally arithmetic automorphism groups. See arXiv:1010.3904 for details.
Lecture Notes
___________________________
K. O'Grady (Sapienza Universita' di Roma)
Moduli and periods of double EPW-sextics
We analyze the GIT quotient of the parameter space for (double covers of) EPW-sextics, i.e. the symplectic Grassmannian of Lagrangian subspaces of the third wedge-product of a $6$-dimensional complex vector space (equipped with the symplectic form defined by wedge product on $3$-vectors), modulo the natural action of $PGL(6)$. Our goal is to analyze the period map for the GIT quotient; thus we aim to establish a dictionary between (semi)stability conditions and properties of the periods. We are inspired by the works of C. Voisin and R. Laza on cubic 4-folds.
____________________________
K. Oguiso (Osaka University, Japan)
Group of automorphisms of Wehler type on Calabi-Yau manifolds and compact hyperkaehler manifolds
Wehler pointed out, without proof, that a K3 surface defined by a polynomial of multi-degree $(2,2,2)$ in the product of three projective lines admits a biholomorphic group action of the free product of three cyclic groups of order two. I would like to first explain a proof of his result and in which respects his example is interesting. Then I would like to give a "fake" generalization for Calabi-Yau manifolds and explain why it is fake. Finally I would like to give a right generalization for Calabi-Yau manifolds of any even dimension and compact hyperkähler manifolds of any degree.
____________________________
H. Ohashi (Nagoya University, Japan)
On automorphisms of Enriques surfaces
We will discuss a possible extension to Enriques surfaces of an outstanding result of Mukai about the automorphism groups of K3 surfaces. We define the notion of Mathieu-semi-symplectic actions on Enriques surfaces and classify them. The maximal groups will be characterized in terms of the small Mathieu group $M_{12}$. This is a joint work with S. Mukai.
Lecture Notes
____________________________
G. Pearlstein (Michigan State University, USA)
Jumps in the Archimedean Height
We answer a question of Richard Hain regarding the asymptotic behavior of the archimedean heights and explain its connection to the Hodge conjecture via the work of Griffiths and Green.
____________________________
J.-C. Rohde (Universitaet Hamburg, Germany)
Shimura varieties and Calabi-Yau manifolds versus Mirror Symmetry
There are examples of Calabi-Yau $3$-manifolds $X$ which cannot be the fiber of a maximal family of Calabi-Yau $3$-manifolds with maximally unipotent monodromy. This contradicts the assumptions of the mirror symmetry conjecture. All known examples of this kind can be constructed as quotients of products of K3 surfaces $S$ and elliptic curves by an automorphism of order 3 or 4. Moreover, the period domain associated with a maximal family with a fiber isomorphic to $X$ is a complex ball containing a dense set of complex multiplication points. In some examples the K3 surfaces $S$ used for the construction of $X$ can also be used to construct pairs of subfamilies of pairs of mirror families with dense sets of complex multiplication fibers.
____________________________
A. Sarti (University of Poitiers, France)
The BHCR-mirror symmetry for K3 surfaces
The aim of this talk is to apply the construction of mirror pairs of Berglund and Hübsch to K3 surfaces with non-symplectic involution, and to investigate a recent result of Chiodo and Ruan. They apply the construction to pairs $(X,G)$ where $X$ is a Calabi-Yau manifold of dimension at least three, given as the zero set of a non-degenerate potential in some weighted projective space, and $G$ is a finite group acting on the manifold. For this reason we call the symmetry the *BHCR-mirror symmetry*. In the talk I will show that this symmetry coincides with the mirror symmetry for lattice polarized K3 surfaces described by Dolgachev. This is joint work with Michela Artebani and Samuel Boissière.
____________________________
C. Schnell (IPMU, Japan)
Derived equivalences and the fundamental group
I will describe an example (constructed by Gross and Popescu) of a simply connected Calabi-Yau threefold $X$, with a free action by the group $G = \mathbb{Z}/5\mathbb{Z} \times \mathbb{Z}/5\mathbb{Z}$, for which $X$ and $X/G$ are derived equivalent. This shows that being simply connected is not a derived invariant.
____________________________
C. Schoen (Duke University, USA)
Desingularized fiber products of elliptic surfaces
The varieties of the title are sufficiently complex to exhibit many of the phenomena which arise when one studies smooth projective threefolds, but are often significantly simpler to work with than general threefolds because of the well understood elliptic surfaces from which they are built. So far these varieties have contributed to our understanding of algebraic cycles, modularity of Galois representations, phenomena peculiar to positive characteristic, superstring theory, Brauer groups, Calabi-Yau threefolds, and families of Kummer surfaces. Many open problems remain.
____________________________
S. Schroeer (University of Duesseldorf, Germany)
Enriques manifolds
Enriques manifolds are complex spaces whose universal coverings are hyperkähler manifolds. We give several examples, construct period domains, and establish a local Torelli theorem. The theory applies to various situations related to punctual Hilbert schemes, moduli spaces of stable sheaves, and Mukai flops. This is joint work with K. Oguiso.
____________________________
A. Thompson (Oxford University, UK)
Degenerations of K3 surfaces of degree two
We consider semistable degenerations of K3 surfaces of degree two, with the aim of explicitly studying the geometric behaviour at the boundary of the moduli space of such surfaces. We begin by showing that results of the minimal model program may be used to bring these degenerations into a uniquely determined normal form: the relative log canonical model. We then proceed to describe a result that explicitly classifies the central fibres that may appear in this relative log canonical model, as complete intersections in certain weighted projective spaces.
Lecture Notes
____________________________
D. van Straten (Universitaet Mainz, Germany)
CY-period expansions
The local power series expansions of period functions have strong integrality properties. Such expansions can be used effectively to find Picard–Fuchs equations in situations where the traditional “Dwork–Griffiths method” is not available or is cumbersome to use. We give examples of how to use “conifold expansions” to obtain the Picard–Fuchs equations for some one-parameter families of Calabi–Yau 3-folds.
(Work in progress, joint with S. Cynk.)
____________________________
U. Whitcher (Harvey Mudd College, USA)
Picard-Fuchs equations for lattice-polarized K3 surfaces
The moduli spaces of K3 surfaces polarized by the lattices $H\oplus E_8\oplus E_8$ and $H\oplus E_8 \oplus E_7$ are related to moduli spaces of polarized abelian surfaces. We use Picard-Fuchs equations for the lattice-polarized K3 surfaces to explore this correspondence and characterize subloci of the moduli spaces of particular interest.
____________________________
K.-I. Yoshikawa (Kyoto University, Japan)
On the value of Borcherds $\Phi$-function
It is well known that the Petersson norm of the Jacobi Delta-function is expressed as the product of the discriminant of a cubic curve and the $L_2$ norm of an appropriately normalized $1$-form on the curve. We give a generalization of this fact to Enriques surfaces and the Borcherds $\Phi$-function.
Slides
____________________________
J.-D. Yu (National Taiwan University, Taiwan)
On Dwork congruences
The Dwork congruences refer to a system of congruences among the coefficients of periods of certain Calabi-Yau pencils. They are used to derive the unit root formula for the zeta functions of the reductions of the fibers. Examples include certain hypergeometric series proved by Dwork himself via ad hoc methods. Here we give a geometric interpretation of these congruences.
____________________________
Y. Zarhin (Pennsylvania State University, USA)
Hodge groups
We discuss computations of Hodge groups of certain superelliptic jacobians (based on joint papers with Jiangwei Xue).
*************************************************************
### Contributed Speaker Abstracts:
M.J. Bertin (Université Paris 6)
Elliptic fibrations on the modular surface associated to $\Gamma_1(8)$
This is a joint work with Odile Lecacheux. Using Nishiyama's method, we determine all the elliptic fibrations with section on the elliptic surface $$X+\frac {1}{X}+Y+\frac {1}{Y}+Z+\frac {1}{Z}=2.$$ This $K3$ surface, of discriminant $-8$, is shown to be the modular surface associated to the modular group $\Gamma_1(8)$. We illustrate the method with examples and show how to get, for a given fibration, the rank and torsion of the Mordell-Weil group. Moreover, from a Weierstrass equation of an elliptic fibration, we explain one of the various ways to obtain a Weierstrass equation of another fibration.
____________________________
Yasuhiro Goto (Hokkaido University of Education Hakodate)
On K3 surfaces with involution
K3 surfaces with involution are classified by Nikulin's invariants. We calculate these invariants for K3 surfaces defined in weighted projective $3$-spaces by Delsarte-type equations.
____________________________
L. H. Halle (University of Oslo, Norway)
Motivic zeta functions for degenerations of Calabi-Yau varieties
I will discuss a global version of Denef and Loeser's motivic zeta functions. More precisely, to any Calabi-Yau variety $X$ defined over a discretely valued field $K$, I will define a formal power series $Z_X(T)$ with coefficients in a certain localized Grothendieck ring of varieties over the residue field $k$ of $K$. The series $Z_X(T)$ has properties analogous to Denef and Loeser's zeta function, in particular one can formulate a global version of the motivic monodromy conjecture. I will present a few cases where this conjecture has been proved. This is joint work with Johannes Nicaise.
Lecture Notes
____________________________
S. Sijsling (IMPA, Brazil)
Calculating arithmetic Picard-Fuchs equations
We consider second-order Picard-Fuchs equations that are obtained by uniformizing certain genus 1 Shimura curves. These equations are distinguished by having a particularly beautiful monodromy group, generated by two elements and commensurable with the group of units of a quaternion order. They describe the periods of certain families of fake elliptic curves that are as yet hard to write down. We explore methods for determining these equations explicitly, and discuss the open questions that remain.
Lecture Notes
https://gmatclub.com/forum/gmatprep-question-79043.html
# GMATprep question
If $$(\frac{1}{5})^m*(\frac{1}{4})^{18}= \frac{1}{2*10^{35}}$$, what is m equal to?
dakhan wrote:
If $$(\frac{1}{5})^m*(\frac{1}{4})^{18}= \frac{1}{2*10^{35}}$$, what is m equal to?
$$(\frac{1}{5})^m*(\frac{1}{4})^{18}= \frac{1}{2*10^{35}}$$
$$5^{-m}*(4)^{-18}= 2^{-1}*10^{-35}$$
$$5^{-m}*(2*2)^{-18}= 2^{-1}*(2*5)^{-35}$$
$$5^{-m}*(2^2)^{-18}= 2^{-1}*2^{-35}*5^{-35}$$
$$5^{-m}*(2)^{-36}= 2^{-1}*2^{-35}*5^{-35}$$
$$5^{-m}*(2)^{-36}= 2^{-36}*5^{-35}$$
$$5^{-m}*(2)^{-36}= 5^{-35}*2^{-36}$$
$$\Rightarrow m=35$$
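For anyone who wants to double-check the result, here is a short sanity check with exact rational arithmetic in Python (not part of the original thread):

```python
from fractions import Fraction

# Verify that m = 35 satisfies (1/5)^m * (1/4)^18 = 1 / (2 * 10^35) exactly.
lhs = Fraction(1, 5) ** 35 * Fraction(1, 4) ** 18
rhs = Fraction(1, 2 * 10 ** 35)
print(lhs == rhs)  # True
```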
Awesome. It seems like a complicated problem judging from its solution, but it's quite easy once you actually work it out. Thanks.
https://meangreenmath.com/2013/09/17/formula-for-an-infinite-geometric-series-part-10/
# Formula for an infinite geometric series (Part 10)
I conclude this series of posts by considering the formula for an infinite geometric series. Somewhat surprisingly (to students), the formula for an infinite geometric series is actually easier to remember than the formula for a finite geometric series.
One way of deriving the formula parallels the derivation for a finite geometric series. If $a_1, a_2, a_3, \dots$ are the first terms of an infinite geometric sequence, let
$S = a_1 + a_2 + a_3 + \dots$
Recalling the formula for a geometric sequence, we know that
$a_2 = a_1 r$
$a_3 = a_1 r^2$
$\vdots$
Substituting, we find
$S = a_1 + a_1 r + a_1 r^2 + \dots$
Once again, we multiply both sides by $-r$.
$-rS = -a_1 r - a_1 r^2 - a_1 r^3 - \dots$
Next, we add the two equations. Notice that almost everything cancels on the right-hand side… except for the leading term $a_1$. (Unlike yesterday’s post, there is no “last” term that remains since the series is infinite.) Therefore,
$S - rS = a_1$
$S(1-r) = a_1$
$S = \displaystyle \frac{a_1}{1-r}$
A quick pedagogical note: I find that this derivation “sells” best to students when I multiply by $-r$ and add, as opposed to multiplying by $r$ and subtracting.
The above derivation is helpful for remembering the formula but glosses over an extremely important detail: not every infinite geometric series converges. For example, if $a_1 = 1$ and $r = 2$, then the infinite geometric series becomes
$1 + 2 + 4 + 8 + 16 + \dots$,
which clearly does not have a finite answer. We say that this series diverges. In other words, trying to evaluate this sum makes as much sense as trying to divide a number by zero: there is no answer.
That said, it can be shown that, as long as $-1 < r < 1$, then the above geometric series converges, so that
$a_1 + a_1 r + a_1 r^2 + \dots = \displaystyle \frac{a_1}{1-r}$
The formal proof requires the use of the formula for a finite geometric series:
$a_1 + a_1 r + a_1 r^2 + \dots + a_1 r^{n-1} = \displaystyle \frac{a_1(1-r^n)}{1-r}$
We then take the limit as $n \to \infty$:
$\displaystyle \lim_{n \to \infty} a_1 + a_1 r + a_1 r^2 + \dots + a_1 r^{n-1} = \displaystyle \lim_{n \to \infty} \frac{a_1(1-r^n)}{1-r}$
$a_1 + a_1 r + a_1 r^2 + \dots = \displaystyle \lim_{n \to \infty} \frac{a_1(1-r^n)}{1-r}$
On the right-hand side, the only piece that contains an $n$ is the term $r^n$. If $-1 < r < 1$, then $r^n \to 0$ as $n \to \infty$. (This limit fails, however, if $r \ge 1$ or $r \le -1$.) Therefore,
$a_1 + a_1 r + a_1 r^2 + \dots = \displaystyle \lim_{n \to \infty} \frac{a_1(1-0)}{1-r} = \displaystyle \frac{a_1}{1-r}$
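To see this convergence numerically, here is a small Python sketch (the sample values $a_1 = 3$ and $r = 1/2$ are arbitrary; any $|r| < 1$ behaves the same way):

```python
a1, r = 3.0, 0.5            # any a1 and any |r| < 1 will do
limit = a1 / (1 - r)        # the claimed value of the infinite sum

s = 0.0
for n in range(1, 51):
    s += a1 * r ** (n - 1)  # add the n-th term a1 * r^(n-1)
    if n in (5, 10, 25, 50):
        print(n, s, limit - s)  # the gap limit - s shrinks like r^n
```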
https://www.physicsforums.com/threads/fortran-openmp-parallelization-and-writing-in-multiple-files.626516/
# Fortran OpenMP parallelization and writing in multiple files.
1. Aug 8, 2012
### eleteroboltz
Hello.
I'm attempting to write some data in different files (each thread write in each file), but I'm getting an error saying: 'File already opened in another unit'. I'm using the function OMP_get_thread_num() from OpenMP library in order to open individual files in individual threads.
Code (Text):
!$OMP PARALLEL DEFAULT(PRIVATE) SHARED(PeVet,num,eta,xi,p) FIRSTPRIVATE(Temp,Told)
!$OMP DO
DO q=1,5
   Pe = PeVet(q)
   WRITE(FileName,'(a,i4.4,a,i4.4,a,i2.1,a)') 'Velocity-Imax',Imax,'Jmax',Jmax,'Kn',p,'.dat'
   DO j=1,jmax
      ud(j)= (3.d0/2.d0)*(1.d0+8*Kn*bV-eta(j)**2)/(1.d0+12*Kn*bV)
   END DO
END DO
!$OMP END DO
!$OMP END PARALLEL
I don't know what I'm doing wrong...
Last edited: Aug 8, 2012
2. Aug 8, 2012
### gsal
It looks like the error is meaningful, isn't it?
I mean, you may be making sure that the unit number is different in every thread, but it does not look like you are making sure that the file names become different... In other words, it looks like you open the file the first time around from the first thread that gets there... and THEN you are attempting to open the same file from another thread with a different unit number... follow?
Are you trying to write to different files or to the same one, or what?
3. Aug 8, 2012
### eleteroboltz
Gsal, you are totally right.
Thank you
http://scipy.github.io/devdocs/generated/scipy.stats.hypergeom.html
# scipy.stats.hypergeom¶
scipy.stats.hypergeom = <scipy.stats._discrete_distns.hypergeom_gen object>[source]
A hypergeometric discrete random variable.
The hypergeometric distribution models drawing objects from a bin. M is the total number of objects, n is total number of Type I objects. The random variate represents the number of Type I objects in N drawn without replacement from the total population.
As an instance of the rv_discrete class, hypergeom object inherits from it a collection of generic methods (see below for the full list), and completes them with details specific for this particular distribution.
Notes
The symbols used to denote the shape parameters (M, n, and N) are not universally accepted. See the Examples for a clarification of the definitions used here.
The probability mass function is defined as,
$p(k, M, n, N) = \frac{\binom{n}{k} \binom{M - n}{N - k}}{\binom{M}{N}}$
for $$k \in [\max(0, N - M + n), \min(n, N)]$$, where the binomial coefficients are defined as,
$\binom{n}{k} \equiv \frac{n!}{k! (n - k)!}.$
The probability mass function above is defined in the “standardized” form. To shift distribution use the loc parameter. Specifically, hypergeom.pmf(k, M, n, N, loc) is identically equivalent to hypergeom.pmf(k - loc, M, n, N).
Examples
>>> from scipy.stats import hypergeom
>>> import numpy as np
>>> import matplotlib.pyplot as plt
Suppose we have a collection of 20 animals, of which 7 are dogs. Then if we want to know the probability of finding a given number of dogs if we choose at random 12 of the 20 animals, we can initialize a frozen distribution and plot the probability mass function:
>>> [M, n, N] = [20, 7, 12]
>>> rv = hypergeom(M, n, N)
>>> x = np.arange(0, n+1)
>>> pmf_dogs = rv.pmf(x)
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot(x, pmf_dogs, 'bo')
>>> ax.vlines(x, 0, pmf_dogs, lw=2)
>>> ax.set_xlabel('# of dogs in our group of chosen animals')
>>> ax.set_ylabel('hypergeom PMF')
>>> plt.show()
Instead of using a frozen distribution we can also use hypergeom methods directly. For example, to obtain the cumulative distribution function, use:
>>> prb = hypergeom.cdf(x, M, n, N)
And to generate random numbers:
>>> R = hypergeom.rvs(M, n, N, size=10)
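As a quick cross-check (not part of the original page), the pmf formula from the Notes can be evaluated directly with binomial coefficients and compared against hypergeom.pmf, reusing the example values of M, n, N from above:

>>> from math import comb
>>> M, n, N, k = 20, 7, 12, 4
>>> direct = comb(n, k) * comb(M - n, N - k) / comb(M, N)
>>> bool(np.isclose(direct, hypergeom.pmf(k, M, n, N)))
True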
Methods
rvs(M, n, N, loc=0, size=1, random_state=None): Random variates.
pmf(k, M, n, N, loc=0): Probability mass function.
logpmf(k, M, n, N, loc=0): Log of the probability mass function.
cdf(k, M, n, N, loc=0): Cumulative distribution function.
logcdf(k, M, n, N, loc=0): Log of the cumulative distribution function.
sf(k, M, n, N, loc=0): Survival function (also defined as 1 - cdf, but sf is sometimes more accurate).
logsf(k, M, n, N, loc=0): Log of the survival function.
ppf(q, M, n, N, loc=0): Percent point function (inverse of cdf; percentiles).
isf(q, M, n, N, loc=0): Inverse survival function (inverse of sf).
stats(M, n, N, loc=0, moments='mv'): Mean('m'), variance('v'), skew('s'), and/or kurtosis('k').
entropy(M, n, N, loc=0): (Differential) entropy of the RV.
expect(func, args=(M, n, N), loc=0, lb=None, ub=None, conditional=False): Expected value of a function (of one argument) with respect to the distribution.
median(M, n, N, loc=0): Median of the distribution.
mean(M, n, N, loc=0): Mean of the distribution.
var(M, n, N, loc=0): Variance of the distribution.
std(M, n, N, loc=0): Standard deviation of the distribution.
interval(alpha, M, n, N, loc=0): Endpoints of the range that contains alpha percent of the distribution.
https://gamedev.stackexchange.com/questions/141700/how-to-convert-fov-into-zoom-2x-3x-4x
# how to convert FoV into zoom 2x, 3x, 4x…?
So I have the value of the FoV in degrees, and now I need to convert it to a zoom value of 2x or 3x, for scopes and binoculars.
I placed some objects equally spaced from each other, but when I zoom in from FoV 100 to FoV 90 (and I am calling that a 2x zoom), the visible objects at the edges of the reticle are not the ones I expected.
Yes, I am using this big FoV of 100 to 10 just to try to calculate the 2x, 3x value properly.
So I need the ending/maximum FoV to be at 10 degrees.
What happens at the highest zoom levels, from FoV 30 to 10, is that the zooming becomes too fast and hard to control, so I wonder if I am calculating it wrong?
I mean, should the FoV steps that produce zoom 2x, zoom 3x be a fixed decrement, like:
FoV 100 = zoom 1x
FoV 90 = zoom 2x
FoV 80 = zoom 3x
or should I instead use some calculation that gives results something like (or not like):
FoV 100 = zoom 1x
FoV 85 = zoom 2x
FoV 73 = zoom 3x
and what could be such calculation?
or should I use some other rule, like imposing FoV limits and fixed FoV values, instead of making any calculation?
• 10x tells nothing about FoV. See The Truth About The X Optical Zoom and How do I convert lens focal length (mm) to x-times optical zoom?. Pick a reference point and a linear transformation of arctan that works for you. See also Virtual Cameras at Khan Academy. – Theraot May 27 '17 at 21:37
• @Theraot I think I understand, I read the Nx is relative to the self lens minimum zoom capacity, what is pointless. But in a game, I think we expect something like a proportional increase in visible size (2x would show the object 2x bigger, 3x would be 3x bigger..), but that at near the maximum zoom it is still a nice to control the zooming in/out. – Aquarius Power May 27 '17 at 22:17
• Play with the formula. Try something along the lines of FOV = parameter_a* Math.Atan2(parameter_b,parameter_c*zoom) – Theraot May 27 '17 at 22:25
• @Theraot must be that (it is a parabola calculation right?), I found this site desmos.com/calculator/lac2i0bgum, it let me prepare specific parabolas for 4x or 6x zoom times (x*10) with specific min and max FoV (y), now I am trying to make a generic formula that let me input the min, the max and a mid point, and may be that one you say will fit, let me try it. – Aquarius Power May 27 '17 at 22:41
• this site shows the full calc log, I think will do! emathhelp.net/calculators/algebra-2/parabola-calculator – Aquarius Power May 27 '17 at 23:02
Technically changing the FOV isn't zooming, but it does the same thing, so let's not take that into account.
You can calculate the size of the view plane where the object lies using trigonometric functions:
width = tan(FOVX) * dist
height = tan(FOVY) * dist
(Make sure the angles are in radians, and FOVX is the horizontal FOV, which is equal to FOVY * aspect ratio)
Here is where you'll have to decide what you mean by zoom 2x. You can think in separate axes, and halve the width and height of the view plane at the object's position. This makes the object feel like it's doubled, but because it's in 2d, it will actually have an area 4 times larger than on the default value. If you want to make the area twice as big, then you'll have to multiply the width and height of the object by the square root of 2. For simplicity I'm going to refer to this value (either x or sqrt(x) for a zoom of x) as ZOOM_FACTOR.
Now we can reverse the algorithm I used above to find the width and height of the view plane. Because the sizes of this plane are linear (aka. a view plane at distance d is half as big as a plane at distance 2d), we can simplify it. The algorithm becomes
newFOVY = atan(tan(FOVY) / ZOOM_FACTOR)
This first calculates the size of the plane at distance 1 with tan(FOVY), then calculates the new plane's size, then uses the result to calculate the new FOV. As always, the input and output angles are in radians.
You can achieve the same thing by making the left, right, top and bottom parts of the projection matrix smaller or bigger depending on what you need.
• do you think I could set a minimum FoV to be reached? it worked nicely using a max angle in degrees of 75 (that I converted to RAD), and gave me this results for 4 zoom steps (in degrees): 75, 62, 51, 43. I wonder if I could tweak that formula to let me go from 75 to 10 in 4 nice steps, any idea? yes, it seems my original question is missing a highlight for the minimum zoom level request, sry. – Aquarius Power May 27 '17 at 22:11
• I wonder if a parabola could be used? – Aquarius Power May 27 '17 at 22:22
• @AquariusPower Calculate the sizes for FOV 75 and FOV 10, then divide the difference between these value by 4 and multiply it by the current step's id (e.g. for a zoom of 3x multiply it by 3), subtract it from the default size and divide the result by the default size. Then you can run it back through the atan to get the FOV, basically: default = tan(75 / 180 * pi), newFOV = atan((default - tan(10 / 180 * pi) * STEP) / default) or simplified atan(1 - tan(10 / 180 * pi) * STEP / default) – Bálint May 28 '17 at 9:02
• I think the algorithm should be: newFOVY = 2 * atan(tan(0.5 * FOVY) / ZOOM_FACTOR) since circular functions only work in right-angled triangle. – Junkun Lin Dec 12 '17 at 9:05
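Putting the answer's formula together with the half-angle correction from the last comment, a minimal Python sketch would be (the function and parameter names here are my own, not from any engine):

```python
import math

def zoomed_fov(base_fov_deg, zoom_factor):
    # Half-angle form: FOV' = 2 * atan(tan(FOV / 2) / zoom_factor)
    half = math.radians(base_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) / zoom_factor))

for z in (1, 2, 3, 4):
    print(z, round(zoomed_fov(100.0, z), 1))  # 100.0, 61.6, 43.3, 33.2
```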
https://socratic.org/questions/what-is-a-markov-chain
# What is a Markov Chain?
Sep 15, 2016
A stochastic process which is memoryless.
#### Explanation:
Suppose you have a system that changes state over time, with some random variability. The time sequence of states of the system is called a stochastic process. A stochastic process is a Markov chain if at any point in time, the probability of future states is only dependent on the current state, not on anything that has gone before. Another term used is memoryless. The system does not remember what happened before in the sense that future states are independent of past states.
Consider a model of Brownian motion:
• There are $n$ particles of pollen which we keep track of.
• The particles of pollen tend to keep moving in the same direction with the same velocity.
• The movement is changed by the random impact of unseen smaller particles which we are not tracking.
If our model maintained a state describing only the instantaneous position of each particle of pollen at each time $t$, then this would not be a Markov chain: the future position of each particle is influenced by the particle's current velocity too, and probably its angular momentum.
If, however, the model maintained a state describing not only the position of each pollen particle but also its current velocity and angular momentum, then it probably is a Markov chain.
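A tiny simulation (a Python sketch with made-up transition probabilities) makes the memoryless property concrete: the next state is sampled from the current state alone, with no reference to earlier history.

```python
import random

# Two-state Markov chain; transition[s] lists P(next = 0) and P(next = 1)
# given that the current state is s. Earlier states are never consulted.
transition = {0: [0.9, 0.1],
              1: [0.4, 0.6]}

state, path = 0, [0]
for _ in range(10):
    state = random.choices([0, 1], weights=transition[state])[0]
    path.append(state)
print(path)  # one sample trajectory of the chain
```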
http://mathhelpforum.com/calculus/161877-using-diferentials.html
Thread: Using differentials
1. Using differentials
Let there be 3 resistors in parallel. We have that the total resistance R satisfies:
$1/R=1/R_1+1/R_2+1/R_3$. Find dR.
When I tried to do this, I ended up getting a ridiculously long answer. I was just wondering if someone wouldn't mind helping me out with this. Thanks.
2. Originally Posted by HelloWorld2
Let there be 3 resistors in parallel. We have that the total resistance R satisfies:
$1/R=1/R_1+1/R_2+1/R_3$. Find dR.
When I tried to do this, I ended up getting a ridiculously long answer. I was just wondering if someone wouldn't mind helping me out with this. Thanks.
Dear HelloWorld2,
$dR=\frac{\partial R}{\partial R_1}dR_1+\frac{\partial R}{\partial R_2}dR_2+\frac{\partial R}{\partial R_3}dR_3$
Since, $\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$
$-\frac{1}{R^2}\frac{\partial R}{\partial R_1}=-\frac{1}{R_{1}^{2}}\Rightarrow{\frac{\partial R}{\partial R_1}=\frac{R^2}{R_{1}^{2}}}$
Similarly, $\frac{\partial R}{\partial R_2}=\frac{R^2}{R_{2}^{2}}$
$\frac{\partial R}{\partial R_3}=\frac{R^2}{R_{3}^{2}}$
Therefore, $dR=R^2\left(\frac{dR_1}{R_{1}^{2}}+\frac{dR_2}{R_{2}^{2}}+\frac{dR_3}{R_{3}^{2}}\right)$
Hope this will help you.
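As a numerical sanity check (not from the original thread), the differential can be compared against the actual change in $R$ for small perturbations; the resistor values below are arbitrary. A minimal Python sketch:

```python
def total(r1, r2, r3):
    # Total resistance of three resistors in parallel.
    return 1.0 / (1.0/r1 + 1.0/r2 + 1.0/r3)

R1, R2, R3 = 2.0, 3.0, 6.0            # chosen so that R = 1
dR1, dR2, dR3 = 1e-6, -2e-6, 3e-6     # small perturbations

R = total(R1, R2, R3)
exact = total(R1 + dR1, R2 + dR2, R3 + dR3) - R
approx = R**2 * (dR1/R1**2 + dR2/R2**2 + dR3/R3**2)
print(exact, approx)                  # agree to leading order
```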
https://math.stackexchange.com/questions/2072413/5-and-11-divides-a-perfect-square-abc0ac-what-is-the-number
# $5$ and $11$ divides a perfect square $abc0ac$. What is the number?
I started this way - expressing the number as $(10^5+10)a+(10^4)b + (10^3+1)c$
This fooled me. How can I start?
P.S: This is a problem from BdMO-2016 regionals.
• 5 is a divisor, so the last digit must be 0 or 5. If c is 0, a must be 0, which is false, so c must be 5. Hence, a must be 2. So your task is to find out b. – Huang Dec 26 '16 at 12:40
• @Huang Why a must be 0 if c = 0? Didn't get this.. :| – Rezwan Arefin Dec 26 '16 at 12:41
• since $5$ is a divisor of this number – Dr. Sonnhard Graubner Dec 26 '16 at 12:42
• @RezwanArefin since it's a perfect square. If the ones digit is 0, the ones digit of its root must be 0, so the tens and ones digits of that number are both 0. – Huang Dec 26 '16 at 12:45
• A natural number is a multiple of $\;11\;$ iff (the sum of its digits in even position) minus (the sum of its digits in odd position) is a multiple of $\;11\;$. With this, and knowing $\;c=5\implies a=2\;$, you can solve this at once. – DonAntonio Dec 26 '16 at 13:10
Because $25$ divides $N$, it follows that the last two digits satisfy $ac \in \{ 00, 25, 50, 75\}$. Because $11$ divides $N$, the alternating sum of the digits in the number is divisible by $11$, therefore $11 \mid 2a - b$.
Now just take each possibility for $ac$
If the number is divisible by $5$ then $c=5$ or $c=0$. It cannot equal $0$ by Huang's comment. Therefore $c=5$ and because a perfect square ending in $5$ necessarily ends in $25$ we know that $a=2$.
If a number is divisible by $11$ then the alternating sum of its digits: $a-b+c-0+a-c=2a-b=4-b=11n$ (i.e. the sum must be a multiple of $11$). Because $0\le b \le 9$, then $n$ can only equal $0$, from which we know that $b=4$ and the number equals $$245025=495^2$$
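For completeness, a brute-force search (a sketch, not part of the original answer) confirms both the answer and its uniqueness:

```python
# Six-digit perfect squares of the form abc0ac divisible by 5 and 11.
for r in range(317, 1000):            # 317^2 = 100489 is the first 6-digit square
    s = str(r * r)
    if s[3] == '0' and s[4] == s[0] and s[5] == s[2] and (r * r) % 55 == 0:
        print(r * r, '=', r, '** 2')  # prints only 245025 = 495 ** 2
```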
https://leimao.github.io/blog/Autoregressive-Model-Autoregressive-Decoding/
### Lei Mao
Machine Learning, Artificial Intelligence, Computer Science.
# Autoregressive Model and Autoregressive Decoding for Sequence to Sequence Tasks
### Introduction
Sequence to sequence tasks, such as machine translation, have been modeled using autoregressive models, such as recurrent neural networks and Transformers. Given an input sequence, the output sequence is generated in an autoregressive fashion.
In this blog post, I would like to discuss mathematically why autoregressive models and autoregressive decoding have been applied to sequence to sequence tasks, and their drawbacks for latency-constrained inference in practice.
### Discriminative Model for Sequence to Sequence Tasks
The sequence to sequence discriminative model usually learns a probability distribution $P(Y | X)$ where $X$ is a sequence of input variables $X = \{X_1, X_2, \cdots, X_{T^{\prime}}\}$ and $Y$ is a sequence of output variables $Y = \{Y_1, Y_2, \cdots, Y_{T}\}$. To keep the problem simple, we assume $X$ and $Y$ are sequences of discrete variables. This assumption is valid in many problems, such as language translation. Note that although it seems that the model is generating the output sequence, the model is still a discriminative model, rather than a generative model. If the reader does not know the difference between a discriminative model and a generative model, please check my blog post “Discriminative Model VS Generative Model”.
The goal of inference is to find or sample the most likely output sequence $y$ given the input sequence $x$. This step is also called “decoding”. Mathematically, it could be expressed as
$\DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} y = \argmax_{Y} P(Y | X = x)$
One of the problems for the sequence to sequence discriminative model is that the model is actually an ensemble model or an adaptive model that models the distributions $P(Y_1, Y_2, \cdots, Y_{T} | X_1, X_2, \cdots, X_{T^{\prime}})$ for all $T^{\prime} \in \{1, 2, \cdots\}$ and $T \in \{1, 2, \cdots\}$. This means that $P(Y = \{y_1\} | X = x)$ and $P(Y = \{y_1, y_2\} | X = x)$ come from two different conditional distributions and are not directly comparable. We cannot determine the output sequence length by comparing the conditional probabilities of output sequences of different lengths during inference.
In addition, even if we know the output sequence length $T = t$ for a given input sequence, with only the model $P(Y_1, Y_2, \cdots, Y_{t} | X)$ and no further mathematical properties of it or of the input sequence $x$, finding the most likely $y$ requires brute-force decoding: iterating through all the possible sequence combinations for $Y$ and picking the $y$ with the maximum probability.
Note that here we have no independence assumptions for the variables in the output sequence, i.e.,
$P(Y_1, Y_2, \cdots, Y_{t} | X) \neq \prod_{i=1}^{t} P(Y_i | X)$
Otherwise, finding the optimal output sequence would be much easier.
Suppose $Y_i$ is a binary variable for $i = 1, 2, \cdots, t$. It then takes $O(2^t)$ time to find the optimal $y$, which is intractable. Brute-force decoding works only if the output sequence length is very small. However, in many practical problems, the sequence length $t$ can be very large. For example, even for an output sequence of length $32$, $2^{32} = 4294967296$, an extremely large number of around $4$ billion candidates.
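To make the cost concrete, here is a minimal brute-force decoding sketch over binary output variables. The `joint_prob` callable is a hypothetical stand-in for the model $P(Y = y | X = x)$, not part of any real library.
```python
from itertools import product

# Brute-force decoding sketch: enumerate all 2**t binary output sequences.
# `joint_prob(y, x)` is a hypothetical callable standing in for P(Y = y | X = x).
def brute_force_decode(joint_prob, x, t):
    best_y, best_p = None, -1.0
    for y in product([0, 1], repeat=t):  # 2**t candidates: intractable for large t
        p = joint_prob(y, x)
        if p > best_p:
            best_y, best_p = y, p
    return best_y, best_p
```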
Therefore, decoding, i.e., searching for the most likely output sequence, is the most critical and hardest problem to solve for the sequence to sequence discriminative model. Concretely, we have to solve two problems: determining the length of the output sequence, and developing an efficient search algorithm that finds the maximum conditional probability.
### Autoregressive Modeling and Autoregressive Decoding
Given a piece of training data, the input sequence $x = \{x_1, x_2, \cdots, x_{t^{\prime}}\}$ and its ground truth output sequence $y = \{y_1, y_2, \cdots, y_{t}\}$, the autoregressive model basically applies the probability chain rule and creates a temporal model for the problem.
\begin{align} P(Y | X; \theta) = P(Y_0 | X_{1:t^{\prime}}; \theta) \prod_{i=1}^{t} P(Y_i | Y_{0:i-1}, X_{1:t^{\prime}}; \theta) \end{align}
Very often, $Y_0$ is not actually a random variable. Usually, $Y_0 \equiv \langle \text{BOS} \rangle$ (the beginning of the sequence) and $P(Y_0 = \langle \text{BOS} \rangle | X_{1:t^{\prime}}; \theta) = 1$.
During training, the autoregressive model maximizes the likelihood of training data $(x, y)$ with different sequence lengths by adjusting the model parameters $\theta$.
\begin{align} \argmax_{\theta} P(Y = y | X = x; \theta) &= \argmax_{\theta} \prod_{i=1}^{t} P(Y_i = y_i | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta) \end{align}
By optimizing $P(Y | X ; \theta)$, the model also optimizes $P(Y_i | Y_{0:i-1}, X_{1:t^{\prime}}; \theta)$ for all $i = 1, 2, \cdots$. This means that during inference, instead of computing $P(Y | X)$ for all the possible combinations of $Y = \{Y_1, Y_2, \cdots \}$ to find the optimal $y$, we can find the variables in the output sequence greedily, one at a time. Suppose $Y_i$ is a binary variable for $i = 1, 2, \cdots$. If we know the output sequence length is $t$, brute-force decoding takes $O(2^t)$ time to find the optimal $y$, which is intractable, whereas greedy autoregressive decoding takes only $O(t)$.
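As a concrete illustration, here is a minimal sketch of greedy autoregressive decoding for a known output length $t$. The `cond_prob` callable is a hypothetical interface returning the conditional next-token distribution as a dictionary; it is not from any particular library.
```python
# Greedy autoregressive decoding sketch for a fixed output length t.
# `cond_prob(prefix, x)` is a hypothetical callable that returns a dict mapping
# each candidate next token to P(Y_i = token | Y_{0:i-1} = prefix, X = x).
def greedy_decode(cond_prob, x, t, bos="<BOS>"):
    prefix = [bos]
    for _ in range(t):                 # O(t) model calls instead of O(2**t)
        dist = cond_prob(tuple(prefix), x)
        y_i = max(dist, key=dist.get)  # pick the locally most likely token
        prefix.append(y_i)
    return prefix[1:]                  # drop the <BOS> marker
```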
Note that, theoretically, we cannot guarantee the following equation during autoregressive decoding, and in general it is false.
\begin{align} \max_{Y_1, Y_2, \cdots, Y_t} P(Y_1, Y_2, \cdots, Y_t | X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta) = \prod_{i=1}^{t} \max_{Y_i} P(Y_i | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta) \end{align}
where
\begin{align} y_i = \argmax_{Y_i} P(Y_i | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta) \end{align}
This means that unlike brute-force decoding, which finds the global optimum, autoregressive decoding does not necessarily find the global optimum. However, $\prod_{i=1}^{t} \max_{Y_i} P(Y_i | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta)$ is usually still a very high probability. This lays the foundation for why autoregressive decoding is a valid approach.
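A tiny hard-coded two-step example (the numbers are purely illustrative) makes the gap concrete: greedy decoding commits to the locally best first token and misses the globally best sequence.
```python
# Toy two-step distribution where greedy decoding misses the global optimum.
p_y1 = {0: 0.6, 1: 0.4}
p_y2 = {0: {0: 0.5, 1: 0.5},  # P(Y2 | Y1 = 0)
        1: {0: 0.9, 1: 0.1}}  # P(Y2 | Y1 = 1)

greedy_y1 = max(p_y1, key=p_y1.get)                            # greedy picks Y1 = 0
greedy_joint = p_y1[greedy_y1] * max(p_y2[greedy_y1].values()) # 0.6 * 0.5 = 0.30

# Exhaustive search: the best joint is Y1 = 1, Y2 = 0 with 0.4 * 0.9 = 0.36.
best_joint = max(p_y1[a] * p_y2[a][b] for a in (0, 1) for b in (0, 1))
print(round(greedy_joint, 2), round(best_joint, 2))  # 0.3 0.36
```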
Sometimes a modified greedy autoregressive decoding method, referred to as beam search, achieves slightly better decoding results. It produces better results than the greedy autoregressive decoding method because it has a larger search space. The time complexity for beam search decoding is usually $O(k^2 t)$, where $k$ is a constant representing the beam size. Since $k$ is a constant, the time complexity remains $O(t)$.
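Here is a minimal beam search sketch under the same hypothetical `cond_prob` interface as in the greedy sketch above; it keeps the $k$ most probable prefixes at every step instead of just one.
```python
# Beam search decoding sketch with beam size k.
# `cond_prob(prefix, x)` is the same hypothetical callable as in the greedy sketch.
def beam_search(cond_prob, x, t, k=4, bos="<BOS>"):
    beams = [((bos,), 1.0)]  # list of (prefix, probability)
    for _ in range(t):
        candidates = []
        for prefix, p in beams:
            for token, q in cond_prob(prefix, x).items():
                candidates.append((prefix + (token,), p * q))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    best_prefix, best_p = beams[0]
    return list(best_prefix[1:]), best_p  # drop <BOS>
```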
Please also note that we can apply autoregressive decoding only if we trained our model in an autoregressive fashion.
The autoregressive decoding algorithm is an efficient search algorithm that finds a high conditional probability among the output sequences of a fixed length $T = t$, compared with the brute-force decoding algorithm, although it does not guarantee the global maximum. However, the sequence length problem remains: if we don't know the sequence length, we still have an infinite number of output sequence candidates.
One approach is to create a model $P(T | X_{1:T^{\prime}}; \theta)$ that predicts the output sequence length directly from the input sequence. However, this sometimes does not work well in practice, because $T$ is a discrete variable that takes values from an infinite set $\{1, 2, \cdots \}$, and for many sequence to sequence tasks the error tolerance for $T$ is very low. For example, given an input sequence $x$, autoregressive decoding generates the sequences $\{y_1\}$, $\{y_1, y_2\}$, $\{y_1, y_2, y_3\}$, $\{y_1, y_2, y_3, y_4\}$, $\cdots$. If $\{y_1, y_2, y_3\}$ is a very good output sequence but the length prediction is $T=2$, the output sequence selected by the algorithm will be $\{y_1, y_2\}$, which would be absurd in many sequence to sequence tasks. Think of predicting “How are” because the model did not predict the output sequence length correctly, when “How are you” makes more sense.
So a more commonly used approach is to implicitly encode the sequence length information in the output sequence, rather than directly predicting the value of the output sequence length. Concretely, given a piece of training data, the input sequence $x = \{x_1, x_2, \cdots, x_{t^{\prime}}\}$ and its ground truth output sequence $y = \{y_1, y_2, \cdots, y_{t}\}$, during training the ground truth output sequence actually becomes $y = \{y_1, y_2, \cdots, y_{t}, y_{t+1}\}$ where $y_{t+1} = \langle \text{EOS} \rangle$ (the end of the sequence). Therefore, during autoregressive decoding, whenever the algorithm sees a new output token $\langle \text{EOS} \rangle$, it knows it is time to stop decoding. The only concern is whether $\langle \text{EOS} \rangle$ comes up too early (truncated sequence prediction) or never shows up (infinite sequence prediction). This is usually not a problem in practice if the discriminative model has learned the sequence to sequence task well. $P(Y_i = \langle \text{EOS} \rangle | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta)$ is usually the largest among the conditional probabilities when $y_{0:i-1}$ is already a sequence that matches $x_{1:t^{\prime}}$ quite well. On the contrary, $P(Y_i = \langle \text{EOS} \rangle | Y_{0:i-1} = y_{0:i-1}, X_{1:t^{\prime}} = x_{1:t^{\prime}}; \theta)$ is usually almost zero compared to the conditional probabilities for $Y_i \neq \langle \text{EOS} \rangle$ when $y_{0:i-1}$ is not yet a good match for $x_{1:t^{\prime}}$.
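A sketch of greedy decoding that relies on $\langle \text{EOS} \rangle$ to stop, rather than assuming a fixed length; the `max_len` cap below is a practical safeguard against the infinite-sequence case, and `cond_prob` is the same hypothetical interface as above.
```python
# Greedy autoregressive decoding that stops when the model emits <EOS>.
def greedy_decode_until_eos(cond_prob, x, bos="<BOS>", eos="<EOS>", max_len=100):
    prefix = [bos]
    for _ in range(max_len):           # cap guards against infinite sequences
        dist = cond_prob(tuple(prefix), x)
        y_i = max(dist, key=dist.get)
        if y_i == eos:                 # the model itself signals the length
            break
        prefix.append(y_i)
    return prefix[1:]
```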
### Autoregressive Summary
Given a sequence of input variables $X = \{X_1, X_2, \cdots, X_{T^{\prime}}\}$ and a sequence of output variables $Y = \{Y_1, Y_2, \cdots, Y_{T}, Y_{T+1}\}$, according to the chain rule,
\begin{align} P(Y | X; \theta) = \prod_{t=1}^{T + 1} P(Y_t | Y_{0:t-1}, X_{1:T^{\prime}}; \theta) \end{align}
where $Y_0 \equiv \langle \text{BOS} \rangle$ and $Y_{T+1} \equiv \langle \text{EOS} \rangle$. Note that the input sequence length and the output sequence length are also random variables.
The principle of autoregressive model optimization is as follows.
\begin{align} \argmax_{\theta} \log P(Y | X; \theta) &= \argmax_{\theta} \log \prod_{t=1}^{T+1} P(Y_t | Y_{0:t-1}, X_{1:T^{\prime}}; \theta) \\ &= \argmax_{\theta} \sum_{t=1}^{T+1} \log P(Y_t | Y_{0:t-1}, X_{1:T^{\prime}}; \theta) \\ \end{align}
The principle of greedy autoregressive model decoding is as follows.
\begin{align} \max_{Y_1, Y_2, \cdots, Y_T, Y_{T+1}} P(Y_1, Y_2, \cdots, Y_{T+1} | X_{1:T^{\prime}} = x_{1:T^{\prime}}; \theta) \approx \prod_{t=1}^{T + 1} \max_{Y_t} P(Y_t | Y_{0:t-1} = y_{0:t-1}, X_{1:T^{\prime}} = x_{1:T^{\prime}}; \theta) \end{align}
where
\begin{align} y_t = \argmax_{Y_t} P(Y_t | Y_{0:t-1} = y_{0:t-1}, X_{1:T^{\prime}} = x_{1:T^{\prime}}; \theta) \end{align}
### Autoregressive Drawbacks
It sounds like the autoregressive model and autoregressive decoding are very useful for the sequence to sequence discriminative model. However, they still have some drawbacks in certain use cases.
In an inference-latency-constrained system, autoregressive models and autoregressive decoding are not favored because the autoregressive decoding computation cannot be parallelized across time steps: the tokens have to be generated one by one, by running inference multiple times. For long output sequences, the inference time can easily exceed the latency budget and cause various problems. This means that even though the autoregressive decoding time complexity is $O(t)$, it is not satisfactory if the generation of the tokens in the output sequence cannot be parallelized.
In some rare use cases or problems, we have independence assumptions for the variables in the output sequence for both training and inference, and we know the length of the output sequence; then the temporal autoregressive model degenerates to a non-autoregressive model with independence assumptions.
\begin{align} P(Y | X; \theta) = P(T | X_{1:T^{\prime}}; \theta) \prod_{i=1}^{T} P(Y_i | X_{1:T^{\prime}}; \theta) \end{align}
Then we can easily parallelize the generation of the tokens. Semantic segmentation is an example of such a use case, where we treat each pixel as a variable and each output pixel is conditionally independent of the others. However, it is no longer a temporal model; even though it is “sequence to sequence”, we hardly talk about autoregressive decoding for generating the labels for segmentation.
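A minimal sketch of this parallel, non-autoregressive decoding; the `probs` array below is made-up illustrative data with $T = 3$ positions and a vocabulary of size $3$.
```python
import numpy as np

# With conditional independence between output positions, decoding is a single
# argmax per position, and all positions can be computed in parallel.
probs = np.array([[0.1, 0.7, 0.2],   # hypothetical P(Y_i = token | X = x)
                  [0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])
y = probs.argmax(axis=1)  # all positions at once; no sequential dependency
print(y)                  # [1 0 2]
```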
### Notes
Conventional Transformer models have a mask in the decoder that prevents the output tokens from seeing future tokens in the output sequence. This forces the Transformer model to learn in an autoregressive fashion. As a result, at inference time, the Transformer decoding can (in fact, has to) be autoregressive.
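For illustration, here is one common way to construct such a causal (look-ahead) mask with NumPy; the exact convention (which value means “masked”) varies between libraries, so this is only a sketch where `True` marks positions to be masked out.
```python
import numpy as np

# Causal mask for T = 5 output tokens: position i may attend to positions 0..i.
T = 5
mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True above the diagonal
print(mask.astype(int))
# [[0 1 1 1 1]
#  [0 0 1 1 1]
#  [0 0 0 1 1]
#  [0 0 0 0 1]
#  [0 0 0 0 0]]
```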
### Conclusions
We have learned how the autoregressive model learns a task and how autoregressive decoding reduces the time complexity of finding a (possibly locally) optimal output sequence. We have also learned the drawbacks of autoregressive models and autoregressive decoding.
|
https://stats.stackexchange.com/questions/598286/calculating-expected-value-from-quantiles
|
# Calculating expected value from quantiles
For probabilities $$p_i=\frac{i}{10}$$ where $$i=1, \dots, 10$$, the respective quantiles are $$\tau_i$$. How can I calculate an approximate expected value?
• If you had quartiles you could do en.wikipedia.org/wiki/Trimean, I guess something similar could be done with deciles. Dec 7, 2022 at 12:52
## 1 Answer
Using my answer at Expected value as a function of quantiles?, a general expression for the expectation in terms of the quantile function is $$\mu=\int_0^1 Q(p)\; dp$$ in the continuous case, and that answer extends to the general case.
Looking at the approximating sums defining the integral, you can read this as saying that the mean is the mean of the quantiles, which gives an approximation for your case as $$\frac{1}{10}\sum_{i=1}^{10} \tau_i$$
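As a quick numerical sanity check of this approximation (my addition, not part of the original answer), take the Uniform(0, 1) distribution, whose quantile function is simply $Q(p) = p$:
```python
# Mean-of-quantiles approximation for Uniform(0, 1), where Q(p) = p.
p = [i / 10 for i in range(1, 11)]    # p_i = i/10 as in the question
tau = p                               # decile quantiles of Uniform(0, 1)
print(round(sum(tau) / len(tau), 4))  # 0.55: near the true mean 0.5, with the
                                      # slight upward bias noted in the comments
```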
• Isn't that slightly biased upwards due there being no $Q(0)$ and if yes, do you see a way of correcting that? Dec 7, 2022 at 14:16
• @LukasLohse: Yes, but this comes from the biased definition of $p_i$ by the OP Dec 7, 2022 at 14:19
• @LukasLohse Use the i's from 1 to 9? Dec 7, 2022 at 14:32
• @LukasLohse: I guess this must be adapted to the specific def of quantile used ... it doesn't seem right to leave out the top quantile completely Dec 7, 2022 at 14:37
• @user529295: What do you mean by random quantiles? Dec 7, 2022 at 17:46
|
https://www.socratease.in/content/1/motion-straight-line-1/41/does-speed-change
|
Look at the table alongside. It lists the distances that Jerry is covering when he is driving towards the cheese. So, for example, from $$0 \, \mathrm{s}$$ to $$2 \, \mathrm{s}$$, Jerry covers $$5 \, \mathrm{m}$$, etc. What can you say about the speed with which Jerry is driving?
|
https://math.stackexchange.com/questions/3185914/apply-fermats-little-theorem-to-show-that-a-e-d-equiv-a-bmod-p
|
Apply Fermat's little theorem to show that $A^{ed} \equiv A \pmod p$
Problem: Let $$p$$ and $$q$$ be two distinct prime numbers and let $$n=pq$$. Let $$e$$ be a prime number such that $$\gcd( (p-1)(q-1),e) = 1$$ and let $$d$$ be such that $$ed=1+k(p-1)(q-1)$$. Let $$A \in \mathbb { Z } / n$$. Show that $$A ^ { e d } \equiv A \pmod p$$.
My solution: We have that $$ed \equiv 1 \pmod{p-1}$$, say $$ed = k(p-1)+1$$. If $$p \mid A$$, both sides are $$\equiv 0 \pmod p$$, so we may assume $$p \nmid A$$. Then by Fermat's little theorem:
$$A ^ { e d } \equiv A^{k(p-1)+1} \equiv A^{k(p-1)}A^{1} \equiv ({A^{p-1}})^{k}A \stackrel{\text{Fermat}}{\equiv} 1^k A \equiv A \pmod p$$.
Question: Does the reasoning seem correct? Am I missing any steps?
• I would only suggest to choose another letter (not $k$) , but besides this : perfect solution. – Peter Apr 13 at 7:36
• @Peter Awesome, thank you very much! – NotAbelianGroup Apr 13 at 7:43
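For a numerical sanity check of the identity, here is a small script with toy RSA-style parameters; the specific values of $p$, $q$ and $e$ below are my own illustrative choices, not from the question.
```python
from math import gcd

# Verify A**(e*d) ≡ A (mod p) for every A in Z/n with toy parameters.
p, q = 5, 11
phi = (p - 1) * (q - 1)   # (p-1)(q-1) = 40
e = 3                     # prime, and gcd(40, 3) = 1
assert gcd(phi, e) == 1
d = pow(e, -1, phi)       # modular inverse (Python 3.8+): d = 27, e*d = 81 = 1 + 2*40
for A in range(p * q):
    assert pow(A, e * d, p) == A % p
print("A**(e*d) = A (mod p) holds for all A in Z/", p * q)
```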
|
http://physics.stackexchange.com/questions/70429/why-can-we-simply-assume-a-given-alignment-of-vevs
|
# (Why) Can we simply assume a given alignment of VEVs?
In GUT theories, one often assumes that Higgs fields take VEVs in given representations, of a given magnitude at a given scale.
While I am well aware that this is one of the weak spots in such theories, I suppose there must at least be some work on the possibility of constructing a Higgs potential with a given VEV structure.
What works are there on Higgs potentials in GUT theories w.r.t. VEV structure?
|
http://ilovephilosophy.com/viewtopic.php?f=3&t=193761&start=50
|
## 30 Dollar Minimum Wage
For discussions of culture, politics, economics, sociology, law, business and any other topic that falls under the social science remit.
### Re: 30 Dollar Minimum Wage
Gloominary wrote:Firstly, this sounds like a room for rent in a house, not an apartment.
Doesn't matter, there are rooms for rent in the US at $200 per rent. You, Wendy, and Reasonable are all wrong. Obviously if you pay more, you get more.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Gloominary wrote:You make it like I alone am in the position to uplift.
You are. Liberal-leftist-socialists should be spending 50% of their own time and money on the poor, before asking anybody else to spend more. If you don't, then you're just a hypocrite.
Gloominary wrote:That's your opinion. My opinion is it's our business, as a democracy, and I'm going to encourage people to vote socialist.
It's not "an opinion". For you to stick your hand into other people's dealings, and then complain about fairness, is hypocrisy. Taxation is theft. You're merely trying to justify your thievery, taking the profits and successes of others. Taking bites of a pie you had no part in making.
Gloominary wrote:So it's not that you're against regulation of the economy, it's that you're against excess regulation?
I'm against the momentum of the modern world going liberal-left and towards more socialism, towards more third-party meddling and entitlement.
Gloominary wrote:Secondly, socialists have hardly won anything, there's too much corporatism, cartels, corporate welfare, tax breaks/loopholes. Most of the economy is run by/for big business, not by the state or the workers themselves directly. And from my research, there was more socialism before the 1980s than there is now, which's why things were better for working people and the unemployed then.
Corporations have taken advantage of socialistic idealists, such as yourself, and raked in the profits of your mistakes. Corporations have ways around laws such as a $30 minimum, by cutting worker hours, less hiring, laying off workers, etc.
The larger corporations are relatively immune to socialistic-leftist meddling. They can afford to get around all social-government interventions. Small businesses, small corporations, small industries, will all be destroyed. Thus the world will be worse off by socialistic-leftist meddling. Socialists and leftists are not actually targeting or penalizing the ones they hope to, with inept understanding of economics. Liberal-leftists try to penalize the "top 1%" but end up hurting the middle class more. This is another reason why "economic equality" cannot be enforced, especially not through democracy and legislation. Corporations will pay politicians off anyway, who do you think sponsors election campaigns?
Gloominary wrote:Capitalists could start making apartments the size of jail cells, like they do in China.
Then they can cram several people into each of them like sardines.
There's always a way to save a buck, ye of little faith in capitalist ingenuity!
You're the one claiming "everybody deserves" (a place to live).
Then you're complaining that it's not big enough. It's a slippery-slope. Apparently you have no limit. You want everybody sitting on gold toilets with gold toilet paper?
You obviously don't know China very well. They're overpopulated. In Tokyo, it's normal to be packed in like sardines. I don't think you really care that much. Moot point.
Gloominary wrote:In practice this doesn't work, or wages would be increasing and prices stagnating.
It does work, which is mostly why minimum wage has climbed so high in the first place. Workers demand more pay with or without third-party intervention. Employers must compete against other employers.
Gloominary wrote:Actually it would increase class mobility, increasing wages, and welfare for those who can't work or find a job is mobility itself, it's the majority moving upward, and once they have surplus income, they can use it to start their own business, or educate themselves, or invest, if they like, or they can be happy with what they have.
Really all that matters is you have enough money to live fairly comfortably, so you're secure, being a multimillionaire or billionaire doesn't make you happier or healthier statistically, so really there's no point in having high class mobility so long as your needs are met.
Different societies and groups of humans want and decide upon different things.
US attitudes are pro-capitalism, pro class mobility, and against socialistic interventions. What works in one place does not work in another ($30 minimum wage).
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Gloominary wrote:@Wrong
Smarter workers are ones who are willing to quit, and join a competitor's factory or business, for higher wages. Employers must compete against each other to attract and retain the best workers. Otherwise a company will have low paid, unskilled, and unreliable workers. They will pay for this cost. So it's not worth it.
Let me readdress this: firstly, employers can still hold out for a better deal much longer than employees can, and secondly:
Let's say there's a market with 10 corporations in a region of the world manufacturing and distributing clothes.
On the one hand, they're competing with workers and consumers, trying to pay workers as little as possible and charge consumers as much as possible; on the other hand, they're competing with each other, trying to attract as many workers and customers as possible, right?
Seems like things would sort of balance themselves out overall, but I don't think so.
This is what really tends to happen: one corporation ends up being a lot or even just a little bit better than the others, through some combination of luck, talent and tenacity.
Word gets out and before you know it, everyone wants to work for them and shop there.
Sooner or later all the other corporations have closed shop, and only one remains.
Now they have a monopoly, there are no competitors in this region, and it would be, not impossible, but exceedingly difficult for a small business to rise up and start competing with them.
Now that they have a monopoly, gradually they'll pay employees as little as possible, and charge consumers as much as possible.
People will just have to accept it if they don't want to go naked.
If a small business rises up and tries to do anything about it, they might not even be able to purchase the resources to do it, because the corporation is buying them all up at a higher price than the small business can afford; if they have to, they outbid them.
Or if the small business manages to get going, they'll just buy the small business up and either close it down, or keep it going and jack up the prices while lowering the wages; almost everyone has their price, even if they think they're on a mission.
And then of course they also use 'underhanded' tactics, like making laws, rules and regulations suitable to themselves and not suitable to potential rivals/up-and-comers, so you have to do things exactly the way they do them or else.
Hell, they might just pay the mob to burn your business down if necessary; but even without underhanded tactics, there's a strong tendency towards monopolization, and the underhanded tactics are inevitable anyway.
You can't really have much of a democracy or capitalist kind of 'free' market (it's not the only kind of free market, actually) when you have these megabanks and corporations, where 1% of the people possess 80% of the wealth, because they just buy the lawmakers and politicians.
Again, almost everyone has their price.
And that's why we started out with nearly pure capitalism and ended up like this, not because of some socialist conspiracy or plot like this guy will try to make it out to be, but because capitalism leads to fewer and fewer competitors over time, and the fewer competitors there are, the more they can cement/solidify their stranglehold on the economy.
It doesn't even have to get down to just one corporation: if there's only a few big corporations of roughly equal wealth/power, they can play it safe and partly or fully merge, or they can all agree not to pay workers more than x, or charge consumers more than x, so they don't get into a wage/price war, and one of them will only ever break this rule if they somehow gain a major advantage/disadvantage, which will lead to few competitors still.
As corporations get massive, the only way to compete with them is either through the state, coercively, or through collective bargaining/unions, or through revolution, either that or just acquiesce.
Typically the masses just don't have the foresight to compete very effectively, in part because they've swallowed the Kool-Aid, and you end up with these monstrous disparities.
Corporations are neither necessarily bad nor necessarily good. As you said, they are an end result. Is it bad that computers were monopolized at one point, with IBM, Microsoft, and Intel running the industry? No, they made computers cheap for everybody, for personal use, and led to the world as it is today. Apple competed out of survival. Eventually laws were passed to curtail and cut up Microsoft. There were pros and cons to the Microsoft monopoly.
But the average wage of Microsoft employees rose, it did not fall. So your conjectures are simply wrong. The wages of Microsoft employees and other software/hardware engineers have risen very high over the past 50 years. So your economic conjectures just do not paint the reality on the ground.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Silhouette wrote:Great, so I guess we don't need sports referees, counselors, independent auditors, even parents etc., because "ur wrong" (and) can just dismiss the entire issue of intervention out of hand, and unthinkingly reel off the same old spiel copied off all those "tough-talking" anti-Socialists. Urwrong. Let me tell you why Urwrong. The left aren't all thinking in that same Machiavellian way that you probably assume more as a reflection of your own thinking than actually having a clue what others are thinking (no doubt not even asking). Let me lead by example: is this how you think and do you tend to be suspicious of others in general?
There is third-party intervention in all things. Parenting, as you mention, is a form of social intervention. But, according to my point, do you want the government intervening in the personal and private lives of families? It's one thing for parents to dictate over children. It's another for foreigners, moral crusaders, politicians, and the rest of society. Here's the deal: when somebody is successful, when two parents successfully raise a family, then others (Socialists) will want to intervene and take credit, or take advantage, of those successes. That's what I'm against. I'm more for appropriating causes and responsibility where they belong. By mob rule, democrats, leftists, liberals, socialists have all gained too much power, and have the gumption and gall to think they can go around claiming anybody and anything, even "the upbringing" of private families. Sex. Economy.
Do socialist-liberal-leftists have any limits on what they can't or won't stick their nose in?
Silhouette wrote:I'm sure some rightists in leftists' clothing "deserve" your suspicion, but generally the reasoning has nothing to do with that arbitrary notion of "deserve", which I already briefly commented on and you didn't address - assuming you even read past the first three lines I wrote in my short post. The fact is that we are able to easily share the massive surplus that we create mostly automatically through machinery and infrastructure that was in many ways only possible due to people who are already dead. But we don't.
There is already third-party intervention. I already admitted that. There is already taxation: 25% or more of your income, taken out of your pocket, in the "interests of general society". Somebody is already profiting off your work and life.
Silhouette wrote:What are we supposed to do with this ridiculous notion of only "deserving" the equivalent of what you yourself have contributed? Continually shove cash into the graves of late influential contributors and the circuit boards of computers? They did most of the work, you don't deserve shit.
Dead people can't profit off their own labor, but I do agree with royalties and that the families of inventors, or according to their wills and words, should get some royalties. Bill Gates, for example, if he wants to give away most of his fortune and success to whomever he wants, then that's his business. Socialists would be the ones to try to tax it, by taxing death, inheritance, anything they can get their hands on.
Silhouette wrote:Maybe we should keep track of all the help you "didn't deserve" in childhood because you weren't contributing yourself, and only allow you to get paid once that debt is cleared - and let's include interest and take into account inflation, why not? That's what everyone already does in our current economy, and obviously intervening with anything like that is out of the question All education and investment should not be interfered with, let's let those who have all the money, contacts, information and other resources set the terms directly with people with much less of all of those things - I'm sure there won't be any conflict of interest or partiality in such situations that would require a 3rd party to supervise in order for any semblance of fairness to exist!
Bullshit, I've worked my ass off in life. I deserve every cent I made, and possibly the amounts taken by taxation. I'm not against all taxation. In the US it's relatively fair. I agree with public roads and the military. That's about it though. I'm against education spending. There should be more privatization and responsibility of parents to educate their own children.
Silhouette wrote:How do you even determine equivalency between production and consumption?! The current model is just "whatever you can get away with within defensible interpretations of law". That's all "the market" is. Hide how much you as an employer get as your income through paying people much less than what they earn "your" company (the definition of profit, which many people probably don't appreciate or even know), because they will undercut each other just to get any income at all through fear of the shitty alternative that is unemployment, and you can benefit from this! Again, no semblance of fairness that obviously don't need any intervention... Seriously now though, why not instead aim for an economy that yields optimal output for minimal input? The definition of efficiency. I strongly suspect that we could achieve all the results we achieve today and more, much more efficiently than we currently do if we just eradicated all the injustice at the ideological heart of Western economic models.
There are already too many socialistic interventions. And it's pushing further left and socialistic. I'm against that. Like you say, there is surplus. But that "surplus" comes from the third-party intervention. People don't pay taxes out of charity. They pay because they have to, are forced to, by mob rule. If a few people don't pay taxes, then the rest of society hunts them down, because they don't want a few people cheating the system while the rest have to pay.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Urwrongx1000 wrote:
Gloominary wrote:Firstly, this sounds like a room for rent in a house, not an apartment.
Doesn't matter, there are rooms for rent in the US at $200 per rent.
You, Wendy, and Reasonable are all wrong.
Obviously if you pay more, you get more.
$275 is not$200. Try again, try to find an even hornier guy's ad. Heck, place one yourself for $1.99 and prove us all way wrong. I AM OFFICIALLY IN HELL! I live my philosophy, it's personal to me and people who engage where I live establish an unspoken dynamic, a relationship of sorts, with me and my philosophy. Cutting folks for sport is a reality for the poor in spirit. I myself only cut the poor in spirit on Tues., Thurs., and every other Sat. WendyDarling Heroine Posts: 7119 Joined: Sat Sep 11, 2010 8:52 am Location: Hades ### Re: 30 Dollar Minimum Wage WendyDarling wrote:$275 is not $200. Try again, try to find an even hornier guy's ad. Heck, place one yourself for$1.99 and prove us all way wrong.
It hurts being wrong doesn't it?
Urwrongx1000
### Re: 30 Dollar Minimum Wage
What's even funnier is that you are actually serious when you ask me such.
WendyDarling
### Re: 30 Dollar Minimum Wage
There are 3 bedrooms in Arkansas for $700, that's about $200 each room.
How many times would you like to be wrong??
Urwrongx1000
How is a person with an allotted $200 for rent supposed to rent that $700 house? Where are they going to come up with the security deposit? Not to mention the other $500? Even if it was simple and safe to rent a room from some stranger soliciting on craigslist, it would have to be the $200 that you spoke of (not $275... you're a stubborn boy), there'd have to be a keyed entry to your private quarters, and no security deposit. Then there'd be negotiations on utilities, for you'd have very little $ to work with. I realize that you enjoy being the thorn in my side and others', but it's not doable with $600 in a city or a rural area, even if you made use of social program freebies or reduced rate services.
WendyDarling
### Re: 30 Dollar Minimum Wage
Urwrongx1000 wrote:There are 3 bedrooms in Arkansas for $700, that's about $200 each room.
How many times would you like to be wrong??
Post the link. Don't get me wrong, you're probably telling the truth, but I think you owe it to the thread to post it.
That being said, we shouldn't be basing economic policy off of a few ads on craigslist; many of them are outright scams, and even the ones that aren't often don't do things above board: they may not pay taxes, they may not follow building codes, they may not have working appliances and so on.
We should be basing economic policy on sound statistics, and 610 isn't going to cut it anywhere in Canada, and whatever welfare a, b or c state is offering probably isn't going to cut it either.
A few exceptions, a room here or there in some backwater redneck ghettos or rural slums, don't disprove the rule, and economic policy has to be based on the rule; we cannot micromanage, or plan for every detail.
People can't just hightail it out of the city en masse, and that's where the majority of people live in Canada and the States: in the city, and in apartments.
People, especially women and single moms, ought to be entitled to safer living quarters than this.
Gloominary
### Re: 30 Dollar Minimum Wage
https://fayar.craigslist.org/apa/d/see- ... 18061.html
You all should do your homework. Your "$30 per hour wage" dreams are delusional and don't apply across the country. Maybe in a densely populated city center, but you're discounting the reality. Dreams built on delusions.
Local economies dictate prices, wages, living standards, etc. Monopolies aren't necessarily bad or evil. In fact you should thank monopolization for causing $500-$1000 personal computers, what you're using right now. "Thank you Microsoft!"
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Urwrongx1000 wrote:
Gloominary wrote:@Wrong
Smarter workers are ones who are willing to quit, and join a competitor's factory or business, for higher wages. Employers must compete against each other to attract and retain the best workers. Otherwise a company will have low paid, unskilled, and unreliable workers. They will pay for this cost. So it's not worth it.
Let me readdress this, firstly, employers can still hold out for a better deal much longer than employees can, and secondly:
Let's say there's a market with 10 corporations in a region of the world manufacturing and distributing clothes.
On the one hand, they're competing with workers and consumers, trying to pay workers as little as possible and charge consumers as much as possible, on the other hand, they're competing with each other, trying to attract as many workers and customers as possible, right?
Seems like things would sort of balance themselves out overall, but I don't think so.
This is what really tends to happen: one corporation ends up being a lot or even just a little bit better than the others, through some combination of luck, talent and tenacity.
Word gets out and before you know it, everyone wants to work for them and shop there.
Sooner or later all the other corporations have closed shop, and only one remains.
Now they have a monopoly, there are no competitors in this region, and it would be, not impossible, but exceedingly difficult for a small business to rise up and start competing with them.
Now that they have a monopoly, gradually they'll pay employees as little as possible, and charge consumers as much as possible.
People will just have to accept it if they don't want to go naked.
If a small business rises up and tries to do anything about it, they might not even be able to purchase the resources to do it, because the corporation is buying them all up at a higher price than the small business can afford; if they have to, they outbid them.
Or if the small business manages to get going, they'll just buy the small business up and either close it down, or keep it going and jack up the prices while lowering the wages, almost everyone has their price, even if they think they're on a mission.
And then of course they also use 'underhanded' tactics, like making laws, rules and regulations suitable to themselves and not suitable to potential rivals/up-and-comers, so you have to do things exactly the way they do them or else.
Hell, they might just pay the mob to burn your business down if necessary; but even without underhanded tactics, there's a strong tendency towards monopolization, and the underhanded tactics are inevitable anyway.
You can't really have much of a democracy or capitalist kind of 'free' market (it's not the only kind of free market actually) when you have these megabanks and corporations, where 1% of the people possess 80% of the wealth because they just buy the lawmakers and politicians.
Again, almost everyone has their price.
And that's why we started out with nearly pure capitalism and ended up like this, not because of some socialist conspiracy or plot like this guy will try to make it out to be, but because capitalism leads to fewer and fewer competitors over time, and the fewer competitors there are, the more they can cement/solidify their stranglehold on the economy.
It doesn't even have to get down to just one corporation: if there's only a few big corporations of roughly equal wealth/power, they can play it safe and partly or fully merge, or they can all agree not to pay workers more than x, or charge consumers more than x, so they don't get into a wage/price war, and one of them will only ever break this rule if they somehow gain a major advantage/disadvantage, which will lead to few competitors still.
As corporations get massive, the only way to compete with them is either through the state, coercively, or through collective bargaining/unions, or through revolution, either that or just acquiesce.
Typically the masses just don't have the foresight to compete very effectively, in part because they've swallowed the Kool-Aid, and you end up with these monstrous disparities.
Corporations are neither necessarily bad nor necessarily good. As you said, they are an end result. Is it bad that computers were monopolized at one point, with IBM, Microsoft, and Intel running the industry? No, they made computers cheap for everybody, for personal use, and led to the world as it is today. Apple competed out of survival. Eventually laws were passed to curtail and cut up Microsoft. There were pros and cons to the Microsoft monopoly.
But the average wage of Microsoft employees rose, it did not fall. So your conjectures are simply wrong. The wages of Microsoft employees and other software/hardware engineers have risen very high over the past 50 years. So your economic conjectures just do not paint the reality on the ground.
Again, a few or even some exceptions don't disprove the rule.
The rule is: for the last half a century, wages have been relatively stagnating and the price of essentials, which're what really matter, are relatively rising, people are getting poorer and poorer.
If capitalism was working for the environment, or for the middle and lower classes, then fine: if statistically I made more this year than last year, and more last year than the year before, excellent. But that's not the case, and that's grounds for considering serious reforms; patching up a few holes here or there isn't going to cut it, the system needs an overhaul.
Capitalism is probably one of, if not the greatest system we have for generating wealth or productivity, I think few people argue this, because people, especially or particularly the middle and working classes have to work really hard, but generating productivity isn't the sole criterion of the good.
Are the people benefitting, is the environment, are we producing things people really need, or even truly want, or are we just producing shit to produce shit?
Some things I think most people think are great like computers (altho they have their drawbacks, which's a whole other topic) have come out of the last half a century, so I'll give you that, but a lot of stuff we really didn't need has too, and the ecological time clock is ticking.
We need less productivity, for the sake of the environment, the poor shouldn't be forced to work more than the economy needs them to in order to sustain itself.
Gloominary
Wrong, you are not being honest or realistic. You cannot rent a $650 home on a fixed income of $600. You have to find a place that is priced at $200 like you said. Not $225. Not $250. Not $275. $200, which is what you said exists.
WendyDarling
### Re: 30 Dollar Minimum Wage
Gloominary wrote:Again, a few or even some exceptions don't disprove the rule.
The rule is: for the last half a century, wages have been relatively stagnating and the price of essentials, which're what really matter, has been relatively rising; people are getting poorer and poorer.
This is blatantly false. Wages have increased and continue to increase. $10 an hour was unheard of in the 1970s. A gallon of gas used to be $0.50. Candy bars used to be dimes and nickels. Employee wages and inflation are strongly linked together. And usually employee wages dictate inflation, even more than loan rates. If the average member of any population gets a $1.00 wage bump, prices will go up for groceries, gas, rent, and other basics. So increasing wages doesn't necessarily mean "better off". As I mentioned, the best you can hope for, based on wages alone, is class fluidity: moving between rich and poor, or poor and rich, more easily.
Gloominary wrote:If capitalism was working for the environment, or for the middle and lower classes, then fine, if statistically I made more this year than last year, and more last year than the year before, excellent, but that's not the case, and that's grounds for considering serious reforms, patching up a few holes here or there isn't going to cut it, the system needs an overhaul.
Gloominary wrote:Capitalism is probably one of, if not the greatest system we have for generating wealth or productivity, I think few people argue this, because people, especially or particularly the middle and working classes have to work really hard, but generating productivity isn't the sole criterion of the good.
Are the people benefitting, is the environment, are we producing things people really need, or even truly want, or are we just producing shit to produce shit?
Some things I think most people think are great like computers have come out of the last half a century, so I'll give you that, but a lot of stuff we really didn't need has too, and the ecological time clock is ticking.
We need less productivity, for the sake of the environment, the poor shouldn't be forced to work more than the economy needs them to in order to sustain itself.
If it's not broken, don't fix it.
It seems pretty obvious to me that what you think would happen from $30 per hour wages wouldn't happen. Your knowledge of economics is all mixed up, beginning with the fact that employers and employees dictate average wages, based on their own personal and private interests. Third-party meddling doesn't necessarily "help" anybody, especially when there is already a tax system, and minimum wage laws were already raised and continue to rise slowly. It's not a "magic solution". It would probably harm your own convictions more than help too, which is what I've pinpointed throughout this thread. So in short, my conclusion is: even if you were to raise the minimum wage to $30 per hour, it wouldn't help in the ways you believe it would, and would probably do more harm in the long run than anything else.
I don't believe in eternal economic growth. Western civilization is at a point of post-colonialistic progress. Capitalism is running out of areas and avenues to exploit, and hence average people are turning to socialism to gouge more money out of society. But that hurts more than helps, and it's at the cost of personal freedoms and individuality, which I support. Therefore I'm solidly against it. Average western people have already given up too many freedoms for security.
If people were actually 'liberal' then they would agree with me. Liberty is being stripped away by socialism. Economic liberalism is backward. Liberty means less taxation, less third-party intervention, and fewer "minimum wage" laws. Today's liberals are backward, and the exact opposite of what they used to be.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
WendyDarling wrote:Wrong, you are not being honest or realistic. You cannot rent a $650 home on a fixed income of $600. You have to find a place that is priced at $200 like you said. Not $225. Not $250. Not $275. $200, which is what you said exists.
You're wrong. It's in the $200 range, as in $200-299. And it's close enough.
Neener neener neener.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Oh, now it's a range. I see. A range of dishonesty.
WendyDarling
### Re: 30 Dollar Minimum Wage
I have a computer, that's great, but so what?
We have a few more gadgets and gizmos now, but if in order to get that, it means I'll have to struggle more and more to survive, until we're back in the 19th century or China, then what difference does it make?
I'd rather not have my computer and not be a slave.
It's the essentials that matter, them being artificially scarce is what keeps the hamsters running on the wheel producing mostly junk that largely not only does not benefit the diminishing middle and lower classes, but doesn't really benefit anyone, even the rich.
We don't need to produce all this junk, give people the option, and many will choose not to produce as much as they are now.
Gloominary
### Re: 30 Dollar Minimum Wage
Gloominary wrote:I have a computer, that's great, but so what?
So average humans are able to travel 100x faster, communicate 100x faster, work 100x faster than a century ago.
Gloominary wrote:We have a few more gadgets and gizmos now, but if in order to get that, it means I'll have to struggle more and more to survive, until we're back in the 19th century or China, then what difference does it make?
Society almost always goes forward, not backward. It's only going to get faster and more efficient the next 100 years, not less.
Gloominary wrote:I'd rather not have my computer and not be a slave.
There will always be slaves, now with computers, and a millennium ago without computers. Having a computer or not, doesn't really impact freedom versus slavery. If you want to be free then you should advocate personal freedoms and individuality, less taxation, and less third-party interventions.
Gloominary wrote:It's the essentials that matter, them being artificially scarce is what keeps the hamsters running on the wheel producing mostly junk that largely not only does not benefit the diminishing middle and lower classes, but doesn't really benefit anyone, even the rich.
We don't need to produce all this junk, give people the option, and many will choose not to produce as much as they are now.
Artificial scarcity is the product of monopolization, but it's not as bad as you make it out to be. Socialists, such as yourself, have very much curtailed and compensated for corporate monopolization, with the counter-balance of social third-party intervention (government, taxes, policies, regulations). The (human) world is doing just fine.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Urwrongx1000 wrote: Do you want the government intervening in the personal and private lives of families? It's one thing for parents to dictate over children.
Socialism used to be about an economic intervention - namely to usurp all the capitalists and carry on doing the same work that workers were doing anyway, but with all the business assets being owned socially instead of privately bought through the use of money as capital, y'know, like when it was first being created and defined and the terms meant what they were made to mean.
But now it's used instead of the term "Authoritarianism", basically its exact opposite, where government has authority over your personal life, and instead of the means to work being owned socially/publicly, they're controlled by a state that's composed of elites, not "the people"/workers.
Likewise Liberalism used to mean being liberal with regard to social issues, with minimal to no government intervention.... but now it's used instead of the term "Authoritarianism", basically its exact opposite, where government has authority over your personal life.
Ask any actual leftist what they want, and they'll usually support what Socialism used to mean for the economy, and what Liberalism used to mean for your personal life.
I'm not in favour of government intervening in the personal and private lives of families - which is what the two terms have been appropriated to mean by not-Liberal-not-Socialists.
I am in favour of them making up for where the "Classical Liberal" ideal of "perfect competition" routinely fails. The "hand of the market" is supposed to keep the economy in check, but Capitalist "success" today basically revolves around avoiding perfect-competition scenarios as much as possible. Poor people don't have the power to keep this in check; they have their iota of consumer influence, but only insofar as they can buy what they're given or go without - and you can't go without everything if you want to stay alive. So they elect a government to act on their collective wishes and keep the capitalists in check, but then the capitalists just buy their votes, and those who are elected end up as corporate cronies who actually help make the whole situation worse. What ends up happening is more like a kind of Socialism for Capitalists! Any breaks the poor get are just to maintain their ability to carry on working and getting paid less than what they earn their employer, so the employer can profit off more people for longer - thereby getting even richer than they otherwise would.
I am against this kind of intervention.
I want intervention against this kind of intervention.
But back onto "taking advantage of the (economic) success of others": be as productive as you like, whatever your politics. You will anyway - regardless of the financial reward, because internal satisfaction is what drives the productive anyway.
And if they produce or help produce physically way more in the way of goods and services than they need (as so very many do with all the technology, infrastructure and working methods that we now use), then is it immoral for that to be shared with those who don't produce as much as they need or anything at all for whatever reason - given that the surplus of production is so vast that it's easily possible to do so? Whenever I've had more money than I need, I've been quite happy for it to go towards others - and many other people think so too.
As is always the case, your issue will be with consent. Share the fruits of your labour, sure, but not because the government is forcing you to, right?
Well the problem is that not all people "think so too" - they are unhappy for the massive surplus that they've helped create to be shared with others.
Let's not forget that the "art" of paying people less than what they earn your company is NOT productivity - nor is the knack for finding the best ways to do this. It's a redistribution technique like taxation, but with no accountability. "It's the market, not me!" If it's the market that "dictates" the wages of your employees such that they are less than what they earn you, then you don't have to feel any responsibility (something of which you claimed to be in favour) for taking from people and giving to yourself: it's "your" company. But somehow, if the distribution is visible and accountable, suddenly it's awful! It's only "your" company because you were rich and connected enough in the first place to buy the stuff you needed to start and fund its operations, and it's "others" who actually do the operations for you - it's more theirs than it is yours, just because you happened to start off richer than them.
The particularly rich get and stay rich because they are unhappy for wealth to be distributed with accountability, which is exactly why their charity is never enough, and it's certainly insufficient to undo the distribution-without-accountability that is making and keeping them rich. Bill Gates can give so much because he's admitted he makes money faster than he can spend it - he genuinely doesn't need it, but you don't see him trying to undo the mechanism that channels so much money to him. And since none of them do, then we need a body that will: a government. Sorry, if the rich aren't going to be socially (and environmentally) responsible, then they don't "deserve" to be fully in charge. It's the rich who are the entitled ones, choosing to pay themselves more than their employees (profit). They're all engaged in these petty battles with their counterparts - trying to outdo each other materially in a pissing contest, when there are plenty of others who could be said to "deserve" it more. Honestly, I think beyond a certain monetary wealth, it should become a points system - it'd serve the same purpose, but not at the expense of society and even the economy.
And who really makes this money anyway? Employers wouldn't be rich if they didn't have employees to profit from - they owe their ENTIRE income to them, because that is what their entire income is literally from. And the employer and employees wouldn't be able to constitute a productive business if it weren't for all their customers. And all this money has to circulate through all kinds of other businesses and other people too to get back to the money "made" by any particular employer - they owe the ENTIRE economy. Money isn't made, it is attracted from an existing flow that travels through all people. Indirect causes are still causes.
Urwrongx1000 wrote:I've worked my ass off in life.
I have no doubt. So many people of all incomes work their ass off. It's almost as though there's no correlation between how hard you work and how much money you get - in so many cases. People who work this hard have to convince themselves that their work was worth it, so of course they think they deserved every penny they made. Maybe you'd have made less if there was no education spending... I don't know what education you had. Education is just another area that can't be left to the risks of complete privatisation. It's all very well taking a moral high ground and saying it's the parents' responsibility to fund the education of their child or children, but since there are inevitably going to be economic losers if there are going to be economic winners, with all of the losers being unable to afford the education, just imagine the sheer degree of incompetence going around... Obviously with no hope, crime becomes tempting - maybe you're saying you'd be happier to live in a country of even more unemployed criminal morons, but I can't say I feel the same way.
You have to look beyond just your own needs and your own situation, you have to consider that tough love isn't an optimal solution for everyone in all situations: for all the people in situations where it does help there are many where it does quite the opposite. In an economy and a society, what goes around quite literally comes around - you have to see ALL the system and know all the potential consequences just to give your own needs and situation any real meaningful context whatsoever. Otherwise you're just imagining "what would it be like if social responsibility didn't have to apply to me and others?" which is just fantasy.
Silhouette
### Re: 30 Dollar Minimum Wage
@Wrong

Urwrongx1000 wrote: You are. Liberal-leftist-socialists should be spending 50% of their own time and money on the poor, before asking anybody else to spend more. If you don't then you're just a hypocrite.

I live check to check.
Some wealthy liberals do donate to charity.

Urwrongx1000 wrote: It's not "an opinion". For you to stick your hand into other people's dealings, and then complain about fairness, is hypocrisy. Taxation is theft. You're merely trying to justify your thievery, taking the profits and successes of others. Taking bites of a pie you had no part in making.

I could just as easily say the democracy owns everything, and whatever you have is a privilege.
Don't get me wrong, everyone has the right to what they need, but if you're a multimillionaire-billionaire capitalist, it's a privilege.

Urwrongx1000 wrote: I'm against the momentum of the modern world going liberal-left and towards more socialism, towards more third-party meddling and entitlement.

That's absurd.
Again, how can you say that when the richest 1% controls 80% of the wealth?
How can you say that when the overwhelming majority of the 1st world economy is in private hands?
How can you say that when welfare is substantially below the poverty line?
When the middle class is starting to live like the working class, and the working class is starting to live like the unemployed, in spite of, or even because of tremendous economic growth?
How can you say that when we have shopping malls opening up everywhere?
Never in the history of the world has there been more development than now.
And now China, India and much of the third world are attempting to ape western materialism.
No, it's the very opposite: this is still the 'golden' age of capitalism playing itself out; just because it's not precisely as pure as it was in the 19th century doesn't mean it isn't fundamentally the same.

Urwrongx1000 wrote: Corporations have taken advantage of socialistic idealists, such as yourself, and raked in the profits of your mistakes. Corporations have ways around laws such as a $30 minimum, by cutting worker hours, hiring less, laying off workers, etc. The larger corporations are relatively immune to socialistic-leftist meddling. They can afford to get around all social-government interventions. Small businesses, small corporations, small industries will all be destroyed. Thus the world will be worse off for socialistic-leftist meddling. Socialists and leftists, with an inept understanding of economics, are not actually targeting or penalizing the ones they hope to. Liberal-leftists try to penalize the "top 1%" but end up hurting the middle class more. This is another reason why "economic equality" cannot be enforced, especially not through democracy and legislation. Corporations will pay politicians off anyway; who do you think sponsors election campaigns?

Here you make it sound as though socialism is to blame for the disparities.
It's not.
Conditions for many if not most workers were akin to slavery in the 19th century, before the socialist reforms of the early 20th century.
You used to be able to get welfare a lot easier, and it used to pay a lot more, in Canada and many parts of the States, but since the 1980s, the Reagan and Thatcher era, we've seen a resurgence of classical liberalism (neoliberalism) in the Anglosphere. The social safety net has gotten smaller and smaller, to the point now where it's almost a joke, yet wages have stagnated relative to the ascending cost of living. If it weren't for a safety net, the working class would have all the more trouble becoming middle class, and members of the middle class would have a tougher time bouncing back from hard times. Unfortunately the rich have skirted around some of the taxes while small businesses haven't, because Democrats in the States, or Liberals in Canada, have both long since been bought and paid for by the corporations, and so have Republicans and Conservatives. If we want more/real socialism, we have to start voting for alternative parties and becoming activists... which'll probably never happen... but all this isn't the fault of socialism; it's mostly the fault of capitalism (and the hoodwinked masses) that created these enormous disparities in the first place, making them partly immune to the necessary socialist interventions that had to be implemented later on down the road to correct them.

Urwrongx1000 wrote: You're the one claiming "everybody deserves" (a place to live).

So? Is that really so horrible?
Would you prefer some people die on the street?

Urwrongx1000 wrote: Then you're complaining that it's not big enough. It's a slippery slope. Apparently you have no limit. You want everybody sitting on gold toilets with gold toilet paper?

I'm a minimalist, I care as much about the environment as I do the lower classes.
I never cared for material things, never had any interest in them.
I always sought to maximize my free time and knowledge rather than the amount of stuff I had.
It's not a slippery slope for me.
I said people who can't work deserve a one bedroom apartment, some decent food to eat, enough bus fare to get around the city; you have not and will not ever hear me say we should all live like kings.
I believe needs and the environment ought to come first, not greed, that everyone who can't work or is working should have their needs met, and that capitalists don't need to have a 10th of what they have, and I have some well thought out ideas about what constitutes needs.

Urwrongx1000 wrote: It does work, which is mostly why minimum wage has climbed so high in the first place. Workers demand more pay with or without third-party intervention. Employers must compete against other employers.

Wages have stagnated relative to the cost of living, as I've already demonstrated in prior posts, which you haven't countered.

Urwrongx1000 wrote: Different societies and groups of humans want and decide upon different things. US attitudes are pro-capitalism, pro class mobility, and for less socialistic intervention. What works in one place does not work in another ($30 minimum wage).

Here, you're changing your tune; you're starting to sound like a relativist.
So socialism or other economic systems might work for other countries, just not America?
Are you sure?
Things are always changing; the America of the 19th century, when capitalism really started coming into its own, isn't the America of the 21st.
Perhaps more socialism is exactly what America needs.
Just the state and infrastructure?
How come?
Why not privatize those things too?
Are you so certain these two areas are the only ones that shouldn't be privatized?
Where do you draw the line, and why? You haven't given us an account.
Do you unthinkingly draw it there just because it goes against conventional dogmas you've been brought up with?
Gloominary
### Re: 30 Dollar Minimum Wage
@Wrong
Urwrongx1000 wrote: So average humans are able to travel 100x faster, communicate 100x faster, work 100x faster than a century ago.

I like computers, but they have drawbacks, and I really don't want to get into all that here/now.
As for cars, yes, they're nice, but there are trade-offs: they cause accidents, noise, pollution, traffic.
They cost trillions of dollars in resources to manufacture and drive, a heavy toll on our environment, plus climate change, a heavier toll, which scientists are saying, if left unchecked, will likely be the death of civilization, if not of life itself as we know it.
Cars make us fat, lazy.
They've become a prison, because we're forced to commute longer hours to work.
We never get to enjoy our products because of artificial scarcity, food and housing being way overpriced, workers underpaid.
Urwrongx1000 wrote: Society almost always goes forward, not backward. It's only going to get faster and more efficient the next 100 years, not less.

This is plainly untrue to anyone with even a rough knowledge of history.
We can argue a lot over whether humanity is ultimately progressing or heading towards destruction.
We almost annihilated each other during the cold war.
WW3 over power and dwindling resources may be around the corner; it's a real possibility.
Add to that climate change, the annihilation of a nature that has value in itself, intrinsically, and a nature that we're still fundamentally dependent on, extrinsically.
A coronal mass ejection could wipe out all electronics overnight, sending us back into the dark age.
But even if we are fundamentally progressing, which I doubt, we have at least taken many, many temporary steps back on our way to utopia.
The Minoans and Mycenaeans rose and fell; the Greeks and the Romans rose and fell.
Grand and glorious civilizations are often followed by equally grim dark ages.
And not just in the West: the Mayans withered away, and scientists still aren't sure exactly why; Egypt, Sumer, Persia, and China all collapsed, or nearly so, at some point, and some of them are no longer with us today. Egyptians no longer speak the language of their ancestors, the ones who built the pyramids, or resemble them in any way culturally.
Urwrongx1000 wrote: There will always be slaves, now with computers, and a millennium ago without computers. Having a computer or not doesn't really impact freedom versus slavery. If you want to be free then you should advocate personal freedoms and individuality, less taxation, and less third-party interventions.

There will always be slaves, but sometimes we can help partly or fully emancipate our class.
If the people are fundamentally in control of their democracy, then whatever laws they make will, in all likelihood, ultimately serve them; that's not slavery.
The condition of having to sell one's labor can, for many people at many times and places, be akin to slavery, especially if left unchecked by state, social, or syndicate interventions.
Urwrongx1000 wrote: Artificial scarcity is the product of monopolization, but it's not as bad as you make it out to be. Socialists, such as yourself, have very much curtailed and compensated for corporate monopolization, with the counter-balance of social third-party intervention (government, taxes, policies, regulations). The (human) world is doing just fine.

Artificial scarcity should end. If 30 dollars an hour doesn't cut it, if we can't properly manage the consequences of that (we might be able to), then one alternative, and I've already brought up a couple before, is to have universal food and housing, similar to how Canada has universal education (but not postsecondary) and healthcare.
Food and housing are more important than education.
People will still have to work, but not nearly as much; food and rents will be much cheaper.
What people do with their free time will be up to them: if they want to continue working and inventing they can, but they won't have to, and many if not most won't as much as they are now.
We'll produce a lot less stuff, much of it garbage anyway, but we'll be able to use the stuff we do have to its fullest, enjoy it, instead of just using it to work us even harder and more efficiently, which is how it's largely being used now.
Gloominary
### Re: 30 Dollar Minimum Wage
Urwrongx1000 wrote: https://wichita.craigslist.org/roo/d/275-month-all-bills-paid/6466213814.html
This is nearly 50 percent more than what you were talking about. It also says, "asian women welcome" or something like that.
Mr Reasonable
### Re: 30 Dollar Minimum Wage
Silhouette wrote:Socialism used to be about an economic intervention - namely to usurp all the capitalists and carry on doing the same work that workers were doing anyway, but with all the business assets being owned socially instead of privately bought through the use of money as capital, y'know, like when it was first being created and defined and the terms meant what they were made to mean.
But now it's used instead of the term "Authoritarianism", basically its exact opposite, where government has authority over your personal life, and instead of the means to work being owned socially/publicly, they're controlled by a state that's composed of elites, not "the people"/workers.
Likewise Liberalism used to mean being liberal with regard to social issues, with minimal to no government intervention.... but now it's used instead of the term "Authoritarianism", basically its exact opposite, where government has authority over your personal life.
Ask any actual leftist what they want, and they'll usually support what Socialism used to mean for the economy, and what Liberalism used to mean for your personal life.
I'm not in favour of government intervening in the personal and private lives of families - which is what the two terms have been appropriated to mean by not-Liberal-not-Socialists.
I agree, a lot of 'Modern' terms have been inverted, perverted, and twisted around, for political purposes and also out of a lack of education and common sense. Public ignorance, apathy, and general laziness wear away civil liberties.
Silhouette wrote: I am in favour of them making up for where the "Classical Liberal" ideal of "perfect competition" routinely fails. The "hand of the market" is supposed to keep the economy in check, but Capitalist "success" today basically revolves around avoiding perfect-competition scenarios as much as possible. Poor people don't have the power to keep this in check; they have their iota of consumer influence, but only insofar as they can buy what they're given or go without - and you can't go without everything if you want to stay alive. So they elect a government to act on their collective wishes and keep the capitalists in check, but then the capitalists just buy their votes, and those who are elected end up as corporate cronies who actually help make the whole situation worse. What ends up happening is more like a kind of Socialism for Capitalists! Any breaks the poor get are just to maintain their ability to carry on working and getting paid less than what they earn their employer, so the employer can profit off more people for longer - thereby getting even richer than they otherwise would.
I am against this kind of intervention.
I want intervention against this kind of intervention.
Competition is a good thing, and the very force that has led to western success and surplus. You make it sound like a bad thing. Competition means higher wages for workers, when employers compete for skilled, educated, loyal, and competent workers. Competition means lower consumer costs, when companies compete to sell the same good. Capitalism is good for these reasons. Success leads to monopolization, domination of a few corporations over small businesses and smaller corporations. Average people all enjoy the benefits of this. Thus it is hypocritical for you and others to speak against it, or speak of it negatively.
Silhouette wrote:But back onto "taking advantage of the (economic) success of others": be as productive as you like, whatever your politics. You will anyway - regardless of the financial reward, because internal satisfaction is what drives the productive anyway.
And if they produce or help produce physically way more in the way of goods and services than they need (as so very many do with all the technology, infrastructure and working methods that we now use), then is it immoral for that to be shared with those who don't produce as much as they need or anything at all for whatever reason - given that the surplus of production is so vast that it's easily possible to do so? Whenever I've had more money than I need, I've been quite happy for it to go towards others - and many other people think so too.
As is always the case, your issue will be with consent. Share the fruits of your labour, sure, but not because the government is forcing you to, right?
Well the problem is that not all people "think so too" - they are unhappy for the massive surplus that they've helped create to be shared with others.
In the US, taxes are between 10% and 30% from state to state, including federal taxation. I'd say the average person gets taxed 22% in the US, which is very low for a developed country, and even lower for a military powerhouse. Perhaps taxes could go up. Perhaps they should stay the same. I don't see them going down. Therefore your point, and Gloom's point, are both moot. "The government", society, third parties, already receive a large chunk of people's income, productivity, and welfare.
And you're right to say, they're not asking for it. You pay, or you go to jail. That's force. So people asking for charity on top of this, as Gloom does, is rather insulting. Don't you have a big enough piece of the pie as is, but you want more? That's the socialist agenda, wanting more.
Silhouette wrote: Let's not forget that the "art" of paying people less than what they earn your company is NOT productivity - nor is the knack for finding the best ways to do this. It's a redistribution technique like taxation, but with no accountability. "It's the market, not me!" If it's the market that "dictates" the wages of your employees such that they are less than what they earn you, then you don't have to feel any responsibility (something of which you claimed to be in favour) for taking from people and giving to yourself: it's "your" company. But somehow, if the distribution is visible and accountable, suddenly it's awful! It's only "your" company because you were rich and connected enough in the first place to buy the stuff you needed to start and fund its operations, and it's "others" who actually do the operations for you - it's more theirs than it is yours, just because you happened to start off richer than them.
I disagree entirely.
After the employer or company owner pays the wage you shook hands on, the employee owes and owns nothing. The deal is done. If employees want stock or 'ownership' of their work beyond wages, then that's up to the employer and employee to decide. Some companies do offer stock options for their employees, especially in tech-related fields where engineers could potentially copyright their work. It varies from field to field. A shoemaker isn't going to have an interest in selling the individual pairs of shoes, as it would be inefficient for employees to do so. Thus it is the business owner who benefits from the company as a whole.
Silhouette wrote: The particularly rich get and stay rich because they are unhappy for wealth to be distributed with accountability, which is exactly why their charity is never enough, and it's certainly insufficient to undo the distribution-without-accountability that is making and keeping them rich. Bill Gates can give so much because he's admitted he makes money faster than he can spend it - he genuinely doesn't need it, but you don't see him trying to undo the mechanism that channels so much money to him. And since none of them do, then we need a body that will: a government. Sorry, if the rich aren't going to be socially (and environmentally) responsible, then they don't "deserve" to be fully in charge. It's the rich who are the entitled ones, choosing to pay themselves more than their employees (profit). They're all engaged in these petty battles with their counterparts - trying to outdo each other materially in a pissing contest, when there are plenty of others who could be said to "deserve" it more. Honestly, I think beyond a certain monetary wealth, it should become a points system - it'd serve the same purpose, but not at the expense of society and even the economy.
I disagree, and it sounds to me like you don't know rich people personally. I know a few, and some upper-middle-class people. Most of the rich are regular people. It's not until the top 1% that elitism really becomes apparent, and even then, some of those people actually did work for what they gained. Or took risks for it. Warren Buffett, Bill Gates: did they not earn what they made?
They took risks. They reaped the reward. It's like roulette, where socialists want a cut of the earnings of the guy who bets it all on a number. That's unjust.
Silhouette wrote:And who really makes this money anyway? Employers wouldn't be rich if they didn't have employees to profit from - they owe their ENTIRE income to them, because that is what their entire income is literally from. And the employer and employees wouldn't be able to constitute a productive business if it weren't for all their customers. And all this money has to circulate through all kinds of other businesses and other people too to get back to the money "made" by any particular employer - they owe the ENTIRE economy. Money isn't made, it is attracted from an existing flow that travels through all people. Indirect causes are still causes.
Urwrongx1000 wrote:I've worked my ass off in life.
I have no doubt. So many people of all incomes work their ass off. It's almost as though there's no correlation between how hard you work and how much money you get - in so many cases. People who work this hard have to convince themselves that their work was worth it, so of course they think they deserved every penny they made. Maybe you'd have made less if there was no education spending... I don't know what education you had. Education is just another area that can't be left to the risks of complete privatisation. It's all very well taking a moral high ground and saying it's the parents' responsibility to fund the education of their child or children, but since there are inevitably going to be economic losers if there are going to be economic winners, with all of the losers being unable to afford the education, just imagine the sheer degree of incompetence going around... Obviously with no hope, crime becomes tempting - maybe you're saying you'd be happier to live in a country of even more unemployed criminal morons, but I can't say I feel the same way.
You have to look beyond just your own needs and your own situation, you have to consider that tough love isn't an optimal solution for everyone in all situations: for all the people in situations where it does help there are many where it does quite the opposite. In an economy and a society, what goes around quite literally comes around - you have to see ALL the system and know all the potential consequences just to give your own needs and situation any real meaningful context whatsoever. Otherwise you're just imagining "what would it be like if social responsibility didn't have to apply to me and others?" which is just fantasy.
Isn't that the point I made to Gloom, that he needs to address macro-economics and not just his own densely populated inner city?
I understand that there are levels of responsibility, and that many people are born on the lowest rungs of society. That's no excuse though. That's the way morality, accountability, and responsibility work. Kant was right. It's universal. It doesn't matter how rich and silver-spooned you were. It doesn't matter if you were born a slave. If individuals don't take responsibility for their own lives then they are going nowhere fast. This applies economically too. The first thing Moderns should learn is their value in the work-force, and the wages that compensate it. If you are not being paid what you're worth, then quit. It's perfectly legal, and it's among the fruits of capitalism and western industriousness.
Urwrongx1000
### Re: 30 Dollar Minimum Wage
Gloominary wrote:Don't get me wrong, everyone has the right to what they need, but if you're a multimillionaire-billionaire capitalist, it's a privilege.
That's where you're wrong. Workers don't necessarily owe anything to "democracy". It's only when democracy forms mob rule, and threatens the working man with jail, that 'taxes' are enforced by the State. Democracy is the criminal. You have things backward.
Gloominary wrote:That's absurd.
Again, how can you say that when the richest 1% controls 80% of the wealth?
How can you say that when the overwhelming majority of the 1st world economy is in private hands?
How can you say that when welfare is substantially below the poverty line?
When the middle class is starting to live like the working class, and the working class is starting to live like the unemployed, in spite of, or even because of tremendous economic growth?
How can you say that when we have shopping malls opening up everywhere?
Never in the history of the world has there been more development than now.
And now China, India and much of the third world are attempting to ape western materialism.
No, it's the very opposite: this is still the 'golden' age of capitalism playing itself out; just because it's not precisely as pure as it was in the 19th century doesn't mean it isn't fundamentally the same.
As if it could be another way? As if risk, competition, sacrifice were not involved in amassing wealth? As if most of the rich do not deserve the wealth they worked for?
That's the beauty of the US, for now, still. If you work, risk, sacrifice, then you earn the wealth you make. It's socialists like you who want to cut into that and steal.
I have no qualms against the top 1%. It's very much to the benefit of everybody. Why do you think oil and gas is affordable? Thank the top 1%. Thank the Bush family.
Gloominary wrote: Here you make it sound as though socialism is to blame for the disparities.
It's not.
Yes it is, there is already an excess of socialism in place. Socialism is responsible for the current ~$10 per hour minimum wage. That's social enforcement and regulation of the economy by mob rule.

Gloominary wrote: Conditions for many if not most workers were akin to slavery in the 19th century, before the socialist reforms of the early 20th century.

No, just no, not in the US.

Gloominary wrote: You used to be able to get welfare a lot easier, and it used to pay a lot more, in Canada and many parts of the States, but since the 1980s, the Reagan and Thatcher era, we've seen a resurgence of classical liberalism (neoliberalism) in the Anglosphere. The social safety net has gotten smaller and smaller, to the point now where it's almost a joke, yet wages have stagnated relative to the ascending cost of living. If it weren't for a safety net, the working class would have all the more trouble becoming middle class, and members of the middle class would have a tougher time bouncing back from hard times. Unfortunately the rich have skirted around some of the taxes while small businesses haven't, because Democrats in the States, or Liberals in Canada, have both long since been bought and paid for by the corporations, and so have Republicans and Conservatives. If we want more/real socialism, we have to start voting for alternative parties and becoming activists... which'll probably never happen... but all this isn't the fault of socialism; it's mostly the fault of capitalism (and the hoodwinked masses) that created these enormous disparities in the first place, making them partly immune to the necessary socialist interventions that had to be implemented later on down the road to correct them. You're the one claiming "everybody deserves" (a place to live). So? Is that really so horrible? Would you prefer some people die on the street?

I'd rather they die on the street than in my living room. I don't think the world's poor, poverty, and discontents are my personal responsibility. I'm not a Judaeo-Christian; I don't believe all the sins of the world are mine to inherit. In fact I see that it's immoral for people to push their own suffering and negative choices onto others. You ought to agree, at least, that people are poor, criminal, and foul out of wrong choices throughout life. Hobos and bums don't magically appear on the street for nothing. They made choices to get there, bad choices, wrong choices, mostly, for those who don't want to be there. You are implying that people who make good choices (have an apartment, job, house, assets, wife, children, etc.) owe people who make bad choices and live without. That's what I'm against. Bullshit. People who make good choices deserve not to have to worry about taking the world's poor into their living rooms, as if their welfare were owed to somebody else. On what principle? Are you Judaeo-Christian? Are everybody else's problems and bad choices yours? Is Jesus Christ your idol? I have $40,000 in school debts. You want to take that on for me? Go ahead, show me your charity.
Gloominary wrote:I'm a minimalist, I care as much about the environment as I do the lower classes.
I never cared for material things, never had any interest in them.
I always sought to maximize my free time and knowledge rather than the amount of stuff I had.
It's not a slippery slope for me.
I said people who can't work deserve a one bedroom apartment, some decent food to eat, enough bus fare to get around the city; you have not and will not ever hear me say we should all live like kings.
I believe needs and the environment ought to come first, not greed, that everyone who can't work or is working should have their needs met, and that capitalists don't need to have a 10th of what they have, and I have some well thought out ideas about what constitutes needs.
Urwrongx1000 wrote: It does work, which is mostly why minimum wage has climbed so high in the first place. Workers demand more pay with or without third-party intervention. Employers must compete against other employers.

Wages have stagnated relative to the cost of living, as I've already demonstrated in prior posts, which you haven't countered.
You haven't shown that wages have stagnated, because they haven't. The minimum wage keeps going up in the US. Females and blacks make more than ever before, arguably at the cost of white males, who have 'stagnated' economically.
Gloominary wrote:
Urwrongx1000 wrote: Different societies and groups of humans want and decide upon different things. US attitudes are pro-capitalism, pro class mobility, and for less socialistic intervention. What works in one place does not work in another ($30 minimum wage).

Here, you're changing your tune; you're starting to sound like a relativist.
So socialism or other economic systems might work for other countries, just not America?
Are you sure?
What about in 50 years' time... what about in 100?
Things are always changing; the America of the 19th century, when capitalism really started coming into its own, isn't the America of the 21st.
Perhaps more socialism is exactly what America needs.
Just the state and infrastructure?
How come?
Why not privatize those things too?
Are you so certain these two areas are the only ones that shouldn't be privatized?
Where do you draw the line, and why? You haven't given us an account.
Do you unthinkingly draw it there just because it goes against conventional dogmas you've been brought up with?

I'm pro-privatization. Scandinavian countries are already socialist for the most part, with extremely high taxes. I'm pro-libertarian/classical-liberal, pro-individualism, pro-freedom. Smaller or no government, defund public education, defund the police force. However, my ideals are not reality, nor would they work for everybody else. They reflect my personal opinion and values, nothing more. What's good for one isn't necessarily good for everybody. Whether or not the US "needs" more socialism is irrelevant to the fact that socialism has already been growing, hence the higher tax rates and repeals of previously enjoyed individual liberties. For example, consider the attacks against the Second Amendment in the US, and the state trying to impose gun laws, restrictions, and all sorts of barriers, attempting to take self-defense out of the hands of individual Americans and replace it with "the state". It's the popular example. Socialism needs rebuke, not encouragement. The US is still strongly classical-liberal. The problem is new generations of entitled whiny beta chumps who want to 'vote' themselves more money instead of working and earning it.

Urwrongx1000

### Re: 30 Dollar Minimum Wage

Urwrongx1000 wrote: Yes it is, there is already an excess of socialism in place. Socialism is responsible for the current ~$10 per hour minimum wage. That's social enforcement and regulation of the economy by mob rule.
The national US minimum wage is $7.25, not $10, and it hasn't increased nationally since 2009.
Actually, US citizens pay more in taxes than Scandinavians do in their respective countries. From what I read, their flat tax rate of around 60% doesn't begin until one earns 1.5 times the average income. Do those other countries pay all the national, state, and local taxes on top of their income taxes, such as sales tax, state tax, county tax, hotel/entertainment tax, inheritance tax, tobacco tax, alcohol tax, gasoline tax, etc., etc.? No, they don't, and that makes US taxes cumulatively higher than all other 1st-world countries'.
WendyDarling
http://www.tac.mta.ca/tac/volumes/7/n3/7-03abs.html
# Pure morphisms of commutative rings are effective descent morphisms for modules -- a new proof
## Bachuki Mesablishvili
The purpose of this paper is to give a new proof of the Joyal-Tierney theorem (unpublished), which asserts that a morphism $f:R\rightarrow S$ of commutative rings is an effective descent morphism for modules if and only if $f$ is pure as a morphism of $R$-modules.
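(For reference, and not part of the original abstract: $f$ is pure as a morphism of $R$-modules precisely when, for every $R$-module $M$, the induced map $M \otimes_R f : M \otimes_R R \rightarrow M \otimes_R S$, $m \otimes r \mapsto m \otimes f(r)$, is injective; this is the standard notion of purity the theorem refers to.)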
Keywords: pure morphisms, (effective) descent morphisms, split coequalizers.
2000 MSC: 13C99, 18A20, 18A30, 18A40.
Theory and Applications of Categories, Vol. 7, 2000, No. 3, pp 38-42.
http://fmoldove.blogspot.cz/2014/02/
## Bell Theorem without inequalities
### The GHZM argument
In the past I discussed Bell's theorem and his inequality for a local realistic theory. I also discussed the impossible noncontextual coloring game of Kochen and Specker. The Kochen-Specker theorem is very striking, but it has a major weakness: it cannot be put to an experimental test. Why? Because the coloring game can succeed when there are errors in aligning the measurement directions, and no experiment is perfect. In other words, it is not a robust result. Bell inequalities are amenable to experimental verification, and supporters of local realism then argue about experimental loopholes (I'll cover them in future posts). It would be nice to have an argument as robust as Bell's theorem and just as compelling as the Kochen-Specker theorem.
Interestingly, such an argument exists. It was introduced by Greenberger, Horne, Zeilinger, and Mermin, and it concerns three particles in a particular entangled state |ψ> (up to a normalization coefficient):
|ψ> = |+>1|+>2|+>3 - |->1|->2|->3
where

|+> = (1, 0)^T,   |-> = (0, 1)^T

are the eigenvectors of σz, the operator measuring spin on the z axis. Let σx and σy also be the Pauli matrices corresponding to measuring spin on the x and y axes:

σx = |0 1|    σy = |0 -i|    σz = |1  0|
     |1 0|         |i  0|         |0 -1|
Simple matrix multiplication shows that:
σx|+> = |->,   σx|-> = |+>
σy|+> = i|->,   σy|-> = -i|+>
σz|+> = |+>,   σz|-> = -|->
Using those identities, compute σ1xσ2yσ3y|ψ>, σ1yσ2xσ3y|ψ>, and σ1yσ2yσ3x|ψ>, and convince yourself that each is equal to |ψ>.
Then compute σ1xσ2xσ3x|ψ> and see that it is equal to -|ψ>.
From the commutation rule of Pauli matrices it is easy to see that:
(σ1x σ2y σ3y)(σ1y σ2x σ3y)(σ1y σ2y σ3x) = -(σ1x σ2x σ3x)
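These relations are easy to check numerically. Here is a minimal NumPy sketch (my own illustration, not from the original post) that builds the three-particle operators as Kronecker products and verifies the four eigenvalue equations:

```python
import numpy as np

# Pauli matrices and the sigma_z eigenvectors |+> and |->
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
plus = np.array([1, 0], dtype=complex)   # |+>
minus = np.array([0, 1], dtype=complex)  # |->

def kron3(a, b, c):
    # Three-fold tensor product acting on the 8-dimensional space
    return np.kron(np.kron(a, b), c)

# GHZ state |psi> = (|+>|+>|+> - |->|->|->) / sqrt(2)
psi = (kron3(plus, plus, plus) - kron3(minus, minus, minus)) / np.sqrt(2)

checks = [("s1x s2y s3y", kron3(sx, sy, sy), +1),
          ("s1y s2x s3y", kron3(sy, sx, sy), +1),
          ("s1y s2y s3x", kron3(sy, sy, sx), +1),
          ("s1x s2x s3x", kron3(sx, sx, sx), -1)]

for name, op, eig in checks:
    assert np.allclose(op @ psi, eig * psi)
    print(f"{name} |psi> = {eig:+d} |psi>")
```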
So now for the local realism contradiction:
Measure each particle's spin on the x or y axis without disturbing the other particles (σx or σy), and call the measurements mx or my.
From σ1x σ2x σ3x|ψ> = -|ψ> we get:
m1x m2x m3x = -1
However from σ1xσ2y σ3y|ψ> = σ1y σ2x σ3y|ψ> = σ1y σ2y σ3x|ψ> = |ψ> we get:
m1x m2y m3y = +1
m1y m2x m3y = +1
m1y m2y m3x = +1
Multiplying the last three equations and using the fact that the square of a measurement is always 1 (because m can only be +1 or -1) yields:
m1x m2x m3x = +1 CONTRADICTION
So what is going on? The late Sidney Coleman gave a famous lecture, "Quantum Mechanics in Your Face", where he very humorously explains all this:
With a spoiler alert for the video, I will quickly outline Coleman's explanation:
Perform an experiment in which three people each get one of the spin-½ particles in |ψ>. They randomly measure the spin on either the x or y axis, recording mx or my. Then they compare the measurements and notice these experimental correlations:
m1x m2x m3x = -1
m1x m2y m3y = +1
m1y m2x m3y = +1
m1y m2y m3x = +1
Any attempt to explain them using local realism fails. If local realism holds, each measurement is causally independent of the others (locality), and each value "m" has a definite value prior to measurement (realism). Only then can we multiply the last three equations, arriving at a contradiction with the first one.
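The failure can also be seen by brute force. The short sketch below (again my own illustration) enumerates all 2^6 local-realistic assignments of definite values ±1 to the six quantities and confirms that none reproduces the four GHZ correlations:

```python
from itertools import product

# Enumerate all local-realistic value assignments: each of the six
# quantities m1x, m1y, m2x, m2y, m3x, m3y has a definite value +1 or -1.
solutions = 0
for m1x, m1y, m2x, m2y, m3x, m3y in product((+1, -1), repeat=6):
    if (m1x * m2y * m3y == +1 and
            m1y * m2x * m3y == +1 and
            m1y * m2y * m3x == +1 and
            m1x * m2x * m3x == -1):
        solutions += 1

print(solutions)  # prints 0: no assignment reproduces all four correlations
```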
## The Transactional Interpretation of Quantum Mechanics
We continue the prior guest post by Ruth E. Kastner (@rekastner http://transactionalinterpretation.org ) with part 2. Ruth is an expert in the Transactional Interpretation of Quantum Mechanics. She recently published a book on this topic.
### Part 2: A Challenge Surmounted, and a Further Development of TI
Part 1 of this guest post discussed how the Transactional Interpretation (TI) can explain quantum measurement, including the Born Rule for probabilities of outcomes. In this part, I’ll show how TI survives a challenge raised by Tim Maudlin.[1] I’ll also discuss my extension of TI into the relativistic domain. That extension clarifies what an absorber is, and as an added bonus, explains how the macroscopic world emerges from the quantum level.
#### The Maudlin Challenge
Maudlin’s challenge is intended to undermine the idea of a well-defined ‘competition’ between absorbers. Here’s the thought experiment (illustrated in Figure 1): a source emits offer waves for a slow-moving type of particle. The offer wave has rightward and leftward components.
Figure 1. The Maudlin challenge.
However, the left-hand absorber B is moveable, and it starts out on the right behind A. It will only be quickly swung around to the left if A does not detect the quantum by the travel time needed for the quantum to reach A, in which case the rightward transaction has failed. If this occurs, B will just be picking up the actualized quantum already headed for the left.
Note that B cannot return a confirmation from its position behind A, so B never returns a confirmation when the quantum is detected at A. If the incipient transaction with A fails, then the quantum is certain to be detected at B. Maudlin argued that this scenario makes the transactional account inconsistent because he considered the weights of the incipient transactions as probabilities that apply to detection at specific detectors. In this scenario, detector B is certain to detect the quantum whenever it is swung around, even though that leftward incipient transaction only has a probability of 1/2.
There are a variety of solutions to the Maudlin challenge.[2] But perhaps the simplest was suggested by Marchildon (2006), and that's what I'll present here. He noted that the direct-action picture, as developed by Wheeler and Feynman (1945, 1949) and Davies (1970-72), assumes complete absorption of all emitted fields - otherwise, it is not guaranteed to be empirically equivalent to standard theories of radiation or to observed radiation phenomena. So Marchildon includes a remote background absorber C, which always responds with a confirmation to the leftward offer wave (see Figure 2). Thus we always have two incipient transactions, each with probability ½, and these correspond to the frequencies of detection on the right and the left.
Figure 2. Marchildon’s solution to the Maudlin challenge:
there is always a confirmation from the left.
The probabilities also need to be understood as applying to the quantum itself, not to specific detectors. This makes sense because each offer and confirmation embodies specific physical quantities that are actualized when that incipient transaction is actualized. So we have a probability of ½ that a quantum with leftward momentum will be actualized, and a probability of ½ that a quantum with rightward momentum will be actualized, and there is no inconsistency. It does not matter whether B or C receives the actualized quantum.
Other worries surrounding the Maudlin challenge involve possible causal loops, but these are eliminated when we get away from the typical (but unnecessary) ‘block world’ picture.[3] In my development of TI, offer waves are physical possibilities, not spacetime events. Only actualized transactions correspond to spacetime events; indeed, this is how events are brought into being.
The Relativistic Transactional Picture
The relativistic domain, rather than presenting a problem for TI, actually resolves some issues facing the original, nonrelativistic version. This is in stark contrast to other interpretations that struggle with a relativistic extension.[4] The relativistic domain addresses interacting quanta, and it is in these details that we find a quantitative basis for both the emission of offer waves and the absorption of those offers, which generate confirmations. This allows us to answer the question “What is an emitter?” and, probably the more pressing question: “What is an absorber”?
We find those answers in the relativistic coupling between fields. As Feynman noted in the context of quantum electrodynamics (QED), the coupling constant is the amplitude for an electron to emit or absorb a photon (Feynman 1985, p. 129). In the transactional picture, these correspond to the emission of an offer wave or the generation of a matching confirmation, respectively. The coupling constant is only ~0.085 (and is further reduced in practice by additional requirements involving the relevant conservation laws). Moreover, the applicable probability is the fine-structure constant, ~1/137 (the square of the coupling constant), which is less than 1%.[5] This tells us that neither emission nor absorption is very likely for any individual quantum. But the more potentially emitting or absorbing quanta that comprise an object, the higher the probability of emission or absorption by that object (see §5 of this paper).
Let’s focus on absorption: it turns out that once you have an object composed of enough potentially absorbing entities that the probability of its generating a confirming response to an offer wave approaches unity, what you have is a macroscopic object. For example, a potential absorber is an atomic electron in its ground state. If you took about 100,000 ground state atoms, they would make up roughly the width of a human hair. This many atoms as components of an object would virtually guarantee the generation of a confirmation somewhere in that object, since the probability that none of its 100,000 component ground state electrons responds to an offer with a confirmation is nearly zero. Such an object can be unambiguously identified as an absorber. (But that doesn’t necessarily mean that it will be the one receiving the actualized quantum resulting from the ‘winning’ actualized transaction).
So we see that quantifying the emission/absorption process via relativistic coupling gives an account not only of what constitutes an emitter or absorber, but also of the emergence of the macroscopic objects that are those emitters and absorbers. Thus, in TI we resolve the issue of the 'Heisenberg cut,' we gain a physical account of measurement, and we can read off Von Neumann's theory of measurement and the Born Rule. More details are available in my 2012 book.[6]
References
Cramer, J. G. (1986). "The Transactional Interpretation of Quantum Mechanics." Reviews of Modern Physics 58, 647-688.
Davies, P. C. W. (1970). "A Quantum Theory of Wheeler-Feynman Electrodynamics," Proc. Cam. Phil. Soc. 68, 751.
___________ (1971). "Extension of Wheeler-Feynman Quantum Theory to the Relativistic Domain I. Scattering Processes," J. Phys. A: Gen. Phys. 6, 836.
___________ (1972). "Extension of Wheeler-Feynman Quantum Theory to the Relativistic Domain II. Emission Processes," J. Phys. A: Gen. Phys. 5, 1025-1036.
Feynman, R. P. (1985). QED: The Strange Theory of Light and Matter. Princeton University Press.
Kastner, R. E. (2012a). The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility. Cambridge: Cambridge University Press.
Kastner, R. E. (2012b). "The Possibilist Transactional Interpretation and Relativity," Foundations of Physics 42, 1094-1113.
Kastner, R. E. (2006). "Cramer's Transactional Interpretation and Causal Loop Problems." Synthese 150, 1-14.
Marchildon, L. (2006). "Causal Loops and Collapse in the Transactional Interpretation of Quantum Mechanics," Physics Essays 19, 422.
Maudlin, T. (1996). Quantum Nonlocality and Relativity: Metaphysical Intimations of Modern Physics. (First Edition), Wiley-Blackwell.
Wheeler, J. A. and Feynman, R. P. (1945). "Interaction with the Absorber as the Mechanism of Radiation," Reviews of Modern Physics 17, 157-161.
Wheeler, J. A. and Feynman, R. P. (1949). "Classical Electrodynamics in Terms of Direct Interparticle Action," Reviews of Modern Physics 21, 425-433.
[1] Maudlin first presented his challenge in his (1996).
[2] E.g., Kastner (2006), Kastner (2012a, Chapter 5).
[3] I discuss these points in Chapters 5 and 8 of my book.
[4] E.g., the Bohmian and ‘spontaneous collapse’ interpretations, which modify the basic theory by introducing an ad hoc nonlinear term into the basic Schrödinger evolution.
[5] This is because both emission and absorption are required – neither happens without the other in a direct-action picture.
[6] Also forthcoming from Imperial College Press: my next book on TI, a popular account for the general reader.
## The Transactional Interpretation of Quantum Mechanics
Today I have the privilege of hosting a guest blog post by Ruth E. Kastner (@rekastner, http://transactionalinterpretation.org/ ), who is an expert on the Transactional Interpretation of Quantum Mechanics. She recently published a book on this topic.
### Part 1: Introduction
The Transactional Interpretation of Quantum Mechanics (‘TI’ for short) was proposed by John G. Cramer in the 1980s (see Cramer 1986 for his comprehensive presentation). The distinguishing feature of TI is its ability to ‘read off’ the Born Rule for the probabilities of measurement results from the basic formalism. How does it do this? By including the solutions of the advanced (negative energy/time-reversed) Schrödinger Equation, as well as the retarded solutions of the usual Schrödinger Equation.
TI is based on the Wheeler-Feynman time-symmetric theory of classical electromagnetism (Wheeler and Feynman 1945, 1949). In this theory, a charge emits a time-symmetric field with equal retarded and advanced components, while any absorber responds to that radiation by generating its own time-symmetric field. The responding field is exactly out of phase with the stimulating field. This results in the cancellation of all fields except the one between the emitter and the absorber; that field is built up to full strength, thereby transporting energy from the emitter to the absorber.
The Wheeler-Feynman theory accounted nicely for ‘radiative damping,’ the loss of energy by a radiating charge. This could only be accounted for in the traditional theory (which assumes only a retarded field is radiated) by assuming an ad hoc sourceless ‘free field’. Wheeler and Feynman’s original motivation in developing this ‘direct action’ theory was to eliminate the field as an independently existing entity, in order to solve the problem of self-action in the classical theory, in which a charge interacted with its own field, resulting in infinite energies. They lost interest in it when they found that some degree of self-action was needed to account for relativistic effects (such as the Lamb shift). However, the direct-action picture can still be used, and in fact Davies developed a direct-action picture of quantum electrodynamics in the 1970s (Davies 1970, 1971, 1972).[1]
The transactional interpretation is an application of the direct-action picture to quantum systems. It departs from the Wheeler-Feynman (WF) classical picture in that, instead of being a continuous quantity, energy can only be delivered in discrete packets (quanta). The usual retarded quantum states, such as |Ψ>, are called offer waves (OW), and the advanced states, <Φ|, are called confirmation waves (CW). Since TI deals with quantized energy and momentum, many absorbers may respond with CWs to an emitter’s OW, but (for a one-quantum OW) only one of the responding absorbers can actually receive the energy. This quantized version of the WF picture allows for a very nice resolution of the notorious ‘problem of measurement’: namely, it allows one to ‘read off’ the heretofore mysterious ‘Process 1’ of John Von Neumann’s theory of measurement, which also contains the Born Rule. We’ll now see how this works.
To review, ‘Process 1’ is the transition of a quantum system from a pure state to a mixed state upon measurement, i.e.:

|Ψ><Ψ|  →  Σ_n |c_n|^2 |y_n><y_n|,  with c_n = <y_n|Ψ>

The coefficients |c_n|^2 are the probabilities given by the Born Rule for each of the outcomes y_n.
Von Neumann noted that this transformation is acausal and fundamentally irreversible, yet he was unable to explain it in physical terms, and treated this transition as fundamentally dependent on an observing consciousness. However, if we take into account the advanced responses of absorbers, then for an OW described by |Ψ>, we have the following for a collection of numbered absorbers:
An initial offer wave from emitter E passes through some measuring apparatus that separates it into components <y_n|Ψ> |y_n>, each reaching a different absorber n. Each absorber responds with an advanced (adjoint) confirmation <y_n| <Ψ|y_n>. In TI, these OW/CW encounters are called incipient transactions. They are described in probabilistic terms by the product of the OW and CW, which gives a weighted projection operator: <y_n|Ψ><Ψ|y_n> |y_n><y_n| = |c_n|^2 |y_n><y_n|. If we add all the incipient transactions, we clearly have the density operator representation of ‘Process 1’:

Σ_n |c_n|^2 |y_n><y_n|
Thus, by including the advanced responses of absorbers, we have a physical account of measurement as well as a natural explanation of the Born Rule and Von Neumann’s ‘Process 1’. The response of absorbers is what creates the irreversible act of measurement and breaks the linearity of the basic deterministic propagation of the quantum state. Since the conserved physical quantities can only be delivered to one absorber, there is an indeterministic collapse into one of the outcomes y_k with a probability given by the weight |c_k|^2 of the associated projection operator |y_k><y_k|. This is called an actualized transaction, and it consists in the delivery of energy, etc., to one absorber. I see this as a form of spontaneous symmetry breaking (this is discussed in Chapter 4 of my book).
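As a simple numerical illustration (my own sketch, not part of the original post), one can check that summing the weighted projectors of the incipient transactions reproduces the ‘Process 1’ mixed state for a two-outcome offer wave:

```python
import numpy as np

# A hypothetical two-outcome offer wave |Psi> = a|y0> + b|y1>; amplitudes are arbitrary.
a, b = 0.6, 0.8j
psi = np.array([a, b])                      # normalized: |a|^2 + |b|^2 = 1

rho = np.zeros((2, 2), dtype=complex)
for n in range(2):
    y = np.zeros(2); y[n] = 1.0             # outcome state |y_n>
    c = np.vdot(y, psi)                     # c_n = <y_n|Psi>
    rho += abs(c)**2 * np.outer(y, y)       # incipient transaction |c_n|^2 |y_n><y_n|

print(np.round(rho, 3))                     # diag(0.36, 0.64): the 'Process 1' mixed state
```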
I have been developing the transactional picture in two specific ways: (1) investigating what kind of ontology underlies the offers and confirmations, especially when these are multi-quantum states; and (2) extending TI explicitly into the relativistic domain to give a more detailed account of emission and absorption, both of which are inherently relativistic processes. Both emission and absorption play crucial roles in defining the measurement process. Only the transactional picture, in which absorption contributes advanced states, provides a physical basis for what is otherwise an ad hoc mathematical recipe (Von Neumann’s ‘Process 1’ and the accompanying Born Rule). In Part 2, I will discuss my recent developments of TI. Further information and discussion is available at my website (http://transactionalinterpretation.org/ ).
References
Cramer, J. G. (1986). “The Transactional Interpretation of Quantum Mechanics.” Reviews of Modern Physics 58, 647-688.
Davies, P. C. W. (1970). “A quantum theory of Wheeler-Feynman Electrodynamics,” Proc. Cam. Phil. Soc. 68, 751.
___________ (1971). “Extension of Wheeler-Feynman Quantum Theory to the Relativistic Domain I. Scattering Processes,” J. Phys. A: Gen. Phys. 6, 836.
___________ (1972). “Extension of Wheeler-Feynman Quantum Theory to the Relativistic Domain II. Emission Processes,” J. Phys. A: Gen. Phys. 5, 1025-1036.
Kastner, R. E. (2012). The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility. Cambridge: Cambridge University Press.
Wheeler, J.A. and R. P. Feynman, "Interaction with the Absorber as the Mechanism of Radiation," Reviews of Modern Physics, 17, 157–161 (1945).
Wheeler, J.A. and R. P. Feynman, "Classical Electrodynamics in Terms of Direct Interparticle Action," Reviews of Modern Physics, 21, 425–433 (1949).
[1] Meanwhile, the infinities resulting from self-action are being ‘tamed’ by renormalization techniques.
## Quantum Teleportation
One of the important results in quantum mechanics is the teleportation protocol. There are several misconceptions about teleportation, fueled in part by Star Trek episodes.
For example, you cannot teleport faster than the speed of light. Quantum mechanics exhibits correlations beyond what can be expected from a local causal theory, and an experiment here can obtain data instantaneously correlated with an experiment located on the other side of the Milky Way. To verify the correlation, however, one needs to physically travel from here to there, and this can only be done at speeds below the speed of light. Quantum mechanics is non-local, but it does not permit violation of the special theory of relativity. Measuring something over here does not carry any instantaneous signal across the galaxy (this is usually called no-signaling).
Also, unlike in Star Trek, the teleported object does not vanish. Instead, all its information is extracted, and in the process the object gets destroyed. From the extracted information, the quantum state is reconstructed at the destination using local materials. In a way, it is like breaking up an apple pie to extract the baking instructions, and then using those instructions to re-bake the pie at the destination using local ingredients. If you are teleported, you will die on the teleporting pad, and hopefully your complete atomic state information is transmitted and used to make an exact replica of yourself at the destination. A few mistakes in measurement and reconstruction and you will end up like someone from a Picasso painting, not an appealing prospect.
So what is the big deal then with quantum teleportation? Is there any difference between a fax machine and a quantum teleportation device? In the quantum world there are two basic barriers. First, in quantum mechanics a state cannot be copied (cloned); the no-cloning theorem means that a classical fax machine cannot be implemented in the quantum realm. The second roadblock is that measuring a quantum state destroys (or collapses) what is measured, and you cannot access all of the quantum state's information. So it was a very exciting time when the teleportation protocol was discovered. This protocol can enable future quantum computers to transmit quantum information. But how does it work? We could start with the mathematical description, but it is much more entertaining to repeat a funny story originally told by Charles Bennett, one of the discoverers of teleportation. I could not find a good reference for this story, so I will tell it from memory, attempting to stay as close as possible to the original (I originally blogged about this at FQXi and had a link there to Bennett's story, but the link is now broken).
Suppose there are two brothers, Romulus and Remus, who don't know much about anything. When asked any question they answer randomly, but they both give the same answer:
Teacher: What color is the grass?
Romulus: Pink ma’am.
Another Teacher in another room: What color is the grass?
Remus: Pink sir.
Now, as the story goes, a murder was committed in Boston and the FBI wants to talk with the sole witness. They do not trust the local cops, and since the witness is still in a state of shock, they cannot transport him to Washington DC to be interviewed by FBI experts because they risk tampering with the witness's brittle state of mind. Fortunately, Romulus happens to be in Washington DC and Remus in Boston. So they ask Remus to spend time with the witness, talking about any topics they want: the weather last weekend, the best movie showing right now, the stock market, etc. Remus spends an hour with the witness, and at the end of the hour the witness says he hates Remus because he dislikes every single thing Remus likes. Moreover, the stress of the meeting has completely erased his recollection of the crime. Can the FBI agents in Washington DC have any chance of finding out about the crime? Surprisingly, the answer is yes. They will ask Romulus about the crime (he did not witness anything; he was in Washington DC all the time) and, armed with the information about the outcome of the meeting between Remus and the witness, they reverse every single answer Romulus provides when asked about the crime, and solve the case.
Nice story? Unbelievable?
Let's tell it again, this time using the mathematics of quantum mechanics.
Let’s meet the key people:
| ψ > = α |0> + β |1>  (the witness state, to be teleported from Boston to DC)
| Φ+> = 1/sqrt(2) [|0> |0> + |1> |1>]  (a maximally entangled state: the Romulus-Remus pair)
The total state is:
| Φ+>|ψ> = 1/sqrt(2) (|0> |0> + |1> |1>) (α |0> + β |1>)
Let's introduce the four maximally entangled states (the Bell basis):
| Φ+> = 1/sqrt(2) [|0> |0> + |1> |1>]
| Φ-> = 1/sqrt(2) [|0> |0> - |1> |1>]
| Ψ+> = 1/sqrt(2) [|0> |1> + |1> |0>]
| Ψ-> = 1/sqrt(2) [|0> |1> - |1> |0>]
Then:
|0> |0> = 1/sqrt(2) [|Φ+> + | Φ->]
|0> |1> = 1/sqrt(2) [|Ψ+> + | Ψ->]
|1> |0> = 1/sqrt(2) [|Ψ+> - | Ψ->]
|1> |1> = 1/sqrt(2) [|Φ+> - |Φ->]
and therefore
| Φ+>|ψ> = 1/sqrt(2) (|0> |0> + |1> |1>) (α |0> + β |1>) =
1/sqrt(2) ( α |0>R |0>r |0>w + β |0>R |0>r |1>w + α |1>R |1>r |0>w + β |1>R |1>r |1>w )
where the indices R, r, w represent Romulus, Remus, and the witness. Now we rewrite the Remus-witness pair in the Bell basis above (moving Romulus to the right and the witness to the left) and, after a bit of elementary algebra, we get the expression above written in this form:
½ ( |Φ+>rw (α |0>R + β |1>R) + |Φ->rw (α |0>R - β |1>R) + |Ψ+>rw (β |0>R + α |1>R) + |Ψ->rw (β |0>R - α |1>R) )
This rewrite is the clever idea of teleportation.
The Boston meeting between Remus and the witness corresponds to a measurement for the Remus-witness part which will collapse the state to one of the 4 possible outcomes:
|Φ+>rw
|Φ->rw
|Ψ+>rw
|Ψ->rw
Consequently the state of Romulus in Washington DC is one of the four possibilities:
α |0>R + β |1>R
α |0>R - β |1>R
β |0>R + α |1>R
β |0>R - α |1>R
Then the measurement outcome is sent by classical means to Washington DC. (OK, the real protocol is a bit more involved than the Romulus-Remus story above: there are 4 possible outcomes, so two classical bits must be transmitted.)
All that is left to do is to apply a local transformation (like reversing the answers from Romulus) in Washington DC to transform Romulus's state to:
α |0> + β |1>
thus achieving the teleportation of the unknown original state from one place to the other. In the process we did not copy the state (the original state got destroyed), and the no-cloning theorem was obeyed. Also, we did not extract the coefficients α and β as in a classical faxing process. The memory of the crime got teleported from the witness into the head of Romulus, who, for the purposes of the FBI investigation, has become the witness. This was achieved by what is called quantum steering: measuring one member of an entangled pair remotely changes the state of the other. This is instantaneous and can be done from one end of the galaxy to the other if you like, but in the absence of the right key to unlock the information it is useless. The key still has to travel slower than the speed of light, and relativity is ultimately obeyed.
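For readers who want to see the algebra in action, here is a minimal numerical simulation of the protocol (my own sketch in Python/NumPy; the qubit ordering and variable names are my choices, not part of the original story):

```python
import numpy as np

rng = np.random.default_rng(7)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# An arbitrary unknown witness state a|0> + b|1>.
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a / norm, b / norm
psi_w = a * ket0 + b * ket1

# The four Bell states on two qubits.
bell = {
    "Phi+": (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),
    "Phi-": (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),
    "Psi+": (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    "Psi-": (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),
}

# Total state |Phi+>_{Rr} (x) |psi>_w, with qubit order (Romulus, Remus, witness).
state = np.kron(bell["Phi+"], psi_w).reshape(2, 2, 2)

# Boston: Bell measurement on the (Remus, witness) pair.
probs, post = {}, {}
for name, B in bell.items():
    v = np.einsum('rw,Rrw->R', B.conj().reshape(2, 2), state)  # Romulus amplitudes
    p = np.vdot(v, v).real
    probs[name], post[name] = p, v / np.sqrt(p)

names = list(bell)
outcome = rng.choice(names, p=[probs[n] for n in names])       # each outcome has prob 1/4

# Washington DC: apply the Pauli correction dictated by the two classical bits.
X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
fix = {"Phi+": np.eye(2), "Phi-": Z, "Psi+": X, "Psi-": X @ Z}
recovered = fix[outcome] @ post[outcome]

# Romulus now holds a|0> + b|1> up to a global phase (overlap magnitude 1).
print(outcome, abs(np.vdot(recovered, psi_w)))                 # prints the outcome and 1.0
```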
## The Kochen-Specker Theorem
I will now take a break from the prior series and discuss a few fundamental results in quantum mechanics that were not given proper attention on this blog in the past. Today I am discussing the Kochen-Specker theorem, which rivals Bell's theorem in importance.
When you read about this result the first time, it looks a bit dry and abstract, but in fact it is child's play, because it is nothing more than a coloring game. The original proof was quite intricate, but later the late Asher Peres found a great simplification, and I will discuss that instead.
Before starting, we need one preliminary result: for particles of total spin 1, we can measure the square of the component of spin along any direction and get +1 or 0. So far nothing special, but quantum mechanics shows that if we perform such measurements along three orthogonal directions (say x, y, z) we will get two results of +1 and one result of 0. We do not know which result will occur along which direction, but we will always get one zero result and two +1 results in some order. I'll not prove this, but I want to look at its meaning instead.
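Though the proof is omitted here, the operator identity behind this "1, 1, 0" rule is easy to check numerically. The following is my own sketch using the standard spin-1 matrices (in units where ħ = 1):

```python
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0])

Sx2, Sy2, Sz2 = Sx @ Sx, Sy @ Sy, Sz @ Sz

# Each squared component has eigenvalues 0 and 1, so a measurement gives 0 or +1...
print(np.round(np.linalg.eigvalsh(Sx2), 6))          # [0. 1. 1.]
# ...the squared components along orthogonal axes commute (so all three can be measured)...
print(np.allclose(Sx2 @ Sy2, Sy2 @ Sx2),
      np.allclose(Sx2 @ Sz2, Sz2 @ Sx2))             # True True
# ...and they sum to s(s+1) I = 2I, forcing the outcomes (1, 1, 0) in some order.
print(np.allclose(Sx2 + Sy2 + Sz2, 2 * np.eye(3)))   # True
```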
We know that in quantum mechanics the results of experiments do not exist before measurement, but can we create a model that recovers the 1, 1, 0 prediction for spin-one particles? This will work for three orthogonal directions, but what happens if we add additional directions? If particles do have definite properties before measurement, it should be possible in principle to pre-assign the values of +1 and 0 to all our measurement directions in such a way that the 1, 1, 0 theorem is obeyed. The Kochen-Specker theorem shows that this is impossible.
(I am adapting this from a famous paper about Free Will: http://arxiv.org/pdf/quant-ph/0604079v1.pdf )
So here is the explanation: start with a cube, inscribe a circle on each face, and add a point at each of the 4 places the circle touches the sides of the face (e.g. points V, D, U, etc.). Then add a point in the middle of each face (points X, Y, Z) and connect it with the vertices of the face. Add a point where each such line intersects the circle (C, B, C', B') and join these into a smaller square. Draw perpendiculars from the center of the face to the sides of the small square and add 4 more points (e.g. D, D'). Finally, connect the center of the cube with all these points to obtain 33 directions: 13*3 - 4*3/2 = 33 [13 points per face * 3 faces, but each of the 4 directions through a circle's tangency points coincides with one from another face, so those are counted twice].
Now we can start the coloring game and prove the Kochen-Specker theorem. We will color the 1’s as red, and the 0’s as blue and see if this can be done in general or not.
Step 1: X, Y, Z are 3 orthogonal directions; we can pick X as the zero without loss of generality (so Y = Z = 1).
Step 2: X ⊥ A implies A = 1, because the directions from the center of the cube to X and to A are orthogonal, and among any orthogonal directions there can be only one zero, which is at X now. In turn, A' = 1 because X, A, A' form 3 orthogonal directions.
Step 3: A, B, C form 3 orthogonal directions (this shows the cleverness of Peres' choice of directions; try to prove using simple geometry that A, B, C are mutually orthogonal). Without loss of generality we can pick B = 1, C = 0,
and, by the same symmetry, from the triple A', B', C': B' = 1 and C' = 0.
Step 4: C = 0 and C ⊥ D imply D = 1 (two orthogonal directions cannot both be zero). Similarly, C' ⊥ D' implies D' = 1.
Step 5: Z, D, E form 3 orthogonal directions with Z = 1 and D = 1, so E = 0. Likewise Z, D', E' implies E' = 0.
Step 6: E ⊥ F and E ⊥ G, with E = 0, imply F = G = 1. Similarly, E' ⊥ F' and E' ⊥ G' imply F' = G' = 1.
Step 7: F, F’, U implies U = 0
Step 8: G, G’, V implies V = 0
Now for the contradiction: U is orthogonal to V, and you cannot have both of them equal to zero.
So what does this mean? It shows that we cannot have a context-independent assignment of measurement outcomes before measurement. If we want to pre-assign measurement properties, this can only be done within the context of the measurement setting. The Kochen-Specker theorem thus weakens the idea of an objective reality independent of measurement.
# Volume of Intersection of Two Cylinders
Question (April 8th, 2017): Given the cylinders z^2 + x^2 = 1 and z^2 + y^2 = 1, set up and evaluate the triple integral that gives the volume of their intersection. I cannot figure out what the bounds on my integrals should be, and I'm not even sure which coordinate system to use to solve this. Can anybody help?
Answer (April 9th, 2017): If you look at this carefully you'll find that cross sections parallel to the xy plane have identical limits $x:(-\sqrt{1-z^2},\sqrt{1-z^2})$ and $y:(-\sqrt{1-z^2},\sqrt{1-z^2})$; thus the volume of a cross section with infinitesimal thickness is $dV = (2\sqrt{1-z^2})(2\sqrt{1-z^2})~dz = 4(1-z^2)~dz$ and $V = \displaystyle \int_{-1}^1 4(1-z^2)~dz = \dfrac{16}{3}$. If you need to cast this as a triple integral, just write $V = \displaystyle \int_{-1}^1 \int_{-\sqrt{1-z^2}}^{\sqrt{1-z^2}}\int_{-\sqrt{1-z^2}}^{\sqrt{1-z^2}} ~dx~dy~dz$.
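A quick Monte Carlo sanity check of the answer (my own sketch, not from the thread):

```python
import numpy as np

# Estimate the intersection volume by sampling the cube [-1, 1]^3 (volume 8).
rng = np.random.default_rng(42)
n = 2_000_000
x, y, z = rng.uniform(-1, 1, size=(3, n))
inside = (z**2 + x**2 <= 1) & (z**2 + y**2 <= 1)   # inside both cylinders
print(8 * inside.mean())                           # ~5.333, i.e. 16/3
```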
## Time and Distance Problems
Problem 1: A man covers a distance of 600 m in 2 min 30 sec. What will be his speed in km/hr?
### Solution:
Speed = Distance / Time
⇒ Distance covered = 600 m, Time taken = 2 min 30 sec = 150 sec
Therefore, Speed = 600 / 150 = 4 m/sec
⇒ 4 m/sec = 4 × (18/5) km/hr = 14.4 km/hr.
Problem 2: A boy travels from his home to school at 25 km/hr and comes back at 4 km/hr. If the whole journey took 5 hours 48 min, find the distance between home and school.
### Solution:
In this question, the distance is the same for both speeds.
⇒ Average speed = 2xy/(x + y) km/hr, where x and y are the two speeds
⇒ Average speed = (2 × 25 × 4)/(25 + 4) = 200/29 km/hr
Time = 5 hours 48 min = 29/5 hours
Now, Distance travelled = Average speed × Time
⇒ Distance travelled = (200/29) × (29/5) = 40 km
Therefore the distance of the school from home = 40/2 = 20 km.
Problem 3: Two men start from opposite ends A and B of a linear track and meet at a point 60 m from A. If AB = 100 m, what will be the ratio of their speeds?
### Solution:
In this question, time is constant. Therefore, speed is directly proportional to distance:
Speed ∝ Distance
⇒ Ratio of distances covered by the two men = 60 : 40 = 3 : 2
⇒ Therefore, the ratio of their speeds = 3 : 2
Problem 4: A car travels along the four sides of a square at speeds of 200, 400, 600 and 800 km/hr. Find its average speed.
### Solution:
Let x km be the side of the square and y km/hr the average speed.
Using the basic formula, Time = Total Distance / Average Speed:
x/200 + x/400 + x/600 + x/800 = 4x/y ⇒ 25x/2400 = 4x/y ⇒ y = 384
⇒ Average speed = 384 km/hr
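These answers are easy to verify mechanically; here is a small check of my own using exact rational arithmetic:

```python
from fractions import Fraction as F

# Problem 1: 600 m in 150 s, converted to m/s and km/h.
v = F(600, 150)
print(v, v * F(18, 5))                             # 4 m/s, 72/5 = 14.4 km/h

# Problem 2: harmonic-mean average speed, then total and one-way distance.
avg = 2 * 25 * 4 / F(29)
d = avg * F(29, 5)
print(avg, d, d / 2)                               # 200/29 km/h, 40 km, 20 km

# Problem 4: total travel time over four sides of length x equals 4x / y.
y = 4 / (F(1, 200) + F(1, 400) + F(1, 600) + F(1, 800))
print(y)                                           # 384 km/h
```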
# If $$cosec~\theta =\frac{29}{21}$$ where 0 < θ < 90°, then what is the value of 4 sec θ + 4 tan θ?
1. 5
2. 10
3. 15
4. 20
5. None of these
Correct answer: Option 2 (10)
## Detailed Solution
Concept:
$$\sec \theta =\frac{1}{\cos \theta }~and~\tan \theta =~\frac{\sin \theta }{\cos \theta }$$
sin²x + cos²x = 1
Calculation:
Given: $$cosec~\theta =\frac{29}{21}$$, so $$\sin \theta =\frac{21}{29}$$
As we know, sin²x + cos²x = 1, hence $$\cos \theta =\sqrt{1-\left(\tfrac{21}{29}\right)^2}=\frac{20}{29}$$ (positive, since 0 < θ < 90°)
$$\Rightarrow 4\sec \theta +4\tan \theta =4\times \frac{29}{20}+4\times \frac{21}{20}=\frac{200}{20}=10$$
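A short numerical check of the result (my own sketch):

```python
import math

# cosec(theta) = 29/21  =>  sin(theta) = 21/29, with theta in the first quadrant.
theta = math.asin(21 / 29)
value = 4 / math.cos(theta) + 4 * math.tan(theta)
print(round(value, 10))   # 10.0
```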
# Project Euler
## Power digit sum
### Problem 16
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
In addition to recursion, the fundamental divide & conquer strategy can be used here to minimize the number of multiplications. Note that $2^{1000} = 2^{500} \cdot 2^{500}$; splitting the exponent each time and multiplying the result by itself cuts the number of multiplications roughly in half at every step (repeated squaring). However, the answer is long (more than 300 digits), so without a special big-integer library an array of integers (or chars) is a must, which makes the multiplication of two long values at each stage quite cumbersome. Therefore, the easier solution seems to be to iteratively multiply the interim result by 2 a thousand times.
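In a language with native big integers such as Python, either approach collapses to a couple of lines (my own sketch; Python's built-in exponentiation already uses repeated squaring internally):

```python
def pow2_digit_sum(exp: int) -> int:
    # Compute 2**exp exactly with Python's arbitrary-precision integers,
    # then sum the decimal digits of the result.
    return sum(int(d) for d in str(2 ** exp))

print(pow2_digit_sum(15))    # 26, matching the worked example above
print(pow2_digit_sum(1000))  # the Project Euler answer
```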
# How to find constant term in binomial theorem
Example 1: [the worked solution, originally shown as images, did not survive extraction].
Example 2: Find the last two digits of a power of 3. A multiple of 10 ends with 0, so by subtracting from a multiple of 10 we obtain a value whose last digits are easy to control.
Examples 3-5: [lost in extraction; Example 4 observed that the coefficients of the two middle terms are equal].
Binomial Expansions and Pascal’s Triangle
Constant term: 15 − 5r = 0 ⇒ 15 = 5r ⇒ r = 15/5 = 3; the constant term is then the r = 3 term of the expansion, whose value involves the factor (−1/3)³ = −1/27 [the remaining digits were lost in extraction].

Example 2: Find the last two digits of a power of 3.
Solution: Write the power of 3 as a power of 9 = 10 − 1 and expand with the binomial theorem; every term except the last two is a multiple of 100 [the specific exponents were lost in extraction].

If you want to do it using strictly the binomial theorem, you can do it as follows:
(x + 1/x + 1)^7 = ((x + 1/x) + 1)^7 = 1 + C(7,1)(x + 1/x) + C(7,2)(x + 1/x)^2 + ... + C(7,7)(x + 1/x)^7
Now in the expansion of (x + 1/x)^n, the general term is T_{r+1} = C(n,r) x^{2r−n}, so a constant term, C(n, n/2), occurs exactly when 2 | n.

How to: given a binomial, write a specific term without fully expanding. Determine the value of n according to the exponent, determine r + 1, determine r, and replace r in the formula for the (r+1)th term of the binomial expansion.
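For the (x + 1/x + 1)^7 example, a short computation (my own sketch using sympy) confirms the constant term both directly and via the binomial argument above:

```python
from sympy import symbols, expand, Poly, binomial

x = symbols('x')

# (x + 1/x + 1)^7 = (x^2 + x + 1)^7 / x^7, so its constant term equals
# the coefficient of x^7 in the ordinary polynomial (x^2 + x + 1)^7.
p = Poly(expand((x**2 + x + 1)**7), x)
direct = p.coeff_monomial(x**7)

# The same number from the binomial-theorem argument:
# sum over even k of C(7,k) * C(k, k/2).
via_binomial = sum(binomial(7, k) * binomial(k, k // 2) for k in range(0, 8, 2))

print(direct, via_binomial)   # both print 393
```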
The same question appeared on Mathematics Stack Exchange: "This was a problem from homework assigned to me. I do not know how to find the constant term using the binomial theorem."
# $3.8M NSF grant begins a new era of early universe research

In the Large Hadron Collider (LHC), an underground 27-kilometer tube beneath the Swiss and French countryside, Cornell physicists smash matter into its component parts to learn about elementary particles and their interactions. A $3.8 million grant from the National Science Foundation will support the team for three more years of research.
“We’re the ultimate reductionists,” said Peter Wittich, professor of physics in the College of Arts and Sciences (A&S) and director of the Laboratory of Elementary Particle Physics. “What we do at LHC is to pull everything apart into the most fundamental pieces.”
Cornell researchers contributed to the 2012 discovery at LHC of the Higgs Boson particle, the last piece of the Standard Model of physics. The data collection run that included this landmark discovery has concluded. On July 5, the next four-year run at LHC’s Compact Muon Solenoid (CMS) detector began, Wittich said, with a new goal: to find evidence of physics beyond the Standard Model.
“The Higgs Boson completes the Standard Model; it describes all the particle interactions we know. But it’s also incomplete,” said Wittich, one of five co-principal investigators receiving the grant. The Standard Model holds, for example, that there are equal amounts of matter and anti-matter; however, no one knows where the anti-matter is in the universe. In another puzzle, cosmological observations confirm the existence of something called dark matter, which is not captured in the Standard Model.
A major goal of the next data-taking period after the upgrade is to find new forces or new particles that will be able to resolve these inconsistencies, Wittich said.
For example, future LHC collisions could help Ritchie Patterson, the Helen T. Edwards Professor of Physics (A&S) and director of the Center for Bright Beams at Cornell, search for one particular type of new particle.
Patterson is interested in a subset of new, unfamiliar particles predicted by models developed to complete the Standard Model. If they exist, these proposed particles would travel a short distance from the collision point before disintegrating into a clump of other particles – ones that are already part of the Standard Model and therefore familiar, she said.
“Our LHC detector wouldn’t necessarily see the new particles directly, but it can detect the clump of particles that they produce,” Patterson said. “Such a clump would be unambiguous evidence for a new particle of this type, and therefore of phenomena beyond the Standard Model, and would be huge.”
Patterson is working with a postdoctoral researcher, two doctoral students and two summer undergraduate researchers. One graduate student is at LHC’s host lab CERN in Switzerland, helping to start the new data run, while the rest of the team works from Ithaca. She said that if new phenomena occur, she and her team have a great chance of finding them.
In all, eight doctoral students, four postdoctoral researchers and several undergraduate students receive support from the NSF grant. In addition to Patterson and Wittich, co-principal investigators at LHC are Cornell physics professors Jim Alexander, Julia Thom-Levy and Anders Ryd (A&S).
Ryd is also leading a coinciding upgrade project funded by a separate NSF facilities grant awarded in 2020. The $153 million grant, to Cornell and Columbia University, is for the construction of high luminosity upgrades to two particle detectors at LHC. Cornell is responsible for upgrades that will enable CMS to operate at higher intensity, including upgraded proton beams, more sophisticated detectors and new trigger systems for smarter data collection.
Most of the upgraded equipment is being prepared in Ithaca and will be shipped to Geneva for installation beginning in 2026. Data collection with the upgraded detectors is expected to begin in 2029 and continue for 10 years. This will generate enormous amounts of data, Wittich said, more than 10 times the total collected by LHC through 2025.
“We’re simultaneously collecting data and, to get ready for the future, also building an upgrade to the experiment to improve it,” Wittich said.
Read the story in the Cornell Chronicle.
# divide and conquer algorithm quicksort
## 10 Jan divide and conquer algorithm quicksort
Quicksort (sometimes called partition-exchange sort) is an efficient sorting algorithm. It is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot; the sub-arrays are then sorted recursively. The values equal to the pivot are already in place, so only the less-than and greater-than partitions need to be recursively sorted. [6] On average, the algorithm makes O(n log n) comparisons to sort n items, and choosing the pivot uniformly at random from the n positions yields O(n log n) expected time on any input. [17] In the worst case it makes O(n²) comparisons; the simple Lomuto partition scheme, attributed to Nico Lomuto and popularized by Bentley in his book Programming Pearls [14] and by Cormen et al., exhibits exactly this quadratic behavior on already-sorted input. The algorithm was developed by Tony Hoare in 1959 and published in 1961. He had written the partition part in Mercury Autocode but had trouble dealing with the list of unsorted segments; later, Hoare learned about ALGOL and its ability to do recursion, which enabled him to publish the code in Communications of the Association for Computing Machinery, the premier computer science journal of the time. [2][5] Mathematical analysis of quicksort shows that, on average, the algorithm is not much worse than an ideal comparison sort, which is one reason for quicksort's practical dominance over other sorting algorithms.
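For concreteness, here is a minimal in-place quicksort in Python using the Lomuto-style partition described above (an illustrative sketch only, not the tuned dual-pivot or introsort variants used by real libraries):

def quicksort(a, lo=0, hi=None):
    """Sort the list a in place; Lomuto partition with the last element as pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]
    i = lo                        # boundary of the "less than pivot" region
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]     # move the pivot into its final position
    quicksort(a, lo, i - 1)       # recurse on the low partition
    quicksort(a, i + 1, hi)       # and on the high partition

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
print(data)                       # [1, 2, 5, 5, 6, 9]

Recursing into the smaller partition first (and looping on the larger) would cap the stack depth at O(log n), as noted above.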
http://tex.stackexchange.com/questions/70210/fixed-distance-and-no-page-break-between-lines
# fixed distance and no page break between lines
I'm trying to make a quotation environment which puts the quote on a gray background, with a colored rule right on top of it:
\documentclass{memoir}
\usepackage{color}
\usepackage[papersize={100mm,100mm},noheadfoot,margin=25mm]{geometry}
\pagestyle{empty}
\definecolor{shadecolor}{gray}{0.7}
\newdimen\X\X=22pt\advance\X3\baselineskip
\newenvironment{Quote}%
{\parindent0pt\parskip1\baselineskip%
\par%
\textcolor{red}{\rule{\hsize}{6pt}}\\[-\X]%
\nopagebreak%
\begin{quote}\noindent\leftmargin0pt\rightmargin0pt\begin{qshade}%
}%
{\end{qshade}\end{quote}}
\begin{document}
\begin{Quote}A small quote\end{Quote}
\begin{Quote}A small quote\end{Quote}
%\begin{Quote}A small quote\end{Quote}
\end{document}
This works fine, but when I uncomment the third Quote, a page break occurs between the rule and the quotation, even though there is a \nopagebreak. Also, this introduces extra white space between the rule and the shaded box of the other two Quotes. How can I prevent this?
Put margin=20mm – Harish Kumar Sep 5 '12 at 13:41
## 1 Answer
I propose you employ mdframed for this:
\documentclass{memoir}
\usepackage{color}
\usepackage[papersize={100mm,100mm},noheadfoot,margin=25mm]{geometry}
\pagestyle{empty}
\usepackage{mdframed}
\definecolor{shadecolor}{gray}{0.7}
\newenvironment{Quote}
{\par\parindent0pt\parskip1\baselineskip
\begin{mdframed}[linecolor=red,
linewidth=6pt,
backgroundcolor=shadecolor,
bottomline=false,leftline=false,rightline=false]
}
{\end{mdframed}}
\begin{document}
\begin{Quote}A small quote\end{Quote}
\end{document}
This won't have any break between the red rule on top and the quotation.
wonderful, especially because I can set all inner- and outer margins. Thanks! – Wybo Dekker Sep 5 '12 at 20:14
http://mathhelpforum.com/differential-geometry/131301-using-composition-continuity.html
|
# Thread: Using Composition With Continuity
1. ## Using Composition With Continuity
Let $\displaystyle g$ be defined on $\displaystyle \mathbb{R}$ by $\displaystyle g(1) := 0$, and $\displaystyle g(x) := 2$ if $\displaystyle x \neq 1$, and let $\displaystyle f(x) := x +1$ for all $\displaystyle x \in \mathbb{R}$. How would you show that $\displaystyle \lim_{x\to0}g \circ f \neq (g \circ f)(0)$ ?
2. Originally Posted by CrazyCat87
Let $\displaystyle g$ be defined on $\displaystyle \mathbb{R}$ by $\displaystyle g(1) := 0$, and $\displaystyle g(x) := 2$ if $\displaystyle x \neq 1$, and let $\displaystyle f(x) := x +1$ for all $\displaystyle x \in \mathbb{R}$. How would you show that $\displaystyle \lim_{x\to0}g \circ f \neq (g \circ f)(0)$ ?
$\displaystyle g(f(0))=g(0+1)=g(1)=0$ but $\displaystyle g(f(x))=g(x+1)=2,\text{ }x\ne 0$...
3. Originally Posted by Drexel28
$\displaystyle g(f(0))=g(0+1)=g(1)=0$ but $\displaystyle g(f(x))=g(x+1)=2,\text{ }x\ne 0$...
So to show they're not equal, should I prove that the limit of $\displaystyle \lim_{x\to0}g \circ f =2$ ?
4. Originally Posted by CrazyCat87
So to show they're not equal, should I prove that the limit of $\displaystyle \lim_{x\to0}g \circ f =2$ ?
Is there a need to prove it?
5. Originally Posted by Drexel28
Is there a need to prove it?
Yea, I'm trying to prove that $\displaystyle \lim_{x \to 0}g \circ f = 2$ so that I can show that the two are different, and I'm a bit stuck...
so $\displaystyle \lim_{x \to 0}g \circ f = \lim_{x \to 0}g(x+1)$
I wanna show $\displaystyle |g(x+1)-0|=|g(x+1)|<\epsilon$ ...
6. Originally Posted by CrazyCat87
Yea, I'm trying to prove that $\displaystyle \lim_{x \to 0}g \circ f = 2$ so that I can show that the two are different, and I'm a bit stuck...
so $\displaystyle \lim_{x \to 0}g \circ f = \lim_{x \to 0}g(x+1)$
I wanna show $\displaystyle |g(x+1)-0|=|g(x+1)|<\epsilon$ ...
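To close the loop on the ε–δ step above (an editorial completion using only facts already in the thread; note the target value of the limit is 2, not 0): for every $\displaystyle x \neq 0$ we have $\displaystyle f(x) = x + 1 \neq 1$, hence $\displaystyle g(f(x)) = 2$. So for any $\displaystyle \epsilon > 0$, every choice of $\displaystyle \delta > 0$ works, since $\displaystyle 0 < |x| < \delta$ implies $\displaystyle |g(f(x)) - 2| = 0 < \epsilon$. Therefore $\displaystyle \lim_{x \to 0}(g \circ f)(x) = 2 \neq 0 = (g \circ f)(0)$.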
https://www.shaalaa.com/question-bank-solutions/a-metre-long-narrow-bore-held-horizontally-and-closed-one-end-contains-76-cm-long-mercury-thread-which-traps-15-cm-column-air-what-happens-if-tube-held-vertically-open-end-bottom-specific-heat-capacities-gases_10379
|
# A Metre Long Narrow Bore Held Horizontally (And Closed at One End) Contains a 76 Cm Long Mercury Thread, Which Traps a 15 Cm Column of Air. What Happens If the Tube is Held Vertically with the Open End at the Bottom - Physics
A metre long narrow bore held horizontally (and closed at one end) contains a 76 cm long mercury thread, which traps a 15 cm column of air. What happens if the tube is held vertically with the open end at the bottom?
#### Solution 1
When the tube is held horizontally, the mercury thread of length 76 cm traps a 15 cm column of air, leaving 9 cm of the tube free at the open end. The pressure of the air enclosed in the tube is atmospheric pressure. Let the area of cross-section of the tube be 1 cm².
∴ P1 = 76 cm and V1 = 15 cm³
When the tube is held vertically, the 15 cm air column gains the 9 cm of air that occupied the open end in the horizontal position, and h cm of mercury flows out to balance the atmospheric pressure. The heights of the air column and mercury column are then (24 + h) cm and (76 − h) cm respectively.
The pressure of the air = 76 − (76 − h) = h cm of mercury
∴ V2 = (24 + h) cm³ and P2 = h cm
If we assume that the temperature remains constant, then
P1V1 = P2V2, i.e. 76 × 15 = h × (24 + h), i.e. h² + 24h − 1140 = 0
h = (−24 ± √(24² + 4 × 1140)) / 2 = 23.8 cm or −47.8 cm
Since h cannot be negative (more mercury cannot flow into the tube), h = 23.8 cm.
Thus, in the vertical position of the tube, 23.8 cm of mercury flows out.
#### Solution 2
Length of the narrow bore, L = 1 m = 100 cm
Length of the mercury thread, l = 76 cm
Length of the air column between mercury and the closed end, la = 15 cm
Since the bore is held vertically in air with the open end at the bottom, the air space at the open end has length 100 – (76 + 15) = 9 cm
Hence, the total length of the air column = 15 + 9 = 24 cm
Let h cm of mercury flow out as a result of atmospheric pressure.
∴Length of the air column in the bore = 24 + h cm
And, length of the mercury column = 76 – h cm
Initial pressure, P1 = 76 cm of mercury
Initial volume, V1 = 15 cm3
Final pressure, P2 = 76 – (76 – h) = h cm of mercury
Final volume, V2 = (24 + h) cm3
Temperature remains constant throughout the process.
P1V1 = P2V2
76 × 15 = h (24 + h)
h2 + 24h – 1140 = 0
∴ h = (−24 ± √(24² + 4 × 1 × 1140)) / (2 × 1)
= 23.8 cm or −47.8 cm
Height cannot be negative. Hence, 23.8 cm of mercury will flow out from the bore and 52.2 cm of mercury will remain in it. The length of the air column will be 24 + 23.8 = 47.8 cm.
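As a quick numerical check of the quadratic (our own sketch in Python; not part of the textbook solution):

import math

# h^2 + 24*h - 1140 = 0, from P1*V1 = P2*V2 with P1 = 76 cm, V1 = 15 cm^3
a, b, c = 1, 24, -1140
disc = math.sqrt(b * b - 4 * a * c)
print((-b + disc) / (2 * a))  # ≈ 23.83, the physical root
print((-b - disc) / (2 * a))  # ≈ -47.83, rejected as negative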
Concept: Specific Heat Capacities - Gases
#### APPEARS IN
NCERT Class 11 Physics
Chapter 13 Kinetic Theory
Q 11 | Page 335
http://math.stackexchange.com/questions/57918/are-e-and-pi-dependent-since-fe-pi-and-f-pi-e/57920
|
# Are $e$ and $\pi$ dependent since $f(e)=\pi$ and $f(\pi)=e$?
It's stated that $\pi$ and $e$ are mathematical constants. But how can they be constants when there is a formula from one to the other, for instance Euler's formula? Since $e^{i \pi}=-1$, is it true that we can express $e$ as a function of $\pi$ and vice versa? If we can express $\pi$ in terms of $e$, then only one of these should be considered a mathematical constant, since one is in fact a function of the other: a dependence, not a linear dependence, but clearly some formula.
There is a formula for the $n$-th digit of $\pi$; should there also be a formula for the $n$-th digit of $e$? Why not?
Did I misunderstand what we mean when we say mathematical constant?
Thank you
Yes, you have misunderstood the meaning of the word constant. – Will Jagy Aug 16 '11 at 21:15
Thank you for letting me know. A similar example is Avogadro's number, which sometimes is called a constant but in fact is a number and not a constant. How can I understand the difference between number and constant? I understand that something is not a constant in trivial cases like "at least one" or "at most one", but maybe clarifying the difference (i.e. why Avogadro's number is not a constant), or good examples of what is and isn't a constant, could help me understand, along with whether the number of constants in mathematics is generally defined? – Nick Rosencrantz Aug 16 '11 at 21:25
A mathematical constant is a number with a standard, accepted definition with a unique, unchanging value - hence the word "constant." In English, constant does not mean independent, it means not changing. – anon Aug 16 '11 at 22:00
## 1 Answer
Mathematical constants are not like physical constants. In physics you try to reduce the amount of constants to a minimum, in mathematics constants are just values of major significance (check also this article). Of course for every two mathematical constants you can find a formula that relates the two constants, that doesn't mean you only need one of them.
Thank you for the info. I think I understand now that it's not like physics or chemistry where you try to agree upon constants and the number of constants. A mathematical constant could be something that just comes from a specific calculation and there could be infinitely many mathematical constants. So understanding the difference between physical constants and mathematical constants helps me clarify that my questions perhaps is more about mathematics and mathematical nomenclature rather than a mathematical question for some real mathematics. I'm glad to get this formalized. – Nick Rosencrantz Aug 16 '11 at 21:29
Good to hear that I could help :-) – Listing Aug 16 '11 at 21:30
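On the side question about digits of $e$ (our addition, not from the thread): since $e=\sum_{k\geq 0} 1/k!$ converges very quickly, arbitrary decimal digits are easy to generate with exact rational arithmetic, for example:

from fractions import Fraction

def e_digits(n):
    """Digits of floor(e * 10**n), via the series e = sum of 1/k!."""
    s, term, k = Fraction(0), Fraction(1), 0
    while term > Fraction(1, 10 ** (n + 5)):  # keep a few guard digits
        s += term
        k += 1
        term /= k
    return str(s.numerator * 10 ** n // s.denominator)

print(e_digits(10))  # 27182818284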
http://mathhelpforum.com/differential-equations/221141-factorizing-4th-degree-polynomial-print.html
|
# factorizing a 4th degree polynomial
• August 11th 2013, 03:26 PM
vermillion
factorizing a 4th degree polynomial
hey guys, this is probably pretty basic stuff for most of you. I'm struggling with it, however.
I don't have any standard procedure for this and I can't figure out how it's done. I just don't know what kind of method I should be searching for.
In my course it was said you are already supposed to know how to do this, so I'm lacking skill on this. Here is the problem:
http://i41.tinypic.com/2wr2m2g.jpg
I underlined the equation where the polynomial is factorized. How is this done? What is the method called, and how can I learn to do it on any generic 4th degree polynomial?
• August 11th 2013, 04:08 PM
chiro
Re: factorizing a 4th degree polynomial
Hey vermillion.
There is a result that allows you to factor general fourth degree polynomial (which you can assume to be true for algebraic problems):
Quartic function - Wikipedia, the free encyclopedia
• August 11th 2013, 04:40 PM
vermillion
Re: factorizing a 4th degree polynomial
Quote:
Originally Posted by chiro
Hey vermillion.
There is a result that allows you to factor general fourth degree polynomial (which you can assume to be true for algebraic problems):
Quartic function - Wikipedia, the free encyclopedia
thanks for your response, but I'm afraid this article doesn't really help me here. In the article there is a different last term, an e. I bet there is an easier method out there as well; I hope someone can give another method.
Also, in this case the a is always 1. I bet there is some kind of fancy polynomial trick for this.
• August 11th 2013, 04:46 PM
vermillion
Re: factorizing a 4th degree polynomial
also, chiro, if you think that wikipedia article helps, can you maybe show me how these 2 examples are factorized? I can't figure it out.
• August 12th 2013, 01:31 PM
ChessTal
Re: factorizing a 4th degree polynomial
Quote:
Originally Posted by vermillion
also, chiro, if you think that wikipedia article helps, can you maybe show me how these 2 examples are factorized? I can't figure it out.
The ${x^3} + 4{x^2} + 6x + 4$ is factorized by finding the roots of ${x^3} + 4{x^2} + 6x + 4 = 0$
If it has any integer root, it must be ±1 or ±2 or ±4 (actually we can skip the positive candidates, since for positive x all terms of the equation are positive, so they can't give zero).
Here we see that -2 works and is a root.
Then dividing ${x^3} + 4{x^2} + 6x + 4$ by x+2 we get ${x^2} + 2{x} + 2$, which has roots -1-i and -1+i.
Ergo its factorization is (x+2)(x+1+i)(x+1-i)
You can use the same method with the other equation also.
It's not a general method, i.e. it will not work in most cases. For those you have to use the link chiro gave.
• August 12th 2013, 01:41 PM
ChessTal
Re: factorizing a 4th degree polynomial
Quote:
Originally Posted by vermillion
thanks for your response, but I'm afraid this article doesn't really help me here. In the article there is a different last term, an e. I bet there is an easier method out there as well; I hope someone can give another method.
Also, in this case the a is always 1. I bet there is some kind of fancy polynomial trick for this.
Well there is a "fancy" method for a=1:
The factorization is $(x - {r_1})(x - {r_2})(x - {r_3})(x - {r_4})$ where ${r_1},{r_2},{r_3},{r_4}$ can be found from here: quartic formula | planetmath.org
• August 12th 2013, 04:27 PM
vermillion
Re: factorizing a 4th degree polynomial
very fancy indeed :D OK, thanks guys. However, I found out I was overcomplicating things, as apparently in the exams there won't be any cases where you have to solve 4th degree polynomials; in the worst case they are 3rd degree, and I can do that. Also, I found a note in the lectures stating that 4th degree polynomials should be solved numerically, but we won't be using calculators during exams, so I'm good.
• August 12th 2013, 04:36 PM
chiro
Re: factorizing a 4th degree polynomial
Also, you might want to consider the guessing method, where you guess a root and then use long division to go from, say, cubic to quadratic (at which point you use the quadratic formula or further guesses).
So for example if I had say (x-1)(x-2)^2
= (x-1)(x^2 - 4x + 4)
= (x^3 - 4x^2 + 4x - x^2 + 4x - 4)
= x^3 - 5x^2 + 8x - 4
If you tried a test solution of x = 1 you would get zero and thus you could factor (x-1) straight out.
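A quick way to experiment with this guess-and-divide workflow (a sketch using the sympy library, assuming it is available):

import sympy as sp

x = sp.symbols('x')
p = x**3 - 5*x**2 + 8*x - 4

print(p.subs(x, 1))      # 0, so x = 1 is a root and (x - 1) divides p
q, r = sp.div(p, x - 1)  # polynomial long division
print(q, r)              # x**2 - 4*x + 4, remainder 0
print(sp.factor(p))      # (x - 1)*(x - 2)**2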
https://electronics.stackexchange.com/questions/416293/connecting-bandpass-filter-to-comparator-generating-excess-noise
|
# Connecting bandpass filter to comparator generating excess noise
I have a band pass filter with a center frequency of 100kHz, a bandwidth of 10kHz, and a gain close to 1. Input signal is 100mV-150mV peak to peak with frequencies varying from 80kHz to 120kHz. I am using an OPA355 op-amp.
Here is my circuit on Proteus software:
Here is the expected output,with an input voltage of 130mV:
I then use the output of this filter as the input to my comparator. The comparator works perfectly in simulation and when tested alone. I am using LM339:
As soon as I connect the two, the filter output immediately changes due to an added noise component. On close inspection, the amplitude of the filter output decreases and has visible noise spikes.
This messes up VTH, VTL and hysteresis settings of the comparator. Hence the expected output is not attained.
and at 1ms/D:
I noticed that:
1. If I remove the 2.5V from the potential divider from pin 4 of the comparator, the band-pass filter output goes back to it's original correct state.
2. If I remove the wire connecting the output of the filter to pin 5 of the comparator, the bandpass filter output also goes back to normal.
I have tried adding a decoupling capacitor across R1, changing the resistors used for the voltage divider, using different LM339 comparators, and also using different channels of the comparator. None of these methods has worked. I am really stuck here; any suggestions on how to reduce or eliminate this noise would be appreciated.
How is noise being generated on the output of the bandpass filter, and not on the comparator output?
Please do let me know if I need to provide further information. I also have pictures of the (crude) setup on my breadboard if those would help identify a possible culprit. Thanks
• What do you need R55 for? – Linkyyy Jan 10 at 21:33
• R55 is a standard part of the multiple feedback topology. It allows tuning of the resonant frequency. – Rrz0 Jan 10 at 21:39
• It is supposed to be connected to ground. – Linkyyy Jan 10 at 22:01
• @Linkyyy Assuming we are talking about $R_{55}$'s end tied to a voltage source and also keeping the (+) input to the OPA355 at $2.5\:\text{V}$, does it matter? Seems like the main difference would be during start-up and the reference the OP uses is probably slightly better in that sense, though I've no idea what that voltage source actually is... so it could be worse from some other perspectives, depending. – jonk Jan 10 at 22:15
• @Rrz0 I'm confused a bit with the scope pictures. Am I correctly reading milliseconds per division on the first such image? If so, doesn't it seem odd that it is about 3.3 milliseconds per cycle if you are using 100 kHz? Your bandpass looks right for the frequency. But your scope picture makes less sense to me given the timebase. – jonk Jan 10 at 22:21
The impedance (and hence the noise) of the 2.5V potential divider feeding pin 4 of the comparator is too high for an 82 Ω load (R55). Use a better source: Z(f) of about 0.1 Ohm at 100kHz and < x ohms otherwise.
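For a sense of scale (our back-of-envelope check, not from the answer): reaching roughly 0.1 Ω at 100 kHz with a capacitor across the divider output requires C = 1/(2·π·f·Z):

import math

f = 100e3   # Hz, signal frequency
z = 0.1     # ohm, target source impedance at f
print(1 / (2 * math.pi * f * z))  # ≈ 1.6e-05 F, i.e. about 16 µF on the 2.5 V node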
## Rev A
Consider scale Filter up 10x to 100x in impedance and decouple 2.5V with a big cap.
There is not need for low R value to go to 2.5V since it is AC coupled. This makes this node a noise source from 2.5V with high gain. ~R8/R55 ?? 50dB gain
For better bandstop rejection I might consider a quad OA with an 8th order filter. Here testing with 25mVp signal.
• 2.5V going to R55 is not coming from the R1 and R2 voltage divider biasing the comparator, but from a separate divider. I'm not sure what you mean by "Use a better source Z(f) with 0.1 Ohm impedance @100kHz and < x ohms otherwise.". How can I implement this in my design? – Rrz0 Jan 10 at 21:31
• What is Zf(2.5V) ? What do you need ? < 1% of R55 What is C for this Zf? – Sunnyskyguy EE75 Jan 10 at 21:47
• Looks like power supply or ground noise to me, like the scope probe ground is the only ground connections between the circuits. also, neither schematic shows any power supply decoupling, which is critical. Also, can you post an integrated schematic that shows all power sources and connections, grounds, and reference connections for both circuits? – AnalogKid Jan 10 at 23:23
• I agree with >50dB gain thru the ground or supply to R8. So THAT gnd is critical. But if R divider picks up noise, my fix should improve it. – Sunnyskyguy EE75 Jan 10 at 23:38
https://math.stackexchange.com/questions/2674934/can-the-product-of-three-complex-numbers-ever-be-real/2675436
|
# Can the product of three complex numbers ever be real?
Say I have three numbers, $$a,b,c\in\mathbb C$$. I know that if $$a$$ were complex, for $$abc$$ to be real, $$bc=\overline a$$. Is it possible for $$b,c$$ to both be complex, or is it only possible for one to be, the other being a scalar?
• Only the phase of the complex numbers matters here; so you're just looking for three real numbers which are not multiples of $\pi$ but which sum to a multiple of $\pi$. I think you can probably manage that. – Patrick Stevens Mar 3 '18 at 15:06
• I know that if $a$ were complex, for $abc$ to be real, $bc=a^*$ -- uh, no. That's false. Just take $b=0$, for instance. – Federico Poloni Mar 3 '18 at 16:31
• $(1+0i)(1+0i)(1+0i)$ – imallett Mar 3 '18 at 19:18
• I think the more interesting question would be three distinct and non-real complex numbers – Mitch Mar 3 '18 at 19:29
• @Mitch It is not hard. Say, $(-i)(1+2i)(2+i)$ is real. As long as the "phases" or arguments of the numbers add up to something parallel to the real axis, it will work. It is in no way difficult to make the factors distinct. – Jeppe Stig Nielsen Mar 3 '18 at 19:57
For example $z^3=1$, where $z\neq1.$
Id est, $$a=b=c=-\frac{1}{2}+\frac{\sqrt3}{2}i.$$
• This is the right answer, but it doesn't explain why such a z has to exist, and the rectangular form doesn't really clear that up either. Pointing out that this value is simply a third of a circle in polar coordinates might be more useful. – Draconis Mar 3 '18 at 19:01
• Don’t you find it pretentious to say “id est” when i.e. was adopted into English as an abbreviation for “that is”? – gen-z ready to perish Mar 4 '18 at 0:15
• @ChaseRyanTaylor "Id est" is simply the non-abbreviated form of i.e. Of course, I didn't even know that just now, but his usage made me look it up and now I know. I don't see a problem with using i.e., so I don't see a problem with using "Id est". – Nelson Mar 4 '18 at 0:51
• @Nelson So if I don't see a problem with using e.g., it would follow that using exempli gratia can not sound a bit pretentious? – oerkelens Mar 4 '18 at 14:22
• I think more importantly (by which I mean more pedantically) eg would be more correct than ie anyway. There are two solutions for the equation and since only one is given we are only giving an example of what is meant by the first line, not the full solution. "ie" is just giving a different form of the exact same information, eg is giving one example of a set of items as is being done here. Also I would tend to prefer "ie" because that is such a commonly used abbreviation that most people probably don't know that "id est" is the same as ie. – Chris Mar 5 '18 at 9:28
I'm not sure I understood your question, but I suppose that the equality$$i\times(1+i)\times(1+i)=-2$$answers it.
If you represent a complex number using polar coordinates (an angle and a distance from zero), it is known that multiplying numbers in this trigonometric form is way easier than in the algebraic form - you simply multiply the distances and add the angles:
$$z_1=r_1(\cos(ϕ_1)+i\sin(ϕ_1))$$ $$z_2=r_2(\cos(ϕ_2)+i\sin(ϕ_2))$$ $$z_1z_2=r_1r_2(\cos(ϕ_1+ϕ_2)+i\sin(ϕ_1+ϕ_2))$$
Once you are accustomed to this, the rest is simple. If the angle $ϕ$ of the product is parallel with the x axis (0 or 180°, $\sin ϕ=0$), the number is real, and so your only task is to find three angles that add up to 0 (mod 180°). There are infinitely many such choices.
• This is really the best explanation. – gnasher729 Mar 5 '18 at 0:00
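A quick numerical illustration of this angle-sum rule (our sketch using Python's cmath; all three factors below are non-real):

import cmath

# arguments pi/6 + 2*pi/3 + 7*pi/6 = 2*pi, so the product should be real
zs = [cmath.exp(1j * cmath.pi / 6),
      cmath.exp(2j * cmath.pi / 3),
      cmath.exp(7j * cmath.pi / 6)]
print(zs[0] * zs[1] * zs[2])  # (1+0j) up to floating-point rounding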
Another approach: suppose $a, b$ are complex and not real and $ab$ isn't real. Then let $c=\overline{ab}$.
Note that in a precise sense this is universal: if $abc$ is real (and each is nonzero), then $c$ is a real multiple of $\overline{ab}$.
• I posted very similar, which I've deleted - just to note that introducing a scalar factor into $c$ makes it possible to choose the product as any desired real number (the terms of the question implicitly exclude zero). – Mark Bennet Mar 3 '18 at 15:11
I think the easiest example to come up with is $e^{2i\pi/3}$,
$$e^{2i\pi/3}\cdot e^{2i\pi/3}\cdot e^{2i\pi/3} = e^{2i\pi}=1.$$
Polar coordinates.
$a=e^{i\alpha}$, $b= e^{i\beta}$, $c=e^{-i(\alpha+\beta)}$.
Then $abc = e^0=1.$
Your problem is presumably that you are approaching the problem as:
• Think of non-real values for $a$, $b$, and $c$. Hope that $abc$ is real.
A much easier way to deal with the problem is
• Think of non-real values for $a$, $b$, and think of a real value for $abc$. Hope that $c$ is non-real.
(Note: I assume the form of the problem above is what the OP actually intends to ask)
It is definitely, entirely possible. $$a=i\qquad b=c=1+i\qquad abc=-2$$ This example demonstrates that $bc$ doesn't even have to be $a^*$, merely that the sum of their arguments is a multiple of $\pi$.
Let the numbers be of the form $r_k \exp(i t_k)$. Then the product is $r_1 r_2 r_3 \exp (i(t_1+t_2+t_3))$. The imaginary part of this is $r_1 r_2 r_3 \sin(t_1+t_2+t_3)$, so any set of $t$'s for which this sine is zero will do the trick. For example, $t_1+t_2+t_3 = 0$. You can generalize this proof to any number of complex numbers, not just three.
• Here's a MathJax tutorial. – g.kov Mar 4 '18 at 4:00
• Brilliant! Also: $t_1+t_2+t_3=k\pi$, $k\in\mathbb Z$. – MattAllegro Mar 4 '18 at 9:55
If you represent complex numbers as vectors on the complex plane, then multiplication of two complex numbers produces a result whose angle is the sum of the two multiplicand angles.
It's trivial to note that for one vector with any arbitrary angle, multiplication by another vector with the negative of that angle will produce a result on the real line.
Algebraic approach:
Let the three complex numbers by $z_1=a+bi,z_2=c+di,z_3=e+fi$. Then \begin{align}(a+bi)(c+di)(e+fi)&=((ac-bd)+(ad+bc)i)(e+fi)\\&=[e(ac-bd)-f(ad+bc)]+[e(ad+bc)+f(ac-bd)]i\end{align} so for the product to be real we have $$e(ad+bc)+f(ac-bd)=0\implies ac-bd=-\frac{e(ad+bc)}f$$ giving $$(a+bi)(c+di)(e+fi)=-\frac{e^2(ad+bc)}f-f(ad+bc)=-\frac{ad+bc}f(e^2+f^2)$$ So any three complex numbers $z_1,z_2,z_3$ satisfying $$\Re(z_3)(\Re(z_1)\Im(z_2)+\Im(z_1)\Re(z_2))+\Im(z_3)(\Re(z_1)\Re(z_2)-\Im(z_1)\Im(z_2))=0$$ will do. Hence there are infinitely many trios whose product is real.
• If there's at least one trio, it should be pretty clear there are infinitely many trios. You can multiply any of those 3 numbers by any real, the product will stay real. – Eric Duminil Mar 4 '18 at 14:49
Break down the argument into two parts:
1. Find two complex numbers which when multiplied give you a complex number
2. Find a third complex number which when multiplied with result from (1) gives a real.
It will be easy to digest.
There are lots of what I would call trivial or near trivial answers, those that include only real numbers or include duplicates.
The intention of the problem seems to me to be that what is sought are numbers which are distinct and have non-zero imaginary part.
For motivation, note that multiplication by a complex number on the unit circle amounts to rotating by (adding) its angle.
For the first part, distinct, any full set of roots of unity will work. For 3,
$$e^{0} \cdot e^{i 2\pi/3} \cdot e^{i 4\pi/3} = e^{i(0+2+4))\pi/3} = e^{i 6\pi/3} = e^{i 2\pi} = 1$$
Or in explicit complex notation,
$$1 \cdot (-1 + i \sqrt{3})/2 \cdot (-1 - i \sqrt{3})/2 = 1 + i 0$$
(this works for all angles plus $2\pi k$).
Again, because multiplication by a complex (on the unit circle) is like rotating or adding by an angle, we can take any distinct triple (like the one above), and adjust a bit so that none are on the real line. Rotating the first by $\pi/6$ and the last back by the same, we get:
$$e^{i \pi/6} \cdot e^{i 2\pi/3} \cdot e^{i 7\pi/6} = e^{i(1+4+7)\pi/6} = e^{i 12\pi/6} = e^{i 2\pi} = 1$$
or
$$(\sqrt{3} + i)/2 \cdot (- 1 + i\sqrt{3})/2 \cdot (- \sqrt{3} - i)/2 = 1$$
This is only one example. The entire space of solutions is, for any number of multiplicands on the unit circle, the set of angle tuples that sum to a multiple of $\pi$ (a multiple of $2\pi$ if the product is to equal $1$).
It's exceedingly easy to generate any number of complex numbers that satisfy this property. By starting with an arbitrary complex number and generating a new one by swapping the magnitudes of the real and imaginary parts, we can generate pairs of complex numbers that when multiplied together will always produce an imaginary number, either positive or negative. $$(x+iy)(y+ix)=xy+x^2i+y^2i+xyi^2=(x^2+y^2)i$$ $$(x+iy)(-y-ix)=-xy-x^2-y^2i+xyi^2=(-x^2-y^2)i$$
We can find two such pairs: $$(1+i)(-1-i)=-1-i-i-i^2=-2i$$ $$(1+2i)(2+i)=2+i+4i+2i^2=5i$$
When multiplied together, they will give $-2i\times5i=10$. This result will always be real because both numbers are imaginary with no real part. Now take one complex number from each of the original pairs and multiply them together: $$(1+i)(1+2i)=1+2i+i+2i^2=-1+3i$$
Using this result with the remaining numbers gives: $$(-1+3i)(-1-i)(2+i)$$ $$=(1+i-3i-3i^2)(2+i)$$ $$=(4-2i)(2+i)$$ $$=8+4i-4i-2i^2$$ $$=10$$
I have now demonstrated that three seemingly unrelated complex numbers: $-1+3i$; $-1-i$; & $2+i$, all with non-zero real and imaginary parts (i.e. none of them real or purely imaginary), and all with integer real and imaginary parts, can be multiplied together to produce a real number. You can repeat this process with any starting numbers you feel like to generate more triples that satisfy this condition.
First, $(x + iy) \times (x - iy) = x^2 - i^2 y^2 = x^2 + y^2$ is real.
So you can take any complex numbers $a, b$ where the product $ab$ is not real, write $ab = (x + iy)$, and choose $c = (x - iy)$, or choose $c = t(x - iy)$ for any real $t ≠ 0$, and the product $abc$ is real.
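The same recipe in a few lines of Python (our sketch):

a, b = complex(2, 3), complex(-1, 4)   # any a, b whose product is not real
c = (a * b).conjugate()                # the t = 1 case
print(a * b * c)                       # (221+0j): |ab|^2, purely real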
https://ajft.org/hardware/wyvern.html
|
## hardware — wyvern @ Adrian Tritschler
This was my original PC-clone computer after I managed to put off getting on the IBM/x86 PC hardware treadmill for as long as possible. I ignored them all, from the 8088 and 8086, through the 286s and 386s, up to the mighty 80486. I finally succumbed, and wyvern's first incarnation was a P100 — yes, a 100MHz Pentium. At some point in the late 1990s this was replaced by a PII system of the same name. At the end of 2000 I travelled overseas for work and the box was then "upgraded" by my employer so I could run videoconferencing software to stay in touch with my family. Unfortunately the upgrade – they deemed a PIII necessary, so a new motherboard and CPU were installed – was carried out in a completely cowboy fashion and the box was never reliable or quite the same again.
Software wise it has nearly always run linux, although there was a brief diversion into OS/2 Warp due to a licensing deal through work. My first slackware install involved 14 floppy disks painstakingly downloaded at work and – from memory – a 0.99pre4 kernel and some sort of slackware distribution. Over the years it has evolved through Redhat, Debian to Ubuntu, often without a full clean installation, so who knows what traces of which software are left lurking in dark corners.
# Uptime Statistics
# Uptime | System Boot up
----------------------------+-------------------------------------------------
1 253 days, 19:18:54 | Linux 2.4.29 Wed Mar 16 08:30:34 2005
2 123 days, 06:17:47 | Linux 2.4.20 Mon Jun 2 18:19:15 2003
3 119 days, 10:14:52 | Linux 2.4.25 Fri Apr 23 19:34:18 2004
4 110 days, 19:51:16 | Linux 2.4.20 Wed Dec 4 20:53:55 2002
5 54 days, 13:00:40 | Linux 2.4.26 Thu Oct 7 18:30:46 2004
6 52 days, 00:01:53 | Linux 2.4.18 Tue Sep 3 17:53:52 2002
7 49 days, 00:58:04 | Linux 2.4.28 Wed Dec 1 07:37:50 2004
8 34 days, 21:38:02 | Linux 2.4.24 Tue Feb 24 18:38:53 2004
9 32 days, 03:53:20 | Linux 2.4.20 Fri Apr 4 18:47:57 2003
10 29 days, 00:11:50 | Linux 2.4.28 Tue Feb 15 07:13:15 2005
----------------------------+-------------------------------------------------
-> 16 14 days, 15:51:42 | Linux 2.4.29 Fri Nov 25 04:08:19 2005
----------------------------+-------------------------------------------------
1up in 0 days, 18:44:26 | at Sat Dec 10 14:44:26 2005
no1 in 239 days, 03:27:13 | at Sat Aug 5 22:27:13 2006
mst in 10 days, 08:08:19 | twenty-five days Tue Dec 20 04:08:19 2005
# Disk Partitions
disk size CHS partition type
hda 10.8G 255x63x1313
hda1 plan9
hda2 w95
hda5 swap
hdb 6498M 255x63x790
hdb1 linux
hdc 20G 255x63x2481
hdc1 w95
hdd DVD
My website, an agglomerative mess, probably half-eaten by a grue
# …The Owner
There’s not much more I can add to who I am.
# …The Site
Vanity site? Technology experiment? Learning tool? Blog? Journal? Diary? Photo album? I could tell you, but then I’d have to kill you…
I experiment. I play. I write and I take pictures. Some of the site is organised around topics, other parts are organized by date, then there’s always the cross-references between them.
It's all been here a fairly long time. Like the papers on my desk, or the books on the bedside table, the pile just grew… and it all grew without much plan or structure. I try not to break URLs, so historical oddities abound.
Long ago it started as a learning experiment with a few static HTML pages, then I added a bit of server-side includes and some very ugly PHP. A hand-built journal/blog on top of that PHP, then a few experiments in moving to various static publishing systems. I’ve never wanted a database-based blogging engine, so over the years I’ve tried PHP, nanoblogger, emacs-muse, silkpage and docbook before settling on Emacs Org mode for writing and jekyll for publishing. But the itch remained… I never really liked jekyll and the ruby underneath always seemed so much black magic. So now the latest incarnation is Org mode and hugo.
# …The ISP
• Hosted by @cos
https://codereview.stackexchange.com/questions/40652/function-launching-default-editor
# Function launching default editor
I have written a function that launches the default editor set in git config, right now I have managed to get it working for Sublime, nano and Vim.
def launchEditor(editor):
    """ this function launches the default editor
    for user to compose message to be sent
    along with git diff.
    Args:
        editor(str): name or path of editor
    Returns:
        msg(str): html formatted message
    """
    filePath = os.path.join(os.getcwd(), "compose.txt")
    wfh = open(filePath, 'w')
    wfh.close()
    if os.path.exists(filePath):
        # using sublime
        if re.search(r'ubl', editor):
            diff = subprocess.Popen(['cat', filePath], stdout=subprocess.PIPE)
            pr = subprocess.Popen(
                editor,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                shell=True,
                stdin=diff.stdout
            )
            pr.wait()
            if pr.returncode == 0:
                with open(filePath, 'r') as fh:
                    msg = fh.readlines()
                os.remove(filePath)
        else:
            # using vim or nano
            pr = subprocess.Popen([editor, filePath], stdin=open('/dev/tty', 'r'))
            pr.wait()
            if pr.returncode == 0:
                with open(filePath, 'r') as fh:
                    msg = fh.readlines()
                os.remove(filePath)
    return "".join(msg).replace("\n", "<br>")
Any suggestions on improvements, or on adding support for other text editors, are welcome!
Revision update:
Traceback (most recent call last):
File "/Users/san/Development/executables//git-ipush", line 268, in <module>
sys.exit(main())
File "/Users/san/Development/executables//git-ipush", line 46, in main
preCheck(args)
File "/Users/sanjeevkumar/Development/executables//git-ipush", line 156, in preCheck
message = launchEditor(editor)
File "/Users/san/Development/executables//git-ipush", line 77, in launchEditor
if subprocess.call([editor, f.name]) != 0:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
• Isn't specifying core editor in git config good enough? Just trying to think a use-case or purpose of this. Thanks. – andHapp Feb 2 '14 at 11:52
• the purpose is different: this function launches the default editor set in git config so the user can compose an email message! – Ciasto piekarz Feb 2 '14 at 12:10
1. The function is poorly specified in that it combines two tasks: (i) it gets input from the user via an editor; (ii) it replaces newlines in the input with <br>. But what if I just want the input and don't want any HTML conversion (especially not a half-baked conversion like this one)?
It would be better to decompose this function into two pieces. In what follows I'm just going to discuss piece (i), launching the editor and capturing the input.
2. There are missing imports (os and subprocess).
3. The function always uses the file compose.txt in the current directory. This means that if I already had a file with that name, it would be erased and then deleted, which would be really annoying.
It would be better to create a temporary file for this purpose, using Python's tempfile.NamedTemporaryFile.
4. This code:
diff = subprocess.Popen(['cat',filePath], stdout=subprocess.PIPE)
pr = subprocess.Popen(..., stdin=diff.stdout)
is a classic "useless use of cat". If you read the documentation for subprocess.Popen, you'll see:
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
(my emphasis) so you can write:
pr = subprocess.Popen(..., stdin=open(filePath))
and save a process. (But actually this is unnecessary, as explained below.)
5. This code:
if re.search(r'ubl', editor):
doesn't seem like a very robust way to ensure that editor is Sublime Text. I mean, for all you know I could have just run:
$ ln /usr/bin/vi ubl
It's best to treat all the editors the same. After all, git commit doesn't have any special case for Sublime Text, so neither should you.
In particular, you don't need to go through all that subprocess.Popen shenanigans to open a file in Sublime. I find that
subprocess.call(['subl', filename])
works fine.
6. If all you are going to do with a subprocess is wait for it to exit:
pr = subprocess.Popen([editor, filePath], stdin=open('/dev/tty', 'r'))
pr.wait()
if pr.returncode == 0:
then use subprocess.call instead of subprocess.Popen:
if subprocess.call([editor, filePath]) == 0:
7. Specifying stdin=open('/dev/tty', 'r') is unnecessary. Let the editor decide how it wants to get input from the user.
8. If the editor returned with a non-zero code, your function doesn't report an error, it just continues running (but with nothing assigned to msg) until it reaches "".join(msg) which fails with a mysterious TypeError. Better to raise an exception if the editor returned an error code.
(I submitted a bug report for the mysterious TypeError: see Python issue 20507.)
9. Your function splits the input into lines by calling msg = fh.readlines(), and then it joins these lines back together again with "".join(msg). This is pointless: if you just want the contents of the file as a string, write msg = fh.read() instead.
### 2. Revised code
import subprocess
import tempfile

def input_via_editor(editor):
    """Launch editor on an empty temporary file, wait for it to exit, and
    if it exited successfully, return the contents of the file.
    """
    with tempfile.NamedTemporaryFile() as f:
        f.close()
        try:
            subprocess.check_call([editor, f.name])
        except subprocess.CalledProcessError as e:
            raise IOError("{} exited with code {}.".format(editor, e.returncode))
        with open(f.name) as g:
            return g.read()
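A hypothetical usage sketch (not part of the original answer): fetch the editor the user configured in git (the standard key is core.editor) and pass it in. The function name input_via_editor comes from the revised code above.

import subprocess

# Hypothetical usage: read the configured editor from git, then collect input.
editor = subprocess.check_output(['git', 'config', 'core.editor']).decode().strip()
message = input_via_editor(editor)
print(message)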
• Did you read the error message? It says "No such file or directory", so it looks as if the value of editor is wrong. – Gareth Rees Feb 4 '14 at 17:10
• no, it is not, which is why I printed the path of the editor and used os.path.exists, which evaluates to True. – Ciasto piekarz Feb 4 '14 at 17:19
• tried with your code again: and this time i got this error File "/Users/sanjeevkumar/Development/executables//git-ipush", line 79, in launchEditor with open(f.name) as g: IOError: [Errno 2] No such file or directory: '/var/folders/43/m1qv9zf53q19sqh6h9kg9pz80000gn/T/tmp2R8gs4' – Ciasto piekarz Feb 4 '14 at 17:29
https://www.physicsforums.com/threads/spin-direction-for-massless-particles.766161/
# Spin Direction for Massless Particles
1. Aug 15, 2014
### referframe
Mass-less particles travel at the speed of light. If such a particle has an intrinsic spin, the DIRECTION of the spin angular momentum vector must be in the same direction as the linear motion of the particle. Why exactly is that? Thanks in advance.
2. Aug 16, 2014
### Simon Bridge
3. Aug 16, 2014
### tom.stoer
4. Aug 16, 2014
### haael
Massive spin-1 particles have 3 distinct nonzero eigenvalues and 3 respective eigenstates. These eigenstates correspond to 3 spin states in the x, y, z spatial directions. On rotations they transform into each other as vectors. That's exactly why they are called vector particles. More formally, each of these states corresponds to an l=1 spherical harmonic.
When the mass of a spin-1 particle reaches zero, one of its eigenvalues also goes to zero. So there are only 2 nonzero eigenvalue-eigenstate pairs. These 2 eigenstates correspond to 2 polarization states. They transform into each other when rotated 90-degrees over the axis of motion.
As for spin-2 particles: when they are massive, they have 5 distinct nonzero eigenvalues and 5 eigenstates. The eigenstates may be arranged into a symmetric rank-2 tensor; that's why they are called tensor particles. The spin of such a particle can be described by an ellipsoid. Formally, they correspond to l=2 spherical harmonics.
When the mass of a spin-2 particle reaches zero (as in the case of the graviton), one of the eigenvalues goes to zero and the other 4 approach each other pairwise. In the end there are only 2 distinct nonzero eigenvalues. One eigenvalue is 0 and the other 2 are degenerate (in particular, they are doubly degenerate). The 2 eigenstates transform into each other when rotated 45 degrees over the axis of motion.
Now forgive me, but I never really understood spinors and I don't know how their eigenstates behave when going to 0 mass.
5. Aug 16, 2014
### tom.stoer
I don't agree with the formulation "when the mass of a particle reaches zero ..."; as far as I understood group theory this cannot be understood as a smooth limit.
Last edited: Aug 16, 2014
6. Aug 16, 2014
### referframe
By "axis of motion", are you referring to the SPIN axis/direction or the LINEAR motion (or the opposite direction) of the particle?
7. Aug 16, 2014
### Staff: Mentor
No, this is not correct in general. There is a special state of a massless particle for which it is correct (under a suitable interpretation): the state called $\vert R \rangle$ in the thread Simon Bridge linked to, a state of right-handed circular polarization. This state is an eigenstate of the spin operator along the direction of the particle's linear motion, with eigenvalue $\hbar$; so if the particle is in this state, measuring its spin along the direction of its linear motion will *always* give the value $\hbar$ (i.e., no chance of the opposite sign). Whereas, if you measure the spin along any other direction, you will have some chance of getting $- \hbar$ as the result. So the state $\vert R \rangle$ can be thought of as a state whose spin angular momentum does point in the same direction as the particle's linear motion.
There is also a state called $\vert L \rangle$ (left-handed circular polarization), for which measuring the spin along the direction of the particle's linear motion always gives the result $- \hbar$; this state can be thought of as a state whose spin angular momentum points in the opposite direction from the particle's linear motion. But a general state of the particle will be a complex linear combination of $\vert R \rangle$ and $\vert L \rangle$, and for such a state, the "direction of the spin angular momentum" (to the extent that phrase has meaning) will *not* point along the direction of the particle's linear motion (either in the same or in the opposite direction).
Last edited: Aug 16, 2014
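As a sketch of that last point: for a normalized superposition of the two circular polarization states, the helicity measurement statistics follow directly (notation as in the post above):

$$|\psi\rangle = \alpha |R\rangle + \beta |L\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

$$P(+\hbar) = |\alpha|^2, \qquad P(-\hbar) = |\beta|^2, \qquad \langle \hat S_{\hat p} \rangle = \hbar \left( |\alpha|^2 - |\beta|^2 \right),$$

so only when $\beta = 0$ (or $\alpha = 0$) does the spin point along (or against) the direction of motion with certainty.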
8. Aug 16, 2014
### samalkhaiat
If you know group theory, then look at
https://www.physicsforums.com/showpost.php?p=2223048&postcount=2
Sam
9. Aug 16, 2014
### samalkhaiat
Who told you that? What is the connection between that particular "eigenvalue" and the "mass"?
https://www.dpmms.cam.ac.uk/person/sm137
# Department of Pure Mathematics and Mathematical Statistics
Affiliated Lecturer
Research Interests: Type A representation theory, algebraic combinatorics, algebraic groups, diagram algebras and representations of finite-dimensional algebras.
## Publications
Integral Schur-Weyl duality for partition algebras
C Bowman, S Doty, S Martin
(2020)
An integral second fundamental theorem of invariant theory for partition algebras
C Bowman, S Doty, S Martin
– Representation Theory of the American Mathematical Society
(2020)
26,
437
On the $p'$-subgraph of the Young graph
E Giannelli, S Law, S Martin
– Algebras and Representation Theory
(2018)
22,
627
Quiver and Relations for the Principal p-Block of $\Sigma_{2p}$
K Erdmann, S Martin
– Journal of the London Mathematical Society
(2016)
49,
442
Decomposition of Tensor Products of Modular Irreducible Representations for $SL_3$: the $p \geq 5$ case
C Bowman, SR Doty, S Martin
– arXiv
(2015)
17,
105
A reciprocity result for projective indecomposable modules of cellular algebras and BGG algebras
C Bowman, S Martin
– arXiv
(2012)
22,
1065
Ext spaces for general linear and symmetric groups
S MARTIN
– Proceedings of the Royal Society of Edinburgh Section A Mathematics
(2011)
119,
301
Decomposition of tensor products of modular irreducible representations for $SL_3$ (with an appendix by C.M. Ringel)
S Martin, C Bowman, SR Doty
– International Electronic Journal of Algebra
(2011)
9,
177
Hook modules for general linear groups
S Doty, S Martin
– Archiv der Mathematik
(2009)
92,
206
C2.05
01223 764271
https://classes.engineering.wustl.edu/2020/spring/cse131/modules/3/studio/
# Studio : Sieve of Eratosthenes
Studio activities should not be started before class! Come to the session and work on the activity with other students!
# Studio Setup and Procedure
• Form a group of 2-3 students and find a TA or instructor to work with.
• All but one member of your group should have this web page open so you can follow along and see the instructions as you work.
• Plan to work on one computer (using Eclipse).
• Initially, one of you will be in charge of typing at that computer.
• Throughout the studio you should trade who is in charge of the keyboard.
READ THE FOLLOWING FULLY BEFORE PROCEEDING
1. Have one person in your group create a new team by clicking here and going to the OR Create a new team box at the bottom of the page. The team name should include the last names of all your group members. For example, if Xia and Smith are working together, the team name should be something like “XiaSmith”.
2. After the team is created, all other members of your team should click on the same link and follow the instructions to join the team.
1. Be careful to join the right team! You won’t be able to change teams yourself and will have to contact instructors if there’s a problem.
2. Be sure everyone else joins the team! If grades don’t get entered correctly we will use the team to help verify credit for a particular studio.
3. Finally, one person should import the studio repository into Eclipse, as described in Assignment 0’s Add the assignment to Eclipse
• All team members will have access to the work pushed to GitHub. Be sure to Commit and Push at the end of the day so everyone can refer back to the work later as needed.
# Background
It may have been a few years since you worked with these concepts, so some quick review:
• A composite number is a positive integer that can be expressed as the product of two smaller whole numbers, each greater than 1. For example, 36 is the product of 6 and 6, so 36 is a composite number.
• A prime number is a whole number greater than 1 that can’t be formed as the product of other whole numbers. 29 is an example. It can only be expressed as the product of 1 and 29.
• Prime numbers may not be that important to you, but they are a significant concept in many branches of mathematics and critical to many modern technologies, like encryption.
• Today’s main goal is practice using arrays in an interesting way.
# Sieve of Eratosthenes
In this studio you will make a program that performs the sieve of Eratosthenes.
• Much like a sieve that’s used to separate or sift out unwanted materials, the sieve of Eratosthenes starts with all the positive integers and then separates the composite numbers out, leaving only the prime numbers.
• You should understand the process being done before trying to represent it with a computer program. Start by:
1. Review the description of the sieve of Eratosthenes.
2. Work through the process of the sieve to find all the primes up to 40. Work through it on paper. It’s really important that you have a reasonable understanding of tasks before you try to make a computer do them. Do not skip this step!
3. When done, review your work. Confirm that all the values you found are primes and that all the composites have been removed.
4. Reflect on the process — discuss each step and how it relates to concepts you’ve seen in class. Check your work with both a TA and other groups.
# Making a program Sieve for You
1. Add a new Sieve class to the studio-03/src folder.
2. Prompt the user for n. You’ll need to find all prime numbers up to n.
• You can decide if you want to include n itself or not, but decide now!
3. Create code that will represent the items being sieved (i.e., an array). There are many valid approaches. Some things to consider:
• What will be in the array? How do the stored values relate to the sieve process?
• How big should the array be?
• How will indices be used? How do they relate to the sieve process?
• How can you incrementally test your work to ensure that what you’re doing is correct/working? (Hint, printing details as your code executes is really helpful)
4. Develop and refine your code until it works.
• Think carefully about whether you are including the n-th value or not. Test that your program works as expected. If it doesn’t, figure out why.
5. Have your program print all the prime values it finds and nothing else.
6. Once you can successfully print primes, try it with large values of n, like 10,000,000. If you’ve implemented everything correctly it should only take a few seconds to find all the primes less than 10,000,000! (One takeaway from today’s studio: you can use a little code to quickly automate tasks! This is much quicker and more accurate than attempting to do this by hand.) A language-neutral sketch of the algorithm follows below.
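The studio asks for Java in Eclipse; purely as a language-neutral reference for the algorithm (an array of flags indexed by the integers themselves), here is a minimal sketch in Python. The function name and the choice to include n itself are illustrative, not part of the assignment.

def sieve(n):
    # is_prime[i] records whether the integer i is still considered prime.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    i = 2
    while i * i <= n:
        if is_prime[i]:
            # Cross off every multiple of i, starting at i*i
            # (smaller multiples were already crossed off by smaller primes).
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
        i += 1
    return [i for i, p in enumerate(is_prime) if p]

print(sieve(40))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]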
# Review and Revise
Pseudocode is a way to describe things with a precise format that is similar to computer programs. Review the pseudocode for the sieve of Eratosthenes and compare it to your version. Not everything done in the pseudocode is straightforward in Java. Nonetheless, if your approach is substantially different, revise it to include some of the approaches described in the pseudocode that seem sensible. Compare/contrast the approaches with your TA.
# Peer Comparisons
Compare your work to that of other groups. Are there things that make one approach easier/harder to understand?
# Demo (get credit for) your work
Commit and Push your work. Be sure that any file you worked on is updated on GitHub.
To get participation credit for your work talk to the TA you’ve been working with and complete the demo/review process. Be prepared to show them the work that you have done and answer their questions about it!
Before leaving check that everyone in your group has a grade recorded in Canvas!
https://www.physicsforums.com/threads/inner-product-in-this-step-of-the-working.733955/
# Inner Product in this step of the working
1. Jan 21, 2014
### unscientific
Hi guys, I'm not sure how to evaluate this inner product at step (3.8)
I know that:
$\hat {H} |\phi> = E |\phi>$
$$<E_n|\frac{\hat H}{\hbar \omega} + \frac{1}{2}|E_n>$$
$$<E_n| \frac{\hat H}{\hbar \omega}|E_n> + <E_n|\frac{1}{2}|E_n>$$
I also know that $<\psi|\hat Q | \psi>$ gives the average value of the observable $\hat Q$. In this case it's not $\psi$ but $E_n$; does the same principle hold?
Last edited: Jan 21, 2014
2. Jan 21, 2014
### George Jones
Staff Emeritus
Use equation (3.4).
3. Jan 21, 2014
### dextercioby
Hi George, I can only conclude that you figured out where he picked his 2 questions from. .
4. Jan 22, 2014
### unscientific
Okay, I got step 3.8. But for step 3.9, I'm having a slight issue here:
$$E_n = <E_n|\hat H|E_n> = \frac{1}{2m}<E_n|(m\omega \hat x)^2 + \hat {p}^2|E_n>$$
$$= \frac{1}{2m}(m^2\omega^2)<E_n|\hat x \hat x|E_n> + \frac{1}{2m}<E_n|\hat p \hat p|E_n>$$
Using $\hat x|\phi> = x|\phi>$ and $\hat p |\phi> = p|\phi>$,
$$= \frac{1}{2m}(m^2\omega^2)<E_n|\hat x x|E_n> + \frac{1}{2m}<E_n|\hat p p|E_n>$$
Removing the last "hats" from $\hat p$ and $\hat x$ and Using orthogonality $<E_n|E_n> = 1$:
$$= \frac{1}{2m}\left ( (m\omega x)^2 + p^2 \right )$$
Is it wrong to assume that the energy eigenkets have norm = 1? That would be strange because later we show that $E_n = (n + \frac{1}{2})\hbar \omega$
And, why is there an additional factor of $\frac{1}{\omega}$ in the expression?
5. Jan 22, 2014
### dextercioby
Energy eigenkets for the harmonic oscillator do have norm =1.
6. Jan 22, 2014
### unscientific
Ok, then I have no idea what's gone wrong with my working..
7. Jan 22, 2014
### dextercioby
The energy eigenket is not an eigenket of either x or p. You need the ladder operators to evaluate the matrix elements.
8. Jan 22, 2014
### unscientific
How did they get 3.9 then?
9. Jan 22, 2014
### dextercioby
$\langle \hat p E_n, \hat p E_n \rangle = \|\hat p |E_n\rangle\|^2 = \langle E_n, \hat p^2 E_n \rangle$, because the eigenkets of energy are in the domain of both $\hat p$ and $\hat p^2$, on which the 2 operators are essentially self-adjoint. $\hat p^2$ is a positive operator, hence the inequality at the end.
The same goes for x.
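Written out, the step being described is (a sketch using the self-adjointness of $\hat p$):

$$\langle E_n|\hat p^2|E_n\rangle = \langle \hat p E_n|\hat p E_n\rangle = \big\|\hat p|E_n\rangle\big\|^2 \ge 0,$$

and likewise for $\hat x$, so

$$E_n = \frac{1}{2m}\left(m^2\omega^2\big\|\hat x|E_n\rangle\big\|^2 + \big\|\hat p|E_n\rangle\big\|^2\right) \ge 0.$$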
10. Jan 22, 2014
### unscientific
I just don't get how $<E_n|\hat x \hat x|E_n> = |x|E_n>|^2$ and $<E_n|\hat p \hat p|E_n> = |p|E_n>|^2$
11. Jan 22, 2014
### ChrisVer
hmm because in the same way you have in the complex numbers that:
$z^{*} z = |z|^{2}$
In fact the ket can be interpreted as a vector on Hilbert space, while the bra is its dual.
So in the case of this, you can write:
$< E_{n}| \hat{x} \hat{x} |E_{n}>= (\hat{x} |E_{n}>)^{t} \hat{x} |E_{n}>$
using that the $\hat x$ operator is self-adjoint. The same goes for $\hat p$... with the "t" I denoted the adjoint (conjugate transpose) operation
12. Jan 22, 2014
### unscientific
That is true, but $\hat x$ and $\hat p$ are operators.. so the x and p in the norm should have hats?
$<E_n|\hat x \hat x|E_n> = |\hat x|E_n>|^2$ and $<E_n|\hat p \hat p|E_n> = |\hat p|E_n>|^2$
13. Jan 22, 2014
### ChrisVer
they do have hats... the person who wrote the things in the image you posted, doesn't use hats so much...
You can get a feeling there must be hats because otherwise there would be no reason to use the eigenvectors in kets... (he'd get 1)
14. Jan 22, 2014
### unscientific
This clears things up a little, thanks!
http://mathhelpforum.com/pre-calculus/49229-types-funtions.html
1. ## types of funtions
i am having trouble filling out the chart of basic functions that includes periodic and one by one
i dont understand the concept
X^2
X^3
Abs(x)
Sin(x)
Cos(x)
Tan(x)
Sec(x)
2^x
Logbase 2 of x
1/x
Sq rt of X
sq rt of a^2- x^2
2. Originally Posted by Rimas
i am having trouble filling out the chart of basic functions that includes periodic and one by one
i dont understand the concept
X^2
X^3
Abs(x)
Sin(x)
Cos(x)
Tan(x)
Sec(x)
2^x
Logbase 2 of x
1/x
Sq rt of X
sq rt of a^2- x^2
i suppose you mean one to one?
intuitively, a function is periodic if it has a repeating pattern, forever. sin(x) is obviously one. its period is $2 \pi$: you have the same pattern on $[0, 2 \pi]$ repeating forever.
slightly more formally, a function $f(x)$ is called periodic if $f(x) = f(x + kT)$ for all $x$ and every integer $k$; we call $T$ the period.
for example, going back to sine. $\sin x = \sin (x + 2k \pi)$, the period is $T = 2 \pi$. any multiple of the period and you get back the same value. so $\sin x = \sin (x + 2 \pi) = \sin (x + 6 \pi) = \sin (x - 12 \pi)$ etc
a function is called one-to-one if $f(x_1) = f(x_2) \implies x_1 = x_2$. or equivalently, $x_1 \ne x_2 \implies f(x_1) \ne f(x_2)$ for $x_1, x_2 \in \text{dom}(f)$
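for example, $f(x) = x^2$ is not one-to-one, since $f(-1) = f(1) = 1$ but $-1 \ne 1$; on the other hand, $f(x) = x^3$ is one-to-one, since $x_1^3 = x_2^3 \implies x_1 = x_2$ for real $x_1, x_2$.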
http://mathhelpforum.com/algebra/73230-furd.html
1. ## Furd
My daughter (year 10) has come home wanting help with SURDs - she needs to simplify 3 x sqrt 6 x sqrt 2.
I have never heard of a FURD and can't find a description - can anyone help please?
2. First of all it seems you want to say Surd
Read this A-level Mathematics/OCR/C1/Indices and Surds - Wikibooks, collection of open-content textbooks
$3\times \sqrt{6} \times \sqrt{2}$
$=3 \times \sqrt{2\times 3} \times \sqrt{2}$
$=6\times \sqrt{3}$
3. Originally Posted by ADARSH
$=3 \times \sqrt{2\times 3} \times \sqrt{2}$
If you're wondering how to get from there to
$=6 \times \sqrt{3}$
note that
$\sqrt{2}\times \sqrt{2} = 2$
Therefore we get:
$=3 \times 2\sqrt{3}$
$=6\sqrt{3}$
4. at least he didn't say turd, oops, I did , sry
https://www.physicsforums.com/threads/some-clarification-needed-about-basic-concepts.292860/
# Some clarification needed about basic concepts
1. Feb 16, 2009
### ximath
Dear All,
I have learned the Uniqueness and Existence theorem in last lecture, however, the instructor told us that the proof is omitted because it is beyond scope of the course.
I am more concerned with the derivation of the Uniqueness theorem now.
I need some clarification here.
* As far as I understood, if f(x,y) satisfies the lipschitz condition, then the uniqueness theorem is valid. If fy is continuous on the rectangle, then there is unique solution to the equation; because continuity of partial y implies lipschitz condition to be valid.
Moreover, in order to find a solution, instead of a diff. eq. we could write an integral equation and define an operator. (I'm not sure I know what an operator is, though.) It turns out that we have an operator such that F(y) = y, so F(y) - y = 0; setting g(y) = F(y) - y, the values of y which make g(y) = 0 are also solutions to the diff eq.
I have learned that this is called a fixed point and could be found by fixed point iteration (picard iteration).
For this, I learned that we could write F(yn) = yn + 1 . This part is totally unclear to me; because I don't see why F(yn) = yn+1. We know that F(yn) = yn and saying F(yn) = yn+1 means we are making an error, aren't we ? I tend to think that this error is so small that we are neglecting it, but why is it so ?
My priority now is to understand the part above.
Moreover, I also wonder how do we know that those fixed points are in fact unique ?
Regarding to uniqueness of fixed points, I don't need the proof in a fully mathematically described in cooperation with other theorems, because I have just started learning differential equations. I have been searching and found some theorems such as Banach's but was unable to understand those. I have some knowledge of calculus; and what I need is a sketch of the proof.
2. Feb 19, 2009
### HallsofIvy
Yes, that is true. Strictly speaking the Lipschitz condition applies to functions of one variable (a function is said to be "Lipschitz" on a set A if and only if there exists a number C such that $|f(x_1)- f(x_0)|\le C|x_1- x_0|$ for any $x_1$ and $x_0$ in that set) and for the differential equation dy/dx= f(x,y) we require "Lipschitz" in the y variable only. It is easy to show that "Lipschitz" on a set lies between "continuous on the set" and "differentiable" on the set: if a function is differentiable on a set then it must be Lipschitz there, and if it is Lipschitz on a set it must be continuous at every point of that set.
Yes, dy/dx= f(x,y), with condition $y(x_0)= y_0$ is equivalent to the integral equation $y(x)= \int_{x_0}^x f(t, y(t))dt+ y_0$ in the sense that any function y(x) that satisfies one must satisfy the other.
An operator is just a "function" that works on functions rather than numbers: it changes one function into another. In the differential equation dy/dx= f(x,y), it is derivative that is the operator and in the integral equation $y= \int_{x_0}^x f(t, y(t))dt+ y_0$ it is the integral that is the operator.
You may be mis-reading this - you are certainly mis-writing it. It is NOT $F(y_n)= y_n+ 1$ in the sense that we add 1 to $y_n$. It is $F(y_n)= y_{n+1}$. In any case, it is NOT correct that "$F(y_n)= y_n$"; we get each y by applying F to the previous y. What is true is that, under certain conditions, if we have an iterative sequence in which we define $y_{n+1}= F(y_n)$, in other words, each term in the sequence is F applied to the previous term, then F has a fixed point: there is a function y, NOT necessarily in the sequence and so NOT $y_n$, such that F(y)= y. It is true that each of the $y_n$ in the sequence is NOT equal to the solution but, hopefully, the sequence converges to it: each term is closer than the previous one.
It might be helpful to look at an example. The differential equation dy/dx= y, with condition y(0)= 1, is equivalent to the integral equation $y= \int_0^x y(t)dt+ 1$ which I got just by "integrating" both sides of the differential equation, and then choosing the constant of integration to fit the condition.
Now define the "iteration" $y_{n+1}= \int_0^x y_n(t)dt+ 1$: each term in the sequence is that integration applied to the previous one. Of course, the first term in the sequence does not have "previous one" so that must be given. If we were lucky enough to choose the solution to the equation as the first term, applying that integral would just give the same thing again so we would have a "constant" sequence. But we don't know the solution. The one thing we really know about y is that y(0)= 1. Okay, the simplest function we could take then, is the constant function y(x)= 1 for all x. Putting that into the integral gives
$$y_1= \int_0^x y_0(t)dt+ 1= \int_0^x 1dt+ 1= x+ 1$$
so now we have $x+ 1$ as $y_1(x)$. Repeating
$$y_2= \int_0^x y_1(t)dt+ 1= \int_0^x (t+ 1)dt+ 1= \frac{1}{2}x^2+ x+ 1$$
so now we have $y_2(x)= \frac{1}{2}x^2+ x+ 1$. Repeating
$$y_3= \int_0^x y_2(t)dt+ 1= \int_0^x\left(\frac{1}{2}t^2+ t+ 1\right)dt+ 1= \frac{1}{6}x^3+ \frac{1}{2}x^2+ x+ 1$$
So now we have $y_3(x)= \frac{1}{6}x^3+ \frac{1}{2}x^2+ x+ 1$.
Now, this is NOT a very good way of actually solving a differential equation - but if you have a good eye, you might notice that what is happening here is that we are getting more and more terms in a power series. Specifically, the power series for $e^x$, and you might guess that $y(x)= e^x$ is the limit of that sequence. It's not difficult to show that it is, and further to show that it is a "fixed point" for the operator: if $y(x)= e^x$, then
$$\int_0^x y(t)dt+ 1= \int_0^x e^t dt+ 1= e^x- e^0+ 1= e^x$$
Further, it is very easy to see that $y= e^x$ satisfies the differential equation, $d(e^x)/dx= e^x= y$, and the condition, $y(0)= e^0= 1$.
Again, $y(x)= e^x$ is NOT any "$y_n$" in the sequence but is the limit of the sequence.
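A small sketch reproducing this iteration with sympy (assuming sympy is available); each pass applies the integral operator to the previous iterate:

import sympy as sp

x, t = sp.symbols('x t')
y = sp.Integer(1)  # y_0(x) = 1, chosen to satisfy y(0) = 1
for n in range(4):
    # y_{n+1}(x) = integral from 0 to x of y_n(t) dt, plus 1
    y = sp.integrate(y.subs(x, t), (t, 0, x)) + 1
    print(sp.expand(y))

The successive prints give x + 1, x**2/2 + x + 1, x**3/6 + x**2/2 + x + 1, and so on: the partial sums of the series for $e^x$.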
Yes, Banach's fixed point theorem says "if S is a subset of a complete metric space and f is a contraction map from S to itself then there exist a unique point, x, in S such that f(x)= x (x is a "fixed point" of f)."
I recently posted a proof of that, with commentary, on this board:
A "contraction" map on a set is one such that there exist a constant c< 1 such that,for any points x and y on the set, $|f(x)- f(y)|\le c|x- y|$ (the absolute value |p- q| represents the distance between p and q. That says that the distance between f(x) and f(y) is less than the distance between x and y: it "contracts" distances.)
By definition of "contraction map", we must have $|f(x)- f(y)|\le c|x- y|$ for some c< 1. But f(x)= x and f(y)= y so that is $|x-y|\le c|x-y|$ or $(1-c)|x-y|\le 0$. c is, by definition, less than 1 so 1- c is positive and |x-y| cannot be negative. For that to be true we must have |x-y|= 0 or x= y.
https://math.stackexchange.com/questions/2188497/the-reverse-of-finding-the-arc-length
The reverse of finding the arc length
As we know, finding the arc length of a function $f(x)$ from $x=a$ to $x=b$ is straightforward and can be implemented numerically.
For a particular function $f(x)$ that I have, suppose that I have numerically computed its arclength to be approximately $L$ using Simpson's rule.
Now, for a set of values $\ell_k$ such that $0 \le \ell_k \le L$, I want to find $x_k\in [a,b]$ for which the arc length of $f(x)$ from $x=a$ to $x = x_k$ is $\ell_k$.
I am not sure what the best way to do this numerically is, but I followed the suggestion here. That is, if $L(x)$ is the arclength function, $L(x) = \int_a^x \sqrt{1+f'(x)^2} dx$, which yields a differential equation $\frac{dx}{dL} = \frac{1}{\sqrt{1+f'(x)^2}}$.
Solving this differential equation numerically is then not much of a problem. I used the initial condition that the $x-$value corresponding to arclength 0 is $x=a$.
Now, inevitably, numerical errors arise. That is, after solving the differential equation numerically, the $x$-value corresponding to length $L$ is not exactly $b$. Oftentimes, it goes slightly beyond $b$. I used Matlab's ode45 to solve the differential equation.
I am wondering if I can impose two conditions on this equation, i.e. $x(0) = a$ and $x(L) = b$. If so, how can I solve the ODE numerically? It seems like bvp methods don't apply.
Any suggestions?
• Just to be precise, you mean the arc length of the graph of $f$ from $a$ to $b.$ – zhw. Mar 15 '17 at 21:40
• Yup, that's what I meant. – Tomas Jorovic Mar 15 '17 at 22:48
• I think I may have found an error in your post: if $L(x) = \int_a^x \sqrt{1+f'(x)^2} \, dx$, wouldn't that mean that $dL/dx = \sqrt{1+f'(x)^2}$, not the reciprocal? – WB-man Mar 15 '17 at 22:52
• Thanks, I fixed it! I meant $dx/dL$, i.e. for a given arc length, I want to find the corresponding $x$-value – Tomas Jorovic Mar 15 '17 at 22:54
The naive approach (which is possibly justified and adequate in this particular case due to how you obtained $L$ in the first place) is this:
In evaluating $L$, you did in fact practically evaluate $\ell(x)$ (such that $L = \ell(b)$). What you are looking for is $x(\ell)$, its inverse. Thus my suggestion would be to either
• store a list of all intermediate $\ell(x)$ and look up the $x$ closest to a given $\ell$, if memory permits; or:
• in computing $L$, check whether the current $\ell(x)>\ell_k$ (where $k$ increases with each match), and store the corresponding $x$ at each $k$. This means you only need to store $k_{\max}$ values, but either you have to know the $\ell_k$ before knowing $L$, or you need to accumulate twice (once for $L$, and then later for the $\ell_k$).
This may appear brute force, but it has two justifications:
• In general, solving a differential equation will require the same amount of steps as computing $L$ (and thereby the $\ell_k$, as given in the second suggestion above).
• The $\ell_k$ will be of the same order of accuracy as $L$. When evaluating $L$ and $\ell_k$ in two entirely different ways, you could have the awkward result of having some $\ell_k > L$. This can't happen if computed as above.
If you feel you need more accurate results, I suggest the following:
Say you find that $\ell(x_i) < \ell_k$ and $\ell(x_{i+1}) > \ell_k$. My first advice is to assume that $\ell(x_i)$ is exact, because if it isn't, everything afterwards including $L$ will be wrong as well. So doubting $\ell(x_i)$ at this particular point isn't a very good model - if you doubt $\ell(x_i)$, rerun with a smaller $x_{i+1} - x_{i}$. So $\ell(x_i)$ and $\ell(x_{i+1})$ are accurate, but you need values in between.
• You can rerun the particular interval $f(x_i)$ to $f(x_{i+1})$ at a finer scale. Do note that by this, you would be taking $L$ and $x(\ell_k)$ from two different approaches, hence the values are not really comparable, and the arc length at the finer scale might differ significantly from $\ell(x_{i+1}) - \ell(x_i)$. However, you would likely not notice in practice.
• Alternatively you could try to obtain the intermediate values directly from the Simpson approximation from which you took all other values. This would mean that all values are consistent, but the inversion of the Simpson parabola doesn't seem straightforward. So you could do it numerically, as proposed above, but instead of refining $f$, you could refine the approximated parabola.
All in all you might want to consider using the trapezoid scheme instead of Simpson (first-degree Newton-Cotes) because it is easier to interpolate (the intermediate $x_k$ would, consistently with all other numbers, lie between $x_i$ and $x_{i+1}$ as $\ell_k$ does between $\ell(x_i)$ and $\ell(x_{i+1})$).
All in all, pending contrary opinions, I believe that this approach is as computationally expensive as solving a whole new ODE, and numerically stable in the sense that it maintains the accuracy you've achieved with computing $L$, and gives you $x$ values that are consistent with $L$.
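A minimal numpy sketch of the lookup approach described above, using the trapezoid rule for both $L$ and the inversion so that all numbers stay mutually consistent (the example function, grid size, and helper name are illustrative assumptions, not the OP's code):

import numpy as np

def x_at_arclength(f_prime, a, b, ells, n=100001):
    # Tabulate the cumulative arc length ell(x) on a grid with the
    # trapezoid rule, then invert it by monotone interpolation.
    x = np.linspace(a, b, n)
    g = np.sqrt(1.0 + f_prime(x) ** 2)
    ell = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
    return ell[-1], np.interp(ells, ell, x)  # total length L, and the x_k

# Example: f(x) = x^2 on [0, 1]; find the x_k at 1/4, 1/2 and 3/4 of L.
L, _ = x_at_arclength(lambda x: 2 * x, 0.0, 1.0, [])
_, xk = x_at_arclength(lambda x: 2 * x, 0.0, 1.0, np.array([0.25, 0.5, 0.75]) * L)
print(L, xk)

Since the integrand is strictly positive, $\ell(x)$ is strictly increasing, so np.interp inverts it unambiguously.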
Note that everything you compute are only approximations: you mentioned that you compute the arc length $L$ via Simpson's rule; similarly you solve the ODE via a numerical method. So the question is not so much about numerical errors, rather about mutual consistency of your numerical methods.
If your function $f$ is piecewise linear (or you replace your function by a piecewise linear approximation), you can derive explicit formulas for the arc length function on each linear interval and hence also explicit formulas for the $x_k$.
I do not believe this will be possible for higher order approximations for $f$, e.g. $B$-splines or arbitrary functions.
• Thanks for the comment. I am aware that I am using 2 different approximations. As such, I was trying to find a way to limit the inconsistency. Unfortunately, my function cannot be easily approximated by a piecewise linear approximation. – Tomas Jorovic Mar 15 '17 at 23:03
• In that case, write down the formula that you use to compute the arclength $L(x)$ of $f$ on the interval $[a,x]$ and then use some numerical method to solve the equation $L(x)=\ell_k$. That should give you consistency. – Martins Bruveris Mar 15 '17 at 23:06
https://www.eneuro.org/highwire/markup/3667674/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 2
Cell type prediction confusion matrix
                    Ground truth
Prediction          Pyramidal cell   Interneuron   Noise
Pyramidal cell      46               5             0
Interneuron         1                31            2
Noise               4                2             9
• Confusion matrix, representing the number of true positives, true negatives, false positives, and false negatives. Ground truth refers to the manually detected interneurons and pyramidal cells. Prediction refers to the type predicted by the classifier for the same cells.
https://worldbuilding.stackexchange.com/questions/70797/what-would-weather-look-like-on-a-world-with-a-9-year-day
# What would weather look like on a world with a 9-year day?
In a world I am building, the planet's days are longer than its years, with a single day taking 9 years. In all other ways, this planet is similar to Earth, but are there other differences I am missing?
Weather has played a massive role in history and science. It creates jungles; it creates deserts. Weather decides the biomes of a landmass above all else, which makes me wonder: on a planet where every day lasts 9 years instead of 24 hours, what does the weather look like?
• This sounds like another planet we know. On Venus, a year is 225 Earth days and a day is 243 Earth days. Feb 10 '17 at 1:52
• You’re not a noob so I’m asking about your punctuation in the title rather than just editing it. ? vs ^ aren’t usual typo error. Feb 10 '17 at 9:59
• @JDługosz my computer is a piece of garbage that for reasons unknown to me decides that I really need the keyboard set to French Canadian. In which I need to shift type 6 to get ?. kbd-intl.narod.ru/images/ca-intl.png Unfortunately, it also thinks that I need some chaos and switches between the two at random. I usually catch it. Feb 10 '17 at 10:30
• The weekly forecast would be... interesting. – PatJ Feb 10 '17 at 13:37
• @TrEs-2b just reinstall Linux. Excuse to try a different distro. Feb 11 '17 at 1:24
@Alexander is correct. Let me elaborate.
## Diurnal hemisphere (day side)
• Extremely hot
• Little precipitation or clouds (it'll dissolve once the water becomes a liquid) over the continents, but likely a lot above the sea (increased evaporation)
• Warmer seas. These won't completely evaporate as water will be able to flow in from the cooler regions, replenishing them.
• Incredible ocean currents. One side of the day zone will always warm, adding water to the sea, while the other will always cool, removing (liquid) water. This is bad news for any inhabitants that want to move with the habitable zone: if they meet an ocean, they'll have to traverse its waters head-on.
• Little wind, as most areas will have equal pressure
• Little surface vegetation or surface-dwelling life
• Don't expect a "sandy" desert, expect landforms as usual! Massive amounts of sand come from weathering and erosion, which are area-specific and not necessarily related to temperature.
## Nocturnal hemisphere (night side)
• Extremely cold
• Little precipitation (it'll be a cold desert, like Antarctica, because all liquid water will fall once it arrives at the edge)
• Extremely cold seas, with abundant (though not necessarily widespread) ice / glaciers
• Snow cover. Despite the lack of precipitation, some snow will fall when the region enters the "cold zone" and a steady temperature will maintain it. This has an interesting effect: most footprints will be preserved for 4.5 years.
## Habitable belt
• Extreme wind! Cold and hot air will meet and constantly exchange, resulting in constant and strong currents.
• Precipitation! The meeting of hot and cold fronts, combined with the fact that this is the only place where liquid water can readily exist, will bring torrential downpours.
• Weathering and erosion will be widespread. Nearly all rainfall happens here, as does The Great Freeze (cracks apart rock) and The Great Melt (moves sediments). This is the best place to reshape the environment quickly.
• Most plant life will reside here, because water is accessible. Plants will grow at an extreme rate away from the sun and toward newly exposed land, because staying put will mean burning. Alternatively, they will bury their seeds, reviving to grow and reproduce at each intermediate period between the heat and the freeze. Plant roots must adapt, as rapid erosion means less material to hold on to. They must either grow downward constantly to maintain a grip, or grow much further down the first time.
• Animals will develop an instinct to do the same thing - dig and hide or constantly stay on the move. Anything that can't cross the ocean, go around it, or bury itself is screwed evolutionarily. Flying creatures should be OK.
## Poles
• Average temperature (always a meeting place of warm and cold air, as opposed to only once every 4.5 years)
• Heavy precipitation and extreme winds
• Abundant plant and animal life; possibly the best place to start a permanent civilization
• For a more general overview of winds, see here. – SRM Feb 10 '17 at 13:37
• Thank you for a very detailed answer. I would, however, disagree with couple of point - no precipitation and little winds on day side. If we have an oceanic planet like Earth, hurricanes will be forming continuously. They derive their power from a vertical gradient of temperature, which should be significant, and they should be traveling (with all the wind and rain associated) across the day side until they meet a continent. Feb 10 '17 at 19:56
• @Alexander Fair point. I'll edit in a bit. Feb 10 '17 at 20:19
Your day side will be hot, and your night side quite cold. There will be a lot of winds blowing from the night side to the day side. Depending on the amount of water, there could be torrential rains in some areas. However, a 4.5-year night is not enough to form sizable glaciers except at high latitudes. Overall, things would not be as extreme as in the case of a tidally locked planet.
Society on your world will evolve in ways quite different from Earth.
The people on your world will have to have some means to navigate that doesn't rely on either the stars or the sun. If there are one or more moons that are visible during the day and have rapid movements, then they might be used to establish east/west or north/south. Otherwise, the sun moves too slowly to do anything useful with. Remember, China didn't use compasses to navigate until somewhere around 200 AD, while Europe didn't adopt compasses until much later. Latitudes were originally measured by noting the height of the sun at noon. This method won't work in your world. Longitude will be just as difficult, I imagine.
Societies would probably remain more mobile than modern society as they chased the habitable belt around the globe.
Given that seasons are probably more closely tied to the long day than the short year, the concept of a year may never figure into their concepts of time tracking. The "day" would be their long measure instead.
The stars would be less likely to have religious/social meanings to this world than they do to our ancestors.
https://math.stackexchange.com/questions/2831162/how-to-prove-that-if-the-determinant-of-the-matrix-is-zero-then-at-least-one-eig/2831326
# How to prove that if the determinant of the matrix is zero then at least one eigenvalue must be zero? [duplicate]
For a matrix $A$, if $\det(A)=0,$ prove and provide an example that at least one eigenvalue must be zero.
At first, I tried using the identity that the product of eigenvalues is the determinant of the matrix, so it follows that at least one must be zero for the determinant to be zero. Is this correct? Could I also prove it by using $(A-\lambda I)X=0$, for some $X\neq 0?$
If $\lambda=0,$ then we have $AX=0$, but I can't say $\det(A)\cdot \det(X)=0$ because $X$ is not a square matrix and doesn't have a determinant. How would I continue?
## marked as duplicate by Michael Hoppe, Lord Shark the Unknown, Jyrki Lahtonen, Rafa Budría, rtybase Sep 5 '18 at 19:32
Let $p(x)=\det(A-xI)$ be the characteristic polynomial of $A$. Then $p(0)=\det(A)=0$, hence $0$ is a root of $p$ and therefore an eigenvalue of $A$.
• This feels a bit circular. How do you know that a root of $p$ is an eigenvalue of $A$? – JiK Jun 26 '18 at 5:47
• @JiK: $A-\lambda I$ is singular iff $\det (A-\lambda I ) = 0$. – copper.hat Jun 26 '18 at 6:21
• @copper.hat I'm not sure I follow. Are you saying that the proposed approach is to start from the fact that $\lambda$ is an eigenvalue iff $A-\lambda I$ is singular, then say that this is equivalent to $\det (A - \lambda I) = 0$, then define the characteristic polynomial, and finally see that $0$ is its root? That's a bit convoluted, and the way this answer is written doesn't really explain what follows from what. – JiK Jun 26 '18 at 6:35
• I reconstruct the implied argument like this, @JiK: the characteristic polynomial of $A$, defined by $p(x) = \det(A - xI)$, is a polynomial with the eigenvalues of $A$ for its roots. $p(0) = \det(A - 0\cdot I) = \det(A)$, so when $\det(A) = 0$, $0$ is a root of the characteristic polynomial, and therefore an eigenvalue of $A$. I agree that the answer as written leaves that a bit DIY. – John Bollinger Jun 26 '18 at 19:41
• @JohnBollinger The usual argument that the characteristic polynomial has the eigenvalues as roots uses precisely the fact that if $\det(M)=0$, then $M$ has a kernel, since the kernel of $A-xI$ is just the eigenvectors with eigenvalue $x$. Unless you had some other proof of that fact in mind, that argument is circular. – Milo Brandt Jun 27 '18 at 1:17
Here is an elementary way:
$\det(A) = 0 \Rightarrow$ the columns of $A =(c_1 \ldots c_n)$ are linearly dependent $\Rightarrow$ there is a non-zero vector $v = (v_1 \ldots v_n)^T$ such that $v_1c_1 + \cdots + v_n c_n = \vec{0} \Rightarrow Av = \vec{0} = 0\cdot v \Rightarrow 0$ is an eigenvalue of $A$.
• It might be useful to add that v is an eigenvector of A. Basic linear algebra, I know, but appropriate given the question. – MSalters Jun 26 '18 at 13:44
I do not know what you know about the determinant and how you think of it, but the determinant of a square matrix $A$ is zero iff the matrix is not invertible, and that is equivalent to the kernel being non-trivial, which means that $Ax=0$ for some $x\ne0$.
Since $A$ is a matrix over a field and $\det A$ equals the product of the eigenvalues, the result follows from the field property that $ab=0 \Rightarrow$ either $a=0$ or $b=0.$
The determinant of the matrix $A$ also is the determinant of the endomorphism $\mathbb{R}^n \rightarrow \mathbb{R}^n$ (or more generally $k^n$) defined by multiplication by $A$. To say that $A$ has determinant $0$ is to say that this endomorphism is not injective.
• Serious question: What level of math are you at and what level of math do you think this answer is helpful for? I ask because I remember OP's question from a sophomore undergrad class in linear algebra and assume that's where they are. – user1717828 Jun 25 '18 at 10:20
• Well, isn't my answer quite elementary? The first time I learned about determinant was in my first year of Bachelor degree (in France), and it was first defined for endomorphisms with respect to a basis. Then, we defined it for matrices exactly the way I stated in my answer, and we deduced all the computational properties of the determinant. So to my viewpoint, I just used the very definition. Also now, the OP has been given five different answers using different notions. One of them at least (if not all of them) surely will correspond to his level of understanding. – Suzet Jun 25 '18 at 10:27
• @Suzet That's interesting. In the US, I think it's pretty common for math students to first work with vectors, matrices, and determinants around ages 13-18. At that point the students might associate the word "function" with determining the value of $y$ given a value of $x$ or drawing a function on an $x$-$y$ graph, but not many would likely know the words "endomorphism" or "injective". – aschepler Jun 26 '18 at 1:07
• Though I wouldn't guess this question actually falls in that category. That first visit is typically just a brief piece of a more general year introducing various ideas and tools in algebra, and would touch on things like adding, multiplying, linearity, and inverses, but possibly wouldn't get to eigenvalues, and the additional skills to prove theorems might not be expected at that point, depending. – aschepler Jun 26 '18 at 1:15
• TBH this is the first time I've encountered the term "endomorphism". It's a pretty trivial concept to understand, but probably that also explains why I didn't encounter it. And it seems that this answer omits why a zero determinant of the endomorphism means that the endomorphism isn't injective, so this just rewords the question in terms of vector spaces. – MSalters Jun 26 '18 at 13:56
You are correct that the product of eigenvalues is the determinant, with appropriate multiplicities.
This readily follows from the Jordan normal form.
Once it's known, the problem is solved, as a product in the base field becomes zero iff one of the factors is $0$.
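As a quick numerical illustration of the equivalence (my own addition, not part of the thread; it assumes numpy is available):

import numpy as np

# A 3x3 matrix whose third column is the sum of the first two, so the
# columns are linearly dependent and det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

print(np.linalg.det(A))      # ~0, up to floating-point round-off
print(np.linalg.eigvals(A))  # one of the eigenvalues is ~0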
https://www.physicsforums.com/threads/why-would-his-weight-be-zero-at-the-moment-of-the-fall.870097/
# Why would his weight be zero at the moment of the fall?
## Homework Statement
A person of weight w is in an upward-moving elevator when the cable suddenly breaks. What is the person's apparent weight immediately after the elevator starts to fall?
Chestermiller
Mentor
## Homework Statement
A person of weight w is in an upward-moving elevator when the cable suddenly breaks. What is the person's apparent weight immediately after the elevator starts to fall?
How would you personally define the term "apparent weight?"
How would you personally define the term "apparent weight?"
I think that's the key to answering the question, though I am not sure I interpreted it properly. I think it means the weight relative to the force of gravity that acts on it in a particular instance.
Chestermiller
Mentor
I think that's the key to answering the question, though I am not sure I interpreted properly. I think it means the weight relative to the force of gravity that acts on it in particular instance.
That's not correct. It means that, if he was standing on a scale, what the scale would read (i.e., the normal force the person would be exerting on the scale, and, by Newton's 3rd law, the normal force the scale would be exerting on the person). That's the definition of his apparent weight.
So, what is the normal force that the scale is exerting on the person if the elevator cable has been cut?
That's not correct. It means that, if he was standing on a scale, what the scale would read (i.e., the normal force the person would be exerting on the scale, and, by Newton's 3rd law, the normal force the scale would be exerting on the person). That's the definition of his apparent weight.
So, what is the normal force that the scale is exerting on the person if the elevator cable has been cut?
I think the person and the elevator would be free falling and so there would be no contact force?
Chestermiller
Mentor
I think the person and the elevator would be free falling an so there would be no contact force?
Yes. That is correct. So what does that mean regarding the "apparent weight" of the person, considering the apparent weight is equal to the contact force?
Yes. That is correct. So what does that mean regarding the "apparent weight" of the person, considering the apparent weight is equal to the contact force?
It will be zero! Thank you so much for your help.
I get it now.
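To summarize the thread in equation form (my own bookkeeping, not quoted from the posts): taking up as positive and applying Newton's second law to the person, $N - mg = ma$, so the scale reading is $N = m(g + a)$. In free fall the whole elevator accelerates at $a = -g$, giving $N = m(g - g) = 0$: the apparent weight is zero.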
https://en.wikisource.org/wiki/Page:Aether_and_Matter,_1900.djvu/203
# Page:Aether and Matter, 1900.djvu/203
place until it is proved to be in definite contradiction, not removable by suitable modification, with another portion of it.
### Application to moving Material Media: approximation up to first order
106. We now recall the equations of the free aether, with a view to changing from axes (x, y, z) at rest in the aether to axes (x', y', z') moving with translatory velocity v parallel to the axis of x; so as thereby to be in a position to examine how phenomena are altered when the observer and his apparatus are in uniform motion through the stationary aether. These equations are
${\displaystyle {\begin{array}{ccc}4\pi {\frac {df}{dt}}={\frac {dc}{dy}}-{\frac {db}{dz}}&&-(4\pi c^{2})^{-1}{\frac {da}{dt}}={\frac {dh}{dy}}-{\frac {dg}{dz}}\\\\4\pi {\frac {dg}{dt}}={\frac {da}{dz}}-{\frac {dc}{dx}}&&-(4\pi c^{2})^{-1}{\frac {db}{dt}}={\frac {df}{dz}}-{\frac {dh}{dx}}\\\\4\pi {\frac {dh}{dt}}={\frac {db}{dx}}-{\frac {da}{dy}}&&-(4\pi c^{2})^{-1}{\frac {dc}{dt}}={\frac {dg}{dx}}-{\frac {df}{dy}}.\end{array}}}$
When they are referred to the axes (x', y', z') in uniform motion, so that ${\displaystyle (x',\ y',\ z')=(x-vt,\ y,\ z),\ t'=t}$, then ${\displaystyle d/dx,\ d/dy,\ d/dz}$ become ${\displaystyle d/dx',\ d/dy',\ d/dz'}$, but d/dt becomes ${\displaystyle d/dt'-vd/dx'}$: thus
${\displaystyle {\begin{array}{ccc}4\pi {\frac {df}{dt'}}={\frac {dc'}{dy'}}-{\frac {db'}{dz'}}&&-(4\pi c^{2})^{-1}{\frac {da}{dt'}}={\frac {dh'}{dy'}}-{\frac {dg'}{dz'}}\\\\4\pi {\frac {dg}{dt'}}={\frac {da'}{dz'}}-{\frac {dc'}{dx'}}&&-(4\pi c^{2})^{-1}{\frac {db}{dt'}}={\frac {df'}{dz'}}-{\frac {dh'}{dx'}}\\\\4\pi {\frac {dh}{dt'}}={\frac {db'}{dx'}}-{\frac {da'}{dy'}}&&-(4\pi c^{2})^{-1}{\frac {dc}{dt'}}={\frac {dg'}{dx'}}-{\frac {df'}{dy'}}.\end{array}}}$
where
${\displaystyle (a',\ b',\ c')=(a,\ b+4\pi vh,\ c-4\pi vg)}$ ${\displaystyle (f',\ g',\ h')=\left(f,\ g-{\frac {v}{4\pi c^{2}}}c,\ h+{\frac {v}{4\pi c^{2}}}b\right).}$
We can complete the elimination of (f, g, h) and (a, b, c) so
http://www.mathemafrica.org/?p=13193
I was recently asked about how to spot which direction field corresponds to which differential equation. I hope that by working through a few examples here we will get a reasonable intuition as to how to do this.
Remember that a direction field is a method for getting the general behaviour of a first order differential equation. Given an equation of the form:
$\frac{dy}{dx}=f(x,y)$
For any function of x and y, the solution to this differential equation must be some function (or indeed family of functions) where the gradient of the function satisfies the above relationship.
The first such equation that we looked at was the equation:
$\frac{dy(x)}{dx}=x+y(x)$.
We are trying to find some function, or indeed family of functions y(x), which satisfy this equation. We need to find a function whose derivative (y'(x)) at each point x is equal to the value of the function (ie. y(x)), plus that value of x. ie. at the point x=5, we must have that the gradient of the function (y'(5)) is equal to 5 plus the value of the function (y(5)). Because we have not specified the initial or boundary condition for the equation, there will be an infinite number of solutions which satisfy the differential equation alone, and we can imagine that so long as f(x,y) is not singular at some point (x,y), there will be a solution to the equation which passes through that point. The only constraint on the solution is that the gradient of the function, as it passes through that point, is equal to x+y(x). Indeed we can even put in a direction field when f(x,y) is singular, but really we know that the function is not defined at that point. The direction field will simply correspond to a vertical line.
Rather than finding the solution in its entirety we can simply ask, for a sample of points, what, roughly, will the lines passing through the points look like? From the equation, we only have a constraint on the first derivative, and so we will simply put a short, straight line through all of our sample points in the (x,y) plane, such that the gradient of that short straight line satisfies the differential equation above.
In the following two plots we use two different sets of sample points. In the plot on the left, we have chosen to sample at half integer positions in x and y. In the right plot we have quarter integer samples. Note importantly that each line has a very special property: Its gradient is equal to the x value plus the y value of the middle of the line. ie. the line passing through the point (1,1) has gradient 2, and that passing through (1,-1) has gradient 0.
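As an aside, here is a minimal sketch of how such a direction field can be drawn (my own illustration using Python's matplotlib, not the code used to produce the post's figures):

import numpy as np
import matplotlib.pyplot as plt

# Sample on a half-integer grid, as in the left-hand plot described above.
x, y = np.meshgrid(np.arange(-3, 3.5, 0.5), np.arange(-3, 3.5, 0.5))
slope = x + y                    # dy/dx = x + y at each sample point

# Draw unit-length segments with the prescribed gradient (no arrowheads).
norm = np.sqrt(1 + slope**2)
plt.quiver(x, y, 1/norm, slope/norm, angles='xy', pivot='mid',
           headwidth=0, headlength=0, headaxislength=0)
plt.xlabel('x'); plt.ylabel('y')
plt.title('Direction field for dy/dx = x + y')
plt.show()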
This is all we ever have to check when seeing if indeed a given direction field plot satisfies our differential equation. Let’s look at another example. This time we are looking at the differential equation:
$\frac{dy(x)}{dx}=\frac{x}{y(x)}$
Can we see that this is correct? Well, we see a couple of clear features in the plot. The first is that the gradient along the line y=x always seems to be the same: 1. Indeed at the points along the line y=x, we expect that the value of x/y will be 1 (ie. at the point (2,2), the value of x/y=2/2=1). Along the line y=-x, we see another set of lines, each of which has the same gradient: -1. Again, this makes sense because when y=-x, x/y=-1.
Another feature that we see is that close to the x-axis (ie. the points for which y=0), the gradients of the lines seem to be getting larger and larger, as we would expect for x/y as y gets small (unless $x\le y$). And close to the y-axis (ie. the points for which x=0), the gradients of the lines seem to be getting smaller and smaller (ie. they are flatter and flatter). Again, this is in line with these lines corresponding to the differential equation above.
Let’s look at a less trivial example:
$\frac{dy(x)}{dx}=\frac{\sqrt{x+2}}{\tan(y(x))}$
The direction field is as follows:
Again, let’s look for the most obvious features:
The first is that all of the lines for which x=-2 seem to be horizontal. See the lines in the red box here:
Indeed this makes sense because $\frac{\sqrt{-2+2}}{\tan(y)}=0$ and so we expect these to be flat – ie. their gradients to be 0.
Along the y-axis we have vertical lines. See the lines in the red box here:
This also makes sense because when $y=0$, $\tan(y)=0$, and so the gradient at these points will be infinite (in fact this really means that the solution is ill-defined at this point).
We also see that there are a set of lines around points with y values of 1.6 and -1.6 for which the lines seem to be close to flat. See the lines in the red boxes here:
This is because 1.6 is close to $\frac{\pi}{2}$ and at values close to this, tan(y) becomes very large (large and positive, or large and negative, depending on which side of $\frac{\pi}{2}$), so $\frac{\sqrt{x+2}}{\tan(y)}$ will get small when tan(y) is large. We can also see that the lines below and above the y=1.6 lines change from positive gradient to negative. This is because tan(y) is changing sign either side of y=1.6, so we expect that the gradients will change sign.
Anyway, I hope that with these examples it gives a few things to look out for when checking that a direction field does indeed satisfy a given differential equation. Simply make sure that the lines in the direction field satisfy the relationship between the gradient and the x and y values of the equation.
How clear is this post?
https://hal.inria.fr/inria-00074242
# Fast convergence of the simplified largest step path following algorithm
1 PROMATH - Mathematical Programming
Inria Paris-Rocquencourt
Abstract : Each master iteration of a simplified Newton algorithm for solving a system of equations starts by computing the Jacobian matrix and then uses this matrix in the computation of $p$ Newton steps: the first of these steps is exact, and the others are called "simplified". In this paper we apply this approach to a large step path following algorithm for monotone linear complementarity problems. The resulting method generates sequences of objective values (duality gaps) that converge to zero with Q-order $p+1$ in the number of master iterations, and with a complexity of $O(\sqrt n L)$ iterations.
Document type: Reports
Contributor: Rapport de Recherche Inria
Submitted on : Wednesday, May 24, 2006 - 2:50:49 PM
Last modification on : Friday, May 25, 2018 - 12:02:05 PM
Long-term archiving on: : Monday, April 5, 2010 - 12:06:56 AM
### Identifiers
• HAL Id : inria-00074242, version 1
### Citation
Clovis C. Gonzaga, J. Frederic Bonnans. Fast convergence of the simplified largest step path following algorithm. [Research Report] RR-2433, INRIA. 1994. ⟨inria-00074242⟩
https://studydaddy.com/question/sci-228-week-4-midterm
QUESTION
# SCI 228 Week 4 Midterm
This file of SCI 228 Week 4 Midterm comprises:
(TCO 1) Which of the following nutrients is the most energy dense?
(TCO 1) Which of the following are examples of carbohydrate-rich foods?
(TCO 1) What element makes protein different from carbohydrate and fat?
(TCO 1) Which of the following BEST describes minerals?
(TCO 1) Which of the following is an example of an organic micronutrient?
(TCO 1) Which of the following is NOT a primary function of dietary protein?
(TCO 2) Overconsumption of ________ has the greatest potential for toxicity.
(TCO 2) What percentage of deaths in the United States can be attributed to unhealthy lifestyle behaviors such as smoking, alcohol misuse, physical inactivity, and unbalanced diet?
(TCO 2) Jack is a college athlete who requires 2,800 kilocalories a day to support his total energy needs. Even though Jack likes many different foods and makes it a point to try new things, he only consumes approximately 1,600 kilocalories a day. Which one of the characteristics of a healthy diet is Jack missing?
(TCO 2) What is at the base of the Healthy Eating Pyramid?
(TCO 2) Which group of nutrients would be found in the foods at the base of the Food Guide Pyramid?
(TCO 2) On average, Americans eat ________ commercially prepared meals each week.
(TCO 3) Immediately after absorption, what circulatory system carries most of the fat-soluble nutrients?
(TCO 3) Which of the following is NOT an accessory organ of digestion?
(TCO 3) In which of the following food sources would a consumer look for the statement "Live Active Cultures," indicating that the product is a rich source of probiotics?
(TCO 3) What is the primary cause of peptic ulcers?
(TCO 4) The term complex carbohydrates refers to:
(TCO 4) Diabetes is a condition in which the body doesn't process ________ properly.
(TCO 4) After a meal, which hormone is responsible for moving glucose into the body's cells?
(TCO 4) Peggy Sue's doctor wants to screen her for reactive hypoglycemia. If her doctor's suspicions are correct and Peggy Sue does have reactive hypoglycemia, what would you expect her blood glucose concentration to be at approximately TWO HOURS after she had begun her glucose tolerance test?
(TCO 4) Lactose intolerance is due to a(n):
(TCO 4) ________ is a highly branched arrangement of glucose molecules found in liver and skeletal muscle cells.
(TCO 4) Gluconeogenesis is:
(TCO 4) Which of the following BEST describes the glycemic index?
(TCO 1-6) Which of the following is NOT an advantage of using biopesticides?
(TCO 1-6) Foods most commonly associated with Salmonella intoxication are:
(TCO 1-6) Which of the following is FALSE regarding the prion that causes mad cow disease?
(TCO 1-6) Ninety (90) percent of all food allergies are due to the ________ found in foods.
(TCO 1-6) Which of the following is responsible for food spoilage?
(TCO 1-6) Which of the following describes the prevailing theory in the development of food allergies?
(TCO 1-6) Which of the following preservatives is(are) commonly used to prevent rancidity in oils and fats?
(TCO 1-6) Which of the following is an example of food intoxication?
(TCO 1-6) A cyclic food allergy is one that:
(TCO 5) Which of the following is NOT true of fats?
(TCO 5) Which of the following foods is the richest source of omega-3 fatty acids?
(TCO 5) Sex hormones and adrenal hormones are substances derived from which class of lipid?
(TCO 5) ________ are the major form of fat in both food and the body.
(TCO 5) Where in the body are the majority of triglycerides stored for future energy needs?
(TCO 5) All of the following are major classes of dietary lipids EXCEPT:
(TCO 5) Which of the following food items would contain the highest amount of dietary cholesterol?
(TCO 5) The vast majority of fat digestion and absorption occurs in the:
(TCO 6) Which of the following is a genetic disorder resulting in debilitating protein abnormalities?
(TCO 6) All of the following are examples of protein hormones EXCEPT:
(TCO 6) Which part of an individual amino acid distinguishes it from other amino acids?
(TCO 6) The specific function of a protein is determined by:
(TCO 6) Oligopeptides are a string of ________ amino acids.
(TCO 6) Which of the following is NOT typically a nutrient of concern for vegans?
(TCO 6) Proteases are:
(TCO 6) All of the following are parts of an amino acid molecule EXCEPT:
https://socratic.org/questions/how-do-you-convert-2-x-7y-2-12x-into-polar-form
How do you convert 2=(x+7y)^2-12x into polar form?
Jul 18, 2018
See explanation and graph.
Explanation:
Upon using
$\left(x , y\right) = r \left(\cos \theta , \sin \theta\right)$ and $r = \sqrt{{x}^{2} + {y}^{2}} \ge 0$,
$2 = {\left(x + 7 y\right)}^{2} - 12 x$ converts to
${r}^{2} {\left(\cos \theta + 7 \sin \theta\right)}^{2} - 12 r \cos \theta - 2 = 0$. So,
$0 \le r = \frac{12 \cos \theta + \sqrt{144 \cos^{2}\theta + 8 {\left(\cos \theta + 7 \sin \theta\right)}^{2}}}{2 {\left(\cos \theta + 7 \sin \theta\right)}^{2}}$
(the positive root of the quadratic in $r$ above; the $8$ comes from $-4 \cdot {\left(\cos \theta + 7 \sin \theta\right)}^{2} \cdot (-2)$).
The purpose of this conversion is not known.
Analysis of the given Cartesian equation, which represents a parabola, is comparatively easy. It is known that if the second degree terms form a perfect square, a second degree equation represents a parabola.
I know that the readers would like what follows.
The axis and the tangent at the vertex are at right angles. So, the equation of any parabola can be converted to the form ${\left(y - m x - c\right)}^{2} = k \left(m x + y - c'\right)$ to read off immediately
Axis: $y - m x - c = 0$
Tangent at the vertex: $m x + y - c' = 0$
I have worked this out here. It is $(x + 7y - 0.12)^2 = 1.68(7x - y) + 2.0144$. See graph.
graph{((x+7y)^2-12x-2)(x + 7y - 0.12 )(1.68 ( 7x - y ) + 2.0144)=0}
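A numerical self-check of the polar form (my own addition, not part of the original answer; it assumes numpy): pick a few angles, compute $r$ from the positive root, convert back to Cartesian coordinates, and confirm that the original equation is recovered.

import numpy as np

for theta in [0.3, 1.0, 2.2, 4.0]:
    A = (np.cos(theta) + 7*np.sin(theta))**2
    r = (12*np.cos(theta) + np.sqrt(144*np.cos(theta)**2 + 8*A)) / (2*A)
    x, y = r*np.cos(theta), r*np.sin(theta)
    print(theta, (x + 7*y)**2 - 12*x)   # prints ~2.0 for each angle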
https://www.physicsforums.com/threads/imaginary-zeros-of-zeta-function.276198/#post-2001412
# Imaginary Zeros of Zeta Function
I was doing some work with the zeta function and have a question.
I am aware that the Riemann Hypothesis claims that all of the critical zeros of the analytically continued zeta function have a real part Re(z)=1/2.
My question is, does the concept apply only to the complex zeros, or the imaginary and real parts separately.
Basically, is it possible to have:
Im(zeta(z))=0
Without having:
Re(zeta(z))=0
Or does a zero of one part automatically illustrate the existence of a zero for the other?
A zero x of a function f is a point where f(x)=0 (=0+0i), and it is no different for the zeta function.
Or does a zero of one part automatically illustrate the existence of a zero for the other?
No. For example, along the real line the imaginary part of the zeta function is zero, but the real part is certainly not always zero.
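A short numerical illustration of that last point (my addition, not from the thread; it assumes the mpmath library):

from mpmath import zeta

print(zeta(2))    # pi^2/6 ~ 1.6449: imaginary part zero, real part nonzero
print(zeta(0.5))  # ~ -1.4604: again real and nonzero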
https://codereview.stackexchange.com/questions/8808/c-data-sorting-taking-a-long-time
# C++: Data sorting taking a long time
I wrote a simple program that takes the enable1.txt word list of scrabble/words with friends and searches it for an input. In order to search the massive list of 172,820 words I figured sorting it then using a binary search would be a good idea.
The sorting algorithm I used was std::stable_sort as I wanted similar words stay in their locations. After profiling, the std::stable_sort is taking 37% of the run time.
Should I have used std::stable_sort? Would std::sort have been better? Would a different sort entirely have been a better idea? Is there a faster sort, short of rolling my own?
Code:
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
#include <algorithm>
#include <ctime>

double Search(const std::vector<std::string>& obj);
double BSearch(const std::vector<std::string>& obj);

int main() {
    std::ifstream db;
    db.open("enable1.txt", std::ios::in);
    unsigned long number_of_words = 0;
    const unsigned short max_length = 50;
    while(db.fail() == false) {
        char dump[max_length];
        db.getline(&dump[0], max_length);
        ++number_of_words;
    }
    db.close();
    db.clear();
    std::cout << "done" << std::endl;
    std::vector<std::string> words(number_of_words);
    words.reserve(number_of_words);
    db.open("enable1.txt", std::ios::in);
    unsigned long i = 0;
    while(std::getline(db, words[i], '\n')) {
        ++i;
    }
    db.close();
    db.clear();
    std::cout << "Sorting database...";
    std::stable_sort(words.begin(), words.end());
    std::cout << "done" << std::endl << std::endl;
    double seconds = Search(words);
    std::cout << "Time for Search: " << seconds << std::endl;
    double b_seconds = BSearch(words);
    std::cout << "Time for BSearch: " << b_seconds << std::endl;
    std::cout << "Cleaning up resources, please wait..." << std::endl;
    return 0;
}

double Search(const std::vector<std::string>& obj) {
    std::cout << "Enter word: ";
    std::string word;
    std::getline(std::cin, word);
    std::cout << "Searching database..." << std::endl;
    std::vector<std::string, std::allocator<std::string> >::size_type s = obj.size();
    unsigned long mid = s / 2;
    unsigned long first = 0;
    unsigned long last = s - 1;
    std::clock_t end_time = 0;
    std::clock_t start_time = clock();
    while(first <= last) {
        std::cout << "Checking: " << word << " with " << obj[mid] << std::endl;
        int result = word.compare(obj[mid]);
        if(result == 0) {
            end_time = clock();
            std::cout << "Valid word." << std::endl;
            return std::difftime(end_time, start_time);
        } else if(result < 0) {
            last = mid - 1;
            mid = ((last - first) / 2 + first);
        } else {
            first = mid + 1;
            mid = ((last - first) / 2) + first;
        }
    }
    end_time = clock();
    std::cout << word << " is not a valid word." << std::endl;
    return std::difftime(end_time, start_time);
}

double BSearch(const std::vector<std::string>& obj) {
    std::cout << "Enter word: ";
    std::string word;
    std::getline(std::cin, word);
    std::cout << "Searching database..." << std::endl;
    std::clock_t end_time = 0;
    std::clock_t start_time = clock();
    if(std::binary_search(obj.begin(), obj.end(), word)) {
        end_time = clock();
        std::cout << "Valid word." << std::endl;
        return std::difftime(end_time, start_time);
    }
    end_time = clock();
    std::cout << word << " is not a valid word." << std::endl;
    return std::difftime(end_time, start_time);
}
The Search and BSearch methods are the same, one uses a hand-coded binary search and the other uses the STL version. (I was verifying their speed differences...unsurprisingly, there isn't one.)
As an after thought: Is there a better way to count the number of lines in a file? Maybe without having to open and close the file twice?
P.S. If you're wondering about the message at the end of the program, that's due to the vector of 170,000+ strings cleaning up after going out of scope. It takes a while.
### Answer to generic questions
In order to search the massive list of 172,820 words
That's relatively small (OK small->medium).
I figured sorting it then using a binary search would be a good idea.
Yes that's a good idea.
The sorting algorithm I used was std::stable_sort as I wanted similar words stay in their locations.
Why? Stable sort means that if two words have the same value (they are equal) they maintain their relative order. Since you are searching for a single value (not a group of values), shouldn't you be de-dupping your input anyway? Even if you want to maintain multiple entries of the same word, is their position in the input file significant in any way?
Should I have used std::stable_sort?
No.
Would std::sort have been better?
Maybe.
I would consider using a sorted container that does the work of dedupping for you.
Would a different sort entirely have been a better idea?
The only way to know is to actually do it and test the difference. But std::sort provides a complexity of O(n log n) on average, which is hard to beat unless you know something about your input set.
As an after thought: Is there a better way to count the number of lines in a file? Maybe without having to open and close the file twice?
You can rewind the file to the beginning by using seek() (seekg() on file streams).
The Search and BSearch methods are the same, one uses a hand-coded binary search and the other uses the STL version. (I was verifying their speed differences...unsurprisingly, there isn't one.)
Your timing is invalid. You are printing to a stream in the middle of the timed section. This will be the most significant cost in your search and will outweigh the cost of the search by an order of magnitude. Remove the prints std::cout << message; and re-time.
P.S. If you're wondering about the message at the end of the program, that's due to the vector of 170,000+ strings cleaning up after going out of scope. It takes a while.
Which message are you referring to? And define "a while". I would not expect the cleanup of strings to be significantly slow (though there is a cost).
### Comments on Code
Your code seems very dense. White space is your friend when writing readable code.
There is no need to open/close and clear a file. Calling clear on a file after it has closed has no effect, and the subsequent open() would reset the internal state of the stream anyway. When reading a file I see little point in explicitly opening and closing it (let the constructor/destructor do that). See https://codereview.stackexchange.com/a/544/507
std::ifstream db("enable1.txt");
// Do Stuff
db.clear(); // Clear the EOF flag
db.seekg(0, std::ios_base::beg); // rewind to beginning
// Do more stuff
There is an easier way to count the number of words. Note there is also a safer version of getline() that uses strings and thus can't overflow.
std::string line;
std::getline(db, line); // Reads one line.
Given that you are actually reading lines but counting them as words, the file must be one word per line. Also, testing the state of the stream before using it is an anti-pattern and nearly always wrong.
while(db.fail() == false) {
This will result in an over-count of 1. This is because the last word read will read up to, but not past, the EOF. Thus the EOF flag is not set and you re-enter the loop. You then try to read the next word (which is not there, resulting in the stream setting the EOF flag) but you increment the word count anyway. If you do it this way then you need to check the state of the stream after the read.
while( db.SomeActionToReadFromIt()) {
Thus in all common languages you do a read as part of the while condition. The result of the read indicates if the loop should be entered (if the read worked then do the loop and process the value you just read).
The operator >>, when used on a string, will read one whitespace-separated word. So to count the number of words in a file, a trivial implementation would be:
std::string line;
while(std::getline(db, line))
{ ++number_of_words;
}
// Or alternatively
std::string word;
while(db >> word)
{ ++number_of_words;
}
It is important to note the second version here. This is because you can use stream iterators and some standard functions to achieve the same results. Note: stream iterators use the operator >> to read their target.
std::size_t size = std::distance(std::istream_iterator<std::string>(db),
std::istream_iterator<std::string>());
If you want absolute speed then the C interface could be used (though I would not recommend it).
There is no need to set the size of the vector and then reserve the same size.
std::vector<std::string> words(number_of_words);
words.reserve(number_of_words);
Personally I would just use reserve(). That way you do not need to prematurely construct 170,000 empty strings. But then you would need to use push_back rather than an explicit read into an element (so swings and roundabouts). An alternative to your loop is to use stream iterators again to copy the file into the vector:
unsigned long i = 0;
while(std::getline(db, words[i], '\n')) { // no need for the '\n' here!
++i;
}
// Alternatively you can do this:
std::copy(std::istream_iterator<std::string>(db), std::istream_iterator<std::string>(),
std::back_inserter(words)
);
Now you sort the container:
Alternatively you can use a sorted container. I would consider using std::set. Then you can just insert all the words. std::set has a neat find() method that searches the now-sorted container:
std::set<std::string> words;
std::copy(std::istream_iterator<std::string>(db), std::istream_iterator<std::string>(),
std::inserter(words, words.end())
);
The container is automatically sorted and de-dupped. And you can just use find on it:
if (words.find("Loki") != words.end())
{
// We have found it.
}
Unless you are doing something really clever, let the compiler default the template arguments you are not specifying:
std::vector<std::string, std::allocator<std::string> >::size_type s = obj.size();
// Rather:
std::vector<std::string>::size_type s = obj.size();
You already know the type. Why are you changing type in mid function?
unsigned long mid = s / 2;
unsigned long first = 0;
unsigned long last = s - 1;
Use the same type you use for s. If that is too much to type then typedef it to something easier. But C++ is all about type and safety. Keep your types consistent.
I believe there is a bug in your code. If you fail to find a value then it will lock up in an infinite loop.
• +1 although I'd note that set will probably less efficient then a sorted vector for searching. – Winston Ewert Feb 9 '12 at 15:31
• Thanks for the due diligence. One thing to note, the reason I used a vector instead of a non-duplicate container is that there ARE no duplicates in the word list and I needed random-access-speed for the binary search to work. – Casey Feb 10 '12 at 0:50
• @Casey: Yes you used std::vector so you can use binary search. If you use std::set the binary search becomes redundant as the set is implemented to have the same characteristics as a binary tree. Thus the find on the tree is already O(log(n)). Alternatively if you had used an unordered set the complexity becomes O(1). – Martin York Feb 10 '12 at 5:33
http://mirror.hmc.edu/ctan/help/Catalogue/entries/tabularht.html
Tabular environments with height specified.
The tabularht package defines some environments that add a height specification to tabular and array environments. The new environments take a value for their height in the first argument; the defined environments are tabularht, tabularht*, and arrayht. If the tabularx package is also loaded, the package also defines the environments tabularxht and tabularxht*.
The places where stretching is to happen are signalled by
\noalign{\vfill}
immediately after the \\ that ends a row of the table or array.
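A minimal usage sketch pieced together from the description above (my own illustration, not taken from the package documentation; the assumed argument order — height first, then the usual column preamble — should be checked against the manual):

\documentclass{article}
\usepackage{tabularht}
\begin{document}
\begin{tabularht}{4cm}{|l|}% assumed syntax: height, then column preamble
\hline
top row \\ \noalign{\vfill}% the stretching happens here
\hline
bottom row \\
\hline
\end{tabularht}
\end{document}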
The author is Heiko Oberdiek. The package is Copyright © 2005-2007 Heiko Oberdiek.
License: lppl1.3 Version: 2.5 Catalogued: 2011-09-27
http://math.stackexchange.com/questions/784703/multiple-integral-of-min
# Multiple integral of min [duplicate]
How can this be proven? $$\int_0^1 \int_0^1...\int_0^1 min(x_1,x_2,...,x_n) dx_1dx_2...dx_n = \frac1{n+1}$$
I tried to split the last integral in $$\int_0^{min(x_1,x_2,...,x_{n-1})} x_ndx_n + \int_{min(x_1,x_2,...,x_{n-1})}^1 min(x_1,x_2,...,x_{n-1})dx_n$$
And then continue in this manner. So the function that remains to be integrated satisfies this recurrence relation $$f_n = (1-x)f_{n-1} + \int f_{n-1}$$
I then thought to use the following notation
$$I_n = \frac{ (-1)^{n+1}x^n }{n} + \frac{ (-1)^{n}x^{n-1} }{n-1}$$
If we compute the last integral, the n-1 integrals apply to $I_2$. If we apply another integral we get $I_3+I_2$ and so on, which will look like Pascal's triangle. But I can't go any further.
-
## marked as duplicate by user1551, Claude Leibovici, Najib Idrissi, M Turgeon, mau May 7 '14 at 12:22
Here is another approach:
First consider the set $A=\{(t_1,\dots,t_n):t_1\le\dots\le t_n\}$. In this set, your integral becomes $$\int_0^1 \int_0^{t_n}...\int_0^{t_2} \int_0^{t_1} x_1 dx_1dx_2...dx_n = \frac1{(n+1)!}.$$
Since there are $n!$ sets like $A$ whose union is $[0,1]^n$, we get $$\int_0^1 \int_0^1...\int_0^1 min(x_1,x_2,...,x_n) dx_1dx_2...dx_n = n!\times \frac1{(n+1)!}=\frac1{n+1}.$$
-
Very nice! (+1) – Start wearing purple May 7 '14 at 8:48
Thank you! Very nice indeed. – user42768 May 7 '14 at 9:15
You're welcome ;) – Jlamprong May 7 '14 at 9:31
This elegant solution deserves +1 upvote! – Sangchul Lee May 7 '14 at 9:42
First, write
\begin{align*} I &:= \int_{0}^{1} \cdots \int_{0}^{1} \min \{ x_{1}, \cdots, x_{n} \} \, dx_{n}\cdots dx_{1} \\ &= \int_{0}^{1} \cdots \int_{0}^{1} \int_{0}^{\min \{ x_{1}, \cdots, x_{n} \}} dt \, dx_{n}\cdots dx_{1}. \end{align*}
By noting that $t \leq \min \{x_{1}, \cdots, x_{n}\}$ is equivalent to $t \leq x_{1}, \cdots, t \leq x_{n}$, we find that $I$ is the volume of the region $\mathcal{W}$ defined by
$$\mathcal{W} = \{ (t, x_{1}, \cdots, x_{n}) : 0 \leq t \leq 1, t \leq x_{1} \leq 1, \cdots, t \leq x_{n} \leq 1 \}.$$
Thus by Fubini's Theorem we have
\begin{align*} I &= \int_{0}^{1} \int_{t}^{1} \cdots \int_{t}^{1} \, dx_{n}\cdots dx_{1} \, dt = \int_{0}^{1} (1-t)^{n} \, dt = \frac{1}{n+1}. \end{align*}
This solution can be encoded in the language of probability as follows: Let $U_{1}, \cdots, U_{n}$ be i.i.d. uniform random variables on $[0, 1]$. Then
\begin{align*} &\int_{0}^{1} \cdots \int_{0}^{1} \min \{ x_{1}, \cdots, x_{n} \} \, dx_{n}\cdots dx_{1} \\ &\qquad = \Bbb{E}( \min\{ U_{1}, \cdots, U_{n} \} ) \\ &\qquad = \int_{0}^{1} \Bbb{P}(\min\{ U_{1}, \cdots, U_{n} \} > x) \, dx \\ &\qquad = \int_{0}^{1} \Bbb{P}( U_{1} > x, \cdots, U_{n} > x) \, dx \\ &\qquad = \int_{0}^{1} \Bbb{P}( U_{1} > x)^{n} \, dx \qquad (\because \text{i.i.d.}) \\ &\qquad = \int_{0}^{1} (1 - x)^{n} \, dx = \frac{1}{n+1}. \end{align*}
-
I am having a hard time understanding where this equality comes from, since I haven't studied continuous random variables $$E(min(...)) = \int_0^1 P(min(...)>x) dx$$. Thank you! – user42768 May 7 '14 at 9:16
@user42768, Actually, that follows from Fubini's Theorem but I admit that it is not an easy observation. My calculus-version solution has the same content as in that solution, so you may ignore that part. – Sangchul Lee May 7 '14 at 9:41
So the proof to that equality is based on the "calculus" method of solving the problem? Thank you for your time. – user42768 May 7 '14 at 9:58
How did you apply Fubini's theorem? can someone enlight me on this? thx – mohlee Feb 6 '15 at 4:32
Use the fact that, $\min \{a,b\} = (a+b-|a-b|)/2$ for all $a,b \in \mathbb R$.
-
Could you please elaborate? – user42768 May 7 '14 at 8:24
Sure, apologies if the conversation stops suddenly. I will pick it up later. I think to see this, first just work on the case $n=2$. Then use the above expression to evaluate it. I will try this out myself now and post soon. – dcs24 May 7 '14 at 8:30
I already solved for n=2 using the method I described. Using an absolute value requires splitting of the integral, so it is similar to the way I thought to solve it. – user42768 May 7 '14 at 8:34
Using the absolute value you can use symmetry along the line x=y to remove it :-) that is how far I was getting. I have to stop at the minute, I will try to resume soon, but it seems you now have a solution below! – dcs24 May 7 '14 at 8:45
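As a numerical sanity check of the identity (my own addition, not from the thread; it assumes numpy), a Monte Carlo estimate of $\Bbb{E}(\min\{U_1,\dots,U_n\})$ should land close to $1/(n+1)$:

import numpy as np

rng = np.random.default_rng(0)
for n in [1, 2, 3, 5, 10]:
    u = rng.random((200_000, n))                 # n i.i.d. uniforms per row
    print(n, u.min(axis=1).mean(), 1/(n + 1))    # estimate vs. exact value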
https://sskiswani.com/lec/review.html
# Exam Review
SOLUTIONS TO THE MIDTERM REVIEW HERE
## Longest Common Substring
Given two strings A and B, this will find the length of the longest common substring. In the (interactive) table, the arrows point in the direction of the subproblem used to calculate each cell.
The table is filled with the recurrence $d[i+1][j+1] = d[i][j] + 1$ when $X[i] = Y[j]$, and $d[i+1][j+1] = \max(d[i][j+1], d[i+1][j])$ otherwise, with row 0 and column 0 zeroed.
Here’s approximately the code I used for the above table:
function LCS(X, Y) {
    var d = new Array(X.length + 1);
    var i, j;
    // Zero out row 0 and column 0 of the (X.length+1) x (Y.length+1) table.
    for(i = 0; i < d.length; ++i) {
        d[i] = new Array(Y.length + 1);
        d[i][0] = 0;
        for(j = 0; j < d[i].length; ++j) {
            d[0][j] = 0;
        }
    }
    for(i = 0; i < X.length; i++) {
        for(j = 0; j < Y.length; j++) {
            if(X[i] == Y[j]) {                // compare characters of X and Y
                d[i+1][j+1] = d[i][j] + 1;
            } else if(d[i][j+1] >= d[i+1][j]) {
                d[i+1][j+1] = d[i][j+1];
            } else {
                d[i+1][j+1] = d[i+1][j];
            }
        }
    }
    return d[X.length][Y.length];             // the computed LCS length
}
## Quicksort
For the following questions, assume that the pivot is always chosen to be the last element in the partition.
a. What kind of input causes worst-case behavior?
b. What is the recurrence for this worst-case behavior?
c. What is the solution to the worst-case recurrence?
d. Is there any way to mitigate the worst case?
Given that the pivot is always chosen to be the last element in the partition, the worst case is encountered for arrays that are already sorted.
The recurrence for the worst case is $T(n) = T(n-1) + \Theta(n)$, where $\Theta(n)$ is the time to partition.
To solve the recurrence, just unroll it a few steps: $$T(n) = T(n-1) + \Theta(n)$$ $$T(n) = T(n-2) + \Theta(n-1) + \Theta(n)$$ Continuing down to $T(1)$ gives $T(n) = \sum_{k=1}^{n}\Theta(k) = \Theta(n^2)$.
The book suggests using the median of the first, middle, and last elements. Alternatively, one can average the first and last elements. Choosing a random element as the pivot also helps. There's a detailed discussion on the Wikipedia article.
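A small sketch of the median-of-three idea (my own Python illustration, not taken from the review or the book):

def median_of_three(a, lo, hi):
    # Index of the median of a[lo], a[(lo+hi)//2], a[hi]. Using it as the
    # quicksort pivot defeats the already-sorted worst case that a fixed
    # last-element pivot runs into.
    mid = (lo + hi) // 2
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]

print(median_of_three(list(range(10)), 0, 9))  # 4: the middle element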
## Asymptotic Notation
a. Formally prove the transitivity of $\Omega$, e.g. if $f \in \Omega(g)$ and $g \in \Omega(h)$ then $f \in \Omega(h)$.
Given:
(1) $f \in \Omega(g) \rightarrow c_1 g \leq f$ for $n \geq n_0$ and some $c_1$.
(2) $g \in \Omega(h) \rightarrow c_2 h \leq g$ for $n \geq n_1$ and some $c_2$.
Multiplying (2) by $c_1 > 0$ gives $c_1 c_2 h \leq c_1 g$, and combining this with (1) yields $c_1 c_2 h \leq c_1 g \leq f$.
Let $n_3 = \max(n_0, n_1)$, and $c_3 = c_2 c_1$, hence $c_3 h \leq f$ for $n \geq n_3$ which is the formal definition of $f \in \Omega(h)$.
b. For the general case (e.g. arbitrary $f$ and $g$), prove or disprove $f \not\in \omega(g) \rightarrow f \in O(g)$
The implication does not hold for arbitrary $f$ and $g$. $f \not\in \omega(g)$ negates "for every $c$, $cg < f$ eventually", so it only guarantees some $c_1$ with $f(n) \leq c_1 g(n)$ for infinitely many $n$, not for all $n \geq n_0$, which is what $f \in O(g)$ requires. Counterexample: take $g(n) = n$ and $f(n) = 1$ for odd $n$, $f(n) = n^2$ for even $n$. The odd inputs show $f \not\in \omega(g)$, while the even inputs show $f \not\in O(g)$.
## Recurrences
Do not use the Master method for the following problems.
a. Show that if $T(n) = 3T(n/2) + n^2$, then $T(n) \in O(n^2)$.
Use the substitution method with the inductive hypothesis $T(m) \leq cm^2$ for all $m < n$. (Note: $c$ must be a single fixed constant throughout; it cannot simply "consume" leftover terms.)
$T(n) \leq 3c(n/2)^2 + n^2 = \tfrac{3}{4}cn^2 + n^2$
$T(n) \leq cn^2$, provided $c \geq 4$ (so that $n^2 \leq \tfrac{1}{4}cn^2$).
Hence $T(n) \in O(n^2)$.
b. Show that if $T(n) = T(n/2) + 2^n$, then $T(n) \in O(2^n)$.
With the hypothesis $T(m) \leq c\,2^m$ for $m < n$:
$T(n) \leq c\,2^{n/2} + 2^n \leq \tfrac{c}{2}\,2^n + 2^n$ for $n \geq 2$, since $2^{n/2} \leq 2^{n-1}$.
$T(n) \leq c\,2^n$, provided $c \geq 2$.
Hence $T(n) \in O(2^n)$.
c. Prove $T(n) = 16T(n/4) + n \rightarrow T(n) \in O(n^2)$.
Here the plain hypothesis fails: $T(n) \leq 16c(n/4)^2 + n = cn^2 + n$, which is not $\leq cn^2$. Strengthen the hypothesis by subtracting a lower-order term, $T(m) \leq cm^2 - dm$:
$T(n) \leq 16\left(c(n/4)^2 - d(n/4)\right) + n = cn^2 - 4dn + n \leq cn^2 - dn$, provided $d \geq 1/3$.
Hence $T(n) \in O(n^2)$.
d. Prove $T(n) = 3T(n/3) + \sqrt{n} \rightarrow T(n) \in O(n)$.
Again strengthen the hypothesis to $T(m) \leq cm - d\sqrt{m}$:
$T(n) \leq 3\left(c(n/3) - d\sqrt{n/3}\right) + \sqrt{n} = cn - \sqrt{3}\,d\sqrt{n} + \sqrt{n} \leq cn - d\sqrt{n}$, provided $d \geq \frac{1}{\sqrt{3}-1}$.
Hence $T(n) \in O(n)$.
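As a quick numeric sanity check of this style of bound (an addition, not part of the original review), the snippet below evaluates recurrence (c), $T(n) = 16T(n/4) + n$ with an assumed base case $T(1) = 1$, on powers of 4 and confirms that $T(n)/n^2$ stays bounded:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 16*T(n/4) + n, with an assumed base case T(1) = 1.
    if n <= 1:
        return 1
    return 16 * T(n // 4) + n

for e in range(1, 11):
    n = 4 ** e
    print(n, T(n) / n ** 2)   # ratio stays bounded, consistent with O(n^2)
```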
|
2020-01-19 14:07:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89058917760849, "perplexity": 838.9623309363461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594603.8/warc/CC-MAIN-20200119122744-20200119150744-00442.warc.gz"}
|
http://mathhelpforum.com/algebra/163087-exponential-logarithmic-functions.html
|
# Math Help - Exponential and Logarithmic Functions
1. ## Exponential and Logarithmic Functions
Hi,
Can someone help me to solve this equation for X??
3 Log x - 3 = 2 Log x2(square)
2. Originally Posted by Aniff
Hi,
Can someone help me to solve this equation for X??
3 Log x - 3 = 2 Log x2(square)
1. Do you mean:
$3\log(x-3)=2\log(x^2)$
or
$3\log(x)-3=2\log(x^2)$
2. I'll take the 2nd equation:
$3\log(x)-3=2\log(x^2)~\implies~3\log(x)-3=4\log(x)$
$-3=\log(x)$
3. Exponentiate: raise the base of the logarithm to each side of the equation:
If the base is e you'll get $x = e^{-3}\approx 0.0498$
if the base is 10 you'll get $x = 10^{-3}=0.001$
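A quick numeric check of the base-10 case (an addition, not part of the original thread):

```python
import math

x = 10 ** -3
lhs = 3 * math.log10(x) - 3        # 3*(-3) - 3 = -12
rhs = 2 * math.log10(x ** 2)       # 2*(-6)     = -12
assert math.isclose(lhs, rhs)
```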
|
2015-04-25 19:59:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881717920303345, "perplexity": 836.2192014765811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651471.95/warc/CC-MAIN-20150417045731-00293-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://www.studyadda.com/ncert-solution/algebraic-expressions-and-identities_q12/572/51971
|
Question 12. Find the product: (i) $(a^2)\times(2a^{22})\times(4a^{26})$ (ii) $\left(\frac{2}{3}xy\right)\times\left(-\frac{9}{10}x^2y^2\right)$ (iii) $\left(-\frac{10}{3}pq^3\right)\times\left(\frac{6}{5}p^3q\right)$ (iv) $x\times x^2\times x^3\times x^4$.
(i) $(a^2)\times(2a^{22})\times(4a^{26}) = (2\times 4)\times(a^2\cdot a^{22}\cdot a^{26}) = 8a^{50}$.
(ii) $\left(\frac{2}{3}xy\right)\times\left(-\frac{9}{10}x^2y^2\right) = \left\{\frac{2}{3}\times\left(-\frac{9}{10}\right)\right\}\times(x\cdot x^2)\times(y\cdot y^2) = -\frac{3}{5}x^3y^3$.
(iii) $\left(-\frac{10}{3}pq^3\right)\times\left(\frac{6}{5}p^3q\right) = \left\{\left(-\frac{10}{3}\right)\times\frac{6}{5}\right\}\times(p\cdot p^3)\times(q^3\cdot q) = -4p^4q^4$.
(iv) $x\times x^2\times x^3\times x^4 = x^{1+2+3+4} = x^{10}$.
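These products can be verified mechanically; a short sympy check (added for illustration):

```python
from sympy import symbols, Rational, expand

a, x, y, p, q = symbols('a x y p q')

assert expand(a**2 * 2*a**22 * 4*a**26) == 8*a**50
assert expand(Rational(2, 3)*x*y * Rational(-9, 10)*x**2*y**2) == Rational(-3, 5)*x**3*y**3
assert expand(Rational(-10, 3)*p*q**3 * Rational(6, 5)*p**3*q) == -4*p**4*q**4
assert expand(x * x**2 * x**3 * x**4) == x**10
```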
|
2021-01-18 23:51:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9966472387313843, "perplexity": 4708.509676084592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517159.7/warc/CC-MAIN-20210118220236-20210119010236-00460.warc.gz"}
|
https://repo.pw.edu.pl/info/bachelor/WUT36ada3792f0d49c59e0a749b983941ae/
|
Knowledge base: Warsaw University of Technology
Back
The influence of direct georeference on 3D modelling of a terrestrial object
Joanna Bogumiła Przystawska, Jakub Zakrzewski
Abstract
This thesis explores the possible uses of direct georeferencing in the process of creating 3D models of buildings from ground-based digital images. The influence of the a priori measurement error of the linear elements of exterior orientation, which served as observation weights, on the accuracy of the generated products was researched. The experiment was conducted with two software packages: Agisoft PhotoScan Professional Edition and Pix4Dmapper Pro. The theoretical part describes the issues directly linked with the use of direct georeferencing in photogrammetry, the concepts of interior and exterior orientation, the principle of creating three-dimensional models from digital images, and the software that was employed. The practical part covers the methods used for compiling, processing and analyzing the data in each of the abovementioned packages. For the 3D model creation, digital images of the Main Hall of the Warsaw University of Technology were used. A tacheometer was employed to measure the coordinates of the camera for each of the images, and on that basis the spatial data for aerotriangulation was obtained. The spatially oriented observations were weighted by applying eight different a priori precisions of the linear elements of exterior orientation measurement and were analyzed as separate projects. The test field located in the Main Hall served as the reference points. Comparing the coordinates obtained from direct georeferencing with the known photopoints allowed the extraction of the best and the worst results for the applied weights; a more thorough analysis was conducted only for these most extreme results. The research focused on the dense clouds and generated 3D models, which were further processed in the CloudCompare software, allowing a more comprehensive analysis of their precision. The achieved results were compared with regard to the software used and their usefulness in 3D modelling of architectural objects, and activities that could help to optimize the results were also considered. On the basis of the conducted experiment it can be concluded that direct georeferencing allows creating three-dimensional models that are within the accepted error range of commonly acknowledged guidelines for architectural objects.
Diploma type
Engineer's / Bachelor of Science
Diploma type
Engineer's thesis
Author
Joanna Bogumiła Przystawska, Faculty of Geodesy and Cartography (FGC); Jakub Zakrzewski, Faculty of Geodesy and Cartography (FGC)
Title in Polish
Wpływ georeferencji wprost na modelowanie 3D obiektu naziemnego
Supervisor
Dorota Zawieska, Department of Photogrammetry, Remote Sensing and Spatial Information Systems (FGC/DPRSISIS), Faculty of Geodesy and Cartography (FGC)
Certifying unit
Faculty of Geodesy and Cartography (FGC)
Affiliation unit
Department of Photogrammetry, Teledetection and Spatial Information Systems (FGC/DPRSISIS)
Study subject / specialization
Geodezja i Kartografia (Geodesy and Cartography)
Language
(pl) Polish
Status
Finished
Defense Date
11-02-2016
Issue date (year)
2016
Reviewers
Michał Kowalczyk, Department of Photogrammetry, Remote Sensing and Spatial Information Systems (FGC/DPRSISIS), Faculty of Geodesy and Cartography (FGC); Dorota Zawieska, Department of Photogrammetry, Remote Sensing and Spatial Information Systems (FGC/DPRSISIS), Faculty of Geodesy and Cartography (FGC)
Keywords in Polish
georeferencja wprost, model 3D, orientacja zewnętrzna, zdjęcia cyfrowe
Keywords in English
direct georeferencing, 3D model, exterior orientation, digital images
Abstract in Polish
File
• File: 1
Praca Inżynierska Przystawska Zakrzewski.pdf
Local fields
Identyfikator pracy APD: 5002
Uniform Resource Identifier
urn:pw-repo:WUT36ada3792f0d49c59e0a749b983941ae
|
2021-06-18 07:00:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3059125542640686, "perplexity": 6919.317639975095}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635724.52/warc/CC-MAIN-20210618043356-20210618073356-00407.warc.gz"}
|
https://www.corsbook.com/lesson/latex-test/
|
# LaTeX Test
#### By Mark Ciotola
First published on May 11, 2019
$Your LaTeX code$.
Here are some examples that may be useful.
$E = mc^2$.
$$E = mc^2$$.
$E = \frac{1}{2}mv^2$.
$$E = \frac{1}{2}mv^2$$.
| COURSE |
|
2022-01-19 08:14:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.497818261384964, "perplexity": 9614.496804957422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00618.warc.gz"}
|
https://projecteuclid.org/euclid.bbms/1195157138
|
## Bulletin of the Belgian Mathematical Society - Simon Stevin
### Inner invariant extensions of Dirac measures on compactly cancellative topological semigroups
#### Abstract
Let ${\cal S}$ be a left compactly cancellative foundation semigroup with identity $e$ and $M_a({\cal S})$ be its semigroup algebra. In this paper, we give a characterization for the existence of an inner invariant extension of $\delta_e$ from $C_b({\cal S})$ to a mean on $L^\infty({\cal S},M_a({\cal S}))$ in terms of asymptotically central bounded approximate identities in $M_a({\cal S})$. We also consider topological inner invariant means on $L^\infty({\cal S},M_a({\cal S}))$ to study strict inner amenability of $M_a({\cal S})$ and their relation with strict inner amenability of ${\cal S}$.
#### Article information
Source
Bull. Belg. Math. Soc. Simon Stevin, Volume 14, Number 4 (2007), 699-708.
Dates
First available in Project Euclid: 15 November 2007
https://projecteuclid.org/euclid.bbms/1195157138
Digital Object Identifier
doi:10.36045/bbms/1195157138
Mathematical Reviews number (MathSciNet)
MR2384465
Zentralblatt MATH identifier
1141.43001
#### Citation
Bami, M. Lashkarizadeh; Mohammadzadeh, B.; Nasr-Isfahani, R. Inner invariant extensions of Dirac measures on compactly cancellative topological semigroups. Bull. Belg. Math. Soc. Simon Stevin 14 (2007), no. 4, 699--708. doi:10.36045/bbms/1195157138. https://projecteuclid.org/euclid.bbms/1195157138
|
2019-10-21 12:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7682445645332336, "perplexity": 1441.1890233212382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987773711.75/warc/CC-MAIN-20191021120639-20191021144139-00538.warc.gz"}
|
https://www.esaral.com/q/in-the-arithmetic-progression-whose-common-difference-is-non-zero-93632
|
In the arithmetic progression whose common difference is non-zero,
Question:
In the arithmetic progression whose common difference is non-zero, the sum of first 3 n terms is equal to the sum of next n terms. Then the ratio of the sum of the first 2 n terms to the next 2 n terms is
(a) 1/5
(b) 2/3
(c) 3/4
(d) none of these
Solution:
(a) 1/5
$S_{3 n}=S_{4 n}-S_{3 n}$
$\Rightarrow 2 S_{3 n}=S_{4 n}$
$\Rightarrow 2 \times \frac{3 n}{2}\{2 a+(3 n-1) d\}=\frac{4 n}{2}\{2 a+(4 n-1) d\}$
$\Rightarrow 3\{2 a+(3 n-1) d\}=2\{2 a+(4 n-1) d\}$
$\Rightarrow 6 a+9 n d-3 d=4 a+8 n d-2 d$
$\Rightarrow 2 a+n d-d=0$
$\Rightarrow 2 a+(n-1) d=0$ ...(1)
Required ratio: $\frac{S_{2 n}}{S_{4 n}-S_{2 n}}$
$\frac{S_{2 n}}{S_{4 n}-S_{2 n}}=\frac{\frac{2 n}{2}\{2 a+(2 n-1) d\}}{\frac{4 n}{2}\{2 a+(4 n-1) d\}-\frac{2 n}{2}\{2 a+(2 n-1) d\}}$
$=\frac{n(n d)}{2 n(3 n d)-n(n d)}$
$=\frac{1}{6-1}$
$=\frac{1}{5}$
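A quick numeric check (not part of the original solution): pick any $n$ and $d \neq 0$, set $a$ from condition (1), and verify both the hypothesis and the ratio:

```python
def S(m, a, d):
    # Sum of the first m terms of an AP with first term a, common difference d.
    return m * (2 * a + (m - 1) * d) / 2

n, d = 7, 3.0
a = -(n - 1) * d / 2          # from 2a + (n-1)d = 0

# sum of first 3n terms equals sum of the next n terms
assert abs(S(3 * n, a, d) - (S(4 * n, a, d) - S(3 * n, a, d))) < 1e-9
ratio = S(2 * n, a, d) / (S(4 * n, a, d) - S(2 * n, a, d))
print(ratio)                   # 0.2 == 1/5
```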
|
2023-03-29 00:30:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6706821322441101, "perplexity": 357.04325981996664}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00180.warc.gz"}
|
https://www.physicsforums.com/threads/rewriting-equation.548362/
|
# Rewriting equation
## Homework Statement
This is for a physics problem. I need to solve this for d, but I'm not sure how.
## The Attempt at a Solution
I've gotten up to:
$\frac{2d}{g} = (t - \frac{d}{v})^{2}$
When I multiply the right term out, it becomes a mess, everything I do makes it ugly. What should I try doing?
My main source of confusion is that multiplying that out gives me:
$t^{2} - \frac{2dt}{v} + \frac{d^{2}}{v^{2}}$
While my solution manual gives the last term as:
(1+v^2)d^2.
Those are not the same. What gives?
Last edited:
SammyS
Staff Emeritus
Homework Helper
Gold Member
If squaring the right hand side makes it too ugly for you, then try taking the square root of both sides. That makes the math a little more challenging,
You could multiply both sides by v2 then square the right hand side. Maybe not quite so ugly.
My main problem is that the solution manual I have gives steps that I just don't follow.
They multiply out (t-d/v)^2 and get (1+v^2)d^2 for the last term. I literally haven't the slightest how it comes to that, I just get plain old d^2/v^2.. and they aren't the same.
SammyS
Staff Emeritus
Homework Helper
Gold Member
How about scanning that solution & posting the image?
Sure thing, gimme a few minutes.
http://img585.imageshack.us/img585/5854/physicsstone.png [Broken]
After "square both sides to obtain.." I don't know how they get that. At all.
Last edited by a moderator:
SammyS
Staff Emeritus
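Aside (an editorial addition, not a post from the thread): the equation can be solved for $d$ symbolically, which is a useful cross-check against the solution manual; a sympy sketch, with all symbols assumed positive:

```python
from sympy import symbols, Eq, solve

d, g, t, v = symbols('d g t v', positive=True)

# 2d/g = (t - d/v)^2, solved for d; the two roots of the quadratic come back
for root in solve(Eq(2 * d / g, (t - d / v) ** 2), d):
    print(root)
```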
|
2020-04-02 04:35:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973515629768372, "perplexity": 1000.1173025912638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506580.20/warc/CC-MAIN-20200402014600-20200402044600-00107.warc.gz"}
|
https://deepai.org/publication/memory-limited-streaming-pca
|
Memory Limited, Streaming PCA
We consider streaming, one-pass principal component analysis (PCA), in the high-dimensional regime, with limited memory. Here, p-dimensional samples are presented sequentially, and the goal is to produce the k-dimensional subspace that best approximates these points. Standard algorithms require O(p^2) memory; meanwhile no algorithm can do better than O(kp) memory, since this is what the output itself requires. Memory (or storage) complexity is most meaningful when understood in the context of computational and sample complexity. Sample complexity for high-dimensional PCA is typically studied in the setting of the spiked covariance model, where p-dimensional points are generated from a population covariance equal to the identity (white noise) plus a low-dimensional perturbation (the spike) which is the signal to be recovered. It is now well-understood that the spike can be recovered when the number of samples, n, scales proportionally with the dimension, p. Yet, all algorithms that provably achieve this, have memory complexity O(p^2). Meanwhile, algorithms with memory-complexity O(kp) do not have provable bounds on sample complexity comparable to p. We present an algorithm that achieves both: it uses O(kp) memory (meaning storage of any kind) and is able to compute the k-dimensional spike with O(p log p) sample-complexity -- the first algorithm of its kind. While our theoretical analysis focuses on the spiked covariance model, our simulations show that our algorithm is successful on much more general models for the data.
1 Introduction
Principal component analysis is a fundamental tool for dimensionality reduction, clustering, classification, and many more learning tasks. It is a basic preprocessing step for learning, recognition, and estimation procedures. The core computational element of PCA is performing a (partial) singular value decomposition, and much work over the last half century has focused on efficient algorithms (e.g.,
[7] and references therein) and hence on computational complexity.
The recent focus on understanding high-dimensional data, where the dimensionality of the data scales together with the number of available sample points, has led to an exploration of the
sample complexity of covariance estimation. This direction was largely influenced by Johnstone’s spiked covariance model
, where data samples are drawn from a distribution whose (population) covariance is a low-rank perturbation of the identity matrix
[11]. Work initiated there, and also work done in [19] (and references therein), has explored the power of batch PCA in the high-dimensional setting with sub-Gaussian noise, and demonstrated that the singular value decomposition (SVD) of the empirical covariance matrix succeeds in recovering the principal components (extreme eigenvectors of the population covariance) with high probability, given $n = O(p)$ samples.
This paper brings the focus on another critical quantity: memory/storage. This is relevant in the so-called streaming data model, where the samples are collected sequentially, and unless we store them, they are irretrievably gone. (This is similar to what is sometimes referred to as the single-pass model.) The only currently available algorithms with provable sample complexity guarantees either store all samples (note that for more than a single pass over the data, the samples must all be stored) or explicitly form the empirical (typically dense) covariance matrix. Either case requires at least $O(p^2)$ storage. Despite the availability of massive local and distributed storage systems, for high-dimensional applications (e.g., where data points are high resolution photographs, biometrics, video, etc.), $p$ can be extremely large, making $O(p^2)$ storage prohibitive, if not in fact impossible to manage. Indeed, at multiple computing scales, manipulating vectors of length $p$ is possible when storage of $O(p^2)$ numbers is not. A typical desktop may have 10-20 GB of RAM, but will not have more than a few TB of total storage. A modern smart-phone may have as much as a GB of RAM, but has a few GB, not TB, of storage.
We consider the streaming data setting, where data points are generated sequentially, and are never stored. In the setting of the so-called spiked covariance model (and natural generalizations) we show that a simple algorithm requiring $O(kp)$ storage – the best possible – performs as well as batch algorithms (namely, SVD on the empirical covariance matrix), with sample complexity $O(p\log p)$. To the best of our knowledge, this is the only algorithm with both storage complexity and sample complexity guarantees of this kind. We discuss the connection to past work in more detail in Section 2. We introduce the model with all related details in Section 3, and present the solution to the rank-$1$ case, the rank-$k$ case, and the perturbed-rank-$k$ case in Sections 4.1, 4.2 and 4.3, respectively. In Section 5 we provide simulations that not only confirm the theoretical results, but demonstrate that our algorithm works well outside the assumptions of our main theorems.
2 Related Work
Memory- and computation-efficient algorithms that operate on streaming data are plentiful in the literature and many seem to do well in practice. However, there is no algorithm that provably recovers the principal components in the same noise and sample-complexity regime as the batch PCA algorithm does and maintains a provably light memory footprint. Because of the practical relevance, there has been renewed interest recently in this problem, and the fact that this is an important unresolved issue has been pointed out in numerous places, e.g., [21, 1].
A large body of work has focused on the non-statistical data paradigm that deals with a fixed pool of samples. This includes work on online PCA and low-rank matrix approximation in the streaming scenario, including sketching and dimensionality-reduction based techniques.
Online-PCA for regret minimization has been considered in several papers, most recently in [21], where the multiplicative weights approach is adapted for this problem (now experts correspond to subspaces). The goal there is to control the regret, improving on the natural follow-the-leader algorithm that performs batch-PCA at each step. However, the algorithm can require $O(p^2)$ memory in order to store the multiplicative weights. A memory-light variant described in [1] typically requires much less memory, but there are no guarantees for this, and moreover, for certain problem instances, its memory requirement is on the order of $p^2$.
Sub-sampling, dimensionality-reduction and sketching form another family of low-complexity and low-memory techniques; see, e.g., [5, 13, 8]. These save on memory and computation by performing SVD on a resulting smaller matrix. The results in this line of work provide worst-case guarantees over the pool of data, and typically require a rapidly decaying spectrum (which we do not have in our setting) to produce good bounds. More fundamentally, these approaches are not appropriate for data coming from a statistical model such as the spiked covariance model. It is clear that subsampling approaches, for instance, simply correspond to discarding most of the data, and for fundamental sample complexity reasons, cannot work. Sketching produces a similar effect: each column of the sketch is a random sum of the data points. If the data points are, e.g., independent Gaussian vectors, then so will be each element of the sketch, and thus this approach again runs against fundamental sample complexity constraints. Indeed, it is straightforward to check that the guarantees presented in [5, 8] are not strong enough to guarantee recovery of the spike. This is not because the results are weak; it is because they are geared towards worst-case bounds.
Algorithms focused on sequential SVD (e.g., [4, 3], [6],[12] and more recently [2, 9]) seek to have the best subspace estimate at every time (i.e., each time a new data sample arrives) but without performing full-blown SVD at each step. While these algorithms indeed reduce both the computational and memory burden of batch-PCA, there are no rigorous guarantees on the quality of the principal components or on the statistical performance of these methods.
In a Bayesian mindset, some researchers have come up with expectation maximization approaches
[16, 18], that can be used in an incremental fashion. The finite sample behavior is not known.
Stochastic-approximation-based algorithms along the lines of [15] are also quite popular, because of their low computational and memory complexity, and excellent performance in practice. They go under a variety of names, including Incremental PCA (though the term Incremental has been used in the online setting as well [10]), Hebbian learning, and the stochastic power method [1]. The basic algorithms are some version of the following: upon receiving data point $x_t$ at time $t$, update the estimate $U^{(t)}$ of the top $k$ principal components via
$$U^{(t+1)} = \mathrm{Proj}\big(U^{(t)} + \eta_t\, x_t x_t^\top U^{(t)}\big), \qquad (1)$$
where $\mathrm{Proj}(\cdot)$ denotes the "projection" that takes the SVD of the argument, sets the top $k$ singular values to $1$ and the rest to zero (see [1] for further discussion).
While empirically these algorithms perform well, to the best of our knowledge - and efforts - there does not exist any rigorous finite sample guarantee for these algorithms. The analytical challenge seems to be the high variance at each step, which makes direct analysis difficult.
In summary, while much work has focused on memory-constrained PCA, there has as of yet been no work that simultaneously provides sample complexity guarantees competitive with batch algorithms, and also memory/storage complexity guarantees close to the minimal requirement of $O(kp)$ – the memory required to store only the output. We present an algorithm that provably does both.
3 Problem Formulation and Notation
We consider a streaming model, where at each time step $t$ we receive a point $x_t \in \mathbb{R}^p$. Furthermore, any vector that is not explicitly stored can never be revisited. Our goal is to compute the top $k$ principal components of the data: the $k$-dimensional subspace that offers the best squared-error estimate for the points. We assume a probabilistic generative model from which the data is sampled at each step $t$. Specifically, we assume
$$x_t = A z_t + w_t, \qquad (2)$$
where $A \in \mathbb{R}^{p\times k}$ is a fixed matrix, $z_t \sim \mathcal{N}(0_{k\times 1}, I_{k\times k})$ is a multivariate normal random variable, and the "noise" vector $w_t \sim \mathcal{N}(0_{p\times 1}, \sigma^2 I_{p\times p})$ is also sampled from a multivariate normal distribution. Furthermore, we assume that all random vectors $\{z_t, w_t\}_{t \geq 1}$ are mutually independent.
In this regime, it is well-known that batch-PCA is asymptotically consistent (hence recovering $A$ up to unitary transformations) with the number of samples scaling as $n = O(p)$ [20]. It is interesting to note that in this high-dimensional regime, the signal-to-noise ratio quickly approaches zero, as the signal, or "elongation" of the major axis, $\|Az_t\|_2$, is $O(1)$, while the noise magnitude, $\|w_t\|_2$, scales as $O(\sigma\sqrt{p})$. The central goal of this paper is to provide finite sample guarantees for a streaming algorithm that requires memory no more than $O(kp)$ and matches the consistency results of batch PCA in the sampling regime (possibly with additional log factors, or factors depending on $\sigma$ and $k$).
We denote matrices by capital letters (e.g. $A$) and vectors by lower-case bold-face letters (e.g. $\mathbf{x}$). $\|\mathbf{x}\|_q$ denotes the $\ell_q$ norm of $\mathbf{x}$; $\|\mathbf{x}\|$ denotes the $\ell_2$ norm of $\mathbf{x}$. $\|A\|$ or $\|A\|_2$ denotes the spectral norm of $A$, while $\|A\|_F$ denotes the Frobenius norm of $A$. Without loss of generality (WLOG), we assume that $\|A\|_2 = 1$. Finally, we write $\langle \mathbf{a}, \mathbf{b} \rangle$ for the inner product between $\mathbf{a}$ and $\mathbf{b}$. In proofs the constant $C$ is used loosely and its value may vary from line to line.
4 Algorithm and Guarantees
In this section, we present our proposed algorithm and its finite sample analysis. It is a block-wise stochastic variant of the classical power-method. Stochastic versions of the power method are already popular in the literature and are known to have good empirical performance; see [1] for a nice review of such methods. However, the main impediment to the analysis of such stochastic algorithms (as in (1)) is the potentially large variance of each step, due primarily to the high-dimensional regime we consider, and the vanishing SNR.
This motivated us to consider a modified stochastic power method algorithm that has a variance reduction step built in. At a high level, our method updates the estimate only once per "block", and within one block we average out the noise to reduce the variance.
Below, we first illustrate the main ideas of our method as well as our sample complexity proof for the simpler rank-$1$ case. The rank-$1$ and rank-$k$ algorithms are so similar that we present them in the same panel. We provide the rank-$k$ analysis in Section 4.2. We note that, while our algorithm describes $\{x_t\}$ as "input", we mean this in the streaming sense: the data are nowhere stored, and can never be revisited unless the algorithm explicitly stores them.
4.1 Rank-One Case
We first consider the rank-$1$ case, for which each sample is generated as $x_t = u z_t + w_t$, where $u \in \mathbb{R}^p$ is the principal component that we wish to recover. Our algorithm is a block-wise method where all the samples are divided into blocks of size $B$ (for simplicity we assume the number of blocks is an integer). In the $(\tau+1)$-st block, we compute
$$s_{\tau+1} = \left(\frac{1}{B}\sum_{t=B\tau+1}^{B(\tau+1)} x_t x_t^\top\right) q_\tau. \qquad (3)$$
Then, the iterate is updated as $q_{\tau+1} = s_{\tau+1}/\|s_{\tau+1}\|_2$. Note that $s_{\tau+1}$ can easily be computed in an online manner, where $O(p)$ operations are required per step. Furthermore, storage requirements are also linear in $p$.
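To make the block update concrete, here is a minimal NumPy sketch of this rank-1 block-wise power method; it illustrates equation (3) and is not the paper's reference implementation (the constants p, sigma, B, T are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma = 200, 0.5
u = rng.standard_normal(p)
u /= np.linalg.norm(u)                 # true spike, ||u||_2 = 1

def sample(B):
    """B samples x_t = u z_t + w_t from the rank-1 spiked model."""
    z = rng.standard_normal(B)
    W = sigma * rng.standard_normal((B, p))
    return z[:, None] * u + W

q = rng.standard_normal(p)
q /= np.linalg.norm(q)                 # random unit-norm initialization
B, T = 2000, 20
for _ in range(T):                     # one pass over T blocks of B samples
    X = sample(B)                      # in true streaming, the x_t arrive one
    s = X.T @ (X @ q) / B              # at a time; s can be accumulated with
    q = s / np.linalg.norm(s)          # only O(p) memory

print(abs(u @ q))                      # close to 1 on success
```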
4.1.1 Analysis
We now present the sample complexity analysis of our proposed method (Algorithm 1). We show that, using a number of samples as given in Theorem 1 below, Algorithm 1 obtains a solution of accuracy $\epsilon$, i.e. $\mathrm{dist}(u, q_T) \leq \epsilon$.
Theorem 1.
Denote the data stream by $x_1, \ldots, x_n$, where every $x_t$ is generated by (2). Set the total number of block-updates $T = O\!\left(\log(p/\epsilon)/\log(1/\gamma)\right)$ with $\gamma = \frac{\sigma^2+0.5}{\sigma^2+0.75}$, and the block size $B = n/T$. Then, with constant probability, $\mathrm{dist}(u, q_T) \leq \epsilon$, where $q_T$ is the $T$-th iterate of Algorithm 1. That is, Algorithm 1 obtains an $\epsilon$-accurate solution with number of samples ($n$) given by:
$$n = \tilde{\Omega}\!\left(\frac{\left(1+3(\sigma+\sigma^2)\sqrt{p}\right)^2\log(p/\epsilon)}{\epsilon^2\,\log\!\big((\sigma^2+0.75)/(\sigma^2+0.5)\big)}\right).$$
Note that in the total sample complexity, we use the notation $\tilde{\Omega}(\cdot)$ to suppress the extra $\log$ factor for clarity of exposition, as $p$ already appears in the expression linearly.
Proof.
The proof decomposes the current iterate into the component of the current iterate, , in the direction of the true principal component (the spike) , and the perpendicular component, showing that the former eventually dominates. Doing so hinges on three key components: (a) for large enough , the empirical covariance matrix is close to the true covariance matrix , i.e., is small. In the process, we obtain “tighter” bounds for for fixed ; (b) with probability (or any other constant probability), the initial point has a component of at least magnitude along the true direction ; (c) after iterations, the error in estimation is at most where is a constant.
There are several results that we use repeatedly, which we collect here, and prove individually in the appendix.
Lemmas 4, 5 and 6. Let $B$ and the data stream be as defined in the theorem. Then:
• (Lemma 4): With high probability, we have:
$$\left\|\frac{1}{B}\sum_t x_t x_t^\top - uu^\top - \sigma^2 I\right\|_2 \leq \epsilon.$$
• (Lemma 5): With high probability, we have:
$$u^\top s_{\tau+1} \geq u^\top q_\tau\,(1+\sigma^2)\left(1-\frac{\epsilon}{4(1+\sigma^2)}\right),$$
where $s_{\tau+1}$ is the block average defined in (3).
• (Lemma 6): Let $q_0$ be the initial guess for $u$, given by Steps 1 and 2 of Algorithm 1. Then, with constant probability: $(u^\top q_0)^2 \geq C_0/p$, where $C_0$ is a universal constant.
Step (a) is proved in Lemmas 4 and 5, while Lemma 6 provides the required result for the initial vector $q_0$. Using these lemmas, we next complete the proof of the theorem. We note that both (a) and (b) follow from well-known results; we provide them for completeness.
Let $q_\tau = \sqrt{1-\delta_\tau}\,u + \sqrt{\delta_\tau}\,g_\tau$, where $g_\tau \perp u$ is the (unit-norm) component of $q_\tau$ perpendicular to $u$, and $\sqrt{1-\delta_\tau}$ is the magnitude of the component of $q_\tau$ along $u$. Note that $g_\tau$ may well change at each iteration; we only wish to show $\delta_\tau \to 0$.
Now, using Lemma 5, the following holds with high probability:
$$u^\top s_{\tau+1} \geq \sqrt{1-\delta_\tau}\,(1+\sigma^2)\left(1-\frac{\epsilon}{4(1+\sigma^2)}\right). \qquad (4)$$
Next, we consider the component of $s_{\tau+1}$ that is perpendicular to $u$:
$$g_{\tau+1}^\top s_{\tau+1} = g_{\tau+1}^\top\left(\frac{1}{B}\sum_{t=B\tau+1}^{B(\tau+1)} x_t x_t^\top\right) q_\tau = g_{\tau+1}^\top (M + E_\tau)\, q_\tau,$$
where $M = uu^\top + \sigma^2 I$ and $E_\tau$ is the error matrix: $E_\tau = \frac{1}{B}\sum_t x_t x_t^\top - M$. Using Lemma 4, $\|E_\tau\|_2 \leq \epsilon$ with high probability. Hence, with high probability:
$$g_{\tau+1}^\top s_{\tau+1} \leq \sigma^2\, g_{\tau+1}^\top q_\tau + \|g_{\tau+1}\|_2 \|E_\tau\|_2 \|q_\tau\|_2 \leq \sigma^2\sqrt{\delta_\tau} + \epsilon. \qquad (5)$$
Now, since $q_{\tau+1} = s_{\tau+1}/\|s_{\tau+1}\|_2$,
$$\delta_{\tau+1} = (g_{\tau+1}^\top q_{\tau+1})^2 = \frac{(g_{\tau+1}^\top s_{\tau+1})^2}{(u^\top s_{\tau+1})^2 + (g_{\tau+1}^\top s_{\tau+1})^2} \overset{(i)}{\leq} \frac{(g_{\tau+1}^\top s_{\tau+1})^2}{(1-\delta_\tau)\left(1+\sigma^2-\frac{\epsilon}{4}\right)^2 + (g_{\tau+1}^\top s_{\tau+1})^2} \overset{(ii)}{\leq} \frac{\left(\sigma^2\sqrt{\delta_\tau}+\epsilon\right)^2}{(1-\delta_\tau)\left(1+\sigma^2-\frac{\epsilon}{4}\right)^2 + \left(\sigma^2\sqrt{\delta_\tau}+\epsilon\right)^2}, \qquad (6)$$
where $(i)$ follows from (4) and $(ii)$ follows from (5) along with the fact that $\frac{x^2}{a+x^2}$ is an increasing function in $x$ for $a, x \geq 0$.
Assuming $\epsilon$ is sufficiently small, using (6), and bounding the failure probability over the blocks with a union bound, we get with high probability:
$$\delta_{\tau+1} \leq \frac{\delta_\tau\left(\sigma^2+1/2\right)^2}{(1-\delta_\tau)\left(\sigma^2+3/4\right)^2 + \delta_\tau\left(\sigma^2+1/2\right)^2} \overset{(i)}{\leq} \frac{\gamma^{2\tau}\delta_0}{1-(1-\gamma^{2\tau})\delta_0} \overset{(ii)}{\leq} C_1\,\gamma^{2\tau}\,p, \qquad (7)$$
where $\gamma = \frac{\sigma^2+1/2}{\sigma^2+3/4}$ and $C_1$ is a global constant. Inequality $(ii)$ follows from Lemma 6; to prove $(i)$, we need one final result: the following lemma shows that the recursion given by (7) decreases at a fast rate. Interestingly, the rate of decrease in error initially (for small $\tau$) might be sub-linear, but for large enough $\tau$ the rate turns out to be linear. We defer the proof to the appendix.
Lemma 2.
If, for every $\tau \leq t$ and some $\gamma < 1$, we have $\delta_{\tau+1} \leq \frac{\gamma^2\,\delta_\tau}{1-(1-\gamma^2)\,\delta_\tau}$, then
$$\delta_{t+1} \leq \frac{\gamma^{2t+2}\,\delta_0}{1-(1-\gamma^{2t+2})\,\delta_0}.$$
Hence, using the above equation, after $T$ block-updates we have, with high probability, $\delta_T \leq \epsilon^2$. The result now follows by noting that $\mathrm{dist}(u, q_T) = \sqrt{\delta_T} \leq \epsilon$. ∎
Remark: Note that in Theorem 1, the probability of accurate principal component recovery is a constant and does not decay with $p$. One can correct this by paying a multiplicative $O(\log(1/\delta))$ price either in storage or in sample complexity: for the former, we can run $O(\log(1/\delta))$ instances of Algorithm 1 in parallel; alternatively, we can run Algorithm 1 $O(\log(1/\delta))$ times on fresh data each time, using the next block of data to evaluate the old solutions, always keeping the best one. Either approach guarantees a success probability of at least $1-\delta$.
4.2 General Rank-k Case
In this section, we consider the general rank-$k$ PCA problem where each sample is assumed to be generated using the model of equation (2), where $A \in \mathbb{R}^{p\times k}$ represents the $k$ principal components that need to be recovered. Let $A = U\Lambda V^\top$ be the SVD of $A$, where $U \in \mathbb{R}^{p\times k}$ and $V \in \mathbb{R}^{k\times k}$. The matrices $U$ and $V$ are orthogonal, i.e., $U^\top U = I$ and $V^\top V = I$, and $\Lambda \in \mathbb{R}^{k\times k}$ is a diagonal matrix with diagonal elements $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k > 0$. The goal is to recover the space spanned by $U$, i.e., $\mathrm{span}(U)$. Without loss of generality, we can assume that $\lambda_1 = 1$.
Similar to the rank-$1$ problem, our algorithm for the rank-$k$ problem can be viewed as a streaming variant of the classical orthogonal iteration used for SVD. But unlike the rank-$1$ case, we require a more careful analysis, as we need to bound spectral norms of various quantities in intermediate steps, and simple, crude analysis can lead to significantly worse bounds. Interestingly, the analysis is entirely different from the standard analysis of the orthogonal iteration: there, the empirical estimate of the covariance matrix is fixed, while in our case it varies with each block.
For the general rank-$k$ problem, we use the largest-principal-angle-based distance function between any two given subspaces:
$$\mathrm{dist}\big(\mathrm{span}(U), \mathrm{span}(V)\big) = \mathrm{dist}(U, V) = \|U_\perp^\top V\|_2 = \|V_\perp^\top U\|_2,$$
where $U_\perp$ and $V_\perp$ denote orthogonal bases of the subspaces perpendicular to $\mathrm{span}(U)$ and $\mathrm{span}(V)$, respectively. For the spiked covariance model, it is straightforward to see that this is equivalent to the usual PCA figure-of-merit, the expressed variance.
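As a side note, this distance is easy to compute numerically; a small NumPy sketch (an editor's illustration, not code from the paper) uses the identity $\|U_\perp^\top V\|_2 = \|(I - UU^\top)V\|_2$ for orthonormal $U$:

```python
import numpy as np

def subspace_dist(U, V):
    """Largest-principal-angle distance between span(U) and span(V).

    U, V: p x k matrices with orthonormal columns.
    Uses ||U_perp^T V||_2 = ||(I - U U^T) V||_2."""
    P = V - U @ (U.T @ V)              # project V onto the complement of span(U)
    return np.linalg.norm(P, 2)        # spectral norm = sin of largest angle

# tiny check: a subspace is at distance ~0 from itself
p, k = 6, 2
U = np.linalg.qr(np.random.randn(p, k))[0]
print(subspace_dist(U, U))             # ~0.0
```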
Theorem 3.
Consider a data stream, where $x_t$ for every $t$ is generated by (2), and the SVD of $A$ is given by $A = U\Lambda V^\top$. Let, wlog, $1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k > 0$. Let $T$ be the number of block-updates and $B = n/T$ the block size.
Then, after $T$ $B$-size-block-updates, with constant probability, $\mathrm{dist}(U, Q_T) \leq \epsilon$. Hence, the sufficient number of samples for $\epsilon$-accurate recovery of all the top-$k$ principal components is:
$$n = \tilde{\Omega}\!\left(\frac{\left((1+\sigma)^2\sqrt{k} + \sigma\sqrt{(1+\sigma^2)k}\,\sqrt{p}\right)^2\log(p/k\epsilon)}{\lambda_k^4\,\epsilon^2\,\log\!\left(\frac{\sigma^2+0.75\lambda_k^2}{\sigma^2+0.5\lambda_k^2}\right)}\right).$$
Again, we use $\tilde{\Omega}(\cdot)$ to suppress the extra $\log$ factor.
The key part of the proof requires the following additional lemmas that bound the energy of the current iterate along the desired subspace and its perpendicular space (Lemmas 8 and 9), and Lemma 10, which controls the quality of the initialization.
Lemmas 8, 9 and 10. Let the data stream, $A$, $U$, and $\Lambda$ be as defined in Theorem 3, let $\sigma^2$ be the variance of the noise, and let $Q_\tau$ be the $\tau$-th iterate of Algorithm 1.
• (Lemma 8): For every unit vector $v \in \mathbb{R}^k$ and every $\tau$, with high probability we have:
$$\|U^\top F_{\tau+1} Q_\tau v\|_2 \geq \left(\lambda_k^2 + \sigma^2 - \frac{\lambda_k^2\epsilon}{4}\right)\sqrt{1-\|U_\perp^\top Q_\tau\|_2^2}.$$
• (Lemma 9): With high probability, $\|U_\perp^\top F_{\tau+1} Q_\tau\|_2 \leq \sigma^2\|U_\perp^\top Q_\tau\|_2 + 0.5\,\lambda_k^2\epsilon$.
• (Lemma 10): Let $Q_0$ be sampled uniformly at random as in Algorithm 1. Then, with at least constant probability: $1 - \|U_\perp^\top Q_0\|_2^2 \geq C\,k/p$ for a global constant $C$.
We provide the proof of the lemmas and theorem in the appendix.
4.3 Perturbation-tolerant Subspace Recovery
While our results thus far assume that $A$ has rank exactly $k$, and that $k$ is known a priori, here we show that both of these can be relaxed; hence our results hold in a quite broad setting.
Let $x_t = A z_t + w_t$ be the $t$-th step sample, with $A \in \mathbb{R}^{p\times r}$, where $r$ is the true rank of $A$, which is unknown. We run Algorithm 1 with rank $k \geq r$, and the goal is to recover a subspace $\mathrm{span}(Q_T)$ such that $\mathrm{span}(U)$ is contained in $\mathrm{span}(Q_T)$.
We first observe that the largest-principal-angle-based distance function that we use in the previous section can directly be used for our more general setting. That is, $\mathrm{dist}(U, Q_\tau)$ measures the component of $\mathrm{span}(U)$ "outside" the subspace $\mathrm{span}(Q_\tau)$, and the goal is to show that this component is at most $\epsilon$.
Now, our analysis can be easily modified to handle this more general setting, as crucially our distance function does not change. Naturally, the number of samples we require now increases according to the $r$-th singular value $\lambda_r$. In particular, if
$$n = \tilde{\Omega}\!\left(\frac{\left((1+\sigma)^2\sqrt{r} + \sigma\sqrt{(1+\sigma^2)r}\,\sqrt{p}\right)^2\log(p/r\epsilon)}{\lambda_r^4\,\epsilon^2\,\log\!\left(\frac{\sigma^2+0.75\lambda_r^2}{\sigma^2+0.5\lambda_r^2}\right)}\right),$$
then $\mathrm{dist}(U, Q_T) \leq \epsilon$. Furthermore, if we assume $k \geq cr$ for a large enough constant $c$, then the initialization step provides a better initial distance, which in turn yields a tighter sample complexity: the $p$ in the numerator above can be replaced by $k$.
5 Experiments
In this section, we show that, as predicted by our theoretical results, our algorithm performs close to the optimal batch SVD. We provide the results from simulating the spiked covariance model, and demonstrate the phase-transition in the probability of successful recovery that is inherent to the statistical problem. Then we stray from the analyzed model and performance metric and test our algorithm on real world–and some very big–datasets, using the metric of explained variance.
In the experiments for Figures 1 (a)-(b), we draw data from the generative model of (2). Our results are averaged over at least independent runs. Algorithm 1 uses the block size prescribed in Theorem 3, with the empirically tuned constant of . As expected, our algorithm exhibits linear scaling with respect to the ambient dimension – the same as the batch SVD. The missing point on batch SVD’s curve (Figure 1(a)), corresponds to . Performing SVD on a dense matrix, either fails or takes a very long time on most modern desktop computers; in contrast, our streaming algorithm easily runs on this size problem. The phase transition plot in Figure 1(b) shows the empirical sample complexity on a large class of problems and corroborates the scaling with respect to the noise variance we obtain theoretically.
Figures 1 (c)-(d) complement our complete treatment of the spiked covariance model, with some out-of-model experiments. We used three bag-of-words datasets from [14]. We evaluated our algorithm’s performance with respect to the fraction of explained variance metric: given the matrix output from the algorithm, and all the provided samples in matrix , the fraction of explained variance is defined as . To be consistent with our theory, for a dataset of samples of dimension , we set the number of blocks to be and the size of blocks to in our algorithm. The NIPS dataset is the smallest, with documents and K words and allowed us to compare our algorithm with the optimal, batch SVD. We had the two algorithms work on the document space () and report the results in Figure 1(c). The dashed line represents the optimal using samples. The figure is consistent with our theoretical result: our algorithm performs as well as the batch, with an added factor in the sample complexity.
Finally, in Figure 1 (d), we show our algorithm’s ability to tackle very large problems. Both the NY Times and PubMed datasets are of prohibitive size for traditional batch methods – the latter including million documents on a vocabulary of thousand words – so we just report the performance of Algorithm 1. It was able to extract the top components for each dataset in a few hours on a desktop computer. A second pass was made on the data to evaluate the results, and we saw 7-10 percent of the variance explained on spaces with .
References
• [1] R. Arora, A. Cotter, K. Livescu, and N. Srebro. Stochastic optimization for PCA and PLS. In 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, 2012.
• [2] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, page 704–711, 2010.
• [3] M. Brand. Fast low-rank modifications of the thin singular value decomposition. Linear algebra and its applications, 415(1):20–30, 2006.
• [4] Matthew Brand. Incremental singular value decomposition of uncertain data with missing values. Computer Vision—ECCV 2002, page 707–720, 2002.
• [5] Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the 41st annual ACM symposium on Theory of computing, pages 205–214, 2009.
• [6] P. Comon and G. H. Golub. Tracking a few extreme singular values and vectors in signal processing. Proceedings of the IEEE, 78(8):1327–1343, 1990.
• [7] Gene H. Golub and Charles F. Van Loan. Matrix computations, volume 3. JHUP, 2012.
• [8] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
• [9] J. He, L. Balzano, and J. Lui. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
• [10] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. The Journal of Machine Learning Research, 1:281–309, 2001.
• [11] Iain M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist., 29(2):295–327, 2001.
• [12] Y. Li. On incremental and robust subspace learning. Pattern recognition, 37(7):1509–1518, 2004.
• [13] Boaz Nadler. Finite sample approximation results for principal component analysis: a matrix perturbation approach. The Annals of Statistics, page 2791–2817, 2008.
• [14] Ian Porteous, David Newman, Alexander Ihler, Arthur Asuncion, Padhraic Smyth, and Max Welling. Fast collapsed gibbs sampling for latent dirichlet allocation. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, page 569–577, 2008.
• [15] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, page 400–407, 1951.
• [16] Sam Roweis. EM algorithms for PCA and SPCA. Advances in neural information processing systems, page 626–632, 1998.
• [17] Mark Rudelson and Roman Vershynin. Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62(12):1707–1739, 2009.
• [18] Michael E. Tipping and Christopher M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
• [19] R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, page 1–32, 2010.
• [20] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
• [21] Manfred K. Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287–2320, 2008.
Appendix A Lemmas from Section 4.1
We first give the statement of all the Lemmas whose proofs we omitted in the body of the paper. Then we provide some results from the literature – what we call Preliminaries – and then we prove Theorem 3 and the supporting lemmas.
Lemma 4.
Let $B$ and the data stream be as defined in Theorem 1. Then, with high probability we have:
$$\left\|\frac{1}{B}\sum_t x_t x_t^\top - uu^\top - \sigma^2 I\right\|_2 \leq \epsilon.$$
Lemma 5.
Let $B$ and the data stream be as defined in Theorem 1. Then, with high probability we have:
$$u^\top s_{\tau+1} \geq u^\top q_\tau\,(1+\sigma^2)\left(1-\frac{\epsilon}{4(1+\sigma^2)}\right),$$
where $s_{\tau+1}$ is the block average defined in (3).
Lemma 6.
Let $q_0$ be the initial guess for $u$, given by Steps 1 and 2 of Algorithm 1. Then, with constant probability: $(u^\top q_0)^2 \geq C_0/p$, where $C_0$ is a universal constant.
Lemma 7.
If, for every $\tau \leq t$ and some $\gamma < 1$, we have $\delta_{\tau+1} \leq \frac{\gamma^2\,\delta_\tau}{1-(1-\gamma^2)\,\delta_\tau}$, then
$$\delta_{t+1} \leq \frac{\gamma^{2t+2}\,\delta_0}{1-(1-\gamma^{2t+2})\,\delta_0}.$$
Appendix B Lemmas from Section 4.2
Lemma 8.
Let $A$, $U$, $\Lambda$, and $Q_\tau$ be as defined in Theorem 3. Also, let $\sigma^2$ be the variance of the noise, and let $Q_\tau$ be the $\tau$-th iterate of Algorithm 1. Then, for every unit vector $v \in \mathbb{R}^k$ and every $\tau$, with high probability we have:
$$\|U^\top F_{\tau+1} Q_\tau v\|_2 \geq \left(\lambda_k^2 + \sigma^2 - \frac{\lambda_k^2\epsilon}{4}\right)\sqrt{1-\|U_\perp^\top Q_\tau\|_2^2}.$$
Lemma 9.
Let $A$, $U$, $\Lambda$, $Q_\tau$, and $F_{\tau+1}$ be as defined in Lemma 8. Then, with high probability, $\|U_\perp^\top F_{\tau+1} Q_\tau\|_2 \leq \sigma^2\|U_\perp^\top Q_\tau\|_2 + 0.5\,\lambda_k^2\epsilon$.
Lemma 10.
Let $Q_0$ be sampled uniformly at random from the set of all $k$-dimensional subspaces (see the Initialization Steps of Algorithm 1). Then, with at least constant probability: $1 - \|U_\perp^\top Q_0\|_2^2 \geq C\,k/p$, where $C$ is a global constant.
Appendix C Preliminaries
Lemma 11 (Lemma 5.4 of [20]).
Let $A$ be a symmetric $n \times n$ matrix, and let $N_\epsilon$ be an $\epsilon$-net of the unit sphere $S^{n-1}$ for some $\epsilon \in [0, 1/2)$. Then,
$$\|A\|_2 \leq \frac{1}{1-2\epsilon}\,\sup_{x\in N_\epsilon}|\langle Ax, x\rangle|.$$
Lemma 12 (Proposition 2.1 of [19]).
Consider independent random vectors in ,
, which have sub-Gaussian distribution with parameter
. Then for every with probability at least one has,
Lemma 13 (Corollary 3.5 of [20]).
Let $A$ be an $N \times n$ matrix whose entries are independent standard normal random variables. Then for every $t \geq 0$, with probability at least $1-2\exp(-t^2/2)$ one has,
$$\sqrt{N}-\sqrt{n}-t \leq \sigma_{\min}(A) \leq \sigma_1(A) \leq \sqrt{N}+\sqrt{n}+t.$$
Lemma 14 (Theorem 1.2 of [17]).
Let $\xi_1, \ldots, \xi_k$ be independent centered real random variables with variances at least $1$ and subgaussian moments bounded by a constant $K$. Let $A$ be a $k \times k$ matrix whose rows are independent copies of the random vector $(\xi_1, \ldots, \xi_k)$. Then for every $\epsilon \geq 0$ one has
$$\Pr\big(\sigma_{\min}(A) \leq \epsilon/\sqrt{k}\big) \leq C\epsilon + c^k,$$
where $C > 0$ and $c \in (0,1)$ depend only on $K$. Note that $K$ is a constant for standard Gaussian variables.
Lemma 15.
Let $x_i \in \mathbb{R}^m$, $1 \leq i \leq B$, be i.i.d. standard multivariate normal vectors. Also, let $y_i \in \mathbb{R}^n$ be i.i.d. standard normal vectors, independent of the $x_i$. Then, w.p. $1-\delta$,
$$\left\|\frac{1}{B}\sum_i x_i y_i^\top\right\|_2 \leq \sqrt{\frac{C\max(m,n)\log(2/\delta)}{B}}.$$
Proof.
Let and let . Then, the goal is to show that, the following holds w.p. : for all s.t. .
We prove the lemma by first showing that the above-mentioned result holds for any fixed unit vector, and then use a standard epsilon-net argument to extend it to all unit vectors.
Let $N$ be a $\frac{1}{4}$-net of the unit sphere in $\mathbb{R}^n$. Then, using Lemma 5.4 of [20] (see Lemma 11),
$$\left\|\frac{1}{Bm}M^\top M\right\|_2 \leq 2\max_{v\in N}\frac{1}{Bm}\|Mv\|_2^2. \qquad (8)$$
Now, for any fixed : , where . Hence,
$$\|Mv\|_2^2 = \sum_{\ell=1}^m\left(\sum_{i=1}^B x_{i\ell}\,c_i\right)^2.$$
Now, where . Hence, where .
Therefore, where . Now,
$$\Pr\!\left(\frac{\|c\|_2^2\,\|h\|_2^2}{Bm} \geq 1+\gamma\right) \leq \Pr\!\left(\frac{\|c\|_2^2}{B} \geq \sqrt{1+\gamma}\right) + \Pr\!\left(\frac{\|h\|_2^2}{m} \geq \sqrt{1+\gamma}\right) \overset{\zeta_1}{\leq} 2\exp\!\left(-\frac{B\gamma^2}{32}\right) + 2\exp\!\left(-\frac{m\gamma^2}{32}\right) \leq 4\exp\!\left(-\frac{m\gamma^2}{32}\right), \qquad (9)$$
where $\zeta_1$ follows from Lemma 13, and the last step uses $m \leq B$.
Using (8) and (9), the following holds with high probability:
$$\frac{\|M\|_2^2}{Bm} \leq 1+2\gamma. \qquad (10)$$
The result now follows by setting appropriately and assuming for small enough . ∎
Appendix D Proof of Theorem 3
Recall that our algorithm proceeds in a blockwise manner; for each block of $B$ samples, we compute
$$S_{\tau+1} = \left(\frac{1}{B}\sum_{t=B\tau+1}^{B(\tau+1)} x_t x_t^\top\right) Q_\tau, \qquad (11)$$
where $Q_\tau \in \mathbb{R}^{p\times k}$ is the $\tau$-th block iterate and is an orthogonal matrix, i.e., $Q_\tau^\top Q_\tau = I$. Given $S_{\tau+1}$, the next iterate, $Q_{\tau+1}$, is computed by the QR-decomposition of $S_{\tau+1}$. That is,
$$S_{\tau+1} = Q_{\tau+1}R_{\tau+1}, \qquad (12)$$
where $R_{\tau+1}$ is an upper-triangular matrix.
Proof.
By using the update for $Q_{\tau+1}$ (see (11), (12)):
$$Q_{\tau+1}R_{\tau+1} = F_{\tau+1}Q_\tau, \qquad (13)$$
where $F_{\tau+1} = \frac{1}{B}\sum_{t=B\tau+1}^{B(\tau+1)} x_t x_t^\top$. That is,
$$U_\perp^\top Q_{\tau+1}R_{\tau+1}v = U_\perp^\top F_{\tau+1}Q_\tau v, \quad \forall v\in\mathbb{R}^k, \qquad (14)$$
where $U_\perp$ is an orthogonal basis of the subspace orthogonal to $\mathrm{span}(U)$. Now, let $v_1$ be the singular vector corresponding to the largest singular value of $U_\perp^\top Q_{\tau+1}$; then:
$$\|U_\perp^\top Q_{\tau+1}\|_2^2 = \frac{\|U_\perp^\top Q_{\tau+1}v_1\|_2^2}{\|v_1\|_2^2} = \frac{\|U_\perp^\top Q_{\tau+1}R_{\tau+1}\tilde v_1\|_2^2}{\|R_{\tau+1}\tilde v_1\|_2^2} \overset{(i)}{=} \frac{\|U_\perp^\top Q_{\tau+1}R_{\tau+1}\tilde v_1\|_2^2}{\|U^\top Q_{\tau+1}R_{\tau+1}\tilde v_1\|_2^2 + \|U_\perp^\top Q_{\tau+1}R_{\tau+1}\tilde v_1\|_2^2} \overset{(ii)}{=} \frac{\|U_\perp^\top F_{\tau+1}Q_\tau\tilde v_1\|_2^2}{\|U^\top F_{\tau+1}Q_\tau\tilde v_1\|_2^2 + \|U_\perp^\top F_{\tau+1}Q_\tau\tilde v_1\|_2^2}, \qquad (15)$$
where $\tilde v_1 = R_{\tau+1}^{-1}v_1$. $(i)$ follows as $Q_{\tau+1}$ has orthonormal columns and the columns of $U$ and $U_\perp$ together form a complete orthogonal basis; $(ii)$ follows by using (13). The existence of $R_{\tau+1}^{-1}$ follows using Lemma 8, along with the fact that $\sigma_{\min}(R_{\tau+1}) = \sigma_{\min}(S_{\tau+1}) > 0$.
Now, using (15) with Lemmas 8 and 9, and using the fact that $\frac{x^2}{a+x^2}$ is an increasing function of $x$ for all $a, x \geq 0$, we get, with high probability:
$$\|U_\perp^\top Q_{\tau+1}\|_2^2 \leq \frac{\left(\sigma^2\|U_\perp^\top Q_\tau\|_2 + \lambda_k^2\epsilon/2\right)^2}{\left(\lambda_k^2+\sigma^2-\frac{\lambda_k^2\epsilon}{4}\right)^2\left(1-\|U_\perp^\top Q_\tau\|_2^2\right) + \left(\sigma^2\|U_\perp^\top Q_\tau\|_2 + 0.5\,\lambda_k^2\epsilon\right)^2}.$$
Now, assuming
|
2020-07-13 05:51:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8535465002059937, "perplexity": 729.9530605576477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657142589.93/warc/CC-MAIN-20200713033803-20200713063803-00060.warc.gz"}
|
https://onlinedocs.microchip.com/pr/GUID-7056F141-DF07-46C5-A4B8-97EB46E9B945-en-US-12/GUID-C25D68DF-0DE5-4EC2-A1EE-401E3BEEAC7D.html
|
# TRUTHn
0x0B + n*0x04 [n=0..3] 8 Enable-Protected 0x00 4 4 0,1,2,3,4,5,6,7 n
## TRUTHn
Bits 7:0 hold TRUTHn[7:0]. Access: R/W for every bit. Reset: 0 for every bit.
## Bits 7:0 – TRUTHn: Truth Table
Truth Table
These bits determine the output of LUTn according to the LUTn-TRUTHSEL[2:0] inputs.
Bit Name Value Description
TRUTHn[0] 0 The output of LUTn is 0 when the inputs are ‘b000
1 The output of LUTn is 1 when the inputs are ‘b000
TRUTHn[1] 0 The output of LUTn is 0 when the inputs are ‘b001
1 The output of LUTn is 1 when the inputs are ‘b001
TRUTHn[2] 0 The output of LUTn is 0 when the inputs are ‘b010
1 The output of LUTn is 1 when the inputs are ‘b010
TRUTHn[3] 0 The output of LUTn is 0 when the inputs are ‘b011
1 The output of LUTn is 1 when the inputs are ‘b011
TRUTHn[4] 0 The output of LUTn is 0 when the inputs are ‘b100
1 The output of LUTn is 1 when the inputs are ‘b100
TRUTHn[5] 0 The output of LUTn is 0 when the inputs are ‘b101
1 The output of LUTn is 1 when the inputs are ‘b101
TRUTHn[6] 0 The output of LUTn is 0 when the inputs are ‘b110
1 The output of LUTn is 1 when the inputs are ‘b110
TRUTHn[7] 0 The output of LUTn is 0 when the inputs are ‘b111
1 The output of LUTn is 1 when the inputs are ‘b111
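As a sketch of how one might compute a TRUTHn value in host-side tooling (the helper below is hypothetical, not part of any Microchip API; it only encodes the bit mapping in the table above):

```python
def truth_byte(f):
    """Encode a 3-input boolean function f(in2, in1, in0) as a TRUTHn value:
    bit i of the result is the LUTn output when the inputs equal 'b<i>."""
    value = 0
    for i in range(8):
        in2, in1, in0 = (i >> 2) & 1, (i >> 1) & 1, i & 1
        if f(in2, in1, in0):
            value |= 1 << i
    return value

print(hex(truth_byte(lambda a, b, c: a & b & c)))  # 3-input AND -> 0x80
print(hex(truth_byte(lambda a, b, c: a ^ b ^ c)))  # 3-input XOR -> 0x96
```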
https://walkccc.me/LeetCode/problems/1121/
# 1121. Divide Array Into Increasing Sequences
• Time: $O(n)$
• Space: $O(1)$
```cpp
class Solution {
 public:
  bool canDivideIntoSubsequences(vector<int>& nums, int k) {
    // Find the number with the max frequency: we need at least maxFreq * k
    // elements. E.g. nums = [1, 2, 2, 3, 4] has maxFreq = 2 (two 2s), so we
    // have to split nums into two subsequences. Say k = 3: the minimum
    // length of nums is 2 * 3 = 6, which is impossible if nums.size() = 5.
    const int n = nums.size();
    int freq = 1;
    int maxFreq = 1;
    for (int i = 1; i < n; ++i) {
      freq = nums[i - 1] < nums[i] ? 1 : ++freq;
      maxFreq = max(maxFreq, freq);
    }
    return n >= maxFreq * k;
  }
};
```
```java
class Solution {
  public boolean canDivideIntoSubsequences(int[] nums, int k) {
    // Find the number with the max frequency: we need at least maxFreq * k
    // elements (see the comment in the C++ version).
    final int n = nums.length;
    int freq = 1;
    int maxFreq = 1;
    for (int i = 1; i < n; ++i) {
      freq = nums[i - 1] < nums[i] ? 1 : ++freq;
      maxFreq = Math.max(maxFreq, freq);
    }
    return n >= maxFreq * k;
  }
}
```
```python
from collections import Counter
from typing import List

class Solution:
    def canDivideIntoSubsequences(self, nums: List[int], k: int) -> bool:
        # Find the number with the max frequency: we need at least
        # maxFreq * k elements (see the comment in the C++ version).
        return len(nums) >= k * max(Counter(nums).values())
```
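A quick check of the Python version on what I believe are the problem's two sample inputs (nums is sorted in non-decreasing order):

```python
s = Solution()
print(s.canDivideIntoSubsequences([1, 2, 2, 3, 3, 4, 4], 3))  # True:  7 >= 2 * 3
print(s.canDivideIntoSubsequences([5, 6, 6, 7, 8], 3))        # False: 5 <  2 * 3
```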
http://giuseppebizzarri.com/8xz01gwn/471d20-transitive-property-triangles
Measure and see: all three ratios of corresponding sides have the same proportion, 1:4, so the two triangles are similar. Watch as we apply the transitive property to three similar triangles.

The transitive property states that if a = b and b = c, then a = c. In geometry we can apply this to similarity and congruence. Congruent triangles have the same shape and the same size; similar triangles have the same shape but possibly different sizes, so their corresponding angles are congruent and their corresponding sides are proportional.

Congruence of triangles has three basic properties:

- Reflexive property: any triangle is congruent to itself.
- Symmetric property: if △ABC ≅ △XYZ, then △XYZ ≅ △ABC.
- Transitive property: if △ABC ≅ △XYZ and △XYZ ≅ △PQR, then △ABC ≅ △PQR.

The same three properties hold for similarity, written with the symbol ~. In particular, the transitive property of similar triangles says that if triangle A is similar to triangle B and triangle B is similar to triangle C, then triangle A is similar to triangle C: the only difference among the three triangles is their size. This is useful in practice. If we know the angle measurements of triangle C, the transitive property lets us use them as the angle measurements of triangle A, because similar triangles have congruent corresponding angles.

For a concrete test of similarity, compare ratios of corresponding sides, as shown in the sketch below. A right triangle with sides 3, 4 and 5 cm is similar to a larger right triangle whose corresponding hypotenuse is 20 cm, provided the other sides have the same proportion, here 1:4.
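Here is a small Python sketch (my own illustration; the function name is arbitrary) checking side-ratio similarity and its transitivity:

```python
def similar(t1, t2, tol=1e-9):
    """Triangles given as side lengths; similar iff all three
    corresponding side ratios agree (sides sorted to pair them up)."""
    ratios = [a / b for a, b in zip(sorted(t1), sorted(t2))]
    return max(ratios) - min(ratios) < tol

A = (3, 4, 5)
B = (6, 8, 10)    # A scaled by 2
C = (12, 16, 20)  # B scaled by 2

# If A ~ B and B ~ C, the transitive property says A ~ C:
print(similar(A, B), similar(B, C), similar(A, C))  # True True True
```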
https://rip94550.wordpress.com/2008/05/11/regression-example-1/
## setup
Let’s just fit a parabola to noisy data. Instead of using real data I will manufacture some. First I pick some x values; you might note that they do not have to be equally spaced.
$x = \{1.,\ 1.1,\ 1.5,\ 3.,\ 4.\}$
Now I generate 5 random variables u from a normal(0, .05) distribution. (The 2nd parameter is the standard deviation, not the variance.) My data is generated as

$y = 2 + x^2 + u$.

That is, my 5 noise values are
$u = \{-0.0494377,\ 0.0900309,\ -0.0316807,\ 0.0148669,\ -0.0842221\}$
and my 5 data values are
$y = \{2.95056,\ 3.30003,\ 4.21832,\ 11.0149,\ 17.9158\}$
Here, then, are my (x,y) points:
$\left(\begin{array}{ll} 1. & 2.95056 \\ 1.1 & 3.30003 \\ 1.5 & 4.21832 \\ 3. & 11.0149 \\ 4. & 17.9158\end{array}\right)$
Here is a plot of the true function without noise (y = x^2 + 2) and the (x,y) points (with noise).
In the real world, of course, what we have is only the 5 points; we want to fit a curve to them. I’m going to do this “by hand”, as I showed you in the expository post.
## “by hand”
Let’s confirm all that, “by hand” as it were. I am going to just jump in and fit a general polynomial of degree 2: $y = a + b\,x + c\,x^2$ (i.e. $y = \beta_1 + \beta_2\ x + \beta_3\ x^2$), following the computations described in the expository post.

I need a design matrix whose first column is 1s, whose second column is the x’s, and, because I want to fit a quadratic, whose 3rd column is the squares of the x’s. That is, I construct the design matrix X
$X =\left(\begin{array}{lll} 1 & 1. & 1. \\ 1 & 1.1 & 1.21 \\ 1 & 1.5 & 2.25 \\ 1 & 3. & 9. \\ 1 & 4. & 16.\end{array}\right)$
We compute $X^T\ X$… and its inverse…
$X^T\ X = \left(\begin{array}{lll} 5. & 10.6 & 29.46 \\ 10.6 & 29.46 & 96.706 \\ 29.46 & 96.706 & 344.527\end{array}\right)$
$\left(X^T\ X\right)^{-1} = \left(\begin{array}{lll} 7.43111 & -7.48067 & 1.46434 \\ -7.48067 & 7.96246 & -1.59534 \\ 1.46434 & -1.59534 & 0.325488\end{array}\right)$
Incidentally, that is the step for which you probably do not want to write your own code: you want someone else to provide code for computing the inverse of a matrix.
Having the inverse, we compute the $\beta$ from the normal equations:
$\beta = \left(X^T\ X\right)^{-1}\ X^T\ y$
and get
$\beta = \{1.92723,\ 0.0964893,\ 0.975582\}$
That is, we have just fitted the equation…
$y = 1.92723 + 0.0964893\ x+0.975582\ x^2$
to our data. We can plot it, but we shouldn’t see anything surprising. After all, the coefficients of the fitted equation are 1.93 instead of 2, 0.096 instead of 0, and 0.976 instead of 1. Pretty close.
Now we compute the predicted values from our equation; we are just using the data values of X in the fitted equation:
$\hat{y} = X\ \beta = \{2.9993,\ 3.21382,\ 4.26702,\ 10.9969,\ 17.9225\}$
The residuals are the difference between the predictions and the actual data. I never remember which is subtracted, but I write

$y = X\ \beta + e = \hat{y} + e$,

and then I see that it’s $y - \hat{y} = e$.
So we compute the e’s and the sum of squares of the e’s (the residuals and the error sum of squares):
$e = \{-0.0487348,\ 0.0862128,\ -0.0486997,\ 0.0179365,\ -0.00671478\}$
$SSE = e \cdot e = 0.0125462$
We should plot the residuals (the 5th one is on the “5” on the x-axis).
For our design matrix, we have n = 5 observations and k = 3 variables. Then the “estimated error variance” SSE/(n-k) is…
$s^2 = \frac{SSE}{5-3} = 0.0062731$
Let’s get the total sum of squares. We compute the mean of the y values, center the data by subtracting the mean, and get the sum of squares (as the dot product of two zero-mean vectors).
The mean of the y’s is 7.87991; after subtracting it, we get centered data
$yc = \{-4.92935,\ -4.57988,\ -3.66159,\ 3.13496,\ 10.0359\}$
and then
$SST = yc \cdot yc = 169.228$
Now we can compute the R^2:
$R^2 = 1 - \frac{SSE}{SST} = 0.999926$
$R^2_{\text{adj}} = 1-\frac{SSE/(n-k)}{SST/(n-1)} = 1 - \frac{SSE/2}{SST/4} = 0.999852$
The covariance matrix of the $\beta$s is
$\left(\begin{array}{lll} 0.0466161 & -0.046927 & 0.00918597 \\ -0.046927 & 0.0499493 & -0.0100077 \\ 0.00918597 & -0.0100077 & 0.00204182\end{array}\right)$
and, in particular, the standard errors are the square roots of the diagonal elements of c:
$se = \{0.215908,\ 0.223493,\ 0.0451865\}$
I remark that although the matrix inverse $(X^T\ X)^{-1}$ was one of the first things we computed, we need the estimated error variance $s^2$, computed much later, in order to get the standard errors.
The t-statistics are the $\beta$s divided by the standard errors:
$t = \frac{\beta}{se} = \{8.92616,\ 0.431732,\ 21.5901\}$
Let me remark that some people present the $\beta$s and the t-statistics, others present the $\beta$s and the standard errors. No problem: given any two of the $\beta$s, t-statistics, and standard errors, you can compute the third.
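For readers without Mathematica®, here is a minimal NumPy sketch of the same “by hand” computations (variable names are illustrative; it reproduces the numbers above up to rounding):

```python
import numpy as np

x = np.array([1.0, 1.1, 1.5, 3.0, 4.0])
y = np.array([2.95056, 3.30003, 4.21832, 11.0149, 17.9158])

X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix: 1, x, x^2
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                          # normal equations
e = y - X @ beta                                  # residuals
SSE = e @ e                                       # error sum of squares
n, k = X.shape
s2 = SSE / (n - k)                                # estimated error variance
SST = (y - y.mean()) @ (y - y.mean())             # total sum of squares
R2 = 1 - SSE / SST
R2_adj = 1 - (SSE / (n - k)) / (SST / (n - 1))
se = np.sqrt(np.diag(s2 * XtX_inv))               # standard errors
t = beta / se                                     # t-statistics
print(beta, R2, R2_adj, t)
```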
## let Mathematica® do it
Here’s where you use whatever you’ve got available; of all my choices, I’ll go with Mathematica®. Just in case you’re also using Mathematica®, here’s the specific command I used, and the output:
(“data” consists only of the (x,y) data; “{1,x,x^2}” tells Mathematica® what design matrix to construct; the “x” after that says that the first column of “data” is “x”; the “Clear” command is because I need “x” to be a symbol, not a list of numbers. I wouldn’t have this problem if I didn’t insist on using “x” as the name of the independent variable, i.e. doing double duty.)
The first line, BestFit, shows us the fitted equation. Again, just in case you’re using Mathematica®, here’s the command that extracted the BestFit and the result:
That’s exactly what we computed by hand. Similarly, the FitResiduals are the residuals e:
$\{-0.0487348,\ 0.0862128,\ -0.0486997,\ 0.0179365,\ -0.00671478\}$
The PredictedResponse are the 5 computed values, yhat, used to compute the residuals.
$\{2.9993,\ 3.21382,\ 4.26702,\ 10.9969,\ 17.9225\}$
In the ParameterTable, “Estimate” refers to $\beta\$, “SE” is its standard error, and TStat is its T-statistic.
PValue is a probability corresponding to the t-statistic. I don’t remember whether Mathematica® is doing a 1-sided or 2-sided test and the documentation doesn’t seem to say. I don’t really care: my rule of thumb is to compare the (absolute value of the) t-statistic to 1. If I ever need to know exactly what PValue means, I’ll grab a t-distribution from one of my stat books and see what Mathematica® did.
Mathematica’s standard errors and t-statistics, R^2, adjusted R^2, and estimated error variance all agree with our earlier computations.
Finally, the ANOVA table contains, among other things, the error and total sums of squares. (I’ll remark that for experimental data, the F-test can be more useful than the R^2 and the adjusted R^2, because repeated x’s with different y’s mean that we have some data points on vertical lines; we cannot make the error sum of squares zero. That in turn means that the R^2’s cannot be 1. That in turn means that we can’t tell what constitutes “a good” R^2.)
Our computed SSE and SST were
$\{0.0125462,\ 169.228\}$
respectively, and they agree with Mathematica®.
## better model, worse fit
So, I have confirmed that my recipe matches Mathematica®. So much for computation. What about the results themselves?
The parameter table, or our own calculations, showed a t-statistic of 0.431732 for the x term in the fitted equation. That the t-stat for x is less than 1 suggests that the linear term vanishes. That’s very nice, since we know that the true model did not have a linear term. We created the data without a linear term. It’s the small t-statistic that says: for this data there’s a high probability that $\beta = 0$.

I cannot overemphasize that if our goal is the best interpolation, instead of finding the “true model”, we might choose to use the equation we have.

But I want to find the true model, so let’s drop the x term and let Mathematica® do it all for us:

Our two fitted equations, then, are

So dropping the x term has gotten us a little closer to the true equation.

What do the R-squares tell us? Old and new are {0.999926, 0.999919}, so the new one is very slightly smaller. In absolute terms, the new fit is not as good as the first one. In this case the difference is tiny – minuscule, even – but if our goal is the best interpolation, then a higher R-squared equation may be preferable to a lower one.

What about the adjusted R-squares? Old and new are {0.999852, 0.999892}. The newer one is higher; in some sense, the newer fitted equation is more likely true. Again, in this case, the difference is damned small.

Finally, for the newer fit, all the t-stats (all two of them!) are larger than 1. That’s why we might go with the new equation.

Incidentally, the estimated variance for the equation is smaller, 0.00457182 vs. 0.0062731.

And, FWIW, the sample variance of our noise – which we only know because we created this data – was 0.00453426.
The second equation has a smaller estimated error variance, even though the first equation has smaller errors in total. This is hand in glove with “closeness of fit” versus “true model”, R^2 versus adjusted R^2.
BTW, had I done the newer fit “by hand”, the design matrix would have had two columns: 1s and the squares. It’s ok to drop the x’s themselves. This is exactly how we specify a model, by computing and using whatever columns we choose. To be specific,
$\left(\begin{array}{ll} 1 & 1. \\ 1 & 1.21 \\ 1 & 2.25 \\ 1 & 9. \\ 1 & 16.\end{array}\right)$
There you are. That’s how to fit $y = a + b x^2$ to the data.
https://www.physicsforums.com/threads/centre-of-mass.637196/
# Centre of Mass
1. Sep 19, 2012
### phospho
The diagram shows a pendant in the shape of a sector of a circle with center A. The radius is 4 cm and the angle at A is 0.4 radians. Three small holes of radius 0.1 cm, 0.2 cm and 0.3 cm are cut away. The diameters of the holes lie along the axis of symmetry and their centers are 1, 2 and 3 cm respectively from A. The pendant can be modeled as a uniform lamina. Find the distance of the center of mass of the pendant from A.
Moments about A (y = 0 due to symmetry)
$x = \frac{(0.5\times4^2\times0.4)\times\frac{2\times4\sin(0.2)}{0.6} - (0.1^2\pi)(1) - (0.2^2\pi)(2) - (0.3^2\pi)(3)}{(0.5\times4^2\times0.4) - 0.1^2\pi - 0.2^2\pi - 0.3^2\pi} \Rightarrow x = 2.66\ldots$
However the answer is 2.47 :s
Last edited: Sep 19, 2012
2. Sep 19, 2012
### kushan
diagram?
3. Sep 19, 2012
### Staff: Mentor
Where's the center of mass of the sector itself?
4. Sep 19, 2012
### phospho
The diagram was rather rubbish so I didn't include it (it's pretty much exactly like this, just the circles' centers are on the line of symmetry).
I've edited my original post to include the center of mass of the sector - I just copied it wrong.
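For a quick numeric check of the expression in the first post, a NumPy sketch (variable names are mine):

```python
import numpy as np

r, theta = 4.0, 0.4
sector_area = 0.5 * r**2 * theta                          # (1/2) r^2 theta = 3.2
sector_cm = 2 * r * np.sin(theta / 2) / (3 * theta / 2)   # sector centroid distance from A

holes = [(0.1, 1.0), (0.2, 2.0), (0.3, 3.0)]              # (radius, distance from A)
hole_area = sum(np.pi * a**2 for a, _ in holes)
hole_moment = sum(np.pi * a**2 * d for a, d in holes)

x = (sector_area * sector_cm - hole_moment) / (sector_area - hole_area)
print(round(x, 2))  # 2.66 -- reproduces the poster's value, not the book's 2.47
```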
https://socratic.org/questions/how-do-you-find-the-missing-number-or-term-to-make-a-perfect-square-trinomial-y-
# How do you find the missing number or term to make a perfect square trinomial. y^2 + 8y +___ ?
Take half the coefficient of the linear term and square it: ${\left(\frac{8}{2}\right)}^{2} = 16$. Then

${\left(y + 4\right)}^{2} = {y}^{2} + 8 y + 16$
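A one-line check with SymPy (assuming it is available):

```python
import sympy as sp

y = sp.symbols('y')
print(sp.expand((y + 4)**2))  # y**2 + 8*y + 16
```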
https://answers.opencv.org/questions/27808/revisions/
# Revision history
### How can you use K-Means clustering to posterize an image using c++?
Hi all,
I'm trying to posterize an image, i.e. reduce the number of colours in an image, but I'm not having much luck.
I've found the following Python code from OpenCV's documentation, which uses K-Means:
import numpy as np
import cv2

img = cv2.imread('home.jpg')  # load the image to posterize
Z = img.reshape((-1,3))

# convert to np.float32
Z = np.float32(Z)

# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 8
ret,label,center = cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)

# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
My problem is that I only know C/C++. Would someone please help me out by converting this to the C++ equivalent?
http://planetmath.org/windowscalculator
# Windows Calculator
The Windows Calculator is a software calculator that comes bundled with the Windows operating system. The basic mode, called “Standard”, is the default; Scientific mode has most of the operations available on a typical scientific calculator. Note that switching between modes causes the loss of the current value displayed (unless of course that value is 0). For some reason, Standard mode has a square root key but Scientific mode does not. As a workaround in Scientific mode, one can enter, say, [2] [x^y] [0] [.] [5].
Division by zero causes an error condition that must be cleared with the C key on the displayed keyboard (or the Escape key on the computer’s keyboard). Integer values smaller than $10^{32}$ can be displayed in all their digits. According to the Help, the Windows Calculator truncates $\pi$ to 32 digits, but rational numbers are stored internally “as fractions”.
Like most scientific calculators, the Windows Calculator can display results in binary, octal and hexadecimal, but is limited to integers in those bases. Additionally, negative numbers are shown in two’s complement (and the sign change key performs two’s complement on the displayed value). In those bases, the user can choose the data size: quadruple word (the default), double word, word or byte. Overflows don’t trigger any kind of exception or error notification, the calculator quietly discards the more significant digits and displays the least significant digits that will fit in the currently selected data size.
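The quiet overflow behavior is easy to mimic; a minimal Python sketch (my illustration, not Microsoft code):

```python
def display_integer(value: int, bits: int = 64) -> str:
    """Mimic the calculator's binary/octal/hex display: keep only the
    least-significant `bits` bits (two's complement for negatives),
    silently discarding any higher-order digits."""
    return hex(value & ((1 << bits) - 1))

print(display_integer(-1, 8))          # '0xff'  (two's complement, byte size)
print(display_integer(2**40 + 5, 32))  # '0x5'   (overflow quietly dropped)
```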
Like the Mac OS Calculator, for the Windows Calculator $0^{0}=1$.
https://www.nature.com/articles/s41598-017-17372-4
# Local field potentials are induced by visually evoked spiking activity in macaque cortical area MT
## Abstract
Local field potentials (LFP) have been the focus of many recent studies in systems neuroscience. However, the exact neural basis of these signals remains unclear. To address this question, we determined the relationship between LFP signals and another, much better understood, signature of neural activity: action potentials. Specifically, we focused on the relationship between the amplitude of stimulus-induced LFPs and the magnitude of spiking activity in visual cortex of non-human primates. Our trial-by-trial correlation analyses between these two components of extracellular signals in macaque visual cortex show that the spike rate is coupled to the LFP amplitude with a surprisingly long latency, typically 50 ms. Our analysis shows that the neural spike rate is a significant predictor of the LFP amplitude. This limits the functional interpretation of LFP signals beyond that based on spiking activities.
## Introduction
The cerebral cortex encodes sensory information by the activity of neurons, a phenomenon extensively studied using extracellular recording in awake animals. Such recordings can pick up spiking activity, the pattern of action potentials of nearby neurons that encodes information in the rate and temporal distribution of these binary events. For instance, neurons in medial temporal area MT of the macaque visual cortex fire at different rates when stimuli moving at different directions or at different speeds appear in their receptive fields1 with most MT neurons preferring a specific direction, speed of motion and binocular disparity2. The spiking activity of such sensory neurons is also modulated by high-level cognitive factors such as spatial, feature-based or object-based attention3,4,5. This makes neural spikes a promising target for research on the neural bases of sensory and cognitive functions.
Another component of extracellular signals is local field potentials (LFP) that have attracted attention more recently in neuroscience and neuroengineering studies6,7,8,9,10. LFPs are thought to be the sum of synaptic potential fluctuations across thousands of neurons around the tip of a recording electrode11 from a volume of up to several hundred cubic micrometers12,13,14. Similar to spiking activity, LFP signals from sensory cortex have been shown to encode stimulus parameters and are also modulated by top-down signals. As an example, recordings from area MT show tuning curves for motion direction in the ‘gamma power’, the power of gamma frequency (40–200 Hz) LFPs15. Furthermore, the power of LFPs at this frequency range has been found to increase with switching attention to the receptive field of the recorded site while decreasing at lower frequencies (<20 Hz)16,17,18; However, see19 for contrasting results in gamma frequencies. These previous studies suggest that those synaptic activities which create the LFP are not an epiphenomenon of neural spikes, but they influence local modulations of neural processing. Although LFPs and spikes have usually been studied as separate components of extracellular signals, they show similar signatures of sensory parameters.
Spikes and LFPs are highly correlated in many cortical areas20,21,22,23. This includes correlations between the spike rate and LFP power within a given frequency band as well as the locking of spikes to the LFP phases in different frequency bands. For instance, spike rates of neurons recorded from the rat hippocampus and macaque areas V1 and MT are found to be positively correlated with gamma power across different sensory or cognitive states15,23,24,25; but see21 for negative correlations. Similar studies have reported that neural spikes occur mostly at specific phases of LFPs at low (<20 Hz) and high frequencies (30–80 Hz)19,22,26,27,28,29. Although it has been suggested that spike times could be used to estimate those surrounding synaptic activities that are reflected in the LFP30, it remains unclear which of the two components causes the other and correspondingly represents the most direct readout of information transmission.
When spike and LFP components are recorded from the same site, knowing the causality of their interdependence is crucial, as this would clarify which of the two signals forms the neural representation earlier. Using information-theoretic techniques, Besserve et al.31 suggested that neural spikes and gamma band oscillations in V1 have a causal effect on low frequency LFPs. They also reported causal effects of the gamma band on spiking activities when both were recorded from the same site. That study was carried out with anesthetized or passively viewing monkeys; for monkeys actively engaged in a behavioral task, however, the existence of a causal relationship between these signals evoked by a transient stimulus is unclear. Therefore, determining the temporal order of evoked LFP and spikes is of critical relevance.
Here we determined, first, which of the two signal components (LFP or spiking activity) reaches its peak activity earlier in response to a visual stimulus and, second, whether one of the two predicts the other’s trial-by-trial variability. Recordings were made in the middle temporal area MT of two macaque monkeys while they were performing a visual detection task. We observed that LFP responses reach their maximum activity tens of milliseconds later than spike responses and that a large portion of evoked LFP activity is predicted by the preceding spiking activity.
## Results
Two macaque monkeys were trained to detect a small change in the direction or color of one of two moving random dot patterns (RDP). Each trial started when the monkeys touched a lever and maintained their gaze at a central point; afterwards a cue was presented showing the position of one of the two upcoming stimuli (target). The monkeys received a juice reward if they released the lever as soon as the target stimulus underwent a small change in its color or direction of motion (Fig. 1A). Details of the paradigm are described elsewhere (see32 for monkey H and3 for monkey T).
We recorded neural activity in the form of spiking activity and LFPs from area MT while one of the two stimuli was presented inside the receptive field (RF) of the neuron being recorded. Figure 1 (upper plots in B and C) shows the normalized spiking activity and LFPs (in inverted values, see Materials and Methods) from monkeys H and T for a 300 ms window after the onset of stimuli. Spike trains were first smoothed and then normalized by the maximum value across trials of each recording site (see Materials & Methods for details). Figure 1 (lower plots in B and C) shows the histogram of times with the largest absolute neural activity. For a large majority of the sites, the maximum absolute LFP coincides with the peak of the LFP (92% in monkey H, 100% in monkey T, p < 0.001 for both monkeys, sign test), suggesting a largely uniform, peak-dominated LFP profile in our dataset. For monkey H the LFP peak occurs at 153 ± 18 (SD) ms while the spiking activity peak is at 74 ± 14 (SD) ms after stimulus onset when calculated for recording sites separately. Similarly, for monkey T the peaks occur at 107 ± 28 ms and 71 ± 20 ms for LFP and spikes, respectively. To determine if there is a systematic difference between the peak times of the evoked LFP and evoked spiking activity, we considered the distribution of differences between the spiking activity and LFP peak times across recording sites. Figure 2A,B shows the histogram of the LFP peak time minus the spiking peak time for the two monkeys. The dashed line in each panel indicates the median peak time difference for each monkey (83 ms and 51 ms for monkeys H and T, respectively). In both animals the peak of the LFP activity evoked by the stimulus occurs significantly (p < 0.001, sign test) later than the peak of evoked spiking activity. For visually evoked signals, a similar time lag (50 ms) of LFPs relative to the peak spike rate has been observed in primate FEF33. Similarly, under visual stimulation, Tan et al. showed a loss of coincidence between deflections of the neural membrane potential and LFPs34. However, it is unclear whether the spike rate influences LFP amplitude across the substantial delay we observed.
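For concreteness, here is a hedged sketch of this peak-latency comparison (a Python/NumPy stand-in, not the authors' MATLAB code; array names and shapes are assumptions, and the sign test is implemented as a binomial test on the signs):

```python
import numpy as np
from scipy.stats import binomtest

def peak_latency_ms(mean_response, t_ms):
    """Time (ms) at which a trial-averaged, normalized response peaks."""
    return t_ms[np.argmax(mean_response)]

def compare_peak_latencies(spikes, lfp, t_ms):
    """spikes, lfp: hypothetical (n_sites, n_timepoints) arrays of
    trial-averaged responses; returns the median LFP-minus-spike peak
    latency across sites and a sign-test p-value."""
    spike_peaks = np.array([peak_latency_ms(s, t_ms) for s in spikes])
    lfp_peaks = np.array([peak_latency_ms(l, t_ms) for l in lfp])
    diffs = lfp_peaks - spike_peaks          # positive: LFP peaks later
    nonzero = diffs[diffs != 0]
    p = binomtest((nonzero > 0).sum(), nonzero.size, 0.5).pvalue
    return np.median(diffs), p
```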
In order to investigate a potential influence of the evoked spiking activity on the evoked LFP, we asked whether there is a correlation between the values of the evoked spiking activity and evoked LFP across the task trials. To determine if the cross-time correlation occurs only from spiking activity to LFP, we calculated the inter-trial correlation between spiking activities and LFPs at each site, with the spiking activity and LFP coming from different time bins. This analysis was applied to trials where the preferred stimulus of the recorded neuron was presented. Figure 3A,B shows the correlation values across different pairs of time bins averaged across recording sites. The axes show centers of 10 ms bins stepped by 10 ms starting from stimulus onset. The X and Y axes indicate the time bins from which the mean spiking activity and the mean LFP activity are extracted, respectively. The color code represents the average correlation between mean spiking and LFP activity in a given pair of time bins across trials of a given recording site. Solid black lines mark time pairs of the map with the mean spike-LFP correlation significantly different from zero across recording sites (Supplementary Figure 1 shows the proportion of significantly correlated sites at each time pair). Two diagonal white lines connect pairs of time bins with the same indices, so that the area above the lines corresponds to time pairs whose LFP index is larger than the spike index; conversely, for the time pairs below the diagonal lines, the LFP index is smaller than the spike index. Correlation maps of both monkeys show that the majority of time pairs with a significant correlation occur above the diagonal line, indicating that the time index of LFPs follows that of spiking activities in correlated time pairs. To test this directly, we plotted the histogram of differences between the LFP and spike indices for those time pairs with a significant mean correlation (Fig. 3C,D). The X axes in these histograms represent the LFP time minus the spike time, and the dashed vertical lines show the median of this difference across time pairs with significant correlations. Consistent with the rightward shift of the dashed line, both monkeys show a significant difference between the spike and LFP time indices of those time pairs with significant cross-time correlation. In these time pairs, the LFP index follows the spike index by 40 ms for both monkeys (p < 0.001, sign test). This indicates that spiking activity is a significant predictor of future LFP activity.
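As a sketch of this binning-and-correlation procedure (again a Python/NumPy stand-in for the authors' MATLAB analysis; array names and shapes are assumed for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

def cross_time_correlation_map(spikes, lfp, bin_ms=10):
    """spikes, lfp: hypothetical (n_trials, n_ms) arrays from one site at
    1 ms resolution. Returns an (n_bins, n_bins) map where entry [i, j]
    is the trial-by-trial Pearson correlation between mean spiking in
    bin j and mean LFP in bin i."""
    n_trials, n_ms = spikes.shape
    n_bins = n_ms // bin_ms
    # average each trial's activity within consecutive 10-ms bins
    sp = spikes[:, :n_bins * bin_ms].reshape(n_trials, n_bins, bin_ms).mean(axis=2)
    lf = lfp[:, :n_bins * bin_ms].reshape(n_trials, n_bins, bin_ms).mean(axis=2)
    corr = np.zeros((n_bins, n_bins))
    for i in range(n_bins):          # LFP time bin (y-axis of the map)
        for j in range(n_bins):      # spike time bin (x-axis of the map)
            corr[i, j] = pearsonr(sp[:, j], lf[:, i])[0]
    return corr
```

Values above the map's diagonal then correspond to LFP bins that follow the spike bin, which is where the significant correlations concentrate.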
The unidirectional correlation between spikes and upcoming LFPs could be a side-effect of the correlation of a neuron’s spiking activity with its own subsequent spiking activity. To rule this out, we calculated the partial correlation between spikes and LFPs at different times, removing the correlation component associated with spikes occurring at the same time as the LFPs (see Materials & Methods for details). This resulted in maps similar to Fig. 3A,B, suggesting that spikes directly influence future LFPs (Supplementary Figure 2A,B).
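A minimal sketch of that partial-correlation control, under the same assumptions as the previous snippet (hypothetical per-trial vectors from one site):

```python
import numpy as np

def partial_corr(spike_early, lfp_late, spike_late):
    """Correlation between early spiking and later LFP after regressing
    out the spiking activity simultaneous with the LFP."""
    def residuals(y, x):
        slope, intercept = np.polyfit(x, y, 1)   # simple linear regression
        return y - (slope * x + intercept)
    r_spike = residuals(spike_early, spike_late)
    r_lfp = residuals(lfp_late, spike_late)
    return np.corrcoef(r_spike, r_lfp)[0, 1]
```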
Furthermore, the LFPs we observed might reflect volume-conducted voltages from other brain areas35. To check for this, we focused on monkey H’s data, which contained a sufficient number of simultaneous recordings. We subtracted each electrode’s LFP from that of its neighboring electrode to remove global effects (Supplementary Figure 3A). Similar to Fig. 3, we calculated the histogram of the difference between LFP and spike time indices across the time pairs with a significant correlation (Supplementary Figure 3B). Similar to Fig. 3C,D, the LFP time index was significantly larger than the spike time index (p < 0.001, sign test). This indicates that the unidirectional correlation between spikes and the following LFP is not due to voltages volume-conducted from other brain areas. It may be further argued that the asymmetric pattern of cross-time correlations relative to the diagonal line depends on the behavioral condition of the trials the monkeys were performing. In each trial of the task, either the stimulus inside or outside the RF was cued, and the monkeys were rewarded for reporting changes in only the cued stimulus. We carried out analyses similar to Fig. 3A,B on each of the two trial types, RF-cued and RF-uncued, where the RF’s position or the position contralateral to the RF was cued, respectively. As in Fig. 3A,B, here we focused on trials where the preferred stimulus of the recorded neuron was presented. Fig. 3E–H shows the resulting maps for each of the cueing conditions for both monkeys. Top and bottom panels present the correlation maps for monkeys H and T, respectively; the left and right panels present the maps for RF-cued and RF-uncued trials, respectively. For both monkeys, the asymmetric pattern relative to the diagonal line is present in both cueing conditions, similar to when trials of both cueing conditions were pooled (Fig. 3A,B), and a majority of time pairs with significant correlations lie above the diagonal line. The LFP time index at these time pairs is significantly greater than the spike time index for both the RF-cued and RF-uncued conditions (50 ms for each animal, p < 0.001, sign test). This result suggests that the asymmetry of the correlation map relative to the diagonal line is not due to the monkey’s cognitive state.
In order to ensure that our result is not due to spectral leakage of the spike waveforms into LFPs of the same electrode, the same analysis as in Fig. 3A,B was applied to LFP and spiking activities recorded simultaneously from separate electrodes. Figure 4B,D illustrates the correlation maps for LFP and spiking activity from different electrodes for monkey H (Fig. 4B) and monkey T (Fig. 4D). Compared to the correlation maps in Fig. 3A,B (shown also in Fig. 4A,C), which correspond to the condition where LFP and spiking activity were recorded from the same site, the asymmetry relative to the diagonal line is preserved. Consequently, the LFP time index at those time pairs with significant correlations is larger than the spiking activity time index (40 ms for both monkeys, p < 0.001 for both monkeys, sign test). This suggests that the correlation between spiking activity and following LFP activity is not due to common spectral components of spiking activity and LFP signals recorded from the same site.
Given the high trial-by-trial correlation between spiking activity and the following LFP amplitude, we next hypothesized that earlier spiking activity should predict following LFP amplitudes. Therefore, we randomly selected half of the trials for each site and fitted a linear model on the spike-LFP pairs coming from different time slots. Next, the linear model was used to estimate LFP amplitude based on spiking activity in the remaining trials. The correlation of the estimated LFP and the original LFP amplitudes was calculated as a measure of the linear estimator’s prediction performance (see Materials & Methods for details). Results are shown in Fig. 4E,F, where each panel presents the prediction performance of the linear estimation for one monkey and color codes the performance of the linear estimation at each spike-LFP time pair. The maps show a similar pattern as in Fig. 3A,B, with asymmetry relative to the diagonal line and a significantly higher LFP time index than spiking activity time index for time pairs with a significant prediction performance (p < 0.001, sign test). This shows a unidirectional functional link between spiking activity and LFP amplitudes occurring with a substantial delay.
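A hedged sketch of this train/test estimation for a single spike-LFP time pair (a Python stand-in for the MATLAB fitlm/predict pipeline described in the Methods; variable names are hypothetical):

```python
import numpy as np

def lfp_prediction_performance(spike_bin, lfp_bin, seed=0):
    """spike_bin, lfp_bin: per-trial mean spiking and LFP values (NumPy
    arrays) for one spike/LFP time pair at one site. Fit y = a*x + b on a
    random half of trials, predict the held-out half, and score the
    prediction by Pearson correlation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(spike_bin))
    train, test = idx[: len(idx) // 2], idx[len(idx) // 2:]
    a, b = np.polyfit(spike_bin[train], lfp_bin[train], 1)
    predicted = a * spike_bin[test] + b
    return np.corrcoef(predicted, lfp_bin[test])[0, 1]
```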
So far we considered the LFP across all frequencies. Next we asked whether the correlation between spiking activity at a given time and the LFP in following intervals occurs at any specific frequency band of the LFP in particular. Therefore, we divided the LFP into frequencies higher than 30 Hz and frequencies lower than 30 Hz. This division addresses the distinction between the functional roles of these two frequency bands in neural mechanisms of visual processing and attention in area MT18,32. Figure 5 presents the spike-LFP correlation maps for each of these two frequency bands; the left and right panels focus on frequencies lower and higher than 30 Hz, respectively, and the figures in each row show the maps for one monkey. As shown in the right panel, there is no statistically significant correlation between spiking activity and LFP at frequencies higher than 30 Hz among any of the pairs of time intervals we have studied. In contrast, as shown in the left panel, the properties of the correlation maps remain similar to the original maps calculated based on the full frequency spectrum of the LFP (Fig. 3A,B) in terms of asymmetry relative to the diagonal line, and for both monkeys the time pairs with a significant cross-time correlation lie above the diagonal line (p < 0.001, sign test). This suggests that the cross-time correlation of spiking activity and LFP magnitude holds only for frequencies below 30 Hz, which are linked to neural activities shared across larger volumes of cortex compared to frequencies higher than 30 Hz.
We next asked how spikes affect LFPs occurring tens of milliseconds after the spikes. For this, we focused on the data of monkey H, because of the availability of a larger number of sites, and on those spikes with the highest spike-LFP correlation (spike index = 55 ms, Fig. 3A). Figure 6A plots the correlation of those spikes with the LFPs. This curve shows the highest spike-LFP correlation for LFPs 125 ms after stimulus onset. It should be noted that there is an earlier peak in the correlation curve at about 55 ms, which reflects the instantaneous interaction between post-synaptic potentials and spikes (confirming the significantly correlated time pairs on the diagonal line (Fig. 3A,B)). This instantaneous interaction provides an explanation as to why the onsets of stimulus-evoked LFP and spiking activity occur simultaneously after stimulus presentation (Fig. 1B,C). Furthermore, the correlation shows an oscillatory pattern (<20 Hz) in time, suggesting not only that spikes influence the LFP at different upcoming times in a non-uniform manner, but that this influence depends on the phase of an oscillatory state, phase-locked to the stimulus onset. Further studies are needed to characterize the potential link of this oscillatory state with low frequency LFPs. Finally, it should be possible to extract a filter allowing for the prediction of LFPs from spikes. We estimated this kernel by computing the spike-triggered LFP for spikes between 0–100 ms following the stimulus onset (the interval with the maximum MU-LFP correlation) and fitting it with a Weibull function (Fig. 6B):
$$y = c\,\left(\frac{x}{a}\right)^{b-1} \exp\!\left(-\left(\frac{x}{a}\right)^{b}\right)$$
where x is time (ms) and y is the estimated spike-triggered average LFP. We found a, b, and c to be 182, −0.9, and 0.19, respectively. Consistent with a study36 in rodents, this suggests that the LFPs following spikes can be predicted using a convolution-based method.
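As a hedged illustration (Python/NumPy, not the authors' MATLAB code), the fitted kernel can be evaluated and convolved with a spike train; the spike times below are made up:

```python
import numpy as np

def weibull_kernel(t_ms, a=182.0, b=-0.9, c=0.19):
    """Kernel y = c*(t/a)**(b-1) * exp(-(t/a)**b) with the fitted
    parameters reported above; evaluate for t > 0 only, since the
    expression is singular at t = 0."""
    t = np.asarray(t_ms, dtype=float)
    return c * (t / a) ** (b - 1) * np.exp(-((t / a) ** b))

# Convolution-based prediction sketch: a 1-kHz binary spike train
# (hypothetical spike times) convolved with the kernel gives a coarse
# estimate of the LFP that follows the spikes.
spike_train = np.zeros(300)
spike_train[[55, 60, 72]] = 1.0
kernel = weibull_kernel(np.arange(1, 201))   # evaluate on 1..200 ms
lfp_estimate = np.convolve(spike_train, kernel)[: spike_train.size]
```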
## Discussion
LFP signals and their possible causal link to cognitive aspects of brain function have attracted wide interest. However, the exact origin of these signals and their functional role, especially in sensory areas, remains unclear. Here we investigated the relationship between action potentials and LFPs in area MT of macaque visual cortex in search of a possible causal link and its directionality. By calculating the trial-by-trial correlation between the two signal components across time bins with different intervals in between, we show that spiking activity predicts LFP activity in the transient part of neural responses. We observed a significant trial-by-trial correlation between the LFP and preceding spiking activity. Using linear estimation, we show that the spiking activity at a given time can predict the upcoming LFP amplitude. This suggests that evoked spiking activity has a significant role in determining LFP activity. This effect is observed across different behavioral conditions and cannot be attributed to spectral contamination between spiking and LFP activity recorded from the same site, as only low frequency (<30 Hz) LFPs show this link to spiking activity. Assuming that a similar principle of information representation holds across different sensory areas, our findings may extend to those areas as well.
We interpret our data to show that spiking activity induces the LFP unidirectionally. To support this, we calculated trial-by-trial correlations across successive time slots. It may be argued that a third factor causes the trial-by-trial variability in both spiking activities and LFPs. The spiking input coming from upstream visual areas (V1 and V2) could be such a factor, inducing both spiking activity and LFP. However, this input would first influence post-synaptic potentials and only then the spiking activity in area MT. Since post-synaptic potentials constitute the main component of LFPs11, the highest cross-time correlation between spiking activity and LFP should then occur at time pairs where the spiking activity time follows the LFP time. Our results show that this is clearly not the case, suggesting that spiking activity in area MT, rather than an external source of spiking input, governs fluctuations in the LFP of the area. Nevertheless, to directly study whether evoked spiking activity causes the LFP-reflecting synaptic response, one would ideally selectively de-activate single neurons and inspect the effect on the evoked LFP. Although previous studies have shown that selectively activating neurons using optogenetics induces modulations in LFP power37, similar studies are necessary to investigate whether visually evoked spiking activity is the essential cause of the visually evoked synaptic activities reflected by LFPs. Our result is in line with a previous study38 showing that, although there is a component of the LFP that can predict either the intracellularly recorded depolarization or the action potential, the LFP component following this depolarization has a higher amplitude and lasts longer. It is further in agreement with a report of causal effects of spiking activities on low frequency LFP in macaque V131, especially in the sense that low frequencies are emphasized in transient neural responses (Fig. 1B,C). However, there are differences in the time scale of the effect between that study and our data.
The median delay between spiking activity and LFPs at time bin pairs with a significant correlation is about 40 ms for both monkeys (Fig. 3C,D), i.e. fluctuations in spiking activity take tens of milliseconds to be reflected in the amplitude of LFP. Despite the similarity in the magnitude of the delay, there is a noticeable time-lag between the peaks of the average LFP in the two animals (33 ms, p < 0.0001, permutation test (with 100,000 repetitions); 151 ms and 118 ms for monkeys H and T, respectively). We speculate that this difference reflects differences between the tasks of the two monkeys; for monkey H we showed a full-sized RDP, while for monkey T we showed a small RDP at a small eccentricity close to the fixation point.
Although we found a delay of up to 40 ms between spikes and LFP, Besserve et al. reported that the largest component of the causality effect has a time lag of only a few milliseconds. Given the influence of wakefulness on the amplitude of evoked activity and its trial-by-trial variability39,40, this difference may arise because their recordings were carried out under anesthesia whereas our monkeys were actively engaged in a visual detection task during the recordings. Using an alternative approach, Rasch et al. showed that neural spikes can be used to linearly estimate the LFP30. Consistent with our results, they found that the estimation is best done on a time scale of up to 200 ms from spike occurrence. However, the kernel they introduce for estimating LFPs produces a negative trial-by-trial correlation between spike rate and the following LFPs (with reversed amplitude). This is in disagreement with the positive correlations in our data (Fig. 3A,B; values above the diagonal line). Again, because their data were recorded while the monkeys were either anesthetized or passively viewing a movie, and given the suppressed coding capacity of neurons under non-wakefulness41, their result cannot be generalized to actively attending animals. Furthermore, our observation concerning the long time scale over which spikes influence the LFP suggests that neural spikes modulate the LFP on time scales much longer than that of synaptic transmission. One might speculate that the incoming sensory spikes cause a short-term network-level modification in MT that influences LFP generation on a time scale of tens of milliseconds. Based on the previous observation of differing LFP profiles across cortical layers42, it could be argued that the unidirectional induction of LFP by spiking activity is layer-specific. While we do not know the cortical layer of our recordings, the dominance of sites with a larger evoked peak rather than evoked trough shows that the LFPs are homogeneous across recording sites (Fig. 1B,C). Nevertheless, the possibility remains that our LFPs are recorded only from distal dendritic regions, inducing the time lag with spikes. Although this is unlikely given the high number of recordings in our dataset, future recordings from separate cortical layers of area MT could give a clearer view of how the activity of neurons in each layer contributes to the LFP.
To summarize: since the biophysical and neural bases as well as the functional correlates of LFPs are not fully understood, we investigated the interaction between these signals and action potentials, the much better understood signature of neural activity and coding. Our results show a strong unidirectional influence of spiking activity on LFPs during transient neural responses, indicating that a considerable component of the LFP is explained by spiking activity. This suggests that LFPs, as well as other types of field potentials (ECoG, EEG, …), are an epiphenomenon, i.e. an indirect, rather than direct, measure of brain states.
## Materials and Methods
### Animal welfare
Research with non-human primates represents a small but indispensable component of neuroscience research. The scientists in this study are aware of and committed to the great responsibility they have in ensuring the best possible science with the least possible harm to the animals43. All animal procedures and methods of this study were approved by the responsible regional government office (Niedersaechsisches Landesamt fuer Verbraucherschutz und Lebensmittelsicherheit (LAVES)) under the permit numbers 33.42502/08-07.02 and 33.14.42502-04-064/07 and were carried out in accordance with all applicable laws and regulations. The animals were group-housed with other macaque monkeys in facilities of the German Primate Center in Goettingen, Germany, in accordance with all applicable German and European regulations. The facility provides the animals with an enriched environment (including a multitude of toys and wooden structures44) and natural as well as artificial light, and exceeds the size requirements of the European regulations, including access to outdoor space. Surgeries were performed aseptically under gas anesthesia using standard techniques, including appropriate peri-surgical analgesia and monitoring, to minimize potential suffering.
The German Primate Center has several staff veterinarians that regularly monitor and examine the animals and consult on procedures. During the study the animals had unrestricted access to food and fluid, except on the days where data were collected or the animals were trained on the behavioral paradigm. On these days the animals were allowed unlimited access to fluid through their performance in the behavioral paradigm. Here the animals received fluid rewards for every correctly performed trial. Throughout the study the animals’ psychological and veterinary welfare was monitored by the veterinarians, the animal facility staff and the lab’s scientists, all specialized in working with non-human primates. The two animals were healthy at the conclusion of our study and were subsequently used in other studies.
### Behavioral task

Two male macaque monkeys were trained to fixate a central fixation point and touch a lever to start each trial. Eye movements were monitored using a high-speed video-based eye tracker at a sampling rate of 230 Hz (ET49, Thomas Recording, Giessen, Germany). Each trial started with presenting a cue on the screen indicating one of the upcoming moving random dot patterns (RDP) as the relevant stimulus (target). For monkey H, the cue was a static RDP shown at the same position and with the same size as the target; for monkey T, it was a small RDP close to the fixation point, on an imaginary line connecting the fixation point to the upcoming target stimulus. After the cue was removed, two moving RDPs were presented at equal eccentricities in opposite visual hemifields and the monkeys had to detect a small direction (monkey H) or color/direction change (monkey T) in the target RDP. This change could happen at a random time, not earlier than 500 ms after stimulus onset, and the monkeys were rewarded for releasing the lever in a time window between 100–650 ms after the target change. If the animal’s gaze deviated from the fixation point within a trial, the trial would terminate without a reward. For more details about the behavioral paradigm and the behavioral performance, see32 for monkey H and3 for monkey T.
### Recording

While the monkeys performed the task, we recorded multi-unit spiking activity and local field potentials (LFP) from area MT using a five-channel multi-electrode recording system (Mini-Matrix, Thomas Recording, and Plexon multi-channel acquisition system (MAP), Plexon Inc.). The electrode signal was split into LFP and spike components by hardware filters. The LFPs were amplified and digitized at 1 kHz, while spikes were amplified and digitized at 40 kHz. Multi-unit spikes were determined by voltage thresholding. We recorded from up to all five electrodes (with an impedance of 2 MΩ, arranged linearly with 305 μm separation) simultaneously. In sessions with simultaneous recordings, we made sure that the RFs of the different multi-units overlapped sufficiently for all to contain the stimulus placed in the RF. Recording sites were determined to be located in MT by their position in the cortex, receptive field location and size, as well as the neurons’ tuning for linear motion directions. For monkey H, the RDPs could move in one of 8 equally spaced directions between 0° and 360°; for monkey T, the motion direction was either the preferred or anti-preferred direction of the neuron under study.
### Data analysis
All analyses were carried out in MATLAB (Mathworks, Natick, MA). To generate smoothed spiking activities (Fig. 1B,C), a Gaussian function (σ = 15 ms) was convolved with the spike trains and the outcome was normalized per site to the maximum value across trials. Other Gaussian widths (σ = 5, 10 ms) did not alter our main results (Fig. 3A–D). To avoid any phase lag introduced by the recording headstage, we aligned the phases of LFP signals by applying a time-reversed filter to the LFP, built from measurements of the recording hardware’s phase shift values (provided by Plexon, Inc: FPAlign utility guide, version 1.0)45. The 50 Hz line noise, the 76 Hz noise due to the monitor refresh rate, and its harmonic (152 Hz) were extracted and removed from the original signal using MATLAB’s idealfilter routine (a non-causal ideal bandpass filter that isolates a given frequency component by applying a Fourier transform to the signal and then an inverse Fourier transform). We controlled for multiple comparisons using Bonferroni correction. LFPs were first inverted for all trials and then normalized by the maximum value across trials separately for each recording site. All our figures show these inverted values of the LFPs in order to present the activity relative to the intracellular space. To study the spike-LFP link, we calculated the cross-time spike rate-LFP trial-by-trial correlation. This measure tolerates any non-stationarity enforced by the onset of the stimulus (in contrast to approaches such as the spike-triggered LFP). In the correlation maps we identified time pairs with a significant correlation by testing, for each time pair, whether the correlation values across sites were significantly different from zero using a sign test. The partial correlation between spiking activity and LFP of a time pair was calculated by measuring the correlation between the two residual values resulting from a) the linear regression of spiking activity on the spiking activity simultaneous with the LFP and b) the linear regression of LFP on the spiking activity simultaneous with it. For calculating correlation maps based on different LFP frequencies (Fig. 5), we first subtracted the average evoked LFP from each trial’s LFP for every site. This ensures that filtering LFPs into different frequencies is not contaminated by the transient response evoked at stimulus onset. Signals were filtered using MATLAB’s idealfilter routine. Similar results were achieved using a zero-phase FIR filter (eegfilt function46, with a filter order of 3*(sampling_rate/low_cutoff_freq) and treating each given LFP signal as one epoch). For the linear estimation of LFPs based on spiking activity (Fig. 4E,F), for each site at each spike-LFP time pair, the trials were first divided randomly into two equal-sized groups; second, we estimated a linear model using MATLAB’s fitlm function on one of the groups; and finally we predicted the LFP of the second group based on its spiking activity using the model computed from the first group (with the aid of MATLAB’s predict function). In order to assess the performance of this prediction method, the Pearson correlation between the predicted values and the original LFP data was calculated.
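For illustration, here is a hedged NumPy analogue of two of the preprocessing steps above (the authors used MATLAB's conv and idealfilter routines; the function names and the band-stop usage here are assumptions, not their code):

```python
import numpy as np

def smooth_spike_train(spike_train, sigma_ms=15):
    """Convolve a 1-kHz binary spike train with a Gaussian (sigma = 15 ms),
    mirroring the smoothing described above; 'same' keeps the length."""
    t = np.arange(-4 * sigma_ms, 4 * sigma_ms + 1)
    g = np.exp(-t**2 / (2 * sigma_ms**2))
    g /= g.sum()                         # unit-area kernel
    return np.convolve(spike_train, g, mode="same")

def ideal_bandstop(x, fs, f_lo, f_hi):
    """Non-causal 'ideal filter' analogue: zero out the Fourier components
    in [f_lo, f_hi] (e.g. around the 50 Hz line noise) and invert."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs >= f_lo) & (freqs <= f_hi)] = 0
    return np.fft.irfft(X, n=len(x))
```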
### Data availability statement
The datasets generated during the current study are available from the corresponding author on reasonable request.
## References
1. Maunsell, J. H. R. & Van Essen, D. C. Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. J. Neurophysiol. 49, 1127–1147 (1983).
2. Cumming, B. G. & DeAngelis, G. C. The physiology of stereopsis. Annu. Rev. Neurosci. 24, 203–238 (2001).
3. Katzner, S., Busse, L. & Treue, S. Attention to the color of a moving stimulus modulates motion-signal processing in macaque area MT: evidence for a unified attentional system. Front. Syst. Neurosci. 3, 12 (2009).
4. Niebergall, R., Khayat, P. S., Treue, S. & Martinez-Trujillo, J. C. Multifocal attention filters targets from distracters within and beyond primate MT neurons’ receptive field boundaries. Neuron 72, 1067–1079 (2011).
5. Patzwahl, D. R. & Treue, S. Combining spatial and feature-based attention within the receptive field of MT neurons. Vision Res. 49, 1188–1193 (2009).
6. Einevoll, G. T., Kayser, C., Logothetis, N. K. & Panzeri, S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat. Rev. Neurosci. 14, 770–785 (2013).
7. Mehring, C., Rickert, J., Vaadia, E., de Oliveira, S. C., Aertsen, A. & Rotter, S. Inference of hand movements from local field potentials in monkey motor cortex. Nat. Neurosci. 6, 1253–1254 (2003).
8. Moran, D. Evolution of brain-computer interface: action potentials, local field potentials and electrocorticograms. Curr. Opin. Neurobiol. 20, 741–745 (2010).
9. Rothé, M., Quilodran, R., Sallet, J. & Procyk, E. Coordination of high gamma activity in anterior cingulate and lateral prefrontal cortical areas during adaptation. J. Neurosci. 31, 11110–11117 (2011).
10. Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S. & Schwartz, A. B. Cortical control of a prosthetic arm for self-feeding. Nature 453, 1098–1101 (2008).
11. Buzsáki, G., Anastassiou, C. A. & Koch, C. The origin of extracellular fields and currents — EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 13, 407–420 (2012).
12. Kajikawa, Y. & Schroeder, C. E. How local is the local field potential? Neuron 72, 847–858 (2011).
13. Katzner, S. et al. Local origin of field potentials in visual cortex. Neuron 61, 35–41 (2009).
14. Xing, D., Yeh, C.-I. & Shapley, R. M. Spatial spread of the local field potential and its laminar variation in visual cortex. J. Neurosci. 29, 11540–11549 (2009).
15. Liu, J. & Newsome, W. T. Local field potential in cortical area MT: stimulus tuning and behavioral correlations. J. Neurosci. 26, 7779–7790 (2006).
16. Esghaei, M., Daliri, M. R. & Treue, S. Attention decreases phase-amplitude coupling, enhancing stimulus discriminability in cortical area MT. Front. Neural Circuits 9, 82 (2015).
17. Fries, P., Reynolds, J. H., Rorie, A. E. & Desimone, R. Modulation of oscillatory neuronal synchronization by selective visual attention. Science 291, 1560–1563 (2001).
18. Khayat, P. S., Niebergall, R. & Martinez-Trujillo, J. C. Frequency-dependent attentional modulation of local field potential signals in macaque area MT. J. Neurosci. 30, 7037–7048 (2010).
19. Chalk, M. et al. Attention reduces stimulus-driven gamma frequency oscillations and spike field coherence in V1. Neuron 66, 114–125 (2010).
20. Khodagholy, D. et al. NeuroGrid: recording action potentials from the surface of the brain. Nat. Neurosci. 18, 310–315 (2015).
21. Ray, S. & Maunsell, J. H. R. Different origins of gamma rhythm and high-gamma activity in macaque visual cortex. PLoS Biol. 9, e1000610 (2011).
22. Siegel, M., Warden, M. R. & Miller, E. K. Phase-dependent neuronal coding of objects in short-term memory. Proc. Natl. Acad. Sci. 106, 21341–21346 (2009).
23. Whittingstall, K. & Logothetis, N. K. Frequency-band coupling in surface EEG reflects spiking activity in monkey visual cortex. Neuron 64, 281–289 (2009).
24. Csicsvari, J., Jamieson, B., Wise, K. D. & Buzsáki, G. Mechanisms of gamma oscillations in the hippocampus of the behaving rat. Neuron 37, 311–322 (2003).
25. Fries, P., Womelsdorf, T., Oostenveld, R. & Desimone, R. The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4. J. Neurosci. 28, 4823–4835 (2008).
26. Lakatos, P., Karmos, G., Mehta, A. D., Ulbert, I. & Schroeder, C. E. Entrainment of neuronal oscillations as a mechanism of attentional selection. Science 320, 110–113 (2008).
27. Liebe, S., Hoerzer, G. M., Logothetis, N. K. & Rainer, G. Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance. Nat. Neurosci. 15, 456–462 (2012).
28. Sirota, A. et al. Entrainment of neocortical neurons and gamma oscillations by the hippocampal theta rhythm. Neuron 60, 683–697 (2008).
29. Vinck, M., Womelsdorf, T., Buffalo, E. A., Desimone, R. & Fries, P. Attentional modulation of cell-class-specific gamma-band synchronization in awake monkey area V4. Neuron 80, 1077–1089 (2013).
30. Rasch, M., Logothetis, N. K. & Kreiman, G. From neurons to circuits: linear estimation of local field potentials. J. Neurosci. 29, 13785–13796 (2009).
31. Besserve, M., Schölkopf, B., Logothetis, N. K. & Panzeri, S. Causal relationships between frequency bands of extracellular signals in visual cortex revealed by an information theoretic analysis. J. Comput. Neurosci. 29, 547–566 (2010).
32. Esghaei, M. & Daliri, M. R. Decoding of visual attention from LFP signals of macaque MT. PLoS ONE 9, e100381 (2014).
33. Monosov, I. E., Trageser, J. C. & Thompson, K. G. Measurements of simultaneously recorded spiking activity and local field potentials suggest that spatial selection emerges in the frontal eye field. Neuron 57, 614–625 (2008).
34. Tan, A. Y. Y., Chen, Y., Scholl, B., Seidemann, E. & Priebe, N. J. Sensory stimulation shifts visual cortex from synchronous to asynchronous states. Nature 509, 226–229 (2014).
35. Carmichael, J. E., Gmaz, J. M. & van der Meer, M. A. A. Gamma oscillations in the rat ventral striatum originate in the piriform cortex. J. Neurosci. 37, 7962–7974 (2017).
36. Einevoll, G. T. et al. Laminar population analysis: estimating firing rates and evoked synaptic activity from multielectrode recordings in rat barrel cortex. J. Neurophysiol. 97, 2174–2190 (2007).
37. Cardin, J. A. et al. Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature 459, 663–667 (2009).
38. Okun, M., Naim, A. & Lampl, I. The subthreshold relation between cortical local field potential and neuronal firing unveiled by intracellular recordings in awake rats. J. Neurosci. 30, 4440–4448 (2010).
39. Devonshire, I. M., Grandy, T. H., Dommett, E. J. & Greenfield, S. A. Effects of urethane anaesthesia on sensory processing in the rat barrel cortex revealed by combined optical imaging and electrophysiology. Eur. J. Neurosci. 32, 786–797 (2010).
40. Kisley, M. A. & Gerstein, G. L. Trial-to-trial variability and state-dependent modulation of auditory-evoked responses in cortex. J. Neurosci. 19, 10451–10460 (1999).
41. Gaese, B. H. & Ostwald, J. Anesthesia changes frequency tuning of neurons in the rat primary auditory cortex. J. Neurophysiol. 86, 1062–1066 (2001).
42. Spaak, E., Bonnefond, M., Maier, A., Leopold, D. A. & Jensen, O. Layer-specific entrainment of gamma-band neural activity by the alpha rhythm in monkey visual cortex. Curr. Biol. 22, 2313–2318 (2012).
43. Roelfsema, P. R. & Treue, S. Basic neuroscience research with nonhuman primates: a small but indispensable component of biomedical research. Neuron 82, 1200–1204 (2014).
44. Calapai, A. et al. A cage-based training, cognitive testing and enrichment system optimized for rhesus macaques in neuroscience research. Behav. Res. Methods 49, 35–45 (2016).
45. Nelson, M. J., Pouget, P., Nilsen, E. A., Patten, C. D. & Schall, J. D. Review of signal distortion through metal microelectrode recording circuits and filters. J. Neurosci. Methods 169, 141–157 (2008).
46. Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21 (2004).
## Acknowledgements
We would like to thank Laura Busse and Steffen Katzner for sharing the data they recorded from monkey T. We specially thank Suresh Krishna, Roozbeh Kiani and Reza Rajimehr for their expert comments on an earlier version of the manuscript and Igor Kagan for fruitful discussions. Further acknowledgement should be addressed to Dirk Prüsse, Leonore Burchardt, and Ralf Brockhausen for expert technical assistance. This work was supported by the grants of the Deutsche Forschungsgemeinschaft through the Collaborative Research Center 889 “Cellular Mechanisms of Sensory Processing” to S.T. (Project C04) and the Federal Ministry of Education and Research (BMBF) of Germany under grant number 01GQ1005C.
## Author information
### Contributions
M.R.D. and S.T. designed the experimental paradigm. M.R.D. collected the data. M.E. performed the data analyses. M.E., M.R.D. and S.T. interpreted the data. M.E., M.R.D. and S.T. wrote the paper.
### Corresponding author
Correspondence to Moein Esghaei.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Esghaei, M., Daliri, M.R. & Treue, S. Local field potentials are induced by visually evoked spiking activity in macaque cortical area MT. Sci Rep 7, 17110 (2017). https://doi.org/10.1038/s41598-017-17372-4
• Spike-phase coupling as an order parameter in a leaky integrate-and-fire model. Nahid Safari, Moein Esghaei & Marzieh Zare. Physical Review E (2020)
• Coding Perceptual Decisions: From Single Units to Emergent Signaling Properties in Cortical Circuits. Kristine Krug. Annual Review of Vision Science (2020)
• Sharp-wave ripple features in macaques depend on behavioral state and cell-type specific firing. Ahmed T. Hussin, Timothy K. Leonard & Kari L. Hoffman. Hippocampus (2020)
• Routing information flow by separate neural synchrony frequencies allows for “functionally labeled lines” in higher primate cortex. …, Stefan Treue & Moein Esghaei. Proceedings of the National Academy of Sciences (2019)
• Neural Activity Predicts Reaction in Primates Long Before a Behavioral Response. Mohsen Parto Dezfouli, Stefan Treue & Moein Esghaei. Frontiers in Behavioral Neuroscience (2018)
https://electronics.stackexchange.com/questions/418858/understanding-class-a-amplifier-waveforms
# Understanding class A amplifier waveforms
I am working on an audio amplifier but got stuck on a basic NPN transistor circuit.
I am attaching a class-A amplifier circuit (take a look here or here). Everywhere on the net it is said that this circuit can amplify the whole sine wave, but in my PSpice simulation it does not: only one half of the input signal is getting amplified.
My question is: how can I get the full sine wave (as all the class-A amplifier schematics over the internet claim)? Also, please explain why exactly I am getting the attached waveform in my circuit.
• In active mode operation, the BJT collector will be 180 degrees out of phase with the base. But when the BJT is saturated, its collector will tend to follow the base when the emitter resistor is present. So the interpretation here is that for part of the cycle your BJT is saturated and for the rest of it you have way too much gain for the signal, winding up with an almost-square wave as the output result. Of course, as already pointed out your base bias resistors are being over-ridden by an ideal voltage source at the base. Another problem. – jonk Jan 25 at 13:16
The 10 K and 1 K resistors you have connected to the base of the transistor are for biasing; however, the voltage source you have for the oscillator is preventing that from happening. If you AC couple the oscillator instead (add a capacitor between the signal generator and the transistor) you should see correct operation. With the voltages you have you will see distortion (clipping); try reducing the oscillator to 1 V.
You can experiment with the ratio of the biasing resistors to see how that influences the output waveform.
• And likewise, a capacitor on the output, to remove the DC offset of the output. – rdtsc Jan 25 at 12:18
• Hi Colin, yes it worked and I now understand why, thanks. Can you explain the math of selecting the value of that coupling capacitor? – Ashish Jha Jan 27 at 3:53
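As a hedged aside on that last question (not part of the original thread): the coupling capacitor together with the amplifier's AC input resistance forms a high-pass filter with corner frequency $f_c = \frac{1}{2\pi R_{in} C}$, so C is chosen so that $f_c$ falls below the lowest frequency of interest. A small Python sketch with assumed values:

```python
import math

# High-pass corner of the AC-coupling network: f_c = 1 / (2*pi*R_in*C).
# R_in is the amplifier's input resistance; with a 10k/1k divider it is
# roughly 10k || 1k. These numbers are illustrative assumptions, not
# taken from the original schematic.
R_in = 900.0            # ohms, roughly 10k || 1k
f_low = 20.0            # lowest audio frequency to pass, Hz
C = 1 / (2 * math.pi * R_in * f_low)
print(f"C >= {C*1e6:.1f} uF")   # about 8.8 uF, so a 10 uF part would do
```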
In active mode operation, the BJT collector will be 180 degrees out of phase with the base. But when the BJT is saturated, its collector will tend to follow the base when the emitter resistor is present. So the interpretation here is that for part of the cycle your BJT is saturated and for the rest of it you have way too much gain for the signal, winding up with an almost-square wave as the output result. Of course, as already pointed out your base bias resistors are being over-ridden by an ideal voltage source at the base. That's another serious problem.
I'll discuss an approach and a schematic (below) that will show you how to approach a more proper design for a BJT stage like this.
Looks like you want a gain of 13, just quickly glancing. Obviously, you can tolerate an output impedance of $13\:\text{k}\Omega$, too. I'll choose a different gain and a different output impedance, but not too far away.

Let's say the voltage gain is to be $A_V=10$ and I'll keep your existing power supply rail of $V_\text{CC}=10\:\text{V}$. Here's an approach. (There are many such, not just one. But I'm not going to go through more than one for you. You can pick up others when other folks tell you about them.)

1. The maximum voltage gain is about 40 times the quiescent collector current (in milliamps). With $A_V=10$, this means $I_{\text{C}_\text{Q}}\gt 250\:\mu\text{A}$. I'd like twice that much, if possible. So let's set $I_{\text{C}_\text{Q}}=500\:\mu\text{A}$.
2. Given $I_{\text{C}_\text{Q}}=500\:\mu\text{A}$ and typical small-signal BJTs, it is reasonable to conclude that the quiescent base-emitter voltage is about $V_{\text{BE}_\text{Q}}\approx 660\:\text{mV}$.
3. I like to reserve about $2\:\text{V}$ for the minimum $V_\text{CE}$ of the BJT, in order to keep it well away from saturation, to help deal with BJT variations, and to slightly reduce the impact of the Early Effect.
4. I like to reserve at least $1\:\text{V}$ for the quiescent emitter voltage for a variety of reasons, but importantly because I would like to place temperature and part variation issues under management.
5. With $V_\text{CC}=10\:\text{V}$ and subtracting the above two margins I just reserved, this means there is about $7\:\text{V}$ left over for the collector swing. But I also want to leave about $2\:\text{V}$ margin at the top end of the collector swing (limiting distortion due to gain variation and mitigating Early Effect.) So I don't want the collector to move any higher than $8\:\text{V}$. So this leaves only $5\:\text{V}$ for the collector swing (max.)
6. Therefore, the quiescent collector voltage will be $V_{\text{C}_\text{Q}}=10\:\text{V}-2\:\text{V}-\frac{5\:\text{V}}{2}=5.5\:\text{V}=1\:\text{V}+2\:\text{V}+\frac{5\:\text{V}}{2}$. In short, $V_{\text{C}_\text{Q}}=5.5\:\text{V}$.
7. From (1) and (6), I can compute a collector resistor of $R_{\text{C}}=\frac{10\:\text{V}-5.5\:\text{V}}{500\:\mu\text{A}}=9\:\text{k}\Omega$. Set this to the nearby 5% precision value of $R_{\text{C}}=9.1\:\text{k}\Omega$.
8. From (1) and (4), I can compute a DC emitter resistor of $R_{\text{E}_1}=\frac{1\:\text{V}}{500\:\mu\text{A}}=2\:\text{k}\Omega$. That's a standard 5% value, so keep it.
9. From (2) and (4), I know that the quiescent DC base voltage should be $V_{\text{B}_\text{Q}}=1\:\text{V}+660\:\text{mV}=1.66\:\text{V}$.
10. To be conservative, I'll assume that the base current of the BJT will be no more than about $I_{\text{B}_\text{Q}}=\frac{I_{\text{C}_\text{Q}}}{\beta}=\frac{500\:\mu\text{A}}{100}=5\:\mu\text{A}$.
11. To make a "stiff" resistor divider (in the sense that it is relatively unaffected by variations in the required base current due to signal variations), I know that the current through the two base divider resistors should be about $\frac{1}{10}$th the quiescent collector current [or 10 times the current calculated in (10) above.] So this means about $50\:\mu\text{A}$ for the base's resistor divider pair; used in steps (12) and (13) next.
12. The divider resistor, from base to ground, is then $R_2=\frac{1.66\:\text{V}}{50\:\mu\text{A}}=33.2\:\text{k}\Omega$. Use the nearby 5% value of $R_2=33\:\text{k}\Omega$.
13. The divider resistor, from base to the supply rail, is then $R_1=\frac{10\:\text{V}-1.66\:\text{V}}{50\:\mu\text{A}+5\:\mu\text{A}}=151.6\:\text{k}\Omega$. Use the nearby 5% value of $R_1=150\:\text{k}\Omega$.
14. To get the gain, I need the total AC emitter resistance to be $\frac{R_\text{C}}{A_V}-\frac{V_T}{I_{\text{C}_\text{Q}}}=\frac{9.1\:\text{k}\Omega}{10}-\frac{26\:\text{mV}}{500\:\mu\text{A}}\approx 858\:\Omega$. However, as you will soon see below, there is already a $2\:\text{k}\Omega$ emitter resistor for the DC operating point computed in (8) above. So I need a new AC resistor value of $R_{\text{E}_2}=\frac{2\:\text{k}\Omega\,\cdot\, 858\:\Omega}{2\:\text{k}\Omega-858\:\Omega}\approx 1503\:\Omega$. I'll use the nearby 5% value of $R_{\text{E}_2}=1.5\:\text{k}\Omega$. (The arithmetic of these steps is recapped in the short sketch below.)
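A hedged numeric recap of steps 1–14 in Python (plain arithmetic only; names are descriptive, values exactly as chosen above):

```python
VCC, AV, BETA = 10.0, 10.0, 100.0
ICQ = 500e-6                    # step 1: quiescent collector current (A)
VBEQ = 0.660                    # step 2: assumed quiescent V_BE (V)
VEQ = 1.0                       # step 4: quiescent emitter voltage (V)
VCQ = 5.5                       # step 6: quiescent collector voltage (V)

RC  = (VCC - VCQ) / ICQ         # step 7: 9.0 k -> nearest 5% value 9.1 k
RE1 = VEQ / ICQ                 # step 8: 2.0 k (standard value)
VBQ = VEQ + VBEQ                # step 9: 1.66 V at the base
IBQ = ICQ / BETA                # step 10: ~5 uA of base current
IDIV = 10 * IBQ                 # step 11: stiff divider current, 50 uA
R2 = VBQ / IDIV                 # step 12: 33.2 k -> use 33 k
R1 = (VCC - VBQ) / (IDIV + IBQ) # step 13: 151.6 k -> use 150 k

VT = 0.026                      # thermal voltage at room temperature (V)
re = VT / ICQ                   # intrinsic emitter resistance, 52 ohms
RE_ac = 9.1e3 / AV - re         # step 14: total AC emitter R, ~858 ohms
RE2 = RE1 * RE_ac / (RE1 - RE_ac)   # ~1503 ohms -> use 1.5 k
```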
So here is the resulting design using standard resistor values:
(Schematic omitted: created in CircuitLab, implementing the component values computed above.)
The above should take a maximum of a $500\:\text{mV}_\text{PP}$ input signal and generate a maximum $5\:\text{V}_\text{PP}$ output signal.
Feel free to ask questions, now. But hopefully that provides an approach to similar design questions.
To make the design still more bullet-proof to BJT variations, reduce the allowable maximum collector swing to $4\:\text{V}$ (or even less) and follow step (6) to set $V_{\text{C}_\text{Q}}=6\:\text{V}$ (or even a little bit higher, perhaps to $V_{\text{C}_\text{Q}}=6.3\:\text{V}$) and then recalculate the rest from there.

There's an issue with the design. It probably needs something to reduce its gain at higher frequencies. (Given the above-designed collector resistor, a $470\:\text{pF}$ capacitor across $R_\text{C}$ might be added, for example.) But I'm not going to address that issue any further, here.
• As always, @jonk, your answer goes above and beyond. Always a pleasure to read, thanks! – Colin Jan 25 at 14:26
• @Colin Thanks! I appreciate the kind words, very much. :) – jonk Jan 25 at 14:28
• Thank you sir for the detailed reply. I understood your calculations and approach. Sir, steps 1, 11 and 14 are a little unclear to me. In step 1, how did you decide that the maximum voltage gain is 40 times Ic? Is this a rule of thumb? In step 11, why should the current through the voltage divider be 10 times more? In step 14 I didn't get the AC emitter resistance calculation. And, sir, with this circuit when I calculated beta (Ic/Ib) it comes to 86 instead of 100 in PSpice (in CircuitLab it comes to 100). If the BJT is in the active region beta should not vary, right? Or is there some other reason? – Ashish Jha Jan 27 at 7:13
• @AshishJha It's easy to derive (1). Look for the terms $r_e$ and $r_\pi$, for example. But you've selected an answer and I've written an overly complete answer already. – jonk Jan 27 at 8:26
• @jonk, sir I am searching over the net. It would be helpful if you could send some relevant documents or links. My mail id: ashish.wityliti@gmail.com – Ashish Jha Jan 29 at 5:15
http://mathhelpforum.com/differential-geometry/105556-solved-fixed-point-print.html
# [SOLVED] Fixed Point
• October 1st 2009, 07:29 PM
redsoxfan325
[SOLVED] Fixed Point
Let $M$ be a compact metric space and $\Phi:M\longrightarrow M$ be such that $d(\Phi(x),\Phi(y)) < d(x,y)$ for all $x,y\in M$, $x\neq y$.
Show that $\Phi$ has a unique fixed point. [Hint: Minimize $d(\Phi(x),x)$.]
I'm not really sure how to use the hint, or even start the problem. This is in the chapter on the contraction mapping principle, so it seems like I have to get it so that I can apply the CMP. Any suggestions would be most welcome. I don't really want the whole problem solved. If you solve the whole problem, at least put some of it in a spoiler, because I'd like to solve as much of this on my own as I can.
• October 1st 2009, 07:49 PM
Jose27
Suppose $f:M \rightarrow [0, \infty )$ is such that $f(x)=d( \Phi(x),x)$; then $f$ is continuous. $M$ is compact. If in $y$ $f$ attains a minimum, what can you say about $f(y)$?
• October 1st 2009, 08:07 PM
redsoxfan325
Quote:
Originally Posted by Jose27
Suppose $f:M \rightarrow [0, \infty )$ is such that $f(x)=d( \Phi(x),x)$; then $f$ is continuous. $M$ is compact. If in $y$ $f$ attains a minimum, what can you say about $f(y)$?
So $f$ is a (Lipschitz) continuous function on a compact set. It has a min, max, and it's uniformly continuous.
What do you mean by "If in $y$ $f$ attains a minimum"?
$y$ is a point. I'm not sure what you mean. Did you mean that if $f(y)$ is the minimum value of $f$?
• October 2nd 2009, 05:03 AM
Jose27
Quote:
Originally Posted by redsoxfan325
$y$ is a point. I'm not sure what you mean. Did you mean that if $f(y)$ is the minimum value of $f$?
Yes
• October 2nd 2009, 08:27 AM
redsoxfan325
Thank you for your help. I have solved the problem.
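For readers who find this thread later, here is a sketch of the argument the hint leads to (not necessarily the poster's own write-up). Define $f(x) = d(\Phi(x),x)$; $f$ is continuous and $M$ is compact, so $f$ attains its minimum at some $y \in M$. If $\Phi(y) \neq y$, then

$$f(\Phi(y)) = d(\Phi(\Phi(y)),\Phi(y)) < d(\Phi(y),y) = f(y),$$

contradicting minimality, so $\Phi(y) = y$. For uniqueness, if $x \neq y$ were both fixed points, then $d(x,y) = d(\Phi(x),\Phi(y)) < d(x,y)$, a contradiction.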
https://www.gamedev.net/forums/topic/264798-single-pass-dot3-bump-mapping-tutorial/
# Single-Pass DOT3 Bump Mapping Tutorial
## Recommended Posts
Hey everyone, after spending over a week busting my head trying to get DOT3 bump mapping to work, I ended up with success, and a tutorial to help those of you who are trying to figure this out as well. By no means is it an "uber" or "professional" tutorial, but I hope it helps. http://www.unraveledmind.com/bumpmap.html I use this technique in a 3D engine I'm playing around with that I started writing less than two weeks ago. The engine's source is also included. :) From 3DS file loading to easy stuff such as frustum culling, it covers quite a few things. Hopefully some of you will find this helpful in your own projects. Share the lewt, so to speak. :)
##### Share on other sites
nice one (esp with the recent spate of bumpmapping posts here)
though do u really need to mipmap the cubemap
##### Share on other sites
One problem I noticed is that your texture is mirrored which causes your bumps to be 'outies' on the right half of the model, and 'innies' on the other.
You can fix this either by not mirroring textures, or by using a more robust method of generating tangent space. One method is the NVMeshMender class on the nvidia dev website, although that's overkill for the purposes of your tutorial.
##### Share on other sites
zedzeek: Mipmapping turned on makes the framerate higher actually, surprising as it is. Try it without, see what happens. ;)
SimmerD: I never actually noticed that until you mentioned it. Interesting side-effect, I suppose. I can see why it's happening though. I created half of the model using 3D Studio Max and simply mirrored the other half, didn't really consider the fact that it would make the bump map inverted, I suppose.
##### Share on other sites
It's not very surprising that mipmapping raises your framerate as this will make more texture accesses hit the caches on the graphics card, because accesses to a lower mip will be more "together" than accesses to a minified bigger mip.
##### Share on other sites
Just a quick update. I fixed the flipped bumpmap problem by adding this code right after it normalizes the tangents:
// Check to make sure the texture isn't mirrored from right to left,
// and flip the S tangent if it is
if (Dot(Cross(vSTangent[iV], vTTangent[iV]), vNormal[iV]) < 0.0f)
    vSTangent[iV] = -vSTangent[iV];
##### Share on other sites
Awesome tutorial! Glad that you shared your knowledge regarding this topic as few people do.
##### Share on other sites
i was sorta thinking building the mipmaps wouldn't give the correct result (ie a normalized vector) but on further thought it's gonna be very close
https://www.r-bloggers.com/2018/08/statistics-sunday-getting-started-with-the-russian-tweet-dataset/
IRA Tweet Data

You may have heard that two researchers at Clemson University analyzed almost 3 million tweets from the Internet Research Agency (IRA) – a “Russian troll factory”. In partnership with FiveThirtyEight, they made all of their data available on GitHub. So of course, I had to read the files into R, which I was able to do with this code:
files <- c("IRAhandle_tweets_1.csv",
"IRAhandle_tweets_2.csv",
"IRAhandle_tweets_3.csv",
"IRAhandle_tweets_4.csv",
"IRAhandle_tweets_5.csv",
"IRAhandle_tweets_6.csv",
"IRAhandle_tweets_7.csv",
"IRAhandle_tweets_8.csv",
"IRAhandle_tweets_9.csv")
library(tidyverse)
each_file <- function(file) {
  read_csv(file)  # the function body was lost in extraction; reading each CSV is the evident intent
}
tweet_data <- NULL
for (file in files) {  # the original loop referenced my_files; the vector defined above is files
  temp <- each_file(file)
  temp$id <- sub(".csv", "", file)
  tweet_data <- rbind(tweet_data, temp)
}
Note that this is a large file, with 2,973,371 observations of 16 variables. Let's do some cleaning of this dataset first. The researchers, Darren Linvill and Patrick Warren, identified 5 major types of trolls:
• Right Troll: These Trump-supporting trolls voiced right-leaning, populist messages, but “rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans…They routinely denigrated the Democratic Party, e.g. @LeroyLovesUSA, January 20, 2017, ‘#ThanksObama We're FINALLY evicting Obama. Now Donald Trump will bring back jobs for the lazy ass Obamacare recipients,’” the authors wrote.
• Left Troll: These trolls mainly supported Bernie Sanders, derided mainstream Democrats, and focused heavily on racial identity, in addition to sexual and religious identity. The tweets were “clearly trying to divide the Democratic Party and lower voter turnout,” the authors told FiveThirtyEight.
• News Feed: A bit more mysterious, news feed trolls mostly posed as local news aggregators who linked to legitimate news sources. Some, however, “tweeted about global issues, often with a pro-Russia perspective.”
• Hashtag Gamer: Gamer trolls used hashtag games (a popular call/response form of tweeting) to drum up interaction from other users. Some tweets were benign, but many “were overtly political, e.g. @LoraGreeen, July 11, 2015, ‘#WasteAMillionIn3Words Donate to #Hillary.’”
• Fearmonger: These trolls, who were least prevalent in the dataset, spread completely fake news stories, for instance “that salmonella-contaminated turkeys were produced by Koch Foods, a U.S. poultry producer, near the 2015 Thanksgiving holiday.”
But a quick table of the variable account_category shows 8 categories in the dataset.
table(tweet_data$account_category)
##
## Commercial Fearmonger HashtagGamer LeftTroll NewsFeed
## 122582 11140 241827 427811 599294
## NonEnglish RightTroll Unknown
## 837725 719087 13905
The additional three are Commercial, Non-English, and Unknown. At the very least, we should drop the Non-English tweets, since those use Russian characters and any analysis I do will assume data are in English. I'm also going to keep only a few key variables. Then I'm going to clean up this dataset to remove links, because I don't need those for my analysis - I certainly wouldn't want to follow them to their destination. If I want to free up some memory, I can then remove the large dataset.
reduced <- tweet_data %>%
select(author,content,publish_date,account_category) %>%
filter(account_category != "NonEnglish")
library(qdapRegex)
##
## Attaching package: 'qdapRegex'
reduced$content <- rm_url(reduced$content)
rm(tweet_data)
Now we have a dataset of 2,135,646 observations of 4 variables. I'm planning on doing some analysis on my own of this dataset - and will of course share what I find - but for now, I thought I'd repeat a technique I've covered on this blog and demonstrate a new one.
library(tidytext)
tweetwords <- reduced %>%
unnest_tokens(word, content) %>%
anti_join(stop_words)
## Joining, by = "word"
wordcounts <- tweetwords %>%
count(account_category, word, sort = TRUE) %>%
ungroup()
head(wordcounts)
## # A tibble: 6 x 3
##   account_category word        n
##   <chr>            <chr>   <int>
## 1 NewsFeed news 124586
## 2 RightTroll trump 95794
## 3 RightTroll rt 86970
## 4 NewsFeed sports 47793
## 5 Commercial workout 42395
## 6 NewsFeed politics 38204
First, I'll conduct a TF-IDF analysis of the dataset. This code is a repeat from a previous post.
tweet_tfidf <- wordcounts %>%
bind_tf_idf(word, account_category, n) %>%
arrange(desc(tf_idf))
tweet_tfidf %>%
mutate(word = factor(word, levels = rev(unique(word)))) %>%
group_by(account_category) %>%
top_n(15) %>%
ungroup() %>%
ggplot(aes(word, tf_idf, fill = account_category)) +
geom_col(show.legend = FALSE) +
labs(x = NULL, y = "tf-idf") +
facet_wrap(~account_category, ncol = 2, scales = "free") +
coord_flip()
## Selecting by tf_idf
But another method of examining terms and topics in a set of documents is Latent Dirichlet Allocation (LDA), which can be conducted using the R package, topicmodels. The only issue is that LDA requires a document term matrix. But we can easily convert our wordcounts dataset into a DTM with the cast_dtm function from tidytext. Then we run our LDA with topicmodels. Note that LDA is a random technique, so we set a random number seed, and we specify how many topics we want the LDA to extract (k). Since there are 6 account types (plus 1 unknown), I'm going to try having it extract 6 topics. We can see how well they line up with the account types.
tweets_dtm <- wordcounts %>%
cast_dtm(account_category, word, n)
library(topicmodels)
tweets_lda <- LDA(tweets_dtm, k = 6, control = list(seed = 42))
tweet_topics <- tidy(tweets_lda, matrix = "beta")
Now we can pull out the top terms from this analysis, and plot them to see how they lined up.
top_terms <- tweet_topics %>%
group_by(topic) %>%
top_n(15, beta) %>%
ungroup() %>%
arrange(topic, -beta)
top_terms %>%
mutate(term = reorder(term, beta)) %>%
ggplot(aes(term, beta, fill = factor(topic))) +
geom_col(show.legend = FALSE) +
facet_wrap(~topic, scales = "free") +
coord_flip()
Based on these plots, I'd say the topics line up very well with the account categories, showing, in order: news feed, left troll, fear monger, right troll, hash gamer, and commercial. One interesting observation, though, is that Trump is a top term in 5 of the 6 topics.
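If you want to check that alignment numerically rather than just eyeballing the plots, the model's gamma matrix (the per-document topic probabilities, where each "document" here is an account category) makes it explicit. A quick sketch:

tweet_gamma <- tidy(tweets_lda, matrix = "gamma")
tweet_gamma %>%
  group_by(document) %>%
  top_n(1, gamma) %>%
  arrange(document)

Each row shows the topic that dominates each account category, so a clean one-to-one mapping would confirm what the bar plots suggest.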
https://scite.ai/reports/effect-of-solvents-for-the-Gv15v3
“…By 2023, the global market of cyclopentane is expected to reach US$ 335.7 million, with an annual growth rate of 6.7%. Cyclopentane resource has been detected in light petroleum fractions like naphtha and natural gas liquid; however, it is hardly separated as a commercial-grade product through traditional fractionation operations owing to the formation of close-boiling/azeotropic mixtures with hydrocarbons possessing similar numbers of carbon atoms. Neohexane or 2,2-dimethylbutane (22DMB) is a major contaminant in cyclopentane.…”
Section: Introduction (mentioning)
“…It is also a high-quality blending component for clean gasoline with a high octane number. The composition of cyclopentane-neohexane from naphtha varies widely in volume from 92/8 to 29/71. Discriminating between these two compounds is a great challenge because they not only exhibit extremely close boiling points, differing by only 0.4 K, but also form azeotropes. Considering the impossibility of splitting neohexane from cyclopentane by simple distillation operations, azeotropic and extractive distillations have been designed to produce cyclopentane of higher purities. However, these heat-driven distillation-based hydrocarbon separations represent the most energy-intensive and operationally complex processes.…”
Section: Introduction (mentioning)
“…Lee and Ronald [9] purified cyclohexane from a mixture of 85 wt % cyclohexane and 15 wt % 2,3-dimethylpentane using various mixed entrainers. Other works using mixed entrainers include Brown and Lee [10] and Song et al. [11] Recently, Lavanya et al. [12] reported the production of cyclopentane by extractive distillation. However, the previous works mentioned above focused on hydrocarbon mixtures of C5–C7.…”
Section: Introduction (mentioning)
“…N-(β-mercaptoethyl)-2-pyrrolidone (NMEP), cyclohexanol (CHOL), N-methyl-pyrrolidone (NMP), or a mixture of NMEP and either CHOL or NMP, phenol, and NMP + 5% water [6], [7]. Among these solvents, a mixed solvent tends to be more effective than a single solvent, since a single solvent cannot have high selectivity and high solubility simultaneously.…”
(mentioning)
https://discuss.codechef.com/questions/98424/chef-and-dolls-iam-getting-nzec-error
# Chef and Dolls - I am getting an NZEC error
T = input()
N = input()
ti = []
ti.append(0)
n = []
for i in range(T):
    for j in range(N):
        ti.append(input())
for k in range(max(ti) + 1):
    n.append(0)
for p in ti[1:]:
    n[p] = n[p] + 1
for q in range(1, max(ti) + 1):
    if n[q] % 2 != 0:
        print q

asked 20 May '17, 01:39
I don't know Python, so I can't really read your code. Are you sure the way you take input is correct? answered 20 May '17, 02:04
I guess that you are using the latest version of Python, i.e. Python 3.4. When you take input with input(), you get string-type data instead of integer-type data. Thus your list ti is actually a list of strings, not a list of integers. And as the index of a list should be an integer, not a string, you are getting an NZEC error at the line n[p]=n[p]+1, because p here is a string but should be an integer. To get an integer input, do int(input()). It will change the input type from string to int. But if you use the previous version of Python, i.e. Python 2.7, the code will run without any error. Many things have changed in Python 3.4 in comparison with Python 2.7 :( answered 20 May '17, 11:31 trish16
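For completeness, here is the asker's code with just that fix applied - a sketch that keeps the original logic untouched and only converts the inputs with int() and the print statement to Python 3 syntax:

T = int(input())
N = int(input())
ti = [0]
n = []
for i in range(T):
    for j in range(N):
        ti.append(int(input()))   # int() is the fix: list indices must be integers
for k in range(max(ti) + 1):
    n.append(0)
for p in ti[1:]:
    n[p] = n[p] + 1
for q in range(1, max(ti) + 1):
    if n[q] % 2 != 0:
        print(q)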
https://www.researchgate.net/publication/346315336_Formal_Verification_of_ECCs_for_Memories_Using_ACL2
# Formal Verification of ECCs for Memories Using ACL2
## Abstract and Figures
Due to the ever-increasing toll of soft errors in memories, Error Correction Codes (ECCs) like Hamming and Reed-Solomon codes have been used to protect data in memories, in applications ranging from space to terrestrial work stations. In the past seven decades, most of the research has focused on providing better ECC strategies for data integrity in memories, but research efforts toward better verification methodologies for the newer ECCs have not kept the same pace. As memory sizes keep increasing, exhaustive simulation-based testing of ECCs is no longer practical. Hence, formal verification, particularly theorem proving, provides an efficient, yet scarcely explored, alternative for ECC verification. We propose a framework, with extensible libraries, for the formal verification of ECCs using the ACL2 theorem prover. The framework is easy to use and particularly targets the needs of formally verified ECCs in memories. We also demonstrate the usefulness of the proposed framework by verifying two of the most commonly used ECCs, i.e., Hamming and Convolutional codes. To illustrate that the ECCs verified using our formal framework are practically reliable, we utilized a formal record-based memory model to formally verify that the inherent properties of the ECCs like Hamming distance, codeword decoding, and error detection/correction remain consistent even when the ECC is implemented on the memory.
https://doi.org/10.1007/s10836-020-05904-2
Journal of Electronic Testing (2020) 36:643–663
Received: 12 April 2020 / Accepted: 2 September 2020 / Published online: 26 September 2020
Keywords: Error Correction Codes (ECCs) · Memory soft errors · Hamming codes · Convolutional codes · Formal verification · Theorem proving · ACL2
1 Introduction

Soft errors are a type of error that does not cause permanent damage to semi-conductor devices [56], yet leads to temporary faults in them. In particular, radiation-induced soft errors have been a major concern in semi-conductor devices since the 1970s [12, 60]. In a long chain of events, both the high speed protons in cosmic rays and the alpha particles emitted during the decay of radioactive impurities
in IC packaging material induce the silicon-based semi-conductor memories to change their logic states, hence resulting in soft errors [10, 47].

Recent advancements in technology, including circuit miniaturization, voltage reduction, and increased circuit clock frequencies, have augmented the problem of soft errors in memories [10, 48]. The most obvious drawbacks of memory errors include the loss of correct data and the addition of faulty data into the memory. However, depending on the application/system using the memory, the severity of these memory errors could vary. This is summarized in Fig. 1. In a LEON3 processor, a memory error may simply cause a result error, i.e., an erroneous output from an algorithm running on the system, or a system timeout, i.e., the termination of an application without any result [39]. Similarly, in a Xilinx FPGA, such errors may cause the system to halt [33].

Error Correction Codes (ECCs) [44] are used to cater for memory errors by adding extra bits, often called parity or check bits, to the data bits in the memory. The parity bits are calculated using the available data bits, and in case of an error, the lost data is retrieved using these parity bits. Hence, ECCs are considered to be the most effective solution for memory errors [10], and since the introduction of Hamming…
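As a concrete illustration of the parity-bit mechanism described above (an illustrative sketch for intuition only; the paper's actual development is in ACL2), the classic Hamming(7,4) code protects 4 data bits with 3 parity bits and corrects any single flipped bit:

def encode(d):
    # d: list of 4 data bits; parity bits sit at codeword positions 1, 2, 4
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def correct(c):
    # c: received 7-bit codeword with at most one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

# encode([1, 0, 1, 1]) gives [0, 1, 1, 0, 0, 1, 1]; flip any single bit of that
# codeword and correct() restores the original.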
Chapter
Full-text available
The magic wand $$\mathbin{-\!\!*}$$ (also called separating implication) is a separation logic connective commonly used to specify properties of partial data structures, for instance during iterative traversals. A footprint of a magic wand formula $$A \mathbin{-\!\!*} B$$ is a state that, combined with any state in which A holds, yields a state in which B holds. The key challenge of proving a magic wand (also called packaging a wand) is to find such a footprint. Existing package algorithms either have a high annotation overhead or, as we show in this paper, are unsound. We present a formal framework that precisely characterises a wide design space of possible package algorithms applicable to a large class of separation logics. We prove in Isabelle/HOL that our formal framework is sound and complete, and use it to develop a novel package algorithm that offers competitive automation and is sound. Moreover, we present a novel, restricted definition of wands and prove in Isabelle/HOL that it is possible to soundly combine fractions of such wands, which is not the case for arbitrary wands. We have implemented our techniques for the Viper language, and demonstrate that they are effective in practice.
Chapter
Full-text available
Spot is a C++17 library for LTL and ω-automata manipulation, with command-line utilities, and Python bindings. This paper summarizes its evolution over the past six years, since the release of Spot 2.0, which was the first version to support ω-automata with arbitrary acceptance conditions, and the last version presented at a conference. Since then, Spot has been extended with several features such as acceptance transformations, alternating automata, games, LTL synthesis, and more. We also shed some light on the data structure used to store automata. Artifact: https://zenodo.org/record/6521395 .
Chapter
Full-text available
SMT solvers are highly complex pieces of software with performance, robustness, and correctness as key requirements. Complementing traditional testing techniques for these solvers with randomized stress testing has been shown to be quite effective. Recent work has showcased the value of input fuzzing for finding issues, but this approach typically does not comprehensively test a solver’s API. Previous work on model-based API fuzzing was tailored to a single solver and a small subset of SMT-LIB. We present Murxla, a comprehensive, modular, and highly extensible model-based API fuzzer for SMT solvers. Murxla randomly generates valid sequences of solver API calls based on a customizable API model, with full support for the semantics and features of SMT-LIB. It is solver-agnostic but extensible to allow for solver-specific testing and supports option fuzzing, cross-checking with other solvers, translation to SMT-LIBv2, and SMT-LIBv2 input fuzzing. Our evaluation confirms its efficacy in finding issues in multiple state-of-the-art SMT solvers.
Chapter
Full-text available
RIOT is a micro-kernel dedicated to IoT applications that adopts eBPF (extended Berkeley Packet Filters) to implement so-called femto-containers. As micro-controllers rarely feature hardware memory protection, the isolation of eBPF virtual machines (VM) is critical to ensure system integrity against potentially malicious programs. This paper shows how to directly derive, within the Coq proof assistant, the verified C implementation of an eBPF virtual machine from a Gallina specification. Leveraging the formal semantics of the CompCert C compiler, we obtain an end-to-end theorem stating that the C code of our VM inherits the safety and security properties of the Gallina specification. Our refinement methodology ensures that the isolation property of the specification holds in the verified C implementation. Preliminary experiments demonstrate satisfying performance.
Chapter
Full-text available
Most methods of data transmission and storage are prone to errors, leading to data loss. Forward erasure correction (FEC) is a method to allow data to be recovered in the presence of errors by encoding the data with redundant parity information determined by an error-correcting code. There are dozens of classes of such codes, many based on sophisticated mathematics, making them difficult to verify using automated tools. In this paper, we present a formal, machine-checked proof of a C implementation of FEC based on Reed-Solomon coding. The C code has been actively used in network defenses for over 25 years, but the algorithm it implements was partially unpublished, and it uses certain optimizations whose correctness was unknown even to the code’s authors. We use Coq’s Mathematical Components library to prove the algorithm’s correctness and the Verified Software Toolchain to prove that the C program correctly implements this algorithm, connecting both using a modular, well-encapsulated structure that could easily be used to verify a high-speed, hardware version of this FEC. This is the first end-to-end, formal proof of a real-world FEC implementation; we verified all previously unknown optimizations and found a latent bug in the code.
Chapter
Full-text available
Compositional synthesis relies on the discovery of assumptions, i.e., restrictions on the behavior of the remainder of the system that allow a component to realize its specification. In order to avoid losing valid solutions, these assumptions should be necessary conditions for realizability. However, because there are typically many different behaviors that realize the same specification, necessary behavioral restrictions often do not exist. In this paper, we introduce a new class of assumptions for compositional synthesis, which we call information flow assumptions . Such assumptions capture an essential aspect of distributed computing, because components often need to act upon information that is available only in other components. The presence of a certain flow of information is therefore often a necessary requirement, while the actual behavior that establishes the information flow is unconstrained. In contrast to behavioral assumptions, which are properties of individual computation traces, information flow assumptions are hyperproperties , i.e., properties of sets of traces. We present a method for the automatic derivation of information-flow assumptions from a temporal logic specification of the system. We then provide a technique for the automatic synthesis of component implementations based on information flow assumptions. This provides a new compositional approach to the synthesis of distributed systems. We report on encouraging first experiments with the approach, carried out with the BoSyHyper synthesis tool.
Chapter
Full-text available
Workflow nets are a well-established mathematical formalism for the analysis of business processes arising from either modeling tools or process mining. The central decision problems for workflow nets are k -soundness, generalised soundness and structural soundness. Most existing tools focus on k -soundness. In this work, we propose novel scalable semi-procedures for generalised and structural soundness. This is achieved via integral and continuous Petri net reachability relaxations. We show that our approach is competitive against state-of-the-art tools.
Chapter
Full-text available
MoGym is an integrated toolbox enabling the training and verification of machine-learned decision-making agents based on formal models, for the purpose of sound use in the real world. Given a formal representation of a decision-making problem in the JANI format and a reach-avoid objective, MoGym (a) enables training a decision-making agent with respect to that objective directly on the model using reinforcement learning (RL) techniques, and (b) it supports rigorous assessment of the quality of the induced decision-making agent by means of deep statistical model checking (DSMC). MoGym implements the standard interface for training environments established by OpenAI Gym, thereby connecting to the vast body of existing work in the RL community. In return, it makes accessible the large set of existing JANI model checking benchmarks to machine learning research. It thereby contributes an efficient feedback mechanism for improving in particular reinforcement learning algorithms. The connective part is implemented on top of Momba. For the DSMC quality assurance of the learned decision-making agents, a variant of the statistical model checker modes of the Modest Toolset is leveraged, which has been extended by two new resolution strategies for non-determinism when encountered during statistical evaluation.
Keywords: Formal Methods · Statistical Model Checking · Reinforcement Learning
Chapter
Full-text available
In many synthesis problems, it can be essential to generate implementations which not only satisfy functional constraints but are also randomized to improve variety, robustness, or unpredictability. The recently-proposed framework of control improvisation (CI) provides techniques for the correct-by-construction synthesis of randomized systems subject to hard and soft constraints. However, prior work on CI has focused on qualitative specifications, whereas in robotic planning and other areas we often have quantitative quality metrics which can be traded against each other. For example, a designer of a patrolling security robot might want to know by how much the average patrol time needs to be increased in order to ensure that a particular aspect of the robot’s route is sufficiently diverse and hence unpredictable. In this paper, we enable this type of application by generalizing the CI problem to support quantitative soft constraints which bound the expected value of a given cost function, and randomness constraints which enforce diversity of the generated traces with respect to a given label function. We establish the basic theory of labelled quantitative CI problems, and develop efficient algorithms for solving them when the specifications are encoded by finite automata. We also provide an approximate improvisation algorithm based on constraint solving for any specifications encodable as Boolean formulas. We demonstrate the utility of our problem formulation and algorithms with experiments applying them to generate diverse near-optimal plans for robotic planning problems.
Chapter
Full-text available
In this paper, we present the first fully-automated expected amortised cost analysis of self-adjusting data structures, that is, of randomised splay trees, randomised splay heaps, and randomised meldable heaps, which so far have only (semi-)manually been analysed in the literature. Our analysis is stated as a type-and-effect system for a first-order functional programming language with support for sampling over discrete distributions, non-deterministic choice and a ticking operator. The latter allows for the specification of fine-grained cost models. We state two soundness theorems based on two different, but strongly related, typing rules of ticking, which account differently for the cost of non-terminating computations. Finally we provide a prototype implementation able to fully automatically analyse the aforementioned case studies.
Article
Full-text available
Error-correcting codes add redundancy to transmitted data to ensure reliable communication over noisy channels. Since they form the foundations of digital communication, their correctness is a matter of concern. To enable trustful verification of linear error-correcting codes, we have been carrying out a systematic formalization in the Coq proof-assistant. This formalization includes the material that one can expect of a university class on the topic: the formalization of well-known codes (Hamming, Reed–Solomon, Bose–Chaudhuri–Hocquenghem) and also a glimpse at modern coding theory. We demonstrate the usefulness of our formalization by extracting a verified decoder for low-density parity-check codes based on the sum-product algorithm. To achieve this formalization, we needed to develop a number of libraries on top of Coq’s Mathematical Components. Special care was taken to make them as reusable as possible so as to help implementers and researchers dealing with error-correcting codes in the future.
Conference Paper
Full-text available
Multiple bit upsets (MBUs) caused by high energy radiation is the most common source of soft errors in static random-access memories (SRAMs) affecting multiple cells. Burst error correcting Hamming codes have most commonly been used to correct MBUs in SRAM cell since they have low redundancy and low decoder latency. But with technology scaling, the number of bits being affected increases, thus requiring a need for increasing the burst size that can be corrected. However, this is a problem because it increases the number of syndromes exponentially thus increasing the decoder complexity exponentially as well. In this paper, a new burst error correcting code based on Hamming codes is proposed which allows much better scaling of decoder complexity as the burst size is increased. For larger burst sizes, it can provide significantly smaller and faster decoders than existing methods thus providing higher reliability at an affordable cost. Moreover, there is frequently no increase in the number of check bits or a very minimal increase in comparison with existing methods. A general construction and decoding methodology for the new codes is proposed. Experimental results are presented comparing the decoder complexity for the proposed codes with conventional burst error correcting Hamming codes demonstrating the significant improvements that can be achieved.
Article
Due to the emergence of extremely high density memory along with the growing number of embedded memories, memory yield is an important issue. Memory self-repair using redundancies to increase the yield of memories is widely used. Because high density memories are vulnerable to soft errors, memory ECC (Error Correction Code) plays an important role in memory design. In this paper, methods to exploit spare columns including replaced defective columns are proposed to improve memory ECC. To utilize replaced defective columns, the defect information needs to be stored. Two approaches to store defect information are proposed - one is to use a spare column, and the other is to use a content-addressable-memory (CAM). Experimental results show that the proposed method can significantly enhance the ECC performance.
Article
Radiation effects cause several types of errors on memories including single event upsets (SEUs) or single event functional interrupts (SEFIs). Error correction codes (ECCs) are widely used to protect against those errors. For a number of reasons, there is a large interest in using double data rate type three (DDR-3) synchronous dynamic random-access (SDRAM) memories in space applications. Radiation testing results show that these memories will suffer both SEUs and SEFIs when used in space. Protection against a SEFI and an SEU is needed to achieve high reliability. In this paper, a method to protect 16-bit and 64-bit data word memories composed of 8-bit memory devices against a simultaneous SEFI and an SEU is presented. The scheme uses orthogonal Latin square (OLS) codes and can be activated when a SEFI occurs, using a conventional double error correction approach otherwise.
Conference Paper
By adding redundancy to transmitted data, error-correcting codes (ECCs) make it possible to communicate reliably over noisy channels. Minimizing redundancy and (de)coding time has driven much research, culminating with Low-Density Parity-Check (LDPC) codes. At first sight, ECCs may be considered as a trustful piece of computer systems because classical results are well-understood. But ECCs are also performance-critical so that new hardware calls for new implementations whose testing is always an issue. Moreover, research about ECCs is still flourishing with papers of ever-growing complexity. In order to provide means for implementers to perform verification and for researchers to firmly assess recent advances, we have been developing a formalization of ECCs using the SSReflect extension of the Coq proof-assistant. We report on the formalization of linear ECCs, duly illustrated with a theory about the celebrated Hamming codes and the verification of the sum-product algorithm for decoding LDPC codes.
Article
With technology scaling and complexity, better error detection and correction mechanisms within chips and systems are becoming increasingly important in order to provide sufficient protection against both soft and hard errors. Verifying the correctness of error detection circuits and ensuring they provide enough design coverage is a hard problem which usually involves substantial amount of manual work. This problem is even more challenging in the presence of different design methodologies, such as with the inclusion of third party IP blocks where functional descriptions of logic designs may not be available. This paper addresses the problem by proposing a completely automated RTL-based verification flow for error detection and correction circuits. Several related challenges are solved: first, that of identification of potential error detection circuits in logic designs where no functional description or methodology hints are given. Second, identification of structures of the latches that are potentially protected by such error detection circuits. Third, using formal verification for ensuring that the implemented circuits for resiliency indeed detect all single bit errors in the latches they are intended to cover. The approach is described with parity detection as an example, although it is extensible to other coding methods such as ECC and state orthogonality checking. Novel algorithms are given and results on industrial designs are presented.
Conference Paper
We present a formal approach to minimize the number of voters in triple-modular redundant sequential circuits. Our technique actually works on a single copy of the circuit and considers a user-defined fault model (under the form “at most 1 bit-flip every k clock cycles”). Verification-based voter minimization guarantees that the resulting circuit (i) is fault tolerant to the soft-errors defined by the fault model and (ii) is functionally equivalent to the initial one. Our approach operates at the logic level and takes into account the input and output interface specifications of the circuit. Its implementation makes use of graph traversal algorithms, fixed-point iterations, and BDDs. Experimental results on the ITC'99 benchmark suite indicate that our method significantly decreases the number of inserted voters which entails a hardware reduction of up to 55% and a clock frequency increase of up to 35% compared to full TMR. We address scalability issues arising from formal verification with approximations and assess their efficiency and precision.
Conference Paper
Redundancy techniques that use voting principles are often used to increase the reliability of systems by ensuring fault tolerance. In order to increase the efficiency of these redundancy strategies, we propose to exploit the inherent fault-masking properties of software algorithms at the application level. An important step in early development stages is to choose, from a class of algorithms that achieve the same goal in different ways, one or more that should be executed redundantly. In order to evaluate the resilience of the algorithm variants, there is a great need for quantitative reasoning about the algorithms' fault tolerance in early design stages. Here, we propose an approach for analyzing the vulnerability of given algorithm variants to hardware faults in redundant designs by applying a model checker and fault-injection modelling. The method is capable of automatically identifying all input and fault combinations that remain undetected by a voting system. This leads to a better understanding of algorithm-specific resilience characteristics.
https://aggle.blogspot.com/2017/06/triathlon-math-part-2-what-are.html
## 11 June 2017
### Triathlon math part 2: What are realistic event speeds?
In my last post, I looked at triathlon events from purely a mathematical point of view, and asked the question: "Given how long each event is, which event helps your time the most if you decide to push the pace?" If you assume that you can swim as fast as you can bike, then our initial guess that the bike leg is the most important holds up, because the bike leg is so much longer.
That conclusion breaks down if you break that assumption! There are regions on the correlation matrices I plotted last time where the gain in time is very similar for different events. Which regions are physically realistic? Can I really improve my swimming speed from 1 km/hr to 30 km/hr? Is it worth it to sacrifice 3 km/hr on the bike in order to gain 2 km/hr on the run? The answer might really depend on which speed you're starting from and which speed you're going to for each event.
To get a more grounded idea of the relevant speeds, I downloaded the data from all 429 competitors in the Overall category for our race so we can see how actual athletes perform (data available here - my team was in the relay category so it doesn't include us).
When you look at how the overall triathlon finishing times are distributed, the first thing that pops out is that the top finishers are closer to the pack than the long tail of slower athletes:
but if you look at everyone's average speed, it's much more evenly distributed:
These two plots are consistent with each other, since the separation between two competitors increases the longer they are out on the course. If one athlete has speed $$v$$, and the other has a speed that is a fraction $$f$$ of $$v$$ (i.e. $$f \times v$$), then the difference in time between them at the end of the course (distance $$d$$) is
$\Delta T = \frac{d}{v} \left(1 - \frac{1}{f}\right)$
so it's not a linear relationship.
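To put numbers on that (a made-up example, not taken from the race data): on a 20 km bike leg at $$v = 30$$ km/hr, a rival at $$f = 0.9$$ finishes about 4.4 minutes back, while one at $$f = 0.8$$ finishes a full 10 minutes back - the time gap grows faster than the speed deficit.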
When you look at how the times for each event are distributed, there's a significant overlap between events in how long it takes - the fastest runners finish their 5k faster than a significant chunk of the swimmers (myself included)!
Considering the amount of overlap above, I was surprised by how neatly the events separate themselves out in terms of speed. Each event occupies a pretty well-defined space all by itself:
I think that comparing these two plots can clarify the answers we're seeking. The Speed histograms are all fairly symmetric and almost Normal - suggesting that the athletes are pulled randomly out of a population with some average value. On the other hand, the Time histograms all have right tails - especially the run and the bike. What this tells me is that if you're out in the tail of the run or bike time histograms, there is some other athlete very close to you in terms of fitness (read: there's hope!) who is running or biking just a little bit faster and getting a much bigger time benefit.
Next up (probably): focusing in on the space these speeds occupy on the correlation matrices from last time.
Edit: I updated the speed histograms with a fit to a Gaussian. The agreement looks pretty good!
https://homework.cpm.org/category/CON_FOUND/textbook/gc/chapter/3/lesson/3.2.6/problem/3-98
### Home > GC > Chapter 3 > Lesson 3.2.6 > Problem3-98
3-98.
Examine the graph of the line below.

a. Find the slope of this line using slope triangle $A$. Hint (a): How do you find the slope of a line?

b. Find the slope using slope triangle $B$. Hint (b): See part (a).

c. Without calculating, what does the slope ratio for slope triangle $C$ have to be? Hint (c): If the triangle is on the same line, what does that tell you about the slope?
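For reference (a general fact the hints point to, stated here for completeness): the slope of a line through two points $(x_1,y_1)$ and $(x_2,y_2)$ is $m=\frac{y_2-y_1}{x_2-x_1}=\frac{\text{rise}}{\text{run}}$, and every slope triangle drawn on the same line yields this same ratio, which is why part (c) requires no calculation.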
https://it.mathworks.com/help/vision/ref/2dminimum.html
# 2-D Minimum
Find minimum values in input or sequence of inputs
• Library:
• Computer Vision Toolbox / Statistics
## Description
The 2-D Minimum block identifies the value, and optionally the position, of the smallest element in the input. The input can be a vector, a matrix, or an N-D array. The block identifies the minimum value either along a specified dimension of the input or across the entire input. It also tracks the minimum values in a sequence of inputs over a period of time when the Mode parameter is set to `Running`.
## Ports
### Input
Input array, specified as a vector, matrix, or N-D array.
#### Dependencies
The port is named only when you either select the Enable ROI processing parameter or set the Mode parameter to `Running`.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `uint8` | `uint16` | `uint32` | `fixed point`
Complex Number Support: Yes
Reset the running minimum, specified as a scalar. This port specifies the event that causes the block to reset the running minimum. The sample time of the Rst input must be a positive integer and a multiple of the block input sample time.
#### Dependencies
To enable this port, set the Mode parameter to `Running` and set the Reset port parameter to `Rising edge`, `Falling edge`, `Either edge`, or `Non-zero sample`.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `uint8` | `uint16` | `uint32` | `Boolean`
Region of interest (ROI), specified as a four-element vector, m-by-4 matrix, or M-by-N matrix. This port accepts different input values depending on the setting of the ROI type parameter.
Note
• You can use the ROI port only if the input to the In port is a 2-D image.
• You cannot use the ROI port if the Mode parameter is set to `Running`.
#### Dependencies
To enable this port, set the Find the minimum value over parameter to `Entire input` and select the Enable ROI processing parameter.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `uint8` | `uint16` | `uint32` | `Boolean` | `fixed point`
Label matrix, specified as a matrix of nonnegative integers. The label matrix represents the objects in a 2-D image. The pixels labeled `0` are the background. The pixels labeled `1` make up the first object, the pixels labeled `2` make up the second object, and so on. The size of the label matrix must be same as the size of the 2-D input image.
#### Dependencies
To enable this port, select the Enable ROI processing parameter and set the ROI type parameter to `Label matrix`.
Note
You cannot enable the Label port if the Mode parameter is set to `Running`.
Data Types: `uint8` | `uint16` | `uint32`
Label values of ROI, specified as an M-element vector. This represents the object names for the corresponding numbers in the label matrix. M must be less than or equal to the number of objects in the label matrix.
#### Dependencies
To enable this port, select the Enable ROI processing parameter and set the ROI type parameter to `Label matrix`.
Note
You cannot enable the Label Numbers port if the Mode parameter is set to `Running`.
Data Types: `uint8` | `uint16` | `uint32`
### Output
Minimum values of the input, returned as a scalar, vector, matrix, or N-D array. The size of this output depends on the size of the input, and the settings of the Mode and Find the minimum value over parameters.
Note
This port is unnamed if the Mode parameter is set to `Running`. It doesn't appear if the Mode parameter is set to `Index`.
#### Compute Minimum Value of Input Array
Set the Mode parameter to `Value and Index` or `Value`. The block computes the minimum value along the specified dimension of the input or across the entire input. The size of the output minimum value depends on the size of the input and the setting of the Find the minimum value over parameter.
• Scalar — The input is of any size, and the Find the minimum value over parameter is set to `Entire input`.
• Vector — The input is a matrix, and the Find the minimum value over parameter is set to `Each row`, `Each column`, or `Specified dimension`. If `Specified dimension` is selected, the value of the Dimension parameter must be either `1` or `2`.
• (N–1)-D array — The input is an N-D array, the Find the minimum value over parameter is set to `Specified dimension`, and the value of the Dimension parameter is N.
• N-D array with one singleton dimension — The input is an N-D array, and the Find the minimum value over parameter is set to `Each row`, `Each column`, or `Specified dimension`. If `Specified dimension` is selected, the value of the Dimension parameter must be an integer less than N.
Example: For a 3-D input array of size M-by-N-by-P, the dimension of the returned output is:
• 1-by-N-by-P if you set the Find the minimum value over parameter to `Each column`.
• M-by-1-by-P if you set the Find the minimum value over parameter to `Each row`.
• M-by-N if you set the Find the minimum value over parameter to `Specified dimension` and the Dimension parameter to `3`.
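As a quick sanity check of these shapes, the equivalent reductions in MATLAB behave the same way (a sketch with a hypothetical 4-by-5-by-2 input, not part of the block reference itself):

A = rand(4, 5, 2);      % hypothetical M-by-N-by-P input
size(min(A, [], 1))     % 1x5x2 -> minimum over each column
size(min(A, [], 2))     % 4x1x2 -> minimum over each row
size(min(A, [], 3))     % 4x5   -> minimum along dimension 3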
#### Compute Minimum Value of Sequence of Inputs
Set the Mode parameter to `Running`. The block tracks the minimum value of each element of the input across the sequence of inputs over time. The output is of the same size as the input.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `uint8` | `uint16` | `uint32` | `fixed point`
Indices of the minimum values in the input array, returned as a scalar, vector, matrix, or N-D array.
To enable this port, set the Mode parameter to `Value and Index` or `Index`. The size of the output depends on the size of the input and the setting of the Find the minimum value over parameter.
• Scalar — The input is an N-element vector, and the Find the minimum value over parameter is set to `Entire input`.
• Vector — The input is a matrix, and the Find the minimum value over parameter is set to `Entire input`, `Each row`, `Each column`, or `Specified dimension`. If `Specified dimension` is selected, the value of the Dimension parameter must be either `1` or `2`.
• (N–1)-D array — The input is an N-D array, the Find the minimum value over parameter is set to `Specified dimension`, and the value of the Dimension parameter is N.
• N-D array with one singleton dimension — The input is an N-D array, and the Find the minimum value over parameter is set to `Each row`, `Each column`, or `Specified dimension`. If `Specified dimension` is selected, the value of the Dimension parameter must be an integer less than N.
Example: For a 3-D input array of size M-by-N-by-P, the size of the returned output is:
• 1-by-N-by-P if you set the Find the minimum value over parameter to `Each column`.
• M-by-1-by-P if you set the Find the minimum value over parameter to `Each row`.
• M-by-N if you set the Find the minimum value over parameter to `Specified dimension` and the Dimension parameter to `3`.
Note
When a minimum value occurs more than once, the computed index corresponds to the first occurrence. For example, if the input vector is ```[3 2 1 2 1]```, then the minimum value is `1` and the one-based index of the minimum value is `3`.
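The same tie-breaking rule is easy to verify with the MATLAB `min` function:

```matlab
[val, idx] = min([3 2 1 2 1])   % val = 1, idx = 3 (first occurrence, one-based)
```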
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fixed point`
ROI Validation, returned as a scalar or vector of logical `1`s (`true`) or `0`s (`false`). If the ROI type parameter is set to `Rectangles` or `Lines`, the output signifies whether or not the specified ROIs lie completely or partially within the input image. If the ROI type parameter is set to `Label matrix`, the output signifies whether or not the label numbers specified in the Label Numbers input are present in the input label matrix.
| ROI type | Individual statistics for each ROI | Single statistic for all ROIs |
| --- | --- | --- |
| `Rectangles` | The port returns an M-element vector, where M is the number of rows in the M-by-4 matrix input to the ROI port. Each element is `1` when the corresponding rectangular ROI is completely or partially inside the input image, and `0` when it is completely outside. | The port returns a scalar: `1` when any rectangle in the ROI input is completely or partially inside the input image, and `0` when all rectangles are completely outside. |
| `Lines` | The port returns a scalar: `1` when the line input to the ROI port is completely or partially inside the input image, and `0` when it is completely outside. | The port returns a scalar: `1` when the line input to the ROI port is completely or partially inside the input image, and `0` when it is completely outside. |
| `Label matrix` | The port returns an M-element vector, where M is the number of elements input to the Label Numbers port. Each element is `1` when the corresponding label is present in the label matrix, and `0` when it is absent. | The port returns a scalar: `1` when any of the labels in the Label Numbers input are present in the label matrix, and `0` when all of them are absent. |
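For the label-matrix case, the flag logic amounts to a membership test. A hedged sketch (the variable names are illustrative, not the block's internals):

```matlab
L = uint8([0 1 1 0; 0 1 1 2]);                    % label matrix with objects 1 and 2
labelNumbers = uint8([1 2 5]);                    % values at the Label Numbers port
individualFlags = ismember(labelNumbers, L(:));   % per-label flags: [true true false]
singleFlag = any(individualFlags);                % single statistic for all ROIs: true
```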
Note
If the ROI is partially outside the image, the block computes the statistical values for only the portion of the ROI that lies within the image bounds.
#### Dependencies
To enable this port, select the Output flag indicating if ROI is within image bounds parameter and set the ROI type parameter to `Rectangles` or `Lines`, or select the Output flag indicating if input label numbers are valid parameter and set the ROI type parameter to `Label matrix`.
## Parameters
Main
Specify the output mode of the block as one of these options.
• `Value and Index` — Return both the minimum values and their corresponding indices in the given input.
• `Value` — Return only the minimum values.
• `Index` — Return only the indices of the minimum values in the given input.
• `Running` — Track the minimum value of each input element over a sequence of inputs.
Specify the index for the first element in the input array.
• `One` for one-based numbering. The range of index values for each dimension is 1 to m, where m is the length of that dimension. For example, the index of the first element in a matrix is (`1,1`).
• `Zero` for zero-based numbering. The range of index values for each dimension is 0 to m–1, where m is the length of that dimension. For example, the index of the first element in a matrix is (`0,0`).
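Since MATLAB itself indexes from one, the zero-based convention is just an offset; a trivial sketch:

```matlab
x = [4 9 2 7];
[~, idxOne] = min(x);    % one-based index: 3
idxZero = idxOne - 1;    % zero-based index: 2
```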
Specify the dimension of the input along which the block computes the minimum.
• `Entire input` — Computes minimum over the entire input.
• `Each row` — Computes minimum over each row.
• `Each column` — Computes minimum over each column.
• `Specified dimension` — Computes the minimum over the dimension specified in the Dimension parameter.
• If the Dimension parameter is set to `1`, the output is the same as when `Each column` is selected.
• If the Dimension parameter is set to `2`, the output is the same as when `Each row` is selected.
#### Dependencies
To enable this parameter, set the Mode parameter to `Value and Index`, `Value`, or `Index`.
Specify the dimension of the input array over which the block computes the minimum as a one-based value. The value of this parameter must be greater than zero and less than or equal to the number of dimensions in the input array.
#### Dependencies
To enable this parameter, set the Find the minimum value over parameter to `Specified dimension`.
Specify what the block detects as a reset event. The block resets the running minimum when a reset event is detected at the Rst port. The reset sample time must be a positive integer multiple of the input sample time.
Specify the reset event as one of these options.
• `None` — Disable the Rst port.
• `Rising edge` — Trigger a reset event when the Rst input does one of the following.
• Rises from a negative value to either a positive value or zero
• Rises from zero to a positive value, where the rise is not a continuation of a rise from a negative value to zero
• `Falling edge` — Trigger a reset event when the Rst input does one of the following.
• Falls from a positive value to either a negative value or zero
• Falls from zero to a negative value, where the fall is not a continuation of a fall from a positive value to zero
• `Either edge` — Trigger a reset event when either a `Rising edge` or a `Falling edge` occurs at the Rst input.
• `Non-zero sample` — Trigger a reset event at each sample time for which the Rst input is nonzero.
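The edge rules above can be expressed compactly in code. The following MATLAB helper sketches the rising-edge logic only, under the assumption that state is carried between samples in a struct; the names and structure are illustrative, not the block's implementation:

```matlab
function [isRise, state] = detectRisingEdge(curr, state)
% state.prev       - previous sample value
% state.roseToZero - true if the previous transition rose from negative to zero
isRise = false;
if state.prev < 0 && curr >= 0
    isRise = true;       % rise from negative to zero or positive
elseif state.prev == 0 && curr > 0 && ~state.roseToZero
    isRise = true;       % rise from zero that does not continue an earlier
end                      % rise from a negative value to zero
state.roseToZero = (state.prev < 0 && curr == 0);
state.prev = curr;
end
```

A falling-edge detector mirrors this logic with the signs reversed, and `Either edge` is the disjunction of the two.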
Note
When running simulations in Simulink® multitasking mode, reset signals have a one-sample latency. In this case, when the block detects a reset event, there is a one-sample delay at the Rst port rate before the block applies the reset.
#### Dependencies
To enable this parameter, set the Mode parameter to `Running`.
Select to calculate the minimum within an ROI in the image.
Note
Full ROI processing is available only if you have a Computer Vision Toolbox™ license. If you do not have a Computer Vision Toolbox license, you can still use ROI processing, but the ROI type parameter is limited to `Rectangles`.
#### Dependencies
To enable this parameter, set the Find the minimum value over parameter to `Entire input`.
Specify the ROI format that represents the regions in the image over which to compute the minimum. The type of ROI can be a rectangle, line, label matrix, or a binary mask.
| ROI type | Inputs to the ROI port | Description |
| --- | --- | --- |
| `Rectangles` | Four-element row vector [x y width height], or an M-by-4 matrix [x1 y1 width1 height1; x2 y2 width2 height2; …; xM yM widthM heightM], where M is the number of rectangular ROIs | Each row of the matrix corresponds to a different rectangle. x and y are the one-based coordinates of the upper-left corner of the rectangle. width and height are the width and height of the rectangle, in pixels, and must be greater than zero. |
| `Lines` | Four-element row vector [x1 y1 x2 y2], or an M-by-4 matrix [x11 y11 x12 y12; x21 y21 x22 y22; …; xM1 yM1 xM2 yM2], where M is the number of lines | Each row of the matrix corresponds to a different line. (x1, y1) are the coordinates of the start of the line, and (x2, y2) are the coordinates of the end of the line. |
| `Label matrix` | M-by-N matrix | Matrix of the same size as the input image, containing label values that represent different objects in the image. The pixels labeled `0` are the background. The pixels labeled `1` make up one object, the pixels labeled `2` make up a second object, and so on. |
| `Binary mask` | M-by-N matrix | Matrix of the same size as the input image. The binary mask classifies each image pixel as belonging to either the ROI or the background: a mask value of `1` indicates that the corresponding pixel belongs to the ROI, and a value of `0` indicates that it belongs to the background. |
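The rectangle and binary-mask geometries map directly onto MATLAB indexing. A minimal sketch with assumed image and ROI values:

```matlab
img  = magic(8);                       % stand-in for the input image
rect = [3 2 4 3];                      % [x y width height], one-based
x = rect(1); y = rect(2); w = rect(3); h = rect(4);
roi = img(y:y+h-1, x:x+w-1);           % crop the rectangular ROI
minRect = min(roi(:));                 % minimum over the rectangle

mask = img > 40;                       % binary mask: true pixels belong to the ROI
minMask = min(img(mask));              % minimum over the masked region
```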
#### Dependencies
To enable this parameter, set the Find the minimum value over parameter to `Entire input` and select the Enable ROI processing parameter.
Specify the portion of the ROI for which the block calculates the 2-D minimum.
• `Entire ROI` — The block computes the minimum value over the entire region of the rectangular ROI.
• `ROI perimeter` — The block computes the minimum value along the perimeter of the rectangular ROI.
#### Dependencies
To enable this parameter, select the Enable ROI processing parameter and set the ROI type parameter to `Rectangles`.
Specify whether to calculate the 2-D minimum individually for each ROI or across all ROIs.
• If you select `Individual statistics for each ROI`, the block outputs a vector of minimum values, each element representing an ROI. The size of the output vector is equal to the number of ROIs.
• If you select `Single statistic for all ROIs`, the block outputs a scalar value. The scalar value is the minimum value across all specified ROIs.
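For label-matrix ROIs, the two options differ only in whether the per-object minima are kept separate; a sketch with assumed values:

```matlab
img = [9 4 7; 3 8 2; 6 5 1];
L   = [1 1 0; 1 2 2; 0 2 0];           % label matrix with objects 1 and 2
labels = [1 2];
individualMins = arrayfun(@(k) min(img(L == k)), labels);   % per ROI: [3 2]
singleMin = min(individualMins);                            % across all ROIs: 2
```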
#### Dependencies
To enable this parameter, select the Enable ROI processing parameter and set the ROI type parameter to `Rectangles`, `Lines`, or `Label matrix`.
Select to enable the Flag output port.
Note
The name of this parameter changes to Output flag indicating if input label numbers are valid when the ROI type parameter is set to `Label matrix`.
#### Dependencies
To enable this parameter, select the Enable ROI processing parameter and set the ROI type parameter to `Rectangles` or `Lines`.
Data Types
For details on the fixed-point block parameters, see Specify Fixed-Point Attributes for Blocks.
Select this parameter to prevent the fixed-point tools from overriding the data types you specify in this block. For more information, see Lock the Output Data Type Setting (Fixed-Point Designer).
## Block Characteristics
| Characteristic | Support |
| --- | --- |
| Data Types | `double` \| `fixed point` \| `integer` \| `single` |
| Multidimensional Signals | no |
| Variable-Size Signals | yes |
http://lotoblu.it/lbsp/edgems-math-course-3.html
# Edgems Math Course 3

EdGems Math is a comprehensive, customizable, differentiated curriculum for middle school mathematics. Its blended print and digital instructional model is designed to capture the best of both modalities in a seamless experience. EdGems Math Course 3 (Shannon McCaw, EdGems Math LLC, 1st edition, 2018; Florida Edition, 2019) is used as the primary textbook for M/J Grade 8 Pre-Algebra, and the course provides a solid foundation to fully prepare students for Algebra 1. Chapters 1-3 focus on integers, rational numbers, and real numbers in order to set the stage for equations, inequalities, and functions. The scope and sequence accounts for a range of 121-160 class periods for instruction, targeted interventions, and assessments, which allows any remaining days to be used for review. EdGems Math rosters and provisions accounts through Clever Secure Sync, and an IXL skill alignment for the curriculum is available at https://www.ixl.com/math/skill-plans/edgems-math-course-3.
https://mathematica.stackexchange.com/questions/192962/cant-get-listlineplot-to-work-on-simple-databin-data
# Can't get ListLinePlot to work on simple databin data [closed]
I do the following:
b=Values[Databin["BX8vj6ow"]]
{10,20,30}
Then I do
ListLinePlot[b]
I get an empty plot.
Then I do
a={10,20,30}
ListLinePlot[a]
I get the expected plot.
I have struggled with the Databins. I can put data in and get it out, but no matter what I do I cannot act on the data.
The values in the Databin are strings.
b // InputForm
ListLinePlot[ToExpression[b]]
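ListLinePlot silently ignores non-numeric data, which is why the raw bin values produced an empty plot: Datadrop had stored the entries as strings, and b // InputForm makes the surrounding quotation marks visible. Applying ToExpression converts the strings back into numbers that ListLinePlot can draw.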
|
2019-10-20 22:49:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6393603086471558, "perplexity": 3740.900061479829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986726836.64/warc/CC-MAIN-20191020210506-20191020234006-00322.warc.gz"}
|
http://piping-designer.com/index.php/properties/dimensionless-numbers/2289-dean-number
|
# Dean Number
Written by Jerry Ratzlaff on . Posted in Dimensionless Numbers
Dean number, abbreviated De, is a dimensionless number used in momentum transfer for flow in curved pipes and channels. The equation and a worked calculation are shown below.
$$\large{ De = \sqrt{ \frac{d}{2r} } \; \frac{\rho v d}{\mu} = \sqrt{ \frac{d}{2r} } \; Re }$$

Where:

$$\large{ De }$$ = Dean number
$$\large{ \rho }$$ (Greek symbol rho) = density of the fluid
$$\large{ d }$$ = diameter
$$\large{ \mu }$$ (Greek symbol mu) = dynamic viscosity
$$\large{ r }$$ = radius of curvature of the path of the channel
$$\large{ Re }$$ = Reynolds number
$$\large{ v }$$ = axial velocity scale
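As a worked example with illustrative numbers (not from the original page): for water with density 1000 kg/m³ and dynamic viscosity 0.001 Pa·s flowing at 1 m/s through a pipe of diameter 0.05 m whose path has a radius of curvature of 0.5 m, the Reynolds number is Re = (1000 × 1 × 0.05) / 0.001 = 50,000, so De = √(0.05 / (2 × 0.5)) × 50,000 = √0.05 × 50,000 ≈ 11,200. The tighter the curve (smaller r), the larger the Dean number and the stronger the secondary flow it measures.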
|
2019-02-21 08:57:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655773639678955, "perplexity": 4562.412769715041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503249.58/warc/CC-MAIN-20190221071502-20190221093502-00024.warc.gz"}
|
http://www.auburn.edu/academic/cosam/departments/math/research/colloquia.htm
|
COSAM » Departments » Mathematics & Statistics » Research » Departmental Colloquia
# Departmental Colloquia
Our department is proud to host weekly colloquium talks featuring research by leading mathematicians from around the world. Most colloquia are held on Fridays at 4pm in Parker Hall, Room 250 (unless otherwise advertised) with refreshments preceding at 3:30pm in Parker Hall, Room 244.
DMS Colloquium: Jan Boronski
Mar 18, 2019 04:00 PM
Speaker: Jan Boronski, National Supercomputing Center IT4Innovations, Ostrava, Czech Republic, and
AGH University of Science and Technology, Krakow, Poland
Title: On the Entropy Conjecture of Marcy Barge
Faculty host: Krystyna Kuperberg
DMS Colloquium: Yixiang Wu
Mar 20, 2019 04:00 PM
Speaker: Yixiang Wu, Department of Mathematics, Vanderbilt University
Title: Spatial Epidemic Models on Transmissions of Infectious Diseases
Abstract: Mathematical models can describe the progress and predict the outcome of the spatial spread of infectious diseases. Many mathematical tools, such as ordinary and partial differential equation theory, dynamical system theory, matrix theory, network theory, numerical and computational techniques, have been adopted to investigate the spatial transmissions of diseases. These investigations in turn have enriched the development of mathematics.
In the first part of this talk, I will present some recent development on using reaction-diffusion models to study the impact of environmental heterogeneity and the mobility of individuals on the spatial spread of infectious diseases. Specifically, the analysis of the endemic equilibria, the basic reproduction number and the global dynamics of these epidemic models will be discussed. In the second part, I will talk about our recent efforts to use geographical and population data to simulate the spatial transmission of influenza. The simulations demonstrate the effectiveness of epidemic models in understanding the spatial spread patterns of infectious diseases.
DMS Colloquium: Ferenc Fodor
Mar 22, 2019 04:00 PM
Speaker: Ferenc Fodor, University of Szeged (Hungary)
Title: On the $$L_p$$ dual Minkowski problem
Abstract:
Faculty host: Andras Bezdek
DMS Colloquium: Emanuele Ventura
Apr 05, 2019 04:00 PM
Speaker: Emanuele Ventura, Postdoc Texas A&M; Ph.D., Aalto University (Helsinki, Finland) 2017
DMS Colloquium: Youssef Marzouk
Apr 12, 2019 04:00 PM
Speaker: Youssef Marzouk, MIT
Faculty host: Yanzhao Cao
DMS Colloquium: Frédéric Holweck
Apr 19, 2019 04:00 PM
Speaker: Frédéric Holweck, Université de Technologie de Belfort-Montbéliard (France)
Faculty host: Luke Oeding
DMS Colloquium: Matthias Heikenschloss
Apr 26, 2019 04:00 PM
Speaker: Matthias Heikenschloss, Rice University
Title: TBA
DMS Colloquium: Julianne Chung
Mar 08, 2019 04:00 PM
Speaker: Julianne Chung, Virginia Tech University
Title: Efficient Methods for Large and Dynamic Inverse Problems
Abstract: In many physical systems, measurements can only be obtained on the exterior of an object (e.g., the human body or the earth's crust), and the goal is to estimate the internal structures. In other systems, signals measured from machines (e.g., cameras) are distorted, and the aim is to recover the original input signal. These are natural examples of inverse problems that arise in fields such as medical imaging, astronomy, geophysics, and molecular biology.
In this talk, we describe efficient methods to compute solutions to large, dynamic inverse problems. We focus on addressing two main challenges. First, since most inverse problems are ill-posed, small errors in the data may result in significant errors in the computed solutions. Thus, regularization must be used to compute stable solution approximations, and regularization parameters must be selected. Second, in many realistic scenarios such as in passive seismic tomography or dynamic photoacoustic tomography, the underlying parameters of interest may change during the measurement procedure. Thus, prior information regarding temporal smoothness must be incorporated for better reconstructions, but this can become computationally intensive, in part, due to the large number of unknown parameters. To address these challenges, we describe efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. These methods can be more flexible than standard methods, and efficient implementations can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples demonstrate the range of applicability and effectiveness of the described approaches.
Faculty host: Yanzhao Cao
DMS Colloquium: Dr. Chao Huang
Feb 15, 2019 04:00 PM
Speaker: Dr. Chao Huang, Department of Biostatistics, University of North Carolina at Chapel Hill
Title: Surrogate Variable Analysis for Multivariate Functional Responses in Imaging Data
DMS Colloquium: Yixi Xu
Feb 13, 2019 04:00 PM
Speaker: Yixi Xu, Ph.D. candidate at Purdue University
Title: Weight normalized deep neural networks
Abstract: Deep neural networks (DNNs) have recently demonstrated amazing performance on many challenging artificial intelligence tasks. DNNs have become popular due to their predictive power and flexibility in model fitting. One of the central questions about DNNs is to explain their generalization ability, even when the number of unknown parameters is much larger than the sample size. In this talk, we study a general framework of norm-based capacity control for $$L_{p,q}$$ weight normalized deep neural networks and further propose a sparse neural network. We establish the upper bound on the Rademacher complexities of the $$L_{p,q}$$ weight normalized deep neural networks. In particular, with an $$L_{1,\infty}$$ normalization, we discuss properties of a width-independent capacity control, where the sample complexity only depends on the depth by a square root term. In addition, for an $$L_{1,\infty}$$ weight normalized network with ReLU, the approximation error can be sufficiently controlled by the $$L_1$$ norm of the output layer. These results provide theoretical justification for the use of such weight normalization to reduce the generalization error. Finally, an easily implemented projected gradient descent algorithm is introduced to practically obtain a sparse neural network via $$L_{1,\infty}$$-weight normalization. Various experiments are performed to validate the theory and demonstrate the effectiveness of the resulting approach.
DMS Colloquium: Jingyi Zheng
Feb 11, 2019 04:00 PM
Speaker: Jingyi Zheng, Ph.D. candidate at the University of California at Davis
Title: A Data-driven Approach to Predict and Classify Epileptic Seizures from Brain-wide Calcium Imaging Video
Abstract: Epilepsy is a neurological disorder of the brain characterized by recurrent, unprovoked seizures. In this talk, we will discuss mainly three aspects of epilepsy study: (epilepsy) classification, (epileptic seizure) prediction, and spatiotemporal structure discovery. Unlike Electroencephalography (EEG) and fMRI data, calcium imaging video data captures brain-wide neuronal activity, with electrical discharge recorded as calcium fluorescence intensity (CFI). Using brain-wide calcium imaging video data from zebrafish, we first propose a data-driven approach to effectively predict epileptic seizures. Our approach includes two phases: offline training and online testing. Specifically, during offline training, we confirm the existence of a systemic change point, and estimate the ratio of unchanged system duration. For online testing, we implement a statistical model to estimate the change point, and then predict the onset of an epileptic seizure. Furthermore, we explore the macroscopic patterns of epileptic and control cases, and then build classifiers using machine learning models. Based on the data structure, we also propose a method to discretize related features, and further visualize the pattern difference using unsupervised learning methods. Finally, we discover the spatial structure based on mutual conditional entropy and recover the temporal system-state trajectory that leads to epileptic seizures.
DMS Colloquium: Prof. Dr. Stefan Friedenberg
Feb 08, 2019 04:00 PM
Speaker: Prof. Dr. Stefan Friedenberg, University of Stralsund, Germany
Title: Some footsteps of Ulrich Albrecht in mathematics
Abstract: Since Ulrich Albrecht will retire at the end of May, it is time to shed some light on his mathematical work throughout the last decades. This talk will give a brief overview of his research in several areas of algebra, namely Abelian groups, group extensions, and his latest work on valuated groups.
Faculty host: Ulrich Albrecht
DMS Colloquium: Dr. Youngjoo Cho
Feb 07, 2019 04:00 PM
Speaker: Dr. Youngjoo Cho, Zilber School of Public Health, University of Wisconsin-Milwaukee
Title: Covariate Adjustment for Treatment Effect On Competing Risks Data in Randomized Clinical Trials
Abstract: The double-blinded randomized trial is a gold standard for estimating the average causal effect (ACE). It does not require adjustment for covariates. However, in most cases, adjusting for covariates that are strong predictors of the outcome can improve efficiency in the estimation of the ACE. But when the covariates are high-dimensional, adjusting for all covariates in the model loses efficiency or, worse, identifiability. Recent work has shown that for linear regression, an estimator under risk consistency (e.g., LASSO, Random Forest) for the regression coefficients always leads to an improvement in efficiency. In this work, we study the behavior of the adjustment estimator for competing risks data analysis. A simulation study shows that covariate adjustment provides a more efficient estimator than the unadjusted one.
Last Updated: 09/11/2015
|
2019-03-18 14:17:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33614256978034973, "perplexity": 2559.613687836999}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201329.40/warc/CC-MAIN-20190318132220-20190318154220-00255.warc.gz"}
|
http://mathhelpforum.com/algebra/73953-solved-1-000-000th-second.html
|
# Math Help - [SOLVED] 1,000,000th second!!!
1. ## [SOLVED] 1,000,000th second!!!
Hi, wanted to double check my answer:
What date and time does the 1,000,000th second of the year occur?
Thanks!
2. Hello, phillyfan09!
What date and time does the 1,000,000th second of the year occur?
$1,\!000,\!000\text{ seconds} \;=\;16,\!666\tfrac{2}{3}\text{ minutes} \;=\;277\tfrac{7}{9}\text{ hours}$
. . $= \;11\text{ days, }13\tfrac{7}{9}\text{ hours} \;=\;11\text{ days, }13\text{ hours, }46\tfrac{2}{3}\text{ minutes}$
. . $= \;11\text{ days, }13\text{ hours, }46\text{ minutes, }40\text{ seconds}$
$\text{Eleven full days after midnight on January 1 is midnight on January 12, so the }1,\!000,\!000^{th}\text{ second occurs on January 12, at } 1\!:\!46'40''\text{ p.m.}$
3. That's what I got, Thanks.
|
2014-07-28 18:15:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41016602516174316, "perplexity": 11159.239088722068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00323-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://tex.stackexchange.com/questions/69870/fully-robust-way-to-access-the-first-item-in-a-token-list-expandably/69968
|
# Fully robust way to access the first item in a token list (expandably)
Given a token list such as \a\b\c or {ab}c, I define the first item to be what \@gobble would get as its argument (recall the definition \long\def\@gobble#1{}). It is not hard to devise a macro which extracts the first item from a token list, and, for instance, wraps it in eTeX's \unexpanded:
\begingroup
\catcode@=11
\long\gdef\firstofmany#1{\@firstofmany#1\@marker}
\long\gdef\@firstofmany#1#2\@marker{\unexpanded{#1}}
\endgroup
\message{"\firstofmany{\a\b\c}"} % => "\a "
\message{"\firstofmany{ { ab} c}"} % => " ab"
However, this macro fails if the token list contains the marker, which I chose to be \@marker. Is it possible to write a variant of this macro which would work for an arbitrary token list? (I don't care what token lists consisting only of spaces, which thus have no first item, produce.)
EDIT: I should have made it clearer that the solution is not to produce a less common delimiter. I would like a \firstofmany function which does not choke on any part of its own definition (and definitions of auxiliaries too, of course).
-
If you allow some pdftex primitives I think you can do this, which uses the entire input list as marker.
\begingroup
\catcode@=11
\long\gdef\firstofmany#1{%
\@fom{\unexpanded{[#1]}}#1{[#1]}}
\long\gdef\@fom#1#2{%
\unexpanded{#2}%
\@gobbleto{#1}}
\gdef\@gobbleto#1#2{%
\ifnum\pdfstrcmp{\unexpanded{#2}}{#1}=\z@
\expandafter\@gobbletwo
\else
\fi
\@gobbleto{#1}}
\gdef\@gobbletwo#1#2{}
\endgroup
\message{"\firstofmany{\a\b\c}"} % => "\a "
\message{"\firstofmany{ { ab} c}"} % => " ab"
\bye
-
In my use case (l3check for LaTeX3), \pdfstrcmp is available, so this works. I'll probably make that a bit faster by first removing stuff until a marker (in most cases, that's enough), then applying your technique. It would have been nice to get a pure TeX or eTeX solution, because the LuaTeX \pdf@strcmp (provided by Heiko) is 10x slower than pdfTeX's. – Bruno Le Floch Sep 3 '12 at 1:23
@DavidCarlisle: Please is the \else in the solution playing any role? – Ahmed Musa Sep 3 '12 at 4:06
@AhmedMusa The \else has no purpose here, you can omit it. – Heiko Oberdiek Sep 3 '12 at 5:02
@DavidCarlisle Nice trick. \@gobbleto and \@gobbletwo need \long to support \par inside the arbitrary token list. – Heiko Oberdiek Sep 3 '12 at 5:04
@AhmedMusa \else had the purpose of me not having finished designing the code by the time I started the \if :-) – David Carlisle Sep 3 '12 at 9:12
EDIT: Much much shorter. What was I thinking before? Or am I sleepy now? :-)
Strip the left brace using \string and \gobble, get the first item, put the brace back in.
\catcode@=11
\def\@gobble#1{}
\def\firstofmany{\expandafter\expandafter\expandafter
\fom@getfirst\expandafter\@gobble\string}
\def\fom@getfirst#1{\unexpanded{#1}\fom@gobble}
\def\fom@gobble{\expandafter\expandafter\expandafter
\expandafter\expandafter\expandafter
\expandafter\@gobble\iftrue\expandafter{\else}\fi}
\message{"\firstofmany{\a\b\c}"}
\bye
Proper treatment of initial groups is missing here, though (they are found, but the braces are not written out).
I think I may have found a pure eTeX solution. I have attacked it with everything I could think of and it seems to work... except in the case of a blank list (error) and lists starting with space (space is ignored). But these were mentioned above as unimportant anyway.
I don't know about speed improvement, though --- I'm no expert, but the thing is quite complicated...
Before giving the code, a conceptual overview.
1. The list is detokenized. (Thus, eTeX only.)
2. In the detokenized version, the outer groups of braces are counted. (This part could be optimized by using TeX's macro argument parsing mechanism. But at this stage, I was implementing for clarity, not speed.) The assumption is that { and } (and only these) have catcode 1 and 2, but this could be easily generalized, I believe.
The number of outer groups is carried around as a list of *s of the appropriate length.
Ok, this was the easy part :-)
3. The idea is to dismantle the group using \string: the opening brace is stringified and then gobbled. The problem, however, is how to expand the \string and \gobble. Our *-based "counter" is in the way... (By the way, it seems to me completely impossible to pass the counter around (as part of argument lists) after the degrouped list, because we don't want to use a fixed delimiter.)
Part of the solution is \let*\expandafter. We need to expand two macros after the *-counter, so we will walk through the stars twice, so 1/4 of them will remain. But when we "multiply" the counter by four, all is well. :-)
4. After the group is dismantled, we have easy access to the first item. True, we need to be a bit careful with first items that are groups etc., but all in all, this part is more tedious than innovative.
5. The only remaining part of magic is the gobbling. We alternate between gobbling the outer groups and the tokens between them. Since we know how many outer groups there are, we know when to stop, so we don't meet the now-lonely right brace (we eventually provide him a partner, of course).
We gobble the tokens between outer groups using the \def\gobble...#{ trick (TeXbook p.204).
\catcode@=11
\def\afterfi#1#2\fi{\fi#1}
% use \onefi etc after these
\def\afterfifi#1#2#3\fi#4\fi{#1#2}
\def\afterfififi#1#2#3\fi#4\fi#5\fi{#1#2}
\def\afterfifififi#1#2#3\fi#4\fi#5\fi#6\fi{#1#2}
\def\onefi{\fi}
\def\twofi{\fi\fi}
\def\threefi{\fi\fi\fi}
\def\fourfi{\fi\fi\fi\fi}
\def\gobble#1{}
\def\openingbrace{\iftrue{\else}\fi}
\def\closingbrace{\iffalse{\else}\fi}
% Detokenize (while preserving the original)
\long\def\firstofmany#1{%
\expandafter\fom@countfirstlevelgroups\detokenize{#1}de{}{}{#1}%
}
\catcode*=13 % we'll be counting stars
\def\if@zero\if#1#2/{% % zero test
\ifx#1/%
\afterfi{\if@zero@yes}%
\else
\afterfi{\if@zero@no}%
\fi
}
\def\if@zero@yes{\iftrue}
\def\if@zero@no/{\iffalse}
{\catcode(=1 \catcode)=2 (\catcode{=12 \catcode}=12
\xdef\detok@openingbrace({)%
% Count the number of outer brace pairs
%
% Note 1: This macro is very non-optimized... it should use TeX's macro
% argument parsing mechanism to search for { and }, and shouldn't use
% all these \afterfi-s, I used this approach just for clarity.
%
% Note 2: This macro expects precisely { and } to be of catcode 1 and 2.
% This could be fixed, but it's not worth the effort at this point.
%
% Args: #1#2 = detokenized, #3 = n, #4 = depth
% --> letters are safe delimiters, because \detokenize produces other's
% We save the very first token for later (#5 below).
\gdef\fom@countfirstlevelgroups#1#2e#3#4(%
\fom@countfirstlevelgroups@#1#2e(#3)(#4)#1%
)
\gdef\fom@countfirstlevelgroups@#1#2e#3#4#5(%
\ifx#1d% end of detokenized string
\afterfifififi(\onefi)(\fom@removeopeningbrace#5(#3))%
\else
\ifx#1{% { found ==> increase depth
\if@zero\if#4//% { found at zero depth ==> increase n
\afterfifififi(\threefi)(\fom@countfirstlevelgroups@#2e(#3*)(#4*)#5)%
\else
\afterfifififi(\threefi)(\fom@countfirstlevelgroups@#2e(#3)(#4*)#5)%
\fi
\else
\ifx#1}% } found => decrease depth
\afterfififi(\threefi)(\fom@cflg@decreasedepth#2e(#3)[#4]#5)%
\else % neither { not } found ==> go to next char
\afterfififi(\threefi)(\fom@countfirstlevelgroups@#2e(#3)(#4)#5)%
\fi
\fi
\fi
)
\gdef\fom@cflg@decreasedepth#1e#2[#3#4]#5(%
\fom@countfirstlevelgroups@#1e(#2)(#4)#5)
)} % back to normal braces
% Remove the initial brace.
% *s are quadrapled to expand first \string (followed by }, we know)
% and \gobble, thus destroying the group; we will be left with the
% original number of *s
\let*\expandafter
\def\fom@removeopeningbrace#1#2{% #2=***** (n), #1=the first *token*
\expandafter\expandafter\expandafter#1%
#2#2#2#2\expandafter\expandafter\expandafter e%
\expandafter\gobble\string
}
% Insert a dummy group (and a *) after the first item. We will
% start gobbling by gobbling to a group and this would fail if there
% were none. This needs to be done before checking for group below,
% so that we have enough *s.
\fom@checkforgroup#1#2*e{#3}{}%
}
% Group as the first item requires special attention. (Note: space
% would need it as well, but space never gets here anyway: it
% disappears when \fom@countfirstlevelgroups is expanded.)
% #1 = the first token of the detokenized first item (will be now
% #2#3 = *s (if we will find an opening brace, one * will be removed)
\def\fom@checkforgroup#1#2#3e{%
\if\detok@openingbrace#1%
\afterfi{\fom@havegroup#3e}%
\else
\afterfi{\fom@getfirstitem#2#3e}%
\fi
}
% Put extra braces around the first item which is a group.
\long\def\fom@havegroup#1e#2{\fom@getfirstitem#1e{{#2}}}
% Get the first item, then call the gobblers: insert two markers
% instead of one, the gobblers need them.
\long\def\fom@getfirstitem#1e#2{%
\unexpanded{#2}%
\fom@gobbletogroup#1*ef{}%
}
% Gobble: we know how many groups we have (as many as *s), so we
% can gobble by alternating \fom@gobbletogroup...#3#{...}
% and \fom@gobblegroup...#3{...}
% #1#2=*s, #3=toks before group; but first check if there are any
% *s left!
\def\fom@gobbletogroup#1#2f{%
\ifx#1e%
\afterfi\fom@finish
\else
\afterfi{\fom@gobbletogroup@#1#2f}%
\fi
}
\long\def\fom@gobbletogroup@#1#2f#3#{%
\fom@gobblegroup#1#2f%
}
% #1#2=*s, #3=the group
\def\fom@gobblegroup#1#2f{%
\ifx#1e%
\afterfi\fom@finish
\else
\afterfi{\fom@gobblegroup@#1#2f}%
\fi
}
\long\def\fom@gobblegroup@#1#2f#3{%
\fom@gobbletogroup#2f%
}
\def\fom@finish{%
\iftrue\expandafter\fom@finish@\expandafter{\else}\fi
}
\long\def\fom@finish@#1{}
% TEST:
\message{"\firstofmany{#1\fom@gobblegroup{\par #1 # @@@ef
aa}a**aa{first} l{ine{%
\fom@gobblegroup\fi\fi
}s}econd line} efef "}
\bye
-
Drat, I was just beginning to dream up a similar approach :-) – Joseph Wright Sep 3 '12 at 19:40
The long one or the short one? :-) – Sašo Živanović Sep 3 '12 at 19:52
The short one :-) Next step: get the team to use this for LaTeX3 (\tl_head:n). – Joseph Wright Sep 3 '12 at 19:55
It can actually be made a bit more efficient, by starting with \long\def\firstofmany#1{\expandafter\fom@getfirst\iffalse{\fi#1{}}} and dropping some \expandafters in \fom@gobble (you only need three at the start). – Joseph Wright Sep 3 '12 at 20:05
Cool! And the change in \firstofmany also deals with the empty list! – Sašo Živanović Sep 3 '12 at 20:17
I understand I am rather late to the party, but I would like to make a somewhat shorter suggestion than the currently extant ones. Idea: Contrive to put braces around the tail end of the list and then just \@gobble the whole thing. It fully expands to the first item, but there is an extraneous \iffalse{\fi at the beginning. Of course, there is no notion in TeX of expanding "partway", so I'm not sure whether this is an issue in practice. In any case, since it is at the beginning it can be excised in various ways :)
\documentclass{article}
\makeatletter
\def\firstofmany#1{\iffalse{\fi%
\@firstofmany#1}%
}
\def\@firstofmany#1{%
\unexpanded{#1}\expandafter\@gobble\expandafter{\iffalse}\fi
}
%\def\@gobble#1{}
\def\dotest#1{\edef\@dotest{\firstofmany{#1}}\meaning\@dotest.}
\makeatother
\begin{document}
\tt
\dotest{abcde}
\dotest{{ab}cde}
\dotest{ { ab}cde}
\dotest{\a\b\c}
\end{document}
-
This is similar in spirit to @SašoŽivanović's "short" solution. It indeed follows all the requirements I set, but the common drawback with Sašo's approach is that this \firstofmany cannot be expanded correctly in an expansion context, it can only work in an \edef or similar. – Bruno Le Floch Sep 4 '12 at 21:51
@Bruno: You're right, I didn't understand the "short" solution until after posting, so I missed the similarity. Your objection wrt "f-expandability" means that attempting to "expand completely" this macro just by iterating expansion of the first token will stabilize after a while, but the stable output will not be the same as the actual full expansion because of stuff after the \unexpanded{#1}? – Ryan Reich Sep 4 '12 at 22:30
There are four types of expandabilities. (1) expandable primitives expand completely to their result when hit with one \expandafter, thus you can get the result and feed it to another function with \expandafter\function\expandafter{\someprimitive...}. (2) the best a macro can do is to expand in two steps, for instance \def\eval#1{\the\numexpr#1\relax}; then it is still possible to get the result and use it as a macro argument (just replace each \expandafter by three). – Bruno Le Floch Sep 4 '12 at 22:41
(f) the macro takes an unknown number of steps, e.g. \def\reverse#1{\rev#1.\rev\rev\revend.}\def\rev#1#2\rev#3#4.{#3#2\rev#3#4.#1}\def\revend#1..{} reverses a list of letters with a variable number of expansions. You can still access the result with \expandafter\function\expandafter{\romannumeral-'q\reverse{abcd}}. (replace ' with a backtick.) but it's harder. (x) Finally, a macro might only be safe within an \edef, \xdef, \message and similar non-expandable functions: for instance \def\foo#1{\iffalse{\fi #1\iffalse}\fi}. Then it is impossible to access the result expandably. – Bruno Le Floch Sep 4 '12 at 22:52
A delimiter seems to be necessary, as you don't know how many items you have to discard. A way out might be to insert a delimiter which is very unlikely to appear in a real world document:
\begingroup
\catcode@=11
\edef\funny{\detokenize{&${}$&}}
\long\xdef\firstofmany#1{\noexpand\@firstofmany#1\funny}
\edef\x{\long\gdef\noexpand\@firstofmany##1##2\funny}\x{\unexpanded{#1}}
\endgroup
\message{"\firstofmany{\a\b\c}"} % => "\a "
\message{"\firstofmany{ { ab} c}"} % => " ab"
Just make \funny as complicated as you can. The empty token list or a list consisting only of spaces would give an error, I'm afraid.
-
Why so much fuss on \@marker and not on \@firstofmany? What makes the clash so outstandingly bad? – Stephan Lehmke Sep 3 '12 at 0:17
@StephanLehmke Nothing bad; but a real world token list might contain them. – egreg Sep 3 '12 at 0:21
One of the main disadvantages of assuming there is a safe token that can be used as the marker is that you can not manipulate code using the marker – David Carlisle Sep 3 '12 at 0:22
@DavidCarlisle Ah, so it's a problem of self-reference. I begin to understand... – Stephan Lehmke Sep 3 '12 at 0:35
@DavidCarlisle Precisely. I should have made that aspect clearer. The goal is to implement some checking tool for LaTeX3 code. In particular the tool shouldn't choke on its own source, so using a fixed delimiter is not good. – Bruno Le Floch Sep 3 '12 at 1:16
Unless we build a delimiter that is specifically taylored to the input, as in David Carlisle's approach (he uses the token list itself as a delimiter since it is too long to appear within itself), the only safe delimiter is an end-group character token (normally } with TeX's usual catcodes), because unmatched explicit end-group tokens cannot appear in the list of tokens given to \firstofmany. This trailing } can be inserted by expanding \iffalse {\fi #1} from the left, as noted in a couple of answers.
The hard part comes about when trying to insert the corresponding left brace to remove trailing tokens. Several answers simply insert \expandafter\@gobble{\iffalse }\fi after the first item. This has the large drawback that the whole result is not obtained when hitting the \firstofmany function with some \expandafter chains on the left, which means in particular that its result can only be used directly within an \edef or similar expansion, and that it cannot be expanded to be given to another expandable function directly.
My approach is to remove all tokens (except the first item) until a given (mostly arbitrary) marker, then test if the full token list with that part removed only consists in the first item. If it does, then we are done, the token list has only one item, and we leave that as a result. Otherwise, we repeat: keep the first item, remove everything else until the marker, check if what remains is a single item.
It turns out I want blank token lists to give an empty result, which is equivalent to asking for the first item in #1{}. If the argument of \fom@test consists of (optional) spaces, followed by an item, then that is left in the input stream, to be taken as the argument to \unexpanded (it turns out that the item always ends up braced). Otherwise, we call \fom@grab to remove until abc (we know that this marker is present because the initial argument of \fom@test ends with abc), which then calls \fom@test again. The key thing to note is that the leading item is always kept after our functions in the input stream, which means that expanding from the left ("f-expanding") works correctly.
(EDIT: the code was wrong.)
\begingroup
\catcode@=11
\long\gdef\firstofmany#1{\unexpanded
\iffalse{\fi \fom@grab #1{}abc}}
\long\gdef\fom@test#1{%
\ifcat$\detokenize\expandafter{\fom@gobble#1}$%
\expandafter\fom@i
\else
\expandafter\fom@ii
\fi
{#1}% #1 is {first item}
{\iffalse{\fi \fom@grab #1}}%
}
\long\gdef\fom@grab#1#2abc%
{\expandafter\fom@test\expandafter{\iffalse}\fi{#1}}
\long\gdef\fom@gobble#1{}
\long\gdef\fom@i#1#2{#1}
\long\gdef\fom@ii#1#2{#2}
\endgroup
\long\def\test#1%
{\message{|\unexpanded\expandafter{\romannumeral-`q#1}|}}
\test{\firstofmany{ a bc}}
\test{\firstofmany{ {a\a} bc}}
\test{\firstofmany{ {a\a} abc abc abc}}
\test{\firstofmany{ }}
\csname stop\endcsname
\csname bye\endcsname
\endinput
As an added bonus, this function only requires two steps of expansion to yield its result (similar to many of Heiko Oberdiek's beautiful macros). It is a little bit slower than the naive approach that forbids a specific marker from appearing in the token list.
EDIT2: As Sašo Živanović made me realize, this solution also avoids a common issue: many definitions of \firstofmany fail in the case \halign{#\cr a\firstofmany{b&}\cr}. I got lucky: since the user's token list only appears within braces (sometimes \iffalse{\fi), TeX's alignment mechanism doesn't see it.
-
Ingenious! Could you please explain what's wrong with &? – Sašo Živanović Sep 5 '12 at 12:07
@SašoŽivanović Let's work with \long\def\firstofmany#1{\fom#1{}\fom}\long\def\fom#1#2\fom{#1}. Then \halign{#\cr \firstofmany{a...&...}\cr} works fine, but \halign{#\cr a... \firstofmany{b...&...} \cr} fails with a "forbidden \endtemplate". Why? Within a tabular, any & starts a new cell when scanned, unless it appears within braces. In the argument of \firstofmany, & is within braces, thus no cell is built. But for \fom, the & is not protected, TeX builds a cell by inserting \endtemplate, which is \outer: \fom can't see through that. – Bruno Le Floch Sep 5 '12 at 13:27
In the first case, \fom gets expanded before the preamble of the cell is inserted: in that mode, TeX doesn't care about &. The details are messy: in a cell, TeX first fully expands tokens (except protected macros) until finding a non-blank token: in that phase, & are ignored. In fact, thinking about it again, the code I gave here should work without trouble, because any potential & always appear within braces (\iffalse {\fi...\iffalse}\fi counts). – Bruno Le Floch Sep 5 '12 at 13:54
I also believe your solution is unproblematic, or at least I can't get the "forbidden \endtemplate" error you describe above. I have also tested how \fom behaves when & is the first item, also ok... – Sašo Živanović Sep 5 '12 at 14:23
@SašoŽivanović Thanks. I'll update my answer accordingly. – Bruno Le Floch Sep 5 '12 at 14:58
\documentclass[12pt]{article}
\begingroup
\catcode@=11 \lccode\&=0 \catcode&=8
\lowercase{\endgroup
\long\gdef\firstofmany#1{\@firstofmany#1&&}
\long\gdef\@firstofmany#1#2&&{\unexpanded{#1}}
}
\begingroup
\lccode\&=0 \catcode&=7
\message{"\firstofmany{\a\b&\c}"} % => "\a "
\message{"\firstofmany{ { ab} c}"} % => " ab"
\endgroup
\begin{document}
x
\end{document}
-
This is not robust enough for my needs: you are only changing the delimiter to something less likely. In fact, since my application is to implement some checking tool for TeX (well, probably only well-behaved LaTeX3) code, it should in particular be able to check its own definition, which contains the marker. – Bruno Le Floch Sep 3 '12 at 1:14
Joseph Wright
Regarding using \romannumeral, since #1 can be only one token, what of this?
\def\@firstofmany#1{%
\expandafter\expandafter\expandafter\stopromans
\expandafter\expandafter\expandafter\unexpanded
\expandafter\expandafter\expandafter
{\expandafter\expandafter\expandafter#1\expandafter\expandafter
\expandafter}\expandafter\@gobble\expandafter{\iffalse}\fi
}
Note: from Ryan's solution.
-
I do love that trick :) I actually had to read source2e.pdf to see how they implemented tabular in order to learn it. – Ryan Reich Sep 4 '12 at 18:50
#1 could be braced in the input, so more than one token :-( – Joseph Wright Sep 4 '12 at 18:59
re:@Joseph The terminology is perhaps confusing. A "token list" is not actually a list of actual tokens, but rather a list of "macro tokens": blobs that would be absorbed as a single argument to a macro. – Ryan Reich Sep 4 '12 at 19:29
@RyanReich { is a perfectly fine token, and {\ab #} d contains 6 tokens, including two braces, one macro parameter character, and one space. There are two ways to see a token list: as a list of TeX tokens, or as a list of items, and the above list would have two of those: \ab # and d, ignoring spaces between items, and removing surrounding braces. In the case of \tl_head:n, or as I called it here \firstofmany, we are interested in getting the first item, because the first token may not be a balanced text. – Bruno Le Floch Sep 4 '12 at 21:56
@Ahmed, unfortunately, no, I am considering arbitrary lists of tokens (arbitrary parameterless macros), so #1 can be any number of tokens. Do you know how I could make this clearer in my question? – Bruno Le Floch Sep 4 '12 at 21:58
|
2015-11-28 16:59:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771491646766663, "perplexity": 2872.035989277169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453576.62/warc/CC-MAIN-20151124205413-00330-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://www.iacr.org/news/legacy.php?p=detail&id=1399
|
International Association for Cryptologic Research
# IACR News Central
You can also access the full news archive.
Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).
2012-06-04
00:17 [Pub][ePrint]
Broadcast encryption aims at sending content to a large arbitrary group of users at once. Currently, the most efficient schemes provide constant-size headers that encapsulate ephemeral session keys under which the payload is encrypted. However, in practice, and namely for pay-TV, providers have to send various contents to different groups of users. Headers are thus specific to each group, one for each channel: as a consequence, the global overhead is linear in the number of channels. Furthermore, when one wants to zap to and watch another channel, one has to get the new header and decrypt it to learn the new session key: either the headers are sent quite frequently or one has to store all the headers, even if one watches one channel only. Otherwise, the zapping time becomes unacceptably long.
In this paper, we consider encapsulation of several ephemeral keys, for various groups and thus various channels, in one header only, and we call this new primitive Multi-Channel Broadcast Encryption: one can hope for a much shorter global overhead and a short zapping time since the decoder already has the information to decrypt any available channel at once. Our candidates are private variants of the Boneh-Gentry-Waters scheme, with a constant-size global header, independently of the number of channels. In order to prove the CCA security of the scheme, we introduce a new dummy-helper technique and implement it in the random oracle model.
00:17 [Pub][ePrint]
Verified security provides a firm foundation for cryptographic proofs by means of rigorous programming language techniques and verification methods. EasyCrypt is a framework that realizes the verified security paradigm and supports the machine-checked construction and verification of cryptographic proofs using state-of-the-art SMT solvers, automated theorem provers and interactive proof assistants. Previous experiments have shown that EasyCrypt is effective for a posteriori validation of cryptographic systems. In this paper, we report on the first application of verified security to a novel cryptographic construction, with strong security properties and interesting practical features. Specifically, we use EasyCrypt to prove the IND-CCA security of a redundancy-free public-key encryption scheme based on trapdoor one-way permutations. Somewhat surprisingly, we show that even with a zero-length redundancy, Boneh's SAEP scheme (an OAEP-like construction with a single-round Feistel network rather than two) converts a trapdoor one-way permutation into an IND-CCA-secure scheme, provided the permutation satisfies two additional properties. We then prove that the Rabin function and RSA with short exponent enjoy these properties, and thus can be used to instantiate the construction we propose to obtain efficient encryption schemes. The reduction that justifies the security of our construction is tight enough to achieve practical security with reasonable key sizes.
00:17 [Pub][ePrint]
Elliptic curve cryptosystems have improved greatly in speed over the past few years. In this paper we outline a new elliptic curve signature and key agreement implementation which achieves record speeds while remaining relatively compact. For example, on Intel Sandy Bridge, a curve with about $2^{250}$ points produces a signature in just under 60k clock cycles, verifies in under 169k clock cycles, and computes a Diffie-Hellman shared secret in under 153k clock cycles. Our implementation has a small footprint: the library is under 55kB. We also post competitive timings on ARM processors, verifying a signature in under 626k Tegra-2 cycles. We introduce faster field arithmetic, a new point compression algorithm, an improved fixed-base scalar multiplication algorithm and a new way to verify signatures without inversions or coordinate recovery. Some of these improvements should be applicable to other systems.
00:17 [Pub][ePrint]
In this paper, we specify a class of mathematical problems, which we refer to as "Function Density Problems" (FDPs, in short), and point out novel connections of FDPs to the following two cryptographic topics: theoretical security evaluations of keyless hash functions (such as SHA-1), and constructions of provably secure pseudorandom generators (PRGs) with some enhanced security property introduced by Dubrov and Ishai [STOC 2006]. Our argument aims at proposing new theoretical frameworks for these topics (especially for the former) based on FDPs, rather than providing concrete and practical results on the topics. We also give some examples of mathematical discussions on FDPs, which would be of independent interest from mathematical viewpoints. Finally, we discuss possible directions of future research on other cryptographic applications of FDPs and on mathematical studies of FDPs themselves.
00:17 [Pub][ePrint]
We construct the first public-key encryption scheme whose chosen-ciphertext (i.e., IND-CCA) security can be proved under a standard assumption and does not degrade in either the number of users or the number of ciphertexts. In particular, our scheme can be safely deployed in unknown settings in which no a-priori bound on the number of encryptions and/or users is known.
As a central technical building block, we devise the first structure-preserving signature scheme with a tight security reduction. (This signature scheme may be of independent interest.) Combining this scheme with Groth-Sahai proofs yields a tightly simulation-sound non-interactive zero-knowledge proof system for group equations. If we use this proof system in the Naor-Yung double encryption scheme, we obtain a tightly IND-CCA secure public-key encryption scheme from the Decision Linear assumption.
We point out that our techniques are not specific to public-key encryption security. Rather, we view our signature scheme and proof system as general building blocks that can help to achieve a tight security reduction.
00:17 [Pub][ePrint]
Recently, Chien et al. proposed a gateway-oriented password-based authenticated key exchange (GPAKE) protocol, through which a client and a gateway could generate a session key for future communication with the help of an authentication server. They also demonstrated that their scheme is provably secure in a formal model. However, in this letter, we will show that Chien et al.'s protocol is vulnerable to the off-line password guessing attack. To overcome the weakness, we also propose an efficient countermeasure.
00:17 [Pub][ePrint]
The concept of proxy signature was introduced in 1996; since then, many proxy signature schemes have been proposed. In order to protect the proxy signer's privacy, the concept of anonymous proxy signature, which is also called proxy ring signature, was introduced in 2003. Some anonymous proxy signature schemes, which are provably secure in the random oracle model, have been proposed. However, provable security in the random oracle model is doubtful when the random oracles are instantiated with hash functions in their implementation. Hence, we propose the first secure anonymous proxy signature scheme without random oracles.
00:17 [Pub][ePrint]
Nonlinear feedback shift registers (NLFSRs) are used to construct pseudorandom generators for stream ciphers. Their theory is not as complete as that of linear feedback shift registers (LFSRs). In general, it is not known how to construct NLFSRs with maximum period. The direct method is to search for such registers with suitable properties. We used an implementation of NLFSRs in Field Programmable Gate Arrays (FPGAs) to perform a corresponding search. We also investigated local statistical properties of the binary sequences generated by NLFSRs of order 25 and 27.
2012-06-03
21:17 [Pub][ePrint]
We prove three optimal transference theorems on lattices possessing $n^{\epsilon}$-unique shortest vectors which relate to the successive minima, the covering radius and the minimal length of generating vectors respectively. The theorems result in reductions between GapSVP$_{\gamma'}$ and GapSIVP$_\gamma$ for this class of lattices. Furthermore, we prove a new transference theorem giving an optimal lower bound relating the successive minima of a lattice with its dual. As an application, we compare the respective advantages of current upper bounds on the smoothing parameter of discrete Gaussian measures over lattices and show a more appropriate bound for lattices whose duals possess $\sqrt{n}$-unique shortest vectors.
21:17 [Pub][ePrint]
Pollard's rho algorithm, along with parallelized, vectorized, and negating variants, is the standard method to compute discrete logarithms in generic prime-order groups. This paper presents two reasons that Pollard's rho algorithm is farther from optimality than generally believed. First, higher-degree local "anti-collisions" make the rho walk less random than the predictions made by the conventional Brent--Pollard heuristic. Second, even a truly random walk is suboptimal, because it suffers from global "anti-collisions" that can at least partially be avoided. For example, after $(1.5+o(1))\sqrt{l}$ additions in a group of order $l$ (without fast negation), the baby-step-giant-step method has probability $0.5625+o(1)$ of finding a uniform random discrete logarithm; a truly random walk would have probability $0.6753\ldots+o(1)$; and this paper's new two-grumpy-giants-and-a-baby method has probability $0.71875+o(1)$.
21:17 [Pub][ePrint]
We present a formalisation of a category of schemes which we call {\em Broadcast-enhanced Key Predistribution Schemes}. These schemes can be used instead of a key predistribution scheme in any network which has access to a trusted base station and broadcast channel. In such networks, broadcast-enhanced key predistribution schemes can provide advantages over key predistribution schemes including flexibility and more efficient revocation. There are many possible applications and ways to implement broadcast-enhanced key predistribution schemes, and we propose a framework for describing, comparing and analysing them. In their paper 'From key predistribution to key redistribution', Cichoń, Gołębiewski and Kutyłowski propose a scheme for 'redistributing' keys to a wireless sensor network using a broadcast channel after an initial key predistribution. We classify this as a broadcast-enhanced key predistribution scheme and analyse it in that context. We provide simpler proofs of some results from their paper, give a precise analysis of the resilience of their scheme, and discuss modifications based on defining a suitable keyring intersection threshold. In the latter half of the paper we suggest two particular scenarios where broadcast-enhanced key predistribution schemes may be particularly desirable and the relevant design goals to prioritise in each case. For each scenario we propose a suitable family of broadcast-enhanced key predistribution schemes and our analysis demonstrates their effectiveness in achieving their aims in resource-constrained networks.
|
2016-08-27 08:13:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5070340037345886, "perplexity": 1987.3411214221794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298875.42/warc/CC-MAIN-20160823195818-00159-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://eprint.iacr.org/2012/319/20130618:092339
|
## Cryptology ePrint Archive: Report 2012/319
Bounds on the Threshold Gap in Secret Sharing and its Applications
Ignacio Cascudo and Ronald Cramer and Chaoping Xing
Abstract: We consider the class of secret sharing schemes where there is no a priori bound on the number of players $n$ but where each of the $n$ share-spaces has fixed cardinality $q$. We show two fundamental lower bounds on the {\em threshold gap} of such schemes. The threshold gap $g$ is defined as $r-t$, where $r$ is minimal and $t$ is maximal such that the following holds: for a secret with arbitrary a priori distribution, each $r$-subset of players can reconstruct this secret from their joint shares without error ($r$-reconstruction) and the information gain about the secret is nil for each $t$-subset of players jointly ($t$-privacy). Our first bound, which is completely general, implies that if $1\leq t<r\leq n$, then $g \geq \frac{n-t+1}{q}$ independently of the cardinality of the secret-space. Our second bound pertains to $\mathbb{F}_q$-linear schemes with secret-space $\mathbb{F}_q^k$ ($k\geq 2$). It improves the first bound when $k$ is large enough. Concretely, it implies that $g\geq\frac{n-t+1}{q}+f(q,k,t,n)$, for some function $f$ that is strictly positive when $k$ is large enough. Moreover, also in the $\mathbb{F}_q$-linear case, bounds on the threshold gap {\em independent} of $t$ or $r$ are obtained by additionally employing a dualization argument. As an application of our results, we answer an open question about the asymptotics of {\em arithmetic secret sharing schemes} and prove that the asymptotic optimal corruption tolerance rate is strictly smaller than 1.
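For a concrete sense of the first bound (the numbers are mine, not the paper's): with binary shares ($q=2$), $n=100$ players and $t=10$-privacy, it forces $g \geq \frac{100-10+1}{2} = 45.5$, hence $r \geq t+46$; schemes over small share alphabets therefore cannot be close to threshold once $n$ is large.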
Category / Keywords: Secret sharing, threshold gap, error correcting codes, Norse bounds, Griesmer bound, arithmetic secret sharing
Publication Info: Accepted for publication in IEEE Transactions on Information Theory. DOI 10.1109/TIT.2013.2264504.
Date: received 5 Jun 2012, last revised 18 Jun 2013
Contact author: i cascudo at cwi nl
Available format(s): PDF | BibTeX Citation
Note: Several changes made to incorporate review suggestions, including the change of the title.
Short URL: ia.cr/2012/319
[ Cryptology ePrint archive ]
|
2016-02-12 14:11:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9284480810165405, "perplexity": 1071.6924782702622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701164268.69/warc/CC-MAIN-20160205193924-00031-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://dvalts.io/modelling/2016/10/13/Compiling-WRFV381-on-ARCHER.html
|
This post documents how to compile the latest release of WRF (version 3.8.1) on the ARCHER HPC service. There is already a compiled version available on ARCHER that can be accessed using the modules function (see this guide), but you will need to be able to compile from the source code if you are making any modifications to the code, if you need to compile the idealised cases, or if you need to compile it with a different nesting set-up. (The pre-compiled code is set up for basic nesting only.)
## Compilation with the default ARCHER compiler (Cray CCE)
This is relatively straightforward as the configure script already works as expected. However, compiling with the Cray compiler (usually the default compiler loaded when you log in to ARCHER) can take upwards of 6-8 hours, depending on the options selected in configure and the load on the login or serial nodes.
### Setting up your ARCHER environment
First, check that you do actually have the Cray compiler environment loaded. You can look for PrgEnv-cray in the output from running the module list command, or you can do echo $PE_ENV from a login session.

Because it takes so long to compile using the Cray compiler, you need to run the compilation task as a serial job on the serial nodes. If you attempt to run on the login nodes, the compilation process will time out well before completion. So we are going to prepare three things:

1. A 'pre-build' script that will load the correct modules on ARCHER and set up the environment variables
2. The configure.wrf file. This is prepared using the configure script, so you should not need to do anything different from the normal WRF compilation instructions for this part.
3. A compilation job submission (.pbs) script. This will be submitted as a serial-node job to do the actual compilation.

#### The pre-build script

This is a shell script (of the bash flavour) that loads the relevant modules for WRF to compile.

pre-build.bash

NetCDF-4 is the default netCDF module in the ARCHER environment, so I am assuming you want to compile with netCDF-4 (it has extra features like supporting file compression etc.). I attempted this with the gcc/5.x.x module, but ran into compilation errors; using GCC v6 seemed to fix them. Note that you may need to switch a load statement for a swap statement in some places, depending on what default modules are loaded in your ARCHER environment. See your .profile and .bashrc scripts. NOTE: .profile (in the $HOME directory) is only loaded in login shells. If you launch any other interactive shell (like an interactive-mode job), then .bashrc will get loaded.
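A minimal sketch of what pre-build.bash might look like is below. The exact module names and versions are assumptions; check module avail in your ARCHER environment and adjust:

```bash
#!/bin/bash
# pre-build.bash -- set up the environment for building WRF 3.8.1 on ARCHER.
# Module names/versions below are illustrative assumptions; adjust to taste.

# For a GNU build, swap out the default Cray programming environment.
# (Skip this line if you are compiling with Cray CCE.)
module swap PrgEnv-cray PrgEnv-gnu

# GCC 6 avoided the compilation errors seen with the gcc/5.x.x modules.
module swap gcc gcc/6.1.0

# NetCDF-4 (with HDF5 underneath) is the default netCDF on ARCHER.
module load cray-netcdf
module load cray-hdf5

# WRF's configure script reads these variables. The module system sets
# NETCDF_DIR (and friends) automatically when the module is loaded.
export NETCDF=$NETCDF_DIR
export NETCDF4=1                      # enable netCDF-4 features (compression etc.)
export WRFIO_NCD_LARGE_FILE_SUPPORT=1 # allow large (>2 GB) output files
```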
Some things to note:
1. You cannot compile WRF in parallel (it's a bug in the Makefile, apparently).
2. There are some environment variables that are simply set equal to other environment variables, e.g. NETCDF=$NETCDF_DIR. This works because when you use the module system to load, say, netCDF, ARCHER automatically sets its own environment variables, which we can use to initialise the WRF configure variables, e.g. $NETCDF.
### Run configure
For the Cray compiler, this can be run as normal, e.g. ./configure from the WRFV3 directory.
### Compilation job script
At this stage, you can either request an interactive-mode job on the serial nodes and then run compile in the usual way (after running the pre-build script and the configure commands), or you can submit a serial job with the PBS job scheduling system to run when a node becomes available. If you go down the interactive-job route, be sure to request enough walltime, as the Cray compiler takes a long time to compile everything. I would ask for 12 hours to be on the safe side.
If you want to submit a job to run without having to wait for an interactive-mode job, prepare the following job submission script:
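A minimal sketch of such a script is given below; the job name, budget code and compile target are placeholders that you should substitute with your own:

```bash
#!/bin/bash --login
#PBS -N WRF_compile
#PBS -l select=serial=true:ncpus=1   # ARCHER serial-node request
#PBS -l walltime=12:00:00            # Cray CCE builds can take 6-12 hours
#PBS -A n02-XXXX                     # placeholder: your ARCHER budget code

# Run from the directory the job was submitted from (e.g. WRF_build381).
cd $PBS_O_WORKDIR

# Load the compiler/netCDF modules and WRF environment variables.
source ./pre-build.bash

# configure must already have been run; now do the long compile step.
cd WRFV3
./compile em_real &> ../compileCray.log   # pick your own case/target
```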
### Putting it all together.
1. Make sure the pre-build.bash script and the compile.pbs script are in the directory above WRFV3/. I called mine WRF_build381.
2. Use qsub as normal to submit the compile.pbs script. E.g. qsub compile.pbs
Your job should run, and the compile logs will be written to compileCray.log (or whatever you named the log in the compile.pbs script above).
## Compiling WRF using the GNU compilers on ARCHER
You may have reason to want to compile WRF with the GNU compilers on ARCHER or another Cray XC30 system. Unfortunately, I found that the configure script supplied with version 3.8.1 did not generate a correct configure.wrf script for the GNU compilers in a Cray environment. Namely, it used compilation flags specific to the Cray compiler rather than the gfortran compilation flags (the two are incompatible). To rectify this, you can either run the configure script as normal and then correct the compiler flags in the generated configure.wrf script, or, for a more re-usable solution, you can edit the WRFV3/arch/configure_new.defaults file.
I did this by opening the configure_new.defaults file and adding a new entry. The purpose of the file is to generate the menu entries that you see when running the configure script, and then populate the Makefile with the correct compilation options.
Find the CRAY CCE entry in the configure_new.defaults file and insert a new entry below it called GNU on CRAY XC30 system or similar. The entry should contain the following:
```
###########################################################
#ARCH    Cray XE and XC CLE/Linux x86_64, GNU Compiler on Cray System  # serial dmpar smpar dm+sm
# Use this when you are using the GNU programming environment on ARCHER (a Cray system)

DESCRIPTION     =   GNU on Cray system ($SFC/$SCC): Cray XE and XC
# OpenMP is enabled by default for the Cray CCE compiler;
# the commented-out values below turn it off.
DMPARALLEL      =   # 1
OMPCPP          =   # -D_OPENMP
OMP             =   # -fopenmp
OMPCC           =   # -fopenmp
SFC             =   ftn
SCC             =   cc
CCOMP           =   gcc
DM_FC           =   ftn
DM_CC           =   cc
FC              =   CONFIGURE_FC
CC              =   CONFIGURE_CC
LD              =   $(FC)
RWORDSIZE       =   CONFIGURE_RWORDSIZE
PROMOTION       =   # -fdefault-real-8
ARCH_LOCAL      =   -DNONSTANDARD_SYSTEM_SUBR -DWRF_USE_CLM
CFLAGS_LOCAL    =   -O3
LDFLAGS_LOCAL   =
CPLUSPLUSLIB    =
ESMF_LDFLAG     =   $(CPLUSPLUSLIB)
FCOPTIM         =   -O2 -ftree-vectorize -funroll-loops
FCREDUCEDOPT    =   $(FCOPTIM)
FCNOOPT         =   -O0
FCDEBUG         =   # -g $(FCNOOPT) # -ggdb -fbacktrace -fcheck=bounds,do,mem,pointer -ffpe-trap=invalid,zero,overflow
FORMAT_FIXED    =   -ffixed-form
FORMAT_FREE     =   -ffree-form -ffree-line-length-none
FCSUFFIX        =
BYTESWAPIO      =   -fconvert=big-endian -frecord-marker=4
FCBASEOPTS_NO_G =   -w $(FORMAT_FREE) $(BYTESWAPIO)
FCBASEOPTS      =   $(FCBASEOPTS_NO_G) $(FCDEBUG)
MODULE_SRCH_FLAG =
```

Note that FCBASEOPTS_NO_G uses the gfortran flags here; the Cray-specific -N1023 variant of that line from the CCE entry should not be carried over, as gfortran does not accept it.
You can run the configure script as normal once these changes have been made and you will get a configure.wrf suitable for using the GNU compilers on ARCHER to build WRF v3.8.1.
https://ec.gateoverflow.in/2261/gate-ece-2007-question-5
Which one of the following functions is strictly bounded?
1. $\frac{1}{x^2}$
2. $e^x$
3. $x^2$
4. $e^{-x^2}$
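One way to see the answer: a function is strictly bounded if its values stay within some finite interval for every real $x$. Here $\frac{1}{x^2}\to\infty$ as $x\to 0$, and both $e^x$ and $x^2$ are unbounded as $x\to\infty$, whereas $0 < e^{-x^2} \leq 1$ for all real $x$. So option 4 is the strictly bounded function.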
https://tex.stackexchange.com/questions/275423/customize-citestyle-for-biblatex
# customize citestyle for biblatex
At the moment I'm using
\usepackage[backend=biber, citestyle=numeric]{biblatex}
\usepackage[flushmargin]{footmisc}
\newcommand{\myautocite}[2][1]{\autocite[#1]{#2}{\let\thempfn\relax\footnotetext{\autocite{#2} \citeauthor{#2}, \citeyear{#2}, \citetitle{#2}.}}}
with the effect that the citation with \myautocite{key} in the text is indeed "[3]" and at the bottom of the page I have the recall "[3] Hilbert, 1901, "The 19th Theorem"". I have two problems:
--[main] When I use the same reference on a page, this is recalled twice or as many times as it appears, which I don't want. I would like only one recall per page.
--[auxiliary] I don't want the indentation space at the beginning of this recall but would like to keep it for usual footnotes.
Many thanks for pointers on how to achieve that!
• What do you think of my solution in Help with additional parameter for DeclareCiteCommand, that is, the second part of the answer? There is quite a lot wrong with your current approach: you should not define a cite command via \newcommand, use \DeclareCiteCommand instead, and you should also never try to combine several \cite... commands into one; it will go horribly wrong once you start citing two works. – moewe Oct 28 '15 at 18:00
• Instead of \fullcite{\thefield{entrykey}} we can of course have something like \printfield{labelname}\newunit\printfield{title}\newunit\printfield{year} (obviously, you would want that to be slightly more sophisticated to catch some corner cases); let me know if you are interested in that solution and need help modifying it to your needs. – moewe Oct 28 '15 at 18:07
• Many thanks for this (fast!) answer. I have three questions: – Olivier Oct 28 '15 at 18:32
• I have three questions: (1) \printfield{author} does not seem to work?? (2) I would like the recall to occur on each page. Now it is recalled once, say page 12, but not when it is used again on page 133. (3) How can I change \newunit? It gets me a dot when I want a comma. Many thanks again! O. – Olivier Oct 28 '15 at 18:50
• \usebibmacro{author} rather than \printfield{author} solves point (1) – Olivier Oct 28 '15 at 19:29
Here is the code that does the job.
```latex
\usepackage{csquotes}
\usepackage[%
  backend=biber,
  style=numeric-comp,
  citetracker=true,
  pagetracker=true,
]{biblatex}

\makeatletter
\renewbibmacro*{cite:comp}{%
  \xdef\cbx@citekey{\thefield{entrykey}}%
  \ifciteseen
    {}
    {\csnumgdef{cbx@instcount\cbx@citekey}{-100}}%
  \ifsamepage{\value{instcount}}{\number\csuse{cbx@instcount\cbx@citekey}}
    {}
    {\renewcommand{\@makefntext}[1]{\noindent\normalfont##1}%
     \footnotetext{%
       \printtext[labelnumberwidth]{%
         \printfield{prefixnumber}%
         \printfield{labelnumber}}%
       \bibfootnotewrapper{\fullcite{\thefield{entrykey}}}}%
     \csnumgdef{cbx@instcount\cbx@citekey}{\value{instcount}}}%
  \iffieldundef{shorthand}
    {\ifbool{bbx:subentry}
       {\iffieldundef{entrysetcount}
          {\usebibmacro{cite:comp:comp}}
          {\usebibmacro{cite:comp:inset}}}
       {\usebibmacro{cite:comp:comp}}}
    {\usebibmacro{cite:comp:shand}}}
\makeatother

\newcommand{\myautocite}[2][1]{\cite[#1]{#2}}
```
• I don't think the last \newcommand{\myautocite}[2][1]{\cite[#1]{#2}} is a good idea, it destroys the pre-/post-note abilities partially. When I made my comment above I should have said \printnames{labelname}, that however only works with the option labelname passed to biblatex. – moewe Oct 29 '15 at 6:43
• It would also be great if you could give attribution to the solutions you used, I suspect the "same page test" is from biblatex: is there a command analogous to \ifciteseen but within one page?? – moewe Oct 29 '15 at 6:44
• The newcommand{\myautocite} ... is a blunder, just using \autocite is enough. And I couldn't find again the page I was using, so thanks for having found it! – Olivier Oct 29 '15 at 8:10
• If you think your answer answers the question, consider accepting it so the question gets marked as solved. – moewe Oct 31 '15 at 7:29
https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/layers/conv2d_transpose
# tf.contrib.layers.conv2d_transpose(*args, **kwargs)
### tf.contrib.layers.convolution2d_transpose(*args, **kwargs)
Adds a convolution2d_transpose with an optional batch normalization layer.
The function creates a variable called weights, representing the kernel, that is convolved with the input. If batch_norm_params is None, a second variable called 'biases' is added to the result of the operation.
#### Args:
• inputs: A 4-D Tensor of type float and shape [batch, height, width, in_channels] for NHWC data format or [batch, in_channels, height, width] for NCHW data format.
• num_outputs: integer, the number of output filters.
• kernel_size: a list of length 2 holding the [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
• stride: a list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
• padding: one of 'VALID' or 'SAME'.
• data_format: A string. NHWC (default) and NCHW are supported.
• activation_fn: activation function, set to None to skip it and maintain a linear activation.
• normalizer_fn: normalization function to use instead of biases. If normalizer_fn is provided then biases_initializer and biases_regularizer are ignored and biases are not created nor added. Defaults to None (no normalizer function).
• normalizer_params: normalization function parameters.
• weights_initializer: An initializer for the weights.
• weights_regularizer: Optional regularizer for the weights.
• biases_initializer: An initializer for the biases. If None skip biases.
• biases_regularizer: Optional regularizer for the biases.
• reuse: whether or not the layer and its variables should be reused. To be able to reuse the layer, a variable scope must be given.
• variables_collections: optional list of collections for all the variables or a dictionary containing a different list of collection per variable.
• outputs_collections: collection to add the outputs.
• trainable: whether or not the variables should be trainable.
• scope: Optional scope for variable_scope.
#### Returns:
a tensor representing the output of the operation.
#### Raises:
• ValueError: if 'kernel_size' is not a list of length 2.
• ValueError: if data_format is neither NHWC nor NCHW.
• ValueError: if C dimension of inputs is None.
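As a quick illustration of how these arguments fit together, here is a minimal sketch using the TF 1.x API (the input shape and parameter values are arbitrary choices for the example):

```python
import tensorflow as tf

# A batch of 16x16 feature maps with 64 channels, NHWC layout (assumed shapes).
inputs = tf.placeholder(tf.float32, [None, 16, 16, 64])

# Transposed convolution that doubles the spatial resolution.
upsampled = tf.contrib.layers.conv2d_transpose(
    inputs,
    num_outputs=32,      # number of output filters
    kernel_size=[4, 4],  # [kernel_height, kernel_width]
    stride=2,            # same stride in both dimensions
    padding='SAME',
    activation_fn=tf.nn.relu)

print(upsampled.shape)  # (?, 32, 32, 32): height/width doubled by stride=2
```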
https://www.tutordale.com/set-notation-algebra-2-definition/
# Set Notation Algebra 2 Definition
## How To Convert Inequality To Interval Notation
We can convert an inequality to interval notation using the steps below:
• First, graph the solution set of the inequality on a number line.
• Then write the numbers in the interval notation, with the smaller number (the one further to the left on the number line) appearing first.
• Use the symbol "-∞" if the set is unbounded on the left, and the symbol "∞" if it is unbounded on the right.
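For example, the inequality -2 < x ≤ 5 becomes (-2, 5]: the parenthesis marks the excluded endpoint and the bracket marks the included one. A one-sided inequality such as x > 3 becomes (3, ∞), and x ≤ 7 becomes (-∞, 7].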
## Symbols Used In Set Builder Notation
The set builder form uses various symbols to represent the elements of the set. A few of the symbols are listed as follows.
• | is read as “such that” and we usually write it immediately after the variable in the set builder form and after this symbol, the condition of the set is written.
• ∈ is read as "belongs to" and it means "is an element of".
• ∉ is read as "does not belong to" and it means "is not an element of".
• N represents natural numbers or all positive integers.
• Q represents rational numbers or any number that can be expressed as a fraction of integers.
• R represents real numbers or any number that isn’t imaginary.
## What Is Interval Notation And Set Builder Notation Form
• In the Interval notation, the end-point values are written between brackets or parentheses. A square bracket represents that an element is included in the set, whereas a parenthesis denotes exclusion from the set.For example, (8,12]. This interval notation denotes that this set includes all real numbers between 8 and 12 where 8 is excluded and 12 is included.
• The set-builder notation is a mathematical notation for describing a set by representing its elements or explaining the properties that its members must satisfy. For example, for the given set A = {2, 4, 6, 8, 10}, the set builder notation is A = {x | x is an even natural number, x ≤ 10}.
## What Are The Types Of Set Notations For Representing Elements
The two important types of set notations for representing the elements are the roster form and the set builder form. The set builder notation represents the elements of a set in the form of a sentence or mathematical expression, while the roster form writes the elements out explicitly. The set builder form of set notation is A = {x | x is an even natural number, x ≤ 10}, and the roster form of the same set is A = {2, 4, 6, 8, 10}.
## Euler And Venn Diagrams
An Euler diagram is a graphical representation of a collection of sets each set is depicted as a planar region enclosed by a loop, with its elements inside. If A is a subset of B, then the region representing A is completely inside the region representing B. If two sets have no elements in common, the regions do not overlap.
A Venn diagram, in contrast, is a graphical representation of n sets in which the n loops divide the plane into 2^n zones such that for each way of selecting some of the n sets, there is a zone for the elements that belong to all the selected sets and none of the others (for n = 3 this gives 2^3 = 8 zones, counting the region outside all three sets). For example, if the sets are A, B, and C, there should be a zone for the elements that are inside A and C and outside B.
## Essential Features Of Cantorian Set Theory
At best, the foregoing description presents only an intuitive concept of a set. Essential features of the concept as Cantor understood it include: that a set is a grouping into a single entity of objects of any kind, and that, given an object x and a set A, exactly one of the statements x ∈ A and x ∉ A is true and the other is false. The definite relation that may or may not exist between an object and a set is called the membership relation.
A further intent of this description is conveyed by what is called the principle of extension: a set is determined by its members rather than by any particular way of describing the set. Thus, sets A and B are equal if and only if every element in A is also in B and every element in B is in A; symbolically, x ∈ A implies x ∈ B and vice versa. There exists, for example, exactly one set the members of which are 2, 3, 5, and 7. It does not matter whether its members are described as "prime numbers less than 10" or listed in some order between small braces, possibly {2, 3, 5, 7}.
## Set Builder Notation For Domain And Range
Set builder notation is very useful for defining the domain and range of a function. In its simplest form, the domain is the set of all the values that go into a function. For example: for the rational function f(x) = 2/(x - 1), the domain would be all real numbers except 1, because f is undefined when x = 1. Thus, the domain for the above function can be expressed as {x ∈ R | x ≠ 1}. Similarly, we can represent the range of a function as well using the set builder notation.
## Variations On Setbuilder Notation
An expression may be used to the left of the vertical line in set-builder notation, instead of a single variable.
#### Giving the type of the variable
You can use an expression on the left side of setbuilder notation to indicate the type of the variable.
#### Example
The unit interval $I$ could be defined as \[I=\left\{ x\in \mathbb{R}\mid 0\le x\le 1 \right\},\] making it clear that it is a set of real numbers rather than, say, rational numbers. You can always get rid of the type expression to the left of the vertical line by complicating the defining condition, like this: \[I=\left\{ x\mid x\in \mathbb{R}\text{ and }0\le x\le 1 \right\}.\]
#### Other expressions on the left side
Other kinds of expressions occur before the vertical line in setbuilder notation as well.
#### Example
The set \[\left\{ n^{2}\mid n\in \mathbb{Z} \right\}\] consists of all the squares of integers; in other words its elements are 0, 1, 4, 9, 16, …. This definition could be rewritten as $\left\{ m\mid \text{there is an }n\in \mathbb{Z}\text{ for which }m=n^{2} \right\}$.
#### Example
Let $A=\left\{ 1,2,3 \right\}$. Then $\left\{ a^{2}\mid a\in A \right\}=\left\{ 1,4,9 \right\}$.
##### Warning
Be careful when you read such expressions.
#### Example
The integer $9$ is an element of the set \[\left\{ n^{2}\mid n\in \mathbb{Z},\ n\ne 3 \right\}.\] It is true that $9=3^{2}$ and that $3$ is excluded by the defining condition, but it is also true that $9=\left(-3\right)^{2}$ and $-3$ is not an integer ruled out by the defining condition!
## Sets Related To Functions
The set of all functions of a real variable that return a real value is denoted: \[\left\{ f\mid f\colon \mathbb{R}\to \mathbb{R} \right\}.\]
The domain of a function is the set of all possible inputs. An input is not possible if the function is not defined for that input, as in the case of a divide-by-zero error.
The image set of a function is the set of all possible outputs of the function: \[\operatorname{Im}(f)=\left\{ f(x)\mid x\in \operatorname{dom}(f) \right\}.\]
## Replacement Set And Solution Set Definition
See the Replacement Set and Solution Set Definitions below.
Replacement Set: The replacement set is the set of values that may be substituted for the variable in the inequation.
Solution Set: The solution set is the set of those numbers from the replacement set that satisfy the given inequation. The solution set is a subset of the replacement set.
Generally, we use the set of natural numbers N, the set of whole numbers W, the set of integers I, or the set of real numbers R as the replacement set. When we use one of these replacement sets to solve the inequation, we get the corresponding solution set.
#### Examples of Replacement Set and Solution Set
Let us take the inequation to be m < 5. If the replacement set = the set of natural numbers N, then the solution set = {1, 2, 3, 4}. If the replacement set = the set of whole numbers W, then the solution set = {0, 1, 2, 3, 4}. If the replacement set = the set of integers Z or I, then the solution set = {..., -3, -2, -1, 0, 1, 2, 3, 4}. If the replacement set = the set of real numbers R, then the solution set is represented in set-builder form, i.e. {x : x ∈ R, x < 5}.
## How Do You Write Set Notation
The set notation is generally written using symbols between the sets for set operations, and certain symbols for representing some special kinds of sets. The set notation for the union of sets is A ∪ B, and for the intersection of sets it is A ∩ B. The set notation for representing some important sets is: U for the universal set, and Ø for the null (empty) set.
## Examples On Solving Equations By Plugging In Values
Check out all the examples given below and follow the step-by-step procedure to learn the complete procedure of solving problems.
Question 1. If the replacement set is the set of integers between -6 and 8, find the solution set of 30 - 6x > 2x - 6.
Solution: Given that 30 - 6x > 2x - 6. Move the 2x to the left side: 30 - 6x - 2x > -6, i.e. 30 - 8x > -6. Subtract 30 from both sides: -8x > -36. Now divide both sides by -8; the inequality reverses on multiplying or dividing by a negative number: x < 4.5.
Given that the replacement set is the set of integers between -6 and 8, the solution set = {-5, -4, -3, -2, -1, 0, 1, 2, 3, 4}.
Question 2. If the replacement set is the set of real numbers R, find the solution set of 10 - 6x < 22.
Solution: Given that 10 - 6x < 22. Subtract 10 from both sides: -6x < 12. Now divide both sides by -6; the inequality reverses on dividing by a negative number: -6x/-6 > 12/-6, i.e. x > -2.
Given that the replacement set is the set of real numbers R, the solution set = {x : x ∈ R, x > -2}.
Question 3. List the solution set of 100 - 6x < 50, given that x ∈ W. Also, represent the solution set obtained on a number line.
Solution: -6x < 50 - 100, i.e. -6x < -50; dividing by -6 reverses the inequality, giving x > 25/3 ≈ 8.33. Therefore, the solution set is {9, 10, 11, 12, ...}; on the number line it is marked from 9 onwards to the right.
Question 4. Solve the inequation 74 - 18x < 16 and represent the solution set on the number line, given that x is a whole number.
## Giving Our Input And Output Elements A Name
In the interest of establishing a shorthand notation and set of vocabulary for discussing a function's key/value pairs, suppose that $y$ is the value that $f$ assigns to $x$. We can then write $f(x)=y$, and this can be read aloud in any of the following ways:
• $f$ maps $x$ to $y$
• the value of $x$ under $f$ is $y$
• when $f$ acts on $x$, the resulting value is $y$
• $f$ of $x$ equals $y$
Several combinations of these phrases are common, but the convention is that the word value always refers to the output or result associated with a given input.
In most cases, it is easiest to read the notation $f(x)=y$ as "$f$ of $x$ equals $y$".
Sometimes, we may also use the notation $x\mapsto y$, especially if we do not have a named function such as $f$.
For example, the function $x\mapsto 7x$ is a nameless function whose output is equal to the given input times 7. Note that the input set must be a set of numbers, and if we were to dive deeper into the example we'd need to clarify exactly what subset of numbers we are considering.
## Which Is The Best Form Of Set Notation For Writing A Set
The best form of set notation is the notation which most easily represents the elements of a set. The roster form of set notation makes a simple listing of the elements of a set and is the best form of set notation. The roster form of set notation to represent the set of English alphabet vowels is A = {a, e, i, o, u}.
## Subsection111the Notion Of A Set
The term set is intuitively understood by most people to mean a collection of objects that are called elements. This concept is the starting point on which we will build more complex ideas, much as in geometry where the concepts of point and line are left undefined. Because a set is such a simple notion, you may be surprised to learn that it is one of the most difficult concepts for mathematicians to define to their own liking. For example, the description above is not a proper definition because it requires the definition of a collection. Even deeper problems arise when you consider the possibility that a set could contain itself. Although these problems are of real concern to some mathematicians, they will not be of any concern to us. Our first concern will be how to describe a set; that is, how do we most conveniently describe a set and the elements that are in it? If we are going to discuss a set for any length of time, we usually give it a name in the form of a capital letter. In discussing set $A$, if $x$ is an element of $A$ then we will write $x\in A$. On the other hand, if $x$ is not an element of $A$ we write $x\notin A$. The most convenient way of describing the elements of a set will vary depending on the specific set.
• $\mathbb{P}$, the positive integers, $\{1, 2, 3, 4, \ldots\}$
• $\mathbb{N}$, the natural numbers, $\{0, 1, 2, 3, \ldots\}$
• $\mathbb{Z}$, the integers, $\{\ldots, -2, -1, 0, 1, 2, \ldots\}$
• $\mathbb{Q}$, the rational numbers
• $\mathbb{R}$, the real numbers
• $\mathbb{C}$, the complex numbers
Set-Builder Notation. Another way of describing sets is to use set-builder notation. For example, we could define the rational numbers as \[\mathbb{Q}=\left\{ a/b\mid a,b\in \mathbb{Z},\ b\ne 0 \right\}.\]
## How Do We Read And Write Set Notation
To read and write set notation, we need to understand how to use symbols in the following cases:
#### 1. Denoting a Set
Conventionally, we denote a set by a capital letter and denote the elements of the set by lower-case letters.
We usually separate the elements using commas. For example, we can write the set A that contains the vowels of the English alphabet as: A = {a, e, i, o, u}.
We read this as the set A containing the vowels of the English alphabet.
#### 2. Set Membership
We use the symbol ∈ to denote membership in a set, and ∉ to denote non-membership. For example, consider the set B = {1, 2, 3, 4, 5}.
Since 1 is an element of set B, we write 1 ∈ B and read it as "1 is an element of set B" or "1 is a member of set B". Since 6 is not an element of set B, we write 6 ∉ B and read it as "6 is not an element of set B" or "6 is not a member of set B".
#### 3. Specifying Members of a Set
In the previous article on describing sets, we already applied set notation. I hope you still remember the set-builder notation!
We can describe set B above using the set-builder notation as shown below: B = {x | x is a natural number, x ≤ 5}.
We read this notation as the set of all x such that x is a natural number less than or equal to 5.
#### 4. Subsets of a set
We say that set A is a subset of set B when every element of A is also an element of B. We can also say that A is contained in B. The notation for a subset is shown below: A ⊆ B.
The symbol ⊆ stands for "is a subset of" or "is contained in". We usually read A ⊆ B as "A is a subset of B" or "A is contained in B". We use the notation A ⊄ B to show that A is not a subset of B.
## What Is Set Builder Notation
Set-builder notation is defined as a representation or a notation that can be used to describe a set by a logical formula that is true for every element of the set. It includes one or more variables, and it defines a rule about the elements which belong to the set and the elements that do not belong to the set. Let us read about the different methods of writing sets. There are two methods that can be used to represent a set: the roster form, which lists the individual elements of the set, and the set builder form, which represents the elements with a statement or an equation. The two methods are as follows.
Roster Form or Listing Method
In this method, we list down all the elements of a set, and they are represented inside curly brackets. Each of the elements is written only once, and the elements are separated by commas. For example, the set of letters in the word "California" is written as A = {c, a, l, i, f, o, r, n}.
Set Builder Form or Rule Method
Set builder form uses a statement or an expression to represent all the elements of a set. In this method, we do not list the elements; instead, we write the representative element using a variable followed by a vertical line or colon and write the general property of that representative element. For example, the same set above in set builder form can be written as A = {x | x is a letter in the word "California"}.
Here is another example of writing the set of odd positive integers below 10 in both forms.
Example (roster form): A = {1, 3, 5, 7, 9}.
Example (set builder form): A = {x | x is an odd positive integer, x < 10}.
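If you happen to program, the two forms have direct analogues in Python's set syntax; here is a small illustrative sketch:

```python
# Roster form: list every element explicitly.
odds_roster = {1, 3, 5, 7, 9}

# Set-builder form: a rule selects the elements. The comprehension below
# reads as "the set of all x such that x is a positive integer below 10
# and x is odd".
odds_builder = {x for x in range(1, 10) if x % 2 == 1}

print(odds_roster == odds_builder)  # True: both describe the same set
```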
## Set Notation Or Ways To Define A Set
a. You could define a set with a verbal description: All sets above are described verbally when we say, ” The set of all bla bla bla “
b. You could make a listing of all members, separated by commas, with braces. A list of set 1 is written as: {a, b, c, ..., z}.
c. Set-builder notation: set 1, for example, can be written as {x | x is a letter in the modern English alphabet}.
This is read, "the set of all x such that x is a letter in the modern English alphabet".
Set-builder is an important concept in set notation. You must understand it!
We use capital letters such as A, B, and so forth to denote sets.
For example, you could let A be the set of all positive numbers less than 10.
The set with no elements at all is called the empty set, written Ø or { }.
https://www.allaboutcircuits.com/technical-articles/choosing-the-right-oscillator-for-your-microcontroller/
Technical Article
# Choosing the Right Oscillator for Your Microcontroller
May 17, 2016 by Robert Keim
## Internal or external? Quartz or ceramic? Crystal oscillator or silicon oscillator? So many clocking options . . . which one is right for your design?
### Oscillating Options
Every microcontroller needs a clock source. The CPU, the memory bus, the peripherals—clock signals are everywhere inside a microcontroller. They govern the speed at which the processor executes instructions, the baud rate of serial-communication signals, the amount of time needed to perform an analog-to-digital conversion, and so much more.
All of this clocking action goes back to the source of the clock signal, namely the oscillator. Therefore, you need to make sure that your oscillator can support whatever performance you expect from your microcontroller. At the same time, though, some oscillator options are more complex or expensive than others, so your choice of oscillator should also reflect the importance of reducing cost and complexity whenever possible.
There are quite a few ways to generate a clock signal for a microcontroller. The datasheet for your particular device should provide a good deal of information about what types of oscillators you can use and how to implement them in a way that is compatible with the device’s hardware. This article will focus on the advantages and disadvantages of various clock sources so that you can better choose among the oscillator options discussed in your microcontroller’s datasheet.
So let’s start with a list, followed by a discussion of each option:
• Internal
• Usually (as far as I know, always) a resistor–capacitor circuit
• Phase-locked loop for frequency multiplication
• External
• CMOS clock
• Crystal
• Ceramic resonator
• Resistor–capacitor
• Capacitor only
### Internal Oscillators: The KIS Option
I’m an advocate of the Keep It Simple principle; consequently, I have a special appreciation for internal oscillators, and I encourage you to avail yourself of the internal oscillator whenever possible. No external components are required: You can safely assume that the frequency is well chosen since the oscillator was designed by the same people who designed the rest of the microcontroller. Also, the salient performance specs—e.g., initial accuracy, duty cycle, temperature dependency—are (hopefully) right there in the datasheet.
The dominant disadvantage with internal oscillators is the lack of precision and frequency stability. The baseline frequency depends on the values of the passive components that make up the oscillator circuit, and the tolerances for the values of these passive components are not particularly tight. Furthermore, capacitance and resistance are influenced by ambient temperature, so internal RC oscillators experience temperature “drift”—i.e., changes in temperature lead to changes in frequency.
In my experience, many applications can tolerate the shortcomings of an internal oscillator, especially when the frequency has been calibrated at the factory. With older microcontrollers, the internal oscillator might have tolerance as bad as ±20%. However, a newer device can give you ±1.5% (or better), which is accurate enough for RS-232 communication and even (in conjunction with clock-recovery circuitry) for USB.
Another way to expand the capabilities of an internal oscillator is manual “trimming”—if your microcontroller includes a trimming/calibration register, you can adjust the frequency by modifying the value in this register. This is a perfectly practical technique for low-quantity designs: Simply measure the clock frequency with an oscilloscope or frequency counter and then trim the oscillator accordingly.
A variation on the internal-oscillator theme is the phase-locked loop (PLL). A PLL allows a low-quality, high-speed internal oscillator to benefit from the stability and precision of an external oscillator. In general, a PLL doesn’t help you to avoid external components because it requires a reference clock that is usually derived from a crystal. An exception, though, is when you have a high-quality clock somewhere on the PCB but don’t want to use it for the microcontroller because it’s too slow—you could use a PLL to multiply this clock up to an acceptable frequency.
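As a concrete illustration (the numbers are invented for the example): a PLL that divides its output by N = 5 in the feedback path will lock its output at f_out = N × f_ref, so a clean 10 MHz reference yields a 50 MHz core clock that inherits the long-term stability of the 10 MHz source.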
### CMOS Clock
Another straightforward clocking option is the so-called “CMOS clock,” which falls into the for-lack-of-a-better-term category. “CMOS clock” is a vague (though convenient) way of referring to any clock signal driven by some other component on the board. The CMOS clock is a great option if your design already includes a clock signal with 1) a workable frequency and 2) electrical characteristics that are compatible with the microcontroller’s CMOS-clock-input circuitry. Often, though, this is not the case, so let’s look at two options for generating a CMOS clock.
First is the "crystal oscillator." This is a good time to point out that a quartz crystal is not an oscillator; rather, a quartz crystal is the central component in a quartz-crystal oscillator circuit, which combines the crystal with an inverting amplifier and its feedback and load components (the classic Pierce configuration).
Crystal oscillators are handy devices that consist of a quartz crystal and the additional circuitry needed to generate a standard digital clock signal. Thus, you get the stability and precision of a crystal without worrying about load capacitance and the careful PCB layout needed to ensure robust operation with a standalone crystal.
The second option is a "silicon oscillator". This term refers to oscillator ICs that are not based on quartz crystals or ceramic resonators. These devices are versatile and easy to use, and they can be quite accurate. For example, the LTC6930 series from Linear Tech requires only one bypass capacitor, and the vast majority of the parts are within 0.05% of the nominal frequency.
Silicon oscillators are more reliable than crystals and ceramic resonators, especially in harsh environments subject to shock or vibration. But they’re also more expensive.
### Quartz and Ceramic
When you need seriously high precision and stability without the additional cost of a crystal-based oscillator IC, opt for the standalone-crystal approach. Parts with tolerance below 20 parts per million (i.e., 0.002%) are readily available. The oscillator circuit shown above is partially integrated into microcontrollers that support the standalone-crystal configuration; you will need to provide the correct load capacitors. The total load capacitance (CLTOTAL) is specified in the crystal’s datasheet, and the load capacitors are chosen as follows:
$C_{LTOTAL}=\frac{C_{L1}\times C_{L2}}{C_{L1}+C_{L2}}+C_P$
where CP represents whatever parasitic capacitance is present. This calculation is pretty simple in practice: Choose a reasonable value for CP (say, 5 pF), subtract this from CLTOTAL, then multiply by two. So if the datasheet gives a load capacitance of 18 pF, we have
$C_{L1}=C_{L2}=\left(18\ pF-5\ pF\right)\times2=26\ pF$
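For convenience, here is the same arithmetic as a small Python helper (the function and variable names are mine):

```python
def load_caps(cl_total_pf: float, c_parasitic_pf: float = 5.0) -> float:
    """Return CL1 = CL2 for a crystal, given the datasheet load capacitance.

    Two equal load capacitors in series contribute CL/2, so:
    CLTOTAL = CL/2 + CP  =>  CL = 2 * (CLTOTAL - CP)
    """
    return 2.0 * (cl_total_pf - c_parasitic_pf)

print(load_caps(18.0))  # 26.0 pF, matching the worked example above
```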
Ceramic resonators are less accurate than crystals; common tolerances are 1000 to 5000 parts per million. They may save you a few cents if you don’t need the accuracy of quartz, but in my mind, the main advantage is that you can get ceramic resonators with integrated load capacitors.
### Last, and Least . . .
There aren’t many situations that call for an external resistor–capacitor or capacitor-only oscillator. If for some reason you are opposed to the external-oscillator options discussed thus far, choose a microcontroller with an internal oscillator and use that. If, however, you are determined to dig out a passive or two from your box of spare parts, refer to the microcontroller’s datasheet for guidance on how to connect and design the oscillator circuit. Here is an example of how to connect the components, taken from the datasheet for the C8051F12x–13x (PDF) microcontrollers from Silicon Labs:
And you can refer to page 190 (PDF) of this same datasheet for an example of information on choosing component values.
### Conclusion
I hope you now know enough to make an informed, confident decision next time you need to choose an oscillator for your microcontroller. Here are my recommendations in a nutshell:
• Internal oscillator whenever possible
• Silicon oscillator if the accuracy is adequate and the cost is acceptable—otherwise, quartz crystal
• MrChips May 12, 2016
Here are some items that need to be added. Since a lot of MCU applications require a stable clock frequency, a more expanded discussion on crystal oscillators would be useful.
For example, explain the difference between series and parallel resonance XTAL oscillators. Why are third harmonic XTAL oscillators used? What is the importance of the C1 and C2 loading capacitors and how does one determine the best values to use? Why is there a high value resistor in parallel with the XTAL? Why does the oscillator fail to start or output the correct frequency? How to prevent this from happening? Why is PCB layout so critical to the oscillator? What are some guidelines to proper PCB layout?
Many applications require time keeping function. For this reason, many MCUs have a low frequency oscillator option requiring the popular 32768Hz XTAL. How does one choose the loading capacitor? How does one adjust and calibrate the oscillator? What is the purpose of the series resistor? What are the adverse effects if one omits the series resistor?
jimkim March 23, 2018
Many good questions there. Many of the answers can be found in this Microwave Digest article http://www.crystek.com/documents/appnotes/pierce-gateintroduction.pdf . Third overtone crystals are used as a compromise between cost and performance. Quartz crystals (at least the common AT cut types) use a resonance mode known as thickness shear mode in which the frequency has an inverse relationship with the mass of the crystal. The mass is typically controlled by varying the thickness of the crystal wafer. In order to achieve a high frequency such as >30MHz (this depends on the crystal package size) the wafer would have to be extremely thin and therefore difficult or sometimes practically impossible to manufacture, causing price to increase substantially. Therefore, a cheaper third overtone crystal is used instead. One method to overcome this manufacturing obstacle is to use an inverted mesa design crystal. Inverted mesa crystals use a wafer that is thinner only in the middle, so they are less fragile. Inverted mesa crystals can be made in fundamental mode up to 200 MHz (sometimes higher) but you'd have to pay an ARM, LEG, AND HALF YOUR BODY for it. There are many possible reasons why oscillators fail startup and output incorrect frequencies. Based on my experience, the frequency error is commonly due to incorrect selection of load capacitor (C1, C2) values. Many engineers are under the impression that a crystal with a load value of 18pF requires load capacitors of 18pF. THIS IS NOT THE CASE. The correct calculation is Cload = {[Cin+C1][C2+Cout]/[Cin+C1+C2+Cout]} + pcb strays. The start-up issue can be more complex and involves drive level, ESR, negative resistance, and other factors in circuit matching and optimization. Also, with a 32.768kHz crystal an Rs is a must and must not be omitted, as without one the high drive level may damage the crystal. More details can be found in the article mentioned above, including the function of the Rs and Rf resistors.
• Barbo May 27, 2016
Very, very good, thank you.
• Isa Ansharullah May 19, 2017
Great. I want to know more about this so-called "trimming": what does it mean, and how do we do it? And how exactly does a microprocessor "trim" an internal (or external!) oscillator?
• kuashio November 10, 2019
Hey, I regret to comment so much later, but for what it's worth, I'm writing an article exactly on internal oscillator trimming. I expect to send it in tomorrow or the day after. Stay tuned to allaboutcircuits.com!
https://cn.coursera.org/lecture/mathematics-for-computer-science/2-101-octal-and-hexadecimal-integer-CUS9X
## 2.101 Octal and hexadecimal (integer)
### Reviews
4.2 (192 ratings)
• 5 stars
59.37%
• 4 stars
17.70%
• 3 stars
10.93%
• 2 stars
5.20%
• 1 star
6.77%
KA
Nov 16, 2022
Thanks a lot for the nice course; the explanations are comprehensive and laconic, especially for modular arithmetic, which I previously had some problems with.
EA
Jun 14, 2020
This instructor makes math really fun! The material was challenging, but she was very engaging and gave a lot of descriptive examples.
Number bases - other bases
In this week, we will extend the place value and number systems to Octal, Hexadecimal and any other bases. You will also be introduced to the usefulness of hexadecimal in computer science.
https://math216.wordpress.com/2010/08/23/first-notes-category-theory/
The first post is the August 26 version here.
I hope to post approximately 25 pages of reading for learners (with more optional material for experts) every fortnight.
For learners.
If you are reading to learn, the first reading is, I'm afraid, quite heavy if you've never seen category theory before. You should just skim the starred subsections and skip the spectral sequence section 2.7; you should read (slowly) 2.1-2.6, and get a sneak peek at 3.1. It is essential to be familiar with the exercises, and which ones you should do depends on what you know. If you force me to name twelve, I won't be happy, but I'll try: 2.3.F, 2.3.K, 2.3.O, 2.3.T, 2.3.Y, 2.4.A, 2.4.C, 2.5.C, 2.5.E, 2.6.A, 2.6.B, 2.6.F (from the Aug 26 version).
I don’t want to give you much guidance, because I want the notes to speak for themselves as much as possible.
For experts.
I hope to get across not just the category theory people need, but also a sense of how to think categorically. I always find it a good sign when people in class start asking wise categorically-minded questions and thinking clearly. There are also a few constructions that are essential. But I don't want to tell people anything they don't need to know, and I try to maintain a balance between informality and precision. I've noticed that most of the following topics are easy once you understand them (with some exceptions!), but that they are quite unmotivated when you first see them. Developing fluency with these ideas requires working through as many trivial examples as possible. I hope many of the exercises are (after you figure them out) trivial, although some are hard.
In 2.1, I try to give them motivation for why we think in this way.
In 2.2, I introduce categories and functors and related notions, such as natural transformations. I didn’t understand the notion of equivalent categories in my gut until Johan de Jong gave me some baby examples.
Section 2.3 is devoted to universal properties, again through key examples. Yoneda’s lemma sneaks in at the end, and I hope readers end up finding it reasonable (even if they don’t yet appreciate it).
(Question: after 3.3.D: "0" is a zero-divisor. Is that so horrible? Question: I originally denoted the contravariant functor representing X by h^X, and the covariant functor representing X by h_X. Arthur Ogus and Jason Ferguson made strong cases that I'm violating accepted convention, so I've attempted to reverse myself. Does anyone strongly disagree with Arthur and Jason?)
(Co)limits are discussed at length in 2.4, with examples, and constructions of when they exist in cases that will actually be used.
In 2.5, adjoints are discussed. I find this tricky to explain. It is a pain in the butt to show that two things are adjoint.
Abelian categories are introduced in 2.6. We need them, but they are a tarpit in which many people have been irretrievably lost. We need some things, but remarkably little. It’s really true that if you understand modules over a ring, you can get by in almost any case in algebraic geometry (of the sort most people do) — even without Freyd-Mitchell. I am the most worried about this section. If someone hasn’t seen long exact sequences before, Theorem 2.6.5 will be a bolt from the blue.
Section 2.6.7, two useful facts in homological algebra, contains facts that I really wish I’d collected in my head earlier than I did (on the relationship between homology, (right/left)-exact functors, adjoints, and colimits — I would often be confused as to when various things would commute, and only later realized that there are some general principles at work). I hope I have collected them correctly. People seeing homological algebra for the first time shouldn’t read this, but people seeing it for the second time might benefit from it.
(Linked diagrams: "The FHHF Theorem" and "Going far with the FHHF Theorem".)
Spectral sequences are necessary later, so I included a brief introduction in 2.7. I really mean it when I say that I hope that people don't peek until they actually need it. I do something slightly unusual (that may upset some people). For me, spectral sequences are useful because of how they can be used with little thought. Too many people just tune out when they hear the word, but when they turn up, they should make you happy, not sad. The best way to understand how to use something is to see it used to do something you already know how to do. So I deliberately concentrate on how to use them, in the special case of double complexes, which is the only case we'll use later. I give a proof in this case which I believe is complete (and which hasn't yielded many complaints), but which I hope most people don't read. My only goal with this section is that people should leave it unafraid of spectral sequences in this setting, ready to use them on a moment's notice, and with a sharp sense of how to use them. (Questions for experts: (1) I also introduce a convention of having an arrow in the spectral sequence of a double complex to indicate which differential you use first, the horizontal or the vertical, both in the names of the pages and in the names of the differentials on that page. Is anyone really offended? (2) Should the indices of spectral sequences be ordered "(row, column)" or "(x, y)"? I don't think there is consistency in the literature, so I've gone with the first. (3) Is there some classical important fact that would be useful practice for people learning spectral sequences, that doesn't require much additional background?)
https://eccc.weizmann.ac.il/keyword/15031/
Reports tagged with QMA:
TR08-051 | 4th April 2008
Scott Aaronson, Salman Beigi, Andrew Drucker, Bill Fefferman, Peter Shor
#### The Power of Unentanglement
The class QMA(k), introduced by Kobayashi et al., consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class remain embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Can ... more >>>
TR08-067 | 4th June 2008
Scott Aaronson
#### On Perfect Completeness for QMA
Whether the class QMA (Quantum Merlin Arthur) is equal to QMA1, or QMA with one-sided error, has been an open problem for years. This note helps to explain why the problem is difficult, by using ideas from real analysis to give a "quantum oracle" relative to which QMA and QMA1 ... more >>>
TR11-001 | 2nd January 2011
Scott Aaronson
#### Impossibility of Succinct Quantum Proofs for Collision-Freeness
We show that any quantum algorithm to decide whether a function $f\colon\left[n\right]\rightarrow\left[n\right]$ is a permutation or far from a permutation must make $\Omega\left(n^{1/3}/w\right)$ queries to $f$, even if the algorithm is given a $w$-qubit quantum witness in support of $f$ being a permutation. This implies ... more >>>
TR11-110 | 10th August 2011
Alessandro Chiesa, Michael Forbes
#### Improved Soundness for QMA with Multiple Provers
Revisions: 1
We present three contributions to the understanding of QMA with multiple provers:
1) We give a tight soundness analysis of the protocol of [Blier and Tapp, ICQNM '09], yielding a soundness gap $\Omega(N^{-2})$, which is the best-known soundness gap for two-prover QMA protocols with logarithmic proof size. Maybe ... more >>>
TR16-109 | 18th July 2016
Scott Aaronson
#### The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes
This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, ... more >>>
TR19-015 | 7th February 2019
William Kretschmer
#### QMA Lower Bounds for Approximate Counting
We prove a query complexity lower bound for $QMA$ protocols that solve approximate counting: estimating the size of a set given a membership oracle. This gives rise to an oracle $A$ such that $SBP^A \not\subset QMA^A$, resolving an open problem of Aaronson [2]. Our proof uses the polynomial method to ... more >>>
TR19-121 | 17th September 2019
Alexander A. Sherstov, Justin Thaler
#### Vanishing-Error Approximate Degree and QMA Complexity
The $\epsilon$-approximate degree of a function $f\colon X \to \{0, 1\}$ is the least degree of a multivariate real polynomial $p$ such that $|p(x)-f(x)| \leq \epsilon$ for all $x \in X$. We determine the $\epsilon$-approximate degree of the element distinctness function, the surjectivity function, and the permutation testing problem, showing ... more >>>
TR19-131 | 11th September 2019
Lieuwe Vinkhuijzen, André Deutz
#### A Simple Proof of Vyalyi's Theorem and some Generalizations
In quantum computational complexity theory, the class QMA models the set of problems efficiently verifiable by a quantum computer the same way that NP models this for classical computation. Vyalyi proved that if $\text{QMA}=\text{PP}$ then $\text{PH}\subseteq \text{QMA}$. In this note, we give a simple, self-contained proof of the theorem, using ... more >>>