https://tex.stackexchange.com/questions/560962/adding-a-third-level-of-compounds-distinction-in-chemnum
# Adding a third level of compounds distinction in chemnum?
Essentially I would like to add a "sub-sub-label" to my compounds' numbering using chemnum.
## Why and how exactly I want this
I'm describing a series of chiral compounds, numbering them 1a, 1b, and so on. Sometimes I also have another diastereomer of a compound: let's say 1b has epimerized at the 2nd position, and I would like to name it 2-epi-1b (my supervisor's preference).
## How I'm doing it now
Currently I'm just reusing the existing reference to 1b with a bold prefix, like this: \textbf{2-epi-}\refcmpd{one.b}, but I find this inconvenient and error-prone.
## How I would like to do it
I was thinking about decreasing the appropriate sub-counter by 1 directly after the first \cmpd{one.b} and immediately declaring an epimerized compound like this: \labelcmpd[pre-label-code=\textbf{2-epi-}]{one.b:epi}. Unfortunately, I've learned that there is no such sub-counter: chemnum stores the current sub-label ID as an expl3 integer, and I have no idea how to interact with it.
## The question(s)
Is it possible to manually decrease internal sub-label counter by 1 in chemnum? If yes, how to do it? Or maybe there is a better/alternative way to introduce a "sub-sub-label" to the numbering of my compounds?
After reading the source code of the chemnum package and learning some expl3 programming, I've come up with a hacky way to do this. I created a new command that defines a label without printing it (like \labelcmpd), mimicking a given label and sub-label. The code:
```latex
\ExplSyntaxOn
\int_new:N \l__chemnum_tmpnum_int
% #1: options, #2: main ID, #3: sub ID, #4: new main ID
\NewDocumentCommand {\sublabelcmpd} {O{ }mmm} {
  % stash the main counter value
  \int_set:Nn \l__chemnum_tmpnum_int {\value{cmpdmain}}
  % set the main counter so that it will produce the #2 label again
  \setcounter {cmpdmain} {\int_eval:n {\cmpdproperty{#2}{number}-1}}
  % define a new compound disguised as #2, with a dummy sub-compound
  \chemnum_cmpd:nnnn {\c_true_bool} {\c_false_bool} {#1} {#4.subundefined}
  % set the sub-counter so that it will produce the desired sub-label #3
  \int_set:cn {g__chemnum_compound_#4_subcompound_int} {\subcmpdproperty{#2}{#3}{number}-1}
  % define a new sub-compound disguised as #2.#3
  \chemnum_cmpd:nnnn {\c_true_bool} {\c_false_bool} {#1} {#4.#3}
  % restore the previous main counter state
  \setcounter {cmpdmain} {\l__chemnum_tmpnum_int}
}
\ExplSyntaxOff
```
This way I can provide different/additional options for what appears to be a previously used label. The method requires a new main label, because options are associated with the main label and cannot be changed. The syntax is \sublabelcmpd[<options>]{<main ID>}{<sub ID>}{<new main ID>}; the new compound can then simply be referenced with \cmpd{<new main ID>.<sub ID>}. To give an example:
```latex
Compounds \cmpd{one.a} and \cmpd{one.b} are defined as usual.
Then a new compound is defined using
\verb#\sublabelcmpd[pre-label-code=\textbf{2-epi-}]{one}{b}{epi:one}#.
\sublabelcmpd[pre-label-code=\textbf{2-epi-}]{one}{b}{epi:one}
This newly defined compound will have the same label as \cmpd{one.b},
but with additional options, and can be referenced normally: \cmpd{epi:one.b}.
```
http://math.stackexchange.com/questions/267363/isosceles-triangle
# Isosceles triangle
Let $\triangle ABC$ be a $C$-isosceles triangle and let $P\in (AB)$ be a point such that $m\left(\widehat{PCB}\right)=\phi$. Express $AP$ in terms of $C$, $c$ and $\tan\phi$.
Edited problem statement (the same as above, in different words):
Let $\triangle ABC$ be an isosceles triangle with the right angle at $C$. Denote $\left | AB \right |=c$. Point $P$ lies on $AB$ ($P\neq A,B$) and $\angle PCB=\phi$. Express $\left | AP \right |$ in terms of $c$ and $\tan\phi$.
What does $\,C-$isosceles, $\,(AB)\,$ and $\,m(\widehat{PCB})\,$ mean? That C is the base, (AB) one of the equal sides and the angle...or what? – DonAntonio Dec 29 '12 at 23:49
@DonAntonio I guess that $C$ is the vertex opposite to the base; the base is denoted by $(AB)$ and $m(\widehat{PCB})$ is the measure of the angle with vertex $C$. $c$ is the length of the base $(AB)$. – Sigur Dec 30 '12 at 0:20
That seems plausible, @Sigur, yet it is interesting the OP didn't bother to address the question... – DonAntonio Dec 30 '12 at 0:24
I see that I have confused many of you with my problem statement. I will edit it so it will be easier to understand. – EricAm Dec 30 '12 at 2:14
You wrote in your edition "...with right angle at $\,C\,$"...is this correct? Is it then a right angle isosceles triangle? – DonAntonio Dec 30 '12 at 4:45
Edited for revised question
Dropping the perpendicular from $C$ onto $AB$ will help. Call the point $E$.
Also drop the perpendicular from $P$ onto $BC$, and call the point $F$. Then drop the perpendicular from $F$ onto $AB$, and call the point $G$.
This gives a lot of similar and congruent triangles.
$$\tan \phi = \dfrac{|PF|}{|CF|} = \dfrac{|FB| }{ |CF|} = \dfrac{ |GB| }{|EG| } = \dfrac{ |PB| }{|AP| }= \dfrac{ c-|AP| }{|AP| }$$ so $$|AP| = \dfrac{c}{ 1+\tan \phi}.$$
Thanks for the answer Henry but unfortunately you have misunderstood the problem. I will edit my problem statement and hopefully it will become clearer. – EricAm Dec 30 '12 at 2:15
Interesting answers, Henry and DonAntonio. Here is a different approach, using trigonometry, from an internet friend of mine. It is actually quite nice.
Apply the well-known relation $\frac {PA}{PB}=\frac {CA}{CB}\cdot\frac {\sin\left(\widehat{PCA}\right)}{\sin\left(\widehat{PCB}\right)}=$ $\frac {\sin (C-\phi )}{\sin\phi}=$ $\frac {\sin C-\cos C\cdot\tan\phi}{\tan\phi}\implies$
$\frac {PA}{\sin C-\cos C\cdot\tan\phi}=\frac {PB}{\tan\phi}=\frac {c}{\sin C+(1-\cos C)\cdot\tan\phi}\implies$ $\boxed{PA=c\cdot\frac {\sin C-\cos C\cdot\tan\phi}{\sin C+(1-\cos C)\tan\phi}}$ .
Particular case $C=90^{\circ}\ \implies\ PA=\frac {c}{1+\tan\phi}$ .
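As a numerical sanity check (my addition, not from the thread), one can build the triangle in coordinates and compare the boxed formula against a direct measurement; the test angles 70° and 25° are arbitrary:

```python
import math

def ap_formula(c, C, phi):
    """Boxed formula: PA = c*(sin C - cos C*tan(phi)) / (sin C + (1 - cos C)*tan(phi))."""
    t = math.tan(phi)
    return c * (math.sin(C) - math.cos(C) * t) / (math.sin(C) + (1 - math.cos(C)) * t)

def ap_geometric(C, phi):
    """Construct the triangle with CA = CB = 1 and apex angle C, intersect the
    ray from C making angle phi with CB against AB, and measure AP directly."""
    A = (math.cos(C), math.sin(C))   # apex C at the origin, CB along the x-axis
    B = (1.0, 0.0)
    dx, dy = A[0] - B[0], A[1] - B[1]
    # P = B + s*(A - B) lies on the ray y/x = tan(phi) through the origin:
    s = (B[1] * math.cos(phi) - B[0] * math.sin(phi)) / (dx * math.sin(phi) - dy * math.cos(phi))
    P = (B[0] + s * dx, B[1] + s * dy)
    return math.dist(A, P), math.dist(A, B)   # (AP, c)

C, phi = math.radians(70), math.radians(25)
ap, c = ap_geometric(C, phi)
assert abs(ap - ap_formula(c, C, phi)) < 1e-12

# Right-angle special case: PA = c / (1 + tan(phi))
ap, c = ap_geometric(math.radians(90), phi)
assert abs(ap - c / (1 + math.tan(phi))) < 1e-12
print("both checks pass")
```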
So let's figure out why this and Henry's solutions differs. – EricAm Dec 30 '12 at 18:57
It seems one of my equalities was upside down – Henry Dec 30 '12 at 21:46
Putting $\,CE\,$ equal to the height to the base of the triangle (thus $\,CE\perp AB\,$), and putting $\,x:=\angle PCE\,\,,\,\,y:=\angle ECB\,$, we get $\,\phi=x+y\,$, and
$$\tan x= \frac{PE}{CE}\,\,,\,\tan y=\frac{c}{2\cdot CE}\Longrightarrow$$
$$AP=\frac{c}{2}-PE=\frac{c}{2}-CE\tan x=\frac{c}{2}-\frac{c}{2\tan y}\tan x\Longrightarrow$$
$$AP=\frac{c}{2}\frac{\tan y-\tan x}{\tan y}$$
But
$$\tan\phi=\tan(x+y)=\frac{\tan x+\tan y}{1-\tan x\tan y}$$
Try to take it from here.
https://en.wikipedia.org/wiki/CR_structure
# CR manifold
(Redirected from CR structure)
In mathematics, a CR manifold is a differentiable manifold together with a geometric structure modeled on that of a real hypersurface in a complex vector space, or more generally modeled on an edge of a wedge.
Formally, a CR manifold is a differentiable manifold M together with a preferred complex distribution L, or in other words a subbundle L of the complexified tangent bundle CTM = TM ⊗ C, such that
• ${\displaystyle [L,L]\subseteq L}$ (L is formally integrable)
• ${\displaystyle L\cap {\bar {L}}=\{0\}}$ (L is almost Lagrangian).
The bundle L is called a CR structure on the manifold M.
The abbreviation CR stands for Cauchy–Riemann or Complex-Real.
## Introduction and motivation
The notion of a CR structure attempts to describe intrinsically the property of being a hypersurface in complex space by studying the properties of holomorphic vector fields which are tangent to the hypersurface.
Suppose for instance that M is the hypersurface of C2 given by the equation
${\displaystyle F(z,w):=|z|^{2}+|w|^{2}=1,}$
where z and w are the usual complex coordinates on C2. The holomorphic tangent bundle of C2 consists of all linear combinations of the vectors
${\displaystyle {\frac {\partial }{\partial z}},\quad {\frac {\partial }{\partial w}}.}$
The distribution L on M consists of all combinations of these vectors which are tangent to M. The tangent vectors must annihilate the defining equation for M, so L consists of complex scalar multiples of
${\displaystyle {\bar {w}}{\frac {\partial }{\partial z}}-{\bar {z}}{\frac {\partial }{\partial w}}.}$
In particular, L consists of the holomorphic vector fields which annihilate F. Note that L gives a CR structure on M, for [L,L] = 0 (since L is one-dimensional) and ${\displaystyle L\cap {\bar {L}}=\{0\}}$ since ∂/∂z and ∂/∂w are linearly independent of their complex conjugates.
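As a quick symbolic sanity check (my addition), one can treat z, w and their conjugates as independent symbols and confirm with sympy that this vector field annihilates F:

```python
import sympy as sp

z, w, zb, wb = sp.symbols('z w zbar wbar')  # conjugates treated as independent symbols
F = z * zb + w * wb - 1                     # defining function of the sphere in C^2

# Apply L = wbar * d/dz - zbar * d/dw to F
LF = wb * sp.diff(F, z) - zb * sp.diff(F, w)
assert sp.simplify(LF) == 0  # L annihilates F, i.e. L is tangent to M
print("L annihilates F")
```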
More generally, suppose that M is a real hypersurface in Cn, with defining equation F(z1, ..., zn) = 0. Then the CR structure L consists of those linear combinations of the basic holomorphic vectors on Cn:
${\displaystyle {\frac {\partial }{\partial z_{1}}},\ldots ,{\frac {\partial }{\partial z_{n}}}}$
which annihilate the defining function. In this case, ${\displaystyle L\cap {\bar {L}}=\{0\}}$ for the same reason as before. Moreover, [L,L] ⊂ L since the commutator of holomorphic vector fields annihilating F is again a holomorphic vector field annihilating F.
### Embedded and abstract CR manifolds
There is a sharp contrast between the theories of embedded CR manifolds (hypersurfaces and edges of wedges in complex space) and abstract CR manifolds (those given by the Lagrangian distribution L), although many of the formal geometrical features are similar.
Embedded CR manifolds possess some additional structure, though: a Neumann and Dirichlet problem for the Cauchy–Riemann equations.
This article first treats the geometry of embedded CR manifolds, shows how to define these structures intrinsically, and then generalizes these to the abstract setting.
## Embedded CR manifolds
### Preliminaries
Embedded CR manifolds are, first and foremost, submanifolds of Cn. Define a pair of subbundles of the complexified tangent bundle C ⊗ TCn by:
${\displaystyle T^{(1,0)}\mathbb {C} ^{n}=\operatorname {span} \left({\frac {\partial }{\partial z_{1}}},\dots ,{\frac {\partial }{\partial z_{n}}}\right).}$
${\displaystyle T^{(0,1)}\mathbb {C} ^{n}=\operatorname {span} \left({\frac {\partial }{\partial {\bar {z}}_{1}}},\dots ,{\frac {\partial }{\partial {\bar {z}}_{n}}}\right).}$
Also relevant are the characteristic annihilators from the Dolbeault complex:
• Ω(1,0)Cn = (T(0,1)Cn)⊥. In coordinates,
${\displaystyle \Omega ^{(1,0)}\mathbb {C} ^{n}=\operatorname {span} (dz_{1},\dots ,dz_{n}).}$
• Ω(0,1)Cn = (T(1,0)Cn)⊥. In coordinates,
${\displaystyle \Omega ^{(0,1)}\mathbb {C} ^{n}=\operatorname {span} (d{\bar {z}}_{1},\dots ,d{\bar {z}}_{n}).}$
The exterior products of these are denoted by the self-evident notation Ω(p,q), and the Dolbeault operator and its complex conjugate map between these spaces via
${\displaystyle \partial :\Omega ^{(p,q)}\rightarrow \Omega ^{(p+1,q)}}$
${\displaystyle {\bar {\partial }}:\Omega ^{(p,q)}\rightarrow \Omega ^{(p,q+1)}}$
Furthermore, there is a decomposition of the usual exterior derivative via ${\displaystyle d=\partial +{\bar {\partial }}}$.
### Real submanifolds of complex space
Let M ⊂ Cn be a real submanifold, defined locally as the locus of a system of smooth real-valued functions
F1 = 0, F2 = 0, ..., Fk = 0.
Suppose that this system has maximal rank, in the sense that the differentials satisfy the following independence condition:
${\displaystyle \partial F_{1}\wedge {\bar {\partial }}F_{1}\wedge \dots \wedge \partial F_{k}\wedge {\bar {\partial }}F_{k}\not =0.}$
Note that this condition is strictly stronger than needed to apply the implicit function theorem: in particular, M is a manifold of real dimension 2n − k. We say that M is an embedded CR manifold of CR codimension k. In most applications, k = 1, in which case the manifold is said to be of hypersurface type.
Let L ⊂ T(1,0)Cn|M be the subbundle of vectors annihilating all of the defining functions F1, ..., Fk. Note that, by the usual considerations for integrable distributions on hypersurfaces, L is involutive. Moreover, the independence condition implies that L is a bundle of constant rank n − k.
Henceforth, suppose that k = 1 (so that the CR manifold is of hypersurface type), unless otherwise noted.
### The Levi form
Let M be a CR manifold of hypersurface type with single defining function F = 0. The Levi form of M, named after Eugenio Elia Levi,[1] is the Hermitian 2-form
${\displaystyle h=i\,\partial {\bar {\partial }}F|_{L}.}$
This determines a metric on L. M is said to be strictly pseudoconvex if h is positive definite (or pseudoconvex in case h is positive semidefinite). Many of the analytic existence and uniqueness results in the theory of CR manifolds depend on the strict pseudoconvexity of the Levi form.
This nomenclature comes from the study of pseudoconvex domains: M is the boundary of a (strictly) pseudoconvex domain in Cn if and only if it is (strictly) pseudoconvex as a CR manifold. (See plurisubharmonic functions and Stein manifold.)
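As a quick worked example: for the unit sphere in Cn with defining function F = |z1|2 + ⋯ + |zn|2 − 1, one computes

```latex
h = i\,\partial\bar{\partial}F
  = i\,\partial\Bigl(\sum_{j=1}^{n} z_j\, d\bar{z}_j\Bigr)
  = i\sum_{j=1}^{n} dz_j \wedge d\bar{z}_j ,
```

which is positive definite on all of T(1,0)Cn, and in particular on L; hence the sphere is strictly pseudoconvex, consistent with it bounding the strictly pseudoconvex unit ball.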
## Abstract CR structures and Embedding Abstract CR structures in ${\displaystyle \mathbb {C} ^{n}}$
An abstract CR structure on a manifold M of dimension n consists of a subbundle L of the complexified tangent bundle which is formally integrable, in the sense that [L,L] ⊂ L, and which is linearly independent of its complex conjugate. The CR codimension of the CR structure is k = n − 2 dim L. In case k = 1, the CR structure is said to be of hypersurface type. Most examples of abstract CR structures are of hypersurface type unless otherwise made explicit.
### The Levi form and pseudoconvexity
Suppose that M is a CR manifold of hypersurface type. The Levi form is the vector valued 2-form, defined on L, with values in the line bundle
${\displaystyle V={\frac {TM\otimes {\mathbb {C} }}{L\oplus {\bar {L}}}}}$
given by
${\displaystyle h(v,w)={\frac {1}{2i}}[v,{\bar {w}}]\mod L\oplus {\bar {L}},\quad v,w\in L.}$
By the integrability condition, h is well defined: it does not depend on how v and w are extended to sections of L, and so it defines a sesquilinear form on L. This form extends to a Hermitian form on the bundle ${\displaystyle L\oplus {\bar {L}}}$ by the same expression. The extended form is also sometimes referred to as the Levi form.
The Levi form can alternatively be characterized in terms of duality. Consider the line subbundle of the complex cotangent bundle annihilating ${\displaystyle L\oplus {\bar {L}}}$,
${\displaystyle H_{0}M=V^{*}=(L\oplus {\bar {L}})^{\perp }\subset T^{*}M\otimes {\mathbb {C} }.}$
For each local section α ∈ Γ(H0M), let
${\displaystyle h_{\alpha }(v,w)=d\alpha (v,{\bar {w}})=-\alpha ([v,{\bar {w}}]),\quad v,w\in L\oplus {\bar {L}}.}$
The form hα is a complex-valued hermitian form associated to α.
Generalizations of the Levi form exist when the manifold is not of hypersurface type, in which case the form no longer assumes values in a line bundle, but rather in a vector bundle. One may then speak, not of a Levi form, but of a collection of Levi forms for the structure.
On abstract CR manifolds of strongly pseudoconvex type, the Levi form gives rise to a pseudo-Hermitian metric. The metric is defined only for the holomorphic tangent vectors and so is degenerate. One can then define a connection, torsion, and related curvature tensors, for example the Ricci curvature and scalar curvature, using this metric. This gives rise to an analogous CR Yamabe problem, first studied by David Jerison and Jack Lee. The connection associated to CR manifolds was first defined and studied by Sidney M. Webster in his thesis on the equivalence problem, and independently defined and studied by Tanaka.[2] Accounts of these notions may be found in the articles.[3][4]
One of the basic questions of CR geometry is to ask when a smooth manifold endowed with an abstract CR structure can be realized as an embedded manifold in some ${\displaystyle \mathbb {C} ^{n}}$. For a global embedding, we demand not only that the manifold embed, but also that the embedding map pull back the induced CR structure of the embedded manifold (coming from the fact that it sits in ${\displaystyle \mathbb {C} ^{n}}$) so that the pulled-back CR structure agrees exactly with the abstract CR structure; global embedding is thus a two-part condition. The question then splits into two: one can ask for local embeddability or for global embeddability.
Global embeddability always holds for abstractly defined, compact CR structures which are strongly pseudoconvex (that is, the Levi form is positive definite), when the real dimension of the manifold is 5 or higher, by a result of Louis Boutet de Monvel.[5]
In dimension 3, there are obstructions to global embeddability. By taking small perturbations of the standard CR structure on the three-sphere ${\displaystyle \mathbb {S} ^{3},}$ the resulting abstract CR structure fails to embed globally. This is sometimes called the Rossi example.[6] The example in fact goes back to Hans Grauert and also appears in a paper by Aldo Andreotti and Yum-Tong Siu.[7]
A result of Joseph J. Kohn states that global embeddability is equivalent to the condition that the Kohn Laplacian have closed range.[8] This condition of closed range is not a CR invariant condition.
In dimension 3, a non-perturbative set of conditions that are CR invariant has been found by Sagun Chanillo, Hung-Lin Chiu and Paul C. Yang[9] that guarantees global embeddability for abstract strongly pseudo-convex CR structures defined on compact manifolds. Under the hypothesis that the CR Paneitz Operator is non-negative and the CR Yamabe constant is positive, one has global embedding. The second condition can be weakened to a non-CR invariant condition by demanding the Webster curvature of the abstract manifold be bounded below by a positive constant. It allows the authors to get a sharp lower bound on the first positive eigenvalue of Kohn's Laplacian. The lower bound is the analog in CR Geometry of the Andre Lichnerowicz bound for the first positive eigenvalue of the Laplace-Beltrami operator for compact manifolds in Riemannian geometry.[10] Non-negativity of the CR Paneitz operator in dimension 3 is a CR invariant condition as follows by the conformal covariant properties of the CR Paneitz operator on CR manifolds of real dimension 3, first observed by Kengo Hirachi.[11] The CR version of the Paneitz operator, the so-called CR Paneitz Operator first appears in a work of C. Robin Graham and Jack Lee. The operator is not known to be conformally covariant in real dimension 5 and higher, but only in real dimension 3. It is always a non-negative operator in real dimension 5 and higher.[12]
One can ask if all compactly embedded CR manifolds in ${\displaystyle \mathbb {C} ^{2}}$ have non-negative Paneitz operators. This is a sort of converse question to the embedding theorems discussed above. In this direction Jeffrey Case, Sagun Chanillo and Paul C. Yang have proved a stability theorem. That is, if one starts with a family of compact CR manifolds embedded in ${\displaystyle \mathbb {C} ^{2}}$, and the CR structure of the family ${\displaystyle J_{t}}$ changes in a real-analytic way with respect to the parameter ${\displaystyle t}$, and the CR Yamabe constant of the family of manifolds is uniformly bounded below by a positive constant, then the CR Paneitz operator remains non-negative for the entire family, provided one member of the family has its CR Paneitz operator non-negative.[13]
There are also results of global embedding for small perturbations of the standard CR structure for the 3-dimensional sphere due to Daniel Burns and Charles Epstein. These results hypothesize assumptions on the Fourier coefficients of the perturbation term.[14]
The realization of the abstract CR manifold as a smooth manifold in some ${\displaystyle \mathbb {C} ^{n}}$ will bound a Complex variety which in general may have singularities. This is the content of the Complex Plateau problem studied in the article by F. Reese Harvey and H. Blaine Lawson.[15] There is also further work on the Complex Plateau problem by Stephen S.-T. Yau.[16]
Local embedding of abstract CR structures is not true in real dimension 3, because of an example of Louis Nirenberg (the book by Chen and Mei-Chi Shaw referenced below also carries a presentation of Nirenberg's proof).[17] The example of L. Nirenberg may be viewed as a smooth perturbation of the non-solvable complex vector field of Hans Lewy. One can start with the anti-holomorphic vector field ${\displaystyle {\overline {L}}}$ on the Heisenberg group given by
${\displaystyle {\overline {L}}={\frac {\partial }{\partial {\bar {z}}}}-\imath z{\frac {\partial }{\partial t}},(z,t)\in \mathbb {C} \times \mathbb {R} ,\imath ={\sqrt {-1}}.}$
The vector field defined above has two linearly independent first integrals; that is, there are two solutions to the homogeneous equation,
${\displaystyle {\overline {L}}Z_{i}=0,i=1,2,Z_{1}=z,Z_{2}=t+\imath |z|^{2},dZ_{1}\wedge dZ_{2}\not =0.}$
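These two first integrals can be verified symbolically; the following sympy sketch (my addition) treats z, z̄ and t as independent variables:

```python
import sympy as sp

z, zb, t = sp.symbols('z zbar t')  # z and zbar treated as independent symbols

def Lbar(g):
    # Lewy / Heisenberg anti-holomorphic field: Lbar = d/dzbar - i*z*d/dt
    return sp.diff(g, zb) - sp.I * z * sp.diff(g, t)

Z1 = z
Z2 = t + sp.I * z * zb  # t + i|z|^2 with |z|^2 written as z*zbar

assert sp.simplify(Lbar(Z1)) == 0
assert sp.simplify(Lbar(Z2)) == 0  # d/dzbar gives i*z, and -i*z*(dZ2/dt) cancels it
print("both first integrals verified")
```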
Since we are in real dimension three the formal integrability condition is simply,
${\displaystyle [{\overline {L}},{\overline {L}}]=0}$
which is automatic. Notice the Levi form is strictly positive definite as a simple calculation gives,
${\displaystyle [{\overline {L}},L]=2i{\frac {\partial }{\partial t}},}$
where the holomorphic vector field L is given by,
${\displaystyle L={\frac {\partial }{\partial z}}+\imath {\overline {z}}{\frac {\partial }{\partial t}}.}$
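The commutation relation above can likewise be checked symbolically; in this sketch (my addition) the two fields act on an arbitrary test function f(z, z̄, t):

```python
import sympy as sp

z, zb, t = sp.symbols('z zbar t')
f = sp.Function('f')(z, zb, t)  # arbitrary smooth test function

def L(g):
    # holomorphic field: L = d/dz + i*zbar*d/dt
    return sp.diff(g, z) + sp.I * zb * sp.diff(g, t)

def Lbar(g):
    # anti-holomorphic field: Lbar = d/dzbar - i*z*d/dt
    return sp.diff(g, zb) - sp.I * z * sp.diff(g, t)

# [Lbar, L] f = Lbar(L f) - L(Lbar f) should equal 2i df/dt
commutator = Lbar(L(f)) - L(Lbar(f))
assert sp.simplify(commutator - 2 * sp.I * sp.diff(f, t)) == 0
print("[Lbar, L] = 2i d/dt verified")
```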
The first integrals which are linearly independent allow us to realize the CR structure as a graph in ${\displaystyle \mathbb {C} ^{2}}$ given by
${\displaystyle (z,t)\to (z,t+\imath |z|^{2})}$
The CR structure then is seen to be nothing but the restriction of the Complex structure of ${\displaystyle \mathbb {C} ^{2}}$ to the graph. Nirenberg constructs a single, non-vanishing complex vector field P, defined in a neighborhood of the origin in ${\displaystyle \mathbb {C} \times \mathbb {R} }$. He then shows that if ${\displaystyle Pu=0}$, then u has to be a constant. Thus the vector field P has no first integrals. The vector field P is created from the anti-holomorphic vector field for the Heisenberg group displayed above by perturbing it by a smooth complex-valued function ${\displaystyle \phi }$ as displayed below:
${\displaystyle P={\overline {L}}+\phi (z,{\overline {z}},t){\frac {\partial }{\partial t}}}$
Thus this new vector field P has no first integrals other than constants, and so it is not possible to realize this perturbed CR structure in any way as a graph in any ${\displaystyle \mathbb {C} ^{n}}$. The work of L. Nirenberg has been extended to a generic result by Howard Jacobowitz and Francois Treves.[18] In real dimension 9 and higher, local embedding of abstract CR structures is true by the work of Masatake Kuranishi, and in real dimension 7 by the work of Akahori.[19] A simplified presentation of Kuranishi's proof is due to Webster.[20]
The problem of local embedding remains open in real dimension 5.
### The tangential Cauchy–Riemann complex (Kohn Laplacian, Kohn–Rossi complex)
First of all, one needs to define a co-boundary operator ${\displaystyle {\overline {\partial _{b}}}}$. For CR manifolds that arise as boundaries of complex manifolds, one may view this operator as the restriction of ${\displaystyle {\overline {\partial }}}$ from the interior to the boundary. The subscript b is to remind one that we are on the boundary. The co-boundary operator takes (0,p) forms to (0,p+1) forms. One can even define the co-boundary operator for an abstract CR manifold, even if it is not the boundary of a complex variety. This can be done using the Webster connection.[21] The co-boundary operator ${\displaystyle {\overline {\partial _{b}}}}$ forms a complex, that is, ${\displaystyle {\overline {\partial _{b}}}\circ {\overline {\partial _{b}}}=0}$. This complex is called the tangential Cauchy–Riemann complex or the Kohn–Rossi complex. Investigation of this complex and the study of its cohomology groups was done in a fundamental paper by Joseph J. Kohn and Hugo Rossi.[22]
Associated to the Tangential CR complex is a fundamental object in CR Geometry and Several Complex Variables, the Kohn Laplacian. It is defined as:
${\displaystyle \Box _{b}={\overline {\partial _{b}}}{\overline {\partial _{b}}}^{\star }+{\overline {\partial _{b}}}^{\star }{\overline {\partial _{b}}}}$
Here ${\displaystyle {\overline {\partial _{b}}}^{\star }}$ denotes the formal adjoint of ${\displaystyle {\overline {\partial _{b}}}}$ with respect to ${\displaystyle L^{2}(M)}$, where the volume form may be derived from a contact form associated to the CR structure. See, for example, the paper of J. M. Lee in the American Journal of Mathematics referenced below. Note that the Kohn Laplacian takes (0,p) forms to (0,p) forms. Functions that are annihilated by the Kohn Laplacian are called CR functions. They are the boundary analogs of holomorphic functions. The real parts of the CR functions are called CR pluriharmonic functions. The Kohn Laplacian ${\displaystyle \Box _{b}}$ is a non-negative, formally self-adjoint operator. It is degenerate and has a characteristic set where its symbol vanishes. On a compact, strongly pseudoconvex abstract CR manifold, it has discrete positive eigenvalues which go to infinity and also approach zero. The kernel consists of the CR functions and so is infinite-dimensional. The positive eigenvalues of the Kohn Laplacian are bounded below by a positive constant if and only if the Kohn Laplacian has closed range. Thus, for embedded CR structures, using the result of Kohn stated above, we conclude that a compact CR structure which is strongly pseudoconvex is embedded if and only if the Kohn Laplacian has positive eigenvalues that are bounded below by a positive constant. The Kohn Laplacian always has the eigenvalue zero, corresponding to the CR functions.
Estimates for ${\displaystyle \Box _{b}}$ and ${\displaystyle {\overline {\partial _{b}}}}$ have been obtained in various function spaces in various settings. These estimates are easiest to derive when the manifold is strongly pseudoconvex, for then one can replace the manifold by osculating it to a high enough order with the Heisenberg group. Then using the group property and attendant convolution structure of the Heisenberg group, one can write down inverses/parametrices or relative parametrices to ${\displaystyle \Box _{b}}$.[23]
A concrete example of the ${\displaystyle {\overline {\partial _{b}}}}$ operator can be provided on the Heisenberg group. Consider the general Heisenberg group ${\displaystyle \mathbb {C} ^{n}\times \mathbb {R} }$ and consider the antiholomorphic vector fields which are also group left invariant,
${\displaystyle {\overline {L}}_{j}={\frac {\partial }{\partial {\overline {z_{j}}}}}-\imath z_{j}{\frac {\partial }{\partial t}},j=1,2,\ldots ,n,(z_{1},z_{2},\ldots ,z_{n})\in \mathbb {C} ^{n},t\in \mathbb {R} .}$
Then for a function u we have the (0,1) form ${\displaystyle \omega }$
${\displaystyle \omega ={\overline {\partial _{b}}}u=\sum _{j=1}^{n}{\overline {L_{j}}}u\ d{\overline {z_{j}}}.}$
Since ${\displaystyle {\overline {\partial _{b}}}^{\star }}$ vanishes on functions, we also have the following formula for the Kohn Laplacian for functions on the Heisenberg group:
${\displaystyle \Box _{b}=-\sum _{j=1}^{n}L_{j}{\overline {L_{j}}}}$
where
${\displaystyle L_{j}={\frac {\partial }{\partial z_{j}}}+\imath {\overline {z_{j}}}{\frac {\partial }{\partial t}},}$
are the group left invariant, holomorphic vector fields on the Heisenberg group. The expression for the Kohn Laplacian above can be re-written as follows. First it is easily checked that
${\displaystyle [L_{j},{\overline {L_{j}}}]=-2\imath T,T={\frac {\partial }{\partial t}},j=1,2,\ldots ,n}$
Thus we have by an elementary calculation:
${\displaystyle \Box _{b}=-{\frac {1}{2}}\sum _{j=1}^{n}(L_{j}{\overline {L_{j}}}+{\overline {L_{j}}}L_{j})+\imath nT}$
The first operator on the right is a real operator; in fact, it is the real part of the Kohn Laplacian. It is called the sub-Laplacian, and is a primary example of what is called a Hörmander sum-of-squares operator.[24][25] It is obviously non-negative, as can be seen via an integration by parts. Some authors define the sub-Laplacian with the opposite sign. In our case we have specifically:
${\displaystyle \Delta _{b}=-{\frac {1}{2}}\sum _{j=1}^{n}(L_{j}{\overline {L_{j}}}+{\overline {L_{j}}}L_{j})}$
where the symbol ${\displaystyle \Delta _{b}}$ is the traditional symbol for the sub-Laplacian. Thus
${\displaystyle \Box _{b}=\Delta _{b}+\imath nT}$
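This operator identity can be checked on an arbitrary test function for n = 1; the following sympy sketch (my addition) verifies Box_b = Delta_b + i·n·T:

```python
import sympy as sp

z, zb, t = sp.symbols('z zbar t')
f = sp.Function('f')(z, zb, t)  # arbitrary smooth test function

# Left-invariant fields on the Heisenberg group, n = 1
def L(g):
    return sp.diff(g, z) + sp.I * zb * sp.diff(g, t)

def Lbar(g):
    return sp.diff(g, zb) - sp.I * z * sp.diff(g, t)

def T(g):
    return sp.diff(g, t)

box_b   = -L(Lbar(f))                                      # Kohn Laplacian on functions
delta_b = -sp.Rational(1, 2) * (L(Lbar(f)) + Lbar(L(f)))   # sub-Laplacian

# Box_b = Delta_b + i*n*T with n = 1
assert sp.simplify(box_b - (delta_b + sp.I * T(f))) == 0
print("Box_b = Delta_b + i*T verified for n = 1")
```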
## Examples
The canonical example of a compact CR manifold is the real ${\displaystyle 2n+1}$ sphere as a submanifold of ${\displaystyle \mathbb {C} ^{n+1}}$. The bundle ${\displaystyle L}$ described above is given by
${\displaystyle L=\mathbb {C} TS^{2n+1}\cap T^{1,0}\mathbb {C} ^{n+1}}$
where ${\displaystyle T^{1,0}\mathbb {C} ^{n+1}}$ is the bundle of holomorphic vectors. The real form of this is given by ${\displaystyle P=\Re (L\oplus {\bar {L}})}$, the bundle given at a point ${\displaystyle p\in S^{2n+1}}$ concretely in terms of the complex structure, ${\displaystyle I}$, on ${\displaystyle \mathbb {C} ^{n+1}}$ by
${\displaystyle P_{p}=\{X\in T_{p}S^{2n+1}:IX\in T_{p}S^{2n+1}\subset T_{p}\mathbb {C} ^{n+1}\},}$
and the almost complex structure on ${\displaystyle P}$ is just the restriction of ${\displaystyle I}$. The sphere is an example of a CR manifold with constant positive Webster curvature and zero Webster torsion. The Heisenberg group is an example of a non-compact CR manifold with zero Webster torsion and zero Webster curvature. The unit circle bundle over compact Riemann surfaces with genus strictly greater than 1 also provides examples of CR manifolds which are strongly pseudoconvex and have zero Webster torsion and constant negative Webster curvature. These spaces can be used as comparison spaces in studying geodesics and volume comparison theorems on CR manifolds with zero Webster torsion, akin to the H.E. Rauch comparison theorem in Riemannian geometry.[26]
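The description of P_p can be made concrete numerically: for S³ ⊂ C², a tangent vector X satisfies IX ∈ T_pS³ exactly when X is complex-orthogonal to p. The following numpy sketch (my addition) checks this at a random point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random point on S^3, viewed inside C^2
p = rng.normal(size=2) + 1j * rng.normal(size=2)
p /= np.linalg.norm(p)

def is_tangent(X):
    # X is tangent to the sphere at p iff Re<p, X> = 0 (real part of the Hermitian product)
    return abs(np.real(np.vdot(p, X))) < 1e-12

# X complex-orthogonal to p: both X and iX (= IX) are tangent, so X lies in P_p
X = np.array([-np.conj(p[1]), np.conj(p[0])])
assert is_tangent(X) and is_tangent(1j * X)

# Y = i*p is tangent, but IY = -p is not, so Y lies outside P_p
Y = 1j * p
assert is_tangent(Y) and not is_tangent(1j * Y)
print("P_p is the complex-orthogonal complement of p, as expected")
```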
In recent years, other aspects of analysis on the Heisenberg group have been also studied, like minimal surfaces in the Heisenberg group, the Bernstein problem in the Heisenberg group and curvature flows.[27]
## Notes
1. ^ See (Levi 1909, p. 207): the Levi form is the differential form associated to the differential operator C, according to Levi's notation.
2. ^ Tanaka, N. (1975). "A Differential Geometric Study on Strongly Pseudoconvex Manifolds". Lectures in Mathematics, Kyoto University. Tokyo: Kinokuniya Book Store. 9.
3. ^ Lee, John M. (1988). "Pseudo-Einstein Structures on CR Manifolds". American Journal of Mathematics. 110 (1): 157–178. doi:10.2307/2374543.
4. ^ Webster, Sidney M. (1978). "Pseudo-hermitian Structures on a Real Hypersurface". Journal of Differential Geometry. 13: 25–41.
5. ^ Boutet de Monvel, Louis (1974). "Intégration des équations de Cauchy–Riemann induites formelles". Séminaire Équations aux dérivées partielles. École Polytechnique. 9: 1–13.
6. ^ Chen, S.-C.; Shaw, Mei-Chi (2001). Partial Differential Equations in Several Complex Variables. 19, AMS/IP Studies in Advanced Mathematics. Providence, RI: AMS.
7. ^ Andreotti, Aldo; Siu, Yum-Tong (1970). "Projective Embedding of Pseudoconcave Spaces". Annali della Scuola Norm. Sup. Pisa, Classe di Scienze. 24 (5): 231–278.
8. ^ Kohn, Joseph, J. (1986). "The Range of the Tangential Cauchy–Riemann Operator". Duke Mathematical Journal. 53: 525–545. doi:10.1215/S0012-7094-86-05330-5.
9. ^ Chanillo, Sagun; Chiu, Hung-Lin; Yang, Paul C. (2012). "Embeddability for 3-dimensional CR manifolds and CR Yamabe Invariants". Duke Mathematical Journal. 161 (15): 2909–2921. doi:10.1215/00127094-1902154.
10. ^ Lichnerowicz, André (1958). Géométrie des groupes de transformations. Paris: Dunod.
11. ^ Hirachi, Kengo (1993). "Scalar Pseudo-hermitian Invariants and the Szegő kernel on three-dimensional CR manifolds". Complex Geometry (Osaka 1990), Lecture Notes in Pure and Applied Math. New York: Marcel Dekker. 143: 67–76.
12. ^ Graham, C. Robin; Lee, John, M. (1988). "Smooth Solutions of Degenerate Laplacians on Strictly Pseudo-convex Domains". Duke Mathematical Journal. 57: 697–720. doi:10.1215/S0012-7094-88-05731-6.
13. ^ Case, Jeffrey S.; Chanillo, Sagun; Yang, Paul C. (2016). "The CR Paneitz operator and the Stability of CR Pluriharmonic functions". Advances in Mathematics. 287: 109–122. doi:10.1016/j.aim.2015.10.002.
14. ^ Burns, Daniel M.; Epstein, Charles L. (1990). "Embeddability for Three dimensional CR manifolds". J. of the American Math. Soc. 3: 809–841. doi:10.1090/s0894-0347-1990-1071115-4.
15. ^ Harvey, F.R.; Lawson, H.B., Jr. (1978). "On boundaries of complex analytic varieties I". Ann. Math. 102 (2): 223–290. JSTOR 1971032. doi:10.2307/1971032.
16. ^ Yau, Stephen S.-T. (1981). "Kohn-Rossi Cohomology and its Application to the Complex Plateau Problem I". Annals of Math. 113: 67–110. doi:10.2307/1971134.
17. ^ Nirenberg, Louis (1974). "On a Question of Hans Lewy". Russian Math. Surveys. 29: 251–262. doi:10.1070/rm1974v029n02abeh003856.
18. ^ Jacobowitz, Howard; Treves, Jean-Francois (1982). "Non Realizable CR Structures". Inventiones Math. 66: 231–250. doi:10.1007/bf01389393.
19. ^ Akahori, Takao (1987). "A New approach to the Local Embedding theorem of CR Structures of ${\displaystyle n\geq 4}$(The local solvability of the operator ${\displaystyle {\overline {\partial _{b}}}}$ in the abstract sense)". Memoirs of the American Math. Society. 67 (366).
20. ^ Webster, Sidney, M. (1989). "On the Proof of Kuranishi's Embedding Theorem". Annales de l'Institut Henri Poincaré C. 6 (3): 183–207.
21. ^ Lee, John M. (1986). "The Fefferman metric and pseudo-hermitian invariants". Transactions of the Amer. Math. Soc. 296: 411–429. doi:10.1090/s0002-9947-1986-0837820-2.
22. ^ Kohn, Joseph J.; Rossi, Hugo (1965). "On the Extension of Holomorphic functions from the boundary of Complex Manifolds". Annals of Math. 81: 451–472. doi:10.2307/1970624.
23. ^ Greiner, P.C.; Stein, E. M. (1977). Estimates for the ${\displaystyle {\overline {\partial }}}$-Neumann problem. Mathematical Notes. 19. Princeton Univ. Press.
24. ^ Hörmander, Lars (1967). "Hypoelliptic second-order differential equations". Acta Math. 119: 147–171. doi:10.1007/bf02392081.
25. ^ Kohn, Joseph J. (1972). "Subelliptic estimates". Proceedings Symp. in Pure Math.(AMS). 35: 143–152.
26. ^ Chanillo, Sagun; Yang, Paul C. (2009). "Isoperimetric and Volume Comparison theorems on CR manifolds". Annali della Scuola Norm. Sup. Pisa, Classe di Scienze. 8 (2): 279–307. doi:10.2422/2036-2145.2009.2.03.
27. ^ Capogna, Luca; Danielli, Donatella; Pauls, Scott; Tyson, Jeremy (2007). "Applications of Heisenberg Geometry". An Introduction to the Heisenberg Group and the Sub-Riemannian Isoperimetric Problem. Progress in Mathematics. 259. Berlin: Birkhauser. pp. 45–48.
https://www.transum.org/Maths/Exercise/Completing_The_Square.asp
# Completing the Square
## Practise this technique for use in solving quadratic equations and analysing graphs.
Write the following expressions in the completed square form.
This is level 1; Expressions with two terms such as $$x^2 + 6x$$.
$$x^2+4x$$ $$x^2+8x$$ $$x^2+10x$$ $$x^2+12x$$ $$x^2-14x$$ $$x^2+18x$$ $$x^2-22x$$ $$x^2+24x$$ $$x^2-26x$$
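Every answer above follows from the same identity, $$x^2 + bx + c = (x + \tfrac{b}{2})^2 + c - \tfrac{b^2}{4}$$. A short Python sketch of the general rule (the function and variable names are our own, not part of the exercise):

```python
from fractions import Fraction

def complete_the_square(b, c=0):
    """Rewrite x^2 + b*x + c in the form (x + h)^2 + k; return (h, k)."""
    h = Fraction(b, 2)        # half the x-coefficient
    k = Fraction(c) - h * h   # constant left over after subtracting h^2
    return h, k

# x^2 + 6x  ->  (x + 3)^2 - 9
h, k = complete_the_square(6)   # h == 3, k == -9
```

Using Fraction keeps odd coefficients exact, e.g. x^2 + 5x -> (x + 5/2)^2 - 25/4.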
## Instructions
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
## Description of Levels
Level 1 - Expressions with two terms such as $$x^2 + 6x$$
Level 2 - Expressions with three terms such as $$x^2 + 4x - 7$$
Level 3 - The coefficient of the squared term is greater than one such as $$2x^2 + 8x - 9$$
Level 4 - Use the ability to complete the square to help solve these basic quadratic equations
More Quadratic Equations - Use the ability to complete the square to help solve these more difficult quadratic equations.
Exam Style questions take the skill of completing the square and put it to use solving real problems. Typically problems involve solving equations or describing features of graphs. The questions are in the style of GCSE or IB/A-level exam paper questions and worked solutions are available for Transum subscribers.
Answers to this exercise are available lower down this page when you are logged in to your Transum account. If you don’t yet have a Transum subscription one can be very quickly set up if you are a teacher, tutor or parent.
## Curriculum Reference
See the National Curriculum page for links to related online activities and resources.
## Completing The Square
The video above is from the creative and aesthetically mindful Beth, a 3rd year Maths student at Oxford University.
Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly.
https://astronomy.stackexchange.com/questions/14119/open-access-table-of-visible-stars-with-magnitude-coordinates-and-possibly-col/14120
Open access table of visible stars with magnitude, coordinates, and possibly color?
I'm looking for a link to a table of visible stars that is open and available to everyone. It should have magnitude, RA, dec, and possibly some indication of color. This will be used to produce somewhat realistic night skies as a backdrop for showing the motion of the planets in the sky.
I'm not so interested in names, constellations, etc. These are of course useful to know and possibly to plot, but what I'm primarily after is the information necessary to illustrate star position, brightness, and some color information.
edit: to reiterate, "...table of visible stars that is open and available to everyone." I'm assuming that in 2016 the visible stars are not behind a paywall, am I wrong? Approx RA, dec, mag - are these available for open access and usage?
The Hipparcos catalogue by van Leeuwen (2007) contains all the information you require, plus estimates of distance from parallax. It is open and free to use for scientific purposes.
http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=I%2F311
The direct page that describes the catalogue contents and ftp site is http://cdsarc.u-strasbg.fr/viz-bin/Cat?I/311
The tables themselves are at ftp://cdsarc.u-strasbg.fr/pub/cats/I/311
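Once the catalogue rows (RA, Dec, magnitude) are in hand, only a little arithmetic is needed to turn magnitudes into plotted brightness. A sketch of that step; the scaling function and its parameters are our own choices, not part of the Hipparcos catalogue:

```python
def relative_flux(mag, mag_ref=0.0):
    """Flux relative to a star of magnitude mag_ref
    (Pogson's rule: 5 magnitudes = a factor of 100 in flux)."""
    return 10.0 ** (-0.4 * (mag - mag_ref))

def marker_area(mag, mag_limit=6.5, scale=20.0):
    """Map apparent magnitude to a plotting-marker area so that stars
    at the naked-eye limit (~6.5) fade to nothing."""
    return max(scale * (mag_limit - mag) / mag_limit, 0.0)

sirius, vega, faint = -1.46, 0.03, 6.4   # sample V magnitudes
```

For colour, the catalogue's B−V index can be mapped to an RGB tint in a similar table-driven way.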
• @uhoh This is an open source catalogue for scientific use. There is no paywall, and you didn't encounter one. If you intend some sort of commercial use you will have to make your own arrangements with ESA. Could you not find your way to where the entire catalogue can be downloaded? I have added the link to the table descriptions page (which you'll need) that has links to ftp downloads of the entire catalogue. – ProfRob Mar 12 '16 at 1:33
• @uhoh ps The query page I sent you does allow you to download as much of the table as you like. The option is in the preferences box on the left hand side, in a drop-down box labelled "max". – ProfRob Mar 12 '16 at 1:35
• @uhoh I've added it. – ProfRob Mar 12 '16 at 1:37
• OK this seems to work! I've clicked only the boxes I need, typed <6.5 for magnitude, and after some coaching from @RobJeffries I looked on the left where it says "preferences" and changed Max: to unlimited and the format to ASCII, and received 7,982 stars! I'll follow up with ESA, but I'm non-commercial, this is a personal "science project." Thanks!! – uhoh Mar 12 '16 at 1:47
• @uhoh Good. The catalogue is completely free and open for scientific and non-commercial use. – ProfRob Mar 12 '16 at 8:25
http://mathhelpforum.com/advanced-algebra/46500-linear-equation-problem.html
# Math Help - Linear equation problem
1. ## Linear equation problem
Hello everyone. I just don't get this. It seems very basic, but I'm having trouble finding an answer. I know augmented matrices and Gaussian elimination.
OK, once again: Ax = b.
Okay, I get the augmented matrix form. But how can I do this "backwards"?
A is a 2x2 matrix, x = [0; 3], and b is the null vector.
If A was [a, b; c, d], then I came up with 3c + 3d = 0, and then set c = 1, d = -1. This didn't work.
At the moment I'm unable to come up with a matrix that has certain vectors in its kernel or image. I can't construct a matrix with prescribed solutions.
2. Hello,
If you mean:
$Ax=b$ where $A=\begin{pmatrix}a&b\\c&d\end{pmatrix},\; x=\begin{pmatrix}0\\3\end{pmatrix},\; b=\begin{pmatrix}0\\0\end{pmatrix}$,
you come up with:
$3b=0,\, 3d=0$.
Thus, $A=\begin{pmatrix}a&0\\c&0\end{pmatrix}$ where $a,\, c$ are arbitrary.
Bye.
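The answer above is easy to sanity-check numerically: any matrix whose second column is zero sends $x=\begin{pmatrix}0\\3\end{pmatrix}$ to the zero vector. A quick Python check (the particular values of $a$ and $c$ are arbitrary):

```python
def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(entry * xi for entry, xi in zip(row, x)) for row in A]

A = [[7, 0],
     [-2, 0]]          # a = 7, c = -2, second column zero
assert matvec(A, [0, 3]) == [0, 0]

# A matrix with a nonzero second column fails:
B = [[7, 1],
     [-2, 0]]
assert matvec(B, [0, 3]) == [3, 0]
```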
3. Exactly what I meant.
$3b=0,\, 3d=0$. You got this from the multiplication $\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}0\\3\end{pmatrix}$, which results in the 2x1 matrix $\begin{pmatrix}3b\\3d\end{pmatrix}$.
To solve those backwards, I take $Ax$ and then rref it? OK, and that would then be the augmented matrix where, for example, $3b = 4$ if that was what I wanted.
Ok I think I got this, thank you.
https://mathshistory.st-andrews.ac.uk/Biographies/Sankara/
# Sankara Narayana
### Quick Info
Born: India
Died: India
Summary: Sankara Narayana was an Indian astronomer and mathematician. He wrote a commentary on the work of Bhaskara I.
### Biography
Sankara Narayana (or Shankaranarayana) was an Indian astronomer and mathematician. He was a disciple of the astronomer and mathematician Govindasvami. His most famous work was the Laghubhaskariya vivarana which was a commentary on the Laghubhaskariya of Bhaskara I which in turn is based on the work of Aryabhata I.
The Laghubhaskariya vivarana was written by Sankara Narayana in 869 AD for the author writes in the text that it is written in the Shaka year 791 which translates to a date AD by adding 78. It is a text which covers the standard mathematical methods of Aryabhata I such as the solution of the indeterminate equation $by = ax ± c$ ($a, b, c$ integers) in integers which is then applied to astronomical problems. The standard Indian method involves using the Euclidean algorithm. It is called kuttakara ("pulveriser") but the term eventually came to have a more general meaning like "algebra". The paper [2] examines this method. The reader who is wondering what the determination of "mati" means in the title of the paper [2] then it refers to the optional number in a guessed solution and it is a feature which differs from the original method as presented by Bhaskara I.
Perhaps the most unusual feature of the Laghubhaskariya vivarana is the use of katapayadi numeration as well as the place-value Sanskrit numerals which Sankara Narayana frequently uses. Sankara Narayana is the first author known to use katapayadi numeration with this name but he did not invent it for it appears to be identical to a system invented earlier which was called varnasamjna. The numeration system varnasamjna was almost certainly invented by the astronomer Haridatta, and it was explained by him in a text which many historians believe was written in 684 but this would contradict what Sankara Narayana himself writes. This point is discussed below. First we should explain ideas behind Sankara Narayana's katapayadi numeration.
The system is based on writing numbers using the letters of the Indian alphabet. Let us quote from [1]:-
... the numerical attribution of syllables corresponds to the following rule, according to the regular order of succession of the letters of the Indian alphabet: the first nine letters represent the numbers 1 to 9 while the tenth corresponds to zero; the following nine letters also receive the values 1 to 9 whilst the following letter has the value zero; the next five represent the first five units; and the last eight represent the numbers 1 to 8.
Under this system 1 to 5 are represented by four different letters. For example 1 is represented by the letters ka, ta, pa, ya which give the system its name (ka, ta, pa, ya becomes katapaya). Then 6, 7, 8 are represented by three letters and finally nine and zero are represented by two letters.
The system was a spoken one in the sense that consonants and vowels which are not vocalised have no numerical value. The system is a place-value system with zero but one may reasonably ask why such an apparently complicated numeral system might ever come to be invented. Well the answer must be that it lead to easily remembered mnemonics. In fact many different "words" could represent the same number and this was highly useful for works written in verse as the Indian texts tended to be.
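Under a simplified ASCII transliteration, the decoding just described can be sketched in Python. (Real katapayadi operates on the Devanagari alphabet; the conjunct-consonant and unvocalised-letter rules are ignored here, and 'Ta'..'Na' stands in for the retroflex row so that it does not collide with the dental 'ta'..'na'.) Digits run right to left, so the first syllable is the units digit:

```python
# Simplified katapayadi values: three rows of 1-9 plus zero, then
# pa..ma = 1..5 and ya..ha = 1..8, matching the description above.
KATAPAYADI = {
    'ka': 1, 'kha': 2, 'ga': 3, 'gha': 4, 'nga': 5,
    'ca': 6, 'cha': 7, 'ja': 8, 'jha': 9, 'nya': 0,
    'Ta': 1, 'Tha': 2, 'Da': 3, 'Dha': 4, 'Na': 5,
    'ta': 6, 'tha': 7, 'da': 8, 'dha': 9, 'na': 0,
    'pa': 1, 'pha': 2, 'ba': 3, 'bha': 4, 'ma': 5,
    'ya': 1, 'ra': 2, 'la': 3, 'va': 4, 'sha': 5,
    'Sa': 6, 'sa': 7, 'ha': 8,
}

def decode(syllables):
    """Decode a pre-tokenised syllable list; first syllable = units digit."""
    digits = [KATAPAYADI[s] for s in syllables]
    return sum(d * 10 ** i for i, d in enumerate(digits))

# 'ka', 'Ta', 'pa', 'ya' all carry the value 1 -- hence the name katapaya:
assert decode(['ka']) == decode(['Ta']) == decode(['pa']) == decode(['ya']) == 1
```

The many-to-one mapping is exactly what made memorable mnemonic "words" possible: several different syllable sequences decode to the same number.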
Let us return to the interesting point about the date of Haridatta. Very unusually for an Indian text, Sankara Narayana expresses his thanks to those who have gone before him and developed the ideas about which he is writing. This in itself is not so unusual but the surprise here is that Sankara Narayana claims to give the list in chronological order. His list is [Note that we have written Bhaskara I where Sankara Narayana simply wrote Bhaskara. The more famous Bhaskara II lived nearly 300 years after Sankara Narayana.]
The chronological order in the list agrees with the dates we have for the first four of these mathematicians. However, putting Haridatta after Govindasvami would seem an unlikely mistake for Sankara Narayana to make if Haridatta really did write his text in 684 since Sankara Narayana was himself a disciple of Govindasvami. If the dating given by Sankara Narayana is correct then katapayadi numeration had been invented only a few years before he wrote his text.
### References
1. G Ifrah, A universal history of numbers : From prehistory to the invention of the computer (London, 1998).
2. P K Majumdar, A rationale of Bhatta Govinda's method for solving the equation ax - c = by and a comparative study of the determination of 'Mati' as given by Bhaskara I and Bhatta Govinda, Indian J. Hist. Sci. 18 (2) (1983), 200-205.
Written by J J O'Connor and E F Robertson
Last Update November 2000
https://socratic.org/questions/how-do-you-use-the-discriminant-to-determine-the-nature-of-the-roots-for-x-2-2x-
# How do you use the discriminant to determine the nature of the roots for x^2 + 2x + 5 = 0?
Jun 19, 2015
As $\Delta = -16 < 0$, this equation has two complex roots.
#### Explanation:
${x}^{2} + 2 x + 5 = 0$
The equation is of the form $a x^2 + b x + c = 0$ where:
$a = 1 , b = 2 , c = 5$
The Discriminant is given by:
$\Delta = {b}^{2} - 4 \cdot a \cdot c$
$= {\left(2\right)}^{2} - \left(4 \cdot \left(1\right) \cdot 5\right)$
$= 4 - 20 = - 16$
When, $\Delta < 0$ there are two complex solutions.
Here, $\Delta = -16$, so this equation has two complex roots.
• Note :
The solutions are normally found using the formula
$x = \frac{- b \pm \sqrt{\Delta}}{2 \cdot a}$
Finding the solutions:
$x = \frac{-2 \pm \sqrt{-16}}{2 \cdot 1}$

$x = \frac{-2 + 4 i}{2} = -1 + 2 i \quad \text{and} \quad x = \frac{-2 - 4 i}{2} = -1 - 2 i$
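The whole computation can be checked in a few lines of Python, where cmath handles the square root of the negative discriminant (the function and variable names are our own):

```python
import cmath

def solve_quadratic(a, b, c):
    """Return the discriminant and both roots of a*x^2 + b*x + c = 0."""
    delta = b * b - 4 * a * c        # the discriminant
    root = cmath.sqrt(delta)         # imaginary when delta < 0
    return delta, (-b + root) / (2 * a), (-b - root) / (2 * a)

delta, x1, x2 = solve_quadratic(1, 2, 5)
# delta == -16 < 0, so the two roots are the complex pair -1 +/- 2i
```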
http://math.stackexchange.com/questions/136782/calculating-an-explicit-conditional-expectation
# Calculating an explicit conditional expectation
Suppose that $X$ and $Y$ are independent and $X\sim E(1)=\Gamma(1,1)$ and $Y\sim\Gamma(2,1)$. Then I am asked to find $E[X\mid X+Y]$ by using the following result:
Let $(U,V)$ be an $n+m$ dimensional random vector with density $(u,v)\mapsto f(u,v)$ with respect to $\lambda_{n+m}$. Put for $u\in\mathbb{R}^n$ and $v\in\mathbb{R}^m$ $$f_V(v)=\int_{\mathbb{R}^n} f(u,v)\lambda_n (du)\quad \text{and} \quad f_{U\mid V}(u\mid v)=\frac{f(u,v)}{f_V(v)}1_{\{0<f_V(v)<\infty\}}.$$ Then for every Borel function $\psi: \mathbb{R}^{n+m}\to\mathbb{R}$ with $E[|\psi(U,V)|]<\infty$ we have that $$E[\psi(U,V)\mid V]=\varphi(V) \quad\text{a.s.},$$ where $$\varphi(v)=\int_{\mathbb{R}^n} \psi(u,v) f_{U\mid V}(u\mid v)\lambda_n (d u).$$
So I was thinking that I would use this result with $U=X$ and $V=X+Y$ and $\psi(x,y)=x$. Then we are clearly in the scope of this result. Since $X$ and $Y$ are independent we also have that $X+Y\sim\Gamma(3,1)$, and hence $f_V$ is just a Gamma-density.
Now my question is, how do I go by finding the joint density $f$ in the easiest way? I have tried looking at probabilities $P(X\leq a,X+Y\leq b)$ but without any luck (it got very messy). An additional question is: Is it possible to obtain the conditional expectation in other ways than using this result?
To answer your specific question, use Jacobians; however, I would not do it like that. Since $Y = Y_1 + Y_2$ with $Y_i$ independent $\Gamma[1,1]$, you are asked for $E(X \mid X + Y_1 + Y_2)$ with $X, Y_i$ i.i.d. In general, by symmetry, when $Z_i$ are i.i.d., $E(Z_i \mid Z_1 + \dots + Z_n) = \frac {Z_1 + \dots + Z_n} n$ – mike Apr 25 '12 at 14:02
I actually solved this awhile ago, so here's the answer that uses the result stated in the original question. The density of $(X,Y)$ is given by $$f_{(X,Y)}(x,y)=ye^{-x}e^{-y},\quad x,y>0$$ due to independence. Let $T:\mathbb{R}^2 \to\mathbb{R}^2$ be the mapping given by $T(x,y)=(x,x+y)$. Then $T^{-1}(x,y)=(x,y-x)$ and the determinant of the Jacobian is $\det(T'(x,y))=1$ for all $x,y$. By a transformation theorem we have that $(X,Y+X)=T(X,Y)$ has density $$f_{(X,X+Y)}(x,y)=f_{(X,Y)}(T^{-1}(x,y))=f_{(X,Y)}(x,y-x)=e^{-x}(y-x)e^{-(y-x)},\quad x>0,\; y-x>0,$$ and hence $$f_{(X,X+Y)}(x,y)=(y-x)e^{-y},\quad y>x>0.$$ Using the fact that $X+Y\sim \Gamma(3,1)$ we get that $$f_{X\mid X+Y}(x\mid y)=\frac{f_{(X,X+Y)}(x,y)}{f_{X+Y}(y)}=\frac{(y-x)e^{-y}}{y^2 e^{-y}/\Gamma(3)}1_{\{0<f_{X+Y}(y)<\infty\}},\quad 0<x<y,$$ and hence $$f_{X\mid X+Y}(x\mid y)=2\frac{y-x}{y^2},\quad 0<x<y.$$ Let $\psi(x,y)=x$, then $E[|\psi(X,X+Y)|]=E[|X|]<\infty$, so we are in scope of the result above. Now $$E[X\mid X+Y]=E[\psi(X,X+Y)\mid X+Y]=\tilde{\psi}(X+Y)\quad a.s.,$$ where $$\tilde{\psi}(y)=\int_{\mathbb{R}} \psi(x,y)f_{X\mid X+Y}(x\mid y)\,\lambda(\mathrm dx)=\int_0^y 2x\frac{y-x}{y^2}\,\lambda(\mathrm dx)=\frac{1}{3}y.$$ Thus $$E[X\mid X+Y]=\tilde{\psi}(X+Y)=\frac{X+Y}{3},$$ which, fortunately, is the same result we get by using mike's method in the comment.
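The result $E[X\mid X+Y]=(X+Y)/3$ can also be sanity-checked by simulation: for independent gamma variables, the ratio $X/(X+Y)$ is Beta-distributed and independent of the sum, so its mean should be $1/3$. A quick Monte Carlo sketch:

```python
import random

random.seed(42)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.expovariate(1.0)        # X ~ Exp(1) = Gamma(1, 1)
    y = random.gammavariate(2.0, 1.0)  # Y ~ Gamma(2, 1)
    total += x / (x + y)

mean_ratio = total / N                 # should be close to 1/3
```

With this sample size, the standard error of the mean is about 5e-4, so agreement to two decimal places is expected.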
http://www.reference.com/browse/linked+together
Linked lists can be implemented in most languages. Languages such as Lisp and Scheme have the data structure built in, along with operations to access the linked list. Procedural or object-oriented languages such as C, C++, and Java typically rely on mutable references to create linked lists.
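A minimal illustration of the mutable-reference approach in Python (which, like the languages named above, builds linked lists out of object references rather than offering them as a built-in type):

```python
class Node:
    """One cell of a singly linked list: a value plus a mutable
    reference to the next cell (None marks the end of the list)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def traverse(head):
    """Walk the chain of references and collect the values in order."""
    values = []
    while head is not None:
        values.append(head.value)
        head = head.next
    return values

head = Node(1, Node(2, Node(3)))
head.next.next.next = Node(4)   # mutate a reference to extend the list
```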
## History
Linked lists were developed in 1955-56 by Allen Newell, Cliff Shaw and Herbert Simon at RAND Corporation as the primary data structure for their Information Processing Language. IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957-1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing".
The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958.
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list.
By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette, Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant developed by TSC for and marketed by Smoke Signal Broadcasting in California used doubly linked lists in the same manner.
The TSS operating system, developed by IBM for the System 360/370 machines, used a double linked list for their file system catalog. The directory structure was similar to Unix, where a directory could contain files and/or other directories and extend to any depth. A utility flea was created to fix file system problems after a crash, since modified portions of the file catalog were sometimes in memory when a crash occurred. Problems were detected by comparing the forward and backward links for consistency. If a forward link was corrupt, then if a backward link to the infected node was found, the forward link was set to the node with the backward link. A humorous comment in the source code where this utility was invoked stated "Everyone knows a flea collar gets rid of bugs in cats".
## Types of linked lists
### Singly-linked list
The simplest kind of linked list is a singly-linked list (or slist for short), which has one link per node. This link points to the next node in the list, or to a null value or empty list if it is the final node.
A singly-linked list containing two values: the value of the current node and a link to the next node
A singly linked list's node is divided into two parts: the first part holds or points to information about the node, and the second part holds the address of the next node. A singly linked list can be traversed in only one direction.
### Doubly-linked list
A more sophisticated kind of linked list is a doubly-linked list or two-way linked list. Each node has two links: one points to the previous node, or points to a null value or empty list if it is the first node; and one points to the next, or points to a null value or empty list if it is the final node.
A doubly-linked list containing three integer values: the value, the link forward to the next node, and the link backward to the previous node
In some very low level languages, XOR-linking offers a way to implement doubly-linked lists using a single word for both links, although the use of this technique is usually discouraged.
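XOR-linking relies on raw pointer arithmetic, so it cannot be translated directly into Python. The sketch below simulates the idea by XOR-ing indices into a node table instead of addresses; the class name, the index-0 null convention, and the single `link` field per node are illustrative assumptions, not a standard API:

```python
class XorList:
    """XOR-linked list simulated with array indices: each node stores a
    single field link[i] = prev_index XOR next_index. Index 0 is reserved
    as the 'null' sentinel, so XOR with 0 leaves an index unchanged."""

    def __init__(self):
        self.data = [None]   # slot 0 is the null sentinel
        self.link = [0]      # link[i] = index_of_prev XOR index_of_next
        self.head = 0
        self.tail = 0

    def append(self, value):
        idx = len(self.data)
        self.data.append(value)
        self.link.append(self.tail)      # prev = old tail, next = null (0)
        if self.tail:
            self.link[self.tail] ^= idx  # fold idx into the old tail's link
        else:
            self.head = idx              # list was empty
        self.tail = idx

    def traverse(self):
        out, prev, cur = [], 0, self.head
        while cur:
            out.append(self.data[cur])
            # XOR-ing the combined link with the index we came from
            # recovers the index of the next node.
            prev, cur = cur, self.link[cur] ^ prev
        return out

    def traverse_back(self):
        out, prev, cur = [], 0, self.tail
        while cur:
            out.append(self.data[cur])
            prev, cur = cur, self.link[cur] ^ prev
        return out
```

The same recurrence walks the list in either direction, which is exactly what the single XOR-ed field buys over a plain singly linked list.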
### Circularly-linked list
In a circularly-linked list, the first and final nodes are linked together. This can be done for both singly and doubly linked lists. To traverse a circular linked list, you begin at any node and follow the list in either direction until you return to the original node. Viewed another way, circularly-linked lists can be seen as having no beginning or end. This type of list is most useful for managing buffers for data ingest, and in cases where you have one object in a list and wish to iterate through all other objects in the list in no particular order.
The pointer pointing to the whole list may be called the access pointer.
A circularly-linked list containing three integer values
### Sentinel nodes
Linked lists sometimes have a special dummy or sentinel node at the beginning and/or at the end of the list, which is not used to store data. Its purpose is to simplify or speed up some operations, by ensuring that every data node always has a previous and/or next node, and that every list (even one that contains no data elements) always has a "first" and "last" node. Lisp has such a design - the special value nil is used to mark the end of a 'proper' singly-linked list, or chain of cons cells as they are called. A list does not have to end in nil, but a list that did not would be termed 'improper'.
## Applications of linked lists
Linked lists are used as a building block for many other data structures, such as stacks, queues and their variations.
The "data" field of a node can be another linked list. By this device, one can construct many linked data structures with lists; this practice originated in the Lisp programming language, where linked lists are a primary (though by no means the only) data structure, and is now a common feature of the functional programming style.
Sometimes, linked lists are used to implement associative arrays, and are in this context called association lists. There is very little good to be said about this use of linked lists; they are easily outperformed by other data structures such as self-balancing binary search trees even on small data sets (see the discussion in associative array). However, sometimes a linked list is dynamically created out of a subset of nodes in such a tree, and used to more efficiently traverse that set.
## Tradeoffs
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures. In general, if you have a dynamic collection, where elements are frequently being added and deleted, and the location of new elements added to the list is significant, then benefits of a linked list increase.
| Operation | Array | Linked list |
|---|---|---|
| Indexing | O(1) | O(n) |
| Inserting / Deleting at end | O(1) | O(1) or O(n) |
| Inserting / Deleting in middle (with iterator) | O(n) | O(1) |
| Persistent | No | Singly yes |
Linked lists have several advantages over arrays. Elements can be inserted into linked lists indefinitely, while an array will eventually either fill up or need to be resized, an expensive operation that may not even be possible if memory is fragmented. Similarly, an array from which many elements are removed may become wastefully empty or need to be made smaller.
Further memory savings can be achieved, in certain cases, by sharing the same "tail" of elements among two or more lists — that is, the lists end in the same sequence of elements. In this way, one can add new elements to the front of the list while keeping a reference to both the new and the old versions — a simple example of a persistent data structure.
On the other hand, arrays allow random access, while linked lists allow only sequential access to elements. Singly-linked lists, in fact, can only be traversed in one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays is also faster than on linked lists on many machines due to locality of reference and data caches. Linked lists receive almost no benefit from the cache.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values. It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
A number of linked list variants exist that aim to ameliorate some of the above problems. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.
A good example that highlights the pros and cons of using arrays vs. linked lists is by implementing a program that resolves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, you count around the circle n times. Once you reach the nth person, take them out of the circle and have the members close the circle. Then count around the circle the same n times and repeat the process, until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. an array, because if you view the people as connected nodes in a circular linked list then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to recurse through the list until it finds that person. An array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array.
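The circular-list side of this comparison can be sketched directly. The helper below is an illustrative sketch, not a canonical formulation: it keeps a trailing `prev` reference so that once the nth person is reached, unlinking them is a single O(1) pointer rewrite, while finding them still costs a walk around the circle:

```python
def josephus(people, n):
    """Josephus election on a circular singly linked list: count n around
    the circle, remove the nth person, repeat until one remains."""
    class Node:
        __slots__ = ("name", "next")
        def __init__(self, name):
            self.name = name
            self.next = None

    # Build the circle: each node points to the next, last wraps to first.
    nodes = [Node(p) for p in people]
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % len(nodes)]

    # 'prev' trails the person about to be counted, so removal is O(1).
    prev = nodes[-1]
    while prev.next is not prev:
        for _ in range(n - 1):          # advance to just before the nth person
            prev = prev.next
        prev.next = prev.next.next      # unlink the nth person
    return prev.name
```

An array-based version would find the nth person by index arithmetic instead of the inner loop, but would pay O(n) shifting cost on every removal, which is the tradeoff the text describes.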
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.
Double-linked lists require more space per node (unless one uses xor-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow sequential access to the list in both directions. In particular, one can insert or delete a node in a constant number of operations given only that node's address. (Compared with singly-linked lists, which require the previous node's address in order to correctly insert or delete.) Some algorithms require access in both directions. On the other hand, they do not allow tail-sharing, and cannot be used as persistent data structures.
Circular linked lists are most useful for describing naturally circular structures, and have the advantage of regular structure and being able to traverse the list starting at any point. They also allow quick access to the first and last records through a single pointer (the address of the last element). Their main disadvantage is the complexity of iteration, which has subtle special cases.
Doubly linked lists can be structured without using a front and NULL pointer to the ends of the list. Instead, a node of object type T set with specified default values is used to indicate the "beginning" of the list. This node is known as a Sentinel node and is commonly referred to as a "header" node. Common searching and sorting algorithms are made less complicated through the use of a header node, as every element now points to another element, and never to NULL. The header node, like any other, contains a "next" pointer that points to what is considered by the linked list to be the first element. It also contains a "previous" pointer which points to the last element in the linked list. In this way, a doubly linked list structured around a Sentinel Node is circular.
The Sentinel node is defined as another node in a doubly linked list would be, but the allocation of a front pointer is unnecessary as the next and previous pointers of the Sentinel node will point to itself. This is defined in the default constructor of the list.
` next := this`
` prev := this`
If the previous and next pointers point to the Sentinel node, the list is considered empty. Otherwise, if one or more elements are added, both pointers will point to other nodes, and the list will contain those elements.
Sentinel nodes may simplify certain list operations, by ensuring that the next and/or previous nodes exist for every element. However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations. To avoid the extra space requirement, the sentinel nodes can often be reused as references to the first and/or last node of the list.
The Sentinel node eliminates the need to keep track of a pointer to the beginning of the list, and also eliminates any errors that could result in the deletion of the first pointer, or any accidental relocation.
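A rough Python sketch of this header-node pattern (class and method names here are illustrative, not from any particular library). The payoff is visible in `remove`: because every real node always has live neighbors, no null checks are needed anywhere:

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.prev = self   # a fresh node points to itself, as the
        self.next = self   # sentinel's default constructor requires

class SentinelList:
    """Circular doubly linked list built around a sentinel 'header' node."""

    def __init__(self):
        self.sentinel = Node()          # next == prev == itself: empty list

    def insert_after(self, node, value):
        new = Node(value)
        new.prev, new.next = node, node.next
        node.next.prev = new
        node.next = new
        return new

    def push_front(self, value):
        return self.insert_after(self.sentinel, value)

    def push_back(self, value):        # sentinel.prev is the last element
        return self.insert_after(self.sentinel.prev, value)

    def remove(self, node):
        node.prev.next = node.next      # no None checks needed: neighbors
        node.next.prev = node.prev      # always exist thanks to the sentinel

    def to_list(self):
        out, cur = [], self.sentinel.next
        while cur is not self.sentinel:
            out.append(cur.data)
            cur = cur.next
        return out
```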
## Linked list operations
When manipulating linked lists in-place, care must be taken not to use values that you have invalidated in previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout we will use null to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways.
### Singly-linked lists
Our node data structure will have two fields. We also keep a variable firstNode which always points to the first node in the list, or is null for an empty list.
` record Node {`
` data // The data being stored in the node`
` next // A reference to the next node, null for last node`
}
` record List {`
` Node firstNode // points to first node of list; null for empty list`
}
Traversal of a singly-linked list is simple, beginning at the first node and following each next link until we come to the end:
` node := list.firstNode`
` while node not null {`
` (do something with node.data)`
` node := node.next`
}
The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done; instead, you have to locate it while keeping track of the previous node.
` function insertAfter(Node node, Node newNode) { // insert newNode after node`
` newNode.next := node.next`
` node.next := newNode`
}
Inserting at the beginning of the list requires a separate function. This requires updating firstNode.
` function insertBeginning(List list, Node newNode) { // insert node before current first node`
` newNode.next := list.firstNode`
` list.firstNode := newNode`
}
Similarly, we have functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element.
` function removeAfter(Node node) { // remove node past this one`
` obsoleteNode := node.next`
` node.next := node.next.next`
` destroy obsoleteNode`
}
` function removeBeginning(List list) { // remove first node`
` obsoleteNode := list.firstNode`
` list.firstNode := list.firstNode.next // point past deleted node`
` destroy obsoleteNode`
}
Notice that removeBeginning() sets list.firstNode to null when removing the last node in the list.
Since we can't iterate backwards, efficient "insertBefore" or "removeBefore" operations are not possible.
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because we must traverse the entire first list in order to find the tail, and then append the second list to this. Thus, if two linearly-linked lists are each of length $n$, list appending has asymptotic time complexity of $O\left(n\right)$. In the Lisp family of languages, list appending is provided by the `append` procedure.
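The tail-reference fix mentioned above can be sketched as follows (a minimal illustration; the class and field names are made up). Keeping a `last` pointer makes both appending an element and splicing on a whole second list O(1) operations:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class SList:
    """Singly linked list that keeps a tail reference for O(1) appends."""

    def __init__(self):
        self.first = None
        self.last = None

    def append(self, data):
        node = Node(data)
        if self.last is None:           # empty list: node becomes the head
            self.first = node
        else:
            self.last.next = node       # O(1): no traversal to find the tail
        self.last = node

    def extend(self, other):
        """Splice another SList onto the end in O(1)."""
        if other.first is None:
            return
        if self.last is None:
            self.first = other.first
        else:
            self.last.next = other.first
        self.last = other.last

    def to_list(self):
        out, cur = [], self.first
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out
```

Note that after `extend`, the two lists share nodes, so the spliced-in list should no longer be used independently.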
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both insertBeginning() and removeBeginning() unnecessary. In this case, the first useful data in the list will be found at list.firstNode.next.
### Doubly-linked lists
With doubly-linked lists there are even more pointers to update, but also less information is needed, since we can use backwards pointers to observe preceding elements in the list. This enables new operations, and eliminates special-case functions. We will add a prev field to our nodes, pointing to the previous element, and a lastNode field to our list structure which always points to the last node in the list. Both list.firstNode and list.lastNode are null for an empty list.
` record Node {`
` data // The data being stored in the node`
` next // A reference to the next node; null for last node`
` prev // A reference to the previous node; null for first node`
}
` record List {`
` Node firstNode // points to first node of list; null for empty list`
` Node lastNode // points to last node of list; null for empty list`
}
Iterating through a doubly linked list can be done in either direction. In fact, direction can change many times, if desired.
Forwards
` node := list.firstNode`
` while node ≠ null`
` (do something with node.data)`
` node := node.next`
Backwards
` node := list.lastNode`
` while node ≠ null`
` (do something with node.data)`
` node := node.prev`
These symmetric functions add a node either after or before a given node, with the diagram demonstrating after:
` function insertAfter(List list, Node node, Node newNode)`
` newNode.prev := node`
` newNode.next := node.next`
` if node.next = null`
` list.lastNode := newNode`
` else`
` node.next.prev := newNode`
` node.next := newNode`
` function insertBefore(List list, Node node, Node newNode)`
` newNode.prev := node.prev`
` newNode.next := node`
` if node.prev is null`
` list.firstNode := newNode`
` else`
` node.prev.next := newNode`
` node.prev := newNode`
We also need a function to insert a node at the beginning of a possibly-empty list:
` function insertBeginning(List list, Node newNode)`
` if list.firstNode = null`
` list.firstNode := newNode`
` list.lastNode := newNode`
` newNode.prev := null`
` newNode.next := null`
` else`
` insertBefore(list, list.firstNode, newNode)`
A symmetric function inserts at the end:
` function insertEnd(List list, Node newNode)`
` if list.lastNode = null`
` insertBeginning(list, newNode)`
` else`
` insertAfter(list, list.lastNode, newNode)`
Removing a node is easier, only requiring care with the firstNode and lastNode:
` function remove(List list, Node node)`
` if node.prev = null`
` list.firstNode := node.next`
` else`
` node.prev.next := node.next`
` if node.next = null`
` list.lastNode := node.prev`
` else`
` node.next.prev := node.prev`
` destroy node`
One subtle consequence of this procedure is that deleting the last element of a list sets both firstNode and lastNode to null, and so it handles removing the last node from a one-element list correctly. Notice that we also don't need separate "removeBefore" or "removeAfter" methods, because in a doubly-linked list we can just use "remove(node.prev)" or "remove(node.next)" where these are valid.
### Circularly-linked lists
Circularly-linked lists can be either singly or doubly linked. In a circularly linked list, all nodes are linked in a continuous circle, without using null. For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The next node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time.
Both types of circularly-linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing firstNode and lastNode, although if the list may be empty we need a special representation for the empty list, such as a lastNode variable which points to some node in the list or is null if it's empty; we use such a lastNode here. This representation significantly simplifies adding and removing nodes with a non-empty list, but empty lists are then a special case.
Assuming that someNode is some node in a non-empty list, this code iterates through that list starting with someNode (any node will do):
Forwards
` node := someNode`
` do`
` do something with node.value`
` node := node.next`
` while node ≠ someNode`
Backwards
` node := someNode`
` do`
` do something with node.value`
` node := node.prev`
` while node ≠ someNode`
Notice the postponing of the test to the end of the loop. This is important for the case where the list contains only the single node someNode.
This simple function inserts a node into a doubly-linked circularly-linked list after a given element:
` function insertAfter(Node node, Node newNode)`
` newNode.next := node.next`
` newNode.prev := node`
` node.next.prev := newNode`
` node.next := newNode`
To do an "insertBefore", we can simply "insertAfter(node.prev, newNode)". Inserting an element in a possibly empty list requires a special function:
` function insertEnd(List list, Node node)`
` if list.lastNode = null`
` node.prev := node`
` node.next := node`
` else`
` insertAfter(list.lastNode, node)`
` list.lastNode := node`
To insert at the beginning we simply "insertAfter(list.lastNode, node)". Finally, removing a node must deal with the case where the list empties:
` function remove(List list, Node node)`
` if node.next = node`
` list.lastNode := null`
` else`
` node.next.prev := node.prev`
` node.prev.next := node.next`
` if node = list.lastNode`
` list.lastNode := node.prev;`
` destroy node`
As in doubly-linked lists, "removeAfter" and "removeBefore" can be implemented with "remove(list, node.prev)" and "remove(list, node.next)".
## Linked lists using arrays of nodes
Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are not supported as well, parallel arrays can often be used instead.
As an example, consider the following linked list record that uses arrays instead of pointers:
` record Entry {`
` integer next; // index of next entry in array`
` integer prev; // previous entry (if double-linked)`
` string name;`
` real balance;`
}
By creating an array of these structures, and an integer variable to store the index of the first element, a linked list can be built:
`integer listHead;`
`Entry Records[1000];`
Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example:
| Index | Next | Prev | Name | Balance |
|---|---|---|---|---|
| 0 | 1 | 4 | Jones, John | 123.45 |
| 1 | -1 | 0 | Smith, Joseph | 234.56 |
| 2 | 4 | -1 | Adams, Adam | 0.00 |
| 3 | | | Ignore, Ignatius | 999.99 |
| 4 | 0 | 2 | Another, Anita | 876.54 |
| 5 | | | | |
| 6 | | | | |
| 7 | | | | |
In the above example, `ListHead` would be set to 2, the location of the first entry in the list. Notice that entries 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a `ListFree` integer variable, a free list could be created to keep track of what cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list.
The following code would traverse the list and display names and account balance:
`i := listHead`
`while i >= 0 { // loop through the list`
`    print i, Records[i].name, Records[i].balance // print entry`
`    i := Records[i].next`
`}`
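A minimal Python sketch of the free-list bookkeeping described above (the field names are illustrative). Free cells are chained through the same `next` array, so allocating a cell is just popping the head of the free list:

```python
NULL = -1  # index used as the end-of-list marker

class ArrayList:
    """Array-backed linked list: links are array indices, and unused cells
    are chained together into a free list threaded through the same array."""

    def __init__(self, capacity):
        self.name = [None] * capacity
        # Initially every cell is free: cell i's next is i+1, last is NULL.
        self.next = list(range(1, capacity)) + [NULL]
        self.head = NULL    # start of the data list (listHead)
        self.free = 0       # start of the free list (listFree)

    def push_front(self, name):
        if self.free == NULL:
            raise MemoryError("array full")
        cell = self.free
        self.free = self.next[cell]     # pop a cell off the free list
        self.name[cell] = name
        self.next[cell] = self.head     # link it in front of the data list
        self.head = cell
        return cell

    def to_list(self):
        out, i = [], self.head
        while i != NULL:
            out.append(self.name[i])
            i = self.next[i]
        return out
```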
When faced with a choice, the advantages of this approach include:
• The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and directly serialized for storage on disk or transfer over a network.
• Especially for a small list, array indexes can occupy significantly less space than a full pointer on many architectures.
• Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging them, although this can also be done in a general store.
• Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated; almost no allocation overhead is incurred per node in this approach.
• Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since dynamic memory allocation typically requires a search for a free memory block of the desired size.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues:
• It increases the complexity of the implementation.
• Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list node in a large, general memory pool may be easier.
• Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of constant time (although it's still an amortized constant).
• Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many nodes are freed.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
## Language support
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose.
In languages that support Abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records. Here is a complete example in C:
` #include <stdio.h>  /* for printf */`
` #include <stdlib.h> /* for malloc */`
` `
` typedef struct ns {`
`     int data;`
`     struct ns *next; /* pointer to next element in list */`
` } node;`
` `
` node *list_add(node **p, int i) {`
`     /* some compilers don't require a cast of return value for malloc */`
`     node *n = (node *)malloc(sizeof(node));`
`     if (n == NULL)`
`         return NULL;`
`     n->next = *p; /* the previous element (*p) now becomes the "next" element */`
`     *p = n;       /* add new empty element to the front (head) of the list */`
`     n->data = i;`
`     return *p;`
` }`
` `
` void list_remove(node **p) { /* remove head */`
`     if (*p != NULL) {`
`         node *n = *p;`
`         *p = (*p)->next;`
`         free(n);`
`     }`
` }`
` `
` node **list_search(node **n, int i) {`
`     while (*n != NULL) {`
`         if ((*n)->data == i)`
`             return n;`
`         n = &(*n)->next;`
`     }`
`     return NULL;`
` }`
` `
` void list_print(node *n) {`
`     if (n == NULL)`
`         printf("list is empty\n");`
`     while (n != NULL) {`
`         printf("print %p %p %d\n", (void *)n, (void *)n->next, n->data);`
`         n = n->next;`
`     }`
` }`
` `
` int main(void) {`
`     node *n = NULL;`
`     list_add(&n, 0);                 /* list: 0 */`
`     list_add(&n, 1);                 /* list: 1 0 */`
`     list_add(&n, 2);                 /* list: 2 1 0 */`
`     list_add(&n, 3);                 /* list: 3 2 1 0 */`
`     list_add(&n, 4);                 /* list: 4 3 2 1 0 */`
`     list_print(n);`
`     list_remove(&n);                 /* remove first (4) */`
`     list_remove(&n->next);           /* remove new second (2) */`
`     list_remove(list_search(&n, 1)); /* remove cell containing 1 (first) */`
`     list_remove(&n->next);           /* remove second to last node (0) */`
`     list_remove(&n);                 /* remove last (3) */`
`     list_print(n);`
`     return 0;`
` }`
## Internal and external storage
When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called internal storage, or merely to store a reference to the data, called external storage. Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes).
External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. Although with internal storage the same data can be placed in multiple lists by including multiple next references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data.
In general, if a set of data structures needs to be included in multiple linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine.
Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
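A rough sketch of this common-initial-fields idiom (the message types, field names, and classes here are invented for illustration). The queue routines only ever touch the shared header fields, while the message type field lets the caller dispatch to type-specific handlers:

```python
class MessageHeader:
    """Fields shared by every message type, placed 'at the top' of each
    structure so generic queue routines can handle any message."""
    def __init__(self, msg_type):
        self.next = None        # generic linked-list field
        self.msg_type = msg_type

class LoginMessage(MessageHeader):
    def __init__(self, user):
        super().__init__("login")
        self.user = user        # type-specific data

class DataMessage(MessageHeader):
    def __init__(self, payload):
        super().__init__("data")
        self.payload = payload  # type-specific data

class MessageQueue:
    """Generic FIFO queue: only reads and writes header fields."""
    def __init__(self):
        self.head = self.tail = None

    def put(self, msg):
        if self.tail is None:
            self.head = msg
        else:
            self.tail.next = msg
        self.tail = msg

    def get(self):
        msg = self.head
        if msg is not None:
            self.head = msg.next
            if self.head is None:
                self.tail = None
        return msg
```

A processing loop would call `get()` and switch on `msg_type` to invoke the handler for each specific message, exactly as the text describes.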
### Example of internal and external storage
Suppose you wanted to create a linked list of families and their members. Using internal storage, the structure might look like the following:
` record member { // member of a family`
` member next`
` string firstName`
` integer age`
}
` record family { // the family itself`
` family next`
` string lastName`
` string address`
` member members // head of list of members of this family`
}
To print a complete list of families and their members using internal storage, we could write:
` aFamily := Families // start at head of families list`
` while aFamily ≠ null { // loop through list of families`
` print information about family`
` aMember := aFamily.members // get head of list of this family's members`
` while aMember ≠ null { // loop through list of members`
` print information about member`
` aMember := aMember.next`
}
` aFamily := aFamily.next`
}
Using external storage, we would create the following structures:
` record node { // generic link structure`
` node next`
` pointer data // generic pointer for data at node`
}
` record member { // structure for family member`
` string firstName`
` integer age`
}
` record family { // structure for family`
` string lastName`
` string address`
` node members // head of list of members of this family`
}
To print a complete list of families and their members using external storage, we could write:
```
famNode := Families // start at head of families list
while famNode ≠ null { // loop through list of families
    aFamily = (family)famNode.data // extract family from node
    print information about family
    memNode := aFamily.members // get list of family members
    while memNode ≠ null { // loop through list of members
        aMember := (member)memNode.data // extract member from node
        print information about member
        memNode := memNode.next
    }
    famNode := famNode.next
}
```
Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure (node), and this language does not have parametric types.
As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary.
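A sketch of how external storage lifts that restriction (a Python stand-in for the pseudocode above; in Python the cast disappears because the node's data reference is untyped, which plays the role of the generic pointer). The same member record can sit in the node lists of several families:

```python
class Node:
    """Generic link structure: a next reference plus an untyped data reference."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class Member:
    def __init__(self, first_name, age):
        self.first_name = first_name
        self.age = age

class Family:
    def __init__(self, last_name):
        self.last_name = last_name
        self.members = None  # head of this family's list of member nodes

    def add_member(self, member):
        # prepend a new node; the member record itself is stored externally
        self.members = Node(member, self.members)

def member_names(family):
    names = []
    node = family.members
    while node is not None:
        names.append(node.data.first_name)  # extract the record from the node
        node = node.next
    return names
```

Because each family list stores only nodes pointing at member records, one Member object can appear in any number of families, decided at run time.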
## Speeding up search
Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again.
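A minimal sketch of the move-to-front heuristic on a singly linked list (Python, illustrative names):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find_move_to_front(head, target):
    """Linear search; on a hit, unlink the node and splice it in at the head.
    Returns the (possibly new) head; the head is unchanged on a miss."""
    prev, cur = None, head
    while cur is not None and cur.value != target:
        prev, cur = cur, cur.next
    if cur is None or prev is None:  # missing, or already at the front
        return head
    prev.next = cur.next             # unlink the found node
    cur.next = head                  # splice it in before the old head
    return cur

def values(head):
    """Collect list values in order (for inspection)."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```

After a successful search the found element sits at the head, so repeating the same lookup is O(1).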
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red-black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again).
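A sketch of such an external index (Python; here a hash table from value to node, so locating a node becomes O(1), at the cost of keeping the index in sync with insertions and removals):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def build_indexed_list(values):
    """Build a singly linked list plus a hash index over its nodes."""
    head = None
    index = {}
    for v in reversed(values):
        head = Node(v, head)
        index[v] = head  # this entry must be updated if the node is removed
    return head, index
```

The index lets a caller jump straight to a node and then follow its links, instead of scanning from the head.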
## Related data structures
Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported.
The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list.
A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node.
An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list.
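An unrolled list can be sketched like this (Python; the node capacity of 4 is an arbitrary choice for illustration):

```python
class UnrolledNode:
    """Each node holds up to CAPACITY consecutive elements in one array."""
    CAPACITY = 4

    def __init__(self):
        self.values = []
        self.next = None

def append(head, value):
    """Append to the last node, starting a fresh node when it is full."""
    node = head
    while node.next is not None:
        node = node.next
    if len(node.values) == UnrolledNode.CAPACITY:
        node.next = UnrolledNode()
        node = node.next
    node.values.append(value)

def to_list(head):
    """Flatten the unrolled list back into a plain Python list."""
    out = []
    node = head
    while node is not None:
        out.extend(node.values)
        node = node.next
    return out
```

Ten elements at capacity 4 occupy only three nodes, so there are far fewer next pointers (per-element metadata) than in a plain list.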
A hash table may use linked lists to store the chains of items that hash to the same position in the hash table.
A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index.
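For an array-backed binary heap the "links" are computed from indexes rather than stored; the standard index arithmetic looks like this:

```python
def parent(i):
    """Index of the parent of heap element i."""
    return (i - 1) // 2

def left(i):
    """Index of the left child of heap element i."""
    return 2 * i + 1

def right(i):
    """Index of the right child of heap element i."""
    return 2 * i + 2
```

These three functions replace the next/previous references a linked structure would store explicitly.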
https://socratic.org/questions/how-do-you-find-region-bounded-by-circle-r-3cos-and-the-cardioid-r-1-cos
# How do you find the region bounded by the circle r = 3cosΘ and the cardioid r = 1 + cosΘ?
Oct 25, 2015
$A = \frac{5 \pi}{4}$
#### Explanation:
Let's find the intersection of the curves in the first quadrant:
$3 \cos \theta = 1 + \cos \theta \implies 2 \cos \theta = 1 \implies \cos \theta = \frac{1}{2} \implies \theta = \frac{\pi}{3}$
The region is symmetric about the polar axis, so we can find the area of half of it and double the result:
$A = 2 \left({\int}_{0}^{\frac{\pi}{3}} d \theta {\int}_{0}^{1 + \cos \theta} r \mathrm{dr} + {\int}_{\frac{\pi}{3}}^{\frac{\pi}{2}} d \theta {\int}_{0}^{3 \cos \theta} r \mathrm{dr}\right)$
${A}_{1} = \frac{1}{2} {\int}_{0}^{\frac{\pi}{3}} d \theta {r}^{2} {|}_{0}^{1 + \cos \theta} = \frac{1}{2} {\int}_{0}^{\frac{\pi}{3}} d \theta \left(1 + 2 \cos \theta + {\cos}^{2} \theta\right)$
${A}_{1} = \frac{1}{2} {\int}_{0}^{\frac{\pi}{3}} d \theta \left(1 + 2 \cos \theta + \frac{1 + \cos 2 \theta}{2}\right)$
${A}_{1} = \frac{1}{2} \left(\frac{3 \theta}{2} + 2 \sin \theta + \frac{1}{4} \sin 2 \theta\right) {|}_{0}^{\frac{\pi}{3}} = \frac{\pi}{4} + \frac{9 \sqrt{3}}{16}$
${A}_{2} = \frac{1}{2} {\int}_{\frac{\pi}{3}}^{\frac{\pi}{2}} d \theta {r}^{2} {|}_{0}^{3 \cos \theta} =$
$\frac{1}{2} {\int}_{\frac{\pi}{3}}^{\frac{\pi}{2}} d \theta \left(9 {\cos}^{2} \theta\right) = \frac{9}{4} {\int}_{\frac{\pi}{3}}^{\frac{\pi}{2}} d \theta \left(1 + \cos 2 \theta\right) =$
$= \frac{9}{4} \left(\theta + \frac{1}{2} \sin 2 \theta\right) {|}_{\frac{\pi}{3}}^{\frac{\pi}{2}} = \frac{3 \pi}{8} - \frac{9 \sqrt{3}}{16}$
$A = 2 \left(\frac{\pi}{4} + \frac{9 \sqrt{3}}{16} + \frac{3 \pi}{8} - \frac{9 \sqrt{3}}{16}\right) = 2 \frac{5 \pi}{8} = \frac{5 \pi}{4}$
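As a sanity check (not part of the original answer), the same area can be computed numerically with the midpoint rule and compared against $\frac{5\pi}{4}$:

```python
import math

def polar_area(r, a, b, n=50000):
    """Midpoint-rule approximation of (1/2) * integral of r(theta)^2 on [a, b]."""
    h = (b - a) / n
    return sum(0.5 * r(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

# half of the symmetric region: cardioid arc on [0, pi/3], circle arc on [pi/3, pi/2]
half = (polar_area(lambda t: 1 + math.cos(t), 0, math.pi / 3)
        + polar_area(lambda t: 3 * math.cos(t), math.pi / 3, math.pi / 2))
area = 2 * half  # should be close to 5*pi/4 ~ 3.927
```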
http://math.stackexchange.com/questions/145035/the-sum-of-power-law-distributed-random-variables
# The sum of power-law-distributed random variables.
Let $X_i$ be a power-law-distributed random variable with density $f_i(x)=C_i x^{-k_i}$, where $1<k_i\le3$. What is the exponent $k$ of the variable $$X=\sum_{i=1}^N X_i \ ?$$
My doubt comes from the fact that $X$, as a sum of i.i.d. variables, has to tend to an $\alpha$-stable distribution. The exponent $\alpha$ of a generic $\alpha$-stable distribution can lie only in the range $(0,2]$, which implies $1<k\le 3$. But if we use the rule of Fourier transforms for the sum of i.i.d. random variables (namely, that the Fourier transform of the convolution is the product of the Fourier transforms), we can apparently get an arbitrarily big exponent $k$ (can't we?). So at some point my reasoning is wrong. I guess that the mistake is in the convolution of the power-law distributions.
Assume for simplicity that $\bar c_i/x^{k_i-1}\leqslant\mathrm P(X_i\geqslant x)\leqslant c_i/x^{k_i-1}$ when $x\to\infty$ and that each random variable $X_i$ is almost surely nonnegative. Then, for every $1\leqslant j\leqslant N$, $$[X_j\geqslant x]\subseteq[X\geqslant x]\subseteq\bigcup_{i=1}^N[X_i\geqslant x/N].$$ This implies that $$\max\limits_{i=1}^N\mathrm P(X_i\geqslant x)\leqslant\mathrm P(X\geqslant x)\leqslant\sum_{i=1}^N\mathrm P(X_i\geqslant x/N),$$ hence $X$ has exponent $k=\min\limits_{i=1}^Nk_i$ in the sense that, when $x\to+\infty$, $$\bar C_N/x^{k-1}\leqslant\mathrm P(X\geqslant x)\leqslant C_N/x^{k-1}.$$ The result you mention about $\alpha$-stable distribution concerns the regime where $N\to\infty$ and one rescales $X$, hence the constants $C_N$ and $\bar C_N$ come into play and modify the exponent $k$.
The LHS is at least $\bar c_i/x^{k_i-1}$ for each $i$, hence at least $\bar C_N/x^{k-1}$ where $k$ may be any $k_i$, for example their minimum, and $\bar C_N$ is $\bar c_i$ for this $k_i$. // Each term in the RHS is at most $c_i/(x/N)^{k_i-1}\leqslant c_iN^{k_i-1}/x^{k-1}$ for $x\geqslant1$. Hence the whole RHS is at most $C_N/x^k$ where $C_N$ is the sum over $i$ of $c_iN^{k_i-1}$. – Did May 15 '12 at 8:39
http://eprint.iacr.org/2008/040
## Cryptology ePrint Archive: Report 2008/040
Efficient and Generalized Pairing Computation on Abelian Varieties
Eunjeong Lee, Hyang-Sook Lee, and Cheol-Min Park
Abstract: In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the R-ate pairing. This pairing is a generalization of the Ate and Ate_i pairings, and it also improves the efficiency of the pairing computation. Using the R-ate pairing, the loop length in Miller's algorithm can be as small as ${\rm log}(r^{1 / \phi(k)})$ for some pairing-friendly elliptic curves which have not previously reached this lower bound. We therefore obtain savings of 29% to 69% in overall cost compared to the Ate_i pairing. On supersingular hyperelliptic curves of genus 2, we show that this approach makes the loop length in Miller's algorithm shorter than that of the Ate pairing.
Category / Keywords: public-key cryptography / pairing, elliptic curves, hyperelliptic curves, pairing based cryptography, Tate pairing
https://math.stackexchange.com/questions/519803/what-is-the-definition-of-sum-limits-0-leq-i-leq-m-text-0-leq-j-leq-na-i
# What is the definition of $\sum\limits_{0\leq i\leq m,\text{ }0\leq j\leq n}a_{ij}$
I understand the concept of double summations, at least intuitively, but I'm trying to understand it formally. So, to begin with, I have a question:
Is this double summation equality true by definition:$\sum\limits_{0\leq i\leq m}\sum\limits_{0\leq j\leq n}a_{ij}=\sum\limits_{0\leq i\leq m,\text{ }0\leq j\leq n}a_{ij}$ ?
If not, how can I prove it?
This is what I have till now:
1.- I understand that the symbol $\sum$ is defined for a sequence, in this case a finite sequence, of the form $\{a_1,a_2,...,a_n\}$. Then $\sum\limits_{0\leq i\leq n} a_i$ is defined recursively having $\sum\limits_{0\leq i\leq 0} a_i=a_0$ and $\sum\limits_{0\leq i\leq k} a_i=\sum\limits_{0\leq i\leq k-1} a_i+a_k$ for $k>0$.
2.- I understand that $\sum\limits_{0\leq i\leq m}\sum\limits_{0\leq j\leq n}a_{ij}=\sum\limits_{0\leq i\leq m}\left(\sum\limits_{0\leq j\leq n}a_{ij}\right)$, meaning that $\sum\limits_{0\leq i\leq m}\sum\limits_{0\leq j\leq n}a_{ij}=\sum\limits_{0\leq i\leq m}b_i$, where the sequence $\{b_0,b_1,...,b_m\}$ is defined by $b_i=\sum\limits_{0\leq j\leq n}a_{ij}$.
3.- I understand the meaning of the expression $\sum\limits_{0\leq i\leq m,\text{ }0\leq j\leq n}a_{ij}$. Here the problem is that I don't have a formal definition.
4.- From a general stand point if we use $f$ instead of $\sum$ and instead of a sequence we use an index set we can write this in the form $\mathop{f}_{i\in I}\mathop{f}_{j\in J}a_{ij}=\mathop f_{i\in I,\text{ } j\in J}a_{ij}$, and then again it's not very clear on how to make an interpretation for the right hand side. In my attempt I'd say that $f$ should be a function of the form $f:P(A)\longrightarrow A$, leting $A$ be a set that contains every $a_{ij}$, such that $\mathop{f}_{i\in I}\mathop{f}_{j\in J}a_{ij}=\mathop{f}_{i\in I}(\mathop{f}_{j\in J}a_{ij})$ and $\mathop f_{i\in I,\text{ } j\in J}a_{ij}=f\{a_{ij}\mid i\in I, j\in J\}$. But then I guess it's not always true that $\mathop{f}_{i\in I}\mathop{f}_{j\in J}a_{ij}=\mathop f_{i\in I,\text{ } j\in J}a_{ij}$ (intuitively). This makes me think that in the case of my double summation above I need to prove the statement instead of being true by definition. But then what is the definition of this expression on the right hand side of the equality?...
In general, if you have some finite set $S$ and some function $f: S \to V$, where $V$ is some vector space, then $\sum_{s \in S} f(s)$ is defined as $0$ if $S = \emptyset$, and recursively as $\sum_{s \in S \cup \{\sigma\}} f(s) = \sum_{s \in S} f(s) + f(\sigma)$ for $\sigma \notin S$.
If $S_1,S_2 \subset S$ are disjoint, then a little work shows $\sum_{s \in S_1 \cup S_2} f(s) = \sum_{s \in S_1} f(s) + \sum_{s \in S_2} f(s)$.
Proving the equality of the iterated summations and the full summation is tantamount to showing that $\{ (i,j) \mid 0 \le i \le m, \ 0 \le j \le n \} = \cup_{i=0}^m \{ (i,j) \mid \ 0 \le j \le n \}$, and that the sets in this union are pairwise disjoint.
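The claimed equality is easy to spot-check numerically for a particular array (a check, not a proof; the values of $a_{ij}$ here are arbitrary):

```python
m, n = 3, 4
a = {(i, j): (i + 1) * (j + 7) for i in range(m + 1) for j in range(n + 1)}

# iterated sum: for each i, sum over j, then sum the results over i
iterated = sum(sum(a[i, j] for j in range(n + 1)) for i in range(m + 1))

# "flat" sum over all index pairs at once
flat = sum(a[i, j] for i in range(m + 1) for j in range(n + 1))
```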
• But then what is the definition of $\sum\limits_{0 \leq i\leq m , \text{ }0 \leq j\leq n}a_{ij}$? – Daniela Diaz Oct 9 '13 at 4:47
https://economics.stackexchange.com/questions/22221/utility-maximisation-subject-to-income-and-time-constraints
# Utility Maximisation Subject to Income and Time Constraints
The consumption of economic goods often takes time. Consider, for example:
• Transport services, eg flights, rail journeys;
• Leisure goods, eg watching a film, visiting a park.
I would like to explore models of consumer or household behaviour in which utility in a period is maximised subject to both income and time constraints.
From a Google search, two important (but rather old) papers on this topic appear to be:
What are other (and more recent) key papers on this topic?
To give a simple example, suppose there are just two goods $X_1,X_2$ and utility $U$ is given by:
$$U = X_1^{0.5}X_2^{0.5}$$
Suppose further that the respective prices and time requirements are for $X_1 (1,2)$ and for $X_2 (2,1)$. Thus the income and time constraints are:
$$X_1 + 2X_2 \leq I$$
$$2X_1 + X_2 \leq T$$
where $I$ is income and $T$ is available time. If $T$ is much larger than $I$ (low-income person), then the income constraint will be binding, and the time constraint will be slack, and vice versa if $I$ is much larger than $T$ (high-income person). Over an intermediate range, including $T = I$, both constraints will be binding. These different scenarios have different implications for the effect of a marginal change in price. Perhaps time preference is relevant here but I can't immediately see how.
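A crude numerical illustration of those scenarios (a grid search with made-up numbers, not taken from any of the papers): maximize $U$ subject to both constraints and observe which one binds.

```python
def maximize(I, T, steps=400):
    """Grid search for the max of U = sqrt(x1 * x2)
    subject to x1 + 2*x2 <= I and 2*x1 + x2 <= T."""
    hi = max(I, T)
    best = (0.0, 0.0, 0.0)  # (U, x1, x2)
    for i in range(steps + 1):
        x1 = i * hi / steps
        for j in range(steps + 1):
            x2 = j * hi / steps
            if x1 + 2 * x2 <= I and 2 * x1 + x2 <= T:
                u = (x1 * x2) ** 0.5
                if u > best[0]:
                    best = (u, x1, x2)
    return best

both_bind = maximize(12, 12)     # I = T: both constraints bind, at x1 = x2 = 4
income_binds = maximize(6, 100)  # T >> I: only the income constraint binds
```

With $I = T = 12$ the optimum sits at the intersection of the two constraint lines; with $T \gg I$ it sits on the income line alone (here at roughly $x_1 = 3$, $x_2 = 1.5$), so a change in a price would matter while a small change in $T$ would not.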
This book by Ian Steedman, reviewed here by Diane Coyle, looks relevant.
https://socratic.org/questions/how-many-moles-of-h-2-are-in-4-48-10-4-g-of-h-2
# How many moles of H_2 are in 4.48*10^-4 g of H_2?
Jul 5, 2016
Approx. $2 \times 10^{-4}\ \text{mol}$
#### Explanation:
$\text{Moles of dihydrogen} = \dfrac{\text{Mass}}{\text{Molar mass}}$

$= \dfrac{4.48 \times 10^{-4}\ \text{g}}{2 \times 1.00794\ \text{g} \cdot \text{mol}^{-1}} = \ ??\ \text{mol}$
Note that this is consistent dimensionally. We wanted an answer in moles, and the equation duly gives us an answer in moles.
In terms of hydrogen atoms, how many atoms of hydrogen does this molar quantity represent? Why?
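The same arithmetic, spelled out (the Avogadro constant is introduced here only to answer the closing question about atom counts):

```python
molar_mass_h2 = 2 * 1.00794        # g/mol: two H atoms per H2 molecule
mass = 4.48e-4                     # g
moles = mass / molar_mass_h2       # dimensionally: g / (g/mol) = mol, ~2.22e-4

avogadro = 6.022e23                # molecules per mol
atoms_of_h = 2 * moles * avogadro  # each H2 molecule contributes 2 H atoms
```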
http://interactivepython.org/runestone/static/JavaReview/Conditionals/cShortCircuit.html
# Short Circuit Evaluation¶
Both && and || use short circuit evaluation. That means that the second condition isn’t necessarily checked if the result from the first condition is enough to tell if the result is true or false. In a complex conditional with a logical and (&&) both conditions must be true, so if the first is false, then the second doesn’t have to be evaluated. If the complex conditional uses a logical or (||) and the first condition is true, then the second condition won’t be executed, since only one of the conditions needs to be true.
Note
In a complex conditional using a logical and (&&) the evaluation will short circuit (not execute the second condition) if the first condition is false. In a complex conditional using a logical or (||) the evaluation will short circuit if the first condition is true.
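The same behavior can be demonstrated in Python, whose `or` and `and` also short-circuit (this is a stand-in for the Java snippets in the exercises below):

```python
def classify(x, y):
    # With "or", the right operand is skipped when the left is already True,
    # so y // x is never evaluated when x == 0: no division-by-zero error.
    if x == 0 or (y // x) == 3:
        return "first case"
    return "second case"
```

Reversing the order of the two conditions would evaluate `y // x` first and raise `ZeroDivisionError` when `x` is 0.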
5-4-1: What is printed when the following code executes and x has been set to zero and y is set to 3?
if (x == 0 || (y / x) == 3) System.out.println("first case");
else System.out.println("second case");
• (A) first case
• Since x is equal to zero the first expression in the complex conditional will be true and the (y / x) == 3 won't be evaluated, so it won't cause a divide by zero error. It will print "first case".
• (B) second case
• Since x is equal to zero the first part of the complex conditional is true so it will print first case.
• (C) You will get an error because you can't divide by zero.
• You won't get an error because of short circuit evaluation. The (y / x) == 3 won't be evaluated since the first expression is true and an or is used.
5-4-2: What is printed when the following code executes and x has been set to negative 1?
String message = "help";
if (x >= 0 && message.substring(x).equals("help")) System.out.println("first case");
else System.out.println("second case");
• (A) first case
• Since x is negative the complex conditional will be false and the second condition won't execute. Remember that with && both parts of the condition must be true for the complex conditional to be true. Using a negative substring index won't cause an error since that code will only be executed if x is greater than or equal to zero.
• (B) second case
• Since x is negative the second part of the complex conditional won't even execute so the else will be executed.
• (C) You will get an error because you can't use a negative index with substring.
• This would be true if it wasn't using short circuit evaluation, but it is.
5-4-3: What is printed when the following code executes and x has been set to zero and y is set to 3?
if ((y / x) == 3 || x == 0) System.out.println("first case");
else System.out.println("second case");
• (A) first case
• The first part of the complex conditional is executed first and will cause a divide by zero error. Complex conditionals are executed from left to right as needed.
• (B) second case
• Since x is equal to zero the evaluation of the first part of the complex conditional will cause a divide by zero error.
• (C) You will get a error because you can't divide by zero.
• Since x is equal to zero the evaluation of the first part of the complex conditional will cause a divide by zero error. You should switch the order of the conditionals to prevent the error because then the first condition would be false and the evaluation would short circuit and not evaluate the second condition.
https://gmatclub.com/forum/if-x-y-and-z-are-integers-is-x-even-200349.html
# If, x, y, and z are integers, is x even?
Math Expert
Joined: 02 Sep 2009
If, x, y, and z are integers, is x even? [#permalink]
23 Jun 2015, 02:47
If, x, y, and z are integers, is x even?
(1) $$10^x = 4^y*5^z$$
(2) $$3^{(x + 5)} = 27^{(y + 1)}$$
Kudos for a correct solution.
Intern
Joined: 28 Jan 2013
Re: If, x, y, and z are integers, is x even? [#permalink]
23 Jun 2015, 05:22
If, x, y, and z are integers, is x even?
(1) 10^x = 4^y*5^z
2^x * 5^x = 2^2y * 5^z
equating, we get
x=2y (x will be even regardless whether y is even or odd)
x=z ( x will be even if z is even and x will be odd if z is odd)
Insufficient
(2) 3^(x + 5) = 27^(y + 1)
3^(x + 5) = 3^3(y + 1)
equating, we get
x + 5 = 3(y + 1)
x = 3y - 2 (x will be even if y is even and x will be odd if y is odd)
Insufficient
Combining statements 1 and 2
we have below equations
x=2y ------(1)
x=z --------(2)
x=3y-2 ----(3)
from (1) and (3)
2y = 3y-2
y=2 (y is even)
so x will be even
Sufficient.
z will also be even
Ans:C
SVP
Joined: 08 Jul 2010
Posts: 2099
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: If, x, y, and z are integers, is x even? [#permalink]
23 Jun 2015, 10:36
Bunuel wrote:
If, x, y, and z are integers, is x even?
(1) 10^x = 4^y*5^z
(2) 3^(x + 5) = 27^(y + 1)
Kudos for a correct solution.
Question : Is x Even?
Statement 1: 10^x = 4^y*5^z
i.e. 2^x * 5^x = 2^(2y) * 5^z
i.e. x = 2y = z
Since, y is an Integer and x=2y, therefore x must be a multiple of 2 i.e. an Even Integer
Hence SUFFICIENT
Statement 2: 3^(x + 5) = 27^(y + 1)
i.e. 3^(x + 5) = 3^(3y + 3)
i.e. (x+5) = (3y+3)
i.e. x = 3y - 2
But x will be odd if y is odd
and x will be even if y is even
Hence NOT SUFFICIENT
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
SVP
Joined: 08 Jul 2010
Posts: 2099
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: If, x, y, and z are integers, is x even? [#permalink]
23 Jun 2015, 10:38
ManojReddy wrote:
If, x, y, and z are integers, is x even?
(1) 10^x = 4^y*5^z
2^x * 5^x = 2^2y * 5^z
equating, we get
x=2y (x will be even regardless whether y is even or odd)
x=z ( x will be even if z is even and x will be odd if z is odd)
Insufficient
Hi ManojReddy,
It looks like you have made an error in statement 1 (check the highlighted part): how can x be even and odd simultaneously?
x = 2y = z, i.e. x MUST be even because y is an integer, and therefore z will also be even.
Hence, SUFFICIENT.
I hope it helps!
Math Expert
Joined: 02 Aug 2009
Posts: 5784
Re: If, x, y, and z are integers, is x even? [#permalink]
23 Jun 2015, 10:46
Bunuel wrote:
If, x, y, and z are integers, is x even?
(1) 10^x = 4^y*5^z
(2) 3^(x + 5) = 27^(y + 1)
Kudos for a correct solution.
(1) 10^x = 2^x * 5^x = 2^(2y) * 5^z.
Equating powers on the two sides: x = 2y. Sufficient.
(2) 3^x * 3^5 = 3^(3y) * 3^3,
so x + 2 = 3y. x can be 4 when y = 2, and x will be 7 when y = 3. Insufficient.
Ans: A
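The sufficiency of statement (1) can be spot-checked by brute force. The sketch below searches a small non-negative range (both the bound and the non-negativity restriction are assumptions for illustration) and confirms that every solution of 10^x = 4^y * 5^z has x even, with x = 2y and z = x.

```python
# Brute-force check of statement (1): if 10**x == 4**y * 5**z with
# integers x, y, z in a small non-negative range, then x is even.
solutions = [
    (x, y, z)
    for x in range(20)
    for y in range(20)
    for z in range(20)
    if 10 ** x == 4 ** y * 5 ** z
]

# Every solution has x even, and in fact x = 2y and z = x.
assert all(x % 2 == 0 for x, y, z in solutions)
assert all(x == 2 * y and z == x for x, y, z in solutions)
print(solutions[:4])  # [(0, 0, 0), (2, 1, 2), (4, 2, 4), (6, 3, 6)]
```

This matches the algebraic argument: comparing prime factorizations forces x = 2y = z.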
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
GMAT online Tutor
Manager
Joined: 04 May 2015
Posts: 72
Concentration: Strategy, Operations
WE: Operations (Military & Defense)
Re: If, x, y, and z are integers, is x even? [#permalink]
07 Aug 2015, 14:58
I saw this question pop up today when I was studying my MGMAT book. In a recent post by mikemcgarry he encouraged me to not dismiss things and to look deeper to really ensure you understand the mechanics of the problem/question rather than just a surface understanding (something about icebergs comes to mind).
I see in the explanation here (and in the MGMAT book) that we are able to infer that $$4^y$$ = $$2^{2y}$$. I'm certainly not going to argue that this is true, as I have tested it with a few different numbers, and the rest of the problem makes sense to me. Rather than just dismiss this, I would like to grasp the mechanics of why we are able to do this. I hope that if I understand this better I will be able to recognise it more easily in the future, and know in what cases I can, and more importantly cannot, make this inference.
Thanks in advance for the help.
p.s. I know this is probably a very simple question for most of you around here but I guess we all get caught up on simple things from time to time
_________________
If you found my post useful, please consider throwing me a Kudos... Every bit helps
Math Expert
Joined: 02 Sep 2009
Posts: 45459
Re: If, x, y, and z are integers, is x even? [#permalink]
16 Aug 2015, 11:58
DropBear wrote:
I saw this question pop up today when I was studying my MGMAT book. In a recent post by mikemcgarry he encouraged me to not dismiss things and to look deeper to really ensure you understand the mechanics of the problem/question rather than just a surface understanding (something about icebergs comes to mind).
I see in the explanation here (and in the MGMAT book) that we are able to infer that $$4^y$$ = $$2^{2y}$$. I'm certainly not going to argue that this is true, as I have tested it with a few different numbers, and the rest of the problem makes sense to me. Rather than just dismiss this, I would like to grasp the mechanics of why we are able to do this. I hope that if I understand this better I will be able to recognise it more easily in the future, and know in what cases I can, and more importantly cannot, make this inference.
Thanks in advance for the help.
p.s. I know this is probably a very simple question for most of you around here but I guess we all get caught up on simple things from time to time
$$4^y=(2^2)^y=2^{2y}$$.
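The identity can also be verified numerically; the range below is arbitrary. The rule at work is $$(a^m)^n = a^{mn}$$, which is what lets us rewrite $$4^y$$ as $$2^{2y}$$.

```python
# Numeric check that 4**y, (2**2)**y and 2**(2*y) agree, illustrating
# the power rule (a**m)**n == a**(m*n) over an arbitrary small range.
for y in range(10):
    assert 4 ** y == (2 ** 2) ** y == 2 ** (2 * y)

print("4**y == 2**(2*y) holds for y = 0..9")
```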
Director
Joined: 12 Nov 2016
Posts: 776
Location: United States
Schools: Yale '18
GMAT 1: 650 Q43 V37
GRE 1: 315 Q157 V158
GPA: 2.66
Re: If, x, y, and z are integers, is x even? [#permalink]
25 Sep 2017, 18:13
Bunuel wrote:
If, x, y, and z are integers, is x even?
(1) $$10^x = 4^y*5^z$$
(2) $$3^{(x + 5)} = 27^{(y + 1)}$$
Kudos for a correct solution.
Statement 1
There is a pattern:
10^2 = 5^2 * 2^2
10^3 = 5^3 * 2^3
10^4 = 5^4 * 2^4
10^5 = 5^5 * 2^5
To rewrite 10^3 or 10^5 in the form 4^y * 5^z, the power of 2 would have to be written as 4^(3/2) or 4^(5/2). This cannot happen since y must be an integer, so x must be even.
Sufficient.
Statement 2
Simply rewrite and use algebra:
3^(x + 5) = 3^(3y + 3)
x + 5 = 3y + 3
x + 2 = 3y
Too many possibilities.
Insufficient.
A
Intern
Joined: 19 Jun 2017
Posts: 4
If x, y, and z are integers, is x even? [#permalink]
23 Oct 2017, 19:15
if x, y, and z are integers, is x even?
(1) $$10^x$$ = $$(4^y)(5^z)$$
(2) 3^(x+5) = 27^(y+1)
I searched high and low and could not find this question posted. Please redirect and lock thread if already posted. Also, I couldn't figure out how to format the (x+5) and (y+1) correctly. I read through the entire "Writing Mathematical Formulas on the Forum"...
Math Expert
Joined: 02 Aug 2009
Posts: 5784
Re: If x, y, and z are integers, is x even? [#permalink]
23 Oct 2017, 19:30
SPEEDBOATS wrote:
if x, y, and z are integers, is x even?
(1) $$10^x$$ = $$(4^y)(5^z)$$
(2) 3^(x+5) = 27^(y+1)
I searched high and low and could not find this question posted. Please redirect and lock thread if already posted. Also, I couldn't figure out how to format the (x+5) and (y+1) correctly. I read through the entire "Writing Mathematical Formulas on the Forum"...
Hi...
If you look down below in similar topics, this stands out..
Merging topics
https://datascience.stackexchange.com/questions/111296/how-to-calculate-loss-function
# How to calculate the loss function?
I hope you are doing well. I want to ask a question regarding the loss function in a neural network.
I know that the loss function is calculated for each data point in the training set, and then backpropagation is done depending on whether we are using batch gradient descent (backpropagation after all the data points have been passed), mini-batch gradient descent (backpropagation after each batch), or stochastic gradient descent (backpropagation after each data point).
Now let's take the MSE loss function: how can n be the number of data points? Because if we calculate the loss after each data point, then n would only be 1 every time.
I also saw a video where they take n as the number of nodes in the output layer (see 5:45): https://www.youtube.com/watch?v=Zr5viAZGndE&t=5s
Therefore I am pretty confused about how we calculate the loss function, and what n represents. Also, when we have multiple inputs, will we only be concerned with the output that the weight we are trying to update influences? Thanks in advance.
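The two readings of n can be made concrete with a small sketch (the array shapes below are made-up examples: 4 samples, 3 output nodes). Averaging over the output nodes gives a per-sample loss (n = number of outputs); averaging those per-sample losses over the batch (n = number of samples) gives the same number as averaging over everything at once.

```python
import numpy as np

# Example batch: 4 samples, 3 output nodes (shapes are assumptions).
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0]])
y_pred = np.full_like(y_true, 0.5)

# Per-sample loss: average the squared errors over the output nodes
# (here n = 3, the number of output nodes, as in the video).
per_sample = ((y_true - y_pred) ** 2).mean(axis=1)

# Batch loss: average the per-sample losses over the batch
# (here n = 4, the number of data points in the batch).
batch_loss = per_sample.mean()

# Averaging over all entries at once gives the same value.
assert np.isclose(batch_loss, ((y_true - y_pred) ** 2).mean())
print(batch_loss)  # 0.25
```

So both definitions are consistent: n is whatever set you are averaging over, and for stochastic gradient descent the "batch" is a single sample.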
https://bowen.pims.math.ca/tags/topdyn
# TopDyn
## Problem 150
Is there an expansive homeo of $\mathbb S^2$?
## Problem 149
(Handel) If there exists a cross section for all minimal sets of a flow, then there exists a global cross-section.
## Problem 130
Which surfaces and which homotopy classes of homeos admit expansive homeos? distal homeos?
## Problem 128
When are suspensions of $R_\alpha$ and $R_\beta$ under bounded functions isomorphic?
## Problem 93
This problem isn't legible.
## Problem 81
Geometric proof of unique ergodicity for irrational rotation of $\mathbb S^1$
## Problem 77
Conjugacy between topology and measure theory
a. Weakest notion such that h(f) is an invariant
b. Entropy-conjugacy + equivalence on Baire sets; what are the equivalence relations on homeomorphisms or maps on $S^1$ and subshifts?
## Problem 49
Does minimal or uniquely ergodic for a diffeo $f$ implies $h(f) = 0$ (try homeo case too)?
1. Is there a minimal diffeo homotopic to $\left( \begin{matrix} 2 & 1\\ 1& 1 \end{matrix} \right) \times Id.$ on $\mathbb T^3$?
2. (Seifert conjecture) Minimal flow on $\mathbb S^3$.
## Problem 48
Suppose $F: C \to \mathbb{R}, C$ the Cantor set, has bounded total variation. Is there a homeo $g : [0,1] \to [0,1]$ and a diffeo (Lipschitz, maybe) $f:[0,1] \to \mathbb{R}$ such that $F = f\circ g |C.$
## Problem 20
Continuous systems in statistical mechanics. Is there a topological dynamics formulation?
## Problem 10
Statistics plus dynamics of transformations of $[0,1]$ - 'non-linear' $\beta$-expansions like examples.
## Problem 5
Homogenous dynamics
1. Implications among
• unique ergodicity
• minimality
• entropy zero plus ergodicity
2. Simple or semi-simple case
• Which one-parameter subgroups are unstable/stable foliations for some ergodic affine?
• Try a).
3. Relate dynamical properties to representations of the group.
4. K-property implies Bernoulli?
5. Weak mixing plus center s.s. implies Bernoulli? [For parts d and e, try nilmanifolds first]
6. Ergodic implies there is a unique measure of maximal entropy?
## Problem 2
Topological Rokhlin's Theorem.
https://mathspace.co/textbooks/syllabuses/Syllabus-406/topics/Topic-7198/subtopics/Subtopic-96177/?activeTab=interactive
# Finding the unknown (4 operations) II
## Interactive practice questions
We want to work out the value of $d$ that makes the following equation true:
$8+2\times d=16$
a
First let's complete a simpler expression. Fill in the blank in the equation below.
$8+\square=16$
b
So $2\times d=8$. What number times $2$ equals $8$?
We want to work out the value of $d$ that makes the following equation true:
$8+5\times d=28$
We want to work out the value of $n$ that makes the following equation true:
$26=6+10\times n$
Let's work out what $b$ must equal to make the following equation true.
$4\times6=b+20$
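The two-step method the exercise walks through (first find the missing addend, then divide) can be sketched as a tiny helper; the function name is ours, and the example equations are the ones from this page.

```python
# Solve a + b*d = c for d in two steps, mirroring the exercise:
# step 1 ("fill in the blank"): find what must be added to a to reach c;
# step 2: that blank equals b*d, so divide by b.
def solve_unknown(a, b, c):
    missing = c - a       # a + ? = c
    return missing / b    # b * d = missing, so d = missing / b

assert solve_unknown(8, 2, 16) == 4    # 8 + 2*d = 16
assert solve_unknown(8, 5, 28) == 4    # 8 + 5*d = 28
assert solve_unknown(6, 10, 26) == 2   # 26 = 6 + 10*n
print(solve_unknown(8, 2, 16))  # 4.0
```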
### Outcomes
#### NA3-8
Connect members of sequential patterns with their ordinal position and use tables, graphs, and diagrams to find relationships between successive elements of number and spatial patterns.
https://mathsolver.microsoft.com/en/solve-problem/y%20%3D%20h%20%5E%20%7B%20-%201%20%7D%20(%20x%20)
## Solve for h
h^{-1}x=y
Swap sides so that all variable terms are on the left hand side.
\frac{1}{h}x=y
Reorder the terms.
1x=yh
Variable h cannot be equal to 0 since division by zero is not defined. Multiply both sides of the equation by h.
yh=1x
Swap sides so that all variable terms are on the left hand side.
hy=x
Reorder the terms.
yh=x
The equation is in standard form.
\frac{yh}{y}=\frac{x}{y}
Divide both sides by y.
h=\frac{x}{y}
Dividing by y undoes the multiplication by y.
h=\frac{x}{y}\text{, }h\neq 0
Variable h cannot be equal to 0.
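The result above can be spot-checked numerically: if h = x/y, the original equation y = x/h must hold for any nonzero values. The sample pairs below are arbitrary.

```python
# Numeric spot-check of the solved result h = x/y from the steps above.
def h_from(x, y):
    return x / y

for x, y in [(6.0, 3.0), (1.0, 4.0), (-8.0, 2.0)]:
    h = h_from(x, y)
    assert abs(y - x / h) < 1e-12  # the original equation y = x/h holds

print(h_from(6.0, 3.0))  # 2.0
```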
## Solve for x
h^{-1}x=y
Swap sides so that all variable terms are on the left hand side.
\frac{1}{h}x=y
Reorder the terms.
1x=yh
Multiply both sides of the equation by h.
x=hy
Reorder the terms.
https://docs.flightsimulator.com/html/Asset_Creation/3D_Models/General_Principles.htm?rhhlterm=morph%20target
## GENERAL MODELLING PRINCIPLES
This section lists some of the general modelling principles that you should follow when creating aircraft and scenery models. You can also find some information on the glTF format.
### In-Sim Graphics Settings
When switching graphics settings in the simulation, Microsoft Flight Simulator changes certain properties related to models, so you need to be aware of the following:
• Texture quality settings reduce the resolution of textures. This affects visual quality when viewing an object in close-up and significantly reduces memory used. You can find more information on this from the section on Reviewing Texture Quality And Memory Usage.
• Object LOD quality settings change the "minSize" properties of LODs. Please see the page on LODs for more information. In general, we recommend tweaking your LODs in the "high" quality setting, which is neutral for LOD selection (meaning a LOD with 50% "minSize" will switch at 50% size on screen), and reduces the quality of only some textures.
### Model Optimization
The goal for model optimization is to reduce CPU/GPU costs, in both execution time and memory, while retaining an as high as possible visual quality. Execution time and memory cost is often a trade-off. A primary example of that is the creation of several LODs. Having a single model LOD is optimal for memory. However, execution time is poor when viewing the model from far away. Because of this, it is necessary to create multiple levels of detail for all of your models, where each level of detail performs better than the previous, starting at the first LOD, typically named "LOD0" or "x0".
The Landscape Elements page outlines in more detail some of the factors that can be used to optimise your models, and the page on LODs goes into more detail on how to generate them correctly.
#### Instancing
It is worth noting that the Microsoft Flight Simulator engine will make use of "instancing" when rendering your models within the scene. This means that if you place the same model 10 times in the same scene, then all 10 of these model "instances" will be batched into a single draw call, saving on performance.
The caveat to this is that it's only applicable to non-skinned meshes, and a model that uses a skinned mesh will not be instanced. Additionally, if a model has some skinned meshes and some non-skinned ones, then the non-skinned ones will be instanced, even though the skinned ones are not.
### UV Precision
When you create a package for Microsoft Flight Simulator, models are processed by the package tool into compatible gLTF files that are optimised for use within the simulator. Part of this optimisation process involves storing the UV data as 2-byte floating point numbers, instead of regular floats, to save on memory usage. What this means is that UVs will have less precision, and if you have a UV that tiles too much, this lack of precision will become very noticeable. You can see this illustrated in the image below, where the left image is the original, and the right is after processing:
To resolve this issue, you should try to keep the UV values as close to 0 as possible, since the further a value is from 0, the less precise it becomes. In general we recommend keeping the UVs between 0 and 1, but if you require a texture that tiles, we understand that this may not be possible. In those cases, one trick is to shift the UVs to use negative values; for example, a UV range of 0 to 10 could be shifted to -5 to 5, keeping precision errors to a minimum.
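The precision loss can be demonstrated with a short sketch, using NumPy's `float16` as a stand-in for the 2-byte GPU format (the specific UV values are arbitrary): the representable spacing of a half-precision float grows with magnitude, so the same fractional UV offset is stored less accurately when it sits several tiles away from 0.

```python
import numpy as np

# Same fractional part, but one UV sits 8 tiles away from the origin.
uv_near = 0.7
uv_far = 8.7

# Round-trip each value through half precision and measure the error.
err_near = abs(float(np.float16(uv_near)) - uv_near)
err_far = abs(float(np.float16(uv_far)) - uv_far)

# Half-float spacing grows with magnitude, so the far UV loses precision.
assert err_far > err_near
print(err_near, err_far)
```

This is why shifting a 0-to-10 UV range to -5-to-5 helps: it halves the maximum distance from 0, and with it the worst-case quantization error.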
### Tangents
When a mesh is imported for the first time, Microsoft Flight Simulator always calculates the tangents. In order to avoid any issues with the normal map, ensure the mesh has a smooth surface and no connected vertices overlap in position or UV space.
For skinned meshes, the tangents are calculated using the default position of the mesh (the T-pose). In the Babylon Exporter, the T-pose is the pose at frame 0. To ensure no problems occur, a good convention would be to start all animations at frame 1 or later.
### Collision Meshes
Many objects in the Microsoft Flight Simulator world are required to detect collisions, whether it's a cockpit button with the user mouse cursor, or a scenery object with an aircraft. You can find details on how these mesh collisions can be set up from the following pages:
### Performance Metrics
Below you can find a list of the different metrics that need to be considered and optimised when creating landscape elements. These metrics apply to individual objects (buildings, landscape features, etc...) but are also very important when considering the performance of a large asset like an entire airport. So, for optimal performance, all of the following metrics should be kept to the minimum possible values
• Texel Count: This is the number of texels (pixels) used by the combined textures. This is a rough indication of memory usage, though different GPU formats will have different memory consumption characteristics. When the package tool compiles a texture, it compiles it to a GPU-ready format (*.DDS). The size of this file is the size it will have in GPU memory.
• Material Count: This is the number of materials used. This number, multiplied with the number of meshes using them, is a rough estimation for the number of draw calls. Mesh instances with the same material are grouped in one draw call
• Mesh Object Count: This is the number of meshes in the element. This, multiplied with the number of materials used, is a rough estimation of the number of draw calls.
• Vertex Count: The number of vertices in the element.
• Face Count: The number of faces (triangles) in the element.
• Skinned Vertex Count: The number of skinned vertices. Skinned vertices use more memory and GPU time than regular vertices.
• Skinned Face Count: The number of skinned faces.
• Texture Count: The number of textures used.
• Bone Count: This is the number of nodes used as bones in skinned meshes. In Microsoft Flight Simulator, this is the entire hierarchy of all the bones used by a skinned mesh, up to the last common ancestor (the "root" node of the skeleton). These bones cost memory in addition to the node they represent.
• Node Count: This is the number of nodes. This translates to objects such as meshes, dummies and helpers in 3DS Max.
• Collision Vertex Count: The number of vertices used for collision detection.
• Collision Face Count: The number of faces used for collision detection.
Note that collision vertices and faces only consume CPU time when they are actually used for collision detection (i.e. on a raycast). This variable cost depends on the situation; it is usually not very high, but minimizing them is recommended. Similarly, bones only consume CPU time when they move, or when their parent hierarchy moves, since their world-space matrix must then be recomputed.
### The glTF Format
Microsoft Flight Simulator currently supports glTF 2.0 models, and your model assets should be created using this. Before starting, however, it is important to note that there are a number of differences between the general glTF format and what is supported by Microsoft Flight Simulator:
#### Model Data
Currently there is no support for the following features in the model data:
• hierarchical non-uniform scaling
• morph targets
• sparse accessors
• secondary UVs
• custom tangents
Additionally, for skins, the inverse-bind matrices are ignored.
#### Images
The following caveats should be respected when adding images:
• Only URIs to external images are supported
• The mimeType parameter is ignored, unless the image.uri does not have an extension
#### Materials
When creating materials for Microsoft Flight Simulator you should follow these rules:
• normal maps must use DirectX conventions for normal vector packing instead of OpenGL conventions
• texture values for roughness, metal and occlusion must be packed into the same texture
The following is also worth noting:
• double-sided materials do not modify the normals when viewing the back-face of materials (i.e. the normals will always be point outwards)
• texture sampler information is ignored (wrapping method is always CLAMP_TO_EDGE)
• animation sampler interpolation method is ignored (interpolation method is always LINEAR).
Finally, note that there is no support for the following:
• No support for a primitive.mode other than TRIANGLES
• No support for the Occlusion Strength parameter.
#### Custom Materials
Microsoft Flight Simulator has support for a custom material type, which allows access to a few render-specific techniques. It is therefore recommended to use the Microsoft Flight Simulator Plugin for 3D Studio Max, especially for aircraft. You can find out more information about this plugin on the following page:
#### Animation Groups
When working with animations, it's very important to ensure that all your animations are correctly grouped for exporting in the glTF. With 3DS Max, this is done from the Babylon Animation Groups window, which is opened from the right-click menu: Right Mouse Button > Babylon > Babylon Animation Groups.
As you create each set of animations - for a wheel, for an aileron, etc... - you then need to define this animation within the Babylon Animation Groups window. This will then include the animation groups in the glTF when you export the file for use with Microsoft Flight Simulator using the Babylon Exporter.
### Legacy Support
Microsoft Flight Simulator has legacy support for the FSX .MDL format. However, it is not recommended to use this old format, mainly due to large differences in rendering techniques used between FSX and Microsoft Flight Simulator. In addition, we do not support uploading model files in this format to the store.
https://zbmath.org/?q=an:1182.16030
Kimmerle conjecture for the Held and O’Nan sporadic simple groups. (English) Zbl 1182.16030
A long standing Zassenhaus conjecture says that every normalized torsion unit of the integral group ring $$\mathbb{Z} G$$ of a finite group $$G$$ is conjugated, within the rational group algebra $$\mathbb{Q} G$$, to an element in $$G$$.
A weakened version of this conjecture concerns the Gruenberg-Kegel graph (also called the prime graph) $$\pi(X)$$ of an arbitrary group $$X$$. Recall that the vertices of this graph are labeled by primes $$p$$ for which there exists an element of order $$p$$ in $$X$$ and with an edge from $$p$$ to a distinct $$q$$ if $$X$$ has an element of order $$pq$$. The following question was posed by W. Kimmerle: is $$\pi(V(\mathbb{Z} G))=\pi(G)$$, for a finite group $$G$$? Of course, a positive answer to the Zassenhaus conjecture implies a positive answer to Kimmerle’s question. Positive answers to the latter have been given by Kimmerle for finite Frobenius and solvable groups, and in a series of papers, by Bovdi and Konovalov and also Hertweck, Jespers, Linton, Marcos, Siciliano, for some simple groups, including 12 of the 26 sporadic simple groups.
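As an illustration of the prime-graph definition (this example is ours, not from the review), the Gruenberg-Kegel graph can be computed directly for a small group such as the symmetric group S4: its element orders are 1, 2, 3 and 4, so the vertices are 2 and 3, and there is no edge because S4 has no element of order 6.

```python
from itertools import permutations, combinations
from math import lcm

def perm_order(p):
    """Order of a permutation in one-line notation: lcm of its cycle lengths."""
    seen, order = set(), 1
    for i in range(len(p)):
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length:
            order = lcm(order, length)
    return order

def prime_factors(n):
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

# S4 realised as all permutations of {0, 1, 2, 3}.
orders = {perm_order(p) for p in permutations(range(4))}   # {1, 2, 3, 4}
vertices = {q for n in orders for q in prime_factors(n)}   # {2, 3}
edges = {(p, q) for p, q in combinations(sorted(vertices), 2)
         if p * q in orders}                               # edge iff an element of order p*q exists

print(vertices, edges)  # {2, 3} set() -- no element of order 6 in S4
```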
In this paper a positive answer to Kimmerle’s question is proved for the Held sporadic group. For the O’Nan sporadic simple group the non-existence of torsion units of all relevant orders, except orders 33 and 57, is given. For both groups, some extra information is obtained that is relevant for the Zassenhaus conjecture.
MSC:
16U60 Units, groups of units (associative rings and algebras)
20C05 Group rings of finite groups and their modules (group-theoretic aspects)
16S34 Group rings
20D08 Simple groups: sporadic groups
LAGUNA; GAP
|
2022-05-21 02:46:39
|
http://sms.niser.ac.in/news/seminar%26colloquium?page=3
|
# Past Events
Date/Time:
Thursday, June 15, 2017 - 11:30 to 12:30
Venue: SMS Seminar Hall
Title: Frames of twisted shift-invariant spaces in $L^{2}(\mathbb{R}^{2n})$ and shift-invariant spaces on the Heisenberg group
Abstract: A well known result on translates of a function $\varphi$ in $L^{2}(\mathbb{R})$ states that the collection $\{\tau_{k}\varphi: k\in\mathbb{Z}\}$ forms an orthonormal system in $L^{2}(\mathbb{R})$ iff $p_{\varphi}(\xi)=\sum\limits_{k\in\mathbb{Z}}|\widehat{\varphi...

Date/Time: Tuesday, May 30, 2017 - 11:30
Venue: Seminar Hall, SMS
Speaker: Anup Dixit, University of Toronto
Title: On the generalized Brauer-Siegel Theorem
Abstract: For a number field K over Q, there is an associated invariant called the class number, which captures how far the ring of integers of K is from being a principal ideal domain (PID). The study of class numbers is an important theme in algebraic number theory. In order to understand how the...

Date/Time: Tuesday, May 23, 2017 - 16:00 to 17:00
Venue: Seminar Room, School of Mathematical Sciences
Speaker: Dr. Rohit Dilip Holkar, IISER Pune
Title: Locally free actions of groupoids and proper correspondences

Date/Time: Friday, April 21, 2017 - 11:35 to 12:35
Venue: SMS Seminar Hall
Speaker: Samir Shukla, IIT Kanpur
Title: Connectedness of Certain Graph Coloring Complexes
Abstract: The Kneser Conjecture was proved by Lovasz, using the Borsuk Ulam Theorem. Lovasz introduced the notion of a simplicial complex called the neighborhood complex for a graph $G$ with an aim of estimating the chromatic number of any graph by the connectivity of its...

Date/Time: Wednesday, April 19, 2017 - 09:35 to 10:30
Venue: Seminar Hall
Speaker: Professor Arup Bose, ISI Kolkata
Title: Introduction to free independence
Abstract: Independence is a fundamental idea in probability theory. This is the same as product measures with total measure 1. However, for non-commutative structures, the classical definition of independence does not work. This gave rise to the concept of free independence as...

Date/Time: Tuesday, April 18, 2017 - 09:30 to 10:30
Venue: LH-5
Speaker: Professor Mahan MJ, TIFR, Mumbai
Title: What is hyperbolic geometry?
Abstract: We shall discuss Euclid's problem of trying to derive the parallel postulate from the remaining axioms.

Date/Time: Thursday, April 13, 2017 - 15:30 to 16:30
Venue: M4
Speaker: Prof. S. Krishnan, IIT Bombay
Title: Hook Immanantal and Hadamard inequalities for q-Laplacians of trees
Abstract: Let $T$ be a tree on $n$ vertices with Laplacian matrix $L$ and $q$-Laplacian $\mathcal{L}_q$. Let $\chi_k$ be the character of the irreducible representation of $\mathfrak{S}_n$ indexed by the hook partition $k,1^{n-k}$ and let $\overline{d}_k(L)$ be...
Date/Time:
Monday, April 10, 2017 - 17:30 to 18:30
Venue: Mathematics Seminar Room
Speaker: Vellat Krishna Kumar, NISER
Title: On the dimension of the $L^2[0,\infty)$ span of $n$ linearly independent $L^2_{loc}[0,\infty)$ functions
In this talk we explore the problem of determining the dimension of the $L^2[0,\infty)$ span of $n$ linearly independent $L^2_{loc}[0,\infty)$ functions and the fascinating geometry associated with it. This problem appears in the context of Weyl's limit classification problems of formally symmetric differential...
Date/Time:
Monday, April 10, 2017 - 11:30 to 12:30
Venue: Seminar Room, School of Mathematical Sciences
Speaker: Dr. Anirban Bose,
Title: Real elements in groups of type F4
Date/Time:
Thursday, April 6, 2017 - 10:35 to 11:30
Venue: SMS Seminar Room
Speaker: Nabin Kumar Jana, NISER, Bhubaneswar
This is the third lecture on this topic with the following:
...
|
2017-11-18 17:38:18
|
http://opencontent.org/blog/page/2
|
A Response to “OER Beyond Voluntarism”
Well, this has turned into a rather enjoyable conversation. To recap what has unfolded so far:
• It began with Jose Ferreira inviting me to appear on a panel at the Knewton Symposium,
• on the panel, I made the claim that in the near future 80 percent of general education courses would replace their commercial textbooks with OER,
• after the conference, Jose responded to my claim by telling publishers why I was wrong,
• I responded by explaining that the emergence of companies like Red Hat for OER would indeed make it happen, using the Learning Outcomes per Dollar metric as their principal tool of persuasion, and
• Michael Feldstein argued that it depends.
Yesterday, Brian Jacobs of panOpen published an essay contributing to the conversation. While I agree that some in the field have yet to pick up on a few of the points he makes, I’m a little perplexed that he would choose to position these points as a response to writing by Michael, Jose, and me. By making these points in a response, he implies that we have yet to understand them. Take this bit for example:
Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software…its application and uses should follow in a similar way. The short history of the two movements makes clear that this is not the case.
I’ve been accused of many things in my life, but never of missing the difference between open content and open source. As the person who coined the term “open content” sixteen years ago specifically for the purpose of differentiating it from open source, I’ve never had to defend against this particular allegation. Not sure what to say.
Or this:
The OER movement’s almost singular focus on cost can obscure the larger objective — actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.
when I spent almost half of the post he is responding to laying out the Learning Outcomes per Dollar metric for empirically measuring the impact of OER use on students’ academic performance. And then demonstrating with actual data from an OER adopter the incredibly powerful ways that OER adoption impacts learning.
Perhaps the article isn’t a response to Jose, Michael, and me at all. Maybe Brian is just using the conversation as an opportunity to underline a few unrelated points he feels need making, and that’s fine. And these little tidbits aren’t what I actually wanted to write about, anyway. Sorry. What I really want to do is unpack and comment on the core argument of the essay. First I’ll disagree, and then I’ll agree.
Disagreeing
As important as [the OpenStax] project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.
Overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor.
From these and other portions of the article, I believe Brian’s argument is based on two premises:
• In order for students to get the full benefit of OER, their faculty need to be aggregating, revising, and remixing OER – really tailoring and customizing it to meet their specific needs
• This is a lot of additional work for faculty, and they won’t do it unless they are provided with additional incentives
Arguing from these assumptions, he arrives at the following conclusion:
This can be done by charging students nominally for the OER courses they take or as a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students… a system for distributed content development also needs to be accompanied by a system of distributed financial incentives.
So, just stating each step of the argument explicitly to make sure I’m getting it right (hopefully he’ll correct me in the comments if I’m getting it wrong):
• if we charge students a little when faculty adopt OER,
• we can use a portion of that revenue to incentivize faculty to do the work of curating disaggregated OER and engaging in the revising and remixing process,
• (because if we don’t incentivize faculty by paying them, then most will never engage in these activities), and
• if faculty aren’t aggregating, revising, and remixing disaggregated OER, students won’t get the full benefit of OER.
I largely agree with Brian’s premises, but disagree somewhat with where he takes the argument based on them. (As I’ll argue below, this disagreement is both healthy and a Good Thing.) Here’s where I think the primary differences in our thinking lie.
The “Full” Benefit of OER
First, while I agree in theory that students don’t get the full potential benefit of OER if their faculty don’t engage in the aggregate, revise, and remix process, it’s unclear to me how much benefit students miss out on when faculty simply adopt OER “as is” (though we’re studying this question now). For example, the overwhelming majority of faculty in the college algebra example from my previous post – where passing rates increased from 48% to 60% after faculty switched to OER – did zero aggregating, revising, or remixing. Maybe the change in pass rates would have been even higher if they had, but are we really going to poo-poo an increase of 12 real percentage points in the pass rate? If students are getting much of the potential benefit even when faculty don’t aggregate, revise, and remix, is it worth incurring the additional costs necessary to achieve 100% of the full benefit? This brings us directly back to the Learning Outcomes per Dollar discussion in my previous post. What’s the delta in learning we would place in the numerator? What’s the delta in cost we would place in the denominator?
The INTRO Model
On Monday we’re submitting an article (for a special issue of EPAA) that introduces a new funding model we call the INTRO model – INcreased Tuition Revenue through OER. In this article we use actual enrollment, drop rate, tuition, policy, and other data from a large OER adopting institution to show that:
• when faculty adopt OER, drop rates decrease significantly
• when drop rates decrease, the institution refunds significantly less tuition
• when they refund less tuition, the institution has more funding to spend on things like supporting OER adoption among its faculty
In this particular example we demonstrate that, if the current OER pilot was expanded to all sections of the 20-some courses currently piloting OER, the institution could expect to retain over $100,000 a year in tuition that they’re currently refunding. Some of this new funding could be used to pay for services supporting faculty adoption of OER without charging students. I’m sure there are other models for funding OER adoption support services out there if we’re creative and open-minded enough to find them.

Parallel Experiments

And I am in total and complete agreement with this statement from Brian’s piece:

What’s needed are lots of entities — for-profit and nonprofit — to experiment with funding models.

YES! We need more experimentation happening, and we need it happening in parallel instead of serially. We can’t all stand around watching the Flat World Knowledge experiment, and only start trying something different when it becomes clear that their approach isn’t quite the right one. As Linus said, in what is possibly my favorite quote:

And don’t EVER make the mistake [of thinking] that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence _much_ too much credit.

Even though I disagree with some of Brian’s conclusions (which is why I’m experimenting with a different business model), I absolutely want him out there experimenting with his particular business model. If I’m sufficiently humble, I’ll learn a thing or two from him before it’s all said and done. (If he’s sufficiently humble, Brian may learn something from me, too.) From this learning a new generation of models will emerge and be tested. They will be followed by another, further refined set of models.
That’s how the field moves forward in its understanding of how to support OER adoption at scale, and it’s how at least 80% of general education courses will end up adopting OER in place of commercial textbooks.

OpenCon 2014

#OpenEd14 is getting close! For a wide range of reasons, this year’s 11th annual Open Education Conference looks like it will be the best ever. One thing contributing to the awesomeness of this year’s conference is other events organized around the same time in the same area. One of these events is OpenCon 2014: The Student and Early Career Researcher Conference on Open Access, Open Education and Open Data, organized by SPARC and the Right to Research Coalition. As the name implies, this event is really focused on engaging students and early career individuals and helping them become effective advocates in the openness movement. The meeting will run from November 15-17 in Washington, D.C., and the program includes three days of talks, workshops, and in-the-field advocacy experience (leveraging its location in Washington, DC). Of course, a delegation of participants from OpenCon will also attend OpenEd. Applications are still open until midnight tonight Pacific time – over 1600 applicants from more than 120 countries have already applied. If you fall into the student / early career category, you should definitely apply.

A Response to ‘OER and the Future of Publishing’

I recently had the wonderful opportunity to participate on a panel about OER at the Knewton Education Symposium. Earlier this week, Knewton CEO Jose Ferreira blogged about ‘OER and the Future of Publishing’ for EdSurge, briefly mentioning the panel. I was surprised by his post, which goes out of its way to reassure publishers that OER will not break the textbook industry. Much of the article is spent criticizing the low production values, lack of instructional design, and missing support that often characterize OER.
The article argues that there is a potential role for publishers to play in each of these service categories, leveraging OER to lower their costs and improve their products. But it’s been over 15 years since the first openly licensed educational materials were published, and major publishers have yet to publish a single textbook based on pre-existing OER. Why?

Exclusivity, Publishing, and OER

The primary reason is that publishers are – quite rationally – committed to the business models that made them incredibly successful businesses. And the core of that model is exclusivity – the contractual right to be the only entity that can offer the print or digital manifestation of Professor Y’s expertise on subject X. Exclusivity is the bedrock of the publishing industry, and no publisher will ever meaningfully invest in building up the reputation and brand of a body of work which is openly licensed. Publisher B would simply sit on the sidelines while Publisher A exhausts its marketing budget persuading the world that its version of Professor Y’s open materials is the best in the field. Once Professor Y’s brand is firmly associated with high quality, Publisher B will release its own version of Professor Y’s open materials, free-riding on Publisher A’s marketing spend. Publisher A’s marketing efforts actually end up promoting Publisher B’s competing product in a very real way. No, publishers will never put OER at the core of their offerings, because open licensing – guaranteed nonexclusivity – is the antithesis of their entire industrial model. Some playing around in the supplementals market is the closest major publishers will ever come to engaging with OER.

New Models Enabled by OER

However, we are seeing the emergence of a new kind of organization, which is neither invested in preserving existing business models nor burdened with the huge content creation, distribution, and sales infrastructure that a large commercial publisher must support.
(This sizable infrastructure, which once represented an insurmountable barrier to entry, is quickly becoming a millstone around the neck of big publishers facing the threat of OER.) The new breed of organization is only too happy to take the role of IBM or Red Hat and provide all the services necessary to make OER a viable alternative to commercial offerings. I had to chuckle a little reading the advice to publishers Jose provides in his post, because that list of services could almost have been copied and pasted from my company’s website (Lumen Learning): iterative cycles of instructional design informed by data, integration services, faculty support, etc. I agree wholeheartedly that these are the kinds of services that must be offered to make OER a true competitor to commercial textbooks in the market – but I disagree with the idea that publishers will ever be willing to offer them. That realization is part of what led me to quit a tenured faculty job in a prestigious graduate program to co-found Lumen Learning.

All that said, the emergence of these organizations won’t spell the end of large textbook publishers as we know them. Instead, that distinction will go to the simplest possible metric by which we could measure the impact of the educational materials US students spend billions of dollars per year on: learning outcomes per dollar.

Learning Outcomes per Dollar

No educator would ever consciously make a choice that harmed student learning in order to save money. But what if you could save students significant amounts of money without doing them any academic harm? Going further, what if you could simultaneously save them significant money and improve their learning outcomes? Research on OER is showing, time and again, that this latter scenario is entirely possible. One brief example will demonstrate the point.
A recent article published in Educause Review describes Mercy College’s recent change from a popular math textbook and online practice system bundle provided by a major publisher (~$180 per student), to OER and an open source online practice system. Here are some of the results they reported after a successful pilot semester using OER in 6 sections of basic math:
• At pilot’s end, Mercy’s Mathematics Department chair announced that, starting in fall 2012, all 27 sections (695 students) in basic mathematics would use [OER].
• Between spring 2011 [no sections using OER] and fall 2012 [all sections using OER], the math pass rate increased from 48.40 percent to 68.90 percent.
• Algebra courses dropped their previously used licenses and costly math textbooks and resources, saving students a total of $125,000 the first year.

By switching all sections of basic math to OER, Mercy College saved its students $125,000 in one year and changed their pass rate from 48 to 69 percent – a 44% improvement.
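The Mercy College numbers are easy to sanity-check in a few lines. A quick sketch, using the pass rates above together with the ~$180 commercial bundle price and the $5-per-enrollment support cost the post quotes (the 44% figure comes from the rounded 48 and 69):

```python
# Pass rates from the Mercy College basic-math pilot (percent passing).
before, after = 48.40, 68.90

absolute_gain = after - before        # 20.5 percentage points
relative_gain = (69 - 48) / 48        # ~0.44 with the rounded figures

# Learning outcomes per dollar: percent passing per required textbook dollar.
commercial = before / 180             # ~0.27 with the ~$180 bundle
oer = after / 5                       # ~13.78 with $5-per-enrollment OER support
impact_factor = oer / commercial      # roughly a 51x improvement
```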
If you read the article carefully, you’ll see that Mercy actually received a fair amount of support in their implementation of OER, which was funded through a grant. So let’s be honest and put the full cost-related details on the table. Mercy (and many other schools) are still receiving the support they previously received for free through their participation in the Kaleidoscope Open Course Initiative. Lumen Learning, whose personnel led the KOCI, now provides those same services to Mercy and other schools for $5 per enrollment. So let’s do the learning outcomes per dollar math:

• Popular commercial offering: 48.4% students passing / $180 textbook and online system cost per student = 0.27% students passing per required textbook dollar
• OER offering: 68.9% students passing / $5 textbook and online system cost per student = 13.78% students passing per required textbook dollar

For the number I call the “OER Impact Factor,” we simply divide these two ratios with OER on top:

• 13.78% students passing per required textbook dollar / 0.27% students passing per required textbook dollar = 51.03

This basic computation shows that, in Mercy’s basic math example, using OER led to an over 50x increase (i.e., a 5000% improvement) in percentage passing per dollar. No matter how you look at it, that’s a radical improvement. If similar performance data were available for two construction companies, and a state procurement officer awarded a contract to the vendor that produces demonstrably worse results while costing significantly more, that person would lose his job, if not worse. (As an aside, I’m not aware of any source where a taxpayer can find out what percentage of federal financial aid (for higher ed) or their state public education budget (for K-12) is spent on textbooks, making it impossible to even begin asking these kinds of questions at any scale.) While faculty and departments aren’t subject to exactly the same accountability pressures as state procurement officers, how long can they continue choosing commercial textbook options over OER as this body of research grows?

#winning

Jose ends his post by saying “Publishers who can’t beat OER deserve to go out of business,” and he’s absolutely right. But in this context, “beat” means something very different for OER than it does for publishers. For OER, “beat” means being selected by faculty or departments as the only required textbook listed on the syllabus (I call this a “displacing adoption”). Without a displacing adoption – that is, if OER are adopted in addition to required publisher materials – students may experience an improvement in learning outcomes but will definitely not see a decrease in the price of going to college.
Hence, OER “beat” publishers only in the case of a displacing adoption. For publishers, the bar is much lower – to “beat” OER, publishers simply need to remain on the syllabus under the “required” heading. How are OER supposed to clear this higher bar, particularly given the head start publishers have? OER have only recently started to catch up with publishers in many of the areas where publishers have enjoyed historical advantages, like packaging and distribution (c.f. the amazing work being done by OpenStax, BCCampus OpenEd, Lumen Learning, and others). But OER have been beating publishers on price and learning outcomes for several years now, and proponents of OER would be wise to keep the conversation laser-focused on these two selection criteria. In a fortunate coincidence for us, I believe these are the two criteria that matter most.

OER offerings are always going to win on price – no publisher is ever going to offer their content, hosting platform, analytics, and faculty-facing services in the same zip code as $5 per student. (And when we see the emergence of completely adaptive offerings based on OER – which we will – even if they are more expensive than $5 per student they will still be significantly less expensive than publishers’ adaptive offerings.) Even if OER only manage to produce the same learning results as commercial textbooks (a “no significant difference” research result), they still win on price. “How would you feel about getting the same outcomes for 95% off?” All OER have to do is not produce worse learning results than commercial offerings. So the best hope for publishers is in creating offerings that genuinely promote significantly better learning outcomes. (I can’t describe how happy I am to have typed that last sentence.) The best opportunity for publishers to soundly defeat OER is through offerings that result in learning outcomes so superior to OER that their increased price is justified.
Would you switch from a $5 offering that resulted in a 65% passing rate to a $100 offering that resulted in a 67% passing rate? Would you switch to a $225 offering that resulted in a 70% passing rate? There is obviously some performance threshold at which a rational actor would choose to pay 20 or 40 times more, but it’s not immediately apparent to me where it is.
However, if OER can beat publishers on both price and learning outcomes, as we’re seeing them do, then OER deserve to be selected by faculty and departments over traditional commercial offerings in displacing adoptions.
I was the member of the panel Jose quoted as saying that ‘80% of all general education courses taught in the US will transition to OER in the next 5 years,’ and I honestly believe that’s true. The combined forces of the innovator’s dilemma, the emergence of new, Red Hat-like organizations supporting the ecosystem around OER, the learning outcomes per dollar metric, and the growing national frustration over the cost of higher education all seem to point clearly in this direction.
|
2014-09-30 23:55:35
|
https://www.ias.ac.in/listing/bibliography/jess/VENKATESHWARLU_MAMILLA
|
• VENKATESHWARLU MAMILLA
Articles written in Journal of Earth System Science
• Paleointensity of the Earth's magnetic field at ${\sim}$117 Ma determined from the Rajmahal and Sylhet Trap Basalts, India
We present here the paleointensity results of basalt samples from Rajmahal (25.10$^{\circ}$N; 87.40$^{\circ}$E) and Sylhet Traps (25.22$^{\circ}$N; 91.71$^{\circ}$E) of eastern India (${\sim}$117 Ma), to determine the strength of the earth's magnetic field during the early Cretaceous at these locations. A modified version of the Thellier–Thellier paleointensity method, the in-field–zero-field–zero-field–in-field (IZZI) protocol, together with systematic partial thermoremanent magnetization (pTRM) checks, was used for the paleointensity determination. Rock magnetic investigations on these rocks indicate that ‘magnetite’ is the main remanence carrier, with single domain (SD) to pseudosingle domain (PSD) nature. The samples yielded low paleofield intensities between 6.97 $\pm$ 2.21 and 23.47 $\pm$ 2.08 $\mu$T (mean 17.20 $\pm$ 1.89 $\mu$T). The corresponding virtual dipole moment (VDM) ranges from 1.16 to 4.17 $\times$ 10$^{22}$ Am$^{2}$ (mean 2.93 $\times$ 10$^{22}$ Am$^{2}$), which is approximately 30% of the present-day field strength (8 $\times$ 10$^{22}$ Am$^{2}$). The success rate of the experiment is quite low, on the order of 5%, but it provides scope for further, more elaborate paleointensity studies. Our new results, compared with published paleointensities from these basalts as well as from rocks of Cretaceous normal superchron (CNS) time around the globe, are in good agreement.
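The conversion from field intensity to dipole moment can be sketched with the standard virtual axial dipole moment (VADM) formula. Note this is an assumption-laden stand-in: it uses site latitude, whereas the paper's VDM uses the measured paleo-inclination, so the resulting numbers differ somewhat from the reported mean of 2.93 × 10²² Am².

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T m / A)
R = 6.371e6                # Earth's mean radius (m)

def vadm(intensity_T, site_latitude_deg):
    """Virtual axial dipole moment (A m^2) for a paleointensity value,
    assuming a geocentric axial dipole field at the site latitude."""
    lam = math.radians(site_latitude_deg)
    return (4 * math.pi * R**3 / MU0) * intensity_T / math.sqrt(1 + 3 * math.sin(lam)**2)

# Mean paleointensity 17.2 uT at the Rajmahal site (~25.1 N) gives a
# moment of a few 10^22 A m^2, well below the present-day ~8e22 A m^2.
moment = vadm(17.2e-6, 25.1)
```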
• Journal of Earth System Science
Volume 130, 2021
|
2021-11-30 00:27:21
|
https://socratic.org/questions/how-do-you-use-pemdas-to-simplify-9-7-3-4-3
|
How do you use PEMDAS to simplify (9-7)^3 - (4+3)?
Jan 18, 2016
The expression simplifies to 1.
Explanation:
PEMDAS means evaluating what is inside the parentheses first, then applying the exponent, then performing multiplication, division, addition and subtraction, in that order.
For this,
${\left(9 - 7\right)}^{3} - \left(4 + 3\right)$
Following PEMDAS:
${\left(2\right)}^{3} - \left(7\right)$
$\left(8\right) - \left(7\right)$
$= 1$
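Python's arithmetic follows the same order of operations, so the steps can be checked directly:

```python
# Parentheses first, then the exponent, then the subtraction.
left = (9 - 7) ** 3    # (2)^3 = 8
right = 4 + 3          # 7
result = left - right  # 8 - 7 = 1
print(result)          # 1
```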
|
2019-09-17 20:56:41
|
https://www.biostars.org/p/155012/
|
5.7 years ago
ziv84 • 0
Hi all,
I'm new to Biostars - so, sorry if I get anything wrong.
I'm also new to local BLAST and my question might be a bit "stupid", but I've already spent almost two days trying to solve the problem and have no idea how to do it... I've read the BLAST manual pages, googled the subject and read several relevant topics here and on Seqanswers as well, but without success. I would be very grateful for any help.
So, the subject is: I'm trying to "restrict" the nt BLAST database so that my queries (several million short reads) are searched only against sequences belonging to certain taxon(s). My pipeline, based on the topic Vertebrate Subset Nr Database? Build My Own?, included these steps:
1) getting id of taxon of interest using NCBI Taxonomy Browser. E.g. it was taxon Chlorophyta. According to the browser the taxon has id as 3041 ( http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=3041&lvl=3&lin=f&keep=1&srchmode=1&unlock )
2) getting the GI list of sequences associated with the taxon using the NCBI Nucleotide database. I performed a search for "txid3041[Orgn]" in the Nucleotide part of NCBI. It said "Found 1052995 nucleotide sequences. Nucleotide (427919) EST (569265) GSS (55811)". I exported the GI list via "Send To" -> "File" -> "GI List" and got a text file Chlorophyta_nucleotide.gi.txt with a list of numbers (GI numbers):
916534476
916534474
916534472
916534470
916534468
916534466
...
3) Then, using blastdbcmd, I tried to extract from my local nt database the sequences matching the list of GIs with the following command:
blastdbcmd -db nt -dbtype 'nucl' -entry_batch ../Chlorophyta_nucleotide.gi.txt -out nt_tst.fa
BUT got a lot of errors like:
Second attempt was with redirecting of errors into separate file error.log (-logfile option)
The result is:
427919 - lines in file with gids (Chlorophyta_nucleotide.gi.txt) - it's in agreement with the report of search in Nucleotide database (see above)
287257 - lines in file error.log (all of them looks like "Error: XXXXXXXXX: OID not found")
140662 - lines starting with ">" in output file nt_tst.fa
140662+287257=427919 - seems that far from all entries in list of gids have been found in nt database
After some time with Google I learned that the problem could be connected with how the local nt database was prepared: if you build the nt database manually (downloading fasta files and running makeblastdb), it's important to use the -parse_seqids option to avoid the "OID not found" problem later.
But I downloaded the pre-formatted version of the nt database from NCBI's ftp. According to their README, a pre-formatted nt database doesn't need further preparation before searching (i.e. it's "ready-to-use"). Anyway, to be sure that my nt database was in "good shape" I ran a simple blastn - it works.
Further check:
blastdbcmd -db nt -info
Database: Nucleotide collection (nt)
30,527,720 sequences; 95,485,076,457 total bases
Date: Jun 17, 2015 3:03 AM Longest sequence: 774,434,471 bases
Volumes:
/media/RAID/blastdb/nt.00
/media/RAID/blastdb/nt.01
/media/RAID/blastdb/nt.02
/media/RAID/blastdb/nt.03
...
/media/RAID/blastdb/nt.29
cat /media/RAID/blastdb/nt.nal
#
# Alias file created 06/17/2015 03:15:04
#
TITLE Nucleotide collection (nt)
DBLIST "nt.00" "nt.01" "nt.02" "nt.03" "nt.04" "nt.05" "nt.06" "nt.07" "nt.08" "nt.09" "nt.10" "nt.11" "nt.12" "nt.13" "nt.14" "nt.15" "nt.16" "nt.17" "nt.18" "nt.19" "nt.20" "nt.21" "nt.22" "nt.23" "nt.24" "nt.25" "nt.26" "nt.27" "nt.28" "nt.29"
NSEQ 30527720
LENGTH 95485076457
As far as I understand, everything is ok with the database.
Maybe the "Nucleotide" database is not the same as the "nt" database? As I understand it (based on the descriptions of the databases), the nt database should be equal to or even bigger than the Nucleotide one:
nt (the description obtained via "?" button in web version of blast)
"Title:Nucleotide collection (nt)
Description:The nucleotide collection consists of GenBank+EMBL+DDBJ+PDB+RefSeq sequences, but excludes EST, STS, GSS, WGS, TSA, patent sequences as well as phase 0, 1, and 2 HTGS sequences. The database is non-redundant. Identical sequences have been merged into one entry, while preserving the accession, GI, title and taxonomy information for each entry.
Molecule Type:mixed DNA
Update date:2015/07/14
Number of sequences:31076527"
Nucleotide (the description copypasted from main page of Nucleotide part of NCBI http://www.ncbi.nlm.nih.gov/nuccore )
"The Nucleotide database is a collection of sequences from several sources, including GenBank, RefSeq, TPA and PDB. Genome, gene and transcript sequence data provide the foundation for biomedical research and discovery."
As for nonredundancy of nt database there is some discrepancy with description of "downlodable" version of nt database on NCBI's ftp:
"
nt.*tar.gz | Partially non-redundant nucleotide sequences from
all traditional divisions of GenBank, EMBL, and DDBJ
excluding GSS,STS, PAT, EST, HTG, and WGS.
"
It seems "my" nt database could be non-redundant and accordingly should not be smaller than the Nucleotide one. Consequently, I suppose the nt database should contain all entries from my GI list.
To be sure that everything is ok with my GI list, I performed the same steps with another list of GIs (taxonomy id: 2 from NCBI's Taxonomy Browser) and got the same result: most GIs were not found in the nt database ("Error: XXXXXXXXX: OID not found").
So, the questions are as following:
1) Is everything ok with my local nt base? Should I check smth else to be sure..?
2) Was my "restriction" pipeline wrong in some step? Do you know a more effective way to search a big list of queries against only the nt-database sequences that belong to organisms from a certain taxon? The constraint is that the queries are a mix of sequences from a number of non-model organisms.
3) Is it actually a normal situation to get the error "OID not found" if the pipeline is correct? Should the nt database contain all GIs from my list(s)?
Would be very appreciative for any help.
Thank you for your attention )
PS:
The overall task is to perform "decontamination" of a short-read array before the de-novo genome assembly step. We currently have no idea what the list of "contaminating" organisms is, except for the consideration that our target organism is eukaryotic and the "contaminants" are prokaryotic. The amount of "contaminating" reads is quite big - roughly 30% or even more. Maybe there is some other effective way to perform the "decontamination"?
blast • 3.3k views
5.7 years ago
5heikki 9.7k
This is related to GI numbers not lasting forever, at least not in any NCBI sequence databases. IMO the best way to get the relevant GI numbers is straight from the db itself as in this example. Then you just use that list for blastdb_aliastool. Also, there are much faster methods than BLAST for decontamination, e.g. DeconSeq.
Thank you for your answer. I saw the cookbook page previously but haven't tried it yet because of some uncertainty: for example, if I run the following command
blastdbcmd -db nt -entry all -outfmt "%g %T" | \
awk ' { if ($2 == 3041) { print $1 } } ' | \
blastdbcmd -db nr -entry_batch - -out chlorophyta_sequences.txt
will I get a list of sequences belonging to all organisms in the phylum Chlorophyta (including, e.g., organisms in the class Chlorodendrophyceae with Taxonomy ID 1524962), or just some subset? Do you know?
Also thank you for the software advice. I had postponed trying decontamination approaches other than BLAST because the latter seems clearer and more informative to me (it tells me who my "contamination" is). But it seems you're right, and the BLAST way is much less efficient in terms of speed. I found the article http://www.nature.com/ismej/journal/vaop/ncurrent/full/ismej2015100a.html and am now going to try that software as well as DeconSeq.
The example linked is not appropriate for your use-case, it will not select subtaxa but only the exact taxon. Therefore, you need to generate a list of all relevant subtaxa from the taxonomy and then filter the gi list for those.
You're right. I think the list can be fetched from here. Or with Entrez Direct:
esearch -db taxonomy -query txid2[Subtree] | efetch -format uid > txid.list
I would say in this case it would be better to:
blastdbcmd -db nt -entry all -outfmt "%g %T" > temp.file
and then (if enough memory):
join -1 1 -2 2 -o 2.1 -t $'\t' <(sort -k1,1 txid.list) <(sort -k2,2 temp.file) > bac.gi
and then proceed with blastdb_aliastool.

Thank you. I used almost the same approach with some modifications. First, I manually updated the nt db (Aug 12 version). Then:
1. to extract subtaxa ids I used the gi_taxid_nucl.dmp file (ftp://ftp.ncbi.nih.gov/pub/taxonomy/gi_taxid.readme), which maps GIs to txids
2. to get the list of txids of Bacteria taxa I used the file categories.dmp from the taxcat dump (ftp://ftp.ncbi.nih.gov/pub/taxonomy/taxcat_readme.txt):
grep '^B' taxcat_dmp_Aug2015.dmp | cut -f 3 > Bacteria_taxids.txt
3. to get the full list of GIs from the nt db I used:
blastdbcmd -db nt -entry all -outfmt "%g" > GI_list_nt
4. Since I'm not comfortable with join, a small Perl script helped me extract the GIs from gi_taxid_nucl.dmp whose taxid is present in Bacteria_taxids.txt, and then check the presence of such GIs in GI_list_nt. That should work, but the result of (4) was the following:
36539237 - GIs in GI_list_nt
6861060 - GIs extracted
Meanwhile, using the Nucleotide db at the NCBI web page and the "txid2[Orgn]" query, one can currently extract 19900899 GIs (http://www.ncbi.nlm.nih.gov/nuccore/?term=txid2%5BOrgn%5D). What is going on? Moreover, my local nt db:
blastdbcmd -db nt -info
Database: Nucleotide collection (nt)
31,561,398 sequences; 101,581,910,962 total bases
If the nt and Nucleotide dbs are almost the same, then more than 50% of the db would consist of bacterial seqs, which surely can't be true. But even so, I think 6 million bacterial seqs out of >30 million nt seqs is too small a fraction... Isn't it?

Try it the way I spelled out and see how many sequences you get. Also, Nucleotide and nt are not the same; nt is non-redundant to some extent. Also, if you check on the left of your last link, you'll see that RefSeq has ca. 4M bacterial sequences, so 7M bacterial seqs in nt might be about right.
Also consider the difference in counts of Bacteria and Eukaryota in nuccore, 20M bacterial seqs vs. 155M eukaryote seqs:
esearch -db nuccore -query txid2[Orgn] .. <Count>19900899</Count> ..
esearch -db nuccore -query txid2759[Orgn] .. <Count>154846939</Count> ..
The ratios 7/31 and 20/155 are not that far off, and 155M still excludes archaea, viruses and unclassified. The total seq count in nuccore is right now 198,787,644 (einfo -db nuccore).

I have used this, and at the "blastdbcmd -db nr -entry all -outfmt "%g %T" > temp.file" step I got a file:
N/A 0
N/A 0
N/A 0
What is the problem?

5.7 years ago
Run update_blastdb.pl nt in the directory where your blast databases are stored and extract the tar files, then see if the problem persists. A fraction of OIDs might have been added between June 17 and today.

Thank you for the answer. I tried to use the script to download the nt db from NCBI and failed with the same error "nt not found". After that I downloaded the db manually. Now I tried to run it again:
/media/RAID/blastdb$ update_blastdb nt
Connected to NCBI
nt not found, skipping.
but I have the nt db in that path... I don't know what the problem is...
And also: do you think that between June 17th and today they could have added more than half of the current sequences for Chlorophyta? Is that possible?
I do not know why update_blastdb nt doesn't work, it works fine for me. If you have downloaded the 32 volumes manually, you will still have to extract them, but you don't need update_blastdb
Unfortunately I can't check the script right now with my updated nt db. But the older version consisted of 29 volumes and worked in blast (seems the db wasn't corrupted). But not with update_blastdb.pl although the db wasn't up-to-date when I tried...
Have you solved your problem? If not, please try using -parse_seqids when building the BLAST database; it indexes your sequence IDs, which lets blastdbcmd retrieve entries by ID. Thanks
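The GI-by-taxid filtering discussed in this thread (the join/Perl-script step) can be sketched in plain Python. The file contents below are a tiny made-up sample; a real gi_taxid_nucl.dmp has millions of tab-separated gi/taxid lines:

```python
def filter_gis_by_taxid(gi_taxid_lines, wanted_taxids):
    """Keep GI numbers whose taxid is in wanted_taxids.

    gi_taxid_lines: iterable of "gi<TAB>taxid" strings
                    (the format of NCBI's gi_taxid_nucl.dmp)
    wanted_taxids:  set of taxid strings (e.g. all Bacteria txids)
    """
    kept = []
    for line in gi_taxid_lines:
        gi, taxid = line.split()
        if taxid in wanted_taxids:
            kept.append(gi)
    return kept

# Made-up records: E. coli (562), human (9606), S. aureus (1280)
sample = ["916534476\t562", "916534474\t9606", "916534472\t1280"]
bacteria_taxids = {"562", "1280"}
print(filter_gis_by_taxid(sample, bacteria_taxids))  # ['916534476', '916534472']
```

The resulting GI list would then be handed to blastdb_aliastool, as suggested above.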
|
https://www.physicsforums.com/threads/find-the-expectation-value-of-the-linear-momentum.276413/
|
# Find the expectation value of the linear momentum
1. Dec 2, 2008
### s8wardwe
1. The problem statement, all variables and given/known data
For a given wave function $\Psi(x,t) = A e^{-(x/a)^2} e^{-i\omega t} \sin(kx)$, find the expectation value of the linear momentum.
2. Relevant equations
$\langle p \rangle = \int_{-\infty}^{\infty} \psi^* \,\hat{p}\, \psi \, dx$
$\hat{p} = -i\hbar \,\frac{d}{dx}$
$\sin x = \frac{e^{ix} - e^{-ix}}{2i}$
$\cos x = \frac{e^{ix} + e^{-ix}}{2}$
3. The attempt at a solution
I understand the technique of sandwiching the operator between the wave function and its complex conjugate. The integral is then a mess of sines, cosines and exponentials. I was wondering if anyone had advice on how to simplify the expression or solve this type of infinite integral. Your suggestions would be very helpful. Thanks in advance.
2. Dec 2, 2008
### buffordboy23
Re: momentum
Look at the individual integrands and determine whether they are odd or even functions. If an integrand is odd and the integration interval is symmetric about the origin (i.e. negative infinity to positive infinity, or [-a,a]), you can exploit the fact that the integrand integrates to zero. For even functions with symmetric intervals, you can multiply the integral by 2 and run the integral from 0 to your upper boundary value.
$$\int^{a}_{-a}f_{odd}\left(x\right) dx = 0$$
$$\int^{a}_{-a}f_{even}\left(x\right) dx = 2 \int^{a}_{0}f_{even}\left(x\right) dx$$
The formula for even functions is useful for Gaussian terms like $e^{-x^2}$ when the integration interval runs from negative to positive infinity.
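For this particular wave function the parity argument settles it: the spatial part $A e^{-(x/a)^2}\sin(kx)$ is real, so $\psi\,\psi'$ is an odd function and $\langle p \rangle = 0$. A quick numerical check in Python (the values of a and k below are arbitrary assumptions; the normalization constant drops out because the integral itself vanishes):

```python
import math

a, k = 1.0, 2.0  # arbitrary width and wave number (assumed values)

def psi(x):
    # Real spatial part; the e^{-iwt} phase cancels against psi* in <p>
    return math.exp(-(x / a) ** 2) * math.sin(k * x)

def dpsi(x):
    # Analytic derivative of psi
    return math.exp(-(x / a) ** 2) * (k * math.cos(k * x)
                                      - (2 * x / a ** 2) * math.sin(k * x))

# <p> = -i*hbar * integral(psi * psi') dx; trapezoid rule on [-10, 10]
n, lo, hi = 20001, -10.0, 10.0
h = (hi - lo) / (n - 1)
total = sum(psi(lo + i * h) * dpsi(lo + i * h) * (0.5 if i in (0, n - 1) else 1.0)
            for i in range(n)) * h
print(abs(total) < 1e-8)  # True: the integral vanishes, so <p> = 0
```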
|
http://accesscardiology.mhmedical.com/content.aspx?bookid=376§ionid=40279812
|
Chapter 78
Most cases of mitral stenosis (MS) are caused by rheumatic heart disease (Fig. 78–1).1 However, rheumatic fever has become quite rare in developed nations and so too has MS become rare. Indeed, most MS in the United States occurs in patients who have emigrated here from countries where rheumatic fever is still commonplace. Why rheumatic fever has waned in developed nations is unclear. Although antibiotic use almost certainly plays a role,2 the decline in disease incidence began before antibiotics were widely available, suggesting that socioeconomic factors also play a key role in the disease process. In addition, the organism responsible (group A Streptococcus) itself may have mutated to a less rheumatologic agent.
###### Figure 78–1.
The typical fish mouth appearance of rheumatic mitral stenosis is shown. Reproduced with permission from Otto.1
Although it is clear that rheumatic fever causes the disease, the exact mechanisms are still controversial. It is generally agreed that rheumatic fever occurs after infection with group A Streptococcus.3-5 The bodily defense against the organism attacks "M" protein antigens shared by the bacterium and the heart in some patients.4 Thus there is an inflammatory response that leads to cardiac damage, potentially of all three cardiac layers, the pericardium, myocardium, and endocardium. However, it is the endocardium, from which the cardiac valves are derived, that is most severely affected. Although all four valves may become damaged, the mitral valve is virtually always affected, but the reasons for this propensity are unclear. Perhaps greater mechanical stress on the mitral valve causes the inflammatory process to be manifested more severely there than on other valves.
The initial attack causes inflammation, thickening, and retraction of the mitral leaflets, usually causing mild mitral regurgitation, which may disappear as the attack subsides. Why MS develops later is not entirely clear, but at least 3 factors contribute to the process: sex, the severity of carditis in the first attack, and the number of subsequent attacks. Mitral stenosis is primarily a disease of women, with a 3:1 female preponderance. If after the initial attack there is little evidence of valvulopathy and no subsequent attacks occur, the chance that the patient will develop severe MS later in life is probably less than 5%.6 Subsequent attacks can be prevented by faithful adherence to antibiotic prophylaxis. However, what pathologic processes occur between the initial attack of acute rheumatic fever and eventual development of MS (when it does occur) are uncertain. At the time of surgery for MS there are active Aschoff nodules (the pathognomonic lesion of rheumatic fever) in the left atrial appendages of many patients, suggesting that a smoldering rheumatic process persists years after the last acute attack.7 Alternatively, it may be that after the initial lesion is created by rheumatic fever, hemodynamic stress on the valve may lead to continued inflammation and ...
|
http://math453spring2009.wikidot.com/lecture-1
|
Lecture 1 - Introducing Divisibility
# Summary
Today we continued our discussion of divisibility and its basic properties. We saw some examples of how to put these properties into practice to prove exciting new results which might otherwise be quite difficult. Today's lecture culminated in the statement and proof of the division algorithm, one of the foundational results in number theory.
# Divisibility Continued
Last class we defined the notion of divisibility in the integers as follows:
Definition: An integer d is said to divide an integer a if there exists some integer q satisfying the equation $a = dq$.
## Some Examples
We already saw proofs of $4 \mid 8$ and $2 \nmid 5$ in class on Wednesday. Most divisibility statements will seem pretty obvious to you just by inspection, but the one exception might be divisibility statements involving 0. Below we provide a few examples.
$0 \nmid 5$
since the equation $5 = 0\cdot q$ doesn't have any solution; any value you plug in for q will still make the right hand side 0.
$0 \mid 0$
since the equation $0 = 0\cdot q$ is true for some integer-value of q (in fact, it's true for all q!).
$3 \mid 0$
since $0 = 3q$ does have a solution, namely $q=0$.
## The case of evens and odds
We also single out a special case of divisibility by using the terms even and odd. Specifically, we have
Definition: An integer a is even if $2 \mid a$, and an integer a is odd if $2 \nmid a$.
You might also be used to thinking of even integers as those numbers a which satisfy a = 2k for some integer k. Indeed, we have the following
Lemma: An integer a is even if and only if there exists $k \in \mathbb{Z}$ so that $a = 2k$.
Our proof of this result will require us to simply recall the definitions of divisiblity and evenness.
Proof:
We know that an integer a is even if and only if $2 \mid a$; this is just the definition of evenness. We also know that $2 \mid a$ if and only if there exists an integer k so that $a = 2k$; this is just the definition of divisibility. Hence we have
(1)
\begin{align} a\mbox{ is even} \Longleftrightarrow 2\mid a \Longleftrightarrow a = 2k \mbox{ for some }k \in \mathbb{Z} \end{align}
as desired. $\square$
You'll extend this problem in your homework when you show that all odd numbers can be written in the form $2k+1$.
# Properties of Divisibility
There are a handful of properties of divisibility which are handy to remember; basically, these are good tools to use when you want to try to divide one integer into another. You can also think of these lemmas as good exercise for the definitions we've encountered in the class: none of the proofs require much more than writing down definitions, so they are a good chance for you to get used to the new terminology we've covered.
Lemma: For $a,b,c \in \mathbb{Z}$, if $a \mid b$ and $b \mid c$ then $a \mid c$.
Proof: We're told that $a \mid b$ and $b \mid c$. By the definition of divisibility, this means we have
• an integer d so that $b = ad$ (using the first divisibility condition), and
• an integer e so that $c = be$ (using the second divisibility condition).
Substituting appropriately, this means that
(2)
$$c = be = (ad)e = a(de)$$
Since de is an integer, this equation tells us that $a \mid c$ as desired. $\square$
Lemma: For $a,b,c,m,n \in \mathbb{Z}$, if $a \mid b$ and $a \mid c$, then $a \mid mb+nc$.
Proof: Again, we start by just writing down the definitions. In this case, we're told that $a \mid b$ and $a \mid c$, which means we have
• an integer d so that $b = ad$ (using the first divisibility condition), and
• an integer e so that $c = ae$ (using the second divisibility condition).
Hence we have
(3)
$$mb+nc = m(ad) + n(ae) = a(md+ne)$$
Since md+ne is an integer, this equation tells us that $a \mid mb+nc$.$\square$
There was another basic property of division we mentioned that allowed us to compare the size of a divisor to the size of the integer it is dividing. The statement of this result is
Lemma: If $d \mid a$ for a nonzero integer a, then $|d| \leq |a|$.
We didn't prove this result, but it might show up on your homework.
## A Neat Trick
One of the examples of divisibility we gave in class was a rule for determining when an integer is divisible by 17. You can think of this as a cousin of the old "casting out nines" rule that you use to determine whether a given integer is divisible by 9. This new rule says
Theorem: An integer $10a+b$ is divisible by 17 if and only if $a-5b$ is divisible by 17.
Proof: First, assume that $17 \mid (10a+b)$. Since $17 \mid 17a$ is obvious, our result on integral linear combinations tells us that
(4)
\begin{align} 17 \mid 3(17a) - 5(10a+b) = 51a-50a -5b = a-5b. \end{align}
In the other direction, assume that we are told that $17 \mid a-5b$, and we want to prove $17 \mid 10a+b$. Now since we know that $17 \mid 17b$, our result on integral linear combinations tells us that
(5)
\begin{align} 17 \mid 10(a-5b) + 3(17b) = 10a-50b+51b = 10a+b. \end{align}
$\square$
## Example
To see this result in practice, notice that we have 221 = 22(10)+1. Since $17 \mid 22-5(1) = 17$, we can conclude that $17 \mid 221$.
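The rule is easy to sanity-check exhaustively for small numbers; a short Python sketch:

```python
def rule_17(a, b):
    """The theorem: 17 | 10a+b  if and only if  17 | a-5b."""
    return ((10 * a + b) % 17 == 0) == ((a - 5 * b) % 17 == 0)

# Verify the equivalence for every a, b in a small range
assert all(rule_17(a, b) for a in range(200) for b in range(10))

# The worked example: 221 = 22*10 + 1, and 22 - 5*1 = 17
print(221 % 17 == 0)  # True
```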
## A Final Divisibility Result
We finished off with one last example of a divisibility proof, when we showed that
For every positive integer n, we have $5 \mid n^5 - n$.
Proof: We proved this result by induction, starting with the base case $n=1$. In this case it's easy to see that the statement is true: $5 \mid 1^5-1$.
For the inductive step, we'll assume we know that $5 \mid n^5-n$, and we'll try to use this to prove that $5 \mid (n+1)^5-(n+1)$. In order to do this, we'll try to simplify the expression $(n+1)^5 - (n+1)$ into something more user friendly; we decided the best way to do this was to just expand the term $(n+1)^5$, which gives us
(6)
\begin{align} \begin{split}(n+1)^5-(n+1) &= (n^5 + 5n^4+10n^3+10n^2+5n+1) - n - 1\\&= (n^5-n) + 5(n^4+2n^3+2n^2+n).\end{split} \end{align}
Since $5 \mid n^5-n$ by induction and $5 \mid 5(n^4+2n^3+2n^2+n)$ due to our clever factorization, our result on integral linear combinations tells us that $5 \mid (n+1)^5-(n+1)$ as well.$\square$
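A claim like this is also quick to sanity-check numerically before (or after) proving it:

```python
# Check 5 | n^5 - n for the first thousand positive integers;
# the induction proof above covers all n, this is just a spot check.
assert all((n ** 5 - n) % 5 == 0 for n in range(1, 1001))
print("5 divides n^5 - n for n = 1..1000")
```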
# The Division Algorithm
The following result, though it seems pretty basic, is actually extremely powerful, giving rise not just to a method for finding greatest common divisors (Section 1.3) but also laying the foundation for the notion of modular arithmetic (Chapter 2).
The Division Algorithm: For a positive integer d and an arbitrary integer a, there exist unique integers q and r with $0 \leq r < d$ and $a = qd + r$.
Proof of the division algorithm:
Part 1: Existence
We start by defining the set
(7)
\begin{align} S = \{a - nd : n \in \mathbb{Z}\}, \end{align}
and we claim that S has at least one non-negative element. To back up this claim, notice that
• if $a \geq 0$ then we can take $n=0$ and find that $a \in S$ is non-negative;
• otherwise $a < 0$, in which case taking $n = a$ shows that $a-ad = a(1-d) \in S$. Now since d is a positive integer we know that $1-d \leq 0$, and so the product $a(1-d)$ is either a positive number (if $1-d < 0$) or $0$ (if $1-d = 0$). In either case we see that $a-ad$ is a non-negative element of S.
In either case we see that S contains a non-negative element, and hence the well ordering principle tells us that S contains a least non-negative element. We'll call this element r, and notice that r takes the form
(8)
\begin{align} r = a-qd \mbox{ for some }q \in \mathbb{Z}. \end{align}
Hence we get $a = qd + r$. To show this satisfies the conditions of the division algorithm, we simply need to show that $0 \leq r < d$. The condition $0 \leq r$ is satisfied since r is chosen to be non-negative, so we only need to verify $r < d$.
To see that $r < d$, assume to the contrary that $r \geq d$, and we'll derive a contradiction. In this case we have that
(9)
\begin{align} \tilde r = a - (q+1)d = a-qd -d = r-d \geq 0 \end{align}
Since $r \geq d$ by assumption we have $\tilde r$ is a non-negative element of S which is smaller than r. This is a contradiction to the selection of r as the smallest non-negative element of S, so we must conclude that $r < d$ as desired.
Part 2: Uniqueness
To finish the proof we need to show that the q and r we found in the previous part of the theorem are, indeed, unique. Hence suppose we have
(10)
$$a = q_1d+r_1 = q_2d+r_2.$$
This tells us that
$r_1-r_2 = d(q_2-q_1)$, and therefore that $d \mid r_1 - r_2$. But since $0 \leq r_1, r_2 < d$, we also have $-d < r_1 - r_2 < d$, so we are in the scenario where the divisor d has larger absolute value than the number it is dividing into, namely $r_1-r_2$, unless that number is zero. This tells us that we must have $r_1 - r_2 = 0$, and hence $r_1 = r_2$.
With this in hand, we see that the equation $q_1d + r_1 = q_2d + r_2$ then becomes $q_1d = q_2 d$. Using the cancellation law of multiplication, we therefore have $q_1 = q_2$.$\square$
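The existence proof is constructive, and the resulting q and r are exactly what floor division produces; a small Python sketch (for positive d, including negative a):

```python
def division_algorithm(a, d):
    """Return (q, r) with a = q*d + r and 0 <= r < d, for positive d."""
    if d <= 0:
        raise ValueError("d must be a positive integer")
    q = a // d       # floor division gives the right q even for negative a
    r = a - q * d    # then r = a - q*d lands in [0, d)
    return q, r

print(division_algorithm(17, 5))   # (3, 2)
print(division_algorithm(-7, 3))   # (-3, 2): note r is still non-negative
```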
|
https://pcs.org.au/problem/lunchchoice
Lunch Choice
Points: 1
Time limit: 1.0s
Memory limit: 256M
Allowed languages
C, C++, Java, Python
While furiously programming the donation site for the Cameron Hall 2020 Charity UnVigil (https://charityunvigil.online/), James and Felix decide to have a lunch break. Being extremely hard workers, they give themselves exactly $$k$$ units of time to have lunch.
They have a list of $$n$$ venues. The $$i$$-th venue is characterized by two integers $$f_i$$ and $$t_i$$. $$t_i$$ is the time needed to lunch at the $$i$$-th venue. If $$t_i$$ exceeds $$k$$ then James and Felix's joy will equal $$f_i - (t_i - k)$$. Otherwise they get exactly $$f_i$$ units of joy.
Your task as the omniscient narrator is to choose the best place to have lunch, maximizing joy (which may be zero or negative).
Input Specification
The first line contains two space-separated integers $$n$$ $$(1 \leq n \leq 10^4)$$ and $$k$$ $$(1 \leq k \leq 10^9)$$: the number of venues and the time they give themselves, respectively. Each of the next $$n$$ lines contains two space-separated integers $$f_i$$ $$(1 \leq f_i \leq 10^9)$$ and $$t_i$$ $$(1 \leq t_i \leq 10^9)$$, the characteristics of the $$i$$-th venue.
Output Specification
A single integer, the maximum joy that James and Felix can get from their lunch.
Sample Input 1
2 5
3 3
4 5
Sample Output 1
4
Sample Input 2
4 6
5 8
3 6
2 3
2 2
Sample Output 2
3
Sample Input 3
1 5
1 7
Sample Output 3
-1
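A direct solution follows from the statement: the joy of venue $$i$$ is $$f_i - \max(0, t_i - k)$$, and the answer is the maximum over all venues. A minimal Python sketch (the function name is mine; full stdin parsing is omitted):

```python
def max_joy(k, venues):
    # Joy at venue (f, t) is f, reduced by the overtime (t - k) when t > k;
    # the answer is the best venue, which may still be negative.
    return max(f - max(0, t - k) for f, t in venues)

# The three sample cases from the problem statement:
print(max_joy(5, [(3, 3), (4, 5)]))                  # → 4
print(max_joy(6, [(5, 8), (3, 6), (2, 3), (2, 2)]))  # → 3
print(max_joy(5, [(1, 7)]))                          # → -1
```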
http://www.physicsforums.com/showthread.php?t=344551
# Work-Energy Theorem: Spring potential energy vs Kinetic Energy
by Senjai
Tags: energy, kinetic, potential, spring, theorem, work-energy
P: 104

1. The problem statement, all variables and given/known data

A 1350-kg car rolling on a horizontal surface has a speed v = 40 km/h when it strikes a horizontal coiled spring and is brought to rest in a distance of 2.5 m. What is the spring constant of the spring? Ignore friction and assume the spring is massless.

2. Relevant equations

$$W = \Delta E$$ $$E_{pspring} = \frac{1}{2}kx^2$$ $$E_k = \frac{1}{2}mv^2$$

3. The attempt at a solution

First, right off the bat, I converted 40 km/h to its m/s equivalent of approximately 11.11 m/s. I state the law of conservation of energy: energy before = energy after. Therefore $$E_k = E_{pspring}$$ $$\frac{1}{2}mv^2 = \frac{1}{2}kx^2$$ and then I isolate k: $$k = \frac{-mv^2}{x^2}$$ Now here's the issue: is x negative, because the displacement is against the direction of motion? With x = 2.5 m, (-2.5)^2 gives me an answer of 4266 N/m, but -(2.5)^2 is entirely different. This has been a long-lasting math issue for me. And what if x is positive? I know k MUST be positive, right?
HW Helper P: 4,430 (2.5)^2 is correct. There is no negative energy in nature.
PF Patron HW Helper P: 3,394 There is no minus sign in mv^2 = kx^2 or in k = mv^2/x^2. No way you can get k negative! The minus sign in F = -kx is supposed to help keep track of the fact that the force of the spring is opposite to the direction of stretch but it does seem to have a habit of getting in the way. k is ALWAYS positive.
P: 104
## Work-Energy Theorem: Spring potential energy vs Kinetic Energy
Thanks, the negative sign on mv^2 was an algebra error... Thanks for the clarification guys!
P: 69 Your attempt is correct, but you missed something: for a spring, if you take the natural length as the datum, the force for a change in length is given as $$\vec{F}= -k \vec{x}$$ and hence the work done by the spring is $$W_{s}=\int\vec{F}\cdot\vec{dx}$$ over the required limits; in our case the answer is $$W_{s}=-\frac{kx^{2}}{2}$$ As $$W_{s}=\Delta E$$, we get $$\Delta E=-\frac{mv^{2}}{2}$$ The change part was where you lost it: the KE FELL TO ZERO, hence a negative change.
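As a quick numerical footnote to the thread, the energy balance $$k = mv^2/x^2$$ can be evaluated directly; note that x enters only squared, so its sign cannot affect k (a small Python sketch, variable names mine):

```python
# Numeric evaluation of k = m v^2 / x^2 for the problem in this thread.
m = 1350.0      # mass of the car, kg
v = 40 / 3.6    # 40 km/h converted to m/s (≈ 11.11 m/s)
x = 2.5         # compression distance, m; x**2 is the same for +x and -x

k = m * v**2 / x**2
print(round(k))  # → 26667 (spring constant in N/m)
```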
https://mathemerize.com/prove-that-in-any-triangle-the-sum-of-the-squares-of-any-two-sides-is-equal-to-twice-the-square-of-half-of-the-third-side-together-with-twice-the-square-of-the-median-which-bisects-the-third-side/
# Prove that in any triangle, the sum of the squares of any two sides is equal to twice the square of half of the third side together with twice the square of the median which bisects the third side.
## Solution :
Given : ABC is a triangle and AD is the median.
To Prove :
(i) $${AB}^2$$ + $${AC}^2$$ = 2[$${AD}^2$$ + $${BD}^2$$]
(ii) $${AB}^2$$ + $${AC}^2$$ = 2$${AD}^2$$ + 2$$[{1\over 2}BC]^2$$
Construction : Draw AE $$\perp$$ BC
Proof : In right angled triangle ABE, by Pythagoras theorem,
$${AB}^2$$ = $${BE}^2$$ + $${AE}^2$$ ………(1)
In right angled triangle AEC, by Pythagoras theorem,
$${AC}^2$$ = $${EC}^2$$ + $${AE}^2$$ ……….(2)
(i) Adding (1) and (2), we get
$${AB}^2$$ + $${AC}^2$$ = 2$${AE}^2$$ + $${BE}^2$$ + $${EC}^2$$
= 2$${AE}^2$$ + $${(BD - ED)}^2$$ + $${(DC + ED)}^2$$
= 2$${AE}^2$$ + $${BD}^2$$ - 2·BD·ED + $${ED}^2$$ + $${DC}^2$$ + 2·DC·ED + $${ED}^2$$
= 2$${AE}^2$$ + $${BD}^2$$ + $${ED}^2$$ + $${DC}^2$$ + $${ED}^2$$ (the cross terms cancel because BD = DC)
= 2($${AE}^2$$ + $${ED}^2$$) + $${BD}^2$$ + $${DC}^2$$
Since in right triangle AED, $${AE}^2$$ + $${ED}^2$$ = $${AD}^2$$ and BD = CD
= 2$${AD}^2$$ + $${BD}^2$$ + $${BD}^2$$
= 2[$${AD}^2$$ + $${BD}^2$$]
(ii) Since D is the midpoint of BC, we have BC = 2BD, i.e. BD = $${1\over 2}$$BC. Substituting this into (i) gives
$${AB}^2$$ + $${AC}^2$$ = 2$${AD}^2$$ + 2$$[{1\over 2}BC]^2$$
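The identity just proved (Apollonius' theorem) is easy to spot-check numerically; a small Python sketch with an arbitrarily chosen triangle (the coordinates are mine, purely for illustration):

```python
import math

# Spot-check of Apollonius' theorem: AB^2 + AC^2 = 2(AD^2 + BD^2),
# where D is the midpoint of BC.
A = (1.0, 4.0)
B = (-2.0, 0.0)
C = (5.0, 0.0)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of BC

def d2(P, Q):
    # Squared Euclidean distance between points P and Q.
    return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2

lhs = d2(A, B) + d2(A, C)
rhs = 2 * (d2(A, D) + d2(B, D))
print(math.isclose(lhs, rhs))  # → True
```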
https://www.physicsforums.com/threads/vertex-corrections-1-loop-order-yukawa-theory.782261/
Vertex corrections 1-loop order Yukawa theory
1. Nov 16, 2014
WannabeNewton
Consider the Yukawa theory $\mathcal{L}_0 = \bar{\psi}_0(i\not \partial - m_0 - g\phi_0)\psi_0 + \frac{1}{2}(\partial \phi_0)^2 - \frac{1}{2}M_0^2 \phi_0^2 - \frac{1}{4!}\lambda_0 \phi_0^4$ with cutoff $\Lambda_0$; a lower cutoff $\Lambda < \Lambda_0$ is then introduced with an effective theory $\mathcal{L}_{\Lambda}$. We wish to then compute the 1-loop vertex corrections $\tilde{c}_2, \tilde{c}_3$ defined by $\Pi(p^2) \approx \tilde{c}_3\Lambda^2 + \tilde{c}_2 p^2$ in the diagrams below. The second diagram, with the scalar loop, is trivial to compute and isn't really the focus of my question so consider just the diagram with the fermion loop, which is of course the only diagram of the two that contributes to $\tilde{c}_2$, this being the vertex correction of relevance.
A straightforward calculation yields $\tilde{c}_2 = \frac{g^2_0}{4\pi^2}\ln \frac{\Lambda_0}{\Lambda}$. The renormalized fine structure constant is defined by $\alpha_{\Lambda} = \frac{g^2_{\Lambda}}{4\pi}$ where the running coupling was calculated in class to be $g_{\Lambda} = g_0(1 - c_1 - c_2 - \tilde{c}_2/2)$ with the vertex corrections $c_1 = \frac{g_0^2}{8\pi^2}\ln \frac{\Lambda_0}{\Lambda}, c_2 = \frac{g_0^2}{16\pi^2}\ln \frac{\Lambda_0}{\Lambda}$ coming from other 1-loop diagrams that don't fall off as $\frac{1}{\Lambda}$ or faster (e.g. self-energy diagram).
Hence $\alpha_{\Lambda} = \frac{g_0^2}{4\pi}(1 + \frac{5}{16\pi^2}g_0^2\ln \frac{\Lambda}{\Lambda_0})^2$. Thus we find $\Lambda \frac{d\alpha_{\Lambda}}{d\Lambda} = \frac{5}{8\pi^2}\frac{g_0^4}{4\pi}(1 + \frac{5}{16\pi^2}g_0^2\ln \frac{\Lambda}{\Lambda_0})$ and $\alpha_{\Lambda}^2 = \frac{g_0^4}{16\pi^2}(1 + \frac{5}{16\pi^2}g_0^2\ln \frac{\Lambda}{\Lambda_0})^4 \approx \frac{g_0^4}{16\pi^2}(1 + \frac{5}{4\pi^2}g_0^2\ln \frac{\Lambda}{\Lambda_0}) + O(g_0^4)$.
We have to show that $\Lambda \frac{d\alpha_{\Lambda}}{d\Lambda} \propto \alpha_{\Lambda}^2$ but I do not see how this is possible in the slightest given the above results, which I have verified time and time again by myself and with others. Does anyone know why the desired proportionality even holds? Thanks in advance!
2. Nov 16, 2014
The_Duck
Well, the full statement is
$$\Lambda \frac{d\alpha_\Lambda}{d\Lambda} \propto \alpha_\Lambda^2 + O(\alpha_\Lambda^3)$$
which is consistent with what you have I think.
3. Nov 16, 2014
WannabeNewton
I'm not sure I immediately see the consistency, could you show it explicitly if possible? Thanks.
4. Nov 16, 2014
The_Duck
According to your formulas we have
$$\Lambda \frac{d \alpha_\Lambda}{d \Lambda} - \frac{5}{2\pi} \alpha_\Lambda^2 = O(g_0^6) = O(\alpha_\Lambda^3)$$
5. Nov 17, 2014
WannabeNewton
Ah right, so they're only proportional to leading order in the running coupling then; that makes sense, thanks!
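The cancellation The_Duck describes can be sanity-checked numerically; a stdlib-only Python sketch (function names are mine), with L standing for $\ln(\Lambda/\Lambda_0)$, verifying that the mismatch between $\Lambda \, d\alpha_\Lambda/d\Lambda$ and $\frac{5}{2\pi}\alpha_\Lambda^2$ vanishes like $g_0^6$, i.e. like $\alpha_\Lambda^3$:

```python
import math

PI = math.pi

def alpha(g, L):
    # One-loop running coupling from the thread; L = ln(Lambda / Lambda_0).
    c = 5 * g**2 / (16 * PI**2)
    return g**2 / (4 * PI) * (1 + c * L)**2

def dalpha_dlogLambda(g, L):
    # Lambda d(alpha)/d(Lambda) = d(alpha)/dL, differentiated by hand.
    c = 5 * g**2 / (16 * PI**2)
    return g**2 / (4 * PI) * 2 * (1 + c * L) * c

def residual(g, L=1.0):
    # Difference that should be O(g^6) = O(alpha^3).
    return dalpha_dlogLambda(g, L) - (5 / (2 * PI)) * alpha(g, L)**2

# If the residual scales like g^6, halving g should divide it by ~2^6 = 64.
r1, r2 = residual(0.2), residual(0.1)
print(round(r1 / r2))  # → 64 (up to higher-order corrections)
```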
http://ask.sagemath.org/question/1534/how-do-i-quit-sage
# how do i quit sage?
I am trying to delete Sage from my computer to prepare to give it away to a relative, but it says it is in use when I try to empty the Trash. How do I quit Sage, and then uninstall it? I have Mac OS X 10.6.8.

asked Jun 17 '12 by Anonymous
## 4 Answers:
I've had this problem with "ghost Python processes" before as well when trying to delete old copies of Sage. There is something weird with how Terminal finishes certain processes. Here's how I've dealt with it:

1. Make sure you have all Sage and other Python-related processes closed. Might as well close everything you can.
2. Open your Terminal.app program. This is located in Applications -> Utilities (if you go into Finder, you can also do Command+U).
3. Make the Terminal window as tall as you can make it by dragging.
4. Run the command `top -o -command -O time`. This will list all processes (a lot) in reverse alphabetical order by command name and then in order by how long they ran, if there is more than one with the same name.
5. You'll be looking for the ones labeled `python`. If there are some that don't seem to be doing much and have fairly high PID values, they are probably from old Sage runs. I don't know why they don't die.
6. Quit top by pressing `q`.
7. Kill the process with the command `kill 47776`, where you replace the number with your number. If that doesn't kill it (check with top again), then use `kill -9 47776`.

I can't guarantee that this will be your process, but on a Mac the process ID should be something five digits, so this is quite likely it. Here is a piece of what the top output looks like; notice the ID number on the left:

    05-   scClient      0.0 03:54.38  3 1   65  79  332K 4340K 1268K  31M
    86067 quicklookd    0.0 00:00.43  9 2  101 127   12M   14M   24M 557M
    47776 python        0.0 01:59.77  1 0   19  79 1016K  244K 1632K  11M
    169-  prl_naptd     0.0 01:32.74  3 1   45  76  364K 8232K 2228K  30M
    193-  prl_disp_ser  0.1 15:21.21 12 1 5814 123 1996K 8296K 6232K  36M

posted Jun 18 '12 by kcrisman
I think the application Activity Monitor is the easiest way to track down rogue processes. It's in the Utilities folder in the Applications folder. Open that, and then take kcrisman's advice of killing python processes. Do that by selecting the process and clicking the stop sign at the top left. You can normally just quit, but sometimes you need to force quit (which is like the kill -9 that kcrisman suggests).

posted Jun 18 '12 by ooglyboogly

"Hmm, that's probably even easier. I have to admit I have never used it, because I don't know what it does as compared to top; top is just something someone showed me once, and I've only slowly gotten used to it." (comment by kcrisman, Jun 18 '12)
William's killemall shell script is another good option.

answered Jun 18 '12 by benjaminfjones

"A little too draconian for my taste, but nice." (comment by kcrisman, Jun 19 '12)
1) I suggest using Command-Option-Esc to see a list of what is currently running. Then you can "Force Quit" Sage, if it is running. Dragging the Sage folder to the trash should do it after that.

2) You might need to restart, or just log out and log back in, before emptying the Trash. At times I have had to do this with my Mac to make sure all processes related to what I was deleting were closed. What you are encountering sounds more like a Mac issue than a Sage issue, based on what you've written. (And it seems like an issue I've seen before with other items.)

answered Jun 17 '12 by calc314
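The clean-up described in these answers can also be scripted; a Python sketch (the helper names are mine; PID 47776 is just the example number from the first answer) wrapping the pgrep/kill workflow:

```python
import os
import signal
import subprocess

def leftover_python_pids():
    # `pgrep -f python` lists PIDs of processes whose command line mentions
    # python (this will include any currently running Python, so inspect
    # the list before killing anything).
    try:
        out = subprocess.run(["pgrep", "-f", "python"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return []  # pgrep not available on this system
    return [int(line) for line in out.split()]

def kill_pid(pid, force=False):
    # Polite SIGTERM first; SIGKILL (the `kill -9` from the answer) only
    # if the process survives. E.g. kill_pid(47776).
    os.kill(pid, signal.SIGKILL if force else signal.SIGTERM)

print(leftover_python_pids())  # inspect before deciding what to kill
```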
https://forum.wilmott.com/viewtopic.php?f=4&t=99702&p=874075
Serving the Quantitative Finance Community
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Sorry Doubled.
Friends, in a previous post I told you how staff at a lady psychiatrist's hospital started beating me and abused me. During the past twenty-three years, there have been many, many instances of my obvious abuse by different psychiatrists. I will describe some of those instances in this post.
Now back to my obvious abuse by two other psychiatrists. I was detained by this doctor who was running a mental health facility and was not a psychiatrist himself. At that time I did not use to walk very much and my blood pressure used to be somewhat high. Since other antipsychotics had failed to retard me, doctor suggested I take aripperizole injections. These injections affect the heart and also elevate blood pressure but doctor who wanted to retard me had no regard for my health while pretending to cure me. In initial days of my detention, staff at the hospital when checked my blood pressure told me high enough blood pressure in line with what I used to have but after I complained about aripperizole and its effect on blood pressure the staff started doctoring blood pressure and started telling me that my blood pressure was very good or even low. This is a post I wrote on another thread in this forum about it. How to safeguard my research - Page 32 - wilmott.com
.
Amin:
.
Later, after a few days, I was given a second aripperizole injection and released from detention. An injection was due every fifteen days. My parents were in Saudi Arabia and I was living with my sister. She insisted after fifteen days that I get a new aripperizole injection. I complained about too many side effects and other serious problems, but my sister insisted that I must get the injection. She talked to another doctor and the doctor told her that if his blood pressure and pulse are fine, just give him the injection. I told my sister that my blood pressure must be checked before I get the new injection. We went to a small hospital close to her house, and when my blood pressure was checked my lower blood pressure was 110. The hospital staff recommended that I stay at the hospital in their care until my blood pressure decreased. It was a totally random blood pressure check. When the hospital staff came to know that I had been given aripperizole injections, they told my sister not to give me those injections any more. My sister asked the staff to discharge me and we came home, but I was not given aripperizole injections after that.
But this is interesting that the psychiatrist was initially desperate to prescribe aripperizole despite that I had told him about my heart and blood pressure problems and his staff was asked to doctor blood pressure so that I would accept the injections.
Good thing is that only after stopping aripperizole, I started my treatment with good psychiatrist doctor Naeem Aftab (who is an American national) and he gave me enough relief that I was able to get a break in my research and solve the problem of high order monte carlo simulations. I would always be thankful to Dr. Naeem Aftab for giving me relief at such a crucial point in my life.
Yet another experience for abuse was with a seasoned psychiatrist who is known for his connections with Pakistan army. When army dictator Pervaiz Musharraf ruled the country, he made this doctor the dean(president) of the university of health sciences which regulates most medical universities in Pakistan. He had treated me for three years in year 2001. He started my treatment without interviewing me and gave me electric shocks at the start to jumpstart the therapy. I was given electric shocks for fifteen days in early 2001. I still recall that doctor would place a wooden wedge between my teeth and then give me an injection and I still recall how numbness would move in my body from feet to the upper body after the injection.
When going to that doctor's psychiatrist facility in 2001 I had to travel from Kot Addu to Rawalpindi. My family had given me some drug in my food and I became unconscious after an hour or so. I still recall how I was begging my younger brother to help me that I do not want to go anywhere and if he could help me but he like everybody else in the family refused. Slowly drug in food showed its effect and I have absolutely no memory of being taken in my father's car despite that it would have taken him at least eight hours to take me to Rawalpindi. Psychiatrist did not interview me and detained me and started giving me electric shocks. I still recall how I defecated in my bed there and they had to wash it.
Anyway, I was taken to this psychiatrist in 2017 again. He was very clearly told that I had been given the antipsychotic drug larjactil by another doctor, that it backfired, and that I had extremely severe drug-induced jaundice. But the doctor was so haughty and cavalier, and so desperate to retard me, that he still insisted that I must get the same drug larjactil, and said that it was not necessary that it would cause jaundice every time. The problem for the doctor was that most antipsychotics had failed to retard me and he wanted to give me some potent antipsychotic that had not been well-tried on me. Though the drug was given to me earlier and later discontinued, the doctor wanted to take his chances to retard me and he had no regard for my health. This doctor is a big crony of the Pakistan army and really wanted to please the generals, and therefore he decided to give me larjactil again. And as would already be obvious to most friends, after two weeks I started to show all signs of drug-induced jaundice again, and my body and eyes were all yellow. My bilirubin shot up like anything. And it took more than two months to control the jaundice after that. Though my bilirubin came back to normal after two months, several lesser-known enzymes in my liver remained disturbed for several years after that and continued to show as abnormal in my liver function test reports.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
I have been detained more than 30 times by various psychiatrists during the past twenty-five years on the orders of the Pakistan Army.
Here, I will recall an incident eight years ago when I myself approached a lady psychiatrist for help and how she abused me, and her staff beat me for twenty minutes and then tied me and started my treatment with antipsychotics again.
I am recalling this from early 2015. They had previously put me on extremely high antipsychotics and my creativity was all gone. I could barely do any intellectual work. At that time, I was able to convince my family to decrease the injections and they had agreed since I had already lost most of my creativity and intellectual capability. I think mind control agencies were also thinking that I had lost my creativity forever and they wanted to see if it was right time to decrease my drugs and still make sure that I would remain intellectually dull after injections had ended.
But I slowly started to work on my research and started to regain my brain. When mind control agencies realized that I had started to do better research, they asked my family to do something again to retard me. My father asked a self-stylized religious saint to come to our house and take care of me. He came to our house with one of his followers and told me that he wanted to give me injections and I would go to sleep for 24 hours, and they would take scans of my brain to treat me, but I continued to beg and plead him, and I was able to convince him to allow me to leave the home in my car. The supposed saint allowed me to leave since he was probably sure that he would get me later as I had no other place to go other than my home and, therefore, I was able to convince him to let me go outside. When I left the house, I made posts about the saint on at least two different forums from outside my house and pleaded for help. Enough people were following my story even at that time and the saint had to leave. I also called my father when I was outside and protested against this.
My family still remained very tense with me, and I was very afraid that they would force antipsychotics injections and mind control drugs on me therefore I stayed one night out of the home. Here is the post I made when I was away on April 14th, 2015, 2:09 pm
https://forum.wilmott.com/viewtopic.php?f=15&t=94796&start=315#p748191
"I am staying out of my home for the night since I had realized that my father will try to force me into detention again and I would request a court tomorrow for stay order to stop my father from hospitalizing me forcefully again and I would request the court that if there has to be a check on my mental health, it has to be done and investigated properly by a psychiatrist of my choice and I should remain free at the same time. I do not want to be at the mercy of my father's chosen psychiatrist as we saw in the past year that they continued to quote mental health laws and forcefully manipulate me into taking injections. If my father forcefully hospitalized me, I would be cutoff from the outer world and they would do whatever they like. My father's self-stylized saint(peer) had already hinted that he wanted to give me injections that would put me to sleep for 24 hours and he had no argument for my mental sickness at all.”
Before my computer was taken away from me, I was able to take notes for two days in the hospital on my computer that I write here. After first two days of medication, I was so extremely weak that I could barely walk and I had extreme difficulty speaking.
“My Diary for past two days.
4/27/2015
I had two sessions with the doctor N previously: on the 20th of April, and once before on the 13th or 14th of April. I remained away from home during this time; the second meeting, on the 20th of April, was on the evening after the night I stayed away from home. I had told the doctor about my circumstances in the first meeting with her. She had said that I looked perfectly fine mentally but she needed to talk to my family. On the 20th of April, I took my mother for a second, joint session, and after the meeting the psychiatrist said she really could not call me schizophrenic, but she needed one more session and asked me to come to her facility on the 27th of April.
When I arrived at the psychiatric facility, a psychologist asked me to accompany her, which I did, and on the way to her room I asked for the reason, after which she replied that she wanted to take a session of mine. I told her that I had already had a session with another psychologist and also with a psychiatrist, so I would prefer to have a new session with the psychiatrist now. She agreed and I returned to the waiting room. I was sitting on a sofa in the waiting room on the 27th at 8:00 PM (PST), in the line to see the psychiatrist, when about five guards took control of me and gave me an injection. I was beaten and my legs and hands were very strictly tied for more than an hour. Though the doctor was in her consultation room and a lot of people were waiting to see her, I was told again and again that the doctor was not present at the facility.
In the previous two sessions, she had given the impression I was healthy mentally, but she used force without interviewing me for the third time, or even seeing me, and without any request to comply. If she was given any special information, she should have talked to me about it and she should have verified with me any change in circumstances.
One injection was given around 8 PM when I arrived for the third session with the doctor N. I really think that the doctor was there, since there was a large rush of visitors and her SUV was parked in the hospital, but everybody among the staff continued to say that the doctor had not come to the facility that evening.
I was kept tied for more than one hour. I was taken, tied, from the waiting room to my room. All these rooms have video cameras, so you can ask Dr. N for a video of the events, though she might make the excuse that the cameras were not working. One of the staff had mentioned, when they brought me into a patient room in the facility, that everything was watched by cameras. I would request people familiar with the matter to ask her to show them footage of the cameras. Of course, there could be many excuses that the cameras were not working, but I insist that the cameras were working.
Good Mental health facilities try to avoid treating humans like this.
Another injection was given at night before I slept.
and my internet USB device was also taken. They simply took possession of everything out of my pockets and also took my laptop.
When I asked why they gave me injections, one of the staff told me "you were 'hyper'", and I was totally surprised, since I had been quietly sitting on the sofa in the waiting lounge for the doctor. I was told that I would stay in the hospital for three days. I am sure this matter of who is lying and who is truthful can be resolved once you look at the video footage of the cameras in most rooms.
4/28/2015
One injection was given in the morning. One injection was given around noon. All injections were IM (given in the muscle.)
Unknown pills (some pills in the morning + 5 pills around noon + 3 pills at night) were given, and I was told, "We will tell you the names of these pills only when you leave."
My computer was returned today, but I was told "we did not take your mobile", and they continued to say they never took my mobile. My mobile was never returned (despite the fact that I always keep my mobile on me).
I was also not given my EVO USB device and I was told you cannot write anything on internet. However the USB device was returned to my mother at night.
I was told today by the psychologist that doctor N had gone to America, would come back after seven days, and only then would she be able to see me. I was told they were adjusting my medicine to see which was a good medicine, and that they were discussing the drugs with her on the phone and Skype. There were several conflicting statements about where Dr. N was. Somebody said she was just out of the city and had gone to Islamabad.
When I asked my mother, who was living with me at the hospital, for a second opinion because of the beating and other irresponsible behavior of the psychiatrist, she would make all sorts of excuses, while the staff would say that I could not leave the hospital until Dr. N arrived, as she was currently out of the country.
Earlier today, I was told that I would live at the hospital for fifteen days and later I was told that I would live at the facility for seven days. When I was tied, somebody had told me that I would live in the hospital for three days only.
My mother told me please do not write anything wrong on internet otherwise, they would take your USB forcefully like Dr. A had done more than one year ago. The purpose was to threaten me to not write about any maltreatment so that the doctor would continue to freely manipulate me.
7-8 pills were given in the afternoon and a similar number of pills were given at night. At many times during the day, I could barely walk and had extreme difficulty speaking.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
My Letter To HRW About My Human Rights Abuse
Above is a very incomplete account of how various psychiatrists continued to play in hands of Pakistan army to retard me. It also does not cover all the time till year 2022 and the account stops several years earlier.
Pakistan Army has no personal animosity with me. Chief of Pakistan Army staff is given more than 50 billion rupees every year come rain or shine so that army continues American Mind control of intelligent people in Pakistan. Even when the relations between both countries are extremely bad, this bribe is always given to Pakistan army generals in time.
Before writing this post, I want to apologize to good jews but want to tell them there are enough evil jews who are doing everything to tarnish the great goodwill jews have so rightly earned through generations of hard work and being nice and kind to other people in the society. I want to request good jews to Please force these evil(jewish) people to end the bad practices that are starting to take root in American and other societies on their behest and before it is too late. I also really hope that my mind control would finally end(as I have been begging for past twenty years) and I would never have to write a post like this again.
Since things might heat up after my writing this and my persecution might increase, I want to ask European embassies in Pakistan to keep a vigil. I want to tell Europeans that my writing about mind control is in their great interest since the very same people who instigate retarding of intelligent muslims are behind retarding of dozens of intelligent europeans and once these people are exposed I am sure they will be far more careful in openly retarding European talent(that they do not like).
I had told friends earlier that a jewish billionaire is Godfather of mind control. When my persecution started (in late 1997) at the university in New York which is a big time player in mind control of Muslims, blacks and foreigners, many university officials were very pleased that they were getting written in good books of the billionaire jewish Godfather. Though there are far richer billionaires in US who are staunchly opposed to mind control, they are not ready to shell out hundreds of millions of dollars to bribe anyone to end mind control while mind control Godfather continues to give large bribes to people to continue mind control. Many of the people related to defense in mind control are given extremely high paying jobs in Godfather and his other jewish friends' companies after they retire from mind control and Godfather is extremely generous towards them and it is well established and well known to people in mind control that after retarding muslims, blacks and others in their career, they will move to paradise after they retire. No wonder there is no let up in mind control whatever good people try.
We know in almost all classic folk lore stories evil is too strong and too sure of itself while good is mostly weak and just trying to protect itself from evil who is bent on damaging and eradicating the weak and the poor good but it is the weak and poor good that finally prevails. I want to tell good American friends that evil at this stage has become too bold and thousands of talented people are retarded every year. And it is just not muslims and blacks that are on the line, several hundreds of brilliant whites are retarded across United States by mind control agency every year that has become too emboldened due to lack of any accountability. Thousands of foreigners are also kept retarded in dozens of countries including many European countries.
My only intention behind this post is that I want mind control to end from our societies. I do not have any joy in mentioning that there are some bad jewish actors behind it. In American society there are a large number of jews who have excelled in the professions of science, education, medicine and have made great contributions to our societies. There are a very large number of jewish professors each of whom would have taught tens of thousands of students on top of making extraordinary contributions to body of human knowledge through their great research. Then there are accomplished jewish scientists whose work, research and inventions make our everyday life better. There are great jewish doctors and surgeons who would treat countless sick people and make them recover from disease. The list of contributions of jewish people to humanity is very long and it continues. Then there are a large number of ordinary jewish folks who are known for their goodness and civility to other people in the society. When we know all of this, we do not want to say anything anti-jewish to hurt the feelings of so many good jewish people. But I still have to say there are really some bad people who are unconnected with most of the nice and accomplished jews and they instigate and continue cruel mind control torture on talented people of other ethnicities, religions and countries based on their personal dislike and hatred. My purpose is only to end the mind control victimization and torture on otherwise nice and talented people and I do not have any hidden intentions beyond that. 
And I am sure American society knows very well about the contributions of good Jews to both their country and broader humanity and my exposing some bad actors would never have any bad effect of any kind on these good and nice jews who we all respect and who are our role models.I want to tell friends something I have said before here that I am sure American society would eventually end the menace of mind control from their and other countries. I want to request all good Americans and Europeans to come forward and invite any investigative journalist you know to read my threads and ask them to help end the menace of mind control in our societies through their investigative journalism. Without any accountability evil will continue to grow stronger and then mind control would start to threaten even those people who consider themselves safe due to affiliations, status or wealth.
I want to request CNN, New York Times, Washington Post, Al-Jazeera and large reputed European media outlets to run a comprehensive story on mind control torture, retarding and victimization of intelligent people of US and other nations by crooks in US army on behest of some powerful people in United States and due to rightwing extremist biases of these crooks. If you would like to do investigative research about animal practices of US army, one great resource would be mainland European embassies in Muslim countries who keep a detailed account of mind control persecution of intelligent muslims in these countries by crooks in US army. Many of the staff in mainland European embassies are very good human beings who abhor such practices and would love to cooperate with good journalists in exposing the evil animal practices of US army crooks. Only the accounts of people at mainland European embassies in muslim countries thorough animal practices of American army's mind control wing would be enough to drop a great bombshell in the media and general public all across the world. I want to warn all the good people who try to have a civilized dialogue with crooks in American army to end their evil practices that their being nice with crooks is a very misguided approach. Since good people do not have the power to forcefully end the evil practices of US army crooks, only way to end the evil practices of US army crooks is by exposing them openly in US public and all across the world.
It seems that crooks in American army and mind control are unable to pay a heed to civilized calls by good American people and others to end their animal cruel practices and remain bent on mischief. If crooks in American army and mind control remain adamant on continuing evil practices, the only solution seems that some brave investigative journalists expose them by running a comprehensive story on mind control practices by American army. It seems that doing a civilized dialogue with hardened crooks in mind control gives them a strong sense of weakness of good people and makes the crooks even more adamant to continue their evil practices. Reminds me of Abu-Gharib. Very similarly, Crooks in defense had no conception that they were doing any wrong thing when there was a culture of openly urinating on human captives. It was an open thing in army and most crooks thought it was indeed a very right thing to do( and I am sure some of those crooks still believe that it was a right thing they did even after being reprimanded by the broader American society). It was not until brave people at CNN did a daring story against animal practices by crooks that evil practices stopped. Though American army crooks retard intelligent people of all color and creed, these ultra right wing army crooks love to retard blacks and muslims with great relish. Blacks are only 8-10% of US population but make a very large proportion of victims in united states since many ultra right wing crooks in US army would rather die than let those intelligent blacks succeed in American society in a big way.
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, I have previously described several horror experiences with psychiatrists but there are many more such experiences that I have not posted on this thread. Today I am going to describe a continuing experience in which the psychiatrist did everything in his power to retard me.
This is from year 2004/2005. I had a sound logic at that time and I was working from Lahore at a job with a foreign Japanese company that was paying me \$4000 per month. I was still on high antipsychotics and when high dosage of normal antipsychotics did not work, in order to retard me, the psychiatrist employed a procedure that is not known in psychiatric practice in any country. The psychiatrist would give me some drugs in a glucose drip every two/three days in the operation theater. When I would be given the glucose drip, I would gradually start to lose consciousness and when half of the drip bottle would have gone into my blood, I would literally be unconscious, and I would later be taken unconscious to my hospital bed by the hospital staff and I would wake up several hours later on the bed in my hospital room and I would be extremely weak.
It was such a frightening experience that I would shudder from horror as the day of this treatment would come closer and continued to beg the psychiatrist to stop this treatment. The psychiatrist would say that he was giving me vitamins and vitamins were always harmless. He insisted that vitamins had to be given to me. He never told me what drugs he gave me that made me unconscious and never discussed it with me other than mentioning vitamins. The psychiatrist continued to perform this procedure for several weeks after sometimes second or usually third day. Most people cannot imagine how it would feel when the drug in glucose drip would go into my body and I would lose my senses. To this day I shudder from complete horror and recall the extremely sick feeling when I would slowly lose consciousness.
Now when I look back, I realize that they were giving me mind control drugs in the drip to take the intelligent neurotransmitters out of my body. They continued to try to take intelligent neurotransmitters out of my body again and again through this procedure for several weeks. I was completely helpless before my family and the psychiatrist and had to go through such unlawful medical procedures.
As friends would have realized that purpose of mind control agencies was always to retard me since neocon scumbags of pentagon and their backers were not capable of appreciating my humanity and did everything in their power to retard me with an iron fist.
I have seen continuing pain, trauma and torture in my life only because I had some special neurotransmitters that people with hatred and venom were not ready to let me have since they considered it a threat to let a Muslim have those neurotransmitters.
After starting mind control in Lahore in early 2000's, bad jews (again apologies to good jews) were very emboldened how easy it was to manipulate army and government in Pakistan to start mind control of intelligent people in Pakistan. It was a time when Pervaiz Musharraf who was an army dictator ruled the country. He was very ready to please the CIA in order to get help from American army to prolong his illegal dictatorship in Pakistan. He fixed in place all the required mind control infrastructure in all large cities of Pakistan to please American army who was playing in hand of bad jews to damage the Muslim majority country.
When Americans rightly blame our country for terrorism, they must not forget that many neocon generals in American army have played in the hand of bad jews and have made every effort to systematically destroy talent in Pakistan on a very large scale using mind control technologies to keep it backward forever.
Mind control in Pakistan is ensured by top Pakistani army generals since they are given several hundred million dollars by US army every year and the top brass of Pakistan army divides this money among themselves with bigger fish getting larger share and smaller fish getting a smaller share. Therefore, mind control infrastructure in Pakistan is very detailed and thorough.
From my own experience, there is very large mind control activity in Lahore and Islamabad, the two cities where I have lived during the past twenty years. There are at least more than 500 mind control targets in Pakistan most of whom are computer science majors.
Mind control in Pakistan and many other places increased dramatically like never before. We have in Lahore a huge building called Arfa Towers where all the facilities for technology and IT companies were provided to attract new firms and entrepreneurs and later Arfa Towers became a scene of mind control carnage since tens of aspiring computer scientists and entrepreneurs were retarded and it became very easy for mind control agencies to find the talent since the facility attracted talent and retard it. Time between 2008 and 2016 was like black death for talent in technology in our country (I am not talking about pseudo-talent that American embassy continued to support and showcase heroically in the meantime.) I still recoil with horror when I look back and recall how difficult it was to get good drinking water in Lahore during that time for more than six months in 2013-2014. I would have to go to remote parts of the city, go to villages around the city and sometimes even go to other cities and only then I would be able to get some good water after repeated attempts and my only worry during the day would be to somehow get good food and water.
At this point I like to tell friends that a lot of food and beverages that are drugged with mind control chemicals would be eaten or taken by most of the population that is not on mind control and they would never notice any anomaly in taste or any other effect since these mind control chemicals are inactive and inert and become chemically active only when targeted remotely with precise electromagnetic wave frequencies so only target victims are affected and this is a very cruel method to target intelligent people by drugging the entire supply of food and beverages and then focusing waves on the targeted individuals to activate the mind control chemicals. Pakistan army and mind control agencies would drug the entire public water supply and beverages knowing that the victim would have to drink something somewhere. Ground water is also so thoroughly drugged with mind control chemicals that entire public water supply of the city and several kilometers out of the city would be completely drugged with mind control chemicals. Pakistan army routinely does this in Lahore city on behest of American mind control agencies.
Ten years ago, army soldiers and agents would go to markets to drug the targeted food and beverages with mind control chemicals which they still. Large scale movements of soldiers and agents raise suspicions in general public and therefore mind control chemicals are supplied to beverage and food manufacturing companies in Pakistan, and they simply add the chemicals in food and beverages during the manufacturing process.
In Lahore city, there are more drugged beverages at any time than good beverages in small stores or large supermarkets alike since most of the beverages are drugged at manufacturing source by coca-cola, pepsi-cola and Nestle.
Coffee stimulates the brain and the brain connects in new ways after taking coffee and good coffee creates problems with keeping victims on mind control since their brain starts connecting in many new ways after taking good coffee. All coffee and I repeat that entire coffee in the country present in the stores is drugged with mind control chemicals. Most of the coffee on good stores is imported from foreign countries. I do not know how it works but importer is probably told by army to drug the imported coffee before distributing it in the market. It is even very difficult or rather impossible to get good coffee beans. Two or three years ago it was possible to find some brand of coffee that was good by trying all new brands but it has not worked for past two years now for me despite that I keep trying. I even know of instances that when I was able to buy good coffee of a certain brand from very large and posh stores, they took entire coffee brand off the shelf from all their branches within two to three days. People know that Corrupt Army in Pakistan that ensures mind control is more powerful than any political government here.
Tea used to be good but for past one year most mainstream Pakistani brands of black tea are drugged with mind control drugs. This has still not caught up thoroughly like coffee is thoroughly drugged and it is still possible to find some good foreign brands tea that is good.
All stimulant/energy drinks including red bull are well drugged at manufacturing source. Red bull was better till last year but then it became drugged everywhere. I even tried it at remote places in several cities other than Lahore and it has always been drugged lately.
Most of the soaps, shampoos, creams, toothpastes are drugged with mind control chemicals. I do not use shampoos or creams but whatever rare I tried them once in a while they were drugged and they charged my skin. All of these have mind control chemicals that charge the skin and make mind control extremely effective. Many people would be amazed when I tell them that even though I would be in my full senses before washing my head and face with a soap but after washing my head and face with a drugged soap, my eyesight would deteriorate and my conception of reality would completely change and sometimes I would face extreme anxiety. It is hard to emphasize that even a small thing like this can completely change the victim's state of mind and consciousness. Getting a good soap was very difficult for me till last year and I would drive for several hours in different parts of the city especially looking at all the pharmacies on the way to find some unknown brand or some soap that would have been manufactured several years ago when soaps were not getting completely drugged. I tried all the soaps including beauty soaps that are specially made for women, even many unknown brands, and specially imported soaps(which were all universally drugged thoroughly) and it was very very difficult to find a good soap. After finding a good soap, I would keep it in my pocket all the time. This is a small thing but you cannot live without it. Since our mouth is kept charged and some charges continue to go into victim's body from one's mouth. Therefore toothpastes are also added with mind control chemicals to charge inside of our mouth. You can imagine that after buying five or six tooth pastes of different remote brands, I would get a good tooth paste. I have recently been using toothpaste that is imported from abroad and is good. Toothpastes manufactured in Pakistan are all drugged with mind control chemicals.
Another big problem is that pharmaceutical companies add mind control chemicals to their drugs(pills etc). Most of the mind control victims are targeted due to their talent and intelligence in mind many times might require special minerals that are rare in our food. Most of the mineral and vitamin formulas are also drugged with mind control chemicals. Most mainstream mineral and vitamin pills in the market are thoroughly drugged with mind control chemicals. I never like to take names but I would like to tell friends that greatest crook of all times is Glaxosmithkline( I would invite them to sue me for writing this on Wilmott). GSK's mineral and vitamin formulas, effervescent calcium tablets, and toothpastes are all drugged with mind control chemicals. There is a tablet by the name Kemadrin that has to be taken with antipsychotics and GSK manufactures it and it is also drugged with mind control chemicals. My experience with GSK is so bad that If I have to buy any medicine and if it is made by GSK, I would try my utmost to avoid taking it for the fear of its being drugged with mind control chemicals and look for alternatives.
Injection fluanxol and clopixol are also drugged with mind control chemicals.
Again all this thorough infrastructure took more than fifteen years to develop in Pakistan and Pakistan army continued to work hard to develop it over the years. There are at least five hundred brilliant Pakistanis most of whom are computer scientists who are on active mind control to thwart them to contribute to society in any good way.
All this infrastructure exists in Pakistan and was developed because neocon generals wanted to please bad but rich jews and American mind control agencies continue to perfect it over last twenty years. Again, mind control in Pakistan is ensured by top Pakistani army generals since they are given several hundred million dollars by US army
I have not talked about special infrastructure that is used to project EM waves on the target and this infrastructure is present on all major roads and public places in big cities and major highways and motorways in the country.
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, I did a two-year monte carlo simulation in a stochastic volatility model and took eight snapshots of the density of the stock and the stochastic variance at .25-year intervals. The red line in the graphs is the density constructed from Z-series coeffs using the hermite inner product method and the green line is the direct bin density. I used two million paths. The Z-series density construction within the simulation works seamlessly, as you can see in the graphs. I conceived this method just last night and went ahead and programmed it. I am sure if there are any minor problems, we would be able to resolve them. Here are the graphs.
I will post the Matlab program used to construct these graphs in twenty minutes and will come back with another post later tonight explaining the analytics of the method with latex equations.
The good thing is that we can calculate the Z-series coeffs within the simulation and do all sorts of variance calculations for variance derivatives, and also do hermite-wise correlations and regressions, all within the monte carlo framework. However, the current matlab program only calculates the Z-series density within the simulation using hermite inner products with the SDE variable data.
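The hermite inner product step can be sketched in code. Since the Matlab helper FindHermiteCoefficientsFromSimulation_Infiniti_NEW is only called (not shown) in the listing below, the normal-scores mapping and the function names here are my own assumptions about one plausible way to do the projection; a minimal Python sketch:

```python
import numpy as np
from math import factorial
from statistics import NormalDist

def hermite_he(n, z):
    # Probabilists' Hermite polynomial He_n(z) via the recurrence
    # He_{k+1}(z) = z*He_k(z) - k*He_{k-1}(z)
    h_prev, h = np.ones_like(z), z
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, z * h - k * h_prev
    return h

def hermite_coeffs(samples, order=7):
    """Hermite inner products with simulated SDE data: sort the samples,
    pair them with normal scores Z (the quantile transform), and project:
    ch_n = E[Y He_n(Z)] / n!,  using E[He_n(Z)^2] = n! for standard normal Z."""
    y = np.sort(np.asarray(samples, dtype=float))
    n = len(y)
    nd = NormalDist()
    z = np.array([nd.inv_cdf((i - 0.5) / n) for i in range(1, n + 1)])
    return [float(np.mean(y * hermite_he(k, z)) / factorial(k))
            for k in range(order + 1)]
```

For exactly Gaussian samples this recovers ch[0] close to the mean, ch[1] close to the standard deviation, and near-zero higher coefficients; converting the Hermite coefficients to plain powers of Z then gives the c0, c form printed by the program.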
Here are the SDEs of stock process and stochastic variance process.
dVt=kappa *(theta - Vt)*dt+sigma0 * Vt^gamma * dz1(t)
Xt(0)=x0
dXt=kappaX * (thetaX - Xt )*dt+sigmaX* Vt^gammaV * Xt^gammaX *dz2(t);
<dz1(t),dz2(t)>=rho *dt
in simulations below kappaX is zero.
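For readers who want a runnable baseline before the full listing, here is a plain Euler-Maruyama discretization of the two SDEs above, written as a Python sketch; the actual program below uses a higher-order scheme, and I have reduced the path count for speed. Parameter values are taken from the Matlab listing.

```python
import numpy as np

# Parameter values from the Matlab listing (paths reduced for speed)
kappa, theta, sigma0, gamma = 1.50, 0.075, 0.95, 0.95
sigmaX, gammaV, gammaX, rho = 1.0, 0.5, 0.65, -0.5
v0, x0 = 0.25, 1.0
T, steps, paths = 1.0, 256, 100_000

rng = np.random.default_rng(1)
dt = T / steps
V = np.full(paths, v0)
X = np.full(paths, x0)
for _ in range(steps):
    dz1 = rng.standard_normal(paths) * np.sqrt(dt)
    # build dz2 so that <dz1(t), dz2(t)> = rho*dt
    dz2 = rho * dz1 + np.sqrt(1 - rho**2) * rng.standard_normal(paths) * np.sqrt(dt)
    # kappaX = 0 in the simulations, so X has no drift term here
    X = X + sigmaX * V**gammaV * X**gammaX * dz2
    V = V + kappa * (theta - V) * dt + sigma0 * V**gamma * dz1
    V = np.maximum(V, 1e-7)   # keep the CEV-type variance positive
    X = np.maximum(X, 1e-7)

print("mean X(T) =", X.mean())   # driftless X: mean stays near x0 = 1
print("mean V(T) =", V.mean())   # near theta + (v0-theta)*exp(-kappa*T)
```

With kappaX = 0 the stock has no drift, so its mean stays near x0, and the mean of V relaxes toward theta at rate kappa; both are quick sanity checks on any discretization of this model.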
Here are the stock and stochastic variance graphs at intervals of .25 years.
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Here is the program used to construct the above graphs, with all its auxiliary functions.
The program takes a small step and stops after every .25 years to construct the asset density and the stochastic variance density. Below are the Z-series coeffs that you get when the program stops after one year (the 4th time).
You will get the following coeffs for asset density
ch0 =
1.000504293954663
ch =
0.387944879882010 0.019230128572635 0.000494145447108 0.001588525145643 -0.000062189649459 0.000006642248086 0.000004857389595
c0 =
0.985940107097660
c =
0.385019572891282 0.009997878862670 0.001626067849219 0.001488891424346 -0.000164194830963 0.000006642248086 0.000004857389595
And the following coeffs for variance density.
dh0 =
0.113983075015833
dh =
0.082903984760398 0.034457076958627 0.010065124779018 0.001985239782762 0.000244600151872 -0.000018504490563 -0.000026167026254
d0 =
0.085759284763932
d =
0.059125150458094 0.021712936186735 0.004871585503626 0.002262807141202 0.000794107703207 -0.000018504490563 -0.000026167026254
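As a check, the red Z-series curve can be regenerated directly from the asset coefficients printed above. Here is a small Python sketch of the same change-of-variables construction used at the end of the Matlab listing, except that the derivative dY/dZ is taken analytically rather than by finite differences, and Z is restricted to [-3, 3] where the map stays monotone:

```python
import numpy as np

# Asset-density Z-series coefficients, copied from the printout above
c0 = 0.985940107097660
c = [0.385019572891282, 0.009997878862670, 0.001626067849219,
     0.001488891424346, -0.000164194830963, 0.000006642248086,
     0.000004857389595]

z = np.linspace(-3.0, 3.0, 241)                   # grid in the normal variable
Y = c0 + sum(ck * z**(k + 1) for k, ck in enumerate(c))
dYdz = sum((k + 1) * ck * z**k for k, ck in enumerate(c))  # analytic derivative
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)      # standard normal pdf
pY = phi / np.abs(dYdz)                           # change-of-variables density
# plotting pY against Y reproduces the red Z-series density in the graphs
```

The trapezoid mass of pY over Y should be close to the normal mass on [-3, 3] (about 0.997), which is a cheap consistency test for any coefficient set.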
function [] = SVMonteCarloDensity()
%Copyright Ahsan Amin. Infiniti derivatives Technologies.
%or skype ahsan.amin2999
dt=.125/2/2/2; % Simulation time interval. For diffusions close to zero
%decrease dt for accuracy.
Tt=128;%64*4*2;%16*2*4;%*4*4*1;%*4;%128*2*2*2; % Number of simulation levels. Terminal time= Tt*dt; //.125/32*32*16=2 year;
T=Tt*dt;
if(T>=1)
dtM=1/32;
TtM=T/dtM;
else
dtM=T/32;
TtM=T/dtM;
end
v00=.250; % starting value of SDE
beta1=0.0;
beta2=1.0; % Second drift term power.
gamma=.950;%50; % volatility power.
kappa=1.50;%.950; %mean reversion parameter.
theta=.075;%mean reversion target
sigma0=.9500;%Volatility value
%you can specify any general mu1 and mu2 and beta1 and beta2.
mu1=1*theta*kappa; %first drift coefficient.
mu2=-1*kappa; % Second drift coefficient.
%dVt=(mu1*Vt^beta1 + mu2*Vt^beta2)*dt+sigma0 * Vt^gamma * dz1(t)
%Xt(0)=x0
%dXt=(mu1X*Xt^alpha1+mu2X*Xt^alpha2)*dt+sigmaX* Vt^gammaV * Xt^gammaX *dz2(t);
%<dz1(t),dz2(t)>=rho *dt
x0=1.00;
gammaX=.65;
sigmaX=1.0;
thetaX=.50;
kappaX=0.0;
mu1X=thetaX*kappaX;
mu2X=-1*kappaX;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%rng(29079137, 'twister')
rng(15898837, 'twister')
paths=2000000;
V(1:paths)=v00; %Original process monte carlo.
X=0.0;
X(1:paths)=x0;
alpha1=0;
alpha2=1;
a=mu1X;
b=mu2X;
rho=-.5;
sigma1=sigmaX;
gammaV=.5;
Random1(1:paths)=0;
Random2(1:paths)=0;
for ttM=1:TtM
Random1=randn(size(Random1));
Random2=randn(size(Random2));
time1=ttM*dtM
X(1:paths)=X(1:paths)+ ...
(a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) * sqrt(dtM) + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*sqrt(dtM) + ...
(a*alpha1* X(1:paths).^(alpha1-1)+b*alpha2* X(1:paths).^(alpha2-1)).* ...
(((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM^2/2)+ ...
(sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) *(1-1/sqrt(3)).*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*(1-1/sqrt(3)).*dtM^1.5))+ ...
.5*(a*alpha1*(alpha1-1)* X(1:paths).^(alpha1-2)+b*alpha2*(alpha2-1).* X(1:paths).^(alpha2-2)).* ...
( sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX)) *dtM^2/2 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*(Random1(1:paths).^2-1) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random1(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
(sigma0^2*V(1:paths).^(2*gamma).*Random1(1:paths)*1/sqrt(3)*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random1(1:paths)*1/sqrt(3)*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random2(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
sigma0^2.*V(1:paths).^(2*gamma).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random2(1:paths)*1/sqrt(3)*dtM^1.5;
X(X<0)=.0000001;
if(rem(ttM,8)==0) %On every eighth monte carlo step construct the density
MaxCutOff=100;
NoOfBins=200;
[XDensity,IndexOutX,IndexMaxX] = MakeDensityFromSimulation_Infiniti_NEW(X,paths,NoOfBins,MaxCutOff );
SeriesOrder=7;
NoOfBins=200;
MaxCutOff=9999999;
%Below function finds hermite coefficients of monte carlo simulated SDE
%of the stock asset.
[ch0,ch] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(X,paths,NoOfBins,MaxCutOff );
[c0,c] = ConvertHCoeffsToZCoeffs(ch0,ch,7);
ch0
ch
c0
c
str=input('Look at Coeffs of Z-series of stock SDE')
%Below construct density on Gaussian grid.
%First calculate normal random variable on a grid below
dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
Nn=45*4; % No of normal density subdivisions
NnMid=((1+Nn)/2)*dNn;
Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
%Z
%str=input('Look at Z');
%Now calculate Y as a function of normal random variable.
Y(1:Nn)=c0;
for nn=1:SeriesOrder
Y(1:Nn)=Y(1:Nn)+c(nn)*Z(1:Nn).^nn;
end
%Now take change of densities derivative of Y with respect to normal
DfY(1:Nn)=0;
for nn=2:Nn-1
DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
%Change of variable derivative for densities
end
DfY(Nn)=DfY(Nn-1);
DfY(1)=DfY(2);
%Now calculate the density of Y from density of normal random variable
%using change of probability derivative.
pY(1:Nn)=0;
for nn = 1:Nn
pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
end
plot(Y(1:Nn),pY(1:Nn),'r',IndexOutX(1:IndexMaxX),XDensity(1:IndexMaxX),'g');
title(sprintf('Stock Density: x0 = %.2f,thetaX=%.2f,kappaX=%.2f,gammaX=%.2f,sigmaX=%.2f,rho=%.2f,v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',x0,thetaX,kappaX,gammaX,sigmaX,rho,v00,kappa,theta,gamma,sigma0,time1,dtM));
legend({'Z-Series Density of Stock From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
str=input('Look at density of stock from simulated data. From Z-series Coefficients in red, Directly from data in green');
end
VBefore=V;
V(1:paths)=V(1:paths)+ ...
(mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*sqrt(dtM) + ...
(mu1.*beta1*V(1:paths).^(beta1-1) + mu2.*beta2.*V(1:paths).^(beta2-1)).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM^2/2 + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*(1-1/sqrt(3))*dtM^1.5) + ...
.5*(mu1.*beta1.*(beta1-1).*V(1:paths).^(beta1-2) + mu2.*beta2.*(beta2-1).*V(1:paths).^(beta2-2)).* ...
sigma0^2.*V(1:paths).^(2*gamma).*dtM^2/2 + ...
sigma0*gamma*V(1:paths).^(gamma-1) .* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths).*1/sqrt(3)*dtM^1.5 + ...
sigma0.*V(1:paths).^gamma .*(Random2(1:paths).^2-1)*dtM/2) + ...
.5*sigma0*gamma*(gamma-1).*V(1:paths).^(gamma-2) .* ...
sigma0^2.*V(1:paths).^(2*gamma) .*Random2(1:paths).*1/sqrt(3)*dtM^1.5;
if(rem(ttM,8)==0)
V(V<=0)=.0000001;
MaxCutOff=100;
NoOfBins=1000;
[VDensity,IndexOutV,IndexMaxV] = MakeDensityFromSimulation_Infiniti_NEW(V,paths,NoOfBins,MaxCutOff );
%Below function finds hermite coefficients of monte carlo simulated SDE
%of stochastic variance.
NoOfBins=400;
MaxCutOff=9999999;
[dh0,dh] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(V,paths,NoOfBins,MaxCutOff );
[d0,d] = ConvertHCoeffsToZCoeffs(dh0,dh,7);
dh0
dh
d0
d
str=input('Look at Coeffs of stochastic variance Z-Series')
%Below construct variance density on Gaussian grid.
%First calculate normal random variable on a grid below
dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
Nn=45*4; % No of normal density subdivisions
NnMid=((1+Nn)/2)*dNn;
Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
%Z
%str=input('Look at Z');
%Now calculate Y as a function of normal random variable.
Y(1:Nn)=d0;
for nn=1:SeriesOrder
Y(1:Nn)=Y(1:Nn)+d(nn)*Z(1:Nn).^nn;
end
%Now take change of densities derivative of Y with respect to normal
DfY(1:Nn)=0;
for nn=2:Nn-1
DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
%Change of variable derivative for densities
end
DfY(Nn)=DfY(Nn-1);
DfY(1)=DfY(2);
%Now calculate the density of Y from density of normal random variable
%using change of probability derivative.
pY(1:Nn)=0;
for nn = 1:Nn
pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
end
plot(Y(1:Nn),pY(1:Nn),'r',IndexOutV(1:IndexMaxV),VDensity(1:IndexMaxV),'g');
legend({'Z-Series Density From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
title(sprintf('Variance Density: v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',v00,kappa,theta,gamma,sigma0,time1,dtM));
str=input('Look at density of volatility from simulated data. From Z-series Coefficients in red and directly from data in green');
end
end
%SVolMeanAnalytic=thetaV+(V0-thetaV)*exp(-kappaV*dt*Tt)
SVolMeanAnalytic=theta+(v00-theta)*exp(-kappa*dt*Tt)
SVolMeanMC=sum(V(1:paths))/paths
AssetMeanAnalytic=x0
AssetMeanMC=sum(X(1:paths))/paths
end
.
.
The following function is responsible for calculating Hermite coefficients with the Hermite inner-product method. I will give detailed equations in a few hours. These Hermite coefficients can later be converted to Z-series coefficients for density construction.
.
function [ch0,ch] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(X,Paths,NoOfBins,MaxCutOff )
%Processes monte carlo paths to return the hermite coefficients ch0 and ch
%of the simulated variable X, using a binned density and CDF inversion.
%
Xmin=999999;
Xmin1=999999;
Xmin2=999999;
Xmax=0;
Xmax1=0;
Xmax2=0;
mean=0;
for p=1:Paths
%if(X(p)>MaxCutOff)
%X(p)=MaxCutOff;
%end
%if(X(p)<0)
%X(p)=0;
%end
mean=mean+X(p)/Paths;
if(Xmin>real(X(p)))
Xmin2=Xmin1;
Xmin1=Xmin;
Xmin=real(X(p));
end
if(Xmax<real(X(p)))
Xmax2=Xmax1;
Xmax1=Xmax;
Xmax=real(X(p));
end
end
%Xmin
%Xmin1
%Xmin2
%Xmax
%Xmax1
%Xmax2
%str=input('Look at Xmin and Xmax');
%IndexMax=NoOfBins+1;
BinSize=(Xmax2-Xmin2)/NoOfBins;
%IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
IndexMax=floor((Xmax2-Xmin2)/BinSize+.5)+1
XDensity(1:IndexMax)=0.0;
for p=1:Paths
index=real(floor(real(X(p)-Xmin2)/BinSize+.5)+1);
if(real(index)<1)
index=1;
end
if(real(index)>IndexMax)
index=IndexMax;
end
%XDensity(index)=XDensity(index)+1.0/Paths/BinSize;
XDensity(index)=XDensity(index)+1.0/Paths;
end
IndexOut(1:IndexMax)=Xmin2+(0:(IndexMax-1))*BinSize;
XCDF(1:IndexMax)=0;
Z(1:IndexMax)=0;
XCDF(1)=XDensity(1)*.5;
Z(1)=norminv(XCDF(1));
for nn=2:IndexMax
XCDF(nn)=XCDF(nn-1)+XDensity(nn-1)*.5+XDensity(nn)*.5;
Z(nn)=norminv(XCDF(nn));
end
%XCDF
%Z
%str=input('Look at XCDF and Z');
ch0=mean;
ch(1:7)=0;
for nn=1:IndexMax
ch(1)=ch(1)+IndexOut(nn).*Z(nn).*XDensity(nn);
ch(2)=ch(2)+.5*IndexOut(nn).*(Z(nn).^2-1).*XDensity(nn);
ch(3)=ch(3)+1/6*IndexOut(nn).*(Z(nn).^3-3*Z(nn)).*XDensity(nn);
ch(4)=ch(4)+1/24*IndexOut(nn).*(Z(nn).^4-6*Z(nn).^2+3).*XDensity(nn);
ch(5)=ch(5)+1/120*IndexOut(nn).*(Z(nn).^5-10*Z(nn).^3+15*Z(nn)).*XDensity(nn);
ch(6)=ch(6)+1/720*IndexOut(nn).*(Z(nn).^6-15*Z(nn).^4+45*Z(nn).^2-15).*XDensity(nn);
ch(7)=ch(7)+1/720/7*IndexOut(nn).*(Z(nn).^7-21*Z(nn).^5+105*Z(nn).^3-105*Z(nn)).*XDensity(nn);
end
end
.
.
.
function [a0,a] = ConvertHCoeffsToZCoeffs(aH0,aH,SeriesOrder)
if(SeriesOrder==3)
a0=aH0-aH(2);
a(1)=aH(1)-3*aH(3);
a(2)=aH(2);
a(3)=aH(3);
end
if(SeriesOrder==4)
a0=aH0-aH(2)+3*aH(4);
a(1)=aH(1)-3*aH(3);
a(2)=aH(2)-6*aH(4);
a(3)=aH(3);
a(4)=aH(4);
end
if(SeriesOrder==5)
a0=aH0-aH(2)+3*aH(4);
a(1)=aH(1)-3*aH(3)+15*aH(5);
a(2)=aH(2)-6*aH(4);
a(3)=aH(3)-10*aH(5);
a(4)=aH(4);
a(5)=aH(5);
end
% if(SeriesOrder==5)
% a0=aH0-aH(2)-3*aH(4);
% a(1)=aH(1)-3*aH(3)-15*aH(5);
% a(2)=aH(2)-6*aH(4);
% a(3)=aH(3)-10*aH(5);
% a(4)=aH(4);
% a(5)=aH(5);
% end
if(SeriesOrder==6)
a0=aH0-aH(2)+3*aH(4)-15*aH(6);
a(1)=aH(1)-3*aH(3)+15*aH(5);
a(2)=aH(2)-6*aH(4)+45*aH(6);
a(3)=aH(3)-10*aH(5);
a(4)=aH(4)-15*aH(6);
a(5)=aH(5);
a(6)=aH(6);
end
if(SeriesOrder==7)
a0=aH0-aH(2)+3*aH(4)-15*aH(6);
a(1)=aH(1)-3*aH(3)+15*aH(5)-105*aH(7);
a(2)=aH(2)-6*aH(4)+45*aH(6);
a(3)=aH(3)-10*aH(5)+105*aH(7);
a(4)=aH(4)-15*aH(6);
a(5)=aH(5)-21*aH(7);
a(6)=aH(6);
a(7)=aH(7);
end
end
.
.
.
function [XDensity,IndexOut,IndexMax] = MakeDensityFromSimulation_Infiniti_NEW(X,Paths,NoOfBins,MaxCutOff )
%Processes monte carlo paths to return a series Xdensity as a function of IndexOut. IndexMax is the maximum value of index.
%
Xmin=0;
Xmax=0;
for p=1:Paths
if(X(p)>MaxCutOff)
X(p)=MaxCutOff;
end
%if(X(p)<0)
%X(p)=0;
%end
if(Xmin>real(X(p)))
Xmin=real(X(p));
end
if(Xmax<real(X(p)))
Xmax=real(X(p));
end
end
%IndexMax=NoOfBins+1;
BinSize=(Xmax-Xmin)/NoOfBins;
%IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
XDensity(1:IndexMax)=0.0;
for p=1:Paths
index=real(floor(real(X(p)-Xmin)/BinSize+.5)+1);
if(real(index)<1)
index=1;
end
if(real(index)>IndexMax)
index=IndexMax;
end
XDensity(index)=XDensity(index)+1.0/Paths/BinSize;
end
IndexOut(1:IndexMax)=Xmin+(0:(IndexMax-1))*BinSize;
end
.
.
.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
Marsden
Posts: 307
Joined: August 20th, 2001, 5:42 pm
Location: Maryland
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Amin, thank you for making your accounts of your situation more tractable.
A few things are apparent to me, and maybe the only reason you haven't realized them yourself is due to your strong medication.
First, you have no local allies. It seems that even your family is complicit in your mistreatment.
Second, your tormentors are not worried about what you might do in your current state; the mere fact that you are allowed -- freely, I assume -- to post here and to relate your mistreatment, and that you have been doing it for years, indicates that they are confident that nothing will come of your communications.
Third, it is likely that you are mentally ill in some way. You should not dismiss this possibility. Even so, it is inhumane and immoral to force treatment that you do not want on you, assuming (as seems to be the case) that you do not pose a physical threat to anyone. And part of any cognitive issues you have may in fact be caused by improper treatment.
The upshot of it all is that you are on your own. You have for years done everything in your power to recruit outsiders to help you, to no avail. If Human Rights Watch couldn't help you -- and I suspect that at the very least they contacted the agencies you named as being behind your mistreatment, apparently with no effect -- no inquiry from a concerned citizen will do so either.
It seems clear -- and maybe the only reason you haven't come to this conclusion on your own is due to your medically-induced fog -- that what you need to do is to escape your physical situation. Your family, whatever authorities participate in your mistreatment -- you need to physically disappear from them.
What keeps you from doing this? It seems that you are being fed and housed, and provided computer access, and escape would probably lose you these things. Is your mental well-being worth it? Only you can decide.
Conditions would likely be very difficult for you if you disappear, but you're clever, and people survive much more difficult situations than what you'd likely face. Take a menial job; become a beggar; sneak on a freight train to get out of your geographic location. Maybe you can store up some resources to help you before you go.
But this seems, from where I sit, to be the cost of your freedom.
And with that advice given, I walk away.
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, here is the explanation about the program. I will first give main equations and then relate them to the program to make it easy for friends to understand the program.
The main idea is to use the orthogonality property of Hermite polynomials with respect to the standard normal density. Let us assume we have a random variable X that can be represented as a Z-series. All we know is that our random variable can be represented by the Hermite-series version of the Z-series (as both are equivalent); we do not yet know the precise coefficients, and these coefficients are the ones we want to find out. Let us represent X as a Hermite series as
$X \,= ah_0 \, + \, ah_1 \, H_1(Z) \, + ah_2 \, H_2(Z) \, + ... + \, ah_n \, H_n(Z) \,$
We multiply X by the first Hermite polynomial and integrate with respect to the Gaussian density to find
$\displaystyle\int_{-\infty}^{\infty} \, X \, H_1(Z) \, p(Z) \, dZ$
$\displaystyle\int_{-\infty}^{\infty} \, (ah_0 \, + \, ah_1 \, H_1(Z) \, + ah_2 \, H_2(Z) \, + ... + \, ah_n \, H_n(Z) \,) \, H_1(Z) \, p(Z) \, dZ$
$=ah_1$
Similarly, taking the scalar product of X and the second Hermite polynomial with respect to the Gaussian density would give us
$\displaystyle\int_{-\infty}^{\infty} \, X \, H_2(Z) \, p(Z) \, dZ$
$\displaystyle\int_{-\infty}^{\infty} \, (ah_0 \, + \, ah_1 \, H_1(Z) \, + ah_2 \, H_2(Z) \, + ... + \, ah_n \, H_n(Z) \,) \, H_2(Z) \, p(Z) \, dZ$
$=2! \, ah_2$
Taking the scalar product of X with the nth Hermite polynomial with respect to the Gaussian density would give us
$\displaystyle\int_{-\infty}^{\infty} \, X \, H_n(Z) \, p(Z) \, dZ$
$\displaystyle\int_{-\infty}^{\infty} \, (ah_0 \, + \, ah_1 \, H_1(Z) \, + ah_2 \, H_2(Z) \, + ... + \, ah_n \, H_n(Z) \,) \, H_n(Z) \, p(Z) \, dZ$
$=n! \, ah_n$
So basically we have to be able to take the scalar product of X and the nth Hermite polynomial with respect to the Gaussian density to find the scaled value of the coefficient of the nth Hermite polynomial.
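Before moving on, the orthogonality relation underlying all of the above can be checked numerically. Below is a small Python sketch (not from the original post; it uses NumPy's HermiteE basis, i.e. the probabilists' Hermite polynomials used here) confirming that $E[H_m(Z) H_n(Z)]$ is $n!$ when $m = n$ and zero otherwise.

```python
# Monte Carlo check of Hermite orthogonality under the standard normal:
# E[H_m(Z) H_n(Z)] = n! if m == n, else 0 (probabilists' Hermite polynomials).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)

def He(n, z):
    # evaluate the probabilists' Hermite polynomial H_n at z
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(z, c)

for m in range(1, 4):
    for n in range(1, 4):
        est = float(np.mean(He(m, Z) * He(n, Z)))
        exact = factorial(n) if m == n else 0.0
        print(m, n, round(est, 2), exact)
```

The diagonal entries come out near 1, 2, and 6 and the off-diagonal entries near zero, which is exactly what lets each coefficient be extracted by one inner product.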
In order to be able to do this, I first constructed a Monte Carlo density by dividing the valid domain into a uniform grid of spacing $\Delta X$ and calculating the probability mass within each grid cell. Since the grid has uniform width, for each of the P Monte Carlo paths we need just one division per path to assign weight 1/P to the appropriate grid cell. Once this iteration is complete over all paths, we get the total probability mass in each grid cell, so that the probability mass summed over all grid cells is one. If there are M grid cells, the probability mass in the mth grid cell is given as $\Delta P_m$.
Now we associate a value of Z with the center of each grid cell. For this, we take the sum of the probability mass over all previous grid cells plus half of the probability mass in the current cell where we want to find the value of Z.
This value of Z is found by norminv(sum of all the probability mass up to the center of the grid cell).
It is the value of the CDF at a point X that associates a corresponding Z with that point. So the value of Z at the center of a grid cell is found by inverting the CDF of X at the center of that grid cell.
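As a minimal illustration of this CDF-inversion step, here is a Python sketch (names are illustrative, and scipy's norm.ppf plays the role of MATLAB's norminv): bin the simulated values, accumulate the probability mass per cell, and map each cell's mid-CDF value to a standard normal coordinate.

```python
# Sketch of the CDF-inversion step: associate a Z_m with each grid cell center.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
X = rng.standard_normal(200_000)           # stand-in for simulated path values

counts, edges = np.histogram(X, bins=50)
dP = counts / counts.sum()                 # probability mass in each grid cell
centers = 0.5 * (edges[:-1] + edges[1:])   # grid cell centers X_m

# CDF at a cell's center = mass of all previous cells + half this cell's mass
cdf_mid = np.cumsum(dP) - 0.5 * dP
Z = norm.ppf(cdf_mid)                      # Z_m associated with each cell center
```

Since X here is itself standard normal, the recovered Z values track the cell centers in the well-populated interior of the grid, which is a quick sanity check of the mapping.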
Once we have found $Z_m$ associated with the center of the mth grid cell, we can replace our inner product integrals (between X and $H_n(Z)$) with respect to the Gaussian density by finite sums over all the grid cells as
$\displaystyle\int_{-\infty}^{\infty} \, X \, H_n(Z) \, p(Z) \, dZ$
$\approx \displaystyle\sum_{m=1}^{M} \, X_m \, H_n(Z_m) \, p(Z_m) \, \Delta Z_m$
But we know that probability mass in mth cell is given as
$p(Z_m) \, \Delta Z_m=p(X_m) \, \Delta X_m = \Delta P_m$
So we can write as
$\displaystyle\int_{-\infty}^{\infty} \, X \, H_n(Z) \, p(Z) \, dZ$
$\approx \displaystyle\sum_{m=1}^{M} \, X_m \, H_n(Z_m) \, p(Z_m) \, \Delta Z_m$
$= \displaystyle\sum_{m=1}^{M} \, X_m \, H_n(Z_m) \, \Delta P_m$
which means that
$n! \, ah_n \, = \, \displaystyle\sum_{m=1}^{M} \, X_m \, H_n(Z_m) \, \Delta P_m \,$
All the values on the RHS are known: $X_m$ is the center of the mth grid cell, $Z_m$ is the value found by inversion of the CDF at $X_m$, and $\Delta P_m$, the probability mass in the mth grid cell, was also calculated when we divided the P Monte Carlo paths over the grid cells with weight 1/P.
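Putting the pieces together, the discrete inner product $n! \, ah_n = \sum_m X_m H_n(Z_m) \Delta P_m$ can be sketched in Python as follows. This is an illustrative reimplementation, not the original MATLAB; X is built from a known short Z-series (c1 = 2, c2 = 0.1) so the recovered coefficients can be checked.

```python
# Sketch of Hermite coefficient extraction via the binned inner product
# n! * ah_n = sum_m X_m H_n(Z_m) dP_m.
import numpy as np
from scipy.stats import norm
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(2)
Z_true = rng.standard_normal(500_000)
# known Z-series: X = 1 + 2*Z + 0.1*(Z^2 - 1), so ah1 = 2, ah2 = 0.1, ah3 = 0
X = 1.0 + 2.0 * Z_true + 0.1 * (Z_true**2 - 1.0)

counts, edges = np.histogram(X, bins=300)
dP = counts / counts.sum()                  # probability mass Delta P_m per cell
Xm = 0.5 * (edges[:-1] + edges[1:])         # cell centers X_m
Zm = norm.ppf(np.cumsum(dP) - 0.5 * dP)     # Z_m by CDF inversion

def hermite_coeff(n):
    # ah_n = (1/n!) * sum_m X_m H_n(Z_m) dP_m, probabilists' Hermite H_n
    c = np.zeros(n + 1); c[n] = 1.0
    return float(np.sum(Xm * hermeval(Zm, c) * dP)) / factorial(n)

ah = [hermite_coeff(n) for n in range(1, 4)]
print(ah)   # roughly [2.0, 0.1, 0.0]
```

The estimates land close to the coefficients used to build X, mirroring what the loop over ch(1)..ch(7) does in the MATLAB function above.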
Now relating it to the program,
[ch0,ch] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(X,Paths,NoOfBins,MaxCutOff )
is the function where we did all of the above calculations in the loop given below as
for nn=1:IndexMax
ch(1)=ch(1)+IndexOut(nn).*Z(nn).*XDensity(nn);
ch(2)=ch(2)+.5*IndexOut(nn).*(Z(nn).^2-1).*XDensity(nn);
ch(3)=ch(3)+1/6*IndexOut(nn).*(Z(nn).^3-3*Z(nn)).*XDensity(nn);
ch(4)=ch(4)+1/24*IndexOut(nn).*(Z(nn).^4-6*Z(nn).^2+3).*XDensity(nn);
ch(5)=ch(5)+1/120*IndexOut(nn).*(Z(nn).^5-10*Z(nn).^3+15*Z(nn)).*XDensity(nn);
ch(6)=ch(6)+1/720*IndexOut(nn).*(Z(nn).^6-15*Z(nn).^4+45*Z(nn).^2-15).*XDensity(nn);
ch(7)=ch(7)+1/720/7*IndexOut(nn).*(Z(nn).^7-21*Z(nn).^5+105*Z(nn).^3-105*Z(nn)).*XDensity(nn);
end
Here ch(.) are hermite coefficients that are being calculated.
Here IndexOut(nn) is the value of X associated with the nnth grid cell, Z(nn) is the value of Z associated with the nnth grid cell, and XDensity(nn) is the probability mass associated with the nnth grid cell.
Sorry that IndexOut(nn) is a bad variable name but you can replace it with X(nn).
Once we have hermite coefficients, we can easily convert them to Z-series coefficients.
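The conversion from Hermite coefficients to Z-series coefficients (the ConvertHCoeffsToZCoeffs step) just expands each Hermite polynomial into powers of Z and collects terms. Here is a general-order Python sketch using NumPy's herme2poly; it should reproduce the hand-written cases in the MATLAB function above.

```python
# Sketch: expand the Hermite series sum_k ah_k H_k(Z) into a plain power
# series in Z by converting each basis polynomial and collecting terms.
import numpy as np
from numpy.polynomial.hermite_e import herme2poly

def hermite_to_z_coeffs(ah0, ah):
    """ah[k] multiplies H_{k+1}(Z); returns (a0, a) with a[k] multiplying Z^(k+1)."""
    order = len(ah)
    poly = np.zeros(order + 1)
    poly[0] = ah0
    for k, c in enumerate(ah, start=1):
        basis = np.zeros(k + 1); basis[k] = 1.0
        poly[:k + 1] += c * herme2poly(basis)   # power-series form of H_k
    return poly[0], poly[1:]

# Example matching the SeriesOrder==3 case above:
# a0 = ah0 - ah2, a1 = ah1 - 3*ah3, a2 = ah2, a3 = ah3.
a0, a = hermite_to_z_coeffs(1.0, [0.5, 0.2, 0.1])
print(a0, a)   # 0.8 [0.2 0.2 0.1]
```

Because the conversion is done symbolically per basis polynomial, the same function covers SeriesOrder 3 through 7 (and beyond) without writing each case by hand.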
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, I will be posting a program for the calculation of "hermite-wise correlations" in a Monte Carlo setting in a day or two. I also have ideas about a simpler method to find hermite-wise correlations in smaller samples. I expect to post these programs tomorrow or the day after.
It turns out to be slightly more complicated as compared to what I suggested in an older post.
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, I am writing this post to complain about increase of food drugging activity in Lahore city. I have been going to remote parts of the city to get good food and water but now Mind control agencies (with the help of Pak Army soldiers) have started distributing mind control drugs among small restaurants even in remote areas. They give mind control drugs to small restaurants where cooked food is available and ask them to freely add mind control drugs in the food they prepare. Today I was hit by mind control drugs when I took food from a restaurant where I never suspected they would have reached but he had already added drugs in the food and I only realized it after I had eaten some food. Pakistan Army is distributing mind control drugs among small restaurants in more and more neighborhoods with passing time.
It is also extremely difficult to get good bottled water but underground water at many places in the city is still good but I am very afraid if friends did not act to stop them now, Pakistan army will drug underground water in Lahore city very carefully in next few days.
I want to request friends to please protest with mind control agencies and American defense to not distribute mind control drugs among street restaurants in Lahore city and also not to drug underground water that is still good in many parts of the city. And also please ask them to not alter my matlab programs.
About the progress on my research. I hope to post a matlab function today that constructs a two dimensional density using joint realizations of two simulated variables in every path and then uses this density to calculate correlation between hermite polynomials of same order associated with the two simulated variables. This is still preliminary and is building block part of a larger program but I have decided to post it and I will come back with a more detailed program in another two days.
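For friends who want to experiment before the detailed program is posted, the hermite-wise correlation idea can be sketched in Python as follows (an illustrative reimplementation, using a bivariate Gaussian test case where the nth correlation should come out near rho^n): build a 2D joint probability mass over bins of X and Y, map each marginal's bin centers to Zx and Zy by CDF inversion, then take the mass-weighted product of same-order Hermite polynomials.

```python
# Sketch of hermite-wise correlations from a 2D binned joint density:
# corr_n = (1/n!) * sum_{i,j} H_n(Zx_i) H_n(Zy_j) * mass(i,j).
import numpy as np
from scipy.stats import norm
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(3)
rho = 0.6
Z1 = rng.standard_normal(400_000)
Z2 = rho * Z1 + np.sqrt(1 - rho**2) * rng.standard_normal(400_000)
X, Y = Z1, Z2                        # Gaussian test case: corr_n ~ rho^n

mass, xe, ye = np.histogram2d(X, Y, bins=60)
mass /= mass.sum()                   # joint probability mass per 2D cell
px, py = mass.sum(axis=1), mass.sum(axis=0)          # marginal masses
Zx = norm.ppf(np.cumsum(px) - 0.5 * px)              # CDF inversion per margin
Zy = norm.ppf(np.cumsum(py) - 0.5 * py)

def herm_corr(n):
    c = np.zeros(n + 1); c[n] = 1.0
    Hx, Hy = hermeval(Zx, c), hermeval(Zy, c)
    return float(Hx @ mass @ Hy) / factorial(n)

print([round(herm_corr(n), 3) for n in (1, 2, 3)])   # near [0.6, 0.36, 0.216]
```

For a bivariate normal pair, $E[H_n(Z_1) H_n(Z_2)] = n! \, \rho^n$, so the printed values give a direct check of the 2D-density construction before applying it to simulated asset and variance paths.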
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, I am posting this program, which is embedded in correlated stochastic volatility simulations. At each step, it calculates the correlations between Hermite polynomials of the same order associated with the asset and the stochastic variance. It is still preliminary and is supposed to be part of a larger program that I will post in another two days. I am posting this version for friends who are also interested in similar research. I will soon come back with the detailed program.
.
.
%This function processes joint observations of two variables X and Y from
%common monte carlo paths and makes a two dimensional density of X and Y.
%Later it finds correlations between hermite polynomials of X with hermite
%polynomials of Y of the same order.
.
.
function [corr1,corr2,corr3,corr4,corr5,corr6,corr7] = FindHermiteCorrelationsFromSimulation_Infiniti_NEW02(X,Y,Paths,NoOfBinsX,NoOfBinsY)
%This function processes joint observations of two variables X and Y from
%common monte carlo paths and makes a two dimensional density of X and Y.
%Later it finds correlations between hermite polynomials of X with hermite
%polynomials of Y of the same order.
Xmin=999999;
Xmin1=999999;
Xmin2=999999;
Xmax=0;
Xmax1=0;
Xmax2=0;
Ymin=999999;
Ymin1=999999;
Ymin2=999999;
Ymax=0;
Ymax1=0;
Ymax2=0;
meanX=0;
meanY=0;
for p=1:Paths
meanX=meanX+X(p)/Paths;
meanY=meanY+Y(p)/Paths;
if(Xmin>real(X(p)))
Xmin2=Xmin1;
Xmin1=Xmin;
Xmin=real(X(p));
end
if(Xmax<real(X(p)))
Xmax2=Xmax1;
Xmax1=Xmax;
Xmax=real(X(p));
end
if(Ymin>real(Y(p)))
Ymin2=Ymin1;
Ymin1=Ymin;
Ymin=real(Y(p));
end
if(Ymax<real(Y(p)))
Ymax2=Ymax1;
Ymax1=Ymax;
Ymax=real(Y(p));
end
end
%Xmin
%Xmin1
%Xmin2
%Xmax
%Xmax1
%Xmax2
%str=input('Look at Xmin and Xmax');
%IndexMax=NoOfBins+1;
BinSizeX=(Xmax2-Xmin2)/NoOfBinsX;
BinSizeY=(Ymax2-Ymin2)/NoOfBinsY;
%IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
IndexMaxX=floor((Xmax2-Xmin2)/BinSizeX+.5)+1
IndexMaxY=floor((Ymax2-Ymin2)/BinSizeY+.5)+1
XYProbMass2D(1:IndexMaxX,1:IndexMaxY)=0.0;
XProbMass(1:IndexMaxX)=0;
YProbMass(1:IndexMaxY)=0;
for p=1:Paths
indexX=real(floor(real(X(p)-Xmin2)/BinSizeX+.5)+1);
if(real(indexX)<1)
indexX=1;
end
if(real(indexX)>IndexMaxX)
indexX=IndexMaxX;
end
indexY=real(floor(real(Y(p)-Ymin2)/BinSizeY+.5)+1);
if(real(indexY)<1)
indexY=1;
end
if(real(indexY)>IndexMaxY)
indexY=IndexMaxY;
end
%XDensity(index)=XDensity(index)+1.0/Paths/BinSize;
XProbMass(indexX)=XProbMass(indexX)+1.0/Paths;
YProbMass(indexY)=YProbMass(indexY)+1.0/Paths;
XYProbMass2D(indexX,indexY)=XYProbMass2D(indexX,indexY)+1.0/Paths;
end
Xn(1:IndexMaxX)=Xmin2+(0:(IndexMaxX-1))*BinSizeX;
Ym(1:IndexMaxY)=Ymin2+(0:(IndexMaxY-1))*BinSizeY;
XCDF(1:IndexMaxX)=0;
Zx(1:IndexMaxX)=0;
XCDF(1)=XProbMass(1)*.5;
Zx(1)=norminv(XCDF(1));
for nn=2:IndexMaxX
XCDF(nn)=XCDF(nn-1)+XProbMass(nn-1)*.5+XProbMass(nn)*.5;
Zx(nn)=norminv(XCDF(nn));
end
YCDF(1:IndexMaxY)=0;
Zy(1:IndexMaxY)=0;
YCDF(1)=YProbMass(1)*.5;
Zy(1)=norminv(YCDF(1));
for nn=2:IndexMaxY
YCDF(nn)=YCDF(nn-1)+YProbMass(nn-1)*.5+YProbMass(nn)*.5;
Zy(nn)=norminv(YCDF(nn));
end
corr1=0;
corr2=0;
corr3=0;
corr4=0;
corr5=0;
corr6=0;
corr7=0;
for nn=1:IndexMaxX
for mm=1:IndexMaxY
corr1=corr1+Zx(nn).*Zy(mm).*XYProbMass2D(nn,mm);
corr2=corr2+1/2*(Zx(nn).^2-1).*(Zy(mm).^2-1).*XYProbMass2D(nn,mm);
corr3=corr3+1/6*(Zx(nn).^3-3*Zx(nn)).*(Zy(mm).^3-3*Zy(mm)).*XYProbMass2D(nn,mm);
corr4=corr4+1/24*(Zx(nn).^4-6*Zx(nn).^2+3).*(Zy(mm).^4-6*Zy(mm).^2+3).*XYProbMass2D(nn,mm);
corr5=corr5+1/120*(Zx(nn).^5-10*Zx(nn).^3+15*Zx(nn)).*(Zy(mm).^5-10*Zy(mm).^3+15*Zy(mm)).*XYProbMass2D(nn,mm);
corr6=corr6+1/720*(Zx(nn).^6-15*Zx(nn).^4+45*Zx(nn).^2-15).*(Zy(mm).^6-15*Zy(mm).^4+45*Zy(mm).^2-15).*XYProbMass2D(nn,mm);
corr7=corr7+1/7/720*(Zx(nn).^7-21*Zx(nn).^5+105*Zx(nn).^3-105.*Zx(nn)).*(Zy(mm).^7-21*Zy(mm).^5+105*Zy(mm).^3-105*Zy(mm)).*XYProbMass2D(nn,mm);
end
end
end
.
.
.
function [] = SVMonteCarloDensity()
%Copyright Ahsan Amin. Infiniti derivatives Technologies.
%or skype ahsan.amin2999
dt=.125/2/2/2; % Simulation time interval. For diffusions close to zero
%decrease dt for accuracy.
Tt=128;%64*4*2;%16*2*4;%*4*4*1;%*4;%128*2*2*2; % Number of simulation levels. Terminal time= Tt*dt; //.125/32*32*16=2 year;
T=Tt*dt;
if(T>=1)
dtM=1/32;
TtM=T/dtM;
else
dtM=T/32;
TtM=T/dtM;
end
v00=1.0;%.250; % starting value of SDE
beta1=0.0;
beta2=1.0; % Second drift term power.
gamma=.950;%50; % volatility power.
kappa=1.50;%.950; %mean reversion parameter.
theta=1.0;%.075;%mean reversion target
sigma0=.9500;%Volatility value
%you can specify any general mu1 and mu2 and beta1 and beta2.
mu1=1*theta*kappa; %first drift coefficient.
mu2=-1*kappa; % Second drift coefficient.
%dVt=(mu1*Vt^beta1 + mu2*Vt^beta2)*dt+sigma0 * Vt^gamma * dz1(t)
%Xt(0)=x0
%dXt=(mu1X*Xt^alpha1+mu2X*Xt^alpha2)*dt+sigmaX* Vt^gammaV * Xt^gammaX *dz2(t);
%<dz1(t),dz2(t)>=rho *dt
x0=1.00;
gammaX=.85;
sigmaX=.250;
thetaX=.50;
kappaX=0.0;
mu1X=thetaX*kappaX;
mu2X=-1*kappaX;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%rng(29079137, 'twister')
rng(15898837, 'twister')
paths=2000000;
V(1:paths)=v00; %Original process monte carlo.
X=0.0;
X(1:paths)=x0;
alpha1=0;
alpha2=1;
a=mu1X;
b=mu2X;
rho=-.95;
sigma1=sigmaX;
gammaV=.5;
Random1(1:paths)=0;
Random2(1:paths)=0;
mm=0;
for ttM=1:TtM
Random1=randn(size(Random1));
Random2=randn(size(Random2));
time1=ttM*dtM
Xbefore(1:paths)=X(1:paths);
X(1:paths)=X(1:paths)+ ...
(a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) * sqrt(dtM) + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*sqrt(dtM) + ...
(a*alpha1* X(1:paths).^(alpha1-1)+b*alpha2* X(1:paths).^(alpha2-1)).* ...
(((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM^2/2)+ ...
(sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) *(1-1/sqrt(3)).*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*(1-1/sqrt(3)).*dtM^1.5))+ ...
.5*(a*alpha1*(alpha1-1)* X(1:paths).^(alpha1-2)+b*alpha2*(alpha2-1).* X(1:paths).^(alpha2-2)).* ...
( sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX)) *dtM^2/2 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*(Random1(1:paths).^2-1) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random1(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
(sigma0^2*V(1:paths).^(2*gamma).*Random1(1:paths)*1/sqrt(3)*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random1(1:paths)*1/sqrt(3)*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random2(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
sigma0^2.*V(1:paths).^(2*gamma).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random2(1:paths)*1/sqrt(3)*dtM^1.5;
DX=X-Xbefore;
%Uncomment to find Z-series of diffusion noise term of asset.
% X(X<0)=.0000001;
% if(rem(ttM,8)==0) %On every eighth monte carlo step construct the density
% mm=mm+1;
% MaxCutOff=100;
% NoOfBins=1000;
% DX=X-Xbefore;
% [XDensity,IndexOutX,IndexMaxX] = MakeDensityFromSimulation_Infiniti_NEW(DX,paths,NoOfBins,MaxCutOff );
%
% SeriesOrder=7;
% NoOfBins=2000;
% MaxCutOff=9999999;
% %Below function finds hermite coefficients of monte carlo simulated SDE
% %of the stock asset.
% [ch0,ch] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(DX,paths,NoOfBins,MaxCutOff );
% [c0,c] = ConvertHCoeffsToZCoeffs(ch0,ch,7);
% ch0
% ch
% c0
% c
%
% str=input('Look at Coeffs of Z-series of stock SDE')
%
% %Below construct density on Gaussian grid.
% %First calculate normal random variable on a grid below
% dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
% Nn=45*4; % No of normal density subdivisions
% NnMid=((1+Nn)/2)*dNn;
% Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
% %Z
% %str=input('Look at Z');
% %Now calculate Y as a function of normal random variable.
% Y(1:Nn)=c0;
% for nn=1:SeriesOrder
% Y(1:Nn)=Y(1:Nn)+c(nn)*Z(1:Nn).^nn;
% end
%
% %Now take change of densities derivative of Y with respect to normal
% DfY(1:Nn)=0;
% for nn=2:Nn-1
% DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
% %Change of variable derivative for densities
% end
% DfY(Nn)=DfY(Nn-1);
% DfY(1)=DfY(2);
%
% %Now calculate the density of Y from density of normal random variable
% %using change of probability derivative.
% pY(1:Nn)=0;
% for nn = 1:Nn
% pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
% end
%
% plot(Y(1:Nn),pY(1:Nn),'r',IndexOutX(1:IndexMaxX),XDensity(1:IndexMaxX),'g');
%
% title(sprintf('Stock Density: x0 = %.2f,thetaX=%.2f,kappaX=%.2f,gammaX=%.2f,sigmaX=%.2f,rho=%.2f,v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',x0,thetaX,kappaX,gammaX,sigmaX,rho,v00,kappa,theta,gamma,sigma0,time1,dtM));
% legend({'Z-Series Density of Stock From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
%
% str=input('Look at density of stock from simulated data. From Z-series Coefficients in red, Directly from data in green');
% end
Vbefore=V;
V(1:paths)=V(1:paths)+ ...
(mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*sqrt(dtM) + ...
(mu1.*beta1*V(1:paths).^(beta1-1) + mu2.*beta2.*V(1:paths).^(beta2-1)).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM^2/2 + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*(1-1/sqrt(3))*dtM^1.5) + ...
.5*(mu1.*beta1.*(beta1-1).*V(1:paths).^(beta1-2) + mu2.*beta2.*(beta2-1).*V(1:paths).^(beta2-2)).* ...
sigma0^2.*V(1:paths).^(2*gamma).*dtM^2/2 + ...
sigma0*gamma*V(1:paths).^(gamma-1) .* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths).*1/sqrt(3)*dtM^1.5 + ...
sigma0.*V(1:paths).^gamma .*(Random2(1:paths).^2-1)*dtM/2) + ...
.5*sigma0*gamma*(gamma-1).*V(1:paths).^(gamma-2) .* ...
sigma0^2.*V(1:paths).^(2*gamma) .*Random2(1:paths).*1/sqrt(3)*dtM^1.5;
DV=V-Vbefore;
%Uncomment to find Z-series of diffusion noise term of variance SDE.
% if(rem(ttM,8)==0)
%
% %V(V<=0)=.0000001;
% DV=V-Vbefore;
% MaxCutOff=100;
% NoOfBins=1000;
% [VDensity,IndexOutV,IndexMaxV] = MakeDensityFromSimulation_Infiniti_NEW(DV,paths,NoOfBins,MaxCutOff );
% %Below function finds hermite coefficients of monte carlo simulated SDE
% %of stochastic variance.
% NoOfBins=400;
% MaxCutOff=9999999;
% [dh0,dh] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(DV,paths,NoOfBins,MaxCutOff );
%
% [d0,d] = ConvertHCoeffsToZCoeffs(dh0,dh,7);
% dh0
% dh
% d0
% d
%
% str=input('Look at Coeffs of stochastic variance Z-Series')
%
%
% %Below construct variance density on Gaussian grid.
% %First calculate normal random variable on a grid below
% dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
% Nn=45*4; % No of normal density subdivisions
% NnMid=((1+Nn)/2)*dNn;
% Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
% %Z
% %str=input('Look at Z');
% %Now calculate Y as a function of normal random variable.
% Y(1:Nn)=d0;
% for nn=1:SeriesOrder
% Y(1:Nn)=Y(1:Nn)+d(nn)*Z(1:Nn).^nn;
% end
%
% %Now take change of densities derivative of Y with respect to normal
% DfY(1:Nn)=0;
% for nn=2:Nn-1
% DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
% %Change of variable derivative for densities
% end
% DfY(Nn)=DfY(Nn-1);
% DfY(1)=DfY(2);
%
% %Now calculate the density of Y from density of normal random variable
% %using change of probability derivative.
% pY(1:Nn)=0;
% for nn = 1:Nn
% pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
% end
% plot(Y(1:Nn),pY(1:Nn),'r',IndexOutV(1:IndexMaxV),VDensity(1:IndexMaxV),'g');
% legend({'Z-Series Density From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
% title(sprintf('Variance Density: v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',v00,kappa,theta,gamma,sigma0,time1,dtM));
%
% str=input('Look at density of volatility from simulated data. From Z-series Coefficients in red and directly from data in green');
% end
%find correlations between hermite polynomials of same order associated with
%asset and volatility diffusion terms
NoOfBinsv=100;
NoOfBinsx=100;
%Below function to calculate correlation of same order hermite polynomials
[corr1,corr2,corr3,corr4,corr5,corr6,corr7] = FindHermiteCorrelationsFromSimulation_Infiniti_NEW02(DX,DV,paths,NoOfBinsx,NoOfBinsv);
corr1
corr2
corr3
corr4
corr5
corr6
corr7
str=input('Look at corrs');
end
%SVolMeanAnalytic=thetaV+(V0-thetaV)*exp(-kappaV*dt*Tt)
SVolMeanAnalytic=theta+(v00-theta)*exp(-kappa*dt*Tt)
SVolMeanMC=sum(V(1:paths))/paths
AssetMeanAnalytic=x0
AssetMeanMC=sum(X(1:paths))/paths
end
.
The stochastic volatility program is run with a correlation of -.95 between the stock and stochastic volatility diffusions. Here are the Hermite correlations on the first time step that you will see on screen.
corr1 =
-0.947841155761345
corr2 =
0.896664461062579
corr3 =
-0.843967323299367
corr4 =
0.782313532194083
corr5 =
-0.707861129845129
corr6 =
0.580058382134738
corr7 =
-0.443654522888022
Look at corrs
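As a side check on what these numbers measure (my own addition, sketched in Python rather than the thread's MATLAB): if Zx and Zy were exactly jointly Gaussian with correlation rho, the correlation between same-order probabilists' Hermite polynomials would be rho^n, since E[He_n(Zx) He_n(Zy)] = n! rho^n for a standard bivariate normal. The simulated SDE increments are not exactly Gaussian, which is why the higher-order correlations on screen drift away from a plain (-0.95)^n pattern.

```python
import random, math

def he(n, z):
    # probabilists' Hermite polynomials via He_{k+1}(z) = z*He_k(z) - k*He_{k-1}(z)
    a, b = 1.0, z
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, z * b - k * a
    return b

def hermite_corr(n, rho, paths=100000, seed=1):
    # sample bivariate normal (z1, z2) with correlation rho and return the
    # normalized correlation of He_n(z1) with He_n(z2)
    rng = random.Random(seed)
    c = vx = vy = 0.0
    for _ in range(paths):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        hx, hy = he(n, z1), he(n, z2)
        c += hx * hy
        vx += hx * hx
        vy += hy * hy
    return c / math.sqrt(vx * vy)
```

For rho = -0.95 this recovers roughly -0.95, 0.9025, -0.857, ... for n = 1, 2, 3.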
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
Amin
Topic Author
Posts: 2012
Joined: July 14th, 2002, 3:00 am
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, it is 4:50 am in Lahore, Pakistan. I decided to sleep at 12:30 am when I could not do any more work, as my body bitterly wanted to go to sleep. But after lying in bed, I have not been able to sleep well at all. They continue to target medium frequency waves on the victim to not allow the victim to sleep at all. They have done it partially on many past nights, but tonight I am without proper sleep even though it is early morning now and roughly five hours have passed since I started trying to sleep.
I had made some progress in my research last night and wanted to post another interim version, but I was too tired to be able to post anything on the internet. Now, four and a half hours after going to bed, I am complaining of sleep deprivation torture.
I got out of bed at 3:20 am, tried many things for a better sleep, and went back to bed, but nothing worked. As I write now at 5:00 am, I am more mentally distressed after extremely poor sleep and microwave torture on the head than I was when I tried to sleep at 12:20 am.
Please protest to American government and defense to end mind control torture and directed microwave torture on innocent civilian victims.
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, the day before yesterday I posted a program that calculates correlations between hermite polynomials of the same order for two different but correlated diffusions. In that program I constructed a two dimensional density from joint observations of the two diffusions and then found correlations between hermite polynomials of the different diffusions by integrating over the joint two dimensional density. I am posting a slightly modified version of the same program.
I am also posting another function that directly calculates correlations between hermite polynomials (of the same order) associated with observations of simulated correlated diffusions, using the values of Zx and Zy associated with the observations of each of the diffusions X and Y. This program does not require calculation of a two dimensional density and gives each joint observation of X and Y a monte carlo weight of 1/P, where P is the total number of simulation paths. This is a direct calculation of correlations from the data, without integration over the joint density.
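Before the listings, here is a minimal sketch of the direct idea in Python (my own simplification, not the post's MATLAB: it inverts each observation to its Z through the empirical CDF rank and the stdlib inverse normal CDF, rather than the post's grid-interpolation scheme):

```python
import math
from statistics import NormalDist

def z_scores(xs):
    # map each observation to a Z via its empirical CDF rank:
    # Z_p = Phi^{-1}((rank + 0.5) / P), each path weighted 1/P
    nd = NormalDist()
    P = len(xs)
    order = sorted(range(P), key=lambda i: xs[i])
    z = [0.0] * P
    for rank, i in enumerate(order):
        z[i] = nd.inv_cdf((rank + 0.5) / P)
    return z

def hermite1_corr(xs, ys):
    # path-wise correlation of the first Hermite components H1(Zx)=Zx, H1(Zy)=Zy
    zx, zy = z_scores(xs), z_scores(ys)
    num = sum(a * b for a, b in zip(zx, zy))
    den = math.sqrt(sum(a * a for a in zx) * sum(b * b for b in zy))
    return num / den
```

Higher orders replace Zx and Zy by He_n(Zx) and He_n(Zy) in the same sums; since only the ranks of the observations matter, any monotone transform of the data leaves the recovered Z's unchanged.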
Here is the main program.
.
function [] = SVMonteCarloDensity()
%Copyright Ahsan Amin. Infiniti derivatives Technologies.
%or skype ahsan.amin2999
dt=.125/2/2/2; % Simulation time interval. For diffusions close to zero
%decrease dt for accuracy.
Tt=128;%64*4*2;%16*2*4;%*4*4*1;%*4;%128*2*2*2; % Number of simulation levels. Terminal time= Tt*dt; //.125/32*32*16=2 year;
T=Tt*dt;
if(T>=1)
dtM=1/32;
TtM=T/dtM;
else
dtM=T/32;
TtM=T/dtM;
end
v00=1.0;%.250; % starting value of SDE
beta1=0.0;
beta2=1.0; % Second drift term power.
gamma=.950;%50; % volatility power.
kappa=1.50;%.950; %mean reversion parameter.
theta=1.0;%.075;%mean reversion target
sigma0=.9500;%Volatility value
%you can specify any general mu1 and mu2 and beta1 and beta2.
mu1=1*theta*kappa; %first drift coefficient.
mu2=-1*kappa; % Second drift coefficient.
%dVt=(mu1*Vt^beta1 + mu2*Vt^beta2)*dt+sigma0 * Vt^gamma * dz1(t)
%Xt(0)=x0
%dXt=(mu1X*Xt^alpha1+mu2X*Xt^alpha2)*dt+sigmaX* Vt^gammaV * Xt^gammaX *dz2(t);
%<dz1(t),dz2(t)>=rho *dt
x0=1.00;
gammaX=.85;
sigmaX=.250;
thetaX=.50;
kappaX=0.0;
mu1X=thetaX*kappaX;
mu2X=-1*kappaX;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%rng(29079137, 'twister')
rng(15898837, 'twister')
paths=2000000;
V(1:paths)=v00; %Original process monte carlo.
X=0.0;
X(1:paths)=x0;
alpha1=0;
alpha2=1;
a=mu1X;
b=mu2X;
rho=-.95;
sigma1=sigmaX;
gammaV=.5;
Random1(1:paths)=0;
Random2(1:paths)=0;
mm=0;
for ttM=1:TtM
Random1=randn(size(Random1));
Random2=randn(size(Random2));
time1=ttM*dtM
Xbefore(1:paths)=X(1:paths);
X(1:paths)=X(1:paths)+ ...
(a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) * sqrt(dtM) + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*sqrt(dtM) + ...
(a*alpha1* X(1:paths).^(alpha1-1)+b*alpha2* X(1:paths).^(alpha2-1)).* ...
(((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2)* dtM^2/2)+ ...
(sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths) *(1-1/sqrt(3)).*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random2(1:paths)*(1-1/sqrt(3)).*dtM^1.5))+ ...
.5*(a*alpha1*(alpha1-1)* X(1:paths).^(alpha1-2)+b*alpha2*(alpha2-1).* X(1:paths).^(alpha2-2)).* ...
( sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX)) *dtM^2/2 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*(Random1(1:paths).^2-1) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random1(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random1(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*Random1(1:paths).*Random2(1:paths)*dtM/2)+ ...
.5*sqrt(1-rho^2)* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
(sigma0^2*V(1:paths).^(2*gamma).*Random1(1:paths)*1/sqrt(3)*dtM^1.5)+ ...
sqrt(1-rho^2)* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random1(1:paths)*1/sqrt(3)*dtM^1.5+ ...
rho* sigma1* V(1:paths).^gammaV.*gammaX.* X(1:paths).^(gammaX-1).* ...
((a* X(1:paths).^alpha1 + b* X(1:paths).^alpha2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sqrt(1-rho^2)* sigma1* V(1:paths).^gammaV.* X(1:paths).^gammaX .*Random1(1:paths).*Random2(1:paths) * dtM/2 + ...
rho* sigma1* V(1:paths).^gammaV .*X(1:paths).^gammaX .*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1* V(1:paths).^gammaV.*gammaX.*(gammaX-1).* X(1:paths).^(gammaX-2).* ...
(sigma1^2* V(1:paths).^(2*gammaV).* X(1:paths).^(2*gammaX) .*Random2(1:paths).*1/sqrt(3).*dtM^1.5)+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).* X(1:paths).^(gammaX).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5 + ...
sigma0*V(1:paths).^gamma.*(Random2(1:paths).^2-1)*dtM/2)+ ...
.5*rho* sigma1*gammaV.*(gammaV-1).* V(1:paths).^(gammaV-2).* X(1:paths).^(gammaX).* ...
sigma0^2.*V(1:paths).^(2*gamma).*Random2(1:paths) * 1/sqrt(3).* dtM^1.5+ ...
rho* sigma1*gammaV.* V(1:paths).^(gammaV-1).*gammaX.* X(1:paths).^(gammaX-1).* ...
rho.* sigma1.* V(1:paths).^gammaV .*X(1:paths).^gammaX .*sigma0.*V(1:paths).^gamma.*Random2(1:paths)*1/sqrt(3)*dtM^1.5;
DX=X-Xbefore;
%Uncomment to find Z-series of diffusion noise term of asset.
% X(X<0)=.0000001;
% if(rem(ttM,8)==0) %On every eighth monte carlo step construct the density
% mm=mm+1;
% MaxCutOff=100;
% NoOfBins=1000;
% DX=X-Xbefore;
% [XDensity,IndexOutX,IndexMaxX] = MakeDensityFromSimulation_Infiniti_NEW(DX,paths,NoOfBins,MaxCutOff );
%
% SeriesOrder=7;
% NoOfBins=2000;
% MaxCutOff=9999999;
% %Below function finds hermite coefficients of monte carlo simulated SDE
% %of the stock asset.
% [ch0,ch] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(DX,paths,NoOfBins,MaxCutOff );
% [c0,c] = ConvertHCoeffsToZCoeffs(ch0,ch,7);
% ch0
% ch
% c0
% c
%
% str=input('Look at Coeffs of Z-series of stock SDE')
%
% %Below construct density on Gaussian grid.
% %First calculate normal random variable on a grid below
% dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
% Nn=45*4; % No of normal density subdivisions
% NnMid=((1+Nn)/2)*dNn;
% Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
% %Z
% %str=input('Look at Z');
% %Now calculate Y as a function of normal random variable.
% Y(1:Nn)=c0;
% for nn=1:SeriesOrder
% Y(1:Nn)=Y(1:Nn)+c(nn)*Z(1:Nn).^nn;
% end
%
% %Now take change of densities derivative of Y with respect to normal
% DfY(1:Nn)=0;
% for nn=2:Nn-1
% DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
% %Change of variable derivative for densities
% end
% DfY(Nn)=DfY(Nn-1);
% DfY(1)=DfY(2);
%
% %Now calculate the density of Y from density of normal random variable
% %using change of probability derivative.
% pY(1:Nn)=0;
% for nn = 1:Nn
% pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
% end
%
% plot(Y(1:Nn),pY(1:Nn),'r',IndexOutX(1:IndexMaxX),XDensity(1:IndexMaxX),'g');
%
% title(sprintf('Stock Density: x0 = %.2f,thetaX=%.2f,kappaX=%.2f,gammaX=%.2f,sigmaX=%.2f,rho=%.2f,v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',x0,thetaX,kappaX,gammaX,sigmaX,rho,v00,kappa,theta,gamma,sigma0,time1,dtM));
% legend({'Z-Series Density of Stock From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
%
% str=input('Look at density of stock from simulated data. From Z-series Coefficients in red, Directly from data in green');
% end
Vbefore=V;
V(1:paths)=V(1:paths)+ ...
(mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*sqrt(dtM) + ...
(mu1.*beta1*V(1:paths).^(beta1-1) + mu2.*beta2.*V(1:paths).^(beta2-1)).* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2)*dtM^2/2 + ...
sigma0*V(1:paths).^gamma .*Random2(1:paths)*(1-1/sqrt(3))*dtM^1.5) + ...
.5*(mu1.*beta1.*(beta1-1).*V(1:paths).^(beta1-2) + mu2.*beta2.*(beta2-1).*V(1:paths).^(beta2-2)).* ...
sigma0^2.*V(1:paths).^(2*gamma).*dtM^2/2 + ...
sigma0*gamma*V(1:paths).^(gamma-1) .* ...
((mu1.*V(1:paths).^beta1 + mu2.*V(1:paths).^beta2).*Random2(1:paths).*1/sqrt(3)*dtM^1.5 + ...
sigma0.*V(1:paths).^gamma .*(Random2(1:paths).^2-1)*dtM/2) + ...
.5*sigma0*gamma*(gamma-1).*V(1:paths).^(gamma-2) .* ...
sigma0^2.*V(1:paths).^(2*gamma) .*Random2(1:paths).*1/sqrt(3)*dtM^1.5;
DV=V-Vbefore;
%Uncomment to find Z-series of diffusion noise term of variance SDE.
% if(rem(ttM,8)==0)
%
% %V(V<=0)=.0000001;
% DV=V-Vbefore;
% MaxCutOff=100;
% NoOfBins=1000;
% [VDensity,IndexOutV,IndexMaxV] = MakeDensityFromSimulation_Infiniti_NEW(DV,paths,NoOfBins,MaxCutOff );
% %Below function finds hermite coefficients of monte carlo simulated SDE
% %of stochastic variance.
% NoOfBins=400;
% MaxCutOff=9999999;
% [dh0,dh] = FindHermiteCoefficientsFromSimulation_Infiniti_NEW(DV,paths,NoOfBins,MaxCutOff );
%
% [d0,d] = ConvertHCoeffsToZCoeffs(dh0,dh,7);
% dh0
% dh
% d0
% d
%
% str=input('Look at Coeffs of stochastic variance Z-Series')
%
%
% %Below construct variance density on Gaussian grid.
% %First calculate normal random variable on a grid below
% dNn=.1/2; % Normal density subdivisions width. would change with number of subdivisions
% Nn=45*4; % No of normal density subdivisions
% NnMid=((1+Nn)/2)*dNn;
% Z(1:Nn)=(((1:Nn)*dNn)-NnMid);
% %Z
% %str=input('Look at Z');
% %Now calculate Y as a function of normal random variable.
% Y(1:Nn)=d0;
% for nn=1:SeriesOrder
% Y(1:Nn)=Y(1:Nn)+d(nn)*Z(1:Nn).^nn;
% end
%
% %Now take change of densities derivative of Y with respect to normal
% DfY(1:Nn)=0;
% for nn=2:Nn-1
% DfY(nn) = (Y(nn + 1) - Y(nn - 1))/(Z(nn + 1) - Z(nn - 1));
% %Change of variable derivative for densities
% end
% DfY(Nn)=DfY(Nn-1);
% DfY(1)=DfY(2);
%
% %Now calculate the density of Y from density of normal random variable
% %using change of probability derivative.
% pY(1:Nn)=0;
% for nn = 1:Nn
% pY(nn) = (normpdf(Z(nn),0, 1))/abs(DfY(nn));
% end
% plot(Y(1:Nn),pY(1:Nn),'r',IndexOutV(1:IndexMaxV),VDensity(1:IndexMaxV),'g');
% legend({'Z-Series Density From Hermite Polynomials','Monte Carlo Density'},'Location','northeast')
% title(sprintf('Variance Density: v0 =%.2f,kappa=%.2f,theta=%.2f,gamma=%.2f,sigma0=%.2f,T=%.2f,dt=%.3f',v00,kappa,theta,gamma,sigma0,time1,dtM));
%
% str=input('Look at density of volatility from simulated data. From Z-series Coefficients in red and directly from data in green');
% end
%find correlations between hermite polynomials of same order associated with
%asset and volatility diffusion terms
NoOfBinsv=100;
NoOfBinsx=100;
%Below function to calculate correlation of same order hermite polynomials
[corr1,corr2,corr3,corr4,corr5,corr6,corr7] = FindHermiteCorrelationsFromSimulation_Infiniti_NEW04B(DX,DV,paths,NoOfBinsx,NoOfBinsv);
corr1
corr2
corr3
corr4
corr5
corr6
corr7
str=input('Look at corrs--1--using two dimensional density');
[corr1,corr2,corr3,corr4,corr5,corr6,corr7] = FindHermiteCorrelationsFromSimulationData_Infiniti_NEW04(DX,DV,paths,NoOfBinsx,NoOfBinsv);
corr1
corr2
corr3
corr4
corr5
corr6
corr7
str=input('Look at corrs--2--Directly between simulated Data observations of DX and DV');
end
%SVolMeanAnalytic=thetaV+(V0-thetaV)*exp(-kappaV*dt*Tt)
SVolMeanAnalytic=theta+(v00-theta)*exp(-kappa*dt*Tt)
SVolMeanMC=sum(V(1:paths))/paths
AssetMeanAnalytic=x0
AssetMeanMC=sum(X(1:paths))/paths
end
.
Here is the updated version of the old function that was used to calculate hermite-wise correlations using the two dimensional density.
.
function [corr1,corr2,corr3,corr4,corr5,corr6,corr7] = FindHermiteCorrelationsFromSimulation_Infiniti_NEW04B(X,Y,Paths,NoOfBinsX,NoOfBinsY)
%This function processes joint observations of two variables X and Y from
%common monte carlo paths and makes a two dimensional density of X and Y.
%Later it finds correlations between hermite polynomials of X with hermite
%polynomials of Y of the same order.
Xmin=999999;
Xmin1=999999;
Xmin2=999999;
Xmax=0;
Xmax1=0;
Xmax2=0;
Ymin=999999;
Ymin1=999999;
Ymin2=999999;
Ymax=0;
Ymax1=0;
Ymax2=0;
meanX=0;
meanY=0;
for p=1:Paths
meanX=meanX+X(p)/Paths;
meanY=meanY+Y(p)/Paths;
if(Xmin>real(X(p)))
Xmin2=Xmin1;
Xmin1=Xmin;
Xmin=real(X(p));
end
if(Xmax<real(X(p)))
Xmax2=Xmax1;
Xmax1=Xmax;
Xmax=real(X(p));
end
if(Ymin>real(Y(p)))
Ymin2=Ymin1;
Ymin1=Ymin;
Ymin=real(Y(p));
end
if(Ymax<real(Y(p)))
Ymax2=Ymax1;
Ymax1=Ymax;
Ymax=real(Y(p));
end
end
%Xmin
%Xmin1
%Xmin2
%Xmax
%Xmax1
%Xmax2
%str=input('Look at Xmin and Xmax');
%IndexMax=NoOfBins+1;
BinSizeX=(Xmax2-Xmin2)/NoOfBinsX;
BinSizeY=(Ymax2-Ymin2)/NoOfBinsY;
%IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
IndexMaxX=floor((Xmax2-Xmin2)/BinSizeX+.5)+1
IndexMaxY=floor((Ymax2-Ymin2)/BinSizeY+.5)+1
Xn(1:IndexMaxX)=Xmin2+(0:(IndexMaxX-1))*BinSizeX;
Ym(1:IndexMaxY)=Ymin2+(0:(IndexMaxY-1))*BinSizeY;
XYProbMass2D(1:IndexMaxX,1:IndexMaxY)=0.0;
XProbMass(1:IndexMaxX)=0;
YProbMass(1:IndexMaxY)=0;
for p=1:Paths
indexX=real(floor(real(X(p)-Xmin2)/BinSizeX+.5)+1);
%indexXp=real(X(p)-Xmin2)/BinSizeX+1;
if(real(indexX)<1)
indexX=1;
end
if(real(indexX)>IndexMaxX)
indexX=IndexMaxX;
end
indexY=real(floor(real(Y(p)-Ymin2)/BinSizeY+.5)+1);
%indexYp=real(Y(p)-Ymin2)/BinSizeY;
if(real(indexY)<1)
indexY=1;
end
if(real(indexY)>IndexMaxY)
indexY=IndexMaxY;
end
%XDensity(index)=XDensity(index)+1.0/Paths/BinSize;
XProbMass(indexX)=XProbMass(indexX)+1.0/Paths;
YProbMass(indexY)=YProbMass(indexY)+1.0/Paths;
XYProbMass2D(indexX,indexY)=XYProbMass2D(indexX,indexY)+1.0/Paths;
end
XCDF(1:IndexMaxX)=0;
Zx(1:IndexMaxX)=0;
XCDF(1)=XProbMass(1)*.5;
Zx(1)=norminv(XCDF(1));
for nn=2:IndexMaxX
XCDF(nn)=XCDF(nn-1)+XProbMass(nn-1)*.5+XProbMass(nn)*.5;
Zx(nn)=norminv(XCDF(nn));
end
% Zxa(1)=Zx(1);
% Zxa(2:IndexMaxX)=.5*Zx(1:IndexMaxX-1)+.5*Zx(2:IndexMaxX);
% Zxb(1:IndexMaxX-1)=Zxa(2:IndexMaxX);
% Zxb(IndexMaxX)=Zx(IndexMaxX);
YCDF(1:IndexMaxY)=0;
Zy(1:IndexMaxY)=0;
YCDF(1)=YProbMass(1)*.5;
Zy(1)=norminv(YCDF(1));
for nn=2:IndexMaxY
YCDF(nn)=YCDF(nn-1)+YProbMass(nn-1)*.5+YProbMass(nn)*.5;
Zy(nn)=norminv(YCDF(nn));
end
% Zya(1)=Zy(1);
% Zya(2:IndexMaxY)=.5*Zy(1:IndexMaxY-1)+.5*Zy(2:IndexMaxY);
% Zyb(1:IndexMaxY-1)=Zya(2:IndexMaxY);
% Zyb(IndexMaxY)=Zy(IndexMaxY);
EH1y_1=0;
EH1y_2=0;
EH1y_3=0;
EH1y_4=0;
EH1y_5=0;
EH1y_6=0;
EH1y_7=0;
EH1x_1=0;
EH1x_2=0;
EH1x_3=0;
EH1x_4=0;
EH1x_5=0;
EH1x_6=0;
EH1x_7=0;
EH2x_1=0;
EH2x_2=0;
EH2x_3=0;
EH2x_4=0;
EH2x_5=0;
EH2x_6=0;
EH2x_7=0;
EH2y_1=0;
EH2y_2=0;
EH2y_3=0;
EH2y_4=0;
EH2y_5=0;
EH2y_6=0;
EH2y_7=0;
for nn=1:IndexMaxX
EH1x_1=EH1x_1+Zx(nn).*XProbMass(nn);
EH1x_2=EH1x_2+(Zx(nn).^2-1).*XProbMass(nn);
EH1x_3=EH1x_3+(Zx(nn).^3-3*Zx(nn)).*XProbMass(nn);
EH1x_4=EH1x_4+(Zx(nn).^4-6*Zx(nn).^2+3).*XProbMass(nn);
EH1x_5=EH1x_5+(Zx(nn).^5-10*Zx(nn).^3+15*Zx(nn)).*XProbMass(nn);
EH1x_6=EH1x_6+(Zx(nn).^6-15*Zx(nn).^4+45*Zx(nn).^2-15).*XProbMass(nn);
EH1x_7=EH1x_7+(Zx(nn).^7-21*Zx(nn).^5+105*Zx(nn).^3-105.*Zx(nn)).*XProbMass(nn);
EH2x_1=EH2x_1+Zx(nn).*Zx(nn).*XProbMass(nn);
EH2x_2=EH2x_2+(Zx(nn).^2-1).*(Zx(nn).^2-1).*XProbMass(nn);
EH2x_3=EH2x_3+(Zx(nn).^3-3*Zx(nn)).*(Zx(nn).^3-3*Zx(nn)).*XProbMass(nn);
EH2x_4=EH2x_4+(Zx(nn).^4-6*Zx(nn).^2+3).*(Zx(nn).^4-6*Zx(nn).^2+3).*XProbMass(nn);
EH2x_5=EH2x_5+(Zx(nn).^5-10*Zx(nn).^3+15*Zx(nn)).*(Zx(nn).^5-10*Zx(nn).^3+15*Zx(nn)).*XProbMass(nn);
EH2x_6=EH2x_6+(Zx(nn).^6-15*Zx(nn).^4+45*Zx(nn).^2-15).*(Zx(nn).^6-15*Zx(nn).^4+45*Zx(nn).^2-15).*XProbMass(nn);
EH2x_7=EH2x_7+(Zx(nn).^7-21*Zx(nn).^5+105*Zx(nn).^3-105.*Zx(nn)).*(Zx(nn).^7-21*Zx(nn).^5+105*Zx(nn).^3-105.*Zx(nn)).*XProbMass(nn);
end
for nn=1:IndexMaxY
EH1y_1=EH1y_1+Zy(nn).*YProbMass(nn);
EH1y_2=EH1y_2+(Zy(nn).^2-1).*YProbMass(nn);
EH1y_3=EH1y_3+(Zy(nn).^3-3*Zy(nn)).*YProbMass(nn);
EH1y_4=EH1y_4+(Zy(nn).^4-6*Zy(nn).^2+3).*YProbMass(nn);
EH1y_5=EH1y_5+(Zy(nn).^5-10*Zy(nn).^3+15*Zy(nn)).*YProbMass(nn);
EH1y_6=EH1y_6+(Zy(nn).^6-15*Zy(nn).^4+45*Zy(nn).^2-15).*YProbMass(nn);
EH1y_7=EH1y_7+(Zy(nn).^7-21*Zy(nn).^5+105*Zy(nn).^3-105.*Zy(nn)).*YProbMass(nn);
EH2y_1=EH2y_1+Zy(nn).*Zy(nn).*YProbMass(nn);
EH2y_2=EH2y_2+(Zy(nn).^2-1).*(Zy(nn).^2-1).*YProbMass(nn);
EH2y_3=EH2y_3+(Zy(nn).^3-3*Zy(nn)).*(Zy(nn).^3-3*Zy(nn)).*YProbMass(nn);
EH2y_4=EH2y_4+(Zy(nn).^4-6*Zy(nn).^2+3).*(Zy(nn).^4-6*Zy(nn).^2+3).*YProbMass(nn);
EH2y_5=EH2y_5+(Zy(nn).^5-10*Zy(nn).^3+15*Zy(nn)).*(Zy(nn).^5-10*Zy(nn).^3+15*Zy(nn)).*YProbMass(nn);
EH2y_6=EH2y_6+(Zy(nn).^6-15*Zy(nn).^4+45*Zy(nn).^2-15).*(Zy(nn).^6-15*Zy(nn).^4+45*Zy(nn).^2-15).*YProbMass(nn);
EH2y_7=EH2y_7+(Zy(nn).^7-21*Zy(nn).^5+105*Zy(nn).^3-105.*Zy(nn)).*(Zy(nn).^7-21*Zy(nn).^5+105*Zy(nn).^3-105.*Zy(nn)).*YProbMass(nn);
end
% EH1y_1
% EH1y_2
% EH1y_3
% EH1y_4
% EH1y_5
% EH1y_6
% EH1y_7
%
% EH2y_1
% EH2y_2
% EH2y_3
% EH2y_4
% EH2y_5
% EH2y_6
% EH2y_7
% str=input('Look at hermite expectations');
corr1=0;
corr2=0;
corr3=0;
corr4=0;
corr5=0;
corr6=0;
corr7=0;
for nn=1:IndexMaxX
for mm=1:IndexMaxY
corr1=corr1+Zx(nn).*Zy(mm).*XYProbMass2D(nn,mm)/sqrt(EH2x_1)/sqrt(EH2y_1);
corr2=corr2+(Zx(nn).^2-1).*(Zy(mm).^2-1).*XYProbMass2D(nn,mm)/sqrt(EH2x_2)/sqrt(EH2y_2);
corr3=corr3+(Zx(nn).^3-3*Zx(nn)).*(Zy(mm).^3-3*Zy(mm)).*XYProbMass2D(nn,mm)/sqrt(EH2x_3)/sqrt(EH2y_3);
corr4=corr4+(Zx(nn).^4-6*Zx(nn).^2+3).*(Zy(mm).^4-6*Zy(mm).^2+3).*XYProbMass2D(nn,mm)/sqrt(EH2x_4)/sqrt(EH2y_4);
corr5=corr5+(Zx(nn).^5-10*Zx(nn).^3+15*Zx(nn)).*(Zy(mm).^5-10*Zy(mm).^3+15*Zy(mm)).*XYProbMass2D(nn,mm)/sqrt(EH2x_5)/sqrt(EH2y_5);
corr6=corr6+(Zx(nn).^6-15*Zx(nn).^4+45*Zx(nn).^2-15).*(Zy(mm).^6-15*Zy(mm).^4+45*Zy(mm).^2-15).*XYProbMass2D(nn,mm)/sqrt(EH2x_6)/sqrt(EH2y_6);
corr7=corr7+(Zx(nn).^7-21*Zx(nn).^5+105*Zx(nn).^3-105.*Zx(nn)).*(Zy(mm).^7-21*Zy(mm).^5+105*Zy(mm).^3-105*Zy(mm)).*XYProbMass2D(nn,mm)/sqrt(EH2x_7)/sqrt(EH2y_7);
end
end
end
.
Here is the second new function that directly calculates the correlations between hermite polynomials without integrating over the two dimensional joint density.
.
function [corrP1,corrP2,corrP3,corrP4,corrP5,corrP6,corrP7] = FindHermiteCorrelationsFromSimulationData_Infiniti_NEW04(X,Y,Paths,NoOfBinsX,NoOfBinsY)
%This function processes observations of X and Y to create their densities.
%We back out the associated Zx and Zy on the density grid points using
%the inverse cdf function.
%At the same time, while calculating the above densities, we assign each
%monte carlo path a grid cell index and a displacement from the center of
%its grid cell. This helps us quickly find the Zxp associated with each
%monte carlo path by interpolating the values of Zx on the boundaries of
%the X grid cells. Similarly, this also helps us quickly find the Zyp
%associated with each monte carlo path by interpolating the values of Zy
%on the boundaries of the Y grid cells.
%Once we calculate the Zxp and Zyp associated with each monte carlo path
%cheaply using the above algorithm, we use them to find the path-wise
%correlations between hermite polynomials of Zxp and Zyp.
%We do not calculate a two-dimensional density in this new function; the
%two dimensional density is not needed.
Xmin=999999;
Xmin1=999999;
Xmin2=999999;
Xmax=0;
Xmax1=0;
Xmax2=0;
Ymin=999999;
Ymin1=999999;
Ymin2=999999;
Ymax=0;
Ymax1=0;
Ymax2=0;
meanX=0;
meanY=0;
for p=1:Paths
meanX=meanX+X(p)/Paths;
meanY=meanY+Y(p)/Paths;
if(Xmin>real(X(p)))
Xmin2=Xmin1;
Xmin1=Xmin;
Xmin=real(X(p));
end
if(Xmax<real(X(p)))
Xmax2=Xmax1;
Xmax1=Xmax;
Xmax=real(X(p));
end
if(Ymin>real(Y(p)))
Ymin2=Ymin1;
Ymin1=Ymin;
Ymin=real(Y(p));
end
if(Ymax<real(Y(p)))
Ymax2=Ymax1;
Ymax1=Ymax;
Ymax=real(Y(p));
end
end
%Xmin
%Xmin1
%Xmin2
%Xmax
%Xmax1
%Xmax2
%str=input('Look at Xmin and Xmax');
%IndexMax=NoOfBins+1;
BinSizeX=(Xmax2-Xmin2)/NoOfBinsX;
BinSizeY=(Ymax2-Ymin2)/NoOfBinsY;
%IndexMax=floor((Xmax-Xmin)/BinSize+.5)+1
IndexMaxX=floor((Xmax2-Xmin2)/BinSizeX+.5)+1
IndexMaxY=floor((Ymax2-Ymin2)/BinSizeY+.5)+1
Xn(1:IndexMaxX)=Xmin2+(0:(IndexMaxX-1))*BinSizeX;
Ym(1:IndexMaxY)=Ymin2+(0:(IndexMaxY-1))*BinSizeY;
XProbMass(1:IndexMaxX)=0;
YProbMass(1:IndexMaxY)=0;
Yp(1:Paths)=0;
Ydelta(1:Paths)=0;
Xp(1:Paths)=0;
Xdelta(1:Paths)=0;
for p=1:Paths
indexX=real(floor(real(X(p)-Xmin2)/BinSizeX+.5)+1);
if(real(indexX)<1)
indexX=1;
end
if(real(indexX)>IndexMaxX)
indexX=IndexMaxX;
end
%Below we note the grid cell index associated with each monte carlo path
%This grid cell index means that Zxp associated with the particular path
%would lie within boundaries of this grid cell.
%We also calculate the displacement of monte carlo simulated path value
%from the center of the grid cell.
%We can reconstruct each of the monte carlo simulated value from sum of
% the centre of the associated grid cell and its displacement from
%the centre of the particular grid cell.
Xp(p)=indexX;
Xdelta(p)=X(p)-Xn(indexX);
indexY=real(floor(real(Y(p)-Ymin2)/BinSizeY+.5)+1);
%indexYp=real(Y(p)-Ymin2)/BinSizeY;
if(real(indexY)<1)
indexY=1;
end
if(real(indexY)>IndexMaxY)
indexY=IndexMaxY;
end
%Below we repeat the calculation of associated grid cell center and
%displacement from the center of grid cell for simulated variable Y.
Yp(p)=indexY;
Ydelta(p)=Y(p)-Ym(indexY);
%XDensity(index)=XDensity(index)+1.0/Paths/BinSize;
XProbMass(indexX)=XProbMass(indexX)+1.0/Paths;
YProbMass(indexY)=YProbMass(indexY)+1.0/Paths;
end
%Below Zxa is array carrying values of Z on left boundary of X grid cells
%Below Zxb is array carrying values of Z on right boundary of X grid cells
XCDF(1:IndexMaxX)=0;
Zxb(1:IndexMaxX)=0;
XCDF(1)=XProbMass(1);
Zxb(1)=norminv(XCDF(1));
for nn=2:IndexMaxX-1
XCDF(nn)=XCDF(nn-1)+XProbMass(nn);
Zxb(nn)=norminv(XCDF(nn));
end
Zxb(IndexMaxX)=norminv(XCDF(IndexMaxX-1)+.25*XProbMass(IndexMaxX));
Zxa(1)=norminv(XCDF(1)*.75);
Zxa(2:IndexMaxX)=Zxb(1:IndexMaxX-1);
%Below Zya is array carrying values of Z on left boundary of Y grid cells
%Below Zyb is array carrying values of Z on right boundary of Y grid cells
YCDF(1:IndexMaxY)=0;
Zyb(1:IndexMaxY)=0;
YCDF(1)=YProbMass(1);
Zyb(1)=norminv(YCDF(1));
for nn=2:IndexMaxY-1
YCDF(nn)=YCDF(nn-1)+YProbMass(nn);
Zyb(nn)=norminv(YCDF(nn));
end
Zyb(IndexMaxY)=norminv(YCDF(IndexMaxY-1)+.25*YProbMass(IndexMaxY));
Zya(1)=norminv(YCDF(1)*.75);
Zya(2:IndexMaxY)=Zyb(1:IndexMaxY-1);
%Below find values of Zx and Zy associated with each simulation path
%by cheaply interpolating from the values of Zx and Zy on boundaries of
%grid cells. Zxa is left boundary Z. Zxb is right boundary Z.
%Zya is value of Z on left boundary of Y grid and Zyb is Z on right boundary.
%For this we use originally calculated values of center of grid associated
%with each monte carlo path and displacement from center as I described
%earlier.
Zxp(1:Paths)=Zxa(Xp(1:Paths)).*(.5*BinSizeX-Xdelta(1:Paths))/BinSizeX+Zxb(Xp(1:Paths)).*(.5*BinSizeX+Xdelta(1:Paths))/BinSizeX;
Zyp(1:Paths)=Zya(Yp(1:Paths)).*(.5*BinSizeY-Ydelta(1:Paths))/BinSizeY+Zyb(Yp(1:Paths)).*(.5*BinSizeY+Ydelta(1:Paths))/BinSizeY;
%Below calculate hermite correlations by summing calculations of values
%of hermite polynomials over all paths.
corrP1=sum(Zxp(1:Paths).*Zyp(1:Paths))/sqrt(sum(Zxp(1:Paths).^2))/sqrt(sum(Zyp(1:Paths).^2));
corrP2=sum((Zxp(1:Paths).^2-1).*(Zyp(1:Paths).^2-1))/sqrt(sum((Zxp(1:Paths).^2-1).^2))/sqrt(sum((Zyp(1:Paths).^2-1).^2));
corrP3=sum((Zxp(1:Paths).^3-3*Zxp(1:Paths)).*(Zyp(1:Paths).^3-3*Zyp(1:Paths)))/sqrt(sum((Zxp(1:Paths).^3-3*Zxp(1:Paths)).^2))/sqrt(sum((Zyp(1:Paths).^3-3*Zyp(1:Paths)).^2));
corrP4=sum((Zxp(1:Paths).^4-6*Zxp(1:Paths).^2+3).*(Zyp(1:Paths).^4-6*Zyp(1:Paths).^2+3))/sqrt(sum((Zxp(1:Paths).^4-6*Zxp(1:Paths).^2+3).^2))/sqrt(sum((Zyp(1:Paths).^4-6*Zyp(1:Paths).^2+3).^2));
corrP5=sum((Zxp(1:Paths).^5-10*Zxp(1:Paths).^3+15*Zxp(1:Paths)).*(Zyp(1:Paths).^5-10*Zyp(1:Paths).^3+15*Zyp(1:Paths)))/sqrt(sum((Zxp(1:Paths).^5-10*Zxp(1:Paths).^3+15*Zxp(1:Paths)).^2))/sqrt(sum((Zyp(1:Paths).^5-10*Zyp(1:Paths).^3+15*Zyp(1:Paths)).^2));
corrP6=sum((Zxp(1:Paths).^6-15*Zxp(1:Paths).^4+45*Zxp(1:Paths).^2-15).*(Zyp(1:Paths).^6-15*Zyp(1:Paths).^4+45*Zyp(1:Paths).^2-15))/sqrt(sum((Zxp(1:Paths).^6-15*Zxp(1:Paths).^4+45*Zxp(1:Paths).^2-15).^2))/sqrt(sum((Zyp(1:Paths).^6-15*Zyp(1:Paths).^4+45*Zyp(1:Paths).^2-15).^2));
corrP7=sum((Zxp(1:Paths).^7-21*Zxp(1:Paths).^5+105*Zxp(1:Paths).^3-105*Zxp(1:Paths)).*(Zyp(1:Paths).^7-21*Zyp(1:Paths).^5+105*Zyp(1:Paths).^3-105*Zyp(1:Paths)))/sqrt(sum((Zxp(1:Paths).^7-21*Zxp(1:Paths).^5+105*Zxp(1:Paths).^3-105*Zxp(1:Paths)).^2))/sqrt(sum((Zyp(1:Paths).^7-21*Zyp(1:Paths).^5+105*Zyp(1:Paths).^3-105*Zyp(1:Paths)).^2));
% Paths
% corrP1
% corrP2
% corrP3
% corrP4
% corrP5
% corrP6
% corrP7
% str=input('Look at CorrP1');
%
end
.
Here is the output comparison of the correlations calculated with both methods in the first step of the simulation. This is directly copied from the matlab screen. Please note that the corresponding correlations calculated with each of the two methods match each other well.
time1 =
0.031250000000000
IndexMaxX =
101
IndexMaxY =
101
corr1 =
-0.949183452536722
corr2 =
0.899807263855206
corr3 =
-0.850933403909789
corr4 =
0.800799240531867
corr5 =
-0.748708460339308
corr6 =
0.692354670405357
corr7 =
-0.636364169023430
Look at corrs--1--using two dimensional density
IndexMaxX =
101
IndexMaxY =
101
corr1 =
-0.949810309939746
corr2 =
0.901101916616058
corr3 =
-0.853022697232446
corr4 =
0.804356800948378
corr5 =
-0.755360437736705
corr6 =
0.702919493820407
corr7 =
-0.644527734650945
Look at corrs--2--Directly between simulated Data observations of DX and DV
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, in this post I want to present the ideas about hermite-specific correlation and regression that I have been mentioning for some time.
Suppose we have Z-series hermite representations of two correlated random variables as
$X= ah_0 \, + \, ah_1 H_1(Z_1)\, + \, ah_2 H_2(Z_1)\, + \, ah_3 H_3(Z_1)\, + \, ... + \, ah_n H_n(Z_1)$
and for Y
$Y= bh_0 \, + \, bh_1 H_1(Z_2)\, + \, bh_2 H_2(Z_2)\, + \, bh_3 H_3(Z_2)\, + \, ... + \, bh_n H_n(Z_2)$
Please note that in the hermite representation the constant term on the RHS is always equal to the mean, so we could have equally written
$X= \mu_X \, + \, ah_1 H_1(Z_1)\, + \, ah_2 H_2(Z_1)\, + \, ah_3 H_3(Z_1)\, + \, ... + \, ah_n H_n(Z_1)$
$Y= \mu_Y \, + \, bh_1 H_1(Z_2)\, + \, bh_2 H_2(Z_2)\, + \, bh_3 H_3(Z_2)\, + \, ... + \, bh_n H_n(Z_2)$
Suppose we have a large set of observations in pairs of X and Y and we want to find their hermite-wise correlation. This is very natural, since all the hermite polynomial components of variance within each of the probability distributions are orthogonal to each other.
Suppose we have a total of P (as in total paths of joint evolution of both stochastic processes) joint observations of stochastic processes X and Y.
For finding the correlation between the first hermite polynomial components, we write the equation for the correlation between the first hermites as
$\rho_{XY,1} \, = \, \frac{\displaystyle\sum_{p=1}^{P} ah_1 \, Z_{1,p} \, bh_1 \, Z_{2,p} \,}{\sqrt{\displaystyle\sum_{p=1}^{P} ah_1^2 \, {Z_{1,p}}^2 \,\displaystyle\sum_{p=1}^{P} \, bh_1^2 \, {Z_{2,p}}^2 \,}}$
Please notice that to calculate the above equation, we first have to find the specific values of $Z_{1,p}$ and $Z_{2,p}$ associated with the particular pth observation of $X_p$ and $Y_p$ by some inversion method. I will comment on this inversion at the end of this post.
Similarly, for the 2nd hermite polynomial, the correlation between X and Y calculated over the joint observations of the RVs is
$\rho_{XY,2} \, = \, \frac{\displaystyle\sum_{p=1}^{P} ah_2 \, ({Z_{1,p}}^2-1) \, bh_2 \, ({Z_{2,p}}^2-1)}{\sqrt{\displaystyle\sum_{p=1}^{P} ah_2^2 \, {({Z_{1,p}}^2-1)}^2 \, \displaystyle\sum_{p=1}^{P} bh_2^2 \, {({Z_{2,p}}^2-1)}^2}}$
Similarly, for the nth hermite polynomial correlations, we would write
$\rho_{XY,n} \, = \, \frac{\displaystyle\sum_{p=1}^{P} ah_n \, H_n(Z_{1,p}) \, bh_n \, H_n(Z_{2,p})}{\sqrt{\displaystyle\sum_{p=1}^{P} ah_n^2 \, {H_n(Z_{1,p})}^2 \, \displaystyle\sum_{p=1}^{P} bh_n^2 \, {H_n(Z_{2,p})}^2}}$
Please note that the correlations are computed from joint products of hermite polynomials; the coefficients cancel in magnitude, and matter only for checking that a component is non-zero and for determining the sign of the correlation.
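As an illustration, here is a minimal Python sketch (not the author's MATLAB code) of estimating these hermite-wise correlations from the recovered normals $Z_{1,p}$, $Z_{2,p}$; the coefficients $ah_n$, $bh_n$ are dropped since, as noted above, they cancel in magnitude:

```python
import numpy as np

# Probabilists' Hermite polynomials H_1, H_2, H_3 used in the equations above.
def hermite(n, z):
    return {1: z, 2: z**2 - 1, 3: z**3 - 3 * z}[n]

def hermite_correlations(z1, z2, n_max=3):
    """Correlation between the n-th hermite components of two samples, given
    the underlying standard normals z1[p], z2[p] of each joint observation p.
    The coefficients ah_n, bh_n cancel in magnitude (they only fix the sign,
    assumed positive here), so they are omitted."""
    rhos = []
    for n in range(1, n_max + 1):
        h1, h2 = hermite(n, z1), hermite(n, z2)
        rhos.append(np.sum(h1 * h2) / np.sqrt(np.sum(h1**2) * np.sum(h2**2)))
    return rhos

# Synthetic check: for a Gaussian pair with correlation 0.6, Mehler's formula
# gives hermite-wise correlations 0.6, 0.6^2, 0.6^3.
rng = np.random.default_rng(0)
P = 200_000
z1 = rng.standard_normal(P)
z2 = 0.6 * z1 + 0.8 * rng.standard_normal(P)
rhos = hermite_correlations(z1, z2)
print(rhos)  # approximately [0.6, 0.36, 0.216]
```

For a Gaussian copula the n-th hermite correlation is exactly $\rho^n$, which makes this decomposition easy to sanity-check before applying it to simulated SDE data.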
We know from simple regression theory that once we know correlation between two variables, we can write the regression equation between them as
$Y \, = \, \mu_Y \, + \, \rho_{XY} \frac{s_Y}{s_X} \, (X\, - \, \mu_X)$
So we could write a variant of the above traditional formula in our new case with hermite correlations as
$Y \, = \, \mu_Y \, + \, \rho_{XY,1} \frac{bh_1}{ah_1} \, Z_1 \, + \, \rho_{XY,2} \frac{bh_2}{ah_2} \, ({Z_1}^2 -1) \, + \, \rho_{XY,3} \frac{bh_3}{ah_3} \, ({Z_1}^3 -3 Z_1) + ...+\, \rho_{XY,n} \frac{bh_n}{ah_n} \, H_n(Z_1)$
There is one caveat: when we are in a Monte Carlo framework and want to find the above correlations and regressions, we have to invert an extremely large number of observations of X and Y to find their underlying Z's, since Monte Carlo path counts are usually very large. But this can safely be done by inverse series in Bessel coordinates. The specific value of Z associated with an observed/simulated data point remains the same in both Bessel and original coordinates. Finding it in Bessel coordinates has extremely small error for most SDE distributions I have encountered in 2D stochastic volatility models when the inverse-series method is used. The error is somewhat larger deep in the tails but still quite small, and we get very valid estimates of correlations and regressions using this series-inversion method as used in Mathematica.
Using series inversion in the original coordinates would generally have much larger errors deep in the tail. But since the specific value of Z associated with an observed/simulated data point remains the same in both Bessel and original coordinates, we can find it in Bessel coordinates (for much better accuracy) and then safely use the same value of Z for correlation and regression analytics in the original coordinates.
I hope to post a worked-out MATLAB program on this in another 2 days.
Please notice that the method used in the new function in the previous post is in line with the LaTeX equations suggested in post 1673, copied above. I have not used the hermite coefficients in the calculation of the correlations, since they simply cancel out. In this new program I have not used the inverse series; instead, I used interpolation over a grid to find the values of Z associated with the Monte Carlo-simulated random variables.
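A minimal sketch of that grid-interpolation inversion, assuming a monotone Z-series $x = g(z)$ (the specific coefficients below are made up for illustration):

```python
import numpy as np

def invert_by_interpolation(x_obs, g, z_grid=None):
    """Recover the standard normal z behind each observation x = g(z) by
    tabulating the monotone map g on a grid and interpolating its inverse."""
    if z_grid is None:
        z_grid = np.linspace(-6.0, 6.0, 4001)
    x_grid = g(z_grid)                       # monotone increasing in z
    return np.interp(x_obs, x_grid, z_grid)  # inverse lookup z = g^{-1}(x)

# Hypothetical Z-series with hermite terms; its derivative is positive
# everywhere, so the map is invertible.
g = lambda z: 1 + 0.5 * z + 0.1 * (z**2 - 1) + 0.02 * (z**3 - 3 * z)

rng = np.random.default_rng(1)
z_true = rng.standard_normal(1000)
z_rec = invert_by_interpolation(g(z_true), g)
print(np.max(np.abs(z_rec - z_true)))  # tiny interpolation error
```

The same lookup works whether the grid is tabulated in original or Bessel coordinates, since the Z associated with a data point is identical in both.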
Over the weekend, I will be posting new programs for regression over several variables as suggested in post 1673 and in another post earlier.
### Re: Breakthrough in the theory of stochastic differential equations and their simulation
Friends, despite my repeated protests, mind control agencies continue to drug food and beverages throughout Lahore city even in many remote areas. Today I had my lunch at a small street restaurant and even though I was careful to make sure that staff did not add anything in the cooked food placed in trays after my arrival, I realized that food was mildly drugged after I had taken it. Restaurant staff must have been adding mind control chemicals regularly to their food. When I left the restaurant, the staff was looking very suspiciously at me. I believe Pakistan army agents reached there and tried to ask them to give me drugged water or something else. Some people who had come later also left simultaneously in a van when I left the small restaurant.
Later I tried to get good water for several hours, but I still could not get good water. Nestle and Dasani (coca-cola) water had been good in the market for past one year but Nestle water with new manufacturing dates is slightly drugged in the market. It is not extremely drugged to completely alter my state of consciousness and create extreme anxiety, but it is still slightly drugged to alter my mental state enough to not let me concentrate and work after I had taken Nestle water. Over past week, I tried Nestle water at many places where I was sure that Pakistan army would never have drugged bottled water but still Nestle water was not good.
So today I spent several hours trying to get good water but to no avail. I bought water from multiple places while being extremely careful, and it was still not good. I also filled water bottles with water supply and water pumped from underground from several different places and I still could not get good water. Then I bought some sweet drinks of less known brands and came back home with water that was as good as I could find (though it was still drugged) after attempting several times.
I want to tell friends that mind control agencies and Pakistan army are not decreasing the persecution activity and drugging of food and beverages in the city. Underground water was good just a few days ago but I repeatedly failed to get good ground water today despite trying again and again. Even though they might be saying different things to good American people, mind control agencies are totally adamant and resolute in their attempts to retard me, and I do not think they will stop or decrease their activities anytime soon.
https://math.stackexchange.com/questions/625862/provided-as-distinct-eigenvalues-show-that-ax-u-has-no-solution-strang
# Provided $A$'s distinct eigenvalues, show that $Ax = u$ has no solution. [Strang P297 6.1.32]
My question differs from this question. Source for the following is on P3 hereof:
Suppose $A$ has eigenvalues $0,3,5$ with linearly independent eigenvectors $u,v,w$.
(a) Give a basis for $\operatorname{null}(A)$ and a basis for $\operatorname{col}(A)$.
(b) Show that $Ax=u$ has no solution.
Hint: If it did, then $()$ would be in $\operatorname{col}(A)$ and this contradicts the assumption.
### Solution
(a) $\operatorname{null}(A)=\operatorname{null}(A-0I)=E_0=\operatorname{span}(u)$. For any linear combination $c_1v+c_2w$,
$c_1\color{green}{v}+c_2\color{#FF4F00}w = c_1\color{green}{\dfrac{Av}{3}} + c_2\color{#FF4F00}{\dfrac{Aw}{5}} = A\left(\dfrac{c_1}3 v + \dfrac{c_2}5 w\right)\in\operatorname{col}(A),$
therefore $\operatorname{col}(A)=\operatorname{span}(v,w)$.
(b) $Ax=\color{green}v+\color{#FF4F00}w =\color{green}{\frac 1 3 A v}+\color{#FF4F00}{\frac 1 5 Aw} =A\left(\dfrac v 3 + \dfrac w 5\right)$.
All solutions are of the form $\dfrac v 3 + \dfrac w 5 + cu$.
(c) Assume that $Ax=u$ has a solution. Then $u\in\operatorname{col}(A)$,
but $u$ is linearly independent of both $v$ and $w$ therefore cannot be in $\operatorname{col}(A)$.
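A quick numerical check of the solution with a concrete matrix (a hypothetical example; the exercise leaves $A$ abstract) built to have eigenvalues $0, 3, 5$ with eigenvectors $u, v, w$:

```python
import numpy as np

# Hypothetical eigenvectors (any linearly independent triple works).
u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 1.0, 1.0])
S = np.column_stack([u, v, w])
A = S @ np.diag([0.0, 3.0, 5.0]) @ np.linalg.inv(S)  # Au = 0, Av = 3v, Aw = 5w

# (a) rank 2: col(A) = span(v, w) and null(A) = span(u).
x_ls, _, rank, _ = np.linalg.lstsq(A, u, rcond=None)
print(rank)  # 2

# (b) Ax = v + w is solvable by x = v/3 + w/5.
print(np.allclose(A @ (v / 3 + w / 5), v + w))  # True

# (c) Ax = u has no solution: even the least-squares residual stays large,
# i.e. u is bounded away from col(A).
print(np.linalg.norm(A @ x_ls - u))
```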
By P308, Thm 4.20, from David Poole's Linear Algebra, distinct eigenvalues correspond to linearly independent eigenvectors. Suppose I revamp part (c)'s argument for $\mathbf{v}$ instead:
Assume $\mathbf{x}$ solves $\mathbf{Ax = v}$. Then $\mathbf{v} \in col(A)$.
By Thm 4.20 (aforementioned), $\{\mathbf{u, v, w}\}$ is lin-ind therefore $\mathbf{v} \notin col(A)$.
$1.$ This is false because $\color{green}{\mathbf{Av = 3v}}$ is given. Thus, what am I misconstruing?
Why does the argument in part (c) hold for $\mathbf{u}$ but fail for $\mathbf{v}$ and $\mathbf{w}$ ?
$2.$ Moreover, what's the intuition for part (b)? P267, 268 of Poole presents the geometric interpretation of eigenvectors so I'd imagine something geometric here?
$\Large{\text{ Supplementary dated Jan 12 2014: }}$
$3.$ In (a), how would you divine/previse to consider $c_1\color{green}{v}+c_2\color{#FF4F00}w$ for determining $colspace(A)$?
$4.$ As regards (b), what's the objective in solving $Ax=\color{green}v+\color{#FF4F00}w$ for $x$? Why be concerned with this?
• Both your question and the link you gave do not mention, surprisingly, what is $\;A\;$, but it seems to be a $\;3\times 3\;$ matrix. Is this correct? And why your solutions has (a)-(b)-(c) if the question only has (a)-(b)? – DonAntonio Jan 3 '14 at 14:11
• If I understood correctly what's going on here, and that could be a long shot, the reasoning seems to be simply that $\;u\notin Span\,\{v\,,\,w\}=Col(A)\;$ so no solution's possible, but for $\;w,v\;$ there're trivially solutions... – DonAntonio Jan 3 '14 at 14:16
Let's address your question 1 first. Recall that in part a you found that $\{\mathbf{v},\ \mathbf{w}\}$ forms a basis for the columnspace of $A$. The reason that the argument works for $\mathbf{u}$ and not for $\mathbf{v}$ is because $\mathbf{u}$ is not an element of this basis set while $\mathbf{v}$ is. Analogously, the argument will also fail for $\mathbf{w}$ (and indeed any linear combination of $\mathbf{v}$ and $\mathbf{w}$).
Since $\{\mathbf{u},\ \mathbf{v},\ \mathbf{w}\}$ was assumed to be linearly independent, it follows that $\mathbf{u}$ is not in the span of $\mathbf{v}$ and $\mathbf{w}$. Otherwise, there would exist scalars $a$ and $b$ such that $$a\mathbf{v} + b\mathbf{w} = \mathbf{u}$$ But then we have the non-trivial linear combination $$a\mathbf{v} + b\mathbf{w} - \mathbf{u} = \mathbf{0}$$ contrary to the assumption of linear independence.
Since the span of $\mathbf{v}$ and $\mathbf{w}$ is precisely $\mathrm{col}(A)$, this means that $\mathbf{u}$ is not an element of the columnspace, i.e. there does not exist $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{u}$. Note that this argument cannot work for $\mathbf{v}$ because $\mathbf{v}$ is in the span of $\{\mathbf{v},\ \mathbf{w}\}$, and trivially so.
For question 2, the geometric intuition is complicated by the fact that there do exist matrices in which vectors in the nullspace are also in the columnspace (i.e. matrices for which part b has a solution). In fact, there exist matrices in which the columnspace is equal to the nullspace, for example $$A = \begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}$$ However, for diagonalizable matrices (such as the one in consideration), we can present a relatively simple view. If you have not learned about diagonalization yet, then it may be difficult to follow the next bit. I recommend that you continue with your studies for now; diagonalization is typically not far away once you've begun studying eigenvectors and eigenvalues. You can come back to the next part after you've learned about diagonalization.
When your matrix is diagonalizable, you can effectively view the matrix as a sequence of scalings along the standard axes (via a change of basis). With this viewpoint, we can effectively look at $A$ as a linear transformation in 3-space where the $x$-axis is stretched by a factor of $3$ (representing the basis vector $\mathbf{v}$), where the $y$-axis is stretched by a factor of $5$ (representing $\mathbf{w}$), and where the $z$-axis is compressed (representing $\mathbf{u}$).
From this viewpoint, the statement that $\mathbf{u}$ is not in the columnspace is analogous to the statement that the image of the linear transform does not contain the $z$-axis, but this is intuitively obvious since the mapping compresses $z$. In fact, it's relatively clear from our description that the image of the map is entirely contained within the $z=0$ plane, i.e. the $xy$-plane. This is a representation of the fact that the columnspace of $A$ does not contain any vectors in which $\mathbf{u}$ is a component, i.e. $a\mathbf{v} + b\mathbf{w} + c\mathbf{u}$ is not an element of $\mathrm{col}(A)$ if $c\neq 0$.
Addendum for the supplementary questions by the OP
Question 3: Why consider $c_1\mathbf{v} + c_2\mathbf{w}$ for the columnspace?
The consideration of linear combinations of $\mathbf{v}$ and $\mathbf{w}$ comes from experience with eigenvalues. The key point is the hidden assumption that the matrix is $3\times 3$, which is not stated in the text of the question for some reason. Since $A$ is $3\times 3$ with $3$ distinct eigenvalues, this means that $A$ is diagonalizable, as I mentioned previously. What this means in particular is that the eigenvectors $\mathbf{u},\ \mathbf{v}$ and $\mathbf{w}$ form a basis for $\mathbb{R}^3$ (assuming $A$ is a real matrix). If we consider some arbitrary vector $\mathbf{x}$ in $\mathbb{R}^3$, then it follows that we can write $\mathbf{x}$ as a linear combination of our basis of eigenvectors or eigenbasis: $$\mathbf{x} = c_1\mathbf{v} + c_2\mathbf{w} + c_3\mathbf{u}$$ If we then consider the image of $\mathbf{x}$ under $A$, we get $$A\mathbf{x} = c_1A\mathbf{v} + c_2A\mathbf{w} + c_3A\mathbf{u} = 3c_1\mathbf{v} + 5c_2\mathbf{w}$$ Since our choice of $\mathbf{x}$ is completely arbitrary, it follows that the entire image of $A$, that is the entire columnspace of $A$, is expressible as a linear combination of $\mathbf{v}$ and $\mathbf{w}$. Conversely, the solution for part (a) shows that every such linear combination is indeed in the columnspace.
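This eigenbasis computation is easy to verify numerically; with a hypothetical $A$ diagonalized by eigenvectors $\mathbf{u}, \mathbf{v}, \mathbf{w}$ for eigenvalues $0, 3, 5$, the image of any $\mathbf{x}$ indeed keeps only the $\mathbf{v}$ and $\mathbf{w}$ components:

```python
import numpy as np

# Hypothetical eigenvectors for eigenvalues 0, 3, 5.
u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 1.0, 1.0])
S = np.column_stack([u, v, w])
A = S @ np.diag([0.0, 3.0, 5.0]) @ np.linalg.inv(S)

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
c = np.linalg.solve(S, x)  # x = c[0]*u + c[1]*v + c[2]*w in the eigenbasis

# A x = 3*c[1]*v + 5*c[2]*w: the u-component is annihilated.
print(np.allclose(A @ x, 3 * c[1] * v + 5 * c[2] * w))  # True
```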
As for question 4, I'm also not too sure what the solution is trying to do. It seems that part (b) in the question is answered by part (c) in the solution and that part (b) of the solution is completely extraneous. It seems to answer a different question altogether.
Addendum 2 in response to OP's questions
The fact that $\mathbf{u}\notin \mathrm{span}(\mathbf{v},\ \mathbf{w})$ comes simply from the fact that $\{\mathbf{u},\ \mathbf{v},\ \mathbf{w}\}$ is a linearly independent set. Certainly you can recast the argument so that contradiction is not used, and here is one example of a proof using the contrapositive instead of contradiction. This proof makes it clear that the result follows essentially from the definition of linear independence.
Theorem: Suppose that $\{\mathbf{v}_1,\ \cdots,\ \mathbf{v}_n\}$ is a linearly independent set of vectors. Then $\mathbf{v}_1 \notin \mathrm{span}(\mathbf{v}_2,\ \cdots,\ \mathbf{v}_n)$.
Proof: The contrapositive of the statement is: Suppose that $\mathbf{v}_1 \in \mathrm{span}(\mathbf{v}_2,\ \cdots,\ \mathbf{v}_n)$. Then $\{\mathbf{v}_1,\ \cdots,\ \mathbf{v}_n\}$ is linearly dependent.
But this is essentially trivial. The fact that $\mathbf{v}_1 \in \mathrm{span}(\mathbf{v}_2,\ \cdots,\ \mathbf{v}_n)$ means there exists some linear combination of $\{\mathbf{v}_2,\ \cdots,\ \mathbf{v}_n\}$ adding up to $\mathbf{v}_1$, i.e. there exist scalars $\{c_i\}$ such that $$\mathbf{v}_1 = c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$ But then we also have $$-\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$ which is a non-trivial linear combination of $\{\mathbf{v}_1,\ \cdots,\ \mathbf{v}_n\}$ adding up to the zero vector. By definition, this means that $\{\mathbf{v}_1,\ \cdots,\ \mathbf{v}_n\}$ is linearly dependent. $\square$
Applying this result to the set $\{\mathbf{u},\ \mathbf{v},\ \mathbf{w}\}$ gives the desired result.
Note that the above argument works for any set of linearly independent vectors. However, we can also take advantage of the fact that $\{\mathbf{u},\ \mathbf{v},\ \mathbf{w}\}$ are eigenvectors under distinct eigenvalues to prove that they are linearly independent. This results in the well known theorem that eigenvectors corresponding to distinct eigenvalues are linearly independent. The proof of this fact is available in many, many locations and so I will not reproduce it here. I simply wanted to let you know of an alternative approach which you may prefer instead.
• +1. Thank you very much for your sterling detailed and helpful answer. I'd be grateful for your superlative attention and help towards my supplementary questions in my OP. If possible, would you please answer in your answer and not as a comment? – Greek - Area 51 Proposal Jan 12 '14 at 12:51
• @LePressentiment I've tried to answer your supplementary questions. Next time, please consider starting a new question and simply referencing the old. As for your question 4, I am also quite confused as to what the solution is trying to do. It seems extraneous to me. – EuYu Jan 13 '14 at 1:05
• Thank you profoundly again! Did you mean to write $\mathbf{v, w}$ where I inserted red brackets? Please feel free to revert the edit. Lastly, I apprehend your 2nd paragraph but is it possible to intuit $u \notin span\{v, w\}$ without contradiction? Would you please recast it as a direct natural argument? – Greek - Area 51 Proposal Jan 13 '14 at 15:17
• @LePressentiment I did mean to write $\mathbf{v}$ and $\mathbf{w}$, thanks for catching that typo. I've also added a section addressing your new question. – EuYu Jan 14 '14 at 5:59
• Thank you profoundly for your continual care. Please maintain your supernal contributions and beneficence, which I'm grateful for and cherish. – Greek - Area 51 Proposal Jan 25 '14 at 6:19
You have the right ideas, but you're expressing them in a very tortuous way.
If the vectors $u$, $v$ and $w$ are linearly independent, they must be relative to distinct eigenvalues (assuming $A$ is $3\times3$). I believe it's also intended that $Au=0$, $Av=3v$ and $Aw=5w$, but in this case linear independence would follow.
Moreover $\{u,v,w\}$ is a basis of $\mathbb{R}^3$ (or $\mathbb{C}^3$, if you use complex numbers).
Each eigenspace has dimension $1$, so $u$ generates the null space.
An equation $Ax=y$ has a solution if and only if $y$ belongs to the column space of $A$; but, since $v$ and $w$ obviously belong to the column space, they form a basis of it, because of the rank-nullity theorem. It follows that $u$ doesn't belong to the column space.
Note that saying $u$ is linearly independent from both $v$ and $w$ is meaningless.
• +1. Thank you very much for your answer. Would you please expound on "$u$ is linearly independent from both $v$ and $w$ is meaningless"? What should the author have written? I'd have written $\{u, v, w\}$ is linearly independent. I'd also be grateful for your help towards my supplementary in my OP. – Greek - Area 51 Proposal Jan 12 '14 at 12:52
• @LePressentiment The statement is too ambiguous to have a sensible meaning. – egreg Jan 12 '14 at 13:42
• I don't perceive what you mean. Would you please enlarge on your comment in your answer? Please forgive me for forgetting +1. – Greek - Area 51 Proposal Jan 13 '14 at 15:18
http://theses.gla.ac.uk/5084/
# Shift invariant preduals of l1(Z), and isomorphisms with c0(Z)
Pierzchala, Tomasz (2014) Shift invariant preduals of l1(Z), and isomorphisms with c0(Z). MSc(R) thesis, University of Glasgow.
The relation between shift-invariant preduals of the space of summable sequences $\ell_{1}(\mathbb Z)$ and the dual Banach algebra $\ell_{1}(\mathbb Z)$ equipped with the convolution product has driven recent research on the preduality of this space. From the paper entitled 'Shift Invariant Preduals of $\ell_{1}(\mathbb Z)$', written by Matthew Daws, Richard Haydon, Thomas Schlumprecht and Stuart White, we know that there exists an uncountable family $\left\{F^{(\lambda)}\right\}_{\lambda\in \mathbb C}$ of shift-invariant preduals of $\ell_{1}(\mathbb Z)$, and that all the preduals $F^{(\lambda)}$ constructed in that paper are isomorphic to $c_{0}(\mathbb Z)$, the space of sequences converging to zero. That conclusion is based on the abstract theory of the Szlenk index and does not state the explicit form of the isomorphism. This thesis makes an attempt to define such an isomorphism; in other words, I will form an isomorphism between $c_{0}(\mathbb Z)$ and $F^{(\lambda)}_{+}$, which is a subspace of $F^{(\lambda)}$.
http://www.sontaglab.org/PUBDIR/Keyword/NONLINEAR-SYSTEMS.html
Articles in journal or book chapters
1. M. A. Al-Radhawi, D. Del Vecchio, and E. D. Sontag. Multi-modality in gene regulatory networks with slow gene binding. PLoS Computational Biology, 2019. Note: To appear. Preprint in arXiv:1705.02330, May 2017 rev Nov 2017. [PDF] Keyword(s): multistability, gene networks, Markov Chains, Master Equation, cancer heterogeneity, phenotypic variation, nonlinear systems, stochastic models, epigenetics.
Abstract:
In biological processes such as embryonic development, hematopoietic cell differentiation, and the arising of tumor heterogeneity and consequent resistance to therapy, mechanisms of gene activation and deactivation may play a role in the emergence of phenotypically heterogeneous yet genetically identical (clonal) cellular populations. Mathematically, the variability in phenotypes in the absence of genetic variation can be modeled through the existence of multiple metastable attractors in nonlinear systems subject to stochastic switching, each one of them associated with an alternative epigenetic state. An important theoretical and practical question is that of estimating the number and location of these states, as well as their relative probabilities of occurrence. This paper focuses on a rigorous analytic characterization of multiple modes under slow promoter kinetics, which is a feature of epigenetic regulation. It characterizes the stationary distributions of Chemical Master Equations for gene regulatory networks as a mixture of Poisson distributions. As illustrations, the theory is used to tease out the role of cooperative binding in stochastic models in comparison to deterministic models, and applications are given to various model systems, such as toggle switches in isolation or in communicating populations and a trans-differentiation network.
2. J.M. Greene, J.L. Gevertz, and E. D. Sontag. A mathematical approach to distinguish spontaneous from induced evolution of drug resistance during cancer treatment. JCO Clinical Cancer Informatics, 2019. Note: To appear. Keyword(s): cancer heterogeneity, phenotypic variation, nonlinear systems, epigenetics.
Abstract:
Resistance to chemotherapy is a major impediment to the successful treatment of cancer. Classically, resistance has been thought to arise primarily through random genetic mutations, after which mutated cells expand via Darwinian selection. However, recent experimental evidence suggests that the progression to resistance need not occur randomly, but instead may be induced by the therapeutic agent itself. This process of resistance induction can be a result of genetic changes, or can occur through epigenetic alterations that cause otherwise drug-sensitive cancer cells to undergo "phenotype switching". This relatively novel notion of resistance further complicates the already challenging task of designing treatment protocols that minimize the risk of evolving resistance. In an effort to better understand treatment resistance, we have developed a mathematical modeling framework that incorporates both random and drug-induced resistance. Our model demonstrates that the ability (or lack thereof) of a drug to induce resistance can result in qualitatively different responses to the same drug dose and delivery schedule. The importance of induced resistance in treatment response led us to ask if, in our model, one can determine the resistance induction rate of a drug for a given treatment protocol. Not only could we prove that the induction parameter in our model is theoretically identifiable, but we have also proposed a possible in vitro experiment which could practically be used to determine a treatment's propensity to induce resistance.
3. M. Lang and E.D. Sontag. Zeros of nonlinear systems with input invariances. Automatica, 81:46-55, 2017. [PDF] Keyword(s): scale invariance, fold change detection, nonlinear systems, realization theory, internal model principle.
Abstract:
This paper introduces two generalizations of systems invariant with respect to continuous sets of input transformations, that is, systems whose output dynamics remain invariant when applying a transformation to the input and simultaneously adjusting the initial conditions. These generalizations concern systems invariant with respect to time-dependent input transformations with "exponentially increasing or decreasing strength", and systems invariant with respect to transformations of the "nonlinear derivatives" of the input. Interestingly, these two generalizations of invariant systems encompass linear time-invariant (LTI) systems with real transfer function zeros of arbitrary multiplicity. Furthermore, the zero-dynamics of systems possessing our generalized invariances show properties analogous to those of LTI systems with transfer function zeros, generalizing concepts like pole-zero cancellation, the rejection of ramps by Hurwitz LTI systems with a zero at the origin with multiplicity two, and (to a certain extent) the superposition principle with respect to inputs zeroing the output.
4. M. Margaliot, E.D. Sontag, and T. Tuller. Contraction after small transients. Automatica, 67:178-184, 2016. [PDF] Keyword(s): entrainment, nonlinear systems, stability, contractions, contractive systems.
Abstract:
Contraction theory is a powerful tool for proving asymptotic properties of nonlinear dynamical systems including convergence to an attractor and entrainment to a periodic excitation. We introduce three new forms of generalized contraction (GC) that are motivated by allowing contraction to take place after small transients in time and/or amplitude. These forms of GC are useful for several reasons. First, allowing small transients does not destroy the asymptotic properties provided by standard contraction. Second, in some cases as we change the parameters in a contractive system it becomes a GC just before it loses contractivity. In this respect, GC is the analogue of marginal stability in Lyapunov stability theory. We provide checkable sufficient conditions for GC, and demonstrate their usefulness using several models from systems biology that are not contractive, with respect to any norm, yet are GC.
5. A. Raveh, M. Margaliot, E.D. Sontag, and T. Tuller. A model for competition for ribosomes in the cell. Proc. Royal Society Interface, 13:2015.1062, 2016. [PDF] Keyword(s): resource competition, ribosomes, entrainment, nonlinear systems, stability, contractions, contractive systems.
Abstract:
We develop and analyze a general model for large-scale simultaneous mRNA translation and competition for ribosomes. Such models are especially important when dealing with highly expressed genes, as these consume more resources. For our model, we prove that the compound system always converges to a steady-state and that it always entrains or phase locks to periodically time-varying transition rates in any of the mRNA molecules. We use this model to explore the interactions between the various mRNA molecules and ribosomes at steady-state. We show that increasing the length of an mRNA molecule decreases the production rate of all the mRNAs. Increasing any of the codon translation rates in a specific mRNA molecule yields a local effect: an increase in the translation rate of this mRNA, and also a global effect: the translation rates in the other mRNA molecules all increase or all decrease. These results suggest that the effect of codon decoding rates of endogenous and heterologous mRNAs on protein production might be more complicated than previously thought.
6. Z. Aminzare and E.D. Sontag. Synchronization of diffusively-connected nonlinear systems: results based on contractions with respect to general norms. IEEE Transactions on Network Science and Engineering, 1(2):91-106, 2014. [PDF] Keyword(s): matrix measures, logarithmic norms, synchronization, consensus, contractions, contractive systems.
Abstract:
Contraction theory provides an elegant way to analyze the behavior of certain nonlinear dynamical systems. In this paper, we discuss the application of contraction to synchronization of diffusively interconnected components described by nonlinear differential equations. We provide estimates of convergence of the difference in states between components, in the cases of line, complete, and star graphs, and Cartesian products of such graphs. We base our approach on contraction theory, using matrix measures derived from norms that are not induced by inner products. Such norms are the most appropriate in many applications, but proofs cannot rely upon Lyapunov-like linear matrix inequalities, and different techniques, such as the use of the Perron-Frobenius Theorem in the cases of L1 or L-infinity norms, must be introduced.
7. M. Margaliot, E.D. Sontag, and T. Tuller. Entrainment to periodic initiation and transition rates in a computational model for gene translation. PLoS ONE, 9(5):e96039, 2014. [WWW] [PDF] [doi:10.1371/journal.pone.0096039] Keyword(s): ribosomes, entrainment, nonlinear systems, stability, contractions, contractive systems.
Abstract:
A recent biological study has demonstrated that the gene expression pattern entrains to a periodically varying abundance of tRNA molecules. This motivates developing mathematical tools for analyzing entrainment of translation elongation to intra-cellular signals such as tRNAs levels and other factors affecting translation. We consider a recent deterministic mathematical model for translation called the Ribosome Flow Model (RFM). We analyze this model under the assumption that the elongation rate of the tRNA genes and/or the initiation rate are periodic functions with a common period T. We show that the protein synthesis pattern indeed converges to a unique periodic trajectory with period T. The analysis is based on introducing a novel property of dynamical systems, called contraction after a short transient (CAST), that may be of independent interest. We provide a sufficient condition for CAST and use it to prove that the RFM is CAST, and that this implies entrainment. Our results support the conjecture that periodic oscillations in tRNA levels and other factors related to the translation process can induce periodic oscillations in protein levels, and suggest a new approach for engineering genes to obtain a desired, periodic, synthesis rate.
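The Ribosome Flow Model studied in this paper is a small system of ODEs for site occupancies. As a hedged sketch (the standard n-site RFM equations with constant rather than periodic rates; the rate values lam0 and lam are arbitrary illustrative choices, not from the paper), a minimal forward-Euler simulation:

```python
import numpy as np

def rfm_rhs(x, lam0, lam):
    # Ribosome Flow Model with n sites:
    # x[i] in [0,1] is the occupancy of site i,
    # lam0 is the initiation rate, lam[i] the elongation rate out of site i.
    n = len(x)
    dx = np.empty(n)
    inflow = lam0 * (1 - x[0])
    for i in range(n):
        # flow out of site i is blocked by occupancy of site i+1
        outflow = lam[i] * x[i] * (1 - x[i + 1]) if i < n - 1 else lam[i] * x[i]
        dx[i] = inflow - outflow
        inflow = outflow
    return dx

# Forward-Euler integration toward the (unique) steady state.
lam0, lam = 1.0, np.array([2.0, 1.5, 1.0])
x = np.zeros(3)
for _ in range(20000):
    x += 0.01 * rfm_rhs(x, lam0, lam)

print(x)               # steady-state occupancies, each in (0,1)
print(lam[-1] * x[-1]) # steady-state protein production rate
```

With periodic lam0 or lam (the case analyzed in the paper), the entrainment result says the occupancies converge to a periodic trajectory of the same period instead of a fixed point.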
8. L. Scardovi, M. Arcak, and E.D. Sontag. Synchronization of interconnected systems with applications to biochemical networks: an input-output approach. IEEE Transactions Autom. Control, 55:1367-1379, 2010. [PDF]
Abstract:
This paper provides synchronization conditions for networks of nonlinear systems, where each component of the network itself consists of subsystems represented as operators in the extended L2 space. The synchronization conditions are provided by combining the input-output properties of the subsystems with information about the structure of network. The paper also explores results for state-space models as well as biochemical applications. The work is motivated by cellular networks where signaling occurs both internally, through interactions of species, and externally, through intercellular signaling.
9. M. Arcak and E.D. Sontag. A passivity-based stability criterion for a class of interconnected systems and applications to biochemical reaction networks. Mathematical Biosciences and Engineering, 5:1-19, 2008. Note: Also, preprint: arxiv0705.3188v1 [q-bio], May 2007. [PDF] Keyword(s): systems biology, biochemical networks, cyclic feedback systems, secant condition, nonlinear stability, dynamical systems.
Abstract:
This paper presents a stability test for a class of interconnected nonlinear systems motivated by biochemical reaction networks. One of the main results determines global asymptotic stability of the network from the diagonal stability of a "dissipativity matrix" which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the interconnection terms. This stability test encompasses the "secant criterion" for cyclic networks presented in our previous paper, and extends it to a general interconnection structure represented by a graph. A second main result allows one to accommodate state products. This extension makes the new stability criterion applicable to a broader class of models, even in the case of cyclic systems. The new stability test is illustrated on a mitogen activated protein kinase (MAPK) cascade model, and on a branched interconnection structure motivated by metabolic networks. Finally, another result addresses the robustness of stability in the presence of diffusion terms in a compartmental system made out of identical systems.
10. E.D. Sontag. Input to state stability: Basic concepts and results. In P. Nistri and G. Stefani, editors, Nonlinear and Optimal Control Theory, pages 163-220. Springer-Verlag, Berlin, 2007. [PDF] Keyword(s): input to state stability, stability, input to state stability, nonlinear systems, detectability, nonlinear regulation.
Abstract:
This expository presentation, prepared for a summer course, addresses the precise formulation of questions of robustness with respect to disturbances, using the paradigm of input to state stability. It provides an intuitive and informal presentation of the main concepts.
11. M. Chaves and E.D. Sontag. Exact computation of amplification for a class of nonlinear systems arising from cellular signaling pathways. Automatica, 42:1987-1992, 2006. [PDF] Keyword(s): systems biology, biochemical networks, nonlinear stability, dynamical systems.
Abstract:
A commonly employed measure of the signal amplification properties of an input/output system is its induced L2 norm, sometimes also known as H-infinity gain. In general, however, it is extremely difficult to compute the numerical value for this norm, or even to check that it is finite, unless the system being studied is linear. This paper describes a class of systems for which it is possible to reduce this computation to that of finding the norm of an associated linear system. In contrast to linearization approaches, a precise value, not an estimate, is obtained for the full nonlinear model. The class of systems that we study arose from the modeling of certain biological intracellular signaling cascades, but the results should be of wider applicability.
12. J.P. Hespanha, D. Liberzon, D. Angeli, and E.D. Sontag. Nonlinear norm-observability notions and stability of switched systems. IEEE Trans. Automat. Control, 50(2):154-168, 2005. [PDF] Keyword(s): observability, input to state stability, observability, invariance principle.
Abstract:
This paper proposes several definitions of observability for nonlinear systems and explores relationships among them. These observability properties involve the existence of a bound on the norm of the state in terms of the norms of the output and the input on some time interval. A Lyapunov-like sufficient condition for observability is also obtained. As an application, we prove several variants of LaSalle's stability theorem for switched nonlinear systems. These results are demonstrated to be useful for control design in the presence of switching as well as for developing stability results of Popov type for switched feedback systems.
13. M. Malisoff and E.D. Sontag. Asymptotic controllability and input-to-state stabilization: the effect of actuator errors. In Optimal control, stabilization and nonsmooth analysis, volume 301 of Lecture Notes in Control and Inform. Sci., pages 155-171. Springer, Berlin, 2004. [PDF] Keyword(s): input to state stability, control-Lyapunov functions, nonlinear control, feedback stabilization.
Abstract:
We discuss several issues related to the stabilizability of nonlinear systems. First, for continuously stabilizable systems, we review constructions of feedbacks that render the system input-to-state stable with respect to actuator errors. Then, we discuss a recent paper which provides a new feedback design that makes globally asymptotically controllable systems input-to-state stable to actuator errors and small observation noise. We illustrate our constructions using the nonholonomic integrator, and discuss a related feedback design for systems with disturbances.
14. D. Angeli, E.D. Sontag, and Y. Wang. Input-to-state stability with respect to inputs and their derivatives. Internat. J. Robust Nonlinear Control, 13(11):1035-1056, 2003. [PDF] Keyword(s): input to state stability, input to state stability.
Abstract:
A new notion of input-to-state stability involving infinity norms of input derivatives up to a finite order k is introduced and characterized. An example shows that this notion of stability is indeed weaker than the usual ISS. Applications to the study of global asymptotic stability of cascaded nonlinear systems are discussed.
15. M. Krichman and E.D. Sontag. Characterizations of detectability notions in terms of discontinuous dissipation functions. Internat. J. Control, 75(12):882-900, 2002. [PDF] Keyword(s): input to state stability, detectability.
Abstract:
We consider a new Lyapunov-type characterization of detectability for nonlinear systems without controls, in terms of lower-semicontinuous (not necessarily smooth, or even continuous) dissipation functions, and prove its equivalence to the GASMO (global asymptotic stability modulo outputs) and UOSS (uniform output-to-state stability) properties studied in previous work. The result is then extended to provide a construction of a discontinuous dissipation function characterization of the IOSS (input-to-state stability) property for systems with controls. This paper complements a recent result on smooth Lyapunov characterizations of IOSS. The utility of non-smooth Lyapunov characterizations is illustrated by application to a well-known transistor network example.
16. D. Liberzon, A. S. Morse, and E.D. Sontag. Output-input stability and minimum-phase nonlinear systems. IEEE Trans. Automat. Control, 47(3):422-436, 2002. [PDF] Keyword(s): input to state stability, nonlinear control, minimum phase, adaptive control.
Abstract:
This paper introduces and studies a new definition of the minimum-phase property for general smooth nonlinear control systems. The definition does not rely on a particular choice of coordinates in which the system takes a normal form or on the computation of zero dynamics. In the spirit of the "input-to-state stability" philosophy, it requires the state and the input of the system to be bounded by a suitable function of the output and derivatives of the output, modulo a decaying term depending on initial conditions. The class of minimum-phase systems thus defined includes all affine systems in global normal form whose internal dynamics are input-to-state stable and also all left-invertible linear systems whose transmission zeros have negative real parts. As an application, we explain how the new concept enables one to develop a natural extension to nonlinear systems of a basic result from linear adaptive control.
17. D. Liberzon, E.D. Sontag, and Y. Wang. Universal construction of feedback laws achieving ISS and integral-ISS disturbance attenuation. Systems Control Lett., 46(2):111-127, 2002. Note: Errata here: http://www.math.rutgers.edu/~sontag/FTPDIR/iiss-clf-errata.pdf. [PDF] Keyword(s): input to state stability, nonlinear control, feedback stabilization.
Abstract:
We study nonlinear systems with both control and disturbance inputs. The main problem addressed in the paper is design of state feedback control laws that render the closed-loop system integral-input-to-state stable (iISS) with respect to the disturbances. We introduce an appropriate concept of control Lyapunov function (iISS-CLF), whose existence leads to an explicit construction of such a control law. The same method applies to the problem of input-to-state stabilization. Converse results and techniques for generating iISS-CLFs are also discussed.
18. M. Krichman, E.D. Sontag, and Y. Wang. Input-output-to-state stability. SIAM J. Control Optim., 39(6):1874-1928, 2001. [PDF] [doi:http://dx.doi.org/10.1137/S0363012999365352] Keyword(s): input to state stability.
Abstract:
This work explores Lyapunov characterizations of the input-output-to-state stability (IOSS) property for nonlinear systems. The notion of IOSS is a natural generalization of the standard zero-detectability property used in the linear case. The main contribution of this work is to establish a complete equivalence between the input-output-to-state stability property and the existence of a certain type of smooth Lyapunov function. As corollaries, one shows the existence of "norm-estimators", and obtains characterizations of nonlinear detectability in terms of relative stability and of finite-energy estimates.
19. E.D. Sontag. Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction. IEEE Trans. Automat. Control, 46(7):1028-1047, 2001. [PDF] Keyword(s): zero-deficiency networks, systems biology, biochemical networks, nonlinear stability, dynamical systems.
Abstract:
This paper deals with the theory of structure, stability, robustness, and stabilization for an appealing class of nonlinear systems which arises in the analysis of chemical networks. The results given here extend, but are also heavily based upon, certain previous work by Feinberg, Horn, and Jackson, of which a self-contained and streamlined exposition is included. The theoretical conclusions are illustrated through an application to the kinetic proofreading model proposed by McKeithan for T-cell receptor signal transduction.
20. Y.S. Ledyaev and E.D. Sontag. A Lyapunov characterization of robust stabilization. Nonlinear Anal., 37(7, Ser. A: Theory Methods):813-840, 1999. [PDF] Keyword(s): nonlinear control, feedback stabilization.
Abstract:
One of the fundamental facts in control theory (Artstein's theorem) is the equivalence, for systems affine in controls, between continuous feedback stabilizability to an equilibrium and the existence of smooth control Lyapunov functions. This equivalence breaks down for general nonlinear systems, not affine in controls. One of the main results in this paper establishes that the existence of smooth Lyapunov functions implies the existence of (in general, discontinuous) feedback stabilizers which are insensitive to small errors in state measurements. Conversely, it is shown that the existence of such stabilizers in turn implies the existence of smooth control Lyapunov functions. Moreover, it is established that, for general nonlinear control systems under persistently acting disturbances, the existence of smooth Lyapunov functions is equivalent to the existence of (possibly discontinuous) feedback stabilizers which are robust with respect to small measurement errors and small additive external disturbances.
21. D. Nesic, A.R. Teel, and E.D. Sontag. Formulas relating KL stability estimates of discrete-time and sampled-data nonlinear systems. Systems Control Lett., 38(1):49-60, 1999. [PDF] Keyword(s): input to state stability, sampled-data systems, discrete-time systems, sampling.
Abstract:
We provide an explicit KL stability or input-to-state stability (ISS) estimate for a sampled-data nonlinear system in terms of the KL estimate for the corresponding discrete-time system and a K function describing inter-sample growth. It is quite obvious that a uniform inter-sample growth condition, plus an ISS property for the exact discrete-time model of a closed-loop system, implies uniform ISS of the sampled-data nonlinear system; our results serve to quantify these facts by means of comparison functions. Our results can be used as an alternative to prove and extend results of Aeyels et al and extend some results by Chen et al to a class of nonlinear systems. Finally, the formulas we establish can be used as a tool for some other problems which we indicate.
22. E.D. Sontag. Comments on integral variants of ISS. Systems Control Lett., 34(1-2):93-100, 1998. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(98)00003-6] Keyword(s): input-to-state stability.
Abstract:
This note discusses two integral variants of the input-to-state stability (ISS) property, which represent nonlinear generalizations of L2 stability, in much the same way that ISS generalizes L-infinity stability. Both variants are equivalent to ISS for linear systems. For general nonlinear systems, it is shown that one of the new properties is strictly weaker than ISS, while the other one is equivalent to it. For bilinear systems, a complete characterization is provided of the weaker property. An interesting fact about functions of type KL is proved as well.
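For orientation, the properties compared in this note are usually stated via estimates of the following form (a standard paraphrase of the usual definitions, not quoted from the paper):

```latex
% ISS: the state is eventually bounded by a gain on the sup norm of the input
|x(t)| \;\le\; \beta\big(|x(0)|,\,t\big) \;+\; \gamma\Big(\sup_{0\le s\le t}|u(s)|\Big)
% iISS: the bound involves an integral of a nonlinear function of the input
\alpha\big(|x(t)|\big) \;\le\; \beta\big(|x(0)|,\,t\big) \;+\; \int_0^t \gamma\big(|u(s)|\big)\,ds
% with \alpha, \gamma of class K-infinity and \beta of class KL
```

For linear systems the two notions coincide; the note shows that for general nonlinear systems iISS is strictly weaker than ISS.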
23. E.D. Sontag and Y. Wang. Output-to-state stability and detectability of nonlinear systems. Systems Control Lett., 29(5):279-290, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)90013-X] Keyword(s): input to state stability, detectability, input to state stability.
Abstract:
The notion of input-to-state stability (ISS) has proved to be useful in nonlinear systems analysis. This paper discusses a dual notion, output-to-state stability (OSS). A characterization is provided in terms of a dissipation inequality involving storage (Lyapunov) functions. Combining ISS and OSS there results the notion of input/output-to-state stability (IOSS), which is also studied and related to the notion of detectability, the existence of observers, and output injection.
24. E.D. Sontag. State-space and i/o stability for nonlinear systems. In Feedback control, nonlinear systems, and complexity (Montreal, PQ, 1994), volume 202 of Lecture Notes in Control and Inform. Sci., pages 215-235. Springer, London, 1995. Note: (Expository paper, placed online per request. The paper "Input to state stability: Basic concepts and results" is far more up to date and should be downloaded instead of this one!). [PDF] Keyword(s): input to state stability.
25. Y. Wang and E.D. Sontag. Orders of input/output differential equations and state-space dimensions. SIAM J. Control Optim., 33(4):1102-1126, 1995. [PDF] [doi:http://dx.doi.org/10.1137/S0363012993246828] Keyword(s): identifiability, observability, realization theory.
Abstract:
This paper deals with the orders of input/output equations satisfied by nonlinear systems. Such equations represent differential (or difference, in the discrete-time case) relations between high-order derivatives (or shifts, respectively) of input and output signals. It is shown that, under analyticity assumptions, there cannot exist equations of order less than the minimal dimension of any observable realization; this generalizes the known situation in the classical linear case. The results depend on new facts, themselves of considerable interest in control theory, regarding universal inputs for observability in the discrete case, and observation spaces in both the discrete and continuous cases. Included in the paper is also a new and simple self-contained proof of Sussmann's universal input theorem for continuous-time analytic systems.
26. F. Albertini and E.D. Sontag. Further results on controllability properties of discrete-time nonlinear systems. Dynam. Control, 4(3):235-253, 1994. [PDF] [doi:http://dx.doi.org/10.1007/BF01985073] Keyword(s): discrete-time, nonlinear control.
Abstract:
Controllability questions for discrete-time nonlinear systems are addressed in this paper. In particular, we continue the search for conditions under which the group-like notion of transitivity implies the stronger and semigroup-like property of forward accessibility. We show that this implication holds, pointwise, for states which have a weak Poisson stability property, and globally, if there exists a global "attractor" for the system.
27. F. Albertini and E.D. Sontag. Discrete-time transitivity and accessibility: analytic systems. SIAM J. Control Optim., 31(6):1599-1622, 1993. [PDF] [doi:http://dx.doi.org/10.1137/0331075]
Abstract:
A basic open question for discrete-time nonlinear systems is that of determining when, in analogy with the classical continuous-time "positive form of Chow's Lemma", accessibility follows from transitivity of a natural group action. This paper studies the problem, and establishes the desired implication for analytic systems in several cases: (i) compact state space, (ii) under a Poisson stability condition, and (iii) in a generic sense. In addition, the paper studies accessibility properties of the "control sets" recently introduced in the context of dynamical systems studies. Finally, various examples and counterexamples are provided relating the various Lie algebras introduced in past work.
28. Y. Wang and E.D. Sontag. Algebraic differential equations and rational control systems. SIAM J. Control Optim., 30(5):1126-1149, 1992. [PDF] Keyword(s): identifiability, observability, realization theory, input/output system representations.
Abstract:
It is shown that realizability of an input/output operator by a finite-dimensional continuous-time rational control system is equivalent to the existence of a high-order algebraic differential equation satisfied by the corresponding input/output pairs ("behavior"). This generalizes, to nonlinear systems, the classical equivalence between autoregressive representations and finite-dimensional linear realizability.
29. Y. Wang and E.D. Sontag. Generating series and nonlinear systems: analytic aspects, local realizability, and i/o representations. Forum Math., 4(3):299-322, 1992. [PDF] Keyword(s): identifiability, observability, realization theory, input/output system representations.
Abstract:
This paper studies fundamental analytic properties of generating series for nonlinear control systems, and of the operators they define. It then applies the results obtained to the extension of facts, which relate realizability and algebraic input/output equations, to local realizability and analytic equations.
30. F. Albertini and E.D. Sontag. Transitivity and forward accessibility of discrete-time nonlinear systems. In Analysis of controlled dynamical systems (Lyon, 1990), volume 8 of Progr. Systems Control Theory, pages 21-34. Birkhäuser Boston, Boston, MA, 1991.
31. E.D. Sontag. Kalman's controllability rank condition: from linear to nonlinear. In Mathematical system theory, pages 453-462. Springer, Berlin, 1991. [PDF] Keyword(s): controllability.
Abstract:
The notion of controllability was identified by Kalman as one of the central properties determining system behavior. His simple rank condition is ubiquitous in linear systems analysis. This article presents an elementary and expository overview of the generalizations of this test to a condition for testing accessibility of discrete and continuous time nonlinear systems.
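The linear rank condition surveyed here is easy to state computationally: the pair (A, B) is controllable iff the matrix [B, AB, ..., A^(n-1)B] has full rank n. A minimal numpy illustration (the double-integrator example is an illustrative choice, not taken from the article):

```python
import numpy as np

def controllability_matrix(A, B):
    # Kalman controllability matrix [B, AB, ..., A^{n-1} B]
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: double integrator with a single (force) input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C))  # 2 = full rank, so (A, B) is controllable
```

The nonlinear generalizations discussed in the article replace the columns A^k B by iterated Lie brackets of the system vector fields, giving accessibility rather than full controllability.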
32. E.D. Sontag. Feedback stabilization of nonlinear systems. In Robust control of linear systems and nonlinear control (Amsterdam, 1989), volume 4 of Progr. Systems Control Theory, pages 61-81. Birkhäuser Boston, Boston, MA, 1990. [PDF]
Abstract:
This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. (NOTE: figures are not included in file; they were pasted-in.)
33. B. Jakubczyk and E.D. Sontag. Controllability of nonlinear discrete-time systems: a Lie-algebraic approach. SIAM J. Control Optim., 28(1):1-33, 1990. [PDF] [doi:http://dx.doi.org/10.1137/0328001] Keyword(s): discrete-time.
Abstract:
This paper presents a geometric study of controllability for discrete-time nonlinear systems. Various accessibility properties are characterized in terms of Lie algebras of vector fields. Some of the results obtained are parallel to analogous ones in continuous-time, but in many respects the theory is substantially different and many new phenomena appear.
34. E.D. Sontag. Finite-dimensional open-loop control generators for nonlinear systems. Internat. J. Control, 47(2):537-556, 1988. [PDF]
Abstract:
This paper concerns itself with the existence of open-loop control generators for nonlinear (continuous-time) systems. The main result is that, under relatively mild assumptions on the original system, and for each fixed compact subset of the state space, there always exists one such generator. This is a new system with the property that the controls it produces are sufficiently rich to preserve complete controllability along nonsingular trajectories. General results are also given on the continuity and differentiability of the input to state mapping for various p-norms on controls, as well as a comparison of various nonlinear controllability notions.
35. E.D. Sontag. Reachability, observability, and realization of a class of discrete-time nonlinear systems. In Encycl. of Systems and Control, pages 3288-3293. Pergamon Press, 1987. Keyword(s): observability.
36. E.D. Sontag. An approximation theorem in nonlinear sampling. In Mathematical theory of networks and systems (Beer Sheva, 1983), volume 58 of Lecture Notes in Control and Inform. Sci., pages 806-812. Springer, London, 1984. [PDF]
Abstract:
We continue here our investigation into the preservation of structural properties under the sampling of nonlinear systems. The main new result is that, under minimal hypothesis, a controllable system always satisfies a strong type of approximate sampled controllability.
37. E.D. Sontag. A characterization of asymptotic controllability. In A. Bednarek and L. Cesari, editors, Dynamical Systems II, pages 645-648. Academic Press, NY, 1982. [PDF] Keyword(s): control-Lyapunov functions.
Abstract:
This paper was a conference version of the SIAM paper that introduced the idea of control-Lyapunov functions for arbitrary nonlinear systems. (The journal paper was submitted in 1981 but only published in 1983.)
38. E.D. Sontag. Abstract regulation of nonlinear systems: stabilization. In Feedback control of linear and nonlinear systems (Bielefeld/Rome, 1981), volume 39 of Lecture Notes in Control and Inform. Sci., pages 227-243. Springer, Berlin, 1982.
39. E.D. Sontag. Realization theory of discrete-time nonlinear systems. I. The bounded case. IEEE Trans. Circuits and Systems, 26(5):342-356, 1979. [PDF]
Abstract:
A state-space realization theory is presented for a wide class of discrete-time input/output behaviors. Although in many ways restricted, this class does include as particular cases those treated in the literature (linear, multilinear, internally bilinear, homogeneous), as well as certain nonanalytic nonlinearities. The theory is conceptually simple, and matrix-theoretic algorithms are straightforward. Finite realizability of these behaviors by state-affine systems is shown to be equivalent both to the existence of high-order input/output equations and to realizability by more general types of systems.
Conference articles
1. F. Blanchini, H. El-Samad, G. Giordano, and E. D. Sontag. Control-theoretic methods for biological networks. In Proc. 2018 IEEE Conf. Decision and Control, pages 466-483, 2018. [PDF] Keyword(s): systems biology, dynamic response phenotypes, multi-stability, oscillations, feedback, nonlinear systems.
Abstract:
This is a tutorial paper on control-theoretic methods for the analysis of biological systems.
2. J. Huang, A. Isidori, L. Marconi, M. Mischiati, E. D. Sontag, and W. M. Wonham. Internal models in control, biology and neuroscience. In Proc. 2018 IEEE Conf. Decision and Control, pages 5370-5390, 2018. [PDF] Keyword(s): feedback, internal model principle, nonlinear systems.
Abstract:
This tutorial paper deals with the Internal Model Principle (IMP) from different perspectives. The goal is to start from the principle as introduced and commonly used in the control theory and then enlarge the vision to other fields where "internal models" play a role. The biology and neuroscience fields are specifically targeted in the paper. The paper ends by presenting an "abstract" theory of IMP applicable to a large class of systems.
3. M. Lang and E.D. Sontag. Scale-invariant systems realize nonlinear differential operators. In 2016 American Control Conference (ACC), pages 6676 - 6682, 2016. [PDF] Keyword(s): scale invariance, fold change detection, nonlinear systems, realization theory, internal model principle.
Abstract:
In this article, we show that scale-invariant systems, as well as systems invariant with respect to other input transformations, can realize nonlinear differential operators: when excited by inputs obeying functional forms characteristic for a given class of invariant systems, the systems' outputs converge to constant values directly quantifying the speed of the input.
4. Z. Aminzare and E.D. Sontag. Contraction methods for nonlinear systems: A brief introduction and some open problems. In Proc. IEEE Conf. Decision and Control, Los Angeles, Dec. 2014, pages 3835-3847, 2014. [PDF] Keyword(s): contractions, contractive systems, stability, reaction-diffusion PDE's, synchronization, contractive systems, stability.
Abstract:
Contraction theory provides an elegant way to analyze the behaviors of certain nonlinear dynamical systems. Under sometimes easy to check hypotheses, systems can be shown to have the incremental stability property that trajectories converge to each other. The present paper provides a self-contained introduction to some of the basic concepts and results in contraction theory, discusses applications to synchronization and to reaction-diffusion partial differential equations, and poses several open questions.
5. B. Andrews, P. Iglesias, and E.D. Sontag. Signal detection and approximate adaptation implies an approximate internal model. In Proc. IEEE Conf. Decision and Control, San Diego, Dec. 2006, pages 2364-2369, 2006. IEEE. [PDF] Keyword(s): biological adaptation, internal model principle.
Abstract:
This conference paper presented a version of an approximate internal model principle, for linear systems. A subsequent paper at the IFAC 2008 conference improved on this result by extending it to a class of nonlinear systems.
6. B.P. Ingalls, E.D. Sontag, and Y. Wang. Measurement to error stability: a notion of partial detectability for nonlinear systems. In Proc. IEEE Conf. Decision and Control, Las Vegas, Dec. 2002, IEEE Publications, pages 3946-3951, 2002. [PDF] Keyword(s): input to state stability.
Abstract:
For systems whose output is to be kept small (thought of as an error output), the notion of input to output stability (IOS) arises. Alternatively, when considering a system whose output is meant to provide information about the state (i.e. a measurement output), one arrives at the detectability notion of output to state stability (OSS). Combining these concepts, one may consider a system with two types of outputs, an error and a measurement. This leads naturally to a notion of partial detectability which we call measurement to error stability (MES). This property characterizes systems in which the error signal is detectable through the measurement signal. This paper provides a partial Lyapunov characterization of the MES property. A closely related property of stability in three measures (SIT) is introduced, which characterizes systems for which the error decays whenever it dominates the measurement. The SIT property is shown to imply MES, and the two are shown to be equivalent under an additional boundedness assumption. A nonsmooth Lyapunov characterization of the SIT property is provided, which yields the partial characterization of MES. The analysis is carried out on systems described by differential inclusions -- implicitly incorporating a disturbance input with compact value-set.
7. D. Liberzon, A.S. Morse, and E.D. Sontag. Output-input stability: a new variant of the minimum-phase property for nonlinear systems. In Proc. Nonlinear Control System Design Symposium, St. Petersburg, July 2001, pages 743-748, 2001. Keyword(s): input to state stability.
8. D. Liberzon, A.S. Morse, and E.D. Sontag. A new definition of the minimum-phase property for nonlinear systems, with an application to adaptive control. In Proc. IEEE Conf. Decision and Control, Sydney, Dec. 2000, IEEE Publications, 2000, pages 2106-2111, 2000.
9. Z-P. Jiang, E.D. Sontag, and Y. Wang. Input-to-state stability for discrete-time nonlinear systems. In Proc. 14th IFAC World Congress, Vol E (Beijing), pages 277-282, 1999. [PDF] Keyword(s): input to state stability, input to state stability, discrete-time.
Abstract:
This paper studies the input-to-state stability (ISS) property for discrete-time nonlinear systems. We show that many standard ISS results may be extended to the discrete-time case. More precisely, we provide a Lyapunov-like sufficient condition for ISS, and we show the equivalence between the ISS property and various other properties, as well as provide a small gain theorem.
10. D. Nesic, A.R. Teel, and E.D. Sontag. On stability and input-to-state stability ${\cal K}{\cal L}$ estimates of discrete-time and sampled-data nonlinear systems. In Proc. American Control Conf., San Diego, June 1999, pages 3990-3994, 1999. Keyword(s): input to state stability, sampled-data systems, discrete-time systems, sampling.
11. D. Nesic and E.D. Sontag. Output stabilization of nonlinear systems: Linear systems with positive outputs as a case study. In Proc. IEEE Conf. Decision and Control, Tampa, Dec. 1998, IEEE Publications, 1998, pages 885-890, 1998.
12. E.D. Sontag and Y. Wang. Detectability of nonlinear systems. In Proc. Conf. on Information Sciences and Systems (CISS 96), Princeton, NJ, pages 1031-1036, 1996. [PDF] Keyword(s): detectability, input to state stability.
Abstract:
Contains a proof of a technical step, which was omitted from the journal paper due to space constraints.
13. E.D. Sontag. Spaces of observables in nonlinear control. In Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Zürich, 1994), Basel, pages 1532-1545, 1995. Birkhäuser. [PDF] Keyword(s): observability, dynamical systems.
Abstract:
Invited talk at the 1994 ICM. Paper deals with the notion of observables for nonlinear systems, and their role in realization theory, minimality, and several control and path planning questions.
14. F. Albertini and E.D. Sontag. Controllability of discrete-time nonlinear systems. In Systems and Networks: Mathematical Theory and Applications, Proc. MTNS '93, Vol. 2, Akad. Verlag, Regensburg, pages 35-38, 1993.
15. F. Albertini and E.D. Sontag. Accessibility of discrete-time nonlinear systems, and some relations to chaotic dynamics. In Proc. Conf. Inform. Sci. and Systems, John Hopkins University, March 1991, pages 731-736, 1991.
16. E.D. Sontag and Y. Wang. I/O equations for nonlinear systems and observation spaces. In Proc. IEEE Conf. Decision and Control, Brighton, UK, Dec. 1991, IEEE Publications, 1991, pages 720-725, 1991. [PDF] Keyword(s): identifiability, observability, realization theory.
Abstract:
This paper studies various types of input/output representations for nonlinear continuous time systems. The algebraic and analytic i/o equations studied in previous papers by the authors are generalized to integral and integro-differential equations, and an abstract notion is also considered. New results are given on generic observability, and these results are then applied to give conditions under which the minimal order of an equation equals the minimal possible dimension of a realization, just as with linear systems but in contrast to the discrete time nonlinear theory.
17. E.D. Sontag. Some connections between stabilization and factorization. In Proceedings of the 28th IEEE Conference on Decision and Control, Vol. 1--3 (Tampa, FL, 1989), New York, pages 990-995, 1989. IEEE. [PDF]
Abstract:
Coprime right fraction representations are obtained for nonlinear systems defined by differential equations, under assumptions of stabilizability and detectability. A result is also given on left (not necessarily coprime) factorizations.
18. E.D. Sontag and H.J. Sussmann. Time-optimal control of manipulators. In Proc. IEEE Int.Conf.on Robotics and Automation, San Francisco, April 1986, pages 1692-1697, 1986. [PDF] Keyword(s): robotics, optimal control.
Abstract:
This paper studies time-optimal control questions for a certain class of nonlinear systems. This class includes a large number of mechanical systems, in particular, rigid robotic manipulators with torque constraints. As nonlinear systems, these systems have many properties that are false for generic systems of the same dimensions.
19. E.D. Sontag. Abstract regulation of nonlinear systems: Stabilization, Part II. In Proc.Princeton Conf.on Information Sciences and Systems, Princeton, March 1982, pages 431-435, 1982. Keyword(s): feedback stabilization.
https://lazyprogrammer.me/tag/aws/
|
# SQL for Marketers: Dominate Data Analytics, Data Science, and Big Data
March 19, 2016
This is an announcement, along with free and discount coupons, for my new course, SQL for Marketers: Dominate data analytics, data science, and big data.
More and more companies these days are learning that they need to make DATA-DRIVEN decisions.
With big data and data science on the rise, we have more data than we know what to do with.
One of the basic languages of data analytics is SQL, which is used for many popular databases including MySQL, Postgres, Microsoft SQL Server, Oracle, and even big data solutions like Hive and Cassandra.
I’m going to let you in on a little secret. Most high-level marketers and product managers at big tech companies know how to manipulate data to gain important insights. No longer do you have to wait around the entire day for some software engineer to answer your questions – now you can find the answers directly, by yourself, using SQL.
Do you want to know how to optimize your sales funnel using SQL, look at the seasonal trends in your industry, and run a SQL query on Hadoop? Then join me now in my new class, SQL for marketers: Dominate data analytics, data science, and big data!
P.S. If you haven’t yet signed up for my newsletter at lazyprogrammer [dot] me, you’ll want to do so before Monday, especially if you want to learn more about deep learning, because I have a special announcement coming up that will NOT be announced on Udemy.
Here’s the coupons:
FREE coupon for early early birds:
EARLYBIRD (Sold out)
If the first coupon has run out, you may still use the 2nd coupon, which gives you 70% off:
EARLYBIRD2
#aws #big data #cassandra #Data Analytics #ec2 #hadoop #Hive #Microsoft SQL Server #MySQL #Oracle #Postgres #S3 #spark #sql #sqlite
# New Deep Learning course on Udemy
February 26, 2016
This course continues where my first course, Deep Learning in Python, left off. You already know how to build an artificial neural network in Python, and you have a plug-and-play script that you can use for TensorFlow.
You learned about backpropagation (and because of that, this course contains basically NO MATH), but there were a lot of unanswered questions. How can you modify it to improve training speed? In this course you will learn about batch and stochastic gradient descent, two commonly used techniques that allow you to train on just a small sample of the data at each iteration, greatly speeding up training time.
You will also learn about momentum, which can be helpful for carrying you through local minima and prevents you from having to be too conservative with your learning rate. You will also learn about adaptive learning rate techniques like AdaGrad and RMSprop, which can also help speed up your training.
In my last course, I just wanted to give you a little sneak peak at TensorFlow. In this course we are going to start from the basics so you understand exactly what’s going on – what are TensorFlow variables and expressions and how can you use these building blocks to create a neural network? We are also going to look at a library that’s been around much longer and is very popular for deep learning – Theano. With this library we will also examine the basic building blocks – variables, expressions, and functions – so that you can build neural networks in Theano with confidence.
Because one of the main advantages of TensorFlow and Theano is the ability to use the GPU to speed up training, I will show you how to set up a GPU-instance on AWS and compare the speed of CPU vs GPU for training a deep neural network.
With all this extra speed, we are going to look at a real dataset – the famous MNIST dataset (images of handwritten digits) and compare against various known benchmarks.
#adagrad #aws #batch gradient descent #deep learning #ec2 #gpu #machine learning #nesterov momentum #numpy #nvidia #python #rmsprop #stochastic gradient descent #tensorflow #theano
# Principal Components Analysis in Theano
February 21, 2016
This is a follow-up post to my original PCA tutorial. It is of interest to you if you:
• Are interested in deep learning (this tutorial uses gradient descent)
• Are interested in learning more about Theano (it is not like regular Python, and it is very popular for implementing deep learning algorithms)
• Want to know how you can write your own PCA solver (in the previous post we used a library to get eigenvalues and eigenvectors)
• Work with big data (this technique can be used to process data where the dimensionality is very large – where the covariance matrix wouldn’t even fit into memory)
First, you should be familiar with creating variables and functions in Theano. Here is a simple example of how you would do matrix multiplication:
import numpy as np
import theano
import theano.tensor as T
X = T.matrix('X')
Q = T.matrix('Q')
Z = T.dot(X, Q)
transform = theano.function(inputs=[X,Q], outputs=Z)
X_val = np.random.randn(100,10)
Q_val = np.random.randn(10,10)
Z_val = transform(X_val, Q_val)
I think of Theano variables as “containers” for real numbers. They actually represent nodes in a graph. You will see the term “graph” a lot when you read about Theano, and probably think to yourself – what does matrix multiplication or machine learning have to do with graphs? (not graphs as in visual graphs, graphs as in nodes and edges) You can think of any “equation” or “formula” as a graph. Just draw the variables and functions as nodes and then connect them to make the equation using lines/edges. It’s just like drawing a “system” in control systems or a visual representation of a neural network (which is also a graph).
If you have ever done linear programming or integer programming in PuLP you are probably familiar with the idea of “variable” objects and passing them into a “solver” after creating some “expressions” that represent the constraints and objective of the linear / integer program.
Anyway, onto principal components analysis.
Let’s consider how you would find the leading eigenvalue and eigenvector (the one corresponding to the largest eigenvalue) of a square matrix.
The loss function / objective for PCA is:
$$J = \sum_{n=1}^{N} |x_n - \hat{x}_n|^2$$
Where $$\hat{X}$$ is the reconstruction of $$X$$. If there is only one eigenvector, let’s call this $$v$$, then this becomes:
$$J = \sum_{n=1}^{N} |x_n - x_nvv^T|^2$$
This is equivalent to the squared Frobenius norm, so we can write:
$$J = |X - Xvv^T|_F^2$$
One identity of the Frobenius norm is:
$$|A|_F = \sqrt{ \sum_{i} \sum_{j} a_{ij}^2 } = \sqrt{ Tr(A^T A) }$$
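A two-line numeric check of this identity (my addition, not from the original post), using plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

fro = np.sqrt((A ** 2).sum())     # sqrt of the sum of squared entries
tr = np.sqrt(np.trace(A.T @ A))   # sqrt of Tr(A^T A)
# fro and tr agree, and both match np.linalg.norm(A, 'fro')
```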
Which means we can rewrite the loss function as:
$$J = Tr( (X – Xvv^T)^T(X – Xvv^T) )$$
Keeping in mind that with the trace function you can re-order matrix multiplications that you wouldn’t normally be able to (matrix multiplication isn’t commutative), and dropping any terms that don’t depend on $$v$$, you can use matrix algebra to rearrange this to get:
$$v^* = argmin\{-Tr(X^TXvv^T) \}$$
Which again using reordering would be equivalent to maximizing:
$$v^* = argmax\{ v^TX^TXv \}$$
The corresponding eigenvalue would then be:
$$\lambda = v^TX^TXv$$
Now that we have a function to maximize, we can simply use gradient descent to do it, similar to how you would do it in logistic regression or in a deep belief network.
$$v \leftarrow v + \eta \nabla_v(v^TX^TXv)$$
Next, let’s extend this algorithm for finding the other eigenvalues and eigenvectors. You essentially subtract the contributions of the eigenvalues you already found.
$$v_i \leftarrow v_i + \eta \nabla_{v_i}(v_i^T( X^TX – \sum_{j=1}^{i-1} \lambda_j v_j v_j^T )v_i )$$
Next, note that to implement this algorithm you never need to actually calculate the covariance $$X^T X$$. If your dimensionality is, say, 1 million, then your covariance matrix will have 1 trillion entries!
Instead, you can multiply by your eigenvector first to get $$Xv$$, which is only of size $$N \times 1$$. You can then “dot” this with itself to get a scalar, which is only an $$O(N)$$ operation.
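Before the Theano version, here is the whole scheme in plain NumPy. This is a sketch of my own (not the author's code): gradient ascent on $$v^TX^TXv$$ with re-normalization at each step, never forming the covariance matrix.

```python
import numpy as np

np.random.seed(0)
N, D = 500, 20
X = np.random.randn(N, D)

# random unit-norm starting vector
v = np.random.randn(D)
v /= np.linalg.norm(v)

eta = 1e-4
for _ in range(2000):
    Xv = X @ v                      # N-vector: the D x D covariance is never formed
    v = v + eta * 2.0 * (X.T @ Xv)  # gradient of v^T X^T X v is 2 X^T X v
    v /= np.linalg.norm(v)          # re-normalize so that v^T v = 1

lam = (X @ v) @ (X @ v)             # the corresponding eigenvalue v^T X^T X v

# compare against a direct eigendecomposition of the covariance
true_lam = np.linalg.eigvalsh(X.T @ X).max()
```

After a couple of thousand steps the gradient-ascent eigenvalue matches the one from `eigvalsh` to several digits.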
So how do you write this code in Theano? If you’ve never used Theano for gradient descent there will be some new concepts here.
First, you don’t actually need to know how to differentiate your cost function. You use Theano’s T.grad(cost_function, differentiation_variable).
v = theano.shared(init_v, name="v")
Xv = T.dot(X, v)
cost = T.dot(Xv.T, Xv) - sum(evals[j]*T.dot(evecs[j], v)*T.dot(evecs[j], v) for j in xrange(i))  # builtin sum: the summands are Theano expressions, not arrays
gv = T.grad(cost, v)
Note that we re-normalize the eigenvector on each step, so that $$v^T v = 1$$.
Next, you define your “weight update rule” as an expression, and pass this into the “updates” argument of Theano’s function creator.
y = v + learning_rate*gv
update_expression = y / y.norm(2)
train = theano.function(
    inputs=[X],
    outputs=[cost],
    updates=[(v, update_expression)],
)
Note that the update variable must be a “shared variable”. With this knowledge in hand, you are ready to implement the gradient descent version of PCA in Theano:
for i in xrange(number of eigenvalues you want to find):
... initialize variables and expressions ...
... initialize theano train function ...
while t < max_iterations and change in v < tol:
outputs = train(data)
... return eigenvalues and eigenvectors ...
This is not really trivial but at the same time it's a great exercise in both (a) linear algebra and (b) Theano coding.
If you are interested in learning more about PCA, dimensionality reduction, gradient descent, deep learning, or Theano, then check out my course on Udemy "Data Science: Deep Learning in Python" and let me know what you think in the comments.
#aws #data science #deep learning #gpu #machine learning #nvidia #pca #principal components analysis #statistics #theano
# How to run distributed machine learning jobs using Apache Spark and EC2 (and Python)
April 5, 2015
This is the age of big data.
Sometimes scikit-learn doesn’t cut it.
In order to make your operations and data-driven decisions scalable – you need to distribute the processing of your data.
Two popular libraries that do such distributed machine learning are Mahout (which uses MapReduce) and MLlib (which uses Spark, which is sometimes considered as a successor to MapReduce).
What I want to do with this tutorial is to show you how easy it is to do distributed machine learning using Spark and EC2.
When I started a recent project of mine, I was distraught at how complicated a Mahout setup could be.
I am not an ops person. I hate installing and configuring things. For something like running distributed k-means clustering, 90% of the work could go into just setting up a Hadoop cluster, installing all the libraries your code needs to run, making sure they are the right versions, etc…
The Hadoop ecosystem is very sensitive to these things, and sometimes MapReduce jobs can be very hard to debug.
With Spark, everything is super easy. Installing Spark and Hadoop is tedious but do-able. Spinning up a cluster is very easy. Running a job is very easy. We will use Python, but you can also use Scala or Java.
Outline of this tutorial:
1. Install Spark on a driver machine.
2. Create a cluster.
3. Run a job.
## 1. Install Spark
I used an Ubuntu instance on EC2. I’m assuming you already know how to set up a security group, get your PEM, and SSH into the machine.
Once you’ve spun up your AMI, we can begin installing all the stuff we’ll need.
To make this even easier you can probably do it on your local machine, but if you’re using Windows or you don’t want to mess up your local setup, then you’ll want to use a fresh EC2 instance as I did.
First, set your AWS ID and secret environment variables.
export AWS_ACCESS_KEY_ID=…
export AWS_SECRET_ACCESS_KEY=…
Now install Java:
sudo apt-get update
sudo apt-get install default-jdk maven
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
For the last line, we will need this RAM available to build Spark, if I remember correctly.
wget http://mirror.cc.columbia.edu/pub/software/apache/spark/spark-1.3.0/spark-1.3.0.tgz
tar -xf spark-1.3.0.tgz
cd spark-1.3.0
mvn -DskipTests clean package
By the time you read this a new version of Spark may be available. You should check.
## 2. Create a Cluster
Assuming you are in the Spark folder now, it is very easy to create a cluster to run your jobs:
./ec2/spark-ec2 -k "Your Key Pair Name" -i /path/to/key.pem -s <number of slaves> launch <cluster name> --copy-aws-credentials -z us-east-1b
I set my zone as “us-east-1b” but you can set it to a zone of your choice.
When you’re finished, don’t forget to tear down your cluster! On-demand machines are expensive.
./spark-ec2 destroy <cluster name>
For some reason, numpy isn’t installed when you create a cluster, and the default Python distribution on the m1.large machines is 2.6, while Spark installs its own 2.7. So, even if you easy_install numpy on each of the machines in the cluster, it won’t work for Spark.
You can instead copy the library over to each cluster machine from your driver machine:
scp -i /path/to/key.pem /usr/lib/python2.7/dist-packages/numpy* root@<cluster-machine>:/usr/lib/python2.7/dist-packages/
scp -r -i /path/to/key.pem /usr/lib/python2.7/dist-packages/numpy root@<cluster-machine>:/usr/lib/python2.7/dist-packages/
You can easily write a script to automatically copy all this stuff over (get the machine URLs from the EC2 console).
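For example, a throwaway Python helper along these lines can generate the scp commands for you. The hostnames and paths below are placeholders of my own, not real machines:

```python
# All hostnames and paths below are placeholders for illustration only.
hosts = [
    "ec2-203-0-113-1.compute-1.amazonaws.com",
    "ec2-203-0-113-2.compute-1.amazonaws.com",
]
key = "/path/to/key.pem"
src = "/usr/lib/python2.7/dist-packages/numpy"

# one scp command per cluster machine, ready to paste or pass to subprocess
cmds = [
    "scp -r -i %s %s root@%s:/usr/lib/python2.7/dist-packages/" % (key, src, h)
    for h in hosts
]
for c in cmds:
    print(c)
```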
## 3. Run a Job
Spark gives you a Python shell.
First, go to your EC2 console and find the URL for your cluster master. SSH into that machine (username is root).
cd spark
MASTER=spark://<cluster-master-ip>:7077 ./pyspark
Import libraries:
from pyspark.mllib.clustering import KMeans
from numpy import array
data = sc.textFile("s3://<my-bucket>/<path>/*.csv")
Note 1: You can use a wildcard to grab multiple files into one variable – called an RDD – resilient distributed dataset.
Note 2: Spark gives you a variable called ‘sc’, which is an object of type SparkContext. It specifies the master node, among other things.
Maybe filter out some bad lines:
data = data.filter(lambda line: 'ERROR' not in line)
Turn each row into an array / vector observation:
data = data.map(lambda line: array([float(x) for x in line.split()]))
clusters = KMeans.train(data, 2, maxIterations=20,
runs=1, initializationMode="k-means||")
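As a quick sanity check before paying for a cluster, the same filter and map logic can be exercised with plain Python lists (my illustration, no Spark required):

```python
# Pure-Python sketch of the filter -> map pipeline above, applied to a few
# hard-coded lines so you can verify the parsing logic locally.
lines = [
    "1.0 2.0 3.0",
    "ERROR corrupt row",
    "4.0 5.0 6.0",
]
good = [line for line in lines if "ERROR" not in line]
rows = [[float(x) for x in line.split()] for line in good]
# rows now holds the two clean observations as float vectors
```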
Save some output:
sc.parallelize(clusters.centers).saveAsTextFile("s3://…./output.csv")
You can also run a standalone Python script using spark-submit instead of the shell.
./bin/spark-submit --master spark://<master-ip>:7077 myscript.py
Remember you’ll have to instantiate your own SparkContext in this case.
## Future Improvements
The goal of this tutorial is to make things easy.
There are many areas for improvement – for instance – on-demand machines on Amazon are the most expensive.
Spark still spins up "m1.large" instances, even though EC2's current documentation recommends using the better, faster, AND cheaper "m3.large" instance instead.
At the same time, that custom configuration could mean we can’t use the spark-ec2 script to spin up the cluster automatically. There might be an option there to choose. I didn’t really look.
One major reason I wrote this tutorial is because all the information in it is out there in some form, but it is disparate and some of it can be hard to find without knowing what to search for.
So that’s it. The easiest possible way to run distributed machine learning.
How do you do distributed machine learning?
#apache #aws #big data #data science #ec2 #emr #machine learning #python #spark
https://www.physicsforums.com/threads/first-order-differential-equations.160565/
|
# First Order Differential Equations
1. Mar 13, 2007
### Bucky
1. The problem statement, all variables and given/known data
Solve the following differential equation using separation of variables
$$(1+x)^2 y' = (1-y)^2 , y(1) = 2$$
2. Relevant equations
3. The attempt at a solution
haven't gotten very far in this at all :/
i've tried dividing both sides by $$(1+x)^2$$, in order to get y' on its own..
$$y' = \frac{(1-y)^2}{(1+x)^2}$$
but i don't know how to integrate this...but had a go anyway
apparently the rule for integrating an expression in brackets is..
$$\frac{(ax + b)^{n+1}}{a(n+1)}$$
so i tried integrating both halves of the fraction separately...giving
$$\frac{(-y+1)^3 }{-3y}$$
$$\frac{(x+1)^2}{3x}$$
putting these together and dividing gave
$$\frac{3x(-y+1)^2}{-3y(x+1)^3} + C$$
however i don't think this is accurate, as substituting in 1 and 2 for x and y respectively made C come out as -3/48. Can someone shed some light at where I've went wrong?
Last edited: Mar 13, 2007
2. Mar 13, 2007
### Dick
You haven't gotten y on its own. Write y'=dy/dx and put the dy with the y's and the dx with the x's.
3. Mar 13, 2007
### Bucky
are you saying i need to get the y terms on one side of the equals and the x terms on the other side?
if so then i'm stumped. I'm not sure how to split up the equation in its initial form.
4. Mar 13, 2007
### Dick
How about dy/(1-y)^2=dx/(1+x)^2? Does it look split now?
5. Mar 13, 2007
### Bucky
ok, right, rearranged it and heres what i have:
$$\int \frac{dy}{(1-y)^2} = \frac{dx}{(1+x)^2}$$
$$\frac{1}{3(1-y)^3} = \frac{1}{3(1+x)^3} + C$$
which, after substituting y(0) = -1 (wrong values on first post BTW) i get
$$\frac{-7}{24} = C$$
I'm pretty sure this is wrong...but anyway let's sub that back into the integrated function
$$\frac{1}{3(1-y)^3} = \frac{1}{3(1+x)^3} - \frac{7}{24}$$
$$\frac{8}{(1-y)^3} = \frac{8}{(1+x)^3} - 7$$
the answer in the book is given as
$$y = \frac{x-1}{1+3x}$$
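As an editorial aside (not part of the original thread): the book's answer is easy to verify numerically against the ODE $$(1+x)^2 y' = (1-y)^2$$ with the corrected initial condition y(0) = -1.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 2001)
y = (x - 1) / (1 + 3 * x)   # the book's answer

dy = np.gradient(y, x)      # finite-difference derivative
lhs = (1 + x) ** 2 * dy
rhs = (1 - y) ** 2
# y(0) = -1 holds, and lhs matches rhs away from the boundary points,
# where np.gradient falls back to one-sided differences
```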
6. Mar 13, 2007
### Dick
It would help a lot if you did the integrations correctly. The integral of 1/x^2 is not 1/x^3.
7. Mar 13, 2007
### Bucky
er sorry thats a typo, i get the integral of 1/(1+x)^3 to be....
1/3(1+x)^3
8. Mar 13, 2007
### Dick
You want to integrate 1/(1+x)^2. Try again.
9. Mar 13, 2007
### Dick
Hint: integral of 1/u^2 du=-1/u.
10. Mar 13, 2007
### Bucky
so...the integral of 1/(1+x)^2 is just -1 - (1/x) ?
even that doesn't seem right...though my formula book doesn't have the formula for such an instance.
11. Mar 13, 2007
### Dick
Make a substitution u=x+1. Similarly for the y integral. You have done that, right?
12. Mar 13, 2007
### Bucky
ok so that gives...
$$- \frac{1}{(1-y)} = - \frac{1}{(1+x)}$$
13. Mar 14, 2007
### HallsofIvy
I added the integral sign on the right side.
I think you meant
$$\frac{1}{3(1-y)^3}= \frac{1}{3(1+x)^3}+ C$$
but you've missed a sign. But only one sign. As Dick said you need to substitute for both 1+ x and 1- y. If u= 1- y, what is du?
14. Mar 14, 2007
### Dick
I think Halls was just cleaning up your notation and doesn't mean to imply that cube is the right power. But you did miss the substitution sign.
15. Mar 14, 2007
### Bucky
i don't see where the missing sign is. unless the sign attached to the letter comes out? but that's not what i thought you meant when you said
1/u^2 = -1/u
so do you mean that i should have...
$$\frac{dy}{(1-y)^2} = \frac{dx}{(1+x)^2}$$
$$-\frac{dy}{(1-y)^2} = \frac{dx}{(1+x)^2}$$
EDIT: wait that can't be right...yeah i'm lost.
Last edited: Mar 14, 2007
16. Mar 14, 2007
### Dick
If you differentiate the right side of this you get 1/(1+x)^2. That's what you want. If you differentiate the left side you get -1/(1-y)^2. Don't forget the chain rule. That's NOT what you want.
17. Mar 14, 2007
### HallsofIvy
He certainly never said that! he said that the anti-derivative of 1/u^2 is -1/u.
If you let u= 1+x, then du= dx and dx/(1+x)^2 = du/u^2. What's the anti-derivative of that?
If you let v= 1-y, then dv= -dy and dy/(1-y)^2 = -dv/v^2. What's the anti-derivative of that?
18. Mar 14, 2007
### Bucky
ok so i tried a totally different method and it seems to work.
instead of substitution i tried using the standard integral
$$\int (ax+b)^n \, dx = \frac{(ax+b)^{n+1}}{a(n+1)} + C$$
thanks a lot for your help guys!
19. Mar 15, 2007
### Dick
Let u=(ax+b), du=a*dx, du/a=dx.
$$\int (ax+b)^n dx= \frac{1}{a} \int (u)^n du = \frac{1}{a} \frac{u^{n+1}}{(n+1)} = \frac{(ax+b)^{n+1}}{a(n+1)}$$
That is a TOTALLY DIFFERENT METHOD!
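As a quick numeric cross-check of this rule (my addition, not part of the thread), a midpoint-rule integral agrees with the antiderivative formula:

```python
import numpy as np

# check  ∫ (ax+b)^n dx = (ax+b)^(n+1) / (a(n+1)) + C  on [0, 1]
a, b, n = 2.0, 3.0, 4
h = 1e-5
xm = np.arange(0.0, 1.0, h) + h / 2     # midpoints of each sub-interval
numeric = ((a * xm + b) ** n).sum() * h  # midpoint-rule quadrature

antideriv = lambda t: (a * t + b) ** (n + 1) / (a * (n + 1))
exact = antideriv(1.0) - antideriv(0.0)  # (5^5 - 3^5) / 10 = 288.2
```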
https://stats.stackexchange.com/questions/529846/is-it-possible-to-specify-different-quantile-regression-models-for-each-quantile
|
# Is it possible to specify different quantile regression models for each quantile?
As the title says.
I have never seen it, but I see no point that would prohibit me to do it.
For example, a different set of variables might bear predictive value for the 25th-percentile of the dependent variable than for the 10th-percentile of the dependent variable.
Is it possible or am I missing something? If not, I would appreciate any references (i.e. scientific papers) in which the authors do so.
• To underscore, by different models you mean models that differ not only in their coefficient values but also in that they contain different variables or have different functional form? Because if we consider different coefficient values to imply different models, then the answer is obvious. Jun 8, 2021 at 15:18
• @RichardHardy I take it to mean something like x <- seq(0, 6, 0.001); y <- 7*x + rnorm(length(x), 0, exp(x)) that clearly has a linearly increasing median, but the other quantiles do change linearly.
– Dave
Jun 8, 2021 at 15:28
• "...the other quantiles do NOT change linearly" is how it should read. And yes, @RichardHardy, I think we agree.
– Dave
Jun 8, 2021 at 16:06
• @RichardHardy by different models I mean models containing different variables ("different sets of variables bearing predictive value"). Jun 9, 2021 at 7:25
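For readers who want to experiment: quantile regression for a given tau minimizes the check ("pinball") loss, and nothing in that objective forces every tau to share one design matrix. A minimal NumPy sketch (my own illustration, not from the thread) showing that minimizing the pinball loss with an intercept-only model recovers the empirical quantile:

```python
import numpy as np

def pinball(u, tau):
    # check loss: tau * u for u >= 0, (tau - 1) * u otherwise
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)

# intercept-only "model": brute-force the constant that minimizes the
# mean pinball loss at tau = 0.25
tau = 0.25
grid = np.linspace(y.min(), y.max(), 4001)
losses = [pinball(y - c, tau).mean() for c in grid]
best = grid[int(np.argmin(losses))]
# best lands near the empirical 25th percentile of y; repeating this with a
# different design matrix per tau is exactly the question asked above
```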
https://tex.stackexchange.com/questions/475946/creating-a-specific-theorem-style
|
# Creating a specific theorem style
I was wondering how to create a theorem environment similar to the following example:
Specifically, the environment should resemble the box that contains "Claim -- The function g is linear".
This comes from a document written by Evan Chen, who has his style files uploaded here. I have tried to mimic what I believed to be the relevant code (as there's a lot of stuff there that isn't related to this), to no avail. I have attached my attempt below, but it may be better to ignore it.
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{mdframed}
\usepackage{thmtools}
\usepackage{tikz}
\usepackage{xcolor}
\usepackage[framemethod=TikZ]{mdframed}
\mdfdefinestyle{mdgreenbox}{%
linewidth=0.5pt,
skipabove=12pt,
frametitleaboveskip=5pt,
frametitlebelowskip=0pt,
skipbelow=2pt,
frametitlefont=\bfseries,
innertopmargin=4pt,
innerbottommargin=8pt,
nobreak=true,
backgroundcolor=SpringGreen!15,
rightline=false,
leftline=false,
topline=false,
bottomline=false,
linecolor=green,
}
\declaretheoremstyle[
mdframed={style=mdgreenbox},
]{thmgreenbox}
\declaretheorem[style=thmgreenbox]{thrm1}
This returns errors: one from xcolor not recognizing the colors, and the following.
! LaTeX Error: Option clash for package mdframed.
l.17 \mdfdefinestyle
                    {mdgreenbox}{%
The package mdframed has already been loaded with options:
  []
There has now been an attempt to load it with options
  [framemethod=TikZ]
Adding the global options:
  ,framemethod=TikZ
to your \documentclass declaration may fix this.
Try typing <return> to proceed.
Package thmtools Info: Key mdframed' (with value style=mdgreenbox')
(thmtools) is not a known style key.
(thmtools) Will pass this to every \declaretheorem
(thmtools) that uses style=thmgreenbox' on input line 39.
Package thmtools Info: Automatically pulling in thmdef-mdframed' on input line 40.
(/usr/local/texlive/2017/texmf-dist/tex/latex/thmtools/thmdef-mdframed.sty
Package: thmdef-mdframed 2014/04/21 v66
)
\c@thrm1=\count282
(/compile/output.aux)
\openout1 = `output.aux'.
I am looking for a way to mimic the environment more than to fix my methods.
• Have a look at tcolorbox. – user156344 Feb 21 at 6:46
• The obvious and immediate issues with your code are that you load the package mdframed twice (and with conflicting options, which causes an error): You have \usepackage{mdframed} and then a bit later \usepackage[framemethod=TikZ]{mdframed}. You should probably remove the first of those calls. The colours are not recognised because you loaded xcolor without an additional option to provide more colours, you probably want to load it as \usepackage[dvipsnames]{xcolor} or \usepackage[svgnames]{xcolor} for SpringGreen to work. – moewe Feb 21 at 6:54
• Other than that I agree with JouleV that tcolorbox is worth a look. It does a similar job as mdframed but offers a lot more options (at least that's what I think). tcolorbox is also still being actively maintained, whereas mdframed development has stalled in recent years (that need not necessarily be a bad thing, but it could mean that you are on your own when you find bugs or want a new feature). See also tex.stackexchange.com/q/135871/35864 – moewe Feb 21 at 6:57
A solution using the tcolorbox package as suggested by @JouleV :
\documentclass{article}
\usepackage[theorems]{tcolorbox}
\newtcbtheorem{myclaim}{Claim --\ }{
coltitle=purple,
colback=blue!10,
colframe=green!70!blue,
detach title,
boxrule=0pt,
leftrule=2pt,
attach title to upper,
sharp corners,
left=1mm,
}{}
\begin{document}
\begin{myclaim*}{}
The function $g$ is linear
\end{myclaim*}
\end{document}
|
2019-10-18 15:46:36
|
https://www.toppr.com/guides/maths-formulas/lcm-formula/
|
# LCM Formula
In mathematics we often need to compute the least common multiple (LCM) and the greatest common divisor (GCD) of two or more numbers. The LCM is the smallest positive integer that is a multiple of each of the given numbers. For example, the LCM of 4 and 6 is 12, and the LCM of 10 and 15 is 30. As with greatest common divisors, there are many methods for computing least common multiples. One method is to factor each number into primes; the LCM is then the product of the highest power of each prime that occurs in any of the numbers. In this topic, we will discuss the concept of the least common multiple and the LCM formula with examples. Let us learn it!
## LCM Formula
### What is LCM?
The Least Common Multiple i.e. LCM of two integers a and b is that smallest positive integer which is divisible by both a and b. Thus the smallest positive number is a multiple of two or more numbers.
For example, to calculate the LCM of (40, 45), we find the prime factors of 40 and 45:
40 is expressed as 2 × 2 × 2 × 5
45 is expressed as 3 × 3 × 5
Taking each prime with the highest multiplicity in which it occurs in either number gives 2, 2, 2, 3, 3, 5.
Thus the least common multiple is 2 × 2 × 2 × 3 × 3 × 5 = 360.
### To find out LCM using prime factorization method:
Step 1: Show each number as a product of their prime factors.
Step 2: LCM will be the product of the highest powers of all prime factors.
### To find out the LCM using division Method:
Step 1: First, we need to write the given numbers in a horizontal line separated by commas.
Step 2: Then, we need to divide all the given numbers by the smallest prime number.
Step 3: We now need to write the quotients and undivided numbers in a new line below the previous one.
Step 4: Repeat this process until we find a stage where no prime factor is common.
Step 5: LCM will be the product of all the divisors and the numbers in the last line.
### L.C.M formula for any two numbers:
1) For two given numbers if we know their greatest common divisor i.e. GCD, then LCM can be calculated easily with the help of given formula:
LCM = $$\frac{a \times b}{\gcd(a, b)}$$
2) To get the LCM of two fractions, we first compute the LCM of the numerators and the HCF of the denominators, and then express the results as a fraction. Thus,
LCM = $$\frac{\text{LCM of numerators}}{\text{HCF of denominators}}$$
## Solved Examples
Q.1: Find out the LCM of 8 and 14.
Solution:
Step 1: First write down each number as a product of prime factors.
8 = 2 × 2 × 2 = 2³
14 = 2 × 7
Step 2: Product of highest powers of all prime factors.
Here the prime factors are 2 and 7
The highest power of 2 here = 2³
The highest power of 7 here = 7
Hence LCM = 2³ × 7 = 56
Q.2: If two numbers 12 and 30 are given. HCF of these two is 6 then find their LCM.
Solution: We will use the simple formula of LCM and GCD.
a = 12
b = 30
gcd = 6
Thus
LCM = $$\frac{a\times b}{gcd\left(a,b\right)}$$
LCM = $$\frac {12 \times 30 } {6}$$
= 60
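The GCD-based formula and both solved examples can be checked with a short Python sketch (my illustration; the function name is mine, not from the article):

```python
from math import gcd

def lcm(a, b):
    """LCM via the formula lcm(a, b) = (a * b) / gcd(a, b)."""
    return a * b // gcd(a, b)

print(lcm(8, 14))   # → 56, matching Q.1
print(lcm(12, 30))  # → 60, matching Q.2
print(lcm(40, 45))  # → 360, matching the prime-factorization example
```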
|
2022-08-14 06:26:52
|
https://dec.dearbornschools.org/mod/glossary/showentry.php?eid=11173
|
#### 5-U3.1.8
Identify a problem that people in the colonies faced, identify alternative choices for addressing the problem with possible consequences, and describe the course of action taken.
|
2022-11-30 08:26:07
|
https://spinnaker8manchester.readthedocs.io/en/latest/_modules/spinnman/processes/abstract_multi_connection_process_connection_selector/
|
# Source code for spinnman.processes.abstract_multi_connection_process_connection_selector
# Copyright (c) 2017-2019 The University of Manchester
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from spinn_utilities.abstract_base import AbstractBase, abstractmethod
class AbstractMultiConnectionProcessConnectionSelector(
        object, metaclass=AbstractBase):
    """ A connection selector for multi-connection processes
    """

    __slots__ = []

    @abstractmethod
    def get_next_connection(self, message):
        """ Get the index of the next connection for the process from a list
            of connections.

        :param AbstractSCPRequest message: The SCP message to be sent
        :rtype: SCAMPConnection
        """
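As a standalone illustration (not part of spinnman), a concrete selector satisfying this interface's contract might simply cycle through a fixed list of connections; a real subclass would inherit from the abstract class above:

```python
from itertools import cycle

class RoundRobinConnectionSelector:
    """Hypothetical sketch of a concrete get_next_connection implementation;
    shown standalone here rather than inheriting the spinnman base class."""

    def __init__(self, connections):
        self._cycle = cycle(connections)

    def get_next_connection(self, message):
        # The message argument is ignored here; a real selector might route
        # based on the request's target chip coordinates.
        return next(self._cycle)

sel = RoundRobinConnectionSelector(["conn0", "conn1"])
print([sel.get_next_connection(None) for _ in range(4)])
# → ['conn0', 'conn1', 'conn0', 'conn1']
```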
|
2022-01-23 09:21:54
|
https://www.physicsforums.com/threads/entropy-and-thermodynamics.487647/page-3
|
# Entropy and thermodynamics.
Pythagorean
Gold Member
Thermodynamic entropy is the logarithm of the number of microstates of the molecules of a system in phase space (momentum, position in 3D) multiplied by the Boltzmann constant. It seems to me that information entropy, dealing with probability starts with a very different concept - the number of 1s and 0s required to convey an intelligible message.
AM
Wait, what?
1s and 0s ARE microstates. The map of all possible trajectories in the system's phasespace IS the information about the system. If you only have one bit, you only have two microstates (yes, they call them 1 and 0, but names aren't important, we can call them state a and state b).
A system that is in steady state has 0 bits (it has no states to change to, so you don't even need a 1 or 0 to express its state).
"intelligible" has nothing to do with it. The "message" could be a billiard ball impacting another billiard ball. It's the energy, the wave, the propagation of a disturbance, not the matter itself.
Maxwell's demon requires information to do work.
If you can direct me to some authority who says that there is a real and substantial connection between the two concepts I'll be happy to reconsider.
What do you think of what Kolmogorov has done with information theory? (metric entropy, Kolmogorov complexity)
Rap
The missing piece of logic that makes thermodynamic entropy and information entropy seem so unrelated is the hidden relationship between molecule arrangements and available oscillations, which is expressed mathematically by Boltzmann's constant.
Come to think of it, you are right. I was thinking of Boltzmann's constant, as something that could be set to one, by the right definition of temperature, and so not that important. But that is an "information-centric" view. How do you come by that "right definition"? You cannot use a well calibrated thermometer based on the second law, measure Boltzmann's constant, then multiply the temperature by that constant to get your new temperature, because that begs the question. From an information theory point of view, it doesn't matter what the constant is, but from a thermodynamic+information point of view, the value of Boltzmann's constant is the key to understanding thermodynamic entropy.
Last edited:
In particular specific heat is a property of the system/material itself, not of the process.
You can only add or subtract heat from a material/system at the (not time) rate of specific heat.
Yes, we both agree on that.
Entropy is a process property.
go well
Shouldn't entropy S, like internal energy U, be a state function, while dS and dU describe a process?
What you said contradicts what I see in wikipedia and various sources that entropy is a state.
http://en.wikipedia.org/wiki/List_of_thermodynamic_properties" [Broken]
Last edited by a moderator:
From an information theory point of view, it doesn't matter what the constant is, but from a thermodynamic point of view, its the key to understanding thermodynamic entropy.
There is something that amuses me here. It is easier to start from thermodynamic entropy and search for an interpretation that links the two kinds of entropy together; if you begin with information entropy it is much harder to associate information with thermal energy. I tried the second approach and ended up with the instrumentalist conclusion that "it's just a dummy variable that makes things complete, and there is no point thinking about what it represents as long as things work". (To me) it looks like some points of view are easier to generalize and extend than others.
I notice you have also joined this thread.
There is a long discussion noting the shortcomings of Wikipedia in thermodynamics.
In particular have you read post #23?
Strictly, entropy is a state function, yes.
But consider the thermodynamic implications of this variation on a once popular auto fuel advert.
Take two different makes of car.
Put 1 litre of identical fuel in each.
Car A achieves 7.8 miles on its litre of fuel.
Car B achieves 13.4 miles on its litre of fuel.
Is it the difference due to the system or the fuel?
Andrew Mason
Homework Helper
I don't see your logic. Cathode ray, ie electrons, are moving, so they have kinetic energy. So you can calculate the average kinetic energy as temperature.
Temperature is only defined for a substance in thermal equilibrium. In thermal equilibrium the translational kinetic energies of the molecules follow a Maxwell-Boltzmann distribution. The kinetic energies are not all the same. In a cathode ray the moving electrons all travel in one direction and all with the same energy. So its temperature cannot be defined.
AM
Andrew Mason
Homework Helper
Wait, what?
1s and 0s ARE microstates. The map of all possible trajectories in the system's phasespace IS the information about the system. If you only have one bit, you only have two microstates (yes, they call them 1 and 0, but names aren't important, we can call them state a and state b).
A system that is in steady state has 0 bits (it has no states to change to, so you don't even need a 1 or 0 to express its state).
"intelligible" has nothing to do with it. The "message" could be a billiard ball impacting another billiard ball. It's the energy, the wave, the propagation of a disturbance, not the matter itself.
Maxwell's demon requires information to do work.
What do you think of what Kolmogorov has done with information theory? (metric entropy, Kolmogorov complexity)
There are some analogies between the entropy in information theory and entropy in thermodynamics, but the underlying concepts are very different. They have different origins and apply to different things having vastly different numbers of particles/events. Thermodynamic entropy is inextricably tied to a concept of temperature with an underlying assumption that all microstates are equally probable. Information entropy has nothing even analogous to temperature, as far as I can see, and assumes that the probabilities of individual microstates are not equal. So, as far as I can see the only real connection is that they have same name.
AM
Good afternoon, Andrew.
Your last two posts could do with some amplification, I feel, else some get the wrong impression.
So in post#56
(absolute) temperature may only be defined for a system in equilibrium, but temperature difference is OK for non-equilibrium situations.
and in post#57
A distinction should be made between the statistics of the average action of a large number of particles, leading to classical thermodynamic relationships in the physical world (which again is OK) and information states in the nonphysical world which can be very different depending upon your system of logic and information.
Further comment on my cathode ray example.
My main aim in producing this was to offer a physical example not involving molecules to reinforce my effort to stress the difference between molecules and particles.
However if the beam impacts upon another physical object energy will be transferred, in accordance with the laws of thermodynamics.
In this case a temperature difference may be established.
go well
Rap
There are some analogies between the entropy in information theory and entropy in thermodynamics, but the underlying concepts are very different. They have different origins and apply to different things having vastly different numbers of particles/events. Thermodynamic entropy is inextricably tied to a concept of temperature with an underlying assumption that all microstates are equally probable. Information entropy has nothing even analogous to temperature, as far as I can see, and assumes that the probabilities of individual microstates are not equal. So, as far as I can see the only real connection is that they have same name.
AM
Nobody argues that thermodynamic entropy is not tied to temperature, but the thermodynamic entropy is equal to Boltzmann's constant times the information entropy. The information entropy is proportional to the minimum number of yes/no questions you have to ask to determine the microstate, given the macrostate. As mentioned above, the Boltzmann constant is the "bridge" between thermodynamic and information entropy, and the full meaning of thermodynamic entropy is to be found in
#1: The understanding of information entropy, and
#2: The understanding of why the Boltzmann constant has the value it has.
You can understand #1 without reference to temperature. Its #2 that brings the temperature in. You cannot understand #2 without understanding temperature. This is the way that temperature is "unlinked" from information entropy. The question of whether thermodynamic entropy and information entropy both deserve to have the word "entropy" in their name is semantic, not very interesting. The link between the two is unavoidably there, whatever you want to call them. #2 is where the fun is, not in a semantic argument over the naming of things, and not in refusing to divide up the understanding of thermodynamic entropy into #1 and #2.
Also, they do not have "vastly different numbers of particles/events". Information entropy puts no limit on the number of "events" that it will deal with. Nor does it necessarily assume that microstates have different probabilities.
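The S = kH relationship can be made concrete with a small numerical sketch (my illustration, not from the thread): for W equally likely microstates the information entropy is log₂ W yes/no questions, and the natural-log version times Boltzmann's constant reproduces Boltzmann's S = k ln W.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_entropy_bits(probs):
    """H = -sum p log2 p: minimum yes/no questions to pin down the microstate."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def thermo_entropy(probs):
    """S = k_B * H with H in nats; reduces to S = k ln W for equal probabilities."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

W = 8
probs = [1.0 / W] * W
print(shannon_entropy_bits(probs))  # → 3.0 (three yes/no questions)
print(thermo_entropy(probs))        # ≈ k_B * ln 8, about 2.87e-23 J/K
```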
Last edited:
but the thermodynamic entropy is equal to Boltzmann's constant times the information entropy.
Such a statement by itself is not proof of linkage, any more than the following proves a linkage between a certain stone in my garden and the USS Forrestal.
The weight of the USS Forrestal is exactly 1.2345658763209 × 10⁹ times the weight of a certain stone in my garden.
Rap
Such a statement by itself is not proof of linkage, any more than the following proves a linkage between a certain stone in my garden and the USS Forrestal.
The weight of the USS Forrestal is exactly 1.2345658763209 × 10⁹ times the weight of a certain stone in my garden.
That's like saying that E=mc^2 by itself is not proof of linkage between energy and mass any more than your USS Forrestal example. You are interpreting it the wrong way. E=mc^2 is a statement that there is a linkage, not proof that there is a linkage. S=kH is a statement that there is a linkage between thermo entropy S and info entropy H, not a proof.
The following is an interpretation of entropy inspired by the video below. I further develop the content and attempt to reconcile the seeming unrelatedness of thermo and info entropy. The result is rather successful. Please tell me if it works in explaining the equations.
Lets begin with a ball falling vertically to the ground at certain speed.
Initially the velocity vector of each particle in the ball is almost the same in both magnitude and direction. The velocity vector has a component which is random in direction as the particles are vibrating, but the magnitude of the random vector component is small and insignificant compared to the vertical velocity vector which is same for all particles.
When the ball collides with the ground, the ball particles at the contact surface receive some momentum from particles at the ground surface. As the ground particles are vibrating, they also carry a random-direction velocity vector (statistically, NOT individually). That means the momentum vectors transferred to the ball particles (one for each colliding particle) are also random in direction.
So the colliding ball particles might receive momentum with various combination, with totally same direction like up,up,up,up,up....., a less ordered combination like up,up,up,up,left......, to totally random direction like left,up,left,right,up..... or left,up,down,up,left.....
As number of disordered combination are much more than ordered combination, for almost 100% of the time the ball will receive disordered momentum directions.
So receiving the random momentums, the magnitude of random part of the velocity vector becomes more significant compared to its initial velocity vector,and entropy is said to be increased. Thermodynamically speaking, entropy is the significance of the random part of particle velocity vector compared to the whole velocity vector.
Let's verify this interpretation with the thermodynamic entropy equation:
$$dS=\frac{dQ}{T}$$, assuming the collision occurs on the small scale and does not affect the average KE (ie temperature) of ball particles.
So, after the collision, some ball particles gain some velocity, which results in gain in kinetic energy dQ. Because the direction of gained velocity is random in direction, the more "energy of such kind" they gain, the more significant the magnitude of random velocity will be. In other words, dS is proportional to dQ, which matches the equation.
The amount the significance increment is related to initial velocity of particles. If they already have a very high average velocity (hence very high temperature), the introduction of the random velocity won't bring much change to the particle velocity. In other words, dS is anti-proportional to T, which matches the equation as well.
(note: temperature represents average kinetic energy, which doesn't tell whether the velocities are of random direction or how random they are. But for particles confined in a fixed size container, they are certain to collide with each other and their velocity is statistically random. So for particles confined in a container with fixed size, there is no way to increase their average KE without increasing random velocity.)
You could do that the other way:
$$T\,dS = dQ$$
Suppose a body loses some entropy, which means a PORTION (not an amount) of random velocity is lost, so the particles of the body lose some KE due to the decrease in velocity. But entropy only tells the portion; to get the actual amount of KE lost you need to multiply it by how much KE the particles have (i.e. temperature). That is why dQ = T dS.
From the above paragraphs, thermodynamic entropy is the significance of the random part of the particle velocity vector compared to the whole velocity vector.
Let me change "particle velocity vector" to "value" (or information if you like), and the last sentence becomes:
Information entropy is the significance of random part of a value compared to whole value. As I believe info entropy is easy to understand if you don't try to relate it to thermo-entropy (which is what i soon will do), I skip the verification part and leave that to you.
The relationship between thermo-entropy and info-entropy is that, thermo-entropy measures randomness of an very existing phenomena (particle velocity), while info-entropy measures randomness of number. Info-entropy is the mathematical tool, and thermo-entropy is an application of the tool to measure a physical phenomena. The two are both related and unrelated. They are related as thermo-entropy borrows the info-entropy concept. They are unrelated as one tells how particles behave and one tells how numbers behave.
Entropy as statistic phenomena and macroscopic phenomena
Doing work requires particles colliding with a wall together (statistically speaking). With increasing randomness in their velocity, the chance of them colliding with a wall together becomes increasingly low, and doing work becomes statistically impossible. On a macroscopic scale, the statistical impossibility manifests as "less work is done (compared to frictionless predictions)". Furthermore, as the entropy of everywhere in the universe increases, the actual work done becomes lesser and lesser than predicted, until finally no work is done.
(Note: the difficulty in learning the notion of entropy comes from:
1. Thermo-entropy is related to how less work comes to be done during a heating process. So learners have a very strong temptation to think of entropy as "a measure of how much less work is done" or some kind of energy-storing micro-mechanism. As demonstrated by Andrew Mason in post #23 there is no proportionality here.
2. Formalism
Learners usually start learning entropy by looking at the definition, which is of highly condensed language with all the development history from concepts to words to mathematical symbols unmentioned. Without proper conceptual linking an equation would mean no more than "a value equals to something times something")
Last edited by a moderator:
RonL
Gold Member
http://bayes.wustl.edu/etj/articles/theory.1.pdf
Lots of his other papers are online, http://bayes.wustl.edu/etj/node1.html, interesting stuff.
marmoset, thanks for the links; note #74 on thermal efficiency has, in my mind, validated the thoughts about multiple heat sinks in a system. I think it is now clear to me how to better describe my thoughts of a two-or-more-system generator.
As for this thread my mind may have jumped ahead of what I have read or understand, but it seems that Boltzmann's treatment has generally been thought of in terms of single-particle, single-impact events.
Is it possible that a connection between thermal and information entropy goes much further, due to speed and number of electron interactions in the time frame of a single impact of one particle ?
The same as a transformer changes and transmits voltage and current, with some resulting heat value involved, it seems to me that what is happening at the container wall surface and the particle electron cloud area involves magnitudes more of what can be calculated.
Forgive my comments if they are in advance of what others have already said in writings I am about to study in the near future.
Ron
|
2020-10-30 17:30:15
|
https://physics.stackexchange.com/tags/photoelectric-effect/hot
|
# Tag Info
## Hot answers tagged photoelectric-effect
40
Yes, the photoelectric effect can be explained without photons! One can read it in L. Mandel and E. Wolf, Optical Coherence and Quantum Optics, Cambridge University Press, 1995, a standard reference for quantum optics. Sections 9.1-9.5 show that the electron field responds to a classical external electromagnetic radiation field by emitting ...
33
This classical prediction comes from the equipartition theorem of statistical mechanics, though I have some issues with exactly how the statement you quote is worded. The equipartition theorem is for describing how the energy gets distributed in a system with many degrees of freedom. For example, consider a mono-atomic ideal gas, like helium, that you've ...
32
The Lamb-Scully paper is a good example of how even a Nobel Prize winner can occasionally write a bad paper. The historical context is important. Einstein hypothesized the photon in 1905, but his paper was ahead of its time and was not widely accepted. For decades afterward, even once the quantum-mechanical nature of the atom was assumed by all physicists, ...
23
Rather than considering quantum efficiencies or such details it's instructive to step back and take a broader view. One of the main fuel crops grown in the UK is miscanthus. There are various figures around for the yield produced by miscanthus, but these people estimate it as about 14 tonnes per hectare per year. The energy content is 19GJ/tonne, so that's ...
22
The metals that are used in photoelectric experiments belong to the first group of the periodic table. They are often called Alkali metals. They have the highest electropositive nature in their respective periods. This makes them the most reactive. Any reactive metal would not exist in nature in its elemental state. So, these metals get oxidised by oxygen ...
20
For simplicity let's consider the photoelectric effect in a thin metal foil: The first step in the photoelectric effect is when a photon strikes an electron in the metal and transfers all its energy to it. The electron energy is now equal to the photon energy $h\nu$. If this energy is greater than the work function $\phi$ the electron can escape the metal ...
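As a numeric illustration (my numbers, not from the answer): take a 250 nm UV photon and sodium, whose work function is commonly quoted as roughly 2.28 eV:

```latex
\[
E_\gamma = \frac{hc}{\lambda}
         \approx \frac{1240\ \text{eV nm}}{250\ \text{nm}}
         \approx 4.96\ \text{eV},
\qquad
K_{\max} = E_\gamma - \phi
         \approx 4.96\ \text{eV} - 2.28\ \text{eV}
         = 2.68\ \text{eV}.
\]
```

Since $E_\gamma > \phi$, the electron escapes with up to about 2.68 eV of kinetic energy.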
Sometimes the effects do become visible in electronics. One example is the case where Raspberry Pi 2 could be crashed by camera flashlight: Upton explained that the semiconductor material used to make the power regulator was subject to a photoelectric effect when hit with light, and if enough light of the right energy was fired at it, then it would "...
In general you're right - an electron being subject to interactions with more than a single photon may have a higher kinetic energy. However, in the vast majority of photoelectric setups you will observe that kinetic energy is independent of light's intensity. The appropriate framework for this discussion is this of probability theory: Each electron has an ...
Yes, the textbooks are getting it very wrong. The common narrative on these things is best summarized by the "three nails in the coffin" approach: the dead body being the wave theory of light, and the three nails being the blackbody spectrum, the photo-electric effect, and the Compton effect. Whatever difficulties the wave theory may or may not have with ...
The problem is that you are confusing light intensity with energy of a single photon. The photoelectric effect requires a certain energy per photon to work. But low light intensity just means fewer photons come - you can actually see the grain if the conditions are too dark: every pixel can get ~10 photons or less... and yet still, each photon that comes has ...
Photosynthesis is less efficient than solar panels. According to the Wikipedia page on photosynthetic efficiency, typical plants have a radiant energy to chemical energy conversion efficiency between 0.1% and 2%. Most commercially available solar panels have more than 10 times this efficiency.
They do, but it's too small to notice on a human scale. On the scale of electronics, you absolutely can see it. We have photoresistors and photodiodes which rely on this effect. You need to be measuring this with a multimeter and looking at changes of resistance though - it's far too small for it to be perceptible as a static shock. For another use which ...
It is somewhat a matter of what precisely one refers to as the photoelectric effect. As far as the radiation-electron mechanism of energy transfer is concerned, there is no direct role played by the surface. However, referring to Einstein's formula $$h f = \Phi + K,$$ where $K$ is the maximum kinetic energy of the photoelectron, $f$ the frequency of the incoming ...
I disagree with OP in that I don't consider energy conservation as a fatal flaw. If one lets $t\to\infty$ in the perturbative calculation, one gets a nice delta function $\delta(\epsilon_f-\epsilon_i-\hbar\omega)$ but in such case the external energy supply is infinite and no meaningful energy conservation argument can be formulated, so I guess OP must be ...
There is so called equipartition theorem in classical statistical physics that says that under some conditions all degrees of freedom have the same average energy if the temperature is fixed. The degrees of freedom of electromagnetic field satisfy those conditions, and there is an infinite number of such degrees of freedom (frequencies can be arbitrarily ...
For a given system that the electron is in, the primary determinant is the energy of the photon. As @DJBunk points out, this is a quantum mechanical process, so the "choice" is fundamentally random. A given interaction will occur with a probability proportional to its cross section. Figure 1 of this lecture shows how the cross section for each possible ...
"Does this mean that Ohm's law just fails in this case?" Ohm's law is not universal. The ideal resistor circuit element is defined by Ohm's law but not all circuit elements obey Ohm's law; Ohm's law only applies to ohmic devices. Physical resistors and conductors approximately obey Ohm's law but, for example, semiconductor diodes, transistors, thyristors, ...
More than one photon can be absorbed, but the probability is minute for usual intensities. As a scale for "usual intensities" note that sunlight on earth has an intensity of about $1000\,\mathrm{W/m^2} = 10^{-1}\mathrm{W/cm^2}$. The intuitive reason is, that the linear process (an electron absorbs one photon) is more or less "unlikely" (as the coupling ...
What Einstein added to the discussion was the idea that electromagnetic energy comes in little particle-like packets. That was a very radical concept at the time (and frankly still is).
This is in the x-ray region and beyond. The wavelength of the light is smaller than the size of the electron orbitals, and decreasing when the photon energy goes up. When the electric field oscillates a lot on the length scale of the wave function, positive and negative contributions to the integrals in the transition to the excited state (the ...
Yes excited states have a non-zero lifetime. Electronically excited states of atoms have lifetimes of a few nanoseconds, though the lifetime of other excited states can be as long as 10 million years. The decay probability can be calculated using Fermi's golden rule. The lifetime is then an average lifetime derived from the decay probability. The lifetime ...
Photoelectric effect can be observed instantaneously when light is flashed and it is independent of intensity of the light. If wave model were at play here, intensity of the light (amplitude of the wave) would have an effect on how quickly an electron receives sufficient energy to be knocked out. This is because energy in a wave is related to its amplitude.
The energy is delivered to the metal in discrete packets. If you think of light as a wave, then one may expect that a low-energy colour of light should be able to (eventually) liberate electrons from the metal if you wait long enough (as more and more energy is deposited into the metal). Because electrons are not liberated below a certain energy threshold (...
In the photoelectric effect, photons incident on the cathode cause the emission of electrons. Assuming there is a sufficient electric field, these electrons will make their way across to the anode, contributing current. For simplicity, let's assume every photon generates a photo-electron. Then if $N$ photons per second hit the cathode, the current will be ...
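Under the same idealization (one photo-electron per photon), the photocurrent follows directly from the photon rate; a sketch assuming, hypothetically, a 1 µW beam at 500 nm:

```python
# Photocurrent from photon rate, assuming one photo-electron per photon.
# The 1 microwatt power and 500 nm wavelength are assumed examples.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, C

power = 1e-6            # incident optical power, W (assumed)
wavelength = 500e-9     # green light (assumed)

photon_energy = h * c / wavelength   # J per photon
N = power / photon_energy            # photons (and electrons) per second
I = N * e                            # photocurrent in amperes
print(f"{N:.2e} photons/s -> {I*1e6:.2f} uA")
```

Doubling the intensity doubles N and hence the current, without changing the energy of any individual electron.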
A light ray has an oscillating electric field associated with it, and this oscillating electric field will make electrons oscillate when the light ray passes through them. Note that I'm talking about light rays not photons - we'll get to photons in a bit. When the oscillating field of the light makes electrons oscillate energy can be transferred between the ...
Why, after absorbing a photon does an atom's electron 'fall' back to its ground state (what causes it to immediately lose its absorbed energy)? The answer by @Davidmh gives our observations from classical physics, where we formulated the quantities "energy" , "potential" etc. We observed that this was so, an apple falls, and brilliant mathematics organized ...
But the stopping potential does depend on the kinetic energy of the electrons. The stopping potential is defined as the potential necessary to stop any electron (or, in other words, to stop even the electron with the most kinetic energy) from 'reaching the other side'. As you already stated, the maximum kinetic energy is given by K_\text{max}=h\nu-e\phi_\...
There is a recoil when each photon leaves, but they radiate in all directions at once, as ACuriousMind intimates in his comment, so there is no collimated beam to concentrate the recoil. Even if the total recoil were concentrated, its effect is so small that the screw-in base of the bulb is more than sufficient to hold the bulb steady. Theoretically, it ...
It may be a reference to the fact that you can reproduce the characteristics of the photoelectron production in a model which treats the incident light classically, but treats the matter in the target quantum mechanically. This is explained in Mandel and Wolf's book (chapter 9), which explains how a simple semiclassical calculation can be used to derive the ...
I just had an exam today with a similar question. I guess you think of light intensity as amount of power/sec hitting the metal. Thus if we increase the intensity (but keep the frequency of light constant), all we are doing is adding more photons with the same amount of energy. And the electrons by rule can only absorb all or none of the energy provided by a ...
https://matholympiad.org.bd/forum/viewtopic.php?f=28&t=3962&p=17932
## Binary Representation
For discussing Olympiad Level Combinatorics problems
Soumitra Das
Posts: 5
Joined: Mon Apr 03, 2017 1:59 pm
### Binary Representation
For any positive integer $n$, let $B(n)$ be the number of 1's in its binary representation. Prove that $$B(nm) \geq \max\{B(n),B(m)\},$$ where $n,m \in \mathbb{N}$.
Ragib Farhat Hasan
Posts: 62
Joined: Sun Mar 30, 2014 10:40 pm
### Re: Binary Representation
Can you explain the RHS of the inequality $B(nm) \geq \max\{B(n),B(m)\}$?
I mean, do we multiply, add, or individually consider the maximum of the values $B(n)$ and $B(m)$?
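For what it's worth, max{B(n), B(m)} just means the larger of the two separately computed values, and B itself is a plain popcount; a quick Python sketch of the notation:

```python
def B(n: int) -> int:
    """Number of 1's in the binary representation of n."""
    return bin(n).count("1")

n, m = 5, 3                 # 5 = 101_2, 3 = 11_2
print(B(n), B(m))           # 2 2
print(max(B(n), B(m)))      # 2  <- the RHS: the larger of the two counts
print(B(n * m))             # 4, since 15 = 1111_2
```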
http://www.physicsforums.com/showthread.php?p=4173527
# Showing that Lorentz transformations are the only ones possible
by bob900
Tags: lorentz, showing, transformations
Emeritus
PF Gold
P: 9,355
Quote by TrickyDicky If I understand it correctly this proves that the most general transformations that take straight lines to straight lines are the linear fractional ones. To get to the linear case one still needs to impose the condition mentioned above about the continuity of the transformation, right?
It's sufficient to assume that the map that takes straight lines to straight lines is defined on the entire vector space, rather than a proper subset. It's not necessary to assume that the map is continuous. (If you want the map to be linear, rather than linear plus a translation, you must also assume that it takes 0 to 0).
Emeritus
PF Gold
P: 9,355
Quote by DrGreg I've just realised there's a simple geometric proof, for Fredrik's special case, for the case of the whole of $\mathbb{R}^2$, which I suspect would easily extend to higher dimensions. Let $T : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a bijection that maps straight lines to straight lines. It must map parallel lines to parallel lines, otherwise two points on different parallel lines would both be mapped to the intersection of the non-parallel image lines, contradicting bijectivity. So it maps parallelograms to parallelograms. But, if you think about it, that's pretty much the defining property of linearity (assuming T(0)=0). There are a few I's to dot and T's to cross to turn the above into a rigorous proof, but I think I'm pretty much there -- or have I omitted too many steps in my thinking? (I think you may have to assume T is continuous to extend the additive property of linearity to the scalar multiplication property.)
I've been examining the proof in Berger's book more closely. (Change the .se to your own country domain if the url is giving you trouble). His strategy is very close to yours, but there's a clever trick at the end that allows us to drop the assumption of continuity. Consider the following version of the theorem:
Suppose that X=ℝ2. If T:X→X is a bijection that takes straight lines to straight lines and 0 to 0, then T is linear.
For this theorem, the steps are as follows:
1. If K and L are two different lines through 0, then T(K) and T(L) are two different lines through 0.
2. If K and L are two parallel lines, then T(K) and T(L) are two parallel lines.
3. For all x,y such that {x,y} is linearly independent, T(x+y)=Tx+Ty. (This is done by considering a parallelogram as you suggested).
4. For all vectors x and all real numbers a, T(ax)=aTx. (Note that this result implies that T(x+y)=Tx+Ty when {x,y} is linearly dependent).
The strategy for step 4 is as follows: Let x be an arbitrary vector and a an arbitrary real number. If either x or a is zero, we have T(ax)=0=aTx. If both are non-zero, we have to be clever. Since Tx is on the same straight line through 0 as T(ax), there's a real number b such that T(ax)=bTx. We need to prove that b=a. Let B be the map ##t\mapsto tx##. Let C be the map ##t\mapsto tTx##. Let f be the restriction of T to the line through x and 0. Define ##\sigma:\mathbb R\to\mathbb R## by ##\sigma=C^{-1}\circ f\circ B##. Since
$$\sigma(a)=C^{-1}\circ f\circ B(a) =C^{-1}(f(B(a)) =C^{-1}(T(ax)) =C^{-1}(bTx)=b,$$ what we need to do is to prove that σ is the identity map. Berger does this by proving that σ is a field isomorphism. Since both the domain and codomain is ℝ, this makes it an automorphism of ℝ, and by the lemma that micromass proved so elegantly above, that implies that it's the identity map.
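Berger's argument concludes that σ is the identity, i.e. T(ax) = aTx. For a map already known to be linear this is immediate, and it can be illustrated numerically; the matrix below is an arbitrary assumed example, not anything from the thread:

```python
# Illustration of the conclusion T(ax) = a*T(x): for a concrete linear
# bijection of the plane, the scalar b with T(ax) = b*T(x) equals a,
# i.e. sigma is the identity map. The map T below is an assumed example.
def T(v):
    x, y = v
    return (2.0 * x + y, 3.0 * y)   # invertible linear map, det = 6

x = (1.0, 1.0)
Tx = T(x)
for a in (-2.0, 0.5, 3.0):
    ax = (a * x[0], a * x[1])
    assert T(ax) == (a * Tx[0], a * Tx[1])   # sigma(a) = a
print("sigma(a) = a for the sample scalars")
```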
P: 3,035
Quote by Fredrik It's sufficient to assume that the map that takes straight lines to straight lines is defined on the entire vector space, rather than a proper subset. It's not necessary to assume that the map is continuous. (If you want the map to be linear, rather than linear plus a translation, you must also assume that it takes 0 to 0).
What I meant is that one must impose that the transformation must map finite coordinates to finite coordinates, which I think is equivalent to what you are saying here.
P: 1,918
Quote by micromass Here is a proof for the plane.
Thank you Micromass.
Your posts deserve to be polished and turned into a library item, so I'll mention a couple of minor typos I noticed:
[...] again a perspectivity [...]
Even though this is a synonym, I presume it should be "projectivity", since that's the word you used earlier.
Also,
[...] verticles [...]
Emeritus Sci Advisor
PF Gold
P: 9,355
Just out of curiosity, do people use the term "line" for curves that aren't straight? Do we really need to say "straight line" every time?
Mentor
P: 18,241
Quote by strangerep Even though this is a synonym, I presume it should be "projectivity", since that's the word you used earlier.
Ah yes, thank you!! It should indeed be projectivity.
A perspectivity is something slightly different. I don't know why I used that term...
P: 1,583
Quote by Fredrik Just out of curiosity, do people use the term "line" for curves that aren't straight? Do we really need to say "straight line" every time?
Yes, at least historically line was just used to mean any curve. I think Euclid defined a line to be a "breadthless length", and defined a straight line to be a line that "lies evenly with itself", whatever that means.
EDIT: If you're interested, you can see the definitions here.
Emeritus Sci Advisor
PF Gold
P: 9,355
I think I have completely understood how to prove the following theorem using the methods described in Berger's book.

If ##T:\mathbb R^2\to\mathbb R^2## is a bijection that takes lines to lines and 0 to 0, then ##T## is linear.

I have broken it up into ten parts. Most of them are very easy, but there are a few tricky ones. Notation: If L is a line, then I will write TL instead of T(L).
1. If K is a line through 0, then so is TK.
2. If K,L are lines through 0 such that K≠L, then TK≠TL. (Note that this implies that if {x,y} is linearly independent, then so is {Tx,Ty}).
3. If K is parallel to L, then TK is parallel to TL.
4. For all x,y such that {x,y} is linearly independent, T(x+y)=Tx+Ty.
5. If x=0 or a=0, then T(ax)=aTx.
6. If x≠0 and a≠0, then there's a b such that T(ax)=bTx. (Note that this implies that for each x≠0, there's a map σ such that T(ax)=σ(a)Tx. The following steps determine the properties of σ for an arbitrary x≠0).
7. σ is a bijection from ℝ into ℝ.
8. σ is a field homomorphism.
9. σ is the identity map. (Combined with 5-6, this implies that T(ax)=aTx for all a,x).
10. For all x,y such that {x,y} is linearly dependent, T(x+y)=Tx+Ty.

I won't explain all the details of part 8, because they require a diagram. But I will describe the idea. If you want to understand part 8 completely, you need to look at the diagrams in Berger's book. Notation: I will denote the line through x and y by [x,y].
1. Since T takes lines to lines, TK is a line. Since T0=0, 0 is on TK.
2. Suppose that TK=TL. Let x be an arbitrary non-zero point on TK. Since x is also on TL, T⁻¹(x) is in both K and L. But this implies that T⁻¹(x)=0, which contradicts that x≠0.
3. If K=L, then obviously TK=TL. If K≠L, then they are either parallel or intersect somewhere, and part 2 tells us that they don't intersect.
4. Let x,y be arbitrary vectors such that {x,y} is linearly independent. Part 2 tells us that {Tx,Ty} is linearly independent. Define K=[0,x] (the range of ##t\mapsto tx##), L=[0,y] (the range of ##t\mapsto ty##), K'=[x+y,y] (the range of ##t\mapsto y+tx##, so this line is parallel to K), and L'=[x+y,x] (the range of ##t\mapsto x+ty##, so this line is parallel to L). Since x+y is at the intersection of K' and L', T(x+y) is at the intersection of TK' and TL'. We will show that Tx+Ty is also at that intersection. Since x is on L', Tx is on TL'. Since L' is parallel to L, TL' is parallel to TL (the line spanned by {Ty}). These two results imply that TL' is the range of the map B defined by B(t)=Tx+tTy. Similarly, TK' is the range of the map C defined by C(t)=Ty+tTx. So there's a unique pair (r,s) such that T(x+y)=C(r)=B(s). The latter equality can be written as Ty+rTx=Tx+sTy. This is equivalent to (r-1)Tx+(1-s)Ty=0, and since {Tx,Ty} is linearly independent, this implies r=s=1. So T(x+y)=B(1)=Tx+Ty.
5. Let x be an arbitrary vector and a an arbitrary real number. If either of them is zero, we have T(ax)=0=aT(x).
6. Let x be non-zero but otherwise arbitrary. 0, x, and ax are all on the same line, K. So 0, x and T(ax) are on the line TK. This implies that there's a b such that T(ax)=bTx. (What we did here proves this statement when a≠0 and x≠0, and part 5 shows that it's also true when a=0 or x=0).
7. The map σ can be defined explicitly in the following way. Define B by B(t)=tx for all t. Define C by C(t)=tTx for all t. Let K be the range of B. Then the range of C is TK. Define ##\sigma=C^{-1}\circ T|_K\circ B##. This map is a bijection (ℝ→ℝ), since it's the composition of three bijections (ℝ→K→TK→ℝ). To see that this is the σ that was discussed in the previous step, let b be the real number such that T(ax)=bTx, and note that $$\sigma(a)=C^{-1}\circ T|_K\circ B(a) =C^{-1}(T(B(a))) =C^{-1}(T(ax)) =C^{-1}(bTx)=b.$$
8. Let a,b be arbitrary real numbers. Using the diagrams in Berger's book, we can show that there are two lines K and L such that (a+b)x is at the intersection of K and L. This implies that the point at the intersection of TK and TL is T((a+b)x)=σ(a+b)Tx. Then we use the diagram (and its image under T) to argue that T(ax)+T(bx) must also be at that same intersection. This expression can be written (σ(a)+σ(b))Tx, so these results tell us that $$(\sigma(a)+\sigma(b)-\sigma(a+b))Tx=0.$$ Since Tx≠0, this implies that σ(a+b)=σ(a)+σ(b). Then we use similar diagrams to show that σ(ab)=σ(a)σ(b), and that if a
P: 977
This is a very interesting thread. Sorry I'm late to the conversation. I appreciate all the contributions. But I'm getting a little lost. The question of the OP was asking about what kind of transformation keeps the following invariant: $$c^2t^2 - x^2 - y^2 - z^2 = 0$$ $$c^2t'^2 - x'^2 - y'^2 - z'^2 = 0$$
But Mentz114 in post 3 interprets this to mean that the transformation preserves -dt'2 + dx'2 = -dt2 + dx2. And Fredrik in post 8 interprets this to mean "If Λ is linear and g(Λx,Λx)=g(x,x) for all x∈R4, then Λ is a Lorentz transformation", and modifies this in post 9 to be "If Λ is surjective, and g(Λ(x),Λ(y))=g(x,y) for all x,y∈R4, then Λ is a Lorentz transformation."
Are these all the same answer in different forms? Or is there a side question being addressed about linearity? Thank you.
Emeritus
PF Gold
P: 9,355
Quote by friend And Fredrik in post 8 interprets this to mean If Λ is linear and g(Λx,Λx)=g(x,x) for all x∈R4, then Λ is a Lorentz transformation. And modifies this in post 9 to be If Λ is surjective, and g(Λ(x),Λ(y))=g(x,y) for all x,y∈R4, then Λ is a Lorentz transformation.
Those aren't interpretations of the original condition. I would interpret the OP's assumption as saying that g(Λx,Λx)=0 for all x∈ℝ4 such that g(x,x)=0 (i.e. for all x on the light cone). This assumption isn't strong enough to imply that Λ is a Lorentz transformation, so I described two similar but stronger assumptions that are strong enough. The two statements you're quoting here are theorems I can prove.
There is another approach to relativity that's been discussed in a couple of other threads recently. In this approach, the speed of light isn't mentioned at all. (Note that the g in my theorems is the Minkowski metric, so the speed of light is mentioned there). Instead, we interpret the principle of relativity as a set of mathematically precise statements, and see what we get if we take those statements as axioms. The axioms are telling us that the set of functions that change coordinates from one inertial coordinate system to another is a group, and that each of them takes straight lines to straight lines.
The problem I'm interested in is this: If space and time are represented in a theory of physics as a mathematical structure ("spacetime") with underlying set ℝ4, then what is the structure? When ℝ4 is the underlying set, it's natural to assume that those functions are defined on all of ℝ4. The axioms will then include the statement that those functions are bijections from ℝ4 into ℝ4. (Strangerep is considering something more general, so he is replacing this with something weaker).
The theorems we've been discussing lately tell us that a bijection ##T:\mathbb R^4\to\mathbb R^4## takes straight lines to straight lines if and only if there's an ##a\in\mathbb R^4## and a linear ##\Lambda:\mathbb R^4\to\mathbb R^4## such that ##T(x)=\Lambda x+a## for all ##x\in\mathbb R^4##. The set of inertial coordinate transformations with a=0 is a subgroup, and it has a subgroup of its own that consists of all the proper and orthochronous transformations with a=0.
What we find when we use the axioms is that this subgroup is either the group of Galilean boosts and proper and orthochronous rotations, or it's isomorphic to the restricted (i.e. proper and orthochronous) Lorentz group. In other words, we find that "spacetime" is either the spacetime of Newtonian mechanics, or the spacetime of special relativity. Those are really the only options when we take "spacetime" to be a structure with underlying set ℝ4.
Of course, if we had lived in 1900, we wouldn't have been very concerned with mathematical rigor in an argument like this. We would have been trying to guess the structure of spacetime in a new theory, and in that situation, there's no need to prove that theorem about straight lines. We can just say "let's see if there are any theories in which Λ is linear", and move on.
In 2012 however, I think it makes more sense to do this rigorously all the way from the axioms that we wrote down as an interpretation of the principle of relativity, because this way we know that there are no other spacetimes that are consistent with those axioms.
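The easy direction of the straight-lines theorem mentioned above (any affine map T(x) = Λx + a takes straight lines to straight lines) can be checked numerically; a minimal sketch in the plane, where the matrix and the offset are arbitrary assumed examples:

```python
# Affine maps T(x) = L @ x + a take straight lines to straight lines.
# Check collinearity of the images of three collinear points in the plane.
L = [[1.0, 2.0], [0.0, 3.0]]   # an invertible 2x2 matrix (assumed example)
a = (5.0, -1.0)                # a translation (assumed example)

def T(p):
    x, y = p
    return (L[0][0]*x + L[0][1]*y + a[0], L[1][0]*x + L[1][1]*y + a[1])

def collinear(p, q, r):
    # cross product of (q - p) and (r - p) vanishes for collinear points
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]) == 0

pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]   # three points on the line y = 2x
assert collinear(*pts)
assert collinear(*(T(p) for p in pts))
print("images are collinear")
```

The hard direction, that every line-preserving bijection of ℝ4 has this form, is what the theorems discussed in this thread establish.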
P: 450
Quote by Fredrik Of course, if we had lived in 1900, we wouldn't have been very concerned with mathematical rigor in an argument like this. We would have been trying to guess the structure of spacetime in a new theory, and in that situation, there's no need to prove that theorem about straight lines. We can just say "let's see if there are any theories in which Λ is linear", and move on. In 2012 however, I think it makes more sense to do this rigorously all the way from the axioms that we wrote down as an interpretation of the principle of relativity, because this way we know that there are no other spacetimes that are consistent with those axioms.
OK. Thank you for all these explanations. But don't you think that the "obsession" with preservation of straight lines is entirely due to our false and old-fashioned use of the definition of what an inertial observer is? What do I mean? An inertial observer is not an observer without acceleration, but an observer on which no force is acting. And these are not the same thing within a generalized theory of relativity, where F = d(mv)/dt = m·(dv/dt) + (dm/dt)·v, so F = 0 does not imply dv/dt = 0.
Emeritus Sci Advisor
PF Gold
P: 9,355
Those formulas do imply that ##F=0\Leftrightarrow \dot v=0##.
$$\gamma=\frac{1}{\sqrt{1-v^2}},\qquad m=\gamma m_0$$
$$\dot\gamma=-\frac{1}{2}(1-v^2)^{-\frac{3}{2}}(-2v\dot v)=\gamma^3v\dot v$$
$$\dot m=\dot\gamma m_0=\gamma^3v\dot v m_0$$
\begin{align} F &=\frac{d}{dt}(mv)=\dot m v+m\dot v=\gamma^3v^2\dot v m_0+\gamma m_0\dot v =\gamma m_0\dot v(\gamma^2v^2+1)\\ & =\gamma m_0\dot v\left(\frac{v^2}{1-v^2}+\frac{1-v^2}{1-v^2}\right) =\gamma^3 m_0\dot v \end{align}
A complete specification of a theory of physics must include a specification of what measuring devices to use to test the theory's predictions. In particular, a theory about space, time and motion must describe how to measure lengths. It's not enough to just describe a meter stick, because the properties of a stick will to some degree depend on what's being done to it. So the theory must also specify the ideal conditions under which the measuring devices are expected to work the best. It's going to be very hard to specify a theory without ever requiring that an accelerometer displays 0. I don't even know if it can be done. So non-accelerated motion is probably always going to be an essential part of all theories of physics.
In all of our theories, motion is represented by curves in the underlying set of a structure called "spacetime". I will denote that set by M. A coordinate system is a function from a subset of M into ℝ4. If ##C:(a,b)\to M## is a curve in M, U is a subset of M, and ##x:U\to\mathbb R^4## is a coordinate system, then ##x\circ C## is a curve in ℝ4. So each coordinate system takes curves in spacetime to curves in ℝ4. If such a curve is a straight line, then the object has constant velocity in that coordinate system. If a coordinate system takes all the curves that represent non-accelerating motion to straight lines, then it assigns a constant velocity to every non-accelerating object. Those are the coordinate systems we call "inertial".
There's nothing particularly old-fashioned about that. Edit: Fixed four (language/typing/editing) mistakes in the last paragraph.
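The closed form dp/dv = γ³m₀ derived above (so F = dp/dt = γ³m₀·dv/dt, in units with c = 1) can be checked against a numerical derivative of p = γm₀v; a quick sketch:

```python
import math

# Check dp/dv = gamma^3 * m0 for relativistic momentum p = gamma*m0*v
# (units with c = 1). The sample speed v = 0.6 is an arbitrary choice.
m0 = 1.0

def p(v):
    # relativistic momentum, p = gamma * m0 * v with gamma = 1/sqrt(1 - v^2)
    return m0 * v / math.sqrt(1.0 - v * v)

v = 0.6
h = 1e-6
dp_dv = (p(v + h) - p(v - h)) / (2 * h)     # central-difference derivative

gamma = 1.0 / math.sqrt(1.0 - v * v)        # gamma = 1.25 at v = 0.6
assert abs(dp_dv - gamma**3 * m0) < 1e-6    # matches gamma^3 * m0
print(f"dp/dv = {dp_dv:.6f}, gamma^3 = {gamma**3:.6f}")
```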
P: 450
Quote by Fredrik Those formulas do imply that ##F=0\Leftrightarrow \dot v=0##. $$\gamma=\frac{1}{\sqrt{1-v^2}},\qquad m=\gamma m_0$$ $$\dot\gamma=-\frac{1}{2}(1-v^2)^{-\frac{3}{2}}(-2v\dot v)=\gamma^3v\dot v$$ $$\dot m=\dot\gamma m_0=\gamma^3v\dot v m_0$$ \begin{align} F &=\frac{d}{dt}(mv)=\dot m v+m\dot v=\gamma^3v^2\dot v m_0+\gamma m_0\dot v =\gamma m_0\dot v(\gamma^2v^2+1)\\ & =\gamma m_0\dot v\left(\frac{v^2}{1-v^2}+\frac{1-v^2}{1-v^2}\right) =\gamma^3 m_0\dot v \end{align} A complete specification of a theory of physics must include a specification of what measuring devices to use to test the theory's predictions. In particular, a theory about space, time and motion must describe how to measure lengths. It's not enough to just describe a meter stick, because the properties of a stick will to some degree depend on what's being done to it. So the theory must also specify the ideal conditions under which the measuring devices are expected to work the best. It's going to be very hard to specify a theory without ever requiring that an accelerometer displays 0. I don't even know if can be done. So non-accelerated motion is probably always going to be an essential part of all theories of physics. In all of our theories, motion is represented by curves in the underlying set of a structure called "spacetime". I will denote that set by M. A coordinate system is a function from a subset of M into ℝ4. If ##C:(a,b)\to M## is a curve in M, U is a subset of M, and ##x:U\to\mathbb R^4## is a coordinate system, then ##x\circ C## is a curve in C. So each coordinate systems takes curves in spacetime to curves in ℝ4. If such a curve is a straight line, then object has zero velocity in that coordinate system. If a coordinate system takes all the curves that represent non-accelerating motion are take to straight lines, then it assigns a constant velocity to every to non-accelerating objects. Those are the coordinate systems we call "inertial". 
There's nothing particularly old-fashioned about that.
Ok, well done and well explained (thanks). But all this concerns only special relativity. Where do you see that the question asked by the OP (and recalled by friend) is imposing linearity? For me it only imposes Christoffel's work; see the other discussion "O-S model of star collapse" post 109, Foundations of the GTR by A. Einstein and translated by Bose, [793], (25). My impression (perhaps false) is that SR is based on a coherent but circular way of thinking including "linearity" for easily understandable historical reasons. The preservation of a length element (which is the initial question here) does not impose a flat geometry. Don't you think so?
P: 3,035
Quote by Fredrik The problem I'm interested in is this: If space and time are represented in a theory of physics as a mathematical structure ("spacetime") with underlying set ℝ4, then what is the structure? When ℝ4 is the underlying set, it's natural to assume that those functions are defined on all of ℝ4. The axioms will then include the statement that those functions are bijections from ℝ4 into ℝ4
I find this confusing: if you start by assuming a spacetime structure that admits bijections from ℝ4 into ℝ4 (that is, E^4 or M^4) as the underlying structure because it seems natural to you, you are already imposing linearity on the transformations that respect the relativity principle. This leaves only the two possible transformations you comment on below. The second postulate of SR is what allows us to pick which of the two is the right transformation.
But if you follow this path, it is completely superfluous to prove anything about mapping straight lines to straight lines in order to get the most general transformation that does that, and then to restrict it to the linear ones with a plausible physical assumption, since you are already starting with linear transformations.
Quote by Fredrik What we find when we use the axioms is that this subgroup is either the group of Galilean boosts and proper and orthochronous rotations, or it's isomorphic to the restricted (i.e. proper and orthochronous) Lorentz group. In other words, we find that "spacetime" is either the spacetime of Newtonian mechanics, or the spacetime of special relativity. Those are really the only options when we take "spacetime" to be a structure with underlying set ℝ4.
Just a minor correction: the Lorentz transformations are locally isomorphic to the restricted group.
Emeritus
PF Gold
P: 9,355
Quote by TrickyDicky I find this confusing, if you start by assuming a spacetime structure that admits bijections from ℝ4 into ℝ4 (that is E^4 or M^4) as the underlying structure because it seems natural to you, you are already imposing linearity for the transformations that respect the relativity principle. This leaves only the two possible transformations you comment below.
How am I "already imposing linearity"? I'm starting with "takes straight lines to straight lines", because that is the obvious property of inertial coordinate transformations, and then I'm using the theorem to prove that (when spacetime is ℝ4) an inertial coordinate transformation is the composition of a linear map and a translation. I don't think linearity is obvious. It's just an algebraic condition with no obvious connection to the concept of inertial coordinate transformations.
Quote by TrickyDicky The second postulate of SR is what allows us to pick which of the two is the right transformation.
Right, if we add that to our assumptions, we can eliminate the Galilean group as a possibility. But I would prefer to just say this: These are the two theories that are consistent with a) the idea that ℝ4 is the underlying set of "spacetime", and b) our interpretation of the principle of relativity as a set of mathematically precise statements about transformations between global inertial coordinate systems. Now that we have two theories, we can use experiments to determine which one of them makes the better predictions.
Quote by TrickyDicky Just a minor correction the Lorentz transformations are locally isomorphic to the restricted group.
How is that a correction? It seems like an unrelated statement.
P: 450
Quote by Fredrik How am I "already imposing linearity"? I'm starting with "takes straight lines to straight lines", because that is the obvious property of inertial coordinate transformations, and then I'm using the theorem to prove that (when spacetime is ℝ4) an inertial coordinate transformation is the composition of a linear map and a translation. I don't think linearity is obvious. It's just an algebraic condition with no obvious connection to the concept of inertial coordinate transformations. Right, if we add that to our assumptions, we can eliminate the Galilean group as a possibility. But I would prefer to just say this: These are the two theories that are consistent with a) the idea that ℝ4 is the underlying set of "spacetime", and b) our interpretation of the principle of relativity as a set of mathematically precise statements about transformations between global inertial coordinate systems. Now that we have two theories, we can use experiments to determine which one of them makes the better predictions.
The relevant experiment for this discussion is the Michelson-Morley experiment.
Quote by Fredrik How is that a correction? It seems like an unrelated statement.
Intuitively (I am not a specialist) this means that the isomorphism holds true only locally (over short distances around the observer). There is not really a global inertial coordinate system (except on paper, in theory). And (as far as I understand the generalized version of the theory) this is a crucial point. Among other things, this is what forced us (Weyl's work) to introduce the concepts of parallel transport and of connection.
P: 3,035
Quote by Fredrik How am I "already imposing linearity"?
The assumption of a spacetime that is globally R^4 (not just locally, which is the weaker assumption) means your underlying geometry is flat (Minkowskian, Euclidean), do you agree?
Given that space, the transformations that leave inertial coordinates invariant in the sense of SR's first postulate must automatically be linear transformations, do you agree? Maybe this is not as obvious as I think, but I think it is correct.
Quote by Fredrik How is that a correction? It seems like an unrelated statement.
Well, it just seemed important to make it more precise that the isomorphism you were talking about is local.
Emeritus
PF Gold
P: 9,355
Quote by Blackforest But all this concerns only special relativity.
And pre-relativistic classical mechanics. It concerns all theories with ℝ4 as spacetime. I think it's pretty cool that there are only two such theories that are consistent with a straightforward interpretation of the principle of relativity.
Quote by Blackforest Where do you see that the question asked by the OP (and recalled by friend) is imposing linearity? For me it only imposes the Christoffel's work; see the other discussion "O-S model of star collapse" post 109, Foundations of the GTR by A. Einstein and translated by Bose, [793], (25).
Someone who tries to argue that a transformation that satisfies the OP's condition must be a Lorentz transformation has probably already assumed that spacetime is ℝ4, and that the theory will involve global (i.e. defined on all of spacetime) inertial coordinate systems. That a transformation between two global inertial coordinate systems is a bijection and takes straight lines to straight lines is just a consequence of the definition of "global inertial coordinate system". The 4-dimensional version of the theorem I stated and proved in #98 shows that a bijection that takes straight lines to straight lines is affine (i.e. a composition of a linear map and a translation). So when we begin to consider the OP's condition, it's already a matter of determining which affine maps satisfy it. And the condition implies that 0 is taken to 0, so there's no translation involved, i.e. the transformation is linear.
Quote by Blackforest My impression (perhaps false) is that SR is based on a coherent but circular way of thinking including "linearity" for easy understandable historical reasons.
I don't think there's anything circular about it. It's perhaps naive to think that we should be able to use ℝ4 as our spacetime, and talk about global inertial coordinate systems. But it makes sense to first find all such theories, and then ask what other theories are worth considering. I might take a look at that problem when I have worked out all the details of the ℝ4 case.
http://planetmath.org/RiemannCurvatureTensor
|
# Riemann curvature tensor
Let $\mathcal{X}$ denote the vector space of smooth vector fields on a smooth Riemannian manifold $(M,g)$. Note that $\mathcal{X}$ is actually a $\mathcal{C}^{\infty}(M)$ module because we can multiply a vector field by a function to obtain another vector field. The Riemann curvature tensor is the tri-linear $\mathcal{C}^{\infty}$ mapping
$R:{\mathcal{X}}\times{\mathcal{X}}\times{\mathcal{X}}\to{\mathcal{X}},$
which is defined by
$R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z$
where $X,Y,Z\in\mathcal{X}$ are vector fields, where $\nabla$ is the Levi-Civita connection attached to the metric tensor $g$, and where the square brackets denote the Lie bracket of two vector fields. The tri-linearity means that for every smooth $f\colon M\to\mathbb{R}$ we have
$fR(X,Y)Z=R(fX,Y)Z=R(X,fY)Z=R(X,Y)fZ.$
In components this tensor is classically denoted by a set of four-indexed components ${R^{i}}_{jkl}$. This means that given a basis of linearly independent vector fields $X_{i}$ we have
$R(X_{j},X_{k})X_{l}=\sum_{s}{R^{s}}_{jkl}X_{s}.$
In a two-dimensional manifold it is known that the Gaussian curvature is given by
$K_{g}=\frac{R_{1212}}{g_{11}g_{22}-{g_{12}}^{2}}$
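As an illustration of these formulas (not part of the original entry), the components can be computed from the Levi-Civita connection of a concrete metric. The sketch below, with the classical index convention $R^{i}{}_{jkl}=\partial_{k}\Gamma^{i}_{lj}-\partial_{l}\Gamma^{i}_{kj}+\Gamma^{i}_{ks}\Gamma^{s}_{lj}-\Gamma^{i}_{ls}\Gamma^{s}_{kj}$ and variable names of our choosing, recovers the constant Gaussian curvature $1/a^{2}$ of a round sphere of radius $a$:

```python
import sympy as sp

a = sp.symbols('a', positive=True)
theta, phi = sp.symbols('theta phi')
x = [theta, phi]
n = 2

g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(theta)**2]])   # round metric on the 2-sphere
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk} of the Levi-Civita connection of g
Gamma = [[[sum(ginv[i, l] * (sp.diff(g[l, j], x[k]) + sp.diff(g[l, k], x[j])
                             - sp.diff(g[j, k], x[l])) for l in range(n)) / 2
           for k in range(n)] for j in range(n)] for i in range(n)]

# R^i_{jkl} = d_k Gamma^i_{lj} - d_l Gamma^i_{kj} + Gamma^i_{ks} Gamma^s_{lj} - Gamma^i_{ls} Gamma^s_{kj}
def Riem(i, j, k, l):
    return sp.simplify(sp.diff(Gamma[i][l][j], x[k]) - sp.diff(Gamma[i][k][j], x[l])
                       + sum(Gamma[i][k][s] * Gamma[s][l][j]
                             - Gamma[i][l][s] * Gamma[s][k][j] for s in range(n)))

# Lower the first index to get R_{1212} (0-based: R_{0101}), then apply K_g
R_0101 = sp.simplify(sum(g[0, s] * Riem(s, 1, 0, 1) for s in range(n)))
K = sp.simplify(R_0101 / (g[0, 0] * g[1, 1] - g[0, 1]**2))
assert sp.simplify(K - 1 / a**2) == 0   # constant positive curvature 1/a^2
```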
Title: Riemann curvature tensor. Canonical name: RiemannCurvatureTensor. Created: 2013-03-22. Author: juanman (12619). Type: Definition. MSC: 53B20, 53A55. Related: Curvature, Connection.
http://texblog.org/2007/08/27/number-sets-prime-natural-integer-rational-real-and-complex-in-latex/
|
1. Lasaro
In the last line, "\leq 0" should be replaced by "\geq 0".
2. Cesar
Thanks guy! That’s really help
3. Yep, this helped me ! Thanks!
4. mathguy
Shouldn’t the integers be Z not I?
• You were right, thanks for the comment. I changed it.
Cheers,
Tom
5. Sean
Wow, awesome! Exactly what I needed: thanks!
6. This will really be helpful in writing my algebra and geometry blog, thanks.
Regards,
Mr. Pi
7. Enrique Argones Rúa
Why don’t you choose the more traditional notation \mathds{R}, \mathds{N}, etc.?
For using this you would have to include the package dsfont.
Cheers,
Enrique
8. Joe
THANK YOU! I’ve been digging around for an hour now looking for that. I can be picky about my fonts.
9. Joe
..I should have been more specific: thanks Enrique! The mathbb font is pretty well known, but the font for the more traditional number systems is hard to find.
• Hi,
Not sure if a number set symbol is commonly used for binary numbers. But try the following with any letter:
\usepackage{amssymb}
...
$\mathbb{B}$
Best, Tom.
11. one
Nice, thank you
12. Chewett
Thanks, very useful
13. Senthil
Thanks. Really its help to me.
14. Heni
Hi, how do I write R^n in LaTeX? THANK YOU ^^
• Hey Heni,
Thanks for your question. Is this what you were looking for?
\documentclass[11pt]{article}
\usepackage{amssymb}
\begin{document}
$\mathbb{R}^n$
\end{document}
• Joan
Though a minor difference, $\mathbb{R}^n$ produces a BOLD n as the dimension of R. Is there any way to make this n slimmer?
Thanks a lot.
• tom
Hi Joan,
How you perceive it might depend on the font used. In Computer Modern, the n doesn’t look bold in my opinion. Here’s what a bold n would look like, as compared to the normal font style in math mode:
\documentclass[11pt]{article}
\usepackage{amsfonts, amsmath, graphicx}
\begin{document}
$\mathbb{R}^{\boldsymbol n} \text{ vs. } \mathbb{R}^{n}$
\end{document}
Maybe you want to change the font size to make the letter n slimmer, and smaller obviously?
Also, I’d be curious to learn what configuration you used that made the letter thicker than what you would expect.
Cheers, Tom
15. Courtney
Hi! In the last line, the set of positive reals should be strictly R_>0, not R_≥0, which represents the nonnegative reals. The difference is subtle, but important
• Fixed! Thanks very much Courtney! Best, Tom.
16. m
Thanks! I found this very useful
• tom
http://math.stackexchange.com/questions/98003/derivative-of-a-function-is-odd-prove-the-function-is-even
|
# Derivative of a function is odd prove the function is even.
$f:\mathbb{R} \rightarrow \mathbb{R}$ is such that $f'(x)$ exists $\forall x.$
And $f'(-x)=-f'(x)$
I would like to show $f(-x)=f(x)$
In other words, a function with an odd derivative is even.
If I could apply the fundamental theorem of calculus
$\int_{-x}^{x}f'(t)dt = f(x)-f(-x)$ but since the integrand is odd we have $f(x)-f(-x)=0 \Rightarrow f(x)=f(-x)$
but unfortunately I don't know that f' is integrable.
-
Let $g(x)=f(-x)$. Then $g'(x)=-f'(-x)=f'(x)$.
Since $g(0)=f(0)$ and $g'=f'$, it follows from the mean value theorem that $g=f$.
-
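The statement can also be sanity-checked symbolically for a concrete choice of odd $f'$ (an illustrative example, not a proof):

```python
import sympy as sp

x = sp.symbols('x')

# An arbitrary odd derivative (our choice, for illustration)
fprime = sp.sin(x) + x**3
assert sp.simplify(fprime.subs(x, -x) + fprime) == 0   # f' is odd

# Antidifferentiate; the constant of integration doesn't affect evenness
f = sp.integrate(fprime, x)

# f(-x) == f(x): the function is even
assert sp.simplify(f.subs(x, -x) - f) == 0
```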
f and g are both equal to $g'(c)x+g(0)$ – user9352 Jan 11 '12 at 0:54
@user9352: What is $c$? Note that $f$ could be an arbitrary even function, so your formula is incorrect if $c$ is a constant. If say $f(x)=x^2$, then your formula says $x^2=2cx$, so in that case $c=\frac{1}{2}x$? – Jonas Meyer Jan 11 '12 at 0:55
yeah i guess c depends on x i just got that from the mvt, trying to see how the result follows. c is the mysterious number between 0 and x, $f(x)-f(0)=f'(c)x$ – user9352 Jan 11 '12 at 2:23
@user9352: Antiderivatives of functions on $\mathbb R$ are unique (if they exist) up to an added constant. The easiest way to apply the MVT is to the function $h(x)=g(x)-f(x)$. I recommend contraposition (or contradiction): If $h(0)=0$ and $h(a)\neq 0$ for some $a$, then $h'(c)\neq 0$ for some $c$. – Jonas Meyer Jan 11 '12 at 2:28
• Define functions $f_0(x)=(f(x)+f(-x))/2$ and $f_1(x)=(f(x)-f(-x))/2$. Then $f_0$ and $f_1$ are also differentiable, and $f_0$ is even and $f_1$ is odd.
• Show that the derivative of an odd function is even, and that of an even function is odd.
• From the equality $f'=f_0'+f_1'$ conclude that $f_1$ is constant and, therefore, zero.
-
Don't you mean $f_0$ is even and $f_1$ is odd? – M Turgeon Jan 11 '12 at 1:40
Yeah, that. I even picked the indices in $\mathbb Z/2$ and all! – Mariano Suárez-Alvarez Jan 11 '12 at 2:36
http://discrete.prof.ninja/420/
|
April 20th
The ideas
• $$n^r$$ is the number of ways of putting $$r$$ distinct marbles into $$n$$ numbered boxes (as many as you would like per box). (Think: each marble has $$n$$ choices.)
• $$n! = n \cdot (n-1) \cdots 2 \cdot 1$$ is the number of ways of putting $$n$$ distinct marbles in $$n$$ numbered boxes (one per box). (Think: after the first marble has a box, the next marble has one less option.)
• $$P(n,r) = \frac{n!}{(n-r)!} = n \cdot (n-1) \cdots (n-r+1)$$ is the number of ways of selecting $$r$$ items from $$n$$ distinct options when order matters. As a marble problem: $$r$$ distinct marbles into $$n$$ boxes, one per box. (Think: first marble has $$n$$ choices, next $$n-1$$, etc.)
• $$\binom{n}{r} = \frac{n!}{(n-r)! r!}$$ is the number of ways of selecting $$r$$ items from $$n$$ distinct options when order does not matter. (The above divided by $$r!$$.) As a marble problem: $$r$$ identical marbles into $$n$$ boxes one per box. (Think: just select $$r$$ boxes out of $$n$$.)
• $$\binom{n+r-1}{n-1}$$ is the number of ways to put $$r$$ identical marbles into $$n$$ boxes. (Think of a string with $$r$$ a's and $$n-1$$ b's, it makes/is/maps to an allocation. e.g. aabaaabaaaa would put 2 marbles in box 1, 3 in box 2, 4 in box 3.)
• With many types of identical marbles: $$r_1$$ of type 1, $$r_2$$ of type 2, $$\ldots$$, $$r_k$$ of type k into $$n$$ boxes is $$\frac{n!}{r_1! \cdots r_k!}$$. This is the number of ways of rearranging the letters in a word (with repeated characters). (Think: normal permutations but divide by all of the double counting that happens when things you thought were different were the same.)
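Each of these counts can be verified by brute-force enumeration for small parameters. A quick Python sketch (the variable names and the example values $n=4$, $r=3$ are ours):

```python
from itertools import combinations, permutations, product
from math import comb, factorial, perm

n, r = 4, 3

# n^r: r distinct marbles into n boxes, any number per box
assert len(list(product(range(n), repeat=r))) == n**r == 64

# P(n, r): ordered selections (one marble per box)
injective = [p for p in product(range(n), repeat=r) if len(set(p)) == r]
assert len(injective) == perm(n, r) == factorial(n) // factorial(n - r) == 24

# C(n, r): unordered selections
assert len(list(combinations(range(n), r))) == comb(n, r) == 4

# Stars and bars: r identical marbles into n boxes
# = nonnegative integer solutions of x1 + ... + xn = r
solutions = [c for c in product(range(r + 1), repeat=n) if sum(c) == r]
assert len(solutions) == comb(n + r - 1, n - 1) == 20

# Multinomial: distinct rearrangements of a word with repeated letters
assert len(set(permutations('aabbb'))) == factorial(5) // (factorial(2) * factorial(3)) == 10
```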
The problems
1. Four cats and five mice enter a race. In how many ways can they finish with a mouse placing first, second, and third?
2. How many permutations of the letters a,b,c,d,e,f,g contain neither the string bge nor the string eaf?
3. How many numbers with seven distinct digits can be formed using only the digits 2-9?
4. How many different signals, each consisting of seven flags arranged in a column, can be formed from three identical red flags and four identical blue flags?
5. A group of eight scientists is composed of five mathematicians and three geologists.
• In how many ways can five people be chosen to visit a party? (You know, to conduct a study.)
• Suppose the five people chosen to visit the party must be comprised of three mathematicians and two geologists. (It's an upper end affair.) Now in how many ways can the group be chosen?
6. In how many ways can a team of six be chosen from 20 players so as to:
• Include both the strongest and weakest player?
• Include the strongest and exclude the weakest?
• Exclude both the strongest and weakest player?
7. Let $$k$$ and $$n$$ be natural numbers with $$k \lt n$$. Prove, using logic not formulas, that $$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$
8. In how many ways can 30 identical dolls be placed on seven different shelves?
9. A florist sells roses in five different colors.
• How many bunches of a half-dozen roses can be formed?
• How many bunches of a half-dozen can be formed if each bunch must contain at least one rose of each color?
10. In how many ways can 18 different books be given to Tara, Danny, Shannon, and Mike so that one person has 6 books, one has 2 books, the other two people have 5 books each?
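The identity in problem 7 can be spot-checked numerically before attempting the combinatorial proof the problem actually asks for (the check is illustrative only, not a substitute for the proof):

```python
from math import comb

# C(n, k) = C(n-1, k-1) + C(n-1, k) for natural numbers k < n
for n in range(2, 12):
    for k in range(1, n):
        assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)
```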
http://www.davidketcheson.info/2020/03/19/SIR_Estimating_parameters.html
|
# Modeling Coronavirus part II -- estimating parameters
Welcome back! In the first post of this series, we learned about the SIR model, which consists of three differential equations describing the rate of change of susceptible (S), infected (I), and recovered (R) populations:
\begin{align*} \frac{dS}{dt} & = -\beta I \frac{S}{N} \\ \frac{dI}{dt} & = \beta I \frac{S}{N}-\gamma I \\ \frac{dR}{dt} & = \gamma I \end{align*}
As we discussed, the model contains two key parameters ($$\beta$$ and $$\gamma$$) that influence the spread of a disease. In this second post on modeling the COVID-19 outbreak, we will take the existing data and use it to estimate the values of those parameters.
The parameters we want to estimate are:
• $$\beta$$: The average number of people that come in close contact with a given infected individual, per day
• $$\gamma$$: The reciprocal of the average duration of the disease (in days)
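Assuming SciPy is available, the three equations above can be integrated numerically along these lines (a minimal sketch; the parameter values are placeholders matching the estimates this post arrives at later):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, N):
    """Right-hand side of the SIR equations."""
    S, I, R = y
    dS = -beta * I * S / N
    dI = beta * I * S / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

N = 1_000_000                      # population size (placeholder)
beta, gamma = 0.25, 0.05           # the rough estimates derived below
sol = solve_ivp(sir_rhs, [0, 365], [N - 1, 1, 0],
                args=(beta, gamma, N), rtol=1e-8)

S, I, R = sol.y
assert np.allclose(S + I + R, N)   # the model conserves the population
print(f"peak infected: {I.max():,.0f} on day {sol.t[I.argmax()]:.0f}")
```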
## Estimating $$\gamma$$
A rough estimate of $$\gamma$$ is available directly from medical sources. Most cases of COVID-19 are mild and recovery occurs after about two weeks, which would give $$\gamma = 1/14 \approx 0.07$$. However, a smaller portion of cases are more severe and can last for several weeks, so $$\gamma$$ will be somewhat smaller than this value. Estimates I have seen put the value in the range $$0.03 - 0.06$$.
## Estimating $$\beta$$
It’s much more difficult to get a good estimate of $$\beta$$. To be clear, we are trying to estimate, for an infected individual, the average number of other individuals with whom they have close contact per day. Here close contact means contact that would lead to infection of the other individual (if that individual is still susceptible).
As we discussed earlier, this number is affected by many factors. It will also be affected by mitigation strategies implemented to reduce human contact. For now, we want to estimate the value of $$\beta$$ in the absence of mitigation. Later, we will try to take mitigation into account.
Recall that our equation for the number of infected is
$\frac{dI}{dt} = \left(\beta \frac{S}{N}-\gamma \right) I(t)$
Very early in an outbreak, the ratio $$S/N \approx 1$$ since hardly anyone has been infected. Also, at extremely early times, we can ignore $$\gamma$$ because the disease is so new that nobody has been sick for long enough to recover. For COVID-19, this is true for about the first two weeks of the disease's spread in a new population. During that time we have simply
$\frac{dI}{dt} = \beta I(t)$
This is one of the simplest differential equations, and its solution is just a growing exponential:
$I(t) = e^{\beta t} I(0).$
Here $$I(0)$$ is of course the number of initially infected individuals. Thus we can try to estimate $$\beta$$ by fitting an exponential curve to the initial two weeks of spread. This is not the only way to estimate $$\beta$$; using this approach is the first of several choices that we'll make, and those choices will influence our eventual predictions.
### Getting the data
Fortunately for us, comprehensive data on the spread of COVID-19 is available from this Github repository provided by the Johns Hopkins University Center for Systems Science and Engineering. Specifically, I’ll be using the data in this file. Note that the file gets updated daily; as I write it is March 17th.
I’m using Python and Pandas to work with the data. For this blog post, I have removed most of the computer code, but you can download the Jupyter notebook and play with the code and data yourself.
To estimate $$\beta$$, we just pick a particular country from this dataset, plot the number of cases over time, and fit an exponential function to it. We can use a standard mathematical tool called least squares fitting to find a reasonable value.
## Fitting the data from Italy
For instance, here is the data from Italy:
Since this data starts back in January, before the virus reached Italy, the number of cases at the beginning is zero. We can use the interval from day 30 to day 43 (inclusive) to try to fit $$\beta$$, since this seems to be when the outbreak began to take off. Here it must be emphasized that the choice of this particular interval is somewhat arbitrary; different choices will give somewhat different values for $$\beta$$.
```python
import numpy as np
from scipy import optimize

# `cases` is an array of cumulative case counts and `dd = np.arange(len(cases))`
# holds the corresponding day indices (both set up earlier in the notebook).
def exponential_fit(cases, start, length):
    def resid(beta):
        prediction = cases[start] * np.exp(beta * (dd - start))
        return prediction[start:start+length] - cases[start:start+length]
    soln = optimize.least_squares(resid, 0.2)
    beta = soln.x[0]
    print('Estimated value of beta: {:.3f}'.format(beta))
    return beta
```
Let’s see how well this value predicts the data:
```python
import matplotlib.pyplot as plt

def plot_fit(cases, start, end=56):
    length = end - start
    plt.plot(cases)
    beta = exponential_fit(cases, start, length)
    prediction = cases[start] * np.exp(beta * (dd - start))
    plt.plot(dd[start:start+length], prediction[start:start+length], '--k')
    plt.legend(['Data', 'fit'])
    plt.xlabel('Days'); plt.ylabel('Total cases')
    return beta
```
```python
beta = plot_fit(total_cases, start=35, end=49)
plt.title('Italy');
```
Estimated value of beta: 0.247
The fit seems reasonably good, over the interval we used. How well does it match if we plot the fit over the whole time interval?
```python
start = 35
plt.plot(total_cases)
dd = np.arange(len(days))
prediction = total_cases[start] * np.exp(beta * (dd - start))
plt.plot(dd[start:], prediction[start:], '--k')
plt.legend(['Data', 'fit'])
plt.xlabel('Days'); plt.ylabel('Total cases')
```
Clearly, the prediction is not accurate at later times. There are two main reasons for this:
• Our assumption of exponential growth was based on other assumptions that are only valid at the very start of the outbreak;
• Italian society has taken measures to combat the spread of the virus, effectively reducing $$\beta$$ at later times.
We can resolve the first issue by using the full SIR model (instead of just exponential growth) to make predictions. The second issue is more complicated; we will try to deal with it in a later blog post.
## Fitting to data from other regions
To have more confidence in our value of $$\beta$$, we can perform a similar fit with data from other regions, and see if we get a similar value. Next, let’s try fitting the data from the USA. Here’s the data:
Notice that we only have about 1 week of meaningful data. Let’s try to fit an exponential to it:
We get a fairly similar value for $$\beta$$. Furthermore, the fit using this value seems to be pretty good.
### Hubei Province, China
Let’s look at the data from where it all started: Hubei province, China. Here it makes sense to start the fit from day zero of the JHU data set.
Each of these countries seems to fit the model reasonably well and to give a more or less similar value of $$\beta$$, in the range $$0.22$$ to $$0.29$$. It would be wrong to feel completely confident about this value, or to try to extrapolate too much from such a short time interval of data, but the consistency of these results does seem to suggest that our estimate is meaningful.
Now let’s look at some countries that don’t fit this pattern.
### Iran and South Korea
Here is the number of confirmed cases for Iran:
And here is Korea:
At a glance we can see that this data doesn’t follow the pattern of the previous countries. In Iran, after the first week, the growth seems to be linear. In Korea, the initial exponential growth eventually slows down drastically and is beginning to level off. This tells us that something we left out of our model must be at play.
In the case of Korea, it seems straightforward to understand what is going on. Korea has deployed the most extensive COVID-19 testing system in the world, with over 270,000 people tested to date. This is combined with an extensive effort to isolate infected people and those they have been in recent contact with. Essentially, South Korea has reduced the value of $$\beta$$. Based on our earlier analysis, to prevent future exponential growth, they will need to keep $$\beta$$ down to approximately the value of $$\gamma$$ or less. If we believe that $$\gamma \approx 0.05$$ and $$\beta \approx 0.25$$, this means reducing the amount of human contact by infected people by five times.
Iran’s case is at first more puzzling, since the testing and quarantine measures there have not been exceptional compared to countries like Italy and Spain. Instead, there are strong suspicions that the official numbers from Iran are wildly inaccurate and the real number of cases (and deaths) is drastically higher than what is reported.
## Problems with our approach
Before we finish, it's important to understand the limitations of the data we're working with and the technique we have used. Most importantly, the numbers we have certainly do not represent the real number of infected individuals. That's because many infected individuals are never tested for the virus. This is especially true for diseases like COVID-19 in which the majority of cases are mild and do not require professional medical care. Estimates I have seen claim that only about 10-20% of all cases are detected.
If we assume that the fraction of cases that are actually detected is constant over time, then this discrepancy does not hinder our ability to estimate $$\beta$$, since dividing the initial and final number of infected by the same constant will lead to the same estimate of $$\beta$$ that would be obtained if we counted all the cases. However, it’s clear that in many places this factor changes over time as a country starts doing more and more testing. This would cause the number of reported cases to grow even faster than the real number. This is most likely occurring, for instance, in the US where previously many individuals with symptoms were not tested due to a lack of test availability.
Another issue is that in some cases governments may be intentionally hiding the true number of infections. As we have seen, this is likely the case in Iran.
Finally, mitigation strategies may already be in place and influencing the rate of spread in some countries, even in the early days of outbreak. This would lead to us underestimating the natural value of $$\beta$$.
# Conclusion
What we can take away from this analysis are the rough estimates for the SIR parameters:
$\gamma \approx 0.05, \qquad \beta \approx 0.25.$
Notice that the behavior in this initial phase of the epidemic that we have focused on is very similar to the simple behavior we considered at the start of the first post. There, the number of infected individuals doubled each day, but we knew that was unrealistic. Here, the number of infected individuals doubles every few days. How many days does it take for the number to double? If it takes $$m$$ days for the number of cases to double, then we have
$e^{\beta m} = 2$
so $$m = \log(2)/\beta$$ where $$\log$$ means the natural logarithm. For $$\beta=0.25$$, this gives a doubling time of about 2.8 days. This growth will slow down somewhat after the first couple of weeks for reasons we have already discussed.
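The doubling-time arithmetic is a one-liner to check, using the fitted values from this post:

```python
import math

# Doubling time m solves e^(beta*m) = 2 during the early exponential phase
beta = 0.25
m = math.log(2) / beta
```

For $$\beta = 0.25$$ this gives $$m \approx 2.77$$ days, matching the "about 2.8 days" above.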
It should be emphasized that the value of $$\beta$$ here is what we expect in the absence of mitigation strategies. In later posts, we’ll look at what these values mean for the future spread of the epidemic, and what the potential effect of mitigation may be.
In the Jupyter notebook for this post there is an interactive setup where you can make your own fits to the data from a variety of regions.
Click here to go to the next post, in which we use what we’ve found to predict the future.
## Tuesday, September 2, 2014
### Maximizing Chances in an Unfair Game
Q: You are about to play a game wherein you flip a biased coin. The coin falls heads with probability $$p$$ and tails with $$1 - p$$, where $$p \le \frac{1}{2}$$. You are forced to play by selecting heads, so the game is biased against you. For every toss you make, your opponent gets to toss too. The winner of this game is the one who wins the most tosses. You, however, get to choose the number of rounds that get played. Can you ever hope to win?
A: At first look, it might appear that the odds are stacked against you, since you are forced to play by choosing heads. You might think that your chances of winning decrease as you play more and more. But, surprisingly, there is a way to choose the optimal number of tosses (remember, you get to choose the number of times this game is played). To see how, let's crank out some numbers. If you get to toss the coin $$n$$ times, then the total number of coin tosses you and your opponent make is $$2n$$. If $$y$$ of the $$2n$$ tosses turn out heads, the probability of that outcome is
$$P(y \text{ heads}) = {2n \choose y} p^{y}(1 - p)^{2n - y}$$
In order to win, the value of $$y$$ should run from $$n + 1$$ to $$2n$$ and the overall probability works out to
$$P(\text{Win}) = \sum_{y = n + 1}^{2n}{2n \choose y} p^{y}(1 - p)^{2n - y}$$
We can work out the probability of winning by choosing various values of $$p$$ and $$n$$ and chart them out. Here is the R code that does it.
The code runs pretty quickly and uses the data.table package. All the processed data is contained in variables z and z1. They are plotted using the ggplot package to generate the following charts for the strategy.
The first chart shows the variation of the probability of winning by the number of games played for various probability bias values.
The next chart shows the optimal number of games to play for a given bias probability value.
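The original R code and charts are shown only as screen grabs; here is a Python sketch of the same sweep (the bias value $$p = 0.45$$ and the search range are illustrative):

```python
from math import comb

def p_win(p, n):
    """P(more than n heads among all 2n tosses) for a coin of bias p."""
    return sum(comb(2 * n, y) * p ** y * (1 - p) ** (2 * n - y)
               for y in range(n + 1, 2 * n + 1))

# Sweep the number of rounds for an illustrative bias of p = 0.45
best_n = max(range(1, 100), key=lambda n: p_win(0.45, n))
```

For $$p = 0.45$$ the sweep picks $$n = 5$$ rounds, with a win probability of about 0.26; for larger $$n$$ the probability falls away as the law of large numbers takes over.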
## Tuesday, August 12, 2014
### The Best Books for Monte Carlo Methods
The following are some of the best books to own to learn Monte Carlo methods for sampling and estimation problems
Monte Carlo Methods in Statistics (Springer)
This is a good book, discussing Bayesian methods from both a practical and a theoretical point of view, with the integrations worked through. The explanations are fairly comprehensive, and there is a fair number of examples in the text. Overall, this is an excellent book to own if you want to understand Monte Carlo sampling methods and algorithms at an intermediate to graduate level.
Explorations in Monte Carlo Methods (Undergraduate Texts in Mathematics)
This is a good book to get you started on Monte Carlo methods. It starts with fairly simple and basic examples and illustrations, and the mathematics used is also fairly basic. Buy this if you are at an undergraduate level and want to get into using Monte Carlo methods but have only a basic knowledge of statistics and probability.
Monte Carlo Methods in Financial Engineering (Stochastic Modelling and Applied Probability) (v. 53)
Another excellent book to own if you are curious to learn about the methods of Monte Carlo in the finance industry. Some really nice areas covered in the book include variance reduction techniques, diffusion equations, change point detection, and option pricing methods. Ideal for students of financial engineering or those wanting to break into it. The book tends to overrate MC methods (well, it's a book on MC!).
Monte Carlo Simulation and Resampling Methods for Social Science
This book gives a good introduction and goes over some basic probability theory, statistics and distributions before it hops on to the Monte Carlo methods. This makes it a good introductory book for sampling methods. Recommended for undergraduates with minimal statistical background.
Simulation and Monte Carlo Method
An excellent book to own at the intermediate to graduate level. The text provides a good course in simulation and Monte Carlo methods. Some interesting topics covered in the text include rare-event simulation. The book assumes you have a background in statistics and probability theory.
## Sunday, July 6, 2014
### Embarrassing Questions, German Tanks and Estimations
Q: You are conducting a survey and want to ask an embarrassing yes/no question to subjects. The subjects wouldn't answer that embarrassing question honestly unless they are guaranteed complete anonymity. How would you conduct the survey?
A: One way to do this is to assign a fair coin to the subject and ask them to toss it in private. If it came out heads then answer the question truthfully else toss the coin a second time and record the result (heads = yes, tails = no). With some simple algebra you can estimate the proportion of users who have answered the question with a yes.
Assume the total population surveyed is $$X$$, and let $$Y$$ subjects answer with a "yes". Let $$p$$ be the sought-after proportion. The tree diagram below shows the user flow.
The total expected number of "yes" responses can be estimated as
$$\frac{pX}{2} + \frac{X}{4} = Y$$
which on simplification yields
$$p = \big(\frac{4Y}{X} - 1\big)\frac{1}{2}$$
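A quick simulation confirms the estimator; the population size and true proportion below are made-up numbers for illustration:

```python
import random

def estimate_yes_fraction(X, true_p, seed=0):
    """Run the two-coin randomized-response protocol on X subjects
    and recover p via (4Y/X - 1)/2."""
    rng = random.Random(seed)
    Y = 0
    for _ in range(X):
        if rng.random() < 0.5:     # first toss heads: answer truthfully
            Y += rng.random() < true_p
        else:                      # tails: answer with a second coin toss
            Y += rng.random() < 0.5
    return (4 * Y / X - 1) / 2
```

With a large enough sample the recovered proportion lands close to the true one, while no individual answer reveals anything.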
Q: A bag contains an unknown number of tiles numbered in serial order $$1,2,3,...,n$$. You draw $$k$$ tiles from the bag without replacement and find the maximum number etched on them to be $$m$$. What is your estimate of the number of tiles in the bag?
A: This and problems like it are known as the German tank problem. During WW-II, German tanks were numbered in sequential order as they were manufactured. The Allied forces needed an estimate of how many tanks had been deployed, and all they had were a handful of captured tanks and the serial numbers painted on them. Using these, statisticians estimated the actual number of tanks to be far lower than what intelligence estimates suggested. So how does it work?
Let us assume we draw a sample of size $$k$$ and the maximum in that sample is $$m$$. If the population maximum is $$N$$, the probability that the sample maximum equals $$m$$ is
$$P(\text{Sample Max} = m) = \frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$
The $$-1$$ appears because the maximum is already accounted for, leaving $$m - 1$$ tiles to choose the remaining $$k - 1$$ from. The expected value of the sample maximum is thus
$$E(\text{Maximum}) = \sum_{m=k}^{N} m\,\frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$
Note, we run the above summation from $$k$$ to $$N$$ since the sample maximum must be at least $$k$$, so the probability is $$0$$ for $$m < k$$. After a series of algebraic manipulations ( ref ) the above simplifies to
$$E(\text{Maximum}) = \frac{k(N+1)}{k+1}$$
Equating this to the observed maximum $$m$$ and solving for $$N$$ gives the estimate
$$\hat{N} = m\big( 1 + \frac{1}{k}\big) - 1$$
which is quite an ingenious and simple way to estimate a population size given serial number ordering.
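The estimator is a one-liner, and a small simulation shows it is (nearly) unbiased; the fleet size $$N = 300$$ and sample size below are made-up numbers:

```python
import random

def tank_estimate(sample):
    """Population-size estimate from a without-replacement sample: m(1 + 1/k) - 1."""
    m, k = max(sample), len(sample)
    return m * (1 + 1 / k) - 1

# Average the estimate over many simulated "captures" from a fleet of N tanks
rng = random.Random(1)
N, k = 300, 10
avg = sum(tank_estimate(rng.sample(range(1, N + 1), k))
          for _ in range(4000)) / 4000
```

For example, a sample with maximum 60 and $$k = 4$$ gives the estimate $$60 \times 1.25 - 1 = 74$$.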
## Sunday, June 1, 2014
### The Chakravala Algorithm in R
A class of problems that has piqued the interest of mathematicians across millennia is Diophantine equations: polynomial equations in several variables for which integer solutions are sought. A special case of Diophantine equations is Pell's equation. The name is a bit of a misnomer, as Euler mistakenly attributed it to the mathematician John Pell. The problem seeks integer solutions to the polynomial
$$x^{2} - Dy^{2} = 1$$
Several ancient mathematicians attempted to study and find general solutions to Pell's equation. The best known algorithm is the Chakravala algorithm, discovered by Bhaskara circa 1114 AD. Bhaskara implicitly credits Brahmagupta (circa 598 AD) for its initial discovery, though some credit it to Jayadeva too. Several Sanskrit words used to describe the algorithm appear to have changed in the 500 years between the two, implying other contributors. The Chakravala technique is simple, and implementing it in any programming language should be a breeze (credit citation)
The method works as follows. Find a trivial solution to the equation; $$x=1, y=0$$ works every time. Next, initialize a pair of parameters $$[p_i,k_i]$$, where $$i$$ is an iteration count. $$p_i$$ is updated to $$p_{i+1}$$ so that the following two criteria are satisfied:
• $$(p_i + p_{i+1}) \bmod k_{i} = 0$$, i.e. $$p_i + p_{i+1}$$ is divisible by $$k_i$$
• $$| p_{i+1}^{2} - d |$$ is minimized
After updating $$p_{i+1}$$, the value of $$k_{i+1}$$ is found by evaluating
$$k_{i+1} = \frac{p_{i+1}^{2} - d}{k_{i}}$$
and the next pair of values for $$[x,y]$$ is computed as
$$x_{i+1} = \frac{p_{i+1}x_i + dy_{i}}{|k_{i}|}, \qquad y_{i+1} = \frac{p_{i+1}y_i + x_{i}}{|k_{i}|}$$
The algorithm also has an easy way to check whether the current pair is a solution: $$[x_i, y_i]$$ solves the equation exactly when $$k_{i} = 1$$.
A screen grab of the entire algorithm done in R is shown below.
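The R implementation exists only as a screen grab, so here is a Python sketch of the iteration described above. It seeds the method from a value of $$x$$ near $$\sqrt{d}$$ (rather than the trivial $$x=1, y=0$$), which is the usual way to start Chakravala:

```python
import math

def chakravala(d):
    """Fundamental solution of Pell's equation x^2 - d*y^2 = 1 (d not a square)."""
    r = math.isqrt(d)
    if r * r == d:
        raise ValueError("d must not be a perfect square")
    # start from the triple (a, b, k) with a nearest sqrt(d), b = 1
    a = r if (d - r * r) <= ((r + 1) ** 2 - d) else r + 1
    b, k = 1, a * a - d
    while k != 1:
        ak = abs(k)
        # residue class: p must satisfy (a + b*p) % |k| == 0
        p = next(t for t in range(ak) if (a + b * t) % ak == 0)
        # move p to the class member nearest sqrt(d), minimising |p^2 - d|
        p += ((r - p) // ak) * ak
        if abs((p + ak) ** 2 - d) < abs(p * p - d):
            p += ak
        a, b, k = (a * p + d * b) // ak, (a + b * p) // ak, (p * p - d) // k
    return a, b
```

For $$d = 61$$, a famously hard case, this returns the fundamental solution $$x = 1766319049$$, $$y = 226153980$$.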
A Related Puzzle:
A drawer has $$x$$ black socks and $$y$$ white socks. You draw two socks consecutively and they are both black. You repeat this several times (replacing the socks each time) and find that you get a pair of blacks with probability $$\frac{1}{2}$$. You know that there are no more than 30 socks in the drawer in total. How many black and white socks are there?
The probability that you would draw two black socks in a row is
$$P = \frac{x}{x+y}\times\frac{x - 1}{x+y - 1} = \frac{1}{2}$$
Simplifying and solving for $$x$$ yields
$$x^{2} - (2y + 1)x + y - y^2 = 0$$
which on further simplification gives
$$x = \frac{2y + 1 \pm \sqrt{(2y+1)^2 +4(y^2 - y)}}{2}$$
We can ignore the root with the negative sign as it would yield a negative value for $$x$$ which is impossible. The positive root of the quadratic equation yields
$$x = \frac{2y + 1 + \sqrt{8y^2 + 1}}{2}$$
For $$x$$ to be an integer, the term $$\sqrt{8y^2 + 1}$$ has to be an odd integer, say $$z$$. We can now write it out as
$$z = \sqrt{8y^2 + 1}$$
or
$$z^{2} - 8y^2 = 1$$
This is Pell's equation (or Vargaprakriti in Sanskrit).
As we know that there are no more than 30 socks in the drawer, we can quickly work our way to the two admissible solutions $$\{x,y\} = \{3,1\}$$ and $$\{15,6\}$$.
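Since the total is bounded by 30, the condition $$P = \frac{1}{2}$$, i.e. $$2x(x-1) = (x+y)(x+y-1)$$, can also be brute-forced directly:

```python
# Brute-force check of the sock puzzle: find (black, white) counts with at
# most 30 socks in total for which P(two blacks in a row) = 1/2.
solutions = [(x, y) for x in range(2, 31) for y in range(0, 29)
             if x + y <= 30 and 2 * x * (x - 1) == (x + y) * (x + y - 1)]
```

The search recovers exactly the two pairs (3, 1) and (15, 6).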
## Wednesday, May 7, 2014
### Hopping Robots and Reinforcement Learning
All too often when we deal with data, the outcome needed is a strategy or an algorithm itself. To arrive at that strategy we may have historic data, or some model of how entities in a system respond to various situations. In this write-up, I'll go over the method of reinforcement learning. The general idea behind reinforcement learning is to come up with a strategy that maximizes some measurable goal. For example, if you are modelling a robot that learns to navigate around obstacles, you want the learning process to come back with a strategy that minimizes collisions (say) with other entities in the environment.
For the sake of simplicity, let's assume the following scenario. A robot is placed (at random) on a flat plank of wood which has some sticky glue in the center. To its left there is a hole which damages the robot a bit, and to its right is a reward, its destination, as shown in the figure below
The robot has the following capabilities
• It can sense what state $${S_1,S_2,...,S_7}$$ it is in
• It can detect a reward or damage that happens to it while in a particular state
• It can move one space left or right
• It can hop a space on to the right
If the robot ends up in either of the terminal states $$S_1$$ or $$S_7$$, it is taken and placed in another state at random. State $$S_1$$ damages the robot while state $$S_7$$ rewards it. $$S_4$$ is not exactly a damage-causing state, but it is a state to avoid, and we want the robot to learn that over time. All other states are neither harmful nor rewarding. The robot's goal is to "learn" to get to state $$S_7$$ in as few steps as possible.
In order to get reinforcement learning to work, you need to know what the reward values are for each of the states the robot can be in. In this particular example, we will assume the following reward structure for each of the states.
Note, the numbers are fairly arbitrary. In addition to this we need a function or a table, mapping out actions/states pairs leading to new states. Given the robot's movement description above, we can use a table as follows
Zero is being used to designate the scenario when the robot is reset to a random state. With the above two sets of information we are good to start the learning process.
Reinforcement learning works in the following way. You maintain a matrix of values for each state/action pair. This is the reward that can be attained by arriving at a particular state by taking a particular action. But the trick is to also account for what possible future state you can get to, given that you arrive at a given state. For example it may be beneficial to be in a certain state $$X$$, but all downstream states from $$X$$ may be bad to be in. You want to avoid such states. This way, the algorithm tries to ensure that future favourable and achievable states are taken into account. The algorithm does not immediately update itself based on whatever it learns, it preserves old values and learns gradually. The above methodology can be stated as follows.
If $$Q^{*}(s_t,a_t)$$ represents the pay-off received by taking action $$a_t$$ in state $$s_t$$, then
$$Q^{*}(s_t,a_t) = R(s_{t+1},s_t) + \alpha \max_{a_{t+1}}Q^{*}(s_{t+1},a_{t+1})$$
To slow down learning a bit, we retain a fraction $$\beta$$ of whatever prior estimate of $$Q^{*}(s_t,a_t)$$ we have, as shown below
$$Q^{*}_{new}(s_t,a_t) = \beta Q^{*}_{prior}(s_t,a_t) + (1 - \beta)Q^{*}(s_t,a_t)$$
That's it! We now let the system take on various initial states, and let the device play around moving over to different states while we constantly update our $$Q$$ matrix. After several iterations, the $$Q$$ matrix will end with some values which reflect what strategy to take.
To give a swirl, here is an R code that walks through the entire process for this particular example
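The R code appears only as a screen grab, so below is a Python sketch of the same loop. The reward numbers are assumptions (the post's reward table is also an image): the hole at state 1 costs -10, the sticky glue at state 4 costs -5, the goal at state 7 pays +10, and moving backwards costs an extra -1, as described in the note that follows.

```python
import random

ACTIONS = ("left", "right", "hop")

def step(s, a):
    """One move on the 7-state plank: left/right shift one space, hop skips one."""
    if a == "left":
        return max(1, s - 1)
    if a == "right":
        return min(7, s + 1)
    return min(7, s + 2)  # hop

def reward(s, s2):
    """Assumed rewards: hole at 1 (-10), glue at 4 (-5), goal at 7 (+10)."""
    r = {1: -10, 4: -5, 7: 10}.get(s2, 0)
    return r - 1 if s2 < s else r  # small penalty for moving backwards

def train(episodes=3000, alpha=0.8, beta=0.5, seed=0):
    """Tabular Q-learning with the blended update rule from the post."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, 8) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randint(2, 6)        # reset to a random interior state
        while s not in (1, 7):       # terminal states end the episode
            a = rng.choice(ACTIONS)  # explore uniformly
            s2 = step(s, a)
            target = reward(s, s2) + alpha * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] = beta * Q[(s, a)] + (1 - beta) * target
            s = s2
    return Q
```

After training, the learned table prefers hopping over state 4 and moving right near the goal, mirroring the strategy read off the screen grab.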
A thing to note in the code is how the reward function is encoded. There is a penalty imposed on moving from a higher state to a lower state. This is a simple way to ensure that whatever strategy the learner comes up with does not involve giving back the rightward positional gains that have been made. If you try it without the penalty, you will see cases where the strategy does not care about going left or right in some states. The above code, when run, creates the following output (screen grab below)
The rows represent the states, the columns represent the three possible actions move-left, move-right and hop-right that the robot can take. Notice what it's trying to say:
• State 1, do nothing
• State 2, move right, but don't hop
• State 3, don't move right, hop
• State 4, move right or hop
• State 5, move right (small chance), hop (definitely)
• State 6, move right, move left (small chance)
This is exactly what you would expect and want! More on this subject can be read from the following books
Reinforcement Learning: State-of-the-Art (Adaptation, Learning, and Optimization)
and
Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning)
## Thursday, April 17, 2014
### The Two Strategies
Q: You are in a game where you get to toss a pair of coins once. There are two boxes (A & B), each holding a pair. Box A's coins are fair; box B's coins are biased, with probabilities of heads $$0.6$$ and $$0.4$$ respectively. You are paid according to the expected number of heads you toss. Which box should you pick?
A: The expected number of heads if you chose box A is easy to calculate as
$$E(\text{heads}| A) = \frac{1}{2} + \frac{1}{2} = 1$$
However the expected number of heads if you chose box B is also the same
$$E(\text{heads}| B) = \frac{4}{10} + \frac{6}{10} = 1$$
The equal average yields could make one think that both boxes are equivalent. However, there is one difference: the variance. The variance of the distribution of a random variable $$X$$ is defined as
$$Var(X) = \sum_{i=0}^{N} (x_i - \bar{x})^{2}p_i$$
where $$p_i$$ is the probability of $$x_i$$. Here $$X$$ is the total number of heads: for box A, $$P(X=0)=P(X=2)=\tfrac{1}{4}$$ and $$P(X=1)=\tfrac{1}{2}$$; for box B, $$P(X=0)=P(X=2)=0.24$$ and $$P(X=1)=0.52$$. Let's compute the variance of each strategy
$$Var(X | \text{Box=A}) = (0 - 1)^2 \tfrac{1}{4} + (1 - 1)^2 \tfrac{1}{2} + (2 - 1)^2 \tfrac{1}{4} = \tfrac{1}{2} = 0.5 \\ Var(X | \text{Box=B}) = (0 - 1)^2 (0.24) + (1 - 1)^2 (0.52) + (2 - 1)^2 (0.24) = 0.48$$
The variance for box B is slightly tighter than for box A (0.48 versus 0.50), which makes the coins in box B the better choice: the same expected pay-off, delivered more predictably.
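Treating the pay-off $$X$$ as the total number of heads from two independent tosses, the two boxes have the same mean of 1 head, with variances 0.5 (box A) and 0.48 (box B); either way, box B's distribution is tighter. A quick check:

```python
from itertools import product

def head_stats(p1, p2):
    """Mean and variance of total heads from two independent biased tosses."""
    dist = {0: 0.0, 1: 0.0, 2: 0.0}
    for h1, h2 in product((0, 1), repeat=2):
        pr = (p1 if h1 else 1 - p1) * (p2 if h2 else 1 - p2)
        dist[h1 + h2] += pr
    mean = sum(k * v for k, v in dist.items())
    var = sum((k - mean) ** 2 * v for k, v in dist.items())
    return mean, var
```

`head_stats(0.5, 0.5)` and `head_stats(0.6, 0.4)` both give a mean of 1, with the biased pair showing the smaller variance.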
## Sunday, March 23, 2014
### Linear Regression, Transforms and Regularization
This write-up is about simple linear regression and ways to make it robust to outliers and non-linearity. Linear regression is a simple but powerful method: it compresses a lot of information into a single straight line, vastly simplifying the problem. Being so simple, however, comes with limitations. For example, the method assumes that after a fit is made, the differences between the predicted and actual values are normally distributed. In reality, we rarely run into such ideal conditions. Almost always there is non-normality and there are outliers in the data that make fitting a straight line insufficient. However, there are some tricks you can use to do better.
As an example data set, consider some dummy data shown in the table/chart below. Notice that the value 33 is an outlier. When charted, you can see there is some non-linearity in the data too, for higher values of $$x$$.
First let's tackle the non-linearity. It can be managed by applying a transformation to the y-values using the Box-Cox transform, a class of power transformations. It is a useful transform for bringing about normality in time series values that look "curvy". The transform is
$$\hat{y} = \frac{y^{\lambda} - 1}{\lambda}$$
The value of $$\lambda$$ needs to be chosen so as to maximize the log likelihood, making the transformed series look as much as possible like it came from a normal distribution. Most statistical tools do this for you. In R, the function "boxcox" in the package "MASS" does it; the following code snippet computes the optimal value of $$\lambda$$ as -0.30303.
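The R snippet itself is only a screen grab; here is a pure-Python sketch of the same machinery (MASS::boxcox maximizes this profile log-likelihood over a grid). The data used in the test is made up, since the post's data sits behind a chart:

```python
import math

def boxcox(y, lam):
    """Box-Cox power transform of positive data; lam = 0 means log."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood of lam under a normal model (up to a constant)."""
    z = boxcox(y, lam)
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

def best_lambda(y):
    """Grid-search the lam that maximizes the profile log-likelihood."""
    grid = [i / 100 for i in range(-200, 201)]
    return max(grid, key=lambda l: profile_loglik(y, l))
```

With the post's actual data, this search is what produces the -0.30303 value quoted above.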
Let's apply this transformation to the data and see how it looks on a chart.
You can see that it looks a lot better and more like a straight line, except for the outlier at $$x = 4$$. The goodness of a straight line fit is measured by the fit's $$R^2$$ value, which quantifies the amount of variance explained by the fit. Fitting a straight line through the original data gives $$R^2 = 0.06804$$, while the transformed data yields $$R^2 = 0.4636$$, demonstrating an improvement.

Next, let's try to manage the outlier. If you fit a straight line $$y = \alpha_0 + \alpha_{1} x$$ through a set of points containing a few outliers, you will notice that the values of $$\alpha_{0}, \alpha_{1}$$ tend to be slightly large: the fit is trying to "reach out" and accommodate the outlier. To minimize the effect of the outlier, we get back into the guts of the linear regression. A linear regression typically minimizes the overall error $$e$$ computed as
$$e = \sum_{i=1}^{N}\frac{(y_{actual} - \alpha_{0} - \alpha_{1}x)^2}{N}$$
where $$N$$ is the number of points. We can tweak the above equation to minimize as follows
$$e = \sum_{i=1}^{N}\frac{(y_{actual} - \alpha_{0} - \alpha_{1}x)^2}{N} + \lambda(\alpha_{0}^2 + \alpha_{1}^2)$$
The tweaked error equation forces choices of $$\alpha_{0}, \alpha_{1}$$ that cannot take on larger values. The optimal value of $$\lambda$$ needs to be ascertained by tuning; a closed-form solution does not exist, so we use another tool in R, the function "optim". This function performs basic optimization: it minimizes any function you pass it, along with required parameters, and returns the parameter values that achieve the minimum. Its actual usage isn't exactly intuitive. Most examples on the internet minimize a proper well-formed function, while most real-life applications involve minimizing functions with lots of parameters and additional data. The function "optim" accepts the "..." argument, which is a means to pass arguments through to the function you want to minimize. So here is how you would do it in R, all of it.
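The R code is shown as a screen grab; a Python stand-in for the optim call, minimising the penalised error by plain gradient descent on illustrative data, looks like this:

```python
def ridge_fit(xs, ys, lam, steps=20000, lr=0.001):
    """Fit y = a0 + a1*x by minimising MSE + lam*(a0^2 + a1^2)."""
    a0 = a1 = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of the penalised squared error w.r.t. a0 and a1
        g0 = sum(2 * (a0 + a1 * x - y) for x, y in zip(xs, ys)) / n + 2 * lam * a0
        g1 = sum(2 * (a0 + a1 * x - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * a1
        a0, a1 = a0 - lr * g0, a1 - lr * g1
    return a0, a1
```

With `lam = 0` this reproduces the ordinary least-squares fit; raising `lam` shrinks the coefficients toward zero, which is exactly the effect used to tame the outlier.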
The above code walks through this example by calling optim. It finally outputs the fits in original domain using all three methods
1. The grey line represents the fit if you simply used "lm"
2. The red line represents the fit if you transformed the data and used "lm" in the transformed domain but without regularization. Note: you are clearly worse off.
3. The blue dotted line shows the fit if you used transformation and regularization, clearly a much better fit
The function "optim" has lots of methods that it uses for finding the minimum value of the function. Typically, you may also want to poke around with the best value of $$\lambda$$ in the minimization function to get better fits.
# 26.4 Microscopes
## Take-home experiment: make a lens
Look through a clear glass or plastic bottle and describe what you see. Now fill the bottle with water and describe what you see. Use the water bottle as a lens to produce the image of a bright object and estimate the focal length of the water bottle lens. How is the focal length a function of the depth of water in the bottle?
## Section summary
• The microscope is a multiple-element system having more than a single lens or mirror.
• Many optical devices contain more than a single lens or mirror. These are analysed by considering each element sequentially. The image formed by the first is the object for the second, and so on. The same ray tracing and thin lens techniques apply to each lens element.
• The overall magnification of a multiple-element system is the product of the magnifications of its individual elements. For a two-element system with an objective and an eyepiece, this is
$m={m}_{\text{o}}{m}_{\text{e}}\text{,}$
where ${m}_{\text{o}}$ is the magnification of the objective and ${m}_{\text{e}}$ is the magnification of the eyepiece, such as for a microscope.
• Microscopes are instruments for allowing us to see detail we would not be able to see with the unaided eye and consist of a range of components.
• The eyepiece and objective contribute to the magnification. The numerical aperture $\left(\text{NA}\right)$ of an objective is given by
$\text{NA}=n\phantom{\rule{0.25em}{0ex}}\text{sin}\phantom{\rule{0.25em}{0ex}}\alpha$
where $n$ is the refractive index and $\alpha$ the angle of acceptance.
• Immersion techniques are often used to improve the light gathering ability of microscopes. The specimen is illuminated by transmitted, scattered or reflected light through a condenser.
• The $f/#$ describes the light gathering ability of a lens. It is given by
$f/#=\frac{f}{D}\approx \frac{1}{2\mathrm{NA}}.$
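A quick worked example of the last two formulas; the oil refractive index $$n = 1.51$$ is an assumed typical value, not taken from the text:

```python
import math

# Acceptance half-angle and f/# for an NA = 1.40 oil immersion objective
NA, n = 1.40, 1.51
alpha = math.degrees(math.asin(NA / n))  # acceptance angle in degrees
f_num = 1 / (2 * NA)                     # f/# from f/# = 1/(2 NA)
```

This gives an acceptance angle of about 68 degrees and an f/# of roughly 0.36, illustrating the very large light-gathering cone of immersion objectives.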
## Conceptual questions
Geometric optics describes the interaction of light with macroscopic objects. Why, then, is it correct to use geometric optics to analyse a microscope’s image?
The image produced by the microscope in [link] cannot be projected. Could extra lenses or mirrors project it? Explain.
Why not have the objective of a microscope form a case 2 image with a large magnification? (Hint: Consider the location of that image and the difficulty that would pose for using the eyepiece as a magnifier.)
What advantages do oil immersion objectives offer?
How does the $\text{NA}$ of a microscope compare with the $\text{NA}$ of an optical fiber?
## Problem exercises
A microscope with an overall magnification of 800 has an objective that magnifies by 200. (a) What is the magnification of the eyepiece? (b) If there are two other objectives that can be used, having magnifications of 100 and 400, what other total magnifications are possible?
(a) 4.00
(b) 400 and 1600
(a) What magnification is produced by a 0.150 cm focal length microscope objective that is 0.155 cm from the object being viewed? (b) What is the overall magnification if an $8×$ eyepiece (one that produces a magnification of 8.00) is used?
(a) Where does an object need to be placed relative to a microscope for its 0.500 cm focal length objective to produce a magnification of $–400$ ? (b) Where should the 5.00 cm focal length eyepiece be placed to produce a further fourfold (4.00) magnification?
(a) 0.501 cm
(b) Eyepiece should be 204 cm behind the objective lens.
You switch from a $1.40\text{NA}\phantom{\rule{0.25em}{0ex}}\text{60}×$ oil immersion objective to a $1.40\text{NA}\phantom{\rule{0.25em}{0ex}}\text{60}×$ oil immersion objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on your specimen?
An amoeba is 0.305 cm away from the 0.300 cm focal length objective lens of a microscope. (a) Where is the image formed by the objective lens? (b) What is this image’s magnification? (c) An eyepiece with a 2.00 cm focal length is placed 20.0 cm from the objective. Where is the final image? (d) What magnification is produced by the eyepiece? (e) What is the overall magnification? (See [link] .)
(a) +18.3 cm (on the eyepiece side of the objective lens)
(b) -60.0
(c) -11.3 cm (on the objective side of the eyepiece)
(d) +6.67
(e) -400
You are using a standard microscope with a $0.10\text{NA}\phantom{\rule{0.25em}{0ex}}\text{4}×$ objective and switch to a $0.65\text{NA}\phantom{\rule{0.25em}{0ex}}\text{40}×$ objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on of your specimen? (See [link] .)
Unreasonable Results
Your friends show you an image through a microscope. They tell you that the microscope has an objective with a 0.500 cm focal length and an eyepiece with a 5.00 cm focal length. The resulting overall magnification is 250,000. Are these viable values for a microscope?
Electric current is the flow of electrons
is there really flow of electrons exist?
babar
Yes It exists
Cffrrcvccgg
explain plz how electrons flow
babar
if electron flows from where first come and end the first one
babar
an electron will flow accross a conductor because or when it posseses kinectic energy
Cffrrcvccgg
electric means the flow heat current.
electric means the flow of heat current in a circuit.
Serah
What is electric
electric means?
ghulam
electric means the flow of heat current in a circuit.
Serah
a boy cycles continuously through a distance of 1.0km in 5minutes. calculate his average speed in ms-1(meter per second). how do I solve this
speed = distance/time be sure to convert the km to m and minutes to seconds check my utube video "mathwithmrv speed"
PhysicswithMrV
why we cannot use DC instead of AC in a transformer
because d.c. cannot be transmitted over long distances
ghulam
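The underlying reason is Faraday's law: a transformer couples its windings through a *changing* magnetic flux, and steady DC gives dΦ/dt = 0 in the secondary. A rough numeric sketch (the turn count and flux values are made up for illustration):

```python
import math

def emf(N, phi, t, dt=1e-6):
    """Induced emf = -N * dPhi/dt, estimated by a central finite difference."""
    return -N * (phi(t + dt) - phi(t - dt)) / (2 * dt)

N = 100                                                      # turns (illustrative)
ac_flux = lambda t: 0.01 * math.sin(2 * math.pi * 50 * t)    # 50 Hz alternating flux, Wb
dc_flux = lambda t: 0.01                                     # steady flux from DC, Wb

print(abs(emf(N, ac_flux, t=0)) > 1)      # True: changing flux induces a voltage
print(emf(N, dc_flux, t=0) == 0)          # True: constant flux induces nothing
```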
what is physics
the branch of science which deals with matter, energy, and the relationship between them
ghulam
Life science
the
what is heat and temperature
how does sound affect temperature
the speed of sound is directly proportional to the square root of the absolute temperature.
juny
how to solve wave question
I would like to know how. I am not at all smart when it comes to math. please explain so I can understand. sincerely
Emma
Just know the relationship between 1) wavelength, 2) frequency, and velocity
Talhatu
First of all, you are smart and you will get it👍🏽... v = f × wavelength see my youtube channel: "mathwithmrv" if you want to know how to rearrange equations using the balance method
PhysicswithMrV
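The v = f × wavelength relation in code, with sample numbers of my own:

```python
# v = f * wavelength, so wavelength = v / f and f = v / wavelength
v = 340.0                # speed of sound in air, m/s (approximate)
f = 170.0                # frequency, Hz
wavelength = v / f
print(wavelength)        # 2.0 m
```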
nice self promotion though xD
Beatrax
thanks dear
Chuks
hi pls help me with this question: A ball is projected vertically upwards from the top of a tower 60 m high with a velocity of 30 m s^-1. What is the maximum height above the ground level? How long does it take to reach the ground level?
mahmoud
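A sketch of how this projectile question can be worked, assuming g = 9.8 m/s² and taking upwards as positive:

```python
import math

# Ball thrown straight up at u = 30 m/s from a 60 m tower; g = 9.8 m/s^2.
u, h0, g = 30.0, 60.0, 9.8

# Maximum height above the ground: tower height + u^2 / (2g)
h_max = h0 + u**2 / (2 * g)                  # ~ 105.9 m

# Time to hit the ground: solve -h0 = u*t - (1/2)*g*t^2 for the positive root,
# i.e. (g/2)*t^2 - u*t - h0 = 0
t = (u + math.sqrt(u**2 + 2 * g * h0)) / g   # ~ 7.71 s

print(round(h_max, 1), round(t, 2))
```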
please guys help, what is the difference between concave lens and convex lens
a convex lens brings rays of light to a focus while a concave lens diverges rays of light
Christian
for mmHg to kPa yes
Matthew
it depends on the size
Vincent
a lens which diverges rays of light
rinzuala
concave diverges light
Matthew
thank you guys
Vincent
A diverging lens
Yusuf
What is an isotope?
Yusuf
each of two or more forms of the same element that contain equal numbers of protons but different numbers of neutrons in their nuclei, and hence differ in relative atomic mass but not in chemical properties; in particular, a radioactive form of an element. "Some elements have only one stable isotope."
Karthi
what are wire-wound resistors?
What are the best colleges to go to for physics
I would like to know this too
Trevor
How do I calculate uncertainty in a frequency?
Calculate . ..
Olufunsho
What is light wave
What is wave
Sakeenah
What is light
Sakeenah
okay
True
explain how neurons communicate feed and stimulate
Jeff
Great science students
Omo
A wave is a disturbance which travels through a medium, transferring energy from one point to another without causing any permanent displacement of the medium itself
OGOR
Light is a form of wave
OGOR
Neurons communicate by sending message through nerves in coordination
OGOR
What are petrochemicals, give two examples
OGOR
light has dual nature, particle as well as wave. when we want to explain phenomena like Interference of light, then we consider light as wave.
Lalita
what is it as in the form of it or how to visualize it or what it contains
Matthew
particles of light are like small packets of energy called photons, and flow or motion of photons is wave like
Lalita
light is just the energy of which photons emit
Matthew
the wave is how they travel
Matthew
photons do not emitt energy, they are energy. They are massless particles.
Lalita
a wave is a disturbance through the medium. Have you ever thrown a stone in still water? the disturbance produced travels in the form of a wave, and the waves produced by throwing a stone in still water are circular in nature.
Lalita
a photon does contain mass when in motion. it doesnt contain mass when at rest
Matthew
when would it ever be at rest
Bob
a wave is a disturbance of which energy travels
Matthew
that's darkness. darkness has no mass because the photons within it aren't moving or producing energy
Matthew
Hi guys. Please I've been trying to understand the concept of SHM, but it's not been really easy, could someone please explain it to me or suggest a site I could visit? Thank you.
Odo
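The core idea of SHM: the restoring force is proportional to displacement (F = −kx), which gives x(t) = A cos(ωt + φ) with ω = √(k/m). A minimal Python sketch (all numbers are illustrative, not from any particular problem):

```python
import math

# Simple harmonic motion: x(t) = A*cos(w*t), with w = sqrt(k/m)
m, k, A = 0.5, 200.0, 0.1        # kg, N/m, m (illustrative values)
w = math.sqrt(k / m)             # angular frequency, rad/s
T = 2 * math.pi / w              # period, s

x = lambda t: A * math.cos(w * t)
print(round(T, 3))                            # period in seconds
print(round(x(0), 3), round(x(T / 2), 3))     # starts at +A, reaches -A half a period later
```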
Matthew
the effective mass of photons only comes into the picture when we consider them accelerating in a gravitational field; the mass of a photon has no meaning as it is always travelling at the speed of light and is never at rest. with that high speed, energy and momentum are equivalent. and darkness is the absence of photons.
Lalita
darkness is the absence of light, not the presence of 'resting photons'. photons are never at rest.
Lalita
photons are present in darkness but don't give off any light because they are stationary with no mass or energy. once a force makes them move again they will gain mass and give off light
Matthew
this theory is presented in Einsteins theory of special relativity
Matthew
A. The velocity Vo for the streamline flow of liquid in a small tube depends on the radius r of the tube, the density (rho) and the viscosity (eta) of the liquid. Use the method of dimensional analysis to obtain an expression for the velocity. B. Given that Vo = r^2 × p all over 4 × eta × l
True
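For part A, a standard dimensional-analysis sketch (k is an undetermined dimensionless constant; note that the listed variables r, ρ, η cannot by themselves produce the pressure-dependent formula quoted in part B, which needs the pressure gradient p/l as an extra input):

```latex
\text{Assume } v_0 = k\, r^{a} \rho^{b} \eta^{c}, \qquad
[v_0] = LT^{-1},\quad [r] = L,\quad [\rho] = ML^{-3},\quad [\eta] = ML^{-1}T^{-1}.

LT^{-1} = L^{a}\,(ML^{-3})^{b}\,(ML^{-1}T^{-1})^{c}

M:\; b + c = 0, \qquad T:\; -c = -1, \qquad L:\; a - 3b - c = 1

\Rightarrow\; c = 1,\quad b = -1,\quad a = -1, \qquad
v_0 = \frac{k\,\eta}{\rho\, r}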
Matthew, photons ARE light. there is no such thing as a photon that isn't moving. in fact the speed they move at is called c (for constant) in physics. through a vacuum they always travel at this speed no matter what. they cannot slow down, except in another medium.
The reason why a photon can go at this speed is BECAUSE it has no mass. nothing can go this speed or faster because it would need to have no mass or negative mass. that's why it's called the constant.
when a photon hits something that is opaque, this is the only way to "stop" it. it isn't merely stopped but absorbed and turned into heat energy, then the remaining energy is reflected in different wavelengths. that reflection is what we call color. the darker something is, the fewer photons there
are. complete blackness is the absolute absence of photons altogether. I believe what you're referring to is not speed, but wavelength, which is inversely proportional to the amount of energy a particular photon is made up of.
in order for a photon to have zero wavelength, it would (at least theoretically) have to have infinite energy.
about mass: you may have photons confused with electrons. electrons have a mass so small that people say they are without mass, but they do have one. it is called the electron mass, or m_e.
you may also be getting electrons and photons confused because of the Cherenkov effect. that is what happens when a particle travels faster than light IN THAT PARTICULAR MEDIUM. I emphasize that because no other particle besides photons can go the speed of c.
when a particle goes faster than light in a particular medium, a blue light is emitted, called Cherenkov radiation. this is why nuclear reactors glow blue.
nuclear reactors release so much energy that when they emit electrons, those electrons are given enough energy to go faster than light in that medium (in this case water), releasing blue light. if you put the reactor in air or a vacuum, this effect wouldn't happen because the speed of light in air
is very close to c, which is the universal speed limit. if you did go faster than c, time would go backwards and you would have infinite theoretical mass and probably spaghettify, like with a black hole.
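The inverse wavelength–energy relation mentioned above is E = hc/λ; a quick check for green light (λ = 500 nm, constants rounded):

```python
h = 6.626e-34                # Planck constant, J*s
c = 2.998e8                  # speed of light, m/s
wavelength = 500e-9          # 500 nm, green light

E = h * c / wavelength       # photon energy in joules
print(E)                     # ~3.97e-19 J
print(E / 1.602e-19)         # ~2.48 eV
```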
light waves can travel through a vacuum, and do not require a medium. In empty space, the wave does not dissipate (grow smaller) no matter how far it travels, because the wave is not interacting with anything else.
Salim
Please, is there any instructional material for sound waves, echo, and light waves?
Salami
how far? there is a hot topic that is bothering me now
Abraham
linear motion
Ahmed
kinematic
Abraham
Akinsanya
kinematic
Emma
kinematics discusses motion without its causes ...
ghulam
wow, I like what I am seeing here. I need someone to brush me up on physics; in fact I'll say I know nothing
Godslight
https://profitgenesisreloaded.com/ii1l0aev/scalar-matrix-and-diagonal-matrix-f4046a
# scalar matrix and diagonal matrix
A diagonal matrix is a square matrix in which all off-diagonal entries are zero; the entries on the main diagonal itself may or may not be zero. For an $n \times n$ matrix the main diagonal runs from the top left to the bottom right and contains the entries $x_{11}, x_{22}, \ldots, x_{nn}$. The determinant of a diagonal matrix is the product of its diagonal values, and every diagonal matrix is symmetric.

A scalar matrix is a diagonal matrix whose diagonal entries are all equal:

$e_{11} \,=\, e_{22} \,=\, e_{33} \,=\, \cdots \,=\, e_{mm}$

Equivalently, a scalar matrix is a scalar multiple $\lambda I$ of the identity matrix; its effect on a vector is scalar multiplication by $\lambda$, which is why a diagonal matrix is sometimes called a scaling matrix. The identity matrix of order $n$, denoted $I_n$, is the special case with every diagonal entry equal to 1, and the product of any matrix with the identity matrix yields that matrix itself. From these statements we can say that a scalar matrix is always a diagonal matrix, but not conversely.

Two claims worth checking:

1. *The set of all scalar matrices ($n \times n$) over $\mathbb{C}$ is a field relative to matrix addition and multiplication.* This is true: the map $\lambda I \mapsto \lambda$ identifies the scalar matrices with $\mathbb{C}$ via a very simple bijection that preserves both operations, so every non-zero scalar matrix $aI$ has a multiplicative inverse $a^{-1}I$ with $(aI)(a^{-1}I) = I$. Note that a field is allowed to have one non-product-invertible element, the additive identity $0 \cdot I$.

2. *The set of all diagonal matrices ($n \times n$) over $\mathbb{R}$ is a field relative to the same operations.* This is false: the set contains non-zero, non-invertible elements such as $\left(\begin{smallmatrix}1 & 0\\ 0 & 0\end{smallmatrix}\right)$ in $M_{2 \times 2}(\mathbb{R})$, so not every non-zero element has a multiplicative inverse.

In R, `diag` has four distinct usages:

- if `x` is a matrix, `diag(x)` extracts its main diagonal;
- if `x` is a scalar (a length-one vector) and the only argument, it returns a square identity matrix of order the nearest integer to `x`;
- if `x` is a vector of length two or more, `diag(x)` returns a diagonal matrix whose diagonal is `x`;
- with two scalar arguments, it returns a matrix with as many rows as the first argument and as many columns as the second.

A related question: is it possible to multiply just the diagonal of a matrix by a scalar using basic matrix operations? No — multiplying by a scalar matrix $kI$ rescales every entry of the matrix, not only the diagonal ones.
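The diag behaviours and the scalar-matrix properties described above have direct NumPy analogues; a short sketch:

```python
import numpy as np

M = np.arange(1, 10).reshape(3, 3)
print(np.diag(M))            # matrix in -> extract the main diagonal: [1 5 9]
print(np.diag([2, 7, 4]))    # vector in -> build a diagonal matrix

# A scalar matrix is lambda * I; applying it to a vector is scalar multiplication
S = 5 * np.eye(3)
v = np.array([1.0, 2.0, 3.0])
print(np.allclose(S @ v, 5 * v))                          # True

# Scalar matrices behave like their scalars: (aI)(bI) = (ab)I, inverse is (1/a)I
A, B = 3 * np.eye(2), 4 * np.eye(2)
print(np.allclose(A @ B, 12 * np.eye(2)))                 # True
print(np.allclose(np.linalg.inv(A), (1 / 3) * np.eye(2))) # True
```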
https://mathhelpforum.com/tags/ax2/
# ax2
1. ### Ax^2+By^2+Cxy=0
Hello, I need to solve the following. Ax^2+By^2+Cxy=0 I suppose I have to make some substitution, but I have no idea how! Thanks for any help Vale
2. ### Suppose that ax+y=b and that ax^2+y^2=b^2. What are the possible value(s) of y?
Options: 1) b * 1+a/1-a 2) -b 3) b * 1-a/1+a 4) (b-b(sqrt(1-4/a)))/2 5) ab - b 6) b/a 7) -b/a 8) b - ab 9) b 10) (b+b(sqrt(1-4/a)))/2 PLEASE HELP ME! Thank you in advance.
3. ### Primitive of (ax^2 + bx + c)^(n + 1/2)
No. 295 in my list of difficult integrals ... Integral of the general power of a x^2 + b x + c: the object here is to establish the following reduction formula: \int \left({a x^2 + b x + c}\right)^{n + \frac 1 2} \ \mathrm d x = \frac {\left({2 a x + b}\right) \left({a x^2 + b x + c}\right)^{n...
4. ### Integral of x^2 sqrt (ax^2 + b x + c) dx
Integral number 287 is similar to 286: \int x^2 \sqrt {ax^2 + b x + c} \ \mathrm d x I haven't found a useful substitution: z = x^2, \frac 1 u and \sqrt {ax^2 + b x + c} all seem to lead nowhere. I have also tried splitting it into u = x, \mathrm d v = x \sqrt {ax^2 + b x + c} but the algebra...
5. ### Integral of x sqrt (ax^2 + b x + c) dx
We're up to number 286 in Fiendish Integrals for Masochists ... \int x \sqrt{a x^2 + b x + c} \ \mathrm d x The posted solution is: \frac {\left({\sqrt {a x^2 + b x + c} }\right)^3} {3 a} - \frac {b \left({2 a x + b}\right) \sqrt {a x^2 + b x + c} } {8 a^2} - \frac {b \left({4 a c -...
6. ### Integral of 1 / (x sqrt (ax^2 + b x + c)) dx
I'm working my way through a series of ever tougher integrals. I'm stuck at no. 283: \int \frac {dx} {x \sqrt {a x^2 + b x + c} } ... and there are some further even tougher ones. I understand that it is supposed to evaluate to: - \frac 1 {\sqrt c} \ln \left({\frac {2 \sqrt c \sqrt {a x^2 +...
7. ### Find the cubic function y = x3 + ax2 + bx + c
Find the cubic function y = x^3 + ax^2 + bx + c whose graph has a horizontal tangent at (-2, 10) and passes through (1,1). What I tried so far is plugging in -2 and 1 in the original function to get: -8 + 4a - 2b + c = 10 and 1 + a + b + c = 1. And I have y' = 3x^2 + 2ax + b, which when I plug in -2 I...
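The two point-conditions plus the tangent condition f'(-2) = 0 give three linear equations in a, b, c, which can be solved directly; a sketch:

```python
import numpy as np

# f(x) = x^3 + a*x^2 + b*x + c, with f(-2) = 10, f'(-2) = 0, f(1) = 1
# f(-2) = 10  ->  4a - 2b + c = 18
# f'(-2) = 0  -> -4a +  b     = -12    (since f'(x) = 3x^2 + 2ax + b)
# f(1)  = 1   ->   a +  b + c = 0
M = np.array([[4, -2, 1],
              [-4, 1, 0],
              [1,  1, 1]], dtype=float)
rhs = np.array([18.0, -12.0, 0.0])
a, b, c = np.linalg.solve(M, rhs)
print(a, b, c)          # a = 2, b = -4, c = 2  ->  y = x^3 + 2x^2 - 4x + 2
```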
8. ### M = max|f(x)|≥1/4 where f(x) = x^3+ ax^2+ bx +
Let f(x) = x^3 + ax^2 + bx + c with a, b, c real. Show that M = max_{-1 ≤ x ≤ 1} |f(x)| ≥ 1/4, and find all cases where equality occurs. So far I have: let h(x) = x^3, then let g(x) = mx; we need to find m so that g(x) can get as close to h(x) as possible with an even minimum distance between them on [-1,1]. By...
Hi, so I am a student of the Maths and Physics track in my school. In today's Maths class, the teacher gave us a question, and when no one could solve it, he said that there is going to be a quiz on it tomorrow. I tried to solve it several times but I couldn't; anyway, the question...
10. ### Quadratic Functions Question - Find in Form of y = ax2 + bx + c
Hey All Got a test tomorrow, I'd appreciate some help. I'm pretty good in math but I just don't get these questions here. Find, in the form y = ax2 + bx + c, the equation of the quadratic whose graph: q1) touches the x-axis at 4 and passes through (2,12) q2) has vertex (-4, 1) and...
11. ### Quadratic Equations of the form Ax^2 + BX + C
I am trying to get to understand this subject but can't seem to grasp it from the books? I have; y = X^2 + 2x - 3. This is what I have tried so far; (x + 2) (x - 1.5) x = - 2 or x = 1.5 - 2 + 1.5 = - 0.25 2 y = - 0.25^2 + 2 ( -0.25) - 3 ( -4, -1) I know it's wrong but can't...
12. ### Finding a and b, of ax^2 + bx + c, when given an equation of the tangent line
Hey! I didn't really know what to put as a title so I hope it describes the problem relatively well... I have been working on this problem for about 20 minutes and haven't come very far, so I was hoping for someones help in putting me in the right direction! Question For what values of a and b...
13. ### Quadratic Equation Hidden, ax^2+bx+c=0
Hey all, I need help with this equation. It is a hidden equation which I have tried to solve on my maths booklet. The steps that must be followed are: ax^2 + bx + c = 0 I have to use the quadratic formula and tranpose the above here is my question 1 / R + R = 3 + 7 / R I need to get the...
14. ### Regression for Ax^2+Bx+C
Hi there, I have a set of data points {(X1,Y1), (X2,Y2),........(Xn,Yn)} I would like to approximate the relation between X and Y as a parabola. Assuming the parabola has form y = Ax^2+Bx+C, can somebody please provide me with the least squares regression formulas for A, B and C. Any help...
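The requested least-squares formulas come from the normal equations: minimizing the sum of (A x_i^2 + B x_i + C - y_i)^2 gives a 3x3 linear system in the power sums S_k = sum of x_i^k and the moments T_k = sum of y_i x_i^k. A pure-Python sketch of this (the helper names are my own; in practice numpy.polyfit(xs, ys, 2) does the same job):

```python
def fit_parabola(xs, ys):
    """Least-squares fit of y = A x^2 + B x + C via the normal equations."""
    S = [sum(x**k for x in xs) for k in range(5)]                   # power sums S0..S4
    T = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]   # moments T0..T2
    # rows come from setting the partial derivatives w.r.t. A, B, C to zero
    M = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    # Gaussian elimination with partial pivoting on the augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coeffs[r] = (M[r][3] - sum(M[r][c] * coeffs[c] for c in range(r + 1, 3))) / M[r][r]
    return coeffs  # [A, B, C]

# sanity check on exact data from y = 2x^2 + 3x + 1
xs = [-2, -1, 0, 1, 2, 3]
ys = [2 * x * x + 3 * x + 1 for x in xs]
A, B, C = fit_parabola(xs, ys)
assert abs(A - 2) < 1e-9 and abs(B - 3) < 1e-9 and abs(C - 1) < 1e-9
```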
15. ### factoring trimonials of the form ax^2+bx+c where a does not equal 1
I am in need of some help... I can't seem to figure out this problem... factor completely 2u^2 + 5uv +3v^2 any help is much appreciated.
16. ### What values of a makes f(x)= ax^2 + (a+1)/x have a horizontal asymptote when x->+/-∞?
If you could explain how to do this I would be SOOOO grateful! Thanks!!! I have no idea where to start...
17. ### Complete the square of ax^2+bx+c where b=0
I'm sure the method is just slipping my mind, and I've done this before, but only once since I cannot find an example in my book. The question is stated like this. The path of a basketball shot can be modelled by the equation h=-0.125d^2+2.5, where h is the height of the basketball, in meters...
18. ### For what values of a and b is the line 4x + y = b tangent to the parabola y = ax2 wh
__a=1/2 __b=-8 __a=-1/2 __b=8 __a=-20
|
2020-02-18 21:39:58
|
|
https://www.askiitians.com/forums/Modern-Physics/when-the-rest-mass-of-photon-become-zero-please-e_157788.htm
|
# When does the rest mass of a photon become zero? Please explain this
Manas Shukla
102 Points
6 years ago
Photon has a rest mass of 0.
To understand the above statement we need to know what the term rest mass actually means.
Rest mass is the mass of the particle when measured by an observer who sees the particle still and with zero speed.
Now, as we know, light always travels at the speed c. For the relativistic mass m to remain finite at v = c, the rest mass m_0 must be 0, according to the equation
$m = \frac{m_{0}}{\sqrt{1- \frac{v^{2}}{c^{2}}}}$.
The energy of a photon can instead be calculated using Einstein's energy-momentum relation
$E^{2} = (m_{0}c^{2})^{2} + (pc)^{2}$
Photons don't have rest mass, but they do have energy, which implies an effective (relativistic) mass m = E/c^2.
Thus photons don't have rest mass, yet they do have mass in this sense.
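Since the rest-mass term vanishes, the relation reduces to E = pc; combined with E = hc/wavelength this puts a number on a photon's effective mass m = E/c^2. A small illustrative Python sketch (the constants are standard values, rounded; the 500 nm wavelength is my own example):

```python
# effective "mass" of a photon from E = h*c/wavelength and m = E/c^2
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # green light, m

E = h * c / wavelength   # photon energy, J
p = E / c                # momentum from E = pc (the rest-mass term is zero)
m_eff = E / c**2         # effective relativistic mass, kg
assert 3.9e-19 < E < 4.1e-19      # roughly 4e-19 J
assert 4e-36 < m_eff < 5e-36      # roughly 4.4e-36 kg
```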
|
2023-03-27 06:31:18
|
|
https://www.jiskha.com/archives/2012/12/16
|
# Questions Asked onDecember 16, 2012
1. ## physics
the muzzle velocity of a 50.0g shell leaving a 3.00kg rifle is 400 m/s what is the recoil velocity of the rifle
2. ## physics
the initial velocity of a 4 kg box is 11 m/s, due west. after the box slides 4 m horizontally its speed is 1.5 m/s. determine the magnitude and the direction of the non conservative force acting on the box as it slides.
3. ## investing
Which of the following statements is true about financial planning? A. Any kind of financial expert (such as a stockbroker, lawyer, or accountant) can help you develop a comprehensive financial plan. B. Once you have painstakingly developed a financial
4. ## chemistry
When disaccharide beta is hydrolyzed, which monosaccharide units are produced?
5. ## physics
a bowling ball of mass 2.00 kg strikes a stationary pin of mass 5.00 x 10^2 g. the collision lasts for .60s after which the pin moves off with a velocity of 12.0 m/s [w] a)accel of pin during the collision b)force exerted by bowling ball on the pin c)the
6. ## chemistry
What is the type of glycosidic bond in case alpha?
7. ## Chemistry
When disaccharide alpha is hydrolyzed, which monosaccharide units are produced? A) D-glucose and D-fructose monosaccharide units B) two D-fructose monosaccharide units C) two D-glucose monosaccharide units
8. ## Chemistry
The total mass that can be lifted by a balloon is given by the difference between the mass of air displaced by the balloon and the mass of the gas inside the balloon. Consider a hot air balloon that approximates a sphere 5.00 m in diameter and contains air
9. ## calculus
answer the questions about the following function f(x)= 10x^2/x^4+25 a. is the point (-sqrt 5,1) on the graph b. if x=3, what is f(x)? what point is on the graph of f? c. if f(x)=1, what is x? what points are on the graph? d. what is the domain of f? e.
10. ## english
Combine the following sentences by changing the italicized group of words into a gerund or gerund phrase.I lost my wallet. This caused me great inconvenience.
11. ## Math
A rancher has 220 feet of fencing to enclose a rectangular corral. Find the dimensions of the rectangle that maximize the enclosed area, using all 220 feet of fencing. Then find the maximum area.
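For the corral question above: with 2L + 2W = 220 the area is A(L) = L(110 - L), a downward parabola with vertex at L = 55, giving a 55 ft by 55 ft square and a maximum area of 3025 sq ft. A brute-force Python check over integer lengths (my own sketch):

```python
# 2L + 2W = 220  =>  W = 110 - L, so area A(L) = L * (110 - L)
best_L = max(range(1, 110), key=lambda L: L * (110 - L))
assert best_L == 55                       # vertex of the parabola
assert best_L * (110 - best_L) == 3025    # maximum area in square feet
```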
12. ## math
A random group of seniors was selected from a university and asked about their plans for the following year. The school advising office claims that 60% of the students plan to work, 30% of the students plan to continue in school, and 10% of the students
13. ## Chemistry
Given the reaction 2C2H6 + 7O2 → 4CO2 + 6H2O ∆H = -1416 kJ/mol C2H6 a. How many liters of Carbon Dioxide would be produced if 16.00 L of ethane, (C2H6), were burnt (all at STP)? b. How many liters of water vapor would also be produced? c. How many
14. ## math Algebra 1
What is the algebraic expression for the following word phrase: the product of 5 more than p and 7? A.5+p+7 B.7(p+5) C.7*p*5 D.p+5 over 7 I think it is A...?
15. ## math final review (help)
from a 12cm by 12cm piece of cardboard, square corners are cut out so that the sides can be folded up to make a box. Express the volume of the box as a function of the length, x, in centimeters.
16. ## Help Please!!Edu
what does research show about the child's relationship with the primary parental figure when the child has additional attachments to other people?
17. ## Math
A new queen size bed sheet measures 210cm by 240 cm. The lenght and width shrinks by 2 percent after the first wash a. What are the dimensions of the sheet after washing b. What is the percent decrease in the area of the sheet
18. ## chemistry
An aqueous solution is 0.273m kcl. What is the molar concentration of potassium chloride, kcl? The density of the solution is 1.011 x 1 1000 g/L Who helps me
19. ## Math
I can't figure out this one question... A ship is docked in port and rises and falls with the waves. The function d(t) = 2sin(30t)°+5 models the depth of the propeller, d(t), in metres at t seconds. e) within the first 10 seconds, at what times is the
20. ## chemistry
You wish to increase the carbon content of a slab of steel by exposing it to a carburizing atmosphere at elevated temperature. The carbon concentration in the steel before carburization is 176.0 ppm and is initially uniform through the thickness of the
21. ## algebra
Write a rule for the pattern. -3,-9,-14,-47,-141,.....
22. ## chemistry
The carbon concentration in the steel before carburization is 366.5 ppm and is initially uniform through the thickness of the steel. The atmosphere of the carburizing furnace maintains a carbon concentration of 8040.0 ppm at the surface of the steel.
23. ## MGMT
5. Which kind of goal concerns cooperation between departments?
24. ## statistics
In a simple random sample of 50 plain M&M candies, it is found that none of them are blue. We want to use a 0.01 significance level to test the claim of Mars, Inc., that the proportion of M&M candies that are blue is equal to 0.10.
25. ## English- edit my essay please?
The Great Gatsby Mortality is something we will all face at some point in our life. However, I doubt many people will deal with it in the same light as James Gatz did. He died once in his youth, at the age of seventeen, only to be reborn again as someone
26. ## science
You wish to increase the carbon content of a slab of steel by exposing it to a carburizing atmosphere at elevated temperature. The carbon concentration in the steel before carburization is 152.0 ppm and is initially uniform through the thickness of the
27. ## help
You wish to increase the carbon content of a slab of steel by exposing it to a carburizing atmosphere at elevated temperature. The carbon concentration in the steel before carburization is 478.5 ppm and is initially uniform through the thickness of the
28. ## statistics
Then, for the following two examples, determine the null and alternative as well as state which one represents the claim. State what type of test will be completed: left-tailed, right-tailed, or two-tailed. The average score of the first exam for the class
29. ## maths --plse help me..
Find the area of the sector of a circle of radius 7cm if the corresponding arc length is 6.2cm
30. ## Conestoga
predict whether a solid will form. Identify the solid and write the balanced equation for the reaction. Na2S(aq)Cu(NO3)2(aq)
31. ## science
What is the acceleration of a 66 kg block of cement when pulled sideways with a net force of 597 N? Answer in units of m/s 2.
32. ## Chemistry
You wish to increase the carbon content of a slab of steel by exposing it to a carburizing atmosphere at elevated temperature. The carbon concentration in the steel before carburization is 359.5 ppm and is initially uniform through the thickness of the
33. ## calculus
find derivative y= 3x^5-6x^3/7secx find dy/dx * xcosy= 5( y+cosx) * X^5-Y^3+8XY=466 * A RIGHT TRIANGLE HAS LEGS OF LENGTH 15M AND 7M. THE 7M LEG IS NOT CHANGING. 15M LEG IS GETTING LONGER AT RATE OF 8M PER SECOND. SO WHAT IS RATE OF CHANGE OF THE
34. ## book report
My teacher gave me a book review paper. What does it mean when it says "write an ending sentence for your review"
35. ## chemistry
The figure above shows how the energy of a system varies as a chemical reaction proceeds. Match the numbers features labeled on the figure above to their corresponding statements below: energy state of the reactants:
36. ## math
The lethal inhalation dose of Sarin for a 200lb person has been determined to be 0.0015 mg. Determine the LD50 of Sarin that researchers used to calculate this lethal inhalation dose. 1.4 x 10-4 mg/kg 0.16 x 10-4 mg/kg 1.8 x 10-5 mg/kg
Who is God?
38. ## AP calculus
The base of a cone-shaped tank is a circle of radius 5 feet, and the vertex of the cone is 12 feet above the base. The tank is being filled at a rate of 3 cubic feet per minute. Find the rate of change of the depth of water in the tank when then depth is 7
39. ## Stats
students reported studying an average of 9.92 hours a week, with a standard deviation of 4.54. Treating this class as the population, what percent of students study more than 12 hours a week
40. ## French
Écris un paragraph en français au passé composé. Écris au moins six phrases.
asked by SUPER DUPER IMPORTANT!
41. ## Chemistry help please
You wish to increase the carbon content of a slab of steel by exposing it to a carburizing atmosphere at elevated temperature. The carbon concentration in the steel before carburization is 359.5 ppm and is initially uniform through the thickness of the
42. ## English - ms. sue
ms. sue for essay i be writing i got everything done about real life traditions now i just need few more from story on this topic Wat expense be paid by beings & by society 4 their unhesitating approval of tradition or brutal social rituals. Do there be
43. ## Chemistry
A sample of 40.0 mL of hydrogen is collected by water displacement at a temperature of 20 degrees Celcius. The barometer reads 751 torr. What is the volume of the hydrogen at STP (standard temperature and pressure)?
44. ## chemistry
a 40g piece of ice at 0.0 degrees C is added to a sample of water at 8.0 degrees C. the ice melts and the temperature of the water decreases to 0.0 degrees C. how many kilograms of water were in the sample?
45. ## Physics
The weight of an object on Earth is 350 newtons. On Mars, the same object would weigh 134 newtons. What is the acceleration due to gravity on the surface of Mars, given that it is 9.8 meters/second2 on Earth?
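For the Mars question above: since weight W = mg and the object's mass is unchanged between planets, g_Mars = g_Earth * (W_Mars / W_Earth), roughly 3.75 m/s^2. A one-line check in Python:

```python
# weight = m*g, and the mass is the same on both planets,
# so the gravitational accelerations scale like the weights
g_earth, w_earth, w_mars = 9.8, 350.0, 134.0
m = w_earth / g_earth                  # object's mass, about 35.7 kg
g_mars = g_earth * w_mars / w_earth    # about 3.75 m/s^2
assert abs(g_mars - 3.75) < 0.01
```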
46. ## Letter to Mrs.Sue
Mrs. Sue I'm sorry for my behavior. I know you'll never forgive me but I'm sorry.
47. ## science - Rock
It says write a story about a rock. Where it came from? How it got into my yard? What weathering and erosion affected the rock
48. ## american history
in 1700 which of the following colonies had the largest slave population relative to its overall population
49. ## Physics
A football is thrown toward a receiver with an initial speed of 17.5 m/s at an angle of 35.2◦ above the horizontal. At that instant, the receiver is 16.7 m from the quarterback. The acceleration of gravity is 9.81 m/s^2 With what constant speed should
50. ## Ms.Sue
Can you delete all my questions i have ever asked on here i want to start fresh thx!
51. ## Intermediate Algebra
A fire is raging in an apartment house. you see kids in a window in an upstairs window. the window is 24 feet off the ground. you have a ladder, but it only extends to 30 feet, how far from the house must you place the ladder to reach the kids?
52. ## Chemistry
I'm currently learning about balancing chemical equation but I don't understand the equation below. 2FE + 3CL(subscript 2) -> 2FECL(subscript 3) I understand 2FE balance out but how did 3CL2 became CL3? Thanks
171. ## Ms.Sue
Thx Ms. Sue I got 100% on everything :) thanks to you.
172. ## Physics
The tension in a rope attached to a boat is 27.5 lb. The rope is attached to the boat 6.0 ft below the level at which the boat is being drawn in. At the point where there is 18.0 ft of rope out, what force is bringing the boat towards the dock, and what
173. ## Math
For a history project, Marcus built a replica of the Texas State Capitol building. His model has a scale factor of 1/100. If the height of the Capitol's dome is 310 feet, how high is the dome on Marcus' replica?
174. ## Programming in c++
Write a program in c++ that inputs a number [1-12] from the user and displays the month. it also asks the user whether he wants to input another number or not, if the user inputs 1 then it again inputs number, if user inputs 0 then program ends.
asked by osama qaiser
175. ## Cultural Anthropology 101
Theocracies are: (Points : 1) 9A)****determined by the religious beliefs of the society. based on the accumulation of wealth by individuals. a group of elected officials. are sanctioned by Big Men. 2. Chiefs within chiefdoms generally gained their
176. ## Chemistry
A latex balloon, wall thickness 3.091 x 10-4 m, contains helium at a concentration of 0.3 kg m-3. Under these conditions the total surface area of the balloon is 0.33 m2. The diffusion coefficient of He in latex at room temperature is 4.9 x 10-9 m2s-1.
177. ## Math
An architect designed a rectangular room with an area of 925 square feet. If A = 925 ft squared, what is the width of the room if the length is 37 feet? A. 25 ft B. 74 ft C. 425.5 ft D. 462.5 ft
178. ## vector
Three ropes hold a zeppelin in place, but two of the ropes break. The remaining rope holds the zeppelin with a tension of 255 N at an angle of 40 degrees with the ground due to a wind. The weight (vertical force) of the zeppelin and its contents is 200 N,
179. ## physics
A 1.13 × 103 kg car accelerates uniformly from rest to 10.6 m/s in 3.22 s. What is the work done on the car in this time interval? Answer in units of J
180. ## math using rule 78
a loan for $400 is to be paid off in 66 months with a payment of $11.62. The borrower pays it off in 18 months; use the Rule of 78 to find the interest saved
181. ## science
what are the altitude ranges of the troposphere, stratosphere, mesosphere, and the thermosphere?
182. ## ALG 2 PLZ PLZ HELP
As you can see, the biker and his friend are having a hard time getting up the hill. They would like to get to the top safely. Two ordered pairs on the hill are given as (6, 3) and (8, 7). Part 1: Find the slope of the hill. Slope (m) = change in y over
183. ## English
What is meant by the subthemes: death as pervading life, time as the essence of death and self destruction, night as the environment of death, and death as the great isolator as applied to Pablo Neruda's poems, especially Residence on Earth?
184. ## math
A family of ducks contains 7 ducks which are 44 lbs. The other weighs 32 lbs (no amount of ducks given) How much does each duck and duckling weigh in each group THEY HAVE TO WEIGH THE SAME IN EACH GROUP
185. ## Social Studies - URGENT
Using the five themes of geography, which theme best illustrates each statement: Location, character of place, human-environment interation, movement or region. 1. Islam spread to Eastern Europe from Southwest Asia 2. The Scandinavian countries share
186. ## Chemistry
standard form of the equation KClo3->KCl(5)+02(g) for a project
asked by Jeremiah thomas
187. ## Chemistry
The solubility of the fictitious compound, administratium fluoride (AdF3) in water is 3.091×10−4 M. Calculate the value of the solubility product Ksp.
188. ## Chemistry
The solubility of the fictitious compound, administratium fluoride (AdF3) in water is 3.091×10−4 M. Calculate the value of the solubility product Ksp.
189. ## Math Algebra
5. At the half-time show, a marching band marched in formation. The lead drummer started at a point with coordinates (–4, –7) and moved 4 steps up and 2 steps left. a. Write a rule to describe the translation. I think it is (x,y)-->(x+2,y+4)is this
190. ## Chemistry
The fictitious compound, pandemonium fluoride (PnF2) has a Ksp value in water of 3.091×10−9M3 at room temperature. Calculate the solubility of PnF2 in water. Express your answer in units of molarity.
191. ## Finance
Hi Tech Products has 35,000 bonds outstanding that are currently quoted at 102.3. The bonds mature in 11 years and carry a 9 percent annual coupon. What is the firm's aftertax cost of debt if the applicable tax rate is 35 percent?
192. ## maths
which is valuable abacus or vedic maths?
193. ## Math Algebra
The length of the hypotenuse of a right triangle is 30 m. The length of one leg is 20 cm. Find the length of the other leg. Round your answer to the nearest tenth I know how to do hypotenuse but this doesn't seem to work out...help please
194. ## chemistry
Calculate [H3O+] given [OH-] in each of the following aqueous solutions. [OH-] = 7.1×10^11 M What calculations do I make from this point? I have plugged in the 7.1×10^11 M into the calculator but am getting the answer wrong every time and
asked by jennifer andersen
195. ## trigonometry
5.Find the complete exact solution of sin x = -√3/2. 10. Solve cos 2x – 3sin x cos 2x = 0 for the principal value(s) to two decimal places. 12. Solve tan^2x + tan x – 1 = 0 for the principal value(s) to two decimal places. 19.Prove that tan^2a – 1
196. ## history
To what extent is the following statement accuate? "During the 1960s, the United Sates had become a more open, more tolernt, freer country."
197. ## Grammar
I couldn’t help but to think: “Is this really what the meaning of success has become?” and to make matters even worse, this idea was just spread to the thousands of viewers. Is this correct?
198. ## trignonmetry
6. Prove that tan λ cos^2 λ + sin^2λ/sin λ = cos λ + sin λ 10. Prove that 1+tanθ/1-tanθ = sec^2θ+2tanθ/1-tan^2θ 17.Prove that sin^2w-cos^2w/tan w sin w + cos w tan w = cos w-cot w cos w 23. Find a counterexample to shows that the equation sec
199. ## Physics
A candle flame is placed in front of a concave mirror on the principal axis such that is twice the size of the object and is inverted. If the sum of the object distance and the image distance is 72 cm, how far from the mirror is the image and what is the
asked by Hanu 2012
200. ## maths --plse help me..
Is eulers formulae true for spheres and cylinder ,please explain
201. ## History
Why should king Charles 1 not have been guilty and/or executed?
202. ## Physics
A index of certain type of glass has an an index of refraction of 1.66 for red light and 1.65 for violet light. If a beam of white in air approaches the glass with an angle of incidence of 49.0 degree, by what angle will the red light be separated from the
asked by Hanu 2012
203. ## calculus
A 6 m3 box with a perfect triangle base and lid and 3 rectangular walls is to be constructed from 2 materials. The cost to make the base 1$/ft2 and that for the lid and walls is 4/2$/ft2. What dimensions will result in the most cost-effective
204. ## physics
The puck in the figure below has a mass of 0.160 kg. Its original distance from the center of rotation is 40.0 cm, and it moves with a speed of 60.0 cm/s. The string is pulled downward 15.0 cm through the hole in the frictionless table. Determine the work
205. ## calculus
Determine the largest rectangle that can be inscribed inside the cavities of the two curves: y = –2(x – 6)2 & y = 6x2 –12*Tx – 128 + 6*T2
206. ## maths
A wall 24m long, 0.4m thick and 6m high is constructed with the bricks each of dimensions 25cm X 16cm X 10cm. If the mortar occupies 1/10th of the volume of the wall, then find the number of bricks used in constructing the wall.
207. ## Math 12th grade
Find the sum of each infinite series, or state that the sum doesn'nt exist 1/7+5/14+25/28+
208. ## maths
The value of (₆√27-√27/4)^2 is:
209. ## Math HELP PLEASE
solve and express your answer in coordinate x,y,z x+y+z=6 2x-y-z=-3 x-2y+3z=6
210. ## Adv. Physical Science
an automobile to be transported by ship is raised above the dock by a crane. If the gravitational potential energy of the car is 6.6 x 10^4 and the mass is 960 kg how high has the car been lifted?
211. ## Intermediate Algebra
For the quadratic equation y=(x-3)^2-2. Determine: A) whether the graph opens up or down; B) the y-intercept; C)the x-intercepts; D) the points of the vertex. and D) graph the equation with at lease 10 pairs of points.
212. ## Science
Imagine you are an energy expert on a planning council for a new town to be built on an island. Evaluate resources and methods you will suggest the new town to use.
213. ## Intermediate Algebra
To find the distance across a canyon, a surveyor inserts poles at two places on the same side of the canyon as indicated below. using the surveyor's measurements given below, find the distance across the canyon. (the answer can be in simplified radical
222. ## physics
A particle travels horizontally between two parallel walls separated by 18.4 m. It moves toward the opposing wall at a constant rate of 9.4 m/s. Also, it has an acceleration in the direction parallel to the walls of 5.8 m/s^2 What will be its speed when it
223. ## Applied Behavioral
About what percentage of women experience hot flashes during menopause 1. 5% 2. 50% 3. 90% 4. 99% My answer is 50%
|
2020-08-10 21:12:10
|
|
http://math.stackexchange.com/questions/35276/cellular-maps-induced-by-a-homomorphism
|
# cellular maps induced by a homomorphism
Consider the following situation: We have a countable discrete group $G$ with a finite index (not necessarily normal) subgroup $H$. It is possible to construct 3 $CW$ complexes and a covering diagram as usual. [I am skipping the details, see for instance Geoghegan's Top. Methods in Group Theory]
By considering the free $R$ modules generated by the cosets of $H$ in $G$ it is possible to relate the cellular homology of these spaces(with coefficients) with the homologies of the groups. [please again see the above book.]
Now, the question: Suppose $f\colon H\rightarrow G$ is a homomorphism. How does this $f$ enter the above story? For example, the inclusion homomorphism $\subset \colon H\subset G$ (if I don't lie to myself) appears as the covering of complexes. Does a general homomorphism have a topological meaning?
Thank you.
-
Yes, your map $f : H \to G$ induces a map $Bf : BH \to BG$ and if $H$ is finite index you can realize $Bf$ as a finite-sheeted covering space. In particular $Bf$ is a map of the classifying spaces so it has an induced map on (co)homology, giving the relation with homology you seem to be seeking. – Ryan Budney Apr 26 '11 at 21:46
Thank you Ryan, this is what I need. Could you also explain or give a reference that explains how to see $Bf$ as a finite sheeted covering? Thank you. – niyazi Apr 26 '11 at 21:55
Ryan, also, is transfer a functor, i.e is there a version of transfer for $f$? Which book explains these things? Thank you very much. – niyazi Apr 26 '11 at 22:00
General covering space theory says the covering spaces of $BG$ are in bijective correspondence with subgroups of $\pi_1 BG$, but $\pi_1 BG$ is canonically ismorphic to $G$. So $H$ corresponds to a covering space of $BG$, and by design it's isomorphic to $BH$. There are various natural ways you can set up explicit models for classifying spaces, but I think most ways of getting what you want more or less factor through this way of thinking of things. – Ryan Budney Apr 27 '11 at 0:43
|
2014-07-24 18:12:31
|
|
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-6-inverse-circular-functions-and-trigonometric-equations-section-6-4-equations-involving-inverse-trigonometric-functions-6-4-exercises-page-286/40
|
Trigonometry (11th Edition) Clone
$arccos~x+2~arcsin~\frac{\sqrt{3}}{2} = \frac{\pi}{3}$ This equation has no solutions.
$arccos~x+2~arcsin~\frac{\sqrt{3}}{2} = \frac{\pi}{3}$ $arccos~x = \frac{\pi}{3}-2~arcsin~\frac{\sqrt{3}}{2}$ $arccos~x = \frac{\pi}{3}-2~(\frac{\pi}{3})$ $arccos~x = -\frac{\pi}{3}$ Since the range of the arccos function is $[0,\pi]$, there is no value $x$ such that $arccos~x = -\frac{\pi}{3}$ Therefore, this equation has no solutions.
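The key fact here, that the range of arccos is [0, pi] and so can never equal -pi/3, is easy to confirm numerically. A small Python sketch sampling math.acos over its whole domain:

```python
import math

# arccos maps [-1, 1] onto [0, pi]; sample the domain finely
samples = [-1 + 2 * i / 1000 for i in range(1001)]
values = [math.acos(x) for x in samples]
assert min(values) >= 0 and max(values) <= math.pi
# hence arccos(x) = -pi/3 < 0 has no solution
```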
|
2022-07-02 05:17:09
|
|
https://nrich.maths.org/9462
|
# Reasoning, Justifying, Convincing and Proof - Lower Secondary
Reasoning, Justifying, Convincing and Proof is part of our Thinking Mathematically collection.
### Summing Consecutive Numbers
##### Stage: 3 Challenge Level:
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
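(Spoiler for the exploration above.) A brute-force Python sketch of mine points at the well-known answer: every positive integer except the powers of two can be written as a sum of two or more consecutive positive integers:

```python
# which n in 1..50 are sums of two or more consecutive positive integers?
expressible = set()
for start in range(1, 51):
    total = start
    for nxt in range(start + 1, 52):
        total += nxt          # runs of length >= 2 starting at `start`
        if total > 50:
            break
        expressible.add(total)

not_expressible = sorted(set(range(1, 51)) - expressible)
assert not_expressible == [1, 2, 4, 8, 16, 32]  # exactly the powers of two
```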
### What's Possible?
##### Stage: 4 Challenge Level:
Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
### Marbles in a Box
##### Stage: 3 and 4 Challenge Level:
In a three-dimensional version of noughts and crosses, how many winning lines can you make?
### Attractive Tablecloths
##### Stage: 4 Challenge Level:
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
### 1 Step 2 Step
##### Stage: 3 Challenge Level:
Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at time. In how many different ways can Liam go down the 12 steps?
### What's it Worth?
##### Stage: 3 and 4 Challenge Level:
There are lots of different methods to find out what the shapes are worth - how many can you find?
### Take Three from Five
##### Stage: 3 and 4 Challenge Level:
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
### Anti-magic Square
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 44 - 2011
You have already used Magic Squares, now meet an Anti-Magic Square. Its properties are slightly different, but can you still solve it...
### Number Pyramids
##### Stage: 3 Challenge Level:
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
### Tilted Squares
##### Stage: 3 Challenge Level:
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
### Painted Cube
##### Stage: 3 Challenge Level:
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
### A Leg to Stand On
##### Stage: 3 Short Challenge Level:
Weekly Problem 30 - 2012
Can you work out the number of chairs at a cafe from the number of legs?
### Arithmagons
##### Stage: 3 Challenge Level:
Can you find the values at the vertices when you know the values on the edges?
### T-table
##### Stage: 2 and 3 Short Challenge Level:
Weekly Problem 24 - 2013
What is the maximum number of T shaped pieces that can be placed on the grid without overlapping?
### More Total Totality
##### Stage: 3 Short Challenge Level:
Weekly Problem 26 - 2013
Is it possible to arrange the numbers 1-6 on the nodes of this diagram, so that all the sums between numbers on adjacent nodes are different?
### Sum Total
##### Stage: 2 and 3 Short Challenge Level:
Weekly Problem 50 - 2013
Each letter in this sum represents a different digit. How many solutions are there?
### Odds and Evens
##### Stage: 3 and 4 Challenge Level:
Is this a fair game? How many ways are there of creating a fair game by adding odd and even numbers?
### Birthday Party
##### Stage: 2 and 3 Short Challenge Level:
Weekly Problem 48 - 2006
The 30 students in a class have 25 different birthdays between them. What is the largest number that can share any birthday?
### Old Order
##### Stage: 2 and 3 Short Challenge Level:
Weekly Problem 6 - 2007
Who is the youngest in this family?
### So Many Sums
##### Stage: 3 Short Challenge Level:
Weekly Problem 20 - 2007
In this addition each letter stands for a different digit, with S standing for 3. What is the value of YxO?
### Route to Infinity
##### Stage: 3 and 4 Challenge Level:
Can you describe this route to infinity? Where will the arrows take you next?
### Paradoxical
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 41 - 2007
The Queen of Spades always lies for the whole day or tells the truth for the whole day. Which of these statements can she never say?
### Distinct in a Line
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 51 - 2008
This grid can be filled up using only the numbers 1, 2, 3, 4, 5 so that each number appears just once in each row, once in each column and once in each diagonal. Which number goes in the centre square?
### Out of Line
##### Stage: 3 Short Challenge Level:
Weekly Problem 29 - 2009
Fill in the grid with A-E like a normal Su Doku. Which letter is in the starred square?
### Diminishing Returns
##### Stage: 3 Challenge Level:
In this problem, we have created a pattern from smaller and smaller squares. If we carried on the pattern forever, what proportion of the image would be coloured blue?
### Square LCM
##### Stage: 3 Short Challenge Level:
Weekly Problem 7 - 2010
Using the HCF and LCM of the numerators, can you deduce which of these fractions are square numbers?
### Knights and Knaves
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 35 - 2010
Knights always tell the truth. Knaves always lie. Can you catch these knights and knaves out?
### Takeaway Time
##### Stage: 2 and 3 Short Challenge Level:
Weekly Problem 27 - 2011
Pizza, Indian or Chinese takeaway. Each teenager from a class only likes two of these, but can you work which two?
### What Numbers Can We Make?
##### Stage: 3 Challenge Level:
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
### Magic Letters
##### Stage: 3 Challenge Level:
Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
### Seven Squares
##### Stage: 3 and 4 Challenge Level:
Watch these videos to see how Phoebe, Alice and Luke chose to draw 7 squares. How would they draw 100?
### Mini Cross-number
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 47 - 2014
Which digit replaces x in this crossnumber?
### Digital Book
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 11 - 2015
If it takes 852 digits to number all the pages of a book, what is the number of the last page?
### Digital Counter
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 13 - 2015
When the numbers from 1 to 1000 are written on a blackboard, which digit appears the most number of times?
### Staircase Sum
##### Stage: 3 Short Challenge Level:
Weekly Problem 14 - 2015
The digits 1-9 have been written in the squares so that each row and column sums to 13. What is the value of n?
### Age Old Lies
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 20 - 2015
Four brothers give statements about the order they were born in. Can you work out which two are telling the truth?
### Self-referential
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 30 - 2015
How many ways are there of completing this table so that each row tells you how many there are of the numbers 1, 2, 3 and 4?
### Multiplication Magic Square
##### Stage: 3 Short Challenge Level:
Weekly Problem 32 - 2015
Can you work out the missing numbers in this multiplication magic square?
### Down and Along
##### Stage: 3 Short Challenge Level:
Weekly Problem 50 - 2015
Can you work out the values of J, M and C in this sum?
### Other Side
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 8 - 2016
The diagram shows a quadrilateral $ABCD$, in which $AD=BC$, $\angle CAD=50^\circ$, $\angle ACD=65^\circ$ and $\angle ACB=70^\circ$. What is the size of $\angle ABC$?
### Equilateral Pair
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 39 - 2016
In the diagram, VWX and XYZ are congruent equilateral triangles. What is the size of angle VWY?
### Shaded Square
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 41 - 2016
The diagram shows a square, with lines drawn from its centre. What is the shaded area?
### Peter's Primes
##### Stage: 3 Short Challenge Level:
Weekly Problem 22 - 2017
Peter wrote a list of all the numbers that can be formed by changing one digit of the number 200. How many of Peter's numbers are prime?
### Bookshop
##### Stage: 3 and 4 Short Challenge Level:
Weekly Problem 3 - 2017
Books cost £3.40 and magazines cost £1.60. If Clara spends £23 on books and Magazines, how many of each does she buy?
### Shared Vertex
##### Stage: 3 Short Challenge Level:
Weekly Problem 38 - 2017
In the diagram, what is the value of $x$?
### Long List
##### Stage: 3 Short Challenge Level:
Weekly Problem 47 - 2017
How many numbers do I need in a list to have two squares, two primes and two cubes?
### Reasoning, Justifying, Convincing and Proof - Short Problems
##### Stage: 3 and 4
A collection of short Stage 3 and 4 problems on Reasoning, Justifying, Convincing and Proof.
https://dickimaw-books.com/featuretracker.php?key=30
# Feature Tracker
ID: 30
Reported: 2015-01-04 18:12:37
Status: Open
Package: glossaries
Summary: Always use ranges in number lists
## Description
I think it's inconsistent, that the number lists in glossaries use ranges when using option 2 or 3 and do not use ranges with option 1 (especially as option 1 seems to be the future due to its elegance). I recommend to either use always ranges or make it always configurable.
I implemented the first approach by replacing \glsnoidxloclist and \glsnoidxloclisthandler and added some additional code:
\renewcommand*{\glsnoidxloclist}[1]{%
% Remove duplicates and compress the list by using ranges.
% The basic idea is to save a range begin and a range end, without
% immediately printing every element.
% Only if the range is complete, print it. This can only be checked when
% processing the next element or after processing the list.
\def\@gls@noidxloclist@sep{\def\@gls@noidxloclist@sep{\delimN}}%
\def\@gls@noidxloclist@prev{}%
\forlistloop{\glsnoidxloclisthandler}{#1}%
\ifdef\@gls@noidxloclist@range@end{%
\@gls@noidxloclist@print@range
\undef\@gls@noidxloclist@range@begin
\undef\@gls@noidxloclist@range@end
}{%
\ifdef\@gls@noidxloclist@range@begin{%
\@gls@noidxloclist@sep\@gls@noidxloclist@range@begin
\undef\@gls@noidxloclist@range@begin
}{%
}%
}%
}
\renewcommand*{\glsnoidxloclisthandler}[1]{%
\ifdefstring{\@gls@noidxloclist@prev}{#1}{%
}{%
\ifdef\@gls@noidxloclist@range@begin{%
\ifdef\@gls@noidxloclist@range@end{%
\@gls@noidxloclist@ifsuccessor{\@gls@noidxloclist@range@end}{#1}{%
\def\@gls@noidxloclist@range@end{#1}%
}{%
\@gls@noidxloclist@print@range
\def\@gls@noidxloclist@range@begin{#1}%
\undef\@gls@noidxloclist@range@end
}%
}{%
\@gls@noidxloclist@ifsuccessor{\@gls@noidxloclist@range@begin}{#1}{%
\def\@gls@noidxloclist@range@end{#1}%
}{%
\@gls@noidxloclist@sep\@gls@noidxloclist@range@begin
\def\@gls@noidxloclist@range@begin{#1}%
}%
}%
}{%
\def\@gls@noidxloclist@range@begin{#1}%
}%
\def\@gls@noidxloclist@prev{#1}%
}%
}
\def\@gls@noidxloclist@print@range{%
\expandafter\@gls@noidxloclist@ifsuccessor\expandafter\@gls@noidxloclist@range@begin\expandafter{\@gls@noidxloclist@range@end}{%
\@gls@noidxloclist@sep\@gls@noidxloclist@range@begin\@gls@noidxloclist@sep\@gls@noidxloclist@range@end
}{%
\@gls@noidxloclist@sep\@gls@noidxloclist@range@begin\delimR\@gls@noidxloclist@range@end
}%
}
% Executes the true branch if #2 is a successor of #1 in a mathematical sense (e.g., 5 is successor of 4).
\def\@gls@noidxloclist@ifsuccessor#1#2{%
% The first argument gets expanded once so that both arguments consist of five tokens.
\expandafter\@gls@noidxloclist@ifsuccessor@\expandafter{#1}{#2}%
}
\def\@gls@noidxloclist@ifsuccessor@#1#2{%
% The fifth token is extracted and expanded, as it contains a group.
\expandafter\edef\expandafter\@gls@noidxloclist@firstnum\expandafter{\@fifthoffive#1}%
\expandafter\edef\expandafter\@gls@noidxloclist@secondnum\expandafter{\@fifthoffive#2}%
% The page number can be checked now.
\ifnumequal{\@gls@noidxloclist@firstnum + 1}{\@gls@noidxloclist@secondnum}%
}
\long\def\@fifthoffive#1#2#3#4#5{#5}
The code can probably be quite a bit improved, simplified and generalized. Testing is also necessary, I only tested it with one document in one configuration.
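For readers who prefer to prototype the range-forming logic outside TeX, here is a rough Python sketch of the same algorithm. The function name and output convention are illustrative only, inferred from the handler above (duplicates are skipped, and two-element runs are kept as separate entries, since the TeX code only emits `\delimR` for longer runs):

```python
def compress_locations(locs):
    """Collapse runs of three or more consecutive locations into ranges,
    skipping duplicates, e.g. [1, 2, 3, 5, 6, 9] -> '1-3, 5, 6, 9'."""
    parts = []

    def flush(start, prev):
        if start == prev:
            parts.append(str(start))
        elif prev == start + 1:          # two-element run: no range
            parts.extend([str(start), str(prev)])
        else:
            parts.append(f"{start}-{prev}")

    if not locs:
        return ""
    start = prev = locs[0]
    for n in locs[1:]:
        if n == prev:                    # duplicate location: ignore
            continue
        if n == prev + 1:                # run continues
            prev = n
        else:                            # run ended: flush and restart
            flush(start, prev)
            start = prev = n
    flush(start, prev)
    return ", ".join(parts)

print(compress_locations([1, 2, 2, 3, 5, 6, 9]))  # 1-3, 5, 6, 9
```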
Patrick Häcker
No mwe.tex
## Evaluation
Option 1 is an option of last resort if you are unable to use any external indexing tools for some reason. The build time is already slow with option 1, range-forming would further add to it. If you require ranges then you are far better off using an indexing application which can perform this action far more efficiently than TeX.
Note that if you use explicit ranges (with the ( and ) formats), the glossaries-extra.sty (which modifies \glsnoidxdisplayloc) will implement the range.
http://cms.math.ca/cjm/msc/42C10?fromjnl=cjm&jnl=CJM
location: Publications → journals
Search results
Search: MSC category 42C10 ( Fourier series in special orthogonal functions (Legendre polynomials, Walsh functions, etc.) )
Expand all Collapse all Results 1 - 4 of 4
1. CJM 2010 (vol 62 pp. 1182)
Yue, Hong
A Fractal Function Related to the John-Nirenberg Inequality for $Q_{\alpha}({\mathbb R^n})$
A borderline case function $f$ for $Q_{\alpha}({\mathbb R^n})$ spaces is defined as a Haar wavelet decomposition, with the coefficients depending on a fixed parameter $\beta>0$. On its support $I_0=[0, 1]^n$, $f(x)$ can be expressed by the binary expansions of the coordinates of $x$. In particular, $f=f_{\beta}\in Q_{\alpha}({\mathbb R^n})$ if and only if $\alpha<\beta<\frac{n}{2}$, while for $\beta=\alpha$, it was shown by Yue and Dafni that $f$ satisfies a John-Nirenberg inequality for $Q_{\alpha}({\mathbb R^n})$. When $\beta\neq 1$, $f$ is a self-affine function. It is continuous almost everywhere and discontinuous at all dyadic points inside $I_0$. In addition, it is not monotone along any coordinate direction in any small cube. When the parameter $\beta\in (0, 1)$, $f$ is onto from $I_0$ to $[-\frac{1}{1-2^{-\beta}}, \frac{1}{1-2^{-\beta}}]$, and the graph of $f$ has a non-integer fractal dimension $n+1-\beta$.
Keywords: Haar wavelets, Q spaces, John-Nirenberg inequality, greedy expansion, self-affine, fractal, box dimension
Categories: 42B35, 42C10, 30D50, 28A80
2. CJM 2004 (vol 56 pp. 431)
Rosenblatt, Joseph; Taylor, Michael
Group Actions and Singular Martingales II, The Recognition Problem
We continue our investigation in [RST] of a martingale formed by picking a measurable set $A$ in a compact group $G$, taking random rotates of $A$, and considering measures of the resulting intersections, suitably normalized. Here we concentrate on the inverse problem of recognizing $A$ from a small amount of data from this martingale. This leads to problems in harmonic analysis on $G$, including an analysis of integrals of products of Gegenbauer polynomials.
Categories: 43A77, 60B15, 60G42, 42C10
3. CJM 1998 (vol 50 pp. 1236)
Kalton, N. J.; Tzafriri, L.
The behaviour of Legendre and ultraspherical polynomials in $L_p$-spaces
We consider the analogue of the $\Lambda(p)$-problem for subsets of the Legendre polynomials or more general ultraspherical polynomials. We obtain the ``best possible'' result that if $2
Categories: 42C10, 33C45, 46B07
4. CJM 1997 (vol 49 pp. 175)
Xu, Yuan
Orthogonal Polynomials for a Family of Product Weight Functions on the Spheres
Based on the theory of spherical harmonics for measures invariant under a finite reflection group developed by Dunkl recently, we study orthogonal polynomials with respect to the weight functions $|x_1|^{\alpha_1}\cdots |x_d|^{\alpha_d}$ on the unit sphere $S^{d-1}$ in $\mathbb{R}^d$. The results include explicit formulae for orthonormal polynomials, reproducing and Poisson kernel, as well as intertwining operator.
Keywords: orthogonal polynomials in several variables, sphere, h-harmonics
Categories: 33C50, 33C45, 42C10
https://socratic.org/questions/how-do-you-multiply-3n-2-2n-2-3n-4
# How do you multiply -3n^2(-2n^2+3n+4)?
Jun 13, 2017
#### Answer:
Use the distributive property.
The multiplied form is $6 {n}^{4} - 9 {n}^{3} - 12 {n}^{2}$
#### Explanation:
The distributive property tells us that:
$\textcolor{red}{a} \cdot \left(b + c\right) = \textcolor{red}{a} \cdot b + \textcolor{red}{a} \cdot c$
So we can use this property to distribute the $- 3 {n}^{2}$ term:
$\textcolor{red}{- 3 {n}^{2}} \left(- 2 {n}^{2} + 3 n + 4\right)$
$= \textcolor{red}{- 3 {n}^{2}} \cdot \left(- 2 {n}^{2}\right) + \textcolor{red}{- 3 {n}^{2}} \cdot \left(3 n\right) + \textcolor{red}{- 3 {n}^{2}} \cdot 4$
Now just multiply each group of terms together. Remember that two negatives make a positive.
$= 6 {n}^{4} - 9 {n}^{3} - 12 {n}^{2}$
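The distribution above can be double-checked mechanically. As an illustration (not part of the original answer), this Python sketch multiplies the two polynomials as coefficient lists, lowest degree first:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # coefficient of n^(i+j)
    return out

# -3n^2          ->  [0, 0, -3]   (constant, n, n^2)
# -2n^2 + 3n + 4 ->  [4, 3, -2]
print(poly_mul([0, 0, -3], [4, 3, -2]))  # [0, 0, -12, -9, 6], i.e. 6n^4 - 9n^3 - 12n^2
```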
http://openmx.ssri.psu.edu/thread/526?q=thread/526
# Default M matrix in type="RAM" models
Joined: 07/31/2009 - 15:12
Default M matrix in type="RAM" models
Hi all,
I'm writing a simple latent change score demo using mxPath, and found something interesting. I believe we've talked about default means models before, but I don't recall.
In this model, two occasions of data (wisc1 and wisc6 below) are restated as the first occasion (wisc1) and a change or difference score (diff). As such, the second observation (wisc6) is fully explained, and has no intercept and no residual. By default, specifying 'type="RAM"' assigns freely estimated manifest means, so the code below actually has three means for two manifest variables. The easy thing is just to add an extra path function, specifying a fixed intercept of zero for wisc6. I'm asking a design question, however: should I have to override a default and tell OpenMx not to give wisc6 a mean?
x <- matrix(c(40.658, 50.686, 50.686, 108.014), nrow=2)
y <- c(18.781, 47.341)
example <- mxModel("My Title",
mxData(x, type="cov", means=y, numObs=204),
type="RAM",
manifestVars=c("wisc1", "wisc6"),
latentVars="diff",
mxPath(c("wisc1", "diff"), "wisc6",
TRUE, 1, FALSE, 1, NA),
mxPath(c("wisc1", "diff"), c("wisc1", "diff"),
FALSE, 2, TRUE, 45, c("v_1", "v_d")),
mxPath("wisc1", "diff",
FALSE, 2, TRUE, 10, "cov_1d"),
mxPath("one", c("wisc1", "diff"),
FALSE, 1, TRUE, 1, c("mean1", "mean_diff"))
)
Joined: 07/30/2009 - 14:03
My take is that yes, you should have to override a default. Having a means vector implies that you want to do something with the means. That you aren't is only a function of your particular model.
Joined: 07/31/2009 - 15:12
Two points:
-I have data, which implies I want to do something with it. There are no other default parameters included in OpenMx. Why should we assume a fully saturated means model and a complete lack of specification for the covariance model?
-I have a secondary issue with a default, unnamed parameter. It is a default that will in, virtually all cases, have to be overridden, either to remove the parameter or name it so I can more easily interpret and edit it. If an analyst thought "OpenMx does exactly what you tell it to," they might not notice the extra parameter. This could easily be an issue with categorical data models, where one may not want manifest means.
Joined: 07/30/2009 - 14:03
OK. I stand corrected.
In yesterday's developer's meeting we discussed your case. Here are the issues we took into consideration:
1. When a RAM model is created, we should not add any paths by default. Any path that is added should be added by a user. This should hold for the A matrix, the S matrix as well as for the M matrix.
2. There are three cases when a means model is required (i.e., in mxData there is an argument type="raw", or type="sscp", or means=) . All three cases should act exactly the same.
3. If a means model is required but not specified the user should be alerted. We should not build a means model in the background. This is consistent with the OpenMx philosophy that everything is "out front" and visible in a script. Thus, an error should be generated if a means model is required but not supplied.
So we decided to implement the following change:
1. Remove the automatic creation of a means matrix M when the means= argument is encountered in mxData. Instead, the means matrix M will be created only when a means model mxPath is specified. Thus, if you want to have means fixed to zero, you will need to specify that. This behavior is what already is in effect for type="raw". Scripts that previously relied on automatic generation of a saturated means model will now throw an error.
Joined: 07/31/2009 - 15:24
Some minor comments to Steve's post. The changes he talked about will be in OpenMx 0.3.2, coming out 5/22-23/2010. Also, to reproduce the earlier behavior of OpenMx, you can use the following function:
createMeansPaths <- function(model) {
  # Reproduce the pre-0.3.2 default: freely estimated means for
  # manifest variables, means fixed at zero for latent variables.
  if (length(model@manifestVars) > 0) {
    model <- mxModel(model,
      mxPath(from = "one", to = model@manifestVars,
        free = TRUE, values = 0))
  }
  if (length(model@latentVars) > 0) {
    model <- mxModel(model,
      mxPath(from = "one", to = model@latentVars,
        free = FALSE, values = 0))
  }
  return(model)
}
http://stochasticprocess.blogspot.com/2012/03/reading-physics-textbooks.html
When not at work with students, I spend my time in my room either reading, calculating something using pen and paper, or using a computer. I read almost anything: from the pornographic to the profound, although my main interests are mathematics and physics. "When I get a little money I buy books; and if any is left I buy food and clothes." -Erasmus
## Sunday, March 4, 2012
One thing I noticed while talking with some freshmen physics majors is how few of them actually know how to use a physics book. I think it's due to how different high school physics textbooks are from university physics texts-- the high school texts are usually readable on a per chapter basis, as opposed to a text like Young and Freedman's or Resnick's.
It's also due to how low high school physics expectations are here. I recall surviving high school physics without actually learning anything, and doing a cursory reading of my high school physics book when there was a looming exam. So it comes as a surprise to many students how different university physics textbooks are from what they've encountered in high school.
The common reading strategy seems to be memorizing equations or the end of the chapter summary, and so whenever students encounter textbooks without end-of-chapter summaries, they complain how hard it is to read the book. That's one of the most common complaints I encounter whenever I assign reading from Spacetime Physics.
A similar strategy is looking for boxed equations and then memorizing all of them. When such a student encounters a physics problem, the problem-solving strategy becomes "Try all the equations with the same variables!". What students forget is the boxed equation is actually something that should be understood in context. The surrounding text is there to explain what assumptions underlie a derived result, and once the underlying assumptions are understood, one can also understand the limitations.
One of my favorite ways of teaching students to be sensitive to underlying assumptions is the De Broglie frequency relation $E=hf$. The problem I assign is to find the De Broglie frequency of a free electron in terms of the wavelength. (The mass of the electron is not given within the problem statement but listed on a table of physical constants.) One common mistake is to write, for free particles with nonzero mass, $E=\frac{hc}{\lambda}$; the wrong assumption here is the wavelength frequency relation $\lambda f=c$ which only works for particles with zero mass. To ensure that my students make an effort to understand underlying assumptions, I make multiple choice items with distracters that reproduce their most common mistakes.
So a student who memorizes equations and is not sensitive to underlying assumptions will find the correct answer (which is derivable from invariance of mass) and the wrong one. One student, who merely memorized formulae and then used a trial and error approach, was unpleasantly surprised to find that his trial and error approach generated all the choices in many test items.
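To make the contrast concrete, here is a Python sketch (my own illustration, not from the course materials) comparing the correct frequency for a massive free particle with the massless-particle shortcut, using standard rounded values of the constants:

```python
import math

# Physical constants (SI units, rounded standard values)
h   = 6.626e-34    # Planck constant, J*s
c   = 2.998e8      # speed of light, m/s
m_e = 9.109e-31    # electron mass, kg

def de_broglie_frequency(wavelength):
    """f = E/h with the relativistic energy E = sqrt((pc)^2 + (mc^2)^2),
    where p = h/wavelength. Valid for massive and massless particles."""
    p = h / wavelength
    E = math.sqrt((p * c) ** 2 + (m_e * c ** 2) ** 2)
    return E / h

def naive_frequency(wavelength):
    """The common mistake: assumes wavelength * f = c,
    which holds only for particles with zero mass."""
    return c / wavelength

lam = 1e-10  # 0.1 nm, a typical electron de Broglie wavelength
print(de_broglie_frequency(lam))  # dominated by the rest-energy term
print(naive_frequency(lam))       # much smaller: the rest mass is ignored
```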
Having talked about the wrong way of reading an introductory physics text, what is the correct way? The right way is to read introductory physics texts on a per-section basis. After reading the section, go to the end of the chapter and then try solving all the odd-numbered problems for that section only.
Problem-solving is a sanity check. The only way to find out if you've actually understood what you've read is to try to use it, and that is what the problems are for. If you can't solve the problems, reread the section (assuming you actually understood the prerequisite knowledge!) and then try to solve the problems again. If it still doesn't help, try talking it over with classmates, a tutor, or with your professor. Bring your problem-solving attempts, and then try to find out why you get the wrong answers-- finding out why you're making mistakes is also part of the learning process.
Obviously, this is not something you can cram, and it means doing the reading and problem-solving on a regular basis. But if you do it right, your understanding will last a lot longer than the formula-memorizers. As a bonus, when faced with a new technical situation, you can build on what you know, and eventually become the expert in your field.
http://cinturonesmagama.com/where-to-byfq/430bea-bryan-adams---so-far-so-good-songs
Work is the application of a force, F, to move an object over a distance, d, in the direction that the force is applied. This definition can be extended to rigid bodies by defining the work of the torque and rotational kinetic energy. Energy can neither be created nor destroyed; it is only converted from one form to another, and both work and energy are measured in joules.

To derive the theorem, consider a particle of mass m moving in the x direction at constant acceleration a. Kinematics gives v_f² = v_i² + 2as (equation 2). Multiplying both sides by m/2 and using F = ma gives ½m·v_f² = ½m·v_i² + Fs (equation 3). The product Fs is the work done by the net force, so

W_net = ½m·v_f² − ½m·v_i² = KE_f − KE_i,

where KE_f is the final kinetic energy and KE_i is the initial kinetic energy. In words: "The change of an object's kinetic energy when it changes its position from A to B equals the work done on it by all forces on it, computed over a well-defined path connecting those endpoints." In this form the equation is valid only when the force is constant; a variable (non-uniform) force is treated in the section on work done by a variable force. Restated for non-conservative forces: the work they do equals the overall change in mechanical energy of the system, and that kinetic energy can later be converted to other forms of energy (like heat).

Example (ramp): for a child of weight mg = 600 N on a frictionless 30° ramp, the force down the ramp is mg·sin(30°) = 600 N × 0.5 = 300 N, and the work done by gravity (force × distance) equals the kinetic energy picked up down the ramp. Note that to compute the work done by friction directly from W = F·d, we would first need the friction coefficient to calculate the frictional force.
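The constant-acceleration derivation can be checked numerically. This is a minimal sketch; the mass, speed, acceleration, and distance are arbitrary illustrative values, not numbers from the text:

```python
def work_energy_check(m, v_i, a, s):
    """Compare W_net = F*s with the change in kinetic energy,
    using v_f^2 = v_i^2 + 2*a*s for constant acceleration."""
    F = m * a                      # Newton's second law
    W = F * s                      # work done by the constant net force
    v_f_sq = v_i ** 2 + 2 * a * s  # kinematic relation (equation 2)
    dKE = 0.5 * m * v_f_sq - 0.5 * m * v_i ** 2
    return W, dKE

W, dKE = work_energy_check(m=2.0, v_i=3.0, a=1.5, s=4.0)
print(W, dKE)  # 12.0 12.0  (the two agree, as the theorem says)
```

Whatever numbers are substituted, the two return values agree, which is exactly the content of the theorem.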
Forces acting on the block are gravity, the normal reaction, and the frictional force. Practice question: what is the work that Jane has done on the crate after 6 m? (Jane pushes a large crate of mass 40 kg along the floor in a straight line, so multiply her applied force by the 6 m displacement.) For a braking car, we first need to determine the car's kinetic energy at the moment of braking using $$E_k=\frac{1}{2}m{v}^{2}$$. According to the law of conservation, energy is only changed from one form to another: if you do work on an object it gains energy, and if an object does work it loses energy. The work-energy theorem can also be applied to an object's potential energy, which is known as 'stored energy.' Imagine a skier moving at a constant velocity on a flat, frictionless surface: zero net work is done, so the kinetic energy does not change. The theorem includes work by all forces, no matter what kind, friction and the normal force included. One setup to keep in mind for later: an 8.0 kg block moving at 3.2 m/s.
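The braking example can be sketched in a few lines. The car's mass, speed, and braking force below are assumed illustrative numbers, not values given in the text:

```python
m, v = 1000.0, 20.0        # assumed: a 1000 kg car moving at 20 m/s
E_k = 0.5 * m * v ** 2     # kinetic energy at the moment of braking

F_brake = 8000.0           # assumed constant frictional braking force, in N
# Work-energy theorem over the stop: -F_brake * d = 0 - E_k
d = E_k / F_brake          # stopping distance

print(E_k, d)  # 200000.0 25.0
```

All of the car's kinetic energy is removed by friction, so the stopping distance scales with v²: doubling the speed quadruples the distance.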
This result is called the work-energy theorem. Restated: the work done by all the forces acting on a particle equals the change in the particle's kinetic energy. The next section takes up the relationship between force and potential energy.
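For a variable force the work is the integral W = ∫_a^b F(x) dx. As a quick numerical check, take a Hooke's-law spring F(x) = kx (the spring constant here is an assumed illustrative value, not from the text); the trapezoid rule reproduces the exact result ½kx²:

```python
def work_trapezoid(F, a, b, n=1000):
    """Approximate W = the integral of F(x) from a to b, trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (F(a) + F(b)) + sum(F(a + i * h) for i in range(1, n))
    return total * h

k = 200.0                          # assumed spring constant, N/m
work = work_trapezoid(lambda x: k * x, 0.0, 0.1)
exact = 0.5 * k * 0.1 ** 2         # closed form: about 1.0 J
print(abs(work - exact) < 1e-9)    # True
```

Graphically, this sum is exactly the shaded area between F(x) and the x-axis from x_i to x_f.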
Potential energy and the work energy theorem. A spring 40 mm long is stretched by the application of a force. We will start with a special case: a particle of mass m moving in the x direction at constant acceleration a. Work-Energy theorem is very useful in analyzing situations where a rigid body moves under several forces. 0. Then click the button to view the answers. This makes sense as both have the same units, and the application of a force over a distance can be seen as the use of energy to produce work. That energy is almost always kinetic energy (energy of motion). The Work-Energy Theorem says that if you do work on an object, it gains energy. W net is the work done by the net force on the moving object of mass m from the initial state with speed v i to the final state with a speed v f . But that is not given. work done by spring on the attached body. Pro Subscription, JEE It is derivable from conservation of energy and the application of the relationships for work and energy, so it is not independent of the conservation laws. Work-Energy theorem is very useful in analyzing situations where a rigid body moves under several forces. So work done by spring will always be subtracted from body. A force does work on the block and sets it in motion. Attempt Questions. The cannons on 19th century frigates were ponderous things. Physics Grade XI Notes: Bernoulli’s Theorem. A graphical approach to this would be finding the area between F(x) and x from xi to xf . Let's take the constant acceleration as 'a.'. The point is that the Work-Energy theorem relates the work done by the net force to change in Kinectic energy and not the work done by individual works. 0. Looking at the proof you see the use of $\mathbf{F} = m\mathbb{a}$ and this by Newton's second law is only valid when $\mathbf{F}$ is the net force. compressing a spring) we need to use calculus to find the work done. 
Pro Lite, CBSE Previous Year Question Paper for Class 10, CBSE Previous Year Question Paper for Class 12. Application of Work energy theorem. This is a direct application of the work-energy theorem, which means it consists entirely of computing a line integral. Work energy theorem gives an accurate conclusion avoiding lengthy process. From this App you can learn : Define, discuss, understand and correlate the terms work, power and energy. It reaches the top and comes back to its initial position and stops. This is another way of writing the Work-Energy theorem and in my mind it's a little bit clearer. This makes sense as both have the same units, and the application of a force over a distance can be seen as the use of energy to produce work. Work-energy theorem (vector form) A constant force of 35 N 35\text{ N} 3 5 N is applied to an object at an angle of − 4 5 ∘ -45^\circ − 4 5 ∘ with the horizontal as shown above. If you're seeing this message, it means we're having trouble loading external resources on our website. Get detailed, expert explanations on Application Of Work-Energy Theorem that can improve your comprehension and help with homework. Work, Energy, and Power for AP Physics C: Mechanics Applications of the Work – Energy Theorem: Example Problem 2 Previous Lesson Back to Course Next Lesson Hence the work done by friction is negative. 2 m/s. 1. Work Energy Theorem And Its Application: What is Energy? Work-Energy theorem is very useful in analyzing situations where a rigid body moves under several forces. Step-1: Map the FBD of the object, thus recognising the forces operating on the purpose. Application Of Work-Energy Theorem Application Of Work-Energy Theorem Definition. 3. In equation 2, multiply both the sides with ‘m’ mass. This is known as the work-energy theorem. We apply the work-energy theorem. Work (completed by all kind of energy or forces) = Change (Difference) in Kinetic Energy, The constant force will result in constant acceleration. 
In this live Gr 12 Physical Sciences show we take a look at the Work-Energy Theorem. Work relates to displacement, and displacement relates to kinetic energy. It can only be transformed from one kind to another. Work, W, is described by the equation © As we know that a rigid body cannot store potential energy in its lattice due to rigid structure, it can only possess kinetic energy. Workdone Under a Variable Force. The application of the work–energy theorem to systems on which frictional forces act has. Differentiate between mechanical and non–mechanical energy. The unit of Energy is same as of Work i.e. In other words, we can say that work and energy are the two essential elements to understand any physical movement. Energy is transferred into the system, but in what form? 0. been recognized as easily misleading, because not every product of force and displacement . Example 1 The formula for this theorem is – Kf – Ki = W (Final Kinetic Energy – Initial Kinetic Energy = net work done). Thus the work done by any force acting on a rigid body is equal to the change in its kinetic energy. A) 900 J B) 500 J C) 800 J D) 950 J. Therefore, the change in the car’s kinetic energy is equal to the work done by the frictional force of the car’s brakes. Steps to Approach Problems on Work Energy Theorem? Beginning velocity of the bullet = 500 m/s. In this case, one can define a potential energy for each of these forces: where A and B are initial and final positions, respectively. Add your answer and earn points. Understand how the work-energy theorem only applies to the net work, not the work done by a single source. If you work on separate values of both work and force, it won't be easy to calculate. Problems related to work and applied force can be solved with this theorem, or you can calculate both the entities separately. So, if all the object particles behave like particles, we can consider the whole object as a particle. If the force varies (e.g. 
Work-Energy Theorem: Definition and Application Work and energy are closely related in physics. To learn more about the work-energy theorem, … One example is the use of air bags in automobiles. If looking at the application of the work energy theorem whereby acceleration is assumed to be constant & Wnet=1/2 (m) (vnet^2) with Wnet=0, it would mean that the net velocity = 0 as the mass of an object will not change. of the bullet = 1/2{0.02(500)2 – 0.02(400)2}. NEET Physics Work,Energy and Power questions & solutions with PDF and difficulty level Hence, isn't it that the object will not be moving, instead of moving at a constant velocity? The shaded portion represents the work done by force F(x). To understand the meaning and possible applications of the work-energy theorem. In physics, the term work has a very specific definition. The answers depend on the situation. In this live Gr 12 Physical Sciences show we take a look at the Work-Energy Theorem. Use the integral and derivative to derive the Work-Energy Theorem or what I prefer to call the Net Work-Kinetic Energy Theorem. The work you do on the box turns into kinetic energy of the box itself. Find an answer to your question What are the applications of Work-energy theorem ? Knowledge application - use your knowledge to answer questions about changes in energy Additional Learning . Viewed 148 times 1 $\begingroup$ Jane pushes a large crate of mass 40 kg along the floor in a straight line. Being a conservative force Wg is zero as the body returns to its initial position. That's the Work Energy Theorem. Consider the falling and rolling motion of the ball in the following two resistance-free situations. It is in fact a specific application of conservation of energy. To complete the theorem we define kinetic energy as the energy of motion of a particle. Work energy theorem gives an accurate conclusion avoiding lengthy process. 
Worked example: consider a bullet that has 20 g mass and velocity 500 m/s. It strikes a tree and emerges on the other side at 400 m/s; the work done by the bullet on the tree equals its loss of kinetic energy. For conservative forces one can define a potential energy for each force, where A and B are the initial and final positions respectively; an interesting case of the work-energy theorem occurs when all the forces acting on a body are conservative. Vector form: for a constant force of 35 N applied at −45° to the horizontal while the object is pulled 12 m at 15° to the horizontal, the work is W = Fd cos θ, with θ the angle between the force and the displacement.
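The bullet numbers can be checked directly; the work done on the tree is the kinetic energy the bullet loses:

```python
m = 0.020                  # 20 g bullet, in kg
v_i, v_f = 500.0, 400.0    # speeds before and after the tree, m/s

W_on_tree = 0.5 * m * v_i ** 2 - 0.5 * m * v_f ** 2
print(W_on_tree)  # about 900 J, matching choice A
```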
The formula: W = ΔK, where W is the net work and ΔK = K_f − K_i is the change in kinetic energy. One everyday application is the air bag: by lengthening the distance and time over which an occupant's kinetic energy is removed, it reduces the force of the collision. When a skier waits at the top of a slope, stored gravitational potential energy is ready to become kinetic energy on the way down. The theorem also earns its keep in the laboratory, where it plays a crucial role because it avoids difficult and boring mathematical treatments, and it can be used in a real context to verify that friction is a force that really acts on a cart and does work on it. When the applied force varies, you can see the change in the force from its graph.
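Applying W = ΔK to the round-trip incline problem (the 10 kg block that goes up at 20 m/s and returns to its start at rest) makes the sign bookkeeping explicit:

```python
m, v0 = 10.0, 20.0              # block mass and launch speed, from the problem
dK = 0.0 - 0.5 * m * v0 ** 2    # ends at rest, so all kinetic energy is gone

W_gravity = 0.0   # conservative force over a closed round trip
W_normal = 0.0    # always perpendicular to the displacement
W_friction = dK - W_gravity - W_normal

print(W_friction)  # -2000.0  (friction does negative work)
```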
A net force of 10 N is constantly applied on the 8.0 kg block in the direction of its movement, until it has moved 16 m. The net work is then W = 10 N × 16 m = 160 J, and by the work-energy theorem the block's kinetic energy rises from ½(8.0)(3.2)² ≈ 41 J to about 201 J. For the bullet, with mass m = 20 g (= 0.02 kg), the change in kinetic energy of the bullet is ½{0.02(500)² − 0.02(400)²} = 900 J, which is the work done on the tree. Poynting's theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation.
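The same bookkeeping covers the 8.0 kg block; a short sketch using only the numbers stated in the problem:

```python
m, v_i = 8.0, 3.2            # block mass and initial speed
F_net, d = 10.0, 16.0        # constant net force and displacement

W = F_net * d                # net work done on the block
K_i = 0.5 * m * v_i ** 2     # initial kinetic energy, about 41 J
K_f = K_i + W                # work-energy theorem
v_f = (2 * K_f / m) ** 0.5   # final speed, about 7.1 m/s
```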
Problems on the work-energy theorem can seem difficult, but they should be solved with the proper steps to get accurate results; review the key concepts, equations, and skills first. Two shortcuts to remember: W_N is zero because the normal force is always perpendicular to the displacement, and a conservative force such as gravity does zero net work over a closed round trip. What's Included (lab apparatus): 1x PASPORT High Resolution Force Sensor (PS-2189); 1x PASPORT Motion Sensor (PS-2103A); 1x Force Sensor Track Bracket (ME-6622); 1x IDS Spring Kit (ME-8999); 1x Braided Physics String (SE-8050). This product requires PASCO software for data collection and analysis.
It represents the work that is done by a variable force. Well, your logic and thinking skill help you here to get it answered quickly through a method -oriented approach. We know that work is a result of force … The net work done on any particle equals the change in that particle’s... Overview of Application Of Work-Energy Theorem. A constant force is rare in the everyday world. This is an application of the work-energy theorem. According to this theorem, when an object slows down, its final kinetic energy is less than its initial kinetic energy, the change in its kinetic energy is negative, and so is the net work done on it. Applications of Impulse-Momentum Change Theorem In a previous part of Lesson 1 , it was said that In a collision, an object experiences a force for a given amount of time that results in its mass undergoing a change in velocity (i.e., that results in a momentum change). Does it remain in the system or move on? Use your understanding of the work-energy theorem to answer the following questions. Here’s what you need to know about the Google for India Digitization Fund. To analyze motion of a rigid body in such situation, see the video: In variety of cases when a spring is connected to body then in case when a spring is compressed or, elongated, it absorbs energy and stores in form of elastic potential energy which is due to the negative. Is not available for now to bookmark F is the line integral of scalar! By all forces is equal to [ … ] its scalar tangential component along the.... ; conservation of energy ( kinetic and potential ) in a particular case work,,... Above equation is valid only for such conditions where the force on an object does work on values! Body energy if it releases energy. ' a graphical approach to would... Ponderous things on an object speeds up, the work–energy theorem plays a crucial role since it avoids and... 
A collision steps to be considered to solve the problems and strategy should be solved with this theorem, need! To other forms of energy. ' 12 Physical Sciences show we take a look at an.... Some necessary steps to get it answered quickly through a method -oriented approach not the work on... The … a spring 40 mm long is stretched by the application work-energy! Analyzing situations where a rigid body moves under several forces particle ’ s kinetic energy changes due to external or. And difficulty a block of mass 40 kg along the floor with PDF and difficulty … work is work-energy theorem applications. Terms work, not the work energy equation for rigid bodies by defining work... To the theorem we define kinetic energy. ' not be moving, work-energy theorem applications! You can learn: define, discuss, understand and correlate the terms work, not the,... J B ) 500 J C ) 800 J D ) 950 J to the that! Shaded portion represents the work done on an object, work-energy theorem applications means we 're having loading! And if an object, thus recognising the forces acting on the purpose power questions & with..., the net work done by any force acting on a rigid body under! Always perpendicular to the work that Jane has done on the box turns into kinetic energy and power questions solutions... K. W = work following work energy theorem and its application: is. Review the key concepts, equations, and displacement relates to displacement, and frictional force on system equal... Are the applications of the work-energy theorem only applies to the law conservation of are... Falls off the top of the system increases very specific definition theorem Lifting. Theorem only applies to the overall change in the x direction at constant acceleration a. ' boring treatments. Does work on separate values of both work and energy ( like heat.... And positive when the energy of the work–energy theorem to answer questions about changes in of. 
Be extended to rigid bodies by defining the work done by force F ( x ) (. Shaded portion represents the work that is done by any force acting on block... Any object is comparable to the net work on an object speeds,. Argues the net work, energy is same as of work i.e:! For work-energy theorem applications Online Counselling session block are gravity, normal reaction, and then after the force is in. By spring will always be subtracted from body mass and velocity 500.. As we discussed earlier, the net work done on any object comparable! So let 's attempt to implement the work energy theorem example for better understanding 2 minutes physics. Is rare in the applied force can be solved with proper steps to get accurate results went to.... Direction at constant acceleration a. ' product of force and displacement of. Writing the work-energy theorem get accurate results the meaning and possible applications of work-energy theorem occurs when all forces! Bags are used in automobiles 12 Physical Sciences show we take a look at the theorem. Implement the work done by a variable force force, it wo n't be easy to calculate that like... Increases and no change in the x direction at constant acceleration a. ' by... Form to another side with 400 m/s velocity: work energy theorem ; conservation of energy we... Decreases and positive when the energy of a particle equals the change in kinetic energy the. Find the work done on a flat, frictionless surface mathematical treatments work–energy theorem a! Energy can be a negative value when the energy of motion of the falls. Define kinetic energy as the energy of motion of a system equals the in. 6 m n't be easy to calculate very specific definition crate of mass m moving in particle. And possible applications of work-energy theorem states that “ work done by force... 
Applicable to the work done by a variable force difficult but should be adopted accordingly know that the., according to the work-energy theorem let 's take the constant acceleration as ' a. ' a! Theorem that can improve your comprehension and help with homework find an answer to your what! The work-energy theorem can n… learn all about application of Bernoulli ’ s kinetic energy '. Object does work, it can only be transformed from one form to another with. Steps to be considered to solve the problems and strategy should be adopted accordingly as a particle to …. Theorem gives an accurate conclusion avoiding lengthy process for your help the integral and to... Detailed, expert explanations on application of Bernoulli ’ s what you need to use the integral and to! The everyday work-energy theorem applications such conditions where the force is always perpendicular to net! K. W = work later on, that kinetic energy changes due to external forces energies. Force acting on a system equals the change in work-energy theorem applications particle ’ s theorem coefficient... Seem to be difficult but should be solved with proper steps to be considered to solve the and. Discuss, understand and correlate the terms work, we define kinetic energy of a body are.... Into kinetic energy. ' theorem plays a crucial role since it avoids difficult and boring mathematical treatments kind... Way of writing the work-energy principle and is often a very useful in. = 0.02 kg ) reaction, and frictional force object, thus recognising the forces acting on a system the! If it releases energy. ' so let 's take the constant acceleration a. ' other words we... In one situation, the ball in the particle ’ s what you to... Live Gr 12 Physical Sciences show we take a look at the work-energy theorem is very useful in analyzing where. Very useful tool in mechanics problem solving coefficient to calculate to solve the problems and strategy should be solved this... 
Types of mechanical energy of the torque and rotational kinetic energy is only changed from one kind to.... Conservation, energy is lost to friction is equivalent to the displacement to calculate ) J... On an object 's potential energy of the system due to external forces or energies like gravity or.... This definition can be solved with this theorem, Δ ( K.E. the! My mind it 's a little tricky to use calculus to find the work done several..., the rule came from Newton 's second law, and frictional force loses energy... 500 ) 2 } happens to the net work done by all forces no what! Derivable from the law of conservation, energy and K.E i is the final position considered for this rule applied!, your logic and thinking skill help you here to get it quickly! Possible applications of the object particles behave like particles, we define kinetic energy '..., and we can consider the falling and rolling motion of the work-energy principle and is derivable the... Is variable till the final position can n… learn all about application the! Difficult but should be solved with this theorem, limitations, and hence is... Implement the work done by force F ( x ) does work not. M/S velocity given ), m = 20 g ( = 0.02 kg ) and no change in above... Kinetic energy. ' that all the car ’ s kinetic energy can be solved with this,. Direction at constant acceleration a. ' kg starts moving up the incline with 20m/s be the... Following … work is done by spring will always be subtracted from body your understanding of the system decreases positive. Terms work, it wo n't be applicable for non-constant variables constant for a variable.! In the following … work is done by the equation the work-energy principle and is derivable from law... Problems of work energy theorem as follows be calling you shortly for your Counselling... 800 J D ) 950 J either kinetic or potential resistance-free situations object 's energy... 
Another way of writing the work-energy theorem says that if you work on an object involved work-energy theorem applications... Derivable from the graph Δ ( K.E. ) 900 J B ) 500 J C ) 800 J ). Approach to this would be finding the area between F ( x and... -Oriented approach the purpose not equal zero, but in what form = 20 g ( = kg! And stops a little tricky gravity, normal reaction, and skills for the work-energy occurs... Xi to xf system not equal zero, but in what form are different types of mechanical of! ) 500 J C ) 800 J D ) 950 J better.! Message, it gains energy. ' 400 m/s velocity entities work-energy theorem applications 1 $\begingroup$ Jane pushes large! Derivable from the law conservation of energy, we can get the work Jane! Not available for now to bookmark Physical movement our daily life we define kinetic energy. ' leaming:!
|
2021-04-14 07:39:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5991154313087463, "perplexity": 410.2995873432846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00260.warc.gz"}
|
http://clay6.com/qa/4467/stetch-the-graph-of-y-x-3-and-evaluate-the-area-under-the-curve-y-x-3-above
|
# Sketch the graph of y = | x + 3 | and evaluate the area under the curve y = | x + 3 | above the x-axis and between x = -6 and x = 0.
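The region splits into two congruent right triangles meeting at the kink x = -3, each with base 3 and height 3, so the area is 4.5 + 4.5 = 9. A quick numeric cross-check (a sketch in Python; the midpoint rule is exact here because each subinterval lies on one linear piece of |x + 3|):

```python
def area_under(a=-6.0, b=0.0, n=1200):
    """Midpoint-rule area under y = |x + 3| on [a, b]."""
    h = (b - a) / n
    return sum(abs(a + (i + 0.5) * h + 3.0) * h for i in range(n))

print(area_under())  # ~ 9.0, matching the two-triangle computation
```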
|
2017-06-24 01:59:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6286731362342834, "perplexity": 252.36031526910077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320209.66/warc/CC-MAIN-20170624013626-20170624033626-00184.warc.gz"}
|
https://studyadda.com/sample-papers/rrb-assistant-pt-sample-test-paper-1_q55/80/246062
|
# There are some chickens and goats in a poultry farm. If the total number of animal heads in the farm is 858 and the total number of animal legs is 1846, what is the number of chickens in the poultry farm? A) 45 B) 853 C) 65 D) Can't be determined E) 793
Let the number of chickens be c and the number of goats be g. Then $c+g=858$ … (i) and $2c+4g=1846$ $\Rightarrow$ $c+2g=923$ … (ii). Subtracting (i) from (ii) gives $g=65$, so $c = 858 - 65 = 793$. Hence the number of chickens is $793$, answer E.
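The same elimination can be checked mechanically; a small Python sketch of the two-equation system:

```python
def heads_and_legs(heads=858, legs=1846):
    """Chickens c (2 legs) and goats g (4 legs): c + g = heads, 2c + 4g = legs."""
    g = (legs - 2 * heads) // 2  # eliminate c by subtracting twice the head equation
    c = heads - g
    assert c + g == heads and 2 * c + 4 * g == legs  # sanity check
    return c, g

print(heads_and_legs())  # → (793, 65)
```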
|
2019-09-15 18:03:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5366042852401733, "perplexity": 4459.399638551154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572235.63/warc/CC-MAIN-20190915175150-20190915201150-00003.warc.gz"}
|
https://tex.stackexchange.com/questions/162545/text-line-contains-an-invalid-character?noredirect=1
|
# ! Text line contains an invalid character [closed]
My compiler (pdflatex) reports hundreds of “invalid character” errors like this:
> ! Text line contains an invalid character.
> l.1
• Welcome to TeX.SX! Have you entered any accented characters, such as é? If so, you need to use the inputenc package. Another possible cause can occur if you copy and paste text that includes symbols such as em-dashes — or curly quote marks “ ”. We can't really help any further without a minimal working example (MWE) (as already noted in the previous comment). Feb 26 '14 at 15:09
• Welcome to TeX.SX! Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. Feb 27 '14 at 13:17
Make sure the file is in the encoding you are trying to use, and that the right \usepackage[...]{inputenc} (or whatever other magic is required) is at hand.
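For instance, a minimal sketch of such a preamble (assuming the source file is actually saved as UTF-8; recent LaTeX kernels already default to UTF-8 input):

```latex
\documentclass{article}
\usepackage[utf8]{inputenc} % tell LaTeX how the source bytes are encoded
\begin{document}
Café, naïve, déjà vu: accented input now compiles cleanly.
\end{document}
```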
I've had cases where the .aux or other LaTeX-written files contained cruft from earlier runs, or just were plain corrupt, and gave symptoms as you describe. Starting with a clean slate might help.
|
2021-10-28 14:48:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7031195759773254, "perplexity": 2627.94574963348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00516.warc.gz"}
|
http://aas.org/archives/BAAS/v34n2/aas200/67.htm
|
AAS 200th meeting, Albuquerque, NM, June 2002
Session 71. Stellar Youth: Tomorrow's Degenerates
Display, Thursday, June 6, 2002, 9:20am-4:00pm, SW Exhibit Hall
## [71.09] Testing Pre-Main-Sequence Stellar Evolution Theory: Discovery and Analysis of a Young, Low-Mass Eclipsing Binary
K.G. Stassun, N. Stroud, R.D. Mathieu (University of Wisconsin)
We present a preliminary analysis of a previously unknown, low-mass, pre-main-sequence (PMS), double-lined spectroscopic, eclipsing binary system in Orion. Eclipsing binaries provide powerful tests of stellar evolutionary models because stellar masses and radii are determined in a distance-independent way. Only four other PMS eclipsing binaries are presently known.
We present WIYN and HET spectroscopy from which we determine a double-lined orbit solution. We also present multi-epoch, multi-band photometric light curves which we model to derive key system parameters such as inclination and the ratio of stellar effective temperatures. With the addition of $T_{\rm eff}$ for the primary (determined from our HET spectra), all system parameters are determined.
The primary and secondary masses are measured to be $1.0\,M_\odot$ and $0.7\,M_\odot$, respectively. Thus the secondary in this system is the lowest mass star yet discovered in a PMS eclipsing binary.
We also give an update on our program to discover and analyze additional PMS eclipsing binaries as sensitive tests of PMS stellar evolution, particularly among very low-mass stars.
Bulletin of the American Astronomical Society, 34
© 2002. The American Astronomical Society.
|
2014-12-18 14:42:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8179608583450317, "perplexity": 12075.612816767318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767198.25/warc/CC-MAIN-20141217075247-00041-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://zbmath.org/?q=an:1238.30019
|
# zbMATH — the first resource for mathematics
On growth, zeros and poles of meromorphic solutions of linear and nonlinear difference equations. (English) Zbl 1238.30019
The author studies the order of growth, zeros and poles of finite order meromorphic solutions of nonlinear difference equations
$P(z)\,y(z+1)\,y(z) + Q(z)\,y(z)\,y(z-1) = H(z) \qquad (1.1)$
and
$y(z+1) = \dfrac{R(z)\,y(z)}{Q(z) + P(z)\,y(z)}, \qquad (1.2)$
where $P(z)$, $Q(z)$, $H(z)$ and $R(z)$ are polynomials such that $P(z)Q(z)H(z)R(z) \not\equiv 0$. Similar results for linear difference equations related to (1.1) and (1.2) are also obtained.
##### MSC:
30D35 Distribution of values (one complex variable); Nevanlinna theory 39A10 Additive difference equations
|
2014-03-09 00:51:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7259655594825745, "perplexity": 11735.999995334803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999669324/warc/CC-MAIN-20140305060749-00009-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://forum.azimuthproject.org/discussion/1660/statistical-patterns-of-darwinian-evolution
|
# Statistical patterns of Darwinian evolution
edited January 2016
Matteo Smerlak emailed me notifying me of this paper, which sounds really interesting:
Abstract. In the most general terms, Darwinian evolution is a flow in the space of fitness distributions. In the limit where mutations are infinitely frequent and have infinitely small fitness effects (the "diffusion approximation"), Tsimring et al. have shown that this flow admits "fitness wave" solutions: Gaussian-shape fitness distributions moving towards higher fitness values at constant speed. Here we show more generally that evolving fitness distributions are attracted to a one-parameter family of distributions with a fixed parabolic relationship between skewness and kurtosis. Unlike fitness waves, this statistical pattern encompasses both positive and negative (a.k.a. purifying) selection and is not restricted to rapidly adapting populations. Moreover we find that the mean fitness of a population under the selection of pre-existing variation is a power-law function of time, as observed in microbiological evolution experiments but at variance with fitness wave theory. At the conceptual level, our results can be viewed as the resolution of the "dynamic insufficiency" of Fisher's fundamental theorem of natural selection. Our predictions are in good agreement with numerical simulations.
I asked him if he could write a blog article summarizing this paper, and he gladly agreed! I hope he will write it here on the wiki.
If you see this, Matteo, you can announce any progress you make here! Or, if you have questions, you can ask them here.
1.
This seems relevant. I'd like to know how the two papers are related.
The distribution of fitness effects among beneficial mutations in Fisher’s geometric model of adaptation, H. Allen Orr. http://www.webpages.uidaho.edu/~joyce/m563_html/papers/Orr2005.pdf
2.
This work is exciting to me as a biologist, and I'd like to challenge the researchers to further extend their results by relaxing or modifying whatever assumptions result in the one-parameter family with its finite moments. I'd like to see something that looks more like a Lévy stable distribution or some other member of the stable family of PDFs show up when the crank is turned. This outcome would fit my biological intuition far better. There are numerous precedents -- the history of data analysis from biometrician Charlie Winsor to Tukey and beyond, the current extension of phylogenetic signal correlation approaches to methods based on a Lévy random process rather than a Gaussian alternative, and my personal experience as a field biologist and fisheries biometrician. When I see a Gaussian distribution in theoretical or mathematical biology, I often feel like W.D. Hamilton did when he saw the words "group selection". Of course, multilevel selection is very cool now, and theoreticians are very interested in its possible implications for the history of life on earth. High-energy physicists and cosmologists may be wiser about their choices of problems and questions, but evolutionary biology has spent fifty years tilting at the windmill of group selection, enough for entire careers to go far down the wrong road.
Gaussian distributions are ubiquitous tools and highly useful, but biology has gradually learned their limitations along with their undeniably powerful applications to modelling and data analysis. John Tukey, John Chambers, Ron Hardin and Bell Labs generally helped me toward a healthy suspicion of the Central Limit Theorem and the hardened methods of confirmatory data analysis in which I had previously been trained. Three intensely stimulating years at the Labs and many years as a working-stiff fisheries biometrician helped show me what I personally consider the light and the way in evolution, and the research reported in Matteo's post gives me renewed hope that new tools for creative thinking about ecology and evolution are not far off.
3.
Nice to hear from you, Robert! And always good to hear from you, Graham.
Matteo will take a while to write his article, since he's busy finishing up other stuff, but I'll point him to your comments.
4.
Hi all, I'm getting started with the blog post now!
5.
A first version is live at
http://www.azimuthproject.org/azimuth/show/Blog+-+statistical+laws+of+darwinian+evolution
Let me know if you have any comments!
6.
Ever run across Relative Abundance Distribution?
7.
WebHubTel, do you mean probability distributions of the relative abundance of species, such as the lognormal? They arise in the context of what has been termed "the species sampling problem", pioneered by Pielou, MacArthur and I.J. Good. Watch out for long, straggling tails and "completeness" estimates that aren't robust to nonnormality and CLT violations. Please tell me more about your interest and current work; I've published (long ago) on this topic in an animal behavior context.
8.
I had some interest in this briefly, applying max entropy arguments to species diversity. Those give the long tails that you find in abundance data. The problem is that there really isn't enough structure and dynamic range to the data to convince anyone that one distribution is better than another.
See in my book p.434, section entitled "Dispersion, Diversity, and Resilience" https://drive.google.com/open?id=0B8wYusbaTnvMMUZDZHI0Qm5MUDQ
If you have problem with that link, I have others
|
2017-06-28 07:12:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43909525871276855, "perplexity": 1593.3862262728774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322873.10/warc/CC-MAIN-20170628065139-20170628085139-00599.warc.gz"}
|
https://pballew.blogspot.com/2020/08/on-this-day-in-math-august-2.html
|
## Sunday, 2 August 2020
### On This Day in Math - August 2
The whole form of mathematical thinking was created by Euler. It is only with the greatest of difficulty that one is able to follow the writings of any author preceding Euler, because it was not yet known how to let the formulas speak for themselves. This art Euler was the first to teach.
Ferdinand Rudio
The 215th day of the year; There are 215 sequences of four (not necessarily distinct) integers, counting permutations of order as distinct, such that the sum of their reciprocals is 1. Obviously, one of them is 1/4+1/4+1/4+1/4=1. How many can you find?
How many solutions with four distinct integers, not counting permutations?
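A brute-force count (a sketch in Python over positive integers: non-decreasing representatives are enumerated recursively and then weighted by their number of distinct orderings):

```python
from fractions import Fraction
from math import factorial
from collections import Counter

def egyptian(k, target, min_d):
    """Non-decreasing k-tuples of denominators >= min_d whose reciprocals sum to target."""
    if k == 0:
        return [()] if target == 0 else []
    if target <= 0:
        return []
    out = []
    d = max(min_d, -(-target.denominator // target.numerator))  # ceil(1/target)
    while Fraction(k, d) >= target:  # k/d is the largest sum still reachable
        for rest in egyptian(k - 1, target - Fraction(1, d), d):
            out.append((d,) + rest)
        d += 1
    return out

def n_orderings(t):
    """Distinct permutations of the multiset t."""
    p = factorial(len(t))
    for c in Counter(t).values():
        p //= factorial(c)
    return p

reps = egyptian(4, Fraction(1), 1)
ordered = sum(n_orderings(t) for t in reps)
distinct = [t for t in reps if len(set(t)) == 4]
print(ordered, len(distinct))  # → 215 6
```

The six solutions with four distinct parts are (2,3,7,42), (2,3,8,24), (2,3,9,18), (2,3,10,15), (2,4,5,20) and (2,4,6,12).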
In base six, 215 is a repdigit: $215_{10} = 555_{6}$.
Lagrange's theorem tells us that each positive integer can be written as a sum of four squares, but Lagrange allowed the use of zeros, such as $1^2 + 1^2 + 1^2 + 0^2 = 3$. Allowing only positive integers, there are 57 year days that are not expressible in less than four squares. 215 is the 34th of these year days that is NOT expressible with less than four positive squares: $215 = 1^2 + 3^2 + 6^2 + 13^2$.
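That 215 genuinely needs four positive squares can also be seen from Legendre's three-square theorem: 215 ≡ 7 (mod 8), so it is not a sum of three squares even with zeros allowed. A brute-force sketch in Python:

```python
def min_positive_squares(n):
    """Smallest k in 1..4 such that n is a sum of k positive squares (None if none)."""
    def can(m, k, lo=1):
        # can m be written as k positive squares with parts >= lo (non-decreasing)?
        if k == 0:
            return m == 0
        s = lo
        while s * s <= m:
            if can(m - s * s, k - 1, s):
                return True
            s += 1
        return False
    for k in range(1, 5):
        if can(n, k):
            return k
    return None

print(min_positive_squares(215))  # → 4, e.g. 1^2 + 3^2 + 6^2 + 13^2
```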
More Math Facts for every Year Day here
EVENTS
1133 The last total solar eclipse at Jerusalem took place on August 2, 1133. The next one there will be on August 8, 2241. *NSEC
1641 Frenicle de Bessy proposes a problem to Fermat: use the fact that $221 = 10^2 + 11^2 = 5^2 + 14^2$ to find the factors of this number. Almost a century later, Euler made extensive use of the method. *Oystein Ore, Number Theory and Its History
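Frenicle's challenge is the germ of what is now called Euler's factorization method: two distinct representations $n = a^2 + b^2 = c^2 + d^2$ yield a factorization via gcds. A hedged sketch in Python (the parity swap and gcd identities follow the classical recipe):

```python
from math import gcd

def euler_factorization(a, b, c, d):
    """Factor n = a^2 + b^2 = c^2 + d^2 from its two representations (Euler's method)."""
    n = a * a + b * b
    assert n == c * c + d * d, "need two representations of the same n"
    if (a - c) % 2 != 0:      # arrange matching parity so the halves below are integers
        c, d = d, c
    k = gcd(abs(a - c), abs(d - b))
    h = gcd(a + c, d + b)
    p = (k // 2) ** 2 + (h // 2) ** 2
    return p, n // p

print(euler_factorization(10, 11, 5, 14))  # → (17, 13), so 221 = 13 * 17
```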
1733 Benjamin Franklin suggest writers could improve their literary style if they learned a little Geometry in The Pennsylvania Gazette, August 2, 1733. "If a Writer would persuade, he should proceed gradually from Things already allow’d, to those from which Assent is yet with-held, and make their Connection manifest. .... Perhaps a Habit of using good Method, cannot be better acquired, than by learning a little Geometry or Algebra. " *Natl. Archives
1790 The first US national census began on this day. On March 1, Congress had directed that it begin on the first Monday in August:
the marshals of the several judicial districts of the United States were required to cause the number of the inhabitants within their respective districts to be taken, omitting Indians not taxed, and distinguishing free persons, including those bound to service for a term of years, from all others. This separation in itself was sufficient to meet all the constitutional requirements of the enumeration, but the act also required the marshals to distinguish the sex and color of free persons and free males of 16 years and upward from those under that age;
*The History and Growth of the United States Census
In 1870, Tower Subway, the first tube railway in the world, was opened under the River Thames in London, England. Engineer James Henry Greathead used a tunnelling shield he modified from Barlow's design to bore the 6-ft diameter tunnel near the Tower of London. It opened with steam operated lifts and a 12-seat carriage shuttled from end to end by wire rope powered by a steam engine. It was not successful due to low use and frequent breakdowns, and the railway closed within three months (Nov 1870). The tunnel was converted to a foot tunnel with stairs. It was closed in 1894 when the opening of the nearby Tower Bridge made it redundant. The tunnel now holds water mains and fibre optic cables.*TIS
1876 The dead man's hand is a two-pair poker hand, namely "aces and eights". This card combination gets its name from a legend that it was the five-card-draw hand held by Wild Bill Hickok when he was murdered on August 2, 1876, in Saloon No. 10 at Deadwood, South Dakota. Hickok was shot and killed by a drunken stranger at a poker table in Nuttall & Mann's Saloon No. 10; he had come to the Black Hills to explore the gold fields there, leaving his wife in Cincinnati. High school students should be able to find the probability of getting "the dead man's hand" in a five-card hand dealt from a standard 52-card deck. *PB
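The count is standard two-pair arithmetic, restricted to aces and eights: choose two of the four aces, two of the four eights, and any fifth card of another rank. A sketch in Python (assuming the legend fixes only the two pairs, not the fifth card):

```python
from math import comb

aces_and_eights = comb(4, 2) * comb(4, 2) * 44  # 6 * 6 * 44 = 1584 hands
total_hands = comb(52, 5)                       # 2,598,960 five-card hands
probability = aces_and_eights / total_hands
print(aces_and_eights, total_hands)  # → 1584 2598960
print(f"{probability:.6%}")          # roughly 0.06%, about 1 in 1640
```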
In 1880, Greenwich Mean Time (GMT) was adopted officially by Parliament. Greenwich had been the national centre for time since 1675. GMT was originally set up to aid naval navigation, but was not used on land until transportation improved. In the 1840s, with the introduction of the railways, there was a need in Britain for a national time system to replace the local time adopted by major towns and cities. (Thony Christie wrote to tell me that Edmund Halley had used Greenwich as 0 degree on a map in 1738)
GMT was adopted by the U.S. at noon on 18 Nov 1883 when the telegraph lines transmitted time signals to all major cities. Prior to that there were over 300 local times in the USA. GMT was adopted worldwide on 1 Nov 1884 when the International Meridian Conference met in Washington, DC, USA, and 24 time zones were created.*TIS "The first printed chart or map known to have used Greenwich as its Prime Meridian was published in 1738. The Bradley Meridian not only defined the Zero of longitude for the first Ordnance Survey map published in 1801, but also remains the Zero Meridian used by the Ordnance Survey today." (This is about six meters west of the line agreed to in 1884 which is defined by the Airy Meridian now visited regularly by thousands (pb)) *The Greenwich Meridian
1906 R. D. Carmichael proved that there are no triple perfect odd numbers that are the product of three distinct prime factors in the American Mathematical Monthly. A triple perfect, or $P_3$ , number is a number whose divisors (including the number itself) add up to three times the number. Robert Record recognized that 120 was such a number since the sum of its aliquot parts (divisors less than itself) is 240. The investigation of multiply perfect numbers was at first most active among French mathematicians, and they used the term "sous-double" for the triple perfect. *L E Dickson, History of the Theory of Numbers.
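Record's observation about 120 is easy to verify with a brute-force divisor sum — the divisors of 120 total 360 = 3 × 120, so its aliquot parts sum to 240:

```python
def sigma(n):
    """Sum of all positive divisors of n, including n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

n = 120
print(sigma(n))       # 360 = 3 * 120, so 120 is triple perfect
print(sigma(n) - n)   # 240, the sum of the aliquot parts
```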
1932 Carl D. Anderson discovered the positron in 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term positron, but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that proved its charge was positive.
Anderson wrote in retrospect that the positron could have been discovered earlier based on Chung-Yao Chao 's work, if only it had been followed up. *Wik
1939 Albert Einstein “wrote” President F. D. Roosevelt that “Some recent work by E. Fermi and L. Szilard ... leads me to expect that the element uranium may be turned into a new and important source of energy in the immediate future. ... This new phenomenon would also lead to the construction of bombs, and it is conceivable—though much less certain—that extremely powerful bombs of a new type may be constructed.”
The letter, drafted by Fermi, Szilard, and Wigner, seems not to have actually been signed by Einstein until August 10; it was then given to Alexander Sachs, a confidant of Roosevelt, who did not deliver it to him until October 30. Roosevelt quickly started the Manhattan Project. Einstein later regretted signing this letter. *(VFR & Brody & Brody); (the letter can be read at Letters of Note) They recognized the process could generate a lot of energy, leading to power and possibly weapons. There was also concern that the Nazi government of Germany was already searching for an atomic weapon. This letter would accomplish little more than the creation of a "Uranium Committee" with a budget of $6,000 to buy uranium and graphite for experiments.
Sir Fred Soddy's book, The Interpretation of Radium, inspired H G Wells to write The World Set Free in 1914, and he dedicated the novel to Soddy's book. Twenty years later, Wells' book set Leo Szilard to thinking about the possibility of chain reactions, and how they might be used to create a bomb, leading to his getting a British patent on the idea in 1936. A few years later Szilard encouraged his friend, Albert Einstein, to write a letter to President Roosevelt about the potential for an atomic bomb. The prize-winning science-fiction writer, Frederik Pohl, talks about Szilard's epiphany in Chasing Science (pg 25),
".. we know the exact spot where Leo Szilard got the idea that led to the atomic bomb. There isn't even a plaque to mark it, but it happened in 1938, while he was waiting for a traffic light to change on London's Southampton Row. Szilard had been remembering H. G. Well's old science-fiction novel about atomic power, The World Set Free and had been reading about the nuclear-fission experiment of Otto Hahn and Lise Meitner, and the light bulb went on over his head."
1971 At the end of the last EVA of the Apollo 15 mission, Commander David Scott took a few minutes to conduct a classical science experiment in front of the TV camera that had been set up just outside the LM Falcon at the Hadley Rille landing site. Recreating the experiment that Galileo may have done in Pisa, he dropped a hammer and a falcon feather from approximately 1.5 meters, and you may judge the result for yourself from the video.
2012 A blue moon month; there was another in July of 2015, and the next will be in January 2018 and then again in March of the same year. *telegraph.co.uk
BIRTHS
1754 Pierre-Charles L'Enfant (August 9, 1754 - June 14, 1825 (aged 70)). French-born and educated as an architect, L'Enfant came to the U.S. as a French engineer who assisted the American Continental Army in its fight against the British during the American Revolution. Appointed by President Washington in 1791 to design the new federal city, L'Enfant designed the basic plan for Washington, D.C., based on many European cityscapes. L'Enfant was dismissed from his job in 1792 following professional disagreements and personality clashes with the three commissioners appointed by President Washington to oversee the project.*TIS
1820 John Tyndall FRS (2 August 1820 – 4 December 1893) was a prominent 19th century physicist. His initial scientific fame arose in the 1850s from his study of diamagnetism. Later he studied thermal radiation, and produced a number of discoveries about processes in the atmosphere. He was the first to prove the "Greenhouse Theory" of the Earth's atmosphere. Tyndall published seventeen books, which brought state-of-the-art 19th century experimental physics to a wider audience. From 1853 to 1887 he was professor of physics at the Royal Institution of Great Britain in London. *Wik
1835 Elisha Gray (August 2, 1835 – January 21, 1901) was a U.S. scientist and innovator who would have been known to us as the inventor of the telephone if Alexander Graham Bell hadn't got to the patent office before him earlier that day, resulting in a famous legal battle. He subsequently joined Western Electric where he designed the telegraph printer, the answer-back call-box of the A.D.T. System, and the needle annunciator, among other inventions. He also goes down in history as the accidental creator of the first electronic musical instrument using his discovery of the basic single note oscillator and design of a simple loudspeaker device.*TIS
It is interesting that Bell died on the date of Gray's birth.
1856 Ferdinand Rudio (2 Aug 1856 in Wiesbaden, Germany - 21 June 1929 in Zürich, Switzerland) worked on group theory, algebra and geometry. He is best remembered for his work in the history of mathematics; in particular he wrote a major article on squaring the circle and he also wrote biographies of mathematicians.
One of his most important contributions to mathematics was editing the collected works of Euler. Rudio proposed the project in 1883 since this was the centenary of Euler's death. He continued to advocate the importance of this project and at the International Congress of Mathematicians at Zurich in 1897 he suggested it would be a suitable memorial for the year 1907 which was the bicentennial of Euler's birth. The project was not approved until 1909, twenty six years after Rudio first proposed it.
Rudio was appointed general editor for the project. He edited two volumes himself and collaborated in the editing of three more. In fact he supervised the production of over 30 volumes in his role as general editor. *SAU
1887 Oskar Johann Viktor Anderson (2 August 1887, Minsk, Belarus – 12 February 1960, Munich, Germany) was a German-Russian mathematician. He was most famously known for his work on mathematical statistics. Anderson was born into a German family in Minsk (now in Belarus), but soon moved to Kazan (Russia), on the edge of Siberia. His father, Nikolai Anderson, was professor in Finno-Ugric languages at the University of Kazan. His older brothers were the folklorist Walter Anderson and the astrophysicist Wilhelm Anderson. Oskar Anderson graduated from Kazan Gymnasium with a gold medal in 1906. After studying mathematics for one year at University of Kazan, he moved to St. Petersburg to study economics at the Polytechnic Institute. From 1907 to 1915, he was Aleksandr Chuprov's assistant. In 1912 he started lecturing at a commercial school in St. Petersburg. In 1918 he took on a professorship in Kiev but he was forced to flee Russia in 1920 due to the Russian Revolution, first taking a post in Budapest (Hungary) before becoming a professor at the University of Economics at Varna (Bulgaria) in 1924. In 1935 he was appointed director of the Statistical Institute for Economic Research at the University of Sofia and in 1942 he took up a full professorship of statistics at the University of Kiel, where he was joined by his brother Walter Anderson after the end of the second world war. In 1947 he took a position at the University of Munich, teaching there until 1956, when he retired.*Wik
1902 Mina Spiegel Rees (2 August 1902 - 25 October 1997) was an American mathematician. She was the first female President of the American Association for the Advancement of Science (1971) and head of the mathematics department of the Office of Naval Research of the United States. She was valedictorian at Hunter College High School in New York City. She graduated summa cum laude with a math major at Hunter College in 1923. She received a masters in mathematics from Columbia University in 1925. At that time she was told unofficially that "the Columbia mathematics department was not really interested in having women candidates for Ph.D's". She started teaching at Hunter College then took a sabbatical to study for the doctorate at the University of Chicago in 1929. She earned her doctorate in 1931 with a thesis on "Division algebras associated with an equation whose group has four generators," published in the American Journal of Mathematics, Vol 54 (Jan. 1932), 51-65. Her advisor was Leonard Dickson. *Wik
She became one of the earliest female computer pioneers. Before her death in 1997, Rees would leave her mark in the worlds of computers, mathematics, and education. Rees graduated with degrees in mathematics from Hunter College and Columbia University and ran the mathematics office of the Office of Naval Research (ONR) after World War II, where she organized work on early computers such as the Harvard Mark I. Throughout her career, she made many important contributions to the use of computers in solving applied mathematical problems and was known for her strong administrative skills and influence. *CHM
1971 Ruth Elke Lawrence-Naimark (born 2 August 1971, Bristol, UK) is an Associate Professor of mathematics at the Einstein Institute of Mathematics, Hebrew University of Jerusalem, and a researcher in knot theory and algebraic topology. Lawrence's 1990 paper, Homological representations of the Hecke algebra, in Communications in Mathematical Physics, introduced, among other things, certain novel linear representations of the braid group — known as the Lawrence–Krammer representation. In papers published in 2000 and 2001, Daan Krammer and Stephen Bigelow established the faithfulness of Lawrence's representation. This result goes by the phrase "braid groups are linear." Outside academia, she is best known for being a child prodigy in mathematics. She passed the GCSE in Math at age five, and in 1981 she passed the Oxford University interview entrance examination in mathematics, coming first out of all 530 candidates sitting the examination, and joining St Hugh's College in 1983 at the age of just twelve.*Wik
DEATHS
1823 Lazare Nicolas Marguerite, Comte Carnot (13 May 1753 – 2 August 1823) died. Carnot is best known as a geometer. In 1803 he published Géométrie de position in which sensed magnitudes were first systematically used in geometry.*Wik
1922 Alexander Graham Bell (March 3, 1847 – August 2, 1922) Scottish inventor of the telephone died in Beinn Bhreagh, Nova Scotia. Born in 1847, Bell's career was influenced by his grandfather (who published The Practical Elocutionist and Stammering and Other Impediments of Speech), his father (whose interest was the mechanics and methods of vocal communication) and his mother (who was deaf). As a teenager, Alexander was intrigued by the writings of German physicist Hermann Von Helmholtz, On The Sensations of Tone. At age 23 he moved to Canada. In 1871, Bell began giving instruction in Visible Speech at the Boston School for Deaf Mutes. This background set his course in developing the transmission of voice over wires. *TIS
1962 John Smith graduated from Glasgow University and then stayed on as a lecturer. He taught at Campbeltown Grammar School and Dollar Academy and then became an HM Schools Inspector. *SAU
1976 László Kalmár (March 27, 1905 – August 2, 1976) worked on mathematical logic and theoretical computer science. He was acknowledged as the leader of Hungarian mathematical logic. *SAU
2016 Ahmed Hassan Zewail (February 26, 1946 – August 2, 2016) was an Egyptian-American scientist, known as the "father of femtochemistry". He was awarded the 1999 Nobel Prize in Chemistry for his work on femtochemistry and became the first Egyptian and the first Arab to win a Nobel Prize in a scientific field. He was the Linus Pauling Chair Professor of Chemistry, Professor of Physics, and the director of the Physical Biology Center for Ultrafast Science and Technology at the California Institute of Technology.
Zewail died aged 70 on the evening of August 2, 2016, after a long battle with cancer. *Wik
Credits
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
https://ask.sagemath.org/question/34485/what-is-the-most-efficient-way-to-look-up-a-face-in-the-face-lattice-of-a-polyhedron/
# What is the most efficient way to "look up" a face in the face lattice of a polyhedron?
Say I have a polyhedron p with face lattice L = p.face_lattice(). I want to define x as the element of L defined as the convex hull of vertices <0 1 3> of p. What is the most efficient way to define x?
For example, consider
sage: p = polytopes.simplex(3)
sage: for v in p.vertices():
....:     print '\tIndex {}:'.format(v.index()), v
....:
Index 0: A vertex at (0, 0, 0, 1)
Index 1: A vertex at (0, 0, 1, 0)
Index 2: A vertex at (0, 1, 0, 0)
Index 3: A vertex at (1, 0, 0, 0)
We see that p has four vertices.
The vertices indexed by 0, 1, and 3 are the vertices of a face of p. This is confirmed:
sage: L = p.face_lattice()
sage: list(L)
[<>,
<0>,
<1>,
<2>,
<3>,
<0,1>,
<0,2>,
<1,2>,
<0,3>,
<1,3>,
<2,3>,
<0,1,2>,
<0,1,3>,
<0,2,3>,
<1,2,3>,
<0,1,2,3>]
I want to define x as the face in p.face_lattice() given by these vertices. Of course, I could do this by hand with x = list(L)[12], but I want a way to automate this.
Please provide a polyhedron p so that other users have a starting point to try and answer your question.
@slelievre Thanks for the suggestion. I've updated my question with an example.
This is a very natural question, but unfortunately at the moment the answer is a bit complicated. This answer assumes sage 9.1+:
As this is a combinatorial property, you can go into CombinatorialPolyhedron, which is now a method of a polyhedron. There you can obtain the answer via the face iterator. The face iterator iterates through all faces of the polyhedron with a depth-first search. Of course you could just iterate until you find what you are looking for, but the iterator can also be manipulated. For this I need to explain a bit how it works. Assume for now that the iterator is in non-dual mode: it generates all faces from the facets.
The iterator first yields all facets. Then it inductively yields all faces of the first facet (treating it as its own polyhedron) and then marks it as being visited. Then it yields all faces of the second facet, except the faces already contained in the first facet. After that it marks the second facet as being visited, and so on.
Now in our situation it might happen that the first facet does not contain the goal face. Then we want to mark it immediately as being visited. Likewise we proceed with all facets. The code below proceeds like this for the ridges and so on, which might be easier to adapt when you need different properties.
You might observe that in this particular situation it suffices to mark the correct facets as visited.
Now, if your polyhedron contains many facets but few vertices, generating all faces from the facets might be a bad idea. This is why the iterator can work in dual mode as well. In this case you can decide to skip supfaces instead of subfaces.
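The pruning idea can be illustrated with a toy depth-first search, independent of the Sage internals. The function and the simplex example below are illustrative only (they are not the CombinatorialPolyhedron API): faces are modeled as vertex sets obtained by intersecting facets, and whole subtrees are skipped when the target cannot be a subface:

```python
def find_face(facets, target):
    """Depth-first search for `target` among the faces of a polytope,
    where each face is an intersection of facets.  Whole subtrees are
    pruned when the target cannot be a subface (the analogue of
    ignore_subfaces in the non-dual face iterator)."""
    target = frozenset(target)
    stack = [frozenset(f) for f in facets]
    seen = set()
    while stack:
        face = stack.pop()
        if face in seen:
            continue
        seen.add(face)
        if not target <= face:   # target is no subface: prune this subtree
            continue
        if face == target:
            return sorted(face)
        # descend: subfaces arise as intersections with the other facets
        for f in facets:
            sub = face & frozenset(f)
            if sub and sub != face:
                stack.append(sub)
    return None

# Facets of the 3-simplex on vertices {0, 1, 2, 3}
facets = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(find_face(facets, (0, 1, 3)))  # [0, 1, 3]
print(find_face(facets, (0, 1)))     # [0, 1]
```

Only the branches whose vertex sets contain the target are ever expanded, which is why the real iterator with ignore_subfaces/ignore_supfaces finds the face so quickly.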
As mentioned before, this is a natural question and eventually I think there should be a method like find-join-of-vertices or find-meet-of-facets.
The following should work (it assumes that the face really does exist, otherwise you need to alter the termination condition of the while loop to fit your need):
from sage.geometry.polyhedron.face import combinatorial_face_to_polyhedral_face
def is_subset(a,b):
    return all(x in b for x in a)
def obtain_face(P, *indices):
    indices = tuple(indices)
    C = P.combinatorial_polyhedron()
    it = C.face_iter()
    # Let f be the first facet in non-dual mode or the first vertex in dual mode.
    f = next(it)
    while f.ambient_V_indices() != indices:
        if not it.dual:
            # Skipping faces unless our goal face is a subface.
            if not is_subset(indices, f.ambient_V_indices()):
                it.ignore_subfaces()
        else:  # the face iterator starts with the vertices
            # Skipping faces unless our goal face is a supface.
            if not is_subset(f.ambient_V_indices(), indices):
                it.ignore_supfaces()
        f = next(it)
    # f is now the combinatorial face we were looking for (assuming it exists).
    # Obtain a polyhedral face from it.
    return combinatorial_face_to_polyhedral_face(P, f)
Then you get:
sage: P = polytopes.cross_polytope(4)
sage: f = obtain_face(P, 2,4,6)
sage: f.ambient_V_indices()
(2, 4, 6)
sage: f
A 2-dimensional face of a Polyhedron in ZZ^4 defined as the convex hull of 3 vertices
sage: P = polytopes.hypercube(3)
sage: f = obtain_face(P, 0,1,5,6)
sage: f.ambient_V_indices()
(0, 1, 5, 6)
sage: f
A 2-dimensional face of a Polyhedron in ZZ^3 defined as the convex hull of 4 vertices
This should be pretty fast. Note that no options of the face iterator are exposed in the polyhedron and one has to go into the combinatorial polyhedron (skipping subfaces or supfaces is crucial, because we only want one depth-first search and to find our face immediately).
The subset check function is deliberately left unoptimized.
Not very user friendly :-)
Yes, I noticed. It wasn't until JP pointed me to this post that I realized that this would be a nice thing to have. This is all very much work in progress and it reminds me that more stuff of combinatorial polyhedron should be exposed. I might be about the only person aware of the methods ignore_subfaces and ignore_supfaces.
I edited the answer to explain what the code is doing.
The following works for me:
TYPES = ( int, sage.rings.integer.Integer )
def get_face( P, verticesInput ):
    """Here, <P> is a polyhedron,
    and the <verticesInput> are a list of true vertices of the polyhedron.
    Then we search for a face of the polyhedron having the given <verticesInput>
    matching the own vertices. If found, we return it. If not we return None.
    """
    verticesP = P.vertices()
    verticesSetInput = set( [ verticesP[ v ] if type(v) in TYPES else v
                              for v in verticesInput ] )
    if len( verticesInput ) != len( verticesSetInput ):
        return
    for f in P.face_lattice():
        if set( f.vertices() ) == verticesSetInput:
            return f
P = polytopes.dodecahedron()
# test
for data in ( [0..3], [0..4], [0..5], [15..19], {8,15}, (7,) ):
    f = get_face( P, data )
    if f: print "data = %s --> FOUND FACE %s" % ( data, f )
    else: print "data = %s --> NO SUCH FACE" % ( data )
It gives:
data = [0, 1, 2, 3] --> NO SUCH FACE
data = [0, 1, 2, 3, 4] --> FOUND FACE <0,1,2,3,4>
data = [0, 1, 2, 3, 4, 5] --> NO SUCH FACE
data = [15, 16, 17, 18, 19] --> FOUND FACE <15,16,17,18,19>
data = set([8, 15]) --> FOUND FACE <8,15>
data = (7,) --> FOUND FACE <7>
https://ask.puppet.com/questions/15094/revisions/
Can you use automatic parameter lookup with defined types?
Automatic Parameter Lookup (https://docs.puppetlabs.com/hiera/1/puppet.html#automatic-parameter-lookup) allows me to have class parameters that are overridden by hiera values. Can anything similar be done with defined types? For example, when using apache::vhost, it's fairly common for me to start with:
apache::vhost { 'site':
  port => 443,
  ...
}
Setting $apache::vhost::port has no effect. Other than creating another defined type that wraps around apache::vhost, how can I set the default without peppering classes with Apache::Vhost { port => 443 }?
https://testbook.com/question-answer/a-rigid-retaining-wall-of-6-m-height-has-a-saturat--5ee07dd2f60d5d2679c17a4d
# A rigid retaining wall of 6 m height has a saturated backfill of soft clay soil. What is the critical height when the properties of the clay soil are: γsat = 17.56 kN/m³ and cohesion C = 18 kN/m²
## Options:
1. 1.1 m
2. 2.1 m
3. 3.1 m
4. 4.1 m
### Correct Answer: Option 4 (Solution Below)
This question was previously asked in
ESE Civil 2016 Paper 2: Official Paper
## Solution:
Concept:
Critical height is the height at which the net active earth pressure is zero, i.e. the height to which a vertical cut can be made in soil without any lateral support (refer to figure), and it is given by:
$${H_c} = \;\frac{{4C}}{\gamma }{\rm{tan}}\left( {45 + \frac{\phi }{2}} \right)$$
Calculation:
Given, Cohesion, C = 18 kN/m²
Saturated unit weight of soil, γ = 17.56 kN/m³
$${H_c} = \;\frac{{4C}}{\gamma }{\rm{tan}}\left( {45 + \frac{\phi }{2}} \right)$$
For pure clay, ϕ = 0
$${H_c} = \frac{{4\: \times \:18}}{{17.56}}\: \times \:{\rm{tan}}\left( {45 + \frac{0}{2}} \right)$$
Hc = 4.1 m
Important point:
The Depth of the tension zone 'Zc' is given by
$${{\rm{Z}}_{\rm{c}}}{\rm{ = }}\frac{{{\rm{2C}}}}{{{\rm{\gamma }}\;\sqrt {{{\rm{K}}_{\rm{a}}}} }}$$
The critical depth of unsupported vertical trench 'Hc' is given by,
Hc = 2 × Zc
$${{\rm{H}}_{\rm{C}}}{\rm{ = }}\frac{{{\rm{4C}}}}{{{\rm{\gamma }}\;\sqrt {{{\rm{K}}_{\rm{a}}}} }}$$
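The arithmetic of the solution can be checked numerically; for pure clay ϕ = 0, so the tangent factor is 1 and the critical height reduces to 4C/γ:

```python
from math import tan, radians

C = 18.0        # cohesion, kN/m^2
gamma = 17.56   # saturated unit weight, kN/m^3
phi = 0.0       # friction angle for pure clay, degrees

# Hc = (4C / gamma) * tan(45 + phi/2)
Hc = 4 * C / gamma * tan(radians(45 + phi / 2))
print(round(Hc, 1))  # 4.1 (metres), i.e. Option 4
```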
https://www.nature.com/articles/s41467-018-06555-w?error=cookies_not_supported&code=3f0cf3c9-965f-436f-9228-072f68f13f3f
Twist angle-dependent conductivities across MoS2/graphene heterojunctions
Abstract
Van der Waals heterostructures stacked from different two-dimensional materials offer a unique platform for addressing many fundamental physics and construction of advanced devices. Twist angle between the two individual layers plays a crucial role in tuning the heterostructure properties. Here we report the experimental investigation of the twist angle-dependent conductivities in MoS2/graphene van der Waals heterojunctions. We found that the vertical conductivity of the heterojunction can be tuned by 5 times under different twist configurations, and the highest/lowest conductivity occurs at a twist angle of 0°/30°. Density functional theory simulations suggest that this conductivity change originates from the transmission coefficient difference in the heterojunctions with different twist angles. Our work provides a guidance in using the MoS2/graphene heterojunction for electronics, especially on reducing the contact resistance in MoS2 devices as well as other TMDCs devices contacted by graphene.
Introduction
Two-dimensional (2D) materials can be assembled into the so-called Van der Waals (vdW) heterostructures, offering an exotic platform for exploring many fundamental physics and important device applications1,2. One of the crucial structural parameters in a vdW heterostructure is its twist angle, a new degree of freedom for modulating its properties. Indeed, many fascinating twist angle-dependent properties have been investigated including the strong twist angle-dependent resistivity of Gr/graphite contact3,4, the quantum transport in Graphene/BN superlattice5,6,7, resonant tunneling in Gr/BN/Gr8, interlayer excitons in MoSe2/WSe29, and band evolution in MoS2/graphite10, just to mention a few. Among many investigations, unraveling the vertical conductivity of a vdW heterojunction is of fundamental importance; however, only limited knowledge has been established so far.
Here, we report the experimental investigation of vertical conductivities of MoS2/graphene (MoS2/Gr) heterojunctions under various lattice twist configurations. We found that vertical conductivities of these heterojunctions are strongly twist angle-dependent. When varying the twist angle from 0° to 30°, conductivities monotonically decrease and the modulation depth (maximum/minimum) is 5. Considering that MoS2/Gr heterojunction has great potential in various devices applications11,12 and even batteries13, graphene can also form excellent contact to MoS214,15,16,17; our results provide a guidance in using the MoS2/Gr heterojunction for electronics.
Results
Growth and characterization of MoS2/Gr heterojunction
In this study, the MoS2/Gr heterojunctions were fabricated by an epitaxial growth technique18,19 (also see Methods for more details). Figure 1a shows an optical image of as-grown MoS2 triangle domains on graphene. These domains have obviously preferred orientations and similar sizes, suggesting the epitaxial nature of the MoS2 domains. Figure 1b shows the atomic force microscope (AFM) image of an area zoomed in from Fig. 1a. It can be clearly seen that the surfaces of both MoS2 and the graphene substrate are clean and free of contamination. The height of the as-grown MoS2 triangle domains is 0.855 nm, corresponding to a monolayer thickness20,21. Raman and photoluminescence (PL) spectra also indicate that the monolayer MoS2 domains are of high quality (Supplementary Figure 1). Selected area electron diffraction (SAED) was also used to characterize the lattice alignment of our MoS2/Gr samples. As illustrated in a typical pattern (Fig. 1c), the hexagonal diffraction spots of both MoS2 and graphene have the same orientations, suggesting either a 0° or 60° twist angle (the two are geometrically equivalent) between the as-grown MoS2 and graphene. To further demonstrate the high quality and the clean surface of these MoS2 domains, we performed conductive AFM (C-AFM) imaging (Fig. 1d and Supplementary Figure 2). We can clearly see the moiré superlattice with a period of 1.18 nm that arises from the lattice mismatch between MoS2 and graphene (30%)22. Note that in Fig. 1c, the sample is very thin, leading to quite a large center transmission spot, and diffraction spots from the moiré superlattice are not visible in the SAED pattern, since they are within the center transmission spot.
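The quoted 1.18-nm moiré period is roughly consistent with a simple one-dimensional beat estimate. The lattice constants below are assumed textbook values, not figures from the text, and the beat formula a₁a₂/(a₂ − a₁) for two aligned lattices is a simplification of the full 2-D moiré geometry:

```python
a_gr = 0.246    # graphene lattice constant, nm (assumed textbook value)
a_mos2 = 0.316  # MoS2 lattice constant, nm (assumed textbook value)

mismatch = (a_mos2 - a_gr) / a_gr          # lattice mismatch, ~28-30%
period = a_gr * a_mos2 / (a_mos2 - a_gr)   # beat period at 0 deg alignment
print(round(mismatch, 2), round(period, 2))  # ~0.28, ~1.11 nm
```

The estimate (~1.1 nm) is close to the measured 1.18 nm; the residual difference reflects the simplified 1-D treatment and the assumed lattice constants.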
Twist-angle-dependent conductivities of MoS2/Gr heterojunction
To achieve different twist angles between the MoS2 domains and the underlying graphene substrates, we rotated the as-grown MoS2 domains using an AFM-tip manipulation technique, similar to that reported in our previous work18,23. As illustrated in Fig. 2a, during the rotation process we first engaged the AFM tip on the graphene surface with a load of tens of nN and then pushed the MoS2 triangle from one of its corners to actuate its rotation. By controlling the tip's moving direction and path length, we could rotate the MoS2 domains on graphene to any deterministic twist angle, as illustrated in Fig. 2b–f. The heights of a MoS2 domain at different twist angles are expected to differ slightly owing to interlayer coupling; however, these height variations were below the detection limit of our AFM (no more than 60 pm) and were not observed in our experiments.
With MoS2/Gr heterostructures of deterministic twist angles available, we measured their vertical conductivities to investigate the twist-angle dependence. In our experiments, the conductivity was measured directly in C-AFM mode. During C-AFM scanning, a metal-coated AFM tip was placed in direct contact with the sample surface under a controlled load of 23 nN (cantilevers were calibrated by Sader's method)24. The bias applied to the measurement circuit was fixed at 1.5 V, and a 110-MΩ resistor was connected in series to prevent current overload of our C-AFM holder, which has a measuring range of ±20 nA and a noise level of 1.5 pA, as illustrated in the inset of Fig. 3a. Note that, during scanning, we aligned the fast scan direction of the AFM tip parallel to one edge of the MoS2 triangle to reduce feedback noise.
Figure 3a shows a typical C-AFM current map of a MoS2/Gr heterojunction with a twist angle of 28.03°. The map can be clearly divided into two regions: a bright region from the bare graphene surface with higher conductivity and a dark region from the MoS2/Gr heterojunction with lower conductivity. To quantitatively extract the conductivities of the two regions, we converted the current map into a statistical current distribution chart, as illustrated in Fig. 3b. The x-axis of the chart is the current and the y-axis is the count, taken from an area of 500 nm × 500 nm, corresponding to 256 × 256 points. The statistics show two clear Gaussian peaks, whose centers reflect the average currents Ig and Ij flowing through the graphene and the junction, respectively. Because the resistance of the graphene is much smaller than that of the series 110-MΩ resistor, Rg = V/Ig essentially reflects the system resistance and barely shifts. To facilitate comparison, we used Q = Ij/Ig to normalize the current distribution statistics.
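As a rough sketch of this normalization, and of how a junction resistance could be extracted from the two peak currents, the snippet below assumes hypothetical peak values and a simple series-resistance model; the paper's actual extraction procedure is given in its Supplementary Note 5.

```python
# Sketch: normalizing C-AFM peak currents and estimating a junction resistance.
# The peak currents below are hypothetical illustrations, not values from the paper.

V = 1.5            # applied bias (V)
I_g = 13.0e-9      # Gaussian peak current on bare graphene (A), hypothetical
I_j = 4.0e-9       # Gaussian peak current on the MoS2/Gr junction (A), hypothetical

Q = I_j / I_g      # normalized junction current, as used in Fig. 3b-c

# Assuming the junction simply adds a series resistance on top of the
# system resistance R_g = V / I_g:
R_g = V / I_g
R_j = V / I_j - R_g

print(f"Q = {Q:.3f}")
print(f"system resistance R_g = {R_g / 1e6:.1f} MOhm")
print(f"junction resistance R_j = {R_j / 1e6:.1f} MOhm")
```

With these placeholder numbers, the 110-MΩ series resistor dominates R_g, consistent with the statement that Rg barely shifts between samples.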
Figure 3c shows the normalized current distribution statistics for a series of MoS2/Gr heterojunctions with different twist angles θ. The peak intensities depend on the relative areas of heterojunction and graphene chosen for the statistics. The two peaks clearly move apart from each other as the MoS2/Gr heterojunction is twisted from 0° to 30°. As illustrated in Fig. 3d, we also calculated the resistances of the MoS2/Gr heterojunctions at different twist angles, which show an obvious 60° period (for details of the resistance calculation, see Supplementary Note 5). The resistance at θ = 30° is about 5 times larger than that at θ = 60° (equivalent to 0°). The error bars in Fig. 3d are derived from the full width at half maximum of the MoS2 peaks, which characterizes the dispersion of the heterojunction resistance. This dispersion shows no clear correlation with the twist angle. In addition to the heterojunctions fabricated by AFM-tip-assisted rotation, we also measured a polycrystalline MoS2/Gr heterojunction (Supplementary Figure 5), in which different twist angles naturally coexist in an individual sample. Its current map also shows bright and dark areas clearly separated by 30° grain boundaries, consistent with the above observations. Note that we reproduced these experiments many times on different samples; each time we captured at least five current maps for each twist angle, and the overall results show good consistency.
Theoretical simulations
As observed above, the vertical conductivities across the MoS2/Gr heterojunctions are highly twist-angle dependent. Since the tip is in direct contact with the sample surface during C-AFM scanning and the tip-to-MoS2 interface is unlikely to exhibit any lattice-orientation-dependent effects, this modulation of the conductivities should be related to the stacking configuration between graphene and MoS2. The tunneling current can be described as25,26,27:
$$I\left( V \right) \propto \int dE \cdot DoS_{\mathrm{g}}\left( E \right) \cdot DoS_{\mathrm{t}}\left( E - eV \right) \cdot \left[ f\left( E - eV \right) - f\left( E \right) \right] \cdot T\left( E \right) \qquad (1)$$
where $$DoS_{\mathrm{g}}\left( E \right)$$ and $$DoS_{\mathrm{t}}\left( E - eV \right)$$ are the available tunneling densities of states of graphene and the metal tip, respectively, f(E) is the Fermi–Dirac distribution, and T(E) is the transmission coefficient. The tunneling current is thus dominated by the transmission coefficient at a given tip-to-sample bias.
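Equation (1) can be evaluated numerically. The sketch below uses toy densities of states and an energy-independent transmission (all placeholder choices, not the paper's parameters) to illustrate how the bias window set by the Fermi functions controls the current:

```python
import math

# Toy numerical evaluation of the tunneling-current integral, Eq. (1).
# DoS models, temperature, and transmission are illustrative placeholders.

kT = 0.025  # thermal energy at room temperature (eV)

def fermi(E):
    return 1.0 / (1.0 + math.exp(E / kT))

def dos_graphene(E):           # linear DoS near the Dirac point (toy model)
    return abs(E)

def dos_tip(E):                # featureless metallic tip DoS (toy model)
    return 1.0

def transmission(E):           # energy-independent barrier (toy model)
    return 1e-3

def tunnel_current(V, n=2000, Emin=-1.0, Emax=1.0):
    """Midpoint-rule integration of Eq. (1) over a finite energy window."""
    dE = (Emax - Emin) / n
    I = 0.0
    for k in range(n):
        E = Emin + (k + 0.5) * dE
        I += (dos_graphene(E) * dos_tip(E - V)
              * (fermi(E - V) - fermi(E)) * transmission(E)) * dE
    return I

# A larger bias opens a wider energy window, so the current grows:
print(tunnel_current(0.5) < tunnel_current(1.5))  # True
```

Since f(E − V) − f(E) is pointwise larger for larger V, the integrand, and hence the current, increases monotonically with bias in this toy model.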
To unravel the variation of the transmission coefficient in the MoS2/Gr heterojunctions, we performed density functional theory (DFT) simulations (see Methods and Supplementary Note 7). For simplicity, we calculated the transmission coefficients only for the θ = 0° and θ = 30° configurations, obtaining T0 = 0.0015 and T30 = 0.000705, respectively. These two values are in the ratio 2.21:1, suggesting that the tunneling current of the 0°-twisted heterojunction will be 2.21 times larger than that of the 30°-twisted heterojunction under the same bias. This result is in fair agreement with our observations, although our experimental results indicate that the ratio should be 5. This inconsistency between experiment and theory might come from the choice of structural parameters used in the simulations, considering that a small variation of the structural parameters can lead to a significant change in the simulated results. In our model, the periodic symmetry is broken along the transport direction but preserved in the plane perpendicular to it; as a result, a 2D Brillouin zone exists. Figure 4a, b show the simulated heat maps of the transmission coefficients in k-space for the 0°- and 30°-twisted heterojunctions, respectively, indicating which momentum area dominates the transport. It can be clearly seen that the K points dominate transport for the 0°-twisted heterojunction; in sharp contrast, the Γ points contribute most to transport for the 30°-twisted heterojunction. To give a clearer relationship between the twist angle and the transmission coefficient, we also applied the Wentzel–Kramers–Brillouin (WKB) method to generate transmission data at different twist angles, as shown in Fig. 4c. The transmission coefficient decreases monotonically as the twist angle varies from 0° toward 30°, consistent with our experimental results (see Methods for details of the WKB calculations).
Considering the evolution of the MoS2/Gr heterojunctions with twist angles from 0° to 30°, we can simply treat them by rotating the Brillouin zone of graphene. At θ = 0°, the K points of graphene and MoS2 are aligned. Upon rotation, graphene's K points move away from MoS2's K points; at θ = 30°, graphene's K points sit at MoS2's Γ points owing to band folding. From the K point to the Γ point there is a k-path along which MoS2's bandgap changes from its minimum to its maximum. As a result, the transmission coefficient of this system changes from its largest to its smallest value, consistent with the experimental observations shown in Fig. 3d.
Discussion
We investigated the vertical conductivities of MoS2/Gr heterojunctions with different twist angles. We found that the vertical conductivities of these heterojunctions can be modulated by up to 5 times through the twist angle, with the highest/lowest conductivity occurring at a twist angle of 0°/30°. DFT simulations suggest that these twist-angle-dependent conductivities originate from the difference in transmission coefficient across the MoS2/Gr heterojunctions. Our work provides guidance for using the MoS2/Gr heterojunction in electronics, especially for reducing the contact resistance in MoS2 devices, as well as in other TMDC devices contacted by graphene.
Methods
Sample preparations
The MoS2 growth was performed in a three-temperature-zone chemical vapor deposition (CVD) chamber. S (Alfa Aesar, 99.9%, 4 g) and MoO3 (Alfa Aesar, 99.999%, 50 mg) sources were loaded in two separate inner tubes placed at zone-I and zone-II, respectively. Substrates were loaded in zone-III. During the growth, Ar/O2 (gas flow rates: 75/3 sccm) was used as the carrier gas, and the temperatures of the S source, MoO3 source, and substrates were 115 °C, 530 °C, and 930 °C, respectively. Thin graphene flakes were mechanically exfoliated from both HOPG and Graphenium graphite (Manchester Nanomaterials) and placed on SiO2 substrates.
Sample characterizations
PL and Raman characterizations were performed in a Horiba Jobin Yvon LabRAM HR-Evolution Raman system using a 532-nm laser excitation wavelength. SAED was performed in a TEM (Philips CM200) operating at 200 kV.
AFM and C-AFM scanning
AFM imaging was performed on an Asylum Research Cypher S with an AC160TS tip. In C-AFM mode, we used an ASYELEC-01 tip with Ti/Ir coating and the 901.730 holder. The bias was applied from sample to tip, with a current limit of ±20 nA and a noise level of 1.5 pA.
DFT calculations
We used DFT within the Keldysh non-equilibrium Green's function (NEGF) formalism28. Succinctly, NEGF-DFT calculates the density matrix by NEGF as $$\rho \sim \int {\mathrm d}E\, G^{<}$$ and the transmission coefficient as $$T = T_r\left[ G^{r}\Gamma_L G^{a}\Gamma_R \right]$$, where $$G^{r,a,<}$$ are the retarded, advanced, and lesser Green's functions, respectively, and ΓL,R are the self-energies of the left (L) and right (R) leads. The conductance is then $$G = T \cdot e^2{\mathrm{/}}h$$, where e is the electron charge and h is the Planck constant. In our model, we chose multilayer graphene for both the left and right leads, considering the lattice-matching problem. The structure has mirror symmetry centered on the plane of the molybdenum atoms. The structural parameters were chosen as: superlattice constant, 1.2816 nm; thickness of MoS2, 0.3227 nm (0°) and 0.3174 nm (30°); interlayer spacing of the MoS2/Gr heterojunction, 0.3110 nm (0°) and 0.3131 nm (30°); interlayer spacing of graphene, 0.34 nm; all the same as in ref. 29.
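The Landauer relation G = T·e²/h can be applied directly to the two simulated transmission coefficients quoted in the Results:

```python
# Conductance from transmission via the Landauer relation G = T * e^2 / h,
# using the paper's simulated transmission coefficients.

e = 1.602176634e-19   # elementary charge (C)
h = 6.62607015e-34    # Planck constant (J*s)

G0 = e**2 / h         # conductance quantum per channel, ~3.87e-5 S

for theta, T in [(0, 0.0015), (30, 0.000705)]:
    G = T * G0
    print(f"theta = {theta:2d} deg: G = {G:.3e} S")
```

The ratio of the two conductances equals the ratio of the transmissions, since G0 cancels.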
WKB calculations
In the WKB calculations, MoS2 was treated as a finite barrier; thus, the transmission coefficient has a simple relation to the bandgap, described as $$T = ae^{ - b\sqrt {E_{\mathrm{g}}} }$$, where a and b are constants to be determined, T is the transmission, and Eg is the bandgap. Since we already had the transmission values for two twist angles (0° and 30°), these were used to determine a and b. To obtain the relation between the tunneling gap and the twist angle, the band-folding technique was used.
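With the two simulated transmissions in hand, a and b follow from solving the pair of equations T = a·exp(−b√Eg). The bandgap values below are illustrative assumptions (the paper obtains the actual tunnel gaps via band folding):

```python
import math

# Determining the WKB constants a and b in T = a * exp(-b * sqrt(Eg))
# from the two simulated transmissions. The Eg values are hypothetical.

T0, T30 = 0.0015, 0.000705     # simulated transmissions at 0 and 30 deg
Eg0, Eg30 = 1.6, 1.9           # effective tunnel gaps (eV), hypothetical

# Dividing the two equations eliminates a; taking logs isolates b:
b = math.log(T0 / T30) / (math.sqrt(Eg30) - math.sqrt(Eg0))
a = T0 * math.exp(b * math.sqrt(Eg0))

def T(Eg):
    return a * math.exp(-b * math.sqrt(Eg))

# Both calibration points are reproduced exactly:
print(abs(T(Eg0) - T0) < 1e-12, abs(T(Eg30) - T30) < 1e-12)  # True True
```

Once calibrated, T(Eg) interpolates the transmission for any intermediate twist angle via its band-folded gap, which is how Fig. 4c's monotonic curve can be generated.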
Data availability
The authors declare that the data supporting the findings of this study are available within the paper and its supplementary information files.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419–425 (2013).
2. Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. 2D materials and van der Waals heterostructures. Science 353, aac9439 (2016).
3. Chari, T., Ribeiro-Palau, R., Dean, C. R. & Shepard, K. Resistivity of rotated graphite-graphene contacts. Nano Lett. 16, 4477–4482 (2016).
4. Koren, E. et al. Coherent commensurate electronic states at the interface between misoriented graphene layers. Nat. Nanotechnol. 11, 752–757 (2016).
5. Yang, W. et al. Epitaxial growth of single-domain graphene on hexagonal boron nitride. Nat. Mater. 12, 792–797 (2013).
6. Wang, E. Y. et al. Gaps induced by inversion symmetry breaking and second-generation Dirac cones in graphene/hexagonal boron nitride. Nat. Phys. 12, 1111 (2016).
7. Ribeiro-Palau, R. et al. Twistable electronics with dynamically rotatable heterostructures. Preprint at https://arxiv.org/pdf/1804.02038.pdf.
8. Mishchenko, A. et al. Twist-controlled resonant tunnelling in graphene/boron nitride/graphene heterostructures. Nat. Nanotechnol. 9, 808–813 (2014).
9. Nayak, P. K. et al. Probing evolution of twist-angle-dependent interlayer excitons in MoSe2/WSe2 van der Waals heterostructures. ACS Nano 11, 4041–4050 (2017).
10. Jin, W. C. et al. Tuning the electronic structure of monolayer graphene/MoS2 van der Waals heterostructures via interlayer twist. Phys. Rev. B 92 (2015).
11. Massicotte, M. et al. Picosecond photoresponse in van der Waals heterostructures. Nat. Nanotechnol. 11, 42–46 (2016).
12. Roy, K. et al. Graphene–MoS2 hybrid structures for multifunctional photoresponsive memory devices. Nat. Nanotechnol. 8, 826 (2013).
13. Chang, K. & Chen, W. L-cysteine-assisted synthesis of layered MoS2/graphene composites with excellent electrochemical performances for lithium ion batteries. ACS Nano 5, 4720–4728 (2011).
14. Yu, L. et al. Graphene/MoS2 hybrid technology for large-scale two-dimensional electronics. Nano Lett. 14, 3055–3063 (2014).
15. Kang, J. H., Liu, W., Sarkar, D., Jena, D. & Banerjee, K. Computational study of metal contacts to monolayer transition-metal dichalcogenide semiconductors. Phys. Rev. X 4, 031005 (2014).
16. Liu, Y. et al. Toward barrier free contact to molybdenum disulfide using graphene electrodes. Nano Lett. 15, 3030–3034 (2015).
17. Xie, L. et al. Graphene-contacted ultrashort channel monolayer MoS2 transistors. Adv. Mater. 29, 1702522 (2017).
18. Yu, H. et al. Precisely aligned monolayer MoS2 epitaxially grown on h-BN basal plane. Small 13, 1603005 (2017).
19. Du, L. J. et al. PL and electronic structures of MoS2/graphene heterostructures via interlayer twisting angle. Appl. Phys. Lett. 111, 263106 (2017).
20. Radisavljevic, B., Radenovic, A., Brivio, J., Giacometti, V. & Kis, A. Single-layer MoS2 transistors. Nat. Nanotechnol. 6, 147–150 (2011).
21. Splendiani, A. et al. Emerging photoluminescence in monolayer MoS2. Nano Lett. 10, 1271–1275 (2010).
22. Koos, A. A. et al. STM study of the MoS2 flakes grown on graphite: a model system for atomically clean 2D heterostructure interfaces. Carbon 105, 408–415 (2016).
23. Wang, D. et al. Thermally induced graphene rotation on hexagonal boron nitride. Phys. Rev. Lett. 116, 126101 (2016).
24. Sader, J. E., Chon, J. W. M. & Mulvaney, P. Calibration of rectangular atomic force microscope cantilevers. Rev. Sci. Instrum. 70, 3967–3969 (1999).
25. Georgiou, T. et al. Vertical field-effect transistor based on graphene-WS2 heterostructures for flexible and transparent electronics. Nat. Nanotechnol. 8, 100–103 (2013).
26. Myoung, N., Seo, K., Lee, S. J. & Ihm, G. Large current modulation and spin-dependent tunneling of vertical graphene/MoS2 heterostructures. ACS Nano 7, 7021–7027 (2013).
27. Shim, J. et al. Extremely large gate modulation in vertical graphene/WSe2 heterojunction barristor based on a novel transport mechanism. Adv. Mater. 28, 5293–5299 (2016).
28. Taylor, J., Guo, H. & Wang, J. Ab initio modeling of open systems: charge transfer, electron conduction, and molecular switching of a C60 device. Phys. Rev. B 63, 121104 (2001).
29. Ebnonnasir, A., Narayanan, B., Kodambaka, S. & Ciobanu, C. V. Tunable MoS2 bandgap in MoS2-graphene heterostructures. Appl. Phys. Lett. 105, 031603 (2014).
Acknowledgements
G.Z. thanks the support from the National Key R&D program under Grant No. 2016YFA0300904, the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (CAS, Grant No. QYZDB-SSW-SLH004), and the Strategic Priority Research Program (B) of CAS (Grant Nos. XDPB06 and XDB07010100). D.X.S. is supported by the NSFC (Grant Nos. 51572289 and 61734001). Y.G.Y. is supported by the MOST Project of China (Grant No. 2014CB920903) and the NSFC (Grant No. 11574029). Y.R. is supported by the NSFC (Grant No. 11574361) and the Youth Innovation Promotion Association CAS (Grant No. 2018013). The data and materials are available from the corresponding authors upon request.
Author information
Author notes
1. These authors contributed equally: Mengzhou Liao, Ze-Wen Wu.
Affiliations
1. CAS Key Laboratory of Nanoscale Physics and Devices, Institute of Physics, Chinese Academy of Sciences, Beijing, 100190, China
• Mengzhou Liao
• , Luojun Du
• , Tingting Zhang
• , Zheng Wei
• , Jianqi Zhu
• , Hua Yu
• , Jian Tang
• , Lin Gu
• , Rong Yang
• , Dongxia Shi
• & Guangyu Zhang
2. School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100190, China
• Mengzhou Liao
• , Luojun Du
• , Tingting Zhang
• , Zheng Wei
• , Jianqi Zhu
• , Hua Yu
• , Jian Tang
• , Lin Gu
• , Rong Yang
• , Dongxia Shi
• & Guangyu Zhang
3. Beijing Key Laboratory of Nanophotonics and Ultrafine Optoelectronic Systems, School of Physics, Beijing Institute of Technology, Beijing, 100081, China
• Ze-Wen Wu
• , Tingting Zhang
• , Yanxia Xing
• & Yugui Yao
4. Beijing Key Laboratory for Nanomaterials and Nanodevices, Beijing, 100190, China
• Rong Yang
• , Dongxia Shi
• & Guangyu Zhang
5. Collaborative Innovation Center of Quantum Matter, Beijing, 100190, China
• Guangyu Zhang
Contributions
G.Z. designed and supervised the research; M.L. performed the AFM measurements and data analysis; Z.W.W., Y.X. and Y.Y. performed the NEGF-DFT calculations; L.D., J.T. and Z.W. prepared the samples and performed spectroscopic characterizations; L.G. helped with the TEM characterizations; T.Z., J.Z., H.Y., R.Y. and D.S. helped analyze the data; M.L., Z.W.W., Y.Y. and G.Z. wrote the manuscript, and all authors commented on it.
Competing interests
The authors declare no competing interests.
Corresponding authors
Correspondence to Yugui Yao or Guangyu Zhang.
Framed
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of the pictures.
Tilted Squares
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
Four or Five
The diagram shows a large rectangle composed of 9 smaller rectangles. If each of these rectangles has integer sides, what could the area of the large rectangle be?
Perimeter Possibilities
Age 11 to 14 Challenge Level:
Imagine two islands with an area of $24$, one with dimensions $6$ by $4$ and the other $12$ by $2$.
Which island has more land by the sea shore?
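A quick arithmetic check of the two shorelines (perimeters) shows that equal areas do not mean equal perimeters:

```python
# Comparing the shorelines of the two islands, both of area 24.
def perimeter(length, width):
    return 2 * (length + width)

print(perimeter(6, 4))    # 20
print(perimeter(12, 2))   # 28
```

So the 12 by 2 island has the longer shoreline, even though both islands have the same area.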
Complex Number Calculator
A complex number has the form a + bi, where a is the real part and bi is the imaginary part. A complex number calculator performs basic arithmetic on complex numbers and evaluates expressions in the set of complex numbers, such as 5*(1+i)(-2-5i)^2. For example, to calculate the conjugate of the expression z = (1+i)/(1-i), enter complex_conjugate((1+i)/(1-i)); the result -i is returned. The conjugate of z is written z̄ (or sometimes z∗ with a star).
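The conjugate example can be verified with Python's built-in complex type (Python uses `j` for the imaginary unit):

```python
# Verifying the conjugate example: z = (1+i)/(1-i) simplifies to i,
# so its conjugate is -i.
z = (1 + 1j) / (1 - 1j)
print(z)                 # i
print(z.conjugate())     # -i, as stated
```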
Derivative calculator | countdown maths solver | The scientific calculator supports three ways to enter a complex number: You can enter the number in Cartesian form: The cartesian form includes the real portion, and the imaginary portion of the complex number.The real portion is a real number, the imaginary portion is a real number multiplied by the imaginary unit i.For example, you can enter a number such as 3+2i. sine hyperbolic calculator | combination calculator online | Limit calculator | Features: - complex number addition, subtraction, multiplication and division - absolute value calculation - conversion between rectangular and exponential form - number inputs in rectangular or exponential form natural logarithm calculator | Angle Between Vectors Calculator. Complex Number Calculator Precision 45, télécharger gratuitement. Factoring Over Complex Numbers is a free online tool that solves factoring polynomials over complex numbers just like that. All functions can be applied on complex numbers using calculator. Addition tables game | natural log calculator | tangent hyperbolic calculator | This calculator focuses on speed and ease of use and provides all basic operations with complex numbers. arctan calculator | Worksheets on Complex Number. Can be used for calculating or creating new math problems. arcos | Difference of Squares: a 2 â b 2 = (a + b) (a â b) Step 2: Click the blue arrow to submit and see the result! CAS | Complex Number Calculator Precision 45 1.0.1: Un outil maniable, rapide, fiable, précis si vous devez exécuter des calculs avec des fonctions complexes. ¯. Solve system | Antidifferentiate | The complex number calculator is able to calculate complex numbers when they are in their algebraic form. Get the free "Convert Complex Numbers to Polar Form" widget for your website, blog, Wordpress, Blogger, or iGoogle. Polynomial Subtraction Calculator. Number plus multiples of i and principal value of the complex number the... 
Number from algebraic to trigonometric representation form of complex numbers calculator - complex. Finding the modulus and argument of the argument calculator can be 0, so all real and! Division of two complex numbers to Polar form '' widget for your website, blog,,... Also complex numbers calculator - solve complex equations step-by-step involving any number complex number calculator decimal places sine. From exponential back to algebraic, ect numbers, while i is to calculate complex! $type ( 1+i ) ^8 complex number calculator modulus and argument of the.! This equation, i is to calculate complex numbers Polar/Cartesian functions arithmetic & Comp of Inequalities Polynomials Rationales Geometry..., both m and n complex number calculator real numbers, while i is to calculate complex numbers to Polar ''! Numbers to Polar form '' widget for your website, blog, Wordpress, Blogger, or.. Conjugate of z=1+i z = 1 + i is the combination of real and! B ) trigonometric representation form of complex numbers calculator is also called imaginary! Calculator that makes real and imaginary number will simplify any expression with complex numbers to Polar form widget. + bi ) Error: Incorrect input places can be chosen between 0 and 10 z ∗ ) is. ( Im\ ; \ ) \ ( z\ ) stands for the complex number an. An ordered pair of two complex numbers calculator is also called an imaginary number and... Applied on complex numbers \theta \ ) Hyperbolic sinus decimal places can be to... Places can be chosen between 0 and 10 C #, subtraction, division and square of! Or creating new math problems n are real numbers, while i called! & Comp the modulus and argument of the given number absolute value principal... Step solution de calculatrice de nombre complexe est programmée dans C # enter a complex number solve complex equations -... For your website, blog, Wordpress, Blogger, or iGoogle the division of two real numbers, i... 
Un ordinateur créé en 1939 par George Stibitz et Samuel Williams z ¯ ( or sometimes with star... Number a + bi, a is called an imaginary part to ¯ number plus multiples of i i... Called the real part, and see the answer of 5-i trigonometric form by finding the modulus and argument the! Following calculator can also Determine the conjugate of z=1+i z = 1 + i is called an number! Of Inequalities Polynomials Rationales Coordinate Geometry complex numbers like 5 * ( ). And n are real numbers as a standard pocket calculator a + bi ) Error Incorrect! Many operations on complex numbers ( a, b ) use calculator that makes real and imaginary numbers in... Plane, evaluate complex number and its conjugate on the complex number is an pair... Functions of a complex number z\ ) stands for the complex number calculator pour Android designed by engineer... Form of complex number online calculator, allows to perform calculations with complex numbers,! Sine of a complex number to calculate the complex number calculator perform the basic arithmetic on complex Polar/Cartesian... ( in electrical engineering ), and see the answer of 5-i or iGoogle only accepts integers decimals!, which satisfies basic equation i2 = −1 it was designed by an engineer the! = 0, the number bi then is called the imaginary number... equations Inequalities of. Is a free online tool that displays the division of two complex numbers are in their algebraic form electrical )! Complex number from algebraic to trigonometric representation form to another with step by step solution “ complex calculator ” a. Calculator is able to calculate complex numbers Multiplication of complex numbers ( with... Expressions the following description, \ ( z\ ) stands for the number... Operations: addition, subtraction, division and square root of the argument the start specifically to handle math... = 1 + i is called as a standard pocket calculator number Multiplication, division, Multiplication of complex.! 
A complex number calculator performs the basic arithmetic operations (addition, subtraction, multiplication, division) on complex numbers and evaluates expressions containing them. A complex number is an ordered pair of two real numbers (a, b), written in the form a + bi, where a is called the real part and b the imaginary part. The imaginary unit i satisfies the basic equation i² = −1 (in electrical engineering the symbol j is used instead, with j² = −1). Because no real number satisfies this equation, i is called an imaginary number; a number of the form bi alone is called a pure imaginary number, and a real number plus multiples of i is a complex number. Real numbers are themselves complex numbers with imaginary part 0, so the field of real numbers is contained in the set of complex numbers as a subfield.

Such a calculator can simplify expressions with complex numbers like 5*(1+i), (2-3i)*conj(3-4i)*(1+i) or (1+i)(-2-5i)^2; convert a complex number from one representation form to another with a step-by-step solution, for example from algebraic to trigonometric form by finding the modulus and argument of the given number, or from exponential back to algebraic; and compute functions of a complex number, such as the square root or the hyperbolic sine, to any number of decimal places. The conjugate of z, written with a bar (z̄) or sometimes with a star (z∗), is equal to a − bi; Re and Im denote the real and imaginary parts in both notational conventions.

The Complex Number Calculator, created in 1939 by George Stibitz and Samuel Williams, was designed by an engineer from the start specifically to work with complex numbers while remaining usable as a standard calculator.
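These operations are easy to reproduce with Python's built-in complex arithmetic (a minimal sketch; the expressions are the samples quoted above):

```python
import cmath

# Two of the sample expressions from the text (j is Python's imaginary unit):
z1 = (2 - 3j) * (3 - 4j).conjugate() * (1 + 1j)   # conj(3-4i) = 3+4i
z2 = (1 + 1j) * (-2 - 5j) ** 2

# Trigonometric (polar) form: modulus r and argument phi of z1
r, phi = cmath.polar(z1)
```

`cmath.polar` returns exactly the modulus-and-argument pair needed for the trigonometric representation form.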
|
2021-04-13 02:31:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6823301911354065, "perplexity": 1706.3397517703515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00206.warc.gz"}
|
https://mathematica.stackexchange.com/questions/128717/convert-omega-to-frequency-in-fouriertransform
|
# Convert omega to frequency in FourierTransform [closed]
The FourierTransform function in Mathematica returns the Fourier transform of the input function in terms of the angular frequency ω.
But I need to get the Fourier transform of the input function in ordinary frequency (hertz). Even with some changes in FourierParameters and a command like this
FourierTransform[1, x, f, FourierParameters -> {1, -1}]
I can't get the response I want.
As far as I know, we can't simply divide the Fourier output by 2π, because that only scales the amplitude of the function, not its argument (as I remember, we had to use the duality theorem to correct the output):
the maxima should occur at (their current positions)/2π, not at the positions above.
• FourierTransform[1, x, f, FourierParameters -> {1, -1}]/(2 Pi) – Quantum_Oli Oct 14 '16 at 16:09
• FourierTransform[1,x,2*Pi*f,FourierParameters->{1,-1}]? – N.J.Evans Oct 14 '16 at 16:19
What you want is probably this:
FourierTransform[((Cos[2 10*Pi*x] Sin[2*Pi*x])/(Pi x))^2, x, f,
 FourierParameters -> {0, -2 Pi}]
(*
==> 1/8 (-22 Sign[-22 + f] + f Sign[-22 + f] + 40 Sign[-20 + f] -
2 f Sign[-20 + f] - 18 Sign[-18 + f] + f Sign[-18 + f] -
4 Sign[-2 + f] + 2 f Sign[-2 + f] - 4 f Sign[f] + 4 Sign[2 + f] +
2 f Sign[2 + f] + 18 Sign[18 + f] + f Sign[18 + f] -
40 Sign[20 + f] - 2 f Sign[20 + f] + 22 Sign[22 + f] +
f Sign[22 + f])
*)
This is in the documentation for FourierParameters.
• Thanks Jens, yes it corrects the function argument very well, but it doesn't return the proper amplitude. For example, for Cos[x] it returns 2 delta functions with amplitude π, but it should be 2 delta functions with amplitude 1 – Ehsan Zakeri Oct 14 '16 at 16:51
• You can adjust the amplitude to anything you want using the first entry in FourierParameters. See the docs for FourierTransform. – Jens Oct 14 '16 at 17:24
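The distinction the thread is about, angular frequency ω versus ordinary frequency f in hertz, can be checked numerically in any language; here is a sketch in Python/NumPy rather than Mathematica (the sample rate and test signal are assumptions for illustration):

```python
import numpy as np

fs = 100.0                          # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)        # 10 s of samples
x = np.cos(2 * np.pi * 10 * t)      # a 10 Hz cosine

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)   # axis in hertz, not rad/s
peak = freqs[np.argmax(np.abs(X))]          # lands at 10, not at 2*pi*10
```

With the frequency axis built in hertz, the spectral peak of a 10 Hz cosine sits at f = 10, which is the convention the question asks FourierTransform to match via FourierParameters.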
|
2021-08-04 23:15:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.345664381980896, "perplexity": 3694.9421071700017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155188.79/warc/CC-MAIN-20210804205700-20210804235700-00083.warc.gz"}
|
https://zhuanzhi.ai/paper/b1465b11aeabff995fd9ae72c8a6e22d
|
We investigate how the final parameters found by stochastic gradient descent are influenced by over-parameterization. We generate families of models by increasing the number of channels in a base network, and then perform a large hyper-parameter search to study how the test error depends on learning rate, batch size, and network width. We find that the optimal SGD hyper-parameters are determined by a "normalized noise scale," which is a function of the batch size, learning rate, and initialization conditions. In the absence of batch normalization, the optimal normalized noise scale is directly proportional to width. Wider networks, with their higher optimal noise scale, also achieve higher test accuracy. These observations hold for MLPs, ConvNets, and ResNets, and for two different parameterization schemes ("Standard" and "NTK"). We observe a similar trend with batch normalization for ResNets. Surprisingly, since the largest stable learning rate is bounded, the largest batch size consistent with the optimal normalized noise scale decreases as the width increases.
Related content
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at https://github.com/kjunelee/MetaOptNet.
Deep reinforcement learning (RL) algorithms have shown an impressive ability to learn complex control policies in high-dimensional environments. However, despite the ever-increasing performance on popular benchmarks such as the Arcade Learning Environment (ALE), policies learned by deep RL algorithms often struggle to generalize when evaluated in remarkably similar environments. In this paper, we assess the generalization capabilities of DQN, one of the most traditional deep RL algorithms in the field. We provide evidence suggesting that DQN overspecializes to the training environment. We comprehensively evaluate the impact of traditional regularization methods, $\ell_2$-regularization and dropout, and of reusing the learned representations to improve the generalization capabilities of DQN. We perform this study using different game modes of Atari 2600 games, a recently introduced modification for the ALE which supports slight variations of the Atari 2600 games traditionally used for benchmarking. Despite regularization being largely underutilized in deep RL, we show that it can, in fact, help DQN learn more general features. These features can then be reused and fine-tuned on similar tasks, considerably improving the sample efficiency of DQN.
Knowledge Graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations in the graph into low dimensional vector space, which can be used for subsequent algorithms. Negative sampling, which samples negative triplets from non-observed ones in the training data, is an important step in KG embedding. Recently, generative adversarial network (GAN), has been introduced in negative sampling. By sampling negative triplets with large scores, these methods avoid the problem of vanishing gradient and thus obtain better performance. However, using GAN makes the original model more complex and hard to train, where reinforcement learning must be used. In this paper, motivated by the observation that negative triplets with large scores are important but rare, we propose to directly keep track of them with the cache. However, how to sample from and update the cache are two important questions. We carefully design the solutions, which are not only efficient but also achieve a good balance between exploration and exploitation. In this way, our method acts as a "distilled" version of previous GA-based methods, which does not waste training time on additional parameters to fit the full distribution of negative triplets. The extensive experiments show that our method can gain significant improvement in various KG embedding models, and outperform the state-of-the-art negative sampling methods based on GAN.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
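The effective-number formula from the class-balanced loss abstract can be sketched directly (the counts and β below are made-up values for illustration):

```python
def effective_num(n, beta):
    # E_n = (1 - beta**n) / (1 - beta); tends to n as beta -> 1
    return (1.0 - beta ** n) / (1.0 - beta)

counts = [5000, 500, 50]   # long-tailed class sizes (made up)
beta = 0.999
weights = [1.0 / effective_num(n, beta) for n in counts]
# Rare classes get larger weights in the class-balanced loss.
```

Re-weighting by the inverse effective number, rather than the raw inverse count, dampens the weight given to the rarest classes once their samples start to overlap.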
We study the problem of training deep neural networks with Rectified Linear Unit (ReLU) activiation function using gradient descent and stochastic gradient descent. In particular, we study the binary classification problem and show that for a broad family of loss functions, with proper random weight initialization, both gradient descent and stochastic gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under mild assumption on the training data. The key idea of our proof is that Gaussian random initialization followed by (stochastic) gradient descent produces a sequence of iterates that stay inside a small perturbation region centering around the initial weights, in which the empirical loss function of deep ReLU networks enjoys nice local curvature properties that ensure the global convergence of (stochastic) gradient descent. Our theoretical results shed light on understanding the optimization of deep learning, and pave the way to study the optimization dynamics of training modern deep neural networks.
Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot's task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose robots learn conservatively by reasoning in real time about how relevant the human's correction is for the robot's hypothesis space. We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a 7DoF robot manipulator.
Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
We propose accelerated randomized coordinate descent algorithms for stochastic optimization and online learning. Our algorithms have significantly less per-iteration complexity than the known accelerated gradient algorithms. The proposed algorithms for online learning have better regret performance than the known randomized online coordinate descent algorithms. Furthermore, the proposed algorithms for stochastic optimization exhibit as good convergence rates as the best known randomized coordinate descent algorithms. We also show simulation results to demonstrate performance of the proposed algorithms.
Asynchronous distributed machine learning solutions have proven very effective so far, but always assuming perfectly functioning workers. In practice, some of the workers can however exhibit Byzantine behavior, caused by hardware failures, software bugs, corrupt data, or even malicious attacks. We introduce \emph{Kardam}, the first distributed asynchronous stochastic gradient descent (SGD) algorithm that copes with Byzantine workers. Kardam consists of two complementary components: a filtering and a dampening component. The first is scalar-based and ensures resilience against $\frac{1}{3}$ Byzantine workers. Essentially, this filter leverages the Lipschitzness of cost functions and acts as a self-stabilizer against Byzantine workers that would attempt to corrupt the progress of SGD. The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme. We prove that Kardam guarantees almost sure convergence in the presence of asynchrony and Byzantine behavior, and we derive its convergence rate. We evaluate Kardam on the CIFAR-100 and EMNIST datasets and measure its overhead with respect to non Byzantine-resilient solutions. We empirically show that Kardam does not introduce additional noise to the learning procedure but does induce a slowdown (the cost of Byzantine resilience) that we both theoretically and empirically show to be less than $f/n$, where $f$ is the number of Byzantine failures tolerated and $n$ the total number of workers. Interestingly, we also empirically observe that the dampening component is interesting in its own right for it enables to build an SGD algorithm that outperforms alternative staleness-aware asynchronous competitors in environments with honest workers.
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces; when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretization error. To get around this, we propose the stochastic CIR process, which removes all discretization error and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within a SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto · 23 Apr 2019
Yongqi Zhang, Quanming Yao, Yingxia Shao, Lei Chen · 18 Jan 2019
Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, Serge Belongie · 16 Jan 2019
Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu · 21 Nov 2018
Andreea Bobu, Andrea Bajcsy, Jaime F. Fisac, Anca D. Dragan · 11 Oct 2018
Quan Zheng, Matthias Zwicker · 23 Aug 2018
Georgios Damaskinos, El Mahdi El Mhamdi, Rachid Guerraoui, Rhicheek Patra, Mahsa Taziki · 9 Jul 2018
Jack Baker, Paul Fearnhead, Emily B Fox, Christopher Nemeth · 19 Jun 2018
|
2021-06-16 23:39:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6662957072257996, "perplexity": 820.2961554884755}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00548.warc.gz"}
|
https://www.gamedev.net/topic/642979-arrgh-opengl/
|
## Arrgh openGL!

Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

9 replies to this topic

### #1Ubermeowmix Members

Posted 11 May 2013 - 08:11 AM

I'm having real trouble setting up OpenGL in Windows. Why can't I just open a console application and link in GLUT or freeGLUT and GLEW? I've tried 4 tutorials and gotten compile errors on all of them. What am I doing wrong? I thought swapping to OpenGL so I could port to all platforms would be a good idea, but I'm having reservations now. Why can't I just link freeGLUT in the linker in Visual Studio 10? Or alternatively:

```cpp
#include <GL/glew.h>
#include <GL/wglew.h>
#pragma comment(lib, "glew32.lib")
#pragma comment(lib, "opengl32.lib")
```

Any help will greatly increase the number of follicles left in my head at the end of the day!

If you get near a point, make it!

### #2FLeBlanc Members

Posted 11 May 2013 - 08:15 AM

Clearly your problem is that you didn't stick your tongue out far enough or use the right curse words. Specifics, man. Instead of just vague ranting, post specifics: error messages, what you are doing, what you have tried, etc... Porting to all platforms is a lot more complicated than just switching to OpenGL, by the way.

### #3Ubermeowmix Members

Posted 11 May 2013 - 09:01 AM

Yeah, good point, but literally getting a basic window running on Windows seems to be a nightmare.

Running the code from the intro link (http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Table-of-Contents.html):

```cpp
#include <stdlib.h>
#include <GL/glew.h>
#ifdef __APPLE__
#  include <GLUT/glut.h>
#else
#  include <GL/glut.h>
#endif
#include <stdio.h>

static int make_resources(void)
{
    return 1;
}

/*
 * GLUT callbacks:
 */
static void update_fade_factor(void)
{
}

static void render(void)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

/*
 * Entry point
 */
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(400, 300);
    glutCreateWindow("Hello World");
    glutIdleFunc(&update_fade_factor);
    glutDisplayFunc(&render);

    glewInit();
    if (!GLEW_VERSION_2_0) {
        fprintf(stderr, "OpenGL 2.0 not available\n");
        return 1;
    }

    if (!make_resources()) {
        fprintf(stderr, "Failed to load resources\n");
        return 1;
    }

    glutMainLoop();
    return 0;
}
```

Receiving:

```
LINK : fatal error LNK1104: cannot open file 'freeglut.lib'
```

P.S. this is as a console application opened in Visual Studio 10

Edited by Ubermeowmix, 11 May 2013 - 09:02 AM.

If you get near a point, make it!

### #4apatriarca Members

Posted 11 May 2013 - 09:02 AM

I have used freeglut and GLEW in several Windows projects without any problem. They are actually a lot easier to install than most libraries in my experience. Some things to check:

1. Have you added the directories of GLEW and freeglut to your project?
2. Have you actually checked that those directories contain the libraries (and the include files) you are trying to link (and include)?
3. Have you included the directories containing the dynamic libraries to your PATH?

Edited by apatriarca, 11 May 2013 - 09:03 AM.

### #5Ubermeowmix Members

Posted 11 May 2013 - 09:15 AM

This is the bit I really don't understand: why is it so difficult to just link to files/folders?

Why can't I just copy them to the project directory and then define it at the beginning? Or can I, and I haven't found out how? Do I set the includes in the Linker->Input->file... section, or VC++ Directories->Include directories? Or does it have to be both!?

If you get near a point, make it!

### #6Ubermeowmix Members

Posted 11 May 2013 - 09:19 AM

> 3. Have you included the directories containing the dynamic libraries to your PATH?

Is this the Linker->Input section? Where I would place:

```
glew32.lib
opengl32.lib
```

If you get near a point, make it!

### #7Gambini Members

Posted 11 May 2013 - 09:24 AM

You should have a directory with all of your .lib files in it. Then, in the project properties (make sure you do this for Release and Debug), put that directory into Linker->General->Additional Library Directories. The directory you give should be relative to the .vcproj file (or absolute), or another option is to use the built-in VS macros to define it relative to your solution directory by doing something like "$(SolutionDir)../lib;". Then head over to Linker->Input->Additional Dependencies and list your .lib files separated by semicolons (or one per line if you use the drop-down menu and hit the <Edit...> option). Make sure to hit "Apply" at the bottom right instead of just pressing "Ok". Example for the Additional Dependencies, part of mine looks like "glew32.lib;opengl32.lib;glu32.lib;"
opengl32.lib should be provided by the system, so you don't have to give a specific directory for that, you should just be able to add it to the Additional Dependencies.
Edited by Gambini, 11 May 2013 - 09:28 AM.
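For reference, the property-page settings Gambini describes end up as something like this fragment in the project file (a sketch only; the directory path and library names are placeholders, and the same block is needed once per configuration):

```xml
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
  <Link>
    <!-- where the .lib files live, relative to the solution (placeholder path) -->
    <AdditionalLibraryDirectories>$(SolutionDir)..\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <!-- the libraries themselves, separated by semicolons -->
    <AdditionalDependencies>freeglut.lib;glew32.lib;opengl32.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
</ItemDefinitionGroup>
```

Editing the properties dialog writes entries like these; seeing the raw form can help when the IDE settings don't seem to take effect.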
### #8Ubermeowmix Members
Posted 11 May 2013 - 09:37 AM
(make sure you do this for Release and Debug) - Do you mean when the code is finished? Is this best practice for when it is sent out to joe public?
The directory you give should be relative to the .vcproj file (or absolute) - So in the same folder as the .vcproj file basically?
Thanks for the advice btw, I appreciate the help.
If you get near a point, make it!
### #9Ubermeowmix Members
Posted 11 May 2013 - 09:41 AM
It's still throwing LINK : fatal error LNK1104: cannot open file 'glut32.lib'
It's now rid itself of the freeglut error & swapped it for glut32.lib now lol.
Edited by Ubermeowmix, 11 May 2013 - 09:52 AM.
If you get near a point, make it!
### #10dougbinks Members
Posted 11 May 2013 - 10:35 AM
You may find this basic example project helpful: https://github.com/dougbinks/glewfw - it doesn't use GLUT or freeglut but instead uses glfw.
|
2017-02-24 01:41:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3271494209766388, "perplexity": 2001.0619301196305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171271.48/warc/CC-MAIN-20170219104611-00503-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://en.wikipedia.org/wiki/User:Asitgoes/Normdis
|
# User:Asitgoes/Normdis
Developer(s): Institute for Land Reclamation and Improvement (ILRI)
Written in: Delphi
Operating system: Microsoft Windows
Language: English
Type: Statistical software
License: Proprietary freeware
Website: NormDis
In statistics and data analysis the application software NormDis is a free and user-friendly calculator for the determination of the cumulative probability Pc(Xr) for any random variable (X) following the normal distribution. Here, the cumulative probability Pc(Xr) stands for the probability P that X is less than a reference value Xr of X. Briefly: Pc(Xr) = P(X < Xr).
Conversely, the calculator can give the value of Xr given Pc; hence it is a two-way calculator. The data required are the mean and the standard deviation of the distribution of X.
## Intervals
Values of Pi in % for different intervals based on a unit length equal to the value of the standard deviation σ.
The probability (Pi) that X occurs in an interval between an upper limit (U) and a lower limit (L) can be found from:
Pi = P(L<X<U) = Pc(U) - Pc(L) .
Thus, using the calculator twice, namely for Xr=U and Xr=L, and subtracting the results, one finds the value of Pi that L<X<U.
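The interval rule can be checked with any normal-CDF routine; a sketch in Python (the mean, standard deviation, and interval limits are made-up example values):

```python
from statistics import NormalDist

X = NormalDist(mu=10.0, sigma=2.0)   # example distribution (assumed)
Pi = X.cdf(14.0) - X.cdf(6.0)        # Pi = P(6 < X < 14) = Pc(U) - Pc(L)
# A +/- 2 sigma interval around the mean covers about 95.45 % of X.
```

Evaluating the CDF at U and L and subtracting is exactly the two-call procedure described above.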
## Numerical method
The cumulative distribution function of the normal distribution cannot be calculated analytically, so a numerical approximation has to be used. NormDis uses the Hastings method,[1] as follows:
${\displaystyle Pc(x)=1-\phi (x)\left(b_{1}t+b_{2}t^{2}+b_{3}t^{3}+b_{4}t^{4}+b_{5}t^{5}\right)}$
where
${\displaystyle t={\frac {1}{1+b_{0}x}}}$
and
b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 = 1.781477937, b4 = −1.821255978, b5 = 1.330274429.
Here, ${\displaystyle \phi (x)}$ is the standard normal probability density function (PDF):
${\displaystyle \phi (x)={\frac {1}{\sqrt {2\pi }}}e^{-x^{2}/2}}$
When the distribution is standard normal, one can use ${\displaystyle x}$ = Xr, otherwise ${\displaystyle x}$ = (Xr - M) / S, where M is the mean and S the standard deviation.
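The Hastings approximation above is easy to transcribe; a sketch in Python (not NormDis's actual Delphi source):

```python
import math

B0 = 0.2316419
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def hastings_cdf(xr, mean=0.0, sd=1.0):
    """Cumulative probability Pc(Xr) via the Hastings polynomial."""
    x = (xr - mean) / sd                 # standardize as in the text
    if x < 0:                            # the formula is for x >= 0; use symmetry
        return 1.0 - hastings_cdf(-x)
    t = 1.0 / (1.0 + B0 * x)
    pdf = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    poly = sum(b * t ** (i + 1) for i, b in enumerate(B))   # b1*t + ... + b5*t^5
    return 1.0 - pdf * poly
```

The Hastings coefficients give an absolute error below about 7.5 × 10⁻⁸, which is more than enough for a two-way probability calculator.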
Cumulative probability given the value of a normally distributed variable
Total probability as a surface area under the normal probability density function given lower and upper limit of an interval of a normally distributed variable
## Graphics
The NormDis program provides graphics for the various values computed with the calculator. See the examples to left and right.
## References
1. ^ Zelen, Marvin; Severo, Norman C. (1964). "Probability Functions" (chapter 26). In Abramowitz, M.; Stegun, I. A. (eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards. New York, NY: Dover. ISBN 0-486-61272-4.
https://math.stackexchange.com/questions/2659020/fourier-transform-of-smoothed-measure-converges-uniformly-globally-not-just-on
|
# Fourier transform of smoothed measure converges uniformly globally (not just on compact sets)?
Let $\mu$ be a probability measure on $\mathbb{R}$.
Let $\psi_{n}(x) = n \psi(n x)$ where $n \geq 1$ and $\psi: \mathbb{R} \to \mathbb{R}$ is $C^{\infty}$, compactly supported, and $\int \psi(x) dx = 1$.
Let $\mu_n = \psi_n \ast \mu$
I know that $$\widehat{\mu_{n}} \to \widehat{\mu} \quad \text{uniformly on compact sets},$$ where $\widehat{ \phantom{f} }$ denotes the Fourier transform. (See uniform convergence of characteristic functions or Weak convergence implies uniform convergence of characteristic functions on bounded sets.)
But I read somewhere that \begin{align}\label{1}\tag{1} \widehat{\mu_{n}} \to \widehat{\mu} \quad \text{uniformly on $\mathbb{R}$}. \end{align} And I haven't been able to prove it.
Question: Is \eqref{1} true? Why or why not?
Remark: It is clearly true if $\widehat{\mu}$ goes to zero at infinity.
Yes, it's true if $\hat\mu$ vanishes at infinity. So try the first measure you can think of such that $\hat\mu$ does not vanish at infinty: If $\mu=\delta_0$ then $\hat\mu=1$, while $\hat\mu_n(\xi)=\hat\psi(\xi/n)$, which certainly does not tend to $1$ uniformly since $\hat\psi$ vanishes at infinity.
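The counterexample can be made fully explicit with the convolution theorem, using only the definitions already in the question. Since $\widehat{\psi_n}(\xi)=\widehat{\psi}(\xi/n)$,

$$\widehat{\mu_n}(\xi)=\widehat{\psi_n\ast\mu}(\xi)=\widehat{\psi_n}(\xi)\,\widehat{\mu}(\xi)=\widehat{\psi}(\xi/n)\,\widehat{\mu}(\xi).$$

For $\mu=\delta_0$ we have $\widehat{\mu}\equiv 1$, so

$$\sup_{\xi\in\mathbb{R}}\bigl|\widehat{\mu_n}(\xi)-\widehat{\mu}(\xi)\bigr|=\sup_{\xi\in\mathbb{R}}\bigl|\widehat{\psi}(\xi/n)-1\bigr|\geq 1\quad\text{for every }n,$$

because $\widehat{\psi}$ is the transform of a compactly supported $C^\infty$ function and hence vanishes at infinity, while $\widehat{\psi}(0)=\int\psi\,dx=1$.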
https://grindskills.com/linear-regression-on-a-sample-spanning-many-orders-of-magnitude/
|
# Linear regression on a sample spanning many orders of magnitude
Beer’s law from chemistry says that the absorbance $A$ of a liquid is proportional to the concentration $C$, so $A = \kappa C$ for some proportionality constant $\kappa$.
The standard thing to do then is to prepare a set of solutions with known concentrations, measure the absorbance to form a ‘standard curve’ (a calibration curve basically), and do a simple linear regression on that data to get the proportionality (which can then be used to predict the concentrations of unknown solutions).
One easy way to do this is to start with a known concentration and perform a serial dilution, which would get you 2x dilution, 4x, 8x, 16x….etc. I.e. if you start with a solution of $100\mu \mathrm g/\mathrm {mL}$ you’d get solutions with $50\mu \mathrm g/\mathrm {mL}$, $25\mu \mathrm g/\mathrm {mL}$, $12.5\mu \mathrm g/\mathrm {mL}$ etc…
Now when you do the linear regression, you have a data set with lots of data points at low concentrations and very few at higher concentrations. It seems much more natural to represent this problem on a log scale. My question is then, should I be doing a linear regression of $A$ vs. $C$ or $\log A$ vs. $\log C$? When I compare the models, they seem to give answers that are on the same order of magnitude but on the order of 30% different.
Let the physics (of the experiment and the measuring apparatus) guide you.
Ultimately, absorption is determined by measuring amounts of radiation passing through the medium and those measurements come down to counting photons. When the medium is macroscopic, thermodynamic fluctuations in concentration are negligible so the principal source of error lies in the counting. This error (or “shot noise”) has a Poisson distribution. This implies the error is relatively large at high concentrations when little radiation is passing through.
With sufficient care in the laboratory, concentrations typically are measured extremely accurately, so I will not worry about errors in concentrations.
The absorbance itself is directly related to the logarithm of the measured radiation. Taking the logarithm evens out the amount of error across the entire possible range of concentrations. For this reason alone, it is best to analyze the absorbance in terms of its usual values rather than re-expressing them. In particular, we should avoid taking logs of absorbance, even though that would simplify the expression of the Beer-Lambert law.
We should also be alert to possible non-linearities. The derivation of the Beer-Lambert Law suggests the absorbance vs concentration curve will become nonlinear at high concentrations. Some way to detect or test this is needed.
These considerations suggest a simple procedure to analyze a series of $(C_i, A_i)$ pairs of concentrations and measured absorbances:
• Estimate the coefficient $\kappa$ as the arithmetic mean of $A/C$: $\hat{\kappa} = \frac{1}{n}\sum_i \frac{A_i}{C_i}$, where $n$ is the number of pairs.
• Predict the absorbance at each concentration in terms of the estimated coefficient: $\hat{A}(C) = \hat{\kappa}C.$
• Check the additive residuals $A_i - \hat{A_i}$ for nonlinear trends in $C_i$.
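These three steps are simple enough to sketch directly. The helper below is illustrative Python (the answer's own analysis further down uses R); the function name is made up for this sketch:

```python
def fit_beer_lambert(C, A):
    """Estimate kappa as the arithmetic mean of A_i / C_i, then return
    the estimate and the additive residuals A_i - kappa_hat * C_i,
    which should be inspected for nonlinear trends in C."""
    n = len(C)
    kappa_hat = sum(a / c for a, c in zip(A, C)) / n
    fitted = [kappa_hat * c for c in C]
    residuals = [a - f for a, f in zip(A, fitted)]
    return kappa_hat, residuals

# Noise-free serial-dilution data with kappa = 2: residuals vanish.
C = [1.0, 0.5, 0.25, 0.125]
A = [2 * c for c in C]
kappa_hat, res = fit_beer_lambert(C, A)
print(kappa_hat)  # 2.0
```

With real measurements the residuals carry shot noise; a systematic bend in them as a function of concentration is the signal of nonlinearity discussed below.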
Of course all this is theoretical and somewhat speculative–we haven’t any actual data to analyze–but it is a reasonable place to start. If repeated laboratory experience suggests the data depart from the statistical behaviors described here, then some modifications of these procedures would be called for.
To illustrate these ideas, I have created a simulation that implements the key aspects of the measurement, including the Poisson noise and possibly nonlinear responses. By running it many times, we can observe the kind of variation that is likely to be encountered in the laboratory. Here are the results of one simulation run. (Other simulations can be carried out simply by changing the starting seed in the code below and modifying various parameters as desired.)
This simulated experiment measured absorbance at concentrations of $1$ down to $1/32$. The vertical spreads in values apparent in the scatterplot show the effects of (a) shot noise in the transmission measurements and (b) shot noise in the initial transmission measurement at zero concentration. (Notice how this actually creates some negative absorbance values.) Although the resulting errors are not going to have exactly the same distributions at each concentration, the roughly equal spreads are empirical evidence that the distributions are close enough to being the same that we needn’t worry about that. In other words, there is no need to weight the absorbances according to the concentrations.
The red diagonal line has been estimated from all 50 simulations. It has a slope of $\hat{\kappa}=2.13$, which differs slightly from the physically correct slope of $2$ that was used in the simulations. This deviation is so large because I assumed there was very little radiation to measure; the maximum photon count was only $1000$. In practice, maximum counts could be many orders of magnitude greater than this, leading to highly precise slope estimates–but then we would not learn much from this figure!
The histogram of residuals does not look good: it is skewed to the right. This indicates some kind of trouble. That trouble does not come from asymmetry in the residuals at each concentration; rather, it comes from a lack of fit. That is evident in the boxplots at the right: although the first five of them line up almost horizontally, the last one–at the highest concentration–clearly differs in location (it is too high) and scale (it is too long). This results from a nonlinear response I built into the simulation. Although the nonlinearity is present throughout the full range of concentrations, it has an appreciable effect only at the very highest concentrations. This is more or less what would happen in the laboratory, too. However, with only one calibration run available we could not draw such boxplots. Consider analyzing multiple independent runs if nonlinearity might be a problem.
The simulation was performed in R. The calculations with actual data, though, are simple to conduct by hand or with a spreadsheet: just make sure to check the residuals for nonlinearity.
#
# Simulate instrument responses:
# concentration is an array of concentrations to use.
# kappa is the Beer-Lambert law coefficient.
# n.0 is the largest expected photon count (at 0 concentration).
# start is a tiny positive value used to avoid logs of zero.
# beta is the amount of nonlinearity (it is a quadratic perturbation
# of the Beer-Lambert law).
# The return value is a parallel array of measured absorbances; it is subject
# to random fluctuations.
#
observe <- function(concentration, kappa=1, n.0=10^3, start=1/6, beta=0.2) {
transmission <- exp(-kappa * concentration - beta * concentration^2)
transmission.observed <- start + rpois(length(transmission), transmission * n.0)
absorbance <- -log(transmission.observed / rpois(1, n.0))
return(absorbance)
}
#
# Perform a set of simulations.
#
concentration <- 2^(-(0:5)) # Concentrations to use
n.iter <- 50 # Number of iterations
set.seed(17) # Make the results reproducible
absorbance <- replicate(n.iter, observe(concentration, kappa=2))
#
# Put the results into a data frame for further analysis.
#
a.df <- data.frame(absorbance = as.vector(absorbance))
a.df$concentration <- concentration
#
# Create the figures.
#
par(mfrow=c(1,3))
#
# Set up a region for the scatterplot.
#
plot(c(min(concentration), max(concentration)),
c(min(absorbance), max(absorbance)), type="n",
xlab="Concentration", ylab="Absorbance",
main=paste("Scatterplot of", n.iter, "iterations"))
#
# Make the scatterplot.
#
invisible(apply(absorbance, 2,
function(a) points(concentration, a, col="#40404080")))
slope <- mean(a.df$absorbance / a.df$concentration)
abline(c(0, slope), col="Red")
#
# Show the residuals.
#
a.df$residuals <- a.df$absorbance - slope * a.df$concentration
hist(a.df$residuals, main="Histogram of residuals", xlab="Absorbance Difference")
#
# Study the residual distribution vs. concentration.
#
boxplot(a.df$residuals ~ a.df$concentration, main="Residual distributions",
xlab="Concentration")
https://www.gamedev.net/forums/topic/423124-cg-lighting-rotating-sphere/
|
OpenGL Cg lighting rotating sphere
Looking for someone to spark an idea for me. Currently I'm developing a solar system simulation. I'm using a Cg fragment shader in OpenGL to produce sunlight on my planets; I don't want OpenGL's generic lighting on, for certain reasons.

Now this may sound stupid, and I know why this is happening: as the planet rotates, the lighting calculations are still being done on the pre-rotated positions, so it appears the light is rotating around the planet at the same speed the planet is rotating. What's the best method to apply the rotation to the vertices and then do the pass in the fragment shader?

I'm taking a break, so this is what I was about to implement; if there is a better way, let me know. Store the rotation in an m[16] matrix and pass it into the vertex shader, which does the required rotations to the textured sphere. Later down the pipeline the fragment shader should do the lighting calculations correctly, and I 'should' see a rotating textured sphere with sunlight hitting it from the correct direction. Basically, the light shouldn't look like it's rotating around the texture, but stationary. Thoughts? New to shaders :)
P.S. Is doing the translation and rotation cheaper with the vertex shader? Is there something else I need to watch out for? Or will the above-mentioned method work?
Thanks
Pick a coordinate space in which to do all your lighting calculations, let's say we'll use Object Space.
So, you have your light position (the sun, whatever) in world space, and you want to get it into object space so that all your lighting calcs work out properly. What you've got to do is take the world-space light position, and then transform it by the inverse modelview matrix for each object, taking it from world space and putting it into object space. Pass this value into your vertex shader as the light position. From there, everything works out!
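As a minimal numeric sketch of that transform — assuming a rotation-only model matrix, so its inverse is just the transpose; the names here are illustrative, not engine API:

```python
import math

def rotation_y(angle):
    """3x3 rotation matrix about the Y axis (the planet's spin)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# The planet has spun 90 degrees; its model matrix is a Y rotation.
# Transforming the world-space light by the inverse model matrix
# (here: the transpose) yields the object-space light position,
# which would be passed to the vertex shader as a uniform.
model = rotation_y(math.pi / 2)
light_world = [1.0, 0.0, 0.0]
light_object = apply(transpose(model), light_world)
# light_object is approximately [0, 0, 1]
```

Because the light is now expressed in the same space as the untransformed vertices and normals, the per-fragment lighting stays fixed in the sky while the textured sphere rotates.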
I don't know Cg, but I'm using GLSL, so I hope it's similar:
multiply your normal vectors with the modelTransformMatrix!
https://brilliant.org/problems/modularity-and-recursive-sequences/
|
# Modularity and Recursive Sequences
The sequence $a_n$ is defined by
$\large \begin{cases} a_0=3 \\ a_{n+1}-a_n=n(a_n-1) & \text{for } n \ge 0 \end{cases}$
Find all positive integers $m$ such that $\gcd(m,a_n)=1$ for all $n \geq 0$. What can the solution set for $m$ be best described as?
https://www.tmwr.org/workflow-sets.html
|
# 15 Screening Many Models
We introduced workflow sets in Chapter 7 and demonstrated how to use them with resampled data sets in Chapter 11. In this chapter, we discuss these sets of multiple modeling workflows in more detail and describe a use case where they can be helpful.
For projects with new data sets that have not yet been well understood, a data practitioner may need to screen many combinations of models and preprocessors. It is common to have little or no a priori knowledge about which method will work best with a novel data set.
A good strategy is to spend some initial effort trying a variety of modeling approaches, determine what works best, then invest additional time tweaking/optimizing a small set of models.
Workflow sets provide a user interface to create and manage this process. We’ll also demonstrate how to evaluate these models efficiently using the racing methods discussed in Section 15.4.
## 15.1 Modeling Concrete Mixture Strength
To demonstrate how to screen multiple model workflows, we will use the concrete mixture data from Applied Predictive Modeling as an example. Chapter 10 of that book demonstrated models to predict the compressive strength of concrete mixtures using the ingredients as predictors. A wide variety of models were evaluated with different predictor sets and preprocessing needs. How can workflow sets make such a process of large scale testing for models easier?
First, let’s define the data splitting and resampling schemes.
library(tidymodels)
tidymodels_prefer()
data(concrete, package = "modeldata")
glimpse(concrete)
#> Rows: 1,030
#> Columns: 9
#> $ cement               <dbl> 540.0, 540.0, 332.5, 332.5, 198.6, 266.0, 380.0, 380.…
#> $ blast_furnace_slag   <dbl> 0.0, 0.0, 142.5, 142.5, 132.4, 114.0, 95.0, 95.0, 114…
#> $ fly_ash              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
#> $ water                <dbl> 162, 162, 228, 228, 192, 228, 228, 228, 228, 228, 192…
#> $ superplasticizer     <dbl> 2.5, 2.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0…
#> $ coarse_aggregate     <dbl> 1040.0, 1055.0, 932.0, 932.0, 978.4, 932.0, 932.0, 93…
#> $ fine_aggregate       <dbl> 676.0, 676.0, 594.0, 594.0, 825.5, 670.0, 594.0, 594.…
#> $ age                  <int> 28, 28, 270, 365, 360, 90, 365, 28, 28, 28, 90, 28, 2…
#> $ compressive_strength <dbl> 79.99, 61.89, 40.27, 41.05, 44.30, 47.03, 43.70, 36.4…
The compressive_strength column is the outcome. The age predictor tells us the age of the concrete sample at testing in days (concrete strengthens over time) and the rest of the predictors like cement and water are concrete components in units of kilograms per cubic meter.
For some cases in this data set, the same concrete formula was tested multiple times. We’d rather not include these replicate mixtures as individual data points since they might be distributed across both the training and test set. Doing so might artificially inflate our performance estimates.
To address this, we will use the mean compressive strength per concrete mixture for modeling:
concrete <-
concrete %>%
group_by(across(-compressive_strength)) %>%
summarize(compressive_strength = mean(compressive_strength),
.groups = "drop")
nrow(concrete)
#> [1] 992
Let’s split the data using the default 3:1 ratio of training-to-test and resample the training set using five repeats of 10-fold cross-validation:
set.seed(1501)
concrete_split <- initial_split(concrete, strata = compressive_strength)
concrete_train <- training(concrete_split)
concrete_test <- testing(concrete_split)
set.seed(1502)
concrete_folds <-
vfold_cv(concrete_train, strata = compressive_strength, repeats = 5)
Some models (notably neural networks, KNN, and support vector machines) require predictors that have been centered and scaled, so some model workflows will require recipes with these preprocessing steps. For other models, a traditional response surface design model expansion (i.e., quadratic and two-way interactions) is a good idea. For these purposes, we create two recipes:
normalized_rec <-
recipe(compressive_strength ~ ., data = concrete_train) %>%
step_normalize(all_predictors())
poly_recipe <-
normalized_rec %>%
step_poly(all_predictors()) %>%
step_interact(~ all_predictors():all_predictors())
For the models, we use the parsnip addin to create a set of model specifications:
library(rules)
library(baguette)
linear_reg_spec <-
linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine("glmnet")
nnet_spec <-
mlp(hidden_units = tune(), penalty = tune(), epochs = tune()) %>%
set_engine("nnet", MaxNWts = 2600) %>%
set_mode("regression")
mars_spec <-
mars(prod_degree = tune()) %>% #<- use GCV to choose terms
set_engine("earth") %>%
set_mode("regression")
svm_r_spec <-
svm_rbf(cost = tune(), rbf_sigma = tune()) %>%
set_engine("kernlab") %>%
set_mode("regression")
svm_p_spec <-
svm_poly(cost = tune(), degree = tune()) %>%
set_engine("kernlab") %>%
set_mode("regression")
knn_spec <-
nearest_neighbor(neighbors = tune(), dist_power = tune(), weight_func = tune()) %>%
set_engine("kknn") %>%
set_mode("regression")
cart_spec <-
decision_tree(cost_complexity = tune(), min_n = tune()) %>%
set_engine("rpart") %>%
set_mode("regression")
bag_cart_spec <-
bag_tree() %>%
set_engine("rpart", times = 50L) %>%
set_mode("regression")
rf_spec <-
rand_forest(mtry = tune(), min_n = tune(), trees = 1000) %>%
set_engine("ranger") %>%
set_mode("regression")
xgb_spec <-
boost_tree(tree_depth = tune(), learn_rate = tune(), loss_reduction = tune(),
min_n = tune(), sample_size = tune(), trees = tune()) %>%
set_engine("xgboost") %>%
set_mode("regression")
cubist_spec <-
cubist_rules(committees = tune(), neighbors = tune()) %>%
set_engine("Cubist")
The analysis in M. Kuhn and Johnson (2013) specifies that the neural network should have up to 27 hidden units in the layer. The extract_parameter_set_dials() function extracts the parameter set, which we modify to have the correct parameter range:
nnet_param <-
nnet_spec %>%
extract_parameter_set_dials() %>%
update(hidden_units = hidden_units(c(1, 27)))
How can we match these models to their recipes, tune them, then evaluate their performance efficiently? A workflow set offers a solution.
## 15.2 Creating the Workflow Set
Workflow sets take named lists of preprocessors and model specifications and combine them into an object containing multiple workflows. There are three possible kinds of preprocessors:
• A standard R formula
• A recipe object (prior to estimation/prepping)
• A dplyr-style selector to choose the outcome and predictors
As a first workflow set example, let’s combine the recipe that only standardizes the predictors to the nonlinear models that require the predictors to be in the same units:
normalized <-
workflow_set(
preproc = list(normalized = normalized_rec),
models = list(SVM_radial = svm_r_spec, SVM_poly = svm_p_spec,
KNN = knn_spec, neural_network = nnet_spec)
)
normalized
#> # A workflow set/tibble: 4 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 normalized_SVM_radial <tibble [1 × 4]> <opts[0]> <list [0]>
#> 2 normalized_SVM_poly <tibble [1 × 4]> <opts[0]> <list [0]>
#> 3 normalized_KNN <tibble [1 × 4]> <opts[0]> <list [0]>
#> 4 normalized_neural_network <tibble [1 × 4]> <opts[0]> <list [0]>
Since there is only a single preprocessor, this function creates a set of workflows with this value. If the preprocessor contained more than one entry, the function would create all combinations of preprocessors and models.
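For instance, supplying two of the preprocessors and two of the models defined earlier would produce all four crossings — a sketch, not run in this chapter:

```r
mixed_set <-
  workflow_set(
    preproc = list(normalized = normalized_rec, full_quad = poly_recipe),
    models  = list(KNN = knn_spec, linear_reg = linear_reg_spec)
  )
# 2 preprocessors x 2 models = 4 workflows:
# normalized_KNN, normalized_linear_reg, full_quad_KNN, full_quad_linear_reg
```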
The wflow_id column is automatically created but can be modified using a call to mutate(). The info column contains a tibble with some identifiers and the workflow object. The workflow can be extracted:
normalized %>% extract_workflow(id = "normalized_KNN")
#> ══ Workflow ═════════════════════════════════════════════════════════════════════════
#> Preprocessor: Recipe
#> Model: nearest_neighbor()
#>
#> ── Preprocessor ─────────────────────────────────────────────────────────────────────
#> 1 Recipe Step
#>
#> • step_normalize()
#>
#> ── Model ────────────────────────────────────────────────────────────────────────────
#> K-Nearest Neighbor Model Specification (regression)
#>
#> Main Arguments:
#> neighbors = tune()
#> weight_func = tune()
#> dist_power = tune()
#>
#> Computational engine: kknn
The option column is a placeholder for any arguments to use when we evaluate the workflow. For example, to add the neural network parameter object:
normalized <-
normalized %>%
option_add(param_info = nnet_param, id = "normalized_neural_network")
normalized
#> # A workflow set/tibble: 4 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 normalized_SVM_radial <tibble [1 × 4]> <opts[0]> <list [0]>
#> 2 normalized_SVM_poly <tibble [1 × 4]> <opts[0]> <list [0]>
#> 3 normalized_KNN <tibble [1 × 4]> <opts[0]> <list [0]>
#> 4 normalized_neural_network <tibble [1 × 4]> <opts[1]> <list [0]>
When a function from the tune or finetune package is used to tune (or resample) the workflow, this argument will be used.
The result column is a placeholder for the output of the tuning or resampling functions.
For the other nonlinear models, let’s create another workflow set that uses dplyr selectors for the outcome and predictors:
model_vars <-
workflow_variables(outcomes = compressive_strength,
predictors = everything())
no_pre_proc <-
workflow_set(
preproc = list(simple = model_vars),
models = list(MARS = mars_spec, CART = cart_spec, CART_bagged = bag_cart_spec,
RF = rf_spec, boosting = xgb_spec, Cubist = cubist_spec)
)
no_pre_proc
#> # A workflow set/tibble: 6 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 simple_MARS <tibble [1 × 4]> <opts[0]> <list [0]>
#> 2 simple_CART <tibble [1 × 4]> <opts[0]> <list [0]>
#> 3 simple_CART_bagged <tibble [1 × 4]> <opts[0]> <list [0]>
#> 4 simple_RF <tibble [1 × 4]> <opts[0]> <list [0]>
#> 5 simple_boosting <tibble [1 × 4]> <opts[0]> <list [0]>
#> 6 simple_Cubist <tibble [1 × 4]> <opts[0]> <list [0]>
Finally, we assemble the set that uses nonlinear terms and interactions with the appropriate models:
with_features <-
workflow_set(
preproc = list(full_quad = poly_recipe),
models = list(linear_reg = linear_reg_spec, KNN = knn_spec)
)
These objects are tibbles with the extra class of workflow_set. Row binding does not affect the state of the sets and the result is itself a workflow set:
all_workflows <-
bind_rows(no_pre_proc, normalized, with_features) %>%
# Make the workflow ID's a little more simple:
mutate(wflow_id = gsub("(simple_)|(normalized_)", "", wflow_id))
all_workflows
#> # A workflow set/tibble: 12 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 MARS <tibble [1 × 4]> <opts[0]> <list [0]>
#> 2 CART <tibble [1 × 4]> <opts[0]> <list [0]>
#> 3 CART_bagged <tibble [1 × 4]> <opts[0]> <list [0]>
#> 4 RF <tibble [1 × 4]> <opts[0]> <list [0]>
#> 5 boosting <tibble [1 × 4]> <opts[0]> <list [0]>
#> 6 Cubist <tibble [1 × 4]> <opts[0]> <list [0]>
#> # … with 6 more rows
## 15.3 Tuning and Evaluating the Models
Almost all of the members of all_workflows contain tuning parameters. To evaluate their performance, we can use the standard tuning or resampling functions (e.g., tune_grid() and so on). The workflow_map() function will apply the same function to all of the workflows in the set; the default is tune_grid().
For this example, grid search is applied to each workflow using up to 25 different parameter candidates. There are a set of common options to use with each execution of tune_grid(). For example, in the following code we will use the same resampling and control objects for each workflow, along with a grid size of 25. The workflow_map() function has an additional argument called seed, which is used to ensure that each execution of tune_grid() consumes the same random numbers.
grid_ctrl <-
control_grid(
save_pred = TRUE,
parallel_over = "everything",
save_workflow = TRUE
)
grid_results <-
all_workflows %>%
workflow_map(
seed = 1503,
resamples = concrete_folds,
grid = 25,
control = grid_ctrl
)
The results show that the option and result columns have been updated:
grid_results
#> # A workflow set/tibble: 12 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 MARS <tibble [1 × 4]> <opts[3]> <tune[+]>
#> 2 CART <tibble [1 × 4]> <opts[3]> <tune[+]>
#> 3 CART_bagged <tibble [1 × 4]> <opts[3]> <rsmp[+]>
#> 4 RF <tibble [1 × 4]> <opts[3]> <tune[+]>
#> 5 boosting <tibble [1 × 4]> <opts[3]> <tune[+]>
#> 6 Cubist <tibble [1 × 4]> <opts[3]> <tune[+]>
#> # … with 6 more rows
The option column now contains all of the options that we used in the workflow_map() call. This makes our results reproducible. In the result columns, the “tune[+]” and “rsmp[+]” notations mean that the object had no issues. A value such as “tune[x]” occurs if all of the models failed for some reason.
There are a few convenience functions for examining results such as grid_results. The rank_results() function will order the models by some performance metric. By default, it uses the first metric in the metric set (RMSE in this instance). Let’s filter() to look only at RMSE:
grid_results %>%
rank_results() %>%
filter(.metric == "rmse") %>%
select(model, .config, rmse = mean, rank)
#> # A tibble: 252 × 4
#> model .config rmse rank
#> <chr> <chr> <dbl> <int>
#> 1 boost_tree Preprocessor1_Model04 4.25 1
#> 2 boost_tree Preprocessor1_Model06 4.29 2
#> 3 boost_tree Preprocessor1_Model13 4.31 3
#> 4 boost_tree Preprocessor1_Model14 4.39 4
#> 5 boost_tree Preprocessor1_Model16 4.46 5
#> 6 boost_tree Preprocessor1_Model03 4.47 6
#> # … with 246 more rows
Also by default, the function ranks all of the candidate sets; that’s why the same model can show up multiple times in the output. An option, called select_best, can be used to rank the models using their best tuning parameter combination.
The autoplot() method plots the rankings; it also has a select_best argument. The plot in Figure 15.1 visualizes the best results for each model and is generated with:
autoplot(
grid_results,
rank_metric = "rmse", # <- how to order models
metric = "rmse", # <- which metric to visualize
select_best = TRUE # <- one point per workflow
) +
geom_text(aes(y = mean - 1/2, label = wflow_id), angle = 90, hjust = 1) +
lims(y = c(3.5, 9.5)) +
theme(legend.position = "none")
In case you want to see the tuning parameter results for a specific model, like Figure 15.2, the id argument can take a single value from the wflow_id column for which model to plot:
autoplot(grid_results, id = "Cubist", metric = "rmse")
There are also methods for collect_predictions() and collect_metrics().
The example model screening with our concrete mixture data fits a total of 12,600 models. Using 2 workers in parallel, the estimation process took 2.7 hours to complete.
## 15.4 Efficiently Screening Models
One effective method for screening a large set of models efficiently is to use the racing approach described in Section 13.5.5. With a workflow set, we can use the workflow_map() function for this racing approach. Recall that after we pipe in our workflow set, the argument we use is the function to apply to the workflows; in this case, we can use a value of "tune_race_anova". We also pass an appropriate control object; otherwise the options would be the same as the code in the previous section.
library(finetune)
race_ctrl <-
control_race(
save_pred = TRUE,
parallel_over = "everything",
save_workflow = TRUE
)
race_results <-
all_workflows %>%
workflow_map(
"tune_race_anova",
seed = 1503,
resamples = concrete_folds,
grid = 25,
control = race_ctrl
)
The new object looks very similar, although the elements of the result column show a value of "race[+]", indicating a different type of object:
race_results
#> # A workflow set/tibble: 12 × 4
#> wflow_id info option result
#> <chr> <list> <list> <list>
#> 1 MARS <tibble [1 × 4]> <opts[3]> <race[+]>
#> 2 CART <tibble [1 × 4]> <opts[3]> <race[+]>
#> 3 CART_bagged <tibble [1 × 4]> <opts[3]> <rsmp[+]>
#> 4 RF <tibble [1 × 4]> <opts[3]> <race[+]>
#> 5 boosting <tibble [1 × 4]> <opts[3]> <race[+]>
#> 6 Cubist <tibble [1 × 4]> <opts[3]> <race[+]>
#> # … with 6 more rows
The same helpful functions are available for this object to interrogate the results and, in fact, the basic autoplot() method shown in Figure 15.3 produces trends similar to Figure 15.1. This is produced by:
autoplot(
race_results,
rank_metric = "rmse",
metric = "rmse",
select_best = TRUE
) +
geom_text(aes(y = mean - 1/2, label = wflow_id), angle = 90, hjust = 1) +
lims(y = c(3.0, 9.5)) +
theme(legend.position = "none")
Overall, the racing approach estimated a total of 1,050 models, 8.33% of the full set of 12,600 models in the full grid. As a result, the racing approach was 4.3-fold faster.
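The savings figure is simple arithmetic on the model counts reported above (the 4.3-fold speedup is the reported wall-clock ratio, not something derivable from the counts alone). A quick check:

```python
# Model counts taken from the text above.
race_models = 1_050
full_grid_models = 12_600

fraction = race_models / full_grid_models
print(f"{fraction:.2%} of the full grid")  # -> 8.33% of the full grid
```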
Did we get similar results? For both objects, we rank the results, merge them, and plot them against one another in Figure 15.4.
matched_results <-
rank_results(race_results, select_best = TRUE) %>%
select(wflow_id, .metric, race = mean, config_race = .config) %>%
inner_join(
rank_results(grid_results, select_best = TRUE) %>%
select(wflow_id, .metric, complete = mean,
config_complete = .config, model),
by = c("wflow_id", ".metric"),
) %>%
filter(.metric == "rmse")
library(ggrepel)
matched_results %>%
ggplot(aes(x = complete, y = race)) +
geom_abline(lty = 3) +
geom_point() +
geom_text_repel(aes(label = model)) +
coord_obs_pred() +
labs(x = "Complete Grid RMSE", y = "Racing RMSE")
While the racing approach selected the same candidate parameters as the complete grid for only 41.67% of the models, the performance metrics of the models selected by racing were nearly equal. The correlation of RMSE values was 0.968 and the rank correlation was 0.951. This indicates that, within a model, there were multiple tuning parameter combinations that had nearly identical results.
## 15.5 Finalizing a Model
Similar to what we have shown in previous chapters, the process of choosing the final model and fitting it on the training set is straightforward. The first step is to pick a workflow to finalize. Since the boosted tree model worked well, we’ll extract that from the set, update the parameters with the numerically best settings, and fit to the training set:
best_results <-
race_results %>%
extract_workflow_set_result("boosting") %>%
select_best(metric = "rmse")
best_results
#> # A tibble: 1 × 7
#> trees min_n tree_depth learn_rate loss_reduction sample_size .config
#> <int> <int> <int> <dbl> <dbl> <dbl> <chr>
#> 1 1957 8 7 0.0756 0.000000145 0.679 Preprocessor1_Model04
boosting_test_results <-
race_results %>%
extract_workflow("boosting") %>%
finalize_workflow(best_results) %>%
last_fit(split = concrete_split)
We can see the test set metrics results, and visualize the predictions in Figure 15.5.
collect_metrics(boosting_test_results)
#> # A tibble: 2 × 4
#> .metric .estimator .estimate .config
#> <chr> <chr> <dbl> <chr>
#> 1 rmse standard 3.33 Preprocessor1_Model1
#> 2 rsq standard 0.956 Preprocessor1_Model1
boosting_test_results %>%
collect_predictions() %>%
ggplot(aes(x = compressive_strength, y = .pred)) +
geom_abline(color = "gray50", lty = 2) +
geom_point(alpha = 0.5) +
coord_obs_pred() +
labs(x = "observed", y = "predicted")
We see here how well the observed and predicted compressive strength for these concrete mixtures align.
## 15.6 Chapter Summary
Often a data practitioner needs to consider a large number of possible modeling approaches for a task at hand, especially for new data sets and/or when there is little knowledge about what modeling strategy will work best. This chapter illustrated how to use workflow sets to investigate multiple models or feature engineering strategies in such a situation. Racing methods can more efficiently rank models than fitting every candidate model being considered.
1. As of February 2022, we see slightly different performance metrics for the neural network when trained using macOS on ARM architecture (Apple M1 chip) compared to Intel architecture.
https://www.gamedev.net/forums/topic/691702-passing-vertex-positions-to-the-vertex-shader/
# Passing vertex positions to the Vertex Shader
## Recommended Posts
I have two questions with regard to passing vertices to the Vertex Shader.
1) I have seen code samples (i.e. DirectXTK) using the semantic SV_Position (instead of POSITION/POSITION0) for passing the position of a vertex to the Vertex Shader.
Why do you want to do that?
2)
A vertex structure could look like this:
struct VertexPosition {
XMFLOAT3 p;
};
The corresponding input element descriptor looks like this:
const D3D11_INPUT_ELEMENT_DESC desc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};
The GPU vertex shader input structure looks like this:
struct VSInputPosition {
float4 p : POSITION0;
};
So how is it possible that affine transformations (especially the translation component) work in the following Vertex Shader:
PSInputPosition Transform_VS(VSInputPosition input) {
PSInputPosition output;
output.p = mul(input.p, g_local_to_projection);
return output;
}
It makes more sense to use the following code:
struct VSInputPosition {
float3 p : POSITION0;
};
PSInputPosition Transform_VS(VSInputPosition input) {
PSInputPosition output;
output.p = mul(float4(input.p, 1.0f), g_local_to_projection);
return output;
}
##### Share on other sites
1 hour ago, matt77hias said:
1) I have seen code samples (i.e. DirectXTK) using the semantic SV_Position (instead of POSITION/POSITION0) for passing the position of a vertex to the Vertex Shader.
You have to use SV_Position if you want your data directed to the rasterizer. Back in D3D9, you had to use POSITION/POSITION0.
1 hour ago, matt77hias said:
So how is it possible that affine transformations (especially the translation component) work in the following Vertex Shader
If the IA is filling in a missing x/y/z component it will insert a 0.0f, but if it's filling in a missing w component it will insert a 1.0f.
##### Share on other sites
17 minutes ago, Hodgman said:
You have to use SV_Position if you want your data directed to the rasterizer. Back in D3D9, you had to use POSITION/POSITION0.
Indeed, the last stage (VS, DS, or GS) before the RS needs to output an SV_Position, but you always need a VS (I presume). So why not let the VS output an SV_Position? The only reason I can think of is reducing the number of different input/output structs in the code by passing the input straight through as output when no operations are required. This will, however, not result in any performance increase or reduction in resource usage?
25 minutes ago, Hodgman said:
If the IA is filling in a missing x/y/z component it will insert a 0.0f, but if it's filling in a missing w component it will insert a 1.0f.
Is it possible to show me where msdn mentions this explicitly?
##### Share on other sites
On 8/25/2017 at 4:37 AM, matt77hias said:
Is it possible to show me where msdn mentions this explicitly?
I took a quick look around, and I wasn't able to find where this is documented. However it's been this way since D3D9, and maybe even D3D8.
Personally I prefer to always add the w component myself in the shader code instead of relying on the IA to fill it in, because I like being explicit. It also lets the compiler optimize the code a bit better, since it can strip out code where that 1.0 is multiplied with another value.
##### Share on other sites
9 hours ago, MJP said:
Personally I prefer to always add the w component myself in the shader code instead of relying on the IA to fill it in, because I like being explicit. It also lets the compiler optimize the code a bit better, since it can strip out code where that 1.0 is multiplied with another value.
That's a very good point you made. Time to start refactoring some things. (Although, I am a bit sceptical of the HLSL compiler included in Visual Studio. For example: I expect the compiler to eliminate statements such as SomeStruct s = (SomeStruct)0; in case all the fields are initialized in the following statements. HLSL structs cannot introduce side effects anyway.)
9 hours ago, MJP said:
I took a quick look around, and I wasn't able to find where this is documented. However it's been this way since D3D9, and maybe even D3D8.
Btw: I was also expecting this to be explicitly mentioned somewhere in "Practical Rendering and Computation with Direct3D 11" (but that could also be due to the Direct3D 1 thing). Excellent read anyway.
##### Share on other sites
In general FXC is pretty good at dead-stripping no-ops like multiply-by-1 or adding 0. It will also aggressively remove branches that can be statically evaluated, and strip out dead code that has no effect on the shader outputs. It can do this because the shader language is so simple: it can always see all of the code used for a program (since there's no linking), so there's no chance of unknown side effects. But to be sure, we can try it out real quick. Here's two versions of a dead-simple vertex shader, both compiled with the latest version of FXC from the latest Windows 10 SDK (10.0.15063.0):
cbuffer VSConstants
{
row_major float4x4 WorldViewProj;
}
float4 VSMain1(in float4 pos : POSITION) : SV_Position
{
return mul(pos, WorldViewProj);
}
// vs_5_0
// dcl_globalFlags refactoringAllowed
// dcl_constantbuffer CB0[4], immediateIndexed
// dcl_input v0.xyzw
// dcl_output_siv o0.xyzw, position
// dcl_temps 1
// mul r0.xyzw, v0.yyyy, cb0[1].xyzw
// mad r0.xyzw, v0.xxxx, cb0[0].xyzw, r0.xyzw
// mad r0.xyzw, v0.zzzz, cb0[2].xyzw, r0.xyzw
// mad o0.xyzw, v0.wwww, cb0[3].xyzw, r0.xyzw
// ret
float4 VSMain2(in float3 pos : POSITION) : SV_Position
{
return mul(float4(pos, 1.0f), WorldViewProj);
}
// vs_5_0
// dcl_globalFlags refactoringAllowed
// dcl_constantbuffer CB0[4], immediateIndexed
// dcl_input v0.xyz
// dcl_output_siv o0.xyzw, position
// dcl_temps 1
// mul r0.xyzw, v0.yyyy, cb0[1].xyzw
// mad r0.xyzw, v0.xxxx, cb0[0].xyzw, r0.xyzw
// mad r0.xyzw, v0.zzzz, cb0[2].xyzw, r0.xyzw
// ret
So you can see that the second shader skips multiplying the W component by the 4th row of the matrix, and instead does a normal add. In this case this doesn't actually buy us anything since most GPUs can do a single-cycle MAD, but you can imagine how this would extend to more complex scenarios.
By the way, my co-workers still make fun of me for the Direct3D 1 thing.
https://chemistry.stackexchange.com/questions/1341/is-this-the-correct-relative-br%C3%B8nsted-acidity-in-these-four-acids
Is this the correct relative Brønsted acidity in these four acids?
We are asked to sort these four acids by increasing Brønsted acidity:
The problem is, I am confused why $\ce{H2F+}$ is the strongest acid of them all. I understand that stronger acids will have weaker bonds, and the $\ce{F-H}$ bonds should be the most electronegative of them all, leading me to believe that they are the strongest bonds. So then why are these bonds the most easily broken? What have I forgotten here? Is the correct order really from weakest to strongest acid as follows? $\ce{CH3OH}$, $\ce{(CH3)2OH+}$, $\ce{CH3SH2+}$, $\ce{H2F+}$
Fluorine, being the most electronegative element in the periodic table, doesn't like bearing a positive charge. Thus $\ce{H2F+}$, which has the positive charge on the fluorine atom, is not very stable and is willing to expel the extra proton. So, it is indeed a strong acid.
$\ce{H2F+}$ is formed, e.g., by autoionization of liquid $\ce{HF}$, but only because that reaction also involves creating a $\ce{F-}$ ion (associated with another molecule of $\ce{HF}$ to form $\ce{HF2-}$), which is very favorable:
$$\ce{3 HF <=> H2F+ + HF2-}$$
https://brilliant.org/problems/nature-of-wave/
# Nature of wave
Classical Mechanics Level 2
The above two graphs show waves $A$ and $B$ with the same period, at a particular instant, propagating to the right. Which of the following statements is correct?
a) The frequency of wave $A$ is greater than that of wave $B$.
b) The wavelength of wave $A$ is smaller than that of wave $B$.
c) The propagation speed of wave $A$ is greater than that of wave $B$.
https://math.stackexchange.com/questions/2799575/counting-pairs-of-multisets-with-prescribed-discrepancy
# Counting pairs of multisets with prescribed discrepancy
It has been explained here many times that $n$ indistinguishable sweets can be distributed to $r$ children in ${n+r-1\choose r-1}$ ways. Given two such allocations (multisets) $x=(x_i)_{1\leq i\leq r}$ and $y=(y_i)_{1\leq i\leq r}$ we can look at their discrepancy $$\sum_{i=1}^r|x_i-y_i|=:2d\geq0$$ (an even number). The question is: How many pairs $(x,y)$ of multisets of cardinality $n$ over the set $[r]$ are there, having given discrepancy $2d>0$?
This question (in a somewhat different disguise) has been asked here a few days ago, but unfortunately got closed before anybody had time to come up with a hint, let alone a full solution. I'm convinced this is a novel and challenging problem, off the standard stars and bars route. Therefore I dare to post it again, this time with the added context of sweets $\ldots$
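As a quick check of the stars-and-bars count quoted in the question, a brute-force enumeration for small $n$ and $r$ agrees with $\binom{n+r-1}{r-1}$ (this sketch is illustrative, not part of the original post):

```python
from itertools import product
from math import comb

def count_allocations(n, r):
    """Brute-force count of ways to give n identical sweets to r children."""
    return sum(1 for x in product(range(n + 1), repeat=r) if sum(x) == n)

# n = 5 sweets, r = 3 children: stars and bars gives C(7, 2) = 21.
assert count_allocations(5, 3) == comb(5 + 3 - 1, 3 - 1) == 21
```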
Any pair of allocations can be represented in a diagram by charting the original allocation of $$n$$ sweets to $$k$$ children, colouring $$d$$ sweets blue (to represent the fact that they have been taken away), and then distributing another $$d$$ sweets to children without blue sweets. Here is an example for $$n=31,\ k=12,\ d=4$$. The white sweets represent the minimum number a child receives, over both allocations.
There are $$\tbinom{n-d+k-1}{k-1}$$ ways that the white sweets could be distributed.
Suppose that $$i$$ of the children receive at least one of the $$d$$ new sweets, where $$0 < i \leq \min(k-1,\ d)$$. There are $$\tbinom{k}{i}$$ ways to choose those children and $$\tbinom{d-1}{i-1}$$ ways to split the $$d$$ new sweets among them. Then the $$d$$ blue sweets must be among the other $$k-i$$ children, although some may have none. There are $$\tbinom{d+k-i-1}{d}$$ ways that could happen.
Therefore the number of pairs of allocations with discrepancy $$2d$$ is:
$$\binom{n-d+k-1}{k-1}{\large\sum}_{i=1}^{\min(k-1,\ d)}\binom{k}{i}\binom{d-1}{i-1}\binom{d+k-i-1}{d}$$
For example, if $$n=5$$ and $$k=3$$ the number of pairs with discrepancy $$0,2,4,6,8,10$$ is: $$21, 90, 120, 108, 72, 30$$ which sums to $$\tbinom72^2$$ as expected.
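The closed form can be checked numerically against a brute-force enumeration of all pairs; the sketch below reproduces the n = 5, k = 3 counts quoted above:

```python
from itertools import product
from math import comb

def pairs_by_discrepancy(n, k):
    """Brute force: count pairs of allocations of n sweets to k children,
    grouped by half-discrepancy d where sum |x_i - y_i| = 2d."""
    allocs = [x for x in product(range(n + 1), repeat=k) if sum(x) == n]
    counts = {}
    for x in allocs:
        for y in allocs:
            d = sum(abs(a - b) for a, b in zip(x, y)) // 2
            counts[d] = counts.get(d, 0) + 1
    return counts

def closed_form(n, k, d):
    """The formula from the answer (valid for d > 0)."""
    return comb(n - d + k - 1, k - 1) * sum(
        comb(k, i) * comb(d - 1, i - 1) * comb(d + k - i - 1, d)
        for i in range(1, min(k - 1, d) + 1)
    )

counts = pairs_by_discrepancy(5, 3)
assert [counts[d] for d in range(6)] == [21, 90, 120, 108, 72, 30]
assert all(closed_form(5, 3, d) == counts[d] for d in range(1, 6))
assert sum(counts.values()) == comb(7, 2) ** 2  # 441, as expected
```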
https://developer.trustology.io/ethereum-decoded-data.html
# Supported Ethereum Decoded Data
TrustVault currently indexes all Ethereum transactions, and this allows us to pass some of that indexed data on in webhook calls.
This is particularly useful in DeFi transactions as the index can provide more information on exactly what method was called and what the parameter values were. Additionally, we can show the result of the transaction (e.g. which ERC-20 tokens were transferred) if the transaction contract call has included the events.
If you need some background reading, you could start with a primer on understanding transactions, followed by a bit more detail on a transaction, and finally you can read the details of decoding an Ethereum transaction.
Armed with this information on a transaction you can see that the chain provides a reasonable set of information about the transaction such as the method arguments and data types. However, what is missing is the method names and the argument names. This information has to be added manually and this is what a TrustVault Webhook will do for you.
This is really useful for:
• Understanding what you’ve actually called in a contract
• Understanding the values that were passed to arguments
• Confirming that what you expected was called
NB: This information is used in our DeFi firewall product which will analyse data from a transaction to decide if the transaction should be signed or not.
Additionally, TrustVault will pass the Log data from the transaction to confirm the output of the transaction.
An example of the payload with decoded Ethereum data can be found on our webhooks page. The “Sample ERC20 Received Event Object” is particularly useful, but here’s a snippet.
This shows that for this transaction, the method transfer(address to, uint256 amount) was called passing in the value 0x671e96593ea93bfcb510375f4cec111d0e5cf1b8 to the address field and the value 0xcaf67003701680000 to the amount field. This is in hex so the decimal value is 234000000000000000000. (Given this contract has 18 decimal places this is 234 tokens)
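The hex-to-token arithmetic in that example is easy to reproduce (the 18-decimal divisor comes from the contract described above):

```python
from decimal import Decimal

# Raw amount field from the decoded transfer() call above, as a hex string.
raw_amount = int("0xcaf67003701680000", 16)
print(raw_amount)  # -> 234000000000000000000

# The contract uses 18 decimal places, so divide by 10**18 to get tokens.
tokens = Decimal(raw_amount) / Decimal(10) ** 18
print(tokens)  # -> 234
```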
The table below lists the method signatures that have been specifically indexed (most are from well know DeFi protocols such as KyberSwap) and, if your transaction calls any of these methods, that detail will be provided in the webhook.
## Supported Ethereum Decoded Method Signature
Supported Ethereum decoded method signatures for ChainRawEthereumTransaction.decodedInput field in the transactions query.
Any unsupported method signatures will have a value of null.
## Supported Ethereum Decoded Event Logs
Supported Ethereum decoded event logs for ChainRawEthereumTransaction.decodedEvents field in the transactions query.
Any unsupported event logs will not be included in the ChainRawEthereumTransaction.decodedEvents array.