For a rate law, the rate constant $k$ is given by the equation
$$\ln k = \ln A - \frac{E_\mathrm{a}}{RT}\tag{1}$$
If $k_\mathrm{fwd}$ is the forward rate constant and $k_\mathrm{rev}$ is the reverse rate constant (assuming, for simplicity, the same pre-exponential factor $A$ for both directions):
$$\ln k_\mathrm{fwd} = \ln A - \frac{E_\mathrm{a,fwd}}{RT}\tag{2}$$
$$\ln k_\mathrm{rev} = \ln A - \frac{E_\mathrm{a,rev}}{RT}\tag{3}$$
Subtracting the two equations yields
$$\ln\frac{k_\mathrm{fwd}}{k_\mathrm{rev}} = \frac{E_\mathrm{a,rev}}{RT} - \frac{E_\mathrm{a,fwd}}{RT} = \frac{1}{RT} (E_\mathrm{a,rev} - E_\mathrm{a,fwd})\tag{4}$$
If $K$ is the equilibrium constant for the reaction, then $\displaystyle K = \frac{k_\mathrm{fwd}}{k_\mathrm{rev}}$:
$$-RT\ln K = \Delta E\tag{5}$$
where $\Delta E$ is the change in potential energy on a reaction diagram, such as the one shown here.
However, we know that
$$-RT\ln K = \Delta_\mathrm{r} G^\circ\tag{6}$$
Does this mean that potential energy is equal to the standard Gibbs free energy of reaction?
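Equation $(5)$ can be sanity-checked numerically; a minimal sketch, assuming hypothetical activation energies and a shared pre-exponential factor (none of these values come from the question):

```python
import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K

# Hypothetical values, for illustration only:
Ea_fwd = 50_000.0  # forward activation energy, J/mol
Ea_rev = 70_000.0  # reverse activation energy, J/mol
A = 1e13           # pre-exponential factor, assumed equal for both directions

k_fwd = A * math.exp(-Ea_fwd / (R * T))
k_rev = A * math.exp(-Ea_rev / (R * T))

# Route 1: K from the ratio of rate constants, K = k_fwd / k_rev
K_ratio = k_fwd / k_rev
# Route 2: K from equation (5), with Delta E = Ea_fwd - Ea_rev
K_energy = math.exp(-(Ea_fwd - Ea_rev) / (R * T))

print(K_ratio, K_energy)  # the two routes agree to floating-point precision
```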
(Log-)convex functions need not be twice differentiable, so any proof through $\frac{d^2}{dx^2}$ lacks generality. On the other hand, $e^x$ is both convex and log-convex, and we may ask when the composition of two convex functions is convex. Assume that $f(x)$ is convex. Then $g(x)=e^{f(x)}$ is convex iff
$$ g(\lambda x+(1-\lambda) y) \leq \lambda g(x) + (1-\lambda) g(y)\qquad \forall \lambda\in[0,1] $$which is equivalent to
$$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}\qquad \forall\lambda\in[0,1]. $$By the convexity of $f$ we know that $f(\lambda x+(1-\lambda) y)\leq \lambda f(x)+(1-\lambda)f(y)$.
Since $\exp$ is increasing we get$$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \exp\left[\lambda f(x)+(1-\lambda)f(y)\right] $$unconditionally, and since $\exp$ is convex we get$$\exp\left[\lambda f(x)+(1-\lambda)f(y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}$$as wanted. In other terms, if $a(x),b(x)$ are convex functions and $a(x)$ is (weakly) increasing, then $(a\circ b)(x)$ is a convex function, as shown here, too. It follows that any log-convex function is also convex, as the exponential of a convex function.
In general nothing can be said about the composition of a concave function (like $\log$) with a convex function. If we take $f_1(x)=x^2$, $f_2(x)=e^{x^2}$, $f_3(x)=x e^{x\sqrt{x}}$, then over $\mathbb{R}^+$ we have that $f_1,f_2,f_3$ are convex, but $\log f_1$ is concave, $\log f_2$ is convex and $\log f_3$ is neither convex nor concave.
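These claims are easy to probe numerically with a midpoint-convexity test on a grid (a heuristic sanity check, not a proof):

```python
import math

# Heuristic midpoint-convexity test on a grid: for continuous g,
# convexity is equivalent to g((x+y)/2) <= (g(x)+g(y))/2 for all x, y.
def midpoint_convex(g, lo, hi, n=200):
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return all(g((x + y) / 2) <= (g(x) + g(y)) / 2 + 1e-12
               for x in xs for y in xs)

log_f1 = lambda x: 2 * math.log(x)        # log(x^2): concave on R+
log_f2 = lambda x: x**2                   # log(e^{x^2}): convex
log_f3 = lambda x: math.log(x) + x**1.5   # log(x e^{x*sqrt(x)})

print(midpoint_convex(log_f2, 0.1, 5))                # True: convex
print(midpoint_convex(log_f1, 0.1, 5))                # False: not convex
print(midpoint_convex(log_f3, 0.1, 5))                # False: not convex
print(midpoint_convex(lambda x: -log_f3(x), 0.1, 5))  # False: not concave either
```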
Improved results for Klein-Gordon-Maxwell systems with general nonlinearity
School of Mathematics and Statistics, Central South University, Changsha, 410083 Hunan, China
$$\left\{ \begin{aligned} &-\Delta u+\left[ m_{0}^{2}-(\omega +\phi )^{2} \right]u = f(u) &&\text{in}\ \ {\mathbb{R}}^{3}, \\ &\Delta \phi = (\omega +\phi ){{u}^{2}} &&\text{in}\ \ {\mathbb{R}}^{3}, \end{aligned} \right.$$
where $0 < \omega \le m_0$ and $f\in \mathcal{C}(\mathbb{R}, \mathbb{R})$; the cases $0 < \omega < m_0$ and $\omega = m_0$ are treated under different assumptions on $f$.
Mathematics Subject Classification: 35J10, 35J20. Citation: Sitong Chen, Xianhua Tang. Improved results for Klein-Gordon-Maxwell systems with general nonlinearity. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5): 2333-2348. doi: 10.3934/dcds.2018096
This page presents the TTE model library proposed by Lixoft. It includes an introduction on time-to-event data, the different ways to model this kind of data, and typical parametric models.
What is time-to-event data

In the case of time-to-event data, the recorded observations are the times at which events occur. We can for instance record the time (duration) from diagnosis of a disease until death, or the time between administration of a drug and the next epileptic seizure. In the first case the event is one-off, while in the second it can be repeated. In addition, the event can be:

- exactly observed: we know the event happened exactly at time \(t_i\) \((T_i=t_i)\)
- interval censored: we know the event happened during a time interval, but not exactly when \((a_i \leq T_i \leq b_i)\)
- right censored: the observation period ends before the event can be observed \((T_i > t_{end})\)

Formatting of time-to-event data in the MonolixSuite
In the data set, exactly observed events, interval-censored events and right censoring are recorded for each individual.
Contrary to other software for survival analysis, the MonolixSuite requires specifying the time at which the observation period starts. This allows defining the data set using absolute times, in addition to durations (if the start time is zero, the records represent durations between the start time and the event).
For instance for single events, exactly observed (with or without right censoring),
one must indicate the start time of the observation period (Y=0), and the time of event (Y=1) or the time of the end of the observation period if no event has occurred (Y=0). In the following example:

ID TIME Y
1  0    0
1  34   1
2  0    0
2  80   0
the observation period lasts from starting time t=0 to the final time t=80. For individual 1, the event is observed at t=34; for individual 2, no event is observed during the period, so it is indicated that at the final time (t=80) no event had occurred. Using absolute times instead of durations, we could equivalently write:
ID TIME Y
1  20   0
1  54   1
2  33   0
2  113  0
The durations between start time and event (or end of the observation period) are the same as before, but this time we record the day on which each patient enters the study and the days on which they have events or leave the study. Different patients may enter the study at different times.
Important concepts: hazard and survival
Two functions have a key role in time-to-event analysis: the survival function and the hazard function.
The survival function S(t) is the probability that the event happens after time t. A common way to estimate it non-parametrically is to calculate the Kaplan-Meier estimate. The hazard function h(t) is the instantaneous rate of an event, given that it has not already occurred. Both are linked by the following equation:
$$S(t)=e^{-\int_0^t h(x) dx}$$
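As a sketch of this relation, the survival curve can be recovered from any hazard by numerical integration. Here a Weibull hazard with hypothetical scale \(T_e\) and shape \(k\) is used, so the result can be compared against its closed form:

```python
import math

# S(t) = exp(-integral_0^t h(x) dx), computed with the trapezoidal rule.
def survival(h, t, n=10_000):
    dt = t / n
    integral = sum((h(i * dt) + h((i + 1) * dt)) / 2 * dt for i in range(n))
    return math.exp(-integral)

# Hypothetical Weibull hazard with scale Te and shape k,
# whose survival has the closed form S(t) = exp(-(t/Te)^k).
Te, k = 10.0, 2.0
h = lambda t: (k / Te) * (t / Te) ** (k - 1)

S_num = survival(h, 5.0)
S_exact = math.exp(-(5.0 / Te) ** k)
print(S_num, S_exact)  # both approximately 0.7788
```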
Different types of approaches
Depending on the goal of the time-to-event analysis, different modeling approaches can be used: non-parametric, semi-parametric (Cox models) and parametric.
- Non-parametric models do not require assumptions on the shape of the hazard or survival. Using the Kaplan-Meier estimate, statistical tests can be performed to check whether the survival differs between sub-populations. The main limitations of this approach are that (i) only categorical covariates can be tested and (ii) the way the survival is affected by the covariate cannot be assessed.
- Semi-parametric models (Cox models) assume that the hazard can be written as a baseline hazard (that depends only on time) multiplied by a term that depends only on the covariates (and not time). Under this hypothesis of proportional covariate effects, one can analyze the effect of covariates (categorical and continuous) in a parametric way, leaving the baseline hazard undefined.
- Parametric models require fully specifying the hazard function. If a good model can be found, statistical tests are more powerful than for semi-parametric models. In addition, there are no restrictions on how the covariates affect the hazard. Parametric models can also easily be used for predictions.
The table below summarizes the possibilities for the 3 approaches.
Focus on parametric modeling with the MonolixSuite

In the MonolixSuite, the only possible approach is the parametric one. The model is defined via the hazard function, which in a population approach typically depends on individual parameters: \(h(t,\psi_i)\). From the hazard function, the survival function can easily be computed, as well as the conditional distribution \(p_{y_i|\psi_i}\) for the various censoring situations (which is required for parameter estimation via SAEM, log-likelihood calculation, etc.). The typical syntax to define the output is the following:

DEFINITION:
Event = {type=event, maxEventNumber=1, hazard=h}

The output Event will be matched to the time-to-event data of the data set. The hazard function h is usually defined via an expression involving the input individual parameters. For one-off events, the maximal number of events per individual is 1; it is important to indicate this in the maxEventNumber argument to speed up calculations. To use the model for simulations with Simulx, rightCensoringTime must be given as an additional argument. Check here for details.
Note that the hazard can also be a function of other variables, such as drug concentration or tumor burden (joint PK-TTE or PD-TTE models). An example of the syntax is given here.
Library of parametric models for time-to-event data
To describe the various shapes that the Kaplan-Meier survival estimate can take, several hazard functions have been proposed. Below we display the survival curves for the most typical hazard functions:
A few comments:
- We have reparametrized \(T_e’\) as a function of \(T_e\) to better separate the effects of the scale parameter (characteristic time) and the shape parameter (shape of the curve).
- All parameters are positive.
- If we assume inter-individual variability, a log-normal distribution is usually appropriate.
The table below summarizes the number of parameters and typical parameter values:
For each model, we can in addition consider a delay del as an additional parameter. The delay simply shifts the survival curve to the right (later times). For t < del, the survival is S(t) = 1.
Downloads:
These models can be explored in Mlxplore (download the Mlxplore project here). A shiny-mlxR app also lets you enter any hazard function and visualize the corresponding survival curve (click here).
All models are available as Mlxtran model files in the TTE library. Each model comes with/without delay and for single/repeated events. For performance reasons, it is important to choose the file ending with ‘_singleEvent.txt’ if you want to model one-off events (death, drop-out, etc.).

Case studies

Two case studies show the modeling and simulation workflow for TTE data:
S Jain
Articles written in Pramana – Journal of Physics
Volume 75 Issue 5 November 2010 pp 883-888 Contributed Papers
Optical diagnostics of laser-produced plasma require a coherent, polarized probe beam synchronized with the pump beam. The probe beam should have energy above the background emission of the plasma. Though the second-harmonic probe beam satisfies most of the requirements, the plasma emission is larger at the harmonic frequencies of the pump; hence, at high intensities we need a probe beam at non-harmonic frequencies. We have set up a Raman frequency-shifted probe beam using a pressurized hydrogen cell pumped by the second harmonic of the Nd:glass laser, operating on a single Stokes line at 673.75 nm.
Volume 75 Issue 6 December 2010 pp 1203-1208 Contributed Papers
Hohlraums of high-$Z$ materials are used as soft X-ray sources to study indirect-drive fusion, the equation of state of materials, etc. Here, we describe a method to develop spherical gold hohlraums of large wall thickness ($\sim 70$–$80\ \mu$m) in which laser entrance and diagnostic holes are drilled using a 10 Hz Nd:YLF laser. Holes of different diameters have been drilled with lenses of different focal lengths. The back wall of the hohlraum is protected from damage by shutting off the laser at a pre-determined hole-drilling time.
Volume 82 Issue 1 January 2014 pp 159-163 Contributed Papers
High energy, high power (HEHP) Nd:glass laser systems are used for inertial confinement fusion and equation of state (EOS) studies of materials at high temperature and pressure. A program has been undertaken for the indigenous development of Nd-doped phosphate laser glass rods and discs for HEHP lasers. In this paper, we report the characterization of the Nd-doped phosphate laser glass rods produced under this program and compare the indigenously developed laser glass to LHG-8 laser glass of M/s Hoya, Japan. We experimentally measured the values of the stimulated emission cross-section (𝜎) and the coefficient of intensity-dependent refractive index ($n_2$), and hence the figure of merit $F = \sigma/n_2$, of the indigenous phosphate laser glass rods. This value of the figure of merit is found to be comparable to the reported value for identically doped Nd:glass rods.
What are the radius and area of the circle of maximum area that can be inscribed in an isosceles triangle with $2$ equal sides of length $1$? A radius formula is given, $r = \dfrac{2A}{P}$, where $A$ is the area of the triangle and $P$ is the perimeter of the triangle. I have no idea how to do this. I'm guessing the triangle might be equilateral, because an equilateral triangle is also isosceles.
Letting $x$ be half of the base and computing the area and perimeter, we get $A=x\sqrt{1-x^2}$ and $P=2+2x$.
Therefore, $$ \begin{align} r^2 &=\frac{4A^2}{P^2}\\ &=x^2\frac{1-x}{1+x}\tag{1} \end{align} $$ This reaches its maximum when the derivative of $(1)$ is $0$; that is, when $$ x^3+x^2-x=0\tag{2} $$ The only root of $(2)$ within the allowable range $x\in(0,1)$ is $x=\frac{-1+\sqrt5}2=\frac1\phi$. Thus, the maximum area of the inscribed circle is $$ \begin{align} \pi r^2 &=\frac\pi{\phi^5}\\ &\doteq0.283277232857953\tag{3} \end{align} $$ Divide the area by $\pi$ and take the square root to get the radius of the circle.
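A quick numerical cross-check of $(3)$, by grid search over the half-base $x$ (a sanity check, not part of the derivation):

```python
import math

# Grid search over the half-base x in (0, 1) for the inscribed-circle area
# pi * x^2 * (1 - x) / (1 + x), compared with the closed form pi / phi^5.
phi = (1 + math.sqrt(5)) / 2

def circle_area(x):
    return math.pi * x**2 * (1 - x) / (1 + x)

best_x = max((i / 100_000 for i in range(1, 100_000)), key=circle_area)
best_area = circle_area(best_x)

print(best_x)     # approximately 1/phi = 0.618034
print(best_area)  # approximately pi/phi^5 = 0.283277
```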
To optimize the radius, we express the triangle's area and perimeter as functions of an angle. Draw an isosceles triangle with two legs of length one, and split the triangle symmetrically so you can think of it as two back-to-back right triangles, each with hypotenuse $1$. Let $\theta$ be the base angle, so each right triangle has height $h=\sin(\theta)$ and base $b=\cos(\theta)$. Then $$A = \frac{bh}{2}$$ or equivalently we can define $A$ as a function of $\theta$, since $b,h$ are defined in terms of $\theta$: $$A(\theta) = \frac{\cos(\theta)\sin(\theta)}{2} = \frac{\sin(2\theta)}{4}$$ Since $A(\theta)$ is only the area of one right triangle, we need to double it to account for the area of the entire isosceles triangle. Let $$A_I(\theta) = 2A(\theta) = \frac{\sin(2\theta)}{2}$$ where $A_I(\theta)$ is the area of the isosceles triangle. Similarly, we can define $P$ as a function of $\theta$. The perimeter is $P = 2+b_I$, where $b_I = 2b = 2\cos(\theta)$. Hence, $$P(\theta) = 2+2\cos(\theta)$$
Now we can define $r$ as a function of $\theta$ via the relation $$r(\theta) = \frac{2A_I(\theta)}{P(\theta)} = \frac{\sin(2\theta)}{2(1+\cos(\theta))}$$ Now find when $r'(\theta) = 0$ and optimize $r(\theta)$.
Let the unknown triangle's base be $2l$.
Draw a diagram and use Pythagoras' Theorem to obtain the height of the triangle as $\sqrt{1-l^2}$. Now use the triangle's area formula to obtain the area $$\sqrt{1-l^2}\times\frac {2l}2=l\sqrt{1-l^2}$$ The perimeter of the triangle is $2+2l$. Thus the radius of the circle is $$\frac{2l\sqrt{1-l^2}}{2+2l}=\frac{l\sqrt{1-l^2}}{1+l}$$ And the area of the circle is $$\pi\left(\frac{l\sqrt{1-l^2}}{1+l}\right)^2=\frac{\pi l^2(1-l^2)}{(1+l)^2}=\frac{\pi l^2(1-l)}{1+l}$$
Hint: Let the other side be $L$. You should be able to find an equation for the radius of a circle inscribed in a $1-1-L$ isosceles triangle. If not, the center has to be on the bisector of the vertex angle. A little geometry and you can derive it. Then take the derivative, set to zero ....
First assume that $\triangle ABC$ has $AB = AC = 1$, and denote by $S$ the area and by $P$ the perimeter of the triangle. The base is $BC = 2\sin\dfrac{A}{2}$, so:

$r = \dfrac{2S}{P} = \dfrac{2\cdot \dfrac{1}{2}\cdot 1\cdot 1\cdot \sin A}{2+2\sin\dfrac{A}{2}} = \dfrac{\sin\dfrac{A}{2}\cos\dfrac{A}{2}}{1+\sin\dfrac{A}{2}} = f(A)$.

Setting $s = \sin\dfrac{A}{2}$ gives $r = \dfrac{s\sqrt{1-s^2}}{1+s}$, and $f'(A) = 0$ leads to $s^2+s-1=0$, i.e. $s = \dfrac{\sqrt5-1}{2} = \dfrac1\phi$. Hence $r_{\max}^2 = \dfrac{1}{\phi^5}$, and the maximum circle area is $\pi r_{\max}^2 = \dfrac{\pi}{\phi^5} \approx 0.2833$, in agreement with the other answers.
It's wrong. Try $b=c\rightarrow0^+$.
The following inequality is true already.
Let $a$, $b$ and $c$ be non-negative numbers such that $a^2+b^2+c^2=1$. Prove that:
$$\sum_{cyc}a\sqrt{\frac{(ab+1)(ac+1)}{bc+1}}\leq2.$$
Indeed, let $a+b+c=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$.
Thus, $9u^2-6v^2=1$ and we need to prove that:$$\sum_{cyc}a(ab+1)(ac+1)\leq2\sqrt{\prod_{cyc}(ab+1)}$$ or$$\sum_{cyc}(a^3bc+a^2b+a^2c+a)\leq2\sqrt{a^2b^2c^2+1+\sum_{cyc}(a^2bc+ab)}$$ or$$abc+\sum_{cyc}(a^2b+a^2c+a)\leq2\sqrt{a^2b^2c^2+1+\sum_{cyc}(a^2bc+ab)}$$ or$$9uv^2-2w^3+3u(9u^2-6v^2)\leq$$$$\leq2\sqrt{w^6+(9u^2-6v^2)^3+3uw^3(9u^2-6v^2)+3v^2(9u^2-6v^2)^2}.$$Now, let $$f(w^3)=2\sqrt{w^6+(9u^2-6v^2)^3+3uw^3(9u^2-6v^2)+3v^2(9u^2-6v^2)^2}-$$$$-(9uv^2-2w^3+3u(9u^2-6v^2)).$$Thus, it's obvious that $f$ increases.
Id est, it's enough to prove our inequality for an extreme value of $w^3$, which happens in the following cases.
1. $w^3=0$.
The homogenization gives$$abc+\sum_{cyc}(a^2b+a^2c)+(a+b+c)(a^2+b^2+c^2)\leq2\sqrt{\prod_{cyc}(ab+a^2+b^2+c^2)}.$$Since this inequality is homogeneous of even degree and for $b=c=0$ it's obviously true, it's enough to assume $b=1$ and $c=0$, which gives$$a^2+a+(a+1)(a^2+1)\leq2(a^2+a+1)\sqrt{a^2+1}$$ or$$a+1\leq2\sqrt{a^2+1},$$ which is true because by C-S$$2\sqrt{a^2+1}\geq\sqrt{(1+1)(a^2+1)}\geq a+1.$$

2. Two variables are equal.
We can assume $b=c=1$ and it's enough to prove that:$$a+2(a^2+a+1)+(a+2)(a^2+2)\leq2(a^2+a+2)\sqrt{a^2+3}$$ or$$2\sqrt{a^2+3}\geq a+3,$$ which is true by C-S again:$$2\sqrt{a^2+3}=\sqrt{(1+3)(a^2+3)}\geq a+3$$ and we are done!
Also, we can prove that $f(w^3)\geq0$ by the following way.
We need to prove that:$$2\sqrt{w^6+(9u^2-6v^2)^3+3uw^3(9u^2-6v^2)+3v^2(9u^2-6v^2)^2}\geq$$$$\geq9uv^2-2w^3+3u(9u^2-6v^2)$$ or$$2\sqrt{w^6+(9u^2-6v^2)^2(9u^2-3v^2)+3uw^3(9u^2-6v^2)}\geq27u^3-9uv^2-2w^3.$$Now, by AM-GM and C-S we obtain:$$2\sqrt{w^6+(9u^2-6v^2)^2(9u^2-3v^2)+3uw^3(9u^2-6v^2)}\geq$$ $$\geq2\sqrt{w^6+6u^2(9u^2-6v^2)^2+9w^6}=2\sqrt{10w^6+54u^2(3u^2-2v^2)^2}=$$$$=\frac{1}{4}\sqrt{(10+54)(10w^6+54u^2(3u^2-2v^2)^2)}\geq\frac{1}{4}(10w^3+54u(3u^2-2v^2))=$$$$=\frac{1}{2}(5w^3+81u^3-54uv^2).$$Id est, it's enough to prove that:$$\frac{1}{2}(5w^3+81u^3-54uv^2)\geq27u^3-9uv^2-2w^3$$ or$$3u^3-4uv^2+w^3\geq0,$$ which is Schur.
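Random sampling cannot prove the inequality, but it is a quick sanity check of the statement under the constraint $a^2+b^2+c^2=1$:

```python
import math
import random

# Monte-Carlo sanity check: the cyclic sum should stay <= 2 on the sphere.
def lhs(a, b, c):
    return (a * math.sqrt((a*b + 1) * (a*c + 1) / (b*c + 1))
            + b * math.sqrt((b*c + 1) * (b*a + 1) / (c*a + 1))
            + c * math.sqrt((c*a + 1) * (c*b + 1) / (a*b + 1)))

random.seed(0)
worst = 0.0
for _ in range(10_000):
    a, b, c = random.random(), random.random(), random.random()
    n = math.sqrt(a*a + b*b + c*c) or 1.0
    a, b, c = a / n, b / n, c / n        # enforce a^2 + b^2 + c^2 = 1
    worst = max(worst, lhs(a, b, c))

print(worst)  # stays below 2; equality holds at a = b = c = 1/sqrt(3)
```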
Find $dy/dx$:
a) $\frac{1-2x}{\sqrt{2+x}}$
b) $3x(1-x^2)^{1/3}$
My attempt at a), using the quotient rule:
so $dy/dx = -2 \sqrt{2+x}+ (1-2x)0.5(2+x)^{-1/2}$, but then I get stuck there and cannot simplify it. Wolfram gives a nice simplified answer, but I'm not sure how to get it.
b.) Product rule:
$dy/dx= 3(1-x^2)^{0.5} + 3 \times 1/3 \times (1-x^2)^{-2/3}$, and again I can't seem to simplify that to a nice Wolfram answer.
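For reference, a numerical sketch checking hand-simplified closed forms against central-difference derivatives (these simplified forms are a derivation for illustration, not taken from the question):

```python
# Central-difference check of the simplified derivatives (hand-derived):
#   a) y = (1-2x)/sqrt(2+x)    ->  y' = -(2x+9) / (2*(2+x)^(3/2))
#   b) y = 3x*(1-x^2)^(1/3)    ->  y' = (3-5x^2) / (1-x^2)^(2/3)
def dnum(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

ya = lambda x: (1 - 2*x) / (2 + x) ** 0.5
da = lambda x: -(2*x + 9) / (2 * (2 + x) ** 1.5)

yb = lambda x: 3*x * (1 - x*x) ** (1/3)
db = lambda x: (3 - 5*x*x) / (1 - x*x) ** (2/3)

for x in (0.1, 0.3, 0.7):
    assert abs(dnum(ya, x) - da(x)) < 1e-5
    assert abs(dnum(yb, x) - db(x)) < 1e-5
print("both simplified forms match the numerical derivatives")
```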
To compute the vacuum expectation value
$$\langle \Omega | T\{q(t_{1})\cdots q(t_{n})\}|\Omega\rangle$$
in the path integral formalism, we start with the time-ordered product in the path integral representation
$$\langle q_{f},t_{f}|\ T\{q(t_{1})\cdots q(t_{n})\}\ |q_{i},t_{i}\rangle=\int_{q(t_i)=q_i}^{q(t_f)=q_f}\ \mathcal{D}q\ e^{iS}\ q(t_{1})\cdots q(t_{n}),$$
use the fact that
$$|\psi\rangle = \int\ dq_{i}\ |q_{i},t_{i}\rangle\ \langle q_{i},t_{i}|\psi\rangle$$
and the projection trick
$$\lim\limits_{T \to\infty}e^{-iHT(1-i\epsilon)}|\psi\rangle = \lim\limits_{T \to\infty}\sum\limits_{n}e^{-iE_{n}T(1-i\epsilon)}|n\rangle\langle n|\psi\rangle$$
to project out all the states with $n \neq 0$ (the $(1-i\epsilon)$ factor suppresses every term except the one with the smallest $E_n$, i.e. the vacuum) and obtain
$$\langle \Omega | T\{q(t_{1})\cdots q(t_{n})\}|\Omega\rangle = \frac{\int\mathcal{D}q(t)\ e^{iS[q]}q(t_{1})\cdots q(t_{n})}{\int \mathcal{D}q(t)\ e^{iS[q]}}.$$
How can we be sure that the projection trick is a legitimate step in the calculation and not some sleight of hand? Is the projection trick performed in the limit that $\epsilon \rightarrow 0$?
How is the expansion of $|\psi\rangle$ used in the time-ordered product to obtain the vacuum expectation value?
Physic models in STAR-CCM+ (Part IV)
We can find turbulence in almost every daily situation, whether it be smoke from a cigarette, water in a river or waterfall, or the flow in boat wakes and around aircraft wing tips. Turbulent flows are characterized mainly by unsteadiness, vorticity, three-dimensionality, dissipation, a wide spectrum of scales, and large mixing rates.
Turbulent flows are irregular in nature. This makes a deterministic approach to turbulence problems impossible, and therefore statistical methods are used for treating turbulent flows. The leading computational approaches to turbulent flows all fall into one of two categories:

- Simulations, where equations are solved for a time-dependent velocity field that, to some extent, represents the velocity field $\vect{U}(\vect{x},t)$ for one realization of the turbulent flow.
- Turbulence models, where equations are solved for some mean quantities, such as $\langle \vect{U} \rangle$, $\langle u_iu_j \rangle$, and $\varepsilon$.
In STAR-CCM+ the available approaches to modelling turbulence are:
- Models that give closure of the Reynolds-Averaged Navier-Stokes (RANS) equations.
- Large eddy simulation (LES).
- Detached eddy simulation (DES).
Over the years, many models have been proposed and many are currently in use. As no single turbulence model is best for every flow simulation, it is important to follow some criteria when assessing different models, such as:

- Level of description.
- Completeness.
- Cost and ease of use.
- Range of applicability.
- Accuracy.
What follows is a description of the most common approaches in use today, found in virtually any CFD software.
Contents
1 Direct numerical simulation
2 Reynolds-Averaged Navier-Stokes
  2.1 Turbulent-viscosity models
    2.1.1 K-Epsilon models
    2.1.2 K-Omega models
    2.1.3 Spalart-Allmaras model
  2.2 Reynolds-stress models
3 Large-eddy simulation
4 Detached eddy simulation
5 References

Direct numerical simulation
In DNS, the Navier-Stokes equations are solved to determine $\vect{U}(\vect{x},t)$ for one realization of the flow. Conceptually it is the simplest approach and, when it can be applied, it is unrivalled in accuracy and in the level of description provided. However, computational cost is extremely high. To an approximation, the number of floating-point operations required to perform a simulation is proportional to the product of the number of modes and the number of steps, $N^3M$ (mode-steps). The preceding results yield
$$N^3M \sim 160 {\rm Re}^3_L \approx 0.55 {\rm R}^6_\lambda,$$
showing the very steep rise with the Reynolds number.
Generally, the computational cost increases so steeply with the Reynolds number (as ${\rm R}^6_\lambda$ or ${\rm Re}^3_L$) that it is impracticable to go much higher than ${\rm R}_\lambda \sim 100$ with gigaflop computers —my Intel C2Q Q8300 peaks at 31 GFLOPS, but we may easily find recent desktop architectures, such as Haswell, exceeding 150 GFLOPS.
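A back-of-the-envelope illustration of this scaling (only the $160\,{\rm Re}_L^3$ mode-step estimate quoted above is used; everything else is plain arithmetic):

```python
# Rough DNS cost estimate from the scaling quoted above:
# mode-steps N^3 * M ~ 160 * Re_L^3.
def dns_mode_steps(Re_L):
    return 160 * Re_L ** 3

for Re_L in (1e3, 1e4, 1e5):
    print(f"Re_L = {Re_L:.0e}: ~{dns_mode_steps(Re_L):.1e} mode-steps")
```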
Reynolds-Averaged Navier-Stokes

In the Reynolds-Averaged Navier-Stokes (RANS) approach, the instantaneous velocity and pressure fields are decomposed into a mean value and a fluctuating component. The resulting equations for the mean quantities are essentially identical to the original equations, except that an additional term now appears in the momentum transport equation, the Reynolds stress tensor:
$$ \vect{T}_t \equiv -\rho\overline{\vect{v}'\vect{v}'}=-\rho \begin{bmatrix} \overline{u'u'} & \overline{u'v'} & \overline{u'w'} \\ \overline{u'v'} & \overline{v'v'} & \overline{v'w'} \\ \overline{u'w'} & \overline{v'w'} & \overline{w'w'} \end{bmatrix}$$
To model this Reynolds stress tensor in terms of the mean flow quantities, two basic approaches are available: turbulent-viscosity models and Reynolds-stress models.
Turbulent-viscosity models
These models use the concept of a turbulent viscosity $\mu_t$ to model the Reynolds stress tensor as a function of mean flow quantities.
Classes of turbulence models available in STAR-CCM+ include Spalart-Allmaras, K-Epsilon and K-Omega models.
K-Epsilon models
Introduced in 1972 by Jones and Launder [4], the K-Epsilon model belongs to the class of two-equation models, in which model transport equations are solved for two turbulence quantities. It is the most widely used complete turbulence model, and it is incorporated in most commercial CFD codes.
The K-Epsilon model consists of the model transport equation for $k$
$$ \frac{\bar{\rm D}k}{\bar{\rm D}t} = \nabla\cdot \left( \frac{\nu_T}{\sigma_k} \nabla k \right) + \mathcal{P} - \varepsilon, $$
the model transportation equation for $\varepsilon$
$$ \frac{\bar{\rm D}\varepsilon}{\bar{\rm D}t} = \nabla\cdot \left( \frac{\nu_t}{\sigma_\varepsilon}\nabla\varepsilon \right) + C_{\varepsilon 1}\frac{\mathcal{P}\varepsilon}{k} – C_{\varepsilon 2}\frac{\varepsilon^2}{k}, $$
which is empirical, and the specification of the turbulent viscosity as
$$ \nu_T = C_\mu k^2/\varepsilon. $$
It is a semi-empirical model, mainly because the modelled transport equation for the dissipation rests on phenomenological considerations and empiricism.
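For a concrete feel of the closure, here is a tiny sketch evaluating $\nu_T = C_\mu k^2/\varepsilon$; the value $C_\mu = 0.09$ is the usual model constant (an assumption here, since the text does not quote it):

```python
# nu_T = C_mu * k^2 / eps; C_mu = 0.09 is the standard model constant
# (an assumption here, the text does not quote it).
def turbulent_viscosity(k, eps, c_mu=0.09):
    return c_mu * k ** 2 / eps

# e.g. k = 1.5 m^2/s^2 and eps = 0.3 m^2/s^3 give nu_T = 0.675 m^2/s
print(turbulent_viscosity(1.5, 0.3))
```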
Standard K-Epsilon
This is the de facto standard version of the two-equation model that involves transport equations for the turbulent kinetic energy $k$ and its dissipation rate $\varepsilon$.
Standard K-Epsilon Two-Layer
This model combines the Standard K-Epsilon model with the two-layer approach, which allows the K-Epsilon model to be applied in the sublayer.
In the layer next to the wall, the turbulent dissipation rate $\varepsilon$ and the turbulent viscosity $\mu_t$ are specified as functions of wall distance. The values of $\varepsilon$ specified in the near-wall layer are blended smoothly with the values computed from solving the transport equation far from the wall.
Standard K-Epsilon Low-Re
This model has the same coefficients as the Standard K-Epsilon model, but adds damping functions that let it be applied in the viscous-affected regions near walls.
Realizable K-Epsilon
Proposed by Shih et al. [5] in 1994, this model differs from the Standard K-Epsilon model in that it contains a new formulation for the turbulent viscosity and a new transport equation for the dissipation rate $\varepsilon$, derived from an exact equation for the transport of the mean-square vorticity fluctuation.
The term realizable means that the model satisfies certain mathematical constraints on the Reynolds stresses.
Realizable K-Epsilon Two-Layer*
The Realizable Two-Layer K-Epsilon model combines the Realizable K-Epsilon model with the two-layer approach. The coefficients in the model are identical, but the model gains the added flexibility of an all-$y^+$ wall treatment.
Abe-Kondoh-Nagano K-Epsilon Low-Re
It has damping coefficients different from those of the Standard K-Epsilon model, and uses damping functions different from those of the Standard Low-Reynolds Number model. It is a good choice for applications where the Reynolds number is low but the flow is relatively complex.
V2F K-Epsilon
The V2F K-Epsilon model is known to capture near-wall turbulence effects more accurately. It uses the root mean square of the normal velocity fluctuations $\overline{v'^2}$ as the velocity scale rather than the turbulence kinetic energy $k$. It can therefore handle the wall region without additional damping functions, because the normal velocity fluctuations are quite sensitive to the presence of a wall and act as a natural damper.
The model employs four transport equations for the closure of the RANS equations, solving two more turbulence quantities, namely the normal stress function and the elliptic function, in addition to $k$ and $\varepsilon$.
K-Omega models
Many two-equation models have been proposed, and in most of them $k$ is taken as one of the variables, with diverse choices for the second. In 1988, Wilcox [7] proposed the original K-Omega model, which takes the specific dissipation rate $\omega$ ($\equiv \varepsilon/k$) as the second variable.
$$ \frac{\bar{\rm D}\omega}{\bar{\rm D}t} = \nabla\cdot \left( \frac{\nu_T}{\sigma_\omega}\nabla \omega \right) + C_{\omega 1}\frac{\mathcal{P}\omega}{k} - C_{\omega 2}\omega^2 $$
Although the specific dissipation rate $\omega$ can be thought of as the ratio of $\varepsilon$ and $k$, there are subtle differences between the two models. Deriving the $\omega$ equation implied by the K-Epsilon model and making some simplifications, the result is
$$ \frac{\bar{\rm D}\omega}{\bar{\rm D}t} = \nabla\cdot \left( \frac{\nu_T}{\sigma_\omega}\nabla \omega \right) + \left( C_{\omega 1} - 1 \right) \frac{\mathcal{P}\omega}{k} - \left( C_{\omega 2} - 1 \right) \omega^2 + \frac{2\nu_T}{\sigma_\omega k}\nabla \omega \cdot \nabla k $$
which for inhomogeneous flows contains an additional term, the final term in the equation.
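The coefficient shift can be verified for homogeneous flows (dropping the diffusion terms), since there $\omega = \varepsilon/k$ and the chain rule alone should reproduce the $(C_{\omega 1}-1)$ and $(C_{\omega 2}-1)$ factors; a quick numerical sketch:

```python
import random

# Numerical check (homogeneous case, no diffusion): with omega = eps/k,
# the k- and eps-equations imply the omega-equation with the shifted
# coefficients (C_w1 - 1) and (C_w2 - 1), as stated in the text.
random.seed(0)
for _ in range(100):
    k, eps, P, C1, C2 = [random.uniform(0.1, 2.0) for _ in range(5)]
    dk = P - eps                                  # Dk/Dt
    deps = C1 * P * eps / k - C2 * eps ** 2 / k   # Deps/Dt
    omega = eps / k
    domega = deps / k - eps * dk / k ** 2         # chain rule on eps/k
    expected = (C1 - 1) * P * omega / k - (C2 - 1) * omega ** 2
    assert abs(domega - expected) < 1e-9
```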
SST (Menter) K-Omega*
Developed by Menter [8] in 1994, the Shear Stress Transport K-Omega model combines the accurate formulation of the K-Omega model in the near-wall region with the free-stream independence of the K-Epsilon model in the far field. It does so by multiplying the final additional term, obtained from deriving the K-Epsilon model, by a blending function. Close to the wall the damped cross-diffusion term is zero (leading to the standard $\omega$ equation), whereas far from the wall the blending function is unity (corresponding to the standard $\varepsilon$ equation).
Standard (Wilcox) K-Omega
The original model proposed by Wilcox [7] has seen most application in the aerospace industry. It is therefore recommended as an alternative to the Spalart-Allmaras models for similar types of applications.
Spalart-Allmaras model
The Spalart-Allmaras model is a one-equation model which solves a transport equation for the turbulent viscosity $\nu_T$.
The model equation is of the form
$$ \frac{\bar{\rm D}\nu_T}{\bar{\rm D}t} = \nabla \cdot \left( \frac{\nu_T}{\sigma_\nu} \nabla \nu_T \right) + S_\nu. $$
It has clear limitations as a general model, but for the aerodynamic flows for which it is intended it has proved quite successful.
Standard Spalart-Allmaras*
This is the original model proposed by Spalart and Allmaras [9] back in 1992.
High-Reynolds Number Spalart-Allmaras
In this variant of the model, the viscous damping within the buffer layer and viscous sublayer is not included.
Reynolds-stress models
In Reynolds-stress models, model transport equations are solved for the individual Reynolds stresses $\langle u_i u_j \rangle$ and for the dissipation $\varepsilon$ (or for another quantity, e.g., $\omega$, that provides a length or time scale of the turbulence).
The RST model carries significant computational overhead, as seven additional equations (three Reynolds shear stresses, three Reynolds normal stresses, and one for dissipation) must be solved in three dimensions (as opposed to the two equations of a K-Epsilon model). There is also likely to be a penalty in the total number of iterations required to reach a converged solution.
This model accounts for the effects of streamline curvature, swirl, rotation, and rapid changes in strain rate in a more physical manner than one-equation or two-equation models do. It is therefore used for computing cyclone flows, swirling flows in combustors, flow in rotating passages, and stress-induced secondary flows in ducts.
Large-eddy simulation
In large-eddy simulation (LES), the larger three-dimensional unsteady turbulent motions are directly represented, whereas the effects of the smaller-scale motions are modelled. Because the large-scale unsteady motions are represented explicitly, LES can be expected to be more accurate and reliable than Reynolds-stress models for flows in which large-scale unsteadiness is significant.
Detached eddy simulation
Detached eddy simulation (DES) is a hybrid modelling approach that combines features of RANS simulation in some parts of the flow and LES in others. The user gets the best of both worlds: a RANS simulation in the boundary layers and an LES simulation in the unsteady separated regions.
References
[1] User Guide STAR-CCM+ Version 8.06. 2013.
[2] Dewan, A. 2011. Tackling Turbulent Flows in Engineering. Berlin: Springer.
[3] Pope, S. B. 2009. Turbulent Flows. Cambridge: Cambridge University Press.
[4] Jones, W. and Launder, B. 1972. The prediction of laminarization with a two-equation model of turbulence. International Journal of Heat and Mass Transfer, 15 (2), pp. 301–314.
[5] Shih, T., Liou, W., Shabbir, A., Yang, Z. and Zhu, J. 1994. A new k-epsilon eddy viscosity model for high Reynolds number turbulent flows: Model development and validation. NASA STI/Recon Technical Report N, 95, p. 11442.
[6] Abe, K., Kondoh, T. and Nagano, Y. 1994. A new turbulence model for predicting fluid flow and heat transfer in separating and reattaching flows—I. Flow field calculations. International Journal of Heat and Mass Transfer, 37 (1), pp. 139–151.
[7] Wilcox, D. 1988. Reassessment of the scale determining equation for advanced turbulence models. AIAA Journal, 19 (2), pp. 248–251.
[8] Menter, F. R. 1994. Two-equation eddy-viscosity turbulence models for engineering applications. AIAA Journal, 32 (8), pp. 1598–1605.
[9] Spalart, P. and Allmaras, S. 1992. A one-equation turbulence model for aerodynamic flows. AIAA Paper 92-0439.
I am sort of confused by the notion of approximate Nash equilibrium. I will try to express my confusion in the following exercise.
Problem. Is it true that for every two-player game where each player has $n$ available actions and all payoffs lie in $[0,1]$, there exists an approximate $\epsilon$-Nash equilibrium in which all players' probabilities are integer multiples of $\Omega(\epsilon/n)$, for every $\epsilon$?
$\sigma$ is an $\epsilon$-Nash equilibrium if for every player $i$ and every action $j$ of player $i$, $u_i(\sigma_{-i}, j) - u_i(\sigma) \leq \epsilon$. In other words, no player can gain more than $\epsilon$ by deviating to any other action.
The problem is to prove that for every 2-player game with $n$ actions there exists an $\epsilon$-Nash equilibrium in which all probabilities are bounded below by $\frac{\epsilon \cdot k}{n}$ for some constant $k>0$, so that no probability equals $0$.
First of all, every game has at least one mixed Nash equilibrium, so the problem reduces to showing that there exists a strategy profile in which the probability of every action is bounded below by $\frac{\epsilon \cdot k}{n}$ and the difference between the payoff of the Nash equilibrium and the payoff of this profile is at most $\epsilon$.
Consider the worst case, in which the given game has a pure Nash equilibrium: a single action has probability $1$ and the $n-1$ others have probability $0$. To obtain the $\epsilon$-Nash equilibrium, we increase each of those probabilities by $\frac{\epsilon}{n}$; there are $n-1$ such actions and the maximum payoff of an action is $1$, so we change the payoff by at most $\frac{\epsilon}{n} \cdot (n-1) \cdot 1 < \epsilon$.
This reasoning looks right, but it does not feel rigorous enough; in addition, the same perturbation applies to the second player, and then the combined difference may exceed $\epsilon$.
If you have any idea how to proceed with the proof please share it with us. |
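The bounding step in the argument above (spreading $\epsilon/n$ mass over the other $n-1$ actions changes any expected payoff by less than $\epsilon$ when payoffs lie in $[0,1]$) can at least be sanity-checked numerically; a small sketch on a random game:

```python
import random

# Numeric sketch of the bounding step: perturbing a pure strategy by
# eps/n on each of the other n-1 actions moves any expected payoff
# (payoffs in [0,1]) by at most (n-1)*eps/n < eps.
random.seed(1)
n, eps = 6, 0.1
A = [[random.random() for _ in range(n)] for _ in range(n)]  # row payoffs
col = [1.0 / n] * n                        # an arbitrary opponent strategy
pure = [1.0] + [0.0] * (n - 1)             # pure strategy on action 0
mixed = [1 - (n - 1) * eps / n] + [eps / n] * (n - 1)

def payoff(row):
    return sum(row[i] * A[i][j] * col[j] for i in range(n) for j in range(n))

diff = abs(payoff(pure) - payoff(mixed))
assert diff < eps
```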
Find cardinality of $$B = \left\{ f : \mathbb N \rightarrow \mathbb N \mid \forall n(f(n)\le n) \wedge \forall m\; \exists n (f(n) > m) \right\}$$
My try
I have solved this, but I am not sure if it is correct (I do not have a lot of experience in set theory). Can somebody check it or give some tips (or both)?
$|B| \le \mathfrak{c}$ because $|B| \le |\mathbb N|^{|\mathbb N|} = \mathfrak{c}$; on the other hand, I can define an injective $G$ such that: $$G: (\mathbb N \rightarrow \left\{0,1 \right\}) \rightarrow B $$ $$ G(\alpha)(n) = \begin{cases} \alpha(n) + G(\alpha)(n-1), &\text{if }n \neq 0 \\ 0, &\text{if }n = 0. \end{cases} $$
The function $G(\alpha)$ is nondecreasing and $G$ is injective, and its domain has cardinality $\mathfrak{c}$, so $|B| = \mathfrak{c}$.
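A finite-prefix sanity check of the map $G$ (purely illustrative; it cannot, of course, verify the unboundedness condition on an infinite sequence):

```python
# Finite-prefix check of G: G(alpha)(0) = 0 and
# G(alpha)(n) = alpha(n) + G(alpha)(n-1) for n >= 1.
def G(alpha):
    out, total = [], 0
    for n, bit in enumerate(alpha):
        if n == 0:
            out.append(0)
        else:
            total += bit
            out.append(total)
    return out

alpha = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
f = G(alpha)
print(f)   # [0, 0, 1, 2, 2, 3, 3, 3, 4, 5]
assert all(f[n] <= n for n in range(len(f)))             # f(n) <= n
assert all(f[n] <= f[n + 1] for n in range(len(f) - 1))  # nondecreasing
```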
The question I'm asking might be rather simple, but I couldn't find relevant information (maybe it's too trivial?). Here's the question that baffled me.
Let $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ be functions. If $g$ and $g \circ f$ are invertible, then is $f$ also invertible?
Now, the reason I'm confused is that I'm currently learning set theory. I'm using the textbook "Introduction to Set Theory" by Karel Hrbacek and Thomas Jech. In the book, the composite function is defined as follows:
$g \circ f = \{(x, y) \mid \exists z\,(f(x)=z \land g(z)=y)\}$, where $\operatorname{dom}(g \circ f) = \operatorname{dom} f \cap f^{-1}[\operatorname{dom} g]$
Notice that we only need the intermediate $z$ to find the elements of the composite function, and only the domain is defined. Now, consider the case where $f(1)=1$ and $g(k)=k$ for all $k$ less than or equal to a certain natural number $n$.
In this case, clearly $g$ is bijective, hence invertible. The problem is the composite function. Since we defined only the domain of a composite function, the domain of the composite function in this case is {1} and the range is {1}.
Now, should we regard this range as the codomain (making the composite surjective), so that the composite function is invertible? Or should we say the codomain of the composite function is $Z$, the codomain of $g$? This ambiguity arose because the definition of the composite function seems somewhat incomplete.
My second question is: what if the problem didn't specify the domains and codomains of each function? Would the conclusion then differ from the first case? Lastly, I've heard from one of my fellows that in some textbooks the domain of the composite function is defined as plainly $\operatorname{dom} f$. What made all the authors give different definitions of such an important concept! I'm confused!
Thanks in advance. |
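To illustrate the book's definition, one can model functions as sets of pairs (here Python dicts, purely as an illustration) and compute $\operatorname{dom}(g\circ f)=\operatorname{dom}f\cap f^{-1}[\operatorname{dom}g]$ directly:

```python
# Functions as sets of pairs (Python dicts here, as an illustration);
# the composite's domain is dom f ∩ f⁻¹[dom g], per the book's definition.
def compose(g, f):
    return {x: g[f[x]] for x in f if f[x] in g}

f = {1: 1}                        # f(1) = 1
g = {k: k for k in range(1, 6)}   # g(k) = k for k = 1, ..., 5
gf = compose(g, f)
print(gf)                         # {1: 1}: the composite's domain is just {1}
```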
Let me ask about part of a problem which I have in solving a nonlinear DE (numerically). I am working with vectors of length $xN$. I have an initial vector $v$, a given $xN\times xN$ matrix $A$ and a given nonlinear function $f:\mathbb{R}^2\to\mathbb{R}$. Next, I need to construct the following two-dimensional list of vectors (I do not use Mathematica's brackets here, just to simplify notation) using the rule $$ w_{n,m}[k]=\begin{cases} v[k], & \text{if } n=0 \text{ or } m=1,\\ w_{n,m-1}[k]+f\Bigl[(A.w_{n-1,m})[k],w_{n-1,m}[k] \Bigr], &\text{otherwise}. \end{cases} $$ Here $k=1,\ldots,xN$ are vector indices, $n=0,\ldots,nN$ are iteration numbers, and $m=1,\ldots,tN$ are 'time' steps. After all $nN$ iterations are done, I have to save the last iteration $$ u_m[k]=w_{nN,m}[k], \qquad m=1,\ldots,tN, \quad k=1,\ldots,xN, $$ then I have to redefine the initial condition by the value of the last iteration at the last 'moment of time' $$ v[k]=u_{tN}[k], \qquad k=1,\ldots,xN, $$ and repeat the same procedure to get the values of the vectors $u_{tN+m}$, $m=1,\ldots,tN$, and so on until I find all vectors $u_{j}$ for $j=1,\ldots, tN\cdot tS$, for some chosen natural number $tS$.
My realization is the following (here the matrix $A$ and the initial $v$ are random, whereas I need particular ones; on the other hand, for some $f$ one can get an overflow, so I use a very simple $f$ just to show the slowness even in this case):
ClearAll["Global`*"];
xN = 20; tN = 50; tS = 30; nN = 20;
A = RandomReal[NormalDistribution[0, 1], {xN, xN}];
v = RandomReal[NormalDistribution[0, 1], {xN}];
f[p_?NumericQ, r_?NumericQ] := p + r;
rhs[vec_] := rhs[vec] = MapThread[f, {A.vec, vec}];
Do[
  w[n_, m_] := w[n, m] = If[n == 0 || m == 1, v, w[n, m - 1] + rhs[w[n - 1, m]]];
  Do[u[m + (i - 1) tN] = w[nN, m], {m, tN, 1, -1}];
  v = w[nN, tN];
  Clear[w],
  {i, 1, tS}]; // AbsoluteTiming
It works, but for $xN=20$ it takes 18 s, whereas for $xN=30$ almost 40 s; unfortunately, I need $xN=200$ and a less trivial $f$. (Note that the code above is very sensitive to $f$: if $f(p,r)=0.01(p+r)$ then everything is ten times faster, so something is wrong in the code.)
Could you suggest an improvement for the code (I am very much a newbie in Mathematica)?
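For reference, here is a plain-Python transcription of the recurrence (tiny sizes and a scaled-down $A$ to avoid overflow with $f(p,r)=p+r$; this is a cross-check of the algorithm, not a Mathematica optimisation):

```python
import random

# Transcription of the recurrence with tiny sizes and a small A
# (an illustration of the algorithm only).
random.seed(0)
xN, tN, nN, tS = 6, 5, 4, 2
A = [[0.01 * random.gauss(0, 1) for _ in range(xN)] for _ in range(xN)]
v = [random.gauss(0, 1) for _ in range(xN)]
f = lambda p, r: p + r
matvec = lambda M, x: [sum(M[i][k] * x[k] for k in range(xN)) for i in range(xN)]

u = []
for _ in range(tS):
    w = [list(v) for _ in range(tN)]     # iteration n = 0: w[m] = v for all m
    for n in range(1, nN + 1):
        prev, w = w, [list(v)]           # m = 1 stays the initial vector
        for m in range(1, tN):
            Aw = matvec(A, prev[m])      # rhs uses iteration n-1 at the same m
            w.append([w[m - 1][k] + f(Aw[k], prev[m][k]) for k in range(xN)])
    u.extend(w)                          # save iteration nN for m = 1..tN
    v = w[-1]                            # new initial condition
assert len(u) == tN * tS
```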
A black hole would radiate mass at a rate optimal for interstellar-travel applications in the range between $10^7$ and $10^8$ kilograms. Assuming a light-only radiation emission spectrum, with a parabolic reflector with efficiency $f$, this would create an acceleration
$$ a = \frac{f P}{mc}$$
$$ a = \frac{ f \hbar c^5 }{ 15360 \pi G^2 M^3}$$
$$ a \approx \frac{ f \cdot 10^{24}}{(M/\mathrm{kg})^3}\ \mathrm{m\,s^{-2}} $$
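The numerical prefactor can be checked from the Hawking power $P = \hbar c^6/(15360\,\pi G^2 M^2)$ together with $a = fP/(Mc)$; a quick sketch in SI units:

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 kg^-1 s^-2

# a * M^3 for f = 1, from a = f * hbar * c^5 / (15360 * pi * G^2 * M^3)
K = hbar * c ** 5 / (15360 * math.pi * G ** 2)
print(f"{K:.3e}")        # ~1.2e24, matching the 10^24 prefactor

M = 1e7                  # kg, low end of the quoted mass range
print(K / M ** 3)        # ~1.2e3 m/s^2
```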
The problem is that the Schwarzschild radius at this mass is a few attometers, which creates a host of problems:
1) the rate at which it can feed from normal matter is too small compared to the rate at which BH mass is being radiated
2) any electric charge we throw into the BH will be quickly radiated away by superradiance effects and Schwinger pair production, so it will stay neutral most of the time.
3) only very hard gamma rays have (to my limited knowledge) a wavelength short enough to scatter off such a tiny BH
Given the 3 points above, it is unclear how to apply a back-force on the black hole so that a payload, comprising at least the parabolic reflector, can be accelerated with it.
Are there any ideas out there about how to exert a force or moment on such a tiny black hole?
The dynamics of the Heston model are
\begin{align*} \frac{dS}{S} & = \lambda \sqrt{\nu} d W^S \\[0.5em] d \nu & = k (1- \nu )dt + \epsilon \sqrt{\nu} dW^\sigma \end{align*}
where $\lambda$ is the instantaneous volatility. Let $\nu_0 = 1$. The Brownian motions are correlated, with $dW^S\,dW^\sigma = \rho\, dt$.
Now I want to use this online pricer: https://kluge.in-chemnitz.de/tools/pricer/heston_price.php to determine the prices.
How would I go about this without $\lambda$ being specified in the model of the pricer?
Say I want to find the price for $S_0=100$, $K=90$, $\epsilon = 0.3$, $\kappa = 0.05$, $\rho = 0.5$ and $\lambda = 0.2$.
I know that by using Ito on the spot I get
$$ d \log S_t = \lambda \sqrt{\nu_t} d W_t^S - \frac{1}{2} \lambda^2 \nu_t dt $$
Do I somehow need to use a relationship involving $\lambda^2 \nu_t$? If so, how?
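One consistent reading (an assumption on my part; worth checking against the pricer's parameter conventions) is that rescaling the variance absorbs $\lambda$ into it. A sketch of the substitution:

```latex
% Let v_t := \lambda^2 \nu_t. From d\nu = k(1-\nu)\,dt + \epsilon\sqrt{\nu}\,dW^\sigma,
\mathrm{d}v_t = k\left(\lambda^2 - v_t\right)\mathrm{d}t
              + \lambda\epsilon\sqrt{v_t}\,\mathrm{d}W^\sigma,
\qquad
\frac{\mathrm{d}S}{S} = \sqrt{v_t}\,\mathrm{d}W^S .
% This is a standard Heston model with
% v_0 = \theta = \lambda^2 = 0.04, \quad \kappa = k = 0.05,
% \quad \text{vol-of-vol} = \lambda\epsilon = 0.06, \quad \rho = 0.5 .
```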
This question already has an answer here:
What is the remainder when $45!$ is divided by $47$ ?
Is there any method to approach such questions ?
By Wilson's theorem, $46! \equiv -1 \pmod{47}$. Thus, we have $$ -1 \equiv 46! \equiv 46 \cdot 45! \equiv (-1) \cdot 45! \pmod{47}, $$ from which we deduce that $45! \equiv 1 \pmod{47}$.
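The congruences used above are easy to verify directly:

```python
# Direct check of the congruences used above.
def factorial_mod(n, p):
    r = 1
    for i in range(2, n + 1):
        r = (r * i) % p
    return r

print(factorial_mod(46, 47))   # 46, i.e. -1 mod 47 (Wilson's theorem)
print(factorial_mod(45, 47))   # 1, the claimed remainder
```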
Contents: Solvers · General orthogonal coordinates · Data processing · Output functions · Input functions · Interactive Basilisk View · Miscellaneous functions/modules · Tracking floating-point exceptions · See also

Solvers
$$\partial_t \int_{\Omega} \mathbf{q}\, d\Omega = \int_{\partial \Omega} \mathbf{f}(\mathbf{q}) \cdot \mathbf{n}\, d\partial\Omega - \int_{\Omega} hg \nabla z_b$$
$$\mathbf{q} = \left(\begin{array}{c} h\\ hu_x\\ hu_y \end{array}\right), \qquad \mathbf{f}(\mathbf{q}) = \left(\begin{array}{cc} hu_x & hu_y\\ hu_x^2 + \frac{1}{2} gh^2 & hu_xu_y\\ hu_xu_y & hu_y^2 + \frac{1}{2} gh^2 \end{array}\right)$$

Semi-implicit scheme

Multiple layers
$$\partial_th + \partial_x\sum_{l=0}^{nl-1}h_lu_l = 0$$
$$\partial_t(h\mathbf{u}_l) + \nabla\cdot\left(h\mathbf{u}_l\otimes\mathbf{u}_l + \frac{gh^2}{2}\mathbf{I}\right) = - gh\nabla z_b - \partial_z(h\mathbf{u}w) + \nu h\partial_{z^2}\mathbf{u}$$

Green-Naghdi
$$\partial_t \int_{\Omega} \mathbf{q}\, d\Omega = \int_{\partial \Omega} \mathbf{f}(\mathbf{q}) \cdot \mathbf{n}\, d\partial\Omega - \int_{\Omega} hg \nabla z_b + h \left( \frac{g}{\alpha}\nabla \eta - D \right)$$
$$\alpha h\mathcal{T}(D) + hD = b, \qquad b = \left[ \frac{g}{\alpha} \nabla \eta +\mathcal{Q}_1(u) \right]$$

$$\partial_t\left(\begin{array}{c} s_i\\ \mathbf{v}_j \end{array}\right) + \nabla\cdot\left(\begin{array}{c} \mathbf{F}_i\\ \mathbf{T}_j \end{array}\right) = 0$$

$$\partial_t\mathbf{q} + \nabla\cdot(\mathbf{q}\mathbf{u}) = - \nabla p + \nabla\cdot(\mu\nabla\mathbf{u}) + \rho\mathbf{a}$$
$$\partial_t p + \mathbf{u}\cdot\nabla p = -\rho c^2\nabla\cdot\mathbf{u}$$

Navier–Stokes

Streamfunction–Vorticity formulation
$$\partial_t\omega + \mathbf{u}\cdot\nabla\omega = \nu\nabla^2\omega, \qquad \nabla^2\psi = \omega$$

“Markers-And-Cells” (MAC or “C-grid”) formulation

Centered formulation
$$\partial_t\mathbf{u}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u}) = \frac{1}{\rho}\left[-\nabla p + \nabla\cdot(2\mu\mathbf{D})\right] + \mathbf{a}, \qquad \nabla\cdot\mathbf{u} = 0$$

Azimuthal velocity for axisymmetric flows
$$\partial_t w + u_x \partial_x w + u_y \partial_y w + \frac{u_y w}{y} = \frac{1}{\rho y} \left[ \nabla \cdot (\mu y \nabla w) - w \left( \frac{\mu}{y} + \partial_y \mu \right) \right]$$

Two-phase interfacial flows

Electrohydrodynamics

Ohmic conduction
$$\partial_t\rho_e = \nabla \cdot(K \nabla \phi), \qquad \nabla \cdot (\epsilon \nabla \phi) = - \rho_e$$

Ohmic conduction of charged species
$$\partial_tc_i = \nabla \cdot( K_i c_i \nabla \phi)$$

Electrohydrodynamic stresses
$$M_{ij} = \varepsilon \left(E_i E_j - \frac{E^2}{2}\delta_{ij}\right)$$

Viscoelasticity
$$\rho\left[\partial_t\mathbf{u}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u})\right] = - \nabla p + \nabla\cdot(2\mu_s\mathbf{D}) + \nabla\cdot\mathbf{\tau}_p + \rho\mathbf{a}$$
$$\mathbf{\tau}_p = \frac{\mu_p \mathbf{f_s}(\mathbf{A})}{\lambda}, \qquad \Psi = \log\mathbf{A}$$
$$D_t \Psi = (\Omega \cdot \Psi -\Psi \cdot \Omega) + 2 \mathbf{B} + \frac{e^{-\Psi} \mathbf{f}_r (e^{\Psi})}{\lambda}$$

Other equations

Hele-Shaw/Darcy flows
$$\mathbf{u} = \beta\nabla p, \qquad \nabla\cdot(\beta\nabla p) = \zeta$$

Advection
$$\partial_tf_i+\mathbf{u}\cdot\nabla f_i=0$$

Interfacial forces
$$\phi\mathbf{n}\delta_s$$

Reaction–Diffusion
$$\theta\partial_tf = \nabla\cdot(D\nabla f) + \beta f + r$$

Poisson–Helmholtz
$$\nabla\cdot (\alpha\nabla a) + \lambda a = b$$

Runge–Kutta time integrators
$$\frac{\partial\mathbf{u}}{\partial t} = L(\mathbf{u}, t)$$

Signed distance field

Okada fault model

General orthogonal coordinates
When not written in vector form, some of the equations above will change depending on the choice of coordinate system (e.g. polar rather than Cartesian coordinates). In addition, extra terms can appear due to the geometric curvature of space (e.g. equations on the sphere). An important simplification is to consider only orthogonal coordinates. In this case, consistent finite-volume discretisations of standard operators (divergence etc…) can be obtained, for any orthogonal curvilinear coordinate system, using only a few additional geometric parameters.
The face vector fm is the scale factor for the length of a face, i.e. the physical length is fm$\Delta$, and the scalar field cm is the scale factor for the area of the cell, i.e. the physical area is cm$\Delta^2$. By default, these fields are constant and unity (i.e. the Cartesian metric).
Several metric spaces/coordinate systems are predefined:
Axisymmetric

Stokes stream function
$$\frac{\partial^2\psi}{\partial z^2} + \frac{\partial^2\psi}{\partial r^2} - \frac{1}{r}\partial_r\psi = - \omega r$$

Spherical

Radial/cylindrical

Data processing

Various utility functions: timing, field statistics, slope limiters, etc. Tagging connected neighborhoods. Counting droplets.

Output functions

Multiple fields interpolated on a regular grid (text format). Single field interpolated on a regular grid (binary format). Portable PixMap (PPM) image output. Volume-Of-Fluid facets. Basilisk snapshots. Basilisk View. Gerris simulation format. ESRI ASCII Grid format. VTK format.

Input functions

Interactive Basilisk View

bview: a script to start the client/server visualisation pipeline. bview-server.c: the server. bview-client.py: the client.

Miscellaneous functions/modules

Tracking floating-point exceptions
On systems which support signaling NaNs (such as GNU/Linux), Basilisk is set up so that trying to use an uninitialised value will cause a floating-point exception to be triggered and the program to abort. This is particularly useful when developing adaptive algorithms and/or debugging boundary conditions.
To maximise the “debugging potential” of this approach it is also recommended to use the
trash() function to reset any field prior to updates. This will guarantee that older values are not mistakenly reused. Note that this call is quite expensive and needs to be turned on by adding
-DTRASH=1 to the compilation flags (otherwise it is just ignored).
Doing
ulimit -c unlimited
before running the code will allow generation of
core files which can be used for post-mortem debugging (e.g. with gdb).
Visualising stencils
It is often useful to visualise the values of fields in the stencil which triggered the exception. This can be done using the
-catch option of
qcc.
We will take this code as an example:
Copy and paste this into
test.c, then do
ulimit -c unlimited
qcc -DTRASH=1 -g -Wall test.c -o test -lm
./test
you should get
Floating point exception (core dumped)
Then do
gdb test core
you should get
...
Core was generated by `./test'.
Program terminated with signal 8, Arithmetic exception.
#0  0x0000000000419dbe in gradients (f=0x7fff5f412430, g=0x7fff5f412420)
    at /home/popinet/basilisk/wiki/src/utils.h:203
203	v.x[] = (s[1,0] - s[-1,0])/(2.*Delta);
To visualise the stencil/fields which lead to the exception do
qcc -catch -g -Wall test.c -o test -lm
./test
you should now get
Caught signal 8 (Floating Point Exception)
Caught signal 6 (Aborted)
Last point stencils can be displayed using (in gnuplot)
  set size ratio -1
  set key outside
  v=0
  plot 'cells' w l lc 0, \
       'stencil' u 1+3*v:2+3*v:3+3*v w labels tc lt 1 title columnhead(3+3*v), \
       'coarse' u 1+3*v:2+3*v:3+3*v w labels tc lt 3 t ''
Aborted (core dumped)
Follow the instructions i.e.
gnuplot
gnuplot> set size ratio -1
gnuplot> set key outside
gnuplot> v=0
gnuplot> plot 'cells' w l lc 0, \
         'stencil' u 1+3*v:2+3*v:3+3*v w labels tc lt 1 title columnhead(3+3*v), \
         'coarse' u 1+3*v:2+3*v:3+3*v w labels tc lt 3 t ''
With some zooming and panning, you should get this picture
The red numbers represent the stencil the code was working on when the exception occurred. It is centered on the top-left corner of the domain. Cells both inside the domain and outside (i.e. ghost cells) are represented. While the field inside the domain has been initialised, ghost cell values have not. This causes the
gradients() function to generate the exception when it tries to access ghost cell values.
To initialise the ghost-cell values, we need to apply the boundary conditions i.e. add
boundary ({a});
after initialisation. Recompiling and re-running confirms that this fixes the problem.
Note that the blue numbers are the field values for the parent cells (in the quadtree hierarchy). We can see that these are also uninitialised, but this is not a problem since we don't use them in this example.
The v value in the gnuplot script is important: it controls which field is displayed. v=0 indicates the first field allocated by the program (i.e. a[] in this example); accordingly, ga.x[] and ga.y[] have indices 1 and 2 respectively.
Tracing permissions
Some recent systems disallow tracing of processes for security reasons. The symptom will be an error message from gdb looking like:
Attaching to process 9351
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
To enable tracing (which weakens your system’s security), you need to do:
sudo sh -c 'echo 0 > /proc/sys/kernel/yama/ptrace_scope' |
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
First, you should note that the set of isolated points of $E$ is countable. This is in fact a general property of subsets of $\mathbb{R}$:
Theorem: Let $E$ be a subset of $\mathbb{R}$ and let $F$ be the set of isolated points of $\mathbb{R}$. Then $F$ is at most countable.
Proof: Suppose otherwise, that is, that $F$ is uncountable. Then there exists some interval $[k,k+1]$ such that $F\cap[k,k+1]$ is uncountable. For each $x\in F\cap [k,k+1]$, choose a rational number $q_x$, $0<q_x<1$, such that $(x-2q_x,x+2q_x)\cap E=\{x\}$. Since the set $\left\{q_x:x\in F\cap[k,k+1]\right\}$ is countable, there exists some $q$ such that $X=\left\{x:q_x=q\right\}$ is uncountable, in particular infinite. The choice of $q_x$ implies that the sets $(x-q,x+q)$ are all disjoint for $x\in X$, and they are all contained in $[k-1,k+2]$. Therefore, we constructed an infinite family of disjoint intervals of length $2q$, all of which are contained in the bounded interval $[k-1,k+2]$, a contradiction. QED
(Probably, there is a nicer proof of this theorem somewhere in this site.)
Therefore, we should not worry about the isolated points of $E$ when analysing derivatives: the set of isolated points has null measure.
A trick that works here is to extend your function $f$ to an interval containing $E$. We can do this in the following manner:
Let $E\subseteq\mathbb{R}$ and $f:E\to\mathbb{R}$ be monotonic. The function $\hat{f}:(\inf E,\sup E)\to\mathbb{R}$ given by $\hat{f}(x)=\sup_{y\in E,y\leq x}f(y)$ is an extension of $f$ (if $\sup E$ or $\inf E\in E$, define $\hat{f}(\sup E)=f(\sup E)$ or $\hat{f}(\inf E)=f(\inf E)$).
Another extension is given by $\overline{f}(x)=\inf_{y\in E,y\geq x}f(y)$. In fact, you can check that if $g$ is any other monotonic extension of $f$ defined on $[\inf E,\sup E]$, then $\hat{f}(x)\leq g(x)\leq\overline{f}(x)$ for all $x$.
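As a quick illustration of the two extensions (on made-up example data, assuming $f$ is nondecreasing on $E$), one can compute $\hat{f}$ and $\overline{f}$ directly from their defining formulas and check the bracketing numerically:

```python
# Sketch of the two monotone extensions: f_hat (sup from the left) and
# f_bar (inf from the right). E and f below are hypothetical example data.
E = [0.0, 0.5, 1.1, 2.0, 3.7]
f = {0.0: -1.0, 0.5: 0.2, 1.1: 0.2, 2.0: 1.5, 3.7: 4.0}  # nondecreasing on E

def f_hat(x):
    """sup_{y in E, y <= x} f(y)"""
    return max(f[y] for y in E if y <= x)

def f_bar(x):
    """inf_{y in E, y >= x} f(y)"""
    return min(f[y] for y in E if y >= x)

# points in [inf E, sup E] at which to compare the two extensions
checks = [0.0, 0.3, 0.5, 0.7, 1.1, 1.9, 2.0, 3.0, 3.7]
```

Both functions agree with $f$ on $E$, both are nondecreasing, and $\hat{f}\leq\overline{f}$ everywhere in between, matching the claim above.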
Alternatively, you can prove this with Zorn's Lemma, but the argument is basically the same: Zorn's lemma gives you a maximal extension of $f$ to a monotonic function $\widetilde{f}:F\to \mathbb{R}$ defined on some subset $F\supseteq E$. To show that $F$ is an interval you apply the argument above and extend $\widetilde{f}$ to some interval containing $F$. Maximality implies that $F$ is that interval.
Now, about your question of differentiability of $f$: For almost every point $x$ of $(\inf E,\sup E)$, the function $\hat{f}$ is differentiable at $x$. But we also know that almost every point of $E$ is not isolated. Using these two facts, we conclude that almost every point $x$ of $E\cap(\inf E,\sup E)$ is not an isolated point of $E$, and $\hat{f}$ is differentiable at $x$. You can then check that for such $x$, $f$ is differentiable at $x$, and $f'(x)=\hat{f}'(x)$.
Therefore, $f$ is differentiable at almost every point of $E$. |
Given the comments I think I can now give a coherent answer to why no paradox arises. The bottom line is that no standing wave is formed, and the simple reason is that the emitter of the light will look as if it is moving backward, and this will create a temporal zig-zag pattern instead of a spatial one.
More formally it goes like this: A light source is emitting light at an angle of $30^\circ$ such that the $x$-component of the light's velocity is $v=\frac c2$. For a stationary observer the bouncing light beam will look like a zig-zag pattern.
The path of the pattern will be$$\vec r=(\frac c2(t-t_0),\mathrm{zz}(\omega (t-t_0)))$$
Here $\mathrm{zz}$ is the zig-zag, bouncing pattern. $\omega$ is the frequency of the bounce, and depends on the distance between the two mirrors. $t_0$ is the time the light pulse was emitted. I'm including it here as it will become important soon.
A moving observer now flies by at $v=\frac c2$. It concludes that the light beam is bouncing up and down between the same two positions. However, before it concludes that the light is forming a standing wave, it also notices that the emitter now seems to be moving at a speed $v_e=-\frac c2$.
The position of the beam will now be given by$$ x'=\frac{x-\frac c2t}{\sqrt{\gamma}}=\frac{\frac c2(t-t_0)-\frac c2t}{\sqrt{\gamma}}=-\frac{\frac c2t_0}{\sqrt{\gamma}}$$(here I write $\gamma:=1-v^2/c^2$, so that $1/\sqrt{\gamma}$ is the usual Lorentz factor).
The $y$ coordinate does not transform, but time itself gets dilated as
$$ t=\frac{t'+\frac c2\frac{x'}{c^2}}{\sqrt{\gamma}}=\frac{t'+\frac c2\frac{-\frac{\frac c2t_0}{\sqrt{\gamma}}}{c^2}}{\sqrt{\gamma}}=\frac{t'-\frac 14\frac{t_0}{\sqrt{\gamma}}}{\sqrt{\gamma}}$$
So the position of the light beam is now
$$\vec r'=(-\frac{\frac c2t_0}{\sqrt{\gamma}},\mathrm{zz}(\omega (\frac{t'}{\sqrt{\gamma}}-\frac 14\frac{t_0}{\gamma}-t_0))).$$
We now conclude that the light beam appears to be bouncing up and down, as $x'$ is constant. The frequency of the bounce is a bit different because of time dilation, though.
However, the effect of the moving emitter is that although the light is going up and down, no light beams will ever overlap.
So there is no paradox here: both observers see a zig-zag pattern. It's just that for the stationary observer the pattern is stationary, while for the moving observer the pattern (compressed by length contraction) seems to be moving backwards.
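As a quick numerical sanity check (with made-up values $c=1$ and $t_0=3$, writing the Lorentz factor in the standard form rather than the $\sqrt{\gamma}$ shorthand used above), one can verify that the transformed $x'$-coordinate of the beam really is independent of $t$:

```python
from math import sqrt, isclose

c = 1.0          # units with c = 1 (assumption for the example)
v = c / 2        # speed of the moving observer
t0 = 3.0         # emission time of the pulse (hypothetical value)
lorentz = 1.0 / sqrt(1 - (v / c) ** 2)

def x_beam(t):
    # x-position of the bouncing beam in the stationary frame
    return (c / 2) * (t - t0)

def x_prime(t):
    # standard Lorentz transform x' = gamma * (x - v t)
    return lorentz * (x_beam(t) - v * t)

# evaluate at several lab times; all values should coincide
values = [x_prime(t) for t in (0.0, 1.0, 5.0, 10.0)]
```

The constant (negative) value reflects that the whole pattern sits at a fixed $x'$ determined by the emission time $t_0$, exactly as derived above.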
NOTE: The same phenomenon also happens without bringing relativity into the mix. To see this, put $\gamma=1$ and let $t=t'$. |
I'll decompose your big question into smaller questions and answer them in (hopefully) simple terms.
1. What is meant by the risk neutral measure?
This is how I understand the risk-neutral measure (commonly denoted by $\mathbb{Q}$): It is the probability measure under which the current value of every financial asset at a time, say $t$, is equal to the expected future payoff of the asset discounted at the risk-free rate, $r$. It's used heavily in the pricing of financial derivatives because of the
Fundamental Theorem of Asset Pricing (see: Wikipedia).
This theorem implies that in a complete market (i.e., a market that allows the hedging of the risk inherent in any investment strategy), a financial derivative's price is the discounted expected value of the future payoff under $\mathbb{Q}$.
It's well-known that in the case of a geometric Brownian motion model a unique risk-neutral measure exists. However, the introduction of jumps, as in Merton's 1976 paper, destroys this notion of completeness and so we no longer have a
unique risk-neutral measure $\mathbb{Q}$.
Finally, the risk-neutral pricing formula of a European call at time $t$ with the parameters you mentioned is
$$C = C(t, S_t)=\mathbb{E}_{\mathbb{Q}}[e^{-r(T-t)}(S_T-K)^+\mid\mathcal{F}_t],$$
where for now just read $\mathcal{F}_t$ as all the information known at time $t$.
2. What's the price of European call in Merton's model?
A closed-form solution for European options under Merton's jump-diffusion model exists. Let $C_{BS}$ denote the price of your European call under the Black-Scholes model. You'd like $C_{JD}$, its value under Merton's jump-diffusion model, where your jump size follows a log-normal distribution with average jump size $m$ and jump size volatility $\nu$. The formula for $C_{JD}$ can be written as:
$$C_{JD}(S, K, \sigma, r, T, \lambda, m, \nu)=\sum_{k=0}^{\infty}\frac{e^{-m\lambda T}(m\lambda T)^k}{k!}C_{BS}(S, K, \sigma_k, r_k, T),$$
where $\sigma_k = \sqrt{\sigma^2 + k\nu^2/T}$ and $r_k = r - \lambda(m-1)+k\log(m)/T$.
Each term in the infinite series corresponds to every possible jump frequency scenario.
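For concreteness, the series above can be sketched in a few lines of Python (truncating the infinite sum at an arbitrary number of terms and using `math.erf` for the normal CDF; the parameter values below are made up for illustration). With $\lambda=0$ it should collapse to the plain Black-Scholes price:

```python
from math import log, sqrt, exp, erf, factorial

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, sigma, r, T):
    # standard Black-Scholes European call price
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def merton_call(S, K, sigma, r, T, lam, m, nu, n_terms=60):
    # truncated series: each term weights a Black-Scholes price by the
    # probability of exactly k jumps occurring in [0, T]
    total = 0.0
    for k in range(n_terms):
        weight = exp(-m * lam * T) * (m * lam * T) ** k / factorial(k)
        sigma_k = sqrt(sigma**2 + k * nu**2 / T)
        r_k = r - lam * (m - 1) + k * log(m) / T
        total += weight * bs_call(S, K, sigma_k, r_k, T)
    return total

# hypothetical parameters: spot 100, strike 100, 20% vol, 5% rate, 1 year,
# jump intensity 0.5/year, mean jump size 1.1, jump-size vol 0.3
price = merton_call(S=100, K=100, sigma=0.2, r=0.05, T=1.0, lam=0.5, m=1.1, nu=0.3)
```

Since jumps add (mean-preserving) variance to the terminal distribution, the jump-diffusion price should come out above the corresponding Black-Scholes price for this convex payoff.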
3. How do we simulate Merton's jump-diffusion model on a computer?
This is possibly the broadest question of them all and (correct me if I'm wrong) depends on a variety of factors. In my opinion, the simplest way to do so is as follows (I won't go into much detail here):
i. Get the Euler discretisation of the Merton jump-diffusion model;
ii. Get your parameters (a major topic in its own right);
iii. Generate three sets of independent random numbers corresponding to the three random variables in your discretisation scheme;
iv. Get the values for the simulated stock path using these;
v. Use Monte Carlo integration to get the price of your call option.
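A minimal Monte Carlo sketch along those lines (an Euler discretisation with a Bernoulli approximation of the Poisson jump indicator and lognormal jump sizes; all parameter values are made up, and `numpy` is assumed):

```python
import numpy as np

def simulate_merton(S0, r, sigma, lam, m, nu, T, n_steps, n_paths, seed=0):
    """Euler discretisation of Merton's jump-diffusion under Q.

    The drift is compensated by lam*(m-1) so that E[S_T] = S0*exp(r*T)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    mu_j = np.log(m) - 0.5 * nu**2        # so the jump multiplier has mean m
    S = np.full(n_paths, float(S0))
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)              # diffusion noise
        jump_occurs = rng.random(n_paths) < lam * dt  # Bernoulli ~ Poisson indicator
        jump_mult = np.exp(mu_j + nu * rng.standard_normal(n_paths))
        S = S * (1.0 + (r - lam * (m - 1)) * dt + sigma * np.sqrt(dt) * z)
        S = np.where(jump_occurs, S * jump_mult, S)
    return S

# hypothetical parameters matching the closed-form discussion above
S_T = simulate_merton(S0=100, r=0.05, sigma=0.2, lam=0.5, m=1.1, nu=0.3,
                      T=1.0, n_steps=100, n_paths=50_000)
call_price = np.exp(-0.05) * np.maximum(S_T - 100, 0).mean()
```

Note the three independent sets of random numbers (diffusion noise, jump indicator, jump size), matching step iii above; a useful sanity check is that the discounted mean of $S_T$ stays close to $S_0$.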
I hope this helps and excuse any mistakes I've made along the way.
Thanks, Vladimir
Extra: Here's a thesis and book which provide great introductions (and more) to this topic. |
If you change variables to optimize for the residual of the linear part, then the Hessian will be a low-rank update to the identity. Then L-BFGS would work very well. Specifically, your problem takes the form$$\min_x \frac{1}{2}\|Ax-b\|^2 + \frac{\mu}{2}\|g(x)\|^2$$where $Ax=b$ is the linear PDE and $g$ is the nonlinear part, and $\mu$ is a tradeoff parameter. Let $r:=Ax-b$. Then the problem becomes$$\min_r \frac{1}{2}\|r\|^2 + \frac{\mu}{2}\|g(A^{-1}(r+b))\|^2.$$The gradient of the problem is$$\nabla f(r) = r + \mu A^{-T} G^T g(A^{-1}(r+b));$$the (Gauss-Newton) Hessian for this problem is$$H = I + \mu A^{-T}G^TG A^{-1},$$where $G$ is the Jacobian of $g$, which has rank $10$. Hence the Hessian is a rank-10 update to the identity. So, on this modified problem L-BFGS will probably converge in around 10 iterations.
Each iteration requires solving a system of the form $Ax=z$ to evaluate the objective function and $A^Tx=w$ to compute the gradient.
Edit: Here's a simple Python mock-up. It seems to work.
from numpy import ones, zeros, arange, dot, sqrt
from numpy.random import choice, randn
from numpy.linalg import norm
from scipy.sparse import eye, diags, kron, coo_matrix, bmat
from scipy.sparse.linalg import factorized
from scipy.optimize import minimize
from matplotlib.pyplot import matshow, show
# See: https://scicomp.stackexchange.com/a/29205/1502
n=40
N=n*n
dx = 1./n
m=10
inds = choice(N,2*m)
xx_inds = inds[:m]
yy_inds = inds[m:]
mu = 1./(m*(dx**2)) # how strongly to force the differences being 1 vs forcing the PDE being solved
# A: 2D Neumann Laplacian
A1D = diags([-ones(n - 1), 2*ones(n), -ones(n - 1)], [-1, 0, 1], shape=(n, n)).tocsr()
A1D[0,0] = 1
A1D[-1,-1] = 1
I=eye(n)
A0 = (kron(A1D, I) + kron(I, A1D)).tocsr()
A = bmat([[A0, ones([N,1])],[ones([1,N]), zeros(1)]])
solve_A = factorized(A)
solve_At = solve_A # symmetric
B = (coo_matrix((ones(m),(arange(m),xx_inds)), shape=(m,N+1))
- coo_matrix((ones(m),(arange(m),yy_inds)), shape=(m,N+1))).tocsr()
def g(u):
    return (B*u)**2 - 1.

def G(u):
    return 2*diags(B*u,0)*B

def objective(r):
    return 0.5*norm(r)**2 + mu*0.5*norm(g(solve_A(r)))**2

def gradient(r):
    u = solve_A(r)
    return r + mu*solve_At(G(u).T * g(u))
# Check gradient with finite differences
r0 = randn(N+1)
s = 1e-7
dr = randn(N+1)
r1 = r0 + s*dr
j0 = objective(r0)
j1 = objective(r1)
dj_diff = (j1-j0)/s
dj = dot(gradient(r0),dr)
gradient_err = norm(dj - dj_diff)/norm(dj_diff)
print('s=', s, ', gradient_err=', gradient_err)
# Solve with L-BFGS
result = minimize(objective, ones(N+1), jac=gradient, method='L-BFGS-B', options={'maxiter':50})
r_optimal = result['x']
u_optimal = solve_A(r_optimal)
g_optimal = g(u_optimal)
num_iter = result['nit']
print('num_iter=', num_iter)
print('norm(r_optimal*(dx**2))=', norm(r_optimal*(dx**2)))
print('norm(g_optimal)/m=', norm(g_optimal)/m)
print('u_optimal[xx_inds] - u_optimal[yy_inds]=', u_optimal[xx_inds] - u_optimal[yy_inds])
matshow(u_optimal[:N].reshape([n,n]))
show()
Edit2: Enforcing Neumann condition
Let $\{\phi_i\}_{i=1}^N$ be a set of finite element basis functions, let$$A^0_{ij}:= \int_\Omega \nabla\phi_i \cdot \nabla \phi_j ~dx,$$and let$$\mathbb{1}_j:= \int_\Omega \phi_j ~dx.$$Then at the discrete level solving the pure Neumann Laplacian problem with average value zero constraint is equivalent to solving the following saddle point system:$$\underbrace{\begin{bmatrix}A^0 & \mathbb{1} \\ \mathbb{1}^T & 0\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}u \\ \lambda\end{bmatrix}}_{x} = \begin{bmatrix}f \\ 0\end{bmatrix}.$$The matrix $A$ is invertible since $A^0$ is invertible on $\text{ker}(\mathbb{1}^T)$. For some background on invertibility of saddle point systems I recommend this: https://arxiv.org/abs/1202.3330 .
If you have some specialized or black box solver that solves the problem$$A^0 \widehat{u} = f, \quad\quad \text{and} \quad\quad \mathbb{1}^T \widehat{u}=c^0,$$you can use it to solve problems of the form$$\begin{bmatrix}A^0 & \mathbb{1} \\ \mathbb{1}^T & 0\end{bmatrix}\begin{bmatrix}u \\ \lambda\end{bmatrix} = \begin{bmatrix}f \\ c\end{bmatrix}$$by setting $u = \widehat{u}+(c-c^0)$ and $\lambda = (f - A^0 u)_j/\mathbb{1}_j$, any component $j$ will do. Since the first row block equation is satisfied, the quantity $(f - A^0 u)_j/\mathbb{1}_j$ will be the same for all $j$. |
Defining parameters
Level: \( N \) = \( 5 \)
Weight: \( k \) = \( 10 \)
Nonzero newspaces: \( 2 \)
Newforms: \( 3 \)
Sturm bound: \( 20 \)
Trace bound: \( 1 \)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{10}(\Gamma_1(5))\).
                    Total   New   Old
Modular forms         11     9     2
Cusp forms             7     7     0
Eisenstein series      4     2     2
Decomposition of \(S_{10}^{\mathrm{new}}(\Gamma_1(5))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label    \(\chi\)                 Newforms    Dimension   \(\chi\) degree
5.10.a   \(\chi_{5}(1, \cdot)\)   5.10.a.a    1           1
                                  5.10.a.b    2
5.10.b   \(\chi_{5}(4, \cdot)\)   5.10.b.a    4           1 |
I was having some lectures and I didn't quite understand the following: let's say you have a grid like $$G=\{ x \in \mathbb{R} : x = x_j = hj, \ j = 0,1,...,n,\ h=1/n\}.$$ And you write in difference form the following 1D diffusion equation (comma before index indicates derivative along the index): $$-(au_{,1})_{,1}=s, \qquad x \in \Omega := (0,1).$$
Giving: $$ \{ -(a_{j-1} + a_j)u_{j-1} + (a_{j-1}+2a_j+a_{j+1})u_j-(a_j+a_{j+1} )u_{j+1}\}/2h^2 = s_j$$ Now let's say you have a discontinuity of $a$ along some interface $\Gamma$. In our 1D problem, let: $$a(x) = \epsilon, \ \ 0<x\le x^*, \qquad a(x) = 1, \ \ x^*<x<1.$$
For the boundary conditions $u(0) = 0$ and $u(1) = 1$. This jump can be written: $$\epsilon \lim_{x \uparrow x^*} u_{,1} = \lim_{x \downarrow x^*} u_{,1}. $$
Here is what I don't understand: the lecturer claims that by proposing a piecewise linear solution for the diffusion equation it is found that:$$u= cx, \ \ 0\le x< x^*, \qquad u=\epsilon c x + 1 -\epsilon c, \ \ x^*\le x\le1,$$where$$c= 1/(x^*-\epsilon x^* + \epsilon).$$
This is done with the difference equation, assuming $x_j < x^* \le x_{j+1}$, giving:$$u_j= \alpha j, \ \ 0\le j< k, \qquad u_j=\beta j + 1 -\beta n, \ \ k+1\le j\le n,$$where$$ \beta =\epsilon \alpha, \ \ \alpha=\left( \epsilon\frac{1-\epsilon}{1+\epsilon} + \epsilon(n-k)+k\right)^{-1}.$$Hence:$$u_k = x_k/(\epsilon h(1-\epsilon)/(1+\epsilon)+(1-\epsilon)x_k+\epsilon).$$
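One way to convince yourself is to assemble and solve the difference equations numerically and check that the discrete solution is indeed linear on each side of the interface. A quick sketch with made-up values $n=40$, $\epsilon=0.1$, $x^*=0.5$ (assuming `numpy`):

```python
import numpy as np

n = 40
h = 1.0 / n
eps = 0.1
x = np.linspace(0.0, 1.0, n + 1)
x_star = 0.5
a = np.where(x <= x_star, eps, 1.0)   # a(x) = eps for x <= x*, 1 otherwise

# Assemble the difference equations with s = 0:
#   {-(a[j-1]+a[j]) u[j-1] + (a[j-1]+2a[j]+a[j+1]) u[j] - (a[j]+a[j+1]) u[j+1]} / 2h^2 = 0
A = np.zeros((n + 1, n + 1))
A[0, 0] = 1.0                          # boundary condition u(0) = 0
A[n, n] = 1.0                          # boundary condition u(1) = 1
for j in range(1, n):
    A[j, j - 1] = -(a[j - 1] + a[j])
    A[j, j] = a[j - 1] + 2 * a[j] + a[j + 1]
    A[j, j + 1] = -(a[j] + a[j + 1])
rhs = np.zeros(n + 1)
rhs[n] = 1.0
u = np.linalg.solve(A, rhs)
```

Away from the interface the scheme reduces to $-u_{j-1}+2u_j-u_{j+1}=0$, so the second differences of the computed solution vanish there, i.e. the solution is piecewise linear, as the lecturer claims.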
How are those piecewise linear solutions built?
Any input will be much appreciated. |
A Meditation on π (1)
A few number-theoretic \(\pi\) facts:
\(\pi\) is provably transcendental, and thus also irrational.
\(\pi\) is suspected, but not known, to be normal (a statement about the statistical distribution of its digits).
\(\pi\), provably, has Liouville-Roth constant (or irrationality coefficient) no greater than \(7.6063\), and is suspected to have constant no greater than \(2.5\). (As a consequence of its irrationality, its L-R constant is \(\geq2\).)
Note, though, that each of these things is also true of
literally 100% of numbers. And before you scoff at my use of the figurative 'literally', no no -- measure-theoretically, the non-(normal, transcendental, irrational, irrationality-coefficient-less-than-8) numbers make up exactly, mathematically 0% of the number line.
For the record: irrational algebraics like \(\sqrt2\) are
also nonterminating and nonrepeating, and it's not clear what features of the stringwise-local decimal expansion (which seems to be the only thing \(\pi\) enthusiasts focus on, |
Let $A = \operatorname{span}\left\{ a_1, a_2, \ldots , a_m\right\}$ given a set of basis vectors $a_1, a_2, \ldots , a_m$ in $\mathbb{C}^{n}$, and likewise let $B = \operatorname{span}\left\{ b_1, b_2, \ldots , b_m\right\}$ for a set of basis vectors $b_1, b_2, \ldots , b_m$.
Let C be the sum of the two vector spaces, i.e. $$C = \left\{ c_1 a + c_2 b; \;\; a \in A,\, b\in B; \; c_1, c_2 \in \mathbb{C}\right\}$$
and let $D$ be the subspace of $C$ orthogonal to everything in $A$, i.e. $$D = \left\{ c \in C \;\;\text{s.t}\;\;c\cdot a = 0\;\;\forall \;a \in A\right\}$$ Is there an accepted compact notation for $D$ in terms of $A$ and $B$? For example, would $$(A \oplus B) \mod A$$ or $$B \ominus A$$ generally be comprehensible to a reader knowing that $A$ and $B$ are both vector spaces in $\mathbb{C}^n$?
I'm working this into a physics paper where space is extremely tight, and I'd ideally like to describe $D$ as compactly as possible.
Thanks for any assistance. |
From the beginning: I have a function $u(r)$ and radial symmetry in the system. I also have results as a data array $\{u_i(r_i)\}$, and I want to plot it as $u(x,y)$. Due to the radial symmetry it's going to be like $x=r \cos(\varphi),\ y=r \sin(\varphi)$. In other words, I have a function profile and want to "integrate" it over $2\pi\,\mathrm d\varphi$. Like this for a Gaussian
One option would be to use
RevolutionPlot3D.
u = Table[Sin[2 \[Pi]*r], {r, 0, 1, 0.1}]; (* u is a dummy u[r] *)
f = ListInterpolation[u, {0, 1}]; (* create an interpolating function over the range {0, 1} *)
(* plot it over the domain *)
RevolutionPlot3D[f[r], {r, #1, #2}] & @@@ f["Domain"]
You could also generate the points yourself and use
ListPointPlot3D
u = Table[{r, Sin[2 \[Pi]*r]}, {r, 0, 1, 0.1}]; (* table of {r, u[r]} *)
xyz = Flatten[
   Table[{#1*Cos[\[Theta]], #1*Sin[\[Theta]], #2} & @@@ u,
    {\[Theta], 0, 2 \[Pi], 2 \[Pi]/100}], 1];
ListPointPlot3D[xyz, Filling -> Axis] |
Difference between revisions of "Main Page"
We are also collecting bounds for [[Fujimura's problem]], motivated by a [[hyper-optimistic conjecture]].
Here are some [[unsolved problems]] arising from the above threads.
Revision as of 20:26, 13 February 2009
The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Useful background materials
Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (final call)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
(700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station)
There is also a chance that we will be able to improve the known bounds on Moser's cube problem.
Here are some unsolved problems arising from the above threads.
Here is a tidy problem page.
Bibliography
Density Hales-Jewett
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
Behrend-type constructions
M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint.
K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.
Triangles and corners
M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299
I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318
J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR 2047239 |
Let $a$, $b$ and $c$ be non-negative numbers. Prove that: $$\frac{a}{a^2+ab+b^2+3}+\frac{b}{b^2+bc+c^2+3}+\frac{c}{c^2+ca+a^2+3}\leq\frac{1}{2}$$
I think this inequality is very interesting because most of the contest's inequalities are homogeneous, but this inequality is non-homogeneous.
Testing for $c=0$ gives $$\frac{a}{a^2+ab+b^2+3}+\frac{b}{b^2+3}\leq0.455...<\frac{1}{2}.$$ For $b=c=0$ we obtain something obvious.
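A quick brute-force check of the numeric claim for $c=0$ (a grid search over the made-up ranges $a,b\in[0,10]$; Python assumed):

```python
def lhs(a, b):
    # left-hand side of the inequality with c = 0
    return a / (a * a + a * b + b * b + 3) + b / (b * b + 3)

# brute-force grid search over a, b >= 0
best = 0.0
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        a = 10.0 * i / steps
        b = 10.0 * j / steps
        best = max(best, lhs(a, b))
```

The grid maximum sits strictly below $\frac{1}{2}$, consistent with the value $0.455\ldots$ quoted above.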
One of the standard ways to prove these inequalities is to try to make a homogenization.
By the way, an attempt at homogenization gives a wrong inequality: $$\sum\limits_{cyc}\frac{a}{a^2+ab+b^2+3}\leq\sum\limits_{cyc}\frac{a}{2\sqrt{3(a^2+ab+b^2)}}.$$ Thus, it remains to prove that $$\sum\limits_{cyc}\frac{a}{\sqrt{3(a^2+ab+b^2)}}\leq1,$$ which is wrong for $c=0$ and $a\rightarrow+\infty$.
Also we can try the following.
We know that $\sum\limits_{cyc}\frac{x}{2x+y}\le1$ for positive $x$, $y$ and $z$.
Indeed, by C-S we obtain: $$1-\sum_{cyc}\frac{x}{2x+y}=1-\frac{3}{2}-\sum_{cyc}\left(\frac{x}{2x+y}-\frac{1}{2}\right)=\frac{1}{2}\sum_{cyc}\frac{y}{2x+y}-\frac{1}{2}=$$ $$=\frac{1}{2}\sum_{cyc}\frac{y^2}{2xy+y^2}-\frac{1}{2}\geq\frac{1}{2}\cdot\frac{(x+y+z)^2}{\sum\limits_{cyc}(y^2+2xy)}-\frac{1}{2}=\frac{1}{2}-\frac{1}{2}=0.$$ Thus, it's enough to prove that $$a^2+ab+b^2+3\geq2(2a+\sqrt{ab})$$ because if it were true then $$\sum_{cyc}\frac{a}{a^2+ab+b^2+3}\leq\sum_{cyc}\frac{a}{2(2a+\sqrt{ab})}=\frac{1}{2}\sum_{cyc}\frac{\sqrt{a}}{2\sqrt{a}+\sqrt{b}}\leq\frac{1}{2}.$$ But the inequality $a^2+ab+b^2+3\geq2(2a+\sqrt{ab})$ is wrong! Try $a=2$ and $b=\frac{1}{4}$.
We could also try a full expansion (I tried!) and hope to finish with AM-GM, but this way is very ugly and probably leads nowhere.
Any hint would be desirable.
Thank you! |
Q. A block of mass 10 kg is kept on a rough inclined plane as shown in the figure. A force of 3 N is applied on the block. The coefficient of static friction between the plane and the block is 0.6. What should be the minimum value of force $P$, such that the block does not move downward?
(take $g = 10 \; ms^{-2}$)
Solution:
Component of the weight along the incline: $mg \sin45^{\circ} = \frac{100}{\sqrt{2}} = 50\sqrt{2}\ \text{N}$
Maximum static friction (acting up the incline): $\mu mg \cos\theta = 0.6 \times mg \times\frac{1}{\sqrt{2}} = 0.6\times50\sqrt{2}\ \text{N}$
Taking the applied 3 N force to act down the incline (as in the figure), the minimum $P$ balances the net downhill force: $P = mg\sin45^{\circ} - \mu mg\cos45^{\circ} + 3 = 31.28 \simeq 32\ \text{N}$ |
I'm wondering if there is a standard way of measuring the "sortedness" of an array? Would an array which has the median number of possible inversions be considered maximally unsorted? By that I mean it's basically as far as possible from being either sorted or reverse sorted.
No, it depends on your application. Measures of sortedness are often referred to as
measures of disorder, which are functions from $N^{<N}$ to $\mathbb{R}$, where $N^{<N}$ is the collection of all finite sequences of distinct nonnegative integers. The survey by Estivill-Castro and Wood lists and discusses 11 different measures of disorder in the context of adaptive sorting algorithms.
The number of inversions might work for some cases, but is sometimes insufficient. An example given in that survey is the sequence
$$\langle \lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n, 1, \ldots, \lfloor n/2 \rfloor \rangle$$
that has a quadratic number of inversions, but only consists of two ascending runs. It is nearly sorted, but this is not captured by inversions.
Mannila [1] axiomatizes presortedness (with a focus on comparison-based algorithms) as follows (paraphrasing).
Let $\Sigma$ be a totally ordered set. Then a mapping $m$ from $\Sigma^{\star}$ (the sequences of distinct elements from $\Sigma$) to the naturals is a
measure of presortedness if it satisfies the conditions below.
If $X \in \Sigma^{\star}$ is sorted then $m(X) = 0$.
If $X,Y \in \Sigma^{\star}$ with $X = x_1 \dots x_n$, $Y = y_1 \dots y_n$ and $x_i < x_j \iff y_i < y_j$ for all $i,j \in [1..n]$, then $m(X) = m(Y)$.
If $X$ is a subsequence of $Y \in \Sigma^{\star}$, then $m(X) \leq m(Y)$.
If $x_i < y_j$ for all $i \in [1..|X|]$ and $j \in [1..|Y|]$ for some $X,Y \in \Sigma^{\star}$, then $m(X \cdot Y) \leq m(X) + m(Y)$.
$m(a \cdot X) \leq |X| + m(X)$ for all $X \in \Sigma^{\star}$ and $a \in \Sigma \setminus X$.
Examples of such measures are the
number of inversions, number of swaps, the number of elements that are not left-to-right maxima, and the length of a longest increasing subsequence (subtracted from the input length).
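To make a few of these concrete, here is a small Python sketch of three of the listed measures (naive implementations, fine for small inputs), evaluated on the two-run example above with $n=6$:

```python
from bisect import bisect_left

def inversions(xs):
    # number of pairs (i, j) with i < j and xs[i] > xs[j]
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs)) if xs[i] > xs[j])

def runs(xs):
    # number of maximal ascending runs
    return 1 + sum(1 for i in range(len(xs) - 1) if xs[i] > xs[i + 1])

def lis_deficit(xs):
    # n minus the length of a longest increasing subsequence (patience sorting)
    tails = []
    for x in xs:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(xs) - len(tails)

example = [4, 5, 6, 1, 2, 3]   # the <n/2+1, ..., n, 1, ..., n/2> sequence for n = 6
```

On this sequence, inversions report a large (quadratic-flavoured) disorder of 9, while the run count reports only 2 ascending runs, illustrating how the measures disagree.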
Note that random distributions using these measures have been defined, i.e. such that make sequences that are more/less sorted more or less likely. These are called
Ewens-like distributions [2, Ch. 4-5; 3, Example 12; 4], a special case of which is the so-called Mallows distribution. The weights are parametric in a constant $\theta > 0$ and fulfill
$\qquad\displaystyle \operatorname{Pr}(X) = \frac{\theta^{\,m(X)}}{\sum_{Y \in \Sigma^{\star} \cap \Sigma^{|X|}} \theta^{\,m(Y)}} $.
Note how $\theta = 1$ defines the uniform distribution (for all $m$).
Since it is possible to sample permutations w.r.t. these measures efficiently, this body of work can be useful in practice when benchmarking sorting algorithms.
[1] Measures of Presortedness and Optimal Sorting Algorithms by H. Mannila (1985)
[2] Logarithmic combinatorial structures: a probabilistic approach by R. Arratia, A.D. Barbour and S. Tavaré (2003)
[3] On adding a list of numbers (and other one-dependent determinantal processes) by A. Borodin, P. Diaconis and J. Fulman (2010)
[4] Ewens-like distributions and Analysis of Algorithms by N. Auger et al. (2016)
I have my own definition of "sortedness" of a sequence.
Given any sequence [a,b,c,…] we compare it with the sorted sequence containing the same elements, count number of matches and divide it by the number of elements in the sequence.
For example, given sequence
[5,1,2,3,4] we proceed as follows:
1) sort the sequence:
[1,2,3,4,5]
2) compare the sorted sequence with the original by moving it one position at a time and counting the maximal number of matches:
shift 0:  [5,1,2,3,4] against [1,2,3,4,5] — no matches
shift +1: [5,1,2,3,4] against the sorted copy moved one position right — 4 matches (1,2,3,4 line up)
shift -4: [5,1,2,3,4] against the sorted copy moved four positions left — one match (5 lines up with 5)
all other shifts — no matches
3) The maximal number of matches is 4, we can calculate the "sortedness" as 4/5 = 0.8.
Sortedness of a sorted sequence would be 1, and sortedness of a sequence with elements placed in reversed order would be 1/n.
The idea behind this definition is to estimate the minimal amount of work we would need to do to convert any sequence to the sorted sequence. In the example above we need to move just one element, the 5 (there are many ways, but moving 5 is the most efficient). When the elements would be placed in reversed order, we would need to move 4 elements. And when the sequence were sorted, no work is needed.
I hope my definition makes sense.
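If it helps, the definition above can be sketched in a few lines of Python (a naive scan over all relative shifts, quadratic in the sequence length):

```python
def sortedness(seq):
    """Max number of positional matches between seq and its sorted copy,
    taken over all relative shifts, divided by len(seq)."""
    s = sorted(seq)
    n = len(seq)
    best = 0
    for shift in range(-(n - 1), n):
        matches = sum(1 for i in range(n)
                      if 0 <= i + shift < n and seq[i + shift] == s[i])
        best = max(best, matches)
    return best / n
```

This reproduces the worked example: 0.8 for [5,1,2,3,4], 1 for a sorted sequence, and 1/n for a reversed one.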
If you need something quick and dirty (summation signs scare me), I wrote a super easy disorder function in C++ for a class named Array which generates int arrays filled with randomly generated numbers:
void Array::disorder() {
    double disorderValue = 0;
    for (int n = 0; n < this->arraySize; n++) {
        // distance of each value from its position in sorted order
        // (the array holds the values 1..arraySize)
        disorderValue += abs((n + 1) - array[n]);
    }
    // normalised so a sorted array gives 0 and a reversed array gives 1
    cout << "Disorder Value = "
         << (disorderValue / this->arraySize) / (this->arraySize / 2.0) << "\n" << endl;
}
The function simply compares the value in each element to the element's index plus one, so that an array in reverse order has a disorder value of 1 and a sorted array has a disorder value of 0. Not sophisticated, but working.
Michael |
I’d like to write about the situation that occurs in set theory when a forcing extension $V[G]=V[H]$ arises over a ground model $V$ in two different ways simultaneously, using generic filters over two different forcing notions $G\subset\mathbb{B}$ and $H\subset\mathbb{C}$. The general fact, stated in theorem 1, is that in this case, the two forcing notions are actually isomorphic on a cone $\mathbb{B}\upharpoonright b\cong\mathbb{C}\upharpoonright c$, with the isomorphism carrying the one generic filter to the other. In other words, below these respective conditions $b$ and $c$, the forcing notions and the respective generic filters are not actually different.
I have always assumed that this fact was part of the classical forcing folklore, but it doesn't seem to be stated explicitly in most of the usual forcing literature (it does appear as lemma 25.5 in Jech's book), and so I am writing an account of it here. Victoria Gitman and I have need of it in a current joint project. (Bob Solovay mentions in the comments below that the result is due to him, and provides a possible 1975 reference.) Theorem 1. If $V[G]=V[H]$, where $G\subset \mathbb{B}$ and $H\subset\mathbb{C}$ are $V$-generic filters on the complete Boolean algebras $\mathbb{B}$ and $\mathbb{C}$, respectively, then there are conditions $b\in\mathbb{B}$ and $c\in\mathbb{C}$ such that $\mathbb{B}\upharpoonright b$ is isomorphic to $\mathbb{C}\upharpoonright c$ by an isomorphism carrying $G$ to $H$.
The proof will also establish the following related result, concerning the situation where one extension is merely contained in the other.
Theorem 2. If $V[H]\subset V[G]$, where $G\subset\mathbb{B}$ and $H\subset\mathbb{C}$ are $V$-generic filters on the complete Boolean algebras $\mathbb{B}$ and $\mathbb{C}$, respectively, then there are conditions $b\in\mathbb{B}$ and $c\in\mathbb{C}$ such that $\mathbb{C}\upharpoonright c$ is isomorphic to a complete subalgebra of $\mathbb{B}\upharpoonright b$.
By $\mathbb{B}\upharpoonright b$, where $b$ is a condition in $\mathbb{B}$ (that is, a nonzero element of $\mathbb{B}$), what I mean is the Boolean algebra consisting of the interval $[0,b]$ in $\mathbb{B}$, using relative complement $b-a$ as the negation of $a$. This is the complete Boolean algebra that arises when forcing with the conditions in $\mathbb{B}$ below $b$.
Proof: In order to prove theorem 2, let me assume at first only that $V[H]\subset V[G]$. It follows that $H=\dot H_G$ for some $\mathbb{B}$-name $\dot H$, and we may choose a condition $b\in G$ forcing that $\dot H$ is a $\check V$-generic filter on $\check{\mathbb{C}}$.
I claim that there is some $c\in H$ such that every $d\leq c$ has $b\wedge [\![\check d\in\dot H]\!]^{\mathbb{B}}\neq 0$. Note that every $d\in H$ has $[\![\check d\in\dot H]\!]\in G$ by the truth lemma, since $H=\dot H_G$, and so $b\wedge [\![\check d\in\dot H]\!]^{\mathbb{B}}\neq 0$ for $d\in H$. If $c\in H$ forces that every $d$ in the generic filter has that property, then indeed every $d\leq c$ has $b\wedge [\![\check d\in\dot H]\!]^{\mathbb{B}}\neq 0$ as claimed.
In other words, from the perspective of the $\mathbb{B}$ forcing, every $d\leq c$ has a nonzero possibility to be in $\dot H$.
Define $\pi:\mathbb{C}\upharpoonright c\to\mathbb{B}$ by $$\pi(d)=b\wedge [\![\check d\in\dot H]\!]^{\mathbb{B}}.$$ Using the fact that $b$ forces that $\dot H$ is a filter, it is straightforward to verify that
- $d\leq e\implies \pi(d)\leq\pi(e)$, since if $d\leq e$ and $d\in H$, then $e\in H$.
- $\pi(d\wedge e)=\pi(d)\wedge \pi(e)$, since $[\![\check d\in\dot H]\!]\wedge[\![\check e\in \dot H]\!]=[\![\check{(d\wedge e)}\in\dot H]\!]$.
- $\pi(d-e)=\pi(d)-\pi(e)$, since $[\![\check{(d-e)}\in\dot H]\!]=[\![\check d\in\dot H]\!]-[\![\check e\in\dot H]\!]$.
Thus, $\pi$ is a Boolean algebra embedding of $\mathbb{C}\upharpoonright c$ into $\mathbb{B}\upharpoonright\pi(c)$.
Let me argue that this embedding is a complete embedding. Suppose that $a=\bigvee A$ for some subset $A\subset\mathbb{C}\upharpoonright c$ with $A\in V$. Since $H$ is $V$-generic, it follows that $a\in H$ just in case $H$ meets $A$. Thus, $[\![\check a\in\dot H]\!]=[\![\exists x\in\check A\, x\in \dot H]\!]=\bigvee_{x\in A}[\![\check x\in\dot H]\!]$, and so $\pi(\bigvee A)=\bigvee_{x\in A}\pi(x)$, and so $\pi$ is complete, as desired. This proves theorem 2.
To prove theorem 1, let me now assume fully that $V[G]=V[H]$. In this case, there is a $\mathbb{C}$ name $\dot G$ for which $G=\dot G_H$. By strengthening $b$, we may assume without loss that $b$ also forces that, that is, that $b$ forces $\Gamma=\check{\dot G}_{\dot H}$, where $\Gamma$ is the canonical $\mathbb{B}$-name for the generic object, and $\check{\dot G}$ is the $\mathbb{B}$-name of the $\mathbb{C}$-name $\dot G$. Let us also strengthen $c$ to ensure that $c$ forces $\dot G$ is $\check V$-generic for $\check{\mathbb{C}}$. For $d\leq c$ define $\pi(d)=[\![\check d\in\dot H]\!]^{\mathbb{B}}$ as above, which provides a complete embedding of $\mathbb{C}\upharpoonright c$ to $\mathbb{B}\upharpoonright\pi(c)$. I shall now argue that this embedding is dense below $\pi(c)$. Suppose that $a\leq \pi(c)$ in $\mathbb{B}$. Since $a$ forces $\check a\in\Gamma$ and also $\check c\in\dot H$, it must also force that there is some $d\leq c$ in $\dot H$ that forces via $\mathbb{C}$ over $\check V$ that $\check a\in\dot G$. So there must really be some $d\leq c$ forcing $\check a\in\dot G$. So $\pi(d)$, which forces $\check d\in\dot H$, will also force $\check a\in\check{\dot G}_{\dot H}=\Gamma$, and so $\pi(d)\Vdash_{\mathbb{B}}\check a\in\Gamma$, which means $\pi(d)\leq a$ in ${\mathbb{B}}$. Thus, the range of $\pi$ on $\mathbb{C}\upharpoonright c$ is dense below $\pi(c)$, and so $\pi$ is a complete dense embedding of ${\mathbb{C}}\upharpoonright c$ to ${\mathbb{B}}\upharpoonright \pi(c)$. Since these are complete Boolean algebras, this means that $\pi$ is actually an isomorphism of $\mathbb{C}\upharpoonright c$ with $\mathbb{B}\upharpoonright \pi(c)$, as desired.
Finally, note that if $d\in H$ below $c$, then since $H=\dot H_G$, it follows that $[\![\check d\in\dot H]\!]\in G$, which is to say $\pi(d)\in G$, and so $\pi$ carries $H$ to $G$ on these cones. So $\pi^{-1}$ is the isomorphism stated in theorem 1.
QED
Finally, I note that one cannot get rid of the need to restrict to cones, since it could be that $\mathbb{B}$ and $\mathbb{C}$ are the lottery sums of a common forcing notion, giving rise to $V[G]=V[H]$, together with totally different non-isomorphic forcing notions below some other incompatible conditions. So we cannot expect to prove that $\mathbb{B}\cong\mathbb{C}$, and are content to get merely that $\mathbb{B}\upharpoonright b\cong\mathbb{C}\upharpoonright c$, an isomorphism below respective conditions. |
Author: Yury Kashnitsky (@yorko). Edited by Anna Tarelina (@feuerengel), and Mikhail Korshchikov (@MS4). This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.
In this assignment, we will find out how a decision tree works in a regression task, then will build and tune classification decision trees for identifying heart diseases.
Prior to working on the assignment, you'd better check out the corresponding course material:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_graphviz
Let's consider the following one-dimensional regression problem. We need to build a function $\large a(x)$ to approximate the dependency $\large y = f(x)$ using the mean-squared error criterion: $\large \min \sum_i {(a(x_i) - f(x_i))}^2$.
X = np.linspace(-2, 2, 7)
y = X ** 3  # original dependency

plt.scatter(X, y)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
Let's take several steps to build a decision tree. In the case of a regression task, at prediction time, each leaf returns the average value of all observations in that leaf.
Let's start with a tree of depth 0, i.e. all observations placed in a single leaf.
You'll need to build a tree with only one node (also called the root) that contains all training observations (instances). What will the predictions of this tree look like for $x \in [-2, 2]$? Create an appropriate plot using pen, paper and Python if needed (but no sklearn is needed yet).
# You code here
Making first splits. Let's split the data according to the following condition $[x < 0]$. It gives us the tree of depth 1 with two leaves. To clarify, for all instances with $x \geqslant 0$ the tree will return some value, for all instances with $x < 0$ it will return another value. Let's create a similar plot for predictions of this tree.
# You code here
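If you want to check your plot, here is a minimal numerical sketch (it gives away part of the exercise): each leaf of the depth-1 tree predicts the mean target of its region.

```python
import numpy as np

X = np.linspace(-2, 2, 7)
y = X ** 3

# Each leaf of the depth-1 tree (split at x < 0) predicts the mean of its targets.
left_pred = y[X < 0].mean()    # leaf for x < 0
right_pred = y[X >= 0].mean()  # leaf for x >= 0

xx = np.linspace(-2, 2, 200)
pred = np.where(xx < 0, left_pred, right_pred)  # piecewise-constant prediction
```

Plot `pred` against `xx` together with the scatter of $(X, y)$ to see the two horizontal segments; the exact leaf means here are $-32/9$ and $8/3$.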
In the decision tree algorithm, the feature and the threshold for splitting are chosen according to some criterion. The commonly used criterion for regression is based on variance: $$\large Q(X, y, j, t) = D(X, y) - \dfrac{|X_l|}{|X|} D(X_l, y_l) - \dfrac{|X_r|}{|X|} D(X_r, y_r),$$ where $\large X$ and $\large y$ are a feature matrix and a target vector (correspondingly) for training instances in a current node, $\large X_l, y_l$ and $\large X_r, y_r$ are splits of samples $\large X, y$ into two parts w.r.t. $\large [x_j < t]$ (by the $\large j$-th feature and threshold $\large t$), $\large |X|$, $\large |X_l|$, $\large |X_r|$ (equivalently, $\large |y|$, $\large |y_l|$, $\large |y_r|$) are the sizes of the corresponding samples, and $\large D(X, y)$ is the variance of the answers $\large y$ for all instances in $\large X$: $$\large D(X, y) = \dfrac{1}{|X|} \sum_{j=1}^{|X|}\left(y_j - \dfrac{1}{|X|}\sum_{i = 1}^{|X|}y_i\right)^2$$ Here $\large y_i = y(x_i)$ is the answer for the instance $\large x_i$. The feature index $\large j$ and threshold $\large t$ are chosen to maximize the value of the criterion $\large Q(X, y, j, t)$ for each split.
In our 1D case, there's only one feature so $\large Q$ depends only on threshold $\large t$ and training data $\large X$ and $\large y$. Let's designate it $\large Q_{1d}(X, y, t)$ meaning that the criterion no longer depends on feature index $\large j$, i.e. in 1D case $\large j = 1$.
def regression_var_criterion(X, y, t):
    pass  # You code here
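For reference, one possible sketch of this function (a hint, not necessarily the intended solution; it assumes both sides of the split are non-empty, which holds for $t \in [-1.9, 1.9]$ with the seven training points above):

```python
import numpy as np

def regression_var_criterion(X, y, t):
    # Q(X, y, t) = D(X, y) - |X_l|/|X| * D(X_l, y_l) - |X_r|/|X| * D(X_r, y_r),
    # where D is the variance of the targets; np.var uses the same 1/|X|
    # convention as the formula above.
    left = X < t
    right = ~left
    n = len(X)
    return (np.var(y)
            - left.sum() / n * np.var(y[left])
            - right.sum() / n * np.var(y[right]))
```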
Create the plot of criterion $\large Q_{1d}(X, y, t)$ as a function of threshold value $t$ on the interval $\large [-1.9, 1.9]$.
# You code here
Question 1. What is the worst threshold value (to perform a split) according to the variance criterion?
**Answer options:**
For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #a2_part1_fall2019
Next, let's make a split in each of the leaf nodes.
Take your tree with the first threshold $[x<0]$. Now add a split in the left branch (where the previous split was $x < 0$) using the criterion $[x < -1.5]$, and in the right branch (where the previous split was $x \geqslant 0$) using the criterion $[x < 1.5]$. It gives us a tree of depth 2 with 7 nodes and 4 leaves. Create a plot of this tree's predictions for $x \in [-2, 2]$.
# You code here
Question 2. The tree's prediction is a piecewise-constant function, right? How many "pieces" (horizontal segments in the plot that you've just built) are there in the interval [-2, 2]?
**Answer options:**
For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #a2_part1_fall2019
Let's read the data on heart diseases. The dataset can be downloaded from the course repo from here by clicking on Download and then selecting the Save As option. If you work with Git, then the dataset is already there in data/mlbootcamp5_train.csv.
Problem
Predict presence or absence of cardiovascular disease (CVD) using the patient examination results.
Data description
There are 3 types of input features:
| Feature | Variable Type | Variable | Value Type |
|---------|---------------|----------|------------|
| Age | Objective Feature | age | int (days) |
| Height | Objective Feature | height | int (cm) |
| Weight | Objective Feature | weight | float (kg) |
| Gender | Objective Feature | gender | categorical code |
| Systolic blood pressure | Examination Feature | ap_hi | int |
| Diastolic blood pressure | Examination Feature | ap_lo | int |
| Cholesterol | Examination Feature | cholesterol | 1: normal, 2: above normal, 3: well above normal |
| Glucose | Examination Feature | gluc | 1: normal, 2: above normal, 3: well above normal |
| Smoking | Subjective Feature | smoke | binary |
| Alcohol intake | Subjective Feature | alco | binary |
| Physical activity | Subjective Feature | active | binary |
| Presence or absence of cardiovascular disease | Target Variable | cardio | binary |
All of the dataset values were collected at the moment of medical examination.
df = pd.read_csv('../../data/mlbootcamp5_train.csv', index_col='id', sep=';')
df.head()
| id | age | gender | height | weight | ap_hi | ap_lo | cholesterol | gluc | smoke | alco | active | cardio |
|----|-----|--------|--------|--------|-------|-------|-------------|------|-------|------|--------|--------|
| 0 | 18393 | 2 | 168 | 62.0 | 110 | 80 | 1 | 1 | 0 | 0 | 1 | 0 |
| 1 | 20228 | 1 | 156 | 85.0 | 140 | 90 | 3 | 1 | 0 | 0 | 1 | 1 |
| 2 | 18857 | 1 | 165 | 64.0 | 130 | 70 | 3 | 1 | 0 | 0 | 0 | 1 |
| 3 | 17623 | 2 | 169 | 82.0 | 150 | 100 | 1 | 1 | 0 | 0 | 1 | 1 |
| 4 | 17474 | 1 | 156 | 56.0 | 100 | 60 | 1 | 1 | 0 | 0 | 0 | 0 |
Transform the features cholesterol and gluc into binary (dummy) features using pandas.get_dummies. There is no need to use the original features cholesterol and gluc after encoding.
# You code here
Split data into train and holdout parts in the proportion of 7/3 using sklearn.model_selection.train_test_split with random_state=17.
# You code here
# X_train, X_valid, y_train, y_valid = ...
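A minimal sketch of such a split (synthetic stand-in data is used here so the snippet is self-contained; with the real dataset, X would be the encoded feature matrix and y the cardio column):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the prepared features and target.
X = np.arange(1000).reshape(100, 10)
y = np.arange(100) % 2

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=17)  # 7/3 proportion
```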
Train a decision tree on the dataset (X_train, y_train) with max depth equal to 3 and random_state=17. Plot this tree with sklearn.tree.export_graphviz and Graphviz. Here we need to mention that sklearn doesn't draw decision trees on its own, but it is able to output a tree in the .dot format that can be used by Graphviz for visualization.
How to plot a decision tree, alternatives: print(dot_data.getvalue()) with dot_data defined below (this can be done without pydotplus and Graphviz), then go to http://www.webgraphviz.com, paste the graph code string (digraph Tree {...}) and generate a nice picture.
There may be some trouble with Graphviz for Windows users; the error is 'GraphViz's executables not found'. To fix that, install Graphviz from here, then add the Graphviz path to your system PATH variable. You can do this manually (don't forget to restart the kernel afterwards), or just run this code:
import os

path_to_graphviz = ''  # your path to graphviz (C:\\Program Files (x86)\\Graphviz2.38\\bin\\ for example)
os.environ["PATH"] += os.pathsep + path_to_graphviz
Question 3. Which 3 features are used to make predictions in the created decision tree?
**Answer options:**
For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #a2_part1_fall2019
Make predictions for the holdout data (X_valid, y_valid) with the trained decision tree. Calculate accuracy.
# You code here
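A sketch of training and scoring (again with synthetic stand-in data; the names and shapes are illustrative, not from the assignment's dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: one informative feature, a simple learnable target.
rng = np.random.RandomState(17)
X = rng.rand(300, 5)
y = (X[:, 0] > 0.5).astype(int)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=17)

tree = DecisionTreeClassifier(max_depth=3, random_state=17)
tree.fit(X_train, y_train)
acc = accuracy_score(y_valid, tree.predict(X_valid))
```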
Set up the depth of the tree using cross-validation on the dataset (X_train, y_train) in order to increase the quality of the model. Use GridSearchCV with 5 folds. Fix random_state=17 and vary max_depth from 2 to 10.
tree_params = {'max_depth': list(range(2, 11))}
tree_grid = GridSearchCV  # You code here
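A sketch of how this grid search can be wired up (illustrative; synthetic stand-in data again, so the snippet is self-contained):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-ins for the prepared training data.
rng = np.random.RandomState(17)
X_train = rng.rand(200, 5)
y_train = (X_train[:, 0] > 0.5).astype(int)

tree_params = {'max_depth': list(range(2, 11))}
tree_grid = GridSearchCV(DecisionTreeClassifier(random_state=17),
                         tree_params, cv=5, scoring='accuracy')
tree_grid.fit(X_train, y_train)

best_depth = tree_grid.best_params_['max_depth']
```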
Draw a plot showing how mean accuracy changes with the max_depth value on cross-validation.
# You code here
Print the best value of max_depth, i.e. the one at which the mean cross-validation quality metric reaches its maximum. Also compute accuracy on the holdout data. Both can be done with the trained instance of the class GridSearchCV.
# You code here
Calculate the effect of GridSearchCV: check out the expression (acc2 - acc1) / acc1 * 100%, where acc1 and acc2 are accuracies on holdout data before and after tuning max_depth with GridSearchCV respectively.
# You code here
Question 4. Choose all correct statements.
**Answer options:**
GridSearchCV increased holdout accuracy by
GridSearchCV increased holdout accuracy by
For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #a2_part1_fall2019
Take a look at the SCORE table to estimate ten-year risk of fatal cardiovascular disease in Europe. Source paper.
Let's create new features according to this picture:
If the values of age or blood pressure don't fall into any of the intervals then all binary features will be equal to zero.
Add the smoke feature.
Build the cholesterol and gender features. Transform cholesterol into 3 binary features according to its 3 unique values (cholesterol=1, cholesterol=2 and cholesterol=3). Transform gender from 1 and 2 into 0 and 1. It is better to rename it to male (0 – woman, 1 – man). In general, this is typically done with sklearn.preprocessing.LabelEncoder, but here, with only 2 unique values, it's not necessary.
Finally, the decision tree is built using these 12 binary features (excluding all original features that we had before this feature engineering part).
Create a decision tree with the limitation max_depth=3 and train it on the whole train data. Use the DecisionTreeClassifier class with fixed random_state=17; all other arguments (except for max_depth and random_state) should be left at their default values.
Question 5. Which binary feature is the most important for heart disease detection (i.e., it is placed in the root of the tree)?
**Answer options:**
For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #a2_part1_fall2019
# You code here |
I want to find the fair value of a European cash-or-nothing option that pays \$1 if $S_t>K$ and $S$ breached the level $M<0<K$, where $S$ is the risk-neutral process $dS_t=\sigma dW_t$.
My idea is to define a first passage time $\tau$ to level $M$ (since the other condition must be met anyway to get \$1 at time $T$) $(\tau=\min\{t;S_t=M\})$ and use the reflection principle of the Brownian motion to determine the probability density of $\tau$.
Integrating both sides of the SDE, we find its solution $S_t=S_0+\sigma W_t$. Then we apply the reflection principle and the change of variable $\nu=w/\sqrt{t} \Rightarrow d\nu=dw/ \sqrt{t}$:
\begin{align*} \mathbb{P}(\tau\leq t)&=\mathbb{P}(\tau\leq t,S_t\geq M)+\mathbb{P}(\tau\leq t,S_t\leq M) \\ & = 2\mathbb{P}(\tau\leq t,S_t\geq M) \\ &=2\mathbb{P}(S_t\geq M) \\ & = 2\int_{M}^{\infty}\frac{1}{\sqrt{2\pi t}}e^{-w^2/2t}dw \\ & = 2\int_{M/\sqrt{t}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\nu^2/2}d\nu \\ & = 2-2\Phi\left(\frac{M}{\sqrt{t}}\right) \end{align*}
The fair value of a standard cash-or-nothing option is $\mathbb{E}^\mathbb{Q}[\mathbb{I}_{\{S_t>K\}}]$. In this case, I think that we need to multiply that by $\mathbb{P}(\tau\leq t)$, i.e. the price of the cash-or-nothing option with barrier is:
$$\mathbb{E}^\mathbb{Q}[\mathbb{I}_{\{S_t>K\}}]\times\mathbb{P}(\tau\leq t)$$ Do you think this is correct? |
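One way to sanity-check a candidate expression for $\mathbb{P}(\tau\leq t)$ is a quick Monte Carlo simulation. The sketch below uses illustrative parameters ($S_0=0$, $\sigma=1$, $M=-1$, $t=1$, none of which come from the question) and compares the simulated first-passage frequency to a barrier below the start with the reflection-principle value $2\,\mathbb{P}(S_t\leq M)=2\Phi\!\left(M/(\sigma\sqrt{t})\right)$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
M, t, sigma = -1.0, 1.0, 1.0       # illustrative parameters, barrier below the start
n_paths, n_steps = 5000, 1000
dt = t / n_steps

# Simulate S_u = sigma * W_u on a grid; flag paths whose running minimum reaches M.
steps = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
mc_estimate = (paths.min(axis=1) <= M).mean()

# Reflection-principle value for a barrier below the start:
# P(tau <= t) = 2 * Phi(M / (sigma * sqrt(t))), with Phi(x) = 0.5*(1 + erf(x/sqrt(2))).
phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
exact = 2 * phi(M / (sigma * sqrt(t)))
```

The discretized minimum slightly underestimates barrier hits (crossings between grid points are missed), so the Monte Carlo value sits a little below the closed form.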
AP calc final tomorrow, this was part of my review, I have no idea how to solve it. I know the answer but not how to get the answer, which is really important.
Let $t=e^x$. Then $e^{-x}=1/e^x=1/t$, so you have $$t=\frac1t$$ which means $$t^2=1.$$ This means $t=\pm1$. So it only remains to find all possibilities for $x$ such that $$e^x=1 \quad\text{or}\quad e^x=-1.$$
hint: multiply by $e^x$ on both sides. Then you get a constant on one side. Can you solve it now?
Let $x=iy$. Then $\cos(y)+i\sin(y)=\cos(y)-i\sin(y)$, so $\sin(y)=0$, which gives $y=n\pi$ for $n=0,\pm 1,\pm 2,\dots$, i.e. $x=in\pi$. So $x=0$ is only one of the many solutions.
$$e^x = e^{-x}$$
$$\ln(e^x) = \ln(e^{-x})$$
$$x = -x$$
$$x=0$$
Come on man.
A more general question: for which $z \in \mathbb C$ is $e^z=e^{-z}$ ?
Answer: first recall that the complex solutions of the equation $e^w=1$ are given by
$$w=2 k \pi i, \quad k \in \mathbb Z.$$
Hence $e^z=e^{-z}$ iff $e^{2z}=1$ iff $z= k \pi i$ for some $k \in \mathbb Z$.
For real $z$ it follows:
$e^z=e^{-z}$ iff $z=0$. |
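A quick numerical spot-check of the complex characterization above (illustrative, using Python's cmath):

```python
import cmath

# e^z = e^{-z} exactly when e^{2z} = 1, i.e. z = k*pi*i for integer k.
for k in range(-3, 4):
    z = 1j * k * cmath.pi
    assert abs(cmath.exp(z) - cmath.exp(-z)) < 1e-12

# A non-solution for comparison: the real value z = 1 gives e^1 != e^{-1}.
assert abs(cmath.exp(1) - cmath.exp(-1)) > 1
```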
Does there exist a topological space $X$ such that $X \ncong Y\times Y$ for every topological space $Y$ but $$X\times X \times X \cong Z\times Z$$ for some topological space $Z$ ?
Here $\cong$ means homeomorphic.
Yes.
Here are two pieces of input data.
1) Let $M$ be any noncompact space. Beyond the usual invariants like homology and homotopy groups, there is a further invariant of the homeomorphism type (or proper homotopy type) of $M$, called the fundamental group at infinity: if you choose a proper (inverse image of compact sets is compact) map $\gamma: [0,\infty) \to M$, and let $K_n$ be an increasing compact exhaustion of $M$ (that is, $K_n \subset K_{n+1}$ and $\bigcup K_n = M$) so that $\gamma(t) \not \in K_n$ for $t \in [n, n+1]$, then one may write the inverse limit $$\pi_1^\infty(M,\gamma) := \lim \pi_1(M - K_n, \gamma(n));$$ strictly speaking we have restriction maps $$\pi_1(M - K_n, \gamma(n)) \to \pi_1(M - K_{n-1}, \gamma(n)),$$ but we may use the path $\gamma$ from $\gamma(n)$ to $\gamma(n-1)$ to get a natural isomorphism $\pi_1(M - K_{n-1}, \gamma(n)) \to \pi_1(M - K_{n-1}, \gamma(n-1))$ so that we may take the inverse limit over a sequence of maps as above. This is essentially independent of the choice of sequence $K_n$. It only depends on the ray $\gamma$ up to a proper homotopy.
(Similarly, there is a notion of the set of ends of a space - this is the inverse limit over $\pi_0(M - K_n)$. This is the set we choose $\gamma$ from, in the sense that we choose a connected component for the usual fundamental group.)
2) If $M$ is a smooth, connected, noncompact manifold of dimension $n \geq 5$, a theorem of Stallings (the piecewise linear structure of Euclidean space, here) says that if $M$ is both contractible and $\pi_1^\infty(M,\gamma) = 0$ for the unique end $\gamma$ of $M$, then $M \cong \Bbb R^n$.
Our strategy, therefore, is to find a noncompact, contractible smooth manifold $M$ of dimension $n \geq 3$ with nontrivial fundamental group at infinity. We will argue that $\pi_1^\infty(M^k, \gamma) = 0$ for $k>1$, and hence that $M^k \cong \Bbb R^{nk}$. Because you asked for a square root of $3n$, we should take $n$ even. At the end we will specify $n = 4$.
Here is a helpful tool in constructing such noncompact manifolds. If $M$ is a compact manifold with boundary, then its ends of its interior $M^\circ$ are in bijection with $\pi_0(\partial M)$, and the fundamental group at infinity is equal to $\pi_1(\partial M)$. (Take the ray to extend to a map $[0, \infty] \to M$, and let the basepoint in $\partial M$ be $\gamma(\infty)$; if $[0,1) \times \partial M \subset M$ is a collar of the boundary, let the compact exhaustion be the complement of $[0, 1/n) \times \partial M$.)
In this situation above, the product $M \times M$ is a compact topological manifold with boundary (it has "corners", but these are topologically the same as boundary points). The boundary is homeomorphic to $(\partial M \times M) \cup_{\partial M \times \partial M} (M \times \partial M)$. If $\partial M$ is connected and $M$ is simply connected, the Seifert van Kampen theorem dictates that the fundamental group of the result is $$\pi_1\partial M *_{\pi_1 \partial M \times \pi_1 \partial M} \pi_1 \partial M = 0.$$
Therefore, if $M$ is simply connected with connected boundary, $M \times M$ has simply connected boundary; and hence $(M \times M)^\circ = M^\circ \times M^\circ$.
What this proves, altogether, is that if $M$ is a compact, contractible manifold of dimension $n \geq 3$, and $\pi_1(\partial M) \neq 0$, then $M$ is not homeomorphic to $\Bbb R^n$, but $M^k$ is homeomorphic to $\Bbb R^{nk}$ for any $k > 1$. What remains is twofold: to show that such $M$ exist; and to find one that is itself not a square.
First, existence. In dimension 3 there are none of these of interest: a compact contractible 3-manifold is homeomorphic to the 3-ball by the solution of the Poincare conjecture. In dimension 4 these are called Mazur manifolds and come in great supply. In dimension $n \geq 5$, if $\Sigma$ be an $(n-1)$-manifold which has $H_*(\Sigma;\Bbb Z) \cong H_*(S^{n-1};\Bbb Z)$, it is a theorem of Kervaire that $\Sigma$ bounds a contractible manifold $M$. If $\pi_1 \Sigma \neq 0$ (which is equivalent to saying "$\Sigma$ is not homeomorphic to the $(n-1)$-sphere", by the higher-dimensional Poincare conjecture), then this gives an example of what we want. (In fact, for $n \geq 6$, Kervaire proved that you can even construct such `homology $(n-1)$-spheres' with any specified finitely presented fundamental group $\pi$, modulo the conditions $H_1(\pi) = H_2(\pi) = 0$.) So we see there is any such compact manifold $M$, and hence noncompact manifold $M^\circ$, for any dimension $n \geq 4$.
If $M^\circ$ were a product $X \times X$ of two spaces, then first, observe $X$ would need to be contractible; second, it is a homology manifold (this is a local condition in terms of the relative homology of $(X, X - p)$ at all points $p$ which ensures duality properties) of dimension $\dim M/2$. A homology manifold of dimension $\leq 2$ is a manifold (this seems to be well-known, but the only reference I could find was Theorem 16.32 in Bredon's sheaf theory), so let's take $\dim M = 4$ here; then $X$ is a contractible surface, so the classification of surfaces implies $X \cong \Bbb R^2$ (see e.g. here). This contradicts $0 = \pi_1^\infty(\Bbb R^4) \cong \pi_1^\infty(M^\circ) \neq 0$, and so this is impossible.
In fact, with some more work you can show that this $M$ may not even be decomposed into a product at all.
EDIT: Thanks to Moishe Cohen's answer here we can prove that if $M$ is a compact contractible manifold of dimension $n \geq 4$ for which $\pi_1 \partial M \neq 0$, then $M$ does not admit a square root. For if it did, say $X \times X = M$, the space $X$ would be a contractible homology manifold of dimension at least 2; by Moishe's answer, it must have one end. Using the decomposition $\text{End}(X \times X) \sim \text{End}(X) * \text{End}(X)$ of end-spaces of a product, we see that $\pi_1^\infty(X \times X) = \pi_1^\infty(X) *_{\pi_1^\infty(X) \times \pi_1^\infty(X)} \pi_1^\infty(X) = 0$, exactly as in the case of manifolds with boundary. Thus $M$ admits no square root.
This method thus produces some $M$ that admits no square root but whose $n$th power admits a $k$th root, for any pair of positive integers $(n,k)$ with $n > 1$. It has no power to find spaces for which $X^j$ is similarly un-rootable for $j$ in some range; it is unique to $j=1$ that this works. |
TL;DR Your Maxwell–Boltzmann diagram up there is not sufficient to describe the variation of rate with $E_\mathrm{a}$. Simply evaluating the shaded area alone does not reproduce the exponential part of the rate constant correctly, and therefore the shaded area should not be taken as a quantitative measure of the rate (only a qualitative one).
There is a subtle issue with the way you've presented your drawing. However, we'll come to that slightly later. First, let's establish that the "proportion of molecules with sufficient energy to react" is given by
$$P(\varepsilon) = \exp \left(-\frac{\varepsilon}{kT}\right) \tag{1}$$
Therefore, for a reaction $\ce{X <=> Y}$ with uncatalysed forward activation energy $E_\mathrm{f}$ and uncatalysed backward activation energy $E_\mathrm{b}$, the rate constants are given by
$$k_\mathrm{f,uncat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f}}{kT}\right) \tag{2} $$
$$k_\mathrm{b,uncat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b}}{kT}\right) \tag{3} $$
The equilibrium constant of this reaction is given by
$$K_\mathrm{uncat} = \frac{k_\mathrm{f,uncat}}{k_\mathrm{b,uncat}} = \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{4}$$
As you have noted, the change in activation energy due to the catalyst is the same. I would be a bit careful with using "$\mathrm{d}E$" as the notation for this, since $\mathrm{d}$ implies an infinitesimal change, and if the change is infinitesimal, your catalyst isn't much of a catalyst. So, I'm going to use $\Delta E$. We then have
$$k_\mathrm{f,cat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f} - \Delta E}{kT}\right) \tag{5} $$
$$k_\mathrm{b,cat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b} - \Delta E}{kT}\right) \tag{6} $$
and the new equilibrium constant is
$$\begin{align}K_\mathrm{cat} = \frac{k_\mathrm{f,cat}}{k_\mathrm{b,cat}} &= \frac{A_\mathrm{f}\exp[-(E_\mathrm{f} - \Delta E)/kT]}{A_\mathrm{b}\exp[-(E_\mathrm{b} - \Delta E)/kT]} \tag{7} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \frac{\exp(\Delta E/kT)}{\exp(\Delta E/kT)} \tag{8} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{9}\end{align}$$
Equations $(9)$ and $(4)$ are the same, so there is no change in the equilibrium constant.
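As a quick numerical sanity check of eqs. $(2)$–$(9)$, here is a short Python sketch; the pre-exponential factors, barriers and $\Delta E$ below are made-up illustrative numbers (not data for any real reaction):

```python
import math

kT = 1.0                      # work in units where kT = 1
A_f, A_b = 2.0e13, 5.0e12     # hypothetical pre-exponential factors
E_f, E_b = 8.0, 11.0          # hypothetical activation energies (in units of kT)
dE = 3.5                      # catalytic lowering of BOTH barriers

def k(A, E):
    """Arrhenius rate constant k = A exp(-E / kT)."""
    return A * math.exp(-E / kT)

K_uncat = k(A_f, E_f) / k(A_b, E_b)
K_cat = k(A_f, E_f - dE) / k(A_b, E_b - dE)

# each rate constant grows by the same factor exp(dE/kT) ...
speedup = k(A_f, E_f - dE) / k(A_f, E_f)
# ... so the equilibrium constant is unchanged: K_cat == K_uncat
```

Both rate constants are multiplied by $\exp(\Delta E/kT)$, so the ratio, and hence $K$, is untouched.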
The question then arises as to how eq. $(1)$ is obtained. The simplest way is to invoke a Boltzmann distribution, which almost by definition gives the desired form. However, since you have a
Maxwell–Boltzmann curve, I guess I should talk about it a bit more. The fraction of molecules with energy $E_\mathrm{a}$ or greater is simply the shaded area under the curve, i.e. one can obtain it by integrating the curve over the desired range.
$$P(\varepsilon) = \int_{E_\mathrm{a}}^\infty f(\varepsilon)\,\mathrm{d}\varepsilon \tag{10}$$
where the Maxwell–Boltzmann distribution of energies is given by (see Wikipedia)
$$f(\varepsilon) = \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \tag{11}$$
At first glance, we would expect this to be directly proportional to the exponential part of the rate constant, i.e. $\exp(-E_\mathrm{a}/kT)$. Alas, it is not that simple. If you try to work out the integral
$$\int_{E_\mathrm{a}}^{\infty} \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \,\mathrm{d}\varepsilon \tag{12}$$
you don't get anything
close to the form of $\exp(-E_\mathrm{a}/kT)$. Instead, you get some "error function" rubbish, and some nasty square roots and exponentials. (You can use WolframAlpha to verify this.)
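To make this concrete: the integral in eq. $(12)$ has the closed form $P(E_\mathrm{a}) = \operatorname{erfc}(a) + (2a/\sqrt{\pi})\,e^{-a^2}$ with $a = \sqrt{E_\mathrm{a}/kT}$, which is exactly the "error function rubbish" above. A small Python check (with $kT = 1$) confirms that this area is not proportional to $\exp(-E_\mathrm{a}/kT)$:

```python
import math

kT = 1.0

def shaded_area(Ea):
    """Exact value of the integral of the Maxwell-Boltzmann energy
    distribution, eq. (11), from Ea to infinity (with kT = 1)."""
    a = math.sqrt(Ea / kT)
    return math.erfc(a) + 2 * a / math.sqrt(math.pi) * math.exp(-a * a)

# If the shaded area were proportional to exp(-Ea/kT), this ratio
# would equal exp(-5); numerically it misses by tens of percent.
ratio_area = shaded_area(10.0) / shaded_area(5.0)
ratio_boltzmann = math.exp(-5.0 / kT)
```

So the shaded area scales differently with $E_\mathrm{a}$ than the Arrhenius exponential does, which is the whole point of the discussion below.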
Why is this so? Well, it turns out that there are other terms that also depend on $\varepsilon$ and therefore need to go inside that integral (they aren't constants and can't be taken out).
The simplest example is that faster molecules tend to collide more often, so even though the right-hand tail of the diagram seems to contribute very little to the "proportion of molecules with sufficient energy", it actually contributes more significantly to the overall
rate because these molecules collide more often. In collision theory this is described using the "relative velocity" of the particles $v_\mathrm{rel}$.
There is also another complication: in the Maxwell–Boltzmann distribution, the direction of motion of the particles is not accounted for. (For more insight please refer to Levine, Physical Chemistry 6th ed., p 467.) Therefore, there has to be yet another term that takes into account the direction of movement of the particles. The idea is that a head-on collision between two molecules is more likely to overcome the activation barrier than is a $90^\circ$ collision. The term that compensates for this is the "collision cross-section" $\sigma$.
If you go through the maths (and I don't really intend to type it out here, it's rather long, but I will give some references) then you will find that at the end you will recover the form $\exp(-\varepsilon/kT)$. Once you have arrived at this, it's very straightforward to see that the increases in rate of both the forward and backward reaction cancel each other out.
Now, as for the promised references: Pilling and Seakins's Reaction Kinetics (pp 61–62) gives a short outline of the proof. Atkins's Physical Chemistry 10th ed. has a slightly longer proof on pp 883–884.
You might find it useful to prove the following formula for the wedge product:
$$ (\omega \wedge \eta)(v_1,v_2,v_3,v_4) = \sum_{\sigma \in S_{2,2}} \operatorname{sign}(\sigma) \omega(v_{\sigma(1)}, v_{\sigma(2)})\eta (v_{\sigma(3)},v_{\sigma(4)}) $$
where $S_{2,2}$ denotes the $(2,2)$-shuffles which are the permutations $\sigma$ of $\{1,2,3,4\}$ that satisfy $\sigma(1) < \sigma(2)$ and $\sigma(3) < \sigma(4)$. The advantage of this formula (which can be proved by a combinatorical argument and generalizes for higher order forms) is that it involves working with only $|S_{2,2}| = {4 \choose 2} = 6$ permutations and not $4! = 24$ permutations.
The relevant permutations in your case are
$$ \operatorname{id}, (23), (243), (123), (1243), (13)(24) $$
and so
$$ (\omega \wedge \omega)(v_1,v_2,v_3,v_4) = \omega(v_1,v_2)\omega(v_3,v_4) - \omega(v_1,v_3)\omega(v_2,v_4) + \omega(v_1,v_4)\omega(v_2,v_3) \\+ \omega(v_2,v_3)\omega(v_1,v_4) - \omega(v_2,v_4)\omega(v_1,v_3) + \omega(v_3,v_4)\omega(v_1,v_2) \\= 2\omega(v_1,v_2)\omega(v_3,v_4) - 2\omega(v_1,v_3)\omega(v_2,v_4) + 2\omega(v_2,v_3)\omega(v_1,v_4).$$
Alternatively, you can brute-force your way through all $4! = 24$ permutations and use the fact that $\omega$ is alternating to get the same expression. More explicitly, you have
$$ (\omega \wedge \eta)(v_1,v_2,v_3,v_4) = \frac{4!}{2!\cdot 2!} \operatorname{Alt}(\omega \otimes \eta)(v_1, v_2, v_3, v_4) \\ = \frac{1}{2! \cdot 2!} \sum_{\sigma \in S_{4}} \operatorname{sign}(\sigma) \omega(v_{\sigma(1)}, v_{\sigma(2)})\eta (v_{\sigma(3)},v_{\sigma(4)}). $$
I have already computed the right-hand side for six permutations. Now show that each of the six permutations above comes with three other permutations that give the same value. For example, the permutations
$$ \operatorname{id}, (12)(34), (12), (34) $$
fix $\{ 1,2 \}$ and $\{ 3, 4 \}$ and give us
$$ \omega(v_1,v_2)\omega(v_3,v_4) + \omega(v_2,v_1)\omega(v_4,v_3) - \omega(v_2,v_1)\omega(v_3,v_4) - \omega(v_1,v_2)\omega(v_4,v_3) = 4 \omega(v_1,v_2) \omega(v_3, v_4) $$
and similarly for each of the five other permutations. |
I have a function $$f(x)= \begin{cases} x & \text{if } x \ge0 \\ x^2 & \text{if } x<0 \end{cases} $$ and want to show that it is continuous but not differentiable at $x=0$
Now, to show that a function is differentiable, we show that the limit $$f'(x_0)= \lim_{x \to x_0}\frac{f(x)-f(x_0)}{x-x_0}$$ exists, but I am always confused with such functions. Do I have to choose $x$ or $x^2$?
Taking the comments into consideration: a function is differentiable at $0$ if the difference quotient $\frac{f(x)-f(0)}{x-0}$ approaches a limit as $x \to 0$, and this limit exists only if the left- and right-hand limits are equal. So $$\lim_{x \to 0^-} \frac{f(x)-f(0)}{x-0}= \lim_{x \to 0^-}\frac{x^2}{x}=\lim_{x \to 0^-}x=0$$ and $$\lim_{x \to 0^+} \frac{f(x)-f(0)}{x-0}= \lim_{x \to 0^+}\frac{x}{x}=1,$$ thus they are not equal, which means $f$ is not differentiable at $0$.
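A tiny numerical sketch of the two one-sided difference quotients at $0$ (just an illustration of the limits, not a proof):

```python
def f(x):
    # the piecewise function from the question
    return x if x >= 0 else x * x

h = 1e-8
right = (f(h) - f(0)) / h        # difference quotient from the right
left = (f(-h) - f(0)) / (-h)     # equals (-h)^2 / (-h) = -h

# right -> 1 while left -> 0, so the two-sided limit does not exist
```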
For the continuity part I am considering the definition: $\forall \varepsilon >0 \ \exists \delta>0$ s.t. $| f(x)-f(x_0)| < \varepsilon$ if $| x-x_0 | < \delta$.
This approach picks up on the comment I made. After that reading, I came up with an alternate way to prove this.
In general, the sum of two periodic functions is not periodic. However, when the periods of the summands are
commensurable, their sum will also be periodic. Take a look at this question. My solution exploits this fact and the idea that changing the frequency of a wave by a tiny amount will still approximate a wave fairly well, as long as we don't look too far ahead. As an example, $\cos 2.0001x$ approximates $\cos 2x$ really well for a long time. In fact, you need to look at $x$ greater than $1000$ to see the former finally miss the first decimal place in the approximation.
These two facts imply that we can create a sequence of
periodic functions $f_n$ such that $f_n$ converges to $f$ compactly. Look at this article. We can also choose the $f_n$ to only improve their approximation on each interval of the form $[-n, n]$. See the animation here.
Now, we know that $f$ attains a non-zero value somewhere, say at $x_0$ (since $\sin$ functions of different frequency are linearly independent). Call this value $a$. Now suppose we had that $f$ approached $0$ as $x\rightarrow\infty$. This means that there is an $M\in\Bbb N$ such that $|f(x)|<|a|/2$ for all $x>M$. Now since the $f_n$ approximate $f$ compactly there is eventually an $f_N$ such that $\sup|f(x)-f_k(x)|<|a|/2$ for all $k\geq N$ and $x<M+200$ (the 200 is arbitrary; just need something bigger than 0). But $f_N$ is periodic. This means its amplitude is less than $|a|/2 $
everywhere. This contradicts the fact that our earlier $f_n$ approximated $f$ well around $x_0$.
Basically, our early terms of the sequence $f_n$ had a 'large' amplitude. If $f$ approached zero, our later terms would have to decrease in amplitude, and this throws off the fact we chose the $f_n$ to only improve their approximation as we went out further.
EDIT
Knowing what my $f_m$ (re-indexed $n$ to $m$) are would help. Each $f_m$ is of the form$$\sum_{j=1}^k a_{j}\sin(\alpha_{j_m}\pi n)$$where the $\alpha_{j_m}$ are chosen so that $\alpha_{j_m}$ is 'sufficiently close' to $\alpha_j$ and we have that $\alpha_{1_m}, \dots, \alpha_{k_m}$ are commensurable. |
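One concrete way to pick such $\alpha_{j_m}$ (a sketch, not necessarily the exact construction intended above) is to round every frequency to a rational with one common denominator $q$: then every term has period dividing $2q$, so the sum is $2q$-periodic, while for moderate $x$ it still tracks the original sum:

```python
import math
from fractions import Fraction

alphas = [math.sqrt(2), math.sqrt(3)]    # incommensurable frequencies
coeffs = [1.0, 1.0]

q = 10_000                               # common denominator
rats = [Fraction(round(al * q), q) for al in alphas]

def f(x):                                # the original sum
    return sum(c * math.sin(al * math.pi * x) for c, al in zip(coeffs, alphas))

def f_m(x):                              # the commensurable approximant
    return sum(c * math.sin(float(r) * math.pi * x) for c, r in zip(coeffs, rats))

period = 2 * q   # sin((p/q) pi (x + 2q)) = sin((p/q) pi x + 2 p pi)
```

Each rounding error is at most $1/(2q)$, so on an interval $[-n, n]$ the phase error is at most $\pi n/(2q)$; shrinking it just means taking a larger common denominator.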
Let $\Omega\subset \mathbb{R}^d$, $d\in \{2,3\}$ be an open bounded polygonal/polyhedral set. Suppose I want to solve the following pde
\begin{align*} \vec{q}+\vec{\nabla}u &=0\,&x&\in \Omega\\ \vec{\nabla} \cdot \vec{q} &= f\, \quad& x&\in \Omega\\ u&=0\,& x&\in \partial\Omega_{\mathrm{dir}}\\ \vec{q}\cdot \vec{\eta}&= 0\,& x&\in \partial\Omega_{\mathrm{neu}} \end{align*}
To avoid complications assume that the measure of the Dirichlet boundary is nonzero.
Suppose further that there is a triangulation of $\Omega$, $T$, such that the source function $f$ is piecewise polynomial with respect to $T$. Suppose that $T$ has no "hanging nodes". Suppose I want to approximate the solution to the pde using the same triangulation $T$.
If I choose polynomial trial/test spaces sufficiently rich, can I generate the exact solution to the pde? For example, if $f$ is only piecewise constant, can I choose the first Raviart–Thomas space that contains (continuous) piecewise affine functions as the trial/test space for $\vec{q}$, and an appropriate trial/test space for $u$, and generate the exact solution? This result is true for $d=1$, but I haven't been able to prove it to myself for $d=2$ or higher.
Since this seems true and I haven't been able to prove it, I am looking for a counter example, a proof, or at least a sketch of a proof. I appreciate any ideas. I am mainly interested in proofs/ideas that allow for domains that are not convex. |
In analogy to a characterisation of operator matrices generating $C_0$-semigroups due to R. Nagel, we give conditions on its entries in order that a $2\times 2$ operator matrix generates a cosine operator function. We apply this to systems of wave equations, to second order initial-boundary value problems, and to overdamped wave equ...
Published in Mathematische Zeitschrift
In their study of diophantine approximation of the exponential function in connection with Sondow’s Conjecture, Berndt et al. (Adv Math 348:1298–1331, 2013) have constructed certain p-adic functions arising from the sequence of convergents to the continued fraction of e. We solve an open problem posed in [2], more precisely we show that those p-adi...
We show that the distributions occurring in the geometric and spectral side of the twisted Arthur-Selberg trace formula extend to non-compactly supported test functions. The geometric assertion is modulo a hypothesis on root systems proven when the group is split. The result extends the work of Finis-Lapid (and M\"uller, spectral side) to the twist...
Published in Mathematische Zeitschrift
We obtain a quantitative estimate on the generalised index of translators for the mean curvature flow with bounded norm of the second fundamental form. The estimate involves the dimension of the space of weighted square integrable f-harmonic 1-forms. By the adaptation to the weighted setting of Li–Tam theory developed in previous works, this yields...
Published in Mathematische Zeitschrift
A one-parameter family of coupled flows depending on a parameter $\kappa > 0$ is introduced which reduce...
For an arbitrary quiver Q = (I, Ω) and dimension vector d ∈ N I we define the dimension of absolutely cuspidal functions on the moduli stacks of representations of dimension d of a quiver Q over a finite field Fq, and prove that it is a polynomial in q, which we conjecture to be positive and integral. We obtain a closed formula for these dimensions...
We provide a characterization of Symplectic Grassmannians in terms of their Varieties of Minimal Rational Tangents. |
For a while now, I have been struggling to find a source explaining the complexity of the following 2 elementary operations
1. Calculating the $n^\text{th}$ root of an integer $x$: $$ \sqrt[\leftroot{-3}\uproot{3}n]{x} $$
2. Checking whether $$ a \equiv b \pmod{r} $$ holds, for $a,b,r \in \mathbb{N}$.
I now think I may have come close to finding an answer to the second of these problems, but I am not certain.
On pages 10-11 of the book Édouard Lucas and Primality Testing (by Hugh C. Williams), the following statement is made:
For any $m$ and $a,b$ with $0 < |a|, |b| < m$ calculating $$ r \equiv ab \mod m $$ can be done in $O((\log m)^2)$ operations.
Am I to assume, based on this, that operation 2. has complexity $O((\log m)^2)$?
I also suspect that operation $1$ has complexity $O((\log m)^2)$, but I have no justification for this. Could someone please also tell me whether this is correct? |
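For what it's worth, operation 2 is a single subtraction and one division with remainder, which is $O((\log m)^2)$ with schoolbook arithmetic (faster with better multiplication algorithms). For operation 1, an integer $n$th root can be computed by binary search in $O(\log x)$ steps, though each step involves a big-integer power, so the total cost is somewhat more than $(\log x)^2$; a hedged sketch:

```python
def iroot(x, n):
    """Largest integer r with r**n <= x, by binary search on r."""
    if x < 0 or n < 1:
        raise ValueError("need x >= 0 and n >= 1")
    # the true root satisfies r < 2^(bit_length(x)/n + 1)
    lo, hi = 0, 1 << (x.bit_length() // n + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_congruent(a, b, r):
    """Check a = b (mod r) with a single remainder computation."""
    return (a - b) % r == 0
```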
I am thinking of the following situation:
The Lie algebra $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{C})$, and $V(1)$ is the unique 2-dimensional irreducible representation (the defining representation). Let $U$ be the universal enveloping algebra of $\mathfrak{g}$.
I pick a basis $v_1, v_2$ for $V(1)$, where $H v_1 = v_1$ and $Hv_2 = -v_2$, and $v^1, v^2$ is the dual basis. (So $Ev_1 = 0$, $Fv_2 = 0$, $E v_2= v_1$, $Fv_1 = v_2$.)
$c^i_j(u) = v^i( u. v_j)$, for $u \in U$, is a matrix coefficient for the Hopf-algebra $H$.
I need to verify that $c^1_1 c^2_2 - c^1_2 c^2_1 = 1$. (In order to show that the algebra of matrix coefficients is isomorphic to the coordinate ring of $SL_2(\mathbb{C})$.)
Here is my attempt:
$c^1_1 c^2_2(u) = m \circ (v^1 \otimes v^2) [ \Delta(u) (v_1 \otimes v_2)] = m \circ (v^1 \otimes v^2)[ ( u \otimes 1 + 1 \otimes u). (v_1 \otimes v_2)] = v^1(uv_1) v^2(v_2) + v^1(v_1) v^2 (u.v_2) = v^1( u. v_1) + v^2 (u. v_2)$
Plugging in $u = E$, $F$ or $H$ from the Lie algebra gives $0$, given my choice of basis above. Plugging in $1$ gives $2$, so $c^1_1 c^2_2 = 2$. (By which I mean, 2 times the unit of $U^*$.)
$c^1_2 c^2_1(u) = m \circ (v^1 \otimes v^2) [ \Delta(u) (v_2 \otimes v_1)] = m \circ (v^1 \otimes v^2)[ ( u \otimes 1 + 1 \otimes u). (v_2 \otimes v_1)] = v^1(uv_2) v^2(v_1) + v^1(v_2) v^2 (u.v_1) = 0$
(since $v^i(v_j) = \delta^i_j$).
Clearly, the difference is not $1$! Please tell me what I am doing wrong.
Thank you for reading! |
The system in the title has a damping parameter $\lambda > 0$, and the matrix $A$ is sparse and rectangular, with a structure I can exploit to solve matrix–vector products very fast. My current solver, LSMR, solves the normal equations $(A^TA + \lambda I) x = A^T b$ associated with the regularized problem $\min \|Ax - b\|^2 + \lambda\|x\|^2$.
Although each iteration is computed very fast, the algorithm uses the maximum number of iterations. I know this can be fixed with a good preconditioner. This is where lies my problem.
$A^TA + \lambda I$ is an SPD matrix, which is a good property to have. On the other hand, this matrix is no longer sparse. I don't know how to choose and use a preconditioner for this dense matrix. I suppose this has already been worked out by someone.
I want to know how to proceed in this case and, if possible, how to use the sparsity of $A$ to obtain a good preconditioner. What are the common approaches?
EDIT: In order to be more complete, I'll briefly describe how the matrix $A$ is obtained. My problem at hand consists in minimizing the error associated to a low tensor rank-$r$ approximation. You can consider a tensor $T$ as being a multidimensional array. In this case, a 3-D multidimensional array with coordinates $T_{ijk}$, for $1 \leq i \leq m, 1 \leq j \leq n, 1 \leq k \leq p$. I am considering an approximation $\tilde{T}_{ijk} = \sum_{\ell=1}^r X_{i \ell} \cdot Y_{j \ell} \cdot Z_{k \ell}$. The error in this approximation is given by$$ \frac{1}{2} \sum_{i,j,k} \left( T_{ijk} - \tilde{T}_{ijk} \right)^2 = \frac{1}{2} \sum_{i,j,k} res_{i,j,k} (X,Y,Z)^2,$$where $X, Y, Z$ lists all components $X_{i \ell}, Y_{j \ell}, Z_{k \ell}$ and $res_{ijk}$ is the residual of the component with index $i,j,k$.
To find the components of $\tilde{T}$ which minimize the error above, it is of interest to find the Jacobian matrix of $res = (res_{111}, res_{112}, \ldots, res_{mnp})$. We have the formulas below for the partial derivatives:
$$\frac{\partial res_{ijk}}{\partial X_{I \ell}} = \left\{ \begin{array}{c} - Y_{j \ell} Z_{k \ell},\quad \text{if } i = I,\\ 0, \quad \text{otherwise} \end{array}\right.$$
$$\frac{\partial res_{ijk}}{\partial Y_{J \ell}} = \left\{ \begin{array}{c} - X_{i \ell} Z_{k \ell},\quad \text{if } j = J,\\ 0, \quad \text{otherwise} \end{array}\right.$$
$$\frac{\partial res_{ijk}}{\partial Z_{K \ell}} = \left\{ \begin{array}{c} - X_{i \ell} Y_{j \ell},\quad \text{if } k = K,\\ 0, \quad \text{otherwise} \end{array}\right.$$
This will give a sparse matrix, which becomes more sparse as we increase the dimensions. The structure follows a nested for loop pattern, from left to right. The figure below shows this structure for $m = 3, n = 5, p = 7, r = 10$. I hope this can be useful for someone to spot the "right" preconditioner, because at the moment I really don't know how to proceed. Keep in mind that I'm trying to use this structure to find a preconditioner for $A^TA + \lambda I$, where $A$ is this sparse matrix just described.
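To make the structure concrete, here is a small sketch (with hypothetical sizes) that assembles this Jacobian from the partial derivatives above and builds the cheapest sparsity-exploiting preconditioner, a Jacobi (diagonal) one: the diagonal of $A^TA + \lambda I$ equals the squared column norms of $A$ plus $\lambda$, so it never requires forming $A^TA$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, r = 3, 5, 7, 4
X = rng.standard_normal((m, r))
Y = rng.standard_normal((n, r))
Z = rng.standard_normal((p, r))

# columns ordered as: all X entries (row-major), then Y, then Z
J = np.zeros((m * n * p, (m + n + p) * r))
for i in range(m):
    for j in range(n):
        for k in range(p):
            row = (i * n + j) * p + k          # nested-loop row ordering
            for l in range(r):
                J[row, i * r + l] = -Y[j, l] * Z[k, l]
                J[row, m * r + j * r + l] = -X[i, l] * Z[k, l]
                J[row, (m + n) * r + k * r + l] = -X[i, l] * Y[j, l]

lam = 0.1
# Jacobi preconditioner for J^T J + lam I without forming J^T J:
# its diagonal is the vector of squared column norms of J, plus lam
d = (J ** 2).sum(axis=0) + lam
```

In practice one would keep $J$ in a sparse format and accumulate the column norms row by row; the dense array here is only for a transparent illustration.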
I believe that there is a good alternative that benefits from the geometry of the parameter space and completely eliminates the need for constrained optimization. If you explicitly wanted to make use of Lagrangians, I will definitely not be answering the question, but I thought it might be worthwhile to consider the perspective I will describe. In particular, the approach I will be presenting uses Riemannian manifold optimization tools.
Note that the parameter of interest $\mathbf{X}\in \mathbb{R}^{n \times k}$ is simply an element of the Stiefel Manifold $\mathcal{M}\equiv V_k(\mathbb{R}^n)$ (the set of all orthonormal $k$-frames in $\mathbb{R}^n$):$$\begin{align}\mathbf{X} &\in V_k(\mathbb{R}^n) \subset \mathbb{R}^{n\times k}\\V_k(\mathbb{R}^n) = \{\mathbf{X}&\in\mathbb{R}^{n\times k} : \mathbf{X}^\top\mathbf{X}=\mathbf{I}\}.\end{align}$$
With this definition, we can benefit from the Riemannian structure of $V_k(\mathbb{R}^n)$ and optimize the energy with any
unconstrained optimization algorithm, such as the Riemannian-gradient descent:
\begin{align} &\text{ While } \mathbf{X}_{k} \text{ does not sufficiently minimize } f \\ &\text{$\quad$- Pick a gradient related descent direction }\boldsymbol{\eta}_k\in T_{\mathbf{X}_k}\mathcal{M}\\ &\text{$\quad$- Choose a retraction } R_{\mathbf{X}_k}:T_{\mathbf{X}_k}\mathcal{M}\rightarrow \mathcal{M}.\\ &\text{$\quad$- Choose a step length } \tau_k\in \mathbb{R}.\\ &\text{$\quad$- Set } \mathbf{X}_{k+1}\gets R_{\mathbf{X}_k}(\tau_k\boldsymbol{\eta}_k).\\ &\text{$\quad$- } k\gets k+1\end{align}
Here $f$ denotes the function we optimize:$$f = \frac{1}{2}\|\mathbf{X}\mathbf{X}^\top - \mathbf{A} \|_\mathcal{F}^2 = \frac{1}{2}\text{tr}\big((\mathbf{X}\mathbf{X}^\top-\mathbf{A})(\mathbf{X}\mathbf{X}^\top-\mathbf{A})^\top\big)$$whose gradient reads:$$\nabla_{\mathbf{X}}f = (\mathbf{X} \mathbf{X}^\top -\mathbf{A})\mathbf{X}+(\mathbf{X}\mathbf{X}^\top -\mathbf{A}^\top )\mathbf{X}.$$
We generally pick the gradient related direction $\boldsymbol{\eta}$ as the projection of the Euclidean gradient onto the tangent space of the manifold (or for a broad class of manifolds including the Stiefel manifold, we can instead use the logarithmic-map):
$$\boldsymbol{\eta}\triangleq -\operatorname{grad}_{\mathbf{X}} f = \Pi_{\mathbf{X}} \Big( -\nabla_{\mathbf{X}} f \Big)$$
For now let us assume that $\tau_k$ is fixed, e.g. $\tau_k=0.1$. The retraction operator $R(\cdot)$ of the Stiefel manifold is defined analytically; in other words, the true exponential map is available. More details on that are given in this math-se post and of course in the seminal paper of Edelman et al.:
Edelman, Alan, Tomás A. Arias, and Steven T. Smith. "The geometry of
algorithms with orthogonality constraints." SIAM journal on Matrix
Analysis and Applications 20.2 (1998): 303-353.
All the Riemannian operators for the Stiefel manifold are included in toolboxes such as Manopt or ROPTLIB. Finally, we obtain the new value $\mathbf{X}_{k+1}$ by:\begin{align}\mathbf{X}_{k+1} = R_{\mathbf{X}_k}(\tau_k \boldsymbol{\eta}_k)\end{align}Note that $\mathbf{X}_{k+1}$ will be an orthonormal $k$-frame obeying the aforementioned constraint naturally. Riemannian gradient descent is the simplest choice of Riemannian optimization and there are many others, such as Riemannian LBFGS or Riemannian trust-region methods. Many of those choices vary in how they compute the step size $\tau_k$, usually by a form of line search.
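For illustration, here is a minimal self-contained sketch of the iteration (it uses a QR-based retraction as a cheap stand-in for the exact exponential map, and a fixed step size rather than a line search):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # symmetric target matrix

X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random orthonormal k-frame

def f(X):
    return 0.5 * np.linalg.norm(X @ X.T - A, "fro") ** 2

def egrad(X):                            # Euclidean gradient (A symmetric)
    return 2 * (X @ X.T - A) @ X

def rgrad(X):                            # projection onto the tangent space
    G = egrad(X)
    S = X.T @ G
    return G - X @ (S + S.T) / 2

def retract(X, V):                       # QR retraction back onto the manifold
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))       # sign fix keeps the map well defined

tau = 0.01
vals = [f(X)]
for _ in range(300):
    X = retract(X, -tau * rgrad(X))
    vals.append(f(X))
```

Every iterate stays exactly on the Stiefel manifold by construction, which is precisely the point of the approach: the constraint is never imposed, it is built in.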
More recently, Hu et al. considered a problem very similar to the one in this question, where the minimization with orthonormality constraints (again, the Stiefel manifold) is made efficient. This method, too, uses the Riemannian structure of the problem.
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: There exists an infinite subset $S\subseteq S_X$ and a constant $d>1$, satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coordinates $0, -1, +1$) of the members of the basis, so that each pair of different elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
Abstract: In this study, we devise a search strategy for the exotic decay of the 125 GeV Higgs boson in the $\gamma\gamma+\mathrm{MET}$ final state. The studied final state comes in two different topologies: resonant and non-resonant. In the resonant case, the Higgs decays into two scalars, one being undetected and the other decaying resonantly into two photons. In the non-resonant case, based on low scale SUSY breaking models, the Higgs decays into two neutralinos, each subsequently decaying into a photon and a gravitino. We estimate the sensitivity of these searches using a DELPHES detector simulation, and targeting $100$ fb$^{-1}$ of $\sqrt{s}=14$ TeV $pp$ data from the LHC.

Figures from ggMET

Figure 1a: Feynman diagrams for the non-resonant signal scenarios (based on low scale SUSY breaking models, the Higgs decays into two neutralinos, each subsequently decaying into a photon and a gravitino)
Figure 1b: Feynman diagrams for the resonant signal scenarios (the Higgs decays into two scalars, one being undetected and the other decaying resonantly into two photons)
Figure 2: Signal selection efficiency after trigger selection vs. mass for different signal scenarios and types
Figure 3: Missing transverse energy distribution of signal and background for the gluon-gluon production mode
Figure 4: Distribution of $\Delta\phi$ between the diphoton pair for signal and background for the gluon-gluon production mode
Figure 5: MT distribution (MT of $\gamma\gamma+\mathrm{MET}$, $\mu\mu$) of signal and backgrounds for the ZH production mode
Figure 6: Photon-pair invariant mass distribution of signal and backgrounds for the ZH production mode
Figure 7: $\Delta\phi$ between the diphoton and dimuon systems for signal and backgrounds for the ZH production mode
Figure 8: Dimuon invariant mass distribution of signal and backgrounds for the ZH production mode
Figure 9: Pt of the dimuon system for signal and backgrounds for the ZH production mode
Figure 10: $\Delta\phi$ between the diphoton system and MET for signal and backgrounds for the ZH production mode
Figure 11: Leading photon Pt distribution of signal and backgrounds for the ZH production mode
Figure 12: Subleading photon Pt distribution of signal and backgrounds for the ZH production mode
Figure 13: Transverse mass distribution of signal and backgrounds for the ZH production mode
Figure 14: Significance plots for different trigger scenarios in the gluon fusion analysis
Figure 15: Significance plots for different trigger and signal scenarios in the gluon fusion analysis
Figure 16: $5\sigma$ branching ratios for the ggF channel, for resonant (in red) and non-resonant (in black) final states, using the $\gamma$ + E/T trigger
Figure 17: Branching ratios for 95% confidence level exclusion in the ZH case, resonant and non-resonant topologies, requiring at least one photon ($N_\gamma \geq 1$, in green and blue, respectively) and at least two photons ($N_\gamma \geq 2$, in black and red, respectively). The shaded areas correspond to a variation in systematics up to 10%
Let $P=\{P_1,P_2\cdots P_n\}$ be a set of $n\geq 4$ points in the plane and $P_iP_j$ be the line segment connecting $P_i$ and $P_j$ that satisfy:
$(1)$ No three points of $P$ are collinear;
$(2)$ Among the segments in $\{P_1P_2,P_2P_3,\cdots,P_{n-1}P_n,P_nP_1\}\setminus \{P_iP_{i+1}\}$, the segment $P_iP_{i+1}$ intersects only $P_{i-1}P_i$ and $P_{i+1}P_{i+2}$, for $i=1,2,\cdots,n$ (where $P_0=P_n$, $P_{n+1}=P_1$).
I conjecture that there must exist $1\leq i\leq n$ such that there is no point of $P$ in the interior of $\bigtriangleup P_{i-1}P_iP_{i+1}$,where $P_0=P_n,P_{n+1}=P_1$.
Is that right? |
Motivation:
In isogeometric analysis, state variables (e.g. displacement) are defined in the parametric domain, which can be mapped to the physical domain by $\boldsymbol{\xi}\mapsto \boldsymbol{x}$ as shown beneath. However, quantities related to displacement, such as stress and strain, are spatial derivatives of the displacement. The following procedure is commonly used for computing those derivatives. $\blacksquare$
Let $u$ be one component of displacement vector $\boldsymbol{u}$
\begin{equation} u(\xi,\eta) = \sum_{i} c_iN^i(\xi,\eta) \end{equation} with geometric mapping from the parametric domain to the physical domain $$x(\xi,\eta) = \sum_{i} x_i N^i(\xi,\eta), \quad y(\xi,\eta) = \sum_{i} y_i N^i(\xi,\eta),$$ where $c_i,x_i,y_i$ are constants, with the assumption that $(\xi,\eta)\mapsto(x,y)$ is bijective, i.e. the inverse exists, $$J :=[\frac{\partial x_i}{\partial \xi_j}],\: |J| \neq 0\quad (\text{where }x_2 = y,\,\xi_2 = \eta).$$
By chain rule, $$\frac{\partial u}{\partial \xi_j} = \frac{\partial u}{\partial x_i}\frac{\partial x_i}{\partial \xi_j}$$
or
$$ \begin{bmatrix} \frac{\partial u}{\partial \xi}\\ \frac{\partial u}{\partial \eta} \end{bmatrix} = \begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi}\\ \frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix} \begin{bmatrix} \frac{\partial u}{\partial x}\\ \frac{\partial u}{\partial y} \end{bmatrix} = J^T \begin{bmatrix} \frac{\partial u}{\partial x}\\ \frac{\partial u}{\partial y} \end{bmatrix}. $$
Hence, $$ \begin{bmatrix} \frac{\partial u}{\partial x}\\ \frac{\partial u}{\partial y} \end{bmatrix} = (J^T)^{-1} \begin{bmatrix} \frac{\partial u}{\partial \xi}\\ \frac{\partial u}{\partial \eta} \end{bmatrix} $$
This is the common procedure used in isogeometric analysis for computing the stiffness matrix.
However, when working on sensitivity analysis, I need to compute the second derivatives $$\frac{\partial^2 u}{\partial x_i\partial x_j},\quad i \text{ and }j \in \{1,2\}.$$
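If it helps, differentiating the chain rule once more gives the standard relation (stated here as a sketch; it is the usual starting point, not attributed to any particular paper):

```latex
\frac{\partial^2 u}{\partial \xi_a \,\partial \xi_b}
  = \sum_{i,j} \frac{\partial^2 u}{\partial x_i \,\partial x_j}
      \frac{\partial x_i}{\partial \xi_a} \frac{\partial x_j}{\partial \xi_b}
  + \sum_{i} \frac{\partial u}{\partial x_i}
      \frac{\partial^2 x_i}{\partial \xi_a \,\partial \xi_b}
```

Since the parametric-side derivatives of $u$ and all derivatives of the geometry map follow directly from the basis functions $N^i$, the unknown physical second derivatives $\partial^2 u/\partial x_i\partial x_j$ can be recovered by solving this small linear system at each evaluation point, once the first derivatives have been computed as above.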
One may think displacement $\boldsymbol{u} = [u, v]^T$.
Having searched on the Scirus site, I failed to find any useful papers, but I believe this must have been done by some group already. A reference would be appreciated. I have access to many scientific databases, so a link to the paper would be sufficient.
Thanks a lot.
A bit of explanation
I encountered this problem when working on sensitivity analysis, especially the adjoint method with the material-derivative approach. Let us assume we have an objective functional $$\phi = \int_{\Omega}f(\sigma,\epsilon,p)\,\mathrm{d}\Omega$$ where $p$ is a design parameter (e.g. a coordinate of one control point). With linear elastostatics, $\sigma$ and $\epsilon$ are functions of $\nabla \boldsymbol{u}$, so we can write $f = f(\nabla \boldsymbol{u})$.
And the material derivative of the domain functional is $$\dot{\phi} = \frac{\mathrm{d}}{\mathrm{d} p}\phi = \int_{\Omega}\frac{\partial}{\partial \nabla \boldsymbol{u}}f:\nabla\dot{\boldsymbol{u}}-\frac{\partial}{\partial \nabla \boldsymbol{u}}f:\nabla((\nabla\boldsymbol{u})\boldsymbol{v})\,\mathrm{d}\Omega + \int_{\partial\Omega}f(\boldsymbol{v}\cdot\boldsymbol{n})\,\mathrm{d}\Gamma. $$
The first term in the domain integral is taken care of by the adjoint formulation. The second term in the domain integral is where the second derivatives are required.
The HTML5 canvas is fast becoming a replacement for Flash when it comes to little particle effects and artistic experiments in the browser, but sadly one favorite animation technique is still lacking: slow fade-outs of images. And it seems unlikely that the necessary cross-browser consistency will be coming soon, at least not in the standard 2D canvas context.
Below is a screencap of a canvas animation featuring a single randomly moving particle. The trail is supposed to fade to black but instead an ugly gray remnant is left behind:
These permanent gray remnants occur when we fade images by using a low-alpha black rectangle painted over the entire canvas (when animating on a black background). In theory this should slowly fade all colors to black, and if you test this in a browser like Safari this is what you will see. But in many browsers, you’ll see these gray remnants. If you would like to anticipate the exact color of these trails, I’ll show you below how to calculate it.
The problem: Different browsers do different pixel math
The problem comes down to the way different browsers compute pixel values when blending colors according to alpha. And because different browsers do different things, you cannot code your effects in a way that will look consistent in all browsers.
Since colors are ultimately stored as integer values, any calculated pixel values resulting in fractional values have to be rounded to integers. In some browsers, values are rounded to the
nearest whole number, in other browsers values are always rounded down (that is, floored). But what difference would it make? Surely no one can tell the difference, right? Well, as it turns out it makes quite a big difference.

An Example
Have a look at the live canvas example below. Here, a white square has been drawn in the middle, and then a low-alpha (alpha = 0.04) black square is drawn over the top of it repeatedly (73 times). Assuming your machine works the same as mine, if you view this in Safari or Opera you’ll just see a black square. If you view it in Chrome, IE or Firefox, then you’ll see a faint gray square remaining in the middle, of color #0C0C0C. The color will be stuck here; painting the transparent black square again and again will still result in this same color.
Examples and downloads

Simple square examples: To see a slowly fading version with color readout below, click here. To see the same issue with fading using transparent white (which will create remnants in any browser), click here.

Simple particle examples: Source
Download all source files here
What’s happening?
In the examples above, we painted a black rectangle with alpha 0.04 over the top of the display. But in truth, there is no such thing as alpha equaling 0.04. Just like color components, alpha is stored as an integer between 0 and 255, and the closest whole number with ratio 0.04 to 255 is 10. So the number 10/255 (approximately 0.0392157) is the number used for calculations when blending by alpha.
When a color \(topColor\) with alpha \(\alpha\) is painted on top of a solid color \(baseColor\), the resulting color should be
\[ (1-\alpha) \cdot baseColor + \alpha \cdot topColor \] (the calculation is done in this way separately on the red, green, and blue channels). But since this calculation results in a float value, not a whole number, rounding must occur. Some browsers round down all the time (Safari, Opera), other browsers round to the nearest whole number (Chrome, IE, Firefox).
But this is the effect of the rounding: In the examples above, we blend a transparent black on top of a gray color. When the gray is at the hex value #0C0C0C, each RGB component has decimal value 12. Blending the black with alpha 10/255 on top results in each RGB component being calculated as
\[ \left(1-\frac{10}{255}\right) \cdot 12 + \left(\frac{10}{255}\right) \cdot 0 \approx 11.5294. \] If this float value is rounded down, it will be set to the value 11. But if it is rounded to the nearest whole number, it will be rounded back up to 12, the same color we started with.

What about white?
Fading to a white background by painting low-alpha white is no better. In fact, this will leave behind gray remnants in
any browser. In case you missed it above, click here for an example.

What color will the remnants be?
I’ll provide some explanation below, but if you want to skip the details here are some formulas. Suppose \(c\) is the red, green, or blue component value of a solid color, and we paint a transparent black or white over the top of it. Then:
Browsers that round to the nearest integer (such as Chrome, IE, or Firefox): When painting black with alpha \(0 \leq \alpha \leq 1\), a color component \(c\) will be rounded back to the same value \(c\) if \[c \leq \frac{1}{2\alpha}.\] When painting white with alpha \(\alpha\), a color component \(c\) will be rounded back to the same value \(c\) if \[c \gt 255 - \frac{1}{2\alpha}.\]

Browsers that round down to the nearest integer (such as Safari or Opera): When painting black with alpha \(\alpha\), all color components will reduce to a lower value (which after repeated paintings will fade images to complete black). When painting white with alpha \(\alpha\), a color component \(c\) will be rounded back to the same value \(c\) if \[c \gt 255 - \frac{1}{\alpha}.\]
This means, for example, that if you make an animation featuring white particles on a black background, and have them fade out to black by drawing black with alpha \(10/255\) over the top, then the gray remnants will have value 12, because this is the brightest gray value below \(255/(2\cdot 10) = 12.75\). Thus the remnants will have hex color #0C0C0C.
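This arithmetic is easy to simulate outside the browser. Here is a sketch, an assumption-laden model of the blending loop rather than actual browser code, comparing the two rounding rules:

```python
import math

# Repeatedly paint black with alpha 10/255 over a gray component value c,
# under two different rounding rules, until the value stops changing.
def fade_to_black(c, alpha=10/255, rounding=round):
    """Blend until the component value stops changing; return where it sticks."""
    while True:
        new_c = int(rounding((1 - alpha) * c))  # top color is black (0)
        if new_c == c:
            return c
        c = new_c

stuck_nearest = fade_to_black(255, rounding=round)     # round to nearest
stuck_floor = fade_to_black(255, rounding=math.floor)  # always round down
# stuck_nearest == 12 (the gray remnant #0C0C0C); stuck_floor == 0 (true black)
```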
The math
I will only derive one of the formulas above; the rest are obtained in a similar way. Consider the example of painting transparent white with alpha \(\alpha\) over the top of an opaque color with the value \(c\) (which could be a red, green, or blue value). And suppose we are doing this in Safari, where colors are always rounded down. Then rounding will produce the same resulting color \(c\) whenever the computed color lies in between \(c\) and \(c+1\), that is, when
\[ c \leq (1-\alpha)c + \alpha \cdot 255 \lt c+1. \] It is easy to check that the first inequality holds automatically for any color value \(c\). A little algebra will turn the second inequality into \[ \alpha(255 - c) \lt 1, \] and some more algebra (being careful to flip the inequality when multiplying by a negative) produces \[ c \gt 255 - \frac{1}{\alpha}. \] Thus any color component of value greater than \(255-1/\alpha\) will remain unchanged when the transparent white is painted over the top.

Even more trouble… a bug in Chrome
While testing some examples for this post, I discovered a bug in the current version (26) of Chrome. When canvases are smaller than 255×255 in size, using
fillRect to paint a transparent color over a solid (alpha = 1) color results in a final color with less than 100% alpha! I submitted a bug report with more information here.
Solutions?
At this point, this cross-browser nightmare is enough to send you screaming back to Flash! But if we want to use the HTML5 canvas, we will have to find some workarounds.
Use a gray background
One option is to simply paint the background of your canvas the same color as the anticipated gray trails. If you choose the right fade color, this color can be close enough to pure black (or pure white) so as to not be noticeable. Of course, this color will be different in different browsers. But if you paint the gray color, in a few frames it should equalize to a consistent color. If you missed it above, here is a particle example where the background is colored the same gray as the trail remnants. The page background is made a lighter gray to trick you into thinking the canvas gray background is actually pure black.
Manipulate the pixels directly
Another option is to take charge of the bitmap data of the canvas, and explicitly dim each pixel according to whatever math you want to apply. For example, you can reduce the alpha of each pixel by a fixed amount on each animation frame, producing a slow fade. I presented some particle examples using this method in one of my early posts here. But this method is very heavy on the CPU. If a large canvas is used with many animated particles, things can really slow down.
Do you really need that fading effect?
Maybe give up on the idea of slowly fading images, and just cleanly erase the canvas on each frame.
The future?
Certainly there will be other options in the future. CSS filters (still not widely supported) would allow us to darken the entire canvas by a few bits, bringing those gray remnants down to black. Or perhaps hardware-accelerated graphics could allow the type of alpha subtraction described above to take place with minimal CPU impact.
Comments?
Do you have any other ideas or projections? I’d love to hear from you in the comments below! |
If by definition $r=\sqrt{x^2 + y^2}$, then why do we allow $r$ to be negative? Relatedly, I do not understand the last section of this conversation discussing points being represented by multiple $\theta$:
Student: So a single point could have many different values?
Mentor: Correct! The values for $r$ can be given as positive and negative values and $\theta$ can be given not only in positive and negative values, but also as any value $\theta$ + any multiple of $2\pi$. So, unlike the Cartesian system where each point has a unique set of coordinates, in the polar system any point can have an infinite number of coordinates!
Student: That means that the point given as $(2,\pi/4)$ could also be given as $(2,-7\pi/4)$ or $(2, 9\pi/4)$ or $(-2,5\pi/4)$!
Mentor: Exactly. Try plotting those points using the Polar Coordinates activity and verify they are the same point!
This part in particular is confusing to me:
$\theta$ can be given not only in positive and negative values, but also as any value $\theta$ + any multiple of 2π.
So if $\theta=5\pi/3$, would it also equal $5\pi/3 + \pi$? I'm pretty confused about polar coordinates; they seem so inferior to traditional Cartesian coordinates when not dealing with imaginary numbers. Could someone clarify this redundancy in polar coordinates?
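One way to convince yourself that all four coordinate pairs from the dialogue name the same point is to convert each to Cartesian coordinates; a quick sketch:

```python
import math

# Convert each polar pair to Cartesian coordinates; a negative r simply
# points backwards along the ray at angle theta.
def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

points = [(2, math.pi / 4), (2, -7 * math.pi / 4),
          (2, 9 * math.pi / 4), (-2, 5 * math.pi / 4)]
coords = [polar_to_cartesian(r, t) for r, t in points]
# every pair maps to (sqrt(2), sqrt(2)), i.e. the same point
```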
To give you a better picture, what I mean with my comment, consider
Stokes Theorem
$$\int_D db^{n-1}=\int_{\partial D}b^{n-1}$$
Let $b^{n-1}$ be an arbitrary 1-form $$b^{1}= a_1 dx + a_2 dy,$$ which leads to $$\int_D f(x,y)dxdy = \int_D\left(\frac{\partial a_2}{\partial x}-\frac{\partial a_1}{\partial y}\right)dxdy.$$ Without loss of generality set $a_1=0$; one then needs to solve $f(x,y)=\frac{\partial a_2}{\partial x}$ for $a_2$.
Now you need the parametrization of the boundary functions. For the first three bounding functions $$(x=0,y\in[-h,\eta(0)]),\quad(x\in[0,L],y=-h),\quad(x=L,y\in[-h,\eta(L)])$$ this is straightforward. For the last bounding function I assume you know, or can evaluate, the function at $(x(t),y(t))$ with $t\in[0,1]$. So I suggest using two Gaussian quadratures: first over $[lb,\mathrm{peak}]$ and second over $[\mathrm{peak}, rb]$.
Given you can obtain information on the peak and integrate $f$, this method will give you very high accuracy and very good convergence.
Supplementing Edit
As bigge pointed out, knowing $f$ does not guarantee that a parametrization can be found easily. Of course this is true. But even if one is not able to solve $f(x,y)=\frac{\partial a_2}{\partial x}$ analytically, one can still use this approach. Using Gaussian quadrature techniques, one can read it as
$$a_2(x,y)=\int_{x_0}^x f(t,y)dt =\int_{-1}^1 f(r,y) \frac{dt}{dr} dr\approx \sum_k w_k f(r_k,y) \frac{dt}{dr}$$ With e.g. $x_0=0$ chosen arbitrarily. Now the scaling factor $\frac{dt}{dr}$ depends on the integration range $x$. Since we still integrate line integrals along the boundary, $x$ is not arbitrary but specific set of quadrature points for evaluating $\int_{\partial D} a_2(x,y) dy$.
So the final term is a double sum for each boundary integral, as expected for an integral over a 2-D domain. In that form it is simply a very sophisticated 2-D Simpson's rule.
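A minimal sketch of the inner quadrature above (the function names and the test integrand are my own; `leggauss` supplies the nodes $r_k$ and weights $w_k$):

```python
import numpy as np

# Approximate a2(x, y) = integral of f(t, y) dt from 0 to x using
# Gauss-Legendre nodes mapped from [-1, 1] to [0, x].
def a2(f, x, y, n=8):
    r, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * x * (r + 1.0)   # map nodes from [-1, 1] to [0, x]
    dt_dr = 0.5 * x           # constant scaling factor dt/dr
    return float(np.sum(w * f(t, y) * dt_dr))

# Check against an integral doable by hand: f(t, y) = 2*t*y gives a2 = x^2 * y.
val = a2(lambda t, y: 2.0 * t * y, x=3.0, y=2.0)   # exact value: 18
```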
In my opinion, it is usually more challenging to find a good parametrization of the bounding curves than evaluation of $f$. |
Find the smallest integer $n$ such that
$$(x^2 + y^2 + z^2)^2 \leq n(x^4 + y^4 + z^4)$$for all real numbers $x, y,$ and $z.$
How should I manipulate this inequality? I am stuck and don't know how to proceed. All solutions are greatly appreciated!
Let's work with $a=x^2$, $b=y^2$, and $c=z^2$ which are all nonnegative. Then $$ (a+b+c)^2\leq 3(a^2+b^2+c^2) \tag{$*$} $$ (either by Cauchy-Schwarz or by expanding both sides) so $n\leq 3$. But ($*$) is an equality when $a=b=c>0$ so $n\geq 3$. We infer that $n=3$.
$$(x^2 + y^2 + z^2)^2 \leq n(x^4 + y^4 + z^4)\iff (xy)^2+(xz)^2+(yz)^2\le \frac {n-1}{2}(x^4+y^4+z^4)$$ If $n=3-\epsilon$ where $\epsilon\gt 0$ and $x=y=z$ then $2\le 2-\epsilon$, absurd. Thus the smallest $n$ is $3$. |
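A quick numeric sanity check of $n = 3$ (my own illustration, not part of either answer):

```python
import random

# The ratio (x^2+y^2+z^2)^2 / (x^4+y^4+z^4) never exceeds 3,
# with equality exactly when x = y = z != 0.
def ratio(x, y, z):
    return (x * x + y * y + z * z) ** 2 / (x ** 4 + y ** 4 + z ** 4)

random.seed(0)
samples = [ratio(random.uniform(-5, 5), random.uniform(-5, 5),
                 random.uniform(-5, 5)) for _ in range(10_000)]
max_ratio = max(samples)              # stays at or below 3
equality_case = ratio(1.7, 1.7, 1.7)  # exactly 3
```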
In chemistry, the
Henderson–Hasselbalch equation describes the derivation of pH as a measure of acidity (using pKa, the negative log of the acid dissociation constant) in biological and chemical systems. The equation is also useful for estimating the pH of a buffer solution and finding the equilibrium pH in acid-base reactions (it is widely used to calculate the isoelectric point of proteins).
The equation is given by:
\mathrm{pH} = \mathrm{p}K_\mathrm{a}+ \log_{10} \left ( \frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \right )
Here, [HA] is the molar concentration of the undissociated weak acid, [A⁻] is the molar concentration (molarity, M) of this acid's conjugate base, and pKa is −log₁₀(Ka), where Ka is the acid dissociation constant, that is:

\mathrm{p}K_\mathrm{a} = - \log_{10} (K_\mathrm{a}) = - \log_{10} \left ( \frac{[\mathrm{H}_3\mathrm{O}^+][\mathrm{A}^-]}{[\mathrm{HA}]} \right )

for the non-specific Brønsted acid-base reaction:

\mathrm{HA} + \mathrm{H}_{2}\mathrm{O} \rightleftharpoons \mathrm{A}^- + \mathrm{H}_{3}\mathrm{O}^+
In these equations, A⁻ denotes the ionic form of the relevant acid. Bracketed quantities such as [base] and [acid] denote the molar concentration of the quantity enclosed.
For bases
For the standard base equation:[1]

\mathrm{B} + \mathrm{H}^{+} \rightleftharpoons \mathrm{BH}^{+}
A second form of the equation, known as the Heylman Equation, is expressed in terms of K_\mathrm{b}, where K_\mathrm{b} is the base dissociation constant:
\mathrm{p}K_\mathrm{b} = - \log_{10} (K_\mathrm{b}) = - \log_{10} \left ( \frac{[\mathrm{BH}^+][\mathrm{OH}^-]}{[\mathrm{B}]} \right )
In analogy to the above equations, the following equation is valid:
\mathrm{pOH} = \mathrm{p}K_\mathrm{b}+ \log_{10} \left ( \frac{[\mathrm{BH}^+]}{[\mathrm{B}]} \right )
where BH⁺ denotes the conjugate acid of the corresponding base B. Using the properties of these terms at 25 degrees Celsius, one can synthesise an equation for the pH of basic solutions in terms of pKa:

\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log_{10} \left(\frac{[\mathrm{B}]}{[\mathrm{BH}^+]}\right)

Derivation
The Henderson–Hasselbalch equation is derived from the acid dissociation constant equation by the following steps:[2]

K_\mathrm{a} = \frac{[\mathrm{H}^+][\mathrm{A}^-]}{[\mathrm{HA}]}
Taking the log, to base ten, of both sides gives:
\log_{10}K_\mathrm{a} = \log_{10} \left ( \frac{[\mathrm{H}^+][\mathrm{A}^-]}{[\mathrm{HA}]} \right )
Then, using the properties of logarithms:
\log_{10}K_\mathrm{a} = \log_{10}[\mathrm{H}^+] + \log_{10} \left ( \frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \right )
Identifying the left-hand side of this equation as -\mathrm{p}K_\mathrm{a} and \log_{10} [\mathrm{H}^{+}] as -\mathrm{pH}:

-\mathrm{p}K_\mathrm{a} = -\mathrm{pH} + \log_{10} \left ( \frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \right )
Adding pH and pKa to both sides:

\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log_{10} \left ( \frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \right )
The ratio [\mathrm{A}^-]/[\mathrm{HA}] is unitless, and as such, other ratios with other units may be used. For example, the mole ratio of the components, n_{A^-}/n_{HA} or the fractional concentrations \alpha_{A^-}/\alpha_{HA} where \alpha_{A^-}+\alpha_{HA}=1 will yield the same answer. Sometimes these other units are more convenient to use.
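To make the final relationship concrete, here is a quick numeric sketch (the buffer values are illustrative assumptions, not from the article):

```python
import math

# Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
# Illustrative acetate buffer with pKa about 4.76 (assumed value).
def henderson_hasselbalch(pKa, conc_base, conc_acid):
    return pKa + math.log10(conc_base / conc_acid)

pH_equal = henderson_hasselbalch(4.76, 0.10, 0.10)    # ratio 1: pH = pKa
pH_tenfold = henderson_hasselbalch(4.76, 1.00, 0.10)  # 10:1 ratio: pKa + 1
```

Note that the function depends only on the base-to-acid ratio, which is why mole ratios or fractional concentrations give the same answer, as the paragraph above says.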
History
Lawrence Joseph Henderson wrote an equation, in 1908, describing the use of carbonic acid as a buffer solution. Karl Albert Hasselbalch later re-expressed that formula in logarithmic terms, resulting in the Henderson–Hasselbalch equation [1]. Hasselbalch was using the formula to study metabolic acidosis.
Limitations
There are some significant approximations implicit in the Henderson–Hasselbalch equation. The most significant is the assumption that the concentration of the acid and its conjugate base at equilibrium will remain the same as the formal concentration. This neglects the dissociation of the acid and the binding of H⁺ to the base. The dissociation of water and the relative water concentration itself are neglected as well. These approximations will fail when dealing with relatively strong acids or bases (pKa more than a couple of units away from 7), dilute or very concentrated solutions (less than 1 mM or greater than 1 M), or heavily skewed acid/base ratios (more than 100 to 1). In high buffer dilutions, where the concentration of protons arising from water becomes equally or more prevalent than the buffer species themselves (at pH 7, this means buffer component concentrations of <10⁻⁵ M formally, but practically much higher), the pKa of the 'buffer' system will tend towards neutrality.

Estimating blood pH
The Henderson–Hasselbalch equation can be applied to relate the pH of blood to constituents of the bicarbonate buffering system:[3]

\mathrm{pH} = \mathrm{p}K_{\mathrm{a}~\mathrm{H}_2\mathrm{CO}_3} + \log_{10} \left ( \frac{[\mathrm{HCO}_3^-]}{[\mathrm{H}_2\mathrm{CO}_3]} \right ),
where pKa H₂CO₃ is the negative log of the acid dissociation constant of carbonic acid, [HCO₃⁻] is the molar concentration of bicarbonate, and [H₂CO₃] is the molar concentration of carbonic acid.
This is useful in arterial blood gas, but these usually state pCO₂, that is, the partial pressure of carbon dioxide, rather than H₂CO₃. However, these are related by the equation:[3]

[\mathrm{H}_2\mathrm{CO}_3] = k_{\mathrm{H}~\mathrm{CO}_2} \times p_{\mathrm{CO}_2},
where:
[H₂CO₃] is the concentration of carbonic acid in the blood
k_{H CO₂} is the Henry's law constant for the solubility of carbon dioxide in blood; it is approximately 0.0307 mmol/(L·torr)
pCO₂ is the partial pressure of carbon dioxide in the blood
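The two relations can be combined numerically. In this sketch the blood-gas numbers are typical textbook values assumed for illustration, and 6.1 is used as the standard pKa of the bicarbonate system:

```python
import math

# Illustrative blood-gas calculation: [HCO3-] = 24 mmol/L, pCO2 = 40 torr.
def blood_ph(bicarb_mmol_per_L, pco2_torr, k_henry=0.0307, pKa=6.1):
    h2co3 = k_henry * pco2_torr  # dissolved CO2 expressed as carbonic acid
    return pKa + math.log10(bicarb_mmol_per_L / h2co3)

ph = blood_ph(24.0, 40.0)  # about 7.39, inside the normal arterial range
```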
Taken together, the following equation can be used to relate the pH of blood to the concentration of bicarbonate and the partial pressure of carbon dioxide:
[3]

\mathrm{pH} = 6.1 + \log_{10} \left ( \frac{[\mathrm{HCO}_3^-]}{0.0307 \times p_{\mathrm{CO}_2}} \right ),
where:
pH is the acidity in the blood
[HCO₃⁻] is the concentration of bicarbonate in the blood
pCO₂ is the partial pressure of carbon dioxide in the arterial blood

References

[1] Larsen, D. "Henderson–Hasselbalch Approximation". Chemwiki, University of California. Retrieved 27 March 2014.
[2] "Henderson–Hasselbalch Equation: Derivation of pKa and pKb".
[3] Bray, John J. (1999). Lecture Notes on Human Physiology. Malden, Mass.: Blackwell Science. p. 556, section "Estimating plasma pH".

Further reading

Lawrence J. Henderson (1 May 1908). "Concerning the relationship between the strength of acids and their capacity to preserve neutrality".
Hasselbalch, K. A. (1917). "Die Berechnung der Wasserstoffzahl des Blutes aus der freien und gebundenen Kohlensäure desselben, und die Sauerstoffbindung des Blutes als Funktion der Wasserstoffzahl".
Po, Henry N.; Senozan, N. M. (2001). "Henderson–Hasselbalch Equation: Its History and Limitations".
de Levie, Robert (2003). "The Henderson–Hasselbalch Equation: Its History and Limitations".
de Levie, Robert (2002). "The Henderson Approximation and the Mass Action Law of Guldberg and Waage".
I have an optimization problem in which the objective function and most constraints are linear, but I also have several nonlinear (and nonconvex) constraints. I am wondering if my problem can be reformulated as a convex one, or if you have some advice on how to approach it.
This is the problem I have:
\begin{equation*} \begin{aligned} & \underset{x}{\text{minimize}} & & c^{\textrm{T}} x \\ & \text{subject to} & & x\geq 0\\ & & & x_i \leq \textrm{UpperBound}_i\\ & & & \textrm{A}x\leq \textrm{b}\\ \end{aligned} \end{equation*}
Up to now I just have a linear program, but I also have several quadratic constraints on auxiliary variables $y$: \begin{equation*} \begin{aligned} & \text{subject to} & & \textrm{D}x + \textrm{e}= y\\ & & & \textrm{if}\ \ (y_{12}\leq y_{11})\\ & & & \qquad y_1\cdot y_2 \geq y_3^2 -y_3\cdot y_4\\ & & & \textrm{if}\ \ (y_{13}\leq y_{11})\\ & & & \qquad y_5\cdot y_6 \geq y_7^2 -y_7\cdot y_4\\ & & & \textrm{if}\ \ (y_{14}\leq y_{11})\\ & & & \qquad y_8\cdot y_9 \geq y_{10}^2 -y_{10}\cdot y_4\\ \end{aligned} \end{equation*}
The auxiliary variables $y$ are defined by the equality constraints $\textrm{D}x + \textrm{e} = y$ (although I think it is possible to define them as $\textrm{D}x + \textrm{e} \leq y$ and it would still hold). The last three constraints in the problem are quadratic.
Note that these quadratic constraints are also conditional (I implement these conditional constraints by using big-M's). At least one of the quadratic constraints is always enforced, but two or all three might be enforced.
I know that they are non-convex constraints, because matrix $Q$ is indefinite in this equivalent formulation of the quadratic constraint:
$$ [y_1, y_2, y_3, y_4]\ \textrm{Q}\ [y_1, y_2, y_3, y_4]^\textrm{T}\leq 0 $$ where \begin{equation*} Q= \begin{bmatrix} 0&-1/2&0&0\\ -1/2&0&0&0\\ 0&0&1&-1/2\\ 0&0&-1/2&0\\ \end{bmatrix} \end{equation*}
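One quick way to confirm the indefiniteness claim is to compute the eigenvalues of $Q$; a sketch:

```python
import numpy as np

# Q has both negative and positive eigenvalues, so the quadratic
# constraint y^T Q y <= 0 is nonconvex.
Q = np.array([[0.0, -0.5, 0.0, 0.0],
              [-0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, -0.5],
              [0.0, 0.0, -0.5, 0.0]])
eigvals = np.linalg.eigvalsh(Q)
# the block-diagonal structure gives eigenvalues {-1/2, +1/2}
# and {(1 - sqrt(2))/2, (1 + sqrt(2))/2}
```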
Do you have any advice on how to tackle this problem? I have found something similar in Appendix B1 of Stephen Boyd's book, but I think it doesn't hold for my problem since I have more than one quadratic constraint.
Thanks for your help! |
Q. A rod of length 50 cm is pivoted at one end. It is raised such that it makes an angle of 30° from the horizontal as shown and released from rest. Its angular speed when it passes through the horizontal (in rad $s^{-1}$) will be $(g = 10\ ms^{-2})$
Solution:
Work done by gravity from the initial to the final position is $W = mg \frac{\ell}{2} \sin30^{\circ} = \frac{mg \ell}{4}$. According to the work-energy theorem, $W = \frac{1}{2} I \omega^{2} \Rightarrow \frac{1}{2} \frac{m \ell^{2}}{3} \omega^{2} = \frac{mg \ell}{4}$, so $\omega = \sqrt{\frac{3g}{2\ell}} = \sqrt{\frac{3\times10}{2 \times0.5}} = \sqrt{30}$ rad/sec. $\therefore$ the correct answer is (1).
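A numeric re-check of this energy balance (a sketch; mass is set to 1 kg since it cancels):

```python
import math

# Energy balance: m g (l/2) sin(30 deg) = (1/2) (m l^2 / 3) omega^2
m, g, l = 1.0, 10.0, 0.5
W = m * g * (l / 2) * math.sin(math.radians(30))  # work done by gravity
I = m * l ** 2 / 3                                # rod pivoted at one end
omega = math.sqrt(2 * W / I)                      # sqrt(30), about 5.48 rad/s
```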
Recently, I came across the blogdown R package, a variant of RStudio's popular bookdown R package, made by Yihui Xie and Amber Thomas. Blogdown allows the user to write blog posts with code chunks, in any of the large variety of languages supported by R Markdown, allowing for computationally reproducible writing and programming. It also plays well with the new static site engine Hugo. Here, I'm mostly just going to take blogdown for a spin.
First, let’s generate some data and try doing some very simple summary statistics:
# we're going to be simulating...set seed and constants
set.seed(6142)
n <- 1000
tx_mean <- 25
# generate variables randomly and by structural equation
W <- replicate(3, rnorm(n))
A <- rnorm(n, mean = tx_mean, sd = 2*sqrt(tx_mean))
Y <- as.numeric(A > tx_mean & W[, 1] > 0)
O <- as.data.frame(cbind(Y, A, W))
colnames(O) <- c("Y", "A", paste0("W", seq_len(3)))
head(O)
## Y A W1 W2 W3## 1 1 28.40771 1.6361789 0.3361618 2.2859933## 2 1 43.26174 2.0236876 0.3815559 0.1296723## 3 1 30.45214 0.1319931 0.3985546 0.7973811## 4 0 23.52230 -0.4502451 0.7176603 -1.0905016## 5 1 29.88353 1.7895223 -0.3369940 0.7153515## 6 0 21.33805 1.7653295 -0.8362112 -0.7107431
skim(O)
## Skim summary statistics## n obs: 1000 ## n variables: 5 ## ## ── Variable type:numeric ──────────────────────────────────────────────────## variable missing complete n mean sd p0 p25 p50 p75 p100## A 0 1000 1000 25.18 9.77 -4.68 18.12 25.37 32.17 56.11## W1 0 1000 1000 0.045 1 -3.48 -0.59 0.036 0.7 3.41## W2 0 1000 1000 -0.025 0.99 -3.68 -0.65 -0.038 0.65 3.51## W3 0 1000 1000 -0.023 1.01 -3.37 -0.66 -0.034 0.61 2.99## Y 0 1000 1000 0.27 0.44 0 0 0 1 1 ## hist## ▁▂▅▇▇▅▁▁## ▁▁▃▇▇▅▁▁## ▁▁▃▇▇▅▁▁## ▁▁▃▇▇▅▂▁## ▇▁▁▁▁▁▁▃
Look at that! In the above, we generate background covariates (\(W\)) based on the standard Normal distribution, a treatment (\(A\)) based on a Normal distribution with specified mean (\(\mu = 25\)) and standard deviation (\(\sigma = 2 \cdot \sqrt{\mu} = 10\)), and an outcome (\(Y\)) based on a specified structural equation of the form:
\[Y = I(A > 25) \cdot I(W_1 > 0),\] for \(n = 1000\).
Having just recently returned from ACIC ’17, I have causal inference on my mind. You’ll notice that in specifying the data generating processes above, I provided a structural equation for \(Y\) – now, let’s play with marginal structural models (MSMs) just a little bit:
msm <- glm(Y ~ ., family = "binomial", data = O)
summary(msm)
## ## Call:## glm(formula = Y ~ ., family = "binomial", data = O)## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -2.67224 -0.38138 -0.10080 0.07985 2.31857 ## ## Coefficients:## Estimate Std. Error z value Pr(>|z|)## (Intercept) -9.523376 0.687346 -13.855 <2e-16## A 0.272499 0.021257 12.820 <2e-16## W1 2.589087 0.200157 12.935 <2e-16## W2 0.053711 0.119842 0.448 0.654## W3 -0.006588 0.108947 -0.060 0.952## ## (Dispersion parameter for binomial family taken to be 1)## ## Null deviance: 1168.50 on 999 degrees of freedom## Residual deviance: 516.94 on 995 degrees of freedom## AIC: 526.94## ## Number of Fisher Scoring iterations: 7
Above, we projected the observed data (\(O = (W, A, Y)\)) onto a working MSM for the outcome, using a logistic regression with terms for all of the baseline covariates and the treatment. From the summary of the resultant glm object, we notice that the estimated coefficients for the terms corresponding to the treatment (\(A\)) and first baseline covariate (\(W_1\)) are statistically significant – recall that these were the variables we used in specifying the structural equation for \(Y\) above.
I think I’ll wrap up this experiment by trying some simple plotting:
p <- ggplot(O, aes(A, Y)) + geom_point()
p <- p + stat_smooth(method = "glm", method.args = list(family = "binomial"), formula = y ~ x)
p <- p + xlab("A (treatment)") + ylab("Y (outcome)") + theme_nima()
print(p)
We can plot the relationship between the treatment and outcome using a logistic regression fit. Pretty awesome that blogdown supports such nice graphics so easily. In my old blog, I had to come up with a custom R script to use knitr to process R Markdown to standard markdown, allowing posts to be rendered on my old Jekyll blog.
Matching Annotations Aug 2019 If your comment is about a typo, problem with the website or anything else, please use our contact form.
First time I've seen users directed to use a separate channel than the main comments area for notifying about typos. Good idea.
Apr 2019
There is no question that technology is becoming a bigger part of our lives every day. What we have to take a closer look at is the increasing dependency that children have on smart devices, which is taking over other normal childhood activities, an important occurrence because it interferes with normal childhood development and negatively impacts relationships between parents and children. Smartphones give young adults access to almost unlimited information and almost unlimited content that they may not yet be equipped to navigate. Every parent wants to know what is going on in their child's life, and with smartphones, monitoring is highly suggested just to make sure the internet has not led them down a dark path. For example, an astonishing study found that 19% of young adults ages 13-19 had sent sexually suggestive content online, and 31% had received this type of content. Smartphones make exposure to these things at an early age much easier, and preventing that exposure much harder. There are many restrictions and blocks that can be put in place to help guide children to the right parts of the internet, but there is still the issue of time management on devices. Smartphones and other smart devices have taken the place of activities that should be prioritized for a healthy lifestyle, such as homework and exercise. One aspect that has drawn many children into overuse of technology is online gaming. Online games often involve interactions with other online players, which allows children to feel as if they are socializing without actually interacting with friends, especially for children who struggle with in-person interaction.
While this may be beneficial for short-term socializing or motor skills, in the long run the children are choosing to sit and stare at a screen instead of interacting with people around them, or doing productive things such as homework, so the short-term benefits are outweighed by long-term consequences.
Oct 2018

The perplexity of the model \(q\) is defined as \(b^{-\frac{1}{N}\sum_{i=1}^{N}\log_{b}q(x_i)}\)
The perplexity formula is missing the probability distribution \(p\)
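For reference, the quoted definition is easy to evaluate directly. This is a minimal sketch (the sample probabilities are made-up numbers, and the function name is my own):

```python
import math

def perplexity(q_probs, base=2):
    """Perplexity b^{-(1/N) * sum_i log_b q(x_i)} of a model on a held-out sample."""
    n = len(q_probs)
    return base ** (-sum(math.log(p, base) for p in q_probs) / n)

# A model assigning uniform probability 1/k to every outcome has perplexity k:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

Writing perplexity via the cross-entropy \(b^{H(p,q)}\) is what makes the role of the true distribution \(p\) explicit; the sample-based form above averages over the observed \(x_i\) instead.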
Signals Systems GATE EE Quiz 9
Here is Signals Systems GATE EE Quiz 9 to help you prepare for your upcoming GATE exam. The GATE EE paper has several subjects, each one as important as the last. However, one of the most important subjects in GATE EE is Signals and Systems. The subject is vast, but practice makes tackling it easy.
This quiz contains important questions which match the pattern of the GATE exam. Check your preparation level in every chapter of Signals and Systems for GATE EE by taking the quiz and comparing your ranks. Learn about Shifting and scaling operations, Laplace Transform, Sampling theorem and more!
\(r(n) - r(n-1)\) for \(n \in [1,\infty)\) is equal to ___________
Find the value of \(\int_{-\infty}^{\infty} \delta\!\left(t^2 - \frac{3\pi}{2}t + \frac{5\pi^2}{16}\right)\sin\!\left(\frac{\pi}{4} - t\right)dt\)
The system y(n) = nx(-n) is a
1. Dynamic
2. Noncausal
3. Linear
4. Time invariant
The value of the integral
\(\int_{-\infty}^{\infty} x(t)\,\delta'(t - 1)\,dt\) is
where
\(x(t) = u(t - 2) * u(t - 3) * \delta(t + 1)\)
A continuous time signal is given below
Fourier transform of this signal can be expressed as –
More Signals and Systems for GATE EE Quizzes here:
Difference between revisions of "Fujimura.tex"
Latest revision as of 06:46, 27 July 2009
\section{Fujimura's problem}\label{fujimura-sec}
Let $\overline{c}^\mu_n$ be the size of the largest subset of the triangular grid $$\Delta_n := \{(a,b,c)\in {\mathbb Z}^3_+ : a+b+c = n\}$$ which contains no equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ with $r>0$. These are upward-pointing equilateral triangles. We shall refer to such sets as 'triangle-free'. (Kobon Fujimura is a prolific inventor of puzzles, and in this puzzle asked the related question of eliminating all equilateral triangles.)
The table in Figure \ref{lowFujimura} was formed mostly by computer searches for optimal solutions. We also found human proofs for most of them (see {\tt http://michaelnielsen.org/polymath1/index.php?title=Fujimura's\_problem}).
\begin{figure}
\centerline{
\begin{tabular}{l|llllllllllllll}
$n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13\\
\hline
$\overline{c}^\mu_n$ & 1 & 2 & 4 & 6 & 9 & 12 & 15 & 18 & 22 & 26 & 31 & 35 & 40 & 46
\end{tabular}
}
\caption{Fujimura numbers}
\label{lowFujimura}
\end{figure}
For any equilateral triangle $(a+r,b,c)$, $(a,b+r,c)$ and $(a,b,c+r)$, the value $a+2b$ forms an arithmetic progression of length 3. A Behrend set is a finite set of integers with no arithmetic progression of length 3 (see {\tt http://arxiv.org/PS\_cache/arxiv/pdf/0811/0811.3057v2.pdf}). By looking at those triples $(a,b,c)$ with $a+2b$ inside a Behrend set, one can obtain the lower bound $\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))$.
It can be shown by a `corners theorem' of Ajtai and Szemeredi \cite{ajtai} that $\overline{c}^\mu_n = o(n^2)$ as $n \rightarrow \infty$.
An explicit lower bound is $3(n-1)$, made of all points in $\Delta_n$ with exactly one coordinate equal to zero.
An explicit upper bound comes from counting the triangles. There are $\binom{n+2}{3}$ triangles, and each point belongs to $n$ of them. So you must remove at least $(n+2)(n+1)/6$ points to remove all triangles, leaving $(n+2)(n+1)/3$ points as an upper bound for $\overline{c}^\mu_n$. |
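As a sanity check, the tabulated values can be compared against the two explicit bounds just described (a quick sketch; the counting bound is only applied for $n \geq 1$, since its derivation divides by $n$):

```python
import math

# Fujimura numbers transcribed from the table above (n = 0..13)
fujimura = [1, 2, 4, 6, 9, 12, 15, 18, 22, 26, 31, 35, 40, 46]

for n, c in enumerate(fujimura):
    if n == 0:
        continue  # the triangle-counting argument divides by n
    assert 3 * (n - 1) <= c                        # explicit lower bound
    assert c <= math.floor((n + 2) * (n + 1) / 3)  # counting upper bound
print("all tabulated values lie within the explicit bounds")
```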
Defining parameters
Level: \( N = 3 \)
Weight: \( k = 43 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 2 \)
Sturm bound: \( 28 \)
Trace bound: \( 0 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{43}(\Gamma_1(3))\).
|  | Total | New | Old |
|---|---|---|---|
| Modular forms | 15 | 15 | 0 |
| Cusp forms | 13 | 13 | 0 |
| Eisenstein series | 2 | 2 | 0 |

Decomposition of \(S_{43}^{\mathrm{new}}(\Gamma_1(3))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
| Label | \(\chi\) | Newform | Dimension | \(\chi\) degree |
|---|---|---|---|---|
| 3.43.b | \(\chi_{3}(2, \cdot)\) | 3.43.b.a | 1 | 1 |
| 3.43.b | \(\chi_{3}(2, \cdot)\) | 3.43.b.b | 12 | 1 |
Tensor-product Bézier surfaces
Tensor-product Bézier surfaces are a direct generalization of Bézier curves. Instead of a control polygon $\mathbf b_i$, we consider a control net $\mathbf b_{ij}$. A Bézier surface is then described as
\begin{equation} B(u,v) = \sum_{i=0}^{m} \sum_{j=0}^{n} \mathbf b_{ij} B_{i}^{m}(u) B_{j}^{n}(v) \end{equation}
where $m,n$ are the degrees in $u,v$, respectively; $B_{l}^{k}$ are Bernstein polynomials.
Evaluation
To evaluate a point $S(u,v)$ on a surface, let’s fix $j$ and vary $i$.
\begin{equation} \displaystyle B(u,v) = \sum_{j=0}^{n} B_{j}^{n}(v) \underset{= \mathbf b_{j}(u) } { \underbrace{ \left[ \sum_{i=0}^{m} \mathbf b_{ij} B_{i}^{m}(u) \right] } } \end{equation}
The $\mathbf b_{j}(u)$ define a Bézier curve in $u$ and can be evaluated using the De Casteljau algorithm.
\begin{equation} \displaystyle B(u,v) = \sum_{j=0}^{n} \mathbf b_{j}(u)B_{j}^{n}(v) \label{final} \end{equation}
This equation defines a Bézier curve in $v$ with control points $\mathbf b_{j}(u)$ depending on $u$. At the end of the day, we have
$n+1$ evaluations of the De Casteljau for the degree $m$; $1$ evaluation of the De Casteljau for the degree $n$.
Alternatively, we can fix $i$ and vary $j$.
Bézier patch evaluation scheme. (image by Pierre-Luc Manteaux)
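The two-stage scheme above can be sketched in a few lines. This is an illustration only, not the assignment's reference code, and the function names are my own:

```python
import numpy as np

def de_casteljau(ctrl, t):
    # repeated linear interpolation of one control polygon
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_surface_point(net, u, v):
    # net[i][j] = b_ij, an (m+1) x (n+1) x 3 control net
    # stage 1: n+1 curve evaluations in u give the control points b_j(u)
    b_u = np.array([de_casteljau(net[:, j], u) for j in range(net.shape[1])])
    # stage 2: one curve evaluation in v
    return de_casteljau(b_u, v)

# bilinear patch with b_ij = (i, j, 0): the surface is simply (u, v, 0)
net = np.array([[[i, j, 0.0] for j in range(2)] for i in range(2)])
print(bezier_surface_point(net, 0.3, 0.7))  # ≈ [0.3 0.7 0. ]
```

For higher degrees nothing changes: each `net[:, j]` column just has more rows, and `de_casteljau` performs more interpolation sweeps.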
Coordinate matrices
In the code, the control points $\mathbf b_{ij}$ are actually stored in three coordinate matrices $\texttt{Mx}, \texttt{My}, \texttt{Mz}\;$ so that $(\texttt{Mx})_{ij} = (\mathbf b_{ij})_x$, and analogously for $\texttt{My}$ and $\texttt{Mz}$.
This means the code is more comprehensible, as the structure of the matrices directly represents the grid topology of the patch. On the other hand, it also means the computation needs to be done for each coordinate individually.
Bézier surface vs. piecewise Bézier surface
In practice, a Bézier surface often consists of multiple surface patches, each having its own control polygon. Therefore, it is sometimes called a piecewise Bézier surface.
Some surfaces in the \texttt{data/} folder consist of more than one patch, ranging from 2 (heart) to 32 (teapot). They are saved in the BPT format; some datafiles are taken from the website of Ryan Holmes, where you'll also find more details about this format in case you're interested.
The discontinuity in isophotes shows the piecewise Bézier Utah teapot is not $\mathcal C^1$.
ToDo

Implement the evaluation of Bézier surfaces for $(u,v) \in [0,1]^2$. Use \texttt{simple} and \texttt{wave} for first tests (these contain only one patch).

When you're sure the implementation works for the simple cases, test your algorithm on datasets with multiple patches: \texttt{heart} (2), \texttt{sphere} (8), \texttt{teapot} (32), \texttt{teacup} (26), \texttt{teaspoon} (16). Don't set the \texttt{density} parameter too high; always start with smaller values (5 or 10), as the number of computed points is $\texttt{density}^2$.
I am trying to build a systematic formula for how magic operates. This is based on magic as energy manipulation and transformation. It can be based on historical or modern esoteric magic principles.
I found the question Equation for magic system involving negative resistance and exposure time, but its basic assumptions are different and the equations seem overly complicated for a non-scientist.
Very tentatively, I came up with
$$E= \frac{PF}{R}$$
Energy/effect of the Spell = (Power used $\times$ Focus) $\div$ Resistance
$P$ = Amount of power being generated, Source of the energy and how well it is accumulated
$F$= Focus of the power, how well the energy is controlled/directed
$R$= Resistance to the desired effect based on the size of the effect, the laws of nature, easiness of the result, R could need a formula of its own.
So for instance, if one taps universal energy by connecting to an external source, $P$ is huge, but it is so unwieldy and uncontrolled that $F$ is tiny, whereas if one uses internal energy, like by meditating, $P$ is very small, but the focus is high.
If the "Belief" of the user is a factor as in esoteric magic, I could add a factor of $B$ (Belief that the magic will succeed). Also, since "Will" is an important part of ceremonial magic, it could be factored in, though it is already implied as part of the Focus.
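As a toy illustration of how these factors might interact, here is a sketch with purely hypothetical numbers; the belief multiplier \(B\) is the optional extension discussed above and defaults to 1 so the base formula is unchanged:

```python
def spell_energy(power, focus, resistance, belief=1.0):
    # E = (P * F * B) / R
    if resistance <= 0:
        raise ValueError("resistance must be positive")
    return power * focus * belief / resistance

# tapping a huge external source with poor control...
external = spell_energy(power=1000.0, focus=0.05, resistance=10.0)
# ...versus a small internal source with high focus
internal = spell_energy(power=10.0, focus=0.9, resistance=10.0)
print(external, internal)  # 5.0 0.9
```

With these made-up numbers the raw external source still wins, which suggests that if you want internal/meditative magic to be competitive, \(F\) may need to enter nonlinearly (e.g. as an exponent) rather than as a plain multiplier.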
Any ideas about other factors that should be taken into consideration? |
This crossword clue is for the definition: Connection. It's a 10-letter crossword puzzle definition. Next time, when searching for online help with your puzzle, try using the search term "Connection crossword" or "Connection crossword clue". The possible answers for Connection are listed below.
Did you find what you needed? We hope you did!
Possible Answers: NEXUS.
Last seen on: LA Times Crossword 23 Oct 18, Tuesday
Random information on the term “Connection”:
Geometry of quantum systems (e.g., noncommutative geometry and supergeometry) is mainly phrased in algebraic terms of modules and algebras. Connections on modules are a generalization of a linear connection on a smooth vector bundle $E \to X$, written as a Koszul connection on the $C^{\infty}(X)$-module of sections of $E \to X$.[1]
Let $A$ be a commutative ring and $P$ an $A$-module. There are different equivalent definitions of a connection on $P$.[2] Let $D(A)$ be the module of derivations of the ring $A$. A connection on an $A$-module $P$ is defined as an $A$-module morphism
Random information on the term “NEXUS”:
The Dragonlance Nexus is a Dragonlance fansite that was created in 1996 as “Dragon Realm”. The site was overhauled and a new name was given to it as the “Dragonlance Nexus”. Beginning on November 28, 2005, the site began publishing articles written by established authors starting with an article on Jaymes Markham by the author Douglas Niles.[2][3] Other authors have contributed to the Lexicon, such as Nancy Varian Berberick,[4][5][6] Mary H. Herbert,[7][8] Kevin T. Stein,[9][10] and more recently Jean Rabe.[11] Some of the articles found in the site have been published in the Dragonlance Campaign Setting by Sovereign Press. The site won the gold ENnie for best fan site on August 16, 2007 at Gen Con.[12]
The Dragonlance Nexus traces its origins past 1996, but its current form was launched in January 2001.
The origin of the Nexus begins with a small site called “the Dragon’s Realm.” The site was started in the summer of 1996 as an experiment by long-time staff member Matt Haag aka Paladin to learn HTML and to talk about some of the AD&D Gold Box videogames he was playing at the time. |
We know that if two topological spaces $X$ and $Y$ are homeomorphic, then they have the same fundamental groups, and the same homology. In other words, we have functors$$\pi_1 : \mathsf{Top} \to \mathsf{Grp} \quad\text{and}\quad H_n : \mathsf{Top} \to \mathsf{Ab}$$(actually this works even if the spaces are homotopy equivalent). The important thing here is that these functors can be used to prove that the two spaces are
not homeomorphic: for instance $H_3(S^3) \cong \Bbb Z \not\cong 0 = H_3(S^2)$, so that $S^3$ and $S^2$ are not homeomorphic (they don't even have the same homotopy type).
I was wondering whether there was somehow a "converse" to this, i.e. is there a way to prove that two topological spaces are homeomorphic. More precisely:
Is there a category $\scr C$ and a functor $\mathsf F : \mathsf{Top} \to \mathscr C$ such that $\mathsf F(X) \cong \mathsf F(Y) \implies X \cong Y$ ? Of course, I want to avoid obvious examples as $\mathsf{Id_{Top}}$ .
(By the way, I don't know if there is a name for such functors, which are injective on objects.
Faithful is already used for something different). I would also accept discussing the case where the homeomorphism $ X \cong Y$ is replaced by a homotopy equivalence $X \simeq Y$.
The closest result I found is a theorem due to Gelfand and Kolmogorov: given two compact Hausdorff spaces, if the commutative rings $C(X)$ and $C(Y)$ of continuous functions $f\colon X, Y \rightarrow \mathbb{R}$ (under pointwise addition and multiplication) are isomorphic, then $X$ and $Y$ are homeomorphic. Maybe we could try to generalize this to the category of locally compact Hausdorff spaces, using the Alexandroff compactification.
Thank you for your comments! |
If one has a sufficiently smooth function $u$ that is approximated by a piecewise constant function $u_h=\Pi^0_h u$ on a mesh of cell size $h$ (where $\Pi^0_h$ is the $L_2$ projection onto the constants on each cell), then it is not hard to see graphically (or from the first term of the Taylor expansion) that the size of jumps must be ${\cal O}(h)$.
But what if I use piecewise linear functions? Taylor expansion gives me that I should expect the difference between $u$ and $u_h$ at the end points of each cell to be ${\cal O}(h^2)$. But it is not inconceivable that the "jump" of $u_h$ at these end points (i.e., the difference $|u_h(x_+)-u_h(x_-)|$ when approaching the point $x$ that separates two cells from the left and right) might be of higher order than just two. How about projections $u_h$ into even higher order discontinuous spaces — say, of polynomial degree $k$: Is the jump then of size ${\cal O}(h^{k+1})$, or is it even better?
I am certain this is a pretty standard question for those who do error estimation for Discontinuous Galerkin finite element methods, but it's a bit outside my theoretic knowledge — and so help would be appreciated!
As a side note, here's what I'm really after: I have an estimate that involves on the right hand side the term $h^{s+1} \|u_h\|_{H^s}$. Since $u_h=\Pi^k_h u$ is discontinuous across cell boundaries, it's not in $H^1$ or higher, but only in $L_2$. So $s=0$ is really the only choice. That's likely the best I can do for piecewise constants ($k=0$).
But at least for higher degree piecewise polynomials, the jump becomes pretty small, and so while $\Pi_h^k u\not\in H^s$ for $s>0$, one would expect that one could find an asymptotically equivalent norm $\|\cdot\|_\ast$ so that $\|\Pi_h^k u\|_{\ast,H^s} \le C h^{-p} \|u\|_{H^s}$ or some such, for a power $p$ that is probably related to the polynomial degree $k$ of the space into which we project and, I believe, the space dimension. This estimate blows up, of course, as $h\rightarrow 0$, but at a rate that is predictable.
I want this because then I could replace my poor estimate $h\|u_h\|_{L_2}$ by something more like $h^{s-p} \|u\|_{H^s}$. The question is what $p$ needs to be: If the blow-up happens sufficiently slowly, then the power $s-p$ may be substantially better than the order 1 one gets by just taking the simple route and estimating via $h\|\Pi_h^k u\|_{L_2}$. Calculating $p$ clearly involves the size of the jump of $\Pi_hu$ from one cell to the next, thus my question. |
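Not an answer, but the jump question is easy to probe numerically. The sketch below (with NumPy; the test function, meshes, and function name are my own arbitrary choices) $L_2$-projects a smooth function onto discontinuous piecewise polynomials of degree $k$ via a per-cell Legendre expansion and reports the observed convergence rate of the largest inter-cell jump under refinement; the rate is guaranteed to be at least $k+1$, and for odd $k$ a cancellation between neighboring cells can make it better:

```python
import numpy as np

def max_projection_jump(f, k, n_cells):
    # L2-project f onto broken polynomials of degree k on a uniform mesh of
    # [0, 1] (per-cell Legendre expansion), then return the largest jump
    # |u_h(x+) - u_h(x-)| over the interior cell interfaces.
    nodes, weights = np.polynomial.legendre.leggauss(k + 5)
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    left, right = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        fx = f(0.5 * (b - a) * nodes + 0.5 * (a + b))  # f at mapped quad points
        # Legendre coefficients on the reference cell:
        # c_j = (2j+1)/2 * \int_{-1}^{1} f P_j dt
        coef = [(2 * j + 1) / 2 * np.sum(weights * fx *
                np.polynomial.legendre.Legendre.basis(j)(nodes))
                for j in range(k + 1)]
        proj = np.polynomial.legendre.Legendre(coef)
        left.append(proj(-1.0))   # trace at the cell's left edge
        right.append(proj(1.0))   # trace at the cell's right edge
    return max(abs(l - r) for l, r in zip(left[1:], right[:-1]))

for k in (0, 1, 2):
    coarse = max_projection_jump(np.sin, k, 8)
    fine = max_projection_jump(np.sin, k, 16)
    print(k, np.log2(coarse / fine))  # observed rate of the jump in h
```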
Conference Paper
Carbides and grain defect formation in directionally solidified nickel-base superalloys. Advanced Technologies for Superalloy Affordability as held at the 2000 TMS Annual Meeting. :2000.. 2000.
A comparative examination of aging and creep behavior of die-cast MRI230D and AXJ530. Symposium on Magnesium Technology 2008 (TMS 9 March 2008 to 13 March 2008). :117–122.. 2008.
Compression creep behavior of B2 Al-Ni-Ru ternary alloys. Advanced Intermetallic-Based Alloys (MRS Symposium Proceedings Series Volume 980). 980:45–50. 2007.
Compression Creep Behavior of B2 AL-Ni-Ru Ternary Alloys. MRS Proceedings. 980:0980–II01.. 2006.
Journal Article
Carburization of W-and Re-rich Ni-based alloys in impure helium at 1000° C. Corrosion Science. 53:388–398.. 2011.
Cast gamma titanium aluminides for low pressure turbine blades: a design case study for intermetallics. Minerals, Metals and Materials Society/AIME, Structural Intermetallics 2001(USA),. :3–12.. 2001.
Cast structure and property variability in gamma titanium aluminides. Intermetallics. 6:629–636.. 1998.
Chromia-Assisted Decarburization of W-Rich Ni-Based Alloys in Impure Helium at 1273 K (1000° C). Metallurgical and Materials Transactions A. 42:1229–1244.. 2011.
A combinatorial investigation of palladium and platinum additions to $\beta$-NiAl overlay coatings. Acta Materialia. 77:379–393.. 2014.
A combined grain scale elastic–plastic criterion for identification of fatigue crack initiation sites in a twin containing polycrystalline nickel-base superalloy. Acta Materialia. 103:461–473.. 2016.
A comparative analysis of low temperature deformation in B2 aluminides. Materials Science and Engineering: A. 317:241–248.. 2001.
A comparative investigation of oxide formation on EQ (Equilibrium) and NiCoCrAlY bond coats under stepped thermal cycling. Surface and Coatings Technology. 205:3066–3072.. 2011.
Crack progression during sustained-peak low-cycle fatigue in single-crystal Ni-base superalloy René N5. Metallurgical and Materials Transactions A. 41:947–956.. 2010.
Creep and directional coarsening in single crystals of new $\gamma$–$\gamma$′ cobalt-base alloys. Scripta Materialia. 66:574–577.. 2012.
Creep and Elemental Partitioning Behavior of Mg-Al-Ca-Sn Alloys with the Addition of Sr. Magnesium Technology 2011. :215–222.. 2011.
Creep behavior under isothermal and non-isothermal conditions of AM3 single crystal superalloy for different solutioning cooling rates. Materials Science and Engineering: A. 601:145–152.. 2014.
Creep deformation and the evolution of precipitate morphology in nickel-based single crystals. Modelling of Microstructural Evolution in Creep Resistant Materials. :1998.. 1998.
Creep deformation mechanisms in Ru-Ni-Al ternary B2 alloys. Metallurgical and Materials Transactions A. 39:39–49.. 2008.
Creep deformation-induced antiphase boundaries in L1$_2$-containing single-crystal cobalt-base superalloys. Acta Materialia. 77:352–359. 2014.
Creep of $\alpha_2 + \beta$ Titanium Aluminide Alloys. ISIJ International. 31:1139–1146. 1991.
Creep resistance of CMSX-3 nickel base superalloy single crystals. Acta Metallurgica et Materialia. 40:1–30.. 1992.
Creep resistance of CMSX-3 nickel-base superalloy single crystals (erratum to vol. 40, pg. 1, 1992). Acta Metallurgica et Materialia. 41:2253. 1993.
Creep resistance of nickel-base superalloy single crystals. Creep and fracture of engineering materials and structures. :287–301.. 1990.
Creep-induced planar defects in L1$_2$-containing Co- and CoNi-base single-crystal superalloys. Acta Materialia. 82:530–539. 2015.
The Critical Role of Shock Melting in Ultrafast Laser Machining. Minerals, Metals and Materials Society/AIME. Feb. 2011.
Boundedness in a two-species chemotaxis parabolic system with two chemicals
1. College of Mathematic & Information, China West Normal University, Nanchong, 637002, China
2. School of Sciences, Southwest Petroleum University, Chengdu, 610500, China
$\left\{\begin{aligned}&u_t=Δ u-χ\nabla·(u\nabla v), &x∈Ω,\,t>0,\\& τ v_t=Δ v-v+w, &x∈Ω,\,t>0,\\&w_t=Δ w-ξ\nabla·(w\nabla z), &x∈Ω,\,t>0,\\& τ z_t=Δ z-z+u, &x∈Ω,\,t>0,\end{aligned}\right.$
$χ, \, ξ∈\mathbb{R}$
$Ω\subset\mathbb{R}^2$
$\int_Ω u_0dx$
$\int_Ω w_0dx$, i.e., the case of $τ=1$.

Mathematics Subject Classification: Primary: 35A01, 35B44, 35K57, 35Q92; Secondary: 92C17.

Citation: Xie Li, Yilong Wang. Boundedness in a two-species chemotaxis parabolic system with two chemicals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2717-2729. doi: 10.3934/dcdsb.2017132
Here is another way to see this, so you don't have to take as many powers as in azimut's answer.
Theorem. Suppose that $p$ is an odd prime. If $g\bmod p$ is a primitive root for $\mathbb{Z}/p\mathbb{Z}$ and $g^{p-1}\not\equiv 1 \bmod p^2$, then $g$ is also a primitive root of $p^2$. If $g^{p-1}\equiv 1 \bmod p^2$, then $g+p$ is a primitive root of $p^2$.
Now let's apply this result to your example, when $p=7$. In $\mathbb{Z}/7\mathbb{Z}$ there are $\phi(\phi(p))=\phi(6)=2$ primitive roots, namely $3$ and $5$. In $\mathbb{Z}/49\mathbb{Z}$ there are $\phi(\phi(49))=\phi(42)=\phi(6)\phi(7)=12$ primitive roots. If $h$ is a primitive root modulo $49$, then $h$ is also a primitive root modulo $7$, so $h\equiv 3$ or $5\bmod 7$. There are seven such $h\equiv 3$ and seven such $h\equiv 5\bmod 7$, so we have $14$ candidates for primitive roots, of which $12$ are primitive roots. By the theorem, those which are
not primitive roots must satisfy $h^{p-1}\equiv 1 \bmod p^2$. Clearly$$(3^7)^6\equiv 3^{(7\cdot 6)}\equiv 1 \quad \text{ and } \quad (5^7)^6\equiv 5^{(7\cdot 6)}\equiv 1 \bmod 49,$$by Euler's theorem, because $\phi(49)=42=7\cdot 6$. Since $3^7\equiv 3 \bmod 7$ and $5^7\equiv 5\bmod 7$, by Fermat's little theorem, we conclude that $3^7$ and $5^7$ are the two exceptions:$$3^7\equiv 31 \bmod 49, \quad \text{ and } \quad 5^7\equiv 19 \bmod 49.$$Hence, the set of primitive roots modulo $49$ is:$$\{3+7k: 0\leq k\leq 6, k\neq 4\} \quad \text{ and } \quad \{5+7j: 0\leq j\leq 6, j\neq 2\}.$$
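As a quick computational check of this count (my own sketch, not part of the argument above), one can compute multiplicative orders directly and compare with the two families:

```python
def mult_order(g, m):
    """Multiplicative order of g modulo m (assumes gcd(g, m) = 1)."""
    x, k = g % m, 1
    while x != 1:
        x = x * g % m
        k += 1
    return k

# Primitive roots mod 49 are exactly the units of order phi(49) = 42.
proots = [g for g in range(1, 49) if g % 7 != 0 and mult_order(g, 49) == 42]
expected = sorted([3 + 7 * k for k in range(7) if k != 4] +
                  [5 + 7 * j for j in range(7) if j != 2])
print(proots == expected)  # -> True (and len(proots) == 12)
```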
Q. If the magnetic field of a plane electromagnetic wave is given by (the speed of light $c = 3 \times 10^8 \; \mathrm{m/s}$)
$B = 100 \times 10^{-6} \; \sin [ 2 \pi \times 2 \times 10^{15} ( t - \frac{x}{c} ) ] $ then the maximum electric field associated with it is:
Solution:
$E_0 = B_0 \times c = 100 \times 10^{-6} \times 3 \times 10^8 = 3 \times 10^4 \; \mathrm{N/C}$. $\therefore$ the correct answer is $3 \times 10^4 \; \mathrm{N/C}$.
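A quick arithmetic sanity check (my addition, not part of the original solution):

```python
# For a plane EM wave in vacuum, the peak fields satisfy E0 = c * B0.
B0 = 100e-6          # peak magnetic field, tesla
c = 3e8              # speed of light, m/s
E0 = B0 * c          # peak electric field, ~3e4 N/C
```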
I found this to be an interesting question, a solenoid when supplied with a current is a form of electromagnet, and your question led me to consider a rectangular electromagnet as a counter example.
I think, if my calculations are valid, that this result is just a coincidence.
At any rate, my logic:
Define the $x$ axis to be along the principal axis of the solenoid.
Cylinder
Using the Biot-Savart Law, we can calculate the magnetic field on axis for a current loop:
$$ d\textbf{B} = \frac{\mu_{0}I}{4\pi|\textbf{r-r'}|^{3}}d\textbf{l}\times(\textbf{r-r'})$$
We can ignore the contributions in the $y$ and $z$ directions, as they will cancel out around the loop. Integrating around the loop:
Let $\textbf{r}-\textbf{r}' = \textbf{R}$, and let $\phi$ be the azimuthal angle around the loop. $$B_{x} = \int_{0}^{2\pi}{ \frac{\mu_{0}I}{4\pi|\textbf{R}|^{2}}}a\sin\theta \,\mathrm{d}\phi = \frac{\mu_{0}Ia\sin\theta}{2|\textbf{R}|^{2}}$$
where $\theta$ is the angle between the $x$ axis and $\textbf{R}$.
Now we consider the cylinder, which we think of as an infinite number of current loops, each carrying current $In\,\mathrm{d}x$, where $n$ is the density of current turns, $\frac{N}{2a}$. Integrating this from one end of the finite cylinder to the other, changing the integral to be in terms of $\theta$ for convenience:
$$B_{x} = \int_{-a}^{a}{\frac{\mu_{0}Ina\sin\theta}{2R^{2}}\,\mathrm{d}x}$$
$R = \frac{a}{\sin\theta}$, $x = -R\cos\theta \implies \mathrm{d}x=\frac{R^{2}}{a}\,\mathrm{d}\theta$
$$B_{x} = \int_{0}^{\pi}{\frac{\mu_{0}In\sin\theta}{2}\,\mathrm{d}\theta} = \mu_{0}In = \frac{\mu_{0}IN}{2a}$$
This corresponds to your first result, as the field is along the x axis.
Spherical
This is a similar calculation, but the radius of the current loop changes with x, while the vector $\textbf{R}$ has a constant magnitude $a$.
Conducting the same finite integral as before, but changing to suit the spherical system, $\rho$ is the radius of the current loop:
$$B_{x} = \int_{-a}^{a}{\frac{\mu_{0}In\rho \sin\theta}{2a^{2}}\,\mathrm{d}x}$$
$\rho = a\sin\theta$, $x = -a\cos\theta \implies \mathrm{d}x = a\sin\theta \,\mathrm{d}\theta$
$$B_{x} = \int_{0}^{\pi}{\frac{\mu_{0}In\sin^{3}\theta}{2}\,\mathrm{d}\theta} = \frac{2\mu_{0}In}{3} = \frac{\mu_{0}IN}{3a}$$
Your second result.
Square
Consider a square current loop, of side length $2a$. By using the Biot-Savart law from above and some nifty cancellations, we can calculate the on axis field for one side of the loop. A similar procedure to the loop gives the x component of the field contributed by an infinitesimal length on the side as:
$$dB_{x} = \frac{a\mu_{0}I}{4\pi R^{3}}dl$$
If we consider the edge to be aligned with the $y$ direction, and expand $R$ before integrating across the edge:
$$B_{xside} = \int_{-a}^{a}\frac{a\mu_{0}I}{4\pi(x^2+y^2+a^2)^{\frac{3}{2}}}dy = \frac{a^2\mu_{0}I}{2\pi(2a^{2}+x^{2})^{\frac{1}{2}}(a^2+x^2)}$$
From the symmetry of the situation, it can be seen that each side of the square current loop contributes equally, so the total magnetic field in the $x$ direction is 4 times the prior value.
Again, we consider the infinite number of current loops:
$$B_{x} = \int_{-a}^{a}\frac{2a^2\mu_{0}nI}{\pi(2a^{2}+x^{2})^{\frac{1}{2}}(a^2+x^2)}\,\mathrm{d}x = \frac{2a^{2}\mu_{0}nI}{\pi}\left[\frac{\tan^{-1}\!\left(\frac{x}{(2a^{2}+x^{2})^{1/2}}\right)}{a^{2}}\right]_{-a}^{a} = \frac{2\mu_{0}In}{3} = \frac{\mu_{0}IN}{3a}$$
Now, comparing the volumes: the volume of this square solenoid is $8a^3$, which is $\frac{4}{\pi}$ times the cylindrical volume, but the field is $\frac{2}{3}$ times the cylindrical field.
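As a numerical cross-check (my addition; taking $a = 1$ and dividing out the common factor $\mu_0 I n$), the three integrals above reduce to the dimensionless constants $1$, $2/3$ and $2/3$:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    return (f(a) + f(b)
            + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))) * h / 3

# Cylinder: integral of sin(t)/2 over [0, pi]                  -> 1
cyl = simpson(lambda t: math.sin(t) / 2, 0.0, math.pi)
# Sphere: integral of sin(t)^3/2 over [0, pi]                  -> 2/3
sph = simpson(lambda t: math.sin(t) ** 3 / 2, 0.0, math.pi)
# Square: integral of 2/(pi*sqrt(2+x^2)*(1+x^2)) over [-1, 1]  -> 2/3
sqr = simpson(lambda x: 2.0 / (math.pi * math.sqrt(2 + x * x) * (1 + x * x)),
              -1.0, 1.0)
```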
So, the neat consistency between the fields and volumes for the spherical and cylindrical cases doesn't hold for a rectangular electromagnet. Unfortunately, I currently can't give you any ideas as to how to approach this from a more theoretical, thought-based angle rather than a bunch of nasty integrals, nor can I say whether the relation between the spherical and cylindrical systems is due to the similarities between the two coordinate systems, or is something more fundamental.
Anyway, I hope this answer was at least interesting to you, as I'm not sure it's too much help, but I had fun.
My question is similar to this and this. I would like to have the theorem number in the left margin (0.6em spacing to the main text), and the name 'Theorem' should be justified to the main text. The solution should be made with the amsthm package, because I've made my very specific proof environment with \renewenvironment from amsthm.
\documentclass[12pt,a4paper]{article}
\usepackage[ngerman]{babel}
\usepackage{lipsum,lmodern,libertine}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{amsthm}
\newtheoremstyle{mystyle}%
  {\topsep}   % space above
  {\topsep}   % space below
  {}          % body font
  {}          % indent
  {\bfseries} % head font
  {}          % punctuation after head
  {0.6em}     % space after head
  {#2 #1 #3}  % theorem head spec
\theoremstyle{mystyle}
\newtheorem{theorem}{Theorem}
\begin{document}
\lipsum[1]
\begin{theorem}
Let $f\colon [a,b]\to\mathbb{R}$ be continuous and let $F$ be an antiderivative of $f$, then
\begin{align*}
\int_a^b f(x)\,\mathrm{d}x=F(b)-F(a).
\end{align*}
\end{theorem}
\end{document}
This looks like
But it should look like this (this was made with the
ntheorem package): |
Difference between revisions of "Fujimura.tex"
Revision as of 16:13, 25 July 2009
\section{Fujimura's problem}\label{fujimura-sec}
Let $\overline{c}^\mu_n$ be the size of the largest subset of the triangular grid $$\Delta_n := \{(a,b,c)\in {\mathbb Z}^3_+ : a+b+c = n\}$$ which contains no equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ with $r>0$. These are upward-pointing equilateral triangles. We shall refer to such sets as 'triangle-free'. (Kobon Fujimura is a prolific inventor of puzzles, and in this puzzle asked the related question of eliminating all equilateral triangles.)
The following table was formed mostly by computer searches for optimal solutions. We also found human proofs for most of them (see {\tt http://michaelnielsen.org/polymath1/index.php?title=Fujimura's\_problem}).
\begin{figure} \centerline{ \begin{tabular}{l|llllllllllllll} $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13\\ \hline $\overline{c}^\mu_n$ & 1 & 2 & 4 & 6 & 9 & 12 & 15 & 18 & 22 & 26 & 31 & 35 & 40 & 46 \end{tabular} } \label{lowFujimura} \caption{Fujimura numbers} \end{figure}
For any equilateral triangle $(a+r,b,c)$, $(a,b+r,c)$ and $(a,b,c+r)$, the value $a+2b$ forms an arithmetic progression of length 3. A Behrend set is a finite set of integers with no arithmetic progression of length 3 (see {\tt http://arxiv.org/PS\_cache/arxiv/pdf/0811/0811.3057v2.pdf}). By looking at those triples $(a,b,c)$ with $a+2b$ inside a Behrend set, one can obtain the lower bound $\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))$.
It can be shown by a 'corners theorem' of Ajtai and Szemeredi that $\overline{c}^\mu_n = o(n^2)$ as $n \rightarrow \infty$.
An explicit lower bound is $3(n-1)$, made of all points in $\Delta_n$ with exactly one coordinate equal to zero.
An explicit upper bound comes from counting the triangles. There are $\binom{n+2}{3}$ triangles, and each point belongs to $n$ of them. So you must remove at least $(n+2)(n+1)/6$ points to remove all triangles, leaving $(n+2)(n+1)/3$ points as an upper bound for $\overline{c}^\mu_n$. |
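The first few table entries are small enough to verify by exhaustive search; here is a brute-force sketch (my addition, feasible only for very small $n$ since it enumerates all subsets):

```python
def fujimura(n):
    """Largest subset of Delta_n containing no upward equilateral triangle."""
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    idx = {p: i for i, p in enumerate(pts)}
    tri_masks = []
    for r in range(1, n + 1):  # triangles (a+r,b,c), (a,b+r,c), (a,b,c+r)
        for a in range(n - r + 1):
            for b in range(n - r + 1 - a):
                c = n - r - a - b
                tri_masks.append((1 << idx[(a + r, b, c)])
                                 | (1 << idx[(a, b + r, c)])
                                 | (1 << idx[(a, b, c + r)]))
    # Enumerate all subsets as bitmasks; keep the largest triangle-free one.
    return max(bin(m).count("1") for m in range(1 << len(pts))
               if all((m & t) != t for t in tri_masks))

print([fujimura(n) for n in range(4)])  # -> [1, 2, 4, 6], matching the table
```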
Apparently, Rogawski's Calculus for AP contains the following problem:
108. Explain why L'Hôpital's rule does not apply to
$$ \lim_{x\rightarrow 0}\frac{x^2\sin\frac{1}{x}}{\sin x} $$
It seems to me that it does apply:
L'Hôpital's rule says: if $\lim_{x\rightarrow c}f(x)=\lim_{x\rightarrow c}g(x)=0$, both $f$ and $g$ are differentiable at $x=c$, and $g'(c)\ne 0$, then $\lim_{x\rightarrow c}\frac{f(x)}{g(x)}$ exists and is equal to $\frac{f'(c)}{g'(c)}$. (Note that nothing is assumed about differentiability of $f$ and $g$ other than at $x=c$.)
Define the numerator $f(x)=x^2\sin\frac{1}{x}$ to be $f(0)=0$ at $x=0$. Now, both numerator $f$ and denominator $g(x)=\sin(x)$ are continuous at $x=0$ and their values are $f(0)=g(0)=0$.
The numerator $f$ is differentiable at $x=0$ and the derivative is $f'(0)=0$ (the derivative itself is discontinuous at 0, but that is irrelevant - even the existence of the derivative at any point other than 0 does NOT matter). One can see that from the definition of the derivative: $f'(0)=\lim_{h\rightarrow 0} \frac{h^2\sin\frac{1}{h}}{h} = \lim_{h\rightarrow 0} h\sin\frac{1}{h} = 0$ (see PS step 2 below).
The denominator $g$ is differentiable at $x=0$ and the derivative is $g'(0)=\cos 0=1$.
Thus the limit is $\frac{0}{1} = 0$.
What am I missing?
PS. Note that I am not asking why the limit is 0. That can be easily seen without L'Hôpital:
$\lim_{x\rightarrow 0}\frac{x}{\sin x} = 1$: this is the reciprocal of the standard limit $\lim_{x\rightarrow 0}\frac{\sin x}{x} = 1$.
$\lim_{x\rightarrow 0} x \sin\frac{1}{x} = 0$ because $\sin\frac{1}{x}$ is bounded and $\lim_{x\rightarrow 0} x = 0$, this follows from Squeeze theorem.
the Product Rule for Limits implies that $$\lim_{x\rightarrow 0}\frac{x^2\sin\frac{1}{x}}{\sin x} = \lim_{x\rightarrow 0}x\sin\frac{1}{x} \times \lim_{x\rightarrow 0}\frac{x}{\sin x} = 0 \times 1 = 0$$
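A quick numeric illustration of that limit (my addition, not from the textbook):

```python
import math

# |f(x)| = |x^2 sin(1/x) / sin(x)| <= x^2/|sin x| ~ |x|, so the ratio
# shrinks to 0 as x -> 0 (non-monotonically, since sin(1/x) oscillates).
f = lambda x: x ** 2 * math.sin(1 / x) / math.sin(x)
samples = [f(10.0 ** -k) for k in range(1, 8)]
```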
PPS Here is the scan from the textbook: |
Could anyone try to prove that the conjectured formula below is valid for relating $\pi$ with ALL of its convergents, those described in OEIS via A002485(n)/A002486(n)?
$$(-1)^n\cdot(\pi - \text{A002485}(n)/\text{A002486}(n))$$$$=(|i|\cdot2^j)^{-1} \int_0^1 \big(x^l(1-x)^{2(j+2)}(k+(i+k)x^2)\big)/(1+x^2)\; dx$$ (1)
and in unformatted form:
(-1)^n*(Pi - A002485(n)/A002486(n)) = (abs(i)*2^j)^(-1) *
Int(x^l*(1-x)^(2*(j+2))*(k+(i+k)*x^2)/(1+x^2), x=0..1)
where integer $n = 0,1,2,3,...$ serves as the index for terms in OEIS A002485(n) and A002486(n), and $\{i, j, k, l\}$ are some integers (to be found experimentally or otherwise), which are probably some functions of $n$.
The "interesting" (I think) part of my generalization conjecture is that both "$i$" and "$j$" are present both in the denominator of the coefficient in front of the integral and in the body of the integral itself.
At this time it can be shown that the formula under question is applicable for the first few convergents (of the A002485(n)/A002486(n) type).
1) For example for $\frac{22}{7}$
$$\frac{22}{7} - \pi = \int_{0}^{1}\frac{x^4(1-x)^4}{1+x^2}\,\mathrm{d}x$$
with $n=3, i=-1, j=0, k=1, l=4$ - with regards to my above suggested generalization.
In Maple notation
i:=-1; j:=0; k:=1; l:=4;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields 22/7 - Pi
2) It also works for the formula found by Lucas for $\frac{333}{106}$
$$\pi - \frac{333}{106} = \frac{1}{530}\int_{0}^{1}\frac{x^5(1-x)^6(197+462x^2)}{1+x^2}\,\mathrm{d}x$$
with $n=4, i=265, j=1, k=197, l=5$ -with regards to my above suggested generalization.
In Maple notation
i:=265; j:=1; k:=197; l:=5;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields Pi - 333/106
3) And it works for Lucas's formula for $\frac{355}{113}$
$$\frac{355}{113} - \pi = \frac{1}{3164}\int_{0}^{1}\frac{x^8(1-x)^8(25+816x^2)}{1+x^2}\,\mathrm{d}x$$
with $n=5, i=791, j=2, k=25, l=8$ -with regards to my above suggested generalization.
In Maple notation
i:=791; j:=2; k:=25; l:=8;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields 355/113 - Pi
4) And it works as well for Lucas's formula for $\frac{103993}{33102}$
$$\pi - \frac{103993}{33102} = \frac{1}{755216}\int_{0}^{1}\frac{x^{14}(1-x)^{12}(124360+77159x^2)}{1+x^2}\,\mathrm{d}x$$
with $n=6, i= -47201, j=4, k=124360, l=14$ -with regards to my above suggested generalization.
In Maple notation
i:=-47201; j:=4; k:=124360; l:=14;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields Pi - 103993/33102
5) And also it works Lucas's formula for $\frac{104348}{33215}$
$$\frac{104348}{33215} - \pi = \frac{1}{38544}\int_{0}^{1}\frac{x^{12}(1-x)^{12}(1349-1060x^2)}{1+x^2}\,\mathrm{d}x$$
with $n=7, i= -2409, j=4, k=1349, l=12$ - with regards to my above suggested generalization.
In Maple notation
i:=-2409; j:=4; k:=1349; l:=12;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields 104348/33215 - Pi
6) And it works as well for $\frac{618669248999119}{196928538206400}$
which, by the way, is not part of A002485/A002486 OEIS sequences:
$$\frac{618669248999119}{196928538206400} - \pi = \frac{1}{755216}\int_{0}^{1}\frac{x^{14}(1-x)^{12}(77159+124360x^2)}{1+x^2}\,\mathrm{d}x$$
with $i= 47201, j=4, k=77159, l=14$ -with regards to my above suggested generalization.
In Maple notation
i:=47201; j:=4; k:=77159; l:=14;
Int(x^l*(1-x)^(2*(j+2))*(k+(k+i)*x^2)/((1+x^2)*abs(i)*2^j), x=0..1)
yields 618669248999119/196928538206400 - Pi
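For what it's worth, cases 1) through 5) can also be checked numerically; the sketch below (my own code, using plain Simpson quadrature rather than Maple) reproduces $|\pi - p/q|$ from the conjectured right-hand side for each listed $(i,j,k,l)$:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    return (f(a) + f(b)
            + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))) * h / 3

# (p, q, i, j, k, l): convergent p/q of pi with the conjectured parameters.
cases = [
    (22, 7, -1, 0, 1, 4),
    (333, 106, 265, 1, 197, 5),
    (355, 113, 791, 2, 25, 8),
    (103993, 33102, -47201, 4, 124360, 14),
    (104348, 33215, -2409, 4, 1349, 12),
]
max_err = 0.0
for p, q, i, j, k, l in cases:
    g = lambda x, i=i, j=j, k=k, l=l: (
        x ** l * (1 - x) ** (2 * (j + 2)) * (k + (i + k) * x ** 2) / (1 + x ** 2))
    rhs = simpson(g, 0.0, 1.0) / (abs(i) * 2 ** j)
    max_err = max(max_err, abs(rhs - abs(math.pi - p / q)))
```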
This question relates to my answer given in https://math.stackexchange.com/questions/1956/is-there-an-integral-that-proves-pi-333-106/127618#127618
Update: Recently Thomas Baruchel (see his answer at https://math.stackexchange.com/questions/860499/seeking-proof-for-the-formula-relating-pi-with-its-convergents ) has conducted extensive calculations and found that even the parametric formula (with four parameters) yields infinite number of solutions for each n.
Thomas shared with me his calculations results and supplied me with quite a few of valid combinations of i, j, k, l values - so now I have a lot of experimentally found five-tuples {n,i, j, k, l}, which satisfy above parameterization, where n varies in the range from 2 to 26.
Based on this data, of course, it would be nice to find how (if at all) i, j, k, l are inter-related between each other and with "n" - but such inter-relation (if exists) is not obvious and difficult to derive just by observation ... (though it is clearly seen that an absolute value of "i" is strongly increasing as "n" is growing from 2 to 26).
RHS could be reduced (after performing integration - please let me know if I made a mistake in doing this) to:
(abs(i)*2^j)^(-1)*Gamma(2*j+5)*((k+i)*Gamma(l+3)*HypergeometricPFQ(1,l/2+3/2,l/2+2;j+l/2+4,j+l/2+9/2;-1)/Gamma(2*j+l+8)+k*Gamma(l+1)*HypergeometricPFQ(1,l/2+1/2,l/2+1;j+l/2+3, j+l/2+7/2;-1)/Gamma(2*j+l+6))
Maybe from the discussed parametric identity one could derive an irrationality measure for $\pi$, if one assumes that the RHS in this identity holds true when the rational fraction on the LHS is equal to 0? Are there any $\{i,j,k,l\}$ which would satisfy such a condition?
Pi = (abs(i)*2^j)^(-1)*Gamma(2*j+5)*((k+i)*Gamma(l+3)*HypergeometricPFQ(1,l/2+3/2,l/2+2;j+l/2+4,j+l/2+9/2;-1)/Gamma(2*j+l+8)+k*Gamma(l+1)*HypergeometricPFQ(1,l/2+1/2,l/2+1;j+l/2+3, j+l/2+7/2;-1)/Gamma(2*j+l+6))
Update #2:
Thanks to Jaume Oliver Lafont, at least one case, answering affirmatively to the last question, is identified: i=-1, j=-2, k=1, l=0
$$\pi = \int_{0}^{1}\frac{4}{1+x^2}\,\mathrm{d}x$$
Should there be infinite number of such cases?
P.S. Per discussion with Jaume Oliver Lafont, depending on the value of the polynomial x degree in the integral body's numerator (while denominator stays to be the same "1+x^2"), the result varies from "Pi" to "log(2)" and also to "+/- (Pi - p/q)" as well as to "+/-(log(2)-p/q)", so perhaps now one could produce two distinct families of parameterization: one for Pi and the differences between Pi and its convergents and another for log(2) and the differences between log(2) and its convergents.
P.P.S. While manipulating expressions in Wolfram Cloud Development Platform and WolframAlpha I came across the following parametric identity
Sqrt[Pi] = (1/(2^j)*((k Gamma[5 + 2 j] Gamma[1 + l] HypergeometricPFQ[{1, 5/2 + j, 3 + j}, {3 + j + l/2,7/2 + j + l/2}, -1])/Gamma[6 + 2 j + l] + ((k + i) Gamma[7 + 2 j] Gamma[1 + l] HypergeometricPFQ[{1, 7/2 + j, 4 + j}, {4 + j + l/2,9/2 + j + l/2}, -1])/Gamma[8 + 2 j + l]))/(2^(-5 - 3 j -l) Gamma[5 + 2 j] Gamma[1 + l] (k HypergeometricPFQRegularized[{1, 5/2 + j,3 + j}, {3 + j + l/2, 7/2 + j + l/2}, -1] +1/2 (3 + j) (5 + 2 j) (k + i) HypergeometricPFQRegularized[{1,7/2 + j, 4 + j}, {4 + j + l/2, 9/2 + j + l/2}, -1])) (2)
which indeed gave Sqrt[Pi] for each set of {i,j,k,l} given in above-listed cases 1), 2), 3), 4), 5), 6)
I presume that above identity (2) will yield Sqrt[Pi] for other (infinite) number of sets of {i,j,k,l}.
Is it an interesting identity?
Thanks,
Best Regards,
Alexander R. Povolotsky
I was notified that Maple simplifies the expression on the right hand side of (2) to sqrt(Pi). It seems to be true for arbitrary $i, j, k, l$.
Based on above we could consider case:
Sqrt[Pi] = (2^(5+3 j) (Gamma[5+2 j] Gamma[8+3 j] HypergeometricPFQ[{1,5/2+j,3+j},{3+(3 j)/2,7/2+(3 j)/2},-1]+2 Gamma[7+2 j] Gamma[6+3 j] HypergeometricPFQ[{1,7/2+j,4+j},{4+(3 j)/2,9/2+(3 j)/2},-1]))/(Gamma[5+2 j] Gamma[6+3 j] Gamma[8+3 j] (HypergeometricPFQRegularized[{1,5/2+j,3+j},{3+(3 j)/2,7/2+(3 j)/2},-1]+(15+11 j+2 j^2) HypergeometricPFQRegularized[{1,7/2+j,4+j},{4+(3 j)/2,(3 (3+j))/2},-1]))
(3) |
Journal of the European Mathematical Society
Volume 20, Issue 7, 2018, pp. 1561–1594 DOI: 10.4171/JEMS/793
Published online: 2018-05-15
Strongly minimal theories with recursive models
Uri Andrews (1) and Julia F. Knight (2)
(1) University of Wisconsin, Madison, USA
(2) University of Notre Dame, USA
We give effectiveness conditions on a strongly minimal theory $T$ guaranteeing that all countable models have computable copies. In particular, we show that if $T$ is strongly minimal and for all $n\geq 1$, $T\cap\exists_{n+2}$ is $\Delta^0_n$, uniformly in $n$, then every countable model has a computable copy. A longstanding question of computable model theory asked whether for a strongly minimal theory with one computable model, every countable model has an arithmetical copy. Relativizing our main result, we get the fact that if there is one computable model, then every countable model has a $\Delta^0_4$ copy.
Keywords: Strongly minimal, worker argument, recursive models, computable models
Andrews Uri, Knight Julia: Strongly minimal theories with recursive models.
J. Eur. Math. Soc. 20 (2018), 1561-1594. doi: 10.4171/JEMS/793 |
Here is one I am having trouble following. Can anyone help me through my confusion?
Our setup is a normal Bell test using entangled photons created using spontaneous parametric down conversion (PDC). Such a setup uses 2 BBO crystals oriented at 90 degrees relative to each other. See for example Dehlinger and Mitchell's http://users.icfo.es/Morgan.Mitchell/QOQI2005/DehlingerMitchellAJP2002EntangledPhotonsNonlocalityAndBellInequalitiesInTheUndergraduateLaboratory.pdf
1. Say we have Alice and Bob set their polarizers at identical settings, at +45 degrees relative to the vertical. Once the individual results of Alice and Bob are examined, it will be seen (in the ideal case) that they always match (either ++ or --). According to the local realist or local hidden variables (LHV) advocate, this is "easily" explained:
if you measure the same attribute of two separated particles sharing such a common origin, you will naturally always get the same answer.There is no continuing entanglement or spooky action at a distance, and conservation rules are sufficient to provide a suitable explanation. I.e. in LHV theories there is no continuing connection between spacelike separated particles that interacted in the past. The results will be 100% correlation.
But that explanation does not seem reasonable to me, even in the case above in which Alice and Bob have identical settings. Here is the paradox as I see it. The source of the photon pairs is the 2 crystals. They achieve an EPR entangled state for testing by preparing a superposition of states as follows:
[tex] |\psi_{EPR}\rangle = \frac {1} {\sqrt{2}} (|V\rangle _s|V\rangle _i + |H\rangle _s|H\rangle _i) [/tex]
This is the standard description per QM. We already know this leads to the [tex] cos^2 \theta [/tex] relationship and the results will be 100% correlation.
The local realist presumably would not accept this description as accurate because it is not complete, and violates the basic premise of any LHV theory. He has an alternate explanation, and the Heisenberg Uncertainty Principle (HUP) is not part of it. So now it appears that our experimental results are compatible with the expectations of both QM and LHV (at least when Alice and Bob have matching settings); however, they have different ways of obtaining identical predictions. But let's look deeper, because I think there is a paradox in the LHV side.
2. Suppose I remove one of the BBO crystals, say the one which produces pairs that are horizontally polarized. I have removed an element of uncertainty of the output stream, as we will now know which crystal was the source of the photon pair. Now the results of Alice and Bob no longer match in all cases, and such is predicted by the application of QM: Alice and Bob will now have matched results only 50% of the time. This follows because the resulting photon pairs emerge from the remaining BBO crystal with a vertical orientation. Each photon has a 50-50 chance of passing through the polarizer at Alice and Bob. But since there is no longer a superposition of states, Alice and Bob do not end up with correlated results.
But what about our LHV theory? We should still get matching results for Alice and Bob because we are still measuring the same attribute on both photons and the conservation rule remains in effect! Yet the actual results are now matches only 50% of the time, no better than even odds.
What happened to our explanation that "measuring the same attribute" gives identical results?It seems to me that the only way for a LHV to avoid the paradox is to incorporate the HUP - and maybe the projection postulate too - as a fundamental part of the theory so that it can give the same predictions as QM.
I mean, if the LHV advocate denies there is superposition in case 1 (such denial is essentially a requirement of any LHV, right?), how does the greater knowledge of the state change anything in case 2?
Prove that if the sequence of partial sums $A_n$ of $\sum a_n$ is bounded, then the sequence $(a_n)_{n \geq 1}$ is bounded.
Proof
Since the sequence $(A_n)_{n \geq 1}$ is bounded, there exists (by definition of boundedness) a real number $M$ such that $|A_n|\leq M$ for all $n~\in~\mathbb{N}$. Consider $|A_{n+1}-A_n|=|a_{n+1}|$. We have $$2M \geq |A_{n+1}|+|A_{n}| \geq |A_{n+1}-A_{n}| = |a_{n+1}|,$$ so that $|a_n| \leq 2M$ for every $n \geq 2$; moreover $|a_1| = |A_1| \leq M \leq 2M$. Hence the whole sequence $(a_n)_{n \geq 1}$ is bounded by $2M$. This completes the proof.
Question
I am sometimes not sure whether my proofs are totally correct. Did I make any logical errors in this proof? |
I am trying to use the Animate command to vary a parameter of the Lorenz equations in 3-D phase space, and I'm not having much luck.
The equations are:
$\begin{align*} \dot{x} &= \sigma(y-x)\\ \dot{y} &= rx-y-xz\\ \dot{z} &= xy-bz \end{align*}$
Where $\sigma, r, b > 0$ are parameters to be varied.
Insofar, I am using the NDSolve command to numerically integrate these equations, then ParametricPlot3D and the Evaluate command to plot them.
Just for starters, I am trying to create an Animate command to vary $\sigma$, for example from 0 to 10. Can anyone guide me in the right direction? My code looks like this so far:
σ = 10;
solution = NDSolve[{x'[t] == σ (y[t] - x[t]),
    y'[t] == 28 x[t] - y[t] - x[t] z[t],
    z'[t] == x[t] y[t] - 8/3 z[t],
    x[0] == z[0] == 0, y[0] == 2}, {x, y, z}, {t, 0, 25}];
Animate[
  ParametricPlot3D[Evaluate[{x[t], y[t], z[t]} /. solution], {t, 0, 25}],
  {σ, 0, 25}, AnimationRunning -> False]
This will generate an animated plot but obviously as
σ varies, nothing is changing since I am not implementing new
NDSolve commands. Can anyone guide me as to how I can implement successive
NDSolve's inside the animate command? Thank you
EDIT: I am using $r=28$ and $b=\frac83$ in place of
r and
b in my code. |
Here is mine. It's taken from page 11 of "An Introduction To Abstract Harmonic Analysis", 1953, by Loomis:
(By the way, I don't know why this book is not more famous.)
To prove that a product $K=\prod K_i$ of compact spaces $K_i$ is compact, let $\mathcal A$ be a set of closed subsets of $K$ having the finite intersection property (FIP) ---
viz. the intersection of finitely many members of $\mathcal A$ is nonempty ---, and show $\bigcap\mathcal A\not=\varnothing$ as follows.
By Zorn's Theorem, $\mathcal A$ is contained in some maximal set $\mathcal B$ of (not necessarily closed) subsets of $K$ having the FIP.
The $\pi_i(B)$, $B\in\mathcal B$, having the FIP and $K_i$ being compact, there is, for each $i$, a point $b_i$ belonging to the closure of $\pi_i(B)$ for all $B$ in $\mathcal B$, where $\pi_i$ is the $i$-th canonical projection. It suffices to check that $\mathcal B$ contains the neighborhoods of $b:=(b_i)$. Indeed, this will imply that the neighborhoods of $b$ intersect all $B$ in $\mathcal B$, hence that $b$ is in the closure of $B$ for all $B$ in $\mathcal B$, and thus in $A$ for all $A$ in $\mathcal A$.
For each $i$ pick a neighborhood $N_i$ of $b_i$ in such a way that $N_i=K_i$ for almost all $i$. In particular the product $N$ of the $N_i$ is a neighborhood of $b$, and it is enough to verify that $N$ is in $\mathcal B$. As $N$ is the intersection of finitely many $\pi_i^{-1}(N_i)$, it even suffices, by maximality of $\mathcal B$, to prove that $\pi_i^{-1}(N_i)$ is in $\mathcal B$.
We have $N_i\cap\pi_i(B)\not=\varnothing$ for all $B$ in $\mathcal B$ (because $b_i$ is in the closure of $\pi_i(B)$), hence $\pi_i^{-1}(N_i)\cap B\not=\varnothing$ for all $B$ in $\mathcal B$, and thus $\pi_i^{-1}(N_i)\in\mathcal B$ (by maximality of $\mathcal B$).
A pdf version is available at http://iecl.univ-lorraine.fr/~Pierre-Yves.Gaillard/DIVERS/Tycho/ .
Many people credit the general statement of Tychonoff's Theorem to Čech. But, as pointed out below by KP Hart, Tychonoff's Theorem seems to be entirely due to ... Tychonoff. This observation was already made on page 636 of
Chandler, Richard E.; Faulkner, Gary D. Hausdorff compactifications: a retrospective. Handbook of the history of general topology, Vol. 2 (San Antonio, TX, 1993), 631--667, Hist. Topol., 2, Kluwer Acad. Publ., Dordrecht, 1998
The statement is made by Tychonoff on p. 272 of "Ein Fixpunktsatz"
where he says that the proof is the same as the one he gave for a product of intervals in "Über die topologische Erweiterung von Räumen" |
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to who the big brother doing the censoring is? No? just suggestions trying to improve site functionality good sir, relax, I'm calm, we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$, we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
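To the multiplication question: the product does depend on the choice of $M(x)$. A minimal Python sketch (my own illustration; `poly_mulmod` is a hypothetical helper, not a library function) shows $x^2 \cdot x$ landing on different elements of $\mathbb{F}_2[x]/(M(x))$ for two different irreducible cubics:

```python
# Multiply two elements of GF(p)[x]/(M(x)), with polynomials
# represented as coefficient lists [c0, c1, ...] mod p.

def poly_mulmod(a, b, M, p):
    """Product of a and b in GF(p)[x]/(M(x)); M is monic."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    d = len(M) - 1                       # deg M
    while len(prod) > d:                 # reduce modulo the monic M
        lead = prod.pop()                # leading coefficient
        for k in range(d):
            prod[-d + k] = (prod[-d + k] - lead * M[k]) % p
    return prod

x = [0, 1]                               # the element x
M1 = [1, 1, 0, 1]                        # x^3 + x + 1   over GF(2)
M2 = [1, 0, 1, 1]                        # x^3 + x^2 + 1 over GF(2)
print(poly_mulmod([0, 0, 1], x, M1, 2))  # x^2 * x mod M1 -> [1, 1, 0]
print(poly_mulmod([0, 0, 1], x, M2, 2))  # x^2 * x mod M2 -> [1, 0, 1]
```

So $x^3 \equiv x + 1$ under the first modulus but $x^3 \equiv x^2 + 1$ under the second: the answer to the question is no, you need $M(x)$.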
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
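The norm facts above (multiplicativity of $N$, and $5 = (1+2i)(1-2i)$) admit a quick numeric check; a sketch using Python's built-in complex numbers:

```python
# The norm N(a+bi) = a^2 + b^2 is totally multiplicative; checking the
# chat's example: N(1+2i) = 5 is a rational prime, so 1+2i is a
# Gaussian prime, while 5 = (1+2i)(1-2i) is not prime in Z[i].

def norm(z: complex) -> int:
    return round(z.real ** 2 + z.imag ** 2)

z, w = 1 + 2j, 3 - 1j
assert norm(z * w) == norm(z) * norm(w)   # N is multiplicative
assert (1 + 2j) * (1 - 2j) == 5 + 0j      # 5 factors in Z[i]
print(norm(1 + 2j))  # 5
```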
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = \Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the websites functionality as I do and leave the general purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no group generates creativity or other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all people who first know me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years
Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power becomes so strong that they can outdo human thinking
But, I am not worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not putting enough attention on whether it has interpreted the instructions correctly
That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way?
I realized today that the possible $x$ inputs to $\operatorname{Round}(x^{1/2})$ cover $x^{1/2+\epsilon}$ as well. In other words, we can always find an $\epsilon$ (small enough) such that $x^{1/2} \neq x^{1/2+\epsilon}$ but at the same time have $\operatorname{Round}(x^{1/2})=\operatorname{Round}(x^{1/2+\epsilon})$. Am I right?
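A numeric illustration of this claim (a sketch, assuming $x \neq 1$ is chosen so that $\sqrt{x}$ is not exactly halfway between two integers, where rounding is discontinuous):

```python
# For x with sqrt(x) away from a rounding boundary, a small enough
# eps > 0 changes the value x**(1/2 + eps) but not its rounded value.
x = 7.0
eps = 1e-6
a, b = x ** 0.5, x ** (0.5 + eps)
assert a != b                 # the exponent change is visible...
assert round(a) == round(b)   # ...but not after rounding
print(round(a))  # 3
```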
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$ and $y^n$?
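The implicitness can be made concrete: $y^{n+2}$ appears on both sides of the scheme through $f^{n+2}=f(t^{n+2}, y^{n+2})$, so for a general $f$ each step requires solving an equation. A minimal Python sketch (my own illustration, not part of the exercise) using fixed-point iteration on the test problem $y' = -y$:

```python
# Simpson's (Milne's) scheme y_{n+2} = y_n + (h/3)(f_{n+2} + 4 f_{n+1} + f_n)
# is implicit because f_{n+2} depends on the unknown y_{n+2}; here each
# step solves that equation by fixed-point iteration.
import math

def f(t, y):
    return -y                                 # test equation y' = -y

h, N = 0.01, 100
t, y = [0.0, h], [1.0, math.exp(-h)]          # y_1 started exactly
for n in range(N - 1):
    yn2 = y[n + 1]                            # initial guess for y_{n+2}
    for _ in range(50):                       # fixed-point iteration
        yn2 = y[n] + h / 3 * (f(t[n + 1] + h, yn2)
                              + 4 * f(t[n + 1], y[n + 1])
                              + f(t[n], y[n]))
    t.append(t[n + 1] + h)
    y.append(yn2)

print(abs(y[-1] - math.exp(-t[-1])) < 1e-6)   # True: the scheme is accurate
```

The iteration converges here because the map has contraction factor about $h/3$; for stiff problems one would use a Newton solve instead.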
@anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues |
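The definition just described can be sketched directly with NumPy (a minimal illustration; `graph_energy` is my own helper name):

```python
# Graph energy as described above: the sum of |eigenvalues| of the
# adjacency matrix.  Example: the complete graph K_n has energy 2(n-1)
# (eigenvalues n-1 once and -1 with multiplicity n-1).
import numpy as np

def graph_energy(adj):
    """Energy of a simple graph given its symmetric adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.abs(eigenvalues)))

K4 = np.ones((4, 4)) - np.eye(4)          # adjacency matrix of K_4
print(round(graph_energy(K4), 6))         # 6.0 = 2*(4-1)
```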
Q. A copper wire is stretched to make it 0.5% longer. The percentage change in its electrical resistance if its volume remains unchanged is:
Solution:
$R = \frac{\rho\ell}{A}$ and volume $V = A\ell$, so $R = \frac{\rho\ell^{2}}{V} \Rightarrow \frac{\Delta R}{R} = \frac{2\,\Delta\ell}{\ell} = 1\%$
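The arithmetic in the solution can also be checked without the small-change approximation (a quick sketch):

```python
# R is proportional to l^2 at fixed volume, so a 0.5% stretch gives an
# exact change of (1.005^2 - 1), versus the linearised 2 * (dl/l).
stretch = 1.005
exact = (stretch ** 2 - 1) * 100          # exact percentage change in R
approx = 2 * 0.5                          # linearised answer, in percent
print(round(exact, 4), approx)  # 1.0025 1.0
```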
Tests of the standard model and its possible extensions using precision spectroscopy have a distinguished history, and include tests of electroweak theory, parity violation and searches for the electric dipole moments. In the last fifteen years there has been a step change in the precision of atomic spectroscopic measurements, due to the application of techniques such as laser cooling and optical frequency combs. The current state of the art is a spectroscopic precision of $\approx 3 \times 10^{-19}$ achieved for an optical clock transition in neutral strontium [Marti et al 2018]. This is the most precisely measured quantity in any physical system.
We propose to fully exploit high precision spectroscopy to test various aspects of fundamental physics. The UK is internationally competitive in this area, with relevant research groups including (but not limited to)
Durham University (Rydberg atoms), Imperial College (electron EDM), NPL (optical atomic clocks), UCL (positronium, helium atoms), Swansea (antihydrogen).
The programme of research we envision includes:
Spectroscopic searches for a fifth force and Dark Matter
A very light, new boson $\phi$ can induce a fifth force which leads to a modification of the Coulomb potential in atoms for the force between two particles $i,j$,
$$ Z\frac{\alpha}{r}\longrightarrow Z\frac{\alpha}{r}+\frac{y_iy_j}{4\pi}\frac{e^{-m_\phi r}}{r}\,. $$ Precision measurements of atomic transition frequencies can in principle be used to set stringent bounds on whether such a deviation exists. The main difficulty with this approach is the many-body nature of the electronic wavefunction for most atoms, which means that an exact standard model prediction for the transition frequency is not possible. A number of approaches have been proposed, such as using isotope shifts [Berengut et al 2018] [Frugiuele et al 2018] , to reduce the sensitivity to electronic structure. An exciting alternative is to exploit Rydberg states with principal quantum numbers of $n \gtrapprox 100$. Advantages of Rydberg states include the ability to tune the size of the atomic wavefunction $\propto n^2 a_0$ relative to the range of the fifth force, and the availability of a great number of transitions for each isotope and element, enabling more complex searches for systematic deviations. Furthermore, Rydberg electrons only weakly interact with the nucleus and the remaining core electrons, raising the possibility of precise \emph{ab initio} calculations with developments in existing theory.
While fifth forces are more general, very light Dark Matter with masses $m_\text{DM} \ll $ eV can be probed by these searches. This type of Dark Matter is particularly well motivated by fits to the small scale spectrum [Hui et al 2018].
For example a light scalar dark matter field $\phi$ with a coupling to SM fermions $f$
$$ \mathcal{L} = -\frac{\phi}{\Lambda}\,m_f \bar f f $$ leads to a time-dependent variation of the fermion masses [Stadnik, Flambaum 2015] $$ m_f \to m_f\left( 1+\frac{\phi}{\Lambda} \right)$$ The relic abundance of this light Dark Matter is set by the misalignment mechanism, leading to oscillations in the field which induce an oscillating mass $$ \frac{\delta m_f}{m_f}= \frac{\phi_0}{\Lambda}\cos (m_\phi t)$$ as well as a fifth force between the fermions $f$.
We propose to set improved constraints using optical spectroscopy of Rydberg states in heavier atoms such as Rb, Cs and Sr that are easily laser cooled and trapped. For example the Durham group is already able to perform such measurements with $\sim$kHz absolute precision in Sr. In parallel we recommend funding the theoretical development of the required atomic physics ``phenomenology''.
Precision tests using few-body systems
The problem of direct comparison with theoretical calculations can be solved by using light atoms such as the isotopes of hydrogen and helium, where highly accurate standard-model calculations of the atomic wavefunction are possible. An example is provided by the well-known spectroscopy of the $1S\rightarrow2S$ transition in hydrogen that is at the heart of the proton radius puzzle. During the next years we propose to build dedicated experiments for precision spectroscopy in these light atomic species, using both laser cooling and alternative cooling methods based on Stark and Zeeman deceleration [Liu et al 2018] to produce cold samples suitable for precision spectroscopy. As well as contributing to improved measurements of the proton radius, precision measurements of Rydberg states in these atoms would improve the constraints on fifth forces considerably. For example, comparing the Rydberg constant measured from low Hydrogen states with $r=a_0$ with the Rydberg constant measured in Rydberg states with $n \approx 30$, which corresponds to $r\approx 10^3 a_0$, allows the derivation of limits on new bosons with masses between eV and keV.
Existing searches achieve a precision of $\delta R_\infty =\mathcal{O}(10^{-6})$, which translates into limits on the product of couplings to electrons and protons of $y_Py_e\lesssim10^{-12}$ for these masses [Liu et al 2017] [Karshenboim 2010] . We propose that an improvement of between 3-6 orders of magnitude is possible, by creating for example an atomic fountain of hydrogen atoms. Such measurements would also contribute to searches for CPT violation by providing reference measurements to compare with antihydrogen.
We note that the few and many-body approaches are complementary; in H and He theory predictions are very precise but high-precision measurements are hard, whereas in atoms with higher $Z$ such as $^{88}$Sr or $^{174}$Yb, very high experimental precision can be achieved, but theoretical calculations are challenging.
As well as increasing the discovery potential for new physics, comparing measurements in heavy and light Rydberg atoms will help to advance both the experimental state-of-the-art for light atoms and atomic structure calculations in high-$Z$ Rydberg atoms.
Precision tests using exotic atoms
In the final stage of our proposal we plan to search for new forces in the leptonic sector. We note that the UCL group is world leading in the creation and spectroscopy of low-energy positronium atoms. The possibility of precision spectroscopy in muonium is particularly interesting, since a number of observables involving muons have shown deviations from SM predictions, such as the anomalous magnetic moment $(g-2)_\mu$ [Davier et al 2011] and hints of lepton non-universality in rare $b\to s \ell^+\ell^-$ transitions [LHCb 2014] . Spectroscopy in muonium and muonic Hydrogen would provide a unique and timely probe of light new physics that could be responsible for these effects. Beyond searches for new forces coupling to muons, we intend to perform an independent, improved determination of the proton radius. Here the UK gains a competitive edge from hosting the muon user facility at the ISIS facility (RAL) (one of only two in Europe), opening the possibility of a longer-term, medium-scale project to develop an instrumental station for precision measurements in muonic systems. |
Higher-dimensional Fujimura
Let [math]\overline{c}^\mu_{n,4}[/math] be the largest subset of the tetrahedral grid:
[math] \{ (a,b,c,d) \in {\Bbb Z}_+^4: a+b+c+d=n \}[/math]
which contains no tetrahedrons [math](a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)[/math] with [math]r \gt 0[/math]; call such sets
tetrahedron-free.
These are the currently known values of the sequence:
n: 0, 1, 2
[math]\overline{c}^\mu_{n,4}[/math]: 1, 3, 7

n=0
[math]\overline{c}^\mu_{0,4} = 1[/math]:
There are no tetrahedrons, so no removals are needed.
n=1
[math]\overline{c}^\mu_{1,4} = 3[/math]:
Removing any one point on the grid will leave the set tetrahedron-free.
n=2
[math]\overline{c}^\mu_{2,4} = 7[/math]:
Suppose the set can be tetrahedron-free in two removals. One of (2,0,0,0), (0,2,0,0), (0,0,2,0), and (0,0,0,2) must be removed. Removing any one of the four leaves three tetrahedrons to remove. However, no point coincides with all three tetrahedrons, therefore there must be more than two removals.
Three removals (for example (0,0,0,2), (1,1,0,0) and (0,0,2,0)) leaves the set tetrahedron-free with a set size of 7.
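The $n=2$ argument can be confirmed by brute force (a sketch of my own; searching all $2^{10}$ subsets is feasible at this size):

```python
# Brute-force check of the n = 2 case: the largest tetrahedron-free
# subset of the 10-point grid has exactly 7 points.
from itertools import combinations, product

n = 2
grid = [p for p in product(range(n + 1), repeat=4) if sum(p) == n]

def tetrahedra(n):
    """All sets {(a+r,b,c,d),(a,b+r,c,d),(a,b,c+r,d),(a,b,c,d+r)}, r > 0."""
    tets = []
    for r in range(1, n + 1):
        for base in product(range(n + 1), repeat=4):
            if sum(base) == n - r:
                a, b, c, d = base
                tets.append({(a + r, b, c, d), (a, b + r, c, d),
                             (a, b, c + r, d), (a, b, c, d + r)})
    return tets

tets = tetrahedra(n)
best = max(len(S) for k in range(len(grid) + 1)
           for S in combinations(grid, k)
           if not any(t <= set(S) for t in tets))
print(len(grid), len(tets), best)  # 10 5 7
```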
General n
A lower bound of 2(n-1)(n-2) can be obtained by keeping all points with exactly one coordinate equal to zero.
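This construction can be verified mechanically for small $n$ (a sketch; `one_zero_points` is my own helper name). Adding $r$ to a coordinate of a base point either keeps two zeros or removes the only zero, so no tetrahedron survives:

```python
# Check the lower-bound construction: keeping the points with exactly
# one zero coordinate gives a tetrahedron-free set of size 2(n-1)(n-2).
from itertools import product

def one_zero_points(n):
    return [p for p in product(range(n + 1), repeat=4)
            if sum(p) == n and list(p).count(0) == 1]

for n in range(3, 8):
    pts = one_zero_points(n)
    assert len(pts) == 2 * (n - 1) * (n - 2)
    S = set(pts)
    for r in range(1, n + 1):                 # no tetrahedron lies in S
        for base in product(range(n + 1), repeat=4):
            if sum(base) == n - r:
                a, b, c, d = base
                tet = {(a + r, b, c, d), (a, b + r, c, d),
                       (a, b, c + r, d), (a, b, c, d + r)}
                assert not tet <= S
print("verified for n = 3..7")
```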
You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size [math]cn^2[/math]. If c is not too large the linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. I think [math] c = \frac{24^{1/4}}{6} + o(\frac{1}{n})[/math].
With coordinates (a,b,c,d), take the value a+2b+3c. This forms an arithmetic progression of length 4 for any of the tetrahedrons we are looking for. So we can take subsets of the form a+2b+3c=k, where k comes from a set with no such arithmetic progressions. [This paper] gives a complicated formula for the possible number of subsets.
One upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has [math]\frac{1}{24}n(n+1)(n+2)(n+3)[/math] tetrahedrons. Each point on the grid is part of n tetrahedrons, so [math]\frac{1}{24}(n+1)(n+2)(n+3)[/math] points must be removed to remove all tetrahedrons. This gives an upper bound of [math]\frac{1}{8}(n+1)(n+2)(n+3)[/math]. |
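Both counting claims behind this bound (the total number of tetrahedrons and the $n$ tetrahedrons through each point) can be verified for small $n$ (a sketch of my own):

```python
# Check: the grid has n(n+1)(n+2)(n+3)/24 tetrahedrons, and each grid
# point lies in exactly n of them.
from itertools import product

def count_tets(n):
    tets = []
    for r in range(1, n + 1):
        for base in product(range(n + 1), repeat=4):
            if sum(base) == n - r:
                a, b, c, d = base
                tets.append({(a + r, b, c, d), (a, b + r, c, d),
                             (a, b, c + r, d), (a, b, c, d + r)})
    return tets

for n in range(1, 6):
    tets = count_tets(n)
    assert len(tets) == n * (n + 1) * (n + 2) * (n + 3) // 24
    grid = [p for p in product(range(n + 1), repeat=4) if sum(p) == n]
    for p in grid:
        assert sum(p in t for t in tets) == n
print("counts verified for n = 1..5")
```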
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
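The $1$-form case of the invariant formula, $d\omega(X,Y) = X\omega(Y) - Y\omega(X) - \omega([X,Y])$, can be checked symbolically on $\Bbb R^2$ (a sketch, assuming SymPy is available; the component functions are arbitrary-looking polynomials of my choosing):

```python
# Symbolic check on R^2: with omega = w0 dx + w1 dy we have
# d(omega) = (dw1/dx - dw0/dy) dx^dy, and the invariant formula says
# d(omega)(X, Y) = X omega(Y) - Y omega(X) - omega([X, Y]).
import sympy as sp

x, y = sp.symbols('x y')
w = [x**2 * y, x + y**3]           # omega = w0 dx + w1 dy
X = [y, x * y]                     # X = X0 d/dx + X1 d/dy
Y = [x + y, x**2]                  # Y = Y0 d/dx + Y1 d/dy

def apply_vf(V, f):                # V(f): derivative of f along V
    return V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)

def pair(w, V):                    # omega(V)
    return w[0] * V[0] + w[1] * V[1]

bracket = [apply_vf(X, Y[i]) - apply_vf(Y, X[i]) for i in range(2)]
lhs = (sp.diff(w[1], x) - sp.diff(w[0], y)) * (X[0] * Y[1] - X[1] * Y[0])
rhs = apply_vf(X, pair(w, Y)) - apply_vf(Y, pair(w, X)) - pair(w, bracket)
print(sp.simplify(lhs - rhs))  # 0
```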
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
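As an aside, the claim that every divisor of $24$ occurs as a subgroup order of $S_4$ can be brute-forced in a few lines (an illustrative Python sketch, not part of the original discussion):

```python
from itertools import permutations, product

def compose(p, q):
    # permutation composition: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def closure(gens):
    # subgroup generated by gens (the group is finite, so products suffice)
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

s4 = list(permutations(range(4)))
orders = {len(closure([a, b])) for a, b in product(s4, repeat=2)}
print(sorted(orders))  # -> [1, 2, 3, 4, 6, 8, 12, 24]
```

Two generators suffice here because every subgroup of $S_4$ happens to be generated by at most two elements.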
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
So far all I have is this:
Let $f$ be a function where $f(x)=ex-e^x\leq 0$
$f'(x)=e-e^x \leq 0$, so $f$ is decreasing.
I'm stuck here. Can someone help me with the next steps?
Let $f(x) := e^{x} - ex$ for all $x \in \mathbb{R}.$ We claim that $f \geq 0$ on $\mathbb{R}.$
Note that $f(1) = 0,$ that $f'(x) > 0$ for all $x > 1$, and that $f'(x) < 0$ for all $x < 1.$ Thus $f$ is increasing on $]1, +\infty[$ so that $f > 0$ on $]1, +\infty[.$ Since $f$ is decreasing on $]-\infty, 1[$ and is continuous on $\mathbb{R},$ it follows that $f > 0$ on $]-\infty, 1[.$
Including the case where $f(1) = 0,$ we thus have $f \geq 0$ on $\mathbb{R}.$
Hint: $e^x$ is convex, hence stays above its tangent at $x=1$.
First of all, $f'(x)$ isn't always less than $0$; it equals $0$ at $x=1$. At $x=1$, $f(x)=0$, and since $f$ is decreasing for $x>1$, $f$ will always be less than $0$ there. For $x<1$, it shouldn't be hard to show.
There are $3$ cases to consider:
$a):$ $x < 0 \Rightarrow ex < 0 < e^x \Rightarrow ex < e^x$.
$b):$ $ 0 \leq x \leq 1$, $f(x) = ex - e^x \to f'(x) = e - e^x \geq 0 \to f(x) \leq f(1) = e - e = 0 \Rightarrow ex - e^x \leq 0 \Rightarrow ex \leq e^x$.
$c): $ $ 1 < x $: $f(x) = ex - e^x \to f'(x) = e - e^x < 0 \to f(x) < f(1) = e - e = 0 \to ex < e^x$. |
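The case analysis above can be sanity-checked numerically on a grid (an illustrative sketch only, not a proof):

```python
import math

# Check e*x <= e^x on [-5, 5], with equality at x = 1
xs = [x / 100 for x in range(-500, 501)]
assert all(math.e * x <= math.exp(x) + 1e-12 for x in xs)
assert math.isclose(math.e * 1.0, math.exp(1.0))  # equality case x = 1
```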
I have a linear system of equations $$Ax=b$$ where $A$ is an $N\times N$ matrix with integer values, and $b$ is a $N\times 1$ vector with integer values. Due to prior knowledge, I know that I am guaranteed that there exists exactly one rational solution $x$ such that $x=A^{-1}b$ (as well as that $A$ has full rank).
I am now searching for an algorithm in Matlab (or any other language which I can rewrite into Matlab) which provides the exact rational solution to this problem for a given $A$ and $b$, for example in the form $\vec{x}=\frac{r}{s}\vec{t}$, where $r$ and $s$ are scalar integers and $t$ is an $N\times 1$ vector of integers, or in the form (Matlab notation)
x=r./s, where r and s are $N\times 1$ integer vectors.
I have tried [r,s]=rat(A\b), which works for simple enough cases but quickly results in rounding problems (i.e. r./s only approximates $x$, but I need the exact solution). Using symbolic calculations works; however, I have to compile the program to stand-alone, and the Matlab Compiler does not support the Symbolic Toolbox as far as I know and as far as I tried.
Some stats: $N$ should be allowed to be at least around $100$ or higher. The values in $A$ can become rather large, such that I am already using int64. I understand that at some point I will run into numeric problems, and it's sufficient for me to postpone these kinds of problems as long as possible (I am grateful for any solution, even if it only works for smaller $N$). However, I need the precise solution, and I prefer an error message over any approximation, no matter how good. Runtime of the algorithm is secondary, and it's OK if it needs an hour or so for $N=100$ (being more efficient is however a plus).
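For illustration, one possible approach is exact Gauss-Jordan elimination over the rationals (a Python sketch with `fractions`, not Matlab; `solve_exact` is a hypothetical helper name, and for $N \approx 100$ a fraction-free Bareiss elimination or a big-integer library would scale better):

```python
from fractions import Fraction

def solve_exact(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

x = solve_exact([[2, 1], [1, 3]], [1, 1])  # -> [Fraction(2, 5), Fraction(1, 5)]
```

Since the input is integer and the pivots are exact, no rounding occurs at any point; the trade-off is that the intermediate numerators and denominators can grow large.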
I want to discretize the following equation using a Finite Volume Method $$\nabla \cdot (a(x)\nabla u)=f(x)\\x\in \Omega \subset \mathbb{R}^2 \\u_{|\partial\Omega}=g$$ I'm using Voronoi cells here: let $V_i \subset\Omega$, $i=1,\dots,N$, such that $\bigcup\limits_{i=1}^N V_i=\Omega$, $V_i\cap V_j=\partial V_i \cap \partial V_j$ for $i\ne j$ be the mesh cells.
Integrating the equation over each $V_i$ and applying the divergence theorem, we obtain
$$\int_{V_i} \nabla \cdot (a(x)\nabla u) dx = \int_{\partial V_i} a(x)\nabla u \cdot n ds=\int_{V_i} fdx$$
Let $x_i$ be the "center" of $V_i$ (this is a cell-centered method). Define $n_i\subset \{1,\dots,N\}$ as the set of indices of the cells immediately neighbouring $V_i$. Let the boundary between any two mesh cells be defined as
$$\Gamma_{i,j}:=\partial V_i \cap \partial V_j$$
Define the length of the boundary segment between neighbouring cells $V_i$ and $V_j$ as $$l_{i,j}:=\left|\Gamma_{i,j}\right|, j\in n_i$$
and the distance between two cell "centers" as $$h_{i,j}:=\|x_i-x_j\|_2, j\in n_i$$
Approximate $a(x)$ for each two neighbouring cells $V_i,V_j$ as
$$a_{i,j}:=a\left(\frac{x_i+x_j}{2}\right), j\in n_i$$
So we obtain the following discretization:
$\int_{\partial V_i} a(x)\nabla u \cdot n ds=\sum_{j\in n_i}\int_{\Gamma_{i,j}}a(x)\nabla u(x)\cdot n ds \approx \sum_{j\in n_i} a_{i,j} \frac{u_j-u_i}{h_{i,j}}l_{i,j}=|V_i|f(x_i)\approx \int_{V_i} f(x)dx$
To discretize the Dirichlet BC using centered cells, as far as I understand, one needs to use outer ghost cells. So let $V_{G_j}$ be such a cell, adjacent to $V_j$ for some $j\in\{1,\dots, N\}$, and $x_{G_{j}}\in V_{G_{j}}$ be its "center". But this is where I'm stuck. Where do I go from here?
P.S. I know there is a similar question here, but I couldn't quite follow the notation there. |
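As a 1-D analogue of the ghost-cell idea (a sketch assuming uniform cells, $a \equiv 1$ and $f \equiv 0$, not the Voronoi setting of the question): place the ghost center mirrored across the boundary and require $\frac{u_G + u_j}{2} = g$, i.e. $u_G = 2g - u_j$, then eliminate $u_G$ from the boundary flux.

```python
import numpy as np

n = 10
h = 1.0 / n
g0, g1 = 1.0, 3.0          # Dirichlet data at x = 0 and x = 1
A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n):
    if i > 0:              # face shared with left neighbour: flux (u_i - u_{i-1})/h
        A[i, i] += 1.0 / h
        A[i, i - 1] -= 1.0 / h
    else:                  # left boundary: ghost value u_G = 2*g0 - u_0, distance h
        A[i, i] += 2.0 / h
        rhs[i] += 2.0 * g0 / h
    if i < n - 1:          # face shared with right neighbour
        A[i, i] += 1.0 / h
        A[i, i + 1] -= 1.0 / h
    else:                  # right boundary: ghost value u_G = 2*g1 - u_{n-1}
        A[i, i] += 2.0 / h
        rhs[i] += 2.0 * g1 / h
u = np.linalg.solve(A, rhs)
centers = (np.arange(n) + 0.5) * h
```

With these choices the scheme reproduces the linear exact solution $u(x) = g_0 + (g_1-g_0)x$ at the cell centers.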
Let $X$ be a non-empty set and $\mathscr A$ an algebra on it. A
premeasure on $\mathscr A$ is a function $\lambda:\mathscr A\to[0,\infty]$ such that $\lambda(\varnothing)=0$; and if $A_1,A_2,\ldots$ is a countable collection of disjoint sets in $\mathscr A$ and if their union is contained in $\mathscr A$, then $$\lambda\left(\bigcup_{n=1}^{\infty} A_n\right)=\sum_{n=1}^{\infty}\lambda(A_n).$$
If $\mathscr B$ is a $\sigma$-algebra on $X$, then a
measure on $\mathscr B$ is a function $\mu:\mathscr B\to[0,\infty]$ such that $\mu(\varnothing)=0$; and if $B_1,B_2,\ldots$ is a countable collection of disjoint sets in $\mathscr B$ (and, since $\mathscr B$ is a $\sigma$-algebra, their union is already contained in $\mathscr B$, so this condition need not be prescribed explicitly in this case), then $$\mu\left(\bigcup_{n=1}^{\infty} B_n\right)=\sum_{n=1}^{\infty}\mu(B_n).$$
Basically, yes, the main difference is that of the domains,
viz., a premeasure is defined on an algebra and a measure is defined on a $\sigma$-algebra. There is another subtle difference, though: while both concepts are required to satisfy $\sigma$-additivity, in the case of a premeasure this makes sense only for countable collections of disjoint sets of an algebra whose unions, too, are actually in the algebra. |
Tetration to Sheldon base

Tetration to Sheldon base refers to the specific case of tetration to the complex base \(b=s\), where
\( s=1.52598338517 + 0.0178411853321~ \mathrm{i}\)
is the Sheldon number. In such a way, tetration to this base is the holomorphic function \(\mathrm{tet}_s\).
Yet, no specific definition for the Sheldon number is available; the only observation is that Sheldon Levenstein apparently believed that tetration to such a base is very difficult to evaluate
[1] with his algorithm of "merging of solution" [2].
In such a way, the constant \(s\) above can be qualified as "exact". Following the general ideology of TORI, the solutions that are believed not to exist (or to be extremely difficult to evaluate) are of special interest. For this reason, tetration to the Sheldon base is described in this article and in the book Superfunctions
[3].

Fixed points of \(\log_s\) and properties of \(\mathrm{tet}_s\)

\(\!\!\!\!\!\!\!\! (1) \displaystyle ~ ~ ~ \exp(a~ \mathrm{tet_s}(z)) = \mathrm{tet}_s(z\!+\!1)\)
where \(a=\ln(b)\approx \) \(0.4227073870410604 + 0.0116910660021443~ i\) with the additional condition
\(\!\!\!\!\!\!\!\! (2) \displaystyle ~ ~ ~ \mathrm{tet_s}(0)=1\)
and the asymptotic behavior determined by the function Filog:
\(\!\!\!\!\!\!\!\! (3) \displaystyle ~ ~ ~ \lim_{y \rightarrow +\infty} \mathrm{tet_s}(x+\mathrm i y)=\) \(L_1 =\) \(\mathrm{Filog}(a)\approx 2.0565398441043761 +1.1445267140098765~ i\) \(\!\!\!\!\!\!\!\! (4) \displaystyle ~ ~ ~ \lim_{y \rightarrow -\infty} \mathrm{tet_s}(x+\mathrm i y)=\) \(L_2=\) \(\mathrm{Filog}(a^*)^*\approx 2.2284359658711805 -1.3507994961102865~ i\)
In addition, \(\mathrm{tet}_s(z)\) is supposed to be holomorphic at least for \(z>-2\), and also in the rest of the complex plane except some vicinity of the negative part of the real axis with \(z<-2\); the tetration should have a cut line there.
In the upper half-plane, the tetration should behave as \(L_1+\exp(k_1 (z-z_1)) + O(\exp(2 k_1 (z-z_1)))\);
In the lower half-plane, the tetration should behave as \(L_2+\exp(k_2 (z-z_2)) + O(\exp(2 k_2 (z-z_2)))\); for some complex constants \(z_1\) and \(z_2\), where \(~k_1 =\ln(L_1 a) ~\) and \(~k_2 =\ln(L_2 a) ~\).
For the Sheldon base, the increments \(k_1\) and \(k_2\) are evaluated as
\(k_1 =\ln(L_1 a) \approx\) \( -0.0047589243931785 + 0.5354935770338939~ i\) \(k_2 =\ln(L_2 a) \approx\) \(0.0970758595007548 - 0.517289596155984~i\)
The quasiperiod in the upper half-plane
\(T_1=2 \pi \mathrm{i}/k_1 \approx\) \( 11.7325200133916496 - 0.1042667514229599~ \mathrm{i}\)
and that in the lower half-plane is
\(T_2=-2 \pi \mathrm{i}/k_2 \approx \) \( 11.7331504449085493 - 2.2018723603861230~ \mathrm{i}\)
These quasi-periods are used to arrange the labels at the complex map of the tetration with the multiput command; in the upper half-plane the increment is \(T_1\), and in the lower half-plane, it is \(T_2\).
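The quoted constants can be cross-checked against each other numerically (a quick sketch; the digits are copied from this page, so the check only confirms internal consistency):

```python
import cmath

# digits copied from the text above
a  = 0.4227073870410604 + 0.0116910660021443j   # a = ln(s)
L1 = 2.0565398441043761 + 1.1445267140098765j   # upper fixed point
L2 = 2.2284359658711805 - 1.3507994961102865j   # lower fixed point

k1 = cmath.log(L1 * a)
k2 = cmath.log(L2 * a)
T1 = 2j * cmath.pi / k1        # quasi-period in the upper half-plane
T2 = -2j * cmath.pi / k2       # quasi-period in the lower half-plane
```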
The properties above give a simple and efficient way for the precise evaluation of \(\mathrm{tet}_s\) through the Cauchy integral equation
[4].

References
[1] Sheldon Levenstein. Tetration for e^(1/e) branch point. Sun, 19 Feb 2012 06:20:26 -0600. "I don't have a working generic algorithm for any generic complex base, especially one near the Shell Thron boundary. For example, consider the superfunction developed from the attracting fixed point, where sexp_b(1), for an arbitrary complex base b inside the Shell Thron boundary. For a specific case, consider b=1.52598338517 + 0.0178411853321 i. The imaginary component of this base is about 10% bigger than the nearest base that is on the Shell Thron boundary." http://math.eretrandre.org/tetrationforum/showthread.php?tid=729
[2] Sheldon Levenson. Complex base tetration program. Tetration and Related Topics, 2012 March 1.
[3] D. Kouznetsov. Superfunctions. 2015, in preparation. http://www.ils.uec.ac.jp/~dima/BOOK/437.pdf http://mizugadro.mydns.jp/BOOK/437.pdf
[4] D. Kouznetsov. (2009). Solutions of \(F(z+1)=\exp(F(z))\) in the complex plane. Mathematics of Computation, 78: 1647-1670. http://www.ams.org/mcom/2009-78-267/S0025-5718-09-02188-7/home.html
Journal of the European Mathematical Society
Volume 20, Issue 7, 2018, pp. 1759–1818 DOI: 10.4171/JEMS/799
Published online: 2018-05-22
The frequency and the structure of large character sums
Jonathan Bober [1], Leo Goldmakher [2], Andrew Granville [3] and Dimitris Koukoulopoulos [4]
(1) University of Bristol, UK
(2) Williams College, Williamstown, USA
(3) Université de Montréal, Canada
(4) Université de Montréal, Canada
Let $M(\chi)$ denote the maximum of $|\sum_{n\le N}\chi(n)|$ for a given non-principal Dirichlet character $\chi$ modulo $q$, and let $N_\chi$ denote a point at which the maximum is attained. In this article we study the distribution of $M(\chi)/\sqrt{q}$ as one varies over characters modulo $q$, where $q$ is prime, and investigate the location of $N_\chi$. We show that the distribution of $M(\chi)/\sqrt{q}$ converges weakly to a universal distribution $\Phi$, uniformly throughout most of the possible range, and get (doubly exponential decay) estimates for $\Phi$'s tail. Almost all $\chi$ for which $M(\chi)$ is large are odd characters that are 1-pretentious. Now, $M(\chi)\ge |\sum_{n\le q/2}\chi(n)| = \frac{|2-\chi(2)|}\pi \sqrt{q} |L(1,\chi)|$, and one knows how often the latter expression is large, which has been how earlier lower bounds on $\Phi$ were mostly proved. We show, though, that for most $\chi$ with $M(\chi)$ large, $N_\chi$ is bounded away from $q/2$, and the value of $M(\chi)$ is a little bit larger than $\frac{\sqrt{q}}{\pi} |L(1,\chi)|$.
Keywords: Distribution of character sums, distribution of Dirichlet $L$-functions, pretentious multiplicative functions, random multiplicative functions
What is Euler's Totient Function?
Number theory is one of the most important topics in the field of Math and can be used to solve a variety of problems. Many times one might have come across problems that relate to the prime factorization of a number, to the divisors of a number, to the multiples of a number and so on.
Euler's Totient function is a function that gives the count of numbers that are coprime to a certain number $$X$$ and are less than or equal to it. In short, for a certain number $$X$$ we need to find the count of all numbers $$Y$$ where $$ gcd(X,Y)=1 $$ and $$1 \le Y \le X$$.
A naive method to do so would be to brute-force the answer by checking the gcd of $$X$$ and every number less than or equal to $$X$$, incrementing the count whenever a gcd of $$1$$ is obtained. However, this can be done in a much faster way using Euler's Totient Function.
According to Euler's product formula, the value of the Totient function is given by a product over all prime factors of a number: the value of the Totient function is the number $$N$$ multiplied by the product of $$(1-(1/p))$$ for each distinct prime factor $$p$$ of $$N$$.
So,
$$ \phi(n)= n\prod_{\substack{p \text{ prime} \\ p \mid n}} \left( 1- \frac{1}{p}\right) $$

Algorithm steps: Generate a list of primes. While dealing with a certain $$N$$, check and store all the primes that perfectly divide $$N$$. Now, it is just needed to use these primes and the above formula to get the result.

Implementation:
def primes_up_to(m):
    # simple sieve of Eratosthenes
    sieve = [True] * (m + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, m + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def totient(n):
    # phi(n) = n * prod(1 - 1/p) over primes p dividing n;
    # ans -= ans // p is the exact integer form of ans *= (1 - 1/p)
    ans = n
    for p in primes_up_to(n):
        if n % p == 0:
            ans -= ans // p
    return ans
There are a few subtle observations that one can make about Euler's Totient Function.
The sum of the values of the Totient function over all divisors of $$N$$ is equal to $$N$$. The value of the Totient function for a prime $$P$$ will always be $$P-1$$, as $$P$$ has a $$GCD$$ of $$1$$ with all numbers less than or equal to it except itself. For two numbers $$A$$ and $$B$$, if $$GCD(A,B)=1$$ then $$Totient(A) \times Totient(B) = Totient(A \cdot B)$$.
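These observations can be verified numerically with a brute-force gcd count (illustrative only; `phi_bruteforce` is a hypothetical helper name):

```python
from math import gcd

def phi_bruteforce(n):
    # count 1 <= k <= n with gcd(n, k) == 1
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

n = 36
divisors = [d for d in range(1, n + 1) if n % d == 0]
assert sum(phi_bruteforce(d) for d in divisors) == n                 # sum over divisors
assert phi_bruteforce(13) == 12                                      # phi(p) = p - 1
assert phi_bruteforce(9) * phi_bruteforce(4) == phi_bruteforce(36)   # multiplicative
```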
Yes, it is meaningful, but often ignored for practical reasons. In the lab, e.g. in a desiccator, you would just use a large excess of the desiccant of choice which would always work. It’s often more meaningful to classify them as to whether they are acidic, basic or neutral.
But let’s assume we wanted to create that scale. For simplicity reasons, let’s check a single desiccant first. We need to consider the following equilibrium:
$$\ce{H2O_{(g)} <=> H2O_{(x)}}$$
with (x) again being the water bound to the desiccant in whichever way (do not think of this as chemical bonding; it usually is not). It is important to remember that this is an equilibrium with an equilibrium constant $k$.
$$k_{\mathrm{x}} = \frac{[\ce{H2O_{(x)}}]}{[\ce{H2O_{(g)}}]}$$
One could rearrange this in numerous ways resulting in a representation that essentially says the partial pressure of water in the surrounding atmosphere is more or less proportional to the concentration of water in the desiccant.
$$p_{(\ce{H2O})} \approx c \cdot [\ce{H2O_{(x)}}]$$
Now let’s add a second desiccant into our consideration. The first equation now rearranges itself to the following:
$$\ce{H2O_{(y)} <=> H2O_{(g)} <=> H2O_{(x)}}$$
For each of the two subsystems we again get an equilibrium constant $k$ which we can rearrange to proportionalise the partial pressure to the concentration of water in x or y. $k_{\mathrm{x}} \neq k_{\mathrm{y}}$. There will be one partial pressure of water where both the left side and the right side of the double equation will be at equilibrium; that partial pressure is the final equilibrium that will be reached. This gives us the following conclusions:
Never will the air be completely dry in the presence of (only) a (or two, or a hundred) desiccant(s).
Neither of the two desiccants will fall completely dry again; rather unlike when heating (in an open atmosphere).
Say you start the experiment with the weaker desiccant and then add a dry sample of the stronger desiccant:
The stronger desiccant will further reduce the vapour pressure; the reduced vapour pressure will remove water from the weaker desiccant; that water will partly be absorbed by the stronger desiccant. However, neither will the weaker desiccant be completely dried, nor will the partial pressure of water in the atmosphere be completely zero.
How dry (or wet) either desiccant will end up/how low the partial pressure of water will end up, depends on the strengths of the desiccants. If you use a very strong and a very weak one, the strong one will absorb much more water than the weak one. If the two are similar in strength, they will absorb similar amounts of water. |
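As a toy model of this conclusion (a sketch with assumed linear isotherms $c_i = k_i p$ and made-up constants, not measured data):

```python
# Two desiccants x and y sharing one atmosphere; water held by desiccant i
# is modelled as w_i = k_i * p, where p is the water partial pressure.
k_x, k_y = 50.0, 5.0   # stronger (x) vs weaker (y) desiccant, illustrative
w_total = 1.0          # total water to distribute (arbitrary units)
v_gas = 1.0            # water held in the gas phase per unit pressure

# Mass balance: w_total = v_gas*p + k_x*p + k_y*p  ->  one shared p
p = w_total / (v_gas + k_x + k_y)
w_x, w_y = k_x * p, k_y * p
# The stronger desiccant ends up with most of the water, but neither the
# atmosphere (p > 0) nor the weaker desiccant (w_y > 0) is completely dry.
```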
12th SSC CGL Tier II level Question Set, topic Trigonometry 3
This is the 12th question set for the 10 practice problem exercise for SSC CGL exam and 3rd on topic Trigonometry. The answers to the questions and link to the detailed solutions are given at the end.
We repeat the method of taking the test. It is important to follow result bearing methods even in practice test environment.
Method of taking the test for getting the best results from the test: Before starting, you may refer to our tutorial or any short but good material to refresh your concepts if you so require: Basic and rich Trigonometric concepts and applications. Answer the questions in an undisturbed environment with no interruption, full concentration and an alarm set at 12 minutes. When the time limit of 12 minutes is over, mark up to which you have answered, but go on to complete the set. At the end, refer to the answers given at the end to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time. Identify and analyze the problems that you couldn't do, to learn how to solve those problems. Identify and analyze the problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of your shortcoming in topic knowledge, improve it by referring to only that part of the concept from the best source you can get hold of. You might google it. If it is because of your method of answering, analyze and improve those aspects specifically. Identify and analyze the problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques. Give a gap before you take a 10 problem practice test again.
Important: both mock tests and practice tests must be timed, analyzed, improving actions taken and then repeated. With intelligent method, it is possible to reach the highest excellence level in performance. Resources that should be useful for you: you may refer to 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests, or the section on SSC CGL, to access all the valuable student resources that we have created specifically for SSC CGL, but usable generally for any hard MCQ test.
If you like, you may subscribe to get the latest content from this place.

12th question set - 10 problems for SSC CGL Tier II exam: 3rd on Trigonometry - testing time 12 mins

Problem 1.
If $2ab\cos \theta + (a^2-b^2)\sin \theta=a^2+b^2$ then the value of $\tan \theta$ is,
$\displaystyle\frac{1}{2ab}(a^2+b^2)$ $\displaystyle\frac{1}{2}(a^2-b^2)$ $\displaystyle\frac{1}{2}(a^2+b^2)$ $\displaystyle\frac{1}{2ab}(a^2-b^2)$ Problem 2.
$\displaystyle\frac{\sin^2 \theta}{\cos^2 \theta}+\displaystyle\frac{\cos^2 \theta}{\sin^2 \theta}$ is equal to,
$\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta}$ $\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta} -2$ $\displaystyle\frac{1}{\tan^2 \theta - \cot^2 \theta}$ $\displaystyle\frac{\sin^2 \theta}{\cot \theta - \sec \theta}$ Problem 3.
If $\cos \theta + \sec \theta = 2$, then the value of $\cos^5 \theta + \sec^5 \theta$ is,
$-2$ $2$ $1$ $-1$ Problem 4.
$\sin(\alpha + \beta -\gamma)=\cos(\beta + \gamma -\alpha)=\displaystyle\frac{1}{2}$ and $\tan(\gamma + \alpha -\beta)=1$. If $\alpha$, $\beta$ and $\gamma$ are positive acute angles, the value of $2\alpha + \beta$ is,
$105^0$ $110^0$ $115^0$ $120^0$ Problem 5.
If $\sin \theta + \sin^2 \theta=1$, then the value of $\cos^{12} \theta + 3\cos^{10} \theta + 3\cos^{8} \theta + \cos^6 \theta - 1$ is,
$1$ $0$ $2$ $3$ Problem 6.
The value of $\sec \theta\left(\displaystyle\frac{1+\sin \theta}{\cos \theta}+\displaystyle\frac{\cos \theta}{1+\sin \theta}\right) - 2\tan^2 \theta$ is,
4 0 2 1 Problem 7.
If $4\cos^2 \theta - 4\sqrt{3}\cos \theta + 3=0$ and $0^0 \leq \theta \leq 90^0$, then the value of $\theta$ is,
$60^0$ $90^0$ $30^0$ $45^0$ Problem 8.
If $\sin \theta + \cos \theta=\sqrt{2}\sin(90^0 - \theta)$ then the value of $\cot \theta$ is,
$\sqrt{2}-1$ $\sqrt{2}+1$ $-\sqrt{2}+1$ $-\sqrt{2}-1$ Problem 9.
If $x=a\sin \theta-b\cos \theta$ and $y=a\cos \theta + b\sin \theta$, then which of the following is true?
$x^2+y^2=a^2+b^2$ $\displaystyle\frac{x^2}{a^2}+ \displaystyle\frac{y^2}{b^2}=1$ $x^2+y^2=a^2-b^2$ $\displaystyle\frac{x^2}{y^2}+ \displaystyle\frac{a^2}{b^2}=1$ Problem 10.
If $\tan \theta=\displaystyle\frac{a}{b}$, then the value of $\displaystyle\frac{a\sin^3 \theta - b\cos^3 \theta}{a\sin^3 \theta + b\cos^3 \theta}$ is,
$\displaystyle\frac{a^4-b^4}{a^4+b^4}$ $\displaystyle\frac{a^3+b^3}{a^3-b^3}$ $\displaystyle\frac{a^3-b^3}{a^3+b^3}$ $\displaystyle\frac{a^4+b^4}{a^4-b^4}$
For detailed solutions refer to the companion
SSC CGL Tier II Solution set 12 Trigonometry 3, questions with solutions.
You may also watch the video solutions at the two-part video below.
Part 1: Q1 to Q5 Part 2: Q6 to Q10
The answers to the questions are given below.
Answers to the questions. Problem 1. Answer: Option d: $\displaystyle\frac{1}{2ab}(a^2-b^2)$. Problem 2. Answer: Option b: $\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta} -2$. Problem 3. Answer: Option b: $2$. Problem 4. Answer: Option d: $120^0$. Problem 5. Answer: Option b: $0$. Problem 6. Answer: Option c: 2. Problem 7. Answer: Option c: $30^0$. Problem 8. Answer: Option b: $\sqrt{2}+1$. Problem 9. Answer: Option a: $x^2 + y^2=a^2 + b^2$. Problem 10. Answer: Option a: $\displaystyle\frac{a^4-b^4}{a^4+b^4}$. Resources on Trigonometry and related topics
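One of the answers can be spot-checked numerically (illustrative; arbitrary sample values for $a$ and $b$):

```python
import math

# Problem 10: with tan(theta) = a/b, the expression
# (a sin^3 - b cos^3)/(a sin^3 + b cos^3) should equal (a^4-b^4)/(a^4+b^4)
a, b = 3.0, 2.0                       # arbitrary sample values
theta = math.atan2(a, b)              # so that tan(theta) = a/b
s, c = math.sin(theta), math.cos(theta)
lhs = (a * s**3 - b * c**3) / (a * s**3 + b * c**3)
rhs = (a**4 - b**4) / (a**4 + b**4)
assert abs(lhs - rhs) < 1e-12
```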
You may refer to our useful resources on Trigonometry and other related topics especially algebra.
Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry
A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving.
SSC CGL Tier II level question and solution sets on Trigonometry
SSC CGL Tier II level Question set 12 Trigonometry 3, questions with answers
According to
Introduction to Topology: Pure and Applied by Colin Adams,
Let $X$ be a topological space and A be a set (that is not necessarily a subset of $X$). Let $p: X \rightarrow A$ be a surjective map. Define a subset $U$ of $A$ to be open in $A$ if and only if $p^{-1}(U)$ is open in $X$. The resultant collection of open sets in $A$ is called the quotient topology induced by $p$, and the function $p$ is called a quotient map. The topological space $A$ is called a quotient space. (89).
As an example, I'd like to consider $\mathbb{ℝ}$ with the standard topology, and define
$$ p:\mathbb{R} \rightarrow \{a,b,c\} \text{ by } p(x) = \left\{ \begin{array}{ll} c & \quad x < -10\\ b & \quad -10\leq x\leq10 \\ a & \quad x > 10 \end{array} \right. $$ Open sets in the quotient topology include $\{a\}$, $\{c\}$, $\{a,c\}$, $\{a,b,c\}$.
$\{b\}$ is not open since its preimage is not open in $\mathbb{R}$. |
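This openness test can be mimicked on a finite grid (a heuristic sketch, not a proof: a grid point counts as interior if nudging it by a small eps stays inside the preimage):

```python
from itertools import combinations

def p(x):
    if x < -10:
        return 'c'
    if x <= 10:
        return 'b'
    return 'a'

def preimage_looks_open(U, xs, eps=0.01):
    # every grid point of p^{-1}(U) must have a small neighbourhood inside it
    inside = [x for x in xs if p(x) in U]
    return all(p(x - eps) in U and p(x + eps) in U for x in inside)

xs = [k / 10 for k in range(-150, 151)]   # grid covering the boundary points
opens = [''.join(sorted(U))
         for r in range(4)
         for U in combinations('abc', r)
         if preimage_looks_open(U, xs)]
print(opens)  # -> ['', 'a', 'c', 'ac', 'abc']
```

The grid check recovers exactly the open sets listed above, plus the empty set and the whole space.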
New in Forest 2 - Parametric Programs
pyQuil is for constructing and running hybrid quantum/classical algorithms on real quantum computers. With the release of pyQuil 2, we have changed parts of the API to take advantage of some exciting new features available on QCS.
A hybrid algorithm involves using the quantum computer to create a quantum state that would be difficult to prepare classically; measure it in a way particular to your problem; and then update your procedure for creating the state so that the measurements are closer to the correct answer. A real hybrid algorithm involves structured ansatze like QAOA for optimization or a UCC ansatz for chemistry. Here, we'll do a much simpler parameterized program.
[1]:
from pyquil import Program, get_qc
from pyquil.gates import *

def ansatz(theta):
    program = Program()
    program += RY(theta, 0)
    return program

print(ansatz(theta=0.2))
RY(0.2) 0
Scan over the parameter (the old way)
For this extraordinarily simple ansatz, we can discretize the parameter theta and try all possible values. As the number of parameters increases, the number of combinations increases exponentially, so doing a full grid search will become intractable for anything more than ~two parameters.
[2]:
import numpy as np

qc = get_qc("9q-square-qvm")

thetas = np.linspace(0, 2*np.pi, 21)
results = []
for theta in thetas:
    program = ansatz(theta)
    bitstrings = qc.run_and_measure(program, trials=1000)
    results.append(np.mean(bitstrings[0]))
[3]:
%matplotlib inline
from matplotlib import pyplot as plt

plt.plot(thetas, results, 'o-')
plt.xlabel(r'$\theta$', fontsize=18)
_ = plt.ylabel(r'$\langle \Psi(\theta) | \frac{1 - Z}{2} | \Psi(\theta) \rangle$', fontsize=18)
Do an optimization (the old way)¶
Instead of doing a full grid search, we will employ a classical optimizer to find the best parameter values. Here we use scipy to find the theta that results in sampling the most 1s in our resultant bitstrings.
[4]:
import scipy.optimize

def objective_function(theta):
    program = ansatz(theta[0])
    bitstrings = qc.run_and_measure(program, trials=1000)
    result = np.mean(bitstrings[0])
    return -result

res = scipy.optimize.minimize(objective_function, x0=[0.1], method='COBYLA')
res
[4]:
     fun: -1.0
   maxcv: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 13
  status: 1
 success: True
       x: array([3.1])
[5]:
plt.plot(thetas, results, label='scan')
plt.plot([res.x], [-res.fun], '*', ms=20, label='optimization result')
plt.legend()
[5]:
<matplotlib.legend.Legend at 0x1015dedf28>
Compilation¶
Prior to QCS, a QPU job would be routed via a series of cloud-based queues and compilation steps. With Forest 2, you are in control of the two stages of compilation, so you can amortize the cost of compiling. Your QMI and all classical infrastructure are hosted on the Rigetti premises, so network latency is minimal.
Quil to native quil¶
The first step of compilation converts gates to their hardware-supported equivalents. For example, our parametric RY is converted into RX's and RZ's, because these are physically realizable on a Rigetti QPU.
[6]:
nq_program = qc.compiler.quil_to_native_quil(ansatz(theta=0.5))
print(nq_program)
PRAGMA EXPECTED_REWIRING "#(0 1 2 3 4 5 6 7 8)"
RX(pi/2) 0
RZ(0.5) 0
RX(-pi/2) 0
PRAGMA CURRENT_REWIRING "#(0 1 2 3 4 5 6 7 8)"
PRAGMA EXPECTED_REWIRING "#(0 1 2 3 4 5 6 7 8)"
PRAGMA CURRENT_REWIRING "#(0 1 2 3 4 5 6 7 8)"
Native quil to executable¶
The second step of compilation will turn named gates into calibrated pulses stored in a binary format suitable for consumption by the control electronics. This means that you can fully compile a given program and run it many times with minimal classical overhead.
Note: since we’re using a QVM, for which there is no binary format, this stage is mocked out and you can see the original Quil inside the PyQuilExecutableResponse that is returned. When running on the QPU, this will return a BinaryExecutableResponse whose contents are opaque.
TODO: obscure the contents of PyQuilExecutableResponse: https://github.com/rigetti/pyquil/issues/700
[7]:
qc.compiler.native_quil_to_executable(nq_program)
[7]:
PyQuilExecutableResponse(attributes={'native_quil_metadata': {'final-rewiring': [0, 1, 2, 3, 4, 5, 6, 7, 8], 'topological_swaps': 0, 'gate_depth': 3, 'gate_volume': 3, 'program_duration': 18.01, 'program_fidelity': 1.0, 'multiqubit_gate_depth': 0}, 'num_shots': 1}, program='PRAGMA EXPECTED_REWIRING "#(0 1 2 3 4 5 6 7 8)"\nRX(pi/2) 0\nRZ(0.5) 0\nRX(-pi/2) 0\nPRAGMA CURRENT_REWIRING "#(0 1 2 3 4 5 6 7 8)"\nPRAGMA EXPECTED_REWIRING "#(0 1 2 3 4 5 6 7 8)"\nPRAGMA CURRENT_REWIRING "#(0 1 2 3 4 5 6 7 8)"\n')
Parametric compilation¶
This doesn’t buy us much if we have to know exactly what circuit we want to run before compiling it and amortizing the compilation cost. You might get away with that for a parameter scan, but in a hybrid algorithm the circuit parameter (here: theta) depends on the results of previous circuits. This is the essence of hybrid programming! Therefore, all compilation steps have been upgraded to support named, symbolic parameters that are updated at runtime with minimal overhead. With this feature, you can compile a parametric program once and run it many times with different parameter values, and you need not know those values at compilation time.
There are a couple of prerequisites to use this feature effectively from PyQuil, which we address in this document.
First, you must declare a parameter when constructing your Quil program. When declaring a named classical variable, you must specify at least a name and a type. It is conventional to make the Python variable name of the memory reference match the Quil variable name; in our case, we name both theta. Our circuit above is modified in this way:
[8]:
program = Program()
theta = program.declare('theta', memory_type='REAL')
program += RY(theta, 0)
print(program)
DECLARE theta REAL[1]
RY(theta) 0
Measuring¶
In the documentation so far, we’ve been using the run_and_measure functionality of QuantumComputer. It’s time to get our hands dirty and introduce explicit measure instructions.
Above, we declared a classical piece of memory, gave it a name (theta), and gave it a type (REAL). The bits that we measure (or “read out” – ro for short) must now also be declared with a name and a type. Additionally, we’ll usually be measuring more than one qubit, so we can give this register a size. The index of the readout register need not match the qubit index. For example, below we create a Bell state on qubits 5 and 6 and measure the readout results into ro[0] and ro[1].
Note: The readout register must be named “ro” (for now)
[9]:
program = Program()
ro = program.declare('ro', memory_type='BIT', memory_size=2)
program += H(5)
program += CNOT(5, 6)
program += MEASURE(5, ro[0])
program += MEASURE(6, ro[1])
print(program)
DECLARE ro BIT[2]
H 5
CNOT 5 6
MEASURE 5 ro[0]
MEASURE 6 ro[1]
Our very simple ansatz only has one qubit, so the measurement is quite simple.
[10]:
program = Program()
theta = program.declare('theta', memory_type='REAL')
ro = program.declare('ro', memory_type='BIT', memory_size=1)
program += RY(theta, 0)
program += MEASURE(0, ro[0])
print(program)
DECLARE theta REAL[1]
DECLARE ro BIT[1]
RY(theta) 0
MEASURE 0 ro[0]
Number of shots¶
The number of trials is compiled into the executable binary, so we must specify this number prior to compilation. TODO: add to str / repr https://github.com/rigetti/pyquil/issues/701
[11]:
program = Program()
theta = program.declare('theta', memory_type='REAL')
ro = program.declare('ro', memory_type='BIT', memory_size=1)
program += RY(theta, 0)
program += MEASURE(0, ro[0])
program.wrap_in_numshots_loop(shots=1000)
print(program)
DECLARE theta REAL[1]
DECLARE ro BIT[1]
RY(theta) 0
MEASURE 0 ro[0]
Using qc.run()¶
To use the lower-level but more powerful qc.run interface, we have had to take control of three things:

1. We declared a read-out register named ro of type BIT and included explicit MEASURE instructions. Since this sets up a (potentially sparse) mapping from qubits to classical addresses, we can expect qc.run() to return the classic 2d ndarray of yore instead of the dictionary returned by run_and_measure.
2. We called program.wrap_in_numshots_loop() prior to compilation, so the number of shots can be encoded in an efficient binary representation of the program.
3. We have taken control of compilation, either by calling qc.compile(program) or by using the lower-level functions:
nq_program = qc.compiler.quil_to_native_quil(program)
executable = qc.compiler.native_quil_to_executable(nq_program)
[12]:
def ansatz(theta):
    program = Program()
    ro = program.declare('ro', memory_type='BIT', memory_size=1)
    program += RY(theta, 0)
    program += MEASURE(0, ro[0])
    return program

print(ansatz(theta=np.pi))
DECLARE ro BIT[1]
RY(pi) 0
MEASURE 0 ro[0]
We can run the program with a pre-set angle (here, theta = np.pi).
[13]:
program = ansatz(theta=np.pi)
program.wrap_in_numshots_loop(shots=5)
executable = qc.compile(program)
bitstrings = qc.run(executable)
print(bitstrings.shape)
bitstrings
(5, 1)
[13]:
array([[1],
       [1],
       [1],
       [1],
       [1]])
Scan over the parameter (the new way)¶
Finally, all the pieces are in place to compile and run parameterized executable binaries. We declare parameters that will be compiled symbolically into the binary allowing us to amortize the cost of compilation when running hybrid algorithms.
[14]:
def ansatz():
    program = Program()
    theta = program.declare('theta', memory_type='REAL')
    ro = program.declare('ro', memory_type='BIT', memory_size=1)
    program += RY(theta, 0)
    program += MEASURE(0, ro[0])
    return program

print(ansatz())
DECLARE theta REAL[1]
DECLARE ro BIT[1]
RY(theta) 0
MEASURE 0 ro[0]
Using memory_map¶
Now, when we call qc.run we provide a memory_map argument, which substitutes values for previously-declared Quil variables in the pre-compiled executable.
[15]:
program = ansatz()  # look ma, no arguments!
program.wrap_in_numshots_loop(shots=1000)
executable = qc.compile(program)

thetas = np.linspace(0, 2*np.pi, 21)
results = []
for theta in thetas:
    bitstrings = qc.run(executable, memory_map={'theta': [theta]})
    results.append(np.mean(bitstrings[:, 0]))

%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(thetas, results, 'o-')
plt.xlabel(r'$\theta$', fontsize=18)
_ = plt.ylabel(r'$\langle \Psi(\theta) | \frac{1 - Z}{2} | \Psi(\theta) \rangle$', fontsize=18)
Do an optimization (the new way)¶
Since parameters are compiled symbolically, we can do hybrid algorithms just as fast as parameter scans.
[16]:
program = ansatz()  # look ma, no arguments!
program.wrap_in_numshots_loop(shots=1000)
executable = qc.compile(program)

def objective_function(thetas):
    bitstrings = qc.run(executable, memory_map={'theta': thetas})
    result = np.mean(bitstrings[:, 0])
    return -result

res = scipy.optimize.minimize(objective_function, x0=[0.1], method='COBYLA')
res
[16]:
     fun: -1.0
   maxcv: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 12
  status: 1
 success: True
       x: array([3.1])
[17]:
plt.plot(thetas, results, label='scan')
plt.plot([res.x], [-res.fun], '*', ms=20, label='optimization result')
plt.legend()
[17]:
<matplotlib.legend.Legend at 0x1015f13898> |
Nocedal and Wright on Conjugate Gradient Methods, p. 123, describe a
restart strategy ... whenever two consecutive gradients are far from orthogonal
$$\frac{|\nabla f_k^T \, \nabla f_{k-1}|}{\|\nabla f_k\|^2} \ge \nu,$$ with $\nu$ typically 1/10.
Can anyone comment on CG with such restarts, or point to test cases on the web ?
Or is the "popular choice $\max(0, \beta^{PR})$" (see Wikipedia, Nonlinear conjugate gradient method) good enough, satisficing?
(A good answer to bfgs-vs-conjugate-gradient-method says,
Anecdotal evidence points to restarting being a tricky issue,
as it is sometimes unnecessary and sometimes very necessary.
Well, that's generally true of a lot of things (taxes come to mind).
Test cases with plots of $\beta_k$ or $\theta_k$ might be interesting.)
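For concreteness, here is a minimal sketch of what such a restarted Polyak–Ribière CG might look like. The backtracking Armijo line search, the PR+ fallback on non-restart steps, and the descent safeguard are my own assumptions, not the algorithm as given by Nocedal and Wright:

```python
import numpy as np

def cg_with_restarts(f, grad, x0, nu=0.1, tol=1e-8, max_iter=1000):
    """Polyak-Ribiere CG, restarting (steepest-descent step) whenever
    consecutive gradients are far from orthogonal:
        |g_k . g_{k-1}| / ||g_k||^2 >= nu
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    restarts = 0
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:          # safeguard: not a descent direction
            d = -g
        # crude backtracking line search on the Armijo condition
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if abs(g_new.dot(g)) / g_new.dot(g_new) >= nu:   # restart test
            d = -g_new
            restarts += 1
        else:
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PR+
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x, k, restarts
```

Note that with an inexact line search, consecutive gradients are rarely close to orthogonal, so a small $\nu$ makes restarts frequent and the method behaves much like steepest descent.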
A possibly silly test case that led to the question is CG on an ill-conditioned quadratic in 2d:
import numpy as np
from scipy.optimize import fmin_cg

n = 2
cond = 100
eigenvalues = np.linspace( 1./cond, 1, n )
xmin = 1000 * np.ones( n )

def fprime( x ):
    return eigenvalues * (x - xmin)

def f( x ):
    return (x - xmin).dot( eigenvalues * (x - xmin)) / 2

x0 = np.zeros( n )
ret = fmin_cg( f, x0, fprime )
Added: for n in [1,2,3,4,5] this takes 80 81 6 40 9 iterations, 721 722 28 94 27 function evaluations. (Is CG generally very sensitive to the linesearch?) The Mathematica conjugate gradient minimizer has a RestartThreshold with default 1/10. But, sorry, I don't speak Mathematica. Any native speaker care to try this?