Before you answer this OP, please read all the terms and conditions below. Thank you...
Today I hold an unofficial little contest on brilliant.org. Now, I will hold it here on Math S.E. It's just for fun guys. (>‿◠)✌
Before we start the contest, here are the rules of my little contest that you should obey as a contestant (the one who post an answer to this OP during contest period):
The contest will start on Sunday, October $26$, $2014$ at $12.00$ p.m. noon (UTC). Until D-Day, please do not make any attempts (posting comments or answers) that could lead users here to answer the contest problem. You may post your answer after this OP is marked as a bounty question by me. I will give this OP a bounty of at least $250$ reps, which will be awarded to the winner. The winner of this contest is the contestant with the highest voted answer, so the jury of this contest is not only me, but all of us. Each contestant can post only one answer, at any time during the contest period. Once an answer is posted, it cannot be edited during the contest period. Since you have at least $4$ days to prepare your answer to this OP, please make sure you create a zero-error answer. Do not post this contest problem on other sites to ask for a hint or an answer. The contest will end on Sunday, November $2$, $2014$ (after the bounty period is over). A contestant will be disqualified for violating the contest rules above. The term disqualified means that a contestant who violates the rules, even if she/he owns the highest voted answer at the end of the contest, will not be declared the winner. Therefore, the bounty will not be awarded to her/him.
If necessary, the rules may be changed accordingly. To be fair, I will not answer this OP nor give any hints to anyone. I really need your cooperation in order to make this contest succeed. Thank you.
Here is the contest problem:
Prove\begin{equation}\large{\int_0^{\Large\frac{\pi}{2}}}\ln\left(\frac{\ln^2\sin\theta}{\pi^2+\ln^2\sin\theta}\right)\,\frac{\ln\cos\theta}{\tan\theta}\,d\theta=\frac{\pi^2}{4}\end{equation}
Feel free to use any methods to solve this problem (real or complex analysis approach). Okay, happy problem solving and best of luck! ٩(˘◡˘)۶
|
Ring of Polynomial Forms over Integral Domain is Integral Domain Theorem Let $\struct {D, +, \circ}$ be an integral domain, and let $D \sqbrk X$ be the ring of polynomial forms over $D$. Then $\struct {D \sqbrk X, \oplus, \odot}$ is an integral domain. Proof
From Ring of Polynomial Forms is Commutative Ring with Unity it follows that $\struct {D \sqbrk X, \oplus, \odot}$ is a commutative ring with unity.
Suppose $f, g \in D \sqbrk X$ such that neither $f$ nor $g$ are the null polynomial.
Let $\map \deg f = n$ and $\map \deg g = m$.
From Degree of Product of Polynomials over Integral Domain the degree of $f \odot g$ is $n + m$.
Thus by definition $f \odot g$ is not the null polynomial of $D \sqbrk X$.
Thus neither $f$ nor $g$ is a proper zero divisor of $D \sqbrk X$.
This holds for any two arbitrary non-null elements of $D \sqbrk X$.
That is, $\struct {D \sqbrk X, \oplus, \odot}$ is an integral domain.
$\blacksquare$
|
Solve over real $a$ $$\sqrt{3a-4}+\sqrt[3]{5-3a}=1.$$
If $p=3a-4$, $$\sqrt{p}+\sqrt[3]{1-p}=1.$$ If $q=5-3a$, $$\sqrt{1-q}+\sqrt[3]{q}=1.$$ Seems useful, but not sure how to proceed.
It is indeed useful. Write $$ \sqrt[3]{1-p}=1-\sqrt{p} $$ and cube: $$ 1-p=1-3\sqrt{p}+3p-p\sqrt{p} $$ Writing $r=\sqrt{p}$ you get $$ r^3-4r^2+3r=0 $$ and the rest should be easy. Just beware that the solutions are subject to $r\ge0$ and $p\ge0$ (in this case there's no problem, however).
Let $x:=\sqrt{p}$ so $0=1-x^2-(1-x)^3=x(1-x)(3-x)$ and$$x\in\{0,\,1,\,3\}\implies p=x^2\in\{0,\,1,\,9\}\implies a\in\left\{\tfrac43,\,\tfrac53,\,\tfrac{13}{3}\right\},$$all of which work.
Using your first substitution:
$$\begin{align*} \sqrt p+\sqrt[3]{1-p}&=1\\[1ex] \sqrt[3]{1-p}&=1-\sqrt p\\[1ex] 1-p&=(1-\sqrt p)^3\\[1ex] 1-p&=1-3\sqrt p+3p-|p|\sqrt p\\[1ex] \end{align*}$$
If $p=3a-4\ge0$ (and this is the only case if you're looking for real-valued solutions, since $\sqrt p$ would be defined only for $p\ge0$), or $a\ge\frac43$, then $|p|=p$:
$$\begin{align*} 1-p&=1-3\sqrt p+3p-p\sqrt p\\[1ex] 1-p&=1+3p-(3+p)\sqrt p\\[1ex] 4p&=(3+p)\sqrt p\\[1ex] \frac{4p}{3+p}&=\sqrt p\\[1ex] \frac{16p^2}{(3+p)^2}&=p\\[1ex] 16p^2&=p(3+p)^2\\[1ex] 16p^2&=p^3+6p^2+9p\\[1ex] p(p-1)(p-9)&=0 \end{align*}$$
Then either $p=0$, $p=1$, or $p=9$, which means either $a=\frac43$, $a=\frac53$, or $a=\frac{13}3$.
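The three candidate values can be checked numerically; this is a small sketch of my own (the helper name `lhs` is mine, not from the answers above), evaluating the left side of the original equation at each root:

```python
# Verify that a = 4/3, 5/3, 13/3 satisfy sqrt(3a - 4) + cbrt(5 - 3a) = 1.
def lhs(a):
    p = 3 * a - 4          # radicand of the square root, must be >= 0
    q = 5 - 3 * a          # radicand of the cube root, may be negative
    # Python's ** (1/3) of a negative float is complex, so take the
    # real cube root by hand when q < 0.
    cbrt = -((-q) ** (1 / 3)) if q < 0 else q ** (1 / 3)
    return p ** 0.5 + cbrt

for a in (4 / 3, 5 / 3, 13 / 3):
    print(a, lhs(a))       # each value is numerically close to 1
```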
One strategy is to get rid of the cube root first. Two reasons: first, it is the most complicated part of the problem; second, if you are working in the real numbers you can always take a cube root, but square roots require some care.
So write $x^3=5-3a$ to get $\sqrt {1-x^3}=1-x$
(It is obvious from this that $x=1$ is a solution). Now square this, conscious that squaring is likely to produce solutions which belong to the negative square root as well as the positive square root. $$1-x^3=(1-x)^2$$
One solution is $x=1$, and otherwise divide by $(1-x)$ to obtain $1+x+x^2=1-x$ or $x^2+2x=0$
Then work backwards and check that the solutions work for the original equation (and not the one with the negative square root).
That seems to work more smoothly than the other methods which have been suggested.
|
The context of this question is coming up with the parameters for the ElGamal encryption scheme.
One of the requirements for the parameters for ElGamal is that we have primes $p$ and $q$ such that $p = q \cdot k + 1$ for some $k$. For simplicity, let $k=2$. We also need a generator $g$ for $p$ such that $g^q \equiv 1 \pmod p$. (Feel free to correct me if I got any of this wrong).
However, according to this Crypto SE answer to
“How to test if a number is a primitive root?”, a number $g$ is a generator of $p$ iff $g^{\varphi(p)/j} \not \equiv 1 \pmod p$ for all prime factors $j$ of $\varphi(p)$. Since $\varphi(p) = p-1$ (as $p$ is prime) and $p-1$ has just two prime factors, $2$ and $q$, we have $g^q \not\equiv 1 \pmod p$.
Doesn't this directly contradict the above requirement of ElGamal?
For a real world example, take IKE groups 1 and 2 from RFC 2409. With group 2, we have $p = $ some large prime, and $(p-1)/2$ also equals some large prime. They give us the generator $g=2$. This checks out for ElGamal because $2^q \equiv 1 \pmod p$. I've also tested it with my implementation of ElGamal encryption and everything works fine. However, doesn't this contradict the definition of a generator? If I try to find a generator for $p$, the first one I get is $g=11$, because $11^q \not\equiv 1 \pmod p$ and $11^2 \not\equiv 1 \pmod p$. But if I use $g=11$, the encryption fails because, as far as I can tell, $11^q \not\equiv 1 \pmod p$.
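The two notions of "generator" in play can be seen with toy numbers. This is my own illustration (the values $p=23$, $q=11$, $g=5$ are far too small for real use): a primitive root of $p$ has order $p-1=2q$, so its $q$-th power is $-1 \bmod p$, while the subgroup element ElGamal wants has $q$-th power equal to $1$; squaring a primitive root produces one.

```python
# Toy safe-prime setup (illustration only): p = 2q + 1 with p, q prime.
p, q = 23, 11
g = 5                    # a primitive root mod 23, i.e. of order p - 1 = 22
print(pow(g, q, p))      # 22, i.e. -1 mod p: g^q != 1, so g is NOT an order-q element
h = pow(g, 2, p)         # squaring maps g into the subgroup of order q
print(pow(h, q, p))      # 1: h satisfies h^q = 1 mod p
```

So both statements are consistent: the primitive-root test requires $g^q \not\equiv 1$, while the subgroup generator used for ElGamal satisfies $h^q \equiv 1$; they are different objects.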
|
This is about simple infinite continued fractions. I don't understand the line '...then $C_0 < x < C_1$'. Here $C_k$ refers to $C_k=[a_0;a_1,a_2,\dots,a_k]$ where $1 \leq k \leq n$, and $C_0=a_0$.
Can anyone explain to me why the inequality is true?
Certainly $C_0 < x$, since $\displaystyle x = a_0 + \frac{1}{[a_1;a_2,a_3\cdots]}$ and the fraction is positive.
On the other hand, $\displaystyle \frac{1}{a_1 + \frac{1}{[a_2;a_3,a_4\cdots]}} < \frac{1}{a_1}$ since the fraction in the denominator is positive. Thus $\displaystyle x = a_0 + \frac{1}{[a_1;a_2,a_3\cdots]} = a_0 + \frac{1}{a_1 + \frac{1}{[a_2;a_3,a_4\cdots]}} < a_0 + \frac{1}{a_1} = C_1$.
The convergents of a continued fraction alternate above and below the final value. There are more formal ways of showing this, but for now simply consider that a continued fraction can be formed by truncating the integer part and taking the reciprocal of the remainder at each step. Since truncating always takes a lower value, it is apparent that the first convergent, $C_0=a_0$, is less than $x$; however truncating a reciprocal will take a higher value, thus $C_1 \left( =a_0+\dfrac1{a_1} \right) > x$. Since the reciprocal operation reverses the direction that truncating takes at each step, the convergents will continue to alternate around the ultimate value, unless the CF is finite.
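The alternation described above can be illustrated with a short computation. The recurrence for convergents and the expansion $\sqrt2 = [1;2,2,2,\dots]$ are standard; the code itself is my own sketch:

```python
from fractions import Fraction

def convergents(a):
    """Convergents C_k of [a0; a1, a2, ...] via the standard recurrence
    h_k = a_k h_{k-1} + h_{k-2},  k_k = a_k k_{k-1} + k_{k-2}."""
    h_prev, h = 1, a[0]   # h_{-1}, h_0
    k_prev, k = 0, 1      # k_{-1}, k_0
    out = [Fraction(h, k)]
    for ai in a[1:]:
        h_prev, h = h, ai * h + h_prev
        k_prev, k = k, ai * k + k_prev
        out.append(Fraction(h, k))
    return out

cs = convergents([1, 2, 2, 2, 2, 2])
print(cs)   # C_0..C_5: 1, 3/2, 7/5, 17/12, 41/29, 99/70
```

The even-indexed convergents sit below $\sqrt2 \approx 1.41421$ and the odd-indexed ones above, exactly the alternation claimed; in particular $C_0 = 1 < \sqrt2 < C_1 = 3/2$.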
|
Convergent Sequence is Cauchy Sequence/Normed Division Ring Theorem
Let $\struct {R, \norm {\,\cdot\,}} $ be a normed division ring.
Let $\sequence {x_n}$ be a sequence in $R$ which converges to a limit $l \in R$.
Then $\sequence {x_n}$ is a Cauchy sequence. Proof
Let $\epsilon > 0$.
Then also $\dfrac \epsilon 2 > 0$.
Because $\sequence {x_n}$ converges to $l$, we have:
$\exists N: \forall n > N: \norm {x_n - l} < \dfrac \epsilon 2$
So if $m > N$ and $n > N$, then:
\begin{align}
\norm {x_n - x_m} &= \norm {x_n - l + l - x_m} \\
&\le \norm {x_n - l} + \norm {l - x_m} && \text{Triangle Inequality} \\
&< \frac \epsilon 2 + \frac \epsilon 2 && \text{by choice of } N \\
&= \epsilon
\end{align}
Thus $\sequence {x_n}$ is a Cauchy sequence.
By Convergent Sequence is Cauchy Sequence in a metric space, $\sequence {x_n}$ is a Cauchy sequence in $\struct {R, d}$, where $d$ is the metric induced by the norm $\norm {\,\cdot\,}$.
$\blacksquare$
|
$\newcommand{\Cof}{\text{cof}}$ Let $d>2$. Let $f \in W^{1,p}(\Omega,\mathbb{R}^d)$ where $\Omega$ is an open subset of $\mathbb{R}^d$. Let $2 \le k \le d-1$ be fixed.
Suppose that $\det df>0$ a.e. and that $\bigwedge^k df$ is smooth. Is $f$ smooth?
Partial answer: If $k,d$ are not both even and $\bigwedge^k df \in \text{GL}(\bigwedge^{k}\mathbb{R}^d)$, the answer is positive. (see details below).
I am not sure the answer remains positive when $k,d$ are both even, since in that case $\bigwedge^k A=\bigwedge^k (-A)$ and both $A,-A \in \text{GL}^+(\mathbb{R}^d)$. Thus the minors cannot distinguish between a map and its negative, so in principle $df$ could "switch" between "something" and its negative, violating smoothness.
It would be interesting to find a concrete counter example for smoothness in this case. The smallest dimensions are $d=4,k=2$. Every possible counter-example must have non-continuous weak derivatives.
When $k,d$ are not both even, $\bigwedge^k df$ uniquely determines $df$ (assuming $\det df>0$). That is, if $A,B \in \text{GL}^+(\mathbb{R}^d)$ and $\bigwedge^k A=\bigwedge^k B$, then $A=B$. Indeed, write $S=AB^{-1}$. Then we have $\bigwedge^k S=\text{Id}_{\bigwedge^k \mathbb{R}^d}$. This implies, any $k$-dimensional subspace of $\mathbb{R}^d$ is $S$-invariant, hence $S$ is a multiple of the identity, i.e. $S=\lambda \text{Id}$, which then forces $\lambda^k=1$, so $\lambda=\pm 1$. If $k$ is odd, then $\lambda=1$, and $S=\text{Id}$. If $d$ is odd, then the requirement $S \in \text{GL}^+(\mathbb{R}^n)$ forces $\lambda^d=\det S >0$, so again $S=\text{Id}$.
We showed that the map $\psi: A \to \bigwedge^k A$ is a smooth injective homomorphism of Lie groups $\text{GL}^+(V) \to \text{GL}(\bigwedge^{k}V)$. Set $S=\text{Image} (\psi)$. Since $S$ is an embedded submanifold, $\psi:\text{GL}^+(V) \to S$ is a diffeomorphism. Now composing $x \to \bigwedge^k df_x$ with the smooth inverse of $\psi$ finishes the job.
Some more details are provided in my answer below.
|
The derivative of an exponential function with respect to a variable is equal to the product of the exponential function and the natural logarithm of the base of the exponential function. The differentiation of the exponential function $a^{\displaystyle x}$ with respect to $x$ can be derived in differential calculus from first principles.
To find the derivative of exponential function $a^{\displaystyle x}$ with respect to $x$, write the derivative of this function in limit form by the definition of the derivative.
$\dfrac{d}{dx}{\, f(x)}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to\, 0}{\normalsize \dfrac{f(x+\Delta x)-f(x)}{\Delta x}}$
$\implies$ $\dfrac{d}{dx}{\, f(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$
Assume $f{(x)} = a^{\displaystyle x}$, then $f{(x+h)} = a^{\displaystyle x+h}$. Now, substitute them in the first principle of the derivative for evaluating the differentiation of the function $a^{\displaystyle x}$ with respect to $x$ in differential calculus.
$\implies$ $\dfrac{d}{dx}{\, (a^{\displaystyle x})}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle x+h}-a^{\displaystyle x}}{h}}$
Use product rule of exponents with same base to split the first term in the numerator of the exponential function.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle x} \times a^{\displaystyle h}-a^{\displaystyle x}}{h}}$
In the numerator, $a^{\displaystyle x}$ is a common factor of both terms of the expression, so it can be factored out to simplify the exponential function.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle x}\Big(a^{\displaystyle h}-1\Big)}{h}}$
Now, factorize the function as two different functions.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[a^{\displaystyle x} \times \dfrac{a^{\displaystyle h}-1}{h}\Bigg]}$
Use the product rule of limits to write the limit of the product as the product of the limits.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize a^{\displaystyle x}}$ $\times$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle h}-1}{h}}$
Now, evaluate the limit of the exponential function $a^{\displaystyle x}$ as $h$ approaches zero by using direct substitution method.
$=\,\,\,$ $a^{\displaystyle x}$ $\times$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle h}-1}{h}}$
As per the limit of an exponential function, the limit of $\dfrac{a^{\displaystyle h}-1}{h}$ as $h$ approaches zero is equal to natural logarithm of $a$.
$=\,\,\,$ $a^{\displaystyle x}$ $\times$ $\log_{e}{a}$
$\therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, (a^{\displaystyle x})} \,=\, a^{\displaystyle x}\log_{e}{a}$
Therefore, it is proved that the derivative of exponential function $a^{\displaystyle x}$ with respect to $x$ is equal to product of the exponential function $a^{\displaystyle x}$ and natural logarithm of $a$.
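The result can also be checked numerically; this is a sketch of my own (the base $a=3$, the point $x=1.5$, and the step size are arbitrary choices), comparing a central-difference approximation against the formula just derived:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

a, x = 3.0, 1.5
numeric = numeric_derivative(lambda t: a ** t, x)
exact = (a ** x) * math.log(a)   # the derived formula: a^x * ln(a)
print(numeric, exact)            # the two values agree to several decimal places
```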
|
Film Boiling Analysis in Porous Media From Thermal-FluidsPedia

Current revision as of 01:59, 9 July 2010

Film boiling of liquid saturated in a porous medium at an initial temperature of <math>{T_\infty}</math> next to a vertical, impermeable heated wall at a temperature of <math>{T_w} > {T_{sat}}</math> is analyzed (see figure). Vapor generated at the liquid-vapor interface flows upward due to buoyancy force. The liquid adjacent to the vapor layer is dragged upward by the vapor. The temperature at the liquid-vapor interface is at the saturation temperature. There are velocity and thermal boundary layers in the liquid phase adjacent to the vapor film. The solution of the film boiling problem requires solutions of vapor and liquid flow, as well as heat transfer in both the vapor and liquid phases. It is assumed that boundary layer approximations are applicable to the vapor film and to convection heat transfer in the liquid phase. It is further assumed that the vapor flow is laminar and two-dimensional, and that Darcy's law is applicable in both the vapor and liquid phases. The continuity, momentum, and energy equations in the vapor film are

<center><math>\frac{{\partial {u_v}}}{{\partial x}} + \frac{{\partial {v_v}}}{{\partial y}} = 0\qquad \qquad(1) </math></center>

<center><math>{u_v} = - \frac{K}{{{\mu _v}}}({\rho _\ell } - {\rho _v})g\qquad \qquad(2) </math></center>

<center><math>{u_v}\frac{{\partial {T_v}}}{{\partial x}} + {v_v}\frac{{\partial {T_v}}}{{\partial y}} = {\alpha _{mv}}\frac{{{\partial ^2}{T_v}}}{{\partial {y^2}}}\qquad \qquad(3) </math></center>

where <math>{\alpha _{mv}}</math> is the thermal diffusivity of the porous medium saturated with the vapor.

The governing equations for the liquid boundary layer are

<center><math>\frac{{\partial {u_\ell }}}{{\partial x}} + \frac{{\partial {v_\ell }}}{{\partial y}} = 0\qquad \qquad(4) </math></center>

<center><math>{u_\ell } = \frac{K}{{{\mu _\ell }}}{\rho _\infty }g{\beta _\ell }({T_\ell } - {T_\infty })\qquad \qquad(5) </math></center>

<center><math>{u_\ell }\frac{{\partial {T_\ell }}}{{\partial x}} + {v_\ell }\frac{{\partial {T_\ell }}}{{\partial y}} = {\alpha _{m\ell }}\frac{{{\partial ^2}{T_\ell }}}{{\partial {y^2}}}\qquad \qquad(6) </math></center>

where <math>{\alpha _{m\ell }}</math> is the thermal diffusivity of the porous medium saturated with the liquid.

The boundary conditions at the heated wall (<math>y = 0</math>) are

<center><math>{v_v} = 0\begin{array}{*{20}{c}} , & {y = 0} \\ \end{array}\qquad \qquad(7) </math></center>

<center><math>T = {T_w}\begin{array}{*{20}{c}} , & {y = 0} \\ \end{array}\qquad \qquad(8) </math></center>

It should be pointed out that <math>u</math> is not equal to zero at the heating surface under Darcy's law, i.e., slip occurs at the surface. The boundary conditions in the liquid far from the heated surface are

<center><math>{u_\ell } = 0\begin{array}{*{20}{c}} , & {y \to \infty } \\ \end{array}\qquad \qquad(9) </math></center>

<center><math>{T_\ell } = {T_\infty }\begin{array}{*{20}{c}} , & {y \to \infty } \\ \end{array}\qquad \qquad(10) </math></center>

The mass balance at the liquid-vapor interface is (see Film Boiling Analysis):

<center><math>{\left( {\rho u\frac{{d\delta }}{{dx}} - \rho v} \right)_v} = {\left( {\rho u\frac{{d\delta }}{{dx}} - \rho v} \right)_\ell }\begin{array}{*{20}{c}} , & {y = {\delta _v}} \\ \end{array}\qquad \qquad(11) </math></center>

The temperature at the liquid-vapor interface is equal to the saturation temperature:

<center><math>T = {T_{sat}}\begin{array}{*{20}{c}} , & {y = {\delta _v}} \\ \end{array}\qquad \qquad(12) </math></center>

The above film boiling problem can be solved using a similarity solution like that for film condensation in porous media.

==References==

Cheng, P., and Verma, A.K., 1981, "The Effect of Subcooling Liquid on Film Boiling about a Vertical Heated Surface in a Porous Medium," ''International Journal of Heat and Mass Transfer'', Vol. 24, pp. 1151-1160.

Faghri, A., and Zhang, Y., 2006, ''Transport Phenomena in Multiphase Systems'', Elsevier, Burlington, MA.

Faghri, A., Zhang, Y., and Howell, J. R., 2010, ''Advanced Heat and Mass Transfer'', Global Digital Press, Columbia, MO.

Nield, D.A., and Bejan, A., 1999, ''Convection in Porous Media'', 2<sup>nd</sup> ed., Springer-Verlag, New York.
|
Question in the title.
It intuitively seems absurd that $p_N - p_{N-1} \gt p_{N-1} - 3 = $ the largest gap formable from all $p_i = $ odd primes $3, \dots, p_{N-1}$.
Was wondering how difficult the proof is.
$2 p_i$ is the smallest composite divisible by $p_i$. And $p_N + 3$ is certainly a composite. Not sure if that helps : >
Thanks @PaoloLeonetti in the comments. According to the article:
In 1998, Pierre Dusart improved the result in his doctoral thesis, showing that for $k \geq 463, p_{k+1} \leq (1 + 1/(\ln^2 p_k))p_k$, ...
So we want to show that $p_{k+1} \leq 2 p_{k} - 3$ for sufficiently large $k$.
$2 p_k - 3 = (2 - \dfrac{3}{p_k})p_k$ and
$$ 2 - \dfrac{3}{p_k} \geq 1 + 1 / (\ln^2 p_k) \iff (1 - \dfrac{3}{p_k})\ln^2 p_k \geq 1 \iff \\ \ln^2 p_k \geq \dfrac{p_k}{p_k - 3} $$
the last operation being valid since $k \geq 463$ and so $3$ is much less than $p_k$ and so $1 - \dfrac{3}{p_k} \gt 0$.
The last inequality clearly holds for $k \geq 463$: there $\ln p_k \approx 8$, so $\ln^2 p_k \approx 64$, while $\dfrac{p_k}{p_k - 3}$ is only slightly larger than $1$.
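Dusart's bound covers all sufficiently large $k$; the finitely many small cases can be checked directly. A sketch of my own (the sieve limit $10^5$ is an arbitrary choice):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(100000)
# Check p_{k+1} <= 2 p_k - 3 for consecutive primes starting at p_k = 5.
# (It fails for p_k = 3, since the next prime is 5 > 2*3 - 3 = 3.)
violations = [(p, nxt) for p, nxt in zip(ps[2:], ps[3:]) if nxt > 2 * p - 3]
print(violations)   # [] -- no violations below the sieve limit
```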
|
Now showing items 1-9 of 9
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\mathrm{int}}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-$p_T$ trigger particles ($8 < p_T < 15$ GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
Take a vector error correction (VECM) model:
$$\;\;\;\Delta y_t=\Pi y_{t-1}+\Gamma_1\Delta y_{t-1}+...+\Gamma_{p-1}\Delta y_{t-(p-1)}+\varepsilon_t$$
where $\Pi=\alpha \beta'$ and each row of $\beta'$ (or, equivalently, each column of $\beta$) is a cointegrating vector.
Questions: When a VECM is estimated by maximum likelihood (ML), is an estimate of the $\beta'$ matrix taken as given (e.g. it could be obtained as a by-product of the Johansen procedure)? Or is $\Pi$ estimated simultaneously with all the other parameters in the model, subject to restrictions on $\Pi$ due to the cointegration rank (which needs to be obtained in advance, e.g. via the Johansen procedure)? When a VECM is estimated by ordinary least squares (OLS), is an estimate of the $\beta'$ matrix taken as given?
Here is a related question (see Edit of guess 1 and questions 4, 5).
|
Lower attic
From Cantor's Attic
Revision as of 16:52, 28 December 2011 by Jdh
Welcome to the lower attic, where we store the comparatively smaller notions of infinity. Roughly speaking, this is the realm of countable ordinals and their friends.
- $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
- stable ordinals
- the ordinals of infinite time Turing machines, including $\omega_1^x$
- admissible ordinals
- $\Gamma$
- Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
- $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
- the small countable ordinals, those below $\epsilon_0$
- Hilbert's hotel
- $\omega$, the smallest infinity
- down to the subattic, containing very large finite numbers
|
I was working on my project when I was struck by the question of whether it would be necessary, or at least prudent, to prevent overflow and underflow in the calculation of these two distances.
I remembered that there is an implementation of the hypotenuse calculation that prevents this. Most language implementations provide it, and it is known as Hypot().
The calculation of the Euclidean distance follows the same pattern, and I thought that if Hypot() controls overflow and underflow, the Euclidean distance calculation should also guard against them. I was disappointed to note that the language we use, and others, do not control overflow and underflow when calculating distances. Is it not worth spending this additional effort?
I did a search and came to a question on Math.StackExchange.
There is no definitive answer to that question, and it is somewhat old. The first thing I wondered is: is the approach okay? I think so, seeing that it is a generalization of the same procedure that Hypot() performs.
I decided to extrapolate this concept to the Mahalanobis distance. The original is as follows:
$$D_M(X,Y,L) = \sqrt{\sum_{i=1}^{n}\left(\frac{X_i-Y_i}{L_i}\right)^2}$$
where $L$ is the vector of eigenvalues.
And my proposal is this:
$$D_M(X,Y,L) = C\sqrt{\sum_{i=1}^{n}\left(\frac{X_i-Y_i}{L_i }\frac{1}{C}\right)^2}$$
which is the same as:
$$D_M(X,Y,L) = C\sqrt{\sum_{i=1}^{n}\left(\frac{X_i-Y_i}{L_i C}\right)^2}$$
where $C$ is the maximum of the values $|(X_i-Y_i)/L_i|$:
$$C = \max_{i}\left(|\frac{X_i-Y_i}{L_i}|\right)$$
Is it okay?
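For reference, here is a sketch of the proposed scaling (my own code; `scaled_distance` is a hypothetical helper name): dividing each ratio by $C$ keeps every squared term in $[0,1]$, exactly as Hypot() does with its two terms, and multiplying by $C$ at the end restores the magnitude.

```python
import math

def scaled_distance(x, y, l):
    """Standardized distance with hypot-style scaling to avoid overflow."""
    ratios = [(xi - yi) / li for xi, yi, li in zip(x, y, l)]
    c = max(abs(r) for r in ratios)
    if c == 0.0:
        return 0.0
    # Each (r / c)^2 lies in [0, 1], so the sum cannot overflow.
    return c * math.sqrt(sum((r / c) ** 2 for r in ratios))

x, y, l = [1e200, 2e200], [0.0, 0.0], [1.0, 1.0]
# Naive formula: the squared terms overflow to infinity.
naive = math.sqrt(sum(((xi - yi) / li) * ((xi - yi) / li)
                      for xi, yi, li in zip(x, y, l)))
print(naive)                     # inf
print(scaled_distance(x, y, l))  # ~2.236e+200, i.e. sqrt(5) * 1e200
```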
|
Now showing items 1-10 of 34
Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector
(Springer, 2014-11)
The results of a search for top squark (stop) pair production in final states with one isolated lepton, jets, and missing transverse momentum are reported. The analysis is performed with proton-proton collision data at $\sqrt{s}$ ...
Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s = 8 TeV proton-proton collision data with the ATLAS detector
(Springer, 2014-09-18)
A search for supersymmetry (SUSY) in events with large missing transverse momentum, jets, at least one hadronically decaying tau lepton and zero or one additional light leptons (electron/muon), has been performed using ...
Measurement of the top quark pair production charge asymmetry in proton-proton collisions at √s = 7 TeV using the ATLAS detector
(Springer, 2014-02)
This paper presents a measurement of the top quark pair ($t\bar{t}$) production charge asymmetry $A_C$ using 4.7 fb$^{-1}$ of proton-proton collisions at a centre-of-mass energy $\sqrt{s}$ = 7 TeV collected by the ATLAS detector at the LHC. ...
Measurement of the low-mass Drell-Yan differential cross section at √s = 7 TeV using the ATLAS detector
(Springer, 2014-06)
The differential cross section for the process Z/γ ∗ → ℓℓ (ℓ = e, μ) as a function of dilepton invariant mass is measured in pp collisions at √s = 7 TeV at the LHC using the ATLAS detector. The measurement is performed in ...
Measurements of fiducial and differential cross sections for Higgs boson production in the diphoton decay channel at √s=8 TeV with ATLAS
(Springer, 2014-09-19)
Measurements of fiducial and differential cross sections are presented for Higgs boson production in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 8 TeV. The analysis is performed in the H → γγ decay channel ...
Measurement of the inclusive jet cross-section in proton-proton collisions at \( \sqrt{s}=7 \) TeV using 4.5 fb−1 of data with the ATLAS detector
(Springer, 2015-02-24)
The inclusive jet cross-section is measured in proton-proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the ...
ATLAS search for new phenomena in dijet mass and angular distributions using pp collisions at $\sqrt{s}$=7 TeV
(Springer, 2013-01)
Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy $\sqrt{s}$=7 TeV have been studied with the ATLAS detector using the full 2011 data set with an integrated ...
Search for direct chargino production in anomaly-mediated supersymmetry breaking models based on a disappearing-track signature in pp collisions at $\sqrt{s}$=7 TeV with the ATLAS detector
(Springer, 2013-01)
A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at $\sqrt{s}$ = 7 TeV using 4.7 fb$^{-1}$ of data collected with the ATLAS experiment at the LHC. ...
Search for heavy lepton resonances decaying to a $Z$ boson and a lepton in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector
(Springer, 2015-09)
A search for heavy leptons decaying to a $Z$ boson and an electron or a muon is presented. The search is based on $pp$ collision data taken at $\sqrt{s}=8$ TeV by the ATLAS experiment at the CERN Large Hadron Collider, ...
Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector
(Springer, 2015-04-21)
Results of a search for $H \to \tau \tau$ decays are presented, based on the full set of proton--proton collision data recorded by the ATLAS experiment at the LHC during 2011 and 2012. The data correspond to integrated ...
|
I found this SO post which expresses the PDF of a multivariate t-distribution in terms of the gamma and normal distributions in Python as follows
$$ G = \Gamma (k = \nu /2 ; \theta = 2 / \nu)\\ Z = N (\mu; \Sigma)\\ PDF_t = \mu + Z / \sqrt{G} $$
where $\mu$ is the mean vector of the distribution, $\nu$ is the degrees of freedom of the t-distribution, $\Gamma$ is the gamma distribution with shape $k$ and scale $\theta$, $N$ is the multivariate normal distribution with mean $\mu$ and covariance $\Sigma$.
My dimensions and notation may be slightly wrong in the equations above, but that is why I am asking the question.
Explicitly, the python code is
d = len(Sigma)  # d is the dimension of Sigma, the covariance matrix
# g generates m samples of the univariate gamma distribution, then
# copies (np.tile) these d times and takes the transpose to produce an m*d matrix
g = np.tile(np.random.gamma(nu/2, 2/nu, m), (d, 1)).T  # nu is the DOF
# generate samples from the multivariate normal
Z = np.random.multivariate_normal(np.zeros(d), Sigma, m)
t = mu + Z/np.sqrt(g)
How can samples from the univariate gamma distribution be combined with samples from the multivariate normal distribution in the formula $PDF_t$ above? Is there some general relation between the t-distribution and the gamma and normal distribution? I found a mathworld explanation of the t-distribution and its relation to the normal and chi-squared distribution which led to a relation between chi-squared and gamma, but I haven't been able to reconcile these to get the relation above. How does this work?
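For what it's worth, the key identity can be checked numerically in one dimension: if $G \sim \Gamma(k=\nu/2,\ \theta=2/\nu)$ then $\nu G \sim \chi^2_\nu$, so $Z/\sqrt{G} = Z/\sqrt{\chi^2_\nu/\nu}$ is exactly the classic Student-t construction. A standard-library-only sketch (all the names here are my own illustration, not the NumPy code above):

```python
import math
import random

random.seed(0)
nu = 10.0       # degrees of freedom
n = 100_000     # number of samples

samples = []
for _ in range(n):
    # Gamma(shape=nu/2, scale=2/nu): nu * g is chi-squared with nu DOF
    g = random.gammavariate(nu / 2.0, 2.0 / nu)
    z = random.gauss(0.0, 1.0)
    samples.append(z / math.sqrt(g))  # Student-t with nu DOF

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# A t(nu) variable has mean 0 and variance nu / (nu - 2)
```

The sample mean and variance should be close to 0 and $\nu/(\nu-2)$ respectively, which is what distinguishes the t from the plain normal.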
|
Tool to calculate triple integrals. The calculation of three consecutive integrals makes it possible to compute volumes for functions of three variables over a given domain.
Triple Integral - dCode
Tag(s) : Functions, Symbolic Computation
The triple integral calculation is equivalent to a calculation of three consecutive integrals from the innermost to the outermost.
$$ \iiint f(x,y,z) \text{ d}x\text{ d}y\text{ d}z = \int_{(z)} \left( \int_{(y)} \left( \int_{(x)} f(x,y,z) \text{ d}x \right) \text{ d}y \right) \text{ d}z $$
Example: Calculate the integral of $ f(x,y,z)=xyz $ over $ x \in [0,1] $, $ y \in [0,2] $ and $ z \in [0,3] $ $$ \int_{0}^{3} \int_{0}^{2} \int_{0}^{1} xyz \text{ d}x\text{ d}y\text{ d}z = \int_{0}^{3} \int_{0}^{2} \frac{yz}{2} \text{ d}y\text{ d}z = \int_{0}^{3} z \text{ d}z = \frac{9}{2} $$
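The worked example can be sanity-checked numerically with a nested midpoint rule; this is a quick Python sketch (the helper name is mine, not part of dCode):

```python
def triple_midpoint(f, ax, bx, ay, by, az, bz, n=40):
    # Approximate the triple integral of f over the box
    # [ax,bx] x [ay,by] x [az,bz] with the midpoint rule on an n^3 grid.
    hx, hy, hz = (bx - ax) / n, (by - ay) / n, (bz - az) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            for k in range(n):
                z = az + (k + 0.5) * hz
                total += f(x, y, z)
    return total * hx * hy * hz

approx = triple_midpoint(lambda x, y, z: x * y * z, 0, 1, 0, 2, 0, 3)
# The midpoint rule is exact for functions linear in each variable,
# so this matches 9/2 up to floating-point rounding.
```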
Enter the function to be integrated on dCode with the desired upper and lower bounds for each variable and the calculator automatically returns the result.
The cylindrical coordinates are often used to perform volume calculations via a triple integration by changing variables:
$$ \iiint f(x,y,z) \text{ d}x\text{ d}y\text{ d}z = \iiint f(r \cos(\theta), r\sin(\theta), z) r \text{ d}r\text{ d}\theta\text{ d}z $$
The spherical coordinates are often used to perform volume calculations via a triple integration by changing variables:
$$ \iiint f(x,y,z) \text{ d}x\text{ d}y\text{ d}z = \iiint f(\rho \cos(\theta) \sin(\varphi), \rho \sin(\theta)\sin(\varphi), \rho \cos(\varphi) ) \rho^2 \sin(\varphi) \text{ d}\rho \text{ d}\theta \text{ d}\varphi $$
dCode retains ownership of the source code of the script Triple Integral online. Except for an explicit open-source licence (indicated Creative Commons / free), any algorithm, applet, snippet, or software (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt, encrypt, decipher, cipher, decode, code, translate) written in any programming language (PHP, Java, C#, Python, Javascript, Matlab, etc.) to which dCode owns the rights will not be released for free. To download the online Triple Integral script for offline use on PC, iPhone or Android, ask for a price quote on the contact page!
|
Let $F$ be a locally constant sheaf on $X$, let $U$ be an open subset such that $F|_U$ is a constant sheaf, and let $x\in U$. Now let $s,s'$ be two sections in $F(U)$ such that $s(x)=s'(x)$. Can we say that $s=s'$ on some neighbourhood of $x$?
I think there are two different understandings of "constant".
I know the sections of the constant sheaf $A$ over an open set $U$ may be interpreted as the continuous functions $U\to A$, where $A$ is given the discrete topology. If $U$ is connected, then these locally constant functions are constant.
I feel confused by the example below, where the sections are defined on a connected set and yet are not constant; I wonder what "constant" means here?
If $X$ is locally connected, locally constant sheaves are (up to isomorphism) exactly the sheaves of sections of covering spaces $\pi:Y\to X$.
Such a locally constant sheaf is a constant sheaf if and only if the covering $\pi$ is trivial. So any nontrivial covering will give you a non-constant but locally constant sheaf. The simplest example is the sheaf of sections of the two-sheeted nontrivial covering $\mathbb C^*\to \mathbb C^*:z\mapsto z^2$, or its restriction to the unit circle $S^1\to S^1: e^{i\theta}\mapsto e^{2i\theta}$.
|
What is a correlation?
A correlation quantifies the linear association between two variables. From one perspective, a correlation has two parts: one part quantifies the association, and the other part sets the scale of that association.
The first part—the covariance, also the correlation numerator—equates to a sort of “average sum of cross-products” of two variables:
\(cov_{(X, Y)} = \frac{\sum(X - \bar X)(Y - \bar Y)}{N - 1}\)
It may be easier to interpret the covariance as an “average of the X-Y matches”: deviations of X scores above the X mean multiplied by deviations of Y scores below the Y mean will be negative, and deviations of X scores above the X mean multiplied by deviations of Y scores above the Y mean will be positive. More “mismatches” leads to a negative covariance and more “matches” leads to a positive covariance.
The second part—the product of the standard deviations, also the correlation denominator—restricts the association to values from -1.00 to 1.00.
\[\sqrt{var_X var_Y} = \sqrt{\frac{\sum(X - \bar X)^2}{N - 1} \frac{\sum(Y - \bar Y)^2}{N - 1}}\]
Divide the numerator by the denominator and you get a sort of “ratio of the sum of squares”, the Pearson correlation coefficient:
\[r_{XY} = \frac{\frac{\sum(X - \bar X)(Y - \bar Y)}{N - 1}}{\sqrt{\frac{\sum(X - \bar X)^2}{N - 1} \frac{\sum(Y - \bar Y)^2}{N - 1}}} = \frac{cov_{(X, Y)}}{\sqrt{var_X var_Y}}\]
Square this “standardized covariance” for an estimate of the proportion of variance of Y that can be accounted for by a linear function of X, \(R^2_{XY}\).
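The formulas above are easy to verify directly; here is a small pure-Python sketch (the function name is my own, and the post itself uses R):

```python
import math

def pearson_r(xs, ys):
    # r = cov(X, Y) / sqrt(var(X) * var(Y)), all with N - 1 denominators
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
    var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
    return cov / math.sqrt(var_x * var_y)
```

A perfectly linear increasing relationship gives r = 1 and a perfectly decreasing one gives r = -1, which is the bounding role of the denominator.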
By the way, the correlation equation is very similar to the bivariate linear regression beta coefficient equation. The only difference is the denominator, which is the X variance alone rather than the product of the X and Y standard deviations:
\[\hat{\beta} = \frac{\frac{\sum(X - \bar X)(Y - \bar Y)}{N - 1}}{\frac{\sum(X - \bar X)^2}{N - 1}} = \frac{cov_{(X, Y)}}{var_X}\]
What does it mean to “adjust” a correlation?
An adjusted correlation refers to the (square root of the) change in a regression model’s \(R^2\) after adding a single predictor to the model: \(R^2_{full} - R^2_{reduced}\). This change quantifies that additional predictor’s “unique” contribution to observed variance explained. Put another way, this value quantifies observed variance in Y explained by a linear function of X after removing variance shared between X and the other predictors in the model.
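This \(R^2_{full} - R^2_{reduced}\) idea can be sketched in a few lines of Python with a tiny made-up data set (all names and data here are my own illustration, not the HSB analysis below):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def r_squared(y, preds):
    # OLS R^2 for one or two centered predictors via the normal equations
    n = len(y)
    yc = [v - sum(y) / n for v in y]
    Xc = [[v - sum(p) / n for v in p] for p in preds]
    if len(Xc) == 1:
        b = [dot(Xc[0], yc) / dot(Xc[0], Xc[0])]
    else:
        s11, s12, s22 = dot(Xc[0], Xc[0]), dot(Xc[0], Xc[1]), dot(Xc[1], Xc[1])
        s1y, s2y = dot(Xc[0], yc), dot(Xc[1], yc)
        det = s11 * s22 - s12 * s12
        b = [(s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det]
    fit = [sum(b[k] * Xc[k][i] for k in range(len(b))) for i in range(n)]
    sse = sum((yc[i] - fit[i]) ** 2 for i in range(n))
    return 1 - sse / sum(v * v for v in yc)

x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
y = [a + 2 * b for a, b in zip(x1, x2)]  # y is exactly x1 + 2*x2
r2_full = r_squared(y, [x1, x2])     # full model fits perfectly
r2_reduced = r_squared(y, [x1])      # reduced model does not
delta_r2 = r2_full - r2_reduced      # x2's "unique" contribution
```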
Model and Conceptual Assumptions for Linear Regression
- Correct functional form. Your model variables share linear relationships.
- No omitted influences. This one is hard: your model accounts for all relevant influences on the variables included. All models are wrong, but how wrong is yours?
- Accurate measurement. Your measurements are valid and reliable. Note that unreliable measures can't be valid, and reliable measures don't necessarily measure just one construct or even your construct.
- Well-behaved residuals. Residuals (i.e., prediction errors) aren't correlated with predictor variables or each other, and residuals have constant variance across values of your predictor variables.
Libraries
# library("tidyverse")
# library("knitr")
# library("effects")
# library("psych")
# library("candisc")
library(tidyverse)
library(knitr)
library(effects)
library(psych)
library(candisc)
# select from dplyr
select <- dplyr::select
recode <- dplyr::recode
Load data
From
help("HSB"): “The High School and Beyond Project was a longitudinal study of students in the U.S. carried out in 1980 by the National Center for Education Statistics. Data were collected from 58,270 high school students (28,240 seniors and 30,030 sophomores) and 1,015 secondary schools. The HSB data frame is sample of 600 observations, of unknown characteristics, originally taken from Tatsuoka (1988).”
HSB <- as_tibble(HSB)
# print a random subset of rows from the dataset
HSB %>% sample_n(size = 15) %>% kable()
| id  | gender | race         | ses    | sch     | prog     | locus | concept | mot  | career     | read | write | math | sci  | ss   |
|-----|--------|--------------|--------|---------|----------|-------|---------|------|------------|------|-------|------|------|------|
| 88  | female | asian        | high   | public  | academic | -0.39 | 1.19    | 1.00 | prof1      | 40.5 | 59.3  | 41.9 | 33.6 | 50.6 |
| 113 | male   | african-amer | high   | public  | academic | -0.03 | 0.87    | 1.00 | military   | 53.2 | 46.3  | 43.0 | 41.7 | 50.6 |
| 515 | male   | white        | middle | public  | academic | 0.68  | -0.26   | 1.00 | prof2      | 62.7 | 61.9  | 56.2 | 47.1 | 45.6 |
| 141 | male   | african-amer | middle | public  | vocation | -2.23 | 1.19    | 1.00 | operative  | 36.3 | 38.5  | 39.3 | 39.0 | 45.6 |
| 439 | female | white        | low    | public  | academic | 0.68  | 0.88    | 1.00 | prof1      | 46.9 | 61.9  | 53.0 | 52.6 | 60.5 |
| 514 | male   | white        | middle | public  | academic | 0.06  | 0.03    | 0.00 | proprietor | 46.9 | 51.5  | 57.2 | 52.6 | 60.5 |
| 207 | female | white        | middle | public  | general  | 1.11  | 0.90    | 0.33 | homemaker  | 55.3 | 50.2  | 41.7 | 58.5 | 55.6 |
| 353 | male   | white        | middle | public  | academic | -0.21 | -1.13   | 0.00 | craftsman  | 38.9 | 41.1  | 43.6 | 55.3 | 45.6 |
| 219 | female | white        | middle | public  | vocation | 0.10  | 0.03    | 0.33 | manager    | 49.5 | 56.7  | 48.0 | 47.1 | 60.5 |
| 169 | female | white        | middle | public  | academic | 1.16  | 1.19    | 1.00 | prof2      | 70.7 | 64.5  | 72.2 | 66.1 | 55.6 |
| 501 | male   | white        | middle | public  | academic | 0.91  | 0.59    | 1.00 | prof2      | 65.4 | 67.1  | 67.1 | 66.1 | 61.8 |
| 150 | female | african-amer | middle | public  | academic | 0.91  | -0.28   | 1.00 | military   | 60.1 | 67.1  | 56.2 | 37.4 | 50.6 |
| 376 | male   | white        | middle | public  | general  | -1.33 | 0.03    | 0.67 | craftsman  | 41.6 | 30.7  | 56.9 | 47.1 | 50.6 |
| 63  | female | hispanic     | low    | public  | general  | -0.34 | 0.59    | 1.00 | military   | 38.9 | 33.9  | 35.1 | 44.4 | 40.6 |
| 568 | female | white        | middle | private | academic | 0.00  | 0.34    | 0.33 | prof1      | 46.9 | 59.3  | 53.7 | 58.0 | 45.6 |

Do students who score higher on a standardized math test tend to score higher on a standardized science test?
Scatterplot
alpha below refers to the points' transparency (0.5 = 50%), lm refers to linear model, and se refers to standard error bands
HSB %>%
  ggplot(mapping = aes(x = math, y = sci)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE, color = "red")
Center the standardized math scores
If the standardized math scores are centered around their mean (i.e., 0 = mean), then we can interpret the regression intercept—x = 0 when the regression line crosses the y-axis—as the grand mean standardized science score.
HSB <- HSB %>% mutate(math_c = math - mean(math, na.rm = TRUE))
Fit linear regression model
scimath1 <- lm(sci ~ math_c, data = HSB)
Summarize model
summary(scimath1)
##
## Call:
## lm(formula = sci ~ math_c, data = HSB)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -20.7752  -4.8505   0.3355   5.1096  25.4184
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 51.76333    0.30154  171.66   <2e-16 ***
## math_c       0.66963    0.03206   20.89   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.386 on 598 degrees of freedom
## Multiple R-squared:  0.4219, Adjusted R-squared:  0.4209
## F-statistic: 436.4 on 1 and 598 DF,  p-value: < 2.2e-16
# print the standardized science score descriptive statistics
HSB %>% pull(sci) %>% describe()
##    vars   n  mean   sd median trimmed   mad min  max range  skew kurtosis
## X1    1 600 51.76 9.71   52.6   51.93 12.01  26 74.2  48.2 -0.16     -0.7
##     se
## X1 0.4
Interpretation
On average, students scored 51.76 points (SD = 9.71 points) on the standardized science test. However, for every one more point students scored on the standardized math test, they scored 0.67 more points (SE = 0.03) on the standardized science test, t(598) = 20.89, p < .001.
If we account for the fact that students who score higher on a standardized math test also tend to score higher on a standardized reading test, do students who score higher on the standardized math test still tend to score higher on the standardized science test?
Center the standardized reading scores
Same explanation as above: Because the regression line crosses the y-axis when the predictors’ axes = 0, transforming those predictors so that 0 reflects their means allows us to interpret the regression intercept as the grand mean standardized science score.
HSB <- HSB %>% mutate(read_c = read - mean(read, na.rm = TRUE))
Fit linear regression model
scimath2 <- lm(sci ~ math_c + read_c, data = HSB)
Summarize model
summary(scimath2)
##
## Call:
## lm(formula = sci ~ math_c + read_c, data = HSB)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -19.5139  -4.5883   0.0933   4.5700  22.4739
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 51.76333    0.26995 191.754   <2e-16 ***
## math_c       0.34524    0.03910   8.829   <2e-16 ***
## read_c       0.44503    0.03644  12.213   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.612 on 597 degrees of freedom
## Multiple R-squared:  0.5375, Adjusted R-squared:  0.5359
## F-statistic: 346.8 on 2 and 597 DF,  p-value: < 2.2e-16
Compute \(R^2\) change and compare models
# adjusted R-squared is an unbiased estimate of R-squared
summary(scimath2)$adj.r.squared - summary(scimath1)$adj.r.squared
## [1] 0.114985
# compare models
anova(scimath1, scimath2)
## Analysis of Variance Table
##
## Model 1: sci ~ math_c
## Model 2: sci ~ math_c + read_c
##   Res.Df   RSS Df Sum of Sq      F    Pr(>F)
## 1    598 32624
## 2    597 26102  1    6521.7 149.16 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Save both model predictions in tables
Below, I use the effect() function to estimate predicted standardized science scores across a range of unique values of standardized math scores; for scimath2, the full model, the predicted scores have been purged of the linear effect of standardized reading scores. I transform the result from effect() into a tibble data.frame, which includes predicted values (fitted values), predictor values, standard errors of the predictions, and upper and lower confidence limits for the predictions. I can use this table to create a regression line and confidence bands in a plot.
(scimath_predtable1 <- effect(term = "math_c", mod = scimath1) %>% as_tibble())
## # A tibble: 5 x 5
##   math_c   fit    se lower upper
##    <dbl> <dbl> <dbl> <dbl> <dbl>
## 1    -20  38.4 0.708  37.0  39.8
## 2     -9  45.7 0.417  44.9  46.6
## 3      2  53.1 0.308  52.5  53.7
## 4     10  58.5 0.440  57.6  59.3
## 5     20  65.2 0.708  63.8  66.5
(scimath_predtable2 <- effect(term = "math_c", mod = scimath2) %>% as_tibble())
## # A tibble: 5 x 5
##   math_c   fit    se lower upper
##    <dbl> <dbl> <dbl> <dbl> <dbl>
## 1    -20  44.9 0.827  43.2  46.5
## 2     -9  48.7 0.444  47.8  49.5
## 3      2  52.5 0.281  51.9  53.0
## 4     10  55.2 0.475  54.3  56.1
## 5     20  58.7 0.827  57.0  60.3
Plot adjusted relationship
Below, I create the lines and the confidence “ribbons” from the tables I created above. The points come from the original data.frame though. Follow the code line by line: geom_point uses the HSB data, and both geom_lines use data from different tables of predicted values. In other words, layers of lines and ribbons are added on top of the layer of points.
HSB %>%
  ggplot(mapping = aes(x = math_c, y = sci)) +
  geom_point(alpha = 0.5) +
  geom_line(data = scimath_predtable1, mapping = aes(x = math_c, y = fit), color = "red") +
  geom_line(data = scimath_predtable2, mapping = aes(x = math_c, y = fit), color = "blue") +
  geom_ribbon(data = scimath_predtable2, mapping = aes(x = math_c, y = fit, ymin = lower, ymax = upper), fill = "blue", alpha = 0.25) +
  labs(x = "Standardized math score (grand mean centered)", y = "Standardized science score")
## Warning: Ignoring unknown aesthetics: y
Interpretation
After partialling out variance shared between standardized math and reading scores, for every one more point students scored on the standardized math test, they scored 0.35 more points (SE = 0.04) on the standardized science test, t(597) = 8.83, p < .001. Importantly, the model that includes standardized reading scores explained 53.59% of the observed variance in standardized science scores, an 11.50% improvement over the model that included only standardized math scores.
Resources
- Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. New York, NY: Routledge.
- Gonzalez, R. (December, 2016). Lecture Notes #8: Advanced Regression Techniques I. Retrieved from http://www-personal.umich.edu/~gonzo/coursenotes/file8.pdf on June 28th, 2018.
- MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York, NY: Lawrence Erlbaum Associates.
General word of caution
Above, I listed resources prepared by experts on these and related topics. Although I generally do my best to write accurate posts, don’t assume my posts are 100% accurate or that they apply to your data or research questions. Trust statistics and methodology experts, not blog posts.
|
The interaction term in the Lagrangian for Yukawa theory is given by
$$ \mathcal{L}_\text{int} = -g\phi\bar{\Psi}\Psi, $$
where $g$ is the coupling constant, $\phi$ some scalar field and $\Psi$ a fermion field. My question might be a little bit naive but I'm trying to understand how you can see that for a given quantum field theory a particular scattering process is possible.
Consider e.g. fermion-fermion scattering, so $\Psi\Psi\to\Psi\Psi$. How is such a process allowed in Yukawa theory? My point is that there is no term in the interaction Lagrangian which is proportional to something like $\Psi\Psi$. Such a term would probably not be Lorentz invariant, but how do I see that the scattering event I mentioned is allowed nevertheless and that there are not only processes like $\bar{\Psi}\Psi \to \bar{\Psi}\Psi$?
|
The Annals of Statistics, Volume 46, Number 3 (2018), 1077-1108.
On the systematic and idiosyncratic volatility with large panel high-frequency data
Abstract
In this paper, we separate the integrated (spot) volatility of an individual Itô process into integrated (spot) systematic and idiosyncratic volatilities, and estimate them by aggregation of local factor analysis (localization) with large-dimensional high-frequency data. We show that, when both the sampling frequency $n$ and the dimensionality $p$ go to infinity and $p\geq C\sqrt{n}$ for some constant $C$, our estimators of the integrated systematic and idiosyncratic volatilities are $\sqrt{n}$-consistent ($n^{1/4}$ for spot estimates), the best rate achieved in estimating the integrated (spot) volatility, which is readily identified even with univariate high-frequency data. However, when $Cn^{1/4}\leq p<C\sqrt{n}$, aggregation of $n^{1/4}$-consistent local estimates of systematic and idiosyncratic volatilities results in $p$-consistent (not $\sqrt{n}$-consistent) estimates of integrated systematic and idiosyncratic volatilities. Even more interesting, when $p<Cn^{1/4}$, the integrated estimate has the same convergence rate as the spot estimate, both being $p$-consistent. This reveals a distinctive feature from aggregating local estimates in the low-dimensional high-frequency data setting. We also present estimators of the integrated (spot) idiosyncratic volatility matrices as well as their inverse matrices under some sparsity assumption. We finally present a factor-based estimator of the inverse of the spot volatility matrix. Numerical studies including the Monte Carlo experiments and real data analysis justify the performance of our estimators.
Article information
Source: Ann. Statist., Volume 46, Number 3 (2018), 1077-1108.
Dates: Received: March 2016; Revised: January 2017; First available in Project Euclid: 3 May 2018
Permanent link to this document: https://projecteuclid.org/euclid.aos/1525313076
Digital Object Identifier: doi:10.1214/17-AOS1578
Mathematical Reviews number (MathSciNet): MR3797997
Zentralblatt MATH identifier: 06897923
Citation
Kong, Xin-Bing. On the systematic and idiosyncratic volatility with large panel high-frequency data. Ann. Statist. 46 (2018), no. 3, 1077--1108. doi:10.1214/17-AOS1578. https://projecteuclid.org/euclid.aos/1525313076
Supplemental materials
Supplement to “On the integrated systematic and idiosyncratic volatility with large panel high-frequency data”. This supplement contains the technical proof of Lemmas 3–5, which is crucial in proving Theorem 1 and Theorem 2.
|
Inaccessible cardinal
Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy, although weaker notions such as the worldly cardinals can still be viewed as large cardinals.
A cardinal $\kappa$ being inaccessible implies the following:
- $V_\kappa$ is a model of ZFC and so inaccessible cardinals are worldly. The worldly cardinals are unbounded in $\kappa$, so $V_\kappa$ satisfies the existence of a proper class of worldly cardinals.
- $\kappa$ is an aleph fixed point and a beth fixed point, and consequently $V_\kappa=H_\kappa$.
- (Solovay) There is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable; in fact, this is equiconsistent to the existence of an inaccessible cardinal.
- For any $A\subseteq V_\kappa$, the set of all $\alpha<\kappa$ such that $\langle V_\alpha;\in,A\cap V_\alpha\rangle\prec\langle V_\kappa;\in,A\rangle$ is club in $\kappa$.
An ordinal $\alpha$ being inaccessible is equivalent to the following:
- $V_{\alpha+1}$ satisfies $\mathrm{KM}$.
- $\alpha>\omega$ and $V_\alpha$ is a Grothendieck universe.
- $\alpha$ is $\Pi_0^1$-Indescribable.
- $\alpha$ is $\Sigma_1^1$-Indescribable.
- $\alpha$ is $\Pi_2^0$-Indescribable.
- $\alpha$ is $0$-Indescribable.
- $\alpha$ is a nonzero limit ordinal and $\beth_\alpha=R_\alpha$, where $R_\beta$ is the $\beta$-th regular cardinal, i.e. the least regular $\gamma$ such that $\{\kappa\in\gamma:\mathrm{cf}(\kappa)=\kappa\}$ has order-type $\beta$.
- $\alpha = \beth_{R_\alpha}$.
- $\alpha = R_{\beth_\alpha}$.
- $\alpha$ is a weakly inaccessible strong limit cardinal (see weakly inaccessible below).
Weakly inaccessible cardinal
A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly. Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.
There are a few equivalent definitions of weakly inaccessible cardinals. In particular:
- Letting $R$ be the transfinite enumeration of regular cardinals, a limit ordinal $\alpha$ is weakly inaccessible if and only if $R_\alpha=\aleph_\alpha$.
- A nonzero cardinal $\kappa$ is weakly inaccessible if and only if $\kappa$ is regular and there are $\kappa$-many regular cardinals below $\kappa$; that is, $\kappa=R_\kappa$.
- A regular cardinal $\kappa$ is weakly inaccessible if and only if $\mathrm{REG}$ is unbounded in $\kappa$ (showing the correlation between weakly Mahlo cardinals and weakly inaccessible cardinals, as "stationary in $\kappa$" is replaced with "unbounded in $\kappa$").
Levy collapse
The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension.
Inaccessible to reals
A cardinal $\kappa$ is inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals.
Universes
When $\kappa$ is inaccessible, then $V_\kappa$ provides a highly natural transitive model of set theory, a universe in which one can view a large part of classical mathematics as taking place. In what appears to be an instance of convergent evolution, the same universe concept arose in category theory out of the desire to provide a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox. Namely, a Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is,
- (transitivity) If $b\in a\in W$, then $b\in W$.
- (pairing) If $a,b\in W$, then $\{a,b\}\in W$.
- (power set) If $a\in W$, then $P(a)\in W$.
- (union) If $a\in W$, then $\cup a\in W$.
The Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class.
Degrees of inaccessibility
A cardinal $\kappa$ is $1$-inaccessible if it is inaccessible and a limit of inaccessible cardinals. In other words, $\kappa$ is $1$-inaccessible if $\kappa$ is the $\kappa^{\rm th}$ inaccessible cardinal, that is, if $\kappa$ is a fixed point in the enumeration of all inaccessible cardinals. Equivalently, $\kappa$ is $1$-inaccessible if $V_\kappa$ is a universe and satisfies the universe axiom.
More generally, $\kappa$ is $\alpha$-inaccessible if it is inaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-inaccessible cardinals.
$1$-inaccessibility is already consistency-wise stronger than the existence of a proper class of inaccessible cardinals, and $2$-inaccessibility is stronger than the existence of a proper class of $1$-inaccessible cardinals. More specifically, a cardinal $\kappa$ is $\alpha$-inaccessible if and only if for every $\beta<\alpha$: $$V_{\kappa+1}\models\mathrm{KM}+\text{There is a proper class of }\beta\text{-inaccessible cardinals}$$
As a result, if $\kappa$ is $\alpha$-inaccessible then for every $\beta<\alpha$: $$V_\kappa\models\mathrm{ZFC}+\text{There exists a }\beta\text{-inaccessible cardinal}$$
Therefore $2$-inaccessibility is weaker than $3$-inaccessibility, which is weaker than $4$-inaccessibility... all of which are weaker than $\omega$-inaccessibility, which is weaker than $\omega+1$-inaccessibility, which is weaker than $\omega+2$-inaccessibility...... all of which are weaker than hyperinaccessibility, etc.
Hyper-inaccessible and more
A cardinal $\kappa$ is hyperinaccessible if it is $\kappa$-inaccessible. One may similarly define that $\kappa$ is $\alpha$-hyperinaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$, it is a limit of $\beta$-hyperinaccessible cardinals. Continuing, $\kappa$ is hyperhyperinaccessible if $\kappa$ is $\kappa$-hyperinaccessible.
More generally, $\kappa$ is hyper${}^\alpha$-inaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is $\kappa$-hyper${}^\beta$-inaccessible, where $\kappa$ is $\alpha$-hyper${}^\beta$-inaccessible if it is hyper${}^\beta$-inaccessible and for every $\gamma<\alpha$, it is a limit of $\gamma$-hyper${}^\beta$-inaccessible cardinals.
Meta-ordinal terms are terms like $Ω^α · β + Ω^γ · δ +· · ·+Ω^\epsilon · \zeta + \theta$ where $α, β...$ are ordinals. They are ordered as if $Ω$ were an ordinal greater than all the others. $(Ω · α + β)$-inaccessible denotes $β$-hyper${}^α$-inaccessible, $Ω^2$-inaccessible denotes hyper${}^\kappa$-inaccessible $\kappa$, etc. Every Mahlo cardinal $\kappa$ is $\Omega^α$-inaccessible for all $α<\kappa$ and probably more. A similar hierarchy exists for Mahlo cardinals below weakly compact. All such properties can be killed softly by forcing to make them any weaker properties from this family.[1]
References
|
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
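The loop being described is just the Euclidean algorithm; a minimal sketch:

```python
def gcd(a, b):
    # Divide a by b to get a = b*q + r with 0 <= r < b; if r == 0 we are
    # done and b is the answer, otherwise feed (b, r) back into the box.
    while b:
        a, b = b, a % b
    return a
```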
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
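A numerical sanity check (not a proof) is easy to run: implement the cofactor expansion along an arbitrary row and confirm every choice of row gives the same value — a hypothetical Python sketch:

```python
def det(A, row=0):
    """Determinant by cofactor (Laplace) expansion along a chosen row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete the chosen row and column j
        minor = [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != row]
        total += (-1) ** (row + j) * A[row][j] * det(minor)
    return total
```

Expanding along each of the three rows of a sample matrix returns the same determinant.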
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, but $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
Absolutely continuous measures
A concept in measure theory (see also Absolute continuity). If $\mu$ and $\nu$ are two measures on a $\sigma$-algebra $\mathcal{B}$ of subsets of $X$, we say that $\nu$ is absolutely continuous with respect to $\mu$ if $\nu (A) =0$ for any $A\in\mathcal{B}$ such that $\mu (A) =0$. The absolute continuity of $\nu$ with respect to $\mu$ is denoted by $\nu\ll\mu$. If the measure $\nu$ is finite, i.e. $\nu (X) <\infty$, the property $\nu\ll\mu$ is equivalent to the following stronger statement: for any $\varepsilon>0$ there is a $\delta>0$ such that $\nu (A)<\varepsilon$ for every $A$ with $\mu (A)<\delta$.
This definition can be generalized to signed measures $\nu$ and even to vector-valued measure $\nu$. Some authors generalize it further to vector-valued $\mu$'s: in that case the absolute continuity of $\nu$ with respect to $\mu$ amounts to the requirement that $\nu (A) = 0$ for any $A\in\mathcal{B}$ such that $|\mu| (A)=0$, where $|\mu|$ is the total variation of $\mu$ (see Signed measure for the relevant definition).
The Radon-Nikodym theorem characterizes the absolute continuity of $\nu$ with respect to $\mu$ by the existence of a function $f\in L^1 (\mu)$ such that $\nu = f \mu$, i.e. such that \[ \nu (A) = \int_A f\,d\mu \qquad \text{for every } A\in\mathcal{B}. \] A related result, the Jordan decomposition theorem, characterizes signed measures as differences of nonnegative measures. We refer to Signed measure for more on this topic.
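In the simplest discrete setting the Radon-Nikodym density is just a pointwise quotient; the following Python sketch (with made-up measures on a four-point space) illustrates the characterization $\nu(A)=\int_A f\,d\mu$:

```python
from itertools import combinations

X = ['a', 'b', 'c', 'd']
mu = {'a': 0.5, 'b': 0.3, 'c': 0.2, 'd': 0.0}
nu = {'a': 1.0, 'b': 0.6, 'c': 0.0, 'd': 0.0}   # nu << mu: nu vanishes wherever mu does

# Radon-Nikodym density f = dnu/dmu (its value on the mu-null set {d} is irrelevant)
f = {x: (nu[x] / mu[x] if mu[x] > 0 else 0.0) for x in X}

def nu_of(A):
    """nu(A) for a subset A of X."""
    return sum(nu[x] for x in A)

def integral_f(A):
    """The integral of f over A with respect to mu."""
    return sum(f[x] * mu[x] for x in A)
```

Checking $\nu(A)=\int_A f\,d\mu$ over all $2^4$ subsets confirms the density is correct in this toy case.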
Two measures which are mutually absolutely continuous are sometimes called equivalent.
Radon-Nikodym decomposition
If $\mu$ is a nonnegative measure on a $\sigma$-algebra $\mathcal{B}$ and $\nu$ is another measure on the same $\sigma$-algebra (which might be a signed measure, or even take values in a finite-dimensional vector space), then $\nu$ can be decomposed in a unique way as $\nu=\nu_a+\nu_s$ where - $\nu_a$ is absolutely continuous with respect to $\mu$; - $\nu_s$ is singular with respect to $\mu$, i.e. there is a set $A$ of $\mu$-measure zero such that $\nu_s (X\setminus A)=0$ (this property is often denoted by $\nu_s\perp \mu$). This decomposition is called the Radon-Nikodym decomposition by some authors and the Lebesgue decomposition by others. The same decomposition holds even if $\nu$ is a signed measure or, more generally, a vector-valued measure. In these cases the property $\nu_s (X\setminus A)=0$ is replaced by $|\nu_s| (X\setminus A)=0$, where $|\nu_s|$ denotes the total variation measure of $\nu_s$ (we refer to Signed measure for the relevant definition).
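In a discrete toy setting the decomposition is obtained by splitting $\nu$ according to where $\mu$ vanishes — a hedged Python sketch with made-up values:

```python
mu = {1: 0.4, 2: 0.6, 3: 0.0}
nu = {1: 0.2, 2: 0.0, 3: 0.5}   # neither absolutely continuous nor singular w.r.t. mu

# absolutely continuous part: keep nu only where mu is positive
nu_a = {x: (nu[x] if mu[x] > 0 else 0.0) for x in nu}

# singular part: what remains, concentrated on the mu-null set A = {3}
nu_s = {x: nu[x] - nu_a[x] for x in nu}
```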
Comments
A set of non-zero measure that has no subsets of smaller, but still positive, measure is called an atom of the measure. When considering the Borel $\sigma$-algebra $\mathcal{B}$ on Euclidean space with the Lebesgue measure $\lambda$ as reference measure, it is a common mistake to claim that the singular part of a second measure $\nu$ must be concentrated on points, which are atoms. A singular measure may be atomless, as is shown by the measure concentrated on the standard Cantor set which puts zero on each gap of the set and $2^{-n}$ on the intersection of the set with each interval of generation $n$ (this measure is also the distributional derivative of the Cantor ternary function, or devil's staircase).
When some canonical measure $\mu$ is fixed, (as the Lebesgue measure on $\mathbb R^n$ or its subsets or, more generally, the Haar measure on a topological group), one says that $\nu$ is absolutely continuous meaning that $\nu\ll\mu$.
How to Cite This Entry: Absolutely continuous measures. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Absolutely_continuous_measures&oldid=27258
|
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$, with no finite subcover. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover.
By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact
I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure
The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure
that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure
What I hope from such a more direct computation is to get deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set
Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure
We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$.
Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$?
$\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(k)}-q_{m(k)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check that it is convergent, and then compute its value
(cont.) We first observed that the above countable sum is an alternating series. Therefore, we can use some machinery for checking the convergence of an alternating series
Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively
Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals; together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below
Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$
Consider the partial sums of the above series. Note that every partial sum telescopes, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered so that the terms form expressions $-q_i+q_i$. Hence for any partial sum, moving through the stages $C_0,C_1,C_2,\ldots$ of the construction of $C$, the only surviving term is $b-a$. Therefore, the countable series also telescopes, and:
@AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral for $d_n$. So they are the same thing, just re-expressed differently.
If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$, you can't conclude anything about the topologies; if, however, the function is continuous, then you can say things about the topologies
@Overflow2341313 Could you send a picture or a screenshot of the problem?
nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals. This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum
So there are only countably many disjoint intervals in the cover $C$
@Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space $(X, t)$.
Simply restrict the codomain so that it is onto? Making it bijective and hence invertible.
hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well-order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the irrationals. However the telescoping countable sum clearly does not vanish, so this is weird...
In a schematic, we have the following. I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set:
@Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology $t$ on $\mathbb{R}$ such that $f: (\mathbb{R}, U) \to (\mathbb{R}, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology, in which a set $U$ is open iff every $x \in U$ lies in some $(a,b) \subseteq U$.
To do this... the smallest $t$ can be is the trivial topology on $\mathbb{R}$, namely $\{\emptyset, \mathbb{R}\}$
But we require that the image under $f$ of everything in $U$ be in $t$?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous, bijective but do not have continuous inverse
I'm not sure if adding the additional condition that $f$ is an open map will make a difference
For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships
Each element of a proof space is a proof, which consists of steps forming a path in this space. For that I have a postulate: given two paths $A$ and $B$ in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or else $B$ is unprovable under the current formal system
Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
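Not a proof, but the claimed equivalence can be probed numerically. A Python sketch (working in log-space to avoid overflow; all names are illustrative) comparing the two sums:

```python
import math

def log_binom(K, n):
    """log of the binomial coefficient C(K, n), via lgamma."""
    return math.lgamma(K + 1) - math.lgamma(n + 1) - math.lgamma(K - n + 1)

def log_sum_exp(logs):
    """Numerically stable log of a sum of exponentials."""
    m = max(logs)
    return m + math.log(sum(math.exp(L - m) for L in logs))

def ratio(K, c, alpha):
    """Ratio of the shifted sum to the c = 0 sum, with z_K = K**(-alpha)."""
    logz = -alpha * math.log(K)
    lhs = log_sum_exp([log_binom(K, n) + log_binom(K, n + c) + (n + c / 2) * logz
                       for n in range(K - c + 1)])
    rhs = log_sum_exp([2 * log_binom(K, n) + n * logz for n in range(K + 1)])
    return math.exp(lhs - rhs)
```

For $c=0$ the ratio is exactly $1$; for fixed $c\ge 1$ one can watch the ratio drift towards $1$ as $K$ grows, which is consistent with (but does not prove) the conjectured asymptotic equivalence.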
|
Impact Factor 2019: 1.204
Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute: - solutions by mathematical methods of problems emerging in computer science - solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Article Type: Research Article
Abstract: This paper deals with the modular analysis of distributed concurrent systems modelled by Petri nets. The main analysis techniques of such systems suffer from the well-known problem of the combinatory explosion of state space. In order to cope with this problem, we use a modular representation of the state space instead of the ordinary one. The modular representation, namely the modular state space, is much smaller than the ordinary state space. We propose to distribute the modular state space on every machine associated with one module. We enhance the modularity of the verification of some local properties of any module by limiting it to the exploration of local and some global information. Once the construction of the distributed state space is performed, there is no communication between modules during the verification.
Keywords: Distributed systems, modular verification, Petri nets, state space explosion
DOI: 10.3233/FI-2013-850
Citation: Fundamenta Informaticae, vol. 125, no. 1, pp. 1-20, 2013
Authors: Felisiak, Mariusz
Article Type: Research Article
Abstract: By applying computer algebra tools (mainly, Maple and C++), given the Dynkin diagram $\Delta = \mathbb{A}_n$, with $n \geq 2$ vertices and the Euler quadratic form $q_\Delta : \mathbb{Z}^n \rightarrow \mathbb{Z}$, we study the problem of classifying mesh root systems and the mesh geometries of roots of $\Delta$ (see Section 1 for details). The problem reduces to the computation of the Weyl orbits in the set $\mathrm{Mor}_\Delta \subseteq \mathbb{M}_n(\mathbb{Z})$ of all matrix morsifications $A$ of $q_\Delta$, i.e., the non-singular matrices $A \in \mathbb{M}_n(\mathbb{Z})$ such that (i) $q_\Delta(v) = v \cdot A \cdot v^{tr}$, for all $v \in \mathbb{Z}^n$, and (ii) the Coxeter matrix $\mathrm{Cox}_A := -A \cdot A^{-tr}$ lies in $Gl(n,\mathbb{Z})$. The Weyl group $\mathbb{W}_\Delta \subseteq Gl(n, \mathbb{Z})$ acts on $\mathrm{Mor}_\Delta$, and the determinant $\det A \in \mathbb{Z}$, the order $c_A \geq 2$ of $\mathrm{Cox}_A$ (i.e. the Coxeter number), and the Coxeter polynomial $\mathrm{cox}_A(t) := \det(t \cdot E - \mathrm{Cox}_A) \in \mathbb{Z}[t]$ are $\mathbb{W}_\Delta$-invariant. The problem of determining the $\mathbb{W}_\Delta$-orbits $\mathcal{O}rb(A)$ of $\mathrm{Mor}_\Delta$ and the Coxeter polynomials $\mathrm{cox}_A(t)$, with $A \in \mathrm{Mor}_\Delta$, is studied in the paper and we get its solution for $n \leq 8$, and $A = [a_{ij}] \in \mathrm{Mor}_{\mathbb{A}_n}$, with $\vert a_{ij} \vert \le 1$. In this case, we prove that the number of the $\mathbb{W}_\Delta$-orbits $\mathcal{O}rb(A)$ and the number of the Coxeter polynomials $\mathrm{cox}_A(t)$ equals two or three, and the following three conditions are equivalent: (i) $\mathcal{O}rb(A) = \mathcal{O}rb(A')$, (ii) $\mathrm{cox}_A(t) = \mathrm{cox}_{A'}(t)$, (iii) $c_A \cdot \det A = c_{A'} \cdot \det A'$. We also construct: (a) three pairwise different $\mathbb{W}_\Delta$-orbits in $\mathrm{Mor}_\Delta$, with pairwise different Coxeter polynomials, if $\Delta = \mathbb{A}_{2m-1}$ and $m \geq 3$; and (b) two pairwise different $\mathbb{W}_\Delta$-orbits in $\mathrm{Mor}_\Delta$, with pairwise different Coxeter polynomials, if $\Delta = \mathbb{A}_{2m}$ and $m \geq 1$.
DOI: 10.3233/FI-2013-851
Citation: Fundamenta Informaticae, vol. 125, no. 1, pp. 21-49, 2013
Article Type: Research Article
Abstract: A logic called sequence-indexed linear logic (SLL) is proposed to appropriately formalize resource-sensitive reasoning with sequential information. The completeness and cut-elimination theorems for SLL are proved, and SLL and a fragment of SLL are shown to be undecidable and decidable, respectively. As an application of SLL, some specifications of secure password authentication systems are discussed.
Keywords: Linear logic, resource-sensitive reasoning, sequence modal operator, informational interpretation, completeness theorem, secure password authentication system
DOI: 10.3233/FI-2013-852
Citation: Fundamenta Informaticae, vol. 125, no. 1, pp. 51-70, 2013
Article Type: Research Article
Abstract: Atkin's algorithm [2] for computing square roots in $\mathbb{Z}^*_p$, where $p$ is a prime such that $p \equiv 5 \bmod 8$, has been extended by Müller [15] for the case $p \equiv 9 \bmod 16$. In this paper we extend Atkin's algorithm to the general case $p \equiv 2^s + 1 \bmod 2^{s+1}$, for any $s \geq 2$, thus providing a complete solution for the case $p \equiv 1 \bmod 4$. Complexity analysis and comparisons with other methods are also provided.
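For the base case $p \equiv 5 \bmod 8$, Atkin's method needs a single modular exponentiation plus a few multiplications; a Python sketch (my own illustration of the classical algorithm, not code from the paper):

```python
def atkin_sqrt(a, p):
    """Square root of a quadratic residue a modulo a prime p ≡ 5 (mod 8)."""
    assert p % 8 == 5
    t = pow(2 * a, (p - 5) // 8, p)
    i = (2 * a * t * t) % p       # i is a square root of -1 modulo p
    return (a * t * (i - 1)) % p
```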
Keywords: Square Roots, Efficient Computation, Complexity
DOI: 10.3233/FI-2013-853
Citation: Fundamenta Informaticae, vol. 125, no. 1, pp. 71-94, 2013
Authors: Tyszka, Apoloniusz
Article Type: Research Article
Abstract: Let $E_n = \{x_i = 1;\ x_i + x_j = x_k;\ x_i \cdot x_j = x_k : i, j, k \in \{1,\ldots,n\}\}$. We conjecture that if a system $S \subseteq E_n$ has only finitely many solutions in integers $x_1,\ldots,x_n$, then each such solution $(x_1,\ldots,x_n)$ satisfies $|x_1|,\ldots,|x_n| \leq 2^{2^{n-1}}$. Assuming the conjecture, we prove: (1) there is an algorithm which to each Diophantine equation assigns an integer which is greater than the heights of integer (non-negative integer, rational) solutions, if these solutions form a finite set, (2) if a set $\mathcal{M} \subseteq \mathbb{N}$ is recursively enumerable but not recursive, then a finite-fold Diophantine representation of $\mathcal{M}$ does not exist.
Keywords: Davis-Putnam-Robinson-Matiyasevich theorem, Matiyasevich's conjecture on finite-fold Diophantine representations
DOI: 10.3233/FI-2013-854
Citation: Fundamenta Informaticae, vol. 125, no. 1, pp. 95-99, 2013
|
The rule for changing the base of a logarithmic term is called the change of base rule. The base of any logarithm can be changed in three ways, giving the three change of base formulas below.
The change of base logarithm formula in division form.
$\large \log_{b}{m} = \dfrac{\log_{d}{m}}{\log_{d}{b}}$
The change of base logarithm formula in product form.
$\large \log_{b} m = \log_{a} m \times \log_{b} a$
The change of base logarithm formula in reciprocal form.
$\large \log_{b}{m} = \dfrac{1}{\log_{m}{b}}$
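All three forms can be verified numerically with Python's two-argument `math.log(x, base)`; the values of $m$, $b$, $a$, $d$ below are arbitrary:

```python
import math

m, b, a, d = 7.3, 2.0, 5.0, 10.0   # arbitrary positive values, bases != 1

# division form: log_b(m) = log_d(m) / log_d(b)
assert math.isclose(math.log(m, b), math.log(m, d) / math.log(b, d))

# product form: log_b(m) = log_a(m) * log_b(a)
assert math.isclose(math.log(m, b), math.log(m, a) * math.log(a, b))

# reciprocal form: log_b(m) = 1 / log_m(b)
assert math.isclose(math.log(m, b), 1 / math.log(b, m))
```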
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
It is a common exercise in algebra to show that there does not exist a field $F$ such that its additive group $F^+$ and multiplicative group $F^*$ are isomorphic. See e.g. this question.
One of the snappiest proofs I know is that, if we suppose for a contradiction they are, then any isomorphism sends solutions of the equation $2x = 0$ in the additive group to solutions of the equation $y^2 = 1$ in the multiplicative group. Depending on whether the characteristic of $F$ is or isn't 2, the former has either $|F|$ or $1$ solution(s), while the latter has $1$ or $2$ solutions, respectively. There is no field for which these numbers agree, so $F^+ \not\cong F^*$ ever.
One might now ask whether there is a pair of fields, $E$ and $F$, for which $E^+ \cong F^*$ as groups.
Clearly $\def\GF#1{\mathrm{GF}(#1)}\GF2^+ \cong \GF3^*$, $\GF3^+ \cong \GF4^*$, and in general if $p$ is a prime and $p+1$ is a prime power, then $\GF p^+ \cong \GF{p+1}^*$.
You can see that this characterizes the situation in the positive characteristic case, from the same equation trick above: if $\def\c{\operatorname{char}}\c E = 2$ and $\c F \ne 2$, we can make $|E| = 2$ and get a solution. Else we must have $\c E \ne 2 = \c F$. If $\c E = c \ne 0$, then elements of $E$ must get mapped to $c$-th roots of unity in $F$, and there can be at most $c$ of those.
This leaves the case where $\c E = 0$, for which there are no finite fields. In fact, none of the cases above permit any infinite fields, either. This brings me to my question:
Do there exist infinite fields $E$ and $F$ such that $E^+ \cong F^*$?
I believe the answer is no, and it seems unlikely that such an isomorphism would exist, but I can't make heads or tails of it, really. Here's what I have.
As above, I can show that if $E$ and $F$ are infinite, and $\phi: E^+ \to F^*$ is an isomorphism, then we may assume $\c E = 0$—so WLOG it is an extension of $\mathbb Q$—and $\c F = 2$.
Every element of $E$ has infinite additive order, so every element of $F^*$ has infinite multiplicative order, and there are no roots of unity except $1 = \phi(0)$. However, if $a \ne 1$ in $F$, then $a$ has a $k$-th root $\phi(\frac1k \phi^{-1}(a))$ for all $k$, since $E \supseteq \mathbb Q$.
|
Let $(x_n)$ be a bounded but not convergent sequence. Prove that $(x_n)$ has two subsequences converging to different limits.
My attempt is: Since the sequence is bounded, there exists $M>0$ such that $x_n \in [-M,M]$ for all $n \in \mathbb{N}$. Since the sequence does not converge to $x$, there exists $\epsilon_0>0$ such that for all $N \in \mathbb{N}$, there exists $n \geq N$ such that $|x_n-x| \geq \epsilon_0$.
Then we have $x_n \in [-M,x-\epsilon_0] \cup [x+\epsilon_0,M]$. By the Bolzano-Weierstrass theorem, there exists a convergent subsequence in the two intervals.
Is my proof valid?
|
Regularity for Variational Problems in the Heisenberg Group
Speaker
Shirsho Mukherjee, Department of Mathematics and Statistics, University of Jyväskylä
When: Jan 12, 2016, from 04:00 PM to 05:00 PM
Where: LH006
Abstract: We examine the local interior regularity of minimizers of scalar variational integrals of $p$-growth, with the $p$-Laplace equation $\operatorname{div}(|\mathfrak{X}u|^{p-2}\,\mathfrak{X}u) = 0$ as a model example. This is done in the setting of the Heisenberg group $\mathbb{H}^n$, which is $\mathbb{R}^{2n+1}$ endowed with a certain sub-Riemannian geometry which gives rise to left invariant vector fields satisfying the Heisenberg algebra $[X_i,X_j] = T\delta_{j,n+i}$, and hence to differential operators coming from them, such as the horizontal gradient $\mathfrak{X}$, the horizontal divergence, the sub-Laplacian, etc.
A longstanding literature in this area begins in the late 60's, going back to works of Hörmander, in which regularity for $p = 2$ is well established. The equation being quasilinear for $p\neq 2$, investigating regularity is not quite trivial due to the non-commutative vector fields. It is known that weak solutions are Hölder continuous, due to Capogna, Danielli and Garofalo in 1993. But there has been no complete theory for the regularity of the horizontal gradient $\mathfrak{X}u$, and the problem has remained open for at least two decades until now, as we believe. There have been partial results for the gradient regularity over the years: Domokos and Manfredi showed Hölder continuity by the Cordes perturbation technique in 2005 when $p$ is very close to $2$; Lipschitz continuity for the weak solution has been established for $2 \leq p < 4$ by G. Mingione and J. Manfredi, and for $p\geq 2$ by X. Zhong in 2007. Other regularity results include the proof of $Tu \in L^p$ for $1< p< 4$ by Domokos and Marchi in 2004, where the restriction $p < 4$ is seemingly unavoidable.
As joint work of myself and X. Zhong, we show the local $C^{1,\alpha}$ regularity of weak solutions of the equation for all $1 < p < \infty$ with quantitative estimates, which we believe to be optimal at least for $p > 2$, as is the case for the corresponding Euclidean problem. We follow De Giorgi's method as implemented by P. Tolksdorf and E. DiBenedetto in 1982 for the same regularity of the problem in $\mathbb{R}^n$, taking suitable truncations of the gradient. I would like to present our solution and discuss the use of a reversed Caccioppoli type inequality for $Tu$ that allows us to get rid of the extra terms coming from the commutators, with the singular integrals for $1 < p < 2$ estimated on sets of small measure, thereafter.
|
I've been given the following question and solution:
Let $W_t$ be a standard Brownian Motion w.r.t. $(\mathbf{P},\mathcal{F}_t)$. Prove that \begin{align} E[|W_t|] < \infty \quad \forall\, t \end{align}
Solution: \begin{align} E[|W_t|] < E[1+W_t^{2}] = 1 + E[W_t^2] = 1+t <\infty \end{align}
My question is, what allows us to state the following? \begin{align} E[|W_t|] < E[1+W_t^2] \end{align}
Many thanks,
John
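The bound can also be checked by simulation, since $W_t \sim N(0, t)$; a small Monte Carlo sketch in Python (sample size and seed are arbitrary):

```python
import math
import random

random.seed(0)
t = 2.0
n = 20000

# W_t is N(0, t), so draw Gaussians with standard deviation sqrt(t)
samples = [abs(random.gauss(0.0, math.sqrt(t))) for _ in range(n)]
estimate = sum(samples) / n       # Monte Carlo estimate of E[|W_t|]

# E[|W_t|] = sqrt(2t/pi) exactly, comfortably below the bound 1 + t
exact = math.sqrt(2 * t / math.pi)
```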
|
The given function combines algebraic, trigonometric and exponential functions of a variable $x$. To evaluate this limit, the function should be simplified into our known forms for the exponential and trigonometric limit rules.
The numerator contains two exponential functions and a trigonometric function. Write the exponential terms next to each other to simplify them.
$\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{e^{3\displaystyle+x}-e^3-\sin{x}}{x}}$
The exponent of the first exponential term is a sum of $3$ and the variable $x$. It can be written as a product of two exponential terms by the product rule of exponents.
$= \,\,\,$ $\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{e^3 \times e^{\displaystyle x}-e^3-\sin{x}}{x}}$
$e^3$ is a common factor in the first two terms of the numerator. So, take $e^3$ out as a common factor.
$= \,\,\,$ $\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{e^3{(e^{\displaystyle x}-1)}-\sin{x}}{x}}$
$= \,\,\,$ $\displaystyle \large \lim_{x \to 0}{\normalsize \Bigg[\dfrac{e^3{(e^{\displaystyle x}-1)}}{x}-\dfrac{\sin{x}}{x}\Bigg]}$
As per the subtraction rule of limits, the limit of the difference of two functions is equal to the difference of their limits.
$= \,\,\,$ $\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{e^3{(e^{\displaystyle x}-1)}}{x}}$ $-$ $\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{\sin{x}}{x}}$
Now, evaluate the limit of the first term: $e^3$ is a constant, so use the constant multiple rule of limits to take it out of the limit.
$= \,\,\,$ $e^3 \displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{e^{\displaystyle x}-1}{x}}$ $-$ $\displaystyle \large \lim_{x \to 0}{\normalsize \dfrac{\sin{x}}{x}}$
By the standard exponential limit, $\displaystyle \lim_{x \to 0}\dfrac{e^{\displaystyle x}-1}{x}=1$. Similarly, by the standard trigonometric limit, $\displaystyle \lim_{x \to 0}\dfrac{\sin{x}}{x}=1$.
$= \,\,\,$ $e^3 \times 1$ $-$ $1$
$= \,\,\, e^3-1$
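The whole computation can be double-checked with a computer algebra system; here is a minimal sketch using sympy (my own addition, not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')

# The original limit: (e^(3+x) - e^3 - sin x) / x as x -> 0
expr = (sp.exp(3 + x) - sp.exp(3) - sp.sin(x)) / x
result = sp.limit(expr, x, 0)

# The step-by-step derivation above gives e^3 - 1
assert sp.simplify(result - (sp.exp(3) - 1)) == 0
```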
|
Browse Dissertations and Theses - Mathematics by Contributor "Erdogan, M. Burak"
(2016-08-25)One of the widely studied topics in singular integral operators is T1 theorem. More precisely, it asks if one can extend a Calder\'on-Zygmund operator to a bounded operator on $L^p$. In addition, Tb theorem was raised when ...
(2012-09-18)The thesis consists of six chapters. In Chapter 1, we will briefly introduce the background of the topic, as well as some results we already know. The next five chapters can be divided into two parts. The first part is ...
(2010)Finally in Chapter 3 we consider a problem involving the non-linear Schrodinger equation. In particular, we consider the following equation that arises in fiber optic communication systems, $iu_t + d(t)u_{xx} + |u|^2u = 0$. We can ...
(2010-08-31)In this document we explore the issue of $L^1\to L^\infty$ estimates for the solution operator of the linear Schr\"{o}dinger equation, \begin{align*} iu_t-\Delta u+Vu&=0 &u(x,0)=f(x)\in \mathcal S(\R^n). \end{align*} We ...
(2011-08-25)A Gabor system is a collection of modulated and translated copies of a window function. If we have a signal in $L^2(\mathbb{R})$, it can be analyzed with a Gabor system generated by a certain window $g$ and then synthesized ...
(2014-01-16)This thesis is concerned with the restriction theory of the Fourier transform. We prove two restriction estimates for the Fourier transform. The first is a bilinear estimate for the light cone when the exponents are on a ...
(2018-06-22)We first give a survey on multilinear Hilbert transforms. Then we study several variants of bilinear Hilbert transform such as bilinear Hilbert transform along two polynomials, discrete (integer) bilinear Hilbert transform ...
(2010-08-20)There have been a lot of research done on the relationship between locally compact groups and algebras associated with them. For example, Johnson proved that a locally compact group G is amenable if and only if the ...
(2014-01-16)In [20], Keith and Zhong prove that spaces admitting Poincaré inequalities also admit a priori stronger Poincaré inequalities. We use their technique, with slight adjustments, to obtain a similar result in the case of ...
(2014-05-30)This dissertation consists of 4 chapters. In Chapter 1, we will briefly introduce some background of the trilinear oscillatory integrals and the motivations for their study. We also outline some key ideas in their proofs, ...
(2017-04-12)This thesis is primarily concerned with the smoothing properties of dispersive equations and systems. Smoothing in this context means that the nonlinear part of the solution flow is of higher regularity than the initial ...
(2013-08-22)In this dissertation we are interested in the study of dynamical systems that display rigidity and weak mixing. We are particularly interested in the topological analogue of rigidity, called uniform rigidity. A map $T$ ...
|
Impact Factor 2019: 0.808
The journal
Asymptotic Analysis fulfills a twofold function. It aims at publishing original mathematical results in the asymptotic theory of problems affected by the presence of small or large parameters on the one hand, and at giving specific indications of their possible applications to different fields of natural sciences on the other hand. Asymptotic Analysis thus provides mathematicians with a concentrated source of newly acquired information which they may need in the analysis of asymptotic problems.
Article Type: Research Article
Abstract: The aim of this paper is to study the evolution of the surface of a crystal structure, constituted by a linearly elastic substrate and a thin film. After appropriate scalings, a formal asymptotic expansion of the displacement, under some assumptions, yields the following nonlinear PDE \begin{equation}\frac{\partial h}{\partial t}=-\frac{\partial ^{2}}{\partial x^{2}}\Big((1-\theta h)h''-\frac{\theta }{2}h'^{2}\Big),\end{equation} where θ is a coefficient related to the crystal, and h(t,x) describes the spatial evolution of the film surface. We give here some results about the finite-time blow-up and prove the existence and uniqueness of a solution in $L^{2}(0,t^{*};H^{4}_{\mathrm{per}}(0,1))\cap L^{\infty}(0,t^{*};H^{2}_{\mathrm{per}}(0,1))$. We also present some numerical computations confirming the blow-up scenario.
Keywords: nonlinear partial differential equations, finite time blow‐up, initial boundary value problem, local solution
Citation: Asymptotic Analysis, vol. 38, no. 2, pp. 93-128, 2004
Article Type: Research Article
Abstract: We study by homogenization the various macroscopic models associated to the diffusion through a fissured media, i.e., a set of low conductivity blocks crossed by a net of highly conducting fissures. According to the fissure thickness, δε, range we obtain different models. For this, first we homogenize by seeking the limit when ε, the small parameter associated to the blocks size, goes to zero and then we study the limit when δ goes to zero. In each situation we prove the convergence to the corresponding macroscopic model.
Citation: Asymptotic Analysis, vol. 38, no. 2, pp. 129-141, 2004
Article Type: Research Article
Abstract: It is well known that distributional solutions of an elliptic equation with constant coefficients behave asymptotically near an interior point as sums of polynomials and linear combinations of derivatives of a fundamental solution. We consider a class of quasilinear elliptic systems and give mild conditions ensuring the same asymptotic behaviour. The sharpness of our conditions is illustrated by examples. The results are obtained as corollaries of a general theorem on the asymptotics of solutions to nonlinear ordinary differential equations in Banach spaces.
Citation: Asymptotic Analysis, vol. 38, no. 2, pp. 143-165, 2004
Article Type: Research Article
Abstract: In this paper we study a nonlinear lattice with memory and show that the problem is globally well posed. Furthermore we find uniform rates of decay of the total energy. Our main result shows that the memory effect is strong enough to produce a uniform rate of decay. That is, if the relaxation function decays exponentially, then the corresponding solution also decays exponentially. When the relaxation kernel decays polynomially, then the solution also decays polynomially as time goes to infinity.
Citation: Asymptotic Analysis, vol. 38, no. 2, pp. 167-185, 2004
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901 100025, Beijing China Free service line: 400 661 8717 Fax: +86 10 8446 7947 china@iospress.cn
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
If you need help with publishing or have any suggestions, please write to: editorial@iospress.nl
|
Topological Methods in Nonlinear Analysis, Volume 33, Number 1 (2009), 51-64.
Wecken property for periodic points on the Klein bottle
Abstract
Suppose $f\colon M\to M$ is a self-map of a compact manifold $M$, and let $m$ be a natural number. One of the most important questions in the topological theory of periodic points is whether the Nielsen-Jiang periodic number $NF_m(f)$ is a sharp lower bound on $\# {\rm Fix}(g^m)$ over all $g\sim f$. This question has a positive answer if ${\rm dim}\, M\geq 3$ but in general a negative answer for self maps of compact surfaces. However, we show the answer to be positive when $M=\mathbb K$ is the Klein bottle. As a consequence, we reconfirm a result of Llibre and compute the set ${\rm HPer} (f)$ of homotopy minimal periods on the Klein bottle.
Article information
Source: Topol. Methods Nonlinear Anal., Volume 33, Number 1 (2009), 51-64.
First available in Project Euclid: 27 April 2016
Permanent link to this document: https://projecteuclid.org/euclid.tmna/1461782239
Mathematical Reviews number (MathSciNet): MR2512954
Zentralblatt MATH identifier: 1179.55003
Citation
Jezierski, Jerzy; Keppelmann, Edward; Marzantowicz, Wacław. Wecken property for periodic points on the Klein bottle. Topol. Methods Nonlinear Anal. 33 (2009), no. 1, 51--64. https://projecteuclid.org/euclid.tmna/1461782239
|
Tarski–Grothendieck set theory
Tarski–Grothendieck set theory (TG, named after mathematicians Alfred Tarski and Alexander Grothendieck) is an axiomatic set theory. It is a non-conservative extension of Zermelo–Fraenkel set theory (ZFC) and is distinguished from other axiomatic set theories by the inclusion of Tarski's axiom, which states that for each set there is a Grothendieck universe it belongs to (see below). Tarski's axiom implies the existence of inaccessible cardinals, providing a richer ontology than that of conventional set theories such as ZFC. For example, adding this axiom supports category theory.
Axioms
Tarski–Grothendieck set theory starts with conventional Zermelo–Fraenkel set theory and then adds "Tarski's axiom". We will use the axioms, definitions, and notation of Mizar to describe it. Mizar's basic objects and processes are fully formal; they are described informally below. First, let us assume that:
- Given any set A, the singleton \{A\} exists.
- Given any two sets, their unordered and ordered pairs exist.
- Given any family of sets, its union exists.

TG includes the following axioms, which are conventional because they are also part of ZFC:

- Set axiom: Quantified variables range over sets alone; everything is a set (the same ontology as ZFC).
- Extensionality axiom: Two sets are identical if they have the same members.
- Axiom of regularity: No set is a member of itself, and circular chains of membership are impossible.
- Axiom schema of replacement: Let the domain of the function F be the set A. Then the range of F (the values of F(x) for all members x of A) is also a set.
It is Tarski's axiom that distinguishes
TG from other axiomatic set theories. Tarski's axiom also implies the axioms of infinity, choice, [1] [2] and power set. [3] [4] It also implies the existence of inaccessible cardinals, thanks to which the ontology of TG is much richer than that of conventional set theories such as ZFC. Tarski's axiom (adapted from Tarski 1939 [5]): For every set x, there exists a set y whose members include:
- x itself;
- every subset of every member of y;
- the power set of every member of y;
- every subset of y of cardinality less than that of y.
More formally:
$\forall x\,\exists y\,[x\in y \wedge \forall z\in y\,(\mathcal P(z)\subseteq y\wedge\mathcal P(z)\in y) \wedge \forall z\in\mathcal P(y)\,(\neg z\approx y\to z\in y)]$

Implementation in the Mizar system
The Mizar language, underlying the implementation of
TG and providing its logical syntax, is typed and the types are assumed to be non-empty. Hence, the theory is implicitly taken to be non-empty. The existence axioms, e.g., the existence of the unordered pair, are also implemented indirectly by the definition of term constructors.
The system includes equality, the membership predicate and the following standard definitions:
- Singleton: A set with one member;
- Unordered pair: A set with two distinct members; \{a,b\} = \{b,a\};
- Ordered pair: The set \{\{a,b\},\{a\}\} = (a,b) \neq (b,a);
- Subset: A set all of whose members are members of another given set;
- The union of a family of sets Y: The set of all members of any member of Y.

Implementation in Metamath
The Metamath system supports arbitrary higher-order logics, but it is typically used with the "set.mm" definitions of axioms. The ax-groth axiom adds Tarski's axiom, which in Metamath is defined as follows:
⊢ ∃y(x ∈ y ∧ ∀z ∈ y (∀w(w ⊆ z → w ∈ y) ∧ ∃w ∈ y ∀v(v ⊆ z → v ∈ w)) ∧ ∀z(z ⊆ y → (z ≈ y ∨ z ∈ y)))
Notes

1. Tarski (1938)
2. http://mmlquery.mizar.org/mml/current/wellord2.html#T26
3. Robert Solovay, Re: AC and strongly inaccessible cardinals.
4. grothpw, Metamath
5. Tarski (1939)

References

- Andreas Blass, I. M. Dimitriou, and Benedikt Löwe (2007), "Inaccessible Cardinals without the Axiom of Choice", Fundamenta Mathematicae 194: 179-189.
- Patrick Suppes (1960), Axiomatic Set Theory. Van Nostrand. Dover reprint, 1972.
- Trybulec, Andrzej (1989), "Tarski–Grothendieck Set Theory", Journal of Formalized Mathematics.

External links

- Metamath: "Proof Explorer Home Page." Scroll down to "Grothendieck's Axiom."
- PlanetMath: "Tarski's Axiom"
|
Given an elliptic curve $(E/\mathbb{K})$ where $char(\mathbb{K}) \ne 2,3$ defined by the Weierstrass equation $y^2=x^3+ax+b$. The $j$-invariant is $j=1728 \frac{4a^3}{4a^3+27b^2}$.
I want to understand very clearly how this j-invariant is constructed and especially from where does the 1728 come.
A rather simple but interesting explanation is: given $E/\mathbb{K}$ in Legendre form $y^2=x(x-1)(x-\lambda)$, one looks for a quantity that is invariant with respect to the $\lambda$ permutations, that is, invariant w.r.t. $\lambda, \frac{1}{\lambda}, 1-\lambda, \frac{1}{1-\lambda}, \frac{\lambda}{\lambda-1}, \frac{\lambda-1}{\lambda}$. One can try to multiply or to sum these six values, but the result is then invariant not only w.r.t. $\lambda$ but also w.r.t. the elliptic curve itself. Finally, summing their squares results in the $j$-invariant, but without the constant 1728.
So where does it come from? An answer with all the math details of the complex multiplication theory would be great.
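For what it's worth, the claimed $\lambda$-invariance can be checked symbolically. The sketch below assumes the standard formula $j(\lambda) = 256\,\frac{(\lambda^2-\lambda+1)^3}{\lambda^2(\lambda-1)^2}$ for the Legendre form (note it carries $256 = 2^8$ rather than the 1728 asked about) and verifies invariance under all six permutations:

```python
import sympy as sp

l = sp.symbols('lambda')

# j-invariant in terms of the Legendre parameter (standard formula)
j = 256 * (l**2 - l + 1)**3 / (l**2 * (l - 1)**2)

# The six values of lambda giving the same curve up to isomorphism
orbit = [l, 1/l, 1 - l, 1/(1 - l), l/(l - 1), (l - 1)/l]

for t in orbit:
    # j(t(lambda)) - j(lambda) should cancel to 0 identically
    assert sp.cancel(j.subs(l, t) - j) == 0
```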
|
Let $A(n)$ be a finite square $n \times n$ matrix with entries $a_{ij}=1$ if $i+j$ is a perfect power, and $a_{ij}=0$ otherwise. Is it true that $${1 \above 1.5 pt n^2}\sum_{i=1}^n \sum_{j=1}^n a_{ij} \leq {1 \above 1.5pt 3}$$ with equality holding if and only if $n=3$ or $n=6$?

Question:
Let $A(n)$ be a finite square $n \times n$ matrix with entries $a_{ij}=1$ if $i+j$ is a perfect power, and $a_{ij}=0$ otherwise. For an example consider $A(5)$
$$A(5)= \text{ }\begin{pmatrix} 0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&1\\ 0&0&0&1&1\\ 0&0&1&1&0\\ \end{pmatrix}$$
Can we show that ${1 \above 1.5 pt n^2}\sum_{i=1}^n \sum_{j=1}^n a_{ij} \leq {1 \above 1.5pt 3}$ with equality holding if and only if $n=3$ or $n=6$. The graph below plots the values of ${1 \above 1.5 pt n^2}\sum_{i=1}^n \sum_{j=1}^n a_{ij}$ for small $n$. The graph is what motivated me to ask the question. It appears that the maximums are achieved if $n=3$ or $n=6$.
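The plotted values are easy to reproduce. Here is a short script (my own, not from the original post) that recomputes the ratio for small $n$, assuming "perfect power" means $m^k$ with integers $m, k \geq 2$:

```python
def is_perfect_power(n):
    """True if n = m**k for some integers m >= 2, k >= 2 (so n >= 4)."""
    if n < 4:
        return False
    k = 2
    while 2**k <= n:
        m = round(n ** (1.0 / k))
        # guard against floating-point error in the k-th root
        if any((m + d) ** k == n for d in (-1, 0, 1)):
            return True
        k += 1
    return False

def ratio(n):
    """Numerator and denominator of (1/n^2) * sum a_ij."""
    total = sum(is_perfect_power(i + j)
                for i in range(1, n + 1) for j in range(1, n + 1))
    return total, n * n

# The claimed equality cases: the ratio is exactly 1/3 at n = 3 and n = 6
assert ratio(3) == (3, 9)
assert ratio(6) == (12, 36)
```

Running `ratio(n)` over a range of $n$ reproduces the data behind the plot.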
I have corrected several terms and added several new terms; check out A293462.

UPDATE:
Here was my approach: Let $^t$ be the transpose map that sends the entry $a_{ij} \to a_{ji}$. The commutativity of addition shows us that if $i+j$ is a perfect power then so is $j+i$. Equivalently we see that $a_{ij}=a_{ji}$. In particular $A(n)^t=A(n)$ and so $A(n)$ is symmetric. Now observe that $(a_{ij})^2=a_{ij}$. Since $A(n)$ is symmetric $A(n)^tA(n)=A(n)^2$. The following result is easy to show $$\sum_{i=1}^n \sum_{j=1}^n a_{ij}=tr(A(n)^2)$$ Similarly it is easy to show that if ${1 \above 1.5 pt n^2}\sum_{i=1}^n \sum_{j=1}^n a_{ij}={1 \above 1.5 pt x}$ then $x$ is a divisor of $n$. Assume ${tr(A(n)^2) \above 1.5pt n^2}={1 \above 1.5pt 3}$; then $3$ divides $n$. We start by showing via inspection the base cases of $n=3$ and $n=6$. Suppose $n=3$ then
$$A(3)^2= \text{ } \begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ \end{pmatrix}$$
And so $tr(A(3)^2)=3$. Surely ${3 \above 1.5pt 3^2}={1 \above 1.5 pt 3}$. Similarly it is easy to compute and show that if $n=6$
$$A(6)^2= \text{ }\begin{pmatrix} 1&0&0&0&1&1\\ 0&2&1&0&0&1\\ 0&1&3&1&0&0\\ 0&0&1&2&1&0\\ 1&0&0&1&2&1\\ 1&1&0&0&1&2\\ \end{pmatrix} $$
And from this we can see that $tr(A(6)^2)=12$. And again we have that ${12 \above 1.5pt 6^2}={1 \above 1.5 pt 3}$.
Assume $n\neq 3$ and $n\neq 6$. Now since ${tr(A(n)^2) \above 1.5pt n^2}={1 \above 1.5pt 3}$ we know that $3\times tr(A(n)^2)=n^2$. If $a_{ii}$ is any entry on the diagonal of $A(n)^2$ then explicitly $3(a_{11}+ \ldots +a_{nn})=n^2$ so $\sqrt{3}\sqrt{a_{11}+\ldots + a_{nn}}=n$ and consequently $\sqrt{3} \mid \sqrt{a_{11}+\ldots + a_{nn}}$, otherwise $n$ is not an integer, which is a contradiction.
Update 1: The argument scratched out above is wrong, thanks to commentator @SEWillB. Update 2: The argument previously scratched out above is correct. See edits.
That is all I can come up with - and it might not be the best approach and possibly the problem is trivial and I am just missing it. It could also be wrong. The picture below provides some data for small values of $n$.
|
Exponential Function is Well-Defined/Real/Proof 5 Theorem
Let $x \in \R$ be a real number.
Let $\exp x$ be the exponential of $x$.
Then $\exp x$ is well-defined. Proof
This proof assumes the definition of $\exp$ as the solution to an initial value problem.
That is, suppose $\exp$ solves:
$ (1): \quad \dfrac \d {\d x} y = \map f {x, y}$ $ (2): \quad \map \exp 0 = 1$
on $\R$, where $\map f {x, y} = y$.
From Derivative of Exponential Function: Proof 4, the function $\phi : \R \to \R$ defined as: $\displaystyle \map \phi x = \sum_{k \mathop = 0}^\infty \frac {x^k} {k!}$
satisfies $\map {\phi'} x = \map \phi x$.
So $\phi$ satisfies $(1)$.
$\map \phi 0 = 1$
So $\phi$ satisfies $(2)$.
Thus, $\phi$ is a solution to the initial value problem given.

From Exponential Function is Continuous: Proof 5 and $(1)$: $\phi$ is continuously differentiable on $\R$.
Because $\map f {x, \phi} = \phi$:
$f$ is continuously differentiable on $\R^2$. Thus, from Uniqueness of Continuously Differentiable Solution to Initial Value Problem, this solution is unique.
$\blacksquare$
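As an informal sanity check of the series argument (not part of the ProofWiki proof), one can verify with sympy that the truncated series $\sum_{k=0}^{N} x^k/k!$ satisfies both conditions up to the dropped top-order term:

```python
import sympy as sp

x = sp.symbols('x')
N = 12  # truncation order for the formal check

# phi is the truncated power series sum_{k=0}^{N} x^k / k!
phi = sum(x**k / sp.factorial(k) for k in range(N + 1))

# (1): differentiating shifts the series down by one term, so
# phi' - phi equals minus the dropped top-order term -x^N / N!
assert sp.simplify(sp.diff(phi, x) - phi + x**N / sp.factorial(N)) == 0

# (2): the initial condition phi(0) = 1
assert phi.subs(x, 0) == 1
```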
|
Article Type: Research Article
Abstract: We give a new region of existence of solutions to the superhomogeneous Dirichlet problem \begin{equation}\begin{array}{l}-\Delta_{p}u=v^{\delta},\quad v>0\ \mbox{in}\ B,\\-\Delta_{q}v=u^{\mu},\quad u>0\ \mbox{in}\ B,\\u=v=0\quad\mbox{on}\ \partial B,\end{array}\end{equation} where B is the ball of radius R>0 centered at the origin in $\mathbb {R}^{N}$. Here δ,μ>0 and $\Delta_m u=\operatorname{div}(|\nabla u|^{m-2}\nabla u)$ is the m-Laplacian operator for m>1.
Keywords: m-Laplacian, energy identities
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 1-18, 2006
Article Type: Research Article
Abstract: We study decay estimates of the energy for the nonlinear wave equation in the whole space $\mathbb{R}^{N}$ . The dissipative term consists of the following two parts: The one part is nonlinear in a suitable ball; the other part is linear in the outside of the ball and it may be effective only at infinity. So we may call such a dissipation as a half-linear dissipation. We note that the method of proof is based on the multiplier technique and the unique continuation.
Keywords: nonlinear wave equation, energy decay, Cauchy problem
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 19-32, 2006
Article Type: Research Article
Abstract: We study the asymptotic behavior of a non-linear elastic material lying in a thin neighborhood of a non-planar line when the diameter of the section tends to zero. We first estimate the rigidity constant in such a domain, then we prove the convergence of the three-dimensional model to a one-dimensional model. This convergence is established in the framework of $\varGamma $-convergence. The resulting model is the one classically used in mechanics. It corresponds to a non-extensional line subjected to flexion and torsion. The torsion is an internal parameter which can eventually be eliminated, but this elimination leads to a non-local energy. Indeed the non-planar geometry of the line couples the flexion and torsion terms.
Keywords: beam, rod, non-linear elasticity, 3D–1D, $\varGamma $-convergence
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 33-54, 2006
Article Type: Research Article
Abstract: We present a rigorous analysis of the eigenvalue problem associated with the onset of superconductivity for a thin domain in the presence of a large applied magnetic field. We prove the validity of the formal result of Richardson and Rubinstein (Proc. Roy. Soc. London A 455 (1999), 2549–2564) revealing that in this double limit of thin domain and large field, the appropriate Rayleigh quotient differs from the standard one for order 1 applied fields through the addition of a potential depending on the field.
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 55-76, 2006
Article Type: Research Article
Abstract: In this paper the homogenization of degenerate quasilinear parabolic equations \[\partial _{t}u-\mathop {\mathrm {div}}\nolimits a\Bigl(\frac{t}{\varepsilon},\frac{x}{\varepsilon},u,\nabla u\Bigr)=f(t,x)\] is studied via a weighted compensated compactness result, where a(t,y,α,λ) is periodic in (t,y).
Keywords: degenerate parabolic equations, homogenization, compensated compactness
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 77-89, 2006
Authors: Frank, Rupert L.
Article Type: Research Article
Abstract: We study the Schrödinger operator $(hD-A)^2$ with periodic magnetic field B=curl A in an antidot lattice $\varOmega_{\infty}=\mathbb{R}^{2}\setminus\bigcup_{\alpha\in\varGamma}(U+\alpha)$. Neumann boundary conditions lead to spectrum below $h\inf B$. Under suitable assumptions on a "one-well problem" we prove that this spectrum is localized inside an exponentially small interval in the semi-classical limit h→0. For this purpose we construct a basis of the corresponding spectral subspace with natural localization and symmetry properties.
Keywords: semi-classical analysis, tunneling effect, magnetic Schrödinger operator, periodic operator
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 91-120, 2006
Article Type: Research Article
Abstract: Solutions of scalar viscous conservation laws whose initial data are bounded and tend at x=±∞ to values that may be connected by a shock profile are shown to converge in L∞ to a time-dependent translation of that profile. Unlike the standard theory, the initial data is not restricted to be an L1 perturbation of the shock profile, and the translation may not be linear in time. Estimates for the translation are obtained.
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 121-140, 2006
Article Type: Research Article
Abstract: We consider a thin curved film made of a martensitic material. The behavior of the film is governed by a free energy composed of a bulk energy term and an interfacial energy term. We show that the minimizers of the free energy converge to the minimizers of an energy depending on a two-dimensional deformation and one Cosserat vector field when the thickness of the curved film goes to zero using $\varGamma$ -convergence arguments.
Citation: Asymptotic Analysis, vol. 48, no. 1,2, pp. 141-171, 2006
|
Another one from my assignment:
Assume $$ \tan (\frac{x}{2})=\tan A\tanh B $$
Prove that $$ \tan(x)=\frac{\sin(2A)\sinh(2B)}{1+\cos(2A)\cosh(2B)} $$
I manipulated $$ \tan(x)=\tan2\Big(\frac {x}{2}\Big)=\frac {2\tan A \tanh B}{1-(\tan A\tanh B)^2} $$ using the double-angle formula for tan. From there I seem to just be going in circles, coming back to the original question with every attempt.
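Before grinding through more algebra, it is worth confirming numerically that the identity actually holds (values of $A$ and $B$ chosen arbitrarily):

```python
import math

A, B = 0.7, 0.5  # arbitrary test values

# Given constraint: tan(x/2) = tan(A) * tanh(B)
x = 2 * math.atan(math.tan(A) * math.tanh(B))

lhs = math.tan(x)
rhs = (math.sin(2*A) * math.sinh(2*B)) / (1 + math.cos(2*A) * math.cosh(2*B))

assert abs(lhs - rhs) < 1e-12
```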
|
Oct 22nd 2017, 10:46 AM
Forum Admin
Originally Posted by
avito009
I was noticing that there is a lot of similarity between the formula of kinetic energy $\frac{1}{2}mv^2$ and Einstein's $E=mc^2$. Was Einstein inspired by this kinetic energy formula when he came up with $E=mc^2$?
The short answer is "no."
The longer answer is that $\displaystyle E = mc^2$ (where m is the rest mass, sometimes written as $\displaystyle m_0$) is only true for an object at rest. The full equation for a moving object is $\displaystyle E^2 = (mc^2)^2 + (pc)^2$, where p is the momentum of the object. The total energy, E, can also be shown to be $\displaystyle E = \gamma ~ mc^2$, where I have defined $\displaystyle \gamma$ below.
The formula for kinetic energy in Special Relativity is $\displaystyle E = ( \gamma - 1)mc^2$. To make contact with non-relativistic theories we can expand the $\displaystyle \gamma = \frac{1}{\sqrt{1 - \left ( \frac{v}{c} \right ) ^2}}$ for v << c which gives
$\displaystyle KE = ( \gamma -1 )mc^2 = \left ( \frac{1}{\sqrt{1 - \left ( \frac{v}{c} \right ) ^2}} - 1 \right ) mc^2 \approx \frac{1}{2} mv^2 + \frac{3}{8} m \frac{v^4}{c^2} + \text{ ...}$
You can see that the largest term is the usual kinetic energy and that the extra terms are fairly small and can be ignored for Classical Physics.
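The expansion quoted above can be reproduced symbolically; a quick sketch with sympy:

```python
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)

gamma = 1 / sp.sqrt(1 - (v / c)**2)
KE = (gamma - 1) * m * c**2  # relativistic kinetic energy

# Taylor expansion in v about 0, keeping terms up to v^4
series = sp.series(KE, v, 0, 6).removeO()

# Matches the (1/2) m v^2 + (3/8) m v^4 / c^2 quoted above
expected = sp.Rational(1, 2)*m*v**2 + sp.Rational(3, 8)*m*v**4/c**2
assert sp.simplify(series - expected) == 0
```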
-Dan
|
Authors: Briani, Ariela
Article Type: Research Article
Abstract: We consider the sequence of optimal control problems having as state equation y′(t)=an(t,y)+bn(t,u) (t∈(0,T], y(0)=x) and cost functional $J_{n}(y,u)=\mathop{\mathrm{ess}}\mathop{\mathrm{sup}}\nolimits_{t\in[0,T]}f_{n}(t,y(t),u(t))$. We prove a Γ-convergence result and we study the entailed properties on the stability for the related Hamilton–Jacobi equations.
Citation: Asymptotic Analysis, vol. 45, no. 3,4, pp. 171-190, 2005
Article Type: Research Article
Abstract: In this paper we consider the following biharmonic equation with critical exponent $(P_{\varepsilon})\colon\Delta^{2}u=Ku^{\frac{n+4}{n-4}-\varepsilon}$, u>0 in Ω and u=Δu=0 on $\partial \varOmega $, where Ω is a smooth bounded domain in $\mathbb{R}^{n}$, n≥5, ε is a small positive parameter, and K is a smooth positive function in $\overline{ \varOmega }$. We construct solutions of (Pε) which blow up and concentrate at a strict local maximum of K either at the boundary or in the interior of Ω. We also construct solutions of (Pε) concentrating at an interior strict local minimum point of K. Finally, we prove a nonexistence result for the corresponding supercritical problem which is in sharp contrast to what happened for (Pε).
Keywords: fourth-order elliptic equations, critical Sobolev exponent, biharmonic operator
Citation: Asymptotic Analysis, vol. 45, no. 3,4, pp. 191-225, 2005
Authors: Gravejat, Philippe
Article Type: Research Article
Abstract: We investigate the asymptotic behaviour of the subsonic travelling waves of finite energy in the Gross–Pitaevskii equation in dimension larger than two. In particular, we give their first-order asymptotics in the case they are axisymmetric, and link it to their energy and momentum.
Keywords: nonlinear Schrödinger equation, travelling waves, asymptotic behaviour
Citation: Asymptotic Analysis, vol. 45, no. 3,4, pp. 227-299, 2005
Article Type: Research Article
Abstract: In this paper the asymptotic behavior of the solutions, as time goes to infinity, to nonlinear parabolic equations with two classes of nonlocal terms is investigated. One of important features of our problems is that equilibria may be a continuum which is diffeomorphic to a piece of curve in R2 .
Keywords: nonlinear parabolic equations, nonlocal term, asymptotic behavior
Citation: Asymptotic Analysis, vol. 45, no. 3,4, pp. 301-312, 2005
Article Type: Research Article
Abstract: We obtain a nonlocal functional as the variational limit of an integral functional associated with the strain energy of a pseudo-plastic material reinforced by a ε-periodic distribution of pseudo plastic fibers. We begin by studying separately the variational limit behavior, when ε goes to zero, of the structure made of the soft material without fibers, and of the structure constituted by the distribution of the small fibers. We show that the complete structure is, in the sense of the Γ-convergence, equivalent to a new homogeneous structure whose functional energy is the epigraphical sum of the limit integral functional energies modeling each of the two previous structures.
Keywords: homogenization, Γ-convergence, convex functional of measures, pseudo-plasticity
Citation: Asymptotic Analysis, vol. 45, no. 3,4, pp. 313-339, 2005
|
On Wikipedia, the Gibbs measure defines the probability as:
$$ P(X=x) = \frac{1}{Z(\beta)}\exp(-\beta E(x)) $$
Now, the familiar form of the normal distribution is:
$$ P(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp-{\frac{(x-\mu)^{2}}{2\sigma^2}} $$
Now, it seems that the normal distribution is a Gibbs measure and I was wondering whether I can equate some of the terms. So, for example what would be the $\beta$ term and more importantly, what is the corresponding energy term in the normal distribution. Can I say $\beta$ is $\frac{1}{2}$ and the energy term is $\frac{(x-\mu)^{2}}{\sigma^2}$ but then I do not understand how the partition function $Z$ is a function of $\beta$.
|
Show that the following problem is a convex optimization problem.
$f(x,y,z)=2x^2-y+z^2 \rightarrow min! $
$g_1(x,y,z)=y+x\le1$,
$g_2(x,y,z)=z-y\le1$
Convex optimization problem if:
(1) $f(x)\rightarrow min!$
(2) $f(x)$ is convex
(3) all constraints $g_i$ are convex, $ i=1,..,m$
My idea is to calculate the Hessian matrix of the objective function and of the constraints and check whether each matrix is positive (semi)definite, which would imply a (strictly) convex function.
$H_f(x,y,z)=\begin{pmatrix} 4&0&0\\0&0&0\\0&0&2\end{pmatrix}$
This is a positive semidefinite matrix (eigenvalues $\geq 0$).
$\Rightarrow f(x)$ is convex
The Hessian matrices of $g_1$ and $g_2$ are zero matrices, so $g_1$ and $g_2$ are affine and hence both convex and concave.
So the problem is a convex optimization problem.
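As a numerical sanity check of the eigenvalue argument above (a sketch; the helper function is hypothetical and uses numpy):

```python
import numpy as np

def is_psd(H, tol=1e-10):
    """Positive semidefinite iff all eigenvalues are >= 0 (up to tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

# Hessian of f(x, y, z) = 2x^2 - y + z^2 (constant, since f is quadratic)
H_f = np.array([[4.0, 0.0, 0.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 2.0]])

# Hessians of the affine constraints g1, g2 are zero matrices
H_g = np.zeros((3, 3))

print(is_psd(H_f), is_psd(H_g))  # True True
```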
Is my computation/conclusion correct?
Thank you in advance.
|
I would like to "reopen" the previous post regarding modus ponens because, frankly speaking, I'm not satisfied with some (most of?) the answers from the mathematicians' community.
Disclaimer: I'm not aiming to "unravel the mystery", but I'm not convinced either that mathematicians and philosophers speak completely different languages.
This is my argument, in two steps : a "mental experiment" followed by some considerations about formalization and natural language.
The experiment I'm trying is based on a reformulation of McGee's first example (see Vann McGee,
A Counterexample to Modus Ponens (1985)), regarding the US presidential election of 1980.
I'll neglect the aspects regarding "belief" and the nuances connected to verbal tense (see the paper of Robert Fogelin & W.Sinnott-Armstrong,
A Defense of Modus Ponens (1986)), also because I'm not a native English speaker.
I assume as
domain of the problem a non-empty universe (call it $US$) where there are only two mutually exclusive subsets : $rep$ and $dem$ (so that : $rep \cap dem = \emptyset$).
I assume that the set $rep$ has only two elements $R$ and $A$ (i.e. $rep = \{ R, A \}$, and $A \ne R$).
I assume only one "obvious" axiom, translating the "rules of the game", using a single predicate $win$ :
$win(dem) \lor win(rep)$.
The first consideration - we will discuss it later - is that the above condition is really a "XOR": "a republican will win or a democrat will win, but not both".
We have also :
$\lnot win(rep) \equiv win(dem)$.
So we have the "trivial" :
$\lnot win(rep) \lor win(rep)$.
But due to the fact that the only republican candidates are $R$ and $A$, the last amounts to :
$\lnot win(rep) \lor [win(R) \lor win(A)]$ ---
(A). Note : we are not using $\rightarrow$ in this argument; if we would use it, with the classical truth-functional semantics, the sub-formula between the square brackets would amount to : $\lnot win(R) \rightarrow win(A)$.
I introduce now what I'll call
Shoenfield rule (from Joseph Shoenfield, Mathematical Logic (1967), page 28) :
if $\vdash A$ and $\vdash \lnot A \lor B$, then $\vdash B$.
The above rule is proved in Shoenfield's system using three of the four "propositional" primitive rules [page 21 : the last one, the Associative Rule, is not used in the proof below] :
Expansion Rule: infer $B \lor A$ from $A$
Contraction Rule: infer $A$ from $A \lor A$
Cut Rule: infer $B \lor C$ from $A \lor B$ and $\lnot A \lor C$.
With the
Cut Rule and the (only) propositional axiom : $\lnot A \lor A$, we can derive the Lemma 1 : if $\vdash A \lor B$, then $\vdash B \lor A$.
Now we prove the Shoenfield rule :

(1) $\vdash A$ --- premise
(2) $\vdash B \lor A$ --- from (1) by Expansion
(3) $\vdash A \lor B$ --- from (2) by Lemma 1
(4) $\vdash \lnot A \lor B$ --- premise
(5) $\vdash B \lor B$ --- from (3) and (4) by Cut
(6) $\vdash B$ --- from (5) by Contraction.

Disclaimer: nothing new; all is trivial (classical) propositional logic.
Now, we go back to
(A) :
$\lnot win(rep) \lor (win(R) \lor win(A))$
and add the premise :
$win(rep)$;
by
Shoenfield rule we conclude the "obvious" :
$win(R) \lor win(A)$.
Nothing has gone wrong ... We have only used standard rules for propositional connectives in a classical framework, with the use of $\lor$ in a situation where the alternatives are mutually exclusive.
Question: Is the previous argument "sound" ?
The above argument, assuming it is "sound" suggests to me some considerations about formalization and natural language.
The "regimentation" that symbolic logic - from Frege on - has deliberately imposed on natural language has been greatly fruitful; this does not imply that the richness of natural language can be wholly "explained away" with formalization.
The dissatisfaction of McGee with modus ponens seems to me the "old" dissatisfaction with the translation of "if ... then" in terms of the truth-functional connective $\rightarrow$.
This one is blind about the nuances of natural language (that
relevant logic tries to recover). In the same way, when I use $\lor$ in a context where the alternatives are mutually exclusive, I "lose" some presuppositions (some implicit information that a speaker aware of the context knows).
This does not mean that the rules of logic are "wrong", nor that philosophers do not know logic. Aristotle, Leibniz, Peirce, Frege and Russell were all philosophers.
In conclusion, I think that there is no "contradiction" between the way mathematical logic formalizes truth-functional connectives and natural language.
|
The medium cash bag is an uncommon item won from Treasure Hunter. When opened, it gives the player coins based on the player's total level. During the weekend of 1 February 2013, the gold received from the bag was doubled.
The number of coins received from a medium cash bag is a random number between $ 9X $ and $ 11X $, where $ X $ depends on the player's total level [1]. In the formula below, $ L $ is the player's total level.
$ X = \begin{cases} 750 + \left ( \frac{L}{27} - 1 \right ) \times 22 & \text{if }L < 324 \\ 1000 + \left ( \frac{L}{27} - 12 \right ) \times 75 & \text{if }324 \le L < 1404 \\ 4000 + \left ( \frac{L}{27} - 52 \right ) \times 125 & \text{if }1404 \le L < 2268 \\ 8000 + \left ( \frac{L}{27} - 84 \right ) \times 133 & \text{if }2268 \le L \end{cases} $
This means that it is possible to receive the following amounts of coins at the threshold total levels:
Total level 27: 6,750 to 8,250 coins
Total level 324: 9,000 to 11,000 coins
Total level 1404: 36,000 to 44,000 coins
Total level 2268: 72,000 to 88,000 coins
Total level 2715: 91,817 to 112,220 coins
If opened on a non-members server, members skills above level 5 will not be counted towards the skill total for the calculation.
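The piecewise formula above can be sketched in code as follows (the function name is my own; values at the threshold levels reproduce the table):

```python
# Sketch of the coin formula above; the function name is my own.
def cash_bag_x(L):
    """X as a function of total level L for the medium cash bag."""
    if L < 324:
        return 750 + (L / 27 - 1) * 22
    elif L < 1404:
        return 1000 + (L / 27 - 12) * 75
    elif L < 2268:
        return 4000 + (L / 27 - 52) * 125
    else:
        return 8000 + (L / 27 - 84) * 133

# Coin range at a threshold level: between 9X and 11X.
X = cash_bag_x(1404)
print(9 * X, 11 * X)  # 36000.0 44000.0
```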
|
I'm trying to find the intermediate fields of the extension $\mathbb Q(\alpha)\big/\mathbb Q$, where $\alpha = \sqrt{7+\sqrt{13}}$. To do so I've tried to use the Galois correspondence. I've already found that $\rm{Gal}\left(\mathbb Q(\alpha)\big/\mathbb Q\right)$ has order $4$ and that it is isomorphic to $\mathbb Z\big/2\mathbb Z\times \mathbb Z\big/2\mathbb Z$, which has three subgroups of order $2$. Therefore, there must be three normal intermediate fields in the extension.
$E:=\mathbb Q(\alpha)$ is the splitting field of $f(x)=x^4-14x^2+36$, which has $4$ roots numbered as:
$$\left\{\alpha_1 = \alpha, \alpha_2 = -\sqrt{7+\sqrt{13}}, \alpha_3 = \sqrt{7-\sqrt{13}}, \alpha_4 = -\sqrt{7-\sqrt{13}}\right\}.$$
Then, the automorphisms in $\rm{Gal}\left(\mathbb Q(\alpha)\big/\mathbb Q\right)$ are
$$\sigma_1(\alpha) = \alpha_1$$ $$\sigma_2(\alpha) = \alpha_2$$ $$\sigma_3(\alpha) = \alpha_3$$ $$\sigma_4(\alpha) = \alpha_4$$
EDIT: corrected subgroups
If we see them as elements of $S_4$, they are $id, (1,2)(3,4), (1,3)(2,4), (1,4)(2,3)$, respectively. Thus, the subgroups are $H_1:=\langle(1,2)(3,4)\rangle, H_2:=\langle(1,3)(2,4)\rangle$ and $H_3:=\langle(1,4)(2,3)\rangle$ (all isomorphic to $\mathbb Z\big/2\mathbb Z$).
Then, to find the intermediate fields I'm looking for $E^{H_i}=\{x\in E\mid \sigma(x)=x, \ \forall\sigma\in H_i\}$
However, when I try with $H_2$, for example, it gets very nasty. In this case we'd have to impose that $\sigma_2(\gamma)=\gamma$ for all $\gamma\in E$. On the one hand, since $\{1,\alpha,\alpha^2,\alpha^3\}$ is a $\mathbb Q-$basis for $E$,
$$\gamma = a_0+a_1\alpha+a_2\alpha^2+a_3\alpha^3,\qquad a_i\in\mathbb Q$$
On the other hand, we have
$$\sigma_2(\gamma) = a_0+a_1\sigma_2(\alpha)+a_2\sigma_2(\alpha^2)+a_3\sigma_2(\alpha^3)$$
$$=a_0-a_1\alpha+a_2\alpha^2-a_3\alpha^3$$
And then, coefficients of both expressions should be equal.
EDIT: added manipulations of these expressions
From these expressions we get that $a_1=a_3=0$ while $a_0,a_2$ are free (so it has $2$ degrees of freedom, as expected).
$$\Rightarrow E^{H_2}=\{a+b(7+\sqrt{13})\mid a, b\in \mathbb Q\}=\mathbb Q(7+\sqrt{13})=\mathbb Q(\sqrt{13}).$$
So, $\mathbb Q(\sqrt{13})$ is the intermediate field that corresponds to the group $H_2$.
But I don't know how to do it with the other subgroups.
With $\sigma_3(\alpha) = \alpha_3 = \frac{6}{\alpha}$,
$$\sigma_3(\gamma) = a_0+a_1\frac{6}{\alpha}+a_2\left(\frac{6}{\alpha}\right)^2+a_3\left(\frac{6}{\alpha}\right)^3$$
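A quick floating-point sanity check of the setup, in particular the identity $\alpha\alpha_3 = \sqrt{(7+\sqrt{13})(7-\sqrt{13})} = \sqrt{36} = 6$ used for $\sigma_3$ (this is only a numerical sketch):

```python
import math

# Numerical sanity checks for the setup above (pure floats, no CAS assumed).
alpha  = math.sqrt(7 + math.sqrt(13))   # alpha_1
alpha3 = math.sqrt(7 - math.sqrt(13))   # alpha_3

# Both are roots of f(x) = x^4 - 14x^2 + 36 ...
f = lambda x: x**4 - 14 * x**2 + 36
print(abs(f(alpha)), abs(f(alpha3)))    # both ~ 0

# ... and alpha * alpha_3 = 6, which justifies sigma_3(alpha) = 6 / alpha.
print(alpha * alpha3)  # 6.0 up to rounding
```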
|
In Bourbaki, Lie Groups and Lie Algebras, Chapter 6, Section 4, Exercise 1(c), the word pseudo-discriminant is used. The reference given is Algebra, Chapter IX, of which I can't find an English translation. There the following definition is given. Since I don't know French, I can't understand the definition. Any help will be appreciated. Sorry if this kind of question is not for this site.
Here is a translation into English. I have added a few things in parentheses.
Assume that $A$ is a field of characteristic $2$, $E$ is a vector space over $A$ of even dimension $2r$, and $Q$ is a non-degenerate quadratic form on $E$.
a) Let $(e_i)$ be a symplectic basis of $E$ for the alternating bilinear form $\Phi$ associated to $Q$. Show that the element $z=e_1e_2+e_3e_4+\cdots+e_{2r-1}e_{2r}\in C^+(Q)$ (the even Clifford algebra) forms, together with the unit element, a basis of the center $Z$ of $C(Q)$ (the Clifford algebra).
Moreover, $Z$ is the direct sum of two fields if and only if the element $\Delta(Q)=Q(e_1)Q(e_2)+Q(e_3)Q(e_4)+\cdots+ Q(e_{2r-1})Q(e_{2r})$, called the pseudo-discriminant of $Q$ with respect to the symplectic basis $(e_i)$, has the form $\lambda^2+\lambda$ for some $\lambda\in A$.
Some comments on this notion. Even if this is not explicitly asked by the OP, I think it would be nice to comment on this notion.
Recall that the determinant $\det(Q)$ of a non-degenerate quadratic form $Q$ over a field $A$ (to stick with the previous notation) is the square class of the determinant of any representative matrix of the associated bilinear form. It is an element of $A^\times/A^{\times 2}$, which is an invariant of the isometry class of $Q$ (two isometric quadratic forms have equal determinants). If $A$ has characteristic different from two, this invariant is quite useful, and may be used to obtain classification results.
If $A$ has characteristic two, then a representative matrix of the associated bilinear form is alternating, so its determinant is always a square. Hence, in this case, $\det(Q)$ is always the trivial square class.
Set $\wp(A)=\{ \lambda^2+\lambda\mid \lambda\in A\}$. This is a subgroup of $A$. One may show that the class of $\Delta(Q)$ in $A/\wp(A)$ does not depend on the choice of the symplectic basis, so we can set ${\rm Arf}(Q)$ to be the class of $\Delta(Q)$ in $A/\wp(A)$. The class ${\rm Arf}(Q)$ is called the Arf invariant of $Q$. One may show this is an invariant of the isometry class of $Q$. It plays the same role as the determinant.
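As a toy illustration over $A = \mathbb{F}_2$ (where $\wp(\mathbb{F}_2) = \{0\}$, so the Arf invariant is the pseudo-discriminant itself), one can verify by brute force the classical characterization that ${\rm Arf}(Q)$ is the value $Q$ takes on the majority of vectors. This is a sketch with my own helper names:

```python
from itertools import product

# Over F_2, with a symplectic basis, Q(x) = sum_i a_i x_i + sum_k x_{2k-1} x_{2k}
# (note x^2 = x in F_2), where a_i = Q(e_i), and the pseudo-discriminant is
# Delta = a_1 a_2 + a_3 a_4 + ... . Since wp(F_2) = {0}, Arf(Q) = Delta here.
# Classical fact being checked: Arf(Q) is the value Q takes most often.

def Q(x, a):
    val = sum(a[i] * x[i] for i in range(len(a)))
    val += sum(x[2 * k] * x[2 * k + 1] for k in range(len(a) // 2))
    return val % 2

def delta(a):
    return sum(a[2 * k] * a[2 * k + 1] for k in range(len(a) // 2)) % 2

def majority_value(a):
    vals = [Q(x, a) for x in product([0, 1], repeat=len(a))]
    return 0 if vals.count(0) > vals.count(1) else 1

for a in product([0, 1], repeat=4):   # all such forms in dimension 4 (r = 2)
    assert delta(a) == majority_value(a)
print("pseudo-discriminant = majority value of Q: checked")
```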
|
The sum of all three interior angles in a triangle is $180^\circ$.
Three interior angles are formed internally by the intersection of every two sides of a triangle. The addition of all three angles is always equal to $180^\circ$ geometrically.
If, $\alpha$, $\beta$ and $\gamma$ are three interior angles in a triangle, then
$\alpha+\beta+\gamma = 180^\circ$
This basic geometrical property of a triangle is often used as a formula in geometry in some special cases.
There are three geometrical steps involved for proving that the sum of interior angles in a triangle is equal to $180^\circ$.
$\Delta RST$ is a triangle and its angles are $\alpha$ (at $R$), $\beta$ (at $S$) and $\gamma$ (at $T$). Extend the side $\small \overline{RT}$ to a point $U$, and draw the line $\small \overline{TV}$ through $T$ parallel to the side $\small \overline{SR}$; then $\small \angle VTU = \angle SRT = \alpha$, since they are corresponding angles for the parallel lines $\small \overline{SR}$ and $\small \overline{TV}$ with the transversal $\small \overline{RU}$.
The side $\small \overline{ST}$ is another transversal of the parallel lines $\small \overline{SR}$ and $\small \overline{TV}$. In this case, $\small \angle RST$ and $\small \angle STV$ are interior alternate angles.
It is proved that when two parallel lines are intersected by their transversal, the interior alternate angles are equal.
$\small \angle RST = \angle STV = \beta$
Therefore, $\small \angle STV$ is also equal to $\beta$ geometrically.
$\small \angle RTS$, $\small \angle STV$ and $\small \angle VTU$ are three angles and sum of three interior angles is equal to $\small \angle RTU$.
$\small \angle RTS + \angle STV + \angle VTU$ $\,=\,$ $\small \angle RTU$
Actually, $\small \angle RTU$ is a straight angle of the straight line $\small \overline{RU}$. Geometrically, the angle of a straight line is equal to $180^\circ$.
$\implies$ $\small \angle RTS + \angle STV + \angle VTU$ $\,=\,$ $\small \angle RTU$ $\,=\,$ $180^\circ$
$\implies$ $\gamma+\beta+\alpha$ $\,=\,$ $180^\circ$
$\,\,\, \therefore \,\,\,\,\,\, \alpha+\beta+\gamma \,=\, 180^\circ$
$\alpha$, $\beta$ and $\gamma$ are three interior angles of triangle $RST$ and it is proved that the sum of angles in a triangle is equal to $180^\circ$ geometrically.
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
What would be an appropriate metaphor for the entropy of a question? I was thinking along the lines of "information value," but this would clearly be inappropriate, because it is the answers that contain information, not the question, and an unanswered question has no information value.
Each question has its entropy, which is the weighted average of the amounts of information contained in all possible answers. The weight of each possible answer is its a priori probability, and the amount of information is minus the logarithm of its probability:$$S(q) =\sum_{a\in\operatorname{possible-answers}(q)}-P(a)\log(P(a))$$
An interactive questionnaire where new questions depend on previous answers similarly has its entropy, and having a more descriptive term would be nice.
Having an appropriate descriptive term would help to explain entropy to students and to use it in such contexts as estimating the minimal possible average cost of a comparison sort algorithm or analysing the efficiency of a binary search tree. I would like to be able to say something like: "the local informativeness of this node in this binary search tree is such and such, and the (global) informativeness of the whole tree is the sum of the (local) informativeness of its root and the weighted average of the (global) informativeness of its left and right subtrees."
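The definition above is easy to sketch in code (the function name is mine):

```python
import math

# Sketch of the definition above: the entropy of a question given the
# a priori probabilities of its possible answers.
def question_entropy(answer_probs, base=2):
    assert abs(sum(answer_probs) - 1.0) < 1e-9
    return -sum(p * math.log(p, base) for p in answer_probs if p > 0)

# A yes/no question with equally likely answers carries 1 bit...
print(question_entropy([0.5, 0.5]))   # 1.0
# ...while a lopsided one carries less.
print(question_entropy([0.9, 0.1]))   # ~ 0.469
```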
|
ISSN: 1432-0916
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics , Physics
Notes: Abstract: In ordinary quantum mechanics for finite systems, the time evolution induced by Hamiltonians of the form $$H = \frac{P^2}{2m} + V(Q)$$ is studied from the point of view of *-automorphisms of the CCR C*-algebra $\bar \Delta$ (see Refs. [1, 2]). It is proved that those Hamiltonians do not induce *-automorphisms of this algebra in the cases: a) $V \in \bar \Delta$ and b) $V \in L^\infty(\mathbb{R},dx) \cap L^1(\mathbb{R},dx)$, except when the potential is trivial.
Type of Medium: Electronic Resource
URL: http://dx.doi.org/10.1007/BF01646197
|
Assume, $x$ is a variable. The derivative of a variable $x$ with respect to $x$ is written in mathematical form as follows in differential calculus.
$\dfrac{d}{dx}{\, (x)}$
Use the definition of the derivative to express the differentiation of a function $f{(x)}$ with respect to $x$ in limit form. It is useful for proving the differentiation of the variable $x$ from first principles.
$\dfrac{d}{dx}{\, f{(x)}}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to \, 0}{\normalsize \dfrac{f{(x+\Delta x)}-f{(x)}}{\Delta x}}$
Take $f{(x)} \,=\, x$, then $f{(x+\Delta x)} \,=\, x+\Delta x$. Now, replace them in the above formula.
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to \, 0}{\normalsize \dfrac{x+\Delta x-x}{\Delta x}}$
Take $\Delta x = h$ and write the equation in terms of $h$ instead of $\Delta x$.
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to \, 0}{\normalsize \dfrac{x+h-x}{h}}$
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to \, 0}{\normalsize \dfrac{x-x+h}{h}}$
In the numerator, two of the three terms are equal with opposite signs, so they cancel as per subtraction of like terms.
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\require{cancel} \displaystyle \large \lim_{h \,\to \, 0}{\normalsize \dfrac{\cancel{x}-\cancel{x}+h}{h}}$
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to \, 0}{\normalsize \Big(\dfrac{h}{h}\Big)}$
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\require{cancel} \displaystyle \large \lim_{h \,\to \, 0}{\normalsize \Big(\dfrac{\cancel{h}}{\cancel{h}}\Big)}$
The quotient of $h$ by $h$ is equal to one mathematically.
$\implies$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to \, 0}{\normalsize \Big(1\Big)}$
There is no $h$ term left in the function, so the limit of the constant $1$ as $h$ approaches zero is equal to one.
$\,\,\, \therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, (x)}$ $\,=\,$ $1$
The derivative-of-a-variable rule is derived from first principles in this way in differential calculus.
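The first-principles computation can be illustrated numerically: for $f(x) = x$ the difference quotient equals exactly $1$ for every nonzero $h$, so its limit as $h \to 0$ is $1$ (a sketch; powers of two are used for $h$ so the floating-point arithmetic stays exact):

```python
# Numerical illustration: for f(x) = x the difference quotient
# (f(x+h) - f(x)) / h equals exactly 1 for every nonzero h,
# so its limit as h -> 0 is 1.
def diff_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x
for h in [1.0, 0.25, 2**-10, 2**-20]:
    print(diff_quotient(f, 3.0, h))  # 1.0 each time
```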
|
Real Number Ordering is Compatible with Multiplication/Negative Factor
Theorem

$\forall a, b, c \in \R: a < b \land c < 0 \implies a c > b c$

where $\R$ is the set of real numbers.

Proof
Thus:
$a < b$
$\leadsto \quad b - a > 0$ (Definition of Positivity Property)
$\leadsto \quad c \times \paren {b - a} < 0$ (Product of Strictly Negative Element with Strictly Positive Element is Strictly Negative)
$\leadsto \quad b \times c - a \times c < 0$ (Ring Axioms: Product is Distributive over Addition)
$\leadsto \quad a \times c > b \times c$ (Definition of Positivity Property)
$\blacksquare$
For example, we have that $15 > 12$, and hence $15 \times 3 > 12 \times 3$, that is:
$45 > 36$
Likewise $\dfrac {15} 3 > \dfrac {12} 3$, that is:
$5 > 4$
But multiplying by a negative factor reverses the inequality: $15 \times \paren {-3} < 12 \times \paren {-3}$, that is:
$-45 < -36$
and $\dfrac {15} {-3} < \dfrac {12} {-3}$, that is:
$-5 < -4$
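A quick randomized check of the theorem (a sketch, not part of the proof):

```python
import random

# Randomized check of the theorem: a < b and c < 0 implies a*c > b*c.
random.seed(0)
for _ in range(1000):
    a = random.uniform(-100, 100)
    b = a + random.uniform(0.001, 100)   # ensures a < b
    c = -random.uniform(0.001, 100)      # ensures c < 0
    assert a * c > b * c
print("checked on 1000 random triples")
```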
|
The fractional part of common logarithm is called mantissa.
The common logarithm of a quantity is expressed as the sum of two parts: an integral part and a fractional part.
Initially, the fractional part is in the form of the logarithm of a decimal number, but later it is transformed into another decimal number according to the logarithmic table, and that number is called the mantissa.
$\log{(Q)} \,=\, C + \log{(m)}$
The literal $C$ expresses characteristic and the quantity in decimal form obtained from $\log{m}$ is known as mantissa.
The quantity ($m$) inside the logarithm is adjusted to a decimal number whose whole number part should be greater than or equal to $1$ but less than $10$. In other words, $1 \leq m < 10$
Therefore, $0 \leq \log{m} < 1$
$6583$ is a quantity; let us find the mantissa for this quantity.
Express the quantity inside the logarithm as a decimal number but the whole number part of this decimal number should be greater than or equal to $1$ and less than $10$.
$\log{(6583)}$ $\,=\,$ $\log{(6.583 \times {10}^3)}$
The logarithm of product of two quantities can be written as sum of their logs as per product rule of logarithms.
$\implies$ $\log{(6583)}$ $\,=\,$ $\log{(6.583)}$ $+$ $\log{({10}^3)}$
Forget about the logarithm of the quantity in exponential form and concentrate on the logarithm of the decimal number to find the mantissa. Remember, we consider only the first four digits of a number when finding the mantissa from a log table. In the log table, the row for $65$ under column $8$ gives $8182$, and the mean-difference column for $3$ gives $2$.
Finally, add both quantities: $8182+2 = 8184$.
The quantity inside the logarithm is $6583$ and it’s adjusted to $6.583$. It’s actually greater than or equal to $1$ and less than $10$. Hence, the logarithm of $6.583$ should be greater than or equal to $0$ but less than $1$.
Therefore, $\log{(6.583)} = 0.8184$
Therefore, the mantissa for the logarithm of number $6583$ is $0.8184$. Thus, you can find mantissa for any quantity by using logarithmic table.
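The characteristic/mantissa split can be checked with `math.log10` (a sketch; the helper name is mine):

```python
import math

# The characteristic/mantissa split sketched above, using math.log10.
def characteristic_and_mantissa(q):
    log_q = math.log10(q)
    c = math.floor(log_q)       # characteristic (integral part)
    m = log_q - c               # mantissa, in [0, 1)
    return c, m

c, m = characteristic_and_mantissa(6583)
print(c, round(m, 4))  # 3 0.8184
```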
|
Fred Kline
Contact: fred.kline.98104ATgmailDOTcom
I donate regularly to the The OEIS Foundation.
When I look at the patterns, I can hear the wheels turning.
When I look at the math, I find out the hamsters have died.
6 answers · 3 questions · ~4k people reached
Seattle, WA
Member for 7 years, 5 months · 144 profile views
Last seen Aug 22 at 22:56

Communities (16)
Mathematica: 799 (2 gold, 14 silver, 37 bronze badges)
Mathematics: 542 (2 gold, 10 silver, 40 bronze badges)
MathOverflow: 530 (2 gold, 8 silver, 32 bronze badges)
Physics: 227 (1 gold, 6 silver, 15 bronze badges)
Law: 128 (5 bronze badges)

Top network posts:
50 Are there suitable versioning systems for Mathematica notebooks?
36 $\prod_{n=1}^{\infty} n^{\mu(n)}=\frac{1}{4 \pi ^2}$
22 How can we create Randolph diagrams in Mathematica?
17 Conjecture---Identity for Sieve of Eratosthenes collisions.
16 Need tips on improving this directed graph
15 Infinite product for Zeta[2]?
12 Ulam's Spiral with Oppermann's Diagonals (quarter-squares)
5 Cleaning up post that has been cited by OEIS Jul 19 '14
3 How to retain posted contents for later reference Sep 30 '14
2 Question about post that was closed and deleted. Sep 7 '12
2 Policy on using real names? May 2 '13
-2 Name Suggestions for SE Blog May 15 '14
-8 I don't understand the motives behind the hold. Jan 17 '15
|
Question
Is there a closed form for integrals such as
$\int_{-\infty }^{\infty } e^{-y^2} \text{erf}(1-y) \, dy$
The integrand seems simple enough.
There is a table of different integrals involving the $\text{erf}$ function, where one can find an answer ($13$, p. $8$) to your question,
$$ \int_{-\infty}^\infty e^{-y^2}\text{erf}(1-y)\:dy=\sqrt{\pi}\cdot \text{erf}\left(\frac1{\sqrt{2}}\right). \tag1 $$
Let's find a way to obtain the given closed form.
Proposition. One has, for any real number $b$,
$$ \int_{-\infty}^\infty e^{-(y+b)^2}\text{erf}(y)\:dy=-\sqrt{\pi}\cdot \text{erf}\left(\frac{b}{\sqrt{2}}\right). \tag2 $$
Proof. One has$$\begin{align}\partial_b \left(\int_{-\infty}^\infty e^{-(y+b)^2}\text{erf}(y)\:dy \right)&=-2\int_{-\infty}^\infty (y+b)e^{-(y+b)^2}\text{erf}(y)\:dy\\\\&=\left[e^{-(y+b)^2}\text{erf}(y) \right]_{-\infty}^\infty-\frac{2}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-(y+b)^2}e^{-y^2}dy\\\\&=0-\frac{2}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-(y+b)^2}e^{-y^2}dy\\\\&=-\sqrt{2}\: e^{\large-\frac{b^2}{2}}\end{align}$$ then one obtains $(2)$ by integrating the latter function. By putting $b=-1$ and making a change of variable one gets the desired integral.
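The closed form $(1)$ can be verified numerically with the standard library alone (a sketch using `math.erf` and a plain Riemann sum; the integrand decays like $e^{-y^2}$, so $[-8, 8]$ is plenty):

```python
import math

# Numerical check of (1) with a plain Riemann sum and math.erf.
h = 1e-3
total = 0.0
y = -8.0
while y < 8.0:
    total += math.exp(-y * y) * math.erf(1 - y) * h
    y += h

closed_form = math.sqrt(math.pi) * math.erf(1 / math.sqrt(2))
print(total, closed_form)  # both ~ 1.2100
```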
|
Cauchy Sequence Is Eventually Bounded Away From Non-Limit

Theorem
Let $\struct {R, \norm {\, \cdot \,} }$ be a normed division ring.
Let $\sequence {x_n}$ be a Cauchy sequence in $R$.
Suppose $\sequence {x_n}$ does not converge to $l \in R$, then:
$\exists K \in \N$ and $\exists C \in \R_{\gt 0}: \forall n \gt K: C \lt \norm {x_n - l}$

Proof
Since $\sequence {x_n}$ does not converge to $l$ then:
$\exists \epsilon \in \R_{\gt 0}: \forall n \in \N, \exists m \ge n: \norm {x_m - l} \ge \epsilon$
Since $\sequence {x_n}$ is a Cauchy sequence then:
$\exists K \in \N: \forall n, m \ge K: \norm {x_n - x_m} \lt \dfrac \epsilon 2$
Let $M \ge K$ be such that $\norm {x_M - l} \ge \epsilon$.
Then $\forall n > K$:
$\epsilon \le \norm {x_M - l}$
$= \norm {x_M - x_n + x_n - l}$
$\le \norm {x_M - x_n} + \norm {x_n - l}$ (Axiom (N3) of the norm: Triangle Inequality)
$\lt \dfrac \epsilon 2 + \norm {x_n - l}$ (since $n, M \ge K$)
Hence, subtracting $\dfrac \epsilon 2$ from both sides of the inequality:
$\dfrac \epsilon 2 \lt \norm {x_n - l}$
Let $C = \dfrac \epsilon 2$ and the result follows.
$\blacksquare$
|
Since the explanation was a little more complicated than I initially thought, I figured it would be worth it to combine my comments (and info from Physics SE) into an answer.
Quantum particles satisfy Fermi–Dirac or Bose–Einstein statistics depending on whether they are fermions or bosons. These distributions have the form $$\langle n_i\rangle=\frac{1}{\exp[(\epsilon_i-\mu)/k_bT]\pm1}$$ where the plus/minus is for fermions/bosons.
To consider the high temperature limit, we need to note that not only is there a direct temperature dependence, but $\mu$ is also dependent on temperature. Specifically, in the high temperature limit, $\mu<0$ and $|\mu|\gg k_bT$. Combining this information, we can make the (very accurate) approximation $$\exp[(\epsilon_i-\mu)/k_bT]\pm1\approx\exp[(\epsilon_i-\mu)/k_bT]$$ as the exponential function will be much larger than $1$. This gives that at high temperatures $$\langle n_i\rangle\approx\frac{1}{\exp[(\epsilon_i-\mu)/k_bT]}$$ which matches the form of the Maxwell–Boltzmann distribution. We can see that at low temperatures these two distributions would not agree, as the additional term in the denominator would become more significant as the exponential got smaller.
It's important to remember that the particles are always indistinguishable; all we have done in the high temperature limit is made an approximation that simplified the functional form. We should
not take this coincidental agreement with the MB distribution (for which the particles are assumed to be distinguishable) to imply that the particles have become distinguishable.
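The high-temperature agreement can be sketched numerically (helper names are mine; $x = (\epsilon_i-\mu)/k_bT$, which is large when $\mu$ is large and negative):

```python
import math

# Occupation numbers for fermions/bosons vs. the Maxwell-Boltzmann form,
# as a function of x = (epsilon_i - mu) / (k_b * T).
def n_fd(x): return 1.0 / (math.exp(x) + 1.0)   # Fermi-Dirac
def n_be(x): return 1.0 / (math.exp(x) - 1.0)   # Bose-Einstein
def n_mb(x): return math.exp(-x)                # Maxwell-Boltzmann

# High-temperature regime (x large): all three agree closely.
x = 10.0
print(n_fd(x), n_be(x), n_mb(x))

# Low-temperature regime (x small): the distributions differ substantially.
x = 0.1
print(n_fd(x), n_be(x), n_mb(x))
```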
|
In geometry, the notion of a connection makes precise the idea of transporting data along a curve or family of curves in a parallel and consistent manner. There are a variety of kinds of connections in modern geometry, depending on what sort of data one wants to transport. For instance, an affine connection, the most elementary type of connection, gives a means for transporting tangent vectors to a manifold from one point to another along a curve. An affine connection is typically given in the form of a covariant derivative, which gives a means for taking directional derivatives of vector fields: the infinitesimal transport of a vector field in a given direction.
Connections are of central importance in modern geometry in large part because they allow a comparison between the local geometry at one point and the local geometry at another point. Differential geometry embraces several variations on the connection theme, which fall into two major groups: the infinitesimal and the local theory. The local theory concerns itself primarily with notions of parallel transport and holonomy. The infinitesimal theory concerns itself with the differentiation of geometric data. Thus a covariant derivative is a way of specifying a derivative of a vector field along another vector field on a manifold. A Cartan connection is a way of formulating some aspects of connection theory using differential forms and Lie groups. An Ehresmann connection is a connection in a fibre bundle or a principal bundle by specifying the allowed directions of motion of the field. A Koszul connection is a connection generalizing the derivative in a vector bundle.
Connections also lead to convenient formulations of
geometric invariants, such as the curvature (see also curvature tensor and curvature form), and the torsion tensor.

Motivation: the unsuitability of coordinates
Parallel transport (of black arrow) on a sphere. Blue, respectively red arrows represent parallel transports in different directions but ending at the same lower right point. The fact that they end up not pointing in the same direction is a function of the curvature of the sphere.
Consider the following problem. Suppose that a tangent vector to the sphere $S$ is given at the north pole, and we are to define a manner of consistently moving this vector to other points of the sphere: a means for parallel transport. Naïvely, this could be done using a particular coordinate system. However, unless proper care is applied, the parallel transport defined in one system of coordinates will not agree with that of another coordinate system. A more appropriate parallel transportation system exploits the symmetry of the sphere under rotation. Given a vector at the north pole, one can transport this vector along a curve by rotating the sphere in such a way that the north pole moves along the curve without axial rolling. This latter means of parallel transport is the Levi-Civita connection on the sphere. If two different curves are given with the same initial and terminal point, and a vector $v$ is rigidly moved along the first curve by a rotation, the resulting vector at the terminal point will be different from the vector resulting from rigidly moving $v$ along the second curve. This phenomenon reflects the curvature of the sphere. A simple mechanical device that can be used to visualize parallel transport is the south-pointing chariot.
For instance, suppose that $S$ is given coordinates by the stereographic projection. Regard $S$ as consisting of unit vectors in $\mathbf{R}^3$. Then $S$ carries a pair of coordinate patches: one covering a neighborhood of the north pole, and the other of the south pole. The mappings
$$\begin{align} \varphi_0(x,y) & =\left(\frac{2x}{1+x^2+y^2}, \frac{2y}{1+x^2+y^2}, \frac{1-x^2-y^2}{1+x^2+y^2}\right)\\[8pt] \varphi_1(x,y) & =\left(\frac{2x}{1+x^2+y^2}, \frac{2y}{1+x^2+y^2}, \frac{x^2+y^2-1}{1+x^2+y^2}\right) \end{align}$$
cover a neighborhood $U_0$ of the north pole and $U_1$ of the south pole, respectively. Let $X$, $Y$, $Z$ be the ambient coordinates in $\mathbf{R}^3$. Then $\varphi_0$ and $\varphi_1$ have inverses
$$\begin{align} \varphi_0^{-1}(X,Y,Z)&=\left(\frac{X}{Z+1}, \frac{Y}{Z+1}\right), \\[8pt] \varphi_1^{-1}(X,Y,Z)&=\left(\frac{-X}{Z-1}, \frac{-Y}{Z-1}\right), \end{align}$$
so that the coordinate transition function is inversion in the circle:
$$\varphi_{01}(x,y) = \varphi_0^{-1}\circ\varphi_1(x,y)=\left(\frac{x}{x^2+y^2},\frac{y}{x^2+y^2}\right)$$
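The transition-function computation can be checked numerically (a sketch with my own helper names):

```python
# Numerical check that the transition map phi_01 = phi_0^{-1} o phi_1
# is inversion in the unit circle.
def phi1(x, y):
    r2 = x * x + y * y
    return (2 * x / (1 + r2), 2 * y / (1 + r2), (r2 - 1) / (1 + r2))

def phi0_inv(X, Y, Z):
    return (X / (Z + 1), Y / (Z + 1))

def inversion(x, y):
    r2 = x * x + y * y
    return (x / r2, y / r2)

x, y = 0.3, 0.4
composed = phi0_inv(*phi1(x, y))
print(composed, inversion(x, y))  # the two pairs agree
```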
Let us now represent a vector field in terms of its components relative to the coordinate derivatives. If $P$ is a point of $U_0 \subset S$, then a vector field may be represented by the pushforward
$$v(P) = J_{\varphi_0}(\varphi_0^{-1}(P))\cdot {\bold v}_0(\varphi_0^{-1}(P))\qquad(1)$$
where $J_{\varphi_0}$ denotes the Jacobian matrix of $\varphi_0$, and ${\bold v}_0 = {\bold v}_0(x, y)$ is a vector field on $\mathbf{R}^2$ uniquely determined by $v$. Furthermore, on the overlap between the coordinate charts $U_0 \cap U_1$, it is possible to represent the same vector field with respect to the $\varphi_1$ coordinates:
$$v(P) = J_{\varphi_1}(\varphi_1^{-1}(P))\cdot {\bold v}_1(\varphi_1^{-1}(P)). \qquad (2)$$
To relate the components ${\bold v}_0$ and ${\bold v}_1$, apply the chain rule to the identity $\varphi_1 = \varphi_0 \circ \varphi_{01}$:
$$J_{\varphi_1}(\varphi_1^{-1}(P)) = J_{\varphi_0}(\varphi_0^{-1}(P))\cdot J_{\varphi_{01}}(\varphi_1^{-1}(P)).$$
Applying both sides of this matrix equation to the component vector ${\bold v}_1(\varphi_1^{-1}(P))$ and invoking (1) and (2) yields
$${\bold v}_0(\varphi_0^{-1}(P)) = J_{\varphi_{01}}(\varphi_1^{-1}(P))\cdot {\bold v}_1(\varphi_1^{-1}(P)). \qquad (3)$$
We come now to the main question: how should a vector field be transported in parallel along a curve? Suppose that $P(t)$ is a curve in $S$. Naïvely, one may consider a vector field parallel if its coordinate components are constant along the curve. However, an immediate ambiguity arises: in which coordinate system should these components be constant?

For instance, suppose that $v(P(t))$ has constant components in the $U_1$ coordinate system; that is, the functions ${\bold v}_1(\varphi_1^{-1}(P(t)))$ are constant. However, applying the product rule to (3) and using the fact that $d{\bold v}_1/dt = 0$ gives
\[
\frac{d}{dt}{\bold v}_0(\varphi_0^{-1}(P(t)))=\left(\frac{d}{dt}J_{\varphi_{01}}(\varphi_1^{-1}(P(t)))\right)\cdot {\bold v}_1(\varphi_1^{-1}(P(t))).
\]
But $\left(\frac{d}{dt}J_{\varphi_{01}}(\varphi_1^{-1}(P(t)))\right)$ is always a non-singular matrix (provided that the curve $P(t)$ is not stationary), so ${\bold v}_1$ and ${\bold v}_0$ cannot ever be simultaneously constant along the curve.

Resolution
The problem observed above is that the usual directional derivative of vector calculus does not behave well under changes in the coordinate system when applied to the components of vector fields. This makes it quite difficult to describe how to parallelly translate vector fields, if indeed such a notion makes any sense at all. There are two fundamentally different ways of resolving this problem.
The first approach is to examine what is required for a generalization of the directional derivative to "behave well" under coordinate transitions. This is the tactic taken by the covariant derivative approach to connections: good behavior is equated with covariance. Here one considers a modification of the directional derivative by a certain linear operator, whose components are called the Christoffel symbols, which involves no derivatives on the vector field itself. The directional derivative $D_{\bold u}{\bold v}$ of the components of a vector ${\bold v}$ in a coordinate system $\varphi$ in the direction ${\bold u}$ is replaced by a covariant derivative:
\[
\nabla_{\bold u} {\bold v} = D_{\bold u} {\bold v} + \Gamma(\varphi)\{{\bold u},{\bold v}\},
\]
where Γ depends on the coordinate system φ and is bilinear in ${\bold u}$ and ${\bold v}$. In particular, Γ does not involve any derivatives of ${\bold u}$ or ${\bold v}$. In this approach, Γ must transform in a prescribed manner when the coordinate system φ is changed to a different coordinate system. This transformation is not tensorial, since it involves not only the first derivative of the coordinate transition, but also its second derivative. Specifying the transformation law of Γ is not sufficient to determine Γ uniquely; some other normalization conditions must be imposed, usually depending on the type of geometry under consideration. In Riemannian geometry, the Levi-Civita connection requires compatibility of the Christoffel symbols with the metric (as well as a certain symmetry condition). With these normalizations, the connection is uniquely defined.
The second approach is to use Lie groups to attempt to capture some vestige of symmetry on the space. This is the approach of Cartan connections. The example above using rotations to specify the parallel transport of vectors on the sphere is very much in this vein.
Historical survey of connections
Historically, connections were studied from an infinitesimal perspective in Riemannian geometry. The infinitesimal study of connections began to some extent with Christoffel. This was later taken up more thoroughly by Gregorio Ricci-Curbastro and Tullio Levi-Civita (Levi-Civita & Ricci 1900) who observed in part that a connection in the infinitesimal sense of Christoffel also allowed for a notion of parallel transport.
The work of Levi-Civita focused exclusively on regarding connections as a kind of differential operator whose parallel displacements were then the solutions of differential equations. As the twentieth century progressed, Élie Cartan developed a new notion of connection. He sought to apply the techniques of Pfaffian systems to the geometries of Felix Klein's Erlangen program. In these investigations, he found that a certain infinitesimal notion of connection (a Cartan connection) could be applied to these geometries and more: his connection concept allowed for the presence of curvature which would otherwise be absent in a classical Klein geometry. (See, for example, (Cartan 1926) and (Cartan 1983).) Furthermore, using the dynamics of Gaston Darboux, Cartan was able to generalize the notion of parallel transport for his class of infinitesimal connections. This established another major thread in the theory of connections: that a connection is a certain kind of differential form.
The two threads in connection theory have persisted through the present day: a connection as a differential operator, and a connection as a differential form. In 1950, Jean-Louis Koszul (Koszul 1950) gave an algebraic framework for regarding a connection as a differential operator by means of the Koszul connection. The Koszul connection was both more general than that of Levi-Civita, and was easier to work with because it finally was able to eliminate (or at least to hide) the awkward Christoffel symbols from the connection formalism. The attendant parallel displacement operations also had natural algebraic interpretations in terms of the connection. Koszul's definition was subsequently adopted by most of the differential geometry community, since it effectively converted the
analytic correspondence between covariant differentiation and parallel translation to an algebraic one.
In that same year, Charles Ehresmann (Ehresmann 1950), a student of Cartan's, presented a variation on the connection as a differential form view in the context of principal bundles and, more generally, fibre bundles. Ehresmann connections were, strictly speaking, not a generalization of Cartan connections. Cartan connections were quite rigidly tied to the underlying differential topology of the manifold because of their relationship with Cartan's equivalence method. Ehresmann connections were rather a solid framework for viewing the foundational work of other geometers of the time, such as Shiing-Shen Chern, who had already begun moving away from Cartan connections to study what might be called gauge connections. In Ehresmann's point of view, a connection in a principal bundle consists of a specification of
horizontal and vertical vector fields on the total space of the bundle. A parallel translation is then a lifting of a curve from the base to a curve in the principal bundle which is horizontal. This viewpoint has proven especially valuable in the study of holonomy.

References
Levi-Civita, T.; Ricci, G. (1900), "Méthodes de calcul différentiel absolu et leurs applications", Math. Ann. 54: 125–201.
Cartan, Élie (1926), "Espaces à connexion affine, projective et conforme", Acta Math. 48: 1–42.
Ehresmann, C. (1950), "Les connexions infinitésimales dans un espace fibré différentiable", Colloque de Topologie, Bruxelles, pp. 29–55.
Koszul, J. L. (1950), "Homologie et cohomologie des algèbres de Lie", Bulletin de la Société Mathématique de France 78: 65–127.
Lumiste, Ü. (2001), "Connection", in Hazewinkel, Michiel (ed.).
Morita, Shigeyuki (2001), Geometry of Differential Forms, AMS.
Osserman, B. (2004), Connections, curvature, and p-curvature (PDF).

External links

Connections at the Manifold Atlas
Journal of Symbolic Logic, Volume 49, Issue 4 (1984), 1137–1145.
Decidable Subspaces and Recursively Enumerable Subspaces
Abstract
A subspace $V$ of an infinite dimensional fully effective vector space $V_\infty$ is called decidable if $V$ is r.e. and there exists an r.e. $W$ such that $V \oplus W = V_\infty$. These subspaces of $V_\infty$ are natural analogues of recursive subsets of $\omega$. The set of r.e. subspaces forms a lattice $L(V_\infty)$ and the set of decidable subspaces forms a lower semilattice $S(V_\infty)$. We analyse $S(V_\infty)$ and its relationship with $L(V_\infty)$. We show:

Proposition. Let $U, V, W \in L(V_\infty)$ where $U$ is infinite dimensional and $U \oplus V = W$. Then there exists a decidable subspace $D$ such that $U \oplus D = W$.

Corollary. Any r.e. subspace can be expressed as the direct sum of two decidable subspaces.

These results allow us to show:

Proposition. The first order theory of the lower semilattice of decidable subspaces, $\mathrm{Th}(S(V_\infty))$, is undecidable.

This contrasts sharply with the result for recursive sets. Finally we examine various generalizations of our results. In particular we analyse $S^\ast(V_\infty)$, that is, $S(V_\infty)$ modulo finite dimensional subspaces. We show $S^\ast(V_\infty)$ is not a lattice.
Article information
Source: J. Symbolic Logic, Volume 49, Issue 4 (1984), 1137–1145.
First available in Project Euclid: 6 July 2007
Permanent link: https://projecteuclid.org/euclid.jsl/1183741693
Mathematical Reviews (MathSciNet): MR771782
Zentralblatt MATH identifier: 0585.03016
Citation
Ash, C. J.; Downey, R. G. Decidable Subspaces and Recursively Enumerable Subspaces. J. Symbolic Logic 49 (1984), no. 4, 1137--1145. https://projecteuclid.org/euclid.jsl/1183741693
What I suggested back then was to sample the parameters of a Dirichlet distribution by sampling
from a Dirichlet distribution and then multiplying that by a magnitude sampled from an exponential distribution. As it turns out, this is a special case of a much nicer general method.
The new method is to sample each parameter independently from a gamma distribution
\[\gamma_i \sim \mathrm{Gamma}(\alpha_i, \beta_i) \]
This can be related to my previous method where we had the parameters expressed as a magnitude \(\alpha\) multiplied by a vector \(\vec m\) whose components sum to 1. Expressed in terms of the new method
\[\alpha = \sum_i \gamma_i \]
and
\[ m_i = \gamma_i / \alpha \]
Moreover, if all of the \(\beta_i\) are equal, then \(\alpha\) is gamma distributed with shape parameter \(\sum_i \alpha_i\). If this sum is 1, then we have an exponential distribution for \(\alpha\) as in the original method.
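As a quick illustration of the method (a sketch with invented shape parameters, using NumPy), we can draw each parameter from a gamma distribution and then split the draw into a magnitude and a direction whose components sum to 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-component shape parameters; they sum to 1, so with a
# common rate the magnitude alpha should come out Exponential(rate)
shape = np.array([0.2, 0.3, 0.5])
rate = 1.0

# independent gamma draws, one row per sample
gammas = rng.gamma(shape, 1.0/rate, size=(10000, len(shape)))

alpha = gammas.sum(axis=1)       # magnitude of the parameter vector
m = gammas / alpha[:, None]      # direction: components sum to 1

assert np.allclose(m.sum(axis=1), 1.0)
# sum(shape) == 1 and equal rates, so alpha ~ Exponential(rate): mean 1/rate
assert abs(alpha.mean() - 1.0/rate) < 0.05
```

With a common rate, the normalized vector `m` is itself a Dirichlet draw with parameters `shape`, which is what glues the pieces together so neatly.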
I am pretty sure that this formulation also makes MCMC sampling from the posterior distribution easier as well because the products inside the expression for the joint probability will get glued together in a propitious fashion.
Suppose A($\cdot$,$\cdot$) is an efficient randomized algorithm and L is a language such that
$\text{If }x \in L, \text{Pr}_r[(A(x,r) = 1)] = 1$ and if $x \notin L, \text{Pr}_r[A(x, r) = 0] \ge \frac{1}{2}$.
Let $H$ be a hitting set such that for all inputs $x$ of length $n$, if $x \notin L$, then $\exists y \in H, A(x, y) = 0$.
We need to show that there exists a hitting set $H$ of size $O(n)$.
My idea is that from the condition $x \notin L \implies \text{Pr}_r[A(x, r) = 0] \ge \frac{1}{2}$, we can get that for $x \notin L$, $\text{Pr}_r[A(x, r) = 1] \le \frac{1}{2}$. Then we can randomly and independently choose $y_1$, $y_2$, ..., $y_m$ to construct a set $S$. Then for any fixed input $x \notin L$, $\prod_{i = 1}^{m}\Pr[A(x, y_i) = 1] \le 2^{-m}$. Therefore, the probability that there exists at least one $y_i$ such that $A(x, y_i) = 0$ is at least $1 - 2^{-m}$. Now the problem is: if we need to prove the hitting set $H$ must exist, then it seems $1 - 2^{-m}$ should be $1$, which means $m$ should be large enough. However, I cannot find the relationship between $m$ and $n$, or why $m$ of size $O(n)$ would prove such $y_i$ must exist.
Or maybe I am going in the wrong way. Can somebody help me with this? Any help would be appreciated. Thanks in advance.
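For reference, the standard way to connect $m$ and $n$ (not stated in the post) is a union bound over all inputs of length $n$: there are at most $2^n$ inputs $x \notin L$, so for independently chosen $y_1,\dots,y_m$,
\[
\Pr\bigl[\exists\, x \notin L \text{ of length } n \text{ with } A(x,y_i)=1 \text{ for all } i\bigr] \;\le\; 2^n \cdot 2^{-m} \;<\; 1 \qquad \text{when } m = n+1 .
\]
The failure probability does not need to be $0$: as long as it is strictly less than $1$, some fixed choice of $y_1,\dots,y_m$ succeeds for every $x \notin L$ simultaneously, giving a hitting set of size $n+1 = O(n)$.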
As Wikipedia reports, the fastest currently known algorithm for the gcd of two $n$-bit numbers runs in $O(n f(n))$ time where $f(n)$ is a slow-growing function of $n$ (roughly $\log n \cdot \log \log n$). It is not known whether the gcd of two $n$-bit numbers can be computed in $O(n)$ time.
This means that the iterative algorithm for computing the gcd of $k$ $n$-bit numbers (as described in the question) runs in $O(kn f(n))$ time. Obviously, any algorithm needs at least $\Omega(kn)$ time -- this lower bound follows from the fact that any algorithm needs to read the entire input. This doesn't leave much of a gap between the trivial lower bound and the running time of the iterative algorithm. In other words, it doesn't leave much room for improvement in running time.
That's the theoretical answer. From a pragmatic perspective, the $O(n f(n))$-time algorithms only become faster than (asymptotically slower) alternatives once $n$ is fairly large. As a result, if $n$ is not too large, these results might not mean much.
Also, from a pragmatic perspective, we can expect that $\gcd(a,\gcd(b,\gcd(c,d)))$ might be faster than $\gcd(\gcd(a,b),\gcd(c,d))$. They both yield the same answer, but the former might often be faster, because typically one of the two arguments to the gcd will be small. Computing $\gcd(x,y)$ might be faster in practice when $y$ is much smaller than $x$ (because the first step is to replace $x$ with $x \bmod y$) than when $x,y$ are about the same size. So, you might benefit by performing the gcd's in an order that gets you down to a small number as soon as possible.
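A minimal sketch of the iterative scheme (the helper name is ours; Python's `math.gcd` already performs the mod-reduction internally), including the pragmatic early exit once the running gcd reaches 1 — i.e. getting down to a small number as soon as possible:

```python
from math import gcd

def gcd_many(nums):
    """Iterative gcd of a list of nonnegative integers."""
    g = 0                    # gcd(0, a) == a, so 0 is the identity
    for a in nums:
        g = gcd(g, a)
        if g == 1:           # 1 divides everything; no need to read further
            break
    return g

assert gcd_many([24, 36, 60]) == 12
assert gcd_many([7, 13, 10**6]) == 1   # stops after the first two elements
```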
[1101.1650] The cosmological bulk flow: consistency with $\Lambda$CDM and $z\approx 0$ constraints on $\sigma_8$ and $\gamma$
Authors: Adi Nusser, Marc Davis
Abstract: We derive estimates for the cosmological bulk flow from the SFI++ catalog of Tully-Fisher (TF) measurements of spiral galaxies. For a sphere of radius $40\,h^{-1}\,\mathrm{Mpc}$ centered on the Milky Way (MW), we derive a bulk flow of $333 \pm 38\ \mathrm{km\,s^{-1}}$ towards Galactic $(l,b)=(276^\circ,14^\circ)$, with a $3^\circ$ $1\sigma$ error. Within $100\,h^{-1}\,\mathrm{Mpc}$ we get $257\pm 44\ \mathrm{km\,s^{-1}}$ towards $(l,b)=(279^\circ, 10^\circ)$, with a $6^\circ$ error. These directions are at a $40^\circ$ angle with the Supergalactic plane, close to the apex of the motion of the Local Group (LG) of galaxies after correcting it for the Virgocentric infall \citep{st10}. Our findings are consistent with the $\Lambda$CDM model with the latest WMAP best-fit cosmological parameters. But the bulk flow allows independent constraints. For the WMAP-inferred Hubble parameter $h=0.71$ and baryonic mean density parameter $\Omega_b=0.0449$, the constraint from the bulk flow on the matter mean density $\Omega_m$, the normalization of the density power spectrum, $\sigma_8$, and the growth index, $\gamma$, can be expressed as $\sigma_8\Omega_m^{\gamma-0.55}(\Omega_m/0.266)^{0.28}=0.86\pm 0.11$ (for $\Omega_m\approx 0.266$). Fixing $\sigma_8=0.8$ and $\Omega_m=0.266$ as favored by WMAP, we get $\gamma=0.495\pm 0.096$. These local constraints are independent of the biasing relation between mass and galaxies. Our results are based on a method termed ASCE (All Space Constrained Estimate), which reconstructs the bulk flow from an all-space three-dimensional peculiar velocity field constrained to match the TF measurements. For comparison, a maximum likelihood estimate (MLE) is found to lead to similar bulk flows, but with larger errors.
This paper studies the bulk flow of galaxies on scales up to 100[tex]h^{-1}[/tex] Mpc using a single survey of spiral galaxies, with distances determined from the Tully-Fisher relation; the sample has 2859 galaxies. (The authors say they use the inverse Tully-Fisher relation instead of the Tully-Fisher relation; I confess ignorance of the difference.) The velocities are reconstructed by generating a random basis of velocity fields from a [tex]\Lambda[/tex]CDM power spectrum and fitting the coefficients, demanding that on very large scales, the result agrees with [tex]\Lambda[/tex]CDM.
The results are found to be consistent with [tex]\Lambda[/tex]CDM. This might not be interesting, were it not for the contrary claim of 0911.5516, which argues that there are flows in significant excess of the [tex]\Lambda[/tex]CDM expectation on 100[tex]h^{-1}[/tex] Mpc scales. (There are other papers claiming large bulk flows on even larger scales.) The present authors hint that miscalibration between different catalogues in the composite used in 0911.5516 could be the origin of the bulk flow found there. On the other hand, the assumption of a vanilla [tex]\Lambda[/tex]CDM model seems to be heavily used in the present analysis, and it is not transparent how much this biases the results.
In our paper we used a different approach http://arxiv.org/pdf/1010.4276 and we also found that the SFI++ catalogue was consistent with [tex]\Lambda[/tex]CDM, see Table 1. But we found that the SN peculiar velocity data were mildly inconsistent with [tex]\Lambda[/tex]CDM at the two sigma level.
I had missed that paper, thanks.
Just to shortly explain what the inverse T-F relation is: you fit the two parameters "s" and "eta_0" in the relation eta = s * M + eta_0, where eta = log("line_width") is a measure of the galaxy's circular velocity and M is the absolute magnitude. In the original T-F relation it was rather M = a * eta + b.
Now for the results of the paper. The interesting thing is the discrepancy with the results of Watkins et al. 2009 and Feldman et al. 2010. They also use the SFI++ catalog and present results also for this catalog alone, together with other catalogs (including the "Composite"). Their bulk flow from SFI++ alone grows from ~20 Mpc/h, whereas it decreases for Nusser and Davis although they use the same data. So what is the reason? I can see at least two: different methods are used to estimate the bulk flow; the data are handled differently. For example, it is interesting that Watkins et al. present results up to ~60 Mpc/h. Nusser and Davis use the same data and claim to have measured the bulk flow up to ~100 Mpc/h, although their sample is smaller! The bulk flow issue seems to be far from settled IMO.
So in the inverse relation you determine the circular velocity from the magnitude, and not vice versa?
Not exactly :) The circular velocity is your observable, as is the observed magnitude and redshift. What you want is a mean relation between a measure of the circular velocity and absolute magnitude. The latter is calculated from the observed m and redshift. The question is what is your "x" in the f(x)=a*x+b fit. In the ITF, the "x" is absolute magnitude M. For more details, take a look at the recent Davis et al. paper http://arxiv.org/abs/1011.3114
I don't understand... To me the equations are the same, with some terms moved from one side to the other.
They are in a sense... It's only the question of what you fit with your linear regression or whatever, i.e. what is your x and what is your y. But it's still the same Tully-Fisher relation that is dealt with :)
Maybe someone else will make it clearer...
The reason to use the inverse TF relation is that there is intrinsic scatter in the relation and there is also an apparent magnitude limit. If you aren't careful with how you treat the scatter you can introduce a Malmquist-type bias. Malmquist biases are more severe with a larger dispersion in the population (brightness of a not-exactly-standard candle, or scatter about a relation).
Typical methods of fitting y on x effectively assign the scatter to y, the dependent variable. If you make magnitude the dependent variable then your fitted relation will be biased - it will have biases in zeropoint and slope - because you're missing some fraction of low luminosity galaxies and you're not missing an equal fraction at all velocities. If you make velocity the dependent variable, then the scatter is parallel to the selection limit, so to speak, and the fitted relation is less biased. Discussions of fitting relations with both observational errors and scatter can be found in the appendix of Weiner et al 2006, http://adsabs.harvard.edu/abs/2006ApJ...653.1049W and B. Kelly 2007, http://adsabs.harvard.edu/abs/2007ApJ...665.1489K. My paper is Tully Fisher-oriented, although the fitting issues are general; Kelly's paper is more statistically sophisticated and correct.
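This effect can be seen in a toy simulation (all numbers invented, not the SFI++ data): put the intrinsic scatter in eta, impose a cut in M, and compare the slope recovered by the inverse fit (eta on M) with the forward fit (M on eta):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy inverse-TF model: eta = s*M + eta0, with intrinsic scatter in eta
s_true, eta0_true, sigma = -0.12, -2.0, 0.03
M = rng.uniform(-23.0, -18.0, 20000)
eta = s_true*M + eta0_true + rng.normal(0.0, sigma, M.size)

# magnitude limit ~ cut in M (keep only the bright galaxies)
keep = M < -19.0
M_s, eta_s = M[keep], eta[keep]

# inverse fit (eta on M): scatter is on the dependent variable and the
# cut is on the independent one, so the slope comes out nearly unbiased
s_inv = np.polyfit(M_s, eta_s, 1)[0]

# forward fit (M on eta), converted to the same parameterization: the
# scatter now sits on the "x" side, which biases (attenuates) the slope
s_fwd = 1.0/np.polyfit(eta_s, M_s, 1)[0]

assert abs(s_inv - s_true) < abs(s_fwd - s_true)
```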
That makes it a little clearer, thanks.
For the following sequence, how do I find if it converges and if so how do I find its limits.
$$a_n = \frac{12-8n}{4n+36}, \qquad n=1,2,3,\ldots$$
What are the steps that I need to follow to get the answer?
Hint: Divide numerator and denominator by $n$. Then check what the limit is as $ n$ tends to $\infty$.
There are several possibilities to solve this problem.
One was already mentioned by lsp.
Divide numerator and denominator by $n$ (the limit stays the same) and check what happens when $n$ goes to infinity. Remember that $\frac{a}{n} \to 0$ as $n \to \infty$ for any fixed $a$.
You can also view the limit as the asymptotic limit of a quotient of two continuous functions (replace the discrete $n$ by a continuous variable; the sequence is a subsequence of the function's values, so it converges to the same limit). With de l'Hospital: $$ \lim_{n \to \infty} \frac{12-8n}{4n+36} = \lim_{n \to \infty} \frac{-8}{4} = -2 $$
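A quick numerical sanity check of the value $-2$ (a sketch using exact rationals; the closed form $a_n + 2 = \frac{21}{n+9}$ follows by combining the fractions directly):

```python
from fractions import Fraction

def a(n):
    # exact value of the n-th term
    return Fraction(12 - 8*n, 4*n + 36)

# the distance to the limit is exactly 21/(n+9), which shrinks to 0
for n in [1, 10, 100, 1000]:
    assert a(n) + 2 == Fraction(21, n + 9)

# numerically, terms get arbitrarily close to -2
assert abs(float(a(10**6)) + 2.0) < 1e-4
```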
I have the following system: $m\cdot\frac{d^2x}{dt^2}=-k(x-l_o)-\frac{dx}{dt}\cdot d+m\cdot g$. It represents a mass with a spring and a damper. It is easy to solve using NDSolve, but I'm trying to solve it using matrices (because if we represent the system using state equations, we can use transformations such as diagonalization or triangularization, so the computation time is reduced). I tried using regular matrices but it doesn't work. Is there any way to do this? The system after an order reduction is:
$ \begin{bmatrix} x_1'(t) \\ x_2'(t) \\ \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{-k}{m} & \frac{-d}{m} \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix}+ \begin{bmatrix} 0\\ \frac{1}{m}\\ \end{bmatrix} f(t) + \begin{bmatrix} 0\\ \frac{kl_o}{m}+g\\ \end{bmatrix} $
where $f(t)=15u(t-5)$ (u(t) is the unit step function).
I have tried this:
lo = 0.50; m = 1.5; k = 20; d = 3; g = 9.8;
A = {{0, 1}, {-k/m, -m}};
z[t_] = 15*HeavisideTheta[t - 5];
b = {{0}, {1/m}};
γ = {{0}, {(k*lo)/m + g}};
S = A*{{x0[t]}, {x1[t]}} + b*z[t] + γ
eqns = {{x0'[t]}, {x1'[t]}}
NDSolve[{eqns == A*{{x0[t]}, {x1[t]}} + b*z[t] + γ, x0[0] == 0, x1[0] == 0},
 {x0[t], x1[t]}, {t, 0, 10}]
$\log_{b}{(m \times n)}$ $\,=\,$ $\log_{b}{m}+\log_{b}{n}$
The product rule is one of the most commonly used identities in logarithms. It states that the logarithm of a product of quantities is equal to the sum of their logs. It can be proved mathematically in algebraic form using the relation between logarithms and exponents together with the product rule of exponents.

Let $m$ and $n$ be two quantities, each expressed as a product of factors of another quantity $b$.
The total number of multiplying factors of $b$ is $x$ and the product of them is equal to $m$.
$m$ $\,=\,$ $\underbrace{b \times b \times b \times \ldots \times b}_{\displaystyle x \, factors}$
$\implies m \,=\, b^{\displaystyle x}$
Similarly, the total number of multiplying factors of $b$ is $y$ and the product of them is equal to $n$.
$n$ $\,=\,$ $\underbrace{b \times b \times b \times \ldots \times b}_{\displaystyle y \, factors}$
$\implies n \,=\, b^{\displaystyle y}$
Finally, the quantities are expressed in exponential form as follows.
$(1) \,\,\,\,\,\,$ $m \,=\, b^{\displaystyle x}$
$(2) \,\,\,\,\,\,$ $n \,=\, b^{\displaystyle y}$
The quantities in exponential form can be written in logarithmic form on the basis of mathematical relation between exponential and logarithmic operations.
$(1) \,\,\,\,\,\,$ $b^{\displaystyle x} \,=\, m$ $\,\, \Leftrightarrow \,\,$ $\log_{b}{m} = x$
$(2) \,\,\,\,\,\,$ $b^{\displaystyle y} \,=\, n$ $\,\,\,\, \Leftrightarrow \,\,$ $\log_{b}{n} = y$
Multiply the quantities $m$ and $n$ to obtain their product.
$m \times n$
In fact, the values of the quantities $m$ and $n$ in exponential form are $b^{\displaystyle x}$ and $b^{\displaystyle y}$ respectively.
$\implies m \times n \,=\, b^{\displaystyle x} \times b^{\displaystyle y}$
As per the product rule of exponents, the product of exponential terms having the same base is equal to the base raised to the power of the sum of the exponents.
$\implies m \times n \,=\, b^{\,({\displaystyle x}\,+\,{\displaystyle y})}$
The product of the quantities $m$ and $n$ can be written with a dot, as $m.n$, or simply as $mn$ in mathematics.
$\implies m.n \,=\, b^{\,({\displaystyle x}\,+\,{\displaystyle y})}$
$\implies mn \,=\, b^{\,({\displaystyle x}\,+\,{\displaystyle y})}$
Take $s = x+y$ and $p = mn$. Now, write the equation in terms of $s$ and $p$.
$\implies p = b^{\displaystyle s}$
Now express the equation in logarithmic form.
$\implies \log_{b}{p} = s$
Replace the actual values of quantities $s$ and $p$.
$\implies \log_{b}{(m.n)} = x+y$
The logarithm of product of two quantities $m$ and $n$ to the base $b$ is equal to sum of the quantities $x$ and $y$.
Actually, $x \,=\, \log_{b}{m}$ and $y \,=\, \log_{b}{n}$. Replace them to get the property for the product rule of logarithms.
$\,\,\, \therefore \,\,\,\,\,\, \log_{b}{(m.n)}$ $\,=\,$ $\log_{b}{m}+\log_{b}{n}$
It is proved that the logarithm of a product of two quantities to a base is equal to the sum of their logs to the same base. This fundamental rule is not limited to two quantities and can be applied to more than two quantities. Therefore, this logarithmic identity is used as a formula in mathematics.
$\log_{b}{(m.n.o \cdots)}$ $=$ $\log_{b} m$ $+$ $\log_{b} n$ $+$ $\log_{b} o$ $+ \cdots$
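The identity and its multi-factor extension can be spot-checked numerically (a short Python sketch using the standard library):

```python
import math

# product rule: log_b(m*n) == log_b(m) + log_b(n)
for b in (2, 10, math.e):
    for m, n in [(8, 16), (3.5, 0.25), (7, 7)]:
        lhs = math.log(m*n, b)
        rhs = math.log(m, b) + math.log(n, b)
        assert math.isclose(lhs, rhs, rel_tol=1e-12)

# multi-factor extension: log_b(m*n*o) == log_b(m) + log_b(n) + log_b(o)
assert math.isclose(math.log(2*3*5, 10),
                    sum(math.log(q, 10) for q in (2, 3, 5)),
                    rel_tol=1e-12)
```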
I am trying to evaluate $\lim\limits_{x\to 0}\left(\frac{1}{x^2} - \frac{1}{x\sin x}\right)$. Plugging $x=0$ gives me $\infty - \infty$, which is indeterminate. I then tried to multiply the function by $$\frac{\frac{1}{x^2} + \frac{1}{x\sin(x)}}{\frac1{x^2} + \frac1{x\sin(x)}}$$ which gives me another indeterminate and harder expression... I know I need to use L'Hospital's rule, but I can't seem to find the right algebraic form to use that rule.
Hint $$\begin{align}\frac{1}{{{x^2}}} - \frac{1}{{x\sin x}} &=\frac{x\sin x-x^2}{x^3\sin x}\\&= \frac{x}{{\sin x}}\frac{{\sin x - x}}{{{x^3}}}\end{align}$$
And $$\sin x=x-\frac{x^3}6+o(x^3)\tag 1 $$
As has been noted, $(1)$ is itself a consequence of L'Hôpital's rule, if you prefer.
Peter uses Taylor series for the sine function. In case you are not familiar with those, you must use L'Hospital three times. (Usually, L'Hospital's Rule is dealt with at an earlier stage than series).
Applying it three times to $\frac{\sin x - x}{x^3}$, the numerator ends up as $-\cos x$ and the denominator as $6$. Now let $x\to 0$: since $\frac{x}{\sin x}\to 1$, the answer is $-\dfrac 16$.
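One can sanity-check the value $-\frac16$ numerically (a sketch; $x$ is kept away from $0$ because for very small $x$ the two large terms cancel catastrophically in floating point):

```python
import math

def f(x):
    return 1.0/x**2 - 1.0/(x*math.sin(x))

# f(x) = -1/6 - 7x^2/360 + ..., so moderately small x suffices
for x in (0.1, 0.05, 0.01):
    assert abs(f(x) - (-1.0/6.0)) < 0.01

assert abs(f(0.01) + 1.0/6.0) < 1e-4
```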
A posteriori error estimates for fourth order hyperbolic control problems by mixed finite element methods
Boundary Value Problems, volume 2019, Article number: 90 (2019). Research article, Open Access.
Abstract
In this paper, we consider the a posteriori error estimates of the mixed finite element method for optimal control problems governed by fourth order hyperbolic equations. The state is discretized by the order $k$ Raviart–Thomas mixed elements and the control is discretized by piecewise polynomials of degree $k$. We adopt the mixed elliptic reconstruction to derive the a posteriori error estimates for both the state and the control approximations.

Introduction
The finite element approximation of optimal control problems plays an enormously important role in the numerical treatment of these problems, and the area has been studied extensively; see, for example, [4, 12, 13, 21, 25]. A priori error estimates for finite element approximations were discussed in, for example, [1, 16, 23], where elliptic or parabolic problems are considered from the viewpoint of optimal control theory. Adaptivity has also been studied for many optimal control problems; for example, see [4, 11, 17, 20,21,22].
In some optimal control problems, where the objective functional contains the gradient of the state variable, mixed finite element methods are used to discretize the state equation, so that the scalar variable and its flux can be approximated with the same accuracy; for example, see [3]. Many authors have addressed mixed finite element methods for elliptic problems [6,7,8, 14], for the first biharmonic equation [5], for parabolic problems [26] and for hyperbolic problems [9, 15].
The purpose of this work is to discuss the a posteriori error estimates of the semidiscrete mixed finite element approximation for fourth order hyperbolic optimal control problems. Considering the fourth order hyperbolic equations by the idea of a mixed elliptic reconstruction [24], we obtain the error estimates for the state and the control approximations. The following is the model we considered:
where \(\varOmega \subset {\mathbf{R}^{2}}\) is an open polygonal set with boundary ∂Ω, K is a closed convex set in \(U=L^{2}(J;L^{2}(\varOmega ))\), \(J=[0,T]\), \(f,y_{d}\in L^{2}(J;L^{2}(\varOmega ))\) and \(y_{0}, y_{1}\in H^{4}(\varOmega )\). K is defined as follows:
In the paper, we adopt the standard notation \(W^{m,p}(\varOmega )\) for the Sobolev space on Ω with norm \(\Vert v\Vert _{m,p}\) given by \(\Vert v \Vert _{m,p}^{p}:=\sum_{\vert \alpha \vert \leq m}\Vert D^{\alpha }v\Vert _{L^{p}(\varOmega )}^{p}\), and seminorm \(\vert v\vert _{m,p}\) given by \(\vert v\vert _{m,p}^{p}:=\sum_{\vert \alpha \vert = m}\Vert D^{\alpha }v\Vert _{L^{p}( \varOmega )}^{p}\). We set \(W_{0}^{m,p}(\varOmega )=\{v\in W^{m,p}(\varOmega ): \gamma (D^{\alpha }v )\vert _{\partial \varOmega }=0, \vert \alpha \vert =m \}\), where γ is the trace operator. We denote \(W^{m,2}(\varOmega )(W_{0}^{m,2}(\varOmega ))\) by \(H^{m}(\varOmega )(H_{0}^{m}(\varOmega ))\).
We denote by \(L^{s}(0,T;W^{m,p}(\varOmega ))\) the Banach space of all \(L^{s}\)-integrable functions from J into \(W^{m,p}(\varOmega )\) with norm \(\Vert v\Vert _{L^{s}(J;W^{m,p}(\varOmega ))}= (\int _{0}^{T}\Vert v\Vert _{W^{m,p}( \varOmega )}^{s}\,dt )^{\frac{1}{s}}\) for \(s\in [1,\infty )\), and the standard modification for \(s=\infty \). For simplicity of presentation, we denote \(\Vert v\Vert _{L^{s}(J;W^{m,p}(\varOmega ))}\) by \(\Vert v\Vert _{L^{s}(W^{m,p})}\). Similarly, one can define the spaces \(H^{1}(J;W^{m,p}(\varOmega ))\) and \(C^{k}(J;W^{m,p}(\varOmega ))\); details can be found in [19]. C denotes a general positive constant independent of h.
The rest of this paper is organized as follows. In Sect. 2, we introduce the optimal control problem and its mixed finite element scheme; the section ends with the definition of the mixed elliptic reconstructions, which are used to derive the a posteriori estimates for the fourth order hyperbolic optimal control problems in Sect. 3. Finally, we make some concluding remarks in Sect. 4.
Optimal control problems for mixed methods
A semidiscrete approximation of a mixed finite element for the optimal control problems (1.7)–(1.14) will be constructed. We set the state spaces \(\boldsymbol{L}=L^{2}(J;\boldsymbol{V})\), \(\boldsymbol{L}_{0}=L^{2}(J;\boldsymbol{V}_{0})\) and \(Q=L^{2}(J;W)\), \(W=L^{2}(\varOmega )\), where \(\boldsymbol{V}\) and \(\boldsymbol{V}_{0}\) are defined as follows:
The space \(\boldsymbol{V}\) is a Hilbert space; its norm is defined as follows:
Now we introduce operators: div, ∇, curl and Curl. For any \(\boldsymbol{v}=(\boldsymbol{v}_{1},\boldsymbol{v}_{2})\in (H ^{1}(\varOmega ))^{2}\) or \(w\in H^{1}(\varOmega )\),
From [18], we know that the above optimal control problem has a unique solution \((\tilde{\boldsymbol{p}},y,\boldsymbol{p},\tilde{y},u)\), and that \((\tilde{\boldsymbol{p}},y,\boldsymbol{p},\tilde{y},u)\) is the solution of (2.3)–(2.9) if and only if there is a co-state \((\tilde{\boldsymbol{q}},z,\boldsymbol{q},\tilde{z})\in (\boldsymbol{L}_{0}\times Q\times \boldsymbol{L}\times Q)\) such that \((\tilde{\boldsymbol{p}},y,\boldsymbol{p},\tilde{y}, \tilde{\boldsymbol{q}},z,\boldsymbol{q},\tilde{z},u)\) satisfies the following optimality conditions:
where \((\cdot ,\cdot )\) is the inner product of \(L^{2}(\varOmega )\).
Since K is a control constraint, we can derive a relationship between u and z; this relationship is important for our result and is stated in Lemma 2.1 below, where \({\check{z}}\) denotes the integral average on \(\varOmega \times J\) of the function z.
Let \({\mathcal{T}}_{h}\) be regular triangulations of Ω, let \(h_{\tau }\) be the diameter of τ, and set \(h=\max h_{\tau }\). Furthermore, let \(\mathcal{E}_{h}\) be the set of element sides of the triangulation \({\mathcal{T}}_{h}\) with \(\varGamma _{h}=\bigcup \mathcal{E} _{h}\). Let \(\boldsymbol{V}_{h}\times W_{h}\subset \boldsymbol{V}\times W\) denote the Raviart–Thomas space [3] associated with the triangulations \(\mathcal {T}_{h}\) of Ω. \(P_{k}\) denotes the space of polynomials of total degree no greater than k (\(k\geq 0\)). Let \(\boldsymbol{V}({\tau })= \{\boldsymbol{v}\in P_{k}^{2}({\tau })+x\cdot P_{k}({\tau })\}\), \(W({\tau })=P _{k}({\tau })\). We set
where \(y_{0}^{h}(x)\in W_{h}\) and \(y_{1}^{h}(x)\in W_{h}\) are the mixed elliptic projections of \(y_{0}\) and \(y_{1}\). The optimal control problem (2.23)–(2.29) again has a unique solution \(( \tilde{\boldsymbol{p}}_{h},y_{h},\boldsymbol{p}_{h},\tilde{y}_{h},u_{h})\), and \((\tilde{\boldsymbol{p}}_{h},y_{h},\boldsymbol{p}_{h},\tilde{y}_{h},u_{h})\) is the solution of (2.23)–(2.29) if and only if there is a co-state \((\tilde{\boldsymbol{q}}_{h},z_{h},\boldsymbol{q}_{h},\tilde{z}_{h})\) such that the following optimality conditions hold:
By Lemma 2.1, the relationship between \(u_{h}\) and \(z_{h}\) is given as follows:
where \({\check{z}_{h}}=\frac{\int _{0}^{T}\int _{\varOmega }z_{h}\,dx\,dt}{\int _{0}^{T}\int _{\varOmega }1\,dx\,dt}\) is the integral average on \(\varOmega \times J\) of the function \(z_{h}\).
Now, we give the local definition of \(\operatorname {div}_{h}\), \(\operatorname{curl} _{h}:H^{1}(\mathcal {T}_{h})^{2}\rightarrow L^{2}(\varOmega )\) and \(\nabla _{h}\), \(\operatorname{Curl}_{h}:H^{1}(\mathcal {T}_{h})\rightarrow L^{2}(\varOmega )^{2}\), such that for any \(T\in \mathcal {T}_{h}\)
Set \(P_{h}:W\rightarrow W_{h}\) to be the orthogonal \(L^{2}(\varOmega )\)-projection into \(W_{h}\) [2], which satisfies
The commuting diagram property reads
where I denotes the identity operator.
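The defining orthogonality of such an \(L^{2}\) projection can be illustrated in a simple one-dimensional analogue, projecting onto piecewise constants on a small mesh (a toy sketch with invented names, not the two-dimensional Raviart–Thomas setting of the paper):

```python
from fractions import Fraction as Fr

# L2 projection of v(x) = x^2 onto piecewise constants over a 1D mesh:
# on each cell the projection is simply the cell average of v.
def cell_average_of_x2(a, b):
    # (1/(b-a)) * integral of x^2 over [a, b]
    return (b**3 - a**3) / (3 * (b - a))

mesh = [Fr(0), Fr(1, 2), Fr(1)]
Pv = [cell_average_of_x2(a, b) for a, b in zip(mesh[:-1], mesh[1:])]

# Orthogonality (v - P_h v, w_h) = 0 for every piecewise-constant w_h:
# on each cell, the integral of (x^2 - c) must vanish for the cell value c.
for (a, b), c in zip(zip(mesh[:-1], mesh[1:]), Pv):
    assert (b**3 - a**3) / 3 - c * (b - a) == 0
```

The exact rational arithmetic makes the orthogonality identity hold without round-off, which is the one-dimensional counterpart of the property assumed for \(P_h\) above.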
Next, the intermediate variable \(\tilde{u}\in K\) is introduced as follows:
Next, we present mixed elliptic constructions \((\tilde{\bar{\boldsymbol{p}}}, \bar{y}, \bar{\boldsymbol{p}}, \tilde{\bar{y}},\tilde{\bar{\boldsymbol{q}}}, \bar{z},\bar{\boldsymbol{q}}, \tilde{\bar{z}})\in (\boldsymbol{V}\times W)^{4}\):
For simplicity of presentation, we resolve the errors in following forms:
From mixed elliptic reconstructions [24], we derive the error estimates as below.
Lemma 2.2

For Raviart–Thomas elements, there exists a positive constant C, depending on the domain Ω, on the shape regularity of the elements and on the polynomial degree k, such that
where \(\nabla _{h}\) and \(\operatorname{curl}_{h}\) have been defined in (2.44)–(2.45), and \(J(\boldsymbol{v}\cdot \boldsymbol{t})\) denotes the jump of \(\boldsymbol{v}\cdot \boldsymbol{t}\) across the element edge E for all \(\boldsymbol{v}\in \boldsymbol{V}\), with t being the tangential unit vector along the edge \(E\in \varGamma _{h}\).

Error estimation of optimal control

Lemma 3.1

Proof
We integrate (3.8) from 0 to t, use Gronwall's inequality and the Cauchy inequality, and obtain
where
Note that
then we have
Similar to (3.9), we derive
Taking \(t=0\) and \(w=e_{2tt}(0)\) in (3.4) leads to
Note that
Lemma 3.2

Proof
and
Letting \(\boldsymbol{v}=e_{7}\) in (3.27), we get
Next, for (3.25), we differentiate twice with respect to t and set \(\boldsymbol{v}=e_{7}\). For (3.26), we also differentiate twice with respect to t and set \(w=e_{8}\). For (3.27), we set \(\boldsymbol{v}=e_{5tt}\). For (3.28), we set \(w=\operatorname {div}e_{7}\). Combining the four new equalities, we derive

Lemma 3.3

where ϵ is an arbitrarily small positive constant.

Lemma 3.4
[15]
Let \((\tilde{\boldsymbol{p}},y,\boldsymbol{p},\tilde{y}, \tilde{\boldsymbol{q}},z,\boldsymbol{q},\tilde{z},u)\) and \((\tilde{\boldsymbol{p}}_{h},y _{h},\boldsymbol{p}_{h},\tilde{y}_{h},\tilde{\boldsymbol{q}}_{h},z_{h},\boldsymbol{q}_{h},u _{h})\) be the solutions of (2.10)–(2.22) and (2.30)–(2.42), respectively. Suppose that \((u_{h}+z_{h})\vert _{ \tau }\in H^{1}(\tau )\) and that there exists \(w\in K_{h}\) such that
where

Theorem 3.1

Let \((y,\tilde{\boldsymbol{p}},\tilde{y},\boldsymbol{p},z,\tilde{\boldsymbol{q}}, \tilde{z},\boldsymbol{q},u)\) and \((y_{h},\tilde{\boldsymbol{p}}_{h},\tilde{y}_{h},\boldsymbol{p}_{h},z_{h},\tilde{\boldsymbol{q}}_{h}, \tilde{z}_{h},\boldsymbol{q}_{h},u_{h})\) be the solutions of (2.9)–(2.19) and (2.26)–(2.36), respectively. Then we have

Proof

Conclusion and future work
In this article, using semidiscrete Raviart–Thomas mixed finite element methods, we studied quadratic optimal control problems governed by fourth order hyperbolic equations and obtained a posteriori error estimates. In subsequent work, a posteriori estimates will be considered for a fully discrete mixed finite element approximation. The error estimates for the same problems can, of course, also be discussed for nonlinear fourth order hyperbolic equations.
References

1. Arada, N., Casas, E., Tröltzsch, F.: Error estimates for the numerical approximation of a semilinear elliptic control problem. Comput. Optim. Appl. 23, 201–229 (2002)
2. Babuška, I., Strouboulis, T.: The Finite Element Method and Its Reliability. Oxford University Press, Oxford (2001)
3. Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer, Berlin (1991)
4. Brunner, H., Yan, N.: Finite element methods for optimal control problems governed by integral equations and integro-differential equations. Numer. Math. 101, 1–27 (2005)
5. Cao, W., Yang, D.: Ciarlet–Raviart mixed finite element approximation for an optimal control problem governed by the first bi-harmonic equation. J. Comput. Appl. Math. 233(2), 372–388 (2009)
6. Chen, Y.: Superconvergence of quadratic optimal control problems by triangular mixed finite elements. Int. J. Numer. Methods Eng. 75, 881–898 (2008)
7. Chen, Y., Huang, Y., Liu, W.B., Yan, N.N.: Error estimates and superconvergence of mixed finite element methods for convex optimal control problems. J. Sci. Comput. 42, 382–403 (2009)
8. Chen, Y., Liu, W.B.: A posteriori error estimates for mixed finite element solutions of convex optimal control problems. J. Comput. Appl. Math. 211, 76–89 (2008)
9. Chen, Y., Sun, C.M.: Error estimates and superconvergence of mixed finite element methods for fourth order hyperbolic control problems. Appl. Math. Comput. 244, 642–653 (2014)
10. Douglas, J., Roberts, J.E.: Global estimates for mixed finite element methods for second order elliptic equations. Math. Comput. 44, 39–52 (1985)
11. Gong, W., Yan, N.: A posteriori error estimate for boundary control problems governed by the parabolic partial differential equations. J. Comput. Math. 27, 68–88 (2009)
12. Haslinger, J., Neittaanmaki, P.: Finite Element Approximation for Optimal Shape Design. Wiley, Chichester (1989)
13. Hou, L., Turner, J.C.: Analysis and finite element approximation of an optimal control problem in electrochemistry with current density controls. Numer. Math. 71, 289–315 (1995)
14. Hou, T.: Error estimates of mixed finite element approximations for a class of fourth order elliptic control problems. Bull. Korean Math. Soc. 50(4), 1127–1144 (2013)
15. Hou, T.: A posteriori \(L^{\infty }(L^{2})\)-error estimates of semidiscrete mixed finite element methods for hyperbolic optimal control problems. Bull. Korean Math. Soc. 50, 321–341 (2013)
16. Knowles, G.: Finite element approximation of parabolic time optimal control problems. SIAM J. Control Optim. 20, 414–427 (1982)
17. Li, R., Liu, W., Ma, H., Tang, T.: Adaptive finite element approximation of elliptic control problems. SIAM J. Control Optim. 41, 1321–1349 (2002)
18. Lions, J.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)
19. Lions, J., Magenes, E.: Non Homogeneous Boundary Value Problems and Applications. Grundlehren, vol. 181. Springer, Berlin (1972)
20. Liu, W., Ma, H., Tang, T., Yan, N.: A posteriori error estimates for discontinuous Galerkin time-stepping method for optimal control problems governed by parabolic equations. SIAM J. Numer. Anal. 42, 1032–1061 (2004)
21. Liu, W., Yan, N.: A posteriori error estimates for convex boundary control problems. SIAM J. Numer. Anal. 39, 73–99 (2001)
22. Liu, W., Yan, N.: A posteriori error estimates for optimal control problems governed by Stokes equations. SIAM J. Numer. Anal. 40, 1850–1869 (2003)
23. McKnight, R., Bosarge, W. Jr.: The Ritz–Galerkin procedure for parabolic control problems. SIAM J. Control Optim. 11, 510–524 (1973)
24. Memon, S., Nataraj, N., Pani, A.K.: An a posteriori error analysis of mixed finite element Galerkin approximations to second order linear parabolic problems. SIAM J. Numer. Anal. 50, 1367–1393 (2012)
25. Neittaanmaki, P., Tiba, D.: Optimal Control of Nonlinear Parabolic Systems: Theory, Algorithms and Applications. Dekker, New York (1994)
26. Xing, X., Chen, Y.: Error estimates of mixed methods for optimal control problems governed by parabolic equations. Int. J. Numer. Methods Eng. 75, 735–754 (2008)

Acknowledgements
The authors express their thanks to the referees for their helpful suggestions, which led to improvements of the presentation.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Funding
This work is supported by Youth Innovative Talents Project (Natural Science) of research on humanities and social sciences in Guangdong normal university (2017KQNCX265), The issue for the 13th Five-Year plan for the development of philosophy and social sciences in Guangzhou of 2018 (2018GZGJ168), School projects of Huashang College Guangdong University of Finance and Economics.
Ethics declarations Competing interests
The authors declare that they have no competing interests.
Additional information

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article

MSC: 49J20, 65N30

Keywords: A posteriori error estimates; Optimal control problems; Fourth order hyperbolic equations; Mixed finite element methods
|
Holt-Winters Filtering
Computes Holt-Winters Filtering of a given time series. Unknown parameters are determined by minimizing the squared prediction error.
Keywords: ts

Usage
HoltWinters(x, alpha = NULL, beta = NULL, gamma = NULL,
            seasonal = c("additive", "multiplicative"),
            start.periods = 2, l.start = NULL, b.start = NULL,
            s.start = NULL,
            optim.start = c(alpha = 0.3, beta = 0.1, gamma = 0.1),
            optim.control = list())
Arguments x
An object of class
ts
alpha
\(\alpha\) parameter of Holt-Winters Filter.
beta
\(\beta\) parameter of Holt-Winters Filter. If set to FALSE, the function will do exponential smoothing.
gamma
\(\gamma\) parameter used for the seasonal component. If set to FALSE, a non-seasonal model is fitted.
seasonal
Character string to select an "additive" (the default) or "multiplicative" seasonal model. The first few characters are sufficient. (Only takes effect if gamma is non-zero.)
start.periods
Start periods used in the autodetection of start values. Must be at least 2.
l.start
Start value for level (a[0]).
b.start
Start value for trend (b[0]).
s.start
Vector of start values for the seasonal component (\(s_1[0] \ldots s_p[0]\))
optim.start
Vector with named components alpha, beta, and gamma containing the starting values for the optimizer. Only the values needed must be specified. Ignored in the one-parameter case.
optim.control
Optional list with additional control parameters passed to optim if this is used. Ignored in the one-parameter case.
Details
The additive Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = a[t] + h b[t] + s[t - p + 1 + (h - 1) \bmod p],$$ where \(a[t]\), \(b[t]\) and \(s[t]\) are given by $$a[t] = \alpha (Y[t] - s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] -a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] - a[t]) + (1-\gamma) s[t-p]$$
The multiplicative Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = (a[t] + h b[t]) \times s[t - p + 1 + (h - 1) \bmod p],$$ where \(a[t]\), \(b[t]\) and \(s[t]\) are given by $$a[t] = \alpha (Y[t] / s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] - a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] / a[t]) + (1-\gamma) s[t-p]$$
The data in x are required to be non-zero for a multiplicative model, but it makes most sense if they are all positive.
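As a sanity check on the additive recursions above, here is a minimal Python sketch of the filter (my own function and variable names; an illustration of the formulas, not the actual implementation behind `HoltWinters`):

```python
def hw_additive(y, p, alpha, beta, gamma, a0, b0, s0):
    """Additive Holt-Winters recursions; returns one-step-ahead fits and final state.

    s0 holds the p seasonal start values; the seasonal value used at time t
    is the one stored p steps earlier (slot t mod p).
    """
    a, b, s = a0, b0, list(s0)
    fitted = []
    for t, yt in enumerate(y):
        s_old = s[t % p]                       # plays the role of s[t - p]
        fitted.append(a + b + s_old)           # h = 1 forecast made at time t - 1
        a_next = alpha * (yt - s_old) + (1 - alpha) * (a + b)
        b_next = beta * (a_next - a) + (1 - beta) * b
        s[t % p] = gamma * (yt - a_next) + (1 - gamma) * s_old
        a, b = a_next, b_next
    return fitted, a, b, s

# On a series that exactly follows a level + trend + seasonal pattern,
# the one-step fits reproduce the data (up to round-off).
p, a0, b0, s0 = 2, 0.0, 1.0, [1.0, -1.0]
y = [a0 + (t + 1) * b0 + s0[t % p] for t in range(8)]
fitted, *_ = hw_additive(y, p, alpha=0.5, beta=0.1, gamma=0.3, a0=a0, b0=b0, s0=s0)
assert all(abs(f - v) < 1e-9 for f, v in zip(fitted, y))
```

The real function additionally optimizes the smoothing parameters; the sketch only evaluates the recursions for fixed values.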
The function tries to find the optimal values of \(\alpha\) and/or \(\beta\) and/or \(\gamma\) by minimizing the squared one-step prediction error if they are NULL (the default). optimize will be used for the single-parameter case, and optim otherwise.
For seasonal models, start values for a, b and s are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function decompose) on the start.periods first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend-models (no seasonal component), start values for a and b are x[2] and x[2] - x[1], respectively. For level-only models (ordinary exponential smoothing), the start value for a is x[1].
Value

An object of class "HoltWinters", a list with components:

fitted: A multiple time series with one column for the filtered series as well as for the level, trend and seasonal components, estimated contemporaneously (that is at time t and not at the end of the series).
x: The original series.
alpha: alpha used for filtering.
beta: beta used for filtering.
gamma: gamma used for filtering.
coefficients: A vector with named components a, b, s1, ..., sp containing the estimated values for the level, trend and seasonal components.
seasonal: The specified seasonal parameter.
SSE: The final sum of squared errors achieved in optimizing.
call: The call used.
References
C. C. Holt (1957) Forecasting seasonals and trends by exponentially weighted moving averages,
ONR Research Memorandum, Carnegie Institute of Technology 52. (reprint at http://dx.doi.org/10.1016/j.ijforecast.2003.09.015).
P. R. Winters (1960). Forecasting sales by exponentially weighted moving averages.
Management Science, 6, 324--342. doi:10.1287/mnsc.6.3.324.

See Also

Aliases
HoltWinters, print.HoltWinters, residuals.HoltWinters

Examples
library(stats)
# NOT RUN {
require(graphics)

## Seasonal Holt-Winters
(m <- HoltWinters(co2))
plot(m)
plot(fitted(m))

(m <- HoltWinters(AirPassengers, seasonal = "mult"))
plot(m)

## Non-Seasonal Holt-Winters
x <- uspop + rnorm(uspop, sd = 5)
m <- HoltWinters(x, gamma = FALSE)
plot(m)

## Exponential Smoothing
m2 <- HoltWinters(x, gamma = FALSE, beta = FALSE)
lines(fitted(m2)[,1], col = 3)
# }
Documentation reproduced from package stats, version 3.5.2, License: Part of R 3.5.2
|
PFA
A cardinal $\kappa$ is a
PFA cardinal if $\kappa$ is not zero and the canonical forcing of the PFA of length $\kappa$ (the countable support iteration that at each stage $\gamma$ forces with the lottery sum of all minimal-rank proper partial orders $\mathbb{Q}$ for which there is a family $\cal{D}$ of $\omega_1$ many dense sets in $\mathbb{Q}$ for which there is no filter in $\mathbb{Q}$ meeting them) forces the PFA.
Every supercompact cardinal is a PFA cardinal. It is not yet clear whether the converse is true.
|
Fermat's Last Theorem

Theorem

For all $a, b, c, n \in \Z_{>0}$ with $n > 2$, the equation $a^n + b^n = c^n$ has no solutions.

Proof
The proof of this theorem is beyond the current scope of $\mathsf{Pr} \infty \mathsf{fWiki}$, and indeed, is beyond the understanding of many high level mathematicians.
For the curious reader, the proof can be found here, in a paper published by Andrew Wiles, entitled
Modular elliptic curves and Fermat's Last Theorem, in volume 141, issue 3, pages 443 through 551 of the Annals of Mathematics.
It is worth noting that Wiles' proof was indirect in that he proved a special case of the Taniyama-Shimura Conjecture, which then along with the already proved Epsilon Conjecture implied that integral solutions of the theorem were impossible.
Source of Name
This entry was named for Pierre de Fermat.
As Fermat himself put it, sometime around $1637$:
Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos et generaliter nullam in infinitum ultra quadratum potestatem in duos ejusdem nominis fas est dividere: cujus rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.
Loosely translated from the Latin, that means:
The equation $x^n + y^n = z^n$ has no integral solutions when $n > 2$. I have discovered a perfectly marvellous proof, but this margin is not big enough to hold it.

Nobody managed to find such a proof until it was finally proved by Andrew Wiles in $1994$. It is seriously doubted that Fermat actually had found a general proof of it, and it is almost impossible that he found Wiles' proof, since it uses areas of mathematics that were not yet invented in Fermat's time.

Sources
1972: George F. Simmons: Differential Equations ... (previous) ... (next): $\S 1.6$: The Brachistochrone. Fermat and the Bernoullis
1973: Donald E. Knuth: The Art of Computer Programming: Volume 3: Sorting and Searching: Notes on the Exercises: Exercise $3$
1986: David Wells: Curious and Interesting Numbers ... (previous) ... (next): $2$
1992: George F. Simmons: Calculus Gems ... (previous) ... (next): Chapter $\text {A}.13$: Fermat ($1601$ – $1665$)
1994: Andrew John Wiles: Modular elliptic curves and Fermat's Last Theorem (Ann. Math. Vol. 141, no. 3: 443 – 551)
1997: Gary Cornell, Joseph H. Silverman and Glenn Stevens: Modular Forms and Fermat's Last Theorem
1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.) ... (previous) ... (next): Notes on the Exercises: Exercise $4$
1997: David Wells: Curious and Interesting Numbers (2nd ed.) ... (previous) ... (next): $2$
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: Fermat's last theorem
2008: Ian Stewart: Taming the Infinite ... (previous) ... (next): Chapter $7$: Patterns in Numbers
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): Entry: Fermat's Last Theorem
|
So, I know I'm missing something simple, but I can't find a way to solve the Laplacian with the boundary conditions I've got down.
The Problem:
"Consider the semi-infinite plate sketched below with thickness 2b. The temperature at the base (x = 0) is constant at $T_0$. Heat is transferred on both sides of the plate to a surrounding temperature $T_{\infty}$. The conductivity of the plate is k and the heat transfer coefficient for both sides is h. There is no heat generation. As $x \rightarrow \infty $, the temperature is finite. Solve for steady state temperature."
Below is a sketch of the problem
So far I've managed to define my PDE
$$\frac {\partial^2T}{\partial y^2} + \frac {\partial^2T}{\partial x^2} = 0$$
and boundary conditions
\begin{align} T(0,y) &= T_0 \\ T(\infty , y) &< \infty \\ \frac{\partial T}{\partial y}(x,b) &= - \frac hk [T_b(x) - T_{\infty}] \\ &= f(x) \\ \frac{\partial T}{\partial y}(x,0) &= 0 \end{align}
And I know I'm going to use my separation of variables, but I run into trouble when I try to solve my boundary conditions. I can't re-figure my boundary conditions in a way that gives me a known eigenvalue (generally of the form $\lambda = \frac{n\pi}b$) without giving me some $\sin(\infty)$ kind of a condition. I've tried for a few hours to find boundary conditions that work, but at this point I'm thinking I've either forgotten a simple trick, or I never learned how to deal with Laplacian boundary conditions approaching infinity. Any help would be appreciated.
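For what it's worth, one can verify symbolically that the standard separable building blocks that stay bounded as $x\to\infty$ and respect the symmetry condition at $y=0$ do satisfy the PDE (a sketch with my own symbol names, not a solution to the full problem):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)

# Candidate separable mode: decays as x -> infinity, even in y,
# so its y-derivative vanishes on the midplane y = 0.
T = sp.exp(-lam * x) * sp.cos(lam * y)

assert sp.simplify(sp.diff(T, x, 2) + sp.diff(T, y, 2)) == 0  # Laplace equation
assert sp.diff(T, y).subs(y, 0) == 0                          # dT/dy = 0 at y = 0
```

The remaining difficulty, as the question says, is fixing the admissible values of `lam` from the Robin condition at $y=b$ rather than from a homogeneous Dirichlet condition.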
Also, this is my first post and I'm sure I've left out some important detail or committed some faux pas. Please let me know if I've committed any breaches of etiquette, or if there's something I've left out.
|
I've always used the method of Lagrange multipliers with blind confidence that it will give the correct results when optimizing problems with constraints. But I would like to know if anyone can provide or recommend a derivation of the method at physics undergraduate level that can highlight its limitations, if any.
Lagrange multipliers are used to obtain the maximum of a function $f(\mathbf{x})$ on a surface $\{ \mathbf{x}\in\mathbb{R}^n\mid g(\mathbf{x}) = 0\}$ (I use "surface", but whether it is a 2-dimensional, 1-dimensional, or whatever-dimensional object will depend on the $g$ and the $\mathbb{R}^n$ we are dealing with).
The gradient of $f$, $\nabla f$, points in the direction of greatest increase of $f$. If we want to find the largest value of $f$ along the surface, then we need the direction of greatest increase to be orthogonal to the surface; otherwise, moving along the surface would "capture" some of that increase and $f$ would not achieve its maximum on the surface at that point (this is akin to the fact that in one-variable calculus, the derivative should be $0$ at a maximum: otherwise moving a bit in one direction will increase the value of the function).
In order for $\nabla f$ to be perpendicular to the surface, it must be parallel to the gradient of $g$; so $\nabla f$ must be a scalar multiple of $\nabla g$. So this amounts to finding a solution to the system\begin{align*}\nabla f(\mathbf{x}) &= \lambda \nabla g(\mathbf{x})\\g(\mathbf{x}) &= 0\end{align*}for both $\mathbf{x}$ and $\lambda$. Added. Such a point is not guaranteed to be a maximum or a minimum; it could also be a saddle point, or nothing at all, much as in the one-variable case, where points with $f'(x)=0$ are not guaranteed to be extremes of the function. Another obvious limitation is that if the surface $g$ is not differentiable (does not have a well-defined gradient) then you cannot even set up the system.
An algebraic way of looking at this is as follows:
From an algebraic viewpoint, we know how to find the extremum of a function of many variables: say we want to find the extremum of $f(x_1,x_2,\ldots,x_n)$; we set the gradient to zero and look at the definiteness of the Hessian.
We would like to extend this idea to finding the extremum of a function subject to some constraints. Say the problem is: $$\begin{align} &\text{Minimize } f(x_1,x_2,\ldots,x_n)\\ &\text{subject to: } g_k(x_1,x_2,\ldots,x_n) = 0\\ &\text{where } k \in \{1,2,\ldots,m\} \end{align}$$
If we find the extremum of $f$ just by setting the gradient of $f$ to zero, these extremum need not satisfy the constraints.
Hence, we would like to include the constraints in the previous idea. One way to it is as follows. Define a new function: $$F(\vec{x},\vec{\lambda}) = f(\vec{x}) - \lambda_1 g_1(\vec{x}) - \lambda_2 g_2(\vec{x}) - \cdots - \lambda_m g_m(\vec{x})$$ where $\vec{x} = \left[ x_1,x_2,\ldots,x_n \right], \vec{\lambda} = \left[\lambda_1,\lambda_2,\ldots,\lambda_m \right]$
Note that when the constraints are enforced, we have $F(\vec{x},\vec{\lambda}) = f(\vec{x})$ since $g_j(x) = 0$ when the constraints are enforced.
Let us find the extremum of $F(\vec{x},\vec{\lambda})$. This is done by setting $\frac{\partial F}{\partial x_i} = 0$ and $\frac{\partial F}{\partial \lambda_j} = 0$ where $i \in \{1,2,\ldots,n\}$ and $j \in \{1,2,\ldots,m\}$
Setting $\frac{\partial F}{\partial x_i} = 0$ gives us $$\vec{\nabla}f = \vec{\nabla}g \cdot \vec{\lambda}$$ where $\vec{\nabla}g = \left[\vec{\nabla} g_1(\vec{x}),\vec{\nabla} g_2(\vec{x}),\ldots,\vec{\nabla} g_m(\vec{x}) \right]$
Setting $\frac{\partial F}{\partial \lambda_j} = 0$ gives us $$g_j(x) = 0$$ where $j \in \{1,2,\ldots,m\}$
Hence, we find that when we find the extremum of $F$, the constraints are automatically enforced. This means that the extremum of $F$ corresponds to extremum of $f$ with the constraints enforced.
To decide whether the extremum is a minimum or a maximum, or whether the point we obtain by solving the system is a saddle point, we need to look at the definiteness of the Hessian of $F$.
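As a concrete (and entirely made-up) example of the resulting system, one can let a CAS solve $\nabla f=\lambda\nabla g$ together with $g=0$, here for $f(x,y)=xy$ subject to $x+y=4$:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x * y            # objective (an arbitrary example)
g = x + y - 4        # constraint surface g = 0

# Stationarity of F = f - lam*g in x, y, together with the constraint:
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# The unique stationary point is x = y = 2 with multiplier lam = 2,
# which here is the constrained maximum of f.
assert sols[0][x] == 2 and sols[0][y] == 2 and sols[0][lam] == 2
```

Note that the system alone does not say whether the point is a maximum, minimum, or saddle; that is exactly the Hessian check discussed above.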
|
It is known that every non-constant polynomial with real coefficients admits a factorization in terms of real and quadratic factors. The proof normally uses the Fundamental Theorem of Algebra. Is there an elementary proof of the above which does not involve complex numbers at all?
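To make the statement concrete before discussing proofs: $x^4+1$ has no real roots, yet it splits into two real quadratic factors, as a quick symbolic check (my own example, not part of any proof) confirms:

```python
import sympy as sp

x = sp.symbols('x')
r2 = sp.sqrt(2)

# x^4 + 1 = (x^2 - sqrt(2) x + 1)(x^2 + sqrt(2) x + 1)
q1 = x**2 - r2 * x + 1
q2 = x**2 + r2 * x + 1
assert sp.expand(q1 * q2) == x**4 + 1

# Each quadratic factor is irreducible over R: its discriminant is negative.
assert (r2**2 - 4) < 0
```

The difference of squares $(x^2+1)^2 - (\sqrt2\,x)^2$ is what makes this particular factorization easy to find by hand.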
I published such a proof (see my article Another Proof of the Fundamental Theorem of Algebra, American Mathematical Monthly 112 (1), 2005, pp. 76–78), although I doubt that you'll find it elementary.
So, let $p(x)\in\mathbb{R}[x]$ be an irreducible polynomial. If I prove that its degree is $1$ or $2$, I will have proved that every polynomial in $\mathbb{R}[x]$ can be written as a product of linear and quadratic polynomials. Let $n=\deg p(x)$ and assume that $n>1$. The idea is to prove that $n=2$. Note that $\mathbb{R}[x]/\bigl\langle p(x)\bigr\rangle$ is a field which is an extension of $\mathbb R$ and whose dimension as a $\mathbb R$-vector space is $n$. So, all that is needed is to prove that if $K$ is such an extension of $\mathbb R$, then $n=2$.
In order to prove that, I proved that there is a norm $\|\cdot\|$ on $K$ such that$$(\forall x,y\in K):\|x.y\|\leqslant\|x\|.\|y\|.$$This allows us to define the exponential function$$\begin{array}{rccc}\exp\colon&K&\longrightarrow&K\\&x&\mapsto&\displaystyle\sum_{n=0}^\infty\frac{x^n}{n!}.\end{array}$$It turns out that $(\forall x,y\in K):\exp(x+y)=\exp(x)\exp(y)$. That is, $\exp$ is a group homomorphism from $(K,+)$ into $(K\setminus\{0\},.)$. It can be proved that $\exp(K)$ is both an open and a closed set of $K\setminus\{0\}$. Since $n>1$, $K\setminus\{0\}$ is connected and therefore $\exp$ is surjective.
It can also be proved that $\ker\exp$ is a discrete subgroup of $(K,+)$. This means that either $\ker\exp=\{0\}$ or that there are $k$ linearly independent vectors $v_1,\ldots,v_k\in K$ such that $\ker\exp=\mathbb{Z}v_1\oplus\cdots\oplus\mathbb{Z}v_k$. It is not hard to prove that $\exp$ induces a homeomorphism between $K/\ker\exp$ and $K\setminus\{0\}$. But then there are two possibilites:
$\ker\exp=\{0\}$: this is impossible, because $\mathbb{R}^n$ and $\mathbb{R}^n\setminus\{0\}$ are not homeomorphic. $\ker\exp=\mathbb{Z}v_1\oplus\cdots\oplus\mathbb{Z}v_k$. Then $K/\ker\exp$ is homeomorphic to $(S^1)^k\times\mathbb{R}^{n-k}$, which is not simply connected. But $\mathbb{R}^n\setminus\{0\}$ issimply connected when $n>2$. Therefore, $n=2$ and this completes my (not that much elementary) proof.
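In the two-dimensional case $K$ is isomorphic to $\mathbb{C}$, which can be represented by real $2\times2$ matrices $\left(\begin{smallmatrix}a&-b\\b&a\end{smallmatrix}\right)$; since these matrices commute, the homomorphism property $\exp(x+y)=\exp(x)\exp(y)$ used above can be checked numerically with a truncated power series (purely an illustration of that identity, with my own helper names):

```python
# Represent a + b*i as the 2x2 real matrix [[a, -b], [b, a]].
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    # Truncated series sum A^n / n!, starting from the identity matrix.
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[entry / n for entry in row] for row in mat_mul(term, A)]
        out = mat_add(out, term)
    return out

def embed(a, b):
    return [[a, -b], [b, a]]

u, v = embed(0.3, -0.8), embed(-0.5, 0.4)
lhs = mat_exp(mat_add(u, v))
rhs = mat_mul(mat_exp(u), mat_exp(v))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

For non-commuting matrices the identity fails in general, which is why the commutativity of the field $K$ matters in the proof.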
|
This is a great question! It's totally reasonable to expect - assuming GCH - that $A^B=A$ when the base $A$ is larger than the exponent $B$, since that's true in all the "simply-imaginable" situations. However, that's not the whole picture. As you've noticed, limit cardinals pose an odd difficulty, and it turns out that a particular kind of limit cardinal breaks the pattern entirely - even if GCH holds.

Some weirdness

Let me begin with a counterexample to your reasonable intuition, which works regardless of whether GCH holds, to motivate what follows:
$$(\aleph_{\omega})^{\aleph_0}>\aleph_\omega.$$
(Recall that $\aleph_\omega$ is the limit of the $\aleph_n$s ($n\in\mathbb{N}$). Even with GCH it's a bit of a weird object, in contrast with say $\aleph_2$ which is just the cardinality of the set of real functions under GCH.)
The fact above may look mysterious, but its proof is actually just a direct diagonalization argument.
First, let's replace $(\aleph_{\omega})^{\aleph_0}$ with something more meaningful. Specifically, it's not hard to show that $(\aleph_\omega)^{\aleph_0}$ is the cardinality of the set $Seq$ of increasing $\omega$-sequences of ordinals less than $\aleph_\omega$.
Now let's set up our diagonalization. No need to use proof by contradiction - let's be constructive! Suppose $F:\aleph_\omega\rightarrow Seq$; I want to produce an $\omega$-sequence $S$ of ordinals $<\aleph_\omega$ which is not in the range of $F$.
To do this, the trick is to "chop $\aleph_\omega$ into $\omega$-many blocks" (namely, "up to $\aleph_0$," "from $\aleph_0$ to $\aleph_1$," ..., "from $\aleph_n$ to $\aleph_{n+1}$," ...) - even though the blocks together cover all of $\aleph_\omega$, each individual block is "small" (= of size $<\aleph_\omega$).
Now just let the $i$th entry of our "antidiagonal sequence" $S$ be the smallest ordinal which isn't any of the first $i$ entries of any of the sequences $F(\kappa)$ for $\kappa<\aleph_i$. So, for example, to find $S(2)$ we look at the first $\aleph_2$-many (according to $F$) elements of $Seq$, and check all of the ordinals that occur as either the first or second terms of any of those; there are only $\aleph_2$-many of these, so there is some ordinal which doesn't appear in the first two terms of $F(\kappa)$ for any $\kappa<\aleph_2$, and the smallest of these is the ordinal we pick to be $S(2)$.
It's easy to check that the sequence $S$ so built is an element of $Seq$ not in the range of $F$, so we're done!
This is really weird. What makes $\aleph_\omega$ so different from, say, $\aleph_{17}$? The answer is:

Cofinality
The distinction between limit and successor (= non-limit) cardinals isn't all there is. The limit cardinals themselves split further into two types - the
regular and singular limit cardinals - and it is the singular limit cardinals that often cause all the trouble.
Incidentally, it is consistent with ZFC+GCH that there are no regular limit cardinals at all - however, there are guaranteed to be lots of singular limit cardinals.
Intuitively, a limit cardinal $\kappa$ is singular if we can "count up to it" in fewer than $\kappa$-many steps. For example, the sequence $$\aleph_1,\aleph_2,\aleph_3,...$$ lets us count up to the cardinal $\aleph_\omega$ in $\omega$-many steps; since $\aleph_\omega$ is much bigger than $\omega$, this means that $\aleph_\omega$ is singular.
This is exactly the "block-chopping-into" thing we did above, but phrased a bit more abstractly.
By contrast, it's not hard to show that every successor (= non-limit) cardinal is regular (= non-singular): if $(\alpha_\eta)_{\eta<\delta}$ is an increasing sequence of ordinals with limit $\beta=\gamma^+$, then $\beta$ is the union of $\delta$-many sets of size $\le\gamma$, so $|\beta|\le\delta\cdot\gamma=\max(\delta,\gamma)$, and since $\gamma<\beta$ this forces $\delta=\beta$.
The number of steps you need to count up to a given cardinal is called its cofinality, and the cofinality of $\kappa$ is denoted $cf(\kappa)$.

Exponentiation
So what does this have to do with exponentiation?
Well, looking back at the proof that $(\aleph_\omega)^{\aleph_0}>\aleph_\omega$, the key point was that we were able to chop the "base" (= $\aleph_\omega$) into "exponent-many" (= $\aleph_0$) small blocks; that is, the cofinality of the base was no larger than the exponent. Indeed, this turns out to be a fundamental issue - if the exponent $\lambda$ is at least the cofinality of the base $\kappa$ (not just the base itself!), we get $\kappa^\lambda>\kappa$ (a bit more snappily, we have $\kappa^{cf(\kappa)}>\kappa$ for all $\kappa$).

Coda
Let me end by mentioning three points around this topic:
The fact that $\kappa^{cf(\kappa)}>\kappa$ is a consequence of the more general Konig's theorem. If you want to get a handle on basic cardinal arithmetic, you should play around with this theorem until you're comfortable with it.
Interestingly, in a precise sense Konig's theorem is basically the only nontrivial fact about cardinal exponentiation which ZFC can outright prove - this is a consequence of Easton's theorem. This is a
very technical result, but I mention it only because knowing that something like it exists gives some additional "punch" to Konig's theorem.
Easton's theorem (and its method of proof in general) suggests a rather bleak picture for ZFC: that basically any nontrivial question about cardinal arithmetic can't be decided from the ZFC axioms alone. This turns out to be
false, and the ZFC-only investigation of cardinal arithmetic was pioneered by Shelah - I think this paper of his is a good, if quite hard, survey of the situation. I won't try to describe it here, but I'll mention one of his flashier results: that if $\aleph_\omega$ is a "strong limit cardinal" (that is, $2^{\aleph_n}<\aleph_\omega$ for all finite $n$ - this is implied by, but much weaker than, GCH up to $\aleph_\omega$), then $$2^{\aleph_\omega}<\aleph_{\omega_4}.$$ Incidentally, Shelah is on record as asking "Why the hell is it $4$?" (page $4$ of the above-linked article).
|
I know how to work with the triple integral of the divergence of F part of the theorem, but in many textbooks, they don't explain the surface integral component. I don't understand how they go from here: $$\iint_{\delta W} F\boldsymbol{\cdot}kdS = \iint_{S_1} F\boldsymbol{\cdot}kdS_1 + \iint_{S_2} F\boldsymbol{\cdot}kdS_2$$
to here: "The surface $S_1$ is defined by $g_1(x,y)$, and \begin{equation}dS_1 = \Big( \frac{\partial g_1}{\partial x}i + \frac{\partial g_1}{\partial y}j - k \Big)dxdy\end{equation} Therefore, $$\iint_{S_1} F\boldsymbol{\cdot}kdS_1 = - \iint_D F(x,y,g_1(x,y))dxdy$$
Similarly, for the top face $S_2$, \begin{equation}dS_2 = \Big( -\frac{\partial g_2}{\partial x}i - \frac{\partial g_2}{\partial y}j + k \Big)dxdy\end{equation} Therefore, $$\iint_{S_2} F\boldsymbol{\cdot}kdS_2 = \iint_D F(x,y,g_2(x,y))dxdy$$"
I understand why one is negative and the other isn't, but I don't understand how they got the derivative of the surface like that.
|
Let $A$ be an orthogonal matrix with $\det (A)=1$. Show that there exists an orthogonal matrix $B$ such that $B^2=A$.
Thank you very much.
Edit: This is indeed true...Thanks @Dirk.
An orthogonal matrix (see the "Canonical form" paragraph or this thread exhibited by user1551) $A$ is block diagonalizable in an orthonormal basis with blocks $$ \left(\matrix {\cos\theta&-\sin\theta\\\sin\theta&\cos\theta}\right) $$ or $\pm 1$ along the diagonal, i.e. $P^*AP=A'$ with $P$ orthogonal and $A'$ block diagonal of rotations as above and $\pm 1$. I denote $M^*$ the adjoint matrix of $M$, which is nothing but the transpose $M^T$ in the real case.
If $\det A=1$, there are an even number $2k$ of entries $-1$ on the diagonal, corresponding to $k$ rotations of angle $\pi$. So we only have rotations and $1$'s.
Now, as pointed out by @Dirk in the $-1$ case,
$$ \left(\matrix{\cos\theta&-\sin\theta\\\sin\theta&\cos\theta} \right)=\left(\matrix{\cos\theta/2&-\sin\theta/2\\\sin\theta/2&\cos\theta/2} \right)^2. $$
The key idea is that a rotation is the square of the half-angle rotation. This works for $-I_2$ in particular, with $\theta=\pi$.
Do that as many times as needed to get $A'=C^2$ with $C$ orthogonal. Then define $$ B:=PCP^*\qquad\Rightarrow\qquad B^2=PCP^*PCP^*=PC^2P^*=PA'P^*=A $$ and note that $B^*B=I$, so $B$ is orthogonal.
Edit: I misunderstood the question at first; thanks @Dirk again.
That's true: $A$ is diagonalizable with $\pm 1$ on the diagonal. You get an even number of $-1$'s, because otherwise the determinant can't be $1$. For this, the matrix $B$ will be something like $$B= \begin{pmatrix} I_n &0 &0\\ 0 & 0 &1\\ 0& -1 & 0 \\ \end{pmatrix} $$ or with more of the $$\begin{pmatrix} 0 & 1\\ -1 &0 \\\end{pmatrix}$$ blocks.
So we have $$A=P^{-1} D P = P^{-1} B B P= (P^{-1} B P) (P^{-1} B P) $$ And $B^T B=I$ hence $B$ is orthogonal.
Depending on what you know, the proof varies from a one-liner to a half page.
A one-line proof is this: every real orthogonal matrix with determinant $1$ can be written as $e^K$ for some skew symmetric matrix $K$; therefore $A=B^2$ for $B=e^{K/2}$.
A longer proof is this: $A$ is real and normal. Hence it is orthogonally similar to its real Jordan form. That is, $$A=Q\left(I_p\oplus-I_q\oplus R(\theta_1)\oplus\cdots\oplus R(\theta_m)\right)Q^T$$ where $Q$ is orthogonal, $\theta_1,\ldots,\theta_m\in(0,\pi)$, $R(\theta)$ denotes the $2\times2$ rotation matrix for angle $\theta$, and $p+q+2m=n$. (See this related question for the reason why such a decomposition is possible.) Since $\det A=1$, $q$ must be an even number. Therefore $A=B^2$, where $$B=Q\left(I_p\oplus\underbrace{R(\tfrac{\pi}2)\oplus \cdots\oplus R(\tfrac{\pi}2)}_{\frac q2 \text{ blocks}}\oplus R(\tfrac{\theta_1}2)\oplus\cdots\oplus R(\tfrac{\theta_m}2)\right)Q^T.$$
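The block construction used in both answers is easy to check numerically. Below is a small sketch (my own illustration, assuming numpy is available) that builds a $4\times4$ special orthogonal $A$ from the decomposition (an $R(\pi)$ block, i.e. the paired $-1$'s, plus an ordinary rotation block) and verifies that halving the angles gives an orthogonal square root:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Build A = Q (R(pi) ⊕ R(0.7)) Q^T: the R(pi) block plays the role of a
# pair of -1's on the diagonal, as in the answers above.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a random orthogonal Q
Z = np.zeros((2, 2))
D = np.block([[rot(np.pi), Z], [Z, rot(0.7)]])
A = Q @ D @ Q.T

# Square root: halve each rotation angle, conjugate by the same Q.
C = np.block([[rot(np.pi / 2), Z], [Z, rot(0.35)]])
B = Q @ C @ Q.T

assert np.allclose(B @ B, A)             # B^2 = A
assert np.allclose(B.T @ B, np.eye(4))   # B is orthogonal
print("B^2 == A and B is orthogonal")
```

The same halving works for any number of blocks, which is exactly why evenness of the $-1$ count (i.e. $\det A = 1$) is what makes the construction go through.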
|
I have never read or otherwise studied the Principia; however, I think the general distinction to which Russell is alluding is still very much a recognized principle in modern (formalized) mathematics. It's basically the difference between a sentence $\varphi,$ versus the metasentence $\vdash \varphi$.
Conceptually, the distinction is best explained with reference to partially ordered sets (hereafter
poset). In a poset, we can assert $x \leq y$ (intuitively, $x$ entails $y$). We may also have a meet-semilattice structure, in which case our assertions can be more sophisticated: we may write $x \wedge x' \leq y,$ intuitively asserting that $x$ and $x',$ taken together, entail $y$.
Note that $\wedge$ is a function, $\leq$ a relation.
Now furthermore, any given meet-semilattice may or may not admit the existence of a function $\rightarrow$ with the following property.
$x \wedge x' \leq y$ iff $x \leq x' \rightarrow y$.
If such a function exists, it is unique, by this result. (If it is not clear what the above definition has to do with Galois connections, please comment and I will clarify.)
Anyway, if there
is such a function (which I will call "implication"), then it can be added to the language (alongside $\wedge$ and $\leq$) to get a more expressive language. And we can prove the basic facts we expect from implication, such as modus ponens:
$$(x \rightarrow y) \wedge x \leq y$$
By the way, I recommend saying $\leq$ as "entails", and $\rightarrow$ as "implies", although this is not standardized.
Anyway, the point is that $\rightarrow$ can be conceived as an internalization of $\leq.$ Note that $\rightarrow$ is a function, while $\leq$ is a relation. Thus, spiritually, we can think of $x \rightarrow y$ as a statement
internal to the language, while a formula like $x \leq y$ can (kind of) be viewed as part of the metalanguage. I am speaking very informally, here, of course.
Now I haven't really explained
how $\rightarrow$ is an internalization of $\leq$, so let's do that. It turns out that if a function $\rightarrow$ with the property of interest exists, then so too does a top element, so long as we're working in a non-empty poset. (Hint: consider the expression $x \rightarrow x$.) Denote the top element $\top;$ we can think of this as denoting unadulterated truthood. Furthermore, it can be shown that $x \leq y$ is equivalent to $\top \leq (x \rightarrow y)$. This is the sense in which $\rightarrow$ is an internalization of $\leq$.
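The adjunction, modus ponens, and the internalization fact can all be checked concretely in a finite model. Here is a small Python sketch (my own illustration, not from Russell) using the powerset Boolean algebra of a three-element set, where $\leq$ is inclusion, $\wedge$ is intersection, and $x \rightarrow y$ is $(U \setminus x) \cup y$:

```python
from itertools import combinations

# The powerset of U = {0,1,2}, ordered by inclusion: a small (Boolean,
# hence in particular Heyting) algebra in which we can verify the adjunction.
U = frozenset({0, 1, 2})
elems = [frozenset(c) for r in range(len(U) + 1)
         for c in combinations(sorted(U), r)]

def leq(x, y):   # x "entails" y
    return x <= y

def meet(x, y):  # x and y taken together
    return x & y

def imp(x, y):   # classical implication: not-x or y
    return (U - x) | y

# The defining adjunction:  x ∧ x' ≤ y  iff  x ≤ (x' → y)
assert all(leq(meet(x, xp), y) == leq(x, imp(xp, y))
           for x in elems for xp in elems for y in elems)

# Modus ponens:  (x → y) ∧ x ≤ y
assert all(leq(meet(imp(x, y), x), y) for x in elems for y in elems)

# Internalization:  x ≤ y  iff  ⊤ ≤ (x → y), with ⊤ = U
assert all(leq(x, y) == leq(U, imp(x, y)) for x in elems for y in elems)

print("all three properties hold in the powerset algebra")
```

In a non-Boolean Heyting algebra (say, the open sets of a topological space) the definition of $\rightarrow$ changes, but the same three checks go through.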
Finally, let's switch to more logical notation. Instead of $\leq$, write $\vdash$ (this can also be articulated: "entails"). And let's move to Greek letters, which can be thought of as denoting logical formulae. Furthermore, as shorthand for $\top \vdash \varphi$, let us write $\vdash \varphi.$
Then there is a clear difference between $\varphi$, and $\vdash \varphi$.
However, oftentimes $\varphi$ can be used as shorthand for $\vdash \varphi$, if the meaning is clear in context. Similarly, sometimes $\varphi \rightarrow \psi$ can be used as shorthand for $\vdash \varphi \rightarrow \psi$, or in other words $\varphi \vdash \psi$.
I think this is (at least conceptually) the distinction to which Russell is alluding.
|
Suppose that the derivative of $f:X\to Y$ is an isomorphism whenever $x$ lies in the submanifold $Z \subset X$, and assume that $f$ maps $Z$ diffeomorphically onto $f(Z)$. Prove that $f$ maps a neighborhood of $Z$ diffeomorphically onto a neighborhood of $f(Z)$.
Here is what I got so far
Since the derivative of $f:X\to Y$ is an isomorphism whenever $x$ lies in the submanifold $Z \subset X$, for $x\in Z$, $f$ is a local diffeomorphism. And since $f$ maps $Z$ diffeomorphically onto $f(Z)$, there are local inverses $g_i: U_i \to X$ where $\{U_i\}$ is a locally finite collection of open subsets of $Y$ covering $f(Z)$.
Let $W=\{y\in U_i:g_i(y)=g_j(y)$ whenever $y\in U_i \cap U_j\}$. Since $f$ is a local diffeomorphism, the $g_i$ can be patched together to define a smooth inverse $g:U\to X$. By the partition of unity properties, there are only finitely many $g_i$ near any point. I know that I need to show that $W$ contains an open neighborhood of $f(Z)$, but I can't find any way to do so. Maybe I missed something in my previous arguments.
|
To find limits of exponential functions, it is essential to study some properties and standard results in calculus; they are used as formulas in evaluating the limits of functions in which exponential functions are involved.
There are four basic properties in limits, which are used as formulas in evaluating the limits of exponential functions.
$\displaystyle \large \lim_{x \,\to\, a}{\normalsize {f{(x)}}^{g{(x)}}}$ $\,=\,$ $\displaystyle \large \lim_{x \,\to\, a}{\normalsize {f{(x)}}^{\, \displaystyle \large \lim_{x \,\to\, a}{\normalsize g{(x)}}}}$
$\displaystyle \large \lim_{x \,\to\, a}{\normalsize b^{f{(x)}}}$ $\,=\,$ $b^{\, \displaystyle \large \lim_{x \,\to\, a}{\normalsize f{(x)}}}$
$\displaystyle \large \lim_{x \,\to\, a}{\normalsize {[f{(x)}]}^n}$ $\,=\,$ ${\Big[\displaystyle \large \lim_{x \,\to\, a}{\normalsize f{(x)}}\Big]}^n$
$\displaystyle \large \lim_{x \,\to\, a}{\normalsize \sqrt[\displaystyle n]{f{(x)}} }$ $\,=\,$ $\sqrt[\displaystyle n]{ \displaystyle \large \lim_{x \,\to\, a}{\normalsize f{(x)} }}$
There are five standard results in limits and they are used as formulas while finding the limits of the functions in which exponential functions are involved.
$(1) \,\,\,$ $\displaystyle \large \lim_{x \,\to\, a}{\normalsize \dfrac{x^n-a^n}{x-a}}$ $\,=\,$ $n.a^{n-1}$
$(2) \,\,\,$ $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \normalsize x}-1}{x}}$ $\,=\,$ $1$
$(3) \,\,\,$ $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize \dfrac{a^{\displaystyle \normalsize x}-1}{x}}$ $\,=\,$ $\log_{e}{a}$
$(4) \,\,\,$ $\displaystyle \large \lim_{x \,\to\, 0}{\normalsize {(1+x)}^\frac{1}{x}}$ $\,=\,$ $e$
$(5) \,\,\,$ $\displaystyle \large \lim_{x \,\to\, \infty}{\normalsize {\Bigg(1+\dfrac{1}{x}\Bigg)}^x}$ $\,=\,$ $e$
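The five standard results above can be checked numerically, which is a handy sanity test when learning them. Here is a small Python sketch (my own addition, using only the standard math module; the sample points and tolerances are arbitrary choices):

```python
import math

x = 1e-6  # a point close to the limit point 0

# (1) (x^n - a^n)/(x - a) -> n*a^(n-1) as x -> a, checked with n = 3, a = 2
assert abs(((2 + x)**3 - 2**3) / x - 3 * 2**2) < 1e-3

# (2) (e^x - 1)/x -> 1 as x -> 0
assert abs((math.exp(x) - 1) / x - 1) < 1e-3

# (3) (a^x - 1)/x -> log_e(a) as x -> 0, checked with a = 5
assert abs((5**x - 1) / x - math.log(5)) < 1e-3

# (4) (1 + x)^(1/x) -> e as x -> 0
assert abs((1 + x)**(1 / x) - math.e) < 1e-3

# (5) (1 + 1/x)^x -> e as x -> infinity
n = 1e6
assert abs((1 + 1 / n)**n - math.e) < 1e-3

print("all five standard limits verified numerically")
```

A numerical check like this is no substitute for a proof, but it quickly catches a misremembered formula.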
List of solved limits problems for evaluating the limits of functions in which exponential functions are involved.
|
Raspberry Pi has a Broadcom BCM2835 chip, which controls 26 GPIO (general purpose input/output) pins. There are C libraries and the RPi.GPIO Python package available online that can be used to control the pins. The RPi.GPIO package is included by default in most Raspberry Pi systems, such as Raspbian, an RPi version of the Debian Linux system.
One drawback of RPi, compared to arduino, is that it doesn't have any analog pin. All the GPIO pins are purely digital. For example, if pin A is an output pin, it can only output LOW (0V) or HIGH (3.3V), represented as 0 or 1. If pin A is an input pin, for any voltage below 0.8V applied on pin A, it takes it as LOW or 0; for any voltage above 1.3V (surprisingly low actually!), it takes it as HIGH or 1 [ref: RPi GPIO].
In the real world, however, a pure 0 or 1 rarely happens. We always get information that can take continuous values in its range. For example, a temperature can be 10C or 50F, or 100C or 212F. These numbers contain more information than simply "cold" or "hot". A distance can be 2 cm or 10 m, and it is not enough to only know "close" or "far away".
There are some methods to overcome this drawback. RPi does support SPI and I2C interfaces, so we can use an external analog-to-digital converter (ADC), such as the MCP3008 or TLC549, and get quasi-analog signals through the SPI or I2C interface. These chips usually cost several bucks. However, with additional commercial sensors, the whole part can cost more than $20 to $30, and it is difficult to make the system compact. For a robotic project one usually needs more than one sensor, and the cost can add up easily.
In fact, in many situations, it is actually possible to avoid using these external devices, and still be able to get analog signals through the digital pins!
The key is to convert analog signal to time duration. Because time is always analog!
I built a simple infrared proximity sensor using several infrared LEDs, one phototransistor, one 2N3904 NPN transistor, a 100nF ceramic capacitor and several low-power resistors, and I am able to get analog readings.
Here is the circuit.
Infrared Proximity Sensor
It doesn't really matter what LEDs, phototransistor or NPN transistors are being used. They are pretty much the same.
The only thing that might matter a little bit is the 100nF (0.1uF) capacitor. I used a low profile ceramic one, which is probably not the best choice. A class 1 ceramic, or film capacitor, will be more suitable here.
Here is the real image of the sensor.
Connect the +5V and GND wires to an external 5V power supply, also connect the GND wire to the ground of the Raspberry Pi GPIO pins. Choose one GPIO pin, say, Pin A as the trigger and connect it to the Trigger wire. Choose another GPIO pin, say, Pin B, as the signal input/output and connect it to the OUT wire.
To measure the distance of an object, we send a trigger signal to activate the infrared LEDs. The light emitted by these LEDs is then reflected by the object in front of the sensor. The phototransistor in the middle collects the reflected light and generates a proportional current. This current is used to integrate the voltage across the capacitor (I=CdV/dt). By monitoring the time it takes the capacitor voltage to reach a certain threshold, we have a sense of how much current was generated by the phototransistor, or equivalently, how much light got reflected. Apparently, the closer the object is, the more light gets reflected. By carefully calibrating the timing of the sensor, we should be able to get a pretty precise measurement of the distance.
Here is the detailed sequence of operations.
1. Zero the capacitor

First, set Pin B to be an output pin and set it to zero.
GPIO.setup(PIN_B,GPIO.OUT)
GPIO.output(PIN_B,0)
time.sleep(0.01)
This will discharge any residual voltage on the capacitor. Note that the RC time for discharging the capacitor is t=RC=500ohm * 100nF = 50 us = 0.00005 sec. By holding pin B at zero volts for 200 RC times, we make sure the capacitor is fully discharged (the residual voltage should be \(e^{-200}\approx 10^{-87}\) times the original residual voltage).
2. Set Pin B as input

Now we use Pin B as an input pin to get data from the phototransistor.

GPIO.setup(PIN_B,GPIO.IN)
3. Light up the LEDs

It's time to turn on the infrared LEDs.
GPIO.setup(PIN_A,GPIO.OUT)
GPIO.output(PIN_A,1)
This will set the voltage of the trigger pin to 3.3V. Since the BE junction of the 2N3904 drops 0.7V, the voltage across R1 is 2.6V. The current through R1 is then \(I=2.6V/4.3k \Omega\approx 0.6 mA\). The 2N3904 then amplifies this current by ~150 times, resulting in a ~100mA current from its collector to emitter. Each of the LEDs will conduct about 50mA for a short time period.
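As a quick numerical cross-check of the arithmetic above (the 0.7V drop and the gain of ~150 are the figures assumed in the text; real 2N3904 gain varies with the individual part and current):

```python
# Recompute the LED drive current from the circuit values given in the text.
v_trigger = 3.3   # GPIO HIGH level (V)
v_be = 0.7        # assumed 2N3904 base-emitter drop (V)
r1 = 4.3e3        # base resistor R1 (ohm)
beta = 150        # assumed current gain of the 2N3904

i_base = (v_trigger - v_be) / r1   # current through R1 into the base
i_collector = beta * i_base        # total current through the LEDs
print(round(i_base * 1e3, 2), "mA base,",
      round(i_collector * 1e3), "mA collector")  # -> 0.6 mA base, 91 mA collector
```

So "~100mA" in the text is the right ballpark; the exact value depends on the transistor's actual gain.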
4. Timing the duration of Pin B remaining LOW

Start to measure how long it takes the capacitor to reach RPi's threshold so that Pin B becomes HIGH.
counter=0
t1=time.clock()*1000
while(GPIO.input(PIN_B)==0)&(counter<1e4):
counter = counter+1
deltat=time.clock()*1000-t1
deltat is the time duration of Pin B remaining LOW. Since deltat is proportional to the reciprocal of the phototransistor current (or the amount of reflected light), and the phototransistor current is roughly proportional to the reciprocal of the distance, deltat is roughly proportional to the distance.
deltat\(\propto \frac{1}{I}\propto \frac{1}{light}\propto distance\)
The (counter<1e4) term is to prevent the situation that it takes too long to integrate the capacitor due to extremely low phototransistor current, or equivalently, infinite distance.
5. Turn off the LEDs

GPIO.output(PIN_A,0)
Before using the sensor practically, we can calibrate the sensor by establishing a table of 1-to-1 correspondence of deltat and distance. When using it, after getting deltat, we just need to check the 1-to-1 table obtained during the calibration.
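Looking up the calibration table at run time is straightforward with linear interpolation between the measured points. Here is a sketch (the table values below are made-up placeholders; a real table comes from measuring deltat with a known object at known distances):

```python
import bisect

# Hypothetical calibration table: (deltat, distance_cm) pairs, sorted by
# deltat. In this sensor, deltat grows with distance.
CAL = [(0.5, 1.0), (1.2, 2.0), (2.8, 4.0), (6.0, 7.0), (11.0, 10.0)]

def distance_from_deltat(deltat):
    """Linearly interpolate a distance from the calibration table,
    clamping to the table's endpoints."""
    times = [t for t, _ in CAL]
    i = bisect.bisect_left(times, deltat)
    if i == 0:
        return CAL[0][1]
    if i == len(CAL):
        return CAL[-1][1]
    (t0, d0), (t1, d1) = CAL[i - 1], CAL[i]
    return d0 + (d1 - d0) * (deltat - t0) / (t1 - t0)

print(distance_from_deltat(2.8))  # -> 4.0 (an exact table point)
print(distance_from_deltat(4.4))  # -> 5.5 (between the 4 cm and 7 cm points)
```

With enough calibration points, linear interpolation is usually good enough; a spline fit is an option if the deltat-distance curve is strongly nonlinear.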
Now there is a clear flaw of the sensor. The amount of reflected light is not only affected by the distance, but also by the reflectivity of the object. A shiny mirror will definitely reflect more light compared to a sponge. I don't know if there is a simple solution to this issue. If we know what object the sensor might encounter, we can use that object to calibrate the sensor.
Speed
What's the speed of the sensor? It depends on the distance of the object and the capacitance of C. A shorter distance or a smaller C will give a faster measurement. I did some tests with the configuration shown in the circuit above; it generally takes no more than 0.05 sec to measure a distance from 0 to 10 cm (4 in).
Minimize the effect of ambient light

Another potential issue is ambient light. If the ambient light is too strong, it will interfere with the LED light and result in an unexpectedly large phototransistor current, and a shorter deltat. The sensor might think it is getting too close to an object, but in fact it is facing some bright light source.
If the ambient light varies quickly, there isn't much we can do about it, unfortunately. However, if the ambient light level remains roughly constant while measuring the distance, there is a simple solution. Since it takes about 0.05 sec to measure a distance, this only requires that the ambient light stay constant during that time period. Good enough for most everyday usage.
We can first perform the above steps to get a deltat, denoted \(\delta t_1\). Then keep the LEDs off, and redo steps 1, 2 and 4 to get a second deltat, denoted \(\delta t_2\).
\(\delta t_2\) is the time duration it takes the ambient light to charge the capacitor. \(\delta t_1\) is the time duration it takes both the ambient light and LED light to charge the capacitor.
Since (1/deltat\(\propto light\)), we need to compute
$$\delta t \equiv \frac{1}{1/\delta t_1-1/\delta t_2}=\frac{\delta t_1\delta t_2}{\delta t_2-\delta t_1}$$
The ambient light effect is then removed.
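In code, the correction is a one-liner: the charge rates (1/deltat) add, so we subtract the ambient rate and invert back. A sketch with made-up numbers:

```python
def corrected_deltat(dt1, dt2):
    """dt1: deltat with LEDs on (ambient + LED light).
    dt2: deltat with LEDs off (ambient light only).
    Returns the deltat the LED light alone would produce."""
    return (dt1 * dt2) / (dt2 - dt1)

# Example: suppose the LED alone would give 2.0 (in some time unit) and
# the ambient light alone gives 10.0. With both on, the rates add, so we
# measure dt1 = 1/(1/2 + 1/10) = 5/3.
dt1 = 1.0 / (1.0 / 2.0 + 1.0 / 10.0)
print(round(corrected_deltat(dt1, 10.0), 6))  # -> 2.0, the LED-only value
```

Note that \(\delta t_1 < \delta t_2\) always holds (more light charges the capacitor faster), so the denominator is positive.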
Or we can use pulse-width modulation (PWM) to precisely control the amount of light emitted by the LEDs, measure deltat several times with different LED brightness, and then perform a linear regression to get the true deltat given by pure LED light.
For example, set the LED power to be 0% (off), 25% (0.25 on duty during each PWM duty cycle), 50%, 75%, 100% (solid on), and get the corresponding \(\delta t_1, \delta t_2, \delta t_3, \delta t_4, \delta t_5\)
Denote \(P\) to be the power of LED, \(P=0, 0.25, 0.5, 0.75, 1\).
If both the distance and ambient light remain constant, there should be
$$\frac{1}{\delta t}=\alpha P+\beta$$
\(\frac{1}{\delta t}\) is proportional to the total light collected by the phototransistor. \(P\) is proportional to the light emitted by the LEDs, \(\beta\) is the effect of constant ambient light.
Obviously, the coefficient \(\alpha\) is determined purely by the distance. In fact, \(\frac{1}{\alpha}\) is the true deltat without any ambient light. Roughly speaking, \(\alpha\) is proportional to 1/distance.
We perform a linear regression to \((P_i, 1/\delta t_i)\), where \(i=1,2,3,4,5\). The best fit coefficient \(\alpha\)
is given by
$$ \alpha \equiv \frac{<P_i/\delta t_i>-<P_i><1/\delta t_i>}{<P_i^2>-<P_i>^2} $$
where \(<x_i>\equiv\sum(x_i)/N\) is the mean of all \(x_i\).
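The best-fit formula above is ordinary least squares for the slope of \(1/\delta t\) against LED power \(P\). A sketch with synthetic data (the coefficients 0.5 and 0.1 below are arbitrary illustration values for \(\alpha\) and \(\beta\)):

```python
def alpha_fit(P, dt):
    """Least-squares slope of 1/dt against P:
    alpha = (<P_i/dt_i> - <P_i><1/dt_i>) / (<P_i^2> - <P_i>^2)."""
    N = len(P)
    inv = [1.0 / t for t in dt]
    m_P = sum(P) / N
    m_inv = sum(inv) / N
    m_P_inv = sum(p * v for p, v in zip(P, inv)) / N
    m_P2 = sum(p * p for p in P) / N
    return (m_P_inv - m_P * m_inv) / (m_P2 - m_P * m_P)

# Synthetic measurements obeying 1/dt = alpha*P + beta exactly,
# with alpha = 0.5 (distance term) and beta = 0.1 (ambient term):
P = [0.0, 0.25, 0.5, 0.75, 1.0]
dt = [1.0 / (0.5 * p + 0.1) for p in P]
print(round(alpha_fit(P, dt), 6))  # -> 0.5, so true deltat = 1/alpha = 2.0
```

With real, noisy measurements the fit averages out the noise, which is the advantage of this multi-power approach over the simple two-measurement correction.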
Other thought
I got this idea from this post
http://www.rpiblog.com/2012/11/reading-analog-values-from-digital-pins.html
The idea of converting analog signal to time duration is very neat. The original post claims
The above technique will only work with sensors that act like resistors like photocells, thermistors, flex sensors, force-sensitive resistors, etc. It cannot be used with sensors that have a pure analog output like IR distance sensors or analog accelerometers.
However we show that it is actually doable with devices like photodiode or so.
I am putting the Python code into a mature package now. I will keep updating this post and include the package.
|
As I work on a problem with 3 linear equations in 2 unknowns, I notice that when I use any two of the equations, I always seem to find a solution. But when I plug that solution into the third equation, it may or may not produce a contradiction, depending on whether it is a solution of that equation too, and I am OK with that. What confuses me is that whichever two equations I pick, they seem to have no choice but to work. Is there something about linear algebra that makes this so, and are there any conditions under which the two equations won't have a consistent solution? My linear algebra is rusty and I am getting back up to speed. These are just equations of lines and maybe the geometry would explain it, but I am not sure how. Thank you.
Each linear equation represents a line in the plane. Most of the time two lines will intersect in one point, which is the simultaneous solution you seek. If the two lines have exactly the same slope, they may not meet so there is no solution or they may be the same line and all the points on the line are solutions. When you add a third equation into the mix, that is another line. It is unlikely to go through the point that solves the first two equations, but it might.
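This is easy to experiment with numerically. The sketch below (assuming numpy; the three example lines are my own) solves the first two equations and then tests whether the third line happens to pass through that intersection point:

```python
import numpy as np

# Three lines in the plane, written as rows of A x = b:
#   x + y = 3,   x - y = 1,   2x + y = 5
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
b = np.array([3.0, 1.0, 5.0])

sol = np.linalg.solve(A[:2], b[:2])   # intersection of lines 1 and 2
print(sol)                            # the point (2, 1)
print(bool(np.isclose(A[2] @ sol, b[2])))  # True: line 3 passes through it
```

Change `b[2]` to any other value and the third check fails: the first two lines still intersect, but the third line misses their intersection point, which is exactly the "may or may not cause a contradiction" behavior from the question. If instead the first two lines were parallel, `np.linalg.solve` would raise `LinAlgError` (singular matrix): that is the exceptional case where picking two equations does not work.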
There are three possible cases for $2$ linear equations with $2$ unknowns (slope and intercept):

$\mathbf{0}$ solution points: $\nexists$, no existence
$\mathbf{1}$ solution point: $\exists !$, existence and uniqueness
$\mathbf{\infty}$ solution points: $\exists$, existence without uniqueness
The lines have the form $y(x) = mx + b$.
Case 1: parallel lines
A solution
does not exist.
The lines are parallel: they have the same slope.
$$ % \begin{align} % y_{1}(x) &= m x + b_{1} \\ % y_{2}(x) &= m x + b_{2} \\ % \end{align} % $$
Case 2: intersecting lines
We have
existence and uniqueness.
The slopes are distinct.
$$m_{1} \ne m_{2}$$
$$ % \begin{align} % y_{1}(x) &= m_{1} x + b_{1} \\ % y_{2}(x) &= m_{2} x + b_{2} \\ % \end{align} % $$
Case 3: coincident lines
We have
existence, but not uniqueness. There is an infinite number of solutions. Every point solves the system of equations.
Both lines are the same.
$$ % \begin{align} % y_{1}(x) &= m x + b \\ % y_{2}(x) &= m x + b \\ % \end{align} % $$
In terms of linear algebra, look at the problem in terms of $\color{blue}{range}$ and $\color{red}{null}$ spaces.
The linear system for two equations is $$ % \begin{align} % m_{1} x - y &= b_{1} \\ % m_{2} x - y &= b_{2} \\ % \end{align} $$ which has the matrix form $$ % \begin{align} % \mathbf{A} x &= b \\ % \left[ \begin{array}{cc} m_{1} & -1 \\ m_{2} & -1 \\ \end{array} \right] % \left[ \begin{array}{cc} x \\ y \\ \end{array} \right] % &= % \left[ \begin{array}{cc} b_{1} \\ b_{2} \\ \end{array} \right] % \end{align} % $$
The Fundamental Theorem provides a natural framework for classifying data and solutions.
Fundamental Theorem of Linear Algebra
A matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$ induces four fundamental subspaces: $$ \begin{align} % \mathbf{C}^{n} = \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ % \mathbf{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} % \end{align} $$
Case 1: No existence
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2})$ and $b_{1} \ne b_{2}$. $$ b = \color{blue}{b_{\mathcal{R}}} + \color{red}{b_{\mathcal{N}}} $$ It is the $\color{red}{null}$ space component which precludes direct solution. (Interestingly enough, there is a least squares solution.)
The data vector $b$ is not a combination of the columns of $\mathbf{A}$. The column space is $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m \\ -1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ m \end{array} \right] } \, \right\} $$
Case 2: Existence and uniqueness
The matrix $\mathbf{A}$ has full rank $(m_{1}\ne m_{2})$. The data vector is entirely in the $\color{blue}{range}$ space $\color{blue}{\mathcal{R} \left( \mathbf{A} \right)}$ $$ b = \color{blue}{b_{\mathcal{R}}} $$
The $\color{red}{null}$ space is trivial: $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}=\mathbf{0}$. $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m_{1} \\ -1 \end{array} \right] }, \, \color{blue}{ \left[ \begin{array}{c} m_{2} \\ -1 \end{array} \right] } \right\} $$
Case 3: Existence, no uniqueness
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2} = m)$, yet $b_{1} = b_{2}$. $$ b = \color{blue}{b_{\mathcal{R}}} $$ The column space is has $\color{blue}{range}$ and $\color{red}{null}$ space components: $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m \\ -1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ m \end{array} \right] } \, \right\} $$
Postscript: the theoretical foundations here are useful. The trip to understanding starts with simple examples like in @Nick's comment.
Let's think using vector notation.
A linear system with two unknowns $x$ and $y$, and two equations $$ \begin{align*} v_1 x + w_1 y &= a_1 \\ v_2 x + w_2 y &= a_2 \end{align*} $$ can be written in vector notation as $$ x\, \vec{v} + y\, \vec{w} = \vec{a}. $$ That is, you want to know if $\vec{a}$ can be written as a linear combination of $\vec{v}$ and $\vec{w}$.
Having fixed the vectors $\vec{v}$ and $\vec{w}$, to state that a solution always exists whatever $\vec{a}$ is, is the same as to state that $\vec{v}$ and $\vec{w}$ span the whole plane. If not ($\vec{v}$ and $\vec{w}$ are parallel), then depending on $\vec{a}$, the solution might not exist. And when it exists, it will not be unique.
|
The lower attic
From Cantor's Attic
Revision as of 11:31, 31 May 2013 by Austinmohr (removing superfluous bullet points)
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
$\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
stable ordinals
The ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$
Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
the Bachmann-Howard ordinal
the large Veblen ordinal
the small Veblen ordinal
the Feferman-Schütte ordinal $\Gamma_0$
$\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
the omega one of chess, $\omega_1^{\rm chess}$
indecomposable ordinal
the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
Hilbert's hotel and other toys in the playroom
$\omega$, the smallest infinity
down to the parlour, where large finite numbers dream
|
Little explorations with HP calculators (no Prime)
03-23-2017, 01:23 PM (This post was last modified: 03-23-2017 01:23 PM by pier4r.)
Post: #21
RE: Little explorations with the HP calculators
(03-23-2017 12:19 PM)Joe Horn Wrote: I see no bug here. Variables which are assigned values should never be used where formal variables are required. Managing them is up to the user.
OK (otherwise a variable could not be fed into a function from a program), but then how come, when I set the flags that let the function return reals, the variable is purged? The behavior could be more consistent.
Nothing bad, just that quirks like this, when not easy to spot, may lead to other solutions (see the advice of John Keit that I followed)
Wikis are great, Contribute :)
03-24-2017, 01:45 PM (This post was last modified: 03-24-2017 03:23 PM by pier4r.)
Post: #22
RE: Little explorations with the HP calculators
Quote:Brilliant.org
Is there a way to solve this without using a wrapping program? (hp 50g)
I'm trying around some functions (e.g: MSLV) with no luck, so I post this while I dig more on the manual, and on search engines focused on "site:hpmuseum.org" or "comp.sys.hp48" (the official hp forums are too chaotic, so I won't search there, although they store great contributions as well).
edit: I don't mind inserting manually new starting values, I mean that there is a function to find at least one solution, then the user can find the others changing the starting values.
Edit. It seems that the numeric solver for one equation can do it: one has to set values to the variables and then press solve, even if a variable already has a value (it was not so obvious from the manual; I thought that variables with given values could not change them). The point is that one variable will change while the others stay constant. In this way one can find all the solutions.
03-24-2017, 03:23 PM (This post was last modified: 03-24-2017 03:24 PM by pier4r.)
Post: #23
RE: Little explorations with the HP calculators
Quote:Same site as before.
This can be solved with multiple applications of SOLVEVX (hp 50g) on parts of the equation, with proper observations. So it is not that difficult; I just found it nice and wanted to share.
03-24-2017, 10:04 PM (This post was last modified: 03-24-2017 10:05 PM by pier4r.)
Post: #24
RE: Little explorations with the HP calculators
How do you solve this using a calculator to compute the number?
I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as a helper.
I also needed to prove to myself that the center on the circle is on a particular location to proceed to build the final equation.
03-24-2017, 10:54 PM (This post was last modified: 03-24-2017 11:07 PM by Dieter.)
Post: #25
RE: Little explorations with the HP calculators
I think a calculator is the very last thing required here.
(03-24-2017 10:04 PM)pier4r Wrote: I solved it translating it in a formula after a bit of thinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as helper.
Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
So 6√2 = 2√2 + r + r√2. Which directly leads to r = 4 / (1 + 1/√2) = 2,343...
Dieter
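Dieter's closed form can also be double-checked numerically; this is a quick sketch in Python rather than on the 50g:

```python
from math import sqrt

# Check: 6*sqrt(2) = 2*sqrt(2) + r + r*sqrt(2)  with  r = 4 / (1 + 1/sqrt(2))
r = 4 / (1 + 1 / sqrt(2))
assert abs(2 * sqrt(2) + r + r * sqrt(2) - 6 * sqrt(2)) < 1e-12
print(r)  # ~2.3431
```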
03-25-2017, 08:17 AM (This post was last modified: 03-25-2017 08:19 AM by pier4r.)
Post: #26
RE: Little explorations with the HP calculators
As always, first comes the mental work to create the formula or the model, but to compute the final number one needs a calculator most of the time.
Quote:Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
I used the numeric solver because, instead of grouping r on the left, I just used the formula from one step before (the one without grouping) to find the value. Anyway, one cannot just use the diagonal merely because the picture is well drawn: one has to prove that O is on the diagonal (nothing difficult, but required), otherwise that step is taken for granted.
03-27-2017, 12:14 PM
Post: #27
RE: Little explorations with the HP calculators
Quote:Brilliant.org
This one defeated me for the moment. My rusty memory of mathematical relations did not help me. In the end, having the hp50g, I tried to use some visual observations to write down the cartesian coordinates of the points defining the inner square, or to observe the lengths of the sides: if the side of the inner square is 2r, then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This, plus the knowledge that 4 times the triangle area plus the inner square area adds up to 1. Still, those were not enough for a solution (with or without the hp50g); I ended up with too ugly/tedious formulae.
03-27-2017, 12:54 PM (This post was last modified: 03-27-2017 03:42 PM by Thomas Okken.)
Post: #28
RE: Little explorations with the HP calculators
Consider a half-unit circle jammed into the corner of the first quadrant (so its center is at (0.5, 0.5)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The tangent on the circle where it meets that radius will intersect the X axis at 1 + tan(phi), and the Y axis at 1 + cot(phi) or 1 + 1 / tan(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 1 (or X = Y - 1). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the diameter of the circle is 1 / sqrt(X^2 + Y^2).
EDIT: No, I screwed up. The intersections at the axes are not at 1 + tan(phi), etc., that relationship is not quite that simple. Back to the drawing board!
Second attempt:
Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). The tangent on the circle at that point will have a slope of -1 / tan(phi), and so it will intersect the X axis at Px + Py * tan(phi), or (1 + sin(phi)) * tan(phi) + 1 + cos(phi), and it will intersect the Y axis at (1 + cos(phi)) / tan(phi) + 1 + sin(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 2 (or X = Y - 2). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the radius of the circle is 1 / sqrt(X^2 + Y^2).
Because of symmetry, sweeping the angles from 0 to pi/2 is actually not necessary; you can restrict yourself to 0 through pi/4 and the case that X = Y - 2.
03-27-2017, 02:27 PM (This post was last modified: 03-27-2017 02:28 PM by pier4r.)
Post: #29
RE: Little explorations with the HP calculators
(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
I'm a bit blocked.
http://i.imgur.com/IW4QIeU.jpg
Would be possible to add a quick sketch?
03-27-2017, 03:44 PM (This post was last modified: 03-27-2017 03:44 PM by Thomas Okken.)
Post: #30
RE: Little explorations with the HP calculators
(03-27-2017 02:27 PM)pier4r Wrote:(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
OK; I attached a sketch to my previous post.
03-27-2017, 03:55 PM
Post: #31
RE: Little explorations with the HP calculators
Thanks, and an interesting approach. On brilliant.org there were dubious solutions (that did not prove their assumptions) and just one really cool one, making use of a known relationship for circles inscribed in right triangles.
03-27-2017, 05:29 PM (This post was last modified: 03-27-2017 05:31 PM by pier4r.)
Post: #32
RE: Little explorations with the HP calculators
Quote:Brilliant.org
For this I wrote a quick program, remembering that the sample mean stabilizes after enough iterations (the law of large numbers, though I should check the exact statement).
Code:
But I'm not sure about the correctness of the approach. I'm pretty sure there is a way to compute this with an integral and then a closed form too.
Anyway this is the result at the moment:
03-27-2017, 06:01 PM (This post was last modified: 03-27-2017 07:33 PM by Dieter.)
Post: #33
RE: Little explorations with the HP calculators
(03-27-2017 12:14 PM)pier4r Wrote: ...so if the side of the inner square is 2r so the sides of the triangle are "s+2r" and "s" from which one can say that "s^2+(s+2r)^2=1" . This plus the knowledge that 4 times the triangles plus the inner square add up to 1 as area. Still, those were not enough for a solution
Right, in the end you realize that both formulas are the same. ;-)
The second constraint for s and r could be the formula of a circle inscribed in a triangle. This leads to two equations in two variables s and r. Or with d = 2r you'll end up with something like this:
(d² + d)/2 + (sqrt((d² + d)/2) + d)² = 1
I did not try an analytic solution, but using a numeric solver returns d = 2r = (√3–1)/2 = 0,36603 and s = 1/2 = 0,5.
Edit: finally this seems to be the correct solution. ;-)
Dieter
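Dieter's numeric result can likewise be verified directly; in this sketch, s is recovered from d via the Pythagorean constraint:

```python
from math import sqrt

d = (sqrt(3) - 1) / 2           # Dieter's value for 2r
s = sqrt((d**2 + d) / 2)        # side of the triangle, comes out to 0.5

# the combined constraint: (d^2 + d)/2 + (sqrt((d^2 + d)/2) + d)^2 = 1
assert abs((d**2 + d) / 2 + (s + d)**2 - 1) < 1e-12
print(d, s)  # ~0.36603  0.5
```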
03-27-2017, 06:14 PM (This post was last modified: 03-27-2017 06:15 PM by pier4r.)
Post: #34
RE: Little explorations with the HP calculators
(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
How? One should come from Pythagoras' theorem, a^2+b^2=c^2 (where I use the two sides of the triangle to get the hypotenuse); the other is the composition of the area of the square, made up from 4 triangles and one inner square. To me they sound like different models for different measurements. Could you explain to me why those are the same?
Anyway to me it is great even the numerical solution (actually the one that I search with the hp50g) but I cannot tell you if it is right or not because I did not solve it by myself, other reviewers are needed.
Edit, anyway I remember that a discussed solution mentioned the relationship of a circle inscribed in a triangle, so I guess your direction is right.
03-27-2017, 06:30 PM
Post: #35
RE: Little explorations with the HP calculators
(03-27-2017 06:14 PM)pier4r Wrote:(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
Just do the math. On the one hand, \( s^2 + (s + 2r)^2 = 1\) from Pythagoras' Theorem as you observed. And your other observation is that
\[ 4 \cdot \underbrace{\frac{1}{2} \cdot s \cdot (s+2r)}_{\text{area of }\Delta}
+ \underbrace{(2r)^2}_{\text{area of } \Box} = 1 \]
Simplify the left hand side:
\[
\begin{align}
4 \cdot \frac{1}{2} \cdot s \cdot (s+2r) + (2r)^2 & =
2s^2+4rs + 4r^2 \\
& = s^2 + s^2 + 4rs + 4r^2 \\
& = s^2 + (s+2r)^2
\end{align} \]
Hence, both formulas are the same.
Graph 3D | QPI | SolveSys
03-27-2017, 06:57 PM (This post was last modified: 03-27-2017 06:57 PM by pier4r.)
Post: #36
RE: Little explorations with the HP calculators
Thanks. I had not worked through the formula; I was more stuck (and somewhat still am) on the fact that they should represent different objects/results.
But then again, the square built on the side is the square itself. So now I see it. I wanted to see it in terms of represented objects, not only formulae.
03-27-2017, 07:07 PM
Post: #37
RE: Little explorations with the HP calculators
(03-27-2017 06:57 PM)pier4r Wrote:
Are you familiar with the geometric proofs of Pythagoras' Theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
A few geometric proofs: http://www.cut-the-knot.org/pythagoras/
03-27-2017, 07:22 PM
Post: #38
RE: Little explorations with the HP calculators
(03-27-2017 07:07 PM)Han Wrote: Are you familiar with the geometric proofs of Pythagoras' Theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
Maybe my choice of words was not the best. I wanted to convey the fact that if I try to model two different events (or objects in this case) and I get the same formula, for me it is not immediate to say "oh, ok, then they are the same object", I have to, how can I say, "see it". So in the case of the problem, I saw it when I realized that the 1^2 is not only equal to the area because also the area is 1^2, it is exactly the area because it models the area of the square itself. (I was visually building 1^2 outside the square, like a duplicate)
Anyway the link you shared is great. I looked briefly and I can say:
- long ago I saw the proof #1
- in school I saw the proof #9
- oh look, the proof #34 would have helped, as someone mentioned
- how many!
Great!
03-27-2017, 07:24 PM (This post was last modified: 03-27-2017 07:28 PM by Joe Horn.)
Post: #39
RE: Little explorations with the HP calculators
(03-27-2017 05:29 PM)pier4r Wrote:Quote:Brilliant.org
After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result:
10 randomize
20 T=0:C=0
30 repeat
40 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
50 until C=99999994
60 repeat
70 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
80 print C;T/C
90 until C=99999999
run
99999995 0.5214158234249566646569152059
99999996 0.5214158242970253667174680247
99999997 0.5214158240318481570747604814
99999998 0.5214158247892039896051570164
99999999 0.5214158253601312510245695897
OK
run
99999995 0.5213642776110289008920452545
99999996 0.5213642752079475043717958065
99999997 0.52136427197858201293861314
99999998 0.5213642744828552963477424429
99999999 0.5213642759132547792130043215
OK
run
99999995 0.5213770659191193073147616413
99999996 0.5213770610000764506616015052
99999997 0.5213770617149058467216528505
99999998 0.5213770589414874167694264508
99999999 0.5213770570854305903944611055
OK
So it SEEMS to be zeroing on something close to -LOG(LOG(2)), but I give up.
<0|ɸ|0>
-Joe-
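For comparison, the quantity being estimated here (the mean distance between two uniform random points in the unit square) has a known closed form, so the Monte Carlo runs above can be checked against it:

```python
from math import sqrt, log

# Known closed form for the mean distance between two uniform
# random points in the unit square.
exact = (2 + sqrt(2) + 5 * log(1 + sqrt(2))) / 15
print(exact)  # ~0.5214054
```

The simulated values 0.52141..., 0.52136..., 0.52137... straddle this constant, which is consistent with Monte Carlo noise at 10^8 samples.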
03-27-2017, 07:35 PM (This post was last modified: 03-27-2017 07:37 PM by pier4r.)
Post: #40
RE: Little explorations with the HP calculators
(03-27-2017 07:24 PM)Joe Horn Wrote: 99999999 0.5213770570854305903944611055
Interestingly, your number is quite different from mine. OK, you ran a couple more iterations, but my average was pretty stable in the first 3 digits; I wonder why the discrepancy.
Moreover, if you round to the 4th decimal place you always get 0.5214 (rounding up the last kept digit when the first excluded digit is 5 or higher).
|
Statement:
Every permutation $\sigma$ of a finite set is a product of disjoint cycles.
Proof:
Let $B_{1},B_{2},...,B_{r}$ be the orbits of $\sigma$, and let $\mu_{i}$ be the cycle defined by:
$\mu_{i}(x) = \begin{cases}\sigma(x) & \text{for } x \in B_{i}\\ x & \text{otherwise}\end{cases}$
Clearly $\sigma = \mu_{1}\mu_{2}...\mu_{r}$. Since the equivalence-class orbits $B_{1},B_{2},...B_{r}$, being distinct equivalence classes, are disjoint, the cycles $\mu_{1},\mu_{2},...,\mu_{r}$ are also disjoint.
|
You seem to have had the right idea of fixing the scale by arbitrarily choosing the value of one coefficient, and then solving for the rest. Apparently, you just got stuck at some point, presumably either because you couldn't solve for $b$ just with simple substitutions, or because your initial choice of $a = 1$ gave you fractional values for the other coefficients that you didn't know how to deal with.
If you plug $a = 1$ into your original equations, you'll immediately get $c = 1$, and by substituting that into the other equations, you're left with:
\begin{aligned}b & = 2e, \\b & = 2+d, \text{ and} \\3b & = 6+d+e.\end{aligned}
One way to solve that remaining set of equations is to first subtract the second one from the last one, to get: $$3b - b = (6+d+e)-(2+d) \implies 2b = 4+e,$$ and then substitute in the first one to get: $$2(2e) = 4+e \implies 4e = 4 + e.$$ Subtracting $e$ from both sides then gives you $3e = 4$, which you can divide by 3 to get $e = \frac43$.
Once you have a numerical value for $e$, even if it's a fraction, you can then substitute that back into the first equation above to get $b = 2\cdot\frac43 = \frac83$, which you can then plug into the second one to get $\frac83 = 2 + d$, and then subtract 2 from both sides to get $d = \frac83-2 = \frac83 - \frac63 = \frac23$.
Now we have a solution $(a=1, b=\frac83, c=1, d=\frac23, e=\frac43)$, but it still contains fractional values that we'd like to get rid of. The way to fix that, however, is simple: just multiply all the values by their least common denominator, 3, to get $(a=3, b=8, c=3, d=2, e=4)$.
The reason that works is because your original system of linear equations was, by construction, homogeneous, i.e. every term in every equation contained exactly one of the coefficients $a$, $b$, $c$, $d$ and $e$. Thus, multiplying all the coefficients by the same scaling factor multiplies every term in the equations by the same amount, and thus turns any valid solution into another equally valid one.
Such rescaling does, in fact, have a reasonable physical interpretation: if we have $n$ instances of the reaction going on at the same time, then the combined reaction will obviously consume $n$ times as many of each reactant and produce $n$ times as many of each product. Allowing $n$ to take on fractional values does require a bit more of a mathematical leap of imagination, but we may e.g. interpret the fractionally scaled reaction as describing just a fraction of the original equation — which may or may not be chemically meaningful, depending on whether all the scaled coefficients work out to whole numbers of molecules, but which nonetheless correctly describes the proportions of reactants consumed and products yielded.
Of course, you could've also arrived at the appropriately scaled solution directly, if you had happened to start with the initial guess $a = 3$ instead of $a = 1$.
Indeed, the fact that you didn't need to make any further arbitrary choices during the solution, after this initial choice of scale, proves that all the possible solutions to this homogeneous system of linear equations are simply scalar multiples of each other. On the other hand, if you'd added a $+\ce{f NO2}$ term to the products of your reaction (as MaxW suggests in their answer, for extra realism), then you would've had to make an arbitrary choice about the proportion of $\ce{NO}$ and $\ce{NO2}$ products at some point during the solution (or, alternatively, leave it unspecified, leaving you with some non-numeric factors in your result), reflecting the fact that the solution space of this extended system of equations is multidimensional, i.e. has more than one degree of freedom, and that this extended reaction can thus yield varying proportions of its products depending on the conditions under which it occurs.
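The substitution steps above can be scripted. This sketch simply replays them with exact fractions and then clears denominators; the variable names follow the answer:

```python
from fractions import Fraction
from math import lcm

a, c = Fraction(1), Fraction(1)   # the arbitrary choice of scale
e = Fraction(4, 3)                # from 4e = 4 + e  =>  3e = 4
b = 2 * e                         # b = 2e
d = b - 2                         # b = 2 + d

assert 3 * b == 6 + d + e         # the remaining equation is satisfied

# multiply by the least common denominator to get whole numbers
coeffs = [a, b, c, d, e]
scale = lcm(*(x.denominator for x in coeffs))
print([int(x * scale) for x in coeffs])  # [3, 8, 3, 2, 4]
```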
|
Let $f(x)$ be a function in a variable $x$. In differential calculus, the differentiation of the function $f(x)$ with respect to $x$ is written in the following mathematical form.
$\dfrac{d}{dx}{\, f(x)}$
For deriving the derivative of a constant multiple function with respect to a variable, we must know the fundamental definition of the differentiation of a function in limit form.
According to the definition of the derivative, the derivative of a function $f(x)$ with respect to $x$ can be written in limit form.
$\dfrac{d}{dx}{\, f(x)}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to\, 0}{\normalsize \dfrac{f(x+\Delta x)-f(x)}{\Delta x}}$
If we take the change in variable $x$ is equal to $h$, which means $\Delta x = h$, then the equation can be expressed in terms of $h$ as follows.
$\implies$ $\dfrac{d}{dx}{\, f(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$
Let $k$ be a constant, then the product of constant $k$ and the function $f(x)$ is called the constant multiple function, which is written in product form as $k.f(x)$ mathematically.
Take, the constant multiple function is denoted by $g(x)$. Therefore, $g(x) = k.f(x)$. Now, write the differentiation of $g(x)$ with respect to $x$ in limit form as per the definition of the derivative.
$\dfrac{d}{dx}{\, g(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{g(x+h)-g(x)}{h}}$
In this case, $g(x) = k.f(x)$ then $g(x+h) = k.f(x+h)$. Now, substitute them in the above equation.
$\implies$ $\dfrac{d}{dx}{\, g(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{k.f(x+h)-k.f(x)}{h}}$
$\implies$ $\dfrac{d}{dx}{\, \Big(k.f(x)\Big)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{k.f(x+h)-k.f(x)}{h}}$
$k$ is a common factor in the expression of the numerator of the function and it can be taken out as a common factor from the terms as per the factorization by taking out the common factors.
$\implies$ $\dfrac{d}{dx}{\, \Big(k.f(x)\Big)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{k\Big(f(x+h)-f(x)\Big)}{h}}$
Now, factorize the function for separating the constant.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[k \times \Bigg(\dfrac{f(x+h)-f(x)}{h}\Bigg) \Bigg]}$
It can be further simplified by the product rule of limits.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize (k)}$ $\times$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$
Now, evaluate the limit of the constant by the direct substitution method.
$=\,\,\,$ $k \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$
According to the fundamental definition of the derivative, the limit of the second function is the differentiation of the function $f(x)$.
$=\,\,\,$ $k \times \dfrac{d}{dx}{\, f(x)}$
$\,\,\, \therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, \Big(k.f(x)\Big)}$ $\,=\,$ $k.\dfrac{d}{dx}{\, f(x)}$
Therefore, it is proved that the derivative of a constant multiple function with respect to a variable is equal to the product of the constant and the derivative of the function.
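As a quick illustration of the rule with $k = 5$ and $f(x) = x^2$:

```latex
\frac{d}{dx}\Big(5x^2\Big) \,=\, 5 \times \frac{d}{dx}\, x^2 \,=\, 5 \times 2x \,=\, 10x
```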
|
Diagonalization
Diagonalization is a process that helps to directly compute values of hierarchies without having to work up from the bottom. Each ordinal that is not a successor has a fundamental sequence that helps. When we say some ordinal diagonalizes to some finite number, we write: some ordinal[number]. You can replace any ordinal with the (whatever number you are diagonalizing to)th element of its fundamental sequence.
Sequences
The sequence for \(\omega\) is \(\lbrace 1,2,\cdots \rbrace\).
The sequence for \(\omega2\) is \(\lbrace \omega+1,\omega+2,\cdots \rbrace\).
The sequence for \(\omega3\) is \(\lbrace \omega2+1,\omega2+2,\cdots \rbrace\).
\(\cdots\cdots\)
The sequence for \(\omega^2\) is \(\lbrace \omega,\omega2,\omega3,\cdots \rbrace\).
From now on, we just replace a loose \(\omega\) with the number we are diagonalizing to, and replace something like \(\omega3\) with \(\omega2+\omega\).
The sequence for \(\omega^{\omega}\) is \(\lbrace \omega,\omega^2,\omega^3,\cdots \rbrace\).
The sequence for \(\omega^{\omega^{\omega}}\) is \(\lbrace \omega^{\omega},\omega^{\omega^2},\omega^{\omega^3},\cdots \rbrace\)
\(\cdots\)
The sequence for \(\varepsilon_0\) is \(\lbrace 1,\omega,\omega^{\omega},\omega^{\omega^{\omega}},\omega^{\omega^{\omega^{\omega}}},\cdots \rbrace\)
The sequence for \(\varepsilon_1\) is \(\lbrace \omega^{\varepsilon_0+1},\omega^{\omega^{\varepsilon_0+1}},\omega^{\omega^{\omega^{\varepsilon_0+1}}},\cdots \rbrace\)
The sequence for \(\varepsilon_2\) is \(\lbrace \omega^{\varepsilon_1+1},\omega^{\omega^{\varepsilon_1+1}},\omega^{\omega^{\omega^{\varepsilon_1+1}}},\cdots \rbrace\)
\(\cdots\cdots\)
The sequence for \(\varepsilon_{\omega}\) is \(\lbrace \varepsilon_1,\varepsilon_2,\varepsilon_3,\cdots \rbrace\)
We can even replace a loose ordinal with the thing it is supposed to turn into when it stands alone.
The sequence for \(\varepsilon_{\varepsilon_0}\) is \(\lbrace \varepsilon_1,\varepsilon_{\omega},\varepsilon_{\omega^{\omega}},\cdots \rbrace\)
The sequence for \(\varepsilon_{\varepsilon_{\varepsilon_0}}\) is \(\lbrace \varepsilon_{\varepsilon_1},\varepsilon_{\varepsilon_{\omega}},\varepsilon_{\varepsilon_{\omega^{\omega}}},\cdots \rbrace\)
\(\cdots\)
The sequence for \(\zeta_0\) is \(\lbrace \varepsilon_0,\varepsilon_{\varepsilon_0},\varepsilon_{\varepsilon_{\varepsilon_0}},\varepsilon_{\varepsilon_{\varepsilon_{\varepsilon_0}}},\cdots \rbrace\)
The sequence for \(\eta_0\) is \(\lbrace \zeta_0,\zeta_{\zeta_0},\zeta_{\zeta_{\zeta_0}},\zeta_{\zeta_{\zeta_{\zeta_0}}},\cdots \rbrace\)
Now we introduce the Veblen function, which is defined as follows:
1. \(\varphi_0(0)=1\)
2. If \(\alpha\) is a successor, then \(\varphi_\alpha(0)[n]=\varphi_{\alpha-1}^n(0)\) and \(\varphi_\alpha(a)[n]=\varphi_{\alpha-1}^n(\varphi_\alpha(a-1)+1)\).
3. If \(\beta\) is a limit ordinal, then \(\varphi_\beta(0)[n]=\varphi_{\beta[n]}(0)\) and \(\varphi_\beta(a)[n]=\varphi_{\beta[n]}(\varphi_\beta(a-1)+1)\).
The sequence for \(\varphi_4(0)\) is \(\lbrace \eta_0,\eta_{\eta_0},\eta_{\eta_{\eta_0}},\eta_{\eta_{\eta_{\eta_0}}},\cdots \rbrace\)
The sequence for \(\varphi_\omega(0)\) is \(\lbrace \varepsilon_0,\zeta_0,\eta_0,\varphi_4(0),\cdots \rbrace\)
\(\cdots\cdots\)
The sequence for \(\Gamma_0=\varphi(1,0,0)\) is \(\lbrace 1,\varphi_1(0),\varphi_{\varphi_1(0)}(0),\cdots \rbrace\)
Now we encounter large countable ordinals that can only be expressed with the Extended Veblen function.
The sequence for \(\Gamma_1=\varphi(1,0,1)\) is \(\lbrace \varphi_{\Gamma_0+1}(0),\varphi_{\varphi_{\Gamma_0+1}(0)}(0),\cdots \rbrace\)
\(\cdots\cdots\)
The sequence for \(\Gamma_{\Gamma_{\Gamma_\ddots}}=\varphi(1,1,0)\) is \(\lbrace stuck\)
Well, that's about it. The Wainer hierarchy only lasts up to the limit of the \(\Gamma\) function.
ψ function sequences
In Madore's ψ function, there is a thing called \(\Omega\).
The sequence for \(\psi(\Omega^\omega+\Omega^3\omega2+\Omega)\) is
\(\{\psi(0),\psi(\Omega^\omega+\Omega^3\omega2+\psi(0)),\psi(\Omega^\omega+\Omega^3\omega2+\psi(\Omega^\omega+\Omega^3\omega2+\psi(0))),\psi(\Omega^\omega+\Omega^3\omega2+\psi(\Omega^\omega+\Omega^3\omega2+\psi(\Omega^\omega+\Omega^3\omega2+\psi(0)))),\cdots\}\)
Well, it's like this.
|
Can you explain to me, please, what the transpose of a matrix means? I know the definition in the context of matrix theory and its generalization to adjoint operators (the transpose of a linear map). What is the fundamental idea behind the transpose? Why is it introduced and considered in today's mathematics? Thank you.
I'll try to explain some of the History, which is unusual because it started at the highest level of abstraction and worked its way down.
The study of symmetric differential operators predates the systematic study of symmetric matrices. While studying differential operators such as $$ Lf = -\frac{d}{dx}p\frac{d}{dx}f + V f $$ on an interval $[a,b]$ with two endpoint conditions such as (a) $f(a)=f(b)=0$ or (b) $f(a)=f(b)$, $f'(a)=f'(b)$, it was found that $$ \int_{a}^{b} (Lf)g\,dx = \int_{a}^{b}f(Lg)\,dx, $$ provided $f$, $g$ satisfy the endpoint conditions. And it was this property that was isolated as the reason for the integral 'orthogonality' of eigenfunction solutions of $Lf=\lambda f$. For example, the classical Fourier series functions are eigenfunctions of $L=-\frac{d^{2}}{dx^{2}}$ with periodic endpoint conditions on $[0,2\pi]$: $$ 1,\cos x,\sin x,\cos 2x,\sin 2x,\cos 3x,\sin 3x,\cdots\;. $$ This is an infinite-dimensional space of functions with infinitely many eigenvalues $0,1^{2}, 2^{2}, 3^{2},\cdots$. If you take any two of these which are different--call them $f$ and $g$--then $\int_{0}^{2\pi}fg\,dx=0$. And this happens for a broad class of eigenfunction solutions associated with such an operator $L$, including Bessel functions, Legendre Polynomials, Hermite Polynomials, etc.
The study of eigenfunction solutions came out of Fourier's separation of variables technique from ~1805. The trick they found to finally explain the general orthogonality of solutions of these ODEs was this: If $Lf=\lambda f$ and $Lg=\mu g$ with $\mu \ne \lambda$, then $$ (\lambda-\mu)(f,g) = (Lf,g)-(f,Lg) = 0. $$ This abstract pairing eventually became a general inner product $(\cdot,\cdot)$ as shown above, and symmetric operators became a focus because of the properties similar to Fourier series expansion in orthogonal functions $1,\cos x,\sin x,\cos 2x,\sin 2x,\cdots$ on $[0,2\pi]$.
In the context of matrices, a matrix $A$ which is equal to its transpose is the natural symmetric operator, because $$ y^{T}(Ax) = (A^{T}y)x. $$ Written as an inner product $(x,y) = y^{T}x$, this becomes $(Ax,y)=(x,A^{T}y)$. So symmetry in this context is $A=A^{T}$, which is the same as symmetry across the diagonal. Like a symmetric differential operator, a symmetric matrix also has a basis of eigenfunctions which are mutually orthogonal for different eigenvalues; this is easily shown using the same trick used to show orthogonality for eigenfunction solutions of symmetric ODEs coming from separation of variables--same notation, just different objects.
One of the reasons this seems so foreign at the level of matrices is that it's coming from a much more abstract infinite-dimensional setting where symmetry arises naturally in the study of differential equations of Physics and Engineering. But because of this natural connection, there are many applications of symmetric matrix theory as well.
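The defining identity $(Ax,y)=(x,A^{T}y)$ and the orthogonality of the eigenvector basis of a symmetric matrix are easy to check numerically; a sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# (Ax, y) == (x, A^T y) with the standard inner product
assert np.isclose((A @ x) @ y, x @ (A.T @ y))

# a symmetric matrix has an orthonormal basis of eigenvectors
S = A + A.T
w, V = np.linalg.eigh(S)
assert np.allclose(V.T @ V, np.eye(3))
```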
|
I have a fixed number $a$. Now using $a$ I need to construct a number $b$ such that $0.99\leq b\leq 1$. Is there any mathematical formulation of such a construction that looks random? The generation of such a number should be deterministic. Can somebody hint at any algorithms?
For example $\,b=0.99 + \dfrac{\sin^2(\lambda a)}{100}\,$, with $\,\lambda \in \mathbb{R}\,$ adjusted for the range of $\,a\,$.
The provided value, $a$, is your seed value.
The algorithm you want is a pseudo-random number generator (PRNG). Note that pseudo-random number generators are explicitly deterministic; they merely appear random.
Depending on how random you need it to look, you may want to use a cryptographically secure PRNG. Such an algorithm, if truly cryptographically secure, should pretty much look random as far as any human-like observer could tell (though its operation may be plain and obvious to super-human observers, e.g. advanced aliens or deities; can't do much there).
Most PRNGs are designed to produce a bit array (a bunch of $\left\{0,~1\right\}$ values) given a seed. If $a$ is real $\left(a\in\mathbb{R}\right)$, then that bit array may contain arbitrarily many bits. Typically, the bit array is defined such that each bit is pseudo-unrelated to the others, so that they can be assumed to be mutually independent.
Then, it's just a matter of mapping the bit-array to numeric values. You might do this by interpreting the bit-array as a serialization of a binary real $\in{\left[0.99,1\right]}$.
What about $$0.99+\frac1{100(1+a^2)}?$$
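For completeness, here is one hypothetical way to implement the hash-based suggestion above in Python; the function name and the choice of SHA-256 as the underlying generator are my own assumptions, not part of any answer:

```python
import hashlib
import struct

def pseudo_random_b(a: float) -> float:
    """Deterministically map a seed value a to b in [0.99, 1].

    The seed is hashed with SHA-256 (so the output *looks* random but is
    fully reproducible), the first 8 bytes of the digest are read as an
    unsigned 64-bit integer, and that integer is scaled into [0.99, 1].
    """
    digest = hashlib.sha256(struct.pack(">d", a)).digest()
    n = int.from_bytes(digest[:8], "big")   # uniform in [0, 2^64)
    u = n / float(2**64)                    # uniform in [0, 1)
    return 0.99 + 0.01 * u

# The same seed always gives the same b, and b stays in range.
b = pseudo_random_b(3.14159)
print(b, 0.99 <= b <= 1.0)
```

Unlike the closed-form expressions above, nearby seeds produce wildly different outputs here, which may or may not be what "looks random" requires.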
|
Exam-Style Questions on Sequences
Problems on sequences adapted from questions set in previous Mathematics exams.
1. GCSE Higher
Here is a picture of four models. Some of the cubes are hidden behind other cubes.
Model one consists of one cube. Model two consists of four cubes and so on.
(a) How many cubes are in the third model?
(b) How many cubes are in the fourth model?
(c) If a fifth model were built, how many cubes would it take?
(d) Find an expression for the number of cubes used in the \(n\)th model.
(e) Sketch a side view, front view and plan view of the fourth model.
2. GCSE Higher
(a) Find the \(n\)th term of the sequence 7, 13, 19, 25,...
(b) In a sequence of four numbers, the difference between each number is 9.
The sum of the four numbers is 2. What are the numbers in the sequence? You must show all your working.
3. IB Studies
Consider the number sequence where \(u_1=500, u_2=519, u_3=538\) and \(u_4=557\) etc.
(a) Find the value of \(u_{30}\)
(b) Find the sum of the first 12 terms of the sequence, \(\sum_{n=1}^{12} u_n \)
Another number sequence is defined where \(w_1=4, w_2=8, w_3=16\) and \(w_4=32\) etc.
(c) Find the exact value of \(w_{10}\).
(d) Find the sum of the first 9 terms of this sequence.
\(k\) is the smallest value of \(n\) for which \(w_n\) is greater than \(u_n\).
(e) Calculate the value of \(k\).
4. GCSE Higher
Find an expression, in terms of \(n\), for the \(n\)th term of the sequence that has the following first five terms:$$6 \qquad 13 \qquad 23 \qquad 36 \qquad 52 $$
5. IGCSE Extended
The diagrams above show a growing fractal of triangles. The sides of the largest equilateral triangle in each diagram are of length 1 metre.
In the second diagram there are four triangles each with sides of length \(\frac{1}{2}\) metre.
In the third diagram there are 16 triangles each with sides of length \(\frac{1}{4}\) metre.
(a) Complete this table for more diagrams.
Diagram: 1, 2, 3, 4, 5, 6, \(n\)
Length of side: \(1\), \(\frac{1}{2}\), \(\frac{1}{4}\), ...
Power of 2: \(2^0\), \(2^{-1}\), \(2^{-2}\), ...
(b) Complete this table for the number of the smallest triangles in diagrams 4, 5 and 6.
Diagram: 1, 2, 3, 4, 5, 6, \(n\)
Number of smallest triangles: 1, 4, 16, ...
Power of 2: \(2^0\), \(2^2\), \(2^4\), ...
(c) Calculate the number of the smallest triangles in the diagram where the smallest triangles have sides of length \(\frac{1}{256}\) metre.
6. GCSE Higher
The diagrams below show a sequence of patterns made from red and yellow tiles.
(a) Find an ex
The total number of red and yellow tiles in each pattern is always the sum of the squares of two consecutive whole numbers.
(b) Find an ex
(c) Is there a pattern for which the total number of tiles is 303? Give a reason for your answer.
(d) Explain why the total number of tiles in any pattern of this sequence is always an odd number.
7. IB Standard
The first three terms and the last term of an arithmetic sequence are \(7,13,19,...,1357\)
(a) Find the common difference.
(b) Find the number of terms in the sequence.
(c) What is the sum of the sequence?
8. IB Standard
An arithmetic sequence is given by 6, 13, 20, …
(a) Write down the value of the common difference, d.
(b) Find \(u_{100}\);
(c) Find \(S_{100}\);
(d) Given that \(u_n=1434\) , find the value of n.
9. IB Standard
In an arithmetic sequence, the fifth term is 44 and the ninth term is 80.
(a) Find the common difference.
(b) Find the first term.
(c) Find the sum of the first 50 terms of the sequence.
10. IB Standard
A square is drawn with sides of length 32 cm. The midpoints of the sides of this square are joined to form a new square and four red triangles. The process is repeated to produce yellow triangles and then again to produce blue triangles.
The length of the equal sides of the red triangles are denoted by \(x_1\) and their areas are each \(A_1\).
The length of the equal sides of the yellow triangles are denoted by \(x_2\) and their areas are each \(A_2\). The length of the equal sides of the blue triangles are denoted by \(x_3\) and their areas are each \(A_3\).
(a) The following table gives the values of \(x_n\) and \(A_n\), for \(1\le n\le3\). Copy and complete the table.
\(n\): 1, 2, 3
\(x_n\): 16, , 
\(A_n\): 128, , 
(b) The process of drawing smaller and smaller squares inside each new square is repeated. Find \(A_7\)
(c) Consider an initial square of side length \(k\) cm. The process described above is repeated indefinitely. The total area of one triangle of each colour is \(k\) cm\(^2\). Find the value of \(k\).
11. IB Standard
The first term of an infinite geometric sequence is 10. The sum of the infinite sequence is 500.
(a) Find the common ratio.
(b) Find the sum of the first 9 terms.
(c) Find the least value of \(n\) for which \(S_n > 250\).
12. IB Studies
A Grecian amphitheatre was built in the form of a horseshoe and has 22 rows.
The number of seats in each row increase by a fixed amount, \(d\), compared to the number of seats in the previous row. The number of seats in the fifth row, \(u_5\), is 58, and the number of seats in the ninth row, \(u_{9}\), is 86. \(u_1\) represents the number of seats in the first row.
(a) Write an equation for \(u_5\) in terms of \(d\) and \(u_1\).
(b) Write an equation for \(u_{9}\) in terms of \(d\) and \(u_1\).
(c) Write down the value of \(d\);
(d) Write down the value of \(u_1\).
(e) Find the total number of seats in the amphitheatre.
Some time later, a second level was added to increase the amphitheatre’s capacity by another 2590 seats. Each row has five more seats than the previous row. The first row on this level has 82 seats.
(f) Find the number of rows on the second level of the amphitheatre.
13. IB Studies
Chris checks his Twitter account and notices that he received a tweet at 8:00am. At 8:05am he forwards the tweet to four people. Five minutes later, those four people each forward the tweet to four new people. Assume this pattern continues and each time the tweet is sent to people who have not received it before.
The number of new people who receive the tweet forms a geometric sequence:$$1 , 4 , …$$
(a) Write down the next two terms of this geometric sequence.
(b) Write down the common ratio of this geometric sequence.
(c) Calculate the number of people who will receive the tweet at 8:40am.
(d) Calculate the total number of people who will have received the tweet by 8:40am.
(e) Calculate the exact time at which a total of 5 592 405 people will have received the tweet.
14. IB Standard
The sums of the terms of a sequence are given as follows where \(k\in \mathbb{Z}\):$$S_1=k+1, S_2=4k+3, S_3=9k+7, S_4=16k+15$$
(a) Find the first four terms of the sequence.
(b) Find a general expression for the \(n\)th term of the sequence, \(u_n\).
|
First off, the fact that the board actually blocks sunlight from entering the house may itself have cooled the house down (the same effect as a solar screen). Since this question is about how the pressure and temperature change after installing the Eco-Cooler air conditioner (the bottle board alone), I will give the following analysis.
From the Gay-Lussac law, \begin{align}\frac{P_1}{T_1}=\frac{P_2}{T_2},\end{align}the ratio between pressure and temperature is constant for a given, approximately fixed volume. When one installs the bottles in the windows, the shape of the bottles increases the air pressure before the air enters the room. This can be understood as follows: we assume the wind is entering an almost closed house, so that every cross section along the course of the bottle pipe is approximately in a force-balanced/equilibrium condition, $$F_a=S_aP_a\approx S_bP_b=F_b \tag{2}$$ for two arbitrary cross sections $S_a$ and $S_b$. Since the bottleneck area is the smallest, the pressure there is highest before the air enters the house. In this process, however, the temperature is hardly changed, since the air is in constant contact with the outdoor environment.
When the air blows into the room, the pressure immediately drops to normal, or perhaps even below-normal (depending on the actual room pressure), atmospheric pressure, as there is no longer a bottleneck-shaped pipe limiting its volume. As a result of the equation given above, the temperature inside the room immediately goes down.
One important note regarding the other answers and the validity condition of equation (2) above: the other answers have focused on the opening of the chimney and other windows, but here I do not need to assume that condition; indeed, I think the other outlets of the house should be kept closed to prevent heat exchange through those openings.
Firstly, we should not focus on whether there is a chimney or outlets on the other side of the house to explain the temperature change due to the bottle board installation: the chimney or windows, if any, are there both before and after the bottle board is installed, so they cannot be the cause of the change. It is the installation of the bottle inlets that generates the temperature change.

Secondly, since the house is relatively large compared to the openings shown in the video, the air flow experiences resistance when it enters the house (an effect the other answers have not considered). In other words, the house acts approximately as a closed volume that resists the entering air and reduces its speed. Therefore, the air flowing through the bottle pipes is compressed at the bottleneck, and its speed entering the house is lower than if it entered a completely open space. This validates the condition of equation (2): any cross section of the air flow along the bottle path is approximately in a force-balanced/equilibrium condition.

Thirdly, a chimney may help let warm air out, and the houses shown in the video are made of wood and are indeed not air-tight; but in practice it is important to keep the other big windows of the house closed to prevent hot air from entering and raising the temperature again. The video shows the room temperature can be $5^\circ$ lower than outside; with other big windows open, the room temperature would easily rebalance back up. This is the same common requirement as when running an air conditioner in summer, and it makes point two above even more valid.
Other answers may have ignored this common knowledge: instead of analyzing how the bottle board helps, they argue that there must be openings letting air flow freely in and out of the house for the cooling process to be possible at all.
Feasibility and conditions for it to work: the video shows a $5^\circ$ temperature difference. We can assume the outside temperature is about $30^\circ\mathrm{C}$, i.e. $T_1=303\ \mathrm{K}$, and the inside temperature about $25^\circ\mathrm{C}$, i.e. $T_2=298\ \mathrm{K}$. The pressure raised at the bottleneck relative to the normal house pressure is then \begin{align}\eta=\frac{P_1}{P_2}=\frac{T_1}{T_2}\approx 1.017,\end{align}which is about a $1.7\%$ pressure increase. From equation (2), since the bottleneck cross section is much smaller than the intake area, the ideal pressure increase can be much more than $1.7\%$ when (2) holds exactly. Considering that (2) holds exactly only when the bottleneck is completely closed off from the house side, which is not quite true, and that the house constantly exchanges heat with the environment through non-ideal channels, a rough estimate shows the $5^\circ$ temperature decrease is possible. To make the Eco-Cooler work well, it is crucial that the house be well insulated and that all other windows/openings be kept closed, so that the constraint is well satisfied. If no air flows into the house through the bottles, the Eco-Cooler may not work via the pressure–temperature transition, but it may still help to some extent by blocking sunshine from entering the house.
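The numeric estimate above can be restated in a couple of lines (my own sketch, not part of the original answer): the Gay-Lussac relation $P_1/T_1 = P_2/T_2$ at fixed volume gives the pressure ratio needed for the observed temperature drop.

```python
# Required pressure ratio at the bottleneck for a 5-degree drop,
# using the Gay-Lussac relation P1/T1 = P2/T2 at fixed volume.
T_outside = 303.0  # K, about 30 C
T_inside = 298.0   # K, about 25 C

eta = T_outside / T_inside          # required pressure ratio P1/P2
increase_percent = (eta - 1) * 100  # relative pressure increase at the neck

print(round(eta, 3), round(increase_percent, 1))  # 1.017 1.7
```
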
A similar rule governs the case of evaporating water inside an open room: the volume of the water vapor is larger than that of the liquid water, and the chemical energy of the vapor differs from that of the liquid, so in the end the water vapor absorbs heat from the air. Hopefully this helps your understanding of the power of physical laws.
|
I was reviewing RSA by hand.
I picked 53 and 59 as my primes. For $e$ I picked 5. When I solved for $d$ using the extended Euclidean algorithm, I got 1, which obviously doesn't decrypt anything. I checked my answers online using calculators and got the same result. Did I pick $e$ or a prime wrong? How do I make sure I don't get 1 as $d$?
No, your values for $e$ and primes are fine (well, at least for a toy example); $e$ is relatively prime to both $p-1$ and $q-1$, and that's the only hard requirement (not counting the security related ones, of course).
I get $d=905$, as $5 \times 905 \equiv 1 \pmod{ \operatorname{lcm}((53-1),(59-1))}$; alternatively, you might get $d=2413$, if you do the computation $\bmod (53-1) \cdot (59-1)$.
I can only suspect that you're doing Extended Euclidean wrong; however without knowing exactly what you did, I really can't say exactly where you erred.
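As a sanity check of the values above (my own sketch, not part of the answer), the modular inverses can be computed directly in Python:

```python
import math

# Verify the private exponents for p = 53, q = 59, e = 5.
p, q, e = 53, 59, 5

lam = math.lcm(p - 1, q - 1)   # Carmichael: lcm(52, 58) = 1508 (Python 3.9+)
phi = (p - 1) * (q - 1)        # Euler:      52 * 58     = 3016

d_lam = pow(e, -1, lam)        # modular inverse, Python 3.8+
d_phi = pow(e, -1, phi)
print(d_lam, d_phi)            # 905 2413

assert (e * d_lam) % lam == 1
assert (e * d_phi) % phi == 1

# Round-trip a toy message to confirm the keys actually work
m = 42
c = pow(m, e, p * q)
assert pow(c, d_lam, p * q) == m
assert pow(c, d_phi, p * q) == m
```

If your own extended-Euclidean run returns 1, compare its intermediate quotients against the steps this built-in computation implies.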
|
broall wrote:
EgmatQuantExpert wrote:
What is the product of all values of x that satisfies the equation: \(\sqrt{5x} + 1= \sqrt{(7x - 3)}\) ?
A. 9 B. 11 C. 49 D. 99 E. None of the above
First, make sure that \(\sqrt{5x}\) and \(\sqrt{7x-3}\) have real values, so \(x \geq 0\) and \(x \geq \frac{3}{7}\); hence \(x \geq \frac{3}{7}\).
Now, solve that equation
\(\begin{align}
\quad \sqrt{5x}+1 &= \sqrt{7x-3} \\
5x + 2\sqrt{5x}+1 &= 7x-3 \\
2\sqrt{5x} &= 2x-4 \\
\sqrt{5x} &= x-2 \\
x - \sqrt{5x} - 2 &=0
\end{align}\)
Setting \(t = \sqrt{x} \geq \sqrt{3/7}\), we have \(t^2 -\sqrt{5}t -2 =0\)
\(\Delta = 5 + 4\cdot 2 = 13 \implies t_{1,2}=\frac{\sqrt{5} \pm \sqrt{13}}{2}\)
We have \(t_1 = \frac{\sqrt{5}+ \sqrt{13}}{2} \approx 2.9 > 1 > \sqrt{\frac{3}{7}}\). Choose this root.
\(t_2 = \frac{\sqrt{5}- \sqrt{13}}{2} \approx -0.7 < 0 < \sqrt{\frac{3}{7}}\). Eliminate this root.
Hence \(x=t_1^2=\frac{9+\sqrt{65}}{2}\).
The answer is E.
It's actually \(x=\frac{9+\sqrt{65}}{2}\) and \(x=\frac{9-\sqrt{65}}{2}\): there are 2 solutions. The best way to check is the discriminant \(b^2-4ac\). There are 4 possible scenarios; in one of them, if the result is a positive non-perfect-square number, then there are 2 irrational solutions, but their product is a rational number, in this case 4: $$\frac{9+\sqrt{65}}{2}\cdot\frac{9-\sqrt{65}}{2}=\frac{81-65}{4}=4.$$ The question wasn't formulated right. I've spent 3 or 4 minutes double- and cross-checking myself, because I got the solution, but it's not among the available answers...
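Since the two posts above disagree on the number of solutions, a quick numeric check (mine, not from the thread) of which candidate roots of \(x^2-9x+4=0\) actually satisfy the original, unsquared equation is useful; squaring can introduce extraneous roots.

```python
import math

# Which roots of x^2 - 9x + 4 = 0 satisfy sqrt(5x) + 1 = sqrt(7x - 3)?
def satisfies(x):
    if x < 3 / 7:          # domain requirement from 7x - 3 >= 0
        return False
    return math.isclose(math.sqrt(5 * x) + 1, math.sqrt(7 * x - 3))

x1 = (9 + math.sqrt(65)) / 2   # about 8.53
x2 = (9 - math.sqrt(65)) / 2   # about 0.47

print(satisfies(x1), satisfies(x2))  # True False
```

Only \(x_1\) survives, because the step \(\sqrt{5x}=x-2\) requires \(x\geq 2\), which \(x_2\) fails.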
|
I'm trying to approximate $\sqrt{101}$ using the Taylor series for the function $f(x)=\sqrt{x}$ centered at the point $x=100$. I need to obtain an approximation that is within $0.01$ of the correct answer. The Taylor series is given by
$$ f(x) = \sum_{k=0}^{n-1} \frac{f^{(k)}(100)}{k!}(x-100)^k + R_n(x) $$
where $R_n(x)$ is the remainder term. For the case $n=2$, we have
$$ R_2(101) = \frac{f''(c)}{2!}(101-100)^2 = \frac{f''(c)}{2}, $$
where $c \in [100, \, 101]$. Since $f''(x) = -\frac{1}{4x^{3/2}}$, we have
$$ \left|R_2(101)\right| = \left| \frac{1}{2} \cdot \frac{-1}{4c^{3/2}} \right| = \left| \frac{1}{8c^{3/2}} \right| $$
Since we are considering the interval $[100, \, 101]$, we can bound this by
$$ \left|R_2(101)\right| \leq \frac{1}{8\cdot 100^{3/2}} = \frac{1}{8000} = 0.000125. $$
Thus, the approximation using the first two terms of the Taylor expansion should be sufficiently accurate. Those two terms are given by
$$ f(101) = f(100) + f'(100)/2 = 10 + 1/40 = 401/40. $$
However, using numerical software to confirm the approximation, we see that
$$ \left| (401/40) - \sqrt{101} \right| \approx 0.024875621, $$
which is not sufficiently accurate. Can anyone tell me why I am not obtaining a sufficiently accurate approximation? Thanks in advance!
|
Recently I stumbled upon a discussion in the forum: "What is Shamir's trick used for?" Are there any such examples?
No, Shamir's trick doesn't break ECDSA. Verifying an ECDSA signature involves evaluating a sum of scalar multiplications $[h s^{-1}]G + [r s^{-1}]P$. You can compute the scalar multiplications $[h s^{-1}]G$ and $[r s^{-1}]P$ separately and then add the results, but Shamir's trick does it more efficiently as a combined computation.
This trick for evaluating a sum of two scalar multiplications $[\alpha]P + [\beta]Q$ is to precompute $P+Q$ and go through the binary expansions of $\alpha = \sum_i \alpha_i 2^i$ and $\beta = \sum_i \beta_i 2^i$ from msb to lsb: at each bit position, first double the running sum, then add either nothing if both bits are zero, $P$ if only $\alpha_i = 1$, $Q$ if only $\beta_i = 1$, or the precomputed $P + Q$ if both bits are 1; then move on to the next bit.
Beware: If the selection of which point to add or the arithmetic is not done in constant time, this procedure may leak the scalars $\alpha$ and $\beta$ through timing side channels. When verifying signatures this is usually not a problem (unless for some reason the signature has to be secret), but other applications may involve a sum of two scalar multiplications whose scalars are secret.
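Here is a toy sketch (mine, not from the answer) of that bit-scanning structure. To keep it self-contained, the "group" is just the integers under addition, so $[k]P$ is $k\cdot P$; for elliptic-curve points, integer addition would be replaced by point addition and doubling.

```python
# Shamir's trick for a double scalar multiplication [a]P + [b]Q,
# demonstrated over the integers under addition.
def shamir_double_mul(a: int, b: int, P: int, Q: int) -> int:
    PQ = P + Q                      # precompute P + Q once
    acc = 0
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        acc += acc                  # "double" (point doubling on a curve)
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits == (1, 0):
            acc += P
        elif bits == (0, 1):
            acc += Q
        elif bits == (1, 1):
            acc += PQ
    return acc

print(shamir_double_mul(23, 41, 7, 11))  # 23*7 + 41*11 = 612
```

Note how the loop body matches the side-channel warning above: which of the four branches runs depends directly on the secret bits.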
Shamir's trick: given $N, x, z, e, F$ such that $x^e = z^F \bmod N$ and $e, F$ are relatively prime, you can efficiently find $z^{1/e}\bmod N$. The trick is to compute integers $a,b$ such that $ae + bF = 1$; then $x^b z^a$ is the desired root, since $(x^b z^a)^e = (x^e)^b z^{ae} = z^{bF + ae} = z \bmod N$.
RSA says the following: given $N, y, e$, it is difficult to compute $y^{1/e}\bmod N$.
Using Shamir's trick you can prove the following: given $N, y, F, e$ such that $F$ and $e$ are relatively prime, it is difficult to compute $y^{F/e}\bmod N$.
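A small numeric illustration of the $e$-th-root trick above (my own sketch; all specific numbers are toy values chosen for the demo):

```python
import math

# Recover w = z^(1/e) mod N from x, z with x^e = z^F mod N, gcd(e, F) = 1.
N = 187            # 11 * 17, a toy modulus
w = 2              # the hidden e-th root we will recover
e, F = 3, 5        # relatively prime exponents
assert math.gcd(e, F) == 1

z = pow(w, e, N)   # z = w^e, so w = z^(1/e)
x = pow(w, F, N)   # then x^e = w^(F e) = z^F mod N holds
assert pow(x, e, N) == pow(z, F, N)

# Extended Euclid gives a, b with a*e + b*F = 1; here 2*3 + (-1)*5 = 1.
a, b = 2, -1
assert a * e + b * F == 1

# z^(1/e) = x^b * z^a mod N (negative exponents need Python 3.8+)
root = (pow(x, b, N) * pow(z, a, N)) % N
print(root)  # recovers w = 2
assert pow(root, e, N) == z
```
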
|
Neither expression is correct or wrong. Definitions are not correct or wrong; definitions are a consensus we adopt about elements of a formalism.
We can define anything we want in a formalism; we simply have to respect basic rules of logic, such as always using the same definition once we have chosen one. Moreover, in theories with physical content the definition must have physical meaning.
So we have to ask: what is the physical meaning of kinetic energy? Usually we define kinetic energy as the energy that changes due to work, and both definitions of kinetic energy are physically acceptable:
$$E_{kin}^{(1)} \equiv mc^2 (\gamma -1)$$
$$E_{kin}^{(2)} \equiv mc^2 \gamma$$
because their time derivatives equal the rate of doing work
$$\frac{dE_{kin}^{(1)}}{dt} = \frac{dE_{kin}^{(2)}}{dt} = \mathbf{F}\cdot\mathbf{v},$$
which is expected because the difference between the two definitions is a constant term, which vanishes upon differentiation.
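As an independent numerical check (my own sketch, not from the answer), one can verify $dE/dt = Fv$ for both definitions along an arbitrary 1D trajectory:

```python
import math

# For a 1D motion v(t), both E1 = m c^2 (gamma - 1) and E2 = m c^2 gamma
# satisfy dE/dt = F v, where F = dp/dt and p = gamma m v.
# Units with m = c = 1; v(t) is an arbitrary smooth sub-luminal trajectory.
m = c = 1.0

def v(t):
    return 0.5 * math.tanh(t)

def gamma(t):
    return 1.0 / math.sqrt(1.0 - (v(t) / c) ** 2)

def E1(t):
    return m * c**2 * (gamma(t) - 1.0)

def E2(t):
    return m * c**2 * gamma(t)

def p(t):
    return gamma(t) * m * v(t)

def ddt(f, t, h=1e-6):
    # central finite difference
    return (f(t + h) - f(t - h)) / (2.0 * h)

t0 = 0.7
power = ddt(p, t0) * v(t0)  # F v
print(abs(ddt(E1, t0) - power) < 1e-8, abs(ddt(E2, t0) - power) < 1e-8)
```
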
Most books use the first definition, but I have seen books using the definition $E_{kin}^{(2)}$. A reason for choosing this definition is given by Richard Talman in Geometric Mechanics: Toward a Unification of Classical Physics (Wiley, 2007):
Here, we have used the symbol $E_{kin}$ which, since it includes the rest energy, differs by that much from being a generalization of the "kinetic energy" of Newtonian mechanics. Nevertheless it is convenient to have a symbol for the energy of a particle that accompanies its very existence and includes its energy of motion but does not include any "potential energy" due to its position in a field of force.
|
Impact Factor 2019: 0.808
The journal Asymptotic Analysis fulfills a twofold function. It aims at publishing original mathematical results in the asymptotic theory of problems affected by the presence of small or large parameters on the one hand, and at giving specific indications of their possible applications to different fields of natural sciences on the other hand. Asymptotic Analysis thus provides mathematicians with a concentrated source of newly acquired information which they may need in the analysis of asymptotic problems.
Authors: Souganidis, Panagiotis E.
Article Type: Research Article
Abstract: Homogenization-type results for the Cauchy problem for first-order PDE (Hamilton–Jacobi equations) are presented. The main assumption is that the Hamiltonian is superlinear and convex with respect to the gradient, and stationary and ergodic with respect to the spatial variable. Some applications to related problems, as well as to the asymptotics of reaction–diffusion equations and turbulent combustion, are also presented.
Citation: Asymptotic Analysis, vol. 20, no. 1, pp. 1-11, 1999
Article Type: Research Article
Abstract: The aim of this study is to give complete semiclassical asymptotics of the residues $\mathrm{Res}[S(\lambda,\omega,\omega'),\rho]$ at some pole $\rho$ of the distributional kernel of the scattering matrix $S(\lambda)$ corresponding to a semiclassical two-body Schrödinger operator $P=-h^2\Delta + V$, considered as a meromorphic operator-valued function with respect to the energy $\lambda$. We do it in the case where the pole $\rho$ considered is a shape resonance of $P$. This is a continuation of A. Benbernou, Estimation des résidus de la matrice de diffusion associés à des résonances de forme I (to appear in Ann. Inst. H. Poincaré), where an extra geometrical condition was assumed (namely the absence of caustics near the energy level $\mathrm{Re}\,\rho$). Here we drop this assumption by using an FBI transform which permits working in the complexified phase space. We then show that some semiclassical WKB expansions are global, and this allows us to find estimates for the residue of the type $\mathrm{O}(h^N\mathrm{e}^{-2S_0/h})$, where $S_0$ is the Agmon width of the potential barrier, and $N$ may be arbitrarily large depending on an explicit geometrical location of the incoming and outgoing waves $\omega$ and $\omega'$ one considers. Full asymptotic expansions are obtained under an additional generic geometric assumption on the potential $V$.
Citation: Asymptotic Analysis, vol. 20, no. 1, pp. 13-38, 1999
Authors: Percivale, Danilo
Article Type: Research Article
Citation: Asymptotic Analysis, vol. 20, no. 1, pp. 39-59, 1999
Authors: Tassa, Tamir
Article Type: Research Article
Abstract: We study the homogenization of oscillatory solutions of partial differential equations with a multiple number of small scales. We consider a variety of problems: nonlinear convection–diffusion equations with oscillatory initial and forcing data, the Carleman model for the discrete Boltzmann equations, and two-dimensional linear transport equations with oscillatory coefficients. In these problems, the initial values, force terms or coefficients are oscillatory functions with a multiple number of small scales, $f(x,x/\varepsilon_1,\ldots,x/\varepsilon_n)$. The essential question in this context is what is the weak limit of such functions when $\varepsilon_i \downarrow 0$, and what is the corresponding convergence rate. It is shown that the weak limit equals the average of $f(x,\cdot)$ over an affine submanifold of the torus $T^n$; the submanifold and its dimension are determined by the limit ratios between the scales, $\alpha_i= \lim \varepsilon_1/\varepsilon_i$, their linear dependence over the integers and also, unexpectedly, by the rate at which the ratios $\varepsilon_1/\varepsilon_i$ tend to their limits $\alpha_i$. These results and the accompanying convergence rate estimates are then used in deriving the homogenized equations in each of the abovementioned problems.
Citation: Asymptotic Analysis, vol. 20, no. 1, pp. 61-96, 1999
|