Separability of Finite Topological Products
Recall from the First Countability of Finite Topological Products and the Second Countability of Finite Topological Products pages that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of first/second countable topological spaces then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is also first/second countable.
We will now look at another property that is inherited from a finite collection of topological spaces - separability!
Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of topological spaces. If $X_i$ is separable for each $i \in \{ 1, 2, ..., n \}$ then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is also separable. Proof: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of separable topological spaces. Then each $X_i$ contains a countable and dense subset $A_i$. We form a countable and dense subset of $\displaystyle{\prod_{i=1}^{n} X_i}$ as the product of all of these countable dense subsets: $\displaystyle{A = \prod_{i=1}^{n} A_i}$. Clearly $A$ is countable since it is the product of a finite collection of countable sets. Furthermore, we claim that $A$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$. To show this it suffices to consider basic open sets, so let $U = U_1 \times U_2 \times ... \times U_n \subseteq \prod_{i=1}^{n} X_i$ be a nonempty basic open set. Then $U_i$ is a nonempty open set in $X_i$ for each $i \in \{ 1, 2, ..., n \}$. Since $A_i$ is dense in $X_i$ we have that $A_i \cap U_i \neq \emptyset$. So there exists a point $a_i \in A_i \cap U_i$ for each $i \in \{ 1, 2, ..., n \}$. So the point $\mathbf{a} = (a_1, a_2, ..., a_n) \in A \cap U$, which shows that $A \cap U \neq \emptyset$ for each nonempty basic open set $U$ in the topological product. Therefore $\displaystyle{\prod_{i=1}^{n} X_i}$ is separable. $\blacksquare$
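A standard corollary (my addition, not from the original page): since $\mathbb{Q}$ is countable and dense in $\mathbb{R}$, Theorem 1 immediately gives the separability of $\mathbb{R}^n$.

```latex
% Corollary (special case of Theorem 1): \mathbb{R}^n is separable.
% Take A_i = \mathbb{Q} for each i; then A = \mathbb{Q}^n is countable, and
% since the closure of a product is the product of the closures,
\[
  \overline{\mathbb{Q}^n}
  = \overline{\mathbb{Q}} \times \cdots \times \overline{\mathbb{Q}}
  = \mathbb{R} \times \cdots \times \mathbb{R}
  = \mathbb{R}^n .
\]
```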
For finite $F$ the only difference between $\sum_{k\in F}a_k$ and $\sum_Fa_k$ is notational: the latter is an abbreviation for the former.
The theorem does indeed say that a non-negative series can be rearranged and broken up arbitrarily without any effect on its value. This is perhaps easiest to see in the special case in which $I=\Bbb Z^+$, say, so that
$$\sum_Ia_k=\sum_{k\in\Bbb Z^+}a_k=\sum_{k=1}^\infty a_k\;.$$
Call this sum $S$. Now we partition the positive integers into sets $I_j$ for $j\in\Bbb Z^+$: these sets are pairwise disjoint, and their union is all of $\Bbb Z^+$. The theorem says that if for each $j\in\Bbb Z^+$ we set
$$S_j=\sum_{I_j}a_k=\sum_{k\in I_j}a_k\;,$$
then
$$S=\sum_{j=1}^\infty S_j\;.$$
We might, for instance, let $I_1=\{2\}$, $I_2=\{1\}$, $I_3=\{4\}$, $I_4=\{3\}$, and so on, so that
$$I_j=\begin{cases}\{j+1\},&\text{if }j\text{ is odd}\\\{j-1\},&\text{if }j\text{ is even}\;;\end{cases}$$
then
$$S_j=\begin{cases}a_{j+1},&\text{if }j\text{ is odd}\\a_{j-1},&\text{if }j\text{ is even}\;,\end{cases}$$
and
$$\sum_{j=1}^\infty S_j=a_2+a_1+a_4+a_3+a_6+a_5+\ldots$$
is just the sum of the rearrangement in which we interchange $a_{2n-1}$ and $a_{2n}$ for each $n\in\Bbb Z^+$. Any rearrangement of the original series can be obtained in this way, and the theorem says that they all produce the same sum.
But it actually says even more than that, since the sets $I_j$ can contain more than one index. For instance, we can let
$$I_1=\{2n-1:n\in\Bbb Z^+\}=\{1,3,5,7,\ldots\}$$
and
$$I_2=\{2n:n\in\Bbb Z^+\}=\{2,4,6,8,\ldots\}\;,$$
setting $I_j=\varnothing$ if $j>2$. Then
$$\begin{align*}S_1&=\sum_{n=1}^\infty a_{2n-1}=a_1+a_3+a_5+\ldots\;,\\S_2&=\sum_{n=1}^\infty a_{2n}=a_2+a_4+a_6+\ldots\;,\end{align*}$$
and $S_j=0$ for $j>2$. Clearly we can ignore the $0$ terms, so the theorem says in this case that $S=S_1+S_2$, i.e., that
$$\sum_{n=1}^\infty a_n=\sum_{n=1}^\infty a_{2n-1}+\sum_{n=1}^\infty a_{2n}\;:$$
we can sum the odd-numbered and the even-numbered terms separately, and the sum of those two subseries totals will be the same as the sum of the original series. Thus, the theorem covers not only rearrangements of the individual terms, but also breaking up the series into disjoint subseries and summing those subseries individually.
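A quick numerical illustration (my own, not from the argument above): for the nonnegative series $\sum_n 1/n^2$, the swapped rearrangement and the odd/even split described above agree with the direct partial sum, up to floating-point rounding.

```python
# Illustration: for a nonnegative series, rearranging terms or splitting
# into subseries does not change the sum. Here a_n = 1/n^2 (partial sums).
N = 100000
a = [1.0 / n**2 for n in range(1, N + 1)]

direct = sum(a)

# The rearrangement from the text: swap a_{2n-1} and a_{2n}.
swapped = sum(a[i + 1] + a[i] for i in range(0, N, 2))

# Split into odd- and even-indexed subseries and add the two subtotals.
odds = sum(a[i] for i in range(0, N, 2))
evens = sum(a[i] for i in range(1, N, 2))

assert abs(direct - swapped) < 1e-9
assert abs(direct - (odds + evens)) < 1e-9
```

All three values approximate $\pi^2/6$; the agreement is exact for the underlying series and limited here only by truncation and rounding.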
I can’t say exactly what is meant by "the sum above is well-defined" without having the entire relevant context. If it refers to $\sum_{k\in F}a_k$ for a finite $F$, for instance, it simply means that since this is a finite sum, we already know that the order in which it’s evaluated makes no difference, so we can safely specify the set of indices as a ‘lump’ — the set $F$ — instead of specifying the order in which the terms are to be added.
We had for this PDE: $-Lu=f$ in $\Omega$ (1) and $u|_{\partial \Omega}=0$ on $\partial \Omega$ (2) the following existence theorem:
Let $\Omega \subseteq \mathbb{R}^n$ be open and $L$ be a uniformly elliptic differential operator of second order. Then there's a $\lambda_0 \in \mathbb{R}$ such that $\forall \lambda \in \mathbb{K}$ with $\operatorname{Re} \lambda \geq \lambda_0$ and $\forall f \in H^{-1}(\Omega)$, there is exactly one $u \in H^1_0(\Omega)$ with $\lambda u - Lu =f$ in $H^{-1}(\Omega)$. This means that $$ \lambda \int_\Omega u(x) \overline{\varphi(x)}dx - \langle Lu, \varphi \rangle=\langle f, \varphi \rangle~~ (*)$$ for all $\varphi \in H^1_0(\Omega) $. In particular, we have for (1, 2) with $-L$ replaced by $\lambda-L$ a unique weak solution $u \in H^1_0(\Omega)$ if $f \in L^1_{loc}(\Omega) \cap H^{-1}(\Omega)$. $(**)$
We defined $$\langle Lu, \varphi \rangle= -\sum_{j,k=1}^n \int_\Omega a_{j,k}(x) \partial_{x_j}u(x) \overline{ \partial _{x_k} \varphi(x)} dx+ \sum_{j=1}^n \int_\Omega b_{j}(x) \partial_{x_j}u(x) \overline{ \varphi(x)}dx+ \int_\Omega c(x) u(x) \overline{ \varphi(x)}dx $$.
My questions: I understand the statement of this existence theorem, but why is it equivalent to the statement in $(*)$? In $(**)$: why does this hold, and why does $f$ need to be in this set? If it helps, the statement in our proof was: "As $C^{\infty}_0(\Omega) \subseteq H^1_0(\Omega)$, we get that $u$ is a weak solution of (1, 2) under the conditions mentioned in the theorem." Apparently it has to follow from the more general statement, but I fail to see this.
Thank you for your help!
I would appreciate some help with this problem:
Evaluate:
$$\lim_{x \to \infty}\frac{1}{x}\displaystyle\int_0^x|\sin(t)|dt$$
Note that $\vert\sin(t)\vert$ is non-negative, periodic with period $\pi$, and that $$\int_0^\pi\vert \sin(t)\vert dt=2.$$ Let $f(x)$ be the largest integer smaller than or equal to $x/\pi$. Then it holds that $$\int_0^{f(x)\pi}\vert\sin(t)\vert dt\leq\int_0^x\vert\sin(t)\vert dt\leq\int_0^{[f(x)+1]\pi}\vert\sin(t)\vert dt.$$ This can be written as $$2f(x)\leq\int_0^x\vert\sin(t)\vert dt\leq2[f(x)+1].$$ Dividing by $x$ and noting that $\lim_{x\to\infty}f(x)/x=1/\pi$ it follows that $$\frac2\pi\leq\lim_{x\to+\infty}\frac1x\int_0^x\vert\sin(t)\vert dt\leq\frac2\pi.$$
Ok, thanks for your comments I think I get it now:
$\displaystyle\int_0^{2\pi}|\sin (t)|dt = 4$
And due to the periodicity of $|\sin(t)|$,
$\displaystyle\int_0^{2k\pi}|\sin (t)|dt = k\displaystyle\int_0^{2\pi}|\sin (t)|dt = 4k$
So the limit is equivalent to:
$$\lim_{k\to\infty}\left(\frac{1}{2k\pi}\cdot k\displaystyle\int_0^{2\pi}|\sin (t)|dt\right) = \lim_{k\to\infty}\frac{4k}{2k\pi} = \frac{2}{\pi}$$
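As an independent numerical sanity check (my addition, not part of the original solution), a crude midpoint Riemann sum for $\frac{1}{x}\int_0^x|\sin t|\,dt$ at a large $x$ lands close to $2/\pi \approx 0.6366$:

```python
import math

# Midpoint Riemann sum for (1/x) * integral_0^x |sin t| dt.
def average_abs_sin(x, steps=200000):
    h = x / steps
    total = sum(abs(math.sin((i + 0.5) * h)) for i in range(steps)) * h
    return total / x

approx = average_abs_sin(1000.0)
assert abs(approx - 2.0 / math.pi) < 5e-3
```

The residual discrepancy comes from the incomplete last period of $|\sin t|$ on $[0, x]$, which is of order $1/x$.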
... so in reality this question probably wasn't as hard as I first thought it was :/
$$\sigma=\begin{pmatrix} 1 & 2 &3 & 4& 5& 6&7 &8 &9 &10 & 11 \\ 4&2&9&10&6&5&11&7&8&1&3 \end{pmatrix}$$
(1) I am asked to write this permutation in $S_{11}$ as a product of disjoint cycles and also as a product of transpositions.
(2) Also find the order of the element. Is this permutation even or odd?
I think these are the disjoint cycles:
$E_{1}=(1,4,10)$, $\;\operatorname{order}E_1 =3$
$E_{2}= (3,9,8,7,11),\;$ $\operatorname{order}E_{2} =5$
$E_{3}=(5,6),\;$ $\operatorname{order}E_{3} =2$
$\sigma = E_{1} \cdot E_{2}\cdot E_{3}$
$\operatorname{order}E_{3}$ is even, so the order of the permutation is even. Why are they asking this? And what is the significance of it being even or odd?
Transpositions: I have read this a couple of times, but the one example in my textbook is rather unclear; I am not sure what this means.
I think the transposition decomposition of $E_{1}$ is $(1,4)(1,10)$, but I'm not sure what this means.
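For what it's worth, here is a small script (my own sketch, not from the textbook) that decomposes a permutation into disjoint cycles and computes its order and parity. For the $\sigma$ above it finds the cycles $(1,4,10)$, $(3,9,8,7,11)$, $(5,6)$, order $\operatorname{lcm}(3,5,2)=30$, and an odd permutation (7 transpositions, since a $k$-cycle is a product of $k-1$ transpositions):

```python
import math

# sigma as a mapping: sigma[i] is the image of i.
sigma = {1: 4, 2: 2, 3: 9, 4: 10, 5: 6, 6: 5, 7: 11, 8: 7, 9: 8, 10: 1, 11: 3}

def disjoint_cycles(p):
    seen, cycles = set(), []
    for start in sorted(p):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = p[x]
        if len(cycle) > 1:          # drop fixed points such as 2
            cycles.append(tuple(cycle))
    return cycles

cycles = disjoint_cycles(sigma)
order = math.lcm(*(len(c) for c in cycles))          # lcm of cycle lengths
num_transpositions = sum(len(c) - 1 for c in cycles)  # k-cycle = k-1 swaps
parity = 'even' if num_transpositions % 2 == 0 else 'odd'
```

Note that the order of a product of disjoint cycles is the lcm of the cycle lengths, not something determined by a single cycle.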
Let $\Omega\subset\mathbb{R}^N$ be a bounded smooth domain. Suppose that $0<k<t<1$ and $\Phi$ is a smooth function satisfying: $\Phi(x)=0$ for $x\le k$, $\Phi(x)=1$ for $x\ge t$.
Take $u\in C_0^\infty(\overline{\Omega})$ with $u\ge 0$ and $u$ superharmonic ($-\Delta u\ge 0$). Note that $$\Delta (\Phi\circ u)=(\Phi''\circ u)|\nabla u|^2+(\Phi'\circ u)\Delta u,\tag{1}$$
so $$|\Delta (\Phi\circ u)|\le C(|\nabla u|^2+\chi_{u>k}|\Delta u|).\tag{2}$$
Since $u\ge 0$ and $-\Delta u\ge 0$, we can bound the last term of $(2)$: $$\chi_{u>k}|\Delta u|\le \frac{1}{k}u|\Delta u|=-\frac{1}{k}u\Delta u\tag{3}.$$
We combine Green's identity with $(3)$ to conclude that $$\int_{u>k}|\Delta u|\le \frac{-1}{k}\int _\Omega u\Delta u=\frac{1}{k}\int_\Omega |\nabla u|^2.\tag{4}$$
Therefore, $(1)$ and $(4)$ gives $$\int_\Omega |\Delta (\Phi\circ u)|\le C\left(1+\frac{1}{k}\right)\int_\Omega |\nabla u|^2,$$
or equivalently $$\|\Delta(\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2,\ \forall \ u\in C_0^\infty (\overline{\Omega}),\ u\ge 0,\ -\Delta u\ge 0.\tag{5}$$
So my question is: can we extend $(5)$ to all functions $u\in W_0^{1,2}(\Omega)$ such that $u\ge 0$ and $u$ is superharmonic (the distributional Laplacian of $u$ is nonnegative)?
Remark 1: $C_0^\infty(\overline{\Omega})$ is the space of all $C^\infty(\overline{\Omega})$ functions that vanish on the boundary.
Remark 2: $C>0$ is a constant, which can change in every line and it depends only on $k$, $\|\Phi'\|_\infty$ and $\|\Phi''\|_\infty$.
Remark 3: In this question, I tried to solve this problem by showing that $(1)$ holds in the sense of measures also for $u\in W_0^{1,2}(\Omega)$; however, it does not seem to be true.
$\textbf{Update (A Supposed Proof)}$: Assume that $u\in W_0^{1,2}(\Omega)$ satisfies $u\ge 0$, $-\Delta u\ge 0$. Let $\varphi_n$ be the standard mollifier sequence. Extend $u$ by zero outside $\Omega$ (still denoting it by $u$) and consider the sequence $$u_n(x)=\int_{\mathbb{R}^N} \varphi_n(x-y)u(y)dy,\ x\in \mathbb{R}^N$$
Since $u\in W_0^{1,2}(\mathbb{R}^N)$, we have that $u_n\in C_0^\infty(\overline{\Omega})$, $u_n\ge 0$ and $-\Delta u_n\ge 0$. Moreover, $u_n\to u$ in $W^{1,2}(\Omega)$. From $(5)$, we have that $$\|\Delta (\Phi\circ u_n)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u_n\|_2^2,$$
therefore, we can assume without loss of generality that $\Delta(\Phi\circ u_n)$ converges in the weak-star topology to some measure $\mu\in \mathcal{M}(\Omega)$. Let $v\in W_0^{1,1}(\Omega)$ be the solution of the problem
$$ \left\{ \begin{array}{ccc} \Delta v =\mu&\mbox{ in $\Omega$} \\ v=0 &\mbox{ on $\partial \Omega$} \end{array} \right. $$
By the weak convergence, we have that $$\int_\Omega \phi(x)\Delta (\Phi\circ u_n)\to \int_\Omega \phi(x) d\mu=\int_\Omega \phi(x)\Delta v, \forall \phi\in C_0(\overline{\Omega}),\tag{6}$$
hence, from $(6)$ and the definition of $v$, we conclude that $$\int_\Omega \Delta \phi(x) (\Phi\circ u_n)\to\int_\Omega \Delta \phi(x) v,\ \forall \phi\in C_0^\infty(\overline{\Omega}) .$$
As $\Phi\circ u_n \to \Phi\circ u$ in $W^{1,2}(\Omega)$, we must conclude that $v=\Phi\circ u$ and thus $\Delta(\Phi\circ u)\in \mathcal{M}(\Omega)$ and $$\|\Delta (\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2$$
Is this proof right? Can anyone check it for me, please?
When the only inputs to your robot are the x,y,z coordinates of the next point, one after another, I can simply calculate IK to find the angles and make my actuators move to those angles. What's the point in calculating forward kinematics in this case? Does it merely act as a check of your inverse kinematics' integrity? (That check shouldn't be necessary, because I don't think an analytical solution of the IK will have errors.) I see no purpose in calculating the forward kinematics in this application.
You can control the robot using just the inverse kinematics solution. However, you must be careful to handle the multiple inverse solutions properly (for example, $\cos(\theta) = \cos(-\theta)$, so an $\arccos$-based solution is ambiguous in sign). Forward kinematics can help you visualize the selected solution. It is also helpful when building a graphical simulation of the robot.
Forward kinematics will become useful e.g. if you want to compute the working envelope of your robot.
If you think of classical Jacobian-based methods for IK (still representing the majority), then you ought to consider that they make use of forward-kinematics (FK) maps internally in order to compute the error in the operational space as well as the Jacobian itself, which is actually a differential FK law.
In particular, the IK method based on the Jacobian pseudo-inverse iteratively generates joint velocity profiles $\dot{q}$ that drive the system toward the target. In formulas:
$$ \begin{cases} e=x_d-f\left(q\right) \\ \dot{q}=J^{+}\left(q\right) \cdot \left(\dot{x}_d+Ke\right) \end{cases}, $$
where $x=f\left(q\right)$ and $J=\partial f/\partial q$ are the standard and differential forward-kinematics maps, respectively, and $J^{+}$ denotes the pseudo-inverse of $J$.
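As a concrete sketch (illustrative, not from the answer above): a planar 2-link arm whose FK map $f(q)$ and Jacobian are used inside this kind of iteration. The link lengths, target, gain, and iteration count are arbitrary choices; since $J$ is square here, the pseudo-inverse reduces to the ordinary inverse.

```python
import math

L1, L2 = 1.0, 1.0  # link lengths (arbitrary for this sketch)

def fk(q):
    # forward-kinematics map x = f(q) for a planar 2-link arm
    q1, q2 = q
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian(q):
    # differential forward-kinematics map J = df/dq
    q1, q2 = q
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def ik_step(q, target, gain=0.5):
    # one iteration of q <- q + gain * J^{-1} e, with e = x_d - f(q)
    x, y = fk(q)
    ex, ey = target[0] - x, target[1] - y
    (a, b), (c, d) = jacobian(q)
    det = a * d - b * c            # = L1*L2*sin(q2); nonzero away from singularities
    dq1 = ( d * ex - b * ey) / det
    dq2 = (-c * ex + a * ey) / det
    return (q[0] + gain * dq1, q[1] + gain * dq2)

q = (0.3, 0.5)          # initial guess (arbitrary)
target = (1.2, 0.8)     # reachable target (arbitrary)
for _ in range(100):
    q = ik_step(q, target)
```

Both `fk` and `jacobian` are needed at every iteration, which is the point being made: Jacobian-based IK consumes forward kinematics internally.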
These considerations also apply to other IK methods.
Forward kinematics has many uses, but for your case, YES, it can be used “merely” as a check. However, the check is probably redundant, since most IK solvers will give you some kind of residual which indicates the quality of the solution.
If you have the next target position, you can convert it into joint space and make a PTP (point-to-point) type motion, and there is no need for forward kinematics.
The forward kinematics is crucial when geometrical constraints on the path are considered, e.g. a circular motion (CIRC), linear motion (LIN) or spline motion (SPLINE). Usually these are set together with a desired speed. In this case, converting the current joint space coordinates to Cartesian space and computing the trajectory (incl. velocity and acceleration) from the current end effector coordinates to the target in Cartesian space will assure that the set parameters (path shape and target velocity) will be (more) precisely executed.
Also, when robots are programmed or operated (monitored), it is useful to know what the current Cartesian coordinates of the end-effector are. This is done by calculating the forward kinematics. For determining the current velocity of the robot, differential forward kinematics is used.
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. With the laptop I lose access to the company network and anything I need from there during the next two months, such as the email address of payroll etc etc, needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
3.6. Spectral solver load definition
Purpose
Defining the load applied to the volume element (VE).
The load file is a plain text file with default file extension *.load.
A load file may contain a number of consecutive load cases; each load case corresponds to one line in the load file.
Valid keywords
All valid keywords are given in the following table. Keywords are case-insensitive.
Overview
| Keyword | Meaning | Arguments | Comments |
|---|---|---|---|
| Fdot, dotF | deformation gradient rate ($ \dot{\bar{\tnsr F}}$) | 9 real numbers or asterisks | instead of L or F; component-wise exclusive with P |
| F | deformation gradient aim ($\bar{\tnsr F}$) | 9 real numbers or asterisks | instead of L or Fdot; component-wise exclusive with P |
| L | velocity gradient ($ \bar{\tnsr L}$) | 9 real numbers or asterisks | instead of Fdot or F; component-wise exclusive with P |
| P | Piola–Kirchhoff stress ($ \bar{\tnsr P}$) | 9 real numbers or asterisks | component-wise exclusive with Fdot, F, and L |
| t, time, delta | total time increment | 1 real number | |
| incs, N | number of increments; linear time scaling | 1 integer | instead of logIncs |
| logIncs, logIncrements, logSteps | number of increments; logarithmic time scaling | 1 integer | instead of incs |
| freq, frequency, outputfreq | frequency of results output | 1 integer | default value is 1, i.e. every step is written out |
| euler | rotation of load case frame by z-x-z Euler angles | keyword (optional); 3 real values | keywords: deg, degree (default), radian; instead of rot |
| rot, rotation | rotation of load case frame by rotation matrix | 9 real values | instead of euler |
| dropguessing, guessreset | reset guessing | none | |
| r, restart, restartwrite | frequency of saving restart information | 1 integer | default value of 0 disables writing of restart information |
Deformation gradient rate (Fdot)
Specifies the rate of deformation gradient evolution. See the example "Mixed Boundary Conditions" for more information about applying a deformation gradient rate in combination with stress boundary conditions.
Deformation gradient aim (F)
Specifies the deformation gradient at the end of the load case. A deformation gradient rate between the initial and the final deformation gradient is linearly interpolated. See the example "Mixed Boundary Conditions" for more information about applying a deformation gradient rate in combination with stress boundary conditions.
Velocity gradient (L)
Specifies the velocity gradient applying deformation to the VE. See the example "Mixed Boundary Conditions" for more information about applying a velocity gradient in combination with stress boundary conditions.
Piola–Kirchhoff stress (P)
Specifies the stress boundary conditions. See the example "Mixed Boundary Conditions" for more information about using stress boundary conditions in combination with deformation boundary conditions.
Total time increment (t)
Specifies the total increment $\Delta t$ of time in seconds for the load case. Thus, the load case runs from $t_0$ to $t_0 + \Delta t$.
With linear time stepping (by keyword incs), the time at increment $n$ out of the total $N$ is given by
\[ t(n) = t_0 + \frac{n}{N} \Delta t \]
If time scaling is switched to logarithmic (by keyword logIncs), then the time of increment $n$ is given by
\[ t(n) =\begin{cases}2^{n-N} \Delta t & \text{in the first load case};\\ t_0 \left(\displaystyle\frac{t_0 + \Delta t}{t_0}\right)^{n/N} & \text{in subsequent load cases}.\end{cases}\]
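A small sketch (my own, based only on the two formulas above, not DAMASK source code) of how the increment times could be computed for both scalings:

```python
# Increment times for the two time-stepping rules above (illustrative only).
# Linear scaling:             t(n) = t0 + n/N * dt
# Logarithmic, first case:    t(n) = 2^(n-N) * dt
# Logarithmic, later cases:   t(n) = t0 * ((t0 + dt)/t0)^(n/N)
def times_linear(t0, dt, N):
    return [t0 + n / N * dt for n in range(1, N + 1)]

def times_log_first_case(dt, N):
    return [2.0 ** (n - N) * dt for n in range(1, N + 1)]

def times_log_subsequent(t0, dt, N):
    return [t0 * ((t0 + dt) / t0) ** (n / N) for n in range(1, N + 1)]
```

For `t 10.0 incs 10` this reproduces the 1.0-second linear increments used in the basic example later on, while logarithmic scaling doubles the step toward the final time.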
Number of increments ([log]incs)
Specifies the number N of increments the load case is subdivided into. If prefixed by »log«, logarithmic time scaling is used; otherwise, linear time scaling is used. See total time increment for details on the time step calculation.
Frequency of results output (freq)
Specifies the frequency at which results are written to the output file SolverJobName.spectralOut. By default, the results of every increment are written out, i.e. freq is set to 1.
Rotate load frame (euler, rot)
The rotation of the load frame allows loading directions that are not in the direction of the periodic expansion of the VE.
z-x-z Euler angles (euler)
Specifies the rotation between load frame and laboratory frame as z-x-z Euler angles. By default, or when using the keyword deg or degree, angles are given in degrees; the keyword radian switches to radians.
Rotation matrix (rot)
Specifies the rotation between load frame and laboratory frame as a rotation matrix in $SO(3)$.
Rotation matrix requirements
A rotation matrix must be orthogonal and its determinant must be 1.0 (within some allowed numerical tolerance). The following three basic rotation matrices rotate three-dimensional vectors about the $x$, $y$, and $z$ axis, respectively:
\begin{align}R_x(\theta) &=\begin{bmatrix}1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta\\ \end{bmatrix}\\[6pt]R_y(\theta) &=\begin{bmatrix}\cos \theta & 0 & \sin \theta\\ 0 & 1 & 0\\ -\sin \theta & 0 & \cos \theta\\ \end{bmatrix}\\[6pt]R_z(\theta) &=\begin{bmatrix}\cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1\\ \end{bmatrix}\end{align}
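To make the requirement concrete, here is a small sketch (mine, not part of the solver) that builds $R_z(\theta)$ and verifies the two properties above, orthogonality and unit determinant; it also reproduces the 0.70710678 entries used in the 45° rotation example further down.

```python
import math

# Build the basic rotation matrix R_z(theta) and check that it is a proper
# rotation: R^T R = I (orthogonality) and det(R) = 1.
def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def is_rotation(R, tol=1e-9):
    # orthogonality: columns are orthonormal
    for i in range(3):
        for j in range(3):
            dot = sum(R[k][i] * R[k][j] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    # determinant via cofactor expansion along the first row
    det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
         - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
         + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
    return abs(det - 1.0) <= tol

R45 = rot_z(math.radians(45.0))
```

A reflection such as diag(1, 1, -1) is orthogonal but has determinant -1, so `is_rotation` correctly rejects it.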
Reset guessing (dropguessing)
Turns off guessing along the former trajectory at the start of a consecutive load case; the calculation then starts with a homogeneous guess. By default, guessing is on for each load case except the first one, where only a homogeneous guess is possible.
Frequency of saving restart information (restart)
Specifies the frequency at which restart information is written. By default, restart information is never saved.
Examples
Basic load case (Fdot, t, incs)
The simplest load case is defined by prescribing a deformation $\bar{\tnsr F}$ resulting e.g. from a constant technical deformation rate $ \dot{\bar{\tnsr F}}$, a loading time $t$, and the number of increments $n$ in which the problem should be solved.
Fdot followed by 9 floating point values lists the deformation rate tensor components in the order 11,12,13,21,22,23,31,32,33.
t followed by a positive floating point value specifies the total time of the load case.
incs followed by an integer value larger than one indicates the number of steps to use.
Thus, a load case describing tension in 11 and compression in 22 direction is given by:
Fdot 1.0e-4 0.0 0.0 0.0 -1.0e-4 0.0 0.0 0.0 0.0 t 10.0 incs 10
Each increment has a duration of 1.0 second because the total deformation time of 10.0 seconds is divided into 10 increments. The resulting deformation in 11-direction is $\approx 10^{-3}$ and in 22-direction $\approx -10^{-3}$.
Instead of prescribing a constant technical strain rate by defining $\dot{\bar{\tnsr F}}$, it is possible to prescribe a velocity gradient $\bar{\tnsr L}$ to get a constant true strain rate. The velocity gradient is indicated by the keyword L and is also followed by 9 floating point values:
L 1.0e-4 0.0 0.0 0.0 -1.0e-4 0.0 0.0 0.0 0.0 t 10.0 incs 10
Mixed boundary conditions (P, Fdot, L)
The sample load cases above force the VE to significantly change its volume at large deformations. Except for special cases (simple shear, rotation, etc.), a load case prescribing all components of $\bar{\tnsr F}$ will lead to a non-volume-preserving load. Therefore, the deformation should be left undefined in at least one component of the 3x3 tensor $\bar{\tnsr F}$, and a stress must be prescribed for those components to get a unique solution.
To leave a component of the deformation undefined, use an asterisk at the corresponding position. In the following example, a deformation is prescribed in 11 direction, and the deformation in 22 direction will be adjusted to a value such that the average Piola–Kirchhoff stress (keyword P) in that direction is 0.0. All other components have 0 deformation (potentially resulting in stress).
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 P * * * * 0.0 * * * * t 10.0 incs 10
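A hypothetical helper (my own sketch, not part of DAMASK) that parses a load-case line for the subset of keywords used in these examples and checks the component-wise exclusivity of deformation and stress prescriptions:

```python
# Hypothetical load-case line validator (illustrative only, not DAMASK code).
# Handles Fdot/dotF/F/L/P and rot/rotation (9 values each), the no-argument
# keywords dropguessing/guessreset, and single-argument keywords like t, incs.
NINE = {'fdot', 'dotf', 'f', 'l', 'p', 'rot', 'rotation'}
ZERO = {'dropguessing', 'guessreset'}

def parse_loadcase(line):
    tokens = line.split()
    case, i = {}, 0
    while i < len(tokens):
        key = tokens[i].lower()
        n = 9 if key in NINE else 0 if key in ZERO else 1
        case[key] = tokens[i + 1:i + 1 + n]
        i += 1 + n
    return case

def component_wise_exclusive(case):
    # every one of the 9 components must be prescribed either as
    # deformation (Fdot/dotF/F/L) or as stress (P), never both, never neither
    defo = next((case[k] for k in ('fdot', 'dotf', 'f', 'l') if k in case), None)
    stress = case.get('p', ['*'] * 9)
    if defo is None:
        return False
    return all((d == '*') != (s == '*') for d, s in zip(defo, stress))
```

The example above passes this check, while a line with all components left as asterisks in both tensors (one of the forbidden cases listed under restriction 1) fails it. Keywords such as euler (a unit keyword plus three values) would need extra handling.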
Mixed boundary conditions need to fulfill the following requirements:
1. Stress and deformation boundary conditions are component-wise mutually exclusive.
2. The stress boundary conditions must not allow for rotation, i.e. the opposite off-diagonal elements cannot both have stress components at the same time.
3. If a velocity gradient is prescribed, each row of the tensors must contain either only stress or only velocity gradient components.
Load cases not possible due to restriction 1:
Fdot 1.0e-4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 L 1.0e-4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 Fdot * * * * * * * * * P * * * * * * * * * t 10.0 incs 40 L * * * * * * * * * P * * * * * * * * * t 10.0 incs 40 Fdot 1.0e-4 * * * * * * * * P 0.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 Fdot 0.0 0.0 * 0.0 0.0 0.0 0.0 0.0 0.0 P * * * 0.0 * * * * * t 10.0 incs 40 L 0.0 0.0 0.0 * * * * * * P * * * * * * 0.0 0.0 0.0 t 10.0 incs 40
Load cases not possible due to restriction 2:
Fdot 1.0e-4 * 0.0 * 0.0 0.0 0.0 0.0 0.0 P * 0.0 * 0.0 * * * * * t 10.0 incs 40 L * * * * * * 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 0.0 * * * * t 10.0 incs 40 Fdot 1.0e-4 0.0 * 0.0 0.0 0.0 * 0.0 0.0 P * * 0.0 * * * 0.0 * * t 10.0 incs 40 Fdot 1.0e-4 * * * 0.0 0.0 * 0.0 0.0 P * 0.0 0.0 0.0 * * 0.0 * * t 10.0 incs 40
Load cases not possible due to restriction 3:
L 1.0e-4 * 0.0 0.0 * 0.0 0.0 0.0 0.0 P * 0.0 * * 0.0 * * * * t 10.0 incs 40 L 1.0e-4 * 0.0 0.0 0.0 0.0 0.0 0.0 0.0 P * 0.0 * * * * * * * t 10.0 incs 40 L * 1.0 * 0.0 0.0 0.0 0.0 0.0 0.0 P 1.0 * 0.0 * * * * * * t 10.0 incs 40 L * 1.0 * 0.0 0.0 0.0 * * * P 1.0 * 0.0 * * * 0.0 0.0 0.0 t 10.0 incs 40
The following load cases do not offend any of the restrictions and are allowed:
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 P * * * * 0.0 * * * * t 10.0 incs 40 Fdot * 0.0 0.0 0.0 10.0e-4 0.0 0.0 0.0 0.0 P 0.0 * * * * * * * * t 10.0 incs 40
Change of loading direction (dropguessing)
Each line in a load file specifies one load case; the load cases are subsequently applied to the VE. In the following example, uniaxial tension in 11 direction is applied at the same rate with increasing time increments.
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 3.0 incs 2
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 5.0 incs 2
To increase the performance of the iterative scheme, the predicted deformation at the beginning of each new load case (and also during subsequent increments of the same load case) follows the rate of the last increment. However, when changing the loading direction, this strategy can lead to longer calculation times or even prevent convergence. The keyword dropguessing disables the guessing at the beginning of a new load case, as shown in the following example where the deformation direction changes from the 11 to the 22 component:
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 dropguessing
Naturally, no guessing is possible for the first increment of the first load case.
Rotation of load frame (rotation, euler)
The rotation of the load frame allows loading the VE in arbitrary directions, e.g. not following the sample x-y-z coordinate system.
Equivalent load cases (Rotation by 180°)
By rotating the VE by 180°, the sample x-y-z coordinate axes are aligned with the lab x-y-z coordinate axes; thus the load
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2
can also be applied using the following load cases (rotation of 180° around z) when using an isotropic material. For crystalline material, the orientation definition also needs to be rotated, i.e. using the keyword rotation in the texture part of material.config:
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 rot -1.0 0.0 0.0 0.0 -1.0 0.0 0.0 0.0 1.0
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 euler 180.0 0.0 0.0
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 euler radian 3.14159265359 0.0 0.0
Rotation by 45°
The following load cases apply the same load, but for the second one, load and lab frame are rotated relative to each other by 45°.
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 rot 0.70710678 -0.70710678 0.0 0.70710678 0.70710678 0.0 0.0 0.0 1.0
The shape change of the original loading direction and of the rotated one are shown in Figure 1
in blue and red, respectively.
Figure 1: Shape changes of VE for unrotated (a) and rotated by 45° (b) load frame
Figure 2: Shape changes of VE under a load frame rotating about the z-axis in 10° increments
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?
1. For any \\( a \in \mathbb{R} \\), \\( a \le a \\), so reflexivity holds
2. For any \\( a, b, c \in \mathbb{R} \\), \\( a \le b \\) and \\( b \le c \\) implies \\( a \le c \\)
3. For any \\( a, b \in \mathbb{R} \\), \\( a \le b \\) and \\( b \le a \\) implies \\( a = b \\)
4. For any \\( a, b \in \mathbb{R} \\), we have either \\( a \le b \\) or \\( b \le a \\)
So, yes.
Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) to see the order of the (extended) real line as the [Dedekind-MacNeille_completion of the rationals](https://en.wikipedia.org/wiki/Dedekind%E2%80%93MacNeille_completion#Examples). Matthew has told us interesting things about it [before](https://forum.azimuthproject.org/discussion/comment/16714/#Comment_16714).
Hausdorff, on his part, in the book I mentioned [here](https://forum.azimuthproject.org/discussion/comment/16154/#Comment_16154), [says](https://books.google.es/books?id=M_skkA3r-QAC&pg=PA85&dq=each+everywhere+dense+type&hl=en&sa=X&ved=0ahUKEwjLkJao-9DaAhWD2SwKHVrkBcIQ6AEIKTAA#v=onepage&q=each%20everywhere%20dense%20type&f=false) that any total order, dense, and without \\( (\omega,\omega^*) \\) [gaps](https://en.wikipedia.org/wiki/Hausdorff_gap), has embedded the real line. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the [hyperreal numbers](https://en.wikipedia.org/wiki/Hyperreal_number) give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
That's an interesting question, Jonathan.
[Jonathan Castello](https://forum.azimuthproject.org/profile/2316/Jonathan%20Castello)
> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)
In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
First, we can observe that \\(|\mathbb{R}| = |^\ast \mathbb{R}|\\). This is because \\(^\ast \mathbb{R}\\) embeds \\(\mathbb{R}\\) and is constructed from countably infinitely many copies of \\(\mathbb{R}\\) by taking a [quotient algebra](https://en.wikipedia.org/wiki/Quotient_algebra) modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple of other threads.
Next, observe that all [unbounded dense linear orders](https://en.wikipedia.org/wiki/Dense_order) of cardinality \\(\aleph_0\\) are isomorphic. This is due to a rather old theorem credited to George Cantor. Next, apply the [Morley categoricity theorem](https://en.wikipedia.org/wiki/Morley%27s_categoricity_theorem). From this we have that all unbounded dense linear orders with cardinality \\(\kappa \geq \aleph_0\\) are isomorphic. This is referred to in model theory as *\\(\kappa\\)-categoricity*.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.
**Puzzle MD 1:** Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.
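As a hint at the flavor of the proof (a sketch of the classical back-and-forth construction, not a solution writeup), the finite stages can be run in code. The sketch below matches initial segments of two countable dense unbounded orders (the rationals and the dyadic rationals) and checks that the resulting partial matching is order-preserving; the enumeration pools are assumed large enough for the few steps run here.

```python
from fractions import Fraction
from itertools import count, islice

def rationals():
    """Diagonal enumeration of all rationals, each exactly once."""
    seen = {Fraction(0)}
    yield Fraction(0)
    for n in count(2):                    # enumerate p/q with p + q = n
        for q in range(1, n):
            f = Fraction(n - q, q)
            for g in (f, -f):
                if g not in seen:
                    seen.add(g)
                    yield g

def dyadics():
    """Enumeration of all dyadic rationals k / 2**n, each exactly once."""
    seen = set()
    for n in count(0):
        for k in range(-n * 2**n, n * 2**n + 1):  # widening window
            f = Fraction(k, 2**n)
            if f not in seen:
                seen.add(f)
                yield f

def back_and_forth(A, B, steps):
    """Finitely many stages of Cantor's back-and-forth construction.

    A and B list elements of two dense unbounded orders in enumeration
    order; returns an order-preserving partial bijection as (a, b) pairs."""
    pairs = []
    def match(x, side):                   # side 0: x in A; side 1: x in B
        lo = max((p[1 - side] for p in pairs if p[side] < x), default=None)
        hi = min((p[1 - side] for p in pairs if p[side] > x), default=None)
        pool, used = ((B, {p[1] for p in pairs}) if side == 0
                      else (A, {p[0] for p in pairs}))
        y = next(z for z in pool if z not in used
                 and (lo is None or lo < z) and (hi is None or z < hi))
        pairs.append((x, y) if side == 0 else (y, x))
    ai = bi = 0
    for _ in range(steps):
        while any(a == A[ai] for a, _ in pairs):  # forth: next unmatched in A
            ai += 1
        match(A[ai], 0)
        while any(b == B[bi] for _, b in pairs):  # back: next unmatched in B
            bi += 1
        match(B[bi], 1)
    return sorted(pairs)

A = list(islice(rationals(), 300))
B = list(islice(dyadics(), 300))
pairs = back_and_forth(A, B, 12)
# order-preserving: sorting by the A-side also sorts the B-side
bs = [b for _, b in pairs]
assert bs == sorted(bs)
```

Running the construction forever (with genuinely unbounded enumerations) yields the full isomorphism; the "back" steps are what make the map surjective.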
Hi Matthew, nice application of the categoricity theorem! One question if I may. You said:
> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.
But in my understanding the lattice and poset structure is inter-translatable as in [here](https://en.wikipedia.org/wiki/Lattice_(order)#Connection_between_the_two_definitions). Can two lattices be isomorphic and their associated posets not?
(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)
> Can two lattices be isomorphic and their associated posets not?
If two lattices are isomorphic preserving *infima* and *suprema*, ie *limits*, then they are order isomorphic.
The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive.
From model theory we have two maps \\(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \\) and \\(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \\) such that:
- if \\(x \leq_{\mathbb{R}} y\\) then \\(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\\)
- if \\(p \leq_{^\ast \mathbb{R}} q\\) then \\(\psi(p) \leq_{\mathbb{R}} \psi(q)\\)
- \\(\psi(\phi(x)) = x\\) and \\(\phi(\psi(p)) = p\\)
Now consider \\(\\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\).
The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this \\(\bigwedge_{^\ast \mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\) does not exist.
On the other hand, if we consider \\( \bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\), that *does* exist by the completeness of the real numbers (as it is bounded below by \\(\psi(0)\\)).
Hence
$$
\bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\right)
$$
So \\(\psi\\) *cannot* be a complete lattice homomorphism, even though it is part of an order isomorphism.
However, just to complicate matters, I believe that \\(\phi\\) and \\(\psi\\) are a mere *lattice* isomorphism, preserving finite meets and joins.
Sadly, no. Tl;dr: the minimum size of such a creature is on the scale of kilometers and thus pretty infeasible. Instead, try making the creature some kind of colonial organism and boosting your planet.
First, let's consider the simple hypothetical: how much hydrogen would it take to simply lift a whale? Well, a blue whale weighs 200 tons - that's 200,000 kg. Each cubic meter of hydrogen can lift approximately 1.1 kg, so to lift a whale we're talking about 181,000 cubic meters of hydrogen. This is about the same size as the Hindenburg or your classic zeppelin - which you probably think is a lot smaller than it actually is:
It also brings to mind some really fun mental images of a whale soaring through the sky while strapped to the bottom of a zeppelin. Unfortunately, that comparison is unhelpful because the skin of the zeppelin is assumed to have a negligible weight- something that we can't do with biology.
So, let's assume a spherical whale.
What we're trying to figure out here is the minimum size of a biological gasbag. We model that as a sphere of $H_2$ gas surrounded by a thin shell of skin.
Beware, physics below
Our initial equation starts out pretty simply:
$V_{hyd}*F_{buoy} = M_{skin\ shell} = V_{shell}*\rho_{shell}$
where $\rho$ is the density of our shell.
This is then expanded to give us some actual formulas. We're trying to solve for the radius of this biological gasbag, so we're hoping to end up with $r$ alone on one side set equal to a bunch of numbers.
$\frac{4}{3}\pi r^3*F_{buoy} = 4\pi r^2t*\rho_{shell}$
Where $t$ is the thickness of the shell- I'm going to assume it's 1 meter thick. Sounds approximately right to me. We can simplify a bit with that information and some quick algebra:
$r^3 * F_{buoy} = 3r^2*\rho_{shell}$
Which immediately simplifies to exactly what we were hoping for!
$r * F_{buoy} = 3*\rho_{shell}$
Let's deal with those other two variables. The $F_{buoy}$ is the force of buoyancy due to our lifting gas, in this case, hydrogen. There's a lot to it, but Wikipedia has a shortcut: $1\ m^3$ of hydrogen can lift $\approx 1.1kg$. Cool! We can also deal with the other variable, $\rho_{shell}$. Here, a quick google search tells us that the density of skin is about $800\frac{kg}{m^3}$. Let's plug those numbers in.
$r*1.1 = 800*3 = 2400$
Note: I fudge my units for simplicity's sake here. The $F_{buoy}$ term is a good bit more complex.
So our minimal radius for our idealized gasbag is $\approx 2200m$, or 2 kilometers.
Biological assessment:
Totally infeasible. A whale 4 kilometers long is nowhere near plausible, and that's the absolute minimum. You'd have to add things besides skin, and that all adds weight, and every time you add something you increase the radius that much further. With some back of the envelope calculations, I get a minimum size of 8 kilometers; including water and muscle mass as well as a tubular body. What really sunk this, however, was the circulation system. Even though the volume scales as the cube of the radius, the amount of liquid needed to provide circulation throughout the body scales even faster. Sad.
Fictional solutions
There are two main ways I see to combat the problems above.
Modify the organism
If the mammalian whale-like characteristics aren't a hard necessity, I humbly submit the siphonophore for your consideration. It's a marine creature that's actually colonial - made up of individual zooids working in unison. There are two big perks to this. One, they're clearly capable of it - the Portuguese man o' war is a siphonophore, and it already has a large float that could be modified to hold hydrogen (in a fictional universe). Plus, many siphonophores are bioluminescent, which would be awesome to see as a large creature floating overhead. I estimate the minimum size of these to be 5 kilometers in diameter (water weighs more than skin, but they're fine being spherical), so they'd be like glowing clouds. If that isn't epic sci-fi, I don't know what is.
Modify the environment
I fudged the buoyancy term in my derivation above, but it's based on essentially two things- the force of gravity and the density of the atmosphere. Here in Worldbuilding, we're free to modify both of those! What we want is a small planet (low gravity) with a dense atmosphere. If we have an atmosphere like Venus, which is some 60 times denser than Earth's, and a planet about the size of Titan, which has a gravity about 1/8th of ours, we can get a much larger buoyancy force. On this planet, every cubic meter of hydrogen is going to be able to lift around 250 kg- a massive increase from the 1.1 we used on Earth. This cuts our minimum radius down to just
10 meters! That's much more reasonable for an organism, especially one that's supposed to be a whale, and quite manageable in any fiction novel. |
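The arithmetic behind both estimates is easy to reproduce. A minimal sketch, using the same assumptions as above (1 m shell thickness, skin density 800 kg/m³, hydrogen lift of 1.1 kg per m³ on Earth and roughly 250 kg per m³ on the small dense-atmosphere planet):

```python
# Minimum radius of a spherical gasbag whose lift just carries its own skin,
# from r * F_buoy = 3 * t * rho_shell (the simplified balance derived above).
def min_radius(rho_shell, lift_per_m3, t=1.0):
    """Minimum radius in meters; t is the shell thickness in meters."""
    return 3.0 * t * rho_shell / lift_per_m3

earth = min_radius(800.0, 1.1)    # hydrogen on Earth: ~1.1 kg lift per m^3
small = min_radius(800.0, 250.0)  # dense-atmosphere, low-gravity planet
assert 2100 < earth < 2300        # roughly the ~2 km figure above
assert 9 < small < 11             # roughly the ~10 m figure above
```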
2019-09-04 12:06
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
2019-08-15 17:39
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-08-15 17:36
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-02-12 14:01
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
2019-01-21 09:59
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
2019-01-15 14:22
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
2018-12-20 16:31
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
2018-12-14 16:02
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1~\text{MeV}~n_{eq}~\text{cm}^{-2}$, with the additional challenge of a highly non uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
I'm referencing the book, Advanced Calculus, 3rd ed., by Creighton Buck. I feel like the proof is fallacious, but want to confirm my suspicion. I'll state one direction of the proof first, and then call out the particular point where I feel it is fallacious.
Let $f$ be a function defined on a compact set $D$ in $n$ space and taking values in $m$ space, and let $G$ be its graph. Then $f$ is continuous on $D$ if and only if $G$ is compact.
Let the graph of $f$ be $$G = \{\text{all } P = (p,q) \text{ where } q = f(p), \text{ and } p \in D\}$$ Assume that $G$ is compact. We want to show that $f$ must be continuous. Suppose that this were false. Then there would be some point $p_0$ in $D$, and a sequence $\{p_n\}$ in $D$ with $p_n \rightarrow p_0$, and an $\epsilon > 0$ such that $|f(p_n) - f(p_0)| > \epsilon$ for all $n$. Put $q_n = f(p_n)$ and consider the points $P_n = (p_n, q_n)$ in $G$, for $n = 1, 2, ...$. Note that $p_n \rightarrow p_0 \in D$, but that $|q_n - f(p_0)| > \epsilon$. Because $G$ is compact, the sequence $\{P_n\}$ must have a subsequence $\{P_{n_k}\}$ that converges to some point $(p, q)$ in $G$. Accordingly, $\lim_{k \rightarrow \infty}p_{n_k} = p$ and $\lim_{k \rightarrow \infty}q_{n_k} = q$. However, since $\{p_{n_k}\}$ is a subsequence of $\{p_n\}$, which itself converges to $p_0$, we have $p = p_0$. Because the point $(p, q)$ is in $G$, which is the graph of $f$, $q = f(p) = f(p_0)$. However, $\lim_{k \rightarrow \infty}q_{n_k} = f(p_0)$ and $|q_{n_k} - f(p_0)| > \epsilon$ are not both possible. We thus conclude that $f$ is continuous on $D$.
I take issue with this statement in the proof:
If $f$ were not continuous, then there would be some point $p_0$ in $D$, and a sequence $\{p_n\}$ in $D$ with $p_n \rightarrow p_0$, and an $\epsilon > 0$ such that $|f(p_n) - f(p_0)| > \epsilon$ for all $n$.
Buck is using the convergence preserving property of continuous functions, which states that a function $f$ is continuous at $p_0$ if and only if for any sequence $p_n \rightarrow p_0$ we have $f(p_n) \rightarrow f(p_0)$. Why does this inequality hold true for all $n$?
I feel like this should say:
If $f$ were not continuous, then there would be some point $p_0$ in $D$, and a sequence $\{p_n\}$ in $D$ with $p_n \rightarrow p_0$, and an $\epsilon > 0$ such that for any $N$, $\exists n > N$ such that $|f(p_n) - f(p_0)| > \epsilon$.
With this correction, however, I'm not able to continue with his line of reasoning to arrive at the proof. |
Subbases of a Topology
Recall from the Bases of a Topology page that if $(X, \tau)$ is a topological space then a base for the topology $\tau$ is a collection $\mathcal B$ of open sets such that every $U \in \tau$ can be written as a union of a subcollection of open sets from $\mathcal B$. In other words, for every $U \in \tau$ there exists a $\mathcal B^* \subseteq \mathcal B$ such that:(1)
$$U = \bigcup_{B \in \mathcal B^*} B$$
We will now define a similar term known as a subbase.
Definition: Let $(X, \tau)$ be a topological space. A collection $\mathcal S \subseteq \tau$ is called a Subbase (sometimes Subbasis) for $\tau$ if the collection of finite intersections of elements from $\mathcal S$ forms a basis of $\tau$, i.e. $\mathcal B_S = \{ U_1 \cap U_2 \cap ... \cap U_k : U_1, U_2, ..., U_k \in \mathcal S \}$ is a basis of $\tau$.
The following proposition gives us an alternative definition of a subbase for a topology.
Proposition 1: Let $(X, \tau)$ be a topological space. Then $\mathcal S$ is a subbase for $\tau$ if and only if $\tau$ is the smallest topology containing $\mathcal S$.
Example 1
Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals in $\mathbb{R}$. Consider the following set of semi-infinite open intervals:(2)
$$\mathcal S = \{ (-\infty, b) : b \in \mathbb{R} \} \cup \{ (a, \infty) : a \in \mathbb{R} \}$$
Notice that for $a, b \in \mathbb{R}$ and $a < b$ we have that:(3)
$$(-\infty, b) \cap (a, \infty) = (a, b)$$
For $a \geq b$ we have that $(-\infty, b) \cap (a, \infty) = \emptyset$. Therefore the collection of all finite intersections of elements from $\mathcal S$ are either open intervals or the empty set. Recall that the collection of open intervals already forms a basis of the usual topology on $\mathbb{R}$. Hence $\mathcal S$ is a subbasis of $\tau$ since $\mathcal B_S = \{ U_1 \cap U_2 \cap ... \cap U_k : U_1, U_2, ..., U_k \in \mathcal S \}$ is a basis of $\tau$.
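For a setting where everything is computable, here is a small sketch (illustrative only, on a finite set rather than $\mathbb{R}$) that builds a topology from a subbase by closing under finite intersections, giving the base $\mathcal B_S$, and then under unions:

```python
from itertools import combinations, chain

def topology_from_subbase(X, S):
    """Generate the topology on a finite set X with subbase S:
    close S under finite intersections (a base), then under unions."""
    X = frozenset(X)
    S = [frozenset(s) for s in S]
    # base: all finite intersections; the empty intersection is X by convention
    base = {X}
    for r in range(1, len(S) + 1):
        for combo in combinations(S, r):
            base.add(frozenset.intersection(*combo))
    # topology: the empty set plus all unions of base elements
    tau = {frozenset()}
    for r in range(1, len(base) + 1):
        for combo in combinations(base, r):
            tau.add(frozenset(chain.from_iterable(combo)))
    return tau

# Subbase {1,2}, {2,3} on X = {1,2,3}: the base adds {2}, unions add nothing new
tau = topology_from_subbase({1, 2, 3}, [{1, 2}, {2, 3}])
```

This brute-force closure is exponential in the number of sets, so it is only a demonstration of the definition, not an efficient algorithm.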
The Weak Topology Induced by F
Let $X$ be any nonempty set and let $\mathcal F$ be a collection of real-valued functions defined on $X$. For each $\epsilon > 0$, each finite subcollection $F \subseteq \mathcal F$, and each $x \in X$ let:(1)
$$N_{\epsilon, F}(x) = \{ y \in X : |f(y) - f(x)| < \epsilon \: \mathrm{for \: all} \: f \in F \}$$
Then for each $x \in X$, the following collection of sets will form a base at $x$:(2)
$$\mathcal B_x = \{ N_{\epsilon, F}(x) : \epsilon > 0, F \subseteq \mathcal F \: \mathrm{finite} \}$$
The topology on $X$ that is generated by these local bases has a special name.
Proposition 1: Let $X$ be a nonempty set and let $\mathcal F$ be a collection of functions on $X$. For each $x \in X$, the collection $\{ N_{\epsilon, F}(x) : \epsilon > 0, x \in X, F \subseteq \mathcal F \: \mathrm{finite} \}$ is a base for some topology $\tau$ on $X$. Recall from the Bases for a Topology page that a collection $\mathcal B$ of subsets of a set $X$ is a base for some topology $\tau$ on $X$ if and only if $X$ is covered by $\mathcal B$ and for all $B_1, B_2 \in \mathcal B$ and $x \in B_1 \cap B_2$ there exists a $B \in \mathcal B$ such that $x \in B \subseteq B_1 \cap B_2$. We use this result in proving the proposition above. Proof: Let $f \in \mathcal F$ and let $\epsilon = 1 > 0$. Then $x \in N_{1, \{ f \}}(x)$ for each $x \in X$ and so $X = \bigcup_{x \in X} N_{1, \{ f \}}(x)$ which shows that $X$ is covered by $\{ N_{\epsilon, F}(x) : \epsilon > 0, x \in X, F \subseteq \mathcal F \: \mathrm{finite} \}$. Secondly let $\epsilon_1, \epsilon_2 > 0$, $F_1, F_2 \subseteq \mathcal F$ be finite, and let $x_1, x_2 \in X$. Suppose that:(3)
$$x \in N_{\epsilon_1, F_1}(x_1) \cap N_{\epsilon_2, F_2}(x_2)$$
Then by definition $x \in N_{\epsilon_1, F_1}(x_1)$ and $x \in N_{\epsilon_2, F_2}(x_2)$ and so:(4)
$$|f(x) - f(x_1)| < \epsilon_1 \: \mathrm{for \: all} \: f \in F_1 \quad \mathrm{and} \quad |g(x) - g(x_2)| < \epsilon_2 \: \mathrm{for \: all} \: g \in F_2$$
Let $\delta > 0$ be such that:(5)
$$\delta \leq \min \left ( \min_{f \in F_1} [ \epsilon_1 - |f(x) - f(x_1)| ], \: \min_{g \in F_2} [ \epsilon_2 - |g(x) - g(x_2)| ] \right )$$
Now we have that $\delta > 0$ and $F_1 \cup F_2$ is finite (since $F_1$ and $F_2$ are both finite sets). So if $x' \in N_{\delta, F_1 \cup F_2}(x)$ we have that for all $f \in F_1$:(6)
$$|f(x') - f(x_1)| \leq |f(x') - f(x)| + |f(x) - f(x_1)| < \delta + |f(x) - f(x_1)| \leq \epsilon_1$$
And we have that for all $g \in F_2$ that:(7)
$$|g(x') - g(x_2)| \leq |g(x') - g(x)| + |g(x) - g(x_2)| < \delta + |g(x) - g(x_2)| \leq \epsilon_2$$
Therefore $x' \in N_{\epsilon_1, F_1}(x_1) \cap N_{\epsilon_2, F_2}(x_2)$. Thus:(8)
$$N_{\delta, F_1 \cup F_2}(x) \subseteq N_{\epsilon_1, F_1}(x_1) \cap N_{\epsilon_2, F_2}(x_2)$$
So indeed, $\mathcal B$ is a base for some topology $\tau$ on $X$. $\blacksquare$
Proposition 1 tells us that the collection of sets $\{ N_{\epsilon, F}(x) : \epsilon > 0, x \in X, F \subseteq \mathcal F \: \mathrm{finite} \}$ generates a topology on $X$. This topology will be given a special name which we define below.
Definition: Let $X$ be a nonempty set and let $\mathcal F$ be a collection of real-valued functions on $X$. The Weak Topology Induced by $\mathcal F$ or the $\mathcal F$-Weak Topology on $X$ is the topology generated by the local bases for each $x \in X$ given by $\{ N_{\epsilon, F}(x) : \epsilon > 0, F \subseteq \mathcal F \: \mathrm{finite} \}$. It is the weakest topology (the initial topology) which makes every function in $\mathcal F$ continuous with respect to the topology. Local Base for x
For each $x \in X$, a local base for $x$ in the $\mathcal F$-weak topology consists of sets of the form:(9)
$$N_{\epsilon, F}(x) = \{ y \in X : |f(y) - f(x)| < \epsilon \: \mathrm{for \: all} \: f \in F \}, \quad \epsilon > 0, \: F \subseteq \mathcal F \: \mathrm{finite}$$
F-Weak Convergence
It is useful to characterize what it means for a sequence of points $(x_n)$ to converge to a point $x$ with respect to the $\mathcal F$-Weak topology on $X$. We define this below in the following proposition.
Proposition 2: Let $X$ be equipped with the $\mathcal F$-weak topology. A sequence of points $(x_n)$ $\mathcal F$-weakly converges to $x \in X$ if and only if $\displaystyle{\lim_{n \to \infty} f(x_n) = f(x)}$ for every $f \in \mathcal F$. Recall that if $(X, \tau)$ is a topological space then a sequence of points $(x_n)$ converges to $x \in X$ if for every $U \in \tau$ with $x \in U$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then $x_n \in U$.
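As an illustration of the definitions (with an assumed family $\mathcal F = \{ \sin, \arctan \}$ on $X = \mathbb{R}$, chosen only for the demo), the membership test below implements $N_{\epsilon, F}(x)$, and the loop checks the content of Proposition 2: every basic neighborhood of $x$ eventually contains the tail of a sequence whose images converge under each $f \in F$:

```python
import math

def in_neighborhood(x, y, F, eps):
    """y lies in N_{eps,F}(x) iff |f(y) - f(x)| < eps for every f in F."""
    return all(abs(f(y) - f(x)) < eps for f in F)

# x_n = 1 + 1/n converges F-weakly to x = 1 for F = {sin, arctan}
F = [math.sin, math.atan]
x = 1.0
xs = [1.0 + 1.0 / n for n in range(1, 500)]
for eps in (0.5, 0.05, 0.005):
    # find an index from which the whole tail stays inside N_{eps,F}(x)
    tail_start = next(i for i in range(len(xs))
                      if all(in_neighborhood(x, y, F, eps) for y in xs[i:]))
    assert all(in_neighborhood(x, y, F, eps) for y in xs[tail_start:])
```

Only finitely many functions and finitely many sequence terms are checked, so this demonstrates the definition rather than proving convergence.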
I'm reading the introduction of
An Introduction to the Trace Formula by James Arthur and wanted to understand something in the introduction.
Let $H$ be a unimodular locally compact Hausdorff group, and $\Gamma$ a discrete subgroup of $H$. Let $\mathscr H = L^2(\Gamma \backslash H)$ be the Hilbert space of measurable functions $\phi: \Gamma \backslash H \rightarrow \mathbb C$ satisfying $||\phi||^2 = \int\limits_{\Gamma \backslash H} |\phi(h)|^2 \, dh < \infty$.
Assume $\Gamma \backslash H$ is compact. Fix $f \in C_c^{\infty}(H)$, and let $K \in L^2(\Gamma \backslash H \times \Gamma \backslash H)$ be the function defined by
$$K(x,y) = \sum\limits_{\gamma \in \Gamma} f(x^{-1}\gamma y)$$ which is a finite sum. To this kernel we can associate a compact operator $R(f)$ on $\mathscr H$ defined by
$$[R(f)\phi](x) = \int\limits_{\Gamma \backslash H} K(x,y)\phi(y)dy$$
It can be shown that this integral is equal to just $\int\limits_H f(y)\phi(xy)dy$.
Arthur arranges that $R(f)$ is a compact self-adjoint operator, and claims the decomposition of $\mathscr H$ into a Hilbert space direct sum of irreducible subrepresentations (under the action of $H$ by right translation) follows from the spectral theorem for self-adjoint operators. How does this follow?
Research | Open Access | Published: Global existence of solution for multidimensional generalized double dispersion equation. Boundary Value Problems, volume 2019, Article number: 155 (2019)
Abstract
In this paper, we study the Cauchy problem of multidimensional generalized double dispersion equation. To prove the global existence of solutions, we introduce some new methods and ideas, and fill some gaps in the established results.
Introduction
In this paper, we study the Cauchy problem of the multidimensional generalized double dispersion equation
where
k is a constant. Equation (1.1) is called the generalized damped multidimensional Boussinesq equation in [1], which is related to some Boussinesq equations and their generalized forms. Therefore we may consider this class of equations as an approach for propagation of long waves on shallow water; see [1,2,3,4,5,6,7,8,9,10,11,12] and references therein. Thus, in the present paper, we call Eq. (1.1) the multidimensional generalized double dispersion equation, since in the one-dimensional case, Eq. (1.1) is the generalized double dispersion equation
which was introduced to consider the model of interaction between the surface of a nonlinear elastic rod whose material is hyperelastic (e.g., the Murnaghan material). For the derivation and research background, we refer the readers to [3,4,5,6,7,8,9,10,11,12] and references therein. When \(f'(u)\) is bounded below, the initial boundary value problem and the Cauchy problem of Eq. (1.3) were considered in [5] and [6], respectively. Later the corresponding initial boundary value problem was studied in [3], where \(f(u)\) is either of general convex form or the particular case \(f(u)=|u|^{p}\), \(p>1\). These conditions satisfied by \(f(u)\) enabled the authors to give some sharp conditions on global existence for the initial boundary value problem of Eq. (1.3) and some invariant sets by the variational principle. However, for the multidimensional generalized double dispersion equation, the local and global well-posedness was still an open question before the publication of [1]. In [1] the authors managed to deal with the Cauchy problem of the multidimensional nonlinear evolution equation (1.1)–(1.2). They first gave the existence of local solutions. By some estimates of a local solution they obtained the existence of a global solution, even though some gaps remain: for example, it is not clear how to define an appropriate space that contains \((u_{tt}, u_{t})\), which also needs additional restrictions on other parameters like s. Also, in the proof of [1, Lemma 3.3] the inequality \(\|u^{\rho }\|_{\infty } \le C\|\Delta u\|\) does not hold if \(2\le n\le 4\). Moreover, the energy also needs further estimation. Despite these gaps, the work of [1] motivated many studies such as [2,3,4,5,6,7,8,9,10,11,12,13].
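As a remark on why an inequality of the form \(\|u^{\rho }\|_{\infty } \le C\|\Delta u\|\) must fail in low dimensions (our own sketch, not taken from [1]): for \(2\le n<4\) a scaling argument already rules it out.

```latex
% Scaling sketch (illustrative): set u_lambda(x) = u(lambda x). Then
\[
  \|u_\lambda^{\rho}\|_{\infty} = \|u^{\rho}\|_{\infty},
  \qquad
  \|\Delta u_\lambda\|_{L^2} = \lambda^{\,2-\frac{n}{2}}\,\|\Delta u\|_{L^2}.
\]
% For n < 4 the exponent 2 - n/2 is positive, so letting lambda -> 0
% sends the right-hand side of the claimed inequality to 0 while the
% left-hand side stays fixed; hence no constant C can work.
```

In the critical case \(n=4\) the inequality fails as well, since the embedding \(H^{2}(\mathbb{R}^{4})\hookrightarrow L^{\infty }\) itself is false.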
So in the present paper, we provide more insight into overcoming the difficulties in the multidimensional case and fill the gaps in [1] to obtain the global existence of a solution for problem (1.1)–(1.2).
Proposition 1.1
([1])
Let \(f(u)\in C^{[s]+1} (\mathbb{R})\) for some \(s> \frac{n}{2}\). Then for any \(u_{0}\in H^{s}\) and \(u_{1}\in H^{s-1}\), problem (1.1)–(1.2) admits a unique local solution \(u(t)\in C ( [0,T^{0});H^{s} ) \cap C^{1} ( [0,T^{0});H ^{s-1} )\) with \(u_{tt}\in C ( [0,T^{0});H^{s-2} )\), where \(T^{0}\) is the maximal existence time of \(u(t)\). Moreover, if then \(T^{0}=+\infty \).

Existence of global solution

Lemma 2.1

Let the assumptions in Proposition 1.1 hold. Assume that \(s\ge \frac{3}{2}\), \((-\Delta )^{-\frac{1}{2}}u_{1} \in L^{2}\), and \(F(u_{0})\in L^{1}\). Then for the solution u given in Proposition 1.1, we have where

Lemma 2.2

Let the assumptions of Lemma 2.1 hold. Assume that \(k\ge 0\) and that either \(F(u)\ge 0\) for all \(u\in \mathbb{R}\) or \(f(0)=0\) and \(\inf_{u\in \mathbb{R}^{n}} f'(u)=C_{0}>-1\). Then for the solution u given in Proposition 1.1, we have where \(C\ge 2\) is a positive constant.

Proof

(i)

(ii)
If \(k\ge 0\), \(f(0)=0\), and \(\inf_{u\in \mathbb{R}^{n}} f'(u)=C _{0}>-1\), then letting \(k_{0}=\max \{-C_{0},0\}\) and \(f_{1}(u)=f(u)+k _{0} u\), we have \(0\le k_{0}<1\), \(f_{1}(0)=0\), and \(f_{1}'(u)\ge 0\). Hence \(F_{1}(u)=\int _{0}^{u} f_{1}(\tau )\,\mathrm{d}\tau \ge 0\). Substituting \(\int _{\mathbb{R}^{n}} F(u) \,\mathrm{d}x= \int _{\mathbb{R}^{n}} F_{1}(u) \,\mathrm{d}x -\frac{k_{0}}{2} \|u\|^{2}\) into (2.1) yields (2.2) with \(C =\frac{2}{1-k_{0}}\). □
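To make the construction in (ii) concrete, here is a small numerical sanity check (illustrative only; the nonlinearity \(f\) below is a hypothetical example, not one from the paper). With \(f(u)=u^{3}-\frac{1}{2}u\) we get \(C_{0}=-\frac{1}{2}>-1\), \(k_{0}=\frac{1}{2}\), \(f_{1}(u)=u^{3}\), and \(F_{1}(u)=u^{4}/4\ge 0\):

```python
import numpy as np

# Hypothetical example nonlinearity: f(0) = 0 and inf f'(u) = -1/2 > -1.
f = lambda u: u**3 - 0.5 * u

C0 = -0.5                      # inf of f'(u) = 3u^2 - 1/2, attained at u = 0
k0 = max(-C0, 0.0)             # k0 = 1/2, so 0 <= k0 < 1
f1 = lambda u: f(u) + k0 * u   # f1(u) = u^3: f1(0) = 0 and f1'(u) >= 0

def F1(u, n=2001):
    # F1(u) = \int_0^u f1(tau) dtau, via the trapezoid rule
    taus = np.linspace(0.0, u, n)
    vals = f1(taus)
    return float(((vals[:-1] + vals[1:]) / 2 * np.diff(taus)).sum())

# f1 nondecreasing with f1(0) = 0 forces F1 >= 0, as used in the proof:
for u in np.linspace(-3.0, 3.0, 61):
    assert F1(u) >= -1e-12
```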
Theorem 2.3

Let \(n=1,2\), \(f(u)\in C^{2}(\mathbb{R})\), \(u_{0}\in H^{\frac{3}{2}}\), \(u_{1}\in H^{\frac{1}{2}}\), \((-\Delta )^{-\frac{1}{2}}u _{1}\in L^{2}\), and \(F(u_{0})\in L^{1}\). Assume that \(k\ge 0\) and that either \(F(u)\ge 0\) for any \(u\in \mathbb{R}\) or \(f(0)=0\) and \(\inf_{u\in \mathbb{R}^{n}} f'(u)>-1\). Then problem (1.1)–(1.2) admits a unique global solution \(u(t)\in C ( [0,\infty );H^{1} ) \cap C^{1} ( [0,\infty );L ^{2} )\).

Proof
Setting \(s=\frac{3}{2}\), by Proposition 1.1 we have \(\frac{3}{2}> \frac{n}{2}\) for \(n=1,2\). Hence by Proposition 1.1 problem (1.1)–(1.2) admits a unique local solution \(u\in C ( [0,T^{0});H^{\frac{3}{2}} ) \cap C^{1} ( [0,T^{0});H^{\frac{1}{2}} )\subset C ( [0,T^{0});H ^{1} ) \cap C^{1} ( [0,T^{0});L^{2} )\). Furthermore, from Lemma 2.2 we get
which, together with Proposition 1.1, gives \(T^{0}=+\infty \). □
Theorem 2.4

Let \(n=3\) and \(s>\frac{3}{2}\) or \(n=4\) and \(s>2\), and let \(f(u)\in C ^{[s]+1}(\mathbb{R})\), \(u_{0}\in H^{s}\), \(u_{1}\in H^{s-1}\), \((-\Delta )^{-\frac{1}{2}}u_{1}\in L^{2}\), and \(F(u_{0})\in L^{1}\). Assume that \(k\ge 0\) and that either \(F(u)\ge 0\) for any \(u\in \mathbb{R}\) or \(f(0)=0\) and \(\inf_{u\in \mathbb{R}^{n}} f'(u)>-1\). Then problem (1.1)–(1.2) admits a unique global solution \(u(t)\in C ( [0,\infty );H^{1} ) \cap C^{1} ( [0,\infty );L^{2} )\).

Proof
First, we have \(s>\frac{n}{2}\) for \(n=3,4\). Hence by Proposition 1.1 problem (1.1)–(1.2) admits a unique local solution \(u\in C ( [0,T^{0});H^{s} ) \cap C^{1} ( [0,T ^{0});H^{s-1} )\subset C ( [0,T^{0});H^{1} ) \cap C ^{1} ( [0,T^{0});L^{2} )\). The remainder of this proof is the same as that in the proof of Theorem 2.3. □
In light of the argument developed in the present paper, we can conclude that for problem (1.1)–(1.2), if we adopt the approach of [1], that is, first prove the existence of a local solution and then get the existence of a global solution, then we can only obtain the existence of global \(H^{1}\) solutions. Moreover, as in Theorems 2.3 and 2.4, we see that to obtain the global existence of an \(H^{1}\) solution from the existence of a local solution, it is reasonable to assume that \(u_{0}\in H^{s}\) for some \(s\ge \frac{3}{2}\). Namely, our argument relies on the assumption of a higher-order regularity of the initial data than that of the corresponding solution. Obviously, we cannot get the same regularity for both the initial data and the corresponding solution, which remains an open question for further investigation.
References

1. Polat, N., Ertaş, A.: Existence and blow-up of solution of Cauchy problem for the generalized damped multidimensional Boussinesq equation. J. Math. Anal. Appl. 349(1), 10–20 (2009)
2. Liu, Y., Xu, R.: Global existence and blow up of solutions for Cauchy problem of generalized Boussinesq equation. Phys. D, Nonlinear Phenom. 237(6), 721–731 (2008)
3. Liu, Y., Xu, R.: Potential well method for Cauchy problem of generalized double dispersion equations. J. Math. Anal. Appl. 338(2), 1169–1187 (2008)
4. Lian, W., Xu, R.: Global well-posedness of nonlinear wave equation with weak and strong damping terms and logarithmic source term. Adv. Nonlinear Anal. 9(1), 613–632 (2020)
5. Chen, G., Wang, Y., Wang, S.: Initial boundary value problem of the generalized cubic double dispersion equation. J. Math. Anal. Appl. 299(2), 563–577 (2004)
6. Wang, S., Chen, G.: Cauchy problem of the generalized double dispersion equation. Nonlinear Anal. 64(1), 159–173 (2006)
7. Xu, R., Liu, Y.: Global existence and nonexistence of solution for Cauchy problem of multidimensional double dispersion equations. J. Math. Anal. Appl. 359(2), 739–751 (2009)
8. Xu, R., Liu, Y., Liu, B.: The Cauchy problem for a class of the multidimensional Boussinesq-type equation. Nonlinear Anal. 74(6), 2425–2437 (2011)
9. Polat, N., Piskin, E.: Asymptotic behavior of a solution of the Cauchy problem for the generalized damped multidimensional Boussinesq equation. Appl. Math. Lett. 25(11), 1871–1874 (2012)
10. Wang, H., Wang, S.: Global existence and asymptotic behavior of solution for the Rosenau equation with hydrodynamical damped term. J. Math. Anal. Appl. 401(2), 763–773 (2013)
11. Wang, S., Da, F.: On the asymptotic behaviour of solution for the generalized double dispersion equation. Appl. Anal. 92(6), 1179–1193 (2013)
12. Wang, Y., Wei, C.: Asymptotic profile of global solutions to the generalized double dispersion equation via the nonlinear term. Z. Angew. Math. Phys. 69(2), 34 (2018)
13. Saanouni, T.: Global well-posedness of some high-order focusing semilinear evolution equations with exponential nonlinearity. Adv. Nonlinear Anal. 7(1), 67–84 (2017)

Acknowledgements
The author thanks the referees for their constructive suggestions.
Availability of data and materials
Not applicable.
Funding
This work was supported by the Natural Science Foundation of Jiangsu Province (BK20160564).
Ethics declarations

Competing interests
The author declares that there are no competing interests.
Additional information

Abbreviations
Not applicable.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Both the set of rational numbers $\mathbb{Q}$ and its complement are dense in $\mathbb{R}$, but the relationship between them is very asymmetric. For instance, the rationals are countable and have Lebesgue measure 0, whereas the irrationals are uncountable and have infinite Lebesgue measure. Is it possible to decompose the real numbers into dense subsets in a more symmetric way, so that $\mathbb{R}$ can be written as a union of finitely many disjoint sets which can be mapped into each other by translation or reflection (i.e. are congruent)?
$$A=\bigcup_{n\in\mathbb Z}[2n,2n+1)$$ $$B=\bigcup_{n\in\mathbb Z}[2n+1,2n+2)$$ $$\mathbb R= [(A\cap\mathbb Q)\cup(B\setminus\mathbb Q)] \cup [(B\cap\mathbb Q)\cup(A\setminus\mathbb Q)] $$ The translation $x\mapsto x+1$ maps $[(A\cap\mathbb Q)\cup(B\setminus\mathbb Q)]$ onto $[(B\cap\mathbb Q)\cup(A\setminus\mathbb Q)].$
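This decomposition is easy to sanity-check computationally (an illustrative sketch; a floating-point number cannot certify irrationality, so the rationality flag is supplied explicitly):

```python
from fractions import Fraction
import math

def in_A(x):
    # A = union of [2n, 2n+1): floor(x) is even
    return math.floor(x) % 2 == 0

def part(x, is_rational):
    """Return 1 or 2 according to the decomposition above:
    part 1 = (A ∩ Q) ∪ (B \\ Q), part 2 = (B ∩ Q) ∪ (A \\ Q)."""
    if is_rational:
        return 1 if in_A(x) else 2
    return 2 if in_A(x) else 1

# The translation x ↦ x + 1 swaps the two parts, and x ↦ x + 2 fixes them:
samples = [(Fraction(7, 3), True), (Fraction(-5, 2), True),
           (math.sqrt(2), False), (-math.pi, False)]
for x, rat in samples:
    assert part(x + 1, rat) != part(x, rat)
    assert part(x + 2, rat) == part(x, rat)
```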
Yes, this is doable, via a construction like that of the Vitali set but for integers. Indeed, with Choice we can get an uncountably dense example! (Note that bof's answer solves the problem as stated, without using Choice at all.)
For $x, y\in\mathbb{R}$, let $x\sim y$ if $x-y\in\mathbb{Z}$. Now via Choice we can get an uncountably dense transversal $T$ for $\sim$ - that is, $T$ is uncountably dense and contains exactly one real from each $\sim$-class. (Note that $[0, 1)$ is a non-dense transversal - the existence of a transversal, full stop, does not require choice.)
Now let $$A=\{t+2k: t\in T, k\in\mathbb{Z}\},\quad B=\{t+2k+1: t\in T, k\in\mathbb{Z}\}.$$ It's not hard to see that $B$ is gotten by shifting $A$ one unit (in either direction!), and that $A$ and $B$ are disjoint and cover $\mathbb{R}$. |
Let $f\colon\alpha\to\beta$ be a function between ordinals $\alpha,\beta$.
I want to define a function $g\colon\alpha\to\gamma$ for some ordinal $\gamma$ so that
$(\forall \eta < \alpha)(g(\eta) = \max\{f(\eta),\bigcup_{\xi < \eta} (g(\xi)+1)\})$
This construction appears in Krzysztof Ciesielski's book "Set Theory for the Working Mathematician". The author doesn't specify the process of obtaining such a function from $f$, simply saying that it is defined by "transfinite induction".
My thoughts on the matter:
From what I can see, we need the following transfinite recursion theorem:
Transfinite Recursion.Let $G$ be a class function from the class of all sets into the class of all sets. Then there is a class function $F$ from the class of ordinals to the class of all sets so that for any ordinal $\eta$ we have
$$F(\eta) = G(F{\restriction}_{\eta}).$$
For instance, one could start like this: consider a class function $G$ so that for any set $x$, $G(x) = \bigcup\mathrm{ran}(x)$ if $x$ is a relation and $G(x) = \varnothing$ otherwise.
Then there is a class function $F$ so that for any ordinal $\alpha$, $F(\eta) = G(F{\restriction}_{\eta}) = \bigcup\mathrm{ran}(F{\restriction}_{\eta}) = \bigcup_{\xi < \eta} F(\xi)$.
But then we would need a way to obtain a set $\bigcup_{\xi < \eta} (F(\xi) + 1)$ from a set $\bigcup_{\xi < \eta} F(\xi)$ for every ordinal $\eta$, and I currently don't see it. Perhaps it was a wrong strategy to begin with.
Moreover, even if we had a class function $F$ sending each ordinal $\eta$ to $\bigcup_{\xi < \eta} F(\xi)$, we would still need to somehow obtain a class function $H$ sending each ordinal $\eta$ to $\max\{f(\eta),\bigcup_{\xi < \eta} F(\xi) \}$.
Of course, any such class function $H$ could be restricted to $\alpha$ to obtain the desired function $g$.
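For intuition only (not part of the question): when $\alpha$ is finite the recursion is ordinary recursion on the natural numbers, where $\bigcup_{\xi < \eta}(g(\xi)+1)$ is just the maximum of the $g(\xi)+1$, with the empty union equal to $0 = \varnothing$. A quick sketch:

```python
def g_values(f_vals):
    """Given f: n -> N as a list, build g with
    g(k) = max(f(k), max_{j<k}(g(j) + 1)) -- the finite case of the
    transfinite recursion above (the sup over an empty set is 0)."""
    g = []
    for fk in f_vals:
        prev = max((gj + 1 for gj in g), default=0)
        g.append(max(fk, prev))
    return g

# g is strictly increasing and dominates f pointwise:
f_vals = [3, 1, 4, 1, 5]
g = g_values(f_vals)
assert all(a < b for a, b in zip(g, g[1:]))
assert all(fk <= gk for fk, gk in zip(f_vals, g))
```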
Linear Operators and Bounded Linear Operators
Recall from the Linear Functionals and Bounded Linear Functionals page that if $X$ is a linear space then a linear functional on $X$ is a function $T : X \to \mathbb{R}$ such that for all $x, y \in X$ and for all $\alpha \in \mathbb{R}$ we have that:
$T(x + y) = T(x) + T(y)$. $T(\alpha x) = \alpha T(x)$.
Furthermore, if $(X, \| \cdot \|)$ is a normed linear space then a linear functional $T$ on $X$ was said to be bounded if there exists an $M > 0$ such that $|T(x)| \leq M \| x \|$ for all $x \in X$.
The concept of a linear functional and bounded linear functional can be extended if the codomain is changed from $\mathbb{R}$ to another linear space, $Y$. Such functions are called linear operators from $X$ to $Y$.
Definition: Let $X$ and $Y$ be linear spaces. A Linear Operator from $X$ to $Y$ is a function $T : X \to Y$ such that: a) $T(x_1 + x_2) = T(x_1) + T(x_2)$ for all $x_1, x_2 \in X$. b) $T(\alpha x) = \alpha T(x)$ for all $x \in X$ and all $\alpha \in \mathbb{R}$.
The Set of All Linear Operators from $X$ to $Y$ is denoted by $\mathcal L (X, Y)$.
Definition: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces and let $T$ be a linear operator from $X$ to $Y$. $T$ is said to be Bounded if there exists an $M > 0$ such that $\| T(x) \|_Y \leq M \| x \|_X$ for every $x \in X$. The Set of All Bounded Linear Operators from $X$ to $Y$ is denoted by $\mathcal B(X, Y)$. Sometimes the subscripts on the norms for $X$ and $Y$ will be omitted if no ambiguity arises in which norm we are talking of.
Recall that a linear functional $T$ on $X$ is bounded if and only if $T$ is continuous on $X$ and if and only if $T$ is continuous at $0$. The same result is true for linear operators from $X$ to $Y$.
Proposition 1: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces and let $T$ be a linear operator from $X$ to $Y$. Then the following statements are equivalent: a) $T$ is a bounded linear operator. b) $T$ is continuous on $X$. c) $T$ is continuous at $0 \in X$. Proof: $(a) \Rightarrow (b)$ Suppose that $T$ is a bounded linear operator. Then there exists an $M > 0$ such that $\| T(x) \|_Y \leq M \| x \|_X$ for all $x \in X$. Let $x_0 \in X$. Let $\epsilon > 0$ be given and choose $\delta = \frac{\epsilon}{M} > 0$. Then if $\| x - x_0 \|_X < \delta$ we have that: Therefore $T$ is continuous at $x_0$. Since $x_0$ is arbitrary, $T$ is continuous on $X$. $(b) \Rightarrow (c)$ Suppose that $T$ is continuous on $X$. Then $T$ is trivially continuous at $0 \in X$. $(c) \Rightarrow (a)$ Suppose that $T$ is continuous at $0 \in X$. Then for $\epsilon = 1 > 0$ there exists a $\delta > 0$ such that if $\| x \|_X \leq \delta$ then: If $x \neq 0$ then $\frac{\delta x}{\| x \|_X}$ has norm $\delta$. Therefore: Which implies that for every $x \in X$ we have that: So $T: X \to Y$ is a bounded linear operator. $\blacksquare$ |
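In finite dimensions every linear operator between normed spaces is bounded; the following sketch (not part of the page) illustrates boundedness numerically, using a matrix $A$ as the operator $T(x) = Ax$ and taking $M$ to be the operator norm of $A$ (its largest singular value):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))          # T : R^5 -> R^3 given by T(x) = A x
M = np.linalg.norm(A, 2)             # operator norm = largest singular value

for _ in range(100):
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    # linearity: T(x + y) = T(x) + T(y) and T(a x) = a T(x)
    assert np.allclose(A @ (x + y), A @ x + A @ y)
    assert np.allclose(A @ (2.5 * x), 2.5 * (A @ x))
    # boundedness: ||T(x)|| <= M ||x||
    assert np.linalg.norm(A @ x) <= M * np.linalg.norm(x) + 1e-9
```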
Let's go back and consider the Biot-Savart Law for a filamentary current:
$$B(r) \propto \int_C I(r') \times \frac{r-r'}{4\pi |r-r'|^3} \, dr'$$
As I've written it, $r'$ is some position on the coil (we integrate over the coil to get the whole magnetic field). $r$ is the position we want to find the field at--somewhere inside the coil's empty body.
Let's consider $r$ directly at the center of the coil. As we start integrating, we slide along the coil. I'll traverse the loop clockwise, so let's have $r'$ start at the top of the battery, come across the top of the page, and start traversing the topmost loop of the coil.
As we start on the rightmost part of the topmost loop, the instantaneous direction of current is out of the page. If we take $r = 0$, so that the center of the loop is the origin, then all we have is the vector $I(r') \times [-r'/|r'|^3]$ as the integrand. $-r'$ points inward toward the center of the coil. It should be clear that the resulting magnetic field from a small piece of the wire at this point in the coil is both downward and to the left.
Let's consider what happens when we get to the leftmost part of the topmost loop. The vector $r'$ is downward and to the left. The current is into the page. The resulting magnetic field is downward and to the right.
In general, as we traverse the topmost loop, each small piece of wire adds a magnetic field that is (a) downward and (b) pointing out of the coil. (I must remind that we're talking about the magnetic field only at the center of the whole coil right now).
If the topmost loop were rotationally symmetric, we could argue that any components that point away from the central axis of the coil must cancel. The real coil does not have this symmetry, but it's "pretty close" to being rotationally symmetric, and any such real components ought to be small.
All the other loops work basically the same way, contributing only net downward magnetic field when integrated over a whole circular loop.
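This cancellation is easy to verify numerically for an idealized circular loop (an illustrative sketch, not part of the original answer): summing the Biot-Savart integrand $dl' \times (r-r')/|r-r'|^3$ at a point on the loop's axis, the per-element transverse contributions are individually nonzero but cancel, leaving only the axial component.

```python
import numpy as np

# Ideal circular loop of radius R in the x-y plane; field point on the axis.
N, R, h = 10_000, 1.0, 0.5
phi = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
rp = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)   # r'
dl = np.stack([-np.sin(phi), np.cos(phi), np.zeros(N)], axis=1) * (2 * np.pi * R / N)
r = np.array([0.0, 0.0, h])

sep = r - rp
integrand = np.cross(dl, sep) / np.linalg.norm(sep, axis=1, keepdims=True) ** 3
B = integrand.sum(axis=0)   # proportional to the magnetic field at r

assert np.abs(integrand[:, :2]).max() > 0      # each piece has transverse parts
assert np.allclose(B[:2], 0.0, atol=1e-9)      # ...which cancel in the sum
assert np.isclose(B[2], 2 * np.pi * R**2 / (R**2 + h**2) ** 1.5)  # axial part survives
```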
For some reason, you referred to the magnetic field contribution from different points as being clockwise or anticlockwise. I do not understand this. This coil does not create any kind of closed magnetic loops that can be seen on this scale. The field is more clearly described using fixed directions (into or out of the page, left or right, down or up).
I think it's this reason that you thought "clockwise and anticlockwise should cancel". But you described it more correctly in saying that at both points, there is a downward component; the only thing that can cancel a downward component is an upward component! You were right to say that at points where the current moves left, the field's direction is down and out of the page, and that when the current moves right the field direction is down and into the page. It's just that only the into/out of page components cancel, and net downward is left behind.
1. Homework Statement
Let [itex]G_1[/itex] and [itex]G_2[/itex] be groups with normal subgroups [itex]H_1[/itex] and [itex]H_2[/itex], respectively. Further, we let [itex]\iota_1 : H_1 \rightarrow G_1[/itex] and [itex]\iota_2 : H_2 \rightarrow G_2[/itex] be the injection homomorphisms, and [itex]\nu_1 : G_1 \rightarrow G_1/H_1[/itex] and [itex]\nu_2 : G_2 \rightarrow G_2/H_2[/itex] be the quotient epimorphisms.
Given that there exists a homomorphism [itex]\sigma : G_1 \rightarrow G_2[/itex], show that there exists a unique mapping [itex]\overline{\sigma} : G_1/H_1 \rightarrow G_2/H_2[/itex] such that [itex]\overline{\sigma} \circ \nu_1 = \nu_2 \circ \sigma[/itex] if and only if [itex]\sigma[H_1] \subset H_2[/itex]. If such a [itex]\overline{\sigma}[/itex] exists, it is a homomorphism.
2. Homework Equations
There aren't any equations, as this is a proof.
3. The Attempt at a Solution
I know that since [itex]\nu_1[/itex] and [itex]\nu_2[/itex] are epimorphisms, they are surjective homomorphisms. So [itex]Im(\nu_1)=G_1/H_1[/itex] and [itex]Im(\nu_2)=G_2/H_2[/itex]. But I really don't see how to get this proof off the ground. Please help get me started.
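A concrete finite example may make the diagram tangible (purely illustrative; these particular groups are my own hypothetical choice): take [itex]G_1 = \mathbb{Z}_8[/itex], [itex]H_1 = \{0,4\}[/itex], [itex]G_2 = \mathbb{Z}_4[/itex], [itex]H_2 = \{0,2\}[/itex], and [itex]\sigma(x) = x \bmod 4[/itex], so [itex]\sigma[H_1] = \{0\} \subset H_2[/itex]. The condition [itex]\overline{\sigma} \circ \nu_1 = \nu_2 \circ \sigma[/itex] then pins down [itex]\overline{\sigma}[/itex] on every coset:

```python
# Cosets represented as frozensets; all arithmetic modulo the group order.
G1, H1 = 8, {0, 4}                  # G1 = Z_8, H1 = {0, 4}
G2, H2 = 4, {0, 2}                  # G2 = Z_4, H2 = {0, 2}
sigma = lambda x: x % 4             # a homomorphism Z_8 -> Z_4

nu1 = lambda x: frozenset((x + h) % G1 for h in H1)   # the coset x + H1
nu2 = lambda y: frozenset((y + h) % G2 for h in H2)   # the coset y + H2

assert all(sigma(h) in H2 for h in H1)     # sigma[H1] is contained in H2

# sigma-bar is well defined: every element of one H1-coset lands in the
# same H2-coset, so sigma_bar(nu1(x)) := nu2(sigma(x)) does not depend
# on the chosen representative.
sigma_bar = {nu1(x): nu2(sigma(x)) for x in range(G1)}
for x in range(G1):
    assert sigma_bar[nu1(x)] == nu2(sigma(x))          # the square commutes
```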
The next question reads as follows.
Prove that there exists a unique mapping [itex]\sigma^{\prime} : H_1 \rightarrow H_2[/itex] such that [itex]\iota_2 \circ \sigma^{\prime} = \sigma \circ \iota_1[/itex] if and only if [itex]\sigma[H_1] \subset H_2[/itex]. If such a [itex]\sigma^{\prime}[/itex] exists, it is a homomorphism.
Nilpotent Groups
Definition: Let $G$ be a group and let $1$ denote the identity in $G$. The Upper Ascending Central Series of $G$ is the subnormal series $\{ 1 \} = Z_0 \triangleleft Z_1 \triangleleft ... \triangleleft Z_k \triangleleft ... \triangleleft G$ where $Z_1 = Z(G)$ and all other groups $Z_{i+1}$ are defined such that $Z_{i+1}/Z_i = Z(G/Z_i)$.
Recall that if $G$ is a group then $Z(G)$ denotes the center of $G$, which is the set of all points in $G$ that commute with every other element in $G$, i.e.:(1)
\begin{align} \quad Z(G) = \{ g \in G : gh = hg, \forall h \in G \} \end{align}
Recall also that a group $G$ is abelian if and only if $G = Z(G)$, that is, every element in $G$ commutes with every element in $G$.
Definition: Let $G$ be a group. If for some least $n \in \mathbb{N}$ the upper ascending central series of $G$ terminates, i.e., $Z_n = G$, then $G$ is said to be a Nilpotent Group of Class $n$.
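As a concrete illustration (not part of the page), the upper ascending central series can be computed by brute force for a small group. The sketch below does this for the dihedral group $D_4$ of order $8$, whose series is $\{ 1 \} \triangleleft Z(D_4) \triangleleft D_4$, so $D_4$ is nilpotent of class $2$:

```python
from itertools import product

# D_4 as permutations of the square's vertices, stored as tuples.
def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                    # rotation by 90 degrees
s = (3, 2, 1, 0)                    # a reflection
G, frontier = set(), {e, r, s}
while frontier:                     # close under composition
    G |= frontier
    frontier = {compose(a, b) for a, b in product(G, G)} - G

def center(H, mul):
    return {z for z in H if all(mul(z, h) == mul(h, z) for h in H)}

# Upper central series: Z_{i+1} is the preimage of Z(G / Z_i).
Z = {e}
series = [Z]
while Z != G:
    cosets = {frozenset(compose(g, z) for z in Z) for g in G}
    cmul = lambda A, B: frozenset(compose(a, b) for a in A for b in B)
    ZQ = center(cosets, cmul)       # center of the quotient G / Z
    Z = {g for g in G if frozenset(compose(g, z) for z in Z) in ZQ}
    series.append(Z)

assert [len(H) for H in series] == [1, 2, 8]   # D_4 is nilpotent of class 2
```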
Theorem 1: Let $G$ be a group. Then $G$ is a nilpotent group of class $1$ if and only if $G$ is an abelian group. Proof: $\Rightarrow$ Suppose that $G$ is a nilpotent group of class $1$. Then the upper ascending central series of $G$ is:
\begin{align} \quad \{ 1 \} = Z_0 \trianglelefteq Z_1 = G \end{align}
And by definition, we have that $Z_1/Z_0 = Z(G)$. That is, $G = Z(G)$. So $G$ is an abelian group. $\Leftarrow$ Suppose that $G$ is an abelian group. Then $G = Z(G)$. So $\{ 1 \} \trianglelefteq G$ is an upper ascending central series of $G$ since $Z_1/Z_0 = G = Z(G)$. So $G$ is a nilpotent group of class $1$. $\blacksquare$
Theorem 2: Let $G$ be a group. If $G$ is a nilpotent group of class $c$ for any $c \in \mathbb{N}$ then $G$ is a solvable group. Proof: Let $G$ be a nilpotent group of class $c$ and consider the following ascending central series of $G$:
\begin{align} \quad \{ 1 \} = Z_0 \triangleleft Z_1 \triangleleft ... \triangleleft Z_c = G \end{align}
Consider the factor $Z_{i+1}/Z_i = Z(G/Z_i)$. Then every factor is the center of some group. But the center of a group is an abelian group. So each factor $Z_{i+1}/Z_i$ is an abelian group. Hence $G$ is a solvable group. $\blacksquare$
I keep reading about the Unification Algorithm.
What is it and why is so important to Inference Engines? Why is it so important to Computer Science?
Unification is such a fundamental concept in computer science that perhaps at time we even take it for granted. Any time we have a rule or equation or pattern and want to apply it to some data, unification is used to specialize the rule to the data. Or if we want to combine two general but overlapping rules, unification provides us with the most general combined rule. Unification is at the core of
Proof assistants such as Isabelle/HOL work on a syntactical level on a logical calculus. Imagine you have the modus ponens rule (MP)
$\qquad \displaystyle P\to Q, P\ \Longrightarrow\ Q$
and the proof goal
$\qquad \displaystyle (a \lor b) \to (c \land d), a \lor b \ \overset{!}{\Longrightarrow} c\land d$
We humans see immediately that this follows with modus ponens, but the machine has to match goal to rule
syntactically (whether you do apply rule mp or apply simp), and this is what unification does. The algorithm finds $\varphi$ with $\varphi(P) = a\lor b$ and $\varphi(Q) = c \land d$, instantiates the rule and applies it.
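To make this concrete, here is a minimal first-order unification routine (an illustrative sketch, not Isabelle's actual implementation; it omits the occurs check) that computes exactly such a $\varphi$:

```python
# Terms: variables are strings starting with an uppercase letter;
# compound terms are tuples (functor, arg1, ..., argn).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):                 # resolve a variable through the substitution
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        subst[s] = t                # (no occurs check, for brevity)
        return subst
    if is_var(t):
        subst[t] = s
        return subst
    if isinstance(s, tuple) and isinstance(t, tuple) and \
       s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Match modus ponens' premise P -> Q against (a or b) -> (c and d):
phi = unify(('imp', 'P', 'Q'),
            ('imp', ('or', 'a', 'b'), ('and', 'c', 'd')))
assert phi == {'P': ('or', 'a', 'b'), 'Q': ('and', 'c', 'd')}
```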
The good thing about assistants' methods like simp now is that if your goal is
$\qquad \displaystyle (a \lor b) \to (c \land d), a \ \overset{!}{\Longrightarrow} d$
that they will find a suitable sequence of applications of rules MP, $P \land Q \Longrightarrow P$ and $P \Longrightarrow P \lor Q$ with compatible unifications for the respective steps and solve the goal.
Notation: With $\Gamma = \{\varphi_1, \dots, \varphi_n\}$ a set of logical formulae, the notation
$\qquad \Gamma \Longrightarrow \psi$
means the following:
If I have derived/proven all formulae in $\Gamma$ (i.e. they are valid) then this rule asserts that $\psi$ is also valid.
In a sense, the rule $\Gamma \Longrightarrow \psi$ is the last step in a (long) proof for $\psi$. Proofs are nothing but chains of such rule applications.
Note that rules usually contain schematic variables ($P$ and $Q$ in the above) that can be replaced by arbitrary formulae as long as the same variable is replaced with the same formula in all instances; the result of that is the concrete rule instance (or intuitively, a proof step). This replacement is above denoted by $\varphi$ which was found by unification.
Often people use $\models$ instead of $\Longrightarrow$.
I don't think it is important to inference engines. The unification algorithm is however very helpful for type inference. These are two very different kinds of inference.
Type inference is important to computer science because types are important in the theory of programming languages, which is a significant part of computer science. Types are also close to logic and are intensively used in automated theorem proving. There are implementations of unification algorithms in many, if not all, proof assistants and SMT solvers.
Inference engines are related to artificial intelligence, which is also important but very different. (I've seen links between learning and logic but this seems far-fetched.)
Let $x_1,\dots,x_n$ be iid samples from a $N(\mu,\sigma^2)$ distribution and consider the hypothesis
$$H_0:\theta\in\Theta_0,\,\,\,\,\,\,vs\,\,\,\,\,\,H_1:\theta\in\Theta _{0}^c$$
where $\Theta _{0} = \{\mu>0,\sigma^2>1\}$, $\theta=(\mu,\sigma)$. Let $\Lambda(x_1,\dots,x_n)$ be the likelihood ratio test statistic
$${\displaystyle \Lambda (x_1,\dots,x_n)={\frac {\sup\{\,{\mathcal {L}}(\theta \mid x):\theta \in \Theta _{0}\,\}}{\sup\{\,{\mathcal {L}}(\theta \mid x):\theta \in \Theta \,\}}},}$$
What is the number of degrees of freedom of the $\chi^2$ distribution here? Usually, it is the dimension of $\Theta$ minus the dimension of $\Theta_0$, but in this case, both spaces have dimension 2. |
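For concreteness, $\Lambda$ can be computed numerically in this model (an illustrative sketch; it uses the fact, easy to check here, that the restricted supremum is obtained by clipping the unrestricted MLEs to the closure of $\Theta_0$):

```python
import numpy as np

def log_lik(x, mu, s2):
    # Gaussian log-likelihood with mean mu and variance s2
    return -0.5 * len(x) * np.log(2 * np.pi * s2) - ((x - mu) ** 2).sum() / (2 * s2)

def lam(x):
    # Unrestricted MLE over Theta = R x (0, inf):
    mu_hat, s2_hat = x.mean(), x.var()
    # Restricted sup over Theta_0 = {mu > 0, sigma^2 > 1}: the open
    # constraints just clip the unrestricted estimates to the closure.
    mu0 = max(mu_hat, 0.0)
    s20 = max(((x - mu0) ** 2).mean(), 1.0)
    return np.exp(log_lik(x, mu0, s20) - log_lik(x, mu_hat, s2_hat))

rng = np.random.default_rng(1)
x = rng.normal(-0.3, 1.0, size=50)     # true mu outside Theta_0
assert 0.0 < lam(x) <= 1.0             # Lambda always lies in (0, 1]
```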
I'm reading the book Polarimetric Radar Imaging: From Basics to Applications, on chapter 6 it is said that:
The s-matrix for a dihedral corner reflector has the form: $$S=\begin{bmatrix}e^{2j\gamma_H}R_{TH}R_{GH} & 0 \\ 0 & e^{2j\gamma_V}R_{TV}R_{GV}\end{bmatrix}$$ Assuming that the reflector surfaces can be made of different dielectric materials. The vertical (trunk) surface has Fresnel reflection coefficients $R_{TH}$ and $R_{TV}$ for horizontal and vertical polarizations, respectively. And the horizontal (ground) surface has Fresnel reflection coefficients $R_{GH}$ and $R_{GV}$ for horizontal and vertical polarizations, respectively. Assuming that the complex coefficients $\gamma_H$ and $\gamma_V$ represent any propagation attenuation and phase change effects. The s-matrix for first-order Bragg surface scattering has the form: $$S=\begin{bmatrix}R_H & 0 \\ 0 & R_V\end{bmatrix}$$ where: $$R_H=\frac{\cos\theta-\sqrt{\varepsilon_r-\sin^2\theta}}{\cos\theta+\sqrt{\varepsilon_r-\sin^2\theta}}$$ $$R_V=\frac{(\varepsilon_r-1)\{\sin^2\theta-\varepsilon_r(1+\sin^2\theta)\}}{\left(\varepsilon_r\cos\theta+\sqrt{\varepsilon_r-\sin^2\theta}\right)^2}$$ are the Bragg surface reflection coefficients for horizontally and vertically polarized waves, in which $\theta$ is the local incidence angle and $\varepsilon_r$ is the relative dielectric constant of the surface.
I'm looking for the book, paper, etc in which the above expressions are proven or are mentioned for the first time.
It seems that the two expressions work for the whole electromagnetic spectrum, not only the microwave or radar part, so maybe this reference is very old? I don't know! I have the proof of the Fresnel reflection coefficients from lecture 13 in Dr. Tim Noe's lecture notes for ELEC262: Introduction to Waves and Photonics, a course in the Electrical and Computer Engineering Faculty of Rice University, but I want to understand why the s-matrix takes the given forms, especially in the dihedral case.
OpenCV 4.1.2-dev
Open Source Computer Vision
Here we will learn to extract some frequently used properties of objects like Solidity, Equivalent Diameter, Mask image, Mean Intensity etc. More features can be found at Matlab regionprops documentation.
*(NB: Centroid, Area, Perimeter etc. also belong to this category, but we saw them in the last chapter)*
It is the ratio of width to height of bounding rect of the object.
\[Aspect \; Ratio = \frac{Width}{Height}\]
Extent is the ratio of contour area to bounding rectangle area.
\[Extent = \frac{Object \; Area}{Bounding \; Rectangle \; Area}\]
Solidity is the ratio of contour area to its convex hull area.
\[Solidity = \frac{Contour \; Area}{Convex \; Hull \; Area}\]
Equivalent Diameter is the diameter of the circle whose area is same as the contour area.
\[Equivalent \; Diameter = \sqrt{\frac{4 \times Contour \; Area}{\pi}}\]
Orientation is the angle at which object is directed. Following method also gives the Major Axis and Minor Axis lengths.
In some cases, we may need all the points which comprises that object. It can be done as follows:
Here, two methods, one using Numpy functions, next one using OpenCV function (last commented line) are given to do the same. Results are also same, but with a slight difference. Numpy gives coordinates in **(row, column)** format, while OpenCV gives coordinates in **(x,y)** format. So basically the answers will be interchanged. Note that,
row = x and column = y.
We can find these parameters using a mask image.
Here, we can find the average color of an object. Or it can be average intensity of the object in grayscale mode. We again use the same mask to do it.
Extreme Points means topmost, bottommost, rightmost and leftmost points of the object.
For example, if I apply it to a map of India, I get the following result:
Let's look at some examples of feasibility relations!
Feasibility relations work between preorders, but for simplicity suppose we have two posets \(X\) and \(Y\). We can draw them using Hasse diagrams:
Here an arrow means that one element is less than or equal to another: for example, the arrow \(S \to W\) means that \(S \le W\). But we don't bother to draw all possible inequalities as arrows, just the bare minimum. For example, obviously \(S \le S\) by reflexivity, but we don't bother to draw arrows from each element to itself. Also \(S \le N\) follows from \(S \le E\) and \(E \le N\) by transitivity, but we don't bother to draw arrows that follow from others using transitivity. This reduces clutter.
(Usually in a Hasse diagram we draw bigger elements near the top, but notice that \(e \in Y\) is not bigger than the other elements of \(Y\). In fact it's neither \(\ge\) or \(\le\) any other elements of \(Y\) - it's just floating in space all by itself. That's perfectly allowed in a poset.)
Now, we saw that a feasibility relation from \(X\) to \(Y\) is a special sort of relation from \(X\) to \(Y\). We can think of a relation from \(X\) to \(Y\) as a function \(\Phi\) for which \(\Phi(x,y)\) is either \(\text{true}\) or \(\text{false}\) for each pair of elements \( x \in X, y \in Y\). Then a feasibility relation is a relation such that:
If \(\Phi(x,y) = \text{true}\) and \(x' \le x\) then \(\Phi(x',y) = \text{true}\).
If \(\Phi(x,y) = \text{true}\) and \(y \le y'\) then \(\Phi(x,y') = \text{true}\).
Fong and Spivak have a cute trick for drawing feasibility relations: when they draw a blue dashed arrow from \(x \in X\) to \(y \in Y\) it means \(\Phi(x,y) = \text{true}\). But again, they leave out blue dashed arrows that would follow from rules 1 and 2, to reduce clutter!
Let's do an example:
So, we see \(\Phi(E,b) = \text{true}\). But we can use the two rules to draw further conclusions from this:
Since \(\Phi(E,b) = \text{true}\) and \(S \le E\) then \(\Phi(S,b) = \text{true}\), by rule 1.
Since \(\Phi(S,b) = \text{true}\) and \(b \le d\) then \(\Phi(S,d) = \text{true}\), by rule 2.
and so on.
Puzzle 171. Is \(\Phi(E,c) = \text{true}\) ? Puzzle 172. Is \(\Phi(E,e) = \text{true}\)?
I hope you get the idea! We can think of the arrows in our Hasse diagrams as one-way streets going between cities in two countries, \(X\) and \(Y\). And we can think of the blue dashed arrows as one-way plane flights from cities in \(X\) to cities in \(Y\). Then \(\Phi(x,y) = \text{true}\) if we can get from \(x \in X\) to \(y \in Y\) using any combination of streets and plane flights!
That's one reason \(\Phi\) is called a feasibility relation.
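We can simulate this streets-and-flights picture in a few lines (an illustrative sketch; the arrow lists below are hypothetical stand-ins for the Hasse diagrams, keeping only the arrows mentioned in the text: \(S \le E\), \(S \le W\), \(E \le N\), \(b \le d\), and the flight \(\Phi(E,b)\)):

```python
# Hasse-diagram arrows (a, b) meaning a <= b, plus "flights" from X to Y.
X_arrows = [("S", "E"), ("S", "W"), ("E", "N")]
Y_arrows = [("b", "d")]
flights = [("E", "b")]           # the blue dashed arrow: Phi(E, b) = true

def reachable(arrows, src, dst):
    """Reflexive-transitive closure: can we drive from src to dst?"""
    seen, frontier = {src}, {src}
    while frontier:
        frontier = {b for (a, b) in arrows for f in frontier if a == f} - seen
        seen |= frontier
    return dst in seen

def phi(x, y):
    # Drive uphill in X to a flight's origin, fly, then drive uphill in Y.
    return any(reachable(X_arrows, x, u) and reachable(Y_arrows, v, y)
               for (u, v) in flights)

assert phi("E", "b")      # the given flight
assert phi("S", "b")      # rule 1: S <= E
assert phi("S", "d")      # rule 2: b <= d
assert not phi("N", "b")  # no street from N to a flight's origin
```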
What's cool is that rules 1 and 2 can also be expressed by saying

$$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} $$

is a monotone function. And it's especially cool that we need the '\(\text{op}\)' over the \(X\). Make sure you understand that: the \(\text{op}\) over the \(X\) but not the \(Y\) is why we can drive to an airport in \(X\), then take a plane, then drive from an airport in \(Y\).
Here are some ways to get lots of feasibility relations. Suppose \(X\) and \(Y\) are preorders.
Puzzle 173. Suppose \(f : X \to Y \) is a monotone function from \(X\) to \(Y\). Prove that there is a feasibility relation \(\Phi\) from \(X\) to \(Y\) given by
$$ \Phi(x,y) \text{ if and only if } f(x) \le y .$$
Puzzle 174. Suppose \(g: Y \to X \) is a monotone function from \(Y\) to \(X\). Prove that there is a feasibility relation \(\Psi\) from \(X\) to \(Y\) given by
$$ \Psi(x,y) \text{ if and only if } x \le g(y) .$$
Puzzle 175. Suppose \(f : X \to Y\) and \(g : Y \to X\) are monotone functions, and use them to build feasibility relations \(\Phi\) and \(\Psi\) as in the previous two puzzles. When is
$$ \Phi = \Psi ? $$
I'm interested in multistage optimization problems. Are there any good R packages around to solve such problems over time? I'm not at all an expert in it, so maybe someone knows a good paper / lecture notes to start with? I know classical optimization (linear optimization, convex optimization, etc.) but I've never had to deal with optimization over time. Any references, theoretical or regarding the implementation, are very welcome. I know that this is a very general question, but this is due to my (not yet) attained knowledge. If further clarification is needed, I'm happy to share this. Many thanks in advance
EDIT
Let's take for example the following paper, where we have an optimization problem of the form:
$$\max \sum_{i=1}^{n+1}r^L_ix_i^L$$
such that
$$ x^l_i=r^{l-1}_i x_i^{l-1}-y_i^l+z^l_i,\hspace{2pt} i=1,\dots n,l=1,\dots,L$$ $$ x^l_{n+1}=r^{l-1}_{n+1} x_{n+1}^{l-1}+\sum_{i=1}^n(1-\mu^l_i)y_i^l-\sum_{i=1}^n(1+\nu_i^l)z^l_i$$ $$y^l_i\ge 0,\hspace{2pt} i=1,\dots n,l=1,\dots,L$$ $$x^l_i\ge 0,\hspace{2pt} i=1,\dots n,l=1,\dots,L$$ $$z^l_i\ge 0,\hspace{2pt} i=1,\dots n,l=1,\dots,L$$ where $x_i^l$ is the value (in dollars) of asset $i$ at time $l$, $r_i^l$ is the asset return, and $y^l_i$ and $z^l_i$ are the amounts of the asset sold and bought. $\mu^l_i$ and $\nu_i^l$ also have economic interpretations, but are not that important for the question. Assuming everything is deterministic, we can solve this problem using interior-point / simplex methods since it is a "simple" LP. However, the theory I'm looking for should tell me whether it is optimal to solve at every time $l$ the subproblem (maximize $\sum_{i=1}^{n+1}r^l_ix^l_i$ under the corresponding constraints), or whether this is not a good idea. I have heard / read that one could solve such kinds of problems using stochastic programming, but still I'm interested in knowing how to subdivide (if possible) such kinds of problems.
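To see concretely why solving each stage myopically can differ from optimizing the whole horizon at once, here is a toy sketch (all numbers are invented for illustration, and this is not the paper's model): two assets, two stages, and a transaction cost for moving money between them. The greedy stage-wise policy picks the best immediate return and loses to the full-horizon optimum:

```python
from itertools import product

# Toy two-asset, two-stage problem with a 20% cost for switching assets.
returns = [[1.10, 1.05],   # stage 1: asset 0, asset 1
           [1.00, 1.30]]   # stage 2: asset 0, asset 1
switch_cost = 0.20

def wealth(path):
    """Final wealth starting from 1.0, holding path[stage] at each stage."""
    w, held = 1.0, path[0]
    w *= returns[0][held]
    for stage in range(1, len(path)):
        if path[stage] != held:
            w *= (1 - switch_cost)
            held = path[stage]
        w *= returns[stage][held]
    return w

# Full-horizon optimum: enumerate every asset path.
best_path = max(product(range(2), repeat=2), key=wealth)

# Greedy: at each stage pick whatever maximizes that stage alone.
greedy_path = [max(range(2), key=lambda i: returns[0][i])]
for stage in range(1, 2):
    greedy_path.append(max(range(2), key=lambda i:
        returns[stage][i] * (1 if i == greedy_path[-1] else 1 - switch_cost)))

# Greedy grabs asset 0's early return, then pays the switching cost;
# the full-horizon solution holds asset 1 throughout and does better.
print(wealth(best_path), wealth(tuple(greedy_path)))
```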
We will look into how perceptrons are structured and what they actually do. Perceptrons are the first generation of neural networks. Research on perceptrons has been conducted since the 1960s.
Network Architecture Types

Feed-forward network
- Information flows from the input layer in one direction to the output layer
- The term "deep" neural network is used to indicate the existence of more than one hidden layer

Recurrent neural network
- Information can flow around in cycles
- They are more biologically realistic
- Difficult to train
- Able to remember information in the hidden state
In 2011 Ilya Sutskever designed a recurrent neural network which was generating one character at a time.
Symmetrically connected network
- Similar to recurrent networks
- They have symmetrical connections between units (same weight in both directions)
- Easier to analyze than recurrent networks

Perceptrons
- Perceptrons are the first generation of neural networks
- Have some limitations (published in 1969 by Minsky and Papert in the "Perceptrons" book)
- Still widely used today for tasks with huge feature vectors, containing many millions of features
- A binary threshold unit is used as the decision unit
Statistical Pattern Recognition Approach:
- convert raw input into a vector of feature activations
- learn how to weight each feature activation to get a single scalar quantity
- if this quantity is above a certain threshold, decide that the input vector is a positive example of the target class

Binary Threshold Neurons (with bias)
\begin{equation} z = b + \sum_{i}{x_i w_i} \end{equation}
\begin{equation} y = \begin{cases} 1, & \text{if } z \geq \Theta \\ 0, & \text{otherwise} \end{cases} \end{equation}
z – total input calculation
y – output of the neuron Learning biases
Biases can be treated like weight, and can be trained accordingly. We can provide input “1” to the “bias neuron” and include it in training like any other neuron.
Training Perceptrons
Using this training algorithm, it is guaranteed that the right answer will be found for each training case, if any such set of weights exists.

- add an extra component with the constant value "1" to the input vector; the weight of this component is minus the threshold
- pick the training cases using a policy which makes sure that every training case keeps getting picked (i.e. don't drop cases as we go)
- if the output unit is correct: leave its weights alone
- if the output unit incorrectly outputs "0": add the input vector to the weight vector
- if the output unit incorrectly outputs "1": subtract the input vector from the weight vector

Geometrical View
If we imagine the weight space as an n-dimensional space where each dimension corresponds to the value of a certain weight, then a point in this space represents a full set of weight values. Training cases in this sense correspond to hyperplanes through the origin.
Consider a training case in which the correct answer is 1: the weight vector needs to be on the correct side of the hyperplane to get the answer right. Having the weight vector on the same side as the input vector means the angle between the two is less than 90 degrees, which implies that the scalar product of the input vector with the weight vector is positive. When the weight vector is on the wrong side of the plane, the angle between the weight vector and the input vector is greater than 90 degrees, the scalar product is less than zero, and we get a wrong answer. Looking from the other perspective: for any input vector with the correct answer 0, any weight vector lying at an angle of less than 90 degrees to the input vector will produce a positive scalar product, causing the answer to be 1. Such a weight vector is considered bad. All "good" weight vectors are those at an angle of more than 90 degrees to the input vector.
Inputs can be thought of as constraints. This means that each input constrains the set of weights that give the correct classification result by partitioning the weight space into two halves.
There are mathematical proofs that perceptron learning will find a weight vector in the feasible region, if such a region exists.
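As a concrete sketch of the procedure above (my own Python illustration, not Hinton's original code): train on the AND function, which is linearly separable, using the bias trick and the add/subtract rule. The procedure converges to a correct weight vector:

```python
# Each case: (x1, x2, bias input of 1) -> target output.
cases = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]

def predict(w, x):
    # Binary threshold unit: 1 if the total input is non-negative.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

for _ in range(25):                      # keep cycling through all cases
    for x, target in cases:
        if predict(w, x) == target:
            continue                     # correct: leave the weights alone
        delta = 1 if target == 1 else -1 # add or subtract the input vector
        w = [wi + delta * xi for wi, xi in zip(w, x)]

print(all(predict(w, x) == t for x, t in cases))  # True
```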
Limitations of Perceptrons
Binary threshold neurons sometimes cannot handle even simple cases:
1. Cases that are not linearly separable
Imagine the case of 2D inputs, where we want the output to indicate whether the two input features are the same. In these terms, the points (0,0) and (1,1) should lead to the answer 1, while (0,1) and (1,0) should lead to the answer 0. In this example, there is no single set of weights which satisfies all the constraints.
We can notice that there are sets of training cases which are not linearly separable, leading to the non-existence of a weight plane that can properly classify the input.
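Running the same update rule on a non-separable case (a sketch of my own, using XOR-style targets like the example above) shows that it never settles on a correct weight vector:

```python
# XOR-style targets: output 1 exactly when the two features differ.
cases = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 0)]
w = [0.0, 0.0, 0.0]

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

for _ in range(1000):                    # train far longer than before
    for x, target in cases:
        if predict(w, x) != target:
            delta = 1 if target == 1 else -1
            w = [wi + delta * xi for wi, xi in zip(w, x)]

# No linear threshold unit separates these cases, so at least one
# case is always misclassified, no matter how long we train.
mistakes = sum(predict(w, x) != t for x, t in cases)
print(mistakes > 0)  # True
```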
2. Translations with wrap-arounds
Another case which perceptrons could not resolve is the case of discrimination of simple patterns when you translate them with wrap-around.
Based on this, Minsky and Papert's "Group Invariance Theorem" (from their 1969 book "Perceptrons") says that the part of a perceptron that learns cannot learn to do this if the transformations form a group. (Translations with wrap-around form a group.)
Networks without hidden units are very limited in what they can learn to model. What is required is multiple layers of adaptive, non-linear hidden units. We need a way to adapt all the weights, not just those of the last layer.
The standard Newtonian centripetal acceleration is:
$$g = \frac{V^2}{R}$$
where \(V\) is the rectilinear velocity being bent into a circular motion and \(R\) is the radius of the circular trajectory that it is being bent into.
When the velocity is closer to lightspeed, the Lorentzian gamma-factor gets involved:
$$\gamma^2 = \frac{1}{1-\beta^2}$$
where \(\beta\) is the ratio of the velocity to the speed of light, \(\frac{V}{c}\).
The centripetal acceleration – as felt by the observer in circular motion – becomes:
$$g = \frac{\gamma^2V^2}{R} = \frac{c^2(\gamma^2-1)}{R}$$
To get a feel for the numbers, at what velocity is the force per unit mass equivalent to the Newtonian case at the speed of light? In other words…
$$g = \frac{c^2(\gamma^2-1)}{R} = \frac{c^2}{R}$$
i.e. \(\gamma^2-1 = 1\) or \(\gamma^2 = 2\), thus \(\gamma = 1.4142…\)
To convert from \(\gamma\) to \(\beta\) requires some hyperbolic functions:
$$\beta = \tanh(\mathrm{acosh}(\gamma))$$
…which, in this case of \(\gamma = 1.4142\) means \(\beta = 0.70710\).
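A quick numerical check of the formulas above (a sketch; the function name and the 1-gee radius computation at the end are my own additions):

```python
import math

def beta_from_gamma(gamma):
    # tanh(acosh(gamma)) is equivalent to sqrt(1 - 1/gamma^2)
    return math.tanh(math.acosh(gamma))

print(beta_from_gamma(math.sqrt(2)))  # ~0.70710, as in the text

# Radius needed to feel 1 gee at a given gamma: R = c^2 (gamma^2 - 1) / g
c, g = 299_792_458.0, 9.81            # m/s, m/s^2
ly = 9.4607e15                        # metres per light-year
R = c**2 * (5000**2 - 1) / g / ly
print(R)                              # tens of millions of light-years
```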
In Stephen Baxter’s novel “Ring” the starship “The Great Northern” sets out in a huge loop to take it 5 million years into the future. It accelerates at 1 gee for the whole journey, for the sake of its human passengers, and its total trip time is about 1,000 years, meaning it must have a gamma-factor of about 5,000. Such a gamma-factor would require a loop of roughly 25 million light-years radius, as the above equation implies, thus the journey would be much longer than 5 million years.
Reference
Yongwan Gim, Hwajin Um, Wontae Kim, “Unruh temperatures in circular and drifted Rindler motions”, (Submitted on 28 Jun 2018) https://arxiv.org/abs/1806.11439 [accessed 02 August 2018] |
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: `\documentclass[handout]{beamer} \usepackage{pgfpages} \pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm] \begin{document} \begin{frame} \[\bigcup_n \sum_n\] \[\underbrace{aaaaaa}_{bbb}\] \end{frame} \end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that? |
The argument for the first question goes as follows:
Consider the Pauli-Lubanski vector $ W_{\mu} = \epsilon_{\mu\nu\rho\sigma}P^{\nu}M^{\rho\sigma}$. Where $P^{\mu}$ are the momenta and $M^{\mu\nu}$ are the Lorentz generators. (The norm of this vector is a Poincare group casimir but this fact will not be needed for the argument.)
By symmetry considerations we have $W_{\mu} P^{\mu} = 0$. Now, in the case of a massless particle, a vector orthogonal to a light-like vector must be proportional to it (easy exercise). Thus $W^{\mu} = h P^{\mu}$, with $h$ constant. Now, the zero component of the Pauli-Lubanski vector is given by:
$ W_{0} = \epsilon_{0\nu\rho\sigma}P^{\nu}M^{\rho\sigma} = \epsilon_{abc}P^{a}M^{bc} = \mathbf{P} \cdot \mathbf{J}$, (where the summation after the second equality is over the spatial indices only, and $\mathbf{J}$ are the rotation generators).
Therefore the proportionality constant $h = \frac{W^{0}}{P^{0}} = \frac{\mathbf{P} \cdot \mathbf{J}}{|\mathbf{P}|}$ is the helicity.
Now, on the quantum level, if we rotate by an angle of $2 \pi$ around the momentum axis, the wave function acquires a phase of $\exp\left(2 \pi i \frac{\mathbf{P}}{|\mathbf{P}|} \cdot \mathbf{J}\right) = \exp(2 \pi i h)$. This factor should be $\pm 1$ according to the particle statistics, thus $h$ must be a half-integer.
As for the second question, a very powerful method to construct the gluon amplitudes is the twistor approach. Please see the following article by V.P. Nair for a clear exposition.
Update:
This update refers to the questions asked by user6818 in the comments:
For simplicity I'll consider the case of a photon and not gluons.
The strategy of the solution is based on the explicit construction of the angular momentum and spin of a free photon field (which depend on the polarization vectors) and showing that the above relations are satisfied for the photon field. The photon momentum and the angular momentum densities can be obtained via the Noether theorem from the photon Lagrangian. Alternatively, it is well known that the photon linear momentum density is given by the Poynting vector (proportional to $\vec{E}\times\vec{B}$), and it is not difficult to convince oneself that the total angular momentum density is proportional to $\vec{x}\times (\vec{E}\times\vec{B})$.
Now, the total angular momentum can be decomposed into orbital and spin angular momenta (please see K.T. Hecht: quantum mechanics, page 584, equation 16):
$\vec{J} = \int d^3x \, \vec{x}\times (\vec{E}\times\vec{B}) = \int d^3x \left( \vec{E}\times\vec{A} + \sum_{j=1}^3 E_j \, \vec{x} \times \vec{\nabla} A_j \right)$
The first term on the right hand side can be interpreted as the spin and the second as the orbital angular momentum as it is proportional to the position.
Now, neither the spin nor the orbital angular momentum densities are gauge invariant (only their sum is). But one can argue that the total orbital angular momentum is zero because the position averages to zero; thus the total spin:
$ \vec{S} =\int d^3x (\vec{E}\times\vec{A})$
is gauge invariant.
Now, we can observe that in canonical quantization, $[A_j, E_k] = i \delta_{jk}$, and we get $[S_j, S_k] = 2i \epsilon_{jkl} S_l$, which are the angular momentum commutation relations apart from the factor 2.
Now, by substituting the plane wave solution:
$\vec{A} = \sum_{k,m=1,2} a_{km} \vec{\epsilon}_m(k) \exp(i(\vec{k}\cdot\vec{x}-|k|t)) + h.c.$
(The condition $\vec{\epsilon_m}(k).\vec{k} = 0$, is just a consequence of the vanishing of the sources).
We obtain:
$\vec{S} = \sum_{k,m=1,2}(-1)^{m+1} a^\dagger_{km}a_{km} \hat{k} = \sum_{k}(n_1-n_2)\hat{k}$
(where $n_1$, $n_2$ are the numbers of right and left circularly polarized photons). Thus for a single free photon, the total spin, thus the total angular momentum are aligned along or opposite to the momentum, which is the same result stated in the first part of the answer.
Secondly, the photon total spin operators exist and transform (up to a factor of two) as spin 1/2 angular momentum operators. |
Consider the following protocol for $P$ and $V$ :
Note: The multiplicative group is $Z_p^*$ and $p$ is prime. Note: The protocol is for proving that $x \in \langle \alpha \rangle$.
Input to $P$ and $V$ : a prime $p$ and $\alpha, x \in Z_p^∗, k = \log_2(p)$.
Input to $P$ : $y$, so that $\alpha^y = x \bmod p$.
Protocol:
$V$ checks that $\gcd(x, p) = \gcd(\alpha, p) = 1$ and rejects if this is not the case.
$P$ chooses $r$ at random in $[0, p - 2]$, and sends $a = \alpha^r \bmod p$ to $V$.
$V$ chooses $b$ at random in $\{0, 1\}$ and sends $b$ to $P$.
$P$ sends $z = (r + by) \bmod (p-1)$ to $V$.
$V$ checks that $\alpha^z = ax^b \bmod p$. If OK, then accept, otherwise reject.
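A sketch of one honest run of the protocol (toy parameters chosen by me, not from the exercise) makes the completeness check concrete:

```python
import math
import random

# Toy parameters: p = 23, alpha = 5, secret y with x = alpha^y mod p.
p, alpha, y = 23, 5, 7
x = pow(alpha, y, p)

assert math.gcd(x, p) == 1 and math.gcd(alpha, p) == 1  # V's first check

r = random.randrange(p - 1)          # P: random commitment exponent
a = pow(alpha, r, p)                 # P sends a
b = random.randrange(2)              # V: challenge bit
z = (r + b * y) % (p - 1)            # P sends z

# V's final check: alpha^z == a * x^b (mod p), which always holds
# for an honest prover since alpha^(r + by) = alpha^r * (alpha^y)^b.
print(pow(alpha, z, p) == (a * pow(x, b, p)) % p)  # True
```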
Problem:
I've already proven completeness and soundness of the protocol. However, I need to prove that it is zero-knowledge.
To do this, I consider a simulator playing the role of $P$.
I know I could use the Rewinding Lemma, so I consider a perfect honest-verifier simulator.
I try to follow the ZK proof of the graph isomorphism protocol, however, I'm stuck.
I don't know how to respond if $b=1$, since I cannot assume the simulator knows $y$. The $b=0$ case is easy.
Can someone help me out? |
IEEE Single Precision Floating Point Format Examples 1
Recall from the Storage of Numbers in IEEE Single-Precision Floating Point Format page that for 32 bit storage, a number can be stored as $x = \sigma \cdot \bar{x} \cdot 2^e$, and with 32 bits $b_1b_2...b_{32}$ we had that:
- Bit $1$: The bit $b_1$ corresponds to the sign $\sigma$ of $x$, where $b_1 = \left\{\begin{matrix} 0 & \mathrm{if} \: \sigma = +1\\ 1 & \mathrm{if} \: \sigma = -1 \end{matrix}\right.$
- Bits $2$-$9$: Most computers do not store the exponent $e$ of a floating point binary number directly. Instead, they define $E = e + 127$, which is a positive binary number (since $-126 \leq e$). The eight bits $b_2b_3...b_8b_9$ correspond to this number $E$.
- Bits $10$-$32$: The $23$ succeeding digits $a_1a_2...a_{22}a_{23}$ of the significand of $x$, $1.a_1a_2...a_{22}a_{23}$, are stored here.
We will now look at some examples of determining the decimal value of IEEE single-precision floating point number and converting numbers to this form.
Example 1 Consider the following floating point number presented in IEEE single precision (32 bits) as $01101011101101010000000000000000$. Determine the sign $\sigma$, exponent $e$, and significand/mantissa $\bar{x}$ and determine the value of $x = \sigma \cdot \bar{x} \cdot 2^e$.
We note that the first bit of the number given above is $b_1 = 0$. It immediately follows that we have that the sign of $x$ is $\sigma = +1$.
Now the next eight bits $b_2b_3…b_9$ are $11010111$ and represent $E = e + 127$. We want to find what decimal number the binary number $E = (11010111)_2$ represents. We have that:

$$E = (11010111)_2 = 2^7 + 2^6 + 2^4 + 2^2 + 2^1 + 2^0 = 215 \qquad (1)$$
Thus we get that $e = E - 127 = 215 - 127 = 88$.
Lastly, recall that the twenty-three bits $b_{10}b_{11}…b_{32}$ represent the fractional part of the significand/mantissa $\bar{x}$, and that $\bar{x} = 1.b_{10}b_{11}…b_{32}$, and so:

$$\bar{x} = (1.01101010 \ldots 0)_2 = 1 + \frac{1}{4} + \frac{1}{8} + \frac{1}{32} + \frac{1}{128} = 1.4140625 \qquad (2)$$
So the decimal representation of this number is $x = \sigma \cdot \bar{x} \cdot 2^e = + (1.4140625) \cdot 2^{88}$.
Example 2 Consider the following number presented in IEEE single precision 32 bits $11001100101111100010000000000000$. Determine the sign $\sigma$, exponent $e$, and significand/mantissa $\bar{x}$ and determine the value of $x = \sigma \cdot \bar{x} \cdot 2^e$.
Once again we immediately have that since $b_1 = 1$ then the sign of $x$ is $\sigma = -1$.
Now the next eight bits are $10011001$. These bits represent $E = e + 127$. Thus we have that:

$$E = (10011001)_2 = 2^7 + 2^4 + 2^3 + 2^0 = 153 \qquad (3)$$
Therefore the exponent of $x$ is $e = E - 127 = 153 - 127 = 26$.
Lastly we will calculate the mantissa using the last twenty-three bits of the given number. We have that:

$$\bar{x} = (1.01111100010 \ldots 0)_2 = 1 + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \frac{1}{64} + \frac{1}{1024} = 1.4853515625 \qquad (4)$$
So the decimal representation of this number is $x = \sigma \cdot \bar{x} \cdot 2^e = - (1.4853515625) \cdot 2^{26}$.
Example 3 Consider the number $x = -\left ( 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{16} + \frac{1}{32} \right ) 2^{-48}$. Determine the floating point representation in IEEE single precision (32 bits).
We immediately see that $x$ is a negative number and so the sign is $\sigma = 1$. Therefore the first bit in our floating point representation of this number will be $b_1 = 1$.
Now we also see that the exponent $e = -48$. IEEE floating point single precision (32 bits) stores the number $E = e + 127$ instead though, and hence $E = -48 + 127 = 79$. We must now convert $79$ to a binary number. We have that:

$$E = 79 = 64 + 8 + 4 + 2 + 1 = (01001111)_2 \qquad (5)$$
Therefore $b_2b_3…b_9 = 01001111$. Lastly we will determine the last twenty-three digits which represent the fractional part of the significand/mantissa. We note that $\bar{x} = \left ( 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{16} + \frac{1}{32} \right )$. If we convert $\bar{x}$ to binary we get that:

$$\bar{x} = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{16} + \frac{1}{32} = (1.11011)_2 \qquad (6)$$
So the digits $b_{10}b_{11}…b_{32}$ are thus $110110…0$. Therefore the floating point representation of $x$ is:

$$x = 1\:01001111\:11011000000000000000000 \qquad (7)$$
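Example 1 can be cross-checked mechanically with Python's `struct` module (a sketch; the bit string below is the one from Example 1, interpreted big-endian):

```python
import struct

bits = "01101011101101010000000000000000"   # the 32 bits from Example 1
raw = int(bits, 2).to_bytes(4, "big")       # pack them into 4 bytes
(value,) = struct.unpack(">f", raw)         # decode as an IEEE binary32

# Every binary32 value is exactly representable as a Python float,
# so the comparison below is exact, not approximate.
print(value == 1.4140625 * 2**88)  # True
```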
Topological Quotients in Euclidean Space
Recall from the Topological Quotients page that if $(X, \tau)$ is a topological space and $\sim$ is an equivalence relation on $X$ where for all $x \in X$, $[x] = \{ y \in X : x \sim y \}$ denotes the equivalence class of $x$ with respect to the relation $\sim$ and $X \: / \sim$ denotes the set of all equivalences classes on $X$, then the
Quotient Topology is the final topology induced by the canonical quotient map $q : X \to X \: / \sim$.
We will now look at some examples of quotient topological spaces in Euclidean space with the respective usual topologies.
Consider the set $X = [0, 1]$.
Let $\tau_X$ be the usual topology consisting of open intervals from $X$ and consider the topological space $(X, \tau_X)$. Define an equivalence relation $\sim$ by $x \sim y$ if $x = y$ OR $x \sim y$ if $\{ x, y \} = \{0, 1 \}$. This equivalence relation can be thought of as "gluing" the endpoints of the closed interval $X$ together to form a circle:
Note that $\sim$ is indeed an equivalence relation. It is reflexive because $x \sim x$, symmetric since $x \sim y$ if and only if $y \sim x$, and transitive since $x \sim y$ and $y \sim z$ implies that $x \sim z$. Now consider the set of equivalence classes $X \: / \sim$:

$$X \: / \sim \: = \{ \{ x \} : x \in (0, 1) \} \cup \{ \{ 0, 1 \} \} \qquad (1)$$
Consider the canonical quotient map $q : X \to X \: / \sim$ defined for all $x \in [0, 1]$ by $q(x) = [x]$. Then the final topology induced by $q$ is given by:

$$\tau_{X / \sim} = \{ U \subseteq X \: / \sim \: : q^{-1}(U) \in \tau_X \} \qquad (2)$$
So the topology $\tau_{X / \sim}$ can be described somewhat vaguely as unions of "open arcs" not containing the glued point $\{ 0, 1 \}$, as illustrated below:
For another example of a quotient topological space, let $X = [0, 1] \times [0, 1]$.
Consider the topological space $([0, 1] \times [0, 1], \tau_X)$ where $\tau_X$ is the usual topology whose open sets are generated by open disks in $\mathbb{R}^2$.
Define an equivalence relation $\sim$ on $X$ by $(x, y) \sim (w, z)$ if either $(x, y) = (w, z)$, or $y = z$ and $\{x, w \} = \{0 , 1 \}$, or $x = w$ and $\{ y, z \} = \{0 , 1 \}$.
By "gluing" equivalent points together, we can visualize $X \: / \sim$ with the following diagram:
The topology on $X \: / \sim$ will be generated by "open subsurfaces" of the torus above that do not intersect the "glue-lines". |
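The gluing on the square can be sketched numerically (my own illustration, not part of the page): identifying opposite edges is exactly arithmetic modulo 1 on each coordinate, and under it the four corners collapse to a single equivalence class:

```python
# Quotient map for the torus identification: (x, 0) ~ (x, 1) and
# (0, y) ~ (1, y) is captured by taking each coordinate modulo 1.
def q(point):
    x, y = point
    return (x % 1, y % 1)

# All four corners of the square land in one equivalence class:
corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len({q(c) for c in corners}))  # 1

# Interior points are identified only with themselves:
print(q((0.25, 0.5)))  # (0.25, 0.5)
```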
Now that we have seen how neural networks work, we realize that understanding of the gradient flow is essential for survival. Therefore, we will revise our strategy at the lowest level. However, as neural networks become more complicated, calculation of gradients by hand becomes a murky business. Yet, fear not young padawan, there is a way out! I am very excited that today we will finally get acquainted with automatic differentiation, an essential tool in your deep learning arsenal. This post was largely inspired by Hacker's guide to Neural Networks. For comparison, see also the Python version.
Before jumping ahead, you may also want to check the previous posts:
The source code from this guide is available on Github. The guide is written in literate Haskell, so it can be safely compiled.
Why Random Local Search Fails
Following Karpathy's guide, we first consider a simple multiplication circuit. Well, Haskell is not JavaScript, so the definition is pretty straightforward:
forwardMultiplyGate = (*)
Or we could have written
forwardMultiplyGate x y = x * y
to make the function look more intuitively $f(x,y) = x \cdot y$. Anyway,
forwardMultiplyGate (-2) 3
returns -6. Exciting.
Now, the question: is it possible to change the input $(x,y)$ slightly in order to increase the output? One way would be to perform local random search.
_search tweakAmount (x, y, bestOut) = do
  x_try <- (x +) . (tweakAmount *) <$> randomDouble
  y_try <- (y +) . (tweakAmount *) <$> randomDouble
  let out = forwardMultiplyGate x_try y_try
  return $ if out > bestOut
             then (x_try, y_try, out)
             else (x, y, bestOut)
Not surprisingly, the function above represents a single iteration of a "for"-loop. What it does is randomly select points around the initial $(x, y)$ and check if the output has increased. If yes, then it updates the best known inputs and the maximal output. To iterate, we can use `foldM :: (b -> a -> IO b) -> b -> [a] -> IO b`. This function is convenient since we anticipate some interaction with the "external world" in the form of random number generation:
localSearch tweakAmount (x0, y0, out0) = foldM (searchStep tweakAmount) (x0, y0, out0) [1..100]
What the code essentially tells us is that we seed the algorithm with some initial values of `x0`, `y0`, and `out0` and iterate from 1 till 100. The core of the algorithm is `searchStep`:
searchStep ta xyz _ = _search ta xyz
which is a convenience function that glues those two pieces together. It simply ignores the iteration number and calls `_search`. Now, we would like to have a random number generator within the range of $[-1; 1)$. From the documentation, we know that `randomIO` produces a number between 0 and 1. Therefore, we scale the value by multiplying by 2 and subtracting 1:
randomDouble :: IO Double
randomDouble = subtract 1 . (* 2) <$> randomIO
The `<$>` function is a synonym for `fmap`. What it essentially does is attach the pure function `subtract 1 . (* 2)`, which has type `Double -> Double`, to the "external world" action `randomIO`, which has type `IO Double` (yes, IO = input/output) [1].
A hack for a numerical minus infinity:
inf_ = -1.0 / 0
Now, we run `localSearch 0.01 (-2, 3, inf_)` several times:
(-1.7887454910045664,2.910160042416705,-5.205535653974539)
(-1.7912166830200635,2.89808308735154,-5.19109477484237)
(-1.8216809458018006,2.8372869694452523,-5.168631610010152)
In fact, we see that the outputs have increased from -6 to about -5.2. But the improvement is only about 0.8/100 = 0.008 units per iteration. That is an extremely inefficient method. The problem with random search is that each time it attempts to change the inputs in random directions. If the algorithm makes a mistake, it has to discard the result and start again from the previously known best position. Wouldn't it be nice if instead each iteration would improve the result at least by a little bit?
Automatic Differentiation
Instead of random search in random direction, we can make use of the precise direction and amount to change the input so that the output would improve. And that is exactly what the gradient tells us. Instead of manually computing the gradient every time, we can employ some clever algorithm. There exist multiple approaches: numerical, symbolic, and automatic differentiation. In his article, Dominic Steinitz explains the differences between them. The last approach, automatic differentiation is exactly what we need: accurate gradients with minimal overhead. Here, we will briefly explain the concept.
The idea behind automatic differentiation is that we explicitly define gradients only for elementary, basic operators. Then, we exploit the chain rule combining those operators into neural networks or whatever we like. That strategy will infer the necessary gradients by itself. Let us illustrate the method with an example.
Below we define both the multiplication operator and its gradient using the product rule, i.e. $\frac {d} {dt} x(t) y(t) = x(t) y'(t) + x'(t) y(t)$:
(x, x') *. (y, y') = (x * y, x * y' + x' * y)
The same can be done with addition, subtraction, division, and exponent:
(x, x') +. (y, y') = (x + y, x' + y')

x -. y = x +. (negate1 y)
negate1 (x, x') = (negate x, negate x')

(x, x') /. (y, y') = (x / y, (y * x' - x * y') / y^2)

exp1 (x, x') = (exp x, x' * exp x)
We also have `constOp` for constants:

constOp :: Double -> (Double, Double)
constOp x = (x, 0.0)
Finally, we can define our favourite sigmoid $\sigma(x)$ combining the operators above:
sigmoid1 x = constOp 1 /. (constOp 1 +. exp1 (negate1 x))
Now, let us compute a neuron $f(x, y) = \sigma(a x + b y + c)$, where $x$ and $y$ are inputs and $a$, $b$, and $c$ are parameters
neuron1 [a, b, c, x, y] = sigmoid1 ((a *. x) +. (b *. y) +. c)
Now, we can obtain the gradient of `a` at the point where $a = 1$, $b = 2$, $c = -3$, $x = -1$, and $y = 3$:

abcxy1 :: [(Double, Double)]
abcxy1 = [(1, 1), (2, 0), (-3, 0), (-1, 0), (3, 0)]

neuron1 abcxy1
-- (0.8807970779778823,-0.1049935854035065)
Here, the first number is the neuron's output and the second one is the gradient with respect to `a` ($\frac d {da}$). Let us verify the math behind the result:
$$\begin{aligned}
\sigma(ax + by + c) \,\big|_{a=(a,1),\, b=(b,0),\, c=(c,0),\, x=(x,0),\, y=(y,0)} &= \sigma[(a, 1) (x, 0) + (b, 0) (y, 0) + (c, 0)] \\
&= \sigma[(ax, a \cdot 0 + 1 \cdot x) + (by, 0 \cdot b + 0 \cdot y) + (c, 0)] \\
&= \sigma[(ax + by + c, x)] \\
&= \frac {(1, 0)} {(1, 0) + \exp \left[ -(ax + by + c, x) \right]} \\
&= \frac {(1, 0)} {(1, 0) + \exp \left[ (-ax - by - c, -x) \right]} \\
&= \frac {(1, 0)} {(1, 0) + (\exp (-ax - by - c), -x \exp (-ax - by - c))} \\
&= \frac {(1, 0)} {(1 + \exp(-ax - by - c), -x \exp(-ax - by - c))} \\
&= \left( \sigma(ax + by + c), \frac {x \exp(-ax - by - c)} {(1 + \exp(-ax - by - c))^2} \right).
\end{aligned}$$
The first expression is the result of neuron's computation and the second one is the exact analytic expression for $\frac d {da}$. That is all the magic behind automatic differentiation! In a similar way, we can obtain the rest of the gradients:
neuron1 [(1, 0), (2, 1), (-3, 0), (-1, 0), (3, 0)]
-- (0.8807970779778823,0.3149807562105195)

neuron1 [(1, 0), (2, 0), (-3, 1), (-1, 0), (3, 0)]
-- (0.8807970779778823,0.1049935854035065)

neuron1 [(1, 0), (2, 0), (-3, 0), (-1, 1), (3, 0)]
-- (0.8807970779778823,0.1049935854035065)

neuron1 [(1, 0), (2, 0), (-3, 0), (-1, 0), (3, 1)]
-- (0.8807970779778823,0.209987170807013)
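For comparison, the same forward-mode dual-number computation can be written in a few lines of Python (my own translation, not part of the original Haskell source):

```python
import math

# A dual number is a pair (value, derivative); each operator below
# mirrors the Haskell definitions (*.), (+.), negate1, exp1, (/.).
def mul(a, b): return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def neg(a):    return (-a[0], -a[1])
def exp(a):    return (math.exp(a[0]), a[1] * math.exp(a[0]))
def div(a, b): return (a[0] / b[0], (b[0] * a[1] - a[0] * b[1]) / b[0] ** 2)

one = (1.0, 0.0)
def sigmoid(x): return div(one, add(one, exp(neg(x))))

def neuron(a, b, c, x, y):
    return sigmoid(add(add(mul(a, x), mul(b, y)), c))

# Seed a with derivative 1 to get d/da, as in the Haskell version:
out = neuron((1, 1), (2, 0), (-3, 0), (-1, 0), (3, 0))
print(out)  # ~ (0.8807970779778823, -0.1049935854035065)
```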
Introducing backprop library
The backprop library was specifically designed for differentiable programming. It provides combinators to reduce our mental overhead. In addition, the most useful operations such as arithmetics and trigonometry, have already been defined in the library. See also hmatrix-backprop for linear algebra. So all you need for differentiable programming now is to define some functions:
neuron :: Reifies s W => [BVar s Double] -> BVar s Double
neuron [a, b, c, x, y] = sigmoid (a * x + b * y + c)

sigmoid x = 1 / (1 + exp (-x))
Here, the BVar s wrapper signifies that our function is differentiable. Now, the forward pass is:
forwardNeuron = BP.evalBP (neuron . BP.sequenceVar)
We use the sequenceVar isomorphism to convert a BVar of a list into a list of BVars, as required by our neuron equation. And the backward pass is:
backwardNeuron = BP.gradBP (neuron . BP.sequenceVar)

abcxy0 :: [Double]
abcxy0 = [1, 2, (-3), (-1), 3]

forwardNeuron abcxy0
-- 0.8807970779778823

backwardNeuron abcxy0
-- [-0.1049935854035065,0.3149807562105195,0.1049935854035065,0.1049935854035065,0.209987170807013]
Note that all the gradients are in one list, matching the type of the first neuron argument.
Summary
Modern neural networks tend to be complex beasts. Writing backpropagation gradients by hand can easily become a tedious task. In this post we have seen how automatic differentiation can address this problem.
In the next posts we will apply automatic differentiation to real neural networks. We will talk about batch normalization, another crucial method in modern deep learning. And we will ramp it up to convolutional networks allowing us to solve some interesting challenges. Stay tuned!
Further reading: Visual guide to neural networks; Backprop documentation; Article on backpropagation by Dominic Steinitz.

Footnote: In fact, 64-bit double precision is not necessary for neural networks, if not overkill. In practice you would prefer to use a 32-bit Float type.
Limits of Complex Functions
The concept of a limit of a complex function is analogous to that of a limit of a real function. We define this concept below.
Definition: Let $A \subseteq \mathbb{C}$, let $f : A \to \mathbb{C}$, and let $z_0 \in \mathbb{C}$ be an accumulation point of $A$. The Limit of $f$ as $z$ Approaches $z_0$ is $L$, denoted $\displaystyle{\lim_{z \to z_0} f(z) = L}$, if for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $z \in A$ and $0 < \mid z - z_0 \mid < \delta$ then $\mid f(z) - L \mid < \epsilon$. Recall that a point $z_0 \in \mathbb{C}$ is said to be an accumulation point of $A \subseteq \mathbb{C}$ if for all $r > 0$ we have that $B(z_0, r) \cap A \setminus \{ z_0 \} \neq \emptyset$. The requirement that $z_0$ is an accumulation point of $A$ in the definition above ensures that we can actually approach $z_0$ within the domain.
For example, consider the identity function $f(z) = z$. We claim that for all $z_0 \in \mathbb{C}$:

$$\lim_{z \to z_0} f(z) = z_0$$

To prove this, let $\epsilon > 0$ be given. Then notice that:

$$\mid f(z) - z_0 \mid = \mid z - z_0 \mid \quad (*)$$

So choose $\delta = \epsilon > 0$. Then if $0 < \mid z - z_0 \mid < \delta$ we have by $(*)$ that:

$$\mid f(z) - z_0 \mid = \mid z - z_0 \mid < \delta = \epsilon$$

Therefore $\displaystyle{\lim_{z \to z_0} f(z) = z_0}$.
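The choice $\delta = \epsilon$ can also be checked numerically along a few directions of approach (the sample point $z_0$ and tolerances below are arbitrary):

```python
f = lambda z: z          # the identity function from the example
z0 = 1 + 2j

# For the identity function, delta = epsilon works: |f(z) - z0| = |z - z0|
for eps in (1e-1, 1e-3, 1e-6):
    delta = eps
    for direction in (1, 1j, (1 - 1j) / abs(1 - 1j)):
        z = z0 + 0.5 * delta * direction   # a point with 0 < |z - z0| < delta
        assert abs(f(z) - z0) < eps
```

Of course this is only a spot check, not a proof; the epsilon-delta argument above covers every direction of approach at once.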
We will now state some basic properties of limits of complex functions that the reader should be familiar with for real functions. The proofs of these theorems are pretty much identical to that for real functions, so we will omit their proofs for now.
Theorem 1 (Uniqueness of Limits): If $\displaystyle{\lim_{z \to z_0} f(z) = L}$ and $\displaystyle{\lim_{z \to z_0} f(z) = M}$ then $L = M$.
Theorem 2: If $\displaystyle{\lim_{z \to z_0} f(z) = L}$ and $\displaystyle{\lim_{z \to z_0} g(z) = M}$ then: a) $\displaystyle{\lim_{z \to z_0} [f(z) + g(z)] = L + M}$. b) $\displaystyle{\lim_{z \to z_0} [f(z) - g(z)] = L - M}$. c) $\displaystyle{\lim_{z \to z_0} f(z)g(z) = L \cdot M}$. d) $\displaystyle{\lim_{z \to z_0} \frac{f(z)}{g(z)} = \frac{L}{M}}$ provided that $M \neq 0$. |
Claim: Let $f : [0,1] \to [0,1]$ be continuous and differentiable almost everywhere on $[0,1]$. If the derivative of $f$ is positive wherever it exists, then $f$ is strictly increasing.
Here's my
fallacious proof:
By way of contradiction, suppose there exist $x < y$ in $[0,1]$ such that $f(x) \ge f(y)$. I think I can say by continuity (intermediate value theorem?) that $f(x) \ge f(z)$ whenever $z \in [x,y]$. Now, if for every $z \in (x,y]$ we had that $f$ wasn't differentiable on $(x,z)$, then this would contradict the fact that $f$ is differentiable almost everywhere. Hence, there must exist a $z \in (x,y]$ such that $f$ is differentiable on $(x,z)$. By the mean value theorem, there is a $c \in (x,z)$ such that $f'(c) = \frac{f(z)-f(x)}{z-x} \le 0$, which is a contradiction. Hence, $f$ must be strictly increasing.
As Ryan Unger pointed out in the chatroom, I haven't given a terribly convincing reason why $f$ should be differentiable on any open interval in $[0,1]$, let alone $(x,z)$. So, my question is twofold. First, is the above claim true; is there any way to salvage my proof?
My next question is, does there exist a continuous function which is differentiable almost everywhere but whose set of points of differentiability contains no interval? I was thinking maybe the fat Cantor set could help...?
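On the fat Cantor idea: the Smith–Volterra–Cantor construction removes $2^{n-1}$ middle open intervals of length $4^{-n}$ at step $n$, leaving a nowhere dense set of positive measure. A quick check of that bookkeeping (this removal schedule is one standard choice):

```python
from fractions import Fraction

# At step n of the Smith-Volterra-Cantor ("fat Cantor") construction we
# remove 2**(n-1) open intervals, each of length 4**(-n), from [0, 1].
removed = sum(Fraction(2 ** (n - 1), 4 ** n) for n in range(1, 60))
remaining = 1 - removed
print(float(remaining))   # approaches 1/2: the set keeps measure 1/2
```

The total removed length is $\sum_n 2^{n-1} 4^{-n} = 1/2$, so the remaining closed set has measure $1/2$ while containing no interval.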
EDIT: I should point out that $f$ doesn't have to have the domain and codomain that I gave it; I only specified those because I'm thinking about Thompson's group $F$. |
I've built a simplistic Excel Monte Carlo model to price a zero-coupon bond, but it came up with a slightly unexpected result, so I wanted to confirm whether my maths is just a little rusty or my model is wrong.
Suppose we have a 40 year ZCB with a price $P_t$ that equates to a flat discount rate of $z_t$ at time $t$; i.e. the price $P_t = \exp(-(40 - t) \cdot z_t)$, and let's say $z_0 = 2\%$ in this case. (This is obviously different from the usual zero coupon bond vs rate pricing formula, $P_t = \mathbb{E}[\exp(-\int_t^T r_s \, ds)]$, as it refers to a single "average" rate over the remaining life of the bond)
The Monte Carlo simulation starts with this spread $z_0$, which evolves randomly after each period with no drift, according to $z_{t+\Delta t} = z_t + (\sigma \cdot \sqrt{\Delta t} \cdot \omega)$ where $\sigma$ is a vol parameter and $\omega \sim \text{N}(0,1)$.
At year 30, I look at the simulated value of $z_{30}$, and then infer the price of the bond as $P_{30} = \exp (-(40 - 30) \cdot z_{30})$.
Having run this simulation for a number of Monte Carlo runs, I took the average spread $z_t$ at $t = 30$, which was unsurprisingly equal to $2\% = z_0$, and the average price $P_{30}$ across all of these runs.
I was expecting that this average price would be approximately $\exp (-(40 - 30) \cdot z_{0}) = \exp(-10 \cdot 2\%) = \exp (-(40 - 30) \cdot \mathbb{E}(z_{30}))$, but in fact the value was consistently higher.
To some extent I can justify this to myself; the bond price is convex, so in absolute terms you'll see bigger absolute positive gains than negative losses when the spread moves tighter or wider by the same $\%$ amount (assuming the avg spread is still centered around $z_0 = 2\%$).
But on the other hand, this means your expected bond price in the monte carlo is a function of volatility: $\sigma = 0$ gives you $z_{30} = z_0$ in all cases and therefore a lower expected price than for some $\sigma > 0 $. I wouldn't intuitively expect that the expected bond price over time is dependent on vol even with zero drift; it's not an option-type payoff after all, and I've never previously seen bond prices contemplated as a function of vol. Discussions such as this one make me think that the $+\sigma^2/2$ lognormal expectation effect, skewing gains over losses, should be offset by $-\sigma^2/2$ term in the expected Brownian motion path (though admittedly this is a slightly different security from the one in that link), but my model appears to suggest otherwise.
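To check this concretely, the whole experiment fits in a few lines of stdlib Python (the vol $\sigma = 0.3\%$ is an arbitrary assumed value); the gap between the average price and the price at the average spread is exactly this convexity term:

```python
import math, random

random.seed(0)
z0, sigma = 0.02, 0.003      # assumed starting spread and vol
T, tau = 30.0, 10.0          # simulation horizon (years) and remaining life

n = 100_000
z30 = [z0 + sigma * math.sqrt(T) * random.gauss(0, 1) for _ in range(n)]
prices = [math.exp(-tau * z) for z in z30]

avg_z = sum(z30) / n
avg_p = sum(prices) / n
# exp is convex, so E[exp(-tau*z)] > exp(-tau*E[z]) by Jensen's inequality
assert avg_p > math.exp(-tau * avg_z)
print(avg_p, math.exp(-tau * avg_z))
```

For a normal spread the gap is the lognormal correction factor $\exp(\tau^2 \sigma^2 T / 2)$, which is exactly the $+\sigma^2/2$ effect discussed below.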
What am I missing here? Is my model wrong, am I trying to reconcile 2 fundamentally different quantities or should the expected bond price with nonzero vol genuinely be higher than the bond price discounted at the expected spread? |
Let me collect all the comments into a single answer.
Since the space of integrable simple functions is dense in $L^p(\Omega, \mathcal{F}, \mu)$ for all $p\in[1,+\infty)$, and in addition, all simple functions are in $L^\infty(\Omega, \mathcal{F}, \mu)$, we have that $\bigcap_{1\leq p\leq\infty}L^p(\Omega, \mathcal{F}, \mu)$ is dense in $L^q(\Omega, \mathcal{F}, \mu)$ for all $q\in[1,+\infty)$.
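A toy numerical illustration of the approximation behind this density argument (the grid sizes and the quadrature below are arbitrary choices): approximating $f(x) = x$ on $[0,1]$ by simple functions on finer grids drives the $L^1$ error to zero.

```python
def l1_error(k, n=10_000):
    """L^1 distance on [0,1] between f(x) = x and the simple function
    that floors x to a grid of k equal levels (midpoint quadrature)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        simple = int(x * k) / k    # simple-function value on the k-level grid
        total += abs(x - simple) / n
    return total

errs = [l1_error(k) for k in (4, 16, 64)]
print(errs)   # roughly 1/(2k): decreasing toward 0
```

Each approximant takes finitely many values, is bounded (so lies in every $L^p$ including $L^\infty$), and the error is $O(1/k)$ in any $L^p$ norm with $p < \infty$.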
If $q=\infty$ the answer depends on whether or not $\mu$ is finite.
If $\mu$ is finite, then $L^\infty(\Omega, \mathcal{F}, \mu)\subseteq L^p(\Omega, \mathcal{F}, \mu)$ for each $p$, so obviously we can put $q=\infty$ above. If $\mu$ is infinite (like the Lebesgue measure), then any nonzero constant function is in $L^\infty$, but very far from any integrable function (since no integrable function can be bounded away from zero on a set of infinite measure).
Worth mentioning, if inessential for this exercise, is that if $\Omega,\mu$ are sufficiently well-behaved ($\Omega$ is locally compact Hausdorff, $\mu$ is inner and outer regular, locally finite), then compactly supported continuous functions are dense in each $L^p(\mu)$ with $p<\infty$. This applies in particular to Haar measures on locally compact Hausdorff groups, such as the Lebesgue measure. |
In the Wikipedia article describing the Modulo operation, Raymond T. Boute introduces the Euclidean Definition, where the remainder is always positive and therefore consistent with Euclidean Division.
a (the Dividend) modulo n (the Divisor), where q is the quotient and r is the remainder:
$ q \in \mathbb{Z} $
$ a = n \times q + r\,$
$0 \leq r < |n|$
Two corollaries being:
$ n > 0 \Rightarrow q = \left\lfloor \frac{a}{n} \right\rfloor$
$ n < 0 \Rightarrow q = \left\lceil \frac{a}{n} \right\rceil $
equivalently:
$ q = \operatorname{sgn}(n) \left\lfloor \frac{a}{\left|n\right|} \right\rfloor. $
When I verified these statements using Python, I got incorrect results using a -Dividend and a +Divisor. Specifically, I had a = -1.0, n = 7.0 and I was expecting a quotient of 0.0 and a positive remainder of 1.0.
In the Wikipedia article on Euclidean Division it states that the remainder is the only term that needs to be positive.
However, if we have a -Dividend, a zero quotient, and a required positive remainder, then:
-a == n x zero + r
becomes incorrect because:
-a == +r
Therefore, in Euclidean Division, we cannot have a -Dividend?
Where am I reasoning incorrectly?
Many thanks, David.
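For what it's worth, the expectation of $q = 0$, $r = 1$ already fails the defining identity, since $7 \times 0 + 1 = 1 \neq -1$. Boute's definition can be coded directly as a hypothetical helper (this is not Python's built-in divmod):

```python
import math

def euclid_divmod(a, n):
    """Euclidean division per Boute: a == n*q + r with 0 <= r < |n|."""
    if n == 0:
        raise ZeroDivisionError("divisor must be nonzero")
    # the two corollaries: floor for n > 0, ceiling for n < 0
    q = math.floor(a / n) if n > 0 else math.ceil(a / n)
    r = a - n * q
    assert 0 <= r < abs(n) and a == n * q + r
    return q, r

print(euclid_divmod(-1, 7))    # (-1, 6): q is -1, not 0, and r is 6, not 1
print(euclid_divmod(7, -3))    # (-2, 1)
```

Note that Python's own `-1 // 7, -1 % 7` also gives `(-1, 6)`, since floor division agrees with the Euclidean definition whenever the divisor is positive.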
UPDATE:
Your clarifications have helped me a lot, and have moved me to laughter, when realising my own insanity.
I've always believed Mathematics to be a visual art. Probably because of the visual proof of Pythagoras's Theorem et al. When I expect that something should be visualisable, I get confused if I cannot visualise it.
When reading about Euclidean Division, the visual example made perfect sense, only when a >= 0, b > 0, q >= 0, r >= 0
@ @ @ @ @ @ @ @ @ @ @ @ @
13 swirls (a) divided by 4 (b) gives a (q) of 3 and a (r) of 1
a == bq + r => 13 == 4 * 3 + 1, or 13 swirls == 4 quotients of 3 + a remainder of 1
Which is visually and linguistically intuitive.
But when you have:
-7 swirls (a) divided by 3 (b) giving a (q) of -3 and a (r) of 2, because -7 == 3 * -3 + 2; I become visually lost.
How can -3 go into -7, 3 times and leave remainder 2?
The reason for this confusion is because during the Khan Academy addition videos, I visualised Integers > 0 being Violet Circles, Integers == 0 being Green Circles, and Integers < 0 being Red Circles.
In my head 2 - 2 == 2 + -2 == 2 Violet Circles merging with 2 Red Circles == the creation of 1 Green Circle.
Kind of like Particles Annihilating each other to create other Particles.
The comments system loses all formatting, so my related comments are here:
@Caveman, thank you for constructing those beautiful natural number asterisk sets.
Let's be totally clear on this:
13/4 is 3 sets of 4 union 1 set of 1,
or less ambiguously,
A set of 13 natural numbers asterisks divided into 4 equal natural amounts is 3 sets of 4 union 1 set of 1.
or if we want to think fractionally,
A set of 13 real number asterisks divided into 4 equal fractional amounts (of desired and sensible accuracy) is 3 sets of 4 + 1/4.
In reality, you cannot divide 12 oranges by -4 oranges because -4 oranges is a figment of your Cartesian imagination.
The notion of a negative amount has been patched by mathematicians to make it a credible public concept. Complex Numbers are an extension of this insanity.
There's no such thing as a negative orange or a negative amount of pizza. You can only divide them into smaller amounts up until you reach the limits of known reality; the Planck length.
The notion of a negative amount is an intellectual conspiracy eloquently constructed to obfuscate reality.
Sure, you can subtract a positive amount from a positive amount, but you cannot do this if the result is negative. 2 oranges minus 3 oranges is not defined in Nature's reality.
Negative amounts have been constructed by intellectual Magicians.
I still love you (there's no such thing as negative love; only positive hatred)
From the History Of Negative Numbers:
Although the first set of rules for dealing with negative numbers was stated in the 7th century by the Indian mathematician Brahmagupta, it is surprising that in 1758 the British mathematician Francis Maseres was claiming that negative numbers
"... darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple" . |
This may be a naive question, but I'll pose it.
Is there an example of a notion of forcing $\mathbb{P}$ that has the $\kappa$-c.c., is not $\kappa$-Knaster, and also is not "factorable" as a product of partial orderings?
For reference, let me state the relevant definitions and a typical example.
Definition. A partial order $\mathbb{P}$ is $\kappa$-c.c. if there is no antichain $A\subseteq\mathbb{P}$ of size $\kappa$.
Definition. A partial order $\mathbb{P}$ is $\kappa$-Knaster if, for every sequence $\overline{p}=\langle p_\alpha\rangle_{\alpha<\kappa}$ of elements of $\mathbb{P}$, there is a set $X\subseteq\kappa$ of size $\kappa$ such that the subsequence $\langle p_\beta\rangle_{\beta\in X}$ is pairwise compatible.
The prototype example involves a product of Souslin trees: If $T$ is a Souslin tree, then $T$ is $\omega_1$-c.c. and is therefore also $\omega_1$-Knaster. However, $T\times T$ is not $\omega_1$-c.c., yet this product is $\omega_1$-Knaster.
Is this an accident of the separability of $T$ or are there "natural" (perhaps even non-separative) examples of partial orders that are not "resolvable" or "decomposable" into products (Cartesian or otherwise) of other partial orders?
Edit:
After going through much confusion and re-reading my source text several times, I realize that the "prototypical example" I gave above is
not $\omega_1$-Knaster. In fact, the very text I was reading from explicitly states this and for whatever reason, I did not see the word "not" there. It turns out this is the most important word in that particular sentence, so many thanks are due to Paul McKenney for raising the issue.
I think my original question was intended to be whether the Knaster property can hold for partial orders that are not decomposable into products of some sort, but obviously I got carried away trying to give some background. |
This is Dirac's formalism. It is a generalization to continuous bases, i.e., those enumerated by an index which takes values in a continuous set like $\mathbb{R}^n$.

In that setting, in the same way that $|e_n\rangle$ is a discrete basis which allows you to decompose
$$|\psi\rangle=\sum_{n}\langle e_n|\psi\rangle |e_n\rangle,$$
and with respect to which the closure relation holds
$$\sum_{n}|e_n\rangle\langle e_n|=\mathbf{1}$$
we suppose that we can have one basis $|x\rangle$ enumerated by some continuous parameter like $x\in \mathbb{R}$, such that we can decompose
$$|\psi\rangle=\int \langle x|\psi\rangle |x\rangle dx$$
with closure relation
$$\int|x\rangle \langle x|dx = \mathbf{1}.$$
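The discrete closure relation above is easy to verify numerically; a toy sketch with a rotated orthonormal basis of $\mathbb{R}^3$ (real vectors for simplicity, and the basis choice is arbitrary):

```python
import math

theta = 0.7
# A rotated orthonormal basis {e_n} of R^3
e = [
    [math.cos(theta), math.sin(theta), 0.0],
    [-math.sin(theta), math.cos(theta), 0.0],
    [0.0, 0.0, 1.0],
]
psi = [0.3, -1.2, 2.0]

inner = lambda u, v: sum(a * b for a, b in zip(u, v))
# sum_n <e_n|psi> |e_n> should reproduce |psi> (the closure relation)
recon = [sum(inner(en, psi) * en[i] for en in e) for i in range(3)]
assert all(abs(r - p) < 1e-12 for r, p in zip(recon, psi))
```

The continuous case replaces the sum over $n$ with an integral over $x$; the subtlety discussed below is making sense of the "basis vectors" $|x\rangle$ themselves.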
The issue is that, roughly, if $A$ is a compact operator, the spectral theorem will ensure you have a discrete orthonormal basis of eigenvectors $|a_n\rangle$ such that $A|a_n\rangle = a_n |a_n\rangle$ (I assume a non-degenerate spectrum for simplicity).
When $A$ is unbounded, as often happens in QM, no such basis exists. But you suppose that generalized bases of the above sort do exist. So if $X$ is unbounded, you assume that for every $x\in \sigma(X)$ (the spectrum of $X$) there is a state ket $|x\rangle$ with $X|x\rangle = x|x\rangle$, and that these kets form a basis.
Notice that whenever you have position and momentum operators $X,P$ you want to require $[X,P]=i\hbar$ and there is a theorem which ensures that at least one of them will be unbounded, so the thing above will be needed then.
This is important due to the postulates of QM. Observables are hermitian operators. The possible values to be measured are exactly the values on the spectrum, i.e., the "eigenvalues" and the states with definite value of the quantity are the eigenvectors, the value measured is then the corresponding eigenvalue.
It is then postulated that if $A$ is the observable with continuous basis $|a\rangle$ then $\rho(a) = |\langle a|\psi\rangle|^2$ is the probability density of finding the value of $A$ in the state $|\psi\rangle$ to be between $a$ and $a+da$.
You then connect this with wave mechanics. Consider a particle in one dimension. We have the observable $X$ corresponding to position. Let $|S(t)\rangle$ be the state at time $t$. As we know, position can assume any possible value, so $\sigma(X)=\mathbb{R}$. Let $x\in \mathbb{R}$; the corresponding generalized eigenvector is $|x\rangle$. The probability of finding the particle between $x$ and $x+dx$ is then $\rho(x)\,dx$, where $\rho(x) = |\langle x|S(t)\rangle|^2$.
So making contact with wave mechanics, we see that $\Psi(x,t)=\langle x|S(t)\rangle$ indeed.
As a final remark, all of this about generalized eigenvectors and continuous bases from Dirac's formalism is extremely useful and elegant, but it is not rigorous. In rigorous functional analysis, points of the continuous spectrum of an operator have no eigenvectors, and these expansions are not defined. There is, though, a workaround that makes it all make sense, called the Gel'fand triplet approach.
Advances in Differential Equations, Volume 22, Number 11/12 (2017), 983-1012.

Multiple solutions of a Kirchhoff type elliptic problem with the Trudinger-Moser growth

Abstract
We consider a Kirchhoff type elliptic problem (P): \begin{equation*} \begin{cases} -\left(1+\alpha \int_{\Omega}|\nabla u|^2dx\right)\Delta u =f(x,u),\ u\ge0\text{ in }\Omega,\\ u=0\text{ on }\partial \Omega, \end{cases} \end{equation*} where $\Omega\subset \mathbb{R}^2$ is a bounded domain with a smooth boundary $\partial \Omega$, $\alpha > 0$ and $f$ is a continuous function in $\overline{\Omega}\times \mathbb{R}$. Moreover, we assume $f$ has the Trudinger-Moser growth. We prove the existence of solutions of (P), so extending a former result by de Figueiredo-Miyagaki-Ruf [11] for the case $\alpha =0$ to the case $\alpha>0$. We emphasize that we also show a new multiplicity result induced by the nonlocal dependence. In order to prove this, we carefully discuss the geometry of the associated energy functional and the concentration compactness analysis for the critical case.
Article information. Source: Adv. Differential Equations, Volume 22, Number 11/12 (2017), 983-1012. First available in Project Euclid: 1 September 2017. Permanent link: https://projecteuclid.org/euclid.ade/1504231228. Mathematical Reviews number (MathSciNet): MR3692916. Zentralblatt MATH identifier: 1379.35122. Citation
Naimen, D.; Tarsi, C. Multiple solutions of a Kirchhoff type elliptic problem with the Trudinger-Moser growth. Adv. Differential Equations 22 (2017), no. 11/12, 983--1012. https://projecteuclid.org/euclid.ade/1504231228 |
There is a shortcut around the Forward Equation when you are looking for the stationary distribution. Let me write $$dX = \mu(X)dt +\sigma(X)dW$$ for $$\mu(x)=b(1-x)-ax\ \text{ and }\ \sigma^2(x)=x(1-x)$$
The Forward Equation indeed states that the stationary distribution $p(x)$ satisfies $\partial p/\partial t = 0$, therefore$$\frac{1}{2}\frac{d^2}{dx^2}\left[\sigma^2(x)p(x)\right] - \frac{d}{dx}\left[\mu(x)p(x)\right] = 0$$
The trick is to take one differential as a common factor and write$$\frac{d}{dx} \left\{ \frac{1}{2}\frac{d}{dx}\left[\sigma^2(x)p(x)\right] - \mu(x)p(x) \right\}= 0$$Then, the term in the braces will be a constant (its derivative is zero), and we can take it to be zero. Then we are facing the first order ODE$$\frac{1}{2}\frac{d}{dx}\left[\sigma^2(x)p(x)\right] = \mu(x)p(x)$$Solving this yields the stationary distribution up to the normalization constant. The solution is actually given by$$p(x) \propto \sigma^{-2}(x) \exp\left( \int^x \frac{2\mu(u)}{\sigma^2(u)} du \right)$$
The above holds for any process. In your particular case, the integral becomes$$\int^x \frac{2b(1-u)-2au}{u(1-u)} du = 2b \int^x \frac{du}{u}-2a\int^x \frac{du}{1-u} = \log \left( x^{2b} (1-x)^{2a} \right)$$
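As a numerical sanity check, an Euler–Maruyama simulation of the SDE should settle into this stationary law, whose mean is $b/(a+b)$; the parameter values, step size, burn-in, and boundary clamp below are crude, arbitrary choices:

```python
import math, random

random.seed(1)
a, b = 1.0, 2.0              # assumed parameters of dX = (b(1-X) - aX)dt + sqrt(X(1-X))dW
dt, n_steps, burn_in = 1e-3, 200_000, 50_000

x, total, count = 0.5, 0.0, 0
for i in range(n_steps):
    drift = b * (1 - x) - a * x
    diff = math.sqrt(max(x * (1 - x), 0.0))
    x += drift * dt + diff * math.sqrt(dt) * random.gauss(0, 1)
    x = min(max(x, 0.0), 1.0)        # crude clamp to keep the path in [0, 1]
    if i >= burn_in:
        total += x
        count += 1

sim_mean = total / count
beta_mean = b / (a + b)              # mean of Beta(2b, 2a) = 2b / (2b + 2a)
print(sim_mean, beta_mean)
```

The long-run sample mean lands close to $b/(a+b) = 2/3$ for these parameters, consistent with the density derived here.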
Hence, overall the stationary distribution is Beta with parameters $(\alpha,\beta)=(2b,2a)$$$p(x) \propto x^{2b-1} (1-x)^{2a-1}$$ |
Studying the Polarization of Light with a Fresnel Rhomb Simulation
When analyzing optical designs, it is essential to consider not only light intensity but also polarization. Careful manipulation of light polarization can greatly improve image quality by filtering out light from unwanted sources — for example, to minimize glare. A useful way to learn about light polarization, and how to manipulate it, is through a Fresnel rhomb. With the COMSOL Multiphysics® software, engineers can model the effect of a Fresnel rhomb and similar elements on light polarization in optical systems.
Manipulating Linearly and Circularly Polarized Light
In the early 1800s, Augustin-Jean Fresnel, known for his research and inventions in the field of optics, was the first scientist to describe light as linearly, circularly, or elliptically polarized. In a linearly polarized plane electromagnetic wave, the two transverse components of the electric field are in phase with each other. We can think of these components as sine or cosine functions that reach their maximum at the same position and also reach zero at the same position. The following plot shows a linearly polarized wave; the electric field of the wave itself is shown in yellow, whereas the x- and y-components are shown in red and green, respectively.

A linearly polarized plane electromagnetic wave.
Compare this linearly polarized wave to the circularly polarized wave shown below. Here, the x- and y-components of the electric field are equal in magnitude but are offset by a 90° phase delay. Thus, when one component reaches a maximum or minimum value, the other is zero. The net effect is that the sum of these two components has a spiral-like shape, hence the name “circular polarization”.

A circularly polarized plane electromagnetic wave. The x- and y-components are shown in red and green, respectively. The direction of propagation is in blue, and the total electric field amplitude is in yellow.
Fresnel’s discovery of linearly and circularly polarized radiation lent support to his hypothesis that light is a pure transverse wave (with no longitudinal component), meaning that the oscillations of the electric and magnetic fields are always perpendicular to the direction of propagation. By further studying light polarization, he was able to explain that the total internal reflection (TIR) of light does not depolarize incident linearly polarized light, as previously thought, but rather changes it to elliptically or circularly polarized light. The production of circularly polarized light by TIR can be conveniently demonstrated by a glass parallelepiped in which light undergoes TIR at two opposite faces — a setup now known as a Fresnel rhomb.
A Fresnel rhomb is a type of glass prism that manipulates the polarization of light. The incident light is linearly polarized at a 45° angle to the plane of incidence. The light then undergoes TIR at two different faces. Each instance of TIR causes a phase delay of 45° between the electric field components polarized in the plane of incidence and perpendicular to it, for a total phase delay of 90°. Thus, the outgoing light is circularly polarized.
By using COMSOL Multiphysics® and the add-on Ray Optics Module, engineers can predict the polarization of light as it propagates through an optical system. This is because the Ray Optics Module records light intensity and polarization using the Stokes–Mueller calculus, or simply Mueller calculus, which can fully represent any state of polarization. In the next section, let’s look at a tutorial model that demonstrates light changing from linearly to circularly polarized at a specific angle of incidence in a Fresnel rhomb.
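As a toy illustration of the Stokes description (the helper below is a generic textbook formula under one common sign convention, not the Ray Optics Module's API): equal-magnitude field components that are in phase give $S_3 = 0$ (linear polarization), while a 90° phase offset gives $|S_3/S_0| = 1$ (circular polarization).

```python
import cmath, math

def stokes(Ex, Ey):
    """Stokes parameters from a Jones vector (one common sign convention)."""
    c = Ex * Ey.conjugate()
    return (abs(Ex)**2 + abs(Ey)**2,   # S0: total intensity
            abs(Ex)**2 - abs(Ey)**2,   # S1: horizontal/vertical preference
            2 * c.real,                # S2: +/-45 degree preference
            -2 * c.imag)               # S3: circular component

# 45-degree linear polarization: in-phase, equal magnitudes -> S3 = 0
s0, s1, s2, s3 = stokes(1, 1)
assert s3 == 0

# 90-degree phase delay between equal components -> |S3/S0| = 1 (circular)
s0, s1, s2, s3 = stokes(1, cmath.exp(1j * math.pi / 2))
assert abs(abs(s3 / s0) - 1) < 1e-12 and abs(s1) < 1e-12
```

Intermediate phase delays give $0 < |S_3/S_0| < 1$, i.e. elliptical polarization, which is exactly what the model below shows after the first TIR.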
Modeling a Fresnel Rhomb with COMSOL Multiphysics®
The Fresnel rhomb geometry is a simple uncoated glass prism in the shape of a parallelepiped. A simple diagram of the geometry is given below.
Linearly polarized light (1) enters the prism (2). This light is linearly polarized 45° to the plane of incidence — that is, the components of the electric field lying in the screen and perpendicular to it have equal magnitudes, and they are in phase. The prism has an angle (θ) that, if set properly, will cause a 45° phase delay between the orthogonal electric field components during each instance of TIR (3 and 4). The light then exits the prism (5). The outgoing light (6) is now circularly polarized. The components of the electric field lying in your screen and perpendicular to it still have equal magnitudes, but they are now 90° out of phase.
The exact value of the phase delay caused by TIR on an uncoated surface depends on the refractive indices on either side of the surface, $n_1$ and $n_2$, and on the angle of incidence, $\theta$. The relationship between these quantities and the phase delay can be derived from Snell's law and the Fresnel equations. (For more details about the equations, you can check the Fresnel Rhomb model documentation.)
2 \arctan\left(\frac{\cos\theta\sqrt{\sin^2\theta - \left(n_2/n_1\right)^2}}{\sin^2\theta}\right) = 45^{\circ}
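Outside COMSOL, the same relation can be solved with simple bisection (this sketch uses the standard form of the TIR phase-delay relation, with $\sin^2\theta$ in the denominator, and the model's refractive index ratio $n_2/n_1 = 1/1.51$; the bracket endpoints are ad hoc):

```python
import math

n = 1 / 1.51                 # refractive index ratio n2/n1 (glass to air)
target = math.radians(45)    # desired phase delay per TIR

def delta(t):
    """Phase delay per TIR: 2*atan(cos(t)*sqrt(sin^2(t) - n^2) / sin^2(t))."""
    s2 = math.sin(t) ** 2
    return 2 * math.atan(math.cos(t) * math.sqrt(s2 - n * n) / s2)

# delta rises from 0 at the critical angle; bisect on the increasing branch
lo, hi = math.asin(n) + 1e-9, math.radians(51)
for _ in range(100):
    mid = (lo + hi) / 2
    if delta(mid) < target:
        lo = mid
    else:
        hi = mid

print(mid, math.degrees(mid))   # about 0.8486 rad, i.e. roughly 48.6 degrees
```

There is a second, larger root on the decreasing branch of the same curve; a real Fresnel rhomb can be cut at either angle.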
The model consists of two studies. First, the value of θ that gives the desired phase delay of 45° is solved for. Then, this angle is used to define the geometry for the ray optics simulation.

Study 1: Solving for the Angle of Incidence
First, you can use the Global ODEs and DAEs interface to solve the above equation for the angle of incidence that causes a phase retardation of δ = 45° between the s- and p-polarized components during each TIR, for the refractive index ratio n = 1/1.51. (Even though this equation is not an ordinary differential equation or differential-algebraic equation, you can still use this interface to solve it.) The resulting value of θ in this example is 0.84855 radians, or ~48.618°.

Study 2: Tracing the Path of the Light Ray
Next, you can use the Geometrical Optics interface to trace the path of a light ray through the Fresnel rhomb as it undergoes TIR at the angle of incidence computed by the Global ODEs and DAEs interface. Note that the ray is linearly polarized at this stage, with its direction of polarization at a 45° angle to the plane of incidence.
The value for the angle of incidence obtained from Study 1 (0.84855 radians) can be used to set up the model geometry for the Fresnel rhomb, which is a parallelogram extruded into a 3D geometry. Using a 3D geometry is preferred because it helps to illustrate the state of ray polarization, since the 3D Ray Trajectories plot can display polarization ellipses. In this step, the Stokes parameters, which are computed along each ray trajectory, are used to describe the degree to which a ray is linearly or circularly polarized.

Evaluating the Simulation Results
Now, let’s take a closer look at the simulation results in the aforementioned
Ray Trajectories plot. In this plot, shown below, the linear ray enters from the left side of the prism. The colors illustrate the optical path length along the ray, and the circles and ellipses along the ray path show the polarization. The arrows along the perimeter of each circle/ellipse show the sense of rotation of the instantaneous field vector.
As the light goes through the Fresnel rhomb, the linearly polarized ray becomes elliptically polarized after one TIR. After two TIRs, the polarization ellipses change to circles, meaning the light is circularly polarized.
Ray propagation in a Fresnel rhomb.
At this camera angle, it might not be obvious that the outgoing light is circular. But by rotating the 3D plot in the Graphics window, you can see that the light changes from linear to elliptical to circular polarization.
Animation showing the incident light in a Fresnel rhomb when it is linearly polarized (δ = 0), elliptically polarized after one reflection (δ = 45°), and circularly polarized after two reflections (δ = 90°).
You can visualize this polarization in another way by plotting the ratio of the Stokes parameters, the results of which are displayed below. To start, the ray is linearly polarized, with the ratio at zero. After the first TIR, the ratio has values with magnitudes between zero and one, indicating varying degrees of elliptical polarization. Then, after the second TIR, the magnitude is almost exactly unity, indicating circular polarization.
The ratio of the fourth and first Stokes parameters plotted as a function of the optical path length.

Next Steps
To try the Fresnel rhomb model featured in this blog post, click the button below. Then, in the Application Gallery, you can download the step-by-step documentation for this example and (with a valid software license) the accompanying MPH-file.
A natural extension of the Fresnel rhomb tutorial is to apply thin dielectric coatings to the prism surfaces. Dielectric coatings affect the Fresnel coefficients at an interface; therefore, they can affect the phase delay between in-plane and out-of-plane electric field components. An example of this is a TIR thin-film achromatic phase shifter (TIRTF-APS). It uses thin films to offset the frequency dependence of the refractive index in the glass so that outgoing light remains circularly polarized for a wide range of wavelengths.
Further Reading
Want to learn more about ray optics modeling? Take a look at these blog posts:
Let us now consider fracture from a physical point of view.
Bonding in materials
There are mainly four different bonding types in materials: Molecular or Van der Waals bonding, Ionic bonding, Covalent bonding, and Metallic bonding.
Depending on the nature of the bonding, we will see that the fracture behavior of the material can be different.
Molecular or Van der Waals bonding
Van der Waals bonding (Picture I.9) results from the interaction between dipoles. Examples of materials having such bonds are Argon (Ar), polymers, benzene (C$_6$H$_6$), and graphite. As this kind of bonding is weak, only a reduced amount of energy is necessary to break the bonds.

Ionic bonding
Ionic bonding (Picture I.10) results from the interactions between ions. One or more electrons are transferred from an atom (the Cation) to another (the Anion), as in NaCl, HCl... The forces involved in this kind of bonding are non-directional Coulombic forces, resulting in hard & brittle crystals. A material is said to be brittle when it breaks without noticeable plastic deformations.
Covalent bonding
Covalent bonding (Picture I.11) involves bonds between atoms resulting from the "sharing" of electrons, as in CH$_4$ or diamond. The bonding forces involved are directional and strong, leading to hard & brittle crystals.

Metallic bonding
Metallic bonding (Picture I.12) is due to the existence of a cloud or sea of electrons. These valence electrons are not shared between two atoms but are common to all the nuclei, and can thus move freely. This leads to materials, such as metals and alloys, which are conductors. This bonding is non-directional and will lead either to brittle or ductile (meaning experiencing plastic deformations before failure) materials, as will be explained.
Free electrons model of a crystal
As previously stated, the nature of the bonding will have an effect on the fracture behavior of the material. To model the bonding within the material, we consider the free electrons model. In this model each atom is supposed to have a certain radius $r_0$, determined by the equilibrium between the attractive and repulsive forces.
The attractive potential
The expression of this potential is different for each kind of bonding but its shape remains similar. For instance for metallic bonding, the attractive potential is due to the interactions between the negative electronic cloud and the positive nuclei, and is given by:
\begin{equation} U_a = - M \frac{z_1 z_2 q^2}{4 \pi \varepsilon_0 r} = -\frac{A_a}{r} \label{eq:attrapot},\end{equation}
where $z_i$ are the valences, $\varepsilon_0$ is the permittivity equal to 8.85 pF/m, $q$ is the electronic charge equal to $1.602\times10^{-19}$ C, and $M$ is the Madelung constant, which depends on the geometric arrangement of the crystal.
The attractive force to which the atoms are submitted is the gradient of the potential (\ref{eq:attrapot}), which is given by:
\begin{equation} f_a = \frac{A_a}{r^2} \label{eq:attra}.\end{equation}
The repulsive potential
The repulsive potential is due to the interactions between the electrons within the electronic cloud itself or between the nuclei themselves. It can be expressed as:
\begin{equation} U_r = b\lambda \exp{\frac{-r}{\rho}}= A_r \exp{\frac{-r}{\rho}}, \label{eq:repupot}\end{equation}
where $b$ is the number of adjacent ions, and where both $\lambda$, usually expressed in eV (an energy unit equal to $1.602\times10^{-19}$ J), and $\rho$, usually expressed in nm, are the repulsive parameters. The repulsive force is obtained from the gradient of the potential (\ref{eq:repupot}):
\begin{equation} f_r = -\frac{A_r}{\rho} e^{-\frac{r}{\rho}}. \label{eq:repu}\end{equation}
Total potential
As both potentials have been modeled, it is now possible to determine the equilibrium radius between the spheres. From basic mechanics this equilibrium is obtained for the minimum value of the total potential when the attractive force is equal to the repulsive force as illustrated in Picture I.13 for the ionic bonding in NaCl crystal. If we consider the potentials given by (\ref{eq:attrapot}) and (\ref{eq:repupot}) we find that the equilibrium is obtained for $r_0$ and $U_0$ respectively equal to:
\begin{align} r_0^2\exp{\frac{-r_0}{\rho}} &= \frac{A_a\rho}{A_r}, \text{ and}\\ U_0 &= - \frac{A_a}{r_0}\left[1-\frac{\rho}{r_0}\right]. \label{eq:equpot}\end{align}
Cleavage model of a perfect crystal
The free electrons model can be used to derive the maximal force $f_\text{max}$ that can be reached between the atoms at a corresponding radius $r_\star$, see Picture I.13. These two values can be evaluated from (\ref{eq:attra}) and (\ref{eq:repu}) and are respectively given by:
\begin{align} r_\star^3\exp{\frac{-r_\star}{\rho}} &= \frac{2 A_a\rho^2}{A_r},\text{ and}\\ f_\text{max} &= \frac{A_a}{r_\star^2}\left[1-\frac{2\rho}{r_\star}\right]. \label{eq:maxforce}\end{align}
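Both pairs of relations can be verified numerically by root finding on the net force $f(r) = A_a/r^2 - (A_r/\rho)\,e^{-r/\rho}$. The sketch below uses hypothetical dimensionless parameters, chosen only so that the equilibrium lands exactly at $r_0 = 3$; they are not taken from the text:

```python
import math

# Hypothetical dimensionless parameters: A_a = rho = 1, and A_r = e^3/9
# chosen so that r0^2 * exp(-r0/rho) = A_a*rho/A_r holds exactly at r0 = 3.
A_a, rho = 1.0, 1.0
A_r = math.exp(3.0) / 9.0

def force(r):
    # Net interatomic force f_a + f_r = A_a/r^2 - (A_r/rho) * exp(-r/rho)
    return A_a / r**2 - (A_r / rho) * math.exp(-r / rho)

def bisect(g, lo, hi, tol=1e-12):
    # Plain bisection; assumes g changes sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Equilibrium radius: the net force vanishes (attraction balances repulsion).
r0 = bisect(force, 2.0, 4.0)

# Radius of maximal force: the derivative of the net force vanishes.
dforce = lambda r: -2 * A_a / r**3 + (A_r / rho**2) * math.exp(-r / rho)
r_star = bisect(dforce, 4.0, 6.0)

f_max = force(r_star)
print(r0, r_star, f_max)
```

The printed values satisfy the implicit relations for $r_0$ and $r_\star$ given above, which is a useful cross-check of the algebra.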
However, the fracture load of a perfect crystal does not only depend on the maximal bonding force between the atoms (\ref{eq:maxforce}), but also on the crystal lattice. For instance, metallic bonds tend to form crystal structures like the body-centered cubic (BCC) crystal (iron at low temperature, ferrite, ...) or the face-centered cubic (FCC) crystal (iron at high temperature, austenite, aluminum, copper, ...). Pictures I.14 and I.15 display examples of such lattices, where $r_0$ is the equilibrium radius calculated with the free electrons model and $a_0^u$ is the size of a unit cell.
Once the bonding forces and the lattice structures are known, one has the information needed to evaluate theoretically the failure force along a cleavage plane of a perfect crystal. The stress $\sigma$ is defined as the sum of all the forces acting on a surface divided by the surface area:
\begin{equation} \sigma = \frac{1}{S_\text{ref} (\text{plane},r_0)} \sum_i f_i , \label{eq:stress}\end{equation}
where the $f_i$ correspond to all the forces between atoms crossing the cleavage plane, see Picture I.16. The Young modulus $E$ is by definition obtained from:
\begin{equation} E = \lim_{dr\rightarrow 0}\frac{\sigma\left(r_0+dr\right)-\sigma\left(r_0\right)}{dr}r_0, \label{eq:young}\end{equation}
see Picture I.16. Finally, the surface energy in the case of identical atoms is obtained as follows. Twice the surface energy related to a surface is given by the number of bonds $N_\text{rupture}$ cut by the cleavage surface, times the value of the potential at equilibrium $U_0$, divided by the reference surface area. This corresponds to the energy one needs to supply to separate the upper and lower surfaces (which explains the coefficient 2 in the formula):
\begin{equation} 2\gamma_s = \frac{N_\text{rupture}U_0}{S_\text{ref}}, \label{eq:energsurf}\end{equation}
see Picture I.16. Obviously both $S_\text{ref}$ and $N_\text{rupture}$ depend on the crystal lattice as well as on the orientation of the cleavage plane. For instance, in the case of cleavage along a (1,1,0) surface (Miller indices) of a BCC crystal, see Picture I.17, these values are equal to:
$S_\text{ref} = \sqrt{2} (a_0^u)^2$, and
$N_\text{rupture} = 4 \cdot \frac{1}{8}$ (bonds of corner atoms) $+ 2 \cdot \frac{1}{8}$ (bonds of the central atom).
We can now derive a theoretical formula to predict the theoretical value $\sigma_\text{Th}$ (in the case of a brittle perfect crystal) of the tensile strength $\sigma_\text{TS}$ at cleavage. To do so we assume that the atoms on both sides of the cleavage plane separate at once, as shown in Picture I.18. If we look at the evolution of the bonding force and of the resulting stress in Picture I.16, we see that a sinusoidal approximation can be used to simplify the calculations, see Picture I.19:
\begin{equation} \sigma = \sigma_{\text{Th}} \sin{\left(\pi\frac{a^u-a^u_0}{\delta}\right)}. \label{eq:stresssinapprox} \end{equation}
The missing parameter $\delta$ can be obtained in terms of the Young modulus by using (\ref{eq:young}):
\begin{equation} \frac{E}{a^u_0} = \left.\frac{d\sigma}{d a^u}\right|_{a_0^u} = \sigma_{\text{Th}} \frac{\pi}{\delta}. \label{eq:expyoung} \end{equation}
Finally, (twice) the surface energy can be introduced by using its definition - the work of separation - which corresponds to the integral of the stress (\ref{eq:stresssinapprox}) from the equilibrium distance to the value at which the stress vanishes:
\begin{equation} 2\gamma_s = \int_{a_0^u}^{a_0^u+\delta} \sigma da^u = 2 \sigma_{\text{Th}}\frac{\delta}{\pi}. \label{eq:expenergsurf} \end{equation}
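As a sanity check, the integral in (\ref{eq:expenergsurf}) can be evaluated numerically; the values of $\sigma_\text{Th}$ and $\delta$ below are arbitrary illustrative choices:

```python
import math

# Numerical check of the work-of-separation integral: with the sinusoidal
# stress approximation, the area under sigma from a0 to a0 + delta equals
# 2 * sigma_th * delta / pi.
sigma_th, delta, a0 = 3.0, 0.2, 1.0   # arbitrary illustrative values

n = 100_000
h = delta / n
# Midpoint-rule quadrature of sigma(a) = sigma_th * sin(pi * (a - a0) / delta)
integral = sum(
    sigma_th * math.sin(math.pi * ((i + 0.5) * h) / delta) * h
    for i in range(n)
)
print(integral, 2 * sigma_th * delta / math.pi)  # the two values agree
```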
Combining (\ref{eq:expyoung}) and (\ref{eq:expenergsurf}) leads to the theoretical tensile strength $\sigma_\text{Th}$:
\begin{equation} \sigma_{\text{Th}} = \sqrt{\frac{E\gamma_s}{a^u_0}}. \label{eq:tensstress} \end{equation}
The table below shows some examples of the theoretical tensile strength computed using (\ref{eq:tensstress}):
| Material | $a^u_0$ [nm] | $E$ [GPa] | $\gamma_s$ [J m$^{-2}$] | $\sigma_\text{Th}$ [GPa] |
|---|---|---|---|---|
| Glass | 0.3 | 60 | 21 | 64 !!! |
| Steel (low T) | 0.3 | 210 | 3400 | 1,500 !!! |
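The table entries can be spot-checked by plugging the quoted values (in SI units) into $\sigma_\text{Th} = \sqrt{E\gamma_s/a^u_0}$:

```python
import math

# Spot-check of the theoretical tensile strength sigma_Th = sqrt(E*gamma_s/a0),
# using the table values converted to SI units.
def sigma_th(E, gamma_s, a0):
    return math.sqrt(E * gamma_s / a0)

glass = sigma_th(60e9, 21.0, 0.3e-9)     # ~6.5e10 Pa, i.e. ~65 GPa
steel = sigma_th(210e9, 3400.0, 0.3e-9)  # ~1.5e12 Pa, i.e. ~1,500 GPa
print(glass / 1e9, steel / 1e9)
```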
When comparing the values of the theoretical model in Table I.1 to the values obtained from experiments, it appears that they are some orders of magnitude too high. This discrepancy was explained by Griffith in 1920, when he analyzed the influence of a scratch of size $2a$ on the tensile strength of a glass plate (thermally treated to remove the residual stresses). He observed a dependency which follows the equation:
\begin{equation}\sigma_{\text{TS}}\sqrt{a} \propto \sqrt{E\, 2\gamma_s}, \label{eq:griffith} \end{equation}
as illustrated in Picture I.20. Griffith's experiments put in evidence the effect of a defect such as a scratch (which is actually a crack) on the material strength. Fracture mechanics was born: the resistance of a structure can only be evaluated by considering its defects.
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that
$$ g \circ f = 1_x $$
and
$$ f \circ g = 1_y . $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse.
Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.
Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse.
Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
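This finite observation can even be checked by brute force. The sketch below (an illustrative encoding, not from the lecture) lists all six morphisms of \(\mathbf{3}\) as paths and searches for inverses:

```python
# Brute-force check that the only isomorphisms in the free category 3 are the
# identities. A morphism is encoded as (source, target, path-of-edges).
morphisms = [
    ("v1", "v1", ()), ("v2", "v2", ()), ("v3", "v3", ()),  # identities
    ("v1", "v2", ("f1",)), ("v2", "v3", ("f2",)),          # length-1 paths
    ("v1", "v3", ("f1", "f2")),                            # f2 after f1
]

def compose(f, g):
    # "f then g" (i.e. g composed with f); defined only when target(f) == source(g).
    if f[1] != g[0]:
        return None
    return (f[0], g[1], f[2] + g[2])

def is_identity(m):
    return m[0] == m[1] and m[2] == ()

def has_inverse(f):
    for g in morphisms:
        fg, gf = compose(f, g), compose(g, f)
        if fg is not None and gf is not None and is_identity(fg) and is_identity(gf):
            return True
    return False

isos = [f for f in morphisms if has_inverse(f)]
print(isos)  # only the three identity morphisms survive
```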
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).
Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.
Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto.
Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
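A small finite-set sketch of this correspondence (the function `f` below is an arbitrary illustrative choice):

```python
# For functions between finite sets, an isomorphism in Set is exactly a
# bijection, and the inverse morphism is the flipped graph of the function.
def is_bijection(f, domain, codomain):
    image = [f[x] for x in domain]
    return sorted(image) == sorted(codomain)

def inverse(f):
    return {y: x for x, y in f.items()}

f = {1: "a", 2: "c", 3: "b"}  # a bijection {1,2,3} -> {"a","b","c"}
g = inverse(f)

assert is_bijection(f, [1, 2, 3], ["a", "b", "c"])
assert all(g[f[x]] == x for x in f)  # g after f = identity on {1,2,3}
assert all(f[g[y]] == y for y in g)  # f after g = identity on {"a","b","c"}
```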
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle.
Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this. |
I have problems deriving formal proofs of following problem inspired in dynamic programming:
$V^{k+1}=\min_{\mu'}G(\mu',V^k)=T(V^k)$
$\mu^{k+1}=\arg \min_{\mu'} G(\mu',V^k)=\mu^*(V^k)$
where $V\in\mathcal{V}$ and $\mu\in\mathcal{U}$.
We make the following assumptions:
$\mathcal{V}$ and $\mathcal{U}$ are compact.
$G(\mu,V)$ is convex and differentiable in $\mu$, and thus $\mu^*(V)$ is unique.
$\mu^*(V)$ is differentiable in $V$.
$\mu^*(V)$ is an interior point for every $V\in \mathcal{V}$.
$T$ is a contraction mapping, i.e., $\|\frac{\partial T(V)}{\partial V}\|<1$.
Since $T$ is a contraction mapping, $V^k$ converges to the (unique) fixed point of $T$. It is obvious then that $\mu^k$ also converges, since it is unique for each $V^k$ and $V^k$ converges.
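A minimal one-dimensional sketch of this convergence argument, with a hypothetical contraction `T` and selection `mu_star` standing in for your $T$ and $\mu^*$ (they are illustrative choices, not derived from $G$):

```python
# T is a contraction with modulus 1/2, so V^k converges to its unique fixed
# point; mu^k = mu*(V^k) then converges simply because mu* is continuous.
def T(V):
    return 0.5 * V + 1.0   # |dT/dV| = 0.5 < 1; fixed point V = 2

def mu_star(V):
    return V ** 2          # any continuous selection mu*(V)

V = 0.0
for _ in range(60):        # Banach fixed-point iteration V^{k+1} = T(V^k)
    V = T(V)
print(V, mu_star(V))       # -> 2.0 4.0
```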
However, I'm having problems deriving a "formal" proof for the whole fixed point system:
$(V,\mu)=F\left((V,\mu)\right)=(T(V),\mu^*(V))$
This fixed point system will have a unique solution if $F$ is a contraction mapping in the space $\mathcal{V}\times\mathcal{U}$, i.e., $\|\frac{\partial F(V,\mu)}{\partial (V,\mu)}\|<1$ in the product norm. Since $F$ does not depend on $\mu$, $\|\frac{\partial F(V,\mu)}{\partial \mu}\|=0$. Now choosing the $\infty$ norm for the Jacobian the norm would be:
$\|\frac{\partial F(V,\mu)}{\partial (V,\mu)}\|=\max\{\|\frac{\partial T(V)}{\partial V}\|,\|\frac{\partial \mu^*(V)}{\partial V}\|\}$
Now, from the assumptions we know that $\|\frac{\partial T(V)}{\partial V}\|<1$, but I don't know how to prove that $\|\frac{\partial \mu^*(V)}{\partial V}\|<1$ using the given assumptions, or if I overlooked something important.
Please feel free to correct any mistakes or give any ideas. Any help would be greatly appreciated.
Thank you very much. |
The reason why planets in our solar system have stable orbits is that, during the formation of the solar system, a debris disk consisting mostly of gas was orbiting the Sun. During this period, when protoplanets started forming, they interacted with this debris disk; due to these interactions (frictional forces), the planets achieved more or less circular orbits. Afterwards, our solar system continued to evolve and the debris disk disappeared (asteroids and comets were formed). From this point on, the planets were fixed in the orbits they had achieved.
The image below represents a star system with one planet orbiting, together with a debris disk.
How does the orbit of a body like a satellite or planet have a perfect balance between the gravitational pull and the centrifugal force of revolution?
No, it doesn't have a perfect balance. If that were the case, all orbits would be perfectly circular, which is not the case. At a given distance $r$ from the Earth there is a specific orbital velocity at which an object has a circular orbit. If an object in that orbit accelerated to a larger velocity, the gravitational pull wouldn't increase just to keep the object on a circular path; the gravitational pull stays the same, so the object will have an elliptical orbit. To explain this mathematically: for an object to be in a circular orbit, the centripetal force must equal the gravitational force:
$$\frac{mv^2}{r} = G\frac{Mm}{r^2} \Rightarrow v^2=\frac{GM}{r}$$
$$v_{orbital}=\sqrt{\frac{GM}{r}}$$
From the equation we can see that if we increase the orbital speed, the above equality between centripetal acceleration and gravitational attraction no longer holds, and thus the object no longer has a circular orbit.
From the picture below we can see, in red, that if the orbital velocity equals the value calculated above, the orbit is circular; if it is less or more, the orbit is elliptical.
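As a worked example of the formula, here is the circular orbital speed around Earth at a hypothetical 400 km altitude (using the standard value of Earth's gravitational parameter $GM$):

```python
import math

# Circular orbital speed v = sqrt(GM/r) around Earth.
GM_EARTH = 3.986004418e14   # standard gravitational parameter, m^3 s^-2
R_EARTH = 6.371e6           # mean Earth radius, m

r = R_EARTH + 400e3         # hypothetical 400 km altitude
v = math.sqrt(GM_EARTH / r)
print(v)                    # ~7.67e3 m/s
```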
Overview > Brittle and Ductile Materials
In the previous section we have seen the effect of cracks on cleavage of brittle materials. However other modes of failure exist.
Macroscopic behavior
By definition samples made of a brittle material do not exhibit plastic deformations prior to their macroscopic failure (red curve on Picture I.21), contrarily to samples made of ductile materials (blue curve on Picture I.21). The shape of the fracture surface is also different.
Brittle materials
For a brittle fracture the fractured surface remains planar on average, as shown in Picture I.22. When magnifying the fracture surface it appears that the failure occurred along the lattice planes by cleavage, i.e. by separation of crystallographic planes, see Picture I.23. The cleavage happens within the grains in the case of intra-granular failure. Note that under some circumstances (creep, corrosion, presence of trapped H$_2$, ...) the failure mode can become inter-granular, as will be discussed in another chapter.
For brittle materials Griffith's law is valid:
\begin{equation} \sigma_{\text{TS}}\sqrt{a} \propto \sqrt{E\, 2\gamma_s} \label{eq:griffith} \end{equation}
Ductile materials
Contrarily to brittle materials, ductile materials exhibit plastic deformations prior to macroscopic failure. In a tensile test a macroscopic sample will exhibit necking before fracture, see Picture I.24. The fracture surface also has a different aspect: instead of being composed of crystallographic planes, the surface is rougher and exhibits small craters. This appearance can be explained by the existence of plastic deformations, which correspond to the motion of dislocations. As dislocations are nothing else but missing atoms in the crystallographic arrangement, they can be seen as voids. During the dislocations' motion, these voids tend to gather around obstacles which block their motion, such as inclusions or grain boundaries, and to form micro-voids. These in turn lead to micro-cavity coalescence and to crack growth, see Picture I.25.
For such materials the Griffith law cannot be applied but has to be modified. Irwin (1950) found that the plastic work at the crack tip should be added to the surface energy:
\begin{equation} \sigma_{\text{TS}}\sqrt{a} \propto \sqrt{E\left(2\gamma_s+W_\text{pl}\right)} \label{eq:irwin} \end{equation}
The question is now: why does a material exhibit a brittle or a ductile behavior? This will be explained by the failure mechanisms induced at the micro-level in the next section. Note, however, that another common fracture mode exists: fatigue, which also exhibits a different fracture surface, as in Picture I.26. We will briefly introduce this point later on and detail it in a coming section.
Microscopic behaviors
In order to predict the failure mode of a given material we first need to analyze the different deformation mechanisms involved at the atomic level.
Elasticity
When submitted to a small traction, most materials exhibit reversible deformations, meaning that once the stress is released the initial configuration is recovered. This is called elasticity and can be explained by the microscopic behavior illustrated in Picture I.27. As can be seen, this elastic behavior is due to the stretching of bonds (Picture I.27, middle), which remains reversible for small stresses (Picture I.27, right). The slope at the origin of the stress-strain curve, see Picture I.28 right, is the Young modulus $E$. This modulus can be deduced from the free electrons model by considering the slope of the function $\sigma(a^u)$, see Picture I.28 left, as explained in the previous section. Note that elasticity can be either linear (metals) or non-linear (polymers), see Picture I.28.
Plasticity
Some materials exhibit plastic deformation before fracture, see Picture I.29. Usually plasticity occurs above a given stress threshold: the yield stress. If the stress is then released, only a part of the deformations is recovered: the elastic part $\varepsilon^e$. A permanent deformation remains: the plastic deformation $\varepsilon^p$.
This elasto-plastic behavior can be explained at the microscopic level by the plane shearing mechanism illustrated in Picture I.30. At a first stage for $\sigma = \sigma_1$, only bonds stretching is involved and a pure elastic behavior is observed, see Picture I.30 center-left . However for a higher stress $\sigma = \sigma_2$, on top of the bonds stretching there is a plane shearing involved, see Picture I.30 center-right. Once the stress is released only the deformations resulting from the bonds stretching can be recovered, see Picture I.30 right. The question which arises now is "What is involved behind the plane shearing mechanism?" This plane shearing mechanism can be modeled in two different ways.
Lattice-plane motion assumption
The first model consists in assuming that a whole part of the crystal lattice can slip at once as illustrated by Picture I.31. This means that a whole row of atoms slides on another row of atoms.
The activation shear stress $\tau$ required for this motion can be evaluated using the model represented in Picture I.32. To do so we will use a sine approximation: from Picture I.32, it appears that the shear stress $\tau$ has to be equal to 0 for slip values $\delta$ equal to 0 and $2r_0$ as both configurations correspond to equilibrium configurations of the crystal, but also for a slip value $\delta=r_0$ for symmetry reasons. Thus $\tau$ is given by:
\begin{equation} \tau = \tau_\text{max}\sin{\frac{\pi\delta}{r_0}} \label{eq:shear} \end{equation}
The shear modulus $G$ is defined as the variation of the shear stress with respect to the variation of shear angle, and can be evaluated from (\ref{eq:shear}):
\begin{equation} G = \left.\frac{d \tau}{d \delta/a_0^u}\right|_{\delta=0} = \tau_\text{max} \frac{\pi a_0^u}{r_0} \label{eq:expG} \end{equation}
If we replace the variables $a_0^u$ and $r_0$ by the values of a BCC crystal we have:\begin{equation} \tau_\text{max} = \frac{G\sqrt{3}}{4\pi} \label{eq:shearBCC} \end{equation}
Using this formula for iron ($G=82$ GPa), one gets $\tau_\text{max} \approx 11$ GPa, which is much higher than the measured yield stress, meaning that the shearing mechanism associated with plasticity cannot be a lattice-plane motion.
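The arithmetic behind this estimate, as a quick check:

```python
import math

# Check of the lattice-plane estimate tau_max = G * sqrt(3) / (4 * pi) for iron.
G = 82e9  # shear modulus of iron, Pa
tau_max = G * math.sqrt(3) / (4 * math.pi)
print(tau_max / 1e9)  # ~11.3 GPa, far above measured yield stresses
```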
Slip by dislocation glide assumption
A second possible mechanism is the slip by dislocation glide as illustrated in Picture I.33. This mechanism has as origin the existence of imperfections in a real crystal: substitutional atoms, interstitial atoms or dislocations. A dislocation actually corresponds to missing atoms in the crystal structure as depicted in Picture I.33. Thus the slip can occur by breaking the bonds one by one, leading to a shearing plane mechanism which requires lower activation shear stress.
In Picture I.33, as the lattice is purely 2D the only dislocation that propagates is an edge dislocation. To characterize a defect the following material parameters are considered:
The Burgers vector $\mathbf{b}$, which characterizes the difference in the perimeter length between the distorted lattice and the perfect lattice. The perimeter is chosen to surround the lattice defect, see Picture I.34.
The dislocation line, which is the line along which the distortion is the largest, see Picture I.34.
The slip plane, which is the plane along which the dislocation motion occurs, see Picture I.34.
An edge dislocation is characterized by a dislocation line perpendicular to the Burgers vector, as illustrated in Picture I.34.
Other kinds of dislocations exist, such as the screw dislocation, for which the Burgers vector is parallel to the dislocation line, see Picture I.35. A mixed dislocation has a dislocation line turning so that it corresponds to an edge dislocation on one side of the crystal and to a screw dislocation on a perpendicular face, see Picture I.36.
Illustration of the slip mechanism
The slip mechanism can be observed experimentally on a single crystal under tension. The plastic deformations will occur along the slip planes as illustrated in Picture I.37. The tension stress $\sigma$ can then be related to the activation stress along the slip plane as $\tau=\sigma\cos\left(\varphi\right)\cos\left(\lambda\right)$. The deformation along the slip plane can be seen in the tension test of a single cadmium crystal in Picture I.38.
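The resolved-stress relation can be evaluated directly; the applied stress below is an arbitrary illustrative value:

```python
import math

# Resolved shear stress on a slip system: tau = sigma * cos(phi) * cos(lambda).
# The Schmid factor cos(phi) * cos(lambda) peaks at 0.5 for phi = lambda = 45 deg.
def resolved_shear(sigma, phi_deg, lam_deg):
    return sigma * math.cos(math.radians(phi_deg)) * math.cos(math.radians(lam_deg))

print(resolved_shear(100e6, 45, 45))  # 50 MPa: half the applied tension
```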
The remaining question is: why do some crystals exhibit plastic deformations, i.e. allow for dislocation propagation, contrarily to other ones? In the following part we will relate this behavior to the nature of the crystal bonding. Remember that a deformation by dislocation motion corresponds to moving parts of the crystal lattice.
Ductile or brittle behavior of a crystal
As seen in the previous section, the bonding in a crystal can be ionic, covalent or metallic. We will investigate whether the different bonding types allow for dislocation propagation or not. Remember that a dislocation corresponds to an incomplete atomic plane in the crystal lattice.
Crystals with ionic bonding (NaCl)
For crystals with ionic bonding the motion of dislocation is difficult. Indeed when moving part of a crystal lattice, there is a configuration for which a positive ion would be in front of another one, see Picture I.39. Thus cleavage requires a lower activation stress than the dislocation propagation, meaning that the ionic crystal is brittle.
Crystals with covalent bonding (Si, diamond)
Here again the motion of dislocations is difficult as covalent bonds are strong and directional. As for ionic crystals, cleavage requires a lower activation stress and the crystal is brittle, see Picture I.40.
Crystals with metallic bonding
For such crystals, see Picture I.41, the motion of dislocations requires a lower activation stress as bonds are non-directional and result from the negative cloud of electrons which surrounds the nuclei. So atomic motion along slip directions is possible, depending on the crystal structure, as explained below.
For instance, for an FCC crystal such as aluminum or copper, there exist 4 slip planes. For each slip plane there are 3 slip directions, making 12 slip systems. Moreover, all these slip systems correspond to close-packed planes, see Picture I.42, allowing the motion of dislocations at low energy, and thus at any temperature. FCC crystals are therefore always ductile.
For BCC crystals such as ferrite, we have 6 slip planes with 2 slip directions each, leading to 12 slip systems, as for FCC crystals. However, in this case not all the slip systems correspond to close-packed planes, see Picture I.43. This means that at low temperature an atom does not have enough energy to move, and thus the material is brittle as dislocations cannot propagate. On the contrary, at high temperature such a motion is possible (Peierls stress) and the crystal is ductile. So in the case of a BCC crystal there exists a ductile-to-brittle transition temperature (DBTT) below which the material is brittle and above which it is ductile.
Note that it is possible to make the motion of dislocations more difficult by introducing obstacles, leading to harder and less ductile materials. Examples of hardening methods are:
Introducing substitutional solutes (Sn in Cu, ..., duralumin),
Introducing interstitial solutes (Cr or V in Fe, ...),
Multiplying the grain boundaries (small crystal size after cold rolling: Hall-Petch effect),
Introducing precipitate particles (martensite, which is body-centered tetragonal, in ferrite after quench),
Multiplying the existing dislocations, which are repulsive (cold work).
Liberty ships during WW2
The ships built during WWII were made of lower-grade steel, leading to a DBTT close to the sea temperature. Failure analyses were based on a ductile material, meaning that Irwin's formula predicts a fracture energy of about $2\gamma_s + W_{pl}$ = 200 kJ/m$^2$. In dry docks the ships experienced no problem as the temperature was high enough: existing defects were smaller than the critical size that could be predicted using (\ref{eq:irwin}). However, once a ship was in cold water, below the DBTT, the metal was brittle, with a fracture energy of $2\gamma_s$ = 6800 J/m$^2$, and the critical size of a defect became much lower (\ref{eq:griffith}). As a consequence, some existing defects were larger than the critical size, leading to crack propagation, see Picture I.44.
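Since $\sigma_\text{TS}\sqrt{a}$ scales with the square root of the fracture energy, at a fixed applied stress the critical defect size scales linearly with the fracture energy; the two quoted energies already show how dramatic the transition is:

```python
# At fixed stress, the critical defect size a_c is proportional to the
# fracture energy G_c. Comparing the two regimes quoted for Liberty-ship steel:
G_ductile = 200e3   # 2*gamma_s + W_pl, J/m^2 (above the DBTT)
G_brittle = 6.8e3   # 2*gamma_s, J/m^2 (below the DBTT)

ratio = G_ductile / G_brittle
print(ratio)        # ~29: defects ~30x smaller become critical in cold water
```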
In this section we have focused on the failure of crystals. Other materials are also of interest, and their failure mode is summarized here below.
Failure modes of polymers
The failure mode of polymers depends on the structure of the covalent chain structure.
Aligned and cross-linked or networked polymers
In a material such as epoxy, the polymers are mainly aligned as shown in Picture I.45, and the molecules are cross-linked to one another by covalent bonding. Under strain, the material deforms through a bond-stretching mechanism (as the chains are already aligned). The material thus remains elastic up to a brittle-like failure.
Semi crystalline polymers
In this case the polymers have some atomic groups, which tend to attract each other and to crystallize into lamellar thin plates: in these regions the chain folds back and forth. The remaining part of the polymer chain has no interaction with other molecules and remains amorphous, see Picture I.46. The behavior of this material strongly depends on the temperature.
On the one hand, at high temperature, as long as the stress remains below a certain value, the crystalline regions hold together and the deformation is due to the disentangling of the long chain molecules. Once the amorphous regions are stretched so that the crystalline regions are aligned, the thermal energy being high enough, the lamellar plates start to slip along one another, leading to irreversible deformation mechanisms corresponding to necking. Failure occurs as soon as the remaining material volume is small enough to cause the covalent bonds to break. This ductile-like behavior is illustrated in Picture I.47, blue curve.
On the other hand, at low temperature the lamellar plates of the crystalline structure cannot slip and the behavior remains linear elastic until failure, which is brittle-like, see Picture I.47 (red curve).
Elastomer polymers
In this case the chains are very long, kinked, and cross-linked, as in rubber. Such polymers have an amorphous structure, which means that no real arrangement can be observed at the microscopic level. The behavior remains reversible until the brittle-like rupture due to failure of the covalent cross-links, although the elasticity is not linear, Picture I.48.
Composites
Composites are materials that are heterogeneous at the microscopic level. They are often made of fibers, which can be metals, ceramics, or polymers, embedded in a matrix, which can also be made of a metal, ceramic, or polymer. For such materials the macroscopic properties depend on the properties of the micro-constituents and on the fiber arrangement (short fibers, long unidirectional fibers, woven fabrics, ...).
Such materials have very different failure modes, which can be explained from the micromechanics, Picture I.49:
Transverse matrix fracture, Longitudinal matrix fracture, Fiber rupture, Fiber debonding, or Delamination.
At the macroscopic level, the composite materials usually fail as brittle materials.
This section shows that there exist many different failure modes, and each one will be modeled in a different way, as will be shown throughout this class.
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it.
But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
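A quick script (mine, not part of the chat) confirms the 6 pm answer:

```python
# Positions in miles, t = hours after noon (hypothetical helper names).
def position_A(t):
    return 40 * t                    # Train A: leaves at noon, 40 mph

def position_B(t):
    return 60 * max(0, t - 2)        # Train B: leaves at 2 pm, 60 mph

# B closes A's 80-mile head start at 20 mph: 4 hours after 2 pm, i.e. 6 pm.
overtake = next(t for t in range(3, 13) if position_B(t) >= position_A(t))
print(overtake)  # 6  (both trains at 240 miles)
```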
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question regarding the linked answer.
Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
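For what it's worth, here is a sketch (my reconstruction, since the linked answer is not shown here) of where that term arises, with a sympy check on concrete values $r_1 = 2$, $r_2 = 1$:

```python
# Reconstruction (mine) of the standard argument: factor the ODE as
# (D - r1)(D - r2) y = 0, set u = y' - r2*y so u' = r1*u, hence u = C*e^{r1 x}.
# The integrating factor e^{-r2 x} turns y' - r2*y = C*e^{r1 x} into
# (y * e^{-r2 x})' = C * e^{(r1 - r2) x}, which is where the term appears.
import sympy as sp

x = sp.symbols('x')
r1, r2, C = 2, 1, 3   # concrete values to avoid the r1 == r2 special case

inner = sp.integrate(C * sp.exp((r1 - r2) * x), x)   # C/(r1-r2) * e^{(r1-r2)x}
general = sp.exp(r2 * x) * inner                     # contributes the e^{r1 x} term
assert sp.simplify(general - sp.Rational(C, r1 - r2) * sp.exp(r1 * x)) == 0

# The resulting general solution indeed solves the ODE y'' - (r1+r2)y' + r1*r2*y = 0:
y = sp.exp(r1 * x) + 5 * sp.exp(r2 * x)
assert sp.simplify(sp.diff(y, x, 2) - (r1 + r2) * sp.diff(y, x) + r1 * r2 * y) == 0
```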
I recently went through some commodities forward curve modeling documentations, where a diffusion model for the forward price $F(t,T)$ was modeled as a driftless diffusion process (as a function of t with T fixed). The document did not mention whether this model is under risk-neutral measure or real-world measure. The model was estimated using historical data assuming trendless. It was later also used for derivative pricing, which is supposedly under the risk neutral measure. For privacy purposes I cannot reveal the source of this document, but just wonder if it is the case that for commodities, these two measures are the same? Is the forward price expected to not change over time even under the real-world measure? If so, what is the argument? Risk premium equal to zero in the commodities world?
Futures prices are driftless under the risk-neutral measure. In the commodities market, it is often futures. They need to estimate volatility in their model. Since volatilities are not affected by a change of probability measure, you can estimate them under the real-world measure. So what you describe seems correct.
To satisfy commentators.
In the continuous-time semimartingale framework of Girsanov's theorem, an (equivalent) change of probability measure affects only the finite-variation part. So of course, if you speak about stochastic volatility models, then the sentence is not completely true. Volatilities have to be understood as the diffusion part (the $\sigma_t$ in front of $dW_t$).
Definitions
For fixed $T$ and moving $t \leq T$, by definition $\color{blue}{(*)}$ the forward price $F(t,T)$ and the futures price $\text{Fut}(t,T)$ are both
conditional expectations. However, these expectations are not taken under the same probability measure. More specifically:$$ F(t,T) = \Bbb{E}^{\Bbb{Q}^T}\left[ \left. S_T \right\vert \mathcal{F}_t \right]$$$$ \text{Fut}(t,T) = \Bbb{E}^{\Bbb{Q}}\left[ \left. S_T \right\vert \mathcal{F}_t \right] $$where:
$\Bbb{Q}$ denotes the risk-neutral measure, under which the $t$-value of any self-financing portfolio is a martingale when expressed relative to the money market account numéraire $B_t = \exp(\int_0^t r_s ds)$;
$\Bbb{Q}^T$ denotes the $T$-forward measure, under which the $t$-value of any self-financing portfolio is a martingale when expressed relative to the zero-coupon $T$-bond numéraire $P(t,T) = \Bbb{E}^\Bbb{Q}_t\left[B_t B_T^{-1}\right]$.
From the above definitions we have that:
When interest rates are deterministic, both measures coincide. This can be seen by writing the Radon-Nikodym derivative of the change of measure: $$ \left. \frac{d\Bbb{Q}}{d\Bbb{Q}^T} \right\vert_{\mathcal{F}_t} = \frac{P(0,T) B_t}{B_0 P(t,T)} $$ Being simple conditional expectations, forward and futures prices are martingales under their respective measures. This is a direct consequence of the tower property of conditional expectations. Indeed, looking at the futures price process without loss of generality, for $s < t$ one can always write: $$\Bbb{E}^{\Bbb{Q}}\left[ \left. \text{Fut}(t,T) \right\vert \mathcal{F}_s \right] = \Bbb{E}^{\Bbb{Q}}\left[ \Bbb{E}^{\Bbb{Q}}\left[ \left. S_T \right\vert \mathcal{F}_t \right] \Big\vert \mathcal{F}_s \right] = \Bbb{E}^{\Bbb{Q}}\left[ \left. S_T \right\vert \mathcal{F}_s \right] = \text{Fut}(s,T) $$
$\color{blue}{(*)}$ e.g. the forward price is defined such that the value of a forward contract is zero at inception. Denoting by $t$ the inception date, the forward price is thus the strike $K$ such that the expected discounted cash flows of the contract vanish, $\Bbb{E}^{\Bbb{Q}}_t \left[ B_t B_T^{-1} (S_T-K) \right] = 0$, which is equivalent to $F(t,T) = \Bbb{E}_t^{\Bbb{Q}^T} [S_T]$ with $\Bbb{Q}^T$ the measure associated with the zero-coupon $T$-bond numéraire $P(t,T)$, since $P(t,T)$ is $\mathcal{F}_t$-measurable. A similar reasoning can be used for futures (but the daily settlement mechanism makes it a bit trickier to write down).
Answer
To simplify things assume deterministic interest rates (so that $\Bbb{Q} = \Bbb{Q}^T$) and that we only manipulate adapted, continuous paths processes that verify the usual conditions.
From the above definitions, forward/future prices are martingales under some measure $\Bbb{Q}$ equivalent to the real world measure $\Bbb{P}$. By the
martingale representation theorem, this means that they are driftless. Indeed we have that:$$ F(t,T) = \Bbb{E}^\Bbb{Q}_t [ S_T ] \iff dF(t,T) = \sigma_t dW_t^\Bbb{Q} $$
By
Girsanov's theorem (or abstract Bayes), this also means that they are not driftless under the equivalent real-world measure $\Bbb{P}$ in general. Indeed, $$ F(t,T) = \Bbb{E}^\Bbb{Q}_t [ S_T ] = \Bbb{E}^\Bbb{P}_t \left[ S_T \frac{Z_t}{Z_T} \right] \iff dF(t,T) = \mu dt + \sigma_t dW_t^\Bbb{P} $$
where $\mu$ is given by the differential of the
quadratic covariation between $W_t^\Bbb{Q}$ and the stochastic logarithm of the change of measure process $\frac{Z_T}{Z_t}$, where $Z_t= \left. \frac{d\Bbb{P}}{d\Bbb{Q}}\right\vert_{\mathcal{F}_t}$ $$ \mu = d\langle W_t^\Bbb{Q}, \mathcal{L}(Z_T/Z_t) \rangle_t $$
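As a concrete illustration (a toy Black-Scholes example of my own, not part of the original answer), with a deterministic rate the forward $F(t,T) = S_t e^{r(T-t)}$ has constant expectation under $\Bbb{Q}$ but not under $\Bbb{P}$ whenever the real-world drift differs from $r$:

```python
import math

# Toy parameters (assumptions for illustration only).
S0, r, mu, T = 100.0, 0.02, 0.07, 1.0

F0 = S0 * math.exp(r * T)        # forward price at inception, F(0,T)

# Under GBM, E[S_T] = S0 * exp(drift * T); note F(T,T) = S_T.
E_Q_FT = S0 * math.exp(r * T)    # expected terminal forward value under Q
E_P_FT = S0 * math.exp(mu * T)   # expected terminal forward value under P

assert math.isclose(E_Q_FT, F0)  # constant expectation (martingale) under Q
assert E_P_FT > F0               # positive drift under P when mu > r
```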
The answer given by MJ73550 already covers most of the points in my opinion.
I would put it like this:
Concerning the drift: if the cost-of-carry relationship is used in your model, then this is the correct drift to use for the spot price to derive the price of any derivatives (forwards, futures, options); this is the risk-neutral drift. This has nothing to do with the real-world drift (which you can only guess/model for the future, or observe from traded prices for the past).
If you look at forwards, then you imply the drift of the spot. Sometimes this differs from the cost-of-carry relationship (in commodity markets it can also be difficult to determine the convenience yield).
Thus the spot has some drift and it is fixed either by the forwards traded or cost-of-carry.
So far, having modelled the drift of the spot in the risk-neutral world, we recall that we do not use it to guess the future but to make the prices of traded instruments match.
For a forward or futures contract we don't need a risk neutral drift to match traded prices. The futures price already is the correct price for a linear derivative of the spot at some point in time in the future and that's it. But it still has volatility.
Then we have two approaches:
if we want to price e.g. options on these futures then we have to use implied volatility (derived from other options on the same commodity). Then again prices will fit together. This is risk neutral.
if we want to estimate the risk that we have from this futures position then we talk about the real world. Then we could look at historical volatility. |
Blending is a combination of two or more models in order to create a model that is better than any of them. Since different models have different weak and strong sides, blending may significantly improve performance.
Blending of many (10-1000) models is necessary to achieve the best possible performance in contests. In real-world problems, blending is often very useful, but the number of models is not that big. For instance, in the Netflix Prize contest, even the first-year winning submission was a blend of 107 different models. For the real world, Netflix selected two models: Matrix Factorization and Restricted Boltzmann Machines. Their RMSEs were 0.89 and 0.90 respectively; the blend's RMSE was 0.88. Note that the RMSE of the best overall model was 0.8567, achieved by blending probably hundreds of models [1].
Definitions
Let
$ y_i $ is the dependent variable for observation $ i $
$ \hat y_{mi} $ is the prediction for $ y_i $ by model $ m $
$ \hat y_i $ is the final prediction for $ y_i $
Linear regression
If there are $M$ models, and the prediction of model $m$ for observation $i$ is $ \hat y_{mi} $, then the total prediction is
$ \hat y_i = \sum_m \beta_m \hat y_{mi} $
The vector $ \beta $ is obtained via linear regression.
It is important that the observations used for calculating $ \beta $ and the observations used for training the models themselves be disjoint. This is because most models perform better on the training set than on the test set; therefore, if we use the training set (or part of it) for estimating $ \beta $, the coefficients will be wrong.
Even estimating the metaparameters might overfit the model, therefore it may be useful to divide the set of observations into 3 parts:
training set, for training the models
validation set 1, for estimating metaparameters of the models (e.g. learning rate)
validation set 2, for estimating $ \beta $.
Lasso or ridge regression
When there are many models, it may be useful to use the lasso, ridge regression, stepwise elimination, or other regularization methods for $ \beta $. See the linear regression article for more details. The metaparameters of the regularization methods are obtained via cross-validation on validation set 2.
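A minimal sketch of fitting blend weights by ridge regression on held-out data (numpy only; all names and numbers are my own, not from the article). Note that here rows of `Yhat` are observations and columns are models, i.e. the transpose of the $M \times n$ indexing convention used elsewhere on this page.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 200, 3
y = rng.normal(size=n)                        # "validation set 2" targets
# Three imperfect models: noisy copies of y with different noise levels.
Yhat = np.column_stack([y + rng.normal(scale=s, size=n) for s in (0.5, 1.0, 2.0)])

lam = 1.0                                     # ridge parameter
beta = np.linalg.solve(Yhat.T @ Yhat + lam * np.eye(M), Yhat.T @ y)

blend_rmse = np.sqrt(np.mean((Yhat @ beta - y) ** 2))
best_single_rmse = min(np.sqrt(np.mean((Yhat[:, m] - y) ** 2)) for m in range(M))
assert blend_rmse < best_single_rmse          # the blend beats every single model
```

With many models one would tune the ridge parameter by cross-validation on a further held-out set, exactly as described above.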
Generalized Additive Model
For a generalized additive model (GAM), the target function is a sum of one-dimensional functions: $ \hat y_i = \alpha + \sum_m f_m\left(\hat y_{mi}\right)\! $
Algorithm: estimate the $ f_m $ one by one in a loop. Each time, fit the residual $ \mathbf{y}-\sum_{l\ne m} \hat f_l(\mathbf{\hat y_l})\! $ with a smooth spline and subtract its average (so that the average of $ f_m $ is zero).
Alternatively, we can calibrate each model separately:
$ \hat y^{(final)}_{mi} = f_m\left(\hat y^{(raw)}_{mi}\right) $
As in the previous section, the validation set 2 should be used for calibrations and parameter estimations.
Neural networks
Sometimes, a neural network is used to estimate the final prediction
$ \hat y_i = \mathrm{NeuralNet}\left(\hat y_{1i} , ... , \hat y_{Mi}\right) $
Dependence on parameters
Different models perform better on different parts of the input space. The linear model can be upgraded to:
$ \hat y_i = \sum_m \beta_m(\mathbf{x_i}) \hat y_{mi} $
where $ \mathbf{x_i} $ are explanatory variables.
For recommender engines in the Netflix contest, the following was useful:
$ \hat r_{ui} = \sum_m \left( \beta_m + \beta^\prime_m \log (1+S_u) + \beta^{\prime\prime}_m \log (1+S_i) \right) \hat r^{(m)}_{ui} $ ,
where
$ \hat r_{ui} $ is the final prediction of the rating for user $u$, item $i$.
$ \hat r^{(m)}_{ui} $ is model $m$'s prediction of the rating for user $u$, item $i$.
$ S_u $ is the support for user $u$, that is, the number of items they rated.
$ S_i $ is the support for item $i$, that is, the number of users who rated it.
Liang Sun's models
Oracle blending
The oracle blending model is for use in competitions. In order to describe it, recall that ridge regression is given by the formula:
$ \beta = \left(\hat Y \hat Y^T + \lambda I \right)^{-1} \hat Y y $ ,
where
$ \beta $ are oracle weights $ \hat Y_{mi} $ is the prediction of the model m for the observation i $ y_i $ is the value of the dependent variable for the observation i $ \lambda $ is the ridge regression parameter
If we can obtain RMSE of any vector of predictions on the test set, can we use it to estimate $ \beta $?
On the test set, we know the matrix $ \hat Y \hat Y^T $ of prediction inner products. As for $ \hat Y y $ , we have:
$ \left( \hat Y y \right)_m = \sum_i \hat y_{mi} y_i = \frac{1}{2} \left( \lVert \hat y_m \rVert^2 + \lVert y \rVert^2 - \lVert \hat y_m - y \rVert^2 \right) $ ,
where $ \hat y_m = \left( \hat y_{m1} , ... , \hat y_{mn} \right) $, n is the number of observations
The term $ \lVert \hat y_m - y \rVert^2 $ can be found as:
$ \lVert \hat y_m - y \rVert^2 = n \cdot \operatorname{RMSE}(\hat y_m)^2 $
Similarly,
$ \lVert y \rVert^2 = n \cdot \operatorname{RMSE}(0)^2 $ ,
where $ \operatorname{RMSE}(0) $ means RMSE of the submission containing zeros in all positions.
Finally, $ \lVert \hat y_m \rVert^2 $ is known.
Therefore,
$ \left( \hat Y y \right)_m = \frac{1}{2} \left( \lVert \hat y_m \rVert^2 + n \cdot \operatorname{RMSE}(0)^2 - n \cdot \operatorname{RMSE}(\hat y_m)^2 \right) $
Therefore, we can find $ \beta $ on the test set.
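The polarization identity underlying this trick can be checked numerically (toy data and names of my own):

```python
# Check: (Yhat y)_m = sum_i yhat_mi * y_i
#                   = ( ||yhat_m||^2 + ||y||^2 - ||yhat_m - y||^2 ) / 2,
# with ||y||^2 = n*RMSE(0)^2 and ||yhat_m - y||^2 = n*RMSE(yhat_m)^2.
import numpy as np

rng = np.random.default_rng(1)
n = 50
y = rng.normal(size=n)                      # hidden test-set targets
yhat_m = y + rng.normal(scale=0.3, size=n)  # one oracle's predictions

def rmse(pred):
    return np.sqrt(np.mean((pred - y) ** 2))

lhs = yhat_m @ y
rhs = 0.5 * (np.sum(yhat_m ** 2) + n * rmse(np.zeros(n)) ** 2
             - n * rmse(yhat_m) ** 2)
assert np.isclose(lhs, rhs)
```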
Bagged oracle blending
For bagged oracle blending, we take $ N $ random subsets of the initial set of oracles, each containing $ k_0 $ oracles; $ N $ and $ k_0 $ are model parameters. For each subset, we determine weights via oracle blending. Finally, we average the weights over all $ N $ subsets.
This procedure reduces overfitting.
Constrained oracle blending
Constrained oracle blending imposes constraints on both weights and errors on the validation set:
$ \min_\beta f(\beta) = \lVert y - \hat Y \beta \rVert^2 $
subject to:
$ -w_0 \le \beta_m \le w_0 $ for $ m=1,...,M $
$ \lVert y^{(v)} - \hat Y^{(v)} \beta \rVert^2 \le t $
where
$ y^{(v)} $ is the vector of dependent variables on the validation set, and $ \hat Y^{(v)} $ is the matrix of model predictions, also on the validation set. $ w_0 $ and $ t $ are parameters of the model
These constraints do not let the weight of any (potentially overfit) model become too large. Finding the weights is a quadratic optimization problem.
If you take a look at my Side Projects page, you’ll see that I run a cheering/race photography project to encourage NYC-area runners in local (and sometimes international!) races. Race photography is one of my favorite hobbies. I have the privilege of interacting with a community of positive, endorphin-filled humans at least a few times a month. So I want to make sure my photos come out as sharp as possible.
The biggest enemy of outdoor photography is strong sunlight directly on the subject. Because these races tend to happen early in the morning, the sun shines at enough of an angle to avoid the perils of high noon. But if I don’t pick my cheer spots carefully, I might end up in a situation where the sun shines on the runners’ front side. This makes them glow in an unflattering way. Check out the one below from the 2017 Chicago Marathon:
Note that the shadow is cast to the side of the runner. This conundrum made me realize that I should be studying the location of the sun in advance, to determine which cheer spots would be ideal. We want the sun to be behind the runners, so a shadow is cast in front of them. The photo below, taken at this year’s PPTC Cherry Tree 10 Miler, provides an example of this phenomenon.
If I know where the sun is at each time of the day, and I know the course, I should be able to determine the amount of sunlight projected in front of the runner in the worst case scenario – no clouds. I then want to pick the cheer spot that maximizes a “shade metric” over the course of the race.
Caveats
The formula I will use to calculate this shade factor is an oversimplification. First, we disregard the effect of local changes in elevation (e.g. hills) on this value. Second, as mentioned above, we’re going to assume the worst case scenario (no clouds), as we cannot predict cloud positions on race day. Lastly, and most importantly, we do not include trees, buildings, or any other possible local sources of shade. This again ties into the worst case scenario where the only shadows are the ones cast by the runners.
Calculations
Given a time of day, a location coordinate (in terms of latitude/longitude), and the direction one is facing, one can calculate a shade metric.
Let:
$$ \begin{align} (\lambda_0, \phi_0) &= \text{current position (latitude, longitude)}\\ (\lambda_1, \phi_1) &= \text{next position}\\ \phi_s &= \text{the solar azimuth angle}\\ \alpha_s &= \text{solar elevation angle, or altitude of the sun} \end{align} $$
where \(\phi_s\) starts at due north and increases in the clockwise direction. We need to determine the direction angle from our current position to the next position. Comparing this angle to the solar azimuth will allow us to calculate the shade metric. To do so, we first need to convert from geographic to Cartesian coordinates:
$$ \begin{align} \Delta x &\approx a \Delta \lambda \cos\left(\overline{\phi}\right)\\ \Delta y &\approx a \Delta \phi \end{align} $$
where \(a\) is the radius of the earth and \(\overline{\phi}\) is the average latitude on the course. As long as we’re not near the north or south poles, this approximation will do just fine. We can then define the direction angle \(\theta_a\) (with respect to the
eastward direction, unlike the solar azimuth): $$ \begin{align} \theta_a &= \begin{cases} \arctan\left(\frac{\Delta y}{\Delta x}\right) & \Delta x \geq 0 \\ \pi - \arctan\left(\frac{\Delta y}{|\Delta x|}\right) & \Delta x < 0 \end{cases} \end{align} $$
We then want to define a photography shade factor \(S\) that satisfies the following criteria:
$$ \begin{align} & 0 \leq S \leq 1 \\ &S\left(\left\{\theta_a = \frac{\pi}{2} – \phi_s\right\}\right) = 0\\ &S\left(\left\{\theta_a = -\frac{\pi}{2} – \phi_s \right\}\right) = \cos\left(\alpha_s\right)\\ &S\left(\left\{\alpha_s=\frac{\pi}{2}\right\}\right) = 0 \end{align} $$
The third requirement addresses the fact that the higher the altitude of the sun, the shorter the shadow cast in front of the runner. The function below satisfies all the above requirements:
$$ \begin{align} S &= \frac{1}{2}\left( 1-\sin\left(\theta_a +\phi_s\right) \right)\cos\left(\alpha_s\right) \end{align} $$
Example: Prospect Park
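A minimal implementation of these formulas might look like this (function names are mine; `math.atan2` reproduces the piecewise definition of \(\theta_a\) up to a multiple of \(2\pi\), which \(S\) is insensitive to):

```python
import math

def direction_angle(dx, dy):
    # Matches the piecewise arctan definition above up to a multiple of 2*pi,
    # which only enters S through sin(theta_a + phi_s).
    return math.atan2(dy, dx)

def shade_factor(theta_a, phi_s, alpha_s):
    return 0.5 * (1 - math.sin(theta_a + phi_s)) * math.cos(alpha_s)

# The three defining criteria, checked at arbitrary sun angles:
phi_s, alpha_s = 1.0, 0.4
assert abs(shade_factor(math.pi / 2 - phi_s, phi_s, alpha_s)) < 1e-12
assert abs(shade_factor(-math.pi / 2 - phi_s, phi_s, alpha_s)
           - math.cos(alpha_s)) < 1e-12
assert abs(shade_factor(0.3, phi_s, math.pi / 2)) < 1e-12
```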
Let’s consider the course for the Cherry Tree 10 Miler, where I took that nice photo above. The race took place on February 17th, started at 10 AM EST, and consisted of about three loops of the park. I made a counter-clockwise route around the park on Strava, downloaded the GPX data, and imported it into Python using the gpxpy library. I computed the running direction by taking differences in adjacent positions. Then, I used the PySolar library to compute the elevation and azimuth of the sun at each point on the map. Finally, putting all of this together, I used the above formula to calculate the average shade factor over the period from 10 AM to 11:30 AM. The x and y axes are simply offsets from the origin (in miles)
The best photo conditions tend to emerge when the runners run in the northwest direction, which is away from the sun. My cheer spot at this race was at approximately \((-0.3, 0.6)\), giving me a decent blue shade factor!
Possible Future Work: Adding Buildings
The City of New York maintains a database for the footprints of all buildings that are at least 400 square feet and 12 feet tall. If we know the locations of the buildings near a race course, then we can calculate the local shade effect to find an optimal cheer spot. This will require much more work, but will be a more useful tool, particularly for races on city streets with many tall buildings. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
As Raziel wrote, the local question is whether one can find a local basis of orthonormal vector fields that are divergence-free.
It's true that, in dimension $2$, this can only be done if the metric is locally flat, which is the local obstruction. This is because this is an overdetermined problem; one has two equations for a single unknown.
However, in higher dimensions, it is not clear that there is a local obstruction because this is a system of $n$ first-order PDE for $\tfrac12n(n{-}1)$ unknowns, so, for $n=3$, this is a determined system while, when $n>3$ it is underdetermined.
One can prove that the system for $n=3$ is always locally solvable in the real-analytic case (even though the determined system cannot be written in Cauchy-Kowalevski form, even locally), so there cannot be any local obstruction that is computed on the basis of some kind of curvature condition or identity. (Presumably, it is also always locally solvable in the smooth case as well, but that would require further study.)
Remark: It is interesting to note that, if $(M^3,g)$ is real-analytic and possesses a real-analytic orthonormal frame field $X = (X_1,X_2,X_3)$ where each of the $X_i$ are divergence free, then $(M,g)$ can be isometrically embedded as a hypersurface in a Calabi-Yau surface $S$ in such a way that $X_i = I_i(N)$ where $N$ is the oriented unit normal and $I_1$, $I_2$, and $I_3$ are the orthogonal parallel complex structures that define the Calabi-Yau structure.
I expect that, in dimension $n>3$, the problem is so underdetermined that it is always locally solvable, though I have not yet carried out the analysis.
However, see below, where I do complete the analysis in the real-analytic case.
Global solvability is, of course, much harder, and it's conceivable that there are counterexamples, even for metrics on $S^3$, though I don't know of one.
The analysis via exterior differential systems: At the OP's request, I will sketch the EDS analysis of this system. I don't have time to put in all the details, and, in any case, they won't make sense to anyone who doesn't already know Cartan-Kähler theory, but for those who do know this theory, the following explains the proof of the following results:
When $n=2$, local solutions exist if and only if the metric is flat.
When $n=3$, local solutions always exist if the metric is real-analytic. Moreover, the local solutions depend on 2 arbitrary functions of two variables. If the metric is real-analytic and the scalar curvature is positive, then every local solution is also real analytic.
When $n>3$, local solutions always exist if the metric is real-analytic, and the local solutions depend on ${n\choose2}{-}n$ functions of $n$ variables.
In particular, (2) and (3) imply that there are no curvature-type obstructions to local solvability when $n>2$. Whether we have solvability in the smooth case when $n>2$ will require further study.
Here is the argument: Let $(M^n,g)$ be a Riemannian manifold, and let $\pi:F\to M$ be the orthonormal frame bundle, so that an element $f\in F$ is an $n$-tuple $f = (e_1,\ldots,e_n)$ where $e_1,\ldots,e_n$ is an orthonormal basis of $T_xM$ where $x = \pi(f)$. We define the
canonical $1$-forms $\omega_1,\ldots,\omega_n$ on $F$ so that the equation$$\pi'(v) = \omega_1(v)\,e_1 + \cdots + \omega_n(v)\,e_n$$holds for all $v\in T_fF$ where $f = (e_1,\ldots,e_n)$. A standard result (cf. Kobayashi & Nomizu) then says that there exist unique $1$-forms $\phi_{ij}=-\phi_{ji}$ (where the indices run from $1$ to $n$) satisfying the first structure equations$$\mathrm{d}\omega_i = -\phi_{ij}\wedge\omega_j\,,$$that the forms $\omega_i$ and $\phi_{ij}$ ($i<j$) give a basis for the $1$-forms on $F$, and that the $\phi_{ij}$ satisfy the second structure equations$$\mathrm{d}\phi_{ij} = -\phi_{ik}\wedge\phi_{kj} + \tfrac12\,R_{ijkl}\,\omega_k\wedge\omega_l$$for some unique functions $R_{ijkl}=-R_{ijlk}$ on $F$.
A local orthonormal coframing $X = (X_1,\ldots,X_n)$ defined on an open set $U\subset M$is simply a section of $F$ over $U$, and it satisfies $X^*\omega_i = \xi_i$ where the $\xi_i$ are the $1$-forms on $U$ dual to the $X_i$. The volume form of the metric is, up to a sign, the wedge product of the $\xi_i$, and so the condition that the $X_i$ be divergence free is that $$\mathrm{d}\left(\xi_1\wedge\cdots\wedge\widehat{\xi_i}\wedge\cdots\wedge\xi_n\right) = 0$$for all $i = 1,\ldots,n$. In other words, defining the $(n{-}1)$-forms$$\Omega_i = (-1)^{i-1}\,\omega_1\wedge\cdots\wedge\widehat{\omega_i}\wedge\cdots\wedge\omega_n\,,$$we are requiring that $\mathrm{d}\left(X^*\Omega_i\right)=X^*\left(\mathrm{d}\Omega_i\right) = 0$, i.e., that the image of the section $X$ in $F$ should be an integral manifold of the differential ideal $\mathcal{I}$ on $F$ generated by the $n$ $n$-forms $\mathrm{d}\Omega_i$.
Unfortunately, $\mathcal{I}$ is not involutive. However, it turns out that $\mathcal{I}$ can be enlarged to an ideal $\mathcal{I}_+$ as follows: For $i<j$, define the $(n{-}2)$-forms$$\Omega_{ij} = (-1)^{i+j-1}\,\omega_1\wedge\cdots\wedge\widehat{\omega_i}\wedge \cdots\wedge\widehat{\omega_j}\wedge\cdots\wedge\omega_n\,,$$and set $\Omega_{ii}=0$ while $\Omega_{ji}=-\Omega_{ij}$. Now define the $(n{-}1)$-form$$\Upsilon = \phi_{ij}\wedge\Omega_{ij}\,.$$It is not hard to show that $\mathrm{d}\Omega_i = \pm\omega_i\wedge\Upsilon$, and, from this, one concludes that $X_i$ is divergence free for all $i$ if and only if $X^*(\Upsilon)=0$. Thus, we can let $\mathcal{I}_+$ be the differential ideal generated by $\Upsilon$ (i.e., the exterior ideal generated by $\Upsilon$ and $\mathrm{d}\Upsilon$) and look for integral manifolds of this ideal instead.
When $n=2$, $\Upsilon = 2\phi_{12}$, so $\mathrm{d}\Upsilon = 2R_{1212}\,\omega_1\wedge\omega_2 = 2K\,\mathrm{d}A$, so there are no sections $X$ that are integral manifolds unless $K=0$. (When $K=0$, of course, the sections that are integral manifolds of $\phi_{12}$ are exactly the parallel sections.)
When $n>2$, the structure equations show that $\mathcal{I}_+$, which encodes a system of $n{+}1$ first-order equations on the orthonormal coframing $X$, is involutive, with the Cartan characters of a regular flag being $s_i = 0$ for $i<n{-}2$, $s_{n-2}=1$, $s_{n-1} = n{-}1$, and $s_n = {n\choose2}-n$. Now apply Cartan-Kähler. |
The Cofinite Topology
Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$, together denoted $(X, \tau)$, is called a topological space if:
$\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$.
If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., for any arbitrary collection of subsets from $\tau$, their union is contained in $\tau$.
If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., for any finite collection of subsets from $\tau$, their intersection is contained in $\tau$.
We will now look at a rather interesting topology known as the cofinite topology.
Definition: If $X$ is a nonempty set, then the Cofinite Topology of $X$ is the collection of subsets $\tau = \{ U \subseteq X : U = \emptyset \: \mathrm{or} \: U^c \: \mathrm{is \: finite} \}$. Another term for the cofinite topology is the "Finite Complement Topology".
Let's verify that $(X, \tau)$ is a topological space.
For the first condition, we clearly see that $\emptyset \in \tau = \{ U \subseteq X : U = \emptyset \: \mathrm{or} \: U^c \: \mathrm{is \: finite} \}$. Furthermore, $X^c = \emptyset$ (noting that the universal set in this instance is $X$ itself), so $X \in \tau$.
For the second condition, let $\{ U_i \}_{i \in I}$ be a collection of subsets from $\tau$ for some index set $I$. By the generalized De Morgan's laws we have that $\displaystyle{\left ( \bigcup_{i \in I} U_i \right )^c = \bigcap_{i \in I} U_i^c}$.
Suppose that $U_i = \emptyset$ for all $i \in I$. Then $\displaystyle{\bigcup_{i \in I} U_i = \emptyset \in \tau}$. Now suppose that $U_j \neq \emptyset$ for some $j \in I$. Then $U_j^c$ is finite, and since $\displaystyle{\bigcap_{i \in I} U_i^c \subseteq U_j^c}$, the intersection $\displaystyle{\bigcap_{i \in I} U_i^c}$ is finite. Hence $\displaystyle{\left ( \bigcup_{i \in I} U_i \right )^c}$ is finite, so $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$.
For the third condition, let $U_1, U_2, ..., U_n \in \tau$. Then by applying the generalized De Morgan's laws once again we see that $\displaystyle{\left ( \bigcap_{i=1}^{n} U_i \right )^c = \bigcup_{i=1}^{n} U_i^c}$.
Suppose that $U_i = \emptyset$ for some $i \in \{1, 2, ..., n \}$. Then $\displaystyle{\bigcap_{i=1}^{n} U_i = \emptyset \in \tau}$. Now suppose that $U_i \neq \emptyset$ for all $i \in \{1, 2, ..., n \}$. Then $U_i^c$ is finite for each $i$, so the finite union $\displaystyle{\bigcup_{i=1}^{n} U_i^c}$ is finite. Hence $\displaystyle{\left ( \bigcap_{i=1}^{n} U_i \right )^c}$ is finite, so $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$.
Therefore $(X, \tau )$ is a topological space.
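The two closure arguments above can be checked mechanically: a nonempty cofinite-open set of an infinite ambient set is determined by its finite complement, and De Morgan turns unions into intersections of complements and finite intersections into finite unions of complements. Here is a small Python sketch of this (the representation and names are my own, not from the text):

```python
# Represent a cofinite-open set U of an infinite set (say Z) by its finite
# complement U^c; EMPTY is a sentinel for the empty open set.
EMPTY = None

def union(sets):
    """Arbitrary union: (U1 ∪ U2 ∪ …)^c = U1^c ∩ U2^c ∩ …"""
    comps = [c for c in sets if c is not EMPTY]
    if not comps:                       # every U_i was empty
        return EMPTY
    out = comps[0]
    for c in comps[1:]:
        out = out & c                   # intersection of finite sets is finite
    return out                          # finite complement, so the union is open

def intersection(sets):
    """Finite intersection: (U1 ∩ … ∩ Un)^c = U1^c ∪ … ∪ Un^c"""
    if any(c is EMPTY for c in sets):   # some U_i = ∅ forces the intersection to be ∅
        return EMPTY
    out = frozenset()
    for c in sets:
        out = out | c                   # finite union of finite sets is finite
    return out

# U1 = Z \ {1,2} and U2 = Z \ {2,3}
U1, U2 = frozenset({1, 2}), frozenset({2, 3})
assert union([U1, U2]) == frozenset({2})              # (U1 ∪ U2)^c = {2}
assert intersection([U1, U2]) == frozenset({1, 2, 3}) # (U1 ∩ U2)^c = {1,2,3}
assert union([EMPTY, U1]) == U1
assert intersection([EMPTY, U1]) is EMPTY
```

The returned complements are always finite (or the set is empty), which is exactly the claim that $\tau$ is closed under arbitrary unions and finite intersections.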
Proposition 1: If $X$ is a finite set then the cofinite topology on $X$ is the discrete topology. Proof:Suppose that $X$ is a finite set. Then every subset $U \subseteq X$ is finite, and $U^c = X \setminus U$ is finite. Therefore, for every subset $U \subseteq X$ we have that $U \in \tau$, so $\tau$ contains all subsets of $X$, i.e., $\tau = \mathcal P (X)$ so the cofinite topology on $X$ is the discrete topology. $\blacksquare$ |
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition: Proposition 27: For $f : \{-1,1\}^n \to \{-1,1\}$ \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
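Proposition 27 is easy to confirm by brute force for small $n$. The following Python sketch (my own illustration, not from the text) computes each $\mathbf{Inf}_i[f]$ as the probability that coordinate $i$ is pivotal and compares the sum against the expected sensitivity, using $\mathrm{Maj}_3$:

```python
from itertools import product

def maj3(x):
    """Majority of three ±1 bits."""
    return 1 if sum(x) > 0 else -1

def influences(f, n):
    """Inf_i[f] = Pr_x[f(x) != f(x^{⊕i})] for each coordinate i."""
    infs = []
    for i in range(n):
        cnt = 0
        for x in product([-1, 1], repeat=n):
            y = x[:i] + (-x[i],) + x[i+1:]   # flip coordinate i
            cnt += f(x) != f(y)
        infs.append(cnt / 2**n)
    return infs

def avg_sensitivity(f, n):
    """E_x[sens_f(x)] = expected number of pivotal coordinates."""
    tot = 0
    for x in product([-1, 1], repeat=n):
        for i in range(n):
            y = x[:i] + (-x[i],) + x[i+1:]
            tot += f(x) != f(y)
    return tot / 2**n

# Each coordinate of Maj_3 is pivotal exactly when the other two disagree,
# so Inf_i = 1/2 and I[Maj_3] = 3/2.
assert influences(maj3, 3) == [0.5, 0.5, 0.5]
assert sum(influences(maj3, 3)) == avg_sensitivity(maj3, 3) == 1.5
```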
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce: Examples 29: (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
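The values quoted in Examples 29 can be verified exhaustively for a small $n$. A short Python check (my own, not from the text; OR is taken in the convention that $+1$ means True):

```python
from itertools import product

def total_influence(f, n):
    """I[f] as the average number of pivotal coordinates (Proposition 27)."""
    tot = 0
    for x in product([-1, 1], repeat=n):
        for i in range(n):
            y = x[:i] + (-x[i],) + x[i+1:]
            tot += f(x) != f(y)
    return tot / 2**n

def parity(x):        # chi_[n]: product of all bits
    p = 1
    for b in x:
        p *= b
    return p

def dictator(x):      # the first coordinate
    return x[0]

def or_n(x):          # OR in the ±1 convention (+1 = True)
    return 1 if any(b == 1 for b in x) else -1

n = 4
assert total_influence(parity, n) == n             # every coordinate always pivotal
assert total_influence(dictator, n) == 1
assert total_influence(or_n, n) == n * 2**(1 - n)  # = 0.5 for n = 4
```

For $\mathrm{OR}_n$, coordinate $i$ is pivotal only when all the other coordinates are False, which happens with probability $2^{-(n-1)}$; summing over $i$ gives the stated $n2^{1-n}$.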
By virtue of Proposition 20 we have another interpretation for the total influence of
monotone functions: Proposition 30: If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone, then $\mathbf{Inf}_i[f] = \widehat{f}(i)$ for each $i$, and hence $\mathbf{I}[f] = \sum_{i=1}^n \widehat{f}(i)$.
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31: Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
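Proposition 31 can be confirmed numerically for a small voting rule. The sketch below (my own, not from the text) computes the degree-$1$ Fourier coefficients of $\mathrm{Maj}_3$ by brute force and checks the formula for $\mathop{\bf E}[\boldsymbol{w}]$:

```python
from itertools import product

def maj3(x):
    return 1 if sum(x) > 0 else -1

n = 3
inputs = list(product([-1, 1], repeat=n))

# Degree-1 Fourier coefficients: f̂(i) = E_x[f(x) x_i]
fhat = [sum(maj3(x) * x[i] for x in inputs) / 2**n for i in range(n)]

# E[w]: expected number of votes agreeing with the election outcome
Ew = sum(sum(xi == maj3(x) for xi in x) for x in inputs) / 2**n

# For Maj_3 each f̂(i) = 1/2, so E[w] = 3/2 + (1/2)(3/2) = 2.25
assert fhat == [0.5, 0.5, 0.5]
assert Ew == n / 2 + 0.5 * sum(fhat) == 2.25
```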
Rousseau
[Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32: The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
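For $n=3$ the claim of Theorem 32 can be checked by exhausting all $2^{2^3} = 256$ boolean functions. A brute-force Python sketch (my own illustration), which represents each function by its truth table over the $8$ inputs:

```python
from itertools import product

n = 3
inputs = list(product([-1, 1], repeat=n))

def deg1_sum(table):
    """Σ_i f̂(i) = E_x[f(x)(x_1 + … + x_n)], with f given by its truth table."""
    return sum(fx * sum(x) for x, fx in zip(inputs, table)) / 2**n

tables = list(product([-1, 1], repeat=len(inputs)))   # all 256 functions
best = max(deg1_sum(t) for t in tables)
maximizers = [t for t in tables if deg1_sum(t) == best]

# Since n is odd, x_1 + x_2 + x_3 is never 0, so sgn(x_1 + x_2 + x_3)
# (i.e., majority) is the unique maximizer, attaining E|x_1 + x_2 + x_3| = 3/2.
maj = tuple(1 if sum(x) > 0 else -1 for x in inputs)
assert best == 1.5
assert maximizers == [maj]
```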
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33: The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce: Proposition 34: For $f : \{-1,1\}^n \to {\mathbb R}$, $\mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\|\nabla f({\boldsymbol{x}})\|_2^2]$.
An alternative analytic definition involves introducing the
Laplacian: Definition 35: The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$;
$\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$;
$\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$;
$\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37: For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
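Theorem 37 can be checked directly for a small function by computing the full Fourier expansion by brute force. A Python sketch (my own, not from the text), again for $\mathrm{Maj}_3$, whose expansion is $\frac12(x_1+x_2+x_3) - \frac12 x_1x_2x_3$:

```python
from itertools import product, combinations

n = 3
inputs = list(product([-1, 1], repeat=n))
f = {x: (1 if sum(x) > 0 else -1) for x in inputs}   # Maj_3 as a lookup table

def chi(S, x):
    """Fourier character chi_S(x) = prod_{i in S} x_i."""
    p = 1
    for i in S:
        p *= x[i]
    return p

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
fhat = {S: sum(f[x] * chi(S, x) for x in inputs) / 2**n for S in subsets}

# Spectral side of Theorem 37: sum of |S| * f̂(S)^2
spectral = sum(len(S) * fhat[S]**2 for S in subsets)

# Combinatorial side: total influence as average sensitivity (Proposition 27)
I = sum(f[x] != f[x[:i] + (-x[i],) + x[i+1:]]
        for x in inputs for i in range(n)) / 2**n

assert abs(spectral - I) < 1e-12      # both equal 3/2 for Maj_3
```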
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality. Poincaré Inequality: For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
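The inequality, and the equality case for dictators, can be spot-checked numerically. A Python sketch (my own illustration) over a few standard functions on $3$ bits:

```python
from itertools import product

n = 3
inputs = list(product([-1, 1], repeat=n))

def var(f):
    """Var[f] = E[f^2] - E[f]^2 under the uniform distribution."""
    vals = [f(x) for x in inputs]
    m = sum(vals) / len(vals)
    return sum(v * v for v in vals) / len(vals) - m * m

def tot_inf(f):
    """Total influence as average sensitivity."""
    return sum(f(x) != f(x[:i] + (-x[i],) + x[i+1:])
               for x in inputs for i in range(n)) / 2**n

fs = {
    "const":    lambda x: 1,
    "dictator": lambda x: x[0],
    "majority": lambda x: 1 if sum(x) > 0 else -1,
    "parity":   lambda x: x[0] * x[1] * x[2],
}
for name, f in fs.items():
    assert var(f) <= tot_inf(f) + 1e-12        # Poincaré: Var[f] <= I[f]

# Equality for a dictator: Var = I = 1 (all Fourier weight at degree 1)
assert var(fs["dictator"]) == tot_inf(fs["dictator"]) == 1
```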
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or
(edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”. |
I want to use a slanted
\sum symbol denoting a different meaning from summation. How do I get a slanted
\sum symbol?
If you are using pdfTeX then you can try this code:
\def\itsum{\mathop{\mathpalette\itsumA{}\phantom\sum}}
\def\itsumA#1#2{\pdfsave\pdfliteral{1 0 .2 1 0 0 cm}\rlap{$#1\sum$}\pdfrestore}

$\sum_i^5 \itsum_j^{\,7} a_{ij}$
One way to do it is to use a slanted capital sigma. By using the
scalerel package, one can make that symbol (approximately) the same size as the original
\sum operator:
\documentclass{article}
\usepackage{amsmath}
\usepackage{scalerel}
\DeclareMathOperator*{\itsum}{\scalerel*{\mathit{\Sigma}}{\sum}}
\begin{document}
\[\sum_i x_i\quad\itsum_i x_i\]
\end{document}
Consider the application $T: \mathcal{D}(\mathbb{R}^{\star}) \to \mathbb{C}$ defined by $\langle T,\varphi\rangle = \sum_{n \geq 1} \varphi\!\left(\dfrac{\ln n}{n}\right)$.
My question is: does $\langle T,\varphi\rangle$ converge for all $\varphi \in \mathcal{D}(\mathbb{R}^\star)$?
I'm lost because $\operatorname{supp} \varphi \subset \mathbb{R}^\star$, while at the same time $n$ can take the value $1$, for which $\frac{\ln n}{n} = 0 \notin \mathbb{R}^\star$.
Thank you for the help. |
The generating functional, as always, is given by
$$ Z[J, \eta, \eta^{\dagger}] = \int \mathcal{D}\phi \mathcal{D} \psi \mathcal{D} \psi^{\dagger} \exp \left[ \frac{i}{\hbar} \int d^4 x \, \left( \mathcal{L}(\phi, \psi, \psi^{\dagger}) +J\phi + \eta^{\dagger}\psi + \psi^{\dagger}\eta\right)\right] = $$$$ \exp \left[ -\frac{i g}{\hbar} \int d^4 x \, \frac{\delta}{\delta J} \frac{\delta}{\delta \eta} \frac{\delta}{\delta \eta^{\dagger}} \right] \; Z_0[J, \eta, \eta^{\dagger}], $$
where $Z_0$ is the generating functional of the free theory:
$$ Z_0[J, \eta, \eta^{\dagger}] = \exp \left[ i \hbar \int d^4 x \, \int d^4 y \, \left( \frac{1}{2} J(x) \Delta_M(x, y) J(y) + \eta^{\dagger}(x) \Delta_m(x, y) \eta(y) \right) \right] $$
with $\Delta_m$ the Klein-Gordon propagator with mass $m$.
Your formula for correlations (aka Green's functions of the interacting theory) is correct. You have to Taylor-expand the exponential of the interaction term to the appropriate order in perturbation theory.
Just do the math and you will arrive at the correct expressions for correlation functions. Spoiler alert: the second one vanishes.
Btw you don't need to explicitly write the generating functional to calculate correlations. There's a simpler way: first, note that you can calculate expressions like
$$ \left< F[\phi, \psi, \psi^{\dagger}] \right>_0 = \mathcal{N}_0 \int\mathcal{D}\phi \mathcal{D}\psi \mathcal{D}\psi^{\dagger} \exp\left[ \frac{i}{\hbar} \int d^4 x \, \mathcal{L}_0(\phi, \psi, \psi^{\dagger}) \right] \cdot F[\phi, \psi, \psi^{\dagger}] $$
where $F$ is polynomial in fields with help of Wick's theorem. Also note the diagrammatic interpretation of the terms in the Wick expansion.
Then define
$$ \left< F[\phi, \psi, \psi^{\dagger}] \right> = \mathcal{N} \int\mathcal{D}\phi \mathcal{D}\psi \mathcal{D}\psi^{\dagger} \exp\left[ \frac{i}{\hbar} \int d^4 x \, \mathcal{L}(\phi, \psi, \psi^{\dagger}) \right] \cdot F[\phi, \psi, \psi^{\dagger}] = $$$$ \frac{\mathcal{N}}{\mathcal{N}_0} \left< F \cdot \exp \left[ - \frac{i g}{\hbar} \int d^4 x \, \phi \psi^{\dagger} \psi \right] \right>_0. $$
and Taylor-expand the exponential of the interaction term to any order (given in advance). You will arrive at a polynomial expression (because both $F$ and the truncated Taylor series are polynomials in $\phi$, $\psi$ and $\psi^{\dagger}$). We already know how to compute these:
The expectation bracket, to each order in the perturbative series, is given by a sum of terms. Each term can be represented graphically as a Feynman diagram.
The expectation brackets are by definition normalized such that$$ \left<1\right>_0 = \left<1\right> = 1, $$
which means that $\mathcal{N}_0$ and $\mathcal{N}$ are related to each other. There's a very generic result, which is: the ratio $\mathcal{N} / \mathcal{N_0}$ corresponds to the product of all bubble graphs (the ones without external legs). These nicely factor out, giving a convenient way to calculate correlations:
The appropriate normalization can be accounted for by simply disregarding graphs with disconnected bubbles. |
sorry i don't really understand that - how did you work out that s was 5/6? And did you just choose random values for a0, a1 and an? I have rechecked my homework question and that is exactly what it said!
1. Homework Statement: Let $s$ be the sum of the alternating series $\sum_{n=1}^{\infty}(-1)^{n+1}a_n$ with $n$-th partial sum $s_n$. Show that $|s - s_n| \leq a_{n+1}$. 2. Homework Equations: I know about Cauchy sequences, the Ratio test, the Root test. 3. The Attempt at a Solution: I really have...
Right okay, so now for the variance do I use the same formula but use 1.6 (I presume it was just a typo and it should have been 0.7 * 2) instead of 0.53? Do I then use the Central Limit Theorem for part b?
Okay I have since realised that for part a) I think I was doing it wrong, so now for the mean I have: $((0.7 \cdot 2) + (0.2 \cdot 1) + (0.1 \cdot 0))/3 = 0.53$. But for the variance I have: $((0.7 - 0.53)^2 + (0.2 - 0.53)^2 + (0.1 - 0.53)^2)/3 = 0.1076$, which makes far more sense!! Now I'm thinking of using...
1. Homework Statement: Suppose that, on average, 70% of graduating students want 2 guest tickets for a graduation ceremony, 20% want 1 guest ticket and the remaining 10% don't want any guest tickets. (a) Let X be the number of tickets required by a randomly chosen student. Find the mean and...
1. Homework Statement: Suppose that $f: \mathbb{R} \to \mathbb{R}$ is continuous on $\mathbb{R}$ and that $\lim_{x \to +\infty} f(x) = 0$ and $\lim_{x \to -\infty} f(x) = 0$. Prove that $f$ is bounded on $\mathbb{R}$. 2. Homework Equations: I have the proof that if $f$ is continuous on $[a,b]$ then $f$ is bounded on $[a,b]$, but I'm unsure as to...
1. Homework Statement: Let $V = \{\text{differentiable } f: \mathbb{R} \to \mathbb{R}\}$, a vector space over $\mathbb{R}$. Take $g_1, g_2, g_3$ in $V$ where $g_1(x) = e^{x}$, $g_2(x) = e^{2x}$ and $g_3(x) = e^{3x}$. Show that $g_1$, $g_2$ and $g_3$ are distinct. 2. Homework Equations: If $g_1$, $g_2$, $g_3$ are linearly independent, it means that for any constant $k$ in $F$...
1. Homework Statement: Prove that $\lim_{n \to \infty} 2^{n}/n! = 0$. 2. Homework Equations: This implies that $2^{n}/n!$ is a null sequence and so this must hold: $(\forall \varepsilon > 0)(\exists N \in \mathbb{N}^{+})(\forall n \in \mathbb{N}^{+})[(n > N) \Rightarrow (|a_{n}| < \varepsilon)]$. 3. The Attempt at a...
The Closure of Closed Sets in a Topological Space
Recall from The Closure of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the closure of $A$ is the smallest closed subset containing $A$ denoted $\bar{A}$.
On the The Closure of a Set Equals the Union of the Set and Its Accumulation Points page, we found a nice relationship between the closure $\bar{A}$, the set $A$, and the set of accumulation points $A'$ in an equivalent definition of the closure of a set: $\bar{A} = A \cup A'$.
We will now look at a rather simple but nice theorem which says that $A \subseteq X$ is closed if and only if $\bar{A} = A$.
Theorem 1: Let $(X, \tau)$ be a topological space and let $A \subseteq X$. Then $A$ is closed if and only if $\bar{A} = A$. Proof: $\Rightarrow$ Suppose that $A$ is closed. Then the smallest closed subset containing $A$ is $A$ itself, and hence $\bar{A} = A$. $\Leftarrow$ Suppose that $\bar{A} = A$. By definition $\bar{A}$ is a closed set, so $A$ is a closed set. $\blacksquare$
Corollary 2: Let $(X, \tau)$ be a topological space and let $A \subseteq X$. If $A$ is clopen then $\bar{A} = A$. Proof:Suppose that $A$ is clopen. Then $A$ is closed and by Theorem 1 we have that $\bar{A} = A$. $\blacksquare$ |
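In a finite topological space the closure can be computed directly as the intersection of all closed supersets, which makes Theorem 1 easy to check by machine. A small Python sketch (the particular topology chosen is my own example):

```python
X = frozenset({1, 2, 3})
# A topology on X (easily checked by hand): the open sets
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}
closed = {X - U for U in tau}          # closed sets = complements of open sets

def closure(A):
    """Smallest closed set containing A: intersect all closed supersets."""
    out = X
    for C in closed:
        if A <= C:
            out = out & C
    return out

# Theorem 1: A is closed if and only if cl(A) = A
for C in closed:
    assert closure(C) == C

# {1} is open but not closed here; in fact it is dense, so cl({1}) = X
assert closure(frozenset({1})) == X
```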
I'm trying to figure out some kind of immunization using a factor model I developed for interest rates. Here is the basic problem. Let's say that we have a bond portfolio containing $N$ bonds with weights $x_i$ and one liability. Call the present value of the liability $P_L$. I'll suppose that the bond prices are given by $$P_i=\sum_{t=1}^T F_{it}e^{-r_tt}$$ where $F_{it}$ is the future payoff of bond $i$ and $r_t$ is the interest rate at time $t$. Now the factor model for the term structure of the interest rates can be written $$\Delta r_t=\sum_{j=1}^k \beta_{jt} \Delta f_j +\epsilon_t$$ for $k$ independent factors and standard normal error term $\epsilon_t$. I'm assuming I have small but not necessarily parallel shifts in the term structure, and I want a first-order condition for factor immunization. Can someone help?
The present value is $$ P_i= \sum_{t=1}^T F_{i,t} \exp(-r_t t); $$ if rates change to $r_t + \Delta r_t$ then the new price is $$ P_i^{new} = \sum_{t=1}^T F_{i,t} \exp(-(r_t+\Delta r_t) t). $$ By the expansion $\exp(x)\approx 1 + x$ applied to $\exp(-\Delta r_t\, t)$ we can write $$ P_i^{new} - P_i =: \Delta P_i \approx -\sum_{t=1}^T F_{i,t} \exp(-r_t t)\, \Delta r_t\, t. $$ Observing the shifts in these rates we have some kind of key-rate duration setting.
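As a sanity check of the first-order approximation $\Delta P_i \approx -\sum_t F_{i,t}\, e^{-r_t t}\, \Delta r_t\, t$, here is a toy Python computation (all cash flows, rates and shifts are invented numbers) comparing exact repricing under a small non-parallel shift with the linear key-rate approximation:

```python
import math

F = [10, 10, 110]                 # cash flows at t = 1, 2, 3
r = [0.02, 0.025, 0.03]           # zero rates
dr = [0.001, -0.0005, 0.0002]     # small non-parallel shift

def price(rates):
    """P = sum_t F_t * exp(-r_t * t)."""
    return sum(f * math.exp(-rt * t) for f, rt, t in zip(F, rates, range(1, 4)))

exact = price([a + b for a, b in zip(r, dr)]) - price(r)

# First-order key-rate approximation: ΔP ≈ -Σ F_t exp(-r_t t) Δr_t t
approx = -sum(f * math.exp(-rt * t) * d * t
              for f, rt, d, t in zip(F, r, dr, range(1, 4)))

# The discrepancy is second order in the shift, tiny for shifts of a few bp
assert abs(exact - approx) < 1e-4
```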
If you can decompose your liability $P_L$ in this way too (denote its cash flows by $F_t$), then you can try to find quantities $Q_i$ such that $$ \left|\sum_{t=1}^T (Q_i F_{i,t} - F_t)\, \exp(-r_t t)\, \Delta r_t\, t\right| \to \min. $$
Usually the movements in the yield curve are decomposed into level, slope and curvature, i.e., parallel shifts, steepness and bends in the term structure. These 3 factors can explain over 95%-98% of the total variance. Sometimes a second curvature point is used and the explanation reaches 99%. Instead of approximating it numerically, why not fit the Svensson or Nelson-Siegel model and use it as your factors?
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins
are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the
diagonal
$$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called
duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$that's the
right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
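For a concrete finite poset both adjunctions can be verified exhaustively. The sketch below (my own illustration, not from the lecture) uses the powerset of \( \{0,1,2\} \) ordered by inclusion, where the join is union and the meet is intersection, and checks the two defining equivalences for every triple of elements:

```python
from itertools import product

# Powerset of {0,1,2} ordered by inclusion (frozenset's <= is the subset order)
P = [frozenset(s) for s in
     [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]

for a, a2, b in product(P, repeat=3):
    # ∨ is left adjoint to Δ:  a ∨ a' ≤ b  ⇔  (a, a') ≤ (b, b)
    assert ((a | a2) <= b) == (a <= b and a2 <= b)
    # ∧ is right adjoint to Δ:  b ≤ a ∧ a'  ⇔  (b, b) ≤ (a, a')
    assert (b <= (a & a2)) == (b <= a and b <= a2)
```

Each assertion runs over all \(8^3 = 512\) triples, so the loop finishing silently is an exhaustive proof of Puzzle 45 for this particular poset.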
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.
Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.
Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).
Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).
Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come on opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
Suppose that $A$ is a finitely generated abelian group with even rank and without $2$-torsion. Does there exist a compact symplectic $6$-manifold $(M,\omega)$ such that $A$ is isomorphic to $H^{3}(M,\mathbb{Z})$?
(The reason for excluding $2$-torsion is the fact that a (closed, orientable, connected) smooth $6$-manifold without torsion in $H^{3}$ has an almost complex structure hence a hope to be symplectic).
Edit 1: Someone pointed out to me a very simple solution to the original question. Let $2k = \operatorname{rank}(A)$; by Gompf we may find a compact symplectic $4$-manifold $N$ such that $\pi_{1}(N) = H_{1}(N,\mathbb{Z}) = \mathbb{Z}^{k} \oplus T$ where $T$ is the torsion subgroup of $A$. Then by the Künneth short exact sequence, $H^{3}(N \times S^2,\mathbb{Z}) \cong A$. This works for any finitely generated abelian group of even rank, without the assumption on $2$-torsion.
We can even (by finding appropriate algebraic surface with $\pi_{1} = T$) stay in the category of smooth projective 3-folds.
Question Same question as above but we insist $\pi_{1}(M) = \{1\}$. What about in the category of (smooth) projective 3-folds? |
I'm doing Exercise 6.26 in Casella and Berger's
Statistical Inference, and I'm trying to prove the following:
"Use Theorem 6.6.5 to establish that, given a sample $X_1,...,X_n$, the maximum order statistic $X_{(n)}$ is a minimal sufficient statistic for the uniform$(0,\theta)$ family of distributions."
Theorem 6.6.5 (Minimal sufficient statistics): Suppose that the family of densities $\{f_0(\boldsymbol{X}),...,f_k(\boldsymbol{X})\}$ all have common support. Then
a. The statistic $$T(\boldsymbol{X}) = \bigg( \frac{f_1(\boldsymbol{X})}{f_0(\boldsymbol{X})}, \frac{f_2(\boldsymbol{X})}{f_0(\boldsymbol{X})}, ..., \frac{f_k(\boldsymbol{X})}{f_0(\boldsymbol{X})} \bigg)$$
is minimal sufficient for the family $\{f_0(\boldsymbol{X}),...,f_k(\boldsymbol{X})\}$.
b. If $\mathscr{F}$ is a family of densities with common support, and
(i) $f_i(\boldsymbol{x}) \in \mathscr{F}$, $i=0,1,...,k,$
(ii) $T(\boldsymbol{x})$ is sufficient for $\mathscr{F}$,
then $T(\boldsymbol{x})$ is minimal sufficient for $\mathscr{F}.$
I know how to prove that this statistic is a minimal sufficient statistic for this family of distributions, but I don't know how to prove it this way. Any help would be much appreciated, because I'm getting nowhere with this. Each member of this family of distributions has a different support (namely $(0,\theta)$), so I don't see how this theorem could be applied. But I found a set of notes (https://www.stat.colostate.edu/~riczw/teach/STAT730_S15/Lecture/ST730note.pdf) with the same problem (but no solution), so I'm thinking perhaps this is indeed doable and is not an error in the statement of the problem. |
The Order of HK of Two Subgroups H and K of a Group G
Recall from The Product HK of Two Subgroups H and K of a Group G page that if $G$ is a group and $H$, $K$ are subgroups of $G$ then we defined the product of $H$ and $K$ to be:

(1) $$HK = \{ hk : h \in H, k \in K \}$$
We noted that in general, $HK$ need not be a subgroup of $G$, and proved that $HK$ is a subgroup of $G$ if and only if $HK = KH$.
The following proposition tells us how large $HK$ is.
Proposition 1: Let $G$ be a group and let $H$ and $K$ be subgroups of $G$. Then $\displaystyle{|HK| = \frac{|H| |K|}{|H \cap K|}}$.

Note that since $H \cap K$ is a subgroup (by the proposition on the The Intersection and Union of Two Subgroups page) we have that $|H \cap K| \geq 1$. So in the equation above we are never dividing by $0$.

Proof: Since $H$ and $K$ are subgroups of $G$ we have that $H \cap K$ is a subgroup of $G$. Thus, if $z \in H \cap K$ then $z^{-1} \in H \cap K$. Furthermore, since $H \cap K$ is a subgroup of $H$ and a subgroup of $K$, we see that if $h \in H$, $k \in K$, and $z \in H \cap K$ then $hz \in H$ and $z^{-1}k \in K$. Thus, if $hk \in HK$ then $hk = (hz)(z^{-1}k)$. So every element of $HK$ can be written in at least $|H \cap K|$ different ways as an element from $H$ times an element from $K$.

Conversely, if $hk = h'k'$ then $h^{-1}h' = kk'^{-1}$. So $h^{-1}h' = kk'^{-1} \in H \cap K$, and hence there exists a $z \in H \cap K$ such that $h^{-1}h' = z$ and $kk'^{-1} = z$. So $h' = hz$ and $k' = z^{-1}k$, i.e., $h'k' = (hz)(z^{-1}k)$ for some $z \in H \cap K$. Therefore, every element of $HK$ can be written in exactly $|H \cap K|$ different ways as an element from $H$ times an element from $K$. Therefore:

$$|HK| = \frac{|H| |K|}{|H \cap K|}$$ $\blacksquare$
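As a hedged sanity check of Proposition 1, here is a small sketch in $S_3$, representing permutations of $\{0,1,2\}$ as tuples (the helper names are my own):

```python
# Permutations of {0, 1, 2} written as tuples; compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2)
H = {e, (1, 0, 2)}   # {identity, swap 0 and 1} -- a subgroup of S_3
K = {e, (2, 1, 0)}   # {identity, swap 0 and 2} -- another subgroup

HK = {compose(h, k) for h in H for k in K}
print(len(HK))                          # 4
print(len(H) * len(K) // len(H & K))    # 4, matching |H||K| / |H ∩ K|
```

Here $H \cap K = \{e\}$, so the proposition predicts $|HK| = 2 \cdot 2 / 1 = 4$, which the product set confirms (note that this $HK$ is in fact all the more interesting because it is not a subgroup of $S_3$).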
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724
Like this:
[/url][/wiki][/url]
[/wiki]
[/url][/code]
Many different combinations work. To reproduce, paste the above into a new post and click "preview".
x₁=ηx
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
I wonder if this works on other sites? (Remove/Change )
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Related:[url=http://a.com/]
[/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote:
Related:
[
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places.
Here, I'll fix it:
[/wiki][url]conwaylife.com[/url]
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
It appears I fixed @Saka's open <div>.
toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Aidan, could you fix your ultra quote? Now you can't even see replies or the post reply button. Also, a few more ones with unique effects popped up.
Apart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote. We should write descriptions for these:
-Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish.
-Saka Quote: A combination of a diluted Aidan Mode and quotes; leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages.
-Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them. Pushes the bottom bar to the side. The signature gets coded.
-Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by the software. Leaves the rest of the page white and quotes. Replies and the post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.
Last edited by Saka
on June 21st, 2017, 10:51 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
I actually laughed at the terminology.
"IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways,
[/wiki]
I like making rules
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it...
-Fluffykitty Pusher
Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Screenshot?
New one yay.
-Aidan Bomb: The smallest ultra-page-breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
Last edited by Saka
on June 21st, 2017, 10:20 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
The testing grounds have now become similar to actual military testing grounds.
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
We also have this thread. Also,
is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]
Last edited by fluffykitty
on June 22nd, 2017, 11:50 am, edited 1 time in total.
83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact:
oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar.
EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale.
Code: Select all
x = 8, y = 10, rule = B3/S23
3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo!
No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this
^
What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this
(Sorry I'm having a lot of fun with this)
cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy
Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]
fg
Moosey Posts: 2493 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
[/code]
Is a pinch broken
Doesn’t this thread belong in the sandbox?
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm
Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Moosey Posts: 2493 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Now it's half an aidan mode testing grounds.
Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf |
Construct two functions $f, g: \mathbb{R}^+ \to \mathbb{R}^+$ satisfying:
$f, g$ are continuous; $f, g$ are monotonically increasing; $f \ne O(g)$ and $g \ne O(f)$.
There are many examples for such functions. Perhaps the easiest way to understand how to get such an example, is by manually constructing it.
Let's start with functions over the natural numbers, as they can be continuously extended to the reals.
A good way to ensure that $f\neq O(g)$ and $g\neq O(f)$ is to
alternate between their orders of magnitude. For example, we could define
$f(n)=\begin{cases} n & n \text{ is odd}\\ n^2 & n \text{ is even}\\ \end{cases}$
Then, we could have $g$ behave the opposite on the odds and evens. However, this
doesn't work for you, because these functions are not monotonically increasing.
However, the choice of $n,n^2$ was somewhat arbitrary, and we could simply increase the magnitudes so as to have monotonicity. This way, we may come up with:
$f(n)=\begin{cases} n^{2n} & n \text{ is odd}\\ n^{2n-1} & n \text{ is even}\\ \end{cases}$, and $g(n)=\begin{cases} n^{2n-1} & n \text{ is odd}\\ n^{2n} & n \text{ is even}\\ \end{cases}$
Clearly these are monotone functions. Also, $f(n)\neq O(g(n))$, since on the odd integers, $f$ behaves like $n^{2n}$ while $g$ behaves like $n^{2n-1}=n^{2n}/n=o(n^{2n})$, and vice-versa on the evens.
Now all you need is to complete them to the reals (e.g. by adding linear parts between the integers, but this is really beside the point).
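A quick numerical sanity check of this construction (a sketch of mine, working with logarithms so the enormous values never overflow):

```python
import math

# log f(n) and log g(n) for the piecewise powers defined above.
def log_f(n):
    exponent = 2*n if n % 2 == 1 else 2*n - 1
    return exponent * math.log(n)

def log_g(n):
    exponent = 2*n - 1 if n % 2 == 1 else 2*n
    return exponent * math.log(n)

# On odd n, f/g = n (unbounded); on even n, f/g = 1/n (tends to 0).
odd_ratios = [math.exp(log_f(n) - log_g(n)) for n in (3, 101, 1001)]
even_ratios = [math.exp(log_f(n) - log_g(n)) for n in (4, 100, 1000)]
print(odd_ratios)   # ~[3, 101, 1001]: f/g is unbounded on the odds
print(even_ratios)  # ~[0.25, 0.01, 0.001]: g/f is unbounded on the evens
```

Since $f/g$ is unbounded along the odd integers and $g/f$ is unbounded along the evens, no constant $c$ can witness $f \le c\,g$ or $g \le c\,f$ eventually.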
Also, now that you have this idea, you could use the trigonometric functions in order to construct ``closed formulas'' for such functions, since $\sin$ and $\cos$ are oscillating, and peak on alternating points.
A good illustration of the oscillating behaviour for me is: http://www.wolframalpha.com/input/?i=sin%28x%29%2B2x%2C+cos%28x%29%2B2x

Note, however, that $f(x) = 2x+\sin x$ and $g(x) = 2x+\cos x$ do not themselves satisfy the requirement, since $f(x)/g(x) \to 1$, so in fact $f = O(g)$ and $g = O(f)$. Moving the oscillation into the exponent does work: $$f(x) = x^{2x+\sin x}, \qquad g(x) = x^{2x+\cos x}.$$ Both are monotonically increasing for $x \geq 1$ (the exponents are increasing, with derivatives $2+\cos x > 0$ and $2-\sin x > 0$), while $$\frac{f(x)}{g(x)} = x^{\sin x - \cos x} = x^{\sqrt{2}\sin(x-\pi/4)}$$ oscillates between $x^{-\sqrt{2}}$ and $x^{\sqrt{2}}$, so $f \neq O(g)$ and $g \neq O(f)$.
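A hedged numerical check that a trig pair of this kind works once the oscillation sits in the exponent: with $f(x) = x^{2x+\sin x}$ and $g(x) = x^{2x+\cos x}$ (my own variant of the idea), the log-ratio $(\sin x - \cos x)\ln x$ swings unboundedly in both directions:

```python
import math

def log_ratio(x):
    # log(f(x)/g(x)) for f = x**(2x + sin x), g = x**(2x + cos x)
    return (math.sin(x) - math.cos(x)) * math.log(x)

# sin - cos peaks at x = 3*pi/4 + 2*pi*k (value +sqrt(2)) and
# troughs at x = 7*pi/4 + 2*pi*k (value -sqrt(2)).
ups = [log_ratio(3*math.pi/4 + 2*math.pi*k) for k in (1, 10, 100)]
downs = [log_ratio(7*math.pi/4 + 2*math.pi*k) for k in (1, 10, 100)]
print(ups)     # positive and growing: f/g is unbounded
print(downs)   # negative and falling: g/f is unbounded
```

Because the log-ratio exceeds every bound on one subsequence and falls below every bound on another, neither function is big-O of the other.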
Denote a $2\times 2$ orthogonal matrix by $\begin{bmatrix}a & c\\ b & d\end{bmatrix}$.

Since it is an orthogonal matrix, the entries satisfy $a^2+b^2=1$, $ac+bd=0$, $c^2+d^2=1$.

Since $a^2+b^2=1$, it is standard to parametrize $a$ and $b$ by $a=\cos\theta$ and $b=\sin\theta$. Using the last two equations, we get $c=\mp \sin\theta$, $d=\pm\cos\theta$. In other words, we have two types of orthogonal matrix: $$A = \begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}, \qquad B = \begin{bmatrix}\cos\theta & \sin\theta \\ \sin\theta & -\cos\theta\end{bmatrix}.$$

Problem: What are the eigenvalues and corresponding eigenvectors of the matrix $B$? How are those two eigenvectors related to each other? What does the linear transformation $S$ that sends $x$ to $Bx$ do? What about the linear transformation $T$ that sends $x$ to $B^2x$?

Solution: To get the eigenvalues of the matrix $B$, consider the characteristic equation $$\det\left(B-\lambda I\right)=\begin{vmatrix}\cos\theta-\lambda & \sin\theta \\ \sin\theta & -\cos\theta-\lambda\end{vmatrix}=(\cos\theta-\lambda)(-\cos\theta-\lambda)-\sin^2\theta=\lambda^2-1=0.$$ This gives us two eigenvalues $\lambda_1 = 1$, $\lambda_2=-1$.

For $\lambda_1=1$, the eigenvector can be $\mathbf{u}=(\sin\theta, 1-\cos\theta)$. For $\lambda_2=-1$, the eigenvector can be $\mathbf{v}=(-\sin\theta, 1+\cos\theta)$. These two vectors $\mathbf{u}, \mathbf{v}$ are perpendicular to each other because their inner product is $0$. Now we have two perpendicular directions $\mathbf{u}$ and $\mathbf{v}$. On one hand, because the vector $\mathbf{u}$ corresponds to the eigenvalue $1$, $S$ preserves all the vectors in the direction of $\mathbf{u}$. On the other hand, because the vector $\mathbf{v}$ corresponds to the eigenvalue $-1$, $S$ reverses the direction of all the vectors in the direction of $\mathbf{v}$. Therefore $S$ is a reflection across the line through the origin with direction $\mathbf{u}$. Since the composition of two identical reflections does nothing, the linear transformation $T$ that sends $x$ to $B^2x$ is the identity map. It is also easy to check that $B^2$ is indeed the identity matrix.
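A quick numerical check of this solution (a sketch; the particular angle is my arbitrary choice, since any $\theta$ gives the same eigenvalues):

```python
import numpy as np

theta = 0.7   # arbitrary angle; any theta gives the same eigenvalues
B = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])

evals, evecs = np.linalg.eig(B)
print(np.sort(evals))                         # [-1.  1.]
print(np.allclose(B @ B, np.eye(2)))          # True: B^2 = I, so T is the identity
print(abs(evecs[:, 0] @ evecs[:, 1]) < 1e-9)  # True: the eigenvectors are orthogonal
```

The orthogonality of the eigenvectors also follows directly from the fact that $B$ is symmetric with distinct eigenvalues.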
My textbook defines power delivered to an inductor as:
$$P= V_{L\rm\ peak}I_{\rm peak} \cos ( \omega t) \sin( \omega t)$$ where $\omega$ is angular frequency.
but makes no mention of $P_{RMS}$. It simply says that $P_{av}$ is zero (which makes sense since it's defined as the product of two circular functions).
However, when we covered power, current, and voltage delivered to a resistor in an AC circuit, we used RMS values for current and voltage, and an average value for power. This made sense since power delivered to a resistor is a function of a squared sinusoidal function, so average was adequate.
In this section (inductors in AC circuits), only instantaneous power was discussed. This seemed odd to me. In previous sections the book discussed how taking an average of a sinusoidal function just returns zero, which is why we use RMS values instead. That makes perfect sense, so why not apply that approach here? Do we not care about RMS power? If so, why not?
They did say that the average power dissipated is given by $I_{\mathrm{rms}}^2 r$ where $r$ is internal resistance, assuming internal resistance is substantial. I'm curious about cases where internal resistance is negligible.
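For concreteness, the average of $\cos(\omega t)\sin(\omega t)$ over a full cycle is zero even though its RMS value is not, which is exactly why an RMS power would carry information the average does not. A hedged numerical sketch (component values are arbitrary, not from the textbook):

```python
import numpy as np

V_peak, I_peak = 10.0, 2.0        # arbitrary illustrative values
omega = 2 * np.pi * 60.0          # 60 Hz
T = 2 * np.pi / omega

t = np.linspace(0.0, T, 100_000, endpoint=False)   # one full cycle
p = V_peak * I_peak * np.cos(omega * t) * np.sin(omega * t)

avg = p.mean()                     # time average over the cycle
rms = np.sqrt((p**2).mean())       # RMS of the instantaneous power
print(abs(avg) < 1e-9)             # True: the average really is zero
print(rms)                         # ~7.071 = V_peak * I_peak / (2 * sqrt(2))
```

Since $\cos(\omega t)\sin(\omega t) = \tfrac12\sin(2\omega t)$, the analytic values are $P_{av} = 0$ and $P_{\mathrm{RMS}} = V_{L\rm\ peak}I_{\rm peak}/(2\sqrt{2})$, matching the numbers above.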
I'm not a biochemist. Biochemistry is super-duper complicated. The following is essentially wild speculation, but hopefully it can help guide your thinking.
The overall equation for photosynthesis on earth is:
$$6 CO_2 + 6 H_2O + \gamma \rightarrow C_6H_{12}O_6 + 6O_2$$
Here are the important things to note about this process, from the perspective of changing it. These features need to be present to allow for anything like the photosynthesis we see on modern earth:
The reaction requires energy to proceed (is endothermic), rather than releasing energy when it proceeds The solid output of the reaction (glucose) is a store of energy that is stable enough not to spontaneously decompose (A) Glucose is a moderately complicated molecule. This allows the biochemical system to manipulate it with a good degree of specificity using targeted proteins, and minimizes the chances that it will cause unwanted side reactions. The byproduct of the reaction is a gas, which can easily escape the plant. Liquids are OK too, but not solids (which are difficult to transport out of the plant)
You ask about instead using the reaction scheme:
$$nNH_3 + mX + \gamma \rightarrow iY + jCH_4$$
You also posit that this ammonia exists in a liquid state (analogous to water). This is your first problem: ammonia boils at -33$^{\circ}$C, which is about 50$^{\circ}$C colder than on earth. As a rule of thumb, every 10$^{\circ}$C drop in temperature cuts chemical reaction rates roughly in half, which means that reactions on this planet will take place about 32x slower than on earth. That makes it unlikely that an endothermic reaction like this could take place.
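The rule-of-thumb slowdown can be sketched numerically (the 20 $^{\circ}$C Earth baseline is my own assumption):

```python
# Rule of thumb from the text: chemical reaction rates roughly double per 10 C.
def rate_factor(delta_T, step=10.0):
    return 2.0 ** (delta_T / step)

# Baseline of 20 C on Earth vs. -33 C liquid ammonia is my assumption.
gap = 20.0 - (-33.0)
print(rate_factor(50.0))        # 32.0, the round number used in the text
print(round(rate_factor(gap)))  # ~39 for the full 53 C gap
```

Either way, the conclusion stands: reactions would run tens of times slower than in Earth-temperature water.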
You have a few ways around this. Perhaps your biochemistry includes a ubiquitous reaction that is quite exothermic, which is used to locally heat life enough for reactions to happen at a reasonable pace. Perhaps life only grows around geothermal hot-spots, where the temperature is locally higher and the ammonia is gaseous. Or perhaps the planet has a similar temperature to that of earth, but an atmospheric pressure about 10x higher, allowing liquid ammonia at room temperature (this likely creates its own set of problems).
Anyway, setting that aside, let's see if we can come up with a moderately stable, moderately complex nitrogen compound to replace sugar. That will guide the rest of the reaction. I think an amino acid is probably a decent choice. I'll use glycine, because it's simple and this is already hard enough as-is:
Now, we have the reaction scheme
$$nNH_3 + mX + \gamma \rightarrow NH_2CH_2COOH + jCH_4$$
We can (stoichiometrically) satisfy this reaction using propanoic acid
$$NH_3 + CH_3CH_2COOH \rightarrow NH_2CH_2COOH + CH_4$$
Based on a quick look at the standard enthalpies of formation, this reaction should be endothermic*. Check. Glycine is a relatively stable solid (we produce it all the time in our bodies) that is moderately complex (complex enough to be used to build proteins at least). Check. We're consuming ammonia and producing methane, as you asked. Check.
So this is my submission for your photosynthesis. The next step is to come up with whatever kind of crazy pathway this reaction schema could possibly use. However that sort of thing is way over my head (even photosynthesis on earth is really very complicated, involving lots of electron transfer and stuff), so this is where I'll leave you.
Happy worldbuilding!
$*$ The enthalpies of formation I found are as follows (rounded quite a bit):
Ammonia: -45 kJ/mol
Propanoic acid: -510 kJ/mol
Methane: -75 kJ/mol
Glycine: 1430 kJ/mol
Thus the overall reaction has (45 + 510) < (1430 - 75) which implies it will not be spontaneous, with a net endotherm of about 800 kJ/mol. I believe this is a bit under half the endotherm of photosynthesis on Earth. |
Taiwanese Journal of Mathematics
Taiwanese J. Math., Volume 17, Number 5 (2013), 1819-1837.
AN EXTENSION OPERATOR ASSOCIATED WITH CERTAIN $G$-LOEWNER CHAINS
Abstract
In this paper we are concerned with an extension operator $\Phi_{n,\alpha,\beta}$ that provides a way of extending a locally univalent function $f$ on the unit disc $U$ to a locally biholomorphic mapping $F\in H(B^{n})$. By using the method of Loewner chains, we prove that if $f$ can be embedded as the first element of a $g$-Loewner chain on the unit disc, where $g(\zeta)=\frac{1-\zeta}{1+(1-2\gamma)\zeta}$ for $|\zeta|\lt 1$ and $\gamma \in (0,1)$, then $F=\Phi_{n,\alpha,\beta}(f)$ can also be embedded as the first element of a $g$-Loewner chain on $B^n$, whenever $\alpha\in [0,1]$, $\beta \in [0,1/2]$, $\alpha +\beta \leq 1$. In particular, if $f$ is starlike of order $\gamma \in (0,1)$ on $U$, then $F=\Phi_{n,\alpha,\beta}(f)$ is also starlike of order $\gamma$ on $B^n$. Also, if $f$ is spirallike of type $\delta$ and order $\gamma$ on $U$, where $\delta\in (-\pi/2,\pi/2)$ and $\gamma \in (0,1)$, then $F=\Phi_{n,\alpha,\beta}(f)$ is spirallike of type $\delta$ and order $\gamma$ on $B^n$. We also obtain a subordination preserving result under the operator $\Phi_{n,\alpha,\beta}$ and we consider some radius problems associated with this operator.
Article information
Source: Taiwanese J. Math., Volume 17, Number 5 (2013), 1819-1837.
Dates: First available in Project Euclid: 10 July 2017
Permanent link to this document: https://projecteuclid.org/euclid.twjm/1499706240
Digital Object Identifier: doi:10.11650/tjm.17.2013.2966
Mathematical Reviews number (MathSciNet): MR3106045
Zentralblatt MATH identifier: 1288.32007
Keywords: biholomorphic mapping; $g$-Loewner chain; $g$-parametric representation; Roper-Suffridge extension operator; spirallike mapping of type $\delta$ and order $\gamma$; starlike mapping; starlike mapping of order $\gamma$; subordination
Citation
Chirilǎ, Teodora. AN EXTENSION OPERATOR ASSOCIATED WITH CERTAIN $G$-LOEWNER CHAINS. Taiwanese J. Math. 17 (2013), no. 5, 1819--1837. doi:10.11650/tjm.17.2013.2966. https://projecteuclid.org/euclid.twjm/1499706240 |
Ugh, I just lost my post, but the short version is that, on top of Igor's answer, it is easy to prove this using Edmonds' characterization of the perfect matching polytope, which implies that putting weight $1/k$ on every edge gives a vector in the polytope. From this fact the matching-coveredness is straightforward.
EDIT:
Edmonds proved that a vector (i.e. an edge-weighting $w(e)$) is in the perfect matching polytope (i.e. the convex hull of incidence vectors of perfect matchings) if and only if the following hold:
1) Every edge has weight in $[0,1]$.
2) Every set $S$ of vertices with odd size has $\sum_{e\in\delta(S)} w(e) \geq 1$, where $\delta(S)$ is the set of edges with exactly one endpoint in $S$.
3) Every vertex $v$ satisfies $\sum_{e\in\delta(\{v\})} w(e) = 1$.
It is an easy exercise to show that these conditions are necessary, but as Edmonds proved, they are also sufficient. This implies immediately that if $G$ is a $(k-1)$-edge-connected graph that is $k$-regular, the vector giving every edge weight $1/k$ is in the perfect matching polytope of $G$ (in other words, $G$ is fractionally $k$-edge-colourable). Since the weight vector is nonzero everywhere, every edge must be contained in at least one perfect matching. (Again in other words, since only perfect matchings can be used to fractionally $k$-edge-colour a $k$-regular graph, every edge must be in a perfect matching.)
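As a hedged sanity check of conditions 1-3, here is a sketch verifying that the all-$1/k$ weighting satisfies them on a small example, $K_4$ (which is 3-regular and 3-edge-connected):

```python
from itertools import combinations

# K4 is 3-regular and 3-edge-connected; give every edge weight 1/k = 1/3.
vertices = list(range(4))
edges = list(combinations(vertices, 2))      # the 6 edges of K4
w = {edge: 1/3 for edge in edges}

def cut_weight(S):
    """Total weight of edges with exactly one endpoint in S."""
    return sum(w[edge] for edge in edges if len(set(edge) & S) == 1)

# Condition 3: every vertex is saturated with total incident weight 1.
vertex_ok = all(abs(cut_weight({v}) - 1.0) < 1e-9 for v in vertices)

# Condition 2: every odd-size vertex set has cut weight >= 1.
odd_sets = [set(S) for r in (1, 3) for S in combinations(vertices, r)]
odd_ok = all(cut_weight(S) >= 1 - 1e-9 for S in odd_sets)

print(vertex_ok, odd_ok)   # True True
```

Condition 1 holds trivially since $1/3 \in [0,1]$, so the weighting lies in the perfect matching polytope of $K_4$.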
Rate of turn is dependent on the following two items:
The horizontal component of lift (centripetal force) The tangential velocity of the aircraft (true airspeed)
The rate of turn is directly proportional to the horizontal component of lift and inversely proportional to the tangential velocity of the aircraft.
For a given angle of bank, the vertical and horizontal components of lift will be the same, regardless of airspeed in level flight.
Consequently the airplane will experience the same centripetal acceleration, regardless of airspeed.
Since the tangential velocity is smaller, the same centripetal force will produce a greater rate of turn for a slower-flying aircraft than for a faster-moving one, and this can be shown with the centripetal acceleration equation
$$a_c = \frac{v^2}{r}$$
so both the slow-flying airplane with a true airspeed $v_s = 100$ knots and the fast-flying airplane with a true airspeed $v_f = 200$ knots experience the same centripetal acceleration:
$$\dfrac{v_s^2}{r_s} = \dfrac{v_f^2}{r_f} = 4\ \dfrac{v_s^2}{r_f}$$
or, $$\dfrac{1}{r_s} = \dfrac{4}{r_f}$$
Consequently $r_s < r_f$; in this case $r_f = 4\ r_s$
Since the angular velocity is equal to the tangential speed divided by the radius,
$$\omega = v/r$$
the angular speed of the slower aircraft will be greater than that of the faster aircraft.
$$\omega_s = v_s/r_s$$
and
$$\omega_f = \dfrac{v_f}{r_f} = \dfrac{2 \, v_s}{4 \, r_s} = \frac{1}{2}\omega_s$$
So our twice as slow airplane turns twice as fast as the faster one does under these conditions. |
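The $\omega_s = 2\,\omega_f$ conclusion can be sketched numerically (the 30° bank angle and the knot conversion are my own illustrative choices, not from the text):

```python
import math

# Same bank angle for both aircraft => same horizontal (centripetal) acceleration.
g = 9.81
bank = math.radians(30)               # 30 degrees, an arbitrary choice
a_c = g * math.tan(bank)

KT = 0.51444                          # metres per second in one knot
v_s, v_f = 100 * KT, 2 * (100 * KT)   # 100 kt vs 200 kt true airspeed

# omega = v / r with r = v**2 / a_c, so omega = a_c / v.
omega_s, omega_f = a_c / v_s, a_c / v_f
r_s, r_f = v_s**2 / a_c, v_f**2 / a_c

print(omega_s / omega_f)   # 2.0 -- the slower airplane turns twice as fast
print(r_f / r_s)           # 4.0 -- the faster airplane needs four times the radius
```

The specific bank angle drops out of both ratios, which is why the text's argument works for any fixed angle of bank.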
This question already has an answer here:
The question is: suppose the two following phenomena affect the motion of a planet that orbits a star:
Celestial body is pulled by some massive star by ordinary Newtonian gravity:
$$ m \frac{d^2r}{dt^2} = - G \frac{Mm}{r^2} $$
There is additional radial velocity component
$$ \frac{dr}{dt} = kr\ ,\qquad k << 1 $$ or (linear approximation): $$ r(t) = r_0 (1+ kt) , k << 1\ , $$
that pulls the body $m$ away from the mass $M$ - and
that is independent from the gravity (that means it happens also in zero gravity), what kind of solutions would these 2 rules give for orbits and can the orbits be stationary in some situations?
The Newtonian gravitation law alone, when the mass $M \gg m$, would result in a Keplerian orbit: $$ r(\theta) = \frac{a(1-e^2)}{1 + e\cos(\theta)}$$ that is, a circle, ellipse, parabola, or hyperbola.
Partial answers: A. At least the circular-motion equilibrium can be maintained at the start simply by demanding that the initial velocity of the body be not exactly tangential, but have a small radial component inward:
$$ v_{initial} = v_{tan} + v_{r}\ , $$ where $$ v_{r} = -kr $$
Now the radial component of the initial velocity cancels the radial velocity $kr$ at the initial time, which yields the familiar equilibrium equation for circular motion:
$$ v^2/r = GM/r^2 $$
B. Also, when the planet is put into the aphelion point of the orbit, where the Newtonian gravitation is at its minimum, its velocity according to the Newtonian gravitation law would be perpendicular to $r$ and at its minimum; the initial velocity should be slightly inward from the tangential velocity $v$ if this point is to be a true aphelion point. From this initial condition, the first guess is that the resulting orbit or trajectory should lie somewhere between the aphelion circle $r_{max}$ and the perihelion circle $r_{min}$ - at least during the first cycle. The first guess for the orbit equation is simply:
$$ r(\theta) = \frac{a(1-e^2)}{1 + e\cos(\theta)}(1+kt) $$
or
$$ r(\theta) = \frac{a(1-e^2)}{1 + e\cos(\theta)}e^{kt}\ , $$
if nothing else (e.g. the gravity of other planets, orbital decay, etc.) affects the motion.

The reason why I ask this question is that I would like to think about what would happen if the cosmological expansion of the universe affected celestial mechanics. I have read that this is thought not to be the case; cosmic expansion is not affecting the solar system. But if it were, the cosmic expansion would be described approximately by the equations:
$$ r = r_0(1 + Ht)\ , $$
which is a linear approximation when $t$ is small, or
$$ \frac{dr}{dt} = Hr\ , $$
which is another approximation, whose solution is an exponential function.
And the value for $H$ would be around $H = 2.20 \times 10^{-18}\, \mathrm{s}^{-1}$ or $H = 6.93 \times 10^{-11}\, \mathrm{year}^{-1}$, that is simply the Hubble constant is the units [1/s] and [1/year].
With this value for $k$, for example Earth-Moon distance 384000 km should increase by 2.63 cm/year, and Earth-Sun distance $150\times10^6\, \mathrm{km}$ would increase by 10.4 m/year, if nothing cancels this effect.
The observed values are 3.8 cm/year (radar measurements) and 10.4 cm/year (that is, 100 times smaller than predicted; I don't know how the value of the AU is actually measured). The Moon-Earth system is thought to separate due to tidal forces, but it is unclear to me how much this effect really contributes to the increase of the Earth-Moon distance.
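The arithmetic behind the quoted drift rates can be sketched directly from $dr/dt = Hr$ (using the values quoted above):

```python
H_per_year = 6.93e-11      # Hubble constant in 1/year, as quoted above

moon_m = 3.84e8            # Earth-Moon distance in metres
sun_m = 1.50e11            # Earth-Sun distance in metres

moon_drift_cm = H_per_year * moon_m * 100   # ~2.66 cm/year
sun_drift_m = H_per_year * sun_m            # ~10.4 m/year
print(f"{moon_drift_cm:.2f} cm/year, {sun_drift_m:.1f} m/year")
```

These reproduce the figures in the question to within rounding of the input distances.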
How does one test for simultaneous equality of chosen coefficients in a logit or probit model? What is the standard approach, and what is the state-of-the-art approach?
Wald test
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")  # load dataset from the web
mydata$rank <- factor(mydata$rank)
mylogit <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")  # fit the logistic regression
summary(mylogit)

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.989979   1.139951  -3.500 0.000465 ***
gre          0.002264   0.001094   2.070 0.038465 *  
gpa          0.804038   0.331819   2.423 0.015388 *  
rank2       -0.675443   0.316490  -2.134 0.032829 *  
rank3       -1.340204   0.345306  -3.881 0.000104 ***
rank4       -1.551464   0.417832  -3.713 0.000205 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Say, you want to test the hypothesis $\beta_{gre}=\beta_{gpa}$ vs. $\beta_{gre}\neq \beta_{gpa}$. This is equivalent to testing $\beta_{gre} - \beta_{gpa} = 0$. The Wald test statistic is:
$$ W=\frac{(\hat{\beta}-\beta_{0})}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$
or
$$ W^2 = \frac{(\hat{\theta}-\theta_{0})^2}{\operatorname{Var}(\hat{\theta})}\sim \chi_{1}^2 $$
Our $\widehat{\theta}$ here is $\beta_{gre} - \beta_{gpa}$ and $\theta_{0}=0$. So all we need is the standard error of $\beta_{gre} - \beta_{gpa}$. We can calculate the standard error with the Delta method:
$$ \widehat{\operatorname{se}}(\beta_{gre} - \beta_{gpa})\approx \sqrt{\operatorname{Var}(\beta_{gre}) + \operatorname{Var}(\beta_{gpa}) - 2\cdot \operatorname{Cov}(\beta_{gre},\beta_{gpa})} $$
So we also need the covariance of $\beta_{gre}$ and $\beta_{gpa}$. The variance-covariance matrix can be extracted with the
vcov command after running the logistic regression:
var.mat <- vcov(mylogit)[c("gre", "gpa"), c("gre", "gpa")]
colnames(var.mat) <- rownames(var.mat) <- c("gre", "gpa")

              gre           gpa
gre  1.196831e-06 -0.0001241775
gpa -1.241775e-04  0.1101040465
Finally, we can calculate the standard error:
se <- sqrt(1.196831e-06 + 0.1101040465 - 2*(-0.0001241775))
se
[1] 0.3321951
So your Wald $z$-value is
wald.z <- (coef(mylogit)["gre"] - coef(mylogit)["gpa"])/se
wald.z
[1] -2.413564
To get a $p$-value, just use the standard normal distribution:
2*pnorm(-2.413564)
[1] 0.01579735
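The same arithmetic can be reproduced outside R with nothing but a standard library; here is a sketch in Python, plugging in the estimates and (co)variances printed above:

```python
import math

# coefficient estimates and (co)variances from the R output above
b_gre, b_gpa = 0.002264, 0.804038
var_gre, var_gpa, cov = 1.196831e-06, 0.1101040465, -0.0001241775

# standard error of the difference of the two coefficients
se = math.sqrt(var_gre + var_gpa - 2 * cov)

# Wald z statistic and two-sided p-value; erfc(|z|/sqrt(2)) equals 2*pnorm(-|z|)
z = (b_gre - b_gpa) / se
p = math.erfc(abs(z) / math.sqrt(2))

print(round(se, 4), round(z, 3), round(p, 4))
```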
In this case we have evidence that the coefficients are different from each other. This approach can be extended to more than two coefficients.
Using multcomp
These rather tedious calculations can be conveniently done in
R using the
multcomp package. Here is the same example as above but done with
multcomp:
library(multcomp)
glht.mod <- glht(mylogit, linfct = c("gre - gpa = 0"))
summary(glht.mod)

Linear Hypotheses:
               Estimate Std. Error z value Pr(>|z|)  
gre - gpa == 0  -0.8018     0.3322  -2.414   0.0158 *

confint(glht.mod)
A confidence interval for the difference of the coefficients can also be calculated:
Quantile = 1.96
95% family-wise confidence level

Linear Hypotheses:
               Estimate lwr     upr    
gre - gpa == 0 -0.8018  -1.4529 -0.1507
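This interval is just the estimate plus or minus 1.96 standard errors; a quick sanity check in Python with the numbers from the output above:

```python
est = -0.8018   # estimate of gre - gpa from the multcomp output
se = 0.3322     # its standard error
q = 1.96        # the normal quantile shown in the "Quantile" line

lwr, upr = est - q * se, est + q * se
print(round(lwr, 4), round(upr, 4))
```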
Likelihood ratio test (LRT)
The coefficients of a logistic regression are found by maximum likelihood. But because the likelihood function involves a lot of products, the log-likelihood is maximized instead, which turns the products into sums. The model that fits better has a higher log-likelihood. The model involving more variables has at least the same log-likelihood as the null model. Denoting the log-likelihood of the alternative model (the model containing more variables) by $LL_{a}$ and the log-likelihood of the null model by $LL_{0}$, the likelihood ratio test statistic is:
$$ D=2\cdot (LL_{a} - LL_{0})\sim \chi_{df_{a}-df_{0}}^{2} $$
The likelihood ratio test statistic follows a $\chi^{2}$-distribution with the degrees of freedom being the difference in the number of parameters. In our case this is 1: the full model has 6 parameters, the constrained model 5.
To perform the likelihood ratio test, we also need to fit the model with the constraint $\beta_{gre}=\beta_{gpa}$ to be able to compare the two likelihoods. The full model has the form $$\log\left(\frac{p_{i}}{1-p_{i}}\right)=\beta_{0}+\beta_{1}\cdot \mathrm{gre} + \beta_{2}\cdot \mathrm{gpa}+\beta_{3}\cdot \mathrm{rank_{2}} + \beta_{4}\cdot \mathrm{rank_{3}}+\beta_{5}\cdot \mathrm{rank_{4}}$$ Our constrained model has the form: $$\log\left(\frac{p_{i}}{1-p_{i}}\right)=\beta_{0}+\beta_{1}\cdot (\mathrm{gre} + \mathrm{gpa})+\beta_{2}\cdot \mathrm{rank_{2}} + \beta_{3}\cdot \mathrm{rank_{3}}+\beta_{4}\cdot \mathrm{rank_{4}}$$
mylogit2 <- glm(admit ~ I(gre + gpa) + rank, data = mydata, family = "binomial")
In our case, we can use
logLik to extract the log-likelihood of the two models after a logistic regression:
L1 <- logLik(mylogit)
L1
'log Lik.' -229.2587 (df=6)
L2 <- logLik(mylogit2)
L2
'log Lik.' -232.2416 (df=5)
The model containing the constraint on
gre and
gpa has a slightly lower log-likelihood (-232.24) than the full model (-229.26). Our likelihood ratio test statistic is:
D <- 2*(L1 - L2)
D
[1] 5.9658
We can now use the CDF of the $\chi^{2}_{1}$ distribution to calculate the $p$-value:
1 - pchisq(D, df=1)
[1] 0.01458625
The $p$-value is very small indicating that the coefficients are different.
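Since a $\chi^{2}$ variable with one degree of freedom is the square of a standard normal, its CDF is $\operatorname{erf}(\sqrt{x/2})$, so this $p$-value can also be verified with plain Python:

```python
import math

ll_full, ll_constr = -229.2587, -232.2416   # log-likelihoods from logLik() above

D = 2 * (ll_full - ll_constr)               # likelihood ratio statistic

# survival function of the chi-square distribution with df = 1
p = 1.0 - math.erf(math.sqrt(D / 2.0))

print(round(D, 4), round(p, 5))
```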
R has the likelihood ratio test built in; we can use the
anova function to calculate the likelihood ratio test:
anova(mylogit2, mylogit, test="LRT")

Analysis of Deviance Table

Model 1: admit ~ I(gre + gpa) + rank
Model 2: admit ~ gre + gpa + rank
  Resid. Df Resid. Dev Df Deviance Pr(>Chi)  
1       395     464.48                       
2       394     458.52  1   5.9658  0.01459 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Again, we have strong evidence that the coefficients of
gre and
gpa are significantly different from each other.
Score test (aka Rao's Score test aka Lagrange multiplier test)
The Score function $U(\theta)$ is the derivative of the log-likelihood function ($\log L(\theta|x)$), where $\theta$ denotes the parameters and $x$ the data (the univariate case is shown here for illustration purposes):
$$ U(\theta) = \frac{\partial \log L(\theta|x)}{\partial \theta} $$
This is basically the slope of the log-likelihood function. Further, let $I(\theta)$ be the Fisher information matrix, the negative expectation of the second derivative of the log-likelihood function with respect to $\theta$. The score test statistic is:
$$ S(\theta_{0})=\frac{U(\theta_{0})^{2}}{I(\theta_{0})}\sim\chi^{2}_{1} $$
The score test can also be calculated using
anova (the score test statistic is called "Rao"):
anova(mylogit2, mylogit, test="Rao")

Analysis of Deviance Table

Model 1: admit ~ I(gre + gpa) + rank
Model 2: admit ~ gre + gpa + rank
  Resid. Df Resid. Dev Df Deviance    Rao Pr(>Chi)  
1       395     464.48                              
2       394     458.52  1   5.9658 5.9144  0.01502 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The conclusion is the same as before.
Note
An interesting relationship holds between the different test statistics when the model is linear (Johnston and DiNardo (1997): Econometric Methods): Wald $\geq$ LR $\geq$ Score.
You did not specify whether your variables are binary or something else; I assume you are talking about binary variables. There also exist multinomial versions of the probit and logit models.
In general, you can use the complete trinity of test approaches, i.e.
Likelihood-Ratio-test
LM-Test
Wald-Test
Each test uses different test-statistics. The standard approach would be to take one of the three tests. All three can be used to do joint tests.
The LR test uses the difference of the log-likelihoods of a restricted and an unrestricted model. The restricted model is the model in which the specified coefficients are set to zero; the unrestricted model is the "normal" model. The Wald test has the advantage that only the unrestricted model needs to be estimated. It basically asks whether the restriction is nearly satisfied when evaluated at the unrestricted MLE. In the case of the Lagrange multiplier test, only the restricted model has to be estimated. The restricted ML estimator is used to calculate the score of the unrestricted model. This score will usually not be zero, and this discrepancy is the basis of the LM test. The LM test can, in your context, also be used to test for heteroscedasticity.
The standard approaches are the Wald test, the likelihood ratio test and the score test. Asymptotically they should be the same. In my experience the likelihood ratio test tends to perform slightly better in simulations on finite samples, but the cases where this matters would be very extreme (small-sample) scenarios, where I would take all of these tests as rough approximations only. However, depending on your model (number of covariates, presence of interaction effects) and your data (multicollinearity, the marginal distribution of your dependent variable), the "wonderful kingdom of Asymptotia" can be well approximated by a surprisingly small number of observations.
Below is an example of such a simulation in Stata using the Wald, likelihood ratio and score tests in a sample of only 150 observations. Even in such a small sample the three tests produce fairly similar p-values, and the sampling distribution of the p-values when the null hypothesis is true does seem to follow a uniform distribution, as it should (or at least the deviations from the uniform distribution are no larger than one would expect due to the randomness inherent in a Monte Carlo experiment).
clear all
set more off

// data preparation
sysuse nlsw88, clear
gen byte edcat = cond(grade < 12, 1,      ///
                 cond(grade == 12, 2, 3)) ///
                 if grade < .
label define edcat 1 "less than high school" ///
                   2 "high school"           ///
                   3 "more than high school"
label value edcat edcat
label variable edcat "education in categories"

// create cascading dummies, i.e.
// edcat2 compares high school with less than high school
// edcat3 compares more than high school with high school
gen byte edcat2 = (edcat >= 2) if edcat < .
gen byte edcat3 = (edcat >= 3) if edcat < .
keep union edcat2 edcat3 race south
bsample 150 if !missing(union, edcat2, edcat3, race, south)

// constraining edcat2 = edcat3 is equivalent to adding
// a linear effect (in the log odds) of edcat
constraint define 1 edcat2 = edcat3

// estimate the constrained model
logit union edcat2 edcat3 i.race i.south, constraint(1)

// predict the probabilities
predict pr
gen byte ysim = .
gen w = .

program define sim, rclass
    // create a dependent variable such that the null hypothesis is true
    replace ysim = runiform() < pr

    // estimate the constrained model
    logit ysim edcat2 edcat3 i.race i.south, constraint(1)
    est store constr

    // score test
    tempname b0
    matrix `b0' = e(b)
    logit ysim edcat2 edcat3 i.race i.south, from(`b0') iter(0)
    matrix chi = e(gradient)*e(V)*e(gradient)'
    return scalar p_score = chi2tail(1, chi[1,1])

    // estimate the unconstrained model
    logit ysim edcat2 edcat3 i.race i.south
    est store full

    // Wald test
    test edcat2 = edcat3
    return scalar p_Wald = r(p)

    // likelihood ratio test
    lrtest full constr
    return scalar p_lr = r(p)
end

simulate p_score=r(p_score) p_Wald=r(p_Wald) p_lr=r(p_lr), reps(2000) : sim
simpplot p*, overall reps(20000) scheme(s2color) ylab(,angle(horizontal))
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The Closure of a Set in a Topological Space Examples 1
Recall from The Closure of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the closure of $A$ is the smallest closed set containing $A$.
We will now look at some examples of the closure of a set.
Example 1 Consider the topological space $(X, \tau)$ where $\tau$ is the discrete topology on $X$. What is the closure of $A \subseteq X$?
Recall that if $\tau$ is the discrete topology then $\tau = \mathcal P (X)$. Hence every subset of $X$ is open and by extension, every subset of $X$ is closed. Hence, for each $A \subseteq X$ the smallest closed set containing $A$ is $A$ itself!
Therefore we have that:(1)
$$\bar{A} = A$$
Example 2 Consider the topological space $(X, \tau)$ where $\tau$ is the indiscrete topology on $X$. What is the closure of $A \subseteq X$?
If $\tau$ is the indiscrete topology then $\tau = \{ \emptyset, X \}$. Furthermore, the only closed sets of $X$ with respect to the indiscrete topology are $\emptyset$ and $X$.
If $A \neq \emptyset$ then the smallest closed set containing $A$ is $X$ itself! Hence:(2)
$$\bar{A} = X$$
If $A = \emptyset$ then the smallest closed set containing $A$ is $\emptyset$, so:(3)
$$\bar{A} = \emptyset$$
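In finite examples these closures can be computed mechanically, since the closure is the intersection of all closed sets containing $A$. A small Python sketch (the underlying set and topologies here are toy choices for illustration):

```python
from itertools import chain, combinations

def closure(X, topology, A):
    """Closure of A: intersection of all closed sets (complements of open sets) containing A."""
    result = set(X)
    for U in topology:
        C = X - U              # a closed set
        if A <= C:
            result &= C
    return result

X = {1, 2, 3}
A = {1}

# discrete topology: every subset of X is open
discrete = [set(s) for s in chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))]
print(closure(X, discrete, A))        # the closure of A is A itself

# indiscrete topology: only the empty set and X are open
print(closure(X, [set(), X], A))      # the closure of any nonempty A is all of X
```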
Example 3 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau = \{ \emptyset \} \cup \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \}$. What is the closure of $A = \{ 0 \}$? What is the closure of $B = (2, 3)$?
Notice that the open sets of $\mathbb{R}$ with respect to the topology $\tau$ are:(4)
$$\emptyset, \; (-1, 1), \; (-2, 2), \; (-3, 3), \; ..., \; \mathbb{R}$$
Therefore the closed sets of $\mathbb{R}$ with respect to this topology are:(5)
$$\mathbb{R}, \; (-\infty, -1] \cup [1, \infty), \; (-\infty, -2] \cup [2, \infty), \; ..., \; \emptyset$$
Notice that none of these closed sets except the whole set $\mathbb{R}$ contains $\{ 0 \}$. Therefore:(6)
$$\overline{\{ 0 \}} = \mathbb{R}$$
Now notice that $(2, 3) \subseteq (-\infty, -2] \cup [2, \infty)$. This is the smallest such closed set, and so:(7)
$$\overline{(2, 3)} = (-\infty, -2] \cup [2, \infty)$$
I was analyzing a GARCH(1,1) process. In particular, let's say that I have a process $\{y_t\}$, with $t \in \{1,2,...,T\}$. I have created a GARCH process that can be written as:
$\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta \sigma_{t-1}^2$,
with $t \in \{1,2,...,T\}$. After that, I maximize the log-likelihood of the model, obtaining the three parameters, namely $\hat{\alpha}, \hat{\beta}, \hat{\omega}$.
Now I would like to use these estimated parameters for forecasting volatility. In particular, I can get
$\sigma_{T+1}^2 = \hat{\omega} + \hat{\alpha} y_{T}^2 + \hat{\beta} \sigma_{T}^2$
that is, I can get the estimated volatility at $(T+1)$. It is not clear in my mind how I can get, for example, $\sigma_{T+2}^2$. According to the formulation above, I would have to use $y_{T+1}^2$, but my series stops at $T$. How can I get the forecasted value $y_{T+1}$?
I have found in the literature that, for a GARCH(1,1) and $k \geq 2$,
$\mathbb{E}_T[\sigma_{T+k}^2] = \hat{\omega}\sum_{i=0}^{k-2} (\hat{\alpha}+\hat{\beta})^i + (\hat{\alpha}+\hat{\beta})^{k-1}\sigma_{T+1}^2$
Thus I can use the forecasted value for $\sigma_{T+k}^2$ and, inverting the GARCH(1,1) formulation, get the forecasted value of $y_{T+1}$.
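The standard resolution (not original to this question) is that no forecast of $y_{T+1}$ itself is needed: since $\mathbb{E}_T[y_{T+1}^2] = \sigma_{T+1}^2$, one can replace $y_{T+1}^2$ by $\sigma_{T+1}^2$ in the recursion, which reproduces the closed-form expression above. A Python sketch with made-up parameter values:

```python
# illustrative (made-up) parameter estimates and one-step-ahead variance
omega, alpha, beta = 0.1, 0.08, 0.90
sigma2_T1 = 1.5   # sigma^2_{T+1}, computed from y_T and sigma^2_T

def forecast_recursive(k):
    """E_T[sigma^2_{T+k}] by iterating sigma^2 = omega + (alpha+beta)*sigma^2,
    which uses E_T[y^2] = sigma^2 to eliminate the unobserved future y's."""
    s = sigma2_T1
    for _ in range(k - 1):
        s = omega + (alpha + beta) * s
    return s

def forecast_closed_form(k):
    """The closed-form expression quoted above (valid for k >= 1)."""
    ab = alpha + beta
    return omega * sum(ab**i for i in range(k - 1)) + ab**(k - 1) * sigma2_T1

for k in (1, 2, 5, 20):
    print(k, forecast_recursive(k), forecast_closed_form(k))
```

As $k$ grows, both expressions converge to the unconditional variance $\hat{\omega}/(1-\hat{\alpha}-\hat{\beta})$.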
Consider the ground state of the hydrogen atom. Estimate the correction [tex]\frac{\Delta E}{E_{1s}}[/tex] caused by the finite size of the nucleus. Assume that the nucleus is a uniformly charged shell with radius b and that the potential inside is given by [tex]\frac{-e^2}{4\pi \epsilon b}[/tex]
Calculate the first order energy correction to the ground state and expand in [tex]\frac{b}{a_0}[/tex]. Keep the leading term and evaluate [tex]\frac{\Delta E}{E_{1s}}[/tex] for b = 10^-15 m.
Ok, I need help in constructing the interaction W (or H'). Once I get that, I would then calculate its expectation value by sandwiching it between [tex]\psi_{1s}[/tex] states. Is this correct, and how would I construct the interaction?
Here is what I have so far
H0 = (p^2)/2m - e^2/r and H = H0 for r > b
H = (p^2)/2m - e^2/(4 pi epsilon b) = H0 + H' for r < b
Then I would solve for H' and use the perturbation equation. Is this correct ?
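For what it's worth, if H' is taken to be the difference between the shell potential and the point-charge potential for r < b, then first-order perturbation theory with psi_1s(r) approximated by psi_1s(0) gives, to leading order in b/a_0, Delta E/|E_1s| = (4/3)(b/a_0)^2. That is my own derivation, so it is worth double-checking; a numeric sketch:

```python
a0 = 5.29177e-11   # Bohr radius in metres
b = 1e-15          # nuclear radius from the problem, metres

# leading-order estimate (own derivation, to be double-checked):
# <H'> = (2/3)*(e^2/(4*pi*eps0))*b^2/a0^3 and |E_1s| = (e^2/(4*pi*eps0))/(2*a0),
# so the ratio works out to (4/3)*(b/a0)^2
ratio = (4.0 / 3.0) * (b / a0) ** 2

print(f"Delta E / |E_1s| ~ {ratio:.2e}")
```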
James
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity, $|y| < 0.8$, ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
The Closure of a Set in a Topological Space Examples 2
Recall from The Closure of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the closure of $A$ is the smallest closed set containing $A$.
We will now look at some more examples of the closure of a set.
Example 1 Consider the topological space $(\mathbb{N}, \tau)$ where $\tau$ is the cofinite topology. Find the closure of $A = \{ 1, 2, 3 \}$ and the closure of $E = \{ 2n : n \in \mathbb{N} \} = \{2, 4, 6, ... \}$.
Recall that the cofinite topology is described to be:(1)
$$\tau = \{ U \subseteq \mathbb{N} : U = \emptyset \text{ or } \mathbb{N} \setminus U \text{ is finite} \}$$
Therefore all nonempty open sets must be infinite (although not every infinite set is open in this topology). Furthermore, these open sets have finite complements, so the closed sets are precisely the finite sets (together with the whole set $\mathbb{N}$ itself).
Hence $A = \{ 1, 2, 3 \}$ is a closed set. To show this, look at $A^c = \{ 1, 2, 3 \}^c = \mathbb{N} \setminus \{ 1, 2, 3 \}$. Now $(A^c)^c = \{ 1, 2, 3 \}$ is finite, so $A^c \in \tau$, i.e., $A^c$ is open and hence $A$ is closed. Therefore the smallest closed set containing $\{1, 2, 3 \}$ is itself, i.e.:(2)
$$\bar{A} = \{ 1, 2, 3 \}$$
For the same reasoning, if $A$ is a finite collection of natural numbers, $A = \{ n_1, n_2, ..., n_p \}$ with $n_1, n_2, ..., n_p \in \mathbb{N}$ then $\bar{A} = \{ n_1, n_2, ..., n_p \}$.
Now the set of even natural numbers $E = \{ 2n : n \in \mathbb{N} \}$ is an infinite set - but all closed sets apart from the whole set are finite. Therefore the smallest closed set containing $E$ is the whole set $\mathbb{N}$, so:(3)
$$\bar{E} = \mathbb{N}$$
For the same reasoning, if $B$ is an infinite collection of natural numbers then $\bar{B} = \mathbb{N}$!
Example 2 Consider the topological space $(\mathbb{R}^3, \tau)$ where $\tau$ is the usual topology of open balls in $\mathbb{R}^3$. What is the closure of an open half cone, $C \subseteq \mathbb{R}^3$?
It should not be too surprising that the closure of $C$ is the corresponding closed half cone, i.e., the open half cone together with its boundary:(4)
$$\bar{C} = C \cup \partial C$$
I’ve already examined the classic Sleeping Beauty Problem and pointed out some of the pitfalls that many people fail to avoid when trying to solve the problem. I also examined Nick Bostrom’s so-called “Extreme Beauty” modification to the problem, in which Beauty wakes many, many times if the coin toss comes up tails. However, there is another “extreme” variant of this problem, the variant in which the coin toss is replaced with another two-result random process that has extremely uneven odds. That is, in this “extreme” problem, one of the possible results is extremely unlikely. Examining this variant with the methods of reasoning commonly used by the “thirders” can be enlightening and can provide some illustration of why they are wrong.
Since many “thirders” seem to be fond of relying on betting analogies to reason through the problem and explain their arguments, a useful substitute for the coin toss is a lottery. A typical lottery provides a very small chance of winning accompanied by a very large payoff (which is why lotteries are so popular). So here we shall examine what happens when Sleeping Beauty plays the lottery.
The Problem
The events of the original Sleeping Beauty Problem occur as described before. That is, Beauty is put to sleep on Sunday and woken on Monday. She is then put to sleep with her memory erased and woken again on Tuesday
only if a certain random event happens.
The difference between the two problems is that the coin toss in the original problem is replaced with a lottery. Before she goes to sleep on Sunday night, Beauty chooses a “lucky” set of lottery numbers. (Note: Each set of numbers constitutes one lottery ticket and one chance to win the lottery.) Then, sometime after she is asleep, the lottery’s winning numbers are drawn at random. If the numbers that Beauty chose match the winning numbers, she will be woken on Monday, have her memory erased, and woken again on Tuesday. If her numbers do not match the winning numbers, she will be woken only on Monday.
As in the original problem, the circumstances of her wakings are identical, so she cannot tell what day it is or the result of the lottery. The new question is the following: What should Beauty think about her probability of winning the lottery upon awakening?
Note that winning the lottery in this problem corresponds with the coin toss coming up tails in the original problem, and losing the lottery corresponds to the toss coming up heads.
To quantify the problem, assume that there are \(n\) possible sets of lottery numbers and each set has an equal chance of being drawn. Therefore, the probability of winning the lottery is 1 in \(n\). Typically, \(n\) is a large number, e.g., one million, making the chance of winning one in a million.
The following notation is used below to indicate the conditions of the experiment:\[
\begin{aligned} W &{}= \text{Beauty picked the winning set of numbers}\\ L &{}= \text{Beauty did not pick the winning set of numbers}\\ D_1 &{}= \text{It is the first day (Monday)}\\ D_2 &{}= \text{It is the second day (Tuesday)} \end{aligned} \]

The “Thirders”
With the problem defined, it is useful to consider the typical approaches that are used by the “thirders” to tackle a problem such as this. To this purpose, I have divided the majority of the thirders into three categories:
The Waking Thirders – This group fixates on the datum that Sleeping Beauty wakes up
The Vegas Thirders – This group evaluates probabilities by turning them into some sort of gambling proposition
The Monday-Morning Thirders – This group fixates on what happens on Monday morning
Note: The fact that I have divided the “thirders” into three categories is not intended to imply that they are evenly divided into these categories. In fact, many “thirders” use, or at least explore, more than one of these approaches to this problem, so the process of categorization is somewhat nuanced.
The Waking Thirders’ Solution
Little changes in the analysis provided by the “waking thirders.” Once again, they observe three states that Beauty can find herself when she wakes:
(1) Beauty lost the lottery and it’s Monday
(2) Beauty won the lottery and it’s Monday
(3) Beauty won the lottery and it’s Tuesday
Since it is impossible for her to distinguish between the three, they are all assumed to be equally likely:\[
P(L \cap D_1) = P(W \cap D_1) = P(W \cap D_2) \]
Because these are the only possibilities, their probabilities must sum to one. Therefore, each state has a probability of 1/3, and since two of the three states are wins for Beauty, they conclude that Beauty should believe upon waking that she has a 2/3 chance of having won the lottery.

The Vegas Thirders’ Solution
For the “Vegas” crowd to be able to analyze this situation, additional assumptions are necessary, because they are interested in the payoff. Their analysis requires that we know how much she won to determine how likely she was to win.
Therefore, let’s stipulate that the winner is paid according to the odds of winning. That is, the winner of a lottery with a one-in-a-million chance of winning is paid $1 million on a $1 lottery ticket. (The winnings will be taxed, of course, which is how the state will pull in its take, but that’s not relevant to this analysis.) Furthermore, we must assume that the possibility of two or more people selecting the same set of numbers is either so unlikely as to be negligible or that it is simply not allowed by the rules.
Beauty chooses her “lucky” numbers on Sunday night. Each time she wakes, she is given an opportunity to purchase a lottery ticket. To avoid the possibility of having to split the winnings, we can posit that the lotteries on Monday and Tuesday are separate lotteries, which use the same numbers that were drawn on Sunday. Therefore, it is possible for Beauty to win the entire lottery prize twice.
Note that if \(n = 2\), this problem reduces to the problem of the flipped coin, with Beauty betting on tails. Therefore, this lottery problem can be considered to be a generalization of the original Sleeping Beauty Problem, and the reasoning considered here is a generalization of the reasoning used by “thirders” for the original problem.
To make this problem concrete, a “Vegas thirder” would suggest something like a $1 lottery ticket for a $1 million prize with a 1-in-a-million chance of picking the winning number. Then, the “thirder” would consider the situation in which the “experiment” (lottery drawing) is repeated many, many times—say, one million times so that we can expect that Beauty picks the winning number once.
With the conditions in place, the totals can be compiled. If Beauty buys a lottery ticket every time she wakes, then she will have spent $1,000,001 in lottery tickets (because she would have bought an additional ticket on Tuesday the time that she picked the winning number), and she will have won $2,000,000. The “thirder” then calculates the odds of winning the lottery as\[
\begin{aligned} \text{odds} &{}= (\text{amount won}):(\text{amount lost})\\ &{}= 2,000,000:999,999 \approx 2:1 \end{aligned} \]
This is then interpreted as 2:1 odds of picking the winning number, or a probability of \(P(W) = 2/3\).

The Monday-Morning Thirders’ Solution
The answer given by the “Monday-morning thirders” is different from the answer given by the other two groups, because this group actually considers the probabilities of the random number generator (i.e., the coin toss in the original problem) in their arguments. They reason that, since Beauty must wake on Monday regardless of the result of the lottery, her chances of having the winning number on Monday are\[
\begin{aligned} P(W|D_1) &{} = \frac{1}{n}\\ P(L|D_1) &{} = \frac{n-1}{n} \end{aligned} \]
They note that the probability of Beauty picking the wrong number is\[ P(L) = P(L|D_1)\cdot P(D_1) + P(L|D_2)\cdot P(D_2) \]
Since Beauty will not be woken on Tuesday (\(D_2\)) if she did not pick the winning number, \(P(L|D_2) = 0\). Therefore,\[ P(L) = P(L|D_1)\cdot P(D_1) = \frac{n-1}{n} P(D_1) \]
So the answer depends on the probability of the day being Monday. Since there are three indiscernible states in which Beauty can wake and two of these states occur on Monday, the “thirder” concludes that \(P(D_1) = 2/3\). Thus, the probabilities associated with Beauty picking the winning number are\[ \begin{aligned} P(L) &{} = \frac{2n - 2}{3n}\\ P(W) &{} = 1 - P(L) = \frac{n + 2}{3n} \end{aligned} \]
In the limit that \(n\) becomes large,\[
\begin{aligned} P(L) &{} \sim 2/3\\ P(W) &{} \sim 1/3 \end{aligned} \]
This is closer to the correct answer than the other two arguments, but it is still incorrect.

The Right Answer
The correct set of probabilities is given below.\[
\begin{aligned} P(W) &{}= \frac{1}{n}\\ P(L) &{}= \frac{n - 1}{n}\\ P(D_1|L) &{}= 1\\ P(D_2|L) &{}= 0\\ P(D_1|W) &{}= 1/2\\ P(D_2|W) &{}= 1/2\\ P(W \cap D_1) &{}= P(W) \cdot P(D_1|W) = \frac{1}{2n}\\ P(W \cap D_2) &{}= P(W) \cdot P(D_2|W) = \frac{1}{2n}\\ P(L \cap D_1) &{}= P(L) \cdot P(D_1|L) = \frac{n-1}{n}\\ P(L \cap D_2) &{}= P(L) \cdot P(D_2|L) = 0\\ P(D_1) &{}= P(W) \cdot P(D_1|W) + P(L) \cdot P(D_1|L) = \frac{2n-1}{2n}\\ P(D_2) &{}= P(W) \cdot P(D_2|W) + P(L) \cdot P(D_2|L) = \frac{1}{2n}\\ P(W|D_1) &{}= \frac{P(D_1|W) \cdot P(W)}{P(D_1)} = \frac{1}{2n - 1}\\ P(L|D_1) &{}= \frac{P(D_1|L) \cdot P(L)}{P(D_1)} = \frac{2n - 2}{2n - 1}\\ P(W|D_2) &{}= \frac{P(D_2|W) \cdot P(W)}{P(D_2)} = 1\\ P(L|D_2) &{}= \frac{P(D_2|L) \cdot P(L)}{P(D_2)} = 0 \end{aligned} \]
These results are consistent with the original problem, which corresponds to \(n = 2\), \(W = T\), and \(L = H\).

Conclusion
By changing the probabilities of the outcomes of the event that determines the number of times that Beauty wakes, the flaws in the reasoning commonly used by the “thirders” become obvious.
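The probability table above can also be checked mechanically; here is a short Python sketch that recomputes the key entries exactly for any \(n\):

```python
from fractions import Fraction

def beauty_table(n):
    """Exact probabilities for the lottery variant with n equally likely number sets."""
    P_W = Fraction(1, n)
    P_L = 1 - P_W
    P_D1_W, P_D2_W = Fraction(1, 2), Fraction(1, 2)   # if she won, Mon/Tue equally likely
    P_D1_L, P_D2_L = Fraction(1), Fraction(0)         # if she lost, she wakes only on Monday
    P_D1 = P_W * P_D1_W + P_L * P_D1_L
    P_D2 = P_W * P_D2_W + P_L * P_D2_L
    P_W_D1 = P_D1_W * P_W / P_D1                      # Bayes' theorem
    P_W_D2 = P_D2_W * P_W / P_D2
    return P_D1, P_D2, P_W_D1, P_W_D2

n = 1_000_000
P_D1, P_D2, P_W_D1, P_W_D2 = beauty_table(n)
assert P_D1 + P_D2 == 1
assert P_W_D1 == Fraction(1, 2 * n - 1)       # matches the P(W|D_1) entry
assert P_W_D2 == 1                            # matches the P(W|D_2) entry
assert beauty_table(2)[2] == Fraction(1, 3)   # n = 2 recovers the original problem
print("all identities hold")
```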
My advice to the “thirders” is the following: Try to find some drug that produces limited memory loss, then go buy a lottery ticket—the larger the payout the better. With the help of a friend, you can go to sleep on the night of the drawing confident that when you awake, you’ll have between a 33% and a 67% chance (depending on your reasoning) of being a lottery winner and becoming a new multimillionaire. Good luck! |
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
The major challenge is how to restrict the possible relaxation pathways so that, in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer.
The main likely reasons why barter is not more common are:

The inconvenience of having to find another party who both offers what you want and wants what you offer.

Even if such a party can be found, the possible complexity of negotiating a "fair" transaction (eg I'll do your electrical job if you'll clean my windows monthly for the next 3 months).

I don'...
In the countries that I am familiar with (such as Canada), using barter to avoid taxes is definitely illegal. You are required to report the dollar value of the exchange as revenue. It is treated as an implicit trade of cash along with the trade of goods. Since I am not going to give tax advice to random strangers on the internet, please consult the tax laws ...
One reason is the inflationary gain problem. Let me give an example with simple numbers. I make \$100 in income and pay 20% tax of \$20. I have \$80 left, which I invest in a stock. The stock goes up in value at the same rate as inflation, about 3.5% a year. After 20 years, it's worth about \$160, but \$160 has the same value now as \$80 did when I ...
There is quite a bit of work being done in that area. One very recent example is Straub and Werning's working paper "Positive Long Run Capital Taxation: Chamley-Judd Revisited." The point seems to be that we need to consider the rate of convergence to the steady state. Also, there is other literature that gives some competing results (e.g., "the new ...
Yes, sugar tax! This is probably as controversial as tobacco tax was back in the day. If you walk through a supermarket, you will find that half of the food section is food full of sugar. Sugar is what makes you fat, not fat itself. It has been known for a while, at least since professional sports were invented. Yet the lobby of the enormous sugar ...
I'm not an expert on this, but as a Singaporean, here are some factors off the top of my head (plus some Googling), explaining why Singapore is different from other first-world countries (in terms of revenue and expenditures). In 2014, the personal income tax contributed only about 14.6% of government revenues (Source). In contrast, in Canada for example, ...
The first and most important insight here is to recognize that the burden of taxation (the incidence) need not fall on the entity taxed. Economist Herb Stein is famous for having quipped (on the subject of the corporate income tax): "...that it’s as if people think that if the government imposed a tax on cows, the tax would be paid by the cows." Are ...
"Why I don't hear nobody speaking about such idea?"Because historical experience says it won't work.By printing money instead of collecting taxes, what increases is the nominal disposable income. The "value of work" is certainly not increased. And the important question is, does this increased nominal income lead to higher consumption?Consider a ...
It depends on the definition you want to use. You can define a regressive tax as the wiki says, in which case a sales tax would not match. But the goal of defining different classes of taxation is to assess the impact on the person being taxed. Without considering income the above definition becomes moot. Sure the tax is proportional (fixed share of tax on ...
The burden of taxation is shared among suppliers and demanders according to the price elasticities of supply and demand. The more elastic side carries less of the tax burden.To understand this, note that the tax effectively increases the price demanders pay and decreases the price suppliers get. Elasticity tells us how demanders and suppliers react to this ...
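The elasticity split described above can be illustrated with linear demand and supply curves. This is my own sketch, not part of the answer; the coefficients are made up, and with linear curves the slopes play the role that point elasticities play in the general statement.

```python
# Linear demand Q = a - b*P and linear supply Q = c + d*(P - t), where t is a
# per-unit tax collected from suppliers. The burden splits d/(b+d) to consumers
# and b/(b+d) to suppliers: the side with the smaller slope (less responsive,
# i.e. less elastic here) absorbs the larger share.

def equilibrium_price(a, b, c, d, t):
    """Consumer price P* solving a - b*P = c + d*(P - t)."""
    return (a - c + d * t) / (b + d)

a, b, c, d = 100.0, 2.0, 10.0, 3.0   # made-up coefficients
t = 5.0                              # per-unit tax

p0 = equilibrium_price(a, b, c, d, 0.0)    # price without the tax
p1 = equilibrium_price(a, b, c, d, t)      # consumer price with the tax

consumer_burden = p1 - p0                  # rise in the price consumers pay
supplier_burden = t - consumer_burden      # fall in the net price suppliers keep

print(consumer_burden, supplier_burden)    # splits as t*d/(b+d) and t*b/(b+d)
```

With these numbers consumers bear 3 of the 5 units of tax and suppliers 2, matching the $d/(b+d)$ and $b/(b+d)$ shares.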
(Note that this answer implicitly makes reference to the specific model in Lee and Saez.)Short answer: the increased taxes on high-skilled workers exactly offset the higher real wages they obtain from a decline in the minimum wage for low-skilled workers.Longer answer: Suppose that I'm the government, and I decide to lower the minimum wage $\bar{w}$. The ...
Ljungqvist and Sargent (2004), Recursive Macroeconomic Theory, 2nd ed. (ch. 15) present and review the issue. In the Concluding Remarks section, they mention two environments where the "zero-optimal-capital-tax rate" result does not hold: Aiyagari (1995) presents a model with heterogeneous agents, incomplete insurance markets and borrowing constraints (i.e. a "...
Excellent question. You can understand this by reading the original study referred in the article. I will try to provide a short explanation here.Basically the study argues that if you take into account all sorts of taxes like taxes on properties, consumption, dividends, etc. the European countries tax their rich residents relatively less than the poor ...
Pierre Dubois, Rachel Griffith, and Aviv Nevo have a nice and well-executed AER paper where they argue that differences in obesity rates across countries can be due to differences in food consumption patterns. For instance, obesity rates are the highest in the United States at 30.0% (as you mentioned), compared to 14.5% in France and 23.6% in the United ...
In any discussion of obesity-related policy, it helps to call out a couple of assumptions: Assumption: A healthy lifestyle will reduce a person's weight. Everyone and their brother "knows" this to be true, but the science behind it is sketchy, at best. As several others have pointed out, obesity is not well-understood, medically, and it is entirely ...
Cutting corporate taxes is one of the tools of the Trickle-Down Economics (TDE) school of managing economic growth. In a New York Times article in 2012, economist Robert Frank of the Johnson School of Management at Cornell University summarized the findings of years of research about the effectiveness of applications of this theory.In his article, he ...
It's called seigniorage. This actually goes on to some degree in every country. I don't think any country has ever relied solely on this to cover all its expenses. (Those that come close usually suffer from hyperinflation.)Briefly googling I found this old graph (source). With more work you can probably find more up-to-date figures.Note that although the ...
There's a fair amount to unpack in the question, so it might be useful to take it step by step, and consider everything from a more abstract, economic theory perspective....those who are more hardworking (or, at least, those who are more skilled) are essentially punished....We should be careful making statements like this for a couple of reasons. First,...
The Laffer Curve is a theoretical argument that feels strongly true the moment one hears about it. The complexities around it once one wants to have a closer look are many (just for a taste the wikipedia article can be consulted).Even in the simplest econometric study, one has to make sure that what one does is consistent with the theory one wants to ...
The combination of the two policies could work and would have some advantages over either alone, and for a given reduction in emissions the overall cost burden on industry and consumers would not be doubled. However, the cap on emissions would weaken the effect of gradual escalation of the tax rate.The model represented in the following diagram is a ...
Here is an answer based on the following interpretation of the SWF : there is no "true" SWF, but SWFs are observable in principle, they simply represent the preferences of the policy decision maker.Under this interpretation, the policy recommendation are relevant despite depending on the specification of the SWF, precisely because different decision makers ...
The reason is that at the same time the wage of the high-skilled increases.By reducing the minimum wage, the number of people working in the low-skilled sector increases (involuntary unemployment is reduced) which leads to an increase in the wage of the high-skilled.The corresponding proposition in the paper is Proposition 3, the argument of which is...
A property tax: In the short run, a property tax will have no effect on rent, as the supply of homes is fixed (i.e., supply is totally inelastic in the short run), and demand is relatively inelastic but not completely so, so the incidence will fall on property owners. In the long run, a property tax will increase rental costs because the supply of new units (i....
From an equality of opportunity perspective, it is desirable to have some taxes on inheritance. This levels the playing field for each generation to some extent. If real estates are exempt from such a tax, individuals would prefer real estate over other assets, creating a distortion in the asset price market.For a recent contribution to optimal inheritance ...
It may seem unfair that someone who has already met their tax obligations gets hit with another one after they die. From an economic standpoint, it is very popular.I'll let this economist article explain it...The gut dislike of death duties seems to arise because the tax clashes with heartfelt dynastic instinctsAny tax on capital will tend to ...
The current answers correctly point out that financing the government via the printing press would generate inflation. Since inflation is bad, this would be a bad policy.However, these answers miss out on several advantages of an inflation tax. Firstly, there would be substantial productivity gains since the entire government revenue system, tax advisors, ...
I think you're confusing quantity demanded, and the demand curve.The demand curve tells you how much quantity should be demanded at each price point. Therefore changes in price should change the quantity demanded, not shift the curve. You want to think of this in terms of demand and supply shocks.For example:Imagine it was discovered that bananas gave ...
Because they are high in comparison. Take a look at the data from the OECD here, for 2015. I have plotted them below:(The red is the personal income tax set by law, and the blue is the one adding tax credits, as far as I understand).As you can see, rates in Canada are quite high in comparison with other OECD countries like Estonia or Poland, and also ...
First let's look at the specific tax. The profit is$$\pi_s=[P(q)-\tau_s]q-C(q).$$Differentiating to establish the first-order condition:$$P'(q)q+P(q)-\tau_s-C'(q)=0.$$If we write $A=\tau_s q$ for the tax revenue then we can rewrite the FOC thus:$$P'(q)q+P(q)-C'(q)=\frac{A}{q}.$$Now for ad valorem:$$\pi_a=(1-\tau_a)P(q)q-C(q)$$First-order condition:$...
I think you've misunderstood how the (UK) income tax brackets work. They work differently to how stamp duty used to work. Stamp duty brackets used to be absolute rates: so when the purchase price crossed the bracket threshold by £1, the marginal rate would be huge, as the whole purchase price was taxed at the higher rate. Hence the clustering. That's ...
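The slab-versus-marginal distinction described above can be made concrete with a toy calculation. The rates and threshold below are invented for illustration, not the actual UK schedules.

```python
# Hypothetical two-rate schedule: 1% up to a threshold, 3% above it.
# The rates and the 250,000 threshold are made-up illustration values.

def slab_tax(price, threshold=250_000, low=0.01, high=0.03):
    """Old-style 'slab' duty: crossing the threshold re-taxes the WHOLE price."""
    rate = high if price > threshold else low
    return price * rate

def marginal_tax(price, threshold=250_000, low=0.01, high=0.03):
    """Income-tax-style brackets: only the part above the threshold pays more."""
    below = min(price, threshold)
    above = max(price - threshold, 0)
    return below * low + above * high

# Going one pound over the threshold under the slab system jumps the bill by
# roughly price * (high - low); under marginal brackets it costs pennies.
jump_slab = slab_tax(250_001) - slab_tax(250_000)
jump_marginal = marginal_tax(250_001) - marginal_tax(250_000)
print(jump_slab, jump_marginal)
```

The huge discontinuity in `jump_slab` is exactly why prices clustered just below the old stamp duty thresholds, while income tax brackets produce no such clustering.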
Here's a graph of the line $y = 2x + 3$ along with some rays outward from the origin. Each ray is labeled with the angle of the ray in polar coordinates.
The rays labeled $\theta_1$ and $\theta_2$ lie along the line through the origin parallel to the line $y = 2x + 3$. This line has slope $2$, and the angle of the ray $\theta_1$ is therefore $\theta_1 = \arctan(2).$
The ray $\theta_2$ is in the exact opposite direction from $\theta_1.$ To rotate the ray $\theta_1$ counterclockwise onto $\theta_2,$ we have to rotate it $180$ degrees, which is $\pi$ radians. Therefore $\theta_2 = \theta_1 + \pi = \arctan(2) + \pi.$
As we rotate a ray so that its angle increases from $\theta_1$ to $\theta_2,$ at each angle (for example $\theta_3,$ $\theta_4,$ and $\theta_5$), the ray intersects the line, and we can use that angle $\theta$ and the radius $r = f(\theta) = 3 / (\sin \theta - 2 \cos \theta)$ to plot one of the points on the line. As the ray sweeps through all the angles between $\theta_1$ and $\theta_2,$ the polar coordinates $(f(\theta), \theta)$ plot every point on the line.
We want all of those angles to be in the domain of $f$ so that we can plot the whole line. If you stopped at some angle before $\arctan(2) + \pi,$ for example $\theta_4,$ you would only have plotted part of the line. All the parts of the line at angles between $\theta_4$ and $\arctan(2) + \pi$ would be missing.
Note that $\arctan(2) + \pi$ itself is actually not "allowed," because the upper bound of the domain is described by $\theta < \arctan(2) + \pi$; notice that it is $<$ rather than $\leq,$ which tells us that the angle $\theta$ can never exactly equal $\arctan(2) + \pi.$ We cannot include either $\arctan(2)$ or $\arctan(2) + \pi$ in the domain, because the rays at those angles are parallel to the line and never intersect it, so there is no value we can set $r = f(\theta)$ to in order for $(f(\theta), \theta)$ to be polar coordinates of a point on the line.
S follows a process $dS= mSdt + oSdz$ where $m$ and $o$ are constants.
What is the process followed by $ Y=(Se)^{(r-t)} $?
If S follows a process $ dS= k (b-S) dt + oSdz $ where k, b, o are constant.
What’s the process followed by $Y =S^2$ ?
Not sure I fully understand your question. However, I'd suggest using the Ito's lemma (second equation on wikipedia page https://en.wikipedia.org/wiki/It%C3%B4%27s_lemma) to solve for dY. In both cases, dY will have both a drift term and a stochastic term. The coefficient of the stochastic term will indicate what sort of probability process Y follows.
e.g., in the first case, you'll get something like dY = [ (m-1)Y ]dt + [ rY ]dW, implying log-normal distribution for Y.
The first part has already been answered by @Uditg_ucla, so I am only providing an answer to the second part.
Rewriting your SDE in standard notation: $$dS=k(b-S)dt+\sigma S dz$$ You want the SDE for $S^2$. Using a Taylor series, it can be written as: $$df(S)=f'(S)dS + \frac{1}{2!}f''(S)(dS)^2+\cdots$$ $$df(S)=2SdS+(dS)^2$$ $$df(S)=2S[k(b-S)dt+\sigma S dz]+\sigma^2 S^2 dt$$ $$df(S)=\bigg(2Sk(b-S)+\sigma^2S^2\bigg)dt+2\sigma S^2dz$$ Since $Y=S^2$, replacing $S^2$ by $Y$ (so $S=\sqrt{Y}$): $$dY=\bigg(2k(b\sqrt{Y}-Y)+\sigma^2Y\bigg)dt+2\sigma Y dz$$ which is the desired SDE.
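As a quick sanity check of the substitution (my addition, just arithmetic with arbitrary constants): the drift and diffusion written in terms of $Y$ must agree with the expressions in terms of $S$ whenever $Y = S^2$ and $S > 0$.

```python
# Verify that 2k(b*sqrt(Y) - Y) + sigma^2 Y equals 2Sk(b - S) + sigma^2 S^2,
# and that 2*sigma*Y equals 2*sigma*S^2, when Y = S^2. The constants are
# arbitrary positive values chosen only for the check.
import math

k, b, sigma = 0.8, 1.5, 0.3

def drift_from_S(S):
    return 2 * S * k * (b - S) + sigma**2 * S**2

def diff_from_S(S):
    return 2 * sigma * S**2

def drift_from_Y(Y):
    return 2 * k * (b * math.sqrt(Y) - Y) + sigma**2 * Y

def diff_from_Y(Y):
    return 2 * sigma * Y

for S in [0.1, 0.7, 2.0, 5.0]:
    Y = S * S
    assert math.isclose(drift_from_S(S), drift_from_Y(Y))
    assert math.isclose(diff_from_S(S), diff_from_Y(Y))
```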
The Closure of Sets in Finite Topological Products
Recall from the The Interior of Sets in Finite Topological Products page that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces and $A_i \subseteq X_i$ for all $i \in \{1, 2, ..., n \}$ then the interior of the product of these sets is equal to the product of the interiors of these sets.
We will now look at a similar result for closures of sets.
In the following theorem we will show that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces and $A_i \subseteq X_i$ for all $i \in \{ 1, 2, ..., n \}$ then the closure of the product of these sets is equal to the product of the closures of these sets.
Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a collection of topological spaces and let $A_i \subseteq X_i$ for all $i \in \{1, 2, ..., n \}$. Then $\displaystyle{\overline{\left ( \prod_{i=1}^{n} A_i \right)} = \prod_{i=1}^{n} \overline{A_i}}$. Proof: $\Rightarrow$ Suppose that $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \overline{\prod_{i=1}^{n} A_i}}$. For each $i \in \{1, 2, ..., n \}$, let $U_i$ be an open neighbourhood of $x_i$. Then $\displaystyle{U = \prod_{i=1}^{n} U_i}$ is an open neighbourhood of $\mathbf{x}$, and so we have that:
$$\left ( \prod_{i=1}^{n} A_i \right ) \cap U \neq \emptyset$$
From above, this implies that $A_i \cap U_i \neq \emptyset$ for all $i \in \{1, 2, ..., n \}$. Therefore $x_i \in \overline{A_i}$ for all $i \in \{1, 2, ..., n \}$, i.e., $\displaystyle{\mathbf{x} \in \prod_{i=1}^{n} \overline{A_i}}$. Hence:
$$\overline{\left ( \prod_{i=1}^{n} A_i \right )} \subseteq \prod_{i=1}^{n} \overline{A_i} \quad (*)$$
$\Leftarrow$ Suppose that $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \prod_{i=1}^{n} \overline{A_i}}$. Then $x_i \in \overline{A_i}$ for all $i \in \{ 1, 2, ..., n \}$. Let $\displaystyle{U = \prod_{i=1}^{n} U_i}$ be an open neighbourhood of $\mathbf{x}$ in $\displaystyle{\prod_{i=1}^{n} X_i}$. Then $U_i$ is an open neighbourhood of $x_i$ for all $i \in \{1, 2, ..., n\}$, and hence:
$$A_i \cap U_i \neq \emptyset \quad \text{for all } i \in \{1, 2, ..., n\}$$
Therefore:
$$\left ( \prod_{i=1}^{n} A_i \right ) \cap U \neq \emptyset$$
This shows that $\displaystyle{\mathbf{x} \in \overline{\prod_{i=1}^{n} A_i}}$ and so:
$$\prod_{i=1}^{n} \overline{A_i} \subseteq \overline{\left ( \prod_{i=1}^{n} A_i \right )} \quad (**)$$
From the inclusions in $(*)$ and $(**)$ we conclude that:
$$\overline{\left ( \prod_{i=1}^{n} A_i \right )} = \prod_{i=1}^{n} \overline{A_i} \quad \blacksquare$$
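As a concrete illustration of the theorem (a standard example, not part of the original page), take both factors to be the open unit interval in $\mathbb{R}$:

```latex
% Example in \mathbb{R}^2 with the product topology, taking
% A_1 = A_2 = (0,1) \subseteq \mathbb{R}:
\[
\overline{(0,1) \times (0,1)}
  \;=\; \overline{(0,1)} \times \overline{(0,1)}
  \;=\; [0,1] \times [0,1],
\]
% the closed unit square, exactly as the theorem predicts.
```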
Answer a is not possible since it reduces to
$m = \frac{1}{2} 2m + \frac{3}{2}\frac{2m}{3} = 2m$
which makes no sense. The same applies to answer c, since $m\neq\frac{1}{4}m$.
That leaves answers b and d.
For answer b, we have
$u(2m,0)=8m$
and for answer d,
$u(0,\frac{2m}{3}) = \frac{28}{3}m = (9+\frac{1}{3})m$
As you can see, $ (9+\frac{1}{3}) > 8$.
In the general case, to find the maximum of your utility function given a monetary constraint, you can set up and maximize the following Lagrangian function
$L(x_1,x_2,\lambda)=u(x_1,x_2)+\lambda(m-p_1 x_1 - p_2 x_2)$
where $p_1$ and $p_2$ are prices. In your case, those are $\frac{1}{2}$ and $\frac{3}{2}$ respectively.
Continued

As usual, the story behind the equations is of first importance.
Remember that $x_1$ and $x_2$ are two perfect substitutes. This means that the individual will spend all her income either in $x_1$ or in $x_2$, and will chose the good which provides her with the highest utility. And this is actually the story that the use of the Lagrangian function tells you.
As you mentioned in your comment below, you get first-order conditions which seem to be contradictory. But they are not. Indeed, the two goods will never be bought simultaneously, which means that you will get either $\lambda = 8$ or $\lambda = \frac{28}{3}$, in a mutually exclusive manner.
Recall what $\lambda$ is: it is a shadow price. In other words, it expresses how much your objective function, $u(x_1,x_2)$, will increase if your constraint, $m$, increases by $1$.
Thus, if $m$ increases by $1$ and the individual spends all her income on $x_1$, $u$ will increase by $8$. If she spends all her income on $x_2$, $u$ will increase by $\frac{28}{3}$.
Which good will the individual choose?
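The corner-solution logic can be sketched numerically. Note an assumption: the linear utility $u(x_1,x_2) = 4x_1 + 14x_2$ below is inferred from the values quoted above ($u(2m,0)=8m$ and $u(0,\frac{2m}{3})=\frac{28}{3}m$ with prices $\frac12$ and $\frac32$); it is not stated explicitly in the question.

```python
# Perfect substitutes: the consumer spends all income at one corner.
# ASSUMED utility u(x1, x2) = 4*x1 + 14*x2, reconstructed from the quoted
# values u(2m, 0) = 8m and u(0, 2m/3) = 28m/3, with prices p1 = 1/2, p2 = 3/2.

p1, p2 = 0.5, 1.5

def u(x1, x2):
    return 4 * x1 + 14 * x2

def best_corner(m):
    """Compare the two all-in-one-good bundles and keep the better one."""
    corner1 = (m / p1, 0.0)          # spend everything on x1
    corner2 = (0.0, m / p2)          # spend everything on x2
    return max([corner1, corner2], key=lambda c: u(*c))

m = 3.0
choice = best_corner(m)
# Utility per unit of income: 4/p1 = 8 for x1 versus 14/p2 = 28/3 for x2,
# so the consumer picks x2 (the shadow price is lambda = 28/3 > 8).
print(choice, u(*choice))
```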
Your notation is OK up to and including step 2, except for the range of summation. You need$$|s\rangle=\frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^n-1}\sum_{y=0}^{2^m-1}|x\rangle|y\rangle$$Now the problem is how to write down the effect of the oracle, and you cannot just write down the output qubit. I think you probably know this from the title of the question, because entanglement will appear that you're not describing. So, you have an oracle that acts as$$|x\rangle|y\rangle|0\rangle\xrightarrow{\text{oracle}}|x\rangle|y\rangle|F(x,y)\rangle.$$Hence, if the input is some superposition state such as $|s\rangle$, we have$$|s\rangle|0\rangle\xrightarrow{\text{oracle}}|\Psi\rangle=\frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^n-1}\sum_{y=0}^{2^m-1}|x\rangle|y\rangle|F(x,y)\rangle.$$You absolutely cannot describe (except in very special cases of $F$) the last qubit in the form $\alpha|0\rangle+\beta|1\rangle$ because it is entangled with the other registers.
The question seems to be evolving into
If I'm not measuring $x$ or $y$, why can't the state of the extra qubit be written in the form $\alpha|0\rangle+\beta|1\rangle$?
There are several ways that this might be answered. Normally, I'd take the partial trace and calculate the reduced density matrix, but I infer from comments that the OP doesn't know this technique. Thus, let us try another route.
Let us assume that the extra qubit can be written in the form $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. This means that we could define a measurement$$P_\psi=|\psi\rangle\langle\psi|\qquad P_{\perp}=|\psi^\perp\rangle\langle\psi^\perp|$$where $|\psi^\perp\rangle=\beta^{\star}|0\rangle-\alpha^\star|1\rangle$ is orthogonal to $|\psi\rangle$. If we can guarantee that the extra qubit is in that state, then we are guaranteed to get the measurement result $P_{\psi}$. In other words,$$\langle\Psi|\mathbf{I}\otimes\mathbf{I}\otimes P_{\psi}|\Psi\rangle=1.$$(The identity operations are how we say that we're not measuring the $x$ and $y$ systems.) I claim that there are no satisfying $\alpha,\beta$ where $|\alpha|^2+|\beta|^2=1$, unless $F(x,y)$ is a constant function.
So, we start to evaluate\begin{align*}\langle\Psi|\mathbf{I}\otimes\mathbf{I}\otimes P_{\psi}|\Psi\rangle&=\frac{1}{2^{n+m}}\sum_x\sum_y\langle F(x,y)|P_{\psi}|F(x,y)\rangle \\&=\frac{1}{2^{n+m}}\left(\sum_{x,y:F(x,y)=0}|\alpha|^2+\sum_{x,y:F(x,y)=1}|\beta|^2\right) \\&= \frac{1}{2^{n+m}}(M|\alpha|^2+(2^{n+m}-M)(1-|\alpha|^2))\end{align*}Where $M$ is the number of values such that $F(x,y)=0$. Setting this equal to 1, we can rearrange for $|\alpha|^2$:$$|\alpha|^2=\frac{M}{2M-2^{n+m}}$$For $|\alpha|^2$ to be a valid value, it must be $0\leq|\alpha|^2\leq 1$. One has to be careful in the analysis here. If we assume that $2M>2^{n+m}$, then the denominator is positive, and $|\alpha|^2\leq 1$ implies$$M\geq 2^{n+m}.$$This only happens if $M=2^{n+m}$, in other words, $F(x,y)=0$ for all $x$ and $y$. On the other hand, if $2M<2^{n+m}$, the denominator is negative, and so $|\alpha|^2\geq 0$ implies $M\leq 0$. This can only happen if $M=0$, i.e. all answers $F(x,y)$ give answer 1.
We conclude that unless $F(x,y)$ is constant, there is no valid $\alpha,\beta$ so that the measurement gives probability 1, which means there is no pure state description of that qubit. |
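The same conclusion can be reached numerically for a tiny instance. This is my own sketch, not part of the answer: for $n=m=1$ we build the 8-dimensional state $|\Psi\rangle$, form the reduced state of the output qubit directly from the amplitudes, and check that its purity $\mathrm{Tr}(\rho^2)$ equals $1$ only when $F$ is constant.

```python
# For n = m = 1, |Psi> = (1/2) * sum_{x,y} |x>|y>|F(x,y)>. Tracing out x and y
# gives a 2x2 reduced density matrix for the output qubit; a qubit has a pure
# state description alpha|0> + beta|1> exactly when Tr(rho^2) = 1.

def reduced_qubit(F):
    """2x2 reduced density matrix of the last qubit (real amplitudes suffice)."""
    amp = {}
    for x in (0, 1):
        for y in (0, 1):
            amp[(x, y, F(x, y))] = 0.5   # amplitude of |x>|y>|F(x,y)>
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for x in (0, 1):
        for y in (0, 1):
            for i in (0, 1):
                for j in (0, 1):
                    rho[i][j] += amp.get((x, y, i), 0.0) * amp.get((x, y, j), 0.0)
    return rho

def purity(rho):
    """Tr(rho^2); equals 1 exactly when the state is pure."""
    return sum(rho[i][j] * rho[j][i] for i in (0, 1) for j in (0, 1))

p_xor = purity(reduced_qubit(lambda x, y: x ^ y))   # non-constant F: mixed
p_const = purity(reduced_qubit(lambda x, y: 0))     # constant F: pure
print(p_xor, p_const)
```

For $F = \mathrm{XOR}$ the reduced state is maximally mixed (purity $1/2$), while for constant $F$ the purity is $1$, in line with the argument above.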
Heredity of Second Countability on Topological Subspaces
Recall from the Hereditary Properties of Topological Spaces page that if $(X, \tau)$ is a topological space that a property of $X$ is said to be hereditary if for all subsets $A \subseteq X$ we have that the topological subspace $(A, \tau_A)$ also has that property (where $\tau_A$ is the subspace topology on $A$).
We will now show that second countability is hereditary.
Theorem 1: Second countability is hereditary, that is, if $(X, \tau)$ is a second countable topological space and $A \subseteq X$ then $(A, \tau_A)$ is a second countable topological space where $\tau_A = \{ A \cap U : U \in \tau \}$ is the subspace topology on $A$. Proof: Let $(X, \tau)$ be a second countable topological space and let $A \subseteq X$. Since $X$ is second countable we have that there exists a countable basis $\mathcal B$ of $X$. We claim that the following collection is a countable basis for $A$:
$$\mathcal B_A = \{ A \cap B : B \in \mathcal B \}$$
Clearly $\mathcal B_A$ is countable since $\mathcal B$ is countable, so we only need to show that $\mathcal B_A$ is a basis of the topology $\tau_A$ on $A$. To prove this we must show that every open set in $A$ (with respect to the subspace topology $\tau_A$ on $A$) is a union of a collection of sets from $\mathcal B_A$. Let $U \subseteq A$ be an open set in $A$. Since $U$ is open in $A$ and $A \subseteq X$ we have that there exists an open set $V$ in $X$ such that:
$$U = A \cap V$$
Since $V$ is open in $X$ and $\mathcal B$ is a basis of the topology $\tau$ on $X$ we have that there exists a subcollection $\mathcal B^* \subseteq \mathcal B$ such that:
$$V = \bigcup_{B \in \mathcal B^*} B$$
Hence we see that:
$$U = A \cap V = A \cap \left ( \bigcup_{B \in \mathcal B^*} B \right ) = \bigcup_{B \in \mathcal B^*} (A \cap B)$$
But $A \cap B \in \mathcal B_A$ for all $B \in \mathcal B^* \subseteq \mathcal B$. So each open set $U$ in $A$ can be expressed as a union of sets from $\mathcal B_A$. Hence $\mathcal B_A$ is indeed a basis of the topology $\tau_A$ on $A$. Hence $\mathcal B_A$ is a countable basis of $\tau_A$, so second countability is hereditary. $\blacksquare$
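A standard concrete instance of the theorem (my addition, not part of the original page):

```latex
% X = \mathbb{R} with the countable basis of rational intervals
% \mathcal{B} = \{\, (p, q) : p, q \in \mathbb{Q},\ p < q \,\}.
% For the subspace A = [0, 1], the construction in Theorem 1 yields
\[
\mathcal{B}_A \;=\; \{\, [0,1] \cap (p,q) \;:\; p, q \in \mathbb{Q},\ p < q \,\},
\]
% a countable basis for the subspace topology; its members include
% half-open sets such as [0, 1/2) = [0,1] \cap (-1, 1/2).
```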
I have a different take on ralu's accepted answer and some of the comments thereafter.
Consider two $N$-bit data sequences which we think of as polynomials $$D^{(1)}(x) = \sum_{i=0}^{N-1} D_i^{(1)}x^i
~~\text{and}~~D^{(2)}(x) = \sum_{i=0}^{N-1} D_i^{(2)}x^i$$ where each $D_i^{(1)}$ and $D_i^{(2)}$ is $0$ or $1$. Let $M(x)$ of degree $64$ denote the CRC polynomial. Actual CRC implementations for data communications have many bells and whistles, but let us assume that for hashing purposes the simplest form of CRC is used, so that the CRC check sums (or hashes) $R^{(1)}(x)$ and $R^{(2)}(x)$ of degree $63$ or less (and thus having $64$ bits) are the remainders obtained by dividing $x^{64}D^{(1)}(x)$ and $x^{64}D^{(2)}(x)$ by $M(x)$. Remember that this is polynomial division over the binary field $\{0,1\}$, where addition (and subtraction) is the Exclusive-OR operation $\oplus$. We thus have$$\begin{align*}
x^{64}D^{(1)}(x) &= Q^{(1)}(x)M(x) \oplus R^{(1)}(x)\\
x^{64}D^{(2)}(x) &= Q^{(2)}(x)M(x) \oplus R^{(2)}(x)
\end{align*}$$
where $Q^{(1)}(x)$ and $Q^{(2)}(x)$ are the quotients. Adding these two equations, we have that
$$x^{64}\left[D^{(1)}(x)\oplus D^{(2)}(x)\right]
= \left[Q^{(1)}(x) \oplus Q^{(2)}(x)\right]M(x)
\oplus \left[R^{(1)}(x) \oplus R^{(2)}(x)\right]
$$
It follows that if $R^{(1)}(x) = R^{(2)}(x)$, so that $R^{(1)}(x) \oplus R^{(2)}(x) = 0$, then it must be that $D^{(1)}(x)\oplus D^{(2)}(x)$ is a multiple of $M(x)$. Conversely, if $D^{(1)}(x)\oplus D^{(2)}(x)$ is a multiple of $M(x)$, then so is
$$x^{64}\left[D^{(1)}(x)\oplus D^{(2)}(x)\right]
\oplus \left[Q^{(1)}(x) \oplus Q^{(2)}(x)\right]M(x)
= R^{(1)}(x) \oplus R^{(2)}(x),$$
and therefore $R^{(1)}(x) \oplus R^{(2)}(x)$, of degree $63$ or less, is a multiple of $M(x)$, which has degree $64$. Since this can happen only if $R^{(1)}(x) \oplus R^{(2)}(x) = 0$, that is, $R^{(1)}(x) = R^{(2)}(x)$, we have the following.
$D^{(1)}(x)$ and $D^{(2)}(x)$ hash to the same check sum,
that is, $R^{(1)}(x) = R^{(2)}(x)$, if and only if
$D^{(1)}(x)$ and $D^{(2)}(x)$ differ by a multiple of $M(x)$
This result holds even if $D^{(1)}(x)$ and $D^{(2)}(x)$ are of different degrees, if we zero-pad the shorter sequence with zeroes at the high-order end to make the sequences of equal length. But if the afore-mentioned bells and whistles are included (e.g. complementing the high-order two bytes before commencing CRC calculations), then the result still holds for equal-length data sequences, but should not be applied blindly when $D^{(1)}(x)$ and $D^{(2)}(x)$ are of different degrees: some care is necessary.
For the simple case considered here, a collision requires $D^{(1)}(x)\oplus D^{(2)}(x)$ to be a nonzero multiple of $M(x)$, and hence requires $$\deg D^{(1)}(x) = \deg D^{(2)}(x) = N-1 \geq \deg M(x) = 64,$$ and so if $N \leq 64$, we are guaranteed that no two sequences hash to the same checksum.
Turning to further specifics and ralu's answer, each alphanumeric symbol can have one of $62$ different values, and while it is possible to map $11$ such symbols to $62^{11}$ different bit sequences of lengths $64$ or less, it is much more convenient to implement a symbol-by-symbol mapping into $6$-bit bytes and create a degree-$65$ data sequence $D(x)$ of $66$ bits to be hashed. The downside is that the four sequences $D(x)$, $D(x)\oplus M(x)$, $D(x) \oplus xM(x)$, and $D(x)\oplus (1\oplus x)M(x)$ will have the same hash, and this is the price paid for simplicity of the mapping algorithm: we have to restrict ourselves to $10$ alphanumeric symbols to avoid collisions. On the other hand, compressing $11$ alphanumeric symbols to $64$ bits or less is a messy task.
An even simpler method is to use the password as entered by the user (say as a sequence of ASCII-encoded $8$-bit bytes) to create the data sequence by concatenation. Now, $8$ symbols guarantees no collisions as per the simplified analysis above, but the actual picture is somewhat different. Although with $9$ bytes and $72$ bits collisions can occur, it is not immediately obvious that collisions will occur. For example, $D(x)\oplus M(x)$ might well be a sequence of bytes that cannot be entered by the user as a password, because some of the ASCII characters are control characters that cause the computer to take other actions than simply passing the character on to the application to be processed.
I doubt there is a simple answer to the question of what is the maximum password length for which collisions are guaranteed not to occur. The answer depends on the choice of $M(x)$ also. For example, Wikipedia's page on CRCs says that CRC-64-ISO $x^{64}+x^4+x^3+x+1$ is weak for hashing purposes, the basis of which claim the diligent reader of the above will have no difficulty understanding.
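The collision criterion above ("same plain CRC iff the data polynomials differ by a multiple of $M(x)$") is easy to demonstrate in code. This is a minimal sketch of the simplest, no-bells-and-whistles CRC, with GF(2) polynomials encoded as Python ints, one bit per coefficient; the data values are arbitrary.

```python
# GF(2) polynomial remainder and the plain CRC R(x) = x^64 * D(x) mod M(x).

def poly_mod(a, m):
    """Remainder of a(x) divided by m(x) over GF(2) (bitwise long division)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)   # cancel the leading term
    return a

def crc(data, m):
    """Remainder of x^64 * data(x) modulo m(x)."""
    return poly_mod(data << 64, m)

M = (1 << 64) | 0b11011          # CRC-64-ISO: x^64 + x^4 + x^3 + x + 1

d1 = 0xDEADBEEFCAFEBABE1234567890ABCDEF   # arbitrary data polynomial
d2 = d1 ^ M                               # differs from d1 by exactly M(x)
d3 = d1 ^ 1                               # differs by 1, NOT a multiple of M(x)

assert d1 != d2
assert crc(d1, M) == crc(d2, M)   # collision, as the analysis predicts
assert crc(d1, M) != crc(d3, M)
```

Because the remainder map is linear over $\oplus$, `crc(d1) ^ crc(d2)` equals the remainder of `(d1 ^ d2) << 64`, which is zero exactly when `d1 ^ d2` is a multiple of $M(x)$.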
I am searching for pointers to algorithms for feature detection.
EDIT: all the answers helped me a lot, I cannot decide which one I should accept. THX guys!
What I did:
For discrete variables (i.e. $D_i, E$ are finite sets) $X_i : \Omega \to D_i$ and a given data table $$ \begin{pmatrix}{} X_1 & ... & X_n & X_{n+1} \\ x_1^{(1)} & ... & x_n^{(1)} & x_{n+1}^{(1)} \\ ... \\ x_1^{(m)} & ... & x_n^{(m)} & x_{n+1}^{(m)} \\ \end{pmatrix} $$ (the last variable will be the 'outcome', that's why I stress it with a special index) and $X, Y$ being some of the $X_1, ..., X_{n+1}$ (so if $X=X_a, Y=X_b$ then $D=D_a, E=D_b$) compute
$$H(X) = - \sum_{d \in D} P[X=d] * log(P[X=d])$$
$$H(Y|X) = - \sum_{d \in D} { P[X=d] * \sum_{e \in E} { P[Y=e|X=d] * log(P[Y=e|X=d]) } }$$
where we estimate $$P[X_a=d] \approx \frac{1}{m}\left|\{j \in \{1, ..., m\} : x_a^{(j)} = d\}\right|$$ and analogously $$P[X_a=d \cap X_b=e] \approx \frac{1}{m}\left|\{j \in \{1, ..., m\} : x_a^{(j)} = d ~\text{and}~ x_b^{(j)}=e\}\right|$$ and then $$I(Y;X) = \frac{H(Y) - H(Y|X)}{\log(\min(|D|, |E|))}$$ which is to be interpreted as the influence of $Y$ on $X$ (or vice versa; it's symmetric).
EDIT: A little late now but still:
This is wrong:
Exercise for you: show that if $X=Y$ then $I(X,Y)=1$.
This is correct: Exercise for you: show that if $X=Y$ then $I(X,X)=H(X)/\log(|D|)$, and if $X$ is additionally uniformly distributed then $I(X,X)=1$.
For selecting features start with the available set $\{X_1, ..., X_n\}$ and a set 'already selected'$ = ()$ [this is an ordered list!]. We select them step by step, always taking the one that maximizes $$\text{goodness}(X) = I(X, X_{n+1}) - \beta \sum_{X_i ~\text{already selected}} I(X, X_i)$$ for a value $\beta$ to be determined (authors suggest $\beta = 0.5$). I.e. goodness = influence on outcome - redundancy introduced by selecting this variable. After doing this procedure, take the first 'few' of them and throw away the ones with lower rank (whatever that means, I have to play with it a little bit). This is what is described in this paper.
For computing the $I$ for continuous variables one needs to bin them in some way. More concretely, the inventors of 'I' suggest taking the maximal value over binnings of $X$ into $n_X$ bins and $Y$ into $n_Y$ bins with $n_X \cdot n_Y \leq m^{0.6}$, i.e. compute $$ \text{MIC}(X;Y) = \max_{n_X \cdot n_Y \leq m^{0.6}} \left( \frac{I_{n_X, n_Y}(X;Y)}{\log(\min(n_X, n_Y))} \right)$$
where $I_{n_X, n_Y}(X;Y)$ means: compute the $I$ precisely as you did for discrete variables by treating $X$ as a discrete random variable after binning it into $n_X$ bins and analogously with $Y$.
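The discrete computation described above can be sketched in a few lines. This is my own illustration (natural logarithms, probabilities estimated as empirical frequencies, i.e. counts divided by $m$), not code from the cited paper.

```python
# Normalized mutual information I(Y;X) = (H(Y) - H(Y|X)) / log(min(|D|, |E|))
# for paired discrete samples, with probabilities estimated as frequencies.
import math
from collections import Counter

def entropy(xs):
    m = len(xs)
    return -sum((c / m) * math.log(c / m) for c in Counter(xs).values())

def cond_entropy(ys, xs):
    """H(Y | X) from paired samples."""
    m = len(xs)
    h = 0.0
    for v, cv in Counter(xs).items():
        ys_given_v = [y for xi, y in zip(xs, ys) if xi == v]
        h += (cv / m) * entropy(ys_given_v)
    return h

def norm_mi(ys, xs):
    """(H(Y) - H(Y|X)) / log(min(|D|, |E|)), as in the question."""
    k = min(len(set(xs)), len(set(ys)))
    return (entropy(ys) - cond_entropy(ys, xs)) / math.log(k)

x = [0, 0, 1, 1]
print(norm_mi(x, x))            # identical and uniform -> 1.0
print(norm_mi([0, 1, 0, 1], x)) # independent -> 0.0
```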
===
ORIGINAL QUESTION
More precisely: I have a classification problem for one boolean variable; let's call this variable outcome.
I have lots of data and lots of features (~150 or so) but these features are not totally 'meaningless' as in image prediction (where every x and y coordinate is a feature) but they are of the form gender, age, etc.
What I did until now: from these 150 features, I guessed the ones that 'seem' to have some importance for the outcome. Still, I am unsure which features to select and also how to measure their importance before starting the actual learning algorithm (that involves yet more selection like PCA and stuff).
For example, for a feature $f$ taking only finitely many values $x_1, ..., x_n$, my very naive approach would be to compute some relation between $P(\text{outcome}=\text{TRUE} \mid f=x_1), ..., P(\text{outcome}=\text{TRUE} \mid f=x_n)$ and $P(\text{outcome}=\text{TRUE})$ (i.e. the feature is important when I can deduce more information about the outcome from it than without any knowledge about the feature).
Concrete question(s): Is that a good idea? Which relation to take? What to do with continuous variables?
I'm sure that I'm not the first one ever wondering about this. I've read about (parts of) algorithms that do this selection in a sort-of automated way. Can somebody point me into the right direction (references, names of algorithms to look for, ...)? |
We substitute $y=ax-3$ into $(x-1)^2+(y-1)^2= 1$:
$(x-1)^2+(ax-3-1)^2= 1$
$(x-1)^2+(ax-4)^2= 1$
$x^2-2x+1+a^2x^2-8ax+16=1$
We obtain the second-degree equation
$x^2(1+a^2)+x(-2-8a)+16=0$
with
$\Delta =(-2-8a)^2-4\cdot(1+a^2)\cdot 16$
If $\Delta <0$ the equation has no solution and the line does not intersect the circle; if $\Delta =0$ the equation has one solution and the line is tangent to the circle; if $\Delta >0$ the equation has two solutions and the line intersects the circle in two points.
$\Delta =4+32a+64a^2-64-64a^2$
$\Delta =32a-60$
So if $32a-60>0$, i.e. $a>\frac{15}{8}$, the line intersects the circle in two points; if $32a-60=0$, i.e. $a=\frac{15}{8}$, the line is tangent to the circle; and if $32a-60<0$, i.e. $a<\frac{15}{8}$, the line does not intersect the circle.
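A quick numerical check of the discriminant computed above (the helper name `discriminant` is my own):

```python
def discriminant(a):
    """Discriminant of x^2(1+a^2) + x(-2-8a) + 16 = 0, obtained by
    substituting y = a*x - 3 into (x-1)^2 + (y-1)^2 = 1."""
    return (-2 - 8 * a) ** 2 - 4 * (1 + a ** 2) * 16

# The expanded form matches 32a - 60 for any a:
for a in [0.0, 0.5, 15 / 8, 3.0]:
    assert abs(discriminant(a) - (32 * a - 60)) < 1e-9

print(discriminant(15 / 8))  # tangent case -> 0.0
print(discriminant(2) > 0)   # two intersection points -> True
print(discriminant(1) < 0)   # no intersection -> True
```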
Second Countable Topological Spaces are Separable Topological Spaces
Recall from the Second Countable Topological Spaces page that the topological space $(X, \tau)$ is said to be second countable if there exists a countable basis $\mathcal B$ of $\tau$.
Furthermore, recall from the Separable Topological Spaces page that the topological space $(X, \tau)$ is said to be separable if it contains a countable dense subset.
We will now look at a rather nice theorem which says that every second countable topological space is a separable topological space.
Theorem 1: Let $(X, \tau)$ be a topological space. If $(X, \tau)$ is second countable then $(X, \tau)$ is separable.

Proof: Let $(X, \tau)$ be a second countable topological space. Then there exists a countable basis $\mathcal B = \{ B_1, B_2, ..., B_n, ... \}$ of $\tau$. Since $\mathcal B$ is a basis of $\tau$, every open set $U \in \tau$ can be expressed as the union of the sets in some subcollection $\mathcal B^* \subseteq \mathcal B$. In particular:

$$U = \bigcup_{B \in \mathcal B^*} B$$

We must now construct a countable dense subset of $X$. Assume that $\mathcal B$ does not contain the empty set; if it does, we can discard it. Then for each $B_n \in \mathcal B$ take a point $x_n \in B_n$ and define the set $A$ as:

$$A = \{ x_n : n \in \mathbb{N} \}$$

Then $A$ is a countable subset of $X$, since we take one element from each set in the countable basis. Furthermore, for all $U \in \tau \setminus \{ \emptyset \}$ we have that $A \cap U \neq \emptyset$, because $A$ contains one element from each of the basis sets and $U$ is the union of a nonempty subcollection of the basis sets. Therefore $A$ is a dense subset of $X$. Hence $A$ is a countable dense subset of $X$, so $(X, \tau)$ is a separable topological space. $\blacksquare$
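As a concrete instance of this construction (my own illustrative example, not from the original page), take $X = \mathbb{R}$ with its standard topology:

```latex
X = \mathbb{R}, \qquad
\mathcal{B} = \{ (p, q) : p, q \in \mathbb{Q},\ p < q \}, \qquad
A = \{ x_{p,q} : (p,q) \in \mathcal{B} \} \quad \text{with } x_{p,q} \in (p,q) \cap \mathbb{Q}.
```

Here $\mathcal B$ is a countable basis, and choosing a rational point from each basis interval yields a countable set $A \subseteq \mathbb{Q}$ that meets every nonempty open set, recovering the familiar fact that $\mathbb{Q}$ is dense in $\mathbb{R}$.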