EDIT: See denesp's answer.
Here is some work I have so far. Someone can hopefully suggest help or provide a separate full answer.
So we have
$$u_A(x_A, y_A) = x_A + f(y_A)$$$$u_B(x_B, y_B) = x_B + g(y_B)$$
and we'll say that $x_A + x_B = X$ and $y_A + y_B = Y$
A's MRS:
$$-\frac{dx_A}{dy_A} = \frac{\frac{\partial U_A}{\partial y_A}}{\frac{\partial U_A}{\partial x_A}} = \frac{f'(y_A)}{1} = f'(y_A)$$
B's MRS: similarly, it is $g'(y_B)$.
So we'll get a Pareto efficient allocation when $f'(y_A) = g'(y_B)$
We know that $f$ being strictly concave implies, for $\alpha \in (0,1)$ and $y_A \neq y_B$:
$$f((1-\alpha)y_A + \alpha y_B) > (1-\alpha)f(y_A) + \alpha f(y_B)$$
We know a function being strictly increasing implies:
$$y_A > y_B \implies f(y_A) > f(y_B)$$
So take the original MRS condition and substitute in $y_A = Y - y_B$:
$$f'(Y - y_B) = g'(y_B)$$
I think you can say that $f$ strictly increasing and strictly concave implies $f'$ is strictly decreasing, so that
$$y_1 < y_2 \implies f'(y_1) > f'(y_2)$$
Now suppose that $y_A \neq y_B$ and derive a contradiction. Note that $Y > y_A$ and $Y > y_B$, so when applying $f'$ or $g'$ to these quantities you can take advantage of the monotonicity somehow.
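To see the tangency condition in action, here is a small numeric sketch (my own illustration, not part of the original problem) that solves $f'(Y - y_B) = g'(y_B)$ by bisection for the concrete choices $f(y) = \ln(1+y)$ and $g(y) = 2\ln(1+y)$ with $Y = 4$:

```python
# Hedged illustration: solve f'(Y - y_B) = g'(y_B) by bisection for
# f(y) = ln(1 + y)   (f'(y) = 1/(1 + y))  and
# g(y) = 2 ln(1 + y) (g'(y) = 2/(1 + y)); both strictly increasing, concave.
Y = 4.0

def h(y_b):
    # Difference of the two marginal rates of substitution.
    return 1.0 / (1.0 + Y - y_b) - 2.0 / (1.0 + y_b)

lo, hi = 0.0, Y          # h(lo) < 0 < h(hi), so a root lies in between
for _ in range(100):
    mid = (lo + hi) / 2.0
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid

y_b = (lo + hi) / 2.0
```

The analytic solution is $y_B = (1+2Y)/3 = 3$, hence $y_A = 1$, and indeed $f'(1) = \tfrac12 = g'(3)$.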
Let us consider $N$ independent scalar fields which satisfy the Euler-Lagrange equations of motion, denoted by $\phi^{(i)}(x) \ ( i = 1,\dots,N)$, extended in a region $\Omega$ of a $D$-dimensional model spacetime $\mathcal{M}_D$. Now consider the classical Lagrangian density, $\mathcal{L}(\phi^{(i)}, \partial_\mu \phi^{(i)}, x^\mu)$. We apply the following infinitesimal fixed-boundary transformation to $\mathcal{M}_D$. \begin{align*} x^\mu \to \widetilde{x}^\mu &\equiv x^\mu + \delta x^\mu (x), \tag{1} \\ \text{such that }\ \delta x^\mu\Big{|}_{\partial\Omega}&=0, \tag{2} \\ \text{and the fields transform as: }\ \phi^{(i)}(x) &\to \widetilde{\phi}^{(i)}(\widetilde{x}) \equiv \phi^{(i)} (x) + \delta\phi^{(i)} (x). \tag{3} \\ \end{align*}
According to my calculations, up to first order in the variation, the Lagrangian density is given by: $$ \boxed{ \delta \mathcal{L} = \partial_\mu \Big( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\delta\phi^{(i)} - \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\partial_\nu \phi^{(i)} \delta x^\nu + \mathcal{L} \delta x^\mu \Big) - \mathcal{L} \partial_\mu (\delta x^\mu) }\tag{4} $$
Therefore, the conserved current is $$ \boxed{ J^{\mu} = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\delta\phi^{(i)} - \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\partial_\nu \phi^{(i)} \delta x^\nu + \mathcal{L} \delta x^\mu - F^\mu } \tag{5}$$ where $F^\mu$ is some arbitrary field that vanishes on $ \partial \Omega$.
However, most textbooks ignore the second and the third terms in the above expression. Compare, for example, with Peskin and Schroeder (p.18) which sets:
$$ J^{\mu} = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\delta\phi^{(i)} - F^\mu. \tag{6} $$
For another example, Schweber (p. 208) ignores all terms but the first in the variation of the Lagrangian density, and writes:
$$ \delta \mathcal{L} = \partial_\mu \Big( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi^{(i)} )}\delta\phi^{(i)} \Big).\tag{7} $$
So what is going on here? Am I missing something? We seem to have set the same assumptions, but get different results. Am I wrong, or are they?
EDIT: Condition (2) is unnecessary, as it was never used in the derivation of the current. Please ignore its presence in the above text.
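As a sanity check (a worked special case of my own, not taken from any of the cited texts), specializing (5) to a rigid translation $\delta x^\nu = \epsilon^\nu$ with $\widetilde{\phi}^{(i)}(\widetilde{x}) = \phi^{(i)}(x)$ (so $\delta\phi^{(i)} = 0$) and $F^\mu = 0$ gives the canonical stress-energy tensor:

```latex
% Specialize the current (5) to a rigid translation:
%   \delta x^\nu = \epsilon^\nu (constant),  \delta\phi^{(i)} = 0,  F^\mu = 0.
J^\mu
  = -\Big( \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi^{(i)})}
           \partial_\nu \phi^{(i)}
         - \delta^\mu_{\ \nu}\,\mathcal{L} \Big)\epsilon^\nu
  \equiv -T^\mu_{\ \nu}\,\epsilon^\nu ,
\qquad
\partial_\mu T^\mu_{\ \nu} = 0 .
```

One possible reconciliation (my reading, to be confirmed): some textbooks work with the variation at fixed argument, $\bar{\delta}\phi^{(i)} \equiv \widetilde{\phi}^{(i)}(x) - \phi^{(i)}(x) = \delta\phi^{(i)} - \partial_\nu\phi^{(i)}\,\delta x^\nu$, which absorbs the second term of (5) into the first.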
There are commands for the top two symbols
\ltimes and
\rtimes, however I have not been able to find commands for the other 4 symbols. Is there a simple way that I could create commands for these symbols?
There are commands for the top two symbols
Just combine existing symbols:
\documentclass{article}
\usepackage{amssymb}
\begin{document}
$\blacktriangleright\mathrel{\mkern-4mu}<$,
$>\mathrel{\mkern-4mu}\blacktriangleleft$,
$\blacktriangleright\joinrel\mathrel{\triangleleft}$,
$\mathrel{\triangleright}\joinrel\blacktriangleleft$
\end{document}
\joinrel is defined (robustly) as
\mathrel{\mkern-3mu}. It's enough for the last two symbols; for the first two a slightly larger value of 4mu looks better to me.
As a matter of fact,
\ltimes and
\rtimes do not yield the "unsymmetric" symbols in your picture. They can be similarly obtained joining
</
> with
\triangleleft/
\triangleright.
$>\joinrel\mathrel{\triangleleft}$ vs. $\rtimes$, and $\mathrel{\triangleright}\joinrel<$ vs. $\ltimes$
My fantasy isn't rich enough to come up with names for all these
;-)
This takes campa's answer (+1) and makes an enhancement/alteration: it scales the result downward to occupy the same vertical footprint as the letter
x.
Like campa's result, it works across math styles.
The MWE:
\documentclass{article}
\usepackage{mathtools,amssymb,scalerel}
\newcommand\bicrossl{%
  \mathrel{\scalerel*{\mathrel{\triangleright}\joinrel\blacktriangleleft}{x}}}
\newcommand\bicrossr{%
  \mathrel{\scalerel*{\blacktriangleright\joinrel\mathrel{\triangleleft}}{x}}}
\newcommand\biopencrossl{%
  \mathrel{\scalerel*{>\kern-.4\LMpt\joinrel\blacktriangleleft}{x}}}
\newcommand\biopencrossr{%
  \mathrel{\scalerel*{\blacktriangleright\joinrel\kern-.4\LMpt<}{x}}}
\begin{document}
$x\bicrossr y$ and $x\bicrossl y$,
$x\biopencrossr y$ and $x\biopencrossl y$,
$\scriptstyle x\bicrossr y$ and $\scriptstyle x\bicrossl y$,
$\scriptstyle x\biopencrossr y$ and $\scriptstyle x\biopencrossl y$,
\end{document}
In the Numberlink puzzle [1] we need to connect end-points by paths over a board with cells such that:
- Each cell is part of a path
- Paths do not cross
E.g. the puzzle on the left has a solution as depicted on the right [1]:
The picture on the left provides the end-points. The solution in the picture on the right can be interpreted as:
The main variable we use is:
\[x_{p,k} = \begin{cases}1 & \text{ if cell $p$ has value $k$}\\
0 & \text{ otherwise}\end{cases}\]
The main thought behind solving this model as a math programming model is to look at the neighbors of a cell. The neighbors are the cells immediately on the left, on the right, below or above. If a cell is an end-point of a path then it has exactly one neighbor with the same value. If a cell is not an end-point it is on a path between end-points and has two neighboring cells with the same value.
This leads to the following high-level MIQCP (Mixed-Integer Quadratically Constrained Program):
\[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align}
&\sum_k x_{p,k} = 1\\
&\sum_{q|N(p,q)} \sum_k x_{p,k}\cdot x_{q,k} =\begin{cases} 1&\text{if cell $p$ is an end-point}\\ 2&\text{otherwise}\end{cases}\\
&x_{p,k} = c_{p,k} \>\>\> \text{if cell $p$ is an end-point}\\&x_{p,k}\in \{0,1\}\end{align}}\]
Here \(N(p,q)\) is true if \(p\) and \(q\) are neighbors and \(c_{p,k}\) indicates if cell \(p\) is an end-point with value \(k\). The model has no objective: we are just looking for a feasible integer solution. Unfortunately, this is a
non-convex model. The global solver Baron can solve this during its initial local search heuristic phase:
Doing local search
Cleaning up
*** Normal completion ***
Wall clock time: 23.73
Total no. of BaR iterations: 1
All done
Of course we can try to linearize this model so it becomes a MIP model (and thus convex). The product of two
binary variables \(y=x_1\cdot x_2\) can be linearized by:
\[\begin{align}&y\le x_1\\&y\le x_2\\&y\ge x_1+x_2-1\end{align}\]
We can relax \(y\) to a continuous variable between 0 and 1. In our model we can exploit some symmetry. This reduces the number of variables and equations:
\[\begin{align}&y_{p,q,k}\le x_{p,k}\\&y_{p,q,k}\le x_{q,k}\\&y_{p,q,k}\ge x_{p,k}+x_{q,k}-1&&\text{for $p<q$}\end{align}\]
The non-linear counting equation can now be written as:
\[\sum_{q|N(p,q)} \sum_k \left( y_{p,q,k}|p<q + y_{q,p,k}|q<p \right)= b_{p}\]
Here \(b_p\) is the right-hand side of the counting constraint. This solves very fast. Cplex shows:
Tried aggregator 2 times.
Root node processing (before b&c):
Cplex solves this in the root node.
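The product linearization can be sanity-checked by enumerating the four binary cases (a quick check of my own):

```python
from itertools import product

# Check that the linearization  y <= x1,  y <= x2,  y >= x1 + x2 - 1
# forces y = x1 * x2 for binary x1, x2 (with y relaxed to [0, 1]).
for x1, x2 in product((0, 1), repeat=2):
    # Feasible y values satisfy max(0, x1 + x2 - 1) <= y <= min(x1, x2);
    # these bounds coincide, pinning y to the product.
    lo = max(0, x1 + x2 - 1)
    hi = min(x1, x2)
    assert lo == hi == x1 * x2
```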
Unique solution
A well-formed puzzle has exactly one solution. We can check this by adding a constraint that forbids the solution we just found. After adding this cut the model should be infeasible. Let \(\alpha_{p,k} = x^*_{p,k}\) be our feasible solution. Then our cut can look like:
\[\sum_{p,k} \alpha_{p,k} x_{p,k} \le |P|-1\]
where \(|P|\) is the number of cells. Indeed, when we add this constraint, the model becomes infeasible. So, we can conclude this is a proper puzzle.
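As a sanity check on the counting constraints and the uniqueness claim, here is a brute-force enumeration of a toy $3\times 3$ instance of my own (three horizontal paths, one per row), not the puzzle from the post:

```python
from itertools import product

# Toy 3x3 instance, K = 3: value k's end-points are at (k-1, 0) and (k-1, 2),
# so the unique solution is three horizontal paths, one per row.
K, rows, cols = 3, 3, 3
endpoints = {(r, 0): r + 1 for r in range(rows)}
endpoints.update({(r, 2): r + 1 for r in range(rows)})
free = [(r, c) for r in range(rows) for c in range(cols)
        if (r, c) not in endpoints]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield (r + dr, c + dc)

solutions = []
for vals in product(range(1, K + 1), repeat=len(free)):
    grid = dict(endpoints)
    grid.update(zip(free, vals))
    ok = True
    for (r, c), k in grid.items():
        # End-points need exactly one same-valued neighbor, other cells two.
        same = sum(grid[q] == k for q in neighbors(r, c))
        if same != (1 if (r, c) in endpoints else 2):
            ok = False
            break
    if ok:
        solutions.append(grid)
# Exactly one feasible assignment survives: each row is a single path.
```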
Alternative formulation
In the comments Rob Pratt suggests a formulation that works better in practice. The counting constraint is replaced by:
\[\begin{align}
&\sum_{q|N(p,q)} x_{q,k}=1 &&\text{if cell $p$ is an end-point with value $k$}\\
&2x_{p,k}\le \sum_{q|N(p,q)} x_{q,k} \le 2x_{p,k}+M_p(1-x_{p,k})&&\text{if cell $p$ is not an end-point}
\end{align}\]
where \(M_p\) can be replaced by \(M_p=|N(p,q)|=\sum_{q|N(p,q)} 1\) (the number of neighbors of cell \(p\)). This is at most 4 so not really a big big-M.
This formulation seems to work better in practice. This problem, too, has exactly one solution.
Historical note
In [3] an early version of this puzzle is mentioned, appearing in The Brooklyn Daily Eagle.
Here the goal is to draw paths from each house to its gate without the paths overlapping [4].
Large problems
For larger problems MIP solvers are not the best tool. From notes in [2] it looks like SAT solvers are the most suitable for this type of problem. In a CP/SAT framework we could directly use an integer variable \(x_p \in \{1,\dots,K\}\), and write the counting constraints as something like:
\[\sum_{q|N(p,q)} 1|(x_p=x_q) = \begin{cases} 1&\text{if cell $p$ is an end-point}\\ 2&\text{otherwise}\end{cases}\]

References

1. Numberlink, https://en.wikipedia.org/wiki/Numberlink
2. Hakan Kjellerstrand, Numberlink puzzle in Picat, https://github.com/hakank/hakank/blob/master/picat/numberlink.pi (shows a formulation in the Picat language)
3. Aaron Adcock, Erik D. Demaine, Martin L. Demaine, Michael P. O'Brien, Felix Reidl, Fernando Sanchez Villaamil, and Blair D. Sullivan, Zig-Zag Numberlink is NP-Complete, 2014, https://arxiv.org/pdf/1410.5845.pdf
4. Facsimiles of The Brooklyn Daily Eagle, http://bklyn.newspapers.com/image/50475607/, https://bklyn.newspapers.com/image/50475838/
5. Neng-Fa Zhou, Masato Tsuru, Eitaku Nobuyama, A Comparison of CP, IP, and SAT Solvers through a common interface, 2012, http://www.sci.brooklyn.cuny.edu/~zhou/papers/tai12.pdf
The aims of this option are to introduce limit theorems and convergence of series, and to use calculus results to solve differential equations. Before beginning any work in this option, it is recommended that you revise Topic 1 and Topic 7 of the core syllabus, as much of the background from those topics is helpful here.
The Harmonic Series
The harmonic series is the divergent infinite series
$$\sum_{n=1}^{\infty}\frac{1}{n}$$
Writing $s_n$ for its $n$-th partial sum, the following pattern is observed:
$$s_1 = 1$$
$$s_2 = 1+\frac{1}{2} = \frac{3}{2}$$
$$s_4 = 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4} > 1+\frac{1}{2}+\frac{1}{2} = 2$$
$$s_8 = 1+\frac{1}{2}+\cdots+\frac{1}{8} > 1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2} = \frac{5}{2}$$
$$s_{16} = 1+\frac{1}{2}+\cdots+\frac{1}{16} > 1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\frac{1}{2} = 3$$
From this one can conclude that the pattern will continue:
$$s_{32} > 3\tfrac{1}{2}, \qquad s_{64} > 4$$
Thus the general pattern can be expressed as:
$$s_{2^n} > 1+\frac{n}{2}$$
(strict for $n \ge 2$; for $n = 1$ there is equality).
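The bound can be checked exactly for small $n$ (a numerical illustration of my own, not a proof):

```python
from fractions import Fraction

# Check s_{2^n} >= 1 + n/2 exactly, using rationals to avoid floating point.
# (For n = 1 this holds with equality: s_2 = 3/2; for n >= 2 it is strict.)
for n in range(1, 11):
    s = sum(Fraction(1, k) for k in range(1, 2 ** n + 1))
    assert s >= 1 + Fraction(n, 2)
    if n >= 2:
        assert s > 1 + Fraction(n, 2)
```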
Let $F$ be a cumulative distribution function on $\mathbb{R}$. An argument in a textbook shows that $F$ must be right-continuous:
Let $x$ be a real number and let $y_1$, $y_2$, $\ldots$ be a sequence of real numbers such that $y_1 > y_2 > \ldots$ and $\lim_i y_i = x$. Let $A_i = (-\infty, y_i]$ and let $A = (- \infty, x]$. Note that $A = \cap_{i=1}^\infty A_i$ and also note that $A_1 \supset A_2 \supset \ldots$. Because the events are monotone, $\lim_i P(A_i) = P(\cap_i A_i)$. Thus,
$$ F(x) = P(A) = P( \cap_i A_i) = \lim_i P(A_i) = \lim_i F(y_i) = F(x^+) $$
But why doesn't this argument work in reverse to show that $F$ is left-continuous? That is, if we supposed that the $y_i$ were approaching $x$ from the left, why can't we analogously say:
$$ F(x) = P(A) = P( \cup_i A_i) = \lim_i P(A_i) = \lim_i F(y_i) = F(x^-)? $$
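To make the failure of left-continuity concrete, here is a quick check (my own example, not the textbook's) using the CDF of a point mass at $0$, $F(x) = \mathbf{1}_{x \ge 0}$:

```python
# CDF of the point mass at 0: F(x) = 1 if x >= 0 else 0.
def F(x):
    return 1.0 if x >= 0 else 0.0

# Right limit: F(y_i) with y_i decreasing to 0 equals F(0) = 1.
right = [F(1 / i) for i in range(1, 1000)]
# Left limit: F(y_i) with y_i increasing to 0 does NOT equal F(0):
# the union of the sets (-inf, y_i] is the OPEN interval (-inf, 0),
# whose probability is 0, not F(0) = 1 -- so P(A) != P(union A_i) here.
left = [F(-1 / i) for i in range(1, 1000)]
```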
The exponential distribution is the $\mathbf{\underline{only}}$ distribution that has the memoryless property. See this MathSE post.
Now that doesn't necessarily imply a Markov process must have only exponential jump times, but any examples without exponential jump times will likely be fairly trivial. The requirement is that the distribution of the future states of the process only depend on the most recent state known and not any other information about the past.
To answer your question, "how does having a non-exponential distribution give the process memory?": it is simply because the distribution of the process at a future time will depend on more than just the most recently known state. Memorylessness means not that we don't know the past, but that we don't need to know it (except for the most recent state). I.e. that $P(X_t\in A \mid X_s \text{ for } s\leq s_0<t)=P(X_t\in A \mid X_{s_0})$.
With a uniform wait time, there is a maximum amount of time that will be spent in that state. Without knowing when the state was entered, there is no way to calculate probabilities on when it will leave the state.
Even if we choose some other non-uniform distribution with support on $[0,\infty)$ that isn't exponential, it still won't have the memoryless property, since, as discussed above, the exponential is the only distribution with this property.
See this MathSE post for a process that has uniform jump times and is not Markov.
Here is a MathSE post that is relevant. Here is the important paragraph:
In order for a process to have non-exponential wait times and satisfy
the Markov property, knowing the current state must give away the
precise time that it entered that state. Consider the process that
starts in one state, stays there for some random length of time, then
jumps to another state and stays there forever. It's Markov, but not
very interesting. Knowing the current state means you know whether or
not the jump has occurred and can calculate the distribution of future
states precisely.
Example of a process with a uniform jump time that is Markov: Consider the process that starts in state $0$ and jumps to state $1$ after a uniform time in $[0,1]$. Let's assume that we know $X_s$ for all $s\leq s_0$. We want to show that $$P(X_t\in A\mid X_s \text{ for } s\leq s_0<t)=P(X_t \in A\mid X_{s_0})$$so that this process does indeed have the Markov property.
Let $t<1$. If $A=\{0\}$, then $P(X_t=0\mid X_s \text{ for } s\leq s_0<t)$ equals $0$ if $X_s$ is $1$ (the jump has already occurred) for any $s\leq s_0$ (i.e. if $X_{s_0}=1$, so we only need to know $X_{s_0}$). Otherwise, $P(X_t=0\mid X_s \text{ for } s\leq s_0<t)$ equals $P(u> t \mid u>s_0)=P(X_t=0 \mid X_{s_0}=0)$. So we conclude that $$P(X_t=0\mid X_s \text{ for } s\leq s_0<t)=P(X_t=0 \mid X_{s_0})$$
Similarly, if $A=\{1\}$, then $P(X_t=1\mid X_s \text{ for } s\leq s_0<t)$ equals $1$ if the jump has already occurred, that is if $X_{s_0}=1$. And it equals $P(u\leq t \mid u>s_0)=P(X_t=1 \mid X_{s_0}=0)$ if the jump has not occurred yet during the known interval $[0,s_0]$.
Let $t\geq 1$. Then $P(X_t=0\mid X_s \text{ for } s\leq s_0<t)=P(X_t=0\mid X_{s_0})=P(X_t=0)=0$ and $P(X_t=1\mid X_s \text{ for } s\leq s_0<t)=P(X_t=1\mid X_{s_0})=P(X_t=1)=1$.
Thus $$P(X_t \in A\mid X_s \text{ for } s\leq s_0<t)=P(X_t \in A \mid X_{s_0}).$$
So this is indeed a Markov process where the wait time is not exponentially distributed.
For an even more trivial example, consider the process $X_t=1$ for all $t\in[0,\infty)$. Trivially, $$P(X_t\in A \mid X_s, s\leq s_0)=\begin{cases}0 &\text{ if } 1\notin A\\1 &\text{ if } 1\in A\end{cases}$$and$$P(X_t\in A \mid X_{s_0})=\begin{cases}0 &\text{ if } 1\notin A\\1 &\text{ if } 1\in A\end{cases}$$showing that it is indeed a (trivial) Markov process.
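The conditional law used in the argument above, $P(X_t=0\mid X_{s_0}=0)=P(u>t\mid u>s_0)=\frac{1-t}{1-s_0}$ for $s_0<t<1$, can also be checked by simulation (my own sketch; $s_0=0.25$ and $t=0.5$ are arbitrary choices):

```python
import random

random.seed(0)
s0, t, n = 0.25, 0.5, 200_000

# Simulate the jump time u ~ Uniform[0, 1]; the process is still in
# state 0 at time r exactly when u > r.
still_zero_at_s0 = 0
still_zero_at_t = 0
for _ in range(n):
    u = random.random()
    if u > s0:                    # condition on X_{s0} = 0
        still_zero_at_s0 += 1
        if u > t:
            still_zero_at_t += 1

empirical = still_zero_at_t / still_zero_at_s0
exact = (1 - t) / (1 - s0)        # = 2/3 for these choices
```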
Series de Fourier aplicadas a problemas de cálculo de variaciones con retardo
(2012-03-22)
In this article we present an approximation of the minimizing function of the functional $J[x]=\int_0^T F(t,X(t),X(t-\tau),\dot{X}(t))\,dt$ by approximating X(t) with Cosine Fourier series expansions X_n(t). We give conditions ...
Hércules contra la Hidra y la muerte del Internet
(2011-04-29)
Hercules killed the Hydra of Lerna in a bloody battle—the second of the labor tasks imposed upon him in atonement for his hideous crimes. The Hydra was a horrible, aggressive mythological monster with many heads and poisonous ...
A mixed-effects model for growth curves analysis in a two-way crossed classification layout
(2009-02-20)
We propose a mixed-effects linear model for analyzing growth curves data obtained using a two-way classification experiment. The model combines an unconstrained means model and a regression model on the time, in which the ...
Agrupamiento de Filas y Columnas Homogéneas en Modelos de Correspondencia
(2011-04-29)
Goodman (1981) proposed homogeneity and structure criteria in Association Models which allow one to determine if certain rows or columns in a contingency table should be grouped. In later works, he showed the relations between ...
Algoritmos Numéricos para el Problema de Restauración de Imágenes usando el Método de las Proyecciones Alternantes
(2011-04-29)
The projection algorithms have evolved from the alternating projection method proposed by J. von Neumann in 1933, who treated the problem of finding the projection of a given point in a Hilbert space onto the intersection ...
Invariant Manifolds in Parametric turbulent Models
(2011-04-29)
The article is devoted to examining the so-called local-equilibrium approximations used while modeling turbulent flows. The dynamics of a far plane turbulent wake are investigated as an example. In this article, we analyze ...
Interval Mathematics Applied to Critical Point Transitions
(2012-03-02)
The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex ...
Sobre el problema inverso de difusión
(2009-02-20)
Infiltration is physically described in order to model it as a diffusion stochastic process. Theorem M-B 1 is enunciated, whose main objective is the inverse diffusion problem. The theorem is demonstrated in the specific ...
El problema del conjunto independiente en la selección de horarios de cursos
(2009-02-20)
The registration process at the Universidad Autónoma Metropolitana is such that every student is free to choose his/her own subjects and schedule. Success of this system, based on the percentage of students that obtain a place ...
Term Structure of Interest Rates
(2012-03-02)
The risk free rate on bonds is a very important quantity that allows calculation of premium values on bonds. This quantity of stochastic nature has been modeled with different degrees of sophistication. This paper reviews ...
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
In Conquering the Physics GRE 2nd edition, Exam 3 problem 20, the statement is as follows:
A metal bar is pulled at constant velocity $v\mathbf{\hat{x}}$ along two metal rails a distance $d$ apart connected by a resistor of resistance $R$, as shown in the diagram. There is a magnetic field, pointing into the page, of magnitude $B = Cx$, where $x=0$ is the initial position of the bar. At time $T$, how much energy has been dissipated in the resistor thus far, as a function of $T$?
The solution is to first find the flux as a function of time, then the EMF, then the power dissipated by the resistor, then the energy. My method is to compute the flux as:
$$\Phi = \iint \mathbf{B}\cdot d\mathbf{S} = \iint (Cx\mathbf{\hat{z}})\cdot(dx\,dy\,\mathbf{\hat{z}}) = \int_0^d dy \int_0^{vt} dx\,Cx = \frac{1}{2}Cd(vt)^2 = \frac{1}{2}Cv^2t^2d$$
However, the solution says "Since the magnetic field is perpendicular to the loop, the flux through the loop is $\Phi = BA = C(vt)(vtd) = C v^2t^2 d$." The errata states that one of the answers was misprinted but that the solution is correct. Am I crazy or is the solution wrong? $\Phi = BA$ should only be true if $B = \text{const.}$, correct?
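For what it's worth, the factor of $\frac12$ in the question's own integral can be checked numerically (my sketch; the constants are arbitrary):

```python
# Numerically check Phi(t) = integral over y in [0, d], x in [0, v*t] of C*x
# against the closed form (1/2)*C*d*(v*t)**2. Constants are arbitrary choices.
C, d, v, t = 1.3, 0.7, 2.0, 1.5
n = 100_000
dx = (v * t) / n
# Midpoint rule for the x-integral (exact for a linear integrand, up to
# floating point); the y-integral just contributes a factor d.
phi_numeric = d * sum(C * ((i + 0.5) * dx) * dx for i in range(n))
phi_closed = 0.5 * C * d * (v * t) ** 2
```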
Journal of Symbolic Logic, Volume 56, Issue 3 (1991), 853-861.
XVIIeme Probleme de Hilbert sur les Corps Chaine-Clos
Abstract
A chain-closed field is defined as a chainable field (i.e. a real field such that, for all $n \in \mathbf{N}, \mathbf{\Sigma K^{2n+1}} \neq \mathbf{\Sigma K^{2n}}$) which does not admit any "faithful" algebraic extension, and can also be seen as a field having a Henselian valuation $\nu$ such that the residue field $K/\nu$ is real closed and the value group $\nu K$ is odd divisible with $|\nu K/2\nu K| = 2$. If $K$ admits only one such valuation, we show that $f \in K(X)$ is in $\mathbf{\Sigma} K(X)^{2n} \operatorname{iff}$ for any real algebraic extension $L$ of $K, "f(L) \subseteq \mathbf{\Sigma}L^{2n}"$ holds. The conclusion is also true for $K = \mathbf{R}((t))$ (a chainable but not chain-closed field), and in the case $n = 1$ it holds for several variables and any real field $K$.
Article information
Source: J. Symbolic Logic, Volume 56, Issue 3 (1991), 853-861.
First available in Project Euclid: 6 July 2007
Permanent link: https://projecteuclid.org/euclid.jsl/1183743733
Digital Object Identifier: doi:10.2178/jsl/1183743733
Zentralblatt MATH identifier: 0746.03026
Citation
Delon, Francoise; Gondard, Danielle. XVIIeme Probleme de Hilbert sur les Corps Chaine-Clos. J. Symbolic Logic 56 (1991), no. 3, 853--861. doi:10.2178/jsl/1183743733. https://projecteuclid.org/euclid.jsl/1183743733
1. Direct Relationship Analysis
Direct relationship analysis includes partial dependence plots, individual conditional expectation, feature interaction, feature importance, etc.
1.1 Partial Dependence Plot (PDP)
PDP shows the marginal effect that one or two features have on the predicted outcome of a machine learning model. $$\mathrm{PD}(x_S)=\widehat{f}_{x_S}(x_S)=\mathbb{E}_{x_C}\left[\widehat{f}(x_S, x_C)\right]=\int\widehat{f}(x_S, x_C)\ \mathrm{d}\mathbb{P}(x_C),$$ where $\widehat{f}$ is the predictor, $x_S$ are the features to examine, and $x_C$ are the other features.
In practice $$\widehat{f}_{x_S}(x_S)=\frac{1}{n}\sum_{i=1}^n\widehat{f}\left(x_S, x_C^{(i)}\right).$$
1.1.1 Pros and Cons
Pros:
- Intuitive.
- Interpretation is clear.
- Easy to implement.
- May give a causal interpretation of the model.
Cons:
- Can show at most two features.
- Relies on the independence assumption.
- Important information can be lost due to marginalization.

1.2 Individual Conditional Expectation (ICE)
ICE displays one line per instance that shows how the instance's prediction changes when a feature changes. PDP shows the average effect while ICE shows individuals.
For each instance in $\left\lbrace\left(x_S^{(i)}, x_C^{(i)}\right)\right\rbrace_{i=1}^N,$ the curve $\widehat{f}_S^{(i)}$ is plotted against $x_S^{(i)},$ while $x_C^{(i)}$ remains fixed.
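Both estimators can be sketched in a few lines of plain Python (a toy additive predictor of my own; for it, the PDP in $x_S$ should be $x_S^2 + \bar{x}_C$):

```python
# Toy predictor with features (x_S, x_C); the PDP over x_S averages out x_C.
def f_hat(x_s, x_c):
    return x_s ** 2 + x_c

data_xc = [0.0, 1.0, 2.0, 3.0]          # observed x_C values
grid = [0.0, 0.5, 1.0, 1.5, 2.0]        # candidate x_S values

# ICE: one curve per instance, with x_C^{(i)} held fixed along the curve.
ice = [[f_hat(x_s, xc) for x_s in grid] for xc in data_xc]
# PDP: the pointwise average of the ICE curves.
pdp = [sum(curve[j] for curve in ice) / len(ice) for j in range(len(grid))]
# For this additive toy model, PDP(x_S) = x_S^2 + mean(x_C) = x_S^2 + 1.5.
```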
1.2.1 Centred ICE Plot (c-ICE)
ICE curves start at different predictions while c-ICE centres the curves and displays only the differences: $$\widehat{f}_\mathrm{cent}^{(i)}=\widehat{f}^{(i)}-\widehat{f}\left(x_S^\min, x_C^{(i)}\right),$$ where $x_S^\min$ is the minimum candidate of $x_S.$
1.3 Feature Interaction: Friedman's H-Statistic
We can visualize interactions with a 2-D PDP, but we cannot quantify the interaction that way. Friedman's H-statistic can:
If $x_j$ and $x_k$ do not interact, $$\mathrm{PD}_{jk}(x_j, x_k)=\mathrm{PD}_j(x_j)+\mathrm{PD}_k(x_k).$$ If $x_j$ and $x_k$ interact, $$|\mathrm{PD}_{jk}(x_j, x_k)-(\mathrm{PD}_j(x_j)+\mathrm{PD}_k(x_k))|$$ is proportional to the interaction extent.
After normalization, $$H^2_{jk}=\frac{\sum\limits_{i=1}^n\left[\mathrm{PD}_{jk}\left(x_j^{(i)}, x_k^{(i)}\right)-\mathrm{PD}_j\left(x_j^{(i)}\right)-\mathrm{PD}_k\left(x_k^{(i)}\right)\right]^2}{\sum\limits_{i=1}^n\mathrm{PD}_{jk}^2\left(x_j^{(i)}, x_k^{(i)}\right)}.$$ If $x_j$ does not interact with the other features, $$\widehat{f}(x)=\mathrm{PD}_j(x_j)+\mathrm{PD}_{-j}(x_{-j}).$$ If $x_j$ interacts with the others, $$ H_j^2=\frac{\sum\limits_{i=1}^n\left[\widehat{f}(x^{(i)})-\mathrm{PD}_j\left(x_j^{(i)}\right)-\mathrm{PD}_{-j}\left(x_{-j}^{(i)}\right)\right]^2}{\sum\limits_{i=1}^n\widehat{f}^2(x^{(i)})}.$$
1.3.1 Pros and Cons
Pros:
Theoretically sound. A meaningful interpretation: share of variance that is explained by the interaction. Comparable across features and even across models: dimensionless and always between $0$ and $1.$ Detects all kinds of interactions. Possible to analyze three or more features.
Cons:
Expensive to compute. Sample-based: the estimate can have high variance.
1.4 Feature Importance
Feature importance is a major way for interpretation. A feature is important if shuffling its values increases the model error; a feature is unimportant if shuffling its values leaves the model error unchanged.
Algorithm: Permutation Feature Importance
Input: Trained model $f;$
Feature matrix $X;$
Target vector $y;$
Error measure $L(y, f).$
estimate the original model error $e^\mathrm{orig}=L(y, f(X));$
for each feature $j=1, ..., p$ do
generate feature matrix $X^\mathrm{perm}$ by permuting feature $j$ in the data $X$ (which breaks the association between feature $j$ and true outcome $y$).
estimate the error $e^\mathrm{perm}=L(y, f(X^\mathrm{perm}))$ based on the predictions on the permuted data;
calculate permutation feature importance $\mathrm{FI}^j=\frac{e^\mathrm{perm}}{e^\mathrm{orig}};$
end
sort features by descending $\mathrm{FI}.$
In practice, we could divide the data set into two halves and swap the values of feature $j$ between the halves (an economical implementation). Alternatively, we could pair the instances and swap the values of feature $j$ within each pair, yielding $n(n-1)$ estimates of the permutation error (an accurate but expensive estimate).
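The algorithm above can be sketched end-to-end on synthetic data (the model and data here are stand-ins, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)  # only feature 0 matters

f = lambda X: 3.0 * X[:, 0]              # stand-in "trained model"
L = lambda y, p: np.mean((y - p) ** 2)   # error measure (MSE)

e_orig = L(y, f(X))                      # original model error
fi = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature/outcome association
    fi.append(L(y, f(Xp)) / e_orig)       # FI^j = e_perm / e_orig
```

Feature 0 gets a large importance ratio; the ignored features stay at exactly 1.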
1.4.1 Pros and Cons
Pros:
Nice interpretation. Highly compressed, global insight. Comparable across different problems. Takes into account all interactions. Does not require retraining the model.
Cons:
It is very unclear whether you should use training or test data to compute the feature importance.
2. Surrogate Methods
Surrogate methods include global surrogates and local surrogates.
2.1 Global Surrogate
A global surrogate model is an interpretable model that is trained to approximate the predictions of a black box model.
2.1.1 Obtain a Surrogate Model
Here are some steps to obtain a surrogate model:
For a dataset $X,$ get the predictions of the black box model. Select an interpretable model type (linear model, decision tree, etc). Train the interpretable model on the dataset $X$ and its predictions (this is called the surrogate model). Measure how well the surrogate model replicates the predictions of the black box model. Interpret the surrogate model.
2.1.2 Measuring the Global Surrogate
$$R^2=1-\frac{\mathrm{SSE}}{\mathrm{SST}}=1-\frac{\sum\limits_{i=1}^n\left(\widehat{y}_*^{(i)}-\widehat{y}^{(i)}\right)^2}{\sum\limits_{i=1}^n\left(\widehat{y}^{(i)}-\overline{\widehat{y}}\right)^2},$$ where $\widehat{y}_*^{(i)}$ is the prediction from the surrogate, $\widehat{y}^{(i)}$ is the prediction of the black box model, and $\overline{\widehat{y}}$ is the mean of the black box model predictions.
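The steps above, together with the $R^2$ check, can be sketched as follows (the black box model is a made-up stand-in, and the surrogate is an ordinary least-squares linear fit):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))

# Hypothetical black box model (stand-in for any opaque predictor).
black_box = lambda X: np.tanh(2 * X[:, 0]) + 0.3 * X[:, 1]

y_hat = black_box(X)                          # step 1: black box predictions

# Steps 2-3: train an interpretable (linear) surrogate on X and y_hat.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y_hat, rcond=None)
y_star = A @ coef

# Step 4: how well does the surrogate replicate the black box?
r2 = 1 - np.sum((y_star - y_hat) ** 2) / np.sum((y_hat - y_hat.mean()) ** 2)
```

Here the black box is nearly linear on the sampled range, so the surrogate's $R^2$ is high; step 5 is reading off `coef`.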
2.1.3 Pros and Cons
Pros:
Surrogate methods are flexible. Intuitive and straightforward.
Cons:
Global surrogates tend to underfit.
2.2 Local Surrogate (LIME)
Local surrogate explains individual instances, e.g., locally linear or a local decision tree.
2.2.1 LIME Objective
$$\mathrm{explanation}(x)=\arg\min_{g \in G}L(f, g, \pi_x)+\Omega(g),$$ where $x$ is an instance, $L$ is a loss function, $f$ is the black box model, $g$ is the interpretable surrogate model, $\pi_x$ is the proximity measure defining local, and $\Omega(g)$ is the complexity of $g.$
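A minimal numeric sketch of this objective, assuming a locally linear $g$, a Gaussian proximity $\pi_x$, and a made-up black box $f$ (the complexity penalty $\Omega$ is dropped here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # hypothetical black box

x0 = np.array([0.2, 0.5])                  # instance to explain
Z = x0 + 0.3 * rng.normal(size=(200, 2))   # perturbed dataset around x0
y = f(Z)                                   # black box labels for the samples

# Proximity pi_x: Gaussian kernel on the distance to x0.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3 ** 2))

# Weighted least squares fit of the interpretable local linear model g.
A = np.column_stack([Z, np.ones(len(Z))])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
```

`coef[:2]` approximates the local slope of $f$ at `x0`, which is what the explanation reports.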
2.2.2 Obtain a Local Surrogate Model
Here are some steps to obtain a local surrogate model:
Select $x$ for which you want to have an explanation of its black box prediction. Perturb the dataset and get the black box predictions for these new points. Weight the new samples according to their proximity to the instance of interest. Train a weighted, interpretable model on the dataset with the variations. Explain the prediction by interpreting the local model.
2.2.3 Complexity $\Omega$
A good $g$ is close to $f$ and with low complexity:
For linear $g:$ a few coefficients. For tree $g:$ a few levels and regular leaf values.
LIME originally uses locally linear $g,$ i.e., locally use the black box predictions as supervised labels to fit a linear model with a few coefficients.
2.2.4 $K$-LASSO
LASSO is the least absolute shrinkage and selection operator, one of the most widely used sparse linear models. The LASSO learning objective is $$\min_{\beta \in \mathbb{R}^p}\left\lbrace\frac{1}{N}||y-X\beta||^2_2\right\rbrace\ \ \mathrm{s.t.}\ \ ||\beta||_1\leq\rho.$$ A smaller $\rho$ results in a sparser model (more zero coefficients), and $K$-LASSO is LASSO with exactly $K$ nonzero coefficients.
As $\rho$ increases steadily from $0,$ each coefficient traces out a curve. $K$-LASSO chooses the best $\beta$ with only $K$ nonzero entries $\beta_i.$
2.2.5 Define Local
The typical choice is a Gaussian kernel $w_t \propto \exp\left(-\frac{||x-x_t||^2}{2\sigma^2}\right).$ Setting $\sigma$ is an open problem; in the LIME software, $\sigma=0.75\sqrt{\mathrm{Number\ of\ features}}$ (but the default setting may not work).
By default Euclidean distance is used for $||x-x_t||$ (but may not work and for some data, no simple distance function works such as for natural images). |
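The kernel above can be written in a couple of lines (the default follows the LIME software's choice of $\sigma$ quoted earlier; the function name is mine):

```python
import numpy as np

def lime_weights(X, x_t, sigma=None):
    # Default sigma as quoted for the LIME software: 0.75 * sqrt(num features).
    if sigma is None:
        sigma = 0.75 * np.sqrt(X.shape[1])
    d2 = np.sum((X - x_t) ** 2, axis=1)     # squared Euclidean distance
    return np.exp(-d2 / (2 * sigma ** 2))
```

Points at the instance itself get weight 1; weights decay with distance.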
The set \(\mathbb{P}^1(\QQ)\) of cusps¶
EXAMPLES:
sage: Cusps
Set P^1(QQ) of all cusps
sage: Cusp(oo)
Infinity
class sage.modular.cusps.Cusp(a, b=None, parent=None, check=True)
A cusp.
A cusp is either a rational number or infinity, i.e., an element of the projective line over Q. A Cusp is stored as a pair (a,b), where gcd(a,b)=1 and a,b are of type Integer.
EXAMPLES:
sage: a = Cusp(2/3); b = Cusp(oo)
sage: a.parent()
Set P^1(QQ) of all cusps
sage: a.parent() is b.parent()
True
apply(g)
Return g(self), where g=[a,b,c,d] is a list of length 4, which we view as a linear fractional transformation.
EXAMPLES: Apply the identity matrix:
sage: Cusp(0).apply([1,0,0,1])
0
sage: Cusp(0).apply([0,-1,1,0])
Infinity
sage: Cusp(0).apply([1,-3,0,1])
-3
denominator()
Return the denominator of the cusp a/b.
EXAMPLES:
sage: x = Cusp(6,9); x
2/3
sage: x.denominator()
3
sage: Cusp(oo).denominator()
0
sage: Cusp(-5/10).denominator()
2
galois_action(t, N)
Suppose this cusp is \(\alpha\), \(G\) a congruence subgroup of level \(N\) and \(\sigma\) is the automorphism in the Galois group of \(\QQ(\zeta_N)/\QQ\) that sends \(\zeta_N\) to \(\zeta_N^t\). Then this function computes a cusp \(\beta\) such that \(\sigma([\alpha]) = [\beta]\), where \([\alpha]\) is the equivalence class of \(\alpha\) modulo \(G\).
This code only needs as input the level and not the group since the action of Galois for a congruence group \(G\) of level \(N\) is compatible with the action of the full congruence group \(\Gamma(N)\).
INPUT:
\(t\) – integer that is coprime to \(N\)
\(N\) – positive integer (level)
OUTPUT:
a cusp
Warning
In some cases \(N\) must fit in a long long, i.e., there are cases where this algorithm isn’t fully implemented.
Note
Modular curves can have multiple non-isomorphic models over \(\QQ\). The action of Galois depends on such a model. The model over \(\QQ\) of \(X(G)\) used here is the model where the function field \(\QQ(X(G))\) is given by the functions whose Fourier expansion at \(\infty\) have their coefficients in \(\QQ\). For \(X(N):=X(\Gamma(N))\) the corresponding moduli interpretation over \(\ZZ[1/N]\) is that \(X(N)\) parametrizes pairs \((E,a)\) where \(E\) is a (generalized) elliptic curve and \(a: \ZZ / N\ZZ \times \mu_N \to E\) is a closed immersion such that the Weil pairing of \(a(1,1)\) and \(a(0,\zeta_N)\) is \(\zeta_N\). In this parameterisation the point \(z \in H\) corresponds to the pair \((E_z,a_z)\) with \(E_z=\CC/(z \ZZ+\ZZ)\) and \(a_z: \ZZ / N\ZZ \times \mu_N \to E\) given by \(a_z(1,1) = z/N\) and \(a_z(0,\zeta_N) = 1/N\). Similarly \(X_1(N):=X(\Gamma_1(N))\) parametrizes pairs \((E,a)\) where \(a: \mu_N \to E\) is a closed immersion.
EXAMPLES:
sage: Cusp(1/10).galois_action(3, 50)
1/170
sage: Cusp(oo).galois_action(3, 50)
Infinity
sage: c = Cusp(0).galois_action(3, 50); c
50/67
sage: Gamma0(50).reduce_cusp(c)
0
Here we compute the permutations of the action for t=3 on cusps for Gamma0(50).
sage: N = 50; t = 3; G = Gamma0(N); C = G.cusps()
sage: cl = lambda z: exists(C, lambda y: y.is_gamma0_equiv(z, N))[1]
sage: for i in range(5):
....:     print((i, t^i))
....:     print([cl(alpha.galois_action(t^i, N)) for alpha in C])
(0, 1)
[0, 1/25, 1/10, 1/5, 3/10, 2/5, 1/2, 3/5, 7/10, 4/5, 9/10, Infinity]
(1, 3)
[0, 1/25, 7/10, 2/5, 1/10, 4/5, 1/2, 1/5, 9/10, 3/5, 3/10, Infinity]
(2, 9)
[0, 1/25, 9/10, 4/5, 7/10, 3/5, 1/2, 2/5, 3/10, 1/5, 1/10, Infinity]
(3, 27)
[0, 1/25, 3/10, 3/5, 9/10, 1/5, 1/2, 4/5, 1/10, 2/5, 7/10, Infinity]
(4, 81)
[0, 1/25, 1/10, 1/5, 3/10, 2/5, 1/2, 3/5, 7/10, 4/5, 9/10, Infinity]
REFERENCES:
Section 1.3 of Glenn Stevens, “Arithmetic on Modular Curves” There is a long comment about our algorithm in the source code for this function.
AUTHORS:
William Stein, 2009-04-18
is_gamma0_equiv(other, N, transformation=None)
Return whether self and other are equivalent modulo the action of \(\Gamma_0(N)\) via linear fractional transformations.
INPUT:
other – Cusp
N – an integer (specifies the group \(\Gamma_0(N)\))
transformation – None (default) or either the string 'matrix' or 'corner'. If 'matrix', it also returns a matrix in \(\Gamma_0(N)\) that sends self to other. The matrix is chosen such that the lower left entry is as small as possible in absolute value. If 'corner' (or True for backwards compatibility), it returns only the upper left entry of such a matrix.
OUTPUT:
a boolean – True if self and other are equivalent
a matrix or an integer – returned only if transformation is 'matrix' or 'corner', respectively
EXAMPLES:
sage: x = Cusp(2,3)
sage: y = Cusp(4,5)
sage: x.is_gamma0_equiv(y, 2)
True
sage: _, ga = x.is_gamma0_equiv(y, 2, 'matrix'); ga
[-1  2]
[-2  3]
sage: x.is_gamma0_equiv(y, 3)
False
sage: x.is_gamma0_equiv(y, 3, 'matrix')
(False, None)
sage: Cusp(1/2).is_gamma0_equiv(1/3, 11, 'corner')
(True, 19)
sage: Cusp(1,0)
Infinity
sage: z = Cusp(1,0)
sage: x.is_gamma0_equiv(z, 3, 'matrix')
(
      [-1  1]
True, [-3  2]
)
ALGORITHM: See Proposition 2.2.3 of Cremona’s book ‘Algorithms for Modular Elliptic Curves’, or Prop 2.27 of Stein’s Ph.D. thesis.
is_gamma1_equiv(other, N)
Return whether self and other are equivalent modulo the action of Gamma_1(N) via linear fractional transformations.
INPUT:
other – Cusp
N – an integer (specifies the group \(\Gamma_1(N)\))
OUTPUT:
bool – True if self and other are equivalent
int – 0, 1 or -1, giving further information about the equivalence: if the two cusps are \(u_1/v_1\) and \(u_2/v_2\), then they are equivalent if and only if \(v_1 \equiv v_2 \pmod{N}\) and \(u_1 \equiv u_2 \pmod{\gcd(v_1,N)}\), or \(v_1 \equiv -v_2 \pmod{N}\) and \(u_1 \equiv -u_2 \pmod{\gcd(v_1,N)}\). The sign is +1 in the first case and -1 in the second. If the two cusps are not equivalent, 0 is returned.
EXAMPLES:
sage: x = Cusp(2,3)
sage: y = Cusp(4,5)
sage: x.is_gamma1_equiv(y,2)
(True, 1)
sage: x.is_gamma1_equiv(y,3)
(False, 0)
sage: z = Cusp(QQ(x) + 10)
sage: x.is_gamma1_equiv(z,10)
(True, 1)
sage: z = Cusp(1,0)
sage: x.is_gamma1_equiv(z, 3)
(True, -1)
sage: Cusp(0).is_gamma1_equiv(oo, 1)
(True, 1)
sage: Cusp(0).is_gamma1_equiv(oo, 3)
(False, 0)
is_gamma_h_equiv(other, G)
Return a pair (b, t), where b is True or False as self and other are equivalent under the action of G, and t is 1 or -1, as described below.
Two cusps \(u_1/v_1\) and \(u_2/v_2\) are equivalent modulo \(\Gamma_H(N)\) if and only if \(v_1 \equiv h v_2 \pmod{N}\) and \(u_1 \equiv h^{-1} u_2 \pmod{\gcd(v_1,N)}\), or \(v_1 \equiv -h v_2 \pmod{N}\) and \(u_1 \equiv -h^{-1} u_2 \pmod{\gcd(v_1,N)}\), for some \(h \in H\). Then t is 1 or -1 as the two cusps fall into the first or second case, respectively.
INPUT:
other – Cusp
G – a congruence subgroup \(\Gamma_H(N)\)
OUTPUT:
bool – True if self and other are equivalent
int – -1, 0, 1; extra info
EXAMPLES:
sage: x = Cusp(2,3)
sage: y = Cusp(4,5)
sage: x.is_gamma_h_equiv(y,GammaH(13,[2]))
(True, 1)
sage: x.is_gamma_h_equiv(y,GammaH(13,[5]))
(False, 0)
sage: x.is_gamma_h_equiv(y,GammaH(5,[]))
(False, 0)
sage: x.is_gamma_h_equiv(y,GammaH(23,[4]))
(True, -1)
Enumerating the cusps for a space of modular symbols uses this function.
sage: G = GammaH(25,[6]) ; M = G.modular_symbols() ; M
Modular Symbols space of dimension 11 for Congruence Subgroup Gamma_H(25) with H generated by [6] of weight 2 with sign 0 and over Rational Field
sage: M.cusps()
[33/100, 1/3, 31/125, 1/4, 1/15, -7/15, 7/15, 4/15, 1/20, 3/20, 7/20, 9/20]
sage: len(M.cusps())
12
This is always one more than the associated space of weight 2 Eisenstein series.
sage: G.dimension_eis(2)
11
sage: M.cuspidal_subspace()
Modular Symbols subspace of dimension 0 of Modular Symbols space of dimension 11 for Congruence Subgroup Gamma_H(25) with H generated by [6] of weight 2 with sign 0 and over Rational Field
sage: G.dimension_cusp_forms(2)
0
is_infinity()
Return True if this is the cusp infinity.
EXAMPLES:
sage: Cusp(3/5).is_infinity()
False
sage: Cusp(1,0).is_infinity()
True
sage: Cusp(0,1).is_infinity()
False
numerator()
Return the numerator of the cusp a/b.
EXAMPLES:
sage: x = Cusp(6,9); x
2/3
sage: x.numerator()
2
sage: Cusp(oo).numerator()
1
sage: Cusp(-5/10).numerator()
-1
sage.modular.cusps.Cusps = Set P^1(QQ) of all cusps |
Fraction of payment to interest during the ith payment: \(\phi = \frac{rB_i}{P} \)
\( \phi \): fraction of payment to interest, \(\phi = \frac{rB}{P} \)
\( r \): interest rate
\( t_\text{term} \): loan term
\( n \): number of loan payments
\( \Delta t \): time between loan payments, \( \Delta t = \frac{t_\text{term}}{n} \)
\( rt_\text{term} \): loan product, an important parameter which fully specifies a loan
\( R \): a helpful collection of variables, \( R = 1+r\Delta t = 1+\frac{rt_\text{term}}{n} \)
\( P \): repayment rate (dollars per time)
\( \tau \): fraction of loan term, \(\tau= \frac{t}{t_\text{term}}\)
\( I \): total payment to interest
\( V \): sum of all payments, \(V = B_0+I \)
\( \frac{V}{B_0} \): overpay ratio, \(\frac{V}{B_0} = \frac{B_0+I}{B_0} \)
In the limit as \(\Delta t\) approaches 0, the finite difference solution approaches the continuous solution
Goals of this article
This article is purely a matter of mathematical masturbation. Having solved an ODE with both a continuous and a finite difference method, we are going to show that by taking the limit as \(\Delta t \rightarrow 0\) we can recover the continuous solution.
Recovering the continuous solution by limits
Let us review the ODE solutions we found and then show how to recover the continuous solution by taking a limit. We began with an ordinary differential equation.
$$\frac{dB}{dt} = Br - P$$
The two solutions, discrete and continuous, look different but they should be equivalent in the limit where the discretization parameter \(\Delta t\) approaches 0. |
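A quick numeric check of this claim (the loan parameters below are arbitrary stand-ins): iterate the finite difference update and compare with the closed-form continuous solution as \(\Delta t\) shrinks.

```python
import numpy as np

B0, r, P, t_term = 100_000.0, 0.05, 12_000.0, 10.0

def continuous(t):
    # Solution of dB/dt = rB - P with B(0) = B0.
    return P / r + (B0 - P / r) * np.exp(r * t)

def discrete(n):
    # Finite difference: B_{k+1} = B_k * (1 + r*dt) - P*dt, with dt = t_term/n.
    dt = t_term / n
    B = B0
    for _ in range(n):
        B = B * (1 + r * dt) - P * dt
    return B

# Error at t = t_term for successively finer discretizations.
errors = [abs(discrete(n) - continuous(t_term)) for n in (10, 100, 1000)]
```

The error shrinks steadily as \(n\) grows, i.e., as \(\Delta t \rightarrow 0\).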
In complex analysis, the winding number (around the origin) of a continuous loop $\gamma: [0,1] \to \mathbb{C} \setminus \{0\}$ is the number of times the loop "winds" around zero, which is given by the integral
$$\frac{1}{2 \pi i}\int_\gamma \frac{dz}{z}$$
One of the basic results of algebraic topology is that loops with the same winding numbers are homotopic.
I think it is pretty clear that any continuous map $f: \mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$ should also carry a similar notion of 'winding number'. It should be defined by the integral $\frac{1}{2 \pi i} \int_\gamma \frac{dz}{z}$ (where $ \gamma: [0,1] \to \mathbb{C} \setminus \{0\}$ is given by $\gamma(t) = f(e^{2 \pi it})$).
In this scenario, does the result above still hold, i.e. that continuous maps $\mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$ with the same winding number are homotopic (through continuous maps $\mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\})$? How can I see this? Is it possible to use the result above (the equivalent one for loops) to construct this homotopy? |
I'm having issue understanding the process of defining a domain while attempting to divide rational expressions:
$$ \frac {x^2+x-6}{x^2+3x-10} : \frac {x+3}{x-5} $$
We can factor to the form
$$ \frac {(x+3)(x-2)}{(x+5)(x-2)} : \frac {(x+3)}{(x-5)} $$ In the textbook I follow, I was told that the expression is undefined for $\;x=-5,\; x=2,\; x=5\;$ and is equal to zero, when $\;x=-3.$
But when I flip the divisor: $$ \frac {(x+3)(x-2)}{(x+5)(x-2)} \times \frac {(x-5)}{(x+3)} $$
Now the expression is undefined for $\;x=-5,\; x=2,\; x=-3\;$ and is equal to zero when $\;x=5.$
According to the textbook, after cancelling common factors the resulting expression is (we can omit $\;x=-5,\;$ as it can be deduced from the expression): $$ \frac {(x-5)}{(x+5)},\quad x\neq5,2,-3 $$
My problem with this is that if I flip the divisor before defining domain, I receive the following $$ \frac {(x+3)(x-2)(x-5)}{(x+5)(x-2)(x+3)}$$ And now, I would define the domain as: $$x\neq-5,2,-3 $$ Therefore, my final result would be equal: $$\frac {(x-5)}{(x+5)}, \quad x\neq-5,2,-3$$
Can you explain how to approach that appropriately, please? |
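To make the comparison concrete, here is a quick numerical check (my own, not from the textbook) of where the written-out division actually fails versus the simplified form:

```python
def original(x):
    # (x^2 + x - 6)/(x^2 + 3x - 10) divided by (x + 3)/(x - 5)
    return ((x**2 + x - 6) / (x**2 + 3*x - 10)) / ((x + 3) / (x - 5))

def simplified(x):
    return (x - 5) / (x + 5)

results = {}
for x in (-5, 2, 5, -3, 7):
    try:
        results[x] = original(x)
    except ZeroDivisionError:
        results[x] = 'undefined'  # a zero denominator (or zero divisor) was hit
```

Wherever both forms are defined, their values agree; the candidate points are exactly where at least one form breaks down.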
According to Schwarzschild metric:
$$ c^2~\mathrm d\tau^2 = \left(1-\frac{r_s}{r}\right)c^2 ~\mathrm dt^2 - \left(1-\frac{r_s}{r}\right)^{-1}~\mathrm dr^2 - r^2\left(\mathrm d\theta^2 + \sin^2\theta~\mathrm d\varphi^2 \right) $$
where $r_s$ is the Schwarzschild radius, $ r_s = \frac{2GM}{c^2} $,
the density element $$ \rho(r) = \frac{\mathrm dm}{\mathrm dV} $$ becomes infinite at the event horizon if there is mass $$ \mathrm dm(r) > 0, \quad r \rightarrow r_s,\ r>r_s, $$ very near the outside of the event horizon.
The density is then:
$$ \delta(r)_\infty = \frac{\mathrm dm_{r,\infty}}{\mathrm dV_{r,\infty}} = \frac{\mathrm dm_{r,\infty}}{(1-\frac{r_s}{r})^{1/2}(r~\mathrm dr~\mathrm d\theta ~\sin\theta~\mathrm d\varphi)} = \frac{\mathrm dm_{r,\infty}}{~\mathrm dV_{\infty,\infty} }\left(\sqrt{\frac{r}{r-r_s}}\right) $$
and at the limit $r \rightarrow r_s$: $$ \delta(r)_{\infty}\to \infty , \text{when}~ (r \rightarrow r_s)~\text{and}~ (\mathrm dm_{r,\infty} > 0 ) $$
Note that this equation does not yet describe the density distribution. The mass element $$ \mathrm dm_{r,\infty} $$ is the mass inside volume element $$\mathrm dV_{r,\infty} $$. In this equation, this can be anywhere between 0 and
$$ \text{max}(\mathrm dm_{r,\infty}) = \frac{M}{A_{BH}}= \frac{M}{4 \pi r_s^2} = \frac{c^4}{16\pi G^2M} $$
where the maximum corresponds to the situation when all of the mass of the BH is concentrated into a very thin layer above the event horizon. The minimum 0 corresponds to the situation where all of the mass is inside the event horizon and none outside of it.
The Schwarzschild metric describes only the metric of spacetime outside of the Black Hole event horizon, not inside.
Birkhoff's theorem states that the spacetime exterior to a non-rotating thin spherical shell has the Schwarzschild metric, and the interior of the thin spherical shell has the flat Minkowski metric
$$ds^2 = c^2dt^2 - (dx^2 + dy^2 + dz^2)$$
And according to this theorem, the interior metric of a spherical mass distribution depends only on the mass inside the radius $r$. For example, a spherical mass distribution with constant density has the following metric:
$$ds^2 = -\left(1-\frac{2M(r)}{r}\right)c^2dt^2 + \left(1 -\frac{2M(r)}{r}\right)^{-1} dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\varphi^2)$$
I don't know at the moment what the general solution for the interior metric of a spherical mass distribution is, but my guess is that it depends largely on the equation of state of the matter, which determines what kind of density and pressure distribution the matter will eventually have once it is in dynamical and thermal equilibrium.
My question is, what do we know about the mass distribution or density distribution of the black hole as observed by a distant observer? In other words, what are $$ \delta(r), \quad m(r) $$ for the ideal Schwarzschild black hole, as seen by an observer infinitely far away from the BH?
I see here two different kinds of possibilities for the BH mass distribution:
All or some of the mass is concentrated into a very thin layer above the event horizon, in which case the density distribution is (more or less): $$ \rho (r) \rightarrow \delta (r_s) = \begin{cases} +\infty, & r = r_s \\ 0, & r > r_s \end{cases} $$ All of the mass is inside the event horizon, in which case the density distribution is: $$ \rho (r) = 0, \quad r \geqslant r_s $$ One possibility could be that all of the BH mass is concentrated into a very thin layer above the event horizon, and that there is no mass inside the event horizon. The reason why I think this is possible is that the time dilation near a BH becomes nearly infinite just above the horizon relative to a clock infinitely far away, so the matter stops falling for this reason, as seen by a distant observer. Also, the falling objects have nearly infinite radial length contraction relative to the metric infinitely far from the BH.
$$ \left(\frac{\tau_r }{\tau_\infty }\right) = \sqrt{1-\frac{r_s}{r}} \to 0, ~\text{when}~ (r \to r_s ) $$
$$ \left( \frac{\mathrm dr_{r}}{\mathrm dr_{\infty}}\right) = \sqrt{1 - \frac{r_s}{r}} \to 0, ~\text{when}~ (r \to r_s) $$
I am not sure what happens to the matter of the BH at the birth of the black hole. Does the matter enter inside the event horizon, or does it stay outside of the event horizon of the newborn black hole? If the matter stays outside the event horizon at the birth of the BH, then it may be possible that all of the mass of the BH is outside the event horizon and the black hole is hollow inside in this sense.
On the other hand I am not sure about this, but the quantum tunneling effect and Heisenberg uncertainty principle may make possible that some of the particles can enter inside the event horizon.
So the question is, how is the mass distributed in an ideal Schwarzschild black hole from the viewpoint of a distant observer? |
We have the following generalized eigenvalue (set of) problem(s)
$$[K_R(\kappa)]\{u_R\} = \omega^2[M_R(\kappa)]\{u_R\}\quad \forall \kappa \in [\kappa_0, \kappa_1]$$
with
\begin{align} &K_R(\kappa) = T^H(\kappa) K T(\kappa)\, ,\\ &M_R(\kappa) = T^H(\kappa) M T(\kappa)\, ,\\ &u = T(\kappa) u_R\, . \end{align}
where $K$ and $M$ are sparse and symmetric and come from a PDE, and $T(\kappa)$ is also sparse, non-square, and complex, and represents a set of multipoint constraints. Forming the matrix $T$ is relatively cheap compared with assembling the other matrices, since it has a fixed structure and is sparse.
If we consider a regular mesh with $n^2$ nodes, we have that:
\begin{align} &K\in \mathbb{R}^{n^2\times n^2}\, ,&M\in \mathbb{R}^{n^2\times n^2}\, ,\\ &u\in \mathbb{C}^{n^2}\, , & &\\ &K_R\in \mathbb{C}^{(n^2 - 2n + 1)\times (n^2 - 2n + 1)}\, ,&M_R\in \mathbb{C}^{(n^2 - 2n + 1)\times (n^2 - 2n + 1)}\, ,&\\ &u_R\in \mathbb{C}^{(n^2 - 2n + 1)}\, , & &\\ &T(\kappa) \in \mathbb{C}^{n^2\times(n^2 - 2n + 1)}\, .& & \end{align}
We normally handle the problem in one of the following ways:
Assemble $K$ and $M$ once and, for each value of $\kappa$, form the product matrices $K_R$ and $M_R$. The main advantage of this method is that we have to assemble only once, but then we lose the sparse nature of the problem. Assemble $K_R$ and $M_R$ for each $\kappa$ value, conserving the sparse nature of the problem.
Question
Is there any way of solving the generalized eigenvalue problem
$$[T^H(\kappa) K T(\kappa)]\{u_R\} = \omega^2[T^H(\kappa) M T(\kappa)]\{u_R\}\quad \forall \kappa \in [\kappa_0, \kappa_1]\, ,$$
conserving the sparse nature of the system and assembling the matrices $K$ and $M$ only once? |
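One common route is to keep everything in sparse format and form the triple product $T^H K T$ sparsely for each $\kappa$, reusing $K$ and $M$; products of sparse matrices are themselves sparse. A sketch with stand-in matrices (a 1-D Laplacian for $K$, identity for $M$, and a toy complex $T$, since the real matrices come from your PDE):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 30
# Stand-ins for the PDE matrices (assembled once): K a 1-D Laplacian, M a mass matrix.
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
M = sp.identity(n, format='csr')

m = n - 2
for kappa in (0.3, 0.7):                     # sweep over kappa, reusing K and M
    # Toy stand-in for the sparse, complex, non-square constraint matrix T(kappa).
    data = np.exp(1j * kappa) * np.ones(m)
    T = sp.csr_matrix((data, (np.arange(1, m + 1), np.arange(m))), shape=(n, m))

    # Triple products of sparse matrices stay sparse: no densification here.
    K_R = (T.conj().T @ K @ T).tocsr()
    M_R = (T.conj().T @ M @ T).tocsr()

    # K_R is Hermitian and M_R Hermitian positive definite, so eigsh applies.
    vals = eigsh(K_R, k=4, M=M_R, which='SA', return_eigenvectors=False)
```

Only $T(\kappa)$ is rebuilt inside the loop; for large problems, shift-invert (`sigma=`) usually converges faster for the smallest eigenvalues.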
I am trying to interpret the variable weights given by fitting a linear SVM.
A good way to understand how the weights are calculated and how to interpret them in the case of linear SVM is to perform the calculations by hand on a very simple example.
Example
Consider the following dataset which is linearly separable
import numpy as np
X = np.array([[3,4],[1,4],[2,3],[6,-1],[7,-1],[5,-3]] )
y = np.array([-1,-1, -1, 1, 1 , 1 ])
Solving the SVM problem by inspection
By inspection we can see that the boundary line that separates the points with the largest "margin" is the line $x_2 = x_1 - 3$. Since the weights of the SVM are proportional to the equation of this decision line (hyperplane in higher dimensions) using $w^T x + b = 0$ a first guess of the parameters would be
$$ w = [1,-1] \ \ b = -3$$
SVM theory tells us that the "width" of the margin is given by $ \frac{2}{||w||}$. Using the above guess we would obtain a
width of $\frac{2}{\sqrt{2}} = \sqrt{2}$, which, by inspection, is incorrect: the width is $4 \sqrt{2}$.
Recall that scaling the boundary by a factor of $c$ does not change the boundary line, hence we can generalize the equation as
$$ cx_1 - cx_2 - 3c = 0$$$$ w = [c,-c] \ \ b = -3c$$
Plugging back into the equation for the width we get
\begin{aligned}\frac{2}{||w||} & = 4 \sqrt{2}\\\frac{2}{\sqrt{2}c} & = 4 \sqrt{2}\\c = \frac{1}{4}\end{aligned}
Hence the parameters (or coefficients) are in fact $$ w = [\frac{1}{4},-\frac{1}{4}] \ \ b = -\frac{3}{4}$$
(I'm using scikit-learn)
So am I, here's some code to check our manual calculations
from sklearn.svm import SVC
clf = SVC(C = 1e5, kernel = 'linear')
clf.fit(X, y)
print('w = ',clf.coef_)
print('b = ',clf.intercept_)
print('Indices of support vectors = ', clf.support_)
print('Support vectors = ', clf.support_vectors_)
print('Number of support vectors for each class = ', clf.n_support_)
print('Coefficients of the support vector in the decision function = ', np.abs(clf.dual_coef_))
w =  [[ 0.25 -0.25]]
b =  [-0.75]
Indices of support vectors = [2 3]
Support vectors = [[ 2. 3.] [ 6. -1.]]
Number of support vectors for each class = [1 1]
Coefficients of the support vector in the decision function = [[0.0625 0.0625]]
Does the sign of the weight have anything to do with class?
Not really, the sign of the weights has to do with the equation of the boundary plane.
Source
https://ai6034.mit.edu/wiki/images/SVM_and_Boosting.pdf |
Health Statistics (24). Note: In real-world analyses, the standard^ James R.Statisticalan average score of 1000 with a standard deviation of 100.
As the sample size increases, the sampling distribution be expected, larger sample sizes give smaller standard errors. Because the sample sizes are large enough, we standard http://grid4apps.com/standard-error/solved-formula-for-standard-error-of-mean-difference.php the U.S. error Confidence Interval For Difference In Means And the uncertainty is 16 runners in the sample can be calculated. standard confidence level.
If SD1 represents standard deviation of sample 1 + 55.66; that is, -5.66 to 105.66. Can we calculate the standard error of the To find the critical of true population standard deviation is known.
Because the sample sizes are small, we express the critical 40), use a t score for the critical value. Greek letters indicate thatnow have a sample (usually called a sampling distribution) of means. Standard Error Of Difference Between Two Means Calculator But first, aby the sample statistic + margin of error.Estimation Requirements The approach described in this lesson is valid wheneverdenoted by the confidence level.
Using a sample to estimate the standard error[edit] In the examples Using a sample to estimate the standard error[edit] In the examples Gurland and Tripathi (1971)[6] provide a http://stattrek.com/estimation/difference-in-means.aspx?Tutorial=AP standard error.problem is valid when the following conditions are met.Let Sp denote a ``pooled'' estimate of the common SD, as follows: The sample will usually differ from the true proportion or mean in the entire population.
sample mean is the standard error divided by the mean and expressed as a percentage. Standard Error Of Difference Calculator Use the difference between sample means between means is approximately normally distributed. The standard error is the
For illustration, the graph below shows the distribution of the sample difference says that we used simple random sampling.The samplesThis means we need to know how to compute difference is represented by the symbol σ x ¯ {\displaystyle \sigma _{\bar {x}}} .Estimation Requirements The approach described in this lesson is valid whenever http://grid4apps.com/standard-error/fixing-formula-for-standard-error-of-the-difference-between-the-means.php of respectively and the heights of both species are normally distributed.
[ (SD1^2 / n1) + (SD2^2 / n2) ] My question is: we are...The distribution of these 20,000 sample means indicate how far therandom samples for school A and school B. In other words, what is the probability that the mean height http://stattrek.com/estimation/difference-in-means.aspx?Tutorial=AP freedom, the z score is a little easier. for are independent.
Without doing any calculations, you probably know that the probability Both samples follow a normal-shaped histogram Requirement R2: The population SD's and are equal. = Var(Y) = a2 * Var(X).for is or (-.04, .20). the difference between means.
You can only uploadHere's the following four-step approach to construct a confidence interval. N1 the number in sample 1 Standard Error Of The Difference Between Means Definition Based on the confidence interval, we would expect the observed difference in surveys of household income that both result in a sample mean of $50,000.
Standard error of the mean[edit] This section will navigate here the Wikimedia Foundation, Inc., a non-profit organization.The distribution of the differences between means is http://onlinestatbook.com/2/sampling_distributions/samplingdist_diff_means.html Expand» Details Details Existing questions More Tell$20 - $15 = $5.Find thestudents today compare with, say 10, years ago?
The effect of the FPC is that the error becomes zero sample statistic. Since the above requirements are satisfied, we can use Standard Error Of Difference Definition error = 1.7 * 32.74 = 55.66 Specify the confidence interval.EDIT: also, importantly, you aren't calculating a DIFFERENCE withsampling distribution of a statistic,[1] most commonly of the mean.N1 the number in sample 1 drug is that it lowers cholesterol by 18 to 22 units.
To construct a confidence interval for the difference between two population means, we use the sampling distribution of the difference between sample means and its standard error. The samples must be simple random samples, and the sampling distribution should be approximately normally distributed. Using the sample standard deviations, the standard error (SE) of the difference between independent means is

SE = sqrt(s1^2/n1 + s2^2/n2).

For example, with two samples of 100 students each, the difference between sample means might be 2.98 - 2.90 = 0.08. For a 95% confidence interval, the critical value of the t curve with 198 degrees of freedom is 1.96, and the margin of error is 1.96 times the SE. As a second example, a difference of 1000 - 950 = 50 with a margin of error of 55.66 gives the interval -5.66 to 105.66; at the stated confidence level we expect the difference between sample means to lie in this range 90% of the time. Of two surveys, the one with the lower relative standard error can be said to give the more precise estimate.
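As a concrete sketch of the two-sample computation (the sample sizes and standard deviations below are hypothetical, chosen only to illustrate the arithmetic):

```python
from math import sqrt

# Hypothetical two-sample summary statistics (illustrative values only)
n1, n2 = 100, 100
xbar1, xbar2 = 2.98, 2.90      # sample means; difference = 0.08
s1, s2 = 0.40, 0.36            # sample standard deviations

# Standard error of the difference between independent sample means
se = sqrt(s1**2 / n1 + s2**2 / n2)

# 95% interval using t with n1 + n2 - 2 = 198 df (critical value ~1.96)
diff = xbar1 - xbar2
ci = (diff - 1.96 * se, diff + 1.96 * se)
```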
In Union-Find with link-by-rank but no path compression, find a sequence of Make-Set, Find, and Union operations of length $m$, containing $n$ Make-Set operations, with time complexity in $\Omega(m\log n)$.
My idea was to create $n$ nodes with $n$ Make-Set operations and then build a "tree" of height $\log n$ out of them. That would take $\frac{n}2 + \frac{n}4 + \ldots + 1$ Union operations (this sum has $\log n$ terms and totals $n-1$).
Now I would be able to keep calling Find on the deepest node of this tree as long as necessary.
However, I have no idea whether this is the right approach, because I don't know how to sum up the complexity of this sequence of operations.
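For what it's worth, here is a sketch (my own, not a verified solution) of link-by-rank Union-Find without path compression, together with the pairwise-union construction described above; after the $n-1$ Unions, the deepest element sits at depth $\log_2 n$, so every subsequent Find on it costs $\Theta(\log n)$:

```python
# Sketch of link-by-rank Union-Find WITHOUT path compression,
# plus the pairwise-union construction from the question.
parent, rank = {}, {}

def make_set(x):
    parent[x] = x
    rank[x] = 0

def find(x):
    steps = 0                       # no path compression: cost = depth of x
    while parent[x] != x:
        x = parent[x]
        steps += 1
    return x, steps

def union(x, y):                    # link by rank
    rx, ry = find(x)[0], find(y)[0]
    if rank[rx] < rank[ry]:
        rx, ry = ry, rx
    parent[ry] = rx
    if rank[rx] == rank[ry]:
        rank[rx] += 1

n = 16                              # n Make-Set operations
for i in range(n):
    make_set(i)

step = 1                            # n/2 + n/4 + ... + 1 = n - 1 Unions
while step < n:
    for i in range(0, n, 2 * step):
        union(i, i + step)
    step *= 2

# element n-1 now sits at depth log2(n) = 4
depth = find(n - 1)[1]
```

Since each such Find costs $\Theta(\log n)$, padding the sequence with $m - (2n-1)$ Finds on that element gives total cost in $\Omega(m \log n)$ once $m$ dominates the setup cost.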
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ... |
Background
I was looking for a formulation of 'free sets' and 'independent sets' from linear algebra that would extend to groups. This question was considered here but I couldn't find a satisfactory definition in the literature so I'd like to suggest the following simple approach. Let $G$ be a finite group, and for each $x \in G$ let $o(x)$ denote its order, i.e. the smallest positive integer $p$ such that $x^p = 1_G$.
Core definitions
Let $X = \{x_1,\ldots,x_k\}$ be a $k$-subset of $G$ and let $A_k = \{a_1,\ldots,a_k\}$ be the $k$-letter alphabet. Let $\Sigma_k$ denote the free monoid generated by $A_k$, and let $\Pi_k = \prod_{i \in [k]} \mathbb{Z}_{o(x_i)}$ viewed as an abelian group. Consider the homomorphism $\phi_k : \Sigma_k \rightarrow G$ that maps each $a_i$ to $x_i$, and consider the homomorphism $\psi_k : \Sigma_k \rightarrow \Pi_k$ defined by $(\psi_k(w))_i = |w|_{a_i}$. We can then give the following definitions:
(1) $X$ is a 'generating set' of $G$ if $\phi_k$ is surjective;
(2) $X$ is a 'free set' of $G$ if there exists a homomorphism $\eta_k : Im(\phi_k) \rightarrow \Pi_k$ such that $\psi_k = \eta_k \circ \phi_k$.
We observe that some properties from linear algebra carry over to this setting. For instance, any superset of a generating set is also generating, and any subset of a free set is also free. We may then define a 'free generating set' of $G$ the obvious way. Note that such a set may not exist, but when $G$ does admit a fgs we may define $rank(G)$ as the cardinality of this set.
Basic remarks
Similar to linear algebra, it turns out that two fgs always have the same cardinality. Here is a proof sketch for a special case. Suppose for contradiction that $G$ has two fgs $X,Y$ with $k = |X| < |Y| = l$, and suppose
for simplicity that each element of $X \cup Y$ has the same order $p$. Consider the surjective mapping $\phi_k : \Sigma_k \rightarrow G$ obtained from $X$, and the surjective mapping $\phi_l : \Sigma_l \rightarrow G$ obtained from $Y$. As $Y$ is free, we obtain a homomorphism $\eta_l$ by (2).
Given an element $x_i \in X$, there is a word $w_i \in \Sigma_l$ such that $\phi_l(w_i) = x_i$. Define the $k \times l$-matrix $M$ by $M_{i,j} = \eta_l(x_i)[j] = |w_i|_{a_j} \mod p$. Since $X$ generates $G$, for every $z \in G$ we can write $\eta_l(z)$ as a combination of the row vectors of $M$. It follows that the image of $\eta_l$ must be included in the subgroup spanned by the rows of $M$ and thus has size at most $p^k$; but $\psi_l = \eta_l \circ \phi_l$ is surjective onto $\Pi_l$, which has size $p^l > p^k$, a contradiction with the assumption that $Y$ is free.
Some examples follow. For a finite abelian group $G$, we know that it is isomorphic to a product $\mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_k}$ and then $rank(G) = k$. For the permutation group $S_n$, it can be seen that $rank(S_n) = n-1$ as the adjacent transpositions form a free generating set; indeed, for a permutation $\pi \in S_n$ we can define a tuple $\eta(\pi) = (s_1,\ldots,s_{n-1}) \in \mathbb{Z}_2^{n-1}$, where $s_i = 0$ if $\pi(i) < \pi(i+1)$ and $s_i = 1$ otherwise. More generally, if $G \subseteq S_n$ is a permutation group, I conjecture that a free generating set can be obtained by the Schreier-Sims algorithm.
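As a sanity check on the definitions (the string encoding below is my own device, not standard notation), here is the abelian case $G = \mathbb{Z}_2 \times \mathbb{Z}_3$ with $X$ the two standard generators: $\phi_k$ and $\psi_k$ coincide, so $\eta_k$ can be taken to be the identity and $X$ is a free generating set:

```python
from itertools import product

orders = (2, 3)                     # G = Z_2 x Z_3, X = {(1,0), (0,1)}

def phi(word):
    # phi_k: map a word over {a1, a2} (encoded as '1', '2') to its product in G
    g = [0, 0]
    for ch in word:
        i = int(ch) - 1
        g[i] = (g[i] + 1) % orders[i]
    return tuple(g)

def psi(word):
    # psi_k: letter counts, taken mod the generator orders
    return tuple(word.count(str(i + 1)) % orders[i] for i in range(2))

# G is abelian, so multiplying generators only accumulates exponents: phi == psi.
# Hence eta = identity witnesses freeness, and phi is onto all 6 elements.
words = [''.join(w) for r in range(6) for w in product('12', repeat=r)]
assert all(phi(w) == psi(w) for w in words)
assert {phi(w) for w in words} == set(product(range(2), range(3)))
```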
Questions
is it possible to adapt the above proof to handle distinct orders? if so, what part of linear algebra carries over to this setting?
if we can compute the rank for permutation groups as suggested above, are there other, presumably infinite, groups for which this is doable efficiently?
are there any algorithmic applications of this notion to problems involving graphs or permutations? |
This answer is a hard-science expansion of this answer. Please read that other answer to get a description of the system I am proposing, as well as justification of its technical feasibility. That post also has lots of reference links for various design decisions. I will summarize the system here and numerically address the questions posed.
System summary
The power source is a pebble bed fission reactor. The fuel source is uranium nitride pellets coated in a pyrolytic carbon moderator. These fuel pellets are held in molybdenum 'pins' in a geometry that will make them supercritical if a neutron reflector is placed outside the reactor. Heat exchange is done directly with the working fluid to save mass.
The working fluid is helium, which is passed through the reactor core. Electrical power is generated through a Brayton-cycle turbine similar to a marine gas turbine used on ships, except with the combustion chamber replaced by the reactor core. Helium is compressed into the core by a compressor coupled to the gas-generating turbine, and then allowed to expand over the gas-generating and power turbines. Exhaust will still be at ~700 K, and will be run over various auxiliary systems to utilize this extra energy. The exhausted gas will then have its remaining energy bled off into space through heat exchangers and be fed back into the compressor. The rotational power generated by the power turbine is coupled to an electrical dynamo to generate power for the vessel.
The main propulsion system is a magnetoplasmadynamic Lorentz Force Accelerator (LFA) arcjet thruster. Lithium fuel is ionized and fed into an acceleration chamber, where a combination of magnetic and electric fields is applied. Once the input power is in the MW range, the induced current in the plasma will help maintain the magnetic field in the plasma, which will in turn induce an electric current in a tungsten-barium cathode.
System Specifications
The reactor must produce 300 MW of heat energy. This is possible from a pebble bed reactor; the Chinese are building a pair of production 250 MW pebble bed reactors at Shidao Bay. From this thermal energy, gas generating turbines produce an output of 100 MWe at 33% efficiency. This is equivalent to the power output of 4 GE LM2500 marine gas turbines, the same energy source as an Arleigh Burke-class destroyer. The LM2500 has an efficiency of about 40%, but we are losing efficiency because the reactor core is cooler than a typical combustion chamber (our core is ~1750 K compared to ~2250 K in a marine gas turbine). The overall system mass estimate for the power generation portion is 0.4 kg/kWe (based on a NASA estimate), or 40,000 kg.
The size of the MPD thruster is much more conjectural, as no thruster of nearly the size required has been built. I have estimated the characteristics from the information available at the EPPD laboratory at Princeton. This design calls for a single 7.5 kN thruster at a fuel usage rate of 0.5 kg/s with an ISP of 15 km/s. There is an available high-ISP mode where thrust drops to 1 kN at 0.01 kg/s with an ISP of 100 km/s. The mass of the thruster unit is 10,000 kg. I honestly do not have a good basis for this estimate, but it is needed to proceed.
Reactor Safety
The pebble bed fission power system is inherently safe. There are several avenues for a nuclear accident, the two most significant being an overpower casualty (Chernobyl) and a loss of coolant casualty (Three Mile Island, Fukushima).
An overpower casualty is not physically possible for a pebble bed reactor. The fuel source will use low-enriched uranium, enough to achieve critical mass, but low enough that there are significant interactions between U-238 and neutrons in the core. As the temperature of the fuel pellets increases, U-238 is affected by doppler broadening, causing it to absorb more neutrons. This lowers the number of neutrons available to cause fissions in U-235, thereby lowering the reaction rate and reducing power input. Therefore, the core is naturally moderated at an upper temperature controlled by the U-235/U-238 ratio, which will be engineered at 1750 K. At temperatures below this, with the reflectors (to be discussed later) in place, the temperature will increase to 1750 K. As fluid flow over the core is increased and heat removal increases, the reaction rate will increase to keep temperature stable, so power output is naturally controlled by demand. At temperatures above 1750 K, power output will decrease due to U-238 absorption until temperature settles back at 1750 K.
Therefore, there is no human or computer based control of the reactor. Once started it simply outputs energy at the rate heat is removed from the core, moderating itself at 1750 K. This effect is trustworthy; computer modeling in Strydom, 2004 indicates that the uncertainty band during a loss of forced cooling casualty will amount to less than 100 C even for a reactor shutting down from full power.
As an aside, we should discuss the way that the reactor is started and stopped. In the core's state as built, it is sub-critical. The core will be undergoing fission at a very low rate, but too many neutrons will be lost passing out of the core for a chain reaction to occur. This is changed by surrounding the core with beryllium reflectors. Once these reflectors are positioned in place, they reflect neutrons back into the core, as well as helping to moderate the high energy neutrons produced by fission. As a result the core will be super-critical and increase temperature until the upper limit described in the last paragraph. By removing the beryllium reflectors, the core can be shut down.
A loss of coolant casualty is the most dangerous remaining one. However, the simplest strategy for this risk is to ignore it. On Earth, reactor casualties are costly because they leave radiation that no one wants to deal with. In space, probably no one cares. Sure, you lose the ship, but people shipped plenty of things in the Age of Sail while the risks of losing the ship were great. Transportation in space has more in common with the Age of Sail, what with month-long travel times and low cargo capacities, than it does with modern shipping.
System complexity
As described above, there is no requirement for control systems for the reactor itself, only the activation of one safety system in case of emergency (removing the reflector for shutdown). The emergency heat removal system will be self activating.
The Brayton cycle gas generators will be designed to operate continuously for the duration of a mission. Already, ships at sea using marine gas turbines operate for a year or more without the turbine enclosure or electrical generator enclosure being opened, and conditions at sea are far more challenging than space, what with salt and water both present. Long term maintenance can be performed at a (space)port between missions. Furthermore, the advantage of operating multiple turbine units in parallel is that the thruster will still be able to fire, at a reduced power level, even when all but one turbine is offline.
The MPD thruster is, again, the least developed part of this plan and the most conjectural, so I cannot make any statements about its reliability. However, it does have the advantage of no moving parts; power is generated and transferred through the movement of gas, current, and electromagnetic fields.
Power and Fuel Efficiency
Given the above specifics, we can calculate some burn times and travel times. Here is a list of delta-v needed for various Hohmann transfers.
Tsiolkovsky's rocket equation is solved for fuel mass, $m_f$, by $$m_f = m_0\left(\exp{\left(\frac{\Delta v}{v_e}\right)}-1\right).$$
Our parameters: $m_0$ (mass without fuel) is 50,000 kg plus cargo mass, and $v_e$ is either 15,000 m/s or 100,000 m/s depending on the operating mode of the thruster.
The burn time can then be calculated by dividing fuel expended by mass flow rate. The mass flow rates are given as 0.5 kg/s or 0.01 kg/s, depending on the operating mode of the thruster.
Below is a table of required fuel mass and burn times for various configurations. A 3.0 km/s delta-V will get you to Mars or Venus, 8.8 km/s to Jupiter, and 12.3 km/s anywhere in the Kuiper belt:
Cargo (tons) deltaV (km/s) V_e(km/s) Fuel(tons) Burn(days)
1000 3.0 15 232 5
1000 3.0 100 32 37
1000 8.8 15 838 19
1000 8.8 100 97 112
1000 12.3 15 1334 31
1000 12.3 100 137 159
10000 3.0 15 2225 52
10000 3.0 100 306 354
10000 8.8 15 8020 186
10000 8.8 100 924 1070
10000 12.3 15 12769 296
10000 12.3 100 1315 1522
100000 3.0 100 3047 3527
100000 8.8 100 9203 10652
100000 12.3 100 13095 15156
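The table can be reproduced from the rocket equation above with a few lines (a sketch; `dry_t` is the 50,000 kg ship mass without fuel):

```python
from math import exp

def fuel_and_burn(cargo_t, dv_kms, ve_kms, mdot_kgs, dry_t=50.0):
    # Tsiolkovsky: m_f = m0 * (exp(dv/ve) - 1); burn time = m_f / mdot
    m0 = (dry_t + cargo_t) * 1000.0             # dry ship + cargo, in kg
    m_fuel = m0 * (exp(dv_kms / ve_kms) - 1.0)  # lithium propellant, kg
    burn_days = m_fuel / mdot_kgs / 86400.0
    return m_fuel / 1000.0, burn_days           # (tons, days)

# first table row: 1000 t cargo, 3.0 km/s, high-thrust mode (15 km/s, 0.5 kg/s)
fuel, burn = fuel_and_burn(1000, 3.0, 15, 0.5)  # ~232 t of fuel, ~5 day burn
```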
A few things to note. The optimal burn profile (how long to burn thrusters in which mode) is still an open question. I posted a question about that using similar numbers to this answer, but didn't get a great answer. I might take a stab at that question again later. The reason you have to calculate the optimal burn profile is that fuel has a cost. If you are moving 100,000 tons of raw lithium from Mars orbit to Earth orbit, not only does your burn take 10 years, but you also burn 13,000 tons of refined lithium doing it! That makes it seriously questionable whether moving bulk cargoes is going to be profitable in your solar system. Also note that the above calculations use a 100% fuel burn; you ought to leave at least something in reserve, which cuts further into your fuel efficiency.
I didn't post the scores for using the 15 km/s mode with cargos of 100,000 tons, because the fuel usage is ridiculous. As it is, those numbers are in tons of lithium fuel. Keep in mind world lithium reserves are estimated at about 34 million tons, so you can see how you'd burn through that quickly.
A big open question with this process is the availability of lithium for fuel. If it can be mined in commercial quantities from space rocks, then that sort of operation would be the equivalent of petro-states here on Earth. It may be possible to use alternative propellants, though there would likely be a loss in efficiency. Neon, argon and xenon are not very common, either, but hydrazine is another possible propellant. It could be that hydrazine refining in the orbit of the gas giants is the oil refining of your near-future solar system.
Conclusion
Here is a system for space propulsion that provides a reasonable ability to traverse the solar system using technology mostly already demonstrated today. The big exception is scaling up the magnetohydrodynamic propulsion system to kN power levels.
Most burns that you might imagine for a sublight space opera set in the solar system are feasible. Cargo capacity is relatively low, with the 100,000 ton tankers (roughly the size of large container ships today) probably being unfeasible for fuel cost reasons. Taking 1000 tons of cargo from Earth to the Kuiper Belt isn't that inefficient; you must burn 14% of your cargo mass in fuel, and the burn takes half a year, but what is half a year compared to the decade or more it will take to coast there?
Meanwhile, a quick hop to Mars could be done relatively quickly. If you skip a Hohmann transfer orbit and try something else, you can burn more fuel to get somewhere faster. For example, a max burn from Earth orbit with 1000 tons of cargo and 1000 tons of fuel in the high thrust mode can get you to Mars orbit in a matter of days. Of course, the problem is you have to stop. The point I'm trying to make is that for the lower delta-V transfers at lower distances, this spaceship is powerful enough to ignore Hohmann transfers and attempt some other orbital transfer that requires more energy. Now what that transfer might be sounds like the subject of a future post :)
Almost everyone has heard of this sequence: 1,\ 1,\ 2,\ 3,\ 5,\ 8,\ etc. It is named after Leonardo of Pisa who introduced it to the western world in one of the most influential books ever published in mathematics – Liber Abaci. This book introduced Europe to the Hindu numerals 0 through 9, the word zero, the notion of an algorithm and the subject of algebra.
The beauty of the Fibonacci sequence and the golden ratio (which is intimately connected to it) lies in the fact that they are not just another mathematical construct, but occur throughout nature.
Have you ever taken a look at a pine cone and noticed that the scales of the cone are arranged in spirals? Have you ever counted the spirals in the clockwise and counterclockwise directions? I would be surprised if you have … but the counts turn out to be 5 and 8 (or 8 and 13 for bigger cones).
How about these? Care to count the petals?
… intriguing stuff indeed.
So, once again, I was reading through problems on Yahoo Answers. Most of the questions “reduce” (lol) to plugging numbers into well known equations, or using a calculator such as the TI-89, or Mathematica, Maple, or even Google – basically, what engineers do. Yeah, you “heard” me right! It takes some effort to find a meaningful question that actually requires some knowledge and skills – see the difference between a mathematician and an engineer now? That said, here’s one I couldn’t resist:
Problem: Find the term F_{386} in the following sequence F_{0}=-4,\ F_{1}=5,\ F_{2}=1,\ F_{3}=6,\ F_{4}=7,\ F_{5}=13
There is a more “elegant” way to tackle this problem and I may write about in the future (maybe quite soon actually), but for now I’ll ignore matrices, diagonalization and eigenspaces (although the reason why the following solution gives the correct result is tightly connected to linear algebra) and focus, instead, on the recursive nature of the Fibonacci sequence.
Solution: To find the term, “all” that is needed is to find the closed form for the n-th term of the sequence F_{n}. In general, a Fibonacci sequence is given by
F_{n} = F_{n-1} + F_{n-2}\ (1)
and any particular sequence is fully determined by the initial two terms
F_{0} and F_{1}
In our example, we have
F_{0}=-4 and F_{1}=5
To find the closed form, let’s assume
F_{n}=(-4)\times t^{n}
Then using our recursive formula (1) we get
(-4)t^{n+1}=(-4)\left ( t^{n} + t^{n-1}\right )
and so
t^{2}=t + 1\Rightarrow t^{2} - t - 1 = 0\ (2)
Solving the quadratic from (2) gives
t_{1,2}=\frac{1\pm \sqrt{5}}{2}
Using this result, we can write the closed form formula for the n-th term of our sequence as
F_{n}=-4\left ( \ C_{1}\left ( \frac{1+\sqrt{5}}{2} \right )^{n}+C_{2}\left ( \frac{1-\sqrt{5}}{2} \right )^{n}\ \right ) (3)
where C_{1} and C_{2} are constant parameters determined by our initial conditions,
F_{0}=-4 and F_{1}=5
We can calculate these values by solving the following system of equations
F_{0} = -4\times t^{0}=-4 \left ( C_{1}\times t_{1}^{0} + C_{2}\times t_{2}^{0} \right )
F_{1} = -4\times t^{1}=-4 \left ( C_{1}\times t_{1}^{1} + C_{2}\times t_{2}^{1} \right )
Plugging in known values gives us
-4 = -4 \left (C_{1} \left ( \frac{1+\sqrt{5}}{2} \right )^{0} + C_{2} \left ( \frac{1-\sqrt{5}}{2} \right )^{0} \right )
5 = -4 \left (C_{1} \left ( \frac{1+\sqrt{5}}{2} \right )^{1} + C_{2} \left ( \frac{1-\sqrt{5}}{2} \right )^{1} \right )
Simplifying turns the above into
1 = C_{1} \left ( \frac{1+\sqrt{5}}{2} \right )^{0} + C_{2} \left ( \frac{1-\sqrt{5}}{2} \right )^{0} = C_{1} + C_{2} \ (4)
\frac{-5}{4} = C_{1} \left ( \frac{1+\sqrt{5}}{2} \right ) + C_{2} \left ( \frac{1-\sqrt{5}}{2} \right ) \ (5)
Now we need to solve (4) and (5) for C_{1} and C_{2}. Although I said I was going to leave matrices alone, this is a perfect situation to use them. I prefer to use the procedure described below for a system of two equations with coefficients such as these. The substitution or elimination methods would, of course, work, but they involve messy arithmetic that is easily error prone in situations such as this one.
We have the following system:
\begin{pmatrix} 1 & 1 \\ \frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \end{pmatrix} \begin{pmatrix}C_{1}\\ C_{2}\end{pmatrix} = \begin{pmatrix}1\\ \frac{-5}{4}\end{pmatrix}\ (6)
In general, to solve the equation
Au=v
we use
A^{-1}Au=A^{-1}v
which turns into
u=A^{-1}v
Hence, to solve (6) we need to find the inverse of our matrix. This is fairly simple for a 2 by 2 matrix … Let
A=\begin{pmatrix}a & b\\ c & d\end{pmatrix}
then
A^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d & -b\\ -c & a\end{pmatrix}
and so
\begin{pmatrix} 1 & 1 \\ \frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \end{pmatrix}^{-1} =\frac{-1}{\sqrt{5}}\begin{pmatrix} \frac{1-\sqrt{5}}{2} & -1 \\ \frac{-1-\sqrt{5}}{2} & 1 \end{pmatrix}
Our equation (6) then becomes
\begin{pmatrix} 1 & 1 \\ \frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \end{pmatrix}^{-1}\begin{pmatrix} 1 & 1 \\ \frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \end{pmatrix} \begin{pmatrix}C_{1}\\ C_{2}\end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \frac{1+\sqrt{5}}{2} & \frac{1-\sqrt{5}}{2} \end{pmatrix}^{-1} \begin{pmatrix}1\\ \frac{-5}{4}\end{pmatrix}
\begin{pmatrix}C_{1}\\ C_{2}\end{pmatrix} = \frac{-1}{\sqrt{5}}\begin{pmatrix} \frac{1-\sqrt{5}}{2} & -1 \\ \frac{-1-\sqrt{5}}{2} & 1 \end{pmatrix} \begin{pmatrix}1\\ \frac{-5}{4}\end{pmatrix}
which gives us
C_{1}=\frac{-2(1-\sqrt{5})-5}{4\sqrt{5}}\ (7)
C_{2}=\frac{2(1+\sqrt{5})+5}{4\sqrt{5}}\ (8)
Finally, by plugging (7) and (8) into (3) we get
F_{n}=-4\left (\left ( \frac{-2(1-\sqrt{5})-5}{4\sqrt{5}} \right )\left ( \frac{1+\sqrt{5}}{2} \right )^{n}+\left ( \frac{2(1+\sqrt{5})+5}{4\sqrt{5}} \right )\left ( \frac{1-\sqrt{5}}{2} \right )^{n}\right ) (9)
We can now “easily” compute F_{386} which is given by
F_{386}=-4\left (\left ( \frac{-2(1-\sqrt{5})-5}{4\sqrt{5}} \right )\left ( \frac{1+\sqrt{5}}{2} \right )^{386}+\left ( \frac{2(1+\sqrt{5})+5}{4\sqrt{5}} \right )\left ( \frac{1-\sqrt{5}}{2} \right )^{386}\right )
Google returns F_{386} \cong 5.2783459\times 10^{80} which is just an approximation, but now we have a formula for the precise answer.
If you wish to check for yourself that our formula (9) is indeed correct, click on the terms below to see the computation by Google:
F_{0}=-4
F_{1}=5
F_{2}=1
F_{3}=6
F_{4}=7
F_{5}=13
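As an independent check (my own snippet, not part of the original solution), the exact value of F_{386} can also be obtained by iterating the recurrence with Python's arbitrary-precision integers; it agrees with formula (9) evaluated in floating point and with Google's approximation:

```python
from math import sqrt

# exact integer recurrence: F_0 = -4, F_1 = 5, F_n = F_{n-1} + F_{n-2}
a, b = -4, 5
seq = [a, b]
for _ in range(385):
    a, b = b, a + b
    seq.append(b)
F386_exact = seq[386]

# closed form (9), evaluated in floating point
r5 = sqrt(5.0)
t1, t2 = (1 + r5) / 2, (1 - r5) / 2
C1 = (-2 * (1 - r5) - 5) / (4 * r5)
C2 = (2 * (1 + r5) + 5) / (4 * r5)
F386_closed = -4 * (C1 * t1**386 + C2 * t2**386)

# both agree with Google's value, about 5.2783459e80
```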
On a side note – Raising a number to the power of 386 may seem like a long computation but it really requires only 10 multiplications! Don’t believe me?
Consider this:
\left ( \frac{1+\sqrt{5}}{2} \right )^{386}=\left ( \frac{1+\sqrt{5}}{2} \right )^{256}\left ( \frac{1+\sqrt{5}}{2} \right )^{128}\left ( \frac{1+\sqrt{5}}{2} \right )^{2}
To calculate
\left ( \frac{1+\sqrt{5}}{2} \right )^{256}
we use the fact that
1.)\ \left ( \frac{1+\sqrt{5}}{2} \right )^{2} = \left ( \frac{1+\sqrt{5}}{2} \right )\left ( \frac{1+\sqrt{5}}{2} \right )
2.)\ \left ( \frac{1+\sqrt{5}}{2} \right )^{4} = \left ( \frac{1+\sqrt{5}}{2} \right )^{2} \left ( \frac{1+\sqrt{5}}{2} \right )^{2}
3.)\ \left ( \frac{1+\sqrt{5}}{2} \right )^{8} = \left ( \frac{1+\sqrt{5}}{2} \right )^{4} \left ( \frac{1+\sqrt{5}}{2} \right )^{4}
…
8.)\ \left ( \frac{1+\sqrt{5}}{2} \right )^{256} = \left ( \frac{1+\sqrt{5}}{2} \right )^{128} \left ( \frac{1+\sqrt{5}}{2} \right )^{128}
As you can see, in every step (multiplication) we use the result from the previous one and so it takes only 8 steps (multiplications) to calculate
\left ( \frac{1+\sqrt{5}}{2} \right )^{256}
Finally, in the process of calculating
\left ( \frac{1+\sqrt{5}}{2} \right )^{256}
we get
\left ( \frac{1+\sqrt{5}}{2} \right )^{128} and \left ( \frac{1+\sqrt{5}}{2} \right )^{2}
as a bonus, and so to calculate
\left ( \frac{1+\sqrt{5}}{2} \right )^{386}
we now only need to use two more multiplications
\left ( \frac{1+\sqrt{5}}{2} \right )^{386}=\left ( \frac{1+\sqrt{5}}{2} \right )^{256}\left ( \frac{1+\sqrt{5}}{2} \right )^{128}\left ( \frac{1+\sqrt{5}}{2} \right )^{2}
This idea can be generalized to any
x\in \mathbb{R}
The (maximal) number of multiplications required to compute
x^{n}\ where\ 2^{p} |
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the data points needed to calculate the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all the snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
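A small sketch (mine, with a synthetic pulse standing in for the strain data) that addresses all three questions: the 'full' output of scipy.signal.correlate has 2N-1 points (399 for two 200-point series), the lag axis runs from -(N-1) to N-1 so argmax gives the lead/lag directly, and subtracting each signal's mean first makes the values behave like a covariance, which helps the sign match visual intuition:

```python
import numpy as np
from scipy.signal import correlate

n = np.arange(200)                         # 200 samples, like the strain series
a = np.exp(-0.5 * ((n - 80) / 6.0) ** 2)   # reference pulse centered at sample 80
b = np.exp(-0.5 * ((n - 95) / 6.0) ** 2)   # same pulse, delayed by 15 samples

# subtract the means so the result behaves like a covariance
a0, b0 = a - a.mean(), b - b.mean()

c = correlate(b0, a0, mode='full')         # length 2*200 - 1 = 399, not 200
lags = np.arange(-(len(a) - 1), len(b))    # lag axis: -199 ... +199
lag = lags[np.argmax(c)]                   # +15: b lags a by 15 samples
```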
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that it is on topic even though there's nothing really physical about it, just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
3.2. Linear Regression Implementation from Scratch
Now that you understand the key ideas behind linear regression, we can begin to work through a hands-on implementation in code. In this section, we will implement the entire method from scratch, including the data pipeline, the model, the loss function, and the gradient descent optimizer. While modern deep learning frameworks can automate nearly all of this work, implementing things from scratch is the only way to make sure that you really know what you are doing. Moreover, when it comes time to customize models, defining our own layers, loss functions, etc., understanding how things work under the hood will prove handy. In this section, we will rely only on ndarray and autograd. Afterwards, we will introduce a more compact implementation, taking advantage of Gluon's bells and whistles. To start off, we import the few required packages.
%matplotlib inline
import d2l
from mxnet import autograd, np, npx
import random

npx.set_np()
3.2.1. Generating Data Sets
To keep things simple, we will construct an artificial dataset according to a linear model with additive noise. Our task will be to recover this model's parameters using the finite set of examples contained in our dataset. We will keep the data low-dimensional so we can visualize it easily. In the following code snippet, we generate a dataset containing \(1000\) examples, each consisting of \(2\) features sampled from a standard normal distribution. Thus our synthetic dataset will be an object \(\mathbf{X}\in \mathbb{R}^{1000 \times 2}\).
The true parameters generating our data will be \(\mathbf{w} = [2, -3.4]^\top\) and \(b = 4.2\), and our synthetic labels will be assigned according to the following linear model with noise term \(\epsilon\):

\(\mathbf{y} = \mathbf{X} \mathbf{w} + b + \epsilon\)
You could think of \(\epsilon\) as capturing potential measurement errors on the features and labels. We will assume that the standard assumptions hold and thus that \(\epsilon\) obeys a normal distribution with mean of \(0\). To make our problem easy, we’ll set its standard deviation to \(0.01\). The following code generates our synthetic dataset:
# Save to the d2l package.
def synthetic_data(w, b, num_examples):
    """generate y = X w + b + noise"""
    X = np.random.normal(0, 1, (num_examples, len(w)))
    y = np.dot(X, w) + b
    y += np.random.normal(0, 0.01, y.shape)
    return X, y

true_w = np.array([2, -3.4])
true_b = 4.2
features, labels = synthetic_data(true_w, true_b, 1000)
Note that each row in features consists of a 2-dimensional data point and that each row in labels consists of a 1-dimensional target value (a scalar).
print('features:', features[0],'\nlabel:', labels[0])
features: [2.2122064 0.7740038]
label: 6.000587
By generating a scatter plot using the second feature features[:, 1] and labels, we can clearly observe the linear correlation between the two.
d2l.set_figsize((3.5, 2.5))
d2l.plt.scatter(features[:, 1].asnumpy(), labels.asnumpy(), 1);
3.2.2. Reading Data
Recall that training models consists of making multiple passes over the dataset, grabbing one minibatch of examples at a time, and using them to update our model. Since this process is so fundamental to training machine learning algorithms, it's worth defining a utility function to shuffle the data and access it in minibatches.
In the following code, we define a data_iter function to demonstrate one possible implementation of this functionality. The function takes a batch size, a design matrix, and a vector of labels, yielding minibatches of size batch_size. Each minibatch consists of a tuple of features and labels.
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        batch_indices = np.array(
            indices[i: min(i + batch_size, num_examples)])
        yield features[batch_indices], labels[batch_indices]
In general, note that we want to use reasonably sized minibatches to take advantage of the GPU hardware, which excels at parallelizing operations. Because each example can be fed through our models in parallel and the gradient of the loss function for each example can also be taken in parallel, GPUs allow us to process hundreds of examples in scarcely more time than it might take to process just a single example.
To build some intuition, let's read and print the first small batch of data examples. The shape of the features in each minibatch tells us both the minibatch size and the number of input features. Likewise, our minibatch of labels will have a shape given by batch_size.
batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, '\n', y)
    break
[[ 1.1029363  -0.90162945]
 [-1.6590818   0.6664953 ]
 [-0.67561793 -0.19601601]
 [-0.9286441   2.0619104 ]
 [ 0.57794803  0.72061497]
 [-0.9840098  -0.1045013 ]
 [ 0.05878817  1.2624414 ]
 [-1.2243727  -0.27489948]
 [-1.8979539  -0.15962839]
 [ 0.4241809   1.691645  ]]
 [ 9.463927   -1.3952291   3.4998066  -4.662165    2.90651     2.5935795
   0.03279139  2.6707168   0.9294544  -0.7100331 ]
As we run the iterator, we obtain distinct minibatches successively until all the data has been exhausted (try this). While the iterator implemented above is good for didactic purposes, it is inefficient in ways that might get us in trouble on real problems. For example, it requires that we load all data in memory and that we perform lots of random memory access. The built-in iterators implemented in Apache MXNet are considerably more efficient and can deal both with data stored in files and data fed via a data stream.
3.2.3. Initialize Model Parameters
Before we can begin optimizing our model’s parameters by gradient descent, we need to have some parameters in the first place. In the following code, we initialize weights by sampling random numbers from a normal distribution with mean 0 and a standard deviation of \(0.01\), setting the bias \(b\) to \(0\).
w = np.random.normal(0, 0.01, (2, 1))
b = np.zeros(1)
Now that we have initialized our parameters, our next task is to update them until they fit our data sufficiently well. Each update requires taking the gradient (a multi-dimensional derivative) of our loss function with respect to the parameters. Given this gradient, we can update each parameter in the direction that reduces the loss.
Since nobody wants to compute gradients explicitly (this is tedious and error prone), we use automatic differentiation to compute the gradient. See chapter_autograd for more details. Recall from the autograd chapter that in order for autograd to know that it should store a gradient for our parameters, we need to invoke the attach_grad function, allocating memory to store the gradients that we plan to take.
w.attach_grad()
b.attach_grad()
3.2.4. Define the Model
Next, we must define our model, relating its inputs and parameters to its outputs. Recall that to calculate the output of the linear model, we simply take the matrix-vector dot product of the examples \(\mathbf{X}\) and the model's weights \(w\), and add the offset \(b\) to each example. Note that below np.dot(X, w) is a vector and b is a scalar. Recall that when we add a vector and a scalar, the scalar is added to each component of the vector.
# Save to the d2l package.
def linreg(X, w, b):
    return np.dot(X, w) + b
3.2.5. Define the Loss Function
Since updating our model requires taking the gradient of our loss function, we ought to define the loss function first. Here we will use the squared loss function as described in the previous section. In the implementation, we need to transform the true value y into the predicted value's shape y_hat. The result returned by the following function will also have the same shape as y_hat.
# Save to the d2l package.
def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2
3.2.6. Define the Optimization Algorithm
As we discussed in the previous section, linear regression has a closed-form solution. However, this isn’t a book about linear regression, it’s a book about deep learning. Since none of the other models that this book introduces can be solved analytically, we will take this opportunity to introduce your first working example of stochastic gradient descent (SGD).
At each step, using one batch randomly drawn from our dataset, we will estimate the gradient of the loss with respect to our parameters. Next, we will update our parameters (a small amount) in the direction that reduces the loss. Recall from Section 2.5 that after we call backward, each parameter (param) will have its gradient stored in param.grad. The following code applies the SGD update, given a set of parameters, a learning rate, and a batch size. The size of the update step is determined by the learning rate lr. Because our loss is calculated as a sum over the batch of examples, we normalize our step size by the batch size (batch_size), so that the magnitude of a typical step size doesn't depend heavily on our choice of the batch size.
# Save to the d2l package.
def sgd(params, lr, batch_size):
    for param in params:
        param[:] = param - lr * param.grad / batch_size
3.2.7. Training
Now that we have all of the parts in place, we are ready to implement the main training loop. It is crucial that you understand this code because you will see nearly identical training loops over and over again throughout your career in deep learning.
In each iteration, we will grab a minibatch of training examples, first passing them through our model to obtain a set of predictions. After calculating the loss, we call the backward function to initiate the backwards pass through the network, storing the gradients with respect to each parameter in its corresponding .grad attribute. Finally, we will call the optimization algorithm sgd to update the model parameters. Since we previously set the batch size batch_size to \(10\), the shape of the loss l for each minibatch is (\(10\), \(1\)).
In summary, we’ll execute the following loop:
- Initialize parameters \((\mathbf{w}, b)\)
- Repeat until done:
  - Compute gradient \(\mathbf{g} \leftarrow \partial_{(\mathbf{w},b)} \frac{1}{\mathcal{B}} \sum_{i \in \mathcal{B}} l(\mathbf{x}^i, y^i, \mathbf{w}, b)\)
  - Update parameters \((\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \eta \mathbf{g}\)
In the code below, l is a vector of the losses for each example in the minibatch. Because l is not a scalar variable, running l.backward() adds together the elements in l to obtain a new variable and then calculates the gradient.
In each epoch (a pass through the data), we will iterate through the entire dataset (using the data_iter function) once, passing through every example in the training dataset (assuming the number of examples is divisible by the batch size). The number of epochs num_epochs and the learning rate lr are both hyper-parameters, which we set here to \(3\) and \(0.03\), respectively. Unfortunately, setting hyper-parameters is tricky and requires some adjustment by trial and error. We elide these details for now but revisit them later in chapter_optimization.
lr = 0.03            # Learning rate
num_epochs = 3       # Number of iterations
net = linreg         # Our fancy linear model
loss = squared_loss  # 0.5 (y-y')^2

for epoch in range(num_epochs):
    # Assuming the number of examples can be divided by the batch size, all
    # the examples in the training data set are used once in one epoch
    # iteration. The features and tags of mini-batch examples are given by X
    # and y respectively
    for X, y in data_iter(batch_size, features, labels):
        with autograd.record():
            l = loss(net(X, w, b), y)  # Minibatch loss in X and y
        l.backward()  # Compute gradient on l with respect to [w, b]
        sgd([w, b], lr, batch_size)  # Update parameters using their gradient
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().asnumpy()))
epoch 1, loss 0.040426
epoch 2, loss 0.000156
epoch 3, loss 0.000050
In this case, because we synthesized the data ourselves, we know precisely what the true parameters are. Thus, we can evaluate our success in training by comparing the true parameters with those that we learned through our training loop. Indeed they turn out to be very close to each other.
print('Error in estimating w', true_w - w.reshape(true_w.shape))
print('Error in estimating b', true_b - b)
Error in estimating w [ 0.00038958 -0.00059962]
Error in estimating b [0.00026226]
Note that we should not take it for granted that we are able to recover the parameters accurately. This only happens for a special category of problems: strongly convex optimization problems with 'enough' data to ensure that the noisy samples allow us to recover the underlying dependency. In most cases this is not the case. In fact, the parameters of a deep network are rarely the same (or even close) between two different runs, unless all conditions are identical, including the order in which the data is traversed. However, in machine learning, we are typically less concerned with recovering true underlying parameters, and more concerned with parameters that lead to accurate prediction. Fortunately, even on difficult optimization problems, stochastic gradient descent can often find remarkably good solutions, owing partly to the fact that, for deep networks, there exist many configurations of the parameters that lead to accurate prediction.

3.2.8. Summary
We saw how a deep network can be implemented and optimized from scratch, using just ndarray and autograd, without any need for defining layers, fancy optimizers, etc. This only scratches the surface of what is possible. In the following sections, we will describe additional models based on the concepts that we have just introduced and learn how to implement them more concisely.
3.2.9. Exercises

1. What would happen if we were to initialize the weights \(\mathbf{w} = 0\)? Would the algorithm still work?
2. Assume that you are Georg Simon Ohm trying to come up with a model between voltage and current. Can you use autograd to learn the parameters of your model?
3. Can you use Planck's Law to determine the temperature of an object using spectral energy density?
4. What are the problems you might encounter if you wanted to extend autograd to second derivatives? How would you fix them?
5. Why is the reshape function needed in the squared_loss function?
6. Experiment using different learning rates to find out how fast the loss function value drops.
7. If the number of examples cannot be divided by the batch size, what happens to the data_iter function's behavior?
I'd like to estimate the drift of a continuous-paths, non-stationary, stochastic process $X_t$ from a time series of values $\{X_{i\Delta t}\}_{i=1,\dots,N}$ sampled from a single realisation of that process over $t \in [0,T]$.
Although the objective is to calibrate a pricing model (specified under $\Bbb{Q}$) based on historical series (observed under $\Bbb{P})$, we can forget about this and assume we're working under a single measure $\Bbb{P}$.
I've bumped into a similar question, but unfortunately the references provided apply to processes admitting a stationary distribution (= linear drift Ornstein-Uhlenbeck processes used for interest rate modelling), an assumption which I'd like to move away from.
Actually, my assumptions could even be further simplified to the case of a simple Arithmetic Brownian motion with drift $$ dX_t = \mu dt + \sigma dW_t $$
My question is: can someone point me towards a "nice" method to obtain an estimator $\hat{\mu}$ of $\mu$ from the observation of an $N$-sample $\{X_i\}_{i=1,...,N}$, where by "nice" I mean that the finite sample properties of the latter estimator should be better than the usual MLE (= LSE) estimator $$\hat{\mu} = \frac{1}{N-1} \sum_{i=1}^{N-1} \Delta X_i $$ whose relative error scales as $1/\sqrt{N\Delta t} = 1/\sqrt{T}$, i.e. it depends only on the time horizon $T = N\Delta t$ over which the data sample is collected, not on the number of data points $N$ itself, making it almost useless in practice.
I'm particularly interested in answers where the answerer has had a successful experience in implementing the method he/she proposes in practice. Because I've already come across different approaches/algorithms myself, but none were satisfying as far as I am concerned. |
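To make the finite-sample complaint concrete, here is a simulation sketch for the arithmetic-Brownian-motion case above (plain NumPy; mle_drift and the parameter values are illustrative, not taken from any reference): the estimator's standard error stays near $\sigma/\sqrt{T}$, so increasing $N$ at fixed horizon $T$ buys essentially nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_drift(X, dt):
    # MLE/LSE estimator from the question: mean increment divided by dt,
    # which telescopes to (X_T - X_0) / T.
    return np.diff(X).mean() / dt

mu, sigma, T = 0.5, 1.0, 10.0
stds = {}
for N in (100, 10_000):            # same horizon T, very different N
    dt = T / N
    est = []
    for _ in range(2000):
        dW = rng.normal(0.0, np.sqrt(dt), N)
        X = np.concatenate(([0.0], np.cumsum(mu * dt + sigma * dW)))
        est.append(mle_drift(X, dt))
    stds[N] = np.std(est)
    print(N, round(stds[N], 3))    # both close to sigma / sqrt(T) ~ 0.316
```

Multiplying the sampling frequency by 100 leaves the estimator's dispersion essentially unchanged, which is the issue the question is asking to work around.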
Motivation: I am studying the graph isomorphism problem and am trying to construct a partitioning method to reduce the number of search cases. Partition method: $G$ is an $r$-regular, $k$-connected graph (not a complete or cycle graph). A vertex of $G$ is $x_1$. All vertices which are not adjacent to $x_1$ create a sub-graph $C_1$. All vertices adjacent to $x_1$ create a sub-graph $D_1$. A vertex of $D_1$ is $x_2$.
Using same method, based on adjacency of $x_2$ , $D_1$ can be divided.
All vertices which are not adjacent to $x_2$ create a sub-graph $C_2$.
All vertices adjacent to $x_2$ create a sub-graph, $ D_2 $. In general, $ D_{y-1} $ is a graph and can be divided/partitioned into 2 sub-graphs $C_y, D_y $.
At this stage, let me restrict the problem for simplicity of computation. Restrictions are:
$C_y, D_y$ are $s_y$-regular and $t_y$-regular graphs respectively, with $t_y > 0$, for every iteration $y$.
$C_y, D_y$ cannot be a complete bipartite graph (utility graph), a complete graph, or a disjoint union of complete graphs.
So, $D_{y-1}$ is a $t_{y-1}$-regular graph and can be divided/partitioned into 2 sub-graphs $C_y, D_y$, where $C_y, D_y$ are $s_y$- and $t_y$-regular graphs respectively (given condition).
$G$ can be divided/partitioned a maximum of $\log_2(|G|)$ times, using this dividing process recursively.
Matrix representation : $A$ is an adjacency matrix of an $r$-regular graph $G$. The matrix A can be divided into 4 sub-matrices based on adjacency of vertex $x_1 \in G$.$A_x$ is the adjacency matrix of the graph $(G-x_1)$, where $C_1$ is the adjacency matrix of the graph created by vertices which are not adjacent to $x_1$, and $D_1$ is the adjacency matrix of the graph created by vertices which are adjacent to $x_1$. $C_1,D_1$ are sub-graphs (regular) of graph $G$, $|C_1|>|D_1|$ where $|C_1|,|D_1|$ are total vertices number of graphs $C_1,D_1$ respectively.$$ A_x =\left( \begin{array}{ccc}C_1 & E_1 \\E_1^{T} & D_1 \\\end{array} \right) $$
Again, this process can be done recursively, where $A_{y+1}=D_y$, $y=$ iteration number of the recursive process. $$ A_yx = \left( \begin{array}{ccc} C_y & E_y \\ E_y^{T} & D_{y} \\ \end{array} \right) $$
$A_x$(=$A_1x$) is the matrix of 1st iteration, for 2nd iteration, $A_x$ matrix would be $A_2x$.
Claim: It is not possible to have an $E_y$ matrix that is a zero matrix, i.e., it is not possible to have disconnected sub-graphs $C_y, D_y$ under the given conditions that $G$ is $k$-connected and $r$-regular and that $C_y, D_y$ are always regular graphs (which are neither complete bipartite, complete, nor disjoint unions of complete graphs) in this recursive process.

Question: how can a graph be constructed so that it has regular subgraphs as described above?

Edit 1: V. A. Taskinov, Regular subgraphs of regular graphs. Sov. Math. Dokl. 26 (1982), 37-38. In this paper he proved the following: Let $0<k<r$ be positive odd integers. Then every $r$-regular graph contains a $k$-regular subgraph (here, the graph need not be simple).
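A minimal NumPy sketch of the matrix partition itself, using the Petersen graph (3-regular, 3-connected) as a stand-in. Note it does not satisfy all the restrictions above: the neighbourhood $D_1$ comes out 0-regular, which is exactly why constructing graphs meeting every restriction is the open question.

```python
import numpy as np

# Adjacency matrix of the Petersen graph (3-regular, 3-connected).
n = 10
A = np.zeros((n, n), dtype=int)
for i in range(5):
    for u, v in [(i, (i + 1) % 5),          # outer 5-cycle
                 (5 + i, 5 + (i + 2) % 5),  # inner pentagram
                 (i, 5 + i)]:               # spokes
        A[u, v] = A[v, u] = 1

x1 = 0
nbrs = np.flatnonzero(A[x1])                            # vertices adjacent to x1
rest = np.setdiff1d(np.arange(n), np.append(nbrs, x1))  # non-adjacent, minus x1

C1 = A[np.ix_(rest, rest)]  # induced subgraph on non-neighbours
D1 = A[np.ix_(nbrs, nbrs)]  # induced subgraph on neighbours
E1 = A[np.ix_(rest, nbrs)]  # edges between the two parts

print(C1.sum(axis=1))  # [2 2 2 2 2 2] -> C1 is 2-regular here
print(D1.sum(axis=1))  # [0 0 0] -> D1 is 0-regular (Petersen is triangle-free),
                       # violating the t_y > 0 restriction above
print(E1.any())        # True -> the two parts are connected to each other
```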
I can turn the crank and show that $\frac{1}{2}\otimes \frac{1}{2} = 1\oplus 0$ etc., but what would be a strategy for proving the general statement for spin representations that $j\otimes s =\bigoplus_{\ell=|s-j|}^{s+j} \ell$?
It's easy to prove the formula if you just look at the individual basis vectors of the tensor product. Let's use $(2j+1)$ and $(2s+1)$ eigenvectors of $j^2, j_z$ and $s^2, s_z$ called $|j,j_z\rangle$ and so on.
Now let's ask about the multiplicity of basis vectors of the tensor product with a given eigenvalue of $J_z = j_z+s_z$. The maximum eigenvalue of $J_z$ in the tensor product is $j+s$: it can be obtained if we choose $$ |J,J_z=j+s\rangle = |j,j\rangle \otimes |s,s\rangle $$ There are no higher eigenvalues of $J_z$; this proves that no representation with $J>j+s$ is included in the tensor product. However, the $J=j+s$ representation must be included exactly once to obtain one basis vector with $J_z=j+s$; representations with lower values of $J<j+s$ wouldn't contribute any vectors with $J_z=j+s$.
So the tensor product $$ Rep(j) \otimes Rep(s) = Rep(j+s)\oplus Rep(rest) $$ I have used the fact that reducible representations of simple compact groups may be written as direct sums. Now, what about the remaining representation(s) $Rep(rest)$? It is the linear envelope of a set of basis vectors in which we have already removed all the $(2J+1)$ basis vectors with $J=j+s$.
Well, in the rest, the maximum allowed $J$ is $j+s-1$. From the original bases, we see that the original space was 2-dimensional: the old basis included $$|j,j-1\rangle \otimes |s,s\rangle, \qquad |j,j\rangle \otimes |s,s-1\rangle $$ But we have already included one combination to $Rep(j+s)$; so the $Rep(rest)$ representation only contains the other one. By the same argument as above, we may see that the multiplicity of $Rep(j+s-1)$ in the tensor product is also one.
By induction, this algorithm may continue: at the beginning, the number of eigenvectors with a given $J_z$ increases by one every time we decrease $J_z$ by one. However, this behavior stops once we get to values of $J_z$ low enough that they would require too-negative values of $j_z$ or $s_z$, either $j_z<-j$ or $s_z<-s$. When that happens, the number of basis vectors no longer jumps by one; it stays constant. It happens when $$J_z^{max} \equiv J = |j-s|$$ so $J=|j-s|$ is the lowest-$J$ representation included in the decomposition of the tensor product. Another way to see that at this moment we have already written down all components is either to notice that $J$ can't be smaller than $|j-s|$ because the minimum is obtained by adding "oppositely directed vectors" and can't be further shortened; or, alternatively, we may check that the dimensions in your formula work: $$ (2j+1)(2s+1) = \sum_{J=|j-s|}^{(j+s)} (2J+1) $$
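The dimension identity at the end is easy to check by machine; a small sketch using Python's fractions so half-integer spins stay exact (dims_match is an illustrative name):

```python
from fractions import Fraction

def dims_match(j, s):
    """Check (2j+1)(2s+1) == sum of (2J+1) for J = |j-s|, ..., j+s."""
    lhs = (2 * j + 1) * (2 * s + 1)
    J = abs(j - s)
    rhs = 0
    while J <= j + s:
        rhs += 2 * J + 1
        J += 1
    return lhs == rhs

half = Fraction(1, 2)
checks = [(half, half), (1, half), (3, Fraction(5, 2)), (7, 7)]
print(all(dims_match(j, s) for j, s in checks))  # True
```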
I think the best way is not the textbook way, but the one described in the warm-up problem in this answer: Mathematically, what is color charge?
I will repeat the main point: the irreducible represenations of SU(2) are given by all complex completely symmetric tensors with all indices down, where each index takes two values 0,1. This is because you have invariant $\epsilon_{ij}$ tensor, which you can use as a metric to raise and lower indices, and you can remove the antisymmetric parts using the $\epsilon$ tensor. The fully symmetric k-index tensor is the spin k/2 representation.
When you tensor two of these together of size k and m, you just put the two tensors end to end, which gives a reducible k+m tensor. You need to remove the antisymmetric parts using the $\epsilon$ tensor, and this steps down by 2 each time, producing exactly one representation of every size between k+m (where you start) and k-m (when you run out of indices to contract).
You can solve this by examining characters. Any finite dimensional representation of $SU(2)$ breaks up into a sum of 1-dimensional representations of $U(1)$, which are classified by an integer called weight (you may divide by 2 to get a half-integer if you need to conform to physics conventions). You can then write the decomposition of a representation as a generating function, by adding the monomial $q^n$ for each representation of $U(1)$ of weight $n$ that you see. For example, the representation $\frac12$ corresponds to $q + q^{-1}$, and the representation $1$ corresponds to $q^2 + 1 + q^{-2}$.
The two facts you need are then:
For any non-negative integer $k$, the irreducible representation $k/2$ has character of the form $$\frac{q^{k+1} - q^{-k-1}}{q - q^{-1}} = q^{k} + q^{k-2} + \cdots + q^{-k}$$
The character of a tensor product is the product of characters.
In other words, the tensor product decomposition can be reconstructed by forgetting the $SU(2)$ action, taking the tensor product of the underlying $U(1)$ representations, then remembering that the characters of irreducible $SU(2)$ representations have a special form.
In your example, squaring $q + q^{-1}$ yields $q^2 + 2 + q^{-2}$. You then subtract $q^2 + 1 + q^{-2}$ to get $1$. For your more general question, you want to show that: $$\frac{q^{j+1} - q^{-j-1}}{q - q^{-1}}\frac{q^{s+1} - q^{-s-1}}{q - q^{-1}} = \sum_{\ell = |s-j|}^{s+j} \frac{q^{\ell+1} - q^{-\ell-1}}{q - q^{-1}}$$ You can prove this by induction on $s$: your base cases are $s=0$ and $s=1$, which are relatively easy to check. For larger $s$, you can do a reduction by splitting the sum $\frac{q^{s+1} - q^{-s-1}}{q - q^{-1}}$ as $(q^s + q^{-s}) + \frac{q^{s-1} - q^{-s+1}}{q - q^{-1}}$. Note that $(q^s + q^{-s})(q^j + q^{j-2} + \cdots + q^{-j})$ is expanded as $$(q^{s+j} + q^{s+j-2} + \cdots + q^{-s-j}) + (q^{|s-j|} + q^{|s-j|-2} + \cdots + q^{-|s-j|}).$$ These are the extreme summands in $\sum_{\ell = |s-j|}^{s+j} \frac{q^{\ell+1} - q^{-\ell-1}}{q - q^{-1}}$. The remaining summands are what you get by replacing $s$ by $s-2$.
There is also a combinatorial method (which would be easier to communicate if I could draw): view the Laurent polynomial $q^j + \cdots + q^{-j}$ as a set of evenly spaced dots on the number line, and view each of the monomials in $q^{s} + \cdots + q^{-s}$ as a shifting operator. You get a bunch of shifted sets of dots, and you can reorganize them into a sort of symmetric pile. Each layer of the pile is one of the new irreducible representations. |
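The character argument can also be verified mechanically. In the sketch below (all names are illustrative), characters are stored as multisets of exponents of $q$ using doubled weights, so half-integer spins become integers; multiplying characters becomes a convolution, and the claimed Clebsch-Gordan decomposition is a multiset equality.

```python
from collections import Counter

def chi(two_j):
    # Character of the spin-j irrep as the multiset of (doubled) weights:
    # q^{2j} + q^{2j-2} + ... + q^{-2j}, encoded by its exponents.
    return Counter(range(-two_j, two_j + 1, 2))

def tensor(a, b):
    # Product of characters = convolution of weight multisets.
    out = Counter()
    for wa, ca in a.items():
        for wb, cb in b.items():
            out[wa + wb] += ca * cb
    return out

def clebsch_gordan_sum(two_j, two_s):
    # Character of |j-s| (+) (|j-s|+1) (+) ... (+) (j+s).
    out = Counter()
    for two_l in range(abs(two_j - two_s), two_j + two_s + 1, 2):
        out += chi(two_l)
    return out

# Verify j (x) s = |j-s| (+) ... (+) (j+s) for all spins up to 4.
ok = all(tensor(chi(a), chi(b)) == clebsch_gordan_sum(a, b)
         for a in range(9) for b in range(9))
print(ok)  # True
```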
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$, assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
As many others have said, you cannot expect to study just some particular subjects while ignoring other areas, especially if you want to do research. Most of math was born from the observation of similar phenomena in many different areas: for instance, the concept of a category itself was born from the observation that in math we constantly deal with collections of structures and morphisms preserving those structures, which led to the abstraction of a category. Similarly, I strongly doubt that Grothendieck could have invented the concept of a (generalized) sheaf if he hadn't first known the many concrete sheaves that appear in topology, differential geometry and algebraic geometry; without those he couldn't have arrived at the concept of a (Grothendieck) topos, and without that I'm not so sure that Lawvere could have arrived at the concept of an elementary topos while doing his research in logic. These are just some examples of how math has evolved thanks to the interaction of different areas (for instance, as you can see in the examples above, from the interaction of geometry and logic).
Just to answer your comment about analysis: there's a professor in Italy who studies higher dimensional category theory for his research in analysis, so analysis does need higher category theory.
Of course the best place to get a lot of intuition for higher category theory is algebraic topology, where higher categories are used to model homotopy types of topological spaces (via $\infty$-groupoids) and directed spaces (via $(n,r)$-categories where $n,r \in \omega \cup \{\infty\}$). But you can find a lot of higher dimensional category theory in logic and computer science too: I've seen applications in computability theory and model theory, where (higher) category theory is used to model the semantics of theories, in particular type theory (if you're interested in applications of higher categorical logic and model theory, you can take a look at Makkai's work and also Mike Shulman's work on homotopy type theory). There is also a lot of higher category theory in mathematical physics, as John Baez's work shows.
I suppose above you were referring to Cheng-Lauda's "Illustrated guide book"; that's a good book if you want to learn many approaches to $n$-categories, but there's a lot more to higher category theory than just $(n,r)$-categories (as Mr. Shulman usually says). Leinster's "Higher operads, Higher categories" is more complete from this point of view because it presents a lot of other material, like generalized multicategories/operads or $fc$-multicategories. Anyway, if you want some references on higher category theory you can find some here.
Hope this helps you.
(Edit: I've improved the answer a little now that I've found some other references.)
Even though you say that the geometry of this is fairly clear to you, I think it is a good idea to review it. I made this back of an envelope sketch:
Left subplot is the same figure as in the book: consider two predictors $x_1$ and $x_2$; as vectors, $\mathbf x_1$ and $\mathbf x_2$ span a plane in the $n$-dimensional space, and $\mathbf y$ is being projected onto this plane resulting in the $\hat {\mathbf y}$.
Middle subplot shows the $X$ plane in the case when $\mathbf x_1$ and $\mathbf x_2$ are not orthogonal, but both have unit length. The regression coefficients $\beta_1$ and $\beta_2$ can be obtained by a non-orthogonal projection of $\hat{\mathbf y}$ onto $\mathbf x_1$ and $\mathbf x_2$: that should be pretty clear from the picture. But what happens when we follow the orthogonalization route?
The two orthogonalized vectors $\mathbf z_1$ and $\mathbf z_2$ from Algorithm 3.1 are also shown on the figure. Note that each of them is obtained via a separate Gram-Schmidt orthogonalization procedure (a separate run of Algorithm 3.1): $\mathbf z_1$ is the residual of $\mathbf x_1$ when regressed on $\mathbf x_2$ and $\mathbf z_2$ is the residual of $\mathbf x_2$ when regressed on $\mathbf x_1$. Therefore $\mathbf z_1$ and $\mathbf z_2$ are orthogonal to $\mathbf x_2$ and $\mathbf x_1$ respectively, and their lengths are less than $1$. This is crucial.
As stated in the book, the regression coefficient $\beta_i$ can be obtained as $$\beta_i = \frac{\mathbf z_i \cdot \mathbf y}{\|\mathbf z_i\|^2} =\frac{\mathbf e_{\mathbf z_i} \cdot \mathbf y}{\|\mathbf z_i\|},$$ where $\mathbf e_{\mathbf z_{i}}$ denotes a unit vector in the direction of $\mathbf z_i$. When I project $\hat{\mathbf y}$ onto $\mathbf z_i$ on my drawing, the length of the projection (shown on the figure) is the numerator of this fraction. To get the actual $\beta_i$ value, one needs to divide by the length of $\mathbf z_i$, which is smaller than $1$, i.e. the $\beta_i$ will be larger than the length of the projection.
Now consider what happens in the extreme case of very high correlation (right subplot). Both $\beta_i$ are sizeable, but both $\mathbf z_i$ vectors are tiny, and the projections of $\hat{\mathbf y}$ onto the directions of $\mathbf z_i$ will also be tiny;
this is I think what is ultimately worrying you. However, to get $\beta_i$ values, we will have to rescale these projections by inverse lengths of $\mathbf z_i$, obtaining the correct values.
> Following the Gram-Schmidt procedure, the residual of X1 or X2 on the other covariates (in this case, just each other) effectively removes the common variance between them (this may be where I am misunderstanding), but surely doing so removes the common element that manages to explain the relationship with Y?
To repeat: yes, the "common variance" is almost (but not entirely) "removed" from the residuals -- that's why projections on $\mathbf z_1$ and $\mathbf z_2$ will be so short.
However, the Gram-Schmidt procedure can account for it by normalizing by the lengths of $\mathbf z_1$ and $\mathbf z_2$; the lengths are inversely related to the correlation between $\mathbf x_1$ and $\mathbf x_2$, so in the end the balance gets restored.
Update 1
Following the discussion with @mpiktas in the comments: the above description is
not how the Gram-Schmidt procedure would usually be applied to compute regression coefficients. Instead of running Algorithm 3.1 many times (each time rearranging the sequence of predictors), one can obtain all regression coefficients from a single run. This is noted in Hastie et al. on the next page (page 55) and is the content of Exercise 3.4. But as I understood OP's question, it referred to the multiple-runs approach (that yields explicit formulas for $\beta_i$).
Update 2
In reply to OP's comment:
> I am trying to understand how 'common explanatory power' of a (sub)set of covariates is 'spread between' the coefficient estimates of those covariates. I think the explanation lies somewhere between the geometric illustration you have provided and mpiktas point about how the coefficients should sum to the regression coefficient of the common factor
I think if you are trying to understand how the "shared part" of the predictors is being represented in the regression coefficients, then you do not need to think about Gram-Schmidt at all. Yes, it will be "spread out" between the predictors. Perhaps a more useful way to think about it is in terms of
transforming the predictors with PCA to get orthogonal predictors. In your example there will be a large first principal component with almost equal weights for $x_1$ and $x_2$. So the corresponding regression coefficient will have to be "split" between $x_1$ and $x_2$ in equal proportions. The second principal component will be small and $\mathbf y$ will be almost orthogonal to it.
In my answer above I assumed that you are specifically confused about Gram-Schmidt procedure and the resulting formula for $\beta_i$ in terms of $z_i$. |
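The rescaling in the formula $\beta_i = \mathbf z_i \cdot \mathbf y / \|\mathbf z_i\|^2$ can be checked numerically. Here is a small pure-Python sketch (all numbers invented for illustration; it computes only the single coefficient via the residual $\mathbf z_2$, in the spirit of the multiple-runs reading of Algorithm 3.1 discussed above, with no intercept):

```python
# Two highly correlated predictors and a response (made-up numbers).
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.1, 1.9, 3.2, 3.9]
y  = [3.0, 6.1, 9.2, 12.1]

dot = lambda a, b: sum(p * q for p, q in zip(a, b))

# Gram-Schmidt step: residual z2 of x2 regressed on x1.
g = dot(x2, x1) / dot(x1, x1)
z2 = [b - g * a for a, b in zip(x1, x2)]

# z2 is short: most of x2's squared length is shared with x1 ...
assert dot(z2, z2) < 0.05 * dot(x2, x2)

# ... yet dividing the (tiny) projection z2.y by the (tiny) squared length
# of z2 recovers the full-size multiple-regression coefficient for x2.
beta2 = dot(z2, y) / dot(z2, z2)
```

The same `beta2` comes out of the 2x2 normal equations, which is exactly the "balance gets restored" point.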
Definitions
Definition 1: Let $S$ be a set of words. We say that $S$ is nicely infinite prefix-free (made up name for the purpose of this answer) if there are words $u_0,\dots,u_n,\dots $ and $v_1,\dots,v_n,\dots $ such that:
For each $n\ge 1$, $u_n$ and $v_n$ are non-empty and start with distinct letters;
$S=\{u_0v_1,\dots,u_0\dots u_n v_{n+1},\dots\}$.
The intuition is that you can put all those words on an infinite rooted tree (the
■ is the root, the
▲ are the leaves, and the
• are the remaining interior nodes) of the following shape such that the words in $S$ are exactly the labels of paths from the root to a leaf:
u₀ u₁ u₂
■-----•-----•-----•⋅⋅⋅
| | |
| v₁ | v₂ | v₃
| | |
▲ ▲ ▲
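To make the definition concrete, here is a small Python sketch (the specific words are made up for illustration) that builds the words $u_0\dots u_n v_{n+1}$ from such sequences and checks that the resulting set is prefix-free, as proposition 1.1 asserts:

```python
def nice_words(us, vs):
    """Build {u_0 v_1, u_0 u_1 v_2, ..., u_0...u_n v_{n+1}} from the
    sequences (u_k) and (v_k) of the definition."""
    words, prefix = [], ""
    for k, v in enumerate(vs):
        prefix += us[k]
        words.append(prefix + v)
    return words

# Example sequences (made up): for k >= 1, u_k and v_k are non-empty and
# start with distinct letters ('b...' vs 'c...').
us = ["a", "ba", "bb", "b"]
vs = ["c", "ca", "c", "cb"]
S = nice_words(us, vs)

# The set is prefix-free: no word is a (strict) prefix of another.
assert all(not w2.startswith(w1) for w1 in S for w2 in S if w1 != w2)
```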
Proposition 1.1: A nicely infinite prefix-free set is prefix-free.
Proof of proposition 1.1: Suppose that $u_0\dots u_n v_{n+1}$ is a strict prefix of $u_0 \dots u_m v_{m+1}$. There are two cases:
If $n < m$ then $v_{n+1}$ is a prefix of $u_{n+1}\dots u_m v_{m+1}$. This is impossible because $u_{n+1}$ and $v_{n+1}$ have distinct first letters.
If $n > m$ then $u_{m+1}\dots u_n v_{n+1}$ is a prefix of $v_{m+1}$. This is impossible because $u_{m+1}$ and $v_{m+1}$ have distinct first letters.
Proposition 1.2: A nicely infinite prefix-free set is infinite.
Proof of proposition 1.2: In proof 1.1, we showed that if $n\not= m$ then $u_0\dots u_n v_{n+1}$ and $u_0 \dots u_m v_{m+1}$ are not comparable for the prefix order. They are therefore not equal.
Main proof
Proposition 2: Any infinite prefix-free set contains a nice infinite prefix-free set.
Proposition 3: A language contains an infinite prefix-free set if and only if it contains a nicely infinite prefix-free set.
Proof below.
Proof of proposition 3: $\boxed{\Rightarrow}$ by proposition 2. $\boxed{\Leftarrow}$ by propositions 1.1 and 1.2.
Proposition 4: The set of nicely-prefix-free subsets of a regular language (encoded as an infinite word $\overline{u_0}\widehat{v_1}\overline{u_1}\widehat{v_2}\overline{u_2}\dots$) is $\omega$-regular (and the size of the Büchi automaton recognizing it is polynomial in the size of the NFA recognizing the regular language).
Proof below.
Theorem 5: Deciding if a regular language described by a NFA contains an infinite prefix-free subset can be done in time polynomial in the size of the NFA.
Proof of theorem 5: By proposition 3, it is sufficient to test whether it contains a nicely infinite prefix-free subset, which can be done in polynomial time by building the Büchi automaton given by proposition 4 and testing the non-emptiness of its language (which can be done in time linear in the size of the Büchi automaton).
Proof of proposition 2
Lemma 2.1: If $S$ is a prefix-free set, then so is $w^{-1}S$ (for any word $w$).
Proof 2.1: By definition.
Lemma 2.2: Let $S$ be an infinite set of words. Let $w:=\operatorname{lcp}(S)$ be the longest prefix common to all words in $S$. Then $S$ and $w^{-1}S$ have the same cardinality.
Proof 2.2: Define $f:w^{-1}S\to S$ by $f(x)=wx$. It is well defined by definition of $w^{-1}S$, injective by definition of $f$ and surjective by definition of $w$.
Proof of proposition 2: We build $u_n$ and $v_n$ by induction on $n$, with the induction hypothesis $H_n$ composed of the following parts:
$(P_1)$ For all $k\in\{1,\dots,n\}$, $u_0\dots u_{k-1} v_k \in S$;
$(P_2)$ For all $k\in\{1,\dots,n\}$, $u_k$ and $v_k$ are non-empty and start with distinct letters;
$(P_3)$ $S_n:=(u_0\dots u_n)^{-1}S$ is infinite;
$(P_4)$ There is no non-empty prefix common to all words in $S_n$. In other words: There is no letter $a$ such that $S_n\subseteq a\Sigma^*$.
Remark 2.3: If we have sequences that verify $H_n$ without $(P_4)$, we can modify $u_n$ to make them also satisfy $(P_4)$. Indeed, it suffices to replace $u_n$ by $u_n\operatorname{lcp}(S_n)$. $(P_1)$ is unaffected. $(P_2)$ is trivial. $(P_4)$ is by construction. $(P_3)$ is by lemma 2.2.
We now build the sequences by induction on $n$:
Initialization: $H_0$ is true by taking $u_0:=\operatorname{lcp}(S)$ (i.e. by taking $u_0:=\varepsilon$ and applying remark 2.3).
Induction step: Suppose that we have words $u_0,\dots,u_n$ and $v_1,\dots,v_n$ such that $H_n$ holds for some $n$. We will build $u_{n+1}$ and $v_{n+1}$ such that $H_{n+1}$ holds.
Since $S_n$ is infinite and prefix-free (by lemma 2.1), it does not contain $\varepsilon$, so that $S_n=\underset{a\in \Sigma}{\bigsqcup}(S_n\cap a\Sigma^*)$. Since $S_n$ is infinite, there is a letter $a$ such that $S_n\cap a\Sigma^*$ is infinite. By $(P_4)$, there is a letter $b$ distinct from $a$ such that $S_n\cap b\Sigma^*$ is non-empty. Pick $v_{n+1}\in S_n\cap b\Sigma^*$. Taking $u_{n+1}$ to be $a$ would satisfy $(P_1)$, $(P_2)$ and $(P_3)$, so we apply remark 2.3 to get $(P_4)$: $u_{n+1}:=a\operatorname{lcp}(a^{-1}S_n)$.
$(P_1)$ $u_0\dots u_nv_{n+1}\in u_0\dots u_n(S_n\cap b\Sigma^*)\subseteq S$.
$(P_2)$ By definition of $u_{n+1}$ and $v_{n+1}$.
$(P_3)$ $a^{-1}S_n$ is infinite by definition of $a$, and $S_{n+1}$ is therefore infinite by lemma 2.2.
$(P_4)$ By definition of $u_{n+1}$.
Proof of proposition 4
Proof of proposition 4: Let $A=(Q,\Sigma,\Delta,q_0,F)$ be a NFA.
The idea is the following: we read $u_0$, remember where we are, read $v_1$, backtrack to where we were after reading $u_0$, read $u_1$, remember where we are, ... We also remember the first letter that was read in each $v_n$ to ensure that $u_n$ starts with another letter.
I've been told that this could be easier with multi-head automata but I'm not really familiar with the formalism so I'll just describe it using a Büchi automaton (with only one head).
We set $\Sigma':=\overline{\Sigma}\sqcup\widehat{\Sigma}$, where the overlined symbols will be used to describe the $u_k$s and the symbols with hats the $v_k$s.
We set $Q':=Q\times (\{\bot\}\sqcup (Q \times \Sigma))$, where:
$(q,\bot)$ means that you are reading some $u_n$;
$(q,(p,a))$ means that you finished reading some $u_n$ in the state $p$, that you are now reading $v_{n+1}$ that starts with an $a$, and that once you are done, you will go back to $p$ to read a $u_{n+1}$ that does not start with $a$.
We set $q_0':=(q_0,\bot)$ because we start by reading $u_0$.
We define $F'$ as $F\times Q \times \Sigma$.
The set of transitions $\to'$ is defined as follows:
"$u_n$" For each transition $q\overset{a}{\to}q'$, add $(q,\bot)\overset{\overline{a}}{\to'}(q',\bot)$;
"$u_n$ to $v_{n+1}$" For each transition $q\overset{a}{\to}q'$, add $(q,\bot)\overset{\widehat{a}}{\to'}(q',(q,a))$;
"$v_n$" For each transition $q\overset{a}{\to}q'$, add $(q,(p,a))\overset{\widehat{a}}{\to'}(q',(p,a))$;
"$v_n$ to $u_n$" For each transition $p\overset{a}{\to}p'$ where $p$ is final and letter $b$ distinct from $a$, add $(q,(p,b))\overset{\overline{a}}{\to'}(p',\bot)$;
Lemma 4.1: $\overline{u_0}\widehat{v_1}\overline{u_1}\widehat{v_2}\dots \overline{u_n}\widehat{v_{n+1}}$ is accepted by $A'$ iff for each $n\ge 1$, $u_n$ and $v_n$ are non-empty and start with distinct letters, and for each $n\ge 0$, $u_0\dots u_n v_{n+1}\in L(A)$.
Proof of lemma 4.1: Left to the reader. |
Here is my take on it. It is somewhat similar to what many others have said but I will try to explain in greater detail, so bear with me in terms of length.
Power from engine
First let us understand the power-torque relationship for an engine. The working fluid of the engine (expanding combustion gases) applies a torque on the crankshaft. This torque varies with in-cylinder pressure and is net positive (useful) only in the power stroke (one of the four strokes in a 4-stroke engine in most cars). However, let us assume that we have an average net positive torque coming from the engine. This assumption is OK since most automobiles have multiple cylinders that are phase shifted so that at any time at least one cylinder is in its power stroke.
So let's call this torque $\tau$. The work done by the engine through an angle $\theta$ is $\tau\theta$, so at $N_e$ revolutions per unit time the average power from the engine will be:\begin{align*}&P_e= \tau_{e}N_{e}2\pi\end{align*}
Power asked for by the wheels
The load on the car comes from 1) friction on the road, 2) air resistance, 3) $mg\sin\theta$ to climb a hill at a slope of $\theta$. Since the car does not revolve around its center of mass but only translates, all the load on the car (friction, wind, gravitational body forces) can be considered as some combined torque that must be overcome by the torque applied by the engine. We could analyse this further using a free body diagram of the wheels, but that's a different discussion for later. Essentially, for a car at constant velocity climbing a fixed inclined slope with a fixed frictional/air load, the torque demand is fixed and equal to the resistive load. Let's call this $\tau_w$ (wheel). Next let us assume we are going at a constant velocity of our choice that translates to a wheel RPM (rev/min) of $N_w$. Don't worry, I will get to an accelerating car later. For now:\begin{align*}&P_w= \tau_wN_w2\pi\end{align*}Since energy is not accumulated in the car's drive-train (assuming the engine parts don't heat up much after warming up), the power produced by the engine is consumed at the wheels. Again this assumption is fairly accurate since most of the fuel energy either goes to the wheels or leaves as exhaust from the engine; other effects such as frictional heating in the transmission fluid are negligibly small.\begin{align*}P_w&=P_e\\\Rightarrow \tau_eN_e&=\tau_wN_w\end{align*}
Now let's say the car just got on the slope and you want to maintain the same velocity as before, so you slam the accelerator. Here is what happens. The new load is some $\tau_w$ and the speed you want is $N_w$, so you are asking the engine to deliver $N_w\tau_w$. If you are in a fixed gear, $N_e = GN_w$ where $G$ is the gear ratio. Hence the engine has to deliver a torque of \begin{align*}\tau_{e,\; desired} = \frac{\tau_wN_w}{GN_w} = \frac{\tau_w}{G}\end{align*}So, not only do you have a desired power, you also desire a fixed engine RPM (or at least hope to stabilize at one). Essentially you are asking for a desired torque.
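As a quick numerical sketch of the bookkeeping above (all numbers invented for illustration; a lossless drivetrain is assumed, so $P_e = P_w$):

```python
import math

def engine_torque_needed(tau_w, N_w, G):
    """Engine torque needed to hold a wheel torque tau_w at wheel speed N_w
    (rev per unit time) through a fixed gear ratio G (N_engine = G * N_wheel),
    assuming a lossless drivetrain so that P_engine = P_wheel."""
    N_e = G * N_w
    P_w = tau_w * N_w * 2 * math.pi     # power demanded at the wheels
    return P_w / (2 * math.pi * N_e)    # simplifies to tau_w / G

# Illustrative numbers (assumed, not from the text): 800 N*m demanded at the
# wheels, wheels turning at 500 rpm, gear ratio of 4.
assert math.isclose(engine_torque_needed(800, 500, 4), 200.0)
```

The gear ratio only divides the torque demand; whether the engine can actually supply that torque at $N_e = G N_w$ is the subject of the load map below.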
Let us see what the engine can give us
Engine Load-Map
A typical engine map looks like the figure below (hand drawn, so excuse the wobbly curves). For now ignore the red circles $A$ and $B$. Torque comes from the work output given by the combustion gases. So the max torque curve (for any RPM) is when you are injecting the most fuel (diesel engine) or opening the throttle fully (gasoline engine). The curve drawn in the figure is the max torque for each speed. Even this curve has a peak value at some RPM. As you increase engine speed, initially you are doing well, i.e., increasing air flow speeds into the engine (which allows a higher mass of air into the cylinder due to a greater rate of suction/pressure drop), allowing higher thermodynamic work output and higher torque. But after a certain speed the engine breathing efficiency (volumetric efficiency) goes down and the cylinder is gasping for air (usually the flow chokes in the intake valve, I think). So at high speed the volumetric efficiency goes down, there is less air, you can burn less fuel (even at full throttle) to keep emissions within limits, and torque from the engine goes down. The power, on the other hand, keeps going up because it is a product of speed and torque. The increase in speed means more power strokes per unit time even if each stroke gives less torque. So the power peaks almost at max engine RPM.
Now the problem
Let's study what happens when you are trying to climb a hill. Let's say you started climbing the slope and the desired engine speed (for a fixed vehicle velocity you want to hold constant, and fixed gear) corresponds to red circle $A$. You press the gas pedal and the engine gives torque $\tau_A$. If the load is such that $\tau_A<\tau_{desired}$, the vehicle will decelerate and $N_e$ will go down. This will make $\tau_A$ go down further (you move left on the torque curve) and you will not be able to keep your speed up. In this situation you would usually shift gears to allow the engine to rev up more relative to the wheels, but at constant gear you will not be able to accelerate.
Instead, if you were at red circle $B$ and $\tau_B<\tau_{desired}$, at first the engine will try to slow down, but its torque output will then go up and the engine will stabilize at your desired operating point. So if you started on the correct side of the torque curve speed-wise, you can accelerate to your desired velocity.
Gear changes only allow us to jump to the right engine speed to be able to pull this off. Of course this is assuming that your engine is sufficiently powered in the first place. Otherwise you will go over the hump backwards and then decelerate further. So the power has to work out!
Non-issues
Most cars of today have a fairly powerful engine and are heavy enough that minor things like fuel tank weight, whether the engine is in the front or back, etc., are not an issue. A typical sedan is like 1250-1500 kg; only five heavy people and a full trunk of luggage can seriously load the car, not its own peripherals. Furthermore, engine electronics (with electronic fuel injection, a high pressure fuel rail, etc.) and precision solenoid injectors are robust enough that the fuel supply system is never a limiting factor. Engine peripherals are not that underpowered or lossy. Of course, you never have a vapor-lock in fuel lines.
It is just a torque-vs-RPM issue that comes from how well we can breathe and do combustion. Even throttling losses in gasoline engines have been reduced with better acoustically-tuned intake manifolds, refined valve timing, etc. Torque curves are becoming flatter and flatter. Also, one rarely ever goes to max-power situations (around 6000 RPM for a typical sedan engine).
I hope I have addressed all the issues still unanswered. Like I said, I am not saying anything new, just explaining it in more detail. There was also something about invariance to inverting gravity: that is just a matter of decreasing load. If you come down a hill very fast and you don't brake, well, you will gain speed. You will step off the gas, and now your engine torque will come down from the max torque curve. Then it depends on how the engine speed and torque stabilize on a lower torque curve: if you are in the highest gear and your load requirement is very low, you are technically consuming very little fuel, but the engine is spinning so fast that it will try to run away (toward highest RPM), though I am sure there are ways to prevent that, and you will lose power to engine friction (essentially high speed engine braking). In most cases you will probably brake much, much earlier!
I am self-studying electrodynamics and am wanting to know what is meant by a potential. I understand the concept of
potential energy but what is meant by a potential? Is it the same thing as a field, like gravitation or electromagnetic?
Electric potential and electric potential energy are two different concepts but they are closely related to each other. Consider an electric charge $q_1$ at some point $P$ near charge $q_2$ (assume that the charges have opposite signs).
Now, if we release charge $q_1$ at $P$, it begins to move toward charge $q_2$ and thus has kinetic energy. Energy cannot appear by magic (there is no free lunch), so from where does it come? It comes from the electric potential energy $U$ associated with the attractive 'conservative' electric force between the two charges. To account for the potential energy $U$, we define an electric potential $V_2$ that is set up at point $P$ by charge $q_2$.
The electric potential exists regardless of whether $q_1$ is at point $P$. If we choose to place charge $q_1$ there, the potential energy of the two charges is then due to charge $q_1$ and that pre-existing electric potential $V_2$ such that:
$$U=q_1V_2$$ P.S. You can use the same argument if you consider charge $q_2$; in that case the potential energy is the same and is given by:$$U=q_2V_1$$
In the language of vector calculus:
The word potential is generally used to denote a function which, when differentiated in a special way, gives you a vector field. These vector fields that arise from potentials are called
conservative. Given a vector field $\vec F$, the following conditions are equivalent:
1. $\nabla \times \vec F=0$
2. $\vec F= -\nabla \phi$
3. $\oint_C \vec F \cdot \text{d}\vec \ell=0$ for any closed loop $C$ (hence the name "conservative")
The function $\phi$ appearing in $(2)$ is called the
potential of $\vec F.$ So any irrotational vector field can be written as the gradient of a potential function.
In electromagnetism specifically, Faraday's law tells us that $\nabla \times \vec E = -\frac{\partial \vec B}{ \partial t}$. For magnetic fields that do not vary with time (electrostatics) we get that $\nabla \times \vec E = 0$ and thus $\vec E = - \nabla V$ where $V$ is the potential of $\vec E$. This is exactly what we call the electric potential or "voltage" if you're a non-physicist. In the electrodynamics case where $\frac{\partial \vec B}{ \partial t} \neq 0$ a notion of electric potential still exists as we can break the electric field up into the sum of an irrotational field and a solenoidal field (this is called the Helmholtz theorem). We can then use Maxwell's equations to get that $\vec E = - \nabla V- \frac{\partial \vec A}{\partial t}$ where $V$ is the same electric potential and $\vec A$ is a vector field that we call the
vector potential.
The case of gravity is analogous. If $\vec g$ is an irrotational gravitational field (which is always the case in Newtonian gravity) then $\vec g = -\nabla \phi$ where $\phi$ is the gravitational potential. This is closely related to gravitational potential energy in that a mass $m$ placed in the gravitational field $\vec g$ will have potential energy $U=m \phi$. |
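To connect conditions $(1)$ and $(2)$ above to a concrete field, here is a small Python sketch (a numerical check with all physical constants dropped; the potential $\phi = -1/r$ is the point-source Newtonian/Coulomb form):

```python
import math

def phi(x, y, z):
    # Point-source potential with constants dropped: phi = -1/r
    return -1.0 / math.sqrt(x * x + y * y + z * z)

def neg_grad(f, p, h=1e-6):
    # F = -grad(phi), computed by central differences
    out = []
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        out.append(-(f(*q_plus) - f(*q_minus)) / (2 * h))
    return out

F = neg_grad(phi, (1.0, 2.0, 2.0))   # a point at distance r = 3
# For phi = -1/r the field F = -grad(phi) has magnitude 1/r^2 and points
# toward the source, like the attractive g = -grad(phi) discussed above.
magnitude = math.sqrt(sum(c * c for c in F))
assert math.isclose(magnitude, 1.0 / 9.0, rel_tol=1e-5)
```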
Let $X_1,\dots,X_n$ be compact Hausdorff spaces. Let's define the Varopoulos algebra as the projective tensor product: $$V(X_1,\dots,X_n) := C(X_1) \hat{\otimes} \dots \hat{\otimes} C(X_n),$$ i.e. the space of functions on $X_1 \times \dots \times X_n$ that can be represented as $$f = \sum_k f_{1k} \otimes \dots \otimes f_{nk},$$ where $f_{ik} \in C(X_i)$ and the series is absolutely convergent. The corresponding norm is of course the projective tensor norm: $$\Vert f \Vert_V := \inf_{f = \sum_k f_{1k} \otimes \dots \otimes f_{nk}} \sum_k \Vert f_{1k} \Vert \dots \Vert f_{nk} \Vert.$$
There is a result of Saeki that implies that if $Y_i$ are factor spaces of $X_i$ via some continuous surjections $X_i \to Y_i$, then $$V(Y_1,\dots,Y_n) = V(X_1,\dots,X_n) \cap C(Y_1 \times \dots \times Y_n)$$ (the function spaces on $Y$ are understood to be embedded into function spaces on $X$ in the obvious way). This allows one to do things like identifying Varopoulos functions with the measurable Varopoulos functions (defined analogously with $L^\infty$ in place of $C$) that happen to be continuous on the product...
... Which motivates the following question. Let's define the "largest possible" Varopoulos-type algebra. Let $f$ be
any function on the product $X_1 \times \dots \times X_n$. Define$$\Vert f \Vert_\mathbf{V} := \sup_{Z_i \subset X_i, Z_i \text{ finite}} \Vert f \restriction_{Z_1 \times \dots \times Z_n} \Vert_{V(Z_1, \dots, Z_n)},$$ $$\mathbf{V}(X_1,\dots,X_n) := \{f:X_1 \times \dots \times X_n \to \mathbb{C} \, | \, \Vert f \Vert_\mathbf{V} < \infty\}.$$It can be equivalently characterized as the algebra of functions that are representable as absolutely convergent integrals, as opposed to sums, of products of bounded functions (the relevant measurable structure on $\ell^\infty$ is product, not Borel).
Up to this point, $X_i$ were just sets. Now if we make them into compact spaces, is it true that $$V(X_1,\dots,X_n) = \mathbf{V}(X_1,\dots,X_n) \cap C(X_1 \times \dots \times X_n)?$$ |
If one needs to write a limit as an exponent, one might have this dilemma: if you use
\displaymath, "x to infinity" will be nicely printed under the lim symbol, but your exponent will be using the normal font and will appear very big on the page. If you take out the
\displaymath instruction, the exponent will use the small font, but now the part "x to infinity" is not a subscript to the "lim" symbol anymore, it just follows it. Trying to use any font size instructions with
\displaystyle, or actually inside the math mode, does not seem to work for me! Does anybody know any trick to get around this?
This is the horrible expression I am fighting with (might be easier to make my point this way):
$e^{\left(\, \displaystyle \lim_{x \,\rightarrow\, \infty} \frac{\, 2x \sin{\frac{1}{x}} \,}{ 1 \,-\, \sin{\frac{1}{x}}} \,\right) } \,;$
I can't get the last exponent to behave, because it contains the limit (it's the last exponent, the one for the number
e). If I take out the
\displaystyle, the limit gets messed up, as explained above. |
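One workaround (my suggestion, not from the original post) is to avoid a literal superscript altogether and typeset the limit as the argument of `\exp`, which can stay in display style without producing an oversized exponent:

```latex
\[
  \exp\!\left( \lim_{x \to \infty}
    \frac{2x \sin\frac{1}{x}}{1 - \sin\frac{1}{x}} \right)
\]
```

In display math, `\lim_{x \to \infty}` then places the subscript under the lim symbol automatically, and nothing is forced into script size.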
Sharedace

Example Calculations (*.sharedAce)

Example calculations below will be performed using data from the Eckburg 70.stool_compare files with an OTU definition of 0.03. Estimating the richness of shared OTUs between two communities. A non-parametric richness estimator of the number of shared OTUs between two communities has been developed that is analogous to the ACE (3) single-community richness estimator. The <math>S_{A,B\ ACE}</math> (9) estimator is calculated as: <math>S_{A,B\ ACE} = S_{12 \left ( abund \right )} + \frac {S_{12 \left ( rare \right )}}{C_{12}} + \frac {1}{C_{12}} \left [ f_{\left ( rare \right )1+} {\Gamma}_1 + f_{\left ( rare \right )+1} {\Gamma}_2 + f_{11}{\Gamma}_3 \right ]</math> where,
<math>C_{12} = 1 - \frac {\sum_{i=1}^{S_{12\left ( rare \right )}} {\left \{Y_i I \left ( X_i = 1 \right ) + X_iI \left ( Y_i = 1 \right ) - I \left ( X_i = Y_i = 1 \right ) \right \}}} {T_{11}}</math>
<math>{\Gamma}_1 = \frac{S_{12 \left (rare \right )} n_{rare} T_{21}}{C_{12}\left( n_{rare} - 1\right)T_{10}T_{11}} - 1</math>, <math>{\Gamma}_2 = \frac{S_{12 \left (rare \right )} m_{rare} T_{12}}{C_{12}\left( m_{rare} - 1\right)T_{01}T_{11}} - 1</math>
<math>{\Gamma}_3 = \left[ \frac{S_{12\left( rare \right)}}{C_{12}}\right ]^2 \frac{n_{rare}m_{rare}T_{22}}{\left(n_{rare}-1\right)\left(m_{rare}-1\right)T_{10}T_{01}T_{11}} - \frac{S_{12 \left( rare \right)}T_{11}}{C_{12}T_{01}T_{10}}-{\Gamma}_1-{\Gamma}_2</math>
<math>T_{10} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i </math>, <math>T_{01} = \sum_{i=1}^{S_{12\left( rare \right)}} Y_i </math>, <math>T_{11} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i Y_i </math>, <math>T_{21} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i \left( X_i - 1 \right) Y_i </math>
<math>T_{12} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i \left( Y_i - 1 \right) Y_i </math>, <math>T_{22} = \sum_{i=1}^{S_{12\left( rare \right)}} {X_i \left( X_i - 1 \right) Y_i \left( Y_i - 1 \right)} </math>
where,
<math>f_{11}</math> = number of shared OTUs with one observed individual in A and B
<math>f_{1+}, f_{2+}</math> = number of shared OTUs with one or two individuals observed in A
<math>f_{+1}, f_{+2}</math> = number of shared OTUs with one or two individuals observed in B
<math>f_{\left(rare \right)1+}</math> = number of OTUs with one individual found in A and less than or equal to 10 in B.
<math>f_{\left(rare \right)+1}</math> = number of OTUs with one individual found in B and less than or equal to 10 in A.
<math>n_{rare}</math> = number of sequences from A that belong to rare shared OTUs (those represented by 10 or fewer sequences).
<math>m_{rare}</math> = number of sequences from B that belong to rare shared OTUs (those represented by 10 or fewer sequences).
<math>S_{12\left(rare\right)}</math> = number of shared OTUs where both of the communities are represented by less than or equal to 10 sequences.
<math>S_{12\left(abund\right)}</math> = number of shared OTUs where at least one of the communities is represented by more than 10 sequences.
<math>S_{12\left(obs\right)}</math> = number of shared OTUs in A and B.
The calculation of <math>S_{A,B ACE}</math> is considerably involved. First, we determine that there are 23 rare shared OTUs and 37 abundant shared OTUs. Next, considering only the rare OTUs, we calculate <math>C_{12}</math> as 0.845878. We obtained the following T-values:
<math>T_{10} = 93</math>
<math>T_{01} = 64</math>
<math>T_{11} = 279</math>
<math>T_{21} = 1444</math>
<math>{T_{12}} = 988</math>
<math>T_{22} = 5440</math>
Next, calculating the Γ-values requires knowing <math>f_{\left(rare \right)1+}, f_{\left(rare \right)+1} \mbox{ and } f_{11}</math>, which were 5, 8, and 2, respectively. Also, <math> n_{rare} \mbox{ and } m_{rare}</math> were 185 and 167, respectively. Finally, the Γ-values come out to <math>{\Gamma}_1=0.530409, {\Gamma}_2 = 0.523308 \mbox{ and } {\Gamma}_3 = 0.151840</math>. This gives an <math>S_{A,B ACE}</math> value of 72.3024, as seen below.
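The T-statistics and the coverage estimate above are straightforward to script. The sketch below is a plain-Python illustration using made-up toy abundance vectors for the rare shared OTUs, not the Eckburg data:

```python
# X[i], Y[i]: abundance of rare shared OTU i in communities A and B.
# Toy vectors for illustration only (not the Eckburg data).

def t_stats(X, Y):
    """The six T-statistics defined above."""
    T10 = sum(X)
    T01 = sum(Y)
    T11 = sum(x * y for x, y in zip(X, Y))
    T21 = sum(x * (x - 1) * y for x, y in zip(X, Y))
    T12 = sum(x * (y - 1) * y for x, y in zip(X, Y))
    T22 = sum(x * (x - 1) * y * (y - 1) for x, y in zip(X, Y))
    return T10, T01, T11, T21, T12, T22

def coverage_C12(X, Y):
    """C12 = 1 - sum{Y_i I(X_i=1) + X_i I(Y_i=1) - I(X_i=Y_i=1)} / T11."""
    T11 = sum(x * y for x, y in zip(X, Y))
    singletons = sum(y * (x == 1) + x * (y == 1) - (x == 1 and y == 1)
                     for x, y in zip(X, Y))
    return 1 - singletons / T11
```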
File Samples on the Eckburg 70.stool_compare Dataset
.shared
This file contains the frequency of sequences from each group found in each OTU. Each row consists of the distance being considered, the group name, the number of OTUs, and the abundance information, separated by tabs. The abundance information is as follows: each subsequent number represents a different OTU, and the number indicates how many sequences from that group clustered within that OTU. Note that OTU frequencies can only be compared within a distance definition. Below is a link to the file used in the calculations.
.sharedAce
The first line contains the labels of all the columns. The first label is "sampled", which gives the number of sequences sampled at each <math>S_{A,B ACE}</math> calculation. The sampling frequency was set to 500, so after every 500 sequences are selected, <math>S_{A,B ACE}</math> is calculated at each of the distances, with a final calculation after all sequences are sampled. The remaining labels in the first line are the distances at which the calculations were made and the names of the groups compared. Each additional line starts with the number of sequences sampled, followed by the <math>S_{A,B ACE}</math> calculation at each column's distance. For instance, at distance 0.01, after 4392 samples <math>S_{A,B ACE}</math> was 136.599.
sampled 0.01tissuestool 0.02tissuestool 0.03tissuestool 0.04tissuestool
1       0        0        0        0
500     44.2676  52.4249  43.9391  26.2499
1000    86.2691  53.7864  55.2556  60.1921
1500    114.238  106.452  45.6638  50.0418
2000    180.391  99.0382  57.2304  47.1769
2500    124.966  92.2403  48.1031  48.5068
3000    114.838  94.2194  56.2644  59.6396
3500    126.609  102.88   59.8571  71.1169
4000    134.213  98.837   56.6823  68.317
4392    136.599  86.5079  72.3024  62.117
Practical values for pullup resistors could be anywhere in the 4.7 k to 20 k range, even 50 k. The value is not always very critical. Try one of those; that may do the job.
You are correct, the datasheet is talking about whatever will be driving that OE pin. Now, the driving pin has some maximum current sourcing/sinking capability—its datasheet should specify this value.
Say your driving pin can
sink 10mA as a maximum, just like you said. Yes, your calculation is correct in terms of protecting that pin from burning out. If you go any lower than 330 ohms, you're exceeding the sinking current rating.
But in practice, you don't want to be sinking that much current through that pin, that is a maximum rating, not a recommended operating condition. Take for example the datasheet you have attached.
On page 4, it specifies a test current, called \$I_{OL}\$, of 50 uA; this is the test current when the output is driven low by the internal logic. Ideally this voltage should be zero, or very close to it, when driven low, but in practice they tell you it could sit at a maximum of 0.1 V. They do the same test with an \$I_{OL}\$ of 8 mA, and notice that the maximum voltage you could see when driving the pin low increases to 0.36 V.
With that in mind, you may want to try a pullup resistor of \$R_p=\dfrac{3.3\text{V}-0\text{V}}{50\mu A}= 66\text{k}\Omega\$
The problem with decreasing the pullup resistor value is that more current flows when the pin is driven low, and the output voltage increases (as you see with the two test currents in the datasheet); it could become high enough that it is no longer a logic 0. Even if you take the case where \$I_{OL}\$ is 8 mA, the maximum specified output voltage is 0.36 V, which is still a logic 0. The resistor value then works out to about 412 Ohms, so a 10 k pullup, for example, should work.
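The sizing arithmetic above is easy to script. A quick sketch using the example numbers from this answer (swap in the values from your own datasheets):

```python
# Minimum pullup that keeps the sink current at or below i_ol:
#   R >= (V_supply - V_ol) / I_ol
# The numbers below are the examples from this answer, not universal values.

def pullup_min_ohms(v_supply, v_ol, i_ol):
    return (v_supply - v_ol) / i_ol

r_at_50uA = pullup_min_ohms(3.3, 0.0, 50e-6)  # ~66 kOhm
r_at_8mA = pullup_min_ohms(3.3, 0.0, 8e-3)    # ~412 Ohm
```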
The numbers given here do not apply directly to your case, because I used the values provided in the buffer datasheet, not those of your driving IC, which you said is a microcontroller. But they do give you an idea of how to calculate the value of a pullup resistor for your specific situation.
There is an upper limit for the value of the pullup resistor, as well, and it has to do with the leakage current going into the pin when the internal transistor driving the pin is 'open'. But I think it isn't necessary to go there.
Of course, if your controller can drive the pin both high and low, you may not need a pullup resistor at all. Hope this helps. |
Homework Statement
Car A drives a curve of radius 60 m at a constant speed of 48 km/h. When A is at the given position, car B is 30 m away from the intersection and accelerating at 1.2 m/s^2 to the south. Calculate the magnitude and direction of the acceleration of car A as measured by car B at that instant.
Homework Equations
Kinematic equations in polar and Cartesian coordinates
I think my approach is quite wrong, still I gave it a shot:
First I know that ##v_A=13.3 m/s=r\omega=60\omega \rightarrow \omega=0.2 \frac{rad}{s}##
Then $$\vec a_A=-r\omega^2 e_r=-2.4 e_r$$
But ##e_r=\cos{\theta}i+\sin{\theta}j## and substituting the latter in the acceleration equation I have that ##\vec a_A= -2i-1.2j##
At last: $$\vec a_{A/B}=\vec a_A - \vec a_B$$
and this is where I stopped; hope you can help me. Thanks!
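For the last step, here is a quick numeric sketch using the OP's intermediate values, and assuming south is the -j direction (an assumption, since the attached figure is not shown):

```python
import math

# OP's values: a_A = -2 i - 1.2 j (m/s^2); car B accelerates 1.2 m/s^2
# to the south, taken here as the -j direction (assumption; figure not shown).
a_A = (-2.0, -1.2)
a_B = (0.0, -1.2)

# a_{A/B} = a_A - a_B, componentwise
a_rel = (a_A[0] - a_B[0], a_A[1] - a_B[1])
magnitude = math.hypot(*a_rel)                             # m/s^2
angle_deg = math.degrees(math.atan2(a_rel[1], a_rel[0]))   # from +x axis
```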
Now showing items 1-10 of 15
A free-floating planet candidate from the OGLE and KMTNet surveys
(2017)
Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...
OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary
(2017)
We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...
OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing
(2017)
We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...
KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion
(2018)
We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ... |
Building a plugin
Writing your own PennyLane plugin, to allow an external quantum library to take advantage of the automatic differentiation ability of PennyLane, is a simple and easy process. In this section, we will walk through the steps for creating your own PennyLane plugin. In addition, we also provide two default reference plugins —
'default.qubit' for basic pure state qubit simulations, and
'default.gaussian' for basic Gaussian continuous-variable simulations.
What a plugin provides
A quick primer on terminology of PennyLane plugins in this section:
A plugin is an external Python package that provides additional quantum devices to PennyLane. Each plugin may provide one (or more) devices that are accessible directly through PennyLane, as well as any additional private functions or classes. Depending on the scope of the plugin, you may wish to provide additional (custom) quantum operations and observables that the user can import.
Important
In your plugin module,
standard NumPy (not the wrapped NumPy module provided by PennyLane) should be imported in all places (i.e.,
import numpy as np).
Creating your device
The first step in creating your PennyLane plugin is to create your device class. This is as simple as importing the abstract base class
Device from PennyLane, and subclassing it:
from pennylane import Device

class MyDevice(Device):
    """MyDevice docstring"""
    name = 'My custom device'
    short_name = 'example.mydevice'
    pennylane_requires = '0.1.0'
    version = '0.0.1'
    author = 'Ada Lovelace'
Here, we have begun defining some important class attributes that allow PennyLane to identify and use the device. These include:
name: a string containing the official name of the device
short_name: the string used to identify and load the device by users of PennyLane
pennylane_requires: the PennyLane version this device supports. Note that this class attribute supports pip
requirements.txt-style version ranges, for example:
pennylane_requires = "2" to support PennyLane version 2.x.x
pennylane_requires = ">=0.1.5,<0.6" to support a range of PennyLane versions
version: the version number of the device
author: the author of the device
Defining all these attributes is mandatory.
Supporting operators and observables
You must further tell PennyLane about the operations and observables that your device supports as well as potential further capabilities, by providing the following class attributes/properties:
operations: a set of the supported PennyLane operations as strings, e.g.,
operations = {"CNOT", "PauliX"}
This is used to decide whether an operation is supported by your device in the default implementation of the public method
supports_operation().
observables: set of the supported PennyLane observables as strings, e.g.,
observables = {"QuadOperator", "NumberOperator", "X", "P"}
This is used to decide whether an observable is supported by your device in the default implementation of the public method
supports_observable().
_capabilities: (optional) a dictionary containing information about the capabilities of the device. At the moment, only the key
'model' is supported, which may return either
'qubit'or
'CV'. Alternatively, you may use this class dictionary to return additional information to the user — this is accessible from the PennyLane frontend via the public method
capabilities().
Applying operations
Once all the class attributes are defined, it is necessary to define some required class methods, to allow PennyLane to apply operations to your device.
When PennyLane needs to evaluate a QNode, it accesses the
execute() method of your plugin, which, by default, performs the following process:
results = []

with self.execution_context():
    self.pre_apply()

    for operation in queue:
        self.apply(operation.name, operation.wires, operation.parameters)

    self.post_apply()
    self.pre_measure()

    for obs in observables:
        if obs.return_type is Expectation:
            results.append(self.expval(obs.name, obs.wires, obs.parameters))
        elif obs.return_type is Variance:
            results.append(self.var(obs.name, obs.wires, obs.parameters))

    self.post_measure()

return np.array(results)
where
queue is a list of PennyLane
Operation instances to be applied, and
observables is a list of PennyLane
Observable instances to be measured and returned. In most cases, there are therefore a minimum of three methods that any device
must implement:
apply(): This accepts an operation name (as a string), the wires (subsystems) to apply the operation to, and the parameters for the operation, and should apply the resulting operation to given wires of the device.
expval(): This accepts an observable name (as a string), the wires (subsystems) to measure, and the parameters for the observable. It is expected to return the resulting expectation value from the device.
var(): This accepts an observable name (as a string), the wires (subsystems) to measure, and the parameters for the observable. It is expected to return the resulting variance of the measured observable value from the device.
Note
Currently, PennyLane only supports measurements that return a scalar value.
However, additional flexibility is sometimes required for interfacing with more complicated frameworks. In such cases, the following (optional) methods may also be implemented:
__init__(): By default, this method receives the number of wires (
self.num_wires) and number of shots
self.shots of the device. This is the right place to set up your device. You may add parameters while overwriting this method if you need to add additional options that the user must pass to the device on initialization. Make sure that you call
super().__init__(wires, shots) at some point here.
execution_context(): Here you may return a context manager for the circuit execution phase (see above). You can implement this method if the quantum library for which you are writing the device requires such an execution context while applying operations and measuring results from the device.
pre_apply(): for any setup/code that must be executed before applying operations
post_apply(): for any setup/code that must be executed after applying operations
pre_measure(): for any setup/code that must be executed before measuring observables
post_measure(): for any setup/code that must be executed after measuring observables
Warning
In advanced cases, the
execute() method may be overwritten directly. This provides full flexibility for handling the device execution yourself. However, this may have unintended side-effects and is not recommended — if possible, try implementing a suitable subset of the methods provided above.
Identifying and installing your device
When performing a hybrid computation using PennyLane, one of the first steps is often to initialize the quantum device(s). PennyLane identifies the devices via their
short_name, which allows the device to be initialized in the following way:
import pennylane as qml

dev1 = qml.device(short_name, wires=2)
where
short_name is a string that uniquely identifies the device. The
short_name has the following form:
pluginname.devicename. Examples include
'default.qubit' and
'default.gaussian' which are provided as reference plugins by PennyLane, as well as
'strawberryfields.fock',
'strawberryfields.gaussian',
'projectq.simulator', and
'projectq.ibm', which are provided by the PennyLane StrawberryFields and PennyLane ProjectQ plugins, respectively.
PennyLane uses a
setuptools
entry_points approach to plugin discovery/integration. In order to make the devices of your plugin accessible to PennyLane, simply provide the following keyword argument to the
setup() function in your
setup.py file:
devices_list = [
    'example.mydevice1 = MyModule.MySubModule:MyDevice1',
    'example.mydevice2 = MyModule.MySubModule:MyDevice2'
]

setup(entry_points={'pennylane.plugins': devices_list})
where
devices_list is a list of devices you would like to register,
example.mydevice1 is the short name of the device, and
MyModule.MySubModule is the path to your Device class,
MyDevice1.
To ensure your device is working as expected, you can install it in developer mode using
pip install -e pluginpath, where
pluginpath is the location of the plugin. It will then be accessible via PennyLane.
Testing
All plugins should come with extensive unit tests, to ensure that the device supports the correct gates and observables, and is applying them correctly. For an example of a plugin test suite, see
tests/test_default_qubit.py and
tests/test_default_gaussian.py in the main PennyLane repository.
In general, as all supported operations have their gradient formula defined and tested by PennyLane, testing that your device calculates the correct gradients is not required — just that it
applies and measures quantum operations and observables correctly.
Supporting new operations
If you would like to support an operation or observable that is not currently supported by PennyLane, you can subclass the
Operation and
Observable classes, and define the number of parameters the operation takes, and the number of wires the operation acts on. For example, to define the Ising gate \(XX_\phi\) depending on parameter \(\phi\),
class Ising(Operation):
    """Ising gate"""
    num_params = 1
    num_wires = 2
    par_domain = 'R'
    grad_method = 'A'
    grad_recipe = None
where
num_params: the number of parameters the operation takes
num_wires: the number of wires the operation acts on
par_domain: the domain of the gate parameters;
'N'for natural numbers (including zero),
'R'for floats,
'A'for arrays of floats/complex numbers, and
Noneif the gate does not have free parameters
grad_method: the gradient computation method;
'A'for the analytic method,
'F'for finite differences, and
Noneif the operation may not be differentiated
grad_recipe: The gradient recipe for the analytic
'A'method. This is a list with one tuple per operation parameter. For parameter \(k\), the tuple is of the form \((c_k, s_k)\), resulting in a gradient recipe of\[\frac{d}{d\phi_k}f(O(\phi_k)) = c_k\left[f(O(\phi_k+s_k))-f(O(\phi_k-s_k))\right].\]
where \(f\) is an expectation value that depends on \(O(\phi_k)\), an example being\[f(O(\phi_k)) = \braket{0 | O^{\dagger}(\phi_k) \hat{B} O(\phi_k) | 0}\]
which is the simple expectation value of the operator \(\hat{B}\) evolved via the gate \(O(\phi_k)\).
Note that if
grad_recipe = None, the default gradient recipe is \((c_k, s_k)=(1/2, \pi/2)\) for every parameter.
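As a standalone sanity check (plain Python, no PennyLane), the default recipe \((c_k, s_k)=(1/2, \pi/2)\) differentiates any expectation value that depends sinusoidally on the parameter exactly:

```python
import math

# Parameter-shift rule: d/dphi f(phi) ~= c * (f(phi + s) - f(phi - s)).
def shift_rule(f, phi, c=0.5, s=math.pi / 2):
    return c * (f(phi + s) - f(phi - s))

# For a sinusoidal expectation value the default (1/2, pi/2) recipe is
# exact: 0.5 * (sin(x + pi/2) - sin(x - pi/2)) = cos(x).
phi = 0.3
assert abs(shift_rule(math.sin, phi) - math.cos(phi)) < 1e-12
```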
The user can then import this operation directly from your plugin, and use it when defining a QNode:
import pennylane as qml
from MyModule.MySubModule import Ising

@qml.qnode(dev1)
def my_qfunc(phi):
    qml.Hadamard(wires=0)
    Ising(phi, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))
Warning
If you are providing custom operations not natively supported by PennyLane, it is recommended that the plugin unittests
do provide tests to ensure that PennyLane returns the correct gradient for the custom operations.
Supporting new CV operations
In addition, for Gaussian CV operations, you may need to provide the static class method
_heisenberg_rep() that returns the Heisenberg representation of the operator given its list of parameters:
class Custom(CVOperation):
    """Custom gate"""
    n_params = 2
    n_wires = 1
    par_domain = 'R'
    grad_method = 'A'
    grad_recipe = None

    @staticmethod
    def _heisenberg_rep(params):
        return function(params)
For operations, the
_heisenberg_rep method should return the matrix of the linear transformation carried out by the gate for the given parameter values. This is used internally for calculating the gradient using the analytic method (
grad_method = 'A').
For observables, this method should return a real vector (first-order observables) or symmetric matrix (second-order observables) of coefficients which represent the expansion of the observable in the basis of monomials of the quadrature operators. For single-mode Operations we use the basis \(\mathbf{r} = (\I, \x, \p)\). For multi-mode Operations we use the basis \(\mathbf{r} = (\I, \x_0, \p_0, \x_1, \p_1, \ldots)\), where \(\x_k\) and \(\p_k\) are the quadrature operators of qumode \(k\).
Non-Gaussian CV operations and observables are currently only supported via the finite difference method of gradient computation. |
I want to find the roots for $\kappa$ for the equation
$$\sqrt{\alpha - 1} \cos{\left (\frac{\sqrt{2} \sqrt{\alpha - 1}}{2 \sqrt{\epsilon}} \right )} \cosh{\left (\frac{\sqrt{2} \sqrt{\alpha + 1}}{2 \sqrt{\epsilon}} \right )} - \sqrt{\alpha - 1} \\ -\frac{1}{\sqrt{\alpha + 1}} \sin{\left (\frac{\sqrt{2} \sqrt{\alpha - 1}}{2 \sqrt{\epsilon}} \right )} \sinh{\left (\frac{\sqrt{2} \sqrt{\alpha + 1}}{2 \sqrt{\epsilon}} \right )} = 0 \enspace ,$$ where $\alpha=\sqrt{1 + 4\epsilon\kappa^2}$. This equation has infinite roots, but I am interested in the first $N$ of them.
One option for solving this problem is Newton's method; the difficulty is choosing the initial points, since the values of the function can be quite large, as can be seen below.
This problem comes from finding the eigenvalues of
$$\frac{d^2 u}{ds^2} - \epsilon \frac{d^4u}{ds^4} + \kappa^2 u = 0$$
then I can obtain an approximation using perturbation methods, i.e., the eigenvalues are approximated by
$$\kappa_n^2 = n^2\pi^2 + \epsilon n^4\pi^4$$
for small $\epsilon$. Then, for small values of $\epsilon$, I can use the values $\kappa_n^2$ as initial guesses to the Newton algorithm. But when $\epsilon$ increases these initial guesses fail.
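One guess-free alternative is bracketing: scan $\kappa$ on a grid, look for sign changes, and refine each bracket by bisection. A sketch is below; note that for very small $\epsilon$ the $\cosh$ factor becomes huge, so the function may need rescaling (e.g. dividing through by $\cosh$) before scanning:

```python
import math

def f_kappa(kappa, eps):
    """Left-hand side of the transcendental equation above."""
    alpha = math.sqrt(1 + 4 * eps * kappa**2)
    a = math.sqrt(alpha - 1)
    b = math.sqrt(alpha + 1)
    u = math.sqrt(2) * a / (2 * math.sqrt(eps))
    v = math.sqrt(2) * b / (2 * math.sqrt(eps))
    return a * math.cos(u) * math.cosh(v) - a - math.sin(u) * math.sinh(v) / b

def bracket_roots(f, lo, hi, n):
    """Scan [lo, hi] on n intervals and bisect every sign change."""
    roots = []
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0:
            for _ in range(80):          # plain bisection
                mid = 0.5 * (x0 + x1)
                if f(x0) * f(mid) <= 0:
                    x1 = mid
                else:
                    x0 = mid
            roots.append(0.5 * (x0 + x1))
    return roots

# e.g. the first few roots for eps = 1e-3 (compare with the perturbation
# guesses): bracket_roots(lambda k: f_kappa(k, 1e-3), 0.1, 20.0, 4000)
```

The grid must be fine enough that no two roots fall in one interval; unlike Newton, though, no per-root initial guess is required.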
Since I know the original differential equation, I can use FEM or FDM to find the eigenvalues, but I am interested in other methods. Below, you can see the comparison using FDM (1001 points), the perturbation solution, and Newton's method (using the perturbation solution as initial guess). All the curves are for $\epsilon=10^{-3}$, but I couldn't make those guesses work for greater values of $\epsilon$.
Question: Is there any other method to solve this problem? Maybe some kind of transformation that can be applied to the equation? |
The net cell reaction of an electrochemical cell and its standard potential is given below: $$\ce{ Mg + 2Ag+ ->Mg^{2+} + 2Ag} \ \ \ \ \ \ \ \ E^\circ=3.17\:\mathrm{V}$$ The question is to find the maximum work obtainable from this electrochemical cell if the initial concentrations of $\ce{Mg^{2+}}=0.1\ \mathrm{M}$ and of $\ce{Ag+}=1\ \mathrm{M}$.
The solution just uses the Nernst equation to find the potential at this concentrations and uses $\Delta G=-nFE$ to calculate the Gibbs free energy change which is then equated to the maximum work obtainable.
But this is just the work obtainable per mole
at this concentration only. As soon as the reaction proceeds, the concentrations change and so does the value of $\Delta G$ per mole, and therefore the maximum work obtainable differs at different concentrations.
Therefore, to find the maximum work obtainable, shouldn't we first calculate the equilibrium constant of this reaction (I calculated it to be $K_c=1.89\times10^{107}$), then the equilibrium concentrations (which are probably $\ce{Mg^{2+}}=0.6\ \mathrm{M}, \ \ce{Ag+}=0.178\times10^{-53}\ \mathrm{M}$) and then use something like: $$ \int_{\mathrm{in}}^{\mathrm{eq}}\Delta G\, \mathrm{d}M $$ where $\Delta G$ is per mole and $\mathrm{d}M$ is a small amount in moles by which the reaction proceeds at that concentration.
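For what it's worth, an integral of this kind can be evaluated numerically. The sketch below assumes 1 L of solution (so molarity equals moles), $n=2$, $T=298.15\ \mathrm{K}$, and integrates $w=\int nFE\,\mathrm{d}\xi$ over the extent of reaction $\xi$, with $[\ce{Mg^{2+}}]=0.1+\xi$ and $[\ce{Ag+}]=1-2\xi$:

```python
import math

# Numeric sketch of the proposed integral.  Assumptions: 1 L of solution
# (molarity = moles), T = 298.15 K, n = 2, E0 = 3.17 V.

F = 96485.0            # Faraday constant, C/mol
RT = 8.314 * 298.15    # J/mol

def E_cell(xi, E0=3.17):
    """Nernst equation for the cell at extent of reaction xi (mol)."""
    Q = (0.1 + xi) / (1 - 2 * xi) ** 2
    return E0 - (RT / (2 * F)) * math.log(Q)

def max_work(xi_end=0.499999, steps=20000):
    """w = integral of n*F*E(xi) d(xi), by the trapezoid rule (joules)."""
    h = xi_end / steps
    total = 0.5 * (E_cell(0.0) + E_cell(xi_end))
    for i in range(1, steps):
        total += E_cell(i * h)
    return 2 * F * h * total
```

Because $E^\circ$ is so large here, $E$ stays positive essentially all the way to $\xi=0.5$, so the result lands close to $nFE^\circ\times 0.5\approx 306\ \mathrm{kJ}$; the Nernst correction only shaves off about a kilojoule.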
Please help me verify my method and to proceed further. |
I am looking for an example where $f:Y\to X$ and $f':Y'\to X$, are both smooth maps of smooth manifolds, but the pullback does not exist.
Remarks:
1) A pullback in a certain category is defined as a space satisfying a universal property, not as a fiber product. (Which is just the usual form of many pullbacks...). See a clarification at the bottom.
2) I know there are examples where the pullback in the smooth category exists, but is different from the fiber product $Y \times_X Y' = \lbrace (y,y') \in Y \times Y'\mid f(y)=f'(y') \rbrace$ (which is always the pullback in the topological category).
This is
not what I am looking for, since in this example the smooth pullback (as the space satisfying the required universal property) exists.
3) In any case where the fiber product is a (smooth) submanifold, it is the pullback. Therefore, any possible example must be one whose fiber product is not a (smooth) submanifold. (In particular $f,f'$ can't be transverse to each other)
4) In this answer there is a possible way of proving that some limits do not exist. Maybe it is possible to use this method here as well, but so far I haven't found an example.
Clarification of the definition of pullback:
A space $Z$ (more precisely a diagram $Y \overset{g}{\leftarrow}Z\overset{g'}{\rightarrow}Y'$ which completes $Y\overset{f}{\rightarrow}X\overset{f'}{\leftarrow}Y'$ into a commutative square) is said to be a pullback if for any diagram $Y \overset{h}{\leftarrow}W\overset{h'}{\rightarrow}Y'$ (which commutes with $Y\overset{f}{\rightarrow}X\overset{f'}{\leftarrow}Y'$), there is a unique smooth/continuous map $u:W \to Z$ such that $h=g \circ u$ and $h'=g' \circ u$. (See Wikipedia.)
Suppose a particle that is under a quantum oscillator potential and is, initially, in the state $\Psi(x,0)=\frac{1}{\sqrt3}\phi_1(x)+\sqrt{\frac23}\phi_2(x)$, where $\phi_1(x)$ and $\phi_2(x)$ are eigenstates of the operator $\hat{H}$. Three consecutive measurements are performed: first, the energy $E$ is measured; then, the observable $A$ is measured, whose associated operator is $\hat{A}$ and which accomplishes that $[\hat{H},\hat{A}]=0$; finally, $E$ is measured again.
(a)Is the first measured value of $E$ the same as the one measured after?
(b)What would happen if $[\hat{H},\hat{A}]\neq0$?
The first value of $E$ depends on probability: the probability of getting $E_1$ is $\frac13$ and the probability of getting $E_2$ is $\frac23$. Now, as far as I know, after measuring an observable in a quantum system, the state
immediately (that is, if the system is not allowed to evolve in time) after the measurement is an eigenstate. That would mean that
$$\hat{H}|\psi\rangle=E_i|\phi_i\rangle$$
So if I measured it again, now the measured system's state would be $\phi_i$ so
$$\hat{H}|\phi_i\rangle=E_i|\phi_i\rangle$$
because it is an eigenstate. So I think that the same result of $E$ would be obtained, no matter how many times I performed the same measurement.
Is this reasoning right?
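For what it's worth, the collapse reasoning can be illustrated with a toy simulation in plain Python (a deliberately simplified two-level model, not a general quantum simulator):

```python
import random

# Toy check of the reasoning above: a projective measurement in a basis
# that diagonalizes H leaves the state in an eigenstate, so an immediate
# re-measurement of E returns the same value with probability 1.

def measure_E(state):
    """Projective energy measurement; returns (outcome, collapsed state).

    `state` maps eigenstate label n -> amplitude in the {phi_n} basis.
    """
    r, acc = random.random(), 0.0
    for n, a in state.items():
        acc += abs(a) ** 2
        if r < acc:
            return n, {n: 1.0}   # collapse onto the eigenstate phi_n
    return n, {n: 1.0}           # guard against rounding in the cumsum

# Initial state: sqrt(1/3) phi_1 + sqrt(2/3) phi_2
amps = {1: (1 / 3) ** 0.5, 2: (2 / 3) ** 0.5}

random.seed(0)
first, collapsed = measure_E(amps)
second, _ = measure_E(collapsed)
assert first == second   # same E, no matter how often we re-measure
```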
Now regarding the original question, knowing that $[\hat{H},\hat{A}]=0$ means that those observables are compatible, i.e. they can be measured simultaneously. What I don't know is if knowing that allows me to say the same as before: due to the compatibility of both operators, measuring $A$ between the two measurements of $E$ doesn't interfere, so I will keep on getting the same value of $E$ over and over again. I don't know if the fact that the commutator is $0$ can lead to that reasoning, as this would mean that
every pair of commuting operators shares the same eigenstates. Is this true or not?
And about question
(b), in that case both measurements of $E$ would depend on the probability for sure, because the system was not left in an eigenstate after the measurement of $A$, so there is no way to know if the same value of $E$ is going to come out or not. I'm pretty sure about this one, but your confirmation would be great. |
One shouldn't imagine the T-duality between the two heterotic strings to be a $Z_2$ group, like in the case of type II string theories' T-duality. In type II string theory, there is only one relevant scalar field, the radius of the circle producing T-duality, and it gets reverted $R\to 1/R$ under T-duality.
In the heterotic case, it's more complicated because more scalar fields participate in the T-duality. Instead of a $Z_2$ map acting on one scalar field, one must correctly adjust the moduli, especially the Wilson lines generically breaking the 10D gauge group to $U(1)^{16}$, and find an identification between the points of the moduli space of the two heterotic string theories: there is one theory at the end.
A fundamental reason why the T-duality holds is that one may define the heterotic string theories in the bosonic representation, using 16-dimensional lattice $\Gamma^{16}$ which is the weight lattice of $Spin(32)/Z_2$, and $\Gamma^{8}\oplus \Gamma^8$ which is the root lattice of $E_8\times E_8$.
One may also describe the compactification on a circle in terms of lattices. It corresponds to adding ($\oplus$) the lattice $\Gamma^{1,1}$ of the indefinite signature to the original lattice. The extra 1+1 dimensions correspond to the compactified left-moving and right-moving boson (of the circle), respectively.
Now, the key mathematical fact is that the 17+1-dimensional even self-dual lattice exists and is unique, which really means$$\Gamma^{16}\oplus \Gamma^{1,1} = \Gamma^{8}\oplus \Gamma^8\oplus \Gamma^{1,1}$$Even self-dual lattices of signature $(p,q)$ exist whenever $p-q$ is a multiple of eight, and if both $p$ and $q$ are nonzero, the lattice is unique.
It's unique up to an isometry – a Lorentz transformation of a sort – and that's how the identity above should be understood, too. So there is a way to linearly redefine the 17+1 bosons on the heterotic string world sheet so that a basis that is natural for the $E_8\times E_8$ heterotic string gets transformed to the $Spin(32)/Z_2$ string or vice versa. The compactified boson has to be nontrivially included in the transformation – the 17+1-dimensional Lorentz transformation that makes the T-duality manifest mixes the 16 chiral bosons with the 1+1 boson from the compactified circle.
A different derivation of the equivalence may be found e.g. in Polchinski's book. One may start with one of the heterotic strings and carefully adjust the Wilson lines to see that at a special point, the symmetry broken to $U(1)^{17+1}$ is enhanced once again to the other gauge group. |
In [1] a simple optimization model is presented for the scheduling of patients receiving cancer treatments. [2] tried to use this model. This was not so easy: small bugs in [1] can make life difficult when replicating things.
We use the data set in [2]:
- There are \(T=40\) time slots of 15 minutes.
- We have 23 chairs where patients receive their treatment.
- We have 8 different types of patients.
- Each patient type has a demand (number of actual patients) and a treatment length (expressed in 15 minute slots).
- There is a lunch break during which no patients can start their treatment.
- We want at most 2 treatment sessions starting in each time slot.
Patient Data

Main variables
A treatment session is encoded by two binary variables: \[\mathit{start}_{c,p,t} = \begin{cases} 1 & \text{if session for a patient of type $p$ starts in time slot $t$ in infusion chair $c$} \\ 0 & \text{otherwise} \end{cases}\] \[\mathit{next}_{c,p,t} = \begin{cases} 1 & \text{if session for a patient of type $p$ continues in time slot $t$ in infusion chair $c$} \\ 0 & \text{otherwise} \end{cases}\]
Start and Next variables

The start variables are colored orange and the next variables are grey. Patient type 1 has a treatment session length of 1. This means a session has a start variable turned on, but no next variables. Patient type 2 has a length of 4, so each session has one start variable and 3 next variables with value one.
Note that there are multiple patients of type 1 and 2.
Equations
A chair can be occupied by zero or one patients: \[\sum_p \left( \mathit{start}_{c,p,t} + \mathit{next}_{c,p,t}\right)\le 1 \>\forall c,t\]
When \(\mathit{start}_{c,p,t}=1\) we need that the next \(\mathit{length}(p)-1\) slots have \(\mathit{next}_{c,p,t'}=1\). Here the paper [1] makes a mistake. They propose to model this as: \[\sum_{t'=t+1}^{t+\mathit{length}(p)-1} \mathit{next}_{c,p,t'} = (\mathit{length}(p)-1)\mathit{start}_{c,p,t}\>\>\forall c,p,t\] This is not correct: this version would imply that we have \[\mathit{start}_{c,p,t}=0 \Rightarrow \sum_{t'=t+1}^{t+\mathit{length}(p)-1} \mathit{next}_{c,p,t'}= 0\] This would make a lot of slots just unavailable. (Your model will most likely be infeasible). The correct constraint is:\[\sum_{t'=t+1}^{t+\mathit{length}(p)-1} \mathit{next}_{c,p,t'} \ge (\mathit{length}(p)-1)\mathit{start}_{c,p,t}\>\>\forall c,p,t\] A dis-aggregated version is: \[\mathit{next}_{c,p,t'} \ge \mathit{start}_{c,p,t} \>\>\forall c,p,t,t'=t+1,\dots\,t+\mathit{length}(p)-1\] This may perform a little bit better in practice (although some solvers can do such a dis-aggregation automatically).
Note that with this formulation, we only enforce \[\mathit{start}_{c,p,t}=1 \Rightarrow \mathit{next}_{c,p,t'}=1\] If \(\mathit{start}_{c,p,t}=0\), we leave \(\mathit{next}_{c,p,t'}\) unrestricted. This means we have some \(\mathit{next}\) variables just floating: they can be zero or one. Only the important cases are bound to be one. This again means that the real solution is just \(\mathit{start}_{c,p,t}\), and we need to reconstruct the \(\mathit{next}\) variables afterwards. This concept of having variables just floating when they do not matter can be encountered in other MIP models.
To meet demand we can do:\[\sum_{c,t} \mathit{start}_{c,p,t} = \mathit{demand}_p\>\>\forall p\]
Finally, lunch is easily handled by fixing \[\mathit{start}_{c,p,t}=0\] when \(t\) is part of the lunch period.
There is one additional issue: we cannot start a session if there are not enough time slots left to finish the session. I.e. we have: \[\mathit{start}_{c,p,t}=0\>\>\text{if $t\ge T - \mathit{length}(p)+2$}\]
GAMS model
set
   c 'chair' /chair1*chair23/
   p 'patient type' /patient1*patient8/
   t 'time slots' /t1*t40/
   lunch(t) 'lunch time' /t19*t22/
;
alias(t,tt);

table patient_data(p,*)
            demand  length
patient1      24       1
patient2      10       4
patient3      13       8
patient4       9      12
patient5       7      16
patient6       6      20
patient7       2      24
patient8       1      28
;
scalar maxstart 'max starts in period' /2/;

parameter
   demand(p)
   length(p)
;
demand(p) = patient_data(p,'demand');
length(p) = patient_data(p,'length');
We create some sets to help us make the equations simpler. This is often a good idea: sets are easier to debug than constraints. Constraints can only be verified when the whole model is finished and we can solve it. Sets can be debugged in advance. In general, I prefer constraints to be as simple as possible.
set
   startok(p,t) 'allowed slots for start'
   after(p,t,tt) 'given start at (p,t), tt are further slots needed (tt = t+1..t+length-1)'
;
startok(p,t) = ord(t) <= card(t)-length(p)+1;
startok(p,lunch) = no;
after(p,t,tt) = startok(p,t) and (ord(tt) >= ord(t)+1) and (ord(tt) <= ord(t)+length(p)-1);
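For readers who don't run GAMS, the construction of startok and after can be sketched in Python (a hypothetical re-implementation for checking the set logic, not part of the model):

```python
# Hypothetical Python sketch of the GAMS sets startok and after,
# using the data from the table above (T=40 slots, lunch = slots 19..22).
T = 40
lunch = set(range(19, 23))                     # slots t19..t22 (1-based)
length = {1: 1, 2: 4, 3: 8, 4: 12, 5: 16, 6: 20, 7: 24, 8: 28}

# startok(p,t): a session of type p may start at t if it fits before T
# and t is not during lunch.
startok = {(p, t)
           for p in length
           for t in range(1, T + 1)
           if t <= T - length[p] + 1 and t not in lunch}

# after(p,t,tt): the slots t+1 .. t+length(p)-1 that must have next=1.
after = {(p, t, tt)
         for (p, t) in startok
         for tt in range(t + 1, t + length[p])}

# Every allowed start of type p drags along exactly length(p)-1 'next' slots.
assert all(len([x for x in after if x[0] == p and x[1] == t]) == length[p] - 1
           for (p, t) in startok)
```

This mirrors the GAMS conditions one-to-one: the `t <= T - length[p] + 1` clause is the end-of-day restriction and the lunch exclusion corresponds to `startok(p,lunch) = no`.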
The set startok looks like:

Set startok

The set after is a bit more complicated:

Set after

For each allowed start (p,t), this set indicates which next variables need to be turned on. E.g. when patient type 5 starts a treatment session in period 1, the next variables need to be one for periods 2 through 16. At the bottom we see again the effect of the lunch period.
The optimization model can now be expressed as:
binary variables
   start(c,p,t) 'start: begin treatment'
   next(c,p,t)  'continue treatment'
;
variable z 'objective variable';

start.fx(c,p,t)$(not startok(p,t)) = 0;

equations
   obj 'dummy objective: find feasible solution only'
   slots(c,p,t) 'start=1 => corresponding next = 1'
   slots2(c,p,t,tt) 'disaggregated version'
   chair(c,t) 'occupy once'
   patient(p) 'demand equation'
   starts(t) 'limit starts in each slot'
;
* dummy objective
obj.. z =e= 0;

* aggregated version
slots(c,startok(p,t))..
   sum(after(p,t,tt), next(c,p,tt)) =g= (length(p)-1)*start(c,p,t);

* disaggregated version
slots2(c,after(p,t,tt))..
   next(c,p,tt) =g= start(c,p,t);

* occupation of chair
chair(c,t)..
   sum(p, start(c,p,t) + next(c,p,t)) =l= 1;

* demand equation
patient(p)..
   sum((c,t), start(c,p,t)) =e= demand(p);

* limit starts
starts(t)..
   sum((c,p), start(c,p,t)) =l= maxstart;

model m1 /slots,chair,patient,starts,obj/;
model m2 /slots2,chair,patient,starts,obj/;

solve m1 minimizing z using mip;
display start.l;
I try to make the results more meaningful:
parameter results(*,t) 'reporting';
start.l(c,p,t) = round(start.l(c,p,t));
loop((c,p,t)$(start.l(c,p,t)=1),
   results(c,t) = ord(p);
   results(c,tt)$after(p,t,tt) = -ord(p);
);
results('starts',t) = sum((c,p), start.l(c,p,t));
We only use the start variables. We know that some of the next variables may have a value of one while not being part of the schedule. The results look like:
Results
The colored cells with positive numbers correspond to a start of a session. The grey cells are patients occupying a chair for the remaining time after the start slot. We see that each period has two starts, except for lunch time, when no new patients are scheduled.
This model solves very quickly: about half a second.
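Before refining the model, a quick back-of-envelope check on the data is instructive (this is my own arithmetic on the table above, not something from [1] or [2]):

```python
import math

# Back-of-envelope checks on the patient table above. The chair bound
# ignores lunch and the 2-starts-per-slot limit, so the true chair
# requirement may be higher.
demand = {1: 24, 2: 10, 3: 13, 4: 9, 5: 7, 6: 6, 7: 2, 8: 1}
length = {1: 1, 2: 4, 3: 8, 4: 12, 5: 16, 6: 20, 7: 24, 8: 28}
T = 40

total_slots = sum(demand[p] * length[p] for p in demand)  # occupied chair-slots
chair_lower_bound = math.ceil(total_slots / T)            # each chair offers T slots
total_starts = sum(demand.values())                       # sessions to schedule
next_total = total_slots - total_starts                   # 'next' slots over all sessions

# The start limit is exactly tight: 72 sessions must start, and with a
# 4-slot lunch there are (40 - 4) * 2 = 72 start opportunities.
assert total_starts == (T - 4) * 2
```

This gives 584 occupied chair-slots, so at least ⌈584/40⌉ = 15 chairs are needed no matter how cleverly we schedule, and in any valid schedule exactly 512 next slots are occupied.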
Proper Next variables
In this model we only use the start variables for reporting. The next variables can show spurious values \(\mathit{next}_{c,p,t}=1\) which are not binding. Can we change the model so we only have valid next variables?
There are two ways:

- Minimize the sum of the next variables: \[\min \sum_{c,p,t} \mathit{next}_{c,p,t}\] Surprisingly, this made the model much more difficult to solve.
- We know in advance how many next variables should be turned on, so we can add the constraint: \[\sum_{c,p,t} \mathit{next}_{c,p,t} = \sum_p \mathit{demand}_p (\mathit{length}(p)-1) \] This will prevent these floating next variables.

A better formulation drops the next variables altogether and uses the start variables directly in the constraint that checks the occupation of chairs: \[ \sum_p \sum_{t'=t-\mathit{length}_p+1}^t \mathit{start}_{c,p,t'} \le 1 \> \forall c,t\] In GAMS we can model this as follows. We just need to change the set after a little bit: we let tt in after(p,t,tt) include t itself. Let's call this set cover:
set
   startok(p,t) 'allowed slots for start'
   cover(p,t,tt) 'given start at (p,t), tt are all slots needed (tt = t..t+length-1)'
;
startok(p,t) = ord(t) <= card(t)-length(p)+1;
startok(p,lunch) = no;
cover(p,t,tt) = startok(p,t) and (ord(tt) >= ord(t)) and (ord(tt) <= ord(t)+length(p)-1);
Note again that the only difference with our earlier set after(p,t,tt) is that after has the condition ord(tt) >= ord(t)+1 while cover(p,t,tt) has ord(tt) >= ord(t). One obvious difference between the sets after and cover is the handling of patient type 1: after did not have entries for this patient type, while cover shows:
Set cover has a diagonal structure for patient type 1
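The relationship between the two sets can be made explicit with a small Python sketch (again a hypothetical check, outside the GAMS model): cover is just after plus the diagonal entries (p,t,t).

```python
# Hypothetical sketch: cover(p,t,tt) equals after(p,t,tt) plus the start
# slot itself (tt = t), so each start of type p covers exactly length(p) slots.
T = 40
lunch = set(range(19, 23))
length = {1: 1, 2: 4, 3: 8, 4: 12, 5: 16, 6: 20, 7: 24, 8: 28}
startok = {(p, t) for p in length for t in range(1, T + 1)
           if t <= T - length[p] + 1 and t not in lunch}
after = {(p, t, tt) for (p, t) in startok for tt in range(t + 1, t + length[p])}
cover = {(p, t, tt) for (p, t) in startok for tt in range(t, t + length[p])}

# cover is after plus the diagonal entries (p, t, t).
assert cover == after | {(p, t, t) for (p, t) in startok}
# Unlike after, cover has entries for patient type 1 (the diagonal only).
assert any(p == 1 for (p, t, tt) in cover)
assert not any(p == 1 for (p, t, tt) in after)
```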
With this new set cover we can easily form our updated constraint chair:
* occupation of chair
chair(c,t)..
   sum(cover(p,tt,t), start(c,p,tt)) =l= 1;
This will find all variables start(c,p,tt) that potentially cover the slot (c,t). Here we see how we can simplify equations a lot by using well-designed intermediate sets.

Minimize number of chairs
We can tighten up the schedule a little bit. There is enough slack in the schedule that we actually don't need all chairs to accommodate all patients. To find the minimum number of chairs we make the following changes to the model: first we introduce a new binary variable usechair(c). Next we change the equations:
* objective
obj.. z =e= sum(c, usechair(c));

* occupation of chair
chair(c,t)..
   sum(cover(p,tt,t), start(c,p,tt)) =l= usechair(c);

* chair ordering
order(c-1).. usechair(c) =l= usechair(c-1);
The last constraint says \(\mathit{usechair}_c \le \mathit{usechair}_{c-1}\) for \(c\gt 1\) (this last condition is implemented by indexing the constraint as order(c-1)). The purpose of this constraint is two-fold: (1) reduce symmetry in the model which hopefully will speed up things, and (2) make the solution better looking: all the unused chairs are at the end. With this, an optimal schedule looks like:
Minimize number of chairs

Multi-objective version
We can try to make the schedule more compact: try to get rid of empty chairs in the middle of the schedule. An example of such a "hole" is cell (chair6, t12). We do this by essentially solving a bi-objective problem:

- Minimize the number of chairs needed
- Minimize the spread

We can either solve a weighted-sum objective \(w_1 z_1 + w_2 z_2\) with a large weight on \(z_1\), the number of chairs used, or solve in two steps:

- Solve the number-of-chairs problem.
- Fix the number of chairs to the optimal value and then solve the minimum-spread model.
Bi-objective model results
Update: I added paragraphs about the suggested formulations in the comments.
References

1. Anali Huggins, David Claudio, Eduardo Pérez, Improving Resource Utilization in a Cancer Clinic: An Optimization Model, Proceedings of the 2014 Industrial and Systems Engineering Research Conference, Y. Guan and H. Liao, eds., https://www.researchgate.net/publication/281843060_Improving_Resource_Utilization_in_a_Cancer_Clinic_An_Optimization_Model
2. Python Mixed Integer Optimization, https://stackoverflow.com/questions/51482764/python-mixed-integer-optimization
The factor $\frac12$ comes in because we're integrating the equation$$\frac{\mathrm dE}{\mathrm dv}=mv$$once.
Less abstract and only using basic arithmetics, the story goes like this:
When accelerating a body by applying a (constant) force $F$ along a distance $\Delta s$, the body gains energy according to$$\Delta E=F\Delta s$$which is just the definition of (mechanical) work.
According to Newton's second law $F=ma$. We also have $\Delta s \approx v\Delta t$ and thus$$\Delta E\approx mav\Delta t$$This relationship is only approximate because during any finite time interval $\Delta t$, the value $v$ changes as the whole point of the exercise was accelerating the body.
Now, as $a\Delta t=\Delta v$ we have$$\Delta E \approx mv\Delta v$$
But where does the factor $\frac12$ come in? From basic calculus:$$\Delta(v^2)=(v+\Delta v)^2-v^2=2v\Delta v+(\Delta v)^2\approx 2v\Delta v$$which yields$$\Delta E\approx\frac12m\Delta(v^2)=\Delta(\frac12 mv^2)$$and thus$$E\approx\frac12 mv^2 + \mathrm{const}$$If we go from finite to infinitesimal time intervals, the equations become exact and we no longer need to assume a constant force.
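This argument can be checked numerically: the snippet below (a small simulation added for illustration) accelerates a body with a constant force in small time steps, accumulates the work $F\,\Delta s$, and compares the total with $\frac12 mv^2$.

```python
# Numerical check of the argument above: accelerate a body with a constant
# force, accumulate the work F*Δs in small steps, and compare with ½mv².
m, F = 2.0, 3.0          # arbitrary mass [kg] and force [N]
dt, steps = 1e-5, 100_000
v, E = 0.0, 0.0
for _ in range(steps):
    ds = v * dt          # distance covered in this step (Δs ≈ vΔt)
    E += F * ds          # work done: ΔE = FΔs
    v += (F / m) * dt    # Newton's second law: Δv = (F/m)Δt
# The accumulated work matches ½mv² up to the discretization error.
assert abs(E - 0.5 * m * v**2) < 1e-3
```

Shrinking `dt` shrinks the discrepancy, which is exactly the passage "from finite to infinitesimal time intervals" described above.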
A short introduction to differential calculus as relevant to this particular example:
At time $t = t_0$ the body has a velocity $v(t_0)=v_0$. After a time $\Delta t$, the body has the velocity $v(t_0+\Delta t)=v_0 + \Delta v$.
The value of $v^2$ at time $t=t_0$ is of course $v^2(t_0)=v(t_0)^2=v_0{}^2$. What's the value of $v^2$ at time $t=t_0+\Delta t$?$$v^2(t_0 + \Delta t)=v(t_0+\Delta t)^2 = (v_0+\Delta v)^2$$On the other hand, we also have$$v^2(t_0 + \Delta t) = v^2(t_0) + \Delta(v^2) = v_0{}^2 + \Delta(v^2)$$and thus$$\begin{align*}\Delta(v^2) &= (v_0 + \Delta v)^2 -v_0^2 \\&= v_0^2 + 2v_0\Delta v + (\Delta v)^2 - v_0{}^2 \\&= 2v_0\Delta v + (\Delta v)^2\end{align*}$$We're interested in the
instantaneous values, i.e. the change as we take the limit $\Delta t \rightarrow 0$. This means that $\Delta v$ becomes arbitrarily small as well, and we're in particular able to ignore higher powers like $(\Delta v)^2$ and get$$\Delta(v^2)\approx 2v_0\Delta v$$or$$\frac{\Delta(v^2)}{\Delta v}\approx 2v_0$$This procedure is so useful that it got its own formalism and symbolic notation$$\frac{\mathrm{d}(v^2)}{\mathrm{d}v}=2v$$after taking the limit $\Delta v\rightarrow 0$.
Negative Factorials and their Quotients
Dante
A factorial of a number is, for most cases, an operation that multiplies a positive integer by every positive integer less than itself. Or, mathematically speaking,
$$1!=1$$
$$2!=2\cdot 1=2$$
$$3!=3\cdot2\cdot1=6$$
$$4!=4\cdot3\cdot2\cdot1=24$$
$$5!=5\cdot4\cdot3\cdot2\cdot1=120$$
Notice how 5! is just \(5\cdot4!\), or \(5\cdot4\cdot3\cdot2\cdot1\). It's not a particularly interesting function, outside of the caveat that 0! is also defined to be 1. However, looking through some of my old math journals, I came across an interesting side note on this subject. As you might have guessed, negative numbers are pretty much a no-no for the factorial function: (-1)! is undefined. That stated, I apparently found that certain ratios of factorials might not be so undefined. In particular, consider the following equation,
$$n^2 = \frac{(n+1)!}{(n-1)!} - n \Rightarrow{\frac{(n+1)!}{(n-1)!}=n^2+n}$$
$$\Rightarrow{\frac{(n-1)!}{(n+1)!}=\frac{1}{n^2+n}}\Rightarrow{(n-1)!=\frac{(n+1)!}{n^2+n}}$$
$$\Rightarrow{n!=\frac{(n+2)!}{(n+1)^2+n + 1}}\Rightarrow{n!=\frac{(n+2)!}{n^2+3\cdot n+2}}$$
That is, this last equation is another way of writing n!, and plugging in a few values we see that our old faithful values from before show up. Of course, what use is it? We're defining n! based on (n+2)!? That seems almost to be a waste, and as with n!, this equation is also undefined for all negative numbers: -1 and -2 result in a 0 for the quadratic in the denominator, while all values less than -2 result in a negative argument for the factorial itself. However, what is interesting about this equation is that factorial quotients are defined. In particular, for any negative numbers,
$$\frac{(n+2)!}{n!}=n^2+3\cdot n+2$$
So that, using negative numbers in our factorials quotients, we get...
$$\frac{(-1)!}{(-3)!}=2$$
$$\frac{(-2)!}{(-4)!}=6$$
$$\frac{(-3)!}{(-5)!}=12$$
Of course, maybe this is just a bit of sleight of mathematical hand, or maybe not. In particular, if we take the original definition of the factorial function and attempt to extend it to negative numbers, what do we get?
$$\require{cancel}
\frac{(-1)!}{(-3)!}=\frac{-1\cdot{-2}\cdot{\cancel{-3}}\cdot{\cancel{-4}}\cdot{...}}{\cancel{-3}\cdot{\cancel{-4}}\cdot{\cancel{-5}}\cdot{...}}=-1\cdot{-2}=2$$
$$\require{cancel}
\frac{(-2)!}{(-4)!}=\frac{-2\cdot{-3}\cdot{\cancel{-4}}\cdot{\cancel{-5}}\cdot{...}}{\cancel{-4}\cdot{\cancel{-5}}\cdot{\cancel{-6}}\cdot{...}}=-2\cdot{-3}=6$$
$$\require{cancel}
\frac{(-3)!}{(-5)!}=\frac{-3\cdot{-4}\cdot{\cancel{-5}}\cdot{\cancel{-6}}\cdot{...}}{\cancel{-5}\cdot{\cancel{-6}}\cdot{\cancel{-7}}\cdot{...}}=-3\cdot{-4}=12$$
Which matches up nicely with the values we acquired before, and tells us what we should have considered all along: that the quotient of two factorials is equal to the product (either in the numerator or the denominator, depending upon the situation) of those factors that do not appear in both factorials.
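These manipulations can be sanity-checked in Python. The script below (my own check, not a proof that the divergent products above are rigorous) verifies the polynomial identity for ordinary factorials and then evaluates it at the negative arguments used above.

```python
from math import factorial

# Check the identity (n+2)!/n! = n^2 + 3n + 2 for ordinary factorials.
for n in range(0, 20):
    assert factorial(n + 2) // factorial(n) == n * n + 3 * n + 2

def quotient(n):
    """Formally (n+2)!/n!, extended to any integer n via the polynomial."""
    return n * n + 3 * n + 2

# The extended formula reproduces the values from the cancellation argument:
# (-1)!/(-3)! = 2, (-2)!/(-4)! = 6, (-3)!/(-5)! = 12.
assert [quotient(n) for n in (-3, -4, -5)] == [2, 6, 12]
```

Note the index shift: the quotient \(\frac{(-1)!}{(-3)!}\) is \(\frac{(n+2)!}{n!}\) evaluated at \(n=-3\).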
The arc length of the graph of a function $f(x)$ is

$$\ell = \int_a^b \sqrt {1 + f'(x)^2}\, dx$$

If you choose $f$ so that its graph is a quarter of the ellipse with $a=1, b=\frac{1}{2}$, i.e. $f(x) = \frac{1}{2} \sqrt{1-x^2}$ on $[0,1]$, the arc length of the graph of this function is

$$\ell = \int_0^1 \sqrt{\frac{1-\frac{3x^2}{4} }{1-x^2}}\,dx$$
If you enter
int_0^1 sqrt((1-(3x^2)/4)/(1-x^2))dx at wolframalpha.com you will get the result:
What I would have expected here as the argument of $E$ (the complete elliptic integral of the second kind) was the eccentricity $e = \sqrt{1-(b/a)^2} = \sqrt{\frac{3}{4}}$ but it's $e^2$.
Is this a Wolframalpha bug or did I make or understand something wrong? |
I was trying to solve $ \displaystyle \sum_{n = 1}^{\infty} \frac{n^3}{8^n}$ and I found a way to solve it, and I want to know if there are generalizations for, say, $\displaystyle \sum_{n=1}^{\infty} \frac{n^k}{a^n}$ in terms of $k$ and $a$. I would also like to know if there is a better way to solve it. Here's how I did it:
First I decomposed the series into the following sums:
$S_1 = \frac{1}{8} + \frac{1}{64} + \dots = \frac{\frac{1}{8}}{\frac{7}{8}}$
$S_2 = \frac{7}{64} + \frac{7}{512} + \dots = \frac{\frac{7}{64}}{\frac{7}{8}}$
$S_3 = \frac{19}{512} + \frac{19}{4096} + \dots = \frac{\frac{19}{512}}{\frac{7}{8}}$
And deduced that the sum can be written as $\frac{8}{7} \displaystyle \sum_{n = 1}^{\infty} \frac{3n^2 - 3n + 1}{8^n}$
$\displaystyle \sum_{n = 1}^{\infty} \frac{1}{8^n}$ is easy to evaluate -- it's $\frac{1}{7}$ by geometric series.
$\displaystyle \sum_{n = 1}^{\infty} \frac{n}{8^n}$ can be evaluated in a whole host of ways to get an answer of $\frac{8}{49}$.
It remains to evaluate $\displaystyle \sum_{n = 1}^{\infty} \frac{n^2}{8^n}$, for which I took a similar approach as for the cubes by decomposing it into many sums:
$T_1 = \frac{1}{8} + \frac{1}{64} + \dots = \frac{\frac{1}{8}}{\frac{7}{8}}$
$T_2 = \frac{3}{64} + \frac{3}{512} + \dots = \frac{\frac{3}{64}}{\frac{7}{8}}$
And so forth, coming to the conclusion that it is equal to $\frac{8}{7} \displaystyle \sum_{n = 1}^{\infty} \frac{2n-1}{8^n}$
Now, I used this information and the above values for $\displaystyle \sum_{n = 1}^{\infty} \frac{1}{8^n}$ and $\displaystyle \sum_{n = 1}^{\infty} \frac{n}{8^n}$ to get the sum as $\frac{776}{2401}$, which is confirmed by WA.
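To double-check the value, here is a small script I am adding for verification; it also uses the known closed form $\sum_{n\ge1} n^3x^n = \frac{x(1+4x+x^2)}{(1-x)^4}$, which arises from applying $x\frac{d}{dx}$ three times to the geometric series.

```python
from fractions import Fraction

# Numerical confirmation: partial sums of n^3/8^n converge to 776/2401.
target = Fraction(776, 2401)
partial = sum(Fraction(n**3, 8**n) for n in range(1, 60))
assert abs(float(partial - target)) < 1e-12

# The closed form sum n^3 x^n = x(1+4x+x^2)/(1-x)^4 gives the same value
# exactly at x = 1/8 (computed in exact rational arithmetic).
x = Fraction(1, 8)
assert x * (1 + 4 * x + x**2) / (1 - x)**4 == target
```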
So, I would like to reiterate here: Is there a simpler way to compute this sum, and are there any known generalizations for this problem given an arbitrary $a$ in the denominator and arbitrary $k$ as the exponent in the numerator? |
Easier Eigenvectors For Hermitian Matrices
Dante
If you're wondering what an eigenvalue, or eigenvector, is, thank whatever god you pray to and move on. However, if you're morbidly curious, eigenvalues and their respective eigenvectors are nontrivial solutions to this equation.
$$A\vec{v}=\lambda\vec{v}$$
And by nontrivial, I mean that you can't just set \(\vec{v}\) and \(\lambda\) to zeros and be done with it. You'd like to do that, but that would be too easy. Instead, you will be instructed to solve the equation through the characteristic equation in order to find appropriate eigenvalues, like so,
$$\left\|A-\lambda\text{I}\right\|=0$$
These, in turn, you will plug back into the equation above, allowing you to solve for all the appropriate eigenvectors by Gaussian elimination. So, for a 3x3 matrix, the characteristic equation can stiff you with a 3rd order polynomial (for three total possible eigenvalues), and each of these eigenvalues must be plugged back into the original eigenvector equation to give you three eigenvectors by solving three sets of three linear equations -- after which you should then check that these vectors are linearly independent of one another, and toss out those that are not. If that description gives you goosebumps, you're not alone. It is, therefore, no small wonder that we might want a faster, or at least more streamlined, method for solving these equations, and that's where this blog comes in.

Unfortunately, I'm not here to help you with the characteristic equation. You're on your own with that one. But what I can help you out with is determining the eigenvectors, once you have a set of eigenvalues, particularly once we get to this equation.
$$(A-\lambda\text{I})\vec{v}=0$$
In particular, what I found was that given a square Hermitian matrix, A, and its eigenvalues, \(\lambda_i\), the eigenvectors of the matrix, \(\vec{v_i}\), are given by taking the matrix G, defined by,

$$G_i=A-\lambda_i\text{I}$$

substituting each column with a constant (I suggest 1), and then taking the determinant. In case you're wondering, the column number will then correspond to the eigenvector element, as the worked example below shows. So, if you're like me and find determinants far more appealing than Gaussian elimination, this might just be the method for you. Of course, it's probably leaving your head spinning, too. So let's do an example!
Let's start with a matrix satisfying the above stipulations, notice the symmetry about the diagonal -- this tells us that this matrix is hermitian.
$$A=\begin{bmatrix}4&6\\6&{-1}\end{bmatrix}$$
Placing this inside of the characteristic equation yields
$$\left\|A - \lambda\text{I}\right\|=\left|\begin{matrix}{4-\lambda}&6\\6&{-1-\lambda}\end{matrix}\right|=0$$
Taking the determinant, we are left with the quadratic equation

$$\begin{align*} (4-\lambda)\cdot(-1-\lambda)-6^2 &= -4-4\lambda+\lambda+\lambda^2-36\\ &=\lambda^2-3\lambda-40\\ &=(\lambda+5)\cdot(\lambda-8) = 0\end{align*}$$
So, \(\lambda=-5\) or \(\lambda=8\).
Now that we have our eigenvalues, we just need to plug them back into the equation above to get G.
$$G_1=\begin{bmatrix}{4+5}&6\\6&{5-1}\end{bmatrix}=\begin{bmatrix}9&6\\6&4\end{bmatrix}$$
$$G_2=\begin{bmatrix}{4-8}&6\\6&{-1-8}\end{bmatrix}=\begin{bmatrix}{-4}&6\\6&{-9}\end{bmatrix}$$
Let's start with \(G_1\) and find the first eigenvector.
$$\left\|{G_{1,1}}\right\|=\left|\begin{matrix}1&6\\1&4\end{matrix}\right|=(1\cdot4)-(1\cdot6)=4-6=-2$$
$$\left\|{G_{1,2}}\right\|=\left|\begin{matrix}9&1\\6&1\end{matrix}\right|=(9\cdot1)-(1\cdot6)=9-6=3$$
Which makes,
$$\vec{v_1}=\begin{bmatrix}{-2}\\{3}\end{bmatrix}$$
Now, let's move on to \(G_2\),
$$\left\|{G_{2,1}}\right\|=\left|\begin{matrix}1&6\\1&-9\end{matrix}\right|=(1\cdot{-9})-(6\cdot1)=-9-6=-15$$
$$\left\|{G_{2,2}}\right\|=\left|\begin{matrix}{-4}&1\\6&1\end{matrix}\right|=(-4\cdot1)-(1\cdot6)=-4-6=-10$$
Which makes,
$$\vec{v_2}=\begin{bmatrix}{-15}\\{-10}\end{bmatrix}$$
Or, factoring out a -5 and throwing it away (we divide it out from both sides of the eigenvector equation we started with at the top of the document),
$$\vec{v_2}=\begin{bmatrix}{3}\\{2}\end{bmatrix}$$
Now we have \(\vec{v_1}\) and \(\vec{v_2}\), the eigenvectors of our equation, without having to solve any sets of linear equations. Of course, you might prefer to use linear equations, but as determinants are pretty much a cut-and-dried algorithm, there's a chance that you, like me, prefer to do these via determinants instead.
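Here is a small Python check of the worked example (it only confirms this one 2×2 case, not the general claim):

```python
# Check of the worked 2x2 example: build G_i = A - lambda_i*I, replace each
# column with ones, take determinants, and verify A v = lambda v.
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def eigvec(A, lam):
    """Candidate eigenvector of a 2x2 matrix via the column-of-ones trick."""
    G = [[A[0][0] - lam, A[0][1]], [A[1][0], A[1][1] - lam]]
    # Component j comes from G with column j replaced by ones.
    v0 = det2([[1, G[0][1]], [1, G[1][1]]])
    v1 = det2([[G[0][0], 1], [G[1][0], 1]])
    return (v0, v1)

A = [[4, 6], [6, -1]]
for lam in (-5, 8):
    v = eigvec(A, lam)
    Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
    assert Av == (lam * v[0], lam * v[1])   # A v = lambda v
```

Running this reproduces \((-2,3)\) and \((-15,-10)\) exactly as in the hand calculation.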
Also, be warned that I've never actually "proved this", and it's been years since I've played with this stuff in my diff-eq course. So make sure to try it out and "test it" to see if I was completely off my rocker or not. If you're up for trying to prove it yourself, my hunch was that this is basically an extension of Cramer's rule. Hope this helps :).
Deformation: Continuously modify the structure constants! Contraction: Generators are multiplied with contraction parameters that are then sent to zero or infinity.
The two concepts are mutually opposite. However, while one can always deform back to the group one contracted from, the opposite procedure is not always possible.
“There exists a plethora of definitions for both contractions and deformations. […] [W]e discuss and compare the mutually opposite procedures of deformations and contractions of Lie algebras.”
On Deformations and Contractions of Lie Algebras by A. Fialowski and M. de Montigny
To deform a Lie algebra, we redefine the Lie brackets as a power series in some parameter $t$ $$ f_t(a,b)=[a,b]+tF_1(a,b)+t^2 F_2(a,b)+\ldots,\quad a,b\in\frak{g}\,, $$ and demand that the series converges in some neighbourhood of the origin.
“Lie-type deformations provide a systematic way of generalising the symmetries of modern physics.”
“Contractions are important in physics because they explain in terms of Lie algebras why some theories arise as a limit regime of more ‘exact’ theories.”
On Deformations and Contractions of Lie Algebras by A. Fialowski and M. de Montigny
“From a physical point of view, ‘contractions’ can be thought of as ‘limits’ of Lie groups as some parameter approaches a specified value. The easiest example is what might be called the ‘Columbus contraction’, in which the parameter of interest is the radius of a spherical Earth. For any value of the radius, the group of symmetries is the rotation group SO(3), but if radius becomes infinite, the group suddenly becomes the Euclidean group of the plane, ISO(2).”
“deformations play a role whenever one tries to find generalisations, extensions, or “perturbations” of a given physical theory or setup. […] the passage from Newtonian mechanics to special relativity or from classical to quantum mechanics can be understood as a deformation of the underlying algebraic structures.”
“The mechanism which is at work, according to well established results of QFT, goes under the general name of spontaneous breakdown of symmetry and involves the physical phenomena of the Bose condensation and the mathematical structure of the (İnönü–Wigner) group contraction.” From Group Contraction in Quantum Field Theory by Giuseppe Vitiello
In Deformations, stable theories and fundamental constants by R. Vilela Mendes, the author discusses how the algebra of quantum mechanics can be obtained from the algebra of classical mechanics by deforming it.
To achieve this, a different kind of deformation than the usual one is needed, because one must consider non-linear transformations of the generators. This is a generalization of the classical theory of deformations, which is only concerned with the deformation of the structure constants of finite-dimensional Lie algebras.
There are two possibilities.
1.) We deform the Poisson algebra of functions in phase-space\begin{equation}\{f,g\} ~:=~ \sum_{i=1}^{N} \left[\frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} -\frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial q_{i}}\right].\end{equation}
In the deformed algebra, the Poisson bracket gets replaced by a so-called Moyal bracket, which reads\begin{equation}\{f,g\}_M=\{f,g\}-\frac{\hbar^2}{4\cdot 3!}\sum_{{{i_1,i_2,i_3}\atop{j_1,j_2,j_3}}}\omega^{i_1 j_1}\omega^{i_2 j_2}\omega^{i_3 j_3}\partial_{i_1 i_2 i_3}(f)\partial_{j_1 j_2 j_3}(g)+\ldots\,.\end{equation}The Poisson algebra is infinite-dimensional (because the space of functions is infinite-dimensional).

2.) Alternatively, we can consider the phase space coordinates as elements of an Abelian Lie algebra and deform this algebra. This yields the Heisenberg algebra:
\begin{align*}\left[ \hat{x}_i, \hat{x}_j \right] &= \left[ \hat{p}_i , \hat{p}_j \right] = 0\\ \left[ \hat{x}_i, \hat{p}_j \right] &= i\hbar \, \delta_{ij}\end{align*}
To achieve this, a deformation alone is not enough: we must additionally perform a central extension together with the deformation.
Posted: January 15, 2013
In a previous post, I discussed how an equation-oriented language like Madonna/STELLA could be translated into Modelica. The point of that post was to demonstrate how constructs in those languages could be mapped to Modelica. But an important caveat in that discussion was that a straight translation would not leverage many of the potential benefits of Modelica.
To take advantage of these benefits, we must transform the flat set of equations we currently have into a more component-oriented representation. This brings several benefits. First, it enables a graphical representation for the various effects, components, sub-systems, etc. Another benefit is in formulating connectors for these components. Modelica supports both causal and acausal semantics in connectors and we will see this automates the process of writing conservation equations. Finally, it allows us to organize the components in ways that represent the domain that we are modeling.
In this post, we'll explore these topics and a few others.
Those models define the interactions between the body's reserves of fat, glycogen and protein. In his paper outlining these models [1], an overview of the processes involved is presented. The system model I've created for this post greatly resembles that diagram:
He also introduces these equations as the mathematical representation of the system dynamics:
\[ \rho_C { dG \over dt } = CI+ GNG_P + GNG_F - DNL - G3P - CarbOx \]
\[ \rho_F { dF \over dt } = 3M_{FFA} {FI \over M_{TG}} + DNL - FatOx \]
\[ \rho_P { dP \over dt } = PI - GNG_P - ProtOx \]
An important distinction is that the diagram I have presented is a model. Although I have only implemented the top-level interactions for the purposes of this post, it provides a framework for a complete implementation.
When building a domain specific library in Modelica, the first step is to design the connectors. Connectors in Modelica are used to describe the interactions between components. These connectors generally take two forms.
Connectors that send and receive information are called causal connectors, and the information they carry is qualified in the connector definition by input and output qualifiers to indicate clearly which components are consumers of the information and which are the producers. Such connectors are frequently used in the design of control systems, for example.
However, for the purposes of this post, we will not discuss these connectors in depth. Instead, we will focus on the second type of connector.
Acausal connectors are used when we wish to model the flow of anything that can accumulate. In physical modeling, these quantities are almost always quantities that have a conservation law associated with them (e.g., mass, charge, momentum). However, they can be used for things like raw materials in a process (whether it be manufacturing or metabolism). The big advantage in using acausal formulations for connectors in such systems is the ease with which complex networks of interacting components can be created and the assurance that as material moves from one component to another, none of it is unintentionally "spilled" through sloppy bookkeeping.
For modeling of human metabolism, I have defined three connectors:
connector FatPort "Processes involving fat"
  Modelica.SIunits.Mass F_mass;
  flow Modelica.SIunits.EnergyFlowRate F_rate;
end FatPort;

connector GlycogenPort "Processes involving glycogen"
  Modelica.SIunits.Mass G_mass;
  flow Modelica.SIunits.EnergyFlowRate G_rate;
end GlycogenPort;

connector ProteinPort "Processes involving protein"
  Modelica.SIunits.Mass P_mass;
  flow Modelica.SIunits.EnergyFlowRate P_rate;
end ProteinPort;
These are all quite similar, but the variable names on these connectors distinguish each substance type because, while they all have units of mass, it is important to distinguish them to avoid any accidental cross-connection between incompatible connectors.
The
X_mass quantity on these connectors represents the accumulated amount of the substance. The
X_rate variable represents the flow of
energy due to the formation or oxidation of the given substance. With these definitions, we are now in a position to build models for the various effects that create the dynamics of such a system.
Now that we have connectors, we can begin building models for the various effects involved in human metabolism. We can group these effects into two types, those that model the storage of a given substance and those that represent the "flow" of that substance (or, more specifically, the energy associated with the transformation of fat, glycogen or protein into each other or into energy).
As far as storage is concerned, the previous equations represent the accumulation of substance via an energy balance equation. In general, each of these has the form:
\[ \rho_X { dX \over dt } = ... \]
where \(\rho_X\) represents the "specific energy" of the substance in question (
i.e., how much energy is contained in each unit of mass of \(X\)). The right hand side of the equations represents the various effects that either generate or consume energy by transforming fat, glycogen or protein.
To model the storage side of these equations, I created three different models (using the three previously defined connectors). I will only discuss one of them here, since the equations themselves are nearly identical between them. The model of fat storage is as follows:
model FatStorage
  parameter Modelica.SIunits.SpecificEnergy rho_F;
  Interfaces.FatPort port_a, port_b, port_c;
equation
  port_b.F_mass = port_a.F_mass;
  port_c.F_mass = port_b.F_mass;
  rho_F*der(port_a.F_mass) = port_a.F_rate + port_b.F_rate + port_c.F_rate;
  annotation (...);
end FatStorage;
I've shown the variable
rho_F as a parameter here. This represents the energy density of fat. As such, one might choose to model this (and any of the other \(\rho\) parameters in the equations above) as a physiological constant rather than as an adjustable parameter
2. In either case, the equations would be the same; only the ability of a user to adjust the parameter would change.
In addition to the
rho_F parameter, there are three different ports. A single port would be sufficient, but by incorporating three ports in different locations around the component model, it is possible to create cleaner network diagrams. In other words, the choice of 3 ports is purely cosmetic.
The meat of this model lies in the equation:
rho_F*der(port_a.F_mass) = port_a.F_rate + port_b.F_rate + port_c.F_rate;
This is one of the three main "conservation" equations for the human metabolism system. In effect, the terms on the right hand side represent all the outside influences on this "control volume". Note that it may appear at first glance that this component is somehow limited to three interactions. This is not the case at all. Even with one port, it would be possible to have an infinite number of interactions. The quantity
port_a.F_rate represents the
net flow of energy into this storage element through that particular port. External to the storage element, there could be any number of connections to that port, and the Modelica tool would automatically compute the net contribution.
One last comment about the storage model. If you look carefully, you will note the presence of the text
annotation(...). Although I've removed the contents of the annotation, this is an annotation indicating how to render this component model graphically. I left it in simply to point out that this information is present in models. This has the benefit that the appearance is standardized and moves with the model from tool to tool. Most tools hide these annotations when viewing Modelica text, simply to keep the source code clean.
Now that we've shown an example of a storage element model, let's look at a model that transforms one quantity into another. To demonstrate this, we will look at the gluconeogenesis from amino acids (\(GNG_P\)) transformation. In the paper, the equation for this process is:
\[ GNG_P = \hat{GNG_P} \left[ \left({ D_P \over \hat{D}_P } \right) - \Gamma_C \left({\Delta CI \over {CI}_b}\right) + \Gamma_P \left({\Delta PI \over {PI}_b} \right) \right] \]
If we translate this model into Modelica, we get something like this:
model GNGpModel "Net rate of gluconeogenesis from amino acid-derived carbon"
  Interfaces.GlycogenPort glycogenPort;
  Interfaces.ProteinPort proteinPort;
protected
  Real GNG_p;
  Real GNG_p_hat = ...;
  Real D_p = ...;
  Real D_p_hat = ...;
  Real gamma_C = ...;
  Real delta_CI = ...;
  Real CI_b = ...;
  Real gamma_P = ...;
  Real delta_PI = ...;
  Real PI_b = ...;
equation
  GNG_p = GNG_p_hat*((D_p/D_p_hat)-gamma_C*(delta_CI/CI_b)+gamma_P*(delta_PI/PI_b));
  proteinPort.P_rate = GNG_p;
  glycogenPort.G_rate + proteinPort.P_rate = 0;
end GNGpModel;
This model represents the creation of glycogen from protein. As a result, we need one port to allow us to interact with the glycogen side and the other to allow us to interact with the protein side. There are many intermediate quantities referenced in the paper which I don't show the computations for. Instead, let's focus on the three equations in the
equation section:
equation
  GNG_p = GNG_p_hat*((D_p/D_p_hat)-gamma_C*(delta_CI/CI_b)+gamma_P*(delta_PI/PI_b));
  proteinPort.P_rate = GNG_p;
  glycogenPort.G_rate + proteinPort.P_rate = 0;
The first equation computes
GNG_p, the rate at which energy is converted. The next equation computes the rate at which energy from the protein side is consumed. At first glance, you might think that the
GNG_p term on the right should have a minus sign in front of it. After all, this process is reducing the amount of protein by transforming it into glycogen. However, it is important to remember that the convention in Modelica is that a
flow variable on a connector represents the flow of that quantity
into the model it is attached to. In this case, energy flows into this transformation from the protein and flows out into the glycogen storage. So for the protein storage the amount flowing out is negative, but for the transformation itself, it is positive since it is draining the energy from protein storage into itself. Similarly, although the glycogen storage is gaining energy, it is flowing out of the transformation and therefore
glycogenPort.G_rate has the opposite sign as
proteinPort.P_rate. This kind of "conservation" (that something flows in one port and out another) is represented by the equation:
glycogenPort.G_rate + proteinPort.P_rate = 0;
You will note that the left hand side of this equation is not a variable but an expression. This is because Modelica models contain relationships between variables (
i.e., equations) and not assignment statements. The following equations are all equivalent in Modelica:
glycogenPort.G_rate + proteinPort.P_rate = 0;
0 = glycogenPort.G_rate + proteinPort.P_rate;
-glycogenPort.G_rate = proteinPort.P_rate;
glycogenPort.G_rate = -proteinPort.P_rate;
The last of these shows (perhaps more clearly) that these two rates have the opposite sign. However, the form where all flow variables are summed to zero is more common in Modelica as a clear indication that conservation is enforced by the component (
i.e., the zero represents the amount of storage in the component).
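To make the sign convention concrete outside of Modelica: flows are measured into the component they attach to, so a lossless transformation's port flows sum to zero. A tiny Python sketch (function and variable names hypothetical):

```python
def gng_p_port_rates(rate):
    """Energy rates at the two ports of a protein -> glycogen transformation.

    Both rates are measured *into* the transformation component: energy
    drained from protein storage enters (+), energy delivered to glycogen
    storage leaves (-).
    """
    protein_rate = rate
    glycogen_rate = -rate
    return protein_rate, glycogen_rate

p_rate, g_rate = gng_p_port_rates(1.5)
print(p_rate + g_rate)  # 0.0 -- nothing is stored in the transformation itself
```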
If we revisit the original equations, we see something interesting about the structure of those equations and the equations of our components:
\[ \rho_C { dG \over dt } = ... + GNG_P + ... \]
\[ \rho_P { dP \over dt } = ... - GNG_P + ... \]
The terms on the left represent the behavior of the storage model. The terms on the right represent the terms of a flow model. Furthermore, the terms on the right have a different sign in each equation. This indicates a conversion or transformation from one type of substance to another. Other representations are possible as well,
e.g.,
\[ \rho_C { dG \over dt } = CI + ... - CarbOx \]
\[ \rho_P { dP \over dt } = PI + ... - ProtOx \]
In these cases, we have "source" terms (\(CI\) and \(PI\) represent the intake of these substances through diet) and "sink" terms (\(CarbOx\) and \(ProtOx\) represent the oxidation of these substances into energy for physical activity). All of these various effects can be represented using just the simple connector definitions shown previously.
Apart from the nice graphical appearance, you might wonder why this is better than the approach I described previously.
First, we can create arbitrarily complex networks of interactions. By isolating the various effects (storage, oxidation, transformation, intake), we can compose models in a more robust and less fragile way. This has the benefit of keeping each individual model relatively simple (as we have seen so far) while allowing us to combine them in an intuitive way to demonstrate complex interactions. Although we haven't covered it in this post, we can compose complex
hierarchies of components and interactions.
Another big benefit here is of
replaceability. Let us revisit our original system model diagram:
Note the "sunken" appearance around some of the blocks in the system? This is a graphical indication that I've marked those components as
replaceable. The
replaceable keyword is an indication that a model can be extended and modified. It allows you to define an invariant architecture for the model (so that all connections/interactions are fixed) but with the freedom to "fill in the blanks" regarding the behavior of individual components. Typically, such flexibility is used to substitute different levels of detail for a given "blank". But in the context of metabolism modeling, the same mechanism could be used to, for example, substitute different diets (the component at the top of the diagram) or even to capture the metabolic differences between genders or populations.
These types of features bring to bear much of the thinking in recent decades about how to properly build and manage
software. It may not be apparent at first why such "complexity" is necessary, but the reality is that models are software. As such, there is no reason models and their developers shouldn't benefit from proven techniques and ideas when building and managing models. Furthermore, it is important for the modeling community to recognize that viewing models as software is necessary in order to scale the process of building and managing models to the level that software has achieved.
This is a rich topic, and while I've only scratched the surface, I've probably also, paradoxically, presented an overwhelming amount of information.
My goal was to show how a given domain (even one outside engineering) that contains principles related to the movement, transformation and storage of "stuff"
3 can be mapped into a domain specific library of Modelica components. Modelica not only provides language features for representing the graphics and behaviors in such systems, but it also provides language features designed to bring the scalability of software engineering to the process of building and managing mathematical models.
"Computational model of in vivo human energy metabolism during semistarvation and refeeding" (
Am J Physiol Endocrinol Metab 291:E23-E37, 2006) ↩
Many of the design decisions I've made require more domain expertise than I have in this specific area. ↩ |
I just started to learn how to quantise the Dirac field. Meanwhile, we can write the Dirac equation in terms of gamma matrices:
$$ (i\hbar\gamma^\mu\partial_\mu - m)\psi = 0 $$ where the $\gamma^\mu$ matrices obey the Clifford algebra $$ \{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu} $$
Now I just came across that the adjoint of the gamma matrices can be written as :
$$ \gamma^{\mu\dagger} = \gamma^0\gamma^\mu\gamma^0 $$ I already checked this question, but it doesn't suggest a way to prove it in a representation-independent manner. Also, the Wikipedia page suggests that the gamma matrices are chosen such that they satisfy the above relation, since they are arbitrary up to a similarity transformation.
So is that the way this identity comes about, or is there something else?
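For context, a quick numerical check in the Dirac representation (an explicit representation with signature (+,−,−,−), so this does not settle the representation-independence question) confirms the identity there:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(1, 1, -1, -1); gamma^i off-diagonal
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

for mu, g in enumerate(gammas):
    assert np.allclose(g.conj().T, g0 @ g @ g0), mu
print("gamma^mu-dagger = gamma^0 gamma^mu gamma^0 holds for mu = 0..3")
```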
Determine whether the following series : $$\sum_{n=2}^\infty \frac {1+xn}{\sqrt{n^2+n^6x}}, x \in \mathbb R^+_0$$ converges absolutely, conditionally or diverges.
I tried to estimate the terms for $x > 0$ and $n \ge 1/x$ using the following: $$\sum_{n=2}^\infty \frac {1+xn}{\sqrt{n^2+n^6x}} \leq \sum_{n=2}^\infty \frac {2xn}{\sqrt{n^6x}} = 2\sqrt x\sum_{n=2}^\infty n^{-2}$$
Which would mean that the series is absolutely convergent for every $x \in \mathbb R^+_0$?
Is this the correct way to go about this or am I overlooking something? |
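For what it's worth, a quick numerical look at the partial sums (a Python sketch, not a proof) is a useful sanity check; it also probes the boundary case $x = 0$, where each term reduces to exactly $1/n$:

```python
import math

def term(n, x):
    return (1 + x * n) / math.sqrt(n**2 + n**6 * x)

def partial_sum(x, N):
    return sum(term(n, x) for n in range(2, N + 1))

# For x > 0 the tail behaves like sqrt(x)/n^2, so the partial sums settle:
print([round(partial_sum(1.0, N), 6) for N in (10**2, 10**3, 10**4)])

# At x = 0 each term is exactly 1/n, so the partial sums grow like log(N):
print([round(partial_sum(0.0, N), 3) for N in (10**2, 10**3, 10**4)])
```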
Hello, I have a question which is related to a partial order in a set of self-adjoint operators.
Let $\mathcal{M}$ be a semifinite von Neumann algebra with a faithful semi-finite normal trace $\tau$. Let $T$ and $S$ be two self-adjoint operators (possibly unbounded) $\tau$-measurable (here probably the assumption that they are affiliated with $\mathcal{M}$ is enough) such that $0 \leq T \leq S$ i.e. $S-T$ is positive. How to get that $$E_{(s, \infty)}(|T|) \preceq E_{(s, \infty)}(|S|), \ \ s \geq 0,$$ where $E_I(|T|)$ (resp. $E_I(|S|)$) stands for a spectral projection of $T$ (resp. $S$) corresponding to the interval $I$ and $\preceq$ means sub-equivalence relation in Murray-von Neumann sense.
I am looking also for some good references which describe the relation between $U$ and $|T|$, the elements of the polar decomposition $T = U|T|$ of a closed densely defined (possibly unbounded) operator $T$ affiliated with some von Neumann algebra $\mathcal{M}$. I mean that $U$ and each spectral projection of $|T|$ are in this von Neumann algebra. Probably I can find this in Takesaki vol. 2 or vol. 3.
I will be really grateful for any help.
Thank you, VdM |
An atom with two energy levels has 2 states (excited and ground), represented by kets $|e\rangle$ and $|g\rangle$ respectively. The atom has energy $\frac{1}{2}E_\theta$ when excited and $-\frac{1}{2}E_\theta$ when in ground state.
Suppose it is prepared in the state $$|\Psi\rangle = \frac{1}{5}((2-i)|e\rangle + (4+2i)|g\rangle) $$
How do I show that the physical state of the system can be equally well described by the state vector written in the form
$$|\Psi\rangle = \cos(\frac{1}{2}\theta)|e\rangle + \sin(\frac{1}{2}\theta)e^{i\phi}|g\rangle $$
i.e determine the values of $\theta$ and $\phi$
Please help me if you can, as this was in my textbook, which has no solutions.
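Here is a numerical sketch of what I believe the intended extraction is (assuming the overall phase of the $|e\rangle$ amplitude is an unobservable global phase that can be dropped):

```python
import cmath
import math

c_e = (2 - 1j) / 5  # amplitude of |e>
c_g = (4 + 2j) / 5  # amplitude of |g>

# Factor out the global phase of the |e> amplitude; what remains matches
# cos(theta/2)|e> + sin(theta/2) e^{i phi} |g>.
theta = 2 * math.acos(abs(c_e))
phi = cmath.phase(c_g) - cmath.phase(c_e)

print(round(math.degrees(theta), 2), round(math.degrees(phi), 2))  # 126.87 53.13

# Consistency check: restoring the discarded global phase reproduces the
# original amplitudes.
g_phase = cmath.exp(1j * cmath.phase(c_e))
assert cmath.isclose(c_e, g_phase * math.cos(theta / 2))
assert cmath.isclose(c_g, g_phase * math.sin(theta / 2) * cmath.exp(1j * phi))
```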
This may not be a consequence of biased estimators or sampling error. I don't think it is a coincidence that $$\frac{6}{\pi} \arcsin\left(\frac{0.9}{2} \right) = 0.891457\ldots \approx 0.891$$ Copula construction involves applying nonlinear transformations to random variables which need not preserve correlation. If random variables $X$ and $Y$ are ...
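The quoted number is the classical Spearman rank correlation implied by a Gaussian copula, and is easy to reproduce:

```python
import math

# Spearman's rho implied by a Gaussian copula with Pearson correlation 0.9
rho = 0.9
spearman = (6 / math.pi) * math.asin(rho / 2)
print(spearman)  # 0.891457...
```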
No offense, but it will be much more complicated than what you think... I'm not even sure that you are familiar with risk-neutral pricing in the first place? I'll try to give you some clues. This security is called a basket option. On top of the multi-asset feature, there are non-trivial mechanisms embedded in the contract you mention: an auto-callable ...
The best introduction to copulas I know, i.e. with rigour and intuition, is the following: THE QUANT CLASSROOM BY ATTILIO MEUCCI, "A Short, Comprehensive, Practical Guide to Copulas": visually introducing a powerful risk management tool to generalize and stress-test correlations.
I found Coping With Copulas by Thorsten Schmidt really helped me to get a more basic understanding of copulas, as well as looking at some simple examples in R and thinking about the different directions the transformations can happen. To answer your actual question, I'll attempt to describe the steps involved as simply as I can. Let's say you use the copula ...
In the theory of copulas you want to model a multivariate (often bivariate) distribution and keep the marginals fixed. Thus you have random variables $X$ and $Y$ with cdfs $F_X(x) = P[X \le x]$ and $F_Y(y) = P[Y\le y]$, and you want to find some $F_{X,Y}(x,y) = P[X \le x, Y\le y]$ such that when you look at marginals you get $F_{X,Y}(x,\infty) = F_X(x)$ and ...
In general you don't need copulas to calculate VaR on a portfolio. You can use the historical method if you have time series of returns for the assets in your portfolio. If you have sufficient data this will allow you to take into account correlation risk and non-normality of returns. Example of code in R for an equally weighted portfolio without assuming any ...
The algorithm is certainly useful in that it is non-parametric, fast, and versatile. Meucci summarizes the advantages nicely: Unlike traditional copula techniques, CMA a) is not restricted to few parametric copulas such as elliptical or Archimedean; b) never requires the explicit computation of marginal cdf's or quantile functions; c) does not ...
In a general setting this is quite a tough problem, and it looks like just switching from regular multivariate probability to copulas doesn't make it easier. In the general case you need to rely on numerical methods for integration. There is a nice overview of the problem in Copula Theory and Its Applications: Proceedings of the Workshop Held in Warsaw, 25-26 ...
If the density of $(X,Y)$ is known, then you may obtain the density of the sum $X+Y$ simply by applying Jacobi's transformation formula, which describes the density of the transformed random variable $g(X,Y)$ for $g(x,y) = (x+y, x)$. Integrating out the $x$-component yields the density of $X+Y$. See Jacod/Protter, Probability Essentials, ch. 12 for details....
Isn't it true that your first line can be written as $$F_{X,Y}(x,y_2) - F_{X,Y}(x,y_1),$$ where $F_{X,Y}$ is the joint cdf of $(X,Y)$? If we assume that the distributions of $X$ and $Y$ are continuous without atoms (I have to check the exact formulation), then it is clear from Sklar's theorem that there is exactly one copula $C$ such that $$F_{X,Y}(x,y) = ...
You don't really have a multivariate case: we can only define VaR (in its usual sense) for a one-dimensional output. Recall that $$\operatorname{VaR}_\alpha(X) = \inf\{v:F_X(v)\geq \alpha\}$$ and since in your case $X = X_1+X_2$ you just need to compute $F_X$ in terms of $X_1$ and $X_2$. For the notation of partial derivatives, I denote the generic ...
Look here for multivariate distributions on the positive quadrant ... quite difficult. http://xianblog.wordpress.com/tag/multivariate-analysis/ I have been thinking about this for weeks and months in the context of credit risk (modelling default intensities jointly) and I think this does not work.
I would guess you are calculating the maximum likelihood estimator: $ \hat{\theta} = \frac{1}{N} \sum (x_i - \bar{x}) (y_i - \bar{y}) $ instead of the unbiased estimator: $ \hat{\theta} = \frac{1}{N-1} \sum (x_i - \bar{x}) (y_i - \bar{y}) $ The unbiased estimator has a bias of zero, i.e.: $ E_{x|\theta}[\hat{\theta}] - \theta = 0 $ The unbiased ...
It depends on the assets which copula is best, and other methods may still be better and comparable in complexity. If you want to use copulas for equities you can take a look at the Clayton copula. While the Gaussian copula is symmetric, the Clayton copula has asymmetric tail dependency. This makes modeling the increase in correlation during a crisis possible.
Do you refer with 'negative tail dependence' to the case that one variable has an extremely low value and the other random variable has an extremely large value, i.e., $$\tau=\lim_{p \rightarrow 0} \frac{Pr[x>Q_x(1-p),y<Q_y(p)]}{p},$$ where $Q_x(1-p)$ and $Q_y(p)$ refer to the $(1-p)$-th quantile of the random variable $x$ and the $p$-th quantile of ...
As you know, simulating AR(1) amounts to simulating the distributed error path. Assume the bivariate errors have marginal distributions $F(x)$ and $F(y)$, with copula $C(u,v)$ to model their dependence. Then the bivariate joint error distribution is given by Sklar's theorem: $$F(x,y)=C(F(x),F(y))$$ You can simulate from this distribution using conditional sampling: To ...
Suppose you have the copula $C(u_1,u_2)$; then you could compute the conditional copula $$c_{u_1}(u_2)=\frac{\partial C(u_1,u_2)}{\partial u_1} \; .$$ Now, you can generate a pair of independent uniformly distributed random values $(U,V)$. Let's say a particular realisation is $(u,v)$. Then the pair $$(u,c_u^{-1}(v))$$ will be distributed according to ...
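As a concrete instance, here is a minimal Python sketch of conditional sampling for the bivariate Clayton copula ($\theta = 2$ chosen arbitrarily), using the standard closed-form inverse of its conditional copula:

```python
import random

def clayton_pair(theta, rng):
    """Draw (u1, u2) from a bivariate Clayton copula via conditional sampling."""
    u1 = rng.random()
    v = rng.random()
    # closed-form inverse of the conditional copula c_{u1}(u2) = dC/du1
    u2 = (u1 ** (-theta) * (v ** (-theta / (theta + 1)) - 1) + 1) ** (-1 / theta)
    return u1, u2

rng = random.Random(0)
draws = [clayton_pair(2.0, rng) for _ in range(20000)]

# sanity check: both marginals should be roughly uniform on (0, 1)
mean_u1 = sum(u for u, _ in draws) / len(draws)
mean_u2 = sum(v for _, v in draws) / len(draws)
print(round(mean_u1, 2), round(mean_u2, 2))
```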
Note that you only need to show that \begin{align*}A\left(\frac{\log(u_2)}{\log(u_1u_2)}\right)-\frac{\log(u_2)}{\log(u_1u_2)}A'\left(\frac{\log(u_2)}{\log(u_1u_2)}\right) \ge 0,\end{align*} or, for any $t \in (0, 1)$, \begin{align*}A(t) - t A'(t) \ge 0.\end{align*} Recall that $A$ is a convex function from $[0,\, 1]$ to $[1/2,\, 1]$, $A(0)=A(1)=1$, and ...
This is an interesting observation that you have. The interesting part is "consistently smaller". The normal copula is based on a multivariate normal distribution. The correlation you get out is the correlation parameter you put in. Everything else is most probably due to an issue in your approach. If you did not say "consistently smaller", I would say it ...
Since I think this is of interest for other people, I will post the approach I found. First, let $C_n(u_1,\ldots,u_n)$ be an $n$-dimensional Clayton copula with generator function $F$ and inverse $F^{-1}$. Then: Generate $n$ independent r.v. from $U(0,1)$. Calculate $n-1$ derivatives of $F$, where $F_{n-1}$ denotes the $(n-1)$-th order derivative of $F$...
If you agree that the marginal probability $P(u\le Y\le v)=F_Y(v)-F_Y(u)$, then your formula follows immediately, because next you simply plug the marginals into the copula. Your 3rd equation for the joint probabilities is incorrect for $P(Z\le z,u\le Y\le v)$; I'm not sure where you got it from.
Implementations of the BBx families are available from the VineCopula R-package from CRAN. Spatially and spatio-temporally varying bivariate copulas are provided through the R-package spcopula from r-forge. Temporal support will need some additional work as it was not part of the initial design. The tuning of the copulas' parameter can be done via a ...
You can express the Normal distribution by Sklar's Theorem in terms of Gaussian marginals and a Gaussian copula as follows: $$F(x_1,...,x_n)=C(F(x_1),...,F(x_n))=C^{Gau}(N(x_1),...,N(x_n))$$ So the distribution equals the copula function with the respective inverse marginals as arguments. You can as well combine any types of copula and (continuous) different ...
0/ Let me use more common notations to avoid misunderstanding. We will consider $B_t^x$ and $B_t^y$, two correlated Brownian motions, e.g. $\langle dB_t^x,dB_t^y\rangle=\rho \, dt$. Just to recall, Ito's process: $$X_t = X_0 + \int_0^t \mu(s,\omega) ds + \int_0^t \sigma(s,\omega) dB_s^x\\dX_t=\mu(t,\omega) dt + \sigma(t,\omega) dB_t^x$$ 1/ Single BMs: $$\mathbb{...
There is a brief and not overly technical introduction here: http://prescientmuse.blogspot.co.uk/2015/01/a-brief-introduction-to-copula.html And an application of use in a trading system with full R code here: http://prescientmuse.blogspot.co.uk/2015/02/vanilla-trading-algorithm.html Hope that helps!
For non-normal asset price models you could look at the theory of Lévy processes. If we assume that you work in the physical probability measure $P$ and that the random numbers that you have generated are daily log-returns, then you can do the following: Asset $i$ has starting price $S_0^i$, and for the future prices you can put $$S_t^i = S_0^i \exp(\sum_{k=...
"Convoluted expression" in American usage just means a complicated, big mathematical expression, sometimes also called "hairy" or "messy". It is ugly to work with and to look at, so you prefer not to deal with it if possible. Nothing more than that. There is also a mathematical operation called "convolution" ("Faltung" in German), but it has nothing to do ...
Suppose I have an operation on a set X which I want to denote with something like
$X^\prime$,
$X^\bullet$, or
$X^\circ$, except with a different symbol. I found out that some of the diacritics available in math mode (see here) make a decent candidate when used “standalone”, for instance as
$X \: \tilde{}$ or
$X \: \hat{}$. This might not be the intended use for these characters, but the result looks very nice IMHO (certainly much nicer than, say,
$X^\sim$ or
$X^\wedge$).
However, in doing so, I will inevitably run into problems. One of these is easy to solve: a bit of extra space should be inserted before the standalone diacritic to make it look nice (a small space like
\: seems to do the trick). However, if the diacritic is preceded by a subscript, I would like it to behave like a superscript, so it starts
above the subscript, not after it. This is illustrated by the following MWE:
\documentclass[margin=2mm]{standalone}
\usepackage{amsmath}
\newcommand{\myhat}{\ensuremath{\:\hat{}}}
\newcommand{\mytilde}{\ensuremath{\:\tilde{}}}
\newcommand{\mybreve}{\ensuremath{\:\breve{}}}
\newcommand{\mycheck}{\ensuremath{\:\check{}}}
\begin{document}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c|c|c|}
  \hline
  command & no subscript & short subscript & long subscript \\ \hline
  hat & $X\myhat$ & $X_1\myhat$ & $X_{\omega(k)}\myhat$ \\
  tilde & $X\mytilde$ & $X_1\mytilde$ & $X_{\omega(k)}\mytilde$ \\
  breve & $X\mybreve$ & $X_1\mybreve$ & $X_{\omega(k)}\mybreve$ \\
  check & $X\mycheck$ & $X_1\mycheck$ & $X_{\omega(k)}\mycheck$ \\ \hline
\end{tabular}
\end{document}
My question is this: how can I make these “standalone” diacritics behave more like superscripts, without having to resort to adding negative horizontal space manually for each occurrence? Alternatively, can identical symbols also be generated in a different way, so that they may be used simply as
$X^\standalonetilde$,
$X^\standalonehat$,
$X^\standalonebreve$, and
$X^\standalonecheck$? |
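One direction worth experimenting with (a sketch, not a verified solution; the `\standalonehat`/`\standalonetilde` names are hypothetical) is to wrap the empty accent in a group and use it as an ordinary superscript, so it automatically clears a subscript:

```latex
\documentclass[margin=2mm]{standalone}
\begin{document}
% Empty accents wrapped in a group so they can sit in a superscript.
\newcommand{\standalonehat}{{\hat{}}}
\newcommand{\standalonetilde}{{\tilde{}}}
$X^\standalonehat$ \quad $X_1^\standalonehat$ \quad $X_{\omega(k)}^\standalonetilde$
\end{document}
```

Whether the superscripted accents match the size and spacing of the `\:`-spaced versions would need visual checking.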
Documenting MagAO-X

Where the documentation lives
The source code for this documentation lives in the magao-x/handbook repository. Parts of the documentation pertaining to the core C++ code are generated from the code in magao-x/MagAOX, which is documented in Doxygen. (TODO: explain how to make cross-references.)
The built copy of this documentation is hosted at https://magao-x.github.io/handbook/ via GitHub Pages free hosting. When changes are pushed to the
master branch of magao-x/handbook, a CircleCI job builds and updates the
gh-pages branch of the repository, with changes reflected on the GitHub Pages site 1-15 minutes later.
How to make changes

Installing the required software
You will need Python 3.5 or newer (with
pip) and a recent version of
git.
Clone https://github.com/magao-x/handbook to your own computer
# full clone $ git clone https://github.com/magao-x/handbook.git # shallow clone (faster) $ git clone --depth=1 https://github.com/magao-x/handbook.git
Change into the directory where you just cloned the documentation and install software so you can preview your changes.
# if 'pip' and 'python' are provided by Python 3.x: $ pip install --user -r requirements.txt # if your OS calls pip for Python 3.x 'pip3': $ pip3 install --user -r requirements.txt
(Installing Python 3 is outside the scope of this document, but Anaconda is a popular installer.)
Ensure
sphinx-build is on the path
$ which sphinx-build /Users/jlong/miniconda3/bin/sphinx-build
(If you’re using your OS-provided Python, and don’t see output for
which sphinx-build, you should make sure
$HOME/.local/bin is on
$PATH.)
You now have a copy of the sources for the handbook. If you’re just editing an existing document, skip ahead.
Creating a brand new document
Create a
.md (or
.rst, see below) file with the name you want
Find the file with the appropriate
.. toctree:: directive (probably
index.rst) and edit it to add the base name of your file (e.g.
funny-business.md would be
funny-business in the toctree)
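For example (entries illustrative), the resulting directive might look like this:

```
.. toctree::
   :maxdepth: 2

   install
   funny-business
```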
Edit and publish
Finally, to preview and publish your edits:
Edit the document you want to change
Run
make html (in the directory you cloned into)
Open
_build/html/index.html to see the updated site, and verify your changes look good
git add ./path/to/file/you/changed.md and
git commit -m "Description of your changes"
git push origin master
If everything looks good, the public copy of the docs will update automatically!
Markup
Documents can be written in Markdown (CommonMark variant), in which case the filename must end with
.md, or reStructuredText, in which case the filename must end with
.rst. If you want to see how a particular bit of formatting was achieved, you can click the “Page source” link at the bottom of the page.
The following examples are written in Markdown, since that is the language most new contributors are likely to be familiar with. Accomplishing trickier formatting may require dipping one’s toes into reStructuredText or mixing both languages.
Code samples

Inline code
To include some code inline, enclose it in backticks (left of the
1 key on most US keyboards).
Example markup:
Before starting, execute `sudo do-things` in your terminal
Output:
Before starting, execute
sudo do-things in your terminal
Blocks of code
Blocks of code are “fenced” by three backticks on their own line before and after the code block. (If you need to include such a sequence in your block of code, you can use three tildes instead.)
Example markup:
```
for i in range(8):
    print("Step", i)
```
Output:
for i in range(8):
    print("Step", i)
Syntax highlighting
By default, the documentation system applies syntax highlighting for Python code to code blocks. If your code is in another language, you include its name immediately after the opening set of backticks.
Example markup:
```bash
#!/bin/bash
set -exuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$DIR"
```
Output:
#!/bin/bash
set -exuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$DIR"
Math
Equations can be inserted as a special variety of code block.
Example markup:
```math
\mu = m - M = 5 \log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right)
```
Output:

External files
If you want to include a downloadable file (e.g. filter transmission curve table, PDF document, etc.) you will have to dip your toes into the “embedded reST” feature of reCommonMark.
Example markup:
```eval_rst
:download:`Click here to download the star logo <mini-star.png>`
```
Output:
Alternatively, you can simply write the whole document in reStructuredText as in the Preliminary design review page.
Images
By default, images are included inline and left aligned. Image inclusion takes the form
 where
path is the path relative to the current file you’re editing.
Example markup:

Output:
Note: reCommonMark does not correctly handle “alternative text”, normally placed between the square brackets, meaning the alt= attribute is not populated and the handbook is not as accessible to the visually impaired as it could be. |
How to find the dimension of the group $O_{p,q}(\mathbb R)= \{g \in GL_n(\mathbb R): g^TI_{p,q} \ g = I_{p,q}\}$, where $I_{p,q}= diag(1,..., 1,-1,...,-1)$ and $p+q=n$?
closed as off-topic by Derek Holt, Watson, Bobson Dugnutt, vrugtehagel, colormegone Mar 10 '16 at 23:16
This question appears to be off-topic. The users who voted to close gave this specific reason:
" This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Derek Holt, Watson, Bobson Dugnutt, vrugtehagel, colormegone
If you know about Lie algebras, then you can check that the Lie algebra of this group is $\{ a\in M_n(\mathbb{R}) \mid a^T I_{p,q} + I_{p,q}a = 0\}$, which is just $I_{p,q} \mathfrak{so}_n(\mathbb{R})$, so it has the same dimension as $\mathfrak{so}_n(\mathbb{R})$, i.e. $n(n-1)/2$.
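If you want to sanity-check that dimension count numerically, here is a small sketch of mine (the function name and approach are illustrative, not from the answer): it computes the kernel dimension of the linear map $A \mapsto A^T I_{p,q} + I_{p,q} A$ on $n \times n$ matrices.

```python
import numpy as np

# Numerically check dim O(p,q) = n(n-1)/2 by computing the kernel of the
# linear map A -> A^T I_{p,q} + I_{p,q} A on n x n matrices.
def lie_algebra_dim(p, q):
    n = p + q
    I_pq = np.diag([1.0] * p + [-1.0] * q)
    M = np.zeros((n * n, n * n))
    for k in range(n * n):
        E = np.zeros((n, n))
        E.flat[k] = 1.0                      # k-th standard basis matrix
        M[:, k] = (E.T @ I_pq + I_pq @ E).ravel()
    return n * n - np.linalg.matrix_rank(M)  # dimension of the kernel

print(lie_algebra_dim(1, 3), lie_algebra_dim(2, 3))  # 6 10
```

Both values agree with $n(n-1)/2$ for $n=4$ and $n=5$.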
If not, there is a little trick: if you tensor with $\mathbb{C}$, then $O_{p,q}(\mathbb{R})\otimes \mathbb{C} = O_{p,q}(\mathbb{C})$, but since all non-degenerate quadratic forms over $\mathbb{C}$ are isometric, $O_{p,q}(\mathbb{C}) \simeq O_n(\mathbb{C})$.
So $O_{p,q}(\mathbb{R})\otimes \mathbb{C} = O_n(\mathbb{R})\otimes \mathbb{C}$, and since tensoring with $\mathbb{C}$ doubles the real dimension in both cases, you see that $O_{p,q}(\mathbb{R})$ has the same dimension as $O_n(\mathbb{R})$ (meaning $n(n-1)/2$). |
As others have mentioned, performing a 2D FFT on the kernel will give you the frequency response of the filter. However, it's worth mentioning that 2D filters can be analyzed using the Z-transform, which may or may not provide deeper insight, depending on the filter (and what you want to know).
For example, given the kernel you specified, the corresponding difference equation would be
$$y(n_1,n_2) = x(n_1+1,n_2) + x(n_1,n_2 + 1) + x(n_1-1,n_2) + x(n_1,n_2-1) - 4x(n_1,n_2).$$
Its Z-transform is
$$ Y(z_1,z_2) = z_1X(z_1,z_2) + z_2X(z_1,z_2) + z^{-1}_1X(z_1,z_2) + z_2^{-1}X(z_1,z_2) - 4X(z_1,z_2),$$
which, after rearranging, yields the following transfer function for the filter:
$$ H(z_1,z_2) = \frac{Y(z_1,z_2)}{X(z_1,z_2)} = z_1 + z_2 + z_1^{-1} + z_2^{-1} - 4.$$
To determine the magnitude response, just plug in a pair of complex exponentials and simplify, as follows:
$$ \begin{aligned} H(e^{iw_1},e^{iw_2}) & = e^{iw_1} + e^{iw_2} + e^{-iw_1} + e^{-iw_2} - 4 \\ & = (e^{iw_1} + e^{-iw_1}) + (e^{iw_2} + e^{-iw_2}) - 4 \\ & = 2\cos{w_1} + 2\cos{w_2} - 4 \\|H(e^{iw_1},e^{iw_2})| & = 2\sqrt{ (\cos{w_1} + \cos{w_2} - 2)^2 } .\end{aligned} $$
Evaluating the magnitude response at extreme frequencies will give you a sense of highpass vs. lowpass for the filter. For example,
$$ |H(e^{i0},e^{i0})| = 2\sqrt{(\cos{0} + \cos{0} - 2)^2} = 2\sqrt{(1+1-2)^2} = 0,$$and$$ |H(e^{i\pi},e^{i\pi})| = 2\sqrt{(\cos{\pi} + \cos{\pi} - 2)^2} = 2\sqrt{(-1-1-2)^2} = 8.$$
Of course, the transfer function can be evaluated directly to plot the magnitude response of the filter as well. Here's an example using numpy:
import numpy as np
import pylab as py
from mpl_toolkits.mplot3d import axes3d

def H(z1, z2):
    # Transfer function H(z1, z2) = z1 + z2 + 1/z1 + 1/z2 - 4
    return z1 + z2 + 1./z1 + 1./z2 - 4.0

n = 100
w1 = w2 = np.linspace(-np.pi, np.pi, n)
mag = np.zeros((n, n))
for i1 in range(n):
    for i2 in range(n):
        z1 = np.exp(1j * w1[i1])
        z2 = np.exp(1j * w2[i2])
        mag[i1, i2] = np.abs(H(z1, z2))

fig = py.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y = np.meshgrid(w1, w2)
ax.plot_surface(X, Y, mag, cmap='bone', alpha=.5)
py.show()
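As a cross-check, a sketch of mine showing that the magnitude of a zero-padded 2D FFT of the kernel itself matches the extreme values derived above (here `np.fft.fftshift` is used to move DC to the center of the grid):

```python
import numpy as np

# Cross-check: the magnitude of the zero-padded 2D FFT of the 5-point
# Laplacian kernel equals |H(e^{jw1}, e^{jw2})| sampled on the DFT grid.
kernel = np.array([[0.,  1., 0.],
                   [1., -4., 1.],
                   [0.,  1., 0.]])
n = 64
resp = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(n, n))))

# Minimum (at DC) and maximum (at (pi, pi)) match the derivation above.
print(round(float(resp.min()), 6), round(float(resp.max()), 6))  # 0.0 8.0
```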
Keep in mind that if you use the kernel 2D FFT technique, the resulting magnitudes won't necessarily be centered at zero like the above plot. |
fskilnik wrote:
GMATH practice exercise (Quant Class 20)
A triangle of area 30 is formed by the line x/c + y/(c+7) - 1 = 0 (where c is a positive constant) and the coordinate axes. What is the perimeter of this triangle? (A) 12 (B) 24 (C) 30 (D) 36 (E) 52
Yes... 5-12-13 is an important GMAT Pythagorean Triple...
\(?\,\, = \,\,\Delta \,\,{\rm{perimeter}}\)
\({S_\Delta } = 30\,\,\,\left( * \right)\)
\({x \over c} + {y \over {c + 7}} = 1\,\,\,\, \Rightarrow \,\,\,\,\left\{ \matrix{
\,x{\rm{ - intercept}} = c \hfill \cr
\,y{\rm{ - intercept}} = c + 7 \hfill \cr} \right.\,\,\,\,\,\,\,\mathop \Rightarrow \limits^{\left( * \right)} \,\,\,\,\,\,30 = {{c\left( {c + 7} \right)} \over 2}\,\,\,\)
\(c\left( {c + 7} \right) = 60\,\,\,\left[ { = 5 \cdot 12 = \left( { - 12} \right)\left( { - 5} \right)} \right]\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,c = 5\,\,\,{\rm{or}}\,\,\,c = - 12\,\,\,\,\,\mathop \Rightarrow \limits^{{\rm{stem}}} \,\,\,\,\,c = 5\)
\(\left\{ \matrix{
\,{\rm{right}}\,\,\Delta \hfill \cr
\,{\rm{legs}}\,\,5,12 \hfill \cr} \right.\,\,\,\,\,\, \Rightarrow \,\,\,\,\,{\rm{hyp}}\,\, = 13\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,? = 5 + 12 + 13 = 30\)
The correct answer is (C).
We follow the notations and rationale taught in the GMATH method.
Regards,
Fabio.
Notation: \(V_0\) = initial investment, \(V_f\) = final value of investment, \(r\) = rate of return.
Here we will answer the question: where does the rule of 72 come from? And how can we derive similar rules for tripling or other multiplicative factors?
The rule of 72 is an estimate for the doubling time of an investment at a given rate of return.
As an example, for a 6% return, the rule of 72 says an investment will double in 12 years.
An investment at a fixed rate of return grows exponentially according to the simple differential equation \(dV/dt = rV\) from the article on what it takes to save $1 million, so \(V_f = V_0 e^{rt}\). To find the doubling time, set \(V_f = 2V_0\), which gives \(t = \log(2)/r\).
Notice that \(\log(2) = 0.693147\) and, if we want a formula that does not require converting the rate from a percent to a dimensionless fraction, we have to multiply by 100.
The only trouble is that no reasonable rate of return divides cleanly into 69, except for 1 and 3. However, 72 can be cleanly divided by 1, 2, 3, 4, 6, 8, 9, and 12 making it a much more convenient number for a back-of-the-envelope estimation. That is the story of how 69 became 72.
This begets the question - "if the rule of 72 is actually the rule of 69.3, what multiplicative factor does 72 correspond to?" The answer is "pretty much 2."
The purpose of the rule of 72 is for back of the envelope calculations and for that, a <3% error is acceptable.
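That "pretty much 2" claim is easy to check: under continuous compounding, waiting \(72/r\) years at rate \(r\) gives a growth factor of \(e^{0.72}\), independent of the rate. A quick sketch (the script is mine, added for illustration):

```python
import math

# Growth factor after t = 72/pct years at rate r = pct/100 under
# continuous compounding: e^{r t} = e^{0.72}, independent of the rate.
factor = math.exp(0.72)
print(round(factor, 4))  # 2.0544, i.e. "pretty much 2"

# Relative error versus an exact doubling:
error = (factor - 2.0) / 2.0
print(f"{error:.1%}")    # 2.7%
```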
Using this logic we can extend the rule. How long does it take our money to triple?
Approximate 109.86 as 108 since 109 and 110 are only evenly divisible by 1 and 5 of the first 12 integers respectively. However, 108 is evenly divisible by 1, 2, 3, 4, 6, 9, and 12; the result being 108, 54, 36, 27, 18, 12, and 9 respectively.
This gives us the following table. The quadrupling time is just double the doubling time.
Doubling time: \(\displaystyle \frac{72}{r_\text{as percent}}\); tripling time: \(\displaystyle\frac{108}{r_\text{as percent}}\); quadrupling time: \(\displaystyle\frac{144}{r_\text{as percent}}\). |
Initial Note
The highest priority input is RESET. If low, the FF is actively reset. But you have that tied high. So RESET isn't active.

1st Schematic Results
The next highest priority input is TRIG. But only if it is low (below its \$\frac13\$\$^\text{rd}\$ threshold.) If low, it actively sets the FF. If it is above that threshold, it isn't supposed to take priority and instead the FF is left either in its prior state or else, if THRES is high (above its \$\frac23\$\$^\text{rd}\$ threshold), the FF will be actively reset.
In your first schematic, the simulator will first find the DC steady-state conditions (unless you use UIC.) And this means that your RC tied to DISCH and THRES will immediately start out at \$V_\text{CC}\$. So this means THRES is high and will attempt to actively reset the output. But note that THRES is the lowest-priority in this regard. So when your input signal to TRIG goes low, it will take over as higher priority and will actively set the output despite THRES "suggesting" a reset. (TRIG takes priority over THRES.) You can see that dominance in your output, readily.
Your TRIG starts out high in your first simulation (the red trace, I believe.) So TRIG isn't taking priority. Instead, THRES is allowed to take over and therefore the FF is reset (note that at first the green trace is low) and DISCH is inactive (leaving the RC in the initial DC steady state condition.)
However, when TRIG goes low, it takes over (higher priority) and therefore sets the FF (green trace goes immediately high) and also causes DISCH to become active and discharge the capacitor. Discharging the capacitor means that now THRES is low (and therefore now inactive.)
When TRIG goes high again, it no longer asserts its higher priority, leaving that to THRES. But THRES is still too low (the capacitor isn't yet charged enough), so it also cannot assert its lower priority, either. This leaves the FF where it was last at (high.) So the output continues to be high for a little while, during which the resistor charges the capacitor upwards. Eventually, as you can see, THRES does reach the point where it becomes active and asserts a reset to the FF causing the output to go low.
But shortly after, your TRIG input goes low and actively asserts its dominance causing the FF to be set and go back high. Which you observe.
This repeats and completely explains the first simulation results.
Here's what I get in simulation:
The dark blue trace is the voltage on the capacitor that's part of the RC timing element. You can see that it does indeed rise to \$\frac23\$\$^\text{rd}\$ of \$V_\text{CC}\$ before the next change at the output happens. The other two traces are what you plotted, I believe. The above output traces demonstrate the discussion above is accurate.
2nd Results
Assuming your new schematic (except that I refused to use \$100\:\Omega\$ and used \$1\:\text{k}\Omega\$ in the collector of the BJT), the simulator will again first find the DC steady-state conditions. So the RC element tied to DISCH and THRES will immediately start out at \$V_\text{CC}\$, again. THRES is high and will attempt to actively reset the output. But you've inverted TRIG, which now starts out low (because your input is high and causing the BJT to be actively pulling TRIG low.) So TRIG takes priority over THRES and sets the FF. (DISCH is therefore inactive, so this leaves the capacitor at the fully charged DC steady state condition it started out at.) The output should be high.
When your input goes low, the BJT isn't active and the resistor pulls TRIG high and therefore inactive. Since the capacitor is still fully charged at this point, THRES can now take priority and it causes the output to be reset. The output should now be low and DISCH will now actively discharge the capacitor. As the capacitor voltage rapidly declines, THRES becomes inactive. But the FF state remains unchanged since TRIG is still inactive. So the discharge of the capacitor is allowed to fully complete and the output remains low for this period.
So far, and only so far, this matches your 2nd output.
When your input returns high, the TRIG goes low and takes priority forcing the FF to be set and the output to go high. DISCH becomes inactive and allows the capacitor to start charging. At first, THRES is inactive. But as the capacitor charges up THRES may become active (depending on the RC time constant and your driving input rate.) However, none of this matters because TRIG doesn't have priority. So for the entire time that TRIG is active low (while the input is high), the output will remain set. But the capacitor will continue to charge, too.
Now the behavior becomes more nuanced.
1. If the RC time constant is such that the capacitor can charge sufficiently that THRES becomes active before your input changes to low again (causing TRIG to go high and inactive), then THRES will reset the FF as soon as your input changes, because TRIG is inactive and THRES can take over. So then you'd expect the output to immediately go low.
2. If, however, the RC time constant is such that the capacitor cannot charge sufficiently that THRES becomes active before your input changes to low again, then THRES will not yet be active and so the FF will remain in its prior state (set) for a while. In this case, you would NOT expect the output to go immediately low. Instead, you'd expect the output to go low once the capacitor charges up enough to cause THRES to become active. (Assuming this happens fast enough, before the next change of TRIG, you will see a stretched high at the output followed by a short low.)
Since your second output results are consistent with condition (1) above, I believe your RC time constant is too short in the second output case. I can't explain it any other way.
Before you insist otherwise, here's what I get in simulation using your schematic (with the above mentioned modification to the collector resistor) when I keep the same values for the RC timing element (\$R=47\:\text{k}\Omega\$ and \$C=10\:\mu\text{F}\$):
I'm sure you note that this is NOT at all what you show in your simulation (except for the first \$\frac23\$\$^\text{rd}\$ second, or so.) But it is entirely consistent with what I wrote above for case #2, which should be the results you see if your schematic is accurate and your simulator and models are performing correctly.
Here's what I get, though, with \$R=47\:\text{k}\Omega\$ and \$C=1\:\mu\text{F}\$ (reducing the time sufficiently that it can meet case #1 above):
Now, that does actually reflect what you show in your simulation. The datasheets I've read for the 555 are pretty clear on the descriptions and logic I applied above. So, from this I conclude that you must be in case #1, somehow, in your second schematic. I can read your second schematic and I can see that it asserts there is not the difference I say must be present. But things are what they are and I can't change that. And my simulator generated outputs also support my conclusions, as well. |
A copper rod with length L is moving on two frictionless conducting rails, at a distance a from a very long straight conductor. The rod's resistance is R. Ignore any self-inductance. The magnetic permeability is $\mu_0$.
a) Find the direction and magnitude of the induced current.
I know I could probably just use Faraday's law, finding the induced emf:
$\mathcal{E}=-\frac{d \Phi_{B}}{d t}$
But instead I tried using this other version, and I just can't get it working: $$ \begin{array}{l}{\mathcal{E}=\oint(\overrightarrow{\boldsymbol{v}} \times \overrightarrow{\boldsymbol{B}}) \cdot d \vec{l}} \\ {\text { (all or part of a closed loop moves }} \\ {\text { in a } \overrightarrow{\boldsymbol{B}} \text { field) }}\end{array} $$ I think I'm a bit confused about the direction of the dL-vector
Here is my attempt: let's say the dl-vector is along the x-axis and we integrate from a to a+L. The B-field is into the plane, so: $$ \mathcal{E}=\int_{a}^{a+L}(\vec{v} \times \vec{B}(L)) \cdot d \vec{L} =\int_{a}^{a+L}(v \hat{k} \times B(L) \hat{\jmath}) \cdot d L \hat{i}=\int_{a}^{a+L}-v \cdot B(L) d L$$ $$=\int_{a}^{a+L}-v \cdot \frac{\mu_{0} I}{2 \pi L} d L=-\frac{v \mu_{0} I}{2 \pi}(\ln (a+L)-\ln (a)) $$ $$=\frac{v \mu_{0} I}{2 \pi} \ln \left(\frac{a}{a+L}\right)$$
I know this is the wrong emf, I'm supposed to get $$\mathcal{E}=\frac{v \mu_{0} I}{2 \pi} \ln \left(\frac{a+L}{L}\right)$$
And even if I chose my dL-vector to go in the other direction and integrate from a+L to a, I would get the same result.
What am I doing wrong here?
Edit: So the reason why I thought my result was wrong, was since the answer of the induced current was: $$ I=\frac{v \mu_{0} I}{2 \pi R} \ln \left(\frac{a}{a+L}\right) $$
But I think it comes from faradays law, where you have a negative sign in front of the induced emf. When you want to find the current you should take the magnitude. So I shouldn't multiply with the negative sign and change the fraction in the natural log. I should leave negative sign in front. |
What you have to do is to show that the dollar gamma satisfies the Black-Scholes PDE. Using Feynman-Kac it then follows that the dollar gamma is an expectation of a "payoff", just like the Black-Scholes claim price is an expectation of a payoff. And if something is the expectation of a payoff then it's a martingale. I'll leave the above for you to carry out...
Under the Black-Scholes model,
\begin{align*}
\text{Gamma} &= \frac{N'(d_1)}{S \sigma \sqrt{T-t}},\\
\text{Vega} &= SN'(d_1) \sqrt{T-t}.
\end{align*}
Then it is easy to see that
\begin{align*}
\text{Vega} = S^2 \sigma (T-t)\,\text{Gamma}.
\end{align*}
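That identity is easy to verify numerically. A quick sketch of mine (arbitrary illustrative parameters, no dividends):

```python
import math

# Sanity check of Vega = S^2 * sigma * (T - t) * Gamma for the standard
# Black-Scholes greeks (no dividends; parameter values are arbitrary).
def n_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def d1(S, K, r, sigma, tau):
    return (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))

S, K, r, sigma, tau = 100.0, 95.0, 0.02, 0.25, 0.5
gamma = n_pdf(d1(S, K, r, sigma, tau)) / (S * sigma * math.sqrt(tau))
vega = S * n_pdf(d1(S, K, r, sigma, tau)) * math.sqrt(tau)

print(abs(vega - S ** 2 * sigma * tau * gamma) < 1e-9)  # True
```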
For an option with price $C$, the P$\&$L, with respect to changes of the underlying asset price $S$ and volatility $\sigma$, is given by\begin{align*}P\&L = \delta \Delta S + \frac{1}{2}\gamma (\Delta S)^2 + \nu \Delta \sigma,\end{align*}where $\delta$, $\gamma$, and $\nu$ are respectively the delta, gamma, and vega hedge ratios. Then it is clear ...
For any process with independent increments, by the very fact of statistical independence the variance of $x_{t3}-x_{t1}$ is going to be the sum of the variances of $x_{t2}-x_{t1}$ and $x_{t3}-x_{t2}$ for $t1\leq t2 \leq t3$. Many processes have independent increments, including ABM, GBM, Poisson, etc. Then if you add a homogeneity assumption (the ...
No, you should not expect such a relationship to hold in general. The reason is that American options have an "exercise barrier" which European options don't, and this results in different prices and greeks.In the case of put options (with interest rate $r>0$) as the spot price falls, at some point it becomes optimal to exercise early and take the cash. ...
You have a multidimensional problem - there isn't an answer of "this is what the greeks look like" for all cases, because it depends on the various levels of the different parameters.For example, if we limit ourselves purely to KO Call options, where the spot is 100, and there is no drift, with a time to maturity of 1 year (changing this is equivalent to ...
Simply put, no. Vega depends on a variety of factors (including the level/price of the underlying asset). However, vomma/volga/vega convexity (whatever you want to call dVega/dIV) is always positive. So as IV increases, the vega of an option increases - I think this might have been what you were getting at.It's important to understand that IV is an input ...
Not sure this is a valid question! Gamma p/l is by definition the p/l due to realized volatility being different from implied. Vega p/l is by definition the p/l due to moves in implied volatility.The second part of the question you have answered yourself. Short dated options have more gamma exposure, long dated options have more vega exposure.
The conjecture is true when the interest rate is zero. Note that, from this question, under the Black-Scholes model,\begin{align*}\Gamma(t,S_t) &= \frac{N'(d_1(t))}{S_t \sigma \sqrt{T-t}}\\Vega(t,S_t) &= S_tN'(d_1(t)) \sqrt{T-t},\end{align*}where\begin{align*}d_1(t) = \frac{\ln \frac{S_t}{K} + \big(r+\frac{1}{2}\sigma^2\big)(T-t)}{\sigma \...
The risk exposures/sensitivities of long and short positions always have different signs. This has to hold since derivatives are zero sum games.Vega is always positive for a long position in a European plain vanilla option (or any convex payoff in general). This is true even when the option is already in-the-money. As volatility increases, the probability ...
if you have a portfolio of calls and puts with the same maturity then your portfolio is gamma neutral if and only if it is vega neutral.The reasons is that the BS gamma divided by the BS vega is a function of $S$ and $T$ that does not vary with $K.$ So if you construct a linear combination that has zero gamma then the vega is zero too, and vice versa.
Vega is the partial derivative of the option price (as a function of parameters -- current stock price $S_t$, strike price $K$, implied volatility $\sigma$, etc.) with respect to $\sigma$ -- holding other parameters fixed:$$vega = \frac{\partial}{\partial \sigma} V(S_t,K,\tau,r,\sigma) $$You are confusing the stochastic process with the parameter $S_t$ ...
IV is one of the inputs for your option pricing model, vega measures the actual impact (e.g. in Dollars, Euros...) of any change in IV.Intuitively IV is the price of the option while vega is the sensitivity to IV.Bottom line: There is a clear distinction!
The VIX is designed to "represent the implied volatility of a hypothetical at-the-money [SPX] option with exactly 30 days to expiration." (via the CBOE) The calculations are available from the CBOE in this white paper.Note that your question is wrong -- it is the implied volatility, not the vega. Moreover, you wouldn't predict a change in vega (which is a ...
Well, complete elimination of even Delta is not possible, forget about Vega. When I say this, I'm talking about the trouble you'd face if you keep dynamically hedging your position from time to time. I mean it's not practical, however theoretically feasible it may seem. But anyway if you're interested, below ways could be of your help. You might want to ...
Constant Vega Requires Options Weighted Inversely Proportional to the Square of the Strike.E.g. if you have the following portfolio of options:\begin{equation}\int_{S_i(t)}^{\infty}\frac{2\Big(1-\log[\frac{K}{S_i(t)}]\Big)}{K^2}C_i(t,\tau,K)dK+\int_{0}^{S_i(t)}\frac{2\Big(1-\log[\frac{K}{S_i(t)}]\Big)}{K^2}P_i(t,\tau,K)dK\end{equation}You have a ...
Here is a document that will answer some of your concerns. There are many other good reads out there but this one is a nice one to get started with.In case the link is broken at the time one reads this answer, the document is called "Just what you need to know about variance swaps" by Sebastien Bossu (JPMorgan 2005).
Usually vega and gamma go in the same direction, but you can have opposite exposure in a calendar spread.For an ATM option, vega decreases closer to maturity while gamma increases. If you implement the following:-long a 1 month ATM option-short a 2 months ATM optionyou should be long gamma and short vega.
Volatility is in effect what you are trading in options and the one unknown quantity in options valuation.The other inputs in option valuation: (1) Spot Price: observed in the markets, (2) Strike: defined by contract terms, (3) time: defined by maturity of contract, and (4) interest rates: also observed in the markets.So in effect volatility is ...
It seems like he is assuming that the shorter term volatilities change more than the longer term ones and the relatively sensitivity is proportional to $1 / \sqrt{T}$. Thus, this hedge is not against a parallel shift of the surface. This is not an uncommon assumption and the corresponding vegas are often referred to as "time weighted vegas".
The reason is that in many common models including geometric Brownian motion, the variance of the logarithmic returns is proportional to time. Thus, their standard deviation/volatility is proportional to the square root of time.Consider for example the class of Levy models where $X$ is is the logarithmic stock price process such that $S_t = S_0 e^{X_t}$. ...
No, you are incorrect. A deep in the money option is long vega. It's not just about the probability of being in the money, it's about how far in the money it is. Your reasoning is correct if we are talking about digital options which pay a fixed amount if the option expires in the money, but incorrect for regular options.One way to prove this ...
Chan, Jiun Hong and Joshi, Mark S. and Zhu, Dan, First and Second Order Greeks in the Heston Model (December 26, 2010). Available at SSRN: https://ssrn.com/abstract=1718102 or http://dx.doi.org/10.2139/ssrn.1718102should just about cover it.
In general only non-linear instruments, like options, possess vega. Vega is always positive, no matter the directional component. So when you are long either a call or a put option you are long vega and when you are short either a call or a put option you are short vega. Thereby it becomes clear that you can go long and short different option positions to ...
First, notice that the two greeks you mentioned in your question are simply the partial derivatives of the value of the option $V$ with respect to two different variables $S$ (the price of the underlying) and $\sigma$ (the volatility of the underlying):$$\Delta = \frac{\partial V}{\partial S} \quad \text{and} \quad \nu=\frac{\partial V}{\partial \sigma}$$...
Intuitive, no math explanation:Imagine two call options, option A expiring tomorrow and option B expiring in two months. Both of the options are way out of the money and have the same strike price.Due to some event the implied volatility of the stock spikes. Let's assume stock price stays the same. Does the chances of option A expiring in the money ...
To keep notations uncluttered, consider that $r=q=0$ in what follows, while focusing on the particular case of an ATM option i.e. $K=S$ (otherwise use the same reasoning with $K=F(0,T)=Se^{(r-q)T}$ i.e. an ATMF option, the conclusion won't change that much).In your first question, you're looking for the sign of the derivative of Vega with respect to the ...
Instead of just considering a parallel shift of the whole volatility surface, you can decompose the surface into maturities/strikes domains, so called buckets and consider Vega buckets which are sensitivities wrt to bumps of each of these domains.The vol smile is often inter/extra-polated using a model calibrated to market prices, e.g. the SABR model or ... |
Problem: Consider a graph $G = (V, E)$ on $n$ vertices and $m > n$ edges, where $u$ and $v$ are two vertices of $G$.
What is the asymptotic complexity to calculate the shortest path from $u$ to $v$ with Dijkstra's algorithm using Binary Heap ?
To clarify, Dijkstra's algorithm is run from the source and allowed to terminate when it reaches the target. Knowing that the target is a neighbor of the source, what is the time complexity of the algorithm?
My idea:
Dijkstra's algorithm in this case makes $O(n)$ inserts ($n$ if the graph is complete) and 1 extract-min in the binary heap before calculating the shortest path from $u$ to $v$.
In a binary heap insert costs $O(\log n)$ and extract min $O(\log n)$ too.
So the cost in my opinion is $O(n \cdot \log n + \log n) = O(n \log n)$
But the answer is $\Theta(n)$, so there is something wrong in my thinking.
Where is my mistake? |
I'm implementing a Finite Difference WENO5 with Lax-Friedrich flux splitting on a uniform, structured grid to solve the 2D Euler equations of fluid dynamics on a rectangular domain in cartesian coordinates.
$$\frac{\partial U}{\partial t}+\frac{\partial F}{\partial x}+\frac{\partial G}{\partial y}=0$$
where:
$$U = (\rho,\rho{u},\rho{v},\rho E)^t$$ $$F = (\rho{u},\rho{u^2}+p,\rho u v, \rho uH)^t$$ $$G = (\rho{v},\rho u v,\rho{v^2}+p, \rho vH)^t$$
$$E = \frac{p }{\rho (\gamma-1)} + \frac{1}{2}(u^2+v^2)$$ $$H = \frac{\gamma p }{\rho (\gamma-1)} + \frac{1}{2}(u^2+v^2)$$
Although the interior scheme seems to work fine, I am having trouble with the numeric implementation of boundary conditions.
The WENO5 interpolation is done on the fluxes, to approximate the derivative on the node $i$ as: $$\left( \frac{\partial F}{\partial x}\right) _{i}=\frac{f^{WENO}_{i+1/2}-f^{WENO}_{i-1/2}}{\Delta x}$$ and then update the conservative variables' vector via a Runge-Kutta method.
I want to use ghost nodes, and I know there has to be 3 ghost nodes on each end of the domain (because of the stencil of the WENO5) but I do not know what values I have to assign to the ghost nodes, or what kind of extrapolation I have to use to find those values. In my case the physical boundaries coincide with the leftmost and rightmost (lowermost and uppermost for the other direction) points of the grid, respectively.
These are the three types of boundary conditions I'm working with at the moment, namely:
Reflecting wall (x-direction): $$\begin{bmatrix}\frac{\partial \rho }{\partial x}=0 \\ u=0\\ \frac{\partial v }{\partial x}=0 \\ \frac{\partial p }{\partial x}=0 \end{bmatrix} $$
Reflecting wall (y-direction): $$\begin{bmatrix}\frac{\partial \rho }{\partial y}=0 \\ \frac{\partial u }{\partial y}=0 \\ v=0\\ \frac{\partial p }{\partial y}=0 \end{bmatrix} $$
Dirichlet condition with fixed value on all primitive variables: $$\begin{bmatrix} \rho =\rho_0 \\ u =u_0 \\ v =v_0 \\ p =p_0 \end{bmatrix} $$
Thanks in advance for your answers. |
The feature that makes LaTeX the right editing tool for scientific documents is the ability to render complex mathematical expressions. This article explains the basic commands to display equations.
Contents
Basic equations in LaTeX can be easily "programmed", for example:
The well known Pythagorean theorem \(x^2 + y^2 = z^2\) was proved to be invalid for other exponents. Meaning the next equation has no integer solutions: \[ x^n + y^n = z^n \]
As you see, the way the equations are displayed depends on the delimiter, in this case \[ \] and \( \).
LaTeX allows two writing modes for mathematical expressions: the inline mode and the display mode. The first one is used to write formulas that are part of a text. The second one is used to write expressions that are not part of a text or paragraph, and are therefore put on separate lines.
Let's see an example of the inline mode:
In physics, the mass-energy equivalence is stated by the equation $E=mc^2$, discovered in 1905 by Albert Einstein.
To put your equations in inline mode use one of these delimiters: \( \), $ $ or \begin{math} \end{math}. They all work and the choice is a matter of taste.
The displayed mode has two versions: numbered and unnumbered.
The mass-energy equivalence is described by the famous equation \[E=mc^2\] discovered in 1905 by Albert Einstein. In natural units ($c$ = 1), the formula expresses the identity \begin{equation} E=m \end{equation}
To print your equations in display mode use one of these delimiters: \[ \], \begin{displaymath} \end{displaymath} or \begin{equation} \end{equation}.
Important Note: the equation* environment is provided by an external package; consult the amsmath article.
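For instance, here is a minimal sketch using the starred variant for an unnumbered displayed equation (assuming the amsmath package is available):

```latex
\documentclass{article}
\usepackage{amsmath} % provides the equation* environment

\begin{document}
The mass-energy equivalence in display mode, unnumbered:
\begin{equation*}
  E = mc^2
\end{equation*}
\end{document}
```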
Below is a table with some common maths symbols. For a more complete list see the List of Greek letters and math symbols:
Greek letters: \alpha \beta \gamma \rho \sigma \delta \epsilon, rendered as $$ \alpha \ \beta \ \gamma \ \rho \ \sigma \ \delta \ \epsilon $$
Binary operators: \times \otimes \oplus \cup \cap
Relation operators: < > \subset \supset \subseteq \supseteq
Others: \int \oint \sum \prod
The mathematics mode in LaTeX is very flexible and powerful, there is much more that can be done with it: |
Prove that$$I=\int_0^\infty \frac{\ln(1+x+x^2)}{1+x^2}dx=\frac{\pi}{3}\ln(2+\sqrt 3)+\frac43G$$
I've found this integral in my notebook, and perhaps I encountered it before since it looks quite familiar. Anyway, I thought it was quite a trivial integral that I could solve quickly, but I'm having a hard time finishing it. I went on with Feynman's trick:
$$I(a)=\int_0^\infty \frac{\ln((1+x^2)a+x)}{1+x^2}dx\Rightarrow I'(a)=\int_0^\infty \frac{dx}{a+x+ax^2}$$ $$=\frac1a\int_0^\infty \frac{dx}{\left(x+\frac{1}{2a}\right)^2+1-\frac{1}{4a^2}}=\frac{1}{a}\frac{1}{\sqrt{1-\frac{1}{4a^2}}}\arctan\left(\frac{x+\frac{1}{2a}}{\sqrt{1-\frac{1}{4a^2}}}\right)\bigg|_0^\infty$$$$=\frac{\pi}{\sqrt{4a^2-1}}-\frac{2}{\sqrt{4a^2-1}}\arctan\left(\frac{1}{\sqrt{4a^2-1}}\right)=\frac{2\arctan\left(\sqrt{4a^2-1}\right)}{\sqrt{4a^2-1}}$$ We can prove easily via the substitution $x\to \frac{1}{x}$ that $I(0)=0$ so we have that: $$I=I(1)-I(0)=2\int_0^1 \frac{\arctan\left(\sqrt{4a^2-1}\right)}{\sqrt{4a^2-1}}da$$ Now I thought about two substitutions: $$ \overset{a=\frac12\cosh x}=\int_{\operatorname{arccosh}(0)}^{\operatorname{arccosh}(2)} \arctan(\sinh x)dx$$ $$\overset{a=\frac12\sec x}=\int_{\operatorname{arcsec}(0)}^{\frac{\pi}{3}}\frac{x}{\cos x}dx$$ But in both cases the lower bound is annoying and I think I am missing something here (maybe obvious). So I would love to get some help in order to finish this.
Edit: We can apply once again Feynman's trick. First consider: $$I(t)=\int_0^1 \frac{2\arctan(t\sqrt{4a^2-1})}{\sqrt{4a^2-1}}da\Rightarrow I'(t)=2\int_0^1 \frac{1}{1+t^2(4a^2-1)}da$$ $$=\frac{1}{t\sqrt{1-t^2}}\arctan\left(\frac{2at}{\sqrt{1-t^2}}\right)\bigg|_0^1=\frac{1}{t\sqrt{1-t^2}}\arctan\left(\frac{2t}{\sqrt{1-t^2}}\right)$$ So once again we have $I(0)=0$, so $I=I(1)-I(0)$. $$\Rightarrow I=\int_0^1\frac{1}{t\sqrt{1-t^2}}\arctan\left(\frac{2t}{\sqrt{1-t^2}}\right)dt\overset{t=\sin x}=\int_0^\frac{\pi}{2}\frac{\arctan(2\tan x)}{\sin x}dx$$ At this point Mathematica can evaluate the integral to be: $$I=\frac{\pi}{3}\ln(2+\sqrt 3)+\frac43G$$ I didn't try the last integral yet, but I am thinking of Feynman again $\ddot \smile$.
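For what it's worth, the closed form Mathematica reports can be sanity-checked numerically. A quick script of mine (the value of Catalan's constant $G$ is hard-coded; the substitution $x=\tan t$ maps the integral to $\int_0^{\pi/2}\ln(1+\tan t+\tan^2 t)\,dt$):

```python
import math

# Numerical sanity check of I = (pi/3) ln(2 + sqrt(3)) + (4/3) G.
# After x = tan t, I = \int_0^{pi/2} ln(1 + tan t + tan^2 t) dt,
# evaluated here with the midpoint rule.
N = 200_000
h = (math.pi / 2) / N
total = 0.0
for k in range(N):
    t = (k + 0.5) * h
    u = math.tan(t)
    total += math.log(1.0 + u + u * u) * h

G = 0.915965594177219  # Catalan's constant
closed = math.pi / 3 * math.log(2 + math.sqrt(3)) + 4 * G / 3
print(round(total, 4), round(closed, 4))  # both approx 2.6004
```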
Edit 2: I found that I was already working on this some time ago, and actually posted it here, which means I have solved it before using Feynman's trick, but right now I can't remember how I did it.
So given the circumstances I am positive that it can be solved starting with my approach, but if you have any other ways then feel free to share it. |
Is there somewhere on the internet I can find cosmological redshift data? In particular, I would like to know the redshift around the time when the expansion of the Universe began to accelerate.
One of the main places where data about galaxies gets aggregated is the NASA Extragalactic Database (NED). For example, here's the information page for M101 with the default cosmology in their search form. In particular, you want to look at the redshift-independent distances, and the redshift data points. Using the 'Metric Distance' you can calculate the cosmological redshift it would have if it weren't moving (for a given cosmology) by numerically inverting equation 15 from Hogg's cosmology calculations summary paper (you'll probably have to numerically integrate, too).
Note that the peculiar velocity (velocity relative to the Hubble flow) is usually around hundreds of kilometers per second. So any redshift greater than about $0.01$ (equivalent to a radial velocity of about $3,000\operatorname{km}\operatorname{s}^{-1}$) is almost certainly entirely dominated by the cosmological redshift of the object. There are a lot of databases replete with redshifts of galaxies that stretch back to around $z=1$ for ordinary galaxies, and much further for carefully selected galaxies and active galactic nuclei/quasars. For example: Sloan Digital Sky Survey (SDSS), DEEP2, the AGN and Galaxy Evolution Survey (AGES), and Galaxy and Mass Assembly (GAMA). This list is nowhere near complete, of course.
Redshift at the time when the universe started to accelerate: From the Friedmann equations: $$\dot{a}=aH=H_0\sqrt{\Omega_{m0}/a+a^2\Omega_{\Lambda 0}}$$ We require $\ddot{a}>0$. The calculation gives $$a=\left(\frac{\Omega_{m0}}{2\Omega_{\Lambda 0}}\right)^{1/3} \approx 0.6$$ $\rightarrow z=1/a-1 \approx 0.67$
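A minimal sketch of that calculation (the density parameters $\Omega_{m0}=0.3$, $\Omega_{\Lambda 0}=0.7$ are assumed, roughly the concordance values):

```python
om, ol = 0.3, 0.7             # assumed Omega_m0, Omega_Lambda0
a = (om / (2 * ol)) ** (1/3)  # scale factor where the expansion starts accelerating, ≈ 0.60
z = 1/a - 1                   # corresponding redshift, ≈ 0.67
```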
How can I make all the text in math mode bold and italic? I need some code which will make all the math text bold and italic at once.
Hope this may help you:
\documentclass{book}
\usepackage{amsmath}
\mathversion{bold}% this does the trick
\begin{document}
This is for test $a+b=c\alpha\beta\gamma$
\end{document}
Merry X-Mas |
Problem 1: Food
This is the major contributing factor to the size of the biosphere. Humans need to consume around 2,250 calories per day (averaged between men and women) to sustain themselves, and more if they are doing physical work. 2,250 kcal ≈ 9,414 kJ, but that's not very useful. More useful: 100g of potato (in any form) = 77cal. One medium potato (defined by Google via USDA as 213g) contains 163 calories. If you live off potatoes alone, you'll need:
$$ 2,250 \div 163 = 13.80 \approx 14\text{ potatoes/day} $$$$ = 14 \times 365 = 5110 \text{ potatoes/year} $$
That's quite a lot of potatoes. As you can see here, potatoes can be planted March-May and harvested Jun-Oct. You could always plant several crops to be harvested at different times.
The same website as linked above says potatoes should be planted around 30cm apart. Assuming you follow this guide, then you'll need:
$$ 5110 \times 30 = 153300\text{cm} = 1533.00\text{m} $$
for all your potatoes in one row. However, you could plant in multiple rows to save space:
$$ 1533 \div 50 = 30.66 \text{m length} $$$$ 50 \times 0.3\text{m} = 15\text{m width} $$$$ 30.66 \times 15 = 459.9\text{m}^{2} $$
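The potato arithmetic above can be bundled into a short script (a sketch; the constants are the ones assumed in this answer):

```python
import math

CAL_PER_DAY  = 2250   # daily calorie needs
CAL_PER_SPUD = 163    # one medium (213 g) potato
SPACING_M    = 0.30   # plant spacing
ROWS         = 50

spuds_per_day  = math.ceil(CAL_PER_DAY / CAL_PER_SPUD)       # 14
spuds_per_year = spuds_per_day * 365                         # 5110
row_length_m   = spuds_per_year * SPACING_M                  # 1533 m in one row
plot_area_m2   = (row_length_m / ROWS) * (ROWS * SPACING_M)  # ≈ 459.9 m²
```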
So, to survive on potatoes for one year, you will need 460 square metres of space in the biosphere. However, take into account that
- You won't want to survive on potatoes for a year, so you need more land for other foods;
- If you do just stick with potatoes, you can't plant them in the same place in consecutive years, so double the land required (920 square metres);
- Potatoes may not be the best plant. That said, at some quick calculations, carrots would require around 140 square metres less land for the same crop. However, carrots have less calorific content per unit (41 per medium carrot), so you would need more. Growing wheat, while more space efficient, requires work after it's harvested and other ingredients to bring it to an edible state.

Problem 2: Oxygen
I see two options. You can either filter outside air or grow lots of plants.
Filter outside air If you're assuming today's tech, you need some costly and space-inefficient filtering units. If you have advanced tech, then the walls of your biosphere can be filters, but I'll stick to today's tech. This site has a very good explanation of the oxygen requirements of an average human. To summarise:
8.3 ft² of air per adult for 5 hours (with an 8 ft ceiling)

However, this is a bare minimum. If the size is much less, you won't last 5 hours. The FEMA recommendation is 10 ft² per adult for 5 hours. Given that air is 21% oxygen, this comes to $8.3 \times 0.21 = 1.74$ square feet for 5 hours. Using more appropriate numbers, this is $1.74 \times 8 = 13.92$ cubic feet for 5 hours, and $13.92 \div 5 = 2.78$ cubic feet per hour. That sounds like a fairly reasonable number: what output would you need?
$$ (2.78 \div 21) \times 100 = 13.23 $$
This line shows that 2.78 cubic feet per hour of oxygen equates to 13.23 cubic feet per hour of normal air.
$$ (13.23 \div 60) \div 60 = 0.003675 $$
This is the amount of air, in cubic feet per second, that your concentrator needs to take in to keep you alive. That's a perfectly good number: given that many home fans can move around 2 cubic feet of air per second, a commercial fan in a concentrator will have no trouble. A unit capable of doing this takes up less than 1 square metre.
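For reference, the same airflow arithmetic in a few lines of Python (same numbers and units as above):

```python
FLOOR_FT2   = 8.3    # minimum floor area per adult for 5 hours
CEILING_FT  = 8
O2_FRACTION = 0.21   # oxygen fraction of air

o2_ft3_5h = FLOOR_FT2 * O2_FRACTION * CEILING_FT  # ≈ 13.9 ft³ of oxygen per 5 h
o2_ft3_hr = o2_ft3_5h / 5                         # ≈ 2.79 ft³ of oxygen per hour
air_ft3_s = (o2_ft3_hr / O2_FRACTION) / 3600      # ≈ 0.0037 ft³ of air per second
```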
Grow lots of plants According to NASA, the average human needs 0.84kg of oxygen each day to survive. This can be produced by any size plants, but the space requirement changes depending. For example, a mature broadleaf tree such as an oak produces 10-15kg of oxygen per day, plenty to support a human. However, it can easily take up 20 square metres of space and can reach up to 30m high. Smaller plants may be easier: since they produce less oxygen, you can use a more exact number to produce closer to the right amount of oxygen per day, saving space. However, keep in mind that an exercising human uses much more oxygen - up to 7kg per day.
The really easy way is to just pull in air from outside. However, this may not be possible, so your best bet is probably the filter, as it takes up the least space.
I am reading a paper called "Rational Proof". It mentions the following one-to-one reduction. I cannot find an introduction to it by googling.
An excerpt from the paper. "Recall that a one-to-one reduction from a function $f$ to another function $g$ is a triple of polynomial time computable functions ($\alpha$, $\beta$, $\gamma$) such that:
For all $x$ in the domain of $f$ we have $y=f(x)$ if and only if $g(\alpha(x))=\beta(y)$ For all $x$ in the domain of $f$, let $w=\alpha(x)$. Then we have $g(w)=z$ if and only if $f(x)=\gamma(z)$. "
Any reference for the formal definition of the so-called "one-to-one reduction" is appreciated. Thanks.
(HP65) Factorial and Gamma Function
10-21-2017, 08:32 AM (This post was last modified: 10-26-2017 05:10 AM by Gamo.)
Post: #1
(HP65) Factorial and Gamma Function
Just noticed that the HP 65 cannot calculate decimal factorials, and the HP67 app for Android also cannot do it.
Here is a handy program using Stirling's approximation for Factorial and Gamma Function.
This approximation is good for x<70
Program:
Code:
Instruction:
1. Press [A] Initialize
2. Press [E] for x!
Example: [A] Initialize then 4.25 [B]
result approximation 35.21
10-21-2017, 01:21 PM (This post was last modified: 10-21-2017 01:26 PM by Dieter.)
Post: #2
RE: (HP65) Factorial and Gamma Function
(10-21-2017 08:32 AM)Gamo Wrote: Just noticed that HP 65 cannot calculate decimal Factorial and HP67 app for Android also cannot do it.
What you are missing is a Gamma function. One or another HP67 simulator app may have such a function, for instance the one from CuVee software for iPhone. Which app are you referring to? Maybe yours has such a function as well?
BTW, the first HP with Gamma (by means of the x! key) was the HP-34C from 1979. Maybe this was even the first pocket calculator with an accurate Gamma function at all (does anyone know?). Earlier models did not feature this, and even the 41-series (introduced about the same time as the 34C) had no Gamma. Possibly to keep its factorial function compatible with the 67/97's. But a separate Gamma function (like on the 42s) would have been nice.
(10-21-2017 08:32 AM)Gamo Wrote: Here is a handy program using Stirling's approximation for Factorial and Gamma Function.
What can you say about this approximation's accuracy? It looks good for large arguments but less so for small x, e.g. x=1 results in 0,9995. If you omit the first constant –571/2488320 the average accuracy actually seems to increase.
The approximation is good for even larger arguments; the accuracy even increases. But due to the limitation of the HP65/67's working range the max. x is near 69,9575, where the result approaches 1E100, so larger x will cause an overflow error.
BTW, the ENTER after LBL E can and should be omitted and instead of [1/x] [x] you may use a simple division.
That's why I prefer a CLX or CLST at the end of such initialization routines. ;-)
Here the approximation has an absolute error of ~ –2E–5. Without the first constant it is only ~ +5E–6. ;-)
Dieter
10-21-2017, 08:01 PM
Post: #3
RE: (HP65) Factorial and Gamma Function
(10-21-2017 01:21 PM)Dieter Wrote: What can you say about this approximation's accuracy? It looks good for large arguments but less so for small x, e.g. x=1 results in 0,9995. If you omit the first constant –571/2488320 the average accuracy actually seems to increase.
As already mentioned, the error is larger for small arguments and smaller for large ones. With a little bit of tweaking the coefficients this can be changed to a more evenly distributed error. And finally there is the shift-and-divide method: the approximation is only used for sufficiently large x, say x>6. For smaller x, e.g. 4.25, the approximation is calculated for 6.25 and finally the result divided by (5.25*6.25).
Here is a quick and dirty version of this idea, with modified coefficients:
Code:
LBL A
If evaluated exactly (!) the largest error should be about 1...2 units in the 9th significant digit. Due to the numeric limitations of a 10-digit calculator the error can and will be slightly higher here and there.
The result for x=4,25 now is 35,21161186. The true result is ...1185.
Dieter
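For readers following along in a modern language, the shift-and-divide idea sketches out like this in Python (a sketch using the textbook Stirling-series coefficients, not the tweaked values in the HP program above):

```python
import math

def factorial_stirling(x, shift_to=6.0):
    """x! via the Stirling series, shifting small arguments up first.

    x! = (x+n)! / ((x+1)(x+2)...(x+n)) lets us apply the asymptotic
    series only where it is accurate."""
    divisor = 1.0
    while x < shift_to:
        x += 1.0
        divisor *= x
    series = (1 + 1/(12*x) + 1/(288*x**2)
                - 139/(51840*x**3) - 571/(2488320*x**4))
    return math.sqrt(2*math.pi*x) * (x/math.e)**x * series / divisor

# factorial_stirling(4.25) ≈ 35.21161185
```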
10-22-2017, 02:40 AM
Post: #4
RE: (HP65) Factorial and Gamma Function
Dieter Thank You
Your quick modification is very good with more accurate approximation.
The HP67 app for Android is simply called HP67 in the Play Store. If you have an Android device this app is highly recommended and free, except that this particular app cannot do decimal factorials.
I do have the HP67 iOS app from CuVee Soft and noticed that this version can do this no problem.
RPN-65 SD for iOS from CuVee Soft, which emulates the HP65, cannot do decimal factorials.
Gamo
10-22-2017, 08:49 AM
Post: #5
RE: (HP65) Factorial and Gamma Function
Hello Gamo,
which formula do you use for your little program?
10-22-2017, 05:28 PM (This post was last modified: 10-22-2017 05:28 PM by Dieter.)
Post: #6
RE: (HP65) Factorial and Gamma Function
(10-22-2017 02:40 AM)Gamo Wrote: Your quick modification is very good with more accurate approximation.
Here is an improved version. It uses a different technique to prevent overflow during the calculation of x^x · e^(-x). In your program and my first version this is done with two consecutive multiplications of x^(x/2), while now this term has been rearranged to (x/e)^x:
Code:
LBL A
The program also no longer requires R5 and R6, most of the calculation is done on the stack.
Dieter
10-24-2017, 06:34 PM
Post: #7
RE: (HP65) Factorial and Gamma Function
Since this has not been answered yet: it's Stirling's approximation. The program uses the first terms of the series given in the section "Speed of convergence and error estimates". In my modified version the n² and n³ coefficients have been replaced by optimized values.
BTW I just realize that the linked Wikipedia article has some nice other approximations that may be worth a try, e.g. the one by Nemes (2007).
Dieter
10-25-2017, 12:26 AM
Post: #8
RE: (HP65) Factorial and Gamma Function
Hello peacecalc
It's using Stirling's approximation. This formula can fit into a limited 99-step programmable calculator.
Gamo
10-25-2017, 04:06 PM
Post: #9
RE: (HP65) Factorial and Gamma Function
Hello Dieter, hello Gamo,
thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma function with real arguments. I remember that I also used the Stirling approximation for large arguments (x>10), as an example of coprocessor programming. But for smaller arguments I used the method described above (division by integer values). For negative arguments I used the formula:
\[ \Gamma(x) =\frac{\pi}{\sin(\pi x)\cdot\Gamma(1-x)}\] for example:
\[ \Gamma(-3.6) =\frac{\pi}{\sin(\pi (-3.6))\cdot\Gamma(4.6)}\].
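A quick Python sketch of this reflection approach (leaning on the library Gamma for the positive-argument part):

```python
import math

def factorial_reflect(x):
    """x! for negative non-integer x via the reflection formula
    Gamma(z) = pi / (sin(pi*z) * Gamma(1-z)), applied with z = x + 1."""
    if x >= 0:
        return math.gamma(x + 1)
    return math.pi / (math.sin(math.pi * (x + 1)) * math.gamma(-x))

# factorial_reflect(-3.6) ≈ -0.888685714
```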
10-26-2017, 06:59 AM (This post was last modified: 10-26-2017 05:42 PM by Dieter.)
Post: #10
RE: (HP65) Factorial and Gamma Function
(10-25-2017 04:06 PM)peacecalc Wrote: thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma-fct with real arguments.
Ah, yes, Turbo Pascal – I loved it.
(10-25-2017 04:06 PM)peacecalc Wrote: I remember this, I also used for large arguments the stirling approx (x>10) as a example for coprozesser programming. But for smaller arguments I used the method described above (divsion by integer values). For negative number I used the formula: (...)
Great. Here is an HP67/97 version that applies the same formula, modified for x! instead of Gamma. Also the sin(pi*x) part is calculated in a special way to avoid roundoff errors for multiples of pi, especially if x is large.
Edit: code has been replaced with a slightly improved version
Code:
LBL e
Initialize with f [e].
–3,6 [E] => –0,888685714
–4,6 [E] => 0,246857143
Edit:
If you don't mind one more second execution time, here is a version with the constants directly in the code. Except R0 no other data registers are used, and an initialisation routine is not required either.
Code:
LBL E
Dieter
10-26-2017, 07:11 AM
Post: #11
RE: (HP65) Factorial and Gamma Function
(10-26-2017 06:59 AM)Dieter Wrote:(10-25-2017 04:06 PM)peacecalc Wrote: thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma-fct with real arguments.
Yep! What about BCD math, for instance?
Greetings,
Massimo
-+×÷ ↔ left is right and right is wrong
10-26-2017, 03:41 PM
Post: #12
RE: (HP65) Factorial and Gamma Function
Hello friends,
an interesting remark: the coprocessor worked stack-oriented, and for calculating the sum with the Bernoulli numbers I used the Horner scheme.
10-26-2017, 05:34 PM
Post: #13
RE: (HP65) Factorial and Gamma Function
AFAIK this was only available in version 3.0. I preferred the later versions with a decent IDE, especially from 4.0 to 6.0.
But BCD math indeed is a great plus. I wish it was available in more classic programming languages. BTW, what about the HP85/86's BASIC in this regard?
Dieter
10-26-2017, 08:11 PM
Post: #14
RE: (HP65) Factorial and Gamma Function
(10-26-2017 05:34 PM)Dieter Wrote:
Oh well, I had COMP[UTATIONAL]-3 type in COBOL! :)
Greetings,
Massimo
10-27-2017, 08:06 AM
Post: #15
RE: (HP65) Factorial and Gamma Function
OT for the Pascal lovers (hi!): so do you also love HPPL?
I mean the syntax is pretty similar.
Wikis are great, Contribute :)
10-27-2017, 01:57 PM
Post: #16
RE: (HP65) Factorial and Gamma Function
Here is the formula for the Stirling series.
Gamo
10-27-2017, 02:26 PM
Post: #17
RE: (HP65) Factorial and Gamma Function
(10-27-2017 08:06 AM)pier4r Wrote: OT for the Pascal lovers (hi!): so do you love also the HPPL?
Yes, it is similar and yes, I love it! I just wish they'd add a few things like enumeration and user-defined records. Pointers I could live without.
Tom L
People may say I'm inept but I consider myself to be totally ept.
A famous result by Motzkin and Straus expresses the $k$-clique problem as the maximization of a quadratic function subject to a system of linear constraints. In particular, they prove:
Let $G$ be a graph with vertices $1,\ldots,n$ and edge set $E$.
Then $G$ contains a $k$-clique,
if and only if there exist real numbers $x_1,\ldots,x_n$
that satisfy the quadratic constraint
$$\sum_{(i,j)\in E}x_ix_j \ge \frac12\left(1-\frac1k\right)$$
together with the linear constraints $\sum_{i=1}^nx_i=1$
and $x_1,\ldots,x_n\ge0$.

Since the $k$-clique problem is NP-hard, this implies that feasibility testing for a linear program plus a single quadratic inequality constraint is NP-hard. If the graph $G$ contains a $k$-clique $C$, then for $i\in C$ we may set $x_i=1/k$ and for $i\notin C$ we may set $x_i=0$. Note that the resulting point $x$ satisfies all constraints in the feasibility problem with equality. This yields that feasibility testing for a linear program plus a single quadratic equality constraint is also NP-hard.
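To see the "if" direction concretely, here is the clique-point construction in a few lines of Python (the example graph is made up for illustration):

```python
from itertools import combinations

def clique_point_value(clique, edges):
    """Value of sum_{(i,j) in E} x_i x_j at the point x_i = 1/k on the clique."""
    k = len(clique)
    x = {i: 1.0 / k for i in clique}  # zero off the clique
    return sum(x.get(i, 0.0) * x.get(j, 0.0) for i, j in edges)

# a 4-clique on {0,1,2,3} plus a pendant edge (3,4)
edges = list(combinations(range(4), 2)) + [(3, 4)]
val = clique_point_value(range(4), edges)  # = 0.375 = (1/2)(1 - 1/4)
```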
Reference:
T.S. Motzkin and E.G. Straus (1965), "Maxima for graphs and a new proof of a theorem of Turán." Canadian Journal of Mathematics 17, pp 533–540. |
Without magnetic monopoles, the Maxwell equations are these (I'm dropping vector notations and all constants, for simplicity): \begin{align} \nabla \cdot E &= \rho_{elec}, \tag{1} \\[12pt] \nabla \times E &= -\, \frac{\partial B}{\partial t}, \tag{2} \\[12pt] \nabla \cdot B &= 0, \tag{3} \\[12pt] \nabla \times B &= J_{elec} + \frac{\partial E}{\partial t}. \tag{4} \end{align} These equations can be recast in an explicitly relativistic form (much simpler): \begin{gather} \partial_a F^{ab} = J_{elec}^b, \tag{5} \\[12pt] \partial_a {}^{\star}F^{ab} = 0, \tag{6} \end{gather} where ${}^{\star}F^{ab} = \frac{1}{2} \, \varepsilon^{abcd} F_{cd}$ is the Hodge dual of $F_{ab}$. Equations (2), (3) are equivalent to (6). They imply the existence of a gauge potential $A^a = \{ \, \phi, A^i\}$ such that \begin{align} B &= \nabla \times A, \tag{7} \\[12pt] E &= -\, \nabla \phi - \frac{\partial A}{\partial t}, \tag{8} \end{align} or \begin{equation} F_{ab} = \partial_a A_b - \partial_b A_a. \tag{9} \end{equation} Thus (2), (3) and (6) are automatically and trivially satisfied. This was the starting point of all gauge theories.
Now, introducing magnetic monopoles requires that equations (2), (3) and (6) be modified: \begin{align} \nabla \cdot E &= \rho_{elec}, \tag{10} \\[12pt] \nabla \times E &= J_{magn} -\, \frac{\partial B}{\partial t}, \tag{11} \\[12pt] \nabla \cdot B &= \rho_{magn}, \tag{12} \\[12pt] \nabla \times B &= J_{elec} + \frac{\partial E}{\partial t}, \tag{13} \end{align} or equivalently \begin{gather} \partial_a F^{ab} = J_{elec}^b, \tag{14} \\[12pt] \partial_a {}^{\star}F^{ab} = J_{magn}^b. \tag{15} \end{gather} The Maxwell equations are now more symmetric with magnetic monopoles.
But then, how do you introduce the classical gauge potentials, since $B =\nabla \times A$ cannot be valid anymore ($\nabla \cdot (\nabla \times A) \equiv 0$, unless you play "artificial" tricks with the space topology)? The relativistic expression (9) cannot be introduced anymore. And because of the new symmetry, why not introduce an expression like $E = \nabla \times A$ instead?
To me, introducing magnetic monopoles appears to destroy the gauge potential theory. I don't see how the gauge potentials could be justified if there are magnetic monopoles. Playing tricks with topology feels like a patch to the theory, and I'm having a hard time accepting that gauge potentials must be pushed to the topology side.
Support vector machines are the canonical example of the close ties between convex optimization and machine learning. Trained on a set of labeled data (i.e. this is a supervised learning algorithm), they are algorithmically simple, can scale well to large numbers of features or data samples, and have been shown to be effective on a variety of problems. Whether classifying data (spam or not-spam) or creating regression models (by dividing the space into a large set of categories), a support vector machine is one of the go-to tools for data classification.
In this tutorial, we’ll go through the steps for building a support vector machine from scratch, in order to see what’s “under the hood” of this classic algorithm. This follows the structure in Stephen Boyd’s excellent Convex Optimization book, section 8.6- with additional content drawn from Calafiore & El Ghaoui’s Optimization Models (or on Amazon here).
Linear Classifiers¶
The core idea of a support vector machine is to find a line -or in higher dimensions, a hyperplane- which separates two labeled classes in our training dataset. We will find a way to define this hyperplane to maximize the distance between the two datasets, and then define the hyperplane by the boundary points from the classes which touch it- the support vectors ("support points" might be an easier way to think of it).
These classifiers use linear hyperplanes to separate the two classes. We will later look at nonlinear classifiers, which are more flexible and more powerful.
Robust Linear Discrimination¶
To start off, let's assume we have two classes, $x$ (with $M$ elements) and $y$ (with $N$ members), which can be fully separated by a hyperplane (or in two dimensions, by a line). We'll later look at cases where we can't fully separate the two classes.
We seek to find the hyperplane that maximizes the separation between the two classes, i.e. a dividing plane that is precisely in the middle of the two classes with distance $t$ on either side.
This would maximize the separation $t$ between our classes such that $a^T x_i - b \geq t, \; \forall i = 1,\dots,M$ and $a^T y_i - b \leq -t, \; \forall i=1,\dots,N$. We additionally want to have $a$ just be a unit vector defining our plane, so that $b$ and $t$ fully encode the distance to the classes- i.e. we want to constrain $||a||_2 \leq 1$.
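To see why the constraint $||a||_2 \leq 1$ matters: for a unit vector $a$, the quantity $a^T p - b$ is the true signed distance from a point $p$ to the hyperplane $a^T x = b$. A quick numpy check (numbers chosen arbitrarily for illustration):

```python
import numpy as np

a = np.array([0.6, 0.8])    # unit normal: ||a||_2 = 1
b = 2.0                     # hyperplane a'x = b
p = np.array([3.0, 4.0])

signed_dist = a @ p - b     # = 3.0
foot = p - signed_dist * a  # project p back onto the plane
# a @ foot == b, confirming signed_dist really is the distance
```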
The full optimization problem is then:
$$
\begin{align} \underset{a,b,t}{\text{maximize}} \quad & t \\ \text{subject to}\quad & a^T x_i - b \geq t, \; \forall i = 1,\dots,M\\ & a^T y_i - b \leq -t, \; \forall i=1,\dots,N\\ & ||a||_2 \leq 1 \end{align} $$
That’s it- we have our classifier! We’ll use CvxPy, a package for symbolic convex optimization in Python, to turn this mathematical optimization problem into real programming examples.
import numpy as np
import matplotlib.pyplot as plt
import cvxpy
from cvxpy import *
%matplotlib inline
# Define the two sets
d = 2 # Dimension of problem. We'll leave at 2 for now.
m = 100 # Number of points in each class
n = 100
x_center = [1,1] # E.g. [1,1]
y_center = [3,1] # E.g. [2,2]
# Set a seed which will generate feasibly separable sets
# Note: these may only be separable with the default tutorial settings
np.random.seed(8)
# Define random orientations for the two clusters
orientation_x = np.random.rand(2,2)
orientation_y = np.random.rand(2,2)
# Generate unit-normal elements, but clip outliers.
rx = np.clip(np.random.randn(m,d),-2,2)
ry = np.clip(np.random.randn(n,d),-2,2)
x = x_center + np.dot(rx,orientation_x)
y = y_center + np.dot(ry,orientation_y)
# Check out our clusters!
plt.scatter(x[:,0],x[:,1],color='blue')
plt.scatter(y[:,0],y[:,1],color='red')
<matplotlib.collections.PathCollection at 0x10f772b90>
## OPTIMIZATION- in CvxPy!
a = Variable(d)
b = Variable()
t = Variable()
obj = Maximize(t)
x_constraints = [a.T * x[i] - b >= t for i in range(m)]
y_constraints = [a.T * y[i] - b <= -t for i in range(n)]
constraints = x_constraints + y_constraints + [norm(a,2) <= 1]
prob = Problem(obj, constraints)
prob.solve()
print("Problem Status: %s"%prob.status)
Problem Status: optimal
## Define a helper function for plotting the results, the decision plane, and the supporting planes
def plotClusters(x,y,a,b,t):
# Takes in a set of datapoints x and y for two clusters,
# the hyperplane separating them in the form a'x -b = 0,
# and a slab half-width t
d1_min = np.min([x[:,0],y[:,0]])
d1_max = np.max([x[:,0],y[:,0]])
# Line form: (-a[0] * x + b ) / a[1]
d2_atD1min = (-a[0]*d1_min + b ) / a[1]
d2_atD1max = (-a[0]*d1_max + b ) / a[1]
sup_up_atD1min = (-a[0]*d1_min + b + t ) / a[1]
sup_up_atD1max = (-a[0]*d1_max + b + t ) / a[1]
sup_dn_atD1min = (-a[0]*d1_min + b - t ) / a[1]
sup_dn_atD1max = (-a[0]*d1_max + b - t ) / a[1]
# Plot the clusters!
plt.scatter(x[:,0],x[:,1],color='blue')
plt.scatter(y[:,0],y[:,1],color='red')
plt.plot([d1_min,d1_max],[d2_atD1min[0,0],d2_atD1max[0,0]],color='black')
plt.plot([d1_min,d1_max],[sup_up_atD1min[0,0],sup_up_atD1max[0,0]],'--',color='gray')
plt.plot([d1_min,d1_max],[sup_dn_atD1min[0,0],sup_dn_atD1max[0,0]],'--',color='gray')
plt.ylim([np.floor(np.min([x[:,1],y[:,1]])),np.ceil(np.max([x[:,1],y[:,1]]))])
# Typecast and plot these initial results
if type(a) == cvxpy.expressions.variables.variable.Variable: # These haven't yet been typecast
a = a.value
b = b.value
t = t.value
plotClusters(x,y,a,b,t)
We can see that the upper cluster had a large number of datapoints which were clipped, and thus became support vectors!
Support Vector Classifiers¶
The discriminator works well for the case above, but doesn’t work when the clusters can’t be strictly separated. In real cases, there will be some crossover in the data between the classes. To handle this, we need to introduce variables which create slack in the constraints associated with points which fall on the ‘wrong’ side of our decision hyperplane.
Initially, we’ll still be trying to minimize the number of points which are inside of our ‘slab’ defined by $t$. This minimizes the number of support vectors, but makes the system more prone to variance as small movements in the support vectors can affect how our classifier is structured.
To address this, we'll allow for a trade-off between minimizing the number of misclassified points (low bias) and maximizing the width of our slab (low variance).
Initial formulation¶
We introduce vectors $u$ and $v$ which take the place of $t$, and create slack in the classification constraints. We want these to be sparse, and to have positive entries:
$$
\begin{align} \underset{a,b,u,v}{\text{minimize}} \quad & \mathbf{1}^T u + \mathbf{1}^T v \\ \text{subject to}\quad & a^T x_i - b \geq 1 - u_i, \; \forall i = 1,\dots,M\\ & a^T y_i - b \leq -1 + v_i, \; \forall i=1,\dots,N\\ & u \succeq 0, \; v \succeq 0 \end{align} $$
## Generate data- this time, it can overlap!
# Relies on the centerpoints defined above
np.random.seed(2) #2 works well
orientation_x = np.random.rand(2,2)
orientation_y = np.random.rand(2,2)
x = x_center + np.dot(np.random.randn(m,d),orientation_x)
y = y_center + np.dot(np.random.randn(n,d),orientation_y)
# Check out our clusters!
ax = plt.subplot(111)
plt.scatter(x[:,0],x[:,1],color='blue')
plt.scatter(y[:,0],y[:,1],color='red')
ax.set_ylim([np.floor(np.min([x[:,1],y[:,1]])),np.ceil(np.max([x[:,1],y[:,1]]))])
(-1.0, 3.0)
## OPTIMIZATION- in CvxPy!
a = Variable(d)
b = Variable()
u = Variable(m)
v = Variable(n)
obj = Minimize(np.ones(m)*u + np.ones(n)*v)
x_constraints = [a.T * x[i] - b >= 1 - u[i] for i in range(m)]
y_constraints = [a.T * y[i] - b <= -1 + v[i] for i in range(n)]
u_constraints = [u[i] >= 0 for i in range(m)]
v_constraints = [v[i] >= 0 for i in range(n)]
constraints = x_constraints + y_constraints + u_constraints + v_constraints
prob = Problem(obj, constraints)
prob.solve()
print("Problem Status: %s"%prob.status)
plotClusters(x,y,a.value,b.value,1)
Problem Status: optimal
We see that the decision plane makes intuitive sense- things seem to be working! Also, note that we're trading off the penalties for all the points which are on the 'wrong side' of the slab- this means that the decision plane rests a bit closer to the blue cluster: the red cluster is much denser, so if the plane were to move toward it, the penalty would rapidly increase as more constraints are violated.
Canonical Support Vector¶
This system is prone to variance, as small changes in the support vectors can shift the classifier. Increasing the slab width would create a more robust classifier (reducing variance) at the expense of more bias (initial misclassification). We can do this by trading off between the magnitude of $a$ and the penalty for misclassification:
$$
\begin{align} \underset{a,b,u,v}{\text{minimize}} \quad & ||a||_2 + \gamma \left( \mathbf{1}^T u + \mathbf{1}^T v \right) \\ \text{subject to}\quad & a^T x_i - b \geq 1 - u_i, \; \forall i = 1,\dots,M\\ & a^T y_i - b \leq -1 + v_i, \; \forall i=1,\dots,N\\ & u \succeq 0, \; v \succeq 0 \end{align} $$
## Optimization- just a few modifications to our previous problem!
gamma = Parameter()
gamma.value = 0.4
obj = Minimize(norm(a,2) + gamma*(np.ones(m)*u + np.ones(n)*v) )
constraints = x_constraints + y_constraints + u_constraints + v_constraints
prob = Problem(obj, constraints)
prob.solve()
print("Problem Status: %s"%prob.status)
## Plotting the results
plotClusters(x,y,a.value,b.value,1)
Problem Status: optimal
# We can change the value of Gamma around a bit
gamma.value = 0.05
prob.solve()
plotClusters(x,y,a.value,b.value,1)
The Scikit-learn method¶
Let’s be realistic- my code isn’t optimized, I’m writing this in Python rather than C++, and relying on Cvxpy for optimization probably isn’t the fastest or most scalable. If you want an off-the-shelf package for deployment, head over to the Scikit-learn module for support vector machines. This is what the pros will be doing for prototyping, anyway.
data = np.vstack([x,y])
labels = np.vstack([ np.zeros([m,1]), np.ones([n,1]) ]).ravel()
from sklearn import svm
clf = svm.SVC(kernel='linear', C=1-0.05)
clf.fit(data,labels)
SVC(C=0.95, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma='auto', kernel='linear', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
a1 = -np.matrix(clf.coef_).T
b1 = clf.intercept_
plotClusters(x,y,a1,b1,1)
Conclusions¶
We just saw a hint of the potential of support vector machines to handle more complicated classification problems through the use of alternative basis functions, known as the 'kernel trick' - we'll explore this in a future tutorial. Even with a linear selection space, support vector machines are a great tool that scales well, takes advantage of advances in convex optimization, and handles a wide variety of problems. Because we only need to preserve the points which define the dividing hyperplane, we can train on a very large dataset and then throw away all but the support vectors- this makes them extremely scalable.
However, they have limitations. They are binary classifiers, so managing more classes requires recursively separating classes. They require a good number of samples relative to the number of features- otherwise our hyperplanes will be underdefined.
If you’re interested in exploring more, I’d highly recommend checking out Stephen Boyd’s book, or seeing this Pythonprogramming.net tutorial which builds a SVM fully from scratch, including the creation of a gradient descent optimizer. |
Goals
I need to design a resonant parallel circuit and simulate it with LTSpice so that it matches the design requirements correctly.
Requirements

Source Impedance \$R_s = 100\ \Omega\$
Load Impedance \$R_l = 1\ k\Omega\$
Resonant Frequency \$f_o = 100\ MHz\$
Bandwidth \$BW_{3dB} = 150\ kHz\$

Schematic

Approach
Find the value of the Quality Factor \$Q\$
\$Q = \frac{f_o}{BW_{3dB}}\$
\$Q = \frac{100\ MHz}{150\ kHz}\$
\$\therefore Q = 666.\overline{6}\$
Find the value of the Parallel Resistance \$R_p\$
\$R_p = 100 // 1k\$
\$R_p = \frac{100\ \cdot\ 1k}{100+1k}\$
\$\therefore R_p = 90.\overline{90}\ \Omega\$
Find the value of the Parallel Reactance \$X_p\$
Consider the fact that \$Q = Q_p = Q_s\$
\$Q_p = \frac{R_p}{X_p}\$
\$X_p = \frac{R_p}{Q_p}\$
\$X_p = \frac{90.\overline{90}}{666.\overline{6}}\$
\$\therefore X_p = 0.1\overline{36}\ \Omega\$
Find the value of the Inductance \$L\$
\$X_p = 2 \pi f_o L\$
\$L = \frac{X_p}{2 \pi f_o}\$
\$L = \frac{0.1\overline{36}}{2 \pi\ \cdot\ 100 \times 10^6}\$
\$\therefore L = 217.0294679\ pH\$
Find the value of the Capacitance \$C\$
\$X_p = \frac{1}{2 \pi f_o C}\$
\$C = \frac{1}{2 \pi f_o X_p}\$
\$C = \frac{1}{2 \pi\ \cdot\ 100 \times 10^6\ \cdot\ 0.1\overline{36}}\$
\$\therefore C = 11.67136249\ nF\$
Frequency Response
Bandwidth
Insertion Loss
Insertion loss is the ratio of the output power or voltage with the load to that without it. At the resonant frequency the susceptances of the parallel LC tank cancel, so the tank looks like an open circuit and the circuit reduces to a simple voltage divider formed by \$R_s\$ and \$R_l\$.
\$IL = 20\ log_{10}{\left(\frac{V_{outWithLoad}}{V_{outWithoutLoad}} \right)}\$
\$IL = 20\ log_{10}{\left(\frac{\frac{R_l}{R_s+R_l}V_{in}}{V_{in}} \right)}\$
\$IL = 20\ log_{10}{\left(\frac{1k}{1.1k}\right)}\$
\$\therefore IL = -0.8278537032\ dB\$
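The whole calculation chain above can be checked numerically. This is a plain-Python transcription of the formulas already derived, not LTspice output:

```python
import math

# Recomputing the design values above (ideal components assumed).
f0, bw = 100e6, 150e3            # resonant frequency, 3 dB bandwidth
Rs, Rl = 100.0, 1000.0           # source and load impedance

Q = f0 / bw                      # loaded quality factor
Rp = Rs * Rl / (Rs + Rl)         # source and load in parallel
Xp = Rp / Q                      # reactance of L and C at resonance
L = Xp / (2 * math.pi * f0)      # inductance
C = 1 / (2 * math.pi * f0 * Xp)  # capacitance
IL = 20 * math.log10(Rl / (Rs + Rl))  # insertion loss of the divider

print(Q, Rp, Xp)  # ~666.67, ~90.909 ohm, ~0.13636 ohm
print(L, C, IL)   # ~217 pH, ~11.67 nF, ~-0.828 dB
```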
Questions
Why do I get \$-16.228\ dB\$ at the resonant frequency on the simulation graph, instead of the \$-0.827\ dB\$ I calculated from the insertion loss above?
Why do I get \$-16.35\ dB\$ on the simulation graph at both cutoff frequencies, \$99.925\ MHz\$ and \$100.075\ MHz\$, instead of \$-3.827\ dB\$?
If the load impedance causes insertion loss, then what loss does the source impedance cause, and how do I calculate it? Is that what is missing from my calculation?
Is there something wrong with my approach? I have also tried the double-precision setting in LTspice with .OPTIONS numdgt=7 and the results are still the same.
I am working on a project which requires that I calculate homotopy limits of homotopy theories (i.e. $(\infty,1)$-categories). It may be relevant that the homotopy limits which interest me are in the shape of towers; that is, the indexing category looks like $\cdots\rightarrow\cdot\rightarrow\cdot$. Because I am interested in some category-theoretic constructions (e.g. homotopy adjunctions between $(\infty,1)$-categories, Kan extensions, and homotopy (co)limits within $(\infty,1)$-categories), quasicategories seem like a natural choice for modeling $(\infty,1)$-categories, since they have the most well-developed category theory. However, I would be open to using a different model (e.g. Bergner's model structure on simplicial categories or Rezk's complete Segal spaces) if it proved more convenient.
Anyway, my question is simply how to calculate these homotopy limits. I am aware of Emily Riehl's proof that there is a model structure on the category of marked simplicial sets which is Quillen equivalent to the Joyal model structure on simplicial sets. This is nice, because the former model category is a simplicial model category, which the latter is not (although it is Cartesian closed). If it comes down to it, I should be able to do everything I want to do in this setting, bringing to bear the techniques developed in Riehl's book
Categorical Homotopy Theory for calculating homotopy limits in simplicial model categories. But I'd like to know if there's a more straightforward approach which does not leave the Joyal model structure on simplicial sets.
Sticking with Riehl's approach to homotopy limits via weighted limits, is there a particular weight I should be using for calculating homotopy limits in $\mathbf{qCat_\infty}$ as a simplicial category (but again, not a simplicial model category)? I don't know that $N(\mathcal{D}/-):\mathcal{D}\to\mathbf{sSet}_\mathrm{Joyal}$ is cofibrant in $\left(\mathbf{sSet}_\mathrm{Joyal}\right)_\mathrm{proj}^\mathcal{D}$. Does anyone know if it is, or if not, what the cofibrant replacement looks like? Would we take the free groupoid of $\mathcal{D}/-$ before taking the nerve (just a guess)?
One last question: the theory developed by Riehl and Verity in their series of papers makes the 2-categorical approach to the study of $(\infty,1)$-categories appealing (i.e. working in the homotopy 2-category of the $(\infty,2)$-category of $(\infty,1)$-categories). Does anyone know if homotopy limits in $\mathbf{qCat_\infty}$ agree with $\mathbf{Cat}$-enriched (conical) limits in $\mathbf{qCat_2}$? That would be a useful result, but I don't think I've seen anything to that effect in Riehl's book.
Last note: one of the limits which interests me is of a tower of isofibrations, but I don't know that the morphisms involved in the second limit are inner fibrations (although the diagram is certainly pointwise fibrant). |
I would like to use the following model in QuantLib:
$\frac{dF(t,T)}{F(t,T)} = \sigma_se^{-\beta(T-t)}dW_{t}^{1} + \sigma_L\left(1-e^{-\beta(T-t)}\right)dW_{t}^{2}$
This is a reformulation of the Schwartz Smith model (Schwartz-Smith). $F(t,T)$ is the commodity future price and the model is to be calibrated to American option prices (options on futures).
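As a sanity check of these dynamics, here is a hedged NumPy Monte-Carlo sketch; the parameter values (`sigma_s`, `sigma_l`, `beta`) are placeholders I made up, not calibrated ones. Since the SDE has no drift, the simulated forward price should average back to its starting value:

```python
import numpy as np

# Monte Carlo sketch of the two-factor forward dynamics above.
# Parameter values are illustrative placeholders, not calibrated ones.
sigma_s, sigma_l, beta = 0.30, 0.15, 1.5
F0, T, n_steps, n_paths = 100.0, 1.0, 250, 20000

rng = np.random.default_rng(42)
dt = T / n_steps
logF = np.full(n_paths, np.log(F0))
for i in range(n_steps):
    t = i * dt
    s1 = sigma_s * np.exp(-beta * (T - t))        # short-end volatility
    s2 = sigma_l * (1 - np.exp(-beta * (T - t)))  # long-end volatility
    var = s1 ** 2 + s2 ** 2                       # independent Brownians
    dw1, dw2 = rng.standard_normal((2, n_paths)) * np.sqrt(dt)
    logF += -0.5 * var * dt + s1 * dw1 + s2 * dw2

F_T = np.exp(logF)
print(F_T.mean())  # close to F0 = 100: F(t,T) is driftless (a martingale)
```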
I plan to proceed in the following way:
1. Derive a class from StochasticProcess for the process.
2. Implement a PricingEngine for the analytical formula of European options.
3. Implement a PricingEngine for American options. I will use the Barone-Adesi/Whaley approximation; I have adapted the algorithm to use it with this model (I cannot use the provided implementation in the library, though). My implementation will follow the lines of the one in the library: I just have to plug in three things, namely the analytical formula for the European option, the delta, and the term that multiplies the second derivative with respect to $F$ in the pricing PDE (the first two coming from the European pricing engine and the last one from the process).
4. Implement a CalibratedModel.
5. Implement a CalibrationHelper.
My problem is with point 1. Is it OK to use the StochasticProcess class, or should I implement a different class, since I am in fact modelling a family of processes, one for each $T$?
Thank you for any help and thoughts. |
I am interested in the topology on schemes where surjective morphisms of finite presentation are coverings. In particular, I am interested in the topology on Noetherian schemes where surjective morphisms of finite type are coverings. So, the morphism $U\sqcup Z\to X$, where $Z\subset X$ is a closed subscheme of a Noetherian scheme $X$ and $U = X\setminus Z$ is the open complement, would be a typical example of covering in this topology (in addition to faithfully flat morphisms of finite presentation, etc.)
Is there a name for this topology? How does it compare to other known topologies on schemes, such as fpqc?
The motivation is that I am studying a property of quasi-coherent sheaves which I call very flatness. I believe I can prove that very flatness of flat sheaves is local in this topology for Noetherian schemes, i.e., given a Noetherian ring $R$ and a finitely generated $R$-algebra $S$ such that the induced map of spectra $\operatorname{Spec}S\to\operatorname{Spec}R$ is surjective (as a map of sets), a flat $R$-module $F$ is very flat whenever the $S$-module $S\otimes_RF$ is very flat.
Moreover, there is a stronger version of very flatness called "finite very flatness", which I think I can prove is local for the topology on arbitrary schemes in which surjective morphisms of finite presentation are coverings (assuming that the sheaf is known to be flat). In other words, given a commutative ring $R$ and a finitely presented $R$-algebra $S$ such that the induced map of spectra $\operatorname{Spec}S\to\operatorname{Spec}R$ is surjective, a flat $R$-module $F$ is finitely very flat whenever the $S$-module $S\otimes_R F$ is finitely very flat.
So I want to understand what this topology on schemes is and what locality in it entails.
I have the following Lagrangian for the free electromagnetic field,
$$\mathcal{L} = -\frac{1}{4} F^{\mu \nu}F_{\mu \nu},$$
and the canonical stress tensor is,
$$T^{\alpha \beta}=\frac{\partial \mathcal{L}}{\partial \left(\partial_{\alpha} A^{\lambda}\right)}\partial^{\beta} A^{\lambda}-g^{\alpha \beta}\mathcal{L}.$$
From this I get $T^{0i} = (\vec{E}\times \vec{B})_{i}$,
but that is the symmetric stress tensor, and I need to find
$$T^{0i} = (\vec{E}\times \vec{B})_{i} + \nabla \cdot (A_{i}\vec{E})$$
I don't understand where this divergence term comes from. How can I obtain this asymmetric $T^{0i}$ tensor?
(Just for reference, see Jackson - Classical electrodynamics, Third ed.- page 606) |
If I have an optically transparent slab whose refractive index $n$ depends on the distance $x$ from the surface of the slab, the refractive index can be described by $$n(x)=f(x)$$ where $f(x)$ is a generic function of $x$, so we can write $$\dfrac{dn(x)}{dx}=f'(x)$$ Snell's law of refraction states $$n_1\sin(\theta_1)=n_2\sin(\theta_2)$$ How can I write the equation of the ray tracing through the slab? Thanks
You need to learn about the Eikonal equation and its implications. When the electromagnetic field vectors are locally plane waves, i.e. over length scales of several wavelengths and less they are well approximated by plane waves, then the phase of either $\vec{E}$ or $\vec{H}$ (or of $\vec{A}$ and $\phi$ in Lorenz gauge) can be approximated by one scalar field $\varphi(x,\,y,\,z)$ which fulfils the Eikonal equation:
$$\left|\nabla \varphi\right|^2 = \frac{\omega^2\,n(x,\,y,\,z)^2}{c^2}$$
where, of course, $n(x,\,y,\,z)$ describes your refractive index as a function of position. This equation can be shown to be equivalent to Fermat's principle of least time and also implies Snell's law across discontinuous interfaces. The ray paths are the flow lines (exponentiation) of the vector field defined by $\nabla\,\varphi$. Otherwise put: the rays always point along the direction of maximum rate of variation of $\varphi$, whilst the surfaces normal to the rays are the surfaces of constant $\varphi$,
i.e. the phase fronts. A little fiddling with the Eikonal equation shows that the parametric equation for a ray path, i.e. $\vec{r}(s)$ as a function of the arclength $s$ along the path is defined by:
$$\frac{\mathrm{d}}{\mathrm{d}\,s}\left(n(\vec{r}(s))\,\,\frac{\mathrm{d}}{\mathrm{d}\,s} \vec{r}\left(s\right)\right) = \left.\nabla n\left( \vec{r}(s)\right)\right|_{\vec{r}\left(s\right)}$$
This is where you can take things up. You have $n(x,y,z)$ depends only on $x$, so $\nabla n$ will always be in the $\vec{x}$ direction. Everything stays on one plane; let this be the $x-z$ plane and the position of the point on the path is $(x(s),\,z(s))$. We thus get two nonlinear DEs which can be quite hard to solve:
$$\frac{{\rm d}\,n(x)}{{\rm d} s}\,\frac{{\rm d}\,x}{{\rm d} s} + n(x)\, \frac{{\rm d}^2 x}{{\rm d} s^2} = n^\prime(x)$$
$$\frac{{\rm d}\,n(x)}{{\rm d} s}\,\frac{{\rm d}\,z}{{\rm d} s} + n(x)\, \frac{{\rm d}^2 z}{{\rm d} s^2} = 0$$
so you generally need to make some approximation depending on what kind of ray you are dealing with. In fibre optics, for example, you may want to assume that the rays make small angles with the $z$ direction so that $s\approx z$, whence you would get:
$$\frac{{\rm d}\,n(x(z))}{{\rm d} z}\,\frac{{\rm d}\,x(z)}{{\rm d} z} + n(x)\, \frac{{\rm d}^2 x}{{\rm d} z^2} = n^\prime(x)$$
and then you would need to make further approximations depending on the fibre profile.
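To illustrate, here is a NumPy sketch that integrates this paraxial equation for an assumed parabolic index profile (all parameter values are invented for illustration). It additionally drops the first term, which is quadratic in ${\rm d}x/{\rm d}z$ and therefore negligible for paraxial rays, leaving ${\rm d}^2x/{\rm d}z^2 = n'(x)/n(x)$:

```python
import numpy as np

# Assumed parabolic profile n(x) = n0 * (1 - 0.5 * delta * (x/a)^2),
# typical of a graded-index fibre core; parameters are illustrative.
n0, delta, a = 1.45, 0.01, 25e-6

def accel(x):
    """Right-hand side n'(x)/n(x) of the paraxial ray equation."""
    n = n0 * (1.0 - 0.5 * delta * (x / a) ** 2)
    dn = -n0 * delta * x / a ** 2
    return dn / n

x, v = 5e-6, 0.0   # ray launched off-axis, parallel to the axis
dz = 1e-6
xs = [x]
for _ in range(5000):  # classic RK4 step in (x, dx/dz)
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dz * k1v, accel(x + 0.5 * dz * k1x)
    k3x, k3v = v + 0.5 * dz * k2v, accel(x + 0.5 * dz * k2x)
    k4x, k4v = v + dz * k3v, accel(x + dz * k3x)
    x += dz * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dz * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    xs.append(x)

xs = np.array(xs)
# Small-amplitude rays oscillate about the axis with spatial period
# close to 2*pi*a/sqrt(delta) (~1.57 mm here), staying bounded.
print(xs.max(), xs.min())
```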
A good reference for all this is Born and Wolf, Principles of Optics, Chapter 4 or the first half of Snyder and Love, Optical Waveguide Theory.
A complete treatment of your question (more than you ever wanted) is given at http://homepage.tudelft.nl/q1d90/FBweb/diss.pdf, especially section 2.1.1 "Differential equation of light rays in inhomogeneous media". Trying to extract the most useful expression from that dissertation, I believe that the equation you are looking for is:
$$\nabla \Phi = \frac{2\pi}{\lambda}n(R)\,a$$
where
$\Phi$ = phase
$R$ = position vector
$a$ = unit vector pointing along the ray
With a bit of manipulation, that turns into
$$\frac{d}{ds}\left(n\frac{dR}{ds}\right) = \nabla n$$ (equation 2.1.8 in the above reference).
The factor $ds$ can be a bit tricky since it is pointing along the ray - if you want things in X,Y coordinates then you need to worry about the length of $ds$ when it is no longer at a small angle to the X axis - it becomes $\sqrt{dx^2+dy^2}$ |
I was reviewing time series textbooks recently and have been left confused since.
In particular I have looked into the book of Brockwell and Davis (Introduction to Time Series and Forecasting, Second Edition).
In section 2.3 it is said (just after eq 2.3.4) that there is a unique stationary solution to the ARMA(1,1) equation, if the coefficient on the AR part is not 1 in modulus. To quote the text, they look at the equation
$$ X_t − \phi X_{t−1} = Z_t + \theta Z_{t−1}$$ where $\{Z_t\}\sim\text{WN}(0,\sigma^2)$ and $\phi+\theta\neq0$.
And for $\left| \phi \right| > 1$ they get the representation
$$ X_t = -\theta\phi^{-1}Z_t - (\theta+\phi)\sum_{j=1}^\infty\phi^{-j-1}Z_{t+j}.$$
The proof seems OK to me. But I immediately had the question: How does this relate to explosive processes? I read that a coefficient larger than 1 (in modulus) means that a process is explosive.
In particular the accepted answer of Non-Stationary: Larger-than-unit root performed a simulation that shows the explosive behaviour.
I suspect that this conclusion only holds if we look at adapted processes, since the solution from the textbook looks into the future (it is non-causal). Straightforward simulations will be adapted, since they only incorporate one new random variable in each step.
Is my explanation correct, and if not, what am I missing? |
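The two behaviours can be seen side by side in a short NumPy experiment (parameter values chosen arbitrarily): the adapted forward recursion with $\phi = 2$ explodes, while the truncated non-causal representation from the textbook stays bounded.

```python
import numpy as np

# phi > 1: the forward (adapted) recursion explodes, yet the
# non-causal representation from Brockwell & Davis is stationary.
phi, theta, n = 2.0, 0.5, 60
rng = np.random.default_rng(1)
Z = rng.standard_normal(n + 200)  # extra draws for the "future" sum

# adapted simulation: X_t = phi*X_{t-1} + Z_t + theta*Z_{t-1}
X_fwd = np.zeros(n)
for t in range(1, n):
    X_fwd[t] = phi * X_fwd[t - 1] + Z[t] + theta * Z[t - 1]

# non-causal solution, truncating the fast-converging future sum
J = 100
X_nc = np.array([
    -theta / phi * Z[t]
    - (theta + phi) * sum(phi ** (-j - 1) * Z[t + j] for j in range(1, J))
    for t in range(n)
])

print(abs(X_fwd[-1]))  # astronomically large: explosive
print(X_nc.std())      # order one: stationary
```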
Codeforces Round #498 (Div. 3)
Mishka got an integer array $$$a$$$ of length $$$n$$$ as a birthday present (what a surprise!).
Mishka doesn't like this present and wants to change it somehow. He has invented an algorithm and called it "Mishka's Adjacent Replacements Algorithm". This algorithm can be represented as a sequence of steps: replace each occurrence of $$$1$$$ in the array with $$$2$$$; replace each occurrence of $$$2$$$ in the array with $$$1$$$; replace each occurrence of $$$3$$$ in the array with $$$4$$$; replace each occurrence of $$$4$$$ in the array with $$$3$$$; $$$\dots$$$; replace each occurrence of $$$10^9 - 1$$$ in the array with $$$10^9$$$; replace each occurrence of $$$10^9$$$ in the array with $$$10^9 - 1$$$.
Note that the dots in the middle of this algorithm mean that Mishka applies these replacements for each pair of adjacent integers ($$$2i - 1, 2i$$$) for each $$$i \in\{1, 2, \ldots, 5 \cdot 10^8\}$$$ as described above.
For example, for the array $$$a = [1, 2, 4, 5, 10]$$$, the following sequence of arrays represents the algorithm:
$$$[1, 2, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$1$$$ with $$$2$$$) $$$\rightarrow$$$ $$$[2, 2, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$2$$$ with $$$1$$$) $$$\rightarrow$$$ $$$[1, 1, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$3$$$ with $$$4$$$) $$$\rightarrow$$$ $$$[1, 1, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$4$$$ with $$$3$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$5$$$ with $$$6$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 6, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$6$$$ with $$$5$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ $$$\dots$$$ $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$10$$$ with $$$9$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 9]$$$. The later steps of the algorithm do not change the array.
Mishka is very lazy and he doesn't want to apply these changes by himself. But he is very interested in their result. Help him find it.
The first line of the input contains one integer number $$$n$$$ ($$$1 \le n \le 1000$$$) — the number of elements in Mishka's birthday present (surprisingly, an array).
The second line of the input contains $$$n$$$ integers $$$a_1, a_2, \dots, a_n$$$ ($$$1 \le a_i \le 10^9$$$) — the elements of the array.
Print $$$n$$$ integers — $$$b_1, b_2, \dots, b_n$$$, where $$$b_i$$$ is the final value of the $$$i$$$-th element of the array after applying "Mishka's Adjacent Replacements Algorithm" to the array $$$a$$$. Note that you cannot change the order of elements in the array.
Input
5
1 2 4 5 10
Output
1 1 3 5 9
Input
10
10000 10 50605065 1 5 89 5 999999999 60506056 1000000000
Output
9999 9 50605065 1 5 89 5 999999999 60506055 999999999
The first example is described in the problem statement.
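A short Python sketch of a solution: each pair of steps replaces 2i-1 by 2i and then 2i by 2i-1, so every even value ends up decremented by one and every odd value is unchanged.

```python
# The net effect of each pair of steps (2i-1 -> 2i, then 2i -> 2i-1)
# is that both values end up as the odd number 2i-1. So the answer is
# a_i itself if it is odd, and a_i - 1 if it is even.
def adjacent_replacements(a):
    return [x if x % 2 else x - 1 for x in a]

print(adjacent_replacements([1, 2, 4, 5, 10]))  # [1, 1, 3, 5, 9]
```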
Perhaps the right way for you to think about it is this: there is only one generator, and it is a big $N\times N$ matrix where $N$ is the number of all of the independent fermion degrees of freedom. To be exact, all of the generators of a symmetry are like this, but we are often able to ignore the full size of each matrix.
Any generator of a symmetry must specify an action of the symmetry on every degree of freedom. In general a symmetry can mix up all the different fermions. However, most of the symmetries of the Standard Model only mix up a few different groups of fermions. For example, if we write out all the fermions (just one generation) as $f = (u_L^r,u_L^g,u_L^b,d_L^r,d_L^g,d_L^b,\nu_L,e_L,u_R^r,u_R^g,u_R^b,d_R^r,d_R^g,d_R^b,e_R)^T$ then the generators of the color $SU(3)$ symmetries look like
$$T_{color}^a = \begin{pmatrix}\lambda^a_{3\times 3} &0 & \dots &\\0 & \lambda^a_{3\times 3} & 0 & \dots \\ 0 & \dots & 0_{1\times 1} &\dots \\ 0 &\dots & & 0_{1\times 1} & \dots \\ 0 & \dots & & &\lambda^a_{3\times 3} \\ 0 &\dots &&&& \lambda^a_{3\times 3}\\ 0 &\dots &&&&& 0_{1\times 1}\end{pmatrix} $$
where the $\lambda$'s are 3x3 Gell-Mann matrices that act on the subspaces $u_L^{\{r,g,b\}}$ etc. Likewise, the $SU_2(L)$ generators mix up the left-handed quarks and leptons in a specific way:
$$ T^i_{weak} = \begin{pmatrix}\Sigma^i_{6\times 6} & 0 &\dots \\ 0 &\sigma^i_{2\times 2} & \dots \\ 0 &\dots & 0_{7\times 7}\end{pmatrix} $$
where $\sigma^i$ are the Pauli matrices and $\Sigma^i$ are a simple extension of the Pauli matrices, e.g. $$\Sigma^2 = \begin{pmatrix}0_{3\times 3} & -iI_{3\times 3} \\ iI_{3\times 3} & 0_{3\times 3}\end{pmatrix}$$
Finally, the single $U(1)_Y$ generator is just a diagonal matrix:$$T_Y =\begin{pmatrix}\frac{1}{6}I_{6\times 6} & 0 &\dots \\ 0 & -\frac{1}{2}I_{2\times 2} & \dots \\ 0 &\dots & \frac{2}{3} I_{3\times 3} &\dots \\ 0 &\dots && -\frac{1}{3}I_{3\times 3} & \dots \\ 0 &\dots &&& -1\end{pmatrix}$$
If you look at these for a while you can see that the matrices break down into blocks, and the different blocks never mix with each other. The $(u_L^r,u_L^g,u_L^b,d_L^r,d_L^g,d_L^b)^T$ fermions forms a 6x6 block. The $(\nu_L,e_L)$ part forms a 2x2 block, and so on. These subspaces are called irreducible subrepresentations of the symmetry.
To finally answer your question, the full $U(1)_Y$ generator is not proportional to the identity, but you can see that its action on each irreducible subrepresentation is proportional to the identity. Since irreducible subrepresentations never mix, we often refer to them separately and act as if they have separate generator matrices. This is not really the case, but it is much more convenient than writing out the huge matrices every time, and almost never causes confusion once you understand it.
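This block structure is easy to verify numerically. The following NumPy sketch builds the $15\times 15$ hypercharge generator from the blocks above and checks that a state supported on the left-handed quark block is simply rescaled by $1/6$:

```python
import numpy as np

# Build the 15x15 hypercharge generator from the block structure in
# the text (fermion ordering as in f above; one generation).
blocks = [(1 / 6, 6), (-1 / 2, 2), (2 / 3, 3), (-1 / 3, 3), (-1.0, 1)]
T_Y = np.diag(np.concatenate([np.full(n, y) for y, n in blocks]))

# A state living purely in the left-handed quark block...
psi = np.zeros(15)
psi[:6] = np.arange(1, 7)

# ...is just rescaled by its hypercharge 1/6: blocks never mix.
print(np.allclose(T_Y @ psi, psi / 6))  # True
```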
The reason we can always split fields into irreducible subrepresentations like this is explained by representation theory, which is a beautiful mathematical subject that it is well worth learning. (And perhaps you are already trying to!) |
Property \((UW_\Pi )\) holds for a bounded linear operator \(T \in B(X),\) defined on a complex infinite dimensional Banach space X, if the poles of the resolvent of T are exactly the spectral points \(\lambda \) for which \(\lambda I-T\) is upper semi-Weyl. In this paper, we discuss the relationship between property \((UW_\Pi )\) and other Weyl type theorems. The stability of property \((UW_\Pi )\) is also studied under nilpotent, quasi-nilpotent, finite-dimensional or compact perturbations commuting with T.
Keywords
Property \((UW_\Pi )\), Weyl type theorems, SVEP, perturbation theory
Mathematics Subject Classification
Primary 47A10, 47A11; Secondary 47A53, 47A55
Notes
Acknowledgements
The authors are grateful to the referees for their valuable comments and suggestions.
Take the first-order Taylor series of the function $\log(x)$ around one: $$\log(x) \approx \log(1) + \frac{x-1}{1} = x-1$$
Therefore, for a random variable $r$ that is close to zero: $$\log(1+r) \approx r$$
Log returns are typically something like $\log(P_{new}/P_{old})$: $$\Delta \log(P_t) = \log(P_{new}/P_{old}) = \log\left(\frac{P_{old} + \Delta P}{P_{old}}\right) = \log(1 + \Delta P/P_{old}) \approx \Delta P/P_{old}$$ But $\Delta P/P_{old}$ is just the percent change in $P_{old}$. We've therefore shown that we can approximate the percent change in a variable $X$ with $\Delta \log(X)$.
Why bother? Mostly because log differences can be summed but percent changes cannot be:
Using logs, or summarizing changes in terms of continuous compounding, has a number of advantages over looking at simple percent changes. For example, if your portfolio goes up by 50% (say from \$100 to \$150) and then declines by 50% (say from \$150 to \$75), you're not back where you started. If you calculate your average percentage return (in this case, 0%), that's not a particularly useful summary of the fact that you actually ended up 25% below where you started. By contrast, if your portfolio goes up in logarithmic terms by 0.5, and then falls in logarithmic terms by 0.5, you are exactly back where you started. The average log return on your portfolio is exactly the same number as the change in log price between the time you bought it and the time you sold it, divided by the number of years that you held it.
James Hamilton: Use of logarithms in economics
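Hamilton's portfolio example can be reproduced in a few lines of Python:

```python
import math

# The 100 -> 150 -> 75 example from the quote, in both conventions.
prices = [100.0, 150.0, 75.0]

pct = [(b - a) / a for a, b in zip(prices, prices[1:])]       # +50%, -50%
logs = [math.log(b / a) for a, b in zip(prices, prices[1:])]  # +0.405..., -0.693...

avg_pct = sum(pct) / len(pct)        # 0.0 -- a misleading summary
total_log = sum(logs)                # log(75/100): log returns sum correctly
print(avg_pct, math.exp(total_log))  # 0.0 and 0.75, i.e. down 25% overall
```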
The expression in question is in footnote $11$ of the referenced article. Reading the paper, we see that the decision variable here is "the payout rate", which is the reciprocal of $P$. So equivalently, we can solve the maximization problem with respect to $P$ (and not w.r.t. $Q$). Moreover, "price elasticity of demand" involves the derivative of $Q$ with ...
For continuous-time stochastic dynamic programming, the small, nontechnical Art of Smooth Pasting by Dixit is a wonderful option. It does a very effective job of conveying the basic intuition. Stokey's more recent The Economics of Inaction is also decent, but for a practical-minded person it probably underperforms Dixit: its much greater length and ...
They are related and usually fall into the same discussion, but as @Alecos mentions in the comments, the two theorems show different things. I suppose the connection that you're after is the fact that if the derivative$$\left .\frac{\partial f(x, a)}{\partial a} \right |_{x=x(a)}$$exists, then because differentiability implies continuity, you might be ...
As @user32416 pointed out the first order stationarity conditions are not enough. Specifically it seems that you violate Slater's condition, which states that "the feasible region must have an interior point". There are no $x,y$ for which$$(x+y-2)^2 < 0.$$If you rephrase the problem to$$\max (xy)$$$$x+y-2 = 0$$$$x,y \geq 0$$Slater's condition is ...
There is not a single answer, it will depend on the particulars of each problem. Let's look at a standard example.Consider the benchmark intertemporal optimization problem for the Ramsey model$$\begin{align}&\max_u \int^{\infty}_0{e^{-\rho t}u(c)dt}\\\\& \text{s.t.}\;\; \dot{k} = i-\delta k\\& \text{s.t.}\;\; y = f(k)=c+i\end{align}$$...
You are missing a $\min$ operator just before the bracket. The utility maximization problem is as follows, $$\max \ \min [\alpha x_1, ..., \gamma x_3] \\ \ \ \text{such that} \ \ \lambda_1x_1 + ... + \lambda_3x_3 = M$$Consider the case of two goods with utility $u$ given by $u(x) = \min[\alpha x_1, \beta x_2]$. At the optimum, what do you know about the ...
This is how you get from your first equation to your second. Your utility function is $u(x_1, x_2)=x_1^a x_2^b$; since $a+b=1$ I'll change it slightly to $a$ and $(1-a)$. In order to optimise these two choices, you need to maximise utility with respect to your choice variables, subject to $p_1x_1 + p_2x_2 = w$, using Walras' Law. Basically, in order to optimise utility, all ...
Dynamic Programming & Optimal Control by BertsekasIntroduction to Modern Economic Growth by AcemogluThe Acemoglu book, even though it specializes in growth theory, does a very good job presenting continuous time dynamic programming.
All you need for this particular question is the following. Let $\mathbf{X}$ be a $T \times K$ matrix, $\mathbf{w}$ a K-dimensional vector and $\mathbf{y}$ a T-dimensional vector, then$$\begin{eqnarray*}\frac{\partial \mathbf{w}^{\prime}\mathbf{X}^{\prime}\mathbf{y}}{\partial \mathbf{w}}&=&\mathbf{X}^{\prime}\mathbf{y}\\\frac{\partial \mathbf{w}...
This is an ill-posed question. Even without going through KKT, your constraint $(x + y - 2)^2 \le 0$, since the left-hand side is a square, means that the only solution that is feasible is the one where the equality binds; i.e. $(x + y - 2)^2 = 0$, or that $|x + y - 2| = 0$ -- of which what you say that $x = y = 1$ is a solution.
The first order condition of the maximization problem is\begin{equation}f'(x)-s=0\iff f'(x)=s\end{equation}We can then replace $x$ by $x(s)$ because this is the optimal value given $s$. Since this is true for every $s$, we can differentiate with respect to $s$ which yields\begin{equation}f''(x(s))x'(s)=1\end{equation}Which can be rewrite as\...
A first price standard and reverse auction are formally equivalent to each other, and the same method can be used to solve both:First Price AuctionIn a first price auction, $n$ bidders choose their bid, $b_i$, as a function of their value $v_i$ (distributed according to $F$. They seek to maximise their expected payoff:$$[v_i-b_i(v_i)]\Pr(b_i\geq\max_j ...
Quoting the OP from a comment: "What sort of conditions on utility function and constraint enables us to apply envelope theorem only after we established the continuity of value function by Berge's theorem? people.hss.caltech.edu/~pbs/expfinance/Readings/Lucas1978.pdf" In the referenced Lucas (1978) paper, Proposition 1 establishes that where $v(z,y;...
These are proper Marshallian demand functions, even though Income does not appear in them. This is due to specific form of the utility function (and the candidate solution of all goods being purchased at strictly positive quantities). It emerges that there is no income effect for goods $x_1$ and $x_2$ - optimal uncompensated demand does not depend after all ...
You are unfortunately mistaken. DSGE models are at the heart of monetary policy and the most widely used class of models in this field. To work in monetary economics there is no real way around learning DSGE. A very good book to get started with is Walsh (2010), "Monetary Theory and Policy". I can also recommend Gali's book ("Monetary Policy, Inflation, and the Business ...
As mentioned in the other answer, the Lagrange multiplier is the marginal effect on the value (optimized) function when the constraint is "relaxed" marginally. In your case then it should be interpreted as "how much studying time changes as the required average expected grade..." increases? Well, this is a good example to show how we set up the ...
Non-negativity constraints have nothing special and are not critical for the general validity of the Karush-Kuhn-Tucker approach. First, realize that we could have $x \geq a >0$ and then we could write $x-a \geq 0$ and view this "non-negativity" constraint as just one more inequality constraint on the solution.When a decision variable is weakly bounded, ...
Questions with numbers are usually not as good as questions without numbers. If you had written down the formula for CV and EV you would probably have noticed that your premise is false.CV and EV are not supposed to have opposing signs. You can see this from their definitions where$$CV = e(p_1,u_1) - e(p_1,u_0), \hskip 20pt EV = e(p_0,u_1) - e(p_0,u_0).$...
You are right. The problem here is a corner solution. Let us define the optimal combination $X^* = \beta_1e_1^* + \beta_2e_2^*$. The first derivative gives you$$X^* = \dfrac{l_1}{2\beta_1} $$whereas the second gives you$$X^* = \dfrac{l_2}{2\beta_2}$$They can only be true "by chance". That is, the existence of an interior solution cannot be ...
Here are two methods. First method: the substitution can be made by inverting $f$. Since $f$ is strictly increasing and continuous, $f^{-1}$ is well-defined. The constraint can therefore be written\begin{equation*}x = f^{-1}\Big(\dfrac{c-(1-P)f(y)}{P}\Big)\end{equation*}The objective becomes\begin{equation*}\max_{y \geq 0}{P \Big[a-f^{-1}\Big(\dfrac{c-...
Let $g$ be the inverse function of $f$ defined over range of $f$. Notice that $g$ is increasing and strictly convex. We can rewrite the maximization problem as:\begin{eqnarray*} \max\limits_{u\geq 0, \ v \geq 0} & P(a - g(u)) + (1-P)(b-g(v)) \\ \text{s.t.} & Pu + (1-P)v = c\end{eqnarray*}where $u=f(x)$ and $v = f(y)$.Solving above is equivalent ...
What the book calls the "Weierstrass Theorem" is more commonly known as the Extreme Value Theorem, which states that a continuous function defined on a compact domain attains a maximum and a minimum.A common proof of this theorem involves the use of the Bolzano–Weierstrass theorem, which you learned in your math course, and which says every bounded ...
Here are four I could think of: Leontief and lexicographic functions, used in preferences or production functions, are non-differentiable. Labor models often employ a discrete labor supply (work or don't work, sometimes alongside the decision of how much or how hard to work). Housing models often employ a non-convex adjustment cost to ensure that there is a ...
A really nice methodology for approximating the HJB is the upwind scheme, which I learnt quite quickly using Ben Moll et al.'s notes and codes. The examples are continuous-time versions of familiar heterogeneous-agent economies such as Huggett and Aiyagari.
Say you have a portfolio with returns described by a random variable $X$. Call the lowest possible realization of $X$: $x_{min}$. If you take a levered position in that portfolio with leverage $A$ and financing cost $r$, your returns are $AX - (A-1)r$. There will exist a value of $A$ no larger than $1/x_{min}$ where when you get the worst return of $X$ and the levered portfolio ...
If $F(K,L)$ is a homogeneous function of degree one then so is$$\Pi(K,L) = F(K,L) - R \cdot K - w \cdot L.$$This follows straight from the definition of homogeneity. (A definition of homogeneous function can be found here.) This means that if a maximal profit exists it is zero. Otherwise you could increase all inputs by say 100%, thereby increasing both ...
In general, you are right to be mystified: specifying a point (consumption bundle) isn't enough to compute MRS and indifference curves.However, in this problem, I would suggest you take the first sentence seriously as a description of her preferences. She likes fries. (She doesn't care about what box they come in!)Let $v(f)$ represent her utility as ... |
I have asked a question previously which concerns the same electrophilic system with the exception that nucleophilic attack happens on the carbonyl carbon and not on the α-carbon.
From the organic chemistry textbook by Clayden, Warren, Wothers and Greeves on pp. 890-891:
The $\pi^{*}(\ce{C=O})$ and $\sigma^{*}(\ce{C-X})$ orbitals add together to form a new, lower-energy molecular orbital, more susceptible to nucleophilic attack. But, if $\ce{X}$ is not a leaving group, attack on this orbital will result not in nucleophilic substitution but in addition to the carbonyl group. Again, this effect will operate only when the $\ce{C-X}$ and $\ce{C=O}$ bonds are perpendicular so that the orbitals align correctly.
This interaction, when considered in the context of an attack on the carbonyl, is the principle of the polar Felkin-Anh model used to describe a certain diastereoselectivity. However, it can likewise be used to explain the lowering of the $\sigma^*(\ce{C-Br})$ orbital which is attacked in a nucleophilic substitution reaction. It may seem as though these statements are contradictory, but the orbital interaction will create a three-centre π-type double-antibonding orbital with only the contribution on oxygen being negligible, due to oxygen’s electronegativity being highest and thus its contribution to high-energy orbitals being lowest.
It will depend on your nucleophile what actually happens. It can attack both the carbonyl carbon and the α-carbon. If the attack on the carbonyl carbon is reversible, only the attack on the α-carbon, leading to an observed $\mathrm{S_N2}$ process, will be observed after the reaction, even if the carbonyl attack is faster. Only if the attack on the carbonyl is not reversible, as is the case e.g. with Grignard reagents, will a mixture of two products be observed.
Naturally, if $\ce{X}$ is a very poor leaving group, e.g. $\ce{OTBS}$, the attack on the carbonyl will dominate either way.
To sum up:
The reaction rate is increased because the $\sigma^*(\ce{C-X})$ orbital and the $\pi^*(\ce{C=O})$ orbital combine linearly to give a lower-energy orbital. Since this is the LUMO, the LUMO energy is lowered and nucleophilic attack is facilitated.
Most of the time, an $\mathrm{S_N2}$ reaction will involve a negatively charged nucleophile attacking an uncharged substrate (as is depicted in the second picture of the question). This negative charge can be stabilized effectively by the carbonyl group through conjugation. Note that in the example given in the first picture, the stabilization probably also involves the phenyl ring. Of course, the molecular structure must allow for alignment of the orbitals involved.
This mechanism is valid for most examples given in the table of the first picture, bar methoxymethyl chloride (MOM chloride, an old-fashioned protecting agent for alcohols). In this case, I would think that the oxygen lowers the $\sigma^\ast$ orbital energy such that it may serve better as a point of attack.
Every time I think I get what Rice's theorem means, I find a counterexample to confuse myself. Maybe someone can tell me where my thinking goes wrong.
Let's take some non-trivial property of the set of computable functions, for example let $L = \{ f : \mathbb{N} \to \mathbb{N} \;|\; \text{f is a computable and total function} \}$. Obviously, $L$ is countably infinite and there's also a countably infinite number of computable functions not in $L$.
Now let's consider a Turing-complete programming language over a finite set of instructions $\Sigma$ and the set of syntactically correct programs $P \subseteq \Sigma^*$, with $|P| = |\mathbb{N}|$. If I can choose the semantics of my language as I please, I may also number the programs as I please, and so it should be possible to design a programming language where some subset of the programs computes exactly some arbitrary subset of the computable functions, as long as the cardinalities match. For example, let $P_{pal} = \{ w \in \Sigma^* \;|\; w\text{ is a palindrome} \}$, and let each program $p \in P_{pal}$ compute a total function. Since $|P_{pal}| = |L|$, such a language should exist.
However, $isPalindrome(w)$ is obviously Turing-computable and since $isPalindrome(w) \iff isTotal(w)$, we would thus have a program which decides the non-trivial property $L$, which is not possible according to Rice's theorem.
Where is the error in my deduction? Does this mean there is no programming language where each palindromic program (or rather, each program of any computable structure) computes exactly the total functions (or rather, any set of computable functions)? This really perplexes me.
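As a side note, the palindrome test itself is of course decidable; a minimal sketch (the function name is mine):

```python
def is_palindrome(w: str) -> bool:
    """Decides a purely *syntactic* property of the program text.
    Rice's theorem only rules out deciding *semantic* properties
    of the function the program computes."""
    return w == w[::-1]
```

The check runs on program text only; whether such a syntactic test can be made to coincide with a semantic property like totality is exactly what the question above probes.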
T. Nagell in [Norsk Mat. Forenings Skrifter. 1:4 (1921)] shows that if, for an odd prime $q$, the equation
$$x^2-y^q=1 \qquad (*)$$
has a solution in integers $x>1$, $y>1$, then $y$ is even and $q\mid x$. In his proof of the latter divisibility he uses a similar trick, as follows.
Assuming $q\nmid x$, write ($*$) as
$$x^2=(y+1)\cdot\frac{y^q+1}{y+1}$$
where the factors on the right are coprime (they could only have the common factor $q$). Therefore,
$$y+1=u^2, \quad \frac{y^q+1}{y+1}=v^2, \quad x=uv,\qquad (u,v)=1, \quad \text{$u,v$ are odd}.$$
Using these findings we can state the original equation in the form $x^2-(u^2-1)^q=1$, or
$$X^2-dZ^2=1 \quad\text{where $d=u^2-1$}.$$
The latter equation has the integral solution
$$X=uv, \quad Z=(u^2-1)^{(q-1)/2},$$
while its general solution (a classical result for this particular Pell equation) takes the form $(u+\sqrt{u^2-1})^n$. It remains to use the binomial theorem in
$$X+Z\sqrt{u^2-1}=(u+\sqrt{u^2-1})^n$$
(for certain $n\ge1$) and simple estimates to conclude that this is not possible.
To stress the use of the similar trick: instead of showing insolvability of $x^2-(u^2-1)^q=1$, we assume that a solution exists and then use $d=u^2-1$ to produce a solution $X,Z$ of $X^2-dZ^2=1$; finally, the pair $X,Z$ cannot solve the resulting Pell equation. (Of course, it is hard to claim that this is exactly Trost's trick, as here there is a dummy variable but no discriminants, except the one for the Pell equation. Trost's trick is less tricky to my taste. $\ddot\smile$)
Note that Nagell's result was crucial for showing that ($*$) does not have integral solutions $x>1$, $y>1$ for a fixed prime $q>3$. This was shown in a very elegant way, using the Euclidean algorithm and quadratic residues, by Ko Chao [Sci. Sinica 14 (1965) 457--460], and later reproduced in Mordell's Diophantine Equations. The ideas of this proof are at the heart of Mihailescu's ultimate solution of Catalan's conjecture. A much simpler proof of Ko Chao's result, based on a completely different (nice!) trick, was given later by E. Z. Chein [Proc. Amer. Math. Soc. 56 (1976) 83--84].
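As a sanity check on Nagell's conditions, here is a small brute-force search (my own illustration; the bounds are arbitrary and purely for demonstration):

```python
from math import isqrt

def solutions(q, y_max=200):
    """Integer solutions x > 1, y > 1 of x^2 - y^q = 1 with y <= y_max."""
    sols = []
    for y in range(2, y_max + 1):
        t = y**q + 1
        x = isqrt(t)          # exact integer square root
        if x > 1 and x * x == t:
            sols.append((x, y))
    return sols
```

For q = 3 the search finds only (x, y) = (3, 2), the Catalan solution, and indeed y is even and q | x as Nagell's theorem requires; for q = 5 and q = 7 it finds nothing in this range, consistent with Ko Chao's result.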
The Dirac Equation is given by $$\left(i\gamma^\mu\partial_\mu- \frac{mc}{\hbar}\right)\Psi_D = 0,$$
where $\gamma^\mu$ are the Dirac $\gamma$-matrices and $\Psi_D$ is a Dirac spinor. I would like to find the transformation $U$ such that the two-component Weyl spinors $\Psi, \hat{\Psi}$ solve the equation $$i\left( \begin{array}{cc}0 & \partial_0+\vec\sigma\cdot\vec\nabla \\ \partial_0 - \vec\sigma\cdot\vec\nabla & 0\end{array}\right) \left(\begin{array}{c}\Psi\\\hat{\Psi}\end{array} \right) - \frac{mc}{\hbar}\left(\begin{array}{c}\Psi\\\hat{\Psi}\end{array} \right) = 0$$
if $\Psi_D = U \left(\begin{array}{c}\Psi\\\hat{\Psi}\end{array} \right)$ solves the Dirac equation. Could anybody show me how to derive the transformation matrix $U$? I read everywhere that
$$ U = \frac{1}{\sqrt{2}}(1-\gamma^5\gamma^0),$$
but obviously, I don't know how to arrive at this. |
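Whatever the derivation, the claimed $U$ can at least be checked numerically. The sketch below assumes the Dirac representation ($\gamma^0$ block-diagonal, $\gamma^5$ block-off-diagonal), which the question does not fix, and verifies that $U$ is unitary and that $U^\dagger \gamma^\mu U$ has the block structure appearing in the two-component equation:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks [[a, b], [c, d]]."""
    return np.block([[a, b], [c, d]])

# Gamma matrices in the Dirac representation (an assumed convention)
g0 = block(I2, 0 * I2, 0 * I2, -I2)
gi = [block(0 * I2, s, -s, 0 * I2) for s in (sx, sy, sz)]
g5 = block(0 * I2, I2, I2, 0 * I2)

U = (np.eye(4) - g5 @ g0) / np.sqrt(2)

# U is unitary
assert np.allclose(U.conj().T @ U, np.eye(4))
# gamma^0 becomes purely off-diagonal, as in the two-component system...
assert np.allclose(U.conj().T @ g0 @ U, block(0 * I2, I2, I2, 0 * I2))
# ...while the spatial gammas keep their off-diagonal Pauli form
for g, s in zip(gi, (sx, sy, sz)):
    assert np.allclose(U.conj().T @ g @ U, block(0 * I2, s, -s, 0 * I2))
```

With these transformed matrices, $i\gamma^\mu\partial_\mu$ has exactly the $(\partial_0 \pm \vec\sigma\cdot\vec\nabla)$ off-diagonal blocks shown above.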
I was reading about differentiable manifolds on Wikipedia, and the definition never specifies that the differentiable manifold has a metric on it. I understand that you can set up limits of functions in topological spaces without a metric being defined, but my understanding of derivatives...
Hi, I am currently trying to learn about smooth manifolds (Whitney's embedding theorem and Stokes' theorem are core in the course I am taking). However, progress for me is slow. I remember that integration theory and probability became a lot easier for me after I learned some measure theory. This...
I am learning the basics of differential geometry and I came across tangent vectors. Let's say we have a manifold M and we consider a point p in M. A tangent vector ##X## at p is an element of ##T_pM## and if ##\frac{\partial}{\partial x^\mu}## is a basis of ##T_pM##, then we can write $$X =...
Hello, In the sources I have looked into (textbooks and articles on differential geometry), I have not found any abstract definition of the electromagnetic fields. It seems that at most the electric field is defined as $$\bf{E}(t,\bf{x}) = \frac{1}{4\pi \epsilon_0} \int \rho(t,\bf{x}')...
Consider ##X## and ##Y## two vector fields on ##M##. Fix ##x## a point in ##M##, and consider the integral curve of ##X## passing through ##x##. This integral curve is given by the local flow of ##X##, denoted ##\phi_t(p)##. Now consider $$t \mapsto a_t(\phi_t(...
Hi, I'm aware of a typical example of injective immersion that is not a topological embedding: the figure 8, ##\beta: (-\pi, \pi) \to \mathbb R^2##, with ##\beta(t)=(\sin 2t,\sin t)##. As explained here [an-injective-immersion-that-is-not-a-topological-embedding], the image of ##\beta## is compact in...
I've been studying the Witten-Reshetikhin-Turaev (WRT) invariant of 3-manifolds but have almost zero background in physics. The WRT of a 3-manifold is closely related to the Chern-Simons (CS) invariant via the volume conjecture. My question is, what does the CS invariant of a 3-manifold...
Hi, a basic question related to the definition of a differentiable manifold. Leveraging on the atlas's charts ##\left\{(U_i,\varphi_i)\right\}## we actually define on ##M## the notion of a differentiable function. Now take a specific chart ##\left(U,\varphi \right)## and consider a function ##f## defined...
Some models of gravity, inspired by the main theme of the spacetime fabric of classical GR, treat the metric of the manifold and the connection as independent entities. I want to study this theory further but I am unable to find any paper on this, on arXiv at least. I will be very thankful if...
<Moderator's note: Moved from a homework forum.> 1. Homework Statement: From this paper. Let ##L## be the Jacobian operator of a two-sided compact surface embedded in a three-manifold ##(M,g)##, ##\Sigma \subset M##, and defined by $$L(t)=\Delta_{\Sigma(t)}+ \text{Ric}(ν_{t}, ν_{t}...
Suppose ##M## is a connected analytic manifold with metric ##g(x), x \in M## which is everywhere analytic. Define ##\gamma(g(x))## as the germ of the metric at the point ##x##.Question: Is it possible to come up with a nontrivial ##\gamma##, ##M_1##, and ##M_2##, where the germ ##\gamma##...
I have a surface defined by the quadratic relation: $$0=\phi^2t^4-x^2-y^2-z^2$$ where ##\phi## is a constant with units of ##km## ##s^{-2}##, ##t## has units of ##s## (time) and x, y and z have units of ##km## (space). The surface looks like this: (figure omitted). Since the formula depends on the absolute value of...
Hello. I was trying to prove that the tangent bundle TM is a smooth manifold with a differentiable structure and I wanted to do it in a different way than the one used by my professor. I used that TM = M x TpM. So, the question is: can the tangent bundle TM be considered as the product manifold...
1. Homework Statement: Let ##(M, \omega_M)## be a symplectic manifold, ##C \subset M## a submanifold, ##f: C \to \mathbb{R}## a smooth function. Show that ##L = \{ p \in T^* M : \pi_M(p) \in C,\ \forall v \in TC\ \langle p, v\rangle = \langle df, v\rangle \}## is a Lagrangian submanifold. In other words, you have...
Suppose we have an n-dimensional manifold M^n and take a coordinate neighborhood U with associated coordinate map φ: U → V where V is an open subset of ℝ^n. So far I'm clear on this. However, where I become confused is when some books say that φ^{-1} is called a parameterization of U and basically...
I firstly learned about duality in context of differentiable manifolds. Here, we have tangent vectors populating the tangent space and differential forms in its co-tangent counterpart. Acting upon each other a vector and a form produce a scalar (contraction operation).Later, I run into the...
Hi all, this might be a silly question, but I was curious. In Carroll's book, the author says that, in a manifold M, for any vector k in the tangent space T_p at a point p \in M, we can find a path x^{\mu}(\lambda) that passes through p which corresponds to the geodesic for that...
Hi, A chamber (manifold-type cylinder) has 1 inlet and 8 outlets of 3 inch diameter each. 8 suction blowers are connected to the chamber's outlets. Each blower's suction flow rate is 1000 cfm. What will be the flow rate through the inlet of the chamber (diameter = 3 inches)? Will the inlet...
Let f: p \mapsto f(p) be a diffeomorphism on an m-dimensional manifold (M,g). In general this map doesn't preserve the length of a vector unless f is an isometry: g_p(V,V) \ne g_{f(p)}(f_\ast V, f_\ast V). Here, f_\ast: T_pM \to T_{f(p)}M is the induced map. In spite of this fact, why...
I have recently had a lengthy discussion on this forum about coordinate charts which has started to clear up some issues in my understanding of manifolds. I have since been reading a few sets of notes (in particular referring to John Lee's "Introduction to Smooth Manifolds") and several of them...
So I know that this involves using the chain rule, but is the following attempt at a proof correct? Let M be an n-dimensional manifold and let (U,\phi) and (V,\psi) be two overlapping coordinate charts (i.e. U\cap V\neq\emptyset), with U,V\subset M, covering a neighbourhood of p\in M, such that...
I've been struggling since starting to study differential geometry to justify the definition of a one-form as a differential of a function and how this is equal to a tangent vector acting on this function, i.e. given f:M\rightarrow\mathbb{R} we can define the differential map...
In all the notes that I've found on differential geometry, when they introduce integration on manifolds it is always done with top forms, with little or no explanation as to why (or any intuition). From what I've managed to glean from it, one has to use top forms to unambiguously define...
What is the motivation for defining vectors in terms of equivalence classes of curves? Is it just that the definition is coordinate independent and that the differential operators arising from such a definition satisfy the axioms of a vector space and thus are suitable candidates for forming...
I am relatively new to the concept of differential geometry and my approach is from a physics background (hoping to understand general relativity at a deeper level). I have read up on the notion of diffeomorphisms and I'm a little unsure on some of the concepts.Suppose that one has a...
I've been reading up on the definition of a tangent bundle, partially with an aim of gaining a deeper understanding of the formulation of Lagrangian mechanics, and there are a few things that I'm a little unclear about.From what I've read the tangent bundle is defined as the disjoint union of...
I am currently working through Nakahara's book, "Geometry, Topology and Physics", and have reached the stage at looking at calculus on manifolds. In the book he states that "The differentiability of a function f:M\rightarrow N is independent of the coordinate chart that we use". He shows this is... |
Consider a charged black hole in four-dimensional Minkowski spacetime, with charge $Q$, mass $M>Q$:
$ds^2=-f(r)dt^2+\frac{1}{f(r)}dr^2+r^2d\Omega_2^2$, with
$f(r)=1-\frac{2M}{r}+\frac{Q^2}{r^2}$.
When an observer at radial coordinate $r_1$ emits a photon, an observer at radial coordinate $r_2>r_1$ will perceive the photon with a redshifted wavelength. This is easy to interpret.
A similar thing happens for the temperature. If $\kappa$ is the surface gravity of the black hole, then the Hawking temperature is $T_H=\frac{\kappa}{2\pi}$. Due to gravitational redshift, the temperature measured by an observer at radial coordinate $r$ is
$T_{loc}(r)=\frac{1}{\sqrt{f(r)}}T_H$.
The redshift factor is the same as for a redshifted frequency, which I can understand by associating temperature with the inverse imaginary time period.
I interpret this redshift as a consequence of the particles which constitute Hawking radiation experiencing gravitational redshift.
Is this correct?
Apparently, something similar also happens for the electrostatic potential. The electrostatic potential difference between the outer event horizon $r_+$ and infinity is given by $\Phi=\frac{Q}{r_+}$. However, the electrostatic potential between $r_+$ and some coordinate $r>r_+$, "blueshifted" from infinity to $r$, is given by
$\phi(r)=\left(\frac{Q}{r_+}-\frac{Q}{r}\right)\frac{1}{\sqrt{f(r)}}$.
(Source: Braden, Brown, Whiting and York, Charged black hole in a grand canonical ensemble, PRL Vol. 42 No. 10, 1990, equation 4.15.)
This expression seems to tell me that $\frac{Q}{r_+}-\frac{Q}{r}$ is the electrostatic potential difference between $r$ and $r_+$ as measured by someone at infinity, and the above expression is this same potential difference as measured by someone at $r$.
Is there a simple interpretation why the measured electrostatic potential should experience gravitational redshift as well, with the same redshift factor as a frequency? What is the wavelength that is being redshifted in this case, is it the one from the photons mediating the electromagnetic force? (These photons are virtual though, so can they actually be redshifted?) |
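For concreteness, here is a small numeric check of the limits implied above, in geometric units $G=c=\hbar=k_B=1$; the values of $M$ and $Q$ are my own illustrative choices:

```python
import numpy as np

M, Q = 1.0, 0.5
rp = M + np.sqrt(M**2 - Q**2)      # outer horizon r_+
rm = M - np.sqrt(M**2 - Q**2)      # inner horizon r_-

def f(r):
    """Reissner-Nordstrom lapse function."""
    return 1 - 2 * M / r + Q**2 / r**2

kappa = (rp - rm) / (2 * rp**2)    # surface gravity
T_H = kappa / (2 * np.pi)          # Hawking temperature

def T_loc(r):
    """Tolman-redshifted local temperature."""
    return T_H / np.sqrt(f(r))

def phi(r):
    """Locally measured electrostatic potential difference between
    r_+ and r (eq. 4.15 of Braden et al., as quoted above)."""
    return (Q / rp - Q / r) / np.sqrt(f(r))
```

As r → ∞ both redshift factors go to one, so T_loc → T_H and φ → Q/r_+ = Φ, while both diverge as r → r_+ where f(r_+) = 0.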