Question: Can we have a model of $ZF-\text{Regularity}$ in which there exists an ordinal $\kappa$ such that $H_{\kappa}$ exists and $H_{\kappa}$ is not equinumerous to any well-founded set?

The motivation for this question comes from defining cardinality in settings beyond Regularity and Choice, especially in connection with the following anti-foundation axiom:

$\text{Anti-Foundation axiom:}$ Every set is subnumerous to some iterative power of some $H_\kappa$ set, where $\kappa$ is an ordinal,

where $H_\kappa = \{x \mid x \text{ is hereditarily subnumerous to } \kappa\}$, and "$x$ is hereditarily subnumerous to $\kappa$" is defined as: $$\forall y \in TC(\{x\})\, \exists f\, (f: y \rightarrowtail \kappa)$$ where $TC$ stands for the "transitive closure function", defined in the usual manner. Iterative powers $P^i(x)$ are defined recursively as: $P^0(x) = x$, $P^j(x) = \bigcup (\{P(P^i(x)) \mid i < j\})$.

Using this as an anti-foundation axiom would enable us to define a notion of cardinality that covers more sets than Scott's definition of cardinality does.

$\text{Define:}$ Card(x) is the set of all subsets, equinumerous to $x$, of the first iterative power of the nearest $H_{\kappa}$ set to $x$ that is supernumerous to $x$. The distance of $x$ from $H_{\kappa}$ is the minimal ordinal $i$ such that $P^i(H_{\kappa})$ is supernumerous to $x$. Of all $H_{\kappa}$ sets that lie at the least distance from $x$, the one with the least $\kappa$ value is the "nearest $H_{\kappa}$ set to $x$".

The idea is that there is no combinatorial restriction on what constitutes the cardinality of an $H_{\kappa}$ set, so this definition can work without imposing a practical restriction over the non-well-ordered, non-well-founded realm. Scott's definition, by contrast, can only define cardinality for sets that are equinumerous to some well-founded set, which is in some sense restrictive in the absence of Choice.
This problem can arise not just in vehicle routing but in many sorts of sequencing problems (such as scheduling jobs for production). Of course, preserving the original ordering to the extent possible is not always a concern, but it might be if, for instance, the existing stops are customers who have been promised somewhat general time windows for delivery. In any event, we'll just take the question as a given. The answer I posted on OR-X made the somewhat charitable (and, in hindsight, unwarranted) assumption that the two new stops would be inserted by breaking two previous arcs, rather than consecutively (for instance, ... - 2 - 3 - 5 - 4 - ...). So I'll post an answer without that assumption here. In fact, I'll post three variants, one specific to the case of adding exactly two stops and the other two more general.

First, let me articulate some common elements. I'll denote the set of original nodes by $N_1$, the set of nodes to be added by $N_2$, and their union by $N=N_1 \cup N_2$. All three approaches will involve setting up integer programming models that will look for the most part like familiar routing models. So we will have binary variables $x_{ij}$ that will take the value 1 if $j$ immediately follows $i$ in the new tour. We will have constraints ensuring that every node is entered and exited exactly once:$$\sum_{j\in N} x_{ij} = 1\quad \forall i\in N\\ \sum_{i \in N} x_{ij} = 1 \quad\forall j\in N.$$The objective function will be some linear combination of the variables (sum of distances covered, sum of travel times, ...), which I will not worry about here, since it is no different from any sequencing model.

The first new wrinkle is that we do not define a variable for every pair of nodes. We create $x_{ij}$ only for the following combinations of subscripts: \begin{align*} i & \in N_{2},j\in N_{2},i\neq j\\ i & \in N_{1},j\in N_{2}\\ i & \in N_{2},j\in N_{1}\\ i & \in N_{1},j\in N_{1},(i,j)\in T \end{align*} where $T$ is the original tour.
Thus, for example, we would have $x_{24}$ but not $x_{42}$, nor $x_{26}$. The rationale is straightforward: if we add an arc between two original nodes that were not successors on the original tour, we will force an order reversal. For instance, suppose we replace the arc 2 - 4 with, say, 2 - 6. Node 4 now must appear either before node 2 or after node 6, and either way the order has not been preserved.

Version 1

The first variant makes explicit use of the fact that we have only two new nodes. We add one subtour elimination constraint, to prevent the new nodes from forming a subtour: $x_{35}+x_{53}\le 1.$ Now consider how many different ways we could insert the two new nodes. First, we could break two links in the original tour, inserting 3 in the void where the first link was and 5 in the void where the second link was. Since the original tour had five links, there are $\binom{5}{2}=10$ distinct ways to do this. Similarly, we could break two links but insert 5 first and 3 later. There are again ten ways to do it. Finally, we could break one link and insert either 3 - 5 or 5 - 3 into the void. With five choices of the link to break and two possible orders, we get another ten results, for a grand total of 30 possible new tours.

With that in mind, consider what happens if node 3 is inserted after original node $i$, breaking the link between $i$ and its original successor $j$. (In our model, this corresponds to $x_{i3}=1$.) If this is a single node insertion, then we should have $j$ follow node 3 ($x_{3j}=1$). If it is a double insertion ($i$ - 3 - 5 - $j$), we should have $x_{35}=x_{5j}=1$. We can capture that logic with a pair of constraints for each original arc: \[ \left.\begin{aligned}x_{i3}-x_{3j} & \le x_{35}\\ x_{i3}-x_{3j} & \le x_{5j} \end{aligned} \right\} \forall(i,j)\in T. \] We could do the same using node 5 in place of node 3, but it is unnecessary.
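As a sanity check on that count, the 30 order-preserving insertions can be enumerated by brute force. This is a minimal sketch, assuming a hypothetical five-node original tour labeled 1 - 2 - 4 - 6 - 8 (the actual labels don't matter) with new nodes 3 and 5:

```python
# Enumerate all order-preserving ways to insert new nodes 3 and 5
# into a 5-node cyclic tour. Node labels are illustrative only.
original = [1, 2, 4, 6, 8]          # hypothetical original tour (5 links)

tours = set()
for i in range(len(original)):
    # break the link after position i and insert node 3 there
    t1 = original[:i + 1] + [3] + original[i + 1:]
    for j in range(len(t1)):
        # insert node 5 into any of the 6 links of the new tour
        t2 = t1[:j + 1] + [5] + t1[j + 1:]
        tours.add(tuple(t2))        # every tour starts at node 1, so the
                                    # tuples are already in canonical rotation
print(len(tours))                   # 30 distinct revised tours
```

Every enumerated tour keeps the original nodes in their original cyclic order, and the total matches the three cases counted above ($\binom{5}{2}$ twice, plus $5\cdot 2$ consecutive insertions).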
If node 3 is correctly inserted by itself, say between $i$ and $j$, and node 5 is inserted after original node $h$, then the original successor $k$ of $h$ needs a new predecessor. That predecessor cannot be $h$, nor can it be any other original node (given our reduced set of variables), nor can it be node 3 (which now precedes $j$). The only available predecessor is 5, giving us $h$ - 5 - $k$ as expected. You might wonder how this accommodates a 5 - 3 insertion, say after node $i$. The original successor $j$ of $i$ needs a new predecessor, and 3 is the only eligible choice, so we're good. I tested this with a small Java program, and it did in fact find all 30 valid revised tours (and no invalid ones).

Version 2

Version 2, which can be applied to scenarios with any number of new nodes, involves building a standard sequencing model with subtour elimination constraints. The only novel element is the reduced set of variables (as described above). A blog is no place to explain sequencing models in their full glory, so I'll just assume that you, the poor suffering reader, already know how they work.

Version 3

In version 3, we again build a sequencing model with the reduced set of variables, but this time we use the Miller-Tucker-Zemlin method of eliminating subtours rather than adding a gaggle of subtour elimination constraints. The MTZ approach generally results in smaller models (since the number of subtours, and hence the potential number of subtour constraints, grows combinatorially with the number of nodes), but also generally produces weaker relaxations. The Wikipedia page for the TSP shows the MTZ constraints, although for some reason without labeling them as such. Assume a total of $n$ nodes (with consecutive indices), with node $0$ being the depot. The MTZ approach adds continuous variables $u_i, \,i\in \{1,\dots,n-1\}$ with bounds $0\le u_i \le n-1$.
It also adds the following constraints for all eligible arcs $(i,j)$ with $i\neq 0$:$$u_i - u_j + n x_{ij} \le n-1.$$You can think of the $u_i$ variables as counters. The MTZ constraints say that if we go from any node $i$ (other than the depot) to any node $j$ (including the depot), the count at node $j$ has to be at least one larger than the count at node $i$. These constraints preclude any subtours, since a subtour (one starting and ending any place other than the depot) would result in the count at the first node of the subtour being larger than itself. As I mentioned, the MTZ formulation has a somewhat weaker LP relaxation than a formulation with explicit subtour elimination constraints, so it is not favored by everyone. In our particular circumstance, however, it has an additional virtue: it gives us a relatively painless way to enforce the order preservation requirement. All we need do is insert constraints of the form$$u_j \ge u_i + 1\quad\forall (i,j)\in T.$$This forces the counts at the original nodes to increase monotonically with the original tour order, without directly impacting the counts at the new nodes.
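To illustrate, here is a small sketch (not the author's Java program) that checks the MTZ counters and the order-preservation constraints on one hypothetical revised tour, with node 0 as the depot, original nodes 1, 2, 4, 6 and new nodes 3, 5; constraints touching the depot are skipped, since the degree constraints already pin the depot down:

```python
# Verify the MTZ subtour constraints and the order-preservation
# constraints on a candidate tour. All node labels are illustrative.
n = 7                                         # total number of nodes
T = [(0, 1), (1, 2), (2, 4), (4, 6), (6, 0)]  # original tour arcs
tour = [0, 1, 3, 2, 4, 5, 6]                  # candidate revised tour

u = {node: pos for pos, node in enumerate(tour)}   # u_i = visit position

# arcs actually used by the tour (x_ij = 1), including the return arc
arcs = list(zip(tour, tour[1:] + tour[:1]))

# MTZ: u_i - u_j + n*x_ij <= n - 1 on used arcs between non-depot nodes
mtz_ok = all(u[i] - u[j] + n <= n - 1
             for i, j in arcs if i != 0 and j != 0)

# Order preservation: u_j >= u_i + 1 for original arcs between non-depot nodes
order_ok = all(u[j] >= u[i] + 1
               for i, j in T if i != 0 and j != 0)
print(mtz_ok, order_ok)                       # True True
```

Setting $u_i$ to the visit position makes every used arc satisfy $u_j = u_i + 1$, which is exactly why the MTZ inequalities are tight on a feasible tour and violated on any subtour.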
An integral expression for Pi(x)

Hello, I have discovered a new method to calculate [tex]\pi(e^x)[/tex]; it runs in O(x^d) operations, d > 0, and it is very simple. First of all we have the known integral for Pi(x): [tex]\ln\zeta(s)=s\int_0^{\infty}\frac{\pi(x)}{x^{s}-1}\,dx[/tex] We make the change of variable x = exp(exp(t)) and apply the integral transform [tex]\int_{-\infty}^{\infty}ds\,(2+is)^{-iw}[/tex] to both sides. Now we have a double integral; we express 2+is as exp(ln(2+is)) and make the change of variables u = t + ln(2+is), v = t, so finally we would have: [tex]\int_{-\infty}^{\infty}ds\,(2+is)^{-iw-1}\ln\zeta(2+is)=\int_{-i\infty}^{i\infty}\frac{r^{-iw}}{\exp(r)-1}\,dr\,\int_{-\infty}^{\infty}g(v)e^{iwv}\,dv[/tex] which has the solution (with g(t) = Pi(exp(exp(t)))): [tex]g(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dw\,e^{-iwt}F(w)/G(w)[/tex] where we call F(w) and G(w): [tex]G(w)=\int_{-i\infty}^{i\infty}\frac{r^{-iw}}{\exp(r)-1}\,dr[/tex] [tex]F(w)=\int_{-\infty}^{\infty}ds\,(2+is)^{-iw-1}\ln\zeta(2+is)[/tex] So we have an expression for [tex]\pi(e^{e^t})[/tex]; to calculate pi(x) we just set t = ln(ln(x)) in our integral.

Why is my method better than others? I would say several things:
a) It is an analytic method: you solve it by evaluating three integrals.
b) Time employed: if the time to calculate the three integrals grows like O(x^d), d > 0, that seems big, but to calculate, for example, Pi(exp(exp(100))) you only need to evaluate the integral up to t = 100, so it is faster than other methods that give an equation for Pi(x).
In Locatello et al.'s Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, the authors claim to prove that unsupervised disentanglement is impossible. The entire claim rests on a theorem (proven in the appendix) that states, in my own words:

Theorem: for any distribution $p(z)$ in which the variables $z_i$ are mutually independent, there exist infinitely many transformations $\hat z = f(z)$ from $\Omega_z \rightarrow \Omega_z$, with distribution $q(\hat z)$, such that all variables $\hat z_i$ are entangled/correlated with the $z_i$ and the distributions are equal ($q(\hat z) = p(z)$).

Here is the exact wording from the paper: (I provide both because my misunderstanding may stem from my reading of the theorem)

From here the authors make the straightforward jump from this to the claim that for any disentangled latent space learned without supervision there exist infinitely many entangled latent spaces with exactly the same distribution. I do not understand why this means it is no longer disentangled. Just because an entangled representation exists does not mean the disentangled one is any less valid. We can still conduct inference on the variables independently because they still satisfy $p(z) = \prod_i p(z_i)$, so where does the impossibility come in?
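For intuition (a standard illustration, not the paper's actual construction), a rotation of an isotropic Gaussian already shows the flavor of the theorem: the transformed variables have exactly the same joint distribution, yet each $\hat z_i$ mixes every original factor:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((100_000, 2))   # p(z): two independent factors

theta = np.pi / 4                       # f(z) = R z for any rotation R
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_hat = z @ R.T                         # q(z_hat) is again N(0, I)

# Same distribution: covariance of z_hat is (approximately) the identity
print(np.round(np.cov(z_hat.T), 2))

# ...but z_hat_1 now depends on both original factors
corr = np.corrcoef(z_hat[:, 0], z[:, 1])[0, 1]
print(round(abs(corr), 2))              # about 0.71 = |sin(pi/4)|
```

An observer who only sees samples of $\hat z$ cannot distinguish it from $z$, which is the sense in which the "right" disentangled representation is not identifiable from the distribution alone.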
How does one get the frequency response of a filter, given an input signal and the signal output by the filter?

The plant output data is usually generated using Gaussian white-noise excitation, although more informative input signals can be generated by experiment design, if prior information about the plant is known [3]. The ETFE of the plant $\widehat{G}(k)$ is found as the quotient of the cross power spectral density estimate of the input and the measured output, $P_{yu}(k)$, and the power spectral density estimate of the input, $P_{uu}(k)$, i.e.,\begin{equation*} \widehat{G}(k) = \frac{P_{yu}(k)}{P_{uu}(k)} .\end{equation*}In Welch's method, the time-series data is divided into windowed segments, with an option to use overlapping segments. Then a modified periodogram of each segment is computed and the results are averaged. Welch's method for generating an ETFE corresponds to the function tfestimate in MATLAB. One of the advantages of Welch's method is the flexibility in terms of the number of frequency samples and the excitation signal used.

The frequency response is simply the ratio of the Fourier transforms of the output and input signals: $$ H(e^{j\omega}) = \frac{Y(e^{j\omega})}{X(e^{j\omega})} $$ where $Y(e^{j\omega})$ is the Fourier transform of the output $y[n]$: $$ Y(e^{j\omega}) = \sum\limits_{n=-\infty}^{+\infty} y[n] e^{-j\omega n} $$ and $X(e^{j\omega})$ is the Fourier transform of the input $x[n]$: $$ X(e^{j\omega}) = \sum\limits_{n=-\infty}^{+\infty} x[n] e^{-j\omega n} $$ It might be a good idea to choose an input $x[n]$ such that $X(e^{j\omega}) \ne 0$ for all $\omega$ of interest in the frequency response.

If you give an impulse as input, the spectrum of the output signal equals the frequency response of the filter. This technique is useful when you don't know the filter's specifications, so I am assuming you know nothing about them. This MATLAB code implements the technique:
fs = 1000;
impulse = [1 zeros(1, 999)];
f = linspace(-fs/2, fs/2, length(impulse));
hpass = fdesign.highpass('Fst,Fp,Ast,Ap', 100, 200, 40, 1, fs);
Hdhp = design(hpass, 'butter');
y = filter(Hdhp, impulse);
figure, plot(f, fftshift(abs(fft(y, fs))));
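For readers without MATLAB, the Welch/ETFE approach described above can be sketched in Python with SciPy, where welch and csd play the roles of the PSD and cross-PSD estimates inside tfestimate. The FIR "plant" here is a made-up example, not from the original discussion:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
u = rng.standard_normal(2 ** 16)        # white-noise excitation
b = [1.0, 0.5]                          # hypothetical FIR plant: y[n] = u[n] + 0.5*u[n-1]
y = signal.lfilter(b, [1.0], u)

# ETFE: G_hat(k) = P_yu(k) / P_uu(k), both estimated by Welch's method
f, Puu = signal.welch(u, nperseg=256)
_, Pyu = signal.csd(u, y, nperseg=256)  # cross PSD of input and output
G_hat = Pyu / Puu

# Compare against the true frequency response of the FIR filter
_, G_true = signal.freqz(b, worN=2 * np.pi * f)
err = np.max(np.abs(np.abs(G_hat) - np.abs(G_true)))
print(err)                              # small, since the plant is noise-free
```

With measurement noise on $y$, the same ratio still converges to the true response as more segments are averaged, which is the point of using Welch estimates rather than a single FFT quotient.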
closed as no longer relevant by Robin Chapman, Akhil Mathew, Yemon Choi, Qiaochu Yuan, Pete L. Clark Aug 22 '10 at 9:00

$e^{\pi i} + 1 = 0$

Stokes' Theorem

Trivial as this is, it has amazed me for decades: $(1+2+3+...+n)^2=(1^3+2^3+3^3+...+n^3)$

$$ \frac{24}{7\sqrt{7}} \int_{\pi/3}^{\pi/2} \log \left| \frac{\tan t+\sqrt{7}}{\tan t-\sqrt{7}}\right| dt = \sum_{n\geq 1} \left(\frac n7\right)\frac{1}{n^2}, $$ where $\left(\frac n7\right)$ denotes the Legendre symbol. Not really my favorite identity, but it has the interesting feature that it is a conjecture! It is a rare example of a conjectured explicit identity between real numbers that can be checked to arbitrary accuracy. This identity has been verified to over 20,000 decimal places. See J. M. Borwein and D. H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, A K Peters, Natick, MA, 2004 (pages 90-91).

There are many, but here is one: $d^2=0$

Mine is definitely $$1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}+\cdots=\frac{\pi^2}{6},$$ an amazing relation between integers and pi.

There's lots to choose from. Riemann-Roch and various other formulas from cohomology are pretty neat. But I think I'll go with $$\sum\limits_{n=1}^{\infty} n^{-s} = \prod\limits_{p \text{ prime}} \left( 1 - p^{-s}\right)^{-1}$$

1+2+3+4+5+... = -1/12, once suitably regularised of course :-)

$$\frac{1}{1-z} = (1+z)(1+z^2)(1+z^4)(1+z^8)...$$ Both sides as formal power series work out to $1 + z + z^2 + z^3 + ...$, where all the coefficients are 1.
This is an analytic version of the fact that every positive integer can be written in exactly one way as a sum of distinct powers of two, i.e. that binary expansions are unique.

$V - E + F = 2$, Euler's characteristic for connected planar graphs.

I'm currently obsessed with the identity $\det (\mathbf{I} - \mathbf{A}t)^{-1} = \exp \text{tr } \log (\mathbf{I} - \mathbf{A}t)^{-1}$. It's straightforward to prove algebraically, but its combinatorial meaning is very interesting.

$196884 = 196883 + 1$

For a triangle with angles $a$, $b$, $c$: $$\tan a + \tan b + \tan c = (\tan a) (\tan b) (\tan c)$$

Given a square matrix $M \in SO_n$ decomposed as illustrated with square blocks $A,D$ and rectangular blocks $B,C,$ $$M = \left( \begin{array}{cc} A & B \\\ C & D \end{array} \right) ,$$ then $\det A = \det D.$ What this says is that, in Riemannian geometry with an orientable manifold, the Hodge star operator is an isometry, a fact that has relevance for Poincare duality. But the proof is a single line: $$ \left( \begin{array}{cc} A & B \\\ 0 & I \end{array} \right) \left( \begin{array}{cc} A^t & C^t \\\ B^t & D^t \end{array} \right) = \left( \begin{array}{cc} I & 0 \\\ B^t & D^t \end{array} \right). $$

It's too hard to pick just one formula, so here's another: the Cauchy-Schwarz inequality, $\|x\|\,\|y\| \ge |(x \cdot y)|$, with equality iff $x$ and $y$ are parallel. Simple, yet incredibly useful. It has many nice generalizations (like Holder's inequality), but here's a cute generalization to three vectors in a real inner product space: $$\|x\|^2\|y\|^2\|z\|^2 + 2(x \cdot y)(y \cdot z)(z \cdot x) \ge \|x\|^2(y \cdot z)^2 + \|y\|^2(z \cdot x)^2 + \|z\|^2(x \cdot y)^2,$$ with equality iff one of $x,y,z$ is in the span of the others. There are corresponding inequalities for 4 vectors, 5 vectors, etc., but they get unwieldy after this one. All of the inequalities, including Cauchy-Schwarz, are actually just generalizations of the 1-dimensional inequality $\|x\| \ge 0$, with equality iff $x = 0$, or rather, instantiations of it in the 2nd, 3rd, etc.
exterior powers of the vector space.

I always thought this one was really funny: $1 = 0!$

I think that Weyl's character formula is pretty awesome! It's a generating function for the dimensions of the weight spaces in a finite dimensional irreducible highest weight module of a semisimple Lie algebra.

$2^n>n$

It has to be the ergodic theorem, $$\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx) \to \int f\:d\mu,\;\;\mu\text{-a.e.}\;x,$$ the central principle which holds together pretty much my entire research existence.

Gauss-Bonnet, even though I am not a geometer.

Ἐν τοῖς ὀρθογωνίοις τριγώνοις τὸ ἀπὸ τῆς τὴν ὀρθὴν γωνίαν ὑποτεινούσης πλευρᾶς τετράγωνον ἴσον ἐστὶ τοῖς ἀπὸ τῶν τὴν ὀρθὴν γωνίαν περιεχουσῶν πλευρῶν τετραγώνοις. That is: In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle.

The formula $\displaystyle \int_{-\infty}^{\infty} \frac{\cos(x)}{x^2+1} dx = \frac{\pi}{e}$. It is astounding in that we can retrieve $e$ from a formula involving the cosine. It is not surprising if we know the formula $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$, yet this integral is of a purely real-valued function. It shows how complex analysis actually underlies even the real numbers.

It may be trivial, but I've always found $\sqrt{\pi}=\int_{-\infty}^{\infty}e^{-x^{2}}dx$ to be particularly beautiful.

For $X$ a based smooth manifold, the category of finite covers over $X$ is equivalent to the category of actions of the fundamental group of $X$ on based finite sets: $\pi$-sets === et/X. The same statement for number fields essentially describes Galois theory. Now the idea that those should be somehow unified was one of the reasons for the development of abstract schemes, a very fruitful topic that is studied in the amazing area of mathematics called abstract algebraic geometry. Also, note that "actions on sets" is very close to "representations on vector spaces", and this moves us in the direction of representation theory.
Now you see, this simple line actually somehow relates number theory and representation theory. How exactly? Well, if I knew, I would write about that, but I'm just starting to learn about those things. (Of course, one of the specific relations hinted at here should be the Langlands conjectures, since we're so close to having L-functions and representations here!)

$E[X+Y]=E[X]+E[Y]$ for any 2 random variables $X$ and $Y$.

$\prod_{n=1}^{\infty} (1-x^n) = \sum_{k=-\infty}^{\infty} (-1)^k x^{k(3k-1)/2}$

$D_A\star F = 0$ (Yang-Mills)

$\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$.

My favorite is the Koike-Norton-Zagier product identity for the j-function (which classifies complex elliptic curves): $$j(p) - j(q) = p^{-1}\prod_{m>0,\; n\geq -1} (1-p^m q^n)^{c(mn)},$$ where $j(q)-744 = \sum_{n \geq -1} c(n) q^n = q^{-1} + 196884q + 21493760q^2 + \cdots$. The left side is a difference of power series pure in $p$ and $q$, so all of the mixed terms on the right cancel out. This yields infinitely many identities relating the coefficients of $j$. It is also the Weyl denominator formula for the monster Lie algebra.
Advances in Differential Equations, Volume 16, Number 11/12 (2011), 1087-1137.

Local well-posedness and a priori bounds for the modified Benjamin-Ono equation

Abstract: We prove that the complex-valued modified Benjamin-Ono (mBO) equation is analytically locally well posed if the initial data $\phi$ belongs to $H^s$ for $s\geq 1/2$ with $ \| {\phi} \| _{L^2}$ sufficiently small, without performing a gauge transformation. The key ingredient is that the logarithmic divergence in the high-low frequency interaction can be overcome by a combination of $X^{s,b}$ structure and smoothing effect structure. We also prove that the real-valued $H^\infty$ solutions to the mBO equation satisfy a priori local-in-time $H^s$ bounds in terms of the $H^s$ size of the initial data for $s>1/4$.

Article information
Source: Adv. Differential Equations, Volume 16, Number 11/12 (2011), 1087-1137.
Dates: First available in Project Euclid: 17 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.ade/1355703113
Mathematical Reviews number (MathSciNet): MR2858525
Zentralblatt MATH identifier: 1236.35121
Citation: Guo, Zihua. Local well-posedness and a priori bounds for the modified Benjamin-Ono equation. Adv. Differential Equations 16 (2011), no. 11/12, 1087-1137. https://projecteuclid.org/euclid.ade/1355703113
Video Transcript

Zero and Negative Exponents

In a land of two kingdoms, rival Kings Wallace the 4th and Frederick the Negative 3rd enjoy playing pranks on each other. King Wallace receives a package from his rival, but is it a package or a prank? It's a painting... Oh my! How provocative! In order to maintain diplomatic relations, the king must hang the painting in a prominent position... this is a terribly tricky situation, so the king calls Mr. Magic, the court mathemagician. Mr. Magic knows just what to do. He'll shrink the painting. From his bag of tricks, he pulls out a secret potion... and leaves the rest to magic. Oh no! The shrinking potion only worked in one dimension – look what happened. Mr. Magic realizes his error... so he pulls out another potion – this time to shrink the provocative painting proportionally by 10⁻⁵. 10⁻⁵! Wait – negative powers can be confusing. Let's investigate.

Take a look at our problem here, 10⁻⁵. We can rewrite this as a fraction. In the denominator, write the base to the absolute value of the power, or 10⁵. So, what do you write in the numerator? 1. Now, simplify the fraction. See what happens when you have a positive exponent in the denominator of a fraction? The value of the fraction gets smaller and smaller. 10⁻⁵ = 1/100,000. I think Mr. Magic is on to something here... Take a look at this example: 2⁻⁴. To rewrite this as a fraction, in the denominator, write 2⁴, then write a one in the numerator, and simplify. 2⁻⁴ = 1/16.

Let's look at an example where the base is a variable. We can rewrite x⁻⁴ as a fraction by writing x⁴, which is x times x times x times x, in the denominator and a 1 in the numerator. This simplifies to 1/x⁴. Here's the rule for negative exponents: x⁻ª = 1/xª. Remember, 'x' cannot be equal to zero.
Let's look at a rule to see why this works, and then it will be easier to remember... Our rule is: any base raised to the zero power is equal to 1. For example, 1⁰ = 1, 2⁰ = 1, and 3⁰ = 1, and so on... the rule is: any base, such as x, raised to the zero power is equal to 1, when 'x' does not equal 0. So, using the example 2⁻⁴: rewritten as a fraction, it is equal to 1/2⁴, which is the same as 2⁰ over 2⁴, and like magic, this is equal to 2⁽⁰⁻⁴⁾, which is 2⁻⁴, so we're right back where we started. That makes it much easier to understand! Sometimes math is like magic! King Wallace hung the picture. But wait, where is it? Ah, there. Take a look at it now... Thanks to Mr. Magic, King Wallace isn't worrying about the picture – he's trying to figure out what prank to play next on King Frederick...

Zero and Negative Exponents – Exercise

Would you like to apply what you've learned? With the exercises for the video Zero and Negative Exponents you can review and practice it.

Decide what $10^{-5}$ stands for.

Hints: For example, $10^3=10\times 10\times 10$. You multiply two powers of ten by adding their exponents: $10^7\times 10^{-5}=10^{7-5}=10^2$. Here you see how to handle a negative exponent for $x\neq 0$.

Solution: Mr. Magic shrinks the picture by a power of ten: $10^{-5}$. How can we see what this negative power is trying to express? Using what we know about multiplying and dividing powers with exponents, we can see that: $10^{\large -5}=10^{\large 0-5}=10^{\large 0}\div 10^{\large 5}=1\div 10^{\large 5}=\frac{1}{10^{\large 5}}=\frac{1}{100000}$.

Explain how to write $x^{-a}$ as a fraction.

Hints: An example of a negative exponent is $2^{-4}=\frac1{2^{\large 4}}$. An example with $x$ as the base is $x^{-4}=\frac1{x^{\large 4}}$. Remember that division by zero isn't allowed.
Solution: We've already seen that $10^{-5}=\frac1{10^{\large 5}}$. We can prove that $2^{-4}=\frac1{2^{\large 4}}$ using $2^0=1$: $\begin{array}{lcr} \frac1{2^{\large 4}}&=&\frac{2^{\large 0}}{2^{\large 4}}\\ &=&2^{0-4}\\ &=&2^{-4} \end{array}$ Similarly, we have $x^{-4}=\frac1{x^{\large 4}}$. In general, we have $x^{-a}=\frac1{x^{\large a}}$. As long as the base does not equal zero, a power with a negative exponent can also be written as a fraction with $1$ in the numerator and, in the denominator, the power with the same base raised to the absolute value of the exponent.

Explain why $2^{-4} = \frac1{2^4}$ is true.

Hints: Here is the rule for dividing powers with the same base. Each power with a zero exponent is equal to $1$ for $x\neq 0$: $\large x^0=1$

Solution: We start with the fraction $\frac1{2^{\large 4}}$. Since each power with a zero exponent is equal to $1$, we have $2^{0}=1$. So $1$ over $2$ to the power of $4$ is equal to $2^{0}\div 2^{4}$. Since $2^{4}$ and $2^{0}$ have the same base, we can subtract their exponents by the rule for dividing powers to get $2^{0-4}$. We then have $\frac1{2^{\large 4}}=2^{-4}$.

Identify the powers resulting from the calculations shown.

Hints: In general, $\large 10^{-a}=\frac1{10\dots 0}$, where the number of zeros in the denominator is the same as the absolute value of the exponent. If you divide $1$ by a power of ten you get the decimal number $0.0\dots01$, where the position of the $1$ after the decimal point is given by the exponent.

Solution: Let's start with the fraction $\frac{10^{\large 3}}{10^{\large 6}}$. Since we have two powers with the same base, we can subtract the exponents to get $\frac{10^{\large 3}}{10^{\large 6}}=10^{\large 3-6}=10^{\large -3}$. Next we can write the power as a fraction: $10^{\large -3}=\frac1{1000}$. To get the corresponding decimal number we write the $1$ in the third position after the decimal point: $\frac{10^{\large 3}}{10^{\large 6}}=\frac1{1000}=0.001$.
Next we multiply two powers with the same exponent: $\left(\frac12\right)^{\large -4}\times 20^{\large -4}$. First we multiply the two bases together to get $\frac12\times 20=10$. The resulting power has the same exponent as both factors. So we have $\left(\frac12\right)^{\large -4}\times 20^{\large -4}=10^{\large -4}$. Again we write this power as a fraction and then as a decimal number: $10^{\large -4}=\frac1{10^{\large 4}}=\frac1{10000}=0.0001$

We handle the last one, $8^{\large -3}\times125^{\large -3}$, in a similar manner. We have $8\times 125=1000=10^{\large 3}$, so we can conclude that $8^{\large -3}\times125^{\large -3}=\left(10^3\right)^{-3}=10^{\large -9}$. Writing this result as a fraction and then as a decimal number, we get $10^{\large -9}=\frac1{10^{\large 9}}=\frac1{1000000000}=0.000000001$.

Examine the following powers with negative exponents.

Hints: Keep the general formula for negative exponents in mind. Count the number of zeros in the denominator. The number of zeros in the denominator is the same as the absolute value of the exponent.

Solution: In general we have $x^{-a}=\frac1{x^{\large a}}$, where $x\neq 0$. So if we have $x=10$, we get: $10^{-3}=\frac1{10^{\large 3}}=\frac1{1000}$, $10^{-1}=\frac1{10^{\large 1}}=\frac1{10}$, $10^{-6}=\frac1{10^{\large 6}}=\frac1{1000000}$, $10^{-7}=\frac1{10^{\large 7}}=\frac1{10000000}$

Decide the power of the enlarging potion.

Hints: Find the shrinking factor of the spell. Divide the size of the shrunken castle by the size of the original castle. The enlarging factor is the reciprocal of the shrinking factor. If you write the shrinking factor as $1$ over a power of ten with a positive exponent, you can find the reciprocal directly.

Solution: First let's find the shrinking factor of the spell. To do this, we divide the resulting size of the castle, $2$, by the original size, $200$: $\frac2{200}=\frac1{100}=\frac1{10^{\large 2}}$. Writing this as a power of ten with a negative exponent we get $10^{-2}$.
So to undo the shrinking, we have to multiply by $10^2=100$. Let's check: $2\times 100=200$ $\surd$. So $10^2=100$ is the enlarging factor we need for the potion.
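The rules from the lesson are easy to check numerically; here is a quick sketch in Python (exact fractions avoid any floating-point caveats):

```python
from fractions import Fraction

# The lesson's rule: x**(-a) equals 1 / x**a  (for x != 0)
assert 10 ** -5 == 1 / 10 ** 5           # = 1/100000
assert 2 ** -4 == 1 / 2 ** 4 == 0.0625   # = 1/16

# The "like magic" derivation: 2^0 / 2^4 = 2^(0-4)
assert 2 ** 0 / 2 ** 4 == 2 ** (0 - 4)

# Exact arithmetic: 10^-2 is the shrinking factor, 10^2 undoes it
shrink = Fraction(10) ** -2              # = 1/100
assert shrink * 200 == 2                 # the castle: 200 -> 2
assert 2 * 10 ** 2 == 200                # the enlarging factor restores it
```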
The convolution of two signals in the time domain is equivalent to the multiplication of their representations in the frequency domain. Mathematically, we can write the convolution of two signals as$$y(t) = x_{1}(t)*x_{2}(t) = \int_{-\infty}^{\infty}x_{1}(p)\,x_{2}(t-p)\,dp$$

Let us do the convolution of a step signal u(t) with itself: $y(t) = u(t)*u(t) = \int_{-\infty}^{\infty}u(p)\,u(t-p)\,dp$. Now $t$ can be greater than or less than zero. Considering the two cases, the result arises with the following possibilities:

$y(t) = \begin{cases}0, & t<0\\\int_{0}^{t}1\,dp, & t>0\end{cases} = \begin{cases}0, & t<0\\t, & t>0\end{cases} = r(t)$

Commutative property: the order of convolution does not matter, which can be shown mathematically as$$x_{1}(t)*x_{2}(t) = x_{2}(t)*x_{1}(t)$$

Associative property: the order of convolution involving three signals can be anything. Mathematically,$$x_{1}(t)*[x_{2}(t)*x_{3}(t)] = [x_{1}(t)*x_{2}(t)]*x_{3}(t)$$

Distributive property: two signals can be added first, and then their sum convolved with a third signal. This is equivalent to convolving the two signals individually with the third signal and adding the results. Mathematically,$$x_{1}(t)*[x_{2}(t)+x_{3}(t)] = x_{1}(t)*x_{2}(t)+x_{1}(t)*x_{3}(t)$$

Area property: if a signal is the result of the convolution of two signals, then its area is the product of the areas of those individual signals. Mathematically, if $y(t) = x_{1}(t)*x_{2}(t)$, then Area of $y(t)$ = Area of $x_{1}(t)$ $\times$ Area of $x_{2}(t)$.

Scaling property: if both signals are time-scaled by some constant $a$, then the convolution of the scaled signals is the correspondingly scaled convolution divided by $|a|$. If $x_{1}(t)*x_{2}(t) = y(t)$, then $x_{1}(at)*x_{2}(at) = \frac{y(at)}{|a|},\; a \ne 0$.

Delay property: suppose a signal y(t) is the result of the convolution of two signals x1(t) and x2(t).
If the two signals are delayed by times t1 and t2 respectively, then the resultant signal y(t) will be delayed by (t1+t2). Mathematically: if $x_{1}(t)*x_{2}(t) = y(t)$, then $x_{1}(t-t_{1})*x_{2}(t-t_{2}) = y[t-(t_{1}+t_{2})]$ Example 1 − Find the convolution of the signals u(t-1) and u(t-2). Solution − Given signals are u(t-1) and u(t-2). Their convolution can be done as shown below − $y(t) = u(t-1)*u(t-2) = \int_{-\infty}^{+\infty}u(p-1)\,u(t-p-2)\,dp$ By the delay property, since $u(t)*u(t)=r(t)$, $y(t) = r[t-(1+2)] = r(t-3)$ Example 2 − Find the convolution of two signals given by $x_{1}(n) = \lbrace 3,-2, 2\rbrace $ $x_{2}(n) = \begin{cases}2, & 0\leq n\leq 4\\0, & \text{elsewhere}\end{cases}$ Solution − x 2(n) can be written out as $x_{2}(n) = \lbrace 2,2,2,2,2\rbrace$ (origin at the first sample). Taking the Z-transform of the previously given x 1(n): $X_{1}(Z) = 3-2Z^{-1}+2Z^{-2}$ Similarly, $X_{2}(Z) = 2+2Z^{-1}+2Z^{-2}+2Z^{-3}+2Z^{-4}$ Resultant signal, $X(Z) = X_{1}(Z)X_{2}(Z)$ $= \lbrace 3-2Z^{-1}+2Z^{-2}\rbrace \times \lbrace 2+2Z^{-1}+2Z^{-2}+2Z^{-3}+2Z^{-4}\rbrace$ $= 6+2Z^{-1}+6Z^{-2}+6Z^{-3}+6Z^{-4}+0Z^{-5}+4Z^{-6}$ Taking the inverse Z-transform of the above, we get the resultant signal $x(n) = \lbrace 6,2,6,6,6,0,4\rbrace$ (origin at the first sample). Example 3 − Determine the convolution of the following 2 signals − $x(n) = \lbrace 2,1,0,1\rbrace$ $h(n) = \lbrace 1,2,3,1\rbrace$ Solution − Taking the Z-transforms of the signals, we get $X(Z) = 2+Z^{-1}+Z^{-3}$ and $H(Z) = 1+2Z^{-1}+3Z^{-2}+Z^{-3}$ Convolution of two signals corresponds to multiplication of their Z-transforms, that is, $Y(Z) = X(Z)H(Z)$ $= \lbrace 2+Z^{-1}+Z^{-3}\rbrace \times \lbrace 1+2Z^{-1}+3Z^{-2}+Z^{-3}\rbrace$ $= 2+5Z^{-1}+8Z^{-2}+6Z^{-3}+3Z^{-4}+3Z^{-5}+Z^{-6}$ Taking the inverse Z-transform, the resultant signal can be written as $y(n) = \lbrace 2,5,8,6,3,3,1 \rbrace$ (origin at the first sample).
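The discrete examples above can be checked numerically: discrete convolution multiplies the Z-transform coefficient sequences, which is exactly what numpy's `convolve` computes. A quick sketch, assuming numpy is available:

```python
import numpy as np

# Discrete convolution multiplies the Z-transform coefficient sequences,
# so np.convolve reproduces the polynomial products computed above.

# Example 2: x1(n) = {3, -2, 2}, x2(n) = {2, 2, 2, 2, 2}
y2 = np.convolve([3, -2, 2], [2, 2, 2, 2, 2])
print(y2.tolist())  # [6, 2, 6, 6, 6, 0, 4]

# Example 3: x(n) = {2, 1, 0, 1}, h(n) = {1, 2, 3, 1}
y3 = np.convolve([2, 1, 0, 1], [1, 2, 3, 1])
print(y3.tolist())  # [2, 5, 8, 6, 3, 3, 1]

# Commutative property: the order of convolution does not matter.
assert np.array_equal(y3, np.convolve([1, 2, 3, 1], [2, 1, 0, 1]))
```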
- Babb, Jeff; Currie, James (Montana Council of Teachers of Mathematics & Information Age Publishing, 2008). Large context problems (LCP) are useful in teaching the history of science. In this article we consider the brachistochrone problem in a context stretching from Euclid through the Bernoullis. We highlight a variety of ...
- Currie, James D.; Rampersad, Narad; Saari, Kalle (Cambridge University Press, 2015-09). Let A be a finite alphabet and f: A^* --> A^* be a morphism with an iterative fixed point f^\omega(\alpha), where \alpha is in A. Consider the subshift (X, T), where X is the shift orbit closure of f^\omega(\alpha) and ...
- A graph is called well-covered if every maximal independent set has the same size. One generalization of independent sets in graphs is that of a fractional cover -- attach nonnegative weights to the vertices and require ...
- We classify all 3-letter patterns that are avoidable in the abelian sense. A short list of four-letter patterns for which abelian avoidance is undecided is given. Using a generalization of Zimin words we deduce some ...
- We find an infinite word w on four symbols with the following property: two occurrences of any block in w must be separated by more than the length of the block. That is, in any subword of w of the form xyx, the length of ...
- The thesis begins by giving background in linear programming and Simplex methods. Topics covered include the duality theorem, Lemke's algorithm, and the pathological programs of Klee-Minty. Because of the bad behaviour ...
- Currie, James Daniel (The University of Calgary, 1987-06). A word $w$ over alphabet $\Sigma$ is {\em non-repetitive} if we cannot write $w=abbc$, $a,b,c\in\Sigma^*$, $b\ne\epsilon$. That is, no subword of $w$ appears twice in a row in $w$. In 1906, Axel Thue, the Norwegian number ...
- Mullan, G. J.; Meiklejohn, C.; Babb, J. (University of Bristol Spelaeological Society, 2017). An account is given of the discovery and excavation of this small cave in the 1960s. It is recorded that archaeological finds were made, but of these, only a single human mandible can now be traced. Radiocarbon dating shows ...
- Cassaigne et al. introduced the cyclic complexity function c_x(n), which gives the number of cyclic conjugacy classes of length-n factors of a word x. We study the behavior of this function for the Fibonacci word f and the ...
- Rampersad, Narad (University of Winnipeg / University of Waterloo, 2007). The study of combinatorics on words dates back at least to the beginning of the 20th century and the work of Axel Thue. Thue was the first to give an example of an infinite word over a three-letter alphabet that contains ...
- Rampersad, Narad (The Electronic Journal of Combinatorics, 2011-06-21). In combinatorics on words, a word w over an alphabet Σ is said to avoid a pattern p over an alphabet Δ if there is no factor x of w and no non-erasing morphism h from Δ* to Σ* such that h(p) = x. Bell and Goh have recently ...
- We study the structure of automata accepting the greedy representations of N in a wide class of numeration systems. We describe the conditions under which such automata can have more than one strongly connected component ...
- We prove that the subsets of N^d that are S-recognizable for all abstract numeration systems S are exactly the 1-recognizable sets. This generalizes a result of Lecomte and Rigo in the one-dimensional setting.
- Henshall, Dane; Rampersad, Narad; Shallit, Jeffrey (Bulletin of the European Association for Theoretical Computer Science, 2012). We consider various shuffling and unshuffling operations on languages and words, and examine their closure properties. Although the main goal is to provide some good and novel exercises and examples for undergraduate formal ...
I would like to particularly address this nice question relating the Hamiltonian formulation of this superconducting state (via the Bogoliubov-de Gennes (BdG) equation) to the low energy quantum field theory, especially the Topological Quantum Field Theory (TQFT). What is a $p_x+i p_y$ superconductor: It is a chiral $p$-wave superconductor. It is an odd-parity and spin-triplet pairing superconductor. The excited state of a $p_x+i p_y$ superconductor around the vortex carries a quantized angular momentum $L$ related to the $p_x+i p_y$ order parameter. We can write either the chiral $p_x+i p_y$ or the anti-chiral $p_x-i p_y$ pairing order parameter. The wave function of the condensate is$$\Psi_\pm = e^{i \varphi}\bigg[ d_x \Big( -\left|\uparrow\uparrow\right\rangle + \left|\downarrow\downarrow\right\rangle \Big) +i d_y \Big( \left|\uparrow\uparrow\right\rangle + \left|\downarrow\downarrow\right\rangle \Big) +d_z \Big( \left|\uparrow\downarrow\right\rangle + \left|\downarrow\uparrow\right\rangle \Big) \bigg](k_x \pm i k_y)$$which we usually simplify as:$$\Psi_\pm = e^{i \varphi}\bigg[ i(\vec{d} \cdot \vec{\sigma}) \sigma_y \bigg](k_x \pm i k_y),$$where $\vec{d}=(d_x,d_y,d_z)$ and $\vec{\sigma}=(\sigma_x, \sigma_y, \sigma_z)=(\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix},\begin{pmatrix} 0 & -i \\ i & 0\end{pmatrix},\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix})$, and$$i\vec{\sigma} \sigma_y=(\begin{pmatrix} -1 & 0 \\ 0 & 1\end{pmatrix},\begin{pmatrix} i & 0 \\ 0 & i\end{pmatrix},\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}),$$ where the $2 \times 2$ matrix has the spin-pairing structure $\begin{pmatrix} | \uparrow \uparrow\rangle & | \uparrow \downarrow\rangle \\ | \downarrow \uparrow\rangle & |\downarrow \downarrow \rangle\end{pmatrix}$. Since a chiral $p$-wave superconductor is fully gapped (the pairing gap makes the Fermi sea gapped everywhere, in all directions around $\vec{k}_F$), we can ask what its field theory description is.
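As a quick sanity check of the matrices quoted above, one can verify numerically (assuming numpy is available) that $i\sigma_a\sigma_y$ reproduces each listed $2\times 2$ block:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# i * sigma_a * sigma_y for a = x, y, z, compared against the text's matrices
for s, expected in [
    (sx, np.array([[-1, 0], [0, 1]])),
    (sy, np.array([[1j, 0], [0, 1j]])),
    (sz, np.array([[0, 1], [1, 0]])),
]:
    assert np.allclose(1j * s @ sy, expected)
print("i(sigma_a)sigma_y matches the matrices quoted above")
```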
Especially a Topological Field Theory description: It is a spin-Ising TQFT. It is a fermionic spin TQFT that must be defined on a spin manifold. In terms of Chern-Simons (CS) theory, it is an $SO(3)_1 \times U(1)_{-1}$ CS theory. It only has two quasi-particle sectors: $\{1, \psi\}$. The $1$ is the bosonic trivial vacuum and the $\psi$ is the fermionic sector related to the Bogoliubov fermion $\psi$ (which one encounters when dealing with the BdG equation). How is it related to topological superconductors? In the modern definition (of Wen and Kitaev), a chiral $p_x+i p_y$ superconductor is not a Topological Superconductor. A chiral $p_x+i p_y$ superconductor is instead an invertible fermionic intrinsic Topological Order. A topological superconductor as a Symmetry-Protected Trivial State (or Symmetry-Protected Topological State, a SPT state) must be a Short Range Entangled state that has no chiral edge mode. But a $p_x+i p_y$ superconductor has a chiral Majorana-Weyl gapless edge mode (see 3). A chiral $p_x+i p_y$ superconductor is not a SPT state. So in short, a 2+1D chiral $p_x+i p_y$ superconductor is: not a SPT state (not a Short Range Entangled Symmetry-Protected Topological/Trivial State); not a Topological Superconductor; an invertible fermionic intrinsic Topological Order. However, if we stack a chiral $p_x+i p_y$ with an anti-chiral $p_x-i p_y$ superconductor, what we obtain is a Topological Superconductor with respect to a $Z_2$-Ising global symmetry as well as a $Z_2^f$-fermionic parity symmetry. So it is a 2+1D $Z_2 \times Z_2^f$-Topological Superconductor. And indeed the 1+1D edge modes on the boundary of the system have central charge $(c_L,c_R)=(1/2,-1/2)$, thus the chiral central charge $c_L-c_R=0$ (mod 4), which is indeed a non-chiral edge mode and a gappable edge, obtained by breaking the $Z_2$-Ising global symmetry with some appropriate interactions.
It turns out that by stacking from 1 to 8 layers of such $Z_2 \times Z_2^f$-Topological Superconductors ($p_x+i p_y/p_x-i p_y$), you can get 8 distinct classes (and at most 8, i.e. mod-8 classes) of TQFTs. They are labeled by $\nu \in \mathbb{Z}_8$ classes of 2+1D fermionic spin-TQFTs. These classes are distinguished by topological invariant data such as the topological ground state degeneracy (GSD) and the reduced modular $S^{xy}$ and $T^{xy}$ matrices for anyonic statistics. The 8th class is the same as the 0th class. How is it related to Majorana modes? A 2+1D chiral $p_x+i p_y$ superconductor has a 1+1D boundary chiral Majorana-Weyl gapless edge mode, which has a central charge $c=1/2$. The vortex of a $p_x+i p_y$ superconductor traps Majorana zero modes. The dynamical vortex with this Majorana zero mode can be identified as the $\sigma$-anyon in the Ising TQFT with quasi-particle sectors $\{1, \psi, \sigma\}$. Note added 1: If we consider the odd classes ($\nu=1,3,5,7$) $\in \mathbb{Z}_8$ of the Topological Superconductor described in part (2) above, there exists a special non-Abelian anyon, usually denoted the $\sigma$ anyon. If we take this $\sigma$ anyon around a trefoil knot as its worldline in the spacetime trajectory, we get a statistical Berry phase $(-1)$. This is related to the mathematics of the Arf invariant. One can derive that. Usually a non-Abelian anyon has a non-Abelian statistical Berry matrix under a braiding process, but the trefoil worldline trajectory for the $\sigma$ anyon gives only an Abelian phase $(-1)$. Note added 2: The illustration shows stacking $\nu$ layers of (Ising/$\bar{\text{Ising}}$ TQFT, or p+ip/p-ip) superconductors: the winding figures illustrate the half-quantum vortices ($\frac{hc}{2e}$ flux) that trap a Majorana zero mode as a non-Abelian $\sigma$ anyon. More details can be read in this reference: arXiv:1612.09298, Annals of Physics 384C (2017) 254-287.
Since several other people have posted solutions using the same packages you were using, here is an alternative. I would personally recommend a modern toolchain with Unicode and OpenType math fonts whenever you aren't forced to use the legacy packages. Every OpenType math font, including the default, comes with a more comprehensive selection of math symbols than is even possible with any combination of legacy packages; the symbols match each other better and work out of the box with just the package unicode-math. This includes bold upright and bold italic Greek.

\documentclass[varwidth, preview, 12pt]{standalone}
\usepackage{polyglossia}
\usepackage{amsmath}
\usepackage[math-style=ISO]{unicode-math}
\usepackage{xcolor} % Not actually needed for this MWE.
\setmainlanguage{german}
\begin{document}
\section{Bild 1}
\begin{equation}
\symbfup{\kappa} \frac{\partial^2 T}{\partial x^2}=\frac{\partial T}{\partial t}
\end{equation}
\end{document}

You might consider defining a semantic macro in case you want to migrate your source to a publisher with a different house style, or back-port to a different set of packages. If you want to stick to legacy encodings, see section 2.2.1 of the isomath manual for various methods. Here is a version that loads the bold upright κ from GFS Artemisia:

\documentclass[varwidth, preview, 12pt]{standalone}
\usepackage[LGR, T1]{fontenc}
\usepackage{textcomp, amssymb} % Not used here.
\usepackage[utf8]{inputenc}
\usepackage[ngerman]{babel}
\usepackage{amsmath}
\usepackage[artemisia]{textgreek} % Or the font of your choice.
% This MWE does not require the other packages you included, but is compatible
% with them.
\newcommand{\mathbfup}[1]{\mathord{\textnormal{\textbf{#1}}}}
\begin{document}
\section{Bild 1}
\begin{equation}
\mathbfup{\textkappa} \frac{\partial^2 T}{\partial x^2}=\frac{\partial T}{\partial t}
\end{equation}
\end{document}

Here is an alternative solution using only the same packages you loaded.
The bold upright Greek font here is, by default, cbgreek.

\documentclass[varwidth, preview, 12pt]{standalone}
\usepackage[LGR, T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[greek, ngerman]{babel}
\usepackage{amsmath}
% This MWE does not require the other packages you included, but is compatible
% with them.
\newcommand{\greekbfup}[1]{\mathord{\text{\textgreek{\textbf{#1}}}}}
\begin{document}
\section{Bild 1}
\begin{equation}
\greekbfup{\textkappa} \frac{\partial^2 T}{\partial x^2}=\frac{\partial T}{\partial t}
\end{equation}
\end{document}
The trick here is to notice that the Witten index for a finite temperature $\beta$ is given by $$\text{Tr}\left\{(-1)^Fe^{-\beta H}\right\}=\int_{\text{PBC}}\mathcal{D}\phi\mathcal{D}\overline\psi\mathcal{D}\psi\,e^{-S},$$ where the boundary conditions are on a circle of circumference $\beta$. Next, we know that the Witten index is independent of the temperature (it computes the Euler characteristic of the Riemannian manifold), and so we can take the $\beta\to 0$ limit of this expression. In this case, all non-constant modes of the fields $\phi$ and $\psi$ have energy proportional to $1/\beta$, and thus will be exponentially suppressed in the $\beta\to 0$ limit. Thus, the path integral will localize only to those modes which are constant in time, namely $$\text{Tr}\,(-1)^F\propto\int_{\mathcal{M}}\mathrm{d}\phi\,\sqrt{g}\int\mathrm{d}\overline{\psi}\,\mathrm{d}\psi\,\exp\left(-\frac{\beta}{2}R_{IJKL}\psi^I\overline{\psi}^J\psi^K\overline{\psi}^L\right),$$ where we have traded out our path integral for a standard integral over constant modes only, and the $\sqrt{g}$ term comes from the integral over non-constant modes in the Gaussian limit (a factor of $1/\sqrt{g}$ from the bosonic fields and a factor of $g$ from the fermionic ones). The constant of proportionality can be worked out by being careful with the suppression of non-constant modes and working explicitly with the path integral measure over Fourier components. However, this is quite technical. Now, as a warm-up, if the manifold is $2$-dimensional, we have $$\text{Tr}\,(-1)^F\propto\int\mathrm{d}^2\phi\,\sqrt{g}\int\mathrm{d}^2\overline{\psi}\mathrm{d}^2\psi\exp\left(-\frac{\beta}{2}R_{IJKL}\psi^I\overline{\psi}^J\psi^K\overline{\psi}^L\right).$$ I will leave it to you to show that, when you bring the Grassmann coordinates down from the exponential and integrate over them, the result is $$\text{Tr}\,(-1)^F\propto\int\mathrm{d}^2\phi\,\sqrt{g}\,R,$$ where $R$ is the Ricci scalar.
Since $\text{Tr}\,(-1)^F=\chi(M)$ is the Euler characteristic, this is exactly the statement of the Gauss-Bonnet theorem, up to a constant of proportionality ($1/4\pi$). The technique for higher dimensions can be worked out in a similar fashion.
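For readers who want the intermediate step, here is a schematic sketch of the 2D Grassmann integration (index placement and the overall normalization are not tracked carefully here; only the structure of the argument is shown):

```latex
% Only the term of the exponential's expansion containing all four
% Grassmann variables survives the Berezin integral:
\begin{align*}
\int\mathrm{d}^2\overline{\psi}\,\mathrm{d}^2\psi\,
  e^{-\frac{\beta}{2}R_{IJKL}\psi^I\overline{\psi}^J\psi^K\overline{\psi}^L}
&= -\frac{\beta}{2}\,R_{IJKL}
   \int\mathrm{d}^2\overline{\psi}\,\mathrm{d}^2\psi\,
   \psi^I\overline{\psi}^J\psi^K\overline{\psi}^L \\
&\propto \beta\,R_{IJKL}\,\epsilon^{IK}\epsilon^{JL}
 \propto \beta\,R ,
\end{align*}
% since in two dimensions the Riemann tensor has a single independent
% component, so the contraction with the epsilon symbols is proportional
% to the Ricci scalar R.
```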
Can anyone throw some light on using accelerometers to measure angular acceleration and hence angular velocity? This approach is meant to avoid gyroscopes due to drifting errors. Any links for this would also be very helpful. Thank you. Gyros measure angular velocity without the insane drift that comes from integrating accelerometer data. Integrating accelerometers for velocity or position is not a good idea because any noise gets integrated. In theory, if the noise were perfectly random this wouldn't be a problem, but the noise almost never is. An accelerometer triad measures the non-field specific force vector ($\mathbf{f}$) at its location. If it is located at the centre of mass of the body, it will measure $$ \mathbf{f} = \mathbf{\dot{v}} - \mathbf{g} $$ where $\mathbf{\dot{v}}$ is the translational acceleration of the body and $\mathbf{g}$ is the acceleration due to gravity. To be able to measure angular acceleration around the centre of mass, the accelerometer triad needs to be displaced a distance $\mathbf{\rho}$ from the centre of mass. In that case it will measure $$ \mathbf{f'} = \mathbf{f} + \mathbf{\dot{\omega}\times\rho} +\mathbf{\omega\times}(\mathbf{\omega\times\rho}) $$ where $\mathbf{\omega}$ is the angular velocity of the body. So, to be able to use the accelerometer to (indirectly) measure angular acceleration, you will need to measure the acceleration of the centre of mass as well. This can be done by having two accelerometer triads mounted to the body. Then, you have to solve the equation given above to obtain $\mathbf{\dot{\omega}}$ and integrate it to obtain $\mathbf{\omega}$. Gyroscopes do drift, yes. But accelerometers also have biases, they are very noisy, and they will also measure vibrations in your system. These biases and noises will, of course, be integrated.
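As a sketch of the two-triad approach described above (hypothetical numbers; the function name and the least-squares treatment of the rank-deficient cross-product equation are my own choices, assuming numpy is available):

```python
import numpy as np

def angular_accel(f_center, f_offset, rho, omega):
    """Estimate angular acceleration from two accelerometer triads.

    f_center: specific force at the centre of mass
    f_offset: specific force at displacement rho from the centre of mass
    omega:    current angular-velocity estimate, needed for the
              centripetal term omega x (omega x rho)
    Solves  f' - f = domega x rho + omega x (omega x rho)  for domega.
    Note: domega x rho only determines the component of domega
    perpendicular to rho, so in practice a third triad (or a second,
    non-collinear offset) is needed to recover the full vector.
    """
    rhs = f_offset - f_center - np.cross(omega, np.cross(omega, rho))
    # domega x rho = rhs  <=>  rhs = -[rho]_x domega, with [rho]_x the
    # skew-symmetric cross-product matrix; solve in the least-squares sense.
    Rx = np.array([[0.0, -rho[2], rho[1]],
                   [rho[2], 0.0, -rho[0]],
                   [-rho[1], rho[0], 0.0]])
    domega, *_ = np.linalg.lstsq(-Rx, rhs, rcond=None)
    return domega

# Hypothetical example: angular acceleration about z, lever arm along x.
rho = np.array([0.1, 0.0, 0.0])
omega = np.array([0.0, 0.0, 2.0])
domega_true = np.array([0.0, 0.0, 5.0])
f_center = np.zeros(3)
f_offset = (f_center + np.cross(domega_true, rho)
            + np.cross(omega, np.cross(omega, rho)))
est = angular_accel(f_center, f_offset, rho, omega)
```

Here `lstsq` returns the minimum-norm solution, which sets the unobservable component along `rho` to zero; that happens to coincide with the true value in this example.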
Accelerometers (not gyroscopes) are by far the most common way of sensing inclination, in other words the rotation angle relative to "down", and adjusting the mobile phone screen and camera to "portrait mode" or "landscape mode". Unlike gyroscopes, this method of measuring pitch and roll generally does not have long-term drift (although it often does have a lot of short-term errors).
Edit: I'm a dumbass. The thing below is supposed to be just the motivation of asking. I want to ask for below and in general, hehe. Assume that we have a general one-period market model consisting of d+1 assets and N states. Using a replicating portfolio $\phi$, determine $\Pi(0;X)$, the price of a European call option, with payoff $X$, on the asset $S_1^2$ with strike price $K = 1$ given that $$S_0 =\begin{bmatrix} 2 \\ 3\\ 1 \end{bmatrix}, S_1 = \begin{bmatrix} S_1^0\\ S_1^1\\ S_1^2 \end{bmatrix}, D = \begin{bmatrix} 1 & 2 & 3\\ 2 & 2 & 4\\ 0.8 & 1.2 & 1.6 \end{bmatrix}$$ where the columns of D represent the states for each asset and the rows of D represent the assets for each state What I tried: We compute that: $$X = \begin{bmatrix} 0\\ 0.2\\ 0.6 \end{bmatrix}$$ If we solve $D'\phi = X$, we get: $$\phi = \begin{bmatrix} 0.6\\ 0.1\\ -1 \end{bmatrix}$$ It would seem that the price of the European call option $\Pi(0;X)$ is given by the value of the replicating portfolio $$S_0'\phi = 0.5$$ On one hand, if we were to try to see if there is arbitrage in this market by seeing if a state price vector $\psi$ exists by solving $S_0 = D \psi$, we get $$\psi = \begin{bmatrix} 0\\ -0.5\\ 1 \end{bmatrix}$$ Hence there is no strictly positive state price vector $\psi$ s.t. $S_0 = D \psi$. By 'the fundamental theorem of asset pricing' (or 'the fundamental theorem of finance' or '1.3.1' here), there exists arbitrage in this market. On the other hand the price of 0.5 seems to be confirmed by: $$\Pi(0;X) = \beta E^{\mathbb Q}[X]$$ where $\beta = \sum_{i=1}^{3} \psi_i = 0.5$ (sum of elements of $\psi$) and $\mathbb Q$ is supposed to be the equivalent martingale measure given by $q_i = \frac{\psi_i}{\beta}$. 
Thus we have $$E^{\mathbb Q}[X] = q_1X(\omega_1) + q_2X(\omega_2) + q_3X(\omega_3)$$ $$ = 0 + \color{red}{-1} \times 0.2 + 2 \times 0.6 = 1$$ $$\to \Pi(0;X) = 0.5$$ I guess $\therefore$ that we cannot determine the price of the European call using $\Pi(0;X) = \beta E^{Q}[X]$ because there is no equivalent martingale measure $\mathbb Q$ I noticed that one of the probabilities, in what was attempted to be the equivalent martingale measure, is negative. I remember reading about negative probabilities in Wiki and here However the following links mentioned by Wiki seem to assume absence of arbitrage so I think they are not applicable. Or are they? Is it perhaps that this market can be considered to be arbitrage-free under some quasiprobability measure that allows negative probabilities?
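To double-check the arithmetic above, here is a quick sketch (assuming numpy is available; it only verifies the numbers in the question, it is not a pricing recommendation):

```python
import numpy as np

# Payoff matrix: rows = assets, columns = states (as in the question).
D = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 4.0],
              [0.8, 1.2, 1.6]])
S0 = np.array([2.0, 3.0, 1.0])

# Call on the third asset with strike K = 1: payoff per state.
X = np.maximum(D[2] - 1.0, 0.0)            # [0, 0.2, 0.6]

# Replicating portfolio: D' phi = X, price = S0' phi.
phi = np.linalg.solve(D.T, X)              # [0.6, 0.1, -1]
price = S0 @ phi                           # 0.5

# State-price vector: D psi = S0.
psi = np.linalg.solve(D, S0)               # [0, -0.5, 1] -> not strictly positive
beta = psi.sum()                           # 0.5
q = psi / beta                             # [0, -1, 2], a signed "measure"
assert np.isclose(beta * (q @ X), price)   # both approaches give 0.5
```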
Local principles Part of the Universitext book series (UTX) Chapter Abstract Before we start our walk through the world of local principles, it is useful to give a general idea of what a local principle should be. A local principle will allow us to study invertibility properties of an element of an algebra by studying the invertibility properties of a (possibly large) family of (hopefully) simpler objects. These simpler objects will usually occur as homomorphic images of the given element. To make this more precise, consider a unital algebra \(\mathcal {A}\) and a family \(\mathcal{W} = (\mathsf {W}_{t})_{t \in T}\) of unital homomorphisms \(\mathsf {W}_{t} : \mathcal {A}\to \mathcal {B}_{t}\) from \(\mathcal {A}\) into certain unital algebras \(\mathcal {B}_{t}\). We say that \(\mathcal{W}\) forms a sufficient family of homomorphisms for \(\mathcal {A}\) if the following implication holds for every element \(a \in \mathcal {A}\): $$\mathsf {W}_t(a) \; \mbox{is invertible in} \; \mathcal {B}_t \; \mbox{for every } t \in T \quad \Longrightarrow \quad a \; \mbox{is invertible in} \; \mathcal {A}$$ (the reverse implication is satisfied trivially). Equivalently, the family \(\mathcal{W}\) is sufficient if and only if $$\sigma_{\mathcal{A}} (a) \subseteq \bigcup_{t \in T} \sigma_{\mathcal {B}_t}\bigl(\mathsf {W}_t(a)\bigr) \qquad \mbox{for all} \; a \in \mathcal{A}$$ (again with the reverse inclusion holding trivially). In case the family \(\mathcal{W}\) is a singleton, {W} say, then \(\mathcal{W}\) is sufficient if and only if W is a symbol mapping in the sense of Section 1.2.1. Keywords: Banach Algebra, Compact Hausdorff Space, Local Algebra, Maximal Ideal Space, Commutative Banach Algebra. Copyright information © Springer-Verlag London Limited 2011
December 1st, 2018, 04:33 AM # 1 Newbie Joined: Dec 2018 From: Canada Posts: 1 Thanks: 0 The reciprocal of the sine of an angle, when the side opposite that angle equals one, gives the diameter of the circumscribed circle in any triangle. In a right triangle with hypotenuse 1, $\sin 90^\circ = 1$ for the angle $\theta$ opposite the hypotenuse, so the diameter equals the hypotenuse, 1. My question is: what is the right explanation concerning these reciprocals for sides $a,b,c$ in any $\triangle ABC$ when the base or the hypotenuse equals 1? 1) explanation: I have a triangle with sides $5, 5, 4$ and one with sides $1.25, 1.25, 1$, with all the same angles; all I did is divide the sides $5, 5, 4$ by 4 to obtain the simplified version of the triangle, with sides $1.25, 1.25, 1$. In this triangle with base 1, the angle $C$ opposite the base satisfies $\cos C = 0.68$, so $\sin C = \sqrt {1-0.68^2}$. The reciprocal $1/\sin C$ is the diameter of the circumscribed circle; this works in terms of $\sin$, but not in terms of cosine. The law of sines states that: $$\frac {\sin A}{a}=\frac {\sin B}{b}=\frac {\sin C}{c}=\frac {1}{d}$$ or equivalently $$\frac {a}{\sin A}=\frac {b}{\sin B}=\frac {c}{\sin C}={d}$$ where d is the diameter. Last edited by skipjack; December 1st, 2018 at 08:52 AM.
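The numbers in the post can be checked directly; this sketch (plain Python, standard library only) recomputes cos C, sin C, and the diameter, and cross-checks against the circumradius formula R = abc/(4K):

```python
import math

# Triangle with sides a = b = 1.25 and base c = 1
# (the 5, 5, 4 triangle scaled down by 4).
a, b, c = 1.25, 1.25, 1.0

# Law of cosines for the apex angle C (opposite the base c).
cosC = (a*a + b*b - c*c) / (2*a*b)   # 0.68, as in the post
sinC = math.sqrt(1 - cosC*cosC)

# Law of sines: c / sin C is the circumscribed circle's diameter.
d = c / sinC

# Cross-check with the circumradius formula R = abc/(4K), K from Heron.
s = (a + b + c) / 2
K = math.sqrt(s*(s - a)*(s - b)*(s - c))
assert math.isclose(d, 2 * a*b*c / (4*K))
print(round(cosC, 2), round(d, 4))  # 0.68 1.3639
```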
Research Open Access Published: Multiple periodic solutions for second order Josephson-type differential systems Boundary Value Problems volume 2017, Article number: 99 (2017) Abstract In this paper, some new existence theorems are obtained for multiple periodic solutions of second order Josephson-type differential systems with partially periodic potential by using the minimax methods in critical point theory, which generalize and improve some known results in the literature. Introduction In this paper, we study the second order Josephson-type differential systems, where A is an \((N\times N)\)-symmetric matrix, \(h(t)\in L^{1}([0,T]; \mathbb{R}^{N})\), \(T>0\), \(\nabla F(t,x)\) denotes the gradient of F with respect to the second variable, and \(F:[0,T]\times \mathbb{R}^{N}\rightarrow \mathbb{R}\) satisfies the following assumptions: (H1): \(F(t,x)\) is measurable in t for each \(x\in \mathbb{R}^{N}\) and continuously differentiable in x for a.e. \(t\in [0,T]\), and there exist \(a\in C(\mathbb{R}^{+},\mathbb{R}^{+})\), \(b\in L^{1}([0,T]; \mathbb{R}^{+})\) such that$$ \bigl\vert F(t,x) \bigr\vert \leq a \bigl(\vert x \vert \bigr)b(t), \qquad \bigl\vert \nabla F(t,x) \bigr\vert \leq a \bigl(\vert x \vert \bigr)b(t) $$ for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\). (H2): \(\operatorname{dim} N(A)=m\geq 1\) and the matrix A has no eigenvalue of the form \(k^{2}\upsilon^{2}\) (\(k\in \mathbb{N}\setminus \{0\}\)), where \(\upsilon =2 \pi /T\). (H3): There exist linearly independent vectors \(e_{j}\in \mathbb{R}^{N}\) (\(1 \leq j\leq m\)) such that$$ N(A)=\operatorname{span} \{e_{1},e_{2},\ldots ,e_{m}\} $$ and$$ \int_{0}^{T} \bigl(h(t),e_{j} \bigr) \,dt=0.
$$ This problem (1.1) occurs in various branches of mathematical physics; for example, when \(A=N^{2}D^{2}\) and \(-\nabla F(t,u(t))=f(u(t))=(a _{1}\sin u_{1},\ldots ,a_{N}\sin u_{N})\), problem (1.1) reduces to a nonlinear system of forced, linearly coupled pendulum equations, where D is an \((N\times N)\)-symmetric matrix. This type of problem can be applied to describe the motion of forced linearly coupled pendulums. During the past two decades, the existence of periodic solutions for second order differential systems has been studied extensively, and many solvability conditions have been obtained via variational methods and critical point theory. In this direction we mention the papers [1–15], and we refer the reader to [16–19] for a broad introduction to variational methods and critical point theory. It might also be interesting to study the above mentioned abstract equations with more general potentials; see the paper [20]. In the classical monograph [6], Mawhin and Willem proved that problem (1.1) has at least one solution by using the saddle point theorem under the following boundedness condition: there exists \(g\in L^{1}([0,T]; \mathbb{R}^{+})\) such that \(\bigl\vert \nabla F(t,x) \bigr\vert \leq g(t)\) for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\). They obtained the following result. Theorem A [6] Suppose that F satisfies (H1)-(H3), (1.3) and (H4): there exists \(T_{j}>0\) such that$$ F ( t,x+T_{j}e_{j} ) =F(t,x), \quad 1\leq j\leq m, $$ for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\). Then Eq.
(1.1) has at least one solution in \(H_{T}^{1}\), where the Sobolev space \(H_{T}^{1}\) is defined by$$\begin{aligned} H_{T}^{1}&= \bigl\{ u:[0,T]\rightarrow \mathbb{R}^{N} \mid \textit{u is absolutely continuous}, \\ &\quad u(0)=u(T) \textit{ and } \dot{u}\in L^{2} \bigl([0,T]; \mathbb{R}^{N} \bigr) \bigr\} \end{aligned}$$ and \(H_{T}^{1}\) is a Hilbert space with the norm$$ \Vert u \Vert = \biggl( \int_{0}^{T} \bigl\vert \dot{u}(t) \bigr\vert ^{2}\,dt+ \int_{0}^{T} \bigl\vert u(t) \bigr\vert ^{2}\,dt \biggr) ^{\frac{1}{2}}, \quad u\in H_{T}^{1}. $$ When the nonlinearity \(\nabla F(t,x)\) is sublinear, that is, there exist \(f,g\in L^{1}([0,T];\mathbb{R}^{+})\) and \(\alpha \in [0,1)\) such that$$ \bigl\vert \nabla F(t,x) \bigr\vert \leq f(t)\vert x \vert ^{\alpha }+g(t) \quad (1.4) $$ for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\), consider: (H5): There exist \(T_{j}>0\), \(1\leq r\leq m\) such that$$ F ( t,x+T_{j}e_{j} ) =F(t,x), \quad 1\leq j\leq r, $$ for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\). (H6): \(\lim_{\Vert x \Vert \rightarrow \infty }\frac{\int_{0}^{T}F(t,x)\,dt}{ \Vert x \Vert ^{2\alpha }}=-\infty \), as \(x\in N(A)\ominus \operatorname{span}\{e_{1},e_{2},\ldots ,e_{r}\}\). Theorem B [3] In [15], the author obtained the following result. Theorem C [15] Suppose that F satisfies (H1)-(H3), (H5), (1.4) and the following generalized Ahmad-Lazer-Paul type coercive condition: (H7): \(\lim_{\Vert x \Vert \rightarrow \infty }\frac{\int_{0}^{T}F(t,x)\,dt}{\Vert x \Vert ^{2\alpha }}<-L\), as \(x\in N(A)\ominus \operatorname{span}\{e_{1},e_{2},\ldots ,e_{r}\}\), where L is a positive constant. Then Eq. (1.1) has at least \(r+1\) distinct solutions in \(H_{T}^{1}\). In this paper, we use a more general control function instead of \(\vert x \vert ^{\alpha }\) in (1.4). By using the generalized saddle point theorem due to Liu [5], we can prove the existence of multiple periodic solutions for second order Josephson-type differential systems for a new and large range of the nonlinear term.
Preliminaries In [6], Mawhin and Willem established a variational structure which enables us to reduce the existence of solutions for problem (1.1) to the existence of critical points of the following energy functional. Define the energy functional associated with problem (1.1) on \(H_{T}^{1}\) It follows from assumption (H1) that the functional φ is continuously differentiable. Moreover, one has Let Therefore, we can see that where I denotes the identity operator on \(H_{T}^{1}\) and \(K:H_{T} ^{1}\rightarrow H_{T}^{1}\) is the linear self-adjoint operator defined, using Riesz representation theorem, by It is easy to see that K is compact. By classical spectral theory, we can decompose \(H_{T}^{1}\) into the orthogonal sum of invariant subspaces for \(I-K\) where \(H^{0}=\mathrm{Ker}(I-K)=N(A)\) and \(\operatorname{dim} H^{-}<+\infty \), for some \(\delta >0\), we have Lemma 2.1 [6] There is a continuous embedding \(H_{T}^{1}\hookrightarrow C([0,T],\mathbb{R}^{N})\), and the embedding is compact. Then there exists \(C_{0}>0\) such that Define then where \(u^{-}\in H^{-}, u^{+}\in H^{+}\), \(Pu^{0}\in Y_{1}\) and \(Qu^{0}=\sum_{j=1}^{r}c_{j}e_{j}\). Let be a discrete subgroup of \(H_{T}^{1}\), where \(\mathbb{Z}\) is the set of all integers, and let \(\pi :H_{T}^{1}\rightarrow H_{T}^{1}/G\) be the canonical surjection. Let where \(W=H^{+}\), \(Z=H^{-}\oplus Y_{1}\), \(V=Y_{0}/G\), then \(\dim Z<+ \infty \), \(\dim V<+\infty \), and V is isomorphic to the torus \(T^{r}\). The element in V can be represented as where \(\hat{c}_{j}=c_{j}-k_{j}T_{j}\), \(0\leq \hat{c}_{j}< T_{j}\). Let By (H3) and (H5), we have and Thus, \(\varphi (u)=\varphi (\hat{u})\), \(\varphi '(u)=\varphi '( \hat{u})\). Define \(\psi : X\times V\mapsto \mathbb{R}\): \(\psi (\pi (u))= \varphi (u)\), then ψ is well defined. 
Moreover, ψ is continuously differentiable and Definition 2.1 [6] ψ is said to satisfy the (PS) condition if every sequence \(\{x_{n}\}\) of \(X\times V\) such that \(\psi (x_{n})\) is bounded and \(\psi '(x_{n})\rightarrow 0\) as \(n\rightarrow \infty \) possesses a convergent subsequence. Lemma 2.2 (The generalized saddle point theorem [5]) Let X be a Banach space with a decomposition \(X=Z+W\), where Z and W are two subspaces of X with \(\dim Z<+\infty \). Let V be a finite-dimensional, compact \(C^{2}\)-manifold without boundary. Let \(\psi :X\times V\rightarrow \mathbb{R}\) be a \(C^{1}\)-function satisfying the (PS) condition. Suppose that there exist constants \(\rho >0\) and \(\gamma <\beta \) such that where \(S=\partial D\), \(D=\{z\in Z\mid \vert z \vert \leq \rho \}\). Then the functional ψ has at least \(\operatorname{cuplength}(V)+1\) critical points. Main results Here are our main results. Theorem 3.1 Suppose that assumptions (H1)-(H3), (H5) hold and there exist constants \(M_{i}>0\), \(i=0,1,2\), and a nonnegative function \(\omega \in C([0,\infty ),[0,\infty ))\) with the properties: (ω1): \(\omega (s)\leq \omega (t)\), \(\forall s\leq t\), \(s,t \in [0,\infty )\); (ω2): \(\omega (s+t)\leq M_{0}(\omega (s)+\omega (t))\), \(\forall s,t\in [0,\infty )\); (ω3): \(0\leq \omega (s)\leq M_{1}s+M_{2}\), \(\forall s \in [0,\infty )\); (ω4): \(\omega (s)\rightarrow +\infty \) as \(s\rightarrow +\infty \). Moreover, suppose there exist a constant \(a>3\) and \(f,g\in L ^{1}([0,T];\mathbb{R}^{+})\) with such that for all \(x\in \mathbb{R}^{N}\) and a.e. \(t\in [0,T]\), and as \(x\in N(A)\ominus \operatorname{span}\{e_{1},e_{2},\ldots ,e_{r}\}\). Then Eq. (1.1) has at least \(r+1\) distinct solutions in \(H_{T}^{1}\). Theorem 3.2 as \(x\in N(A)\ominus \operatorname{span}\{e_{1},e_{2},\ldots ,e_{r}\}\). Then Eq. (1.1) has at least \(r+1\) distinct solutions in \(H_{T}^{1}\).
Corollary 3.1 as \(x\in N(A)\ominus \operatorname{span}\{e_{1},e_{2},\ldots ,e_{r}\}\). Then Eq. (1.1) has at least \(r+1\) distinct solutions in \(H_{T}^{1}\). Remark 3.1 (i) When \(A\equiv 0\), assumptions ( ω1)-( ω4) and condition (3.2) were introduced in [10]. Comparing with the results in [10], the periodicity and coercivity conditions in our Theorem 3.1 are only in a part of variables of potentials, and we obtained multiplicity of periodic solutions for problem (1.1). (ii) To show that our Theorem 3.1 is new, we give an example to illustrate our result. For example, let \(1\leq r\leq m\), \(x=(x_{1},x _{2},\ldots ,x_{N})^{T}\in \mathbb{R}^{N}\), and For example, let \(x=(x_{1},x_{2},\ldots ,x_{N})^{T}\in \mathbb{R}^{N}\), \(\omega (\vert x \vert )=\vert x \vert \) and For the sake of convenience, we denote by \(C_{i}\) (\(i=1,2,3,\ldots ,33\)) various positive constants. Proof of Theorem 3.1 First, we prove that ψ satisfies the (PS) condition. Let \(\pi :W_{T}^{1,p(t)}\rightarrow W_{T}^{1,p(t)}/G\) be the canonical surjection. Define \(\psi : X\times V\mapsto \mathbb{R}\) by \(\psi ( \pi (u))=\varphi (u)\). Assume that \((\pi (u_{n}))\) is a (PS) sequence for ψ, that is, \(\psi (\pi (u_{n}))\) is bounded and \(\psi '( \pi (u_{n}))\rightarrow 0\). Then \(\varphi (u_{n})\) is bounded and \(\varphi '(u_{n})\rightarrow 0\). We can get from ( ω1), ( ω2), and ( ω3) that By (2.3) and the boundedness of \(\vert Q\hat{u}^{0} \vert \), we have From (H3) and (2.3), we obtain that for large n. So we have where \(C_{5}=\min_{s\in [0,+\infty )} \{ [ \delta -(2+a)M _{0}M_{1}C_{0}^{2}\int_{0}^{T}f(t)\,dt ] s^{2}-C_{4}s \} \). From (3.1), one has that \((2+a)M_{0}M_{1}C_{0}^{2}\int_{0}^{T}f(t)\,dt<\delta \), then \(C_{5}<0\). 
Hence In a similar way, we have Combining the above two inequalities, one has that Consequently, Using similar arguments, we can prove that In a similar way, we can obtain By (3.2) and ( ω1), one has Hence, we have which implies \(\vert Pu^{0}_{n} \vert \) is bounded. Otherwise, we assume \(\vert Pu^{0}_{n} \vert \rightarrow \infty \) as \(n\rightarrow \infty \). From ( ω4), we obtain that By (3.3), we conclude that this contradicts the boundedness of \(\{\varphi (u_{n})\}\), so \(\vert Pu^{0}_{n} \vert \) is bounded. Combining (3.8) and (3.9), we obtain that \(\Vert u^{+}_{n} \Vert \) and \(\Vert u^{-}_{n} \Vert \) are bounded. Furthermore, \(\vert Q\hat{u}^{0} \vert \) is bounded, so \(\{\hat{u}_{n}\}\) is bounded in \(H^{1}_{T}\). Arguing then as in Proposition 4.1 in [6], \(\{\hat{u}_{n}\}\) has a convergent subsequence. By \(\pi (\hat{u}_{n})=\pi (u_{n})\), we conclude that ψ satisfies the (PS) condition. Next, we only need to verify the linking conditions of the generalized saddle point theorem: (a) For \(\pi (u)\in W\times V\), \(u(t)=u^{+}(t)+Qu^{0}\). By the proof of (3.12), we have Hence Noting the boundedness of \(\vert Qu^{0} \vert \) and (3.1), we obtain that \(\psi (\pi (u))\rightarrow +\infty \) as \(\Vert u \Vert \rightarrow \infty \) for all \(\pi (u)\in W\times V\), which implies that there exists \(\beta \in \mathbb{R}\) such that \(\psi (\pi (u))\geq \beta \) on \(W\times V\). (b) In a way similar to the proof of (3.12), we have Consequently, From \(a>3\) and (3.1), one has that \({-}\delta +5M_{0}M_{1}C_{0}^{2} \int_{0}^{T}f(t) \, dt<0\). By (3.3), we deduce that so we obtain that \(\psi (\pi (u))\rightarrow -\infty \) as \(\Vert u \Vert \rightarrow \infty \) for all \(\pi (u)\in Z\times V\), which implies that, for ρ large enough, there exists \(\gamma <\beta \) such that \(\psi (\pi (u))\leq \gamma \) on \(S\times V\).
The functional ψ satisfies all the assumptions of Lemma 2.2, so it has at least \(\operatorname{cuplength}(V)+1\) critical points, and since V is the torus \(T^{r}\), then \(\operatorname{cuplength}(V)=r\). Hence φ has at least \(r+1\) critical points. Therefore, problem (1.1) has at least \(r+1\) distinct solutions in \(H_{T}^{1}\). The proof of Theorem 3.1 is completed. □ Proof of Theorem 3.2 Remark 3.2 ( ω3)′: \(0\leq \omega (s)\leq M_{1}s^{\alpha }+M_{2}\), \(\forall s,t\in [ 0, \infty )\), where \(0\leq \alpha <1\). References 1. Chang, KQ: On the periodic nonlinearity and the multiplicity of solutions. Nonlinear Anal., Theory Methods Appl. 13, 527-537 (1989) 2. Faraci, F: Multiple periodic solutions for second order systems with changing sign potential. J. Math. Anal. Appl. 319, 567-578 (2006) 3. Feng, JX, Han, ZQ: Periodic solutions to differential systems with unbounded or periodic nonlinearities. J. Math. Anal. Appl. 323, 1264-1278 (2006) 4. Han, ZQ, Wang, SQ: Multiple solutions for nonlinear systems with gyroscopic terms. Nonlinear Anal., Theory Methods Appl. 75, 5756-5764 (2012) 5. Liu, JQ: A generalized saddle point theorem. J. Differ. Equ. 82, 372-385 (1989) 6. Mawhin, J, Willem, M: Critical Point Theory and Hamiltonian Systems. Springer, New York (1989) 7. Ning, Y, An, TQ: Periodic solutions of a class of nonautonomous second-order Hamiltonian systems with nonsmooth potentials. Bound. Value Probl. 2015, 34 (2015) 8. Pipan, J, Schechter, M: Non-autonomous second order Hamiltonian systems. J. Differ. Equ. 257, 351-373 (2014) 9. Schechter, M: Periodic second order superlinear Hamiltonian systems. J. Math. Anal. Appl. 426, 546-562 (2015) 10. Wang, ZY, Zhang, JH: Periodic solutions of a class of second order non-autonomous Hamiltonian systems. Nonlinear Anal., Theory Methods Appl. 72, 4480-4487 (2010) 11. Xiao, L: Existence of periodic solutions for second order Hamiltonian system. Bull. Malays. Math. Soc. 35, 785-801 (2012) 12. 
Tang, CL: Periodic solutions of second order nonautonomous systems with sublinear nonlinearity. Proc. Am. Math. Soc. 126, 3263-3270 (1998) 13. Tang, CL: A note on periodic solutions of second order systems. Proc. Am. Math. Soc. 132, 1295-1393 (2003) 14. Zhang, XY, Tang, XH: Periodic solutions for an ordinary p-Laplacian system. Taiwan. J. Math. 12, 1369-1396 (2011) 15. Zhang, SG: Multiple periodic solutions for a class of sublinear nonautonomous second order system. Acta Anal. Funct. Appl. 15, 12-20 (2013) 16. Rabinowitz, PH: Minimax Methods in Critical Point Theory with Applications to Differential Equations. CBMS Regional Conference Series in Mathematics, vol. 65. Am. Math. Soc., Providence (1986) 17. Willem, M: Minimax Theorems. Birkhäuser, Boston (1996) 18. Schechter, M: Linking Methods in Critical Point Theory. Birkhäuser, Boston (1999) 19. Bartsch, T: Critical point theory on partially ordered Hilbert spaces. J. Funct. Anal. 186, 117-152 (2001) 20. Shahmurov, R: Solution of the Dirichlet and Neumann problems for a modified Helmholtz equation in Besov spaces on an annulus. J. Differ. Equ. 249, 526-550 (2010) Acknowledgements This work is supported by the National Natural Science Foundation of China (No. 31260098). Competing interests The author declares that they have no competing interests. Author’s contributions The author read and approved the final manuscript. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. MSC: 34C25; 58E50. Keywords: periodic solution; critical point; second order Josephson-type differential systems; periodicity; minimax methods; the generalized saddle point theorem
In the figure above, the area of the circle is 50 and the area of the triangle is 15. If the value of $\sin\theta + \sin\alpha + \sin\beta$ equals $\dfrac{m}{n}\pi$ for coprime positive integers $m$ and $n$, find the value of $m+n$.
This is a problem from Billingsley's text Probability and measure. Suppose that $f$ is nonnegative on a $\sigma$-finite measure space $(\Omega, \mathscr{F}, \mu)$. Show that $$\int_\Omega f d \mu = (\mu \times \lambda)[(\omega, y) \in \Omega \times \mathbb{R}^1: 0 \leq y \leq f(\omega)]. \tag{1}$$ Prove that the set on the right is measurable. My Attempt: Denote the set on the right by $G$. First, by condition, the product measure $\mu \times \lambda$ exists on the $\sigma$-field $\mathscr{F} \times \mathscr{R}^1$. I am able to show that $G$ is measurable and $(1)$ holds if $f$ are simple functions. It looks naturally that for general nonnegative $f$, let $\{f_n\}$ be a sequence of simple functions such that $f_n \uparrow f$. Then define $G_n = [(\omega, y) \in \Omega \times \mathbb{R}^1: 0 \leq y \leq f_n(\omega)]$ accordingly for every $n$. It is expected that $G_n \uparrow G$ so that the measurability of $G$ follows from the measurability of $G_n$. However, this seems unjustified, since in fact $$G_n \uparrow [(\omega, y) \in \Omega \times \mathbb{R}^1: 0 \leq y \color{red}{<} f(\omega)].$$ So we have to show that $$[(\omega, y) \in \Omega \times \mathbb{R}^1: f(\omega) = y, y \geq 0] = [(\omega, f(\omega)): f(\omega) \geq 0] \tag{2}$$ is measurable. And here is where I got stuck. Can someone show me why $(2)$ is measurable or to show the assertion through other ways? Thank you very much.
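For the measurability question, one standard route avoids the limit argument (and the graph set) entirely; here is a sketch:

```latex
% The map g(\omega, y) = f(\omega) - y is \mathscr{F} \times \mathscr{R}^1-measurable:
% (\omega, y) \mapsto f(\omega) and (\omega, y) \mapsto y are measurable
% (compositions of f and the identity with the coordinate projections),
% and a difference of measurable functions is measurable. Then
G = \bigl(\Omega \times [0, \infty)\bigr) \cap g^{-1}\bigl([0, \infty)\bigr),
% an intersection of two measurable sets, hence measurable.
% The same map also settles (2): the graph set equals
\bigl(\Omega \times [0, \infty)\bigr) \cap g^{-1}\bigl(\{0\}\bigr).
```

This handles the non-strict inequality $0 \leq y \leq f(\omega)$ directly, so the approximation by simple functions is then only needed for the identity (1) itself, not for measurability.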
How to find the horizontal asymptote of this function: $f(x)= \sqrt {9x^2+2x}-3x$? I have tried to find the limit as x goes to infinity and to negative infinity; then what? Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Assuming you meant $f(x)=\sqrt{9x^2+2x}-3x$, we can write $$ \begin{align} \sqrt{9x^2+2x}-3x &=\left(\sqrt{9x^2+2x}-3x\right)\frac{\sqrt{9x^2+2x}+3x}{\sqrt{9x^2+2x}+3x}\\ &=\frac{2x}{\sqrt{9x^2+2x}+3x}\\ &=\frac{2}{\sqrt{9+2/x}+3}\\ &\to\frac13 \end{align} $$ as $x\to\infty$, so $y=\frac13$ is a horizontal asymptote. (As $x\to-\infty$, note $\sqrt{9x^2+2x}=-3x\sqrt{1+2/(9x)}$, so $f(x)\approx -6x\to+\infty$ and there is no horizontal asymptote on that side.) The technique of multiplying $\left(\sqrt{a}-\sqrt{b}\right)$ by $\frac{\sqrt{a}+\sqrt{b}}{\sqrt{a}+\sqrt{b}}$ to get $\frac{a-b}{\sqrt{a}+\sqrt{b}}$ is very useful in a number of problems.
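As a quick numerical sanity check of the limits (a Python sketch, just to confirm the algebra):

```python
import math

def f(x):
    # f(x) = sqrt(9x^2 + 2x) - 3x
    return math.sqrt(9 * x**2 + 2 * x) - 3 * x

# As x -> +infinity, f(x) approaches 1/3 ...
assert abs(f(1e8) - 1/3) < 1e-6
# ... while as x -> -infinity, f(x) ~ -6x blows up, so no asymptote there.
assert f(-1e8) > 1e8
```

Evaluating at a moderately large argument already shows the convergence: f(100) is about 0.333.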
For simplicity, let us talk about a scalar field $\phi : \mathbb{R}^4 \rightarrow \mathbb{R}$. The action for a free scalar field is $$S[\phi] = \frac{1}{2}\int_{\mathbb{R}^4} \partial_\mu\phi\partial^\mu\phi - m^2\phi^2$$ and its classical equations of motion is the Klein-Gordon equation $$ (\partial_\mu\partial^\mu + m^2) \phi = 0 $$ Now that looks suspiciously like an oscillator or wave equation, doesn't it? This inspires us to do a Fourier transform to obtain the eigenfunctions $\mathrm{e}^{\mathrm{i}px}$ solving this equation. The general solution $\phi(x)$ can then be expanded as $$\phi(x) = \int \frac{\mathrm{d}^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_p}}(a(\vec p)\mathrm{e}^{\mathrm{i}px} + a^\dagger(\vec p)\mathrm{e}^{-\mathrm{i}px}) $$ which is precisely the expansion one could do for any other oscillator. Now, you can talk about the modes $a(\vec p)$ and $a^\dagger(\vec p)$ of the field being excited, and you can imagine $\mathrm{e}^{\mathrm{i}px}$ describing a (basic) oscillation at any point $x$, and talk about the integral representing the field being made out of such oscillators. This is all nonsense. This might sound strong, but it has been the source of many annoying misunderstandings in the publicization of quantum theories to laypeople. Just because something ($\phi$) fulfills a wave/oscillator equation and has a mode expansion (as the above is called), it does not mean that anything oscillates. It's just the same type of equation you encounter in oscillator, not the same physical situation. It's a nice pretty picture to tell ourselves that we understand the quantum field, but ultimately, there is nothing there that would justify the oscillator interpretation. Nothing physical is vibrating or oscillating here. Furthermore, the above only holds for a free, non-interacting field. 
When you have a field with arbitrary interactions, its equations of motion may not look at all like the wave/oscillator equation, and it has no modes, so the picture falls apart there completely. Since the comments show that this is more controversial than I thought, I will elaborate a bit: Electromagnetic waves also look as if there is a harmonic oscillator at every point in space, by the same logic of mode expansion. The descriptions are formally equivalent. This led people to believe that there is the luminiferous aether, because, how else could empty space carry the wave? But this turned out to be not true, there is no aether, and there is nothing carrying the wave. The formal equivalence is misleading, there is no physical object oscillating when the wave travels through the vacuum. I am not saying that anything about the formal treatment is false, I am trying to explain why it is not a good idea to believe that the oscillator description is a good physical interpretation of the situation, which is what I believe the OP refers to when he mentions a book saying "This is not true" about the "oscillators at every point in space" idea.
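None of this disputes the formal statements above, which are easy to check mechanically: with the dispersion relation $\omega_p = \sqrt{p^2 + m^2}$, a plane-wave mode does satisfy the Klein-Gordon equation. A small numerical sketch in Python (1+1 dimensions, illustrative values chosen here, not from the text):

```python
import numpy as np

m, p = 1.0, 2.0
w = np.sqrt(p**2 + m**2)                          # dispersion relation omega_p
phi = lambda t, x: np.exp(1j * (p * x - w * t))   # one plane-wave mode

# Apply the Klein-Gordon operator (d_t^2 - d_x^2 + m^2) via central differences.
h, t, x = 1e-3, 0.7, 0.3
d2t = (phi(t + h, x) - 2 * phi(t, x) + phi(t - h, x)) / h**2
d2x = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2
residual = d2t - d2x + m**2 * phi(t, x)
assert abs(residual) < 1e-4   # zero up to discretization error
```

The point of the answer stands: the check confirms the formal equivalence to an oscillator equation, not a physical interpretation.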
The inverse tangent function atan(z)(denoted by Atan(z) in the Fungrim formula language) is a function of a single variable. The following table lists conditions such that Atan(z) is defined in Fungrim. The inverse tangent function atan2(y,x)(denoted by Atan2(y, x) in the Fungrim formula language) is a function of two variables. The following table lists conditions such that Atan2(y, x) is defined in Fungrim. Entry(ID("ce3a8e"),SymbolDefinition(Atan2, Atan2(y, x), "Two-argument inverse tangent"),Description("The inverse tangent function", Atan2(y, x), "(denoted by", SourceForm(Atan2(y, x)), "in the Fungrim formula language)", "is a function of two variables.", "The following table lists conditions such that", SourceForm(Atan2(y, x)), "is defined in Fungrim."),Table(TableRelation(Tuple(P, Q), Implies(P, Q)), TableHeadings(Description("Domain"), Description("Codomain")), List(TableSection("Numbers"), Tuple(And(Element(y, RR), Element(x, RR)), Element(Atan2(y, x), OpenClosedInterval(Neg(ConstPi), ConstPi)))))) An X-ray plot illustrates the geometry of a complex analytic function f(z). Thick black curves show where Im(f(z))=0(the function is pure real). Thick red curves show where Re(f(z))=0(the function is pure imaginary). Points where black and red curves intersect are zeros or poles. Magnitude level curves ∣f(z)∣=Care rendered as thin gray curves, with brighter shades corresponding to larger C. Blue lines show branch cuts. The value of the function is continuous with the branch cut on the side indicated with a solid line, and discontinuous on the side indicated with a dashed line. Yellow is used to highlight important regions. 
Entry(ID("8bb3d8"),Image(Description("X-ray of", Atan(z), "on", Element(z, Add(ClosedInterval(-2, 2), Mul(ClosedInterval(-2, 2), ConstI)))), ImageSource("xray_atan")),Description("An X-ray plot illustrates the geometry of a complex analytic function", f(z), ".", "Thick black curves show where", Equal(Im(f(z)), 0), "(the function is pure real).", "Thick red curves show where", Equal(Re(f(z)), 0), "(the function is pure imaginary).", "Points where black and red curves intersect are zeros or poles.", "Magnitude level curves", Equal(Abs(f(z)), C), "are rendered as thin gray curves, with brighter shades corresponding to larger", C, ".", "Blue lines show branch cuts.", "The value of the function is continuous with the branch cut on the side indicated with a solid line, and discontinuous on the side indicated with a dashed line.", "Yellow is used to highlight important regions.")) \operatorname{atan}\!\left(\overline{z}\right) = \overline{\operatorname{atan}\!\left(z\right)}z \in \mathbb{C} \,\mathbin{\operatorname{and}}\, i z \notin \left(-\infty, -1\right) \cup \left(1, \infty\right) \operatorname{atan2}\!\left(y, x\right) = -i \log\!\left(\operatorname{sgn}\!\left(x + y i\right)\right)x \in \mathbb{R} \,\mathbin{\operatorname{and}}\, y \in \mathbb{R} \,\mathbin{\operatorname{and}}\, x + y i \ne 0 \operatorname{atan2}\!\left(y, x\right) = \operatorname{Im}\!\left(\log\!\left(x + y i\right)\right)x \in \mathbb{R} \,\mathbin{\operatorname{and}}\, y \in \mathbb{R} \,\mathbin{\operatorname{and}}\, x + y i \ne 0 \operatorname{Im}\!\left(\operatorname{atan}\!\left(x + y i\right)\right) = \frac{1}{4} \log\!\left(\frac{{x}^{2} + {\left(1 + y\right)}^{2}}{{x}^{2} + {\left(1 - y\right)}^{2}}\right)x \in \mathbb{R} \,\mathbin{\operatorname{and}}\, y \in \mathbb{R} \,\mathbin{\operatorname{and}}\, x + y i \notin \left\{-i, i\right\}
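Two of the identities listed above are easy to spot-check numerically; here is a Python sketch using the standard library (`math`/`cmath`, not the Fungrim formula language):

```python
import cmath
import math

# atan2(y, x) = Im(log(x + y*i)) for x + y*i != 0
for x, y in [(1.0, 2.0), (-1.5, 0.5), (-2.0, -3.0), (0.0, 1.0)]:
    assert math.isclose(math.atan2(y, x), cmath.log(complex(x, y)).imag)
    # and the result lies in (-pi, pi]
    assert -math.pi < math.atan2(y, x) <= math.pi

# atan(conj(z)) = conj(atan(z)) away from the branch cuts
z = complex(0.4, -0.8)   # i*z is not in (-inf, -1) or (1, inf)
assert cmath.isclose(cmath.atan(z.conjugate()), cmath.atan(z).conjugate())
```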
A Lack of Confidence Interval Thu 15 February 2018by Steven E. Pav For some years now I have been playing around with a certain problemin portfolio statistics: suppose you observe \(n\) independent observationsof a \(p\) vector of returns, then form the Markowitz portfolio based onthose returns. What then is the distribution of what I call the 'signal tonoise ratio' of that Markowitz portfolio, defined as the true expectedreturn divided by the true volatility. That is, if \(\nu\) is the Markowitzportfolio, built on a sample, its 'SNR' is \(\nu^{\top}\mu /\sqrt{\nu^{\top}\Sigma \nu}\), where \(\mu\) is the population mean vector, and\(\Sigma\) is the population covariance matrix. This is an odd problem, somewhat unlike classical statistical inference because the unknown quantity, the SNR, depends on population parameters, but also thesample. It is random and unknown. What you learn in your basic statistics class isinference on fixed unknowns. (Actually, I never really took a basic statisticsclass, but I think that's right.) Paulsen and Sohl made some progress on this problem in their 2016 paper on whatthey call the Sharpe Ratio Information Criterion.They find a sample statistic which is unbiased for the portfolio SNR whenreturns are (multivariate) Gaussian. In my mad scribblings on the backs ofenvelopes and scrap paper, I have been trying to find the distribution of the SNR.I have been looking for this love, as they say, in all the wrong places,usually hoping for some clever transformation that will lead to a slick proof.(I was taught from a young age to look for slick proofs.) Having failed that mission, I pivoted to looking for confidence intervals forthe SNR (and maybe even read more prediction intervals on the out-of-sample Sharpe ratioof the in-sample Markowitz portfolio). I realized that some of the work I haddone … geom cloud. Thu 21 September 2017by Steven E. 
Pav I wanted a drop-in replacement for geom_errorbar in ggplot2 that wouldplot a density cloud of uncertainty. The idea is that typically (well, where I work), the ymin and ymax of an errorbar are plotted at plus and minus one standard deviation. A 'cloud' where the alpha is proportional to a normaldensity with the same standard deviations could show the same informationon a plot with a little less clutter. I found out how to do this witha very ugly function, but wanted to do it the 'right' way by spawning myown geom. So the geom_cloud. After looking at a bunch of other ggplot2 extensions, some amount oftinkering and hair-pulling, and we have the following code. The first partjust computes standard deviations which are equally spaced in normal density.This is then used to create a list of geom_ribbon with equal alpha, butthe right size. A little trickery is used to get the scales right. Thereare three parameters: the steps, which control how many ribbons are drawn.The default value is a little conservative. A larger value, like 15, givesvery smooth clouds. The se_mult is the number of standard deviations thatthe ymax and ymin are plotted at, defaulting to 1 here. If you plotyour errorbars at 2 standard errors, change this to 2. The max_alpha is thealpha at the maximal density, i.e. around y. read more # get points equally spaced in density equal_ses <- function(steps) { xend <- c(0,4) endpnts <- dnorm(xend) # perhaps use ppoints instead? deql <- seq(from=endpnts[1],to=endpnts[2],length.out=steps+1) davg <- (deql[-1] + deql[-length(deql)])/2 # invert xeql <- unlist(lapply(davg,function(d) { uniroot(f=function(x) { dnorm(x) - d },interval=xend)$root })) xeql } library(ggplot2) library(grid) geom_cloud <- function(mapping … Spy vs Spy vs Wald Wolfowitz. Tue 05 September 2017by Steven E. 
Pav I turned my kids on to the great Spy vs Spy cartoon from Mad Magazine.This strip is pure gold for two young boys: Rube Goldberg plusexplosions with not much dialog (one child is still too young to read).I became curious whether the one Spy had the upper hand, whether Prohias worked to keep the score 'even', and so on. Not finding any data out there, I collected the data to the bestof my ability from the Spy vs Spy Omnibus, which collects all248 strips that appeared in Mad Magazine (plus two special issues).I think there are more strips out there by Prohias that appearedonly in collected books, but have not collected them yet.I entered the data into a google spreadsheet, then converted intoCSV, then into an R data package.Now you can play along at home. On to the simplest form of my question: did Prohias alternate betweenBlack and White Spy victories? or did he choose at random? Up until 1968 it was common for two strips to appear in one issueof Mad, with one victory per Spy. In some cases three stripsappeared per issue, with the Grey Spy appearing in the third;the Black and White Spies always receive a comeuppance when sheappears, and so the balance of power was maintained. After 1972, it seems that only a single strip appeared per issue,and we can examine the time series of victories. library(SPYvsSPY) library(dplyr) data(svs) # show that there are multiple per strip svs %>% group_by(Mad_no,yrmo) %>% summarize(nstrips=n(), net_victories=sum(as.numeric(white_comeuppance) - as.numeric(black_comeuppance))) %>% ungroup() %>% select(yrmo,nstrips,net_victories) %>% head(n=20) %>% kable() read more yrmo nstrips net_victories 1961-01 3 -1 1961-03 2 0 1961-04 2 0 1961-06 2 0 1961-07 2 … Calendar plots in ggplot2. Thu 18 May 2017by Steven E. 
Pav I like the calendar 'heatmap' plots of commits you can see on github user pages, and wanted to play around with some.Of course, if I just wanted to make some plots, I could have just googled around, and then followed this recipe,or maybe used the rChartsCalmap package. Instead I set out, as an exercise, to make my own using ggplot2. For data, I am using the daily GHCND observations data for station USC00047880, which islocated in the San Rafael, CA, Civic Center. I downloaded this data as part of a projectto join weather data to campground data (yes, it's been done before), directly fromthe NOAA FTP site, then read the fixed widthfile. I then processed the data, subselected to 2016 and beyond, and converted the units.I am left with a dataframe of dates, the element name, and the value, which is a temperaturein Celsius. The first ten values I show here: date element value 2016-01-01 TMAX 9.4 2016-01-01 TMIN 0.0 2016-01-02 TMAX 10.0 2016-01-02 TMIN 3.9 2016-01-03 TMAX 11.7 2016-01-03 TMIN 6.7 2016-01-04 TMAX 12.8 2016-01-04 TMIN 6.7 2016-01-05 TMAX 12.8 2016-01-05 TMIN 8.3 Here is the code to produce the heatmap itself. I first use the date fieldto compute the x axis labels and locations: the dates are converted essentiallyto 'Julian' days since January 4, 1970 (a Sunday), then divided by seven to get a 'Julian' week number. The week number containing the tenth of the month isthen set as the location of the month name in the x axis labels. I add years to the January labels. I then compute the Julian week number and day number of the week. I create a variablewhich alternates between … read more
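The week-index bookkeeping described above (days since Sunday, January 4, 1970, divided by seven) is easy to mirror outside R; a short Python sketch of the same arithmetic (the post itself does this in R):

```python
from datetime import date

EPOCH_SUNDAY = date(1970, 1, 4)   # a Sunday, the post's chosen origin

def week_number(d):
    """Whole weeks elapsed since Sunday 1970-01-04 (the x position)."""
    return (d - EPOCH_SUNDAY).days // 7

def weekday_index(d):
    """0 = Sunday, ..., 6 = Saturday (the y position)."""
    return (d - EPOCH_SUNDAY).days % 7

assert week_number(date(1970, 1, 4)) == 0 and weekday_index(date(1970, 1, 4)) == 0
assert weekday_index(date(1970, 1, 10)) == 6   # the following Saturday
assert week_number(date(1970, 1, 11)) == 1     # the next Sunday starts week 1
```

Floor division keeps the indices consistent for dates before the epoch as well, which matters if the data reaches back past 1970.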
Revision as of 13:47, 11 April 2016 Siril processing tutorial Convert your images in the FITS format Siril uses (image import) Work on a sequence of converted images Pre-processing images Registration (Global star alignment) → Stacking Stacking The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only selected images, or the best images with regard to the previously computed FWHM values. Siril proposes several algorithms for stacking computation. Sum Stacking This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing. Average Stacking With Rejection Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images). Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is larger than two given bounds in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]). Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack. Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2]. Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2].
It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations. These algorithms are very efficient to remove satellite/plane tracks. Median Stacking This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math]. Pixel Maximum Stacking This algorithm is mainly used to construct long exposure star-trails images. Pixels of the image are replaced by pixels at the same coordinates if intensity is greater. Pixel Minimum Stacking This algorithm is mainly used for cropping sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if intensity is lower. In the case of NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The output console thus gives the following result: 22:26:06: Pixel rejection in channel #0: 0.215% - 1.401% 22:26:06: Pixel rejection in channel #1: 0.185% - 1.273% 22:26:06: Pixel rejection in channel #2: 0.133% - 1.150% 22:26:06: Integration of 12 images: 22:26:06: Normalization ............. additive + scaling 22:26:06: Pixel rejection ........... Winsorized sigma clipping 22:26:06: Rejection parameters ...... 
low=4.000 high=3.000 22:26:09: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels 22:26:19: Background noise value (channel: #0): 10.013 (1.528e-04) 22:26:19: Background noise value (channel: #1): 6.755 (1.031e-04) 22:26:19: Background noise value (channel: #2): 6.621 (1.010e-04) Noise estimation is a good indicator of the quality of your stacking process. In our example, the red channel has almost 2 times more noise than the green or blue ones. That probably means the DSLR is unmodified: most red photons are stopped by the original filter, leading to a noisier channel. Then, in this example we note that the high rejection seems to be a bit strong. Setting the high rejection to [math]\sigma_{high}=4[/math] could produce a better image, and this is what we have in the image below. After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display modes. In our example the file is the stack result of all files, i.e., 12 files. The images above picture the result in Siril using the Histogram Equalization rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The measured increase in SNR is [math]19.7/6.4 = 3.08[/math], close to the theoretical [math]\sqrt{12} = 3.46[/math], and you should try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Now should start the processing of the image with crop, background extraction (to remove gradient), and some other processes to enhance your image. To see the processes available in Siril, please visit this page. Here is an example of what you can get with Siril: Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley Juan Conejero, ImageIntegration, Pixinsight Tutorial
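The rejection idea behind the "Average Stacking With Rejection" modes can be sketched in a few lines of Python with NumPy. This is an illustrative toy (iterative sigma clipping with asymmetric low/high bounds), not Siril's actual implementation:

```python
import numpy as np

def sigma_clip_stack(frames, s_low=4.0, s_high=3.0, iters=5):
    """Mean-stack frames, iteratively rejecting pixels farther than
    s_low / s_high sigmas below / above the per-pixel median."""
    frames = np.asarray(frames, dtype=float)
    mask = np.ones(frames.shape, dtype=bool)
    for _ in range(iters):
        data = np.where(mask, frames, np.nan)
        center = np.nanmedian(data, axis=0)
        sigma = np.nanstd(data, axis=0)
        keep = (frames >= center - s_low * sigma) & (frames <= center + s_high * sigma)
        if (keep == mask).all():   # converged
            break
        mask = keep
    return np.nanmean(np.where(mask, frames, np.nan), axis=0)

# Twelve flat 'frames' with one bright satellite-track pixel:
frames = np.full((12, 2, 2), 10.0)
frames[5, 0, 0] = 1000.0
stacked = sigma_clip_stack(frames)
assert np.allclose(stacked, 10.0)   # the outlier is rejected, not averaged in
```

A plain mean would have left a residual of about 82.5 in that pixel, which is why these modes are effective against satellite and plane tracks.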
Genotype Refinement workflow for germline short variants Contents Overview Summary of workflow steps Output annotations Example More information about priors Mathematical details 1. Overview The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples in a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes. While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details). After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below). Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before. 2. Summary of workflow steps Input Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow. Step 1: Derive posterior probabilities of genotypes Tool used: CalculateGenotypePosteriors Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals. SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. 
At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability that the sample being analyzed is also homozygous variant is very high. For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations. Step 2: Filter low quality genotypes Tool used: VariantFiltration After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF. Step 3: Annotate possible de novo mutations Tool used: VariantAnnotator Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion. Step 4: Functional annotation of possible biological effects Tool options: Funcotator (experimental) Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development.
If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them. 3. Output annotations The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed. Population Priors New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset. Phred-Scaled Posterior Probability New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs. Genotype Quality Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs. Joint Trio Likelihood New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors.
The calculation of the joint likelihood is given as: $$ JL = -10\log_{10}\bigl(1 - GL_{mother}\,GL_{father}\,GL_{child}\bigr), $$ where the GLs are the genotype likelihoods in [0, 1] probability space. Joint Trio Posterior New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as: $$ JP = -10\log_{10}\bigl(1 - GP_{mother}\,GP_{father}\,GP_{child}\bigr), $$ where the GPs are the genotype posteriors in [0, 1] probability space. Low Genotype Quality New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses. High and Low Confidence De Novo New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately. 4. Example Before: 1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0 After: 1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0 The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child. The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed.
Specifically, a low JL indicates that the posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.) 5. More information about priors The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio. Input-derived Population Priors If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant. Supporting Population Priors Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors. Family Priors The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently $10^{-6}$). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child, and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question.
If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case. Caveats Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios. 6. Mathematical details Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together. Review of Bayes’s Rule HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values: $$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$ In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates. 
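As a minimal numerical sketch of this use of Bayes's Rule (my own illustration, not GATK source code; the function names and the example prior are made up), the following converts Phred-scaled PLs to probabilities, multiplies in a Phred-scaled prior such as PG, and re-normalizes back to a Phred-scaled posterior vector:

```python
import math

def pl_to_probs(pls):
    # Phred-scaled values -> normalized probabilities in [0, 1]
    raw = [10 ** (-pl / 10) for pl in pls]
    z = sum(raw)
    return [r / z for r in raw]

def apply_priors(pls, prior_pg):
    # posterior ∝ likelihood × prior (Bayes's Rule over the three genotypes)
    like = pl_to_probs(pls)
    prior = pl_to_probs(prior_pg)   # PG is Phred-scaled, like the PLs
    post = [l * p for l, p in zip(like, prior)]
    z = sum(post)
    post = [x / z for x in post]
    # back to Phred scale, shifted so the best genotype has PP = 0
    pp = [-10 * math.log10(x) for x in post]
    m = min(pp)
    return [round(x - m) for x in pp]
```

For instance, a borderline HomRef/Het call with PLs [0, 0, 249], combined with a hypothetical strongly Het prior [30, 0, 30], yields a posterior vector whose best genotype is Het with the HomRef possibility pushed out to PP 30.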
Calculation of Population Priors Given a set of samples in addition to the sample of interest (ideally unrelated, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows: $$ P(GT = HomRef) = \dbinom{2}{0} \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$ $$ P(GT = Het) = \dbinom{2}{1} \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$ $$ P(GT = HomVar) = \dbinom{2}{2} \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$ where Γ is the Gamma function, an extension of the factorial function. The prior genotype probabilities based on this distribution scale intuitively with the number of samples. For example, a set of 10 samples, 9 of which are HomRef, yields a prior probability of about 90% that another sample is HomRef, whereas a set of 50 samples, 49 of which are HomRef, yields a prior probability of about 97%. Calculation of Family Priors Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows: $$ P(G_M,G_F,G_C) = \cases{ 1-10\mu-2\mu^2 & no MV \cr \mu & 1 MV \cr \mu^2 & 2 MVs} $$ where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability μ and the two configurations with two Mendelian violations by $\mu^{2}$.
The remaining configurations are considered valid and are assigned the remaining probability so that the total sums to one. This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples; for example, the posterior probability of the child having a HomRef genotype is $$ P(G_{C}=HomRef\mid D)=\frac{\sum_{G_{M}}\sum_{G_{F}}P(G_{M},G_{F},G_{C}{=}HomRef)\,P(D\mid G_{M})\,P(D\mid G_{F})\,P(D\mid G_{C}{=}HomRef)}{\sum_{G_{C}}\sum_{G_{M}}\sum_{G_{F}}P(G_{M},G_{F},G_{C})\,P(D\mid G_{M})\,P(D\mid G_{F})\,P(D\mid G_{C})}. $$ This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs).
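The two prior calculations above can be sketched in a few lines of code (my own toy re-implementation, not GATK source; it takes the de novo probability 10^-6 from the text, assumes each no-MV configuration carries the full 1 - 10μ - 2μ^2 weight exactly as written, and all function names are made up):

```python
from math import comb

MU = 1e-6  # de novo mutation probability (value quoted in the text)

def rising(x, m):
    # rising factorial x(x+1)...(x+m-1), i.e. Gamma(x+m)/Gamma(x)
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

def population_priors(ref_count, alt_count):
    # Dirichlet-multinomial prior for drawing k alt alleles (k = 0, 1, 2)
    # from a pool with the observed supporting allele counts
    an = ref_count + alt_count
    return [comb(2, k) * rising(alt_count, k) * rising(ref_count, 2 - k) / rising(an, 2)
            for k in range(3)]

def alleles(g):
    # genotype encoded as alt-allele count -> the pair of alleles it carries
    return {0: (0, 0), 1: (0, 1), 2: (1, 1)}[g]

def n_violations(gm, gf, gc):
    # minimum number of child alleles that cannot have been inherited,
    # over both ways of assigning one child allele to each parent
    c1, c2 = alleles(gc)
    return min((c1 not in alleles(gm)) + (c2 not in alleles(gf)),
               (c2 not in alleles(gm)) + (c1 not in alleles(gf)))

def trio_prior(gm, gf, gc):
    # 0 MVs -> 1 - 10*mu - 2*mu^2, 1 MV -> mu, 2 MVs -> mu^2
    return (1 - 10 * MU - 2 * MU ** 2, MU, MU ** 2)[n_violations(gm, gf, gc)]

def child_posterior(gl_m, gl_f, gl_c):
    # P(Gc|D) ∝ sum over Gm, Gf of P(Gm,Gf,Gc) * GL_m * GL_f * GL_c
    post = [sum(trio_prior(gm, gf, gc) * gl_m[gm] * gl_f[gf] * gl_c[gc]
                for gm in range(3) for gf in range(3))
            for gc in range(3)]
    z = sum(post)
    return [p / z for p in post]
```

With 10 samples split 9 HomRef / 1 Het (allele counts 19 and 1), `population_priors(19, 1)` gives roughly 0.90 for HomRef, matching the figure quoted above; and `n_violations` reproduces the 10 single-MV and 2 double-MV trio configurations.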
Background Information: This question is from Lectures on Financial Mathematics: Discrete Asset Pricing. Theorem 3.2 First Fundamental Theorem of Asset Pricing - Suppose $\nu$ is any measure such that $S/S^{0}$ is a $\nu$-martingale. For an attainable claim $X$ with replicating strategy $\phi$ and $0\leq t\leq T$, we have $$V_t(\phi) = E_{\nu}\left(X\frac{S_t^{0}}{S_T^{0}}\Big|\mathcal{F}_t\right)$$ Question: Prove that all martingale measures price an attainable claim equally, and that if there is a martingale measure, then all replicating strategies for a given claim have the same value at all times. I am sort of confused even where to begin; some guidance or suggestions would help.
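One possible line of attack (my own sketch, not from the lecture notes): for a self-financing replicating strategy $\phi$, the discounted value process $V_t(\phi)/S_t^0$ is itself a $\nu$-martingale whenever $S/S^0$ is, and $V_T(\phi)=X$ by replication, so

```latex
\frac{V_t(\phi)}{S_t^{0}}
  = E_{\nu}\!\left(\frac{V_T(\phi)}{S_T^{0}} \,\middle|\, \mathcal{F}_t\right)
  = E_{\nu}\!\left(\frac{X}{S_T^{0}} \,\middle|\, \mathcal{F}_t\right).
```

The right-hand side does not mention $\phi$, so any two replicating strategies for $X$ have the same value at every $t$ (the second claim). Taking $t=0$, the left-hand side $V_0(\phi)/S_0^0$ does not mention $\nu$, so every martingale measure assigns $X$ the same price (the first claim).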
In this paper we consider dynamic networks that can change over time. Often, such networks have a repetitive pattern despite constant and otherwise unpredictable changes. Based on this observation, we introduce the notion of a ρ-recurring family of a dynamic network, which has the property that the dynamic network frequently contains a graph in the family, where frequently means at a rate 0 < ρ ≤ 1. Using this concept, we reduce the analysis of max-degree random walks on dynamic networks to the case of static networks. Given a dynamic network with a ρ-recurring family $\mathcal{F}$, we prove an upper bound of $O\left( \rho^{-1}\,\hat t_{hit}(\mathcal{F}) \log n \right)$ on the hitting and cover times, and an upper bound of $O\left( \rho^{-1}(1- \hat\lambda(\mathcal{F}))^{-1} \log n \right)$ on the mixing time of random walks, where n is the number of nodes, $\hat t_{hit}(\mathcal{F})$ is an upper bound on the hitting time of graphs in $\mathcal{F}$, and $\hat\lambda(\mathcal{F})$ is an upper bound on the second-largest eigenvalue of the transition matrices of graphs in $\mathcal{F}$. These results have two implications. First, they yield a general bound of $O\left( \rho^{-1} n^3 \log n \right)$ on the hitting time and cover time of a dynamic network (ρ is the rate at which the network is connected); this result improves on the previous bound of $O\left( \rho^{-1} n^5 \log^2 n \right)$ [3]. Second, the results imply that dynamic networks with recurring families preserve the properties of random walks in their static counterparts. This result allows importing the extensive catalogue of results for static graphs (cliques, expanders, regular graphs, etc.) into the dynamic setting.
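To make the setting concrete, here is a toy simulation of a (lazy) max-degree random walk on a periodically recurring graph sequence (my own sketch; the paper's walk may differ in details such as the laziness factor, and all names are made up):

```python
import random

def max_degree_step(adj, u, d_max, rng):
    # lazy max-degree walk: from u, go to each neighbor with probability
    # 1/(2*d_max) and stay put with the remaining probability
    nbrs = adj[u]
    k = int(rng.random() * 2 * d_max)
    return nbrs[k] if k < len(nbrs) else u

def hitting_time(graphs, start, target, d_max, seed=0, max_steps=10_000):
    # graphs: one adjacency dict per round, repeated cyclically
    # (a crude stand-in for a dynamic network with a recurring pattern)
    rng = random.Random(seed)
    u, t = start, 0
    while u != target and t < max_steps:
        u = max_degree_step(graphs[t % len(graphs)], u, d_max, rng)
        t += 1
    return t if u == target else None
```

On an alternating pair of graphs where node 2 is only reachable via node 1, any hit from node 0 necessarily takes at least two steps, even though neither single graph connects 0 to 2.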
Inside a Schwarzschild black hole Hello and welcome Ever wondered what lies beyond the event horizon of a black hole? Current thinking asserts any one of numerous outcomes: time travel, wormholes, being crushed to a point, or instead, perhaps a fiery end in a wall of flame, making it very hard to know what to believe. We offer a somewhat more prosaic answer; not nearly so exciting, but needing no undiscovered extensions to existing theory or Star-Trekian beliefs, and as such, so much more believable. This is a new vision of what lies beyond the event horizon of a black hole. New, and, as we will show later, testable, as demonstrated by the otherwise unexplained presence of supermassive black holes. This will never be entirely understood without a smidgen of mathematics, but, if you have a basic college-level understanding of mathematics, then there should be nothing overly hard for you to follow. So let us just jump right in. To begin with, here are a couple of basic facts about Einstein's theory of general relativity for any visitors who are new to this field. There is no dispute about these facts so I hope you will just accept them for now: Karl Schwarzschild (1873–1916) The gravitational field around a non-rotating symmetrical body (such as a star, or a planet) is given by the Schwarzschild solution, originally developed by Karl Schwarzschild in 1916, just a year after Einstein announced his general theory of relativity: \[ c^2d\tau ^2=\left(1-\frac {r_s}{r}\right)c^2dt^2 -\left(1-\frac {r_s}{r}\right)^{-1}dr^2-r^2\left(d\theta ^2+\sin ^2\theta \,d\varphi ^2\right) \]The key fact to notice about this equation is the factor \(\left(1-\frac{r_s}{r}\right)\), which vanishes (and whose inverse in the second term blows up) when \(r=r_s\). This behaviour is what gives rise to the event horizon. Birkhoff's theorem added that for a non-rotating, spherically symmetric body, the exterior gravitational field in space must be static, with a metric given by a piece of the Schwarzschild metric.
This sounds difficult but all this is saying is that there is only one solution, the Schwarzschild solution, and that it is unchanging. An immediate consequence of Birkhoff’s theorem is that the field inside a spherically symmetric, non-rotating shell of matter must be flat, or Minkowski space (the only piece of the Schwarzschild metric possible in this circumstance, as there is no enclosed mass). Knowing just these two undisputed facts, we could, for instance, calculate the precise field at the bottom of a mine shaft -- just calculate the field due to the mass beneath our feet whilst ignoring all of the mass above our heads, and neglecting the effect of the relatively slow rotation of the earth. This much is standard stuff and fully confirmed by experiments, here on earth. Now, keeping these same two undisputed facts in mind, consider a large ball of matter, collapsing due to the force of gravity, where the forces involved have already exceeded those needed to halt the collapse at the size of a neutron star. (Such as during the final stage of collapse after a sufficiently large star goes supernova at the end of its active life.) For simplicity, let the ball be spherically symmetric and nonrotating. The collapsing ball of matter, if of sufficient mass, will eventually form a black hole with an event horizon having a radius, \(r_s\), given by this simple equation \[r_s=\frac{2Gm}{c^2}\] where \(r_s\) is the reduced radius of the event horizon, \(G\) is the gravitational constant, the same constant used in the gravity equation of Newton, \(m\) is the total mass enclosed by this event horizon, and \(c\) is the speed of light. In the following argument, all radii will be reduced radii. Inside this event horizon the ball of particles will continue to collapse, heading relentlessly towards the origin. So far, we have not deviated in any way from established theories.
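As a quick sanity check of this formula (my own snippet; the physical constants are standard approximate values):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(m):
    # r_s = 2 G m / c^2, in metres, for a mass m in kilograms
    return 2.0 * G * m / c ** 2

rs_sun = schwarzschild_radius(M_SUN)            # about 2.95 km for one solar mass
rs_smbh = schwarzschild_radius(4.3e6 * M_SUN)   # a ~4-million-solar-mass supermassive
                                                # black hole (roughly Sgr A*)
```

So a solar-mass black hole has a horizon of only a few kilometres, while the supermassive black holes mentioned above have horizons of tens of millions of kilometres.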
Agree or disagree, or have any questions or observations about this? I would love to hear from you, so please email me or leave a comment. Your views are always most welcome.
Geometry is a branch of mathematics concerned with shapes, points, lines and much more. Geometry formulas are used to calculate the length, perimeter, area and volume of various geometric figures and shapes. They are also used to calculate the arc length, radius, etc. In class 9 students will learn about coordinate geometry, where points are placed on the “coordinate plane”. It has two scales: one running across the plane called the “x-axis” and another at right angles to it called the “y-axis”. The table below gives a few important geometry formulas for class 9; they are commonly required to calculate lengths, areas and volumes.

Geometry Shapes Formulas for Class 9

Geometric Figure | Area | Perimeter
Rectangle | \(A= l \times w\) | \(P = 2 \left (l+w \right )\)
Triangle | \(A = \frac{1}{2}bh\) | \(P = a + b + c\)
Trapezoid | \(A = \frac{1}{2} h \left (b_{1}+ b_{2} \right )\) | \(P = a + b + c + d\)
Parallelogram | \(A = bh\) | \(P = 2 (a + b)\), where a and b are the side lengths
Circle | \(A=\pi r^{2}\) | \(C = 2 \pi r\)
Why does the phase shift between the input and the output of a transfer function vary with the frequency of the input sinusoid? Assuming you're asking about Linear and Time-Invariant systems (LTI), the phase is shifted only for such systems that have "reactive" elements in the system. LTI systems are made up of signal-processing elements that fall into 3 fundamental classes: adders (devices that add two signals). scalers (devices that scale a signal by a constant). "reactive" elements (devices that are able to discriminate w.r.t. frequency). Element classes 1. and 2. are essentially the same for analog or digital filters. They are sometimes called "memoryless" devices or elements. For an analog filter (or "analog LTI system"), those reactive elements would be capacitors or inductors. They integrate or differentiate one signal to become another. That turns a sine signal into a cosine signal or shifts the phase by $\pm$ 90°. $$\cos(\Omega t) = \sin(\Omega t + \tfrac{\pi}{2})$$ For digital filters, the reactive elements are delay elements. A unit delay (a delay of exactly one sample period $T$) will delay any signal, including a sinusoid, by 1 sample or $T$ units of time. That shifts the phase by an amount that is dependent on frequency $$\sin(\Omega (t-T) ) = \sin(\Omega t - \Omega T)$$ or $$\sin(\omega (n-1) ) = \sin(\omega n - \omega )$$ Any LTI system that acts as a "filter", a device to filter out some frequency components and leave others, must have reactive elements (or "non-memoryless" elements or components having memory) in order to discriminate one frequency from another. And such a filter will shift phase which will normally be different for different frequencies. But a memoryless LTI system (which is just a scaler) will not discriminate between frequencies nor will shift phase, except for possibly by 180°, which is just a polarity reversal or scaling by a negative constant.
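The contrast between a delay element and a scaler can be checked numerically (a small sketch of my own, using the frequency response $H(e^{j\omega}) = e^{-j\omega}$ of a one-sample delay):

```python
import cmath
import math

def unit_delay_phase(omega):
    # frequency response of y[n] = x[n-1] is H(e^{jw}) = e^{-jw};
    # its phase is -omega, i.e. proportional to frequency
    H = cmath.exp(-1j * omega)
    return cmath.phase(H)

def scaler_phase(a, omega):
    # a memoryless scaler y[n] = a*x[n] has frequency response H = a:
    # phase 0 for a > 0 and pi (a polarity flip) for a < 0, at every frequency
    return cmath.phase(complex(a))
```

The delay's phase shift grows linearly with frequency, while the scaler's phase is the same constant (0 or 180°) no matter what `omega` is passed in.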
For any LTI (Linear Time Invariant) system (considering the steady state response i.e. ignoring the transient response which generally becomes negligible in a very short period), if the input is a sinusoidal signal, the output is always a sinusoidal signal of the same frequency. But the amplitude of the output sinusoidal can be different from the input amplitude. And the ratio of the output amplitude to input amplitude is a constant for that particular frequency - whatever the input amplitude and phase are. Similarly, the difference between the input and output sinusoidal phases is also constant for the particular frequency - irrespective of the amplitude. The transfer function can be used to give this input-output amplitudes ratio and input-output phase difference as a function of frequency. Hope this helps. The frequency dependency is derivable from the input/output differential equation in analog case or difference equation in discrete case.
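The same point for an analog example (my own illustrative sketch): a one-pole RC low-pass with transfer function $H(j\Omega) = 1/(1 + j\Omega RC)$ has a gain and a phase shift that depend only on the frequency, never on the input amplitude.

```python
import cmath
import math

def rc_lowpass_response(omega, rc=1.0):
    # steady-state frequency response of H(jw) = 1 / (1 + jw*RC):
    # returns (amplitude ratio, phase difference) for that frequency
    H = 1.0 / (1.0 + 1j * omega * rc)
    return abs(H), cmath.phase(H)

# at the corner frequency w = 1/RC: gain 1/sqrt(2), phase shift -45 degrees
gain, phase = rc_lowpass_response(1.0, rc=1.0)
```

Evaluating the transfer function on the imaginary axis like this is exactly the "amplitude ratio and phase difference as a function of frequency" described above.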
(The society is limited to people with PhDs after 1990, occasioning the title of this post, a reference to a song about a bar limited to people under 21, a reference you will not get unless your PhD was granted well before 1990.) I can't blog all the great papers and discussions, so I'll pick one of particular interest, Itamar Drechsler, Alexi Savov, and Philipp Schnabl's "Model of Monetary Policy and Risk Premia" This paper addresses a very important issue. The policy and commentary community keeps saying that the Federal Reserve has a big effect on risk premiums by its control of short-term rates. Low interest rates are said to spark a "reach for yield," and encourage investors, and too big to fail banks especially, to take on unwise risks. This story has become a central argument for hawkishness at the moment. The causal channel is just stated as fact. But one should not accept an argument just because one likes the policy result. Nice story. Except there is about zero economic logic to it. The level of nominal interest rates and the risk premium are two totally different phenomena. Borrowing at 5% and making a risky investment at 8%, or borrowing at 1% and making a risky investment at 4% is exactly the same risk-reward tradeoff. In equations, consider the basic first order condition for investment, \[ 0 = E \left[ \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} (R_{t+1}-R^f_t) \right] \] \[ 1 = E \left[ \beta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} \right] R_t^f \] Risk aversion \(\gamma\) controls the risk premium in the first equation, and impatience \(\beta\) controls the risk free rate in the second equation. The level of risk free rates has nothing to do with the risk premium. Yes, higher risk aversion or consumption volatility would increase precautionary saving and lower interest rates in the second equation, holding \(\beta\) fixed. But that is the "wrong" sign -- lower interest rates are associated with higher, not lower, risk premiums.
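These first order conditions are easy to verify numerically. Here is a toy two-state consumption-based example (my own sketch, not from the paper): changing impatience β moves the risk-free rate around, but leaves the reward-to-risk ratio E[R]/R^f exactly unchanged.

```python
# two equally likely states for consumption growth and a risky payoff
states_p = [0.5, 0.5]
growth   = [0.97, 1.03]   # C_{t+1}/C_t: bad state, good state
payoff   = [0.9, 1.2]     # risky payoff, low in the bad state
gamma    = 5.0            # risk aversion

def prices(beta):
    # stochastic discount factor m = beta * (C_{t+1}/C_t)^(-gamma)
    m = [beta * g ** (-gamma) for g in growth]
    e = lambda xs: sum(p * x for p, x in zip(states_p, xs))
    rf = 1.0 / e(m)                                     # gross risk-free rate
    price = e([mi * xi for mi, xi in zip(m, payoff)])   # p = E[m x]
    er = e(payoff) / price                              # gross expected return
    return rf, er

rf1, er1 = prices(beta=0.99)   # patient investors: low risk-free rate
rf2, er2 = prices(beta=0.95)   # impatient investors: high risk-free rate
```

Scaling β scales the discount factor in every state, so both `rf` and `er` scale by the same factor and their ratio, the compensation per unit of risk, is identical in the two cases. That is the "borrowing at 1% to earn 4% is the same tradeoff as borrowing at 5% to earn 8%" point in code.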
Worse, that "wrong" sign is what we see in the data. Risk premiums are high in the early part of recessions, when interest rates are low. Risk premiums are low in booms, when interest rates are high. OK, I'm a bit defensive because "by force of habit" with John Campbell was all about producing that correlation. But that is the pattern in the data. I made a graph above of the Federal Funds rate (blue) and the spread between BAA bonds and treasuries (green, right scale). You can see the risk premium higher just when rates fall at the early stage of every recession, and premiums low at the peaks of the booms, when rates are at their peaks. So, if one has this belief about Fed policy, there must be some other effect driving a big negative correlation between risk premiums and rates, yet the Fed can causepremiums to go up or down a bit more by raising or lowering rates. Every time I ask people -- policy types, central bankers, Fed staff, financial journalists -- about this widely held belief, I get basically psychological and institutional rather than economic answers. Fund managers, insurance companies, pension funds, endowments, have fixed nominal rate of return targets. People have nominal illusions and don't think 8% with 1% short rates is a lot better than 10% with 9% short rates. Maybe. But basing monetary policy on the notion that all investors are total morons seems dicey. For one thing, the minute the Fed starts to exploit rules of thumb, smart investors change the rules of thumb. Segmented markets and institutional constraints are written in sand, not stone, and persist only as long as they are not too costly. OK, enter Drechsler, Savov, and Schnabl. They have a real, economic model of the phenomenon. That's great. We may disagree, but the only way to understand this issue is to write down a model, not to tell stories. The model is long and hard, and I won't pretend I have it all right. I think I digest it down to one basic point. 
Banks had (past tense) to hold non-interest-bearing reserves against deposits. This is a source of nominal illusion. If banks have to hold some non-interest bearing cash for every investment they make, then the effective cost of funds is higher when the nominal rate is higher. We are, in effect, mismeasuring \(R^f\) in my equation. This makes a lot of sense. Except... Before 2007 non-interest-bearing reserves were really tiny, $50 billion out of $9 trillion of bank credit. Quantitatively, the induced nominal illusion is small. Also, while it's fun to write models in which all funds must channel through intermediaries, there are lots of ways that money goes directly from savers to borrowers, like mortgage-backed securities, without paying the reserve tax. Banks aren't allowed to hold equities, so this channel can't work at all for the idea that low rates fuel stock "bubbles." And now, reserves will pay interest. At the conference, Alexi disagreed with this interpretation. He showed the following graph: Fed funds are typically higher than T bills, and the spread is higher when interest rates are higher. They interpret this quantity (p.3) as the "external finance spread." Fed funds represent a potential use of funds, and the shadow value of lending. Alexi cited another mechanism too: "sticky" deposits generate a relationship (at least temporary) between interest rate levels and real bank funding costs. So by whatever mechanism, they say, you can see that cost of funds vary with the level of interest rates. In response to my sort of graph, yes, lots of other things push risk premiums around generating the negative correlation, but allowing the causal effect. Read the paper for more. I have come to praise it not to criticize it. Real, solid, quantitative economic models are just what we need to have a serious discussion.
This is a really important and unsolved question, which I will close by restating: Does monetary policy, by controlling the level of short term rates, substantially affect risk premiums? If so, how? Of course, maybe the answer is "it doesn't."
Research Open Access Published: Some new results on the boundary behaviors of harmonic functions with integral boundary conditions Boundary Value Problems volume 2016, Article number: 136 (2016) Article metrics 616 Accesses 1 Citations Abstract In this paper, using a generalized Carleman formula, we prove two new results on the boundary behaviors of harmonic functions with integral boundary conditions in a smooth cone, which generalize some recent results. Introduction Let \(\mathbf{R}^{n} \) (\(n\geq2\)) be the n-dimensional Euclidean space. A point in \(\mathbf{R}^{n}\) is denoted by \(V=(X,y)\), where \(X=(x_{1},x_{2},\ldots,x_{n-1})\). The boundary and the closure of a set E in \(\mathbf{R}^{n}\) are denoted by ∂E and E̅, respectively. We introduce a system of spherical coordinates \((l,\Lambda)\), \(\Lambda=(\theta_{1},\theta_{2},\ldots,\theta_{n-1})\), in \(\mathbf{R}^{n}\) that are related to Cartesian coordinates \((x_{1},x_{2},\ldots,x_{n-1},y)\) by \(y=l\cos\theta_{1}\). The unit sphere and the upper half unit sphere in \(\mathbf{R}^{n}\) are denoted by \(\mathbf{S}^{n-1}\) and \(\mathbf{S}_{+}^{n-1}\), respectively. For simplicity, a point \((1,\Lambda)\) on \(\mathbf{S}^{n-1}\) and the set \(\{\Lambda; (1,\Lambda)\in\Gamma\}\) for a set \(\Gamma\subset\mathbf{S}^{n-1}\) are often identified with Λ and Γ, respectively. For two sets \(\Xi\subset\mathbf{R}_{+}\) and \(\Gamma\subset \mathbf{S}^{n-1}\), the set \(\{(l,\Lambda)\in\mathbf{R}^{n}; l\in\Xi,(1,\Lambda)\in\Gamma\}\) in \(\mathbf{R}^{n}\) is simply denoted by \(\Xi\times\Gamma\). We denote the set \(\mathbf{R}_{+}\times\Gamma\) in \(\mathbf{R}^{n}\) with the domain Γ on \(\mathbf{S}^{n-1}\) by \(T_{n}(\Gamma)\). We call it a cone. In particular, the half-space \(\mathbf{R}_{+}\times\mathbf{S}_{+}^{n-1}\) is denoted by \(T_{n}(\mathbf{S}_{+}^{n-1})\). 
The sets \(I\times\Gamma\) and \(I\times\partial{\Gamma}\) with an interval I on R are denoted by \(T_{n}(\Gamma;I)\) and \(\mathcal{S}_{n}(\Gamma;I)\), respectively. We denote \(T_{n}(\Gamma)\cap S_{l}\) by \(\mathcal{S}_{n}(\Gamma ; l)\), and we denote \(\mathcal{S}_{n}(\Gamma; (0,+\infty))\) by \(\mathcal{S}_{n}(\Gamma)\). The ordinary Poisson kernel in \(T_{n}(\Gamma)\) is defined by $$ \mathbb{PI}_{\Gamma}(V,W)=\frac{1}{c_{n}}\frac{\partial\mathbb{G}_{\Gamma}(V,W)}{\partial n_{W}}, $$ where \({\partial}/{\partial n_{W}}\) denotes the differentiation at W along the inward normal into \(T_{n}(\Gamma)\), and \(\mathbb{G}_{\Gamma }(V,W)\) (\(V, W\in T_{n}(\Gamma)\)) is the Green function in \(T_{n}(\Gamma)\). Here, \(c_{2}=2\) and \(c_{n}=(n-2)w_{n}\) for \(n\geq3\), where \(w_{n}\) is the surface area of \(\mathbf{S}^{n-1}\). Let \(\Delta_{n}^{*}\) be the spherical part of the Laplace operator, and Γ be a domain on \(\mathbf{S}^{n-1}\) with smooth boundary ∂Γ. Consider the Dirichlet problem (see [1]) $$ \bigl(\Delta_{n}^{*}+\tau\bigr)\psi=0 \quad\mbox{on } \Gamma, \qquad \psi=0 \quad\mbox{on } \partial\Gamma. $$ We denote the least positive eigenvalue of this boundary problem by τ and the normalized positive eigenfunction corresponding to τ by \(\psi(\Lambda)\). In the sequel, for brevity, we shall write χ instead of \(\aleph^{+}-\aleph^{-}\), where $$ \aleph^{\pm}=\frac{-n+2\pm\sqrt{(n-2)^{2}+4\tau}}{2}. $$ The estimate we deal with has a long history tracing back to the known Matsaev estimate of harmonic functions from below in the half-plane (see, e.g., Levin [2], p.209). Theorem A Let \(A_{1}\) be a constant, and let \(h(z)\) (\(|z|=R\)) be harmonic on \(T_{2}(\mathbf{S}_{+}^{1})\) and continuous on \(\overline{T_{2}(\mathbf{S}_{+}^{1})}\). Suppose that and Then where \(z=Re^{i\alpha}\in T_{2}(\mathbf{S}_{+}^{1})\), and \(A_{2}\) is a constant independent of \(A_{1}\), R, α, and the function \(h(z)\). Theorem B Let \(A_{3}\) be a constant, and \(h(V)\) (\(\vert V\vert =R\)) be harmonic on \(T_{n}(\mathbf{S}_{+}^{n-1})\) and continuous on \(\overline{T_{n}(\mathbf{S}_{+}^{n-1})}\). If and then where \(V\in T_{n}(\mathbf{S}_{+}^{n-1})\), and \(A_{4}\) is a constant independent of \(A_{3}\), R, \(\theta_{1}\), and the function \(h(V)\).
Theorem C Let K be a constant, and \(h(V) \) (\(V=(R,\Lambda)\)) be harmonic on \(T_{n}(\Gamma)\) and continuous on \(\overline{T_{n}(\Gamma)}\). If and then where \(V\in T_{n}(\Gamma)\), N (≥1) is a sufficiently large number, and M is a constant independent of K, R, \(\psi(\Lambda)\), and the function \(h(V)\). In this paper, we obtain two new results on the lower bounds of harmonic functions with integral boundary conditions in a smooth cone (Theorems 1 and 2), which further extend Theorems A, B, and C. Our proofs are essentially based on the Riesz decomposition theorem (see [6]) and a modified Carleman formula for harmonic functions in a smooth cone (see [5], Lemma 1). In order to avoid complexity in our proofs, we assume that \(n\geq3\); however, our results in this paper also hold for \(n=2\). We use the standard notations \(h^{+}=\max\{h,0\}\) and \(h^{-}=-\min\{h,0\}\). All constants appearing in expressions below will always be denoted by M because we do not need to specify them. We will always assume that \(\eta(t)\) and \(\rho(t)\) are nondecreasing real-valued functions on the interval \([1,+\infty)\) and that \(\rho(t)> \aleph^{+}\) for any \(t\in[1,+\infty)\). Main results First of all, we state the following result, which further extends Theorem C under weaker boundary integral conditions. Theorem 1 Let \(h(V)\) (\(V=(R,\Lambda)\)) be harmonic on \(T_{n}(\Gamma)\) and continuous on \(\overline{T_{n}(\Gamma)}\). Suppose that the following conditions (I) and (II) are satisfied: (I) For any \(V=(R,\Lambda)\in T_{n}(\Gamma;(1,\infty))\), we have$$ \int_{\mathcal{S}_{n}(\Gamma;(1,R))}h^{-}t^{\aleph^{-}}{\partial\psi}/{\partial n}\,d\sigma_{W} \leq M\eta(R) (cR)^{\rho(cR)-\aleph^{+}} $$(2.1) and$$ \chi \int_{\mathcal{S}_{n}(\Gamma;R)}h^{-}R^{\aleph^{-}-1}\psi \,dS_{R} \leq M\eta(R) (cR)^{\rho(cR)-\aleph^{+}}. $$(2.2) (II) For any \(V=(R,\Lambda)\in T_{n}(\Gamma;(0,1])\), we have$$ h(V)\geq-\eta(R). 
$$(2.3) Then$$h(V)\geq-M\eta(R) \bigl(1+(cR)^{\rho(cR)} \bigr)\psi^{1-n}(\Lambda), $$ where \(V\in T_{n}(\Gamma)\), N (≥1) is a sufficiently large number, and M is a constant independent of R, \(\psi(\Lambda)\), and the functions \(\eta(R)\) and \(h(V)\). Remark 1 From the proof of Theorem 1 it is easy to see that condition (I) in Theorem 1 is weaker than that in Theorem C in the case \(c\equiv(N+1)/{N}\) and \(\eta(R)\equiv K\), where N (≥1) is a sufficiently large number, and K is a constant. Theorem 2 Remark 2 Proof of Theorem 1 By the Riesz decomposition theorem (see [6]) we have where \(V=(l,\Lambda)\in T_{n}(\Gamma;(0,R))\). We next distinguish three cases. Case 1. \(V=(l,\Lambda)\in T_{n}(\Gamma;({5}/{4},\infty))\) and \(R={5l}/{4}\). Since \(-h(V)\leq h^{-}(V)\), we have from (3.1), where and We have the following estimates: and We consider the inequality where and We first have from (2.1). We shall estimate \(U_{32}(V)\). Take a sufficiently small positive number d such that for any \(V=(l,\Lambda)\in\Pi(d)\), where and divide \(T_{n}(\Gamma)\) into the two sets \(\Pi(d)\) and \(T_{n}(\Gamma)-\Pi(d)\). If \(V=(l,\Lambda)\in T_{n}(\Gamma)-\Pi(d)\), then there exists a positive \(d'\) such that \(\vert V-W\vert \geq{d}'l\) for any \(W\in \mathcal{S}_{n}(\Gamma)\), and hence which is similar to the estimate of \(U_{31}(V)\). We shall consider the case \(V=(l,\Lambda)\in\Pi(d)\). Now put where Since \(\mathcal{S}_{n}(\Gamma)\cap\{W\in\mathbf{R}^{n}: \vert V-W\vert < \delta (V)\}=\emptyset\), we have where \(i(V)\) is a positive integer satisfying Since \(r\psi(\Lambda)\leq M\delta(V)\) (\(V=(l,\Lambda)\in T_{n}(\Gamma)\)), similarly to the estimate of \(U_{31}(V)\), we obtain for \(i=0,1,2,\ldots,i(V)\). So On the other hand, we have from (2.2) that Case 2. \(V=(l,\Lambda)\in T_{n}(\Gamma;({4}/{5},{5}/{4}])\) and \(R={5l}/{4}\). 
It follows from (3.1) that where \(U_{1}(V)\) and \(U_{4}(V)\) are defined as in Case 1, and Similarly to the estimate of \(U_{3}(V)\) in Case 1, we have Case 3. \(V=(l,\Lambda)\in T_{n}(\Gamma;(0,{4}/{5}])\). It is evident from (2.3) that which also gives (3.11). Finally, from (3.11) we have which is the conclusion of Theorem 1. Proof of Theorem 2 We first apply a new type of Carleman’s formula for harmonic functions (see [5], Lemma 1) to \(h=h^{+}-h^{-}\) and obtain where \(dS_{R}\) denotes the \((n-1)\)-dimensional volume elements induced by the Euclidean metric on \(S_{R}\), and \({\partial}/{\partial n}\) denotes differentiation along the interior normal. It is easy to see that and from (2.4). We remark that We have (2.2) and References 1. Carleman, T: Über die Approximation analytischer Funktionen durch lineare Aggregate von vorgegebenen Potenzen. Ark. Mat. Astron. Fys. 17, 1-30 (1923) 2. Levin, B: Lectures on Entire Functions. Translations of Mathematical Monographs, vol. 150. Am. Math. Soc., Providence (1996) 3. Guan, X, Liu, M: Coordination in the decentralized assembly system with dual supply modes. Discrete Dyn. Nat. Soc. 2013, Article ID 381987 (2013) 4. Pan, G, Qiao, L, Deng, G: A lower estimate of harmonic functions. Bull. Iran. Math. Soc. 40(1), 1-7 (2014) 5. Pang, S, Ychussie, B: Matsaev type inequalities on smooth cones. J. Inequal. Appl. 2015, Article ID 108 (2015) 6. Hayman, W, Kennedy, P: Subharmonic Functions, vol. 1. Academic Press, London (1976) 7. Essén, M, Lewis, LJ: The generalized Ahlfors-Heins theorem in certain d-dimensional cones. Math. Scand. 33, 113-129 (1973) 8. Yoshida, H: A boundedness criterion for subharmonic function. J. Lond. Math. Soc. 24(2), 148-160 (1981) Acknowledgements This work was supported by the National Natural Science Foundation of China under Grant no. 61401368. We are grateful to the editor and anonymous reviewers for their valuable comments and corrections that helped improve the original version of this paper. 
Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions CV completed the main study. XX responded point by point to each reviewer comments and corrected the final proof. Both authors read and approved the final manuscript.
In Coleman's paper Fate of the false vacuum: semiclassical theory, while working out the exponential coefficient for the tunneling probability through a potential barrier, he studies the problem with the Wick rotation $\tau=it$, arriving at the Euclidean Lagrangian $$L_E = \frac{1}{2}\left(\frac{dq}{d\tau}\right)^2+V(q),\tag{2.14}$$ in which the potential is effectively inverted. (The potential is shown in a figure in the paper.) He then states, from the conservation-of-energy formula $$\frac{1}{2}\left(\frac{dq}{d\tau}\right)^2-V=0,$$ and I quote: "By eq. (2.12)" - the conservation of energy - "the classical equilibrium point, $q_0$, can only be reached asymptotically, as $\tau$ goes to minus infinity" $$\lim_{\tau\rightarrow-\infty}q = q_0.\tag{2.15}$$ Q1. Why is this true? And how do you define infinity for a complex number? Then, by translation invariance, he sets the time at which the particle reaches $\sigma$ to $\tau=0$, so that $$\left.\frac{dq}{d\tau}\right|_{0}=0.$$ He goes on by saying that this condition "[...] also tells us that the motion of the particle for positive $\tau$ is just the time reversal of its motion for negative $\tau$; the particle simply bounces off $\sigma$ at $\tau=0$ and returns to $q_0$ at $\tau=+\infty$." Q2. Even this isn't very clear to me. Why should the condition of zero velocity at $\sigma$ imply that? Is there something really basic that I'm missing? I'm not very familiar with Wick rotations and such, and I have to understand every little bit of this paper for my bachelor's thesis.
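Regarding Q1 and Q2, the standard argument can be sketched as follows (this is an illustrative derivation, not quoted from the paper). Note that after the Wick rotation $\tau$ is an ordinary real variable, so $\tau\to-\infty$ is a real limit and no notion of complex infinity is needed. Near the local minimum $q_0$ the potential is approximately quadratic:

```latex
% Near q_0, with \omega^2 = V''(q_0) > 0 and V(q_0) = 0:
V(q) \approx \tfrac{1}{2}\,\omega^{2}(q-q_0)^{2}
\quad\Longrightarrow\quad
\frac{dq}{d\tau} = \pm\sqrt{2V(q)} \approx \pm\,\omega\,(q-q_0),
% so the solution approaches q_0 exponentially,
q(\tau) - q_0 \;\propto\; e^{\omega\tau},
% which vanishes only in the limit \tau \to -\infty.
```

For Q2: the Euclidean equation of motion $d^2q/d\tau^2 = V'(q)$ is invariant under $\tau\to-\tau$. Since $q(0)=\sigma$ and $q'(0)=0$, the reflected solution $\tilde q(\tau)=q(-\tau)$ satisfies the same second-order initial data, so by uniqueness $q(-\tau)=q(\tau)$: the motion for positive $\tau$ mirrors that for negative $\tau$, which is exactly the "bounce".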
Journal of Symbolic Logic, Volume 56, Issue 3 (1991), 949-963. Model-Theoretic Properties Characterizing Peano Arithmetic Abstract Let $\mathscr{L} = \{0, 1, +, \cdot, <\}$ be the usual first-order language of arithmetic. We show that Peano arithmetic is the least first-order $\mathscr{L}$-theory containing $I\Delta_0 + \exp$ such that every complete extension $T$ of it has a countable model $K$ satisfying: (i) $K$ has no proper elementary substructures, and (ii) whenever $L \prec K$ is a countable elementary extension there is $\bar{L} \prec L$ and $\bar{K} \subseteq_\mathrm{e} \bar{L}$ such that $K \prec_{\mathrm{cf}}\bar{K}$. Other model-theoretic conditions similar to (i) and (ii) are also discussed and shown to characterize Peano arithmetic. First available in Project Euclid: 6 July 2007. Permanent link: https://projecteuclid.org/euclid.jsl/1183743742 DOI: 10.2178/jsl/1183743742 MathSciNet: MR1129158 Zentralblatt MATH: 0746.03032 Citation: Kaye, Richard. Model-Theoretic Properties Characterizing Peano Arithmetic. J. Symbolic Logic 56 (1991), no. 3, 949-963. doi:10.2178/jsl/1183743742.
In-sample fits are not a reliable guide to out-of-sample forecasting accuracy. The gold standard in forecasting accuracy measurement is to use a holdout sample. Remove the last 30 days from the training sample, fit your models to the rest of the data, use the fitted models to forecast the holdout sample, and simply compare accuracies on the holdout, using ... Combining forecasts is an excellent idea. (I think it is not an exaggeration to say that this is one of the few things academic forecasters agree on.) I happen to have written a paper a while back looking at different ways to weight forecasts in combining them: http://www.sciencedirect.com/science/article/pii/S0169207010001032 Basically, using (Akaike) ... Which is the appropriate neural network / function for time series prediction? Please consider that the above example is just a simplified data example. Well, this totally depends on your data. In your example data you have: a small univariate time series (only 14 observations), a linear trend, no white noise, no seasonality, no cycle, and no non-linearity. nnetar() ... You can use the following recurrent formula: $\sigma_i^2 = S_i = (1 - \alpha) (S_{i-1} + \alpha (x_i - \mu_{i-1})^2)$. Here $x_i$ is your observation at the $i$-th step, $\mu_{i-1}$ is the estimated exponentially weighted mean, and $S_{i-1}$ is the previous estimate of the variance. See Section 9 here for the proof and pseudo-code. Note that Croston's method does not forecast "likely" periods with nonzero demands. It assumes that all periods are equally likely to exhibit demand. It separately smoothes the inter-demand interval and the nonzero demands via exponential smoothing, but updates both only when there is nonzero demand. The in-sample fit and the point forecast then essentially are ... As @forecaster has pointed out, this is caused by outliers at the end of the series. 
You can see the problem clearly if you plot the estimated level component over the top: plot(forecast(fit2)); lines(fit2$states[,1], col='red'). Note the increase in the level at the end of the series. One way to make the model more robust to outliers is to reduce the ... If we assume that the derivative and the infinite sum may be interchanged, there is a quick way to arrive at your result. As @whuber pointed out, we require absolute convergence for this to be possible. This holds for the series in question when $|1-\lambda|<1$. We then have $$\sum_{t} t \left( 1-\lambda \right)^{t} = - \left(1-\lambda\right) \sum_{t} \dots$$ ... Because this is a statistics site, let's develop a purely statistical solution. The first formula in the question correctly observes that $$\lambda + \lambda(1-\lambda)^1 + \lambda(1-\lambda)^2 + \lambda(1-\lambda)^3 + \cdots = 1,$$ implicitly assuming $|1-\lambda|\lt 1$. For real numbers $0 \lt \lambda \lt 1$, this exhibits $1$ as the sum of a series ... I don't know what "non-stationary limited data" means, so I will assume you mean "non-stationary data". Exponential smoothing methods, including Holt-Winters methods, are appropriate for (some kinds of) non-stationary data. In fact, they are only really appropriate if the data are non-stationary. Using an exponential smoothing method on stationary data is not ... There is no normality assumption in fitting an exponential smoothing model. Even if maximum likelihood estimation is used with a Gaussian likelihood, the estimates will still be good under almost all residual distributions. There is also no normality assumption when producing point forecasts from an exponential smoothing model. However, there is often a ... To address the part of your question related to R, the ets function from the forecast package includes a lambda argument -- when set, a Box-Cox transformation is used that will keep the forecasts strictly positive. You may be able to use the same general approach in Java.
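The exponentially weighted mean/variance recurrence quoted earlier ($S_i = (1-\alpha)(S_{i-1} + \alpha(x_i-\mu_{i-1})^2)$) is easy to implement directly. A minimal Python sketch (function and variable names are mine; the mean update is the standard exponentially weighted mean that the quoted formula assumes):

```python
def ewm_update(x, mu_prev, s_prev, alpha):
    """One step of the exponentially weighted mean/variance recurrence:
    mu_i = mu_{i-1} + alpha * (x_i - mu_{i-1})
    S_i  = (1 - alpha) * (S_{i-1} + alpha * (x_i - mu_{i-1})**2)
    """
    delta = x - mu_prev
    mu = mu_prev + alpha * delta
    s = (1 - alpha) * (s_prev + alpha * delta * delta)
    return mu, s

def ewm_mean_var(xs, alpha):
    """Run the recurrence over a sequence, seeding the mean with the first value."""
    mu, s = xs[0], 0.0
    for x in xs[1:]:
        mu, s = ewm_update(x, mu, s, alpha)
    return mu, s
```

A quick check of the behaviour: a constant series gives variance 0, and a late jump inflates both the mean and the variance estimate, weighted by `alpha`.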
When your data must be positive, you shouldn't fit a model that can go negative, and if you do, you shouldn't be surprised when it forecasts negative values. If your values are all strictly $> 0$, one common approach is to take logarithms and fit (and forecast) a model on that scale. There are other ways to approach this sort of problem, but that's probably ... Is it true that a (simple) exponential smoothing model with alpha (smoothing constant) = 1 is the same as MA(1), which is in turn the same as a random walk model (i.e. using only the most recent observation as the forecast for all future periods)? No, it is not. Here are the forecasts from the three models: simple exponential smoothing (SES; see section 7.1 ... Yes indeed: both exponential smoothing and ARIMA are special cases of state space models. For ARIMA, see this talk by Rob Hyndman, and for exponential smoothing, see Forecasting with Exponential Smoothing - the State Space Approach. This underlies the fact that specific exponential smoothing methods can be shown to yield MSE-optimal point forecasts for ... This isn't an exact answer to your question, but... you are definitely best off spending a bit of time to learn some R basics and use something like Rob Hyndman's forecast package to do this. This will let you try a number of robust forecasting procedures and choose appropriate parameters, all within a state-of-the-art computing environment with good ... As Brian says in his answer: there's no simple rule as to which is better. For example, the UK's Office for National Statistics switched from HW to ARIMA and wrote a paper on it; while they chose to switch, it was probably because of the power of the X12 (now X13) software package, which is ARIMA-based and very powerful, rather than the technique itself. ...
A weighted average of any sequence $x_1, x_2, \ldots, x_n$ with respect to a parallel sequence of weights $w_1, w_2, \ldots, w_n$ is the linear combination$$(w_1 x_1 + w_2 x_2 + \cdots + w_n x_n) / (w_1 + w_2 + \cdots + w_n).\tag{1}$$An exponentially weighted average (EWA), by definition, uses a geometric sequence of weights$$w_i = \rho^{n-i} w_0$$... Values of $\alpha$ and $\beta$ close to one suggest the model is mis-specified. Try using the ets() function in the forecast package instead. It will choose the model for you and select the best values of the smoothing parameters. Only the smoothing parameters are held fixed; the initial states are re-estimated. See the help file: "model: It is also possible for the model to be of class "ets", and equal to the output from a previous call to ets. In this case, the same model is fitted to y without re-estimating any smoothing parameters. See also the use.initial.values argument." ... To the best of my knowledge, you cannot use exponential smoothing for daily forecasting that involves irregular seasonal effects or causal variables like holidays. The paper you cite requires a well-defined seasonal cycle, for example 24 hours a day × 7 days a week = 168 hours a week; you typically see this type of seasonality in weather forecasting, electricity ... This is a textbook case of having outliers at the end of the series and of their unintended consequences. The problem with your data is that the last two points are outliers; you might want to identify and treat outliers before you run the forecasting algorithms. I'll update my answer and analysis later today with some strategies to identify outliers. Below is the ... Let $x$ be the original time series and $x_m$ be the result of smoothing with a simple moving average with some window width. 
Let $f(x, \alpha)$ be a function that returns a smoothed version of $x$ using smoothing parameter $\alpha$. Define a loss function $L$ that measures the dissimilarity between the windowed moving average and the exponential moving ... You can have an exponential smoothing model that involves multiplicative seasonality but no trend. For example, in R:

> library(forecast)
> x <- ts(rnorm(100,10,1), f=4)
> fit <- ets(x, "MNM")
> fit
ETS(M,N,M)
Call: ets(y = x, model = "MNM")
Smoothing parameters:
  alpha = 1e-04
  gamma = 0.0449
Initial states:
  l = 10....

In-sample fit such as $R^2$ is even more frowned upon as a measure of model quality in forecasting than in other statistical subdisciplines, for all the well-known reasons (if you make your model more and more complex, you will get better and better in-sample fits... but ever worse out-of-sample forecast accuracy). If at all, people will rather use ... Dampening can be thought of as a special case of shrinkage methods; these methods as a whole tend to reduce uncertainty in estimates (yet another circumstance of trading bias for variance, an ever-recurring theme in statistics, though in some cases, such as many involving variable selection, shrinkage can reduce both bias and variance). There are many ... I can't say precisely why your loess fit differs from the exponential fit -- that's more or less "because it does, because they're different" -- but the reason that your exponential fit looks so linear, and why it looks so different from your plotted function, is that over the range of the data it is very close to linear. The parameter is -0.0037, the range of ... @forecaster, you are correct that the last value is an outlier, BUT period 38 (the penultimate value) is not an outlier when you take into account trend and seasonal activity. This is a defining/teaching moment for testing/evaluating alternative robust approaches. If you don't identify and adjust for anomalies, then the variance is inflated, causing other ... 
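The weighted-average definition (1) quoted earlier, with geometric weights $w_i = \rho^{n-i} w_0$, can be made concrete with a small Python sketch (function names and the example values are mine):

```python
def weighted_average(xs, ws):
    """Linear combination (1): sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def exponential_weights(n, rho, w0=1.0):
    """Geometric weights w_i = rho**(n-i) * w0 for i = 1..n.
    With rho < 1 the most recent observation gets the largest weight."""
    return [w0 * rho ** (n - i) for i in range(1, n + 1)]

# Exponentially weighted average of a short series:
xs = [1.0, 2.0, 3.0]
ws = exponential_weights(len(xs), rho=0.5)   # weights [0.25, 0.5, 1.0]
ewa = weighted_average(xs, ws)               # (0.25 + 1.0 + 3.0) / 1.75
```

Note how the result sits closer to the last observation than the plain mean (2.0) does, which is the whole point of the geometric weighting.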
ARIMA models are not stationary, ARMAs are. ARIMA includes the integration terms, e.g. a random walk model is ARIMA(0,1,0) and it's not stationary.There's a couple of different ways to exponentially smooth, here's EWMA and a different version. Neither of them requires stationarity.Here's an example in MATLAB with fitting ARIMA(0,1,1) into S&P 500 ... I assume we're not dealing with the multiplicative form.The reason we use the most recent estimates for the level and trend is because of the way the model is set up -- in effect, it corresponds to an assumption that's part of the model.It's easiest to see if you look at the model in error-correction form (see, for example, Sec 7.5 of Hyndman & ...
First, I began with the definition of $\Theta$ as below: $c_1 n \leq k \log k \leq c_2 n \implies c_1\frac{n}{\log k} \leq k \leq c_2\frac{n}{\log k}$, and we also know that if $k\log k = \Theta(n)$ then $k = O(n)$, i.e. $k \leq cn$; thus $c\frac{n}{\log n} \leq c_1\frac{n}{\log k}\leq k$, and thus $k = \Omega(\frac{n}{\log n})$. In the same manner, I would have to prove that $c_2\frac{n}{\log k} \leq c\frac{n}{\log n}$ too, to show that $k = O(\frac{n}{\log n})$. But this inequality ($c_2\frac{n}{\log k} \leq c\frac{n}{\log n}$) seems false! Is it really false (so that a better approach is needed), or is there a trick for proving $c\frac{n}{\log n} \leq c_1\frac{n}{\log k}$? First, let me explain why your claim holds. Then we'll see how to prove it. If $k \log k = \Theta(n)$, then $k$ is polynomially related to $n$, and so we expect $\log k \approx \log n$ (or, more formally, $\log k = \Theta(\log n)$). Dividing by $\log n$, we deduce $k = \Theta(n/\log n)$. This was just intuition, so now let us see a rigorous proof. A rigorous proof rests on a rigorous definition of big $\Theta$, and this is a surprisingly delicate matter. In this case it suffices to assume that $n \geq 1$. Upper bound Suppose that $k \log k \leq Cn$ for some $C>0$. If $k \leq \sqrt{n}$ then $$ \frac{k}{\frac{n}{\log n}} \leq \frac{\log n}{\sqrt{n}}. $$ Now $\frac{\log n}{\sqrt{n}}$ is continuous and tends to zero at infinity, so it is upper-bounded by some constant $M$. Therefore in this case $$ k \leq M \frac{n}{\log n}. $$ If $k > \sqrt{n}$ then $$ Cn \geq k \log k > k \log \sqrt{n} = \frac{1}{2} k \log n, $$ and so $$ k \leq 2C \frac{n}{\log n}. $$ We conclude that for all $k$, $$ k \leq \max(2C,M) \frac{n}{\log n}. $$ Lower bound Left to you as an exercise.
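As a numerical sanity check on the claim (not part of the proof), one can solve $k \log k = n$ by bisection and watch the ratio $k / (n/\log n)$ stay bounded as $n$ grows; a Python sketch:

```python
import math

def solve_k(n, tol=1e-9):
    """Find k with k * log(k) = n by bisection on [2, n] (natural log)."""
    lo, hi = 2.0, float(n)
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2
        if mid * math.log(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The ratio k / (n / log n) stays bounded and drifts toward 1,
# consistent with k = Theta(n / log n).
ratios = [solve_k(n) / (n / math.log(n)) for n in (10**2, 10**4, 10**6, 10**8)]
```

The ratio decreases toward 1 because $\log k = \log n - \log\log k$, and the correction term becomes relatively negligible.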
If I run a randomForest model, I can then make predictions based on the model. Is there a way to get a prediction interval for each of the predictions, so that I know how "sure" the model is of its answer? If this is possible, is it simply based on the variability of the dependent variable for the whole model, or will it have wider and narrower intervals depending on the particular decision tree that was followed for a particular prediction? This is partly a response to @Sashikanth Dareddy (since it will not fit in a comment) and partly a response to the original post. Remember what a prediction interval is: it is an interval or set of values where we predict that future observations will lie. Generally the prediction interval has two main pieces that determine its width: a piece representing the uncertainty about the predicted mean (or other parameter) - this is the confidence interval part - and a piece representing the variability of the individual observations around that mean. The confidence interval is fairly robust due to the Central Limit Theorem, and in the case of a random forest the bootstrapping helps as well. But the prediction interval is completely dependent on the assumptions about how the data is distributed given the predictor variables; the CLT and bootstrapping have no effect on that part. The prediction interval should be wider where the corresponding confidence interval would also be wider. Other things that would affect the width of the prediction interval are assumptions about equal variance or not; this has to come from the knowledge of the researcher, not the random forest model. A prediction interval does not make sense for a categorical outcome (you could do a prediction set rather than an interval, but most of the time it would probably not be very informative). We can see some of the issues around prediction intervals by simulating data where we know the exact truth. 
Consider the following data:

set.seed(1)
x1 <- rep(0:1, each=500)
x2 <- rep(0:1, each=250, length=1000)
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)

This particular data follows the assumptions for a linear regression and is fairly straightforward for a random forest fit. We know from the "true" model that when both predictors are 0 the mean is 10; we also know that the individual points follow a normal distribution with standard deviation of 1. This means that the 95% prediction interval based on perfect knowledge for these points would be from 8 to 12 (well, actually 8.04 to 11.96, but rounding keeps it simpler). Any estimated prediction interval should be wider than this (not having perfect information adds width to compensate) and include this range. Let's look at the intervals from regression:

fit1 <- lm(y ~ x1 * x2)
newdat <- expand.grid(x1=0:1, x2=0:1)
(pred.lm.ci <- predict(fit1, newdat, interval='confidence'))
#        fit       lwr      upr
# 1 10.02217  9.893664 10.15067
# 2 14.90927 14.780765 15.03778
# 3 20.02312 19.894613 20.15162
# 4 21.99885 21.870343 22.12735
(pred.lm.pi <- predict(fit1, newdat, interval='prediction'))
#        fit      lwr      upr
# 1 10.02217  7.98626 12.05808
# 2 14.90927 12.87336 16.94518
# 3 20.02312 17.98721 22.05903
# 4 21.99885 19.96294 24.03476

We can see there is some uncertainty in the estimated means (confidence interval), and that gives us a prediction interval that is wider than (but includes) the 8 to 12 range. 
Now let's look at the interval based on the individual predictions of the individual trees (we should expect these to be wider, since the random forest does not benefit from the assumptions (which we know to be true for this data) that the linear regression does):

library(randomForest)
fit2 <- randomForest(y ~ x1 + x2, ntree=1001)
pred.rf <- predict(fit2, newdat, predict.all=TRUE)
pred.rf.int <- apply(pred.rf$individual, 1, function(x) {
  c(mean(x) + c(-1, 1) * sd(x), quantile(x, c(0.025, 0.975)))
})
t(pred.rf.int)
#                           2.5%     97.5%
# 1  9.785533 13.88629  9.920507 15.28662
# 2 13.017484 17.22297 12.330821 18.65796
# 3 16.764298 21.40525 14.749296 21.09071
# 4 19.494116 22.33632 18.245580 22.09904

The intervals are wider than the regression prediction intervals, but they don't cover the entire range. They do include the true values and therefore may be legitimate as confidence intervals, but they are only predicting where the mean (predicted value) is, not the added piece for the distribution around that mean. For the first case, where x1 and x2 are both 0, the intervals don't go below 9.7; this is very different from the true prediction interval, which goes down to 8. If we generate new data points, then there will be several points (much more than 5%) that are in the true and regression intervals but don't fall in the random forest intervals. To generate a prediction interval you will need to make some strong assumptions about the distribution of the individual points around the predicted means; then you could take the predictions from the individual trees (the bootstrapped confidence interval piece) and generate a random value from the assumed distribution with that center. The quantiles for those generated pieces may form the prediction interval (but I would still test it; you may need to repeat the process several more times and combine). 
Here is an example of doing this by adding normal deviations (since we know the original data used a normal) to the predictions, with the standard deviation based on the estimated MSE from that tree:

pred.rf.int2 <- sapply(1:4, function(i) {
  tmp <- pred.rf$individual[i, ] + rnorm(1001, 0, sqrt(fit2$mse))
  quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
#          2.5%    97.5%
# [1,]  7.351609 17.31065
# [2,] 10.386273 20.23700
# [3,] 13.004428 23.55154
# [4,] 16.344504 24.35970

These intervals contain those based on perfect knowledge, so they look reasonable. But they will depend greatly on the assumptions made (the assumptions are valid here because we used the knowledge of how the data was simulated; they may not be as valid in real data cases). I would still repeat the simulations several times for data that looks more like your real data (but simulated so you know the truth) before fully trusting this method. I realize this is an old post, but I have been running some simulations on this and thought I would share my findings. @GregSnow made a very detailed post about this, but I believe that when calculating the interval using predictions from individual trees he was looking at $[\mu - \sigma, \mu + \sigma]$, which is only a ~70% prediction interval. We need to look at $[\mu - 1.96\sigma, \mu + 1.96\sigma]$ to get the 95% prediction interval. 
Making this change to @GregSnow's code, we get the following results:

set.seed(1)
x1 <- rep(0:1, each=500)
x2 <- rep(0:1, each=250, length=1000)
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)
library(randomForest)
fit2 <- randomForest(y ~ x1 + x2)
pred.rf <- predict(fit2, newdat, predict.all=TRUE)
pred.rf.int <- t(apply(pred.rf$individual, 1, function(x) {
  c(mean(x) + c(-1.96, 1.96) * sd(x), quantile(x, c(0.025, 0.975)))
}))
pred.rf.int
#                           2.5%    97.5%
# 1  7.826896 16.05521  9.915482 15.31431
# 2 11.010662 19.35793 12.298995 18.64296
# 3 14.296697 23.61657 14.749248 21.11239
# 4 18.000229 23.73539 18.237448 22.10331

Now, comparing these with the intervals generated by adding normal deviations to the predictions with standard deviation as MSE, as @GregSnow suggested, we get:

pred.rf.int2 <- sapply(1:4, function(i) {
  tmp <- pred.rf$individual[i,] + rnorm(1000, 0, sqrt(fit2$mse))
  quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
#          2.5%    97.5%
# [1,]  7.486895 17.21144
# [2,] 10.551811 20.50633
# [3,] 12.959318 23.46027
# [4,] 16.444967 24.57601

The intervals from both of these approaches are now looking very close. Plotting the prediction intervals for the three approaches against the error distribution in this case looks as below: black lines = prediction intervals from linear regression, red lines = random forest intervals calculated on individual predictions, blue lines = random forest intervals calculated by adding normal deviations to predictions. Now, let us re-run the simulation, but this time increasing the variance of the error term. If our prediction interval calculations are good, we should end up with wider intervals than what we got above. 
set.seed(1)
x1 <- rep(0:1, each=500)
x2 <- rep(0:1, each=250, length=1000)
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000, mean=0, sd=5)
fit1 <- lm(y ~ x1 + x2)
newdat <- expand.grid(x1=0:1, x2=0:1)
predict(fit1, newdata=newdat, interval="prediction")
#        fit       lwr      upr
# 1 10.75006  0.503170 20.99695
# 2 13.90714  3.660248 24.15403
# 3 19.47638  9.229490 29.72327
# 4 22.63346 12.386568 32.88035

set.seed(1)
fit2 <- randomForest(y ~ x1 + x2, localImp=T)
pred.rf <- predict(fit2, newdat, predict.all=TRUE)  # re-predict with the new fit
pred.rf.int <- t(apply(pred.rf$individual, 1, function(x) {
  c(mean(x) + c(-1.96, 1.96) * sd(x), quantile(x, c(0.025, 0.975)))
}))
pred.rf.int
#                           2.5%    97.5%
# 1  7.889934 15.53642  9.564565 15.47893
# 2 10.616744 18.78837 11.965325 18.51922
# 3 15.024598 23.67563 14.724964 21.43195
# 4 17.967246 23.88760 17.858866 22.54337

pred.rf.int2 <- sapply(1:4, function(i) {
  tmp <- pred.rf$individual[i,] + rnorm(1000, 0, sqrt(fit2$mse))
  quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
#          2.5%    97.5%
# [1,] 1.291450 22.89231
# [2,] 4.193414 25.93963
# [3,] 7.428309 30.07291
# [4,] 9.938158 31.63777

Now, this makes it clear that calculating prediction intervals by the second approach is far more accurate, yielding results quite close to the prediction interval from linear regression. Taking the assumption of normality, there is another, easier way to compute prediction intervals from a random forest. From each of the individual trees we have the predicted value ($\mu_i$) as well as the mean squared error ($MSE_i$). So the prediction from each individual tree can be thought of as $N(\mu_i, RMSE_i)$. Using normal distribution properties, our prediction from the random forest would have the distribution $N(\sum \mu_i/n, \sum RMSE_i/n)$. 
Applying this to the example we discussed above, we get the results below:

mean.rf <- pred.rf$aggregate
sd.rf <- mean(sqrt(fit2$mse))
pred.rf.int3 <- cbind(mean.rf - 1.96*sd.rf, mean.rf + 1.96*sd.rf)
pred.rf.int3
# 1  1.332711 22.09364
# 2  4.322090 25.08302
# 3  8.969650 29.73058
# 4 10.546957 31.30789

These tally very well with the linear model intervals and also with the approach @GregSnow suggested. But note that the underlying assumption in all the methods we discussed is that the errors follow a normal distribution. If you use R, you can easily produce prediction intervals for the predictions of a random forest regression: just use the package quantregForest (available at CRAN) and read the paper by N. Meinshausen on how conditional quantiles can be inferred with quantile regression forests and how they can be used to build prediction intervals. Very informative, even if you don't work with R! This is easy to solve with randomForest. First let me deal with the regression task (assuming your forest has 1000 trees). In the predict function, you have the option to return results from individual trees. This means that you will receive output with 1000 columns. We can take the average of the 1000 columns for each row - this is the regular output RF would have produced anyway. Now, to get the prediction interval, let's say +/- 2 std. deviations, all you need to do is, for each row, calculate +/- 2 std. deviations from the 1000 values and make these your upper and lower bounds on your prediction. Second, in the case of classification, remember that each tree outputs either 1 or 0 (by default), and the sum over all 1000 trees divided by 1000 gives the class probability (in the case of binary classification). In order to put a prediction interval on the probability, you need to modify the min. nodesize option (see the randomForest documentation for the exact name of that option); once you set it to a value >> 1, the individual trees will output numbers between 1 and 0. 
Now, from here on you can repeat the same process as described above for the regression task. I hope that makes sense.

I've tried some options (this is all WIP):

1. I actually made the dependent variable a classification problem with the results as ranges, instead of a single value. The results I got were poor compared to using a plain value. I gave up this approach.
2. I then converted it to multiple classification problems, each of which was a lower bound for the range (the result of the model being whether it would cross the lower bound or not), ran all the models (~20), and then combined the results to get a final answer as a range. This works better than 1 above but not as well as I need it to. I'm still working to improve this approach.
3. I used OOB and leave-one-out estimates to decide how good/bad my models are.

The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper:

Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random Forest Prediction Intervals." The American Statistician, 2019.

The R package "rfinterval" is its implementation, available on CRAN.
Installation

To install the R package rfinterval:

#install.packages("devtools")
#devtools::install_github(repo="haozhestat/rfinterval")
install.packages("rfinterval")
library(rfinterval)
?rfinterval

Usage

Quickstart:

train_data <- sim_data(n = 1000, p = 10)
test_data <- sim_data(n = 1000, p = 10)

output <- rfinterval(y~., train_data = train_data, test_data = test_data,
                     method = c("oob", "split-conformal", "quantreg"),
                     symmetry = TRUE, alpha = 0.1)

### print the marginal coverage of OOB prediction interval
mean(output$oob_interval$lo < test_data$y & output$oob_interval$up > test_data$y)

### print the marginal coverage of Split-conformal prediction interval
mean(output$sc_interval$lo < test_data$y & output$sc_interval$up > test_data$y)

### print the marginal coverage of Quantile regression forest prediction interval
mean(output$quantreg_interval$lo < test_data$y & output$quantreg_interval$up > test_data$y)

Data example:

oob_interval <- rfinterval(pm2.5 ~ .,
                           train_data = BeijingPM25[1:1000, ],
                           test_data = BeijingPM25[1001:2000, ],
                           method = "oob",
                           symmetry = TRUE,
                           alpha = 0.1)
str(oob_interval)
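For readers outside R, the per-tree interval idea described in the answers above (collect each tree's prediction, take quantiles, and optionally widen by the model's residual error) can be sketched with scikit-learn. This is my own translation, not code from the answers; the use of scikit-learn's RandomForestRegressor and its estimators_ attribute is an assumption of this sketch.

```python
# Sketch: per-tree prediction intervals for a random forest (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
x1 = np.repeat([0, 1], 500)
x2 = np.tile(np.repeat([0, 1], 250), 2)
y = 10 + 5 * x1 + 10 * x2 - 3 * x1 * x2 + rng.normal(0, 5, 1000)
X = np.column_stack([x1, x2])

rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
X_new = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])

# one prediction per tree: shape (n_trees, n_points)
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])

# naive interval: quantiles of the per-tree predictions (usually too narrow)
naive = np.percentile(per_tree, [2.5, 97.5], axis=0)

# widened interval: add noise with the residual sd to each tree's prediction
# before taking quantiles (mimics adding sqrt(mse) in the R code above);
# the in-sample residual sd is a rough stand-in, OOB error would be better
resid_sd = np.std(y - rf.predict(X))
widened = np.percentile(
    per_tree + rng.normal(0, resid_sd, per_tree.shape), [2.5, 97.5], axis=0)
```

As in the R comparison above, the naive per-tree quantiles only capture the spread of the ensemble mean, so the widened intervals are the ones comparable to linear-model prediction intervals.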
$f:[0,1]\times [0,1]\to\mathbb R,$ defined by $$f(x,y)= \begin{cases}1, & y\in[0,1]\setminus\mathbb Q\\2x, & \text{otherwise.}\end{cases}$$

$1.1$: $\int_0^1f(x,y)\,dx$ exists for every $y\in[0,1]$ and is equal to $1$.

$1.2$: The iterated integral $\int_0^1(\int_0^1f(x,y)\,dx)\,dy$ exists and is $1$.

$1.3$: The double integral $\int_If(x,y)\,d(x,y)$ does not exist.

I am struggling with solving iterated integrals in general, and with this one I don't even know where to start, since the values keep jumping between $1$ and $2x$.

Edit: Got an idea for 1.1: I made two cases, one for irrational $y$ and one for the rest, giving me $\int_0^1 1\,dx$, which is $1$, and $\int_0^1 2x\,dx$, which also is $1$.

Could someone give me a short explanation and some hints on how to approach these exercises?
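Writing the case split from the edit out in full (a sketch, as hints rather than a complete solution):

```latex
\text{If } y\in[0,1]\setminus\mathbb Q:\qquad
  \int_0^1 f(x,y)\,dx=\int_0^1 1\,dx=1.
\text{If } y\in[0,1]\cap\mathbb Q:\qquad
  \int_0^1 f(x,y)\,dx=\int_0^1 2x\,dx=\Big[x^2\Big]_0^1=1.
% Hence the inner integral equals 1 for EVERY y, so
\int_0^1\!\Big(\int_0^1 f(x,y)\,dx\Big)dy=\int_0^1 1\,dy=1.
```

For 1.3, a possible line of attack: on any small subrectangle, both rational and irrational values of $y$ occur, so $f$ takes values close to $1$ and close to $2x$ there; wherever $2x$ is bounded away from $1$ (i.e. $x$ away from $1/2$), the oscillation of $f$ does not shrink, so upper and lower Darboux sums over $I$ cannot converge to the same value.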
The following question is taken from Royden, Real Analysis, $4$th edition, Chapter $4,$ question $20.$

Let $\{f_n\}$ be a sequence of nonnegative measurable functions that converges to $f$ pointwise on $E.$ Let $M\geq 0$ be such that $\int_E f_n\leq M$ for all $n.$ Show that $\int_E f\leq M.$ Verify that this property is equivalent to the statement of Fatou's Lemma.

I have proven that $\int_E f\leq M,$ but I have no idea how to tackle the second part. After some googling, I found a solution on MSE, which goes as follows:

Let $(f_n,n\in\Bbb N)$ be a sequence of measurable integrable functions and $a_N:=\inf_{k\geqslant N}\int f_kd\mu$. Working with the sequence $(f_n,n\geqslant N)$ (for which the sequence of integrals has the same $\liminf$ as that of the whole sequence), one can see that $\int fd\mu\leqslant a_N$ for each $N$. Now take the limit $\lim_{N\to +\infty}$.

Questions:

$(1)$ What is the motivation for considering $a_N:=\inf_{k\geq N}\int f_k d\mu?$

$(2)$ Why does $\int f d\mu\leq a_N$ hold for each $N?$
Let us consider Example 16.1 in Wooldridge (2010), concerning school and employment decisions for young men. The data contain information on employment and schooling for young men over several years. We will work with the data for 1987. The outcome is status, coded 1=in school, 2=at home (meaning not in school and not working), and 3=working. The predictors are education, a quadratic on work experience, and an indicator for black. We read the data from the Stata website, keep the year 1987, drop missing values, label the outcome, and fit the model.

. use http://www.stata.com/data/jwooldridge/eacsap/keane.dta, clear
. keep if year == 87
(10985 observations deleted)
. drop if missing(status)
(21 observations deleted)
. label define status 1 "school" 2 "home" 3 "work"
. label values status status
. mlogit status educ exper expersq black, base(1)

Iteration 0:   log likelihood = -1199.7182
Iteration 1:   log likelihood = -960.26272
Iteration 2:   log likelihood = -908.7673
Iteration 3:   log likelihood = -907.85992
Iteration 4:   log likelihood = -907.85723
Iteration 5:   log likelihood = -907.85723

Multinomial logistic regression                 Number of obs =      1717
                                                LR chi2(8)    =    583.72
                                                Prob > chi2   =    0.0000
Log likelihood = -907.85723                     Pseudo R2     =    0.2433

      status        Coef.   Std. Err.       z    P>|z|    [95% Conf. Interval]
school          (base outcome)
home
        educ   -.6736313    .0698999   -9.64    0.000    -.8106325     -.53663
       exper   -.1062149     .173282   -0.61    0.540    -.4458414    .2334116
     expersq   -.0125152    .0252291   -0.50    0.620    -.0619633     .036933
       black    .8130166    .3027231    2.69    0.007     .2196902    1.406343
       _cons    10.27787    1.133336    9.07    0.000     8.056578    12.49917
work
        educ   -.3146573    .0651096   -4.83    0.000    -.4422699   -.1870448
       exper    .8487367    .1569856    5.41    0.000     .5410507    1.156423
     expersq   -.0773003    .0229217   -3.37    0.001    -.1222261   -.0323746
       black    .3113612    .2815339    1.11    0.269     -.240435    .8631574
       _cons    5.543798    1.086409    5.10    0.000     3.414475    7.673121

The results agree exactly with Table 16.1 in Wooldridge (2010, page 645).
Let us focus on the coefficient of black in the work equation, which is 0.311. Exponentiating we obtain

. di exp(_b[work:black])
1.3652822

Thus, the relative probability of working rather than being in school is 37% higher for blacks than for non-blacks with the same education and work experience. (Relative probabilities are also called relative odds.) A common mistake is to interpret this coefficient as meaning that the probability of working is higher for blacks. It is only the relative probability of work over school that is higher. To obtain a fuller picture we need to consider the second equation as well. The coefficient of black in the home equation is 0.813. Exponentiating, we obtain

. di exp(_b[home:black])
2.2546993

Thus, the relative probability of being at home rather than in school for blacks is more than double the corresponding relative probability for non-blacks with the same education and work experience. In short, black is associated with an increase in the relative probability of work over school, but also a much larger increase in the relative probability of home over school. What happens to the actual probability of working depends on how these two effects balance out. To determine the effect of black on the probability scale we need to compute marginal effects, which can be done using continuous or discrete calculations.

The continuous calculation is based on the derivative of the probability of working with respect to a predictor. Let \( \pi_{ij} = \Pr\{Y_i=j\} \) denote the probability that the i-th observation falls in the j-th category, which is given by \[ \pi_{ij} = \frac{e^{x_i'\beta_j}}{ \sum_r e^{x_i'\beta_r}} \] where \( \beta_j = 0\) when j is the baseline or reference outcome, in this case school. Taking derivatives w.r.t.
the k-th predictor we obtain, after some simplification, \[ \frac{\partial\pi_{ij}}{\partial x_{ik}} = \pi_{ij} ( \beta_{jk} - \sum_r \pi_{ir} \beta_{rk} ) \] noting again that the coefficient is zero for the baseline outcome. To compute these we predict the probabilities and then apply the formula.

. predict p1 p2 p3, pr
. gen me1 = p1*( -(p2*_b[2:black] + p3*_b[3:black]))
. gen me2 = p2*(_b[2:black] -(p2*_b[2:black] + p3*_b[3:black]))
. gen me3 = p3*(_b[3:black] -(p2*_b[2:black] + p3*_b[3:black]))
. sum me*

    Variable        Obs        Mean   Std. Dev.        Min         Max
    me1            1717   -.0183811   .0241438   -.1011232   -.0007906
    me2            1717     .058979   .0355181    .0073935    .1402041
    me3            1717   -.0405979   .0404273   -.1246674    .0587828

We find that the average marginal effect of black on work is actually negative: -0.0406. This means that the probability of working is on average about four percentage points lower for blacks than for non-blacks with the same education and experience. Stata can do this calculation using the dydx() option of the margins command. Here's the marginal effect for work:

. margins, dydx(black) pr(out(3))

Average marginal effects                        Number of obs = 1717
Model VCE    : OIM
Expression   : Pr(status==work), predict(out(3))
dy/dx w.r.t. : black

                         Delta-method
                dy/dx   Std. Err.       z    P>|z|   [95% Conf. Interval]
    black   -.0405979    .0197356   -2.06    0.040    -.079279   -.0019168

This agrees exactly with our hand calculation. Note that Stata uses the derivative for continuous variables and a discrete difference for factor variables, which we consider next.

For the discrete calculation we compute predicted probabilities by setting ethnicity to black and then to non-black and averaging:

. gen keep_black = black
. quietly replace black = 1
. predict p11 p12 p13, pr
. sum p1?

    Variable        Obs        Mean   Std. Dev.        Min        Max
    p11            1717    .0450738   .0712937   .0015405   .4842191
    p12            1717    .2274114   .2114531   .0237205   .9368684
    p13            1717    .7275148   .2156368   .0615363   .9393418

    Variable        Obs        Mean   Std. Dev.
        Min         Max
    p01            1717    .0630648   .0941436   .0025128   .5787593
    p02            1717    .1659493   .1861749    .014167   .8990285
    p03            1717    .7709859   .1986715   .0975198   .9462531

We find that the average probability of working is 0.7275 if black and 0.7710 if not black, a difference of -0.0435, so the probability of working is on average just over four percentage points lower for blacks. Stata can calculate the predictive margins if you specify black as a factor variable when you fit the model, and then issue the command margins black. This only works for factor variables.

. quietly mlogit status educ exper expersq i.black, base(1)
. margins black, pr(out(3))

Predictive margins                              Number of obs = 1717
Model VCE    : OIM
Expression   : Pr(status==work), predict(out(3))

                        Delta-method
               Margin   Std. Err.       z    P>|z|   [95% Conf. Interval]
  black
      0      .7709859   .0119827   64.34    0.000    .7475001    .7944716
      1      .7275148   .0154176   47.19    0.000    .6972968    .7577328

The marginal effect can then be obtained as a discrete difference:

. margins, dydx(black) pr(out(3))

Average marginal effects                        Number of obs = 1717
Model VCE    : OIM
Expression   : Pr(status==work), predict(out(3))
dy/dx w.r.t. : 1.black

                         Delta-method
                dy/dx   Std. Err.       z    P>|z|   [95% Conf. Interval]
  1.black    -.043471    .0199503   -2.18    0.029    -.082573   -.0043691

Note: dy/dx for factor levels is the discrete change from the base level.

These results agree exactly with our hand calculations. The take-away conclusion here is that multinomial logit coefficients can only be interpreted in terms of relative probabilities; to reach conclusions about actual probabilities we need to calculate continuous or discrete marginal effects.
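The derivative formula for the marginal effects can be checked numerically outside Stata. The sketch below (my own illustration; the coefficients are made up, not the estimates above) implements \( \partial\pi_{ij}/\partial x_{ik} = \pi_{ij}(\beta_{jk} - \sum_r \pi_{ir}\beta_{rk}) \) and verifies it against a finite difference:

```python
# Numeric sketch of the multinomial-logit marginal-effect formula
#   d(pi_j)/d(x_k) = pi_j * (beta_jk - sum_r pi_r * beta_rk)
# checked against a finite difference. Coefficients are illustrative only.
import numpy as np

beta = np.array([[0.0,  0.0, 0.0],    # baseline outcome: all coefficients zero
                 [0.5, -0.6, 0.8],    # outcome 2: const, educ, black
                 [1.0, -0.3, 0.3]])   # outcome 3

def probs(x):
    """Multinomial logit probabilities for covariate vector x (with constant)."""
    eta = beta @ x
    e = np.exp(eta - eta.max())       # stabilized softmax
    return e / e.sum()

x = np.array([1.0, 12.0, 1.0])        # const, educ=12, black=1
p = probs(x)
k = 2                                  # index of the 'black' covariate

# analytic marginal effects of x_k on each outcome probability
analytic = p * (beta[:, k] - p @ beta[:, k])

# finite-difference check on the same coordinate
h = 1e-6
xp = x.copy()
xp[k] += h
numeric = (probs(xp) - probs(x)) / h
```

The analytic and finite-difference vectors agree to numerical precision, and, as the formula implies, the marginal effects across outcomes sum to zero.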
This web page lists all elements and attributes that can be used in the input file of an exciting calculation.

Elements are defined according to the general XML conventions. Example: The element groundstate is used to set up a self-consistent calculation of the ground-state energy.

Attributes are defined according to the general XML conventions. An attribute is always connected to an element. In exciting, an attribute generally specifies a parameter or a set of parameters which are connected to the corresponding element. Example: The attribute xctype of the element groundstate defines which exchange-correlation potential is used in the self-consistent calculation.

The input file of an exciting calculation is named input.xml. It must be a valid XML file, and it must contain the root element input.

Unless explicitly stated otherwise, exciting uses atomic units ($\hbar = m_{e} = e = 1$):

Energies are given in Hartree: $1\ Ha = 2\ Ry = 27.21138386(68)\ eV = 4.35926\times 10^{-18}\ J$

Lengths are given in Bohr: $1\ a_{\rm Bohr} = 0.52917720859(36)\ {\buildrel _{\circ} \over {\mathrm{A}}} = 0.52917720859(36)\times 10^{-10}\ m$

Magnetic fields are given in units of $1\ a.u. = \displaystyle\frac{e}{a_{\rm Bohr}^2} = 1717.2445320376$ Tesla.

Note: The electron charge is positive, so that the atomic numbers $Z$ are negative.

Element: input

The xml element input is the root element of the input file. It must contain the element structure and the element groundstate.

contains:
title (1 times)
convert (optional)
extract (optional)
structure (1 times)
groundstate (optional)
structureoptimization (optional)
properties (optional)
phonons (optional)
xs (optional)
keywords (optional)

XPath: /input

Element: title
Element: convert
Element: extract
Element: structure
Element: crystal
Element: basevect
Element: species
Element: atom
Element: LDAplusU

Element: groundstate

The groundstate element is required for any calculation.
Its attributes are the parameters and methods used to calculate the ground-state density.

contains:
spin (optional)
solver (optional)

XPath: /input/groundstate

List of attributes: do, ngridk, rgkmax, epspot, epsengy, epsforce, rmtapm, swidth, stype, findlinentype, isgkmax, gmaxvr, nempty, nosym, frozencore, autokpt, radkpt, reducek, tfibs, tforce, lmaxapw, maxscl, chgexs, deband, epsband, dlinenfermi, epschg, epsocc, mixer, beta0, betainc, betadec, lradstep, nprad, xctype, ldapu, lmaxvr, fracinr, lmaxinr, lmaxmat, vkloff, npsden, cfdamp, nosource, tevecsv, nwrite, ptnucl

Element: spin
Element: solver
Element: structureoptimization

Element: properties

Properties listed in this element can be calculated from the groundstate. This also works from a saved state of a previous run.

contains:
bandstructure (optional)
STM (optional)
wfplot (optional)
dos (optional)
LSJ (optional)
masstensor (optional)
chargedensityplot (optional)
exccplot (optional)
elfplot (optional)
mvecfield (optional)
xcmvecfield (optional)
electricfield (optional)
gradmvecfield (optional)
fermisurfaceplot (optional)
EFG (optional)
momentummatrix (optional)
linresponsetensor (optional)
mossbauer (optional)
dielectric (optional)
expiqr (optional)
elnes (optional)
eliashberg (optional)

XPath: /input/properties

Element: bandstructure
Element: STM
Element: wfplot

Element: dos

If present, a DOS calculation is started. DOS and optics plots require integrals of the kind (8). These are calculated by first interpolating the functions $e({\bf k})$ and $f({\bf k})$ with the trilinear method on a much finer mesh whose size is determined by ngrdos. Then the $\omega$-dependent histogram of the integrand is accumulated over the fine mesh. If the output function is noisy then either ngrdos should be increased or nwdos decreased. Alternatively, the output function can be artificially smoothed up to a level given by nsmdos.
This is the number of successive 3-point averages to be applied to the function $g$.

Type: no content
XPath: /input/properties/dos

Element: LSJ
Element: masstensor
Element: chargedensityplot
Element: exccplot
Element: elfplot
Element: mvecfield
Element: xcmvecfield
Element: electricfield
Element: gradmvecfield
Element: fermisurfaceplot
Element: EFG
Element: momentummatrix
Element: linresponsetensor
Element: optcomp
Element: mossbauer
Element: dielectric
Element: expiqr
Element: elnes
Element: eliashberg
Element: phonons
Element: phonondos
Element: phonondispplot

Element: xs

If this element is present with a valid configuration, the macroscopic dielectric function and related spectroscopic quantities in the linear regime are calculated through either time-dependent DFT (TDDFT) or the Bethe-Salpeter equation (BSE).

contains:
tddft (optional)
screening (optional)
BSE (optional)
qpointset (1 times)
tetra (optional)
dosWindow (1 times)
plan (optional)

XPath: /input/xs

List of attributes: emattype, dfoffdiag, lmaxapwwf, lmaxemat, emaxdf, broad, epsdfde, tevout, xstype, symmorph, fastpmat, fastemat, gather, tappinfo, dbglev, usegdft, gqmax, nosym, ngridk, vkloff, reducek, ngridq, reduceq, rgkmax, swidth, lmaxapw, lmaxmat, nempty, scissor

Element: tddft
Element: dftrans
Element: trans
Element: screening
Element: BSE
Element: tetra
Element: dosWindow
Element: plan
Element: doonly
Element: keywords

Reused Elements

The following elements can occur more than once in the input file. Therefore they are listed separately.

Element: origin
Element: point

Element: plot1d

The element plot1d specifies sample points along a path. The coordinate space (lattice or cartesian) is chosen in the context of the parent.
contains:
path (1 times)

XPath: ./plot1d

Parent:
/input/properties/bandstructure
/input/properties/wfplot
/input/properties/chargedensityplot
/input/properties/exccplot
/input/properties/elfplot
/input/properties/gradmvecfield
/input/phonons/phonondispplot

Element: path

Element: plot2d

Defines a 2d plot domain.

contains:
parallelogram (1 times)

XPath: ./plot2d

Parent:
/input/properties/STM
/input/properties/wfplot
/input/properties/chargedensityplot
/input/properties/exccplot
/input/properties/elfplot
/input/properties/mvecfield
/input/properties/xcmvecfield
/input/properties/electricfield
/input/properties/gradmvecfield

Element: parallelogram

Element: plot3d

Defines a 3d plot domain.

contains:
box (1 times)

XPath: ./plot3d

Parent:
/input/properties/wfplot
/input/properties/chargedensityplot
/input/properties/exccplot
/input/properties/elfplot
/input/properties/mvecfield
/input/properties/xcmvecfield
/input/properties/electricfield
/input/properties/gradmvecfield

Element: box
Element: pointstatepair

Element: kstlist

The kstlist element is used in the LSJ and wavefunction plot elements. It is a user-defined list of ${\bf k}$-point and state index pairs, namely those used for plotting wavefunctions and for writing ${\bf L}$, ${\bf S}$ and ${\bf J}$ expectation values.

contains:
pointstatepair (1 times or more)

XPath: ./kstlist

Parent:
/input/properties/wfplot
/input/properties/LSJ

Element: qpointset
Element: qpoint
Element: parts
Element: dopart

Data Types

The input definition uses derived data types. These are described here.

Type fortrandouble

Type vector
A vector is a space-separated list of floating point numbers.

Type integerlist
Type vect3d
Type vect2d

Type integertriple
Space-separated list of three integers.

Type integerpair
Space-separated list of two integers. Example: "1 2"
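A minimal input.xml skeleton consistent with the element structure above might look as follows. This is an illustrative sketch, not an example from this page: the attributes used inside structure and crystal (speciespath, scale, speciesfile, coord) and the xctype value are assumptions, and a real calculation requires an existing species file.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<input>
  <title>Sketch: diamond ground state</title>
  <structure speciespath="./">
    <crystal scale="6.7">
      <basevect>0.0 0.5 0.5</basevect>
      <basevect>0.5 0.0 0.5</basevect>
      <basevect>0.5 0.5 0.0</basevect>
    </crystal>
    <species speciesfile="C.xml">
      <atom coord="0.00 0.00 0.00"/>
      <atom coord="0.25 0.25 0.25"/>
    </species>
  </structure>
  <groundstate ngridk="4 4 4" rgkmax="7.0"/>
</input>
```

Note how the ordering follows the contains list of the input element: title, then structure, then groundstate; ngridk and rgkmax are taken from the groundstate attribute list above, and vectors such as basevect and coord use the space-separated number format described under Data Types.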
Here is a string of comments which might be helpful. UPDATE at the end: I conjecture an upper bound $a(n) \leq \lfloor (\frac{n-1}{2})^2 \rfloor$ which satisfies a stronger property.

Consider instead cases of $$\prod_1^k(x_i+a)= \prod_1^k(y_i+a) \tag{*}$$ where the multisets $\{x_1,\cdots ,x_k\}$ and $\{y_1,\cdots ,y_k\}$ are disjoint. I'll assume the elements are listed in increasing order. To stick to the OP, add the requirement that the $y_i$ are distinct. For example, $a(5)\geq 2$ because there are counter-examples to $a=0$ and $a=1$: $$(2+0)(2+0)(3+0)(2+0)(5+0)=(1+0)(2+0)(3+0)(4+0)(5+0)$$$$(2+1)(2+1)(3+1)(3+1)(4+1)=(1+1)(2+1)(3+1)(4+1)(5+1)$$ Cancel out common factors to see that the sources of these counter-examples are $1\cdot 4=2 \cdot 2$ and $2 \cdot 6=3 \cdot 4.$

In the other direction, one can pad an example of $(*)$ by changing the right-hand side to $\prod_1^n(i+a)$ and adding the same new factors on the left. Here $n$ could be $\max(x_k,y_k)$ or anything larger. This final remark exhibits that $a(n)$ is non-decreasing. Of the values reported so far, the larger ones are somewhat close: $$a(14)=33 \lt 42=\lfloor (\frac{13}2)^2\rfloor$$ $$ a(15)=45 \lt 49$$

Here is a potential conjecture. It is false; I mention it only because the counter-examples are lovely. Suppose that the value of $\prod_{i=1}^n (a + x_i) -\prod_{i=1}^n (a + y_i)$ is independent of $a$. Does that mean that the shared value is $0$ and $x_i=y_i?$ The answer is no, because of ideal solutions to the Prouhet-Tarry-Escott problem. For example $2^k+3^k+7^k=1^k+5^k+6^k$ for $k=0,1,2.$ This explains the observation that $$(2+a)(3+a)(7+a)=42+41a+12a^2+a^3$$$$(1+a)(5+a)(6+a)=30+41a+12a^2+a^3$$ so the two always differ by $12.$

The OP is to find the first $a$ which satisfies the condition: for any set of integers $(x_1,\dotsc,x_n)$ with $1\leq x_i \leq n$, $(x_1,\dotsc,x_n)$ is a permutation of $(1,\dotsc,n)$ if and only if $(x_1+a)\dotsb(x_n+a)=(1+a)\dotsb(n+a)$.
I will instead seek the last $a$ which fails the property. This (plus $1$) is then an upper bound on $a(n).$ I will conjecture that for fixed $n,$ this last bad $a$ is at most $(\frac{n-1}{2})^2.$ My justification is sketchy and would probably benefit from classical inequalities.

By my comments above, given $n$, a particular $a$ is bad if there is a $k$-member subset of $\{a+1,\cdots ,a+n\}$ and a disjoint multiset of $k$ elements from the same set which have the same product. I think that the extreme case is $k=2$ with $a+1=s^2$ and $n=2s+1,$ so that $a+n=(s+1)^2.$ Then $s^2\cdot (s+1)^2=(s^2+s)\cdot (s^2+s).$

Here are plots showing that $a=18$ and $a=45$ are good for $n=11,$ at least as far as $k=2.$ The first shows that there are no solutions of $18\cdot 29=u \cdot v$ with $19 \leq u,v \leq 28.$ The hyperbola $xy=19\cdot 28$ (on this interval) snakes through the lattice points without hitting any of them. That isn't surprising, given that $19$ is prime. The second shows the hyperbola $xy=45\cdot 56.$ Along the diagonal are the lattice points $(x,101-x).$ The diagonal below consists of the closest lattice points, but the hyperbola stays above that closest diagonal. Hence there are no solutions of $u \cdot v=2520$ in that range other than the endpoints. The $a$ chosen for these is larger than needed, but it makes the plots easier to see. In the cases mentioned above, such as $25\cdot 36=30 \cdot 30,$ the hyperbola is tangent to the lower diagonal and the contact point is a lattice point.

It will suffice to end this sketch by saying, without justification, that for larger $k$ the surface $x_1x_2\cdots x_k=y_1y_2\cdots y_k$ lies below the hyperplane $x_1+x_2+\cdots +x_k=y_1+y_2+\cdots + y_k,$ which is rich in lattice points. If the numbers are large enough then that surface stays close enough to the hyperplane that it never touches the parallel hyperplane of nearest lattice points. It seems as if "large enough" decreases with $k$. A study of the known bad $a$ values might make that clear.
Do any of the known counterexamples use more than $k=2?$ The exact value of $a(n)$ in the OP depends on the distribution of fairly composite integers in certain intervals of length $n,$ which is not very predictable. However, I think the simplifications here might make the searches easier. The values reported so far seem close to the bound.
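The counter-examples for small $n$ can be found mechanically. The sketch below (my own, feasible only for tiny $n$) brute-forces, for given $n$ and $a$, whether some non-permutation tuple $(x_1,\dotsc,x_n)$ with $1\leq x_i\leq n$ matches the product $(1+a)\dotsb(n+a)$; a returned witness shows that $a$ fails:

```python
# Brute-force check: for given n and a, does some sorted tuple (x_1..x_n),
# 1 <= x_i <= n, that is NOT the permutation (1..n) satisfy
#   prod(x_i + a) == (1+a)(2+a)...(n+a)?
# A witness means a is "bad"; only feasible for tiny n.
from itertools import combinations_with_replacement
from math import prod

def bad(n, a):
    target = prod(i + a for i in range(1, n + 1))
    full = tuple(range(1, n + 1))   # the sorted permutation itself
    for xs in combinations_with_replacement(range(1, n + 1), n):
        if xs != full and prod(x + a for x in xs) == target:
            return xs               # witness: a fails for this n
    return None                     # no witness: a passes (for this n)

print(bad(5, 0))   # a witness exists, matching the first example above
print(bad(5, 1))   # a witness exists, matching the second example above
```

Since the product is symmetric in the $x_i$, it suffices to enumerate sorted tuples via combinations_with_replacement, which keeps the search space small ($\binom{2n-1}{n}$ tuples).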
Question 1

I think the short answer is no. Try testing functions of the form $|x|^{-n + 1}$ around $0$: the gradient would be like $|x|^{-n}$ and thus in $L^{1,\infty}$, but the function is not in the required $L_p$ space. The Sobolev inequalities for $p=1$ are connected with the isoperimetric inequality, see [S], and are thus very tight. I recall a result similar to what you want but at the other end of the range, i.e. if $\nabla u \in L^{n,\infty}$ then $u$ is in BMO.

Question 2

Try expressing $v = \nabla u$ as $u = D_n \ast v$ for some $\mathbf{R}^n$-valued distribution $D_n$. Then check in which Lorentz spaces $D_n$ lies and apply Young's inequality in Lorentz spaces, as in [N].

Question 3

This would be immediately correct with weak $n/(n-2)$ as a consequence of the Hardy-Littlewood-Sobolev inequalities, which hold for every fractional power of $(-\Delta)$, i.e.: $$ \| \, u \, \|_{L^{\frac{n}{n - \alpha},\infty}} \lesssim \| \, (-\Delta)^{\frac{\alpha}{2}} u \, \|_{L^1}$$ see [V]. When $\alpha = 2$, the inequality would still be true without weak bounds after changing $\Delta$ to the double gradient $\nabla^2$, by iteration of the Gagliardo-Nirenberg inequality. I do not think that you can replace the $\nabla^2$ by a $\Delta$ unless the iterated Riesz transforms $R_i R_j$ were bounded in $L^1$, which they are not. To produce an explicit counterexample I would first show that the inequality is tight for $\nabla^2$ and then use that $\|\nabla^2 u\|_1$ and $\|\Delta u\|_1$ are not comparable.

[N] Nursultanov, Erlan; Tikhonov, Sergey, Convolution inequalities in Lorentz spaces, J. Fourier Anal. Appl. 17, No. 3, 486-505 (2011). ZBL1235.44012. https://link.springer.com/article/10.1007/s00041-010-9159-9

[S] Saloff-Coste, Laurent, Aspects of Sobolev-type inequalities, London Mathematical Society Lecture Note Series. 289. Cambridge: Cambridge University Press. x, 190 p. (2002). ZBL0991.35002.

[V] Varopoulos, N.
Th.; Saloff-Coste, L.; Coulhon, T., Analysis and geometry on groups, Cambridge Tracts in Mathematics 100. Cambridge: Cambridge University Press (ISBN 978-0-521-08801-5/pbk). xii, 156 p. (2008). ZBL1179.22009.
Superfunctions

Contents

Covers

The back cover gives a short abstract of the Book and a few notes about the Author.

About the topic

Assume some given holomorphic function \(T\). The superfunction is a holomorphic solution \(F\) of the equation \(T(F(z))=F(z+1)\)

The Abel function (or abelfunction) is the inverse of the superfunction, \(G=F^{-1}\)

The abelfunction is a solution of the Abel equation \(G(T(z))=G(z)+1\)

Once the superfunction \(F\) and the abelfunction \(G=F^{-1}\) are established, the \(n\)th iterate of the transfer function \(T\) can be expressed as follows: \(T^n(z)=F(n+G(z))\)

This expression allows one to evaluate non-integer iterates. The number \(n\) of the iterate can be real or even complex. In particular, for integer \(n\), the iterates have the common meaning: \(T^{-1}\) is the inverse function of \(T\), \(T^0(z)=z\), \(T^1(z)=T(z)\), \(T^2(z)=T(T(z))\), \(T^3(z)=T(T(T(z)))\), and so on. The group property holds: \(T^m(T^n(z))=T^{m+n}(z)\)

A special notation is used throughout the book: the number of the iterate is indicated as a superscript. In these notations, \(\sin^2(z)=\sin(\sin(z))\), but never \(\sin(z)^2\). This notation is borrowed from quantum mechanics, where \(P^2(\psi)=P(P(\psi))\), but never \(P(\psi)^2\).

About the Book

Tools for the evaluation of superfunctions, abelfunctions and non-integer iterates of holomorphic functions are collected. For a given transfer function \(T\), the superfunction is a solution \(F\) of the transfer equation \(F(z+1)=T(F(z))\). The abelfunction is the inverse of \(F\). In particular, the superfunctions of the factorial, the exponential and sin are considered; holomorphic extensions of the logistic sequence and of the Ackermann functions are suggested. From the Ackermann functions, tetration (mainly to base \(b>1\)) and pentation (to base e) are presented. Efficient algorithms for the evaluation of superfunctions and abelfunctions are described. Graphics and complex maps are plotted. Possible applications are discussed.
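The iterate formula \(T^n(z)=F(n+G(z))\) can be illustrated with a transfer function whose superfunction is elementary. For \(T(x)=2x\), one superfunction is \(F(z)=2^z\) (indeed \(F(z+1)=2\cdot 2^z=T(F(z))\)) with abelfunction \(G=\log_2\). This toy example is my own choice, not taken from the Book:

```python
# Toy illustration of the iterate formula T^n(z) = F(n + G(z)) for T(x) = 2x,
# where F(z) = 2**z is a superfunction (F(z+1) = 2*F(z) = T(F(z)))
# and G = log2 is the corresponding abelfunction.
from math import log2

def T_iter(n, z):
    """n-th iterate of T(x) = 2x via the superfunction; n may be non-integer."""
    F = lambda w: 2.0 ** w      # superfunction
    G = lambda w: log2(w)       # abelfunction, inverse of F
    return F(n + G(z))

# integer iterates reproduce ordinary composition
assert abs(T_iter(1, 3.0) - 6.0) < 1e-12        # T(3) = 6
assert abs(T_iter(2, 3.0) - 12.0) < 1e-11       # T(T(3)) = 12

# half-iterate: applying it twice gives one full application (group property)
half = T_iter(0.5, 3.0)
assert abs(T_iter(0.5, half) - 6.0) < 1e-9
```

The half-iterate here is \(T^{1/2}(z)=\sqrt{2}\,z\), and the group property \(T^m(T^n(z))=T^{m+n}(z)\) holds because \(F(m+G(F(n+G(z))))=F(m+n+G(z))\). The interesting cases treated in the Book are those, such as \(T=\exp_b\), where no elementary superfunction is available and \(F\) must be constructed numerically.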
Superfunctions significantly extend the set of functions that can be used in scientific research and technical design. Generators of the figures are loaded to the site TORI, http://mizugadro.mydns.jp for free downloading. With these generators, the Readers can reproduce (and modify) the figures from the Book. The Book is intended to be applied and popular. I try to avoid complicated formulas, but some basic knowledge of complex arithmetic, the Cauchy integral and the principles of asymptotic analysis should help with the reading.

About the Author

Dmitrii Kouznetsov. Graduated from the Physics Department of the Moscow State University (1980). Work: USSR, Mexico, USA, Japan. Century 20: Proved the quantum stability of the optical soliton, suggested the lower bound of the quantum noise of a nonlinear amplifier, indicated the limit of the single-mode approximation in quantum optics. Century 21: Theorem about the boundary behaviour of modes of the Dirichlet laplacian, theory of ridged atomic mirrors, formalism of superfunctions, TORI axioms.

Summary

The summary gives the main notations used in the Book:

\(T\)\( ~ ~ ~ ~ ~\) Transfer function

\(F\big(G(z)\big)=z\) \(~ ~ ~ ~ ~\) Identity function

\(T^n(z)=F\big(n+G(z)\big)\) \(~ ~ ~\) \(n\)th iterate

\(\displaystyle F(z)=\frac{1}{2\pi \mathrm i} \oint \frac{F(t) \, \mathrm d t}{t-z}\) \(~ ~ ~\) Cauchy integral

\(\mathrm{tet}_b(z\!+\!1)=b^{\mathrm{tet}_b(z)}\) \(~ ~ ~\) tetration to base \(b\)

\(\mathrm{tet}_b(0)=1\) \(~, ~ ~\) \( \mathrm{tet}_b\big(\mathrm{ate}_b(z)\big)=z\)

\(\mathrm{ate}_b(b^z)=\mathrm{ate}_b(z)+1\) \(~ ~\) arctetration to base \(b\)

\(\exp_b^{~n}(z)=\mathrm{tet}_b\big(n+\mathrm{ate}_b(z)\big)\) \(~ ~\) \(n\)th iterate of the function \(~\) \(z\!\mapsto\!
b^z\)

\(\displaystyle \mathrm{Tania}^{\prime}(z)=\frac{\mathrm{Tania}(z)}{\mathrm{Tania}(z)\!+\!1}\) \(~ ~\) Tania function,\(~\) \(\mathrm{Tania}(0)\!=\!1\)

\(\displaystyle \mathrm{Doya}(z)=\mathrm{Tania}\big(1\!+\!\mathrm{ArcTania}(z)\big)\) \(~ ~\) Doya function

\(\displaystyle \mathrm{Shoka}(z)=z+\ln(\mathrm e^{-z}\!+\!\mathrm e \!-\! 1)\) \(~\) Shoka function

\(\displaystyle \mathrm{Keller}(z)=\mathrm{Shoka}\big(1\!+\!\mathrm{ArcShoka}(z)\big)\) \(~ ~\) Keller function

\(\displaystyle \mathrm{tra}(z)=z+\exp(z)\) \(~ ~ ~\) Trappmann function

\(\displaystyle \mathrm{zex}(z)=z\,\exp(z)\) \(~ ~ ~ ~\) Zex function

\(\displaystyle \mathrm{Nem}_q(z)=z+z^3+qz^4\) \(~ ~ ~ ~\) Nemtsov function

Recent advance

Most of the results presented in the book are published in scientific journals; the links (without numbers) are supplied at the bottom. After the appearance of the first version of the Book, certain advances have been made in the evaluation of tetration of complex argument; a new algorithm is suggested that seems to be more efficient than the Cauchy integral described in the Book. [3] [4] [5] [6]

References

https://www.morebooks.de/store/ru/book/Суперфункции/isbn/978-3-659-56202-0 Дмитрий Кузнецов. Суперфункции. ISBN-13: 978-3-659-56202-0. ISBN-10: 3659562025. EAN: 9783659562020.

http://www.ils.uec.ac.jp/~dima/BOOK/202.pdf
http://mizugadro.mydns.jp/BOOK/202.pdf
http://www.ils.uec.ac.jp/~dima/BOOK/443.pdf (a little bit out of date)
http://mizugadro.mydns.jp/BOOK/444.pdf
D.Kouznetsov. Superfunctions. 2018.

http://journal.kkms.org/index.php/kjm/article/view/428 William Paulsen. Finding the natural solution to f(f(x))=exp(x). Korean J. Math. Vol 24, No 1 (2016) pp.81-106.

https://link.springer.com/article/10.1007/s10444-017-9524-1 William Paulsen, Samuel Cowgill. Solving F(z + 1) = b F(z) in the complex plane.
Advances in Computational Mathematics, December 2017, Volume 43, Issue 6, pp 1261–1282 https://search.proquest.com/openview/cb7af40083915e275005ffca4bfd4685/1?pq-origsite=gscholar&cbl=18750&diss=y Cowgill, Samuel. Exploring Tetration in the Complex Plane. Arkansas State University, ProQuest Dissertations Publishing, 2017. 10263680. https://link.springer.com/article/10.1007/s10444-018-9615-7 William Paulsen. Tetration for complex bases. Advances in Computational Mathematics, 2018.06.02. The book combines the main results from the following publications: http://www.ams.org/mcom/2009-78-267/S0025-5718-09-02188-7/home.html http://mizugadro.mydns.jp/PAPERS/2009analuxpRepri.pdf D.Kouznetsov. Analytic solution of F(z+1)=exp(F(z)) in complex z-plane. Mathematics of Computation 78 (2009), 1647-1670. http://www.jointmathematicsmeetings.org/journals/mcom/2010-79-271/S0025-5718-10-02342-2/S0025-5718-10-02342-2.pdf http://www.ams.org/journals/mcom/2010-79-271/S0025-5718-10-02342-2/home.html http://eretrandre.org/rb/files/Kouznetsov2009_215.pdf http://www.ils.uec.ac.jp/~dima/PAPERS/2010q2.pdf http://mizugadro.mydns.jp/PAPERS/2010q2.pdf D.Kouznetsov, H.Trappmann. Portrait of the four regular super-exponentials to base sqrt(2). Mathematics of Computation, 2010, v.79, p.1727-1756. http://www.springerlink.com/content/qt31671237421111/fulltext.pdf?page=1 http://mizugadro.mydns.jp/PAPERS/2010superfae.pdf D.Kouznetsov, H.Trappmann. Superfunctions and square root of factorial. Moscow University Physics Bulletin, 2010, v.65, No.1, p.6-12. http://www.ils.uec.ac.jp/~dima/PAPERS/2010vladie.pdf http://mizugadro.mydns.jp/PAPERS/2010vladie.pdf D.Kouznetsov. Tetration as special function. Vladikavkaz Mathematical Journal, 2010, v.12, issue 2, p.31-45. http://www.springerlink.com/content/qt31671237421111/fulltext.pdf?page=1 D.Kouznetsov, H.Trappmann. Superfunctions and square root of factorial. 
http://mizugadro.mydns.jp/t/index.php/Place_of_science_in_the_human_knowledge D.Kouznetsov. Place of science and physics in human knowledge. English translation from the Russian original at http://ufn.ru/tribune/trib120111 Physics-Uspekhi, v.191, Tribune, p.1-9 (2010).
http://www.ils.uec.ac.jp/~dima/PAPERS/2010logistie.pdf http://mizugadro.mydns.jp/PAPERS/2010logistie.pdf D.Kouznetsov. Continual generalisation of the Logistic sequence. Moscow State University Physics Bulletin, 3 (2010) No.2, p.23-30.
http://www.ams.org/journals/mcom/0000-000-00/S0025-5718-2012-02590-7/S0025-5718-2012-02590-7.pdf http://www.ils.uec.ac.jp/~dima/PAPERS/2012e1eMcom2590.pdf http://mizugadro.mydns.jp/PAPERS/2012e1eMcom2590.pdf H.Trappmann, D.Kouznetsov. Computation of the Two Regular Super-Exponentials to base exp(1/e). Mathematics of Computation, 2012, 81, February 8. p.2207-2227.
http://www.ils.uec.ac.jp/~dima/PAPERS/2012or.pdf http://mizugadro.mydns.jp/PAPERS/2012or.pdf Dmitrii Kouznetsov. Superfunctions for optical amplifiers. Optical Review, July 2013, Volume 20, Issue 4, pp 321-326.
http://www.scirp.org/journal/PaperInformation.aspx?PaperID=36560 http://mizugadro.mydns.jp/PAPERS/2013jmp.pdf D.Kouznetsov. TORI axioms and the applications in physics. Journal of Modern Physics, 2013, v.4, p.1151-1156.
http://www.ingentaconnect.com/content/asp/asl/2013/00000019/00000003/art00071 http://mizugadro.mydns.jp/PAPERS/2012thaiSuper.pdf D.Kouznetsov. Recovery of Properties of a Material from Transfer Function of a Bulk Sample (Theory). Advanced Science Letters, Volume 19, Number 3, March 2013, pp. 1035-1038(4).
http://link.springer.com/article/10.1007/s10043-013-0058-6 D.Kouznetsov. Superfunctions for amplifiers. Optical Review, July 2013, Volume 20, Issue 4, pp 321-326.
http://www.m-hikari.com/ams/ams-2013/ams-129-132-2013/kouznetsovAMS129-132-2013.pdf http://mizugadro.mydns.jp/PAPERS/2013hikari.pdf D.Kouznetsov.
Entire function with logarithmic asymptotic. Applied Mathematical Sciences, 2013, v.7, No.131, p.6527-6541.

Keywords: Abel function, Book, Doya function, Iteration, Keller function, Maple and tea, LambertW, Shoka function, Superfunction, SuperFactorial, SuSin, SuTra, SuZex, Tania function, Tetration, Trappmann function, Zex function
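The closed-form definitions in the table at the top of this page can be checked numerically. Below is a sketch (assuming real arguments in the domain where the logarithms are defined); ArcShoka is derived here as the functional inverse of Shoka, using Shoka(z) = ln(1 + (e-1)e^z), and the code verifies the shift identity Keller(Shoka(z)) = Shoka(z+1) implied by the definition of Keller:

```python
import math

def shoka(z):
    # Shoka(z) = z + ln(exp(-z) + e - 1) = ln(1 + (e-1) exp(z))
    return z + math.log(math.exp(-z) + math.e - 1)

def arc_shoka(y):
    # inverse of shoka, valid for y > 0: exp(y) = 1 + (e-1) exp(z)
    return math.log((math.exp(y) - 1.0) / (math.e - 1.0))

def keller(z):
    # Keller(z) = Shoka(1 + ArcShoka(z))
    return shoka(1.0 + arc_shoka(z))

z = 0.3
# round trip of the inverse, and Keller(Shoka(z)) = Shoka(z + 1)
assert abs(arc_shoka(shoka(z)) - z) < 1e-10
assert abs(keller(shoka(z)) - shoka(z + 1.0)) < 1e-10
```

The last assertion is exactly the superfunction/transfer-function relation: applying Keller once advances the argument of Shoka by one.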
With a stationary loop of wire and a field $\vec B$ varying in time it doesn't seem that you'd be able to exploit the Lorentz force, because it's difficult to introduce the velocity $v$. But another configuration may be more helpful, see picture. A metallic rod (orange) crosses the magnetic lines (light-blue) with a constant velocity $v$ (blue) in the shown direction. It indeed moves the electrons in the rod as your formula $(1)$ says. The rod glides on two metallic tracks (light-gray) which are connected through another rod (light-gray) to form a closed circuit. Although the field $B$ is constant, the magnetic flux $\Phi$ varies in time, because $$\Phi_B = \int_{\scriptstyle_S} \mathbf{B} \cdot \text d\mathbf S \tag{i}$$ In our case the surface $S$ is rectangular, so we can write the flux as $\Phi_B = B S = B L D$, where $D$ is the length of the rod and $L$ the length of the metallic tracks on which the rod glides. For the electromotive force we are interested in the time derivative of the flux $$\mathcal E = -\frac {\text d\Phi_B}{\text d t} = v B D, \tag{ii}$$ because $\text dL/\text dt = -v$. Now, I believe that my formula $\text {(ii)}$ and your $(2)$ begin to resemble one another. What is missing in my $\text {(ii)}$ is the charge $q$, and what has to appear in your $(2)$ is the rod length $D$, which probably is connected with the element of distance $\text d \vec {\ell}$. Also, since in our case the force $\vec F$ in your $(1)$ is along the rod, and the rod velocity $\vec v$ is perpendicular to $B$, the three vectors $\vec F$, $\vec v$ and $\text d \vec {\ell}$ are mutually perpendicular, so the fact that no vector product appears in my $\text {(ii)}$ is not a concern. At this point I leave the issue to you. I suggest you think about the relationship between work $W$ and potential difference $\mathcal E$, and about the connection between the integration over the loop in your $(2)$ and the quantity $D$ in my $\text {(ii)}$.
Hint: in the grey bars no electromotive force is produced.
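The sign and magnitude of formula (ii) can be checked by differentiating the flux numerically. A quick sketch with illustrative values (all numbers below are assumed, not from the problem):

```python
# numerical check that emf = -dPhi/dt equals v*B*D for the sliding rod
B, D, v, L0 = 0.5, 0.2, 3.0, 1.0   # assumed values: field (T), rod length (m), speed (m/s), initial track length (m)
dt = 1e-6

def flux(t):
    # Phi_B = B * D * L(t); the enclosed area shrinks as the rod moves, L(t) = L0 - v*t
    return B * D * (L0 - v * t)

emf = -(flux(dt) - flux(0.0)) / dt  # finite-difference approximation of -dPhi/dt
assert abs(emf - v * B * D) < 1e-6  # matches formula (ii)
```

Since the flux is linear in time here, the finite difference reproduces $vBD$ essentially exactly, confirming the sign bookkeeping $\mathrm dL/\mathrm dt = -v$.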
I haven't actually done the derivation, but the approach you would take is to write a ray-transfer matrix for the whole system, including the object distance $s_1$ and the image distance $s_2$: $$\begin{bmatrix}x_f \\ \theta_f\end{bmatrix} =\begin{bmatrix}1 & s_2 \\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 \\ -1/f_2 & 1\end{bmatrix}\begin{bmatrix}1 & d \\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 \\ -1/f_1 & 1\end{bmatrix}\begin{bmatrix}1 & s_1 \\ 0 & 1\end{bmatrix}\begin{bmatrix}x_i \\ \theta_i\end{bmatrix}$$ Then you do the tedious matrix multiplication so that you get coefficients $A,B,C,D$ for your system in terms of $d,f_1,f_2,s_1,s_2$: $$\begin{bmatrix}x_f \\ \theta_f\end{bmatrix} =\begin{bmatrix}A & B \\ C & D\end{bmatrix}\begin{bmatrix}x_i \\ \theta_i\end{bmatrix}$$ When an image is formed, all the rays starting from position $x_i$ end up at $x_f$ regardless of their initial angle $\theta_i$. So in the equation $x_f = Ax_i + B\theta_i$, you can set $B=0$ and from there derive an expression relating $\frac{1}{s_1} + \frac{1}{s_2}$ to the focal length of the whole system. This expression, which is hopefully the same as what your book says, will certainly depend on $f_1$, $f_2$, and $d$. If you plot it against each parameter while keeping the other two constant, you can see how the behaviour changes when, e.g., one lens is negative and the other positive, or the separation is greater or smaller than a focal length.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 To reduce the data points needed for calculating the time correlation, you can run two copies of the simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
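Returning to the three numbered scipy.signal.correlate questions above, here is a sketch with synthetic stand-in data (assumed, not the actual strain/temperature series). In "full" mode the output has N1 + N2 - 1 samples, which is why the result is roughly twice the input length; subtracting the mean first avoids offset-driven sign surprises; and the lag is read off from the lags array at the argmax (`correlation_lags` requires scipy >= 1.6):

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)                               # stand-in for shear strain
true_lag = 7
y = np.concatenate([np.zeros(true_lag), x[:-true_lag]])  # x delayed by 7 steps

# subtract means: constant offsets can dominate the raw correlation and flip its sign
xc = x - x.mean()
yc = y - y.mean()

corr = correlate(yc, xc, mode="full")
assert corr.size == 2 * n - 1        # "full" mode returns N1 + N2 - 1 samples

lags = correlation_lags(yc.size, xc.size, mode="full")
est_lag = lags[np.argmax(corr)]      # positive lag: the first argument trails the second
```

Here `est_lag` recovers the imposed 7-step delay; on real data the peak location gives the lead/lag in samples, which multiplied by the 1e-9 s spacing gives it in seconds.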
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
Purpose: In this tutorial, you will learn how to perform a series of phonon calculations in order to obtain the mode Grüneisen parameters and the thermal-expansion coefficient of diamond.

0. Prerequisites

This tutorial assumes that you have already set the relevant environment variables. Otherwise, please have a look at How to set environment variables for tutorials scripts. Furthermore, we also assume that you are familiar with the concepts and the calculations presented in Phonons at Γ in Diamond-Structure Crystals and Phonons at X in Diamond-Structure Crystals. The only script relevant for this tutorial is THERMAL-cubic.py: a Python script for calculating the thermal-expansion coefficient for cubic systems.

Note: The symbol $ at the beginning of lines in code segments below indicates the shell prompt.

In the following we use the conversion factor 1 Ha $\approx$ 2.194746 x 10^5 cm^-1.

1. Theoretical background

Harmonic phonon frequencies do not depend on the crystal volume. As a consequence of this fact, there is no effect of temperature on the equilibrium volume of a perfect harmonic crystal. However, real crystals do change their volume as their temperature is varied. In what follows, we introduce the physical quantities related to the phenomenon of thermal expansion.

1.1) Mode Grüneisen parameters

Phonon frequencies of a real crystal explicitly depend on the crystal volume (1). This dependence can be characterized by the mode Grüneisen parameters, $\gamma_j({\bf q})$. The Grüneisen parameter of the mode $j$ at a wavevector ${\bf q}$ is defined as (2), where $V_0$ is the equilibrium volume. For cubic systems and in terms of the lattice parameter $a$ one can write (3), where again $a_0$ is the equilibrium parameter.

1.2) Thermal expansion coefficient

The thermal expansion of a crystal can be quantified by the linear thermal-expansion coefficient $\alpha(T)$.
Within the quasi-harmonic approximation, one can write (4), where $B_0$ is the bulk modulus and (5) is the derivative of the vibrational free energy $F_{\rm vib}(V,T)$ with respect to the volume.

2. Calculations

The calculation of the thermal-expansion coefficient and of the mode Grüneisen parameter at some high-symmetry point in the first Brillouin zone of the diamond lattice requires the evaluation of derivatives with respect to the volume or, equivalently, to the lattice constant, as shown in Eqs. (2-5). In order to calculate these derivatives in a finite-difference approach, phonon calculations have to be performed at three different volumes (lattice constants). Therefore, the first step is to create three new working directories, e.g., diamond-phonons-minus, diamond-phonons-zero, and diamond-phonons-plus, which will correspond to the calculations performed at lattice parameters $a_0-\Delta$, $a_0$, and $a_0+\Delta$, respectively.

$ mkdir grueneisen
$ cd grueneisen
$ mkdir diamond-phonons-minus diamond-phonons-zero diamond-phonons-plus

Here, $a_0$ is the equilibrium lattice parameter and $\Delta$ a "small" positive number which in this tutorial is set to 0.05 Bohr. If you have already performed the calculations presented in Phonons at Γ in Diamond-Structure Crystals and Phonons at X in Diamond-Structure Crystals, you can reduce the computational effort by copying the results you have obtained into the "equilibrium" directory diamond-phonons-zero. If this is not the case, create an input file for the phonon calculation as shown in those tutorials inside each of the three directories. Change the lattice parameter in the input file inside diamond-phonons-minus to $a_0-0.05$. Similarly, change the lattice constant in the input file inside diamond-phonons-plus to $a_0+0.05$.

2.1) Mode Grüneisen parameters

Now, repeat the calculation of the phonon frequencies of diamond at Γ and X, as mentioned in Section 1.4 of Phonon properties of diamond-structure crystals, inside each new directory. When all executions are completed, compute the mode Grüneisen parameters using Eq. (3), evaluating the numerical derivative from a quadratic interpolation (6). The following values should be obtained (frequencies are given in cm^-1). Here, $\gamma_{\rm conv}$ is the converged result, obtained using an 8 $\times$ 8 $\times$ 8 k-point mesh, rgkmax = "7.0", and a grid of 4 $\times$ 4 $\times$ 4 q-points.

2.2) Thermal expansion coefficient

The derivative in Eq. (5) is approximated numerically by a quadratic interpolation (7). In order to calculate $\alpha(T)$, you can use the following procedure. Obtain the phonon DOS and thermodynamical properties for $a_0 - \Delta$, $a_0$, and $a_0 + \Delta$ using the same procedure of Sections 2.1 and 2.2 of Phonon properties of diamond-structure crystals. Create the new directory thermal-expansion. Copy the files input.xml and thermo.xml from the directory diamond-phonon-zero to the directory thermal-expansion, adding the suffix .0. Thus, you will have inside thermal-expansion the two new files input.xml.0 and thermo.xml.0. Repeat the previous step with the directory diamond-phonon-plus and the suffix .+. Repeat the previous step with the directory diamond-phonon-minus and the suffix .-. Now, inside the directory thermal-expansion you have the following files: input.xml.- thermo.xml.- input.xml.0 thermo.xml.0 input.xml.+ thermo.xml.+

The linear thermal-expansion coefficient can be obtained by running the script THERMAL-cubic.py inside the directory thermal-expansion.

$ THERMAL-cubic.py 0 1600 40
------------------------------------------------------------------------
Linear thermal-expansion coefficient for cubic systems
------------------------------------------------------------------------
Enter value for the bulk modulus [GPa] >>>> 452

The three command-line arguments are the minimum temperature (0), the maximum temperature, and the number of temperature steps between the minimum and the maximum.
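The finite-difference evaluation of Section 2.1 can be scripted directly. A minimal sketch, assuming the standard cubic-system form of the mode Grüneisen parameter, $\gamma_j({\bf q}) = -(a_0/3\omega)\,\partial\omega/\partial a$ (the tutorial's equations are referenced but not reproduced in this text), with the derivative taken as the slope at $a_0$ of the quadratic through the three points; the input numbers are placeholders, not the tutorial's results:

```python
def mode_gruneisen(a0, delta, w_minus, w_zero, w_plus):
    """gamma_j(q) = -(a0 / (3 w0)) * dw/da for a cubic crystal.

    The derivative is the slope at a0 of the quadratic polynomial through
    (a0-delta, w_minus), (a0, w_zero), (a0+delta, w_plus), which for
    equally spaced points reduces to the central difference.
    """
    dw_da = (w_plus - w_minus) / (2.0 * delta)
    return -(a0 / (3.0 * w_zero)) * dw_da

# placeholder frequencies in cm^-1 at a0-Delta, a0, a0+Delta (Delta = 0.05 Bohr)
gamma = mode_gruneisen(a0=6.75, delta=0.05,
                       w_minus=1350.0, w_zero=1332.0, w_plus=1314.0)
assert gamma > 0.0   # frequency decreases with increasing lattice constant
```

Applying this function to the frequencies computed in the three working directories gives one Grüneisen parameter per mode and wavevector.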
The script also requires the value of the bulk modulus; in our example the experimental value for diamond has been used (452 GPa). Alternatively, the bulk modulus can be obtained as described in Volume optimization for cubic systems. The result can be visualized by viewing the PostScript output file PLOT.ps. A converged result (using an 8 $\times$ 8 $\times$ 8 k-grid, rgkmax = "7.0", and 4 q-points in each direction) can be found below.

Exercise: Silicon in diamond structure

The calculation presented here for diamond can be repeated for silicon. Use the PBE exchange-correlation functional, an equilibrium lattice parameter of 10.34 Bohr, and low values for both k- and q-point grids to accelerate the runs, as above. The experimental bulk modulus is 98 GPa (Landolt-Börnstein tables). What are the differences to diamond? Use the theoretical bulk modulus. Calculate theoretical values of equilibrium parameters as in Volume Optimization for Cubic Systems. Compare your results with the "converged" one displayed here.
Help:Editing

Contents

General

To edit a MediaWiki page, click on the "Edit this page" (or just "edit") link at one of its edges. This will bring you to a page with a text box containing the wikitext: the editable source code from which the server produces the webpage. For the special codes, see below.

After adding to or changing the wikitext it is useful to press "Preview", which produces the corresponding webpage in your browser but does not make it publicly available yet (not until you press "Save"). Errors in formatting, links, tables, etc., are often much easier to discover from the rendered page than from the raw wikitext. If you are not satisfied you can make more changes and preview the page as many times as necessary. Then write a short edit summary in the small text field below the edit-box and when finished press "Save". Depending on your system, pressing the "Enter" key while the edit box is not active (i.e., there is no typing cursor in it) may have the same effect as pressing "Save".

You may find it more convenient to copy and paste the text first into your favorite text editor, edit and spell check it there, and then paste it back into your web browser to preview. This way, you can also keep a local backup copy of the pages you have edited. It also allows you to make changes offline, but before you submit your changes, please make sure nobody else has edited the page since you saved your local copy (by checking the page history), otherwise you may accidentally revert someone else's edits. If someone has edited it since you copied the page, you'll have to merge their edits into your new version (you can find their specific edits by using the "diff" feature of the page history). These issues are handled automatically by the MediaWiki software if you edit the page online, retrieving and submitting the wikicode in the same text box. See also MediaWiki architecture.
Dummy edit If the wikitext is not changed no edit will be recorded and the edit summary is discarded. A dummy edit is a change in wikitext that has no effect on the rendered page, such as changing the number of newlines at some position from 0 to 1 or from 2 to 3 or conversely (changing from 1 to 2 makes a difference, see below). This allows an edit summary, and is useful for correcting a previous edit summary, or an accidental marking of a previous edit as "minor" (see below). Also it is sometimes needed to refresh the cache of some item in the database, see e.g. A category tag in a template; caching problem. Minor edits When editing a page, a logged-in user has the option of flagging the edit as a "minor edit". This feature is important, because users can choose to hide minor edits in their view of the Recent Changes page, to keep the volume of edits down to a manageable level. When to use this is somewhat a matter of personal preference. The rule of thumb is that an edit of a page that consists of spelling corrections, formatting, and minor rearranging of text should be flagged as a "minor edit". A major edit is basically something that makes the entry worth revisiting for somebody who wants to watch the article rather closely. So any "real" change, even if it is a single word, should be flagged as a "major edit". The reason for not allowing a user who is not logged in to mark an edit as minor is that vandalism could then be marked as a minor edit, in which case it would stay unnoticed longer. This limitation is another reason to log in. The wiki markup In the left column of the table below, you can see what effects are possible. In the right column, you can see how those effects were achieved. In other words, to make text look like it looks in the left column, type it in the format you see in the right column. You may want to keep this page open in a separate browser window for reference. 
If you want to try out things without danger of doing any harm, you can do so in the Sandbox.

Sections, paragraphs, lists and lines

Start your sections with header lines:

== New section ==
=== Subsection ===
==== Sub-subsection ====

A single newline has no effect on the layout. But an empty line starts a new paragraph. (<p> disables this paragraphing until </p> or the end of the section; in Cologne Blue two newlines and a div tag give just one newline; in the order newline, div tag, newline, the result is two newlines.)

You can break lines<br> without starting a new paragraph. Sufficient as wikitext code is <br>; the XHTML code <br /> is not needed, the system produces this code.

* Lists are easy to do:
** start every line with a star
*** more stars means deeper levels

A newline in a list marks the end of the list. Of course you can start again.

# Numbered lists are also good
## very organized
## easy to follow

A newline in a list marks the end of the list. New numbering starts with 1.

* You can even do mixed lists
*# and nest them
*#* like this<br>or have newlines<br>inside lists

; Definition list : list of definitions
; item : the item's definition

: A colon indents a line or paragraph. A manual newline starts a new paragraph.

IF a line of plain text starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF. This is useful for: pasting preformatted text; algorithm descriptions; program source code; ASCII art; chemical structures. WARNING: If you make it wide, you force the whole page to be wide and hence less readable. Never start ordinary lines with spaces.
<center>Centered text.</center>

A horizontal dividing line, mainly useful for separating threads on Talk pages:

above
----
and below

Summarizing the effect of a single newline: no effect in general, but it ends a list item or indented part; thus changing some text into a list item, or indenting it, is more cumbersome if it contains newlines, which then have to be removed; see also w:Wikipedia:Don't use line breaks.

Links, URLs

Sue is reading the video policy. Thus the link is to http://meta.wikipedia.org/wiki/Video_policy, which is the page with the name "Video policy". Typed as: Sue is reading the [[video policy]].

Link to a section on a page, e.g. List_of_cities_by_country#Morocco; when section editing does not work the link is treated as a link to the page, i.e. to the top. Typed as: [[List_of_cities_by_country#Morocco]].

Link target and link label are different: answers (this is called a piped link). Same target, different name: [[User:Larry Sanger|answers]].

Endings are blended into the link: official positions, genes. Typed as: [[official position]]s, [[gene]]s.

Automatically hide stuff in parentheses: kingdom. Typed as: [[kingdom (biology)|]]. The server fills in the part after the | when you save the page. Next time you open the edit box you will see the expanded piped link. A preview interprets the abbreviated form correctly, but does not expand it yet in the edit box. Press Save and again Edit, and you will see the expanded version. The same applies for the following feature.
Automatically hide namespace: Village pump. Typed as: [[Wikipedia:Village pump|]].

When adding a comment to a Talk page, you should sign it. You can do this by adding three tildes for your user name (~~~) or four for user name plus date/time (~~~~).

The weather in London is a page that doesn't exist yet. Typed as: [[The weather in London]].

Redirect one article title to another by putting text like this in its first line: #REDIRECT [[United States]]

Interlanguage links: [[fr:Wikipédia:Aide]], [[:fr:Wikipédia:Aide]]

"What links here" and "Related changes" can be linked as: [[Special:Whatlinkshere/Wikipedia:How to edit a page]] and [[Special:Recentchangeslinked/Wikipedia:How to edit a page]]

External links: Nupedia, [1]. Typed as: [http://www.nupedia.com Nupedia], [http://www.nupedia.com]. Or just give the URL: http://www.nupedia.com.

ISBN 0123456789X RFC 123

To include links to non-image uploads such as sounds, use a "media" link: [[media:Sg_mrob.ogg|Sound]]

Use links for dates, so everyone can set their own display order. Use Special:Preferences to change your own date display setting. [[July 20]], [[1969]], [[20 July]] [[1969]] and [[1969]]-[[07-20]] will all appear as 20 July 1969 if you set your date display preference to 1 January 2001.

Images

A picture: [[Image:Wiki.png]] or, with alternate text: [[Image:Wiki.png|Wikipedia - The Free Encyclopedia]]

Web browsers render alternate text when not displaying an image -- for example, when the image isn't loaded, or in a text-only browser, or when spoken aloud. See Alternate text for images for help on choosing alternate text. See Extended image syntax for more options.
Clicking on an uploaded image displays a description page, which you can also link directly to: [[:Image:Wiki.png]]

To include links to images shown as links instead of drawn on the page, use a "media" link: [[media:Tornado.jpg|Image of a Tornado]]

HTML Tables

Character formatting

''Emphasize'', '''strongly''', '''''very strongly'''''. You can also write <i>italic</i> and <b>bold</b> if the desired effect is a specific font style rather than emphasis, as in mathematical formulas: <b>F</b> = <i>m</i><b>a</b>

A typewriter font for <tt>technical terms</tt>. You can use <small>small text</small> for captions. You can <strike>strike out deleted material</strike> and <u>underline new material</u>.

Special characters: À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ « » § ¶ † ‡ • — £ ¤ ™ © ® ¢ € ¥

Subscript: x<sub>2</sub> Superscript: x<sup>2</sup> or x², or in projects with the templates sub and sup: x{{sub|2}} and x{{sup|2}}. For example: ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]]

Greek characters: α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω

Math symbols: ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ → × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇔ → ↔

<i>x</i><sup>2</sup> ≥ 0 true.
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> arrow → arrow → ''italics'' [[link]] arrow → ''italics'' [[link]] <nowiki>arrow → ''italics'' [[link]]</nowiki> arrow → ''italics'' [[link]] <pre>arrow → ''italics'' [[link]]</pre> arrow → ''italics'' [[link]] arrow → ''italics'' [[link]] <tt>arrow →</tt> <tt>''italics''</tt> <tt>[[link]]</tt> &rarr; <!-- comment here --> Templates Some part of a page may correspond in the edit box to just a reference to another page, in the form {{ name}}, referring to the page "Template: name" (or if the name starts with a namespace prefix, it refers to the page with that name; if it starts with a colon it refers to the page in the main namespace with that name without the colon). This is called a template. For changing that part of the page, edit that other page. Sometimes a separate edit link is provided for this purpose. A convenient way to put such a link in a template is with a template like m:Template:ed. Note that the change also affects other pages which use the same template. Page protection In a few cases the link labeled "MediaWiki:Editthispage" is replaced by the text "Protected page" (or equivalents in the language of the project). In that case the page can not be edited. Position-independent wikitext Wikitext for which the result does not depend on the position in the wikitext page: interlanguage links (see also above) - the mutual order is preserved, but otherwise the positions within the page are immaterial category specification - ditto __NOTOC__, __FORCETOC__, see Help:Section Separating edits When moving or copying a piece of text within a page or from another page, and also making other edits, it is useful to separate these edits. This way the diff function can be usefully applied for checking these other edits.
If you have a product of groups of operators, with each group defined at the same spacetime point (as you might find in, e.g., interaction terms in a Lagrangian consisting of local operator products, for maintenance of locality), then one can derive that $$T((AB \dots)_{x_1} \dots (PQ \dots)_{x_n})_{\text{No E.T.C.}} = T(N(AB \dots)_{x_1} \dots N(PQ \dots)_{x_n})$$ where "No E.T.C." indicates that no equal-time contractions are taken. There is a nice argument for this in Mandl and Shaw where, within the $r^{\text{th}}$ group on the $\text{r.h.s}$, the operators have their time location shifted by $\pm \epsilon$ depending on whether they are creators or annihilators. As the groups are already in normal order, with this shift they become time ordered, so a naive Wick expansion can be applied without considering contractions within a group (because there $T(AB \dots)_{x_{r}\pm \epsilon} \equiv N(AB \dots)_{x_{r}\pm \epsilon}$). The limit of such an expansion as $\epsilon \rightarrow 0$ gives the result on the $\text{l.h.s}$. Why would one do this? Defining the $S$-matrix as the time-ordered product of an exponentiated normal-ordered interaction Hamiltonian removes tadpole contributions from the game completely. These are formally infinite and would otherwise have to be removed with, e.g., renormalisation counterterms in the path-integral formalism, but in this Wick-ordered operator formalism they are conveniently set to zero from the outset. Ref: 'Quantum Field Theory', Mandl & Shaw, 2nd edition, ch. 6, p. 97
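As a minimal illustration of the $\epsilon$-shift (my own sketch of the argument for a single bosonic mode, not a quotation from the book), consider a single normal-ordered group $N(a^\dagger a)$ at time $t$, and shift the creation operator later and the annihilation operator earlier:

```latex
N(a^\dagger a)_t \;=\; a^\dagger(t+\epsilon)\, a(t-\epsilon)
\;=\; T\bigl(a^\dagger(t+\epsilon)\, a(t-\epsilon)\bigr),
\qquad \epsilon > 0,
```

since time ordering places the later operator on the left, which here coincides with the normal ordering. A Wick expansion of the right-hand side therefore contains no contraction within the group; the contraction that has been excluded is precisely the equal-time propagator, i.e. the tadpole. Letting $\epsilon \rightarrow 0$ then recovers $T(N(a^\dagger a)_t)$ with equal-time contractions omitted, as on the l.h.s above.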
We present a two-dimensional model describing the elastic behaviour of the wall of a curved pipe, to model blood vessels in particular. The wall has a laminate structure consisting of several anisotropic layers of varying thickness and is assumed to be much smaller in thickness than the radius of the vessel, which itself is allowed to vary. Our two-dimensional model takes the interaction of the wall with the surrounding material and the fluid flowing inside into account and is obtained via a dimension reduction procedure. The curvature and twist of the vessel axis as well as the anisotropy of the laminate wall present the main challenges in applying the dimension reduction procedure, so plenty of examples of canonical shapes of vessels and their walls are supplied with explicit systems of differential equations at the end. Let Ω be a Lipschitz domain in R^n, n ≥ 2, and L = div A∇ be a second order elliptic operator in divergence form. We establish solvability of the Dirichlet regularity problem with boundary data in H^{1,p}(∂Ω) and of the Neumann problem with L^p(∂Ω) data for the operator L on Lipschitz domains with small Lipschitz constant. We allow the coefficients of the operator L to be rough, obeying a certain Carleson condition with small norm. These results complete the results of [7], where the L^p(∂Ω) Dirichlet problem was considered under the same assumptions, and [8], where the regularity and Neumann problems were considered on two-dimensional domains. Funding agencies: Engineering and Physical Sciences Research Council [EP/J017450/1]; National Science Foundation DMS Grant [0901139]; CANPDE Funding agencies: ANR project "Harmonic analysis at its boundaries" [ANR-12-BS01-0013-01]; Swedish Research Council, VR [621-2011-3744] We prove the global L2 × L2 → L1 boundedness of bilinear oscillatory integral operators with amplitudes satisfying a Hörmander type condition and phases satisfying appropriate growth as well as the strong non-degeneracy conditions.
This is an extension of the corresponding result of R. Coifman and Y. Meyer for bilinear pseudo-differential operators, to the case of oscillatory integral operators. Funding: Crafoord Foundation; [MTM2010-14946] We establish the regularity of bilinear Fourier integral operators with bilinear amplitudes in and non-degenerate phase functions, from Lp×Lq→Lr under the assumptions that and . This is a bilinear version of the classical theorem of Seeger–Sogge–Stein concerning the Lp boundedness of linear Fourier integral operators. Moreover, our result goes beyond the aforementioned theorem in that it also includes the case of quasi-Banach target spaces. We consider two types of multilinear pseudodifferential operators. First, we prove the boundedness of multilinear pseudodifferential operators with symbols which are only measurable in the spatial variables in Lebesgue spaces. These results generalise earlier work of the present authors concerning linear pseudo-pseudodifferential operators. Secondly, we investigate the boundedness of bilinear pseudodifferential operators with symbols in the Hörmander S^m_{ρ,δ} classes. These results are new in the case ρ < 1, that is, outwith the scope of multilinear Calderón–Zygmund theory. We initiate the study of the finiteness condition ∫_Ω u(x)^{−β} dx ≤ C(Ω,β) < +∞, where Ω ⊆ R^n is an open set and u is the solution of the Saint Venant problem Δu = −1 in Ω, u = 0 on ∂Ω. The central issue which we address is that of determining the range of values of the parameter β > 0 for which the aforementioned condition holds under various hypotheses on the smoothness of Ω and demands on the nature of the constant C(Ω,β). Classes of domains for which our analysis applies include bounded piecewise C^1 domains in R^n, n ≥ 2, with conical singularities (in particular polygonal domains in the plane), polyhedra in R^3, and bounded domains which are locally of class C^2 and which have (finitely many) outwardly pointing cusps.
For example, we show that if u_N is the solution of the Saint Venant problem in the regular polygon Ω_N with N sides circumscribed by the unit disc in the plane, then for each β ∈ (0,1) the following asymptotic formula holds: $$\int_{\Omega_N}u_N(x)^{-\beta}\,dx=\frac{4^\beta\pi}{1-\beta}+\mathcal{O}(N^{\beta-1})\quad\text{as }N\to\infty.$$ One of the original motivations for addressing the aforementioned issues was the study of sublevel set estimates for functions v satisfying v(0) = 0, ∇v(0) = 0 and Δv ≥ c > 0.
Can anybody explain in simple terms how the critical value of the ADF test can be derived using Monte Carlo simulation? The ADF test assumes the DGP $$ \Delta y_t = \alpha +\beta t +\gamma y_{t-1} +\delta_1 \Delta y_{t-1}+\cdots +\delta_k \Delta y_{t-k}+\epsilon_t $$ The parameters are estimated using OLS on a sample of length $T$. You might impose $\alpha=0$ and/or $\beta=0$; this will give you different null hypotheses to test. But your test is always $\gamma=0$, and the statistic you use is the t-statistic from the regression, $t=\hat{\gamma}/\hat{\sigma}_\gamma$. To perform the test you compare this value to the critical value, which depends on the sample size $T$, on whether the DGP assumes $\alpha$ and/or $\beta$ are zero, and on the number of lags $k$. Essentially you want to assess the probability that you observed the estimated value $\hat{\gamma}$ due to the randomness of the sample (i.e. generated by the noise $\epsilon_t$) although the true value that generated the data was $\gamma=0$ (i.e. the sampling distribution under the null). To produce the sampling distribution using MC you follow these steps:

1. Estimate all parameters by OLS using the data you have, and compute the t-statistic $t$.
2. Fix all estimated parameters except $\gamma$, which you set to zero (i.e. the parameters under the null).
3. Generate Gaussian random numbers $\epsilon_t$, and using the parameters under the null generate random sample paths $y_t$ of the same length as the original data, i.e. $T$.
4. Using this sample, re-estimate $\gamma$ and then the t-statistic by OLS; this is a random draw from the sampling distribution under the null, say $t_1$.
5. Repeat steps (3) and (4) $M$ times, say 10,000 times, to produce a set $t_1,\cdots,t_M$.

Percentiles of this distribution give the critical values
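The recipe above can be sketched in a few lines of Python. This is a minimal sketch assuming the simplest variant (constant included, no trend, no lagged differences, $k=0$), so the simulated null path is a pure random walk:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on gamma in the OLS regression
    dy_t = alpha + gamma * y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)             # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

def mc_critical_values(T=100, M=10_000, seed=0):
    """Steps (3)-(5): simulate null paths (random walks), collect
    the t-statistics, and read off percentiles as critical values."""
    rng = np.random.default_rng(seed)
    stats = [df_tstat(np.cumsum(rng.standard_normal(T))) for _ in range(M)]
    return np.percentile(stats, [1, 5, 10])

# The 5% value should land near the tabulated Dickey-Fuller -2.89 for T=100.
print(mc_critical_values())
```

Note that the resulting percentiles are far to the left of the usual Gaussian/Student quantiles, which is exactly why the ADF test needs its own tables: under the unit-root null the t-statistic does not have a standard distribution.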
The institute where I'm studying consists of books, problems and exams which seem to me completely senseless. I have been a self-study guy (actually this was a compulsion, not a desire), so I read some well-renowned books like The Feynman Lectures on Physics, Spivak's Calculus, Morrison and Boyd's Organic Chemistry and others. I took video lectures too from the internet, like those of Sir Herbert Gross, Leonard Susskind and some others. This site helped me a lot and is still helping me. So, due to all this, I got indulged in Science and Maths the way our ancestors (early Mathematicians and Physicists) wanted us (I apologize if I'm committing an offense by imagining just anything, but due to etiquette I have used the plural form of first person instead of singular, that is, 'us' instead of 'me'). The institutes in which I'm right now have something which I hate, and I try to depict the reason of my hatred. Consider this problem from my book: Let $$ a_1 , a_2 , a_3, \ldots $$ be an A.P. (Arithmetic Progression). Prove that $$ \sum _{n=1} ^ {2m} (-1)^{n-1} a_n ^2 = \frac {m} {2m-1} ~ (a_1 ^2 - a_{2m} ^2) $$ Now, the problem is not that this question is hard or time consuming, but to me it seems senseless. Let me give you some more taste of this. Evaluate: $$ \sin (\pi/2) ~ \sin (\pi/2^2) ~ \sin (\pi/2^3) \cdots \sin(\pi/2^{11}) ~ \cos(\pi/2^{12})$$ My educators over there could solve these problems easily and very quickly. I'm sharing a link, please see it: Calculus. But I know and I believe that people here are far better, more experienced and more generous than these educators. So, I must state my problems explicitly, which is quite hard to do:

1. What is the difference between these problems and the problems we find in renowned books which are written by mathematicians? Problems in renowned books are hard too (like SL Loney's, Irodov's, G.H. Hardy's) but they are quite different from these, and my mind finds it pleasant to do those problems. Is there some problem with me only? Am I creating a non-existing dichotomy? Are those questions which I have stated not senseless?
2. I know I can't leave things if I don't like them; I must adapt myself at least for some time. How should I survive? They will conduct exams with these types of questions.

I request all the senior members to please help me over here. Thank you.
Markdown help Code and Preformatted Text Indent four spaces to create an escaped <pre> <code> block: printf("%d\n", 42); /* what was the question again? */ You can also select text and press CTRL+ K to toggle indenting as code. The text will be wrapped in tags, and displayed in a monospaced font. The first four spaces will be stripped off, but all other whitespace will be preserved. Markdown and HTML are ignored within a code block: <blink> You would hate this if it weren't wrapped in a code block. </blink> Instead of using indentation, you can also create code blocks by using “code fences”, consisting of three or more backticks or tildes: ``` alert(false); ``` ~~~ alert(true); ~~~ Code Spans Use backticks to create an inline <code> span: The `$` character is just a shortcut for `window.jQuery`. (The backtick key is in the upper left corner of most keyboards.) Like code blocks, code spans will be displayed in a monospaced font. Markdown and HTML will not work within them. Note that, unlike code blocks, code spans require you to manually escape any HTML within! If your code itself contains backticks, you may have to use multiple backticks as delimiters: The name ``Tuple`2`` is a valid .NET type name. Linebreaks End a line with two spaces to add a <br/> linebreak: How do I love thee? Let me count the ways Italics and Bold *This is italicized*, and so is _this_. **This is bold**, and so is __this__. Use ***italics and bold together*** if you ___have to___. You can also select text and press CTRL+ I or CTRL+ B to toggle italics or bold respectively. Links Basic Links There are three ways to write links. Each is easier to read than the last: Here's an inline link to [Google](http://www.google.com/). Here's a reference-style link to [Google][1]. Here's a very readable link to [Yahoo!][yahoo]. 
[1]: http://www.google.com/ [yahoo]: http://www.yahoo.com/ You can also select text and press CTRL+ L to make it a link, or press CTRL+ L with no text selected to insert a link at the current position. The link definitions can appear anywhere in the document -- before or after the place where you use them. The link definition names [1] and [yahoo] can be any unique string, and are case-insensitive; [yahoo] is the same as [YAHOO]. Advanced Links Links can have a title attribute, which will show up on hover; this is helpful if the link itself is not descriptive enough to tell users where they're going. Here's a <span class="hi">[poorly-named link](http://www.google.com/ "Google")</span>. Never write "[click here][^2]". Visit [us][web]. [^2]: http://www.w3.org/QA/Tips/noClickHere (Advice against the phrase "click here") [web]: https://chemistry.stackexchange.com/ "Chemistry Stack Exchange" You can also use standard HTML hyperlink syntax. <a href="http://example.com" title="example">example</a> Bare URLs We have modified our Markdown parser to support "naked" URLs (in most but not all cases -- beware of unusual characters in your URLs); they will be converted to links automatically: I often visit http://example.com. Force URLs by enclosing them in angle brackets: Have you seen <https://example.com>? URLs can be relative or full. Headers Underline text to make the two top-level headers, <h1> and <h2>: Header 1 ======== Header 2 -------- You can also select text and press CTRL+ H to step through the different heading styles. The number of = or - signs doesn't matter; one will work. But using enough to underline the text makes your titles look better in plain text. Use hash marks for several levels of headers: # Header 1 # ## Header 2 ## ### Header 3 ### The closing # characters are optional.
Horizontal Rules Insert a horizontal rule <hr/> by putting three or more hyphens, asterisks, or underscores on a line by themselves: --- Rule #1 --- Rule #2 ******* Rule #3 ___ Using spaces between the characters also works: Rule #4 - - - - You can also press CTRL+ R to insert a horizontal rule. Simple lists A bulleted <ul> list: - Use a minus sign for a bullet + Or plus sign * Or an asterisk A numbered <ol> list: 1. Numbered lists are easy 2. Markdown keeps track of the numbers for you 7. So this will be item 3. You can also select text and press CTRL+ U or CTRL+ O to toggle a bullet or numbered list respectively. A double-spaced list: - This list gets wrapped in <p> tags - So there will be extra space between items Advanced lists: Nesting To put other Markdown blocks in a list, just indent four spaces for each nesting level: 1. Lists in a list item: - Indented four spaces. * indented eight spaces. - Four spaces again. 1. Lists in a list item: - Indented four spaces. * indented eight spaces. - Four spaces again. 2. Multiple paragraphs in a list item: It's best to indent the paragraphs four spaces. You can get away with three, but it can get confusing when you nest other things. Stick to four. We indented the first line an extra space to align it with these paragraphs. In real use, we might do that to the entire list so that all items line up. This paragraph is still part of the list item, but it looks messy to humans. So it's a good idea to wrap your nested paragraphs manually, as we did with the first two. 3. Blockquotes in a list item: > Skip a line and > indent the >'s four spaces. 4. Preformatted text in a list item: Skip a line and indent eight spaces. That's four spaces for the list and four to trigger the code block. Simple blockquotes Add a > to the beginning of any line to create a blockquote. > The syntax is based on the way email programs > usually do quotations.
You don't need to hard-wrap > the paragraphs in your blockquotes, but it looks much nicer if you do. Depends how lazy you feel. You can also select text and press CTRL+ Q to toggle a blockquote. Advanced blockquotes: Nesting To put other Markdown blocks in a blockquote, just add a > followed by a space: > The > on the blank lines is optional. > Include it or don't; Markdown doesn't care. > > But your plain text looks better to > humans if you include the extra `>` > between paragraphs. Blockquotes within a blockquote: > A standard blockquote is indented > > A nested blockquote is indented more > > > > You can nest to any depth. Lists in a blockquote: > - A list in a blockquote > - With a > and space in front of it > * A sublist Preformatted text in a blockquote: > Indent five spaces total. The first > one is part of the blockquote designator. Images Images are exactly like links, but they have an exclamation point in front of them: ![Valid XHTML](http://w3.org/Icons/valid-xhtml10). You can also press CTRL+ G to insert an image. The word in square brackets is the alt text, which gets displayed if the browser can't show the image. Be sure to include meaningful alt text for screen-reading software. Just like links, images work with reference syntax and titles: This page is ![valid XHTML][checkmark]. [checkmark]: http://w3.org/Icons/valid-xhtml10 "What are you smiling at?" Note: Markdown does not currently support the shortest reference syntax for images: Here's a broken ![checkmark]. But you can use a slightly more verbose version of implicit reference names: This ![checkmark][] works. The reference name is also used as the alt text. You can also use standard HTML image syntax, which allows you to scale the width and height of the image. <img src="http://example.com/sample.png" width="100" height="100"> URLs can be relative or full.
Inline HTML If you need to do something that Markdown can't handle, use HTML. Note that we only support a very strict subset of HTML! To reboot your computer, press <kbd>ctrl</kbd>+<kbd>alt</kbd>+<kbd>del</kbd>. Markdown is smart enough not to mangle your span-level HTML: <b>Markdown works *fine* in here.</b> Block-level HTML elements have a few restrictions: They must be separated from surrounding text by blank lines. The begin and end tags of the outermost block element must not be indented. Markdown can't be used within HTML blocks. <pre> You can <em>not</em> use Markdown in here. </pre> Need More Detail? Visit the official Markdown syntax reference page. Stack Exchange additions The following sections describe some additional features for text formatting that aren't officially part of Markdown. Tags To talk about a tag on this site, like-this, use See the many questions tagged [tag:elephants] to learn more. The tag will automatically be linked to the corresponding tag page. Spoilers To hide a certain piece of text and have it only be visible when a user moves the mouse over it, use the blockquote syntax with an additional exclamation point: At the end of episode five, it turns out that >! he's actually his father. 
Syntax highlighting for code To manually specify the language of an indented code block, insert an HTML comment like this before the block: <!-- language: lang-js --> setTimeout(function () { alert("JavaScript"); }, 1000); To manually specify the language of a fenced code block, add the language to the line with the opening fence: ``` lang-js setTimeout(function () { alert("JavaScript"); }, 1000); ``` You can use either one of the supported prettify language codes, like lang-cpp or lang-sql, or you can specify a tag, and the syntax highlighting language associated with this tag will be used: <!-- language: c# --> public static bool IsAwesome { get { return true; } } To specify a syntax highlighting language to be used not only for the next, but for all following code blocks, use: <!-- language-all: lang-html --> To specify that you don't want any syntax highlighting for a code block, use: <!-- language: lang-none --> LaTeX Chemistry Stack Exchange uses MathJax to render LaTeX. You can use single dollar signs to delimit inline equations, and double dollars for blocks: The *Gamma function* satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is defined via the Euler integral $$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$ Learn more: MathJax help. Comment formatting Comments support only bold, italic, code and links; in addition, a few shorthand links are available. _italic_ and **bold** text, inline `code in backticks`, and [basic links](http://example.com). Supported shorthand links: [meta] – link to the current site's Meta; link text is the site name (e.g. "Super User Meta"). Does nothing if the site doesn't have (or already is) a Meta site. [main] – like [meta], just the other way around. [edit] – link to the edit page for the post the comment is on, i.e. /posts/{id}/edit. Link text is "edit" (capitalization is respected). [tag:tagname] and [meta-tag:tagname] – link to the given tag's page. Link text is the name of the tag.
meta-tag only works on meta sites. [help], [help/on-topic], [help/dont-ask], [help/behavior] and [meta-help] – link to frequently visited pages of the help center. Link text is "help center" (capitalization is respected). All links point to the main site. [tour] – link to the Tour page. Link text is "tour" (capitalization is respected). [so], [pt.so], [su], [sf], [metase], [a51], [se] – link to the given site. Link text is the site name. [chat] – link to the current site's chat site, the link text being "{site name} Chat". [something.se] – link to something.stackexchange.com, if that site exists. Link text is the site name. Use [ubuntu.se] for Ask Ubuntu. Replying in comments The owner of the post you're commenting on will be notified of your comment. If you are replying to someone else who has previously commented on the same post, mention their username: @peter and @PeterSmith will both notify a previous commenter named "Peter Smith". It is generally sufficient to mention only the first name of the user whose comment you are replying to, e.g. @ben or @marc. However you may need to be more specific if three people named Ben replied in earlier comments, by adding the first character of the last name, e.g. @benm or @benc. Spaces are not valid in comment reply names, so don't use @peter smith; always enter it as @peters or @petersmith. If the user you're replying to has no natural first name and last name, simply enter enough characters of the name to make it clear who you are responding to. Three is the minimum, so if you're replying to Fantastico, enter @fan, @fant, or @fantastic. You can use the same method to notify any editor of the post, or – if this is the case – the ♦ moderator who closed the question.
Proof of Proposition 1 If party \(B\) accepts the offer of party \(A\), its utility will be \(W_{B}(p_{A})+t+c_{B}\). If \(B\) rejects \(A\)’s offer, its utility will be \(W_{B}(q)+c_{B}^{\prime }\). If SIG wants \(B\) to accept \(A\) ’s proposal, it should choose \(c_{B}\) and \( c_{B}^{\prime }\) such that $$\begin{aligned} W_{B}(p_{A})+t+c_{B}\ge W_{B}(q)+c_{B}^{\prime } \end{aligned}$$ The above constraint is binding, and \(c_{B}^{\prime }\) is, in this case, optimally set to zero by SIG in order to decrease \(c_{B}\) . We also have \(c_B \ge 0\) . Hence, $$\begin{aligned} c_{B}=\max \left\{ 0,W_{B}(q)-W_{B}(p_{A})-t\right\} \end{aligned}$$ Replacing \(c_{B}\) by its value, the utility of SIG in this case becomes $$\begin{aligned} W_{S}(p_{A})-\max \left\{ 0,W_{B}(q)-W_{B}(p_{A})-t\right\} -\mathbf {I} _{p_{A}=p}c_{A} \end{aligned}$$ where the last term is an indicator function showing that SIG needs also to pay \(c_{A}\) if the final policy implemented \(p_{A}\) is the one proposed by SIG to \(A\) at the beginning of the game, namely \(p\) . Similarly, if SIG wants \(B\) to reject \(A\) ’s proposal, it chooses \(c_{B}=0\) and $$\begin{aligned} c_{B}^{\prime }=\max \left\{ 0,W_{B}(p_{A})+t-W_{B}(q)\right\} \end{aligned}$$ Replacing \(c_{B}^{\prime }\) by its value, the utility of SIG in this case becomes $$\begin{aligned} W_{S}(q)-\max \left\{ 0,W_{B}(p_{A})+t-W_{B}(q)\right\} \end{aligned}$$ Comparing SIG’s utility in the two cases, we conclude that if $$\begin{aligned} W_{S}(p_{A})+W_{B}(p_{A})+t-\mathbf {I}_{p_{A}=p}c_{A}\ge W_{S}(q)+W_{B}(q) \end{aligned}$$ (1) then SIG prefers to offer \(c_{B}=\max \left\{ 0,W_{B}(q)-W_{B}(p_{A})-t\right\} \) to party \(B\) , and party \(B\) accepts \( p_{A}\) . Hence, when party \(A\) proposes \(p_{A}\), it has to make sure that inequality (1) holds. Party \(A\) will maximize its utility \(W_{A}(p_{A})-t+\mathbf {I} _{p_{A}=p}c_{A} \) subject to (1 ). The constraint is binding. 
Replacing \(t\) by its value, the problem of party \(A\) becomes $$\begin{aligned} \max _{p_{A}}W_{A}(p_{A})+W_{S}(p_{A})+W_{B}(p_{A})-\mathbf {I}_{p_{A}=p}c_{A}+ \mathbf {I}_{p_{A}=p}c_{A}-W_{S}(q)-W_{B}(q) \end{aligned}$$ We see that the expressions that depend on \(c_{A}\) cancel each other out. The reason is that even if \(p_{A}=p\) and \(A\) receives \(c_{A}\) , it will need to pay back this amount to \(B\) in order to make sure that SIG does not block its policy proposal. In other words, SIG’s lack of commitment not to intervene in the process makes any offer at the beginning of the game redundant. Hence, the problem of party \(A\) simplifies to $$\begin{aligned} \max _{p_{A}}W_{A}(p_{A})+W_{S}(p_{A})+W_{B}(p_{A})-W_{S}(q)-W_{B}(q) \end{aligned}$$ We call the solution \(p^{*}\), which is equal to $$\begin{aligned} p^{*}=\arg \max _{p_{A}}W_{S}(p_{A})+W_{A}(p_{A})+W_{B}(p_{A}) \end{aligned}$$ We see that the surplus-maximizing policy is chosen. Replacing \(t\) in the expression of \(c_{B}\) , we find that $$\begin{aligned} c_{B}=\max \{0,W_{S}(p^{*})-W_{S}(q)-\mathbf {I}_{p_{A}=p}c_{A}\} \end{aligned}$$ If \(p_{A}\ne p^{*}\) , then \(\mathbf {I}_{p_{A}=p}c_{A}=0\) , and \( c_{B}=\max \{0,W_{S}(p^{*})-W_{S}(q)\}\) . If \(p_{A}=p^{*}\), then SIG pays in total \(c_{A}+\max \{0,W_{S}(p^{*})-W_{S}(q)-c_{A}\}\). If \(W_{S}(p^{*})-W_{S}(q)\le 0\), the best SIG can choose is \(c_{A}=c_{B}=0\). If, instead, \(W_{S}(p^{*})-W_{S}(q)>0\), SIG will not choose \(c_{A}\) larger than \(W_{S}(p^{*})-W_{S}(q)\), and the total payment will be \(c_{A}+W_{S}(p^{*})-W_{S}(q)-c_{A}=W_{S}(p^{*})-W_{S}(q)\). Hence, in all cases, the total amount paid by SIG is equal to \(\max \{0,W_{S}(p^{*})\) \(-W_{S}(q)\}\). SIG’s equilibrium payoff is given by \(W_{S}(p^{*})-\max \{0,W_{S}(p^{*})-W_{S}(q)\}=\min \{W_{S}(p^{*}),W_{S}(q)\}\). \(A\)’s equilibrium payoff is \(W_{A}(p^{*})+W_{S}(p^{*})+W_{B}(p^{*})-W_{S}(q)-W_{B}(q)\).
\(B\) ’s equilibrium payoff is \(W_{B}(p^{*})+t+c_{B}\) where $$\begin{aligned} t+c_{B}&= W_{S}(q)+W_{B}(q)+\mathbf {I}_{p_{A}=p}c_{A}-W_{S}(p^{*})-W_{B}(p^{*})\\&\quad +\max \{0,W_{S}(p^{*})-W_{S}(q)-\mathbf {I}_{p_{A}=p}c_{A}\} \\&= W_{S}(q)+W_{B}(q)-W_{S}(p^{*})-W_{B}(p^{*})+\max \{0,W_{S}(p^{*})-W_{S}(q)\} \\&= W_{B}(q)-W_{B}(p^{*})+\max \{0,W_{S}(q)-W_{S}(p^{*})\} \end{aligned}$$ where the second equality follows from the above discussion about SIG’s total payment. Hence, \(B\) ’s equilibrium payoff is given by \(W_{B}(q)+\max \{0,W_{S}(q)-W_{S}(p^{*})\}\) . \(\square \) Proof of Proposition 2 In the second period, we know from Proposition 1 that final policy is \( p^{*}\) with probability \(a\), and \(p_{2}\) with probability \(1-a\). In the first period, \(B\) accepts \(p_{A}\) if and only if 7 $$\begin{aligned}&W_{B}(p_{A})\!+\!t\!+\!c_{B}\!+\!\delta [a(W_{B}(p_{A})\!+\!\max \{0,W_{S}(p_{A})\!-\!W_{S}(p^{*})\})\!+\!(1-a)W_{B}(p_{2})]\\&\quad \ge W_{B}(q)+c_{B}^{\prime }+\delta [a(W_{B}(q)+\max \{0,W_{S}(q)-W_{S}(p^{*})\})+(1-a)W_{B}(p_{2})] \end{aligned}$$ Hence, if SIG wants \(B\) to accept \(p_{A}\) , it chooses \(c_{B}^{\prime }=0\) and $$\begin{aligned} c_{B}&= \max \{ 0,W_{B}(q)-W_{B}(p_{A})-t +\delta a [W_B(q)\\&\quad +\max \{0,W_S (q)-W_S (p^*) \}-W_B(p_A)-\max \{0,W_S (p_A)-W_S (p^*) \}]\} \end{aligned}$$ The utility of SIG in this case is $$\begin{aligned}&W_{S}(p_{A})-c_{B}-\mathbf {I}_{p_{A}=p}c_{A} +\delta \Big [a[W_{S}(p^{*})-\max \{0,W_{S}(p^{*})-W_{S}(p_{A})\}]\nonumber \\&\quad +(1-a)[W_{S}(p_{2})-\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\}] \Big ] \end{aligned}$$ (2) where the last term is its discounted second period utility, given that the status quo in the second period is \(p_{A}\) . In the second period, SIG pays \( \max \{0,W_{S}(p^{*})-W_{S}(p_{A})\}\) if the coalition remains the same, or \(\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\}\) if the coalition changes. 
Similarly, if SIG wants \(B\) to reject \(p_{A}\) , it chooses \(c_{B}=0\) and $$\begin{aligned} c_{B}^{\prime }&= \max \{ 0,W_{B}(p_{A})-W_{B}(q)+t -\delta a [W_B(q)+\max \{0,W_S (q)-W_S (p^*) \}\\&\quad -W_B(p_A)-\max \{0,W_S (p_A)-W_S (p^*) \}]\} \end{aligned}$$ The utility of SIG in this case is $$\begin{aligned}&W_{S}(q)-c_{B}^{\prime } +\delta \Big [a[W_{S}(p^{*})-\max \{0,W_{S}(p^{*})-W_{S}(q)\}]\nonumber \\&\quad +(1-a)[W_{S}(p_{2})-\max \{0,W_{S}(p_{2})-W_{S}(q)\}]\Big ] \end{aligned}$$ (3) where the last term is its discounted second period utility, given that the status quo in the second period is \(q\) . Comparing SIG’s utility in (2 ) and (3 ) after replacing the values of \(c_{B}\) and \(c_{B}^{\prime }\) , we conclude that if $$\begin{aligned}&W_{S}(p_{A})+(1+\delta a)W_{B}(p_{A})+\delta a\max \{0,W_{S}(p_{A})-W_{S}(p^{*})\}+t-\mathbf {I}_{p_{A}=p}c_{A} \\&\qquad -\delta \Big [a\max \{0,W_{S}(p^{*})-W_{S}(p_{A})\}+(1-a)\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\}\Big ] \\&\quad \ge W_{S}(q)+(1+\delta a)W_{B}(q)+\delta a\max \{0,W_{S}(q)-W_{S}(p^{*})\} \\&\qquad -\delta \Big [a\max \{0,W_{S}(p^{*})-W_{S}(q)\}+(1-a)\max \{0,W_{S}(p_{2})-W_{S}(q)\}\Big ] \end{aligned}$$ equivalently if $$\begin{aligned}&(1+\delta a)W_{S}(p_{A})+(1+\delta a)W_{B}(p_{A})\nonumber \\&\qquad +t-\mathbf {I} _{p_{A}=p}c_{A}-\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\} \nonumber \\&\quad \ge (1+\delta a)W_{S}(q)+(1+\delta a)W_{B}(q)-\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(q)\}\quad \quad \end{aligned}$$ (4) then SIG prefers to make sure that party \(B\) accepts \(p_{A}\) . Hence, when party \(A\) proposes \(p_{A}\), it has to make sure that inequality ( 4) holds. 
Party \(A\) will maximize its utility $$\begin{aligned}&W_{A}(p_{A})-t+\mathbf {I}_{p_{A}=p}c_{A}+\delta \Big [a[W_{A}(p^{*})+W_{B}(p^{*})+W_{S}(p^{*})-W_{B}(p_{A})-W_{S}(p_{A})]\\&\quad +(1-a)W_{A}(p_{2})\Big ] \end{aligned}$$ subject to (4), where the last term is its second-period utility. The constraint is binding. Replacing \(t\) by its value, the problem of party \(A\) becomes $$\begin{aligned}&\max _{p_{A}}W_{A}(p_{A})+(1+\delta a)W_{S}(p_{A})+(1+\delta a)W_{B}(p_{A})\\&\quad -\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\} \\&\quad -(1+\delta a)W_{S}(q)-(1+\delta a)W_{B}(q)+\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(q)\} \\&\quad +\delta \Big [a[W_{A}(p^{*})+W_{B}(p^{*})+W_{S}(p^{*})-W_{B}(p_{A})-W_{S}(p_{A})]+(1-a)W_{A}(p_{2})\Big ] \end{aligned}$$ which can be written after simplifications as $$\begin{aligned} \max _{p_{A}} W_{A}(p_{A})+W_{B}(p_{A})+W_{S}(p_{A}) -\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\} \end{aligned}$$ since the remaining terms do not depend on \(p_A\).
Replacing \(t\) in the expression of \(c_{B}\), we find that $$\begin{aligned} c_{B}&= \max \{0,(1+\delta a)W_{S}(p_{A})-(1+\delta a)W_{S}(q)-\mathbf {I}_{p_{A}=p}c_{A}\\&\quad +\delta (1-a)[\max \{0,W_{S}(p_{2})-W_{S}(q)\} \\&\quad -\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\}]+\delta a[\max \{0,W_{S}(q)-W_{S}(p^{*})\}\\&\quad -\max \{0,W_{S}(p_{A})-W_{S}(p^{*})\}]\} \end{aligned}$$ After some algebra, $$\begin{aligned} c_{B}&= \max \{0,W_{S}(p_{A})-W_{S}(q)-\mathbf {I}_{p_{A}=p}c_{A}\nonumber \\&+\delta a[(W_S(p^*)-\max \{0,W_S(p^*)-W_S(p_A) \})-(W_S(p^*)\nonumber \\&-\max \{0,W_S(p^*)-W_S(q) \})] \nonumber \\&+\delta (1-a)[\max \{0,W_{S}(p_{2})-W_{S}(q)\}-\max \{0,W_{S}(p_{2})-W_{S}(p_{A})\}]\}\qquad \end{aligned}$$ (5) There are two different ranges of \(p_A\) according to which the objective function differs: (i) $$\begin{aligned} \max _{p_{A}} W_{A}(p_{A})+W_{B}(p_{A})+W_{S}(p_{A}) \end{aligned}$$ such that \(W_S(p_A) \ge W_S(p_2)\). (ii) $$\begin{aligned} \max _{p_{A}}W_{A}(p_{A})+W_{B}(p_{A})+W_{S}(p_{A})-\delta (1-a)(W_{S}(p_{2})-W_{S}(p_{A})) \end{aligned}$$ (equivalently \(\max _{p_{A}}W_{A}(p_{A})+W_{B}(p_{A})+(1+\delta (1-a))W_{S}(p_{A})\) ) such that \(W_{S}(p_{A})\le W_{S}(p_{2})\). \(A\) compares the solutions of the two preceding problems and chooses the one which maximizes its payoff. There are three possible cases according to SIG’s preferences: 1. \(W_{S}(p^{*})\ge W_{S}(p_{2})\) : In (i), the solution is \(p^{*}\). In (ii), since \(W_{S}(\overline{p})\ge W_{S}(p^{*})\), the constraint \( W_{S}(p_{A})\le W_{S}(p_{2})\) is binding. The constraint is binding possibly for two policies, \(p_{2}\) and \(\widetilde{p_{2}}\) such that \( W_{S}(p_{2})=W_{S}(\widetilde{p_{2}})\). Hence, the solution is \(p_{2}\) or \( \widetilde{p_{2}}\), depending on which one gives a higher payoff to \(A\). \(A\) prefers to choose \(p^{*}\).
Note that in (i), \(A\) chooses \(p^{*}\) when \(p_{2}\) or \(\widetilde{p_{2}}\) are also possible. From Eq. (5 ), similarly as in the proof of Proposition 1, it can be seen that the total amount paid by SIG is equal to $$\begin{aligned}&\max \{0,W_{S}(p^{*})-W_{S}(q)+\delta a \max \{0,W_S(p^*)-W_S(q) \} \\&\quad +\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(q)\}\} \end{aligned}$$ Notice that this is equal to \(0\) if and only if \(W_{S}(q)\ge W_{S}(p^{*})\ge W_{S}(p_{2})\) . 2. \(W_{S}(p_{2})>W_{S}(\overline{p})\ge W_{S}(p^{*})\) : In (i), the constraint is binding, and the solution is \(p_{2}\) or \(\widetilde{p_{2}}\) . In (ii), the solution is \(\overline{p}\) . \(A\) prefers to choose \(\overline{p}\) . Note that in (ii), \(A\) chooses \( \overline{p}\) when \(p_{2}\) or \(\widetilde{p_{2}}\) are also possible. From Eq. (5 ), the total amount paid by SIG in this case is equal to $$\begin{aligned}&\max \{0,W_{S}(\overline{p})-W_{S}(q)+\delta a \max \{0,W_S(p^*)-W_S(q) \} \\&\quad +\delta (1-a)[\max \{0,W_{S}(p_{2})-W_{S}(q)\}-(W_{S}(p_{2})-W_{S}( \overline{p}))]\} \end{aligned}$$ Notice that this is equal to \(0\) if and only if \(W_{S}(q)\ge W_{S}( \overline{p})\ge W_{S}(p^{*})\) . 3. \(W_{S}(\overline{p})\ge W_{S}(p_{2})>W_{S}(p^{*})\) : In (i), the constraint is binding, and the solution is \(p_{2}\) or \(\widetilde{p_{2}}\) . Also in (ii), the constraint is binding, and the solution is \(p_{2}\) or \( \widetilde{p_{2}}\) . Hence, \(A\) chooses \(p_{2}\) or \(\widetilde{p_{2}}\) in this case and, from Eq. (5 ), the total amount paid by SIG is given by $$\begin{aligned}&\max \{0,W_{S}(p_{2})-W_{S}(q)+\delta a \max \{0,W_S(p^*)-W_S(q) \} \\&\quad +\delta (1-a)\max \{0,W_{S}(p_{2})-W_{S}(q)\}\} \end{aligned}$$ Notice that this is equal to \(0\) if and only if \(W_{S}(q)\ge W_{S}(p_{2})>W_{S}(p^{*})\) .
Given two uncorrelated strategies, each with a Sharpe ratio of 1, what is the Sharpe ratio of the ensemble? Assume that by ensemble you mean an equally weighted portfolio of the two. We can express that portfolio as $$P = \frac{1}{2}x + \frac{1}{2}y$$ and the Sharpe ratio of $P$, $S(P)$, will be $$\frac{\frac{1}{2}\mu_x + \frac{1}{2}\mu_y - r_f}{\sigma_{\frac{1}{2}x + \frac{1}{2}y}}$$ Because $x$ and $y$ are uncorrelated, this reduces to $$\frac{\mu_x + \mu_y - 2r_f}{\sqrt{\sigma_x^2 + \sigma_y^2}}$$ Because the Sharpe ratios $$S(x)=\frac{\mu_x - r_f}{\sigma_x}=S(y)=\frac{\mu_y - r_f}{\sigma_y} = 1$$ we get $$\mu_x - r_f = \sigma_x \\\mu_y - r_f = \sigma_y $$ thus $$\mu_x + \mu_y - 2r_f = \sigma_x + \sigma_y $$ and $$S(P) = \frac{\sigma_x + \sigma_y}{\sqrt{\sigma_x^2 + \sigma_y^2}}$$ What can you say about this ratio? How does it relate to Jensen's inequality? What happens if they are perfectly correlated? Of course, it depends on the weights of your 'ensemble'. The optimal combination will have the following Sharpe ratio: $$ S_{opt} = \sqrt{S_1^2+S_2^2} $$ i.e. $S_{opt} = \sqrt{2} \approx 1.414$ in your example. Proof: Let $x$ be the expectation, and $V$ the covariance matrix of a vector of assets. The Sharpe ratio of a portfolio with weights $w$ is defined by $S_w=\frac{x^Tw}{\sqrt{w^TVw}}$. First, we transform the problem into a simpler one: if $w_1$ has the optimal Sharpe ratio $S^*$, which is always positive, then $a \: w_1$ has the same Sharpe ratio for any positive real number $a$. Setting $a=1/x^Tw_1$ shows that there exists a portfolio $w$ with optimal Sharpe ratio and $x^Tw=1$. Now, we can find $S^*$ by maximizing $S_w$ subject to $x^Tw=1$, i.e. minimize $w^TVw$ subject to $x^Tw=1$.
Using one Lagrange multiplier $\lambda$ gives the following conditions: $$ \nabla_w(w^TVw+\lambda x^Tw)=2 Vw + \lambda x\stackrel{!}{=}0 $$ $$ x^Tw=1$$ The solution is $w=\frac{V^{-1}x}{x^TV^{-1}x}$ and the optimal Sharpe ratio is thus $$ S^*=\sqrt{x^TV^{-1}x}$$ Application to your case: Two uncorrelated assets with volatilities $\sigma_1$ and $\sigma_2$, i.e. $V^{-1}=\begin{pmatrix}\sigma_1^{-2}& 0\\0&\sigma_2^{-2}\end{pmatrix}$, and Sharpe ratios $S_i=x_i/\sigma_i$ gives the above result, since $x^TV^{-1}x = S_1^2 + S_2^2$.
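As a quick numerical check of the two answers above, here is a short Python sketch (not part of the original posts; the function names are illustrative). It computes the equal-weight Sharpe ratio directly and compares it with the closed-form optimum $\sqrt{S_1^2+S_2^2}$:

```python
import math

def portfolio_sharpe(mu1, mu2, s1, s2, w=0.5, rf=0.0):
    """Sharpe ratio of w*x + (1-w)*y for two uncorrelated assets."""
    mean_excess = w * mu1 + (1 - w) * mu2 - rf
    # Uncorrelated: portfolio variance is the sum of the weighted variances
    vol = math.sqrt((w * s1) ** 2 + ((1 - w) * s2) ** 2)
    return mean_excess / vol

def optimal_sharpe(S1, S2):
    """Optimal-combination Sharpe ratio for two uncorrelated assets."""
    return math.hypot(S1, S2)  # sqrt(S1^2 + S2^2)

# Two assets, each with Sharpe ratio 1 (rf = 0, mu = sigma)
print(round(portfolio_sharpe(0.2, 0.2, 0.2, 0.2), 3))  # → 1.414
print(round(optimal_sharpe(1.0, 1.0), 3))              # → 1.414
```

With equal volatilities the equal-weight portfolio already attains the optimum $\sqrt{2}$; with unequal volatilities (e.g. $\sigma_1=0.1$, $\sigma_2=0.3$) the equal-weight Sharpe falls below it, illustrating why the weights matter.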
Research Open Access Existence and boundary behavior of weak solutions for Schrödingerean TOPSIS equations Boundary Value Problems volume 2018, Article number: 12 (2018) Abstract In this paper, we prove that there exists a weak solution for Schrödingerean technique for order performance by similarity (TOPSIS) equations on cylinders. Meanwhile, its boundary behaviors are also obtained via the abstract theory of fuzzy multi-criterion decision making. As the main tools, we use Karamata regular variation theory and the method of upper and lower solutions. Introduction Motivated by uncertainty problems, risk measures and superhedging in finance, Xue established the fundamental theory of Schrödingerean expectation theory (see [1]), where the minimally thin sets associated with a Schrödinger operator are introduced. In the Schrödingerean expectation framework, the notion of the corresponding Schrödingerean stochastic calculus of Itô type was also established (see [2]). As in [3], the set in \(\mathbf{R}^{n}\) is simply denoted by \(\mathcal{C}_{n}(\Gamma )\). We call it a cylinder (see [3]). On that basis, the theory and applications of the Schrödingerean TOPSIS equation have been developed rapidly (see [2, 4–11] and the references therein). In this paper, we consider the following Schrödingerean TOPSIS equation: in \(\mathcal{C}_{n}(\Gamma )\), where \(0< s\leq 1\) and the potential a satisfies the following condition: Under the Lipschitz assumptions on the potential a, Yang (see [11]) has proved the well-posedness of such equations with the fixed-point iteration. Moreover, Liu (see [8]) has studied Markov chains whose coefficients are integral-Lipschitz, Zhang and Wu (see [9]) considered the modified Laplace equations with some good boundaries, Wang et al. (see [10]) studied stochastic functional differential equations with infinite delay. 
We can also refer the reader to Miyamoto (see [3]), Chen (see [5] and the references therein). Let \(\alpha > 0\) and \(1\leq p < \infty \). Then the weighted weak space \(\aleph^{p} _{\alpha }(\Gamma ) \) on cylinders can be defined by where u are weak solutions of (1.1) on cylinders, \(d\wp_{\alpha }(y)=\operatorname{dist}(y,\partial \Gamma )^{\alpha }\,dy\) and \(1/p+1/q=1\). Let dy denote the Lebesgue measure on \(\mathbf{R}^{n}\) and \(\operatorname{dist}(y,\partial \Gamma )\) denote the Euclidean distance from y to the boundary of Γ. We let \(\aleph^{p}_{\alpha }=\aleph^{p}_{\alpha }( \mathcal{C}_{n}(\Gamma ))\). Then we can check that \(dV_{\alpha }(y) = y^{\alpha }_{n}\,dy\) in \(\mathcal{C}_{n}(\Gamma )\). Weak spaces have not been studied as extensively as their holomorphic counterparts, and many results on these spaces have been obtained for bounded domains; see [12, 13], for example, which are good references for holomorphic Bergman spaces. \(\aleph^{p}_{0}(\Gamma )\) is studied in [5] and [3, 6] in the settings of the upper half-space and a bounded smooth domain in \(\mathbf{R}^{n}\), respectively. \(\aleph^{p} _{\alpha }(B)\), where B is the open unit ball in \(\mathbf{R}^{n}\), and its analogue on the upper half-plane are studied in [7] and [1], respectively. For nonnegative functions \(g_{1}\) and \(g_{2}\), we often write \(g_{1} \le g_{2} \) or \(g_{2} \ge g_{1}\) if \(g_{1}\leq cg_{2}\), where c is an inessential positive constant. Also, we write \(g_{1}\approx g_{2}\) if \(g_{1}\le g_{2}\) and \(g_{2} \le g_{1}\). Throughout this paper, we shall use the same letter C to denote various constants which may be different from line to line. Preliminary results In this section, we first recall one definition and some previous results about the generalized Poisson kernel and Green function in the half space, which will be used later. Let \(y\in \mathbf{R}^{n} \) and \(r > 0\). Let \(B(y,r)\) denote the open ball in \(\mathbf{R}^{n}\). 
Let \(V(B(0,1))\) be the volume of the unit ball in \(\mathbf{R}^{n}\), \(w \in \overline{\mathcal{C}_{n}( \Gamma )}\), \(\overline{w}=(w^{\prime }, -w_{n}) \) and \(z \in \mathcal{C}_{n}(\Gamma )\). Then the extended Poisson kernel \(P(y,w)\) in \(\mathcal{C}_{n}(\Gamma )\) can be defined by It is easy to see that (see [14] for details and related facts) for each \(z\in \mathcal{C}_{n}(\Gamma )\) and for every \(w \in \overline{ \mathcal{C}_{n}(\Gamma )}\). Let \(\vec{\beta }=(\beta_{1},\beta_{2},\ldots ,\beta_{n})\) be a multi-index with \(\beta_{j}\in \mathbf{N}\cup \{0\}\) for \(j=1,2, \ldots ,n\) and f be a homogeneous polynomial of degree \(\vert \vec{\beta } \vert +2\). Then we see from (2.1) that where \(\vert \vec{\beta } \vert =\beta_{1}+\beta_{2}+\cdots +\beta_{n}\). The following lemma collects so-called Poisson-Schrödinger type estimates (see [4]), which play important roles in our discussions. Lemma 2.1 If β⃗ is a multi-index and u is a weak solution of (1.1) bounded by M on \(B(y,r)\), then there exists a positive constant C depending on β⃗ such that Main results For the rest of this paper, we assume \(\alpha >0\), \(p, q\in (0,\infty )\) and u is the weak solution of (1.1). First we prove that equation (1.1) has at least one weak solution. Theorem 3.1 If a changes its sign, then (1.1) has at least one weak solution \(u_{\lambda }\). Proof For convenience, let Using Lemma 2.1 it follows that \((I-\frac{\mu_{n}}{\sigma_{n}} \mathcal{G}^{*}\mathcal{G})\) is nonexpansive and averaged. Hence, Moreover, By virtue of \(\lim_{n\rightarrow \infty }(\sigma_{n+1}-\sigma _{n})=0\), it follows that Moreover, \(\{ w_{n} \} \) and \(\{ v_{n} \} \) are bounded, and so is \(\{ d_{n} \} \). 
Therefore, (3.2) reduces to Applying (3.3) and Karamata regular variation theory, we get Using the convexity of the norm and (3.5), we deduce that which implies that Since we have the following result: Applying the property of the projection \(P_{S_{i}}\), one can easily show that where \(M>0\) satisfying So we complete the proof of Theorem 3.1. □ Next we prove a new Poisson type inequality of harmonic functions in \(D_{y}^{\vec{\beta }}P(y,w)\). Theorem 3.2 Let β⃗ be a multi-index such that and \(w\in \mathcal{C}_{n}(\Gamma )\). If in \(\mathcal{C}_{n}(\Gamma )\), then Proof First, we see from (2.3) that where f is a homogeneous polynomial of degree \(\vert \vec{\beta } \vert +2\). Then we get from the change of variables \(z\mapsto (y^{\prime }+w^{\prime },z_{n})\) and then \(z\mapsto \tau_{n}z\), where we used the homogeneity of f. Since f is a polynomial of degree \(1+\vert \vec{\beta } \vert \), we know that So which yields Then we complete the proof. □ The following result shows that convergence in \(\aleph_{\alpha } ^{p}\)-norm implies uniform convergence on each compact subset of \(\mathcal{C}_{n}(\Gamma )\) and point evaluation is a bounded linear functional on \(\aleph_{\alpha }^{p}\). Therefore we can see that \(\aleph_{\alpha }^{p}\) is a Banach space with \(\aleph_{\alpha }^{p}\)-norm. Lemma 3.3 Let \(\alpha >0\), \(p>0\) and \(z\in \mathcal{C}_{n}(\Gamma )\). If \(u\in \aleph^{p}_{\alpha }\), then we have Proof Let \(r =\frac{z_{n}}{2}\). Note that \(\tau_{n} \approx z_{n}\) as \(\tau_{n}\) ranges over all points in \(B(y,r)\). Hence, we get which means that Since and we infer that Finally, we show that \(\tau_{n}\rightarrow \hat{w\tau }\). Using the property of the projection \(P_{S_{i}}\), we derive that which is equal to Since \(\frac{\gamma_{n}}{1-\alpha_{n}}\in (0,\frac{2}{\rho (G*G)})\), we observe that \(\alpha_{n}\in (0,\frac{\gamma_{n}\rho (G*G)}{2})\). 
Then that is to say By virtue of \(\sum_{n=1}^{\infty } \frac{\sigma_{n}}{\gamma_{n}}<\infty \), \(\gamma_{n}\in (0,\frac{2}{ \rho (G*G)})\) and \(\langle \hat{w\tau },\hat{w\tau }-v_{n} \rangle \) is bounded, we obtain that which implies that Moreover, Consequently, \({z_{n}}\) is bounded, and so is \({v_{n}}\). Let \(T=2P_{S_{i}}-I\). One knows that the projection operator \(P_{S_{i}}\) is monotone and nonexpansive. Therefore, that is, where Indeed, So The proof is complete. □ Unlike the cases of bounded domains, the next lemma shows that if \(p\ne q\), then there is no inclusion between \(\aleph^{p}_{\alpha }\) and \(\aleph^{q}_{\alpha }\). Lemma 3.4 Let \(\alpha >0\) and \(p,q>0\). If \(p\ne q\), then \(\aleph^{p}_{\alpha }\) does not contain \(\aleph^{q}_{\alpha }\). Proof Suppose that \(\aleph^{p}_{\alpha }\subset \aleph^{q}_{\alpha }\). Then we see from Lemma 3.3 that convergence in any \(\aleph^{p} _{\alpha }\)-norm implies uniform convergence on compact subsets. Therefore we know from the closed graph theorem that the identity map from \(\aleph^{p}_{\alpha }\) to \(\aleph^{q}_{\alpha }\) is continuous. Hence we get as v ranges over all functions in \(\aleph^{p}_{\alpha }\). To show that (3.10) fails, there exists a nonnegative integer k large enough such Set \(u(y)=D^{k}_{z_{n}}P(y,0)\) for \(z\in \mathcal{C}_{n}(\Gamma )\). It is obvious that u is also harmonic in \(\mathcal{C}_{n}(\Gamma )\), since u is a partial derivative of a harmonic function. Therefore we see from (2.3) that for some homogeneous polynomial f of degree \(k+2\). Let \(u_{\delta }(y)=u(y+(0, \delta ))\), where \(\delta >0\). It is easy to see from Theorem 3.2 that for \(\delta >0\) and because (3.11) holds. Hence we get Conclusions In this paper, we proved that there exists a weak solution for Schrödingerean technique for order performance by similarity equations. 
Meanwhile, the boundary behaviors of it were also obtained via the abstract theory of fuzzy multi-criterion decision making. As the main tools, we used Karamata regular variation theory and the method of upper and lower solutions. References 1. Xue, G, Yuzbasi, E: Fixed point theorems for solutions of the stationary Schrödinger equation on cones. Fixed Point Theory Appl. 2015, 34 (2015) 2. Chu, T, Lin, Y: Improved extensions of the TOPSIS for group decision-making under fuzzy environment. J. Inf. Optim. Sci. 23(2), 273-286 (2002) 3. Miyamoto, I: Harmonic functions in a cylinder which vanish on the boundary. Jpn. J. Math. 22(2), 241-255 (1996) 4. Çelen, A: Comparative analysis of normalization procedures in TOPSIS method: with an application to Turkish deposit banking market. Informatica 25(2), 185-208 (2014) 5. Chen, T, Li, Y: The extended TOPSIS method with interval-valued fuzzy sets and experimental analysis on separation measures. Adv. Fuzzy Sets Syst. 4(3), 269-292 (2009) 6. Jiang, H, Wang, Z, Xie, M: TOPSIS method with objective weight based on attribute measure and its application. Math. Econ. 27(2), 1-7 (2010) 7. Khorshid, S: Utilizing the hierarchical fuzzy TOPSIS and entropy method in a SWOT analysis. Adv. Fuzzy Sets Syst. 10(1), 1-32 (2011) 8. Liu, Q: An extended TOPSIS method for multiple attribute decision making problems with unknown weight based on 2-dimension uncertain linguistic variables. J. Intell. Fuzzy Syst. 27(5), 2221-2230 (2014) 9. Zhang, J, Wu, Q: Technique for order preference by similarity to ideal solution (TOPSIS) applied in vague sets. Trans. Beijing Inst. Techol. 26(10), 937-940 (2006) 10. Wang, L, Dun, C, Yang, R: Multi-objective \((Q,r)\) model based on hybrid differential evolution algorithm and entropy-based TOPSIS. Control Decis. 26(12), 1913-1916, 1920 (2011) 11. Yang, W, Shi, J, Pang, Y: Fuzzy multi-attribute group decision making method based on TOPSIS with partial weight information. 
Mohu Xitong yu Shuxue 28(2), 144-151 (2014) 12. Coifman, RR, Rochberg, R: Representation theorems for holomorphic and harmonic functions in \(L^{p}\). Astérisque 77, 11-66 (1980) 13. Gasiorowicz, S: Elementary Particle Physics. Wiley, New York (1966) 14. Axler, S, Bourdon, P, Ramey, W: Harmonic Function Theory. Springer, New York (1992) 15. Chen, Y, Yu, Y: An inequality of Ostrowski type for twice differentiable functions. Math. Pract. Theory 35(12), 188-192 (2005) 16. Dragomir, S, Wang, S: An inequality of Ostrowski-Grüss type and its applications to the estimation of error bounds for some special means and for some numerical quadrature rules. Comput. Math. Appl. 33(11), 15-20 (1997) 17. Dragomir, S, Wang, S: A new inequality of Ostrowski’s type in \(L_{1}\) norm and applications to some special means and to some numerical quadrature rules. Tamkang J. Math. 28(3), 239-244 (1997) Acknowledgements The authors are thankful to the honorable reviewers for their valuable suggestions and comments, which improved the paper. This work was supported by the Natural Science Foundation of Heilongjiang Province (No. A2016209040). Ethics declarations Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Answer $a = 306~m$ $b = 1248~m$ $B = 76^{\circ}12'$ Work Step by Step We can convert angle A to degrees: $A = 13^{\circ}47' = (13+\frac{47}{60})^{\circ} = 13.8^{\circ}$ We can use angle A and angle C to find angle B: $B = 180^{\circ}-90^{\circ}-13.8^{\circ} = 76.2^{\circ}$ $B = 76.2^{\circ}$ which is $76^{\circ}12'$ We can use angle A and $c$ to find $b$: $cos~A = \frac{b}{c}$ $b = (c)~cos~A$ $b = (1285~m)~cos(13.8^{\circ})$ $b = 1248~m$ We can use the Pythagorean theorem to find a: $a^2 = c^2-b^2$ $a = \sqrt{c^2-b^2}$ $a = \sqrt{(1285~m)^2-(1248~m)^2}$ $a = 306~m$
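As a sanity check (not part of the textbook solution), the same computation can be reproduced in a few lines of Python. Carrying the exact angle \(13^{\circ}47'\) through the calculation, rather than the rounded \(13.8^{\circ}\), still reproduces the published answers after rounding:

```python
import math

A = 13 + 47 / 60                    # angle A in decimal degrees (13 degrees 47')
c = 1285.0                          # hypotenuse, in meters
B = 180 - 90 - A                    # angles of a right triangle sum to 180 degrees
b = c * math.cos(math.radians(A))   # side adjacent to A: cos A = b/c
a = math.sqrt(c**2 - b**2)          # Pythagorean theorem

print(round(B, 1), round(b), round(a))  # → 76.2 1248 306
```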
Disclaimer: This answer derives the prices of two different binary options within the Black/Scholes framework. Note that this is not an appropriate valuation model to use for non-European contracts in most real-world markets. Up-and-In Binary Call After reading your question for a second time, I agree with Quantuple's comment that you seem to be looking for the solution to an up-and-in binary call option. Formally, let \begin{equation}\nu = \inf \left\{ t \in \mathbb{R}_+ : S_t \geq K \right\}\end{equation} be the first hitting time of $S$ to the strike $K$. The option has a unit payoff conditional on $\nu \leq T$ and $S_T \geq K$, i.e. \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\} \mathrm{1} \left\{ \nu \leq T \right\}.\end{equation} Note however that $S_T \geq K \; \Rightarrow \; \nu \leq T$ and thus $\left\{ S_T \geq K \right\} \subseteq \left\{ \nu \leq T \right\}$. Consequently, we can drop the second indicator and your payoff is just \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\}.\end{equation} That is, the price of an up-and-in binary call option is the same as that of a normal binary call option. You thus have the standard result that \begin{equation}V_0 = e^{-r T} \mathcal{N} \left( d_- \right),\end{equation} where \begin{equation}d_- = \frac{1}{\sigma \sqrt{T}} \left( \ln \left( \frac{S_0}{K} \right) + \left( r - \frac{1}{2} \sigma^2 \right) T \right).\end{equation} Down-and-Out Binary Call A more interesting case is the down-and-out binary call. This is how I initially understood your question. Now let \begin{equation}\nu = \inf \left\{ t \in \mathbb{R}_+ : S_t \leq K \right\}\end{equation} and \begin{equation}V_T = \mathrm{1} \left\{ S_T \geq K \right\} \mathrm{1} \left\{ \nu > T \right\}.\end{equation} This option knocks out, should the spot price breach the barrier before maturity. Otherwise it has a digital payoff of one. Let $\tau = T - t$ be the time-to-maturity. 
The valuation function $\tilde{V}(S, \tau)$ of this option satisfies the initial boundary value problem \begin{eqnarray}\mathcal{L} \left\{ \tilde{V} \right\} (S, \tau) & = & 0 \qquad (S, \tau) \in \mathcal{D},\\\tilde{V}(K, \tau) & = & 0, \qquad \forall \tau \in \mathbb{R}_+\\\tilde{V}(S, 0) & = & \mathrm{1} \{ S \geq K \},\end{eqnarray} where $\mathcal{L}$ is the Black/Scholes forward operator and $\mathcal{D} = \left\{ (S, \tau): S > K, \tau \in \mathbb{R}_+ \right\}$. Using the method of images, see e.g. Buchen (2001), the solution can be shown to be \begin{equation}\tilde{V}(S, \tau) = \mathcal{B}_K^+(S, \tau) - \stackrel{K}{\mathcal{I}} \left\{ \mathcal{B}_K^+(S, \tau) \right\},\end{equation} where \begin{eqnarray}\mathcal{B}_K^+ (S, \tau) & = & e^{-r \tau} \mathcal{N} \left( d_- \right),\\d_- & = & \frac{1}{\sigma \sqrt{\tau}} \left( \ln \left( \frac{S}{K} \right) + \left( r - \frac{1}{2} \sigma^2 \right) \tau \right),\\\stackrel{K}{\mathcal{I}} \left\{ \mathcal{B}_K^+ (S, \tau) \right\} & = & \left( \frac{S}{K} \right)^{2 \alpha} \mathcal{B}_K^+ \left( \frac{K^2}{S}, \tau \right),\\\alpha & = & \frac{1}{2} - \frac{r}{\sigma^2}.\end{eqnarray} References Buchen, Peter W. (2001) "Image Options and the Road to Barriers," Risk Magazine, Vol. 14, No. 9, pp. 127-130
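To make the two closed forms above concrete, here is a small Python sketch using only the standard library (an illustration of the formulas, not production pricing code; the function names are mine):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binary_call(S, K, r, sigma, T):
    """Cash-or-nothing European binary call: e^{-rT} N(d_-).
    By the argument above, this also prices the up-and-in binary call."""
    d_minus = (math.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return math.exp(-r * T) * norm_cdf(d_minus)

def down_and_out_binary_call(S, K, r, sigma, T):
    """Method-of-images price: B(S) - (S/K)^{2 alpha} B(K^2/S),
    with alpha = 1/2 - r/sigma^2 and barrier at the strike K."""
    alpha = 0.5 - r / sigma**2
    image = (S / K) ** (2.0 * alpha) * binary_call(K**2 / S, K, r, sigma, T)
    return binary_call(S, K, r, sigma, T) - image
```

The image term only subtracts value: it makes the price vanish exactly at the barrier $S = K$ and fades as $S$ moves away from it, so the knocked-out price always sits below the plain binary call.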
In this example we use the package to infer the modes of a bimodal, 2d Gaussian using stochastic gradient Hamiltonian Monte Carlo. So we assume we have independent and identically distributed data \(x_1, \dots, x_N\) with \(X_i | \theta \sim 0.5 N( \theta_1, I_2 ) + 0.5 N( \theta_2, I_2 )\), and we want to infer \(\theta_1\) and \(\theta_2\). First, let’s simulate the data with the following code; we set \(N\) to be \(10^4\)

library(sgmcmc)
library(MASS)
# Declare number of observations
N = 10^4
# Set locations of two modes, theta1 and theta2
theta1 = c( 0, 0 )
theta2 = c( 0.1, 0.1 )
# Allocate observations to each component
set.seed(13)
z = sample( 2, N, replace = TRUE, prob = c( 0.5, 0.5 ) )
# Predeclare data matrix
X = matrix( rep( NA, 2*N ), ncol = 2 )
# Simulate each observation depending on the component it's been allocated
for ( i in 1:N ) {
  if ( z[i] == 1 ) {
    X[i,] = mvrnorm( 1, theta1, diag(2) )
  } else {
    X[i,] = mvrnorm( 1, theta2, diag(2) )
  }
}
dataset = list("X" = X)

In the last line we defined the dataset as it will be input to the relevant sgmcmc function. A lot of the inputs to functions in sgmcmc are defined as lists. This improves flexibility by enabling models to be specified with multiple parameters, datasets, and allows separate tuning constants to be set for each parameter. We assume that observations are always accessed on the first dimension of each object, i.e. the point \(x_i\) is located at X[i,] rather than X[,i]. Similarly the observation \(i\) from a 3d object Y would be located at Y[i,,]. The parameters are declared very similarly, but this time the value associated with each entry is its starting point. We have two parameters theta1 and theta2, which we’ll just start from the true values for the sake of demonstration purposes

params = list( "theta1" = c( 0, 0 ), "theta2" = c( 0.1, 0.1 ) )

Now we’ll define the functions logLik and logPrior. It should now become clear why the list names come in handy. 
The function logLik should take two parameters as input: params and dataset. These parameters will be lists with the same names as those you defined for params and dataset earlier. There is one difference though, the objects in the lists will have automatically been converted to TensorFlow objects for you. The params list will contain TensorFlow tensor variables; the dataset list will contain TensorFlow placeholders. The logLik function should take these lists as input and return the value of the log likelihood as a tensor at point params given data dataset. The function should do this using TensorFlow operations, as this allows the gradient to be automatically calculated; it also allows the wide range of distribution objects as well as matrix operations that TensorFlow provides to be taken advantage of. A tutorial of TensorFlow for R is beyond the scope of this article, for more details we refer the reader to the website of TensorFlow for R. Specifying the logLik and logPrior functions regularly requires specifying specific distributions. TensorFlow already has a number of distributions implemented in the TensorFlow Probability package. All of the distributions implemented in TensorFlow Probability are located in tf$distributions, a list is given on the TensorFlow Probability website. More complex distributions can be specified by coding up the logLik and logPrior functions by hand, examples of this, as well as using various distribution functions, are given in the other tutorials. 
With this in place we can define the log-likelihood function logLik as follows

logLik = function( params, dataset ) {
  # Declare Sigma (assumed known)
  SigmaDiag = c(1, 1)
  # Declare distribution of each component
  component1 = tf$distributions$MultivariateNormalDiag( params$theta1, SigmaDiag )
  component2 = tf$distributions$MultivariateNormalDiag( params$theta2, SigmaDiag )
  # Declare allocation probabilities of each component
  probs = tf$distributions$Categorical(c(0.5,0.5))
  # Declare full mixture distribution given components and allocation probabilities
  distn = tf$distributions$Mixture(probs, list(component1, component2))
  # Declare log likelihood
  logLik = tf$reduce_sum( distn$log_prob(dataset$X) )
  return( logLik )
}

So this function basically states that our log-likelihood function is \(\sum_{i=1}^N \log \left[ 0.5 \mathcal N( x_i | \theta_1, I_2 ) + 0.5 \mathcal N( x_i | \theta_2, I_2 ) \right]\), where \(\mathcal N( x | \mu, \Sigma )\) is a Gaussian density at \(x\) with mean \(\mu\) and variance \(\Sigma\). Most of the time just specifying the constants in these functions, such as SigmaDiag, as R objects will be fine. But there are sometimes issues when these constants get automatically converted to tf$float64 objects by TensorFlow rather than tf$float32. If you run into errors involving tf$float64 then force the constants to be input as tf$float32 by using SigmaDiag = tf$constant( c( 1, 1 ), dtype = tf$float32 ). Next we want to define our log-prior density, which we assume is uninformative \(\log p( \theta ) = \log \mathcal N(\theta | 0,10I_2)\). Similar to the log-likelihood function, the log-prior density is defined as a function, but only with input params. 
In our case the definition is

logPrior = function( params ) {
  # Declare hyperparameters mu0 and Sigma0
  mu0 = c( 0, 0 )
  Sigma0Diag = c(10, 10)
  # Declare prior distribution
  priorDistn = tf$distributions$MultivariateNormalDiag( mu0, Sigma0Diag )
  # Declare log prior density and return
  logPrior = priorDistn$log_prob( params$theta1 ) + priorDistn$log_prob( params$theta2 )
  return( logPrior )
}

Finally we set the tuning parameters for SGHMC; this is a list with the same names as the params list you defined earlier, and the values are the stepsizes for the corresponding parameters.

stepsize = list( "theta1" = 2e-5, "theta2" = 2e-5 )

Optionally, we can set the tuning parameter for the momentum alpha in the same way as the stepsize. But we’ll leave this, along with the trajectory tuning constant L and the minibatchSize, as their defaults. Now we can run our SGHMC algorithm using the sgmcmc function sghmc, which returns a list of Markov chains for each parameter as output. Use the argument verbose = FALSE to hide the output of the function. To make the results reproducible we’ll set the seed to 13. We’ll set the number of iterations as 11000 to allow for 1000 iterations of burn-in.

chains = sghmc( logLik, dataset, params, stepsize, logPrior = logPrior, nIters = 11000, verbose = FALSE, seed = 13 )

Finally, we’ll plot the results after removing burn-in

library(ggplot2)
# Remove burn in
burnIn = 10^3
chains = list( "theta1" = as.data.frame( chains$theta1[-c(1:burnIn),] ),
               "theta2" = as.data.frame( chains$theta2[-c(1:burnIn),] ) )
# Concatenate the two chains for the plot to get a picture of the whole distribution
plotData = rbind(chains$theta1, chains$theta2)
ggplot( plotData, aes( x = V1, y = V2 ) ) + stat_density2d( size = 1.5, alpha = 0.7 )
In this example we use the package to infer the mean of a 2d Gaussian using stochastic gradient Langevin dynamics. So we assume we have independent and identically distributed data \(x_1, \dots, x_N\) with \(X_i | \theta \sim N( \theta, I_2 )\), and we want to infer \(\theta\). First, let’s simulate the data with the following code; we set \(N\) to be \(10^4\)

library(sgmcmc)
library(MASS)
# Declare number of observations
N = 10^4
# Set theta to be 0 and simulate the data
theta = c( 0, 0 )
Sigma = diag(2)
set.seed(13)
X = mvrnorm( N, theta, Sigma )
dataset = list("X" = X)

In the last line we defined the dataset as it will be input to the relevant sgmcmc function. A lot of the inputs to functions in sgmcmc are defined as lists. This improves flexibility by enabling models to be specified with multiple parameters, datasets, and allows separate tuning constants to be set for each parameter. We assume that observations are always accessed on the first dimension of each object, i.e. the point \(x_i\) is located at X[i,] rather than X[,i]. Similarly the observation \(i\) from a 3d object Y would be located at Y[i,,]. The parameters are declared very similarly, but this time the value associated with each entry is its starting point. We have one parameter theta, which we’ll just start at 0.

params = list( "theta" = c( 0, 0 ) )

Now we’ll define the functions logLik and logPrior. It should now become clear why the list names come in handy. The function logLik should take two parameters as input: params and dataset. These parameters will be lists with the same names as those you defined for params and dataset earlier. There is one difference though, the objects in the lists will have automatically been converted to TensorFlow objects for you. The params list will contain TensorFlow tensor variables; the dataset list will contain TensorFlow placeholders. 
The logLik function should take these lists as input and return the value of the log-likelihood function as a tensor at point params given data dataset. The function should do this using TensorFlow operations, as this allows the gradient to be automatically calculated; it also allows the wide range of distribution objects, as well as the matrix operations that TensorFlow provides, to be taken advantage of. A tutorial of TensorFlow for R is beyond the scope of this article; for more details we refer the reader to the website of TensorFlow for R. Specifying the logLik and logPrior functions regularly requires particular distributions. TensorFlow already has a number of distributions implemented in the TensorFlow Probability package. All of the distributions implemented in TensorFlow Probability are located in tf$distributions; a list is given on the TensorFlow Probability website. More complex distributions can be specified by coding up the logLik and logPrior functions by hand; examples of this, as well as of using various distribution functions, are given in the other tutorials. With this in place we can define the logLik function as follows

logLik = function( params, dataset ) {
    # Declare distribution of each observation
    SigmaDiag = c( 1, 1 )
    baseDist = tf$distributions$MultivariateNormalDiag( params$theta, SigmaDiag )
    # Declare log-likelihood function and return
    logLik = tf$reduce_sum( baseDist$log_prob( dataset$X ) )
    return( logLik )
}

So this function basically states that our log-likelihood is \(\sum_{i=1}^N \log \mathcal N( x_i | \theta, I_2 )\), where \(\mathcal N( x | \mu, \Sigma )\) is a Gaussian density at \(x\) with mean \(\mu\) and variance \(\Sigma\). Most of the time just specifying the constants in these functions, such as SigmaDiag, as R objects will be fine. But there are sometimes issues when these constants get automatically converted to tf$float64 objects by TensorFlow rather than tf$float32.
If you run into errors involving tf$float64, then force the constants to be input as tf$float32 by using SigmaDiag = tf$constant( c( 1, 1 ), dtype = tf$float32 ). Next we want to define our log-prior density, which we assume is \(\log p( \theta_j ) = \log \mathcal N(\theta_j | 0,10)\), for each dimension \(j\) of \(\theta\). Similar to logLik, logPrior is defined as a function with input params. In our case the definition is

logPrior = function( params ) {
    baseDist = tf$distributions$Normal( 0, 10 )
    logPrior = tf$reduce_sum( baseDist$log_prob( params$theta ) )
    return( logPrior )
}

Before we begin running our SGLD algorithm, we need to specify the stepsize and minibatch size. A stepsize is required for each parameter, so this must be a list of numbers with names that are exactly the same as each of the parameters. The minibatch size is simply a number that is less than \(N\), or a number between 0 and 1 which will be taken to be the proportion of \(N\). It specifies how many observations are used in each iteration of SGMCMC; it is a trade-off between accuracy and speed. The default is minibatchSize = 0.01; we'll set it to be 100.

stepsize = list( "theta" = 1e-5 )
n = 100

The stepsize parameters may require a bit of tuning before you get good results. The shorthand stepsize = 1e-5 can be used, which would set the stepsize of all parameters to be 1e-5. Now we can run our SGLD algorithm using the sgmcmc function sgld, which returns a list of Markov chains for each parameter as output. To make the results reproducible we'll set the seed to 13.
Use the argument verbose = FALSE to hide the output of the function.

chains = sgld( logLik, dataset, params, stepsize, logPrior = logPrior, minibatchSize = n, verbose = FALSE, seed = 13 )

Finally we'll plot the results after removing burn-in:

library(ggplot2)
burnIn = 10^3
thetaOut = as.data.frame( chains$theta[-c(1:burnIn),] )
ggplot( thetaOut, aes( x = V1, y = V2 ) ) + stat_density2d( size = 1.5 )

There are lots of other sgmcmc algorithms implemented in exactly the same way, such as sghmc and sgnht, as well as their control variate counterparts (sgldcv, sghmccv and sgnhtcv) for improved efficiency, which take the additional small numeric input optStepsize, the stepsize of the initial optimization step to find the MAP parameters.
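Under the hood, the update that sgld iterates is simple enough to sketch in plain NumPy. The following is an illustrative sketch only, using the model of this tutorial; the package builds the same kind of update with TensorFlow and automatic gradients, and the prior scale below is an assumption.

```python
import numpy as np

# Plain-NumPy sketch of the SGLD update (illustration only):
#   theta <- theta + (eps/2) * [grad log p(theta) + (N/n) * minibatch grad log lik]
#          + N(0, eps)
rng = np.random.default_rng(13)
N, n, eps = 10000, 100, 1e-5
X = rng.normal(0.0, 1.0, size=(N, 2))   # X_i | theta ~ N(theta, I_2), true theta = 0

theta = np.zeros(2)                      # starting point, as in params
chain = []
for _ in range(5000):
    batch = X[rng.choice(N, size=n, replace=False)]
    grad_prior = -theta / 100.0          # gradient of log N(theta_j | 0, 10^2)
    grad_lik = np.sum(batch - theta, axis=0)
    theta = theta + 0.5 * eps * (grad_prior + (N / n) * grad_lik) \
            + rng.normal(0.0, np.sqrt(eps), size=2)
    chain.append(theta.copy())
chain = np.array(chain)
```

After burn-in the draws concentrate around the posterior mean, which is close to the sample mean of X.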
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior product: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ is the signed sum of the values of $\omega$ on the boundary of that Lie square: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: note how contracting $X$ in $\nabla_X s$ gave a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-$0$ exterior derivative for a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor! Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$; in general this is the bundle curvature. Here's a point: what is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the subgroup of units of $Cl(V,q)$ generated by the elements $v$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ generated by those $v$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\Big[\frac{(\frac{X_{1}+\cdots+X_{n}}{n})^2}{2}\Big] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps champ.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are the 180 degree rotations along the $x$-, $y$- and $z$-axes. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
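The order-6 claim for the group generated by $(1,2)$ and $(1,2,3)$ is easy to confirm by brute-force closure; a small Python sketch (helper names are my own):

```python
from itertools import product

# Brute-force closure: the subgroup of S_4 generated by the transposition
# (1 2) and the 3-cycle (1 2 3) has order 6 (it is S_3 on {1,2,3}, fixing 4).
# A permutation is a tuple p with p[i] the image of i (0-indexed).
def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens):
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a, b in product(elems, frontier)}
        new |= {compose(b, a) for a, b in product(elems, frontier)}
        frontier = new - elems
        elems |= frontier
    return elems

swap12 = (1, 0, 2, 3)     # the transposition (1 2)
cycle123 = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
subgroup = closure({swap12, cycle123})
```

Every element of the resulting set fixes the fourth point, as expected of a copy of $S_3$ inside $S_4$.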
Moment area method

A method for finding slopes and deflections in a framed structure by use of the moment area (M/EI) diagram.

First Theorem

Theorem 1: The change in slope between any two points on the elastic curve equals the area of the M/EI diagram between these two points.

\[\theta_{AB} = {\int_A}^B \frac{M}{EI}\;dx\]

where
M ... moment
EI ... flexural rigidity
\(\theta_{AB}\) ... change in slope between points A and B
A, B ... points on the elastic curve

Second Theorem

Theorem 2: The vertical deviation of the tangent at point A on the elastic curve with respect to the tangent extended from point B equals the "moment" of the M/EI diagram between points A and B, computed about point A (the point on the elastic curve where the deviation \(t_{A/B}\) is to be determined).

\[t_{A/B} = {\int_A}^B \frac{M}{EI} \bar{x} \;dx\]

where
M ... moment
EI ... flexural rigidity
\(t_{A/B}\) ... deviation of the tangent at point A with respect to the tangent extended from point B
\(\bar{x}\) ... horizontal distance from point A to the centroid of the M/EI diagram
A, B ... points on the elastic curve

References

Russell C. Hibbeler: Structural Analysis, 3rd Edition, Prentice Hall, 1995, chapter 8, pp. 354-569, ISBN 0-02-354041-9
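As a quick numerical illustration of Theorem 1 (a hypothetical example with made-up numbers, not from the reference): for a cantilever of length L carrying a point load P at the free end, with x measured from the free end, M(x) = -P*x, so the slope change between the two ends is -P*L^2/(2*E*I), which a trapezoidal integration of M/EI reproduces.

```python
import numpy as np

# Theorem 1 check on a hypothetical cantilever: integrate M/EI along the
# member and compare with the closed-form slope change -P*L^2 / (2*E*I).
P, L, E, I = 10.0, 2.0, 200e9, 4e-6       # N, m, Pa, m^4 (made-up values)
x = np.linspace(0.0, L, 100001)
M = -P * x                                 # bending moment from the free end
# trapezoidal integration of M/EI
theta = np.sum((M[:-1] + M[1:]) / 2.0 * np.diff(x)) / (E * I)
theta_exact = -P * L ** 2 / (2 * E * I)
```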
I don't know what people mean by 'vanilla policy gradient', but what comes to mind is REINFORCE, which is the simplest policy gradient algorithm I can think of. Is this an accurate statement? By REINFORCE I mean this surrogate objective $$ \frac{1}{m} \sum_i \sum_t \log(\pi(a_t|s_t)) R_i $$ where $i$ indexes over the $m$ episodes and $t$ over time steps, and $R_i$ is the total reward of episode $i$. It's also common to replace $R_i$ with something else, like a baselined version $R_i - b$, or to use the future return, potentially also with a baseline, $G_{it} - b$. However, I think even with these modifications to the multiplicative term, people would still call this 'vanilla policy gradient'. Is that correct?
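As a concrete illustration (a made-up two-armed bandit, not from the question), ascending this surrogate with one-step episodes drives a softmax policy toward the higher-reward arm:

```python
import numpy as np

# Minimal REINFORCE sketch on a hypothetical two-armed bandit: softmax
# policy over two actions, one-step episodes, surrogate
# (1/m) * sum_i log pi(a_i) * R_i ascended via its gradient.
rng = np.random.default_rng(0)
theta = np.zeros(2)                       # one logit per arm
reward_prob = np.array([0.2, 0.8])        # arm 1 pays off more often

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

m, lr = 32, 0.1
for _ in range(500):
    probs = softmax(theta)
    grad = np.zeros(2)
    for _ in range(m):                    # m one-step "episodes"
        a = rng.choice(2, p=probs)
        R = float(rng.random() < reward_prob[a])   # Bernoulli reward
        # gradient of log pi(a) w.r.t. the logits is one_hot(a) - probs
        grad += (np.eye(2)[a] - probs) * R
    theta += lr * grad / m

final_probs = softmax(theta)
```

With a baseline subtracted from R the same loop would have lower-variance updates, which is exactly the modification discussed above.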
I'm a little confused. In my textbook it is written that every odd function can be described by a sine series. I have the following equation from an exercise: $$A_{0}+\sum\limits_{n=1}^\infty \Big(A_{n} \cos(n \phi) + B_{n} \sin(n \phi)\Big)c^{n} = \sin\left(\frac{\phi}{2}\right)$$ It's a standard Fourier series, where $n$ and $c$ are positive. Then it is written in the solution that $B_{n}c^{n} = 0$ for symmetry reasons. I'm confused, because then the Fourier series only has cosine terms, while the function on the right-hand side is an odd function?!
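A quick numerical check of the symmetry claim (ignoring the $c^n$ factors, and purely as a sketch): since $\sin(\phi/2)$ is odd on $(-\pi,\pi)$, its cosine coefficients vanish while the sine coefficients do not; for instance $b_1 = 8/(3\pi)$ by direct integration.

```python
import numpy as np

# Fourier coefficients of sin(phi/2) on (-pi, pi):
#   a_n = (1/pi) * integral of sin(phi/2) * cos(n*phi),
#   b_n = (1/pi) * integral of sin(phi/2) * sin(n*phi).
phi = np.linspace(-np.pi, np.pi, 200001)
f = np.sin(phi / 2)

def coeff(g):
    y = f * g
    # trapezoidal rule, divided by pi
    return np.sum((y[:-1] + y[1:]) / 2 * np.diff(phi)) / np.pi

a1 = coeff(np.cos(phi))   # vanishes: odd integrand
b1 = coeff(np.sin(phi))   # nonzero: 8/(3*pi)
```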
Let $f(x,y)$ be a binary quadratic form with co-prime integer coefficients. We say that $f$ is a proper subform of $g(x,y)$ if there exists an integer matrix $A = \left(\begin{smallmatrix} a_1 & a_2 \\ a_3 & a_4 \end{smallmatrix}\right)$ with $|\det A| > 1$ such that $$\displaystyle f(x,y) = g(a_1 x + a_2 y, a_3 x + a_4 y).$$ For example, the form $f(x,y) = 4x^2 + 4xy + 5y^2$ is a proper subform of $g(x,y) = x^2 + y^2$, since $$\displaystyle 4x^2 + 4xy + 5y^2 = (2x + y)^2 + (2y)^2.$$ If $f$ is a proper subform, then the discriminant of $f$ is divisible by $\det(A)^2$, so it is not square-free. My question is the converse: suppose that $\Delta(f)$ is divisible by an odd square $m^2$. Is $f$ a proper subform of another form $g$?
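The worked example can be checked mechanically; a short Python sketch:

```python
# Check of the worked example: f(x,y) = 4x^2 + 4xy + 5y^2 equals
# g(2x + y, 2y) for g(x,y) = x^2 + y^2, i.e. f is a proper subform of g
# via A = [[2, 1], [0, 2]] with |det A| = 4 > 1, and the discriminants
# satisfy Delta(f) = (det A)^2 * Delta(g).
def f(x, y):
    return 4 * x * x + 4 * x * y + 5 * y * y

def g(x, y):
    return x * x + y * y

ok = all(f(x, y) == g(2 * x + y, 2 * y)
         for x in range(-5, 6) for y in range(-5, 6))

disc_f = 4 ** 2 - 4 * 4 * 5   # b^2 - 4ac = -64
disc_g = 0 ** 2 - 4 * 1 * 1   # -4
det_A = 2 * 2 - 1 * 0         # 4
```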
Linear fraction

A linear fraction is a meromorphic function that can be expressed as

\((1) ~ ~ ~ \displaystyle T(z)=\frac{u+v z}{w+z}\)

where \(u\), \(v\), \(w\) are parameters from some set of numbers that allows the operations of summation, multiplication and division. Usually it is assumed that they are complex numbers and that the operation of multiplication is commutative. For the case \(u\!=\!0\), \(v\!=\!-1\), \(w\!=0\), the complex map of the function \(T\) by (1) is shown in the figure at right. Lines \(~u=\Re(T(x\!+\!\mathrm i y))=\mathrm {const}~\) and lines \(~v=\Im(T(x\!+\!\mathrm i y))=\mathrm {const}~\) are drawn with the code conto.cin in the \((x,y)\) coordinate plane.

Linear function

Definition (1) excludes the case of a linear function. However, the linear function can be realized in the limit

\((2) ~ ~ ~ \displaystyle A+B z= \lim_{M\rightarrow \infty} \frac{M A+ M B z}{M+z}\)

where the expression under the limit operation has the form (1), id est, \(u=M A\), \(v=MB\), \(w=M\).

Inverse function

The inverse function \(T^{-1}\) of the linear fraction \(T\) by (1) is also a linear fraction, and its parameters can be easily expressed through the parameters of the initial linear fraction:

\((3) ~ ~ ~ \displaystyle T^{-1}(z)=\frac{u-w z}{-v+z}\)

One can easily check that \(T(T^{-1}(z))=T^{-1}(T(z))=z\) for all \(z\) excluding the singularities at \(z=-w\) and at \(z=v\).

Linear conjugate of linear fraction

The linear conjugate of a function \(T\) is the function \(Q\circ T\circ P\) where \(P\) is a linear function and \(Q=P^{-1}\).
The linear function \(P\) can be parametrized with two parameters, \(A\) and \(B\), as follows:

\((4) ~ ~ ~ \displaystyle P(z)=A+B z\)

then

\((5) ~ ~ ~ \displaystyle Q(z)=(z-A)/B\)

and

\((6) ~ ~ ~ \displaystyle Q \circ T \circ P(z)= \frac{\frac{-A^2+A v-A w+u}{B^2}+\frac{v-A}{B} z}{\frac{A+w}{B}+z}\)

This form, for a special choice of the parameters \(A\) and \(B\), can be used to simplify the construction of non-integer iterates of the linear fraction. One possible choice is

\((7) ~ ~ ~ \displaystyle \frac{v-A}{B}=1 ~ ~ \), \(~ ~ ~ \displaystyle -A^2+Av-Aw+u=0\)

and then

\((8) ~ ~ ~ \displaystyle A=\frac{v-w}{2}+r\)

\((9) ~ ~ ~ \displaystyle B=v-A=\frac{v+w}{2}-r\)

where

\((10) ~ ~ ~ \displaystyle r=\sqrt{\Big(\frac{v-w}{2}\Big)^2+u}\)

With such a choice,

\((11) ~ ~ ~ \displaystyle t(z)=Q \circ T \circ P(z)= \frac{z}{c+z}\)

where

\((12) ~ ~ ~ \displaystyle c=\frac{v+w+2r}{v+w-2r}\)

Then the linear fraction \(T\) by (1) can be written as

\((13) ~ ~ ~ \displaystyle T(z)=P \circ t \circ Q(z)\)

and the \(n\)th iterate of \(T\) can be written as

\((14) ~ ~ ~ \displaystyle T^n(z)=P \circ t^n \circ Q(z)\)

Iterate of linear fraction

The iterate of the linear fraction of the special form \(t\) by (11) can be expressed as follows:

\((15) ~ ~ ~ \displaystyle t^n(z)=\frac{z}{c^n+\frac{1-c^n}{1-c} z }\)

A superfunction \(f\) for the transfer function \(t\) by (11) can be chosen as follows:

\((16) ~ ~ ~ \displaystyle f(z)=\frac{c-1}{c^z-1 }\)

For \(c\!=\!2\), the complex map of the function \(f\) by (16) is shown in the figure at right. The superfunction \(f\) is periodic with period \(\displaystyle P=\frac{2 \pi \mathrm i}{\ln(c)} \). For real values of \(c\) (and, in particular, for \(c=2\)), the period is pure imaginary. At \(\Re(z)\rightarrow +\infty\), the superfunction \(f(z)\) exponentially approaches zero. At \(\Re(z)\rightarrow -\infty\), the superfunction \(f(z)\) exponentially approaches the constant \(1\!-\!c\), the second fixed point of \(t\).
The corresponding Abel function \(g=f^{-1}\) can be expressed as follows:

\((17) ~ ~ ~ \displaystyle g(z)= \log_c\big(1+(c\!-\!1)/z \big)\)

For \(c\!=\!2\), the complex map of the function \(g\) by (17) is shown in the figure. With the superfunction \(f\) by (16) and the Abel function \(g\) by (17), the iterate of \(t\) can be written in the usual form

\((18) ~ ~ ~ \displaystyle t^n(z)=f(n+g(z))\)

Note

The interpretation of the iteration of the transfer function \(T\) by (1) is especially simple in the case \(r\ge 0\): then \(T\) has at least one real fixed point, and the \(n\)th iterate \(T^n\) is regular at this fixed point even at non-integer values of \(n\); the iterate can also be expressed as a linear fraction.

Superfunction of the linear fraction

The iterate of a linear fraction can be expressed through its superfunction and Abel function; as usual, additional conditions on the asymptotic behavior of these functions are required in order to make the non-integer iterate unique.
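Formulas (15)-(18) can be sanity-checked numerically. The sketch below uses the superfunction \(f(z)=(c-1)/(c^z-1)\), the form consistent with (15), (17) and (18), and compares repeated composition of \(t\) against the closed forms:

```python
import math

# Numeric check: the closed-form iterate (15), the superfunction relation
# f(z + 1) = t(f(z)) for t(z) = z/(c + z), and the representation (18)
# t^n(z) = f(n + g(z)) with the Abel function g of (17).
c = 2.0

def t(z):
    return z / (c + z)

def t_iter(n, z):
    return z / (c ** n + (1 - c ** n) / (1 - c) * z)   # formula (15)

def f(z):
    return (c - 1) / (c ** z - 1)                      # superfunction

def g(z):
    return math.log(1 + (c - 1) / z, c)                # Abel function (17)

z0, w = 0.7, 0.7
for n in range(1, 6):
    w = t(w)                                 # n-fold composition of t
    assert abs(w - t_iter(n, z0)) < 1e-12    # matches (15)
    assert abs(w - f(n + g(z0))) < 1e-12     # matches (18)

assert abs(f(1.3) - t(f(0.3))) < 1e-12       # superfunction equation
```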
I was reading the book "Genetics and Analysis of Quantitative Traits" by Lynch and Walsh. I wonder how the covariance between two individuals with coefficient of coancestry $\Theta$ gets divided into just the additive variance and dominance variance components, even in the simple $1$ locus case. Here is my understanding of the modelling (for the simple one-locus case): given a genotypic value $G_{i,j}$ of mean $0$, $i,j \in \{0,1\}$, we find numbers $\alpha_0$ and $\alpha_1$ minimising the least squares expression $\mathbb{E}(G_{i,j}-\alpha_i-\alpha_j)^2$, where the expectation is over the population. We next define the error terms in each case as $\delta_{i,j}=G_{i,j}-\alpha_i-\alpha_j$. From the properties of least squares, viewed as functions of the population, $\alpha_i$ is uncorrelated with $\delta_{i,j}$, and both have mean $0$. The claim made in the book is that given two individuals with coefficient of coancestry $\Theta$ and probability $\Delta$ that both alleles are IBD, the covariance of the genotypes $G_{i,j}$ and $G_{k,l}$ is given by $$\text{cov}(G_{i,j},G_{k,l}) = 2\Theta \sigma_A^2 +\Delta \sigma_D^2,$$ where $\sigma_A^2= 2\,\text{Var}(\alpha_i)$ and $\sigma_D^2 = \text{Var}(\delta_{i,j})$. Expanding the LHS of the expression, showing that $\mathbb{E}[(\alpha_i +\alpha_j)(\alpha_k+\alpha_l)] =2\Theta \sigma_A^2$ is quite easy. It also follows that the terms $\mathbb{E}[(\alpha_i +\alpha_j)\delta_{k,l}]=0$, from the uncorrelatedness of the errors with the $\alpha$'s. On analysing $\mathbb{E}[\delta_{i,j}\delta_{k,l}]$, we see that if both alleles are IBD, which occurs with probability $\Delta$, then this reduces to $\sigma_D^2$. This gives us the term $\Delta \sigma_D^2$. Further, if neither allele of $i,j$ is IBD with an allele of $k,l$, then the covariance is $0$. However, when exactly one of the two alleles is IBD, it is not clear to me that this covariance will still be $0$. The book seems to claim that unless both alleles are IBD, $\delta_{i,j}$ and $\delta_{k,l}$ are independent. I do not see why this is the case.
Am I missing anything here? I'd appreciate any help wrt this.
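The least-squares orthogonality the argument relies on (that $\delta$ has mean zero and is uncorrelated with the additive effects over the population) can be checked numerically for one biallelic locus in Hardy-Weinberg proportions; the allele frequency and genotypic values below are made up for illustration.

```python
import numpy as np

# One biallelic locus in Hardy-Weinberg proportions (made-up numbers):
# fit alpha_0, alpha_1 by population-weighted least squares and check
# that the residuals delta have mean zero and are uncorrelated with the
# additive effects alpha_i + alpha_j.
p = 0.3                                    # frequency of allele 0
# genotype order: (0,0), (0,1), (1,0), (1,1)
freq = np.array([p * p, p * (1 - p), (1 - p) * p, (1 - p) * (1 - p)])
G = np.array([1.0, 0.6, 0.6, -0.9])        # arbitrary genotypic values
G = G - freq @ G                           # centre to population mean 0

# design matrix: columns count copies of allele 0 and of allele 1
X = np.array([[2, 0], [1, 1], [1, 1], [0, 2]], dtype=float)
sw = np.sqrt(freq)
alpha = np.linalg.lstsq(sw[:, None] * X, sw * G, rcond=None)[0]

delta = G - X @ alpha
mean_delta = freq @ delta                       # population mean of delta
cov_alpha_delta = freq @ ((X @ alpha) * delta)  # E[(alpha_i + alpha_j) * delta]
```

Both quantities vanish to machine precision, which is exactly the orthogonality used for the cross terms in the covariance expansion.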
There are commands for the top two symbols, \ltimes and \rtimes; however, I have not been able to find commands for the other four symbols. Is there a simple way that I could create commands for these symbols?

Just combine existing symbols:

\documentclass{article}
\usepackage{amssymb}
\begin{document}
$\blacktriangleright\mathrel{\mkern-4mu}<$,
$>\mathrel{\mkern-4mu}\blacktriangleleft$,
$\blacktriangleright\joinrel\mathrel{\triangleleft}$,
$\mathrel{\triangleright}\joinrel\blacktriangleleft$
\end{document}

\joinrel is defined (robustly) as \mathrel{\mkern-3mu}. It's enough for the last two symbols; for the first two a slightly larger value of 4mu looks better to me. As a matter of fact, \ltimes and \rtimes do not yield the "unsymmetric" symbols in your picture. They can be similarly obtained by joining < / > with \triangleleft / \triangleright.

$>\joinrel\mathrel{\triangleleft}$ vs. $\rtimes$
$\mathrel{\triangleright}\joinrel<$ vs. $\ltimes$

My fantasy isn't rich enough to come up with names for all these ;-)

This takes campa's answer (+1) and makes an enhancement/alteration: it scales the result downward to occupy the same vertical footprint as the letter x. Like campa's result, it works across math styles. The MWE:

\documentclass{article}
\usepackage{mathtools,amssymb,scalerel}
\newcommand\bicrossl{%
  \mathrel{\scalerel*{\mathrel{\triangleright}\joinrel\blacktriangleleft}{x}}}
\newcommand\bicrossr{%
  \mathrel{\scalerel*{\blacktriangleright\joinrel\mathrel{\triangleleft}}{x}}}
\newcommand\biopencrossl{%
  \mathrel{\scalerel*{>\kern-.4\LMpt\joinrel\blacktriangleleft}{x}}}
\newcommand\biopencrossr{%
  \mathrel{\scalerel*{\blacktriangleright\joinrel\kern-.4\LMpt<}{x}}}
\begin{document}
$x\bicrossr y$ and $x\bicrossl y$,
$x\biopencrossr y$ and $x\biopencrossl y$,
$\scriptstyle x\bicrossr y$ and $\scriptstyle x\bicrossl y$,
$\scriptstyle x\biopencrossr y$ and $\scriptstyle x\biopencrossl y$.
\end{document}
Is the photon pair generated from electron-positron annihilation entangled? And would they work as a source of entangled photons suitable for experiments in quantum optics? Yes, they are definitely entangled. Their combined energy will exactly equal the combined energy of the original electron-positron pair, for example. The same is true for combined momentum and combined spin. The photon pair is definitely entangled. It is produced from an $S$-wave state of the electron-positron system and as such has orbital angular momentum $L=0$. But it's not useful for quantum optics: the photons' energy is too high for your usual mirrors to reflect. See fig. 33.15 on p. 23 of this http://pdg.lbl.gov/2018/reviews/rpp2018-rev-passage-particles-matter.pdf to get an idea what happens to a 511 keV photon from two-photon annihilation of an electron-positron pair at rest once it hits matter: ionization by Compton scattering dominates the interaction, which is not what you want to happen in a mirror or lens. It's worth keeping in mind, though, that the electron-positron system (positronium) can decay to any number of photons $>1$, though numbers higher than three are very rare. An even number of photons can be produced in decays of the singlet ground state ${}^1S_0$ (where the electron and positron are in the anti-symmetric spin state $\frac{1}{\sqrt{2}}\left( \left|\uparrow \downarrow \right\rangle - \left|\downarrow \uparrow \right\rangle\right)$, "parapositronium"). This defines the spin entanglement in the case you had in mind, and it has actually been measured and found to agree with full entanglement; see e.g. here http://adsabs.harvard.edu/abs/2009APS..HAW.GB108S An odd number can be produced from the triplet ${}^3S_1$ ("orthopositronium").
The triplet is much longer-lived than the singlet state (roughly a thousand times), but since it consists of three states, and because symmetries ($CP$) prevent the positronium from going from the triplet to the singlet state, an appreciable number of electron-positron pairs decays to three photons. In fact, the $S$ states of higher energy levels can also decay to photons directly, but usually they will decay to the ground state first, emitting photons of energies $O(10$-$100~\mathrm{eV})$ before the actual decay to the high-energy pair or to the three continuous-spectrum photons.
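For reference, the 511 keV figure quoted above is just the electron rest energy $m_e c^2$; a one-line check from standard constants:

```python
# Each photon from para-positronium two-photon decay (at rest) carries
# the electron rest energy m_e * c^2 (CODATA constants; c and e are
# exact by definition).
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # joules per electron-volt

E_keV = m_e * c ** 2 / e / 1e3   # about 511 keV
```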
Consider the finite multisets $\mathbf{Bag}\:X$. Its elements are given by $\{x_1,\ldots,x_n\}$ quotiented by permutations, so that $\{x_1,\ldots,x_n\}=\{x_{\pi 1},\ldots,x_{\pi n}\}$ for any $\pi\in\mathbf{S}_n$. What is a one-hole context for an element in such a thing? Well, we must have had $n>0$ to select a position for the hole, so we are left with ... I don't think there's any general algorithm that works for arbitrary semirings. The requirement to be a semiring doesn't give us a lot to work with. However, if you have a closed semiring, then there are algorithms for solving systems of linear equations over the semiring. Closed semirings: a closed semiring is a semiring with a closure operator, denoted $... There is a non-trivial randomized algorithm that can solve this in $O(n^2 \log (1/\delta))$ time, where $\delta>0$ is the desired error probability. See Verification of Identities. Sridhar Rajagopalan and Leonard J. Schulman. SIAM Journal on Computing, 29(4), pp. 1155-1163. There is really no theory behind it: operator precedence is a purely human construct. The reason is that expressions are not linear blocks of text. They are trees. For example, you can represent any arithmetic expression in Reverse Polish Notation. In a tree, there is no ambiguity. An operator acts on its children, end of story. Humans are good at reading things ... Interesting question. Factorization of functions, including factorization of polynomials, is in fact a classical problem throughout the history of mathematics. For the sake of contradiction, assume that $$x^2 + y^3 - e^{z} = f(x)*g(y)*h(y,z)$$ For the sake of simplicity, assume that $f, g, h$ are continuously differentiable inside $D$, the place where we are ...
How about the infinite sum $$\sum_{i, j \in \mathbb{N}} X^i ?$$ The derivative is $$\sum_{i, j \in \mathbb{N}} \underbrace{X^i + \cdots + X^i}_{i+1}$$ which is equal to the original by associativity and commutativity of sums. Also, the infinite sum is equal to $\sum_{j \in \mathbb{N}} \mathsf{List}(X)$, so we could try to calculate the derivative using ... Whenever you are faced with two Boolean expressions $f,g$ on $n$ variables and wish to know whether they are equivalent, there is a simple algorithm you can apply: go over all $2^n$ possible truth assignments, and check whether $f$ and $g$ have the same truth value on each. While this is infeasible for large $n$, in your case $n = 4$, so there are only ... I assume that in your definition $a \leq b$ iff $a + b = b$. First, note that if $a \leq b$ and $b \leq a$, then $a = a + b = b + a = b$. Therefore, in order to show that $a = b$, it is sufficient to show that $a \leq b$ and $b \leq a$. Now, you want to show that $(a + b)^\ast = (a + ab + b)^\ast$. As explained in the previous paragraph, it is sufficient ... According to Wikipedia, these are two names for the same concept. You can differentiate them by stating that a Boolean algebra is an algebraic structure having operations $\land,\lor,\lnot$ satisfying certain axioms, and in contrast a Boolean lattice is a lattice having certain properties; but the two definitions are equivalent. Sometimes Boolean algebra and ... As you mentioned, you can factor $f_1$ and $f_2$ in polynomial time. Consider the multiset $D_1$ of degrees of the factors of $f_1$, and the multiset $D_2$ of degrees of the factors of $f_2$. If $D_1 \ne D_2$, they are not isomorphic. If $D_1 = D_2$, they are isomorphic. That takes care of determining whether they are isomorphic. Computing an ... We can start from $\sin(x)$, which has a nice regular graph. To avoid negative values you can simply use the absolute value $|\sin(x)|$.
This produces a graph similar to yours, but with constant height. To decrease the height as $x \to \infty$ we want to multiply that function by something that decreases. For example $\frac{1}{1+|x|}$ goes to $0$ as $x \to \infty$... L. J. Stockmeyer proves that SET BASIS is $\mathrm{NP}$-complete. Reference: L. J. Stockmeyer, The set basis problem is $\mathrm{NP}$-complete, Tech. Report RC-5431, IBM, 1975. Each vector of your problem can be seen as a subset of $\{1, 2, \dots, n\}$, and the component-wise OR operation corresponds to set union. So a basis for a collection of Boolean ...
For a random integer $x$ chosen uniformly between 2 and $n$, what is the expected value of the smallest prime factor of $x$ as a function of $n$? What is the behavior of the function as $n$ tends to infinity? A quick and dirty answer... (I began before Will answered...) I first address the following question: what is the probability $\pi_n$ that a number has the $n$-th prime $p_n$ as smallest prime factor. A random number is even with proba ${1\over 2}$, so the smallest prime factor will be 2 with probability ${1\over 2}$. An odd number is a multiple of 3 with proba ${1\over 3}$, so the smallest prime factor will be $3$ with proba ${1\over 2\cdot 3}$. If it is not divisible by 2 or 3, which happens with probability $1 - {1\over 2} - {1\over 6} = {1\over 3}$, this will be $5$ with proba ${1\over 5}\times{1\over 3} = {2 \over 2\cdot 3 \cdot 5}$. Then it will be 7 with proba ${1\over 7} \times (1 - {1\over 2} - {1\over 6} - {1\over 15}) = {1\over 7} \times {4 \over 15} = {8 \over 2\cdot 3 \cdot 5\cdot 7}$. Denoting this proba by $\pi_n$ and by $p_n$ the sequence of primes, we have $\pi_1 = {1\over 2}$ and $\pi_{n} = {1 \over p_n} \left( 1 - \sum_{i=1}^{n-1} \pi_i\right)$. Edit I take some time to see the connection between this answer and Will’s. I compute the totient function: $\phi(2) = 1$, $\phi(2\cdot3) = 1\cdot 2$, $\phi(2\cdot 3\cdot 5) = 1 \cdot 2 \cdot 4$. Denoting $p_n\# = \prod_{i\le n} p_i$, it appears that in the first few terms I get the following: $$\pi_n = {\phi(p_{n-1}\#) \over p_n\#},$$ which is slightly different – but Will is computing an expectation, and he is right, cf. Edit 6 below.
Edit 2 This is logical from the definition of the totient function: ${\phi(p_{n-1}\#)\over p_{n-1}\#}$ is the proportion of numbers which are not divisible by $p_1, \dots, p_{n-1}$; multiply by $1\over p_n$ to get the proportion of numbers which are not divisible by $p_1, \dots, p_{n-1}$ but divisible by $p_n$. If one manages to prove by induction that the above-defined $\pi_n$ coincides with this, the fact that the sum is 1 should be clear. Edit 3 It is not difficult to complete. If we prove that for all $n$,$$1 - \sum_{i\le n} \pi_i = {\phi(p_n\#) \over p_n\#},$$we are done. This is true for $n=1, 2$. The induction step is:$$\begin{array}{rcl} 1 - \sum_{i\le n+1} \pi_i &=& \left(1 - \sum_{i\le n} \pi_i\right) - \pi_{n+1} \\ &=& \left(1 - \sum_{i\le n} \pi_i\right)\left( 1 - {1\over p_{n+1}}\right)\\ &=& \left({\phi(p_n\#) \over p_n\#} \right) \left( {p_{n+1} - 1\over p_{n+1}}\right)\\ &=& { \phi(p_n\#) \times (p_{n+1} - 1) \over p_n\# \times p_{n+1} }\\ &=& {\phi(p_{n+1}\#) \over p_{n+1}\#} \end{array},$$so we are done. Edit 4 Using Euler’s trick, we have easily that $$1 - \sum_{i=1}^\infty \pi_i = \prod_p \left(1-{1\over p}\right) = { 1 \over \sum_{n=1}^\infty {1\over n}} = 0,$$which can surely be rewritten respecting the modern standards... I am not familiar with analytic number theory, but I am sure this product is a classic. Edit 5 Yes, it is a classic, cf. Mertens’ third theorem, which states that $\prod_{p\le n} \left(1-{1\over p}\right) \sim {e^{-\gamma}\over \log n}$. Using $p_n \sim n \log(n)$ we get that $$1 - \sum_{i=1}^n \pi_i = \prod_{p\le p_n} \left(1-{1\over p}\right) \sim {e^{-\gamma}\over \log n + \log\log n} \sim {e^{-\gamma}\over \log n }$$ and $$ \pi_n \sim {e^{-\gamma}\over n \log^2 n},$$ which gives you the asymptotic behaviour of this sequence. Edit 6 In fact I didn’t address the original question but this is possible now.
The smallest prime factor of a number taken uniformly between 1 and $p_n-1$ is $p_k$ with probability $\simeq {\pi_k \over \sum_{\ell<n}\pi_\ell}$. Its expectation is$$ { \sum_{\ell < n} p_\ell\pi_\ell \over \sum_{\ell < n} \pi_\ell} \sim \sum_{\ell < n} p_\ell\pi_\ell = \sum_{\ell < n} {\phi(p_{\ell-1}\#) \over p_{\ell-1}\#},$$as Will first stated (oh my God, why did that take me so long?). The above equivalent shows that this goes to infinity as $n\rightarrow\infty$. Comparing to an integral leads to a $O\left( {p_n \over \log p_n} \right)$. Taking the primorials $$P_0 = 1, \; P_1 = 2, \; P_2 = 6, \; P_3 = 30, \; P_4 = 210,$$ I get your expected value as $n$ increases to $\infty$ as $$ E = \sum_{k = 0}^\infty \; \frac{\phi(P_k)}{P_k},$$ where $\phi$ is Euler's totient function. I'm not sure yet whether this is finite. Intuitively, I would expect this to be of the order of $\dfrac{n}{\log_e n}$ on the grounds that it is slightly more than the sum of the primes less than or equal to $n$ divided by $n-1$, and that is slightly less than $n$ times the frequency of primes near $n$. Empirically this looks reasonable: for $n$ between 32000 and 64000, something like $1.9\dfrac{n}{\log_e n}$ looks fairly close.
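The recursion for $\pi_n$ and the primorial identity $\pi_n = \phi(p_{n-1}\#)/p_n\#$ are easy to cross-check numerically. A quick sketch in exact rational arithmetic (the function names are mine, not from the thread):

```python
from fractions import Fraction

def primes(k):
    """First k primes by trial division."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def pi_seq(k):
    """pi_n = (1/p_n)(1 - sum_{i<n} pi_i): probability that the
    smallest prime factor is p_n, as exact fractions."""
    out, rem = [], Fraction(1)
    for p in primes(k):
        out.append(rem / p)
        rem -= out[-1]
    return out

def primorial_form(k):
    """phi(p_{n-1}#) / p_n#  (with phi(p_0#) = phi(1) = 1)."""
    out, phi, prim = [], 1, 1
    for p in primes(k):
        prim *= p
        out.append(Fraction(phi, prim))
        phi *= p - 1
    return out

# the two formulas agree: 1/2, 1/6, 1/15, 4/105, ...
assert pi_seq(8) == primorial_form(8)
```

Counting smallest prime factors of the integers up to $10^5$ directly gives proportions close to these fractions, e.g. roughly one integer in 15 has smallest prime factor 5.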
I'm currently working on a project where I need to cascade several transformers (the core we use does not work nearly as well as it ought to..). I'm using the equivalent circuit shown in (A), transformed it into an ideal transformer and equivalent circuit (B) by dragging over the circuit on the secondary side, where $$\sigma_2 L_2 \rightarrow \sigma_2 L_2 N_1^2\\R_2\rightarrow R_2 N_1^2\\C_2 \rightarrow \frac{C_2}{N_1^2}$$ This works very well for a single transformer. When I use the same method for 2 cascaded equivalent circuits of transformers (C) to transform them into the e.c. (D) the inductors, resistors and capacitors from the second e.c. transformer are modified accordingly (from C to D): $$\sigma_2 L_2 \rightarrow \sigma_2 L_2 N_1^2\\R_2\rightarrow R_2 N_1^2\\C_2 \rightarrow \frac{C_2}{N_1^2}\\C_3 \rightarrow \frac{C_3}{N_1^2}\\R_3\rightarrow R_3 N_1^2\\ \sigma_3 L_3 \rightarrow \sigma_3 L_3 N_1^2\\ R_{fe} \rightarrow R_{fe} N_1^2\\ L_3 \rightarrow L_3 N_1^2\\ \sigma_4 L_4 N_2^2\rightarrow \sigma_4 L_4 N_1^2 N_2^2\\ R_4 N_2^2\rightarrow R_4 N_1^2 N_2^2\\ \frac{C_4}{N_2^2} \rightarrow \frac{C_4}{N_1^2 N_2^2} $$ The LTSpice simulation of (D) works as one would expect it to; the problem is: the LTSpice simulation of (C) does not. It shows a decrease in amplification after the second ideal transformer instead of an increase. I suspect my method might be flawed, but I can't really pin it down right now. Does anyone have an idea what could have gone wrong / where I made a mistake? If you can't open the schematic, I made a screenshot, hopefully that helps? http://imgur.com/pLg1v83 Thank you! ASC FILES: EDIT: 1 I'm limited by the number of turns (primary <= 10 turns) and the voltage (voltage through primary inductivity < 0.2V), but require primary inductivities > 270uH with low losses and low saturation between 0.5MHz and 1.5MHz.
The transformer should have a ratio of ~ 40, which is why the number of turns on the primary side matters and I'm considering cascading multiple transformers, otherwise I won't have enough space and might run into trouble with capacitive coupling and other effects. I'm using ferrite cores (Al ~ 2700nH/[turn^2] delivers a primary inductivity of 270uH for 10 turns). Ideally the B-H curve should be linear for my use, but that's another question.
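The referral arithmetic in the question can be sketched in a few lines (a hypothetical helper of my own, not from the poster's files): referring secondary-side series elements through an ideal transformer of ratio $N$ scales impedances by $N^2$, and cascaded stages compound as $(N_1 N_2)^2$:

```python
def refer_to_primary(elems, n):
    """Refer secondary-side elements through an ideal transformer
    with turns ratio n: series impedances scale by n^2 (R -> R*n^2,
    L -> L*n^2) and shunt capacitance scales as C -> C/n^2."""
    return {'R': elems['R'] * n ** 2,
            'L': elems['L'] * n ** 2,
            'C': elems['C'] / n ** 2}

# elements sitting after the second ideal transformer must be
# referred twice, once per stage, compounding to (n1*n2)^2
stage = {'R': 1.0, 'L': 1e-6, 'C': 1e-9}
n1, n2 = 2.0, 3.0
referred = refer_to_primary(refer_to_primary(stage, n2), n1)
```

If the simulation of (C) disagrees with (D), checking each element of (D) against this compounded scaling is a quick way to spot a term that was scaled by only one of the two ratios.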
For a PDF version of this article, please click this link Introduction You may well have come across impedance expressed in terms of real and imaginary parts (or resistance and reactance) when using equipment such as an Antenna Analyser, or impedance in the form R + jX. This may mean something to some readers whilst it probably means nothing and is somewhat baffling to many others. In this article I am going to try to explain in SIMPLE terms what it is all about. Please do not “get turned off” because it contains some mathematics, it’s all very simple — really! This article is not meant as a mathematical treatise on the subject and covers, for the sake of simplicity, only the series circuit. It will, however, give your brain, calculator and computer some exercise! Basic AC Theory During studies for the old RAE or newer Full Licence the concepts of resistance and reactance have been taught and the following equations will have been given: \text{Inductive reactance:}\quad X_L = 2\pi f L ~\Omega \text{Capacitive reactance:}\quad X_C = \frac{1}{2 \pi f C} ~\Omega (Note: at least X is in Ohms and we have a chance of combining it with resistance R; L and C themselves are not in Ohms). Figure 1: Impedance in a Series Circuit You will also have been taught that inductance and capacitance introduce a phase shift in the circuit between the applied voltage and the current flowing. A circuit has impedance rather than resistance when inductance and capacitance are also involved in a circuit carrying an alternating current. Again, referring to what has been taught, impedance can be represented by a triangle such as shown in Fig. 1 for a series circuit. It is not correct to write that Z = R + X_L or Z = R + X_C as this does not take into account the phase shift introduced by the reactive element. Rather, you must use the formulae given in Fig. 1.
It would, however, be very convenient if there were a method whereby R and X could be combined in some form without the use of square roots and trigonometric functions. It would allow a consistent set of units — Ohms — instead of dealing with \text{pF}, \text{μF}, \text{μH}, \text{mH} and so forth, and also be convenient if reactances could just be added and subtracted. This would help us in, for example, antenna calculations, where we need to get a series reactance that will make an antenna look purely resistive. The next section explains a method for attaining this, with some examples. The “j” Operator There is a mathematical tool which uses the j operator (it is often called i in mathematical books but engineers use j). This allows us to write Z=R+jX_L or Z= R-jX_C — note the minus sign for capacitive reactance, it is important. The R and the j terms cannot be further simplified, i.e. if Z = 65 + j40 this is its simplest form. The j term implies a quantity that is at 90\degree (or in quadrature) to the resistive term. Two practical examples – see Fig. 2. Using the formulae for reactance given earlier and the frequencies quoted in the examples, the series circuits can be specified as Z = 220 + j 628.3 and Z=100-j15.9 respectively (note these figures have been rounded off). This gives phase angles of approximately 72\degree (lagging) and 9\degree (leading) respectively. The minus sign indicates the reactance is capacitive and the plus sign denotes inductive. Figure 2: Two Practical Examples If the series circuit as shown in Fig. 3 is used, the combined impedance is given by: Z = R_1 + R_2 + j X_L – j X_C The non-j and the j terms can be collected together, which gives: Z = (R_1 + R_2) + j (X_L – X_C) Figure 3: Combined Circuits Thus series resistances can be added together (something that should be known), as can series reactances — but taking the sign into account. The reactances can only be added together provided that they are quoted at the same frequency.
Taking the examples from Fig. 2 and combining them in series gives: Z = 100 + 330 + j(628.3 – 15.9) \\ Z = 430 + j612.4 This denotes that the combined circuit at 100\text{kHz} has a resistive part of 430\Omega and an inductive reactance of 612.4\Omega (because the j term is positive). This is equivalent to 0.975\text{mH} (or 975\text{μH}). The resulting phase angle of 55\degree is obtained from: \tan \phi = 612.4/430 A well-known condition is achieved when the resultant j term equals zero, i.e. when X_L = X_C. From earlier then: 2 \pi fL = 1/(2 \pi fC) When rearranging this one obtains: f=\frac{1}{2\pi \sqrt{LC}} This is the well-known resonant frequency formula. You are then left with Z = the resistive term only, i.e. a series circuit at resonance is purely resistive — something one learnt for the exams? Figure 4: Antenna System Impedance A Practical Use You could well ask what is the use of this, is it just a mathematical exercise? No, it is not: a practical use was hinted at earlier. The following example is just one application. The impedance at an antenna system is measured at 3.7\text{MHz} using an antenna analyser and it is found that the resistive part is 38\Omega and the reactive part is -j100\Omega (ie Z = 38 – j100). To get maximum power into the antenna it is desirable to eliminate the reactive part so that, from terminals AB, the impedance is purely resistive. Assuming the antenna analyser gives an equivalent series circuit, then a reactance must be added in series to cancel the -j100 term. This is obviously +j100 and the value of the inductance can now be calculated as 4.3\text{μH} at 3.7\text{MHz}.
farads and henrys and sub-multiples) and equivalent reactances which are expressed in one single unit — the Ohm. The practical examples given will hopefully allow you to use the operator for other applications. To ease the maths, you can write a simple spreadsheet to do it for you.
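Instead of a spreadsheet, a few lines of Python reproduce the worked figures above, assuming the series combination's parts are 1 mH and 100 nF (values which give the quoted 628.3 Ω and 15.9 Ω reactances at 100 kHz):

```python
import cmath
import math

f = 100e3                                 # Hz, frequency of the examples
XL = 2 * math.pi * f * 1e-3               # 1 mH   -> ~628.3 ohm inductive
XC = 1 / (2 * math.pi * f * 100e-9)       # 100 nF -> ~15.9 ohm capacitive

Z = (100 + 330) + 1j * (XL - XC)          # series combination: 430 + j612.4
phase = math.degrees(cmath.phase(Z))      # ~55 degrees, from tan(phi) = 612.4/430

# antenna example: series L cancelling -j100 ohm at 3.7 MHz
L_match = 100 / (2 * math.pi * 3.7e6)     # ~4.3 uH
```

The same three lines of arithmetic (reactance, complex sum, phase) cover any series R-L-C combination at a single frequency.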
Edit: A colleague informed me that my method below is an instance of the general method in the following paper, when specialized to the entropy function, Overton, Michael L., and Robert S. Womersley. "Second derivatives for optimizing eigenvalues of symmetric matrices." SIAM Journal on Matrix Analysis and Applications 16.3 (1995): 697-718. http://ftp.cs.nyu.edu/cs/faculty/overton/papers/pdffiles/eighess.pdf Overview In this post I show that the optimization problem is well posed and that the inequality constraints are inactive at the solution, then compute first and second Frechet derivatives of the entropy function, then propose Newton's method on the problem with the equality constraint eliminated. Finally, Matlab code and numerical results are presented. Well posedness of the optimization problem First, the sum of positive definite matrices is positive definite, so for $c_i > 0$, the sum of rank-1 matrices $$A(c):=\sum_{i=1}^N c_i v_i v_i^T$$is positive definite. If the set of $v_i$ is full rank, then the eigenvalues of $A$ are positive, so the logarithms of the eigenvalues can be taken. Thus the objective function is well-defined on the interior of the feasible set. Second, as any $c_i \rightarrow 0$, $A$ loses rank, so the smallest eigenvalue of $A$ goes to zero. I.e., $\sigma_{min}(A(c)) \rightarrow 0$ as $c_i \rightarrow 0$. Since the derivative of $-\sigma \log(\sigma)$ blows up as $\sigma \rightarrow 0$, one cannot have a sequence of successively better and better points approaching the boundary of the feasible set. Thus the problem is well-defined and furthermore the inequality constraints $c_i \ge 0$ are inactive. Frechet derivatives of the entropy function In the interior of the feasible region the entropy function is Frechet differentiable everywhere, and twice Frechet differentiable wherever the eigenvalues are not repeated. To do Newton's method, we need to compute derivatives of the matrix entropy, which depends on the matrix's eigenvalues.
This requires computing sensitivities of the eigenvalue decomposition of a matrix with respect to changes in the matrix. Recall that for a matrix $A$ with eigenvalue decomposition $A = U \Lambda U^T$, the derivative of the eigenvalue matrix with respect to changes in the original matrix is,$$d\Lambda = I \circ (U^T dA U),$$and the derivative of the eigenvector matrix is,$$dU = UC(dA),$$where $\circ$ is the Hadamard product, with the coefficient matrix $$C_{ij} = \begin{cases}\frac{u_i^T dA u_j}{\lambda_j - \lambda_i}, & i\neq j \\0, & i=j\end{cases}$$ Such formulas are derived by differentiating the eigenvalue equation $AU=U\Lambda$, and the formulas hold whenever the eigenvalues are distinct. When there are repeated eigenvalues, the formula for $d\Lambda$ has a removable discontinuity that can be extended so long as the nonunique eigenvectors are chosen carefully. For details about this, see the following presentation and paper. The second derivative is then found by differentiating again,\begin{align}d^2 \Lambda &= d(I \circ (U^T dA_1U)) \\&= I \circ (dU_2^T dA_1 U + U^T dA_1 dU_2) \\&= 2 I \circ (dU_2^T dA_1 U).\end{align} While the first derivative of the eigenvalue matrix could be made continuous at repeated eigenvalues, the second derivative cannot, since $d^2 \Lambda$ depends on $dU_2$, which depends on $C$, which blows up as the eigenvalues degenerate towards one another. However, so long as the true solution does not have repeated eigenvalues, then it is OK. Numerical experiments suggest this is the case for generic $v_i$, though I don't have a proof at this point. This is really important to understand, as maximizing entropy generally would try to make eigenvalues closer together if possible.
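The first-derivative formula is easy to sanity-check against finite differences (a sketch of my own, assuming distinct eigenvalues; NumPy's `eigh` returns eigenvalues in ascending order, which keeps the ordering stable under a small perturbation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
S = rng.standard_normal((n, n))
A = S + S.T                       # symmetric; eigenvalues generically distinct
P = rng.standard_normal((n, n))
dA = P + P.T                      # symmetric perturbation direction

lam, U = np.linalg.eigh(A)
dlam = np.diag(U.T @ dA @ U)      # I o (U^T dA U): the diagonal part

eps = 1e-6                        # central finite difference along dA
fd = (np.linalg.eigvalsh(A + eps * dA)
      - np.linalg.eigvalsh(A - eps * dA)) / (2 * eps)

assert np.allclose(dlam, fd, atol=1e-5)
```

The same trick, applied twice, checks the $d^2\Lambda$ formula, though the tolerance must be loosened when any eigenvalue gap is small.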
Eliminating the equality constraint We can eliminate the constraint $\sum_{i=1}^N c_i = 1$ by working on only the first $N-1$ coefficients and setting the last one to $$c_N = 1-\sum_{i=1}^{N-1} c_i.$$ Overall, after about 4 pages of matrix calculations, the reduced first and second derivatives of the objective function with respect to changes in the first $N-1$ coefficients are given by,$$df = dC_1^T M^T [I \circ (V^T U B_a U^T V)]$$$$ddf = dC_1^T M^T [I \circ (V^T[2dU_2 B_a U^T + U B_b U^T]V)],$$ where$$M = \begin{bmatrix}1 & \\& 1 & \\&&\ddots& \\&&&1\\-1 & -1 & \dots & -1\end{bmatrix},$$ $$B_a = \mathrm{diag}(1+\log \lambda_1, 1 + \log \lambda_2, \ldots, 1 + \log \lambda_N),$$ $$B_b = \mathrm{diag}(\frac{d_2\lambda_1}{\lambda_1},\ldots,\frac{d_2\lambda_N}{\lambda_N}).$$ Newton's method after eliminating the constraint Since the inequality constraints are inactive, we simply start in the feasible set and run trust-region or line-search inexact Newton-CG for quadratic convergence to the interior maximum. The method is as follows (not including trust-region/line search details): Start at $\tilde{c} = [1/N,1/N,\ldots,1/N]$. Construct the last coefficient, $c = [\tilde{c},1 - \sum_{i=1}^{N-1} c_i]$. Construct $A = \sum_i c_i v_i v_i^T$. Find the eigenvectors $U$ and eigenvalues $\Lambda$ of $A$. Construct the gradient $G = M^T [I \circ (V^T U B_a U^T V)]$. Solve $H p = G$ for $p$ via conjugate gradient (only the ability to apply $H$ is needed, not the actual entries). $H$ is applied to a vector $\delta \tilde{c}$ by finding $dU_2$, $B_a$, and $B_b$ and then plugging into the formula,$$M^T [I \circ (V^T[2dU_2 B_a U^T + U B_b U^T]V)]$$ Set $\tilde{c} \leftarrow \tilde{c} - p$. Goto 2. Results For random $v_i$, with linesearch for steplength, the method converges very quickly. For example, the following results with $N=100$ (100 $v_i$) are typical - the method converges quadratically.
>> N = 100; >> V = randn(N,N); >> for k=1:N V(:,k)=V(:,k)/norm(V(:,k)); end >> maxEntropyMatrix(V); Newton iteration= 1, norm(grad f)= 0.67748 Newton iteration= 2, norm(grad f)= 0.03644 Newton iteration= 3, norm(grad f)= 0.0012167 Newton iteration= 4, norm(grad f)= 1.3239e-06 Newton iteration= 5, norm(grad f)= 7.7114e-13 To see that the computed optimal point is in fact the maximum, here is a graph of how the entropy changes when the optimal point is perturbed randomly. All perturbations make the entropy decrease. Matlab code All in 1 function to minimize the entropy (newly added to this post):https://github.com/NickAlger/various_scripts/blob/master/maxEntropyMatrix.m
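As an independent check of the setup, a generic constrained solver reaches the same kind of interior optimum from the uniform start (a sketch assuming SciPy is available; `neg_entropy` is my own name for the negated objective):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 8
V = rng.standard_normal((N, N))
V /= np.linalg.norm(V, axis=0)            # unit columns v_i

def neg_entropy(c):
    A = (V * c) @ V.T                     # sum_i c_i v_i v_i^T
    lam = np.clip(np.linalg.eigvalsh(A), 1e-300, None)
    return float(np.sum(lam * np.log(lam)))  # minimizing -entropy

c0 = np.full(N, 1.0 / N)
res = minimize(neg_entropy, c0, method='SLSQP',
               bounds=[(0.0, 1.0)] * N,
               constraints={'type': 'eq', 'fun': lambda c: c.sum() - 1.0})
```

The solution stays strictly inside the simplex, consistent with the argument above that the inequality constraints are inactive; a general-purpose SQP solver converges far more slowly than the specialized Newton-CG, which is the point of deriving the derivatives by hand.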
How do derivatives of operators work? Do they act on the terms in the derivative or do they just get "added to the tail"? Is there a conceptual way to understand this? For example: say you had the operator $\hat{X} = x$. Would $\frac{\mathrm{d}}{\mathrm{d}x}\hat{X}$ be $1$ or $\frac{\mathrm{d}}{\mathrm{d}x}x$? The difference being when taking the expectation value, would the integrand be $\psi^*\psi$ or $\psi^*(\psi+x\frac{\mathrm{d}\psi}{\mathrm{d}x})$? My specific question is about the band effect in solids. To get a better understanding of the system, we've used Bloch's theorem to express the wavefunction in the form $\psi = e^{iKx}u_K(x)$ where $u_K(x)$ is some periodic function. With the fact that $\psi$ solves the Schrodinger equation, we've been able to derive an "effective Hamiltonian" that $u_K$ is an eigenfunction of, $H_K = -\frac{\hbar^2}{2m}(\frac{\mathrm{d}}{\mathrm{d}x}+iK)^2+V$. My next problem is to find $\left\langle\frac{\mathrm{d}H_K}{\mathrm{d}K}\right\rangle$, which led to this question. Some of my reasoning: An operator is a function on functions, so like all other functions we can write it as $f(g(x))$. When you take the derivative of this function, you get $f'(g(x))*g'(x)$. So looking at the operator, $\hat{X}$, we can say that it is a function on $\psi(x)$, $\hat{X}(\psi)= x\psi$. So taking the derivative gives us: $$\frac{\mathrm{d}\hat{X}}{\mathrm{d}x} = \psi+ x\frac{\mathrm{d}\psi}{\mathrm{d}x}$$ but you could also say that $\hat{X}=x$ (not a function), so $$\frac{\mathrm{d}\hat{X}}{\mathrm{d}x} = \frac{\mathrm{d}}{\mathrm{d}x}x = 1$$ Now I'm inclined to say that $\hat{X}$ is a function, but it seems like for this question, it is better to just treat it as a constant and naively (in my opinion) take its derivative. So which way do I do it?
Someone’s inspirational iterative approach index card wasn’t 100% accurate. Better. Eating at a picnic spot or wilderness campsite is great, but prep can be tricky. You still want decent kitchen utensils, but for knives there’s a downside to having something sharp and pointy rattling around in your belongings! When you have a 3D printer, problems like this have an easy solution… I want to build a unique mechanical counter and I’ve come across an interesting design challenge. There are marble “clocks” that count up the time – typically in a rack of single-minute balls, a rack of 5-minute balls, and a rack of hour balls. When the last ball reaches a rack, it dumps out all the balls and sends a single ball to the next rack. I’m thinking this concept would be great for a mechanical decade counter – just have one rack per digit of what you’re counting. With chimes, strange lifting mechanisms, or “complex just because” Most of these clocks are like type A – they use racks with a “deer scarer” tipping mechanism – when enough balls land in the rack, they all tip out. Some like type B use a better alternative – the last ball bounces out and releases a gate that allows all the stored balls to roll through the rack. All the existing designs have a common flaw; balls flow from top to bottom, the least significant rack is at the top. To read the state of the display you have to unintuitively read the racks from bottom to top. This seems a bit ‘wrong’ to me. Naturally you read numbers from left to right. An ideal mechanism would: I want to design a 100% modular “digit” of such a mechanism, so I can stack as many of them horizontally as needed. Conservation of energy. If the design is like the above picture, where the rail is flat… When a rack is emptied, surely those 9 balls that fall out could be harnessed to transfer energy back to the rolling ball?
The devil being in the details, what would you suggest for the design of this “digit”? Watch this space… This article covers one method of attaching wires to a bare lithium-polymer battery pouch. This could also be done using a battery spot welder, clamping, or screwing. Before you solder the terminals, plan your wiring. Where should the battery wires exit the battery? In what direction will they go, and how far? (A scale printout of the layout may help) Strip and tin the wire ends. Each bare end should be slightly shorter than the battery tab’s width. Peel back one of the battery terminals. With a small file, roughen the outer half of the battery tab. Fold the end of the battery tab over, to create a small ‘hook’. It should be just large enough to fit the wire in. Put a tiny drop of flux in the hook. You can spread it around with a piece of wire. Place the wire into the hook – making sure it matches the polarity – and press it closed. It may help to clamp a little bit of the insulation in the hook as well. Carefully, press the soldering iron against the folded hook. The aim here is to heat up the tab, the wire and the flux, without heating up the battery. If the battery near the tab is warm, stop and wait 2-3 minutes before attempting again. Run the solder into the folded hook until it begins to melt and ‘wet’ to the metal. As soon as it seems that a good contact has been made, stop. The solder should have bonded well to both sides of the hook, and to the wire as well. Roll the tab up until it is inside the battery sled. Replace the covering tape. Repeat for the second battery tab and wire. Once both tabs are soldered, the wires can be fixed in place with hot glue, and the area covered with a small quantity of masking tape. Complete! There’s a strange machine in the workshop! It’s big and interesting, but how do you make it do something useful? This post will answer that, though it won’t make you a master machinist in one go. 
I’ll assume that you already made a design, and now you want to fabricate it. The X-Carve comes with some cloud-based software called “Easel”. There’s been plenty of hate directed at it. Easel is nowhere near perfect, but it can be wrangled into submission, and it can even be useful. If you have complex requirements you might actually need something more advanced. Easel works in the browser. Boo. On the plus side; Easel only supports SVG import, not DXF. Lots of scaling issues can occur when passing SVG files between programs. Blech. Here’s a whole section just on getting your design into Easel – I have found a process that works: Part B: CAM CAM stands for “computer aided manufacturing”; how to get from a model to instructions that make a physical part. This step isn’t particularly difficult, but it’s important that you pass the correct instructions to the machine! In order, I recently bought a heap of these switches – small DPDT toggle buttons – for $0.07 ea. These switches are not that large, the … but how to read them all? First of all, forget ‘one I/O per switch’, it will not scale well. The most common method to wire up many switches is in a matrix. Each row uses an I/O, each column uses an I/O, and pressing the switch connects that row and column together. 16 switches need 8 I/O, 64 switches need 16 I/O, it scales well. (and with a demux on one side, even fewer are needed) However, switches are only simple electrical devices. When a switch is closed current can freely flow in either direction. If only one or two are closed, the patterns are unique. As soon as three or more switches are closed, a simple matrix can no longer read the switches with certainty. For example, there is no way to distinguish between these five switch states: The matrix option will not be able to read all the switches. This is a common problem faced by designers. What is the common solution? A diode can be placed next to each switch to allow current only in one direction.
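Two of those indistinguishable states can be demonstrated with a tiny connectivity simulation (a hypothetical helper of my own, not from the post): without diodes, current driven into a row reaches every column connected to it through any chain of closed switches.

```python
def scan(closed, rows, cols):
    """Diode-less matrix scan: while driving row r, column c reads
    active iff c is reachable from r via ANY chain of closed
    switches (current flows both ways without diodes)."""
    adj = {('r', i): set() for i in rows}
    adj.update({('c', j): set() for j in cols})
    for i, j in closed:
        adj[('r', i)].add(('c', j))
        adj[('c', j)].add(('r', i))
    reading = {}
    for r in rows:
        seen, stack = {('r', r)}, [('r', r)]
        while stack:                      # depth-first flood fill
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        reading[r] = frozenset(j for kind, j in seen if kind == 'c')
    return reading

rows = cols = (0, 1)
three = {(0, 0), (0, 1), (1, 0)}          # three switches closed
every = {(0, 0), (0, 1), (1, 0), (1, 1)}  # all four closed
assert scan(three, rows, cols) == scan(every, rows, cols)  # ghosting!
```

With a diode in series with each switch, the reachable set collapses to the directly connected columns and every state becomes unique, which is exactly what the per-switch diode buys you.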
“What’s wrong with that?” Well, every switch needs a diode. So now there are twice as many parts on the board, and you’ve got to find space for them, pay for them, solder them in…not great at all. What other options are there? Honey Cake Serves 12 or so? This lovely cake is very tasty, quick and simple to make, and light but moist. It was an accidental invention – the first time I made it, I was trying to make biscuits! Ingredients: Method: Serve with a hot or cold refreshing drink. So good! Don’t tell your guests that it’s vegan, and they’ll never guess… Makes a 20cm dia cake – Serves 8 Serve on its own or with ice cream/sorbet. If vegan: Do your best to keep everyone else away from your precious cake. Precious… In my previous post, I simulated some mechanical logic gates using a novel 3-bar linkage. The dimensions of the linkage are critical to achieving correct operation; Inputs \(\{ false\ false \}\), \(\{ false\ true \}\), \(\{ true\ false \}\) must all produce the same output position. For the simulations, an empirical approach was used. (fiddle with the values until it looks right) This time, I’m going to attempt to find the exact values to use for a given stroke, separation distance and output position. Ahead lurks tough mathematics (which may be unsuitable for liberal-arts majors) If it’s hard to read, then sorry! It was hard to write! 
Using this, we can calculate M':

\( \begin{align} \frac{h}{b} &= \frac{\sqrt{ d_i^2 - sep^2 - str^2 }}{\sqrt{ sep^2 + str^2 }} \\ &= \sqrt{\frac{ d_i^2 - sep^2 - str^2 }{ sep^2 + str^2 }} \\ &= \sqrt{\frac{d_i^2}{sep^2 + str^2} - 1}\\ \\ x_m' &= x_c'\ - ( y_c' - y_b)\frac{h}{b}\\ &= str - \sqrt{\frac{d_i^2}{sep^2 + str^2} - 1} \\ y_m' &= y_c' + ( x_c' - x_b)\frac{h}{b}\\ &= str \sqrt{\frac{d_i^2}{sep^2 + str^2} - 1} \end{align}\\ \)

Now that \(O\) and \(O'\) are found, and we know \(O \equiv O'\)…

\( \begin{align} x_o &= \sqrt{d_i^2 - sep^2} + d_o \\ x_o - \sqrt{d_i^2 - sep^2} &= d_o \\ ( x_o - \sqrt{d_i^2 - sep^2} )^2 &= d_o^2 \\ ( x_o - \sqrt{d_i^2 - sep^2} )^2 &= \left( x_o' - str + \sqrt{\frac{d_i^2}{sep^2 + str^2} - 1} \right)^2 + str^2 \left(\frac{d_i^2}{sep^2 + str^2} - 1\right) \end{align} \)

Ugly, but workable?

\( \begin{align} ( x_o - \sqrt{d_i^2 - sep^2} )^2 &= \left( x_o' - str + \sqrt{\frac{d_i^2}{sep^2 + str^2} - 1} \right)^2 + str^2 \left(\frac{d_i^2}{sep^2 + str^2} - 1\right) \\ ( 70 - \sqrt{d_i^2 - 50^2} )^2 &= \left( 70 - 20 + \sqrt{\frac{d_i^2}{50^2 + 20^2} - 1} \right)^2 + 20^2 \left(\frac{d_i^2}{50^2 + 20^2} - 1\right) \\ \end{align} \\ \)

Pushed through Wolfram Alpha…

\( \begin{align} \\ d_i &= \sqrt{2900} \\ &= 53.852\\ \text{and}\\ d_o &= x_o - \sqrt{d_i^2 - sep^2} \\ &= 70 - \sqrt{ 2900 - 2500 } \\ &= 50 \end{align} \)

Phew! Thusly, one can calculate the input and output linkage lengths for any logic gate. These dimensions will work for an OR gate too, as the linkages are just reversed. It's irksome that I couldn't work out a symbolic solution for the last equations, especially since \(d_i\) in the example came out to such a neat value. I might revisit this further in the future, and it'll definitely come in handy when building something with these gates. Logic gates! The building blocks of computing as we know it. What if we ditch the electrons?
I want some mechanical logic for an upcoming project, so I did some research… There are several examples of mechanical logic out there on the internet:

- Keshav Saharia’s Lego Logic – Push-pull, needs a spring return.
- Randomwraith’s Lego Logic – Push-pull with internal rotating elements, reversible.
- Mechalogic’s Logic Elements – Push-pull, AND & NOT reversible, OR needs a spring return.
- Xiaoji Chen’s Linkage Computer – Rotating logic, reversible, some mechanical issues with parallelogram weakness.
- Spillerrec’s Lego Logic Gates – Push-pull, very compact, needs a spring return. He identifies the problem with needing springs, but doesn’t have a fix. Mentions how AND/OR gates can be reversed to become OR/AND gates.
- Zeroumus’s AND Gate – Push-pull, needs a spring return.
- KNEX XOR Gate – Push-pull, needs a spring return.
- Carolin Liebl and Lisa Hopf’s NAND Gate – Push-pull, reversible, complex.

For my project, I have the following requirements; This disqualifies most of the designs I found, particularly the ‘no springs’ requirement. Mechalogic’s AND gate design uses an elastic band, but doesn’t appear to need it, so it serves as a good starting point: It’s time consuming to tweak parameters in the physical world, so I wrote a small simulation: (click to see in action) Success! Meets all the requirements, and only needs three parts. The geometry is important – the ratio will determine the output’s position when only one of the inputs is asserted, according to; Now, can an OR gate be designed with the same success? Most OR gates look like this; the output (on the left of this picture) has a flat plate, and either input will push the flat forward. Unfortunately the design needs that elastic band to return the output to zero. Inspired by some earlier attempts to couple an AND and a backwards OR gate together, I tried flipping the AND gate around; (click to see in action) Success! Oddly enough, an OR gate is just an AND gate connected up backwards.
Even the linkage lengths are the same! Best of all, the OR gate is reversible like the AND gate - no springs and no force required. As the graphs show, output displacement equals input displacement, so these designs should be chainable. Let's have a look: (click to see in action) Success! Now I just have to build something complex with them...
ToK Warszawa meeting - Rough Notes Thu 15 Feb 2007 These are just rough notes - feel free to correct them, add links, etc.
Hector Rubinstein - Stockholm - magnetic fields
Magnetic fields on kpc scales exist. They may exist on intergalactic scales - it's unclear whether or not their origin is primordial. CMB - Planck - may be able to detect magnetic fields present at the epochs not long after nucleosynthesis and recombination. It is well known that the photon has a thermal mass - about 10^{-39} [units = eV?] - which is extremely small - related to electron loops. Maxwell eqns -> Proca eqns WikipediaEn:Proca_action. m_photon < 10^-26 eV. \exists galactic mag fields at z \approx 3, making it difficult for dynamo mechanisms to explain them.
Boehm - LDM hypothesis
511 keV detection: Leventhal(sp?) 199x ApJ; OSSE 3 components, Purcell et al 1997.
candidates:
- stars - SNe, SNII, WR
- compact sources - pulsars, BH, low mass binaries - most excluded because they would imply 511 keV from the disk
- SNIa - need large escape fraction and explosion rate to maintain a steady flux
- low mass X-ray binaries - need electrons to escape from the disk to the bulge
dm + dm -> e^- + e^+; e+ loses energy -> positronium; e+e- -> positronium decays:
para-positronium: 2 gamma - monochromatic with 511 keV
ortho-positronium: 3 gamma - continuum
predictions: positron emission should be maximal with highest DM concentration (n^2 effect?)
cdm spectrum does NOT produce CDM-like power spectrum???
- at 10^9 M_sun essentially CDM-like
- by 10^6 M_sun, the difference would be important
spectrum: Ascasibar et al 2005, 2006
model through F- 511keV through Z'
relic density link with neutrino mass
interaction/decay diagrams -> link between neutrino mass and DM cross section: \sigma_\nu well-known for relic density \sim 10^{-26} cm^3/s ... to fit neutrino data
BBN: 1 MeV < m_N
low energy Beyond SM: MeV DM has definitely escaped all previous low energy experiments due to lack of luminosity. BABAR/BES II ... ?
summary ... explains low value of neutrino masses; detection at LHC may be possible but requires work; back to SUSY -> snu-neutralino-nu ?
Conlon - hierarchy problems in string theory: the power of large volume
planck scale 10^18 GeV ... cosm constant scale (10^-3 eV)^4
- large-volume models can generate hierarchies through a stabilised exponentially large volume
- predicts cosmological constant (but about 50 orders of magnitude too large - solving this problem is left to the reader/audience)
Günther Stigl - high-energy c-rays, gamma-rays, neutrinos
HESS - correlation of observations at GC with molecular cloud distribution
KASCADE - has made observations
Southern Auger - 1500 km^2 - in Chile/Argentina
Hillas plot
c-rays at highest energies could be protons, could be ions - most interactions produce pions; pi^\pm decays to neutrinos, pi^0 decays to photons (gamma-rays)
origin of very high energy c-rays remains one of the fundamental unsolved questions of astroparticle physics - even galactic c-ray origin is unclear
acceleration and sky distribution of c-rays are strongly linked to the strength and distribution of cosmic magnetic fields - which are poorly known; sources probably lie in fields of \mu-Gauss
HE c-rays, pion production, gamma-rays/neutrinos - all three fields should be considered together; strong constraints arise from gamma-ray overproduction
Khalil - DM - SUSY - brane cosmology (British University in Egypt = BUE)
- friedmann eqn modified in 5D (brane model)
- dark matter relic abundance
m_\nu = \sqrt{ \sigma\nu / (128 \pi^3) } m_N^2 \ln( \Lambda^2 / m_N^2 )
Now I want you to consider a single particle, P, inside this event horizon, at a distance \(r'\) from the origin, on its way to the centre. In a thought experiment, or Gedanken, compare the motion of this particle P with that of an identical particle, P', on the surface of an identical distribution of matter but with all matter at a greater distance than \(r'\) removed. We know that in the second case - in our thought experiment - the exterior field at P' must be given by the Schwarzschild metric because of Birkhoff's theorem, whilst in the first case - the real world situation - the field at P must be completely unaffected by the symmetrical shell of matter any further from the origin than P, due to the second aspect of Birkhoff's theorem given earlier. Consequently, the motion of the two particles, P and P', should be, must be, identical. The particle P will move ever closer to the origin until a new event horizon forms for a distant observer. At this point we have \[r=\frac{2Gm_{r}}{c^2}\] where \(m_{r}\) is the mass enclosed by the spherical surface through the point P but which excludes any mass further out. Although in the original case an event horizon may never be observable, we have no reason to assume that it does not still have significance, and crucially, that for a distant observer the point P would never have crossed this newly formed event horizon in a finite time. That is, if we could observe it at all, it would still be hovering above this new event horizon. If this were untrue in the interior of a black hole, then we would have to accept that what happened there depended entirely upon whether it was being observed or not. This would not be an acceptable modification to the existing theory of General Relativity.
As our test particle was at an arbitrary distance from the origin, this must be equally true for all particles within the event horizon of the original black hole, and, as a consequence, the eventual distribution of mass must be such that for all \(r\) less than \(r_s\) \[r=\frac{2G}{c^2}\int_0^r4\pi r'^2\rho(r')\,dr'\] where \(\int_0^r4\pi r'^2\rho(r')\,dr'\) is simply the mass enclosed by a sphere of radius \(r\), with \(\rho(r)\) being the eventual mass distribution function. At this point, rearranging and differentiating gives \[\rho(r)=\frac{c^2}{8\pi Gr^2}\] The conclusion The black hole must have a density inside the outer event horizon that is inversely proportional to the square of the (reduced) distance from the origin. I hope you find this argument as compelling as I do; if not, I look forward to your thoughts. Now, after all this heavy stuff, time to look at a more restful place, a place of absolute peace and tranquillity - the end of time itself! Nirvana-> Agree or disagree, or have any questions or observations about this? I would love to hear from you, so please get in touch or leave a comment. Your views are always most welcome. See this published paper for more information.
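As an aside, the final profile can be checked mechanically: with \(\rho(r)=c^2/(8\pi G r^2)\) the integrand \(4\pi r'^2\rho(r')\) is the constant \(c^2/2G\), so the enclosed mass is \(c^2 r/(2G)\) and every interior radius sits exactly at its own horizon. A small numerical sketch (mine, not part of the original argument):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def rho(r):
    """Proposed interior density profile, rho(r) = c^2 / (8 pi G r^2)."""
    return c**2 / (8 * math.pi * G * r**2)

def mass_enclosed(r, n=100_000):
    # Midpoint rule for the integral of 4 pi r'^2 rho(r') from 0 to r.
    # (The integrand is constant here, so the rule is essentially exact.)
    dr = r / n
    return sum(4 * math.pi * (dr * (i + 0.5))**2 * rho(dr * (i + 0.5)) * dr
               for i in range(n))

r = 1.0e9   # any radius inside the outer horizon, in metres
print(abs(2 * G * mass_enclosed(r) / c**2 - r) / r < 1e-9)   # True
```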
Homework Statement A body of mass ##10 kg## is connected to a spring of constant ##490 \frac{N}{m}## and lies on a frictionless inclined plane. The angle formed by the plane and the floor is ##30°##. The block is inside a room which accelerates upwards with ##a=5 \frac{m}{s^2}##. Prove that if the mass is moved away from its equilibrium point, it will experience simple harmonic motion. Homework Equations Newton's equations If I write Newton's equations, seen inside the room and with non-tilted axes, we have: ##x) N.sin(\alpha)-Fe.cos(\alpha)=m.a_x## ##y) N.cos(\alpha)+Fe.sin(\alpha)-m.g-f*=m.a_y## where ##f*=ma## is the pseudo-force and ##Fe## is the elastic force. Then, how can I see the simple harmonic motion? I can also work with tilted axes, which would give ##x) mg.sin(\alpha)+f*sin(\alpha)-Fe=m.a_x## ##y) N-mg.cos(\alpha)-f*cos(\alpha)=0## But I can't spot the SHM. I mean, I can't relate that to ##m.\ddot x +k.x=0##
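For what it's worth, here is one way to see it (my own sketch, not from the thread): in the room's frame the pseudo-force simply augments gravity to \(g_{\text{eff}} = g + a\); along the incline this only shifts the equilibrium stretch, leaving the restoring term \(-kx\) untouched, so \(\omega=\sqrt{k/m}\):

```python
import math

m, k = 10.0, 490.0
g, a, alpha = 9.8, 5.0, math.radians(30)

g_eff = g + a                            # room's upward acceleration adds to gravity
x_eq = m * g_eff * math.sin(alpha) / k   # equilibrium stretch along the incline

# Along the incline: m x'' = m g_eff sin(alpha) - k x.
# Substituting x = x_eq + u gives m u'' = -k u, i.e. SHM about x_eq.
omega = math.sqrt(k / m)

print(omega)            # 7.0 rad/s, independent of g_eff
print(round(x_eq, 3))   # 0.151 m
```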
Here is a beautiful result from numerical analysis. Given any nonsingular $n\times n$ system of linear equations $Ax=b$, an optimal Krylov subspace method like GMRES must terminate with the exact solution $x=A^{-1}b$ in no more than $n$ iterations (assuming exact arithmetic). The Cayley-Hamilton theorem provides a simple, elegant proof of this statement. To begin, recall that at the $k$-th iteration, minimum residual methods like GMRES solve the least-squares problem$$\underset{x_k\in\mathbb{R}^n}{\text{minimize }} \|Ax_k-b\|$$by picking a solution from the $k$-th Krylov subspace$$\text{subject to } x_k \in \mathrm{span}\{b,Ab,A^2b,\ldots,A^{k-1}b\}.$$If the objective $ \|Ax_k-b\|$ goes to zero, then we have found the exact solution at the $k$-th iteration (we have assumed that $A$ is full-rank). Next, observe that $x_k=(c_0 + c_1 A + \cdots + c_{k-1}A^{k-1})b=p(A)b$, where $p(\cdot)$ is a polynomial of degree $k-1$. Similarly, $\|Ax_k-b\|=\|q(A)b\|$, where $q(t)=tp(t)-1$ is a polynomial of degree $k$ satisfying $q(0)=-1$. So the least-squares problem above can, for each fixed $k$, be equivalently posed as a polynomial optimization problem with the same optimal objective $$\text{minimize } \|q_k(A)b\| \text{ subject to } q_k(0)=-1,\; q_k(\cdot) \text{ a polynomial of degree } k.$$Again, if the objective $\|q_k(A)b\|$ goes to zero, then GMRES has found the exact solution at the $k$-th iteration. Finally, we ask: what bound on $k$ guarantees that the objective reaches zero? With $k=n$, a feasible choice for $q_n(\cdot)$ is the characteristic polynomial of $A$, scaled so that $q_n(0)=-1$ (possible because $A$ is nonsingular, so its characteristic polynomial does not vanish at zero). By Cayley-Hamilton, $q_n(A)=0$, so $\|q_n(A)b\|=0$. Hence we conclude that GMRES always terminates with the exact solution by the $n$-th iteration. This same argument can be repeated (with very minor modifications) for other optimal Krylov methods like conjugate gradients, conjugate residual / MINRES, etc.
In each case, the Cayley-Hamilton theorem forms the crux of the argument.
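To see the theorem in action, here is a small numerical sketch (my own illustration): it builds the Krylov basis explicitly and solves the least-squares problem directly, which is numerically naive compared to a real GMRES implementation (Arnoldi recurrences), but matches the characterization used in the argument above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def min_residual_over_krylov(A, b, k):
    """Residual of the minimum-residual iterate over span{b, Ab, ..., A^(k-1) b}."""
    cols = [b]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])
    # Normalize columns for numerical stability; the span is unchanged.
    K = np.column_stack([c / np.linalg.norm(c) for c in cols])
    coef, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return np.linalg.norm(A @ (K @ coef) - b)

residuals = [min_residual_over_krylov(A, b, k) for k in range(1, n + 1)]
print(residuals[-1] < 1e-8)   # True: the exact solution is reached by iteration n
```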
Research Open Access Published: Uniqueness and stability of solutions without the boundary condition Boundary Value Problems volume 2019, Article number: 21 (2019) Abstract A parabolic equation related to the p-Laplacian is considered. If the equation is degenerate on the boundary, then establishing regularity on the boundary is difficult, and the trace on the boundary cannot, in general, be defined. The existence and uniqueness of weak solutions are studied. Based on uniqueness, the stability of solutions can be proved without any boundary condition. Introduction and main results Consider a parabolic equation related to the p-Laplacian with the initial value where Ω is a bounded domain in \(\mathbb{R}^{N}\) with appropriately smooth boundary, \(p>1\), \(u_{0}(x)\) is a \(C_{0}^{1}( \varOmega )\) function, and \(a(u, x,t)\geq 0\). If \(a(u, x,t)=1\), Eq. (1.1) is the evolutionary p-Laplacian equation with a convective term, and the usual boundary condition can be imposed. The initial-boundary value problem of Eq. (1.3) has been studied in many monographs and textbooks; one can refer to [1,2,3] and the references therein. Benedikt et al. [4, 5] studied the equation with \(0<\alpha <1\), and such that there exists an \(x_{0}\in \varOmega \) satisfying \(q(x_{0})>0\). They showed that uniqueness of the solution does not hold. Meanwhile, the author of [6] studied the equation with \(\alpha >0\), and showed that the stability of solutions can be proved without any boundary condition, where \(d=d(x)=\operatorname{dist}(x,\partial \varOmega )\) is the distance function from the boundary and \(f(s,x,t)\) is a Lipschitz function. Certainly, \(|u|^{\alpha -1}u\) is not a Lipschitz function with respect to u, so the result of [6] is compatible with those of [4, 5]. But the result of [6] also shows that the degeneracy of the coefficient \(d^{\alpha }\) can eliminate the action of the source term \(f(u,x,t)\).
Moreover, we have shown that a weak solution to the equation For a degenerate parabolic equation, the phenomenon that the solution is free from the limitation of the boundary condition has been studied for a long time; one can refer to [9,10,11,12,13,14,15]. Roughly speaking, instead of the whole boundary condition (1.4), we may conjecture that only a partial boundary condition should be imposed, where \(\varSigma _{1}\) is a relatively open subset of ∂Ω. In this paper, we will show that a weak solution to Eq. (1.1) is unique independently of the boundary value condition. In other words, the degeneracy of the diffusion \(a(\cdot , x,t)\) on the boundary can take place regardless of the boundary value condition. To simplify exposition, in what follows we assume that where \(r>0\) is a constant and \(\rho (x)\) is a \(C^{1}(\overline{\varOmega })\) nonnegative function and Let Then Eq. (1.1) becomes where The initial value matching up to Eq. (1.7) is Definition 1.1 and, for any function \(\phi (x,t) \in {C_{0}^{1}}(Q_{T})\), there holds We will give a basic result of the existence of a weak solution. Theorem 1.2 If \(p\geq 2\), \(u_{0}(x)\geq 0\), \(\rho (x) \mid _{x\in \partial \varOmega }=0\) and \(\int _{\varOmega }\rho (x)^{- \frac{2}{p-2}}\,dx<\infty \), for any given \(i\in \{1, 2, \dots , N\}\), \(a_{i}(s)\) is a \(C^{1}\) function and there exist constants α and c such that This theorem may not be optimal; the conditions \(p\geq 2\), \(\int _{ \varOmega }\rho (x)^{-\frac{2}{p-2}}\,dx<\infty \) and (1.11) may all be weakened. However, the main aim of this paper is to probe the uniqueness and stability of weak solutions; the main results of our paper are the following theorems. Theorem 1.3 Theorem 1.4 Let \(u(x,t)\) and \(v(x,t)\) be two weak solutions of Eq. (1.7) with different initial values \(u_{0}(x)\) and \(v_{0}(x)\), respectively.
If \(p\geq 2\), and \(a_{i}(s)\) is a Lipschitz function, then It is well known that the usual evolutionary p-Laplacian equation needs to be subjected to the whole boundary condition (1.4) [2, 3]. Clearly, condition \(a(u, x,t)|_{x\in \partial \varOmega }=0\) excludes the usual evolutionary p-Laplacian equation, while condition (1.12) excludes the conservation law equation. The uniqueness of solutions for a conservation law equation only holds in the sense of the entropy solution [2]. The equations considered in [6,7,8, 16,17,18,19], as well as Eq. (1.1), have apparently different characteristics from both the usual evolutionary p-Laplacian equation and the conservation law equation. Roughly speaking, in the interior of Ω, Eq. (1.1) has the characteristic of the usual evolutionary p-Laplacian equation, while on the boundary ∂Ω, Eq. (1.1) has the characteristic of the conservation law equation. Compared with our previous works [7, 17,18,19] and [6, 8], the main difficulty comes from the nonlinearity of the diffusion coefficient \(a(u,x,t)\). Moreover, unlike our previous works, the stability of the weak solutions is based on the uniqueness of the weak solution. Theorem 1.3 shows that the uniqueness of the weak solution holds independently of the boundary value condition. Once we have the uniqueness of the weak solution, Theorem 1.4 shows that the stability of the weak solutions is also true without the boundary value condition. Accordingly, Theorems 1.3 and 1.4 show that not only can the degeneracy of the coefficient \(a(u,x,t)\) eliminate the action of the source term \(f(u,x,t)\) [6], but it may also eliminate the action of the convection term \(\sum_{i=1}^{N}\frac{\partial a_{i}(v)}{\partial x _{i}}\). Existence of a solution Consider an approximate problem of Eq.
(1.7), namely with the initial boundary value conditions Definition 2.1 and for any \(\phi (x) \in {C_{0}^{1}}( Q_{T})\), there holds For any \(k>0\), we define \(\varphi ^{+}_{k}(s)=\beta s^{\beta -1}\) when \(s\geq k^{-1}\), \(\varphi ^{+}_{k}(s)=\beta (a_{k}s^{2}+b_{k} s)\) when \(0\leq s< k^{-1}\), where Extending \(\varphi ^{+}(s)\) to be an even function on the whole \(\mathbb{R}^{1}\), and denoting it as \(\varphi _{k}(s)\), we have \(\varphi _{k}(s)\in C^{1}\), \(\varphi _{k}(s)\rightarrow \beta s^{\beta -1}\), \(s\neq 0\) as \(k\rightarrow \infty \). By considering the following approximate problem: where \(\|v_{0k}(x)-v_{0}(x)\|_{p}\rightarrow 0\) as \(k\rightarrow \infty \) and \(|v_0|^{\beta -1}v_{0}(x)=u_{0}(x)\), we obtain that there is a unique classical solution \(v_{k\varepsilon }\) of problem (2.6)–(2.8). Let \(k\rightarrow \infty \). As in [20], we can prove that where c is a constant independent of k and ε, but depending on \(\|u_{0}\|_{L^{\infty }(\varOmega )}\). In what follows, we call \(v_{\varepsilon }\) an asymptotic solution. Proof of Theorem 1.2 Multiplying (2.1) by \(v_{\varepsilon }\) and integrating over \(Q_{T}\), we have where \(\rho _{\varepsilon }=\rho +\varepsilon \). Using the fact we have and in particular, Multiplying (2.1) by \(v_{\varepsilon t}\), and integrating over Ω, By the assumption of (1.11), Here, we have used the assumption \(\int _{\varOmega }\rho (x)^{- \frac{2}{p-2}}\,dx<\infty \), which implies \(\int _{\varOmega }(\rho +\varepsilon )^{-\frac{2}{p-2}}\,dx<\infty \). Then and where \(Q_{\lambda T}=\varOmega _{\lambda }\times (0,T)\). Then \(v_{\varepsilon }\rightarrow v\) in \(L^{2}(Q_{\lambda T})\). By the arbitrariness of λ, \(v_{\varepsilon }\rightarrow v\) a.e. in \(Q_{T}\). Thus \(a_{i}(v_{\varepsilon })\rightarrow a_{i}(v)\) a.e. in \(Q_{T}\). □ The uniqueness Theorem 3.1 Let \(u(x,t)\) and \(v(x,t)\) be two weak solutions of Eq.
(1.7) with different initial values \(u_{0}(x)\) and \(v_{0}(x)\), respectively, \(0< m\leq \|u\|_{L^{\infty }(Q_{T})}\leq M\), \(0< m\leq \|v \|_{L^{\infty }(Q_{T})}\leq M\). Let \(p>1\), \(a_{i}(s)\) be a Lipschitz function, and let \(\rho (x)\) satisfy (1.6). Then there exists a constant \(\alpha _{1}\geq \max \{p, 2, 2(p-1)\}\) such that Proof Denote \(\varOmega _{\lambda }=\{x\in \varOmega : \rho (x)> \lambda \}\) as before. Let For any fixed \(\tau ,s\in [0,T]\), we may choose \(\chi _{[\tau ,s]}(u _{\varepsilon }-v_{\varepsilon })\xi _{\lambda }\) as a test function in (3.1), where \(\chi _{[\tau ,s]}\) is the characteristic function on \([\tau ,s]\), where \(u_{\varepsilon }\) and \(v_{\varepsilon }\) are the mollified functions of the solutions u and v, respectively. Then, denoting \(Q_{\tau s}=\varOmega \times [\tau , s]\), we have For any given small \(\lambda >0\), denoting \(Q_{T\lambda }= \varOmega _{\lambda }\times (0,T)\), since \(\rho (x)\in C^{1}(\overline{ \varOmega })\) and \(\rho (x)>0\) when \(x\in \varOmega \), then \(\nabla u\in L ^{p}(Q_{T\lambda })\), \(\nabla v\in L^{p}(Q_{T\lambda })\). According to the definition of the mollified functions \(u_{\varepsilon }\) and \(v_{\varepsilon }\), we have Since on \(\varOmega _{\lambda }\), by Young inequality, The first term on the right-hand side of (3.6) satisfies The last term on the right-hand side of (3.6) can be bounded as follows: Here, we have used the fact that \(|\nabla \rho (x)|\leq c\). Then, If \(p\geq 2\), clearly, since \(u,v\in L^{\infty }\), \(|u-v|\leq c\), and we have if \(1< p<2\), for \(\alpha _{1}\geq 2(p-1)\), using the Hölder inequality, we have Meanwhile, by the Lebesgue dominated convergence theorem, Due to the fact \(|\nabla \rho |\leq c\), \(\alpha _{1}\geq p\), we have and If \(1< p<2\), then \(p'>2\), and if \(\alpha _{1}\geq p\), when \(\rho <1\), then \(\rho ^{\frac{\alpha _{1}-1}{p-1}}\leq \rho ^{\frac{\alpha _{1}}{p}}\). 
When \(1\leq \rho \leq D=\sup_{x\in \overline{\varOmega }}\rho (x)\), it is obvious that Thus, \(\rho ^{\frac{\alpha _{1}-1}{p-1}}\leq c\rho ^{ \frac{\alpha _{1}}{p}}\) is always true, and then we have If \(p\geq 2\), then \(p'<2\), and for \(\alpha _{1}\geq 2\), by the Hölder inequality, where \(q<1\). Now, where \(\zeta \in (v,u)\). If for any \(s\geq \tau \) is true, then clearly holds. If there is an \(s_{0}\geq \tau \) such that then by (3.16) where \(\zeta \in (v,u)\), \(M=\max \{\|u\|_{L^{\infty }(Q_{T})}, \|v\|_{L^{\infty }(Q_{T})} \}\). Then Since by (3.20), we have Here \(m=\min \{\|u\|_{L^{\infty }(Q_{T})}, \|v\|_{L^{\infty }(Q _{T})} \}\). Inequality (3.21) implies This inequality contradicts assumption (3.18). Thus, for any \(s,\tau \in [0,T)\), (3.17) is always true. By the arbitrariness of τ, we have The proof is complete. □ The proof of Theorem 1.4 Proof For any given positive integer n, let \({g_{n}}(s)\) be an odd function, and Clearly, and where c is independent of n. Let \(u_{\varepsilon }\) and \(v_{\varepsilon }\) be the asymptotic solutions of u and v, respectively. They satisfy the asymptotic problem (2.1)–(2.3). Since the weak solution of Eq. (1.7) with the initial value (1.8) is unique, we have We can choose \(\chi _{[\tau ,s]}{g_{n}}(u_{\varepsilon } - v_{\varepsilon })\) as the test function. Then At first, by (4.4), we have Secondly, we have Here, we have used two facts. The first one is, by (4.6), The second one is, since \((u_{\varepsilon }-v_{\varepsilon })\rightarrow (u-v)\), a.e. in Ω, using (4.6), Thirdly, we have Moreover, since we have Then Based on it, we are able to prove that In details, the limitation (4.9) is established by the following calculations: Due to (1.12), where \(\xi \in (v,u)\). If the set \(\{ x \in \varOmega :|u - v| = 0\}\) has positive measure, then Now, letting \(n\rightarrow \infty \) in (4.5), It implies that Theorem 1.4 is proved. 
□ Conclusions The equation considered in this paper comes from many reaction–diffusion problems. If the diffusion coefficient depends not only on the unknown solution u but also on the spatial variable x, the degeneracy of the equation becomes more complicated. If the diffusion coefficient is degenerate on the boundary, the usual Dirichlet boundary value condition seems completely redundant. The uniqueness of the weak solution is proved without any boundary value conditions. Based on this fact, the stability of weak solutions can also be proved without any boundary value conditions. References 1. Nakao, M.: \(L^{p}\) estimates of solutions of some nonlinear degenerate diffusion equation. J. Math. Soc. Jpn. 37, 41–63 (1985) 2. Wu, Z., Zhao, J., Yin, J., Li, H.: Nonlinear Diffusion Equations. World Scientific Publishing, Singapore (2001) 3. Zhao, J., Yuan, H.: The Cauchy problem of some doubly nonlinear degenerate parabolic equations. Chin. Ann. Math., Ser. A 16(2), 179–194 (1995) (in Chinese) 4. Benedikt, J., Bobkov, V.E., Girg, P., Kotrla, L., Takac, P.: Nonuniqueness of solutions of initial-value problems for parabolic p-Laplacian. Electron. J. Differ. Equ. 2015, 38 (2015) 5. Benedikt, J., Girg, P., Kotrla, L., Takac, P.: Nonuniqueness and multi-bump solutions in parabolic problems with the p-Laplacian. J. Differ. Equ. 260, 991–1009 (2016) 6. Zhan, H.: On a parabolic equation related to the p-Laplacian. Bound. Value Probl. 2016, 78 (2016) 7. Zhan, H.: The degeneracy on the boundary of an equation related to the p-Laplacian. J. Inequal. Appl. 2018, 7 (2018) 8. Zhan, H.: The stability of the solutions of an equation related to the p-Laplacian with degeneracy on the boundary. Bound. Value Probl. 2016, 178 (2016) 9. Wu, Z., Zhao, J.: The first boundary value problem for quasilinear degenerate parabolic equations of second order in several variables. Chin. Ann. Math. 4B(1), 57–76 (1983) 10.
Wu, Z., Zhao, J.: Some general results on the first boundary value problem for quasilinear degenerate parabolic equations. Chin. Ann. Math. 4B(3), 319–328 (1983) 11. Li, Y., Wang, Q.: Homogeneous Dirichlet problems for quasilinear anisotropic degenerate parabolic-hyperbolic equations. J. Differ. Equ. 252, 4719–4741 (2012) 12. Lions, P.L., Perthame, B., Tadmor, E.: A kinetic formulation of multidimensional scalar conservation laws and related equations. J. Am. Math. Soc. 7, 169–191 (1994) 13. Kobayasi, K., Ohwa, H.: Uniqueness and existence for anisotropic degenerate parabolic equations with boundary conditions on a bounded rectangle. J. Differ. Equ. 252, 137–167 (2012) 14. Escobedo, M., Vazquez, J.L., Zuazua, E.: Entropy solutions for diffusion–convection equations with partial diffusivity. Trans. Am. Math. Soc. 343, 829–842 (1994) 15. Guarguaglini, F.R., Milišić, V., Terracina, A.: A discrete BGK approximation for strongly degenerate parabolic problems with boundary conditions. J. Differ. Equ. 202, 183–207 (2004) 16. Yin, J., Wang, C.: Properties of the boundary flux of a singular diffusion process. Chin. Ann. Math. 25B(2), 175–182 (2004) 17. Zhan, H., Yuan, H.: A diffusion convection equation with degeneracy on the boundary. J. Jilin Univ. Sci. Ed. 53(3), 353–358 (2015) (in Chinese) 18. Zhan, H.: The boundary value condition of an evolutionary \(p(x)\)-Laplacian equation. Bound. Value Probl. 2015, 112 (2015). https://doi.org/10.1186/s13661-015-0377-6 19. Zhan, H.: The solutions of a hyperbolic–parabolic mixed type equation on half-space domain. J. Differ. Equ. 259, 1449–1481 (2015) 20. Chen, S., Wang, Y.: Global existence and \(L^{\infty }\) estimates of solution for doubly degenerate parabolic equation. Acta Math. Sin., Ser. A 44, 1089–1098 (2001) (in Chinese) Acknowledgements The author would like to thank the SpringerOpen Accounts Team for its kindness in giving me a discount on the paper charge if my paper gets accepted. Availability of data and materials Not applicable.
Funding The paper is supported by the Natural Science Foundation of Fujian Province and the Science Foundation of Xiamen University of Technology, China. Ethics declarations Competing interests The author declares that he has no competing interests. Additional information Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The recent question "Complete the following sequence: point, triangle, octahedron, ... in a dg-category" reminded me of something I wanted to clarify a long time ago; most likely this is now well known, but unfortunately I did not follow the literature in the field attentively enough. In short, any composable $n$-tuple of morphisms in a triangulated category $\mathscr T$ gives a bunch of distinguished triangles which can be organized into a $\it hypersimplex$ - the polytope modelled on the convex hull of the edge midpoints of a (regular) $(n+1)$-simplex $\Delta^{n+1}$. Now it seems that one may use this to define a certain simplicial set out of $\mathscr T$. For that, we view each such hypersimplex as some sort of map of $\Delta^{n+1}$ to $\mathscr T$: we map all vertices to some new entity $*$, edges to the objects of $\mathscr T$ corresponding to the vertices of this hypersimplex, etc. Then the standard cosimplicial space structure on $\Delta^*$ gives a simplicial structure on the collection of all hypersimplices as follows: There is a unique 0-simplex $*$; 1-simplices are objects of $\mathscr T$, the degeneracy of $*$ being the zero object; 2-simplices are distinguished triangles, and the faces of ${\bf T}=(A\to B\to C\to\Sigma A)$ are $d_0({\bf T})=A$, $d_1({\bf T})=B$, $d_2({\bf T})=C$; moreover, the degeneracies of an object $X$ are $s_0(X)=(X=X\to0\to\Sigma X)$ and $s_1(X)=(0\to X=X\to\Sigma0)$. 3-simplices are octahedra, faces being those four faces of the octahedron which are distinguished triangles. I've posted an image of a 4-simplex on the page of the above question. This is what Wikipedia calls the rectified 5-cell; it's like this: Its faces are the five octahedra that can be seen in the picture - four adjacent (from inside) to the facets of the outer tetrahedron, and the central one.
I guess everybody reading about triangulated categories for the first time and having seen simplicial identities before (or vice versa) comes up with this immediately; the point here is just that because of the above description of hypersimplices it is more or less clear that one indeed obtains a simplicial set (well, when $\mathscr T$ is small at any rate). My questions are: Does this simplicial set give a model for the $K$-theory of $\mathscr T$? Seems like its fundamental group is $K_0(\mathscr T)$... Is this a Kan complex? Triangle axioms and the octahedron axiom seem to translate into something about the Kan conditions, does this extend up? There seems to be more structure produced by the shift functor; is this actually a cyclic set? Can this structure be extended to a $\Gamma$-space, or turned into an E$_\infty$-space using any other machinery? Is all this done accurately somewhere?
I believe that the process you postulate has a Beta conditional distribution. If my memory serves me well, I encountered it in the book by Liptser and Shiryayev, "Statistics of Random Processes", as the evolution of the conditional probability in an HMM. This was 10+ years ago, so I might well be off. In that case you should be sampling from a Beta distribution to discretize. Update: My mistake, the stationary distribution is Beta, not the conditional one. Therefore you will not be able to evolve from Beta exactly. The diffusion you postulate is called a 'Jacobi diffusion'; see Forman and Sørensen, case 6, at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1150110 I suspect that you might be able to use the stationary PDF to produce an approximate scheme to discretize. Update 2: Actually, let me change the notation slightly and write$$d Y_t = \theta (\mu-Y_t)\ dt + \sqrt{2\alpha\theta Y_t(1-Y_t)}\ dB_t$$which we know has the stationary distribution $Y_\infty\sim B\left(\frac{\mu}{\alpha}, \frac{1-\mu}{\alpha}\right)$. Now, use the change of variable $X_t = f(Y_t) = 2 \arcsin\sqrt{Y_t}$, which by Itô's formula leads to the constant-volatility diffusion$$dX_t = \frac{2\theta (\mu -\alpha/2) + 2(\alpha-1) \theta \sin^2 (X_t/2)}{|\sin X_t|}\ dt + \sqrt{2\alpha\theta}\ dB_t$$ You can simulate this with an Euler step as $$x_{t+\delta} = x_t + \frac{2\theta (\mu -\alpha/2) + 2(\alpha-1) \theta \sin^2 (x_t/2)}{|\sin x_t|}\ \delta + \sqrt{2\alpha\theta\delta}\ \epsilon_t,\quad \epsilon_t\sim N(0,1)$$and then transform back to produce the paths$$y_t = \sin^2 (x_t/2)$$
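If it helps, here is a rough sketch (mine, with made-up parameter values) that checks the stationary Beta claim by brute force, using a naive Euler scheme on \(Y_t\) itself with clipping to stay in \([0,1]\) (exactly the crudeness the arcsin transform avoids); the long-run mean should come out near \(\mu\), the mean of \(B(\mu/\alpha, (1-\mu)/\alpha)\):

```python
import numpy as np

# Naive Euler-Maruyama for dY = theta*(mu - Y) dt + sqrt(2*alpha*theta*Y*(1-Y)) dB.
rng = np.random.default_rng(1)
theta, mu, alpha = 2.0, 0.3, 0.5       # illustrative values only
dt, n_steps, burn_in = 1e-3, 200_000, 100_000

y, samples = mu, []
for i in range(n_steps):
    dy = theta * (mu - y) * dt \
         + np.sqrt(2 * alpha * theta * y * (1 - y) * dt) * rng.standard_normal()
    y = min(max(y + dy, 0.0), 1.0)     # clip into [0,1]; crude but keeps sqrt real
    if i >= burn_in:
        samples.append(y)

print(np.mean(samples))   # close to the Beta(mu/alpha, (1-mu)/alpha) mean, i.e. mu = 0.3
```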
Saddle-node bifurcation Yuri A. Kuznetsov (2006), Scholarpedia, 1(10):1859. doi:10.4249/scholarpedia.1859 A saddle-node bifurcation is the collision and disappearance of two equilibria in a dynamical system. In systems generated by autonomous ODEs, this occurs when the critical equilibrium has one zero eigenvalue. The phenomenon is also called a fold or limit point bifurcation. A discrete-time version of this bifurcation is considered in the article "Saddle-node bifurcation for maps". Definition Consider an autonomous system of ordinary differential equations (ODEs) \[ \dot{x}=f(x,\alpha),\ \ \ x \in {\mathbb R}^n \] depending on a parameter \(\alpha \in {\mathbb R}\ ,\) where \(f\) is smooth. Suppose that at \(\alpha=0\) the system has an equilibrium \(x^0=0\ .\) Further assume that its Jacobian matrix \(A_0=f_x(0,0)\) has a simple eigenvalue \(\lambda_{1}=0 \ .\) Then, generically, as \(\alpha\) passes through \(\alpha=0\ ,\) two equilibria collide, form a critical saddle-node equilibrium (case \(\beta=0\) in Figure 1), and disappear. This bifurcation is characterized by a single bifurcation condition \(\lambda_1=0\) (it has codimension one) and appears generically in one-parameter families of smooth ODEs. 
The critical equilibrium \( x^0 \) is a multiple (double) root of the equation \( f(x,0)=0 \ .\) One-dimensional Case \[\dot{x} = f(x,\alpha), \ \ \ x \in {\mathbb R}\ .\]If the following nondegeneracy conditions hold: (SN.1)\(a(0)=\frac{1}{2}f_{xx}(0,0) \neq 0\ ,\) (SN.2)\(f_{\alpha}(0,0) \neq 0\ ,\) then this system is locally topologically equivalent near the origin to the normal form \[ \dot{y} = \beta + \sigma y^2 \ ,\] where \(y \in {\mathbb R},\ \beta \in {\mathbb R}\ ,\) and \(\sigma= {\rm sign}\ a(0) = \pm 1\ .\) The normal form has two equilibria (one stable and one unstable) \(y^{1,2}=\pm \sqrt{-\sigma \beta}\) for \(\sigma \beta<0\) and no equilibria for \(\sigma \beta > 0\ .\) At \(\beta=0\ ,\) there is one critical equilibrium \(y^0=0\) with zero eigenvalue. Multidimensional Case In the \(n\)-dimensional case with \(n \geq 2\ ,\) the Jacobian matrix \(A_0\) at the saddle-node bifurcation has a simple zero eigenvalue \(\lambda_{1}=0\ ,\) as well as \(n_s\) eigenvalues with \({\rm Re}\ \lambda_j < 0\ ,\) and \(n_u\) eigenvalues with \({\rm Re}\ \lambda_j > 0\ ,\) with \(n_s+n_u+1=n\ .\) According to the Center Manifold Theorem, there is a family of smooth one-dimensional invariant manifolds \(W^c_{\alpha}\) near the origin. The \(n\)-dimensional system restricted on \(W^c_{\alpha}\) is one-dimensional, hence has the normal form above. 
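As a quick sanity check of the one-dimensional normal form, here is a small sketch (my own code, not part of the article) that returns the equilibria $y^{1,2}=\pm\sqrt{-\sigma\beta}$ of $\dot y = \beta + \sigma y^2$, illustrating how the two equilibria merge at $\beta=0$ and disappear for $\sigma\beta>0$:

```python
import numpy as np

def normal_form_equilibria(beta, sigma):
    """Equilibria of dy/dt = beta + sigma*y^2, i.e. roots of beta + sigma*y^2 = 0."""
    if sigma * beta > 0:
        return []                      # no equilibria after the collision
    if beta == 0:
        return [0.0]                   # the critical saddle-node equilibrium
    r = np.sqrt(-beta / sigma)         # equals sqrt(-sigma*beta) since sigma = +/-1
    return [-r, r]                     # one stable, one unstable

# sigma = +1: two equilibria for beta < 0, none for beta > 0
```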
Moreover, under the non-degeneracy conditions (SN.1) and (SN.2), the \(n\)-dimensional system is locally topologically equivalent near the origin to the suspension of the normal form by the standard saddle, i.e.\[\dot{y} = \beta + \sigma y^2\ ,\]\[\dot{y}^s = -y^s\ ,\]\[\dot{y}^u = +y^u\ ,\]where \(y \in {\mathbb R}\ ,\) \(y^s \in {\mathbb R}^{n_s}, \ y^u \in {\mathbb R}^{n_u}\ .\) Figure 1 shows the phase portraits of the normal form suspension when \(n=2\ ,\) \(n_s=1\ ,\) \(n_u=0\ ,\) and \(\sigma=+1\ .\) Quadratic Coefficient The quadratic coefficient \(a(0)\ ,\) which is involved in the nondegeneracy condition (SN.1), can be computed for \(n \geq 1\) as follows. Write the Taylor expansion of \(f(x,0)\) at \(x=0\) as \[ f(x,0)=A_0x + \frac{1}{2}B(x,x) + O(\|x\|^3) \ ,\] where \(B(x,y)\) is the bilinear function with components \[ \ \ B_j(x,y) =\sum_{k,l=1}^n \left. \frac{\partial^2 f_j(\xi,0)}{\partial \xi_k \partial \xi_l}\right|_{\xi=0} x_k y_l \ ,\] where \(j=1,2,\ldots,n\ .\) Let \(q\in {\mathbb R}^n\) be a null-vector of \(A_0\ :\) \(A_0q=0, \ \langle q, q \rangle =1\ ,\) where \(\langle p, q \rangle = p^Tq\) is the standard inner product in \({\mathbb R}^n\ .\) Introduce also the adjoint null-vector \(p \in {\mathbb R}^n\ :\) \(A_0^T p = 0, \ \langle p, q \rangle =1\ .\) Then (see, for example, Kuznetsov (2004)) \[ a(0)= \frac{1}{2} \langle p, B(q,q)\rangle = \left.\frac{1}{2} \frac{d^2}{d\tau^2} \langle p, f(\tau q,0) \rangle \right|_{\tau=0} \ .\] Standard bifurcation software (e.g. MATCONT) computes \(a(0)\) automatically. Other Cases Saddle-node bifurcation occurs also in infinite-dimensional ODEs generated by PDEs and DDEs, to which the Center Manifold Theorem applies. Saddle-node bifurcations occur also for dynamical systems with discrete time (iterated maps). An important case of saddle-node bifurcation in planar ODEs is when the center manifold makes a homoclinic loop, as in Figure 3. 
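The second, derivative-based expression for \(a(0)\) is convenient to check numerically. Below is a sketch (a toy two-dimensional system I made up, chosen so that \(A_0q=0\), \(A_0^Tp=0\), \(\langle p,q\rangle=1\)) that evaluates \(a(0)=\frac12\frac{d^2}{d\tau^2}\langle p, f(\tau q,0)\rangle|_{\tau=0}\) by central differences; for this toy \(f\) the value should be \(1\).

```python
import numpy as np

def f(x):
    # toy system with a simple zero eigenvalue at the origin (hypothetical example)
    return np.array([x[0] ** 2, -x[1]])

A0 = np.array([[0.0, 0.0], [0.0, -1.0]])   # Jacobian of f at the origin
q = np.array([1.0, 0.0])                   # A0 q = 0, <q, q> = 1
p = np.array([1.0, 0.0])                   # A0^T p = 0, <p, q> = 1

def quadratic_coefficient(f, p, q, h=1e-4):
    """a(0) = (1/2) d^2/dtau^2 <p, f(tau*q)> at tau = 0, via central differences."""
    g = lambda t: p @ f(t * q)
    return 0.5 * (g(h) - 2.0 * g(0.0) + g(-h)) / h**2

a0 = quadratic_coefficient(f, p, q)   # should be 1 for this toy f
```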
Such a saddle-node homoclinic bifurcation results in the birth of a limit cycle when the saddle-node disappears. The period of this cycle tends to infinity as the parameter approaches its bifurcation value. In ODEs with \( n \geq 3 \ ,\) a saddle-node with \( n_sn_u >0 \) can have more than one homoclinic orbit simultaneously. Disappearance of such a saddle-node, called a saddle-saddle or a Shilnikov saddle-node, generates an infinite number of saddle periodic orbits. References A.A. Andronov, E.A. Leontovich, I.I. Gordon, and A.G. Maier (1971) Theory of Bifurcations of Dynamical Systems on a Plane. Israel Program Sci. Transl. L.P. Shilnikov (1969) On a new type of bifurcation in multidimensional dynamical systems. Sov. Math. Dokl. 10, 1368-1371. V.I. Arnold (1983) Geometrical Methods in the Theory of Ordinary Differential Equations. Grundlehren Math. Wiss., 250, Springer. J. Guckenheimer and P. Holmes (1983) Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer. Yu.A. Kuznetsov (2004) Elements of Applied Bifurcation Theory, 3rd edition. Springer. S. Newhouse, J. Palis and F. Takens (1983) Bifurcations and stability of families of diffeomorphisms. Inst. Hautes Études Sci. Publ. Math. 57, 5-71. L.P. Shilnikov, A.L. Shilnikov, D.V. Turaev, and L.O. Chua (2001) Methods of Qualitative Theory in Nonlinear Dynamics. Part II, World Scientific. Internal references Yuri A. Kuznetsov (2006) Andronov-Hopf bifurcation. Scholarpedia, 1(10):1858. Jack Carr (2006) Center manifold. Scholarpedia, 1(12):1826. Willy Govaerts, Yuri A. Kuznetsov, Bart Sautois (2006) MATCONT. Scholarpedia, 1(9):1375. James Murdock (2006) Normal forms. Scholarpedia, 1(10):1902. Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358. Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838. 
See Also: Andronov-Hopf bifurcation, Bifurcations, Center manifold theorem, Dynamical systems, Equilibria, MATCONT, Ordinary differential equations, Saddle-node bifurcation for maps, Saddle-node homoclinic bifurcation, XPPAUT
Let $K$ be a field having two field extensions $L\supseteq K$ and $M\supseteq K$. Does there exist a field $N$ along with embeddings $L\to N$ and $M\to N$, such that the diagram$$ \require{AMScd}\begin{CD} K @>>> L\\ @V V V @VV V\\ M @>>> N \end{CD}$$is commutative? To put it less formally, do $L$ and $M$ have a common field extension $N$ (with $K$ lying in the intersection)? If yes, consider this side question (but do leave an answer even if you can only answer the main question!): Does the above property of fields generalise to the following, stronger property? Let $p$ be either $0$ or a prime number. Does there exist a sequence of fields$$ \mathbb F_p = L_0\subseteq L_1\subseteq L_2\subseteq L_3\subseteq L_4\subseteq\cdots$$(where I use the convention $\mathbb F_0 = \mathbb Q$) such that any field $K$ of characteristic $p$ has an extension field among the $L_\alpha$? This sequence should be understood as enumerated by ordinal numbers. In other words, what I need is a function that assigns to each ordinal number $\alpha$ a field $L_\alpha$ containing all $L_\beta$ with $\beta < \alpha$. (Intuitively, I suspect that this might depend on the Axiom of Choice.) If the second property holds, it shows that you can essentially only extend a field in one "direction." It also shows that the class (it is obviously not a set) $\mathbb M_p = \bigcup_\alpha L_\alpha$ is a field (class). We can define all the usual field operations (addition, multiplication, division) here since all pairs of elements lie in $L_\alpha$ for a sufficiently large $\alpha$. This "monster field" of characteristic $p$ then contains all other set fields of that characteristic.
Video Transcript Factoring by Grouping Adventure Mike and his girlfriend plan to build a treehouse. Using a snake as a measuring stick, Mike figured out the polynomial expression to represent the total area. Oh jeez, his girlfriend just remembered – she wants a balcony, so she and Mike can watch the sunset. What can he do? She’s the romantic type. So, to save time, rather than measuring and calculating everything all over again, he can use grouping to factor polynomials. Standard Form of a Quadratic Polynomial Let’s take a look at the expression he wrote, 15x²+9x-6. Hmm. This looks familiar, doesn’t it? This expression is in the standard form of a quadratic polynomial, ax²+bx+c, but notice it's a trinomial and 'a' is equal to a number other than 1. To figure out the measurements for the sides of the treehouse, how can Mike factor this expression? Multiplying Binomials To show him how to factor by grouping, let’s use another problem as an example. To put this method into perspective, we'll start at the end result, with the factors. Working backwards, first use the FOIL method to multiply the two binomials. Before combining like terms, we have 3x² + 6x – x – 2. To help you understand factoring by grouping, pay attention to the terms that are highlighted. After combining these like terms, the result is a trinomial in standard, quadratic form with 'a' equal to a number other than 1. Finding the Factored Form Let’s work on Mike’s problem. Okay, so how do you get from the standard form to the factored form when 'a' is not equal to 1? There's a little trick to doing this. We need to find the factors of 'ac' that sum to 'b'. a = 15 and c = -6. So since 'ac' = -90, here's a list of some of the factors of -90 – let's take a look. Hmm, can you find the pair of factors that also sum to 9? That's right: -6 and 15 are factors of -90 and sum to 9. Now, watch carefully while I do some mathemagic. 
Write two new terms using the factors of 'ac' that sum to 'b', each multiplied by 'x'. Does this format look familiar? Remember the highlighted terms from the example problem? Now, to group, use parentheses to group the four terms into two binomials. This is a little tricky because you have to group the terms so that, when you factor out the GCF, the remaining binomial is the same for each. Also, be especially careful you don't make a sign error. Last step: combine the two GCFs to create a new binomial. The polynomial is factored! Adventure Mike has the measurements for the two sides of the treehouse. If his girlfriend wants a bigger treehouse, it won’t be a problem because he can just adjust the size of the sides. Let’s fast forward: the treehouse is finally finished. To remember the special moment, Adventure Mike snaps some photos. Let's look. Oh no! It seems like Mike has been alone in the jungle just a little too long… Factoring by Grouping Exercise Would you like to apply what you've learned? With the exercises for the video Factoring by Grouping you can review and practice it. Explain how to factor the given quadratic polynomial by grouping. Hints To factor quadratic polynomials by grouping, first multiply $a$ by $c$. Then sum all pairs of factors of this product and find the pair which sums to $b$. Working backwards, we have: $\begin{array}{rcl} (3x-1)(x+2)&=&3x(x+2)-(x+2)\\ &=&3x^2+6x-x-2 \end{array}$ Solution In order to factor such a quadratic polynomial, $ax^2+bx+c$, with $a$, $b$, and $c$ as coefficients, we must find all pairs of factors of $a\times c$. Then we search for the pair of factors whose sum is $b$. Let's have a look at an example: $3x^2+5x-2$. Here we have $a=3$, $b=5$, and $c=-2$, and thus $a\times c=3\times (-2)=-6$. 
The pairs of factors of $-6$ and their corresponding sums are: $\begin{array}{rr|r} \text{factor}&\text{factor}&\text{sum}\\ \hline -1&6&5\\ 1&-6&-5\\ -2&3&1\\ 2&-3&-1 \end{array}$ The pair of factors which sums to $5$ is $-1$ and $6$. The grouping $3x^2+5x-2=3x^2+6x-x-2$ then gives us $3x(x+2)-(x+2)=(3x-1)(x+2)$, which is the factorization we are looking for. Complete the following table of factors and sums. Hints The product of a pair of factors must be $-90$. If you know one factor, just divide $-90$ by that factor to get the other factor in the pair. An example of a pair of factors of $-90$ and their sum: $-9$ and $10$, with $-9+10=1$. Solution For $15x^2+9x-6$, we first multiply $15\times (-6)=-90$. We then know that we are looking for all pairs of factors of $-90$, and one pair in particular which sums to $9$. Let's calculate the products and sums of the pairs of factors of $-90$: $-1\times 90=-90$ and $-1+90=89$ $1\times (-90)=-90$ and $1-90=-89$ $-2\times 45=-90$ and $-2+45=43$ $2\times (-45)=-90$ and $2-45=-43$ $-3\times 30=-90$ and $-3+30=27$ $3\times (-30)=-90$ and $3-30=-27$ $-5\times 18=-90$ and $-5+18=13$ $5\times (-18)=-90$ and $5-18=-13$ $-6\times 15=-90$ and $-6+15=9$ $6\times (-15)=-90$ and $6-15=-9$ $-9\times 10=-90$ and $-9+10=1$ $9\times (-10)=-90$ and $9-10=-1$ Factor the polynomial $15x^2+9x-6$. Hints The product of $15$ and $-6$ is $-90$. You can write $-90$ as a product of two factors. Find the pair of factors that add up to $9$. In the example $\begin{array}{rcl} (2x+3)(4x-2)&=&2x(4x-2)+3(4x-2)\\ &=&8x^2-4x+12x-6\\ &=&8x^2+8x-6 \end{array}$ we have that $8\times (-6)=-48$, as well as $-4\times 12=-48$. We can see that $-4+12=8$. First multiply $a$ and $c$. Then we check all pairs of factors of $a\times c$. If we add a pair of factors together and get $b$, then we've found the pair we are looking for. Solution To factor the quadratic trinomial $15x^2+9x-6$ with $a=15$, $b=9$, and $c=-6$, we first look for the pair of factors of $a\times c$ that sums to $b$. 
We have that $a\times c=-90$, so we investigate all pairs of factors of $-90$: ${\small\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c} \text{Factors of }-90&-1,90&1,-90&-2,45&2,-45&-3,30&3,-30&-5,18&5,-18&-6,15&6,-15&-9,10&9,-10 \\ \hline \text{Sum of factors }&89&-89&43&-43&27&-27&13&-13&9&-9&1&-1 \end{array}}$ The sum of the factors $-6$ and $15$ is $-6+15=9$. So this is the pair we want. Next we group the quadratic polynomial as follows: $15x^2+9x-6=15x^2-6x+15x-6$. Grouping the binomials together, we get $(15x^2-6x)+(15x-6)$. The greatest common factor of the left binomial is $3x$ and of the right binomial is $3$: $(15x^2-6x)+(15x-6)=3x(5x-2)+3(5x-2)$. Now we see that both summands have the binomial $5x-2$ as a common factor, and thus we can factor further: $3x(5x-2)+3(5x-2)=(3x+3)(5x-2)$. Consider the appropriate factorization. Hints You can either factor by grouping or use the FOIL method for binomials. An example of the FOIL method: $\begin{array}{rcl} (5x+1)(x+5)&=&5x^2+5x(5)+1x+1(5)\\ &=&5x^2+26x+5 \end{array}$ Remember: First, Outer, Inner, Last. If you'd like to factor $ax^2+bx+c$ by grouping, then: Find all pairs of factors of $a\times c$. Check which pair sums to $b$. Solution To match each quadratic polynomial in standard form with its factorization, you can either factor the quadratic polynomial or multiply the binomials. Let's go over how to use both methods with the polynomial $2x^2-11x+12$. Factoring the Polynomial: First we note that $a=2$, $b=-11$ and $c=12$. First multiply $a$ and $c$ to get $2\times 12=24$. Next look for the pair of factors of $24$ which sums to $-11$; the pair we are looking for is $-8$ and $-3$. Then group to get $2x^2-11x+12=(2x^2-8x)-(3x-12)=2x(x-4)-3(x-4)$. Then factor out the greatest common factors of the binomials to get $2x(x-4)-3(x-4)=(2x-3)(x-4)$. Multiplying each binomial by $-1$ gives the equivalent form $(-2x+3)(-x+4)$. Multiplying the Binomials: You can also use FOIL multiplication, e.g. with $(3x-4)(2x-2)$: we have that $(3x-4)(2x-2)=6x^2-6x-8x+8=6x^2-14x+8$. 
Indicate the coefficients of the quadratic polynomials. Hints Keep in mind that the $-$ sign belongs to the coefficient. For the polynomial $3x^2+2x+4$, we have that $a=3$, $b=2$, and $c=4$. Pay attention: neither $x^2$ nor $x$ belongs to $a$ or $b$. Solution The standard form of a quadratic polynomial is given by $ax^2+bx+c$, here with $a\neq 1$. Before we start to group quadratic polynomials we have to note the values of $a$, $b$, and $c$. For each given polynomial, $a$ is the coefficient of the quadratic term $x^2$, $b$ is the coefficient of the linear term $x$, and $c$ is the constant term: For $6x^2+x-2$, we have $a=6$, $b=1$, and $c=-2$. For $9x^2-25x-6$, we have $a=9$, $b=-25$, and $c=-6$. For $8x^2+4x-12$, we have $a=8$, $b=4$, and $c=-12$. For $8x^2+14x+3$, we have $a=8$, $b=14$, and $c=3$. Factor the given polynomial. Hints $ax^2+bx+c$ is the standard form of a quadratic polynomial. For example, with $x^2-x+1$, we have that $a=1$, $b=-1$, and $c=1$. Look for the pair of factors of $a\times c$ that sums to $b$. Once you group a polynomial into two binomials, each binomial has a greatest common factor. So you can factor the greatest common factor out of each binomial. Solution To factor the trinomial $6x^2+x-2$ we first look at the coefficients: here we have $a=6$, $b=1$ and $c=-2$. We're looking for the pair of factors of $6\times (-2)=-12$ which sums to $1$. $\begin{array}{rr|r} \text{factor}&\text{factor}&\text{sum}\\ \hline -1&12&11\\ 1&-12&-11\\ -2&6&4\\ 2&-6&-4\\ -3&4&1 \end{array}$ Now we can stop: we've found the factors $-3$ and $4$. We can rewrite $6x^2+x-2$ as follows: $6x^2+x-2=(6x^2-3x)+(4x-2)$. Just look at the two resulting binomials in parentheses. The greatest common factor of the first one is $3x$ and of the second one is $2$: $(6x^2-3x)+(4x-2)=3x(2x-1)+2(2x-1)$. Now we see that both summands have the binomial $2x-1$ as a factor in common. So we factor to get $3x(2x-1)+2(2x-1)=(3x+2)(2x-1)$. The side lengths Adventure Mike was looking for are then $3x+2$ and $2x-1$.
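The factor-pair search used throughout these exercises can be sketched as a short loop (a toy helper of my own, not part of the video):

```python
def grouping_pair(a, b, c):
    """Return a pair (m, n) with m*n == a*c and m + n == b, or None if no pair exists."""
    ac = a * c
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + ac // m == b:
            return (m, ac // m)
    return None

# 15x^2 + 9x - 6  ->  ac = -90, and the pair (-6, 15) sums to 9
```

For $15x^2+9x-6$ it returns $(-6, 15)$, and for $3x^2+5x-2$ it returns $(-1, 6)$, matching the tables above.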
I was reading about BMO spaces (John-Nirenberg) on Wikipedia: http://en.wikipedia.org/wiki/Bounded_mean_oscillation. There the BMO norm is defined as $$\sup_{Q}\frac{1}{|Q|}\int_Q |u(y) - u_Q|\,dy$$ where $u_Q$ is the average of $u$ over $Q$ and the supremum is taken over all cubes of arbitrary diameter. Questions: 1. What could the definition of BMO spaces on a torus $T^n = S^1 \times \dots \times S^1$ be? We cannot have cubes of arbitrary diameter. Maybe we can look at the supremum over cubes whose diameter is smaller than or equal to that of the torus? 2. What will happen if we take the supremum over cubes $Q$ whose diameter is less than or equal to $r$, say, with $r$ very small? Will that give the same norm? Thanks. Edit: Question 3. Please also advise a little about non-quotient spaces. Edit: Joonas Ilmavirta answers Q1 and 3 below. Someone please look at Question 2.
A commutative algebra (with unity) over a field gives rise to the covariant functor F: Set_f->Vect from finite sets to vector spaces: F(E) := A^{otimes E}. Is it true that, over the complex numbers, a finite-dimensional algebra can be reconstructed from the corresponding functor? (A Gamma-module is a functor from finite pointed sets to vector spaces; so F is not a Gamma-module. I use this term in the title just because I do not know the correct term for F: Set_f->Vect.) Let me clarify my question. For a commutative algebra $A$ we define a functor $F:\mathrm{Set}_\mathrm{f}\to\mathrm{Vect}$ by $F(I)=A^{\otimes I}$ for a finite set $I$ and $F(t):F(I)\to F(J)$, $\bigotimes_{i\in I}a_i\mapsto\bigotimes_{j\in J}\prod_{i\in t^{-1}(j)}a_i$ for a map $t:I\to J$ (exactly as Andreas Blass proposed). Suppose now that two finite-dimensional algebras $A$ and $B$ over the complex numbers produce isomorphic functors $F$ and $G$. Is it true that $A$ and $B$ are then isomorphic? The question is not trivial. Let $e:F\to G$ be an isomorphism of functors. Then $e_{\{1\}}:A\to B$ and $e_{\{1,2\}}:A\otimes A\to B\otimes B$ are isomorphisms of vector spaces. If we had $e_{\{1,2\}}=e_{\{1\}}\otimes e_{\{1\}}$, this would imply that $e_{\{1\}}$ is an isomorphism of algebras. The problem is that we have only linear naturality relations between the $e_I$.
Suppose I have a very simple asset whose price takes only three possible values: $X_t\in \{-1,0,1\}$. I also have some discrete time series $X = (X_t)_{t\geq 0}$ and I would like to come up with a trading rule based on these observations. Let's focus on the following naive approach: given the current level of the asset, I would like an estimate of what the next change will be. Thus, I am fitting this time series to a Markov chain where I disregard transitions that do not change the state. For example, based on the following sample from the time series:$$ \dots0,0,0,0,0,0,0,1,1,1,0,0,0,-1,-1\dots \tag{1}$$and supposing that there are no further appearances of $0$, I can conclude that out of $10$ appearances of $0$, $7$ are followed by $1$ and $3$ are followed by $-1$; hence a naive algorithm would say that the transition probabilities are $p(1|0) = 0.7$ and $p(-1|0) = 0.3$. Now, I wanted to make it faster, so I preprocessed the data to get rid of repetitions since I thought that would not affect the end result. For example, the sample $(1)$ transforms into $$ \dots0,1,0,-1\dots \tag{2} $$ but now $p(1|0) = 0.5$ and $p(-1|0) = 0.5$, which is quite different from the previous estimate. Of course, that's just a simple example, but it gives a general impression: the lumping procedure $(1)\to(2)$ changes the estimates of the transition probabilities. It surprised me at first, but now it seems very natural: in $(2)$ I give equal weight to each interval of consecutive $0$'s, whereas in $(1)$ more weight goes to a longer interval of consecutive $0$'s. The question is: given my purpose, what would be the correct method to estimate the probabilities? Note that I am trying to predict the price move without taking into account how long I have stayed at the current price of the asset, only the price level itself. 
From that perspective, I guess the second approach is more appropriate: if I observe new prices and I see the price is $0$, I'd rather rely on the next price change happening with probabilities $(0.5,0.5)$ than $(0.7, 0.3)$. At the same time, I do not feel confident here since my background in statistics is rather weak, so any feedback on this topic is highly appreciated. Namely: which is formally more correct, to use the probabilities from $(1)$ or from $(2)$? Or, if both are correct depending on how they are used, what is a proper way of using them? Practical comments on this approach are also welcome.
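To make the comparison concrete, the two estimators can be written side by side (a sketch in Python; the function names are mine). `estimate_raw` credits every occurrence of a state with the next *different* value, as in $(1)$; `estimate_lumped` collapses runs of repeats first, as in $(2)$:

```python
from collections import Counter, defaultdict

def estimate_raw(seq):
    """Approach (1): every occurrence of a state counts toward the next different value."""
    counts = defaultdict(Counter)
    for i, s in enumerate(seq):
        for t in seq[i + 1:]:
            if t != s:
                counts[s][t] += 1
                break
    return {s: {t: n / sum(c.values()) for t, n in c.items()} for s, c in counts.items()}

def estimate_lumped(seq):
    """Approach (2): collapse runs of repeats first, then count transitions."""
    collapsed = [s for i, s in enumerate(seq) if i == 0 or s != seq[i - 1]]
    return estimate_raw(collapsed)

sample = [0, 0, 0, 1, 0, -1]
# raw:    p(1|0) = 3/4, p(-1|0) = 1/4
# lumped: p(1|0) = 1/2, p(-1|0) = 1/2
```

On this toy sample the two estimates already disagree, for exactly the weighting reason described above.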
The help pages in R assume I know what those numbers mean, but I don't. I'm trying to really intuitively understand every number here. I will just post the output and comment on what I found out. There might (will) be mistakes, as I'll just write what I assume. Mainly I'd like to know what the t value in the coefficients means, and why they print the residual standard error.

Call:
lm(formula = iris$Sepal.Width ~ iris$Petal.Width)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.09907 -0.23626 -0.01064  0.23345  1.17532 

This is a 5-point summary of the residuals (their mean is always 0, right?). The numbers can be used (I'm guessing here) to quickly see if there are any big outliers. You can also already see here whether the residuals are far from normally distributed (they should be normally distributed).

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)       3.30843    0.06210  53.278  < 2e-16 ***
iris$Petal.Width -0.20936    0.04374  -4.786 4.07e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Estimates $\hat{\beta_i}$, computed by least squares regression. Also, the standard error is $\sigma_{\beta_i}$. I'd like to know how this is calculated. I have no idea where the t value and the corresponding p-value come from. I know $\hat{\beta}$ should be normally distributed, but how is the t value calculated?

Residual standard error: 0.407 on 148 degrees of freedom

$\sqrt{ \frac{1}{n-p} \epsilon^T\epsilon }$, I guess. But why do we calculate that, and what does it tell us?

Multiple R-squared: 0.134, Adjusted R-squared: 0.1282

$ R^2 = \frac{s_\hat{y}^2}{s_y^2} $, which is $ \frac{\sum_{i=1}^n (\hat{y_i}-\bar{y})^2}{\sum_{i=1}^n (y_i-\bar{y})^2} $. The ratio is close to 1 if the points lie on a straight line, and 0 if they are random. What is the adjusted R-squared?

F-statistic: 22.91 on 1 and 148 DF, p-value: 4.073e-06

F and p for the whole model, not only for single $\beta_i$'s as before. The F value is $ \frac{s^2_{\hat{y}}}{\sum\epsilon_i} $. 
The bigger it grows, the more unlikely it is that the $\beta$'s do not have any effect at all.
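For reference, every number in that printout can be recomputed by hand. The sketch below (my own code, on synthetic data, not the iris numbers) uses the usual formulas: $\mathrm{se}(\hat\beta_j)=\sqrt{\hat\sigma^2\,[(X^TX)^{-1}]_{jj}}$, $t_j=\hat\beta_j/\mathrm{se}(\hat\beta_j)$ with p-values from a $t_{n-p}$ distribution, and adjusted $R^2 = 1-(1-R^2)\frac{n-1}{n-p}$. With a single predictor, the overall F-statistic is just the square of the slope's t value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 - 0.5 * x + rng.normal(scale=0.4, size=50)

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
n, p = X.shape
beta = np.linalg.solve(X.T @ X, X.T @ y)       # least-squares estimates
resid = y - X @ beta
s2 = resid @ resid / (n - p)                   # residual variance
rse = np.sqrt(s2)                              # "Residual standard error"
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se                                  # the "t value" column
pvals = 2 * stats.t.sf(np.abs(t), df=n - p)    # the "Pr(>|t|)" column
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - resid @ resid / ss_tot                # Multiple R-squared
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)      # Adjusted R-squared
F = t[1] ** 2                                  # with one predictor, F = t^2 of the slope
```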
I'm trying to solve the exercise in the title and I think it makes no sense. Here's what it says: An onto map $f: X \to Y$ is called an elementary partial isomorphism between $\mathcal{A}$ and $\mathcal{B}$ if $X \subseteq A$, $Y \subseteq B$ and $$(\mathcal{A},\{x\}_{x\in X}) \equiv (\mathcal{B},\{f(x)\}_{x\in X})$$ $f$ is immediately extensible if for each $a\in A$ (respectively $b\in B$) there is a $b\in B$ (respectively $a\in A$) such that $$(\mathcal{A},\{x\}_{x\in X},a) \equiv (\mathcal{B},\{f(x)\}_{x\in X},b)$$ Suppose every finite, elementary partial isomorphism between $\mathcal{A}$ and $\mathcal{B}$ is immediately extensible. Show $\mathcal{A} \equiv\mathcal{B}$. From what I can see there are two cases: either there exists a finite, elementary partial isomorphism between $\mathcal{A}$ and $\mathcal{B}$, in which case $\mathcal{A} \equiv\mathcal{B}$ simply by removing the additional constants; or there isn't, so by taking $X=\emptyset$, we obtain $\mathcal{A} \not\equiv\mathcal{B}$. In either case, the extensibility condition plays no role. Am I getting it wrong? Also, from what I've looked up, this seems related to Ehrenfeucht–Fraïssé games, but I'm not experienced enough to translate the statement back. Any help? Thanks.
In probability studies we come across many situations and apply various concepts to find the probability of an event. The numerical value of possible outcomes in a random experiment varies from trial to trial. Suppose you roll a fair die; the number of possible outcomes is $6$. That is, the die can land on $1, 2, 3, 4, 5, 6$. If we define a variable $X$, then $X$ can randomly vary from $1$ to $6$ in this experiment. The outcomes are countable, and hence the variable is called a discrete random variable. On the other hand, if the continuous data can take any value within a range, then the variable is defined as a continuous random variable. For example, if you study the heights of a group of grown-up persons, the variable can take any value, including decimals, within an appropriate range, say from $5$ ft to $7$ ft. Analysis of Continuous Random Variables Let us now take a detailed look at continuous random variables. If, in a given range $[a, b]$, the probability density is equal for all values of the random variable within the interval, then the probability is uniformly distributed. This is also called the rectangular distribution: the length of the rectangle is the range, and the height is the density $p$ of any value of the variable within the given range. Clearly the area of such a rectangle is $1$, as it represents the total probability of all possible outcomes. Generalizing, we can describe this by a function $f_X(x)$. The following diagram helps us to understand better. Now we get an interesting result! The length of the rectangle over the assumed range is $(b - a)$, and if the density at any value of the variable is $p$, then $p(b - a)$ = $1$. Therefore, $p$ = $\frac{1}{b - a}$. We will now adopt a different approach. In the case of a uniformly distributed function, $f(X)$ = $p$, where $p$ is a constant. The cumulative probability is $0$ at $x$ = $a$ and is $1$ at $x$ = $b$. 
Hence the probability accumulates from $0$ to $1$, in a linear manner in this case. If the interval width is $w$, the rate of variation, or slope, is $\frac{1}{w}$. We can define a cumulative distribution function (CDF for short) such that $C(x)$ = $kx$, where $k$ = $\frac{1}{w}$. In general, the probability distribution will not always be uniform, so we cannot take $f(X)$ to be a constant; in general form it is written $f_X(x)$. The domain could be all real numbers, that is, $(-\infty, \infty)$, but in our probability study we can limit it to an interval $[a, b]$. Since the probability is the area under the curve of the density function, we can say $P \{a < X \leq b\}$ = $F_X(b) - F_X(a)$, where $F_X$ is the integral of $f_X(x)$. The most commonly known and important continuous distribution of a random variable is the standard normal distribution, because many statistical data are normally distributed. Here the random variable is denoted $z$, defined by the relation $z$ = $\frac{\text{random variable of the data} - \text{mean of the data}}{\text{standard deviation of the data}}$. The cumulative distribution function of a standard normal distribution is complicated to integrate and evaluate in closed form. Instead, the probability corresponding to a value of the random variable can be found directly from a table called the z-score table. Word Problems Example 1: Govind is visiting his home village for a week. The parliamentary member of that constituency regularly spends a day in that village once in every 60 days. What is the probability that Govind will meet the minister in his home village? In a span of 60 days, the minister can visit on any of the days. In other words, each day the probability of the minister's visit to the village is $\frac{1}{60}$. 
Hence, the cumulative distribution function can be written as $C(x)$ = $\frac{x}{60}$. Since Govind will be staying at the village for $7$ days, the value of the random variable $x$ is $7$. Therefore, the probability of Govind meeting the minister is $C(7)$ = $\frac{7}{60}$. Example 2: A continuous random variable $X$ is uniformly distributed on $[5, 12]$. What is its probability density? The width of the interval is $12 - 5$ = $7$, and hence $k$ = $\frac{1}{7}$. Therefore, the probability density of the random variable at any value in the given range is $\frac{1}{7}$ (and the total probability $P(5 < X \leq 12)$ is $1$). Example 3: The graph of the probability density function of a random variable is shown below. What is the value of $k$? The graph describes a triangle, whose area is given by $\frac{1}{2} \times k \times 18$ = $9k$. But this area represents the total probability of all possible values of the random variable, which is $1$. Therefore, $9k$ = $1$ and hence $k$ = $\frac{1}{9}$. Example 4: A probability density function is defined as follows: $f_X(x)$ = $ke^{-cx}$ on the interval $[0, \infty)$. a) Find the relation between $c$ and $k$. b) Find the cumulative distribution function assuming $c$ = $2$. c) Determine the probability for $3 < X < 5$. a) The total probability for all positive values of the random variable is the area under the curve of $f_X(x)$; in other words, it is the definite integral of $f_X(x)$ from $0$ to infinity. For a continuous random variable, this area is $1$. Therefore, $\int_{0}^{\infty} ke^{-cx}\,dx$ = $1$, i.e. $-\frac{k}{c}\left[e^{-cx}\right]_0^{\infty}$ = $1$, i.e. $-\frac{k}{c}(0 - 1)$ = $1$, so $\frac{k}{c}$ = $1$, or $k$ = $c$. b) If $c$ = $2$, then $k$ = $2$ as well, and hence $f_X(x)$ = $2e^{-2x}$. 
The cumulative distribution function is given by $F_X(x)$ = $\int_{0}^{x}f_X(u)\,du$, where $0 \leq u < x$: $\int_{0}^{x} 2e^{-2u}\,du$ = $-\left[e^{-2u}\right]_0^x$ = $1 - e^{-2x}$. So $F_X(x)$ = $1 - e^{-2x}$. c) $P (3 < X < 5)$ = $F_X(5) - F_X(3)$ = $(1 - e^{-10})\ -\ (1 - e^{-6})$ = $e^{-6}\ -\ e^{-10}\ \approx\ 0.0024$ Example 5: The length of a manufactured rod is a random variable following a normal distribution with a mean length of $20$ cm and a standard deviation of $2$ cm. You randomly pick up a rod. What is the probability that a) the length of that rod is less than $19$ cm? b) the length of the rod is between $21$ and $22$ cm? a) The z-score for this case is $\frac{19 - 20}{2}$ = $-0.5$. Therefore, the probability that the length of a rod is less than $19$ cm, $P (X < 19)$, is the same as $P (Z < -0.5)$. Referring to the z-score table, the value corresponding to $-0.5$ is $0.3085$. Hence the probability that the length of a randomly picked rod is less than $19$ cm is $0.3085$. b) The z-score for a length of $21$ cm is $\frac{21 - 20}{2}$ = $0.5$. The z-score for a length of $22$ cm is $\frac{22 - 20}{2}$ = $1$. As per the z-score table, $P (Z < 1)$ = $0.8413$ and $P (Z < 0.5)$ = $0.6915$. So, $P (21 < X < 22)$ = $P (0.5 < Z < 1)$ = $P (Z < 1)\ -\ P (Z < 0.5)$ = $0.8413 - 0.6915$ = $0.1498$. Hence, the probability that the length of a randomly picked rod is between $21$ and $22$ cm is $0.1498$.
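The z-table lookups in Example 5 are easy to verify in code (a quick check, assuming scipy is available):

```python
from scipy.stats import norm

mu, sd = 20.0, 2.0
p_a = norm.cdf(19, loc=mu, scale=sd)                                   # P(X < 19)
p_b = norm.cdf(22, loc=mu, scale=sd) - norm.cdf(21, loc=mu, scale=sd)  # P(21 < X < 22)
# p_a ≈ 0.3085; p_b ≈ 0.1499 (the z-table rounding above gives 0.1498)
```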
Let $a, b$ and $c$ be positive reals satisfying $a+b+c=3$. Find the minimum value of $$\large \displaystyle \sum_{\text{cyc}} \dfrac{a}{bc+1}.$$ Bonus: Find as many approaches as possible.
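A coarse numerical scan (my sketch, for intuition only — not one of the requested approaches) suggests the minimum is attained at the symmetric point $a=b=c=1$, where the sum equals $\frac{3}{2}$:

```python
def f(a, b, c):
    # The cyclic sum a/(bc+1) + b/(ca+1) + c/(ab+1)
    return a / (b * c + 1) + b / (c * a + 1) + c / (a * b + 1)

# Brute-force scan of the constraint surface a + b + c = 3 on a 0.05 grid
best = float("inf")
for i in range(1, 60):
    for j in range(1, 60 - i):
        a, b = i * 0.05, j * 0.05
        c = 3.0 - a - b          # stays on the constraint surface, c > 0
        best = min(best, f(a, b, c))
```

This is evidence, not a proof; the grid value at $(1,1,1)$ is exactly $1.5$ and nothing smaller shows up on the scan.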
In a proof, I want to split the equations on the left and have a picture on the right using minipages. However, for one of the equations on the left, the comment is too long and too big for the minipage. How can I split the comment so that it goes across two lines? I understand it can be done for equations, but I'm not sure how to do it for comments. Below is my code and the output:

\documentclass[a4paper, 11pt]{article}
\usepackage{fullpage}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{mathtools,amsthm}
\begin{document}
\begin{proof}~\\
\begin{minipage}{0.5\textwidth}
\begin{align*}
AB = BD, & AC=CE &\qquad &\textrm{(midpoint of a side)}\\
\angle BAC &= \angle DAE&\qquad &\textrm{(common angle)}\\
\frac {AB}{AD} &= \frac {AC}{AE} = \frac 12\\
\therefore \Delta ABC &\sim \Delta ADE& \qquad &\textrm{(SAS; 1:2)}\\
\angle ABC & = \angle ADE& \qquad &\textrm{(matching $\angle$ 's, $\Delta ABC \sim\Delta ADE$)}\\
\therefore BC &\| DE& \qquad &\textrm{(corresponding $\angle$'s are $=$)}\\
\frac{BC}{DE} &=\frac 12 & \qquad & \textrm{(matching sides in the same ratio, $\Delta ABC \sim\Delta ADE$)}\\
\therefore BC &= \frac 12 DE && \qedhere
\end{align*}
\end{minipage}\hfill
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.9\textwidth,keepaspectratio]{proof.PNG}
\end{minipage}
\end{proof}
\end{document}

As in the above diagram, I want to split the comment at the vertical line and place it on the next line aligned with the other comments. Thanks for any help!
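One possible fix (my sketch, not from the question itself) is to let the long comment wrap inside a top-aligned \parbox; the 0.45\textwidth width here is an assumption to be tuned against the minipage:

```latex
% Hypothetical replacement for the over-long comment line:
% \parbox[t]{...} lets the comment wrap onto a second line,
% and \raggedright avoids stretched inter-word spaces.
\frac{BC}{DE} &=\frac 12 & \qquad &
  \parbox[t]{0.45\textwidth}{\raggedright
    (matching sides in the same ratio,\\
    $\Delta ABC \sim \Delta ADE$)}\\
```

Inside the minipage, \textwidth refers to the minipage's own width, so the fraction controls how much of the left column the comment may occupy.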
Pushing the envelope, presumably the best scenario would be a simple proof of the Prime Number Theorem. After all, Wilson’s Theorem gives a necessary and sufficient condition, in terms of the Gamma Function, for a number to be a prime, and Stirling’s Formula specifies the asymptotic behaviour of the Gamma Function. Using Robbins' [1] form of Stirling's formula, $$\sqrt{2\pi}n^{n+1/2}\exp(-n+1/(12n+1))< n!< \sqrt{2\pi}n^{n+1/2}\exp(-n+1/(12n))$$ we get $$\left\lceil\sqrt{2\pi}(n-1)^{n-1/2}\exp(-n+1+1/(12n-11))\right\rceil$$ $$\le (n-1)!\le$$ $$\left\lfloor\sqrt{2\pi}(n-1)^{n-1/2}\exp(-n+1+1/(12n-12))\right\rfloor$$ which is accurate enough to distinguish prime from composite for $n\le8$. For larger numbers, the error bound is too large. This can be extended further using a modification of Wilson's theorem: for $n > 9$, $$\lfloor n/2\rfloor!\equiv0\pmod n$$ if and only if $n$ is composite. This allows testing 10 through 15, plus (with some cleverness) 17. With tighter explicit bounds and high-precision evaluation, it might be possible to test as high as 100 with related methods: direct evaluation up to 25 and the 'divide by 4' variant of the above for $n > 25$. This is not so much 'using a cannon to swat a fly' (using methods more powerful than needed) as it is 'using the space station to swat a fly': the methods must be extremely powerful and accurate to do very little. [1] H. Robbins, "A Remark on Stirling's Formula." The American Mathematical Monthly 62 (1955), pp. 26-29.
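The two factorial tests described above can be sketched directly with exact integer arithmetic (my sketch, deliberately sidestepping the Stirling-approximation route the answer discusses):

```python
import math

def wilson_is_prime(n):
    # Wilson's theorem: n > 1 is prime iff (n-1)! ≡ -1 (mod n)
    return n > 1 and math.factorial(n - 1) % n == n - 1

def half_factorial_says_composite(n):
    # The modified test quoted above: for n > 9,
    # n is composite iff floor(n/2)! ≡ 0 (mod n)
    return math.factorial(n // 2) % n == 0
```

Of course, computing the factorial exactly defeats the purpose of the approximation-based approach; the point of the answer is precisely that the Stirling error term swamps the signal beyond about $n = 8$ (or the mid-teens with the modified test).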
RD Sharma Solutions Class 8 Chapter 4 Exercise 4.5

Making use of the cube root table, find the cube roots of the following (correct to three decimal places): 7 70 700 7000 1100 780 7800 1346 940 5112 9800 732 7342 133100 37800 0.27 8.6 0.86 8.65 7532 833 34.2

Answer:

Q1. 7
Answer: Because 7 lies between 1 and 100, we look at the row containing 7 in the column of x. By the cube root table, we have: \(\sqrt[3]{7} = 1.913\). Thus, the answer is 1.913.

Q2. 70
Because 70 lies between 1 and 100, we look at the row containing 70 in the column of x. By the cube root table, we have: \(\sqrt[3]{70} = 4.121\). Thus, the answer is 4.121.

Q3. 700
We have: 700 = 70 x 10, so the cube root of 700 will be in the column of \(\sqrt[3]{10x}\) against 70. By the cube root table, we have: \(\sqrt[3]{700} = 8.879\). Thus, the answer is 8.879.

Q4. 7000
We have: 7000 = 7 x 1000, so \(\sqrt[3]{7000} = \sqrt[3]{7} \times \sqrt[3]{1000} = 1.913 \times 10 = 19.13\). Thus, the answer is 19.13.

Q5. 1100
We have: 1100 = 11 x 100. Therefore, \(\sqrt[3]{1100} = \sqrt[3]{11} \times \sqrt[3]{100}\). By the cube root table, we have: \(\sqrt[3]{11} = 2.224\) and \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{1100} = 2.224 \times 4.642 \approx 10.323\). Thus, the answer is 10.323.

Q6. 780
We have: 780 = 78 x 10, so the cube root of 780 will be in the column of \(\sqrt[3]{10x}\) against 78. By the cube root table, we have: \(\sqrt[3]{780} = 9.205\). Thus, the answer is 9.205.

Q7. 7800
We have: 7800 = 78 x 100, so \(\sqrt[3]{7800} = \sqrt[3]{78} \times \sqrt[3]{100}\). By the cube root table, we have: \(\sqrt[3]{78} = 4.273\) and \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{7800} = 4.273 \times 4.642 \approx 19.835\). Thus, the answer is 19.835.

Q8. 1346
Answer: By prime factorisation, we have: 1346 = 2 x 673, so \(\sqrt[3]{1346} = \sqrt[3]{2} \times \sqrt[3]{673}\). Also 670 < 673 < 680, so \(\sqrt[3]{670} < \sqrt[3]{673} < \sqrt[3]{680}\). From the cube root table, we have: \(\sqrt[3]{670} = 8.750\) and \(\sqrt[3]{680} = 8.794\). For the difference (680 – 670), i.e., 10, the difference in the values = 8.794 – 8.750 = 0.044. For the difference of (673 – 670), i.e., 3, the difference in the values = \(\frac{0.044 \times 3}{10} \approx 0.013\), so \(\sqrt[3]{673} \approx 8.750 + 0.013 = 8.763\). Now, \(\sqrt[3]{1346} = 1.260 \times 8.763 \approx 11.041\). Thus, the answer is 11.041.

Q9.
250
Answer: We have: 250 = 25 x 10, so the cube root of 250 will be in the column of \(\sqrt[3]{10x}\) against 25. By the cube root table, we have: \(\sqrt[3]{250} = 6.3\). Thus, the required cube root is 6.3.

Q10. 5112
Answer: By prime factorisation, we have: 5112 = \(2^{3} \times 3^{2} \times 71\), so \(\sqrt[3]{5112} = 2 \times \sqrt[3]{9} \times \sqrt[3]{71}\). By the cube root table, we have: \(\sqrt[3]{9} = 2.080\) and \(\sqrt[3]{71} = 4.141\), so \(\sqrt[3]{5112} = 2 \times 2.080 \times 4.141 \approx 17.227\). Thus, the required cube root is 17.227.

Q11. 9800
We have: 9800 = 98 x 100, so \(\sqrt[3]{9800} = \sqrt[3]{98} \times \sqrt[3]{100}\). By the cube root table, we have: \(\sqrt[3]{98} = 4.610\) and \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{9800} = 4.610 \times 4.642 \approx 21.40\). Thus, the required cube root is 21.40.

Q12. 732
Answer: We have: 730 < 732 < 740, so \(\sqrt[3]{730} < \sqrt[3]{732} < \sqrt[3]{740}\). From the cube root table, we have: \(\sqrt[3]{730} = 9.004\) and \(\sqrt[3]{740} = 9.045\). For the difference (740 – 730), i.e., 10, the difference in values = 9.045 – 9.004 = 0.041. For the difference of (732 – 730), i.e., 2, the difference in values = \(\frac{0.041 \times 2}{10} \approx 0.008\), so \(\sqrt[3]{732} \approx 9.004 + 0.008 = 9.012\). Thus, the required cube root is 9.012.

Q13. 7342
Answer: We have: 7300 < 7342 < 7400, so \(\sqrt[3]{7300} < \sqrt[3]{7342} < \sqrt[3]{7400}\). From the cube root table, we have: \(\sqrt[3]{7300} = 19.39\) and \(\sqrt[3]{7400} = 19.48\). For the difference (7400 – 7300), i.e., 100, the difference in values = 19.48 – 19.39 = 0.09. For the difference of (7342 – 7300), i.e., 42, the difference in the values = \(\frac{0.09 \times 42}{100} \approx 0.038\), so \(\sqrt[3]{7342} \approx 19.39 + 0.038 = 19.428\). Thus, the required cube root is 19.428.

Q14. 133100
We have: 133100 = 1331 x 100, so \(\sqrt[3]{133100} = \sqrt[3]{1331} \times \sqrt[3]{100} = 11 \times \sqrt[3]{100}\). From the cube root table, we have: \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{133100} = 11 \times 4.642 = 51.062\). Thus, the required cube root is 51.062.

Q15. 37800
We have: 37800 = \(2^{3} \times 3^{3} \times 175\), so \(\sqrt[3]{37800} = \sqrt[3]{2^{3} \times 3^{3} \times 175} = 6 \times \sqrt[3]{175}\). Also 170 < 175 < 180, so \(\sqrt[3]{170} < \sqrt[3]{175} < \sqrt[3]{180}\). From the cube root table, we have: \(\sqrt[3]{170} = 5.540\) and \(\sqrt[3]{180} = 5.646\). For the difference (180 – 170), i.e., 10, the difference in values = 5.646 – 5.540 = 0.106. For the difference of (175 – 170), i.e., 5, the difference in values = \(\frac{0.106 \times 5}{10} = 0.053\), so \(\sqrt[3]{175} \approx 5.540 + 0.053 = 5.593\). Now \(\sqrt[3]{37800} = 6 \times 5.593 = 33.558\). Thus, the required cube root is 33.558.

Q16.
0.27
The number 0.27 can be written as \(\frac{27}{100}\). Now, \(\sqrt[3]{0.27} = \sqrt[3]{\frac{27}{100}} = \frac{\sqrt[3]{27}}{\sqrt[3]{100}} = \frac{3}{\sqrt[3]{100}}\). From the cube root table, we have: \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{0.27} = \frac{3}{4.642} \approx 0.646\). Thus, the required cube root is 0.646.

Q17. 8.6
The number 8.6 can be written as \(\frac{86}{10}\). Now, \(\sqrt[3]{8.6} = \sqrt[3]{\frac{86}{10}} = \frac{\sqrt[3]{86}}{\sqrt[3]{10}}\). From the cube root table, we have: \(\sqrt[3]{86} = 4.414\) and \(\sqrt[3]{10} = 2.154\), so \(\sqrt[3]{8.6} = \frac{4.414}{2.154} \approx 2.049\). Thus, the required cube root is 2.049.

Q18. 0.86
The number 0.86 can be written as \(\frac{86}{100}\). Now, \(\sqrt[3]{0.86} = \sqrt[3]{\frac{86}{100}} = \frac{\sqrt[3]{86}}{\sqrt[3]{100}}\). From the cube root table, we have: \(\sqrt[3]{86} = 4.414\) and \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{0.86} = \frac{4.414}{4.642} \approx 0.951\). Thus, the required cube root is 0.951.

Q19. 8.65
Answer: The number 8.65 can be written as \(\frac{865}{100}\). Now, \(\sqrt[3]{8.65} = \sqrt[3]{\frac{865}{100}} = \frac{\sqrt[3]{865}}{\sqrt[3]{100}}\). Also, 860 < 865 < 870, so \(\sqrt[3]{860} < \sqrt[3]{865} < \sqrt[3]{870}\). From the cube root table, we have: \(\sqrt[3]{860} = 9.510\) and \(\sqrt[3]{870} = 9.546\). For the difference (870 – 860), i.e., 10, the difference in values = 9.546 – 9.510 = 0.036. For the difference of (865 – 860), i.e., 5, the difference in values = \(\frac{0.036 \times 5}{10} = 0.018\), so \(\sqrt[3]{865} \approx 9.510 + 0.018 = 9.528\). From the cube root table, we also have: \(\sqrt[3]{100} = 4.642\), so \(\sqrt[3]{8.65} = \frac{9.528}{4.642} \approx 2.053\). Thus, the required cube root is 2.053.

Q20.
7532
We have: 7500 < 7532 < 7600, so \(\sqrt[3]{7500} < \sqrt[3]{7532} < \sqrt[3]{7600}\). From the cube root table, we have: \(\sqrt[3]{7500} = 19.57\) and \(\sqrt[3]{7600} = 19.66\). For the difference of (7600 – 7500), i.e., 100, the difference in values = 19.66 – 19.57 = 0.09. For the difference of (7532 – 7500), i.e., 32, the difference in values = \(\frac{0.09 \times 32}{100} \approx 0.029\), so \(\sqrt[3]{7532} \approx 19.57 + 0.029 = 19.599\). Thus, the required cube root is 19.599.

Q21. 833
We have: 830 < 833 < 840, so \(\sqrt[3]{830} < \sqrt[3]{833} < \sqrt[3]{840}\). From the cube root table, we have: \(\sqrt[3]{830} = 9.398\) and \(\sqrt[3]{840} = 9.435\). For the difference of (840 – 830), i.e., 10, the difference in values = 9.435 – 9.398 = 0.037. For the difference of (833 – 830), i.e., 3, the difference in values = \(\frac{0.037 \times 3}{10} \approx 0.011\), so \(\sqrt[3]{833} \approx 9.398 + 0.011 = 9.409\). Thus, the required cube root is 9.409.

Q22. 34.2
The number 34.2 can be written as \(\frac{342}{10}\). Now, \(\sqrt[3]{34.2} = \sqrt[3]{\frac{342}{10}} = \frac{\sqrt[3]{342}}{\sqrt[3]{10}}\). Also, 340 < 342 < 350, so \(\sqrt[3]{340} < \sqrt[3]{342} < \sqrt[3]{350}\). From the cube root table, we have: \(\sqrt[3]{340} = 6.980\) and \(\sqrt[3]{350} = 7.047\). For the difference of (350 – 340), i.e., 10, the difference in values = 7.047 – 6.980 = 0.067. For the difference of (342 – 340), i.e., 2, the difference in values = \(\frac{0.067 \times 2}{10} \approx 0.013\), so \(\sqrt[3]{342} \approx 6.980 + 0.013 = 6.993\). From the cube root table, we also have: \(\sqrt[3]{10} = 2.154\), so \(\sqrt[3]{34.2} = \frac{6.993}{2.154} \approx 3.246\). Thus, the required cube root is 3.246.
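The linear interpolation used repeatedly in these solutions can be written as a small helper (my sketch; the table entries below are the ones quoted in the worked answers):

```python
def interpolate_cube_root(n, lo, hi, cr_lo, cr_hi):
    # Linear interpolation between two cube-root-table entries:
    # for lo < n < hi, cbrt(n) ≈ cr_lo + (cr_hi - cr_lo) * (n - lo) / (hi - lo)
    return cr_lo + (cr_hi - cr_lo) * (n - lo) / (hi - lo)

# Q8: cbrt(673) from the table entries for 670 and 680
cr_673 = interpolate_cube_root(673, 670, 680, 8.750, 8.794)
# Q12: cbrt(732) from the table entries for 730 and 740
cr_732 = interpolate_cube_root(732, 730, 740, 9.004, 9.045)
```

This reproduces the hand computations: roughly 8.763 for Q8 and 9.012 for Q12.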
Research Open Access Published: Boundary value problems for modified Dirac operators in Clifford analysis. Boundary Value Problems, volume 2015, Article number: 158 (2015). Abstract In this paper, we discuss two kinds of Riemann type boundary value problems for the operator \(\widetilde{D}_{\lambda}\), where λ is a complex number. Furthermore, we establish the Almansi type expansion for the operator \(\widetilde{D}_{\lambda}^{k}\), where \(k\in\mathbf{N}\). As applications of the expansion, we investigate the Riemann type boundary value problem and the generalized Riquier problem for the operator \(\widetilde{D}_{\lambda}^{k}\). Introduction The uniqueness and existence theorems for the solutions of boundary value problems for systems of partial differential equations are sufficiently well known. Such problems have remarkable applications in mathematical physics, the mechanics of deformable bodies, electromagnetism, relativistic quantum mechanics, and some of their natural generalizations. Almost all such problems can be set in the context of Clifford analysis (see [1, 2]). Clifford analysis is centered around the concept of monogenic functions, i.e. null solutions of a first order vector valued rotation invariant differential operator called the Dirac operator, which factorizes the Laplace operator (see [3, 4]). As to the mathematical study of boundary value problems in Clifford analysis, there are several different approaches known in the literature. Without claiming completeness, we mention some of them. First of all, there is the approach originating with Bernstein, which translates boundary value problems into the corresponding singular integral equations and then uses the properties of the Fredholm operator to discuss the solvability of those equations (see [5]). Another important approach is based on complex analysis.
In this case, first we use analytic function theory to solve these kinds of boundary value problems, then we use the results of the boundary value problems to solve singular integral equations (see [6, 7]). The advantage of this method is that an explicit representation of the solutions can be obtained, but in higher dimensional spaces there still exist many obstacles to generalizing it. In this paper, we continue to use the method of [6, 7] to solve boundary value problems for the modified Dirac operators. The paper is organized as follows. In Section 2, we review some results on the theory of Clifford analysis. In Section 3, applying the Plemelj formula for the modified Dirac operator [6], we consider Riemann type boundary value problems for the operator \(\widetilde{D}_{\lambda}\). In Section 4, using the Euler operator in Clifford analysis, we obtain the Almansi type expansion for the operator \(\widetilde{D}_{\lambda}^{k}\). In Section 5, as applications of the expansion, we investigate the Riemann type boundary value problem and the generalized Riquier problem for the operator \(\widetilde{D}_{\lambda}^{k}\). Preliminaries Clifford analysis Let \(\mathbf{R}_{0,m}\) be the real associative Clifford algebra generated by \(\{e_{1}, e_{2}, \ldots, e_{m}\}\), where the basic vectors \(e_{1}, e_{2}, \ldots, e_{m}\) satisfy the relations \(e_{i}e_{j}+e_{j}e_{i}=-2\delta_{i,j}\), \(i,j=1,\ldots,m\).
Let \(\varepsilon_{i}=-e_{1}e_{i}\), \(i=1,\ldots,m\); then the universal Clifford algebra \(\mathbf{R}_{0, m-1}\) for \(\mathbf{R}^{m-1}\) is generated by \(\{\varepsilon_{1}, \varepsilon_{2}, \ldots, \varepsilon_{m}\}\), where the vectors \(\varepsilon_{1}, \varepsilon_{2}, \ldots, \varepsilon_{m}\) satisfy the following relations: \(\varepsilon_{1}\) acts as the identity, and \(\varepsilon_{i}\varepsilon_{j}+\varepsilon_{j}\varepsilon_{i}=-2\delta_{i,j}\varepsilon_{1}\) for \(i,j=2,\ldots,m\). Each of the elements in \(\mathbf{R}_{0, m-1}\) may be written as \(a=\sum_{A} a_{A}\varepsilon_{A}\), where \(a_{A}\) are real numbers and \(\varepsilon_{A}=\varepsilon_{\alpha _{1}}\varepsilon_{\alpha_{2}}\cdots\varepsilon_{\alpha_{h}}\) with \(A=\{\alpha_{1},\ldots,\alpha_{h}\}\subset\{2,\ldots,m\}\). We define the norm of a as \(|a|= (\sum_{A} |a_{A}|^{2} )^{\frac{1}{2}}\). If there exists \(b\in\mathbf{R}_{0,m-1}\) such that \(ab=ba=\varepsilon _{1}\), then b is called the inverse of a, which is denoted as \(a^{-1}\). A typical element of \(\mathbf{R}^{m}\) is denoted by \(x=x_{1}\varepsilon _{1}+x_{2}\varepsilon_{2}+\cdots+x_{m}\varepsilon_{m}\) with \(x_{i}\in \mathbf{R}\). We define \(\overline{x}=x_{1}\varepsilon_{1}-x_{2}\varepsilon_{2}-\cdots -x_{m}\varepsilon_{m}\); then \(x\overline{x}=\overline{x}x=|x|^{2}\). Obviously, for \(x\neq0\), we have \(x^{-1}=\frac{\overline{x}}{|x|^{2}}\). One of the main aims of Clifford analysis is to construct a first order operator, the so-called Dirac operator, factorizing the Laplace operator, and to study the function-theoretical properties of the null solutions of this operator. When working over \(\mathbf{R}^{m}\), this Dirac operator is defined by \(D=\sum_{i=1}^{m}e_{i}\partial_{x_{i}}\). Then the modified Dirac operator is defined as \(\widetilde{D}=\partial_{x_{1}}+\sum_{i=2}^{m}\varepsilon_{i}\partial_{x_{i}}\). When studying the modified Dirac operator in this setting, we consider functions f which are e.g. elements of spaces such as \(C^{k}(\Omega)\otimes\mathbf{R}_{0,m-1}\) with Ω some open domain in \(\mathbf{R}^{m}\). This means that f can be written as \(f(x)=\sum_{A} f_{A}(x)\varepsilon_{A}\) with \(f_{A}(x)\in C^{k}(\Omega)\).
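As an aside (a sketch of mine, not part of the paper), the generator relations \(e_{i}e_{j}+e_{j}e_{i}=-2\delta_{i,j}\) can be checked mechanically by encoding basis blades as sorted index tuples and multiplying with sign tracking:

```python
def blade_mul(A, B, sq=-1):
    """Multiply basis blades of a Clifford algebra in which each generator
    squares to `sq` (sq = -1 for the negative-definite R_{0,m}).
    A blade is a sorted tuple of generator indices; returns (sign, blade)."""
    sign = 1
    result = list(A)
    for g in B:
        # Move g leftward past every generator greater than it;
        # each transposition of distinct generators flips the sign.
        pos = len(result)
        while pos > 0 and result[pos - 1] > g:
            pos -= 1
            sign = -sign
        if pos > 0 and result[pos - 1] == g:
            # Adjacent equal generators contract: e_g * e_g = sq
            del result[pos - 1]
            sign *= sq
        else:
            result.insert(pos, g)
    return sign, tuple(result)
```

With this encoding, `blade_mul((i,), (j,))` and `blade_mul((j,), (i,))` carry opposite signs for distinct i, j, and `blade_mul((i,), (i,))` returns the scalar blade with sign −1, which is exactly the relation quoted above.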
Denote by \(|f|= (\sum_{A} |f_{A}(x)|^{2} )^{\frac{1}{2}}\) the norm of \(f\in C^{k}(\Omega)\otimes\mathbf{R}_{0,m-1}\). Boundary value problems for the operator \(\widetilde{D}_{\lambda}\) Riemann type problem for the operator \(\widetilde{D}_{\lambda}\) Let \(E(x)=\frac{1}{\omega_{m}}\frac{\overline{x}}{|x|^{m}}\), where \(\omega_{m}=\frac{2\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}\) is the surface area of the unit sphere in \(\mathbf{R}^{m}\). Then \(E(x)\) satisfies the equation \(\widetilde{D}f=0\). Let f be a Hölder continuous function on ∂Ω and take its Cauchy transform. Then \(f(x)\) satisfies the equation \(\widetilde{D}f=0\) in \(\mathbf {R}^{m}\setminus{\partial\Omega}\), as was proved in [6]. In [6], the following Plemelj formulas were shown to hold for \(s\in\partial\Omega\), where \(\overline{\Omega}=\Omega\cup{\partial\Omega}\). In order to obtain the main result in this section, we need the following lemma. Lemma 3.1 Let \(x_{1}\) be a nonzero finite real number and \(\lambda\in\mathbf{C}\). Then \(\operatorname{ker}\widetilde{D}_{\lambda}=e^{\lambda x_{1}}\operatorname{ker}\widetilde{D}\), where \(\operatorname{ker}\widetilde{D}_{\lambda}=\{f|f\in C^{1}(\Omega)\otimes R_{0,m-1},(\widetilde{D}- \lambda)f=0\}\), and \(\operatorname{ker}\widetilde{D}=\operatorname{ker}\widetilde{D}_{\lambda}\) for \(\lambda=0\). Proof Letting \(f\in \operatorname{ker}\widetilde{D}\), we have \((\widetilde{D}-\lambda)(e^{\lambda x_{1}}f)=e^{\lambda x_{1}}\widetilde{D}f=0\), which implies that \(e^{\lambda x_{1}}\operatorname{ker}\widetilde{D}\subset \operatorname{ker}\widetilde{D}_{\lambda}\). Conversely, for \(f\in\operatorname{ker}\widetilde{D}_{\lambda}\), we can see that \(\widetilde{D}(e^{-\lambda x_{1}}f)=e^{-\lambda x_{1}}(\widetilde{D}-\lambda)f=0\), which means that \(\operatorname{ker}\widetilde{D}_{\lambda}\subset e^{\lambda x_{1}}\operatorname{ker}\widetilde{D} \). Therefore, we obtain the conclusion. □ Theorem 3.2 Let f be a Hölder continuous function on ∂Ω and let \(G\in Z(\mathbf{R}_{0,m-1})\) be invertible with inverse \(G^{-1}\). Then the Riemann type problem has a solution Φ. Note that the center \(Z(\mathbf{R}_{0,m-1})\) of \(\mathbf{R}_{0,m-1}\) is the set of elements in \(\mathbf{R}_{0,m-1}\) which commute with all elements of \(\mathbf{R}_{0,m-1}\) (see e.g.
[6]). Proof Here \(s\in\partial\Omega\). Finally, it is obvious that the function \(\Phi(x)\) vanishes at infinity. Thus, we obtain the conclusion. □ Riemann type boundary value problem (II) In this section, using the Plemelj formulas, we consider the following Riemann type boundary value problem (II). Suppose that f is a Hölder continuous function on ∂Ω. Find a function \(\Psi\in C^{1}(\Omega)\otimes\mathbf{R}_{0,m-1}\) that satisfies the boundary conditions, where a, b are given \(\mathbf{R}_{0,m-1}\) valued constants whose inverses are \(a^{-1}\), \(b^{-1}\). Lemma 3.3 Let \(Z_{k}=x_{k}\varepsilon_{1}-x_{1}\varepsilon_{k}\), where \(2\leq k\leq m\). Then we have the polynomials of order p, where the sum runs over all distinguishable permutations of all of \((k_{1},\ldots,k_{p})\). Theorem 3.4 The boundary value problem (II) has a solution. Proof We will prove the function is a solution of the boundary value problem (II). Denote Then where a, b have the inverses \(a^{-1}\), \(b^{-1}\), respectively. The boundary value problem (II) is equivalent to Note that the expression, for \(x\in\mathbf{R}^{m}\backslash\partial\Omega\), is meaningful and satisfies the boundary properties Thus which means that \(\Phi(x)-(T[f])(x)=g(x)\in \operatorname{ker}\widetilde{D}_{\lambda}\) in \(\mathbf{R}^{m}\) by the Painlevé theorem. By Lemma 3.3, we put \(g(x)=\sum_{p=1}^{l}\sum_{\pi ({k_{1}\cdots k_{p}})}V_{k_{1}\cdots{k_{p}}}e^{\lambda x_{1}}\). Thus we have the conclusion. □ Almansi type expansion for the operator \(\widetilde{D}_{\lambda}^{k}\) In 1899, the Almansi expansion for polyharmonic functions was established, which is equivalent to the Fischer decomposition for polynomials (see [8]). One can find important applications and generalizations of this result in the case of several complex variables in the monograph of Aronszajn et al. [9], e.g. concerning functions holomorphic in the neighborhood of the origin in \(C^{n}\). Also for the case of Clifford analysis, one can refer to [10, 11].
But all these cases are limited to star-like domains. In this section, we consider the difficult case that Ω is some open domain in \(\mathbf{R}^{m}\) not limited to star-like domains. Definition 4.1 We define the generalized Euler operator by where s is a complex constant, I is the identity operator, and E is the Euler operator. Lemma 4.2 Let Ω be as stated before. For \(f( x)\in C^{2}(\Omega)\otimes\mathbf{R}_{0,m-1}\), where \(s\in{C}\). Proof For \(s=0\), from Definition 4.1 it follows that, for \(f( x)\in C^{2}(\Omega)\otimes\mathbf{R}_{0,m-1}\), This implies that \(\widetilde{D}\mathbf{E}=\mathbf{E_{1}}\widetilde{D}\). For \(s\neq0\), This completes the lemma. □ Lemma 4.3 If \(f\in \operatorname{ker}(\widetilde{D}_{\lambda})\), then where \(C_{k}=\frac{1}{k!\lambda^{k}}\) and \(k\in\mathbf{N}\). Proof Note that \(f\in \operatorname{ker}\widetilde{D}_{\lambda}\). For \(k=1\), Lemma 4.2 implies that Suppose that, for \(k=l\), where \(C_{l}=\frac{1}{l!\lambda^{l}}\). For \(k=l+1\), We calculate which implies the conclusion. □ Denote \(\operatorname{ker}\widetilde{D}_{\lambda}^{k}=\{f|f\in C^{k}(\Omega)\otimes R_{0,m-1}, (\widetilde{D}-\lambda)^{k}f=0, k\in\mathbf{N}\}\). Theorem 4.4 If \(f(x)\in \operatorname{ker}\widetilde{D}_{\lambda}^{k}\), then there exist unique functions \(f_{0},\ldots, f_{k-1}\in \operatorname{ker}\widetilde{D}_{\lambda}\) such that where \(f_{0},\ldots, f_{k-1}\) are given as follows: and \(C_{k}=\frac{1}{k!\lambda^{k}}\). Conversely, if functions \(f_{0},\ldots, f_{k-1}\in \operatorname{ker}\widetilde {D}_{\lambda}\), then the function \(f(x)\) given by (13) satisfies the equation \(\widetilde{D}_{\lambda}^{k}f=0\). Proof Thus, Similarly, if we let the operator \(\widetilde{D}_{\lambda}^{k-2}\) act on \(f(x)-\mathbf{E}_{\lambda}^{k-1}f_{k-1}(x)\), we have Therefore, we have By induction, we have Conversely, suppose that the functions \(f_{0},\ldots, f_{k-1}\in \operatorname{ker}\widetilde{D}_{\lambda}\). 
Applying Lemma 4.3, we obtain which completes the proof. □ Boundary value problems for the operator \(\widetilde{D}_{\lambda}^{k}\) Riemann type boundary value problem (III) Now we consider the following Riemann type boundary value problem (III). Suppose that \(g_{l}(t)\), \(l=0,\ldots,k-1\), are Hölder continuous functions on ∂Ω. Find a function \(\Psi\in C^{k}(\Omega)\otimes\mathbf{R}_{0,m-1}\) that satisfies the boundary conditions, where a, b are given \(\mathbf{R}_{0,m-1}\) valued constants whose inverses are \(a^{-1}\), \(b^{-1}\). Theorem 5.1 The boundary value problem (III) has a solution. Proof We will prove that the function where for \(0\leq i\leq k-1\), and is a solution of the boundary value problem (III), which completes the proof. □ Generalized Riquier problem for the operator \(\widetilde{D}_{\lambda}^{k}\) In 1936, Nicolescu established the Riquier problem for polyharmonic equations (see [12]). In 2003, applying the 0-normalized system of functions with respect to the Laplace operator, Karachik obtained a solution of the Riquier problem in harmonic analysis (see [13]). In this section, we will study the generalized Riquier problem for the operator \(\widetilde{D}_{\lambda}^{k}\) by the expansion (13), as follows: Find a function Φ such that \(\widetilde{D}_{\lambda}^{i}\Phi\in {C(\overline{\Omega})\otimes\mathbf{R}_{0,m-1}}\), for \(i=0,\ldots ,k-1\), and Theorem 5.2 Suppose that the functions \(f_{i}(x)\in{C(\overline{\Omega})\otimes \mathbf{R}_{0,m-1}}\), \(i=0,\ldots,k-1\). Then problem (IV) has a solution given by where the functions \(f_{i}(x)\) satisfy Proof First, by Theorem 4.4, we can see that Then, for \(0\leq i\leq k-1\), Lemma 4.3 implies that Letting \({x}\rightarrow t\), the formulas in (20) give \(\widetilde {D}_{\lambda}^{i}\Phi|_{\partial\Omega}=g_{i}(t)\), \(i=0,\ldots,k-1\), which implies the conclusion. □ References 1. Obolashvili, E: Partial Differential Equations in Clifford Analysis. Pitman Monographs and Surveys in Pure and Applied Mathematics, vol.
96 (1999) 2. Obolashvili, E: Higher Order Partial Differential Equations in Clifford Analysis: Effective Solutions to Problems. Progress in Mathematical Physics, vol. 28. Birkhäuser, Boston (2003) 3. Brackx, F, Delanghe, R, Sommen, F: Clifford Analysis. Res. Notes Math. Pitman, London (1982) 4. Huang, S, Qiao, YY, Wen, GC: Real and Complex Clifford Analysis. Springer, New York (2005) 5. Bernstein, S: On the index of Clifford algebra valued singular integral operators and the left linear Riemann problem. Complex Var. Theory Appl. 35, 33-64 (1998) 6. Xu, ZY: Boundary value problems and function theory for Spin-invariant differential operators. Ph.D. thesis, State University of Ghent (1989) 7. Xu, ZY, Zhou, C: On boundary value problems of Riemann-Hilbert type for monogenic functions in a half space of \(R^{m}\), \(m\geq2\). Complex Var. Theory Appl. 22, 181-193 (1993) 8. Almansi, E: Sull’integrazione dell’equazione differenziale \(\Delta^{2m}u=0\). Ann. Mat. Pura Appl. 3(2), 1-51 (1899) 9. Aronszajn, N, Creese, TM, Lipkin, LJ: Polyharmonic Functions. Oxford Mathematics Monographs. Clarendon, Oxford (1983) 10. Ryan, J: Iterated Dirac operators in \(C^{n}\). Z. Anal. Anwend. 9(5), 385-401 (1990) 11. Malonek, H, Ren, GB: Almansi-type theorems in Clifford analysis. Math. Methods Appl. Sci. 25, 1541-1552 (2002) 12. Nicolescu, M: Les fonctions polyharmoniques. Hermann, Paris (1936) 13. Karachik, VV: Normalized system of functions with respect to the Laplace operator and its applications. J. Math. Anal. Appl. 287, 577-592 (2003) Acknowledgements This work was supported by the NNSF of China under Grant No. 11426082.
Like the option to post images while solving problems, there should also be an option to post images in solutions. There is a problem while editing solutions: if any small, minor changes are to be made in a solution, then the whole solution has to be written again, because when we click edit, the format of the solution changes and lots of extra "\" are added in unwanted places. E.g., this is a solution I posted and want to edit: let at some time t the velocity of the center of mass of the rolling part be v and radius be r. let there be p turns in the carpet per unit length. Let the current then be I. Let the total number of turns be n \\\phi=B\.A=\\int B\\pi r^\{2\}dN=\\int\_\{0\}^\{r\}B\\pi r^\{2\}pdr=\\frac\{\\pi Bpr^\{3\}\}\{3\} \\ \ We know that the mass is constant let the mass per unit length of carpet be \ \ \ \ \ finally all the carpet is rolled out and velocity finally is 0 after t=Ts applying energy conservation. MgR=\int_{0}^{t}I^{2}mdt=\int_{0}^{t}\(\\frac\{\-B^\{2\}v^\{2\}r^\{2\}\}\{2m\}^{2}mdt \] \==29 \]