A note on the boundary behavior for a modified Green function in the upper-half space. Boundary Value Problems, volume 2015, Article number: 114 (2015). Abstract Motivated by (Xu et al. in Bound. Value Probl. 2013:262, 2013) and (Yang and Ren in Proc. Indian Acad. Sci. Math. Sci. 124(2):175-178, 2014), in this paper we aim to construct a modified Green function in the upper-half space of the n-dimensional Euclidean space, which generalizes the boundary property of the general Green potential. Introduction and main results Let \({\mathbf{R}}^{n}\) (\(n\geq2\)) denote the n-dimensional Euclidean space. The upper half-space H is the set \(H=\{x=(x_{1},x_{2},\ldots,x_{n})\in{\mathbf{R}}^{n}: x_{n}>0\}\), whose boundary and closure are ∂H and \(\overline{H}\), respectively. For \(x\in{\mathbf{R}}^{n}\) and \(r>0\), let \(B(x,r)\) denote the open ball with center at x and radius r. Set Let \(G_{\alpha}\) be the Green function of order α for H, that is, where ∗ denotes reflection in the boundary plane ∂H, that is, \(y^{\ast}=(y_{1},y_{2},\ldots,-y_{n})\). In the case \(\alpha=n=2\), we consider the modified kernel function, which is defined by In the case \(0<\alpha<n\), we define where m is a non-negative integer and \(C^{\omega}_{k}(t)\) (\(\omega=\frac{n-\alpha}{2}\)) is the ultraspherical (or Gegenbauer) polynomial (see [1]). The expression arises from the generating function for Gegenbauer polynomials, where \(|r|<1\), \(|t|\leq1\) and \(\omega>0\). The coefficient \(C^{\omega}_{k}(t)\) is called the ultraspherical (or Gegenbauer) polynomial of degree k associated with ω; it is a polynomial of degree k in t. Then we define the modified Green function \(G_{\alpha,m}(x,y)\) by Write where μ is a non-negative measure on H. Here note that \(G_{2,0}(x,\mu)\) is nothing but the general Green potential. Let k be a non-negative Borel measurable function on \({\mathbf{R}}^{n}\times{\mathbf{R}}^{n}\), and set for a non-negative measure μ on a Borel set \(E\subset{\mathbf{R}}^{n}\). We define a capacity \(C_{k}\) by where the supremum is taken over all non-negative measures μ such that \(S_{\mu}\) (the support of μ) is contained in E and \(k(y,\mu)\leq1\) for every \(y\in H\). For \(\beta\leq0\), \(\delta\leq0\) and \(\beta\leq\delta\), we consider the kernel function Now we prove the following result. For related results in a smooth cone and a tube, we refer the reader to the papers by Qiao (see [5, 6]) and Liao-Su (see [7]), respectively. The reader may also find some related interesting results with respect to the Schrödinger operator in the papers by Su (see [8]) and by Polidoro and Ragusa (see [9]) and the references therein. Theorem Let \(n+m-\alpha+\delta+2\geq0\). If μ is a non-negative measure on H satisfying then there exists a Borel set \(E\subset H\) with properties: where \(E_{i}=\{x\in E: 2^{-i}\leq x_{n}<2^{-i+1}\}\). Remark Some lemmas Throughout this paper, let M denote various constants independent of the variables in question, which may differ from line to line. Lemma 1 There exists a positive constant M such that \(G_{\alpha}(x,y)\leq M\frac{x_{n}y_{n}}{|x-y|^{n-\alpha+2}}\), where \(0<\alpha\leq n\), \(x=(x_{1},x_{2},\ldots,x_{n})\) and \(y=(y_{1},y_{2},\ldots, y_{n})\) in H. This can be proved by a simple calculation.
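For the reader's convenience, here is a sketch of the "simple calculation" behind Lemma 1 in the representative case \(\alpha=2\), \(n\geq3\), where (up to a dimensional constant) \(G_{2}(x,y)=|x-y|^{2-n}-|x-y^{\ast}|^{2-n}\); the general case \(0<\alpha<n\) is handled analogously with the Riesz kernel. By the mean value theorem applied to \(t\mapsto t^{2-n}\) on \([|x-y|,|x-y^{\ast}|]\),

\[ |x-y|^{2-n}-|x-y^{\ast}|^{2-n}\leq (n-2)\,|x-y|^{1-n}\bigl(|x-y^{\ast}|-|x-y|\bigr), \]

and since \(|x-y^{\ast}|^{2}-|x-y|^{2}=4x_{n}y_{n}\) while \(|x-y^{\ast}|+|x-y|\geq2|x-y|\),

\[ |x-y^{\ast}|-|x-y|=\frac{4x_{n}y_{n}}{|x-y^{\ast}|+|x-y|}\leq\frac{2x_{n}y_{n}}{|x-y|}, \]

which together give \(G_{2}(x,y)\leq M\,x_{n}y_{n}/|x-y|^{n}\), that is, the bound of Lemma 1 with \(\alpha=2\).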
Lemma 2 Gegenbauer polynomials have the following properties: (1) \(|C_{k}^{\omega}(t)|\leq C_{k}^{\omega}(1)=\frac{\Gamma(2\omega+k)}{\Gamma(2\omega)\Gamma(k+1)}\), \(|t|\leq1\); (2) \(\frac{d}{dt}C_{k}^{\omega}(t)=2\omega C_{k-1}^{\omega+1}(t)\), \(k\geq1\); (3) \(\sum_{k=0}^{\infty} C_{k}^{\omega}(1)r^{k}=(1-r)^{-2\omega}\); (4) \(|C^{\frac{n-\alpha}{2}}_{k}(t)-C^{\frac{n-\alpha}{2}}_{k}(t^{\ast})| \leq(n-\alpha)C^{\frac{n-\alpha+2}{2}}_{k-1}(1)|t-t^{\ast}|\), \(|t|\leq1\), \(|t^{\ast}|\leq1\). Proof Lemma 3 For \(x, y\in{\mathbf{R}}^{n}\) (\(\alpha=n=2\)), we have the following properties: (1) \(|\Im \sum_{k=0}^{m}\frac{x^{k}}{y^{k+1}}|\leq\sum_{k=0}^{m-1} \frac{2^{k} x_{n} |x|^{k}}{|y|^{k+2}}\); (2) \(|\Im\sum_{k=0}^{\infty}\frac{x^{k+m+1}}{y^{k}}|\leq 2^{m+1}x_{n} |x|^{m}\); (3) \(|G_{n,m}(x,y)-G_{n}(x,y)|\leq M \sum_{k=1}^{m} \frac{k x_{n} y_{n} |x|^{k-1}}{|y|^{k+1}}\); (4) \(|G_{n,m}(x,y)|\leq M \sum_{k=m+1}^{\infty} \frac{k x_{n} y_{n} |x|^{k-1}}{|y|^{k+1}}\). The following lemma can be proved by using Fuglede (see [11], Théorème 7.8). Lemma 4 For any Borel set E in H, we have \(C_{k_{\alpha}}(E)=\hat{C}_{k_{\alpha}}(E)\), where \(\hat{C}_{k_{\alpha}}(E)=\inf\lambda(H)\), \(k_{\alpha}=k_{\alpha,0,0}\), the infimum being taken over all non-negative measures λ on H such that \(k_{\alpha}(\lambda,x)\geq1\) for every \(x \in E\). Following [10], we say that a set \(E\subset H\) is α-thin at the boundary ∂H if where \(E_{i}=\{x\in E: 2^{-i}\leq x_{n} <2^{-i+1}\}\). Proof of Theorem We write where We distinguish the following two cases. Case 1. \(0<\alpha<n\). By assumption (1.2) we can find a sequence \(\{a_{i}\}\) of positive numbers such that \(\lim_{i\rightarrow\infty} a_{i}=\infty\) and \(\sum_{i=1}^{\infty}a_{i}b_{i}<\infty\), where Consider the sets for \(i=1,2,\ldots\) . Set Then \(G\subset\{y \in H:2^{-i-1}< y_{n}< 2^{-i+2}\}\). Let ν be a non-negative measure on H such that \(S_{\nu}\subset E_{i}\), where \(S_{\nu}\) is the support of ν. Then we have \(k_{\alpha,\beta,\delta}(y,\nu)\leq1\) for \(y\in H\) and So that which yields Setting \(E=\bigcup_{i=1}^{\infty}E_{i}\), we see that (2) in the Theorem is satisfied and For \(U_{2}(x)\), by Lemma 1 we have Similarly, by (3) and (4) in Lemma 2 we have Finally, by Lemma 1, we have Case 2. \(\alpha=n=2\). In this case, \(U_{1}(x)\), \(U_{2}(x)\) and \(U_{5}(x)\) can be treated similarly to Case 1. Here we omit the details and state the following facts: where \(E=\bigcup_{i=1}^{\infty}E_{i}\) and \(\sum_{i=1}^{\infty}2^{i(\beta+\delta)}C_{k_{\alpha,\beta,\delta}}(E_{i})<\infty\), By Lemma 3(3), we obtain By Lemma 3(4), we have Hence the proof of the theorem is completed. References 1. Szegö, G: Orthogonal Polynomials. American Mathematical Society Colloquium Publications, vol. 23. Am. Math. Soc., Providence (1975) 2. Ren, YD, Yang, P: Growth estimates for modified Neumann integrals in a half space. J. Inequal. Appl. 2013, 572 (2013) 3. Xu, G, Yang, P, Zhao, T: Dirichlet problems of harmonic functions. Bound. Value Probl. 2013, 262 (2013) 4. Yang, DW, Ren, YD: Dirichlet problem on the upper half space. Proc. Indian Acad. Sci. Math. Sci. 124(2), 175-178 (2014) 5. Qiao, L: Integral representations for harmonic functions of infinite order in a cone. Results Math. 61, 62-74 (2012) 6. Qiao, L, Pan, GS: Generalization of the Phragmén-Lindelöf theorems for subfunctions. Int. J. Math. 24(8), 1350062 (2013) 7. Liao, Y, Su, BY: Solutions of the Dirichlet problem in a tube domain. Acta Math. Sin. 57(6), 1209-1220 (2014) 8.
Su, BY: Dirichlet problem for the Schrödinger operator in a half space. Abstr. Appl. Anal. 2012, Article ID 578197 (2012) 9. Polidoro, S, Ragusa, MA: Harnack inequality for hypoelliptic ultraparabolic equations with a singular lower order term. Rev. Mat. Iberoam. 24(3), 1011-1046 (2008) 10. Armitage, H: Tangential behavior of Green potentials and contractive properties of \(L^{p}\)-potentials. Tokyo J. Math. 9, 223-245 (1986) 11. Fuglede, B: Le théorème du minimax et la théorie fine du potentiel. Ann. Inst. Fourier 15, 65-88 (1965) Acknowledgements The authors are highly grateful for the referees' careful reading of and comments on this paper. This work was completed while the authors were visiting the Department of Mathematical Sciences at the University of Wollongong, and they are grateful for the kind hospitality of the Department. Additional information Competing interests The authors declare that they have no competing interests. Authors' contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
On the complex plane $\mathbb C$ consider the half-open square $$\square=\{z\in\mathbb C:0\le\Re(z)<1,\;0\le\Im(z)<1\}.$$ Observe that for every $z\in \mathbb C$ and $p\in\{0,1,2,3\}$ the set $z+i^p\cdot\square$ is the shifted and rotated square $\square$ with a vertex at $z$. Problem. Is it true that for any function $p:\mathbb C\to\{0,1,2,3\}$ there is a subset $Z\subset\mathbb C$ such that the union of the squares $$\bigcup_{z\in Z}(z+i^{p(z)}\cdot\square)$$ is not Borel in $\mathbb C$? Added in Edit. As @YCor observed in his comment, the answer to this problem is affirmative under $\neg CH$. An affirmative answer to the Problem would follow from an affirmative answer to another intriguing Problem′. Is it true that for any partition $\mathbb C=A\cup B$ either $A$ contains an uncountable strictly increasing function or $B$ contains an uncountable strictly decreasing function? Here by a function I understand a subset $f\subset \mathbb C$ such that for any $x\in\mathbb R$ the set $f(x)=\{y\in\mathbb R:x+iy\in f\}$ contains at most one element. Added in the Next Edit. In the discussion with @YCor we came to the conclusion that under CH the answer to both problems is negative. Therefore, both problems are independent of ZFC. Very strange.
Could someone please help me translate what this is saying on page P15, section 4.2? Specifically: When the order rates are time-varying, probabilities must be computed via Monte Carlo. A simple algorithm is as follows. There are six types of orders: buy/sell and market/limit/cancel. For each type of order there are multiple rates depending on the distance to the bid/ask, i.e. if the bid is at the tenth highest tick level then there are ten limit buy orders. Let $\lambda_i$, $i \in \mathcal{I}$, be the collection of all order rates and $\boldsymbol{x}_t = (x_1, \ldots, x_n)$ be the current state of the order book, as specified in [1]. Then there is a fixed and finite number of possible states $x_{t+1}$ can take on. The next state of the order book is completely determined by which order arrives first. It is known that if $X_i \sim \exp(\lambda_i)$ independently, then $$P\Big(X_i = \min_{j \in \mathcal{I}} X_j\Big) = \frac{\lambda_i}{\sum_{j \in \mathcal{I}} \lambda_j}.$$ Therefore, to determine the next state of the order book we just sample $u \sim U(0,1)$, then partition the interval $(0, 1)$ according to the above probabilities to determine which order arrived first. After the next state of the order book is computed we recompute the $\lambda_i$'s since they depend on the order book, i.e. $x_t$ is an inhomogeneous Markov chain, and repeat to generate an entire sample path. Let $A$ be the set of $\omega$ where the midprice increases; to compute its probability we simulate sample paths until there is a change in the midprice, compute $I_A(\omega)$, and then estimate the probability as $$\hat P(A) = \frac{1}{N}\sum_{k=1}^{N} I_A(\omega_k)$$ over $N$ simulated paths. EDIT: Ok, I have made some progress: as the comment below says, X is exponentially distributed. However, I do not get what calculating the "distribution of the minimum exponential random variable" is for. Also, once we have done this we then (seem to) draw from the uniform distribution between 0 and 1 and then plot the probabilities on the x-axis, and, I think, look for the probability with the greatest area? I really don't understand why this tells us the next state. What exactly is finding the minimum exponential random variable telling us? Why do we need to use the uniform distribution?
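For concreteness, here is a minimal Python sketch of the step described above. The helpers order_rates, apply_order and mid_move are hypothetical stand-ins for the order-book mechanics of [1], not functions from the paper:

import numpy as np

def next_event(rates, rng):
    """Pick which of the competing orders arrives first. If X_i ~ Exp(rates[i])
    independently, then P(X_i is the minimum) = rates[i] / sum(rates)."""
    probs = np.asarray(rates, dtype=float)
    probs /= probs.sum()                    # partition (0, 1) into |I| pieces
    u = rng.uniform()                       # u ~ U(0, 1)
    return int(np.searchsorted(np.cumsum(probs), u))  # interval containing u

def prob_midprice_up(book0, order_rates, apply_order, mid_move,
                     n_paths=10_000, seed=0):
    """Monte Carlo estimate of P(A), A = {midprice increases}. mid_move
    returns 0 while the midprice is unchanged, +1 once it moves up, -1 down."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_paths):
        book = book0.copy()
        while mid_move(book0, book) == 0:
            i = next_event(order_rates(book), rng)   # rates recomputed each step:
            book = apply_order(book, i)              # inhomogeneous Markov chain
        hits += mid_move(book0, book) == 1
    return hits / n_paths

The uniform draw is just an implementation trick: sampling which exponential clock rings first is equivalent to sampling a categorical distribution with probabilities lambda_i / sum(lambda), which is what partitioning (0, 1) and locating u does.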
tf.contrib.layers.legacy_fully_connected( x, num_output_units, activation_fn=None, weight_init=initializers.xavier_initializer(), bias_init=tf.zeros_initializer(), name=None, weight_collections=(ops.GraphKeys.WEIGHTS,), bias_collections=(ops.GraphKeys.BIASES,), output_collections=(ops.GraphKeys.ACTIVATIONS,), trainable=True, weight_regularizer=None, bias_regularizer=None ) Adds the parameters for a fully connected layer and returns the output. A fully connected layer is generally defined as a matrix multiply: y = f(w * x + b), where f is given by activation_fn. If activation_fn is None, the result of y = w * x + b is returned. If x has shape [\(\text{dim}_0, \text{dim}_1, \ldots, \text{dim}_n\)] with more than 2 dimensions (\(n > 1\)), then we repeat the matrix multiply along the first dimensions. The result r is a tensor of shape [\(\text{dim}_0, \ldots, \text{dim}_{n-1},\) num_output_units], where \( r_{i_0, \ldots, i_{n-1}, k} = \sum_{0 \leq j < \text{dim}_n} x_{i_0, \ldots, i_{n-1}, j} \cdot w_{j, k}\). This is accomplished by reshaping x to 2-D [\(\text{dim}_0 \cdot \ldots \cdot \text{dim}_{n-1}, \text{dim}_n\)] before the matrix multiply and afterwards reshaping it to [\(\text{dim}_0, \ldots, \text{dim}_{n-1},\) num_output_units]. This op creates w and optionally b. Bias (b) can be disabled by setting bias_init to None. Most of the details of variable creation can be controlled by specifying the initializers (weight_init and bias_init) and in which collections to place the created variables (weight_collections and bias_collections; note that the variables are always added to the VARIABLES collection). The output of the layer can be placed in custom collections using output_collections. The collections arguments default to WEIGHTS, BIASES and ACTIVATIONS, respectively. A per-layer regularization can be specified by setting weight_regularizer and bias_regularizer, which are applied to the weights and biases respectively, and whose output is added to the REGULARIZATION_LOSSES collection. Args: x: The input Tensor. num_output_units: The size of the output. activation_fn: Activation function, default set to None to skip it and maintain a linear activation. weight_init: An optional weight initialization, defaults to xavier_initializer. bias_init: An initializer for the bias, defaults to 0. Set to None in order to disable bias. name: The name for this operation is used to name operations and to find variables. If specified it must be unique for this scope, otherwise a unique name starting with "fully_connected" will be created. See tf.variable_scope for details. weight_collections: List of graph collections to which weights are added. bias_collections: List of graph collections to which biases are added. output_collections: List of graph collections to which outputs are added. trainable: If True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). weight_regularizer: A regularizer like the result of l1_regularizer or l2_regularizer. Used for weights. bias_regularizer: A regularizer like the result of l1_regularizer or l2_regularizer. Used for biases. Returns: The output of the fully connected layer. Raises: ValueError: If x has rank less than 2 or if its last dimension is not set.
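A minimal usage sketch, assuming a TensorFlow 1.x environment where tf.contrib is still available (the layer sizes are illustrative):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, shape=[None, 784])  # batch of flattened inputs

# Hidden layer: y = relu(w * x + b), Xavier-initialized weights by default.
hidden = tf.contrib.layers.legacy_fully_connected(
    x, num_output_units=128, activation_fn=tf.nn.relu)

# Linear output layer (activation_fn=None); the L2 penalty on its weights is
# added to the REGULARIZATION_LOSSES collection, as described above.
logits = tf.contrib.layers.legacy_fully_connected(
    hidden, num_output_units=10,
    weight_regularizer=tf.contrib.layers.l2_regularizer(1e-4))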
I've been trying to get a grasp on some of the basics of interest rate modeling, and am looking to simulate rates using the 2-factor Hull-White model, which I am aware offers a more realistic model of rates which allows for imperfect correlation between instantaneous forward rates. I've found resources on the web which reduce the model to an additive Gaussian one, where one has for the short rate $r(t)$: $r(t) = \varphi(t) + x_1(t) + x_2(t)$ where $x_1, x_2$ are mean reverting processes governed by: $dx_1(t) = -a_1x_1(t)\,dt + \sigma_1\,dW_1(t)$ $dx_2(t) = -a_2x_2(t)\,dt + \sigma_2\,dW_2(t)$ with $dW_1(t)\,dW_2(t) = \rho\,dt$, and $\varphi(t)$ is deterministic and chosen to fit the initial forward rate curve $f(0,t)$: $\varphi(t) = f(0,t) + \frac{\sigma_1^2}{2a_1^2}\left(1-e^{-a_1t}\right)^2+\frac{\sigma_2^2}{2a_2^2}\left(1-e^{-a_2t}\right)^2+\rho\frac{\sigma_1\sigma_2}{a_1a_2}\left(1-e^{-a_1t}\right)\left(1-e^{-a_2t}\right)$ This tells me how to simulate the short rate (by updating $x_1,x_2$ at each time increment and adding to $\varphi$), but my question is, how could one simulate the evolution of the whole forward curve? I have also found (unwieldy) closed-form expressions for $P(t,T)$, the price of a term $T$ zero coupon bond at time $t$, from which you can obtain the forward curve, but is there a way to generate the forward curve at time $t+\Delta t$ by updating the curve at time $t$, akin to the way we can do it for the short rate $r(t)$?
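A Python sketch of the short-rate part, taking the question's $\varphi$, $a_1$, $a_2$, $\sigma_1$, $\sigma_2$, $\rho$ as given; note that correlating the Gaussian shocks directly with $\rho$ is exact only in the small-$\Delta t$ limit, since the exact joint step covariance of the two OU factors differs slightly:

import numpy as np

def simulate_short_rate(phi, a1, a2, s1, s2, rho, T, n_steps, n_paths, seed=0):
    """r(t) = phi(t) + x1(t) + x2(t), each factor updated with the exact
    one-dimensional OU step: x <- x*exp(-a*dt) + N(0, s^2(1-exp(-2a*dt))/(2a))."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x1 = np.zeros(n_paths)
    x2 = np.zeros(n_paths)
    r = np.empty((n_steps + 1, n_paths))
    r[0] = phi(0.0)
    sd1 = s1 * np.sqrt((1 - np.exp(-2 * a1 * dt)) / (2 * a1))
    sd2 = s2 * np.sqrt((1 - np.exp(-2 * a2 * dt)) / (2 * a2))
    for k in range(1, n_steps + 1):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        x1 = x1 * np.exp(-a1 * dt) + sd1 * z1
        x2 = x2 * np.exp(-a2 * dt) + sd2 * z2
        r[k] = phi(k * dt) + x1 + x2
    return r

As for the forward curve: since the closed-form $P(t,T)$ in this model is a deterministic function of $(t, x_1(t), x_2(t))$, the usual approach is not to evolve the curve pointwise but to simulate the two factors as above and re-evaluate $P(t,T)$ (and hence $f(t,T)$) from them at each step.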
Can someone please help me out with the following question? Q. A simple harmonic oscillator, of mass m and natural frequency $\omega_0$, experiences an oscillating driving force $f(t) = ma\cos(\omega t)$. Therefore its equation of motion is: [tex]\frac{{d^2 x}}{{dt^2 }} + \omega _0 ^2 x = a\cos \left( {\omega t} \right)[/tex] Given that at t = 0 we have x = dx/dt = 0, find the function x(t). Describe the solution if $\omega$ is approximately, but not exactly, equal to $\omega_0$. I got: [tex]x\left( t \right) = \frac{a}{{\left( {\omega _0 ^2 - \omega ^2 } \right)}}\left( {\cos \left( {\omega t} \right) - \cos \left( {\omega _0 t} \right)} \right)[/tex] The answer says a couple of things about the behaviour of the solution for $\omega \approx \omega_0$ but I can't figure out how they got it. For instance, "for large t it shows beats of maximum amplitude $2(\omega_0^2 - \omega^2)^{-1}$." How is that deduced, and how would I determine which are the main characteristics of the motion that I need to note? Any help would be appreciated.
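One way to see the beats is the sum-to-product identity applied to the solution above (keeping the driving amplitude a explicit; the quoted answer apparently sets a = 1):

[tex]x\left( t \right) = \frac{a}{{\omega _0 ^2 - \omega ^2 }}\left( {\cos \omega t - \cos \omega _0 t} \right) = \frac{{2a}}{{\omega _0 ^2 - \omega ^2 }}\sin \left( {\frac{{\left( {\omega _0 - \omega } \right)t}}{2}} \right)\sin \left( {\frac{{\left( {\omega _0 + \omega } \right)t}}{2}} \right)[/tex]

For $\omega$ close to $\omega_0$ the first sine varies slowly, so it acts as the envelope of a fast oscillation at frequency $(\omega_0+\omega)/2 \approx \omega_0$: the motion shows beats whose maximum amplitude, reached whenever the slow sine equals 1, is $2a(\omega_0^2-\omega^2)^{-1}$. The main characteristics to note are exactly these: the fast carrier frequency, the slow beat frequency $(\omega_0-\omega)/2$, and the envelope amplitude, which blows up as $\omega \to \omega_0$ (resonance).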
I know, for various reasons, some users preprocess their LaTeX file using Perl or sed, say. I'm considering doing this too, so I would like to seek your guidance, to smooth my entry into this area. My use case is a simple macro expansion preprocessing: If $\ep$ is small... --> If $\epsilon$ is small... So that a compilation session becomes doc.tex --preprocessor--> post.tex --pdftex--> post.pdf And the following issues arise: There is generally a mismatch between line numbers in doc.tex and post.tex, so that pdftex error messages point to a line in post.tex. But, really, one wants the line number in doc.tex. The same for synctex: backward and forward search are with respect to post.tex, but one really wants them to be with respect to doc.tex. Sometimes, one wants to go back and forth between doc.tex and post.tex. The preprocessor knows the correspondence between lines of the two documents. But then, how to put this knowledge to work? I mean, text editors do not have out-of-the-box support for these backward-forward trips. So, do you know some software, workflow or technique which will flatten (1)-(3)? I looked around, but I could only find examples of Perl or sed preprocessing scripts. I never saw the question of the workflow discussed anywhere. EDIT The line mismatch can originate from a big expansion: A function is continuous if \multiline_continuity_definition_I_often_use Hence ... where the definition is expanded to \begin{description} \item[Standard] $\forall \epsilon ... $ \item[Nonstandard] $x \approx y ... $ \end{description} Edit 2 The point of this question is to discuss preprocessing techniques. My examples are very simplistic, but I hope they catch the main challenge of a preprocessing session.
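To make issue (1) concrete, here is a minimal Python sketch of a preprocessor that also writes a post-to-doc line map, so a line reported by pdftex against post.tex can be translated back to doc.tex. The file names and macro tables (including \continuitydef) are illustrative, and patching synctex itself, which is the genuinely hard part of (2), is not attempted here:

#!/usr/bin/env python3
"""Macro-expanding preprocessor that records, for every line of post.tex,
the doc.tex line it came from."""
import json
import re

ONELINE = {r"\\ep\b": r"\\epsilon"}   # same-line expansions never shift numbering
MULTILINE = {                         # multi-line expansions do shift it
    "\\continuitydef": "\\begin{description}\n"
                       "\\item[Standard] $\\forall \\epsilon\\ \\dots$\n"
                       "\\item[Nonstandard] $x \\approx y\\ \\dots$\n"
                       "\\end{description}",
}

def preprocess(src="doc.tex", dst="post.tex", mapfile="post.linemap.json"):
    out, linemap = [], []             # linemap[post_line - 1] == doc line number
    with open(src) as f:
        for doc_no, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            for pat, rep in ONELINE.items():
                line = re.sub(pat, rep, line)
            for macro, expansion in MULTILINE.items():
                line = line.replace(macro, expansion)
            for piece in line.split("\n"):
                out.append(piece)
                linemap.append(doc_no)  # every emitted line remembers its origin
    with open(dst, "w") as f:
        f.write("\n".join(out) + "\n")
    with open(mapfile, "w") as f:
        json.dump(linemap, f)

def to_doc_line(post_line, mapfile="post.linemap.json"):
    """Translate 'post.tex:123' from a pdftex log back into a doc.tex line."""
    with open(mapfile) as f:
        return json.load(f)[post_line - 1]

if __name__ == "__main__":
    preprocess()

A small editor macro or shell wrapper can then pipe the line numbers in pdftex's log through to_doc_line before displaying them.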
I am interested in calculating the transition dipole moment (TDM) from the information from two wavefunctions of different states. This is somewhat similar to calculating the molecular dipole moment, which was previously answered here: The information I have is the molecular orbital coefficients of alpha and beta from states 1 and 2 and the multipole matrix. Considering just one Cartesian direction X, calculating the dipole moment along X would be something like $$\mathbf{P_a} = \mathbf{C_a} \cdot \mathbf{C_a^T}$$ $$\mathbf{P_b} = \mathbf{C_b} \cdot \mathbf{C_b^T}$$ $$\mathbf{P} = \mathbf{P_a} + \mathbf{P_b}$$ $$\mathbf{\mu_{matrix}} = \mathbf{P} \cdot \mathbf{\text{multipoleMatrixX}}$$ $$\mu_{electronic} = trace(\mathbf{\mu_{matrix}})$$ $$\mu_{nuclear} = \sum_{\text{all atoms}}(Z_{nucleus} * \overrightarrow{r_x})$$ $$\mu_{X} = - \mu_{electronic} + \mu_{nuclear}$$ where $\mathbf{C_a}$ and $\mathbf{C_b}$ are the occupied alpha and beta molecular orbital coefficients, $\mathbf{P_a}$ and $\mathbf{P_b}$ are the alpha and beta densities, $\mathbf{P}$ is the total density, and $\mathbf{\text{multipoleMatrixX}}$ is the multipole matrix for the X direction. I might think that to calculate the transition dipole moment I might just change the densities to the transition densities, doing something like I have shown below. The transition density is based on page 10 of the article here: $$\mathbf{S_a} = \mathbf{C_{a1}} \cdot \mathbf{C_{a2}^T}$$ $$\mathbf{S_b} = \mathbf{C_{b1}} \cdot \mathbf{C_{b2}^T}$$ $$\mathbf{P_a} = \mathbf{C_{a1}} \cdot \mathbf{C_{a2}^T} \cdot \mathbf{S_a^{-1}}$$ $$\mathbf{P_b} = \mathbf{C_{b1}} \cdot \mathbf{C_{b2}^T} \cdot \mathbf{S_b^{-1}}$$ $$\mathbf{P} = \mathbf{P_a} + \mathbf{P_b}$$ $$\mathbf{\mu_{matrix}} = \mathbf{P} \cdot \mathbf{\text{multipoleMatrixX}}$$ $$\mu_{electronic} = trace(\mathbf{\mu_{matrix}})$$ $$\mu_{nuclear} = \sum_{\text{all atoms}}(Z_{nucleus} * \overrightarrow{r_x})$$ $$\mu_{X,TDM} = - \mu_{electronic} + \mu_{nuclear}$$ where $\mathbf{C_{a1}}$ and $\mathbf{C_{a2}}$ are the occupied alpha molecular orbital coefficients from states 1 and 2, $\mathbf{C_{b1}}$ and $\mathbf{C_{b2}}$ are the occupied beta molecular orbital coefficients from states 1 and 2, $\mathbf{S_a}$ and $\mathbf{S_b}$ are the alpha and beta overlap matrices, and $\mathbf{S_a^{-1}}$ and $\mathbf{S_b^{-1}}$ are the inverses of the overlap matrices. I am unsure about how to calculate the TDM, since my results are in disagreement with known calculated TDM values; some of these transitions should have a TDM of zero due to symmetry considerations. I have been unable to find any discussion on calculating the TDM from two wavefunctions, and most discussions I have seen are in the context of some perturbation on the ground state (i.e. TD or CI). I would be grateful if anyone could provide some suggestions or references to look at.
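To make the linear algebra concrete, here is a NumPy transcription of the formulas above. It is a sketch, not a validated TDM implementation, and it makes one dimensional reinterpretation: as written, $\mathbf{S_a} = \mathbf{C_{a1}}\mathbf{C_{a2}^T}$ is an n_basis x n_basis matrix of rank n_occ and therefore singular, so the invertible object must be the occupied-block overlap $\mathbf{C_{a2}^T}\mathbf{C_{a1}}$ (n_occ x n_occ):

import numpy as np

def transition_dipole_x(Ca1, Ca2, Cb1, Cb2, mult_x):
    """X component of the electronic transition moment between two
    single-determinant states. Ca1, Ca2, Cb1, Cb2: occupied alpha/beta MO
    coefficients (n_basis x n_occ) for states 1 and 2; mult_x: AO dipole
    (multipole) matrix for X. If the AO basis is not orthonormal, the AO
    overlap matrix belongs between the coefficient matrices below."""
    Sa = Ca2.T @ Ca1                      # occupied-block alpha overlap
    Sb = Cb2.T @ Cb1
    Pa = Ca1 @ np.linalg.inv(Sa) @ Ca2.T  # alpha transition density
    Pb = Cb1 @ np.linalg.inv(Sb) @ Cb2.T
    P = Pa + Pb
    # No nuclear term here: for orthogonal states <psi1|psi2> = 0, and the
    # nuclear contribution is proportional to that overlap, so it drops out
    # of a true transition moment (unlike the state dipole moment).
    return -np.trace(P @ mult_x)

If the nuclear term was being added in your calculation, that alone could explain nonzero values for symmetry-forbidden transitions.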
If the premises are inconsistent then they cannot all be true. So the statement "if all the premises are true, then the consequence is true" holds, since the premises are never all true. On the syntactic side, the details depend on what deductive system we use, and there are many different possibilities even if we confine ourselves to natural deduction-style systems. In the kind I prefer, there is a primitive falsity sentence $\bot$ and negation is not primitive; rather, $\lnot A$ abbreviates $A\to \bot.$ So clearly from $A$ and $\lnot A,$ you can use implication elimination, aka modus ponens, to infer $\bot.$ The principle of explosion is encapsulated in an elimination rule for $\bot$ that says you can infer any sentence from $\bot$. It might seem unsatisfying that explosion seems to be assumed, not proven, in this framework, but this is actually a major advantage, especially if you're particularly interested in the rule. This way it is simple to track which proofs "use explosion" and which don't, and it's easy to come up with systems (e.g. minimal logic) that don't have it: just consider subsystems that don't have this rule. The question of "why" it's true classically is better answered by a semantic argument anyway, e.g. the one I gave above, or observing that $A\land\lnot A\to B$ is a tautology. Regarding your last paragraph, it doesn't matter that you can ignore the contradictions in your premises. If you can find a consistent subset $\Sigma'$ of your assumptions $\Sigma$ such that $\Sigma' \models \phi,$ that's great, and much more interesting than the fact that $\Sigma\models \phi.$ But that just means you should have been working with $\Sigma'$ all along rather than $\Sigma.$ We know that since $\Sigma$ is inconsistent, $\Sigma\models \psi$ for any sentence $\psi,$ so the fact that $\Sigma\models \phi$ is not very informative. Thus we usually only care about reasoning from a consistent set of premises. (Unless, say, we're using a logic that doesn't have explosion, where the consequences of an inconsistent set of premises can be nontrivial.)
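To make the $\bot$-based setup concrete, here is how the two observations above look in Lean 4 (an illustrative rendering, not part of the original answer), where ¬A is definitionally A → False:

-- From A and ¬A, modus ponens gives False; False.elim is the ⊥-elimination
-- (explosion) rule, proving any B.
example (A B : Prop) (ha : A) (hna : ¬A) : B :=
  False.elim (hna ha)

-- The tautology A ∧ ¬A → B mentioned at the end:
example (A B : Prop) : A ∧ ¬A → B :=
  fun h => False.elim (h.2 h.1)

Note how the proof term wears its use of False.elim on its sleeve, which is exactly the "easy to track which proofs use explosion" point: minimal logic is what you get by refusing yourself that constant.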
I have a few problems that I'm trying to work through. Want to see if these few are correct. $$3y'' + 4y' - 3y = 0$$ auxiliary equation is: $$3r^2 + 4r -3 = 0$$ where $a = 3$, $b = 4$, $c = -3$; can't really find roots by factoring so gonna use the quadratic formula: $$r = \frac{-4 \pm \sqrt{16 - 4(3)(-3)}}{6}$$ $$r = \frac{-4 \pm \sqrt{52}}{6}$$ $$r = \frac{-2}{3} \pm \frac{\sqrt{52}}{6}$$ $$r = \frac{-2}{3} \pm \frac{\sqrt{13}}{3}$$ (since $\sqrt{52} = 2\sqrt{13}$, so $\frac{\sqrt{52}}{6} = \frac{\sqrt{13}}{3}$), so there are 2 real roots. So the general solution is: $$y = c_1e^{r_1x} + c_2e^{r_2x}$$ where $r_1 = \frac{-2}{3} + \frac{\sqrt{13}}{3}$ and $r_2 = \frac{-2}{3} - \frac{\sqrt{13}}{3}$. $$9y'' + 4y = 0$$ auxiliary equation (could have used the quadratic formula): $$9r^2 + 4 = 0$$ $$9r^2 = -4$$ $$r^2 = -4/9$$ $$r = \pm \frac{2}{3}i$$ so the two roots are: $$r_1 = 0 + \frac{2}{3}i$$ and $$r_2 = 0 - \frac{2}{3}i$$ where $\alpha = 0$ and $\beta = \frac{2}{3}$, and so the general solution is: $$y = e^{0x}(c_1\cos\tfrac{2}{3}x + c_2\sin\tfrac{2}{3}x)$$ $$y = y''$$ $$y'' - y = 0$$ $$r^2 - 1 = 0$$ $$r^2 = 1$$ $$r = \pm 1$$ two real roots, so the general solution is: $$y = c_1e^x + c_2e^{-x}$$ $$y'' + 2y = 0$$ $$r^2 + 2 = 0$$ $$r^2 = -2$$ $$r = \pm \sqrt{-2} = \pm \sqrt{2}i$$ and so $\alpha = 0$ and $\beta = \sqrt{2}$, and so $e^{\alpha x} = 1$, so $$y = c_1\cos\sqrt{2}x + c_2\sin\sqrt{2}x$$ do these look right?
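A quick way to check all four answers is to let SymPy solve each equation and compare (a minimal sketch):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dsolve prints the general solution of each equation for comparison.
for ode in [3*y(x).diff(x, 2) + 4*y(x).diff(x) - 3*y(x),
            9*y(x).diff(x, 2) + 4*y(x),
            y(x).diff(x, 2) - y(x),
            y(x).diff(x, 2) + 2*y(x)]:
    print(sp.dsolve(sp.Eq(ode, 0), y(x)))

In particular, the first one comes back with exponents $\frac{(-2 \pm \sqrt{13})x}{3}$, confirming the $\frac{\sqrt{13}}{3}$ simplification above.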
Let $G$ be a semisimple algebraic group over $\mathbb C$, $T$ be a maximal torus and $B$ be a Borel subgroup of $G$ containing $T$. Let $R^+$ be the set of positive roots with respect to $B$. Let $Q$ be a parabolic subgroup containing $B$ corresponding to a subset $\{\alpha_1, \alpha_2, \cdots ,\alpha_k \}$ of the set of simple roots $\{\alpha_1, \alpha_2, \cdots ,\alpha_n \}$. Then the Bruhat decomposition of $G/Q$ is given by $G/Q=\cup_{w \in W^Q}BwQ/Q$, where $W^Q$ is the set of minimal length coset representatives of $W/W_Q$, $W_Q$ being the Weyl group of $Q$. $C_Q(w):=BwQ/Q$ is called a Schubert cell and its closure $X_w$ in $G/Q$ is called the Schubert variety associated to $w$. Let $B^-$ be the Borel subgroup of $G$ opposite to $B$. Then $B^-vQ/Q$ is called the opposite cell and its closure $X^v$ in $G/Q$ is called the opposite Schubert variety associated to $v$. When $Q=B$ then $C_B(w)= \prod_{\{\alpha \in R^+: w^{-1}(\alpha) <0 \}}U_{\alpha}$, where $U_{\alpha}$ is the root subgroup corresponding to $\alpha$. My questions are the following: What is the expression for $C_Q(w)$ in terms of root subgroups for $Q \neq B$? Let $v < w$ in the Bruhat order. Then what does an element in $BwQ/Q \cap B^-vQ/Q$ look like in the form of a matrix? Let's take $G=SL_6$, $B=$ the subgroup of upper triangular matrices and $Q$ the maximal parabolic corresponding to the simple root $\alpha_2$, $w=s_2s_1s_5s_4s_3s_2$ and $v=s_3s_2$. If $x \in BwQ/Q \cap B^-vQ/Q$, then what is the matrix form of $x$?
Homework Statement A cylinder of length 1.2 m, mass 0.1 kg and radius 0.5 cm is released from rest and rolls without slipping down 2 rails connected to a battery of 10 V. The rails are at an angle of 40 degrees. B = 0.01 T. The total resistance of rails and cylinder is 250 ohms. Calculate v after it has rolled down 0.45 m. Homework Equations Firstly, I need to determine what force the field causes. Using the left hand rule, the force due to the field acts down the slope. Hence my FBD looks like this: the two arrows pointing towards the right represent the force due to the field and the weight of the cylinder. Since: ##\epsilon = Blv## ##I =\frac{\epsilon}{R} = \frac{Blv}{R}## Is it correct to say that: ##ma = mg\sin\theta + F_{B} = mg\sin\theta + BIl##? If so, what do I do next? Rolling down 0.45 m, does that mean I need to compute the time it takes to roll down using Pythagoras' theorem? Or am I overthinking here? Thank you
Siril processing tutorial Convert your images in the FITS format Siril uses (image import) Work on a sequence of converted images Pre-processing images Registration (Global star alignment) → Stacking Stacking The final step to do with Siril is to stack the images. Go to the "stacking" tab, and indicate if you want to stack all images, only the selected images, or the best images according to the value of FWHM previously computed. Siril proposes several algorithms for the stacking computation. Sum Stacking This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing. Average Stacking With Rejection Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images). Sigma Clipping: this is an iterative algorithm which will reject pixels whose distance from the median is farther than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]). Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack. Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2]. Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations. These algorithms are very efficient at removing satellite/plane tracks. Median Stacking This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math]. Pixel Maximum Stacking This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater. Pixel Minimum Stacking This algorithm is mainly used for cropping a sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if their intensity is lower. In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The console output thus gives the following result: 14:33:06: Pixel rejection in channel #0: 0.181% - 1.184% 14:33:06: Pixel rejection in channel #1: 0.151% - 1.176% 14:33:06: Pixel rejection in channel #2: 0.111% - 1.118% 14:33:06: Integration of 12 images: 14:33:06: Pixel combination ......... average 14:33:06: Normalization ............. additive + scaling 14:33:06: Pixel rejection ........... Winsorized sigma clipping 14:33:06: Rejection parameters ...... low=4.000 high=3.000 14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels 14:33:07: Execution time: 9.98 s.
14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04) 14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05) 14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05) After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use a different display mode. In our example the file is the stacking result of all files, i.e., 12 files. The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]21/5.1 = 4.1[/math], to be compared with the theoretical [math]\sqrt{12} \approx 3.46[/math]; you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Now the processing of the image can start, with crop, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril: Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley Juan Conejero, ImageIntegration, Pixinsight Tutorial
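For readers who want to experiment with the rejection idea outside Siril, here is a minimal NumPy sketch of plain iterative sigma clipping; Siril's Winsorized variant additionally replaces extreme values following Huber [1] instead of simply masking them, so this is an illustration of the principle, not Siril's implementation:

import numpy as np

def sigma_clip_stack(stack, s_low=4.0, s_high=3.0, n_iter=3):
    """Average-stack registered images with iterative sigma clipping.
    stack: float array of shape (n_images, height, width)."""
    data = np.ma.masked_invalid(np.asarray(stack, dtype=float))
    for _ in range(n_iter):
        med = np.ma.median(data, axis=0)       # per-pixel median of the stack
        sigma = data.std(axis=0)               # per-pixel standard deviation
        # Reject pixels farther than s_low below / s_high above the median.
        data = np.ma.masked_where(
            (data < med - s_low * sigma) | (data > med + s_high * sigma), data)
    return data.mean(axis=0).filled(np.nan)    # average of surviving pixels

Satellite or plane tracks affect only one or two frames at any given pixel, which is why they fall outside the clipping band and are rejected from the average.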
If a random variable is discrete, and we are interested in its quantile value, how do we define a proper backtesting procedure? For example, the underlying variable with a discrete value is $$ d(\mbox{account}) = \mbox{PaymentDate} - \mbox{BillingDate} $$ and the observed variable is: $$ y = \mbox{percentile}(d, 95\%, \mbox{month}) $$ i.e. $y$ is the 95th percentile value of $d$ for a particular month, e.g. 95% of credit cards are paid within 20 days from the billing, in 2013 Jan. How could I define a backtesting approach? Background Defining an estimation-backtesting method for a continuous random variable is easier. In my group we have the following non-parametric approach: underlying variable: $$r(\mbox{month}) = \mbox{monthly credit-card account default rate}$$ For example, the 2013 Feb default rate is 1.1%, 2013 Jan is 1.2%... observed variable: $$ x = \mbox{percentile}(r, 95\%) $$ $x$ is the 95th percentile value of $r$. Here the definition of $x$ is similar to VaR. point forecast: $$ \hat x(\mbox{month}) = \mbox{percentile}(r(\mbox{month}), N, 95\%) $$ $\hat x$ is the 95th percentile value of $r$, based on $N$ historic observations. For example, take $N=36$: going back 36 months, the 95th percentile value of the default rate $r$ is 2.3%; then $\hat x = 2.3\%$. point forecast exception: $$ \mbox{PFException}(t) = \begin{cases} 0 & r(\mbox{month}) \leq \hat x(\mbox{month}) \\ 1 & \text{otherwise} \end{cases} $$ By design, 95% of the time there should be no exception, while 5% of the time an exception happens. backtesting: There is the POF (proportion of failures) test, checking the rate of exceptions, and the independence test, checking the correlation of exceptions. For example, Kupiec (1995) proposed a POF test checking the exceptions in the previous 36 months' point forecasts: 0-4 exceptions are OK (green light), 5-7 exceptions give a yellow light, while 8 or more give a red light. Christoffersen (1998) proposed an independence test. Kupiec, P. (1995). Techniques for verifying the accuracy of risk management models. Journal of Derivatives 3, 73–84. Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review 39, 841–62.
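For reference, the Kupiec POF likelihood-ratio test mentioned above is easy to compute; a minimal Python sketch (the traffic-light thresholds in the text are a separate convention layered on top of this statistic):

import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

def kupiec_pof(n_obs, n_exceptions, coverage=0.95):
    """Kupiec (1995) proportion-of-failures test: is the observed exception
    rate consistent with p = 1 - coverage? Returns (LR statistic, p-value);
    LR is asymptotically chi-squared with 1 degree of freedom under H0."""
    p = 1.0 - coverage                    # expected exception probability
    x, n = n_exceptions, n_obs
    phat = x / n                          # observed exception rate
    ll_h0 = xlogy(n - x, 1 - p) + xlogy(x, p)
    ll_h1 = xlogy(n - x, 1 - phat) + xlogy(x, phat)   # xlogy(0, 0) == 0
    lr = -2.0 * (ll_h0 - ll_h1)
    return lr, chi2.sf(lr, df=1)

# e.g. 36 monthly point forecasts with 4 exceptions:
print(kupiec_pof(36, 4))

For the discrete case in the question, one caveat is that the exception probability is generally not exactly 5%: with a discrete $d$, $P(d > \hat y)$ can sit strictly below 5%, so the H0 probability fed to the test should be the attained coverage rather than the nominal one.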
It is well known that the problem concerning even perfect numbers is to prove or refute that there are infinitely many of them. A few weeks ago I wrote the following conjecture, where $\varphi(n)$ denotes Euler's totient function, $\sigma(n)=\sum_{1\leq d\mid n}d$ the sum of divisors function and $$\operatorname{rad}(n)=\prod_{\substack{p\mid n\\p\text{ prime}}}p$$ is the product of the distinct primes dividing $n>1$, with the definition $\operatorname{rad}(1)=1$; see the Wikipedia article Radical of an integer. One finds Euler's totient function and the sum of divisors function in formulations of equivalences to the Riemann hypothesis, and the so-called radical of an integer is the famous arithmetical function that appears in the formulation of the abc conjecture. Conjecture. An integer $n\geq 1$ is an even perfect number if and only if $$\operatorname{rad}(n)=\frac{1}{\frac{1}{2}-2\frac{\varphi(n)}{\sigma(n)}}.\tag{1}$$ I cited this conjecture a few days ago on MSE. My intention is to know whether it is possible to get some statement about the problem concerning even perfect numbers, that is, whether there exist infinitely many of them, using this equation, or else whether one can argue that equation $(1)$ does not seem useful for this purpose. Question. Is it possible to get an interesting statement about the infinitude of even perfect numbers, or a fact about their distribution, using equation $(1)$ or invoking the previous Conjecture? (It is easy to prove that even perfect numbers $n$ satisfy it, but my attempted proof of the other direction failed.) If you think that it isn't possible because of some obstruction, please explain it. Many thanks. You can invoke propositions about even perfect numbers and tools or conjectures from analytic number theory (we can search and read the relevant statements from the literature). I hope that this is a nice exercise for this site; in any case I hope for comments.
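For what it's worth, the easy direction can be checked numerically with SymPy (a quick sketch verifying identity $(1)$ on the first few even perfect numbers):

from sympy import totient, divisor_sigma, primefactors, prod, Rational

def rad(n):
    """Radical of n: product of the distinct primes dividing n; rad(1) = 1."""
    return prod(primefactors(n)) if n > 1 else 1

# Check rad(n) == 1 / (1/2 - 2*phi(n)/sigma(n)) on even perfect numbers.
for n in [6, 28, 496, 8128, 33550336]:
    rhs = 1 / (Rational(1, 2) - 2 * Rational(totient(n), divisor_sigma(n)))
    print(n, rad(n), rhs, rad(n) == rhs)

For instance, for $n=28$: $\varphi(28)=12$ and $\sigma(28)=56$, so the denominator is $\frac{1}{2}-\frac{24}{56}=\frac{1}{14}$ and the right-hand side is $14=2\cdot7=\operatorname{rad}(28)$.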
I am trying to determine the parameters for the Nelson-Siegel-Svensson model and am solving a non-linear optimization problem to do this. I am trying to solve: $$ \min_\theta{\sum{(p_i - \hat p_i)^2}}, $$ where $p_i$ are the observed dirty prices of the bonds and $\hat p_i$ are the prices that have been calculated using the NSS parameters $\theta$. I am using the procedure presented in this paper. But I've also read that the optimization is highly sensitive to the input set of parameters ($\theta$), as mentioned on page 2 of this paper. Hence, if I don't have data on these parameters, how should I set the input? I am currently trying to do this for GBP government bonds, but am unable to find any published parameters. I was also unable to find how people circumvent this problem. Currently, I am using the $\theta$ values presented here, as I thought they may be similar for GBP government bonds. However, the optimization proves to be unsolvable. This is part of the code that I am using in Python to solve the optimization problem. func just returns the sum of the squared differences in prices (the objective function) and params refers to $\theta$. These are the input params I am currently using.

params = [3.15698855, -2.98240445, -3.37586632, -1.67713694, 0.88538977, 3.84324841]  # theta
optimize.minimize(func, params, method='COBYLA', constraints=cons, options={'disp': True})

Thank You
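One common workaround when no published starting values exist is to restart a local optimizer from many random but economically plausible points (betas in a modest band, taus strictly positive) and keep the best fit. A minimal sketch, with function names and parameter ranges that are illustrative rather than taken from the paper:

import numpy as np
from scipy import optimize

def nss_yield(t, theta):
    """Nelson-Siegel-Svensson zero yield; theta = (b0, b1, b2, b3, tau1, tau2)."""
    b0, b1, b2, b3, tau1, tau2 = theta
    x1, x2 = t / tau1, t / tau2
    f1 = (1 - np.exp(-x1)) / x1
    f2 = (1 - np.exp(-x2)) / x2
    return b0 + b1 * f1 + b2 * (f1 - np.exp(-x1)) + b3 * (f2 - np.exp(-x2))

def fit_nss(price_error, n_starts=50, seed=0):
    """Multi-start minimization of price_error(theta), the sum of squared
    price differences used as the objective in the question."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        theta0 = np.concatenate([rng.uniform(-5.0, 5.0, 4),    # b0..b3
                                 rng.uniform(0.1, 10.0, 2)])   # tau1, tau2 > 0
        res = optimize.minimize(price_error, theta0, method='Nelder-Mead')
        if best is None or res.fun < best.fun:
            best = res
    return best

A derivative-free global method such as scipy.optimize.differential_evolution with box bounds on the six parameters is another frequently used alternative to hand-picked starting values.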
If dispersion measures the amount of variation, then the direction of variation is measured by skewness. The most commonly used measure of skewness is Karl Pearson's measure, given by the symbol $S_{KP}$. It is a relative measure of skewness. ${S_{KP} = \frac{\text{Mean}-\text{Mode}}{\text{Standard Deviation}}}$ When the distribution is symmetrical, the value of the coefficient of skewness is zero, because the mean, median and mode coincide. If the coefficient of skewness is a positive value then the distribution is positively skewed, and when it is a negative value, the distribution is negatively skewed. In terms of moments, skewness is represented as follows: ${\beta_1 = \frac{\mu^2_3}{\mu^3_2} \\[7pt] \ Where\ \mu_3 = \frac{\sum(X- \bar X)^3}{N} \\[7pt] \, \mu_2 = \frac{\sum(X- \bar X)^2}{N}}$ If the value of ${\mu_3}$ is zero it implies a symmetrical distribution; the higher the absolute value of ${\mu_3}$, the greater the asymmetry. However, ${\beta_1}$ does not tell us the direction of skewness, since ${\mu_3}$ enters it squared. Problem Statement: Information collected on the average strength of students of an IT course in two colleges is as follows:

Measure   College A   College B
Mean      150         145
Median    141         152
S.D.      30          30

Can we conclude that the two distributions are similar in their variation? Solution: A look at the information available reveals that both colleges have an equal dispersion of 30 students. However, to establish whether the two distributions are similar or not, a more comprehensive analysis is required, i.e. we need to work out a measure of skewness. The value of the mode is not given, but it can be calculated by using the following formula:
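The formula referred to is presumably the usual empirical relationship $\text{Mode} = 3\,\text{Median} - 2\,\text{Mean}$; completing the solution under that assumption:

College A: $\text{Mode} = 3(141) - 2(150) = 123$, so ${S_{KP} = \frac{150 - 123}{30} = +0.9}$

College B: $\text{Mode} = 3(152) - 2(145) = 166$, so ${S_{KP} = \frac{145 - 166}{30} = -0.7}$

Although the two colleges have equal dispersion, College A's distribution is positively skewed while College B's is negatively skewed, so the two distributions are not similar in their variation.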
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = the signed sum of the values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$) @Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$ aka functions on $M$ with values in $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$ aka bundle-homs $TM \to E$) Then this is the 0-level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\bigl[\tfrac{1}{2}\bigl(\frac{X_{1}+\cdots+X_{n}}{n}\bigr)^{2}\bigr] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
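A quick SymPy check of these subgroup claims (a sketch, with generators chosen to match the ones discussed above):

from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

# Exhibit a subgroup of S_4 of each order d dividing 24 and verify its order.
gens = {
    2:  [Permutation(0, 1, size=4)],                                # <(12)>
    3:  [Permutation(0, 1, 2, size=4)],                             # <(123)>
    4:  [Permutation(0, 1, 2, 3)],                                  # <(1234)>
    6:  [Permutation(0, 1, size=4), Permutation(0, 1, 2, size=4)],  # a copy of S_3
    8:  [Permutation(0, 1, 2, 3), Permutation(0, 2, size=4)],       # dihedral 2-Sylow
    12: [Permutation(0, 1, 2, size=4), Permutation(0, 1)(2, 3)],    # A_4
    24: list(SymmetricGroup(4).generators),                         # S_4 itself
}
for d, g in sorted(gens.items()):
    print(d, PermutationGroup(g).order() == d)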
The class $\Sigma^1$ of symbols on $\mathbb R^{2n}$ is made with $C^\infty$ functions $a$ of $X=(x,\xi)\in \mathbb R^n\times\mathbb R^n$ such that$$\vert\partial_X^\alpha a\vert\le C_\alpha(1+\vert X\vert)^{2-\vert \alpha\vert}.$$Assuming that $a\in\Sigma^1$ is real-valued principal type and denoting by $A$ its Weyl quantization, using the fact that $A$ is continuous on the Schwartz space$\mathscr S(\mathbb R^n)$ and on its dual (tempered distributions)$\mathscr S'(\mathbb R^n)$,we may define the maximal extension $H$ of $A$ with the domain$$D(H)=\{u\in L^2(\mathbb R^n), Au \in L^2(\mathbb R^n) \}.$$ Then I claim that $H$ is self-adjoint. I believe that it is well-known and I am looking for a reference in the literature. A related question is the same problem for first-order pseudo-differential operators on a compact manifold without boundary $\mathcal M$ (equipped with a smooth density): let $A$ be a first-order pseudo-differential operator on $\mathcal M$ (I do not want to assume ellipticity, but I know that $A$ is continuous on $C^\infty(\mathcal M)$ and on the distributions on $\mathcal M$) and assume that $A$ is symmetric, that is such that for $\phi, \psi\in C^\infty(\mathcal M)$ $\langle A\phi,\psi \rangle=\langle \phi,A\psi \rangle_{L^2(\mathcal M)}$. Then consider the maximal extension $H$ of $A$ with $$ D(H)=\{u\in L^2(\mathcal M), Au \in L^2(\mathcal M) \}. $$ Then $H$ is self-adjoint. Is it true and well-known? Last but not least, dropping the compactness assumption on $\mathcal M$ in the second question above, assuming that $A$ is properly supported, can I get the same result?
Infinitely many solutions for a p-biharmonic equation with general potential and concave-convex nonlinearity in \(\mathbb{R}^{N}\). Boundary Value Problems, volume 2016, Article number: 6 (2016). Abstract In this paper, we study the existence of multiple solutions to a class of p-biharmonic elliptic equations, \(\Delta^{2}_{p}u-\Delta_{p}u+V(x)|u|^{p-2}u =\lambda h_{1}(x)|u|^{m-2}u+h_{2}(x)|u|^{q-2}u\), \(x\in \mathbb{R}^{N}\), where \(1< m< p< q< p_{*}=\frac{pN}{N-2p}\), \(\Delta^{2}_{p} u=\Delta(|\Delta u|^{p-2}\Delta u)\) is a p-biharmonic operator and \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\). The potential function \(V(x)\in C({\mathbb{R}}^{N})\) satisfies \(\inf_{x\in{\mathbb{R}}^{N}}V(x)>0\). By variational methods, we obtain the existence of infinitely many solutions for a p-biharmonic elliptic equation in \({\mathbb{R}}^{N}\). Introduction In this paper, we are interested in the existence of solutions to the following p-biharmonic elliptic equation: where \(2<2p<N\), \(f(x,u)=\lambda h_{1}(x)|u|^{m-2}u+h_{2}(x)|u|^{q-2}u\), \(1< m< p< q< p_{*}=\frac{pN}{N-2p}\), \(\Delta^{2}_{p} u=\Delta(|\Delta u|^{p-2}\Delta u)\) is a p-biharmonic operator and \(\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)\). The potential function \(V(x)\in C({\mathbb{R}}^{N})\) satisfies \(\inf_{x\in{\mathbb{R}}^{N}}V(x)>0\). Recently, the nonlinear biharmonic equation in an unbounded domain has been extensively investigated; we refer the reader to [1–9] and the references therein. For the whole space \({\mathbb{R}}^{N}\) case, the main difficulty of this problem is the lack of compactness for the Sobolev embedding theorem. In order to overcome this difficulty, the authors always assumed the potential \(V(x)\) has some special characteristic. For example, in [4], Yin and Wu studied the following fourth-order elliptic equation: where the potential \(V(x)\) satisfied (V 0): \(V\in C({\mathbb{R}}^{N})\) satisfies \(\inf_{x\in{\mathbb{R}}^{N}}V(x)>0\), and for each \(M>0\), \(\operatorname{meas}\{x\in \mathbb{R}^{N}: V(x)\leq M\}<+\infty\). Under (V 0) (among further assumptions), the authors proved the existence of infinitely many solutions to problem (1.2) by using variational techniques in a standard way. In [2], Liu et al. considered the following fourth-order elliptic equation: where the potential \(V(x)\) satisfied a weaker condition than (V 0), that is, (V 1): \(V\in C({\mathbb{R}}^{N})\) satisfies \(\inf_{x\in{\mathbb{R}}^{N}}V(x)>0\), and there exists some \(M>0\) such that \(\operatorname{meas}\{x\in \mathbb{R}^{N}: V(x)\leq M\}<+\infty\). Under (V 1), the compactness of the embedding is lost and this renders variational techniques more delicate. With the aid of the parameter \({\lambda}>0\), they proved that the variational functional satisfies the \((\mathit{PS})\) condition, and then they showed the existence and multiplicity results for problem (1.3). A natural question is whether the existence results still hold if we assume a more general potential \(V(x)\) than (V 0), (V 1), namely, (V): \(V(x)\in C({\mathbb{R}}^{N})\) satisfies \(\inf_{x\in{\mathbb{R}}^{N}}V(x)>0\). In the present paper, we will answer this interesting question. We consider the existence of solutions to the p-biharmonic problem (1.1) with a more general potential \(V(x)\).
To prove that the \((\mathit{PS})\) sequence weakly converges to a critical point of the corresponding functional, we adapt ideas developed in [10–12], and then by variational methods we establish the existence of infinitely many high-energy solutions to problem (1.1) with a concave-convex nonlinearity, i.e., \(f(x,u)=\lambda h_{1}(x)|u|^{m-2}u+h_{2}(x)|u|^{q-2}u\), \(1< m< p< q< p_{*}=\frac{pN}{N-2p}\). To the best of our knowledge, little has been done for p-biharmonic problems with this type of nonlinearity. Here, we give our assumptions on the weight functions \(h_{1}(x)\) and \(h_{2}(x)\): (H 1): \(h_{1}\in L^{\sigma}({\mathbb{R}}^{N})\) with \(\sigma=\frac{p}{p-m}\); (H 2): \(h_{2}(x)\geq0\) (≢0), \(h_{2}(x)\in L^{\infty}({\mathbb{R}}^{N})\). The main result in this paper is as follows. Theorem 1.1 Let \(2<2p<N\), \(1< m< p< q< p_{*}=\frac{pN}{N-2p}\). Assume (V), (H 1), and (H 2) hold. Then there exists \(\lambda_{0}>0\) such that for all \(\lambda\in [0,\lambda_{0}]\), problem (1.1) admits infinitely many high-energy solutions in \({\mathbb{R}}^{N}\). This paper is organized as follows. In Section 2, we build the variational framework for problem (1.1) and establish a series of lemmas, which will be used in the proof of Theorem 1.1. In Section 3, we prove Theorem 1.1 by the mountain pass theorem [13]. Preliminaries In order to apply the variational setting, we assume the solutions of (1.1) belong to the following subspace of \(\mathcal{D}^{2,p}({\mathbb{R}}^{N})\): endowed with the norm where \(\mathcal{D}^{2,p}({\mathbb{R}}^{N})=\{u\in L^{p_{*}}({\mathbb{R}}^{N}) |\Delta u\in L^{p}({\mathbb{R}}^{N})\}\), and \(\|\cdot\|_{s}\) denotes the norm in \(L^{s}({\mathbb{R}}^{N})\). We denote by \(S_{*}\) the Sobolev constant, that is, and where \(S_{*}\) is attained by a positive and radially symmetric function; see for instance [14]. Definition 2.1 A function \(u\in E\) is said to be a weak solution of (1.1) if, for any \(\varphi\in E\), we have Let \(J(u):E\to{\mathbb{R}}\) be the energy functional associated with problem (1.1) defined by To prove the existence of infinitely many solutions to problem (1.1), we need to prove that the functional J defined by (2.6) satisfies the \((\mathit{PS})\) condition. Recall that a sequence \(\{u_{n}\}\) in E is called a \((\mathit{PS})_{c}\) sequence of J if The functional J satisfies the \((\mathit{PS})\) condition if any \((\mathit{PS})_{c}\) sequence possesses a convergent subsequence in E. Lemma 2.1 Assume (V), (H 1), and (H 2) hold. If \(\{u_{n}\}\subset E\) is a \((\mathit{PS})_{c}\) sequence of J, then \(\{u_{n}\}\) is bounded in E. Proof It follows from Hölder's inequality that where \(a_{1}=V_{0}^{-{m}/{p}}\|h_{1}\|_{\sigma}\). Choose \(t\in(0,1)\) such that \(q=pt+(1-t)p_{*}\); then where \(a_{2}=S_{*}^{-{p_{*}(q-p)}/{p(p_{*}-p)}}V_{0}^{-t}\|h_{2}\|_{\infty}\). Thus, Since \(1< m< p< q\), we conclude that \(\|u_{n}\|_{E}\) is bounded and the proof is complete. □ In the following, we shall show that \(\{u_{n}\}\) has a convergent subsequence in E. Since the sequence \(\{u_{n}\}\) given by (2.8) is a bounded sequence in E, there exist a subsequence of \(\{u_{n}\}\) (still denoted by \(\{u_{n}\}\)) and \(v\in E\) such that \(\|u_{n}\|_{E}\leq M\), \(\|v\|_{E}\leq M\), and Lemma 2.2 Assume (V), (H 1), and (H 2) hold.
If the sequence \(\{u_{n}\}\) is bounded in E satisfying (2.12), then

(i) \(\lim_{n\to\infty}\int_{{\mathbb{R}}^{N}}h_{1}(x)|u_{n}|^{m}\,dx=\int_{{\mathbb{R}}^{N}}h_{1}(x)|v|^{m}\,dx\), \(\lim_{n\to\infty}\int_{{\mathbb{R}}^{N}}h_{1}(x)|u_{n}-v|^{m}\,dx=0\);

(ii) \(\lim_{n\to\infty}\int_{{\mathbb{R}}^{N}}h_{2}(x)|u_{n}|^{q}\,dx=\int_{{\mathbb{R}}^{N}}h_{2}(x)|v|^{q}\,dx\), \(\lim_{n\to\infty}\int_{{\mathbb{R}}^{N}}h_{2}(x)|u_{n}-v|^{q}\,dx=0\).

Proof (i) In fact, from \(h_{1}\in L^{\sigma}({\mathbb{R}}^{N})\) and (2.12), we obtain, for any \(r>0\),

where, here and in the sequel, \(B_{r}=\{x\in{\mathbb{R}}^{N}:|x|< r\}\) and \(B^{c}_{r}={\mathbb{R}}^{N}\setminus \overline{B}_{r}\). On the other hand, we see from the Hölder inequality that

as \(r\to\infty\). By Fatou's lemma, we see that, as \(n\to\infty\),

Since \(p< q< p_{*}\), it is easy to see that, for any small \(\varepsilon>0\), there exist \(S_{0}>s_{0}>0\) such that \(|s|^{q}<\varepsilon|s|^{p}\) if \(|s|\leq s_{0}\) and \(|s|^{q}\leq\varepsilon|s|^{p_{*}}\) if \(|s|\geq S_{0}\). This shows that

with some constant \(M_{1}>0\), and

where \(|A_{n}|=\operatorname{meas}(A_{n})\). Equation (2.18) implies that \(\sup_{n\in{\mathbb{N}}}|A_{n}|\leq M_{1}|s_{0}|^{-p_{*}}<\infty\), so it is easy to see that

In the following, we show that \(\lim_{r\to\infty}\operatorname{meas}(A_{n}\cap B_{r}^{c})=0\) uniformly in \(n\in{\mathbb{N}}\). In fact, it follows from (2.12) that \(v\in L^{p}({\mathbb{R}}^{N})\) and \(u_{n}(x)\to v(x)\) a.e. in \({\mathbb{R}}^{N}\). Therefore, for any small \(\varepsilon>0\), there exists \(r_{0}>1\) such that, for \(r\geq r_{0}\),

For this ε, we choose \(t_{1}=r_{0}\) and \(t_{j}\uparrow\infty\) such that \(D_{j}=B^{c}_{t_{j}}\setminus\overline{B}^{c}_{t_{j+1}}\), \(B^{c}_{r_{0}}=\bigcup^{\infty}_{j=1}D_{j}\), and

Obviously, for every fixed \(j\in{\mathbb{N}}\), \(D_{j}\) is a bounded domain and \(D_{j}\cap D_{i}=\emptyset\) (\(j\neq i\)). Furthermore, \(s_{0}\leq|u_{n}|\leq S_{0}\) in \(D_{j}\cap A_{n}\). By Fatou's lemma, we have, for every \(j\in{\mathbb{N}}\),

Then, for \(s_{1}=2^{1-q}s_{0}^{q}\), we obtain

Notice that, for any \(r\geq r_{0}\) and \(n\in{\mathbb{N}}\), we have \((A_{n}\cap B^{c}_{r})\subset(A_{n}\cap B_{r_{0}}^{c})\). Therefore, the application of (2.19) and (2.20) yields \(\lim_{r\to\infty}|A_{n}\cap B_{r}^{c}|=0\) uniformly in \(n\in{\mathbb{N}}\). Thus, for any \(\varepsilon>0\), there exists \(r_{0}\geq1\) such that \(\operatorname{meas}(A_{n}\cap B^{c}_{r})<\frac{\varepsilon}{S_{0}^{q}\|h_{2}\|_{\infty}}\) for \(r\geq r_{0}\). Then it follows from (2.17) that

and

Moreover, we derive from (2.12) that

Lemma 2.3 Let \(\{u_{n}\}\) be a \((\mathit{PS})_{c}\) sequence satisfying (2.12); then \(u_{n}\to v\) in E, that is, the functional J satisfies the \((\mathit{PS})\) condition.

Proof Denote

Then the fact that \(J'(u_{n})\to0\) in \(E^{*}\) shows that \(P_{n}\to0\) as \(n\to\infty\). Moreover, the fact that \(u_{n}\rightharpoonup v\) in E implies \(Q_{n}\to0\), where

It follows from the Hölder inequality and the limit (i) in Lemma 2.2 that

Similarly, we can derive from the limit (ii) in Lemma 2.2 that

Then we have \(\|u_{n}-v\|_{E}\to0\) as \(n\to\infty\). Thus J satisfies the \((\mathit{PS})\) condition on E and the proof is completed. □

Proof of Theorem 1.1

In this section, we give the proof of Theorem 1.1. We assume that all conditions in the theorem hold. The proof mainly relies on the mountain pass theorem.
Lemma 3.1 ([13]) Let E be an infinite dimensional real Banach space and let \(J\in C^{1}(E,{\mathbb{R}})\) be even, satisfy the \((\mathit{PS})\) condition, and have \(J(0)=0\). Assume \(E=Y\oplus Z\), where Y is finite dimensional, and that J satisfies:

(J 1): There exist constants \(\rho,\alpha>0\) such that \(J(u)\geq \alpha\) on \(\partial B_{\rho}\cap Z\).

(J 2): For each finite dimensional subspace \(E_{0}\subset E\), there is an \(R_{0}=R_{0}(E_{0})\) such that \(J(u)\leq0\) on \(E_{0}\setminus B_{R_{0}}\), where \(B_{r}=\{u\in E: \|u\|_{E}< r\}\).

Then J possesses an unbounded sequence of critical values.

Proof of Theorem 1.1 Clearly, the functional J defined by (2.6) is even in E. By Lemma 2.3 in Section 2, the functional satisfies the \((\mathit{PS})\) condition. Next, we prove that J satisfies (J 1) and (J 2).

From (2.9) and (2.10), it follows that

Denote

Then there exist \(\lambda_{0},z_{1},\alpha>0\) such that \(\phi(z_{1})\geq \alpha\) for any \(\lambda\in[0,\lambda_{0}]\). Letting \(\rho=z_{1}\), we have \(J(u)\geq\alpha\) for \(\|u\|_{E}=\rho\) and \(\lambda\in[0,\lambda_{0}]\). So condition (J 1) is satisfied.

We now verify (J 2). For any finite dimensional subspace \(E_{0}\subset E\), we assert that there is a constant \(R_{0}>\rho\) such that \(J<0\) on \(E_{0}\setminus B_{R_{0}}\). Otherwise, there exists a sequence \(\{u_{n}\}\subset E_{0}\) such that \(\|u_{n}\|_{E}\to\infty\) and \(J(u_{n})\geq0\). Hence

Set \(\omega_{n}=\frac{u_{n}}{\|u_{n}\|_{E}}\). Then, up to a subsequence, we can assume \(\omega_{n}\rightharpoonup\omega\) in E and \(\omega_{n}\rightarrow \omega\) a.e. in \({\mathbb{R}}^{N}\). Denote \(\Omega=\{x\in{\mathbb{R}}^{N}:\omega(x)\neq 0\}\). Assume \(|\Omega|>0\). Clearly, \(u_{n}(x)\to\infty\) in Ω. It follows from (2.8) and (2.9) that

On the other hand, we derive

Therefore, multiplying (3.1) by \(\|u_{n}\|_{E}^{-p}\) and passing to the limit as \(n\to\infty\) shows that \(\frac{1}{p}\geq\infty\). This is impossible. So \(|\Omega|=0\) and \(\omega(x)=0\) a.e. on \({\mathbb{R}}^{N}\). By the equivalence of all norms in \(E_{0}\), there exists a constant \(\beta>0\) such that

Hence

This is a contradiction. So there exists a constant \(R_{0}\) such that \(J<0\) on \(E_{0}\setminus B_{R_{0}}\). Therefore, the existence of infinitely many solutions \(\{u_{n}\}\) for problem (1.1) follows from Lemma 3.1, and we finish the proof of Theorem 1.1. □

References

1. Carrião, PC, Demarque, R, Miyagaki, OH: Nonlinear biharmonic problems with singular potentials. Commun. Pure Appl. Anal. 13, 2141-2154 (2014)
2. Liu, J, Chen, SX, Wu, X: Existence and multiplicity of solutions for a class of fourth-order elliptic equations in \({\mathbb{R}}^{N}\). J. Math. Anal. Appl. 395, 608-615 (2012)
3. Chabrowski, J, Marcos do Ó, J: On some fourth-order semilinear elliptic problems in \({\mathbb{R}}^{N}\). Nonlinear Anal. 49, 861-884 (2002)
4. Yin, Y, Wu, X: High energy solutions and nontrivial solutions for fourth-order elliptic equations. J. Math. Anal. Appl. 375, 699-705 (2011)
5. Ye, YW, Tang, CL: Infinitely many solutions for fourth-order elliptic equations. J. Math. Anal. Appl. 394, 841-854 (2012)
6. Ye, YW, Tang, CL: Existence and multiplicity of solutions for fourth-order elliptic equations in \({\mathbb{R}}^{N}\). J. Math. Anal. Appl. 406, 335-351 (2013)
7. Zhang, W, Tang, X, Zhang, J: Infinitely many solutions for fourth-order elliptic equations with general potentials. J. Math. Anal. Appl. 407, 359-368 (2013)
8.
Zhang, G, Costa, DG: Existence result for a class of biharmonic equations with critical growth and singular potential in \({\mathbb{R}}^{n}\). Appl. Math. Lett. 29, 7-12 (2014)
9. Bhakta, M, Musina, R: Entire solutions for a class of variational problems involving the biharmonic operator and Rellich potentials. Nonlinear Anal. 75, 3836-3848 (2012)
10. Chen, CS: Multiple solutions for a class of quasilinear Schrödinger equations in \(\mathbb{R}^{N}\). J. Math. Phys. 56, 071507 (2015)
11. Chen, CS: Infinitely many solutions to a class of quasilinear Schrödinger system in \(\mathbb{R}^{N}\). Appl. Math. Lett. 52, 176-182 (2016)
12. Chen, CS, Chen, Q: Infinitely many solutions for p-Kirchhoff equation with concave-convex nonlinearities in \(\mathbb{R}^{N}\). Math. Methods Appl. Sci. (2015). doi:10.1002/mma.3583
13. Rabinowitz, PH: Minimax Methods in Critical Point Theory with Application to Differential Equations. CBMS Reg. Conf. Ser. Math., vol. 65. Am. Math. Soc., Providence (1986)
14. Swanson, CA: The best Sobolev constant. Appl. Anal. 47, 227-239 (1992)
15. Brezis, H, Lieb, EH: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc. 88, 486-490 (1983)

Acknowledgements

The authors wish to express their gratitude to the referees for valuable comments and suggestions. This work was supported by the Project of Innovation in Scientific Research for Graduate Students of Jiangsu Province (No. CXZZ13-0263) and the Fundamental Research Funds for the Central Universities of China (2015B31014).

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Each of the authors contributed to each part of this study equally. All authors read and approved the final version of the manuscript.
I'm currently involved in a small (but quite time-consuming) project where we are trying to get some decent bound for the number $N(P)$ of real zeroes of a random polynomial $P(x)=\sum_{k=0}^n\xi_k x^k$ where $\xi_k$ are real independent identically distributed random variables satisfying $\mathcal P(\xi_k=0)=0$ (just to avoid totally idiotic degeneracies) but no other a priori assumptions. Our current approach uses the inequality
$$
\mathcal P\left[\max_{|\alpha|<C\ell}|P(re^{i\alpha})|\le n^{-C}\ell^{Cm}\sum_k|\xi_k|r^k\right]\le e^{-m}
$$
for all fixed $r>0$, $0<\ell<1$, $n,m\ge 2$ with some absolute $C>0$, which is not terribly bad and gives the bound
$$
\mathcal E N(P)\le C\log^4 n
$$
in the end. However, I suspect that even a stronger bound
$$
\mathcal P\left[\max_{|\alpha|<C\ell}|P(re^{i\alpha})|\le n^{-C}\ell^{Cm}\sum_k|\xi_k|r^k\right]\le n^{-m}
$$
may hold, which would allow us to shave one logarithm off. I wonder if anybody has any idea of how to get something like this (or better). If you find a counterexample, it'll shed some light on what is going on too. We are currently using the combination of the Turán lemma and the flip-flop around the median technique, but all my previous experience shows that using worst-case scenario estimates in a probabilistic setting is never optimal.

To put things in perspective, the bound everybody hopes for should be $\mathcal E N(P)\le C\log n$. It has been proved for many "decent" distributions of $\xi_k$, but if you do not assume anything and require a uniform bound over all distributions (which may be attained at different distributions for different $n$), as in our project, then it looks like the best published bound is $\mathcal E N(P)\le C\sqrt n$ (if somebody knows anything better, I'll be happy to hear it too).

Update: We have finally got $C\log n$ with absolute $C$ in the classical problem (any real i.i.d. coefficients) by a different method. However, the question remains because there are interesting situations to which our new approach does not apply but the original one, which gives $\log^4 n$, does.
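For readers who want to poke at this numerically: a quick Monte Carlo sketch in Python (not part of the argument above, and only meaningful for one specific choice of coefficient law). For standard Gaussian coefficients the classical Kac asymptotics $\mathcal E N(P)\sim\frac{2}{\pi}\log n$ give a sanity check; the tolerance for deciding when a numerical root counts as real is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)

def count_real_zeros(coeffs, tol=1e-8):
    # count the real roots of P(x) = sum_k coeffs[k] * x**k numerically
    roots = np.roots(coeffs[::-1])  # np.roots expects the highest degree first
    return int(np.sum(np.abs(roots.imag) < tol))

def mean_real_zeros(n, trials=200):
    return np.mean([count_real_zeros(rng.standard_normal(n + 1))
                    for _ in range(trials)])

for n in (16, 64, 256):
    print(f"n={n:4d}: empirical E N(P) ~ {mean_real_zeros(n):.2f}, "
          f"(2/pi) log n = {2/np.pi*np.log(n):.2f}")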
First of all, even setting aside the issue with scalar automorphisms noted in comments, at the level of objects there is a "problem": the functor is not essentially surjective for unipotent groups (by a consideration of Ext$^1$-groups with their natural structure of vector space over the ground field). Probably nobody cares, so I won't get into the details (but if someone does care then hopefully someone else will summon the energy to write out an actual argument, since I don't feel like going down that road here).

Now let's focus on positive statements that someone might care about. Let $K'/K$ be an extension of fields (any characteristic, but characteristic 0 is especially nice when $G^0$ is reductive because of the Remark below), and $G$ a smooth affine group over $K$. Assume $K$ is algebraically closed (but $K'$ is not assumed to be algebraically closed). Then we claim:

Theorem: For any semisimple linear representation $\rho':G_{K'} \to {\rm{GL}}(V')$, there exists a semisimple linear representation $\rho:G \to {\rm{GL}}(V)$, unique up to isomorphism, such that $\rho' \simeq \rho_{K'}$.

Remark: In characteristic 0 the semisimplicity hypothesis on the representations automatically holds if $G$ has reductive identity component. There is a version of the Theorem in characteristic 0 that doesn't require $K$ to be algebraically closed if $G$ is split connected reductive, but that involves an entirely different ingredient than is used below, so I'll pass over it in silence.

To prove the Theorem, we begin with:

Lemma: The coefficients of the characteristic polynomial of $\rho'$ come from $K[G] \subset K'[G_{K'}]$.

Proof. An element $f' \in K'[G_{K'}]$ comes from $K[G]$ if and only if its restriction to a Zariski-dense subset of $G(K)$ takes values in $K$. (Indeed, if we apply "spread out and specialize" to $f'$ then we get some $f \in K[G]$ such that $f_{K'}$ agrees with $f'$ on a Zariski-dense subset of $G(K)$, but such a subset is also Zariski-dense in $G_{K'}$ (exercise!), so $f' = f_{K'}$ as desired.) Let $B$ be a Borel $K$-subgroup of $G$, so the $G(K)$-conjugates of $B(K)$ cover $G(K)$ (since $K$ is algebraically closed). Hence, it suffices to show that the coefficients have values in $K$ on all such $G(K)$-conjugates. Let $T$ be a maximal $K$-torus in $B$, so $B = T \ltimes U$ for $U := \mathscr{R}_u(B)$. Applying Lie-Kolchin to $B_{K'}$ acting on $V'$, those coefficients on any $b \in B(K)$ only depend on the $T$-component of $b$. Thus, since the characteristic polynomial is conjugation-invariant, we're reduced to studying these coefficients on points of $T(K)$. All weights of $T_{K'}$ are "$K$-rational" (since $T$ is a split $K$-torus, as $K$ is algebraically closed), so we win. QED Lemma

Now if we apply "spreading out and specialization" followed by semisimplification, from $\rho'$ we get a semisimple
$$\rho: G \to {\rm{GL}}(V)$$
such that $\rho_{K'}$ has the same characteristic polynomial as $\rho'$, due to the Lemma. But $\rho_{K'}$ is semisimple because $\rho$ is semisimple with $K$ algebraically closed (i.e., $V$ is a semisimple representation of the abstract group $G(K)$ with $K$ algebraically closed, so it is "absolutely semisimple" and hence — by consideration of the endomorphism algebra — $V_{K'}$ is semisimple over $K'$ as a representation of the abstract group $G(K)$ and thus as a representation of the algebraic group $G_{K'}$ by Zariski-density of $G(K)$ in $G_{K'}$).
Hence, $\rho_{K'}$ and $\rho'$ are semisimple representations of $G(K')$ with the same characteristic polynomial, so by Brauer-Nesbitt (which applies to semisimple representations of finite dimension for any abstract group at all) these representations are isomorphic. That isomorphism amounts to a single conjugation in the language of matrices, so it says that as algebraic representations they're $K'$-isomorphic. This gives that $\rho'$ descends to a semisimple representation of $G$ over $K$, as desired. QED Theorem
In writing a paper I need to refer to the following type of objects; does anybody know if it has an established name?

What we are given

We have $M$ a smooth manifold and $\Sigma \subset M$ a codimension 1, smooth compact submanifold with boundary $\partial\Sigma$ and interior $\hat{\Sigma}$.

What I am interested in

An open subset $D\subset M$ and a smooth map $\Phi: (-1,1)\times \Sigma \to M$ such that

$\Phi$ restricted to $(-1,1)\times \hat{\Sigma}$ is a diffeomorphism onto $D$;
$\Phi(0,\cdot)$ is the identity map on $\Sigma$;
$\Phi(t,\cdot)|_{\partial\Sigma}$ is the identity map (independent of $t$).

The second condition roughly states that we are looking at some sort of smooth homotopies of maps from the manifold with boundary $\Sigma$ to the manifold $M$. The third condition restricts to considering homotopies between maps that fix the boundary $\partial\Sigma$ (is there a commonly used terminology for this condition alone?).

What's most important, however, is the first condition. This allows us to construct a smooth foliation of $D$ by submanifolds diffeomorphic to $\hat{\Sigma}$. If I add some further analytic conditions on it, and abuse terminology a little bit, I can call it a development à la PDE theory. But it turns out that for the paper I am writing I need to talk about some properties of pairs $(D,\Phi)$ relative to a fixed $\Sigma$ with the above definition, independently of the PDE/analysis assumptions. I would prefer not to invent a new name if a construction like the above has been used in the literature before.

A simple example of the object: let $M = \mathbf{R}^2$ with the usual coordinates $(x,y)$. Let $\Sigma = [-1,1]\times\{0\}$ with coordinate $x$. Then $\Phi(s,x) = (x,s(x-1)(x+1))$ defined on $(-1,1)\times [-1,1]$ is smooth. It sends $(s,\pm 1) \mapsto (\pm 1, 0)$. It sends $(0,x) \mapsto (x,0)$. And the Jacobian determinant is $|d\Phi| = (x-1)(x+1)$, which is nonzero (indeed negative) if $|x| < 1$. So it is a diffeomorphism from $(-1,1)\times(-1,1)$ to its image.
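For the record, the Jacobian computation in the example is easy to verify symbolically; a minimal Python/sympy sketch (sympy assumed available):

import sympy as sp

x, s = sp.symbols('x s', real=True)
Phi = sp.Matrix([x, s*(x - 1)*(x + 1)])  # the example map Phi(s, x) = (x, s(x-1)(x+1))
J = Phi.jacobian([x, s])                 # Jacobian with respect to (x, s)
print(sp.factor(J.det()))                # -> (x - 1)*(x + 1), nonzero for |x| < 1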
Let's deal with wave propagation in a cylindrical duct. We ask the question: "what is the general form of a pressure wave which can propagate through the duct?" In answering this, we assume that the pressure wave in question has harmonic time dependence, i.e. $\exp(i\omega t)$ dependence. The critical thing about solutions of the wave equation is that they must obey the boundary conditions of the duct. Without going into detail, for a duct of infinite length and radius $a$ the solution is:

$p(r,\theta,x,t) = \exp(i(\omega t-n\theta)) J_n(z_{mn}r/a) \left( A_{mn}\exp(-ik_{mn}x) + B_{mn}\exp(ik_{mn}x) \right)$

where $J_n$ is the Bessel function with corresponding zeros $z_{mn}$. Mathematically the significance of $m,n$ is simply that we can assign any positive integer value to these as a feasible solution. This determines the mode mathematically. As an aside it should be pointed out that for a particular value of $\omega = \omega_c$ only certain modes with low enough $\max(m,n)$ can actually propagate; any "higher order" modes will decay rapidly along $x$. $\omega_c$ is therefore known as the cut-off frequency.

My question now is: how are modes created in the first place by an acoustic source? Does the source select a particular mode or are many present at the same time? Please explain.

To start with an answer, I thought of a vibrating disc or membrane in the duct which vibrates with velocity $v = \hat{V}\exp(i\omega t)$ in the $x$ (axial) direction. Assuming linearity, the pressure wave form should have the same harmonic time dependence. Now it is important for the sake of this argument that $v$ is exactly sinusoidal, i.e. I am not assuming there would be any higher order coefficients in the Fourier decomposition of $v$. So what mode or modes would initially (even if they are cut-off) be set up by this scenario?
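A numerical aside (not part of the question itself): under the common rigid-wall assumption, the $z_{mn}$ are the zeros of $J_n'$, and mode $(m,n)$ propagates only above $f_c = c\,z_{mn}/(2\pi a)$. A short Python sketch, with the radius and sound speed as illustrative assumptions:

import numpy as np
from scipy.special import jnp_zeros

a = 0.05   # duct radius in metres (assumed)
c = 343.0  # speed of sound in m/s (assumed)

for n in range(3):           # azimuthal order n
    zeros = jnp_zeros(n, 3)  # first three zeros of Jn' (rigid-wall condition)
    for m, z in enumerate(zeros, start=1):
        fc = c * z / (2 * np.pi * a)
        print(f"mode (m={m}, n={n}): cut-off {fc:7.1f} Hz")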
I am given the temperature profile below and the two equations:

$$Q = (2\pi R_0)LU_0(T_{1\infty}-T_{2\infty})\tag1\label1$$
$$\frac{1}{U_0R_0} = \frac{1}{h_1R_0} +\frac{\ln(R_\mathrm a/R_0)}{k}+ \frac{1}{h_2R_\mathrm a}\tag2\label2$$

and I am asked: How does \eqref{1} simplify for the temperature profile shown, i.e. the case where the temperature in the wall is very close to $T_{1\infty}$?

I am not sure if this means that I am supposed to "remove" the middle term in \eqref{2}, since it will be more or less the same as the first term, and thus the overall heat transfer coefficient would look like this:

$${U_0} = \left(\frac{1}{h_1} + \frac{R_0}{h_2R_\mathrm a}\right)^{-1}$$

What I don't understand is that I just simplified \eqref{2} and not \eqref{1}, which the question asks for. Or is it that by simplifying \eqref{2} I in turn simplify \eqref{1}? Or have I completely misunderstood the question?
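As a sanity check, one can compare the magnitudes of the three resistance terms in \eqref{2} numerically; a small Python sketch with assumed illustrative values (not taken from the problem):

import math

h1, h2 = 5000.0, 50.0   # W/(m^2 K), assumed inner/outer film coefficients
k = 40.0                # W/(m K), assumed wall conductivity
R0, Ra = 0.010, 0.012   # m, assumed inner/outer radii

terms = {
    "inner film 1/(h1*R0)": 1.0 / (h1 * R0),
    "wall ln(Ra/R0)/k":     math.log(Ra / R0) / k,
    "outer film 1/(h2*Ra)": 1.0 / (h2 * Ra),
}
inv_U0R0 = sum(terms.values())
for name, t in terms.items():
    print(f"{name}: {t:.4e}  ({100*t/inv_U0R0:.1f}% of the total)")
print(f"U0 = {1.0/(inv_U0R0*R0):.1f} W/(m^2 K)")

With numbers like these, the inner film and wall terms are negligible, which is exactly the regime where the temperature in the wall stays close to $T_{1\infty}$.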
No, but I can tell you why it feels like you are double counting.

Consider a cash flow $\tilde{x}=\tilde{x}(t,\mu,\sigma^2)$ to be received in the future. While many cash flows lack a first moment and so have no defined mean or variance, let us assume at least the second moment is defined to make the discussion simple. Implicitly, your assumption of an expectation would require a first moment to exist. Indeed, to make this easier, let us assume normality. If $t$ is time, $\mu$ the center of location, and $\sigma^2$ the scale parameter, then we can talk about a rate, from the formula
$$\rho=\frac{\mathbb{E}(\tilde{x}(\mu(t),\sigma^2(t)))}{1+d(\mu(t),\sigma^2(t))}.$$
Your assumption of additivity is problematic; while it is often used as an approximation, if you think about it for a second you will see why. Instead, I am defining
$$d=(1+r(\mu_0,\sigma^2_0))(1+\tau(t))$$
because I am using $t$ for time instead of risk. $\tau(t)$ is the function that maps the premium, as a function of time, to discount a certain cash flow. We will adopt a Frequentist interpretation of probability to make this simple.

Using multiplication, all the constants are on the left and the expectation of the random variable is in the center when we rearrange it as
$$\rho(1+d(\mu(t),\sigma^2(t)))=\mathbb{E}(\tilde{x}(\mu(t),\sigma^2(t)))=\rho\mu(t).$$
The present value cannot be stochastic as it is known by observation. Since $\mu(t)$ and $\sigma^2(t)$ are constants by definition in the Frequentist interpretation of probability, nothing on the left or right contains any randomness at all. Only the center is random. That randomness is averaged out over the sample space so that only the point is left.

You could logically take this one step further and argue that $\mu(t)=\mu(t,\sigma^2(t))$. It feels like it is double counted because the mean is a function of the variance, and the cash flow is a function of the mean and the variance. The rate is a function of the risk-adjusted mean, so it is a function of the mean and the variance. You could dissolve the mean and convert it into a pure function of variance and time, and then only the scale parameter would exist in the numerator and the denominator.
$$\rho=\frac{\mathbb{E}(\tilde{x}(\sigma^2(t),t))}{1+d(\sigma^2(t),t)}$$
You are also missing the observation that $\mu(t,\sigma^2(t))$ is similar to an expenditure function and not merely a center of location. If you step back one more unit of time, to before time zero, then $\rho$ becomes stochastic as well, because $\tilde{\rho}=\rho$ if and only if $\mathbb{E}[\mathcal{U}(\tilde{x})]>\mathcal{U}(\tilde{\rho}=0)$, where $\mathcal{U}()$ is a utility function.
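A tiny numeric illustration in Python (assumed numbers; this only contrasts the two compositions of rates, not any particular market convention) of why the multiplicative form $d=(1+r)(1+\tau)$ and the additive shortcut $r+\tau$ are not interchangeable:

expected_cash_flow = 100.0
r, tau = 0.03, 0.05  # assumed risk-free rate and time/risk premium

divisor_multiplicative = (1 + r) * (1 + tau)  # compounded, as in the definition above
divisor_additive = 1 + r + tau                # the common additive approximation

print(expected_cash_flow / divisor_multiplicative)  # ~92.46
print(expected_cash_flow / divisor_additive)        # ~92.59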
Demand/Dynamic User Assignment
Introduction

For a given set of vehicles with origin-destination relations (trips), the simulation must determine routes through the network (lists of edges) that are used to reach the destination from the origin edge. The simplest method to find these routes is to compute shortest or fastest routes through the network using a routing algorithm such as Dijkstra or A*. These algorithms require assumptions regarding the travel time for each network edge, which is commonly not known before running the simulation, because travel times depend on the number of vehicles in the network. The problem of determining suitable routes that take into account travel times in a traffic-loaded network is called user assignment. SUMO provides different tools to solve this problem; they are described below.

Iterative Assignment (Dynamic User Equilibrium)

The tool <SUMO_HOME>/tools/assign/duaIterate.py can be used to compute the (approximate) dynamic user equilibrium.

python duaIterate.py -n <network-file> -t <trip-file> -l <nr-of-iterations>

Caution: this script will require copious amounts of disk space.

duaIterate.py supports many of the same options as SUMO. Any options not listed when calling duaIterate.py --help can be passed to SUMO by adding sumo--long-option-name arg after the regular options (e.g. sumo--step-length 0.5).

This script tries to calculate a user equilibrium; that is, it tries to find a route for each vehicle (each trip from the trip-file above) such that each vehicle cannot reduce its travel cost (usually the travel time) by using a different route. It does so iteratively (hence the name) by

1. calling DUAROUTER to route the vehicles in the network with the last known edge costs (starting with empty-network travel times), and
2. calling SUMO to simulate the "real" travel times that result from the calculated routes. The resulting edge costs are used in the next routing step.

The number of iterations may be set to a fixed number or determined dynamically depending on the used options. In order to ensure convergence, there are different methods employed to calculate the route choice probability from the route cost (so the vehicle does not always choose the "cheapest" route).
In general, new routes will be added by the router to the route set of each vehicle in each iteration (at least if none of the present routes is the "cheapest") and may be chosen according to the route choice mechanisms described below.

Between successive calls of DUAROUTER, the .rou.alt.xml format is used to record not only the current best route but also previously computed alternative routes. These routes are collected within a route distribution and used when deciding the actual route to drive in the next simulation step. This isn't always the one with the currently lowest cost but is rather sampled from the distribution of alternative routes by a configurable algorithm described below.

Route-Choice algorithm

The two methods which are implemented are called Gawron and Logit in the following. The input for each of the methods is a weight or cost function on the edges of the net, coming from the simulation or default costs (in the first step or for edges which have not been traveled yet), and a set of routes, where each route has an old cost and an old probability (from the last iteration) and needs a new cost and a new probability.

Gawron (default)

The Gawron algorithm computes probabilities for choosing from a set of alternative routes for each driver. The following values are considered to compute these probabilities:

- the travel time along the used route in the previous simulation step
- the sum of edge travel times for a set of alternative routes
- the previous probability of choosing a route

Logit

The Logit mechanism applies a fixed formula to each route to calculate the new probability. It ignores old costs and old probabilities and takes the route cost directly as the sum of the edge costs from the last simulation. The probabilities are calculated from an exponential function with parameter θ, scaled by the sum over all route values:

\(p_r' = \frac{\exp(\theta c_r')}{\sum_{s\in R}\exp(\theta c_s')}\)

Termination

The option --max-convergence-deviation may be used to detect convergence and abort iterations automatically. Otherwise, a fixed number of iterations is used. Once the script finishes, any of the resulting .rou.xml files may be used for simulation, but the last one(s) should be the best.

Usage Examples

Loading vehicle types from an additional file

By default, vehicle types are taken from the input trip file and are then propagated through DUAROUTER iterations (always as part of the written route file). In order to use vehicle type definitions from an additional-file, further options must be set:

duaIterate.py -n ... -t ... -l ... --additional-file <FILE_WITH_VTYPES> duarouter--additional-file <FILE_WITH_VTYPES> duarouter--vtype-output dummy.xml

Options preceded by the string duarouter-- are passed directly to duarouter, and the option vtype-output dummy.xml must be used to prevent duplicate definition of vehicle types in the generated output files.

oneShot-assignment

An alternative to the iterative user assignment above is incremental assignment. This happens automatically when using <trip> input directly in SUMO instead of <vehicle>s with pre-defined routes. In this case each vehicle will perform a fastest-path computation at the time of departure, which prevents all vehicles from driving blindly into the same jam and works pretty well empirically (for larger scenarios).

The routes for this incremental assignment are computed using the Automatic Routing / Routing Device mechanism. Since this device allows for various configuration options, the script Tools/Assign#one-shot.py may be used to automatically try different parameter settings.
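As a footnote to the Logit mechanism described above: the formula is just a softmax over route costs. A minimal Python sketch (the value of the scaling parameter theta is an arbitrary assumption here, and this is not the actual duaIterate.py code):

import math

def logit_probabilities(costs, theta=-0.15):
    # new route-choice probabilities from last-iteration route costs;
    # theta is typically negative so that cheaper routes are more likely
    weights = [math.exp(theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

print(logit_probabilities([300.0, 320.0, 400.0]))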
The MAROUTER application computes a classic macroscopic assignment. It employs mathematical functions (resistive functions) that approximate the increase in travel time with increasing flow. This allows computing an iterative assignment without the need for time-consuming microscopic simulation.
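For concreteness, a widely used resistive (volume-delay) function is the BPR form $t = t_0\,(1+\alpha\,(q/c)^{\beta})$. Whether MAROUTER is configured with exactly this form is an assumption here; the Python sketch below is illustrative only:

def bpr_travel_time(t0, flow, capacity, alpha=0.15, beta=4.0):
    # free-flow travel time t0, scaled up as flow approaches capacity
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

for q in (200, 800, 1200):
    print(f"flow {q:4d} veh/h -> travel time {bpr_travel_time(60.0, q, 1000.0):.1f} s")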
I'm trying to make a figure using a font that has support for mathematics. The aim here is to enhance the visual appeal of the figure compared to what could be achieved with one of the plainer standard LaTeX fonts. After some experimentation I have chosen gfsartemisia-euler. I have the problem that the \dot{x} command is processed to give a result more like \underline{x}. Here's an example:

\documentclass[a4paper]{article}
\usepackage{mathtools,color,tikz}
\usetikzlibrary{positioning,shapes}
\usetikzlibrary{fit}
\usetikzlibrary{calc}
\usepackage{gfsartemisia-euler}
\usepackage[T1]{fontenc}
\begin{document}
\pagestyle{empty}
\begin{figure}[c]
\centering
\begin{tikzpicture}
\begin{scope}[font=\large, color=black, ultra thick, node distance=19mm, text centered, text=black]
\node[text width=80mm] (SE-PE) {$\displaystyle \frac{\partial}{\partial t} A_{nr} = [B_{0}, A_{nr}] + \sum_{\substack{s \neq r \\ s=1}}^{M} \frac{\dot{x}_{r} - \dot{x}_{s}}{x_{r} - x_{s}} [A_{ns}, A_{nr}]$ \\[+5pt] Schlesinger Equations/ Painlev\'e Equations};
\node[minimum height=47mm, minimum width=87mm, rounded rectangle, draw, red] at (SE-PE) {};
\end{scope}
\end{tikzpicture}
\end{figure}
\end{document}

I have looked at various pages on changing fonts in documents without success, but I don't often have to play with fonts so it appears that I'm missing some crucial piece of information on how to proceed. Suggestions will be appreciated.
In my first article "The most enlightening Calculus books", I argued the importance of maintaining high standards for mathematics education and suggested deep and inspiring calculus books for those of you who are interested in pursuing the joy of learning mathematics. The feedback has been overwhelming, and I wish to follow up with an article that addresses a couple of remarks that I've received by email.

One person commented on the blog, and another wrote me privately, to express their concern that "harder books are not necessarily better books" and that teaching which is geared towards only the smartest kids is a mistake. I want to point out that I'm in no way advocating teaching for the brightest minds only. Wide access to mathematics is something that should be encouraged all over the world, and I'm pretty sure it will help take society in a better direction. In fact, education in general – and mathematics, technical and scientific education in particular – are key for the development of every country and ultimately for the good of humankind. However, my point was that with wider access to higher education mathematics, we should not reduce the expected and established testing standards. In other words, there is a fair level of understanding that we should expect from people who major in math or from students who strongly depend on mathematics for their future careers. Furthermore, the textbooks adopted should be mathematically sound and provide the right intellectual stimulation for those who could use it.

That said, there is nothing wrong with teachers trying to use different styles of teaching to reach a wider audience, or with students who struggle with the level of math presented in the textbook supplementing it with simpler books in order to get an easier start. Hence, it's perfectly OK for a student (who for example is taking an undergraduate class in programming in C) to read C For Dummies if The C Programming Language by K&R is too hard for them off the bat. But that does not imply that the class should adopt "C for Dummies" as their textbook, nor that the examination should be based on such a book. So to summarize this point, feel free to study any number of introductory books, as long as you know that if you plan to be serious about mathematics, you should be able to eventually read and understand standard books and be able to solve most of the exercises put forward in them.

Having clarified the first concern, I'd like to provide an answer for the second point, which actually interests me the most. A few readers wrote me emails about how they feel enthusiastic about the post and the opportunity to study mathematics again, but how those books are way too advanced for them, because they simply forgot all the mathematics taught at a high school level. So I've received a few "how can I get a refresher of high school math?" type of questions.

The mathematics that you learned in high school is classified as precalculus, and as you can expect it is propaedeutic to learning math at a higher level. It is normal that you forgot quite a few formulas, but having a good grasp of the essentials of precalculus can make a big difference when trying to master calculus. You should have a decent knowledge of basic algebra, trigonometry, exponentials, logarithms, and analytic geometry. Calculus itself will provide you with a refresher of some of these topics and give you a deeper understanding not only of the "how" but also of the "why".
That said, Calculus without a decent precalculus base can be a big challenge for most people. Before proceeding to suggest a few resources, let's try to establish if you actually need a refresher course or not. Here is a (simple and of course incomplete) list of some basic exercises. If you haven't a clue or struggle to find many of the solutions, a refresher may be in order.

Simple Precalculus Questions:

1) Factor the following polynomials:

[tex]\displaystyle x^{2}-6x+9[/tex]
[tex]\displaystyle x^{2}+x-6[/tex]
[tex]\displaystyle x^{3}-27[/tex]

2) Solve for x:

[tex]\displaystyle 3x^{2}+5x-2=0[/tex]
[tex]\displaystyle |x^{2}-x|=3[/tex]
[tex]\displaystyle x^{4}-8ax^{2}+16a^{2}=0[/tex]
[tex]\displaystyle \frac{x^2+x-6}{x+3}=0[/tex]
[tex]\displaystyle 2\sqrt{x} = x - 15[/tex]

3) Find the values of x for which:

[tex]\displaystyle x^{2}>9[/tex]
[tex]\displaystyle |2x-3| \leq 5[/tex]
[tex]\displaystyle |2x-1| > 9[/tex]
[tex]\displaystyle |x-1| + |x-3| \geq 8[/tex]

4) Evaluate:

[tex]\displaystyle \log_{2}{1}[/tex]
[tex]\displaystyle \ln{e}[/tex]
[tex]\displaystyle \log_{2}{1024}[/tex]
[tex]\displaystyle \frac{4^{8}2^{4}}{2^{12}}[/tex]

5) Solve for x:

[tex]\displaystyle 5^{x}=10[/tex]
[tex]\displaystyle \log_{3}{7x} = 2[/tex]
[tex]\displaystyle \log_{x}{9}=2[/tex]
[tex]\displaystyle \ln(3x-2)=0[/tex]
[tex]\displaystyle 3^x+x=4[/tex]

6) Solve for x, where [tex]\displaystyle 0\leq x \leq 2\pi[/tex]:

[tex]\displaystyle 2\sin{x} = 1[/tex]
[tex]\displaystyle \tan{2x} = \frac{\sqrt{3}}{3}[/tex]
[tex]\displaystyle \sin{3x} = 1[/tex]
[tex]\displaystyle \cos^{2}{x} - x = 2 - \sin^{2}{x}[/tex]

7) Write the equations of the following curves in the Cartesian plane:

Parabola
Hyperbola
Circle
Ellipse

8) Find the vertex, focus, and directrix of the parabolas given by the equations:

[tex]\displaystyle x^{2}=16y[/tex]
[tex]\displaystyle y^{2}+4y+12x=-16[/tex]

9) Find the center, vertices, foci, and eccentricity of the hyperbola given by the equation:

[tex]\displaystyle \frac{x^{2}}{4}-\frac{y^{2}}{36}=1[/tex]

10) Find the equation of a circle whose center is at [tex](2, -3)[/tex] and whose radius is [tex]3[/tex].

11) Determine the center and radius of the circle with equation: [tex]\displaystyle x^{2} -4x+ y^2-18y = -4[/tex].

How did it go? Did you experience many struggles and the feeling that "I used to know this stuff"? If so, then it is a good idea to go for a refresher before attempting calculus right away. The following are two books that you may find useful to respectively learn and refresh basic math in a well organized manner:

Precalculus by Michael Sullivan: a big book, which is quite extensive and thorough. If you want an all-in-one book that covers all you need to know about precalculus and more, in a clear but college-oriented manner, then this is without doubt an excellent choice. It will likely make the step up to Calculus quite easy.

Schaum's Outline of Precalculus: it has a less prosaic approach, but it's still very clear and easy to read. If you were pretty good at math in high school and you just forgot a few things because you haven't touched these topics in a while, then pick this book up. It is adequate for already mathematically inclined people who are in a rush to brush up the skills they once had.
If you feel entirely clueless and would like a "for dummies" type of book, the following two titles seem to have a good table of contents and excellent reviews:

If you would like to use some free resources available online instead, here are some lessons:

Precalculus Tutorial
Topics in PRECALCULUS
OJK's Precalculus page
Exploring Precalculus
Precalculus problems
Prof. Ward's lecture notes (PDF, 23 pages)
Dave's Short Trig Course
Precalculus Lessons
Collection of links related to Precalculus
Wikipedia entry on Precalculus

If you know of any other resources that are available for free, or if you successfully used other books for these purposes, please feel free to use the comment section to add to the discussion.
We know that there is an electric field inside the battery that works against the moving electrons of a circuit. But there is also the chemical force of the battery, which at some point becomes equal to it. Voltage drop is the integral of the electric field over a closed loop. But you must also have the same integral over the same loop for the chemical force field. EMF is called precisely that: the integral of the chemical force field over a closed loop (the loop with the battery inside). So, can you give me an answer to "Why is the voltage of a battery equal to the EMF?" that has the forces and the integrals in the explanation? Where does the integral of the electric field go? I might be missing or misunderstanding some very basic things about work and voltage, so excuse me if this is the case!

Imagine a free-standing battery (not connected to any wires) and take a closed loop through the battery, out one terminal, and back in the other terminal. The total work done in moving a test charge around that loop must vanish. For this to happen, the change in electric potential outside of the battery must equal the negative of the EMF change within the battery.
$$\int_\mathrm{outside}\vec{E}\cdot{\mathrm{d} \vec{\ell}} + \mathcal{E} = 0$$

Update after comments

Work is well defined as the integral of force over distance. The relationship between work and energy is more subtle. One needs to carefully define what the system in question is. We also have to recognize that potential energy is the energy associated with the configuration of a system of interacting entities. One is on thin ice if one refers to "the potential energy of a particle". Particles do not have potential energy. The system comprising the particle and something with which it interacts has potential energy. (A ball does not have potential energy. The earth-ball system has potential energy.)

I'll review the background on this, with apologies if the background is already well understood. Once a system is defined, energy can be added to the system by an external force which can do external work on the system. Work is one way energy can be added. Heat is another, but we'll mostly ignore heat and thermal energy. Generally,
$$W_\mathrm{external} = \Delta E$$
where $E$ is the total energy of the system. External work causes energy to be added to the system, but once inside, that energy could be potential, kinetic, thermal, chemical ...

Potential energy is defined to be the negative of the work done by forces internal to the system:
$$W_\mathrm{internal} = -\Delta PE$$

Now our system. Let's take it to be the wire, the battery terminals, and the conductors inside the battery, but not the chemicals and processes that generate the "chemical force". The chemical processes are a source of energy, so we'll take them to be outside of our system. The work done by the chemical processes on the charge carriers is external work:
$$W_\mathrm{external}=q\mathcal{E}$$
But the internal voltage due to the separated charges within the battery also does work; this work is internal to our system, and thus changes the potential energy of the system:
$$W_\mathrm{internal} = -\Delta PE = -qV$$
but recall
$$W_\mathrm{external} = \Delta E = \Delta PE = -W_\mathrm{internal}$$
(ignoring stores of energy other than potential energy within the system). Finally
$$q\mathcal{E}=qV$$
$$\mathcal{E} = V$$

First, let's establish the situation in which the result actually holds.
Voltage itself is only well defined in electrostatics, and this result only holds in a steady state. In an ideal battery, there is no energy loss inside the battery during operation, and in the steady state just as much charge flows into the battery as flows out of the battery, and just as much current flows into the battery as flows out of the battery, so the average work done per unit charge inside the battery by both the electrostatic force per unit charge $\vec{E}$ and the chemical (or more generally the source) force per unit charge $\vec{f}_s$ is zero. So if the battery has terminals at a and b then: $$0=\int_a^b\left(\vec{f}_s+\vec{E}\right)\cdot d\vec{\ell}.$$ Therefore we get \begin{align}V &=-\int_a^b \vec{E} \cdot d\vec{\ell}\\ &=\int_a^b \vec{f}_s \cdot d\vec{\ell}\\ &=\oint \vec{f}_s \cdot d\vec{\ell}\\ &=\mathscr E_{battery}. \end{align} The first equality is by an electrostatic definition. The second equality is from our previous equation about the steady state. The third is because the battery only exerts a force per unit charge inside the battery. The last is by general definition of the EMF generated by the battery when $\vec{f}_s$ is the force per unit charge exerted by the battery. Engineer's (practical) answer: They are the same thing by different names. EMF and voltage are interchangeable terms as applied to sources.
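A numeric footnote to the answers above (standard circuit theory; the values are assumed): with internal resistance $r$ the terminal voltage is $V = \mathcal{E} - Ir$, so $V$ coincides with the EMF exactly in the ideal ($r\to0$) or open-circuit ($I\to0$) situations discussed above.

def terminal_voltage(emf, r_internal, r_load):
    # V = emf - I*r for a battery driving a resistive load
    current = emf / (r_internal + r_load)
    return emf - current * r_internal

emf = 1.5  # volts, assumed
for r_load in (1e9, 10.0, 1.0, 0.1):  # from open circuit down to a heavy load
    print(f"R_load = {r_load:>10} ohm: V = {terminal_voltage(emf, 0.5, r_load):.3f} V")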
Continuum flow limitations
The transport phenomena are usually modeled in continuum states for most applications – the materials are assumed to be continuous and the fact that matter is made of atoms is ignored. Recent development in fabrication and utilization of nanotechnology, micro devices, and microelectromechanical systems (MEMS) requires noncontinuum modeling of transport phenomena in nano- and microchannels. When the characteristic dimension, L, is small compared to the molecular mean free path, λ, which is defined as the average distance between collisions for a molecule, the traditional Navier-Stokes equation and the energy equation based on the continuum assumption have failed to provide accurate results. The continuum assumption also fails when the gas is at very low pressure (rarefied).
The continuum assumption may also not be valid in conventional sized systems – for example, the early stages of high-temperature heat pipe startup from a frozen state (Cao and Faghri, 1993) and microscale heat pipes (Cao and Faghri, 1994). During the early stage of startup of high-temperature heat pipes, the vapor density in the heat pipe core is very low and partly loses its continuum characteristics. The vapor flow in this condition is usually referred to as rarefied vapor flow. Because of the low density, the vapor in the rarefied state is somewhat different from the conventional continuum state. Also, the vapor density gradient is very large along the axial direction of the heat pipe. The vapor flow along the axial direction is caused mainly by the density gradient via vapor molecular diffusion.

The validity of the continuum assumption can also be violated in micro heat pipes. As the size of the heat pipe decreases, the vapor in the heat pipe may lose its continuum characteristics. The heat transport capability of a heat pipe operating under noncontinuum vapor flow conditions is very limited, and a large temperature gradient exists along the heat pipe length. This is especially true for miniature or micro heat pipes, whose dimensions may be extremely small.

The continuum criterion is usually expressed in terms of the Knudsen number

Based on the degree of rarefaction of the gas or the device size, the flow in various devices can be classified into four regimes:

1. Continuum regime (Kn < 0.001). The Navier-Stokes and energy equations are valid (with no-slip/no-jump boundary conditions).

2. Slip flow regime (0.001 < Kn < 0.1). The Navier-Stokes and energy equations can be used with the application of slip or jump boundary conditions, i.e., allowing non-zero axial fluid velocity near the wall of the object.

3. Transition regime (0.1 < Kn < 10). The Navier-Stokes equation is not valid, and the flow must be solved using molecular based models such as the Boltzmann equation or Direct Simulation Monte Carlo (DSMC).

4. Free molecular flow regime (Kn > 10). The collision between molecules can be neglected and a collisionless Boltzmann equation can be used.

In the slip flow regime, the slip boundary condition refers to the circumstance when the tangential velocity of the fluid at the wall is not the same as the wall velocity. Temperature jump is similarly defined as when the temperature of the fluid next to the wall is not the same as the wall temperature.

The mean free path for dilute gases based on the kinetic theory can be rewritten in terms of temperature and pressure. The transition density under which the continuum assumption is invalid can be obtained by combining eqs. (1) and (2) and Kn = 0.001, i.e.,

\({\rho _{tr}} = \frac{1.051{k_b}}{\sqrt{2}\pi{\sigma^2}{R_g}D\,{\rm{Kn}}}\)    (3)

where the ideal gas equation of state, \(p = \rho R_g T\), was used. Assuming that the vapor is in the saturation state, the transition vapor temperature \(T_{tr}\) corresponding to the transition density can be obtained by using the Clausius-Clapeyron equation (see Chapter 2) combined with the equation of state:

\({T_{tr}} = \frac{p_{sat}}{\rho R_g}\exp\left[-\frac{h_{\ell v}}{R_g}\left(\frac{1}{T_{tr}} - \frac{1}{T_{sat}}\right)\right]\)    (4)

where \(p_{sat}\) and \(T_{sat}\) are the saturation pressure and temperature, \(h_{\ell v}\) is the latent heat of vaporization, and the vapor density \(\rho\) is given by eq. (3). Equation (4) can be rewritten as

\(\ln\left(\frac{T_{tr}\rho R_g}{p_{sat}}\right) + \frac{h_{\ell v}}{R_g}\left(\frac{1}{T_{tr}} - \frac{1}{T_{sat}}\right) = 0\)    (5)

and solved iteratively for \(T_{tr}\) using the Newton-Raphson/secant method. The transition vapor temperature is the boundary between the continuum and noncontinuum regimes.
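The regime thresholds above are easy to put into code; a small Python sketch (the kinetic-theory mean free path \(\lambda = k_B T/(\sqrt{2}\pi\sigma^2 p)\) is the standard dilute-gas expression consistent with eq. (3), and the gas/channel values are illustrative assumptions):

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(T, p, sigma, L):
    # Kn = lambda / L for temperature T (K), pressure p (Pa),
    # molecular diameter sigma (m), characteristic length L (m)
    mean_free_path = K_B * T / (math.sqrt(2.0) * math.pi * sigma**2 * p)
    return mean_free_path / L

def flow_regime(kn):
    if kn < 0.001:
        return "continuum"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transition"
    return "free molecular"

kn = knudsen_number(T=300.0, p=101325.0, sigma=3.7e-10, L=1e-6)  # air-like gas, 1 micron channel
print(f"Kn = {kn:.3f} -> {flow_regime(kn)} regime")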
References

Cao, Y., and Faghri, A., 1993, "Simulation of the Early Startup Period of High Temperature Heat Pipes From the Frozen State by a Rarefied Vapor Self-Diffusion Model," ASME Journal of Heat Transfer, Vol. 115, pp. 239-246.

Cao, Y., and Faghri, A., 1994, "Micro/Miniature Heat Pipes and Operating Limitations," Journal of Enhanced Heat Transfer, Vol. 1, pp. 265-274.
Continuity for the rotation-two-component Camassa-Holm system

1. School of Public Affairs, Chongqing University, Chongqing 400044, China
2. Department of Mathematics, Southwestern University of Finance and Economics, Sichuan 611130, China
3. College of Mathematics Science, Chongqing Normal University, Chongqing 401331, China
4. College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China

This paper focuses on the Cauchy problem for the rotation-two-component Camassa-Holm (R2CH) system, a model of equatorial water waves that includes the effect of the Coriolis force. It has been shown that the R2CH system is well-posed in Sobolev spaces $ H^s(\mathbb{R})\times H^{s-1}(\mathbb{R}) $ with $ s>3/2 $. Using the method of approximate solutions in conjunction with well-posedness estimates, we further prove that the dependence on initial data is sharp, i.e., the data-to-solution map is continuous but not uniformly continuous. Moreover, we show that the solution map for the R2CH system is Hölder continuous in the $ H^\theta(\mathbb{R})\times H^{\theta-1}(\mathbb{R}) $-topology for all $ 0\leq\theta<s $, with exponent $ \gamma $ depending on $ s $ and $ \theta $. The Coriolis term and the higher-order nonlinear term in the R2CH system make the construction of the approximate solutions challenging.

Keywords: Rotation-two-component Camassa-Holm system, Sobolev space, non-uniform dependence, Hölder continuity.

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: Chenghua Wang, Rong Zeng, Shouming Zhou, Bin Wang, Chunlai Mu. Continuity for the rotation-two-component Camassa-Holm system. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12): 6633-6652. doi: 10.3934/dcdsb.2019160
I have been trying to understand the last step of this derivation. Consider a sphere carrying total charge $+q$. Let $R$ be the radius of the sphere and $O$ its center. A point $P$ lies inside the sphere of charge. In this case, the Gaussian surface is a spherical shell whose center is $O$ and whose radius is $r$ ($=OP$). If $q'$ is the charge enclosed by the Gaussian surface, then $$E\times4\pi r^2=q'/\epsilon$$ where $\epsilon$ is the absolute permittivity of free space. $$E=\frac{1}{4\pi\epsilon}\times\frac{q'}{r^2} \quad (r<R)$$ $$q'=\frac{q}{\frac{4}{3}\pi R^3}\times\frac{4}{3}\pi r^3=\frac{qr^3}{R^3}$$
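For completeness, the remaining step is just the substitution of this $q'$ into the expression for $E$ above:

$$E=\frac{1}{4\pi\epsilon}\cdot\frac{1}{r^2}\cdot\frac{qr^3}{R^3}=\frac{qr}{4\pi\epsilon R^3},$$

so inside a uniformly charged sphere the field grows linearly with the distance $r$ from the center.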
Is Supersymmetry really swapping fermions with bosons?

I wouldn't say supersymmetry is really swapping fermions with bosons. It's saying there's a certain symmetry wherein each particle has a superpartner. A boson has a fermionic superpartner, and a fermion has a bosonic superpartner. For example, supersymmetry says the electron has a superpartner called a selectron, which is a boson. However, we've never seen a selectron, and people say it would have a much larger mass-energy than an electron. So you can't just swap the electron for a selectron.

I've been studying supersymmetry for the last few months, and while I can do some mathematics with the Wess-Zumino model (show the Lagrangian is invariant under a SUSY transformation, find the Noether charges, etc.) I realise that I don't actually know what a supersymmetric transformation does.

I don't think anybody will be able to tell you that, because contemporary physics doesn't give you a description of the electron, or the selectron. It doesn't even tell you what happens in pair production and annihilation. We can swap bosons into fermions in pair production, and we can swap fermions into bosons in annihilation. There's no description of what this transformation does. In a similar vein, there's no description of swapping an electron into a selectron. Which you can't do anyway, because of the mass-energy difference.

An infinitesimal transformation of a spin 0 particle is proportional to a spin half particle, $\delta\phi=\bar{\epsilon}\chi$, and vice versa, but I don't know what this means for the universe.

It doesn't mean anything for the universe. If SUSY can't explain how a real-world transformation from a photon to an electron occurs in pair production, it hasn't got the foundation upon which to propose the selectron. A mathematical symmetry just isn't enough. Especially when it's broken.

Unlike other symmetries I know, the universe doesn't actually seem symmetric under SUSY. E.g., if the entire universe were translated in some direction, or transformed under CPT, then we wouldn't be able to notice. Yet if we swapped all the fermions and bosons, then the seat that I am sitting on would be made of Higgs particles and I'd fall on the floor. What am I misunderstanding about supersymmetry?

Perhaps it's this: SUSY is only a hypothesis. I've heard that it's only a symmetry of the equations, but what does that mean for our universe? The equations might not be describing our universe.

Can a supersymmetric transformation actually happen?

I don't know how. I think other transformations can happen. For example, the positron is a time-reversed electron because it has the opposite chirality, not because it's an electron travelling back through time. See this answer where I tried to describe it with a gif played backwards. But I have no concept of what a selectron is, so like I said, I don't know how a supersymmetric transformation can happen.
Recurrence Property Pattern

Untimed version

Pattern Name and Classification
Recurrence: Occurrence Qualitative Patterns

Structured English Specification
Scope, P [holds] repeatedly.

Pattern Intent
This pattern requires that P must hold recurrently.

Temporal Logic Mappings

LTL
Globally: $\Box (\Diamond \; P)$
Before R: $\Diamond R \rightarrow ((\Diamond (P \vee R)) \; \mathcal{U} \; R)$
After Q: $\Box (Q \rightarrow \Box (\Diamond \; P))$
Between Q and R: $\Box ((Q \wedge \neg R \wedge \Diamond R) \rightarrow ((\Diamond (P \vee R)) \mathcal{U} R))$
After Q until R: $\Box ((Q \wedge \neg R) \rightarrow ((\Diamond (P \vee R)) \mathcal{W} R))$

CTL
Globally: $AG(AF \;P)$
Before R: $A[((AF (P \vee R \vee AG(\neg R)))) \mathcal{W} R]$
After Q: $AG(Q \rightarrow AG(AF\; P))$
Between Q and R: $AG((Q \wedge \neg R) \rightarrow A[(AF (P \vee R \vee AG(\neg R))) \mathcal{W} R])$
After Q until R: $AG((Q \wedge \neg R) \rightarrow A[AF (P \vee R) \mathcal{W} R])$

Example and Known Uses
The system infinitely often checks the functioning of its sensors.

Additional notes
This pattern is the repetition over time of the Existence Pattern presented in [1]. As highlighted in [2], this untimed version can be found in several publications, such as [3], and it is commonly used to specify the absence of non-progress cycles in a system.

Time constrained version

The Recurrence Property Pattern has been proposed in [2].

Pattern Name and Classification
Recurrence: Occurrence Specification Pattern

Structured English Specification
Scope, P [holds] repeatedly [every $t_u^0 \in \mathbb{R}^+$ TimeUnits]. (see the English grammar).

Pattern Intent
This pattern describes the periodic satisfaction of a propositional formula. Intuitively, it captures the property that in every c time unit(s), the proposition P has to hold at least once. The proposition P holding more often than every c time units, or holding continuously, is considered a correct behavior in this pattern.

Temporal Logic Mappings

LTL
Globally: $\Box(\Diamond_{\leq c} P)$
Before R: $\Diamond R \rightarrow ((\Diamond_{\leq c}(P \vee R)) \; \mathcal{U}\; R)$
After Q: $\Box (Q \rightarrow \Box (\Diamond_{\leq c} P))$
Between Q and R: $\Box ((Q \wedge \neg R\wedge \Diamond R) \rightarrow ((\Diamond_{\leq c}(P \vee R)) \; \mathcal{U} \; R))$
After Q until R: $\Box ((Q \wedge \neg R) \rightarrow ((\Diamond_{\leq c}(P \vee R)) \; \mathcal{W} \;R))$

CTL
Globally: $AG (AF_{\leq c} P)$
Before R: $A[((AF_{\leq c} (P \vee R)) \vee AG(\neg R)) \; \mathcal{W} \;R]$
After Q: $AG(Q \rightarrow AG (AF_{\leq c} P))$
Between Q and R: $AG((Q \wedge \neg R) \rightarrow A[((AF_{\leq c} (P \vee R)) \vee AG(\neg R)) \; \mathcal{W} \;R])$
After Q until R: $AG((Q \wedge \neg R) \rightarrow A[(AF_{\leq c} (P \vee R)) \; \mathcal{W} \;R])$

Additional notes
The Recurrence Property Pattern has been proposed by Konrad and Cheng in [2].

Probabilistic version

Pattern Name and Classification
Probabilistic Recurrence: Time Constrained Patterns

Structured English Specification
Scope, P [holds] repeatedly [every $t_u^0 \in \mathbb{R}^+$ TimeUnits] [ Probability]. (see the English grammar).

Pattern Intent
This pattern describes the periodic satisfaction of a state formula. Intuitively, it captures the property that in every interval $[a, a+c]$ within the scope of the property, the state formula P has to hold at least once.
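Before the formal mappings, here is one executable reading of this intent (our own sketch, not from [1]-[3]): on a finite timestamped trace, "P holds repeatedly every c time units" can be checked by verifying that consecutive satisfactions of P are never more than c apart.

def holds_repeatedly(timestamps_where_P_holds, trace_end, c):
    """True if P holds at least once in every window of c time units,
    from time 0 up to trace_end (finite-trace approximation)."""
    last = 0.0
    for t in sorted(timestamps_where_P_holds):
        if t - last > c:
            return False   # a window of length c with no P
        last = t
    return trace_end - last <= c

# Example: sensor checks at these times, horizon 10, period c = 3
print(holds_repeatedly([1.0, 3.5, 6.0, 8.9], trace_end=10.0, c=3.0))  # True
print(holds_repeatedly([1.0, 5.5], trace_end=10.0, c=3.0))            # False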
Temporal Logic Mappings

PLTL
Globally: $[\Box (\Diamond^{[t1,t2]}\; P)]_{\bowtie p}$
Before R: $[\Diamond R \rightarrow ((\Diamond^{[t1,t2]}\; (P \vee R)) \mathcal{U} R)]_{\bowtie p}$
After Q: $[\Box (Q \rightarrow \Box (\Diamond^{[t1,t2]}\; P))]_{\bowtie p}$
Between Q and R: $[\Box ((Q \wedge \neg R \wedge \Diamond R) \rightarrow ((\Diamond^{[t1,t2]}\;(P \vee R)) \mathcal{U} R))]_{\bowtie p}$
After Q until R: $[\Box ((Q \wedge \neg R) \rightarrow ((\Diamond^{[t1,t2]}\;(P \vee R)) \mathcal{W} R))]_{\bowtie p}$

CSL
Globally: $\mathcal{P}_{\bowtie p}(\Box (\Diamond^{[t1,t2]}\; P))$
Before R: $\mathcal{P}_{= 1}(\Diamond R \rightarrow \mathcal{P}_{\bowtie p}((\Diamond^{[t1,t2]}\; (P \vee R)) \mathcal{U} R))$
After Q: $\mathcal{P}_{= 1}(\Box (Q \rightarrow \mathcal{P}_{\bowtie p} \Box (\Diamond^{[t1,t2]}\; P)))$
Between Q and R: $\mathcal{P}_{= 1}(\Box ((Q \wedge \neg R \wedge \Diamond R) \rightarrow \mathcal{P}_{\bowtie p}((\Diamond^{[t1,t2]}\;(P \vee R)) \mathcal{U} R)))$
After Q until R: $\mathcal{P}_{= 1}(\Box ((Q \wedge \neg R) \rightarrow \mathcal{P}_{\bowtie p}((\Diamond^{[t1,t2]}\;(P \vee R)) \mathcal{W} R)))$

Example and Known Uses
The system checks the functioning of its sensors in an interval of 3 units of time.

Additional notes
This pattern is a probabilistic version of the recurrence pattern.

Bibliography
1. Matthew B. Dwyer; George S. Avrunin; James C. Corbett, Patterns in Property Specifications for Finite-State Verification. ICSE 1999, pp. 411-420.
2. Sascha Konrad; Betty H. C. Cheng, Real-time Specification Patterns. ICSE 2005, pp. 372-381.
3. Holzmann, Gerard J., The Spin Model Checker: Primer and Reference Manual. Addison-Wesley Publishing Company, 2003.
Estimate an ARIMA Model

Estimates an ARIMA model for a univariate time series, including a sparse ARIMA model.

Usage

estimate(x, p = 0, d = 0, q = 0, PDQ = c(0, 0, 0), S = NA, method = c("CSS-ML", "ML", "CSS"), intercept = TRUE, output = TRUE, ...)

Arguments

x: a univariate time series.
p: the AR order; can be a positive integer or a vector of several positive integers. The default is 0.
d: the degree of differencing. The default is 0.
q: the MA order; can be a positive integer or a vector of several positive integers. The default is 0.
PDQ: a vector of three non-negative integers specifying the seasonal part of the ARIMA model. The default is c(0,0,0).
S: the period of the seasonal ARIMA model. The default is NA.
method: fitting method. The default is CSS-ML.
intercept: a logical value indicating whether to include the intercept in the ARIMA model. The default is TRUE.
output: a logical value indicating whether to print the results in the R console. The default is TRUE.
...: optional arguments passed to the arima function.

Details

This function is similar to the ESTIMATE statement in the ARIMA procedure of SAS, except that it does not fit a transfer function model for a univariate time series. The fitting method is inherited from arima in the stats package. To be specific, the pure ARMA(p,q) model is defined as $$X[t] = \mu + \phi[1]*X[t-1] + ... + \phi[p]*X[t-p] + e[t] - \theta[1]*e[t-1] - ... - \theta[q]*e[t-q].$$ The p and q can be vectors, for fitting a sparse ARIMA model. For example, p = c(1,3), q = c(1,3) specifies the ARMA((1,3),(1,3)) model defined as $$X[t] = \mu + \phi[1]*X[t-1] + \phi[3]*X[t-3] + e[t] - \theta[1]*e[t-1] - \theta[3]*e[t-3].$$ The PDQ argument controls the order of the seasonal ARIMA model, i.e., ARIMA(p,d,q)x(P,D,Q)(S), where S is the seasonal period. Note that the difference operators d and D = PDQ[2] are different: d is equivalent to diff(x, differences = d) and D to diff(x, lag = S, differences = D), where the default seasonal period is S = frequency(x). The residual diagnostics plots will be drawn.

Value

Note

Missing values are removed before the estimation. A sparse seasonal ARIMA(p,d,q)x(P,D,Q)(S) model is not allowed.

References

Brockwell, P. J. and Davis, R. A. (1996). Introduction to Time Series and Forecasting. Springer, New York. Sections 3.3 and 8.3.

See Also

Aliases

estimate

Examples

estimate(lh, p = 1)           # AR(1) process
estimate(lh, p = 1, q = 1)    # ARMA(1,1) process
estimate(lh, p = c(1,3))      # sparse AR((1,3)) process
# seasonal ARIMA(0,1,1)x(0,1,1)(12) model
estimate(USAccDeaths, p = 1, d = 1, PDQ = c(0,1,1))

Documentation reproduced from package aTSA, version 3.1.2, License: GPL-2 | GPL-3
Consider a body attached to a horizontal spring and resting on a surface inclined at an angle $\theta$ from the ground. The spring constant is $k$. Initially the spring was kept at its natural length while the body was held still by some external agent. When the external agent was removed, the body slid $x$ units down the inclined plane to achieve equilibrium. The coefficient of (kinetic) friction between the body and the surface is $\mu$.

To solve this problem, I can use two methods:

1. Work-Energy Theorem: This yields $$0=mg(x\sin\theta)-\frac12kx^2-\mu (mg\cos\theta)x$$

2. Equating forces at equilibrium (along the inclined plane): This yields $$kx+\mu mg\cos\theta-mg\sin\theta=0$$

On solving the equations, the two methods give different answers, while they should give the same. Is there anything I am missing here? Please help me solve this problem.
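Solving each displayed equation for $x$ makes the discrepancy explicit (pure algebra on the two equations above):

$$\text{work-energy:}\quad x=\frac{2mg(\sin\theta-\mu\cos\theta)}{k}, \qquad \text{force balance:}\quad x=\frac{mg(\sin\theta-\mu\cos\theta)}{k},$$

so the two candidate values of $x$ differ by exactly a factor of two.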
Pi is defined as the ratio of the circumference of any circle to its diameter. As all circles are similar and therefore proportional in dimensions, pi is the same for all circles and is a constant. Consequently, pi can also be viewed as the area of a circle whose radius is one.

Value

The true value can never be written exactly (not with finitely many digits), but only approximated, and therefore π should often remain a factored constant. Pi to one hundred significant digits: $ \pi = $ 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679$ \cdots $. The value has been computed using supercomputers to more than 2.7 trillion digits.

Computing Pi

Pi can be computed by a variety of techniques and represented in a variety of ways. Even the value itself has been represented in base numbering systems other than decimal. As an infinite summation: $ \pi = \sum_{n=0}^\infty \frac{4{(-1)}^n}{2n+1} = \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} \cdots $ As an integral: $ \pi = \int^{\infty}_{-\infty} \frac{dx}{1 + x^2} $

Applications

Pi is used to relate properties of spheres and circles to their radii; however, due to the unique properties and origins of the number, the value has uses throughout mathematics, including outside the realm of strict circular geometry. Most notably, pi is the basis of the angular measurement unit radians, and therefore has huge implications for trigonometry, complex analysis, and calculus. An especially important application is in Euler's formula. Pi appears in many integrals; for example: $ \int^{\infty}_{-\infty} \frac{1}{1+x^2} \ dx = \pi $ , $ \int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi} $ and $ \int\limits_{-1}^{1}\frac{dx}{\sqrt{1-x^2}} = \pi $
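As a small illustration of the infinite summation above (our own sketch, not part of the original article), here are partial sums of that series; the error after n terms is roughly 1/n, so it converges far too slowly for serious digit hunting:

def leibniz_pi(n_terms):
    """Partial sum of 4*(-1)^n / (2n+1) for n = 0..n_terms-1."""
    return sum(4.0 * (-1) ** n / (2 * n + 1) for n in range(n_terms))

for n in (10, 1000, 100000):
    print(n, leibniz_pi(n))   # approaches 3.14159... with error of order 1/n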
It is well-known that as you have more evidence (say in the form of larger $n$ for $n$ i.i.d. examples), the Bayesian prior gets "forgotten", and most of the inference is driven by the evidence (or the likelihood). It is easy to see this in various specific cases (such as Bernoulli with a Beta prior, or other examples of that type), but is there a way to see it in the general case with $x_1,\ldots,x_n \sim p(x|\mu)$ and some prior $p(\mu)$? EDIT: I am guessing it cannot be shown in the general case for any prior (for example, a point-mass prior would keep the posterior a point-mass). But perhaps there are certain conditions under which a prior is forgotten. Here is the kind of "path" I have in mind for showing something like this: Assume the parameter space is $\Theta$, and let $p(\theta)$ and $q(\theta)$ be two priors which place non-zero probability mass on all of $\Theta$. So, the two posterior calculations for each prior amount to: $$p(\theta | x_1,\ldots,x_n) = \frac{\prod_i p(x_i | \theta) p(\theta)}{\int_{\theta} \prod_i p(x_i | \theta) p(\theta) d\theta}$$ and $$q(\theta | x_1,\ldots,x_n) = \frac{\prod_i p(x_i | \theta) q(\theta)}{\int_{\theta} \prod_i p(x_i | \theta) q(\theta) d\theta}$$ If you divide $p$ by $q$ (the posteriors), then you get: $$p(\theta | x_1,\ldots,x_n)/q(\theta | x_1,\ldots,x_n) = \frac{p(\theta)\int_{\theta} \prod_i p(x_i | \theta) q(\theta)d \theta}{q(\theta)\int_{\theta} \prod_i p(x_i | \theta) p(\theta)d \theta}$$ Now I would like to explore the above term as $n$ goes to $\infty$. Ideally it would go to $1$ for a certain $\theta$ that "makes sense", or show some other nice behavior, but I can't figure out how to show anything there.
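Not a proof, but a quick numerical illustration of the "forgetting" in the Beta-Bernoulli case mentioned above (our own sketch; the two prior parameter choices are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta_true = 0.3
grid = np.linspace(1e-4, 1 - 1e-4, 2001)
dx = grid[1] - grid[0]

for n in (10, 100, 10000):
    x = rng.binomial(1, theta_true, size=n)
    s = x.sum()
    # posterior under prior Beta(1,1) and under prior Beta(20,2)
    post_p = stats.beta.pdf(grid, 1 + s, 1 + n - s)
    post_q = stats.beta.pdf(grid, 20 + s, 2 + n - s)
    tv = 0.5 * np.sum(np.abs(post_p - post_q)) * dx  # approximate total variation
    print(f"n = {n:6d}   TV(posterior_p, posterior_q) ~ {tv:.4f}")

The total-variation gap between the two posteriors shrinks as n grows, which is exactly the "forgetting" the question asks about (here in one concrete model, not in general).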
As for your first two questions: there is indeed a behaviour and a target policy, which can be different. In the example image of the $3$-step tree-backup update in the beginning of the section you mention, the actions $A_t$, $A_{t+1}$, and $A_{t+2}$ are assumed to be selected according to some behaviour policy, whereas a (different) target policy is used to determine weights for the different leaf nodes. As for your third question, in the case where our target policy is greedy, lots of terms will indeed have $0$ weights and therefore entirely drop out. However, this is not always going to fall down to the same return as $n$-step Sarsa returns; that would not be correct, because ($n$-step) Sarsa is an on-policy algorithm. In the case where the target policy is a greedy policy, the return will depend very much on how many actions happened to get selected by the behaviour policy which the greedy target policy also would have "agreed" with. If the behaviour policy already happened to have made a "mistake" (selected an action that the greedy policy wouldn't have selected) for $A_{t+1}$, you'll end up with a standard $1$-step $Q$-learning update. If the behaviour policy only made a "mistake" with the action $A_{t+2}$ (and agreed with the target policy on $A_{t+1}$), you'll get something that kind of looks like a "$2$-step $Q$-learning update" (informally here, because $n$-step $Q$-learning isn't really a thing). Etc. You can see that this is the case by closely inspecting Equation 7.16 from the book: $$G_{t:t+n} \doteq R_{t+1} + \gamma \color{blue}{\sum_{a \neq A_{t+1}} \pi (a \mid S_{t+1}) Q_{t + n - 1}(S_{t+1}, a)} + \gamma \color{red}{\pi (A_{t + 1} \mid S_{t + 1}) G_{t + 1 : t + n}}.$$ Suppose that the target policy $\pi$ is greedy, and the behaviour policy already selected action $A_{t+1}$ differently from what $\pi$ would have selected (meaning that $\pi(A_{t+1} \mid S_{t+1}) = 0$). Then we know that the $\color{red}{\text{red}}$ part of the equation will evaluate to $0$. In the $\color{blue}{\text{blue sum}}$, the $\pi(a \mid S_{t+1})$ term will only evaluate to $1$ for $a = \arg\max_a Q_{t+n-1}(S_{t+1}, a)$ (because that is the action $a$ to which the greedy target policy would assign a probability of $1$), and $0$ again for all other elements of the sum. So, in this example situation, we end up using the return $G_{t:t+n} = R_{t+1} + \gamma \max_a Q_{t+n-1}(S_{t+1}, a)$. This is exactly the return also used by the standard $1$-step $Q$-learning update rule. Now suppose that the behaviour policy and a greedy target policy happened to "agree" with each other on $A_{t + 1}$, but disagree again on $A_{t + 2}$. In this case, the entire $\color{blue}{\text{blue sum}}$ for $G_{t:t+n}$ will evaluate to $0$, but the $\pi(A_{t+1} \mid S_{t+1})$ in the $\color{red}{\text{red part}}$ will evaluate to $1$. Then, "inside" the recursive $\color{red}{G_{t+1:t+n}}$, we'll get a very similar situation to the example above, with a single non-zero term in the blue sum and a red part that evaluates to $0$. In this case, the total return will be $G_{t:t+n} = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 \max_a Q_{t + n - 1}(S_{t + 2}, a)$. Note that this is different from a $2$-step Sarsa return. $2$-step Sarsa would use a return with $\gamma^2 Q_{t+n-1}(S_{t+2}, A_{t+2})$ at the end (where $A_{t+2}$ is sampled according to the behaviour policy), rather than that term involving $\max_a$. 
What you essentially end up getting in these cases is returns similar to those of $n$-step Sarsa, but they automatically get truncated as soon as there is disagreement between the behaviour policy and a greedy target policy (all subsequent reward observations get replaced by a single bootstrapping value estimate instead). In the examples above, I assumed completely greedy target policies, since that is what you appeared to be most interested in in your question (and is probably also the most common use-case of off-policy learning). Note that a target policy does not have to be greedy. You can also have non-greedy target policies if you like, and then the returns will obviously change quite a bit from the discussion above (fewer $\pi(S, A)$ terms would evaluate to $0$, there'd be more non-zero terms).
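To make the truncation behaviour concrete, here is a small sketch (our own illustration of Equation 7.16, not code from the book) of the n-step tree-backup return with a greedy target policy; the data layout and names are assumptions:

import numpy as np

def tree_backup_return(rewards, states, actions, Q, gamma):
    """Recursive form of Eq. 7.16 with a (unique) greedy target policy.

    rewards = [R_{t+1}, ..., R_{t+n}], states = [S_{t+1}, ..., S_{t+n}],
    actions = [A_{t+1}, ..., A_{t+n-1}] chosen by the behaviour policy.
    Q maps a state to an array of action values."""
    G = rewards[-1] + gamma * np.max(Q[states[-1]])   # bootstrap at the horizon
    for k in range(len(actions) - 1, -1, -1):
        s, a = states[k], actions[k]
        if a == int(np.argmax(Q[s])):
            # behaviour agreed with the greedy target: keep unrolling
            G = rewards[k] + gamma * G
        else:
            # disagreement: the recursive ("red") term gets weight 0, and the
            # return truncates into a 1-step Q-learning-style target here
            G = rewards[k] + gamma * np.max(Q[s])
    return G

# toy usage: two states, two actions; behaviour picks action 1 in state 0,
# but the greedy action there is 0, so the return truncates
Q = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
print(tree_backup_return([0.5, 1.0], [0, 1], [1], Q, gamma=0.9))  # 1.4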
Dear MO,

Question 1. Do you know of an example of a Fano variety which is not Frobenius split?

Background (1) A variety $X$ in characteristic $p$ is called Frobenius split if there is a "$p$-th root" map $\sigma: \mathcal{O}_X \to \mathcal{O}_X$, that is, an additive map satisfying $\sigma(f^p g) = f\sigma(g)$ and $\sigma(1) = 1$ (in particular, $\sigma(f^p) = f$, so that $\sigma$ is an $\mathcal{O}_X$-linear splitting of the Frobenius map $F: \mathcal{O}_X \to F_* \mathcal{O}_X$). Such varieties enjoy very nice properties, for example, $H^i(X, L)=0$ for $i>0$ for every ample line bundle $L$ on $X$. In case $X$ is smooth and projective, $X$ is Frobenius split if and only if the map $F: H^{\dim X}(X, \omega_X) \to H^{\dim X}(X, F^* \omega_X)$ is nonzero. Note that $F^* \omega_X = \omega_X^p$. In particular, by Serre duality, $H^{\dim X}(X, \omega_X^p)^\vee = H^{0}(X, \omega_X^{1-p})$ is nonzero, that is, $(1-p)K_X$ is effective, so Frobenius split varieties are "on the Fano side".

(2) A smooth projective variety $X$ is called Fano if $\omega_X^{-1}$ is ample. One can prove (Brion and Kumar, Frobenius Splitting Methods in Geometry and Representation Theory, Exercise 1.6.E5) that if $X$ is a Fano variety in characteristic $0$, then for $p\gg 0$ the reduction $X_p$ of $X$ mod $p$ is Fano and Frobenius split. This means that counterexamples to Question 1 might be difficult to find.

Further questions

Therefore, I am almost sure that, if counterexamples appear in the answers, they will have $\dim X$ (or other invariants of $X$, for example the degree of $K_X$ or its index) large compared to $p$. So I would like to ask:

Question 2. Can you find an effective bound $M = M(X) = M(\dim X, \ldots)$, depending on the dimension of $X$ and maybe other relatively simple invariants, such as the degree or the index, such that whenever $X$ is a Fano variety in characteristic $p>M(X)$, then $X$ is Frobenius split? For example, does $M = 0$ (this is Question 1) or $M = n$ work?

Note. The $M = n$ case reminds me of the requirement in the theorem of Deligne-Illusie about decompositions of the de Rham complex, that $p$ has to be $>n$.
Exercise: Suppose that $a_k \geq 0$ for $k$ large and that $\sum_{k = 1}^{\infty} \frac{a_k}{k}$ converges. Prove that $$\lim_{j \to \infty}\sum_{k = 1}^{\infty} \frac{a_k}{j+k} = 0$$ Attempt at a proof: Suppose $a_k \geq 0$ for all large $k$ and that $\sum_{k = 1}^{\infty} \frac{a_k}{k}$ converges. Then, given $\epsilon >0$, there is $N \in \mathbb{N}$ such that $\left|\sum_{k = n}^{\infty} \frac{a_k}{k}\right| < \epsilon$ for all $n \geq N$. Let $s_n$ denote the $n$-th partial sum of $\sum_{k=1}^{\infty} \frac{a_k}{j+k}$. Then $\sum_{k = 1}^{\infty} \frac{a_k}{j+k}$ will converge to zero if and only if its partial sums converge to zero as $n$ approaches infinity. Taking the limit, $$\lim_{j \to \infty}\sum_{k = 1}^{\infty} \frac{a_k}{j+k} = \lim_{j \to \infty}\left(\frac{a_1}{j + 1} + \cdots+ \frac{a_n}{j + n} + \cdots\right) = \lim_{j \to \infty}\left(\frac{a_1/j}{1 + 1/j} + \cdots+ \frac{a_n/j}{1 + n/j} +\cdots\right) $$ Can someone please help me finish? I don't know if this is the right way. Any help/hint/suggestion will be really appreciated. Thank you in advance.
Consider a dataset $\mathcal{D}=\{x^{(i)},y^{(i)}:i=1,2,\ldots,N\}$ where $x^{(i)}\in\mathbb{R}^3$ and $y^{(i)}\in\mathbb{R}$ $\forall i$. The goal is to fit a function that best explains our dataset. We could fit a simple function, as we do in linear regression. But neural networks are different: there we fit a more complex function, say: $\begin{align}h(x) & = h(x_1,x_2,x_3)\\& =\sigma(w_{46}\times\sigma(w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4)+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6)\end{align}$ where $\theta = \{w_{14},w_{24},w_{34},b_4,w_{15},w_{25},w_{35},b_5,w_{46},w_{56},b_6\}$ is the set of coefficients we have to determine such that we minimize: $$J(\theta) = \frac{1}{2}\sum_{i=1}^N (y^{(i)}-h(x^{(i)}))^2$$ The above optimization problem can be easily solved with gradient descent. Just initialize $\theta$ with random values and, with a proper learning rate $\eta$, update as follows till convergence: $$\theta:=\theta-\eta\frac{\partial J}{\partial \theta}$$ In order to get the gradients, we express the above function as a neural network as follows: Let's calculate the gradient, say w.r.t. $w_{14}$. $$\frac{\partial J}{\partial w_{14}} = \sum_{i=1}^N \Big[\big(h(x^{(i)})-y^{(i)}\big)\frac{\partial h(x^{(i)})}{\partial w_{14}}\Big]$$ Let $p(x) = w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4$, and let $q(x) = w_{46}\times\sigma(p(x))+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6$. $\therefore \frac{\partial h(x)}{\partial w_{14}} = \frac{\partial h(x)}{\partial q(x)}\times\frac{\partial q(x)}{\partial p(x)}\times\frac{\partial p(x)}{\partial w_{14}} = \frac{\partial\sigma(q(x))}{\partial q(x)}\times\frac{\partial\sigma(p(x))}{\partial p(x)}\times w_{46}\times\frac{\partial p(x)}{\partial w_{14}}$ We see that the derivative of the activation function is important for getting the gradients, and hence for the learning of the neural network. A constant derivative will not help in gradient descent, and we won't be able to learn the optimal parameters.
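To make the chain rule above concrete, here is a small numeric sketch (our own illustration; weight names follow the text, data are random stand-ins) that also checks the analytic gradient against a finite-difference estimate:

import numpy as np

rng = np.random.default_rng(0)
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(20, 3))            # 20 samples, features x1..x3
y = rng.normal(size=20)
th = {k: rng.normal() for k in
      ("w14", "w24", "w34", "b4", "w15", "w25", "w35", "b5", "w46", "w56", "b6")}

def forward(X, th):
    p = X @ np.array([th["w14"], th["w24"], th["w34"]]) + th["b4"]   # p(x)
    r = X @ np.array([th["w15"], th["w25"], th["w35"]]) + th["b5"]
    q = th["w46"] * sigma(p) + th["w56"] * sigma(r) + th["b6"]       # q(x)
    return p, q, sigma(q)                                            # h(x) = sigma(q)

def dJ_dw14(X, y, th):
    p, q, h = forward(X, th)
    # (h - y) * sigma'(q) * w46 * sigma'(p) * x1, summed over samples
    return np.sum((h - y) * h * (1 - h) * th["w46"]
                  * sigma(p) * (1 - sigma(p)) * X[:, 0])

# finite-difference check of the same derivative
eps = 1e-6
thp = dict(th); thp["w14"] += eps
thm = dict(th); thm["w14"] -= eps
Jp = 0.5 * np.sum((y - forward(X, thp)[2]) ** 2)
Jm = 0.5 * np.sum((y - forward(X, thm)[2]) ** 2)
print(dJ_dw14(X, y, th), (Jp - Jm) / (2 * eps))   # the two numbers should match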
Suppose we have a family of compact oriented even dimensional spin manifolds $\{Y_x\}$ parameterized by a compact even dimensional manifold $X$. The $Y_x$'s are all diffeomorphic to some $Y$, of dimension $n$, and fit together to form a fiber bundle $\pi : Z \rightarrow X$ with fiber $Y_x=\pi ^{-1}(x)$. $TZ$ has the subbundle $V:=\text{ker }\pi_*$ which is tangent to the fibers. There may also be a family of coefficient bundles, and we obtain a family of twisted Dirac operators $D_x:\Gamma(S^+_x\otimes E_x)\rightarrow \Gamma (S^-_x\otimes E_x)$. The index of the family gives rise to an element $\text{ind} D \in K(X)$, which is the virtual vector bundle $[\text{ker } D_x]-[\text{coker }D_x]$ when the dimensions of both spaces are constant. Finally, there is a map $\text{H} ^{*}(Z,\mathbb{R})\rightarrow \text{H} ^{*-n}(X,\mathbb{R})$ known as the Gysin homomorphism or integration-over-the-fibers map. We'll use the latter terminology, writing the map as $\int_Y$ and regarding cohomology classes as living in de Rham cohomology. The Atiyah-Singer index theorem gives $$\text{ch }(\text{ind } D)= \int _Y \hat A (V) \text{ch}(E)$$ What general results exist regarding the components of the Chern character of the index bundle, or equivalently the results of the integration-over-the-fibers map, for twisted Dirac operators? To illustrate, an immediate answer is that the zero-degree cohomology (the virtual rank) is the index of the Dirac operator on $Y$. A more interesting answer is that in some cases that might be all one obtains: it is a result of Borel-Hirzebruch that the signature is strictly multiplicative in all bundles where $\pi_1$ of the base acts trivially on the rational cohomology of the fibers. The signature is the index of a certain twisted Dirac operator. If we have a family of these operators such that $Z\rightarrow X$ satisfies the condition involving the fundamental group, then the strict multiplicativity gives $\text{ch}(\text{ind }D)=\int_Y \hat A (V)\text{ch}(E)=\text{sign }(Y)$. A priori one could expect higher degree cohomology classes; it seems interesting that these vanish. If the question is too vague or broad, I would be happy knowing: Are there any instances in which there are known relations between the Chern character of the index bundle and the Chern classes of $X$?
The basic Weyl ordering property generating all the Weyl ordering identities for polynomial functions is: $((sq+tp)^n)_W = (sQ+tP)^n$ where $(q, p)$ are the commuting phase space variables and $(Q, P)$ are the corresponding noncommuting operators (satisfying $[Q,P] = i\hbar $). For example, for $n = 2$, the identity coming from the coefficient of the $st$ term is the known basic Weyl ordering identity: $(qp)_W = \frac{1}{2}(QP+PQ)$ By choosing the classical Hamiltonian as $h(p,q) = (sq+tp)^n$ and carefully performing the Fourier and inverse Fourier transforms, we obtain the Weyl identity: $\int {dx\over2\pi}{dk\over2\pi} e^{ixP + ikQ} \int dpdqe^{-ixp-ikq} (sq+tp)^n =(sQ+tP)^n $ The Fourier integral can be solved after the change of variables $l = sq+tp$, $m = tq-sp$ and using the identity $ \int dl e^{-iul} l^n =2 \pi \frac{\partial^n}{\partial v^n} \delta_D(v)|_{v=u}$ where $ \delta_D$ is the Dirac delta function.
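A quick symbolic check of the n = 2 case (a sketch using sympy's noncommutative symbols; not part of the original derivation):

import sympy as sp

s, t = sp.symbols('s t')                          # commuting scalars
Q, P = sp.symbols('Q P', commutative=False)       # noncommuting operators

expr = sp.expand((s * Q + t * P) ** 2)
print(expr)   # s**2*Q**2 + s*t*Q*P + s*t*P*Q + t**2*P**2
# the s*t part is Q*P + P*Q, i.e. twice the Weyl-ordered (qp)_W = (QP+PQ)/2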
What is the difference between linear regression and logistic regression? When would you use each?

Linear regression uses the general linear equation $Y=b_0+\sum(b_i X_i)+\epsilon$ where $Y$ is a continuous dependent variable and the independent variables $X_i$ are usually continuous (but can also be binary, e.g. when the linear model is used in a t-test) or from other discrete domains. $\epsilon$ is a term for the variance that is not explained by the model and is usually just called "error". Individual dependent values denoted by $Y_j$ can be solved for by modifying the equation a little: $Y_j=b_0 + \sum{(b_i X_{ij})+\epsilon_j}$

Logistic regression is another generalized linear model (GLM) procedure using the same basic formula, but instead of the continuous $Y$, it is regressing for the probability of a categorical outcome. In the simplest form, this means that we're considering just one outcome variable and two states of that variable: either 0 or 1. The equation for the probability of $Y=1$ looks like this: $$ P(Y=1) = {1 \over 1+e^{-(b_0+\sum{(b_iX_i)})}} $$ Your independent variables $X_i$ can be continuous or binary. The regression coefficients $b_i$ can be exponentiated to give you the change in odds of $Y$ per change in $X_i$, i.e., $Odds={P(Y=1) \over P(Y=0)}={P(Y=1) \over 1-P(Y=1)}$ and ${\Delta Odds}= e^{b_i}$. $\Delta Odds$ is called the odds ratio, $Odds(X_i+1)\over Odds(X_i)$. In English, you can say that the odds of $Y=1$ increase by a factor of $e^{b_i}$ per unit change in $X_i$.

Example: If you wanted to see how body mass index predicts blood cholesterol (a continuous measure), you'd use linear regression as described at the top of my answer. If you wanted to see how BMI predicts the odds of being a diabetic (a binary diagnosis), you'd use logistic regression.

Linear regression is used to establish a relationship between dependent and independent variables, which is useful in estimating the resultant dependent variable when the independent variables change. For example: Using a linear regression, the relationship between rain (R) and umbrella sales (U) is found to be U = 2R + 5000. This equation says that for every additional 1 mm of rain, the demand for umbrellas increases by 2 (so, e.g., 1 mm of rain corresponds to a demand of 5002 umbrellas). So, using simple regression, you can estimate the value of your variable.

Logistic regression, on the other hand, is used to ascertain the probability of an event, and this event is captured in binary format, i.e. 0 or 1. Example: I want to ascertain if a customer will buy my product or not. For this, I would run a logistic regression on the (relevant) data and my dependent variable would be a binary variable (1 = Yes; 0 = No). In terms of graphical representation, linear regression gives a linear line as output once the values are plotted on the graph, whereas logistic regression gives an S-shaped curve. Reference from Mohit Khurana.

The differences have been settled by DocBuckets and Pardis, but I want to add one way to compare their performance that was not mentioned. Linear regression is usually solved by minimizing the least squares error of the model on the data, so large errors are penalized quadratically. Logistic regression is just the opposite: using the logistic loss function causes large errors to be penalized by an asymptotically constant amount.
Consider linear regression on categorical {0,1} outcomes to see why this is a problem. If your model predicts the outcome is 38 when the truth is 1, you've lost nothing in classification terms, but linear regression would still try hard to reduce that 38; logistic regression wouldn't (as much).
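A tiny demonstration of that contrast (our own sketch using scikit-learn; the data are made up, with one deliberately extreme x):

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[0.], [1.], [2.], [3.], [4.], [5.], [30.]])  # note the extreme x = 30
y = np.array([0, 0, 0, 1, 1, 1, 1])

lin = LinearRegression().fit(X, y)
log = LogisticRegression().fit(X, y)

print(lin.predict([[30.]]))         # squared loss lets the prediction drift far above 1
print(log.predict_proba([[30.]]))   # stays a probability in [0, 1]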
When I use \left( and \right) to get appropriately sized parentheses for an operator, a space is created between the operator and the parentheses. I think it looks bad. Is it intentional? If so, why? ... When using \newcommand and the like, LaTeX seems to create a new box for the content (I'm probably observing things wrong). For example, I have \newcommand{\p}[1]{\ensuremath{\left(#1\right)}} in my ... What is the difference between ( and \left( in LaTeX? Sometimes, when the content is small, it does not seem to matter which pair I use.What is the best practice when it comes to which parentheses ... I know there has been discussion about this topic, but I haven't seen a working solution.Is it possible to define a macro within LaTeX, such that inside math mode, ( would produce \left( and ) would ... I am in constant search for an alternative to LaTeX, but it seems I am stuck for it for a long time, and I am trying to see the bright side of it.The other day one thought struck me, I think, LaTeX ... First of all I love LaTeX/MathJax; I use it a lot, but there are couple of things I hate about it. Sometimes the code becomes so messy that I almost can't read it without looking at the output, while ... I don't like the fact that I am required to use \abs* when I want the vertical line of the absolute value to automatically re-size. I pretty much always want it to re-size, so would like to swap the ... I'd like to define my own commands to replace \sin, \cos, \tan, etc... I'd like to be able to use \tan[a]{x} to mean \tan^{a}{(x)}, and \tan{x} to mean just \tan{(x)}.I've tried modeling a solution ...
The Joule-Thomson experiment occurs with no change in enthalpy. Suppose that to the left of a porous plug there is a pressure $p_1$ and temperature $T_1$, and $p_2,T_2$ to the right of the plug; as $p_1>p_2$ the gas moves from left to right. The experimental configuration must ensure that the pressures remain constant and that the experiment is performed under adiabatic conditions, so that $q=0$. If a volume $V_1$ of gas moves from left to right, the work done per mole is $W=p_1V_1-p_2V_2$. This is the difference between the work of compression on the left of the plug and the work recovered on expansion on the right. If the gas were ideal then $W=0$, but real gases are not. The gas expansion is also adiabatic, so that no heat leaves or enters; then $q=0$ and the change in internal energy $\Delta U$ is equal to the net work $$\Delta U =U_2-U_1 = p_1V_1-p_2V_2$$ therefore $$U_2+p_2V_2=U_1 + p_1V_1$$ As $H=U+pV$, then $$\Delta H = H_2-H_1=U_2 +p_2V_2-U_1 -p_1V_1 =0$$ The Joule-Thomson coefficient $\mu$ is defined, as you write, as $\left ( \partial T/\partial p \right)_H$, and it measures how much the intermolecular interactions make the gas differ from a perfect gas. Most gases cool when passing from high to low pressure at room temperature. Notes: The coefficient can be rewritten in other forms using $$ \left ( \frac{\partial T}{\partial p} \right)_H \left ( \frac{\partial H}{\partial T} \right)_p \left ( \frac{\partial p}{\partial H} \right)_T =-1$$ so $$ \mu C_p= -\left (\frac{\partial H}{\partial p} \right)_T $$ As $$ dH = C_p\,dT +\left[V-T\left (\frac{\partial V}{\partial T} \right)_p\right]dp$$ then, if $\alpha=(1/V)(\partial V/\partial T)_p$ is the coefficient of expansion, at constant $T$ we have $(\partial H/\partial p)_T= V(1-\alpha T)$, which means that the Joule-Thomson coefficient can be written as $\mu=(V/C_p)(\alpha T-1)$. (This last equation has been used as a way of measuring absolute temperature, because $V$, $\mu$ and $\alpha$ are all measurable quantities.)
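As a quick symbolic cross-check of that last expression (our own sketch, using sympy; not part of the original answer): for an ideal gas $V = RT/p$ one finds $\alpha = 1/T$, hence $\mu = 0$, i.e. no Joule-Thomson effect.

import sympy as sp

T, p, R, Cp = sp.symbols('T p R C_p', positive=True)
V = R * T / p                       # ideal-gas equation of state
alpha = sp.diff(V, T) / V           # coefficient of thermal expansion
mu = (V / Cp) * (alpha * T - 1)     # Joule-Thomson coefficient
print(sp.simplify(alpha))           # 1/T
print(sp.simplify(mu))              # 0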
Ok, I have the following circuit and data (when the subscript is "ef" it means "rms" values): I am asked to determine the parameters of the transformer, $r_1$, $L_{11}$, $L_{22}$ and $L_M$, from the given experimental data. I had no problem extracting data from the open-circuit experiment. Using the fact that the active power is given by $$P=r_1 I_{rms}^2$$ I found $$r_1=10\ \Omega$$ Then applying the induction law in both the primary and the secondary leaves us with: $$u_1(t)=r_1i_1(t)+L_{11}\frac{di_1(t)}{dt}$$ $$u_2(t)=-L_{M}\frac{di_1(t)}{dt}$$ Applying phasor notation and taking the rms values leads us to $$L_M=\frac{U_{2_{rms}}}{\omega I_{1_{rms}} }=31.83\ \mathrm{mH}$$ $$L_{11}=\sqrt{\left(\frac{U_{1_{rms}}^2}{I_{1_{rms}}^2} - r_1^2\right) \frac{1}{\omega^2}}=55.13\ \mathrm{mH}$$ Ok, and there is no more data we can extract from the open-circuit experiment. Passing to the short-circuit experiment, I obtain from the induction law again: $$0=-L_{M}\frac{di_1(t)}{dt}-L_{22}\frac{di_2(t)}{dt}$$ which leads to $$L_{22}=\frac{L_M I_{1_{rms}}}{I_{2_{rms}} }$$ The problem now is that I don't know the value of the rms of current 2 and have no idea how to find it. My guess is that I need to use the reactive power. But how? I know from the complex Poynting theorem: $$P_Q= 2\omega ((W_e)_{av} - (W_m)_{av})$$ But, and this is another question I would like an answer to: how should I apply this formula? For the electric energy, should I take the capacitor? But what voltage value? The same as in the open-circuit experiment? And for the magnetic energy, which inductances should I consider? Do I need to calculate an equivalent circuit? I'm really confused and would appreciate some help. Thanks!
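A minimal numeric rendering of the open-circuit formulas above (our own sketch; every measurement value below is a hypothetical placeholder, since the actual data are not reproduced in the post):

import math

f      = 50.0     # supply frequency, Hz           (hypothetical)
w      = 2 * math.pi * f
P      = 40.0     # active power, W                (hypothetical)
U1_rms = 230.0    # primary rms voltage, V         (hypothetical)
U2_rms = 20.0     # open-circuit secondary rms, V  (hypothetical)
I1_rms = 2.0      # primary rms current, A         (hypothetical)

r1  = P / I1_rms**2                                  # from P = r1 * I1^2
L_M = U2_rms / (w * I1_rms)                          # mutual inductance
L11 = math.sqrt((U1_rms / I1_rms)**2 - r1**2) / w    # primary self-inductance
print(f"r1 = {r1:.2f} ohm, L_M = {L_M*1e3:.2f} mH, L11 = {L11*1e3:.2f} mH")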
SuZex approximation

This article collects approximations of the function SuZex, which is the superfunction of \(\mathrm{zex}(z)=z\exp(z)\). The complex map of SuZex is shown in the figure at right. Below, it is compared to similar maps for various approximations of SuZex with elementary functions. All the maps are supposed to be displayed in the same scale.

Contents

Background

(1) \(~ ~ ~ T(z)=\mathrm{zex}(z) = z\,\exp(z)~\)

The superfunction \(F=\mathrm{SuZex}\) satisfies the transfer equation

(2) \( ~ ~ ~ T(F(z))=F(z\!+\!1)\)

and the additional condition

(3) \( ~ ~ ~ F(0)=1\)

Also, it is assumed that the solution \(F=\mathrm{SuZex}\) decays to the stationary point 0 of the transfer function \(T\) by (1) at infinity, except in some strip along the positive part of the real axis. The superfunction \(F=\mathrm{SuZex}\) is real-holomorphic: \(F(z^*)=F(z)^*\). For real values of the argument, the explicit plot \(y=\mathrm{SuZex}(x)\) is shown in figure 2. The function is positive and increasing along the whole real axis; all its derivatives are also positive. The function rises slowly from zero at minus infinity, passes through the point (0,1) and then shows fast growth, similar to that of the SuperFactorial and that of the tetration (to base \(b\!>\! \exp^2(-1)\)). For the efficient (id est, fast and precise) evaluation of SuZex, various approximations are described below. They are used in the C++ implementation of function SuZex, called by the generators of the figures, in particular, figures 1 and 2.

Taylor expansion at zero

Fig.3. Taylor approximation (4) with 48 terms; \(u\!+\!\mathrm i v= P_{48}(x\!+\!\mathrm i y)\), left; the same and the map of \(u\!+\!\mathrm i v= \mathrm{SuZex}(x\!+\!\mathrm i y)\), center; and the agreement \(A_{48}(x\!+\!\mathrm i y)\), right.

The simplest approximation of any function is, perhaps, the truncated Taylor series, which is, actually, a polynomial. The complex map of such a polynomial of power \(N\!=\!48\) is shown in figure 3,

(4) \( ~ ~ ~ ~\displaystyle \mathrm{SuZex}(z) \approx P_{N}(z)=\sum_{n=0}^{N} \,c_n\, z^n\)

Approximations for the first 17 coefficients \(c\) of the expansion are shown in the table at left. More coefficients are available at SuZexTay0co.cin. The series converges, and increasing the number of terms taken into account extends the range of the approximation. However, due to the fast growth of the function at real values of the argument, practically, the application of the approximation is limited to a circle \(|z|\!<\!2\); for larger values, an enormous number of coefficients would have to be taken into account, and the rounding errors destroy the precision of the approximation. For the evaluation of SuZex of real argument, the polynomial approximation is sufficient; the values of the function can be reconstructed by iterative application of the transfer equation

(5) \( ~ ~ ~ ~ \mathrm{SuZex}(z\!+\!1)=\mathrm{zex}\Big(\mathrm{SuZex}(z)\Big)\)

or of its modification

(6) \( ~ ~ ~ ~ \mathrm{SuZex}(z\!-\!1)=\mathrm{LambertW}\Big(\mathrm{SuZex}(z)\Big)\)

(7) \( ~ ~ ~ ~\displaystyle A_{48}(z)= -\lg \left( \frac {|P_{48}(z) - \mathrm{SuZex}(z)|} {|P_{48}(z)| + |\mathrm{SuZex}(z)|} \right) \)

This agreement indicates how many correct decimal digits the approximation provides at a given \(z=x\!+\!\mathrm i y\); the levels \(A=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\) are drawn. In the central part, the approximation provides of order of 15 decimal digits.
Outside the outer loop, the agreement is smaller than unity, id est, even the first digit of the estimate of SuZex by the approximation \(P_{48}\) is doubtful. This loop shows a slight excess at the positive part of the real axis because of the huge denominator in the definition of \(A_{48}\). The inner part is still sufficient for the implementation of SuZex along the real axis, using the transfer equation; the error due to the approximation is smaller than the rounding error of double precision arithmetic. Perhaps even fewer terms would be sufficient for a professional implementation of this superfunction.

Expansion at infinity

For large values of the argument, SuZex can be approximated using the asymptotic expansion. Let

(9) \(~ ~ ~ \displaystyle Q_N(z)=\frac{1}{z}\, \sum_{n=0}^{N} \, z^{-n}\, \sum_{m=0}^n\, a_{m,n} \ell^m\)

where \(\ell=\ln(-z)\). Then

(10) \(~ ~ ~\mathrm{SuZex}(z) \approx Q_N(x_1\!+\!z)~\)

where \(x_1\!\approx\! -1.1259817765745028~\) is the solution of the equation

(11) \(~ ~ ~\displaystyle \lim_{k \rightarrow \infty} \mathrm{zex}^k\Big(Q_N(x_1-k)\Big) = 1\)

For \(N\!=\!20\), the complex map of approximation (10) is shown in Figure 4. Practically, with \(k\!=\! 20\), the error of evaluation of \(x_1\) becomes of the order of the rounding errors at "double" precision; at \(\Re(z)\!<\!-20\), the last term in the asymptotic expansion is smaller than \(10^{-16}\) and does not contribute to the estimated value. The complex map of \(Q_N(x_1\!+\!z)\) for \(z\!=\!x\!+\!\mathrm i y\) is shown in the figure at right. In the region \(~x\!<\!0~\), \(~x^2\!+\!y^2\!<\!4~\), the approximation through \(Q_{20}\) shows reasonable agreement with the polynomial expansion \(P_{48}\) above. In particular, the same level \(v\!=\!0.4\) looks similar in both figures. However, the corresponding values of the argument are out of the range of high precision of each of the approximations.

Coefficients \(a_{n,m}\) in the expansion:

\(\begin{array}{cccccccccc} ~ ~ n ~ \backslash~ m\!\! & \bf 0 & \bf 1 & \bf 2 & \bf 3 &\\ \!\bf 0&-1& 0 & 0 &0\\ \!\bf 1& 0&1/2&0 &0\\ \!\bf 2& -1/6& 1/4& -1/4 &0\\ \!\bf 3& {-7}/{48}~ & {3}/{8} & -{5}/{16}~ & {1}/{8} \end{array}\)

Below the same table appears as the Mathematica output produced after calculation of the coefficients with the command

TeXForm[Table[Table[If[n >= m, a[n, m], 0], {m, 0, 7}], {n, 0, 7}]]

\(\!\!\!\!\!\!\!\!\!\! ^{ \begin{array}{ccccccccccc} -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{6} & \frac{1}{4} & -\frac{1}{4} & 0 & 0 & 0 & 0 & 0 \\ -\frac{7}{48} & \frac{3}{8} & -\frac{5}{16} & \frac{1}{8} & 0 & 0 & 0 & 0 \\ -\frac{707}{4320} & \frac{23}{48} & -\frac{17}{32} & \frac{13}{48} & -\frac{1}{16} & 0 & 0 & 0 \\ -\frac{1637}{8640} & \frac{1121}{1728} & -\frac{83}{96} & \frac{37}{64} & -\frac{77}{384} & \frac{1}{32} & 0 & 0 \\ -\frac{274133}{1209600} & \frac{15427}{17280} & -\frac{1619}{1152} & \frac{443}{384} & -\frac{205}{384} & \frac{87}{640} & -\frac{1}{64} & 0 \\ -\frac{4024763}{14515200} & \frac{142801}{115200} & -\frac{156559}{69120} & \frac{1915}{864} & -\frac{1307}{1024} & \frac{53}{120} & -\frac{223}{2560} & \frac{1}{128} \end{array}}\)

The precision of the approximation through the asymptotic expansion (9) can be characterized with the agreement

(12) \( ~ ~ ~ \displaystyle B_{N}(z)=- \lg\left( \frac{|Q_N(x_1\!+\!z)-\mathrm{SuZex}(z)|} {|Q_N(x_1\!+\!z)|+|\mathrm{SuZex}(z)|} \right)\)

For \(N\!=\!20\), this function is shown in figure 5.
Outside the loops, the approximation provides at least 15 correct decimal digits. For \(\Re(z)<5\) and \(|z\!+\!2|>8\), the error of the approximation of \(\mathrm{SuZex}(z)\) with \(Q_N(x_1\!+\!z)\) is smaller than the rounding errors at the use of complex double arithmetic.

Taylor expansion at \(z=-12+x_1\)

At the positive part of the real axis, the asymptotic approximation above has a cut line; for moderate values of the real part of the argument, a Taylor expansion is better. In addition, the precise evaluation by direct application of the asymptotic expansion requires \(N\) of order of 20, and needs approximately \(N^2\) (id est, of order of 400) operations. The speed of evaluation can be boosted by an order of magnitude with a Taylor expansion at some point far from the region of fast growth of function SuZex. Such a point is chosen to be \(z_{12}=-12\!+\!x_1\approx -13.1\):

(13) \( ~ ~ ~ \Phi_{12,N}(z)=\) \( \displaystyle \sum_{n=0}^N f_n (z-z_{12})^n=\) \( \displaystyle \sum_{n=0}^N f_n (z+12-x_{1})^n\)

The coefficients \(f\) in (13) were evaluated from the asymptotic approximation, expanding the expression

(14) \(~ ~ ~ \mathrm{zex}^8\Big( Q_{20}(-20\!+\!z) \Big)\)

at small values of \(z\). The complex map of the expansion (13) for \(N\!=\!80\) is shown in figure 6. The point of expansion (approximately \(-13.1259817765745028\)) is a little bit outside the field covered by the map. In order to simplify the comparison with the other pictures, the same scale is used. The constant 20 in expression (14) is chosen for the following reason: while doing the numerical calculation with complex double variables, with 21 terms of expansion (9), the last term does not affect the value \(Q_{20}(z)\) for \(\Re(z)<-20\). An additional displacement of the expansion point to the left would not improve the precision of the evaluation. The precision can be improved, using some patience (to press a key, to have a tea) and long double variables; then it would make sense to get more terms in the asymptotic expansion (9) and use more iterations of function zex in (14). However, 15 digits seem to be more than sufficient to plot all the figures. In particular, the expansion (13) reproduces such properties of function SuZex as its value unity at zero and even the value 2.7.. (approximately \(\mathrm e\)) at unity, although \(\Phi(z)\) is designed to approximate \(\mathrm{SuZex}(z)\) in the vicinity of \(z\!=\!-12\), id est, mainly outside of the field of the map shown. The precision of the approximation of SuZex with function \(\Phi\) by (13) can be characterized with the agreement

(15) \( ~ ~ ~ \displaystyle C_{N}(z)=- \lg\left( \frac{|\Phi_N(z)-\mathrm{SuZex}(z)|} {|\Phi_N(z)|+|\mathrm{SuZex}(z)|} \right)\)

For \(N=80\), the map of agreement by (15) is shown in Figure 7. Inside the loop, approximation (13) gives at least 15 decimal digits, and the deviation in the 16th digit (not plotted) should be attributed to the rounding errors (in particular, in the evaluation of the Taylor coefficients \(f\)), not to the lack of terms in the Taylor polynomial (13).

Numerical implementation

For \(|\Im(z)|\!>\!8\) the expansion \(Q\) is used.
For small values of \(|z|<1.6\), the Taylor expansion (4) at zero is used, truncated at the 96th power of \(z\), id est, \(P_{96}(z)\) is used as the approximation of \(\mathrm{SuZex}(z)\). For positive \(~\Re(z)~\) with \(~|\Im(z)|\!<\!1.5~\), the iteration \(~\mathrm{zex}^n\!\Big(P_{96}(z\!-\!n)\Big)~\) is used as the approximation of \(\mathrm{SuZex}(z)\), with integer \(~n \!\approx\! \Re(z)\). If this is not the case, for \(|z\!+\!12\!+\!x_1|<8.1\), the approximation \(\Phi\) is used; and, for positive \(~\Re(z)~\) with \(~|\Im(z)|\!<\!1.5~\), the iteration \(~\mathrm{zex}^n\!\Big( \Phi(z\!-\!n)\Big)~\) is used as the approximation of \(\mathrm{SuZex}(z)\), with integer \(~n \!\approx\! \Re(z)\). For the rest of the cases, id est, \(\Re(z)<-14\), again, the asymptotic expansion \(Q\) is used. The approximations described above have wide areas of overlap, where at least two or three of the approximations provide at least 15 correct decimal digits. This allows one to use the implementation without any further worries about the error of the approximation. SuZex can then be used like any other special function with known behavior, known asymptotics, and an efficient algorithm for the evaluation available.
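A compact sketch of this strategy for real arguments (our own Python rendering, not the article's C++ code; the Taylor coefficients shown are placeholders except \(c_0=F(0)=1\), the full list being in SuZexTay0co.cin):

import math

# c_0 = SuZex(0) = 1 by condition (3); the remaining entries here are
# placeholder values only -- substitute the coefficients from SuZexTay0co.cin.
c = [1.0, 0.9, 0.4]

def P(z):
    """Horner evaluation of the truncated Taylor series at zero."""
    acc = 0.0
    for cn in reversed(c):
        acc = acc * z + cn
    return acc

def zex(z):
    return z * math.exp(z)

def suzex_real(x):
    """SuZex(x) for real x >= 0: shift the argument into the region where
    the Taylor polynomial is accurate, then push the value back to x by
    iterating the transfer equation SuZex(z+1) = zex(SuZex(z))."""
    n = max(0, int(math.ceil(x)))   # so that x - n lies near zero
    y = P(x - n)
    for _ in range(n):
        y = zex(y)
    return y

print(suzex_real(0.0))  # should be close to SuZex(0) = 1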
ad nauseam. I ran an experiment, just to see whether the idea was a complete dud or not. Before getting into the results, I should manage expectations a bit. Specifically, while it is theoretically possible in some cases that you might find all the Pareto efficient solutions (and this is in fact guaranteed if there is only one), in general you should not bank on it. Here's a trivial case demonstrating that it may be impossible to find them all using my proposed technique.

Suppose we have three yes-no decisions ($N=3$, using the notation from yesterday's post) and two criteria ($M=2$), one to be maximized and one to be minimized. Suppose further that, after changing the sign of the second criterion (so that we are maximizing both) and scaling, all three decisions produce identical criterion values of $(1,-1)$. With $N=3$, there are $2^3=8$ candidate solutions. One (do nothing) produces the result $(0,0)$. Three (do just one thing) produce $(1,-1)$, three (do any two things) produce $(2,-2)$ and one (do all three things) produces $(3,-3)$. All of those are Pareto efficient! However, if we form a weighted combination using weights $(\alpha_1, \alpha_2) \gg 0$ for the criteria, then every decision has the same net impact $\beta = \alpha_1 - \alpha_2$. If $\beta > 0$, the only solution that optimizes the composite function is $x=(1,1,1)$ with value $3\beta$. If $\beta<0$, the only solution that optimizes the composite function is $x=(0,0,0)$ with value $0$. The other six solutions, while Pareto efficient, cannot be found by my method.

With that said, I ran a single test using $N=10$ and $M=5$, with randomly generated criterion values. One test like this is not conclusive, for all sorts of reasons (including but not limited to the fact that I made the five criteria statistically independent of each other). You can see (and try to reproduce) my work in an R notebook hosted on my web site. The code is embedded in the notebook, and you can extract it easily. Fair warning: it's pretty slow. In particular, I enumerated all the Pareto efficient solutions among the $2^{10} = 1,024$ possible solutions, which took about five minutes on my PC. I then tried one million random weight combinations, which took about four and a half minutes. Correction: After I rejiggered the code, generating a million random trials took only 5.8 seconds. The bulk of that 4.5 minutes was apparently spent keeping track of how often each solution appeared among the one million results ... and even that dropped to a second or so after I tightened up the code.

To summarize the results, there were 623 Pareto efficient solutions out of the population of 1,024. The random guessing strategy found only 126 of them. It found the heck out of some of them: the maximum number of times the same solution was identified was 203,690! (I guess that one stuck out like a sore thumb.) Finding 126 out of 623 may not sound too good, but bear in mind the idea is to present a decision maker with a reasonable selection of Pareto efficient solutions. I'm not sure how many decision makers would want to see even 126 choices.

A key question is whether the solutions found by the random heuristic are representative of the Pareto frontier. Presented below are scatter plots of four pairs of criteria, showing all the Pareto efficient solutions color-coded according to whether or not they were identified by the heuristic. (You can see higher resolution versions by clicking on the link above to the notebook, which will open in your browser.)
In all cases, the upper right corner would be ideal. Are the identified points a representative sample of all the Pareto efficient points? I'll let you judge. I'll offer one final thought. The weight vectors are drawn uniformly over a unit hypercube of dimension $M$, and the frequency with which each identified Pareto solution occurs within the sample should be proportional to the volume of the region within that hypercube containing weights favoring that solution. So high frequency solutions have those high frequencies because wider ranges of weights favor them. Perhaps that makes them solutions that are more likely to appeal to decision makers than their low frequency (or undiscovered) counterparts?
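For anyone who wants to replicate the idea without R, here is a rough Python sketch of the experiment (my notebook used R; the sizes, the random criterion values, and the trial count below are just placeholders):

import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 5
impact = rng.normal(size=(N, M))          # impact[i, j]: effect of decision i on criterion j

# Enumerate all 2^N candidate solutions and their criterion vectors.
solutions = np.array([[(i >> k) & 1 for k in range(N)] for i in range(2 ** N)])
values = solutions @ impact               # shape (2^N, M)

found = set()
for _ in range(100_000):                  # random weight combinations
    w = rng.uniform(size=M)               # strictly positive weights over the unit hypercube
    found.add(int(np.argmax(values @ w))) # the solution optimizing this composite objective

print(f"distinct solutions found: {len(found)}")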
Inside a Schwarzschild black hole Hello and welcome Ever wondered what lies beyond the event horizon of a black hole? Current thinking asserts any one of numerous outcomes: time travel, wormholes, being crushed to a point, or instead, perhaps a fiery end in a wall of flame, making it very hard to know what to believe. We offer a somewhat more prosaic answer; not nearly so exciting, but needing no undiscovered extensions to existing theory or Star-Trekian beliefs, and as such, so much more believable. This is a new vision of what lies beyond the event horizon of a black hole. New, and as we will show later, testable, as demonstrated by the otherwise unexplained presence of supermassive black holes. This will never be entirely understood without a smidgen of mathematics, but if you have a basic college-level understanding of mathematics, then there should be nothing overly hard for you to follow. So let us just jump right in. To begin with, here are a couple of basic facts about Einstein's theory of general relativity for any visitors who are new to this field. There is no dispute about these facts so I hope you will just accept them for now: Karl Schwarzschild (1873–1916) The gravitational field around a non-rotating symmetrical body (such as a star, or a planet) is given by the Schwarzschild solution, originally developed by Karl Schwarzschild in 1916, just a year after Einstein announced his general theory of relativity: \[ c^2d\tau ^2=\left(1-\frac {r_s}{r}\right)c^2dt^2 -\left(1-\frac {r_s}{r}\right)^{-1}dr^2-r^2\left(d\theta ^2+\sin ^2\theta \,d\varphi ^2\right) \] The key fact to notice about this equation is the first term, which seems to 'blow up' when \(r=r_s\). This term is what gives rise to the event horizon. Birkhoff's theorem added that for a non-rotating spherically symmetric body, the exterior gravitational field in space must be static, with a metric given by a piece of the Schwarzschild metric. This sounds difficult, but all this is saying is that there is only one solution, the Schwarzschild solution, and that it is unchanging. An immediate consequence of Birkhoff's theorem is that the field inside a non-rotating spherically symmetric shell of matter must be flat, or Minkowski space (the only piece of the Schwarzschild metric possible in this circumstance, as there is no enclosed mass). Knowing just these two undisputed facts, we could, for instance, calculate the precise field at the bottom of a mine shaft -- just calculate the field due to the mass beneath our feet whilst ignoring all of the mass above our heads, and neglecting the effect of the relatively slow rotation of the earth. This much is standard stuff and fully confirmed by experiments here on earth. Now, keeping these same two undisputed facts in mind, consider a large ball of matter, collapsing due to the force of gravity, where the forces involved have already exceeded those needed to halt the collapse at the size of a neutron star. (Such as during the final stage of collapse after a sufficiently large star goes supernova at the end of its active life.) For simplicity, let the ball be spherically symmetric and non-rotating. The collapsing ball of matter, if of sufficient mass, will eventually form a black hole with an event horizon having a radius, \(r_s\), given by this simple equation \[r_s=\frac{2Gm}{c^2}\] where \(r_s\) is the reduced radius of the event horizon, \(G\) is the gravitational constant, the same constant used in the gravity equation of Newton, \(m\) is the total mass enclosed by this event horizon, and \(c\) is the speed of light.
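To get a feel for the numbers, here is a quick one-off Python calculation of \(r_s\) for one solar mass (my own illustration, using standard constant values, not part of the original post):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # one solar mass, kg

r_s = 2 * G * m_sun / c**2
print(f"r_s for one solar mass: {r_s / 1000:.2f} km")   # about 2.95 km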
In the following argument, all radii will be reduced radii. Inside this event horizon the ball of particles will continue to collapse, heading relentlessly towards the origin. So far, we have not deviated in any way from established theories. Agree or disagree, or have any questions or observations about this, and I would love to hear from you, so please get in touch by email, or leave a comment. Your views are always most welcome.
Sep 9th 2008, 07:34 AM # 2 Forum Admin Join Date: Apr 2008 Location: On the dance floor, baby! Posts: 2,856 Originally Posted by jianxu 1. The problem statement, all variables and given/known data A member of a colony on Jupiter is required to salute the UN flag at the same time as it is being done on Earth at noon in New York. If observers in all inertial frames (i.e. any observer traveling at any arbitrary velocity) are to agree that he has performed his duty, how long must he salute for (i.e. seeing you don't know how fast the observer is traveling, what time interval ensures that the raising of the flag and the saluting are simultaneous for all possible speeds of the observer)? (Distance between the planets is approximately 8 x10^6 km; ignore any relative motion of the planets) 2. Relevant equations Lorentz Transformation 3. The attempt at a solution I need help getting started on this. I have no idea what to do at all (partly because of not understanding what the question wants). I'm not sure if I'm supposed to derive an equation, or if there is a numerical answer to this. I'm totally stuck on the thought process portion and cannot/don't know how to translate anything onto paper. I know that the arbitrary observer can have velocity in the range of -c to c, but I don't know if that means I should end up with two solutions which will give us the time interval. So any advice on how to begin this would be greatly appreciated. Thanks I'm off by a negative sign and I'm not sure why. But here it goes. I'm going to make the simplifying assumption that the observers' origin is at the Earth at the time that the salute is supposed to be given. Let's synchronize all the observers' clocks with the Earth clock and let t = 0 be when the salute is supposed to be performed. The saluter has to salute somewhat before this time in order for the signal to be received at t = 0 on Earth, so he/she has to salute at t = -L/c where L is the Earth-Jupiter distance. Now an observer moving at speed v sees the salute at: $\displaystyle t' = \gamma \left ( -\frac{L}{c} - \frac{Lv}{c^2} \right )$ $\displaystyle t' = -\frac{L}{c} \sqrt{\frac{1 + \frac{v}{c}}{1 - \frac{v}{c}}} = -\frac{L}{c} \sqrt{\frac{c + v}{c - v}}$ So as v goes from 0 to c the time t' goes from -L/c to (ahem) minus infinity. Given the result I strongly suspect our poor saluter is going to have to salute forever (and we can logic that out.) I just can't find a way to get rid of the pesky negative sign. -Dan __________________ Do not meddle in the affairs of dragons for you are crunchy and taste good with ketchup.
Defining and manipulating vector equations with cross and dot products Hello, I have been experimenting with Sage to see what it can or can't do. Consider the following simple problem. Show $[ \mathbf{A} \times (\mathbf{B} \times \mathbf{C}) ] + [ \mathbf{B} \times (\mathbf{C} \times \mathbf{A}) ] + [ \mathbf{C} \times (\mathbf{A} \times \mathbf{B}) ] = 0 $ where $\mathbf{A}, \mathbf{B}, \mathbf{C} \in \mathbb{R}^3$. In Sage I can do this in one line eqn = A.cross_product(B.cross_product(C)) + B.cross_product(C.cross_product(A)) + C.cross_product(A.cross_product(B)) where A, B and C are elements of $SR^3$. Now I can show component-wise eqn[0].expand() eqn[1].expand() eqn[2].expand() that it's zero. A much simpler way is to use the identity $\mathbf{A} \times ( \mathbf{B} \times \mathbf{C} ) = \mathbf{B}( \mathbf{A} \cdot \mathbf{C} ) - \mathbf{C}( \mathbf{A} \cdot \mathbf{B} )$ and plug it in. Yet this is easier done by hand than by computer. My question is: can Sage do this? Can I define a vector equation in Sage, and substitute in vector identities to manipulate or simplify the equation? Thanks
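For completeness, here is one way to set up A, B and C so that the one-liner above runs as posted (a sketch; the component names a1..c3 are mine):

a1, a2, a3, b1, b2, b3, c1, c2, c3 = var('a1 a2 a3 b1 b2 b3 c1 c2 c3')
A = vector(SR, [a1, a2, a3])
B = vector(SR, [b1, b2, b3])
C = vector(SR, [c1, c2, c3])
eqn = (A.cross_product(B.cross_product(C))
       + B.cross_product(C.cross_product(A))
       + C.cross_product(A.cross_product(B)))
print([eqn[i].expand() for i in range(3)])   # [0, 0, 0]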
Let's see how this works in an example. Use a calculator to answer the following questions: How many people live in rural areas? How many T.B. patients are H.I.V. positive? How many people had never voted before the 1994 election? Top Teenage T-shirts printed \(\text{120}\) T-shirts. They sold \(\text{72}\) T-shirts immediately. What percentage of the T-shirts were sold? \(\text{72}\) of the \(\text{120}\) T-shirts were sold: \(\text{72} \div \text{120} \times \text{100} = \text{60}\%\). So \(\text{60}\%\) of the T-shirts were sold. Calculate the following without a calculator: \(\text{25}\%\) of \(\text{R}\,\text{124,16}\) \(\text{25}\% = \frac{\text{1}}{\text{4}}\). \(\frac{\text{1}}{\text{4}} \text{ of } \text{R}\,\text{124,16} = \text{R}\,\text{124,16} \div \text{4} = \text{R}\,\text{31,04}\) \(\text{50}\%\) of \(\text{30}\) \(\text{mm}\) \(\text{50}\% = \frac{\text{1}}{\text{2}}\). \(\frac{\text{1}}{\text{2}} \text{ of } \text{30}\text{ mm} = \text{30}\text{ mm} \div \text{2} = \text{15}\text{ mm}\) Use your calculator to calculate: \(\text{15}\%\) of \(\text{R}\,\text{3 500}\) \(\text{R}\,\text{525}\) \(\text{12}\%\) of \(\text{25}\) litres \(\text{3}\) litres \(\text{37,5}\%\) of \(\text{22}\) \(\text{kg}\) \(\text{8,25}\) \(\text{kg}\) \(\text{75}\%\) of \(\text{R}\,\text{16,92}\) \(\text{R}\,\text{12,69}\) \(\text{18}\%\) of \(\text{105}\) \(\text{m}\) \(\text{18,9}\) \(\text{m}\) \(\text{79}\%\) of \(\text{840}\) \(\text{km}\) \(\text{663,6}\) \(\text{km}\) Calculate what percentage the first amount is of the second amount (you may use your calculator): \(\text{25}\%\) \(\text{8,3}\%\) \(\text{70}\%\) \(\text{37,5}\%\) \(\text{90}\%\) \(\text{14,3}\%\) Look at the following extracts from newspaper articles and adverts: The price of a tub of margarine is \(\text{R}\,\text{6,99}\). If the price rises by \(\text{10}\%\), how much will it cost? New price is \(\text{R}\,\text{6,99}\) + \(\text{10}\%\) of \(\text{R}\,\text{6,99}\) = \(\text{R}\,\text{6,99}\) + \(\text{70}\) \(\text{c}\) (rounded off) = \(\text{R}\,\text{7,69}\) OR New price is (\(\text{100}\) + \(\text{10}\))\% of \(\text{R}\,\text{6,99}\) = \(\text{110}\%\) of \(\text{R}\,\text{6,99}=\frac{\text{110}}{\text{100}} \times \frac{\text{6,99}}{\text{1}}= \text{R}\,\text{7,69}\) (rounded off) Top Teenage T-shirts have a \(\text{20}\%\) discount on all T-shirts. If one of their T-shirts originally cost \(\text{R}\,\text{189,90}\), what will you pay for it now? You only pay \(\text{80}\%\) (\(\text{100}\%\) \(-\) \(\text{20}\%\) discount). Thus: \(\frac{\text{80}}{\text{100}} \times \text{189,90} = \text{R}\,\text{151,92}\) OR \(\text{20}\%\) of \(\text{R}\,\text{189,90} = \frac{\text{20}}{\text{100}} \times \text{189,90}\). The discount is thus \(\text{R}\,\text{37,98}\). You pay \(\text{R}\,\text{189,90} - \text{R}\,\text{37,98} = \text{R}\,\text{151,92}\). Look at the pictures below. What is the value of each of the following items, in rands?
\(\text{R}\,\text{239,96} - \text{R}\,\text{59,75} = \text{R}\,\text{180,21}\) \(\text{R}\,\text{299,50} - \text{R}\,\text{44,925} = \text{R}\,\text{254,58}\) (rounded off) \(\text{R}\,\text{9 875} + \text{R}\,\text{790} = \text{R}\,\text{10 665}\) \(\text{R}\,\text{15 995} + \text{R}\,\text{799,75}= \text{R}\,\text{16 794,75}\) Calculate the percentage discount on each of these items: \(\frac{\text{R}\,\text{1 360}}{\text{R}\,\text{1 523}} = \text{89}\%\). So the discount is \(\text{100}\% - \text{89}\% = \text{11}\%\) \(\frac{\text{R}\,\text{527,40}}{\text{R}\,\text{586}} = \text{90}\%\). So the discount is \(\text{100}\% - \text{90}\% = \text{10}\%\)
I'm wondering what the formula is for induced drag in a stalled regime, i.e. in a regime where the $C_L$ (coefficient of lift) has started to decrease but is still nonzero. I have a feeling that the conventional formula for induced drag $$D_i = \frac{L^2}{\frac{1}{2}\rho V^2 \pi b^2 \epsilon}$$ (taken e.g. from this answer), which in essence depends on lift $L$, does not explain induced drag in the stalling regime (if it did, it would erroneously imply that the induced drag is equal for a pair of AoAs $\alpha_{\text{before-stall}} < \alpha_{\text{after-stall}}$ which both correspond to the same $C_L$). I've come across evidence that induced drag continues to grow as the square of AoA even in the stalled regime (e.g. schematic Fig. 4.14 from av8n.com) but can't find more. Thanks.
There is no shortage of idiots and assholes in this world. I see them every day in my mail inbox. The rich Nigerian widow, Idi Amin's grandson, Gaddafi's daughter-in-law, Saddam Husain's son-in-law, are all looking for a "reliable" partner in their business adventures and seek my help in this matter. Of course, they promise to share a fortune with me. The crooks think that I will bite the bait. I have my way of dealing with such idiots. In addition, I use many other tools to discourage such shameless intruders. I avoid scum pits like Whatsapp, where anyone is free to share and forward any muck which they have received from other idiots. We all have to join hands and stop the stink. Zero tolerance is the key to a peaceful world. In case you didn't know: it is not at all difficult to share news/pics without Facebook, Instagram, or Whatsapp. Linux/FOSS gurus are considered social misfits and condemned, since they never use these high-tech tools. No wonder no one wants to talk to them. The proof of the pudding lies in the eating. The proof of the eating lies in the feedback. Proof of the feedback does not exist yet. Do you trust your child with this school? What quality of education are you expecting to give your child? Vedic maths as a business model? Why not try witchcraft? I received today an invitation for a weird event to discuss ways of making money using Vedic mathematics. To make their point more tempting and attractive, they have also added a suggestive picture in the invite. The point is, many gullible and greedy people, particularly parents, are expected to bite this bait and fall into this trap. "Look before you leap" is a very popular idiom. I must add -- "Look, before you make your children leap". Before you sink your money and drown your children, look at some sane advice from a mature teacher of mathematics: https://drpartha.wordpress.com/2013/07/07/15-2013-and-you-still-believe-vedic-maths-is-good-for-your-children/ If making money is your aim, why not try witchcraft? It can fetch you much more money than mathematics and make you rich, faster. Use your common sense, and you be the judge. Adding a plugin to show maths using LaTeX in your wordpress posts is a pain. The worst plugin experience was with WP KaTeX. WP KaTeX: absolutely no documentation to help usage after plugin installation. No mention of limitations or restrictions. No description for the recommended jsDelivr CDN addon. No examples or sample sites or tutorials or support forums. $$\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum\limits_{k=0}^\infty\frac{(4k)!(1103+26390k)}{(k!)^4 (396^{4k})}$$ Can Tamil be written in WordPress? That is what I was asked. Here is my answer: தமிழன் நினைத்தால் எதையும் செய்வான் (a Tamilian who sets his mind to it can do anything). This is a sequel to the opinion-survey/feedback form given in this blog. We would love to get some feedback on the feedback form itself! All items marked with a star (*) are essential, and cannot be skipped. You can also send your opinion and suggestions directly by mail to drpartha@gmail.com Thanks for giving us your opinion and suggestions using the form below: This new contact form was made using the WPForms plugin. You can use this form to send us your comments, issues, queries and suggestions, or send them directly by email to: drpartha@gmail.com Use this opportunity responsibly, and remember to follow these basic rules of decency.
Example 2: What is the probability of choosing a card from a deck of cards that is a heart or a nine? Solution: Let event $A$ be choosing a heart from the deck of cards. As there are $13$ heart cards in a deck of cards, the probability of selecting a heart card would be $P(A)$ = $\frac{n(A)}{n(S)}$ = $\frac{13}{52}$ = $\frac{1}{4}$. Let event $B$ be choosing a nine from a deck of cards. As there are $4$ nines in a deck of cards, the probability of choosing a nine would be $P(B)$ = $\frac{n(B)}{n(S)}$ = $\frac{4}{52}$ = $\frac{1}{13}$. The total number of cards present in a deck of cards is $52$, so the number of elements in the sample space is $n(S)$ = $52$. There is exactly $1$ outcome common to both events, namely the nine of hearts, so the probability of the joint occurrence of event $A$ and event $B$ is $P(A \cap B)$ = $\frac{1}{52}$. Thus, the probability of choosing a card from a deck of cards that is a heart or a nine is $P(A\ or\ B)$ = $P(A) + P(B) - P(A\ and\ B)$ $P(A\ or\ B)$ = $\frac{1}{4}$ $+$ $\frac{1}{13}$ $-$ $\frac{1}{52}$ $P(A\ or\ B)$ = $\frac{16}{52}$ $P(A\ or\ B)$ = $\frac{4}{13}$ Example 3: What is the probability of choosing a number from $1$ to $15$ that is less than $10$ or even? Solution: Let event $A$ be choosing a number from $1$ to $15$ that is less than $10$. So, $A$ = $\{ 1, 2, 3, 4, 5, 6, 7, 8, 9 \}$ and $n(A)$ = $9$. Let event $B$ be choosing an even number from $1$ to $15$. So, $B$ = $\{ 2, 4, 6, 8, 10, 12, 14 \}$ and $n(B)$ = $7$. The outcomes in the sample space are the numbers from $1$ to $15$, so $n(S)$ = $15$. The probability of event $A$ would be $P(A)$ = $\frac{n(A)}{n(S)}$ = $\frac{9}{15}$. Similarly, the probability of event $B$ would be $P(B)$ = $\frac{n(B)}{n(S)}$ = $\frac{7}{15}$. The outcomes common to both events are $A \cap B$ = $\{ 2, 4, 6, 8 \}$, so the probability of the joint occurrence of event $A$ and event $B$ is $P(A \cap B)$ = $\frac{4}{15}$. Thus, the probability of choosing a number from $1$ to $15$ that is less than $10$ or even is $P(A\ or\ B)$ = $P(A) + P(B) - P(A\ and\ B)$ $P(A\ or\ B)$ = $\frac{9}{15}$ $+$ $\frac{7}{15}$ $-$ $\frac{4}{15}$ $P(A\ or\ B)$ = $\frac{12}{15}$ $P(A\ or\ B)$ = $\frac{4}{5}$ Example 4: $2$ fair dice are rolled. What is the probability of getting a sum less than $8$ or a sum equal to $9$? Solution: Let us first draw the sum table when two fair dice are rolled. Let event $A$ be getting a sum less than $8$ on rolling two fair dice. In the table above those sums are shaded in brick red, so $n(A)$ = $21$. Let event $B$ be getting a sum equal to $9$ on rolling two fair dice. In the table above those sums are shaded in green, so $n(B)$ = $4$. The number of outcomes in the sample space is the total number of ways two fair dice can land, $n(S)$ = $36$. The probability of event $A$ would be $P(A)$ = $\frac{n(A)}{n(S)}$ = $\frac{21}{36}$. Similarly, the probability of event $B$ would be $P(B)$ = $\frac{n(B)}{n(S)}$ = $\frac{4}{36}$. As there are no elements in common, $A \cap B$ = $\emptyset$ and $P(A \cap B)$ = $0$. In this situation the events are said to be mutually exclusive events. Thus, the probability of getting a sum less than $8$ or a sum equal to $9$ is $P(A \cup B)$ = $P(A) + P(B) - P(A \cap B)$ $P(A \cup B)$ = $\frac{21}{36}$ $+$ $\frac{4}{36}$ $-$ $0$ $P(A \cup B)$ = $\frac{25}{36}$ Example 5: What is the probability of getting a sum less than $5$ or a sum less than $4$ when two fair dice are rolled?
Solution: Let us first draw the sum table when two fair dice are rolled. Let event $A$ be getting a sum less than $5$ on rolling two fair dice. In the table above those sums are shaded in purple, so $n(A)$ = $6$. Let event $B$ be getting a sum less than $4$ on rolling two fair dice. In the table above those sums are shaded in pink, so $n(B)$ = $3$. The number of outcomes in the sample space is the total number of ways two fair dice can land, $n(S)$ = $36$. The probability of event $A$ would be $P(A)$ = $\frac{n(A)}{n(S)}$ = $\frac{6}{36}$. Similarly, the probability of event $B$ would be $P(B)$ = $\frac{n(B)}{n(S)}$ = $\frac{3}{36}$. The outcomes common to both events are the three rolls with sum less than $4$ (their sums being $2$, $3$ and $3$), so the probability of the joint occurrence of event $A$ and event $B$ is $P(A \cap B)$ = $\frac{3}{36}$. Thus, the probability of getting a sum less than $5$ or a sum less than $4$ is $P(A\ or\ B)$ = $P(A) + P(B) - P(A\ and\ B)$ $P(A\ or\ B)$ = $\frac{6}{36}$ $+$ $\frac{3}{36}$ $-$ $\frac{3}{36}$ $P(A\ or\ B)$ = $\frac{6}{36}$ $P(A\ or\ B)$ = $\frac{1}{6}$
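These answers are easy to double-check by brute force; the short Python sketch below (a side note, not part of the lesson, with a helper prob of our own) enumerates all $36$ rolls and recomputes the probabilities of Example 5:

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 equally likely rolls

def prob(event):
    hits = sum(1 for roll in outcomes if event(sum(roll)))
    return Fraction(hits, len(outcomes))

print(prob(lambda s: s < 5))              # 1/6  (= 6/36), P(A)
print(prob(lambda s: s < 4))              # 1/12 (= 3/36), P(B)
print(prob(lambda s: s < 5 or s < 4))     # 1/6, P(A or B), as computed above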
For general discussion about Conway's Game of Life. "Very", "extra", etc. are adverbs that modify "long" (and each other), and can't directly modify "boat". They can only be used directly as adjectives in phrases like "This collision makes the object I want, plus an extra boat" or "That glider eliminated the very boat that was causing problems!" Small Life patterns were originally assigned arbitrary mnemonic names, which were adequately unique and descriptive for small patterns, but that nomenclature gets more and more strained the larger and more complicated patterns get. It also becomes more and more futile as patterns get larger, as there are exponentially more of them at each given size. In Life's early days, there were unique names for all still-lifes up to 7 bits, about half of the 8-bit ones, and only one 9-bit one. In my pattern collections, I've tried to extrapolate meaningful names for small lists of objects (up to around a hundred objects or so), with mixed results, but after a point, using long chains of adjectives to describe a feature becomes tedious. Contractions like "long^5 boat" or "15-bit boat" or "length-10 snake" make much more sense after a point. mniemiec wrote:Contractions like "long^5 boat" or "15-bit boat" or "length-10 snake" make much more sense after a point. I would prefer "long^5 boat" to "very very very very long boat", personally. EDIT: Also, Catagolue puts this in PATHOLOGICAL: Code: Select all
x = 4, y = 8, rule = B3/S2-i34q
2bo$bobo2$b3o3$bo$3o!
"A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane 77topaz wrote:That's a good point. Who decided that the LifeWiki should get rid of the long^5 format names and replace them with the less adaptable adjectival names like "terribly long boat", anyway? Well, guess who... yup, you're right. I remember a discussion before that, though. In any case I am not a fan; I think that the exponent notation is far more concise. Can you offhand remember which one is called 'amazingly long boat'? Guess what, it's none of them, but nobody reading this knew that. she/they // "I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D. Please stop using my full name. Refer to me as dani.
On the contrary, "amazingly long boat" is long^37, if you follow the footnotes for Long on the LifeWiki... which I definitely hope nobody does, since that thread has all the hallmarks of a "naming frenzy". People inevitably seem to get into naming frenzies every so often, and then hopefully come to regret them later.danny wrote:I remember a discussion before that, though. In any case I am not a fan, I think that the exponent notation is far more concise. Can you offhand remember which one is called 'amazingly long boat'? Guess what, it's none of them, but nobody reading this knew that. Me, I was just really happy to be able to go in and delete all the over-long pattern names in the LifeWiki collection that were messing up columnar lists, like Very_very_very_very_very_very_very_long_boat... so I didn't worry too much about whether the replacement names were really a good idea. They were a huge improvement if nothing else. I believe muzik added those very very very long names also, but it was some time before the most recent cleanup project. And just by the way, that 12-bit still life project was a whole heck of a lot of work on muzik's part, and it did definitely succeed in cleaning up a lot of things... though sometimes just by drawing attention to the problems, so Ian07's follow-up work also definitely deserves a good round of applause. I remember from the very old days, there were a small number of patterns that had qualifer names (e.g. long boat, long snake=python) but there was no real consistent nomenclature beyond that. Other than a few notable still-lifes (like paperclip), there generally wasn't even a standard nomenclature for still-lifes above 8 bits. When I started to systematically categorize pseudo-objects, it was much easier to use symbolic names (rather than empirical ones like 14.123 or 14P1.123), so I needed a way to concisely name the pieces involved. This meant coming up with ad-hoc names for objects up to 12 bits. I knew that the system would not hold up well much beyond that point, but it was never intended to. Also, as history tends to show, whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be), with the lack of any better nomenclature, that name tends to stick. When I started to systematically categorize pseudo-objects, it was much easier to use symbolic names (rather than empirical ones like 14.123 or 14P1.123), so I needed a way to concisely name the pieces involved. This meant coming up with ad-hoc names for objects up to 12 bits. I knew that the system would not hold up well much beyond that point, but it was never intended to. Also, as history tends to show, whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be), with the lack of any better nomenclature, that name tends to stick. coughjolsoncoughrunnynosecoughmniemiec wrote:whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be) Hard agree.77topaz wrote:I think "long^3 boat" would be the best naming scheme/format for these pages. What do others think? Another somewhat controversial opinion I have is that everything above a certain number doesn't really deserve a page, but let's go one step at a time. she/they // "I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D. Please stop using my full name. Refer to me as dani. "I'm always on duty, even when I'm off duty." -Cody Kolodziejzyk, Ph.D. Fine by me. 
77topaz wrote:I think "long^3 boat" would be the best naming scheme/format for these pages. What do others think? Along with this, the LifeWiki needs standard pnames -- lowercase alphanumeric-only names for the RLE and plaintext pattern files. The one good thing about the Arbitrary Adjective naming system was that it avoided the whole '"long<sup>10</sup> boat" / "Long%5E10_boat"' mess and produced decent-looking pnames. I think "long3boat" is fine for a pname, though. I think the only Arbitrary Adjectival pname that I'm personally responsible for is "abominably long boat", which I pretty much made up in desperation to get rid of an even more abominable "very very very very..." name. I've now removed abominablylongboat.cells and abominablylongboat.rle from the server, uploaded long10boat.cells and long10boat.rle instead, and fixed the pname in the article. If other pnames can be patched up to be consistent with this, and if someone can keep a list of all the RLE:arbitraryadjectivelongsomething pages that get moved to RLE:veryNlongsomething, then I can go through at some point and delete all the arbitraryadjectivelongsomething.cells/.rle pattern files from the server, and the auto-upload script will take care of the rest. Maybe put the list, and any further discussion on this topic, on the Tiki Bar or someone's LifeWiki user page? Moosey Posts:2483 Joined:January 27th, 2019, 5:54 pm Location:A house, or perhaps the OCA board. Contact: I feel that the arbitrary adjective name should be kept as an alternative name, as in: Long^103 doorjamb (or uselessly long doorjamb) is the long^103 equivalent of the doorjamb. Minor, but prodigal is lowercase on Catagolue. I and wildmyron manage the 5S project, which collects all known spaceship speeds in Isotropic Non-totalistic rules. Things to work on: - Find a (7,1)c/8 ship in a Non-totalistic rule - Finish a rule with ships with period >= f_e_0(n) (in progress) https://catagolue.appspot.com/object/xp15_4R4Z4R4/b3s23 https://catagolue.appspot.com/census/b3 ... ?offset=-2 $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Sometimes when I tried to view not-yet-searched censuses, it says "No one has investigated this, investigate it yourself". But sometimes I just get an empty symmetry list. This post was brought to you by the Element of Magic. Plz correct my grammar mistakes. I'm still studying English. Working on: Nothing. Favorite gun ever: Code: Select all
#C Favorite Gun. Found by me.
x = 4, y = 6, rule = B2e3i4at/S1c23cijn4a
o2bo$4o3$4o$o2bo!
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Hunting wrote:Sometimes when I tried to view not-yet-searched censuses, it says "No one has investigated this, investigate it yourself". But sometimes I just get an empty symmetry list. As far as I can tell, the "It appears that no-one has yet investigated this combination of rule and symmetry options.™" message only appears if you also specify a symmetry. If you don't specify a symmetry (only a rule), you will get said empty symmetry list. Airy Clave White It Nay (Check gen 2) Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
Moosey Posts:2483 Joined:January 27th, 2019, 5:54 pm Location:A house, or perhaps the OCA board. Contact: Here's an oddity: https://catagolue.appspot.com/object/xs ... jns23-ckqy It seems to use a different lifeviewer theme... the "inverse" theme. EDIT: Holy cow, it's EVERYWHERE on Catagolue! EDIT: Oh. "A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane Hdjensofjfnen wrote:This: https://catagolue.appspot.com/object/xp ... v0rr/b3s23 What's odd about that? It's a perfectly legitimate P2 oscillator. mniemiec wrote:What's odd about that? It's a perfectly legitimate P2 oscillator. But it is called ":D". This post was brought to you by the Element of Magic. Plz correct my grammar mistakes. I'm still studying English. Working on: Nothing. Favorite gun ever: Code: Select all
#C Favorite Gun. Found by me.
x = 4, y = 6, rule = B2e3i4at/S1c23cijn4a
o2bo$4o3$4o$o2bo!
Figure eight on pentadecathlon does not appear in the large objects section of the statistics page despite having a maximum population of 66. I'm guessing this is because its period is too high for Catagolue to calculate it. Wiki: http://www.conwaylife.com/wiki/User:Ian07 Discord: Ian07#6028 It's there now, perhaps the page hadn't been updated yet? wildmyron wrote: It's there now, perhaps the page hadn't been updated yet? I still don't see it. Are you sure you're looking at the "Large objects" section rather than the "Naturally-occurring high-period oscillators" section? Wiki: http://www.conwaylife.com/wiki/User:Ian07 Discord: Ian07#6028 Ian07 wrote:I still don't see it. Are you sure you're looking at the "Large objects" section rather than the "Naturally-occurring high-period oscillators" section? Don't mind me, can't read properly. Sorry. wildmyron wrote:Don't mind me, can't read properly. Sorry. I'm confirming this odd bug. Maybe Catagolue just needs some time for Adam P. Goucher to add xp120 to the list of objects that Catagolue considers "large". (e.g. the count doesn't consider still-life bins below cloverleaf interchange, since that would waste time)
EDIT: This looks fishy. Very fishy. http://catagolue.appspot.com/census/b345s4567/iC1 EDIT: By the way, the extremely high period of the new xp120 makes it the first object other than linear-growth patterns to break the preview. "A man said to the universe: 'Sir, I exist!' 'However,' replied the universe, 'The fact has not created in me A sense of obligation.'" -Stephen Crane I'm fairly certain this explanation is correct - Catagolue only displays population statistics (or, @Hdjen, animated GIFs) for objects of period <100 (or sometimes ≤100 - I think the threshold isn't always consistent across functions). So, the system doesn't know the xp120 belongs in that section.
We study $L^p\times L^q\rightarrow L^r$ bounds for the bilinear Bochner–Riesz operator $\mathcal {B}^\alpha$, $\alpha >0$, in ${\mathbb {R}}^d$, $d\ge 2$, which is defined by [Equation not available: see fulltext.] We make use of a decomposition which relates the estimates for $\mathcal {B}^\alpha$ to the square function estimates for the classical Bochner–Riesz operators. In consequence, we significantly improve the previously known bounds. Mathematische Annalen – Springer Journals. Published: May 30, 2018.
Given any triangle $T$ there is a triangle $T'$ similar to $T$ such that the area of $T'$ is the same as the perimeter of $T'$. Suppose the perimeter of $T$ is $P$ and the area of $T$ is $A$. Then dilate $T$ by a factor of $\frac{P}{A}$ to produce the triangle $T'$. (That is, multiply all the lengths by $\frac{P}{A}$.) The perimeter of $T'$ will be the perimeter of $T$ multiplied by $\frac{P}{A}$ to give:$$\text{Perimeter of } T' = \frac{P}{A}\cdot P = \frac{P^2}{A}$$The area of $T'$ will be the area of $T$ multiplied by $(\frac{P}{A})^2$ to give:$$\text{Area of } T' = \left(\frac{P}{A}\right)^2 A = \frac{P^2}{A}$$ For example, an equilateral triangle with all edges equal to 1 has area $\sqrt3/4$ and perimeter 3. The equilateral triangle with perimeter equal to its area is dilated by $3 \div (\sqrt3/4) = 12/ \sqrt3$, so its side lengths are all $12/\sqrt3$. Another example: a 3-4-5 triangle has perimeter 12 and area 6. The similar triangle with the area equal to its perimeter is dilated by $12/6 = 2$. Thus the new triangle is a 6-8-10 triangle and has area and perimeter 24. A final example: a right-angled triangle with short sides $a$ and $b$ has perimeter $P = a+ b+ \sqrt{a^2+b^2}$ and area $A = \frac12 ab$. Dilate this by $P/A$ to give sides of:$$\begin{align}a \cdot \frac{a + b + \sqrt{a^2 + b^2}}{\frac12 ab} &= 2\left( \frac{a}{b} + 1 + \sqrt{\left(\frac{a}{b}\right)^2 + 1}\right)\\\text{and } \quad b \cdot \frac{a + b + \sqrt{a^2 + b^2}}{\frac12 ab} &= 2\left( \frac{b}{a} + 1 + \sqrt{\left(\frac{b}{a}\right)^2 + 1}\right)\\\text{and }\quad\sqrt{a^2+b^2} \cdot \frac{a + b + \sqrt{a^2 + b^2}}{\frac12 ab} &= 2\left(\sqrt{\left(\frac{a}{b}\right)^2 + 1} + \sqrt{\left(\frac{b}{a}\right)^2 + 1} + \frac{a}{b} + \frac{b}{a}\right)\end{align}$$
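A quick numeric check of the 3-4-5 example (our own illustration of the dilation argument above):

a, b, c = 3, 4, 5
P = a + b + c            # perimeter, 12
A = a * b / 2            # area of a right triangle, 6
k = P / A                # dilation factor, 2
sides = [k * a, k * b, k * c]        # the 6-8-10 triangle
print(sum(sides))                    # 24.0 (perimeter)
print(sides[0] * sides[1] / 2)       # 24.0 (area)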
Video Transcript Graphing Functions Stephanie Gawking is a math enthusiast, and her hobby is astronomy. From her backyard, she gazes through her telescope and dreams of discovering a new celestial body. She sees a shooting star - which is a small, fast meteor. Although she knows the average speed of a shooting star is 30,000 miles per hour, she wonders about its path and how far it'll travel in a given amount of time. What is a function? To show the relationship between distance and time, we can look at a graph; the path of the star MAY be the graph of a function, but how do we know for sure? A function is a special relationship between two variables; in this case, the variables are the distance and the time. For each minute that passes, the star travels to a new location in the sky. If the graph of the star's path is a function, then for every input, time, there is a unique output, the location or the distance traveled. Let's take a look at the function f(x) = 2x + 8. Notice that we used the function notation, f(x); this is just a fancy way of representing 'y'. Graphing a function Okay, let's graph the function. It's already written in slope-intercept form, y = mx + b. The y-intercept is equal to 8, and the slope, or rise over run, is equal to 2. OR, you can write the values for 'x' and 'y' in a table. For instance, when x = 0, y = 8. When x = 1, y = 10, and so on. Next, plot a few points and connect the points on the line. We know the graph displays a function because each 'x' has only one 'y'. To double-check that the graph is a function, we can also do a vertical line test. Draw in several vertical lines; if the lines touch the graph in only one place, then the graph is a function. If any line touches the graph in more than one place, it's not a function - it's that simple! Graphing Parabolas Let's graph y = x². To do this, we can create a function table and calculate a few points, then graph. If x = -2, then y = 4. If x = -1, y = 1, and so on. Notice the shape of this function. This distinctive u-shape is called a parabola. When you have a quadratic equation, the graph is always a parabola. How do we know if a quadratic equation is a function? For each x, there is only one y, and the graph passes the vertical line test. Vertical Line Test Looking through her telescope, Stephanie sees a constellation. It's so curvy; is it a function? Let's use the vertical line test to see if it is. Oh look! It passes the test! So this graph is also a function - for each input, 'x', there is one output, 'y'. Here's another awesome constellation, but does its graph form a function? Because the graph passes the vertical line test, it sure does! And, what about this one? It's u-shaped, but it's turned sideways. It fails the vertical line test, so no, it's not a function. For each input, there is more than one output. Whoa nelly! This one looks like a circle. Is it a function? It fails the test, so no way. Stephanie adjusts her telescope. Holy moly! Stop the presses! What's that? She thinks she's finally discovered a new celestial body. It's a dream come true. Wait, is that a firefly? Graphing Functions Exercise Would you like to apply what you have learned? With the exercises for the video Graphing Functions you can review and practice it. Describe how the given graph represents a function. Tips We have that $f(x)=2x+8$ is a function, as well as $f(x)=8x+2$.
A linear function in slope-intercept form is given by $f(x)=mx+b$, where $m$ is the slope while $b$ is the y-intercept. Every function is a relation, but not every relation is a function. Solution Functions are very special relations between variables. In particular, for us, they are relations between two variables, say $x$ and $y$, where every $x$ is related to at most one $y$. For example, $f(x)=2x+8$ is a linear function with the slope $2$ and the y-intercept $8$; the corresponding graph is a line. Given just a graph, how can we check if it represents a function? Well, we can use the vertical line test! This says that a graph represents a function if any vertical line has at most one point in common with the graph. By drawing vertical lines over the graph of $f(x)=2x+8$, we can see that it passes the vertical line test and thus represents a function! Explain how to draw the graph of the function $y=2x+8$. Tips Here you see how to plot the point $(3,6)$ in a coordinate system. You can write a linear function as $f(x)=2x+8$, as well as $y=2x+8$. The $y$ corresponding to $x=3$ is $y=2\times 3+8=6+8=14$. Solution To draw the graph of a function, first we need to draw a coordinate system with a horizontal line, the $x$-axis, and a vertical one, the $y$-axis. You can either draw the line corresponding to a linear function by plotting the $y$-intercept and then using the slope to determine the other points on the line (remember: the slope is given by rise over run), or you can figure out a function table of $(x,y)$ pairs. To get these pairs, plug each $x$ value into the function; for example, the $y$ coordinate at $x=0$ is $f(0)=2\times 0+8=8$. So the pair we get is $(0,8)$. From the function table, the graph of the function can be drawn by plotting the points from the function table on the coordinate system and drawing a line connecting all of the points. Decide which graphs represent functions. Tips Functions are special relations between variables. In particular, for us, they are relations between two variables, say $x$ and $y$, where every $x$ is related to at most one $y$. Use the vertical line test: a graph represents a function if any vertical line has at most one point in common with the graph. Solution To check if a graph represents a function, you can use the vertical line test. The vertical line test says that a graph represents a function if any vertical line has at most one point in common with the graph. For example, $x^2+y^2=16$ is a circle with radius $4$. This equation is not a function, as its graph does not pass the vertical line test. We can see that the graph of the sideways parabola $x=y^2$ (i.e. the bottom right-most picture) also does not pass the vertical line test and thus does not represent a function. All the other graphs represent functions, as they pass the vertical line test. Match the graph with its corresponding function. Tips A line is the graph of a linear function. A linear function in slope-intercept form is given by $y=mx+b$, where $m$ is the slope and $b$ the y-intercept. The graph of a quadratic function is a parabola. If you know the vertex $(v_x, v_y)$ of a parabola you can write the corresponding equation as $y=a(x-v_x)^2+v_y$. Solution Here you have two lines and two parabolas. We first want to remember that: A line corresponds to a linear function. A parabola corresponds to a quadratic function. Let's start with the lines: The line on the left has the y-intercept $5$, as does the one on the right. But what is the difference? It's the slope!
The line on the left has a negative slope while the line on the right has a positive slope. You get the slope using "rise over run": for the left line we have a rise of $-5$ and a run of $2$. So the corresponding function is $y=-\frac52x+5$. The rise of the right line is $5$ and the run is $2$. This gives us the function $y=\frac52x+5$. Now for the parabolas: The left parabola intercepts the $y$-axis at $(0,-4)$. So the left parabola is the parabola $y=x^2$ shifted down by $4$, and the corresponding function is $y=x^2-4$. The right parabola intercepts the $y$-axis at $(0,1)$. So the right parabola is the parabola $y=x^2$ shifted up by $1$, and the corresponding function is $y=x^2+1$. Complete the function table for the function $y=2x^2+3$. Tips For each $x$ you can calculate the corresponding $y$ by plugging $x$ into the function. When $x=4$, we have that $y=2(4)^2+3=2\times 16+3=32+3=35$. Pay attention to the sign of $x$. A negative number squared is a positive number. Solution Here you see the resulting graph as well as the three points $(0,3)$, $(-1,5)$, and $(1,5)$. $~$ Any point lying on the graph is given by $(x,y)$, where $y=2x^2+3$. $~$ We can construct a function table by taking some $x$-values and calculating their corresponding $y$-values: For $x=-3$ we get $y=2(-3)^2+3=2\times 9+3=18+3=21$ For $x=-2$ we get $y=2(-2)^2+3=2\times 4+3=8+3=11$ For $x=-1$ we get $y=2(-1)^2+3=2\times 1+3=2+3=5$ For $x=0$ we get $y=2(0)^2+3=2\times 0+3=0+3=3$ For $x=1$ we get $y=2(1)^2+3=2\times 1+3=2+3=5$ For $x=2$ we get $y=2(2)^2+3=2\times 4+3=8+3=11$ For $x=3$ we get $y=2(3)^2+3=2\times 9+3=18+3=21$ Decide which stars belong to the graph of the function. Tips You can check each star by plugging its $x$-coordinate into the function. The resulting $y$-value must match the $y$-coordinate of that star for that star to be part of the constellation. Plugging $x=0$ into the function, we get $y=\frac{1}{10}(0)^2=0$. So we can see that the star at $(0,0)$ belongs to the graph. The star at $(-4,1)$ does not belong to the graph since $y=f(-4)=\frac{1}{10}(-4)^2=1.6$, not $1$. Solution To check if a star lies in our constellation, we need to plug its $x$-coordinate into the function $f(x)=\frac{1}{10} x^2$ and see if the resulting $y$-coordinate is indeed the $y$-coordinate of that star. We can construct a function table to help figure out which stars are in the constellation: $\begin{array}{c|r|r|r|r|r|r|r|r|r} x &-10 & -9 & -8 & -7 & -6 & - 5& -4 & -3 & -2 & -1 & 0 \\ \hline y & 10 & 8.1 & 6.4 & 4.9 & 3.6 & 2.5 & 1.6 & 0.9 & 0.4 & 0.1 & 0 \end{array}$ $\begin{array}{c|r|r|r|r|r|r|r|r|r} x & 0 & ~~~1 &~~~ 2 & ~~~3 &~~~4 & ~~~5& ~~~6 & ~~~7 & ~~~8 & ~~~9 & ~~~10 \\ \hline y & 0 & 0.1 & 0.4 & 0.9 & 1.6 & 2.5 & 3.6 & 4.9 & 6.4 & 8.1 & 10 \end{array}$ Graphing the points in this table, we see that the stars which belong to the constellation are: $(0,0),(10,10),(-10,10),(4,1.6),(6,3.6)$, and $(-5,2.5)$.
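The function-table approach above is easy to try on a computer as well; here is a short Python sketch (our own illustration, not part of the lesson) that tabulates and plots the two examples, y = 2x + 8 and y = x²:

import numpy as np
import matplotlib.pyplot as plt

xs = np.arange(-4, 5)                          # a small function table
print([(x, 2 * x + 8) for x in xs])            # (x, y) pairs for the line
print([(x, x**2) for x in xs])                 # (x, y) pairs for the parabola

x = np.linspace(-4, 4, 200)
plt.plot(x, 2 * x + 8, label="y = 2x + 8")
plt.plot(x, x**2, label="y = x^2")
plt.axhline(0, color="gray"); plt.axvline(0, color="gray")
plt.legend()
plt.show()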
Some background: leukemia is a cancer of the immune system, and so it comes from the hematopoietic system. Thus, it would be useful to look into hematopoietic stem cells (HSCs). Note that, in general, the (semantic) hierarchy of stem cells goes: totipotent, pluripotent, and multipotent, in order of decreasing generality (i.e. more differentiated, or more mature as a cell, in other words). Maybe reading about stem cells in general would be useful too. HSCs have two subtypes: one which can indefinitely renew itself, and one which has a timed lifespan (i.e. it stops being capable of generating new cells of different types after a while). It is interesting to note that leukemias have an analog of these, which is what is being referenced, presumably, in the paper. See Normal and leukemic hematopoiesis: Are leukemias a stem cell disorder or a reacquisition of stem cell characteristics? for a bit more on this. Multipotent progenitor cells (MPCs) are also capable of generating new cells, but are capable of less variety than pluripotent stem cells. They can "choose" to become a more differentiated cell (i.e. choose a lineage). They also cannot self-replicate (i.e. divide and create new multipotent progenitors) indefinitely, though they can usually do it for some number of steps. In other words, an MPC will generate a finite number of cells, whereas a true stem cell can divide indefinitely, given the resources. The Goldie-Coldman law (also called hypothesis) can be summarized via: This hypothesis predicts that the tumor cells develop resistant phenotypic variations towards chemotherapeutic agents. This resistance is independent of the chemotherapeutic agent and dependent on the number of cell divisions that occur after treatment begins. The larger the tumor size or the longer the delay in initiating chemotherapy, the more resistant cells. The implication is that a regimen of alternating cycles of two different non-cross-resistant chemotherapy medicines yields a better chance of tumor eradication. (From here). In other words, it is a model for how cancers become resistant to drugs. See also Improving Cancer Treatment via Mathematical Modeling: Surmounting the Challenges is Worth the Effort. Ok, onto the second paragraph. I will be very informal here :) Let $Q$ denote the density of the stem-cell population. Within a cell population (cancer or otherwise), often there is a small subpopulation of stem cells (the rest of the cells are differentiated cells). So if there are $N$ cells in total, roughly $S=QN$ will be the number of stem cells present. A percentage $\eta_1$ of these ones are supposed to undergo asymmetric division with one daughter cell identical to the mother cell and the other one committed to differentiation. Within the population of stem cells (which are a subpopulation of cells), there will be further subcategories with different reproductive behaviours. Notice there are really only 3 options: a stem cell becomes (a) two stem cells, (b) two differentiated cells, or (c) one stem and one differentiated cell. This sentence means that roughly $s_1 = \eta_1 S$ cells will divide in the asymmetric pattern given above. A percentage $\eta_2$ of the stem-like population differentiate symmetrically and go to the line of mature cells. This means the number of symmetric divider cells is roughly $s_2 = \eta_2 S$ (both daughter cells will be differentiated). The rest of them, a percentage $p = 1−\eta_1−\eta_2$, is supposed to self-renew: both cells that result from mitosis are identical to the cell that entered the cell cycle.
Finally, around $s_3=pS$ cells become stem cells again, i.e. division type (a). The same duration $\tau$ of the cell cycle is supposed for all types of division. In order to divide, every cell has to undergo a complex process called the cell cycle. This takes time; in other words, it upper-bounds/limits the rate at which cells can divide. Presumably, different cells take different lengths of time to divide. The assumption above ignores this issue. Edit (062018): let's take a look at the equation$$\tilde{r}(P) = r_1(P)\frac{x_0 - R_0}{x_0^{-p+1}} \;\text{ with }\;r_1(P) = \frac{P^m}{P^m + P^m_0}$$where $Q$ is the density of the stem cell population, $P$ is the amount of plasmatic drug, $P_0$ is the half-maximum activity concentration, $m$ is the Hill coefficient, $x_0$ is the number of infected cells, $R_0$ is the number of resistant cells, $p$ is the probability of mutation, and $q=-p\in (-1,0]$. From a high-level perspective, we need to account for three effects of treatment: (1) the building of the cancer's resistance to that treatment, (2) the increase in drug effect as it gets absorbed and its decrease as it is excreted, and (3) the effect of the drug on the target cells. First, notice in the differential equations:\begin{align}\dot{Q} = f(Q,t) - \tilde{r}(P)Q^{q+1} \\\dot{D} = -\lambda_0 D + K \\\dot{P} = -vP + \lambda_0 D\end{align}that (2) is being handled in the equations for $D$ and $P$. Drugs start in some compartment (like the intestines when you swallow a pill) where they are not so useful against leukemia. Here they enter at some dosing rate $K$, and the amount of drug decreases with first-order kinetics, i.e. the first-order linear differential equation that shows $D$ decreasing. See an article on pharmacokinetics for more info on this. Once the drug is absorbed (i.e. contributes to $P$ instead of $D$), it enters the plasma and starts to have an effect on the leukemia cells. (Notice that (2) is being handled here as well, in the $-vP$ term of the differential formula for $P$, which measures how fast your body is destroying the drug.) Naturally, only the drug concentration $P$ in the blood plasma can affect the cancer cells; hence, the term for $\tilde{r}$ only depends on $P$! Next, we can see how the model accounts for (3) in the differential equation for $Q$. Again, we have first-order kinetics of $Q$ (by the first-order linear differential relation), modulated by $\tilde{r}$ and $q$. Since $q + 1 > 0$ and $\tilde{r} \geq 0$, we know that as $\tilde{r}$ increases, we are causing $Q$ to decrease faster, i.e. we are hurting the cancer more (good!). Thus we expect that as $P$ (the drug concentration) increases, $\tilde{r}$ should also increase. Ok, so now for (1). Notice that the equation for $\tilde{r}$ breaks into 2 terms. The second term is $R_T=(x_0- R_0)/x_0^{-p+1}$. This captures the effect of resistance, as the cancer evolves via mutation to gain resistance to the drug. Notice there is no dependence on the drug itself (just as described in the Goldie-Coldman hypothesis). (Note that this may not be realistic, since the drug will induce an (un)natural selection pressure on the cells and encourage the ones that resist the drug to proliferate more easily; however, it does simplify the model.) Notice that $R_T$ decreases as $R_0$ increases, meaning that as the number of resistant cells increases, the efficacy of the treatment decreases. The first term, i.e.
$r_1(P)$, describes the effectiveness of the drug as a function of its concentration via the classical Hill equation. Simplistically speaking, drugs often act via receptor molecules on the surfaces of cells; once all these receptors are saturated, the drug cannot affect the cell anymore! Hence, the effect of a drug "levels off": as its concentration increases, it will bind more and more receptors (and thus have more and more effect, albeit at a diminishing rate), until saturation occurs and adding more drug (i.e. increasing $P$) no longer does anything. You can see this in the equation for $r_1$ since $\lim_{P\rightarrow\infty} r_1(P) = 1$ (though of course $\partial_P r_1 > 0$ still). The $m$ and the $P_0$ simply control this relationship: the Hill coefficient $m$ describes the "cooperative binding" properties of the drug (see the link above for details), i.e. whether binding accelerates further binding or not, while $P_0$ (usually denoted $K_A$, or called the dissociation constant) describes the ligand concentration at which half the target sites are occupied. Higher $P_0$ means that occupying half the receptor sites requires a higher concentration, which means you need more drug for the same effect, i.e. the efficacy of the drug is poorer; this is why $P_0$ appears in the denominator: as it increases, the effect of the drug on the cells (given by $\tilde{r}$) will decrease. Essentially, $m$ controls how fast the drug reaches peak efficacy (how steep the curve is), while $P_0$ controls the physical properties of molecular binding (the translation component of the curve).
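To make the moving parts concrete, here is a rough Python sketch of integrating the three equations above. All parameter values are illustrative placeholders (not from the paper), the resistance factor $R_T$ is folded into a constant r_max, and the unspecified growth term $f(Q,t)$ is replaced by a simple logistic term:

import numpy as np
from scipy.integrate import solve_ivp

lam0, v, K = 0.5, 0.3, 1.0      # absorption rate, elimination rate, dosing rate (assumed)
m, P0, q = 2.0, 1.0, -0.5       # Hill coefficient, half-max concentration, q = -p (assumed)
r_max = 0.8                     # stands in for the resistance factor R_T (assumed)

def rhs(t, y):
    Q, D, P = y
    r1 = P**m / (P**m + P0**m)          # Hill-type drug effect r_1(P)
    f = 0.1 * Q * (1.0 - Q)             # placeholder for the growth term f(Q, t)
    dQ = f - r_max * r1 * max(Q, 0.0)**(q + 1.0)   # guard against tiny negative Q
    dD = -lam0 * D + K
    dP = -v * P + lam0 * D
    return [dQ, dD, dP]

sol = solve_ivp(rhs, (0.0, 50.0), [0.5, 0.0, 0.0])
print(sol.y[:, -1])                     # final values of (Q, D, P)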
Integrate after Pullback How to integrate a differential form obtained by pullback? Let's say I have calculated the pullback of some differential form; the result is (in LaTeX): $\rho^2\sin(\theta)\,\mathrm{d}\rho\wedge\mathrm{d}\theta\wedge\mathrm{d}\phi$ The triple integral is: sage: integral(integral(integral(rho^2*sin(theta),rho,0,1),theta,0,pi/7),phi,0,pi/5) But I want this process to be automatic, i.e. not manually write the integrand in the triple integral by myself. I want to be able to extract the integrand from the pullback and put it in the triple integral. Is that possible? Daniel
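One generic approach, sketched here in plain Python/SymPy rather than Sage's own manifold machinery (so this is an illustration of the idea, not the Sage API): a top-degree form on a 3-dimensional chart has a single scalar coefficient, so one can extract that coefficient programmatically and hand it to a symbolic multiple integral.

import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)

# Suppose the pullback has been computed; for a top-degree form on a
# 3-dimensional chart there is only one component, the coefficient of
# d(rho) ^ d(theta) ^ d(phi), which is what we extract here.
coeff = rho**2 * sp.sin(theta)

# Integrate the extracted coefficient over the given box automatically.
result = sp.integrate(coeff,
                      (rho, 0, 1),
                      (theta, 0, sp.pi/7),
                      (phi, 0, sp.pi/5))
print(result)

The same pattern should carry over to Sage: read off the unique nonzero component of the pulled-back 3-form in the chosen chart, then pass it to the nested integral call shown in the question.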
Rational Expressions

Simplifying Rational Expressions

A rational expression is a fraction. What makes a rational expression different from a simple or complex fraction is that the numerator and/or the denominator are polynomials. To simplify rational expressions, cancel out any common factors. Factors may be polynomials. $\frac{x^{2} +5x +6}{x +2}=\frac{(x +3) (x +2)}{(x + 2)}=\frac{x+3}{1}=x + 3$ To simplify this rational expression, factor the trinomial in the numerator, then cancel out the common factors.

Adding and Subtracting Rational Expressions

Find a common denominator for all fractions, then add or subtract and simplify if needed. There are many steps in solving addition and subtraction problems with rational expressions. $\begin{align} \frac{2}{3x} +\frac{3}{9} + \frac{1}{x^{2}}&=\\ \frac{6x}{9x^{2}} + \frac{3x^{2}}{9x^{2}}+ \frac{9}{9x^{2}} &=\\ \frac{3x^{2} + 6x + 9}{9x^{2}}&=\\ \frac{3(x^{2} +2x +3)}{3(3x^{2})} &=\\ \frac{x^{2} +2x +3}{3x^{2}} \end{align}$

Multiplying and Dividing Rational Expressions

To make the computations less complicated, cancel out common factors, then multiply. For this multiplication problem, factor out the common binomial terms, then multiply. $\begin{align} \frac{x^{2} -x -2}{x-2}&\times\frac{x^{2} +4x +4}{x+2}=\\ \frac{(x+1)(x-2)}{x-2}&\times\frac{(x+2)(x+2)}{x+2}=\\ \frac{x+1}{1}&\times\frac{x+2}{1}=x^{2}+3x+2 \end{align}$ After canceling out any common factors, divide the rational expressions. Remember the mnemonic device for dividing fractions? Keep it, switch it, flip it. $\begin{align} \frac{x^{2} +4x +4}{x+2}&\div\frac{x^{2} +6x +8}{x+2}=\\ \frac{(x+2)(x+2)}{x+2}&\div\frac{(x+2)(x+4)}{x+2}=\\ \frac{x+2}{1}&\div\frac{x+4}{1}=\\ \frac{x+2}{1}&\times\frac{1}{x+4}=\frac{x+2}{x+4} \end{align}$

Long Division of Polynomials

Long division of polynomials follows similar steps to traditional long division. To check the solution, you can use the Distributive Property. The quotient for this division problem is a trinomial, and there is no remainder. $\begin{align} x^{2}+7x+12\\ x+6 ~\overline{\big)x^3+13x^{2}+54x +72}\\ \underline{-(x^3 +6x^{2})}\\ 7x^{2}+54x\\ \underline{-(7x^{2}+42x)}\\ 12x +72\\ \underline{-(12x+72)}\\ 0 \end{align}$ When writing the dividend under the division bracket, include a placeholder for any powers that do not have terms. For this problem, notice the $0x^{3}$ term is a placeholder. $\begin{align} 4x^{3}-8x^{2}+17x-32 +\frac{180}{(3x+6)}\\ 3x+6 ~\overline{\big) 12x^{4}+0x^3+3x^{2}+6x -12}\\ \underline{-(12x^4 +24x^{3})}\\ -24x^{3}+3x^{2}\\ \underline{-(-24x^{3}-48x^{2})}\\ 51x^{2} +6x\\ \underline{-(51x^{2}+102x)}\\ -96x -12\\ \underline{-(-96x -192)}\\ 180 \end{align}$ The quotient for this division problem did not work out evenly, and there is a remainder. The remainder is written as a fraction with the divisor as the denominator.
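If you have Python handy, long-division results can also be verified mechanically (a quick SymPy sketch, not part of the original lesson):

import sympy as sp

x = sp.symbols('x')

# First example: (x^3 + 13x^2 + 54x + 72) / (x + 6) -- no remainder.
q, r = sp.div(x**3 + 13*x**2 + 54*x + 72, x + 6, x)
print(q, r)  # x**2 + 7*x + 12, 0

# Second example: (12x^4 + 3x^2 + 6x - 12) / (3x + 6) -- remainder 180.
q, r = sp.div(12*x**4 + 3*x**2 + 6*x - 12, 3*x + 6, x)
print(q, r)  # 4*x**3 - 8*x**2 + 17*x - 32, 180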
A few days ago, whilst gazing upon the midnight sky, a totally justified question crawled into my mind. How many SpaceX rockets does it take to deorbit the Moon? It went straight into one of those places where you can't really ignore it or shake it off, you just have to know. I could have probably just calculated the required Δv to reduce the periapsis ("lowest" point in orbit or closest approach) of the Moon enough for it to enter the atmosphere, but motivated by my need of a kinematics-gravity framework for a long standing project I decided to simulate the orbit of the Moon around the Earth in Python and brute-force the required amount of thrust (or number of rockets). So here I am, registering domain names, configuring webservers and editing css stylesheets with the hope that my work will be helpful or educational to some, in an attempt to pay my long due obligation of giving back to the internet. But enough of that, let's get to the point.

In order to model orbital motion, we need to be able to compute the position of a given body at each time step. This depends on its velocity, which in turn depends on its acceleration, which is computed from the forces acting on it using Newton's second law of motion: \( \vec{F}=m\vec{a} \) where F are the acting forces, m the mass of the body and a the acceleration. Our first task is to create a class of an orbiting (or not) body, since this is meant to be reusable and not just a one night stand.

class Body(object):
    def __init__(self, Pos0, Mass0, V0, A0, R0, F0=[0,0]):
        self.Pos = Pos0
        self.Mass = Mass0
        self.V = V0
        self.A = A0
        self.R = R0
        self.F = F0
    def __call__(self):
        print self.Pos, self.V, self.A, self.F

The attributes of the object consist of its position vector, mass, velocity vector, acceleration vector, radius and force vector. All the vectors are two dimensional since we assume that the Sun does not exist for the scope of this problem, so we don't have to bother with orbital inclinations and another dimension (who needs the Sun and a third dimension anyway?). When the object is called, some information about its state gets printed.

In order for a mass m to orbit around a central point, it has to have a sufficient velocity v orthogonal to the centripetal force \(\vec{F_{g}}\). At first, we need the gravity force vector \(\vec{F_{g}}\), the magnitude of which is given to us by Newton's law of universal gravitation: \( |\vec{F_{g}}|=G\frac{m_1 m_2}{r^{2}} \) where G is the universal gravitational constant (not to be confused with g), \(m_{1,2}\) the masses of the two bodies and r the distance between them. G and \(m_{1,2}\) are known, so we need to compute r. To do so, we shall utilize the Pythagorean theorem: \( r=\sqrt{(x_2 -x_1)^{2}+(y_2 -y_1)^{2}} \) where \(x_{1,2}\), \(y_{1,2}\) are the Cartesian coordinates of the two points (bodies). Now we can construct the functions DistBetween and FgravMag:

def DistBetween(body1, body2):
    return math.sqrt((body1.Pos[0]-body2.Pos[0])**2+(body1.Pos[1]-body2.Pos[1])**2)

def FgravMag(body1, body2):
    return G*body1.Mass*body2.Mass/(DistBetween(body1, body2)*10**3)**2

Note that the output of DistBetween is multiplied by \(10^{3}\) in FgravMag because the distance unit of choice is km but we need the force in Newtons. In order to turn our gravity force magnitude into a force vector, we need a unit vector (one that has a magnitude of 1) \(\vec{u}\) that points from body1 to body2.
This is exactly what we are going to get if we grab a body1-centered vector that points to body2 by subtracting their coordinates, and divide that by the distance between them: \( \vec{D}=(x_2 -x_1,y_2- y_1), \quad \vec{u}=\frac{\vec{D}}{|\vec{D}|} \) which is handled by the prosaically named function PosUnitDirVct:

def PosUnitDirVct(body1, body2):
    return numpy.divide([body2.Pos[0]-body1.Pos[0], body2.Pos[1]-body1.Pos[1]], DistBetween(body1, body2))

By multiplying this direction vector with the magnitude of the gravitational pull we get the gravity force vector we sought: \(\vec{F_{g}} = \vec{u}|\vec{F_{g}}| \) And the equivalent code:

def FgravVct(body1, body2):
    return numpy.multiply(PosUnitDirVct(body1, body2), FgravMag(body1, body2))

We are half way there! To reduce the periapsis of the Moon enough, in order to reach the atmosphere or crash into the Earth, we need to apply a force in the retrograde direction, which means opposite to the direction of its velocity. This task is made a hell of a lot easier by the fact that the Moon is tidally locked to the Earth (orbital frequency same as rotational frequency, or why there is an album called The Dark Side of the Moon [sic]), so we can just park our boosters nose down on the "leading" point of the Moon that is always facing the direction of its velocity (prograde) and fire them up at the apoapsis. To do so, we first employ a velocity direction vector (prograde vector), which we will later invert, resulting in a retrograde direction vector. By multiplying this new vector with the thrust of a Falcon9 v1.1 first stage booster (in Newtons!) and the count N of the boosters, we get our deorbiting force vector \( \vec{F_r} \):

def VelUnitDirVct(body):
    return numpy.divide(body.V, math.sqrt(body.V[0]**2+body.V[1]**2))
    # math.sqrt(V[0]**2+V[1]**2) is equivalent to numpy.linalg.norm(V)
...
Fr = numpy.multiply(VelUnitDirVct(Moon),-1) * F9thrust * N

Now we have both of our acting forces and we can proceed with the computation of the position at each step. Since we will be working with discrete time progression in the form of a modest for loop, this is only a matter of integrating the acceleration given to us by Newton's second law of motion twice, without forgetting to add the previous velocity/position: \( \vec{v}_{t+n} = \int_{t}^{t+n} \vec{a}_{t}\,dt + \vec{v}_{t}, \quad \vec{x}_{t+n} = \int_{t}^{t+n} \vec{v}_{t}\,dt + \vec{x}_{t} \) where x is the position, t a given point in time and n our time step. We can implement that directly as methods of our objects:

def updateAcc(self):
    self.A = numpy.divide(self.F, self.Mass)/10**3

def updateVel(self, time):
    self.V = [self.V[0] + self.A[0]*time, self.V[1] + self.A[1]*time]

def updatePos(self, time):
    self.Pos = [self.Pos[0] + self.V[0]*time, self.Pos[1] + self.V[1]*time]

def Move(self, time):
    self.updateAcc()
    self.updateVel(time)
    self.updatePos(time)

I should also note at this point that this method of time discretization (solving the differential equations using the Euler method) has an inherent error when working with time-varying quantities, directly related to the size of our step.

All models are wrong, but some are useful. – George Box

Finally it's time to glue everything together. Okay, but how do we do that?
First we should define the two bodies:

Earth = Body([0,0], 5.97237*10**24, [0,0], [0,0], 6371)
Moon = Body([0,405400], 7.341*10**22, [0,0], [0,0], 1737)

We place the Earth at 0,0 (how arrogant of us, casually placing the Earth) and the apoapsis of the Moon at the top of our coordinate system. Mass is in kg and distance in km. Every position of the Moon during our simulation will be placed in a list named Moonposplot (yeah I know). The first stage of a Falcon9 has a burn duration of 180 seconds, so after that the only acting force will be Earth's gravity:

for t in numpy.linspace(0,endtime,endtime/step):
    if t >= 180:
        Moon.F = FgravVct(Moon, Earth)
    else:
        Fr = numpy.multiply(VelUnitDirVct(Moon),-1) * F9thrust * N
        Moon.F = numpy.add(FgravVct(Moon, Earth), Fr)
    Moon.Move(step)
    Moonposplot.append(Moon.Pos)

And using matplotlib to output an animated plot:

Ecircle = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R, color='g')
Mcircle = plt.Circle((Moon.Pos[0], Moon.Pos[1]), Moon.R, color='grey')
Esphere = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R+10000, color=(0,0,0.5,0.1))
Thsphere = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R+700, color=(0,0,0.6,0.15))
Msphere = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R+80, color=(0,0,0.7,0.2))
Ssphere = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R+50, color=(0,0,0.8,0.25))
Trsphere = plt.Circle((Earth.Pos[0], Earth.Pos[1]), Earth.R+12, color=(0,0,0.9,0.3))
fig = plt.figure()
ax = plt.axes(xlim=(-415.4*10**3, 415.4*10**3),ylim=(-415.4*10**3, 415.4*10**3))
line, = ax.plot(map(list, zip(*Moonposplot))[0],map(list, zip(*Moonposplot))[1], lw=0.5)

def init():
    ax.add_artist(Esphere)
    ax.add_artist(Thsphere)
    ax.add_artist(Msphere)
    ax.add_artist(Ssphere)
    ax.add_artist(Trsphere)
    ax.add_artist(Ecircle)
    ax.add_artist(Mcircle)
    return Mcircle, Esphere, Thsphere, Msphere, Ssphere, Trsphere, Ecircle, line

def anim(i):
    Mcircle.center = (Moonposplot[i][0], Moonposplot[i][1])
    return Mcircle, Esphere, Thsphere, Msphere, Ssphere, Trsphere, Ecircle, line

ani = animation.FuncAnimation(fig, anim, init_func=init, frames=int(endtime/step), interval=30, blit=True)
plt.show()

And let's test that for a second without any boosters… Oh no! A premature crash! That happened because we did not specify an initial velocity V0, and it is as if a stationary Moon just appeared and fell straight into the Earth. The power of modern search engines immediately gives us 0.964 km/s as the answer to the question "Moon minimum orbital velocity" (minimum because at t=0 we are at the apoapsis). Isn't that great? This power is readily available to everyone, yet some people choose to use the internet just to post photos of their meals. Anyway, let's initialize the Moon in a meaningful way now:

Moon = Body([0,405400], 7.341*10**22, [0.964,0], [0,0], 1737)

And we have achieved orbit! A well respected and accurate verification can be performed using the cursor and the output of matplotlib's figure to check if the periapsis is where it should be, which surprisingly enough it is! Another test that can be performed is playing with the duration of the simulation until exactly one revolution around the Earth is completed and then comparing that to the orbital period of the Moon. Using a quick and dirty for loop to accomplish that, we verify that the found value is indeed really close to a sidereal month. Getting the answer to the initial question is only a matter of applying a force that is a multiple of 5885 kN (Falcon9 1.1 stage 1 thrust) until the Moon reaches the exosphere!
(Then orbital decay will take over.) This can be achieved with another for loop that modifies the multiplier N until the distance between the two bodies is sufficiently small. …Well, as it turns out, we need \(5\times10^{16}\) Falcon9 stage 1 boosters, so that's not happening any time soon. Such a significant amount of boosters adds enough mass to our body to play an important role in our calculations. It is trivial to account for that, but if we decide to be that precise (let me remind you that we ignored the Sun) we would also need to take into account the reduction of the boosters' mass, something we could do because SpaceX was kind enough to provide us with the specific impulse of Falcon9's stage 1. So we shall deem our previous answer satisfactory. By developing the program in such a way, it is now trivial to demonstrate multi-body situations such as the two body problem (because nothing is really stationary): or a binary system: Well, I can now consider my question answered! Thank you for reading this, and if you want to check out the complete code, it is on Github. Todo: Implement a higher order symplectic & time-reversible integrator such as JANUS.
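Pending that, here is a minimal sketch (hypothetical code, not from the post's repository, reusing the Body attributes defined above) of the simplest symplectic, time-reversible alternative to the forward-Euler update: a velocity-Verlet / leapfrog step. force_fn stands for any force callback, e.g. lambda b: FgravVct(b, Earth).

def leapfrog_step(body, force_fn, dt):
    # Velocity-Verlet (leapfrog): symplectic and time-reversible, so orbital
    # energy errors stay bounded instead of drifting as with forward Euler.
    body.F = force_fn(body)
    body.updateAcc()                                   # a(t) from F(t)
    body.V = [body.V[0] + 0.5*body.A[0]*dt,
              body.V[1] + 0.5*body.A[1]*dt]            # half kick
    body.Pos = [body.Pos[0] + body.V[0]*dt,
                body.Pos[1] + body.V[1]*dt]            # drift
    body.F = force_fn(body)
    body.updateAcc()                                   # a(t+dt) at new position
    body.V = [body.V[0] + 0.5*body.A[0]*dt,
              body.V[1] + 0.5*body.A[1]*dt]            # half kick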
Research Open Access Published: Periodic solutions of Rayleigh equations with singularities Boundary Value Problems volume 2015, Article number: 154 (2015) Article metrics 914 Accesses 7 Citations

Abstract

In this paper, we study the existence of periodic solutions of Rayleigh equations with singularities \(x''+f(t, x')+g(x)=p(t)\). By using the limit properties of the time map, we prove that the given equation has at least one 2π-periodic solution.

Introduction

In this paper, we are concerned with the existence of periodic solutions of singular Rayleigh equations where \(g: (0, +\infty)\to\mathbf{R}\) is continuous and has a singularity at the origin, \(f: \mathbf{R}^{2}\to\mathbf{R}\) is continuous and 2π-periodic with respect to the first variable t, and \(p: \mathbf{R}\to\mathbf{R}\) is continuous and 2π-periodic. Equation (1.1) can be used to model the oscillations of a clarinet reed [1]. The dynamic behaviors of (1.1) have been widely investigated due to their applications in many fields such as physics, mechanics, and engineering (see [2–8] and the references therein). Recently, the periodic problem for equations with singularities has been studied widely because of their background in the applied sciences (see [9–13] and the references therein).

When \(f\equiv0\), (1.1) is a conservative system. Assume that g satisfies and moreover, the primitive function G of g satisfies Condition \((h_{3})\) implies that there exists a constant \(d>0\) such that Let us consider the autonomous system or its equivalent system The first integral of (1.4) is the curve where c is an arbitrary constant. From conditions \((h_{i})\) we obtain \(d(c)\), where \(0< d(c)< c\), \(G(d(c))=G(c)\), \(\lim_{c\to+\infty}d(c)=0\). From [10] we know that, if conditions \((h_{i})\) hold, the time map τ below is well defined. Now, let us set In this paper, we deal with the existence of periodic solutions of (1.1) by using the asymptotic properties of the time map τ. Assume that the limit holds uniformly for \(t\in[0, 2\pi]\). We obtain the following result.

Theorem 1.1 Assume that conditions \((h_{i})\) hold. Then (1.1) possesses at least one 2π-periodic solution provided that the inequality holds.

Using Theorem 1.1, we can obtain the following corollary.

Corollary 1.2 Assume that conditions \((h_{i})\) hold. Then (1.1) possesses at least one 2π-periodic solution provided that the inequality holds.

Throughout this paper, we always use the notations: for any continuous 2π-periodic function \(x(t)\). For a function \(I(c, \cdot)\), the notation \(I=o(1)\) means that, for \(c\to+\infty\), \(I\to0\) holds uniformly with respect to the other variables.

A continuation lemma

It is well known that the continuation theorem plays a key role in studying the existence of periodic solutions of ordinary differential equations. Now we shall introduce a continuation lemma for (1.1). To this end, we consider the equivalent system (2.1) of (1.1). Now, we embed system (2.1) into a family of equations (2.2) with one parameter \(\lambda\in[0, 1]\).

Lemma 2.1 Assume that conditions \((h_{i})\) hold. Suppose that there exists a constant \(\zeta\geq d\) (d is given in (1.3)) such that, if \((x(t), y(t))\) is a 2π-periodic solution of system (2.2) for some \(\lambda\in(0, 1)\), then \(\max\{x(t):t\in[0, 2\pi]\}\neq\zeta\). Then system (2.1) has at least one 2π-periodic solution.

Lemma 2.2 Let \(\Psi=\Psi(t, z; \lambda):[0, 2\pi]\times\mathbf{R}^{m}\times[0, 1]\to\mathbf{R}^{m}\) be a continuous function and let \(\Omega\subset\mathbf{R}^{m}\) be a (non-empty) open bounded set (with boundary ∂Ω and closure Ω̄).
Assume the following conditions: (1) for any 2π-periodic solution \(z(t)\) of \(z'=\lambda\Psi(t, z; \lambda)\) with \(\lambda\in(0, 1)\), such that \(z(t)\in\bar{\Omega}\) for all \(t\in[0, 2\pi]\), it follows that \(z(t)\in\Omega\) for all \(t\in[0, 2\pi]\); (2) \(\Psi_{0}(z)\neq0\) for each \(z\in\partial\Omega\) and \(d_{B}(\Psi_{0}, \Omega, 0)\neq0\), where$$\Psi_{0}(z)=\frac{1}{2\pi}\int_{0}^{2\pi} \Psi(t, z; 0)\,dt,\quad \textit{for } z\in\mathbf{R}^{m}. $$ Then the equation \(z'=\Psi(t, z; 1)\) has at least one 2π-periodic solution and \(z(t)\in\bar{\Omega}\), for all \(t\in[0, 2\pi]\).

Proof of Lemma 2.1 We shall use Lemma 2.2 to prove this continuation lemma. Set Then there exists \(\tilde{t}\in[0, 2\pi]\) such that From condition \((h_{1})\) we know that there exists a constant \(0< d_{0}< d\) such that Therefore, we have Meanwhile, we have We claim that there exist constants \(0<\varepsilon<d_{0}\) and \(c>0\) such that, if \((x(t), y(t))\) is a 2π-periodic solution of (2.2) with \(x(t)\leq\zeta\), \(t\in[0, 2\pi]\), then Then we obtain where \(I_{1}=\{t\in[0, 2\pi]: 0< x(t)< d_{0}\}\), \(I_{2}=\{t\in[0, 2\pi]: d_{0}\leq x(t)\leq\zeta\}\). Hence, we have where \(M_{1}=2\pi\cdot\max\{|g(x)|:d_{0}\leq x\leq\zeta\}\). Let us take a fixed constant δ satisfying \(0<(1+\frac{\pi}{\sqrt{3}})\delta<1\). From \((h_{4})\) we see that there exists \(R_{\delta}>0\) such that, for any \(|s|\geq R_{\delta}\) and \(t\in[0, 2\pi]\), Set Then we see that, for any \((t, s)\in\mathbf{R}^{2}\), Set \(M=2M_{1}+2\pi M_{2}+\|p\|_{1}\). Then we obtain From (2.2) we know that \(x(t)\) satisfies the following equation: Using the Wirtinger inequality and the Sobolev inequality, we have Then we get which means that where \(\gamma=\delta(1+\frac{\pi}{\sqrt{3}})\), \(c_{0}=M_{2}\sqrt{2\pi}+\sqrt{\frac{\pi}{6}}(M+\|p\|_{1})\). Since \(0<\gamma<1\), we have Integrating the first equation of (2.2) on \([0, 2\pi]\) and noticing \(\lambda\in(0, 1]\), we get Therefore, Let \(x(t_{*})\) (\(t_{*}\in[0, 2\pi]\)) be the minimum of \(x(t)\). Then we have \(x'(t_{*})=0\) and \(x''(t_{*})\geq0\). Since \(x(t_{*})\) satisfies we have Hence, which implies Let \(x(t^{*})\) (\(t^{*}\in[0, 2\pi]\)) be the maximum of \(x(t)\). Then we have \(x'(t^{*})=0\) and \(x''(t^{*})\leq0\). Similarly, we can obtain In what follows, we shall prove that there exists \(0<\varepsilon<d_{0}\) such that, for any 2π-periodic solution \((x(t), y(t))\) of (2.2) with \(x(t)\leq\zeta\), \(t\in[0, 2\pi]\), The right inequality \(x(t)<\zeta\) (\(t\in[0, 2\pi]\)) follows directly from the condition \(\max\{x(t):t\in[0, 2\pi]\}\neq\zeta\) and \(x(t)\leq\zeta\), \(t\in[0, 2\pi]\). Next, we prove the left inequality. Otherwise, there exist a sequence \(\{\lambda_{n}\}\) with \(\lambda_{n}\in(0, 1]\) and a sequence \(\{(x_{n}(t), y_{n}(t))\}\) of 2π-periodic solutions of (2.2) (with \(\lambda=\lambda_{n}\) in (2.2)), satisfying \(x_{n}(t)\leq\zeta\), \(t\in[0, 2\pi]\), and Without loss of generality, we assume that, for every n, Since \((x_{n}(t), y_{n}(t))\) satisfies the equation we have Recalling \(x_{n}'(t)=\lambda_{n} y_{n}(t)\), we get Integrating both sides of (2.13) over the interval \([t_{n}, \alpha_{n}]\) and using the fact \(x_{n}'(t_{n})=\lambda_{n} y_{n}(t_{n})=0\), we obtain Therefore, we get From \((h_{2})\) we have Obviously, we have To use Lemma 2.2, we define an open bounded set \(\Omega=\{(x, y): \varepsilon< x<\zeta, -c-1<y<c+1\}\), and a map \(S: (0, +\infty)\times \mathbf{R}\to\mathbf{R}^{2}\), \(S(x, y)=(y, -g(x)-\bar{f}+\bar{p})\).
Then, for any 2π-periodic solution \((x(t), y(t))\) of system (2.2) such that \((x(t), y(t))\in\bar{\Omega}\) for all \(t\in[0, 2\pi]\), we have \((x(t), y(t))\in\Omega\) for all \(t\in[0, 2\pi]\). Therefore, the first condition of Lemma 2.2 is satisfied. Obviously, S does not vanish on the boundary of the rectangle Ω. Furthermore, the Brouwer degree of S, \(d_{B}(S, \Omega, 0)\), is defined and \(d_{B}(S, \Omega, 0)=d_{B}(g, (\varepsilon, \zeta), \bar{p}-\bar{f})=1\) because g is continuous and \(g(\varepsilon)<\bar{p}-\bar{f}\), \(g(\zeta)>\bar{p}-\bar{f}\). According to Lemma 2.2, system (2.1) has at least one 2π-periodic solution. □

Lemma 2.3 [14] Assume that \(g: \mathbf{R}\to \mathbf{R}\) is continuous and \(\lim_{|x|\to+\infty}\operatorname{sgn}(x)g(x)=+\infty\). Then, for any constant \(\nu\in\mathbf{R}\), where with \(\tilde{G}(x)=\int_{0}^{x}g(s)\,ds\).

Remark 2.4 When \(g: [0, +\infty)\to\mathbf{R}\) is continuous and satisfies \(\lim_{x\to+\infty}g(x)=+\infty\), we can also define \(\tau_{g}(c)\) and \(\tau_{g}(\nu, c)\) for \(c>0\) large enough. In this case, we know from Lemma 2.3 that, for any constant ν, When \(g: (0, +\infty)\to\mathbf{R}\) is continuous and \(\lim_{x\to+\infty}g(x)=+\infty\), we can get a similar estimate. Under this condition, it is noted that g may have a singularity at the origin, \(x=0\), namely, \(\lim_{x\to0^{+}}g(x)=-\infty\). For any constant \(\nu\in\mathbf{R}\) and sufficiently large \(c\geq1\), let us set where \(G(x)=\int_{1}^{x}g(s)\,ds\). Then we have where τ is defined by (1.5). In fact, let us consider a function \(g_{0}: [0, +\infty)\to\mathbf{R}\), \(g_{0}(x)=g(x+1)\), \(x\geq0\). Obviously, \(g_{0}\) is continuous on the interval \([0, +\infty)\) and satisfies \(\lim_{x\to+\infty}g_{0}(x)=+\infty\). Then we have, for \(x\geq0\), According to Lemma 2.3, we get When \(c>0\) is large enough, we have Similarly, we have Consequently, we get Therefore, the conclusion (2.16) holds.

Proof of Theorem 1.1

Let us set Then there exist \(0<\varepsilon_{0}<\frac{1}{3}(\tau-2\pi)\) and a sequence \(\{c_{n}\}\) with \(\lim_{n\to\infty}c_{n}=+\infty\) such that, for every n, We shall prove that the condition of Lemma 2.1 is satisfied for \(\zeta=c_{n}\) with n sufficiently large. Let \((x(t), y(t))\) be any 2π-periodic solution of (2.2) for some \(\lambda\in(0, 1]\) and suppose that, for n large enough, Then there exists an interval \([\alpha, \beta]\subset[0, 2\pi]\) containing \(t^{*}\), with \(\alpha=\alpha(x, \lambda)\), \(\beta=\beta(x, \lambda)\), such that and From (2.2) we have Integrating both sides of (3.1) on the interval \([t, t^{*}]\) with \(\alpha\leq t\leq t^{*}\), we have From \((h_{4})\) we know that, for any sufficiently small \(\varepsilon>0\), there is a constant \(M_{\varepsilon}>0\) such that, for any \((t, y)\in \mathbf{R}^{2}\), where \(M_{\varepsilon}'=M_{\varepsilon}+\|p\|_{\infty}\). Let us set Then we have Hence, Multiplying both sides of (3.4) by \(e^{2\varepsilon t}\) and integrating over the interval \([t, t^{*}]\) yields Since \(\phi(t^{*})=0\), we have From \(x'(t)=\lambda y(t)\geq0\), \(t\in[\alpha, t^{*}]\), we know that \(x(t)\) is increasing on the interval \([\alpha, t^{*}]\). Therefore, we get, for \(t\in[\alpha, t^{*}]\), Furthermore, Consequently, we can get, for \(t\in[\alpha, t^{*}]\), where \(\kappa(\varepsilon)=4\pi\varepsilon e^{4\pi\varepsilon}\).
Recalling \(x'(t)=\lambda y(t)\) and \(y(t)>0\) for \(t\in[\alpha, t^{*}]\), we have Hence, Integrating both sides of (3.5) over the interval \([\alpha, t^{*}]\) yields Similarly, we can get Therefore, we obtain Using \((h_{3})\) we can easily derive that, for \(n\to\infty\), Then we have It follows from Remark 2.4 that Consequently, we have Furthermore, Since \(\lim_{\varepsilon\to0^{+}}\sqrt{1+\kappa(\varepsilon)}=1\), there exist a sufficiently small \(\varepsilon>0\) and a sufficiently large n such that, if \(\max_{[0, 2\pi]}x(t)=c_{n}\), then which contradicts the inequality \(\beta-\alpha<2\pi\). Then we find \(\zeta=c_{n}\) for n sufficiently large. Consequently, from the continuation Lemma 2.1, we know that (2.1) has at least one 2π-periodic solution. □

Proof of Corollary 1.2 Let us denote \(\rho=\liminf_{x\to+\infty}\frac{2G(x)}{x^{2}}<\frac{1}{4}\). Then there exists \(\varepsilon>0\) such that \(\rho_{\varepsilon}=\rho+\varepsilon\in(\rho, \frac{1}{4})\). Define Therefore, we have It follows that there exists a sequence \(\{c_{n}\}\) with \(\lim_{n\to+\infty}c_{n}=+\infty\) such that Consequently, Hence, we have As a result, we get

Remark 3.1 In [12], the existence of periodic solutions of the Hamiltonian systems of the type was studied. A similar result was obtained (see [12], Corollary 3.13) for system (3.6). However, this corollary cannot be applied directly to obtain the main results of this paper because the asymptotic behavior of the primitive G of the nonlinearity g is treated in the present paper.

References

1. Radhakrishnan, S: Exact solutions of Rayleigh's equation and sufficient conditions for inviscid instability of parallel, bounded shear flows. Z. Angew. Math. Phys. 45, 615-637 (1994)
2. Smith, HL: On the small oscillations of the periodic Rayleigh equation. Q. Appl. Math. 44, 223-247 (1986)
3. Smith, RA: Period bounds for generalized Rayleigh equation. Int. J. Non-Linear Mech. 6, 271-277 (1977)
4. Omari, P, Villari, G: On a continuation lemma for the study of a certain planar system with applications to Liénard and Rayleigh equations. Results Math. 14, 156-173 (1998)
5. Ma, T, Wang, Z: A continuation lemma and its applications to periodic solutions of Rayleigh differential equations with subquadratic potential conditions. J. Math. Anal. Appl. 385, 1107-1118 (2012)
6. Ma, T: Periodic solutions of Rayleigh equations via time-maps. Nonlinear Anal. 75, 4137-4144 (2012)
7. Habets, P, Torres, PJ: Some multiplicity results for periodic solutions of a Rayleigh differential equation. Dyn. Contin. Discrete Impuls. Syst., Ser. A Math. Anal. 8, 335-351 (2001)
8. Si, J: Invariant tori and unboundedness for second order differential equations with asymmetric nonlinearities depending on the derivatives. Nonlinear Anal. 67, 3098-3115 (2007)
9. Habets, P, Sanchez, L: Periodic solutions of some Liénard equations with singularities. Proc. Am. Math. Soc. 109, 1035-1044 (1990)
10. Wang, Z: Periodic solutions of the second order differential equations with singularities. Nonlinear Anal. 58, 319-331 (2004)
11. Sfecci, A: Nonresonance conditions for radial solutions of nonlinear Neumann elliptic problems on annuli. Rend. Ist. Mat. Univ. Trieste 46, 255-270 (2014)
12. Fonda, A, Sfecci, A: A general method for the existence of periodic solutions of differential systems in the plane. J. Differ. Equ. 252, 1369-1391 (2012)
13. Fonda, A, Toader, R: Periodic orbits of radially symmetric Keplerian-like systems: a topological degree approach. J. Differ. Equ. 244, 3235-3264 (2008)
14. Opial, Z: Sur les périodes des solutions de l'équation différentielle \(x''+g(x)=0\). Ann. Pol. Math. 10, 49-72 (1961)
15. Fonda, A, Zanolin, F: On the use of time-maps for the solvability of nonlinear boundary value problems. Arch. Math. 59, 245-259 (1992)
16. Mawhin, J: Equivalence theorems for nonlinear operator equations and coincidence degree theory for some mappings in locally convex topological vector spaces. J. Differ. Equ. 12, 610-636 (1972)

Acknowledgements

The authors are grateful to the referees for many valuable suggestions that made the paper more readable. This research was supported by the National Natural Science Foundation of China, No. 11501381, and the Grant of Beijing Education Committee Key Project, No. KZ201310028031.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

ZW proved a continuation lemma for Rayleigh equations. TM participated in obtaining a priori estimates and helped to draft the manuscript. All authors read and approved the final manuscript.
Introduction

Suppose we have the task of predicting an outcome y given a number of variables v1,...,vk. We often want to "prune variables" or build models with fewer than all the variables. This can be to speed up modeling, decrease the cost of producing future data, improve robustness, improve explainability, even reduce over-fit, and improve the quality of the resulting model. For some informative discussion on such issues please see the following: How Do You Know if Your Data Has Signal? How do you know if your model is going to work? Variable pruning is NP hard

In this article we are going to deliberately (and artificially) find and test one of the limits of the technique. We recommend simple variable pruning, but also think it is important to be aware of its limits. To be truly effective in applied fields (such as data science) one often has to use (with care) methods that "happen to work" in addition to methods that "are known to always work" (or at least be aware you are always competing against such); hence the interest in mere heuristics.

The pruning heuristics

Let \(L(m;S)\) denote the estimated loss (or badness of performance, so smaller is better) of a model for \(y\) fit using modeling method \(m\) and the variables \(v_i : i \in S\). Let \(d(m;a)\) denote the portion of \(L(m;\{ \})-L(m;\{ a \} )\) credited to the variable \(v_a\). This could be the change in loss, something like \(\mathrm{effectsize}(v_a)\), or \(-\log(\mathrm{significance}(v_a))\); in all cases larger is considered better. For practical variable pruning (during predictive modeling) our intuition often implicitly relies on the following heuristic arguments.

\(L(m; \cdot)\) is monotone decreasing: we expect \(L(m;S \cup \{ a \} )\) to be no larger than \(L(m;S)\). Note this may be achievable "in sample" (or on training data), but is often false if \(L(m; \cdot)\) accounts for model complexity or is estimated on out of sample data (itself a good practice).

If \(L(m;S \cup \{ a \} )\) is significantly lower than \(L(m;S)\) then we will be lucky enough to have \(d(m;a)\) not too small.

If \(d(m;a)\) is not too small then we will be lucky enough to have \(d(\mathrm{lm};a)\) non-negligible (where the modeling method lm is linear regression or logistic regression).

Intuitively we are hoping (for ease of calculation) that variable utility has a roughly diminishing-return structure and that at least some non-vanishing fraction of a variable's utility can be seen in simple linear or generalized linear models. Obviously this cannot be true in general (interactions in decision trees being a well-known situation where variable utility can increase in the presence of other variables, and there are many non-linear relations that escape detection by linear models). Synergy is a good thing; we would just hate to miss it, and one way to prove we don't miss it would be to know it isn't there. We will show there is in fact synergy, so naive methods may in fact miss it. However, if the above were true (or often nearly true) we could effectively prune variables by keeping only the set of variables \(\left\{ a \; \left| \; d(\mathrm{lm};a) \; \text{is non negligible} \right. \right\}\). This is a (user-controllable) heuristic built into our vtreat R package and proves to be quite useful in practice. I'll repeat: we feel in real world data you can use the above heuristics to usefully prune variables.
Complex models do eventually get into a regime of diminishing returns, and real-world engineered useful variables usually (by design) have a hard time hiding. Also, remember data science is an empirical field: methods that happen to work will dominate (even if they do not apply in all cases).

Counter-examples

For every heuristic you should crisply know if it is true (and is in fact a theorem) or it is false (and has counter-examples). We stand behind the above heuristics, and will show their empirical worth in a follow-up article. Let's take some time and show that they are not in fact laws. We are going to show that per-variable coefficient significances and effect sizes are not monotone, in that adding more variables can in fact improve them.

First example

First (using R) we build a data frame where y = a xor b. This is a classic example of y being a function of two variables but not a linear function of them (at least over the real numbers; it is a linear relation over the field GF(2)).

d <- data.frame(a=c(0,0,1,1),b=c(0,1,0,1))
d$y <- as.numeric(d$a == d$b)

We look at the (real) linear relations between y and a, b.

summary(lm(y~a+b,data=d))

## 
## Call:
## lm(formula = y ~ a + b, data = d)
## 
## Residuals:
##    1    2    3    4 
##  0.5 -0.5 -0.5  0.5 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)    0.500      0.866   0.577    0.667
## a              0.000      1.000   0.000    1.000
## b              0.000      1.000   0.000    1.000
## 
## Residual standard error: 1 on 1 degrees of freedom
## Multiple R-squared:  3.698e-32,  Adjusted R-squared:  -2 
## F-statistic: 1.849e-32 on 2 and 1 DF,  p-value: 1

anova(lm(y~a+b,data=d))

## Analysis of Variance Table
## 
## Response: y
##           Df Sum Sq Mean Sq F value Pr(>F)
## a          1      0       0       0      1
## b          1      0       0       0      1
## Residuals  1      1       1

As we expect, linear methods fail to find any evidence of a relation between y and a, b. This clearly violates our hoped-for heuristics. For details on reading these summaries we strongly recommend Practical Regression and Anova using R, Julian J. Faraway, 2002. In this example the linear model fails to recognize a and b as useful variables (even though y is a function of a and b). From the linear model's point of view the variables are not improving each other (so that at least looks monotone), but that is largely because the linear model cannot see the relation unless we were to add an interaction of a and b (denoted a:b).

Second example

Let us develop this example a bit more to get a more interesting counterexample. Introduce new variables u = a and b, v = a or b. By the rules of logic we have y == 1+u-v, so there is a linear relation.

d$u <- as.numeric(d$a & d$b)
d$v <- as.numeric(d$a | d$b)
print(d)

##   a b y u v
## 1 0 0 1 0 0
## 2 0 1 0 0 1
## 3 1 0 0 0 1
## 4 1 1 1 1 1

print(all.equal(d$y,1+d$u-d$v))

## [1] TRUE

We can now see the counter-example effect: together the variables work better than they did alone.

summary(lm(y~u,data=d))

## 
## Call:
## lm(formula = y ~ u, data = d)
## 
## Residuals:
##          1          2          3          4 
##  6.667e-01 -3.333e-01 -3.333e-01 -1.388e-16 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   0.3333     0.3333       1    0.423
## u             0.6667     0.6667       1    0.423
## 
## Residual standard error: 0.5774 on 2 degrees of freedom
## Multiple R-squared:  0.3333,  Adjusted R-squared:  5.551e-16 
## F-statistic: 1 on 1 and 2 DF,  p-value: 0.4226

anova(lm(y~u,data=d))

## Analysis of Variance Table
## 
## Response: y
##           Df  Sum Sq Mean Sq F value Pr(>F)
## u          1 0.33333 0.33333       1 0.4226
## Residuals  2 0.66667 0.33333

summary(lm(y~v,data=d))

## 
## Call:
## lm(formula = y ~ v, data = d)
## 
## Residuals:
##          1          2          3          4 
##  5.551e-17 -3.333e-01 -3.333e-01  6.667e-01 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   1.0000     0.5774   1.732    0.225
## v            -0.6667     0.6667  -1.000    0.423
## 
## Residual standard error: 0.5774 on 2 degrees of freedom
## Multiple R-squared:  0.3333,  Adjusted R-squared:  0 
## F-statistic: 1 on 1 and 2 DF,  p-value: 0.4226

anova(lm(y~v,data=d))

## Analysis of Variance Table
## 
## Response: y
##           Df  Sum Sq Mean Sq F value Pr(>F)
## v          1 0.33333 0.33333       1 0.4226
## Residuals  2 0.66667 0.33333

summary(lm(y~u+v,data=d))

## Warning in summary.lm(lm(y ~ u + v, data = d)): essentially perfect fit:
## summary may be unreliable

## 
## Call:
## lm(formula = y ~ u + v, data = d)
## 
## Residuals:
##          1          2          3          4 
## -1.849e-32  7.850e-17 -7.850e-17  1.849e-32 
## 
## Coefficients:
##              Estimate Std. Error    t value Pr(>|t|)    
## (Intercept)  1.00e+00   1.11e-16  9.007e+15   <2e-16 ***
## u            1.00e+00   1.36e-16  7.354e+15   <2e-16 ***
## v           -1.00e+00   1.36e-16 -7.354e+15   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.11e-16 on 1 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:      1 
## F-statistic: 4.056e+31 on 2 and 1 DF,  p-value: < 2.2e-16

anova(lm(y~u+v,data=d))

## Warning in anova.lm(lm(y ~ u + v, data = d)): ANOVA F-tests on an
## essentially perfect fit are unreliable

## Analysis of Variance Table
## 
## Response: y
##           Df  Sum Sq Mean Sq    F value    Pr(>F)    
## u          1 0.33333 0.33333 2.7043e+31 < 2.2e-16 ***
## v          1 0.66667 0.66667 5.4086e+31 < 2.2e-16 ***
## Residuals  1 0.00000 0.00000                         
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

In this example we see synergy instead of diminishing returns. Each variable becomes better in the presence of the other. This is on its own good, but it indicates variable pruning is harder than one might expect, even for a linear model.

Third example

We can get around the above warnings by adding some rows to the data frame that don't follow the designed relation. We can even draw rows from this frame to show the effect on a "more row-independent looking" data frame.

d0 <- d
d0$y <- 0
d1 <- d
d1$y <- 1
dG <- rbind(d,d,d,d,d0,d1)
set.seed(23235)
dR <- dG[sample.int(nrow(dG),100,replace=TRUE),,drop=FALSE]
summary(lm(y~u,data=dR))

## 
## Call:
## lm(formula = y ~ u, data = dR)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8148 -0.3425 -0.3425  0.3033  0.6575 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.34247    0.05355   6.396 5.47e-09 ***
## u            0.47235    0.10305   4.584 1.35e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4575 on 98 degrees of freedom
## Multiple R-squared:  0.1765,  Adjusted R-squared:  0.1681 
## F-statistic: 21.01 on 1 and 98 DF,  p-value: 1.349e-05

anova(lm(y~u,data=dR))

## Analysis of Variance Table
## 
## Response: y
##           Df  Sum Sq Mean Sq F value    Pr(>F)    
## u          1  4.3976  4.3976   21.01 1.349e-05 ***
## Residuals 98 20.5124  0.2093                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

summary(lm(y~v,data=dR))

## 
## Call:
## lm(formula = y ~ v, data = dR)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7619 -0.3924 -0.3924  0.6076  0.6076 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   0.7619     0.1049   7.263 9.12e-11 ***
## v            -0.3695     0.1180  -3.131   0.0023 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4807 on 98 degrees of freedom
## Multiple R-squared:  0.09093,  Adjusted R-squared:  0.08165 
## F-statistic: 9.802 on 1 and 98 DF,  p-value: 0.002297

anova(lm(y~v,data=dR))

## Analysis of Variance Table
## 
## Response: y
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## v          1  2.265 2.26503  9.8023 0.002297 **
## Residuals 98 22.645 0.23107                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

summary(lm(y~u+v,data=dR))

## 
## Call:
## lm(formula = y ~ u + v, data = dR)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8148 -0.1731 -0.1731  0.1984  0.8269 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.76190    0.08674   8.784 5.65e-14 ***
## u            0.64174    0.09429   6.806 8.34e-10 ***
## v           -0.58883    0.10277  -5.729 1.13e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3975 on 97 degrees of freedom
## Multiple R-squared:  0.3847,  Adjusted R-squared:  0.3721 
## F-statistic: 30.33 on 2 and 97 DF,  p-value: 5.875e-11

anova(lm(y~u+v,data=dR))

## Analysis of Variance Table
## 
## Response: y
##           Df  Sum Sq Mean Sq F value    Pr(>F)    
## u          1  4.3976  4.3976  27.833 8.047e-07 ***
## v          1  5.1865  5.1865  32.826 1.133e-07 ***
## Residuals 97 15.3259  0.1580                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Conclusion

Consider the above counter-example as exceptio probat regulam in casibus non exceptis ("the exception confirms the rule in cases not excepted"), or as roughly outlining the (hopefully labored and uncommon) structure needed to break the otherwise common and useful heuristics. In later articles in this series we will show more about the structure of model quality and show the above heuristics actually working very well in practice (and adding a lot of value to projects).
Halfthickness

For linear absorption with exponential decay of the intensity \( I \), id est, \( I(z)= I(0) \exp(-\mu z) \), where \(\mu\) is the absorption coefficient, the halfthickness is the length at which the intensity drops to a half of its initial value. The halfthickness is related to the absorption coefficient \(\mu\) by the simple relation \( \displaystyle d_{1/2} = \frac{\ln(2)}{\mu} \)

Gamma rays

Usually, gamma-rays or gamma-radiation refers to electromagnetic waves of energy of order of MeV, say, from 0.1 MeV to 10 MeV. Here is a table of half-value thicknesses of various materials for gamma-rays of different energies, measured in cm:

\( \begin{array}{c|cccc}
\text{Energy} & \text{Lead} & \text{Aluminium} & \text{Air} & \text{Water}\\ \hline
0.1~\mathrm{MeV} & 0.014 & 1.7 & 3.6\times 10^{3} & 4.1\\
0.5~\mathrm{MeV} & 0.41 & 3.0 & 5.9\times 10^{3} & 7.3\\
1~\mathrm{MeV} & 0.88 & 4.3 & 8.3\times 10^{3} & 9.9\\
2~\mathrm{MeV} & 1.36 & 5.7 & 13.4\times 10^{3} & 17.3\\
5~\mathrm{MeV} & 1.46 & 9.5 & 21.5\times 10^{3} & 21.7
\end{array} \)

For example, in air, gamma-rays of energy 5 MeV are attenuated to half of their initial intensity at a distance of 215 meters. At a distance of one km, the intensity drops by more than an order of magnitude; and at 10 km, it drops by more than 10 orders of magnitude. The table is by Matthias Hengsberger [1].

References

1. Matthias Hengsberger. Absorption of γ-rays – Determination of the Half-value Thickness of Absorber Materials. Laboratory Manuals for Physics Majors - Course PHY112/122, Tabelle 1.1 (2019). https://www.physik.uzh.ch/~matthias/espace-assistant/manuals/en/anleitung-ab_e.pdf
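The attenuation claims for air can be reproduced in a couple of lines of plain Python, using only the half-value thickness from the table above:

import math

d_half = 215.0                # half-value thickness of air for 5 MeV gammas, m
mu = math.log(2) / d_half     # absorption coefficient, 1/m

for z in (1000.0, 10000.0):   # 1 km and 10 km
    attenuation = math.exp(-mu * z)                   # I(z)/I(0)
    print(z, attenuation, -math.log10(attenuation))   # orders of magnitude
# 1 km  -> ~4.0e-2, i.e. more than one order of magnitude
# 10 km -> ~1e-14,  i.e. about 14 orders of magnitude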
Class 8th CBSE is a very important stage in the development of the fundamentals of a student, as the basics of Maths and Science help students to understand the complex topics which will come in the 9th standard. Most of the subjects in class 8 are vast, and some of them require not only memorization but also skill. One such subject is Mathematics, which seems to be a nightmare for students and requires a lot of practice to boost confidence. So we at BYJU'S provide students of Class 8 with important 2-mark questions, which can be beneficial for them in their examination. Students preparing for the CBSE Class 8 Maths Examination are advised to practice the given questions for Mathematics. Important 2 Marks Questions for Class 8 Maths CBSE Board are as follows:

Question 1- Select those which can be written as a rational number with denominator 4 in their lowest form: \(\frac{7}{8},\frac{64}{16},\frac{36}{-12},\frac{-16}{17},\frac{5}{-4},\frac{140}{28}\)

Question 2- Using suitable rearrangement, find the sum: a) \(\frac{4}{7}+\left (\frac{-4}{9} \right )+\frac{3}{7}+\left (\frac{-13}{9} \right )\) b) -5 + \(\frac{7}{10}+\frac{3}{7}+(-3)+\frac{5}{14}+\frac{-4}{5}\)

Question 3- Following are the numbers of members in 25 families of a village: 6, 8, 7, 7, 6, 5, 3, 2, 5, 6, 8, 7, 7, 4, 3, 6, 6, 6, 7, 5, 4, 3, 3, 2, 5. Prepare a frequency distribution table for the data using class intervals 0 – 2, 2 – 4, etc.

Question 4- 13 and 31 form a strange pair of numbers, such that their squares 169 and 961 are also mirror images of each other. Can you find two other such pairs?

Question 5- Three numbers are in the ratio 2 : 3 : 4. The sum of their cubes is 0.334125. Find the numbers.

Question 6- A steamer goes downstream and covers the distance between two ports in 5 hours, while it covers the same distance upstream in 6 hours. If the speed of the stream is 1 km/hr, find the speed of the steamer in still water.

Question 7- The sum of the digits of a two-digit number is 11. The given number is less than the number obtained by interchanging the digits by 9. Find the number.

Question 8- A playground in the town is in the form of a kite. The perimeter is 106 meters. If one of its sides is 23 meters, what are the lengths of the other three sides?

Question 9- In the parallelogram YOUR given below, \(\angle RUO=120^{\circ}\) and OY is extended to point S such that \(\angle SRY=50^{\circ}\). Find \(\angle YSR\).

Question 10- A polyhedron has 20 faces and 12 vertices. Find the edges of the polyhedron.

Question 11- In a town, an ice cream parlour has displayed an ice cream sculpture of height 360 cm. The parlour claims that these ice creams and the sculpture are in the scale 1 : 30. What is the height of the ice creams served?

Question 12- Factorize the following: a) \(x^{2}\) + 15x + 26 b) \(x^{2}\) = 17x + 60

Question 13- The area of a square is given by \(4x^{2}\) + 12xy + \(9y^{2}\). Find the side of the square.

Question 14- Express as a power of a rational number with negative exponent: a) \(\left ( \left ( \frac{-3}{2} \right )^{-2} \right )^{-3}\) b) \((2^{5}\div2^{8}) \times 2^{-7}\)

Question 15- By what number should \(\left ( \frac{-3}{2} \right )^{-3}\) be divided so that the quotient may be \(\left ( \frac{4}{27} \right )^{-2}\)?

Question 16- Jyothi bought a product for Rs. 3,155 including 4.5% sales tax. Find the price before tax was added.

Question 17- Archana bought medicines from a medical store as prescribed by her doctor for Rs. 36.40 including 4% VAT. Find the price before VAT was added.
Question 18- The variable x varies directly as y and x = 80 when y is 160. What is y when x is 64? Question 19- Gogi types 108 words in 6 minutes. How many words would she type in half an hour? Question 20- The walls and ceiling of a room are to be plastered. The length, breadth and height of the room are 4.5 m, 3 m and 350 cm respectively. Find the cost of plastering at the rate of Rs 8 per \(m^{2}\) . For more important questions for class 8 download BYJU’S-The learning App.
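As an illustration of the kind of short working these 2-mark questions expect (a worked sketch, not part of the original question set), Question 6 reduces to a single linear equation: with steamer speed \(v\) in still water, the downstream speed is \(v+1\) and the upstream speed is \(v-1\), and both trips cover the same distance, so

\( 5(v+1)=6(v-1) \;\Rightarrow\; 5v+5=6v-6 \;\Rightarrow\; v=11~\text{km/hr}. \)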
Extraordinary claims require extraordinary proofs, which really is the reason why this sort of discussion is important. Similarly, sometimes you are so blinded to some sort of truth, and faced with something so different, that you can entirely misread what is being said. If you read this morning's entry, you might get the feeling that I am a little ambivalent about the true interesting nature of a paper entitled Statistical physics-based reconstruction in compressed sensing by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová. Let's put this in perspective: our current understanding so far is that the universal phase transition observed by Donoho and Tanner seems to be seen with all the solvers featured here, that there are many ensembles for which it fits (not just Gaussian; I remember my jaw dropping when Jared Tanner showed it worked for the ensembles of Piotr Indyk, Radu Berinde et al) and that the only way to break it is to now consider structured sparsity, as shown by Phil Schniter at the beginning of the week. In most people's minds, the L_1 solvers are really a good proxy to the L_0 solvers, since even greedy solvers (the closest we can find to L_0 solvers) seem to provide similar results. Then there are results like the ones of Shrinivas Kudekar and Henry Pfister (Figure 5 of The Effect of Spatial Coupling on Compressive Sensing) that look like some sort of improvement (but not a large one). In all, a slight improvement over that phase transition could, maybe, be attributed to a slightly different solver or ensemble (measurement matrices). So this morning I made the point that, given what I understood about the graphs displayed in the article, it may be at best a small improvement over the Donoho-Tanner phase transition known to hold for not only Gaussian but other types of matrices and for different kinds of solvers, including greedy algorithms and SL0 (that simulate some sort of L_0 approach). At best is really an overstatement, but I was intrigued, mostly because of the use of an AMP solver, so I fired off an inquisitive e-mail on the subject to the corresponding author:

Dear Dr. Krzakala, ... I briefly read your recent paper on arxiv with regards to your statistical physics based reconstruction capability and I am wondering if your current results are within the known boundary of what we know of the phase transition found by Donoho and Tanner or if it is an improvement on it. I provided an explanation of what I meant in today's entry (http://nuit-blanche.blogspot.com/2011/09/this-week-in-compressive-sensing.html). If this is an improvement, I'd love to hear about it. If it is not an improvement, one wonders if some of the deeper geometrical findings featured by the Donoho-Tanner phase transition have a bearing on phase transitions in real physical systems. Best regards, Igor.

The authors responded quickly with:

Dear Igor, Thanks for writing about our work in your blog. Please notice, however, that our axes in the figure you show are not the same as those of Donoho and Tanner. For a signal with N components, we define \rho N as the number of non-zeros in the signal, and \alpha N as the number of measurements. In our notation Donoho and Tanner's parameters are rho_DT = rho/alpha and delta_DT = alpha. We are attaching our figure plotted in Donoho and Tanner's way. Our green line is then exactly the DT's red line (since we do not put any restriction on the signal elements), the rest is how much we can improve on it with our method.
Asymptotically (N \to \infty) our method can reconstruct exactly till the red line alpha=rho, which is the absolute limit for exact reconstruction (with exhaustive search algorithms). So we indeed improve a lot over the standard L1 reconstruction! We will of course be very happy to discuss/explain/clarify details if you are interested. With best regards, Florent, Marc, Francois, Yifan, and Lenka

The reason I messed up reading the variables is that I was probably not expecting something that stunning. Thanks to Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová for their rapid feedback.

Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.
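For readers who want to replot one convention on top of the other, the change of variables the authors describe amounts to the following (a hypothetical helper, not code from the paper):

def to_donoho_tanner(alpha, rho):
    """Map the paper's (alpha, rho) to Donoho-Tanner (delta, rho) coordinates.

    alpha*N = number of measurements, rho*N = number of non-zeros, hence
    delta_DT = alpha and rho_DT = rho/alpha, exactly as the authors state."""
    return alpha, rho / alpha

print(to_donoho_tanner(0.5, 0.25))   # -> (0.5, 0.5)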
I do a lot of my early draft writing in LaTeX, so I place a substantial value in being able to quickly read my own LaTeX code. As a result, I adopt a style which is half-way between the "LaTeX way" (of using the correct delimiters for math environments and semantic names for macros) and quick-and-dirty macro usage (to save not only on keystrokes made, but on characters to re-read). The following is a typical way that I would render your code-segment. % In the pre-amble % ==================== % A semantic command for vectors \renewcommand\vec[1]{\mathbf{#1}} % A syntactic command for special symbols which I would % consistently use for a given purpose. \newcommand\R{\mathbb{R}} \newcommand\cS{\mathcal{S}} % In the document body % ==================== \subsection{Convex sets} A set $\cS$ in $\R^n$ is said to be \emph{convex} if \begin{align} \vec x_1, \vec x_2, \in \cS &\implies \lambda \vec x_1 + (1-\lambda) \vec x_2 \in \cS & \text{for all $0 < \lambda < 1$.} \end{align} I don't bother adding newlines in the middle of sentences, except for things such as footnotes and ends of lines (which is useful to do if you use a content revision system to track changes). But I add them liberally in my displayed math to better reveal the structure of what I'm writing, especially in tabulated environments such as align. You could also just use an equation environment instead, in which case I would write: \begin{equation} \vec x_1, \vec x_2, \in \cS \implies \lambda \vec x_1 + (1-\lambda) \vec x_2 \in \cS \quad \text{for all $0 < \lambda < 1$.} \end{equation} Some further remarks on my approach to defining the macros: If you only have a few vectors in your text, you can make your mathematics somewhat more terse by defining \newcommand\vx{\vec x} to speed up your (or your collaborators') personal lexing process; how this affects one's ability to parse the mathematics is another matter. As with \cS, this sort of naming convention depends on the reader (most probably just you, but also any collaborators) becoming accustomed to parsing the name of the macro as an adjective-noun couplet: a "calligraphic S", a "vector x", and so on. As for \R, this sort of naming convention ought only to be used for a single style thoughout all of your documents for symbols representing very important objects (such as blackboard-bold symbols denoting number-sets which you are likely to refer to in any given paper). Use such shortcuts with discretion, both for your own sanity and those of your colleagues who might also work on the document with you.
I think that adding alkyl groups to $\alpha$-$\beta$ unsaturated carbonyl compounds is possible through enamine synthesis with secondary amines. But the alkylation will be at the $\alpha$ and $\gamma$ positions, as follows. For simplicity, if we take the corresponding carbonyl compound as cyclohex-$2$-en-$1$-one, the plausible pathways for alkylation are as follows.

Now, the first step of forming an enamine has already been reported. The link to the JACS paper is https://pubs.acs.org/doi/pdf/10.1021/ja061104y . In this paper, the preparation of an enamine (using (S)-proline as the secondary amine) from a similar kind of $\alpha$-$\beta$ unsaturated compound has been reported. One of the major reactions in this paper is the following.

Thus it is confirmed that this kind of doubly conjugated enamine formation is possible. The remaining two steps for $\alpha$-alkylation have also been reported earlier. For details, see the paper https://onlinelibrary.wiley.com/doi/pdf/10.1002/recl.19881070906 Here they have done the following reaction related to this context.

Here $\gamma$-alkylation was disfavoured to preserve the aromaticity of the benzene ring. But in general, $\gamma$-alkylation again forms a conjugated carbonyl compound, and hence gives the thermodynamically more stable product. So $\gamma$-alkylation is more favoured at higher temperature, whereas the $\alpha$-alkylated product is the kinetic product. Thus, it is also possible to do alkylation of an $\alpha$-$\beta$ unsaturated compound using a secondary amine, but the alkylation will be specifically at the $\alpha$ and $\gamma$ positions (not at the $\beta$ position).
5.1 Case study: Comparing work travel times

The following is a worked example using the county complete dataset from Section 4 that demonstrates how to use the infer package to test if the difference in mean values of two distributions is statistically significant. To follow along, load the following libraries and dataset into your R session:

# Packages
library(dplyr)
library(ggplot2)
library(readr)
library(infer)

# Dataset
county_complete <- read_rds(
  url("http://data.cds101.com/county_complete.rds")
)

5.1.1 Work travel times in Iowa and Nebraska

In Section 4, we looked at the distribution of average work travel times in Iowa and Nebraska measured on the county level. In our visual inspection of the probability mass function, we concluded that the distribution of work travel times is shifted slightly towards shorter commutes in Nebraska versus Iowa. The summary statistics for the two distributions seem to support this conclusion:

# Filter dataset to only include counties in Iowa and Nebraska
ia_ne_county <- county_complete %>%
  filter(state == "Iowa" | state == "Nebraska")

# Compute summary statistics of mean_work_travel column for Iowa and Nebraska
ia_ne_summary_stats <- ia_ne_county %>%
  group_by(state) %>%
  summarize(
    mean = mean(mean_work_travel),
    median = median(mean_work_travel),
    sd = sd(mean_work_travel),
    iqr = IQR(mean_work_travel)
  )

state     mean   median  sd     iqr
Iowa      19.11  18.5    3.425  4.4
Nebraska  17.97  17.6    3.823  5.0

Let's compute the difference in means directly using ia_ne_summary_stats. First, we need to grab the mean column:

ia_ne_means <- ia_ne_summary_stats %>%
  pull(mean)

ia_ne_means is a simple vector that stores two numbers.

ia_ne_means

## [1] 19.11 17.97

The first number of the vector, which we can access with ia_ne_means[1], is the mean work travel time in Iowa. The second number of the vector, which we can access with ia_ne_means[2], is the mean work travel time in Nebraska. Knowing this, we easily compute the difference in means,

ia_ne_diff_in_means <- ia_ne_means[1] - ia_ne_means[2]

obtaining the following value of ia_ne_diff_in_means:

# (mean work travel time in Iowa) - (mean work travel time in Nebraska)
ia_ne_diff_in_means = 1.15 minutes

However, only checking the numerical difference between the mean values is not enough if we want to claim there is a statistically significant difference in commute times between Iowa and Nebraska. There is significant overlap between the two distributions, with the standard deviation (sd) of each distribution being 3–4 times larger than the difference in means. Before we can conclude that the difference in means is statistically significant, we need to estimate how likely it is that this difference could have arisen from chance alone.

5.1.2 Defining the hypothesis test

To determine whether or not there is a meaningful difference in means between the two distributions, we will conduct a two-sided hypothesis test using the following null and alternative hypotheses:

Null hypothesis: There is no difference between the mean work travel time in Iowa and the mean work travel time in Nebraska. \[\text{H}_{0}:\mu_{\text{IA}}-\mu_{\text{NE}}=0\]

Alternative hypothesis: There is a difference between the mean work travel time in Iowa and the mean work travel time in Nebraska. \[\text{H}_{\text{A}}:\mu_{\text{IA}}-\mu_{\text{NE}}\neq{}0\]

We set our significance level to \(\alpha=0.05\), which we will use as a cutoff for determining whether or not a result is statistically significant.
5.1.3 Building the null distribution

To conduct our hypothesis test, we first must generate our null distribution using the infer package:

ia_ne_mean_work_travel_null <- ia_ne_county %>%
  specify(formula = mean_work_travel ~ state) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 10000, type = "permute") %>%
  calculate(stat = "diff in means", order = combine("Iowa", "Nebraska"))

Let's explain what each of these lines is doing. We start by piping our filtered dataset stored in ia_ne_county into specify:

specify(formula = mean_work_travel ~ state)

Here we specify the response variable and the explanatory variable in the formula = input keyword. The formula syntax is given as follows:

response ~ explanatory

Therefore, in our formula mean_work_travel ~ state, mean_work_travel is the response variable and state is the explanatory variable. How do we know to use mean_work_travel as the response variable? It's because of how we formulated our null and alternative hypotheses, such that we are testing whether a person's mean work travel time is affected by the state he or she lives in, not whether the state that a person lives in is affected by his or her mean work travel time. Finally, because the response variable is numerical, we don't use the success = keyword in specify.

The next line that we pipe into is:

hypothesize(null = "independence")

Here we indicate that we are conducting a hypothesis test to check whether or not the variables provided in specify are independent of one another. A basic rule of thumb is, if you specify both an explanatory and a response variable in the formula of specify, then you will use null = "independence".

Next we pipe into this line:

generate(reps = 10000, type = "permute")

Here we indicate that we want to permute the mean_work_travel and state columns 10,000 times to build up our null distribution. In effect, we are randomly shuffling the values in the mean_work_travel column relative to the state column, which should remove any connection between the mean_work_travel and state columns (this is what we mean by a null distribution). We know to use type = "permute" because we have specified both a response and an explanatory variable and we want to see what happens when we assume the two columns are independent. As for reps = 10000, this controls the overall accuracy of our hypothesis test. There is no hard and fast rule for the value to set, but if you want more accurate results, increase reps. However, keep in mind that the larger reps is, the longer it will take to complete the calculation.

We finish up by piping into the last line:

calculate(stat = "diff in means", order = combine("Iowa", "Nebraska"))

Here we indicate that we are comparing the difference in means between our two distributions (the mean work travel times in Iowa and Nebraska). The input order = combine("Iowa", "Nebraska") defines the subtraction order, so we will find the difference in means by subtracting the mean time in Nebraska from the mean time in Iowa, which is the same order we used for ia_ne_diff_in_means <- ia_ne_means[1] - ia_ne_means[2]. As for the stat input, we know to use stat = "diff in means" because of how we formulated our null and alternative hypotheses, i.e.
we are comparing the difference in mean work travel times between Iowa and Nebraska. The way we know that we can't use something like stat = "diff in props" is because at least one of our variables in specify is numerical (mean_work_travel). The input stat = "diff in props" only applies if both the explanatory and response variables are categorical.

The null distribution that we generated looks as follows:

ia_ne_mean_work_travel_null %>%
  visualize() +
  labs(
    x = "difference in means",
    title = "Difference in mean work travel times null distribution"
  )

5.1.4 Computing the two-sided p-value

After we've generated our null distribution, we can compute the p-value using get_p_value. To compute a p-value, we need an observed statistic, which is simply the difference in means we computed using the dataset itself. This is what we computed using ia_ne_diff_in_means <- ia_ne_means[1] - ia_ne_means[2]. However, infer also provides a shortcut for computing the observed statistic so that you don't have to use summarize(),

ia_ne_diff_in_means <- ia_ne_county %>%
  specify(formula = mean_work_travel ~ state) %>%
  calculate(stat = "diff in means", order = combine("Iowa", "Nebraska"))

Note that all we've done is taken the code we used to generate the null distribution and removed the hypothesize() and generate() lines. Now that we have our observed statistic, we can compute the p-value for our two-sided hypothesis test,

ia_ne_p_value_two_sided <- ia_ne_mean_work_travel_null %>%
  get_p_value(obs_stat = ia_ne_diff_in_means, direction = "both")

p_value
0.029

Our two-sided p-value is 0.029, which is below our significance level of \(\alpha=0.05\). We therefore reject the null hypothesis in favor of the alternative hypothesis. The difference between the mean work travel times in Iowa and Nebraska is statistically significant.

We visualize the meaning of the p-value in the following way:

ia_ne_mean_work_travel_null %>%
  visualize() +
  shade_p_value(obs_stat = ia_ne_diff_in_means, direction = "both") +
  labs(
    x = "difference in means",
    title = "Difference in mean work travel times null distribution"
  )

The portions of the null distribution that are in the shaded red area correspond to results that are more extreme than the observed value 1.15 in a two-sided hypothesis test. The more of the null distribution that lies within the red portions of the visualization, the more likely it becomes that we will fail to reject the null hypothesis, meaning the difference in means that we observed in the dataset could have arisen from random chance alone.

5.1.5 Computing the 95% confidence interval

In addition to the two-sided hypothesis test, it is also good practice to compute the 95% confidence interval for the difference in mean work travel times between Iowa and Nebraska. To compute this, we need to generate the bootstrap distribution for the difference in means:

ia_ne_mean_work_travel_bootstrap <- ia_ne_county %>%
  specify(formula = mean_work_travel ~ state) %>%
  generate(reps = 10000, type = "bootstrap") %>%
  calculate(stat = "diff in means", order = combine("Iowa", "Nebraska"))

You'll notice that the code to generate the bootstrap distribution is pretty similar to the code used to generate the null distribution. All we had to do was remove the hypothesize(null = "independence") line and change the type = keyword in generate from "permute" to "bootstrap", and that's it!

You might be wondering, how are the methods for generating the null distribution and bootstrap distribution different?
For this example, they differ in two important ways:

Unlike when we generated the null distribution, the mean_work_travel and state columns are not randomly shuffled.

Generating a bootstrap distribution requires sampling with replacement, meaning we grab different rows in ia_ne_county at random (we might even grab the same row more than once), which we do not do if we permute the columns.

The bootstrap distribution we just generated looks as follows:

ia_ne_mean_work_travel_bootstrap %>%
  visualize() +
  labs(
    x = "difference in means",
    title = "Difference in mean work travel times bootstrap distribution"
  )

This shows what could happen if we went out and collected another set of samples of the data in this dataset. Due to random variations in the people we survey, sometimes we might end up with a dataset where the difference between the mean work travel times in Iowa and Nebraska is closer to 2 minutes instead of 1 minute, or it might be closer to zero. Most of the time, it will be close to 1 minute.

The 95% confidence interval is bounded by the 2.5th and 97.5th percentiles of the bootstrap distribution, so that it contains the middle 95% of the data:

ia_ne_ci95 <- ia_ne_mean_work_travel_bootstrap %>%
  get_confidence_interval()

2.5%    97.5%
0.1167  2.153

This means that, if we repeatedly collected new samples of mean work travel times in Iowa and Nebraska and constructed a confidence interval from each, we would expect about 95% of those intervals to capture the true difference in mean work travel times; here the interval runs from 0.1167 to 2.1534. Also note that the lower bound of the confidence interval does not intersect with a difference of means equal to zero, lending further support to the conclusion we reached using our two-sided hypothesis test.

We close this example by visualizing the 95% confidence interval by shading the middle 95% of the data in the bootstrap distribution:

ia_ne_mean_work_travel_bootstrap %>%
  visualize() +
  shade_confidence_interval(endpoints = ia_ne_ci95) +
  labs(
    x = "difference in means",
    title = "Difference in mean work travel times 95% confidence interval"
  )
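As a quick cross-check on get_confidence_interval(), you can compute the same percentile interval directly from the bootstrap statistics. This is a minimal sketch, assuming the ia_ne_mean_work_travel_bootstrap tibble generated above is available and that stat is the column name infer uses for calculated statistics:

# Percentile-based 95% CI computed directly from the bootstrap statistics
ia_ne_ci95_manual <- ia_ne_mean_work_travel_bootstrap %>%
  summarize(
    lower = quantile(stat, 0.025),  # 2.5th percentile
    upper = quantile(stat, 0.975)   # 97.5th percentile
  )
ia_ne_ci95_manual

Up to simulation noise, lower and upper should match the endpoints reported by get_confidence_interval().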
Purpose: In this tutorial you will learn how to set up and execute a series of calculations for strained structures. Additionally, it will be explained how to obtain the derivatives of the energy-vs-strain curves at zero strain and how these quantities are related to elastic constants.

Table of Contents

0. Define relevant environment variables

Read the following paragraphs before starting with the rest of this tutorial! Before starting, be sure that the relevant environment variables are already defined as specified in How to set environment variables for tutorials scripts. Here is a list of the scripts which are relevant for this tutorial, with a short description.

SETUP-elastic-strain.py: Python script for generating strained structures.
EXECUTE-elastic-strain.sh: (Bash) shell script for running a series of exciting calculations.
CHECKFIT-energy-vs-strain.py: Python script for extracting derivatives at zero strain of energy-vs-strain curves.
PLOT-energy.py: Python visualization tool for energy-vs-strain curves.
PLOT-status.py: Python visualization tool for RMS deviations of the SCF potential as a function of the iteration number during the SCF loop.
PLOT-maxforce.py: Python visualization tool for the maximum amplitude of the force on the atoms during relaxation.
PLOT-checkderiv.py: Python visualization tool for the calculation of derivatives at zero strain using the fit of energy-vs-strain curves.
PLOT-optimized-geometry.py: Python visualization tool for relaxed coordinates of atoms in the unit cell.

From now on the symbol $ will indicate the shell prompt.

Requirements: Bash shell. Python numpy, lxml, matplotlib.pyplot, and sys libraries.

1. Theoretical background

The energy of a crystal depends on the state of distortion in which the crystal is found. Usually, the "measure" of the state of distortion is given either in terms of the physical-strain matrix $\pmb{\varepsilon}$ or in terms of the Lagrangian-strain matrix $\pmb{\eta}$, where $\pmb{\varepsilon}$ and $\pmb{\eta}$ are symmetric matrices and are related to each other by the relationship

$$\pmb{\eta} = \pmb{\varepsilon} + \frac{1}{2}\,\pmb{\varepsilon}\,\pmb{\varepsilon}. \tag{3}$$

The actual deformation of the crystal is given by

$$\pmb{R}' = \left(\pmb{1} + \pmb{\varepsilon}\right)\pmb{R}, \tag{4}$$

where $\pmb{R}$ and $\pmb{R}'$ are the lattice vectors before and after the deformation. Using the previous definitions, the total energy of a crystal can be written as a Taylor expansion in terms of powers of the Lagrangian strain

$$E(V,\pmb{\eta}) = E(V_0,0) + V_0\Big[\sum_{ij}\sigma^0_{ij}\,\eta_{ij} + \frac{1}{2}\sum_{ijkl}C_{ijkl}\,\eta_{ij}\eta_{kl} + \cdots\Big], \tag{5}$$

where $\sigma^0_{ij}=0$ if the reference (unstrained) configuration is the equilibrium one. The other coefficients in this Taylor expansion are defined as elastic constants of different order; e.g., the elastic constants of the 2nd order (also called linear elastic constants) are

$$C_{ijkl} = \frac{1}{V_0}\,\frac{\partial^2 E}{\partial\eta_{ij}\,\partial\eta_{kl}}\bigg|_{\pmb{\eta}=0}.$$

The results above are often expressed in terms of the Voigt notation, which is a way to represent a symmetric tensor by reducing its order. Thus, following this notation, the strain matrices above can be represented as vectors in a six-dimensional space:

$$\pmb{\eta} = (\eta_1,\eta_2,\eta_3,\eta_4,\eta_5,\eta_6), \quad \eta_1=\eta_{11},\ \eta_2=\eta_{22},\ \eta_3=\eta_{33},\ \eta_4=2\eta_{23},\ \eta_5=2\eta_{13},\ \eta_6=2\eta_{12}.$$

The Taylor expansion of the elastic energy can be rewritten using Voigt notation as

$$E = E_0 + V_0\Big[\sum_{i}\sigma^0_{i}\,\eta_{i} + \frac{1}{2}\sum_{ij}C^{(2)}_{ij}\,\eta_{i}\eta_{j} + \cdots\Big], \tag{8}$$

where $\pmb{C}^{(2)}$ is the 6$\times$6 matrix

$$\pmb{C}^{(2)} = \big(C_{ij}\big), \quad i,j = 1,\dots,6. \tag{9}$$

For a given crystal structure, the number of independent elastic-constant components can be reduced using the crystal symmetry. For instance, for cubic systems the matrix of the elastic constants reduces to

$$\pmb{C}^{(2)} = \begin{pmatrix} C_{11} & C_{12} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{11} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{12} & C_{11} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{44} \end{pmatrix}. \tag{10}$$

2. Set up the calculations

i) Preparation of the input file

The first step is to create a directory for each system that you want to investigate. Here, we consider the calculation of the energy-vs-strain curves for carbon in the diamond structure. However, the procedure we show you is valid for any system. Thus, we will create a directory, diamond-elastic-strain, and move inside it.
$ mkdir diamond-elastic-strain
$ cd diamond-elastic-strain

Inside this directory, we create (or copy from a previous calculation) the file input.xml corresponding to a calculation for the equilibrium structure of diamond. This file could look like the following.

<input>
  <title>Diamond: Equilibrium structure</title>
  <structure speciespath="$EXCITINGROOT/species">
    <crystal scale="6.714">
      <basevect> 0.5 0.5 0.0 </basevect>
      <basevect> 0.5 0.0 0.5 </basevect>
      <basevect> 0.0 0.5 0.5 </basevect>
    </crystal>
    <species speciesfile="C.xml" rmt="1.25">
      <atom coord="0.00 0.00 0.00" />
      <atom coord="0.25 0.25 0.25" />
    </species>
  </structure>
  <groundstate ngridk="8 8 8" swidth="0.0001" gmaxvr="14" xctype="GGA_PBE_SOL">
  </groundstate>
  <relax/>
</input>

Please remember that the input file for an exciting calculation must always be called input.xml.

Be sure to set the correct path for the exciting root directory (indicated in this example by $EXCITINGROOT) to the one pointing to the place where the exciting directory is placed. In order to do this, use the command

$ SETUP-excitingroot.sh

Be sure to have in your input.xml file the appropriate command for performing the structure optimization: <relax/>. Deforming your system may change the relative positions of the atoms in the unit cell.

ii) Generation of input files for distorted structures

All strains considered in this tutorial are Lagrangian strains. In order to generate input files for a series of distorted structures, you have to run the script SETUP-elastic-strain.py.

Notice that the script SETUP-elastic-strain.py always generates a working directory containing input files for the different strains. Results of the current calculations will also be stored in the working directory. The directory name can be specified by adding the name in the command line.

$ SETUP-elastic-strain.py DIRECTORYNAME

If no name is given, the script uses the default name workdir.

Very important: The working directory is overwritten each time you execute the script SETUP-elastic-strain.py. Therefore, choose different names for different calculations.

The script SETUP-elastic-strain.py produces the following output on the screen (using deformation-0 as working directory).

$ SETUP-elastic-strain.py deformation-0
Enter maximum Lagrangian strain [smax] >>>> 0.10
Enter the number of strain values in [-smax,smax] >>>> 11
------------------------------------------------------------------------
 List of deformation codes for strains in Voigt notation
------------------------------------------------------------------------
  0 => ( eta,  eta,  eta,    0,    0,    0) | volume strain
  1 => ( eta,    0,    0,    0,    0,    0) | linear strain along x
  2 => (   0,  eta,    0,    0,    0,    0) | linear strain along y
  3 => (   0,    0,  eta,    0,    0,    0) | linear strain along z
  4 => (   0,    0,    0,  eta,    0,    0) | yz shear strain
  5 => (   0,    0,    0,    0,  eta,    0) | xz shear strain
  6 => (   0,    0,    0,    0,    0,  eta) | xy shear strain
  7 => (   0,    0,    0,  eta,  eta,  eta) | shear strain along (111)
  8 => ( eta,  eta,    0,    0,    0,    0) | xy in-plane strain
  9 => ( eta, -eta,    0,    0,    0,    0) | xy in-plane shear strain
 10 => ( eta,  eta,  eta,  eta,  eta,  eta) | global strain
 11 => ( eta,    0,    0,  eta,    0,    0) | mixed strain
 12 => ( eta,    0,    0,    0,  eta,    0) | mixed strain
 13 => ( eta,    0,    0,    0,    0,  eta) | mixed strain
 14 => ( eta,  eta,    0,  eta,    0,    0) | mixed strain
------------------------------------------------------------------------
Enter deformation code >>>> 0
$

In this example, on-screen input entries are preceded by the symbol ">>>>". Entry values must be typed on the screen when requested.
The first entry (in our example 0.10) represents the absolute value of the maximum strain for which we want to perform the calculation. The second entry (11) is the number of deformed structures, equally spaced in strain, which are generated between the maximum negative strain and the maximum positive one. The third (last) entry (0) is a self-explanatory label indicating the type of deformation. The latter always refers to strain tensors written in the Voigt notation (so that, e.g., a strain value of 0.10 corresponds, for the choice 1 of the deformation code, to a linear deformation of 10% along the x direction). After running the script, a directory called deformation-0 is created, which contains the input files for the different strain values.

3. Execute the calculations

To execute the series of calculations with the input files created by SETUP-elastic-strain.py, you have to run the script EXECUTE-elastic-strain.sh. If a name for the working directory has been specified, then you must give it here, too.

$ EXECUTE-elastic-strain.sh deformation-0
===> Output directory is "deformation-0" <===
Running exciting for file input-01.xml ----------------------------------
...
Run completed for file input-11.xml -------------------------------------
$

After the complete run, move to the working directory deformation-0.

$ cd deformation-0

Inside this directory, the results of the calculation for the input file input-i.xml are contained in the subdirectory rundir-i, where i runs from 01 to the total number of strain values. The data for the energy-vs-strain curves are contained in the file energy-vs-strain.

4. Post-processing: Extract energy derivatives

At this point, inside the directory deformation-0, you can use the python script CHECKFIT-energy-vs-strain.py for extracting derivatives at zero strain of energy-vs-strain curves.

$ CHECKFIT-energy-vs-strain.py
Enter maximum strain for the fit >>>> 0.10
Enter the order of derivative >>>> 2
###########################################
Fit data
-----------------------------------
Deformation code             ==> 0
Deformation label            ==> EEE000
Maximum value of the strain  ==> 0.10000000
Number of strain values used ==> 11

Fit results for the derivative of order 2

Polynomial of order 2 ==> 4467.25 [GPa]
Polynomial of order 3 ==> 4467.25 [GPa]
Polynomial of order 4 ==> 4053.37 [GPa]
Polynomial of order 5 ==> 4053.37 [GPa]
Polynomial of order 6 ==> 4060.24 [GPa]
Polynomial of order 7 ==> 4060.24 [GPa]
###########################################
$

In this example, input entries are preceded by the symbol ">>>>". Entry values must be typed on the screen when requested. The first entry (in our example 0.10) represents the absolute value of the maximum strain used in the fit. The second entry (2) is the order of the derivative that we want to obtain. The script generates the output files check-energy-derivatives and order-of-derivative, which can be used in the post-processing analysis. Results of this script can be analyzed using the visualization tool PLOT-checkderiv.py.
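The fitting step performed by CHECKFIT-energy-vs-strain.py can be illustrated with a few lines of numpy. This is only a minimal sketch, not the actual script: it assumes a plain two-column text file (strain, total energy) like energy-vs-strain, and it leaves the unit conversion to GPa (which involves the equilibrium volume) out of the picture.

import numpy as np

# Minimal sketch of the polynomial-fit step (assumed file format: two
# columns, strain and total energy; the real script also handles units).
strain, energy = np.loadtxt("energy-vs-strain", unpack=True)

smax = 0.10                      # maximum strain included in the fit
mask = np.abs(strain) <= smax

for order in range(2, 8):        # polynomial orders 2..7, as in the output above
    coeffs = np.polyfit(strain[mask], energy[mask], order)
    d2 = np.polyder(np.poly1d(coeffs), 2)(0.0)   # 2nd derivative at zero strain
    print(f"Polynomial of order {order} ==> {d2:.6f} [energy units]")

As in the screen output above, the derivative should stabilize once the polynomial order is high enough for the chosen strain range.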
$ PLOT-energy.py

ii) PLOT-checkderiv.py

This is a very important tool that allows one to represent the dependence of the calculated derivatives of the energy-vs-strain curve on the range of points included in the fitting procedure (the "maximum Lagrangian strain") and on the maximum degree of the polynomial used in the fitting procedure ("n"). The script PLOT-checkderiv.py requires as input the check-energy-derivatives and order-of-derivative files generated by CHECKFIT-energy-vs-strain.py and is executed as follows.

$ PLOT-checkderiv.py YMIN YMAX

The optional entries YMIN and YMAX can be specified on the calling line. Assigning numerical values to these two entries, you can set the minimum (YMIN) and the maximum (YMAX) value on the vertical axis, respectively. An example of the script output is the following.

The previous plots can be used to determine the best range of deformations and order of polynomial fit for each distortion. By analyzing the plot, we note that the curves corresponding to the higher orders of the polynomial used in the fit show a horizontal plateau at about 4060 GPa. This can be assumed to be the converged value for the second derivative, from the point of view of the fit (further information on this topic can be found here). For this distortion type, this value equals 9 times the bulk modulus. Thus, the extracted value of the bulk modulus is about 451 GPa.

iii) PLOT-status.py

Python visualization tool for the RMS deviations of the effective SCF potential as a function of the iteration number during the SCF loop. It is executed as follows.

$ PLOT-status.py LABEL

The entry LABEL must be specified and represents the name of the directory where an exciting calculation has been or is still running. The script PLOT-status.py is particularly useful in the latter case because one can follow "live" the convergence of the calculation. An example of the PostScript output of the script is the following. Different line segments correspond to SCF calculations for different geometries during the relaxation.

iv) PLOT-maxforce.py

Python visualization tool for the maximum amplitude of the force on atoms during relaxation. It is useful for deformations which allow for internal relaxation of atomic positions, e.g., for the deformation with the code 7. It is executed as follows.

$ PLOT-maxforce.py LABEL

The input entry definition is the same as for the script PLOT-status.py. If the symmetry of the deformation applied to the crystal is such that no extra force is applied to the atoms (e.g., as it happens for deformations 1 and 2), the output of the script PLOT-maxforce.py will be

Either data file not (yet) ready for visualization
or maximum force target reached already at the initial configuration.

An example of the output (for deformation code 7) is the following. The red points show the calculated value at each optimization step, whereas the blue line indicates the target value of the maximum amplitude of the force for stopping the relaxation.

v) PLOT-optimized-geometry.py

Python visualization tool for showing the optimized geometry, compared to the reference (unrelaxed) geometry, for the relative atomic coordinates of two atoms in the unit cell as a function of the Lagrangian strain. It is useful for deformations which allow for internal relaxation of atomic positions, e.g., for the deformation with the code 7. It is executed as follows.

$ PLOT-optimized-geometry.py ATOM1 ATOM2 YMIN YMAX

The optional entries can be specified on the calling line. The first two entries are numeric labels of the atoms referring to the listing of the atoms in the input file input.xml.
For instance, setting ATOM1 = 2 and ATOM2 = 6 (for a cell containing at least 6 atoms!) means that one considers the second and the sixth atoms, as listed in input.xml (including atoms of all species). If ATOM1 and ATOM2 are not specified, their default values are 1 and 2, respectively. In case ATOM1 and ATOM2 are explicitly given, you can set the minimum (YMIN) and the maximum (YMAX) value on the vertical axis, respectively. An example of the PostScript output of the script for the deformation code 7 is the following. Here, $(\Delta_1, \Delta_2, \Delta_3)$ and $(\Delta_1^{ref}, \Delta_2^{ref}, \Delta_3^{ref})$ represent the position difference vector, $\mathbf{r}_{ATOM2} - \mathbf{r}_{ATOM1}$, expressed in lattice coordinates, for the optimized geometry and the unrelaxed (reference) case, respectively.

6. Post-processing: How to derive elastic constants

Second derivatives calculated at zero strain of energy-vs-strain curves are combinations of the elastic constants C_ij, where the indices i,j = 1,2,...,6 are given in the Voigt notation. In the example that we are considering here, carbon in the cubic diamond structure, only 3 different elastic constants are non-vanishing:

C11, C12, C44

In order to extract these three elastic constants, three different deformation types must be used. For cubic systems the best choice is represented by the following deformation types:

Volume strain (in our script corresponding to the label 0)
Uniaxial strain in the [100] direction (label 1)
Shear strain along the [111] direction (label 7)

These in turn correspond to the following combinations of elastic constants (a short conversion script is sketched after the reference values below):

label 0: 3 C11 + 6 C12 = 9 B0
label 1: C11
label 7: 3 C44

where B0 is the bulk modulus. Experimental reference values for diamond:

C11 = 1076 GPa
C12 = 125 GPa
C44 = 577 GPa
B0 = 452 GPa
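Putting the pieces together, the three fitted second derivatives (in GPa) can be converted into the elastic constants with a few lines of Python. This is a minimal sketch under the combinations listed above; the inputs d0, d1, d7 are hypothetical converged plateau values read off the PLOT-checkderiv.py plots for deformation codes 0, 1, and 7.

# Converged second derivatives at zero strain, in GPa (hypothetical values
# read off the fit plateaus for deformation codes 0, 1 and 7)
d0 = 4060.0   # = 3*C11 + 6*C12 = 9*B0
d1 = 1080.0   # = C11
d7 = 1731.0   # = 3*C44

C11 = d1
C12 = (d0 - 3.0 * C11) / 6.0
C44 = d7 / 3.0
B0 = d0 / 9.0

print(f"C11 = {C11:.0f} GPa, C12 = {C12:.0f} GPa, "
      f"C44 = {C44:.0f} GPa, B0 = {B0:.0f} GPa")

With d0 = 4060 GPa this reproduces the bulk modulus of about 451 GPa quoted above, to be compared with the experimental reference values for diamond.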
Overview

The Fourier transform is an operation which can transform a signal that is described in the time domain (i.e. the x-axis is time) into a signal that is described in the frequency domain (i.e. the x-axis is frequency). Fourier's theorem states that a periodic function f(x) which is reasonably continuous may be expressed as the sum of a series of sine or cosine terms (called the Fourier series), each of which has specific amplitude and phase coefficients known as Fourier coefficients.

A signal of finite duration in time has a continuous spectrum in frequency; conversely, a signal that is periodic in time has a discrete (line) spectrum. The Fourier transform can be thought of as a rotation by 90° in the time-frequency domain. In this sense, four applications of the Fourier transform should result in the original signal.

There are four common Fourier transformations, which are described below:

Continuous-Time Fourier Transform (CTFT)

The Continuous-Time Fourier Transform (CTFT) is also commonly known just as the Fourier Transform. There is one equation which converts a signal in time to a signal in frequency, called the forward transform, and one which goes from the frequency domain back to the time domain (it undoes the forward transform), called the inverse transform. Note that the variable \(t\) (the independent variable) does not necessarily have to represent time. However, when it does (e.g. units in seconds), the transform variable \(f\) represents frequency (e.g. Hertz). All of the equations below will use t and f, since time and frequency are the most common units used with the Fourier transform.

Forward: $$ F(f) = \int_{-\infty}^{\infty} f(t) e^{-j2\pi ft} dt $$

Inverse: $$ f(t) = \int_{-\infty}^{\infty} F(f)e^{j2\pi ft} df $$

Continuous-Time Fourier Series (CTFS)

Forward: $$ F_n = \frac{1}{T_0} \int_{-\frac{T_0}{2}}^{\frac{T_0}{2}} f(t) e^{\frac{-j2\pi nt}{T_0}}dt $$

Inverse: $$ f(t) = \sum_{n=-\infty}^{\infty} F_n e^{\frac{j 2\pi nt}{T_0}} $$

Discrete-Time Fourier Transform (DTFT)

The DTFT of a discrete time series produces a frequency signal that is continuous and periodic.

Forward: $$ F(f) = \sum_{n=-\infty}^{\infty} f(nT)e^{-j2\pi fnT} $$

Inverse: $$ f(nT) = T\int_{-\frac{1}{2T}}^{\frac{1}{2T}} F(f)e^{j2\pi fnT} df $$

Discrete Fourier Transform (DFT)

Forward: $$ F\left(\frac{k}{NT}\right) = \sum_{n=0}^{N-1} f(nT)e^{\frac{-j2\pi nk}{N}} $$

Inverse: $$ f(nT) = \frac{1}{N} \sum_{k=0}^{N-1}F\left(\frac{k}{NT}\right)e^{\frac{j2\pi nk}{N}} $$

The Fast Fourier Transform (FFT)

A fast Fourier transform is a way of calculating the DFT (discrete Fourier transform) of a signal. A Fourier transform is a way of looking at a waveform in the time domain to see what frequencies it is made up of. A fast Fourier transform differs from a naive computation of the DFT by factorizing the DFT matrix into a product of sparse (mostly zero) factors. This reduces the complexity of the DFT algorithm from \( \mathcal{O}(n^2) \) to \( \mathcal{O}(n\log{n}) \). This speed increase means that the FFT is very popular in signal processing applications.

Bin Size

The width of each bin (in Hertz) is equal to:

$$ f_{bin} = \frac{f_s}{N_{bins}} $$

where: \( f_s \) is the sample rate, in Hertz; \( N_{bins} \) is the number of bins. The bins of interest are those from \( 0 \) to \( \frac{N_{bins}}{2} \).

Sampling Rate

By the Nyquist-Shannon sampling theorem, if the sampling rate is, say, 10 kHz, then the maximum captured frequency content will be 5 kHz. This is true when using FFTs.
However, sampling just at the Nyquist rate does not give you great data. As a rule of thumb, if you want to accurately find the frequencies present in a signal with a reasonably low number of samples, the sample rate should be about 10x the maximum frequency of interest.

Number of Samples

Classic radix-2 FFT algorithms require a number of samples which is equal to an integer power of two (e.g. 2, 4, 8, 16, ...).

Frequency vs. Temporal Resolution

There is always a trade-off between frequency and temporal (time-based) resolution. At a fixed sample rate, increasing the frequency resolution decreases the temporal resolution. To increase the frequency resolution, you have to increase the number of bins. This makes a single FFT window span more of the signal, which decreases the temporal resolution (all temporal info within a single FFT window is lost).

Windowing

An FFT samples a waveform over a finite length (you don't/can't measure the signal from time negative infinity to positive infinity), in what is called the window. An FFT algorithm also assumes the signal within the window repeats forever. With most real-world signals, this will result in discontinuities at the edges of the window (the only time this does not happen is if the signal repeats itself, and the window happens to contain an exact integer number of cycles). If nothing is done to the edges of the window, you will get significant spectral leakage. One way to reduce the spectral leakage is to perform windowing, in which the signal is faded in and out over the first few/last few samples (see the numpy sketch at the end of this article).

The 2D Fourier Transform And Images

Because most images are stored digitally, the Discrete Fourier Transform (DFT) is used. Taking a standard input image which is in the spatial domain, the 2D DFT converts the image into the frequency domain. Each pixel in the output image represents a particular frequency in the input spatial-domain image. The number of frequencies in the image is equal to the number of pixels in the image. Obviously, this means that the frequency-domain image will be the same size as the spatial-domain image.

The definition of the 2D Fourier transform (continuous):

$$ \mathcal{F}(u,v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) e^{-j2\pi(ux + vy)} dx\,dy $$

Converted into a 2D discrete Fourier transform, the equation becomes:

$$ \mathcal{F}(k, l) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} f(i,j) e^{-j2\pi (\frac{ki}{M} + \frac{lj}{N})} $$

where: \( f(i, j) \) is the image in the spatial domain. The basis functions are sine and cosine waves with increasing frequencies. \(\mathcal{F}(0, 0)\) represents the DC component in the image (the average brightness), all the way up to \(\mathcal{F}(N-1, N-1)\), which represents the highest frequency in the image. Note that \(\mathcal{F}(0, 0)\) is usually shifted to be in the center of the frequency-domain image.

The Fourier transform of a real-numbered spatial image (i.e. a typical photo) produces a complex-valued image in the frequency domain. Obviously, we can't view an image made of complex numbers. What we can do is display the frequency-domain image as two images, either:

1 image contains the real part of the complex number, the other image displays the imaginary part; or
1 image displays the magnitude, the other image displays the phase (the argument of the complex number).

Often in image processing, we use the magnitude/phase representation, and are mostly interested in the magnitude image.
The magnitude can be written as \(|F(u,v)|\), the phase as \(\phi(u,v) = \arg F(u,v)\).

Code Libraries

The open-source Math.NET Numerics library contains C# FFT code built for the .NET framework.

External Resources

The Fourier Transform series on The Mobile Studio must be one of the best online resources if you are looking into learning more about the Fourier transform. It is a very detailed yet well-explained step-by-step tutorial!
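As a concrete illustration of the bin-size and windowing points above, here is a minimal numpy sketch; the sample rate, number of samples, and test-tone frequency are made-up values for illustration only.

import numpy as np

fs = 1000.0                         # sample rate in Hz (made-up value)
N = 1024                            # number of samples (power of two)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50.0 * t)    # 50 Hz test tone

window = np.hanning(N)              # Hann window: fade in/out to reduce leakage
X = np.fft.rfft(x * window)         # FFT of the windowed signal
freqs = np.fft.rfftfreq(N, d=1/fs)  # frequency of each bin, 0 .. fs/2

print(f"bin width = {fs / N:.3f} Hz")                       # f_s / N_bins
print(f"peak near {freqs[np.argmax(np.abs(X))]:.1f} Hz")    # should be ~50 Hz

Removing the window (replacing x * window with x) and moving the tone slightly off a bin centre makes the spectral leakage described above clearly visible in np.abs(X).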
An electromagnetic interference (EMI) source is connected to a filter and to a LISN. The EMI source is a common-mode source: it generates an undesirable signal which flows from the generator \$ V_g \$, divides equally between the HOT and NEUTRAL branches, and comes back to the generator through ground (notice that the LISN too is connected to the same ground). The filter is placed between the EMI source and the LISN in order to divert the EMI signal to ground before it reaches the AC mains. The input voltage at the LISN is a measure of the EMI signal which will exit into the AC mains. There are two available topologies for the filter. First configuration: Second configuration: \$ L_1 \$ and \$ L_2 \$ are (in both circuits) mutually coupled inductors, with \$ L_1 = L_2 = L \simeq M \$, so both have a total inductance of \$ L + M \simeq 2L\$. Notice that the voltages \$ V_L \$ are the same because the two branches are equal and the current produced by \$ V_g \$ splits equally between the two branches. The first configuration gives a higher \$ V_L \$ than the second, so the first configuration is the worst. Why? I drew the equivalent circuits for the whole signal: the return path in which I am interested is through ground. So, in the first configuration the equivalent circuit is and in the second one it is as follows: The left resistance is \$ R_{LISN} / 2 \$ because it is the parallel of the input resistances of the LISN seen from HOT to ground and from NEUTRAL to ground: if they are both \$ R_{LISN} \$, \$ R_{LISN} \| R_{LISN} = R_{LISN} / 2 \$. Then I obtained the \$ V_L \$ voltages: in the first case (writing the impedance of the two Y-capacitors in parallel as \$ \frac{1}{2 s C_y} \$) it is $$V_L = V_g \displaystyle \frac{(R_{LISN} / 2) \| \frac{1}{2 s C_y}}{R_g + sL + (R_{LISN} / 2) \| \frac{1}{2 s C_y}}$$ and in the second one it is much more complicated. If I did not make mistakes it should be $$V_L = V_g (R_{LISN} / 2) \displaystyle \frac{sL + (R_{LISN} / 2)}{R_g\left(\frac{1}{2 s C_y} + sL + (R_{LISN} / 2)\right) + \frac{1}{2 s C_y}(sL + R_{LISN})}$$ I know that \$ R_g \gg R_{LISN} \$; moreover, in such cases the inductors should be placed next to a small impedance and the capacitors next to a large impedance. This is confirmed by the fact that the second configuration is better than the first. But, again: why? I can't immediately see this from the expressions I obtained: both have \$ R_g \$ in the denominator. Even if it is not immediately visible from the maths, what could be the physical reason for this behaviour?
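To see how the two expressions compare over frequency, one can evaluate them numerically. This is a minimal sketch: all component values below are made up, and the second expression is taken from the question as written, so it inherits any mistakes in the derivation.

import numpy as np

# Made-up example values
Rg, Rlisn, L, Cy = 1e3, 50.0, 1e-3, 10e-9
f = np.logspace(3, 7, 5)            # 1 kHz .. 10 MHz
s = 1j * 2 * np.pi * f

Zc = 1.0 / (2 * s * Cy)             # two Y-capacitors in parallel
Zr = Rlisn / 2.0
Zpar = Zr * Zc / (Zr + Zc)          # (R_LISN/2) || 1/(2sCy)

# First configuration (inductor on the source side)
VL1 = Zpar / (Rg + s * L + Zpar)

# Second configuration, using the expression from the question
VL2 = Zr * (s * L + Zr) / (Rg * (Zc + s * L + Zr) + Zc * (s * L + Rlisn))

for fi, v1, v2 in zip(f, np.abs(VL1), np.abs(VL2)):
    print(f"{fi:10.0f} Hz  |VL1| = {v1:.3e}  |VL2| = {v2:.3e}")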
Before the measurement of an observable, the quantum state is $$|\Phi\rangle = \sum_i c_i |\psi_i \rangle,$$ with the $|\psi_i \rangle$ called "pure states". Once the measurement is done, the quantum state $|\Phi\rangle$ is projected onto one of the pure states $|\psi_{i}\rangle$. Questions: Is a pure state an eigenvector of the observable used for the measurement? Before measurement, is the quantum state $|\Phi\rangle$ a superposition of pure states $|\psi_{i}\rangle$? What is the relation between the coefficients $c_i$ above and the probability $p_i$ of finding the system in a pure state $|\psi_i\rangle$ (once the quantum state is projected, i.e. the measurement is performed)? Can we write $|c_{i}|^2 = p_{i}$? From the normalization condition for the quantum state $|\Phi\rangle$ we have $$\langle\Phi|\Phi\rangle = 1 = \sum_{i} |c_{i}|^2 = \sum_{i} p_{i} = 1 \, ,$$ but I can only arrive at $\left|c_i \right|^2 = p_i$ if the pure-state basis is orthonormal $\left(\text{i.e.,}~\langle\psi_i|\psi_j\rangle=\delta_{ij}\right)$, can't I? Thanks all, just a last question: so I should rather think that a pure state can also be a superposition of basis states, which are themselves eigenvectors of an observable. A pure state need not be a single eigenvector; it can be a linear combination of eigenvectors, can't it?
Let us define the auxiliary process $\Lambda_t=e^{\kappa t}\lambda_t$. Note that: $$ \Lambda_t = e^{\kappa t}\lambda_0 + \kappa e^{\kappa t} \int_0^t(\rho_s-\lambda_s)ds+\delta e^{\kappa t}\int_0^t dN_s$$ Hence after a jump occurs at $t$: $$ \Lambda_t=\Lambda_{t-}+\delta e^{\kappa t}$$ Therefore, by Ito's lemma for jump-diffusion processes: $$ \begin{align}d\Lambda_t & = \frac{\partial \Lambda_t}{\partial t}dt+\frac{\partial \Lambda_t}{\partial \lambda_t}\kappa(\rho_t-\lambda_t)dt+(\Lambda_t-\Lambda_{t-})dN_t\\[9pt]& = \kappa e^{\kappa t}\rho_tdt+\delta e^{\kappa t}dN_t\end{align}$$ Integrating: $$ \Lambda_t=\Lambda_0+\kappa\int_0^te^{\kappa s}\rho_sds+\delta\int_0^te^{\kappa s}dN_s$$ Finally, since $\Lambda_0 = \lambda_0$ and $\lambda_t = e^{-\kappa t}\Lambda_t$: $$ \lambda_t=\lambda_0 e^{-\kappa t}+\kappa\int_0^te^{\kappa (s-t)}\rho_sds+\delta\int_0^te^{\kappa (s-t)}dN_s$$ You notice that in the original SDE the following term is the "nuisance": $$d\lambda_t = \cdots + \left(-\kappa \lambda_t dt\right) + \cdots$$ which corresponds to the differential of an exponential with constant $\kappa$: $$ dx_t = \kappa x_tdt \quad \Leftrightarrow \quad x_t = Ce^{\kappa t}$$ Hence you need to try to get rid of it by making a $+\kappa \lambda_t dt$ term appear somehow, which can be achieved by differentiating an exponential with constant $\kappa$ through Ito's lemma applied to $\Lambda_t$ as defined above.
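As a quick sanity check of the closed form, one can simulate the SDE with an Euler scheme and compare against the formula. This is a minimal sketch with made-up parameters, a constant reversion level $\rho$, and a single fixed jump time (so it is not a full self-exciting simulation):

import numpy as np

kappa, delta, lam0, rho = 2.0, 0.5, 1.0, 1.5
T, n = 5.0, 200000
dt = T / n
t_grid = np.linspace(0.0, T, n + 1)
jump_time = 1.7                      # single, fixed jump for illustration

# Euler scheme for d(lambda) = kappa*(rho - lambda) dt + delta dN
lam = np.empty(n + 1)
lam[0] = lam0
for i in range(n):
    lam[i + 1] = lam[i] + kappa * (rho - lam[i]) * dt
    if t_grid[i] < jump_time <= t_grid[i + 1]:
        lam[i + 1] += delta

# Closed form: lam0*e^{-kt} + rho*(1 - e^{-kt}) + delta*e^{-k(t - t_j)} 1{t >= t_j}
cf = lam0 * np.exp(-kappa * t_grid) + rho * (1 - np.exp(-kappa * t_grid))
cf += delta * np.exp(-kappa * (t_grid - jump_time)) * (t_grid >= jump_time)

print(f"max abs error = {np.max(np.abs(lam - cf)):.2e}")   # should be small

Note the $\lambda_0 e^{-\kappa t}$ term in the closed form: dropping the discount factor on $\lambda_0$ would make the comparison fail immediately.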
Probability of an event can be defined as the likelihood of an event to occur. Suppose we have a ground of 200 square feet, and we want to throw a ball such that it hits somewhere in a specified area of 20 square feet; what is the probability of the ball hitting the desired area? This probability of hitting a certain area within a given space is known as geometric probability. For the above question, the geometric probability is $\frac{20}{200}$ = $\frac{1}{10}$. Formula To find the geometric probability, we need the total area and the desired area. The total area is the area of the given space, and the desired area is the area we are aiming at. The geometric probability is the ratio of the desired area to the total area. $Geometric\ Probability$ = $\frac{\text {Desired Area}}{\text {Total Area}}$ Example: A big square has a side of 24 cm, and has a small square within it of area 16 square cm. Find the probability of a dart hitting the small square. Solution: The total area is the area of the big square. Hence, total area = $24\ \times\ 24$ = $576$. The desired area is given as the area of the small square, that is, 16. Geometric probability = $\frac{16}{576}$ = $\frac{1}{36}$. Hence, the probability of hitting the small square is $\frac{1}{36}$. Examples Let us consider a few examples: Example 1: Suppose there is a rectangular playground with sides 15 feet and 25 feet respectively. It has a semicircular area on one of its sides with radius 5 feet. If a ball hits that area, it will be counted as a point. Find the probability of getting a point. Solution: The total area is the area of the rectangular field. Hence, total area = $15\ \times\ 25$ = $375$. The desired area is the area of the semicircle. Desired area = $\frac{\pi \times\ 5^2}{2}$ = $12.5\pi$. Geometric probability = $\frac{12.5\pi}{375}$ = $0.10472$. Hence, the probability of getting a point is $0.10472$. Example 2: In a circle of radius 5 cm, there is a square of side 2 cm. Find the probability that a dart thrown hits the circle but not the square. Solution: Total area = $\pi\ \times\ 5^2$ = $25\pi$. Area of square = $2\ \times\ 2$ = $4$. Desired area = $25\pi$ - $4$. Geometric probability = $\frac{25\pi\ -\ 4}{25\pi}$ = $0.94907$. Hence, the probability is $0.94907$.
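Geometric probabilities like Example 2 are easy to check with a Monte Carlo simulation: throw random darts uniformly over the circle's bounding box, keep those that land in the circle, and count how many miss the square. This is a minimal sketch; it assumes the square is centered in the circle, which the problem does not actually specify, but the ratio is the same wherever the square lies inside the circle.

import random

random.seed(0)
R, half_side = 5.0, 1.0          # circle radius, half of the square's side
in_circle = in_circle_not_square = 0

for _ in range(1_000_000):
    x = random.uniform(-R, R)
    y = random.uniform(-R, R)
    if x * x + y * y <= R * R:                                 # dart landed in the circle
        in_circle += 1
        if not (abs(x) <= half_side and abs(y) <= half_side):  # ...but not in the square
            in_circle_not_square += 1

print(in_circle_not_square / in_circle)   # close to (25*pi - 4)/(25*pi) = 0.94907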
There are proofs that treat the cases of real and non-real $\chi$ on an equal footing. One proof is in Serre's Course in Arithmetic, which the answers by Pete and David are basically about. That method is using the (hidden) fact that the zeta-function of the $m$-th cyclotomic field has a simple pole at $s = 1$, just like the Riemann zeta-function. Here is another proof which focuses only on the $L$-function of the character $\chi$ under discussion, the $L$-function of the conjugate character, and the Riemann zeta-function. Consider the product$$H(s) = \zeta(s)^2L(s,\chi)L(s,\overline{\chi}).$$This function is analytic for $\sigma > 0$, with the possible exception of a pole at $s = 1$. (As usual I write $s = \sigma + it$.) Assume $L(1,\chi) = 0$. Then also $L(1,\overline{\chi}) = 0$. So in the product defining $H(s)$, the double pole of $\zeta(s)^2$ at $s = 1$ is cancelled and $H(s)$ is therefore analytic throughout the half-plane $\sigma > 0$. For $\sigma > 1$, we have the exponential representation $$H(s) = \exp\left(\sum_{p, k} \frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{kp^{ks}}\right),$$where the sum is over $k \geq 1$ and primes $p$. If $p$ does not divide $m$, then we write $\chi(p) = e^{i\theta_p}$ and find $$\frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{k} = \frac{2(1 + \cos(k\theta_p))}{k} \geq 0.$$ If $p$ divides $m$ then this sum is $2/k > 0$. Either way, inside that exponential is a Dirichlet series with nonnegative coefficients, so when we exponentiate and rearrange terms (on the half-plane of abs. convergence, namely where $\sigma > 1$), we see that $H(s)$ is a Dirichlet series with nonnegative coefficients. A lemma of Landau on Dirichlet series with nonnegative coefficients then assures us that the Dirichlet series representation of $H(s)$ is valid on any half-plane where $H(s)$ can be analytically continued. To get a contradiction at this point, here are several methods. [Edit: The answer by J.H.S. contains the slickest argument I have seen, due to Bateman, so let me put it here. The idea is to look at the coefficient of $1/p^{2s}$ in the Dirichlet series for $H(s)$. By multiplying out the $p$-part of the Euler product, the coefficient of $1/p^s$ is $2 + \chi(p) + \overline{\chi}(p)$, which is nonnegative, but the coefficient of $1/p^{2s}$ is $(\chi(p) + \overline{\chi}(p) + 1)^2 + 1$, which is not only nonnegative but in fact is greater than or equal to 1. Therefore if $H(s)$ has an analytic continuation along the real line out to the number $\sigma$, then for real $s \geq \sigma$ we have $H(s) \geq \sum_{p} 1/p^{2s}$. The hypothesis that $L(1,\chi) = 0$ makes $H(s)$ analytic for all complex numbers with positive real part, so we can take $s = 1/2$ and get $H(1/2) \geq \sum_{p} 1/p$, which is absurd since that series over the primes diverges. QED!] If you are willing to accept that $L(s,\chi)$ (and therefore $L(s,\overline{\chi})$) has an analytic continuation to the whole plane, or at least out to the point $s = -2$, then $H(s)$ extends to $s = -2$. The Dirichlet series representation of $H(s)$ is convergent at $s = -2$ by our analytic continuation hypothesis and it shows $H(-2) > 1$, or the exponential representation implies that at least $H(-2) \not= 0$. But $\zeta(-2) = 0$, so $H(-2) = 0$. Either way, we have a contradiction. There is a similar argument, pointed out to me by Adrian Barbu, that does not require analytic continuation of $L(s,\chi)$ beyond the half-plane $\sigma > 0$.
If you are willing to accept that $\zeta(s)$ has zeros in the critical strip $0 < \sigma < 1$ (a region in which the Dirichlet series and exponential representations of $H(s)$ are both valid, since $H(s)$ is analytic on $\sigma > 0$), we can evaluate the exponential representation of $H(s)$ at such a zero to get a contradiction. Of course the amount of analysis that lies behind this is more substantial than what is used to continue $L(s,\chi)$ out to $s = -2$. We consider $H(s)$ as $s \rightarrow 0^{+}$. We need to accept that $H$ is bounded as $s \rightarrow 0^{+}$. (It's even holomorphic there, but we don't quite need that.) For real $s > 0$ and a fixed prime $p_0$ (not dividing $m$, say), we can bound $H(s)$ from below by the sum of the $p_0$-power terms in its Dirichlet series. The sum of these terms is exactly the $p_0$-Euler factor of $H(s)$, so we have the lower bound $$H(s) > \frac{1}{(1 - p_0^{-s})^2(1 - \chi(p_0)p_0^{-s})(1 - \overline{\chi}(p_0)p_0^{-s})} = \frac{1}{(1 - p_0^{-s})^2(1 - (\chi(p_0)+ \overline{\chi}(p_0))p_{0}^{-s} + p_0^{-2s})}$$for real $s > 0$. The right side tends to $\infty$ as $s \rightarrow 0^{+}$. We have a contradiction. QED These three arguments at some point use knowledge beyond the half-plane $\sigma > 0$ or a nontrivial zero of the zeta-function. Granting any of those lets you see easily that $H(s)$ can't vanish at $s = 1$, but that "granting" may seem overly technical. If you want a proof for the real and complex cases uniformly which does not go outside the region $\sigma > 0$, use the method in the answer by Pete or David [edit: or use the method I edited in as the first one in this answer].
Some results about a bidimensional version of the generalized BO

1. Departamento de Matemática, IMECC-UNICAMP, 13081-970, Campinas, SP, Brazil

For the equation

$$u_t - H^{(x)}u_{xy} + u^p u_y = 0, \quad t\in \mathbb R,\quad (x,y)\in \mathbb R^2,$$

we use the method of parabolic regularization to prove local well-posedness in the spaces $H^s(\mathbb R^2)$, $s>2$, and in the weighted spaces $\mathcal F_r^s = H^s(\mathbb R^2) \cap L^2((1+x^2+y^2)^r\,dx\,dy)$, $s>2$, $r\in [0,1]$, and $\mathcal F_{1,k}^k = H^k(\mathbb R^2) \cap L^2((1+x^2+y^{2k})\,dx\,dy)$, $k\in\mathbb N$, $k\geq 3$. As in the case of BO, there is lack of persistence for both the linear and nonlinear equations (for $p$ odd) in $\mathcal F_2^s$. That leads to unique continuation principles in a natural way. By standard methods based on $L^p$-$L^q$ estimates of the associated group, we obtain global well-posedness for small initial data and nonlinear scattering for $p\geq 3$, $s>3$. Nonexistence of square integrable solitary waves of the form $u(x,y,t)=v(x,y-ct)$, $c>0$, $p\in \{1,2\}$, is obtained using the results about existence of solitary waves of the BO and variational methods.

Keywords: well-posedness, nonlinear dispersive equations, Benjamin-Ono equation, nonlinear scattering, unique continuation principles, solitary waves.

Mathematics Subject Classification: 35Q35, 35Q5.

Citation: Aniura Milanés. Some results about a bidimensional version of the generalized BO. Communications on Pure & Applied Analysis, 2003, 2 (2): 233-249. doi: 10.3934/cpaa.2003.2.233
Siril processing tutorial

Convert your images in the FITS format Siril uses (image import)
Work on a sequence of converted images
Pre-processing images
Registration (Global star alignment)
→ Stacking

Stacking

The final step to do with Siril is to stack the images. Go to the "stacking" tab, indicate if you want to stack all images, only selected images or the best images regarding the value of FWHM previously computed. Siril proposes several algorithms for stacking computation.

Sum Stacking

This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.

Average Stacking With Rejection

Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images).
Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is farther than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack.
Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2].
Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.

These algorithms are very efficient at removing satellite/plane tracks (a small numpy sketch of the basic sigma-clipping step is given after this section).

Median Stacking

This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].

Pixel Maximum Stacking

This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater.

Pixel Minimum Stacking

This algorithm is mainly used for cropping a sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if their intensity is lower.
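To make the sigma-clipping idea concrete, here is a minimal numpy sketch of one rejection pass over a stack of aligned frames. This is only an illustration of the general technique, not Siril's actual implementation; Siril iterates this step and offers the variants listed above.

import numpy as np

def sigma_clip_average(stack, sigma_low=4.0, sigma_high=3.0):
    """One rejection pass: mask pixels too far from the per-pixel median,
    then average the surviving pixels. stack has shape (n_images, H, W)."""
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0)
    # Reject pixels below median - sigma_low*std or above median + sigma_high*std
    keep = (stack >= med - sigma_low * std) & (stack <= med + sigma_high * std)
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

# Example: 12 noisy frames with a bright satellite streak in one frame
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(12, 64, 64))
frames[3, 30, :] += 500.0          # simulated satellite track
result = sigma_clip_average(frames)
print(result[30].max())            # streak largely rejected from the average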
In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The output console thus gives the following result:

14:33:06: Pixel rejection in channel #0: 0.181% - 1.184%
14:33:06: Pixel rejection in channel #1: 0.151% - 1.176%
14:33:06: Pixel rejection in channel #2: 0.111% - 1.118%
14:33:06: Integration of 12 images:
14:33:06: Pixel combination ......... average
14:33:06: Normalization ............. additive + scaling
14:33:06: Pixel rejection ........... Winsorized sigma clipping
14:33:06: Rejection parameters ...... low=4.000 high=3.000
14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
14:33:07: Execution time: 9.98 s.
14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04)
14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05)
14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05)

After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display modes. In our example the file is the stack result of all files, i.e., 12 files. The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio with respect to the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]21/5.1 = 4.11[/math], to be compared with the expected [math]\sqrt{12} = 3.46[/math], and you should try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Now you can start processing the image with crop, background extraction (to remove gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril:

Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
Juan Conejero, ImageIntegration, PixInsight Tutorial
A year ago, I was not able to understand the following. First Shreve defines $V_n$ as follows: Definition 4.4.1. For each $n$, $n = 0,1,\cdots, N$, let $G_n$ be a random variable depending on the first $n$ coin tosses. An American derivative security with intrinsic value process $G_n$ is a contract that can be exercised at any time prior to and including time $N$ and, if exercised at time $n$, pays off $G_n$. We define the price process $V_n$ for this contract by the American risk-neutral pricing formula $$V_n= \max_{\tau \in \mathcal{S}_n} \widetilde{\mathbb{E}}_n\Big[\mathbb{I}_{\{\tau \leq N\}}\frac{G_\tau}{(1+r)^{\tau-n}} \Big], \quad n = 0, 1, \cdots, N$$ Then important properties of $V_n$ (as defined above!) are proved in Theorem 4.4.2. The American derivative security price process given by Definition 4.4.1 has the following properties: (i) $V_n \geq \max\{G_n, 0\}$ for all $n$ (ii) the discounted process $\frac{V_n}{(1+r)^n}$ is a supermartingale (iii) if $Y_n$ is another process satisfying $Y_n \geq \max\{G_n, 0\}$ for all $n$ and for which $\frac{Y_n}{(1+r)^n}$ is a supermartingale, then $Y_n \geq V_n$ for all $n$. We summarize property (iii) by saying that $V_n$ is the smallest process satisfying (i) and (ii). Then, in Theorem 4.4.3, Shreve redefines $V_n$ as a Snell envelope process (though Shreve does not use this term): Theorem 4.4.3. We have the following pricing algorithm for the path-dependent derivative security price process given by Definition 4.4.1: $V_N(\omega_1 \cdots \omega_N) = \max{\{G_N, 0\}}$, $V_n(\omega_1 \cdots \omega_n) = \max\{ G_n(\omega_1 \cdots \omega_n), \frac{1}{1+r}[\tilde{p}V_{n+1}(\omega_1 \cdots \omega_nH) + \tilde{q}V_{n+1}(\omega_1 \cdots \omega_nT)]\}$. Shreve proves the theorem by showing that the redefined $V_n$ satisfies the conditions of Theorem 4.4.2 and concludes that $$V_n = \max\Big\{ G_n, \frac{1}{1+r}[\tilde{p}V_{n+1} + \tilde{q}V_{n+1}]\Big\} = \max_{\tau \in \mathcal{S}_n} \widetilde{\mathbb{E}}_n\Big[\mathbb{I}_{\{\tau \leq N\}}\frac{G_\tau}{(1+r)^{\tau-n}} \Big] \tag{A}\label{A}$$ From now on Shreve uses the redefined $V_n$. The optimal exercise time is defined in Theorem 4.4.5. The stopping time $$\tau^* = \min\{n;\ G_n = V_n\}$$ maximizes the right-hand side of (4.4.1) when $n=0$; i.e., $$ V_0 = \widetilde{\mathbb{E}}\Big[\mathbb{I}_{\{\tau^* \leq N\}}\frac{G_{\tau^*}}{(1+r)^{\tau^*}} \Big]$$ He proves that the stopped discounted process is a martingale: $$ \frac{V_{n\wedge \tau^*}}{(1+r)^{n\wedge\tau^*}} = \widetilde{\mathbb{E}}_n\bigg[\frac{V_{(n+1)\wedge \tau^*}}{(1+r)^{(n+1)\wedge\tau^*}}\bigg] \tag{B}\label{B}$$ From $\eqref{A}$ we conclude: $$ V_0 = \max_{\tau \in \mathcal{S}_0} \widetilde{\mathbb{E}}\Big[\mathbb{I}_{\{\tau \leq N\}}\frac{G_{\tau}}{(1+r)^{\tau}} \Big] $$ From $\eqref{B}$ we conclude: $$ V_0 = V_{0 \wedge \tau^*} = \widetilde{\mathbb{E}}\bigg[\frac{V_{N\wedge \tau^*}}{(1+r)^{N\wedge\tau^*}}\bigg]$$ so the remainder of Shreve's proof should be clear now. For another proof of the optimal exercise theorem, and in general a better explanation of the topic, I highly recommend Musiela & Rutkowski's "Martingale Methods in Financial Modelling", referenced by @Gordon on many occasions.
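The backward recursion of Theorem 4.4.3 translates directly into code. Here is a minimal sketch for an American put in the binomial model; the payoff $G_n = K - S_n$ and all parameter values below are my own choices for illustration, not Shreve's.

import numpy as np

# Binomial model parameters (illustrative values)
S0, K, r, u, d, N = 4.0, 5.0, 0.25, 2.0, 0.5, 3
p_tilde = (1 + r - d) / (u - d)          # risk-neutral up probability
q_tilde = 1 - p_tilde

# Terminal values: V_N = max(G_N, 0), with G_n = K - S_n (American put)
S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
V = np.maximum(K - S, 0.0)

# Backward recursion of Theorem 4.4.3:
# V_n = max( G_n, (p~ V_{n+1}(H) + q~ V_{n+1}(T)) / (1+r) )
for n in range(N - 1, -1, -1):
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    cont = (p_tilde * V[:-1] + q_tilde * V[1:]) / (1 + r)
    V = np.maximum(K - S, cont)

print(f"V_0 = {V[0]:.4f}")

At each node the price is the larger of the intrinsic value and the discounted risk-neutral continuation value, exactly the Snell-envelope property discussed above; the optimal exercise time $\tau^*$ is the first $n$ at which the intrinsic value attains the maximum.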
Open the Terminal and install Enchant: sudo apt-get install enchant
Open LyX and select Tools > Preferences > Language Settings > Spellchecker. Select Enchant for Spellchecker Engine. Click on Apply, Save and finally Close.

Wednesday, September 30, 2009
To create a multiple column layout in LaTeX is easy. What you need is the multicol package:
\usepackage{multicol}
For example, to create a two column layout:
\begin{multicols}{2} put the contents here \end{multicols}
To manually break the column, use the \columnbreak command:
\begin{multicols}{2} Column 1 \columnbreak Column 2 \end{multicols}
If you are using a standard LaTeX class, such as "article", you can simply create a two column layout using the code below:
\documentclass[twocolumn]{article}

Friday, September 18, 2009
Gummi. The main feature of Gummi is that the Preview Pane will update as you are typing. In my opinion, this feature is good if you are learning LaTeX. Give Gummi a try, and tell me your opinion. Click here to download.

Tuesday, June 9, 2009
To change the word Figure or Table, what you need is \renewcommand{}{}. Example:
Change Figure to Rajah: \renewcommand{\figurename}{Rajah}
Change Table to Jadual: \renewcommand{\tablename}{Jadual}
What about List of Figures and List of Tables? We can still use the same \renewcommand{}{}. Example:
Change List of Figures to Senarai Rajah: \renewcommand{\listfigurename}{Senarai Rajah}
Change List of Tables to Senarai Jadual: \renewcommand{\listtablename}{Senarai Jadual}

Friday, May 29, 2009
LaTeX has the ability to produce a table of contents automatically for the whole document. The table of contents contains the section numbers and corresponding headings, together with the page numbers on which they begin. This article gives an introduction to generating and printing the table of contents, as well as the lists of figures and tables, in LaTeX.
Printing the table of contents: to display the table of contents, place the command \tableofcontents at the location where the table of contents is to appear. To set the depth of the table of contents, use \setcounter{tocdepth}{the number of depth}. To add an additional entry to the table of contents, use \addcontentsline{toc}{section name}{entry text} or \addtocontents{toc}{entry text}. For example, to include a *-form subsection named Preface in the table of contents:
\subsection*{Preface}
\addcontentsline{toc}{subsection}{Preface}
To include the reference or bibliography entry in the table of contents, use the following command right before the bibliography command:
\addcontentsline{toc}{section}{References}
To produce roman style page numbers for the table of contents and arabic style page numbers for the rest of the document, use:
\pagenumbering{roman}
\tableofcontents
\newpage
\pagenumbering{arabic}
To change the title of the table of contents, use:
\renewcommand{\contentsname}{New table of contents title}
List of figures and tables: besides the table of contents, LaTeX can also produce the list of figures and the list of tables automatically. The commands are
\listoffigures % to produce list of figures
\listoftables % to produce list of tables
The entries in these lists use the entries from the \caption command in the figure and table environments.

Wednesday, May 27, 2009
To insert source code with syntax highlighting, one possibility is to use the Listings package. The Listings package supports a huge number of languages: ABAP, IDL, Plasm, ACSL, inform, POV, Ada, Java, Prolog, Algol, JVMIS, Promela, Ant, ksh, Python, Assembler, Lisp, R, Awk, Logo, Reduce, bash, make, Rexx, Basic, Mathematica, RSL, C, Matlab, Ruby, C++, Mercury, S, Caml, MetaPost, SAS, Clean, Miranda, Scilab, Cobol, Mizar, sh, Comal, ML, SHELXL, csh, Modula-2, Simula, Delphi, MuPAD, SQL, Eiffel, NASTRAN, tcl, Elan, Oberon-2, TeX, erlang, OCL, VBScript, Euphoria, Octave, Verilog, Fortran, Oz, VHDL, GCL, Pascal, VRML, Gnuplot, Perl, XML, Haskell, PHP, XSLT, HTML, and PL/I.
To use the Listings package, first define the package in the preamble:
\usepackage{listings}
Second, define the language to use, anywhere in the document:
\lstset{language=Python}
Finally, insert the source code. To write code within your document:
\begin{lstlisting} put your code here \end{lstlisting}
To import the code from another file:
\lstinputlisting{source_filename.py}
For more information, read the package documentation.

Sunday, May 24, 2009
To add a description about what a particular section is all about, in the table of contents, use the following code:
\addcontentsline{toc}{subsection}{the description}
Add the code after each section declaration.

Monday, May 18, 2009
We can create a matrix in LaTeX using the array environment, or the smallmatrix, matrix, pmatrix, bmatrix, vmatrix, and Vmatrix environments via the amsmath package. This article provides some examples on how to create a matrix in LaTeX.
Creating a matrix with array: here are some examples.
unbracketed matrix \[ M = \begin{array}{cc} x & y \\ z & w \end{array} \]
matrix surrounded by square brackets \[ M = \left[ {\begin{array}{cc} x & y \\ z & w \end{array} } \right] \]
matrix surrounded by parentheses \[ M = \left( {\begin{array}{cc} x & y \\ z & w \end{array} } \right) \]
matrix surrounded by single vertical lines \[ M = \left| {\begin{array}{cc} x & y \\ z & w \end{array} } \right| \]
Using the amsmath package: call \usepackage{amsmath} in the preamble, after \documentclass{}.
The amsmath package environments for matrices:
smallmatrix: inline matrix $M = \begin{smallmatrix} x & y \\ z & w \end{smallmatrix}$, or with parentheses $M = \left( \begin{smallmatrix} x & y \\ z & w \end{smallmatrix}\right)$
matrix: unbracketed matrix $M = \begin{matrix} x & y \\ z & w \end{matrix}$
pmatrix: matrix surrounded by parentheses $M = \begin{pmatrix} x & y \\ z & w \end{pmatrix}$
bmatrix: matrix surrounded by square brackets $M = \begin{bmatrix} x & y \\ z & w \end{bmatrix}$
vmatrix: matrix surrounded by single vertical lines $M = \begin{vmatrix} x & y \\ z & w \end{vmatrix}$
Vmatrix: matrix surrounded by double vertical lines $M = \begin{Vmatrix} x & y \\ z & w \end{Vmatrix}$

Thursday, May 14, 2009
This article will show how to display formulas (mathematical equations) inside a box or frame.
Using fbox and parbox: the formula will be framed; you must declare the width of the frame.
\fbox{ \parbox{5cm}{ \[ \oint\limits_C V\,d\tau = \oint\limits_\Sigma \nabla \times V\,d\sigma \] } }
Using fbox and minipage: the formula will be framed; you must declare the width of the frame.
\fbox{ \begin{minipage}[position]{5cm} \[ \oint\limits_C V\,d\tau = \oint\limits_\Sigma \nabla \times V\,d\sigma \] \end{minipage} }
Using the equation environment and fbox: the formula will be centred, framed and numbered.
\begin{equation} \fbox{ $ \displaystyle \oint\limits_C V\,d\tau = \oint\limits_\Sigma \nabla \times V\,d\sigma $ } \end{equation}
Using the displaymath environment and fbox: the formula will be centred, framed and unnumbered.
\begin{displaymath} \fbox{ $ \displaystyle \oint\limits_C V\,d\tau = \oint\limits_\Sigma \nabla \times V\,d\sigma $ } \end{displaymath}

Sunday, March 22, 2009
Here are the typical steps to produce a DVI, PostScript or PDF file from a LaTeX source file:
Create the LaTeX source file. It must be in plain ASCII format. A simple text editor like Notepad, VIM or Emacs is fine. Please do not use a word processor like Microsoft Office.
Run LaTeX on the source: latex yourfile.tex
Any warnings or errors will be reported, if they exist. If the source file contains a table of contents or references, you may need to run the above command several times. If you are lucky, you will get a DVI file, called yourfile.dvi. To view it, use a DVI viewer like xdvi or Yap.
To convert the DVI file to PostScript format, run: dvips -Pcmz yourfile.dvi -o yourfile.ps
To produce a PDF file directly, you don't need the steps above. Just run: pdflatex yourfile.tex
To produce a PostScript file from a PDF file: pdf2ps yourfile.pdf newfile.ps
To produce a PDF file from a PostScript file: ps2pdf yourfile.ps newfile.pdf

Saturday, March 21, 2009
An example LaTeX source file for writing a journal article.
\documentclass[a4paper,11pt]{article}
\author{Rizauddin}
\title{Very simple article}
\begin{document}
%generates the title
\maketitle
% insert table of contents
\tableofcontents
\section{Introduction}
The intro will be here.
\section{Conclusion}
\ldots{} and it ends here.
\end{document}

Friday, March 6, 2009
In LaTeX there is a package that allows the user to manually mark up changes of text, such as additions, deletions, or replacements. The changes will be shown in a different colour; deleted text will be crossed out. The package is called "Manual change markup". It is a free package under the LaTeX Project Public License. If you installed LaTeX from MiKTeX or TeX Live, most probably you already have this package installed. The identifier of this package is 'changes'. Therefore, in order to use it, declare the use of this package in the preamble, after \documentclass{}. For example,
\documentclass{article}
\usepackage{changes}
Examples of usage:
\added{new text} for addition.
\deleted{old text} for deletion.
\replaced{new text}{old text} for replacement.
In order to deliver the final version of the document, add the 'final' option to the 'changes' package:
\usepackage[final]{changes}
See the 'Manual change markup' documentation on CTAN (pdf).

Friday, January 30, 2009
A document is usually divided into several parts. One of the parts is the title page. To produce the title page, use the \maketitle command. The format of the title page depends on the document class used, such as article, book or report. A document title page usually consists of several items: title, author(s), date, and some footnotes.
The first item is the title. The command to use is \title{}. A standard LaTeX format for the title page will center all entries on the lines in which they appear. The title will be broken up automatically if it is too long. To manually break the title, use the \\ command. For example, \title{...\\...\\...}.
The second item is the author(s). To display the author, use the \author{} command. If there are several authors, separate their names with \and. For example, \author{S. Kasim \and P. Ramli}. The author names will be printed next to each other on the same line. Replace \and with \\ to display the authors on top of one another. To include the address, use the \\ command, such as \author{S. Kasim\\Company\\Address \and P. Ramli\\Company\\Address}. Extra items such as a telephone number or email may be produced in a footnote via the \thanks{} command. For example, \author{S. Kasim\thanks{Tel. 03--3367638}}.
The last item is the date, which can be produced using the \date{} command. If the \date{} command is omitted, then the current date is printed automatically below the author entries on the title page.
An example of a title page:
\documentclass{article}
\title{A simple LaTeX title page}
\author{ S. Kasim \thanks{Tel. 03--3367638}\\Company1\\Address1 \and P. Ramli \thanks{Email. ramli@ramli.com}\\Company2\\Address2 }
\date{Kuala Lumpur, \today}
\begin{document}
\maketitle
\end{document}
Wednesday, July 9, 2008
Special symbols
Character Command Description
§ \S section sign
† \dag dagger
‡ \ddag double dagger
¶ \P pilcrow sign (paragraph)
© \copyright copyright sign
£ \pounds pound sign
Printing command characters
Character Command Description
$ \$ dollar sign
& \& ampersand
% \% percent sign
# \# number sign
_ \_ underscore
{ \{ opening curly brace
} \} closing curly brace
European letters
Character Command Description
œ \oe latin small ligature oe
Œ \OE latin capital ligature OE
æ \ae latin small ligature ae
Æ \AE latin capital ligature AE
å \aa latin small letter a with ring above
Å \AA latin capital letter A with ring above
ø \o latin small letter o with stroke
Ø \O latin capital letter O with stroke
ß \ss sharp s
¿ ?` inverted question mark
¡ !` inverted exclamation mark
Accents (diacritic marks)
Character Command Description
ò \`{o} with grave
ó \'{o} with acute
ô \^{o} with circumflex
ö \"{o} with umlaut
õ \~{o} with tilde
ō \={o} with macron
ȯ \.{o} with a dot above
ŏ \u{o} with breve
ǒ \v{o} with caron
ő \H{o} with double acute
ọ \d{o} with a dot below
o̲ \b{o} with an underline
Note: You can replace o with the desired character.

Saturday, June 21, 2008
This is a list of frequently used packages (Usage: insert \usepackage{package} before \begin{document}):
anysize: set margins \marginsize{l}{r}{t}{b}
multicol: use n columns \begin{multicols}{n}
graphicx: for displaying images \includegraphics[width=3cm]{filename} or \includegraphics[width=0.50\textwidth]{filename}
url: to insert a URL \url{http://rizauddin.com}
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the data points when calculating the time correlation, you can run two copies of the simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.
Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
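Since the three questions above are mostly about scipy.signal.correlate's conventions, a toy Python example may help. The synthetic signals and the 25-step lag are made up for illustration, and correlation_lags needs a reasonably recent SciPy (1.6+).

import numpy as np
from scipy import signal

dt = 1e-9  # sample spacing, as in the series described above
rng = np.random.default_rng(0)
strain = rng.standard_normal(200)
dTdt = np.roll(strain, 25) + 0.1 * rng.standard_normal(200)  # toy: lags by 25 steps

# Remove the means first; a common offset can otherwise flip or dominate
# the sign of the raw correlation (question 1).
a = strain - strain.mean()
b = dTdt - dTdt.mean()

# mode='full' returns len(a) + len(b) - 1 = 399 points, roughly twice the
# series length -- which is why the plotted result was ~400 steps (question 2).
corr = signal.correlate(b, a, mode="full")
lags = signal.correlation_lags(len(b), len(a), mode="full")
lag = lags[np.argmax(corr)]  # question 3: index the argmax through the lags array
print(lag, lag * dt)  # +25 here: a positive lag means dT/dt trails the strain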
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy.
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay.
@jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e., is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual.
How can one prove that$$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention.
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from QFT?" it certainly shouldn't be marked as homework.
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main-page closeable homework clutter.
@Dilaton also, have a look at the top-voted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no, because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X".
user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube.
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
A particle has spin $\hbar/2$. A measurement is made of the sum of the x and z components of its spin angular momentum. What are the possible results of the measurement?

Strictly speaking there is not enough information in your question to provide a definite answer. In general, the possible outcomes of measuring spin in any direction must be the same, since there is nothing preferential about one direction over another. In other words, suppose the possible outcomes of measuring spin along $\boldsymbol{\hat z}$ are $\pm \hbar/2$. I can relabel the $\boldsymbol{\hat z}$ axis as the $\boldsymbol{\hat x}$ axis, or for that matter as any unit vector $\boldsymbol{\hat n}$, and of course none would be the wiser about this labelling unless there is something to break the spherical symmetry of the problem.

What changes with the direction are the probabilities of each outcome. The probabilities depend on two things. One is the operator, and the other is the state. If $\vert\pm \hat n\rangle$ are the eigenstates of the spin operator $\sigma_{\hat n}$ in the direction $\boldsymbol{\hat n}$, and given a fixed initial state $\vert\psi\rangle$, then these probabilities are $\vert \langle \pm \hat n\vert \psi\rangle\vert^2$. Note that an exception to the statement that the possible outcomes are the same for any direction occurs if one of $\vert \langle \pm \hat n\vert \psi\rangle\vert^2= 0$. The associated outcome has probability $0$ and is thus excluded as a possible outcome.

In particular, you can verify that the eigenvalues of $\sigma_x$ are the same as those of $\sigma_z$ and those of $\sigma_y$, and those eigenvalues, by the postulates of quantum mechanics, are the possible outcomes. Indeed you can verify that, if $U$ is unitary, then $U\sigma_zU^{-1}$ has the same eigenvalues (and thus the same possible outcomes) as $\sigma_z$. The matrix $\sigma_x$, for instance, is given by $$ \sigma_x=U\sigma_z U^{-1}\, ,\qquad U=\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \end{array} \right)\, . $$

You can also verify, for instance, that given the state $$ \vert \psi\rangle =\frac{1}{\sqrt{2}}\vert \hat z\rangle -\frac{1}{\sqrt{2}}\vert -\hat z\rangle $$ the possible outcomes of measuring $\sigma_z$ are $\pm\hbar/2$, since both $+\hbar/2$ and $-\hbar/2$ occur with non-zero probability. The state $\vert\psi\rangle$ is an eigenstate of $\sigma_x$, so there is only one possible outcome for $\sigma_x$ in this state. This illustrates that the probabilities depend on the operator and the state, but the possible outcomes (except when the associated probabilities are $0$) do not.
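As a quick numerical complement to this answer, one can diagonalize $S_x + S_z$ directly. A short NumPy sketch (working in units of $\hbar$, an assumption for convenience):

import numpy as np

hbar = 1.0  # units of hbar
Sx = 0.5 * hbar * np.array([[0, 1], [1, 0]])
Sz = 0.5 * hbar * np.array([[1, 0], [0, -1]])

vals = np.linalg.eigvalsh(Sx + Sz)
print(vals)  # [-0.7071..., +0.7071...], i.e. ±hbar/sqrt(2)

# Consistency with the answer: (1, 0, 1)/sqrt(2) is a unit vector n and
# Sx + Sz = sqrt(2) * S_n, so the outcomes are sqrt(2) times the usual ±hbar/2.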
I use the notation of this question. A non-decreasing continuous bijection from $[0,a]$ to $[0,b]$, where $a,b\geq 0$ are two real numbers, is denoted by $[0,a] \cong^+ [0,b]$. If $\phi:[0,a]\to U$ and $\psi:[0,b]\to U$ are two continuous maps for some topological space $U$ with $\phi(a)=\psi(0)$, the map $\phi*\psi:[0,a+b]\to U$ is the composition of the paths, $\phi$ on $[0,a]$ and $\psi$ on $[a,a+b]$. All topological spaces are $\Delta$-generated; therefore all of the following categories are locally presentable.
A multipointed $d$-space $X$ is a variant of Marco Grandis' $d$-spaces. It consists of a topological space $|X|$, a subset $X^0$ (of states) of $|X|$, and a set of continuous maps (called execution paths) $\mathbb{P}^{top}X$ from $[0,1]$ to $|X|$ satisfying the following axioms:
for any $\phi\in \mathbb{P}^{top}X$, $\phi(0)$ and $\phi(1)$ belong to $X^0$;
for any $\phi\in \mathbb{P}^{top}X$, any composite $[0,1] \cong^+ [0,1] \stackrel{\phi}\longrightarrow |X|$ belongs to $\mathbb{P}^{top}X$;
if $\phi$ and $\psi$ are two execution paths, all composites like $[0,1] \cong^+ [0,2] \stackrel{\phi*\psi}\longrightarrow |X|$ are execution paths.
To summarize, a multipointed $d$-space has not only a distinguished set of continuous paths but also a distinguished set of points (the other points are intuitively not interesting). Unlike Grandis' notion, the constant paths are not necessarily execution paths. It is one of the roles of the cofibrant replacement of the model category structure constructed in Homotopical interpretation of globular complex by multipointed d-space to remove from a multipointed $d$-space all points which do not belong to an execution path. The cofibrant replacement cleans up the underlying space by removing the useless topological structure.
It turns out that the model structure constructed in Homotopical interpretation of globular complex by multipointed d-space is the left determined model category with respect to the set of generating cofibrations $\mathrm{Glob}(\mathbf{S}^{n-1}) \subset \mathrm{Glob}(\mathbf{D}^{n})$ for $n\geq 0$ and the map $\{0,1\} \to \{0\}$ identifying two points, where $\mathbf{S}^{n-1}$ is the $(n-1)$-dimensional sphere, $\mathbf{D}^{n}$ the $n$-dimensional disk, and where $\mathrm{Glob}(Z)$ is the multipointed $d$-space whose definition is explained in the paper (I don't think that it is important to recall it in this post).
Now here is the question. I would be interested in considering the multipointed $d$-spaces $\vec{[0,1]^n}$ defined as follows:
The underlying space is the $n$-cube $[0,1]^n$.
The set of distinguished states is the set of vertices $\{0,1\}^n \subset [0,1]^n$.
The set of execution paths is generated by the continuous maps from $[0,1]$ to $[0,1]^n$ such that, of course, $0$ and $1$ are mapped to a point of $\{0,1\}^n$, and such that these maps are nondecreasing with respect to each coordinate axis.
The multipointed $d$-space $\partial\vec{[0,1]^n}$ is defined in the same way by removing the interior of the $n$-cube. Using Vopenka's principle and a result of Tholen and Rosicky, there exists a left determined model category structure with respect to the set of generating cofibrations $\partial\vec{[0,1]^n} \subset \vec{[0,1]^n}$ with $n\geq 0$ and $R:\{0,1\}\to \{0\}$. How is it possible to remove Vopenka's principle from this statement? This question is probably too complicated for a post, but if someone could give me a starting point, I would be very grateful; that is the reason why I am asking it anyway.
Note: the presence of the map $\{0,1\}\to \{0\}$ in the set of generating cofibrations is not mandatory, because in other parts of my work I consider model structures where this map is removed from the set of generating cofibrations.
Iterate of exponential
Contents
Integer and non-integer \(n\)
The most frequently used are the first iterate of the exponential, \(n=1\): \(\exp^1=\exp\), and the minus-first iterate, \(n=-1\): \(\exp^{-1} = \ln\). Less often they appear with \(n = \pm 2\): \(\exp^2(z)=\exp(\exp(z))\) and \(\exp^{-2}(z)=\ln(\ln(z))\). Other values of the number of iterations are not usual, and until year 2008 there was no regular way to evaluate an iterate of the exponential for a non-integer number \(n\) of iterations. However, with tetration tet, which is the superfunction of the exponential, and arctetration ate, which is the Abel function of the exponential, the \(n\)th iterate can be expressed as follows: \(\exp^n(z)=\mathrm{tet}(n+\mathrm{ate}(z))\) Both tet and ate are holomorphic functions; so the representation above can be used for non-integer \(n\). The exponential can be iterated even a complex number of times.
Implementation
The representation of \(\exp^n\) through the functions tet and ate defines the \(n\)th iterate of the exponential for any complex number \(n\) of iterations. Methods for the evaluation are described in 2009 by D. Kouznetsov in Mathematics of Computation [2], and an efficient complex double C++ implementation is described in 2010 in the Vladikavkaz Mathematical Journal in Russian; the English version is also available [3]. With their known properties and the efficient implementation, the functions tet, ate and the non-integer iterates of the exponential should be qualified as special functions; in computation, one can access them as if they were elementary functions. The complex double implementations of the functions tet and ate are loaded to TORI, see fsexp.cin and fslog.cin; they run on various operating systems, at least under Linux and Macintosh. Reports of any problems with their use, or of reproducible bugs, would be appreciated.
Complex maps of the \(n\)th iterate of the exponential, \(f=\exp^n(x+\mathrm i y)\), are shown in the figures at right with lines \(u=\Re(f)\) and lines \(v=\Im(f)\) for various values of \(n\) in the \(x\),\(y\) plane. As the function is real-holomorphic, the maps are symmetric, so only the upper half plane is shown in the figures.
Cut lines
While \(n\) is not integer, \(\exp^n(z)\) is holomorphic in the complex plane with two cut lines \(\Re(z)\le \Re(L)\), \(\Im(z)=\pm \Im(L)\), where \(L\approx 0.3+1.3 \mathrm i\) is the fixed point of the logarithm, id est, the solution of the equation \(L=\ln(L)\). In the figures at right, one of these cuts is seen; it is marked with a dashed line. The additional levels \(\Re(L)\) for the real part of \(\exp^n\) and \(\Im(L)\) for the imaginary part are drawn with thick green lines; of course, these lines cross each other at the branch point \(L\). In addition, for a negative number \(n\) of iterations (and, in particular, for \(n=-1\)), there is a cut line along the negative part of the real axis, id est, from \(-\infty\) to \(\mathrm{tet}(-2-n)\).
Special function
Properties of the iterate of the exponential are described. \(\exp^n(z)\) is a holomorphic function with respect to \(z\) and with respect to \(n\). Properties of this function are analyzed and described. The efficient (fast and precise) algorithm for the evaluation is supplied with the routines fsexp.cin and fslog.cin. With the achievements above, the function \((n,z) \mapsto \exp^n(z)\) is qualified as a special function.
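For integer \(n\) the representation reduces to repeated application of exp or ln, which is easy to check. The following minimal Python sketch covers only the integer case and merely indicates where a tet/ate implementation (such as fsexp.cin and fslog.cin) would extend it; it is an illustration, not the algorithm of the cited papers.

import cmath

def exp_iter(z, n):
    # Integer iterate exp^n(z): repeated exp for n > 0, repeated ln for n < 0.
    # The representation exp^n(z) = tet(n + ate(z)) reproduces this at
    # integer n and extends it to non-integer (even complex) n, given
    # implementations of tet and ate.
    for _ in range(abs(n)):
        z = cmath.exp(z) if n > 0 else cmath.log(z)
    return z

print(exp_iter(0.5, 2))                # exp^2(0.5) = exp(exp(0.5))
print(exp_iter(exp_iter(0.5, 2), -2))  # exp^{-2} inverts exp^2: returns 0.5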
Designers of compilers and interpreters for programming languages are invited to borrow the implementations of tetration and arctetration in order to provide a built-in function that evaluates \(\exp^n(z)\) for complex values of \(n\) and \(z\). In particular, in Mathematica there is already a name for such a function; it would be called with Nest[Exp,z,n]. Up to year 2013, the built-in function Nest is implemented in such a way that the number \(n\) of iterations must be expressed as a positive integer constant [4]; otherwise, the built-in function generates an error message instead of performing the requested evaluation. With the use of superfunctions and Abel functions, Nest could be implemented for the more general case.
References
1. Walter Bergweiler. Iteration of meromorphic functions. Bull. Amer. Math. Soc. 29 (1993), 151-188. http://www.ams.org/journals/bull/1993-29-02/S0273-0979-1993-00432-4/S0273-0979-1993-00432-4.pdf
2. D. Kouznetsov. Solution of \(F(z+1)=\exp(F(z))\) in complex \(z\)-plane. Mathematics of Computation 78 (2009), 1647-1670. http://www.ams.org/mcom/2009-78-267/S0025-5718-09-02188-7/home.html ; http://www.ils.uec.ac.jp/~dima/PAPERS/2009analuxpRepri.pdf ; http://mizugadro.mydns.jp/PAPERS/2009analuxpRepri.pdf
3. D. Kouznetsov. Superexponential as special function. Vladikavkaz Mathematical Journal 12(2) (2010), 31-45. http://www.ils.uec.ac.jp/~dima/PAPERS/2009vladie.pdf (English) ; http://mizugadro.mydns.jp/PAPERS/2010vladie.pdf (English) ; http://mizugadro.mydns.jp/PAPERS/2009vladir.pdf (Russian)
4. Nest, Wolfram Mathematica 9 Documentation Center, 2013. http://reference.wolfram.com/mathematica/ref/Nest.html
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main); INFN, Florence) / CERN RD50. The objective of the RD50 collaboration is to develop radiation-hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the Large Hadron Collider (LHC) at CERN. Some of the most recent RD50 results on silicon detectors are reported in this paper, with special reference to: (i) the progress in the characterization of lattice defects responsible for carrier trapping; (ii) the charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) the charge collection efficiency of single-type-column 3D detectors, after proton and neutron irradiation, including position-sensitive measurements; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in: Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52. In: 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27-29 Jun 2007, pp. 48-52

Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.); Melone, J (Glasgow U.); Horn, M (Glasgow U.); Kazukauskas, V (Vilnius U.); Roy, P (Glasgow U.); Doherty, F (Glasgow U.); Glaser, M (CERN); Vaitkus, J (Vilnius U.); Rahman, M (Glasgow U.) / CERN RD50. Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in: Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131. In: 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8-12 Sep 2002, pp. 127-131

Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna; U. Perugia (main); INFN, Perugia); Scorzoni, A (U. Perugia (main); INFN, Perugia; IMM, Bologna); Poggi, A (Perugia U.); Bruzzi, M (Florence U.); Lagomarsino, S (Florence U.); Mersi, S (Florence U.); Sciortino, Silvio (Florence U.); Nipoti, R (IMM, Bologna). Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002-2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors, and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in: Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221. In: 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp. 218-221

Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.); Gossling, C (Dortmund U.); Klingenberg, R (Dortmund U.); Rajek, S (Dortmund U.); Wunstorf, R (Dortmund U.)
Silicon pad detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in: IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062. In: 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19-25 Oct 2003, pp. 3055-3062

Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua; Padua U.); Bisello, D (INFN, Padua; Padua U.); Rando, R (INFN, Padua; Padua U.); Schramm, A (Hamburg U., Inst. Exp. Phys. II); Contarato, D (Hamburg U., Inst. Exp. Phys. II); Fretwurst, E (Hamburg U., Inst. Exp. Phys. II); Lindstrom, G (Hamburg U., Inst. Exp. Phys. II); Wyss, J (Cassino U.; INFN, Pisa). Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high-energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices present superior radiation hardness compared with present-day silicon detectors. [...] 2004 - 7 p. - Published in: IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772. In: 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18-23 May 2003, pp. 1766-1772

Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U.; INFN, Trieste); Bosisio, L (Trieste U.; INFN, Trieste); Ciacchi, M (Trieste U.); Contarato, D (Hamburg U., Inst. Exp. Phys. II); D'Auria, G (Sincrotrone Trieste); Fretwurst, E (Hamburg U., Inst. Exp. Phys. II); Lindstrom, G (Hamburg U., Inst. Exp. Phys. II). The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e}/cm^2$. The variation of the effective dopant concentration, the current-related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices, have been investigated. 2004 - 7 p. - Published in: Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116. In: 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp. 110-116

Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana); Mandić, I (Stefan Inst., Ljubljana); Kramberger, G (Stefan Inst., Ljubljana); Mikuž, M (Stefan Inst., Ljubljana; Ljubljana U.); Zavrtanik, M (Ljubljana U.). Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in: Nucl. Instrum. Methods Phys.
Res., A 518 (2004) 343-345. In: 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25-31 May 2003, pp. 343-345

First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.); Allport, P P (Liverpool U.); Martí i Garcia, S (CSIC, Catalunya); Lozano, M (Barcelona, Inst. Microelectron.); Turner, P R (Liverpool U.). Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in: Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342. In: 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25-31 May 2003, pp. 340-342

Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$-p structures / Makarenko, L F (Belarus State U.); Lastovskii, S B (Minsk, Inst. Phys.); Korshunov, F P (Minsk, Inst. Phys.); Moll, M (CERN); Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.); Abrosimov, N V (Unlisted, DE). New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$-p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in: AIP Conf. Proc. 1583 (2015) 123-126
I am hoping that someone can point out the error in the "proof" of the following "theorem":

Theorem: Let $k$ be a perfect field of characteristic $p>2$ and let $G$ be an ordinary $p$-divisible group over $W(k)$. Then the connected-etale sequence of $G$ is split: $G\simeq G^m \times G^{et}$.

The "Theorem" is decidedly false. For example, extensions of $\mathbf{Q}_p/\mathbf{Z}_p$ by $\mathbf{G}_m[p^{\infty}]$ over $W(k)$ are classified by the abelian group $1+pW(k)$, so any nonzero element of this group gives an ordinary $p$-divisible group with non-split connected-etale sequence.

Proof: Let $G_0:=G\times_{W(k)} k$ be the special fiber of $G$. Since $k$ is perfect, $G_0= G_0^{m}\times G_0^{et}$. By Messing (LNM 264, Chap. V, Theorem 1.6), there is an equivalence of categories between deformations of $G_0$ to $W(k)$ and free $W(k)$-submodules $L$ of $D(G_0)(W(k))$ lifting $\omega_{G_0}\hookrightarrow D(G_0)(k)$. This equivalence is induced by sending a lift $G'$ of $G_0$ to $\omega_{G'}$. Now $G' = G^{m}\times G^{et}$ and $G$ both lift $G_0$, and these lifts correspond to the submodules $\omega_{G'}$ and $\omega_G$ of $D(G_0)(W(k))$, respectively. But since $G$ is ordinary, so that $G^0=G^{m}$, the pullback map $\omega_{G}\rightarrow \omega_{G^m}$ is an isomorphism. Via this isomorphism, the map $\omega_{G'}\rightarrow D(G_0)(W(k))$ coincides with the composite $$ \omega_{G'}\simeq \omega_{G^m}\times \omega_{G^{et}} = \omega_{G^m}\simeq \omega_G\rightarrow D(G_0)(W(k)) $$ and we conclude that $\omega_{G'}=\omega_G$ as submodules of $D(G_0)(W(k))$ lifting $\omega_{G_0}$. It then follows from Messing's theorem above that $G\simeq G'$, as claimed.

I must be making some silly and obvious mistake... can you find it?
In 1+1 dimensions there is a duality between models of fermions and bosons called bosonization (or fermionization). For instance the sine-Gordon theory $$\mathcal{L}= \frac{1}{2}\partial_\mu \phi \partial^\mu \phi + \frac{\alpha}{\beta^2}\cos \beta \phi$$ can also be described in terms of fermions as the massive Thirring model $$\mathcal{L}= \bar{\psi}(i\gamma^\mu\partial_\mu-m)\psi -\frac{1}{2}g \left(\bar{\psi}\gamma^\mu\psi\right)\left(\bar{\psi}\gamma_\mu\psi\right)$$ where the particle created by $\psi$ can be understood as a kink of sine-Gordon, and the particle created by $\phi$ can be understood as a bound state of two fermions from the Thirring model.

Unlike sine-Gordon, $\phi^4$ $$\mathcal{L}= \frac{1}{2}\partial_\mu \phi \partial^\mu \phi + \frac{1}{2}m^2\phi^2 -\frac{1}{4}\lambda \phi^4$$ has only two vacua in the broken symmetry phase. I'm wondering whether here too we can write fermionic creation operators for the kinks, and rewrite the theory as a local field theory of the kink fields?

The reason I think we can is that we can do this for the quantum Ising model, which has much in common with $\phi^4$. The Ising model is defined on a 1d spin chain, and the ground states in the broken symmetry phase are where the 3rd component of the spins are either all pointing up or all down. The operators $\psi_1(i),\psi_2(i)$ are defined at each lattice point $i$ in terms of Pauli matrices as $$\psi_1(i) = i\sigma_2(i)\prod_{\rho=-\infty}^{i-1}\sigma_1(\rho)$$ $$\psi_2(i) = \sigma_3(i)\prod_{\rho=-\infty}^{i-1}\sigma_1(\rho)$$ The infinite product part acts to flip the 3rd component of spin to create a kink, and the Pauli matrix part gives it the usual fermionic anticommutation relations. It turns out that in the continuum limit $\psi_{1,2}$ act like the two components of a free Majorana fermion.

Can $\phi^4$ also be expressed in terms of a Majorana fermion? What are the relations for the fermion field of $\phi^4$ that are analogous to the relations for $\psi_{1,2}$ in terms of Pauli matrices?
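These lattice formulas can be checked numerically on a small chain. The Python sketch below truncates the infinite string to a 4-site chain (my assumption, purely for illustration) and verifies that all distinct operators anticommute, with $\psi_2(i)^2 = +1$ and $\psi_1(i)^2 = -1$, exactly as the definitions above imply.

import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
L = 4  # finite chain standing in for the infinite one

def site_op(op, i):
    # op acting on site i, identity elsewhere
    return reduce(np.kron, [op if j == i else I2 for j in range(L)])

def string(i):
    # the disorder string: product of sigma_1 over sites rho < i
    return reduce(np.kron, [s1 if j < i else I2 for j in range(L)])

def psi1(i):
    return 1j * site_op(s2, i) @ string(i)

def psi2(i):
    return site_op(s3, i) @ string(i)

ops = [psi1(i) for i in range(L)] + [psi2(i) for i in range(L)]
print(all(np.allclose(ops[p] @ ops[q] + ops[q] @ ops[p], 0)
          for p in range(2 * L) for q in range(2 * L) if p != q))  # True
print(np.allclose(psi2(1) @ psi2(1), np.eye(2 ** L)))   # psi2^2 = +1
print(np.allclose(psi1(1) @ psi1(1), -np.eye(2 ** L)))  # psi1^2 = -1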
Can someone help me calculate this integral? $\int \frac{\sqrt{\sqrt[3]{x} - 2}}{x}dx$ I tried this substitution: $\Bigg(t = \sqrt[3]{x}, t^3 = x, 3t^2dt=dx\Bigg)$ which reduces the integral to: $\int \frac{\sqrt{t-2}}{t}3t^2dt = 3\int \sqrt{t-2}\,t\,dt$ and continuing from here is pointless, because the result (according to Wolfram) is way off. I don't understand why that substitution was wrong... So I also tried this substitution instead: $t = \sqrt[3]{x} - 2$, $t^3 = x - 6 \sqrt[3]{x^2} + 12\sqrt[3]{x} - 8$. But this seems algebraically impossible to me. Help is appreciated.
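For what it's worth, the first substitution itself is fine; the slip is in the $1/x$ factor, which becomes $1/t^3$ (not $1/t$) under $x = t^3$, so the reduced integrand is $3\sqrt{t-2}/t$ rather than $3t\sqrt{t-2}$. A SymPy check of this reduction (the variable names are mine):

import sympy as sp

x, t = sp.symbols('x t', positive=True)

orig = sp.sqrt(sp.cbrt(x) - 2) / x          # integrand in x
# substitute x = t**3, dx = 3*t**2 dt:
reduced = sp.simplify(orig.subs(x, t**3) * 3 * t**2)
print(reduced)                  # 3*sqrt(t - 2)/t  -- note the 1/t
print(sp.integrate(reduced, t)) # an elementary antiderivative in t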
Video Transcript: Types of Numbers
The system of different types of numbers is structured much like the countries and cities all over the world. This is Hayato. He lives in Japan. This is an island nation in East Asia. The Japanese islands are in the Pacific. To be more precise, we can say Hayato lives in Tokyo. To be even more precise, we can say Hayato lives in the district of Shibuya. There he lives on this street. Hayato lives exactly in this house. What does this have to do with the real number system, you ask? I will show you!
Whole Numbers
The first numbers that you learned were the natural numbers. These are numbers like 6, 11, 21, 50 and so on. They are like the different houses. All the natural numbers plus zero belong to the whole numbers, just like every house belongs to a street.
Integers
When we look at what comes next, we find the integers. The integers are the whole numbers as well as the negative whole numbers. They are like all of the streets in the district of Shibuya.
Rational Numbers
One step further, there are the rational numbers. The rational numbers are positive and negative fractions as well as positive and negative decimals that terminate or repeat. Note that not every rational number is an integer, but every integer is a rational number. It is like: not everybody who lives in Tokyo also lives in Shibuya, but everybody who lives in the district of Shibuya lives in Tokyo. There isn't just one city in Japan! You can see here is Osaka. Tokyo and Osaka are two separate cities. You cannot be in both places at the same time!
Irrational Numbers
A number can be a rational number or an irrational number, like pi or the square root of 3. An irrational number cannot be written as a fraction or have a decimal that is repeating or terminating.
Real Numbers
Together the rational and the irrational numbers form the real numbers, just like Osaka and Tokyo are both in Japan. You have seen that the organization of our number system is structured much like all of the countries and cities we live in. And Hayato in Japan learns this too, because the number system is the same all over the world.
Types of Numbers Exercise
Would you like to apply what you have learned? With the exercises for the video Types of Numbers you can review and practice it.
Explain the different types of numbers.
Tips: Obviously, every male Beagle is a Beagle. But on the other hand, not all Beagles are male Beagles. Every Beagle is a dog. But there also exist dogs, like German Shepherds, that aren't Beagles.
Solution: You can see the system of different types of numbers in the picture on the right. The numbers $6$, $21$, $11$ belong to a set called natural numbers. These numbers, along with $0$, give us the set of whole numbers. Negative numbers, $-1$, $-6$, $-23$ etc., together with numbers like $0$, $73$ and $75$, are called integers. There are still other numbers like $\pm\frac14$ or $\pm0.375$, $\pm0.\overline{3}$, which can be rewritten in fraction form. These numbers, along with the integers, combine to form the rational numbers. Decimal numbers like $\pi$ and $e$, which are neither terminating nor repeating and cannot be written as fractions, are called irrational numbers. These numbers, together with the rational numbers, make up all the real numbers.
Analyze the statements about the different number types.
Tips: Natural numbers are positive. So, if $a$ is a natural number, then $a>0$. The number $0$, together with the natural numbers, forms the set of whole numbers.
$\frac13=0.\overline3$ is an example of a repeating decimal, and $7.5$ is an example of a rational number.
Solution: We start with the natural numbers such as $11$, $21$. $-6$ is not a natural number. The natural numbers, together with $0$, form the whole numbers. $-1$ is not a whole number. Integers are numbers like $-3$ and $-99$. $-\frac23$ is not an integer. The rational numbers are fractions such as $\frac14$, or decimals that terminate, like $8.1$, or repeat. $-2\sqrt 5$ is not a rational number. Irrational numbers are not repeating decimals, nor can they be written as fractions; an example is $\pi$. $-\frac23$ is not an irrational number.
Place each number in the smallest possible set.
Tips: The natural numbers are the numbers we use to count things. The union of all the numbers is the set of real numbers. Note that natural numbers, whole numbers, integers, and rational numbers are NOT irrational numbers. An irrational number is a decimal number that does not terminate or repeat and cannot be written as a fraction.
Solution: $6$ and $50$ are natural numbers. We use natural numbers to count things. Whole numbers are natural numbers but also include $0$. The set of negative whole numbers added to the set of whole numbers gives us the set of numbers called integers. For example, $-3$ and $-14$ are integers. There are still other numbers like $\pm\frac35$ or $\pm0.5$, $\pm0.\overline{8}$, which can all be rewritten in fraction form. These numbers, along with the set of integers, combine to form the rational numbers. Decimal numbers like $\pi$, $e$ and $\sqrt3$, which do not terminate or repeat and cannot be written as fractions, are called irrational numbers. These numbers, together with the rational numbers, make up the set of real numbers.
Decide to which number type the number belongs.
Tips: The real numbers include both rational and irrational numbers. A number is either rational or irrational. Every natural number is a whole number, but not vice versa. Every whole number is an integer, but not vice versa. Every integer is a rational number, but not vice versa.
Solution: Every natural number is also a whole number. Every whole number is also an integer. Every integer is also a rational number. Each number which is not rational is irrational. The rational and the irrational numbers together form the real numbers. $5.4$ is a rational number because it has a terminating decimal, and since all rational numbers are also real numbers, $5.4$ is also a real number. But $5.4$ is neither an integer nor a natural number. $\sqrt 5$ is an irrational number, and therefore not a rational number. However, $\sqrt 5$ is a real number. $\frac78$ is a fraction, and therefore it's a rational number. $-42$ is an integer, a rational number, and a real number. But $-42$ is neither a whole number nor a natural number.
Identify which numbers don't belong to the given number type.
Tips: We use natural numbers to count and order things. Integers are the natural numbers, zero, and the negative whole numbers. For example, $\frac13=0.\overline3$ is a rational number. It can be written as a fraction as well as a repeating decimal.
Solution: The easiest way to remember the different number types is to remember examples of numbers belonging to each type. With the exception of the irrational numbers, all the other number types are organized like Russian nesting dolls. For example: $314$ is a real number, $314$ is a rational number, $314$ is an integer, $314$ is a whole number, and $314$ is a natural number. The note on the right side shows the correct solution.
Here you find the mistakes already corrected: $-3$ is an integer but not a natural number. $0$ is a whole number. $3.14$ is a terminating decimal number and also a rational number, but it is not an integer. $3.1\overline4$ is a repeating decimal number, so it's a rational number, and it is not an irrational number.
Assign each number to its correct number type.
Tips: $\sqrt 2$ is an irrational number. Multiples of irrational numbers are also irrational, like $3\sqrt2$. Natural numbers are used to count and order things. Some examples are: $1$, $2$, $3$, ... and $1\text{st}$, $2\text{nd}$, $3\text{rd}$, ... The whole numbers are the natural numbers plus the number zero. Integers are positive and negative whole numbers. Some examples are: $... , -3, -2, -1, 0, 1, 2, 3, ...$
Solution: We can write every number type as a set: natural numbers: $\{1, 2, 3, ...\}$, whole numbers: $\{0, 1, 2, 3, ...\}$, integers: $\{..., -3, -2, -1, 0, 1, 2, 3, ...\}$. The representation of the rational numbers as a set isn't as easy as for the number types shown above, so we'll define those numbers as follows: terminating or repeating decimal numbers such as $0.75$ and $0.\overline2$, and fractions such as $\pm\frac{435}{23}$.
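As a small programmatic companion to these exercises, here is a hedged Python sketch that names the sets an exact rational number belongs to; the function name and set labels are mine. Irrational numbers are out of scope, since they have no exact finite representation as a Fraction.

from fractions import Fraction

def classify(q):
    # Sets (from the video) containing the exact rational number q.
    q = Fraction(q)
    sets = ["real", "rational"]
    if q.denominator == 1:
        sets.append("integer")
        if q >= 0:
            sets.append("whole")
        if q > 0:
            sets.append("natural")
    return sets

print(classify(314))             # real, rational, integer, whole, natural
print(classify(-3))              # real, rational, integer
print(classify(Fraction(3, 4)))  # real, rational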
Dear guys,

I read a derivation of the dimension of gamma matrices in a [tex]d[/tex]-dimensional space, which I don't quite understand.

First of all, in [tex]d[/tex] dimensions, where [tex]d[/tex] is even, one assumes the dimension of the gamma matrices satisfying [tex] \{ \gamma^\mu , \gamma^\nu \} = 2\eta^{\mu\nu} \quad\quad\cdots(*)[/tex] is [tex] m [/tex]. A general m-by-m matrix with complex entries has [tex]2m^2[/tex] independent real components. Now, eq. (*) gives [tex]m^2[/tex] constraints. (<= I don't quite understand this.) So the independent components of a single gamma matrix should be [tex] m^2 [/tex].

On the other hand, one finds that the antisymmetrization of gamma matrices produces space-time tensors under Lorentz transformations, for example [tex] \bar{\psi}\gamma^{\mu\nu}\psi \rightarrow \Lambda^\mu{}_\rho\Lambda^{\nu}{}_\sigma\bar{\psi}\gamma^{\rho\sigma}\psi[/tex] where [tex] \gamma^{\mu\nu} = \gamma^{[\mu}\gamma^{\nu]}[/tex]. Now, the various antisymmetric tensors decompose the Lorentz group into different pieces which do not mix. We count the independent components of each antisymmetric tensor and add them up: [tex] C^d_0 + C^d_1 + C^d_2 + \cdots + C^d_d = 2^d. [/tex] Now we match the two counts of independent components we calculated (Why?! Why should they match?!): [tex] m^2 = 2^d. [/tex] This concludes that [tex] m = 2^{d/2} [/tex].

Now, for d = 2k+1 odd, one can easily add [tex]\gamma^{2k} \sim \gamma^0\gamma^1\cdots\gamma^{2k-1}[/tex] to the original [tex]\gamma^0,\gamma^1,\cdots,\gamma^{2k-1}[/tex] to form gamma matrices in d = 2k+1. Since the antisymmetric tensors satisfy a linear relation, [tex]\gamma^{\mu_0\mu_1\cdots\mu_r} = \epsilon^{\mu_0\mu_1\cdots\mu_{2k}}\gamma_{\mu_{r+1}\cdots\mu_{2k}}[/tex], there are actually only [tex] 2^{d}/2 [/tex] independent components for odd [tex]d[/tex]. Hence the dimension of gamma matrices in odd spacetime dimension should be [tex]2^{\frac{d-1}{2}}[/tex].

My question is: doesn't the linear relation between the antisymmetric tensors also hold in even spacetime dimension d?

Can anyone help me go through these puzzles? Thanks so much!

ismaili

----

Oh, by the way, I found that from the Dirac representation method which I described in another nearby thread titled "spinors in various dimensions", one can easily see that the dimension of gamma matrices in even dimension d should be [tex] 2^{d/2} [/tex].

Last edited:
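For concreteness, here is the even-dimensional count instantiated at d = 4 (a worked check added here for illustration, not part of the original post): the antisymmetrized products [tex]1, \gamma^\mu, \gamma^{\mu\nu}, \gamma^{\mu\nu\rho}, \gamma^{\mu\nu\rho\sigma}[/tex] contribute [tex] \binom{4}{0} + \binom{4}{1} + \binom{4}{2} + \binom{4}{3} + \binom{4}{4} = 1 + 4 + 6 + 4 + 1 = 16 = 2^4, [/tex] so [tex]m^2 = 16[/tex] and [tex]m = 4[/tex], consistent with the familiar 4-by-4 Dirac matrices.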
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced.

Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.

@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m^\ast(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense.) If ordinary measure is meant by $m^\ast(E)$, then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.

A few questions where this tag would (in my opinion) make sense:
http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf
http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences
http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product
http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning
http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup
http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts

I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it.

Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis.

Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?

No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post).

I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet.

@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it.

Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.

@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.

@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...

@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.

OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...

So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?

> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago

In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...

@MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't really thought about Alexei's question.

When I look at the comments in Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?

@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)

@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
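Returning to the truncation map defined above: here is a minimal sketch of the two definitions as code (added for illustration; finite digit lists stand in for genuine $p$-adic integers, and only odd $p$ with $r \geq 1$ is assumed):

```python
from math import gcd

def truncate(digits, p, r):
    """t_{p^r}: sum_l a_l p^l  ->  sum_{l < r} a_l p^l."""
    return sum(a * p**l for l, a in enumerate(digits[:r]))

def generates_units(w, p, r):
    """Does <w mod p^r> equal (Z/p^r Z)^x ?  Brute-force order check."""
    m = p**r
    if gcd(w, m) != 1:
        return False
    order, x = 1, w % m
    while x != 1:
        x = (x * w) % m
        order += 1
    return order == p**(r - 1) * (p - 1)   # group order for odd p

digits = [2, 0, 1, 1]               # first digits: 2 + 0*3 + 1*9 + 1*27 = 38
w = truncate(digits, 3, 2)          # 2
print(w, generates_units(w, 3, 2))  # "2 True": 2 generates (Z/9Z)^x
```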
I came across this problem:

The standard state Gibbs free energies of formation of C(graphite) and C(diamond) at $T = \pu{298 K}$ are $\pu{0 kJ mol-1}$ and $\pu{2.9 kJ mol-1}$, respectively. The conversion of graphite [C(graphite)] to diamond [C(diamond)] reduces its volume by $\pu{2e-6 m3 mol-1}$. If C(graphite) is converted to C(diamond) isothermally at $T = \pu{298 K}$, the pressure at which C(graphite) is in equilibrium with C(diamond) is

(A) $\pu{14501 bar}$ (B) $\pu{58001 bar}$ (C) $\pu{1450 bar}$ (D) $\pu{29001 bar}$

I applied: $$\Delta G_{(P,T)} =\Delta_f G^{\circ}+\int_{p_1}^{p_2}V\,dp.$$ Since the system is at equilibrium, $$\Delta_f G^{\circ}= -\int_{p_1}^{p_2}V\,dp.$$ Now I am stuck. I have not been given any relation between pressure and volume. Is there any assumption I have to make to solve this integral?
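A sketch of the usual way forward (added for illustration, not part of the original question): assume the solids are incompressible, so the reaction volume change $\Delta V = \pu{-2e-6 m3 mol-1}$ is independent of pressure and comes out of the integral. With $\Delta_r G^{\circ} = \pu{2900 J mol-1}$ for graphite $\to$ diamond,

$$0 = \Delta_r G^{\circ} + \Delta V\,(p_2 - p_1) \quad\Longrightarrow\quad p_2 - p_1 = -\frac{\pu{2900 J mol-1}}{\pu{-2e-6 m3 mol-1}} = \pu{1.45e9 Pa} = \pu{14500 bar},$$

so with $p_1 = \pu{1 bar}$ one gets $p_2 \approx \pu{14501 bar}$, i.e. option (A).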
I am working with the following copula, and have a few questions about it:

$C(x,y) = xy + \theta (1-x)(1-y)xy$

Here $\theta \in [-1,1]$ and $x,y \in [0,1]$.

First, I am trying to show this copula is 2-increasing. To do this, I took $\frac{\partial^2 C}{\partial x\,\partial y}$, hoping that $\frac{\partial^2 C}{\partial x\,\partial y} \geq 0$. What I ended up with was $\frac{\partial^2 C}{\partial x\,\partial y} = 1 + \theta - \theta (1-2x-2y+4xy)$. If I consider the case $x=0$, $y=1$, $\theta = -1$, then this is equal to $-1$, so my condition isn't satisfied. Am I going about this the wrong way?

Second, I am trying to calculate the copula of $(x,y^2)$. My first thought was just to plug $x=x$, $y=y^2$ into my original copula. However, I thought I couldn't do this because it would violate the assumption of uniform margins (as $y^2$ would no longer be uniform). Any hints here?

Many thanks!
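For reference, a direct computation (added here; this is the Farlie-Gumbel-Morgenstern family, and the sign slip above is in the expansion of the mixed partial) gives

$$\frac{\partial^2 C}{\partial x\,\partial y} = 1 + \theta(1-2x)(1-2y) = 1 + \theta(1 - 2x - 2y + 4xy),$$

and since $(1-2x)(1-2y) \in [-1,1]$ for $x,y \in [0,1]$, this is at least $1 - |\theta| \geq 0$ for every $\theta \in [-1,1]$. In particular, at $x=0$, $y=1$, $\theta=-1$ the value is $1 + (-1)(1)(-1) = 2$, not $-1$.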
Total $\{k\}$-domination in special graphs

1. University of Science and Technology of China (USTC), Hefei, China
2. Facebook Seattle, 1101 Dexter Ave N, Seattle, WA 98109, USA

For a positive integer $k$ and a graph $G = (V,E)$, a function $f:V \to \{0,1,\ldots,k\}$ is called a total $\{k\}$-dominating function of $G$ if $\sum_{u\in N_G(v)}f(u)\geq k$ for each $v\in V$, where $N_G(v)$ is the neighborhood of $v$ in $G$. The total $\{k\}$-domination number of $G$, denoted by $\gamma_t^{\{k\}}(G)$, is the minimum weight of a total $\{k\}$-dominating function of $G$, where the weight of $f$ is $\sum_{v\in V}f(v)$. In this paper, we determine the exact values of the total $\{k\}$-domination number for several commonly encountered classes of graphs, including cycles, paths, wheels, and pans.

Mathematics Subject Classification: 05C10.

Citation: Haisheng Tan, Liuyan Liu, Hongyu Liang. Total $\{k\}$-domination in special graphs. Mathematical Foundations of Computing, 2018, 1(3): 255-263. doi: 10.3934/mfc.2018011
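A minimal sketch (not from the paper) of the definition as executable code, assuming an adjacency-list representation of the graph:

```python
def is_total_k_dominating(adj, f, k):
    """Check that f: V -> {0,...,k} is a total {k}-dominating function:
    the open neighborhood of every vertex must carry total weight >= k."""
    return all(sum(f[u] for u in adj[v]) >= k for v in adj)

# Example: the cycle C_6 with the constant function f = 1 and k = 2.
n = 6
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
f = {v: 1 for v in range(n)}
print(is_total_k_dominating(adj, f, 2))  # True: every vertex sees 1 + 1 = 2
```

The weight of this $f$ is $6$; whether it is minimum for $\gamma_t^{\{2\}}(C_6)$ is exactly the kind of question the paper settles for cycles, paths, wheels, and pans.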
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago

BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?

*******

Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.

*******

If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:

I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:

One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent a reasonable time searching and did not find...

In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia and this site, to mention just two important examples). Additionally, som...

@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love it and will seriously look into those points and improve approach0. Give me just a few minutes, I will reply to your feedback in our chat. — Wei Zhong 1 min ago

I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward."

BTW those animations with examples of searching look really cool.

@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago

We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issues page!

@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.

@MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google uses is an HTTP GET method, but for mathematics a GET request may not be appropriate, since a query has structure; usually a developer would instead use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.

@MartinSleziak Right now there are two workarounds for the "query link" problem you raised. First, use the browser back/forward buttons to navigate among the query history.

@MartinSleziak Second, use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort.)

@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after the exact matches.

@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.

@MartinSleziak Yes, you will get them; Greek letters are tokenized to the same thing as normal alphabetic symbols.

@MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using. I also observe this issue; the only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit a '^' so it goes to the upper-bound edit.

@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math Stack Exchange. This is a very small number, but I will index more posts/pages when search-engine efficiency and relevance are tuned.

@MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish.

@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.

So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago

@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago

@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.)
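To make the GET-versus-POST point above concrete, here is a hypothetical sketch (the endpoint and JSON shape are illustrative assumptions, not the actual approach0 API):

```python
import requests

# GET: the whole query must be flattened and URL-encoded into the link,
# which is what makes shareable query links easy.
get_resp = requests.get(
    "https://example.org/search",          # hypothetical endpoint
    params={"q": r"$\frac{x}{y}$", "page": 1},
)

# POST: the query keeps its structure as a JSON document, which is easier
# for the backend to take apart into separate math keywords.
post_resp = requests.post(
    "https://example.org/search",          # hypothetical endpoint
    json={"keywords": [{"type": "tex", "str": r"\frac{x}{y}"}], "page": 1},
)
```

The trade-off discussed above is visible here: the GET form fits in a link one can paste into a bug report, while the POST form carries richer structure.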
— Martin Sleziak 57 mins ago

"What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls on favorite editor/distro/fonts etc., while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago

@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.

I saw this for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender).

@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."

My attempt at an English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the initial letters of the names of the three wise men.

As you can see there, Christus mansionem benedicat is translated into Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation.

It seems that they also have other interpretations in Poland: "A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."

BTW in the village where I come from, the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria, the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where children of both sexes call on every door and are given sweets and money for the charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion - mostly in aid of poorer children in other countries.

A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.

About Slovakia specifically, it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.