From *Online Computation and Competitive Analysis* by Allan Borodin and Ran El-Yaniv: to prove that an online algorithm $\text{ALG}$ is $c$-competitive for a minimization problem (i.e., there exists a constant $\alpha$ such that $\text{ALG}(I)\leq c\cdot \text{OPT}(I)+\alpha$), it is sufficient to find a potential function $\Phi$ satisfying the following conditions with respect to any possible event sequence: (1) if only the adversary $\text{OPT}$ moves (i.e., is active) during event $e_i$ and pays $x$ for this move, then $\Delta\Phi=\Phi_i-\Phi_{i-1}\leqslant cx$; that is, $\Phi$ increases by at most $cx$; (2) if only $\text{ALG}$ moves during event $e_i$ and pays $x$ for this move, then $\Delta\Phi=\Phi_i-\Phi_{i-1}\leqslant -x$; that is, $\Phi$ decreases by at least $x$; (3) there exists a constant $b$, independent of the request sequence, such that $\Phi_i\geqslant b$ for all $i$. How does this change if the problem is a maximization problem instead of a minimization one? In other words, we would like to prove that there exists a constant $\beta$ such that $\text{OPT}(I)\leq c\cdot \text{ALG}(I)+\beta$. I guess that the first bullet becomes "gains $x$ for this move, then $\Phi$ decreases by at least $x$" and the second bullet becomes "and gains $x$ for this move, then $\Phi$ increases by at most $cx$", but I am not sure. How should the above definition be modified in order to deal with maximization problems?
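Not from the book, but one way to sanity-check a guess is to redo the telescoping argument that makes the minimization version work, with the inequalities reversed. This is a sketch of my own; sign conventions may differ from other sources:

```latex
% Minimization: summing \Delta\Phi_i over all events telescopes to
\[
\Phi_n - \Phi_0 \;=\; \sum_i \Delta\Phi_i \;\le\; c\cdot\mathrm{OPT}(I) - \mathrm{ALG}(I),
\]
% and with \Phi_i \ge b this yields
\[
\mathrm{ALG}(I) \;\le\; c\cdot\mathrm{OPT}(I) + (\Phi_0 - b).
\]
% Maximization (sketch): to obtain \mathrm{OPT}(I) \le c\cdot\mathrm{ALG}(I) + \beta,
% symmetric conditions would be:
%   (1) if only OPT moves and gains x, then \Delta\Phi \ge x;
%   (2) if only ALG moves and gains x, then \Delta\Phi \ge -cx;
%   (3) \Phi_i \le B for all i, for some constant B independent of the input.
% Summing these per-event bounds then gives
\[
\Phi_n - \Phi_0 \;\ge\; \mathrm{OPT}(I) - c\cdot\mathrm{ALG}(I)
\quad\Longrightarrow\quad
\mathrm{OPT}(I) \;\le\; c\cdot\mathrm{ALG}(I) + (B - \Phi_0).
\]
```

So, under this bookkeeping, the potential is bounded above rather than below, and the roles of "increase by at least" and "decrease by at most" swap relative to the minimization case.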
Journal of Symplectic Geometry J. Symplectic Geom. Volume 10, Number 4 (2012), 601-653. On the growth rate of Leaf-Wise intersections Abstract We define a new variant of Rabinowitz Floer homology that is particularly well suited to studying the growth rate of leaf-wise intersections. We prove that for closed manifolds $M$ whose loop space $\Lambda M$ is "complicated", if $\Sigma \subseteq T^*M$ is a non-degenerate fibrewise starshaped hypersurface and $\varphi \in \mathrm{Ham}_c (T^*M,\omega)$ is a generic Hamiltonian diffeomorphism, then the number of leaf-wise intersection points of $\varphi$ in $\Sigma$ grows exponentially in time. Concrete examples of such manifolds are $(S^2 \times S^2)\#(S^2\times S^2)$, $\mathbb{T}^4\#\mathbb{C}P^2$, or any surface of genus greater than one. Article information Source: J. Symplectic Geom., Volume 10, Number 4 (2012), 601-653. Dates: First available in Project Euclid: 2 January 2013. Permanent link to this document: https://projecteuclid.org/euclid.jsg/1357153430 Mathematical Reviews number (MathSciNet): MR2982024. Zentralblatt MATH identifier: 1266.53076. Citation: Macarini, Leonardo; Merry, Will J.; Paternain, Gabriel P. On the growth rate of Leaf-Wise intersections. J. Symplectic Geom. 10 (2012), no. 4, 601-653. https://projecteuclid.org/euclid.jsg/1357153430
I am trying to reproduce the results from a paper in Mathematica. This task involves $K$ double integrals of the form $$\int f(x,y)g(x)dx,$$ where $f(x,y)$ is a bivariate normal density with mean $(-4.08, -3.41)$ and diagonal covariance matrix with diagonal $(1/10,1/21)$, and $g(x)$ is a normal density with mean $\beta_0+\beta_1\xi$ and variance $\sigma_e^2$ (all of them known). I want to obtain this integral as a function of $\xi$.

Sigma = {{1/10, 0}, {0, 1/21}};
{β0, β1, σe} = {-2.3, 0.5, 0.0005};
f[ξ_] := NIntegrate[
  PDF[MultinormalDistribution[{-4.08, -3.41}, Sigma], {η, ξ}]*
  PDF[NormalDistribution[β0 + β1*ξ, σe], η], {η, -∞, ∞}]
Plot[f[t], {t, -10, 10}]

This code returns $0$ for all values of $\xi$, while the true value should not be $0$. How can I obtain this integral? What seems to be the problem? Is it the precision? Any ideas would be greatly appreciated. My impression so far is that $\sigma_e$ is very small, so the densities involved in the integration are very concentrated, which makes the automatic integration difficult.
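Not part of the original question, but as a sanity check on what the answer should look like: with a diagonal covariance the bivariate density factorizes, and the $\eta$-integral of a product of two Gaussian densities has a known closed form, $\int N(\eta; \mu_1, v_1)\,N(\eta; m, \sigma_e^2)\,d\eta = N(\mu_1; m, v_1+\sigma_e^2)$. A sketch in Python (variable names are mine; I treat $\sigma_e = 0.0005$ as a standard deviation, matching NormalDistribution's convention):

```python
import math

def npdf(x, mean, var):
    # univariate normal density
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu1, v1 = -4.08, 1/10        # eta-marginal of the bivariate normal
mu2, v2 = -3.41, 1/21        # xi-marginal
b0, b1, se = -2.3, 0.5, 0.0005  # se taken as a standard deviation

def f_closed(xi):
    # Diagonal covariance => the bivariate density factorizes, and the
    # eta-integral collapses to a single Gaussian evaluation.
    m = b0 + b1 * xi
    return npdf(xi, mu2, v2) * npdf(mu1, m, v1 + se**2)

def f_numeric(xi, n=20001):
    # brute-force Riemann sum, concentrated around the narrow peak at m --
    # this is exactly why blind integration over (-inf, inf) sees only zeros
    m = b0 + b1 * xi
    lo, hi = m - 12 * se, m + 12 * se
    h = (hi - lo) / (n - 1)
    total = sum(npdf(lo + i * h, mu1, v1) * npdf(lo + i * h, m, se**2)
                for i in range(n)) * h
    return npdf(xi, mu2, v2) * total
```

The closed form and the peak-focused numeric integral agree, which supports the diagnosis: the integrand is essentially zero except in a window of width a few $\sigma_e$ around $\beta_0+\beta_1\xi$, which an automatic sampler can easily miss.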
Consider $\sum_{n=1}^{\infty}nx^n\sin(nx)$. Find $R > 0$ such that the series is convergent for all $x\in(-R,R)$. Calculate the sum of the series. I found that the radius of convergence is $R=1$, hence the series converges for any $x\in (-1,1)$ and its sum is continuous there. However, I have some trouble finding the exact sum of this series. To find $f(x)=\sum_{n=1}^{\infty}nx^n\sin(nx)$, I think it's reasonable to find $F(x)=\sum_{n=1}^{\infty}nx^ne^{inx}$, since the imaginary part of $F(x)$ is $f(x)$. So if $F(x)=\sum_{n=1}^{\infty}nx^ne^{inx}$, then $\frac{1}{2\pi}\int_{-\pi}^\pi F(x)e^{-inx}dx=nx^n$, but I don't know how to find $F(x)$.
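Not in the original question, but one standard route (a sketch of my own): write $z = xe^{ix}$, so that $F(x)=\sum_{n\ge 1} n z^n = \frac{z}{(1-z)^2}$ for $|z|=|x|<1$, and the desired sum is $f(x)=\operatorname{Im}\frac{z}{(1-z)^2}$. A quick numerical check of this closed form:

```python
import math

def partial_sum(x, terms=4000):
    # direct partial sum of sum_{n>=1} n x^n sin(nx)
    return sum(n * x**n * math.sin(n * x) for n in range(1, terms + 1))

def closed_form(x):
    # sum_{n>=1} n z^n = z/(1-z)^2 for |z| < 1, with z = x e^{ix};
    # the original series is the imaginary part
    z = x * complex(math.cos(x), math.sin(x))
    return (z / (1 - z)**2).imag

# the two agree to machine precision for |x| < 1
print(partial_sum(0.5), closed_form(0.5))
```

The identity $\sum_{n\ge 1} n z^n = z/(1-z)^2$ comes from differentiating the geometric series term by term inside the disk of convergence.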
I am working with lmer in R and am unsure of the assumptions on the variance-covariance matrix for the random effects in a mixed effects model. If I have a 2 factor model, say of the form (in mixed effects R language): Y ~ x + (1|factor1) + (variable1|factor1) + (1|factor2) + (variable1|factor2) I know that in the underlying math, this is represented as a mixed effects model: $Y=X\beta+Zb + \epsilon$ Where: $X$ is an ($n\times p$) design matrix. $\beta$ is a $(p \times 1)$ parameter vector. $Z$ is an ($n \times (s_1l_1+s_2l_2)$) matrix containing the information about grouping factor variables (where $s_i$ is the number of variables listed for factor $i$ and $l_i$ is the number of levels of factor $i$). $b$ is an ($(s_1l_1+s_2l_2) \times 1$) vector of random effects. $\epsilon$ is an $(n\times 1)$ error term. I know that it is assumed that the covariance between levels of a factor is zero. For example, if you had a grouping factor of subject and variables of, say, weight and height over time, then $cov(w_{s_i},h_{s_j})=0$ for all $i\neq j$, where $w_{s_k}$ and $h_{s_k}$ represent the weight and height over time measurement random effects for subject $k$. If there is one factor in the analysis, the zero covariance between levels of a factor means that $\Sigma=cov(b)$ is block-diagonal. What I am unsure of however (and this is my question) is: is it also assumed that if you have multiple grouping factors, the covariance between the variables belonging to those factors is still 0? I.e., for a multi-factor random effects model is $\Sigma=cov(b)$ still block-diagonal? (For example, if you have a grouping factor of, say, subject and also, say, the location where the subject's reading was taken, is it assumed that $cov(w_{s_i},h_{l_j})=0$ for all $i \neq j$? Here $w_{s_k}$ represents the weight over time measurement random effect for subject $k$ and $h_{l_k}$ represents the height over time measurement random effect for location $k$.)
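To make the structure in question concrete (my own toy numbers, not output from lme4): if random effects are independent across levels and across grouping factors, then $\Sigma=cov(b)$ is block-diagonal, with one copy of each factor's per-level covariance block per level:

```python
import numpy as np

def block_diag(*blocks):
    # minimal block-diagonal assembly (scipy.linalg.block_diag does the same)
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# Illustrative per-level covariance of (intercept, slope) random effects:
G1 = np.array([[1.0, 0.3],
               [0.3, 0.5]])   # grouping factor 1 (3 levels)
G2 = np.array([[2.0, 0.0],
               [0.0, 0.8]])   # grouping factor 2 (2 levels)

# Independence across levels AND across factors => cov(b) is block-diagonal:
Sigma = block_diag(G1, G1, G1, G2, G2)
assert Sigma.shape == (10, 10)
```

Within a block, the intercept and slope for one level may be correlated (the 0.3 entry), but every cross-level and cross-factor entry of `Sigma` is zero under the assumption being asked about.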
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project. Re: Writing mathematical formulas From: Igor Pechtchanski <pechtcha at cs dot nyu dot edu> To: Alex Vinokur <alexvn at connect dot to> Cc: Randall R Schulz <rschulz at sonic dot net>, <cygwin at cygwin dot com> Date: Sun, 25 May 2003 14:38:15 -0400 (EDT) Subject: Re: Writing mathematical formulas Reply-to: cygwin at cygwin dot com On Sun, 25 May 2003, Randall R Schulz wrote: > Alex, > > At 08:28 2003-05-25, Alex Vinokur wrote: > > >"Randall R Schulz" <rschulz@sonic.net> wrote in message news:5.2.1.1.2.20030524222126.03930a78@pop.sonic.net... > > > Alex, > > > > > > There is a complete TeX package available under Cygwin. > > > Randall Schulz > > > >Thanks. > > > >I worked with Microsoft Equation 3.0 in Word. > >But I have never worked with the TeX package. > > > >I have read man tex. However I need a sample of invoking and using tex. > >For instance, how to create this: > >news://news.gmane.org/baqlq2$6vr$1@main.gmane.org ? > > You'll never learn TeX from the man page. That'll only tell you how to > invoke the tools. I think it's fair to say that TeX document preparation > is an art. Certainly a skill, and not one easily come by, for the most > part. And certainly WYSIWYG it ain't! (Though for all I know there are > WYSIWYG front-ends for synthesizing TeX equations.) > > There's tutorial information out there, but I'm not qualified to make > suggestions. Search the net. > > > > At 21:28 2003-05-24, Alex Vinokur wrote: > > > >Does Cygwin contain any tool for writing mathematical formulas > > > >(without using the Word Equation Editor)? > > > > Alex Vinokur > > Randall Schulz Alex, There are plenty of tutorials available on-line. I would recommend <http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/>, which contains plenty of links to tutorials, references, etc.
You could also search Google for "LaTeX math formulas" if you need help specifically on that. I'm not aware of any WYSIWYG tools working under Cygwin (LyX is one, but I don't think it's actually ported). OTOH, the LaTeX math interface is *very* intuitive. For example, the message you cited would be produced by something like this:

\documentclass{article}
\begin{document}
This is an example of integrals.
$\int \sqrt{a - bx} \; dx$
$t = a - bx$; $dt = -b \; dx$
$dx = -\frac{dt}{b}$
P.S. $a = k^3$
Thanks.
\end{document}

Just install teTeX, cut/paste the above (from \documentclass{article} to \end{document}) into a file called "aaa.tex", and run "latex aaa.tex; dvips aaa.dvi". You can then view the result with GhostView. Igor -- http://cs.nyu.edu/~pechtcha/ |\ _,,,---,,_ pechtcha@cs.nyu.edu ZZZzz /,`.-'`' -. ;-;;,_ igor@watson.ibm.com |,4- ) )-,_. ,\ ( `'-' Igor Pechtchanski '---''(_/--' `-'\_) fL a.k.a JaguaR-R-R-r-r-r-.-.-. Meow! "I have since come to realize that being between your mentor and his route to the bathroom is a major career booster." -- Patrick Naughton
In this paper, in the first discussion (Universal latent variable representation, by C. Andrieu, A. Doucet and A. Lee), the authors state that: Sampling exactly $Y \sim f(y|\theta)$ on a computer most often means that $Y=\phi(\theta,U)$ where $U$ is a random vector with probability distribution $D(\cdot)$ and $\phi(\cdot,\cdot)$ is a mapping either known analytically or available as a "black box". And they explain it further. I am trying to understand their point but I am lost. I think I managed to use this idea on a simple example. I did an Approximate Bayesian Computation inference (it should also work for the exact case) using a normal likelihood (known mean, unknown precision $\tau$). So in this case $\theta=\tau$, $U\sim U[0,1]$ and $\phi(\cdot,\cdot)$ is a Box-Muller transformation. So when I simulate data from the likelihood, instead of using the numpy normal distribution, I simulate $U$ from a uniform distribution and use the Box-Muller transformation to get my normally distributed data. However, even if I'm right with this example, I don't know how to apply it to real-world problems. In general I am working on ABC code to infer from an Agent Based Model (that was given to me in NetLogo; I just call it from my code in Python). So instead of having to simulate from $N(\mu, \frac{1}{\tau})$, I have to run the NetLogo model with proposed parameters $\theta$ ($\theta$ is a multidimensional vector). So my only inputs to the NetLogo model are these parameters. My question is: how can I use the idea from this paper in my case? I.e., what would $U$ and $\phi$ be? I would be grateful for any explanation and help!
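For concreteness, here is the simple example described above in code (my own sketch, not the paper's; the numbers are made up): $Y=\phi(\theta,U)$ with $U$ a pair of uniforms and $\phi$ the Box-Muller map followed by a location-scale transformation to $N(\mu, 1/\tau)$.

```python
import math
import random
import statistics

def phi(tau, u1, u2, mu=0.0):
    # Box-Muller: turn two uniforms into one standard normal draw...
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    # ...then map it to N(mu, 1/tau). This whole function is phi(theta, U).
    return mu + z / math.sqrt(tau)

random.seed(0)
tau = 4.0  # true precision, so the variance is 1/tau = 0.25
# 1 - random() lies in (0, 1], so log() is always defined
samples = [phi(tau, 1.0 - random.random(), 1.0 - random.random())
           for _ in range(200_000)]

m = statistics.fmean(samples)
v = statistics.pvariance(samples)
```

The point of the representation is that all the randomness lives in $U$; $\phi$ is deterministic given $(\theta, U)$. For the NetLogo model the same decomposition holds in principle: $U$ is the stream of pseudo-random numbers the simulator consumes (e.g. its RNG seed and draws), and $\phi$ is the simulator itself run as a black box on $(\theta, U)$.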
Is there a formal mathematical proof that the solution to the German Tank Problem is a function of only the parameters $k$ (number of observed samples) and $m$ (maximum value among observed samples)? In other words, can one prove that the solution is independent of the other sample values besides the maximum value?

Likelihood

Common problems in probability theory refer to the probability of observations $x_1, x_2, \ldots, x_n$ given a certain model and given the parameters (let's call them $\theta$) involved. For instance, the probabilities for specific situations in card games or dice games are often very straightforward. However, in many practical situations we are dealing with an inverse situation (inferential statistics). That is: the observations $x_1, x_2, \ldots, x_k$ are given and now the model is unknown, or at least we do not know certain parameters $\theta$. In this type of problem we often refer to a term called the likelihood of the parameters, $\mathcal{L}(\theta)$, which is a degree of belief in a specific parameter $\theta$ given observations $x_1, x_2, \ldots, x_k$. This term is expressed as being proportional to the probability of the observations $x_1, x_2, \ldots, x_k$ assuming that a model parameter $\theta$ would hypothetically be true. $$\mathcal{L}(\theta; x_1, x_2, \ldots, x_k) \propto \text{probability of observations $x_1, x_2, \ldots, x_k$ given $\theta$}$$ For a given parameter value $\theta$, the more probable a certain observation $x_1, x_2, \ldots, x_k$ is (relative to its probability under other parameter values), the more the observation supports this particular parameter (or the theory/hypothesis that assumes this parameter). A (relatively) high likelihood will reinforce our beliefs about that parameter value (there is a lot more to say philosophically about this).
Likelihood in the German tank problem

Now for the German tank problem the likelihood function for a set of samples $x_1, x_2, \ldots, x_k$ is: $$\mathcal{L}(\theta; x_1, x_2, \ldots, x_k) = \Pr(x_1, x_2, \ldots, x_k \mid \theta) = \begin{cases} 0 &\text{if } \max(x_1, x_2, \ldots, x_k) > \theta \\ {{\theta}\choose{k}}^{-1} &\text{if } \max(x_1, x_2, \ldots, x_k) \leq \theta. \end{cases}$$ Whether you observe the samples $\{1, 2, 10\}$ or the samples $\{8, 9, 10\}$ should not matter when the samples are considered to come from a uniform distribution with parameter $\theta$. Both samples are equally likely, with probability ${{\theta}\choose{3}}^{-1}$, and using the idea of likelihood the one sample does not tell more about the parameter $\theta$ than the other. The high values $\{8, 9, 10\}$ might make you think/believe that $\theta$ should be higher. But it is only the value $10$ that truly gives you relevant information about the likelihood of $\theta$ (the value 10 tells you that $\theta$ is ten or higher; the values 8 and 9 do not contribute anything to this information).

Fisher-Neyman factorization theorem

This theorem tells you that a certain statistic $T(x_1, x_2, \ldots, x_k)$ (i.e. some function of the observations, like the mean, the median, or, as in the German tank problem, the maximum) is sufficient (contains all information) when you can factor out, in the likelihood function, the terms that depend on the other observations $x_1, x_2, \ldots, x_k$, such that this factor does not depend on both the parameter $\theta$ and the data $x_1, x_2, \ldots, x_k$ (and the part of the likelihood function that relates the data to the hypothetical parameter values depends only on the statistic, not on the whole of the data/observations). The case of the German tank problem is simple: you can see above that the entire expression for the likelihood is already dependent only on the statistic $\max(x_1, x_2, \ldots, x_k)$, and the rest of the values $x_1, x_2, \ldots, x_k$ do not matter.
Little game as example

Let's say we play the following game repeatedly: $\theta$ is itself a random variable, drawn with equal probability to be either 100 or 110. Then we draw a sample $x_1,x_2,\ldots,x_k$. We want to choose a strategy for guessing $\theta$, based on the observed $x_1,x_2,\ldots,x_k$, that maximizes our probability of guessing $\theta$ correctly. The proper strategy is to choose 100 unless one of the numbers in the sample is greater than 100. We could be tempted to choose the parameter value 110 already when many of the $x_1,x_2,\ldots,x_k$ are high values close to one hundred (but none exactly over one hundred), but that would be wrong. The probability of such an observation is larger when the true parameter value is 100 than when it is 110. So if we guess 100 in such a situation, we are less likely to make a mistake (because the situation with these high values close to one hundred, yet still below it, occurs more often when the true value is 100 than when the true value is 110).

You haven't presented a precise formulation of "the problem", so it's not exactly clear what you're asking to be proved. From a Bayesian perspective, the posterior probability does depend on all the data. However, each observation of a particular serial number will support that number the most. That is, given any observation $n$, the odds ratio between posterior and prior will be greater for the hypothesis "the actual number of tanks is $n$" than for "the actual number of tanks is [some number other than $n$]". Thus, if we start with a uniform prior, then $n$ will have the highest posterior after seeing that observation. Consider a case where we have the data point $13$, and hypotheses $N=10,13,15$. Obviously, the posterior for $N=10$ is zero. And our posteriors for $N=13,15$ will be larger than their priors. The reason for this is that in Bayesian reasoning, absence of evidence is evidence of absence.
Any time we have an opportunity to make an observation that would have decreased our probability, but don't, the probability increases. Since we could have seen $16$, which would have set our posteriors for $N=13,15$ to zero, the fact that we didn't see it means that we should increase our posteriors for $N=13,15$. But note that the smaller the number, the more numbers we could have seen that would have excluded it. For $N=13$, we would have rejected that hypothesis after seeing any of $14,15,16,\ldots$. But for $N=15$, we would have needed at least $16$ to reject the hypothesis. Since the hypothesis $N=13$ is more falsifiable than $N=15$, the fact that we didn't falsify $N=13$ is more evidence for $N=13$ than not falsifying $N=15$ is evidence for $N=15$. So every time we see a data point, it sets the posterior of everything below it to zero, and increases the posterior of everything else, with smaller numbers getting the largest boost. Thus, the number that gets the overall largest boost will be the smallest number whose posterior wasn't set to zero, i.e. the maximum value of the observations. Numbers less than the maximum affect how much larger a boost the maximum gets, but they do not affect the general trend of the maximum getting the largest boost. Consider the above example, where we've already seen $13$. If the next number we see is $5$, what effect will that have? It helps out $5$ more than $6$, but both numbers have already been rejected, so that's not relevant. It helps out $13$ more than $15$, but $13$ has already been helped out more than $15$, so that doesn't affect which number has been helped out the most.
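The update described above is easy to make concrete (my own toy script, not from the answer: uniform prior over $N\le 30$, and for simplicity each serial is drawn uniformly with replacement, so each observation contributes a factor $1/N$ when it is $\le N$):

```python
from fractions import Fraction

def posterior(observations, n_max=30):
    # uniform prior over N = 1..n_max; per-draw likelihood is 1/N if draw <= N,
    # else 0 (hypotheses below any observation are falsified outright)
    post = {}
    for n in range(1, n_max + 1):
        like = Fraction(1)
        for x in observations:
            like *= Fraction(1, n) if x <= n else Fraction(0)
        post[n] = like
    total = sum(post.values())
    return {n: p / total for n, p in post.items()}

p = posterior([13, 5])
```

With observations $\{13, 5\}$, every $N<13$ has posterior zero, and among the surviving hypotheses the posterior is proportional to $1/N^2$, so the maximum observation $13$ gets the highest posterior, exactly as argued.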
Does the OLS (ordinary least squares) method of regression consider only one sample value in calculating the sample regression function (SRF)? If not, then how is the SRF created when there is more than one observation per $X_{i}$?

It is not very clear what you're asking, but if the model is of the form $$ y_i = \beta_0 + \beta_1 x_i + \epsilon_i \tag{1} $$ then the sample regression function (SRF) $$ \hat{y}_i = \beta_0 + \beta_1 x_i \tag{2} $$ indeed only considers one value of $x_i$ at a time. If you have a situation in which several observations $y$ are associated with the same value $x_i$, then a linear model is perhaps not appropriate for describing your data.

OLS considers all values. The method doesn't care if some samples have the same $X$ value, as long as not all of them do. In many cases there is no controversy in having multiple samples with the same $X_i$ but different $Y_i$. For example, if you plot company turnover ($Y$) versus number of employees ($X$) for a sample of companies, there can be companies that happen to have the exact same number of employees. Suppose you use OLS to estimate the model: $$Y_i = \beta_1 + \beta_2X_i + \varepsilon_i$$ Note that what $i$ indexes is not values of $X$ but units (e.g. individuals, countries) within the population of interest. Given sample data for units $i=1,\ldots,N$, OLS will find the values of $\beta_1$ and $\beta_2$ that minimize $\sum_i(Y_i - \beta_1 - \beta_2X_i)^2$. It does not matter at all if some of the $X_i$'s have the same value. The sum to be minimised is calculated over all the sample units.
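To see concretely that duplicate $X_i$ values pose no problem for the second answer's point (toy data of my own):

```python
import numpy as np

rng = np.random.default_rng(42)
# several units deliberately share the same X value, with different Y's
x = np.array([10., 10., 10., 25., 25., 40., 40., 60.])
y = 3.0 + 0.5 * x + rng.normal(0, 0.1, size=x.size)  # true intercept 3, slope 0.5

# OLS minimizes sum_i (y_i - b1 - b2*x_i)^2 over ALL units i,
# regardless of repeated x values
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The estimated coefficients recover the true intercept and slope closely; the repeated $X$ values simply contribute several squared residuals at the same abscissa.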
But the really cool part is this: given preorders \(X\) and \(Y\), we can get a feasibility relation \(\Phi : X \nrightarrow Y\) either from a monotone function \(f : X \to Y\) or from a monotone function \(g: Y \to X\). So, feasibility relations put monotone functions going forwards from \(X\) to \(Y\) and those going backwards from \(Y\) to \(X\) into a common framework! Even better, we saw that one of our favorite themes, namely adjoints, is deeply connected to this idea. Let me state this as a theorem: Theorem. Let \(f : X \to Y \) and \(g: Y \to X\) be monotone functions between the preorders \(X\) and \(Y\). Define the feasibility relations \(\Phi, \Psi : X \nrightarrow Y\) by $$ \Phi(x,y) \text{ if and only if } f(x) \le y $$ and $$ \Psi(x,y) \text{ if and only if } x \le g(y) .$$ Then \(\Phi = \Psi\) if and only if \(f \) is the left adjoint of \(g\). Proof. We have \(\Phi = \Psi\) iff $$ \Phi(x,y) \text{ if and only if } \Psi(x,y) $$ for all \(x \in X, y \in Y\), but by our definitions this is true iff $$ f(x) \le y \text{ if and only if } x \le g(y) $$ which is true iff \(f\) is the left adjoint of \(g\). \( \qquad \blacksquare \) Ah, if only all proofs were so easy! Now, to make feasibility relations into a truly satisfactory generalization of monotone functions, we should figure out how to compose them. Luckily this is easy, because we already know how to compose relations from Lecture 40. So, we should try to prove this: Theorem. Suppose that \(\Phi : X \nrightarrow Y, \Psi : Y \nrightarrow Z\) are feasibility relations between preorders. Then there is a composite feasibility relation $$ \Psi \Phi : X \nrightarrow Z $$ defined as follows: $$ (\Psi \Phi)(x,z) = \text{true} $$ if and only if for some \(y \in Y\), $$ \Phi(x,y) = \text{true} \text{ and } \Psi(y,z) = \text{true}. $$ Puzzle 176. Prove this! Show that \(\Psi \Phi\) really is a feasibility relation.
I hope you see how reasonable this form of composition is. Think of it in terms of our pictures from last time: Here we have three preorders \(X,Y,Z\), which we can think of as cities with one-way roads. We can also take one-way airplane flights from \(X\) to \(Y\) and from \(Y\) to \(Z\): the flights in blue are a way of drawing a feasibility relation \(\Phi: X \nrightarrow Y\), and the flights in red are a way of drawing \(\Psi: Y \nrightarrow Z\). Puzzle 177. Is \( (\Psi\Phi)(N,y) = \text{true}\)? Puzzle 178. Is \( (\Psi\Phi)(W,y) = \text{true}\)? Puzzle 179. Is \(( \Psi\Phi)(E,y) = \text{true}\)? Puzzle 180. Prove that there is a category \(\textbf{Feas}\) whose objects are preorders and whose morphisms are feasibility relations, composed as above. Puzzle 181. Suppose that \(\Phi : X \nrightarrow Y, \Psi : Y \nrightarrow Z\) are feasibility relations between preorders. Prove that $$ (\Psi\Phi)(x,z) = \bigvee_{y \in Y} \Phi(x,y) \wedge \Psi(y,z) $$ where \(\wedge\) is the meet in the poset \(\textbf{Bool}\), and \(\bigvee\) is the join. How is this formula related to matrix multiplication? You may remember that a feasibility relation is a \(\mathcal{V}\)-enriched profunctor in the special case where \(\mathcal{V} = \textbf{Bool}\). This formula will be the key to defining composition for more general \(\mathcal{V}\)-enriched profunctors. But we need to talk about that more.
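As a hint toward Puzzle 181 (my own illustration, not from the lecture; I take the underlying sets to be finite and ignore the monotonicity conditions, i.e. I treat the preorders as discrete): over \(\textbf{Bool}\), composing feasibility relations is exactly matrix multiplication with join \(\vee\) playing the role of addition and meet \(\wedge\) the role of multiplication.

```python
def compose(phi, psi):
    # (psi . phi)(x, z) = OR over y of (phi(x, y) AND psi(y, z)):
    # boolean matrix multiplication with join/meet as plus/times
    xs, ys, zs = len(phi), len(psi), len(psi[0])
    return [[any(phi[x][y] and psi[y][z] for y in range(ys))
             for z in range(zs)]
            for x in range(xs)]

# Toy relations: X = {x0, x1}, Y = {y0, y1, y2}, Z = {z0, z1}
phi = [[True, False, False],   # which y's are reachable from each x
       [False, True, True]]
psi = [[False, True],          # which z's are reachable from each y
       [True, True],
       [False, False]]

print(compose(phi, psi))
```

Reading off the result: from \(x_0\) we can only reach \(y_0\), and from \(y_0\) only \(z_1\), so \((\Psi\Phi)(x_0,z_1)\) holds but \((\Psi\Phi)(x_0,z_0)\) does not.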
Differentiability of a function: a function is differentiable if its derivative exists at each point of its domain. Differentiability at a point is defined as follows: suppose f is a real function and c is a point in its domain. The derivative of f at c is defined by \(\lim\limits_{h \to 0} \frac{f(c+h) - f(c)}{h}\), provided this limit exists.

Differentiability on an interval:
Open interval: a function f(x) is differentiable on an interval (a, b) if and only if f(x) is differentiable at each and every point of (a, b). (Recall that round brackets denote an open interval.)
Closed interval: a function f(x) is differentiable on a closed interval [a, b] if f(x) is differentiable on the open interval (a, b), differentiable from the right at x = a, and differentiable from the left at x = b.

Graph of a differentiable function: the graph of a differentiable function has a tangent at each point of its domain; it is smooth, with no bends or breaks.

Facts on the relation between continuity and differentiability:
If a function f(x) is differentiable at a point x = a, then f(x) must be continuous at x = a, but the converse need not hold: a function continuous at x = a is not necessarily differentiable at x = a.
The sum, difference, product and quotient of two differentiable functions is differentiable (for the quotient, wherever the denominator is non-zero).

Domain, range, continuity and differentiability of common functions:
Identity function, f(x) = x: Domain = R, Range = R; continuous and differentiable everywhere.
Exponential function, f(x) = \(a^x\) (a > 0, a ≠ 1): Domain = R, Range = (0, ∞); continuous and differentiable on its domain.
Logarithmic function, f(x) = \(\log_a x\) (x > 0, a > 0, a ≠ 1): Domain = (0, ∞), Range = R; continuous and differentiable on its domain.
Root function, f(x) = \(\sqrt{x}\): Domain = [0, ∞), Range = [0, ∞); continuous on [0, ∞), differentiable on (0, ∞).
Greatest integer function, f(x) = [x]: Domain = R, Range = Z; continuous and differentiable everywhere except at the integers.
Least integer function, f(x) = (x): Domain = R, Range = Z; continuous and differentiable everywhere except at the integers.
Fractional part function, f(x) = {x} = x − [x]: Domain = R, Range = [0, 1); continuous and differentiable everywhere except at the integers.
Signum function, f(x) = \(\frac{|x|}{x}\), i.e. −1 for x < 0, 0 for x = 0, 1 for x > 0: Domain = R, Range = {−1, 0, 1}; continuous and differentiable everywhere except at x = 0.
Constant function, f(x) = c: Domain = R, Range = {c}, where c is a constant; continuous and differentiable everywhere.
Polynomial function, e.g. f(x) = ax + b: Domain = R; continuous and differentiable everywhere.
Sine function, y = sin x: Domain = R, Range = [−1, 1]; continuous and differentiable everywhere.
Cosine function, y = cos x: Domain = R, Range = [−1, 1]; continuous and differentiable everywhere.
Tangent function, y = tan x: Domain = R except odd multiples of π/2, Range = R; continuous and differentiable on its domain.
Cosecant function, y = cosec x: Domain = R except integer multiples of π, Range = (−∞, −1] ∪ [1, ∞); continuous and differentiable on its domain.
Secant function, y = sec x: Domain = R except odd multiples of π/2, Range = (−∞, −1] ∪ [1, ∞); continuous and differentiable on its domain.
Cotangent function, y = cot x: Domain = R except integer multiples of π, Range = R; continuous and differentiable on its domain.
Arc sine function, y = \(\sin^{-1} x\): Domain = [−1, 1], Range = [−π/2, π/2]; continuous on its domain, differentiable on (−1, 1).
Arc cosine function, y = \(\cos^{-1} x\): Domain = [−1, 1], Range = [0, π]; continuous on its domain, differentiable on (−1, 1).
Arc tangent function, y = \(\tan^{-1} x\): Domain = R, Range = (−π/2, π/2); continuous and differentiable everywhere.
Arc cosecant function, y = \(\mathrm{cosec}^{-1} x\): Domain = (−∞, −1] ∪ [1, ∞), Range = [−π/2, 0) ∪ (0, π/2]; continuous on its domain, differentiable on its interior.
Arc secant function, y = \(\sec^{-1} x\): Domain = (−∞, −1] ∪ [1, ∞), Range = [0, π/2) ∪ (π/2, π]; continuous on its domain, differentiable on its interior.
Arc cotangent function, y = \(\cot^{-1} x\): Domain = R, Range = (0, π); continuous and differentiable everywhere.

Differentiability examples:
Example 1. Find the derivative of f given by f(x) = \(\tan^{-1} x\), assuming it exists.
Solution: Let y = \(\tan^{-1} x\). Then x = tan y. Differentiating both sides w.r.t. x, we get 1 = \(\sec^2 y \, \frac{dy}{dx}\), which implies \(\frac{dy}{dx}\) = \(\frac{1}{\sec^{2} y}\) = \(\frac{1}{1 + \tan^{2} y}\) = \(\frac{1}{1 + (\tan({\tan}^{-1}x))^{2}}\) = \(\frac{1}{1 + x^{2}}\).
Example 2. Differentiate \(e^{-x}\) w.r.t. x.
Solution: Let y = \(e^{-x}\). Using the chain rule, we have \(\frac{dy}{dx}\) = \(e^{-x} \frac{d}{dx}(-x)\) = \(-e^{-x}\).

More from Calculus: Relations and Functions, Limits Formula, Continuity Rules, Derivative Formula, Integral Formula, Inverse Trigonometric Function Formulas, Application of Integrals, Logarithm Formulas
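A quick numerical check of both worked examples (my own sketch, using only a central-difference approximation, which is not part of the article):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Example 1: d/dx arctan(x) = 1/(1 + x^2), checked at several points
for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(numeric_derivative(math.atan, x) - 1 / (1 + x**2)) < 1e-8

# Example 2: d/dx e^{-x} = -e^{-x}
for x in (-1.0, 0.0, 2.0):
    assert abs(numeric_derivative(lambda t: math.exp(-t), x) + math.exp(-x)) < 1e-6
```

The central difference converges at rate \(O(h^2)\), so with \(h = 10^{-6}\) the agreement with the symbolic derivatives is far tighter than the tolerances used here.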
We've been having fun with databases using categories and functors. To go any further we need 'natural transformations'. These are one of the most important aspects of category theory. They give it a special flavor different from almost all previous branches of mathematics! In most branches of math you study gadgets and maps between gadgets: sets and functions, groups and homomorphisms, and so on. We do this in category theory too! We have categories and functors between categories. But category theory has an extra layer. It also has natural transformations between functors. The reason is not hard to find. Some people think categories are abstract, but I don't. When you say the word 'category', a picture sort of like this pops into my head: It's a bunch of objects, which look like dots, and a bunch of morphisms, which look like arrows between dots. Now, you might object that this is a picture of a graph, not a category. And you'd be right! In a category we can compose morphisms, and we also have identity morphisms. So a more detailed mental picture of the same category would look like this: Let's call this category \(\mathcal{C}\). Then, a functor \(F\) from this category \(\mathcal{C}\) to some other category \(\mathcal{D}\) would look a bit like this: \(F\) maps each object of \(\mathcal{C}\) to an object in \(\mathcal{D}\), and it maps each morphism to a morphism. So, a functor from \(\mathcal{C}\) to \(\mathcal{D}\) is like a picture of \(\,\mathcal{C}\) drawn on the blackboard of \(\,\mathcal{D}\). The category \(\mathcal{D}\) may, of course, contain other objects and morphisms that aren't in our picture. To keep things simple I haven't drawn those. Also, two objects or morphisms of \(\mathcal{C}\) may get mapped to the same object or morphism in \(\mathcal{D}\), so the picture of \(\mathcal{C}\) in \(\mathcal{D}\) could be 'squashed down'. But I haven't drawn that either! You can draw these other possibilities.
Now for the fun part. Suppose we have two functors from \(\mathcal{C}\) to \(\mathcal{D}\), say \(F\) and \(G\). Let me draw them in a stripped-down way: Then we can define a 'natural transformation' from \(F\) to \(G\), written \(\alpha : F \Rightarrow G\). It looks like this: For each object \(x \in \mathcal{C}\), the natural transformation gives a morphism from \(F(x)\) to \(G(x)\). In the picture above I've drawn every object of \(\mathcal{C}\) as a mere dot, but that's a bit sloppy. So let's give all the objects and morphisms names: Now we see more clearly that for each object \(x \in \mathcal{C}\), the natural transformation \(\alpha\) gives a morphism \(\alpha_x : F(x) \to G(x) \). Note that this creates a lot of parallelograms in our picture... or squares, if you draw them like this: What makes the transformation natural is that all these squares must 'commute': that is, going down and across gives the same morphism as going across and down. It takes a while to see why it's so important, but this condition is one of the first really big ideas in category theory. Ways of going between functors that feel 'natural' in an intuitive sense, meaning that they don't involve disorganized random choices, tend to be natural in this technical sense! And it turns out that one can do a lot using this idea. Indeed, when Eilenberg and Mac Lane came out with their first paper on category theory in 1945, the main topic was natural transformations. Mac Lane later said: I didn't invent categories to study functors; I invented them to study natural transformations. Okay, now let me give the formal definition: Definition. Given categories \(\mathcal{C},\mathcal{D}\) and functors \(F, G: \mathcal{C} \to \mathcal{D}\), a transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x \in \mathcal{C}\).
We say the transformation \(\alpha\) is natural if for each morphism \(f : x \to y\) in \(\mathcal{C}\), this square commutes: In other words, $$ G(f) \alpha_x = \alpha_y F(f) .$$We call a square diagram of this sort a naturality square. Okay, now let's start figuring out what natural transformations are good for! Puzzle 126. Let \(\mathcal{C}\) be the free category on this graph: If we treat this as a database schema, a functor \(F: \mathcal{C} \to \mathbf{Set}\) is a database built using this schema, and you can draw it as a table, like this: \[ \begin{array}{c|c} \text{People} & \mathrm{FriendOf} \\ \hline Alice & Bob \\ Bob & Alice \\ Stan & Alice \\ Tyler & Stan \\ \end{array} \] Suppose \(G: \mathcal{C} \to \mathbf{Set} \) is another such database: \[ \begin{array}{c|c} \text{People} & \mathrm{FriendOf} \\ \hline Alice & Bob \\ Bob & Alice \\ Stan & Alice \\ Tyler & Stan \\ Mei-Chu & Stan \\ \end{array} \] Can you find a natural transformation \(\alpha : F \Rightarrow G\) in this example? What is its practical meaning: that is, what could it be used for? What is the significance of naturality in this example? Puzzle 127. Suppose \(H : \mathcal{C} \to \mathbf{Set} \) is yet another database built using the same schema, namely \[ \begin{array}{c|c} \text{People} & \mathrm{FriendOf} \\ \hline Alice & Bob \\ Bob & Alice \\ Tyler & Bob \\ \end{array} \] Can you find a natural transformation \(\beta : F \Rightarrow H\)? What is its practical meaning? What is the significance of naturality in this example? Puzzle 128. How many natural transformations \(\gamma: H \Rightarrow H\) are there? What is their practical meaning?
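Here is the setup of Puzzle 126 made concrete in code (my own sketch, not part of the lecture: the databases are finite, so the functors become dicts and the naturality square can be checked by brute force):

```python
# Functor F: the object "People" goes to a set, the morphism "FriendOf"
# to a function on that set.
F_people = {"Alice", "Bob", "Stan", "Tyler"}
F_friend = {"Alice": "Bob", "Bob": "Alice", "Stan": "Alice", "Tyler": "Stan"}

# Functor G: a second database on the same schema, with one extra row.
G_people = {"Alice", "Bob", "Stan", "Tyler", "Mei-Chu"}
G_friend = dict(F_friend, **{"Mei-Chu": "Stan"})

# A candidate natural transformation alpha: F => G -- here just the inclusion
# of F's people into G's people.
alpha = {p: p for p in F_people}

# Naturality square for the morphism FriendOf:
#   G(FriendOf)(alpha(p)) == alpha(F(FriendOf)(p))  for every person p.
natural = all(G_friend[alpha[p]] == alpha[F_friend[p]] for p in F_people)
print(natural)
```

Since `G_friend` extends `F_friend` without changing any existing row, the inclusion satisfies the naturality condition: migrating a person and then looking up their friend agrees with looking up the friend and then migrating.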
The answer is yes, it is always a submersion. First of all, $\pi:=\pi_M$ is clearly surjective. Also, for any point $s'\in S'$ lying over some $p'\in M'$, the tangent space at $s'$ maps surjectively onto the tangent space at $p'$ (since $\pi':=\pi_{M'}$ is a submersion.) Take then a point $p\in M$ and a vector $v\in T_pM$, and push it forward to $M'$, $v'=f_*v$. From what we've just noted, there must be a point $s'\in S'$ which actually lies in the intersection $f(M)\times N \cap S'$, and the tangent space at that point maps surjectively onto the tangent space at $f(p)$. In other words, there is a vector $w'\in T_{s'}S'$ such that $\pi'_*w'=v'$, and this $w'$ is the image of some $w \in TS$. Now, it's obvious that $\pi_*w=v$. Here's a high-brow proof: $\pi$ is just the pullback of $\pi'$. The pullback of a submersion is a submersion. EDIT: Here are some details about the above; I'll only sketch them, since I think that it is a very useful exercise to convince yourself of these facts. First of all, the image of $f\times \mathrm{id}$ clearly is $f(M)\times N$. Also, by definition, $S=(f\times \mathrm{id})^{-1}(S')$, so what is $(f\times \mathrm{id})(S)$? You should convince yourself that it is precisely $f(M)\times N\cap S'$. Then, I claim $(f\times \mathrm{id})_*(TS)=T(f(M)\times N\cap S')$. This should be easy to see, because $f_*f^*T(f(M)\times N\cap S')=T(f(M)\times N\cap S')$. Now, $s'$ was chosen to be in the intersection $f(M)\times N \cap S'$, and $v'$ is tangent to $f(M)$, so that $w'$ is tangent to $f(M)\times N \cap S'$. Clearly, from the above, $w'$ is then the pushforward of some $w\in TS$. Now, I above incorrectly stated that $\pi_*w=v$. 
We're a priori not sure of this; what we know is that there is some $w$ for which such an equality holds (the problem here is that $v$ might not be the only preimage of $v'$, so a randomly chosen $w$ might map to some other preimage; on the other hand, again, $f\times \mathrm{id}$ pulls back a copy of $w'$ for every preimage of $v'$.) (I sense this might sound a bit confusing, mostly because I'm being a bit lazy and not writing things more explicitly. However, if you think of this whole thing as a pullback, the whole thing becomes clear, because I'm basically going through an "element proof" of my highbrow proof.) Addendum: check also: The tangent space to the preimage of $Z$ is the preimage of the tangent space of $Z$.
This is just an elaboration on Brendan McKay's beautiful answer, but too long for a comment. The crucial idea is to simplify the problem by generalising it, introducing a maximisation on the indices of the sum, detached from the original planar graph $G$. For $x = (x_1, \ldots, x_n) \in \mathbb{R}_{\geq 0}^n$ and a multi-graph $H$ with $V(H) \subseteq \{ 1, \ldots, n \}$, let$$ L_H(x) := \sum_{ij \in E(H)} \min \{ x_i, x_j \} .$$For a natural number $d$, let $\mathcal{H}_d$ be the class of all finite multi-graphs $H$ with $V(H) = \{ 1, \ldots, n \}$ (for any $n$) such that $e(H[A]) \leq d(|A|-1)$ holds for any non-empty $A \subseteq V(H)$ (in other words, graphs of arboricity at most $d$). Theorem: For any $x \in \mathbb{R}_{\geq 0}^n$ and any $H \in \mathcal{H}_d$ we have $L_H(x) \leq d(\sum_{i=1}^n x_i - \max_i x_i)$. Proof: We may assume that $x_i >0$ holds for every $i$. Permute $\{1, \ldots, n\}$ so that $x_1 \geq x_2 \geq \ldots \geq x_n$. Note that$$ L_H(x) = \sum_{ij \in E(H) \atop i \, < \, j} x_j .$$ Extend the class $\mathcal{H}_d$ slightly by requiring only that $e(H[A]) \leq d(|A| - 1)$ holds when $A = \{ 1, \ldots, k \}$ for some $k$ (that is, $A$ is an initial segment of $\{ 1, \ldots, n\}$). Call this new class $\mathcal{H}_d'$. Choose $H \in \mathcal{H}_d'$ with $V(H) = \{ 1, \ldots, n \}$ so that $L_H(x)$ is maximum and, subject to this, so that $$ R(H) := \sum_{ij \in E(H)} (i + j)$$ is minimum. We claim that $H$ is a star with center 1. Indeed, if $ij \in E(H)$ and $1 < i < j \leq n$, then we could replace $ij$ by the edge $1j$ and obtain a graph $H'$ with $L_{H'}(x) = L_H(x)$ and $R(H') < R(H)$. Trivially $H' \in \mathcal{H}_d'$ as well, contradicting our choice of $H$. Moreover, every $j \in \{ 2, \ldots, n \}$ has degree exactly $d$ in $H$. Otherwise, choose $j$ minimum with $d_H(j) \neq d$. If $d_H(j) > d$, then for $A := \{ 1, \ldots, j \}$ we have $e(H[A]) > d(|A| - 1)$, a contradiction to $H \in \mathcal{H}_d'$. Hence $d_H(j) \leq d-1$. 
Add an edge $1j$ to $H$. If there is some $k > j$ with $d_H(k) > d$, take a minimum such $k$ and delete one edge $1k$. If there was no such $k$, then the resulting graph $H'$ satisfies $L_{H'}(x) > L_H(x)$. If there was such a $k$, then $L_{H'}(x) \geq L_H(x)$ (as $x_j \geq x_k$) and $R(H') < R(H)$. Either way, $H'$ is easily seen to lie in $\mathcal{H}_d'$ and thus contradicts our choice of $H$. Now that $H$ is explicitly given, it follows that$$L_H(x) = \sum_{j = 2}^n d\, x_j = d\left(\sum_{i=1}^n x_i - \max_i x_i\right).$$ This finishes our proof. The original statement now follows by taking as $x$ the degree sequence of a planar graph and noting that $G \in \mathcal{H}_d$ for $d = 3$, since planar graphs have arboricity at most $3$.
I have a Markov chain with transition probability matrix $Q(u,v)$ and stationary distribution $\pi(u)$, defined on state space $\Omega$. The matrix $Q$ has dimension $n\times n$ and the vector $\pi$ is $1\times n$. I need to construct a vorticity matrix $\Gamma (u,v)$ of dimension $n\times n$ which has the following properties: $\Gamma$ is a skew-symmetric matrix, i.e., $$\Gamma (u,v) = -\Gamma (v,u) \quad ,\forall \, u,v \in \Omega $$ The row sum of $\Gamma$ is zero for every row, i.e., $$ \sum_v \Gamma (u,v) = 0 \quad ,\forall \, u \in \Omega $$ The third property is $$\Gamma(u,v) > -\pi (v)Q(v,u) \quad ,\forall \, u,v \in \Omega $$ My question is: how can I construct a vorticity matrix $\Gamma (u,v)$ which satisfies the above three properties? I need to construct at least one such matrix. Is there any systematic way to build such matrices? NOTE: The transition probability matrix $Q$ and the stationary distribution $\pi$ have the following properties: The row sum of $Q$ is one for each row, $$\sum_v Q(u,v)=1 \quad ,\forall \, u \in \Omega$$ $\pi$ is a probability distribution, hence $$\sum_v \pi(v) = 1$$ The stationary distribution condition for $\pi$ is $$\sum_u \pi(u) Q(u,v) = \pi(v) \quad ,\forall \, v \in \Omega $$
Courtesy of the OpenCV 2.3 GPU code comes a neat snippet of code for using a template parameter for reading RGB or BGR ordered components when dealing with RGB triplets. The Code template <int blueIndex> float rgb2grey(const float *src) { return 0.299f*src[blueIndex^2] + 0.587f*src[1] + 0.114f*src[blueIndex]; } Then to use the function you simply supply the index where the blue value resides to take care of the RGB vs BGR ordering. For RGB ordering: rgb2grey<2>(src); And for BGR ordering: rgb2grey<0>(src); This only works for swapping R and B around and won’t work for more weird and wonderful orderings. The original OpenCV code can be found in modules/gpu/src/opencv2/gpu/device/detail/color.hpp. Templates and CUDA For me the fact that you can use template meta-programming is a real plus point of CUDA. It allows for good code re-use and the template expansion gives good scope for the compiler to optimise. It can also allow you to remove conditionals from kernels in appropriate circumstances – more in a future post!

Displaying equations in webpages has always been a headache. The fallback of using images was always there, but in the age of blogs and other content creation tools editing, updating and maintaining images for equations is tedious. MathML was an effort to standardise support in browsers, but the reality of it is that it only works out of the box in very few cases. Whilst looking for a way to easily put equations into a WordPress blog, MathJax turned up. Have a look at some of the examples! MathJax in WordPress A quick search turns up a couple of WordPress plugins, of which I ended up using Latex for WordPress, which allows me to easily put Latex syntax equations directly into posts. So {{{ $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ }}} becomes $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ All in all excellent, 😉

For 2.5 yrs work has made use of a Kolab server I set up, but recently it has been taking up too much of my time to maintain. 
Google Apps was chosen as the replacement, so the problem then became how to get 39GB of data uploaded to Google over a fairly slow ADSL link, whilst keeping everyone up and running as much as possible. imapsync seemed to be the tool of choice for a lot of people doing migrations of IMAP users, and it has done a very good job here too. I used version 1.311 from source rather than the older package provided with Ubuntu at the time as it contained various fixes for Google’s IMAP servers. Continue reading At work we are getting a 64-bit version of our software up and running at the moment. Most of the usual culprits reared their heads – assuming that a pointer and integer had the same size etc etc. One more interesting one, which I’ve not come across before, is related to using the STL string::find and the special constant string::npos. A quick google shows this is not unique to our code base, and it actually just boils down to data being truncated before a comparison. The nuances of the problem do lead on to a discussion about signed vs. unsigned integral types in C++ and the handling of comparisons between differently sized data types. I thought it was worth looking at a bit further and it is definitely something to watch out for when doing code reviews. It could also make for a particularly challenging interview question 😉 Continue reading
Here is the standard way to prove the result. First note that there is a non-zero $\mathcal C^\infty$ function $\theta\geq 0$ on $\mathbb R$ which is supported on $[0,1]$. From this, it follows that for any $\varepsilon >0$, there is a non-negative $\mathcal C^\infty$ function $\phi_\varepsilon$ on $\mathbb R^n$ which is supported on the closed euclidean ball $\overline B(0,\varepsilon)$ and such that $\int_{\mathbb R^n} \phi_\varepsilon (u) du=1$: just put $\phi_\varepsilon(x)= c_\varepsilon\, \theta\left(\frac{\Vert x\Vert^2}{\varepsilon^2}\right)$ for some suitably chosen constant $c_\varepsilon$. Now, choose $\varepsilon>0$ such that $D+\overline B(0,2\varepsilon)\subset U$, and define $f$ to be the convolution $\mathbf 1_{D_\varepsilon}*\phi_\varepsilon$, where $D_\varepsilon=D+\overline B(0,\varepsilon)$:$$f(x)=\int_{\mathbb R^n} \mathbf 1_{D_\varepsilon}(y)\phi_\varepsilon(x-y)\, dy\, . $$ Then, by the standard theorem on differentiation under the integral sign, $f$ is $\mathcal C^\infty$; and by a well known property of the support of a convolution, $${\rm supp}(f)\subset D_\varepsilon+{\rm supp}(\phi_\varepsilon)\subset D+\overline B(0, 2\varepsilon)\subset U\, .$$ Finally, $f$ is equal to $1$ on $D$. Indeed, write $$f(x)=\int_{\mathbb R^n} \mathbf 1_{D_\varepsilon}(x-y) \phi_\varepsilon (y)\, dy=\int_{\overline B(0,\varepsilon)} \mathbf 1_{D_\varepsilon}(x-y) \phi_\varepsilon (y)\, dy $$and observe that if $x\in D$, then $x-y\in D_\varepsilon$ for every $y\in\overline B(0,\varepsilon)$, i.e. $\mathbf 1_{D_\varepsilon}(x-y)=1$. It follows that for $x\in D$ we have$$ f(x)=\int_{\overline B(0,\varepsilon)} \phi_\varepsilon (y)\, dy=\int_{\mathbb R^n} \phi_\varepsilon(y)\, dy=1\, .$$ $\bf Edit.$ If you want to find the function $f$ just by using the existence of a partition of unity as you stated it, you can do this assuming that your partition of unity $(\phi_i)_{i\in I}$ is relative to $U$ and is locally finite, i.e. 
each point $x\in\mathbb R^n$ has a neighbourhood $V_x$ on which all but finitely many functions $\phi_i$ are $0$. Assume that the closure of $E$ is contained in $U$. By compactness, you can cover $\overline E$ by finitely many open sets $V_{x_1},\dots ,V_{x_N}$ as above; and moreover you may assume that $V_{x_j}\subset U$ for all $j$. For each $j\in\{ 1,\dots ,N\}$, choose a finite set $I_j\subset I$ such that $\phi_i\equiv 0$ on $V_{x_j}$ for all $i\not\in I_j$. Then let $I':=\bigcup_{j=1}^N I_j$ and $f:=\sum_{i\in I'} \phi_i$. The function $f$ is perfectly well-defined and $\mathcal C^\infty$ on $\mathbb R^n$ since this is a finite sum, and you do have $f\equiv 1$ on $E$ because $\phi_i\equiv 0$ on $\overline E$ for all $i\not\in I'$ and hence $f\equiv\sum_{i\in I}\phi_i=1$ on $E$.
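For concreteness, a standard choice for the bump function $\theta$ used in the first paragraph (a classical fact, not spelled out above) is:

```latex
\theta(t) =
\begin{cases}
\exp\!\left(-\dfrac{1}{t(1-t)}\right), & 0 < t < 1,\\[2mm]
0, & \text{otherwise},
\end{cases}
```

which is $\mathcal C^\infty$ on $\mathbb R$, non-negative, and supported exactly on $[0,1]$; all derivatives vanish at $t=0$ and $t=1$, which is what makes the gluing smooth.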
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...

Considering this pseudo-code of a bubblesort:

FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...

Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block, and freeing a previously allocated block which is not used anymore. Also, as a requiremen...

Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...

People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...

Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...

I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. 
I never encountered the problem on other Stac...

This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...

There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...

Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...

Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping): $$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...

I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...

Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a... 
I have observed that there are two different types of states in branch prediction. In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay. In the instruction pipeline, where the fetching is more of a problem since the inst...

Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...

NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...

I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436 however, the au...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few... 
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example: "How do I get the symme...

EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:

$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$

EPAL is the 'bane' of many parsing algorithms: I have yet to enc...

Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...

Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.

Inductive LTree : Set := Node : list LTree -> LTree.

The naive way of d...

I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se. So should we tag questions on a pa...

To what extent are questions about proof assistants on-topic? I see four main classes of questions: Modeling a problem in a formal setting; going from the object of study to the definitions and theorems. Proving theorems in a way that can be automated in the chosen formal setting. Writing a co...

Should topics in applied CS be on topic? 
These are not really considered part of TCS; examples include:

Computer architecture (Operating system, Compiler design, Programming language design)
Software engineering
Artificial intelligence
Computer graphics
Computer security

Source: http://en.wik...

I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.

I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable. So far I have...

It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automaton. But, apparently, Buchi automata are a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...

Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise. Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...

One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...

Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case. 
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Suppose you observe the historical returns of \(p\) different fund managers, and wish to test whether any of them have superior Signal-Noise ratio (SNR) compared to the others. The first test you might perform is the test of pairwise equality of all SNRs. This test relies on the multivariate delta method and central limit theorem, resulting in a chi-square test, as described by Wright et al and outlined in section 4.3 of our Short Sharpe Course. This test is analogous to ANOVA, where one tests different populations for unequal means, assuming equal variance. (The equal Sharpe test, however, deals naturally with the case of paired observations, which is commonly the case in testing asset returns.) In the analogous procedure, if one rejects the null of equal means in an ANOVA, one can perform pairwise tests for equality. This is called a post hoc test, since it is performed conditional on a rejection in the ANOVA. The basic post hoc test is Tukey's range test, sometimes called 'Honest Significant Differences'. It is natural to ask whether we can extend the same procedure to testing the SNR. Here we will propose such a procedure for a crude model of correlated returns. The Tukey test has increased power by pooling all populations together to estimate the overall variance. The test statistic then becomes something like $$ q = \frac{Y_{(p)} - Y_{(1)}}{S/\sqrt{n}}, $$ where \(Y_{(1)}\) is the smallest mean observed, and \(Y_{(p)}\) is the largest, and \(S^2\) is the pooled estimate of variance. The difference between the maximal and minimal \(Y\) is why this is called the 'range' test, since this is the range of the observed means. Switching back to our problem, we should not have to assume that our tested returns series have the same volatility. Moreover, the standard error of the Sharpe ratio is only weakly dependent on the unknown population parameters, so we will not pool variances. 
In our paper on testing the asset with maximal Sharpe, we established that the vector of Sharpes, for normal returns and when the SNRs are small, is approximately asymptotically normal: Here \(R\) is the correlation of returns. See our previous blog post for more details. Under the null hypothesis that all SNRs are equal to \(\zeta_0\), we can express this where \(R^{1/2}\) is a matrix square root of \(R\). Now assume the simple rank-one model for correlation, where assets are correlated to a single common latent factor, but are otherwise independent: Under this model of \(R\) we computed the inverse square root of \(R\) as Picking two distinct indices \(i, j\), let \(v = \left(e_i - e_j\right)\) be the contrast vector. We have because \(v^{\top}1=0\). Thus the range of the observed Sharpe ratios is a scalar multiple of the range of a set of \(p\) independent standard normal variables. This is akin to the 'monotonicity' principle that we abused earlier when performing inference on the asset with maximum Sharpe. Under the normal approximation and the rank-one correlation model, we should then see $$ \max_i \hat{\zeta}_i - \min_i \hat{\zeta}_i \geq \sqrt{\frac{1-\rho}{n}}\, q_{1-\alpha,p,\infty} $$ with probability \(\alpha\), where \(q_{1-\alpha,m,n}\) is the upper \(\alpha\)-quantile of the Tukey distribution with \(m\) and \(n\) degrees of freedom. This is computed by qtukey in R. Alternatively one can construct confidence intervals around each \(\hat{\zeta}_i\) of width \(HSD\), whereby if another \(\hat{\zeta}_j\) does not fall within it, the two are said to be Honestly Significantly Different. The familywise error rate should be no more than \(\alpha\). Testing Let's test this under the null. We spawn 4 years of correlated returns from 16 managers, then compare the maximum and minimum observed Sharpe ratio, comparing them to the test value of \(HSD\). Assume that the correlation is known to have value \(\rho=0.8\). (More realistically, it would have to be estimated.) 
Note that for this many fund managers we have and thus taking into account the \(\sqrt{1-\rho}\) term, This is only slightly bigger than the naive approximate confidence intervals one would typically apply to the Sharpe ratio, which in this case would be around We perform 10 thousand simulations, computing the Sharpe over all managers, and collecting the ranges. We compute the empirical type I error rate, and find it to be nearly equal to the nominal value of 0.05:

suppressMessages({
  library(mvtnorm)
})
nman <- 16
nyr <- 4
ope <- 252
SNR <- 0.8  # annual units
rho <- 0.8
nday <- round(nyr * ope)
R <- pmin(diag(nman) + rho, 1)
mu <- rep(SNR / sqrt(ope), nman)
nsim <- 10000
set.seed(1234)
ranges <- replicate(nsim, {
  X <- mvtnorm::rmvnorm(nday, mean=mu, sigma=R)
  zetahat <- colMeans(X) / apply(X, 2, sd)
  max(zetahat) - min(zetahat)
})
alpha <- 0.05
HSDval <- sqrt((1-rho) / nday) * qtukey(alpha, lower.tail=FALSE, nmeans=nman, df=Inf)
mean(ranges > HSDval)

## [1] 0.0541

Compact Letter Display

The results of Tukey's test can be difficult to summarize. You might observe, for example, that managers 1 and 2 have significantly different SNRs, but not have enough evidence to say that 1 and 3 have different SNR, nor 2 and 3. How, then should you think about manager 3? He/She perhaps has the same SNR as 2, and perhaps the same as 1, but you have evidence that 1 and 2 have different SNR. You might label 1 as being among the 'high performers' and 2 among the 'average performers'; In which group should you place 3? 
One answer would be to put manager 3 in both groups. This is a solution you might see as the result of compact letter displays, which is a commonly used way of communicating the results of multiple comparison procedures like Tukey's test. The idea is to put managers into multiple groups, each group identified by a letter, such that if two managers are in a common group, the HSD test fails to find they have significantly different SNR. The assignment to groups is actually not unique, and so subject to optimizing certain criteria, like minimizing the total number of groups, and so on, cf. Gramm et al. For our purposes here, we use Piepho's algorithm, which is conveniently provided by the multcompView package in R. Here we apply the technique to the series of monthly returns of 5 industry factors, as compiled by Ken French, and published in his data library. We have almost 1200 months of data for these 5 returns. The returns are highly positively correlated, and we find that their common correlation is very close to 0.8. For this setup, and measuring the Sharpe in annualized units, the critical value at the 0.05 level is For comparison, the half-width of the two sided confidence interval on the Sharpe in this case would be around which is a bit bigger. We have actually gained resolving power in our comparison of industries because of the high level of correlation. Below we compute the observed Sharpe ratios of the five industries, finding them to range from around \(0.49\,\mbox{year}^{-1/2}\) to \(0.67\,\mbox{year}^{-1/2}\). We compute the HSD threshold, then call Piepho's method and print the compact letter display, shown below. In this case we require two groups, 'a' and 'b'. Based on our post hoc test, we assign Healthcare and Other into two different groups, but find no other honest significant differences, and so Consumer, Manufacturing and Technology get lumped into both groups. 
# this is just a package of some data:
# if (!require(aqfb.data)) { install.packages('shabbychef/aqfb_data') }
library(aqfb.data)
data(mind5)
mysr <- colMeans(mind5) / apply(mind5, 2, FUN=sd)
# sort decreasing for convenience later
mysr <- sort(mysr, decreasing=TRUE)
# annualize it
ope <- 12
mysr <- sqrt(ope) * mysr
# show
print(mysr)

## Healthcare Consumer Manufacturing Technology Other
## 0.667411 0.648680 0.596682 0.590566 0.485220

srdiff <- outer(mysr, mysr, FUN='-')
R <- cov2cor(cov(mind5))
# this ends up being around 0.8:
myrho <- median(R[row(R) < col(R)])
alpha <- 0.05
HSD <- sqrt(ope) * sqrt((1-myrho) / nrow(mind5)) * qtukey(alpha, lower.tail=FALSE, nmeans=ncol(mind5), df=Inf)
library(multcompView)
lets <- multcompLetters(abs(srdiff) > HSD)
print(lets)

## Healthcare Consumer Manufacturing Technology Other
## "a" "ab" "ab" "ab" "b"
The $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$

1. Background

This is a formal introduction to the genetic code $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$. This mathematical model is defined based on the physicochemical properties of DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) file named IntroductionToZ5GeneticCodeVectorSpace.cdf available in GitHub. This is a graphic user interface with an interactive didactic introduction to the mathematical biology background that is explained here. To interact with a CDF users will require the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).

2. Biological mathematical model

If the Watson-Crick base pairings are symbolically expressed by means of the sum “+” operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that the bases with the same number of hydrogen bonds in the DNA molecule and different chemical types be algebraically inverse in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below), which satisfy the last constraints, can be defined on eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set SW. 
A set of extended base triplets is defined as $\mathfrak{B}^3$ = { XYZ | X, Y, Z $\in\mathfrak{B}$}, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the extended triplets set can be defined as the direct third power of the group $(\mathfrak{B},+)$: $(\mathfrak{B}^3,+) = (\mathfrak{B},+)\times(\mathfrak{B},+)\times(\mathfrak{B},+)$, where X, Y, Z $\in\mathfrak{B}$ and the operation “+” is as shown in the table below [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of positive integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element $\alpha \bullet XYZ = \overbrace{XYZ+XYZ+…+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet XYZ =$ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z_5}$-vector space over the field $(\mathbb{Z_5}, +, .)$ of the integer numbers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.

+ | D A C G U
D | D A C G U
A | A C G U D
C | C G U D A
G | G U D A C
U | U D A C G

This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from SW.

3. The canonical base of the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$

Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD implies $c_1=c_2=c_3=0$ for $c_1, c_2, c_3 \in\mathbb{Z_5}$. 
Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z}_5$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set $e_1$, $e_2$, $e_3$ is a canonical base for the $\mathbb{Z}_5$-vector space $\mathfrak{B}^3$. The elements $x, y, z \in\mathbb{Z}_5$ are said to be the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base $(e_1, e_2, e_3)$ [3].

References

1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119–52.
2. Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60–76.
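To make the construction above concrete, here is a small Python sketch (the function names are my own, not from the cited papers) of the sum operation over the SW set {D, A, C, G, U}, the scalar product, and the canonical coordinates. The encoding D=0, A=1, C=2, G=3, U=4 reproduces the sum table above, because $(\mathfrak{B},+)$ is isomorphic to $(\mathbb{Z}_5,+)$:

```python
# Sketch of (B, +) and the Z5-vector space B^3 over the SW set {D, A, C, G, U}
BASES = "DACGU"                       # D=0, A=1, C=2, G=3, U=4
IDX = {b: i for i, b in enumerate(BASES)}

def add_triplets(t1, t2):
    """Componentwise sum of two extended triplets in (B^3, +)."""
    return "".join(BASES[(IDX[a] + IDX[b]) % 5] for a, b in zip(t1, t2))

def scalar_mul(alpha, t):
    """alpha . XYZ, i.e. XYZ summed with itself alpha times (mod 5)."""
    return "".join(BASES[(alpha * IDX[b]) % 5] for b in t)

def coordinates(t):
    """Coordinates (x, y, z) of XYZ in the canonical base e1=ADD, e2=DAD, e3=DDA."""
    return tuple(IDX[b] for b in t)
```

For instance, the Watson-Crick pairing G + C = D holds componentwise, so add_triplets("GGG", "CCC") gives "DDD", and coordinates("ACG") gives (1, 2, 3).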
Frequently, 19th-century physicists—e.g., Helmholtz or Maxwell—did not use modern-day vector notation, which Gibbs contributed to in large part. For example, Helmholtz in his famous paper on the conservation of energy writes $$X=m\frac{du}{dt},\hspace{1em}Y=m\frac{dv}{dt},\hspace{1em}Z=m\frac{dw}{dt},$$ where $X,Y,Z$ are the components of a force and $u=dx/dt$, $v=dy/dt$, $w=dz/dt$ are the components of the tangential velocity $q$. Instead, he could've written this much more concisely as $$\vec{F}=m\frac{d\vec{v}}{dt},$$ where $\vec{F}=(X,Y,Z)$ and $\vec{v}=(u,v,w)$. Similarly, he could've written $$d(q^2)=\frac{\partial(q^2)}{\partial x}dx+\frac{\partial(q^2)}{\partial y}dy+\frac{\partial(q^2)}{\partial z}dz$$ much more concisely as $$d(q^2)=\nabla q^2\cdot d\vec{r}.$$ Thus: Why did those physicists use such a cumbersome, redundant way of writing what today we'd write with vector notation? Are there some expressions that cannot be expressed in vector notation but must be expressed in this seemingly cumbersome notation that predated vector notation? In summary: What were the criticisms against the introduction of "vector analysis"?
Yes, an instanton is a classical solution to the Euclidean equations of motion with finite action. Its topological charge is given by $k = \frac{1}{8\pi^2}\int \mathrm{tr}(F\wedge F)$, which is the integral of the divergence of the Chern-Simons current. There are many different instantons possible. A generic instanton for $\mathrm{SU}(2)$ and topological charge 1 is given by the BPST instanton$$ A_\mu^a(x) = \frac{2}{g}\frac{\eta_{\mu\nu}^a(x-x_0)^\nu}{(x -x_0)^2 + \rho ^2}$$where $x_0$ is the "center" of the instanton and $\rho$ its scale, also called the radius. The $\eta$ is the 't Hooft symbol. A large class of instantons of topological charge $k$ may be described as follows: Transforming the BPST instanton by the singular transformation $x^\mu \mapsto \frac{x^\mu}{x^2}$ leads to the expression$$ A_\mu^a(x) = -\eta_{\mu\nu}^a \partial^\nu\left(\ln\left(1+\frac{\rho^2}{(x-x_0)^2}\right)\right)$$for the transformed instanton, and one now makes the more general ansatz$$ A_\mu^a(x) = -\eta_{\mu\nu}^a \partial^\nu\left(\ln\left(1+\sum_{l=1}^k \frac{\rho_l^2}{(x-x_{0,l})^2}\right)\right)$$which leads to an instanton solution of topological charge $k$. This construction can be generalized to other non-Abelian gauge groups. The generic construction of all instantons on four-dimensional spacetimes of gauge group $\mathrm{SU}(N)$ is given by the ADHM construction, see also the original paper "Construction of instantons" by Atiyah, Drinfeld, Hitchin and Manin.
The above graph is pulled from the book "Automatic Control Systems" by Kuo. The subject loop transfer function is $$L(s)=\frac{K}{s(1+T_1s)}$$ or $$L(j\omega)=\frac{-jK(1-jT_1\omega)}{\omega(1+T_1^2\omega^2)}$$ or $$L(j\omega)=\frac{K(-j-T_1\omega)}{\omega(1+T_1^2\omega^2)}$$ Hence the phase equation should be $$\tan\theta=\frac{1}{T_1\omega}$$ Now at infinite frequency, which corresponds to the origin in the above plot, the value of the phase function is $$\tan\theta= \lim_{\omega \to \infty} \frac{1}{T_1\omega}=0$$ Also $$\tan\theta= \lim_{\omega \to 0} \frac{1}{T_1\omega}=\infty$$ At infinite frequency the phase angle can assume a value of either zero degrees or 180 degrees. In this plot it is taken as 180 degrees, and I can't understand why, when zero degrees is also a perfect candidate. Also, at zero frequency the phase angle should be 90 degrees, but here it is taken as -90 degrees. I can't understand this phase thing.
Prove that the set $\{f \in \mathbb{N^N} \: | \:f \: $is strictly increasing$\}$ has the same cardinality as $\mathbb R$.

My attempts: The beginning of this task was quite easy, but then I got stuck on constructing an injection between the set of functions (let's call it $X$) and $\mathbb R$. I started with proving that $|\mathbb R| \geq |X|$: this holds because every $f \in X$ satisfies $f\in \mathbb{N^N}$, so $X \subset \mathbb{N^N}$, and $|\mathbb{N^N}|=|\mathbb R|$. Then I tried to prove that $|\mathbb R| \leq |X|$, but I don't know how to do it. I tried to define a function $g(x)=x^3$, but the result is a number, not a function. Or maybe it is a correct solution? If not, can you explain to me how I can construct an injective function from $\{f \in \mathbb{N^N} \: | \:f \: $is strictly increasing$\}$ to $\mathbb R$? Is it even possible?
Note this is a hard problem, which depends on many factors, so giving complete answers that go through all the possibilities is a bit hard. For instance, the situation changes dramatically depending on where on Earth you attach the string (i.e., the North Pole or the equator) or whether the Moon is at its apogee or perigee or somewhere else in its orbit. I will write some raw thoughts and give some rough numbers. I think actual computer simulations are needed for more detailed answers.

Qualitative Facts:

When we attach the string, if the Moon is at its apogee the destruction will obviously be minimal. Since Earth and the Moon are (almost) spheres, due to the Moon's orbital inclination and its axial tilt, the string will wrap around Earth a finite number of times. I will try to estimate this in the next section. Even if the string initially wraps around the Earth, due to the constant torque applied on Earth, Earth's rotation will slow down and the string will eventually unwrap. Although Earth and the Moon won't collide, the changes in tidal forces and daytime length will be dramatic. The changes in Earth's period around the Sun (the year) will be negligible. The string's maneuver on Earth will change (at most half of) Earth's face considerably in the short term. The string's tension can presumably affect tectonic plate movements (continental drift) and cause severe earthquakes. Depending on where we attach the string, it will also change Earth's axis of rotation; i.e., the equator and the Pole Star will change.

Quantitative Estimations:

These are estimations which can presumably be made on the back of a cocktail napkin.

How many times will the rope wrap around Earth? Or equivalently, what will be the length of the tangled rope around Earth? The mean inclination of the lunar orbit to the ecliptic plane is $\theta = 5.145°$. Therefore, the rope will only wrap a finite number of times around Earth.
Assuming the attachment point is on the equator, I will find an upper limit and a lower limit, then take their geometric mean as a reasonable guess (a valid technique in Fermi problems) for the amount of string wrapped around Earth. The upper limit comes from wrapping a rope around a cylinder of radius and height $R_E$: $$d_{up}= \frac{R_E}{\sin{\theta}} \approx 11 R_E \approx 7.1 \times 10^{7} \text{ m}$$ The lower limit is basically half of the equator's length: $$d_{down}=\pi R_E \approx 2.0 \times 10^7 \text{ m}$$ $$d \approx \sqrt{d_{up}d_{down}} = \sqrt{11 \pi}R_E \approx 5.9 R_E \approx 3.8 \times 10^7 \text{ m}$$ i.e., the Moon will get about $10 \%$ closer.

The new day: or what will be the new definition of one hour! After attaching the Moon and Earth together, and after all the wobbly motions settle down and the system reaches its steady routine motion again (the rope is no longer wrapped around Earth), the Moon will again be at its initial distance from Earth; however, Earth and the Moon will be rotating with the same angular velocity $\Omega'$. We can estimate this by conservation of angular momentum. The answer may depend on whether we attach the string at the Moon's apogee or perigee, so I will estimate both cases. The initial angular momentum around the system's center of mass will be (ignoring axial tilt):$$L=\frac{2}{5}M_E {R_E}^2 \omega_E + M_E {r_E}^2 \Omega + \frac{2}{5}M_M {R_M}^2 \omega_M + M_M {r_M}^2 \Omega$$where $M_E$ and $M_M$, $R_E$ and $R_M$, $r_E$ and $r_M$, $\omega_E$ and $\omega_M$ are Earth's and the Moon's mass, radius, distance to the COM, and angular frequency about their own axes, respectively.
But we know:$$\omega_M = \Omega \\ M_E r_E = M_M r_M$$$$\Rightarrow L=\frac{2}{5}M_E {R_E}^2 \omega_E + M_M \Omega \left( \frac{2}{5}R_M^2 + r_M(r_M + r_E)\right) \\ \approx \frac{2}{5}M_E {R_E}^2 \omega_E + M_M \Omega \left( r_M^2 \right) $$ Writing the angular momentum afterwards:$$L=\frac{2}{5}M_E {R_E}^2 \Omega' + M_E {r_E}^2 \Omega' + \frac{2}{5}M_M {R_M}^2 \Omega' + M_M {r_M}^2 \Omega' \\ \approx \left( \frac{2}{5}M_E {R_E}^2 + M_M r_M^2\right) \Omega'$$ $$T'=\frac{2 \pi}{\Omega'} \approx \frac{2\pi \left( \frac{2}{5}M_E {R_E}^2 + M_M r_M^2 \right)}{\frac{2}{5}M_E {R_E}^2 \omega_E + M_M \Omega r_M^2 } = \frac{2 \pi}{\omega_E} \frac{\frac{2}{5} \frac{M_E}{M_M}+\left(\frac{r_M}{R_E} \right)^2}{\frac{2}{5} \frac{M_E}{M_M}+\left(\frac{r_M}{R_E} \right)^2 \frac{\Omega}{\omega_E}}$$ Taking the values from here and here and here, we get: $$T' \approx 22.1 \text{ days},$$ which is really long. For perigee and apogee, the day lengths will respectively be:$$T'_{p}=21.6 \text{ days} \\ T'_{a}=22.5 \text{ days}$$ The difference is not that significant though.

To be completed
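The back-of-the-napkin numbers above can be checked with a few lines of Python. The constants below are rough textbook values I plugged in, not necessarily the exact ones the author used:

```python
import math

# Rough physical constants (SI); these inputs are assumptions
R_E = 6.371e6        # Earth radius, m
r_M = 3.84e8         # mean Earth-Moon distance ~ Moon's distance to the COM, m
mass_ratio = 81.3    # M_E / M_M
theta = math.radians(5.145)   # lunar orbit inclination
omega_ratio = 1 / 27.32       # Omega / omega_E (sidereal month in days)

# Wrapped-rope estimate: geometric mean of the upper and lower limits
d_up = R_E / math.sin(theta)
d_down = math.pi * R_E
d = math.sqrt(d_up * d_down)

# New common rotation period from angular momentum conservation, in days
# (the final formula above, with 2*pi/omega_E ~ 1 day)
x = (r_M / R_E) ** 2
T_new = (0.4 * mass_ratio + x) / (0.4 * mass_ratio + x * omega_ratio)
```

This reproduces $d \approx 5.9\, R_E$ and $T' \approx 22$ days.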
As stated, $\log (n) = O (2^n)$ is trivially true. All that it says is that $\log n$, in the end, grows no faster than $2^n$. For $2^n$, you can substitute $n$, $\sqrt n$, indeed any root of $n$. However carelessly stated, I think this really refers to the following: to represent a number of size $2^n$, you need $n$ bits. So, to represent a number of size $n$, you need $\log n$ bits. Bits are a measure of space. I made heavy weather of this before, as I thought it was referring to the time it takes to calculate $2^n$. In case it is, I'll leave this in: for $n$ a non-negative integer, and $f(n) = 2^n$, $$f(n) = \begin{cases}1 &\text{for }\, n = 0 \\(f (n \div 2))^2\times 2^{(n \bmod 2)} &\text{for}\; n \gt 0\end{cases}$$ The implied algorithm is clearly $\Theta (\log n)$. But this is time complexity. In space complexity, it is $\Theta(1)$. $\Theta (f)$ are the functions that, adapting the words of Wolfram MathWorld, are no worse but also no better than $f$. $O (f)$ are the functions that are no worse than $f$.
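The recurrence above translates directly into code; a minimal sketch in Python (counting multiplications, and relying on arbitrary-precision integers):

```python
def pow2(n):
    """Compute 2**n with Theta(log n) multiplications via the recurrence
    f(n) = f(n // 2)**2 * 2**(n % 2)."""
    if n == 0:
        return 1
    half = pow2(n // 2)
    return half * half * (2 if n % 2 else 1)
```

Each call halves $n$, so the recursion depth, and hence the number of multiplications, is $\Theta(\log n)$; holding the final result still takes $n$ bits of space, matching the point about bits above.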
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$, where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).

I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \dots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} − 2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.

I am presently working through Example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...

Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between the 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)

On Monday, I ask for an update and get told they're working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they'll review my case.

@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.

but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed

no, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol

I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all. how bizarre

I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph, and two hours later Train B leaves the same station, also for Chicago, traveling 60 mph, how long until Train B overtakes Train A?

@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out.

By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point. So yeah, at first glance I'd say the answer key is wrong. The only way I could see it being correct is if they're including the change of time zones, which I'd find pretty annoying. But 240 miles seems waaay too short to cross two time zones. So my inclination is to say the answer key is nonsense.

You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
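The sanity check in the exchange above is easy to script; a minimal sketch of the same algebra:

```python
def overtake_hour(speed_a, speed_b, head_start_hours):
    """Hours after train A departs at which train B catches up.

    A has travelled speed_a * t; B has travelled speed_b * (t - head_start).
    Setting them equal and solving for t gives the overtake time."""
    return speed_b * head_start_hours / (speed_b - speed_a)

# A leaves at noon at 40 mph; B leaves two hours later at 60 mph
t = overtake_hour(40, 60, 2)
```

This gives t = 6, i.e. 6pm, with both trains at 240 miles, matching the check above rather than the 4pm answer key.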
Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question regarding the linked answer: where does the term $e^{(r_1-r_2)x}$ come from? It seems like it is taken out of the blue, but it yields the desired result.
Can anyone help me prove that $\sin(x)/x$ is not Lebesgue integrable on $[1,+\infty)$? Thanks in advance.

Note that $\int_{\pi /4}^{3 \pi /4} |\sin x| \, dx >0$. Now consider the integrals over $(2k\pi +\pi /4, 2k \pi +3\pi /4)$ and add up. Observe that $|x+2k\pi | <2k \pi +3\pi /4$ for $x \in (\pi /4, 3\pi /4)$; since the resulting lower bounds $\frac{c}{2k\pi + 3\pi/4}$ form a divergent series, $\int_1^\infty \frac{|\sin x|}{x}\,dx = \infty$.
In order to become the very best Pokenom trainer, Bash is studying Pokenom's evolutions. Each Pokenom has a combat power ($CP$), indicating how strong the Pokenom is. After a certain amount of training, a Pokenom can evolve, and the evolved Pokenom will have higher $CP$. A Pokenom can evolve multiple times. There is no known limit on how many times a Pokenom can evolve. When a Pokenom evolves, his $CP$ always increases by a constant ratio. More formally, let $CP_ i$ denote the $CP$ of the Pokenom after it evolved $i$ times. If the Pokenom evolves $k$ times, then the following conditions must be true: $CP_0 < CP_1$ or $k < 1$, $\frac{CP_1}{CP_0} = \frac{CP_2}{CP_1} = \cdots = \frac{CP_ k}{CP_{k-1}}$, $CP_ i$ is a positive integer. A sequence is called a CP-sequence if it satisfies the above conditions. For example: $1, 2, 4, 8, 16$ is a CP-sequence. $4, 6, 9$ is a CP-sequence. $4, 2, 1$ is NOT a CP-sequence, because $4 > 2$. $4, 6, 9, 13.5$ is NOT a CP-sequence, because $13.5$ is not an integer. $4, 6, 9, 13$ is NOT a CP-sequence, because $\frac{13}{9} \ne \frac{9}{6}$. Bash is very excited to learn about CP-sequences. Given an integer $S$, he wants to know how many CP-sequences there are whose sum equals $S$. For example, when $S = 7$, there are $5$ CP-sequences: $(7), (1, 6), (2, 5), (3, 4), (1, 2, 4)$. When $S = 19$, there are $11$ sequences: $(19), (1, 18), (2, 17), \ldots , (9, 10), (4, 6, 9)$.

The first line contains one integer $t$ $(1 \leq t \leq 1\, 000)$. The second line contains $t$ distinct integers $S_{1}, S_{2}, \ldots , S_{t}$ $(1 \leq S_{i} \leq 10^{6}~ \forall 1 \leq i \leq t)$. Print $t$ integers in one line; the $i$-th number should be the answer to the problem when $S = S_{i}$.

Sample Input 1:
7
3 5 7 11 13 17 19

Sample Output 1:
2 3 5 6 8 9 11
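A brute-force counter (my own sketch, fine for small $S$, not an efficient contest solution) follows from one observation: a CP-sequence of length $k+1 \ge 2$ has ratio $p/q > 1$ in lowest terms, so its terms are $a q^k, a q^{k-1}p, \ldots, a p^k$ for some integer $a \ge 1$, and its sum is $a \cdot T$ with $T = \sum_{i=0}^{k} q^{k-i} p^i$. Hence the answer is 1 (for the single-term sequence $(S)$) plus the number of triples $(p, q, k)$ with $T$ dividing $S$:

```python
from math import gcd

def count_cp_sequences(S):
    """Count CP-sequences summing to S (brute force over ratio p/q and length)."""
    total = 1                     # the single-term sequence (S)
    k = 1                         # number of evolutions; sequence length is k + 1
    while 2 ** (k + 1) - 1 <= S:  # minimal sum for this length uses ratio 2/1
        q = 1
        while sum(q ** (k - i) * (q + 1) ** i for i in range(k + 1)) <= S:
            p = q + 1
            while True:
                T = sum(q ** (k - i) * p ** i for i in range(k + 1))
                if T > S:
                    break
                # each (p, q, k) with gcd(p, q) = 1 and T | S yields exactly
                # one sequence, starting from a * q**k with a = S // T
                if gcd(p, q) == 1 and S % T == 0:
                    total += 1
                p += 1
            q += 1
        k += 1
    return total
```

This reproduces the examples: 5 sequences for $S=7$ and 11 for $S=19$.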
I am trying to solve question 6.12 in Arora-Barak (Computational Complexity: A Modern Approach). The question asks you to show that the $\mathsf{PATH}$ problem (decide whether a graph $G$ has a path from a given node $s$ to another given node $t$), which is complete for $\mathbf{NL}$, is also contained in $\mathbf{NC}$ (this is easy). The question then also makes a remark that this implies $\mathbf{NL} \subseteq \mathbf{NC}$, which is not obvious to me. I think in order to show this, one has to show that $\mathbf{NC}$ is closed under logspace reductions, i.e. $$(1)\colon\ B \in \mathbf{NC} \hbox{ and } A \le_l B \Longrightarrow A \in \mathbf{NC}$$ where $\le_l$ is the logspace reduction defined as $$A \le_l B :\Longleftrightarrow (\exists M \hbox{ TM})(\forall x)[x \in A \Longleftrightarrow M(x) \in B]$$ ($M$ is a TM which runs in logarithmic space). I would appreciate it if someone could give a tip for proving statement $(1)$.
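For the easy direction ($\mathsf{PATH} \in \mathbf{NC}$), the standard argument is repeated boolean matrix squaring: after $\lceil \log_2 n \rceil$ squarings the matrix encodes full reachability, and each squaring is a constant-depth-per-round parallel step. A sequential Python sketch of that idea (the parallelism is only implicit here):

```python
def reachable(adj, s, t):
    """Decide s -> t reachability via O(log n) rounds of boolean matrix
    squaring; each round is highly parallelizable, which is why PATH is in NC."""
    n = len(adj)
    # R[i][j]: is there a path of length <= 1 from i to j?
    R = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for _ in range(max(1, (n - 1).bit_length())):
        # squaring: a path of length <= 2L exists iff it passes through some k
        R = [[any(R[i][k] and R[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return R[s][t]

# small example: the path graph 0 -> 1 -> 2 -> 3
example = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
```

The remark in the exercise then reduces to your statement (1), composing a logspace reduction with this $\mathbf{NC}$ circuit.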
Serial of year 28

You can find the serial also in the yearbook. We are sorry, this serial has not been translated.

Text of serial

Tasks

(6 points) 1. Series 28. Year - S. Unsure

Write down the equations for a throw in a homogeneous gravitational field (you don't need to prove them, but you need to know how to use them). Design a machine that will throw an item, and determine the angle of launch and the velocity. You can throw the item with a spring: determine its spring constant and the mass of the object, and calculate the kinetic energy and thus the velocity of the item. What do you think is the precision of your values of the velocity and angle? Put the boundaries determined by this error into the equations and show within what boundaries we can expect the distance of the landing from the origin to be. Throw the item with your device at least five times, determine the distance of the landing, and state the boundaries within which you are certain of your distance. Show whether your results fit into your predictions. (For a link to a video of a throw you get a bonus point!)

Take a pendulum with a displacement $x$ which effectively oscillates harmonically, but the frequency of its oscillations depends on the maximum displacement $x_0$: $$x(t) = x_0 \cos\left[\omega(x_0) t\right]\,, \quad \omega(x_0) = 2\pi \left(1 - \frac{x_0^2}{l_0^2}\right)\,,$$ where $l_0$ is some length scale. We think that we are letting go of the pendulum from $x_0=l_0/2$, but actually it is from $x_0=l_0(1+\varepsilon)/2$. By how much does the argument of the cosine differ from $2\pi$ after one predicted period? After how many periods will the pendulum be displaced to the other side from the one we expect? Tip: the argument of the cosine will at that moment differ from the expected one by more than $\pi/2$.

Take a pen into your hand and let it stand on its tip on the table. Why does it fall? And what will determine whether it falls to the right or to the left?
Why can't you predict a die throw even though the laws of physics should predict it? When you play billiards, is the inability to finish the game only due to being incapable of doing all the necessary calculations? Write down your answers and try to enumerate physical phenomena that occur in daily life which are unpredictable even if we know the situation well.

(6 points) 2. Series 28. Year - S. numerical

We give length values in metres, time values in seconds and mass values in kilograms. Angular velocity $\Omega$ is given in radians per second. If you take the equations for the movement of balls from the series, there are three more parameters included: $\alpha$, $\beta$, $\gamma$. What are their dimensions?

Consider a free-falling ball with $\Omega=0$ and $v_x=0$. There then exists a terminal velocity $v_z^t$, at which the frictional force and the gravitational force are equally matched and the fall of the ball no longer accelerates. Determine this velocity from the equations for the movement of a ball. Rearrange this equation so that it expresses $\beta$. $v_z^t$ can be easily measured, and for our football of mass $m=0.5\;\mathrm{kg}$ it is typically around $25\;\mathrm{m}\cdot \mathrm{s}^{-1}$. Then what is $\beta$?

Express the initial $v_x$ and $v_z$ using the angle $\varphi$ at which the ball was shot out with a fixed initial velocity $v=10\;\mathrm{m}\cdot \mathrm{s}^{-1}$. Write a program according to the series and try changing the initial conditions and the following parameters:

Choose some positive $\beta$, turn off the rotation ($\Omega=0$) and find out whether the angle under which the ball reaches the farthest is bigger or smaller than 45°. Demonstrate your finding with graphs of the trajectories.

Choose a positive non-zero $\alpha$ with a numerical value in the given units the same as $\beta$, $\gamma=0.01$ (in the given units) and $\Omega=\pm 5\;\mathrm{rad}\cdot \mathrm{s}^{-1}$. How will the optimal angle of the shot change in these specific cases?

**Bonus:** How far would you throw with a cricket ball?
Is our model good enough to make such predictions?

(6 points) 3. Series 28. Year - S. numerical

Look at the equations of the Lorenz model and write a script to simulate them in Octave (maybe even refresh your knowledge of the second part of the series). Together with the plotting command, your script should have the following form:

…
function xidot = f(t,xi)
  …
  xdot=…;
  ydot=…;
  zdot=…;
  xidot = [xdot;ydot;zdot];
endfunction
config = odeset('InitialStep', 0.01,'MaxStep',0.1);
initialCondition=[0.2,0.3,0.4];
solution=ode45(@f,[0,300],initialCondition,config);
plot3(solution.y(:,1),solution.y(:,2),solution.y(:,3));

Just fill in the rest of the code instead of the three dots (just as in the second part of the series) and use $\sigma=9.5$, $b=8/3$. Then figure out, with a precision of at least units, for what positive $r$ the system goes from asymptotically settling down to chaotic oscillation (it is independent of the initial conditions).

Here is the full text of the Octave script for simulating and visualising the movement of a particle in a gravitational field of a massive object in the plane $xy$, where all the constants and parameters are equal to one:

clear all
pkg load odepkg
function xidot = f(t,xi)
  alfa=0.1;
  vx=xi(3);
  vy=xi(4);
  r=sqrt(xi(1)^2+xi(2)^2);
  ax=-xi(1)/r^3;
  ay=-xi(2)/r^3;
  xidot = [vx;vy;ax;ay];
endfunction
config = odeset('InitialStep', 0.01,'MaxStep',0.1);
x0=0; y0=1; vx0=…; vy0=0;
initialCondition=[x0,y0,vx0,vy0];
solution=ode45(@f,[0,100],initialCondition,config)
plot(solution.y(:,1),solution.y(:,2));
pause()

Choose initial conditions $x_0=0$, $y_0=1$, $v_{y0}=0$ and a nonzero initial velocity in the direction $x$ such that the particle will be bound (i.e. it won't escape the center).

Add to the gravitational force the force $-\alpha\mathbf{r}/r^4$, where $\alpha$ is a small positive number. Choose gradually increasing $\alpha$, beginning with $\alpha=10^{-3}$, and show that they cause quasiperiodic movement.

(6 points) 4. Series 28. Year - S.
Lyapunov

Assume a pen of length 10 cm with a center of mass precisely in the middle, and $g=9.81\;\mathrm{m}\cdot \mathrm{s}^{-2}$. Now imagine that you put the pen on the table with a null deviation $\delta x$, known with an accuracy of $n$ decimal places, and with a null velocity. For how long after making the pen stand can you be sure, to $n$ decimal places, of the nullness of the displacement?

Consider a model of weather with the largest Lyapunov exponent $\lambda=1.16\cdot 10^{-5}\,\mathrm{s}^{-1}$. The weather forecast stops being useful if its error becomes bigger than 20 %. If you had determined the state of the weather with an accuracy of 1 %, for how long do you estimate that your forecast would be good? Give the answer in days and hours.

Take Lorenz's model of convection from the last part, copy the function f(t,xi), and simulate and draw the values of the parameter $X(t)$ for two different trajectories using the commands

X01=1; Y01=2; Z01=5;
X02=…; Y02=…; Z02=…;
nastaveni = odeset('InitialStep', 0.01,'MaxStep',0.1);
pocPodminka1=[X01,Y01,Z01];
reseni1=ode45(@f,[0,45],pocPodminka1,nastaveni);
pocPodminka2=[X02,Y02,Z02];
reseni2=ode45(@f,[0,45],pocPodminka2,nastaveni);
plot(reseni1.x,reseni1.y(:,1),reseni2.x,reseni2.y(:,1));
pause()

Instead of the three dots X02, Y02, Z02 you have to give the initial conditions for the second trajectory. Run the code for at least five different small orders of magnitude of the initial deviation and note the time at which the second trajectory starts to differ qualitatively from the first (i.e. goes in the opposite direction). Don't decrease the deviation below circa $10^{-8}$, because then the imprecisions of numerical integration start to show. Chart the dependency of the separation time on the order of magnitude of the deviation.

Bonus: Attempt to use the obtained dependency of the separation time on the size of the deviation to estimate the Lyapunov exponent.
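For orientation on the weather subtask, the standard relation (my sketch, not the official solution) is that an initial error $\varepsilon$ grows like $\varepsilon\, e^{\lambda t}$, so the horizon for a 1 % error to reach 20 % is $t = \ln(20)/\lambda$:

```python
import math

lam = 1.16e-5                    # largest Lyapunov exponent, 1/s (from the task)
t = math.log(0.20 / 0.01) / lam  # seconds until a 1 % error grows to 20 %

days = t / 86400
hours = (days - int(days)) * 24  # fractional day converted to hours
```

This comes out at roughly 3 days (about 2 days and 23.7 hours).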
You will need more than five runs, and you can assume that at the moment of separation the deviation will always have overcome some constant $\Delta_c$.

(6 points) 5. Series 28. Year - S. mapping

Show that for arbitrary values of the parameters $K$ and $T$ you can express the Standard map from the series as $$x_{n} = x_{n-1} + y_{n-1}\,,$$ $$y_n = y_{n-1} + K \sin(x_n)\,,$$ where $x$, $y$ are somehow rescaled $\mathrm{d}\varphi/\mathrm{d}t$, $\varphi$. Show how the physical parameters enter $K$, $x$, $y$.

Look at the model of the kicked rotor from the series and take this time the passed impulse $I(\varphi)=I_0$, after the period $T$ then $I(\varphi)=-I_0$, after another one $I_0$, and keep kicking the rotor this way on and on. Make a map $\varphi_n$, $\mathrm{d}\varphi/\mathrm{d}t_n$ on the basis of the values $\varphi_{n-1}$, $\mathrm{d}\varphi/\mathrm{d}t_{n-1}$ before the double kick $\pm I$. Solve for $\varphi_n$, $\mathrm{d}\varphi/\mathrm{d}t_n$ on the basis of some initial conditions $\varphi_0$, $\mathrm{d}\varphi/\mathrm{d}t_0$ for an arbitrary $n$.

**Bonus:** Try using the ingredients from this series to design a kicking which will result in chaotic dynamics. Take care though, because $\varphi$ is periodic with period $2\pi$, and $\mathrm{d}\varphi/\mathrm{d}t$ shouldn't wind up forever through kicking.

(6 points) 6. Series 28. Year - S. mixing

Copy the function iterace_stanMap from the series and, using the following commands, choose ten very close initial conditions for some $K$:

K=…;
X01=…; Y01=…;
Iter1 = iterace_stanMap(X01,Y01,1000,K);
…
X10=…; Y10=…;
Iter10 = iterace_stanMap(X10,Y10,1000,K);

Between Iter1 and Iter10 there are hidden a thousand iterations of the given initial conditions under the Standard map. To see how the ten points look after the $n$-th iteration, you have to write

n=…;
plot(Iter1(n,1),Iter1(n,2),"o",…,Iter10(n,1),Iter10(n,2),"o")
xlabel("x");
ylabel("y");
axis([0,2*pi,-pi,pi],"square");
refresh;

We write "o" into plot so that the points will be drawn as circles. The rest of the commands are included so that the graph covers the whole square and has the correct labels.
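If you prefer to experiment outside Octave, a minimal Python equivalent of iterace_stanMap might look like this (I assume the usual Chirikov standard-map convention, with $x$ taken modulo $2\pi$):

```python
import math

def iterate_standard_map(x0, y0, n, K):
    """Return the n+1 iterates (x_i, y_i) of the standard map:
    y_i = y_{i-1} + K*sin(x_{i-1}),  x_i = (x_{i-1} + y_i) mod 2*pi."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y = y + K * math.sin(x)
        x = (x + y) % (2 * math.pi)
        pts.append((x, y))
    return pts
```

For $K = 0$ the momentum $y$ stays constant and $x$ just drifts uniformly; increasing $K$ produces the mixing behaviour the task asks about.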
Set some strong kicks, $K$ at least approximately $-0.6$, and place the 10 initial conditions very close to each other somewhere in the middle of the chaotic region (i.e., for example, "on the tip of a pen"). How do the distances of the ten iterations with respect to each other change? Document this with graphs. How do the ten initially very close initial conditions look after 1000 iterations? What can we learn from this about the "willingness to mix" of the given area?

Take again a large kick and set your ten initial conditions along the horizontal equilibrium of the rotor, i.e. $x=0$, $y=0$. How will these ten initial conditions change in time with respect to each other? What can we say about their distance after a large number of kicks?

**Bonus:** Try to code and plot the behaviour of some other map. (For inspiration you can look at the sample solution of the last series.)
A while back I bought a couple of PIC16F57 (DIP) chips because they were dirt cheap. I figured someday I could use these in something. Yes, I know, this is a horrible way to actually build something and a great way to accumulate junk. However, this time the bet paid off! Only about a year or two too late; but that’s beside the point. The problem I now had was that I didn’t have a PIC programmer. When I bought these chips I figured I could easily rig a board up to the chip via USB. Lo and behold, I didn’t read the docs properly; this chipset doesn’t have a USB to serial interface. Instead, it only supports Microchip’s In-Circuit Serial Programming (ICSP) protocol via direct serial communication. Rather than spend the $40 to buy a PIC programmer (thus, accumulating even more junk I don’t need), I decided to think about how I could make this happen. Glancing at some of my extra devices lying around, I noticed an unused Arduino. This is how the idea for this project came to life. Believe me, the irony of programming a PIC chip with an ATMega is not lost on me. So for all of you asking, “why would anyone do this?” the answer is two-fold. First, I didn’t want to accumulate even more electronics I would not use often. Second, these exercises are just fun from time to time!

Hardware Design

My prototype’s hardware design is targeted to using an Arduino Uno (rev 3) and a PIC16F57. Assuming the protocol looks the same for other ICSP devices, a more reusable platform could emerge from a common connector interface. Likewise, for other one-offs it could easily be adapted for different pinouts. Today, however, I just have the direct design for interfacing these two devices: Overall, the design can’t get much simpler. For power I have two voltage sources. The Arduino is USB-powered and the 5V output powers the PIC chip. Similarly, I have a separate +12V source for entering/exiting PIC programming mode.
For communication, I have tied the serial communication pins from the Arduino directly to the PIC device. The most complicated portion of this design is the transistor configuration; though even this is straightforward. I use the transistor to switch the 12V supply to the PIC chip. If I drive the Arduino pin 13 high, the 12V source shunts to ground. Otherwise, 12V is supplied to the MCLR pin on the PIC chip. I make no claims that this is the most efficient design (either via layout or power consumption), but it’s my first working prototype.

Serial Communication with an Arduino

Arduino has made serial communication pretty trivial. The only problem is that the Arduino’s serial communication ports are UART. That is to say, the serial communication is asynchronous. The specification for programming a PIC chip with ICSP clearly states a need for a highly controlled clock for synchronous serial communication. This means that the Arduino’s Serial interface won’t work for us. As a result, we will go on to use the Arduino to generate our own serial clock and also signal the data bits accordingly.

Setting the Clock Speed

The first task in managing our own serial communication with the Arduino is to select an appropriate clock speed. The key to choosing this speed was finding a suitable trade-off between programming speed (i.e. fast baud rate) vs. computation speed on the Arduino (i.e. cycles of computation between each clock tick). Remember, the Arduino is ultimately running an infinite loop and isn’t actually doing any parallel computation. This means that the amount of time it takes to perform all of your logic for switching data bits must be negligible between clock ticks. If your computation time is longer than or close to the clock ticking frequency, the computation will actually impact the clock’s ability to tick steadily. As a rule of thumb, you can always set your clock rate to have a period that is roughly 1 to 2 orders of magnitude longer than your total computation time.
Taking these factors into account, I chose 9600 baud (or a clock at 9.6kHz). To perform all the logic required for sending the appropriate programming data bits, I estimated somewhere in the hundreds of nanoseconds to single microseconds of computation. Giving myself some headroom, I selected a standard baud rate that was roughly two orders of magnitude larger than my computation estimate. Namely, a period of 104 microseconds corresponds to a 9.6kHz clock. After completing the project I could have optimized my clock speed. However, that was unnecessary for this project. The clock rate I had selected worked well. The 9600 baud rate is fast enough for timely programming of the device because we don’t have much data to transmit. Similarly, it provides us a lot of headroom to experiment with different types of computation. Generating the Clock Signal While this discussion has primarily focused on the design decisions involved in choosing a clock signal rate, how did we generate it? The process really comes down to toggling a GPIO pin on the Arduino. In our specific implementation, I chose pin 2 on the Arduino. While you can refer to the code for more specific details, an outline of this process follows:

```cpp
inline bool clock_tick() {
  if (PORTD & _BV(SERIAL_CLOCK_PORT)) {
    PORTD &= ~_BV(SERIAL_CLOCK_PORT);  // If clock is currently high, toggle to low
    return false;
  }
  PORTD |= _BV(SERIAL_CLOCK_PORT);  // Return true if we have turned clock high
  return true;
}

void loop() {
  if (clock_tick()) {
    // ... compute and control data signals
  }
  // delay for 52us (half clock period)
  waitForHalfClockPeriod();
}
```

As you can see, “ticking” the clock basically consists of toggling it and then making sure each loop iteration waits for half the clock period. The omitted section for data control is where most of the logic for the controller goes. However, it runs in a time that is far less than 52 microseconds.
As a result, the duration of each loop iteration can be considered as:
$$
52\,\mu s \gg \delta \quad\Longrightarrow\quad 52\,\mu s + \delta \simeq 52\,\mu s
$$
where \(\delta\) is the time required to perform computation for data control. Consequently, the clock ticks at an appropriate rate. I have included an image taken from my oscilloscope below. This image provides some empirical evidence that what we’re doing should work. While there is no data being sent in this image (we’ll show more of that below), we can generate a nice clock signal (notice the 1/|dX| and BX-AX lines on the image) at 9.6kHz by toggling the pin and waiting. Controlling the Data Line Now that we have a steady clock, we need to control the data line. Writing this section of code felt like I was back in my VHDL/Verilog days. The basic principle, from a signal generation perspective, was to only change the data lines on a positive clock edge. There were minor complications for the read data command (since the pin has to go from output to input), but this was an isolated case with a straightforward solution. To actually control the signal, we manually turn the serial data pin (in our case, pin 4) high or low depending on the command and data each clock cycle. This ICSP programming protocol starts with a 6-bit command sequence. If the command requires data, then a framed 14-bit word (a total of 16 bits with the start and stop bits) is sent or received. Command and data bits are sent least significant bit first. In the case of my PIC16F57, the commands are only 4 bits, where the upper 2 bits are ignored by the PIC. Likewise, since the PIC16F57 has a 12-bit word, the upper 2 bits of the data word are also ignored while sending and receiving data. The Load Data Command Let’s first investigate the load data command. This command queues up data to write to the chip. A series of additional commands and delays are executed to flush this data to the chip.
The bits for the load command are 0bXX0010 (where X is a “don’t care” value). However, let’s take a look at it under the oscilloscope: The yellow curve is the clock and the blue curve is our data line. Starting from the left (and reading the blue curve under the yellow “high” marks) we can read our command exactly as intended: 0b0100XX. Notice that it is reversed, since the least significant bit is sent first. If you follow along a little bit further on the top, you’ll notice a clock-low delay. This delay allows the PIC chip to prepare for data. The data for the command immediately follows the delay. Implementation Overview Without going too deeply into the details (again, I refer to the code), the command sequences are modeled as a state machine. Generally, when executing a command, we keep track of the number of steps already taken for a particular command. Since each command consists of sending a finite number of bits, we know precisely what to do at each step. The other detail I mentioned earlier was about the read command. This command is sent over pin 4 in output mode, but during the delay this pin must switch to input mode. When in input mode, the PIC chip will proceed to send data at the given memory address. To accommodate this, each command starts by setting the pin as output mode. In the case of the read command, it sets the pin as input when appropriate. Conclusion I’ve enjoyed building out this project. When initially building, I really wanted to discover whether or not I could build a PIC programmer with an Arduino. This post reviews my initial prototype and gives a high-level description of the Arduino code. Unfortunately, the story doesn’t end here. Due to a variety of limitations, I had to introduce a PC-based controller to stream data to the Arduino. My finished product also removes extra elements (i.e. a second 12.5V power supply) and moves from a breadboard to a more permanent fixture. Even so, I leave these details to a part 2 of this post.
In any case, you can check out my code from this repo and run it today. While I work on the second part of the write-up, you can always read through what I’ve done. For now though, I will leave you with a picture of some messy breadboarding.
The distance $d_K^H(x,y)$ between two points on the hyperboloid $H_K$ with curvature $K<0$ can be expressed in terms of the distance $d_{-1}^H(x,y)$ on the hyperboloid $H_{-1}$ of curvature $K=-1$ as follows: $$ d_K^H(x,y)=R\cdot d_{-1}^H(x/R,y/R) $$ where $R$ is the radius, related to the curvature by $R=\frac{1}{\sqrt{-K}}$. Do you know of a simple formula to do a similar emulation with the distance $d_K^P(x,y)$ on the Poincaré disk $D_K$ of curvature $K$? For $K=-1$ the distance on the Poincaré disk $D_{-1}$ is: $$ d_{-1}^P(x,y) = \operatorname{arccosh}\left( 1+\frac{2\,\|x-y\|_2^2}{(1-\|x\|^2_2)(1-\|y\|_2^2)} \right) $$ So I'm looking for an expression of the form: $$ d_K^P(x,y)=\cdots d_{-1}^P(\cdots x\cdots, \cdots y\cdots). $$ where the $(\cdots)$-parts are just replaced with some function or expression in terms of $K$ (or $R$). So far I've tried to project the points from the hyperboloid to the Poincaré disk, but it didn't turn out to be a nice expression.
I feel like this question might be marked as a duplicate, because I have seen many similar ones incur that fate, but I'll try anyway; I did not find anything similar. I have been taught a procedure for finding conjugate prior distributions that is based on sufficient statistics. The idea is to compute the likelihood of the sample, then identify the sufficient statistic using the Neyman factorization theorem, and in the end substitute some hyperparameter for the sufficient statistic in the function $g(\theta, T(x))$, where $\theta$ is the parameter of interest and $T(X)$ is the sufficient statistic. To give an example, I have the following exponential distribution \begin{gather} p(y_t\mid\alpha) = \alpha\, \exp\{-\alpha y_t\}\mathbb{1}_{(0,\infty)}(y_t) \end{gather} Then, the likelihood function is (given that the $y_t$ are iid) \begin{gather} L(\alpha) = \alpha^T \exp\left\{-\alpha\sum y_t\right\} \mathbb{1}_{(0,\infty)}(\max(y_t)) \end{gather} Using the Neyman factorization theorem, we can factorize the likelihood as $g(\alpha,T(x))=\alpha^T \exp\{-\alpha\sum y_t\}$ and $c(y)=\mathbb{1}_{(0,\infty)}(\max(y_t))$, so that our sufficient statistic is $T(X) = \sum y_t$. Then, the conjugate prior for this model should be \begin{gather} \pi(\alpha)=g(\alpha,\eta)=\alpha^T \exp\{-\alpha\eta\} \end{gather} where $\eta$ is the hyperparameter. I tried to compute the posterior to check if the family is the same, but I got this \begin{gather} p(\alpha\mid y_t) = \alpha^{2T} \exp\left\{-\alpha\left(\eta+\sum y_t\right) \right\} \end{gather} which doesn't seem to be an exponential distribution to me. Now, my question is: should I insert a random parameter $\eta$, or should it be something meaningful, maybe related to the distribution at stake? Or are there issues in my way of proceeding?
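As a quick numerical sanity check of the factorization step above, here is an illustrative Python sketch (not part of the original question; the data are simulated, and the rate parameter is chosen arbitrarily):

```python
import math
import random

random.seed(0)
alpha = 1.7                      # arbitrary rate parameter
y = [random.expovariate(alpha) for _ in range(10)]
T = len(y)

# Full likelihood: the product of the individual exponential densities
L_full = math.prod(alpha * math.exp(-alpha * yt) for yt in y)

# Factorized form g(alpha, T(y)) * c(y), with T(y) = sum(y) and c(y) = 1
# (all simulated y_t are positive, so the indicator is 1)
L_fact = alpha**T * math.exp(-alpha * sum(y))

print(abs(L_full - L_fact) < 1e-12 * L_full)  # True: the two forms agree
```

The agreement up to floating-point rounding confirms that $\sum y_t$ carries all the information about $\alpha$ in the likelihood.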
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $b,r$ into the division box. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ is a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. 
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 31 Mar 2012, 02:47 34 1 70

If n is a positive integer and n^2 is divisible by 72, then the largest positive integer that must divide n is A. 6 B. 12 C. 24 D. 36 E. 48

The largest positive integer that must divide \(n\) is determined by the least value of \(n\) which satisfies the given statement in the question. The lowest square of an integer which is a multiple of \(72\) is \(144\) --> \(n^2=144=12^2=72*2\) --> \(n_{min}=12\). The largest factor of \(12\) is \(12\).

OR: Given: \(72k=n^2\), where \(k\) is an integer \(\geq1\) (as \(n\) is positive). \(72k=n^2\) --> \(n=6\sqrt{2k}\); as \(n\) is an integer, \(\sqrt{2k}\) must also be an integer. The lowest value of \(k\) for which \(\sqrt{2k}\) is an integer is \(k=2\) --> \(\sqrt{2k}=\sqrt{4}=2\) --> \(n=6\sqrt{2k}=6*2=12\)

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 14 Aug 2012, 22:50 14 5

Normally I divide the number into its primes just to see how many more primes we need to satisfy the condition. So in our case: n^2/72 = n*n/(2^3*3^2); in order to have the minimum in the denominator we should try to modify the smallest number. If we have one more 2, then n*n will be perfectly divisible by 2^4*3^2. From here we see that the largest number is 2*2*3 = 12. Hope I explained my thought.
Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 21 Apr 2012, 14:14 4 1

Merging similar topics. raviram80 wrote: Hi All, I have a confusion about this question. 169. If n is a positive integer and n^2 is divisible by 72, then the largest positive integer that must divide n is (A) 6 (B) 12 (C) 24 (D) 36 (E) 48. If we are looking for the largest positive integer that must divide n, why can it not be 48? Because if n^2 = 72 * 32 then n will be 48, so does this not mean n is divisible by 48? Please explain. Thanks

The question asks about "the largest positive integer that MUST divide n", not COULD divide n. Since the least value of n for which n^2 is a multiple of 72 is 12, the largest positive integer that MUST divide n is 12. A complete solution of this question is given above. Please ask if anything remains unclear.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 02 Nov 2012, 04:35

Bunuel wrote: [...] The question asks about "the largest positive integer that MUST divide n", not COULD divide n. Since the least value of n for which n^2 is a multiple of 72 is 12 then the largest positive integer that MUST divide n is 12.

I spent a few hours on this one alone and I'm still not clear. I chose 12 at first, but then changed to 48.
I'm not a native speaker, so here is how I interpreted this question: "the largest positive integer that must divide n" = "the largest positive factor of n". Since n is a variable (i.e. n is moving), so is its largest factor. Please correct me if I'm wrong here. I know that if n = 12, then n^2 = 144 = 2 * 72 (satisfies the condition). When n = 12, the largest factor of n is n itself, which is 12. Check: 12 is the largest positive number that must divide 12 --> true. However, if n = 48, then n^2 = 48 * 48 = 32 * 72 (satisfies the condition too). When n = 48, the largest factor of n is n itself, which is 48. Check: 48 is the largest positive number that must divide 48 --> true. So, I also notice that the keyword is "MUST", not "COULD". The question is, why is 48 not "MUST divide 48", but instead only "COULD divide 48"? I'm not clear right here. Why is 12 "MUST divide 12"? What's the difference between them?

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 02 Nov 2012, 04:53 10

catennacio wrote: [...] So, I also notice that the keyword is "MUST", not "COULD". The question is, why is 48 not "MUST divide 48", but instead only "COULD divide 48"? I'm not clear right here. Why is 12 "MUST divide 12"? What's the difference between them? Thanks, Caten

The only restriction we have on the positive integer n is that n^2 is divisible by 72. The least value of n for which n^2 is divisible by 72 is 12, thus n must be divisible by 12 (n is in any case divisible by 12). For all other values of n for which n^2 is divisible by 72, n will still be divisible by 12. This means that n is always divisible by 12 if n^2 is divisible by 72. Now, ask yourself: if n = 12, is n divisible by 48? No. So, n is not always divisible by 48.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 02 Nov 2012, 05:20 6 1

I approach this problem by prime factorisation. Any square must have its prime factors in pairs. The prime factorisation of 72 has 2*2, 3*3 and 2, so n^2 must have one more 2 as a prime factor. Hence the largest number which must divide n is 2*3*2 = 12.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 02 Nov 2012, 08:49 3

Bunuel wrote: Only restriction we have on positive integer n is that n^2 is divisible by 72.
The least value of n for which n^2 is divisible by 72 is 12, thus n must be divisible by 12 (n is in any case divisible by 12). For all other values of n for which n^2 is divisible by 72, n will still be divisible by 12. This means that n is always divisible by 12 if n^2 is divisible by 72. Now, ask yourself: if n = 12, is n divisible by 48? No. So, n is not always divisible by 48. Hope it's clear.

Thank you very much Bunuel. Very clear now. Now I understand what "must" means: it will always be true regardless of n. As you said (and I chose), when n = 24 or 36 or 48, the answer 48 can divide 48, but cannot divide 12, 24, or 36. So the "must" here is not maintained. In this case we have to choose the largest factor of the least possible value of n, to ensure that this largest factor is also a factor of the other values of n. Therefore the least value of n is 12, and the largest factor of 12 is also 12. This factor also divides the other values of n, for all n such that n^2 = 72k. My mistake was that I didn't understand the "must" wording and didn't check whether my answer 48 can divide ALL possible values of n, including n = 12. This is what "must" means.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 18 May 2015, 05:37 3 1

In questions like this, it's a good idea to start with the prime factorized form of n. We can write \(n = p1^m*p2^n*p3^r\)... where p1, p2, p3... are prime factors of n and m, n, r are non-negative integers (they can be equal to 0). So, \(n^2 = p1^{2m}*p2^{2n}*p3^{2r}\)... Now, \(n^2\) is completely divisible by 72 = \(2^3*3^2\). This means, \(\frac{(p1^{2m}*p2^{2n}*p3^{2r} . . . )}{(2^3*3^2)}\) is an integer. What does this tell you? That p1 = 2 and 2m ≥ 3, that is, m ≥ 3/2. But m is an integer, so the minimum possible value of m is 2. Also, p2 = 3 and 2n ≥ 2. That is, n ≥ 1. So, the minimum possible value of n is 1. Let's now apply this information to the expression for n: n = \(2^2*3^1\)\(*something. . .\) From this expression, it's clear that n MUST BE divisible by \(2^2*3^1\) = 12. Takeaway: If you find yourself getting confused in questions that give divisibility information about different powers of a number, start by writing a general prime factorized expression for the number raised to power 1.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 25 Jan 2017, 09:27 1 1

1) n is squared. This means that it should have two identical sets of prime factors.
2) Since n^2 is divisible by 72, all prime factors of 72 should be prime factors of n^2. 3) The prime factors of 72 are 2*2*3*2*3. To make this product a perfect square we need to add one more 2. Then we get two identical sets of prime factors (2*2*3) = 12. 4) n has to be at least 12 in order to satisfy the conditions of the problem. 12 is the largest integer that n MUST be divisible by.

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 09 Feb 2017, 17:24 4 3

amitvmane wrote: If n is a positive integer and n^2 is divisible by 72, then the largest positive integer that must divide n is A. 6 B. 12 C. 24 D. 36 E. 48 We are given that n^2/72 = integer or (n^2)/(2^3)(3^2) = integer. However, since n^2 is a perfect square, we need to make 72 or (2^3)(3^2) a perfect square. Since all perfect squares consist of unique primes, each raised to an even exponent, the smallest perfect square that divides into n^2 is (2^4)(3^2) = 144. Since n^2/144 = integer, then n/12 = integer, and thus the largest positive integer that must divide n is 12. But I still don't get why the Q says the largest and not the smallest. 48 divides 12, so why not 48?

I think you may have misinterpreted the phrase "integer that must divide n." You interpreted it as “the integer must be divisible by n.” By your interpretation, if 12 is divisible by n, 48 is also divisible by n; this would be correct, had the wording of the question been as you interpreted it. The phrase "integer that must divide n" really means “n must be divisible by that integer.” So if n is divisible by 12 (which means n/12 = integer), it doesn't mean n is divisible by 48 (i.e., it doesn't mean n/48 will be an integer). For example, if n = 12, n is divisible by 12, but n is not divisible by 48. And thus, since we determined in the original question that n is a multiple of 12, n could be as small as 12, and the largest integer that must divide into 12 is 12.
Does that answer your question?

Re: If n is a positive integer and n^2 is divisible by 72, then[#permalink] Show Tags 16 Nov 2017, 15:13 Top Contributor 2

amitvmane wrote: If n is a positive integer and n^2 is divisible by 72, then the largest positive integer that must divide n is A. 6 B. 12 C. 24 D. 36 E. 48

---------------ASIDE #1-------------------------------------- A lot of integer property questions can be solved using prime factorization. For questions involving divisibility, divisors, factors and multiples, we can say: If N is divisible by k, then k is "hiding" within the prime factorization of N. Consider these examples: 3 is a factor of 24, because 24 = (2)(2)(2)(3), and we can clearly see the 3 hiding in the prime factorization. Likewise, 5 is a factor of 70 because 70 = (2)(5)(7). And 8 is a factor of 112 because 112 = (2)(2)(2)(2)(7). And 15 is a factor of 630 because 630 = (2)(3)(3)(5)(7).

---------------ASIDE #2-------------------------------------- IMPORTANT CONCEPT: The prime factorization of a perfect square will have an even number of each prime. For example: 400 is a perfect square. 400 = 2x2x2x2x5x5. Here, we have four 2's and two 5's. This should make sense, because the even number of primes allows us to split the primes into two EQUAL groups to demonstrate that the number is a square. For example: 400 = 2x2x2x2x5x5 = (2x2x5)(2x2x5) = (2x2x5)². Likewise, 576 is a perfect square. 576 = 2x2x2x2x2x2x3x3 = (2x2x2x3)(2x2x2x3) = (2x2x2x3)².

--------NOW ONTO THE QUESTION!------------------ Given: n² is divisible by 72 (in other words, there's a 72 hiding in the prime factorization of n²) So, n² = (2)(2)(2)(3)(3)(?)(?)(?)(?)(?)... [the ?'s represent other possible primes in the prime factorization of n²] Since we have an ODD number of 2's in the prime factorization, we can be certain that there is at least one more 2 in the prime factorization. So, we know that n² = (2)(2)(2)(3)(3)(2)(?)(?)(?)(?)
So, while there MIGHT be tons of other values in the above prime factorization, we do know that there MUST BE at least four 2's and two 3's. Now do some grouping to get: n² = [(2)(2)(3)(?)(?)...][(2)(2)(3)(?)(?)...] From this we can see that n = (2)(2)(3)(?)(?)... Question: What is the largest positive integer that must divide n? (2)(2)(3) = 12. So, the largest positive integer that must divide n is 12
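As a brute-force sanity check of the thread's takeaway, here is an illustrative Python sketch (not part of the original posts; the bound of 1000 is arbitrary):

```python
from math import gcd

# All n up to an arbitrary bound whose square is divisible by 72
ns = [n for n in range(1, 1001) if (n * n) % 72 == 0]

# The largest integer that MUST divide every such n is the gcd of them all
g = 0
for n in ns:
    g = gcd(g, n)

print(ns[:4])  # [12, 24, 36, 48] -- every such n is a multiple of 12
print(g)       # 12
```

Note that 48 appears in the list, which is why 48 COULD divide n; but since 12 itself is in the list, only 12 MUST divide every such n.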
I have been helping undergrads in an introduction to linear algebra course. When solving an exercise that consists of showing that a map is linear, some get lazy after proving that it is closed under addition and do not prove closure under scalar multiplication. I wanted to confront them with an example of a map closed under addition but not under scalar multiplication, but could not come up with one. Do you have any? Take $T:\mathbb{C}\to\mathbb{C}$ defined by $T(z)=\bar{z}$; then $T(z_1+z_2)=T(z_1)+T(z_2)$, but $T(cz)\neq cT(z)$ for non-real $c$. Over the real numbers, this is tough. Any additive map is linear over $\mathbb{Q}$, so it will be linear over $\mathbb{R}$ as soon as it's continuous. However, there are non-continuous additive maps, even $\mathbb{R}\to\mathbb{R}$: for instance, $\sqrt{2}\mapsto \pi, \pi\mapsto\sqrt{2}$, extend by $\mathbb{Q}$-linearity and fix everything else. $\pi$ and $\sqrt{2}$ are linearly independent over $\mathbb{Q}$, so this is well defined. If you take any field $F$, and a homomorphism of additive groups $(F,+)\to (F,+)$ which does not preserve multiplication, then this will be just such a map. This will never exist when $F$ is the field of rational numbers, or more generally, when it is generated by the unity (such as any ${\bf F}_p$ for a prime $p$), because in those, we can define multiplication in terms of addition. A similar thing happens if you look at continuous additive maps in a topological field generated topologically by the unity (that is, the smallest subfield is dense), like the reals or $p$-adics: every continuous additive map is linear in this case. On the other hand, if you don't care about continuity, it is pretty easy to define such a map when $F=K[a]$ is a finite extension of another field $K$. Then you can just take $f\colon F\to F$ to be the map such that $f(a^n)=0$ for $0<n<\deg a$ and $f(k)=k$ for $k\in K$. For example, if $K={\bf R}$ and $a=i$, $f$ takes the real part of a complex number.
In this case, the map is even continuous. For fields which are not finite extensions of other fields (such as $F={\bf R}$), the existence of such maps may require a nontrivial application of the axiom of choice, or more precisely, the basis theorem, and then we can proceed as in the preceding paragraph: if $K\subseteq F$ is a field extension, then the map $F\to F$ which is the identity on $K$ and takes a basis complementary to $1$ to zero is $K$-linear, and therefore additive. Consider any field homomorphism $L \to L$, and consider $L$ as a vector space over a subfield that is not fixed by the map. From that point of view, the "easiest" example is the Frobenius on $\mathbb F_4$. Note that there are 16 additive maps on the field with 4 elements, of which 4 are also linear.
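The complex-conjugation example above can be checked numerically; a minimal Python sketch (illustrative, not from the original answers; the sample points are arbitrary):

```python
# T(z) = conjugate(z): additive, but not homogeneous over C
def T(z):
    return z.conjugate()

z1, z2, c = 3 + 4j, 1 - 2j, 2j  # arbitrary test values, c non-real

print(T(z1 + z2) == T(z1) + T(z2))  # True: closed under addition
print(T(c * z1) == c * T(z1))       # False: T(cz) != c T(z) for non-real c
```

Of course, a finite check is only a demonstration; additivity for all of $\mathbb{C}$ follows from $\overline{z_1+z_2}=\bar{z_1}+\bar{z_2}$, while $\overline{cz}=\bar{c}\bar{z}\neq c\bar{z}$ whenever $c\notin\mathbb{R}$ and $z\neq 0$.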
Apart from the formal result about #P-hardness, there's something worth touching on, about the nature of strong simulation itself. I'll comment first on strong simulation, and then specifically on the quantum case.

1. Strong simulation even of classical randomised computation is hard

Strong simulation is a very powerful concept — not only in the fact that it is a useful concept to consider, but in the more practical sense that it would allow you to do very powerful computations, even if you could do it in a purely classical setting. At issue here is that a process with random outcomes — whether or not it is quantum — does not automatically come equipped with a way to compute the probabilities of its events; not even for the probabilities of the events which you actually see realised. The power of randomised computation, so to speak, is that you don't have to worry much about what the precise probability of an event is: it suffices to sample, and attempt to realise that event, sufficiently often to be confident of roughly what that probability is (or, for instance, just that the probability is non-zero). Asking for actual probabilities is a very, well, strong requirement to ask of a simulation, in the following sense:

Proposition. Strongly simulating a polynomial-time randomised computation with zero error is #P-hard.

Proof: For any non-deterministic Turing machine N, consider the question of how many branches it accepts on for an input $x \in \{0,1\}^n$. It is enough for us to consider the case where N makes a non-deterministic choice at every transition, and runs for time $m \in \mathrm{poly}(n)$ in every branch, so that the branches of the computation can be indexed by the strings $z \in \{0,1\}^m$. The computation performed by N can then be represented as a deterministic computation (a function $f$) depending on the input $x$ and a given branch string $z$. We represent the status of 'accept' vs.
'reject' by a bit $a \in \{0,1\}$ which is computed as $a = f(x,z)$, then determining the number of branches $z \in \{0,1\}^m$ for which $f(x,z) = a$, for a given $x \in \{0,1\}^n$, is essentially the canonical #P-complete problem, by this connection to non-deterministic Turing machines. Consider a polynomial-time randomised computation. We can describe this computation as a deterministic classical computation, performed with the help of a uniformly random bit-string of length $m$. Suppose that we are interested in whether a particular bit yields the outcome '1'. For an input $x \in \{0,1\}^n$ and random bit-string $z \in \{0,1\}^m$, let $f(x,z) \in \{0,1\}$ be the value that this bit takes: then $f(x,z)$ can be computed in polynomial time. The probability $P(a)$ that this computation gives the result $a \in \{0,1\}$ is then $$ P(a) = \frac{\# \bigl\{ z \in \{0,1\}^m \;\big|\; f(x,z) = a \bigr\}}{2^m}\,. $$ Because we could choose $f$ to be the function determining the acceptance condition of a non-deterministic Turing machine on input $x$ in branch $z$, it is #P-complete to compute $P(a)$ exactly. Proposition. Strongly simulating a polynomial-time randomised computation, with any relative error, is NP-hard. Proof. An important corner case of the problem of approximating a probability with relative error is the case where the exact probability is equal to zero. In this case, any process which gives the correct probability up to any multiplicative factor must correctly produce the exact probability, if that probability happens to be 0. Similarly, any process which approximates an event with positive probability up to a positive scalar factor must yield a probability estimate which is greater than zero. That process can then be used to determine whether the number of accepting branches of a non-deterministic Turing machine is zero or non-zero.
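To make the counting connection concrete, here is a minimal sketch (the predicate `f` below is a toy stand-in, not anything from the propositions above) of what a zero-error strong simulator must return: the exact probability $P(a)$, whose numerator is precisely a count of accepting branches.

```python
from itertools import product
from fractions import Fraction

def f(x, z):
    # Toy stand-in for the polynomial-time predicate f(x, z):
    # "accept" iff x XOR z is the all-ones string.
    return int(all(xi ^ zi == 1 for xi, zi in zip(x, z)))

def strong_simulate(x, m, a):
    # Exact output probability P(a) = #{z : f(x,z) = a} / 2^m.
    # The numerator is exactly a branch count, which is why
    # computing P(a) exactly is #P-hard in general.
    count = sum(1 for z in product([0, 1], repeat=m) if f(x, z) == a)
    return Fraction(count, 2 ** m)

print(strong_simulate((1, 0, 1), 3, 1))  # exactly one z works: 1/8
```

The brute force here takes time $2^m$, and the proposition says we should not expect to do fundamentally better: a polynomial-time zero-error strong simulator would solve #P-complete counting problems.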
From these two observations, you should take away the idea that strong simulation is a strong requirement — in many cases, unfairly strong — to make of a simulation method: it allows you to do much more powerful things than the computational model itself might be capable of. 2. Strong simulation of quantum computation is very hard One difference between classical and quantum computation is how difficult we think it is (in a complexity-theoretic sense) to strongly simulate them. We know that it is NP-hard to strongly simulate a polynomial-time classical randomised process to relative error less than 1. Simulating a quantum process isn't going to be any easier. However, there is reason to believe that even with unreasonably powerful computational resources which would allow us to strongly simulate classical processes with bounded relative error, it would still be difficult to strongly simulate quantum processes. Theorem (Stockmeyer [1]). For any counting problem in #P and any constant $d \geqslant 0$, the problem of computing the counting problem to within a $(1 + O(n^{-d}))$ factor is in $\mathbf{FP^{\:\!NP^{\:\!NP}}}$. The difficulty in the quantum case is that while classical probabilities correspond to counting the number of accepting branches of a non-deterministic Turing machine, quantum computation corresponds more closely to counting the difference between the number of accepting and rejecting branches of a non-deterministic Turing machine. That is, where a classical probability corresponds to a #P function, a quantum amplitude corresponds to a GapP function — GapP being the class of functions which may be expressed as a difference between two #P functions. The connection between quantum computation and GapP can be made formal, but on a high level, it holds essentially because quantum computation can involve destructive interference between amplitudes associated with different events.
More formally: Proposition (liberal paraphrase of Theorem 3.2 of [2]). For any $f \in \mathbf{GapP}$, there is a polynomial-time quantum algorithm $Q$, with an accepting configuration $c$, and a polynomial $p$ such that, for all $x \in \{0,1\}^n$, $\langle c | Q | x \rangle = -f(x) \cdot 2^{-p(n)/2}$. Approximating GapP functions even to constant factors is difficult, because computing a GapP function to within even a constant factor determines whether it is positive or negative. If you can do this, you can immediately solve the PP-complete problem of determining whether or not the number of accepting paths is greater than the number of rejecting paths; and if you do it repeatedly, with a number of related quantum computations in which you artificially inflate the number of accepting or rejecting paths by doing your quantum computation conditionally, you can compute the exact GapP function essentially by binary search (the same way you can find a satisfying solution to a boolean formula, if one exists, given an oracle which simply tells you whether a solution exists). References In addition to the references mentioned above, Maarten Van den Nest's article [3], mentioned by Martin Schwarz, is noteworthy for presenting the first definition of 'strong simulation' of quantum systems (to distinguish it from the more reasonable standard of weak simulation), and also for presenting a number of ideas on the links between classical and quantum computation in the context of simulation. [1] Stockmeyer. The complexity of approximate counting. Proceedings of STOC '83 (pp. 118–126), 1983. [PDF available at acm.org] [2] Fenner, Green, Homer, and Pruim. Determining Acceptance Possibility for a Quantum Computation is Hard for the Polynomial Hierarchy. Proceedings of the Royal Society London A, vol. 455 (pp. 3953–3966), 1999. [arXiv:quant-ph/9812056] [3] Van den Nest. Classical simulation of quantum computation, the Gottesman-Knill theorem, and slightly beyond.
Quantum Information and Computation, vol. 10 (pp. 258–271), 2010. [arXiv:0811.0898].
As yet I don't know how to prove this, but a formula that seems to work is $$\displaystyle p=2\left(n-2^{\bmod\left(\frac{\log(n)-0.001}{\log(2)}\right)}\right)-1.$$ The logs are to base 10, and mod (I hope, otherwise I need to find another operator) delivers the integer part of the expression within the brackets. p is measured from the first person to be eliminated, being the number of positions further on around the circle. For example n = 6 will return a value of p = 3. So, from the first person eliminated, count to the third position further round the circle. Start by sending a photographer and a cannibal across, with the photographer returning. That produces CCPPP.............................C (where the dots indicate the river). Now two cannibals across (PPP............CCC) with one returning, CPPP...............................CC Next, two photographers across (CP....................PPCC) and a photographer and a cannibal returning, CCPP...............................CP Next, two photographers across (CC...................CPPP) with a cannibal returning, CCC...............................PPP Now two cannibals over (C.............CCPPP) with one returning to collect the last cannibal. The same sort of routine can be achieved if two cannibals go over to begin with. The graphical method is fine if you have access to a suitable plotter; if you haven't, having only a pocket calculator, you're stuck. Here's the mathematician's method of solution. Begin with a substitution using the trig identity $$\cos2A=2\cos^{2}A-1,$$ from which $$2\cos^{2}A=\cos2A+1$$ (in order to get the angles on both sides the same).
That gets you $$3\sin2x= \cos2x+2,$$ which is rewritten as $$3\sin2x-\cos2x = 2.$$ Now consider the trig identity $$R\sin(2x-\alpha)=R\sin2x\cos\alpha-R\cos2x\sin\alpha.$$ Comparing with the LHS of the equation, $$R\cos\alpha=3, \text{ and }R\sin\alpha=1,$$ so, squaring and adding, $$R^{2}=3^{2}+1^{2}=10, \text{ so }R=\sqrt{10},$$ and dividing, $$\tan\alpha=1/3.$$ That gets us $$\sqrt{10}\sin(2x-\alpha)=2,$$ $$\sin(2x-\alpha)=2/\sqrt{10},\quad 2x-\alpha=\sin^{-1}(2/\sqrt{10}),$$ $$2x-\alpha = 39.23,\quad 140.77,\quad 399.23,\quad 500.77, \dots \text{ deg},$$ and with $\alpha = 18.43$ deg, that leads to (0 - 360 deg, 2dp), $$x = 28.83, \quad 79.60, \quad 208.83, \quad 259.60, \dots \text{ deg.}$$ Alan is much too polite. Consider the number $$\displaystyle N=30\cdot29\cdot28\cdot(36\cdot27!+25)=36\cdot30!+30\cdot29\cdot28\cdot25.$$ Working mod 31, $$\displaystyle 36\equiv5,\quad \text{and}\quad 30!\equiv -1$$ (the second of those being Wilson's theorem with p = 31), so, multiplying, $$\displaystyle 36\cdot30! \equiv 5\cdot(-1)=-5.\quad (1)$$ Also, $$30 \equiv (-1), \: 29 \equiv (-2), \: 28\equiv (-3)\: \text{and}\: 25\equiv (-6),$$ so, again multiplying, $$\displaystyle 30\cdot29\cdot28\cdot25\equiv (-1)\cdot(-2)\cdot(-3)\cdot(-6)=36\equiv 5.\quad (2)$$ Adding (1) and (2), $$\displaystyle N\equiv -5+5\equiv 0,$$ which says that N is divisible by 31. Looking back at the definition of N, since $30\cdot29\cdot28$ is not divisible by 31 (which is prime), it follows that the expression within the brackets must be divisible by 31. Unless we are told something to the contrary, unknowns in equations such as this are always assumed to be integers. If d is the greatest common divisor (gcd) of a and n, the linear congruence $$\displaystyle ax\equiv b \bmod n$$ has a solution only if b is also divisible by d, and in that case there will be an infinite number of solutions given by $$\displaystyle x=x_{0}+\frac{nt}{d},$$ where $$x_{0}$$ is any solution whatever, and t is an integer (positive or negative) or zero.
For the equation in question, $$\displaystyle 7x \equiv 1\bmod 26,\qquad d = 1,$$ so the general solution will be $$\displaystyle x = x_{0}+\frac{26t}{1} = x_{0}+26t.$$ Finding a suitable $$x_{0}$$ is a trial and error thing: run through the multiples of 7 and 26 looking for the first pair for which the multiple of 7 is 1 greater than the multiple of 26. That happens to be 15 and 4, so the general solution is $$\displaystyle x = 15 + 26t, \quad t\in \mathbb{Z}.$$ The result given earlier comes from the theory of linear Diophantine equations. If $$\displaystyle d = \gcd(a,b),$$ the equation $$\displaystyle ax+by = c$$ has integer solutions only if c is also divisible by d, and in that case there will be an infinite number of solutions given by $$\displaystyle x = x_{0}+\frac{bt}{d},\quad y=y_{0}-\frac{at}{d} \quad (t \in \mathbb{Z}),$$ where $$\displaystyle \; x_{0},y_{0}$$ is any particular solution. (For large values of a and b, $$\displaystyle \; x_{0},y_{0}$$ and d are usually found using Euclid's algorithm.) (The equation $$\displaystyle ax\equiv b \bmod n$$ is equivalent to $$\displaystyle ax-b=kn \Rightarrow ax-kn=b,$$ which, with a change of letters and a change of sign, is the Diophantine equation above.) The best method for dealing with powers and roots of complex numbers is to switch them to polar form and then to use De Moivre's theorem.
So here, $$\displaystyle \frac{-7+21\sqrt{3}\imath}{2}=-\frac{7}{2}(1-3\sqrt{3}\imath)=-\frac{7}{2}\sqrt{28}\angle\tan^{-1}(-3\sqrt{3})$$ (ask for details if you need them) $$\displaystyle = -7\sqrt{7}\angle 280.89339\deg.$$ Raise that to the power one third and you have $$(-7\sqrt{7}\angle(280.89339+360k))^{1/3}$$ $$\displaystyle = -\sqrt{7}\angle(93.63113+120k), \quad k=0,1,2.$$ That gets you $$\displaystyle k=0:\quad -\sqrt{7}\angle 93.63113 = -\sqrt{7}(\cos93.63113+\imath\sin93.63113)$$ $$\displaystyle =0.16752-2.64044\imath,$$ and for the other two values of k, $$\displaystyle 2.20291+1.46533\imath \;\text{ and } -2.37047+1.17511\imath \quad\text{respectively}.$$ I feel a bit of a fraud coming in on this one after the introductory work done by the three of you, providing the ideas and so on; however: We need $$\displaystyle \frac{x}{x^{2}+x+1}\quad \text{to equal} \quad 2.$$ Turn that upside down, as suggested by Chris, and we have $$\displaystyle x+1+\frac{1}{x}=\frac{1}{2}\quad \dots(1).$$ Now look at the RHS; suppose that's equal to $$k$$. Turn that upside down and we have $$\displaystyle x^{2}+1+\frac{1}{x^{2}}=\frac{1}{k}.$$ Squaring (1), $$\displaystyle x^{2}+1+\frac{1}{x^{2}}+2x+2+\frac{2}{x}=\frac{1}{4},$$ so $$\displaystyle x^{2}+1+\frac{1}{x^{2}}= \frac{1}{4}-2\left(x+1+\frac{1}{x}\right)=\frac{1}{4}-\left(2\times\frac{1}{2}\right)=-\frac{3}{4},$$ in which case $$\displaystyle \frac{1}{k}=-\frac{3}{4},\quad \text{so} \qquad k=-\frac{4}{3}.$$
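That answer is easy to check numerically; here is a quick sketch (using numpy) based on the equivalent quadratic $2x^{2}+x+2=0$:

```python
import numpy as np

# x/(x^2 + x + 1) = 2  <=>  2x^2 + x + 2 = 0, whose roots are complex
roots = np.roots([2, 1, 2])

for x in roots:
    # the reciprocal relation used above: x + 1 + 1/x = 1/2
    assert abs((x + 1 + 1 / x) - 0.5) < 1e-9
    # the target expression x^2/(x^4 + x^2 + 1) should equal k = -4/3
    k = x**2 / (x**4 + x**2 + 1)
    assert abs(k - (-4 / 3)) < 1e-9
print("k = -4/3 at both roots")
```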
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$? (Then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense.) If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" ones in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was so that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Is it true to say that $\Sigma^* \cdot \{a^nb^n : n\geq 0\} = \Sigma^*$? Because if we take $\Sigma^*$ and concatenate it to $\{a^nb^n : n\geq 0\}$, we don't get any "new" words beyond those we had in $\Sigma^*$ in the first place. Yes, if $\Sigma \supseteq \{a, b\}$. To show the equality, let's show each side is a subset of the other. $\Sigma^\ast \cdot \left\{ a^n b^n \middle| n \geq 0 \right\} \supseteq \Sigma^\ast$ holds, because $$ \Sigma^\ast \cdot \left\{ a^n b^n \middle| n \geq 0 \right\} \supseteq \Sigma^\ast \cdot \{ a^0 b^0 \} = \Sigma^\ast \cdot \{ \varepsilon \} = \Sigma^\ast $$ where $\varepsilon$ is the empty word. The reverse inclusion $\Sigma^\ast \cdot \left\{ a^n b^n \middle| n \geq 0 \right\} \subseteq \Sigma^\ast$ holds because every concatenation $w\,a^nb^n$ with $w \in \Sigma^\ast$ is itself a word over $\Sigma$, and $\Sigma^\ast$ contains all words over $\Sigma$.
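A small brute-force check of the equality (a sketch; `in_L` is a hypothetical decision procedure for $\Sigma^\ast \cdot \{a^nb^n : n \geq 0\}$ that simply tries every split point):

```python
from itertools import product

def in_L(word):
    # L = Sigma* . { a^n b^n : n >= 0 } over Sigma = {a, b}.
    # w is in L iff some suffix of w has the form a^n b^n
    # (the empty suffix, n = 0, always qualifies).
    for cut in range(len(word) + 1):
        suffix = word[cut:]
        n = len(suffix) // 2
        if suffix == "a" * n + "b" * n:
            return True
    return False

# Every word over {a, b} up to length 8 is in L (take n = 0),
# matching Sigma* . {a^n b^n : n >= 0} = Sigma*.
for length in range(9):
    for w in product("ab", repeat=length):
        assert in_L("".join(w))
print("ok")
```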
Each team plays $4$ games. $P(\text{some team wins all its games}) = {5\choose{1}}\cdot\left(\frac{1}{2}\right)^4$ $P(\text{some team loses all its games}) = {5\choose{1}}\cdot\left(\frac{1}{2}\right)^4$ $P(\text{some team wins all its games and some remaining team loses all its games}) = {5\choose{1}} \cdot {4\choose{1}} \cdot \left(\frac{1}{2}\right)^7$ $P(\text{at least one team wins or loses all its games}) = {5\choose{1}}\cdot\left(\frac{1}{2}\right)^4+{5\choose{1}}\cdot\left(\frac{1}{2}\right)^4-{5\choose{1}} \cdot {4\choose{1}} \cdot \left(\frac{1}{2}\right)^7={15\over{32}}$ $$\begin{align*}P(\text{no team wins or loses all its games})&= 1 - P(\text{at least one team wins or loses all its games})\\\\&= 1 - \frac{15}{32} \\\\&= \frac{17}{32} \\\\\end{align*}$$ Perhaps the trickiest calculation here was $P(\text{some team wins all its games and some remaining team loses all its games}) = {5\choose{1}} \cdot {4\choose{1}} \cdot \left(\frac{1}{2}\right)^7$ This comes from choosing one team of the $5$ to win all its games, with probability $\left(\frac{1}{2}\right)^4$, and then choosing one of the remaining $4$ teams to lose all its games, with probability $\left(\frac{1}{2}\right)^3$, since we already know they lose to the team that won all its games.
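Since a round robin between $5$ teams has $\binom{5}{2}=10$ games, there are only $2^{10}=1024$ equally likely outcomes, so the answer can also be confirmed by exhaustive enumeration (a quick sketch):

```python
from itertools import combinations, product
from fractions import Fraction

teams = range(5)
games = list(combinations(teams, 2))  # 10 games, each pair plays once

favourable = 0
for outcome in product([0, 1], repeat=len(games)):
    wins = [0] * 5
    losses = [0] * 5
    for (i, j), r in zip(games, outcome):
        winner, loser = (i, j) if r == 0 else (j, i)
        wins[winner] += 1
        losses[loser] += 1
    # keep outcomes where no team wins all 4 or loses all 4 of its games
    if all(w < 4 for w in wins) and all(l < 4 for l in losses):
        favourable += 1

p = Fraction(favourable, 2 ** len(games))
print(p)  # 17/32
```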
This is true for finite-dimensional spaces: the diagonalizable operators on a finite-dimensional complex vector space contain a dense open set, and the non-diagonalizable operators have measure 0. To be precise, let $T$ be an operator on a complex Banach space $X$ which is not finite-dimensional. For each $\lambda \in \mathbb{C}$, let $V_\lambda \subseteq X$ be the subspace $\mathrm{ker} (\lambda I - T)$ on which $T$ acts by the scalar $\lambda$. Say that $T$ is diagonalizable if $\sum_\lambda V_\lambda$ is dense in $X$. Or provide a better definition if this one is deficient! I want to know to what extent the "typicality" of diagonalizable operators carries over to infinite dimensions. Are the diagonalizable operators dense? Are they open (or do they contain an open set)? Are they comeagre? Of course, this will probably depend on the Banach space and the operator topology. I suppose it's natural to consider just bounded operators, although I'd be interested in results about unbounded operators, too. Also, the question makes perfect sense for any topological vector space; I'm interested in non-Banach spaces, too. I asked this question on math stackexchange, and Mateusz Wasilewski pointed out that the Weyl–von Neumann–Berg theorem shows that on separable Hilbert space, the "orthogonally diagonalizable" operators (where the $V_\lambda$ are required to be orthogonal) are dense among normal operators (in the norm topology).
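For the finite-dimensional statement, a small numerical illustration (a sketch, not a proof): a generic, arbitrarily small perturbation of a Jordan block splits its eigenvalues, and a matrix with distinct eigenvalues is diagonalizable.

```python
import numpy as np

rng = np.random.default_rng(0)

# The 2x2 Jordan block: the basic non-diagonalizable operator.
J = np.array([[0.0, 1.0], [0.0, 0.0]])

for eps in [1e-1, 1e-4, 1e-8]:
    A = J + eps * rng.standard_normal((2, 2))  # generic small perturbation
    evals = np.linalg.eigvals(A)
    # distinct eigenvalues => diagonalizable
    assert abs(evals[0] - evals[1]) > 0
print("all perturbed matrices have distinct eigenvalues")
```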
Search Now showing items 31-40 of 167 Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-04) The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...
Abbreviation: AbLGrp

An abelian $\ell$-group (or abelian lattice-ordered group) is a lattice-ordered group $\mathbf{L}=\langle L, \vee, \wedge, \cdot, ^{-1}, e\rangle$ such that $\cdot$ is commutative: $x\cdot y=y\cdot x$.

Let $\mathbf{L}$ and $\mathbf{M}$ be $\ell$-groups. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $f:L\rightarrow M$ that is a homomorphism: $f(x\vee y)=f(x)\vee f(y)$ and $f(x\cdot y)=f(x)\cdot f(y)$. Remark: It follows that $f(x\wedge y)=f(x)\wedge f(y)$, $f(x^{-1})=f(x)^{-1}$, and $f(e)=e$.

Equivalently, an abelian $\ell$-group (or abelian lattice-ordered group) is a commutative residuated lattice $\mathbf{L}=\langle L, \vee, \wedge, \cdot, \to, e\rangle$ that satisfies the identity $x\cdot(x\to e)=e$. Remark: $x^{-1}=x\to e$ and $x\to y=x^{-1}y$.

Example: $\langle\mathbb{Z}, \mathrm{max}, \mathrm{min}, +, -, 0\rangle$, the integers with maximum, minimum, addition, unary subtraction and zero. The variety of abelian $\ell$-groups is generated by this algebra. The lattice reducts of (abelian) $\ell$-groups are distributive lattices.

Properties:
- Classtype: variety
- Equational theory: decidable
- Quasiequational theory: decidable
- First-order theory: hereditarily undecidable [1], [2]
- Locally finite: no
- Residual size:
- Congruence distributive: yes (see lattices)
- Congruence modular: yes
- Congruence n-permutable: yes, $n=2$ (see groups)
- Congruence regular: yes (see groups)
- Congruence uniform: yes (see groups)
- Congruence extension property:
- Definable principal congruences:
- Equationally def. pr. cong.:
- Amalgamation property: yes
- Strong amalgamation property: no [3]
- Epimorphisms are surjective:

References:
[1] Yuri Gurevic, Hereditary undecidability of a class of lattice-ordered Abelian groups, Algebra i Logika Sem. 6 (1967), 45–62.
[2] Stanley Burris, A simple proof of the hereditary undecidability of the theory of lattice-ordered abelian groups, Algebra Universalis 20 (1985), 400–401. http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/HerUndecLOAG.pdf
[3] Mona Cherri and Wayne B. Powell, Strong amalgamation of lattice ordered groups and modules, International J. Math. & Math. Sci. 16 (1993), no. 1, 75–80. http://www.hindawi.com/journals/ijmms/1993/405126/abs/ doi:10.1155/S0161171293000080
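The example above lends itself to a mechanical spot check; here is a sketch verifying a few of the defining identities of $\langle\mathbb{Z}, \mathrm{max}, \mathrm{min}, +, -, 0\rangle$ on a small sample range:

```python
from itertools import product

sample = range(-3, 4)

for x, y, z in product(sample, repeat=3):
    assert x + y == y + x                          # the group operation is commutative
    assert max(x, y) + z == max(x + z, y + z)      # translations preserve the lattice order
    assert min(x, y) == -max(-x, -y)               # inversion swaps meet and join
    assert max(x, min(y, z)) == min(max(x, y), max(x, z))  # distributive lattice
print("identities hold on the sample")
```

This only checks finitely many instances, of course; it is the variety-generation result quoted above that turns such identities into theorems about all abelian $\ell$-groups.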
I am reading MWG's explanation in Chapter 3 of showing that a continuous preference relation implies the existence of a continuous utility function. First, the authors show $u(.)$ is continuous by using the definition that the image under $u(.)$ of a convergent sequence is convergent. Consider a sequence $x_n\rightarrow x$. They first claim that $u(x_n)$ must have a convergent subsequence. I understand the big picture: Since $x_n$ converges to $x$, for some large N, $u(x_n)$ must all lie in some compact set, and any infinite sequence in a compact set must have a convergent subsequence. The part I am having trouble with is when they use monotonicity to show this compact set. The exact excerpt is: "By monotonicity, for any $\epsilon>0$, $\alpha(x')$ lies in a compact subset of $\mathbb{R_+}$, [$\alpha_0,\alpha_1$], for all $x'$ such that $\parallel x'-x\parallel\leq\epsilon$." Here I used $u(.)$ and $\alpha(.)$ interchangeably to represent the utility function. Can somebody elaborate on the above claim just a little more in detail please? I understand monotonicity implies local nonsatiation; hence, in any given small ball, you always have some bundle that you prefer to $x$. Part of my confusion comes from the Figure they present, which is that they put the bundle $x$ on the indifference curve ($\{y\in X:y\sim x\}$). But isn't $\alpha(x')$ on the diagonal line Z? Please help. Thank you.
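For intuition about the construction being discussed (this is only an illustration, not MWG's argument): $\alpha(x)$ is the scale at which the diagonal bundle $(\alpha,\dots,\alpha)$ is indifferent to $x$, and for a concrete monotone, continuous preference it can be found by bisection. The Cobb-Douglas representation `v` below is a hypothetical stand-in.

```python
def v(x):
    # hypothetical continuous, monotone representation: v(x) = x1 * x2
    return x[0] * x[1]

def alpha(x, lo=0.0, hi=100.0, tol=1e-10):
    # Monotonicity gives v(lo*e) <= v(x) <= v(hi*e) for suitable lo, hi
    # (e the diagonal direction), and continuity lets bisection close in
    # on the alpha with (alpha, alpha) ~ x.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if v((mid, mid)) < v(x):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a = alpha((1.0, 4.0))
print(round(a, 6))  # alpha with alpha*alpha = 1*4, i.e. alpha = 2
```

The compactness claim in the excerpt corresponds to the fact that, for $x'$ near $x$, these bisection targets stay trapped between fixed bounds $\alpha_0$ and $\alpha_1$.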
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these: Our first approach will only tackle question 1. Given \(y\), we will only ask is it possible to get \(x\). This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\). So, for now our resources will form a "preorder", as defined in Lecture 3. Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying: reflexivity: \(x \le x\) for all \(x \in X\). transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\). All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\). What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it. It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\). Definition. A monoid is a set \(X\) equipped with an operation \(\otimes : X \times X \to X\) and an element \(I \in X\) such that these laws hold: the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\) the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\). You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders: Definition.
A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and an element \(I \in X\) making it into a monoid, and obeying: $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$ This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast! You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders: The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder. Same for the set \(\mathbb{Q}\) of rational numbers. Same for the set \(\mathbb{Z}\) of integers. Same for the set \(\mathbb{N}\) of natural numbers. Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it. But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way? Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64.
Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets? Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
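As a warm-up on Puzzles 60 and 61, here is a small computational spot-check (not a proof; the laws are only tested on a finite window of numbers, and the choice of window is mine):

```python
from itertools import product

# Finite spot-check of the monoidal-preorder laws: monoid laws plus the
# compatibility law  x <= x' and y <= y'  imply  x*y <= x'*y'.
def is_monoidal_preorder(elems, leq, tensor, unit):
    assoc = all(tensor(tensor(x, y), z) == tensor(x, tensor(y, z))
                for x, y, z in product(elems, repeat=3))
    unit_laws = all(tensor(unit, x) == x == tensor(x, unit) for x in elems)
    compat = all(leq(tensor(x, y), tensor(xp, yp))
                 for x, xp, y, yp in product(elems, repeat=4)
                 if leq(x, xp) and leq(y, yp))
    return assoc and unit_laws and compat

leq = lambda a, b: a <= b
times = lambda a, b: a * b

nat_ok = is_monoidal_preorder(range(6), leq, times, 1)             # Puzzle 60 window
real_ok = is_monoidal_preorder([-2, -1, 0, 1, 2], leq, times, 1)   # Puzzle 61 window
print(nat_ok, real_ok)
```

The check suggests that \(\mathbb{N}\) with multiplication passes, while allowing negative numbers breaks the compatibility law (e.g. \(-2 \le 1\) and \(-2 \le 1\), but \(4 \not\le 1\)), which is a hint toward Puzzle 62.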
What is the assumption for Boltzmann H-theorem? One can derive it just from the unitarity of quantum mechanics, so this should be generally true; does it imply a closed system will always thermalize eventually? Does it apply to many-body localized states? I would like to share my thoughts and questions on the issue. The Boltzmann H theorem based on classical mechanics is well discussed in the literature; the irreversibility comes from his assumption of molecular chaos, which cannot be justified from the underlying dynamical equation. Here I will try to say something on the quantum H theorem. The point I want to make is that, although seemingly the H theorem can be derived from unitarity, the true entropy increase in fact comes from the non-unitary part of quantum mechanics. Let me first recap the derivation using unitarity $^{1,2}$. H theorem as a consequence of unitarity Denote by $P_k$ the probability of a particle appearing in the state $|k\rangle$, and by $A_{kl}$ the transition rate from state $|l\rangle$ to state $|k\rangle$; then by the master equation $${\frac {dP_{k}}{dt}}=\sum _{l}(A_{{kl }}P_{l }-A_{{l k}}P_{k})=\sum _{{l\neq k}}(A_{{kl }}P_{l }-A_{{l k}}P_{k})\cdots\cdots(1).$$ Then we take the derivative of entropy $$S=-\sum_k P_k\ln P_k\cdots\cdots(2),$$ we obtain $$\frac{dS}{dt}=-\sum_k\frac{dP_k}{dt}\left(1+\ln P_k\right)\cdots\cdots(3).$$ Together with (1) we have $$\frac{dS}{dt}=-\sum_{kl}\left\{(1+\ln P_k)A_{{kl }}P_{l }-(1+\ln P_k)A_{{l k}}P_{k}\right\}\cdots(4).$$ For the second term let us interchange the dummy indices $k$ and $l$, we get $$\frac{dS}{dt}=\sum_{kl}(\ln P_l-\ln P_k)A_{kl}P_l\cdots\cdots(5)$$ Now use the mathematical identity $(\ln P_l-\ln P_k)P_l\geq P_l- P_k$, we obtain $$\frac{dS}{dt}\geq \sum_{kl}(P_l-P_k)A_{kl}= \sum_{kl}P_l(A_{kl}-A_{lk})\\=\sum_{l}P_l\big\{\sum_{k}(A_{kl}-A_{lk})\big\}\cdots\cdots(6)$$ Now unitarity ensures $\sum_{k}A_{kl}$ and $\sum_{k}A_{lk}$ are both 0, because as transition
rates, $$\sum_{k}A_{kl}=\frac{d}{dt}\sum_{k}|\langle k|S|l\rangle|^2=\frac{d}{dt}\sum_{k}\langle l|S^{\dagger}|k\rangle\langle k|S|l\rangle\\=\frac{d}{dt}\langle l|S^{\dagger}S|l\rangle=\frac{d}{dt}\langle l|l\rangle=0\cdots\cdots(7),$$ where $S$ is the unitary time evolution operator describing the system. This is nothing but saying the total transition probability from one state to all states must be 1. It is clear that (6) and (7) imply the H theorem: $$\frac{dS}{dt}\geq 0.$$ Where does the irreversibility come from? Now we are in a position to confront Loschmidt's paradox, analogously to its classical version: there are many unitary and time-reversible quantum mechanical systems, so if we have just derived the H theorem using unitarity alone, how can it be reconciled with the time-reversibility of the underlying dynamics? What sneaked into the above derivation? The crucial thing to notice is that, in the quantum regime, the definition of entropy by equation (2) is inherently ill-defined: the value of the entropy in (2) depends on the basis we choose to describe the system! Consider a two-level system with two choices of orthogonal basis $\{|1\rangle, |2\rangle\}$ and $\{|a\rangle, |b\rangle\}$ related by $$|1\rangle=\frac{1}{\sqrt2}(|a\rangle+|b\rangle),\\|2\rangle=\frac{1}{\sqrt2}(|a\rangle-|b\rangle).$$ Suppose the system is in the state $|1\rangle$; then the entropy formula gives $S=0$ in the first choice of basis, since the system has a 100% chance to appear in $|1\rangle$, while in the other basis $S=\ln2$, since it has a 50%-50% chance to appear in either $|a\rangle$ or $|b\rangle$. Now we may argue: it is one thing to say the system is in $\frac{1}{\sqrt2}(|a\rangle+|b\rangle)$ and has the potential 50%-50% chance to transition into $|a\rangle$ or $|b\rangle$ after a measurement, but a different thing to say the transition has been realized by some measurement. The two situations must be described differently.
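As a sanity check on the derivation (my own illustration, not from references 1 and 2), here is a tiny forward-Euler simulation of master equation (1) with symmetric rates $A_{kl}=A_{lk}$, a special case in which the sums in (6) vanish term by term. The entropy (2) never decreases and relaxes to the uniform distribution:

```python
import math
import random

random.seed(1)
W = 4  # number of states

# symmetric transition rates A[k][l] = A[l][k]; with this choice the
# sums in (6) cancel term by term, mimicking the unitarity condition (7)
A = [[0.0] * W for _ in range(W)]
for k in range(W):
    for l in range(k + 1, W):
        A[k][l] = A[l][k] = random.uniform(0.1, 1.0)

def entropy(P):
    return -sum(p * math.log(p) for p in P if p > 0)

P = [0.7, 0.2, 0.05, 0.05]  # far-from-equilibrium initial distribution
dt = 1e-3
S = entropy(P)
for _ in range(20000):
    # master equation (1): dP_k/dt = sum_l (A_kl P_l - A_lk P_k)
    dP = [sum(A[k][l] * P[l] - A[l][k] * P[k] for l in range(W))
          for k in range(W)]
    P = [p + dt * d for p, d in zip(P, dP)]
    S_new = entropy(P)
    assert S_new >= S - 1e-12  # H theorem: dS/dt >= 0 at every step
    S = S_new

# the equilibrium is uniform, maximizing S at ln W
print(S, [round(p, 3) for p in P])
```

The simulation relaxes to $P_k = 1/W$ with $S \to \ln W$, in line with the equilibrium argument given in the answer below.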
If we look back at our derivation, it is not hard to see that what we really did was the following: after a basis state evolves to a new state which is a superposition of the basis states, we assumed that transitions to the original basis states had happened, instead of the system simply staying in that superposition state; and in fact the original definition of entropy is not capable of describing such a situation, as explained just now. A plausible definition of quantum entropy is the von Neumann entropy, which is a basis-independent definition of entropy, and in this description the entropy of a unitarily evolving system is constant in time, while a (projective) measurement can increase the entropy. Based on the above comparison, we see the irreversibility really comes in as an assumption: the assumption that a measurement/decoherence has happened. And, as we know, a (projective) measurement is a non-unitary, irreversible process; no paradox anymore. My own question on the issue is: what to make of the fact that the von Neumann entropy is constant in time? Does it mean it is incapable of describing a closed system evolving from non-equilibrium to equilibrium, or should we just reverse the argument and say any non-equilibrium to equilibrium evolution must be described by some non-unitary process? 1. Rephrased from section 3.6 of The Quantum Theory of Fields, Vol. 1, S. Weinberg 2. If I remember correctly (which I'm not quite confident about), such a derivation was first given by Pauli, and he correctly spotted the origin of irreversibility, which he called the "random phase assumption". Contrary to claims made in the question, we cannot derive the H-theorem "just from the unitarity of quantum mechanics". In fact, the theorem requires a non-unitary extension of quantum mechanics. I will first comment on this misunderstanding about unitarity that one finds in Weinberg's textbook, and later I will address other questions made in the question. (i) By a well-known theorem, entropy is conserved by unitary evolutions.
This theorem is proven in many statistical mechanics books. The Schrödinger equation conserves entropy: $dS/dt=0$. Yes, we can define a coarse-grained entropy $S = S_\mathrm{cg}+ \Delta S$ and force it to evolve via unitary evolution, but it will do so in ways that violate both the second law of thermodynamics and experiments/observation, not to mention the existence of paradoxes associated with the arbitrariness in the coarse-graining. (ii) The above point is the reason why Weinberg doesn't start from the Schrödinger equation but from a master equation he introduces in an ad hoc way. As is well known, the master equation is incompatible with the Schrödinger equation (yes, I know some authors pretend to derive the former from the latter, but those 'derivations' are completely wrong). Thus Weinberg is proving the H-theorem by starting from an irreversible master equation that already contains the essence of the theorem. (iii) We have known since Boltzmann's epoch that unitarity relates the direct process $(i\rightarrow j)$ with the inverse process $(j\rightarrow i)$, but the master equation describes a superposition of both processes $(i\rightleftharpoons j)$, and this superposition breaks unitarity. We can check it by rewriting the master equation from the usual gain-loss form used by Weinberg to the kinetic form $dP_i/dt = K_{ij} P_j$ and then observing that the propagator $\exp(Kt)$ for the whole evolution is not unitary, predicting a final equilibrium state with $P_i^\mathrm{eq} = P_j^\mathrm{eq}$ independently of the initial state. This equilibrium state maximizes entropy (minimizes the H-function): $$ S^\mathrm{eq} = -k_\mathrm{B} \sum_j^W P_j^\mathrm{eq} \ln P_j^\mathrm{eq} = -k_\mathrm{B} \ln P_j^\mathrm{eq} \sum_j^W P_j^\mathrm{eq} =-k_\mathrm{B} \ln P_j^\mathrm{eq} = -k_\mathrm{B} \ln \frac{1}{W} = k_\mathrm{B} \ln W $$ (iv) Weinberg claims in his book that his discussion of the H-theorem is more general than in "statistical mechanics textbooks".
Well, that is only true when comparing with introductory textbooks. As explained in more advanced textbooks, the general master equation contains three terms: a flow term, a memory term, and a noise term. By taking a special initial state --sometimes named the assumption of initial random phase-- the noise term can be eliminated. By taking for the density matrix a basis that diagonalizes the unperturbed Hamiltonian, the flow term vanishes identically. Then only the memory term in the master equation remains. Applying a further diagonal approximation within the kernel of the memory term, $\rho_{kl} \rightarrow \rho_{kl}\delta_{kl}$, and finally applying a Markovian approximation, we obtain the simple master equation used by Weinberg. About the other queries in the question, let me say that the H-theorem has the same limited validity as the approximate master equation used for its derivation. More general irreversible master equations are required for many phenomena in condensed phase systems, and those equations predict all kinds of complex evolutions for the entropy (or the H-function). In the general case the entropy of a 'closed' (I guess you mean isolated) system doesn't increase monotonically, and it is possible that there are no local extrema. One needs to study each system and experimental situation individually.
Principle of Recursive Definition
Contents
Theorem
Let $\N$ be the natural numbers.
Let $T$ be a set.
Let $a \in T$.
Let $g: T \to T$ be a mapping.
Then there exists exactly one mapping $f: \N \to T$ such that:
$\forall x \in \N: f \left({x}\right) = \begin{cases} a & : x = 0 \\ g \left({f \left({n}\right)}\right) & : x = n + 1 \end{cases}$
Let $p \in \N$.
Let $p^\geq$ be the upper closure of $p$ in $\N$.
Then there exists exactly one mapping $f: p^\geq \to T$ such that:
$\forall x \in p^\geq: f \left({x}\right) = \begin{cases} a & : x = p \\ g \left({f \left({n}\right)}\right) & : x = n + 1 \end{cases}$
Consider $\N$ defined as a Peano structure $\left({P, 0, s}\right)$. The result follows from Principle of Recursive Definition for Peano Structure. $\blacksquare$
Consider $\N$ defined as elements of the minimal infinite successor set $\omega$. The result follows from Principle of Recursive Definition for Minimal Infinite Successor Set. $\blacksquare$
The proofs given (hidden behind the links) are necessarily long, precise and detailed. There is a temptation to take short cuts and gloss over the important details. The following argument, for example, though considerably shorter, is incorrect.
Consider $\N$, defined as a naturally ordered semigroup $\left({S, \circ, \preceq}\right)$. Let the mapping $f$ be defined as:
$f \left({x}\right) = \begin{cases} a & : x = 0 \\ s \left({f \left({n}\right)}\right) & : x = n \circ 1 \end{cases}$
if $f \left({n}\right)$ is defined.
Let $S' = \left\{ {n \in S: f \left({n}\right) \text{ is defined} }\right\}$.
Then:
$0 \in S'$
and:
$n \in S' \implies n \circ 1 \in S'$
So by induction:
$S' = S$
Thus the domain of $f$ is $S$. Consequently, $f$ is a mapping from $S$ into $T$ which satisfies:
$f \left({0}\right) = a$
and:
$f \left({n \circ 1}\right) = s \left({f \left({n}\right)}\right)$
for all $n \in S$. $\blacksquare$
Objections
$(1):$ In the above argument, $S'$ is not precisely defined. A subset needs to be specified by a well-defined condition, and the expression "is defined" does not meet that criterion.
$(2):$ The mapping $f$ is not defined properly.
In the above, it is indeed specified that:
$\left({0, a}\right) \in f$
and:
$\left({n \circ 1, s \left({x}\right)}\right) \in f$ whenever $\left({n, x}\right) \in f$
Thus it appears either that:
$f$ itself is used to define $f$
or else that:
$f$ itself changes during the process in which it is being defined.
Neither of these possibilities can be accepted.
$(3):$ The only property of $\left({S, \circ, \preceq}\right)$ used in the argument is that it satisfies the Principle of Mathematical Induction for a Naturally Ordered Semigroup. However, if the argument were valid, it would apply equally well to any commutative semigroup $\left({D, +}\right)$ which has elements $0$ and $1$, such that:
$D$ is the only subset of $D$ containing $0$ and containing $x + 1$ whenever it contains $x$.
That is, it would show that there exists a mapping $f: D \to T$ such that:
$f \left({y}\right) = \begin{cases} a & : y = 0 \\ s \left({f \left({x}\right)}\right) & : y = x + 1 \end{cases}$
But consider the additive group of integers modulo $2$: $\left({\Z_2, +_2}\right)$, which is indeed a commutative semigroup containing $0$ and $1$ which satisfies the hypothesis. If $g: \N \to \N$ is the mapping defined as:
$g \left({n}\right) = n + 1$
then there is no mapping $f: \Z_2 \to \N$ which satisfies:
$f \left({y}\right) = \begin{cases} 0 & : y = 0 \\ g \left({f \left({x}\right)}\right) & : y = x +_2 1 \end{cases}$
for all $x \in \Z_2$.
Hence the argument is invalid.
Also known as
This result is often referred to as the Recursion Theorem. Some sources only cover the general result.
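Informally, the object whose existence and uniqueness the theorem guarantees is exactly what a programmer builds by iteration. The sketch below (an illustration, not part of any proof) also brute-forces the $\Z_2$ objection:

```python
# f defined by primitive recursion on N: f(0) = a, f(n+1) = g(f(n)),
# built by plain iteration.
def recurse(a, g, n):
    value = a
    for _ in range(n):
        value = g(value)
    return value

# with a = 0 and g the successor map, f is the identity on N
assert recurse(0, lambda x: x + 1, 5) == 5

# brute-force version of the Z_2 objection: no f: Z_2 -> N can satisfy
# f(0) = 0 and f(x +_2 1) = f(x) + 1, since 1 +_2 1 = 0 would force
# f(0) = f(1) + 1 = f(0) + 2
solutions = [(f0, f1) for f0 in range(50) for f1 in range(50)
             if f0 == 0 and f1 == f0 + 1 and f0 == f1 + 1]
print(solutions)
```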
Faddeeva Package
From AbInitio
Revision as of 22:52, 29 October 2012
Faddeeva / complex error function
Steven G.
Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function $w(z) = e^{-z^2}\mathrm{erfc}(-iz)$, also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments $z$ to a given accuracy. Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions. Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 29 October 2012) Usage To use the code, add the following declaration to your C++ source (or header file): #include <complex> extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0); The function Faddeeva_w(z, relerr) computes $w(z)$ to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision $\varepsilon \approx 10^{-16}$), corresponds to requesting machine precision, and in practice a relative error $< 10^{-13}$ is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy). You should also compile Faddeeva_w.cc and link it with your program, of course. In terms of $w(z)$, some other important functions are:
$\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x) = w(ix)$ (scaled complementary error function)
$\mathrm{erfc}(x) = e^{-x^2} w(ix)$ for $\mathrm{Re}\,x \geq 0$, $= 2 - e^{-x^2} w(-ix)$ for $\mathrm{Re}\,x < 0$ (complementary error function)
$\mathrm{erf}(x) = 1 - \mathrm{erfc}(x) = \begin{cases} 1 - e^{-x^2} w(ix) & \mathrm{Re}\,x \geq 0 \\ e^{-x^2} w(-ix) - 1 & \mathrm{Re}\,x < 0 \end{cases}$ (error function)
$\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix) = -i[e^{x^2} w(x) - 1]$ (imaginary error function)
$F(x) = \frac{i\sqrt{\pi}}{2} \left[ e^{-x^2} - w(x) \right]$ (Dawson function)
$\mathrm{Voigt}(x,y) = \mathrm{Re}[w(x+iy)]$ (real Voigt function, up to scale factor)
Note that in the case of erf and erfc, we provide different equations for positive and negative $x$, in order to avoid numerical problems arising from multiplying exponentially large and small quantities. Wrappers: Matlab, GNU Octave, and Python Wrappers are available for this function in other languages. Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m).
Compile it into a MEX file with: mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with: mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here). Algorithm This implementation uses a combination of different algorithms. For sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970). G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680. Unlike those papers, however, we switch to a completely different algorithm for smaller |z|: Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151. (I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[w(z)] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.)
Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software. Algorithm 916 requires an external complementary error function erfc(x) for real arguments x to be supplied as a subroutine. More precisely, it requires the scaled function erfcx(x) = e^(x^2) erfc(x). Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.) Test program To test the code, a small test program is included at the end of Faddeeva_w.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
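The continued-fraction idea behind the erfcx routine can be illustrated with a short sketch. This is my own toy version of the classical Laplace continued fraction, not the code distributed in Faddeeva_w.cc:

```python
import math

# Toy sketch of erfcx(x) = exp(x^2)*erfc(x) for large x via the Laplace
# continued fraction:
#   erfcx(x) = (1/sqrt(pi)) / (x + (1/2)/(x + 1/(x + (3/2)/(x + ...))))
def erfcx_cf(x, terms=40):
    t = 0.0
    for k in range(terms, 0, -1):  # evaluate the fraction bottom-up
        t = (k / 2.0) / (x + t)
    return 1.0 / (math.sqrt(math.pi) * (x + t))

# cross-check against the direct formula where exp(x^2) does not overflow
rel_err = max(abs(erfcx_cf(x) / (math.exp(x * x) * math.erfc(x)) - 1.0)
              for x in (4.0, 8.0))

# for very large x the direct formula overflows, but the continued fraction
# still works and approaches the asymptotic erfcx(x) ~ 1/(sqrt(pi)*x)
big = erfcx_cf(1e3) * math.sqrt(math.pi) * 1e3
print(rel_err, big)
```

This mirrors why the real code switches strategies by region: the direct product of an exponentially large and an exponentially small factor breaks down long before the continued fraction does.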
License The software is distributed under the "MIT License", a simple permissive free/open-source license: Copyright © 2012 Massachusetts Institute of Technology Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Abbreviation: QtGrpd
A quasitrivial groupoid is a groupoid $\mathbf{A}=\langle A,\cdot\rangle$ such that $\cdot$ is quasitrivial: $x\cdot y=x\text{ or }x\cdot y=y$
Let $\mathbf{A}$ and $\mathbf{B}$ be quasitrivial groupoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$
Example 1: Quasitrivial groupoids are in 1-1 correspondence with reflexive relations $E$: the correspondence is given by $x\cdot y=x$ iff $\langle x,y\rangle\in E$.
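Example 1 can be spot-checked computationally; here is a small sketch (the relation $E$ below is my own made-up example):

```python
from itertools import product

# From a reflexive relation E on A, define  x*y = x if (x,y) in E else y,
# and check the resulting groupoid is quasitrivial.
A = [0, 1, 2]
E = {(0, 0), (1, 1), (2, 2), (0, 1), (2, 1)}  # a reflexive relation on A

def op(x, y):
    return x if (x, y) in E else y

quasitrivial = all(op(x, y) in (x, y) for x, y in product(A, repeat=2))
idempotent = all(op(x, x) == x for x in A)  # follows from reflexivity of E

# and E can be recovered from the operation:  (x,y) in E  iff  x*y = x
recovered = {(x, y) for x, y in product(A, repeat=2) if op(x, y) == x}
print(quasitrivial, idempotent, recovered == E)
```

Recovering $E$ from the operation is what makes the correspondence one-to-one.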
Just including my answers here for my own sake, I tried to skip other people's answers until writing this. > Puzzle 18. Does f! always have a left adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a left adjoint. > Puzzle 19. Does f! always have a right adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a right adjoint. Answer 1: The hand-waving argument is that the meet and the join are the intersection and the union respectively with respect to sets. Since the intersection of a bunch of sets exists and is unique (at least in those situations that I can think of, but sets are weird, so I will assume that we are dealing with such sets and power sets), and the same holds for unions, the left and right adjoint exist and they are equal to ... (see next more detailed answer). Answer 2: This is the worked out version I wrote to make sure answer 1 was right. For the left adjoint we need: \\(g_!(R) \subseteq S \iff R \subseteq f_!(S)\\) which means we want \\(g_!(R) = \cap \\{S: R \subseteq f_!(S) \\}\\), which should exist and should be unique (at least in the situations with sets I have dealt with). The same reasoning leads to the right adjoint being (with abuse of notation, since I denote it the same) \\(g_!(R) = \cup \\{S: R \subseteq f_!(S) \\}\\). **Edit May 4th**: Sigh, that was wrong, reading the above thread. I don't have the time to figure out what is wrong with it, but my sense is that I only used one direction of the *iff*, namely that if \\(R \subseteq f_!(S)\\), then we have \\(g_!(R) \subseteq S\\), so \\(g_!(R) = \cap \\{S: R \subseteq f_!(S)\\}\\) if this satisfies the other direction too -- which I haven't checked. Will do later. **Edit on May 15th**: Finally getting back to this. I tried to figure out where I went wrong, so let me try again. We have a left adjoint \\(g_!\\) if \\(g_!(R) \subseteq S \iff R \subseteq f_!(S)\\).
Fix some \\(R\\). The right hand side of that holds exactly for all the sets \\(S \in \mathcal{S}\\) where \\(\mathcal{S} = \\{S: R \subseteq f_!(S)\\}\\). This means that we have to have \\(g_!(R) \subseteq S\\) for every \\(S \in \mathcal{S}\\) **and** that it holds for no other \\(S\\). This implies that \\(g_!(R) = \cap \mathcal{S}\\) (**Note:** for me \\(\cap \mathcal{S}\\) means to take the intersection over the elements of \\(\mathcal{S}\\), which may be poor or wrong notation), but that is not enough yet. Why? Let \\(S(R) = \cap \mathcal{S} \\). Then it could be that \\(S(R)\\) is not in \\( \mathcal{S}\\), in which case we have \\(g_!(R) \subseteq S(R)\\) (by definition), yet we do not have \\( R \subseteq f_!(S(R)) \\). So I should have proved that \\(S(R) \in \mathcal{S}\\), which I would have realized doesn't work. For instance, suppose that \\(\mathcal{S} = \\{ \\{0\\}, \\{1\\}\\}\\). Then their intersection will be the empty set, so that \\(S(R) = \emptyset\\) and hence \\(f_!(S(R)) = \emptyset\\). Unless \\(R = \emptyset\\), this is a counterexample, assuming we can come up with a function \\(f\\) and a non-empty \\(R\\) such that we get such a \\(\mathcal{S}\\). This is easy, especially after reading parts of this thread: \\(f(0) = 0\\), \\(f(1) = 0\\) with \\(R = \\{0\\}\\). And that is where I went wrong. This still doesn't show when the left adjoint exists, other than that it exists if (and only if in this case, I believe) \\(S(R) = \cap \mathcal{S} \in \mathcal{S}\\). But at this point I wanted to primarily figure out where my thinking was wrong, not figure it all out.
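The counterexample in the May 15th edit can be checked mechanically; below is my own minimal instance of it (\\(f(0) = f(1) = 0\\) with \\(R = \\{0\\}\\)):

```python
from itertools import combinations

# f: {0,1} -> {0,1} with f(0) = f(1) = 0, f_! = direct image, and the
# candidate left adjoint g_!(R) = intersection of {S : R subset of f_!(S)}.
X = {0, 1}

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

f = {0: 0, 1: 0}

def f_shriek(S):  # direct image of S under f
    return frozenset(f[x] for x in S)

def g_candidate(R):
    family = [S for S in subsets(X) if R <= f_shriek(S)]
    out = frozenset(X)
    for S in family:
        out &= S
    return out

# the adjunction  g_!(R) <= S  iff  R <= f_!(S)  fails at R = {0}, S = {}:
R, S = frozenset({0}), frozenset()
adjunction_fails = (g_candidate(R) <= S) and not (R <= f_shriek(S))
print(adjunction_fails)
```

The left-hand side of the adjunction holds (the candidate collapses to the empty set) while the right-hand side does not, confirming the failure mode described above.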
Where the graph of the tangent function increases, the graph of the cotangent function decreases. Therefore, the LCD can be seen as a periodicity multiplier. If we look at any larger interval, we will see that the characteristics of the graph repeat. Use the reciprocal relationship of the sine and cosecant functions to draw the cosecant function. The distance from the spot across from the police car grows larger as the police car approaches. The tangent graph can be explained in terms of the sine and cosine waves, including how to graph it and its asymptotes: put dots for the zeroes and dashed vertical lines for the asymptotes (graph from $-\pi$ to $2\pi$ and from $-5$ to $5$, showing the zeroes). The standard form of the equation is $y=A\tan(Bx-C)+D$. Amplitude: none, since the function $\tan x$ doesn't have an amplitude. Graphing the tangent function: the period is $P=\pi/|B|$; for example, with $B=2\pi$ the period is $\pi/(2\pi)=1/2$. This lesson covers how to graph the tangent function. (Table of $\tan \theta$ for $\theta = \frac{\pi}{2}, \frac{2\pi}{3}, \frac{3\pi}{4}, \frac{5\pi}{6}, \pi$.) Therefore, if we were to change the period of a tangent function, we would use a different value of $B$. Where the graph of the sine function increases, the graph of the cosecant function decreases. Gaisma - real-life sine graphs. When the cosine function is zero, the secant is undefined. It is common in electronics to express the sine graph in terms of the frequency $f$, as $\sin(2\pi f t)$. Functions of this form are sometimes called Bloch-periodic in this context. The important thing is to know the shape of these graphs - not that you can join dots! Figure 11 shows the graph. We see that the stretching factor is 5.
The secant graph has vertical asymptotes at each value of x where the cosine graph crosses the x-axis; we show these in the graph below with dashed vertical lines, but will not show all the asymptotes explicitly on all later graphs involving the secant and cosecant. The period of a spring's motion is affected by the stiffness of the spring (usually denoted by the variable $k$) and the mass on the end of the spring ($m$). Where the graph of the cosine function increases, the graph of the secant function decreases. Trigonometric Functions and Their Graphs: the properties of the 6 trigonometric functions $\sin(x)$, $\cos(x)$, $\tan(x)$, $\cot(x)$, $\sec(x)$ and $\csc(x)$ include the graph, domain, range, asymptotes (if any), symmetry, and x- and y-intercepts. For $\sin(x)$: domain: all real numbers; range: $[-1, 1]$; period $= 2\pi$; x-intercepts: $x = k\pi$. Cosine function -> period is $2\pi$ radians or 360°. Tangent function -> period is $\pi$ radians or 180°. These are the basic graphs of the 3 trigonometric functions. Where the graph of the cosine function decreases, the graph of the secant function increases. In signal processing you encounter the problem that Fourier series represent periodic functions and that Fourier series satisfy convolution theorems. Note that, because cosine is an even function, secant is also an even function. There is a local minimum at 1. We can transform the graph of the cotangent in much the same way as we did for the tangent. How to change the amplitude, period, and position of a tangent or cotangent graph: in this case, we add $C$ and $D$ to the general form of the tangent function. What's the difference between phase shift and phase angle? Figure 7. This time the angle is measured from the positive vertical axis.
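The periods quoted above can be spot-checked numerically (a quick sketch of mine, not tied to any of the linked lessons):

```python
import math

# Numerical spot-check: sin and cos repeat every 2*pi, tan repeats every
# pi, and tan(B*x) repeats every pi/|B|.
B = 2.0
errs = []
for x in (0.1, 0.7, 1.3):
    errs.append(abs(math.sin(x + 2 * math.pi) - math.sin(x)))
    errs.append(abs(math.cos(x + 2 * math.pi) - math.cos(x)))
    errs.append(abs(math.tan(x + math.pi) - math.tan(x)))
    errs.append(abs(math.tan(B * (x + math.pi / B)) - math.tan(B * x)))
max_err = max(errs)
print(max_err)
```

All differences are at floating-point noise level, consistent with $P = \pi/|B|$ for the tangent.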
To put things in context I'll first present a straightforward method inspired by the classical evaluation of square roots (briefly: "if we know that $a^2 \le N <(a+1)^2$ then the next digit $d$ will have to verify $(10a+d)^2 \le 10^2 N <(10a+d+1)^2$. This means that we want the largest digit $d$ such that $(20a+d)d\le 10^2(N-a^2)$") : To evaluate the cubic root of $N$ let's suppose that $a^3 \le N <(a+1)^3$ then the next digit $d$ will have to verify $(10a+d)^3 \le 10^3 N <(10a+d+1)^3$. So that we want the largest digit $d$ such that $\left(30a(10a+d)+d^2\right)d \le 10^3(N-a^3)$. To get a feel for this method let's evaluate $\sqrt[3]{2}$ starting with $N=2,\ a=1$ : $\begin{array} {r|l}2.000.000 & 1\\\hline \\-1.000.000 & 1.25\\1.000.000 & \\-728.000 & \\272.000 & \\-225.125 & \\46.875 & \\\end{array}$ $a=1$ so that the first decimal must verify $(30(10+d)+d^2)d \le 1000$ that is $d=2$. $a=12$ and the second decimal must verify $(360(120+d)+d^2)d \le 272000$ so that $d=5$. (let's notice that this is 'nearly' $360\cdot 120\cdot d \le 272000$ so that $d=5$ or $d=6$ : we don't really need to try all the digits!) I could have continued but observed that for $d=6$ the evaluation returned $272376$ so that the relative error on $d$ is $\epsilon_1 \approx \frac{376}{272376+360\cdot 6^2}\approx 0.001318$ giving $d\approx 5.9921$ and the solution $\sqrt[3]{2}\approx 1.259921$. Now let's give a chance to Nirbhay Sngh Nahar's method presented here. Let's consider $N=2000$ then $x=1\cdot 10=10$ The NAHNO approximate formula is: $$A= \frac 12\left[x+\sqrt{\frac{4N-x^3}{3x}}\right]= \frac 12\left[10+\sqrt{\frac{4\cdot 2000-10^3}{3\cdot 10}}\right]\approx 12.6376$$ Doesn't look very good...
Let's give the formula a second chance by providing a much better value of $x=12.5$; then the formula returns $A=12.5992125$, not so far from $\sqrt[3]{2000}= 12.59921049894873\cdots$, but $x=12.5$ is really near the solution, so let's compare this method with Newton's iterations $\displaystyle x'=x-\frac{x^3-N}{3x^2}$ $x_0=12.5\to x_1=12.6\to x_2=12.599210548\cdots \to x_3=12.5992104989487318\cdots$ EDIT: I missed the 'Precise Value of Cube Root' using the following formula: $$P=A\frac{4N-A^3}{3N}$$ (I updated the picture and added this formula as well as the third Newton iteration) The NAHNO approximate formula is better than the first Newton iteration but weaker than the second. The precise NAHNO formula is beaten only by the third Newton approximation, as you may see in this picture (the curves are, from top to bottom: first Newton iteration, approximate NAHNO, second Newton iteration, precise NAHNO, third Newton iteration; the NAHNO curves are darker, the vertical scale is logarithmic and 'lower is better'): The vertical axis shows $\ \log \left| \frac {A(N)}{N^{\frac 13}}-1\right|$ for $N$ in $(1000,50000)$. The vertical lines are values $N$ such that $2\sqrt[3]{N}$ is integer (when the initial estimation is nearly the solution). So, considered as approximate formulas, the NAHNO formulas are rather good and could be made more precise with a better first approximation (especially for $x$ between $1$ and $2.5$, more values should be provided in the table). Avoiding extravagant claims could be an advantage too! :-)
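The digit-by-digit rule can be turned into a short program; this sketch (my own) implements exactly the inequality $\left(30a(10a+d)+d^2\right)d \le 10^3(N-a^3)$ used above, producing one new digit of the root per group of three digits brought down:

```python
def cube_root_digits(N, decimals):
    """Digit-by-digit cube root: returns floor(N**(1/3) * 10**decimals)
    as an integer.  The next digit d is the largest one satisfying
    (30*a*(10*a + d) + d**2) * d <= remainder."""
    s = str(N)
    s = "0" * ((3 - len(s) % 3) % 3) + s           # pad to groups of 3 digits
    groups = [int(s[i:i + 3]) for i in range(0, len(s), 3)]
    groups += [0] * decimals                        # one group per decimal digit
    a, rem = 0, 0
    for g in groups:
        rem = rem * 1000 + g
        d = 0
        while d < 9 and (30 * a * (10 * a + d + 1) + (d + 1) ** 2) * (d + 1) <= rem:
            d += 1
        rem -= (30 * a * (10 * a + d) + d ** 2) * d
        a = 10 * a + d
    return a

print(cube_root_digits(2, 6))
```

This reproduces the worked example, $\sqrt[3]{2}\approx 1.259921$, including the intermediate remainders $272000$ and $46875$ along the way.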
I'm learning how to control a double integrator with $H_\infty$. My model is simply $ \dot{r} = v $, $ \dot{v} = F/m $, with $ r(t_0) = 0$ m, $v(t_0) = 0 $ m/s, $m = 1000 $ kg, and I want to be able to track a step command. I have noise on the measurements of position, velocity and force, assumed to have standard deviations of 0.02 m, 0.01 m/s and 0.2 N respectively. I want a closed-loop bandwidth equal to 0.2 Hz with a steady-state error of 0.1 m, and a sensitivity peak at $f_{p} = 1$ Hz. The scheme I implemented is the following: The weighting functions are as follows. Since I want to track low-frequency signals, I imposed $W_{ref,r} = \frac{1}{s/\omega_{lpt} +1 }$, with the low-pass tracking frequency $\omega_{lpt}$ equal to $2\pi f_{lpt}$, and $f_{lpt}$ equal to 0.2 Hz. The noise weighting functions are constants corresponding to the values mentioned above, while there is no feedforward contribution, so $W_{ref_F}$ is equal to 1, and so is $W_{ctrl_inn}$ (perfect inner dynamics). If I understand the theory correctly, the functions $W_{p_r}$ and $W_u$ play a role similar to the matrices $Q$ and $R$ in LQR, except that we can shape them frequency-wise, and that we are minimizing the $\infty$-norm instead of the Euclidean one. So, as recommended by Skogestad in his wonderful book, I specified $W_{p_r} = \frac{s/M+2\pi f_{p}}{s+2\pi f_{p} A}$ with $A = 0.1$, $f_p = 1$, and the peak for the sensitivity transfer function $M$ equal to 2. The transfer function for control performance is a high-pass filter needed to penalize high frequencies so that the controller does not waste effort trying to control high-frequency dynamics (in my case > 10 Hz): $W_u = \frac{s+ 2 \pi f_{hpf}/2}{1+2 \pi f_{hpf}}$, with the high-pass frequency $f_{hpf}$ equal to 10 Hz. I get this Bode plot of the inverses of $W_{p_r}$ and $W_u$, so at small frequency the sensitivity transfer function $S$ is small, and at large frequency $KS$ is small, that is, no big control effort.
If I synthesize the $H_\infty$ controller with MATLAB I get a $\gamma$ equal to 10. I would expect a small value, because we want to make the $z$ output small for the expected exogenous inputs. Can someone tell me what I'm doing wrong? Thanks! P.S. I'm getting the generalized plant by using linmod on the above-specified Simulink model to get A, B, C, D, and I transform it into P by doing P = ss(A,B,C,D) P = minreal(P)
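Independently of the synthesis question, the performance weight itself can be sanity-checked numerically. This is a sketch of my own (plain Python, not the MATLAB/Simulink setup; `Wp` is just the Skogestad-style weight with the values quoted above), showing that the inverse weight really enforces the intended bounds on $|S|$:

```python
# Evaluate the inverse performance weight |1/W_p(jw)|, the upper bound on |S(jw)|.
import math

M, A_ss, f_p = 2.0, 0.1, 1.0            # sensitivity peak, s.s. error, peak freq (Hz)
w_b = 2 * math.pi * f_p

def Wp(s):
    """Skogestad-style performance weight W_p(s) = (s/M + w_b) / (s + w_b*A)."""
    return (s / M + w_b) / (s + w_b * A_ss)

for f in (1e-3, 0.2, 1.0, 100.0):       # frequencies in Hz
    s = 1j * 2 * math.pi * f
    print(f"{f:8.3f} Hz  |1/Wp| = {abs(1 / Wp(s)):.4f}")
# The bound tends to A = 0.1 at low frequency (steady-state error spec)
# and to M = 2 at high frequency (allowed sensitivity peak).
```

If the synthesized $\gamma$ is 10, the closed loop is exceeding these bounds by a factor of about 10 somewhere, which usually points at the plant interconnection rather than the weights.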
Physics > Atomic Physics Title: Transition from electromagnetically induced transparency to Autler-Townes splitting in cold cesium atoms (Submitted on 1 Sep 2017) Abstract: Electromagnetically induced transparency (EIT) and Autler-Townes splitting (ATS) are two similar yet distinct phenomena that modify the transmission of a weak probe field through an absorption medium in the presence of a coupling field, featured in a variety of three-level atomic systems. In many applications it is important to distinguish EIT from ATS. We present EIT and ATS spectra in a cold-atom three-level cascade system, involving the 35$S_{1/2}$ Rydberg state of cesium. The EIT linewidth, $\gamma_{EIT}$, defined as the full width at half maximum (FWHM), and the ATS splitting, $\gamma_{ATS}$, defined as the peak-to-peak distance between AT peak pairs, are used to delineate the EIT and ATS regimes and to characterize the transition between the regimes. In the cold-atom medium, in the weak-coupler (EIT) regime $\gamma_{EIT}$ $\approx$ A + B($\Omega_{c}^2$ + $\Omega_{p}^2)/\Gamma_{eg}$, where $\Omega_{c}$ and $\Omega_{p}$ are the coupler and probe Rabi frequencies, $\Gamma_{eg}$ is the spontaneous decay rate of the intermediate 6P$_{3/2}$ level, and the parameters $A$ and $B$ depend on the laser linewidth. We explore the transition into the strong-coupler (ATS) regime, which is characterized by the linear relation $\gamma_{ATS}$ $\approx$ $\Omega_{c}$. The experiments are in agreement with numerical solutions of the Master equation. Submission history From: Jianming Zhao [view email] [v1] Fri, 1 Sep 2017 03:38:56 GMT (943kb)
Mean Median Mode Formula: Nowadays you can see the use of data through statistics almost everywhere. Take the example of a cricket match: while watching the match you see several graphical representations of different types of data, such as sixes per over or wickets in an inning across different matches, and many more. The mean median mode formulas are used here, and they have a wide range of applications. Measures of dispersion: based on the observations and the type of measure of central tendency, the dispersion or scatter in the data is measured. There are the following measures of dispersion: Range, Quartile deviation (Excluded), Mean deviation, Standard deviation.
1. Range: Maximum Value – Minimum Value
2. Mean deviation for ungrouped data: M.D.(\(\bar{x}\)) = \(\frac{\sum{|x_{i} - \bar{x}|}}{n}\)
3. Mean deviation for grouped data: M.D.(\(\bar{x}\)) = \(\frac{\sum{f_{i}|x_{i} - \bar{x}|}}{N}\), where N = \(\sum{f_{i}}\)
4. Median: if n is odd, then M = the \((\frac{n+1}{2})^{th}\) term; if n is even, then M = \(\frac{(\frac{n}{2})^{th} term+(\frac{n}{2}+1)^{th} term}{2}\)
5. Mode: the value which occurs most frequently
6. Variance: \(\sigma ^{2}\) = \(\frac{\sum (x- \bar{x})^{2}}{n}\)
7. Standard deviation: \(S = \sigma = \sqrt{\frac{\sum (x- \bar{x})^{2}}{n}}\)
8. Coefficient of variation: C.V. = \(\frac{\sigma}{\bar{x}}\) × 100, \(\bar{x}\) ≠ 0
Here the \(x_i\) are the observations, n is the total number of items, and \(\bar{x}\) is the mean. Short trick to find the variance and standard deviation (step-deviation method): \(\sigma^{2} = \frac{h^{2}}{N^{2}}\left[N\sum f_{i}{y_{i}}^{2} - \left(\sum f_{i}y_{i}\right)^{2}\right]\), \(\sigma = \frac{h}{N} \sqrt{N\sum f_{i}{y_{i}}^{2} - \left(\sum f_{i}y_{i}\right)^{2}}\). Mean Median Mode Examples: Find the mean deviation about the median for the following data: 3, 9, 5, 3, 12, 10, 18, 4, 7, 19, 21. Solution: there are 11 observations, an odd number. First let's arrange them in ascending order; we get 3, 3, 4, 5, 7, 9, 10, 12, 18, 19, 21.
Median = the \((\frac{11 + 1}{2})^{th}\), i.e. the 6th observation = 9. The absolute values of the deviations from the median, i.e. \(|x_i - M|\), are 6, 6, 5, 4, 2, 0, 1, 3, 9, 10, 12, so \(\sum_{i=1}^{11} |x_i - M| = 58\). Thus, M.D.(M) = \(\frac{1}{11} \sum_{i=1}^{11} |x_i - M| = \frac{1}{11} \times 58 \approx 5.27\). Find the variance and standard deviation for the following data: \(x_i\): 4, 8, 11, 17, 20, 24, 32 with frequencies \(f_i\): 3, 5, 9, 5, 4, 3, 1. Solution: in tabular form it is:
\(x_i\) | \(f_i\) | \(f_i x_i\) | \(x_i - \bar{x}\) | \((x_i - \bar{x})^2\) | \(f_i(x_i - \bar{x})^2\)
4 | 3 | 12 | -10 | 100 | 300
8 | 5 | 40 | -6 | 36 | 180
11 | 9 | 99 | -3 | 9 | 81
17 | 5 | 85 | 3 | 9 | 45
20 | 4 | 80 | 6 | 36 | 144
24 | 3 | 72 | 10 | 100 | 300
32 | 1 | 32 | 18 | 324 | 324
Totals | 30 | 420 | | | 1374
N = 30, \(\sum_{i=1}^{7}\) \(f_i x_i\) = 420, \(\sum_{i=1}^{7}\) \(f_i(x_i - \bar{x})^2\) = 1374, so the mean is \(\bar{x} = \frac{\sum_{i=1}^{7} f_{i}x_{i}}{N} = \frac{420}{30} = 14\), the variance is \(\sigma^2 = \frac{1374}{30} = 45.8\), and the standard deviation is \(\sigma = \sqrt{45.8} \approx 6.77\).
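Both worked examples are easy to re-check with a few lines of code (my own sketch, not part of the original material):

```python
# Re-checking the two worked examples above.
import math

# Example 1: mean deviation about the median
data = sorted([3, 9, 5, 3, 12, 10, 18, 4, 7, 19, 21])
n = len(data)
median = data[(n + 1) // 2 - 1]            # n is odd: the ((n+1)/2)-th term
md = sum(abs(x - median) for x in data) / n
print(median, round(md, 2))                # 9, 5.27

# Example 2: variance and standard deviation of a frequency table
x = [4, 8, 11, 17, 20, 24, 32]
f = [3, 5, 9, 5, 4, 3, 1]
N = sum(f)
mean = sum(fi * xi for fi, xi in zip(f, x)) / N
var = sum(fi * (xi - mean) ** 2 for fi, xi in zip(f, x)) / N
print(mean, round(var, 2), round(math.sqrt(var), 2))   # 14.0, 45.8, 6.77
```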
How do I prove that a matrix $A\in\mathbb C^{n\times n}$ is irreducible if and only if its associated graph (defined as at Graph of a matrix) is strongly connected? Update: Seeing as no-one answered for over a week, I tried to do it by myself. The first thing I did was try to show that column or row permutation doesn't change the strong connectedness of the graph. I didn't manage, and actually proved the opposite. On the way, though, I managed to show that transposition doesn't. The argument is that transposition affects the graph by inverting all the arrows, but if there is a loop through all nodes then inverting the arrows means you go through it the other way round, so it's still there, and the graph stays strongly connected; and if there isn't, well, transposing can't make one appear, as otherwise transposing back would make it disappear, which we have proved impossible. This result may not be useful, but since I've done it I thought I might as well write it down. Then I tried to think of what permuting both rows and columns does to the graph. Why that? Let's recall the notion of an irreducible matrix: A matrix $A\in\mathbb{C}^{n\times n}$ is said to be reducible if there exists a permutation matrix $\Pi$ for which $\Pi\cdot A\cdot \Pi^T$ is a block upper triangular matrix, i.e. has a block of zeroes in the bottom-left corner. So if this operation does not alter the graph's strong connectedness, then I can work on the reduced matrix to show its graph is not strongly connected and prove one implication. Now such multiplications as in the definition of a reducible matrix, with $\Pi$ a matrix that swaps line $i$ with line $j$ - what do they do to the graph? Swapping the lines makes all arrows that go out of $i$ go out of $j$ and vice versa; swapping the columns does the same for arrows entering $i$ (or $j$). So imagine we have a loop. Say it starts from a node other than $i$ and $j$. At a certain point it reaches, say, $i$.
Before that, everything is unchanged. When the original loop reaches $i$, the new loop will reach $j$ and go out of it to the same node as it went out from $i$ to before the permutation, if that node wasn't $j$, in which case it will go to $i$. When the original loop enters $j$, the new loop enters $i$, and the same as before. So basically the result is just that $i$ and $j$ swap names, and the loop is the same as before, taking the name swap into account. So operations of this kind do not alter the strong connectedness of the graph. Suppose $A$ is as follows: $$A=\left(\begin{array}{c|c}\Huge{A_{11}} & \Huge{A_{12}} \\\hline \Huge{0} & \Huge{A_{22}}\end{array}\right).$$ Suppose the $\Huge{0}$ is $m\times m$ with $m\geq\frac{n}{2}$. Then we have $m$ nodes that are unconnected, going out, to another $m$ nodes. But we don't have $2m$ nodes, or we have exactly that many, so those $m$ nodes are cut off from all the other $n-m$, going out. So suppose there is a loop. If it starts at one of the $m$ nodes, it can never reach the other $n-m$, and if it starts at one of those, it can reach the $m$ nodes but never get back, so maybe we have a path through all the nodes, but it can't be a loop, i.e. a closed path. So the graph is not strongly connected. Now the definition doesn't say anything about the size of those blocks, so the problem I still have is that if $m<\frac{n}{2}$, the argument above fails, because we have at least one node besides the $m$ nodes and the other $m$ nodes that can't be reached from the first $m$, and that node could be the missing link. Of course, when I said "can't be reached" up till now, I meant "be reached directly", i.e. not passing through other nodes. Of course, if the above is concluded, I have proved that reducibility implies non-strong-connectedness of the graph, so that a strongly connected graph implies irreducibility. But the converse I haven't even tried.
So the questions are: how do I finish the above at points 3-4, and how do I prove the converse? Or maybe I'm missing something in the definition, in which case what is it? Update 2: I think I am missing something, as a $3\times3$ matrix with a 0 in the bottom-left corner and no other zeroes does have a strongly connected graph, since the only missing arrow is $3\to1$, but we have the loop $1\to3\to2\to1$. So when is a matrix reducible? Update 3: Browsing the web, I have found some things. I first bumped into this link. Now if that is the definition of a reducible matrix, then either I misunderstand the definition of the block triangular form, or the "if and only if" there doesn't hold, since a matrix with a square block of zeroes bottom-left definitely doesn't satisfy the disjoint set condition but definitely does satisfy the permutation condition, with no permutation at all. Maybe the first condition is equivalent to the non-strong-connectedness of the graph. Yes, because in that case there are $\mu$ nodes from which you can't reach the remaining $\nu$, so the graph is not strongly connected. So at least that condition implies the non-strong-connectedness of the graph. The converse seems a bit trickier. Looking for that definition, I bumped into this link. Note that no matrix there has a lone corner zero (which would be top-right, as the link deals with lower triangular matrices), and all of them satisfy the disjoint set condition in the link above. So what is the definition of a block triangular matrix? If it is that there must be square blocks whose diagonals coincide with part of the original matrix's diagonal and below them there must only be zeroes, then I have finished, since the "if and only if" in the link above is valid, so reducibility implies non-strong-connectedness of the graph - and whoops, I'm not done yet, I still need the converse, so can someone finally come help me on that? And if it isn't, then what the bleep is it and how do I make this damn proof?
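While waiting for a proof, the statement can at least be tested numerically. Here is a sketch of my own (the helper `strongly_connected` is ad hoc, not from any reference) that checks strong connectedness of the digraph of a matrix by breadth-first search from every vertex:

```python
# Check strong connectivity of the digraph of a square matrix:
# edge i -> j whenever A[i][j] != 0 (diagonal entries are irrelevant).
from collections import deque

def strongly_connected(A):
    """True iff the digraph of the square matrix A is strongly connected."""
    n = len(A)
    adj = [[j for j in range(n) if A[i][j] != 0 and i != j] for i in range(n)]
    for start in range(n):
        seen, q = {start}, deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        if len(seen) != n:          # some vertex unreachable from `start`
            return False
    return True

# The 3x3 example from Update 2: only A[2][0] = 0, all other entries nonzero.
A = [[1, 1, 1],
     [1, 1, 1],
     [0, 1, 1]]
print(strongly_connected(A))        # True: the loop 1 -> 3 -> 2 -> 1 exists
```

This confirms the observation in Update 2 that a single corner zero is not enough to break strong connectedness.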
Fix an algebraically closed field $k$ (arbitrary characteristic); all schemes will be of finite type over $k$. (Property *): I'm interested in (classes of) examples of schemes $X$ (irreducible, of dimension $n$) such that any morphism of schemes $\phi: X \rightarrow Y$ with $\dim Y < n$ is constant. There are two examples I know of: projective spaces $\mathbb{P}^n_k$ have this property, and simple abelian varieties do too. (One may also put arbitrary non-reduced structures on these; see below.) $\textbf{Claim}$: More generally, if $X$ is a proper irreducible scheme such that every effective divisor is ample (so proper = projective), then $X$ has property (*). (Projective spaces have this property by definition, and simple abelian varieties do too by a general result from Mumford's book.) Proof: (Eisenbud-Harris give a similar argument for the case of projective space.) Let $X$ be as in the claim and $\phi: X \rightarrow Y$ be a morphism with $Y$ of smaller dimension than $n = \dim X$. Without loss of generality, we may assume $\phi$ is surjective (hence we can pull back Cartier divisors). Choose an effective Cartier divisor $D$ and a point $p \not \in |D|$, the support of $D$, but in the image of $\phi$ (automatic, since $\phi$ is surjective). The pullback of an effective Cartier divisor is again an effective Cartier divisor, hence ample. The pullback of the point will contain a complete curve, hence these two subschemes of $X$ meet by the Nakai-Moishezon criterion, contradicting that $p \not \in |D|$. $\square$ Easy observations (1) Having property $*$ is not stable under blowing up (the blow-up of $\mathbb{P}^2$ at a point admits a non-constant morphism to $\mathbb{P}^1$). (2) If a scheme $X$ satisfies the claim, then by definition so does $X_{red}$. Further, any thickening of $X$ has property $*$. $\textbf{Proof: }$ To check that an $X$ satisfying the claim satisfies $*$, we did calculations in the intersection ring. This is invariant under changing the non-reduced structure.
$\square$ Questions Main question: Are there other (families of?) examples of schemes satisfying (*)? (1) Does every scheme (no finiteness conditions!) have a dense affine open subset? This came up when I was thinking about this, and I realized I can't prove it offhand. Certainly it is true for irreducible schemes, and it suffices to show it for connected schemes. (2) Do you suspect that the only examples also satisfy the claim above? That is, have every effective divisor ample? (3) Certainly all examples of schemes satisfying $*$ must be connected. Are there connected, but not irreducible, examples? I thought this was a little interesting (and admittedly, I have no applications in mind).
Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a 1/2$^−$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a J$^{\pi} = 3/2^−$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in : Phys. Rev. C 96 (2017) 034311

Search for shape-coexisting 0$^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the 0$_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in : Phys. Rev.
C 95 (2017) 061303

Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left \langle r^{2} \right \rangle$ along the even-A tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\ {}^1S_0 \rightarrow 5p6s\ {}^1P_1$ and $5p^2\ {}^3P_0 \rightarrow 5p6s\ {}^3P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in : Phys. Rev. Lett. 122 (2019) 192502

Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.) ; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE.
Despite the long history of the ISOL technique, which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in : Eur. Phys. J. A 55 (2019) 65

Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in : Phys. Rev. C 99 (2019) 054317

Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different.
This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in : Phys. Lett. B 793 (2019) 271-275

Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38–47}$K. [...] 2019 - 11 p. - Published in : Phys. Rev. C 100 (2019) 034304

Evaluation of high-precision atomic masses of A ∼ 50-80 and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al.
High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75, 77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in : Eur. Phys. J. A 55 (2019) 96

Nuclear charge radii of $^{62−80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62−80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of the proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in : Phys. Lett. B 797 (2019) 134805
If \(A\) generates a bounded cosine function on a Banach space \(X\), then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform. Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, the question is discussed which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type. In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized. \(C^0\)-scalar-type spectrality criteria for operators \(A\), whose resolvent set contains the negative reals, are provided. The criteria are given in terms of growth conditions on the resolvent of \(A\) and the semigroup generated by \(A\). These criteria characterize scalar-type operators on the Banach space \(X\) if and only if \(X\) has no subspace isomorphic to the space of complex null sequences. In the Banach space \(c_0\) there exists a continuous function of bounded semivariation which does not correspond to a countably additive vector measure.
This result is in contrast to the scalar case, and it has consequences for the characterization of scalar-type operators. Besides this negative result, we introduce the notion of functions of unconditionally bounded variation, which are exactly the generators of countably additive vector measures. The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C:\Re(z)\gt 0\}\) with values in a Banach space \(X\), are equivalent: \[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a\gt0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \ db \right)^{1/p} \mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert \lt \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p\ dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.
I am reading Katz' book Enumerative Geometry and String Theory. I have a few questions regarding the moduli space of degree $d$ genus $0$ stable maps into $\mathbb P^n$, denoted $\overline{M}(\mathbb{P}^n, d)$. First let me paraphrase some definitions in the book (given between p.32-37) to the best of my understanding (they're not very precise but neither are they in the book!): A tree of $\mathbb P^1$s is a union $C=\bigcup_{i=1}^r C_i$ where each $\phi_i:\mathbb P^1\xrightarrow{\cong}C_i$ is a curve, glued along a finite collection of pairs of points (called nodes) $(p_j, q_j)\in C_{k(j)}\times C_{l(j)}$ for distinct indices $1\leq k(j), l(j) \leq r$ and avoiding cycles - finite sequences of distinct nodes $p_{j_1},p_{j_2}, \dots, p_{j_k} = p_{j_1}$ where a single component of $C$ contains both $p_{j_l}$ and $p_{j_{l+1}}$. A morphism $f:C\rightarrow\mathbb P^n$ is a map such that each $f_i :=f\circ\phi_i:\mathbb P^1 \rightarrow\mathbb P^n$ is a morphism of varieties. The degree of $f$ is the sum of the degrees of the $f_i$. A genus $0$ stable map to $\mathbb P^n$ is a tree morphism $f:C\rightarrow\mathbb P^n$ such that if $f$ is constant on a component $C_i$ of $C$, then $C_i$ contains at least $3$ nodes of the tree. The degree of the stable map is the same thing as its degree as a tree morphism. An isomorphism of stable maps $g:(f:C\rightarrow\mathbb P^n)\rightarrow(f':C'\rightarrow\mathbb P^n)$ is a morphism $g:C\rightarrow C'$ such that $f'\circ g = f$; For each $C_i$, $g(C_i) = C'_j$ and for any $j$, there is a unique $i$ with $g(C_i)=C'_j$; $(\phi_j)^{-1} \circ g\circ \phi_i:\mathbb P^1\rightarrow\mathbb P^1$ is a morphism (hence automorphism?) whenever $g(C_i)\subseteq C'_j$. 
The moduli space of genus $0$ stable maps of degree $d$ to $\mathbb P^n$ is then the set $$\overline{M}(\mathbb P^n, d)=\left\{\mbox{isomorphism classes of degree } d \mbox{ genus 0 stable maps into }\mathbb P^n \right\}$$ Here are my first two questions: Why do we care about trees and cycles? To me, it seems as if they are a convenient (if messy) simplification of something more natural (and apparently related to the concept of genus) but perhaps too complicated to describe in this introductory book. In the definition of a stable map, why three nodes? In the book, it is claimed that $\overline{M}(\mathbb P^n, d)$ is actually a stack. I don't know what a stack actually is, but I know it is a category in some sense. So I'd like to try to make $\overline{M}(\mathbb P^n, d)$ into a category and see how close this is to a "real" stack. First, let's generalise the notion of an isomorphism between two stable maps to that of a morphism, which seems to be well-behaved - a morphism $g:(f:C\rightarrow \mathbb P^n)\rightarrow(f':C'\rightarrow\mathbb P^n)$ of stable maps is a morphism $g:C\rightarrow C'$ such that $f'\circ g = f$; For each $C_i$, $g(C_i) = C'_j$ for some $j$; $(\phi_j)^{-1} \circ g\circ \phi_i:\mathbb P^1\rightarrow\mathbb P^1$ is a morphism (in the sense of algebraic varieties) whenever $g(C_i)\subseteq C'_j$. Two morphisms $g_1 :f_1\rightarrow f_1'$ and $g_2:f_2\rightarrow f_2'$ between stable maps are isomorphic if there exist isomorphisms $\theta:f_1\rightarrow f_2$ and $\theta':f_1'\rightarrow f_2'$ such that $\theta'\circ g_1 = g_2 \circ\theta$. Isomorphism between stable maps and between their morphisms are both equivalence relations.
Let's define a category called $\overline{M}(\mathbb P^n, d)$ whose objects are isomorphism classes $[f]=\left[f:C\rightarrow\mathbb P^n\right]$ of degree $d$ genus $0$ stable maps into $\mathbb P^n$ and whose morphisms are isomorphism classes $[g]:[f]\rightarrow [f']$ of morphisms between stable maps, with composition defined by $[h]\circ [g] = [h\circ g]$. This gives us a much richer structure than just the set of all isomorphism classes defined in the book as there are now "asymmetric relations" (described by the morphisms in this category) between the points of the set $\overline{M}(\mathbb P^n, d)$. My final question is: In what sense is the category that I have come up with related to the actual stack mentioned in the book?
Abbreviation: NRng$_1$ A near-ring with identity is a structure $\mathbf{N}=\langle N,+,-,0,\cdot,1\rangle $ of type $\langle 2,1,0,2,0\rangle $ such that $\langle N,+,-,0,\cdot\rangle $ is a near-ring and $1$ is a multiplicative identity: $x\cdot 1=x$ and $1\cdot x=x$. Let $\mathbf{M}$ and $\mathbf{N}$ be near-rings with identity. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism: $h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$. Remark: It follows that $h(0)=0$ and $h(-x)=-h(x)$. Example 1: $\langle\mathbb{R}^{\mathbb{R}},+,-,0,\cdot,1\rangle$, the near-ring of functions on the real numbers with pointwise addition, subtraction, zero, composition, and the identity function.
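To see concretely why Example 1 is only a near-ring and not a ring, here is a small check of my own (the helper names are ad hoc, and the convention $(f\cdot g)(x)=f(g(x))$ is mine): with composition as multiplication, only one distributive law survives.

```python
# In the near-ring of real functions, (f + g) o h = f o h + g o h always holds,
# but h o (f + g) need not equal h o f + h o g.
f = lambda x: x + 1.0
g = lambda x: 2.0 * x
h = lambda x: x * x          # a nonlinear function breaks left distributivity

add = lambda p, q: (lambda x: p(x) + q(x))   # pointwise addition
mul = lambda p, q: (lambda x: p(q(x)))       # composition

for x in (0.0, 1.0, 2.5):
    # right distributivity: (f+g) o h == f o h + g o h, pointwise
    assert mul(add(f, g), h)(x) == add(mul(f, h), mul(g, h))(x)

# left distributivity fails already at x = 1:
print(mul(h, add(f, g))(1.0), add(mul(h, f), mul(h, g))(1.0))   # 16.0 8.0
```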
Consider the "compacted" formula (as also used in the "Syracuse" version of the Collatz problem), defining one "step" beginning at $a$ and going to $b$ (both odd, $\ge 1$):$$ b = {3a+1\over 2^A } \tag 1$$ where $A$ is the number of halving steps. Now to have a (very) simple cycle of just one step we must have:$$ a = {3a+1\over 2^A } \tag 2$$ It is simple to show that there is no solution except $a=1$, just by rearranging: $$ a = {3a+1\over 2^A } \\ 2^A = {3a+1\over a } $$$$ 2^A = 3+{1\over a } \tag3\\ $$ and having the rhs a perfect power of $2$ requires $a=1$, thus allowing (only) the well-known "trivial" cycle $1 \to 1$. Now extending this to a two-step cycle we need to have$$\begin{array}{} b = {3a+1\over 2^A } & a = {3b+1\over 2^B }\\ \end{array} \tag 4$$ with some so far undetermined $A$ and $B$. To see better the space of possible solutions we can take the product of the lhs's, $a \cdot b$, which must equal the product of the rhs's:$$\begin{array}{} a \cdot b = {3a+1\over 2^A } \cdot {3b+1\over 2^B } \end{array}$$ resulting in the required equality $$\begin{array}{} 2^{A+B} = (3+{1\over a}) \cdot (3+{1\over b }) \end{array} \tag 5$$ This shows that we can only have a solution if the rhs is an integer at all, which is difficult enough, but it must actually equal a perfect power of $2$ larger than $3^2=9$, namely $16$. Now what values must $a,b$ have such that the rhs equals $16$? The only possibility is $a=b=1$, which is the already known trivial cycle, and no other solution is possible. Now you see the pattern of how this generalizes to the disproof of the 3-step cycle, 4-step cycle, and so on for the N-step cycle.
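The two-step case of eq. (5) can also be checked exhaustively with exact rational arithmetic. A sketch of my own (not part of the original argument), searching all odd pairs $(a,b)$ up to a bound:

```python
from fractions import Fraction

def rhs_product(members):
    """Exact product of (3 + 1/a_k) over the assumed odd cycle members a_k."""
    p = Fraction(1)
    for a in members:
        p *= 3 + Fraction(1, a)
    return p

def is_power_of_two(q):
    """True if the rational q is an integer power of 2."""
    return q.denominator == 1 and q.numerator & (q.numerator - 1) == 0

# Search odd pairs (a, b) for solutions of eq. (5): 2^(A+B) = (3+1/a)(3+1/b)
solutions = [(a, b)
             for a in range(1, 200, 2)
             for b in range(1, 200, 2)
             if is_power_of_two(rhs_product([a, b]))]
print(solutions)  # [(1, 1)] -- only the trivial cycle survives
```

Since $9 < (3+\frac1a)(3+\frac1b) \le 16$ for odd $a,b\ge 1$, the only admissible power of two is $16$, reached exactly at $a=b=1$.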
Unfortunately, for several $N$ the possibility for small $a,b,c,\ldots$ exists "theoretically", which means that the rhs-product can reach a perfect power of 2 for some $1 \lt a_1 \ne a_2 \ne a_3 \ne \cdots \ne a_N$ - one might just try some $N$ by hand, remembering the conditions that all members of an assumed cycle should be greater than $1$, should be odd, should not be divisible by $3$, and that all involved $a_k$ should be different from each other, to get a better intuition. Now to proceed further we introduce the knowledge, obtained by simple heuristics, that $a_k \gt 1000$ (this can be checked manually or with a programming language), or $a_k \gt 1\,000\,000$, and even $a_k \gt 2^{60}$ (the latter by an extensive numerical search by de Oliviera and by Rosendaal). If we introduce that knowledge and assume the minimal $a_1$ being, say, $a_1=1+2^{60}$, $a_2 = {3a_1+1\over 2^{A_1}}$ and so on, we find that for all manually reachable $N$ the rhs is disappointingly near to the value of $3^N$, and no perfect power of $2$ is within any realistic distance from it. One can find, using the continued fraction of $\beta = \log(3)/\log(2)$, that we need $N \gt 150\,000$ (or even much more - I don't have the actual value at hand). Well, this shall just give an intuition as to why cycles are very unlikely. If you like to do more, allow negative numbers for $a_1,a_2,\ldots$ and see how and which solutions for small $N$ you can get. Or proceed and compare the $5x+1$ problem with this: we actually find two or three possible cycles for small $N$ and $a_1 \lt 100$, but after that the above method can be used to say much more about the nonexistence of certain $N$-step cycles. I've even found two cycles for the $181x+1$ version having $N=2$ and small $a_1 \lt 100$, but after that, the formula comparing the rhs-product for $N$ steps to perfect powers of $2$ indicates the same "difficulty" for cycles to exist.
[Update] Just to have an example of how a (possibly infinite, I don't know at the moment) set of $N$ can be ruled out as admitting a solution: assume for example $N=12$. So we have on the rhs 12 parentheses of the form $(3+1/a_k)$, whose product should equal $2^S$ (where I use in general the letter $S$ for $S=A_1+A_2+\cdots+A_{12}$, just the sum of exponents, which is also the number of even steps). Then what is the next possible perfect power of 2 above $3^{12}$? We get, using $\beta=\log(3)/ \log(2)$: $S=\lceil N \cdot \beta \rceil = 20$, thus we have the condition$$2^{20} =(3+ {1\over a_1})(3+ {1\over a_2})\cdots(3+ {1\over a_{12}}) \tag 6$$and the candidate solution with smallest $1 \ne a_1 \ne a_2 \ne \cdots \ne a_{12}$ is $$2^{20} \overset?=(3+ {1\over 5})(3+ {1\over 7})(3+ {1\over 11})\cdots(3+ {1\over 37}) \tag 7$$Of course we could simply compute the values of the LHS and the RHS, getting$$ 2^{20}= 1048576 \gt 697035.738776 $$and because increasing the values of the $a_k$ would decrease the value of the rhs even further, there is no solution and thus no $N=12$-step cycle. But for the sake of generality we proceed differently. Instead we do a rough estimate of the mean value of the $a_k$. Assume all $a_k$ are equal to their "mean" $a_m$; then we can rewrite the equation$$ 2^S = (3+{1\over a_m})^N \\ 2^{S/N} = (3+{1\over a_m})\\ 2^{S/N} -3 = {1\over a_m}\\ {1 \over 2^{S/N} -3} = a_m \tag 8 $$ $$ a_m = {1 \over 2^{20/12}-3}\approx 5.72075494219... \tag 9 $$and because $a_m$ is somehow a rough mean, some $a_k$ must be smaller and some must be larger. But there is only one possible value for any $a_k \lt a_m$, namely $a_1=5$.
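The mean-member estimate above is easy to reproduce. A small sketch of my own, computing $S=\lceil N\beta\rceil$ and the rough mean $a_m = 1/(2^{S/N}-3)$:

```python
import math

def mean_member(N):
    """Smallest admissible S above N*log2(3), and the rough mean a_m."""
    S = math.ceil(N * math.log2(3))
    a_m = 1 / (2 ** (S / N) - 3)
    return S, a_m

S, a_m = mean_member(12)
print(S, a_m)  # 20 5.7207549...
```

For $N=12$ this reproduces $S=20$ and $a_m\approx 5.72$, so some assumed cycle member would have to be an odd non-multiple of $3$ below $5.72$, leaving only $5$.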
After that, moving one parenthesis with that assumed value to the lhs in eq. (6) we get $$ 2^S/(3+1/5) = (3+{1\over a_m})^{N-1} \\ 2^{20} \cdot 5/16 = (3+{1\over a_m})^{11} \\ a_m \approx 5.79638745091$$and we find that there is no way that $a_m$ can be a rough mean of the remaining $11$ values $a_k$: with $1$, $3$ and $5$ excluded there is no odd admissible integer $a_k \le 5$ left, so all remaining $a_k \ge 7 \gt a_m$, and an $N=12$-step cycle cannot exist. We see nicely that for some $N$ we can exclude the possibility of a cycle just from the basic assumptions on the form of the members $a_k$ of a possible cycle via formula (6), and for such $N$ we do not need to resort to the bound $a_k \gt 2^{60}$ found by de Oliviera and Rosendaal. However - and this leads to the observation that the general solution of the cycle problem needs some deeper thinking - there are some $N$ for which $2^S$ is comfortably near to $3^N$, so we can allow a set of small $a_k$ such that the rhs can approach the lhs. The continued fraction of $\beta$ gives us pairs of $N$ and $S$ for which $2^S$ is especially near to $3^N$ and for which a cycle cannot be excluded by this method alone. [update 2] I've not done this before explicitly, but trying the continued-fraction convergents and filling into the $a_k$ the set of smallest possible consecutive integers ($5,7,11,13,\ldots$) we get the following small table:

    N      S       lhs = 2^S             rhs = (3+1/a_1)...(3+1/a_N)   lhs/rhs   a_m
    1      2       4                     3.2                           1.25      1              ~ 2^0
    5      8       256                   292.571428571                 0.875     31.81          ~ 2^5
    41     65      3.68934881474 E19     5.44736223436 E19             0.677     1192.08        ~ 2^10
    306    485     9.98959536101 E145    5.57867460455 E146            0.179     99780.79       ~ 2^16
    15601  24727   3.70427126979 E7443   1.06756786898 E7444           0.346     285817586.21   ~ 2^28
    79335  125743  2.59863196329 E37852  8.97264176433 E37852          0.289     7216102492.69  ~ 2^33

and we see that the RHS can reach the LHS for these specific $N$, and thus the assumption of the smallest possible values for the $a_k$ does not suffice to exclude the possibility of a cycle.
The "mean" $a_m$, estimated by the geometric mean of all parentheses as proposed above, is in the last column; it increases with the size of $N$, and we get an impression of the cycle length $N$ at which this allows values of $a_1 \gt 2^{60}$ - and thus where this method has its heuristic limit: the last entry in the last row means $a_m \approx 2^{33}$, and this means that the knowledge that $a_1 \gt 2^{60}$ suffices to disprove all cycles of length $N \le 79335$. But it might anyway be interesting to see which explicit smallest value for $a_1$ (which we can assume to be the smallest member of the cycle) and which following sequence of consecutive possible members would suffice to make the LHS smaller than the RHS. That would surely be a nice exercise ...
Assume that $\phi:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is a smooth vector field, and assume that we can find vectors $y_k,x_k$ ($k$ positive integer) such that $(\phi(x_k)-\phi(y_k),x_k-y_k)\geq k \| y_k-x_k\|^2$, where $(,)$ is the usual scalar product. Why does this condition contradict the fact that $\phi$ is Lipschitz? Recall Cauchy-Schwarz, which says $(x,y)\leq \|x\| \|y\|$. Then $$(\phi(x_k)-\phi(y_k),x_k-y_k)\leq \|\phi(x_k)-\phi(y_k)\| \|x_k-y_k\|$$ Since $\phi$ is Lipschitz, there exists a constant $C$ such that $$\|\phi(x_k)-\phi(y_k)\|\leq C\|x_k-y_k\|$$ and hence $$(\phi(x_k)-\phi(y_k),x_k-y_k)\leq C\|x_k-y_k\|^2.$$ But if we had two sequences $x_k$, $y_k$ with $$(\phi(x_k)-\phi(y_k),x_k-y_k)\geq k\|y_k-x_k\|^2,$$ we would have to have $C>k$ for every integer $k$, which is impossible. Hope that helps. Note: I assume that your statement is meant to be interpreted as "for every integer $k$ we can find $x_k$, $y_k$ ..." rather than "there exists $k$ with..." since setting $\phi$ equal to $k$ times the identity deals with the second case.
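A quick numerical illustration of the chain of inequalities (my own sketch, not part of the question): for a Lipschitz field the inner product $(\phi(x)-\phi(y),\,x-y)$ never exceeds $C\|x-y\|^2$, so no sequence can make the factor grow like $k$.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 2.0  # Lipschitz constant of the sample field below

def phi(x):
    # A smooth vector field on R^3, Lipschitz with constant at most C = 2
    return 2.0 * np.sin(x)

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.dot(phi(x) - phi(y), x - y)
    assert lhs <= C * np.dot(x - y, x - y) + 1e-12
print("(phi(x)-phi(y), x-y) <= C*||x-y||^2 on all samples")
```

The field $2\sin$ is used only because its Lipschitz constant is easy to bound; any Lipschitz $\phi$ would do.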
Suppose $f$ is an entire function on $\mathbb{C}^n$ that satisfies, for every $\epsilon>0$, a growth condition $$|f(z)|\leq C_{\epsilon}(1+|z|)^{N_{\epsilon}}e^{\epsilon | \text{Im}\,z|}$$ Show that $f$ is a polynomial. (Hint: study $\hat{f} = \mathcal{F}(f)$, the Fourier transform.) I know I'm supposed to apply the Paley-Wiener-Schwartz theorem, but I am not sure how. Any suggestions and/or tips are greatly appreciated. Thanks.
Kernel of Ring Epimorphism is Ideal

Theorem

Let $\phi: \left({R_1, +_1, \circ_1}\right) \to \left({R_2, +_2, \circ_2}\right)$ be a ring epimorphism, let $K$ denote its kernel, and let $q_K: R_1 \to R_1 / K$ denote the quotient epimorphism. Then: There is a unique ring isomorphism $g: R_1 / K \to R_2$ such that: $g \circ q_K = \phi$

Proof

Existence of Kernel $\Box$

Uniqueness of Quotient Mapping: there exists a unique ring isomorphism $g: R_1 / K \to R_2$ such that $g \circ q_K = \phi$. Moreover, $\phi$ is a ring isomorphism if and only if $K = \left\{{0_{R_1}}\right\}$. $\blacksquare$
What are common cost functions used in evaluating the performance of neural networks? Details (feel free to skip the rest of this question, my intent here is simply to provide clarification on notation that answers may use to help them be more understandable to the general reader) I think it would be useful to have a list of common cost functions, alongside a few ways that they have been used in practice. So if others are interested in this I think a community wiki is probably the best approach, or we can take it down if it's off topic. Notation So to start, I'd like to define a notation that we all use when describing these, so the answers fit well with each other. This notation is from Nielsen's book. A feedforward neural network is many layers of neurons connected together. It takes in an input, that input "trickles" through the network, and then the network returns an output vector. More formally, call $a^i_j$ the activation (aka output) of the $j^{th}$ neuron in the $i^{th}$ layer, where $a^1_j$ is the $j^{th}$ element in the input vector. Then we can relate the next layer's input to its previous via the following relation: $a^i_j = \sigma(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j)$ where $\sigma$ is the activation function, $w^i_{jk}$ is the weight from the $k^{th}$ neuron in the $(i-1)^{th}$ layer to the $j^{th}$ neuron in the $i^{th}$ layer, $b^i_j$ is the bias of the $j^{th}$ neuron in the $i^{th}$ layer, and $a^i_j$ represents the activation value of the $j^{th}$ neuron in the $i^{th}$ layer. Sometimes we write $z^i_j$ to represent $\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j$, in other words, the activation value of a neuron before applying the activation function. For more concise notation we can write $a^i = \sigma(w^i \times a^{i-1} + b^i)$. To use this formula to compute the output of a feedforward network for some input $I \in \mathbb{R}^n$, set $a^1 = I$, then compute $a^2$, $a^3$, ..., $a^m$, where $m$ is the number of layers.
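The layer recursion above is straightforward to sketch in code. A minimal illustration (the network shape and random weights are hypothetical, chosen only to demonstrate the recursion $a^i = \sigma(w^i a^{i-1} + b^i)$):

```python
import numpy as np

def sigma(z):
    """Sigmoid activation, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(weights, biases, a):
    """Propagate an input through the layers: a^i = sigma(w^i a^{i-1} + b^i)."""
    for w, b in zip(weights, biases):
        a = sigma(w @ a + b)
    return a

# A toy 3-2-1 network; the weights are random placeholders for illustration.
rng = np.random.default_rng(42)
weights = [rng.standard_normal((2, 3)), rng.standard_normal((1, 2))]
biases = [rng.standard_normal(2), rng.standard_normal(1)]
out = feedforward(weights, biases, np.array([0.5, -0.2, 0.1]))
print(out.shape)  # (1,)
```

With a sigmoid output layer the final activations land in $(0,1)$, which is the range the cost functions below are defined on.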
Introduction A cost function is a measure of "how good" a neural network did with respect to its given training sample and the expected output. It also may depend on variables such as weights and biases. A cost function is a single value, not a vector, because it rates how good the neural network did as a whole. Specifically, a cost function is of the form $$C(W, B, S^r, E^r)$$ where $W$ is our neural network's weights, $B$ is our neural network's biases, $S^r$ is the input of a single training sample, and $E^r$ is the desired output of that training sample. Note this function can also potentially be dependent on $y^i_j$ and $z^i_j$ for any neuron $j$ in layer $i$, because those values are dependent on $W$, $B$, and $S^r$. In backpropagation, the cost function is used to compute the error of our output layer, $\delta^L$, via $$\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j).$$ This can also be written as a vector via $$\delta^L = \nabla_a C \odot \sigma'(z^L).$$ We will provide the gradient of the cost functions in terms of the second equation, but if one wants to prove these results themselves, using the first equation is recommended because it's easier to work with. Cost function requirements To be used in backpropagation, a cost function must satisfy two properties: 1: The cost function $C$ must be able to be written as an average $$C=\frac{1}{n} \sum\limits_x C_x$$ over cost functions $C_x$ for individual training examples, $x$. This is so it allows us to compute the gradient (with respect to weights and biases) for a single training example, and run Gradient Descent. 2: The cost function $C$ must not be dependent on any activation values of a neural network besides the output values $a^L$. Technically a cost function can be dependent on any $a^i_j$ or $z^i_j$.
We just make this restriction so we can backpropagate, because the equation for finding the gradient of the last layer is the only one that is dependent on the cost function (the rest are dependent on the next layer). If the cost function is dependent on other activation layers besides the output one, backpropagation will be invalid because the idea of "trickling backwards" no longer works. Also, activation functions are required to have an output $0\leq a^L_j \leq 1$ for all $j$. Thus these cost functions need only be defined within that range (for example, $\sqrt{a^L_j}$ is valid since we are guaranteed $a^L_j \geq 0$).
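As a concrete instance, here is a sketch of the standard quadratic cost for a single training example together with the output-layer error $\delta^L$ it induces (the sample vectors below are made up for illustration):

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigma_prime(z):
    s = sigma(z)
    return s * (1.0 - s)

def quadratic_cost(a_L, y):
    """C_x = 1/2 * ||y - a^L||^2 for a single training example."""
    return 0.5 * np.sum((y - a_L) ** 2)

def output_error(a_L, y, z_L):
    """delta^L = grad_a C (*) sigma'(z^L); for quadratic cost, grad_a C = a^L - y."""
    return (a_L - y) * sigma_prime(z_L)

# Made-up output-layer values for a 2-neuron output
z_L = np.array([0.3, -1.2])
a_L = sigma(z_L)
y = np.array([1.0, 0.0])
print(quadratic_cost(a_L, y), output_error(a_L, y, z_L))
```

Note how both requirements are met: the total cost over a training set is the average of the per-example `quadratic_cost` values, and the function depends only on $a^L$ (and the target), not on inner activations.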
Let $x \in [0,1]$ denote some state (e.g. market share). Let $i \in \{1,2\}$ denote an agent (e.g. firm). I'm considering a model where payoffs $F_i(x)$ are perfectly invertible in the sense that the payoff functions can be mirrored along the line $x = \frac{1}{2}$, with $F_1(x) = F_2(1-x)$. Take $F_1(x) = x$ and $F_2(x) = 1 - x$ for instance. Is there a fixed term for this kind of game? It has somewhat of a zero-sum flavor, because preferences are perfectly opposing. Firm 1 wants to increase its market share, and firm 2 wants to decrease the market share of firm 1 in the same manner. I can think of Hotelling or a linear city. But is this the most generic characterization? Or, even better, is there a mathematical term for this kind of functions which are mirrored along a fixed line? Edit: Example Consider the following advertising differential game defined by $(x,u_1,u_2) \in [0,1] \times \mathbb R_+ \times \mathbb R_+$\begin{align}&v_1(x_0) = \max_{u_1}\int_0^\infty{e^{-t}(x - u_1^2/2)dt}\\&v_2(x_0) = \max_{u_2}\int_0^\infty{e^{-t}(1 - x - u_2^2/2)dt}\\\text{s.t.}\quad & \dot x = u_1 - u_2\end{align}$x$ is the market share of firm 1, and $u_i$ is the respective advertising effort. A stationary feedback Nash equilibrium $(\phi_1(x), \phi_2(x))$ solves a coupled system of Hamilton-Jacobi-Bellman equations\begin{align}&v_1(x) = \max_{u_1}\{x - u_1^2/2 + v'_1(x)(u_1-\phi_2(x))\}\\&v_2(x) = \max_{u_2}\{1 - x - u_2^2/2 + v'_2(x)(\phi_1(x)-u_2)\}\end{align} It turns out that $(\phi_1(x), \phi_2(x)) = (1,1)$ is the unique Nash equilibrium with associated values \begin{align} &v_1(x) = x - \frac{1}{2}\\ &v_2(x) = \frac{1}{2}-x\\ \Longrightarrow \quad &v_1(x) = v_2(1-x). \end{align} How would one characterize the class of games in general? Symmetric antagonistic?
I might propose the following definition: let $(\phi_1(x), \phi_2(x))$ solve \begin{align} &v_1(x) = \max_{u_1}\{F_1(x,u_1,\phi_2(x)) + v'_1(x)f(x,u_1,\phi_2(x))\}\\ &v_2(x) = \max_{u_2}\{F_2(x,\phi_1(x),u_2) + v'_2(x)f(x,\phi_1(x),u_2)\}. \end{align} A differential game is symmetrically antagonistic if \begin{align} v_1(x) = v_2(\overline x - x) \quad \forall x \in X \end{align} where $\overline x := \max X$.
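The claimed values in the advertising example are easy to sanity-check numerically. A sketch of my own, assuming the feedback efforts come from the first-order conditions $u_1 = v_1'(x)$ and $u_2 = -v_2'(x)$ of the two HJB maximizations:

```python
import numpy as np

# Candidate values from the example: v1(x) = x - 1/2, v2(x) = 1/2 - x
v1 = lambda x: x - 0.5
v2 = lambda x: 0.5 - x
dv1, dv2 = 1.0, -1.0                 # v1'(x), v2'(x)

# First-order conditions of the HJB maximizations: u1 = v1'(x), u2 = -v2'(x)
u1, u2 = dv1, -dv2                   # both equal 1

for x in np.linspace(0.0, 1.0, 11):
    h1 = x - u1**2 / 2 + dv1 * (u1 - u2)       # rhs of firm 1's HJB at the max
    h2 = 1 - x - u2**2 / 2 + dv2 * (u1 - u2)   # rhs of firm 2's HJB at the max
    assert abs(v1(x) - h1) < 1e-12
    assert abs(v2(x) - h2) < 1e-12
    assert abs(v1(x) - v2(1 - x)) < 1e-12      # the mirror property
print("HJB equations and mirror symmetry verified")
```

With both efforts equal, $\dot x = u_1 - u_2 = 0$, so the state stays put and the integrated payoffs reduce to the stated affine values.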
I am just starting a course on Lie groups and I'm having some difficulty understanding some of the ideas to do with vector fields on Lie groups. Here is something that I have written out, which I know is wrong, but can't understand why: Let $X$ be any vector field on a Lie group $G$, so that $X\colon C^\infty(G)\to C^\infty(G)$. Write $X_x$ to mean the tangent vector $X_x\in T_x G$ coming from evaluation at $x$, that is, define $X_x(-)=(X(-))(x)$ for any $-\in C^\infty(G)$. We also write $L_g$ to mean the left-translation diffeomorphism $x\mapsto gx$. Now \begin{align} X_g(-) = (X(-))(g) &= (X(-))(L_g(e))\\ &= X(-\circ L_g)(e)\\ &= X_e(-\circ L_g) \\ &= ((DL_g)_eX_e)(-). \end{align} Using this we can show that $((L_g)_*X)_{L_g(h)}=X_{L_g(h)}$ for all $h\in G$, and thus $(L_g)_*X=X$, i.e. $X$ is left-invariant. I'm sure that the mistake must be very obvious, but I'm really not very good at this sort of maths, so a gentle nudge to help improve my understanding would be very much appreciated!
Let $X$ be a Hausdorff space. Let $\mathscr{F}=\left\{f_j:X\rightarrow R\mid j\in J\right\}$ be a family of continuous real-valued functions with the following property: for every $x\in X$, and every closed set $A\subset X$ with $x\notin A$, there exists $f_j\in \mathscr{F}$ with $f_j (x)>0$ and $f_j(a)=0$ for every $a\in A$. Define $F:X\rightarrow R^J$ via $F(x)=(f_j(x))_{j\in J}$. Show that $F$ is an embedding of $X$ into $R^J$. First of all (injectivity): in a Hausdorff space every single point is closed. So if $F(x_1)=F(x_2)$ then $x_1=x_2$, since the $f_j$'s can separate the points of $X$. Secondly, since each $f_j$ is continuous, so is $F$, by the characteristic property of the product topology. Now I'm stuck on the proof that the inverse of $F$ is continuous. Could anyone give me some help?
I'm only going to address question 1 in some generality, since the infinite measure $\mu(E) = \infty$ for $\emptyset \neq E \in \Sigma$ gives a complete measure on $\Sigma$ and thus an obvious answer to question 2 (as was noted by Niels Diepeveen). Most of the things I'm saying below can be found or extracted from Fremlin's Measure theory, volumes 1 and 2, parts even verbatim, up to minor modifications of notation. Let me say right away that the situation is somewhat subtle. The answer I'm going to give boils down to: Every measure $\mu$ is induced by an outer measure $\mu^{\ast}$ canonically obtained from $\mu$, but in general the $\sigma$-algebra of $\mu^{\ast}$-measurable sets can be strictly larger than the one on which $\mu$ is defined. However, the $\mu$-measurable sets are the same as the $\mu^{\ast}$-measurable sets provided that $\mu$ is complete (obviously necessary) and $\sigma$-finite (not strictly necessary but sufficient and good enough for many purposes) and thus $\mu$ can be obtained back from $\mu^{\ast}$ by Carathéodory's construction (or simply restriction, if you prefer). The $\sigma$-finiteness condition can be weakened to localizability/local determination conditions, but I prefer to formulate the result for $\sigma$-finite measures, as this is a more widely known condition. Start with a measure space $(X,\Sigma,\mu)$, complete or not. It seems to me that in this generality, with no more information at hand, the only thing we can do in the first place is to try and associate an outer measure $\mu^{\ast}$ with that situation. 
The only one that comes to mind is: For $A \subset X$ put $\mu^{\ast} (A) = \inf{\left\{\mu(E)\,:\,A \subset E, \,E\in \Sigma\right\}}.$ The following points are all very easy but we'll need them later on, so for the record: This $\mu^{\ast}$ is an outer measure: $\mu^{\ast}(\emptyset) = 0$ and monotonicity $\mu^{\ast}(A) \leq \mu^{\ast}(B)$ for $A \subset B$ are clear from the definition. $\sigma$-subadditivity follows from the usual $2^{-n}$-trick: Let $A = \bigcup_{n=1}^{\infty} A_n$ and $\varepsilon \gt 0$. We may assume that $\mu^{\ast}(A_n)$ is finite for all $n$. For each $A_n$ choose $E_n \in \Sigma$ with $A_n \subset E_n$ and $\mu(E_n) \leq \mu^{\ast}(A_n) + \varepsilon \cdot 2^{-n}$; then $A \subset E = \bigcup_{n=1}^{\infty} E_n$ and $\mu(E) \leq \sum \mu(E_n) \leq \sum \mu^{\ast}(A_n) + \varepsilon$. Since $\varepsilon \gt 0$ was arbitrary, we conclude $\mu^{\ast}(A) \leq \sum \mu^{\ast}(A_n)$. For all $E \in \Sigma$ we have $\mu(E) = \mu^{\ast}(E)$. Notice that the infimum in the definition of $\mu^{\ast}$ is actually a minimum: choose $E_n \in \Sigma$, $A \subset E_n$ such that $\mu(E_n) \leq \mu^\ast(A) + 1/n$. Then $E = \bigcap_{n=1}^{\infty} E_n \in \Sigma$, we have $A \subset E$ and $\mu^{\ast}(A) = \mu(E)$. The usual Carathéodory extension theorem applied to $\Sigma$ and $\mu$ shows that all the sets $E \in \Sigma$ are measured by $\mu^{\ast}$, so $$\Sigma \subset \Sigma_{C} = \{E \subset X\,:\,\mu^{\ast}(A) = \mu^{\ast}(A \cap E) + \mu^{\ast}(A \smallsetminus E)\text{ for all } A \subset X\}.$$Appealing to Carathéodory's extension theorem here is clearly overkill: Let $F \in \Sigma$. By subadditivity we have for all $A \subset X$ that $\mu^{\ast}(A) \leq \mu^{\ast}(A \cap F) + \mu^{\ast}(A \smallsetminus F)$, and if $A$ has infinite outer measure then $\mu^{\ast}(A) \geq \mu^{\ast}(A \cap F) + \mu^{\ast}(A \smallsetminus F)$ holds vacuously, so assume that $\mu^{\ast}(A) \lt \infty$.
According to the previous point we can choose $E \in \Sigma$, with $A \subset E$ and $\mu^{\ast}(A) = \mu(E)$. Moreover, $A \cap F \subset E \cap F \in \Sigma$ and $A \smallsetminus F \subset E \smallsetminus F \in \Sigma$ so that $\mu^{\ast}(A \cap F) + \mu^{\ast}(A \smallsetminus F) \leq \mu^{\ast}(E \cap F) + \mu^{\ast}(E\smallsetminus F) = \mu(E \cap F) + \mu(E\smallsetminus F) = \mu(E) = \mu^{\ast}(A)$. So here's what we have so far: given a measure space $(X,\Sigma,\mu)$ we find an outer measure $\mu^{\ast}$ on $X$, and the associated space $(X,\Sigma_{C},\mu_C)$, where $\mu_{C}$ is the complete measure obtained from $\mu^{\ast}$ by Carathéodory's construction. We argued that $\Sigma \subset \Sigma_C$ and for $E \in \Sigma$ we have $\mu(E) = \mu_C(E)$, so $\mu_C$ is an extension of $\mu$. This gives a first partial affirmative answer to Q1: there always exists an outer measure $\mu^{\ast}$ such that the restriction of $\mu^{\ast}$ to $\Sigma$ is $\mu$ itself. That is, $\mu_{C}$ satisfies requirement 2. of your Q1. Recall that every measure space $(X,\Sigma,\mu)$ has a completion: there is a smallest $\sigma$-algebra $\check{\Sigma}$ and a complete measure $\check{\mu}$ such that $\Sigma \subset \check{\Sigma}$ and $\mu(E) = \check{\mu}(E)$ for $E \in \Sigma$. Explicitly, $\check{\Sigma}$ consists of the sets of the form $E \cup N$ where $E \in \Sigma$ and $N$ is a subset of a $\mu$-null set in $\Sigma$. It is not hard to show that $\check{\Sigma}$ is a $\sigma$-algebra and that every complete measure $\bar{\mu}$ extending $\mu$ must extend $\check{\mu}$. More precisely, if $\bar{\mu}$ is a complete measure defined on a $\sigma$-algebra $\bar{\Sigma} \supset \Sigma$ and $\bar{\mu}(E) = \mu(E)$ for all $E \in \Sigma$ then $\bar{\Sigma} \supset \check{\Sigma}$ and $\bar{\mu}(\check{E}) = \check{\mu}(\check{E})$ for all $\check{E} \in \check{\Sigma}$. Observe: If $(X,\Sigma,\mu)$ is already complete then $\check{\Sigma} = \Sigma$ and $\check{\mu} = \mu$. 
If $\mu$ isn't complete, then clearly $\check{\Sigma} \supsetneqq \Sigma$ and $\check{\mu} \neq \mu$ as you already observed by assuming $(X,\Sigma,\mu)$ to be complete. Finally, it is not very hard to check that $(\check{\mu})^{\ast} = \mu^{\ast}$ so that passing to the completion doesn't change the associated outer measure. Summarizing what we know so far, we have three measure spaces: the original measure space $(X,\Sigma,\mu)$ we started with, its completion $(X,\check{\Sigma},\check{\mu})$, its extension $(X,\Sigma_{C},\mu_{C})$ obtained via Carathéodory's construction from the outer measure $\mu^{\ast}$. We know that $\Sigma \subset \check{\Sigma} \subset \Sigma_{C}$ and that $\check{\mu}$ is an extension of $\mu$ and that $\mu_{C}$ is an extension of $\check{\mu}$. Your question becomes, in my interpretation above: Are $\check{\Sigma} = \Sigma_C$ and $\check{\mu} = \mu_{C}$? This turns out to be wrong in general. Here's an artificial, very degenerate, but easily tractable example (I learned this from Fremlin's treatise Measure theory, volume 2, exercise 213Ya, page 31): Let $X$ be a countable set and consider the outer measure $\varphi(A) = \sqrt{\# A}$. The $\sigma$-algebra of $\varphi$-measurable sets is $\Sigma = \{\emptyset,X\}$: If $\emptyset \neq E \neq X$ choose $A = \{e, x\}$ with $e \in E$ and $x \in X \smallsetminus E$. Then $2 = \varphi(A \cap E) + \varphi(A \smallsetminus E) \gt \varphi(A) = \sqrt{2}$ so that $E$ isn't $\varphi$-measurable. The thing “going wrong here” is of course that $\varphi$ is strictly subadditive on sets of finite non-zero measure. Clearly $\mu(\emptyset) = 0$ and $\mu(X) = \infty$ is the measure obtained from $\varphi$ by the Carathéodory construction. Now $\mu^{\ast}$ is the infinite (outer) measure $\mu^{\ast}(\emptyset) = 0$ and $\mu^{\ast}(A) = \infty$ if $A \neq \emptyset$. The $\sigma$-algebra of $\mu^{\ast}$-measurable sets is $\mathcal{P}(X)$ and $\mu_{C} = \mu^{\ast}$. 
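Fremlin's degenerate example can be checked by brute force on a small finite stand-in for the countable set $X$. A sketch of my own, testing Carathéodory measurability of every subset under $\varphi(A)=\sqrt{\#A}$:

```python
from itertools import combinations
from math import sqrt, isclose

X = set(range(6))                    # small finite stand-in for the countable set
phi = lambda A: sqrt(len(A))         # the outer measure phi(A) = sqrt(#A)

subsets = [set(c) for r in range(len(X) + 1) for c in combinations(sorted(X), r)]

def caratheodory_measurable(E):
    """phi(A) == phi(A & E) + phi(A - E) for every test set A in X."""
    return all(isclose(phi(A), phi(A & E) + phi(A - E)) for A in subsets)

measurable = [E for E in subsets if caratheodory_measurable(E)]
print(measurable)  # [set(), {0, 1, 2, 3, 4, 5}] : only the trivial sets
```

Exactly as in the argument above, any proper nonempty $E$ fails on a two-point test set $A=\{e,x\}$ straddling $E$, since $1+1>\sqrt2$.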
In particular $\Sigma \subsetneqq \Sigma_{C}$ and $\mu \neq \mu_{C}$. The take-away is of course: If $\mu$ is induced from some outer measure $\varphi$ then it need not be identical to the measure $\mu_{C}$ given by its associated outer measure $\mu^{\ast}$. The counterexample above is admittedly very artificial and not something we really care about. So, we're looking for conditions under which $\check{\mu} = \mu_C$. Here's a natural one, which gives an affirmative answer to Q1 under the additional hypothesis that $(X,\Sigma,\mu)$ is $\sigma$-finite, that is to say $X = \bigcup_{n=1}^{\infty} E_{n}$ with $\mu(E_n) \lt \infty$. The following proposition is a minor adaptation of Fremlin's more general proposition 213C on p.24 of volume 2 of his Measure theory: Proposition. If $(X,\Sigma,\mu)$ is complete and $\sigma$-finite then $\mu = \mu_{C}$. In other words, assuming completeness and $\sigma$-finiteness, $\mu$ is induced by the outer measure $\mu^{\ast}$ given by $$\mu^{\ast} (A) = \inf{\left\{\mu(E)\,:\,A \subset E, \,E\in \Sigma\right\}}$$ and $\Sigma$ coincides with the $\mu^{\ast}$-measurable sets in the sense of Carathéodory. It suffices to prove that $\Sigma = \Sigma_C$. Recall that $\Sigma \subset \Sigma_C$ where $$\Sigma_C = \{E \subset X\,:\,\mu^{\ast}(A) = \mu^{\ast}(A \cap E) + \mu^{\ast}(A \smallsetminus E)\text{ for all } A \subset X\}.$$Let $F \in \Sigma_C$. Since we assume that $X = \bigcup_{n =1}^{\infty} E_n$ with $E_n \in \Sigma$ and $\mu(E_n) \lt \infty$ we can put $F_n = E_n \cap F$ and it suffices to show that $F_n \in \Sigma$ because $F = \bigcup_{n=1}^{\infty} F_n$. Choose $G_1 \in \Sigma$ with $F_n \subset G_1 \subset E_n$ and $\mu(G_1) = \mu^{\ast}(E_n \cap F_n)$ and $G_2 \in \Sigma$ with $E_n \smallsetminus F_n \subset G_2 \subset E_n$ and $\mu(G_2) = \mu^{\ast}(E_n \smallsetminus F_n)$.
Since $F_n \in \Sigma_C$ we have by definition of $\Sigma_C$ that$$\mu(G_1) + \mu(G_2) = \mu^{\ast}(E_n \cap F_n) + \mu^{\ast}(E_n \smallsetminus F_n) = \mu^{\ast}(E_n) = \mu(E_n).$$Since $\mu(E_n) \lt \infty$ and $E_n = G_1 \cup G_2$ we have that $\mu(G_1 \cap G_2) = \mu(G_1) + \mu(G_2) - \mu(E_n) = 0.$But notice that $G_1 \smallsetminus F_n \subset G_1 \cap G_2$, so $G_1 \smallsetminus F_n$ is a null-set, hence $\mu$-measurable since $\mu$ is complete, and $F_n = G_1 \smallsetminus (G_1 \smallsetminus F_n)$ shows that $F_n \in \Sigma$, as desired.
In the mentioned context, what is meant is that, between a pair of qubits that are coupled, an XX coupling means something of the form$$X\otimes X\equiv\left(\begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right),$$tensored with identity on all other qubits, where $X$ is the standard Pauli matrix. You then sum these terms over every possible pair of coupled qubits, possibly with different strengths. For example, in a spin chain, qubits labelled 1 to $n$, with a nearest-neighbour coupling (i.e. between all pairs $i$ and $i+1$), the XX coupling means$$\sum_{i=1}^{n-1}J_i\mathbb{I}^{\otimes(i-1)}\otimes X\otimes X\otimes\mathbb{I}^{\otimes(n-i-1)} $$for real parameters $J_i$. Similarly, $YY$ means replacing the $X\otimes X$ with $Y\otimes Y$. In other contexts, the terminology can be used slightly differently. For example, in the condensed matter community, they usually talk about an "XX Hamiltonian". This does not mean a Hamiltonian with XX couplings. Instead, it means a Hamiltonian with terms of the form $XX+YY$ between coupled pairs of qubits. This is also called the exchange interaction. To make this notation clearer, let me give further examples. An $XXX$ Hamiltonian, normally called the Heisenberg Hamiltonian, would mean couplings of the form $XX+YY+ZZ$, while $XXZ$ means $XX+YY+\Delta ZZ$ for some parameter $\Delta$. In other words, there are generally 3 terms that you have to worry about in a two-qubit coupling: $aXX+bYY+cZZ$, and a notation like "XX" or "XXX" tells you the number of non-zero terms $(a,b,c)$ which are the same, while "XXZ" tells you that two values are the same ($a=b$) but that one value is different. The further complication is that this notation is not consistently used. Sometimes people use "XY" to mean $XX+YY$ and "XYZ" to mean $XX+YY+ZZ$. "what purpose do they serve" - The purpose is to change the maths.
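The nearest-neighbour sum above can be built explicitly for a small chain. A minimal sketch (my own construction, with arbitrary illustrative couplings $J_i$):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
I2 = np.eye(2)

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def xx_chain(n, J):
    """Nearest-neighbour XX coupling sum_i J_i X_i X_{i+1} on n qubits."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        ops = [I2] * n
        ops[i] = X
        ops[i + 1] = X
        H += J[i] * kron_chain(ops)
    return H

H = xx_chain(3, [1.0, 0.5])
print(H.shape)  # (8, 8)
```

Replacing `X` in the pair positions by $Y$ or $Z$ (or summing several such terms) gives the $YY$, $ZZ$, and hence the $XX+YY$ exchange Hamiltonians described above.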
The whole point is that without these extra terms, D-wave's quantum computer is not universal - it is incapable of implementing an arbitrary quantum computation. Let me try to give some insight as to why that might be (I don't pretend that this applies directly). To that end, consider the simple geometry of a one-dimensional chain. You'd have some sort of Hamiltonian$$H=\sum_{i=1}^{n-1}\Delta_iZ_iZ_{i+1}+\sum_{i=1}^nB_iX_i.$$The great thing about this Hamiltonian, the transverse Ising model, is that it's exactly solvable via Bogoliubov and Jordan-Wigner transformations. But that means that we can classically simulate its effects and so it's not interesting from a computation perspective. However, if we add extra terms to make the Hamiltonian$$H=\sum_{i=1}^{n-1}\Delta_iZ_iZ_{i+1}+\sum_{i=1}^{n-1}\tilde\Delta_iX_iX_{i+1}+\sum_{i=1}^nB_iX_i.$$then we don't know how to simulate it, and it has the potential to perform interesting computations (this is a long way from proving that simulation of this Hamiltonian is BQP complete).
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
On the DNA Computer Binary Code

In any finite set we can define a binary operation or a partial order in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the number of hydrogen bonds and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by DNA molecules as a computer binary code of zeros (0) and ones (1).

1. Boolean lattice of the four DNA bases

In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.

2. Boolean (logic) operations in the set of DNA bases

The Boolean algebra on the set of elements $X$ will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term by term. From the definition of a Boolean algebra it follows that this structure is (among other things) a lattice, in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$.
Such a partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable; otherwise, they are said to be not comparable.

In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:

    OR              AND
    ∨ | G A U C     ∧ | G A U C
    --+--------     --+--------
    G | G A U C     G | G G G G
    A | A A C C     A | G A G A
    U | U C U C     U | G G U U
    C | C C C C     C | G A U C

It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables:

$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$

A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ ($\alpha \ge \beta$, respectively) and there is no other element between $\alpha$ and $\beta$.

3. The Genetic Code Boolean Algebras

The Boolean algebras of codons are derived explicitly as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example:

CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$(CAU) = GUA $\leftrightarrow$ $\neg$(110110) = 001001

The Hasse diagram for the Boolean algebra derived from the direct product of the Boolean algebra of the four DNA bases given in the above operation tables is shown in the figure. In the Hasse diagram, chains and anti-chains can be located. A subset of a Boolean lattice is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and the same maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain:

GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC

The symmetry of the Hasse diagram reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences lie in different chains of maximal length. In particular, codons with U as the second base will appear in chains of maximal length, whereas codons with A as the second base will not.
For that reason, it is impossible to obtain hydrophobic amino acids whose codons have U in the second position through deductions from hydrophilic amino acids whose codons have A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras integrate a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with an underlying biophysical meaning.

References

[1] Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60.
[2] Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1-14.
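As an aside, the base and codon operations described in sections 2 and 3 can be sketched in code. The encoding follows the text's correspondence G=00, A=01, U=10, C=11, under which OR, AND and NOT become bitwise operations on $\mathbb{Z}_2^2$ (and, for codons, on 6-bit strings); the function names are mine:

```python
# Sketch of the DNA-base Boolean algebra with G as minimum and C as maximum,
# via the 2-bit encoding from the text. The codon algebra C(X) = B(X)^3 is
# the same operations applied base by base.
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def join(x, y):   # OR: least upper bound of two bases
    return DEC[ENC[x] | ENC[y]]

def meet(x, y):   # AND: greatest lower bound of two bases
    return DEC[ENC[x] & ENC[y]]

def codon_op(c1, c2, op):
    """Apply a base operation term by term to two codons."""
    return ''.join(op(a, b) for a, b in zip(c1, c2))

def codon_not(c):
    """Logical negation: complement each base's 2-bit code."""
    return ''.join(DEC[ENC[b] ^ 0b11] for b in c)

# Examples from the text:
print(join('A', 'U'))                 # A v U = C
print(codon_op('CAG', 'AUC', join))   # CAG v AUC = CCC
print(codon_op('ACG', 'UGA', meet))   # ACG ^ UGA = GGG
print(codon_not('CAU'))               # not(CAU) = GUA
```

The comparability relation of the lattice falls out of the same encoding: $\alpha \le \beta$ exactly when `ENC[a] & ENC[b] == ENC[a]`.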
Each letter shown represents a distinct digit, which can vary from zero to nine. $COCA$, $COLA$, $SODA$ are three concatenated numbers. Figure these out from the following relation: $COCA + COLA = SODA$

Based on Omega Krypton's answer, $2C+1=S$, $C+L=D+10$, $A=0$, $O=9$. (Note that $O=9$, so $C+L$ carries.) We also need the digits $C,L,D,S$ to be distinct and between $1$ and $8$ ($0$ and $9$ are taken). If $C=1$ or $C=2$ then, since $D\ge 1$, we would need $L\ge 9$, which is impossible; and $C=4$ would give $S=9$, which is taken. So $C=3$ and $S=7$. Then $L=8$ and $D=1$. That is, $3930+3980=7910$.

We have the following

 COCA
+COLA
-----
 SODA

First, from the ones column, we have $A+A \implies A$, which is only possible if $A=0$. Next, notice something similar in the hundreds place: $O+O \implies O$. Since $0$ is already taken and is the only possibility without a carry, we must have a carry from the tens, and $O=9$ is the only possibility. We will also have a carry into the thousands. Since the result is a 4-digit number, we know that $0 \lt C \le 4$. But:

- $C=4 \implies S=9$, which is already taken by $O$.
- $C=1 \implies L=9$ to achieve a carry, which is taken by $O$.
- $C=2 \implies L\in\{8,9\}$. But $L=9$ is taken, and $L=8 \implies D=0$, also taken.

Thus, $C=3$. Also, we know $S=7$ because the hundreds carry over, and in order for the tens to carry we need $L\ge 7$. But $L=7$ and $L=9$ are taken, leaving only $L=8$, and thus $D=1$. Thus, the solution is COCA+COLA=SODA, $3930+3980=7910$.

Since we know that $A+A \equiv A \pmod {10}$, therefore $A=0$. The hundreds column must receive a carry since $O \neq 0$, therefore $O+O+1 \equiv O \pmod {10}$, therefore $O=9$. We now get $2C+1=S$ and $C+L=D+10$. And since $S<9$, $0<C<4$. Then there are many possibilities... are there any relations I missed?
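The deductions in the answers above can be double-checked by brute force. A small script (my own, not from any of the answers) confirming that the solution is unique under the usual cryptarithm rules (distinct digits, no leading zeros):

```python
# Brute-force search over all digit assignments for COCA + COLA = SODA,
# requiring distinct digits and nonzero leading digits C and S.
from itertools import permutations

solutions = []
for C, O, L, S, D, A in permutations(range(10), 6):
    if C == 0 or S == 0:
        continue
    coca = 1000 * C + 100 * O + 10 * C + A
    cola = 1000 * C + 100 * O + 10 * L + A
    soda = 1000 * S + 100 * O + 10 * D + A
    if coca + cola == soda:
        solutions.append((coca, cola, soda))

print(solutions)   # the single solution 3930 + 3980 = 7910
```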
The term quantum supremacy doesn't necessarily mean that one can run algorithms, as such, on a quantum computer that are impractical to run on a classical computer. It just means that a quantum computer can do something that a classical computer will find difficult to simulate. You might ask (and rightly so) what I might possibly mean by talking about ...

There are several countries that are actively participating in the "Quantum Race", most of which are making significant investments. The estimated annual spending on non-classified quantum-technology research in 2015 broke down like this: United States (360 €m), China (220 €m), Germany (120 €m), Britain (105 €m), Canada (100 €m), Australia (75 €m), Switzerland (...

Not sure if this is strictly what you're looking for, and I don't know that I'd qualify this as "exponential" (I'm also not a computer scientist, so my ability to do algorithm analysis is more or less nonexistent...), but a recent result by Bravyi et al. presented a class of '2D Hidden Linear Function problems' that provably use fewer resources on a quantum ...

There is a continuous set of possible states for $n$ qubits, each of which can be expressed as a superposition of the $2^n$ basis states. Most of these states are highly entangled, and would require highly complex circuits to create (assuming the standard gate set of single-qubit rotations and two- or three-qubit entangling gates). These circuits would ...

Suppose a function $f\colon {\mathbb F_2}^n \to {\mathbb F_2}^n$ has the following curious property: there exists $s \in \{0,1\}^n$ such that $f(x) = f(y)$ if and only if $x + y = s$. If $s = 0$ is the only solution, this means $f$ is 1-to-1; otherwise there is a nonzero $s$ such that $f(x) = f(x + s)$ for all $x$, which, because $2 = 0$, means $f$ is 2-to-...
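As a side note on the last excerpt: the 2-to-1 promise it describes (the one behind Simon's problem) can be realized classically by pairing each input $x$ with $x \oplus s$; a small illustrative sketch (construction mine, not from the excerpt):

```python
# A function f on n-bit strings with f(x) = f(y) iff x XOR y in {0, s}:
# map each pair {x, x^s} to a single representative (its minimum).
# For s = 0 the map degenerates to the identity, i.e. f is 1-to-1.
def make_simon_f(s):
    return lambda x: min(x, x ^ s)

n, s = 4, 0b0110
f = make_simon_f(s)

# Verify the promise exhaustively on all pairs of n-bit inputs:
for x in range(2 ** n):
    for y in range(2 ** n):
        assert (f(x) == f(y)) == ((x ^ y) in (0, s))
print("promise holds for s =", bin(s))
```

The verification works because the pairs $\{x, x \oplus s\}$ are cosets of $\{0, s\}$ and are therefore either identical or disjoint, so two inputs share a representative exactly when they differ by $s$ (or not at all).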
For all we know, it is extraordinarily hard to prove that a problem which can be solved by a quantum computer is classically hard. The reason is that this would solve an important and long-standing open problem in complexity theory, namely whether PSPACE is larger than P. Specifically, any problem which can be solved by a quantum computer in polynomial ...

The term quantum supremacy, as introduced by Preskill in 2012 (1203.5813), can be defined by the following sentence: "We therefore hope to hasten the onset of the era of quantum supremacy, when we will be able to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers." Or, as wikipedia rephrases ...

You are right to recognize the complexity of building the oracle to use it with Grover's search - it is indeed the tricky part of solving the problem, and indeed a lot of sources don't consider this complexity. I like to think about the oracle as a tool to recognize the answer, not to find it. For example, if you're looking to solve a SAT problem, the ...

The problem is with your initial assumption: the oracle for Grover's is based on a function f(value) = 0/1, where 1 indicates that the value meets your search criteria and 0 indicates that it doesn't. This means that you do have to build a new oracle for each different search, but not for each different database. That said, Grover's algorithm and a quantum ...

The complexity class of decision problems efficiently solvable on a classical computer is called BPP (or P, if you don't allow randomness, but these are suspected to be equal anyway). The class of problems efficiently solvable on a quantum computer is called BQP. If a problem exists for which a quantum computer provides an exponential speedup, then this ...

There are a couple variants of the HOG test. "Old HOG" computed the proportion of unique samples whose probability is larger than the median probability of the distribution.
It then compares that proportion to a threshold, e.g. 2/3. If you have enough larger-than-median outputs, you pass the test. "New HOG" instead computes the mean of the probabilities of ...

What does "obtaining samples" mean in this context? The same thing it means in a more classical context. Consider the probability distribution of the possible outcomes of a (possibly biased) coin flip. Sampling from this probability distribution means to flip the coin once and record the result (head or tail). If you sample many times, you can retrieve ...

TL;DR: The two-qubit gates are going by the moniker "Sycamore gates" in the paper, and it appears that they would ideally want to explore more of the $(\phi, \theta)$ phase-space, but for their purposes (of quantum supremacy) their current Sycamore gate is sufficient. The pattern of gates $\mathrm{ABCDCDAB}$ was chosen to avoid "wedges" and maximize/optimize ...

Actually, after having researched the question over the last months, the two answers (one above and one below) are correct, but we can build upon them to get something more up to date. The first answer, however, relies on figures and data which are slightly obsolete, while the source is uncertain (it is impossible to know if the source is McKinsey or The ...

How do we know no better classical algorithm exists? We can know thanks to computational complexity theory, which studies the complexity of solving different problems with different computational models. It is in principle possible to prove that no classical algorithm can solve a given problem efficiently. A common way to do it is using reductions, that is, ...

None. The quantum race is led by those entities capable of building the most powerful quantum computer, and it is enterprises like IBM, Google, Intel, Microsoft and D-Wave that are currently building the most powerful quantum computers. So it is enterprises that are leading this race, not countries.
Whilst I cannot supply a formal proof, the simulation of (the temporal evolution of) a quantum system is believed to be such a case: there is no known better way to do this on a classical computer than in exponential time, but a quantum computer can trivially do it in polynomial time. The idea of such a quantum simulator (see also the wikipedia article) is in ...

A Different Way Of Looking At Linear Algebra: Tensor networks provide a different way of looking at linear algebra, particularly within the context of tensor space systems. Quantum Circuits Are Just Products of Vectors and Operators: A quantum circuit is inherently a tensor space system, as when we have multiple qubits we must think of the whole circuit with ...

While a follow-up question asks for the motivation behind the two-qubit gates used in Sycamore, this question focuses on the random nature of the single-qubit operations used in Sycamore, that is, the gates $\{\sqrt{X},\sqrt{Y},\sqrt{W}=(X+Y)/\sqrt{2}\}$ applied to each of the $53$ qubits between each of the two-qubit gates. Although I agree with @Marsl ...

This answer only addresses the part about the necessity of the randomness of the circuit, because I am by no means familiar with the physical implementation of the qubits at Google and what kind of constraints these impose on the implementation of certain gates. Now, for the randomness: consider the problem of sampling from the output distribution of a ...

Generally speaking, to prove quantum supremacy, you don't need to sample several times from the same unitary/circuit/output probability distribution. If you extract even a single sample from the output probability distribution of a circuit which you know is extremely hard to simulate classically, then you already achieved something that you couldn't do (...

In the Sycamore paper linked in the comments, in the description of FIG. 4, the authors state: ...For each $n$, each instance is sampled with $N_s$ between 0.5 M and 2.5 M...
For $m=20$, obtaining 1M samples on the quantum processor takes 200 seconds, while an equal-fidelity classical sampling would take 10,000 years on 1M cores, and verifying the fidelity ...

As an initial matter, I think the Supplementary Information (linked in some other answers on this site) has a significant amount of discussion on $\mathcal{F}_{XEB}$. However, as I understand it (misunderstandings are my own): there is indeed a concentration of outputs from a random quantum circuit, away from a state wherein the square of the coefficients ...

I think the rough, imprecise answer is "yes, $20$ cycles of the one- and two-qubit gates of Sycamore is sufficient to be able to generate a (Haar measure)-random element of the Hilbert space of dimension $2^{53}$." For example, the diameter (longest shortest path) between any two qubits on Sycamore is $12\lt 20$, thus any two qubits should have a chance to ...

Remember that quantum computers contain, as a subset, the classical logic gates. So your assertion that "classical computers are way better at doing arithmetic operations" is not entirely clear. Unless you mean that the current state-of-the-art quantum computers are not as good as classical computers. That said, I can think of two reasons why we might want ...

Quantum arithmetic circuits are useful in implementing "oracles" in various general quantum algorithms. For example, if we want to apply Grover's algorithm to solve a real traveling-salesman problem, we need to implement the "oracle" inside the quantum circuit. Of course, in this case, we still need the conventional computer to prepare the quantum circuits ...

Grover's algorithm does not have an advantage when searching an unordered database, because encoding the oracle into a circuit requires $\tilde \Omega(n)$ operations. You can prove this with a simple circuit-counting argument. If the circuit had size $O(n^{0.99})$ then there would be fewer distinct circuits than distinct oracles. So the actual operational ...
Grover's algorithm is a (quantum-)circuit-SAT solver. I suppose it could also be a literal black-box solver, but it would only work with black boxes that don't decohere your entangled input state, and I'm having trouble believing that such things exist. I don't know why Grover or anyone else ever called it a database search algorithm. You can of course give ...

This may not exactly answer your question (which I suspect is still very much an open question, and what you're likely to get as answers are opinions), but have you looked at blind quantum computation? See here for another perspective. One way that we can describe that premise is to imagine some company claims to have developed a fabulous universal quantum ...
While this may be a fruitless pursuit of anecdotes, I still ask: what is the strangest (or most blatantly wrong, at least in the eyes of common notation) mathematical notation you have ever seen?

There is an old story about Lang and Mazur: Mazur tried to get Lang's attention by using the worst notation possible. He wrote Xi conjugated over Xi, which looks like: $$\frac{\overline{\Xi}}{\Xi}$$ P.S. You can read the story, narrated by Paul Vojta, in the AMS Notices issue dedicated to Lang, on pages 546-547.

The Landau big-$O$ notation is extremely strange. One writes $$f(x) = O(g(x))$$ which looks like $f$ is the composition of $O$ and $g$, but it is nothing of the sort. Is $O()$ an operator that can be applied to any term? Can I write $$O(x^2) = O(x^3)$$ or $O(x^2) = 2x^2$? Not normally. It is easily confused with a whole family of similar notations for similar notions; computer programmers regularly talk about $O(n)$ algorithms when they mean $\Omega(n)$ algorithms, for example. This is exacerbated because someone decided that instead of using mnemonic abbreviations, it would be a good idea to arbitrarily assign every possible variant of the letter 'o' in naming them. Then, when they ran out of letter O's, they used $\Theta$, seemingly because it looks enough like an O that you might confuse it with one. It is written with an $=$ even though the relation is asymmetric! We have both $x=O(x^2)$ and $x=O(x^3)$ although $O(x^2)$ and $O(x^3)$ are not the same, and we have both $1 = O(x)$ and $x = O(x)$ even though $1\ne x$.
The single worst use of mathematical notation I have ever seen was in a set of lecture notes in which the author wanted to construct a sequence of equivalence relations, each one ($\equiv_n$) derived from the previous one ($\equiv_{n-1}$). After $i_0$ iterations of this procedure, the construction has no more work to do, and the sequence has converged to a certain equivalence relation $\equiv$ with desirable properties. The notes contained this formula: $$\equiv_{i_0+1}=\equiv_{i_0}=\equiv$$ I regret that I did not make a note of the source.

The usage of pi: $\pi$ is a constant, $\pi(x)$ is the prime counting function, and $\prod(x)$ is a product of a sequence.

I took a long time to get used to derivatives of integrals like this $$\frac{\partial}{\partial x}\int_{x_0}^x f(x,y) \, dx$$ There are just too many $x$'s in the same formula, and each one has a different meaning. Nevertheless, it's common to see people write it this way.

From a proof that convergence a.e. implies convergence in measure for $\mu(\Omega)<\infty$: $$\bigcup_{r\geq 1}\bigcap_{n\geq 1}\bigcup_{j\geq n}\left\{|f_j-f|>\frac{1}{r}\right\}=\{\omega:f_j(\omega) \not\to f(\omega)\}$$ Also, labeling graphs of functions as $f(x)$ (which I still end up doing for my undergraduates, who are bored when I mention my reservations about it), $\coprod$, "Random Variable", calling a domain the preimage but switching it to a connected open set in complex talk, etc. etc. etc.

$$\prod_{n = 1}^3 \mathbb{R} = \mathbb{R}^3$$ Edit: Apparently this is common notation. MJD suggests a better example: $$\prod_{n = 1}^3 S \neq S^3$$

How about using pairs of letters like $r,s$ or $u,v$, or $m,n$ when writing on a blackboard? Unless you're extremely careful, the two in any pair get very easily confused with each other. Or when you're told you have two collections of objects (with maybe some additional properties), say $S,X$, and then you have that $a$, or worse $x$, is an element of $S$.
Isn't it so much better to just say $s$ is in $S$, and $x$ is in $X$; isn't an element $s$ in $S$ better than any other letter?
In a simple pendulum system, how does the extensibility/elasticity of the string affect the time period of oscillation? Would it lead to a random or a systematic error? Would the elasticity of the string result in a changing length of the pendulum across the oscillation, hence altering the time period?

For small oscillations the time period of a pendulum is $$ T \approx 2\pi\sqrt\frac{L}{g} $$ where $L$ is the length of the string and $g$ is the gravitational acceleration. An elastic string stretches more the higher the velocity is (i.e. closer to the bottom of the swing), due to the centrifugal force. Therefore, such a string increases the period of the pendulum. For small oscillations and not very elastic strings, the difference from a regular pendulum will not be too large, and the deviations will be systematic. Depending on the initial conditions (especially for large amplitudes and/or potential energy stored in the spring), it will rather be a chaotic system with no periodic behaviour anymore.
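As a rough numerical check of the answer's claim, one can integrate both a rigid pendulum and a stiff elastic ("spring") pendulum and compare the measured periods. The parameters and the RK4 integrator below are illustrative choices of mine, not from the answer:

```python
# Compare a rigid pendulum with a stiff elastic ("spring") pendulum to
# illustrate the systematic period shift. Classical fixed-step RK4.
import math

g, m, L0, k = 9.81, 1.0, 1.0, 1000.0   # k: spring constant of the "string"
theta0, dt, T_end = 0.05, 1e-4, 5.0

def rk4(f, y, dt):
    k1 = f(y)
    k2 = f([a + 0.5 * dt * b for a, b in zip(y, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(y, k2)])
    k4 = f([a + dt * b for a, b in zip(y, k3)])
    return [a + dt / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
            for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4)]

def rigid(y):                    # y = (theta, thetadot)
    th, w = y
    return [w, -(g / L0) * math.sin(th)]

def elastic(y):                  # y = (r, theta, rdot, thetadot), polar coords
    r, th, rd, thd = y
    return [rd, thd,
            r * thd**2 + g * math.cos(th) - (k / m) * (r - L0),
            -(g * math.sin(th) + 2 * rd * thd) / r]

def period(f, y, angle_index):
    """Time between the first two downward zero crossings of the angle."""
    t, crossings = 0.0, []
    prev = y[angle_index]
    while t < T_end and len(crossings) < 2:
        y = rk4(f, y, dt)
        t += dt
        cur = y[angle_index]
        if prev > 0 >= cur:      # downward crossing; linear interpolation
            crossings.append(t - dt * cur / (cur - prev))
        prev = cur
    return crossings[1] - crossings[0]

T_rigid = period(rigid, [theta0, 0.0], 0)
T_elastic = period(elastic, [L0 + m * g / k, theta0, 0.0, 0.0], 1)
print(T_rigid, 2 * math.pi * math.sqrt(L0 / g))   # close for small theta0
print(T_elastic)                                  # slightly longer
```

With a stiff spring the dominant effect is the static stretch $mg/k$, which increases the effective length and hence the period by roughly $\sqrt{1 + mg/(kL_0)}$: a small, reproducible, systematic shift, consistent with the answer.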
Closed orbits of Hamiltonian systems on non-compact prescribed energy surfaces

1. Département de Mathématiques, Faculté des Sciences de Tunis, Campus Universitaire, 2092, Tunis, Tunisia

We consider closed orbits of the Hamiltonian system $$\dot q = H_p (p,q),\quad \dot p=-H_q(p,q),$$ such that $H(p,q)= h$, when the prescribed energy surface $S_h=\{(p,q)\in \mathbf R^N \times \mathbf R^N : H(p,q)=h\}$ is non-compact. In our previous work, we considered the class of singular Hamiltonians $H(p,q)\sim(|p|^\beta /\beta )-(1 /|q|^\alpha)$ with $1 \leq\alpha<\beta$ and $\beta\geq 2$, and proved the existence of generalized (collision) solutions as limits of approximate solutions corresponding to critical points of certain functionals. In this paper, we relate the Morse index of critical points to the number of collisions of the generalized solution via blow-up arguments. In particular, we obtain the existence of a classical (non-collision) solution for $\alpha \in \,]\beta /2,\beta[$ when $N \geq 4$ and for $\alpha \in \,]2\beta/3 ,\beta[$ when $N=3$. As a consequence, for smooth Hamiltonians $H(p,q)\sim|q|^\alpha (|p|^\beta +1)$ with $1< \alpha < \beta$ and $\beta \geq 2$, we get the same existence results, since the two classes of Hamiltonians have the same energy surfaces.

Keywords: Hamiltonian systems, blow-up argument, Morse index, non-compact energy surface, variational methods, periodic solution, approximate solution.

Mathematics Subject Classification: 34C25, 35D10, 37J45, 58E0.

Citation: Morched Boughariou. Closed orbits of Hamiltonian systems on non-compact prescribed energy surfaces. Discrete & Continuous Dynamical Systems - A, 2003, 9 (3): 603-616. doi: 10.3934/dcds.2003.9.603
ISSN: 1078-0947 eISSN: 1553-5231 All Issues

Discrete & Continuous Dynamical Systems - A, February 2019, Volume 39, Issue 2

Abstract: Let
Abstract: We study the asymptotic behavior of a class of non-autonomous non-local fractional stochastic parabolic equation driven by multiplicative white noise on the entire space
Abstract: S-systems are simple examples of power-law dynamical systems (polynomial systems with real exponents). For planar S-systems, we study global stability of the unique positive equilibrium and solve the center problem. Further, we construct a planar S-system with two limit cycles.
Abstract: We show that a continuous abelian action (in particular
Abstract: We compute the limit of the free energy of the mean field generated by the independent Brownian particles
Abstract: In this paper, we study a final value problem for a reaction-diffusion system with time and space dependent diffusion coefficients. In general, the inverse problem of identifying the initial data is not well-posed, and herein the Hadamard instability occurs. Applying a new version of a modified quasi-reversibility method, we propose a stable approximate (regularized) problem. The existence, uniqueness and stability of the corresponding regularized problem are obtained. Furthermore, we also investigate the error estimate and show that the approximate solution converges to the exact solution in
Abstract: In this paper we prove that the heat kernel for
Abstract: In this paper, we study the self-dual Einstein-Maxwell-Higgs equation on compact surfaces. The solution structure depends on the parameter
Abstract: For an where
Abstract: We will give a new proof of a recent result of P. Daskalopoulos, G. Huisken and J.R. King ([
Abstract: We study the problem of rigidity of closures of totally geodesic plane immersions in geometrically finite manifolds containing rank 1 cusps.
We show that the key notion of K-thick recurrence of horocycles fails generically in this setting. This property played a key role in the recent breakthroughs of McMullen, Mohammadi and Oh. Nonetheless, in the setting of geometrically finite groups whose limit sets are circle packings, we derive two density criteria for non-closed geodesic plane immersions, and show that closed immersions give rise to surfaces with finitely generated fundamental groups. We also obtain results on the existence and isolation of proper closed immersions of elementary surfaces.

Abstract: We consider the following Liouville-type PDE, which is related to stationary solutions of the Keller-Segel model for chemotaxis: where $\Omega \subset {\mathbb{R}^2}$ is a smooth bounded domain and $\beta, \rho$ are real parameters. We prove existence of solutions under some algebraic conditions involving $\beta, \rho$. In particular, if $\Omega$ is not simply connected, then we can find solutions for a generic choice of the parameters. We use variational and Morse-theoretical methods.

Abstract: In order to adapt to the differentiable setting a formula for linear response proved by Pollicott and Vytnova in the analytic setting, we show a result of parameter regularity of dynamical determinants of expanding maps of the circle. Linear response can then be expressed in terms of periodic points of the perturbed dynamics.

Abstract: This paper extends the definition of Bowen topological entropy of subsets to Pesin-Pitskel topological pressure for the continuous action of amenable groups on a compact metric space. We introduce the local measure-theoretic pressure of subsets and investigate the relation between local measure-theoretic pressure of Borel probability measures and Pesin-Pitskel topological pressure on an arbitrary subset of a compact metric space.
Abstract: In this paper we introduce the topological entropy and lower and upper capacity topological entropies of a free semigroup action, which extends the notion of the topological entropy of a free semigroup action defined by Bufetov [

Abstract: We prove that the average Lyapunov exponents of asymptotically additive functions have the intermediate value property provided the dynamical system has the periodic gluing orbit property. To be precise, consider a continuous map where

Abstract: Recently a generalization of shifts of finite type to the infinite alphabet case was proposed, in connection with the theory of ultragraph C*-algebras. In this work we characterize the class of continuous shift-commuting maps between these spaces. In particular, we prove a Curtis-Hedlund-Lyndon type theorem and use it to completely characterize continuous, shift-commuting, length-preserving maps in terms of generalized sliding block codes.

Abstract: We are considering partially hyperbolic diffeomorphisms of the torus, with

Abstract: It is well known that the Leslie-Gower prey-predator model (without Allee effect) has a unique globally asymptotically stable positive equilibrium point, so there is no Hopf bifurcation branching from the positive equilibrium point. In this paper we study the Leslie-Gower prey-predator model with a strong Allee effect in the prey, perform a detailed Hopf bifurcation analysis of both the ODE and PDE models, and derive conditions for determining the steady-state bifurcation of the PDE model. Moreover, by the center manifold theory and the normal form method, the direction and stability of Hopf bifurcation solutions are established. Finally, some numerical simulations are presented. Apparently, the Allee effect changes the topological structure of the original Leslie-Gower model.

Abstract: The goal of this work is to verify the fractional Leibniz rule for the Dirichlet Laplacian in the exterior domain of a compact set.
The key point is the proof of gradient estimates for the Dirichlet problem of the heat equation in the exterior domain. Our results describe the time decay rates of the derivatives of solutions to the Dirichlet problem.

Abstract: In this paper, we consider global strong solutions to the Cauchy problem for the compressible Navier-Stokes equations in two spatial dimensions with vacuum as far-field density. It is proved that strong solutions exist globally if the density is bounded above. Furthermore, we show that if the solutions of the two-dimensional (2D) viscous compressible flows blow up, then the mass of the compressible fluid will concentrate on some points in finite time.

Abstract: In this paper, we prove the existence of extremal functions for the best constant of embedding from anisotropic space, allowing some of the Sobolev exponents to be equal to

Abstract: We give necessary and sufficient conditions under which the elliptic equation has nontrivial bounded solutions.

Abstract: We consider the nonlinear Schrödinger equation for

The construction involves explicit functions $u$ close to $U$; we use energy estimates and a compactness argument.

Abstract: In this paper, we study the initial-boundary value problem for infinitely degenerate semilinear pseudo-parabolic equations with logarithmic nonlinearity
Abbreviation: Cat

A category is a structure $\mathbf{C}=\langle C,\circ,\text{dom},\text{cod}\rangle$ of type $\langle 2,1,1\rangle$ such that

$C$ is a class,

$\langle C,\circ\rangle$ is a (large) partial semigroup,

dom and cod are total unary operations on $C$ such that

$\text{dom}(x)$ is a left unit: $\text{dom}(x)\circ x=x$

$\text{cod}(x)$ is a right unit: $x\circ\text{cod}(x)=x$

if $x\circ y$ exists then $\text{dom}(x\circ y)=\text{dom}(x)$ and $\text{cod}(x\circ y)=\text{cod}(y)$

$x\circ y$ exists iff $\text{cod}(x)=\text{dom}(y)$

Remark: The members of $C$ are called morphisms, $\circ$ is the partial operation of composition, dom is the domain and cod is the codomain of a morphism.

The set of objects of $C$ is the set $\mathbf{Obj}C=\{\text{dom}(x)\mid x\in C\}$. For $a,b\in C$ the set of morphisms from $a$ to $b$ is $\text{Hom}(a,b)=\{c\in C\mid\text{dom}(c)=a\text{ and }\text{cod}(c)=b\}$.

Let $\mathbf{C}$ and $\mathbf{D}$ be categories. A morphism from $\mathbf{C}$ to $\mathbf{D}$ is a function $h:C\rightarrow D$ that is a homomorphism: $h(\text{dom}(c))=\text{dom}(h(c))$, $h(\text{cod}(c))=\text{cod}(h(c))$, and $h(c\circ d)=h(c)\circ h(d)$ whenever $c\circ d$ is defined. Morphisms between categories are called functors.

Example 1: The category of functions on sets with composition. In fact, most of the classes of mathematical structures in this database are categories.

$\text{dom}(\text{dom}(x))=\text{dom}(x)=\text{cod}(\text{dom}(x))$

$\text{cod}(\text{cod}(x))=\text{cod}(x)=\text{dom}(\text{cod}(x))$

$\begin{array}{lr} f(1)= &1\\ f(2)= &3\\ f(3)= &11\\ f(4)= &55\\ f(5)= &329\\ f(6)= &2858\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
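The axioms above can be sanity-checked on Example 1 with a small sketch of my own (not part of the database entry): set functions represented as (dom, cod, mapping) triples, with $\circ$ taken in the diagrammatic order the axioms use, so that $\text{dom}(x\circ y)=\text{dom}(x)$ and $\text{cod}(x\circ y)=\text{cod}(y)$.

```python
def compose(x, y):
    """Diagrammatic composition x o y, defined iff cod(x) = dom(y);
    then dom(x o y) = dom(x) and cod(x o y) = cod(y), as in the axioms."""
    (a, b, f), (c, d, g) = x, y
    if b != c:                          # cod(x) != dom(y): x o y undefined
        return None
    return (a, d, {k: g[f[k]] for k in f})

def dom(x):
    """dom(x) as a morphism: the identity on the domain object (left unit)."""
    a, _, _ = x
    return (a, a, {k: k for k in a})

def cod(x):
    """cod(x) as a morphism: the identity on the codomain object (right unit)."""
    _, b, _ = x
    return (b, b, {k: k for k in b})

# Two morphisms between small finite sets (objects are frozensets).
A, B = frozenset({1, 2}), frozenset({1, 2, 3})
f = (A, B, {1: 2, 2: 3})
g = (B, A, {1: 1, 2: 1, 3: 2})

assert compose(dom(f), f) == f          # dom(x) o x = x
assert compose(f, cod(f)) == f          # x o cod(x) = x
assert compose(f, g) == (A, A, {1: 1, 2: 2})
```

Note that `dom` and `cod` return morphisms, matching the signature $\langle 2,1,1\rangle$ in which both are unary operations on $C$ whose values act as left and right units.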
Along the lines of Glen O's answer, this answer attempts to explain the solvability of the problem, rather than provide the answer, which has already been given. Instead of using the meta-knowledge approach, which, as Glen stated, can get hard to follow, I use the range-based approach used in Rubio's answer, and specifically address some of the objections being raised. The argument has been put forward that when Mark fails to answer on the first morning, he gives Rose no new information. This is actually true (sort of; see the last spoiler section of this answer). Rose could have predicted beforehand with certainty that Mark would fail to answer on the first day, so his failure to answer doesn't tell her anything she didn't know. However, that doesn't make the problem unsolvable. To see why, you must understand the following logical axiom: Additional information never invalidates a valid deduction. In other words, if I know that all of the statements $P_1,\dots P_n$ and $Q$ are true, and that $R$ is definitely true if $P_1, \dots P_n$ are true, I can conclude that $R$ is true. My additional knowledge that $Q$ is true, though unnecessary to deduce $R$, doesn't hamper my ability to deduce $R$ from $P_1,\dots P_n$. I will call this rule LUI for "Law of Unnecessary Information." (It may have some other name, but I don't know it, so I'm giving it a new one.) The line of reasoning goes as follows: Let $R,\;M$ be the number of bars on Rose's and Mark's windows, respectively. Before the first question is asked, both Mark and Rose know the following:

$P_1$: Mark knows the value of $M$

$P_2$: Rose knows the value of $R$

$P_3$: $M+R=20 \;\vee \;M+R=18\;$ ($\vee$ means "or", in case you're unfamiliar with the notation)

$P_4$: $M\ge 2\;\wedge\;R \ge2\;$ ($\wedge$ means "and")

$P_5$: Both of them know every statement on this list, and every statement that can be deduced from statements they both know.
To help keep track of $P_5$, I will call a statement $P$ (with some subscript) only if it is known to both prisoners (or neither); thus, $P_5$ becomes "the other prisoner knows every $P$ that I know." Additionally, Mark knows that $M=12$ and Rose knows that $R=8$. Call this knowledge $Q_M$ and $Q_R$, respectively. Finally, as soon as one of them is asked the question for the $k^\text{th}$ time, they both know (and know that one another knows, etc.):

$P_{\leftarrow k}$: The other prisoner could not deduce the value of $M+R$ given the information they already had.

After Mark doesn't answer on the morning of day one, both prisoners can deduce from $P_1, P_3, P_4, P_5,$ and $P_{\leftarrow 2}$ that $M\le 16$ (call this $P_6$). It is true that both prisoners have more information than this about the value of $M$, but LUI tells us that that doesn't invalidate the deduction. It basically just means that Rose won't be surprised when she gets asked the question. She already knows she will be. By the following morning, both prisoners can deduce from $P_1\dots P_6$ and $P_{\leftarrow 3}$ that $4\le R \le 16$ ($P_7$), and that evening, they can deduce from $P_1,\dots P_7$ and $P_{\leftarrow 4}$ that $4 \le M \le 14$ ($P_8$). Again, both prisoners know all of this already. (But the conclusions are still valid by LUI.) On the next day, in a similar manner, they can deduce in the morning that $6 \le R \le 14$ ($P_9$), and in the evening that $6 \le M \le 12$ ($P_{10}$). Here's where things get interesting. Mark can deduce from $P_3$ and $Q_M$ that $R$ is either $6$ or $8$, but $R=6\wedge P_{10} \wedge P_3\implies M+R=18$ and $R=6\wedge P_{10} \wedge P_3\wedge\left[R=6\wedge P_{10} \wedge P_3\implies M+R=18\right]\implies \neg P_{\leftarrow 7}$. When he gets asked the question again on the following morning, he learns that $P_{\leftarrow 7}$ is true, and can thus deduce that $R \neq 6$ and therefore $R=8$ and $M+R=20$.
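The bound-narrowing above is mechanical enough to simulate. Here is a small Python sketch of my own (the puzzle itself involves no code): it tracks the publicly known range of each prisoner's count and reproduces the sequence $M\le 16$, $4\le R\le 16$, $4\le M\le 14$, $6\le R\le 14$, $6\le M\le 12$, with Mark announcing "$20$" on the $7^\text{th}$ question.

```python
SUMS = (18, 20)

def simulate(M, R, max_questions=20):
    """Track the publicly known ranges for each prisoner's bar count.
    Mark is asked on odd-numbered questions (mornings), Rose on even ones."""
    lo = {"M": 2, "R": 2}
    hi = {"M": 18, "R": 18}

    def feasible_sums(count, other):
        # Sums consistent with knowing your own count plus the public bounds.
        return [s for s in SUMS if lo[other] <= s - count <= hi[other]]

    for q in range(1, max_questions + 1):
        asker, other = ("M", "R") if q % 2 == 1 else ("R", "M")
        count = M if asker == "M" else R
        sums = feasible_sums(count, other)
        if len(sums) == 1:
            return q, sums[0]          # the asker announces the total
        # The non-answer becomes common knowledge: the asker's count must be
        # one for which more than one sum was still feasible.
        still = [v for v in range(lo[asker], hi[asker] + 1)
                 if len(feasible_sums(v, other)) > 1]
        lo[asker], hi[asker] = min(still), max(still)
    return None

print(simulate(12, 8))  # -> (7, 20): Mark answers on the 4th morning
```

The same sketch also handles other configurations, e.g. with $M=18$ Mark answers immediately, since $R=0$ would violate $P_4$.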
This is actually the first time in the sequence that a $P_{\leftarrow k}$ provides any more information about the value of $M+R$ than the prisoner already has, but the sequence of irrelevant questions is necessary to establish the deep metaknowledge Glen talks about. In this formulation, all this metaknowledge is encapsulated in $P_5$. When a prisoner is asked a question, $P_5$ says that they can deduce not only $P_{\leftarrow k}$ but also that both of them know $P_{\leftarrow k}$ and, by repeatedly applying $P_5$, that both of them know that both of them know $P_{\leftarrow k}$, and so on. For any $P_{\leftarrow k}$, there is some level of "we both know that we both know" that can't be deduced from $P_1\dots P_5$ and $Q_M$ or $Q_R$ alone. This is the "new information" being "learned" at each stage. Really nothing new is learned until Rose fails to answer on the $3^\text{rd}$ evening, but the sequence of non-answers $P_{\leftarrow k}$ is necessary to provide the deductive path to $P_{\leftarrow 7}$. In fact, viewing it another way, the fact that not answering provides "no new information" (and in fact doesn't provide any new direct information about the number of bars) is exactly why the puzzle is solvable, because it says that the previous answer provided no new information. Because they both know that the number of bars is either $18$ or $20$ (only two possibilities), any new information about the number of bars (eliminating a possibility) will allow them to give the answer; thus, not answering sends the message "I have not yet received any new information," which, eventually, is new information for the other prisoner. The "conversation" the prisoners have amounts to this:

Mark: I don't know how many bars there are.

Rose: I already knew that (that you wouldn't know).

Mark: I already knew that (that you'd know I wouldn't know).

Rose: I already knew THAT (etc.)

Mark: I already knew THAT.

Rose: I already knew $\mathbf {THAT}$.
Mark (To the Evil Logician): There are $20$ bars.

But how, you may ask, can a series of messages that provide their recipient with no new information lead to one that does? Simple! The non-answers provide no new information to the recipient, but they do provide information to the sender. If I tell you that I'm secretly a ninja, you might already know that, but even if you do, knowledge is gained, because by telling you, I give myself the knowledge that you know I'm a ninja, and that you know I know you know I'm a ninja, etc. Thus, each message sent, even if the recipient already knows it, provides the sender with information. After several such questions, this is enough information that a message recipient can draw conclusions based on the sender's inability to draw any conclusions from the information they know the sender has. Ok, fine, you might say, but what, exactly, is learned when Mark fails to answer on the first morning, and how can you prove this was not already known? Great question, thanks for asking. You see... At this point, we have to resort to metaknowledge (I know she knows I know...) even though it can get confusing. However, I'll break it down in such a way as to hopefully satisfy anyone who still objects, by showing that there is (meta)knowledge available after Mark fails to answer the first question that was not available before he did so. Specifically, after failing to answer the first question, Mark gains the information that Rose knows that Mark knows that Rose knows that Mark knows that Rose knows that Mark's window has less than $18$ bars. Now, that's a mouthful, so let's break it down into parts:

$R_0$: Mark's window doesn't have $18$ bars

$M_1$: Rose knows $R_0$

$R_2$: Mark knows $M_1$

$M_3$: Rose knows $R_2$

$R_4$: Mark knows $M_3$

$M_5$: Rose knows $R_4$

My claim is that A) before he fails to answer on the first morning, Mark does not know $M_5$, and B) afterwards, he does.
Let's examine A) first: To show that Mark doesn't know $M_5$ beforehand, we work backwards from $R_0$. In order for Rose to know that Mark's window doesn't have $18$ bars, her window would have to have more than $2$ bars. Since the rules (and numbers of bars) imply that they both have an even number of bars, in order for Mark to know $M_1$, he would have to know that Rose's window has at least $4$ bars. The only way for him to know that is if his window has less than $16$ bars. Thus, for Rose to know $R_2$, she must know that Mark has no more than $14$ bars, which requires that she have at least $6$ bars. For Mark to know $M_3$, then, he must have no more than $12$ bars, so for Rose to know $R_4$ she must have at least $8$ bars, and for Mark to know $M_5$ he must have no more than $10$ bars. But he does have more than $10$ bars, so he doesn't know $M_5$ beforehand. To see why Mark must know $M_5$ after he fails to answer the question, we must realize that they both know the rules of the game, and one of the rules of the game is that they both know the rules of the game. This creates an infinite loop of meta-knowledge, meaning that they both know that they both know that they both know... the rules, no matter how many times you repeat "they both know". This infinite-depth meta-knowledge extends to anything that can be deduced from the rules. If Mark's window had $18$ bars, he could deduce from the rules that Rose must have $2$, and the tower must have $20$ in total. Because he doesn't answer, Rose will be asked, and when she is, she will know that he couldn't deduce the answer, and therefore has less than $18$ bars. Because this is all deduced directly from the rules, rather than the private knowledge that either prisoner has, it inherits the infinite meta-knowledge of the rules, and Mark knows $M_5$. So, Mark learns $M_5$. Does Rose learn anything?
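The backward chain above reduces to a two-line recurrence: Mark knows "$R\ge r$" iff $18-M\ge r$ (his worst case is the sum $18$), and Rose knows "$M\le m$" iff $20-R\le m$ (her worst case is the sum $20$). A small sketch of my own (the `thresholds` helper and its labels beyond $M_5$ are mine) computes the thresholds:

```python
def thresholds(depth):
    """Thresholds at which each statement in the chain above holds, assuming
    M + R is 18 or 20 and both counts are even and at least 2.
    Odd levels M_k are statements Rose knows; even levels R_k, statements
    Mark knows ("R_6" stands for "Mark knows M_5")."""
    out, r, m = [], 4, None            # M_1 ("Rose knows R_0") holds iff R >= 4
    out.append(("M_1 holds iff R >=", r))
    for k in range(2, depth + 1):
        if k % 2 == 0:                 # R_k = "Mark knows M_{k-1}"
            m = 18 - r
            out.append((f"R_{k} holds iff M <=", m))
        else:                          # M_k = "Rose knows R_{k-1}"
            r = 20 - m
            out.append((f"M_{k} holds iff R >=", r))
    return out

for line in thresholds(6):
    print(*line)
```

With $M=12$, the last line ("Mark knows $M_5$" iff $M\le 10$) shows Mark cannot know $M_5$ beforehand, matching claim A.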
It's tempting to think that she doesn't, because she can predict in advance that Mark won't answer and therefore, one might think, she can draw in advance any conclusions that could be drawn from his not answering. However, as was shown above, by not answering, Mark learns $M_5$. Not answering changes the state of Mark's knowledge. This means that Rose's ability to predict Mark's behavior doesn't prevent her from gaining new information. She can predict in advance both what he will do (not answer) and what he will learn when he does it ($M_5$), but since he doesn't learn $M_5$ until he actually declines to answer, his failure to answer provides her with the information that he knows $M_5$. Since he didn't know $M_5$ beforehand, the knowledge that he does is by definition new information for Rose. Rose already knew that she now would know this, but until Mark doesn't answer, she doesn't actually know it (because it isn't true). By following this prediction logic out, it's possible to show that Rose knows (at the start) that Mark will be unable to answer until the $4^\text{th}$ morning, but not whether or not he'll be able to answer then. Mark, meanwhile, knows that Rose will be unable to answer until the $3^\text{rd}$ evening, but not whether or not she'll be able to answer then. As soon as one of the prisoners observes an event that they were unable to predict at the beginning, they can deduce from it something they didn't know about the state of the other's knowledge. Since the only hidden information is how many bars are in the other prisoner's window, and they know that it must be one of two values, learning new information about that allows them to eliminate one of the values and find the correct result.
ISSN: 1078-0947 eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A, April 2014, Volume 34, Issue 4
Special Issue on Optimal Transport and Applications

Abstract: Optimal mass transportation can be traced back to Gaspard Monge's paper in 1781. There, for engineering/military reasons, he was studying how to minimize the cost of transporting a given distribution of mass from one location to another, giving rise to a challenging mathematical problem. This problem, an optimization problem in a certain class of maps, had to wait for almost two centuries before seeing significant progress (starting with Leonid Kantorovich in 1942), even on the very fundamental question of the existence of an optimal map. Due to its connections with several other areas of pure and applied mathematics, optimal transportation has received much renewed attention in the last twenty years. Indeed, it has become an increasingly common and powerful tool at the interface between partial differential equations, fluid mechanics, geometry, probability theory, and functional analysis. At the same time, it has led to significant developments in applied mathematics, with applications ranging from economics, biology, meteorology, and design to image processing. Because of the success and impact that this subject is still receiving, we decided to create a special issue collecting selected papers from leading experts in the area. For more information please click the "Full Text" above.

Abstract: Exploiting recent regularity estimates for the Monge-Ampère equation, under some suitable assumptions on the initial data we prove global-in-time existence of Eulerian distributional solutions to the semigeostrophic equations in 3-dimensional convex domains.

Abstract: In this paper, we generalize a result by Alexandrov on the Gauss curvature prescription for Euclidean convex bodies. We prove an analogous result for hyperbolic orbifolds.
In addition to the duality theory for convex sets, our main tool comes from optimal mass transport.

Abstract: We consider the very simple Navier-Stokes model for compressible fluids in one space dimension, where there is no temperature equation and both the pressure and the viscosity are proportional to the density. We show that the resolution of this Navier-Stokes system can be reduced, through the crucial intervention of a monotonic rearrangement operator, to the time discretization of a very elementary differential equation with noise. In addition, our result can be easily extended to a related Navier-Stokes-Poisson system.

Abstract: In this paper a model problem for the location of a given number $N$ of points in a given region $\Omega$ and with a given resource density $\rho(x)$ is considered. The main difference between the usual location problems and the present one is that, in addition to the location cost, an extra routing cost is considered, which takes into account the fact that the resources have to travel between the locations on a point-to-point basis. The limit problem as $N\to\infty$ is characterized and some applications to airfreight systems are shown.

Abstract: We prove uniqueness in the class of integrable and bounded nonnegative solutions in the energy sense to the Keller-Segel (KS) chemotaxis system. Our proof works for the fully parabolic KS model, it includes the classical parabolic-elliptic KS equation as a particular case, and it can be generalized to nonlinear diffusions in the particle density equation as long as the diffusion satisfies the classical McCann displacement convexity condition. The strategy uses quasi-Lipschitz estimates for the chemoattractant equation and the above-the-tangent characterizations of displacement convexity. As a consequence, the displacement convexity of the free energy functional associated to the KS system is obtained from its evolution for bounded integrable initial data.
Abstract: A usual approach for proving the existence of an optimal transport map, be it in ${\mathbb R}^d$ or on more general manifolds, involves a regularity condition on the transport cost (the so-called Left Twist condition, i.e. the invertibility of the gradient in the first variable) as well as the fact that any optimal transport plan is supported on a cyclically-monotone set. Under the classical assumption that the initial measure does not give mass to sets with $\sigma$-finite $\mathcal{H}^{d-1}$ measure and a stronger regularity condition on the cost (the Strong Left Twist), we provide a short and self-contained proof of the fact that any feasible transport plan (optimal or not) satisfying a $c$-monotonicity assumption is induced by a transport map. We also show that the usual costs induced by Tonelli Lagrangians satisfy the Strong Left Twist condition we propose.

Abstract: We consider discrete porous medium equations of the form $\partial_t\rho_t = \Delta \phi(\rho_t)$, where $\Delta$ is the generator of a reversible continuous time Markov chain on a finite set $\boldsymbol{\chi}$, and $\phi$ is an increasing function. We show that these equations arise as gradient flows of certain entropy functionals with respect to suitable non-local transportation metrics. This may be seen as a discrete analogue of the Wasserstein gradient flow structure for porous medium equations in $\mathbb{R}^n$ discovered by Otto. We present a one-dimensional counterexample to geodesic convexity and discuss Gromov-Hausdorff convergence to the Wasserstein metric.

Abstract: Some quantum fluid models are written as the Lagrangian flow of mass distributions and their geometric properties are explored. The first model includes magnetic effects and leads, via the Madelung transform, to the electromagnetic Schrödinger equation in the Madelung representation. It is shown that the Madelung transform is a symplectic map between Hamiltonian systems.
The second model is obtained from the Euler-Lagrange equations with friction induced from a quadratic dissipative potential. This model corresponds to the quantum Navier-Stokes equations with density-dependent viscosity. The fact that this model possesses two different energy-dissipation identities is explained by the definition of the Noether currents.

Abstract: We present an approach for proving uniqueness of ODEs in the Wasserstein space. We give an overview of basic tools needed to deal with Hamiltonian ODEs in the Wasserstein space and show various continuity results for value functions. We discuss a concept of viscosity solutions of Hamilton-Jacobi equations in metric spaces and in some cases relate it to viscosity solutions in the sense of differentials in the Wasserstein space.

Abstract: We prove that every one-dimensional real Ambrosio-Kirchheim current with zero boundary (i.e. a cycle) in a lot of reasonable spaces (including all finite-dimensional normed spaces) can be represented by a Lipschitz curve parameterized over the real line through a suitable limit of Cesàro means of this curve over a subsequence of symmetric bounded intervals (viewed as currents). It is further shown that in such spaces, if a cycle is indecomposable, i.e. does not contain "nontrivial" subcycles, then it can be represented again by a Lipschitz curve parameterized over the real line through a limit of Cesàro means of this curve over every sequence of symmetric bounded intervals; in other words, such a cycle is a solenoid.

Abstract: Symmetric Monge-Kantorovich transport problems involving a cost function given by a family of vector fields were used by Ghoussoub-Moameni to establish polar decompositions of such vector fields into $m$-cyclically monotone maps composed with measure-preserving $m$-involutions ($m\geq 2$).
In this note, we relate these symmetric transport problems to the Brenier solutions of the Monge and Monge-Kantorovich problems, as well as to the Gangbo-Święch solutions of their multi-marginal counterparts, both of which involve quadratic cost functions.

Abstract: We prove that the Abresch-Gromoll inequality holds on infinitesimally Hilbertian $CD(K,N)$ spaces in the same form as the one available on smooth Riemannian manifolds.

Abstract: We study the optimal transportation mapping $\nabla \Phi : \mathbb{R}^d \mapsto \mathbb{R}^d$ pushing forward a probability measure $\mu = e^{-V} \, dx$ onto another probability measure $\nu = e^{-W} \, dx$. Following a classical approach of E. Calabi, we introduce the Riemannian metric $g = D^2 \Phi$ on $\mathbb{R}^d$ and study spectral properties of the metric-measure space $M=(\mathbb{R}^d, g, \mu)$. We prove, in particular, that $M$ admits a non-negative Bakry-Émery tensor provided both $V$ and $W$ are convex. If the target measure $\nu$ is the Lebesgue measure on a convex set $\Omega$ and $\mu$ is log-concave, we prove that $M$ is a $CD(K,N)$ space. Applications of these results include some global dimension-free a priori estimates of $\| D^2 \Phi\|$. With the help of comparison techniques on Riemannian manifolds and probabilistic concentration arguments, we prove some diameter estimates for $M$.

Abstract: This article is aimed at presenting the Schrödinger problem and some of its connections with optimal transport. We hope that it can be used as a basic user's guide to the Schrödinger problem. We also give a survey of the related literature. In addition, some new results are proved.

Abstract: In order to observe growth phenomena in biology where dendritic shapes appear, we propose a simple model where a given population evolves, fed by a diffusing nutrient, but is subject to a density constraint.
The particles (e.g., cells) of the population spontaneously stay passive at rest, and only move in order to satisfy the constraint $\rho\leq 1$, by choosing the minimal correction velocity so as to prevent overcongestion. We treat this constraint by means of projections in the space of densities endowed with the Wasserstein distance $W_2$, defined through optimal transport. This allows us to provide an existence result and suggests some numerical computations, in the same spirit as what the authors did for crowd motion (but with extra difficulties, essentially due to the fact that the total mass may increase). The numerical simulations show, according to the values of the parameters and in particular of the diffusion coefficient of the nutrient, the formation of dendritic patterns in the space occupied by cells.

Abstract: This note exposes the differential topology and geometry underlying some of the basic phenomena of optimal transportation. It surveys basic questions concerning Monge maps and Kantorovich measures: existence and regularity of the former, uniqueness of the latter, and estimates for the dimension of its support, as well as the associated linear programming duality. It shows that the answers to these questions concern the differential geometry and topology of the chosen transportation cost. It also establishes new connections, some heuristic and others rigorous, based on the properties of the cross-difference of this cost, and its Taylor expansion at the diagonal.

Abstract: We prove uniqueness and Monge solution results for multi-marginal optimal transportation problems with a certain class of surplus functions; this class arises naturally in multi-agent matching problems in economics. This result generalizes a seminal result of Gangbo and Święch [17].
Of particular interest, we show that this also yields a partial generalization of the Gangbo-Święch result to manifolds; alternatively, we can think of this as a partial extension of McCann's theorem for quadratic costs on manifolds to the multi-marginal setting [23]. We also show that the class of surplus functions considered here neither contains, nor is contained in, the class of surpluses studied in [27], another generalization of Gangbo and Święch's result.

Abstract: We prove that the linear "heat" flow in an $RCD (K, \infty)$ metric measure space $(X, d, m)$ satisfies a contraction property with respect to every $L^p$-Kantorovich-Rubinstein-Wasserstein distance, $p\in [1,\infty]$. In particular, we obtain a precise estimate for the optimal $W_\infty$-coupling between two fundamental solutions in terms of the distance of the initial points. The result is a consequence of the equivalence between the $RCD (K, \infty)$ lower Ricci bound and the corresponding Bakry-Émery condition for the canonical Cheeger-Dirichlet form in $(X, d, m)$. The crucial tool is the extension to the non-smooth metric measure setting of Bakry's argument, which allows one to improve the commutation estimates between the Markov semigroup and the Carré du Champ $\Gamma$ associated to the Dirichlet form. This extension is based on a new a priori estimate and a capacitary argument for regular and tight Dirichlet forms that are of independent interest.

Abstract: We develop the fundamentals of a local regularity theory for prescribed Jacobian equations which extends the corresponding results for optimal transportation equations. In this theory the cost function is extended to a generating function through dependence on an additional scalar variable. In particular we recover in this generality the local regularity theory for potentials of Ma, Trudinger and Wang, along with the subsequent development of the underlying convexity theory.
Abstract: In this paper, we introduce a multiple-sources version of the landscape function which was originally introduced by Santambrogio in [10]. More precisely, we study landscape functions associated with a transport path between two atomic measures of equal mass. We also study $p$-harmonic functions on a directed graph for nonpositive $p$. We show an equivalence relation between landscape functions associated with an $\alpha$-transport path and $p$-harmonic functions on the underlying graph of the transport path for $p=\alpha/(\alpha-1)$, which is the conjugate of $\alpha$. Furthermore, we prove the Lipschitz continuity of a landscape function associated with an optimal transport path on each of its connected components.
Materials in Meep
From AbInitio
Revision as of 21:05, 20 July 2012

:<math> = \left( 1 + \frac{i \cdot \sigma_D(\mathbf{x})}{2\pi f} \right) \left[ \varepsilon_\infty(\mathbf{x}) + \sum_n \frac{\sigma_n(\mathbf{x}) \cdot f_n^2 }{f_n^2 - f^2 - if\gamma_n/2\pi} \right] ,</math>

where <math>\sigma_D</math> is the electric conductivity, <math>\omega_n</math> and <math>\gamma_n</math> are user-specified constants (or actually, the numbers that one specifies are <math>f_n = \omega_n / 2\pi</math> and <math>\gamma_n / 2\pi</math>), and <math>\sigma_n(\mathbf{x})</math> is a user-specified function of position giving the strength of the ''n''-th resonance. The σ parameters can be anisotropic (real-symmetric) tensors, while the frequency-independent term <math>\varepsilon_\infty</math> can be an arbitrary real-symmetric tensor as well.
The material structure in Maxwell's equations is determined by the relative permittivity ε(x) and the relative permeability μ(x). However, ε is not only a function of position. In general, it also depends on frequency (material dispersion) and on the electric field E itself (nonlinearity). It may also depend on the orientation of the field (anisotropy). Material dispersion, in turn, is generally associated with absorption loss in the material, or possibly gain. All of these effects can be simulated in Meep, with certain restrictions. Similarly for the relative permeability μ(x), for which dispersion, nonlinearity, and anisotropy are all supported as well. In this section, we describe the form of the equations and material properties that Meep can simulate. The actual interface with which you specify these properties is described in the Meep reference.

Material dispersion

Physically, material dispersion arises because the polarization of the material does not respond instantaneously to an applied field E, and this is essentially the way that it is implemented in FDTD. In particular, the constitutive relation is expanded to <math>\mathbf{D} = \varepsilon_\infty \mathbf{E} + \mathbf{P}</math>, where ε_∞ is the instantaneous dielectric function (the infinite-frequency response) and P is the remaining frequency-dependent polarization density in the material. P, in turn, has its own time-evolution equation, and the exact form of this equation determines the frequency-dependence ε(ω). [Note that Meep's definition of ω uses a sign convention exp(−iωt) for the time dependence.]
In particular, Meep supports any material dispersion of the form of a sum of harmonic resonances, plus a term from the frequency-independent electric conductivity:

<math>\varepsilon(\mathbf{x},f) = \left( 1 + \frac{i \cdot \sigma_D(\mathbf{x})}{2\pi f} \right) \left[ \varepsilon_\infty(\mathbf{x}) + \sum_n \frac{\sigma_n(\mathbf{x}) \cdot f_n^2 }{f_n^2 - f^2 - if\gamma_n/2\pi} \right] ,</math>

where σ_D is the electric conductivity, ω_n and γ_n are user-specified constants (or actually, the numbers that one specifies are f_n = ω_n/2π and γ_n/2π), and σ_n(x) is a user-specified function of position giving the strength of the n-th resonance. The σ parameters can be anisotropic (real-symmetric) tensors, while the frequency-independent term ε_∞ can be an arbitrary real-symmetric tensor as well. This corresponds to evolving P via a set of auxiliary polarizations P_n with <math>\mathbf{P} = \sum_n \mathbf{P}_n</math>. That is, we must store and evolve a set of auxiliary fields along with the electric field in order to keep track of the polarization P. Essentially any ε(ω) could be modeled by including enough of these polarization fields — Meep allows you to specify any number of these, limited only by computer memory and time (which must increase with the number of polarization terms you require).

Note that the conductivity σ_D corresponds to an imaginary part of ε given by (not including the harmonic-resonance terms) ε_∞σ_D/ω. When you specify frequency in Meep units, however, you are specifying f without the 2π, so the imaginary part of ε is ε_∞σ_D/2πf.

Numerical stability

If you specify a Lorentzian resonance ω_n at too high a frequency relative to the time discretization Δt, the simulation becomes unstable. Essentially, the problem is that you are trying to model a resonance that oscillates too fast compared with the time discretization for the discretization to work properly. If this happens, you have three options: increase the resolution (which increases the resolution in both space and time), decrease the Courant factor (which decreases Δt compared to Δx), or use a different model function for your dielectric response. Roughly speaking, the equation becomes unstable for ω_n Δt / 2 > 1.
(Note that, in Meep frequency units, you specify f_n = ω_n/2π, so this quantity should be less than 1/(πΔt).) A future version of Meep will check a necessary stability criterion automatically and halt with an error message if it is violated.

Loss and gain

If γ_n above is nonzero, then the dielectric function ε(ω) becomes complex, where the imaginary part is associated with absorption loss in the material if it is positive, or gain if it is negative. Alternatively, a dissipation loss or gain may be added by a positive or negative conductivity, respectively; this is often convenient if you only care about the imaginary part of ε in a narrow bandwidth, and is described in detail below.

If you look at Maxwell's equations, then dP/dt plays exactly the same role as a current J. Just as E ⋅ J is the rate of change of mechanical energy (the power expended by the electric field on moving the currents), therefore, the rate at which energy is lost to absorption is given by:

absorption rate = E ⋅ dP/dt

Meep can keep track of this energy for the Lorentzian polarizability terms (but not for the conductivity terms), which for gain gives the amount of energy expended in amplifying the field. (This feature may be removed in a future Meep version.)

Conductivity and complex ε

Often, you only care about the absorption loss in a narrow bandwidth, where you just want to set the imaginary part of ε (or μ) to some known experimental value, in the same way that you often just care about setting a dispersionless real ε that is the correct value in your bandwidth of interest. One approach to this problem would be allowing you to specify a constant (frequency-independent) imaginary part of ε, but this has the disadvantage of requiring the simulation to employ complex fields (doubling the memory and time requirements), and also tends to be numerically unstable.
Instead, the approach in Meep is for you to set the conductivity σ_D (or σ_B for an imaginary part of μ), chosen so that the imaginary part of ε is the correct value at your frequency ω of interest. (Note that, in Meep, you specify f = ω/2π instead of ω for the frequency, however, so you need to include the factor of 2π when computing the corresponding imaginary part of ε!) Conductivities can be implemented with purely real fields, so they are not nearly as expensive as implementing a frequency-independent complex ε or μ.

For example, suppose you want to simulate a medium with ε = 3.4 + 0.101i at a frequency 0.42 (in your Meep units), and you only care about the material in a narrow bandwidth around this frequency (i.e. you don't need to simulate the full experimental frequency-dependent permittivity). Then, in Meep, you could use (make medium (epsilon 3.4) (D-conductivity (/ (* 2 pi 0.42 0.101) 3.4))); i.e. ε_∞ = 3.4 and σ_D = 2π(0.42)(0.101)/3.4.

Note: the "conductivity" in Meep is slightly different from the conductivity you might find in a textbook, because (for computational convenience) it appears as σ_D D in our Maxwell equations rather than the more-conventional σ E; this just means that our definition is different from the usual electric conductivity by a factor of ε. Also, just as Meep uses the dimensionless relative permittivity for ε, it uses nondimensionalized units of 1/a (where a is your unit of distance) for the conductivities σ_{D,B}. If you have the electric conductivity σ in SI units and want to convert to σ_D in Meep units, you can simply use the formula:

σ_D = (σ / (ε_r ε_0)) ⋅ (a / c)

(where a is your unit of distance in meters, c is the vacuum speed of light in m/s, ε_0 is the SI vacuum permittivity, and ε_r is the real relative permittivity).

Nonlinearity

In general, ε can be changed anisotropically by the E field itself, with:

<math>\Delta\varepsilon_{ij} = \sum_{k} \chi^{(2)}_{ijk} E_k + \sum_{k\ell} \chi^{(3)}_{ijk\ell} E_k E_\ell ,</math>

where ij is the index of the change in the 3×3 ε tensor and the χ terms are the nonlinear susceptibilities. The χ^{(2)} sum is the Pockels effect and the χ^{(3)} sum is the Kerr effect.
(If the above expansion is frequency-independent, then the nonlinearity is instantaneous; more generally, Δε would depend on some average of the fields at previous times.) Currently, Meep supports instantaneous, isotropic Pockels and Kerr nonlinearities, corresponding to a frequency-independent χ^{(2)} and χ^{(3)}, respectively. Here, "diag(E)" indicates the diagonal 3×3 matrix with the components of E along the diagonal.

Normally, for nonlinear systems you will want to use real fields E. (This is usually the default. However, Meep uses complex fields if you have Bloch-periodic boundary conditions with a non-zero Bloch wavevector k, or in cylindrical coordinates with m ≠ 0. In the C++ interface, real fields must be explicitly specified.) For complex fields in nonlinear systems, the physical interpretation of the above equations is unclear because one cannot simply obtain the physical solution by taking the real part any more. In particular, Meep simply defines the meaning of the nonlinearity for complex fields as follows: the real and imaginary parts of the fields do not interact nonlinearly. That is, the above equation should be taken to hold for the real and imaginary parts (of E and D) separately (e.g. |E|² is the squared magnitude of the real part of E when computing the real part of D, and conversely for the imaginary part).

Note: The behavior for complex fields was changed for Meep 0.10. Also, in Meep 0.9 there was a bug: when you specified χ^{(3)} in the interface, you were actually specifying a different quantity. This was fixed in Meep 0.10. For a discussion of how to relate χ^{(3)} in Meep to experimental Kerr coefficients, see Units and nonlinearity in Meep.

Magnetic permeability μ

All of the above features that are supported for the electric permittivity ε are also supported for the magnetic permeability μ. That is, Meep supports μ with dispersion from (magnetic) conductivity and Lorentzian resonances, as well as magnetic χ^{(2)} and χ^{(3)} nonlinearities.
The description of these is exactly the same as above, so we won't repeat it here — just take the above descriptions and replace ε, E, D, and σ D by μ, H, B, and σ B, respectively.
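The narrow-bandwidth conductivity recipe above can be sanity-checked with a few lines of Python (a minimal sketch outside Meep itself; the numbers are the worked example's ε = 3.4 + 0.101i at f = 0.42):

```python
import math

# Worked example from the text: match eps = 3.4 + 0.101i at Meep frequency f = 0.42
eps_inf = 3.4
im_eps = 0.101
f = 0.42

# With no Lorentzian terms, the dispersion formula reduces to
# eps(f) = eps_inf * (1 + i*sigma_D/(2*pi*f)), so pick sigma_D accordingly:
sigma_D = 2 * math.pi * f * im_eps / eps_inf   # the value passed to D-conductivity

eps = eps_inf * (1 + 1j * sigma_D / (2 * math.pi * f))
print(eps)  # ~ (3.4 + 0.101j) at the design frequency
```

This reproduces the value (/ (* 2 pi 0.42 0.101) 3.4) in the Scheme snippet above; away from f = 0.42 the imaginary part drifts, which is why the recipe is only valid in a narrow bandwidth.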
Let $A$ be a non-empty subset of $\mathbb{R}$. Define the difference set to be $A_d := \{b-a\;|\;a,b \in A \text{ and } a < b \}$ If $A$ is infinite and bounded then $\inf{A_d} = 0$. Since $a < b$ we have $b - a > 0$. Thus zero is a lower bound for $A_d$ and $\inf(A_d) \geq 0$. I then want to show that if $\inf(A_d) = \epsilon > 0$ and $A$ is bounded, then $A$ is finite. Let $\inf(A) = \beta$ and $\sup(A) = \alpha$. Then there can be at most $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ real numbers in $A$. Suppose that there are more than $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ numbers in $A$. Since $b - a \geq \epsilon$ for all $a, b \in A$ with $a < b$, listing more than $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ elements of $A$ in increasing order gives $\alpha \geq (\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)\epsilon + \beta$. However this is a contradiction, since $\lfloor x \rfloor + 1 > x$ gives $(\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)\epsilon + \beta > (\frac{\alpha - \beta}{\epsilon})\epsilon + \beta = \alpha$. Thus the cardinality of $A$ must be less than or equal to $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ and thus finite. We have shown that if $\inf(A_d) > 0$ and $A$ is bounded then $A$ cannot be infinite. One question I have is whether this would be enough to prove the theorem. I'm sure that there are more efficient ways to formulate the above argument. I feel like this is a good opportunity for the pigeonhole principle but I don't really know how to "invoke" it. Critique is welcomed and appreciated.
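The counting bound in the argument can be checked numerically; a small Python sketch (the helper names are mine) of $|A| \le \lfloor \frac{\alpha-\beta}{\epsilon} \rfloor + 1$:

```python
import math

def bound(beta, alpha, eps):
    """Maximum size of a set A in [beta, alpha] whose pairwise gaps are >= eps."""
    return math.floor((alpha - beta) / eps) + 1

def packed(beta, alpha, eps):
    """Greedy packing with consecutive gaps exactly eps; this attains the bound."""
    pts, x = [], beta
    while x <= alpha + 1e-12:   # small slack for float round-off
        pts.append(x)
        x += eps
    return pts

A = packed(0.0, 1.0, 0.25)
print(len(A), bound(0.0, 1.0, 0.25))  # 5 5
```

In particular, the bound blows up as ε → 0, which is exactly why an infinite bounded set must have inf A_d = 0.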
The equation I'm referring to was posed by Lagrange and states that if $S\subset \mathbb{R^3}$ is a surface with zero mean curvature, i.e. $H=0, \: \: \forall p \in S$, and $S$ is given as the graph of a function $z(x,y)$, then $$(1+z_x^2)z_{yy}-2z_xz_yz_{xy}+(1+z_y^2)z_{xx}=0$$ I'm looking for a proof of this. I thought I would find it in Do Carmo's Differential Geometry of Curves and Surfaces but it seems not to be in there.

I'm going to change $x$ and $y$ to $u$ and $v$, respectively, so that I can call my patch $\textbf{x}$ without confusion. Parameterize the surface via the patch $\textbf{x}$ given by $(u,v)\mapsto (u,v,z(u,v)).$ We compute $$\textbf{x}_u=(1,0,z_u),$$ $$\textbf{x}_v=(0,1,z_v),$$ and obtain the components of the first fundamental form as \begin{align*}E&=\textbf{x}_u\cdot\textbf{x}_u=1+z_u^2\\F&=\textbf{x}_u\cdot \textbf{x}_v=z_uz_v\\ G&=\textbf{x}_v\cdot \textbf{x}_v=1+z_v^2.\end{align*} The unit normal is given by $$U=\frac{\textbf{x}_u\times \textbf{x}_v}{\sqrt{EG-F^2}}=\frac{(-z_{u},-z_{v},1)}{\sqrt{1+z_u^2+z_v^2}}.$$ Next, we find that $$\textbf{x}_{uu}=(0,0,z_{uu}),$$ $$\textbf{x}_{uv}=(0,0,z_{uv}),$$ and $$\textbf{x}_{vv}=(0,0,z_{vv}).$$ We can now compute the quantities \begin{align*} L&=\textbf{x}_{uu}\cdot U=\frac{z_{uu}}{\sqrt{1+z_u^2+z_v^2}}\\ M&=\textbf{x}_{uv}\cdot U=\frac{z_{uv}}{\sqrt{1+z_u^2+z_v^2}}\\ N&=\textbf{x}_{vv}\cdot U=\frac{z_{vv}}{\sqrt{1+z_u^2+z_v^2}}.
\end{align*} Finally, we can just use the formula $$H=\frac{GL+EN-2FM}{2(EG-F^2)}=\frac{(1+z_v^2)z_{uu}+(1+z_u^2)z_{vv}-2(z_uz_v)z_{uv}}{2(1+z_u^2+z_v^2)^{3/2}},$$ which equals zero if and only if $$(1+z_v^2)z_{uu}+(1+z_u^2)z_{vv}-2(z_uz_v)z_{uv}=0,$$ as desired. Reference for formulas: Barrett O'Neill, Elementary Differential Geometry
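The algebra above is easy to confirm with a computer algebra system; a SymPy sketch (mirroring the answer's $E, F, G$, but using the unnormalized normal $\textbf{x}_u\times\textbf{x}_v$ so no square roots appear):

```python
import sympy as sp

u, v = sp.symbols('u v')
z = sp.Function('z')(u, v)

# Monge patch x(u, v) = (u, v, z(u, v))
X = sp.Matrix([u, v, z])
Xu, Xv = X.diff(u), X.diff(v)

# First fundamental form
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)

# Unnormalized normal n = Xu x Xv = (-z_u, -z_v, 1); using it instead of the
# unit normal U only rescales L, M, N by sqrt(EG - F^2), which does not
# affect whether H vanishes.
n = Xu.cross(Xv)
L = X.diff(u, 2).dot(n)
M = X.diff(u).diff(v).dot(n)
N = X.diff(v, 2).dot(n)

numerator = G * L + E * N - 2 * F * M   # H = numerator / (2 (EG - F^2)^(3/2))
target = (1 + z.diff(v)**2) * z.diff(u, 2) \
         + (1 + z.diff(u)**2) * z.diff(v, 2) \
         - 2 * z.diff(u) * z.diff(v) * z.diff(u).diff(v)

print(sp.expand(numerator - target))  # 0
```

So $H = 0$ reduces exactly to the minimal-surface equation, as in the answer.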
I was reading this answer and I don't quite understand how the $\rho$ homomorphism works. The generators of the two copies of $\mathfrak{su}(2)$ in $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ are given by $N_i^+ = \frac{1}{2}(J_i+\mathrm{i}K_i)$, $N_i^- = \frac{1}{2}(J_i-\mathrm{i}K_i)$ respectively. The $J_i$'s are the generators corresponding to rotations in $SO^+(1,3)$ and the $K_i$'s are the generators corresponding to boosts in $SO^+(1,3)$. This is the excerpt where it is defined. "You are given the $(1/2,0)$ representation $\rho : \mathfrak{su}(2)\oplus\mathfrak{su}(2)\to\mathfrak{gl}(\mathbb{C}^2)$. Since $\rho$, as a representation, is a Lie algebra homomorphism, you know that $\rho(N_i^-) = 0$ implies $\rho(J_i) = \mathrm{i}\rho(K_i)$. Here, all the matrices $N_i^-,J_i,K_i,0$ are two-dimensional matrices on $\mathbb{C}^2$. You know that $\rho(N_i^-) = 0$ as two-dimensional matrices because of how the $(s_1,s_2)$ representation is defined: Take the individual representations $\rho^+ : \mathfrak{su}(2)\to\mathfrak{gl}(\mathbb{C}^{2s_1+1})$ and $\rho^- : \mathfrak{su}(2)\to\mathfrak{gl}(\mathbb{C}^{2s_2+1})$ and define the total representation map by $$ \rho : \mathfrak{su}(2)\oplus\mathfrak{su}(2)\to\mathfrak{gl}(\mathbb{C}^{2s_1+1}\otimes\mathbb{C}^{2s_2+1}), h\mapsto \rho^+(h)\otimes 1 + 1 \otimes \rho^-(h)$$ where I really mean the tensor product of vector spaces with $\otimes$. For $s_1 = 1/2,s_2 = 0$, this is a two-dimensional representation where $\rho^-$ is identically zero - and the zero is the two-dimensional zero matrix in the two-by-two matrices $\mathfrak{gl}(\mathbb{C}^2)$." 1) If $h = N_i^+$, $\rho(N_i^+) = \rho^+(N_i^+)\otimes 1 + 1 \otimes \rho^-(N_i^+)$; why is $\rho^-$ defined on $N_i^+$? My guess of how it ends: $\rho(N_i^+) = (\sigma_i /2)\otimes 1 + 1 \otimes 0 = \sigma_i /2$ 2) If $h = N_i^-$, $\rho(N_i^-) = \rho^+(N_i^-)\otimes 1 + 1 \otimes \rho^-(N_i^-)$; same question, why is $\rho^+$ defined on $N_i^-$?
I guess $\rho^+(N_i^-) = 0$ so that $\rho (N_i^-) = 0\otimes1 + 1 \otimes {0} = 0$ , but I am not sure why. Thanks in advance.
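Both guesses can be checked numerically; a NumPy sketch of the $(s_1,s_2)=(1/2,0)$ construction, where ρ⁺ is the Pauli spin-1/2 rep and ρ⁻ the trivial one-dimensional rep (so ρ⁻ of anything is the 1×1 zero matrix):

```python
import numpy as np

# Pauli matrices; rho_plus sends the i-th generator of the first su(2) to sigma_i/2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho(i, a_plus, a_minus):
    """rho of the element a_plus*N_i^+ + a_minus*N_i^- of su(2) (+) su(2),
    via rho(h) = rho+(h) (x) 1 + 1 (x) rho-(h) with s1 = 1/2, s2 = 0.
    N_i^+ lives entirely in the first summand and N_i^- in the second."""
    rho_plus = a_plus * sigma[i] / 2          # spin-1/2 rep of the first factor
    rho_minus = a_minus * np.zeros((1, 1))    # trivial rep: always the 1x1 zero
    return np.kron(rho_plus, np.eye(1)) + np.kron(np.eye(2), rho_minus)

for i in range(3):
    assert np.allclose(rho(i, 1, 0), sigma[i] / 2)      # rho(N_i^+) = sigma_i/2
    assert np.allclose(rho(i, 0, 1), np.zeros((2, 2)))  # rho(N_i^-) = 0
print("checks pass")
```

This is the content of the excerpt: each summand of $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ goes through its own representation, and since $N_i^+$ has zero component in the second summand (and $N_i^-$ in the first), $\rho^-(N_i^+)$ and $\rho^+(N_i^-)$ both vanish.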
Suppose that there is a set of $n$ points $P = \{(x_1,y_1), \dots, (x_n,y_n)\}$ in 2D. Given two coordinates $(a,b)$ and a number $r \in \mathbb{R}$, is there an algorithm with $O(|Q| + \log n)$ running time that can find the point set $Q \subseteq P$ containing those points of $P$ that are inside the circle with center $(a,b)$ and radius $r$? (That is, I want to find all points in $P$ with coordinates $(i,j)$ such that $(i-a)^2 + (j-b)^2 \leq r^2$.) [I originally asked about a solution with running time $O(\log n)$, but as Pål GD correctly points out, the answer to that question was "not possible".]
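For contrast, the trivial baseline scans every point, which is $O(n)$ per query rather than the hoped-for $O(|Q| + \log n)$; a Python sketch of the query being asked about:

```python
def points_in_circle(P, a, b, r):
    """Return the subset Q of P inside the disk of radius r centered at (a, b)."""
    r2 = r * r  # compare squared distances to avoid sqrt
    return [(x, y) for (x, y) in P if (x - a) ** 2 + (y - b) ** 2 <= r2]

P = [(0, 0), (1, 1), (2, 2), (3, 0)]
print(points_in_circle(P, 0, 0, 1.5))  # [(0, 0), (1, 1)]
```

Any data structure meeting the stated bound would have to beat this scan by touching only the points it reports, plus a logarithmic search overhead.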
I'm trying to follow the proof in Wikipedia that the PNT is equivalent to the assertion $\psi(x)\sim x$, by proving that $\psi(x)\sim\pi(x)\log x$, which it claims is a very simple proof. One direction of inequality is an actual bound, $\psi(x)\le\pi(x)\log x$, but the other inequality has a fuzz factor: $$\psi(x) \ge \sum_{x^{1-\epsilon}\le p\le x} \log p\ge\sum_{x^{1-\epsilon}\le p\le x}(1-\epsilon)\log x=(1-\epsilon)(\pi(x)+O(x^{1-\epsilon}))\log x.$$ But this doesn't actually complete the proof, because we want $\psi(x)\ge(1-\epsilon)\pi(x)\log x$ without the fuzz factor. If we take large enough $x$ and use $\epsilon/2$ in the above equation we get $$\psi(x)\ge(1-\epsilon/2)\pi(x)\log x+Ax^{1-\epsilon/2}\log x,$$ so it is sufficient to prove that $Ax^{1-\epsilon/2}\le\frac{\epsilon}2\pi(x)$ for sufficiently large $x$, i.e. $x^{1-\epsilon/2}\in o(\pi(x)),$ and although I am sure there is a proof of this, it's not so simple that the proof can be completely omitted, at least as far as I can see. Is there an easy proof to be found here? The only one I am seeing is Chebyshev's weak version of the PNT, $\frac x{\log x}\in O(\pi(x))$, which takes some significant work to prove.
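For what it's worth, granting Chebyshev's lower bound, the missing step is one line; a sketch:

```latex
% Assuming Chebyshev's estimate \pi(x) \ge c\, x/\log x for some c > 0 and large x:
\frac{x^{1-\epsilon/2}}{\pi(x)}
  \;\le\; \frac{x^{1-\epsilon/2}\log x}{c\,x}
  \;=\; \frac{\log x}{c\,x^{\epsilon/2}}
  \;\xrightarrow{\;x\to\infty\;}\; 0,
\qquad\text{so}\quad x^{1-\epsilon/2}\in o(\pi(x)).
```

So the Wikipedia argument is complete modulo Chebyshev's estimate, which, as noted, already takes real work.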
Let's say I throw an object horizontally off a cliff with a fixed height, and I know the time it takes to fall. I wanted to know how far it travels, but it has an acceleration opposite the direction of initial velocity due to air resistance. Therefore, I integrated velocity with respect to time; in this case, velocity as a function of time was equal to $v_0 - at$. Okay, what's acceleration (in this case)? I looked into air resistance, and it turns out it's dependent on area, some coefficient, air density, and... instantaneous velocity... As a high school student who's only done basic calculus, this confuses me. Do I have to learn, like, second-order differential equations before I can solve this, or am I missing something basic? Any help would be greatly appreciated, and sorry for asking something basic like this :(.

It's not a dumb question - and actually it is impossible to answer it analytically (for the case of quadratic drag with horizontal velocity and vertical acceleration). Here are some basic things to help you think about this: The drag force points in the opposite direction of the velocity. Because the drag force is proportional to the velocity squared, the horizontal velocity increases the vertical drag(!) The equation may be daunting, until you realize that it's basically saying "the drag force is the force needed to move all the air my projectile cuts through". The equation is $$F = \frac12 \rho v^2 A C_D$$ If you have a (front facing) area of $A$, then every second you move through a column of air of volume $V = Av$ where $v$ is the velocity. The mass of this column is $m = \rho V = \rho A v$. If you move all that air at the speed of your projectile, it gets a momentum of $p = mv = \rho A v^2$. This starts to look a lot like your drag equation. We just need the factor $\frac12 C_D$ to account for the way that air really moves (it's not simply "moving the entire column of air at the speed of the projectile") and there you are.
To compute the trajectory you will need to use numerical integration. You calculate the initial drag from the initial velocity. This allows you to compute the instantaneous acceleration (don't forget gravity); let this acceleration act for a very short time, and compute the new (horizontal and vertical components of) velocity. From velocity calculate displacement. Repeat for the next time step.

Update I decided to write a simple Python script that demonstrates the approach. When you run this with A=0 (effectively no drag) you can compare the result with the analytical solution - this shows the integration works correctly. When you add "realistic" drag, you can then compute the trajectory for any other configuration. As usual, my code comes without warranty ("not necessarily an example of good coding, not fully tested, no error checking, etc..."). Enjoy.

# example of numerical integration of projectile motion in 2D
import matplotlib.pyplot as plt
from math import sin, cos, atan2, pi, sqrt

# constants
rho = 1.22    # density of medium
g = 9.81      # acceleration of gravity

# projectile properties
A = 0.05      # cross sectional area
Cd = 0.5      # drag factor
m = 0.1       # mass

# initial velocity & angle (radians)
v = 10.       # m/s
theta = pi/4

# initial position, velocity, time
x = 0.
y = 5.        # height above target surface
vx = v * cos(theta)
vy = v * sin(theta)
vx_init = vx
t = 0.

# storage for the result
X = [x]
Y = [y]

# step size
dt = 0.01

def drag(v, theta):
    F = 0.5*rho*v*v*A*Cd
    return (F*cos(theta), F*sin(theta))

while (y > 0) or (vy > 0):
    # instantaneous force:
    Fx, Fy = drag(v, theta)
    # acceleration:
    ax = -Fx/m
    ay = -Fy/m - g
    # position update:
    x = x + vx*dt + 0.5*ax*dt*dt
    y = y + vy*dt + 0.5*ay*dt*dt
    # update velocity components:
    vx = vx + ax*dt
    vy = vy + ay*dt
    # new angle and velocity:
    v = sqrt(vx*vx + vy*vy)
    theta = atan2(vy, vx)
    # store result for plotting:
    X.append(x)
    Y.append(y)
    t = t + dt

# adjust last point to Y=0 - we may have "overshot":
ft = Y[-2]/(Y[-2]-Y[-1])   # fraction of the last step needed to reach y = 0
X[-1] = X[-2] + (X[-1]-X[-2])*ft
Y[-1] = 0.
t = t - (1-ft)*dt

print('Total flight time: %.3f sec\n'%t)
print('Total distance: %.2f m'%X[-1])
print('Initial horizontal velocity: %.2f m/s'%vx_init)
print('Final horizontal velocity: %.2f m/s'%vx)

plt.figure()
plt.plot(X,Y)
plt.title('projectile motion')
plt.xlabel('X position')
plt.ylabel('Y position')
plt.show()

And an example of the output of the above:

I have deleted my original answer as a result of a comment from @Floris, in that for drag which depends on the square of the speed the vertical and horizontal motions are not independent of one another. According to this paper the equations of motion which need to be solved in this case are: $a_{\rm x}=-kv_{\rm x}v$ and $a_{\rm y}=g-kv_{\rm y}v$ where $v^2= v_{\rm x}^2+ v_{\rm y}^2$, which can only be done numerically.

In presence of drag, Newton's equations include a term proportional to $v$: $$m\,\frac{dv}{dt} = mg - kv,$$ whose solutions are the general solution of the homogeneous equation (with $g=0$) plus a particular solution (obtained by going to infinite time, when $dv/dt = 0$). Let's do it. The general solution to $dv/dt = -(k/m)v$ is $v = Ce^{-kt/m}$. Particular solution: $dv/dt = 0$, so $v_{part} = gm/k$. Taking the integration constant $C = gm/k$, we get $v=\frac{gm}{k}e^{-kt/m} + \frac{gm}{k}$. Check: $\frac{dv}{dt} = \frac{gm}{k}\left(-\frac{k}{m} e^{-kt/m}\right) = -g e^{-kt/m} = -\frac{k}{m}v + g$, as expected.
What is the velocity at $t=0$? $v_0 = gm/k + gm/k = 2gm/k$
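The linear-drag closed form in the last answer can be cross-checked against a forward-Euler integration of the same ODE (a sketch; the value of k/m and the choice C = gm/k are illustrative, not from the original problem):

```python
import math

g = 9.81
k_over_m = 0.5              # drag coefficient per unit mass (illustrative)
v_inf = g / k_over_m        # terminal velocity gm/k

def v_exact(t):
    # v(t) = C e^{-(k/m)t} + gm/k with the answer's choice C = gm/k, so v(0) = 2gm/k
    return v_inf * math.exp(-k_over_m * t) + v_inf

# Euler integration of dv/dt = g - (k/m) v from the same initial condition
v, dt, steps = 2 * v_inf, 1e-4, 20000   # integrate out to t = 2 s
for _ in range(steps):
    v += (g - k_over_m * v) * dt

print(v_exact(2.0), v)  # the two values agree closely
```

Note this is the linear-drag model only; as the edited answer above it says, quadratic drag couples the x and y components and must be integrated numerically in both.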
Hi, can someone provide me some self-study material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks

@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)

2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus

Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown

Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters one needs to consider to simulate them

I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.

@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams

@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs

Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves of Go?

enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes

orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others

Btw, since gravity is nonlinear, do we expect that superimposing a region of spacetime that is frame-dragged in the clockwise direction on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)

Well. I'm a beginner in the study of General Relativity ok?
My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?

@JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves.

Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. if** Pardon, I just spent some naive-philosophy time here with these discussions**

The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)

that's true. though back in high school, regardless of language, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention

@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice

I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy

I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)

Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's pcs. Thanks.

@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :)

@Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs... meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.

@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the pcs in the computer room, but by connecting to the server of the university - which means remotely running another environment - I found an older version of matlab). But thanks again.

@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding

If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
How to explain to a middle-school student the notion of a geometric series without any calculus (i.e. limits)? For example, I want to convince my student that $$1 + \frac{1}{4} + \frac{1}{4^2} + \ldots + \frac{1}{4^n} = \frac{1 - (\frac{1}{4})^{n+1} }{ 1 - \frac{1}{4}}$$ as $n \to \infty$ gives $4/3$?

The equality is equivalent to $$ \sum_{k=1}^{n}\frac{1}{4^k}=\frac{\frac{1}{4}-\frac{1}{4^{n+1}}}{1-\frac{1}{4}}$$ Now multiply both sides by $(1-\frac{1}{4})$ and everything will cancel out in the LHS except the first and the last term, which are indeed $\frac{1}{4}$ and $-\frac{1}{4^{n+1}}$

This could be explained using algebraic transformation, but I would rather show a very simple geometric proof for the sum: 1 + 1/2 + 1/4 + ... = 2

I think a 14 year old can grasp the fact that $$\frac 12 + \frac 14 + \frac 18 + \frac 1{16} + \cdots = 1$$ rather intuitively. (Go halfway there, then half the remaining distance, then halfway again, and so on and you get arbitrarily close....) If you are willing to do a little algebra (and wave hands about rearrangement) you get $$ 2 \left( \frac 14 + \frac 1{16} + \cdots \right) + \left( \frac 14 + \frac 1{16} + \cdots \right) = 1$$ so that $$ \frac 1{4} + \frac{1}{16} + \cdots = \frac 13. $$ Now add 1.

Given a series $$S_n=1+x+x^2+\cdots+x^n,$$ we have $$xS_n=S_{n+1}-1=S_n+x^{n+1}-1,$$ so $$S_n(x-1)=x^{n+1}-1,$$ or $$S_n={x^{n+1}-1\over x-1}={1-x^{n+1}\over 1-x},$$ which is the desired result.
For the infinite series, without using limits we see that if $$S=1+x+x^2+\cdots,$$ then $$xS=S-1$$ (this is essentially the limit but is easy to see without formal calculus), and then $$S(x-1)=-1$$ and so $$S=\frac 1{1-x}.$$ $$\frac{1}{3}-\frac{1}{4}=\frac{1}{12}=\frac{1}{4}\cdot\frac{1}{3}$$so$$\frac{1}{3}=\frac{1}{4}+\frac{1}{4}\cdot\frac{1}{3}$$Now ask your 14 year old to plug this expression for $1\over 3$ into itself; quite funny, bewildering and strange at first sight:$$\frac{1}{3}=\frac{1}{4}+\frac{1}{4}\cdot \left(\frac{1}{4}+\frac{1}{4}\cdot\frac{1}{3}\right)=\frac{1}{4}+\frac{1}{4^2}+\frac{1}{4^2}\frac{1}{3}$$Repeat two or three times, then discuss the difference$$\frac{1}{3}-\left(\frac{1}{4}+\frac{1}{4^2}+\dots+\frac{1}{4^n}\right)$$ Edit: Regarding "For example I want to convince my student that...": this is impossible without talking about the notion of a limit. How do you convince a student that $1,{1\over 2},{1 \over 3},\dots$ goes to zero? What does "goes to" even mean? In this situation$$1+\frac{1}{4}+\frac{1}{4^2}+\dots=\frac{4}{3}$$You cannot convince somebody that this is true without defining the meaning of these little dots on the left side. What about multiplying the LHS by $(1 - \frac{1}{4})$? Or is that what you wanted to avoid? I mean, it is not so difficult to understand that nearly all terms cancel, is it? As I understand your question, your student knows and understands the formula for a finite geometric sum. You just need to convince them that the sum goes to $\frac 1{1-q}$ as $n \to \infty$. Well, the only thing left to do really is to convince them that if $0 < x < 1$, then $x^n \to 0$. You can do so by asking them to bring their calculator to class and hit $1/2 \cdot 1/2$. Then multiply by $1/2$ again. Then again. 
After 10 or 20 iterations you can write this number out with decimal digits and they should grasp the fact that it's very close to $0$ (better to write the number out than to be left with something like $\frac 1{2^{16}}$, which may be less clear). We need to get rid of the idea that the average 14 year old is not old enough to do abstract math. The indisputable fact is that the older you are, the more difficult it becomes to learn it. If the average 14 year old would really struggle to understand the simple math needed to sum a geometric series, then how come they can operate their smartphones with ease? So, I would say you could just do the summation of the first $n$ terms using the standard algebraic method, e.g. the one given in Leartses's answer. And then you argue that the limit is $4/3$ by considering the difference between $4/3$ and the finite sum of the first $n$ terms. You show that for every $\epsilon>0$, no matter how small, there exists an $N$ such that for all $n > N$ the difference is smaller than $\epsilon$. There are many ways to explain this by drawing pictures. You can explain that if you replace $4/3$ by another number, then this "game" of finding $N$ for every $\epsilon$ will go wrong. You happen to mention one of my favorite series. This may be convoluted, but consider those fractions in binary: $1/4 = .01$, $1/16 = .0001$, etc. So your sum looks like $.010101\ldots$ Since $S=.010101\ldots$, we get $2S = .101010\ldots$ and $3S = .111111\ldots$, which is $1$, similar to $.9999\ldots$ being $1$. If $3S=1$, then $S=1/3$. No calculus, and binary always comes in handy, in my opinion. Perhaps better to teach growth rather than decay. An old story goes... A king, wanting to please his subject, asked him what he wished for. The clever subject replied "Give me a checkerboard; put 1 grain of wheat in the first square, two in the second, four in the third, and so on to fill up the entire board (up to $2^{63}$ grains in the last square)", to which the king readily agreed, until all go-downs ran out of wheat. 
Calculating compound interest can also be instructive: if money doubles every 5 years, how much is there after 10 years, and so on? Just explain that if the sum of the infinite series is $x$, then $4(x - 1) = x \implies x = 4/3$. If your $14$ year old is willing to accept without proof that the sum of a geometric series with positive ratio $\alpha<1$ exists at all (in your example for $\alpha=\frac14$), then I would argue as follows. Clearly all terms after the initial term$~t_0$ represent $\alpha$ times the total sum$~S$, since the terms taken in order have all been multiplied by$~\alpha$ with respect to the original terms. But then $t_0$ must equal $1-\alpha$ times$~S$. But that makes$$S= \frac{t_0}{1-\alpha}.$$In other words you don't need the finite sum to get at the infinite sum. One can however recover the finite sum as the infinite sum minus the part of the infinite sum excluded from the finite sum. If $t_{n+1}$ is the first term excluded, then this gives for the finite sum$$ \frac{t_0}{1-\alpha}-\frac{t_{n+1}}{1-\alpha}=\frac{t_0-t_{n+1}}{1-\alpha}=t_0\frac{1-\alpha^{n+1}}{1-\alpha},$$which is your sum. Actually like this it would be nicer and more natural to call the first term excluded $t_n$, and let the sum have $n$ rather than $n+1$ terms, but I've adapted to the notation of the question. While I like the graphical examples as a way of explaining this, I think the numerical intuition might be easier if you went for examples with nice decimal expansions. For example, the series $9 + 0.9 + 0.09 + 0.009 +\dots $ can be seen to get closer and closer to $10$ (even if the precise, limiting sense in which it goes towards $10$ is harder for a 14 year old to understand). If your child is familiar with the decimal expansion of one third (and if not, looking at the recurring decimal by manually performing the short division would be instructive), then $0.3 + 0.03 + 0.003 + 0.0003 + \dots$ also works well. Similarly for any fraction over nine, e.g. 
$\frac{7}{9}$ as the limit of $0.7 + 0.07 + 0.007 +\dots$ Such examples are, in my experience as an educator, a good way to introduce the concept of an infinite series whose partial sums approach a limit. That seems to be the major conceptual hurdle here. The actual algebra can come later, and not necessarily very much later. If they know the formula for the sum of the first $n$ terms of the series, then the limiting case can be established by considering the behaviour of $r^n$ as $n$ increases, when $-1 < r < 1$. I'd suggest that you deal with the effects of (repeated) multiplication by numbers between $0$ and $1$ as a prerequisite, before you broach the subject of geometric series. When exploring the limiting case algebraically, you can confirm it works on some examples with "obvious" answers, such as those listed above.
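Since several answers above lean on numerical intuition, a quick computational check may also help. This is my own sketch (not from any answer in the thread), using exact fractions so that no rounding clouds the picture:

```python
# Check that the partial sums 1 + 1/4 + ... + 1/4^n match the closed form
# (1 - (1/4)^(n+1)) / (1 - 1/4), and that the gap to 4/3 shrinks fast.
from fractions import Fraction

def partial_sum(n):
    """Exact partial sum of the geometric series with ratio 1/4."""
    return sum(Fraction(1, 4**k) for k in range(n + 1))

def closed_form(n):
    r = Fraction(1, 4)
    return (1 - r**(n + 1)) / (1 - r)

for n in (0, 1, 2, 5, 10):
    assert partial_sum(n) == closed_form(n)

# The gap to 4/3 is divided by 4 at every step.
for n in range(5):
    print(n, float(Fraction(4, 3) - partial_sum(n)))
```

Seeing the gap column shrink by a factor of 4 per row is, in effect, the calculator experiment from the answer above, done all at once.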
Starting from the famous infinite product $$ (1+z)^2(1-z^2)(1+z^3)^2(1-z^4)(1+z^5)^2(1-z^6)\cdots=1+2z+2z^4+2z^9+2z^{16}+\dots $$ it is easy to show by induction that $$ \prod_{k\geqslant1}\left((1-z^k)(1-z^{2k-1})^N(1+z^{2k})^{N+1}(1+z^{2k-1})^{N+3}\right)=\sum_{n=-\infty}^{+\infty}z^{n^2} $$ for any $N\geqslant0$. Can one make sense of passing to the limit with respect to $N\to\infty$, and if so, what does one obtain? A side question: it seems that this also holds for all negative $N$, but I somehow cannot prove it; is it true?
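The claimed identity can at least be checked order by order with a short program. This is my own sketch (the truncation order and all names are arbitrary choices): it expands the product as a truncated power series in $z$ and compares coefficients against $\sum_n z^{n^2}$ for a few small $N \geqslant 0$.

```python
# Compare both sides of the identity as power series truncated at z^ORDER.
ORDER = 16  # truncation order

def mul(a, b):
    """Multiply two coefficient lists, dropping terms above degree ORDER."""
    out = [0] * (ORDER + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > ORDER:
                    break
                out[i + j] += ai * bj
    return out

def factor(e, sign, power):
    """(1 + sign*z^e)^power as a truncated coefficient list, power >= 0."""
    base = [0] * (ORDER + 1)
    base[0] = 1
    if e <= ORDER:
        base[e] = sign
    p = [1] + [0] * ORDER
    for _ in range(power):
        p = mul(p, base)
    return p

def product_side(N):
    p = [1] + [0] * ORDER
    for k in range(1, ORDER + 1):  # factors with larger k cannot affect z^ORDER
        p = mul(p, factor(k, -1, 1))
        p = mul(p, factor(2 * k - 1, -1, N))
        p = mul(p, factor(2 * k, +1, N + 1))
        p = mul(p, factor(2 * k - 1, +1, N + 3))
    return p

def theta_side():
    # sum_{n=-inf}^{+inf} z^(n^2) = 1 + 2z + 2z^4 + 2z^9 + ...
    c = [0] * (ORDER + 1)
    c[0] = 1
    n = 1
    while n * n <= ORDER:
        c[n * n] = 2
        n += 1
    return c

for N in range(4):
    assert product_side(N) == theta_side(), N
print("identity holds up to z^%d for N = 0..3" % ORDER)
```

This of course only tests finitely many coefficients; it says nothing about the limit $N \to \infty$ or the negative-$N$ case asked about.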
Is it true that $(\Bbb Z_n,\cdot)$, integers modulo $n$ under multiplication, is a group if and only if $n$ is prime? If it's true, why? How can I prove it? Note that $\mathbb{Z}_n=\mathbb{Z}/n\mathbb{Z}$ is never a group with respect to multiplication, unless $n=1$. This is because $[0]$ is not invertible whenever $n>1$. Note: I denote by $[x]$ the equivalence class of $x\in\mathbb{Z}$ under congruence modulo $n$. If we instead consider $\mathbb{Z}_n\setminus\{[0]\}$ under multiplication, then the set is empty for $n=1$ and it is not even a semigroup for composite $n>1$; indeed, if $n=ab$, with $1<a<n$ and $1<b<n$, we have $$ [a][b]=[ab]=[n]=[0] $$ so the set is not closed under multiplication. Only primes then remain for investigation. If $n=p$ is prime and $[a]\ne[0]$, then $p\nmid a$ and so $\gcd(a,p)=1$. Therefore the Bachet-Bézout theorem provides $b$ and $c$ such that $ab+pc=1$, and we have found the multiplicative inverse $[b]$ of $[a]$. Remark that this also proves that, for $[x],[y]\ne[0]$, the product $[x][y]\ne[0]$, so the set is closed under multiplication.
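The "if and only if" can also be verified by brute force for small $n$. A quick sketch of my own (not part of the answer), checking closure and inverses directly:

```python
# (Z_n \ {0}, *) is a group exactly when n is prime: every nonzero class
# then has a multiplicative inverse, and no product of nonzero classes is 0.

def is_mult_group(n):
    """True if the nonzero residues mod n form a group under multiplication."""
    elements = range(1, n)
    # closure: no product of nonzero classes may be 0 mod n
    closed = all((a * b) % n != 0 for a in elements for b in elements)
    # inverses: every class [a] needs some [b] with [a][b] = [1]
    invertible = all(any((a * b) % n == 1 for b in elements) for a in elements)
    return closed and invertible

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for n in range(2, 60):
    assert is_mult_group(n) == is_prime(n), n
```

Associativity and the identity $[1]$ come for free from $\mathbb{Z}$, so closure and inverses are the only axioms worth testing.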
I had the same problem a few days ago and I used the following heuristic to do it. At first impression it seems to work. My specific use case was to determine a strategy for the sequence in which I should convert function implementations from one programming language into another, so that we can execute as many test runs with as little effort as possible (in this case the set of functions is $S$ and the test runs are $\Sigma$, and our goal is to cover as many $\sigma \in \Sigma$ as possible with as few elements from $S$ as possible). Example Data So let's set up some example data. We have five sets $\sigma$ with different elements inside. $$\sigma_1 = \{s_1, s_3, s_4\} \\\sigma_2 = \{s_1, s_5, s_6\} \\\sigma_3 = \{s_1, s_3 \} \\\sigma_4 = \{s_1, s_3, s_4, s_5, s_6\} \\\sigma_5 = \{s_1, s_2, s_3, s_6\}$$ My Current Approach I give each $s_j$ a rating based on the total number of times it occurs. Then I check how many elements each $\sigma$ contains. I then compute the quotient of these two numbers and select the $\sigma$ with the highest rating as the first winner. In my opinion this should give me the quick-win sets first. I will get at least one set covered, and I trade off between trying to get the smallest set first and trying to select the $s$ that are used the most (i.e. with which I could probably cover the most $\sigma$). Let me explain this approach on the example above. 
If we count the number of occurrences for each $s$, we get: $$\text{Usages}(s_1) = 5 \\\text{Usages}(s_2) = 1 \\\text{Usages}(s_3) = 4 \\\text{Usages}(s_4) = 2 \\\text{Usages}(s_5) = 2 \\\text{Usages}(s_6) = 3$$ If we use these and the number of elements in each $\sigma$ we get the following ratings: $$\text{Rating}(\sigma_1) = \frac{\text{Usages}(s_1) + \text{Usages}(s_3) + \text{Usages}(s_4)}{3} = \frac{5 + 4 + 2}{3} = \frac{11}{3} = 3.67 \\\text{Rating}(\sigma_2) = \frac{5 + 2 + 3}{3} = \frac{10}{3} = 3.33 \\\text{Rating}(\sigma_3) = \frac{5 + 4}{2} = \frac{9}{2} = 4.5 \\\text{Rating}(\sigma_4) = \frac{5 + 4 + 2 + 2 + 3}{5} = \frac{16}{5} = 3.2 \\\text{Rating}(\sigma_5) = \frac{5 + 1 + 4 + 3}{4} = \frac{13}{4} = 3.25$$ In this case, I would add the elements $\{s_1, s_3\}$ to my set $X_k$ first. For the next iteration I would delete $\sigma_3$ from the candidates and repeat the process. Related Problems I think this problem is closely related to the set cover problem. However, it is different, and I did not find a way to convert my problem into a set cover problem. It is also different from the other problems I found in the area of set covering (e.g. set packing, dominating set, maximum coverage problem). The maximum coverage problem seems to come very close, but it is again a bit different.
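The heuristic above can be sketched in a few lines. This is my own implementation (function and variable names are mine); one detail the post leaves open is whether the usage counts should be recomputed after each pick, so this version recomputes them over the remaining sets:

```python
# Greedy ordering by the rating heuristic: score each set sigma by the
# average usage count of its elements, pick the best, remove it, repeat.

def greedy_order(sigmas):
    """sigmas: dict mapping a set name to a frozenset of elements.
    Returns the names in the order the heuristic would pick them."""
    remaining = dict(sigmas)
    order = []
    while remaining:
        # Usages(s): how many of the remaining sets contain element s
        usage = {}
        for members in remaining.values():
            for s in members:
                usage[s] = usage.get(s, 0) + 1
        # Rating(sigma): average usage of its elements; pick the highest
        def rating(name):
            members = remaining[name]
            return sum(usage[s] for s in members) / len(members)
        best = max(remaining, key=rating)
        order.append(best)
        del remaining[best]
    return order

# the example data from the post
sigmas = {
    "sigma1": frozenset({"s1", "s3", "s4"}),
    "sigma2": frozenset({"s1", "s5", "s6"}),
    "sigma3": frozenset({"s1", "s3"}),
    "sigma4": frozenset({"s1", "s3", "s4", "s5", "s6"}),
    "sigma5": frozenset({"s1", "s2", "s3", "s6"}),
}
print(greedy_order(sigmas)[0])   # prints sigma3, matching the rating 4.5 above
```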
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples. We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples. Our first example involved \(\mathcal{V} = \textbf{Bool}\). A feasibility relation $$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function $$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor. Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor $$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor $$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy! To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition: Tentative Definition. 
A \(\mathcal{V}\)-enriched profunctor $$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor $$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things: We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category. We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category. We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category. Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62. Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be enriched in itself! Isn't that circular somehow? Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example. To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal poset, just to avoid some technicalities. Definition. A monoidal poset is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that $$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\). This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit! We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). 
Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define: $$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$ Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have $$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\). We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise. Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have $$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect! Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first: Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining $$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above? Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? 
Can you see how to do this by defining $$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above? Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples. Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept.
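For readers who like to experiment, the residuation condition in Puzzles 188 and 189 can be checked exhaustively for \(\mathbf{Bool}\) and on sample values for \(\mathbf{Cost}\). This is my own sketch, not part of the lecture; note how the reversed order in \(\mathbf{Cost}\) is handled explicitly:

```python
# Check a ⊗ x ≤ y  iff  a ≤ x ⊸ y in the two example monoidal posets.
import itertools
import math

# Bool: ⊗ is 'and', the order is False ≤ True, and x ⊸ y is implication.
def bool_hom(x, y):
    return (not x) or y          # x ⊸ y, i.e. 'x implies y'

for a, x, y in itertools.product([False, True], repeat=3):
    assert ((a and x) <= y) == (a <= bool_hom(x, y))

# Cost: elements of [0, ∞], ⊗ is +, and the order is REVERSED:
# x ≤ y in Cost means x >= y as numbers.  So x ⊸ y is truncated subtraction.
def cost_le(x, y):
    return x >= y                # the Cost order, opposite to the numeric one

def cost_hom(x, y):
    return max(y - x, 0.0) if x != math.inf else 0.0

values = [0.0, 0.5, 1.0, 2.0, math.inf]
for a, x, y in itertools.product(values, repeat=3):
    assert cost_le(a + x, y) == cost_le(a, cost_hom(x, y))
```

The only delicate case is \(\infty\): the branch in `cost_hom` avoids the indeterminate `inf - inf` and returns \(0\), the top element of the Cost order, which is exactly what the adjunction forces.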
Today I will finally define enriched profunctors. For this we need two ways to build enriched categories. There are lots of ways to build new categories from old, and most work for \(\mathcal{V}\)-enriched categories too if \(\,\mathcal{V}\) is nice enough. We ran into two of these constructions for categories in Lecture 52 when discussing the hom-functor $$ \mathrm{hom}: \mathcal{C}^{\text{op}} \times \mathcal{C} \to \mathbf{Set} . $$ To make sense of this we needed to show that we can take the 'product' of categories, and that every category has an 'opposite'. Now we're trying to understand \(\mathcal{V}\)-enriched profunctors \(\Phi : \mathcal{X} \nrightarrow \mathcal{Y}\), which are really just \(\mathcal{V}\)-enriched functors $$ \Phi : \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ To make sense of this we need the same two constructions: the product and the opposite! (This is no coincidence: soon we'll see that every \(\mathcal{V}\)-enriched category \(\mathcal{X}\) has a hom-functor \( \mathrm{hom} : \mathcal{X}^{\text{op}} \times \mathcal{X} \to \mathcal{V}\), which we can think of as a profunctor.) So, first let's look at the product of enriched categories: Theorem. Suppose \(\mathcal{V}\) is a commutative monoidal poset. Then for any \(\mathcal{V}\)-enriched categories \(\mathcal{X}\) and \(\mathcal{Y}\), there is a \(\mathcal{V}\)-enriched category \(\mathcal{X} \times \mathcal{Y}\) for which: An object is a pair \( (x,y) \in \mathrm{Ob}(\mathcal{X}) \times \mathrm{Ob}(\mathcal{Y}) \). We define $$ (\mathcal{X} \times \mathcal{Y})((x,y), \, (x',y')) = \mathcal{X}(x,x') \otimes \mathcal{Y}(y,y') .$$ Proof. 
We just need to check axioms a) and b) of an enriched category (see Lecture 29): a) We need to check that for every object \( (x,y) \) of \(\mathcal{X} \times \mathcal{Y}\) we have $$ I \le (\mathcal{X} \times \mathcal{Y})((x,y), \, (x,y)) .$$ By item 2 this means we need to show $$ I \le \mathcal{X}(x,x) \otimes \mathcal{Y}(y,y) .$$ But since \(\mathcal{X}\) and \(\mathcal{Y}\) are enriched categories we know $$ I \le \mathcal{X}(x,x) \text{ and } I \le \mathcal{Y}(y,y) $$ and tensoring these two inequalities gives us what we need. b) We need to check that for all objects \( (x,y), (x',y'), (x'',y'') \) of \(\mathcal{X} \times \mathcal{Y}\) we have $$ (\mathcal{X} \times \mathcal{Y})((x,y), \, (x',y')) \otimes (\mathcal{X} \times \mathcal{Y})((x',y'), \, (x'',y'')) \le (\mathcal{X} \times \mathcal{Y})((x,y), \, (x'',y'')) .$$ This looks scary, but long division did too at first! Just relax and follow the rules. To get anywhere we need to rewrite this using item 2: $$ \mathcal{X}(x,x') \otimes \mathcal{Y}(y,y') \otimes \mathcal{X}(x',x'') \otimes \mathcal{Y}(y',y'') \le \mathcal{X}(x,x'') \otimes \mathcal{Y}(y,y'') . \qquad (\star) $$ But since \(\mathcal{X}\) and \(\mathcal{Y}\) are enriched categories we know $$ \mathcal{X}(x,x') \otimes \mathcal{X}(x',x'') \le \mathcal{X}(x,x'') $$ and $$ \mathcal{Y}(y,y') \otimes \mathcal{Y}(y',y'') \le \mathcal{Y}(y,y'') . $$ Let's tensor these two inequalities and see if we get \( (\star) \). Here's what we get: $$ \mathcal{X}(x,x') \otimes \mathcal{X}(x',x'') \otimes \mathcal{Y}(y,y') \otimes \mathcal{Y}(y',y'') \le \mathcal{X}(x,x'') \otimes \mathcal{Y}(y,y'') .$$This is almost \( (\star) \), but not quite. To get \( (\star) \) we need to switch two things in the middle of the left-hand side! But we can do that because \(\mathcal{V}\) is a commutative monoidal poset. \( \qquad \blacksquare \) Next let's look at the opposite of an enriched category: Theorem. Suppose \(\mathcal{V}\) is a monoidal poset. 
Then for any \(\mathcal{V}\)-enriched category \(\mathcal{X}\) there is a \(\mathcal{V}\)-enriched category \(\mathcal{X}^{\text{op}}\), called the opposite of \(\mathcal{X}\), for which: The objects of \(\mathcal{X}^{\text{op}}\) are the objects of \(\mathcal{X}\). We define $$ \mathcal{X}^{\text{op}}(x,x') = \mathcal{X}(x',x) .$$ Proof. Again we need to check axioms a) and b) of an enriched category. a) We need to check that for every object \( x \) of \(\mathcal{X}^{\text{op}}\) we have $$ I \le \mathcal{X}^{\text{op}}(x,x) . $$ Using the definitions, this just says that for every object \( x \) of \(\mathcal{X}\) we have $$ I \le \mathcal{X}(x,x) . $$ This is true because \(\mathcal{X}\) is an enriched category. b) We also need to check that for all objects \(x,x',x'' \) of \(\mathcal{X}^{\text{op}} \) we have $$ \mathcal{X}^{\text{op}} (x,x') \otimes \mathcal{X}^{\text{op}}(x',x'') \le \mathcal{X}^{\text{op}}(x,x'') . $$ Using the definitions, this just says that for all objects \(x,x',x'' \) of \(\mathcal{X}\) we have $$ \mathcal{X}(x',x) \otimes \mathcal{X}(x'',x') \le \mathcal{X}(x'',x) . $$ We can prove this as follows: $$ \mathcal{X}(x',x) \otimes \mathcal{X}(x'',x') = \mathcal{X}(x'',x') \otimes \mathcal{X}(x',x) \le \mathcal{X}(x'',x) .$$ since \(\mathcal{V}\) is commutative and \(\mathcal{X}\) is an enriched category. \( \qquad \blacksquare \) Now we are ready to state the definition of an enriched profunctor! I gave a tentative definition back in Lecture 60, but we didn't really know what it meant, nor under which conditions it made sense. Now we do! Definition. Suppose \(\mathcal{V}\) is a closed commutative monoidal poset and \(\mathcal{X},\mathcal{Y}\) are \(\mathcal{V}\)-enriched categories. 
Then a \(\mathcal{V}\)-enriched profunctor $$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor $$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ There are a lot of adjectives here: "closed commutative monoidal poset". They're all there for a reason. Luckily we've seen that our friends \(\mathbf{Bool}\) and \(\mathbf{Cost}\) have all these nice properties - and so do many other examples, like the power set \(P(X)\) of any set \(X\). Alas, if we want to compose \(\mathcal{V}\)-enriched profunctors we need \(\mathcal{V}\) to be even nicer! From our work with feasibility relations we saw that composing profunctors is done using a kind of 'matrix multiplication'. For this to work, \(\mathcal{V}\) needs to be a 'quantale'. So, next time I'll talk about quantales. Luckily all the examples I just listed are quantales! Here's a puzzle to keep you happy until next time. It's important: Puzzle 197. Suppose \(\mathcal{V}\) is a closed commutative monoidal poset and \(\mathcal{X}\) is any \(\mathcal{V}\)-enriched category. Show that there is a \(\mathcal{V}\)-enriched functor, the hom functor $$ \mathrm{hom} \colon \mathcal{X}^{\text{op}} \times \mathcal{X} \to \mathcal{V} $$ defined on any object \( (x,x') \) of \(\mathcal{X}^{\text{op}} \times \mathcal{X} \) by $$ \mathrm{hom}(x,x') = \mathcal{X}(x,x') .$$ If you forget the definition of enriched functor, you can find it in Lecture 32.
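To preview the 'matrix multiplication' mentioned above in the \(\mathbf{Cost}\) case: there, composition of profunctors is a \((\min, +)\) matrix product, \((\Psi \circ \Phi)(x,z) = \min_y \big(\Phi(x,y) + \Psi(y,z)\big)\). Here is a sketch of my own (toy data and names, not from the lecture):

```python
# Compose two Cost-enriched profunctors by (min, +) 'matrix multiplication':
# min plays the role of the quantale's join, + the role of its tensor.

def compose(phi, psi):
    """phi: dict (x, y) -> cost, psi: dict (y, z) -> cost.
    Returns the composite profunctor as a dict (x, z) -> cost."""
    xs = {x for x, _ in phi}
    ys = {y for _, y in phi}
    zs = {z for _, z in psi}
    return {
        (x, z): min(phi[x, y] + psi[y, z] for y in ys)
        for x in xs
        for z in zs
    }

# toy example: costs of getting from x1 to each y, then from each y to z1
phi = {("x1", "y1"): 3.0, ("x1", "y2"): 1.0}
psi = {("y1", "z1"): 2.0, ("y2", "z1"): 5.0}
print(compose(phi, psi))   # prints {('x1', 'z1'): 5.0}: min(3+2, 1+5)
```

The `min` over intermediate objects is exactly why \(\mathcal{V}\) must have all joins compatible with \(\otimes\), i.e. be a quantale.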
Take the set of all vectors $x = (x_1, \cdots, x_n)$ that are solutions to $p_1x_1 + \cdots + p_nx_n = I > 0$. Show that this set has $n-1$ dimensions. I have somehow managed to get myself stuck on the last part of this proof, it seems. I am not using the fact that this set is a hyperplane and that hyperplanes have $n-1$ of the dimensions of the space they are in. It is easy to show that $\{x_1, \cdots, x_n\}$ spans the set we are considering, since $\sum p \cdot x$ is a linear combination and all that. However, $x_n$ can be expressed as a linear combination of $\{x_1, \cdots, x_{n-1}\}$: $$x_n = \frac{I - (p_1x_1 + \cdots + p_{n-1}x_{n-1})}{p_n}$$ So we can remove $x_n$ from the span and the resulting set still spans. Now we want to show $\{x_1, \cdots, x_{n-1}\}$ are linearly independent. That is, if $p_1x_1 + \cdots + p_{n-1}x_{n-1} = 0$, all $p_i = 0$. If the set is spanning and linearly independent, then it is a basis. Since it would have $n-1$ vectors, it would be of dimension $n-1$ and we'd be done. So I note that $p_1x_1 + \cdots + p_{n-1}x_{n-1} = I - p_nx_n$, and that $I > 0$. So I assume there is a case where $I - p_nx_n = 0$ and where $I - p_nx_n \neq 0$. I am not sure how to finish off this proof, which makes me sad, because I think I'm just missing something obvious. Any assistance would be appreciated.
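As a sanity check of the claim (my own addition, not part of the question): the solution set is a translate $x_0 + \ker(p)$ of the kernel of the linear map $x \mapsto p \cdot x$, and that kernel has dimension $n-1$ whenever $p \neq 0$. Numerically:

```python
# The solution set of p·x = I is x0 + ker(p); ker(p) has dimension n-1.
import numpy as np

rng = np.random.default_rng(0)
n = 5
p = rng.uniform(0.5, 2.0, size=n)   # strictly positive "prices"
I = 3.0

# one particular solution: spend everything on the last coordinate
x0 = np.zeros(n)
x0[-1] = I / p[-1]
assert np.isclose(p @ x0, I)

# dimension of the kernel of the 1 x n matrix p, via its rank
null_dim = n - np.linalg.matrix_rank(p.reshape(1, n))
print(null_dim)   # prints 4, i.e. n - 1
```

This also points at the gap in the argument above: the set is an affine subspace (a translate of a subspace), so its "dimension" is that of the direction space $\ker(p)$, not of a span containing the solutions themselves.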
I am confused about the slash notation and especially taking the square of a slashed operator. Defining $\displaystyle{\not} a \, = \, \gamma^\mu a_\mu$, we have $\displaystyle{\not} a \displaystyle{\not} a = a^2$. I tried to prove that, but I can't really do it without assumptions I didn't prove. That's my (I think wrong) procedure: $\displaystyle{\not} a \displaystyle{\not} a = \gamma^\mu a_\mu \gamma^\nu a_\nu = \gamma^\mu \gamma^\nu a_\mu a_\nu$, assuming in the last equality that they commute; now, using the anticommutation relation of the gamma matrices: $$\gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu =2\eta^{\mu\nu}\tag{1}$$ I say that probably $\gamma^\mu \gamma^\nu= \eta^{\mu\nu}$, and substituting it in (1) I obtain $$\eta^{\mu\nu}a_\mu a_\nu\,=\, a_\mu a^\mu \, = \, a^2$$ I don't think that's really a proof; can someone provide a right proof without the assumptions I've made?
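One way to convince yourself numerically (my own addition, not a substitute for a proof; the index argument is that $a_\mu a_\nu$ is symmetric in $\mu\nu$, so only the symmetric part of $\gamma^\mu\gamma^\nu$ contributes: $\displaystyle{\not}a\displaystyle{\not}a = \tfrac12\{\gamma^\mu,\gamma^\nu\}a_\mu a_\nu = \eta^{\mu\nu}a_\mu a_\nu = a^2$) is to build explicit Dirac-representation gamma matrices and check the identity on a sample vector:

```python
# Check slash(a) @ slash(a) = (a·a) * Identity with explicit gamma matrices
# in the Dirac representation and metric eta = diag(+1, -1, -1, -1).
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(a_low):
    """gamma^mu a_mu, with a_low holding the covariant components a_mu."""
    return sum(a_low[mu] * gammas[mu] for mu in range(4))

a_low = np.array([1.3, 0.2, -0.7, 0.5])
a_sq = a_low @ eta @ a_low           # eta^{mu nu} a_mu a_nu = a^2
assert np.allclose(slash(a_low) @ slash(a_low), a_sq * np.eye(4))
```

Note the check passes even though $\gamma^\mu\gamma^\nu \neq \eta^{\mu\nu}\mathbb{1}$ for $\mu \neq \nu$; it is only the symmetrized combination that collapses to the metric.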