I want to prove that the product of a finite set and a countable set is countable. Is it enough to prove that a finite set is countable and then use Cantor's theorem that the Cartesian product of two countable sets is countable?

Let the finite set be $\{1,\dots,N\}$ and the countable set be $\mathbb{N}$, then define $\phi(k,n) = k+(n-1)N$. This defines a bijection $\phi:\{1,\dots,N\} \times \mathbb{N} \to \mathbb{N}$.

OK, so your strategy is to use Cantor's proof together with the following Lemma: Lemma: "Every finite set is countable". OK, so how do you prove that Lemma? First, let's get clear on what we mean by 'countable'. In general, a set $X$ is 'countable' when there is a function from $\mathbb{N}$ to $X$ that is onto (it does not have to be a bijection ... indeed, that would only make infinite sets possibly countable!). OK, so suppose $X$ is a nonempty finite set. Then we can write $X = \{ x_1,x_2,\dots,x_k \}$ for some $k$. OK, so let's define a mapping $f$ from $\mathbb{N}$ to $X$: $$f(n) = x_{(n \bmod k) + 1}$$ Since this mapping is onto, $X$ is countable. So, any finite set is countable.

Some people define "countable" to mean countably infinite, but in my experience "countable" usually means that the set can be put in bijection with some subset of $\mathbb{N}$. That is to say, a countable set is either finite or countably infinite. With that definition, a finite set is countable, so Cantor's theorem gives you the result you want.
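As a concrete sanity check (my own script, not part of the original post), one can verify that $\phi(k,n) = k+(n-1)N$ hits every positive integer exactly once on a finite prefix of $\{1,\dots,N\} \times \mathbb{N}$; the values of $N$ and the prefix length $M$ below are arbitrary choices.

```python
# Check that phi(k, n) = k + (n - 1) * N maps {1..N} x {1..M}
# bijectively onto {1..N*M} -- a finite prefix of the claimed
# bijection {1..N} x N -> N.
N, M = 4, 50

def phi(k, n, N=N):
    return k + (n - 1) * N

images = {phi(k, n) for k in range(1, N + 1) for n in range(1, M + 1)}
# Injective on the prefix: no two pairs collide...
assert len(images) == N * M
# ...and surjective onto {1, ..., N*M}: every value is hit.
assert images == set(range(1, N * M + 1))
```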
I am looking at a result of Peyre, and he says for a certain variety, the number of rational points of height less than $B$ is: $$ N(B) \sim \frac{1}{3} \color{#3DB08E}{\prod_p \left( 1 - \frac{1}{p}\right)^4\left( 1 + \frac{4}{p} + \frac{1}{p^2} \right) }B (\log B)^3 $$ The variety is some sort of Veronese or del Pezzo-type surface: $$ V(\mathbb{Q}) = \big\{ \big( (x_0:y_0), (x_1:y_1),(x_2:y_2) \big) : x_0 x_1 x_2 = y_0 y_1 y_2 \big\} \subset \text{P}^1_\mathbb{Q}\times \text{P}^1_\mathbb{Q} \times \text{P}^1_\mathbb{Q} $$ The height is some sort of exotic height -- probably a Weil height of some kind. $$ H( ...) = \sup \big(|x_0|, |y_0|\big)\sup \big(|x_1|, |y_1|\big) \sup \big(|x_2|, |y_2|\big)$$ This is the height associated to the anti-canonical divisor $\omega_V^{-1} = \mathcal{O}_V(1,1,1)$. He has removed some exceptional divisors which come from variables being zero, $x = 0$ or $y = 0$, so that $U = V \,\backslash \bigcup\{ x_i = y_j = 0 \}$. What do we know about this constant? Is it a period? It looks like the Euler product of some type of L-function of some scheme. An algebraic geometer might try to read the product over primes as the "adelic" part and the $B \log^3 B$ part as places over some ring of polynomials. Do the Cohen–Lenstra heuristics predict anything here? Those are more about estimating ranks of class groups, or Galois groups, but they momentarily look like a probability.
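For what it's worth, the Euler product in the question converges quickly and can be estimated numerically. The following sketch (my own; the truncation bound is an arbitrary choice, not from Peyre's paper) computes the partial product over primes up to a limit:

```python
# Numerically estimate  prod_p (1 - 1/p)^4 (1 + 4/p + 1/p^2)
# by truncating at primes p <= LIMIT (sieve of Eratosthenes).
# Each factor is 1 - 9/p^2 + O(1/p^3), so the product converges.
LIMIT = 100_000

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

product = 1.0
for p in primes_up_to(LIMIT):
    product *= (1 - 1 / p) ** 4 * (1 + 4 / p + 1 / p ** 2)
```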
I want to code the dynamics of a 2D planar quadrotor and then control it to drive it from one state to another. The dynamics I use are taken from the online course given by Vijay Kumar on Coursera, as follows: $ \begin{bmatrix} \ddot{y}\\ \ddot{z}\\ \ddot{\phi} \end{bmatrix} = \begin{bmatrix} 0\\ -g\\ 0 \end{bmatrix} + \begin{bmatrix} -\frac{1}{m}\sin\phi & 0\\ \frac{1}{m}\cos\phi & 0\\ 0 & -\frac{1}{I_{xx}} \end{bmatrix}\begin{bmatrix} u_1\\ u_2 \end{bmatrix} $ with the usual linearizations $\sin\phi \to \phi$ and $\cos\phi \to \text{const}$. The controls $u_1$, $u_2$ are defined by: $u_1=m\{g+\ddot{z}_T(t)+k_{v,z}(\dot{z}_T(t)-\dot{z})+k_{p,z}(z_{T}(t)-z)\}$ $u_2=I_{xx}(\ddot{\phi}_T(t)+k_{v,\phi}(\dot{\phi}_T(t)-\dot{\phi})+k_{p,\phi}(\phi_{T}(t)-\phi))$ $\phi_c=-\frac{1}{g}(\ddot{y}_T(t)+k_{v,y}(\dot{y}_T(t)-\dot{y})+k_{p,y}(y_{T}(t)-y))$ The vehicle is assumed to be near hover, and the commanded roll angle $\phi_c$ is calculated from the desired y-component and used to compute $u_2$, which is the net moment acting on the CoG. What I don't understand is: don't I need any saturation on the actuators? Do I need to implement some limiting in my code to clip the control signals? The other thing is, I don't have any desired acceleration, yet those terms appear in the control signal equations. Can I remove them? The last thing is, my control signals drive the vehicle's roll angle to order $10^5$, by integrating the high angular rates caused by the large $u_2$ moment signal, I guess. Since the linearization relies on the small-angle approximation, those high angles and rates are problematic. How can I handle this?
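To illustrate the saturation point, here is a hedged sketch (my own illustration, not the course code): it simulates the planar quadrotor with the PD laws from the question for a fixed hover setpoint (so the feed-forward $\ddot{z}_T$, $\ddot{y}_T$, $\ddot{\phi}_T$ terms are zero and drop out), and clamps $u_1$, $u_2$ to actuator limits. All gains, limits, mass, and inertia values are made-up placeholders, and I take $\ddot{\phi} = u_2/I_{xx}$ (dropping the sign-convention minus) so the PD loop is stable.

```python
# Planar quadrotor hover with PD control and actuator saturation.
# All numeric values are assumptions for illustration only.
import math

m, Ixx, g = 0.18, 2.5e-4, 9.81          # placeholder vehicle parameters
kpz, kvz = 20.0, 8.0                     # altitude PD gains (assumed)
kpp, kvp = 400.0, 40.0                   # attitude PD gains (assumed)
kpy, kvy = 5.0, 4.0                      # lateral PD gains (assumed)
U1_MAX, U2_MAX = 2.0 * m * g, 1e-2       # assumed saturation limits

def clamp(u, lo, hi):
    return max(lo, min(hi, u))

# state: [y, z, phi, ydot, zdot, phidot]; drive it to (y, z) = (0, 1)
s = [0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
yT, zT = 0.0, 1.0
dt = 1e-3
for _ in range(8000):
    y, z, phi, yd, zd, pd = s
    # hover setpoint => desired velocities/accelerations are zero
    u1 = m * (g + kvz * (0 - zd) + kpz * (zT - z))
    phi_c = -(1 / g) * (kvy * (0 - yd) + kpy * (yT - y))
    u2 = Ixx * (kvp * (0 - pd) + kpp * (phi_c - phi))
    u1 = clamp(u1, 0.0, U1_MAX)          # thrust can't be negative
    u2 = clamp(u2, -U2_MAX, U2_MAX)      # limited moment authority
    # nonlinear dynamics from the question (phidd sign as noted above)
    ydd = -(u1 / m) * math.sin(phi)
    zdd = -g + (u1 / m) * math.cos(phi)
    pdd = u2 / Ixx
    s = [y + yd * dt, z + zd * dt, phi + pd * dt,
         yd + ydd * dt, zd + zdd * dt, pd + pdd * dt]
```

With the clamps in place the roll angle stays small, so the small-angle assumption behind $\phi_c$ remains valid; removing them lets a large position error command an arbitrarily large moment, which is exactly the runaway-roll behavior described in the question.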
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).

I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \dots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1}-2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.

I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...

Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between the 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)

On Monday, I ask for an update and get told they're working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they'll review my case.

@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself. but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed, no it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol

I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all, how bizarre

I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph, and two hours later Train B leaves the same station, also for Chicago, traveling 60 mph, how long until Train B overtakes Train A? @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out. By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point. So yeah, at first glance I'd say the answer key is wrong. The only way I could see it being correct is if they're including the change of time zones, which I'd find pretty annoying. But 240 miles seems waaay too short to cross two time zones. So my inclination is to say the answer key is nonsense.

You can actually show this using only the fact that the derivative of a function is zero if and only if it is constant, the fact that the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form $$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$ (Obvi...
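The chat's overtake reasoning can be confirmed in two lines (my own script): with $t$ in hours after noon, B catches A when $40t = 60(t-2)$.

```python
# Train A: 40 mph from noon. Train B: 60 mph from 2 pm.
# Overtake when 40 * t = 60 * (t - 2)  =>  t = 120 / 20 hours after noon.
t = 120 / (60 - 40)
dist_A = 40 * t          # miles travelled by A at the overtake
dist_B = 60 * (t - 2)    # miles travelled by B at the overtake
```

So $t = 6$ hours after noon, i.e. 6 pm, with both trains at 240 miles, exactly as the chat argued against the 4 pm answer key.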
Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question regarding the linked answer: where does the term $e^{(r_1-r_2)x}$ come from? It seems to be pulled out of the blue, but it yields the desired result.
Let's refactor and reword these statements for ease of thought. Let $A(n)$ be $\Theta(g(n))$ and let $B(n)$ be $O(g(n))$. Note that $A$ implies $B$ because $A$ is the stronger condition: for $A$ to hold, we have to fulfill the criteria of $B$ and more, so whenever $A$ holds, $B$ must hold as well. Written set-theoretically: the set of functions that fulfill the criteria of $A$ is a subset of the set of functions that fulfill the criteria of $B$. This is not contradictory; it is, in a sense, equivalent to the previous statement that $A$ comprises the criteria of $B$ and more. From this we can conclude: every function that fulfills the criteria of $A$ fulfills $B$, but not every function that fulfills the criteria of $B$ fulfills the criteria of $A$. Therefore $\Theta(g(n))\subseteq O(g(n))$ is true, and it is equally true that $f(n) = \Theta(g(n))$ implies $f(n) = O(g(n))$, without any contradiction. Furthermore, this matches the definition of $\Theta(g(n))$, which states that $f(n)\in \Theta(g(n))$ if and only if $f(n) \in O(g(n))$ and $f(n) \in \Omega(g(n))$. Clearly, by this definition, $f(n) \in \Theta(g(n))$ does imply $f(n) \in O(g(n))$, and $\Theta(g(n))$ is a subset of $O(g(n))$ as well.
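A concrete witness makes the implication tangible (my own example, not from the post): take $f(n) = 2n+3$ and $g(n) = n$. The constants $c_1 = 2$, $c_2 = 3$, $n_0 = 3$ certify $f \in \Omega(g)$ and $f \in O(g)$ simultaneously, i.e. $f \in \Theta(g)$, and the $O(g)$ half of that certificate is exactly the implication discussed above.

```python
# Check the Theta-witness inequalities c1*g(n) <= f(n) <= c2*g(n)
# for all n >= n0 over a large numeric range.
def f(n):
    return 2 * n + 3

def g(n):
    return n

c1, c2, n0 = 2, 3, 3
theta_holds = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
```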
As shown in the diagram, we have to find $a_1, a_2$, provided that the surfaces are not frictionless: $\mu$ is the friction coefficient between the plane and the body of mass $M$, and $\frac{\mu}{2}$ is the friction coefficient between the bodies $M$ and $m$. My approach was to draw the free body diagram, indicating all the forces; according to me: $(M+m)a_1=(M+m)g\sin\theta - \mu(M+m)g\cos\theta$ and $ma_2=mg\sin\theta-\frac{\mu}{2}mg\cos\theta$. But according to my teacher the answer is incorrect. He says that, considering only the mass-$M$ block, the equation of forces should be $Ma_1=(M+m)g\sin\theta - \mu(M+m)g\cos\theta - \frac{\mu}{2}mg\cos\theta$. I considered $M+m$ as a system, and so I omitted the friction force between $M$ and $m$, because in that case it is an internal force. According to sir, I cannot consider $M+m$ as a system because the components are in relative motion; moreover, he says that by my equation the acceleration obtained would be that of the centre of mass of both blocks, not the acceleration of $M$. Please consider both equations, mine and my teacher's, and explain who is correct and why. One more doubt: if there were no relative motion between $m$ and $M$, would my equation then be correct, or not?
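Numeric illustration only (my own script, with made-up values $M = 2$ kg, $m = 1$ kg, $\mu = 0.5$, $\theta = 30°$): evaluating the two competing expressions for $a_1$ shows they genuinely disagree, so at most one free-body analysis can be right. This does not by itself settle which one.

```python
# Compare the asker's and the teacher's expressions for a1.
import math

M, m, mu, g = 2.0, 1.0, 0.5, 9.8     # placeholder values
th = math.radians(30)

# asker: (M+m) a1 = (M+m) g sin(th) - mu (M+m) g cos(th)
a1_asker = g * math.sin(th) - mu * g * math.cos(th)

# teacher: M a1 = (M+m) g sin(th) - mu (M+m) g cos(th) - (mu/2) m g cos(th)
a1_teacher = ((M + m) * g * math.sin(th)
              - mu * (M + m) * g * math.cos(th)
              - (mu / 2) * m * g * math.cos(th)) / M
```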
First of all, in lower dimensions (2+1 and 1+1) gravity is much simpler. This is because in 3d the curvature tensor is completely determined by the Ricci tensor (and the metric at a given point), while in 2d the curvature tensor is completely determined by the scalar curvature. This means that there are no purely gravitational dynamical degrees of freedom, and in particular no gravitational waves. General note: a horizon (which is the defining feature of a black hole), representing our inability to obtain information about events behind it, always implies an entropy for the corresponding solution. So in all of these black hole models there is some black hole thermodynamics. For Hawking radiation one needs to include quantum effects in the consideration, and also radiative degrees of freedom (if there are no gravitons or photons or any other '-ons', then nothing can radiate). Let us start with the case of 3d (that is, 2+1). The Einstein equations in 2+1 spacetime without any matter fields simply imply that spacetime is flat, that is, 'constructed' from pieces of Minkowski spacetime. It may have nontrivial topology, so 2+1 gravity is a topological theory, but no black hole solutions exist. This model is (in the mathematical sense) exactly solvable. To introduce nontrivial 2+1 solutions we can add matter or a cosmological constant (which could be considered the simplest form of matter). It turns out that spacetimes with a negative cosmological constant (which would locally be composed of pieces of anti-de Sitter spacetime) do admit a black hole solution: the BTZ black hole (named after the authors of the original paper).
This solution shares many of the characteristics of the Kerr black hole: it has mass and angular momentum; it has an event horizon, an inner horizon, and an ergosphere; it occurs as an endpoint of gravitational collapse (for that, of course, we need to include matter beyond the cosmological constant in the consideration); and it has a nonvanishing Hawking temperature and interesting thermodynamic properties (see, for instance, the paper by S. Carlip). The Hawking temperature of the BTZ black hole is $T\sim M^{1/2}$, which, in contrast to the (3+1)-dimensional case, goes to zero as $M$ decreases. Additionally, the simplicity of the model allows a quantum treatment, including a statistical computation of the entropy (see references in the paper by E. Witten). There are many other variations of solutions in 2+1 gravity theories (for instance, including dilaton and EM fields, scalar fields, etc.), but all of them require a negative cosmological constant. This is because the dominant energy condition forbids the existence of black holes in 2+1 dimensions (see here). Now to 1+1 dimensions. Locally, all GR models in 1+1D are flat. So to include nontrivial spacetime geometry we need to modify gravity. This can be done by including a dilaton field. The resulting models often admit nontrivial geometries with black holes (see the paper by Brown, Henneaux, and Teitelboim, the wiki page on the CGHS model, the paper by Witten on black holes in a gauged WZW model, and this review). These black hole solutions also admit nontrivial thermodynamics and Hawking radiation. In particular, the Hawking temperature is proportional to the mass, so as the black hole evaporates it becomes colder (unlike the 4D case, where $T \sim M^{-1}$). Now to higher-dimensional gravity. Gravity itself is much richer than in the lower-dimensional cases, so analogues of all 4D black holes also exist in higher dimensions, as well as some new black-hole-like solutions such as black strings and black p-branes.
There are also multi-black-hole configurations, where multiple black holes are placed along a ring or line such that the total force on each of them is zero, resulting in an equilibrium configuration. Since many uniqueness theorems for black holes only work in 3+1 dimensions, there are even solutions with nontrivial horizon topologies, such as black rings. I suggest looking at the Living Review recommended by Ben Crowell or at these lectures by N. Obers. The simplest black hole would be the Schwarzschild–Tangherlini solution (the analogue of the Schwarzschild black hole), which is a vacuum solution to the Einstein field equations. Here $\mu = R_s^{d-3} = \frac{16 \pi G M}{(d-2)\Omega_{d-2}}$ is the mass parameter. This gives us the relationship between mass and Schwarzschild radius: $R_s \sim M^{1/(d-3)}$. The entropy is given by the Bekenstein–Hawking formula: $$S = \frac {\cal A}{4G}=\frac 14 \left(\frac{\Omega_{d-2} R_s^{d-2}}{ G} \right).$$ The temperature can be found from the first law $ dS = d M / T $: $$T = \frac{d-3}{4 \pi R_s}.$$ The rotating solution (the generalization of the Kerr metric) would be the Myers–Perry metric. Note that rotations in higher dimensions are more complex, so the angular momentum is represented by several parameters. Also note that many solutions with horizons elongated in one direction (such as black strings or black rings) turn out to be unstable via the Gregory–Laflamme instability, where the smooth 'tubular' horizon develops growing perturbations of certain wavelengths. So black strings and black rings would presumably tend to decay into a line of droplet-like black holes (the exact mechanism is not yet known). But of course the second law of thermodynamics would be observed, meaning that the total area of the horizons would increase.
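The first-law consistency of the formulas above is easy to check numerically (my own sketch, in geometric units $G = c = 1$): using $\mu = R_s^{d-3} = \frac{16\pi G M}{(d-2)\Omega_{d-2}}$ to express $M(R_s)$, a finite difference should recover $dM/dS = T = \frac{d-3}{4\pi R_s}$.

```python
# Check dS = dM / T for the Schwarzschild-Tangherlini solution, d = 5.
import math

def omega(n):
    # surface area of the unit n-sphere
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def mass(Rs, d):
    # from mu = Rs^(d-3) = 16 pi M / ((d-2) Omega_{d-2}), with G = 1
    return (d - 2) * omega(d - 2) * Rs ** (d - 3) / (16 * math.pi)

def entropy(Rs, d):
    return omega(d - 2) * Rs ** (d - 2) / 4

def temperature(Rs, d):
    return (d - 3) / (4 * math.pi * Rs)

d, Rs, h = 5, 1.0, 1e-6
dS = entropy(Rs + h, d) - entropy(Rs, d)
dM = mass(Rs + h, d) - mass(Rs, d)
assert abs(dM / dS - temperature(Rs, d)) < 1e-4
```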
Is the existence of anomalies to the EMH sufficient to disprove the EMH, or do we need a persistent anomaly which the financial markets cannot correct to disprove the EMH? This question applies to the weak, semi-strong, and strong versions of the EMH.

If one finds an anomaly relative to some asset pricing model, there are three possibilities. Any test of market efficiency is a joint test of market efficiency and an asset pricing model; the efficient market hypothesis on its own doesn't generate testable predictions. The efficient market hypothesis (EMH) of Prof. Eugene Fama is that market prices reflect all available information. Let's also review some things the EMH on its own does NOT say: the efficient market hypothesis, on its own, says FAR less than a lot of people think. Early in the history of academic finance, Fama pointed out that any test of market efficiency is a joint test of market efficiency and an asset pricing model. To say prices are wrong, you need to say something about what they should be! For example, the CAPM asset pricing model is that expected returns are linear in market betas: $$ \operatorname{E}[R_{it} - R^f_{t}] = \beta_i \operatorname{E}[R^m_t - R^f_t]$$ Using various proxies for the market portfolio, joint tests of this model and market efficiency are strongly rejected: since the 1980s, average returns actually decline in market beta rather than increase. You also had the anomalies of size and value. Does the existence of these anomalies relative to the CAPM imply the efficient market hypothesis is violated? No. The CAPM could simply be the wrong asset pricing model. That raises the question: what's an allowable asset pricing model? Could one test market efficiency more directly by making fewer assumptions on the asset pricing model?
A basic commonality of any risk-based asset pricing model is the law of one price: a linear pricing function. If you can show two securities have the same payoffs and trade at different prices, that's hard to reconcile with market efficiency and any risk-based asset pricing model. Along that line, you have Richard Thaler's paper "Can the market add and subtract?" on tech stock carveouts. Thaler argues that in the case of Palm and 3Com, linearity of the pricing function was violated. More broadly, one can argue against market efficiency and risk-based asset pricing models by showing: For example, the abnormal return to earnings surprises is fairly compelling evidence against the strong-form efficient market hypothesis. It suggests that the market was not already aware of the earnings info until it was announced. Fama and French have been unwilling to add a momentum factor to their asset pricing models; they're not willing to call it a risk factor. Is momentum an anomaly that falsifies the joint hypothesis of efficient markets and risk-based asset pricing models? Perhaps? A line of research in the broader market efficiency literature is that you can have unexploitable anomalies due to transaction costs and various other frictions. Due to short sale constraints, transaction costs, stale prices, etc., a strategy may not actually earn the returns that a naive backtest says it would have.

I have a paper that argues that the distribution of returns cannot have a mean. I argue that prices are data and that returns are not data; rather, returns are transformations of data. Therefore, it is the statistical distribution of prices that must determine the distribution of returns. Since returns are $$\frac{p_{t+1}}{p_t}-1,$$ it follows that returns follow a ratio distribution. In the simplest case, the Markowitz case, with many buyers and sellers and infinite liquidity, it can be shown through the central limit theorem that prices would be normally distributed.
Depending slightly on how you construct the process, this implies that the distribution will have no mean or variance. Because negative prices don't exist, the distribution of returns for going concerns must be: $$\left[\frac{\pi}{2}+\tan^{-1}\left(\frac{\mu}{\gamma}\right)\right]^{-1}\frac{\gamma}{\gamma^2+(r-\mu)^2},$$ if you accept the Markowitzian assumption. This also assumes the distribution is conditioned on the equilibrium prices, so that: $$\mu=\frac{p^*_{t+1}}{p_t^*}.$$ If you do not exist in a Markowitzian world, then the distribution will vary, but it will have the Cauchy distribution buried in it. You can see this by transforming the process into polar coordinates. In doing so, you will find from the limits of integration over the angles that the cumulative density of the Cauchy distribution must be present, regardless of any assumptions you may otherwise make. You will end up with some type of integral similar to: $$\int_{-\frac{\pi}{2}}^{\tan^{-1}(r)}.$$ It will depend a little bit on how you orient your problem. Nonetheless, the arctangent will be present, because the arctangent is opposite over adjacent: if the current price is the length of the adjacent side and the future price is the length of the opposite side, then you can see the linkage. I have also replaced the rules of mathematics in other papers to accommodate this. The efficient market hypothesis is invalid because it depends upon a mathematics that cannot be correct. This does not mean that markets are inefficient, though; it just means we have been discussing what efficiency means the wrong way. Having looked at the data in quite some depth, I would argue that prices, conditioned upon dividends, are quite efficient, but in a sense much weaker than any version of the efficient market hypothesis. This weakening comes from the nature of the allowable cost functions and the rules of regression. The implied cost function for the distribution to find $\mu$ is the "all-or-nothing" cost function.
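The ratio-distribution claim is easy to see in simulation (my own illustration, not from the answer, and ignoring the truncation at zero that the answer imposes): if consecutive prices are draws from a normal distribution, the return $p_{t+1}/p_t - 1$ is a ratio of normals, whose tails are Cauchy-like, so extreme outliers appear that a finite-mean distribution would essentially never produce. The price parameters below are made up.

```python
# Simulate returns as a ratio of normal prices and look at the tails.
import random

random.seed(0)
n = 200_000
returns = []
for _ in range(n):
    # prices fluctuating around a made-up equilibrium of 100
    p0 = random.gauss(100, 50)
    p1 = random.gauss(100, 50)
    returns.append(p1 / p0 - 1)

# a Cauchy-like tail: occasional returns orders of magnitude beyond
# anything a +/- few-sigma normal sample would allow
extreme = max(abs(r) for r in returns)
```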
It is $$C(\hat{\mu},\mu)=\begin{cases}c_1&\hat{\mu}<\mu\\0&\hat{\mu}=\mu\\ c_2&\hat{\mu}>\mu\end{cases},\qquad c_1,c_2>0.$$ A gamble in this form means that any mistake is a bad mistake, though it may be the case that a mistake to the left has a different cost than a mistake to the right. Overvaluing $\mu$ causes you to buy too much at too high a price, while undervaluing $\mu$ causes you to buy too little, and you suffer an opportunity loss of profits. A stronger gamble can be placed by regressing price on dividends. That leads to the absolute linear loss function, still weaker than an expectation, but about fifty percent of the time you should make too much money and about fifty percent of the time you should make too little. Since it is linear, the median loss should be zero, though the mean loss could be any value on the real line. The anomalies are being created by the difference in the math. People are measuring behavior under an assumption of some form of normality or lognormality and seeing behavior under distributions without a mean, variance, or covariance. There are tons of persisting anomalies. Until people quit using the tools of mean-variance finance, the ideas behind the EMH will persist. People are receiving compensation based upon the EMH. For example, the Uniform Prudent Investor Act presumes the validity of the EMH and declares alternative behavior imprudent and hence tortious unless you can document why you behaved in a manner different from how the hypothesis would recommend. Everyone knows it is not empirically supported. Indeed, it has been a bonanza under "publish or perish," because you don't have to discover anything to discover something: you can always find a new way to finagle a prediction into an anomaly. So many careers implicitly depend upon the EMH that, while paying lip service to it being empirically incorrect, people are not stupid. They don't want lawsuits. I found 3800 articles on just one anomaly in an EconLit search.
Empirical disproof has existed since 1963 with Mandelbrot's article "On the Variation of Certain Speculative Prices." Four hundred and eighty-two trillion dollars in options are priced on the EMH or a variant, according to the Bank for International Settlements. That is a lot of money to say "oops" on. The EMH will persist until something forces a sea change. If you are interested in math that will work, buy a copy of Parmigiani's book called "Decision Theory." Its ISBN-10, 047149657X, is a way to search for it. If you have no background at all in Bayesian methods, you should first pick up "Introduction to Bayesian Statistics" by William Bolstad. It is ISBN-10: 1118091566. Edit: Just a note, this probably should have been asked and answered under the Economics forum instead, and should probably be migrated there.

I agree with most answers posted. But the research on this subject has been highly contradictory, and the answer to your question is highly dependent on the hypothesis you are trying to test. If your hypothesis is looking for 6-sigma events within an "efficient" market, then anomalies are what you need to come by, and their absence will disprove your hypothesis. On the other hand, if you are trying to test for arbitrage opportunities on the basis that markets are not as frictionless as they have been theorized to be, then anomalies aren't that normal and should not be that evident in your findings, unless the stock/company/government has undergone a surprising event which has led to that anomaly. If you follow the martingale logic for equity prices, then the price of a stock should be the sum of all past events and should reflect such "known" events. On the flip side, if you are a random walk advocate, you can fairly certainly assume that prices move randomly while following a lognormal distribution, which in turn forces your returns to have a normal distribution.
Stock predictability in that sense has been very insignificant, but you can run millions of Monte Carlo or geometric Brownian motion paths and claim that you have predicted a future price, when in fact that would just be an assumption dependent on the set of inputs you have specified. In that sense, if you believe prices to be following a random walk, then it is the price change itself that follows a random walk, not the prices themselves. This leads to the assumption that in this case markets are of the weak form, in which prices reflect all past information, but it also means that today's price is a martingale of all previously known event data points of that asset. Then again, you should also test for autocorrelation within your variables, because that may well be the reason why anomalies are evident. This definitely doesn't work for assets with high storage costs, such as commodities, or for illiquid asset classes. Long story short, your interpretation should be highly dependent on what assumptions you are inputting to begin with. As you fiddle around with your inputs, your outputs might vary highly, so knowing the dataset you are working with will provide you with better insights as to whether the anomalies are true anomalies, or just a result of autocorrelation or multicollinearity between the variables you are trying to test.

I find the argument binary. As long as there is self-interest in human behavior, the EMH is a fairy tale.
I am confused by a question from a past paper on Lagrangian and Hamiltonian mechanics. A Lagrangian (in plane polar coordinates) for a spaceship (of mass $m$) under the influence of a central force directed towards the centre of the Earth is $$L=\frac{1}{2} m \left( \dot{r}^2 +r^2 \dot{\phi}^2\right)+\frac{k}{r}$$ where $k$ is a constant of the central field, $l$ is the magnitude of the angular momentum, which is a conserved quantity, and the total energy of this system is given by: $$ m r^2 \dot\phi =l \quad \quad E=\frac{m\dot{r}^2}{2}+\frac{l^2}{2mr^2}-\frac{k}{r}$$ I could find the effective potential $$V^{\text{eff}}(r)=\frac{l^2}{2mr^2}-\frac{k}{r} $$ The radius $R$ and period $T$ when the spaceship moves on a circular orbit with a constant magnitude of angular momentum $l$ could be written in terms of $l$, $k$ and $m$: $$ R=\frac{l^2}{mk} \quad \quad T=\frac{2 \pi l^3}{ mk^2} $$ The spaceship moves on a circular orbit with radius $R = 7000$ km and velocity $V = 2\pi R/T = 8$ km/s. The astronaut on the spaceship jumps directly towards the Earth with velocity $v = 8$ m/s. I'm asked to calculate the minimum distance of the astronaut from the centre of the Earth, with a hint that $v \ll V$, so I can expand the effective potential near its minimum. My basic idea is below. Before the astronaut jumps, the energy of the spaceship $E_0$ is equal to the effective potential $V^{\text{eff}}(r_0)$: $$E_0=V^{\text{eff}}(r_0)=\frac{l^2}{2mr_0^2}-\frac{k}{r_0}$$ After the astronaut jumps, the energy is $$E_{\text{jump}}= \frac{m_{as}\dot{r}^2}{2}+ \frac{l^2}{2m_{as}r^2}-\frac{k}{r}$$ where $m_{as}$ is the mass of the astronaut.
Because $v \ll V$, the change in radius is small, and therefore the change in potential is small, so $$E_{\text{jump}}=E_0+\frac{m_{as}\dot{r}^2}{2}$$ As the curve of the effective potential shown above suggests, I would like to use an approximation for $E_{\text{jump}}$: $$E_{\text{jump}}=E_0 + \frac12 \frac{\partial^2 V^{\text{eff}}(r_0)}{\partial r^2}(r-R)^2$$ Therefore, $$E_{\text{jump}}- E_0= \frac{m_{as}\dot{r}^2}{2} =\frac12 \frac{\partial^2 V^{\text{eff}}(r_0)}{\partial r^2}(r-R)^2 $$ But here I ran into a problem: $m_{as}$ is not given, so I'm not able to find a numerical solution from the above equation. Is there anything wrong with my idea?
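A hedged numeric sketch of one possible resolution (my own reasoning, stated as an assumption, not part of the question): the astronaut's mass should cancel, because both her radial kinetic energy and the curvature of *her* effective potential (with $l \propto m_{as}$) scale linearly with $m_{as}$. For an inverse-square force, $\partial_r^2 V^{\text{eff}}(R) = k/R^3 = m\omega^2$, so the radial oscillation frequency about the circular orbit equals the orbital frequency $\omega = V/R$, and the jump excites a radial oscillation of amplitude $\delta r = v/\omega = vR/V$.

```python
# Minimum distance from the centre of the Earth under the
# harmonic approximation dr = v / omega, omega = V / R.
R = 7000e3      # m, orbit radius from the question
V = 8000.0      # m/s, orbital speed (= 2 pi R / T)
v = 8.0         # m/s, astronaut's jump speed

omega = V / R                 # radial oscillation frequency (rad/s)
dr = v / omega                # oscillation amplitude, = v R / V
r_min = R - dr                # closest approach to the centre
```

With these numbers $\delta r = 7$ km, giving a minimum distance of about 6993 km, independent of $m_{as}$.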
Countable Topological Spaces are Separable Topological Spaces

Recall from the Separable Topological Spaces page that a topological space $(X, \tau)$ is said to be separable if it contains a countable and dense subset. We will now look at an extremely simple theorem which says that if $X$ is a countable set, then $(X, \tau)$ is a separable topological space regardless of the topology $\tau$.

Theorem 1: Let $(X, \tau)$ be a topological space. If $X$ is a countable set then $(X, \tau)$ is a separable topological space.

Proof: Let $X$ be a countable set and let $\tau$ be any topology on $X$. By definition, the topology $\tau$ must contain the whole set $X$. Let $A = X$. Then $A$ is a countable set. Furthermore, for all $U \in \tau \setminus \{ \emptyset \}$ we have that $U \subseteq A$. Therefore $A \cap U = U \neq \emptyset$, so $A$ is also a dense set. Therefore, $A$ is a countable dense subset of $X$ (namely the whole set), so $(X, \tau)$ is a separable topological space. $\blacksquare$
The problem is depicted in the picture. Generally, why don't we use engine-blown wings to fly? Can we use a wind-tunnel-like wing to generate lift? Can it generate more lift than thrust? If so, how many times more?

Because there is no free lunch. A single wing behind a propeller or a fan can see some amount of performance gain from being in the propeller's slipstream, but when you start stacking multiple wings together... it gets a little trickier. A single wing can get an advantage from the swirl induced by the propeller, in addition to the increased speed you've remarked upon. In a thesis written by LLM Veldhuis on wing-propeller interactions, he notes that when you have a propeller operating in front of a wing and rotating such that the inboard blade is always moving up, it changes the local angle of attack on the wing behind the propeller. This modifies the lift distribution (notionally shown in the figure below) and ultimately decreases the induced drag of the wing for a given operating lift coefficient. A set of experiments by Witkowski et al. (Propeller/wing interaction) in the 1980s recorded a wing seeing up to 40% savings in induced drag based on the adjustment of a tractor propeller's rotation rate! This is what I've seen cited as the most prominent effect of having a tractor propeller in front of a wing. If you have a small enough wing, though, say on the scale of a UAV, there's a good chance you could have a significant portion of your wing in the propwash, which can keep the flow attached to the wing at higher angles of attack, letting you potentially keep a control surface or two unstalled and able to maneuver the aircraft when they normally wouldn't. However, that's not lift, and it doesn't quite answer the question about the multiplane setup.
The problem with stacking wings so closely to each other goes back to investigations in the 1920s and can be most easily explained, I think, with the venturi effect. When air is accelerated by the curvature of a wing, the static pressure decreases and you get a suction force, which, on the upper surface of a wing, translates into lift. However, stack two wings (or other objects) close together (as in the picture below), and you get something...not so good. The image below is from a conceptual rotorcraft design submitted by the University of Maryland for the 2016 American Helicopter Society Graduate Design Contest and is a kinda strange thing called a quadrotor-biplane-tailsitter (we named it Halcyon). However, the upshot of the image (which is colored by local pressure -- the more negative the Cp, the lower the pressure) is that, due to the wings and fuselage (the bit in the middle), you are playing a game of diminishing returns. While the bottom wing sees very high velocity air going over it, leading to very high lift, that same low pressure zone also generates a strong downforce on the fuselage, effectively mitigating a lot of your lift from the lower wing (the design team estimated that it killed almost all of it, actually). The upper wing, since there's a little extra space, isn't as affected by that same phenomenon, so, in the end, the aircraft had enough lift, in theory, to fly...but it's not an efficient configuration at all. The same would be true for a stack of wings. In a 1923 paper by Max Munk entitled General Biplane Theory, he estimates that, in order to minimize interference effects between wings, they should be spaced apart by at least 3 chord lengths. That would easily move your wings so far apart that not even two wings could easily stay in the propeller wake...at least for full-size aircraft. First, I find it odd that your fan lists its thrust in units of mass. Thrust should be a force, not a mass. 
Now let's assume that this means the thrust is equivalent to the weight force of this mass in the gravitational field of earth (g = 9.80665 m/s²), and we get an actual thrust of 11.768 N. Thrust is the mass flow times the speed difference between intake and exit flow. In the static case and using the Froude hypothesis, this means: $$v_{exit} = 2\cdot \sqrt{\frac{8\cdot T}{\pi\cdot d^2_{Fan}\cdot\rho}} = 141.3\:m/s$$ Unfortunately, jet contraction means that only a small part of the wing behind the fan will experience this airspeed. Generally, lift is created by a wing by deflecting the air downwards. Normally, a big mass of air is deflected by a small amount, because this results in the best efficiency. If you insist on using a smaller wing and a higher deflection angle, efficiency will suffer. The number of wings will not affect the result - a single wing will create as much lift as any stack of multiple wings, only with less friction drag. Why is that? Lift is caused by deflecting air, and this deflection is done by a pressure field that needs suction on the top surface of the wing and overpressure on the bottom surface. In a stack of wings, the suction of the lower wing will also act on the bottom of the upper wing, so in effect only the suction on the topmost wing and the pressure below the bottommost wing will add a net amount of lift. It's better to reduce that stack to a single wing where every surface can contribute lift. If you want the wing to create as much lift as the fan creates thrust, it would need to deflect the exit stream completely downward, by 90°. This is far above any realistic value for a wing (or a stack of them). It will be better to place the wing at some distance from the fan exhaust. Then the exhaust stream will suck in air from the sides, so the mass of air being thrown at the wing will increase while its speed will decrease. 
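As a numerical sketch of the momentum-theory estimate above: the textbook static actuator-disk relation is $T = \tfrac{1}{2}\rho A v_{exit}^2$. The thread does not state the fan diameter; the 35 mm value below is an assumption, chosen so that this relation reproduces the quoted 141.3 m/s.

```python
import math

def exit_velocity(thrust_N, fan_diameter_m, rho=1.225):
    """Static exhaust velocity from actuator-disk theory: T = 0.5 * rho * A * v^2."""
    area = math.pi * fan_diameter_m ** 2 / 4.0
    return math.sqrt(2.0 * thrust_N / (rho * area))

thrust = 1.2 * 9.80665             # 1.2 kg of "thrust" converted to newtons: 11.768 N
v = exit_velocity(thrust, 0.035)   # assumed 35 mm fan diameter
print(round(v, 1))  # 141.3
```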
Now the same lift can be created with less flow deflection, but still the possible lift will be lower than the thrust of the fan. Next, your airfoil shape sucks for this application. You want maximum deflection and have a fixed angle at which the air hits the wing. Thus, you can afford a small leading edge radius and need a lot more camber. Like the airfoil of a Fowler flap! In order to increase the deflection angle, it will help to add a second flap with a gap between the two. And a third. This has been taken to its extreme by Frederick Handley Page in 1921 when he modified an RAF 19 airfoil with 7 slots for maximum lift, such that a staggered configuration of 8 airfoils resulted. Note that the airfoils are not at the same streamwise location but each sits below the wake of the preceding airfoil. (Handley-Page 8-element airfoil, taken from this paper by A. M. O. Smith, McDonnell-Douglas) Now it must be said that deflecting the flow will incur drag. If the flow is turned by 90°, even under ideal conditions the drag will be at least as big as the thrust. If you draw a box around the fan-wing combination and integrate over the edges, you will find that no horizontal component is left in the exiting airstream, so no net thrust is created by this contraption. Wouldn't it be better to swivel the fan to 90° in order to use its thrust for lift? If you use the fan thrust to move the fan-wing combination through air, then the wing can work on all the air that is coming at it. Make it big and move fast enough and lift will indeed be larger than thrust. To improve efficiency, replace the fan with a decent propeller and compare the result to the status quo. Blown-wing designs, which maximize the lift potential of the accelerated air stream of a fan (or propeller), showed superior low-speed capabilities as long as the engines were running. One example is the Custer Channel Wing as used on the CCW-5 shown below (source). No. 
The thrust that the engine provides is the thrust that the engine provides, no matter how you wish to tilt the exhaust flow. The lift the airfoil generates is from behind your CG, so your mass in front pushes the nose down and the tail is raised by the wing, which means your plane will tip forwards (downwards). Also, there's the wing interference thing everyone else has already mentioned. Also, turbulent air is not very useful for lift and bad for control.
For a random integer $x$ chosen uniformly between 2 and $n$, what is the expected value of the smallest prime factor of $x$ as a function of $n$? What is the behavior of the function as $n$ tends to infinity? Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. A quick and dirty answer... (I began before Will answered...) I first address the following question: what is the probability $\pi_n$ that a number has the $n$-th prime $p_n$ as its smallest prime factor. A random number is even with probability ${1\over 2}$, so the smallest prime factor will be 2 with probability ${1\over 2}$. An odd number is a multiple of 3 with probability ${1\over 3}$, so the smallest prime factor will be $3$ with probability ${1\over 2\cdot 3}$. If the number is not divisible by 2 or 3, which happens with probability $1 - {1\over 2} - {1\over 6} = {1\over 3} $, the smallest prime factor will be $5$ with probability ${1\over 5}\times{1\over 3} = {2 \over 2\cdot 3 \cdot 5}$. Then it will be 7 with probability ${1\over 7} \times (1 - {1\over 2} - {1\over 6} - {1\over 15}) = {1\over 7} \times {4 \over 15} = {8 \over 2\cdot 3 \cdot 5\cdot 7}$. Denoting this probability by $\pi_n$ and by $p_n$ the sequence of primes, we have $\pi_1 = {1\over 2}$ and $\pi_{n} = {1 \over p_n} \left( 1 - \sum_{i=1}^{n-1} \pi_i\right)$. Edit I take some time to see the connection between this answer and Will's. I compute the totient function: $\phi(2) = 1$, $\phi(2\cdot3) = 1\cdot 2$, $\phi(2\cdot 3\cdot 5) = 1 \cdot 2 \cdot 4$. Denoting $p_n\# = \prod_{i\le n} p_i$, it appears that for the first few terms I get the following: $$\pi_n = {\phi(p_{n-1}\#) \over p_n\#},$$ which is slightly different – but Will is computing an expectation, and he is right, cf. edit 6 below. 
Edit 2 This is logical from the definition of the totient function: ${\phi(p_{n-1}\#)\over p_{n-1}\#}$ is the proportion of numbers which are not divisible by $p_1, \dots, p_{n-1}$; multiply by ${1\over p_n}$ to get the proportion of numbers which are not divisible by $p_1, \dots, p_{n-1}$ but divisible by $p_n$. If one manages to prove by induction that the above-defined $\pi_n$ coincides with this, the fact that the sum is 1 should be clear. Edit 3 It is not difficult to complete. If we prove that for all $n$,$$1 - \sum_{i\le n} \pi_i = {\phi(p_n\#) \over p_n\#},$$we are done. This is true for $n=1, 2$. The induction step is:$$\begin{array}{rcl} 1 - \sum_{i\le n+1} \pi_i &=& \left(1 - \sum_{i\le n} \pi_i\right) - \pi_{n+1} \\ &=& \left(1 - \sum_{i\le n} \pi_i\right)\left( 1 - {1\over p_{n+1}}\right)\\ &=& \left({\phi(p_n\#) \over p_n\#} \right) \left( {p_{n+1} - 1\over p_{n+1}}\right)\\ &=& { \phi(p_n\#) \times (p_{n+1} - 1) \over p_n\# \times p_{n+1} }\\ &=& {\phi(p_{n+1}\#) \over p_{n+1}\#} \end{array},$$so we are done. Edit 4 Using Euler's trick, we easily have that $$1 - \sum_{i=1}^\infty \pi_i = \prod_p \left(1-{1\over p}\right) = { 1 \over \sum_{n=1}^\infty {1\over n}} = 0,$$which can surely be rewritten to respect modern standards... I am not familiar with analytic number theory, but I am sure this product is a classic. Edit 5 Yes, it is a classic, cf. Mertens' third theorem, which says that $\prod_{p\le n} \left(1-{1\over p}\right) \sim {e^{-\gamma}\over \log n}$. Using $p_n \sim n \log(n)$ we get that $$1 - \sum_{i=1}^n \pi_i = \prod_{p\le p_n} \left(1-{1\over p}\right) \sim {e^{-\gamma}\over \log n + \log\log n} \sim {e^{-\gamma}\over \log n }$$ and $$ \pi_n \sim {e^{-\gamma}\over n \log^2 n},$$ which gives the asymptotic behaviour of this sequence. Edit 6 In fact I didn't address the original question, but this is possible now. 
The smallest prime factor of a number taken uniformly between 1 and $p_n-1$ is $p_k$ with probability $\simeq {\pi_k \over \sum_{\ell<n}\pi_\ell}$. Its expectation is$$ { \sum_{\ell < n} p_\ell\pi_\ell \over \sum_{\ell < n} \pi_\ell} \sim \sum_{\ell < n} p_\ell\pi_\ell = \sum_{\ell < n} {\phi(p_\ell\#) \over p_\ell\#},$$as Will first stated (oh my God, why did that take me so long?). The asymptotic equivalent above shows that this goes to infinity as $n\rightarrow\infty$. Comparing with an integral gives $O\left( {p_n \over \log p_n} \right)$. Taking the primorials $$P_0 = 1, \; P_1 = 2, \; P_2 = 6, \; P_3 = 30, \; P_4 = 210,$$ I get your expected value as $n$ increases to $\infty$ as $$ E = \sum_{k = 0}^\infty \; \frac{\phi(P_k)}{P_k},$$ where $\phi$ is Euler's totient function. I'm not sure yet whether this is finite. Intuitively, I would expect this to be of the order of $\dfrac{n}{\log_e n}$ on the grounds that it is slightly more than the sum of the primes less than or equal to $n$ divided by $n-1$, and that is slightly less than $n$ times the frequency of primes near $n$. Empirically this looks reasonable: for $n$ between 32000 and 64000, something like $1.9\dfrac{n}{\log_e n}$ looks fairly close.
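The recurrence for $\pi_n$ and its closed form $\pi_n = \phi(p_{n-1}\#)/p_n\#$ are easy to check with exact arithmetic. A small sketch (the first six primes are hard-coded for illustration):

```python
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13]

def pi(n):
    """pi_n = (1/p_n) * prod_{i<n} (1 - 1/p_i), i.e. phi(p_{n-1}#) / p_n#."""
    prob = Fraction(1, primes[n - 1])
    for p in primes[:n - 1]:
        prob *= Fraction(p - 1, p)
    return prob

assert pi(1) == Fraction(1, 2)    # smallest prime factor is 2
assert pi(2) == Fraction(1, 6)    # 1/(2*3)
assert pi(3) == Fraction(2, 30)   # 2/(2*3*5), as computed in the answer
assert pi(4) == Fraction(8, 210)  # 8/(2*3*5*7), as computed in the answer

# partial sums of p_n * pi_n = phi(p_{n-1}#)/p_{n-1}#, the expectation terms
partial = sum(p * pi(k + 1) for k, p in enumerate(primes))
print(float(partial))  # grows without bound as more primes are included
```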
Absolutely Convex Sets Definition: Let $X$ be a linear space and let $E \subseteq X$. Then $E$ is said to be Absolutely Convex if whenever $x, y \in E$ and $a, b \in \mathbf{F}$ are such that $|a| + |b| \leq 1$ then $(ax + by) \in E$. Proposition 1: Let $X$ be a linear space and let $E \subseteq X$. If $E$ is absolutely convex then $E$ is convex. The converse of Proposition 1 is not true in general. See Example 1. Proof: Suppose that $E$ is absolutely convex. Then whenever $x, y \in E$ and $a, b \in \mathbf{F}$ are such that $|a| + |b| \leq 1$ then $ax + by \in E$. In particular, when $a \in [0, 1]$ and $b = 1 - a$ then $|a| + |b| = a + (1 - a) = 1$, and so $ax + (1 - a)y \in E$ for all $a \in [0, 1]$. Therefore $E$ is convex. $\blacksquare$ Proposition 2: Let $X$ be a seminormed linear space. Then the open unit ball $B(0, 1) = \{ x \in X : p(x) < 1 \}$ and the closed unit ball $\bar{B}(0, 1) = \{ x \in X : p(x) \leq 1 \}$ are both absolutely convex sets. Proof: Let $x, y \in B(0, 1)$ so that $p(x) < 1$ and $p(y) < 1$. Let $a, b \in \mathbf{F}$ be such that $|a| + |b| \leq 1$ (the case $a = b = 0$ is trivial since $p(0) = 0 < 1$). Then $|b| \leq 1 - |a|$. Using this and the properties of seminorms we get: $$p(ax + by) \leq |a| p(x) + |b| p(y) < |a| + |b| \leq 1$$ This shows that $ax + by \in B(0, 1)$. A similar argument shows that $\bar{B}(0, 1)$ is absolutely convex. $\blacksquare$ Let's look at some examples. Example 1 Consider the linear space $\mathbb{C}$ with the usual norm and let $A \subset \mathbb{C}$ be defined by: $$A = \{ z \in \mathbb{C} : z = t \: \text{for some} \: t \in [0, 1] \}$$ Observe that $A$ is simply the closed interval $[0, 1]$ embedded into $\mathbb{C}$. The set $A$ is clearly convex, however, it is not absolutely convex. To see this, observe that if $a = i$ and $b =0$ then $a$ and $b$ are such that $|a| + |b| \leq 1$ (the sum actually equals $1$). Take the points $x = 1$ and $y = 0$, both of which are in $A$. Then: $$ax + by = i \cdot 1 + 0 \cdot 0 = i \not\in A$$ Visually, multiplication by $i$ rotates the line segment $A$ counterclockwise $90$ degrees about the origin, and the rotated segment clearly does not lie in $A$. 
This shows that there exist convex sets that are not absolutely convex, and hence the converse of Proposition 1 is not true in general.
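As a quick numerical illustration (not part of the original page) of Proposition 2 and Example 1, using the modulus on $\mathbb{C}$ as the norm:

```python
import random

random.seed(0)

# Proposition 2: the closed unit disc in C is absolutely convex.
for _ in range(1000):
    x = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    y = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(x) > 1 or abs(y) > 1:
        continue                      # keep only points inside the closed disc
    a = random.uniform(-0.5, 0.5)     # real scalars with |a| + |b| <= 1
    b = random.uniform(-0.5, 0.5)
    assert abs(a * x + b * y) <= 1    # |ax + by| <= |a||x| + |b||y| <= 1

# Example 1: A = [0, 1] is convex but not absolutely convex, since
# a = i, b = 0, x = 1, y = 0 gives ax + by = i, which is not in [0, 1].
z = 1j * 1 + 0 * 0
assert z == 1j and z.imag != 0        # i is not a real number, so i is not in A
```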
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724 Like this: [/url][/wiki][/url] [/wiki] [/url][/code] Many different combinations work. To reproduce, paste the above into a new post and click "preview". x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X I wonder if this works on other sites? (Remove/Change ) Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Related:[url=http://a.com/] [/url][/wiki] My signature gets quoted. This too. And my avatar gets moved down Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote: Related: [ Code: Select all [wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki] ] My signature gets quoted. This too. And my avatar gets moved down It appears to be possible to quote the entire page by repeating that several times. 
I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places. Here, I'll fix it: [/wiki][url]conwaylife.com[/url] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: It appears I fixed @Saka's open <div>. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? The post before the one you quoted. The code was: Code: Select all [wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up. 
Apart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these: -Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish. -Saka Quote: A combination of a diluted Aidan Mode and quotes, leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages. -Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes bottom bar to the side. Signature gets coded. -Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by software. Leaves the rest of the page white and quotes. Replies and post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage. Last edited by Saka on June 21st, 2017, 10:51 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA I actually laughed at the terminology. "IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways, [/wiki] I like making rules Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Here's another one. 
It pushes the avatar down all the way to the signature bar. Let's name it... -Fluffykitty Pusher Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] Screenshot? New one yay. -Aidan Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side. Last edited by Saka on June 21st, 2017, 10:20 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA Someone should create a phpBB-based forum so we can experiment without mucking about with the forums. This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) 
Current rule interest: B2ce3-ir4a5y/S2-c3-y Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X The testing grounds have now become similar to actual military testing grounds. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm We also have this thread. Also, it is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you: Code: Select all [wiki][viewer][/wiki][viewer][/viewer][/viewer] Last edited by fluffykitty on June 22nd, 2017, 11:50 am, edited 1 time in total. I like making rules 83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact: oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar. EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale. Code: Select all x = 8, y = 10, rule = B3/S23 3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo! No football of any dui mauris said that. 
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki] This doesn't do good things Edit: Code: Select all [wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url] Neither does this ^ What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer] I get about five different scroll bars when I preview this Edit: Code: Select all [viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki] Makes a really long post and makes the rest of the thread large and centred Edit 2: Code: Select all [url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer] Just don't do this (Sorry I'm having a lot of fun with this) ^ What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy Here's another small one: Code: Select all [url][wiki][viewer][/wiki][/url][/viewer] fg Moosey Posts: 2493 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. 
Contact: Code: Select all [wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki] [/code] Is a pinch broken Doesn't this thread belong in the sandbox? I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" 77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Moosey Posts: 2493 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: 77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Now it's half an aidan mode testing grounds. Also, fluffykitty's messmaker: Code: Select all [viewer][wiki][*][/viewer][/*][/wiki][/quote] I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf
Even though Yiorgos S. Smyrlis' answer is simpler, I think it is useful to have a slightly different proof, just to add another point of view on the same argument, which is always good :) Consider the norm $\| \cdot \|_1$ induced by $\langle \cdot, \cdot \rangle_1$ and the norm $\| \cdot \|_2$ induced by $\langle \cdot , \cdot \rangle_2$. By hypothesis and the definition of the induced norm, $$\| v \|_1^2 = \langle v, v \rangle_1= \langle v, v \rangle_2= \| v \|_2^2 $$ $\forall v \in V$. So the two norms are the same (squaring causes no problem thanks to the positivity of the norm). Now we want to reverse the reasoning: if two product-induced norms coincide on every element of the space, what can we say about the inner products? It is immediate (and left as a little exercise for the reader: just write down the definitions and use bilinearity) to prove that given a real inner product and its induced norm, the following identity holds: $$ \langle x,y \rangle = \dfrac{1}{4} \left( \|x+y \|^2 - \|x-y\|^2 \right) $$ This formula is called the polarization identity (for more see here). So by the above identity and the hypothesis we have $$ \langle x,y \rangle_1 = \dfrac{1}{4} \left( \|x+y \|_1^2 - \|x-y\|_1^2 \right) = \dfrac{1}{4} \left( \|x+y \|_2^2 - \|x-y\|_2^2 \right) = \langle x,y \rangle_2 $$ $\forall x,y \in V$, and so we are done. I wanted to stress the deep connection between an inner product and its induced norm, and show some useful tools. Hope it helps :)
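A quick numerical sanity check of the polarization identity (illustration only, using the standard dot product on $\mathbb{R}^4$):

```python
import random

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
x = [random.uniform(-5, 5) for _ in range(4)]
y = [random.uniform(-5, 5) for _ in range(4)]

xp = [a + b for a, b in zip(x, y)]    # x + y
xm = [a - b for a, b in zip(x, y)]    # x - y

# <x, y> = ( ||x+y||^2 - ||x-y||^2 ) / 4
polarized = (dot(xp, xp) - dot(xm, xm)) / 4.0
assert abs(polarized - dot(x, y)) < 1e-9
```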
The Annals of Statistics Ann. Statist. Volume 24, Number 2 (1996), 659-681. Efficient estimation of integral functionals of a density Abstract We consider the problem of estimating a functional of a density of the type $\int \phi (f, \cdot)$. Starting from efficient estimators of linear and quadratic functionals of $f$ and using a Taylor expansion of $\phi$, we build estimators that achieve the $n^{-1/2}$ rate whenever $f$ is smooth enough. Moreover, we show that these estimators are efficient. Concerning the estimation of quadratic functionals (more precisely, of the integrated squared density), Bickel and Ritov have already built efficient estimators. We propose here an alternative construction based on projections, which seems more natural. Article information Source Ann. Statist., Volume 24, Number 2 (1996), 659-681. Dates First available in Project Euclid: 24 September 2002 Permanent link to this document https://projecteuclid.org/euclid.aos/1032894458 Digital Object Identifier doi:10.1214/aos/1032894458 Mathematical Reviews number (MathSciNet) MR1394981 Zentralblatt MATH identifier 0859.62038 Citation Laurent, Béatrice. Efficient estimation of integral functionals of a density. Ann. Statist. 24 (1996), no. 2, 659--681. doi:10.1214/aos/1032894458. https://projecteuclid.org/euclid.aos/1032894458
[A little note: This post, like many others on this blog, contains a few mathematical symbols which are displayed using MathJax. If you are reading this using an RSS reader such as Feedly and you see a lot of $ signs floating around, you may need to click through to the blog to see the proper symbols.] People following the reporting of physics in the popular press might remember having come across a paper earlier this year that claimed to have detected the "largest structure in the Universe" in the distribution of quasars, that "challenged the Cosmological Principle". This was work done by Roger Clowes of the University of Central Lancashire and collaborators, and their paper was published in the Monthly Notices of the Royal Astronomical Society back in March (though it was available online from late last year). The reason I suspect people might have come across it is that it was accompanied by a pretty extraordinary amount of publicity, starting from this press release on the Royal Astronomical Society website. This was then taken up by Reuters, and featured on various popular science websites and news outlets, including New Scientist, The Atlantic, National Geographic, Space.com, The Daily Galaxy, Phys.org, Gizmodo, and many more. The structure they claimed to have found even has its own Wikipedia entry. Obligatory artist's impression of a quasar. One thing that you notice in a lot of these reports is the statement that the discovery of this structure violates Einstein's theory of gravity, which is nonsense. This is sloppy reporting, sure, but the RAS press release is also partly to blame here, since it includes a somewhat gratuitous mention of Einstein, and this is exactly the kind of thing that non-expert journalists are likely to pick up on. Mentioning Einstein probably helps generate more traffic after all, which is why I've put him in the title as well. 
But aside from the name-dropping, what about the main point about the violation of the cosmological principle? As a quick reminder, the cosmological principle is sometimes taken to be the assumption that, on large scales, the Universe is well-described as homogeneous and isotropic. The question of what constitutes "large scales" is sometimes not very well-defined: we know that on the scale of the Solar System the matter distribution is very definitely not homogeneous, and we believe that on the scale of size of the observable Universe it is. Generally speaking, people assume that on scales larger than about $100$ Megaparsecs, homogeneity is a fair assumption. A paper by Yadav, Bagla and Khandai from 2010 showed that if the standard $\Lambda$CDM cosmological model is correct, the scale of homogeneity must be at most $370$ Mpc. On the other hand, this quasar structure that Clowes et al. found is absolutely enormous: over 4 billion light years, or more than 1000 Mpc, long. Does the existence of such a large structure mean that the Universe is not homogeneous, the cosmological principle is not true, and the foundation on which all of modern cosmology is based is shaky? Well actually, no. Unfortunately Clowes' paper is wrong, on several counts. In fact, I have recently published a paper myself (journal version here, free arXiv version here) which points out that it is wrong. And, on the principle that if I don't talk about my own work, no one else will, I'm going to try explaining some of the ideas involved here. The first reason it is wrong is something that a lot of people who should know better don't seem to realise: there is no reason that structures should not exist which are larger than the homogeneity scale of $\Lambda$CDM. You may think that this doesn't make sense, because homogeneity precludes the existence of structures, so no structure can be larger than the homogeneity scale. Nevertheless, it does and they can. Let me explain a little more. 
The point here is that the Universe is not homogeneous, at any scale. What is homogeneous and isotropic is simply the background model we use to describe its behaviour. In the real Universe, there are always fluctuations away from homogeneity at all scales – in fact the theory of inflation basically guarantees this, since the power spectrum of potential fluctuations is close to scale-invariant. The assumption that all cosmological theory really rests on is that these fluctuations can be treated as perturbations about a homogeneous background – so that a perturbation theory approach to cosmology is valid. Given this knowledge that the Universe is never exactly homogeneous, the question of what the "homogeneity scale" actually means, and how to define it, takes on a different light. (Before you ask, yes it is still a useful concept!) One possible way to define it is as that scale above which density fluctuations $\delta$ generally become small compared to the homogeneous background density. In technical terms, this means the scale at which the two-point correlation function for the fluctuations, $\xi(r)$, (of which the power spectrum $P(k)$ is the Fourier transform) becomes less than $1$. Based on this definition, the homogeneity scale would be around $10$ Mpc. It turns out that this definition, and the direct measurement of $\xi(r)$ itself, is not very good for determining whether or not the Universe is a fractal, which is a question that several researchers decided was an important one to answer a few years ago. This question can instead be answered by a different analysis, which I explained once before here: essentially, given a catalogue with the positions of many galaxies (or quasars, or whatever), draw a sphere of radius $R$ around each galaxy, and count how many other galaxies lie within this sphere, and how this number changes with $R$.
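As a concrete illustration (my own sketch, not code from any of the papers discussed), the counts-in-spheres procedure just described can be run on a toy homogeneous point set; all names and parameter values here are assumptions chosen for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy "catalogue": a homogeneous (Poisson) point set in the unit box.
rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, size=(5000, 3))
tree = cKDTree(pts)

# Use only interior points as sphere centres so the spheres stay inside the box.
centres = pts[np.all((pts > 0.1) & (pts < 0.9), axis=1)]

radii = np.array([0.02, 0.04, 0.08])
# Average number of neighbours within R (the query point itself is excluded).
counts = np.array([tree.query_ball_point(centres, r, return_length=True).mean() - 1
                   for r in radii])

# Local scaling exponent D from successive radius doublings: N(<R) ~ R^D.
D = np.log2(counts[1:] / counts[:-1])
print(D)  # close to 3 for a homogeneous distribution
```

For a genuinely homogeneous set the fitted exponent comes out close to 3, up to shot noise; a fractal distribution would give a stable exponent below 3.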
The scale above which the average of this number for all galaxies starts scaling as the cube of the radius, $$N(<R)\propto R^3,$$ (within measurement error) is then the homogeneity scale (if it instead scales as some other constant power of $R$, the Universe has a fractal nature). This is the definition of the homogeneity scale used by Yadav et al. and it is related to an integral of $\xi(r)$; typically measurements of the homogeneity scale using this definition come up with values of around $100-150$ Mpc. The figure that proves that the distribution of quasars is in fact homogeneous on the expected scales. For details, see arXiv:1306.1700. To get back to the original point, neither of these definitions of the homogeneity scale makes any claim about the existence of structures that are larger than that. In fact, in the $\Lambda$CDM model, the correlation function for matter density fluctuations is expected to be small but positive out to scales larger than either of the two homogeneity scales defined above (though not as large as Yadav et al.'s generous upper limit). The correlation function that can actually be measured using any given population of galaxies or quasars will extend out even further. So we already expect correlations to exist beyond the homogeneity scale – this means that, for some definitions of what constitutes a "structure", we expect to see large "structures" on these scales too. The second reason that the claim by Clowes et al. is wrong is, however, less subtle. Given the particular definition of a "structure" they use, one would expect to find very large structures even if density correlations were exactly zero on all scales. Yes, you read that right. It's worth going over how they define a "structure", just to make this absolutely clear. About the position of each quasar in the catalogue they draw a sphere of radius $L$.
If any other quasars at all happen to lie within this sphere, they are classified as part of the same "structure", which can now be extended in other directions by repeating the procedure about each of the newly added member quasars. After repeating this procedure over all $18,722$ quasars in the catalogue, the largest such group of quasars identified becomes the "largest structure in the Universe". It should be pretty obvious now that the radius $L$ chosen for these spheres, while chosen rather arbitrarily, is crucial to the end result. If it is too large, all quasars in the catalogue end up classified as part of the same truly ginormous "structure", but this is not very helpful. This is known as "percolation" and the critical percolation threshold has been thoroughly studied for Poisson point sets – which are by definition random distributions of points with no correlation at all. The value of $L$ that Clowes et al. chose to use, for no apparent reason other than that it gave them a dramatic result, was $100$ Mpc – far too large to be justified on any theoretical grounds, but slightly lower than the critical percolation threshold would be if the quasar distribution was similar to that of a Poisson set. On the other hand, the "largest structure in the Universe" only consists of $73$ quasars out of $18,722$, so it could be entirely explained as a result of the poor definition ... Now I'll spare you all the details of how to test whether, using this definition of a "structure", one would expect to find "structures" extending over more than $1000$ Mpc in length or with more than $73$ members or whatever, even in a purely random distribution of points, which are by definition homogeneous. Suffice it to say that it turns out one would. This plot shows the maximum extent of such "structures" found in $10,000$ simulations of completely uncorrelated distributions of points, compared to the maximum extent of the "structure" found in the real quasar catalogue.
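To make the sphere-drawing procedure concrete, here is a small friends-of-friends sketch (my own illustration, not the authors' code) applied to a single uncorrelated point set; the point count and linking length are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1000.0, size=(2000, 3))  # toy Poisson "quasars", Mpc units
L = 60.0                                        # linking length (illustrative)

# Link every pair of points closer than L, then take connected components:
# exactly the iterative sphere-drawing procedure described above.
tree = cKDTree(pts)
pairs = tree.query_pairs(L, output_type="ndarray")
n = len(pts)
adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
n_groups, labels = connected_components(adj, directed=False)

sizes = np.bincount(labels)
print(sizes.max())  # size of the "largest structure" found in pure noise
```

Even in pure noise the largest group grows quickly as $L$ approaches the percolation threshold, which is the whole point of the argument above.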
The probability distribution of extents of largest "structures" found in 10,000 random point sets for two different choices of $L$. Vertical lines show the actual values found for "structures" in the quasar catalogue. The actual values are not very unusual. Figure from arXiv:1306.1700. To summarise then: finding a "structure" larger than the homogeneity scale does not violate the cosmological principle, because of correlations; on top of that, the "largest structure in the Universe" is actually not really a "structure" in any meaningful sense. In my professional opinion, Clowes' paper and all the hype surrounding it in the press is nothing more than that – hype. Unfortunately, this is another verification of my maxim that if a paper to do with cosmology is accompanied by a big press release, it is odds-on to turn out to be wrong. Finally, before I leave the topic, I'll make a comment about the presentation of results by Clowes et al. Here, for instance, is an image they presented showing their "structure", which they call the 'Huge-LQG', with a second "structure" called the 'CCLQG' towards the bottom left: 3D representation of the Huge-LQG and CCLQG. From arXiv:1211.6256. Looks impressive! Until you start digging a bit deeper, anyway.
Firstly, they've only shown the quasars that form part of the "structure", not all the others around it. Secondly, they've drawn enormous spheres (of radius $33$ Mpc) at the position of each quasar to make it look more dramatic. In actual fact the quasars are way smaller than that. The combined effect of these two presentational choices is to make the 'Huge-LQG' look far more plausible as a structure than it really is. Here's a representation of the exact same region of space that I made myself, which rectifies both problems: Quasar positions around the "structures" claimed by Clowes et al. Do you still see the "structures"?
Basic Theorems Regarding Homotopies Recall from the Homotopic Mappings Relative to a Subset of a Topological Space page that if $X$ and $Y$ are topological spaces and $A \subseteq X$ is a subspace, and $f, g : X \to Y$ are continuous functions, then $f$ is said to be homotopic to $g$ relative to $A$ if there exists a continuous function $H : X \times I \to Y$ such that: 1) $H_0 = f$. 2) $H_1 = g$. 3) $H(a, t) = f(a) = g(a)$ for all $a \in A$ and for all $t \in I$. In such cases we write $f \simeq_A g$. Furthermore, if $A = \emptyset$ then we simply say that $f$ is homotopic to $g$ and write $f \simeq g$. We will now state some basic theorems regarding homotopies. Theorem 1: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be embeddings. If $f$ is isotopic to $g$ then $f$ is homotopic to $g$. Proof: Since $f$ is isotopic to $g$ there exists a continuous function $H : X \times I \to Y$ such that $H_t : X \to Y$ is an embedding for each $t \in I$, $H_0 = f$, and $H_1 = g$. Simply forgetting the condition that each $H_t$ is an embedding, $H$ is itself a homotopy from $f$ to $g$, so $f$ is homotopic to $g$. $\blacksquare$
Constant Paths in a Topological Space Recall from the Products of Paths Relative to {0, 1} in a Topological Space page that if $X$ is a topological space and $\alpha, \beta : I \to X$ are paths such that $\alpha(1) = \beta(0)$ then the product path $\alpha\beta : I \to X$ is defined by starting at the initial point of $\alpha$, traversing $\alpha$ twice as fast, then traversing $\beta$ twice as fast, and ending at the terminal point of $\beta$. We let $[\alpha]$ denote the class of all paths homotopic to $\alpha$ relative to $\{0, 1\}$:

(1) $[\alpha] = \{ \beta : I \to X \mid \beta \simeq_{\{0,1\}} \alpha \}$

We proved that the operation of multiplication of these equivalence classes given by:

(2) $[\alpha][\beta] = [\alpha\beta]$

is well-defined. We now move on to showing that constant paths act as left-handed and right-handed identities. Definition: Let $X$ be a topological space. For any $x \in X$ we denote $c_x : I \to X$ to be the Constant Path at $x$ defined for all $t \in I$ by $c_x(t) = x$. Now let $\alpha : I \to X$ be a path such that $\alpha(0) = x$ and $\alpha(1) = y$. We will prove that $[c_x][\alpha] = [\alpha]$ and $[\alpha][c_y] = [\alpha]$, so that $[c_x]$ is a left-handed identity for $[\alpha]$ and $[c_y]$ is a right-handed identity for $[\alpha]$. Proposition 1: Let $X$ be a topological space and let $\alpha : I \to X$ be a path such that $\alpha(0) = x$ and $\alpha(1) = y$. Then: a) $[\alpha][c_y] = [\alpha]$. b) $[c_x][\alpha] = [\alpha]$. Proof of a) To show that $[\alpha][c_y] = [\alpha]$ we must show that $\alpha c_y \simeq_{\{0, 1\}} \alpha$. Define a function $H : I \times I \to X$ by:

$$H(s, t) = \begin{cases} \alpha\left(\dfrac{2s}{1+t}\right) & 0 \le s \le \dfrac{1+t}{2} \\ y & \dfrac{1+t}{2} \le s \le 1 \end{cases}$$

Then $H$ is continuous since $\alpha$ is continuous and by The Gluing Lemma (both pieces take the value $y$ where $s = \frac{1+t}{2}$). Furthermore, $H_0 = \alpha c_y$ and $H_1 = \alpha$. Lastly, $H_t(0) = \alpha(0) = x$ for all $t \in I$ and $H_t(1) = y = \alpha(1)$ for all $t \in I$. So indeed, $\alpha c_y \simeq_{\{0, 1\}} \alpha$. So $[\alpha c_y] = [\alpha]$, that is: $[\alpha][c_y] = [\alpha]$. Proof of b) Analogous to (a).
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either $$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$ The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes. On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$). However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one. So, has such a proof (with an upper bound on $c$) been published? 
If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme? I should maybe clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis $$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$ Or is there an entangled counterfeiting strategy that does better? Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
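For what it's worth, the $5/8$ per-qubit figure for both measure-and-copy strategies is easy to verify numerically (my own check, not from the question or any cited paper). Measuring in a basis $\{e_0, e_1\}$ yields outcome $e_k$ with probability $|\langle e_k|\psi\rangle|^2$, and each of the two copies of $|e_k\rangle$ then passes the bank's test with probability $|\langle e_k|\psi\rangle|^2$, giving $\sum_k |\langle e_k|\psi\rangle|^6$ per state:

```python
import numpy as np

# The four Wiesner states |0>, |1>, |+>, |->
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
          np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def both_pass(basis):
    # Measure in `basis`, output two copies of the observed basis state.
    # P(both copies pass) = sum_k |<e_k|psi>|^6, averaged over the four states.
    return np.mean([sum(abs(e @ psi) ** 6 for e in basis) for psi in states])

computational = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
t = np.pi / 8
rotated = [np.array([np.cos(t), np.sin(t)]),
           np.array([np.sin(t), -np.cos(t)])]

print(both_pass(computational), both_pass(rotated))  # both equal 5/8 per qubit
```

Both single-qubit strategies give exactly $5/8$, and since the qubits are attacked independently the overall success probability is $(5/8)^n$.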
For a mobile robot - four wheels, front wheel steering - I use the following (bicycle) prediction model to estimate its state based on accurate radar measurements only. No odometry or any other input information $u_k$ is available from the mobile robot itself. $$ \begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \\ v_{k+1} \\ a_{k+1} \\ \kappa_{k+1} \\ \end{bmatrix} = f_k(\vec{x}_k,u_k,\vec{\omega}_k,\Delta t) = \begin{bmatrix} x_k + v_k \Delta t \cos \theta_k \\ y_k + v_k \Delta t \sin \theta_k \\ \theta_k + v_k \kappa_k \Delta t \\ v_k + a_k \Delta t \\ a_k \\ \kappa_k + \frac{a_{y,k}}{v_{x,k}^2} \end{bmatrix} + \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_{\theta} \\ \omega_v \\ \omega_a \\ \omega_{\kappa} \end{bmatrix} $$ where $x$ and $y$ are the position, $\theta$ is the heading, and $v$, $a$ are the velocity and acceleration respectively. Vector $\vec{\omega}$ is zero-mean white Gaussian noise and $\Delta t$ is the sampling time. The state variables $\begin{bmatrix} x & y & \theta & v & a \end{bmatrix}$ are all measured, although $\begin{bmatrix} \theta & v & a \end{bmatrix}$ have high variance. The only state that is not measured is the curvature $\kappa$. Therefore it is computed using the measured quantities $a_{y,k}$ and $v_{x,k}$, which are the lateral acceleration and the longitudinal velocity. My Question: Is there a better way of predicting heading $\theta$, velocity $v$, acceleration $a$, and curvature $\kappa$? Is it enough for $a_{k+1}$ to just assume Gaussian noise $\omega_a$ and use the previous best estimate $a_k$, or is there an alternative? For curvature $\kappa$ I also thought of using the yaw rate $\dot{\theta}$ as $\kappa = \frac{\dot{\theta}}{v_x}$, but then I would have to estimate the yaw rate too.
To make my nonlinear filter model complete, here is the measurement model: $$ \begin{equation} \label{eq:bicycle-model-leader-vehicle-h} y_k = h_k(x_k,k) + v_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ \end{bmatrix} \begin{bmatrix} x_k \\ y_k \\ \theta_k \\ v_k \\ a_k \\ \kappa_k \\ \end{bmatrix} + \begin{bmatrix} v_x \\ v_y \\ v_{\theta} \\ v_v \\ v_a \\ \end{bmatrix} \end{equation} $$ More info on the available data: the measured state vector is already obtained/estimated using a Kalman filter. What I want to achieve is a smooth trajectory together with the estimated curvature $\kappa$. For this, the requirement is to use another Kalman filter or a moving horizon estimation approach.
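A minimal sketch of the prediction step stated above (my own transcription, with process noise omitted; in an EKF you would additionally propagate the covariance with the Jacobian of $f$). Note one simplifying assumption: curvature is simply held constant here, i.e. the $a_{y,k}/v_{x,k}^2$ correction is treated as part of the measurement update rather than the process model:

```python
import numpy as np

def f_predict(x, dt):
    """One prediction step of the bicycle model.

    x = [px, py, theta, v, a, kappa]; process noise omitted, curvature
    held constant (the a_y / v_x^2 term would need the measured signals).
    """
    px, py, th, v, a, kappa = x
    return np.array([
        px + v * dt * np.cos(th),   # position update along the heading
        py + v * dt * np.sin(th),
        th + v * kappa * dt,        # heading rate from curvature
        v + a * dt,                 # constant-acceleration velocity model
        a,                          # acceleration modelled as a random walk
        kappa,                      # curvature modelled as a random walk
    ])

x0 = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 0.0])
print(f_predict(x0, 0.5))  # straight-line motion: [1, 0, 0, 2, 0, 0]
```

Modelling $a$ and $\kappa$ as random walks (previous estimate plus noise) is the standard default when no input $u_k$ is available; richer alternatives only help if you have a signal to drive them.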
As long as long-range interactions (e.g., gravity) are negligible, the internal energy $U$ of a homogeneous system is an extensive quantity. In such a case, the second law of thermodynamics in fact guarantees that $U$ is a convex function of other extensive quantities it depends on, e.g., $S$, $V$, and $N$. An important observation is that given a thermodynamic system, a tiny part of it should be in equilibrium with the rest. Suppose that we take this tiny part to be negligibly small compared to the entire system, and simply call it "the system." The rest of the entire system acts as a reservoir with some fixed temperature $T_{0}$, pressure $P_{0}$, and chemical potential $\mu_{0}$, so let's call this "the reservoir." When the system and the reservoir are in thermodynamic equilibrium, the second law of thermodynamics demands that the total entropy be maximized. Notice that the change in the total entropy is given by\begin{equation}\Delta S_{\mathrm{total}} = \Delta S + \frac{Q_{\mathrm{reservoir}}}{T_{0}} = \Delta S - \frac{\Delta U + P_{0}\Delta V - \mu_{0}\Delta N}{T_{0}} = -\frac{1}{T_{0}} \Delta \Omega,\end{equation}where $\Omega = U-T_{0}S + P_{0}V-\mu_{0}N$. Hence, maximizing $S_{\mathrm{total}}$ is equivalent to minimizing $\Omega$. Here, $\Omega$ has three, as opposed to four, independent variables because there is a relation between $U$, $S$, $V$, and $N$. Taking the independent parameters to be $S$, $V$, and $N$, we have\begin{equation}\Omega(S,V,N) = U(S,V,N) - T_{0}S + P_{0} V - \mu_{0} N.\end{equation} A consistency condition is that for an arbitrary $T_{0}$, $P_{0}$, and $\mu_{0}$, there exists a unique minimum of $\Omega(S,V,N)$ with respect to its independent variables, viz., the system should be able to equilibrate with the reservoir. It then follows that the function $U(S,V,N)$ is convex. So far, $S$, $V$, $N$, and $U(S,V,N)$ are properties of a small system that is a part of a much larger body. 
However, for a homogeneous system, the internal energy of the entire body must have the same form as any small part of it. Therefore, what I have shown equally holds for an arbitrary homogeneous system. [Taking the independent variables of $\Omega$ to be $U$, $V$, and $N$ would lead to the concavity of the function $S(U,V,N)$.]
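Spelling out the minimization makes the link to convexity explicit (a standard computation filling in the step between "unique minimum for arbitrary $T_0$, $P_0$, $\mu_0$" and "convexity of $U$"):

```latex
% First-order (stationarity) conditions for \Omega(S,V,N):
\frac{\partial \Omega}{\partial S} = \frac{\partial U}{\partial S} - T_0 = 0,
\qquad
\frac{\partial \Omega}{\partial V} = \frac{\partial U}{\partial V} + P_0 = 0,
\qquad
\frac{\partial \Omega}{\partial N} = \frac{\partial U}{\partial N} - \mu_0 = 0,
% i.e. T = T_0, P = P_0 and \mu = \mu_0: the system equilibrates with the
% reservoir. For this stationary point to be a minimum for every admissible
% choice of (T_0, P_0, \mu_0), the second-order condition
\left( \frac{\partial^2 U}{\partial X_i \, \partial X_j} \right)_{X=(S,V,N)}
\succeq 0
% must hold everywhere, which is precisely the convexity of U(S,V,N).
```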
For example, assume $N$ people performed a selection test like the GMAT. Assume the distribution of the scores is a normal distribution (but its parameters are not known). If you have a list of the $n$ highest scores (approved people), how do you estimate $N$? Intuitively, it is totally possible to make such an estimation: if $n$ is equal to $N$ then you would expect the distribution of the $n$ scores to be normal (a few very high scores, a lot of average scores and a few very low scores); if $n\ll N$ then you would expect to see an almost linear increase in the number of scores as you look at the lowest scores (just a few high scores and a lot of low scores, where low here means low in comparison to the other available scores). The motivation is that in my country (Brazil), many times there is a test to get a public job and sometimes they do not publish how many people participated but publish a list of the $n$ people that passed and their scores. I would like to find a way to determine $N/n$ (candidates per job position). This is a nice question. I’ll give it a try... Denote by $\Phi$ the cdf of the normal distribution and by $\phi$ its density. The joint distribution of the $n$ highest scores $y_{(N-n+1)}, \dots, y_{(N)} $ is $$ {N ! \over (N-n) !} \Phi(y_{(N-n+1)})^{N-n} \phi(y_{(N-n+1)}) \cdots \phi( y_{(N)} ), $$ (that is, $N-n$ values among $N$ fall below $y_{(N-n+1)}$, and the others were observed). This is a bit puzzling at first sight: the usual setting would be that you know $N$ and $n$ and you want to infer the parameters of the distribution. Here you are interested only in $N$, so the MLE is obtained by maximizing $$(N - n) \log \Phi(y_{(N-n+1)}) + \log(N!) - \log\left((N-n) !\right)$$ (the $\phi$ factors do not depend on $N$ and can be dropped). This is kind of intuitive: the grade of the last admitted candidate $y_{(N-n+1)}$ and the number of admitted are all that matter.
A quick numerical experiment:

> N <- 1000
> set.seed(1)
> x <- sort(rnorm(N), decreasing=TRUE)[1:10]
> x
 [1] 3.810277 3.055742 2.675741 2.649167 2.497662 2.446531 2.401618 2.350554
 [9] 2.349493 2.321334
> f <- function(N, n = 10, xn = x[n])
+   (N-n)*log(pnorm(xn)) + lfactorial(N) - lfactorial(N-n)
> plot( 800:1200, sapply(800:1200, f), type="l" )

This looks promising. Let's have a look at the properties of this estimator, again for $n = 10$:

> MLE <- replicate( 1e4, {x <- sort(rnorm(N), decreasing=TRUE)[1:10];
+   optimize(f, c(100,20000), maximum=TRUE, xn = x[10])$maximum} )
> mean(MLE)
[1] 1112.798
> sd(MLE)
[1] 393.086
> hist(MLE, breaks=40)

However, from your edits to your question I think that you want to estimate both $N$ and the parameters of the normal distribution. This could be done by maximizing the log of the joint density above. However, for your concrete application, the underlying distribution is unlikely to be normal. It is surely a mixture between the grades of diversely prepared candidates, and I am not optimistic about the possibility of obtaining good estimates. So let's try that again, with unknown parameters for the normal distribution:

> g <- function(N, mu, sd, X) {
+   n <- length(X)
+   (N-n)*pnorm(X[n], mean = mu, sd = sd, log.p = TRUE) +
+     sum(dnorm(X, mean=mu, sd=sd, log=TRUE)) +
+     lfactorial(N) - lfactorial(N-n)
+ }
> optim( c(1000,0,1), function(theta) -g(theta[1], theta[2], theta[3], x) )
$par
[1] 292951.707061     -3.650634      1.498264
$value
[1] -13.78503

So the MLE says here that the best guess is a mean of $-3.65$ (instead of 0), a standard deviation of 1.49 (instead of 1, well, ok), and a size $N = 293\ 000$... hu... Well, let's try again with $n = 100$: 10 observations is not much!

> set.seed(17)
> x <- sort(rnorm(N), decreasing=TRUE)[1:100]
> optim( c(1000,0,1), function(theta) -g(theta[1], theta[2], theta[3], x) )
$par
[1] 1031.1320174   -0.2112833    1.1694436
$value
[1] -321.6677

Not so bad...? But if I try again with set.seed(18), the estimate for $N$ is 5000...!
I might be missing something, but for the moment I continue to be pessimistic. Moreover, in the real world the grades are not normal. It is not rare to have a frankly bi-modal distribution, and the right tail is often quite special. The best students/candidates are far off the distribution, I have had multiple occasions to check this. So it is wrong to rely on the right tail for making these estimates: for example, if the $n = 20$ best candidates are all from a (relatively) homogeneous group of $100$ very smart and well prepared candidates, you will estimate only the size of this group; you won’t have any information on the (much more numerous) less prepared candidates.
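For readers more at home in Python, the first numerical experiment above can be reproduced as follows (a sketch of my own; it uses a different random number generator, so the numbers will not match the R output exactly):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
N_true, n = 1000, 10
x = np.sort(rng.standard_normal(N_true))[::-1][:n]  # the n highest scores

def log_lik(N, xn=x[-1]):
    # (N - n) log Phi(y_(N-n+1)) + log N! - log (N-n)!, with the
    # distribution's parameters assumed known (standard normal).
    return (N - n) * norm.logcdf(xn) + gammaln(N + 1) - gammaln(N - n + 1)

res = minimize_scalar(lambda N: -log_lik(N), bounds=(n, 20000), method="bounded")
print(res.x)  # maximum-likelihood estimate of N
```

Treating $N$ as continuous (via `gammaln` in place of factorials) makes the one-dimensional maximization straightforward; rounding the result gives the integer estimate.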
Generalized quasilinear Schrödinger equations with concave functions $ l(s^2) $

School of Mathematics, South China University of Technology, Guangzhou 510640, China

This paper concerns the equation $ \begin{equation*}-\Delta u+W(x)u^{2\alpha-1}-ul'(u^2)\Delta l(u^2) = f(u) \ \mbox{or}\ h(u)+u^{2^*-1},\ x\in\mathbb{R}^N,\end{equation*} $ where $ W(x):\mathbb{R}^N \to \mathbb{R} $ is a given potential, $ l, h, f $ are suitable functions, $ u>0 $, $ 2^* = 2N/(N-2) $, $ N\geq 3 $, and $ l(s) = s^{\frac{\alpha}{2}} $ with $ \frac{1}{2}<\alpha<1 $.

Mathematics Subject Classification: 35J20, 35J62, 35Q55.

Citation: Yongkuan Cheng, Yaotian Shen. Generalized quasilinear Schrödinger equations with concave functions $ l(s^2) $. Discrete & Continuous Dynamical Systems - A, 2019, 39 (3) : 1311-1343. doi: 10.3934/dcds.2019056
Large Sets Avoiding Polynomial Configurations Combinatorics Seminar 30th April 2019, 11:00 am – 12:00 pm Howard House, 4th Floor Seminar Room. A 2017 result of András Máthé states that, given any degree $d$ polynomial $p : \mathbb{R}^{nv} \to \mathbb{R}$ with rational coefficients, there is a subset $E \subset \mathbb{R}^n$ of Hausdorff dimension $\frac{n}{d}$ that does not contain any $v$ distinct points $x_1, \ldots, x_v$ such that $p(x_1, \ldots, x_v) = 0$. We discuss a version of this result that applies when the coefficients of $p$ are assumed only to be algebraic over the rational numbers. Biography: Robert Fraser is a postdoc at the University of Edinburgh. He did his PhD at the University of British Columbia.
The Gait that closes the Gap The most commonly used gaits of human locomotion are walking and running. In walking, the important characteristics are that one or both legs always have ground contact, and that the center of mass is lifted during the single-leg stance phase. In running, true flight phases exist and the center of mass reaches a minimum during the stance phase. In order to change from one gait to the other, an abrupt switch is observed; there seems to be no smooth transition over many steps. The bipedal spring-mass model can reproduce both gaits, but significant differences in velocity, and hence in system energy, were observed. However, those observations offered only a limited view, because the simulation results were restricted to self-stable gait solutions. A more general investigation of the model's predictions should focus on periodic gait solutions and could propose additional strategies for stabilizing the identified gait patterns. In order to investigate periodic gait patterns for analyzing walking and running, a common platform for gait simulations is required. As mentioned, the bipedal spring-mass model is able to show both gaits, and using the methodology of Poincaré maps, periodic solutions can be identified. Applying Poincaré maps requires the definition of start and stop conditions that exist similarly in both gaits. A useful start and stop event was established in the Locomotion Lab Jena, namely the instant of Vertical Leg Orientation (VLO). The issue regarding VLO is explained in another article. Does there still exist a gap between walking and running when focussing on periodic gait solutions? Classification of the gaits. R: running, W2, W3: walking, GR: grounded running. Previously, it was known that walking operates at low velocities only. When increasing the energy, and with it the velocity, the body will lift off at some point. For that reason, the following investigation focusses on gaits at low velocities.
Analyses of running revealed that it is easy to find running at low speeds. Whether running is possible or not depends mainly on the setting of the leg's angle of attack $\alpha_0$. With our novel map for gait solutions, the system's variables at the instant of VLO are shown. Here, we focus on symmetric gait patterns, which means that the first half of the stance phase is symmetrically identical to the second half. The system state is then reduced to a single, independent variable, i.e. the height of the center of mass $y_0$ at the event of VLO. In the figure, two system parameters, i.e. the leg's angle of attack $\alpha_0$ and the system energy (thick lines), were varied. The simulation results for typical walking (green area, W2) reveal a height of the center of mass at midstance $y_0$ that is always above the height at touch down $y_{TD}$, although the difference between them can be very small. In contrast, the center of mass is always lowered in running (blue area, R), with $y_0 < y_{TD}$. Another difference regarding the angle of attack $\alpha_0$ is found in simulations with the same system energy. In conclusion, the gap between walking and running clearly exists. Is there a chance to reduce the gap between the typical gaits of walking and running? The model simulations contain one condition not explained so far: it is implied that the center of mass moves downward at the event of the leg's touch down. It seems curious that a leg with a fixed touch-down angle would approach the ground from a lower level. We now reject this restriction and allow the center of mass to move upward while the leg is touching the ground. What happens with the leg? In fact, the simulated leg is not defined while it has no ground contact. We simply assume that the leg lengthens or rotates before hitting the ground.
With this modified assumption, namely that the center of mass may be lifted during touch down, we can identify a novel gait (orange area) that connects walking and running. In this gait, we observe that the center of mass is clearly lowered during the single leg stance phase, as usual in running. However, there exists a distinct double support phase, which is typical for walking. This gait is called Grounded Running. Motion of the center of mass and the ground reaction forces with distinct double support phases of the legs. With the spring-mass model, a novel gait is identified that could be classified as a running gait with $y_0 < y_{TD}$, although there is always ground contact, i.e. no flight phase exists. Based on simulations, this is a new gait, but it has already been observed in the locomotion of birds, where it is called Grounded Running. The simulations have shown that grounded running exists at low velocities only. In contrast, the same simulation study revealed that walking can be faster than previously assumed. Featured Paper J. Rummel, Y. Blum, A. Seyfarth. From walking to running. Autonome Mobile Systeme 2009, R. Dillmann, J. Beyerer, C. Stiller, J.M. Zöllner, T. Gindele (Eds.) Springer: 89-96, 2009. DOI: 10.1007/978-3-642-10284-4_12 PDF A general system state for gait model simulations In walking and running, the motion of the centre of mass can be understood as a kind of oscillation; however, this oscillation is often hardly comparable to a sine function. For investigations of gaits, a special kind of mapping, the so-called Poincaré map, is used. In a Poincaré map, the system's state at the beginning and at the end of an oscillation are compared, which removes time from the dataset. In the case that the state variables are equal, a periodic motion is revealed. Poincaré originally used the period duration to fix start and end events (in order to analyse the motion of planets).
When simulating gaits with legged models, the period duration is not known prior to the start of the simulation. In gait simulations, scientists instead use physical system states to define the start and end events, and two conditions are very popular in the biomechanics field, i.e. the touch down of a leg and the apex of the centre of mass. The touch down event is often used for simulating Passive Dynamic Walkers with rigid legs. In models with compliant legs, the apex is mostly used to describe the start and end of a gait cycle. When the touch down event is used, the conditions of the counter leg need to be fixed a priori. Is the counter leg on the ground? Does it lift off at the same moment, or does it swing without ground contact? Similar a priori definitions are required when using the apex. If the leg is lifted at the instant of apex, the gait of running is selected. In the walking gait, the active leg must have ground contact during apex. By definition, the apex is the highest point of the centre of mass during locomotion. However, the stop condition of a simulation is triggered when one maximum is reached, which could be one maximum of possibly several peaks. In some simulations with a walking model, there were indeed more than one maximum identified. Hence, the apex or the maximum is not necessarily a unique event in walking, and the apex return map is incomplete or possibly incorrect. The instant of Vertical Leg Orientation (VLO) with a simulated walking model. At the Locomotion Lab in Jena, another system state event for Poincaré maps was established, which allows both gaits, walking and running, to be investigated with the same simulation and mapping. This event is called Vertical Leg Orientation (VLO). It is defined by the active leg having ground contact and being oriented vertically, i.e. the hip joint is vertically above the foot point. While the active leg is on the ground, the counter leg is lifted. This system state exists in both gaits equally.
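The VLO event can be read off a sampled centre-of-mass trajectory directly. The sketch below is illustrative only: the trajectory samples are made up, the foot is assumed to sit at $x = 0$, and the event is located by linear interpolation between the two samples where the hip passes over the foot.

```python
import math

# Hypothetical sampled CoM trajectory during stance (foot at x = 0):
# tuples of (x, y, vx, vy).  VLO is the instant where the hip passes
# vertically above the foot, i.e. where x changes sign.
traj = [(-0.20, 0.96, 1.1, -0.15),
        (-0.10, 0.95, 1.1, -0.05),
        ( 0.02, 0.95, 1.1,  0.03),
        ( 0.12, 0.96, 1.1,  0.10)]

def vlo_state(traj):
    for (x0, y0, vx0, vy0), (x1, y1, vx1, vy1) in zip(traj, traj[1:]):
        if x0 <= 0.0 < x1:                  # sign change: VLO lies in between
            w = -x0 / (x1 - x0)             # linear interpolation weight
            y = y0 + w * (y1 - y0)
            vx = vx0 + w * (vx1 - vx0)
            vy = vy0 + w * (vy1 - vy0)
            # height of the CoM and direction of its velocity at VLO
            return y, math.atan2(vy, vx)
    raise ValueError("no VLO event in trajectory")

y_vlo, theta = vlo_state(traj)
```

The pair returned here, the height and the velocity angle at VLO, is exactly the reduced system state that the Poincaré-map analysis compares from one step to the next.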
Well defined motion events for Poincaré maps allow a reduction of independent system variables in order to simplify the analysis. The system variables of the spring-mass model are the positions $x$ and $y$ of the centre of mass, which is located at the hip, and the velocities $v_x$ and $v_y$ of the centre of mass. At the instant of Vertical Leg Orientation, the system state can be reduced to two independent variables, i.e. the height $y$ and the angle of the velocity $\theta = \arctan(v_y/v_x)$. Parameter map where symmetric walking patterns were found. The novel method for gait analysis using VLO was first applied to walking simulations. There is a clear distinction between symmetric and asymmetric gait patterns. In symmetric walking, the velocity angle $\theta$ is always zero, which means that the first half of the stance phase is symmetrically identical to the second half. In asymmetric walking patterns, $\theta \neq 0$. Asymmetric walking patterns are no worse than symmetric ones, as self-stable solutions exist there as well. A surprising finding is that in symmetric walking solutions, the centre of mass height $y$ at the event of VLO is always above the height at touch down $y_{TD}$. Why is the novel event for Poincaré maps called VLO instead of "mid-stance"? The term mid-stance is already used in the scientific literature but addresses various conditions: mid-stance is, for instance, the time-based centre of the stance phase, a period of time during the stance phase, or the event when the ground reaction force is perpendicular to the ground. In order to clearly define the event of vertical orientation and to differentiate it from a less explicit term, the name Vertical Leg Orientation was established. Simulation results on walking and running combined in a single map using VLO will be presented in another article. Featured Paper J. Rummel, Y. Blum, H.M. Maus, C. Rode, A. Seyfarth. Stable and robust walking with compliant legs.
IEEE International Conference on Robotics and Automation, May 3-8, Anchorage, Alaska: 5250-5255, 2010. DOI: 10.1109/ROBOT.2010.5509500 PDF
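The stance phase of the spring-mass model discussed in the articles above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the parameters ($m$, $k$, $l_0$, $\alpha_0$) and the touch-down velocity are assumed values. Since the model is conservative, the integrator should preserve the total energy (kinetic plus gravitational plus spring) up to liftoff.

```python
import math

# Illustrative parameters for a single stance phase of the spring-mass model.
m, g, k, l0 = 80.0, 9.81, 20000.0, 1.0    # mass [kg], gravity, leg stiffness [N/m], rest length [m]
alpha0 = math.radians(68.0)               # angle of attack at touch down (assumed)

def deriv(s):
    x, y, vx, vy = s                      # CoM position and velocity, foot at origin
    l = math.hypot(x, y)                  # current leg length
    f = k * (l0 - l) / l                  # spring force per unit position vector
    return (vx, vy, f * x / m, f * y / m - g)

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):
    x, y, vx, vy = s
    l = math.hypot(x, y)
    return 0.5 * m * (vx**2 + vy**2) + m * g * y + 0.5 * k * (l0 - l)**2

# touch down: leg at angle alpha0, CoM moving forward and slightly downward
state = (-l0 * math.cos(alpha0), l0 * math.sin(alpha0), 3.0, -0.3)
E0, dt = energy(state), 1e-4
for _ in range(100_000):                  # integrate until the leg lifts off (l > l0)
    state = rk4_step(state, dt)
    if math.hypot(state[0], state[1]) > l0:
        break
print(abs(energy(state) - E0) / E0)       # relative energy drift of the integrator
```

Extending this loop with a flight phase and a second leg would give the bipedal model whose periodic solutions are analysed with the VLO-based Poincaré maps above.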
Initial Topologies Now that we have looked at the definition of continuity of maps on topological spaces, we are ready to look at some new topologies that are based on this concept. We begin by describing a special type of topology on a set $X$ that is induced by a topological space $Y$ and a function $f : X \to Y$ (or a collection of topological spaces and functions $f_i : X \to Y_i$) ensuring continuity of $f$ (or of each $f_i$, $i \in I$). Definition: Let $X$ be a set and let $Y$ be a topological space. The Initial Topology Induced by $f$ on $X$ is the coarsest topology on $X$ that makes the map $f : X \to Y$ continuous. Furthermore, if $X$ is a set and $\{ Y_i : i \in I \}$ is a collection of topological spaces then the Initial Topology Induced by $\{ f_i : i \in I \}$ on $X$ is the coarsest topology on $X$ that makes each map $f_i : X \to Y_i$ continuous. It is important to emphasize that the initial topology induced by $\{ f_i : i \in I \}$ is the COARSEST topology on $X$ which makes $f_i : X \to Y_i$ continuous for all $i \in I$. So, consider a set $X$ and a topological space $Y$. Let $f : X \to Y$ and consider the set of all open sets in $Y$. For $f$ to be continuous we must have that the inverse images of all open sets in $Y$ are open in $X$. Thus, if we give $X$ the initial topology, then the open sets of $X$ are just that: for every open set $V$ in $Y$, the set $f^{-1}(V)$ is declared open in $X$. We avoid adding any further open sets to $X$ where possible, since the initial topology on $X$ induced by $f$ is the coarsest such topology. We will need something more than just a wordy definition if we expect to work with initial topologies induced by $\{ f_i : i \in I \}$, so the following theorem gives us a subbasis for this topology. Theorem 1: Let $X$ be a set, $\{ (Y_i, \tau_i) : i \in I \}$ be a collection of topological spaces, and $\{ f_i : X \to Y_i : i \in I \}$ be a collection of maps.
Then the initial topology induced by $\{ f_i : i \in I \}$ on $X$ has subbasis $S = \{ f^{-1}_i (U) : U \in \tau_i \}$. Proof: To show that the initial topology induced by $\{ f_i : i \in I \}$ on $X$ has subbasis $S = \{ f^{-1}_i(U) : U \in \tau_i \}$ we must show that the topology $\tau$ generated by this subbasis is equal to this induced initial topology on $X$. To do this, we must show that $\tau$ is a topology that makes each $f_i : X \to Y_i$, $i \in I$, continuous, and we must show that $\tau$ is the coarsest topology to do this. Clearly the topology $\tau$ generated by the subbasis $S$ makes all of the $f_i : X \to Y_i$, $i \in I$, continuous, because for each $U \in \tau_i$ we have that $f_i^{-1}(U) \in S \subseteq \tau$ (it is open in $X$ with respect to the topology $\tau$ generated by the subbasis $S$). Hence $f_i$ is continuous for all $i \in I$. We now show that $\tau$ is the coarsest topology to accomplish this. Suppose that $\tau'$ is another topology that makes $f_i : X \to Y_i$ continuous for all $i \in I$. Then $f^{-1}_i(U_i) \in \tau'$ for all $i \in I$ and all $U_i \in \tau_i$. But since $\tau'$ is a topology on $X$, all finite intersections of the sets $f^{-1}_i(U_i)$, and all unions of such intersections, are also contained in $\tau'$, and so $\tau \subseteq \tau'$. So any such topology $\tau'$ which also accomplishes making $f_i : X \to Y_i$ continuous for all $i \in I$ must contain $\tau$, so the initial topology $\tau$ induced by $\{ f_i : i \in I \}$ on $X$ has subbasis $S = \{ f^{-1}_i(U) : U \in \tau_i \}$. $\blacksquare$
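For a finite toy example, the subbasis description of Theorem 1 can be checked directly. The sets $X$, $Y$ and the map $f$ below are arbitrary choices for illustration; the topology is generated by closing the subbasis under finite intersections and then under unions.

```python
from itertools import chain, combinations

# Finite sanity check of Theorem 1: generate the initial topology on
# X = {1, 2, 3} induced by a single map f into a small space Y = {a, b},
# starting from the subbasis S = { f^{-1}(U) : U open in Y }.
X = frozenset({1, 2, 3})
Y_open = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]  # a topology on Y
f = {1: 'a', 2: 'a', 3: 'b'}

def preimage(U):
    return frozenset(x for x in X if f[x] in U)

subbasis = {preimage(U) for U in Y_open}

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# finite intersections give a basis; arbitrary unions give the topology
basis = {X} | {frozenset.intersection(X, *c) for c in powerset(subbasis) if c}
topology = {frozenset(chain.from_iterable(c)) for c in powerset(basis)}

# f is continuous: the preimage of every open set of Y is open in X
assert all(preimage(U) in topology for U in Y_open)
```

For this choice of $f$ the result is just $\{\emptyset, \{1,2\}, X\}$, which is visibly the coarsest topology making $f$ continuous: any such topology must contain $f^{-1}(\{a\}) = \{1,2\}$.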
Let's denote a hydrated salt as $\ce{M_pA_q*xH2O}$ ($\ce{M}$ – metal, $\ce{A}$ – anion). Then the molality $b$ of the aqueous solution of $\ce{M_pA_q}$ is defined as $$b=\frac{n(\ce{M_pA_q})}{m_\mathrm{tot}(\ce{H2O})}\, ,\label{eq:1}\tag{1}$$ where $n(\ce{M_pA_q})$ is the amount of water-free salt and $m_\mathrm{tot}(\ce{H2O})$ is the total mass of water. Note that the amounts of the hydrated and the water-free salt are equal: $$n(\ce{M_pA_q})=n(\ce{M_pA_q*xH2O})=\frac{m(\ce{M_pA_q*xH2O})}{M(\ce{M_pA_q*xH2O})}\, ,\label{eq:2}\tag{2}$$ where $M$ is the molar mass. The total mass of water is found as the sum of the water added to dissolve the salt and the water "stored" within the hydrated salt itself: $$\begin{align}m_\mathrm{tot}(\ce{H2O}) &= m(\ce{H2O})+m_\mathrm{hydr}(\ce{H2O})\\&= m(\ce{H2O})+n_\mathrm{hydr}(\ce{H2O})\cdot M(\ce{H2O})\\&=m(\ce{H2O})+x\cdot n(\ce{M_pA_q*xH2O})\cdot M(\ce{H2O})\\\eqref{eq:2}\quad\Rightarrow\quad&=m(\ce{H2O})+x\cdot m(\ce{M_pA_q*xH2O})\cdot \frac{M(\ce{H2O})}{M(\ce{M_pA_q*xH2O})}\label{eq:3}\tag{3}\end{align}$$ Now, putting \eqref{eq:2} and \eqref{eq:3} back in \eqref{eq:1}, one obtains a uniform expression for the molality of a hydrated salt: $$b=\frac{m(\ce{M_pA_q*xH2O})}{m(\ce{H2O})\cdot M(\ce{M_pA_q*xH2O})+x\cdot m(\ce{M_pA_q*xH2O})\cdot M(\ce{H2O})}\label{eq:4}\tag{4}$$ Plugging in the numbers for sodium carbonate decahydrate: $$b=\frac{\pu{13.8 g}}{\pu{103.5 g}\cdot\pu{286.14 g mol-1}+10\cdot\pu{13.8 g}\cdot\pu{18.02 g mol-1}}=\pu{4.3e-4mol g-1}$$ A side note regarding the notations and units [1, p. 28]: The term molal and the symbol $\pu{m}$ should no longer be used because they are obsolete. One should use instead the term molality of solute $\ce{B}$ and the unit $\pu{mol/kg}$ or an appropriate decimal multiple or submultiple of this unit. (A solution having, for example, a molality of $\pu{1 mol/kg}$ was often called a $1$ molal solution, written $\pu{1 m}$ solution.) Reference Thompson, A.; Taylor, B. N.
Guide for the Use of the International System of Units (SI). NIST Special Publication 811, 2008.
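The computation in Eq. (4) is easy to reproduce step by step; the variable names below are ad hoc, and the masses and molar masses are the ones quoted above for sodium carbonate decahydrate.

```python
# Numerical check of Eqs. (1)-(4) for Na2CO3·10H2O.
m_hydrate = 13.8        # g of hydrated salt
m_water   = 103.5       # g of water added
M_hydrate = 286.14      # g/mol
M_water   = 18.02       # g/mol
x         = 10          # waters of hydration

n_salt      = m_hydrate / M_hydrate              # mol of water-free salt, Eq. (2)
m_tot_water = m_water + x * n_salt * M_water     # total mass of water, Eq. (3)
b           = n_salt / m_tot_water               # molality in mol/g, Eq. (1)
print(b, b * 1000)                               # mol/g and mol/kg
```

Running this reproduces the $\pu{4.3e-4 mol g-1}$ (i.e. about $\pu{0.43 mol/kg}$) obtained from Eq. (4) above.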
Being a bit late for the party, here is nevertheless a small answer. In fact, there is a rather explicit way to compute adjoints of every differential operator (of any order) between (smooth, compactly supported) sections of vector bundles. Of course, this is (in general) not the Hilbert space adjoint; for that you need a bit more analysis ;) The main idea is to use a symbol calculus based on a covariant derivative. I explain here the scalar version (trivial vector bundles), but the whole thing can be done in full generality as well. First, you choose a covariant derivative, say torsion-free, and a density on your manifold in order to have an integration measure. As you probably know, the symmetrized covariant derivative allows you to establish a $C^\infty(M)$-linear bijection between symbols, i.e. smooth functions on the cotangent bundle which are polynomial in the fiber directions, and differential operators. Note that this is a genuine bijection, not just one taking into account the leading symbol. Of course, the symbol depends on the chosen covariant derivative. In a second step you compute once and for all the adjoint of a differential operator $D$ with symbol $f$ by zillions of integrations by parts. The funny thing is that there is a fairly simple description of the symbol of the adjoint. You need two ingredients for that: First, the covariant derivative allows you to define a horizontal lift which in turn determines a maximally indefinite pseudo-Riemannian metric on the cotangent bundle (horizontal spaces are in bijection with tangent spaces at the base point, vertical spaces are in bijection with the cotangent space, thus there is a natural pairing). This metric has a Laplace operator (better: d'Alembert operator) $\Delta$ with which you can act on the symbol $f$. In the flat situation this is just\begin{equation}\Delta_{\mathrm{flat}} = \frac{\partial^2}{\partial q^i \partial p_i}\end{equation}for a Darboux chart on $T^*M$ induced by a chart on $M$.
In general, there are a couple of Christoffel symbols needed to make this globally defined ;) Second, the density $\mu$ of your integration might not be covariantly constant. In any case, it defines a one-form by\begin{equation}\alpha(X) = \frac{\nabla_X \mu}{\mu},\end{equation}which is now used to cook up a new differential operator on $T^*M$. You can lift $\alpha$ vertically to a vector field $F(\alpha)$ on $T^*M$, completely canonically. Having these two ingredients, the adjoint $D^*$ has the following symbol\begin{equation}f^* = \exp\left(\frac{1}{2i}(\Delta + F(\alpha))\right) \overline{f}.\end{equation}The prefactor in the exponential depends a bit on your conventions concerning the assignment of symbols to operators. With this formula it is typically really just a computation to get adjoints of all kinds of operators. In many cases, you can choose your density to be covariantly constant, so $\alpha = 0$. You can find all this in much detail in my book on Poisson geometry, based on some old papers on quantization of cotangent bundles from the late 90s (together with Bordemann and Neumaier). In the case of interesting bundles, the formula is essentially the same: you only have to choose covariant derivatives for the two vector bundles in question and modify the Laplace operator accordingly. Then you can proceed in the same way. This generalization is in a paper of mine with Bordemann, Neumaier and Pflaum.
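As a sanity check of the simplest flat, scalar instance of this machinery (Lebesgue measure on $\mathbb{R}$, so $\alpha = 0$), one can verify numerically that the formal adjoint of $D = \mathrm{d}/\mathrm{d}x$ is $-\mathrm{d}/\mathrm{d}x$, i.e. $\int (D\psi)\,\varphi = \int \psi\,(D^*\varphi)$ for rapidly decaying test functions. This sketch uses centered differences and a plain Riemann sum; it only illustrates the defining property of the adjoint, not the symbol formula itself.

```python
import math

# Verify ∫ (Dψ) φ dx = ∫ ψ (-Dφ) dx for D = d/dx on R with Lebesgue measure,
# using two Gaussian-type test functions that decay fast enough for the
# truncation to [-L, L] to be harmless.
h, L = 1e-3, 8.0
xs = [-L + i * h for i in range(int(2 * L / h) + 1)]

psi = lambda x: math.exp(-x * x)
phi = lambda x: x * math.exp(-x * x)

def d(f, x):                        # centered difference ≈ df/dx
    return (f(x + h) - f(x - h)) / (2 * h)

lhs = sum(d(psi, x) * phi(x) for x in xs) * h     # ∫ (Dψ) φ
rhs = sum(psi(x) * (-d(phi, x)) for x in xs) * h  # ∫ ψ (D*φ) with D* = -D
print(lhs, rhs)
```

Both integrals agree up to discretization error, which is the integration-by-parts identity that the covariant symbol calculus packages systematically.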
Let $$\text{F}_{91}:=\left\{\overline{a}\in\left(\mathbb{Z}/91\mathbb{Z}\right)^\times:91\text { passes the Fermat primality test to base }a\right\}$$ and $$\text{MF}_{91}:=\left\{\overline{a}\in\left(\mathbb{Z}/91\mathbb{Z}\right)^\times:91\text { passes the Miller-Rabin primality test to base }a\right\}$$ How do we calculate $|\text{F}_{91}|$ and $|\text{MF}_{91}|$? Starting with the Fermat primality test: As described in the Wikipedia article, the algorithm depends on a parameter $k$. In each iteration (there are at most $k$ iterations), a random element $a\in\left(\mathbb{Z}/91\mathbb{Z}\right)^\times$ is chosen and tested for $$a^{91-1}\equiv1\text{ mod }91$$ If this test fails, the algorithm stops and reports that $91=7\cdot 13$ is not prime. However, I'm not sure how to calculate $|\text{F}_{91}|$ since the test depends on some randomness. The randomness (or more likely pseudo-randomness) that one uses in the application of the tests shall be ignored here. "The Fermat test to base $a$" is specifically whether $a^{n-1} \equiv 1 \pmod{n}$, just as the "Miller-Rabin test to base $a$" (also called the strong Fermat test) is whether $a^m\equiv 1 \pmod{n}$ or $a^{2^j\cdot m} \equiv -1 \pmod{n}$ for some $j \in \{0,\dotsc,k-1\}$, where $n-1 = 2^k\cdot m$ with $m\equiv 1 \pmod{2}$. So the question is, for how many $\overline{a} \in (\mathbb{Z}/91\mathbb{Z})^\times$ do we have $a^{90} \equiv 1 \pmod{91}$, resp. $a^{45} \equiv \pm 1 \pmod{91}$. Since $91 = 7\cdot 13$, the numbers are not hard to find.
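Since 91 is small, both counts can be found by brute force; a sketch:

```python
from math import gcd

n = 91                          # = 7 * 13
units = [a for a in range(1, n) if gcd(a, n) == 1]

# Fermat test to base a: a^(n-1) ≡ 1 (mod n)
F = [a for a in units if pow(a, n - 1, n) == 1]

# Miller-Rabin to base a: write n - 1 = 2^k * m with m odd (here 90 = 2 * 45)
# and require a^m ≡ 1 or a^(2^j * m) ≡ -1 for some 0 ≤ j < k.
m, k = n - 1, 0
while m % 2 == 0:
    m, k = m // 2, k + 1

def strong(a):
    x = pow(a, m, n)
    if x in (1, n - 1):
        return True
    for _ in range(k - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

MF = [a for a in units if strong(a)]
print(len(F), len(MF))
```

This agrees with the group-theoretic count via the CRT: modulo 7 and 13 the unit groups are cyclic of orders 6 and 12, so $a^{90}\equiv 1$ has $\gcd(90,6)\cdot\gcd(90,12)=36$ solutions, while $a^{45}\equiv 1$ has $3\cdot 3 = 9$ and $a^{45}\equiv -1$ another 9, giving $|\text{F}_{91}| = 36$ and $|\text{MF}_{91}| = 18$.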
The sum expression inside the equation below throws an undefined control sequence error. \begin{equation} \left\|x\left[t\right]-\textbf{1} F(X)\right\|^2 = \sum_{i = 1}^{n} \lambda_i^2t \lvert x \left[ 0 \right] \rvert^2\end{equation} Error: Undefined control sequence. |^2 = \sum_{i = 1}^{n} \lambda_i^2t \lvert x \left[ 0 \right] \rvert^2 My guess is that I am doing something wrong with \lvert or \lambda, but I don't know exactly what. Any suggestion on how to solve this issue would be very helpful.
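For reference, `\lvert` and `\rvert` are defined by the amsmath package, so the most likely cause of the error is a missing `\usepackage{amsmath}` in the preamble. A minimal example that compiles (assuming a standard pdfLaTeX setup):

```latex
\documentclass{article}
\usepackage{amsmath} % provides \lvert and \rvert
\begin{document}
\begin{equation}
  \left\| x\left[t\right] - \textbf{1} F(X) \right\|^2
  = \sum_{i = 1}^{n} \lambda_i^2t \lvert x \left[ 0 \right] \rvert^2
\end{equation}
\end{document}
```

Without amsmath, `\lambda`, `\sum` and `\left\|...\right\|` all work in plain LaTeX, which is why only `\lvert` triggers the error.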
Yeah, this software cannot be too easy to install; my installer is very professional looking. It is currently not tied into that code, but it directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^\infty\left(\cos\ldots\right.$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's a food for thought for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) writes questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler-Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
Stephen Hawking tried to answer that question in his book "A Brief History of Time". His answer is based on the uncertainty principle: $$\Delta E\Delta t\geq \frac{\hbar}{2}$$ If you take the total energy uncertainty of the universe to be $\Delta E=0$, then, according to this principle, you can have $\Delta t\rightarrow\infty$, and the universe can exist for an indefinite amount of time. The underlying claim is that the total amount of energy of the universe is exactly $0$. The reason is that matter has "positive" energy from its mass ($E=mc^2$), which is exactly balanced by "negative" binding energy (electrons in atoms, atoms in molecules, planets orbiting stars, star clusters, galaxies, galaxy clusters, superclusters, etc.). Thus, the universe would have a total amount of energy equal to $0$. To answer your question about the "creation of energy": it is sufficient to say that in the quantum mechanics realm, it is perfectly allowed to "borrow" energy from nothing, as long as the time for which the energy is borrowed does not violate the uncertainty principle. This is a real effect that leads to the spontaneous creation of particle-antiparticle pairs which have (or may have) measurable effects (the Lamb shift, Hawking radiation, the Casimir effect, spontaneous emission, etc.). These particles are also called "virtual particles" as they do not exist freely and cannot be measured directly. Thus, according to Hawking, the Universe could have spawned from such a quantum fluctuation as long as the total energy borrowed is $0$.
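As a rough numerical illustration (not from the book), the uncertainty principle bounds how long a fluctuation with the rest energy of an electron-positron pair can be "borrowed":

```python
# Order-of-magnitude estimate of the lifetime allowed by ΔE·Δt ≥ ħ/2
# for a virtual electron-positron pair with ΔE = 2 m_e c².
hbar = 1.054571817e-34   # reduced Planck constant, J·s
m_e  = 9.1093837e-31     # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s

dE = 2 * m_e * c**2      # rest energy of the pair, J
dt = hbar / (2 * dE)     # maximal borrowing time, s
print(dt)
```

The result is on the order of $10^{-22}$ seconds, which is why such pairs are never observed directly and only show up through the indirect effects listed above.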
Homework Statement The electric field outside and an infinitesimal distance away from a uniformly charged spherical shell, with radius R and surface charge density σ, is given by Eq. (1.42) as ##\sigma/\epsilon_0##. Derive this in the following way. (a) Slice the shell into rings (symmetrically located with respect to the point in question), and then integrate the field contributions from all the rings. You should obtain the incorrect result of ##\frac{\sigma}{2\epsilon_0}##. (b) Why isn't the result correct? Explain how to modify it to obtain the correct result of ##\frac{\sigma}{\epsilon_0}##. Hint: You could very well have performed the above integral in an effort to obtain the electric field an infinitesimal distance inside the shell, where we know the field is zero. Does the above integration provide a good description of what's going on for points on the shell that are very close to the point in question? Homework Equations Coulomb's Law Hi! I need help with this problem. I tried to do it the way you can see in the picture. I then have this: ##dE_z=dE\cdot \cos\theta## thus ##dE_z=\frac{\sigma dA}{4\pi\epsilon_0}\cos\theta=\frac{\sigma 2\pi L^2\sin\theta d\theta}{4\pi\epsilon_0 L^2}\cos\theta##. Then I integrated and ended up with ##E=\frac{\sigma}{2\epsilon_0}\int \sin\theta\cos\theta d\theta##. The problem is that I don't know what the limits of integration are. I first tried with ##\pi##, but I got 0. What am I doing wrong?
Asymptotic Solution This chapter details the steps required to establish the asymptotic solution in the LEFM context. Particular emphasis is put on the method applied and on the assumptions made. Asymptotic Solution > Notations and assumptions Linear elastic fracture mechanics consists in solving crack problems using classical linear elastic continuum mechanics. A body $B$ is balanced if the linear (translation) and angular (rotation) equilibrium equations, see Picture II.1, are satisfied. This problem is completed by the Dirichlet boundary conditions, which in our case means constrained displacements on $\partial_D B$, and by the Neumann boundary conditions, which correspond to constrained surface tractions on the surface $\partial_N B$. We also assume that the body undergoes small deformations, which means that the strain state $\mathbf{\varepsilon}$ can be derived from the displacement field $\mathbf{u}$ as follows: \begin{equation} \mathbf{\varepsilon} = \frac{1}{2}\left(\nabla\otimes\mathbf{u}+\mathbf{u}\otimes\nabla\right). \label{eq:deformations}\end{equation} In indicial form, this equation reads: \begin{equation} \mathbf{\varepsilon}_{ij} = \frac{1}{2}\left(\frac{\partial}{\partial {x}_i}\mathbf{u}_j+\frac{\partial}{\partial {x}_j}\mathbf{u}_i\right), \end{equation} or again \begin{equation} \mathbf{\varepsilon}_{ij} = \frac{1}{2}\left(\mathbf{u}_{j,i}+\mathbf{u}_{i,j}\right). \end{equation} LEFM also assumes that the material is homogeneous, isotropic and linear.
Hooke's law of the material relates the stress tensor $\mathbf{\sigma}$ to the strain tensor $\mathbf{\varepsilon}$: \begin{equation} \mathbf{\sigma}=\mathcal{H}:\mathbf{\varepsilon}, \label{eq:Hooke}\end{equation} or in indicial form (with summation on repeated indices) \begin{equation} \sigma_{ij}=\mathcal{H}_{ijkl}\varepsilon_{kl}, \end{equation} where the fourth-order tensor $\mathcal{H}$ is given by \begin{equation} \mathcal{H}_{ijkl}=\frac{E\nu}{\left(1+\nu\right)\left(1-2\nu\right)}\delta_{ij}\delta_{kl} + \frac{E}{1+\nu}\left(\frac{1}{2}\delta_{ik}\delta_{jl}+\frac{1}{2}\delta_{il}\delta_{jk}\right). \end{equation} The inverse law is simply given by: \begin{equation} \mathbf{\varepsilon}=\mathcal{G} : \mathbf{\sigma}. \end{equation} Using these notations and assumptions we are now able to determine the asymptotic solution at the crack tip. The range of validity of the LEFM approach will be determined in another lecture.
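The constitutive law above can be checked numerically. In this sketch, $E$ and $\nu$ are illustrative values (roughly those of steel) rather than values from the lecture, and the comparison uses the equivalent Lamé form $\mathbf{\sigma} = \lambda\,\mathrm{tr}(\mathbf{\varepsilon})\,\mathbf{I} + 2\mu\,\mathbf{\varepsilon}$, which is just a rewriting of $\mathcal{H}$ with $\lambda = E\nu/((1+\nu)(1-2\nu))$ and $\mu = E/(2(1+\nu))$.

```python
import numpy as np

# Build the fourth-order Hooke tensor H_ijkl for an isotropic material
# and apply it to a symmetric strain tensor: sigma = H : eps.
E, nu = 210e9, 0.3                      # illustrative values, in Pa
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))

d = np.eye(3)                           # Kronecker delta
H = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.array([[1.0, 0.2, 0.0],
                [0.2, 0.5, 0.1],
                [0.0, 0.1, -0.3]]) * 1e-3   # small symmetric strain
sigma = np.einsum('ijkl,kl->ij', H, eps)    # double contraction H : eps

# The result matches the Lamé form of the same law
assert np.allclose(sigma, lam * np.trace(eps) * d + 2 * mu * eps)
```

Since $E/(1+\nu) = 2\mu$, the symmetrizing term $\frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$ of the lecture's formula contributes exactly the $\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$ used in the code.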
Research Open Access Published: Combined effects of the Hardy potential and lower order terms in fractional Laplacian equations Boundary Value Problems volume 2018, Article number: 61 (2018) Abstract In this paper we consider the existence and regularity of solutions to the following nonlocal Dirichlet problems: where \((-\Delta)^{s}\) is the fractional Laplacian operator, \(s\in(0,1)\), \(\Omega\subset\mathbb{R}^{N}\) is a bounded domain with Lipschitz boundary such that \(0\in\Omega\), f is a nonnegative function that belongs to a suitable Lebesgue space. Introduction Recently, the fractional Laplacian has found more and more applications in physics, chemistry, biology, probability and finance. The fractional Laplacian \((-\Delta)^{s} \) is a pseudo-differential operator defined by where \(\mathit{P.V.}\) stands for the Cauchy principal value and \(a_{N,s}\) is a constant given by The operator \((-\Delta)^{s} \) is well defined as long as u belongs to the space \(C^{1,1}_{\mathrm{loc}}\cap L_{s}\), where In this paper, we establish existence and regularity of solutions to the following nonlocal problem: where \(s\in(0,1)\), \(p>0\), \(\Omega\subset\mathbb{R}^{N}\) is a bounded domain with Lipschitz boundary such that \(0\in\Omega\), f is a positive measurable function in Ω. Before stating our main theorem and related results, we give some notions used in this paper. Definition 1.1 Let \(s\in(0,1)\), \(\Omega\subset\mathbb{R}^{N}\), define the fractional Sobolev space and the space \(H_{0}^{s}(\mathbb{R}^{N})\), defined as endowed with the norm where \(\mathcal{Q}=\mathbb{R}^{N}\times\mathbb{R}^{N}\setminus(\mathcal {C}\Omega\times\mathcal{C}\Omega)\). where is optimal and not attained. We need to make precise the sense of solutions that we will handle here and distinguish two types of solutions, according to the regularity of f. Definition 1.2 Assume \(0<\lambda<\Lambda_{N,s}\).
For \(f \in H^{-s}(\Omega)\) we say that \(u \in H^{s}_{0}(\Omega)\) is a finite energy solution to problem (1.1) if, for any \(w\in H^{s}_{0}(\Omega)\), where \(\langle\cdot,\cdot\rangle\) is the natural duality product between \(H^{s}_{0}\) and \(H^{-s}\). Definition 1.3 For \(f\in L^{m}(\Omega)\), \(m\geq1\), we say that \(u\in L^{1}(\Omega)\) is a weak solution to problem (1.1) if \(u^{p}\in L^{1}(\Omega)\), \(u=0 \) in \(\mathbb{R}^{N}\setminus\Omega\) and the following equality holds: Recently a great deal of attention has been devoted to understanding the role of the Hardy potential in the solvability of fractional elliptic problems; see for instance [9–13] and the references therein. In particular, Abdellaoui et al. [12] obtained regularity of solutions to the following nonlocal nonlinear problem: If \(f\in L^{m}(\Omega)\), \(m>\frac{N}{2s}\), the unique energy solution \(u\in H_{0}^{s}(\Omega)\) to problem (1.3) with \(\lambda\leq \Lambda_{N,s}\) satisfies \(u\leq C|x|^{-\gamma}\) for some constants C and γ. If \(\frac{2N}{N+2s} \leq m <\frac{N}{2s}\), the unique energy solution u to problem (1.3) verifies \(u\in L^{m_{s}^{**}}(\Omega )\), \(m_{s}^{**}=\frac{mN}{N-2ms}\), provided \(\lambda<\Lambda_{N,s}\frac{4N(m-1)(N-2ms)}{m^{2}(N-2ms)^{2}}\). If \(1< m<\frac{2N}{N+2s}\), the unique weak solution u to problem (1.3) verifies \(u\in L^{m_{s}^{**}}(\Omega)\cap W_{0}^{s_{1},m_{s}^{*}}(\Omega)\) for all \(s_{1}< s\) and \(m_{s}^{*}=\frac{mN}{N-ms}\), provided \(\lambda<\Lambda_{N,s}\frac{4N(m-1)(N-2ms)}{m^{2}(N-2ms)^{2}}\). The main objective of this work is to explain the combined influence of the Hardy potential and lower order terms on the existence and regularity of solutions to problem (1.1).
The influence of the Hardy potential for the fractional Laplacian was studied in [12]. The main effect of the Hardy potential in (1.3) is that the weak solutions to problem (1.3) satisfy \(u(x)\geq C |x|^{-\gamma}\) for some constants C and γ; this fact shows that \(u(x)\) is unbounded in a neighborhood of the origin, rather than belonging to \(L^{\infty}(\Omega)\). On the other hand, it is well known that the lower order term \(u^{p}\) produces a regularizing effect; see [14–17] and the references therein. Therefore, thanks to the regularizing properties of the lower order term, we will prove that the summability of the finite energy solution to problem (1.1) increases as the power of the lower order term increases; see (1.4) below. According to such a definition, we can now state our existence results for problem (1.1). Theorem 1.4 Assume \(\lambda\leq\Lambda_{N,s}\). Then, for any \(f\in L^{m}(\Omega)\) with \(1\leq m\leq1+ \frac{1}{p}\), problem (1.1) has a weak solution. More precisely, \(u\in H^{s}_{0}(\Omega)\cap L^{p+1}(\Omega)\). In the case where \(f\in L^{m}(\Omega)\) with \(m>1+ \frac{1}{p}\), we will prove the following existence result. Theorem 1.5 Let \(f\in L^{m}(\Omega)\) with \(m>1+ \frac{1}{p}\), and Then there exists a finite energy solution u to problem (1.1) that verifies where Remark 1.6 Obviously, Thus the summability of the solution to problem (1.1) increases as p increases. Remark 1.7 When \(s=1\), the above theorem was proved by Adimurthi et al. [18]. The paper is organized as follows. In Sect. 2 we collect some useful tools, such as Sobolev's imbedding theorem and a certain algebraic inequality. Furthermore, we also obtain an a priori estimate of the absorption term \(u^{p}\) by analyzing the associated approximating problems. The proofs of Theorems 1.4 and 1.5 will be given in Sect. 3. Useful tools and preliminaries In this paper, we will use the classical truncation method.
Given a measurable function u, we consider the k-truncation of u defined by The remainder of the truncation \(T_{k}(u)\) is defined as \(G_{k}(u)=u-T_{k}(u)\). We will also need the classical Sobolev theorem; for an elementary proof of this inequality, see [1]. Lemma 2.1 Let \(s\in(0,1)\) and \(N>2s\). There exists a constant \(C(N,s)\) such that, for any measurable and compactly supported function \(f:\mathbb {R}^{N}\rightarrow\mathbb{R}\), where \(2_{s}^{*}=\frac{2N}{N-2s}\) is called the Sobolev critical exponent. The next algebraic inequality will be used in our article. Lemma 2.2 Let \(s_{1},s_{2}\geq0\) and \(a>0\). Then Proof The complete proof is given in [12]; for the reader’s convenience, we include a sketch here. If \(s_{1}=0\) or \(s_{2}=0\), the conclusion is obvious. We may assume \(s_{1}>s_{2}>0\). Let \(x:=s_{2}/s_{1}\); then (2.2) is equivalent to Set Rewrite h as For \(a>1\), we claim that Define Clearly, Thus \(h_{1}'(x)\leq0\); here the following Young inequality will be used: Therefore \(h_{1}(x)\geq h_{1}(1)=0\), which shows that (2.3) holds. For \(a<1\), we first show that In order to do this, define Now we consider the following approximation problems: where \(f_{n}(x)=\frac{f(x)}{1+\frac{1}{n}}\). Lemma 2.3 Let \(f\in L^{m}(\Omega)\), \(m\geq1\). Then, for every \(n\in\mathbb{N}\), there exists a solution \(u_{n}\in H_{0}^{s}(\Omega)\) to problem (2.4) such that Proof To show estimate (2.5), we consider the cases \(m>1\) and \(m=1\) separately. Case \(m>1\). Choosing \(\phi=u_{n}^{p(m-1)}\) as a test function in (2.4), we get Therefore provided Applying Hölder’s inequality on the right-hand side of (2.6), we obtain Case \(m=1\).
Using \(\frac{T_{k}(u_{n})}{k}\) as a test function in (2.4), we get Since for any \(\sigma\in\mathbb{R}\), \(\sigma=T_{k}(\sigma)+G_{k}(\sigma)\), Moreover, by Lemma 4 in [19], we know that and using Hardy’s inequality (1.2), we get Since \(\lambda<\Lambda_{N,s}\), we have Fatou’s lemma implies, for \(k\rightarrow\infty\), that estimate (2.5) holds. □ Proof of main results Let us begin with the proof of Theorem 1.4. Proof of Theorem 1.4 Set \(f_{n}=\frac{f}{1+\frac {1}{n}}\); obviously \(f_{n}\rightarrow f\) in \(L^{1}(\Omega)\) as \(n\rightarrow\infty\). Taking \(\phi=T_{k}(u_{n})\) as a test function in (2.4), we have Since for any \(\sigma\in\mathbb{R}\), \(\sigma=T_{k}(\sigma)+G_{k}(\sigma)\), Moreover, by Lemma 4 in [19], we know that Recall that On the other hand, using \(G_{k}(u_{n})\) as a test function in (2.4), we have Moreover, \(u_{n} G_{k}(u_{n})=G^{2}_{k}(u_{n})+T_{k}(u_{n})G_{k}(u_{n})\); this fact, combined with (3.6), implies that Applying the Young inequality to the right-hand side of (3.7), we get Taking into account that \(\lambda<\Lambda_{N,s}\), by the Hardy inequality we obtain Therefore \(\{G_{k}(u_{n})\}_{n\in\mathbb{N}}\) is uniformly bounded in \(H_{0}^{s}(\Omega)\), which implies Then we get We deduce that \(T_{k}(u_{n})\) is uniformly bounded in \(H_{0}^{s}(\Omega)\cap L^{p+1}(\Omega)\). Then we pass to the limit in the approximation problem (2.4); up to a subsequence, there exists a limit function \(u\in H_{0}^{s}(\Omega)\cap L^{p+1}(\Omega)\). Now we want to prove that \(u_{n}^{p} \rightarrow u^{p}\) in \(L^{1}(\Omega)\). Let \(\psi_{i}(\sigma)\) be defined by Choosing \(\phi=\psi_{i}(u_{n})\) as a test function in (2.4), we get which implies that Let E be any measurable subset of Ω. For any \(t>0\) we have The above fact and \(f\in L^{1}(\Omega)\) allow us to say that, for any given \(\varepsilon>0\), there exists \(t_{\varepsilon}\) such that Hence Therefore Thus we have proved that \(\lim_{|E|\rightarrow0} \int_{E} u^{p}_{n}=0\).
Vitali’s theorem implies that \(u_{n}^{p}\rightarrow u^{p}\) in \(L^{1}(\Omega)\), i.e. □ Proof of Theorem 1.5 Define \(\beta=p(m-1)\), which satisfies \(p+\beta=\beta m'\). Using \(\phi =u_{n}^{\beta}\) as a test function in (2.4), we have Now, by Lemma 2.2, we get Using Hardy’s inequality, we have We conclude that With this choice of β, by Lemma 2.3 we obtain By Lemma 2.1, we arrive at Furthermore, As a consequence there exists a limit function \(u\in L^{\frac{(\beta +1)2_{s}^{*}}{2}}(\Omega)\). For any \(t>0\) and any measurable \(E\subset\Omega\), we get There exists \(t_{\varepsilon}\) such that We see that \(|E|\rightarrow0\) implies i.e., the sequence \(u_{n}^{p}\) is equiintegrable. Consequently Thus we have proved the existence result. □ Conclusion In this paper, we mainly study the regularizing effect of the nonlinear term \(u^{p}\) and the influence of the Hardy potential on the existence of solutions to fractional Laplacian equations. Specifically, the positive effect of the nonlinear term \(u^{p}\) is shown. References 1. Di Nezza, E., Palatucci, G., Valdinoci, E.: Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. 136, 521–573 (2012) 2. Bucur, C., Valdinoci, E.: Nonlocal Diffusion and Applications. Lecture Notes of the Unione Matematica Italiana, vol. 20. Springer, Cham; Unione Matematica Italiana, Bologna, xii + 155 pp. (2016) 3. Silvestre, L.: Regularity of the obstacle problem for a fractional power of the Laplace operator. Ph.D. thesis, The University of Texas at Austin, 95 pp. (2005) 4. Servadei, R., Valdinoci, E.: On the spectrum of two different fractional operators. Proc. R. Soc. Edinb., Sect. A 144, 831–855 (2014) 5. Musina, R., Nazarov, A.: On fractional Laplacians. Commun. Partial Differ. Equ. 39, 1780–1790 (2014) 6. Abdellaoui, B., Bentifour, R.: Caffarelli–Kohn–Nirenberg type inequalities of fractional order with applications. J. Funct. Anal. 272, 3998–4029 (2017) 7.
Abdellaoui, B., Peral, I., Primo, A.: A remark on the fractional Hardy inequality with a remainder term. C. R. Acad. Sci. Paris, Ser. I 352, 299–303 (2014) 8. Frank, R.L., Seiringer, R.: Non-linear ground state representations and sharp Hardy inequalities. J. Funct. Anal. 255, 3407–3430 (2008) 9. Tzirakis, K.: Sharp trace Hardy–Sobolev inequalities and fractional Hardy–Sobolev inequalities. J. Funct. Anal. 270, 4513–4539 (2016) 10. Nguyen, V.: Some trace Hardy type inequalities and trace Hardy–Sobolev–Maz’ya type inequalities. J. Funct. Anal. 270, 4117–4151 (2016) 11. Barrios, B., Medina, M., Peral, I.: Some remarks on the solvability of non-local elliptic problems with the Hardy potential. Commun. Contemp. Math. 16, 1350046 (2014) 12. Abdellaoui, B., Medina, M., Peral, I., Primo, A.: The effect of the Hardy potential in some Calderón–Zygmund properties for the fractional Laplacian. J. Differ. Equ. 260, 8160–8206 (2016) 13. Dipierro, S., Montoro, L., Peral, I., Sciunzi, B.: Qualitative properties of positive solutions to nonlocal critical problems involving the Hardy–Leray potential. Calc. Var. Partial Differ. Equ. 55(4), Article 99 (2016) 14. Arcoya, D., Boccardo, L.: Regularizing effect of the interplay between coefficients in some elliptic equations. J. Funct. Anal. 268, 1053–1308 (2015) 15. Arcoya, D., Boccardo, L.: Regularizing effect of \(L^{q}\) interplay between coefficients in some elliptic equations. J. Math. Pures Appl. 111, 106–125 (2018) 16. Boccardo, L.: Marcinkiewicz estimates for solutions of some elliptic problems with nonregular data. Ann. Mat. Pura Appl. 188, 591–601 (2009) 17. Boccardo, L., Gallouët, T., Vázquez, J.: Nonlinear elliptic equations in \(\mathbf{R}^{N}\) without growth restrictions on the data. J. Differ. Equ. 105, 334–363 (1993) 18. Adimurthi, A., Boccardo, L., Cirmi, G., Orsina, L.: The regularizing effect of lower order terms in elliptic problems involving Hardy potential. Adv. Nonlinear Stud. 17, 311–317 (2017) 19. 
Leonori, T., Peral, I., Primo, A., Soria, F.: Basic estimates for solutions of a class of nonlocal elliptic and parabolic equations. Discrete Contin. Dyn. Syst. 35, 6031–6068 (2015) Acknowledgements The authors would like to express their gratitude to the anonymous referee for his/her kind suggestions and helpful advice, which have improved the final form of the manuscript. Funding This research was partially supported by the National Science Foundation of China (Nos. 11401473, 11761059), the Science and Technology Planning Project of Gansu Province (No. 1610RJZA102), the Fundamental Research Funds for the Central Universities (Nos. 31920170001, 31920170147) and the research and innovation teams of Northwest Minzu University. Ethics declarations Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?

1. Reflexivity: for any \( a \in \mathbb{R} \), \( a \le a \).
2. Transitivity: for any \( a, b, c \in \mathbb{R} \), \( a \le b \) and \( b \le c \) implies \( a \le c \).
3. Antisymmetry: for any \( a, b \in \mathbb{R} \), \( a \le b \) and \( b \le a \) implies \( a = b \).
4. Totality: for any \( a, b \in \mathbb{R} \), we have either \( a \le b \) or \( b \le a \).

So, yes.

Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) seeing the order of the (extended) real line as the Dedekind–MacNeille completion of the rationals. Matthew has told us interesting things about it before. Hausdorff, on his part, in the book I mentioned here, says that any total order that is dense and without \( (\omega,\omega^*) \) gaps has the real line embedded in it. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

That's an interesting question, Jonathan.

Jonathan Castello wrote:

> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

In fact, while they are not isomorphic as lattices, they are isomorphic as mere posets, as you intuited. First, we can observe that \(|\mathbb{R}| = |^\ast \mathbb{R}|\). This is because \(^\ast \mathbb{R}\) embeds \(\mathbb{R}\) and is constructed from countably infinitely many copies of \(\mathbb{R}\) by taking a quotient algebra modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple of other threads. Next, observe that all unbounded dense linear orders of cardinality \(\aleph_0\) are isomorphic. This is due to a rather old theorem credited to Georg Cantor. Next, apply the Morley categoricity theorem. From this we have that all unbounded dense linear orders with cardinality \(\kappa \geq \aleph_0\) are isomorphic. This is referred to in model theory as \(\kappa\)-categoricity.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.

Puzzle MD 1: Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.

Hi Matthew, nice application of the categoricity theorem! One question if I may.
You said:

> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.

But in my understanding the lattice and poset structure is inter-translatable as in here. Can two lattices be isomorphic and their associated posets not?

(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)

> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving infima and suprema, i.e. limits, then they are order isomorphic. The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive. From model theory we have two maps \(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \) and \(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \) such that:

- if \(x \leq_{\mathbb{R}} y\) then \(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\)
- if \(p \leq_{^\ast \mathbb{R}} q\) then \(\psi(p) \leq_{\mathbb{R}} \psi(q)\)
- \(\psi(\phi(x)) = x\) and \(\phi(\psi(p)) = p\)

Now consider \(\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\). The hyperreals famously violate the Archimedean property. Because of this, \(\bigwedge_{^\ast \mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\) does not exist. On the other hand, if we consider \( \bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\), that does exist by the completeness of the real numbers (as it is bounded below by \(\psi(0)\)).
Hence $$\bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\right)$$ So \(\psi\) cannot be a complete lattice homomorphism, even though it is part of an order isomorphism. However, just to complicate matters, I believe that \(\phi\) and \(\psi\) are a mere lattice isomorphism, preserving finite meets and joins.
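Regarding Puzzle MD 1: the classical proof is Cantor's back-and-forth argument, which alternately extends a finite order-preserving partial map until it exhausts both enumerations. A small illustrative sketch (hypothetical helper names, and only a finite stage of the construction) over two enumerated dense unbounded orders inside \(\mathbb{Q}\):

```python
from fractions import Fraction

def match(pairs, a, side):
    """Given order-preserving pairs [(p, q), ...] and a new point a on the
    given side (0 = first order, 1 = second), return a partner on the other
    side keeping the extended map order-preserving.  Density and
    unboundedness guarantee a partner exists: take a midpoint, or step one
    unit past the extremes."""
    other = 1 - side
    below = [p[other] for p in pairs if p[side] < a]
    above = [p[other] for p in pairs if p[side] > a]
    if below and above:
        return (max(below) + min(above)) / 2   # density
    if below:
        return max(below) + 1                  # unbounded above
    if above:
        return min(above) - 1                  # unbounded below
    return Fraction(0)

def back_and_forth(xs, ys, rounds):
    """Cantor's zig-zag: alternately pull in the next x and the next y,
    yielding a finite stage of the isomorphism."""
    pairs = []
    for i in range(rounds):
        x = xs[i]
        if all(p[0] != x for p in pairs):
            pairs.append((x, match(pairs, x, 0)))
        y = ys[i]
        if all(p[1] != y for p in pairs):
            pairs.append((match(pairs, y, 1), y))
    return sorted(pairs)

qs = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(-1)]
ds = [Fraction(0), Fraction(1, 2), Fraction(-3), Fraction(7, 4)]
stage = back_and_forth(qs, ds, rounds=4)
# The finite stage is order-preserving in both coordinates:
print(all(a < c and b < d for (a, b), (c, d) in zip(stage, stage[1:])))  # True
```

In the full proof, the union of the stages over all rounds is a bijection because every element of each enumeration is eventually pulled in.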
I'm intrigued by the following idea but I don't know how to carry it out. I have a r.v. $X$ with given distribution $f_X(x)$ and a second variable $Y=2X$. The goal is to find $f_Y(y)$. I know the traditional solution, but can I say that, from $y=2x$, $f(y\mid x)=2x \cdot \delta(y-2x)$? If that is correct, then I can form the joint $f(x,y)$ and marginalize out $x$ to reach my goal. I'm trying to do it but I'm missing something.
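For what it's worth, the traditional change-of-variables answer is \(f_Y(y) = \tfrac{1}{2} f_X(y/2)\), and the conditional is the plain point mass \(f(y\mid x)=\delta(y-2x)\), without the extra \(2x\) factor. A quick Monte Carlo sanity check of the scaling rule, using an illustrative standard normal for \(f_X\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)  # X ~ N(0, 1), illustrative choice of f_X
y = 2.0 * x                        # Y = 2X

# f_Y(y) = f_X(y/2) / 2 predicts Y ~ N(0, 2): mean 0, standard deviation 2.
print(y.mean(), y.std())  # ≈ 0.0, ≈ 2.0
```

Marginalizing \(f(x,y)=f_X(x)\,\delta(y-2x)\) over \(x\) reproduces the same answer; the \(1/2\) appears from the delta's scaling property \(\delta(y-2x) = \tfrac{1}{2}\delta(x - y/2)\).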
We describe techniques for constructing models of size continuum in ω steps by simultaneously building a perfect set of enmeshed countable Henkin sets. Such models have perfect, asymptotically similar subsets. We survey applications involving Borel models, atomic models, two-cardinal transfers and models respecting various closure relations. Low energy and protein intakes have been associated with an increased risk of malnutrition in outpatients with chronic obstructive pulmonary disease (COPD). We aimed to assess the energy and protein intakes of hospitalised COPD patients according to nutritional risk status and requirements, and the relative contribution from meals, snacks, drinks and oral nutritional supplements (ONS), and to examine whether either energy or protein intake predicts outcomes. Subjects were COPD patients (n 99) admitted to Landspitali University Hospital in 1 year (March 2015–March 2016).
Patients were screened for nutritional risk using a validated screening tool, and energy and protein intake for 3 d, 1–5 d after admission to the hospital, was estimated using a validated plate diagram sheet. The percentage of patients reaching energy and protein intake ≥75 % of requirements was on average 59 and 37 %, respectively. Malnourished patients consumed less at mealtimes and more from ONS than lower-risk patients, resulting in no difference in total energy and protein intakes between groups. No clear associations between energy or protein intake and outcomes were found, although the association between energy intake, as percentage of requirement, and mortality at 12 months of follow-up was of borderline significance (OR 0·12; 95 % CI 0·01, 1·15; P=0·066). Energy and protein intakes during hospitalisation in the study population failed to meet requirements. Further studies are needed on how to increase energy and protein intakes during hospitalisation and after discharge and to assess whether higher intake in relation to requirement of hospitalised COPD patients results in better outcomes. UBV observations, plus a few in R and I, were obtained during the 1984 eclipse of RZ Ophiuchi. Bolometric corrections and temperature calibration, applied to the magnitudes and colours of the stars, were used to derive the ratio of the stellar radii, and a light curve solution was obtained with this parameter fixed. Neither component fills its Roche lobe, but the system may be at a late stage of case C mass transfer. We introduce the concept of a locally finite abstract elementary class and develop the theory of disjoint \((\le \lambda, k)\)-amalgamation for such classes.
From this we find a family of complete \(L_{\omega_1,\omega}\) sentences \(\phi_r\) that a) homogeneously characterize \(\aleph_r\) (improving results of Hjorth [11] and Laskowski–Shelah [13] and answering a question of [21]), while b) the \(\phi_r\) provide the first examples of a class of models of a complete sentence in \(L_{\omega_1,\omega}\) where the spectrum of cardinals in which amalgamation holds is other than none or all. We introduce the notion of pseudoalgebraicity to study atomic models of first order theories (equivalently, models of a complete sentence of \(L_{\omega_1,\omega}\)). Theorem: Let T be any complete first-order theory in a countable language with an atomic model. If the pseudominimal types are not dense, then there are \(2^{\aleph_0}\) pairwise nonisomorphic atomic models of T, each of size \(\aleph_1\). NGC 7027 is justifiably THE template spectrum for PNe. Its vast range of emission species – from molecular and neutral to ions with ionization potential > 120 eV – its high surface brightness and accessibility for northern observatories make it the PN laboratory of choice. However the quality of the spectra from the UV to the IR is mixed, many line fluxes and identifications still remaining unchecked from photographic or image tube spectra. Very deep spectra of NGC 7027 (emission line strengths \(<10^{-4}\) of Hβ) in the 0.65 to 1.05 μm region (Baluteau et al. 1995) showed the presence of many faint emission lines. Pequignot & Baluteau (1994) showed that heavy elements from the 4th, 5th and 6th rows of the Periodic Table have much higher abundances than Solar, confirming the synthesis of neutron capture elements in low mass stars and providing new constraints on stellar evolution theory. We report the direct detection of cyclic diameter variations in the Mira variable χ Cygni.
Interferometric observations made between 1997 July and 1998 September, using the Cambridge Optical Aperture Synthesis Telescope (COAST), indicate periodic changes in the apparent angular diameter with amplitude 45 per cent of the smallest value. The measurements were made in a 50 nm bandpass centred on 905 nm, which is only moderately contaminated by molecular absorption features. To assess the effects of atmospheric stratification on the apparent diameter measured in this band, we have also measured near-infrared diameters for a sample of five Miras, in both the J-band (1.3 μm) and Wing's (1971) 1.04 μm band, which is expected to isolate essentially pure continuum emission. We present J-band visibility curves which indicate that the intensity profiles of the stars in the sample differ greatly from each other. We have conducted a survey of the Lyα forest in the redshift domain 2.15 < z < 3.37 in front of nine QSOs within a 1° field to probe spatial structure along planes perpendicular to the line-of-sight. We find evidence for correlations of the Lyα absorption line wavelengths in the whole redshift range, and, at z > 2.8, of their equivalent widths. Such a correlation is consistent with the emerging picture that Lyα lines arise in filaments or large, flattened structures. In primates, the cortex adjoining the rostral border of V2 has been variously interpreted as belonging to a single visual area, V3, with dorsal V3 (V3d) representing the lower visual quadrant and ventral V3 (V3v) representing the upper visual quadrant, V3d and V3v constituting separate, incomplete visual areas, V3d and ventral posterior (VP), or V3d being divided into several visual areas, including a dorsomedial (DM) visual area, a medial visual area (M), and a dorsal extension of VP (or VLP). In our view, the evidence from V1 connections strongly supports the contention that V3v and V3d are parts of a single visual area, V3, and that DM is a separate visual area along the rostral border of V3d.
In addition, the retinotopy revealed by V1 connection patterns, microelectrode mapping, optical imaging mapping, and functional magnetic resonance imaging (fMRI) mapping indicates that much of the proposed territory of V3d corresponds to V3. Yet, other evidence from microelectrode mapping and anatomical connection patterns supports the possibility of an upper quadrant representation along the rostral border of the middle of dorsal V2 (V2d), interpreted as part of DM or DM plus DI, and along the midline end of V2d, interpreted as the visual area M. While the data supporting these different interpretations appear contradictory, they also seem, to some extent, valid. We suggest that V3d may have a gap in its middle, possibly representing part of the upper visual quadrant that is not part of DM. In addition, another visual area, M, is likely located at the DM tip of V3d. There is no evidence for a similar disruption of V3v. For the present, we favor continuing the traditional concept of V3 with the possible modification of a gap in V3d in at least some primates. This account by three American authors of one thousand years of exploration in the Arctic regions, culminating in the voyage and loss of the USS Polaris in 1872, was published in 1874. The work, which is derived from many earlier published accounts, begins with a short and highly sentimental biography of the famous American explorer Elisha Kane (whose own works are reissued in this series). It continues with the geography of the Arctic regions, and the voyages of the Vikings and early modern explorers, describing the activities of the whaling fleets as well as the oceanographic and scientific researches of the naval expeditions from many countries seeking the North-West Passage. This is a useful and readable synthesis, which ends with a stirring appeal to the British Admiralty to resume the work of polar exploration which had gone into decline after the end of the official search for Sir John Franklin.
I was wondering if there is some code available for Hamiltonian simulation of sparse matrices, and, if implementations exist, whether they correspond to a divide-and-conquer approach or a quantum walk approach. In this article the authors stated that they used the Group Leader's algorithm in order to obtain the circuit implementing the Hamiltonian simulation used as a subroutine in an instance of the HHL algorithm. Unfortunately though, I did not understand very well how they actually managed to find the circuit with that method. Update on the subject: there are several implementations in the wild. I don't know if you still need them, but even if you don't it will hopefully be useful to other people. I chose to list the implementations by "provenance" rather than by the algorithm used because there are not that many implementations. This may change in the future. Qiskit-aqua: Qiskit is IBM's library for quantum computing. Qiskit-aqua is the part of the library that deals with quantum algorithms. The Qiskit-aqua implementation can only simulate Hamiltonians that are a sum of Hermitian matrices that can be written as tensor products of Pauli operators. To do so, they used the Trotter-Suzuki formula. The implementation is available here (method `evolve` in the class `qiskit.aqua.operator.Operator`). simcount: Implementation of 3 Hamiltonian simulation algorithms for a specific kind of Hamiltonian. Based on Quipper. All their work is explained in the paper Toward the first quantum simulation with quantum speedup (Andrew M. Childs, Dmitri Maslov, Yunseong Nam, Neil J. Ross, Yuan Su, 2017). The 3 algorithms (and their variations) implemented in the repository have been optimised for a very specific Hamiltonian $$H = \sum_{j=1}^n \left( \vec{\sigma}_j \cdot{} \vec{\sigma}_{j+1} + h_j \sigma_j^z \right).$$ The implementations are available here.
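To make the Trotter-Suzuki idea concrete, here is a minimal NumPy sketch (not tied to Qiskit's API) that simulates a toy two-qubit Hamiltonian written as a sum of Pauli tensor products and compares the first-order product formula against the exact evolution:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_h(H, t):
    """Exact exp(-i H t) for a Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Toy Hamiltonian as a sum of non-commuting Pauli terms: H = X⊗X + Z⊗I
A = np.kron(X, X)
B = np.kron(Z, I2)
H = A + B

t, n = 1.0, 200
# First-order Trotter-Suzuki: exp(-iHt) ≈ (exp(-iAt/n) exp(-iBt/n))^n
step = expm_h(A, t / n) @ expm_h(B, t / n)
U_trotter = np.linalg.matrix_power(step, n)

err = np.linalg.norm(U_trotter - expm_h(H, t), 2)
print(err)  # small, of order t^2 * ||[A, B]|| / n
```

Each factor acts on a single Pauli term, which is what makes the per-step circuits easy to compile; the error shrinks linearly in the number of steps for the first-order formula.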
I have a set of $N$ points $(x_i,y_i)$. $X$ and $Y$ both have some noise associated with them due to measurement inaccuracy; however, the relationship between the underlying true values (i.e. if we could remove the noise) should be of the form $y = mx + c$ where $m$ and $c$ are constants. Due to the measurement inaccuracies in both $X$ and $Y$, I will get uncertainties in both my $m$ and $c$ values. 1) If I assume that my measurement inaccuracies for both $X$ and $Y$ are Gaussian distributed, $\epsilon \sim N(0,\sigma)$, how do I obtain the most likely $(m,c)$ and the uncertainties/confidence in both? 2) If I instead know that the uncertainties are different for $X$ and $Y$, such that $\sigma_x \neq \sigma_y$ where $\epsilon_x \sim N(0,\sigma_x)$, $\epsilon_y \sim N(0,\sigma_y)$, can I get a different estimate of $(m,c)$ and the uncertainties/confidence in both?
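For reference, case 2) with a known noise-variance ratio \(\delta=\sigma_y^2/\sigma_x^2\) is classical Deming regression, and \(\delta = 1\) (orthogonal least squares) covers case 1). A sketch using the closed-form slope from my recollection of the standard formula; uncertainties can then be obtained by bootstrapping the \((x_i, y_i)\) pairs:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: fit y = m*x + c when both x and y carry Gaussian
    noise, with delta = sigma_y**2 / sigma_x**2.  delta = 1 gives
    orthogonal (total least squares) regression.  Assumes sxy != 0."""
    xbar, ybar = x.mean(), y.mean()
    sxx = ((x - xbar) ** 2).mean()
    syy = ((y - ybar) ** 2).mean()
    sxy = ((x - xbar) * (y - ybar)).mean()
    m = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    c = ybar - m * xbar
    return m, c

# Synthetic check: true line y = 2x + 1 with noise on both coordinates
rng = np.random.default_rng(1)
x_true = np.linspace(0, 10, 1000)
x = x_true + rng.normal(0, 0.1, x_true.size)           # noisy X
y = 2 * x_true + 1 + rng.normal(0, 0.1, x_true.size)   # noisy Y
m, c = deming(x, y)
print(m, c)  # m ≈ 2, c ≈ 1
```

Unlike ordinary least squares, which is biased toward shallow slopes when \(X\) is noisy, this estimator accounts for the error in both coordinates.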
The Airy function for a cracked plate

Remember that in the previous section we found that 2D problems with a linear elastic material law can be stated as finding $\Phi$ so that the boundary conditions and \begin{equation} \nabla^2\nabla^2 \Phi =0 \label{eq:biharmonic},\end{equation} are satisfied. The problem is thus to find the suitable $\Phi$.

As shown in the previous problem of a plate with an elliptical hole, there is a singularity if $b\rightarrow 0$. We will then have to modify the resolution method when considering a plate with a crack. Irwin has solved this problem for three different loading modes. There are two modes loading the crack in its plane: Mode I, also called the opening mode as it tends to open the crack lips, see Picture II.8, and Mode II, also called the sliding mode as it tends to make the two crack lips slide on each other, see Picture II.9. Mode III, also called the shearing mode, loads the crack out of its plane, see Picture II.10. To solve the problem, the polar coordinates linked to the crack tip will be used, see Picture II.11. The only things that differentiate the three modes are the boundary conditions of the problem, which can be written directly in terms of the polar coordinates as (\ref{eq:bcModeI}-\ref{eq:bcModeIII}):

\begin{equation}\begin{cases} \mathbf{\sigma}_{zz} = 0 \text{ or } \mathbf{\varepsilon}_{zz} = 0 \\ \mathbf{\sigma}_{yy}(\theta \pm \pi) = 0 \\ \mathbf{\sigma}_{xy}(\theta \pm \pi) = 0 \\ \mathbf{u}_x (\theta) = \mathbf{u}_x (-\theta) \\ \mathbf{u}_y (\theta) = -\mathbf{u}_y (-\theta) \end{cases},\label{eq:bcModeI}\end{equation}

\begin{equation}\begin{cases} \mathbf{\sigma}_{zz} = 0\text{ or } \mathbf{\varepsilon}_{zz} = 0 \\ \mathbf{\sigma}_{yy}(\theta \pm \pi) = 0 \\ \mathbf{\sigma}_{xy}(\theta \pm \pi) = 0 \\ \mathbf{u}_x (\theta) = -\mathbf{u}_x (-\theta) \\ \mathbf{u}_y (\theta) = \mathbf{u}_y (-\theta) \end{cases},\label{eq:bcModeII}\end{equation}

Complex variables and functions
Once the boundary conditions are known, it is possible to solve the problem. Because of the shape of the crack, there will be a discontinuity in the solution at the crack location. It is thus convenient to solve the problem in terms of a complex variable $\zeta=x+iy=r \mathrm{e}^{i\theta}$ using axes linked to the crack tip, see Picture II.12. Let us now recall the concept of a complex function \begin{equation} z\left(\zeta\right)= \alpha\left(x,\,y\right)+i\beta\left(x,\,y\right)\label{eq:complexz},\end{equation} constructed from two real functions $\alpha$ and $\beta$ so that $\alpha=\mathcal{R}\left(z\left(\zeta\right)\right)$ and $\beta=\mathcal{I}\left(z\left(\zeta\right)\right)$. Assuming $z$ is differentiable, i.e. it is analytic, one can prove that \begin{equation} z'\left(\zeta\right) = \partial_x z=-i\partial_y z \label{eq:analyticz}.\end{equation} Using this last result leads directly to the Cauchy-Riemann relations \begin{equation}\begin{cases} &\partial_x z = \alpha_{,x} +i \beta_{,x} = -i\partial_y z = -i \alpha_{,y} + \beta_{,y} \\& \iff \alpha_{,x} = \beta_{,y} \text{ and } \alpha_{,y} = - \beta_{,x}\end{cases}\label{eq:cauchyRiemann1}.\end{equation} Differentiating these last relations yields \begin{equation}\begin{cases} &\alpha_{,xx} = \beta_{,xy} = - \alpha_{,yy} \text{ and } & \beta_{,xx} =- \alpha_{,xy} = - \beta_{,yy}\\ & \iff \nabla^2 \alpha = \nabla^2 \beta= 0 \end{cases}.\label{eq:cauchyRiemann}\end{equation} This means that if $z$ is analytic, then its real and imaginary parts (both real functions) satisfy the Laplace equation. Conversely, if a real function satisfies the Laplace equation, then it is the real or imaginary part of some analytic function $z$. Solution of the bi-harmonic equation In order to solve (\ref{eq:biharmonic}), we now need to consider a change of variables to use the complex variable $\zeta$ defined on Picture II.12.
As the original function $\Phi$ depends on two original variables $x$ and $y$, we also need two new variables. To easily express $x$ and $y$ in terms of the two new variables we select $\zeta$ and its conjugated value $\bar{\zeta}$, as \begin{equation}\begin{cases} x = \frac{\zeta +\bar{\zeta}}{2}\\ y = i \frac{\bar{\zeta} -\zeta}{2}\end{cases},\label{eq:chgeComplex}\end{equation} and we have \begin{equation} \Phi(x,y) = \Phi(\zeta,\, \bar{\zeta}) \end{equation} Let us define $\Psi$ such that \begin{equation} \nabla^2\Phi\left(\zeta,\,\bar{\zeta}\right) = \Psi\left(\zeta,\,\bar{\zeta}\right)\,\label{eq:Psi2} \end{equation} To express the Laplacian of $\Phi$ in terms of the complex variables, we need to compute the derivatives of the Airy function using the change of variables (\ref{eq:chgeComplex}): \begin{equation}\begin{cases} \Phi_{,xx} = & \partial_x\left(\Phi_{,\zeta} + \Phi_{,\bar{\zeta}} \right) = \Phi_{,\zeta\zeta} + 2\Phi_{,\zeta\bar{\zeta}}+ \Phi_{,\bar{\zeta}\bar{\zeta}} \\ \Phi_{,yy} = & \partial_y\left(i \Phi_{,\zeta} - i \Phi_{,\bar{\zeta}} \right) = -\Phi_{,\zeta\zeta} + 2\Phi_{,\zeta\bar{\zeta}}- \Phi_{,\bar{\zeta}\bar{\zeta}} \end{cases},\label{eq:derviPhi}\end{equation} and we finally have \begin{equation} \Psi = \nabla^2\Phi = \Phi_{,xx} + \Phi_{,yy} = 4\Phi_{,\zeta\bar{\zeta}}.\label{eq:Psi}\end{equation} Now the bi-harmonic equation (\ref{eq:biharmonic}) is rewritten \begin{equation} \nabla^2\nabla^2\Phi\left(\zeta,\,\bar{\zeta}\right) = \nabla^2\Psi\left(\zeta,\,\bar{\zeta}\right)=0. \end{equation} Since $\Psi$ satisfies the Laplace equation (\ref{eq:cauchyRiemann}), it is the real part of a function $z(\zeta)$ and we have from (\ref{eq:Psi}) \begin{equation} 4\Phi_{,\zeta\bar{\zeta}}=\Psi\left(\zeta,\,\bar{\zeta}\right) = \mathcal{R}z=\frac{z+\bar{z}}{2} \label{eq:system}\end{equation} with $\bar{z} = z(\bar{\zeta})$, a function of $\bar{\zeta}$ only. 
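The fact that a harmonic function is the real part of an analytic function can be spot-checked numerically. The following small illustration (not part of the notes) applies a five-point finite-difference Laplacian to $\mathcal{R}(\zeta^3)$, which should vanish:

```python
# Numerical sanity check: the real part of the analytic function
# z(zeta) = zeta**3 satisfies Laplace's equation.
def alpha(x, y):
    # Re(zeta**3) = x**3 - 3*x*y**2
    return ((x + 1j * y) ** 3).real

h, x0, y0 = 1e-3, 0.7, -0.4
# five-point stencil approximation of the Laplacian of alpha at (x0, y0)
lap = (alpha(x0 + h, y0) + alpha(x0 - h, y0) + alpha(x0, y0 + h)
       + alpha(x0, y0 - h) - 4 * alpha(x0, y0)) / h**2
print(abs(lap))  # ~0 (exact for a cubic, up to rounding)
```

The stencil is exact for polynomials of degree three, so the residual is pure floating-point rounding.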
The equation $4\Phi_{,\zeta\bar{\zeta}}=\frac{z+\bar{z}}{2}$ is now integrated with respect to $\zeta$ and to $\bar{\zeta}$ (do not forget, during the integration with respect to one variable, to add a function which is constant in terms of that variable but not in terms of the other one). Let $4\Omega'(\zeta) = z(\zeta)$ and $\omega(\zeta)$ be unknown functions, then \begin{equation} \Phi = \frac{\bar{\zeta}\Omega+\zeta\bar{\Omega} + \omega + \bar{\omega}}{2} \label{eq:PhiCrack},\end{equation} which satisfies (\ref{eq:system}) as \begin{equation} 4 \Phi_{,\zeta\bar{\zeta}} = 2\left(\Omega' + \bar{\Omega}'\right)=\frac{z+\bar{z}}{2} \end{equation} Note well that $\bar{\Omega} = \Omega(\bar{\zeta})$ and $\bar{\omega} = \omega(\bar{\zeta})$ are functions of $\bar{\zeta}$ only. What needs to be understood at this level is the following. If one can find two functions $\Omega(\zeta)$ and $\omega(\zeta)$ so that the BCs of the problem are satisfied, then we have solved the problem of a cracked plate. Indeed for any expressions of $\Omega(\zeta)$ and $\omega(\zeta)$, the Airy function (\ref{eq:PhiCrack}) automatically satisfies the bi-harmonic equation, and thus the linear momentum and elastic constitutive equations. So we "just" have to find the proper functions $\Omega(\zeta)$ and $\omega(\zeta)$ which satisfy the BCs. To do so we need to express the stress tensor and the displacement field from these functions. Stress components in terms of the unknown functions The stress tensor can be obtained from the derivative of the Airy function as given from the expression in the Cartesian coordinates. We will need to apply the change of variables, but first we will express the derivatives of (\ref{eq:PhiCrack}) with respect to the new variables $\zeta$ and $\bar{\zeta}$.
As $\Omega(\zeta)$ and $\omega(\zeta)$ are functions of $\zeta$ only, and as $\bar{\Omega} = \Omega(\bar{\zeta})$ and $\bar{\omega} = \omega(\bar{\zeta})$ are functions of $\bar{\zeta}$ only, one has \begin{equation}\begin{cases} \Phi_{,\zeta\zeta} =& \frac{\bar{\zeta}\Omega''+\omega''}{2}\\\Phi_{,\zeta\bar{\zeta}} =& \frac{\Omega'+\bar{\Omega}'}{2}\\\Phi_{,\bar{\zeta}\bar{\zeta}} =& \frac{\zeta\bar{\Omega}''+\bar{\omega}''}{2} \end{cases}\label{eq:DerivPhiCrack}.\end{equation} Now, starting from the stress expression in Cartesian coordinates, and using (\ref{eq:DerivPhiCrack}), the change of variables (\ref{eq:chgeComplex}) leads to \begin{equation}\begin{cases} \mathbf{\sigma}_{xx} = & \Phi_{,yy} =-\Phi_{,\zeta\zeta} + 2\Phi_{,\zeta\bar{\zeta}}- \Phi_{,\bar{\zeta}\bar{\zeta}}=\Omega' +\bar{\Omega}' -\frac{\bar{\zeta}\Omega''+\zeta\bar{\Omega}'' + \omega'' + \bar{\omega}''}{2}\\ \mathbf{\sigma}_{yy} = & \Phi_{,xx}=\Phi_{,\zeta\zeta} + 2\Phi_{,\zeta\bar{\zeta}}+ \Phi_{,\bar{\zeta}\bar{\zeta}}=\Omega' +\bar{\Omega}' +\frac{\bar{\zeta}\Omega''+\zeta\bar{\Omega}'' + \omega'' + \bar{\omega}''}{2} \\ \mathbf{\sigma}_{xy} = & -\Phi_{,xy}=-i\Phi_{,\zeta\zeta} +i \Phi_{,\bar{\zeta}\bar{\zeta}}=i \frac{\zeta\bar{\Omega}''-\bar{\zeta}\Omega'' + \bar{\omega}''- \omega'' }{2}\end{cases} \label{eq:stressAiry}. \end{equation} Displacement field in terms of the unknown functions How can the displacements be written in terms of the two functions $\Omega(\zeta)$ and $\omega(\zeta)$? We know that the displacement field is the integration of the strain field, which can be related to the stress tensor in terms of the Hooke law. To introduce the complex variables, we define $\mathbf{u}\left(\zeta,\,\bar{\zeta}\right) = \mathbf{u}_x\left(x,\,y\right) + i \mathbf{u}_y \left(x,\,y\right)$.
The strain field can be introduced in this equation by simple differentiation \begin{equation}\begin{cases}\mathbf{u}_{,\bar{\zeta}} = \left(\mathbf{u}_{x,x}+i\mathbf{u}_{y,x}\right)\partial_{\bar{\zeta}}x +\left(\mathbf{u}_{x,y}+i\mathbf{u}_{y,y}\right)\partial_{\bar{\zeta}}y = \frac{1}{2}\left(\mathbf{u}_{x,x}+i\mathbf{u}_{y,x}+i\mathbf{u}_{x,y}-\mathbf{u}_{y,y}\right)\\ \iff \mathbf{u}_{,\bar{\zeta}}=\frac{1}{2}\left(\mathbf{\varepsilon}_{xx}-\mathbf{\varepsilon}_{yy}\right) +i\mathbf{\varepsilon}_{xy}\end{cases}.\label{eq:dU}\end{equation} Introducing Hooke's law and the stress components (\ref{eq:stressAiry}), this becomes \begin{equation} \mathbf{u}_{,\bar{\zeta}} = \frac{1+\nu}{2E}\left(\mathbf{\sigma}_{xx}-\mathbf{\sigma}_{yy}+2i\mathbf{\sigma}_{xy}\right) = -\frac{1+\nu}{E}\left(\zeta\bar{\Omega}''+\bar{\omega}''\right)\label{eq:derivU}, \end{equation} which can be integrated with respect to $\bar{\zeta}$ as \begin{equation} \mathbf{u} = -\frac{1+\nu}{E}\left(\zeta\bar{\Omega}'+\bar{\omega}' + \mu \left(\zeta\right)\right), \label{eq:uTmpAiry} \end{equation} where the function $\mu\left(\zeta\right)$ is constant with respect to $\bar{\zeta}$ and should be defined to satisfy the P-$\sigma$ or P-$\varepsilon$ states.
Plane stress state On the one hand, from the definition of $\mathbf{u}$, one has \begin{equation}\begin{cases} \mathbf{u}_{,\zeta} &= \left(\mathbf{u}_{x,x}+i\mathbf{u}_{y,x}\right)\partial_{\zeta}x +\left(\mathbf{u}_{x,y}+i\mathbf{u}_{y,y}\right)\partial_{\zeta}y =\frac{1}{2}\left(\mathbf{u}_{x,x}+i\mathbf{u}_{y,x}-i\mathbf{u}_{x,y}+\mathbf{u}_{y,y}\right)\\ \bar{\mathbf{u}}_{,\bar{\zeta}} &= \left(\mathbf{u}_{x,x}-i\mathbf{u}_{y,x}\right)\partial_{\bar{\zeta}}x +\left(\mathbf{u}_{x,y}-i\mathbf{u}_{y,y}\right)\partial_{\bar{\zeta}}y = \frac{1}{2}\left(\mathbf{u}_{x,x}-i\mathbf{u}_{y,x}+i\mathbf{u}_{x,y}+\mathbf{u}_{y,y}\right)\end{cases}\label{eq:dU1},\end{equation} which implies \begin{equation} \mathbf{u}_{,\zeta}+\bar{\mathbf{u}}_{,\bar{\zeta}} = \mathbf{u}_{x,x}+\mathbf{u}_{y,y}=\mathbf{\varepsilon}_{xx}+\mathbf{\varepsilon}_{yy}=\mathbf{\varepsilon}_{\gamma\gamma}.\label{eq:dU2} \end{equation} On the other hand, starting from (\ref{eq:uTmpAiry}), one has \begin{equation} \mathbf{u}_{,\zeta}+\bar{\mathbf{u}}_{,\bar{\zeta}} =-\frac{1+\nu}{E}\left(\bar{\Omega}'+ \mu'+\Omega'+ \bar{\mu}'\right).\label{eq:dU3} \end{equation} Combining (\ref{eq:dU2}) and (\ref{eq:dU3}) leads to \begin{equation} \mathbf{\varepsilon}_{\gamma\gamma} =-\frac{1+\nu}{E}\left(\bar{\Omega}'+ \mu'+\Omega'+ \bar{\mu}'\right).\label{eq:traceEps} \end{equation} Hooke's law expressed in plane stress (P-$\sigma$) can be rewritten using (\ref{eq:stressAiry}), and becomes \begin{equation} \mathbf{\varepsilon}_{\gamma\gamma} =\frac{1-\nu}{E}\mathbf{\sigma}_{\gamma\gamma} = \frac{2\left(1-\nu\right)}{E}\left(\Omega'+\bar{\Omega}'\right) .\label{eq:traceEpsPstress} \end{equation} Comparing this last equation with (\ref{eq:traceEps}) gives the expression of the missing function \begin{equation} \mu\left(\zeta\right) = -\frac{3-\nu}{1+\nu}\Omega= - \kappa \Omega\left(\zeta\right), \end{equation} with $ \kappa =\frac{3-\nu}{1+\nu}$.
Plane strain state We can proceed as for the P-$\sigma$ state up to (\ref{eq:traceEps}). At this point, Hooke's law expressed in plane strain (P-$\varepsilon$) can be rewritten using (\ref{eq:stressAiry}), and becomes \begin{equation}\mathbf{\varepsilon}_{\gamma\gamma} =\frac{\left(1+\nu\right)\left(1-2\nu\right)}{E}\mathbf{\sigma}_{\gamma\gamma}= \frac{2\left(1+\nu\right)\left(1-2\nu\right)}{E}\left(\Omega'+\bar{\Omega}'\right).\label{eq:traceEpsPstrain} \end{equation} Comparing this last equation with (\ref{eq:traceEps}) gives the expression of the missing function \begin{equation} \mu\left(\zeta\right) = -\left(3-4\nu\right)\Omega= - \kappa \Omega\left(\zeta\right), \end{equation} with $ \kappa =3-4\nu$. Summary of plane stress and strain states For both cases we found the expression of the missing function as being \begin{equation} \mu\left(\zeta\right) = - \kappa \Omega\left(\zeta\right).\label{eq:mu} \end{equation} This means (\ref{eq:uTmpAiry}) can be rewritten \begin{equation} \mathbf{u}\left(\zeta,\,\bar{\zeta}\right) = -\frac{1+\nu}{E}\left(\zeta\bar{\Omega}'\left(\bar{\zeta}\right)+\bar{\omega}'\left(\bar{\zeta}\right) -\kappa\Omega\left(\zeta\right)\right),\label{eq:uAiry} \end{equation} with \begin{equation}\kappa = \begin{cases}\frac{3-\nu}{1+\nu} & \text{ in P-}\sigma \\ 3-4\nu & \text{ in P-}\varepsilon\end{cases}.\label{eq:kappa}\end{equation} Selection of the complex functions We can now express the stress and displacement fields of the problem in terms of the two unknown functions $\Omega(\zeta)$ and $\omega(\zeta)$ that have to be chosen so that the BCs are satisfied. From Picture II.12, it appears that the functions should be able to capture a discontinuity in the displacement field (\ref{eq:uAiry}) for $\theta = \pi$.
Forms of the unknown functions that satisfy this requirement are \begin{equation}\begin{cases} \Omega\left(\zeta\right) = &\sum_{\lambda} \left(C_1+iC_2\right)\zeta^{\lambda+1}= \sum_{\lambda} \left(C_1+iC_2\right)r^{\lambda+1}\exp{\left(i\theta\left(\lambda+1\right)\right)}\\ \omega'\left(\zeta\right) = &\sum_{\lambda} \left(C_3+iC_4\right)\zeta^{\lambda+1}= \sum_{\lambda}\left(C_3+iC_4\right)r^{\lambda+1}\exp{\left(i\theta\left(\lambda+1\right)\right)} \end{cases},\label{eq:OmegaSeries}\end{equation} with four constants $C_1$, $C_2$, $C_3$ and $C_4$ associated to each value of $\lambda$ of the series. Indeed the displacement field (\ref{eq:uAiry}) becomes \begin{eqnarray} \mathbf{u} &=& \frac{1+\nu}{E}\sum_{\lambda} r^{\lambda+1} \left[\kappa \left(C_1+iC_2\right)\exp{\left(i\theta\left(\lambda+1\right)\right)}-\right.\nonumber\\ &&\left.\left(C_1-iC_2\right)\left(\lambda+1\right)\exp{\left(i\theta\left(1-\lambda\right)\right)}- \left(C_3-iC_4\right)\exp{\left(-i\theta\left(\lambda+1\right)\right)}\right],\label{eq:uSeries} \end{eqnarray} which is discontinuous across $\theta = \pm\pi$ for $\lambda = -\frac{1}{2}$. At this stage we do not know the manifold of $\lambda$ yet, but we know that if it includes $\lambda = -\frac{1}{2}$ we will have the discontinuity. Also, in order for the displacement $\mathbf{u}$ to remain finite we need $\lambda > -1$.
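A quick numerical illustration of why $\lambda=-\frac{1}{2}$ produces the displacement jump across the crack faces, evaluating a single term of the displacement series with hypothetical values $r=1$, $C_1=1$, $C_2=C_3=C_4=0$, the prefactor $\frac{1+\nu}{E}$ set to one, and $\kappa=2$ (e.g. plane strain with $\nu=0.25$):

```python
import cmath, math

# One term of the displacement series; r = 1, C1 = 1, C2 = C3 = C4 = 0,
# overall factor (1+nu)/E dropped, kappa = 2 (hypothetical values)
def u_term(theta, lam, kappa=2.0):
    return (kappa * cmath.exp(1j * theta * (lam + 1))
            - (lam + 1) * cmath.exp(1j * theta * (1 - lam)))

# lam = -1/2 produces a jump between the two crack faces theta = +/- pi ...
jump_half = abs(u_term(math.pi, -0.5) - u_term(-math.pi, -0.5))
# ... while e.g. lam = 0 gives a continuous displacement
jump_int = abs(u_term(math.pi, 0.0) - u_term(-math.pi, 0.0))
print(jump_half > 1e-9, jump_int < 1e-9)  # True True
```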
The Chinese Remainder Theorem Recall from the Systems of Linear Congruences page that if we have a collection of linear congruences, say $\{ a_ix \equiv b_i \pmod {m_i} : i \in I \}$ (called a system), then a solution to the system is a value of $x$ which satisfies all of these linear congruences simultaneously. We noted that while some systems of linear congruences have solutions, others do not. We will now look at a very well-known theorem that gives us a condition under which such a system has a solution. Theorem 1 (The Chinese Remainder Theorem): Let $x \equiv a_1 \pmod {m_1} \\ x \equiv a_2 \pmod {m_2} \\ \vdots \\ x \equiv a_k \pmod {m_k}$ be a system of linear congruences and suppose that $(m_i, m_j) = 1$ for all $i, j \in \{ 1, 2, ..., k \}$ with $i \neq j$. Then there exists a unique solution to this system modulo $m_1m_2...m_k$. Proof: Note that if $k = 1$ the theorem is trivially true. So consider the case when $k = 2$. Then we have the following system of linear congruences: $x \equiv a_1 \pmod {m_1}$ and $x \equiv a_2 \pmod {m_2}$. Since $x \equiv a_1 \pmod {m_1}$ there exists a $k_1 \in \mathbb{Z}$ with $x = a_1 + k_1m_1$, and substituting this into the second linear congruence gives us: $a_1 + k_1m_1 \equiv a_2 \pmod {m_2}$. Or equivalently: $k_1m_1 \equiv a_2 - a_1 \pmod {m_2}$. Now we know that $(m_1, m_2) = 1$, so there must exist a unique solution $t$ for $k_1$ modulo $m_2$. So there exists a $k_2 \in \mathbb{Z}$ such that: $k_1 = t + k_2m_2$. Substituting this back into the original equation for $x$, we get: $x = a_1 + (t + k_2m_2)m_1 = a_1 + tm_1 + k_2m_1m_2$. Note that $x \equiv a_1 \pmod {m_1}$ and $x \equiv a_2 \pmod {m_2}$, so both linear congruences are satisfied. The completion of the proof requires a careful induction argument which we will omit. $\blacksquare$
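The constructive argument above (fold the congruences in one at a time, using the invertibility of the accumulated modulus) translates directly into code. A short Python sketch; the function name `crt` is mine:

```python
def crt(residues, moduli):
    """Solve x = a_i (mod m_i) for pairwise coprime moduli, following the
    constructive proof: fold in one congruence at a time."""
    x, M = 0, 1
    for a, m in zip(residues, moduli):
        # solve x + M*k = a (mod m), i.e. k = (a - x) * M^{-1} (mod m);
        # pow(M, -1, m) is the modular inverse (Python 3.8+)
        k = ((a - x) * pow(M, -1, m)) % m
        x += M * k
        M *= m
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # → 23, the classic Sunzi example
```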
User:Nikita2 Pages which I am contributing to and watching: Analytic function | Cauchy criterion | Cauchy integral | Condition number | Continuous function | D'Alembert criterion (convergence of series) | Dedekind criterion (convergence of series) | Derivative | Dini theorem | Dirichlet-function | Ermakov convergence criterion | Extension of an operator | Fourier transform | Friedrichs inequality | Fubini theorem | Function | Functional | Generalized derivative | Generalized function | Geometric progression | Hahn-Banach theorem | Harmonic series | Hilbert transform | Hölder inequality | Lebesgue integral | Lebesgue measure | Leibniz criterion | Leibniz series | Lipschitz Function | Lipschitz condition | Luzin-N-property | Newton-Leibniz formula | Newton potential | Operator | Poincaré inequality | Pseudo-metric | Raabe criterion | Riemann integral | Series | Sobolev space | Vitali theorem TeXing I'm keen on improving the appearance of EoM articles by rewriting formulas and math symbols in TeX. There are now 362 (out of 15,890) articles with the Category:TeX done tag. To get $\sum_{n=1}^{\infty}n!z^n$, just type $\sum_{n=1}^{\infty}n!z^n$ enclosed in dollar signs. You may look at Category:TeX wanted.
Asymptotic Solution > Mode III (shearing) and summary Mode III The boundary conditions associated to the third loading mode, see Picture II.26, were found to be \begin{equation}\begin{cases} \mathbf{\sigma}_{xx} = \mathbf{\sigma}_{xy} = \mathbf{\sigma}_{yy} = \mathbf{\sigma}_{zz} = 0 \\ \mathbf{u}_x = \mathbf{u}_y = 0 \\ \mathbf{u}_z (\theta) = -\mathbf{u}_z (-\theta) \end{cases}.\label{eq:bcModeIII}\end{equation} Resolution method As the loading is out-of-plane, the resolution method differs from the two other modes. By definition of the strain tensor, one has \begin{equation} \mathbf{\varepsilon}_{\alpha z}= \frac{1}{2}\mathbf{u}_{z,\alpha}\label{eq:epsModeIII}, \end{equation} and the stress field directly derives from Hooke's law \begin{equation} \mathbf{\sigma}_{\alpha z} = \frac{E}{1+\nu}\mathbf{\varepsilon}_{\alpha z}=\frac{E}{2\left(1+\nu\right)}\mathbf{u}_{z,\alpha}.\label{eq:HookeModeIII}\end{equation} The linear momentum equation $\nabla\cdot \mathbf{\sigma}=0$ simplifies in the case of pure shearing into $\mathbf{\sigma}_{\alpha z,\alpha} = 0$ and (\ref{eq:HookeModeIII}) yields \begin{equation} \nabla^2 \mathbf{u}_z = 0.\label{eq:LaplacianModeIII} \end{equation} As Laplace's equation is satisfied, $\mathbf{u}_z$ is the imaginary part of a function $z(\zeta)$, see previous section, and \begin{equation} \mathbf{u}_z = \mathcal{I}\left(z\left(\zeta\right)\right).\label{eq:cauchyRiemannModeIII} \end{equation} This means that any choice of a complex function $z\left(\zeta\right)$ satisfies the linear elastic problem, as long as the BCs are also satisfied. Because of the discontinuity of the solution at $\theta=\pm \pi$, see Picture II.26, an obvious choice is \begin{equation} z = \sum_{\lambda} C r^\lambda \exp{\left(i\theta\lambda\right)}\label{eq:zModeIII}, \end{equation} with a real constant $C$ associated to each $\lambda$.
Indeed \begin{equation} \mathbf{u}_z = \mathcal{I}\left(z\left(\zeta\right)\right) = \sum_{\lambda} C r^\lambda \sin{\theta\lambda} ,\label{eq:uModeIIItmp} \end{equation} is anti-symmetric, so it satisfies $\mathbf{u}_z (\theta) = -\mathbf{u}_z (-\theta)$, and has a discontinuity at $\theta=\pm\pi$ if the value of $\lambda=\frac{1}{2}$ is included in the series. For the displacement to remain finite, the series will be limited to $\lambda\geq 0$. The strain field is obtained from (\ref{eq:epsModeIII}) (remembering that $\partial_x=\cos{\theta}\partial_r-\frac{\sin{\theta}}{r}\partial_\theta$ and $\partial_y=\sin{\theta}\partial_r+\frac{\cos{\theta}}{r}\partial_\theta$) and reads \begin{equation}\begin{cases} \mathbf{\varepsilon}_{x z} =\sum_{\lambda}\left[ \frac{C}{2}\lambda \cos{\theta} r^{\lambda-1}\sin{\left(\theta\lambda\right)} - \frac{C}{2} \sin{\theta} r^{\lambda-1}\lambda\cos{\left(\lambda\theta\right)}\right] =\sum_{\lambda} \frac{C}{2} \lambda r^{\lambda-1} \sin{\left[\left(\lambda-1\right)\theta\right]}\\ \mathbf{\varepsilon}_{y z} = \sum_{\lambda} \left[\frac{C}{2}\lambda \sin{\theta} r^{\lambda-1}\sin{\left(\theta\lambda\right)} +\frac{C}{2} \cos{\theta} r^{\lambda-1}\lambda\cos{\left(\lambda\theta\right)}\right]=\sum_{\lambda} \frac{C}{2} \lambda r^{\lambda-1} \cos{\left[\left(\lambda-1\right)\theta\right]}\end{cases}.\label{eq:strainModeIII} \end{equation} The stress field is thus obtained from (\ref{eq:HookeModeIII}) and reads \begin{equation}\begin{cases} \mathbf{\sigma}_{x z} = \sum_{\lambda}\frac{E C}{2\left(1+\nu\right)} \lambda r^{\lambda-1} \sin{\left[\left(\lambda-1\right)\theta\right]}\\ \mathbf{\sigma}_{y z} =\sum_\lambda \frac{E C}{2\left(1+\nu\right)} \lambda r^{\lambda-1} \cos{\left[\left(\lambda-1\right)\theta\right]} \end{cases}.\label{eq:stressModeIIItmp} \end{equation} The boundary conditions of the problem allow the manifold of $\lambda$ to be determined: Finite displacement: $\lambda > 0$ (note that $\lambda = 0$ yields zero
stress); The crack is stress free: $\mathbf{\sigma}_{y z}\left(\theta = \pm\pi\right) = 0$, hence $\lambda = \frac{n}{2}$ for $n$ = 1, ... Stress field Knowing the manifold of $\lambda$, and as the dominant term near the crack tip is obtained for the smallest $\lambda=\frac{1}{2}$, the asymptotic stress field (\ref{eq:stressModeIIItmp}) reads \begin{equation}\begin{cases} \mathbf{\sigma}_{x z} = -\frac{E C}{4\left(1+\nu\right)\sqrt{r}} \sin{\frac{\theta}{2}}+\mathcal{O}\left(r^0\right)\\ \mathbf{\sigma}_{y z} =\frac{E C}{4\left(1+\nu\right)\sqrt{r}} \cos{\frac{\theta}{2}} +\mathcal{O}\left(r^0\right)\end{cases}.\label{eq:stressModeIIItmp2} \end{equation} As the stress field tends to infinity at the crack tip, the Stress Intensity Factor (SIF) of the third loading mode has been defined from the most severe stress component $\mathbf{\sigma}_{yz}$ as \begin{equation} K_{III} = \lim_{r\rightarrow 0} \left(\sqrt{2\pi r}\left.\mathbf{\sigma}_{yz}^{\text{mode III}}\right|_{\theta=0} \right) = \frac{CE}{2\left(1+\nu\right)} \sqrt{\frac{\pi}{2}},\label{eq:SIFIII}\end{equation} which allows rewriting (\ref{eq:stressModeIIItmp2}) in the final form \begin{equation}\begin{cases} \mathbf{\sigma}_{x z} = -\frac{K_{III}}{\sqrt{2 \pi r}} \sin{\frac{\theta}{2}}+\mathcal{O}\left(r^0\right)\\ \mathbf{\sigma}_{y z} =\frac{K_{III}}{\sqrt{2 \pi r}} \cos{\frac{\theta}{2}} +\mathcal{O}\left(r^0\right)\end{cases}.\label{eq:stressAsympModeIII} \end{equation} Once again, the SIF $K_{III}$ depends on the geometry and loading conditions of the sample.
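As a quick sanity check of the asymptotic Mode III field, one can verify numerically (with hypothetical values $K_{III}=1$ and $r=10^{-3}$, units left abstract) that the crack faces are traction free and that the SIF definition is recovered at $\theta=0$:

```python
import math

# Asymptotic Mode III field with hypothetical values K_III = 1.0, r = 1e-3
K3, r = 1.0, 1e-3
amp = K3 / math.sqrt(2 * math.pi * r)
sig_yz = lambda th: amp * math.cos(th / 2)

# Traction-free crack faces: sigma_yz(+/-pi) = 0
print(abs(sig_yz(math.pi)) < 1e-9, abs(sig_yz(-math.pi)) < 1e-9)
# SIF definition recovered at theta = 0
print(round(math.sqrt(2 * math.pi * r) * sig_yz(0.0), 12))  # → 1.0
```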
Displacement field Knowing the manifold of $\lambda$, and as the dominant term near the crack tip is obtained for the smallest $\lambda=\frac{1}{2}$, the asymptotic displacement field (\ref{eq:uModeIIItmp}) reads \begin{equation}\mathbf{u}_z = \frac{2K_{III}\left(1+\nu\right)}{E} \sqrt{\frac{2r}{\pi}} \sin{\frac{\theta}{2}}+\mathcal{O}\left(r\right)\label{eq:uModeIII}, \end{equation} where we have used the definition of the SIF (\ref{eq:SIFIII}). Visualization of the fields The asymptotic stress field (\ref{eq:stressAsympModeIII}) obtained for a Mode III loading is illustrated in Pictures II.27 and II.28. Picture II.28 represents the "yz"-stress component, on which the stress concentration profile, tending to infinity, is highlighted. The displacement field (\ref{eq:uModeIII}) is illustrated in Picture II.29, where the crack shearing profile can be observed. Summary on the asymptotic solution \begin{equation}\begin{cases} \mathbf{\sigma}^\text{mode i} = \frac{K_i}{\sqrt{2\pi r}} \mathbf{f}^\text{mode i}(\theta) \\ \mathbf{u}^\text{mode i} = K_i\sqrt{\frac{r}{2\pi}} \mathbf{g}^\text{mode i}(\theta) \end{cases},\label{eq:fandg}\end{equation} where $\mathbf{f}$ and $\mathbf{g}$ are functions defined for each mode but independent of the loading and geometry (as long as we consider the asymptotic value), while the SIF $K_i$ depend on the geometry and loading conditions of the sample. Since linear responses have been assumed, it is possible to use the principle of superposition, and $\mathbf{u}$ and $\mathbf{\sigma}$ can be added for: Different modes: $\mathbf{u} = \mathbf{u}_I + \mathbf{u}_{II}$ and $\mathbf{\sigma} = \mathbf{\sigma}_I + \mathbf{\sigma}_{II}$; Different loading conditions: $\mathbf{u} = \mathbf{u}^\text{a} + \mathbf{u}^\text{b}$ and $\mathbf{\sigma} = \mathbf{\sigma}^\text{a} + \mathbf{\sigma}^\text{b}$. However one has to be careful that $\mathbf{f}$ and $\mathbf{g}$ are different for two different loading modes.
So while $K_i$ can be added for different loading conditions $a$ and $b$ of the same mode $i$, i.e. $K_i = K_i^\text{a} + K_i^\text{b}$, the SIFs of two different modes cannot be added, i.e. $K \neq K_I + K_{II}$. The loading and geometry effects are thus fully contained in the value of the SIF. Irwin thus had the idea of using the value of the SIF to detect crack propagation. Indeed, experiments have shown that for a given material which obeys the LEFM assumptions, the crack propagates if the SIF reaches a threshold $K_C$ called the toughness of the material.
Logarithm Function We shall first look at the irrational number $e$ in order to show its special properties when used with derivatives of exponential and logarithm functions. As mentioned before in the Algebra section, the value of $e$ is approximately $e \approx 2.718282$, but it may also be calculated as the infinite limit: $$e = \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n$$ Now we find the derivative of $\ln(x)$ using the formal definition of the derivative: $$\frac{d}{dx}\ln(x) = \lim_{h \to 0}\frac{\ln(x+h)-\ln(x)}{h} = \lim_{h \to 0}\frac{1}{h}\ln\left(\frac{x+h}{x}\right) = \lim_{h \to 0}\ln\left(\frac{x+h}{x}\right)^{\frac{1}{h}}$$ Let $n = \frac{x}{h}$. Note that as $h \to 0$, we get $n \to \infty$. So we can redefine our limit as: $$\lim_{n \to \infty}\ln\left(1 + \frac{1}{n}\right)^{\frac{n}{x}} = \frac{1}{x}\ln\left(\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n\right) = \frac{1}{x}\ln(e) = \frac{1}{x}$$ Here we could take the natural logarithm outside the limit because it does not depend on the limit variable (we could also have chosen not to). We then substituted the value of $e$.
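The infinite limit defining $e$ can be watched converging numerically; a tiny illustration:

```python
import math

# Approach e = lim (1 + 1/n)^n for growing n (illustration only)
for n in (10, 1_000, 100_000):
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)  # 2.718281828...
```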
Derivative of the Natural Logarithm $$\frac{d}{dx}\ln(x) = \frac{1}{x}$$ If we wanted, we could go through that same process again for a generalized base, but it is easier just to use properties of logs and realize that: $$\log_a(x) = \frac{\ln(x)}{\ln(a)}$$ Since $\frac{1}{\ln(a)}$ is a constant, we can just take it outside of the derivative: $$\frac{d}{dx}\log_a(x) = \frac{1}{\ln(a)}\cdot\frac{d}{dx}\ln(x)$$ Which leaves us with the generalized form of: Derivative of the Logarithm $$\frac{d}{dx}\log_a(x) = \frac{1}{\ln(a)x}$$ Exponential Function We shall take two different approaches to finding the derivative of $\ln(e^x)$. The first approach: $$\frac{d}{dx}\ln(e^x) = \frac{d}{dx}x = 1$$ The second approach: $$\frac{d}{dx}\ln(e^x) = \frac{1}{e^x}\left(\frac{d}{dx}e^x\right)$$ Note that in the second approach we made some use of the chain rule. Equating the two results: $$\frac{1}{e^x}\left(\frac{d}{dx}e^x\right) = 1 \quad\Longrightarrow\quad \frac{d}{dx}e^x = e^x$$ so that we have proved the following rule: Derivative of the exponential function $$\frac{d}{dx}e^x = e^x$$ Now that we have derived a specific case, let us extend things to the general case. Assuming that $a$ is a positive real constant, we wish to calculate: $$\frac{d}{dx}a^x$$ One of the oldest tricks in mathematics is to break a problem down into a form that we already know we can handle.
Since we have already determined the derivative of $e^x$, we will attempt to rewrite $a^x$ in that form. Using that $e^{\ln(c)} = c$ and that $\ln(a^b) = b\cdot\ln(a)$, we find that: $$a^x = e^{\ln(a)x}$$ Thus, we simply apply the chain rule: $$\frac{d}{dx}e^{\ln(a)x} = e^{\ln(a)x}\cdot\frac{d}{dx}\big[\ln(a)x\big] = \ln(a)a^x$$ Derivative of the exponential function $$\frac{d}{dx}a^x = \ln(a)a^x$$ Logarithmic Differentiation We can use the properties of the logarithm, particularly the natural log, to differentiate more difficult functions, such as products with many terms, quotients of composed functions, or functions with variable or function exponents. We do this by taking the natural logarithm of both sides, re-arranging terms using the logarithm laws below, and then differentiating both sides implicitly, before multiplying through by $y$: $$\log(a)+\log(b) = \log(ab)$$ $$\log\left(\frac{a}{b}\right) = \log(a)-\log(b)$$ $$\log(a^n) = n\log(a)$$ See the examples below. Example 1 We shall now prove the validity of the power rule using logarithmic differentiation.
On the one hand, $$\frac{d}{dx}\ln(x^n) = n\frac{d}{dx}\ln(x) = nx^{-1},$$ while on the other hand, by the chain rule, $$\frac{d}{dx}\ln(x^n) = \frac{1}{x^n}\cdot\frac{d}{dx}x^n.$$ Thus: $$\frac{1}{x^n}\cdot\frac{d}{dx}x^n = nx^{-1} \quad\Longrightarrow\quad \frac{d}{dx}x^n = nx^{n-1}$$ Example 2 Suppose we wished to differentiate $$y = \frac{(6x^2+9)^2}{\sqrt{3x^3-2}}.$$ We take the natural logarithm of both sides: $$\ln(y) = \ln\left(\frac{(6x^2+9)^2}{\sqrt{3x^3-2}}\right) = \ln\big((6x^2+9)^2\big)-\ln\big((3x^3-2)^{\frac{1}{2}}\big) = 2\ln(6x^2+9)-\frac{\ln(3x^3-2)}{2}$$ Differentiating implicitly, recalling the chain rule: $$\frac{1}{y}\cdot\frac{dy}{dx} = 2\cdot\frac{12x}{6x^2+9}-\frac{1}{2}\cdot\frac{9x^2}{3x^3-2} = \frac{24x}{6x^2+9}-\frac{\frac{9}{2}x^2}{3x^3-2} = \frac{24x(3x^3-2)-\frac{9}{2}x^2(6x^2+9)}{(6x^2+9)(3x^3-2)}$$ Multiplying by $y$, the original function: $$\frac{dy}{dx} = \frac{(6x^2+9)^2}{\sqrt{3x^3-2}}\cdot\frac{24x(3x^3-2)-\frac{9}{2}x^2(6x^2+9)}{(6x^2+9)(3x^3-2)}$$ Example 3 Let us differentiate the function $$y = x^x.$$ Taking the natural logarithm of left and right: $$\ln(y) = \ln(x^x) = x\ln(x)$$
We then differentiate both sides, recalling the product and chain rules: $$\frac{1}{y}\cdot\frac{dy}{dx} = \ln(x)+x\cdot\frac{1}{x} = \ln(x)+1$$ Multiplying by the original function $y$: $$\frac{dy}{dx} = x^x\big(\ln(x)+1\big)$$ Example 4 Take the function $$y = x^{6\cos(x)}.$$ Then $$\ln(y) = \ln\big(x^{6\cos(x)}\big) = 6\cos(x)\ln(x)$$ We then differentiate: $$\frac{1}{y}\cdot\frac{dy}{dx} = -6\sin(x)\ln(x)+\frac{6\cos(x)}{x}$$ And finally multiply by $y$: $$\frac{dy}{dx} = y\left(-6\sin(x)\ln(x)+\frac{6\cos(x)}{x}\right) = 6x^{6\cos(x)}\left(\frac{\cos(x)}{x}-\sin(x)\ln(x)\right)$$
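The boxed rules and the worked examples can be spot-checked symbolically. A small SymPy sketch (assuming SymPy is available; not part of the original text):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# the two boxed derivative rules
assert sp.simplify(sp.diff(sp.ln(x), x) - 1 / x) == 0
assert sp.simplify(sp.diff(a**x, x) - sp.ln(a) * a**x) == 0

# Example 3: d/dx x^x = x^x (ln x + 1)
y3 = x**x
assert sp.simplify(sp.diff(y3, x) - y3 * (sp.ln(x) + 1)) == 0

# Example 4: d/dx x^(6 cos x) = 6 x^(6 cos x) (cos x / x - sin x ln x)
y4 = x**(6 * sp.cos(x))
target = 6 * y4 * (sp.cos(x) / x - sp.sin(x) * sp.ln(x))
assert sp.simplify(sp.diff(y4, x) - target) == 0
print("all results verified")
```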
Imagine you have a vector that can be written in the form $$|\psi\rangle=\sum_{i=0}^{d_A-1}\sum_{j=0}^{d_B-1}c_{ij}|i\rangle|j\rangle.$$ The coefficients can be arranged as a $d_A\times d_B$ matrix $C$, with the elements $c_{ij}$ (in your special case, you're talking about setting $d_A=d_B=\sqrt{m}$). Now, if you calculate $\rho_A=CC^\dagger$, this is ... The reason is relatively straightforward. Consider an $m$-dimensional vector space $V$ with basis $\lbrace \vert v_1 \rangle,\ldots,\vert v_m \rangle \rbrace$, and an $n$-dimensional vector space $W$ with basis $\lbrace \vert w_1 \rangle,\ldots,\vert w_n \rangle \rbrace$. As your intuition suggests, we can naturally express any element $A \in V \otimes W$ in the ... You seem to be overcomplicating this somewhat! You are right to split it up into the two terms $H_1$ and $H_2$. So, we have $$e^{-i(H_1+H_2)t}=e^{-iH_1t}e^{-iH_2t}.$$ Now, straightforwardly, $$e^{-iH_1t}=I+(e^{-i\delta t}-1)|00\rangle\langle 00|.$$ Next, we need to think about the $e^{-iH_2t}$ term. Of course, it maps $|00\rangle$ to $|00\rangle$. So, ... If you wish to distinguish two states $|\psi\rangle$ and $|\phi\rangle$, you can only guarantee to do this if $\langle\psi|\phi\rangle=0$. You do this by measuring in a basis defined by the two states (alternatively, you apply a unitary $U$ such that $$U|\psi\rangle=|0\rangle,\qquad U|\phi\rangle=|1\rangle,$$ and then measure in the standard $Z$ basis. ... Here's a simple method that will work on any state that is not entangled with other qubits. It's also pretty efficient; it's the method used by the amplitude displays in Quirk. Find the index $k$ of any one of the largest-magnitude entries in the vector. In your case this would be index 0 or index 2. Technically any non-zero entry will do, but if you aren't ...
In the most widespread convention, the Bloch sphere uses $\theta = 0$ radians latitude to indicate the north pole $|0\rangle$, $\theta = \pi$ to refer to the south pole $|1\rangle$, and $\theta = \pi/2$ to refer to the equator, which includes the superpositions $(|0\rangle+|1\rangle)/\sqrt 2$ and $(|0\rangle-|1\rangle)/\sqrt 2$ as well as $(|0\rangle+i|1\rangle)/\sqrt 2$ and $(|0\rangle-i|1\rangle)/\sqrt 2$. If two great ... There's a mistake. It's incorrect even for $a=0,b=0,U=I$. In this case the correct formula is $$|\phi\rangle \otimes |B_{00}\rangle = \frac{1}{2} \bigg(|B_{00}\rangle \otimes |\phi\rangle + |B_{01}\rangle \otimes Z|\phi\rangle + |B_{10}\rangle \otimes X|\phi\rangle + |B_{11}\rangle \otimes XZ|\phi\rangle\bigg)$$ but their formula swaps $X$ and $Z$ in the ... Here is a possible, though expensive, way. First, find all prime factors of the dimension $d$ of your vector. In your example, the dimension is 9, and the only prime factor is 3. Next, for each prime factor $p$, try a tensor product of a vector of dimension $p$ and another vector of dimension $d/p$. Then you need to solve $d$ quadratic equations with $p+d/p$ ... Measurements cannot produce an imaginary result. So if you want to measure an imaginary part, you need a suitable transformation before you measure. I haven't looked into the details of the mentioned operations, but I'm sure that is what they do. On the last part of your question: the operation $R_x (\pi /2)$ can be visualized on the Bloch sphere by a ... In the first summation for $U$: $$\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}\Big(it\Omega \frac{H_{2}}{\Omega}\Big)^{2n}=\cos(\Omega t)\begin{pmatrix} 0&0&0\\0&1&0\\0&0&1\end{pmatrix}\quad(\text{This is wrong})$$ $$\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}\Big(it\Omega \frac{H_{2}}{\Omega}\Big)^{2n}=\begin{pmatrix} 1&0&0\\0&0&0\\0&0&0\... Outer product is a mapping operator. You can use it to define quantum gates: just sum up outer products of input and (desired) output basis vectors.
For example,$$\vert{0}\rangle\rightarrow\vert{1}\rangle,\vert{1}\rangle\rightarrow\vert{0}\rangle$$$$\vert{0}\rangle\langle{1}\vert+\vert{1}\rangle\langle{0}\vert=\begin{pmatrix}0 & 1 \\1 & 0\end{...
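The sum-of-outer-products recipe in the last answer is easy to check numerically; a small sketch (numpy assumed) building the gate that maps $|0\rangle\to|1\rangle$ and $|1\rangle\to|0\rangle$:

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])  # |0>
ket1 = np.array([[0.0], [1.0]])  # |1>

# Sum of |output><input| terms for the mapping |0> -> |1>, |1> -> |0>
X = ket1 @ ket0.conj().T + ket0 @ ket1.conj().T

assert np.allclose(X, [[0, 1], [1, 0]])   # the Pauli-X matrix
assert np.allclose(X @ ket0, ket1)        # |0> -> |1>
```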
Existence of Orthonormal Bases for Infinite Dimensional Separable Hilbert Spaces

Recall from the Orthonormal Bases page that if $H$ is a Hilbert space then an orthonormal basis for $H$ is an orthonormal sequence $(e_n)$ of $H$ such that every $h \in H$ can be uniquely written as:

(1) $\displaystyle h = \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n$

The following theorem tells us that an orthonormal basis always exists when we're looking at infinite dimensional separable Hilbert spaces.

Theorem 1: Every infinite dimensional separable Hilbert space has an orthonormal basis.

Proof: Let $\mathcal F$ be the set of all orthonormal subsets of $H$, ordered by inclusion. Let $\mathcal E \subseteq \mathcal F$ be a chain in $\mathcal F$; that is, if $E, E' \in \mathcal E$ then either $E \subseteq E'$ or $E \supseteq E'$. We will show that $\mathcal E$ has an upper bound. Let:

$\displaystyle \tilde{E} = \bigcup_{E \in \mathcal E} E$

Observe that $\tilde{E}$ is an orthonormal set. To see this, let $e, e' \in \tilde{E}$ with $e \neq e'$. Since $\mathcal E$ is a chain, there exists an $E_0 \in \mathcal E$ such that $e, e' \in E_0$. So $\langle e, e' \rangle = 0$. Furthermore, $\| e \| = 1$ and $\| e' \| = 1$. So $\tilde{E}$ is an orthonormal subset of $H$ and clearly, $\tilde{E}$ is an upper bound for $\mathcal E$. So every chain in $\mathcal F$ has an upper bound. By Zorn's Lemma, $\mathcal F$ has a maximal element, call it $E^*$.

Now since $H$ is separable, from the theorem on the Orthonormal Sets in Separable Inner Product Spaces page we have that $E^*$ is countable. Let $E^* = (e_n)$ be any enumeration of $E^*$. Then $(e_n)$ is such that for every $h \in H$ the element $\displaystyle{h - \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n}$ is orthogonal to each $e_n$. So $\displaystyle{h - \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n = 0}$; otherwise we could create a strictly larger orthonormal set by adjoining $\displaystyle{\frac{h - \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n}{\| h - \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n \|}}$ to $E^*$, contradicting maximality.
Thus, for each $h \in H$:

$\displaystyle h = \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n$

So $E^* = (e_n)$ is an orthonormal basis for $H$. $\blacksquare$
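The proof above is non-constructive (it invokes Zorn's Lemma), but for a concrete finite family of linearly independent vectors an orthonormal set can be produced explicitly by the Gram–Schmidt process. A minimal sketch (not from the page above; numpy assumed):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors with
    respect to the standard inner product <u, v> = conj(u) . v."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, w) * e  # subtract the projection onto e
        w = w / np.linalg.norm(w)      # normalize
        basis.append(w)
    return basis

e = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
assert abs(np.vdot(e[0], e[1])) < 1e-12       # orthogonal
assert abs(np.linalg.norm(e[0]) - 1) < 1e-12  # unit norm
```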
Orthonormal Sets in Separable Inner Product Spaces

Table of Contents
Orthonormal Sets in Separable Inner Product Spaces

Theorem 1: Let $H$ be an inner product space. If $H$ is separable and $E \subset H$ is an orthonormal subset of $H$ then $E$ is countable.

Recall that a space is said to be separable if it contains a countable and dense subset.

Proof: Suppose instead that $E$ is uncountable. Since $H$ is separable, $H$ has a countable and dense subset $D$. Now if $e, e' \in E$ and $e \neq e'$ then, since $e$ and $e'$ are orthogonal, we have that:

\begin{align} \quad \| e - e' \|^2 = \langle e - e', e - e' \rangle = \langle e, e \rangle -2 \langle e, e' \rangle + \langle e', e' \rangle = \| e \|^2 + \| e' \|^2 = 2 \end{align}

Therefore, for every $e, e' \in E$ with $e \neq e'$ we see that:

\begin{align} \quad \| e - e' \| = \sqrt{2} \end{align}

That is, the distance between any two distinct elements of $E$ is $\sqrt{2}$. Consider the collection of open balls:

\begin{align} \quad \mathcal F = \left \{ B \left (e, \frac{\sqrt{2}}{2} \right ) = \left \{ h \in H : \| e - h \| < \frac{\sqrt{2}}{2} \right \} : e \in E \right \} \end{align}

Since $E$ is assumed to be uncountable, so is $\mathcal F$. Furthermore, the open balls in $\mathcal F$ are pairwise disjoint: if some $h$ belonged to both $B(e, \sqrt{2}/2)$ and $B(e', \sqrt{2}/2)$ with $e \neq e'$, then the triangle inequality would give $\| e - e' \| < \sqrt{2}$, a contradiction. Since $D$ is dense in $H$, every ball in $\mathcal F$ must contain a point of $D$. Choosing one such point from each ball gives an injection from the uncountable collection $\mathcal F$ into the countable set $D$, which is impossible. So the assumption that $E$ is uncountable is false. Thus $E$ must be countable. $\blacksquare$
Asymptotic Solution > Crack propagation

Toughness

In 1957 G. R. Irwin introduced a new failure criterion for crack propagation, which uses the SIFs defined in the asymptotic solution derived previously. In mode I this criterion reads \begin{equation}K_I\quad\begin{cases} < K_{IC} \rightarrow \text{ the crack does not propagate,}\\ \geq K_{IC} \rightarrow \text{ the crack does propagate,} \end{cases}\label{eq:SIFCriterion}\end{equation} where $K_{IC}$ is the mode I toughness. Under the LEFM assumption: $K_I$ depends on the geometry and loading conditions only, $K_{IC}$ depends on the material only. The definition of the toughness and the way of measuring the fracture toughness have been summarized before. A brief summary of the way to evaluate the SIFs has also been presented and will be the subject of a later lecture. We will conclude this lecture by reporting different features related to the SIFs in 3D.

3D front

Until now only 2D solutions have been considered, but a real crack is clearly 3D. In order to analyze a 3D crack front as in Picture II.33, at any point of the crack line: A local frame of reference can be defined; Since the asymptotic solutions hold for $r \rightarrow 0$, at this distance the crack line seems straight and the problem is locally 2D. The crack tip field can thus be decomposed into three 2D problems (the three 2D modes).

Thickness effect

Near the border of a specimen the stress state is plane-stress (P-$\sigma$), while it is plane-strain (P-$\varepsilon$) near the center, where the triaxiality is higher. This means that the SIF is larger at the center, as there are more constraints there (no possible lateral deformations) (see HRR lecture).
This has two consequences. First, a crude approximation of the effect of the thickness $t$ on the measured toughness is \begin{equation} K_{IC}\left(t\right) \simeq K_{IC}\left(t\rightarrow\infty\right) \sqrt{1+\frac{1.4}{t^2}\left(\frac{K_{IC}\left(t\rightarrow\infty\right) }{\sigma_p^0}\right)^4},\end{equation} where $\sigma_p^0$ is the initial yield stress. Second, from this observation it appears that the toughness $K_{IC}$ does not depend on the material only, as the thickness also has an effect. To remain conservative, the toughness is defined as the value measured for a "thick-enough" specimen, see Picture II.35. Later in this class we will come back to this notion of "thick-enough".
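As a quick numerical illustration of the thickness correction above (the material values below are hypothetical; only consistency of units matters, since $(K_{IC}/\sigma_p^0)^4/t^2$ is dimensionless):

```python
import math

def k_ic_thickness(t, k_ic_inf, sigma_y):
    """Crude thickness correction for the measured mode-I toughness:
    K_IC(t) ~ K_IC(inf) * sqrt(1 + (1.4 / t^2) * (K_IC(inf) / sigma_y)^4),
    with sigma_y standing in for the initial yield stress sigma_p^0."""
    return k_ic_inf * math.sqrt(1.0 + (1.4 / t ** 2) * (k_ic_inf / sigma_y) ** 4)

# Hypothetical values: K_IC(inf) = 50 MPa*sqrt(m), sigma_y = 400 MPa,
# thicknesses in metres.
thin = k_ic_thickness(0.005, 50.0, 400.0)
thick = k_ic_thickness(0.05, 50.0, 400.0)
assert thin > thick > 50.0  # measured toughness decreases toward K_IC(inf) as t grows
```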
Isotopic Embeddings on Topological Spaces

Definition: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be embeddings. Then $f$ and $g$ are said to be Isotopic if there exists a continuous function $H : X \times I \to Y$ such that: 1) $H_t : X \to Y$ is an embedding for every $t \in I$. 2) $H_0 = f$. 3) $H_1 = g$. If such a function $H$ exists, then $H$ is said to be an Isotopy from $f$ to $g$.

Recall that if $X$ and $Y$ are topological spaces then a function $f : X \to Y$ is said to be an embedding if $f$ is a continuous injective function such that $f : X \to f(X)$ is a homeomorphism. In the definition above, $I = [0, 1]$ is the closed unit interval, and the functions $H_t : X \to Y$ are defined for each fixed $t \in I$ by $H_t(x) = H(x, t)$.

Theorem 1: Let $X$ and $Y$ be topological spaces and let $f, g, h : X \to Y$ be embeddings. Then a) $f$ is isotopic to $f$. (Reflexivity) b) If $f$ is isotopic to $g$ then $g$ is isotopic to $f$. (Symmetry) c) If $f$ is isotopic to $g$ and $g$ is isotopic to $h$ then $f$ is isotopic to $h$. (Transitivity). Therefore, "to be isotopic" is an equivalence relation on the set of embeddings from $X$ to $Y$.

Proof of a) Consider the function $H : X \times I \to Y$ defined by:

$$H(x, t) = f(x) \quad \text{for all } (x, t) \in X \times I$$

Then $H$ is a continuous function since $f$ is a continuous function. Furthermore, $H_t(x) = f(x)$ is an embedding for every $t \in I$, $H_0 = f$ and $H_1 = f$. So $f$ is isotopic to $f$. $\blacksquare$

Proof of b) Suppose that $f$ is isotopic to $g$. Then there exists a continuous function $H : X \times I \to Y$ such that $H_t$ is an embedding for every $t \in I$, $H_0 = f$, and $H_1 = g$. Consider the function $H' : X \times I \to Y$ defined by:

$$H'(x, t) = H(x, 1 - t)$$

Then $H'$ is a continuous function since $H$ is a continuous function. Furthermore, $H'_t$ is an embedding for all $t \in I$, $H'_0 = H_1 = g$ and $H'_1 = H_0 = f$. So $g$ is isotopic to $f$. $\blacksquare$

Proof of c) Suppose that $f$ is isotopic to $g$ and $g$ is isotopic to $h$.
Then there exist continuous functions $H' : X \times I \to Y$ and $H'' : X \times I \to Y$ such that $H'_t$ and $H''_t$ are embeddings for all $t \in I$, $H'_0 = f$, $H'_1 = g$, $H''_0 = g$, and $H''_1 = h$. Consider the function $H : X \times I \to Y$ defined by:

$$H(x, t) = \begin{cases} H'(x, 2t) & \text{if } 0 \leq t \leq \frac{1}{2} \\ H''(x, 2t - 1) & \text{if } \frac{1}{2} \leq t \leq 1 \end{cases}$$

The two pieces agree at $t = \frac{1}{2}$ since $H'_1 = g = H''_0$, so $H$ is a continuous function since $H'$ and $H''$ are continuous and by The Gluing Lemma. Furthermore, $H_t$ is an embedding for all $t \in I$, $H_0 = H'_0 = f$, and $H_1 = H''_1 = h$. So $f$ is isotopic to $h$. $\blacksquare$

In a sense, if we have two embeddings $f, g : X \to Y$, then $f$ and $g$ are isotopic if we can find a continuous function which "transforms" $f$ into $g$ with all the intermediate steps being embeddings from $X$ to $Y$ as well.
Separable Topological Spaces Examples 1

Recall from the Separable Topological Spaces page that a topological space $(X, \tau)$ is said to be separable if it contains a countable and dense subset. We saw that if $\mathbb{R}$ is equipped with the usual topology of open intervals then the set of rational numbers $\mathbb{Q}$ is dense (and of course countable), and so $\mathbb{R}$ is a separable topological space. We will now look at some more examples of determining whether or not a topological space is separable.

Example 1

Consider the set of real numbers $\mathbb{R}$ with the lower limit topology (Sorgenfrey line) $\tau$, generated by the basis $\{ [a, b) : a, b \in \mathbb{R}, a < b \}$. Determine whether $(\mathbb{R}, \tau)$ is a separable topological space.

We claim that the set of rational numbers $\mathbb{Q}$ is also dense in $\mathbb{R}$ with respect to this topology. To prove this, let $U \in \tau \setminus \{ \emptyset \}$. Then $U$ is a union of basis elements, so $[a, b) \subseteq U$ for some $a, b \in \mathbb{R}$, $a < b$. Suppose that $\mathbb{Q} \cap U = \emptyset$. Then this implies that $\mathbb{Q} \cap [a, b) = \emptyset$; that is, there exists no $q \in \mathbb{Q}$ such that $a \leq q < b$. But given any $x \in [a, b)$ there always exists a rational number $q$ such that $x < q < b$, and then $q \in [a, b)$, a contradiction. Hence for all $U \in \tau \setminus \{ \emptyset \}$ we have that $\mathbb{Q} \cap U \neq \emptyset$, so $\mathbb{Q}$ is a dense (and countable) subset of $\mathbb{R}$, and $\mathbb{R}$ is separable with respect to the lower limit topology.

Example 2

Construct a topology $\tau$ on the set $X = \{a, b, c, d \}$ such that $(X, \tau)$ is a separable topological space.

Consider the topology $\tau = \{ \emptyset, \{a \}, \{ b \}, \{a, b \}, \{a, b, c \}, X \}$.
Clearly every subset of $X$ is countable since $X$ consists of only finitely many elements, so we only need to find a dense subset of $X$. Consider the set $A = \{a, b \}$. Then we have that:

(2) $A \cap U \neq \emptyset$ for every $U \in \tau \setminus \{ \emptyset \}$, since every non-empty open set contains $a$ or $b$.

Therefore $A$ is a countable and dense subset of $X$, so $(X, \tau)$ is a separable topological space.

Example 3

Prove that if $X$ is a finite set and $\tau$ is the discrete topology on $X$ then $(X, \tau)$ is a separable topological space.

Let $X$ be a finite set with $n$ elements. Then $X = \{ a_1, a_2, ..., a_n \}$. If $\tau$ is the discrete topology on $X$ then $\tau = \mathcal P(X)$. Clearly every subset of $X$ is countable since $X$ is a finite set, so we only need to find a dense subset of $X$. Take $A = X$. Then for all $U \in \tau \setminus \{ \emptyset \}$ we have that $U \subseteq A$. So $A \cap U \neq \emptyset$. Hence $A$ is a countable and dense subset of $X$, so $(X, \tau)$ is a separable topological space.

Example 4

Prove that if $X$ is a countable set and $\tau$ is the discrete topology on $X$ then $(X, \tau)$ is a separable topological space.

If $X$ is countable then $X$ is either finite or countably infinite. Example 3 shows that if $X$ is finite and $\tau$ is the discrete topology on $X$ then $(X, \tau)$ is a separable topological space. We will now show that if $X$ is countably infinite then $(X, \tau)$ is also a separable topological space. If $X$ is countably infinite then:

(3) $X = \{ a_1, a_2, a_3, \ldots \}$

If we let $A = X$ then every $U \in \tau \setminus \{ \emptyset \}$ is such that $U \subseteq A$, so $A \cap U \neq \emptyset$. Therefore $A$ is a countable (and dense) subset of $X$, so $(X, \tau)$ is separable.
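The finite examples above can even be checked mechanically. A brute-force sketch of Example 2 (plain Python sets standing in for the open sets of $\tau$):

```python
# Check that A = {a, b} is dense in the four-point space (X, tau) of
# Example 2: A must meet every non-empty open set.
X = frozenset("abcd")
tau = [frozenset(), frozenset("a"), frozenset("b"),
       frozenset("ab"), frozenset("abc"), X]
A = frozenset("ab")

# A intersects every non-empty open set, so A is dense
assert all(A & U for U in tau if U)
```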
The asymptotic solution of the Cauchy problem for a generalized Boussinesq equation

1. Department of Applied Mathematics, Southwest Jiaotong University, 610066, Chengdu, China
2. Department of Mathematics and Statistics, Curtin University of Technology, GPO Box U1987, Perth, WA 6845, Australia

$ u_{tt} - a u_{ttxx} - 2 b u_{txx} = - c u_{xxxx} + u_{xx} - p^2 u + \beta(u^2)_{xx}, $

where $x\in R^1,$ $t > 0,$ $a,$ $b$ and $c$ are positive constants, $p \ne 0,$ and $\beta \in R^1$. For the case $a + c > b^2$, corresponding to damped oscillations with an infinite number of oscillation cycles, we establish the well-posedness theorem of the global solution to the problem and derive a large time asymptotic solution.

Mathematics Subject Classification: 34C25, 93C1.

Citation: Shaoyong Lai, Yong Hong Wu. The asymptotic solution of the Cauchy problem for a generalized Boussinesq equation. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3): 401-408. doi: 10.3934/dcdsb.2003.3.401
Let $U_1, U_2, \ldots, U_n$ be random variables uniformly distributed over some box in $\mathbb{R}^3$ and let \begin{equation} R = \frac{1}{2 n(n-1)}\sum\limits_{i,\,j< i} |U_i - U_j| \end{equation} be the random variable corresponding to the average distance between the random points. What is the distribution of $R$? If no answer to that is available, then what is $E(R)$? I have found somewhat similar questions, but couldn't relate them to this one. Thank you very much in advance. Gabriel
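Not an answer to the distribution question, but $E(R)$ is easy to estimate by Monte Carlo. A sketch using only the standard library (note it averages over the $n(n-1)/2$ unordered pairs, which may differ from the normalization in the question by a constant factor):

```python
import itertools
import math
import random

def mean_pairwise_distance(n, box=(1.0, 1.0, 1.0), trials=200, seed=0):
    """Monte Carlo estimate of the expected average pairwise distance
    between n points drawn uniformly from the given box, the average
    being taken over the n*(n-1)/2 unordered pairs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [tuple(rng.uniform(0.0, s) for s in box) for _ in range(n)]
        d = [math.dist(p, q) for p, q in itertools.combinations(pts, 2)]
        total += sum(d) / len(d)
    return total / trials

# By linearity of expectation this equals E|U_1 - U_2|; for the unit
# cube that is the Robbins constant, approximately 0.6617
assert abs(mean_pairwise_distance(10) - 0.6617) < 0.05
```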
Two-Wavelet Multipliers on the Dual of the Laguerre Hypergroup and Applications

Abstract

In this paper, we are interested in the Laguerre hypergroup \(\mathbb {K} = [0,\infty )\times {\mathbb {R}}\), which is the fundamental manifold of the radial function space for the Heisenberg group. So, we consider the generalized shift operator generated by the dual of the Laguerre hypergroup \(\widehat{\mathbb {K}}\), which can be topologically identified with the so-called Heisenberg fan, a subset of \({\mathbb {R}}^{2}\), by means of which the notion of a generalized two-wavelet multiplier is investigated. The boundedness and compactness of the generalized two-wavelet multipliers are studied on \(L^{p}_{\alpha }(\mathbb {K})\), \(1 \le p \le \infty \). Afterwards, we introduce the generalized Landau–Pollak–Slepian operator and we give its trace formula. We show that the generalized two-wavelet multiplier is unitarily equivalent to a scalar multiple of the generalized Landau–Pollak–Slepian operator. As applications, we prove an uncertainty principle of Donoho–Stark type involving \(\varepsilon \)-concentration of the generalized two-wavelet multiplier operators. Moreover, we study functions whose time–frequency content is concentrated in a region with finite measure in phase space, using the phase space restriction operators as a main tool. We obtain approximation inequalities for such functions using a finite linear combination of eigenfunctions of these operators.
The Heisenberg fan is the subset of \({\mathbb {R}}^{2}\) given by

$$\begin{aligned} \cup _{j\in {\mathbb {N}}}\left\{ (\lambda ,\mu )\in {\mathbb {R}}^{2}:\mu =|\lambda |(2j+\alpha +1), \; \lambda \ne 0\right\} \cup \left\{ (0,\mu )\in {\mathbb {R}}^{2}:\mu \ge 0\right\} . \end{aligned}$$

Keywords: Laguerre hypergroup, generalized multipliers, generalized two-wavelet multipliers, Schatten–von Neumann class, generalized Landau–Pollak–Slepian operator

Mathematics Subject Classification: 33E30, 43A32, 81S30, 94A12, 45P05, 42C25, 42C40

Acknowledgements: The authors are deeply indebted to the referees for providing constructive comments and help in improving the contents of this article. The first author thanks Professor M. W. Wong for his help.

© Springer Nature Switzerland AG 2019
A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...

OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...

OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...

OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...

OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...

OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
An affine transformation is a linear mapping method that preserves points, straight lines, and planes. Sets of parallel lines remain parallel after an affine transformation. The affine transformation technique is typically used to correct for geometric distortions or deformations that occur with non-ideal camera angles. For example, satellite imagery uses affine transformations to correct for wide angle lens distortion, panorama stitching, and image registration. Transforming and fusing the images to a large, flat coordinate system is desirable to eliminate distortion. This enables easier interactions and calculations that don't require accounting for image distortion.

The following are the basic affine transformations, with matrices written for the row-vector convention \([x \; y \; 1]\) times the matrix:

Translation
\[ \left[\begin{array}{ccc}1 & 0 & 0\\0 & 1 & 0\\ t_x & t_y & 1\end{array}\right]\]
\(t_x\) specifies the displacement along the \(x\) axis; \(t_y\) specifies the displacement along the \(y\) axis.

Scale
\[ \left[\begin{array}{ccc}s_x & 0 & 0\\0 & s_y & 0\\ 0 & 0 & 1\end{array}\right]\]
\(s_x\) specifies the scale factor along the \(x\) axis; \(s_y\) specifies the scale factor along the \(y\) axis.

Shear
\[ \left[\begin{array}{ccc}1 & sh_y & 0\\sh_x & 1 & 0\\ 0 & 0 & 1\end{array}\right]\]
\(sh_x\) specifies the shear factor along the \(x\) axis; \(sh_y\) specifies the shear factor along the \(y\) axis.

Rotation
\[ \left[\begin{array}{ccc}\cos(q) & \sin(q) & 0\\-\sin(q) & \cos(q) & 0\\ 0 & 0 & 1\end{array}\right]\]
\(q\) specifies the angle of rotation.
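As a sketch of how these matrices compose (numpy assumed; note the row-vector convention implied by the translation matrix above, so points multiply on the left):

```python
import numpy as np

def translate(tx, ty):
    # Row-vector convention, matching the table: [x, y, 1] @ T
    return np.array([[1, 0, 0],
                     [0, 1, 0],
                     [tx, ty, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], dtype=float)

pt = np.array([2.0, 3.0, 1.0])             # point in homogeneous coordinates
out = pt @ scale(2, 2) @ translate(1, -1)  # scale first, then translate
assert np.allclose(out, [5.0, 5.0, 1.0])
```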
I am trying to simulate and implement the controller in the paper Geometric Tracking Control of a Quadrotor UAV on SE(3). I have the dynamics implemented; however, I am stuck on one part of the controller, the calculation of the following error term: $$e_\Omega=\Omega-R^TR_d\Omega_{d}.$$ I have all the variables in equation (11) calculated except for $\Omega_d$, the desired angular velocity. From equation (4) of the paper we know the relation: $$\dot{R}_d=R_d\hat{\Omega}_d$$ However, I don't know how to calculate $\dot{R}_d$. Can someone give the exact equation for computing $\dot{R}_d$, so I can calculate $\Omega_d$ and obtain the error $e_\Omega$? My code is available on github.
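Not the paper's prescription, but a common practical route when $R_d(t)$ is available as a function of time: approximate $\dot R_d$ by finite differences and recover $\Omega_d$ from $\hat\Omega_d = R_d^T\dot R_d$. A sketch (numpy assumed; the constant-yaw-rate trajectory is just a test case):

```python
import numpy as np

def vee(S):
    # Inverse of the hat map: skew-symmetric matrix -> vector in R^3
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def desired_angular_velocity(Rd, t, dt=1e-5):
    """Omega_d at time t from a desired attitude trajectory Rd(t), using
    Rd_dot = Rd @ hat(Omega_d)  =>  hat(Omega_d) = Rd^T @ Rd_dot,
    with Rd_dot approximated by central differences."""
    Rd_dot = (Rd(t + dt) - Rd(t - dt)) / (2.0 * dt)
    S = Rd(t).T @ Rd_dot
    S = 0.5 * (S - S.T)  # project onto skew-symmetric matrices (numerical hygiene)
    return vee(S)

# Test trajectory: constant-rate rotation about z, so Omega_d = [0, 0, w]
w = 0.7
def Rd(t):
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

assert np.allclose(desired_angular_velocity(Rd, 1.3), [0.0, 0.0, w], atol=1e-6)
```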
Taking an $\left[\left[n, k, d\right]\right]$ code: the classical equivalent is an $\left[n, k, d\right]$ code, which uses $n$ bits to encode $k$ bits. The third number, $d$, is the minimum Hamming distance between any two distinct codewords. For a linear code, this is equal to the minimum Hamming weight (i.e. number of non-zero bits) over the non-zero codewords. As in the classical case, in the quantum case the first two numbers refer to the number of qubits, $n$, that encode $k$ qubits. $d$ is still used to refer to the distance, but the definition of distance has to be changed. The weight, $t$, of a Pauli operator $E_a$ is the number of qubits on which it acts with a (single-qubit) Pauli operator $\left(X, Y \text{ or } Z\right)$ rather than the identity. As an example, arbitrarily taking $E_1 = X\otimes I\otimes I\otimes Z\otimes I$, $E_1$ has weight $t=2$. The distance is then the minimum weight of a Pauli operator (in the space of possible errors) that has a non-zero matrix element between two different codewords, i.e. the minimum weight such that $\left<j\vert E_a\vert i\right>\neq C_a\delta_{ji}$ for some (real) $C_a$, for codewords $i$ and $j$. That is, the distance is the minimum number of single-qubit errors that can occur on a codeword and map it to a different codeword. For more details, see e.g. Chapter 7 of Preskill's quantum computation notes.
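Representing an $n$-qubit Pauli operator as a string over $\{I, X, Y, Z\}$, the weight is just a count of the non-identity tensor factors; a trivial sketch:

```python
def pauli_weight(pauli_string):
    """Weight of an n-qubit Pauli string such as 'XIIZI'
    (meaning X tensor I tensor I tensor Z tensor I): the number of
    tensor factors that are not the identity."""
    return sum(1 for p in pauli_string if p != 'I')

assert pauli_weight('XIIZI') == 2   # the E_1 example above has weight t = 2
assert pauli_weight('IIIII') == 0
```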
The Annals of Probability Ann. Probab. Volume 7, Number 1 (1979), 109-127. A strong Law for Variables Indexed by a Partially Ordered Set with Applications to Isotone Regression Abstract In studying the asymptotic properties of certain isotone regression estimators, one is led to consider the maximum of sums of independent random variables indexed by a partially ordered set. An index set which is a sequence of $\beta$ dimensional vectors, $\{t_k\}^\infty_{k = 1}$, and the usual partial order on $R_\beta$, the $\beta$ dimensional reals, are considered here. The random variables are assumed to satisfy a condition equivalent to a finite first moment in the identically distributed case and are assumed to be centered at their means. For $A \subset R_\beta$, let $S_n(A)$ denote the sum of those random variables with indices $t_k \in A$ and $k \leqslant n$. It is shown that if the sequence $\{t_k\}$ satisfies a certain condition, then the maximum, over all upper layers $U$ in $R_\beta$, of $S_n(U)/n$ converges almost surely to zero. As a corollary to this result one obtains the strong consistency of this isotone regression estimator. If the sequence $\{t_k\}$ is a realization of a sequence of independent, identically distributed, $\beta$ dimensional random vectors and if the probability induced by such a vector is discrete, absolutely continuous or a mixture of the two, then the condition on the sequence $\{t_k\}$ is satisfied almost surely. Some nondiscrete, singular induced probabilities of interest in these regression problems are considered also. Article information Source Ann. Probab., Volume 7, Number 1 (1979), 109-127. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176995152 Digital Object Identifier doi:10.1214/aop/1176995152 Mathematical Reviews number (MathSciNet) MR515817 Zentralblatt MATH identifier 0392.60033 JSTOR links.jstor.org Citation Wright, F. T. 
A strong Law for Variables Indexed by a Partially Ordered Set with Applications to Isotone Regression. Ann. Probab. 7 (1979), no. 1, 109--127. doi:10.1214/aop/1176995152. https://projecteuclid.org/euclid.aop/1176995152
Let $Q$ be the set of states of the Turing machine, $\Sigma$ the alphabet, and $\{L,R,S\}$ the left shift, right shift, and stay operations respectively. A transition is an element of $Q \times \Sigma \times Q \times \Sigma \times \{L,R,S\}$, which can be read as: current state, current symbol, next state, next symbol, movement direction. I'm looking to understand how many transitions (or states, since they are linearly related) a Turing machine needs so that, when given an integer $i \ge 0$ on the tape, it outputs the $i^\text{th}$ digit after the decimal point (zero-indexed) of $\frac{p}{q}$, where $p$ and $q$ are coprime. Assume the machine works in base $c$ and you only have the characters $0,1,\dots,c-1$ and blank. I imagine this is possible in $O(c\log q)$ states, but my best attempts (outlined below) only give me $O(cq)$. As an example, take $c=2$ and $\frac{p}{q}=\frac{1}{3}$: I want a binary Turing machine that, when given $10$ on the tape, outputs the $3^\text{rd}$ digit after the decimal point of $\frac{1}{3}$, which is $0$, and when given $110111$ outputs the $56^\text{th}$ digit, which is $1$. Here is my way of doing it. For this purpose, $\text{nt}(c,q)$ is the length of the non-terminating (pre-periodic) portion of $\frac{p}{q}$ in base $c$; this is easily verified to be the same for all coprime $p$, $q$. Also define $\text{per}(c,q)$ as the length of the periodic portion of $\frac{p}{q}$ in base $c$. My basic idea: check whether $i$ is less than $\text{nt}(c,q)$, and if so output that digit and halt; this takes $O(c\,\text{nt}(c,q))$ transitions. If it is not smaller, subtract out $\text{nt}(c,q)$, which takes $O(c\lfloor \log \text{nt}(c,q) \rfloor)$ transitions. Then test what the result is modulo $\text{per}(c,q)$ and output the appropriate digit. This takes $O(c\,\text{per}(c,q))$ transitions. Now remember that the worst case of $\text{per}(c,q)+\text{nt}(c,q)$ is $q-1$.
Therefore the final transition complexity is $O(cq)$ A second idea I had was Newton's algorithm which I think would take $O(c\log q)$ transitions. The basic idea would be to output $p$,$q$,$.1$ to the tape in base $c$, convert to unary which takes $O(c)$ transitions then do multiplication and subtraction in unary which takes $O(1)$ transitions. Would this work?
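For reference, the digit the machine has to output can be sketched in ordinary code (this is a sketch of the arithmetic only, not of a Turing machine, and the helper names are mine). The second function tracks only $p\,c^i \bmod q$, which is the same bounded amount of information the $O(cq)$-state construction above keeps:

```python
def nth_digit(p, q, c, i):
    """i-th digit (zero-indexed) after the radix point of p/q in base c."""
    return (p * c ** (i + 1) // q) % c

def nth_digit_fast(p, q, c, i):
    # Same digit via modular exponentiation: keep only the remainder of
    # p * c^i mod q, then read off the next digit of the long division.
    r = (p * pow(c, i, q)) % q
    return (r * c) // q

# Matches the question's example: 1/3 in base 2 is 0.010101...
assert nth_digit(1, 3, 2, 2) == 0
```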
Differential and Integral Equations, Volume 25, Number 11/12 (2012), 1053-1074.

Sharp well-posedness and ill-posedness of a higher-order modified Camassa--Holm equation

Abstract: In this paper we consider the Cauchy problem for a higher-order modified Camassa--Holm equation. By using some dyadic bilinear estimates and the fixed-point theorem, we establish the local well-posedness of the higher-order modified Camassa--Holm equation for small initial data in $H^{-n+\frac{5}{4}}({{\mathbf R}})$, $n\geq 2$, $n\in {{\mathbf N}}$. We also prove that the Cauchy problem for the higher-order modified Camassa--Holm equation is ill-posed for initial data in the homogeneous Sobolev spaces $\dot{H}^{s}({{\mathbf R}})$ with $s < -n+\frac{5}{4}$, $n\in {{\mathbf N}}$, $n\geq 2$. Our result partially answers the open problem proposed below Theorem 1.2 by Erika A. Olson in the Journal of Differential Equations, 246 (2009), 4154--4172.

Article information: First available in Project Euclid: 20 December 2012. Permanent link: https://projecteuclid.org/euclid.die/1356012251. MathSciNet: MR3013404. Zentralblatt MATH: 1274.35342. Subjects: Primary 35R25 (improperly posed problems), 35Q53 (KdV-like equations), 37L50 (noncompact semigroups; dispersive equations).

Citation: Yan, Wei; Li, Yongsheng; Li, Shiming. Sharp well-posedness and ill-posedness of a higher-order modified Camassa--Holm equation. Differential Integral Equations 25 (2012), no. 11/12, 1053--1074.
In general, "operating on a state with an observable" does not have a direct physical meaning (i.e., you cannot think of it as evolving the state or doing something to it). What does have physical meaning is applying a unitary operation to a state. Every unitary operator corresponds to a physical operation that you can (in principle) implement, transforming (... You can calculate the probability of a given answer $\pm 1$ to each measurement by evaluating $$\langle B|\frac{I+(-1)^{\eta_1}\vec{n}_1\cdot\vec{\sigma}}{2}\otimes\frac{I+(-1)^{\eta_2}\vec{n}_2\cdot\vec{\sigma}}{2}|B\rangle$$ Thus, the probability of equal measurement outcomes is $$\langle B|\left(\frac{I+\vec{... Intuitively, you can rotate $\vec{n}_1$ to $Z$. As the $Z$ axis has two antipodal points $|0\rangle$ and $|1\rangle$, let $\vec{n}_1$ have two antipodal points $|b_0\rangle$ and $|b_1\rangle$. Now the Bell state can be rewritten as $\frac{1}{\sqrt{2}}(|b_0b_0\rangle+|b_1b_1\rangle)$, and in this new basis the calculation is much easier. To be precise, ... Your understanding is correct. In the theory of photon polarization, the parametrization of the Bloch sphere (or its surface) traditionally has another name. On the Wikipedia page for the Jones calculus (the parametrization of the Bloch sphere surface), you'll find a table of the correspondence between kets and polarizations. To summarize, eigenstates ... I see the heart of your question. I'd like to clarify a bit before answering, though. Matrices (aka operators) do not measure quantum states--they operate on them. Specifically, they project the state onto the matrix's eigenvectors. We can then measure that projected state in a particular basis that may be the same as or different from what the ... Write down the two reduced density matrices of the single qubits that you have access to.
Apply the Helstrom measurement (there are several descriptions of this on the site already). The problem is that, in this case, the two reduced density matrices are the same. That means that you cannot tell them apart. More explicitly, $$|\varphi_2\rangle=(I\otimes X)|... If you wish to distinguish two states $|\psi\rangle$ and $|\phi\rangle$, you can only guarantee to do this if $\langle\psi|\phi\rangle=0$. You do this by measuring in a basis defined by the two states (alternatively, you apply a unitary $U$ such that $$U|\psi\rangle=|0\rangle,\qquad U|\phi\rangle=|1\rangle,$$ and then measure in the standard $Z$ basis....
Heredity of First Countability on Topological Subspaces Recall from the Hereditary Properties of Topological Spaces page that if $(X, \tau)$ is a topological space then a property of $X$ is said to be hereditary if for all subsets $A \subseteq X$ we have that the topological subspace $(A, \tau_A)$ also has that property (where $\tau_A$ is the subspace topology on $A$). We will now prove that first countability is hereditary, that is, if $(X, \tau)$ is a first countable topological space then any topological subspace $(A, \tau_A)$ where $A \subseteq X$ is also first countable. Theorem 1: First countability is hereditary, that is, if $(X, \tau)$ is a first countable topological space and $A \subseteq X$ then $(A, \tau_A)$ is a first countable topological space where $\tau_A = \{ A \cap U : U \in \tau \}$ is the subspace topology on $A$. Proof: Let $(X, \tau)$ be a first countable topological space and let $A \subseteq X$. Since $X$ is first countable, every $a \in A \subseteq X$ (as a point of $X$) has a countable local basis, call it $\mathcal B_a$. For each $a \in A$ we claim that the following collection is a countable local basis for $a$ in $A$: $$\tilde{\mathcal B_a} = \{ A \cap B : B \in \mathcal B_a \}$$ Clearly the collection above is countable because $\mathcal B_a$ is countable. Recall that a collection of sets is a local basis of $a \in A$ if for every open set of $A$ containing $a$ there exists a set in the collection that contains $a$ and is fully contained in this open set. Let $U \subseteq A$ be any open set in $A$ containing $a$.
Since $U$ is open in $A$, there exists an open set $V$ in $X$ such that: $$U = A \cap V$$ Since $V$ is an open set in $X$ containing $a$ and $\mathcal B_a$ is a local basis of $a$ in $X$, there exists a set $B \in \mathcal B_a$ such that: $$a \in B \subseteq V$$ Hence we see that: $$a \in A \cap B \subseteq A \cap V = U$$ The set $A \cap B$ is an open set in $A$ (with the subspace topology $\tau_A$), so for every open set $U$ in $A$ and every $a \in U$ there exists an element $A \cap B \in \tilde{\mathcal B_a}$ such that $a \in A \cap B \subseteq U$; that is, $\tilde{\mathcal B_a}$ is a local basis of $a$ in $A$. Hence $\tilde{\mathcal B_a}$ is a countable local basis of $a$ in $A$, so first countability is hereditary. $\blacksquare$
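The construction can be checked mechanically on a toy finite space (the space, topology, and local basis below are hypothetical examples of mine, not part of the proof):

```python
# Hypothetical finite example: intersecting a local basis with A gives
# a local basis in the subspace topology.
X = {1, 2, 3, 4}
tau = [set(), {1}, {1, 2}, {1, 2, 3}, X]   # nested sets, so this is a topology
A = {2, 3}
tau_A = [A & U for U in tau]                # subspace topology on A

a = 2
B_a = [{1, 2}, {1, 2, 3}, X]                # a local basis at a in X
B_a_tilde = [A & B for B in B_a]            # claimed local basis at a in A

# Every open set of A containing a contains some member of B_a_tilde
# that itself contains a -- exactly the local-basis condition.
for U in tau_A:
    if a in U:
        assert any(a in B and B <= U for B in B_a_tilde)
```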
A simple question. If $Y=\frac{1}{X}$ and I know $f_X(x)$, is it true that $E(Y) = E(1/X) = \int_{-\infty}^\infty \frac{1}{x}f_X(x)\, dx$? Yes, provided the integral exists. In general, if $X\sim f(x)$, then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)\,dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The completely general result takes some more advanced math which you can probably safely avoid :) Another approach, if you are happy with a numerical estimate (as opposed to the theoretical exact value), is to generate a bunch of data from the distribution, do the transformation, then take the mean of the transformed data as the estimate of the expected value. This avoids integration, which can be nice in ugly cases, but does not give the theory, relationship, or exact value.
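The Monte Carlo approach in the last paragraph can be sketched as follows (the distribution is a made-up example where the exact answer is known: for $X \sim \mathrm{Uniform}(1,2)$, $E(1/X) = \int_1^2 \frac{1}{x}\,dx = \ln 2$):

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

# Exact value for the hypothetical example X ~ Uniform(1, 2)
exact = math.log(2)

# Generate data, transform with g(x) = 1/x, and average
n = 200_000
estimate = sum(1.0 / random.uniform(1.0, 2.0) for _ in range(n)) / n
```

With this many draws the estimate agrees with $\ln 2 \approx 0.6931$ to about three decimal places.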
Orthonormal Bases Definition: Let $H$ be a Hilbert space. An Orthonormal Basis for $H$ is an orthonormal sequence $(e_n)$ of $H$ such that every $h \in H$ can be written uniquely as $\displaystyle{h = \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n}$. Before we characterize orthonormal bases, we need the following definition: Definition: Let $H$ be a Hilbert space. An orthonormal sequence $(e_n)$ of $H$ is said to be Complete if $h \perp (e_n)$ implies that $h = 0$. That is, the only vector in $H$ that is orthogonal to every $e_n$ is $h = 0$. The following theorem tells us exactly when an orthonormal sequence is an orthonormal basis. Theorem 1: Let $H$ be a Hilbert space and let $(e_n)$ be an orthonormal sequence in $H$. Then $(e_n)$ is an orthonormal basis of $H$ if and only if $(e_n)$ is complete. Proof: $\Rightarrow$ Suppose that $(e_n)$ is an orthonormal basis of $H$, and suppose that $h$ is orthogonal to every $e_n$. Write $\displaystyle{h = \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n}$. Then $\langle e_n, h \rangle = 0$ for each $n \in \mathbb{N}$, and thus $h = 0$. So $(e_n)$ is complete. $\Leftarrow$ Suppose that $(e_n)$ is complete. Recall that for each $h \in H$ the vector: \begin{align} \quad h - \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n \end{align} is orthogonal to every $e_n$. Since $(e_n)$ is complete, this vector must be $0$; that is, for every $h \in H$ we have: \begin{align} \quad h = \sum_{n=1}^{\infty} \langle e_n, h \rangle e_n \end{align} So $(e_n)$ is an orthonormal basis of $H$. $\blacksquare$
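The expansion $h = \sum_n \langle e_n, h \rangle e_n$ can be illustrated in the finite-dimensional case (a two-element orthonormal basis of $\mathbb{R}^2$ standing in for the sequence $(e_n)$; the rotated basis below is a made-up example):

```python
import math

# A rotated orthonormal basis of R^2: e1 = (cos t, sin t), e2 = (-sin t, cos t)
theta = 0.7
e1 = (math.cos(theta), math.sin(theta))
e2 = (-math.sin(theta), math.cos(theta))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

h = (3.0, -1.5)
# Coefficients <e1, h> and <e2, h>, then the expansion h = c1*e1 + c2*e2
c1, c2 = dot(e1, h), dot(e2, h)
reconstructed = (c1 * e1[0] + c2 * e2[0], c1 * e1[1] + c2 * e2[1])
```

Up to floating-point rounding, `reconstructed` recovers `h` exactly, since the basis is orthonormal.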
Asymptotic large time behavior of singular solutions of the fast diffusion equation

1. Institute of Mathematics, Academia Sinica, Taipei, Taiwan
2. Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China

Abstract: We study singular solutions of the fast diffusion equation $u_t=\Delta u^m$ in $({\mathbb R}^n\setminus\{0\})\times(0, \infty)$, where $0<m<\frac{n-2}{n}$ and $n\geq 3$. For initial data $u_0$ satisfying $A_1|x|^{-\gamma}\leq u_0\leq A_2|x|^{-\gamma}$ for some constants $A_2>A_1>0$ and $\frac{2}{1-m}<\gamma<\frac{n-2}{m}$, the solution $u$ behaves for large time like the self-similar solutions $t^{-\alpha} f_i(t^{-\beta}x)$, $i=1, 2$, where $\beta:=\frac{1}{2-\gamma(1-m)}$, $\alpha:=\frac{2\beta-1}{1-m}$, and the profiles $f_i$ satisfy $\Delta f^m+\alpha f+\beta x\cdot \nabla f=0$ in ${\mathbb R}^n\setminus\{0\}$. The analysis is carried out in terms of the rescaled solution $\tilde u(y, \tau):= t^{\,\alpha} u(t^{\,\beta} y, t)$, $\tau:=\log t$.

Keywords: Existence, large time behavior, fast diffusion equation, singular solution, self-similar solution.
Mathematics Subject Classification: Primary: 35B35, 35B44, 35K55, 35K65.
Citation: Kin Ming Hui, Soojung Kim. Asymptotic large time behavior of singular solutions of the fast diffusion equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11): 5943-5977. doi: 10.3934/dcds.2017258
Math/Display (latest revision as of 16:29, 27 February 2017)

< Math >

Display Math

The famous result (once more) is given by

\startformula c^2 = a^2 + b^2. \stopformula

This, when typeset, produces the following:

Numbering Formulae

The famous result (once more) is given by

\placeformula \startformula c^2 = a^2 + b^2. \stopformula

This, when typeset, produces the following:

The \placeformula command is optional, and produces the equation number; leaving it off produces an unnumbered equation.

Changing format of numbers

You can use \setupformulas to change the format of numbers.
For example, to get bold numbers inside square brackets use

\setupformulas[left={[},right={]},numberstyle=bold]

which gives

To get equations also numbered by section, add the command

\setupnumber[formula][way=bysection]

at the start of your document. To get letters instead of numbers, use

\setupformulas[numberconversion=Character]

which gives

Changing Formula alignment

Normally a formula is centered, but in case you want to align it left or right, you can set up formulas to behave that way. Normally a formula will adapt its left indentation to the environment:

\setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=left] \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=middle] \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=right] \startformula c^2 = a^2 + b^2 \stopformula

Or in print:

With formula numbers the code is:

\setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=left] \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=middle] \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=right] \placeformula \startformula c^2 = a^2 + b^2 \stopformula

And the formulas look like:

When tracing is turned on (\tracemathtrue) you can visualize the bounding box of the formula. As you can see, the dimensions are the natural ones, but if needed you can force a normalized line:

\setuppapersize[A5] \setuplayout[textwidth=8cm] \setupformulas[align=middle,strut=yes] \tracemathtrue \placeformula \startformula c^2 = a^2 + b^2 \stopformula

This time we get a more spacy result. [Ed. Note: For this example equation, there appears to be no visible change.] We will now show a couple more settings and combinations of settings.
In centered formulas, the number takes no space:

\setuppapersize[A5] \setuplayout[textwidth=8cm] \tracemathtrue \setupformulas[align=middle] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula

You can influence the placement of the whole box with the parameters leftmargin and rightmargin.

\setuppapersize[A5] \setuplayout[textwidth=8cm] Some example text, again, to show where the right and left margins of the text block are. \tracemathtrue \setupformulas[align=right,leftmargin=3em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula \setupformulas[align=left,rightmargin=1em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula

You can also inherit the margin from the environment.

\setuppapersize[A5] \setuplayout[textwidth=8cm] Some example text, again, to show where the right and left margins of the text block are. \tracemathtrue \setupformulas[align=right,margin=standard] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula

The distance between the formula and the number is only applied when the formula is left or right aligned.

\setuppapersize[A5] \setuplayout[textwidth=8cm] \tracemathtrue \setupformulas[align=left,distance=2em] \startformula c^2 = a^2 + b^2 \stopformula \placeformula \startformula c^2 = a^2 + b^2 \stopformula

Referencing formulae

The famous result (and again) is given by

\placeformula[formulalabel] \startformula c^2 = a^2 + b^2. \stopformula

And now we can refer to formula \ref[][formulalabel]. This, when typeset, produces the following:

Note that \ref expects two arguments; therefore you need the brackets twice. By default, only the formula number appears as a reference. This can be changed by using \definereferenceformat.
For example, to create a command \eqref which shows the formula number in brackets, use

\definereferenceformat[eqref][left=(,right=)]

Sub-Formula Numbering

Automatic Sub-Formula Numbering

Example:

\startsubformulas[eq:1] \placeformula[eq:first] \startformula c^2 = a^2 + b^2 \stopformula \placeformula[eq:second] \startformula c^2 = a^2 + b^2 \stopformula \stopsubformulas Formula (\in[eq:1]) states Pythagoras' Theorem twice, once in (\in[eq:first]) and again in (\in[eq:second]).

The Manual Method

Sometimes you need more fine-grained control over the numbering of subformulas. In that case one can make use of the optional argument of the \placeformula command and the related \placesubformula command, which can be used to produce sub-formula numbering. For example:

\placeformula{a} \startformula c^2 = a^2 + b^2 \stopformula \placesubformula{b} \startformula c^2 = a^2 + b^2 \stopformula

What's going on here is simpler than it might appear at first glance. Both \placeformula and \placesubformula produce equation numbers with the optional tag added at the end; the sole difference is that the former increments the equation number first, while the latter does not (and thus can be used for the second and subsequent formulas that use the same formula number but presumably have different tags). This is sufficient for cases where the standard ConTeXt equation numbers suffice, and where only one equation number is needed per formula. However, there are many cases where this is insufficient, and \placeformula defines \formulanumber and \subformulanumber commands, which provide hooks to allow the use of ConTeXt-managed formula numbers with plain TeX equation numbering. These, when used within a formula, simply return the formula number in properly formatted form, as can be seen in this simple example with plain TeX's \eqno. Note that the optional tag is inherited from \placeformula.
More examples:

\placeformula{c} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \eqno{\formulanumber} \stopformula

In order for this to work properly, we need to turn off ConTeXt's automatic formula number placement; thus the \let command to empty \doplaceformulanumber, which must be placed after the start of the formula. In many practical examples, however, this is not necessary; ConTeXt redefines \displaylines and \eqalignno to do this automatically. For more control over sub-formula numbering, \formulanumber and \subformulanumber have an optional argument parallel to that of \placeformula, as demonstrated in this use of plain TeX's \eqalignno, which places multiple equation numbers within one formula.

\placeformula \startformula \eqalignno{ c^2 &= a^2 + b^2 &\formulanumber{a} \cr c &= \left(a^2 + b^2\right)^{\vfrac{1}{2}} &\subformulanumber{b}\cr a^2 + b^2 &= c^2 &\subformulanumber{c} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula

Note that both \formulanumber and \subformulanumber can be used within the same formula, and the formula number is incremented as expected. Also, if an optional argument is specified in both \placeformula and \formulanumber, the latter takes precedence. More examples for left-located equation numbers:

\setupformulas[location=left] \placeformula{d} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \leqno{\formulanumber} \stopformula

and

\placeformula \startformula \leqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula

-- 23:46, 15 Aug 2005 (CEST) Prinse Wang

List of Formulas

You can have a list of the formulas contained in a document by using \placenamedformula instead of \placeformula. Only the formulas written with \placenamedformula are put in the list, so that you can control precisely the content of the list.
Example: \subsubject{List of Formulas} \placelist[formula][criterium=text,alternative=c] \subsubject{Formulas} \placenamedformula[one]{First listed Formula} \startformula a = 1 \stopformula \endgraf \placeformula \startformula a = 2 \stopformula \endgraf \placenamedformula{Second listed Formula}{b} \startformula a = 3 \stopformula \endgraf Gives: Shaded background for part of a displayed equation (see also Framed) To highlight part of a formula, you can give it a gray background using \mframed: the following is the code you can use in mkii (see below what one has to do in mkiv): \setuppapersize[A5] \setupcolors[state=start] \def\graymath{\mframed[frame=off, background=color, backgroundcolor=gray, backgroundoffset=3pt]} \startformula \ln (1+x) =\, \graymath{x - {x^2\over2}} \,+ {x^3\over3}-\cdots. \stopformula \setuppapersize[A5] \definemathframed[graymath] [ frame=off, location=mathematics, background=color, backgroundcolor=lightgray, backgroundoffset=2pt ] \starttext Since for $|x| < 1$ we have \startformula \log(1+x) = \graymath{x- \displaystyle{x^2\over2}} + {x^3 \over 3} + \cdots \stopformula we may write $\log(1+x) = x + O(x^2)$. \stoptext The result is shown below (possibly the framed part of the formula is not aligned correctly with the remainder of the formula because the mkiv engine on Context Garden is not up to date…).
Disclaimer: this is probably TL;DR, but you can absolutely apply some physical reasoning to arrive at the answer. Changes in the surface height of the Earth are governed by the conservation of mass, i.e., if mass ($M$) is removed in one location (Everglades soil erosion), then an equal amount of mass must be gained at another location (perhaps accretion on the seafloor). The total change in the Earth's mass is zero: $$\Delta M = M_1+M_2=0 \quad\Rightarrow\quad -M_1=M_2$$ Mass may be broken up into the volume ($V=\Delta x \Delta y \Delta z$) multiplied by density ($\rho$), so $M=\rho V$. In terms of the conservation of mass equation from above: $$-\rho_{1} \Delta x_1 \Delta y_1 \Delta z_1 = \rho_{2} \Delta x_2 \Delta y_2 \Delta z_{2}$$ The place where erosion occurs is described by the left side (subscript 1); the place where accretion occurs is described by the right side (subscript 2). This equation says the total mass lost by area 1 must equal the total mass gained by area 2. Now consider the following two situations. 1) The transported sediment does not undergo any change in density. Then $\rho_1=\rho_2$. Let's also say the eroding area ($\Delta x_1 \Delta y_1$) equals the accreting area ($\Delta x_2 \Delta y_2$) for simplicity. Then all equal terms in the conservation of mass cancel out and you're left with $$-\Delta z_{1} = \Delta z_{2}$$ 2) The transported sediment does undergo a change in density. There's a host of reasons why the density can change. If the density changes, then we can't let the density terms cancel in the conservation of mass. Let's still say the eroding area and accreting area are equal. The conservation of mass becomes $$-\rho_1 \Delta z_1 = \rho_2 \Delta z_2$$ or conversely $$-\Delta z_1=\frac{\rho_2}{\rho_1}\Delta z_2$$ This says that the change in height at location 1 is not equal to the change at location 2 (sure, they're proportional, but definitely not exactly equal).
Putting it all together to answer the question: the average Earth radius ($r$) is the average of all heights ($z$) with respect to the center of the Earth, $$r=\frac{z_1+z_2+...+z_n}{n}$$ Try this all out with arbitrary numbers to convince yourself. Add the changes $\Delta z_1$ to $z_1$, and $\Delta z_2$ to $z_2$: $$r_{before}=\frac{z_1 + z_2 + z_3}{3}$$ $$r_{after}=\frac{(z_1+\Delta z_1)+(z_2+\Delta z_2)+z_3}{3}$$ Rearrange $r_{after}$ to be $$r_{after}=\frac{z_1+z_2+z_3 + (\Delta z_1+\Delta z_2)}{3}$$ Case 1: If the density doesn't change (i.e., $-\Delta z_1=\Delta z_2$), then $$r_{after}=\frac{z_1+z_2+z_3 + (\Delta z_1-\Delta z_1)}{3}$$ The changes cancel out and $r_{before}=r_{after}$. The Earth's radius does not change at all. Case 2: If the density changes (so $\Delta z_2 = -\frac{\rho_1}{\rho_2}\Delta z_1$), $r_{after}$ becomes $$r_{after}=\frac{z_1+z_2+z_3 + (\Delta z_1-\frac{\rho_1}{\rho_2}\Delta z_1)}{3}$$ and you can see that $r_{after}$ does not equal $r_{before}$.
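The two cases can be tried out numerically (the heights and densities below are arbitrary made-up numbers, as the answer suggests):

```python
# Hypothetical heights (km) of three sample columns and an erosion event.
z = [6371.0, 6370.5, 6371.2]
dz1 = -0.3                      # erosion at location 1

r_before = sum(z) / len(z)

# Case 1: density unchanged -> accreted thickness exactly offsets erosion.
dz2 = -dz1
r_after_1 = (z[0] + dz1 + z[1] + dz2 + z[2]) / 3

# Case 2: sediment compacts (rho2 > rho1) -> thinner accreted layer,
# since conservation of mass gives dz2 = -dz1 * rho1 / rho2.
rho1, rho2 = 1.6, 2.0
dz2 = -dz1 * rho1 / rho2
r_after_2 = (z[0] + dz1 + z[1] + dz2 + z[2]) / 3
```

In case 1 the mean radius is unchanged; in case 2 it decreases slightly, matching the algebra above.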
How to Solve a Classic CFD Benchmark: The Lid-Driven Cavity Problem The lid-driven cavity is a popular problem within the field of computational fluid dynamics (CFD) for validating computational methods. While the boundary conditions are relatively simple, the flow features created are quite interesting and complex. Here, we demonstrate how to define this benchmark problem in the COMSOL Multiphysics® software. We also showcase techniques like mapped meshing and nonlinearity ramping, which can be applied to a wide variety of CFD models. Modeling a Lid-Driven Cavity in COMSOL Multiphysics® The lid-driven cavity consists of a square cavity filled with fluid. At the top boundary, a tangential velocity is applied to drive the fluid flow in the cavity. The remaining three walls are defined as no-slip conditions; that is, the velocity is 0. For benchmarking purposes, we want to solve something general that can easily be implemented in different tools. How can we compare different computational methods using the most generic formulation of this problem? One way is to nondimensionalize the equations, which means that the problem will not depend on specific materials, length scales, or operating conditions. In the case of fluid flow in a lid-driven cavity, we can solve the nondimensional Navier-Stokes equations. The incompressible, stationary Navier-Stokes equations with no body forces take the following form: $$\rho (\mathbf u \cdot \nabla)\mathbf u = -\nabla p + \mu \nabla^2 \mathbf u, \qquad \nabla \cdot \mathbf u = 0$$ By nondimensionalizing the velocity ($\mathbf u^* = \frac{\mathbf u}{U}$), pressure ($p^* = \frac{p}{\rho U^2}$), and length scale ($\mathbf r^* = \frac{\mathbf r}{L}$, $\nabla^*=L\nabla$), we can reformulate this equation as: $$(\mathbf u^* \cdot \nabla^*)\mathbf u^* = -\nabla^* p^* + \frac{1}{Re}\nabla^{*2}\mathbf u^*, \qquad \nabla^* \cdot \mathbf u^* = 0$$ The Reynolds number is defined as $Re = \frac{\rho UL}{\mu}$. This nondimensional number describes the relative importance of the inertial forces to the viscous forces in the flow, as described in this blog post.
By comparing the forms of these two equations, we can determine which parameters need to be entered into the COMSOL Multiphysics model in order to solve the nondimensionalized equations. Specifically, we see that the coefficient in front of the inertial term $(\mathbf u^* \cdot \nabla^*)\mathbf u^*$ is 1, so we apply a density of 1 in the material properties. For the viscous term $\nabla^{*2} \mathbf u^*$, we see that the coefficient is $\frac{1}{Re}$, so this is entered as the viscosity. Applying Nonlinearity Ramping As the Reynolds number increases, the viscous term becomes less significant in the equation compared to the inertial term. Since the viscous term in the equation is linear and the inertial term is nonlinear, increasing the Reynolds number leads to the problem becoming more nonlinear. When solving nonlinear problems, we often want to apply nonlinearity ramping to provide good initial conditions for the solver. Nonlinearity ramping is discussed in detail in the following blog posts: Viscosity Ramping Improves the Convergence of CFD Models Nonlinearity Ramping for Improving Convergence of Nonlinear Problems In this model, we perform an auxiliary sweep in the study over multiple Reynolds numbers. This sweep serves two purposes: Comparing the solutions for different Reynolds numbers to the results in the literature Demonstrating how to perform nonlinearity ramping to help the solver In this case, the problem does not require nonlinearity ramping in order to converge. However, for highly nonlinear problems, this is an important technique to consider when improving the convergence. Setting Up the Boundary Conditions and Constraints In terms of boundary conditions, the top wall moves at a velocity of $U = 1$ in the x direction. The other three walls are applied as no-slip conditions ($U = 0$). The boundary conditions for the lid-driven cavity model.
While these boundary conditions fully describe the physical problem we want to solve, there is one other essential condition that we need to apply to the closed cavity: a pressure point constraint. In a closed system at steady state, there are no inlets or outlets in which the pressure level is defined. Without a reference level for the pressure, the Navier-Stokes equations have infinite solutions to the steady-state problem, since they only solve with respect to the gradient in pressure. Thus, the pressure point constraint provides information about what the absolute pressure levels should be in the flow. When we apply a pressure point constraint of p = 0, it corresponds to an absolute pressure of 1 atm, as explained in this blog post on how to assign fluid pressure. It is important to apply a pressure point constraint far away from the interesting behavior in the flow any time we solve for steady flow in a closed cavity — whether it is a mixed tank reactor or a natural convection problem. A couple of example models that use pressure point constraints are the Free Convection in a Water Glass and Modular Mixer tutorials. Using Mapped Meshing to Discretize the Domain Now that we have defined the boundary conditions, we can think about how we want to discretize the domain. The lid-driven cavity provides a perfect example of how mapped meshing can be applied to efficiently and effectively discretize four-sided geometries. Mapped meshing discretizes the domain using rectangular elements. These elements don’t need to be uniformly spaced. In fact, we can use Distribution subnodes to the Mapped node in the mesh sequence to define how the elements are spaced along the edges. In the lid-driven cavity, we want to stack more elements near the no-slip walls, where the gradients in the flow are higher, so we apply symmetric distributions along all of the edges. The mapped meshing of the lid-driven cavity model. 
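The symmetric element distribution along an edge can be sketched in a few lines (this is an illustrative stand-in for the Distribution subnode, not COMSOL's actual spacing formula; the function name and the geometric growth law are my own):

```python
def symmetric_nodes(n, ratio):
    """Node positions on [0, 1] for n elements (n even): element sizes grow
    geometrically by `ratio` away from each wall, so the mesh is fine at
    both walls and coarse in the middle, mirroring a symmetric distribution."""
    half = n // 2
    sizes = [ratio ** i for i in range(half)]
    sizes = sizes + sizes[::-1]          # mirror so both walls are refined
    total = sum(sizes)                   # normalize so the edge has length 1
    nodes = [0.0]
    for s in sizes:
        nodes.append(nodes[-1] + s / total)
    return nodes
```

For example, `symmetric_nodes(8, 1.5)` produces 9 node positions that are symmetric about 0.5, with the smallest elements at the two walls.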
While we are applying mapped meshing to a square in this case, the technique can be applied to any four-sided geometry. Irregular geometries can even be subdivided into four-sided entities so that mapped meshing can be applied. In some cases, mapped meshing can be computationally more efficient than free triangular meshing and it gives us more control over the element spacing. For examples of using mapped meshing, check out the Nonisothermal Turbulent Flow over a Flat Plate and Dissociation in a Tubular Reactor tutorials. Comparing the CFD Simulation Results to Existing Literature Now, let’s take a look at the results. First, we check the magnitude of the velocity in the cavity, plotted with the rainbow color scale, and the direction of the flow, indicated with the vector plot. We see that the velocity approaches U = 1 at the top of the cavity, where the fluid flow is being driven by the moving wall. The fluid is pushed into the wall on the right, where it flows downward before moving back up the left side of the cavity. This motion creates a large vortex in the center of the cavity. We can see that for a lower Reynolds number of 100 (figure on the left), the velocities in the center of the cavity are lower due to the dissipation of the energy through the large viscous term. As the Reynolds number increases to 10,000 (figure on the right), we see that the velocities are higher in the cavity and the vortex extends more prominently into the bottom of the cavity. The magnitude of the velocity and the direction of the flow in the cavity for Reynolds numbers of 100 (left) and 10,000 (right). The lid-driven cavity is a benchmark problem, so we want to compare it to existing literature (Ref. 1). To do so, let’s take a look at the velocities along the centerlines of the cavity. 
In the left image below, we see the x-component of the velocity (u) plotted along the vertical centerline, while the right image below plots the y-component of the velocity (v) along the horizontal centerline. We see that the simulation results closely match the results from the literature for the entire range of Reynolds numbers. Comparison of the results from the simulation and literature for the x- (left) and y-components (right) of the velocity for various Reynolds numbers. The velocity plot above shows that a large vortex is formed in the center of the cavity, but what about the flow behavior in the corners? We use streamlines to visualize the flow structures in all parts of the cavity. Because there is no inlet in this simulation, we set the Streamline Positioning to Uniform density (instead of On selected boundaries). Settings window showing the Streamline Positioning set to Uniform density. We can see that for lower Reynolds numbers, the flow separates near the bottom left and right corners and two vortices are formed. As the Reynolds number increases, there is more inertia in the flow, causing it to separate earlier along the wall and create larger corner vortices. Increasing the Reynolds number further, a third vortex forms in the top left corner. For the highest Reynolds number (10,000), two vortices are present in the bottom corners in addition to the one in the top left corner. The flow in the cavity for various Reynolds numbers. Concluding Thoughts on the Lid-Driven Cavity Problem Here, we have shown how to define a classic CFD problem, the lid-driven cavity. An auxiliary sweep has enabled us to solve for multiple Reynolds numbers while improving the convergence of the simulation. We have also demonstrated how you can leverage mapped meshing to efficiently discretize a four-sided geometry and better resolve the high gradients in the flow near the walls.
In addition, we have compared the results to existing literature and found that they closely match. Next Steps To try this example yourself, click the button below to head to the Application Gallery. There, you can download the model documentation and related MPH-file if you have a COMSOL Access account and valid software license. Reference U. Ghia, K.N. Ghia, and C.T. Shin, “High-Re Solutions for Incompressible Flow Using the Navier-Stokes Equations and a Multigrid Method,” Journal of Computational Physics, vol. 48, pp. 387–411, 1982.
Table of Contents Polar Representation with Multiplication of Complex Numbers Recall from The Polar Representation of a Complex Number page that if $z = a + bi \in \mathbb{C}$, then we can consider this number as a vector in the complex plane, and the argument of $z$ is defined to be the angle $\theta$, denoted $\arg (z) = \theta$, formed from this vector with the positive real axis. If we let $r = \mid z \mid$, then we can use basic trigonometry to obtain the polar coordinates $(r, \theta)$ of the complex number $z$ and write $z$ explicitly as:(1) $z = r (\cos \theta + i \sin \theta)$ We will now see that this polar representation of complex numbers gives us a very nice interpretation of the operation of complex number multiplication. Theorem 1: Let $z, w \in \mathbb{C}$ and $z, w \neq 0$. Then: a) $\mid z \cdot w \mid = \mid z \mid \mid w \mid$. b) $\arg (z \cdot w ) = \arg (z) + \arg (w) \pmod{2\pi}$ The notation "$\pmod {2\pi}$" above means that $\arg (z \cdot w) = \arg (z) + \arg (w) + 2k\pi$ for some $k \in \mathbb{Z}$. Proof of a) and b): Let $z, w \in \mathbb{C}$, and let $r_z = \mid z \mid$, $r_w = \mid w \mid$, $\theta_z = \arg (z)$, and $\theta_w = \arg(w)$. Then the polar representations of $z$ and $w$ are: $z = r_z (\cos \theta_z + i \sin \theta_z)$ and $w = r_w (\cos \theta_w + i \sin \theta_w)$. Taking the product $z \cdot w$ yields: $z \cdot w = r_z r_w [(\cos \theta_z \cos \theta_w - \sin \theta_z \sin \theta_w) + i(\sin \theta_z \cos \theta_w + \cos \theta_z \sin \theta_w)]$ Recall the following trigonometric identities: $(*) \: \cos (\theta_z + \theta_w) = \cos \theta_z \cos \theta_w - \sin \theta_z \sin \theta_w$ and $(**) \: \sin (\theta_z + \theta_w) = \sin \theta_z \cos \theta_w + \cos \theta_z \sin \theta_w$. Using both $(*)$ and $(**)$ gives us that: $z \cdot w = r_z r_w [\cos (\theta_z + \theta_w) + i \sin (\theta_z + \theta_w)]$ Notice that this complex number is in polar form; thus, $\mid z \cdot w \mid = r_zr_w = \mid z \mid \mid w \mid$ and $\arg (z \cdot w) = \theta_z + \theta_w = \arg(z) + \arg(w)$. $\blacksquare$ Geometrically, Theorem 1 tells us that if we multiply two complex numbers $z$ and $w$ then the product $z \cdot w$ is the complex number whose modulus/length is the product of the moduli of $z$ and $w$ and whose argument is the sum of the arguments of $z$ and $w$ as pictured below: Corollary 1: Let $z, w \in \mathbb{C}$ and let $z, w \neq 0$. Then $\displaystyle{\biggr \lvert \frac{z}{w} \biggr \rvert = \frac{\mid z \mid}{\mid w \mid}}$.
Proof: By Theorem 1 we have that: $\displaystyle{\mid z \mid = \biggr \lvert \frac{z}{w} \cdot w \biggr \rvert = \biggr \lvert \frac{z}{w} \biggr \rvert \mid w \mid}$ Therefore $\displaystyle{\biggr \lvert \frac{z}{w} \biggr \rvert = \frac{\mid z \mid}{\mid w \mid}}$. $\blacksquare$
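As a quick numerical sanity check of Theorem 1 and Corollary 1 (not part of the proof), one can verify the modulus and argument identities with Python's cmath module:

```python
import cmath
import math

# z and w chosen in polar form: modulus 2 with argument 2.5, modulus 3 with argument 1.2.
z = 2 * cmath.exp(2.5j)
w = 3 * cmath.exp(1.2j)

# |z*w| = |z||w| and |z/w| = |z|/|w|
moduli_multiply = math.isclose(abs(z * w), abs(z) * abs(w))
moduli_divide = math.isclose(abs(z / w), abs(z) / abs(w))

# arg(z*w) = arg(z) + arg(w) up to a multiple of 2*pi, since cmath.phase
# returns the argument in (-pi, pi]
diff = cmath.phase(z * w) - (cmath.phase(z) + cmath.phase(w))
args_add_mod_2pi = math.isclose(diff / (2 * math.pi),
                                round(diff / (2 * math.pi)), abs_tol=1e-9)

print(moduli_multiply, moduli_divide, args_add_mod_2pi)  # True True True
```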
Sublinear Functionals Definition: Let $X$ be a linear space. A function $p : X \to \mathbb{R}$ is said to be Subadditive if for all $x, y \in X$ we have that $p(x + y) \leq p(x) + p (y)$. Definition: Let $X$ be a linear space. A function $p : X \to \mathbb{R}$ is said to be Nonnegatively Homogeneous if for all $\lambda \geq 0$ and for all $x \in X$ we have that $p(\lambda x) = \lambda p(x)$. Definition: Let $X$ be a linear space. A Sublinear Functional is a function $p : X \to \mathbb{R}$ that is subadditive and nonnegatively homogeneous. For example, if $p$ is a seminorm on $X$ then $p$ is a sublinear functional. Furthermore, if $X$ is a normed linear space then the norm itself, i.e., $p(x) = \| x \|$, is a sublinear functional. Indeed, for all $x, y \in X$ and $\lambda \geq 0$ we have that:(1) \begin{align} \quad p(\lambda x) = \| \lambda x \| = |\lambda| \| x \| = \lambda \| x \| = \lambda p(x) \end{align}(2) \begin{align} \quad p(x + y) = \| x + y \| \leq \| x \| + \| y \| = p(x) + p(y) \end{align}
Here is a problem from Week of Code 29 hosted by Hackerrank. Problem. Given two integers q_1 and q_2 (1\le q_1 \le q_2 \le 10^{15}), find and print a common fraction p/q such that q_1 \le q \le q_2 and \left|p/q-\pi\right| is minimal. If there are several fractions having minimal distance to \pi, choose the one with the smallest denominator. Note that checking all possible denominators does not work, as iterating 10^{15} times would exceed the time limit (2 seconds for C or 10 seconds for Ruby). The problem setter suggested the following algorithm in the editorial of the problem: Given q, it is easy to compute p such that r(q) := p/q is the closest rational to \pi among all rationals with denominator q. Find the semiconvergents of the continued fraction of \pi with denominators \le 10^{15}. Start from q = q_1, and at each step increase q by the smallest denominator d of a semiconvergent such that r(q+d) is closer to \pi than r(q). Repeat until q exceeds q_2. Given q, let d = d(q) be the smallest increment to the denominator q such that r(q+d) is closer to \pi than r(q). To justify the algorithm, one needs to prove that d is the denominator of one of the semiconvergents. The problem setter admits that he does not have a formal proof. Inspired by the problem setter’s approach, here is a complete solution to the problem. Note that \pi should not be special in this problem, and can be replaced by any other irrational number \theta. Without loss of generality, we may assume that \theta\in(0,1). We first introduce the Farey intervals of \theta. Start with the interval (0/1, 1/1). Suppose the last interval is (a/b, c/d). Cut it by the mediant of a/b and c/d, and choose whichever of the intervals (a/b, (a+c)/(b+d)), ((a+c)/(b+d), c/d) contains \theta as the next interval. We call the intervals appearing in the above process the Farey intervals of \theta. For example, take \theta = \pi - 3 = 0.1415926....
The Farey intervals are: (0/1, 1/1), (0/1, 1/2), (0/1, 1/3), (0/1, 1/4), (0/1, 1/5), (0/1, 1/6), (0/1, 1/7), (1/8, 1/7), (2/15, 1/7), \dots The Farey sequence of order n, denoted by F_n, is the sequence of completely reduced fractions between 0 and 1 which when in lowest terms have denominators less than or equal to n, arranged in order of increasing size. Fractions which are neighboring terms in any Farey sequence are known as a Farey pair. For example, the Farey sequence of order 5 is F_5 = (0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1). Using the Stern–Brocot tree, one can prove that Lemma 1. For every Farey interval (a/b, c/d) of \theta, the pair (a/b, c/d) is a Farey pair. Conversely, for every Farey pair (a/b, c/d), if \theta \in (a/b, c/d), then (a/b, c/d) is a Farey interval. We say p/q is a good rational approximation of \theta if every rational between p/q and \theta (exclusive) has a denominator greater than q. By the definition of the Farey sequence, combined with Lemma 1, we know that Lemma 2. A rational is an endpoint of a Farey interval of \theta if and only if it is a good rational approximation of \theta. In fact, one can show that the endpoints of Farey intervals and the semiconvergents of the continued fraction are the same thing! Therefore, the problem setter’s claim follows immediately from: Proposition. Given q, let r(q) = p / q be the rational closest to \theta with denominator q. If d = d(q) is the minimal increment to q such that r(q + d) = (p + c) / (q + d) is closer to \theta than r(q), then c/d is a good rational approximation. Remark. The proposition states that the increments to p/q always come from a good rational approximation. It is stronger than the problem setter’s statement, which only asserts that the increment to the denominator q comes from a good rational approximation. Proof. In the (x, y)-plane, plot the trapezoid defined by \left| y/x - \theta \right| < \left|p/q - \theta\right|, \quad q < x < q + d. Also we interpret the rational numbers p/q, (p+c)/(q+d) as points A = (q, p), B = (q+d, p+c).
Let the line through (q, p) parallel to y=\theta x intersect the vertical line x = q+d at C = (q+d, p+\theta d). By the definition of d, we know that the trapezoid does not contain lattice points. In particular, there is no lattice point in the interior of the triangle ABC. In the coordinate system with origin at A, B has coordinate (d, c) and the line through A, C is y = \theta x. Since triangle ABC contains no lattice points, there is no (b, a) with b < d such that a/b is between \theta and c/d. In other words, c/d is a good rational approximation. [qed] Here is one fine point of the algorithm. Because floats may not have enough precision for the purpose of computation, we will use a convergent of the continued fraction of \pi instead. All the computations will then happen in \mathbb{Q}. Finally, we present the algorithm.

min, max = gets.split.map(&:to_i)
P = Rational(5706674932067741, 1816491048114374) - 3
# find endpoints of Farey intervals
a, b, c, d = 0, 1, 1, 1
farey = [[a, b], [c, d]]
while (f = b + d) <= max - min
  farey << [e = a + c, f]
  if P < Rational(e, f)
    c, d = e, f
  else
    a, b = e, f
  end
end
p_min = (P * min).round
# increase p_min/min by fractions in farey
while min <= max
  c, d = nil, nil
  farey.each do |a, b|
    break if min + b > max
    if (Rational(p_min + a, min + b) - P).abs < (Rational(p_min, min) - P).abs
      c, d = a, b
      break
    end
  end
  break if d == nil
  p_min += c
  min += d
end
puts "#{p_min + 3 * min}/#{min}"
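For cross-checking, the mediant-cutting construction of Farey intervals can also be sketched independently (here in Python, using the same rational stand-in for \pi - 3 as above):

```python
from fractions import Fraction

# Build the first `count` Farey intervals of theta by repeatedly cutting the
# current interval at its mediant and keeping the half that contains theta.
def farey_intervals(theta, count):
    a, b, c, d = 0, 1, 1, 1               # current interval (a/b, c/d)
    intervals = [((a, b), (c, d))]
    while len(intervals) < count:
        e, f = a + c, b + d               # mediant (a+c)/(b+d)
        if theta < Fraction(e, f):
            c, d = e, f
        else:
            a, b = e, f
        intervals.append(((a, b), (c, d)))
    return intervals

theta = Fraction(5706674932067741, 1816491048114374) - 3   # rational proxy for pi - 3
print(farey_intervals(theta, 8)[-1])   # ((1, 8), (1, 7))
```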
I wanted to better understand dfa. I wanted to build upon a previous question:Creating a DFA that only accepts number of a's that are multiples of 3But I wanted to go a bit further. Is there any way we can have a DFA that accepts number of a's that are multiples of 3 but does NOT have the sub... Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th... I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd... Consider a non-UFD that only has 2 units ( $-1,1$ ) and the min difference between 2 elements is $1$. Also there are only a finite amount of elements for any given fixed norm. ( Maybe that follows from the other 2 conditions ? )I wonder about counting the irreducible elements bounded by a lower... How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both. I want to construct a nfa from this, but I'm struggling with the regex part
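The first excerpt above asks about a DFA accepting strings whose number of a's is a multiple of 3 (the extra substring condition is cut off in the excerpt). A minimal sketch of the mod-3 part:

```python
# DFA over {a, b}: states 0, 1, 2 track the count of a's modulo 3; state 0 is
# the start state and the only accepting state. (The additional substring
# restriction mentioned in the truncated question is not modeled here.)
def accepts_multiple_of_three_as(word):
    state = 0
    for symbol in word:
        if symbol == 'a':
            state = (state + 1) % 3
        # any other symbol leaves the state unchanged
    return state == 0

print(accepts_multiple_of_three_as("aabab"))    # True: three a's
print(accepts_multiple_of_three_as("abababa"))  # False: four a's
```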
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day and I'm not even running anything with a lot of traffic. @JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension, it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-) @PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1} If you use the D'Alembert operator as well, you might find it pretty to use the symbol \bigtriangleup for your Laplace operator, in order to get a similar look to the \Box symbol that is used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the... Latex exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long. Some are not. To make them fit in the page width, I tried resizebox. The problem is that this will resize the small equation as well as the long one to fit the page width. Which is not what I want. I want only to resize the ones that are longer than the page width and keep the others as is. Is there a way in Latex to do this? Again, I do not know beforehand the size of…

\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}
y &= \sin^2 x + \cos^2 x\\
x &= 5
\end{split}$}
\end{equation*}
\end{document}

The above will resize the small equation, which I do not want.
But since I do not know before how long the equation is, I do resize on everyone. Is there a way to find in Latex using some latex command, if an equation "will fit" the page width or how long it is? If so I can add logic to add resize if needed in that case. What I mean, I want to resize DOWN only if needed. And not resize UP. Also, if you think I should ask this on main board, I can. But thought to check here first. @egreg what other options do I have? sometimes cas generates an equation which do not fit the page. Now it overflows the page and one can't see the rest of it at all. Now, since in pdf one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is done all using a program. @UlrikeFischer I do not generate unreadable equations. These are solutions of ODE's. The latex is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion I do? Keep the long solutions flow out of the page? I can't edit these by hand. This is all generated by a program. I can add latex code around them that is all. But editing them is out of question. I tried breqn package, but that did not work. It broke many things as well. @egreg That was just an example. That was something I added by hand to make up a long equation for illustration. That was not real solution to an ODE. Again, thanks for the effort. but I can't edit the latex generated at all by hand. It will take me a year to do. And I run the program many times each day. each time, all the latex files are overwritten again any way. CAS providers do not generate good Latex also. That is why breqn did not work. many times they add {} around large expressions, which made breqn not able to break it. Also breqn has many other problems. So I no longer use it at all.
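One way to resize only when needed (sketched here as an assumption, not a verified fix for Maple output, and the macro name \fitequation is made up) is to measure the equation in a box first and apply \resizebox only if it is wider than \textwidth:

```latex
% Sketch: store the equation in a box, compare its width to \textwidth,
% and shrink it only when it is too wide.
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\newsavebox{\eqnbox}
\newcommand{\fitequation}[1]{%
  \sbox{\eqnbox}{$\displaystyle #1$}%
  \begin{equation*}
    \ifdim\wd\eqnbox>\textwidth
      \resizebox{\textwidth}{!}{\usebox{\eqnbox}}%
    \else
      \usebox{\eqnbox}%
    \fi
  \end{equation*}}
\begin{document}
\fitequation{y = \sin^2 x + \cos^2 x} % short equation: left at natural size
\end{document}
```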
First of all, hello and thank you for your time. Context I am making a program that solves the differential equation for the time evolution of a system from the equations: $$F[\mathbf{y}]=\int\limits_{\Omega\subset\mathbb{R}^n}f(\mathbf{x},\mathbf{y}, \mathbf{\nabla y})d\mathbf{x}$$ $$\frac{\partial y_i}{\partial t}=k\Delta\frac{\delta F}{\delta y_i}$$ Where $\Delta$ is the Laplacian and $$\frac{\delta F}{\delta y_i}=\sum\limits_k\frac{\partial}{\partial x_k}\frac{\partial f}{\partial (\partial_ky_i)}-\frac{\partial f}{\partial y_i}$$ is the variational derivative ($\partial _k$ is the partial derivative with respect to $x_k$). Also: $\mathbf{x}\in\mathbb{R}^n, \mathbf{y}\in\mathbb{R}^m$ The system should satisfy the global conservation constraints: $$J_i[\mathbf{y}]=\int\limits_{\Omega\subset\mathbb{R}^n}y_id\mathbf{x}=k_i$$ Where the $k_i$ are constants. I ran the program (without adding the Lagrange multipliers to the integral) and noticed that the $J_i$ increased with time, which was obviously not intended. Question I want to add the constraints to the solution. At first I naïvely thought that I could just modify the functional by adding Lagrange multipliers: $$K[\mathbf{y}]=F[\mathbf{y}]-\sum\limits_i\lambda_i J_i[\mathbf{y}]$$ But when checking my reference book I noticed the theorem said (I modified and omitted parts to take what's most relevant to the current question): Suppose that $F$ has an extremum at $y\in C^2[x_0, x_1]$ subject to the boundary conditions [...]. Then there exist two numbers $\lambda_0, \lambda_1$, not both zero, such that $$\frac{\delta K}{\delta y}=0$$ Where $K=\lambda_0 F-\lambda_1 J$ As the theorem says, this works when one wishes to find the extremum, so my naïve assumption is probably wrong since the system I described only reaches the extremum of the functional when $\mathbf{y}$ gets to the steady state (i.e. it approaches it asymptotically).
Is there a way to satisfy the constraint continuously throughout the time evolution of the system despite $F$ not being stationary? I understand this may be more involved than what an answer on the site may allow, so if you know of any good textbook where I could find it I would also be very grateful.
Symmetric Matrices Definition: A square matrix $A$ is said to be Symmetric if $A = A^T$, that is, for all entries in $A$, $a_{ij} = a_{ji}$. We can imagine symmetric matrices to have entries that are mirror images of each other if we draw a line down the main diagonal: For example, the following two matrices $A$ and $B$ are symmetric:(1) We will now look at a theorem outlining some important properties of symmetric matrices. Theorem 1: If $A$ and $B$ are symmetric matrices of size $n \times n$ and $k \in \mathbb{R}$ is a scalar, then: a) $A^T$ is also symmetric. b) The sum $A + B$ and difference $A - B$ are also symmetric matrices. c) The matrix $kA$ is also symmetric. d) $A$ and $B$ being symmetric $\not \Rightarrow AB$ symmetric (i.e., $A$ and $B$ being symmetric does not imply that the product $AB$ is symmetric). We will show all proofs except for (d). To show (d), all that is needed is a counterexample, that is, two symmetric matrices whose product is not a symmetric matrix. Proof of (a): Since $A = A^T$, taking transposes of both sides gives $A^T = (A^T)^T$, so $A^T$ is also symmetric. $\blacksquare$ Proof of (b): We know that $(A + B)^T = A^T + B^T$. Since $A$ and $B$ are symmetric, $A^T = A$ and $B^T = B$, so $(A + B)^T = A + B$, which shows that $A + B$ is symmetric. The same argument with $B$ replaced by $-B$ shows that $A - B$ is symmetric. $\blacksquare$ Proof of (c): If $A$ is symmetric then $(kA)^T = kA^T = kA$, since every entry of $A$ is multiplied by the same scalar, so $kA$ is also symmetric. $\blacksquare$
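A concrete counterexample for part (d) (my own choice of matrices, kept small on purpose):

```python
# Two symmetric 2x2 matrices whose product is not symmetric, using plain
# nested lists so the example is self-contained.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[1, 2], [2, 3]]
B = [[4, 5], [5, 6]]

AB = matmul(A, B)
print(AB)                    # [[14, 17], [23, 28]]
print(AB == transpose(AB))   # False: the product is not symmetric

# By contrast, the sum is symmetric, as Theorem 1 (b) guarantees:
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(S == transpose(S))     # True
```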
It looks like you're new here. If you want to get involved, click one of these buttons! Let's look at some examples of feasibility relations! Feasibility relations work between preorders, but for simplicity suppose we have two posets \(X\) and \(Y\). We can draw them using Hasse diagrams: Here an arrow means that one element is less than or equal to another: for example, the arrow \(S \to W\) means that \(S \le W\). But we don't bother to draw all possible inequalities as arrows, just the bare minimum. For example, obviously \(S \le S\) by reflexivity, but we don't bother to draw arrows from each element to itself. Also \(S \le N\) follows from \(S \le E\) and \(E \le N\) by transitivity, but we don't bother to draw arrows that follow from others using transitivity. This reduces clutter. (Usually in a Hasse diagram we draw bigger elements near the top, but notice that \(e \in Y\) is not bigger than the other elements of \(Y\). In fact it's neither \(\ge\) or \(\le\) any other elements of \(Y\) - it's just floating in space all by itself. That's perfectly allowed in a poset.) Now, we saw that a feasibility relation from \(X\) to \(Y\) is a special sort of relation from \(X\) to \(Y\). We can think of a relation from \(X\) to \(Y\) as a function \(\Phi\) for which \(\Phi(x,y)\) is either \(\text{true}\) or \(\text{false}\) for each pair of elements \( x \in X, y \in Y\). Then a feasibility relation is a relation such that: If \(\Phi(x,y) = \text{true}\) and \(x' \le x\) then \(\Phi(x',y) = \text{true}\). If \(\Phi(x,y) = \text{true}\) and \(y \le y'\) then \(\Phi(x,y') = \text{true}\). Fong and Spivak have a cute trick for drawing feasibility relations: when they draw a blue dashed arrow from \(x \in X\) to \(y \in Y\) it means \(\Phi(x,y) = \text{true}\). But again, they leave out blue dashed arrows that would follow from rules 1 and 2, to reduce clutter! Let's do an example: So, we see \(\Phi(E,b) = \text{true}\). 
But we can use the two rules to draw further conclusions from this: Since \(\Phi(E,b) = \text{true}\) and \(S \le E\) then \(\Phi(S,b) = \text{true}\), by rule 1. Since \(\Phi(S,b) = \text{true}\) and \(b \le d\) then \(\Phi(S,d) = \text{true}\), by rule 2. and so on. Puzzle 171. Is \(\Phi(E,c) = \text{true}\)? Puzzle 172. Is \(\Phi(E,e) = \text{true}\)? I hope you get the idea! We can think of the arrows in our Hasse diagrams as one-way streets going between cities in two countries, \(X\) and \(Y\). And we can think of the blue dashed arrows as one-way plane flights from cities in \(X\) to cities in \(Y\). Then \(\Phi(x,y) = \text{true}\) if we can get from \(x \in X\) to \(y \in Y\) using any combination of streets and plane flights! That's one reason \(\Phi\) is called a feasibility relation. What's cool is that rules 1 and 2 can also be expressed by saying $$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} $$is a monotone function. And it's especially cool that we need the '\(\text{op}\)' over the \(X\). Make sure you understand that: the \(\text{op}\) over the \(X\) but not the \(Y\) is why we can drive to an airport in \(X\), then take a plane, then drive from an airport in \(Y\). Here are some ways to get lots of feasibility relations. Suppose \(X\) and \(Y\) are preorders. Puzzle 173. Suppose \(f : X \to Y \) is a monotone function from \(X\) to \(Y\). Prove that there is a feasibility relation \(\Phi\) from \(X\) to \(Y\) given by $$ \Phi(x,y) \text{ if and only if } f(x) \le y .$$ Puzzle 174. Suppose \(g: Y \to X \) is a monotone function from \(Y\) to \(X\). Prove that there is a feasibility relation \(\Psi\) from \(X\) to \(Y\) given by $$ \Psi(x,y) \text{ if and only if } x \le g(y) .$$ Puzzle 175. Suppose \(f : X \to Y\) and \(g : Y \to X\) are monotone functions, and use them to build feasibility relations \(\Phi\) and \(\Psi\) as in the previous two puzzles. When is $$ \Phi = \Psi ? $$ To read other lectures go here.
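Rules 1 and 2 can be animated with a tiny closure computation. The posets below are hypothetical stand-ins (the lecture's actual Hasse diagrams are in figures not reproduced here), but the code shows how one blue dashed arrow generates more:

```python
# Hypothetical generating arrows: le_X says S <= E, E <= N, S <= W in X;
# le_Y says a <= b, b <= d in Y; `base` holds the single blue dashed arrow.
le_X = {("S", "E"), ("E", "N"), ("S", "W")}
le_Y = {("a", "b"), ("b", "d")}
base = {("E", "b")}

def feasibility_closure(base, le_X, le_Y):
    """Close `base` under rule 1 (go down in X) and rule 2 (go up in Y),
    iterating to a fixpoint so chains of arrows are handled too."""
    phi = set(base)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(phi):
            for (lo, hi) in le_X:
                if hi == x and (lo, y) not in phi:   # rule 1: x' <= x
                    phi.add((lo, y)); changed = True
            for (lo, hi) in le_Y:
                if lo == y and (x, hi) not in phi:   # rule 2: y <= y'
                    phi.add((x, hi)); changed = True
    return phi

print(sorted(feasibility_closure(base, le_X, le_Y)))
# [('E', 'b'), ('E', 'd'), ('S', 'b'), ('S', 'd')]
```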
Definition:Particular Affirmative Contents Definition A particular affirmative is a categorical statement of the form: Some $S$ is $P$ where $S$ and $P$ are predicates. In the language of predicate logic, this can be expressed as: $\exists x: \map S x \land \map P x$ Its meaning can be amplified in natural language as: There exists at least one object with the property of being $S$ which also has the quality of being $P$. $\left\{{x: S \left({x}\right)}\right\} \cap \left\{{x: P \left({x}\right)}\right\} \ne \varnothing$ or, more compactly: $S \cap P \ne \varnothing$ Also denoted as Traditional logic abbreviated the particular affirmative as $\mathbf I$. Thus, when examining the categorical syllogism, the particular affirmative $\exists x: \map S x \land \map P x$ is often abbreviated: $\map {\mathbf I} {S, P}$ The abbreviation $\mathbf I$ for a particular affirmative originates from the second vowel in the Latin word affIrmo, meaning I affirm. Also see Sources 1965: E.J. Lemmon: Beginning Logic ... (previous) ... (next): $\S 4.4$: The Syllogism 1973: Irving M. Copi: Symbolic Logic (4th ed.) ... (previous) ... (next): $4.1$: Singular Propositions and General Propositions 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: syllogism
Talk:Derivative of Exponential Function $\frac{dy}{dx}=\frac{e^x(e^h-1)}{h}$ $\lim_{h \to 0}{(e^h-1)}=h$ $\frac{dy}{dx}=\frac{e^xh}{h}$ Is the second line above correct? It seems to me that as $h \to 0$, $e^h \to 1$ and $(e^h - 1) \to 0$. Definition of e According to Stewart's Calculus: Early Transcendentals, $e$ is defined as the number such that the derivative of $e^x$ at $x=0$ is 1. This is equivalent to stating $~\lim_{h \to 0} \frac{e^h - 1}{h} = 1 $. Why? This is simply because $f(x)=e^{x}\rightarrow f'(x)=\lim_{h \to 0}\frac{e^{x+h}-e^{x}}{h} = \lim_{h \to 0}\frac{e^{x}(e^{h} - 1)}{h}$ Defining $f'(0)=1$, we get $ 1 = f'(0) = \lim_{h \to 0}\frac{e^{0}(e^{h} - 1)}{h} = \lim_{h \to 0}\frac{1\cdot(e^{h}-1)}{h}=\lim_{h \to 0}\frac{e^{h}-1}{h}$ There ya go. -- syntax fixed by Paul E J King (talk) 23:42, 27 April 2013 (UTC) Using e to define e ... ? There are several ways to define $e$. One of them is the limit of that sequence. There are others. They can be found in the (currently undergoing refactoring) page Definition:Euler's Number. --prime mover (talk) 10:11, 30 April 2013 (UTC) Infinite Series Take a common definition of the exponential function: $e^x = \sum_{n = 0}^{\infty} {\frac{x^n}{n!}}$ Evaluating the limit based on this definition: $\lim_{h \to 0} \frac{e^h - 1}{h}=\lim_{h \to 0} \frac{\sum_{n = 0}^{\infty} {\frac{h^n}{n!}}-1}{h}$ $=\lim_{h \to 0} \frac{\frac{h^0}{0!}+\sum_{n = 1}^{\infty} {\frac{h^n}{n!}}-1}{h}$ $=\lim_{h \to 0} \frac{1+\sum_{n = 1}^{\infty} {\frac{h^n}{n!}}-1}{h}$ $=\lim_{h \to 0} \sum_{n = 1}^{\infty} {\frac{h^{n-1}}{n!}}$ $=\lim_{h \to 0} \sum_{n = 0}^{\infty} {\frac{h^{n}}{(n+1)!}}$ $=\sum_{n = 0}^{\infty} {\frac{0^n}{(n+1)!}}$ $=\frac{0^0}{(0+1)!}+\sum_{n = 1}^{\infty} {\frac{0^n}{(n+1)!}}$ $=\frac{1}{1}+0$ $=1$ Notation I find that $e^x$ is a lot easier to read than $\exp x$. Of course, that could just be me, or it could be that the exp notation is more common in higher mathematics.
--Cynic (talk) 22:21, 22 January 2009 (UTC) The operation of associating $2.718281828...^x \ $ is a multifunction in $\C \ $, whereas the map $\text{ exp }:\C \to \C$ is a well-defined transformation of the complex plane that corresponds to the principal branch of $2.718281828...^x \ $. Frequently, the exponential function is simply abbreviated $e^x \ $, with the understanding that it refers only to the exponential function and not to $e=2.718281828..., \ $ raised to the $x \ $ power. I haven't contributed to anything regarding exponentials too heavily, but that's a necessary distinction when discussing the exponential as a complex function, though unnecessary and cumbersome for real analysis or any calculus at an elementary level. Zelmerszoetrop 22:29, 22 January 2009 (UTC) As you say, the distinction is necessary at complex analysis level (which I'm working towards; I'll start as soon as I've finished defining the trig functions in terms of $D_{xx} f \left({x}\right) = -A f \left({x}\right)$). First proof Hello, the argument in the first proof for the claim that (e^h - 1)/h --> 1 as h --> 0 was circular (it had used l'Hopital's rule, which already assumed that the derivative of e^x is e^x). I changed it to a new one; I hope someone will point out if I've made an error. Mag487 23:59, 22 August 2009 (UTC) Sorry, I was unclear. The proof relied (it has referred the reader to this page) on using l'Hopital's rule on the quantity $\lim_{h\to 0} \frac{\exp h - 1}{h}$ to show that it's equal to one. However, in order to apply the rule here, we have to already know the derivative of the function e^x in the numerator. Mag487 05:07, 23 August 2009 (UTC)
I know 2 approaches to do LDA, the Bayesian approach and Fisher's approach. Suppose we have the data $(x,y)$, where $x$ is the $p$-dimensional predictor and $y$ is the dependent variable of $K$ classes. By the Bayesian approach, we compute the posterior $$p(y_k|x)=\frac{p(x|y_k)p(y_k)}{p(x)}\propto p(x|y_k)p(y_k),$$ and as said in the books, assuming $p(x|y_k)$ is Gaussian, we now have the discriminant function for the $k$th class as \begin{align*}f_k(x)&=\ln p(x|y_k)+\ln p(y_k)\\&=\ln\left[\frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k)\right)\right]+\ln p(y_k)\\&=x^T\Sigma^{-1}\mu_k-\frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k+\ln p(y_k),\end{align*} where the last equality drops the terms (such as $-\frac{1}{2}x^T\Sigma^{-1}x$) that are the same for every class. I can see $f_k(x)$ is a linear function of $x$, so for all the $K$ classes we have $K$ linear discriminant functions. However, by Fisher's approach, we try to project $x$ to a $(K-1)$-dimensional space to extract the new features which minimize within-class variance and maximize between-class variance; let's say the projection matrix is $W$ with each column being a projection direction. This approach is more like a dimension reduction technique. My questions are: (1) Can we do dimension reduction using the Bayesian approach? I mean, we can use the Bayesian approach to do classification by finding the discriminant function $f_k(x)$ which gives the largest value for a new $x^*$, but can these discriminant functions $f_k(x)$ be used to project $x$ to a lower dimensional subspace, just like Fisher's approach does? (2) How, if at all, do the two approaches relate to each other? I don't see any relation between them, because one seems just to be able to do classification with the $f_k(x)$ value, and the other is primarily aimed at dimension reduction. UPDATE Thanks to @amoeba, according to the ESL book, I found this: and this is the linear discriminant function, derived via Bayes theorem plus assuming all classes have the same covariance matrix $\Sigma$.
And this discriminant function is the SAME as the $f_k(x)$ I wrote above. Can I use $\Sigma^{-1}\mu_k$ as the direction on which to project $x$, in order to do dimension reduction? I'm not sure about this, since AFAIK, dimension reduction is achieved by doing the between-within variance analysis. UPDATE AGAIN From section 4.3.3, this is how those projections are derived, and of course it assumes a shared covariance among classes, that is, the common covariance matrix $W$ (for within-class covariance), right? My problem is how do I compute this $W$ from the data? Since I would have $K$ different within-class covariance matrices if I try to compute $W$ from the data. So do I have to pool all classes' covariances together to obtain a common one?
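On the last question, the usual convention is indeed to pool: center each class at its own mean, sum the resulting scatter matrices, and divide once at the end. A minimal NumPy sketch of that pooled estimate (the function name and the unbiased $n-K$ divisor are my choices, not quoted from ESL):

```python
import numpy as np

def pooled_within_covariance(X, y):
    # Pool the centered scatter of every class, then divide by n - K:
    # this is the single "common covariance" estimate that LDA assumes.
    classes = np.unique(y)
    n, p = X.shape
    S = np.zeros((p, p))
    for k in classes:
        Xk = X[y == k] - X[y == k].mean(axis=0)  # center class k at its mean
        S += Xk.T @ Xk                           # accumulate its scatter
    return S / (n - len(classes))                # unbiased pooled estimate
```

With $K$ classes this replaces the $K$ separate sample covariances by one matrix, which is exactly the assumption that makes the discriminant functions linear.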
Separability Criterion for Hilbert Spaces Recall that a topological space is said to be separable if it contains a countable and dense subset. When dealing with Hilbert spaces, there is an easy criterion to determine if the space is separable or not. Theorem 1: Let $H$ be a Hilbert space. Then $H$ is separable if and only if $H$ has a countable Hilbert basis. Proof: $\Rightarrow$ Suppose that $H$ is separable. Let $\mathcal F$ be the collection of all orthonormal subsets of $H$ ordered by inclusion and let $\mathcal C$ be a chain in $\mathcal F$. Consider the set: \begin{align} \quad C = \bigcup_{S \in \mathcal C} S \end{align} We first show that $C$ is contained in $\mathcal F$, that is, $C$ is orthonormal. Let $x, y \in C$ with $x \neq y$. Then there exists some $S \in \mathcal C$ such that $x, y \in S$ (since the elements of $\mathcal C$ are ordered by inclusion). But $S$ is an orthonormal subset of $H$, so $\langle x, y \rangle = 0$ and $\| x \| = \| y \| = 1$. Therefore $C \in \mathcal F$. Clearly $C$ is an upper bound for $\mathcal C$. By Zorn's lemma there must exist a maximal element $B \in \mathcal F$. We first establish that $B$ is a Hilbert basis of $H$. Suppose instead that $B^{\perp} \neq \{ 0 \}$ and let $y \in B^{\perp}$ be such that $\| y \| = 1$. Then $B \cup \{ y \} \in \mathcal F$. But then $B \subsetneq B \cup \{ y \}$, which contradicts the maximality of $B$. Therefore we must have that: \begin{align} \quad B^{\perp} = \{ 0 \} \end{align} By the theorem on the Hilbert Bases (Orthonormal Bases) for Hilbert Spaces page we have that $B$ is a Hilbert basis for $H$. We now show that $B$ is countable. Let $x, y \in B$ with $x \neq y$. Then: \begin{align} \quad \| x - y \|^2 = \langle x - y, x - y \rangle = \| x \|^2 - 2 \mathrm{Re} \langle x, y \rangle + \| y \|^2 = 1 - 0 + 1 = 2 \end{align} So the open balls of radius $\frac{\sqrt{2}}{2}$ centered at the elements of $B$ are pairwise disjoint. Now suppose that $B$ is uncountable. A dense subset of $H$ must contain at least one point in each of these uncountably many disjoint balls, so there can be no countable and dense subset of $H$, which contradicts the assumption that $H$ is separable. So $B$ is a countable Hilbert basis for $H$.
$\Leftarrow$ Let $(x_n)_{n=1}^{\infty}$ be a Hilbert basis for $H$. Then we must have that $\mathrm{span} (x_1, x_2, ... )$ is dense in $H$ and in particular the set: \begin{align} \quad \left \{ \sum_{j=1}^{N} (a_j + ib_j)x_j : N \in \mathbb{N}, \: a_j, b_j \in \mathbb{Q} \right \} \end{align} is a countable and dense subset of $H$. So $H$ is separable. $\blacksquare$
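A standard illustration of the criterion (my example, not from the text above): $\ell^2(\mathbb{N})$ is separable, because the unit coordinate vectors form a countable Hilbert basis.

```latex
% The vectors e_n = (0, \dots, 0, 1, 0, \dots), with the 1 in slot n,
% are orthonormal in \ell^2(\mathbb{N}), and their span is dense,
% so (e_n)_{n=1}^{\infty} is a countable Hilbert basis.
% By Theorem 1, \ell^2(\mathbb{N}) is separable.
\langle e_m, e_n \rangle = \delta_{mn},
\qquad
\overline{\operatorname{span}}\{ e_n : n \in \mathbb{N} \} = \ell^2(\mathbb{N})
```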
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y .$$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
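To make the flavor of Puzzle 148 concrete, here is a tiny sketch in Python (all names and data are made up, and this is one possible example, not the intended answer): a schema with one table and one endomap \(f\), two 'databases' interpreting it, and a componentwise bijection satisfying the naturality condition.

```python
# Schema: one object with one endomorphism f.  Two functors into Set:
F_obj = [0, 1, 2, 3]
F_f = lambda n: (n + 1) % 4                      # F's interpretation of f

G_obj = ["a", "b", "c", "d"]
G_f = lambda s: {"a": "b", "b": "c", "c": "d", "d": "a"}[s]

# A candidate natural isomorphism: a bijection on the single component.
alpha = dict(zip(F_obj, G_obj))

# Naturality square: alpha . F(f) == G(f) . alpha on every element.
natural = all(alpha[F_f(x)] == G_f(alpha[x]) for x in F_obj)
```

Since each component of `alpha` is a bijection and the square commutes, this is a natural isomorphism: the two databases hold 'the same' records up to a consistent renaming.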
I basically agree with @John, let me expand: We want to model $y$ using a simple linear model, the most basic setup is$$y = c + \mathbf{X}\beta$$with $y$ the $N$ observations, $c$ a constant, $\mathbf{X}$ the $N \times M$ matrix of regressors and $\beta$ an $M$-dimensional vector of coefficients. This model has $M$ parameters, the elements of $\beta$. The above model is estimated, the Ramsey RESET test finds the model to be misspecified, and the researcher wants to fix this. As you propose, the above model is easily extended$$y = c + \mathbf{X}\beta + \mathbf{X}'\gamma$$where $\mathbf{X}'_{i, j} = \mathbf{X}_{i, j}^{e_j}$, $\mathbf{e}$ is an $M$-dimensional vector and $\gamma$ an $M$-dimensional vector of coefficients. This model has $3M$ parameters, the elements of $\beta$, $\gamma$ and $\mathbf{e}$, and is much harder to estimate because of the nonlinearity. This can be easily solved by fixing all $e_j$ a priori. This yields another question: to which value do we fix them? As @pat notes, raising to a non-integer is a bad idea in the general case. But, as you note, one could use the absolute value of the regressor raised to a rational exponent, since $f(q) = |a|^q$ is continuous and real for all real $q$. So why the insistence on integer valued exponents? One simple reason is laziness: it is much simpler to compute $x^2$ than $x^{1.95}$; a second reason is convention. A third reason is that small changes in the exponent have a small impact on the model. These arguments do not apply to the case where a rational exponent would yield a significant improvement. Unfortunately this has severe methodological problems: as argued above, making the exponents parameters makes estimation much harder and, perhaps more importantly, reduces parsimony. The last option, fixing the exponent, is possible. However it would require a strong economic argument to defend this particular choice.
If your application is such that it is absolutely clear that exponentiation with $q \in \mathbb{Q}$ is justified then you're free to do that. There are no methodological problems that I know of. But be prepared for critics who will notice and will require justification of your particular choice for $q$. Another reason to choose $e_i = 2$ is the symmetry with taking cross products of the regressors: from this perspective a square is a cross product with itself.
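As a hedged illustration of the fixed-exponent route (synthetic data, all names mine): with the exponents fixed at 2 a priori, the extended model stays linear in its parameters and ordinary least squares still applies directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
# True process: y = 1 + 2 x1 - x2 + 0.5 x1^2 + noise
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 \
    + rng.normal(scale=0.1, size=n)

# Regressors plus their squares: still linear in the coefficients,
# so plain least squares estimates all of them at once.
Z = np.column_stack([np.ones(n), X, X ** 2])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

Had we instead treated the exponents as free parameters, the objective would be nonlinear in them and we would need iterative nonlinear least squares, which is the estimation difficulty mentioned above.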
Abel's Identity for Linear Homogenous Second Order Differential Equations Recall from the Wronskian Determinants and Linear Homogenous Differential Equations page that if we have a linear homogenous second order differential equation $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ and $y = y_1(t)$ and $y = y_2(t)$ are solutions to this differential equation, then the Wronskian of $y_1$ and $y_2$ is defined as: \begin{align} W(y_1, y_2) = \begin{vmatrix} y_1(t) & y_2(t)\\ y_1'(t) & y_2'(t) \end{vmatrix} \end{align} The following theorem gives us an alternative form for the Wronskian between the solutions $y_1$ and $y_2$ of this differential equation. Theorem 1 (Abel's Identity for Linear Homogenous Second Order Differential Equations): Let $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ be a second order linear homogenous differential equation where $p$ and $q$ are continuous on an open interval $I$ such that $t_0 \in I$. Then the Wronskian of $y_1$ and $y_2$ at some $t$ is given by $W(y_1, y_2) = C e^{- \int p(t) \: dt}$ where $C$ is some constant dependent on $y_1$ and $y_2$. Proof: Let $y = y_1(t)$ and $y = y_2(t)$ be solutions to this differential equation.
Then we have that: \begin{align} \quad \frac{d^2 y_1}{dt^2} + p(t) \frac{d y_1}{dt} + q(t) y_1 = 0 \: (*) \quad \mathrm{and} \quad \frac{d^2 y_2}{dt^2} + p(t) \frac{d y_2}{dt} + q(t) y_2 = 0 \: (**) \end{align} Now take the first equation, $(*)$, and multiply both sides by $-y_2(t)$ to get: \begin{align} \quad -y_2(t)\frac{d^2 y_1}{dt^2} -y_2(t)p(t)\frac{d y_1}{dt} - y_2(t)q(t) y_1 = 0 \end{align} Now take the second equation, $(**)$, and multiply both sides by $y_1(t)$ to get: \begin{align} \quad y_1(t) \frac{d^2 y_2}{dt^2} + y_1(t) p(t) \frac{d y_2}{dt} + y_1(t) q(t) y_2 = 0 \end{align} We will then add the last two equations together and so: \begin{align} \quad 0 = \left [y_1(t) \frac{d^2 y_2}{dt^2} + y_1(t) p(t) \frac{d y_2}{dt} + y_1(t) q(t) y_2 \right ] + \left [-y_2(t)\frac{d^2 y_1}{dt^2} - y_2(t) p(t) \frac{d y_1}{dt} - y_2(t)q(t) y_1 \right ] \\ = \left ( y_1(t) \frac{d^2 y_2}{dt^2} - y_2(t) \frac{d^2 y_1}{dt^2} \right ) + p(t) \left ( y_1(t) \frac{d y_2}{dt} - y_2(t) \frac{d y_1}{dt} \right ) + q(t) \underbrace{\left ( y_1(t) y_2(t) - y_1(t) y_2(t) \right )}_{= 0} \\ = \left ( y_1(t) \frac{d^2 y_2}{dt^2} - y_2(t) \frac{d^2 y_1}{dt^2} \right ) + p(t) \left ( y_1(t) \frac{d y_2}{dt} - y_2(t) \frac{d y_1}{dt} \right ) \end{align} Now the Wronskian of $y_1$ and $y_2$ is given by: \begin{align} \quad W(t) = W(y_1, y_2) = \begin{vmatrix} y_1(t) & y_2(t)\\ y_1'(t) & y_2'(t)\end{vmatrix} = \begin{vmatrix} y_1(t) & y_2(t)\\ \frac{d y_1}{dt} & \frac{d y_2}{dt} \end{vmatrix} = y_1(t) \frac{d y_2}{dt} - y_2(t) \frac{d y_1}{dt} \end{align} Differentiating, \begin{align} \quad W'(t) = y_1'(t)\frac{d y_2}{dt} + y_1(t) \frac{d^2 y_2}{dt^2} - y_2'(t) \frac{d y_1}{dt} - y_2(t) \frac{d^2 y_1}{dt^2} = y_1(t) \frac{d^2 y_2}{dt^2} - y_2(t) \frac{d^2 y_1}{dt^2} + \underbrace{y_1'(t) y_2'(t) - y_1'(t)y_2'(t)}_{=0} = y_1(t) \frac{d^2 y_2}{dt^2} - y_2(t) \frac{d^2 y_1}{dt^2} \end{align} From this, we can now rewrite the equation from earlier in the form $W'(t) + p(t) W(t) = 0$.
We can solve this differential equation by using the method of integrating factors. Let $\mu (t) = e^{\int p(t) \: dt}$ and so: \begin{align} W'(t) + p(t) W(t) = 0 \\ \mu (t) W'(t) + \mu (t) p(t) W(t) = 0 \\ \frac{d}{dt} \left ( W(t) \mu (t) \right ) = 0 \\ \int \frac{d}{dt} \left ( W(t) \mu (t) \right ) \: dt = \int 0 \: dt \\ W(t) \mu(t) = C \\ W(t) = \frac{C}{\mu (t)} \\ W(t) = \frac{C}{e^{\int p(t) \: dt}} \\ W(t) = Ce^{- \int p(t) \: dt} \quad \blacksquare \end{align}
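As a quick sanity check of the identity (my example, not from the page): for $y'' + 3y' + 2y = 0$ with solutions $e^{-t}$ and $e^{-2t}$, Abel's identity predicts $W(t) = Ce^{-3t}$, and a direct computation gives $W(t) = -e^{-3t}$, i.e. $C = -1$.

```python
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.exp(-t), sp.exp(-2 * t)   # solutions of y'' + 3y' + 2y = 0
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))
# Abel: W(t) = C * exp(-int p dt) with p(t) = 3, so W = C * exp(-3t)
C = sp.simplify(W * sp.exp(3 * t))    # recover the constant C
```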
A water-tank ‘T’ supplies 3000 litres of water. It supplies equal volume of water to the pipelines connecting the sub-stations P, Q & R. Similarly, sub-stations Q & R supply equal volume of water to the pipelines connecting them to their respective mini-stations. Sub-station P supplies water such that equal volume of water is received at mini-stations P 1 & P 2. The numbers indicate the length of the pipelines in km. It is observed that there is a loss of ‘x’% and ‘2x’% of water in the pipelines joining the tank to the sub-station and the pipelines joining the sub-stations to mini-stations respectively. (where ‘x’ represents the length of the pipeline). Q1. How much water (in litres) is received at mini-station P 1? Q2. Find the length of pipe $\mathrm { Q } \rightarrow \mathrm { Q } _ { 3 }$ (in km.) if the sum of the volume of water received at $\mathrm { Q } _ { 1 }$ and $\mathrm { Q } _ { 3 }$ is 380 litres. Q3. $\mathrm { L } _ { 1 }$ and $\mathrm { L } _ { 2 }$ are the lengths of $\mathrm { R } \rightarrow \mathrm { R } _ { 1 }$ and $\mathrm { R } \rightarrow \mathrm { R } _ { 2 }$ pipelines respectively. Also, it is known that $\mathrm { L } _ { 1 } + \mathrm { L } _ { 2 } = 25 \mathrm { km }$ . The sum of the volume of water received at mini-stations $\mathrm { R } _ { 1 }$ and $\mathrm { R } _ { 2 }$ is 600 litres. Find the length of the pipeline $\mathrm { T } \rightarrow \mathrm { R }$ . 10$\mathrm { km }$ 30$\mathrm { km }$ 25$\mathrm { km }$ 20$\mathrm { km }$ Q4. Due to scarcity of water at sub-station Q, a new pipeline is fitted between sub-station R and sub-station Q. A water loss of 5x% is observed in the new pipeline where ‘x’ is the length of the pipeline (in km). Sub-station Q now supplies 300 litres of water in each pipeline to its mini-stations. Find the length of the new pipeline if R supplies equal volume of water to each pipeline. 
(Use data from the previous question if necessary) 12.5$\mathrm { km }$ 11.25$\mathrm { km }$ 17.5$\mathrm { km }$ 8.75$\mathrm { km }$ Q1. As the water gets equally divided in the pipelines from the tank, water entering pipeline $\mathrm { T } \rightarrow \mathrm { P } = \frac { 3000 } { 3 } = 1000$ litres $\rightarrow$ Water received at sub-station $\mathrm { P } = 85 \%$ of 1000 litres $= 850$ litres Let a and b represent the volume of water entering pipeline $P \rightarrow P _ { 1 }$ and $P \rightarrow P _ { 2 }$ respectively. $\rightarrow a + b = 850$ As mini-stations $\mathrm { P } _ { 1 }$ and $\mathrm { P } _ { 2 }$ receive equal volume of water, 90$\%$ of $\mathrm { a } = 80 \%$ of $\mathrm { b }$ Therefore, $9 \mathrm { a } = 8 \mathrm { b }$, i.e. $9 \mathrm { a } - 8 \mathrm { b } = 0$ Solving the two equations simultaneously, we get $a = 400$ and $b = 450$ Hence, volume of water received at mini-station $P _ { 1 } = 400 \times 0.9 = 360$ litres Therefore, the required answer is 360. Q2. Let the length of pipe $\mathrm { Q } \rightarrow \mathrm { Q } _ { 3 }$ be 'a' $\mathrm { km }$. Volume of water received by sub-station $\mathrm { Q } = 75 \%$ of 1000 litres $= 750$ litres. Quantity of water received by mini-station $\mathrm { Q } _ { 1 } = 84 \%$ of $\left( \frac { 750 } { 3 } \right) = 210$ litres Volume of water received by mini-station $\mathrm { Q } _ { 3 } = \frac { 750 } { 3 } - \frac { 750 } { 3 } \times \frac { 2 \mathrm { a } } { 100 } = 5 ( 50 - \mathrm { a } )$ $\Rightarrow 210 + 5 ( 50 - \mathrm { a } ) = 380$ $\Rightarrow \quad 5 ( 50 - \mathrm { a } ) = 170$ $\Rightarrow ( 50 - \mathrm { a } ) = 34$ $\Rightarrow \mathrm { a } = 16 \mathrm { km }$ Q3. Let 'x' represent the length of pipeline $\mathrm { T } \rightarrow \mathrm { R }$.
Therefore Volume of water received by sub-station $\mathrm { R } = 1000 \frac { ( 100 - \mathrm { x } ) } { 100 } = 10 ( 100 - \mathrm { x } )$ Therefore Volume of water distributed by $\mathrm { R } = \frac { 10 ( 100 - \mathrm { x } ) } { 2 } = 5 ( 100 - \mathrm { x } )$ Volume of water received by mini-stations: $R _ { 1 } = 5 ( 100 - x ) \frac { \left( 100 - 2 \mathrm { L } _ { 1 } \right) } { 100 }$ and $R _ { 2 } = 5 ( 100 - x ) \frac { \left( 100 - 2 \mathrm { L } _ { 2 } \right) } { 100 }$ $\rightarrow \frac { 5 ( 100 - \mathrm { x } ) } { 100 } \left[ 200 - 2 \left( \mathrm { L } _ { 1 } + \mathrm { L } _ { 2 } \right) \right] = 600$ $\Rightarrow ( 100 - \mathrm { x } ) ( 200 - 50 ) = 12000$ $\Rightarrow \quad ( 100 - \mathrm { x } ) = \frac { 12000 } { 150 } = 80$ $\Rightarrow \mathrm { x } = 20 \mathrm { km }$ Q4. Since sub-station Q distributes 300 litres of water to each of its mini-stations, it receives 300 x 3 = 900 litres of water. As it receives 750 litres from T, it receives the remaining 150 litres from R. Let the length of the new pipeline be 'a' km. From Q3, sub-station R receives $10 ( 100 - 20 ) = 800$ litres. At sub-station $\mathrm { R }$, there are now 3 pipelines. Thus, $\mathrm { R }$ distributes $\frac { 800 } { 3 }$ litres of water in each pipeline.
Volume of water received by $Q = \frac { 800 } { 3 } - \frac { 800 } { 3 } \times \frac { 5 a } { 100 } = 150$ $\Rightarrow \quad \frac { 800 } { 3 } \frac { ( 100 - 5 a ) } { 100 } = 150$ $\Rightarrow ( 100 - 5 a ) = \frac { 150 \times 3 } { 8 }$ $\Rightarrow 5 a = 100 - \frac { 150 \times 3 } { 8 } = \frac { 800 - 450 } { 8 } = \frac { 350 } { 8 }$ $\Rightarrow a = 8.75 \mathrm { km }$
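The Q1 arithmetic can be double-checked mechanically with exact rational arithmetic (a throwaway sketch; variable names mine):

```python
from fractions import Fraction

# a + b = 850 and 90% of a = 80% of b  =>  9a = 8b
b = Fraction(9 * 850, 17)        # substitute a = 8b/9 into a + b = 850
a = 850 - b
received_P1 = Fraction(9, 10) * a  # 90% of a reaches mini-station P1
```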
Shor's method relies on a period finding routine on a quantum computer. A function $f: (x_1, \dots, x_n) \mapsto f(x_1, \dots, x_n)$ is periodic, of period $(\omega_1, \dots, \omega_n)$, if $f(x_1 + \omega_1, \dots, x_n + \omega_n) = f(x_1, \dots, x_n)$ for all tuples $(x_1, \dots, x_n)$ in the domain of $f$. Factorization problem Given an RSA modulus $N = pq$, find primes $p$ and $q$. Choose a random integer $a \in \mathbb{Z}_N$ (without loss of generality, we assume $\gcd(a,N) = 1$ —otherwise, this yields the factorization of $N$ and the factorization problem is solved). Consider the univariate function $f: x \mapsto f(x) = a^x \bmod N$. The period finding routine finds an $\omega$ such that $f(x + \omega) = f(x)$. As a consequence, $\omega$ is a multiple of the order of $a$ modulo $N$. Indeed, one has $f(x+\omega) = f(x) \iff a^\omega \equiv 1 \pmod N$. If $\omega$ is a multiple of $\lambda(N)$ —where $\lambda(N)$ denotes Carmichael's function, then Miller's algorithm yields the factorization of $N$. Otherwise, repeat the process with another $a$, get the period $\omega_a$, and update $\omega$ as $\omega \gets \operatorname{lcm}(\omega, \omega_a)$, until $\omega$ is a multiple of $\lambda(N)$. [ A description of Miller's algorithm can be found in Cryptography: Theory and Practice by Douglas Stinson, http://cacr.uwaterloo.ca/~dstinson/CTAP.html ] Discrete log problem Let $g$ be a generator of a group $\mathbb{G}$ of prime order $q$. Given $y = g^k \in \mathbb{G}$, find the value of $k$. Consider the bivariate function $f : (x_1, x_2) \mapsto g^{x_1} y^{x_2}$. The period finding routine finds a pair $(\omega_1, \omega_2)$ such that $f(x_1 + \omega_1, x_2 + \omega_2) = f(x_1,x_2)$. The solution to the discrete logarithm problem is then given by $k = -\omega_1/\omega_2 \bmod q$. 
Indeed, one has $f(x_1 + \omega_1, x_2 + \omega_2) = f(x_1,x_2) \iff g^{\omega_1} y^{\omega_2} = 1_{\mathbb{G}} \iff g^{\omega_1 + k\omega_2} = 1_{\mathbb{G}}$ and thus $\omega_1 + k\omega_2 \equiv 0 \pmod q$.
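For intuition, the classical post-processing for the factoring case can be sketched as follows. This is the simplified textbook variant (even period with $a^{\omega/2} \not\equiv -1 \pmod N$) rather than Miller's full algorithm, and the period is found here by brute force as a stand-in for the quantum routine:

```python
from math import gcd

def order(a, N):
    # Brute-force stand-in for the quantum period-finding routine.
    k, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        k += 1
    return k

def split(N, a):
    # Try to obtain a nontrivial factor of N from the order of a mod N.
    omega = order(a, N)
    if omega % 2:
        return None                      # odd order: pick another a
    r = pow(a, omega // 2, N)            # a nontrivial square root of 1?
    if r == N - 1:
        return None                      # a^(omega/2) = -1: no split
    p = gcd(r - 1, N)
    return (p, N // p) if 1 < p < N else None
```

The point is that once the period is known, everything else is cheap classical arithmetic; the quantum computer is needed only to produce $\omega$.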
Is there any way to theoretically determine the current consumption of an ultrasound piezo transducer? I only know the impedance at the resonant frequency = 50 ohms and an excitation voltage around -170 V. Does Ohm's law work in this case? Thank you. It depends on what kind of signal you are driving it with. Ultrasonic transducers are pulsed; how much you pulse it is up to the designer, and you will want to minimize it, 1) Not to waste power and 2) Not to dissipate large amounts of power into the thing you are measuring. A piezoelectric transducer is capacitive and cannot be driven with a DC waveform (and you wouldn't want to, because you wouldn't have any signal to observe from waves reflecting off materials) $$P_{avg} = \frac{1}{T} \int_0^{T}p(t) \,dt$$ If you have strictly a sine wave then $A$ would be the amplitude: $$ V(t) = A\sin(2\pi f t)$$ $$P_{avg} = \frac{1}{T} \int_0^{T}\frac{V(t)^2}{R} \,dt = \frac{1}{T} \int_0^{T}\frac{A^2\sin^2(2\pi f t)}{R} \,dt$$ or whatever function of voltage you are producing. If you're driving the transducer at resonance, the reactive terms will cancel and all you'll be left with is the resistance. In this case, if the impedance looks like a pure resistance, then Ohm's law applies and the transducer will draw $$ I = \frac {E}{Z} = \frac{170V}{50\Omega} = 3.4 \text { amperes}$$ and dissipate $$ P= I^2 R = 11.56 \times 50\Omega = 578 \text { watts}$$
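In code, the resonance estimate above looks like this. Note (my addition, following the sine-wave integral above): if 170 V is the peak of a continuous sine rather than an RMS value, the average dissipation is half the $I^2R$ figure.

```python
E = 170.0        # drive voltage (V)
R = 50.0         # impedance at resonance, assumed purely resistive (ohm)

I = E / R                 # Ohm's law at resonance
P = I ** 2 * R            # dissipation if E is an RMS value
P_avg_if_peak = P / 2.0   # average power if E is instead a sine peak
```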
I need to determine if the following series converges: $$\sum_{n=1}^\infty (\sqrt[3]{n+1}-\sqrt[3]{n-1})^\alpha$$ where $\alpha$ is some real number (the answer might depend on the choice of $\alpha$). This is the only question in my homework I was not able to do. I tried to define $a_n=(\sqrt[3]{n+1}-\sqrt[3]{n-1})^\alpha$ and look if something interesting happens with $\frac{a_{n+1}}{a_n}$ or with $\sum2^n \cdot a_{2^n}$, but unfortunately it did not work. Thank you for your help, and if you answer me please try to make your answer as basic as possible, because I'm not advanced yet so I don't know much (it should be basic, as this is a homework question in a calculus 1 course). Thank you! The key issue here is that you need to exploit some sort of cancellation between the two halves of the summands; the trivial estimate that the summands are just about bounded by $(2n^{1/3})^{\alpha}$ when $\alpha > 0$ isn't good enough. A standard trick to study these is to use the binomial theorem to give a sharp approximation of the power function. To wit, $$(n + 1)^{1/3} = n^{1/3}\left(1 + \frac 1 n\right)^{1/3} = n^{1/3} \left(1 + \frac 1 3 \frac 1 n + O(n^{-2})\right) \approx n^{1/3} + \frac 1 3 n^{-2/3}$$ Using a similar approximation for the other half of the summand, the question you really need to answer is for which $\alpha$ the series $$\sum_{n = 1}^{\infty} \left(n^{-2/3}\right)^{\alpha}$$ converges. As a simpler method recall that $$A^3-B^3=(A-B)(A^2+AB+B^2) \implies A-B=\frac{A^3-B^3}{A^2+AB+B^2}$$ then $$\sqrt[3]{n+1}-\sqrt[3]{n-1}=\frac{2}{\sqrt[3]{(n+1)^2}+\sqrt[3]{n^2-1}+\sqrt[3]{(n-1)^2}} \sim\frac2{3n^\frac23}$$ then the given series converges by limit comparison test with $\sum \left(\frac1{n^{2/3}}\right)^\alpha$ for $\frac23\alpha >1$, that is $\alpha>\frac32$.
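A quick numerical check (not a proof, just a sanity check) of the asymptotics both answers rely on, namely $\sqrt[3]{n+1}-\sqrt[3]{n-1} \sim \frac{2}{3}n^{-2/3}$:

```python
n = 10 ** 6
term = (n + 1) ** (1 / 3) - (n - 1) ** (1 / 3)
approx = (2 / 3) * n ** (-2 / 3)
ratio = term / approx      # should be very close to 1 for large n
```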
Reiterate problem Bunny: a chess piece which moves like a bishop but only one square from its current position. It may also hop over another piece. Hop: a bunny hops when another piece is in a square touching the current square (no diagonals). In its turn it occupies the square 3 squares from its current square in the direction of the hopped piece. The hopped piece then takes the original square of the bunny. This is not considered a turn/move for the hopped piece. Say there are 2 bunnies on a board, $\alpha$ and $\beta$. $\alpha$ and $\beta$ begin on opposite colour squares of your choosing. By what means might they both tour the board? No turn may be taken to a previously occupied square (being hopped does not constitute occupation). They take turns, $\alpha$ moves first, $\beta$ second, and they will take 63 turns each. Vocabulary The bunny thus has 2 distinct modes of motion: the "step" and the "hop". If a piece is hopped over, its motion (though not counted as a move) is a "pivot". A sequence of "step"s is a "walk". When $\alpha$ moves to a square he paints the square $green$. When $\beta$ moves to a square he paints it $blue$. After both bunnies have moved to a square it is painted $red$ - and hereafter no bunny may enter except by "pivot", and then only immediately after the last bunny has moved to this square. $Lemma\ 1:$ It is a known fact that no tour purely constituted of steps is possible. In fact, it is known that it takes 4 distinct walks (sharing no squares) to cover all the black (or white) squares on the board. $Lemma\ 2:$ As a corollary to the above, only a subset of the total squares of a given colour is reachable from any arbitrary square - where reachable means that it can be reached by using only some sequence of steps (no hopping). If all squares of a colour were reachable then $Lemma\ 1$ would be false. Answer I will now claim that this bunny's tour is impossible.
For $\alpha$, given any starting square $s$ on colour $c$, we know that $s$ must be a square on only one of the four walks needed to cover the squares of colour $c$ - call this walk $w$. Any square visited from $s$ must be part of $w$. It is clear that his tour cannot be completed from $s$ ($Lemma\ 1$). This means that $\alpha$ must escape and continue by means of a hop. Now, $\beta$ occupies one of the squares in $w$, $s^{\prime}$. $\beta$ may now proceed on his walk $w^{\prime}$. We know that only the same squares that were reachable from $s$ are reachable from $s^{\prime}$. This means that they are part of the same walk. Eventually $\beta$ will get stuck or otherwise need to change to a different walk to have a chance at covering the board (as we know that 1 is not sufficient). His only means of escape is a hop. Now $\alpha$ takes a square on $w^{\prime}$, BUT $w^{\prime}$ has exactly the same reachable squares as $w$. So, $\alpha$ has gained no new reachable squares for colour $c$ ($Lemma\ 2$). And there is nothing he can do to remedy this situation. Thus, there is no 2-bunny tour of a chessboard. early observations for historical reasons only I'm quite positive that there is no solution. Let's say we have 2 bunnies, a black one and a white one. The black bunny paints the board with green paint, and the white bunny paints the board with blue paint. A square that has been painted with both green and blue paint is red. a white bunny can no longer visit a blue square a black bunny can no longer visit a green square a red square can no longer be accessed by any bunny. The answer to this question suggests that there is no "walking-bishop" tour. The bunny is a walking-bishop with a hopping capability. Each bunny must visit every square. This means that at least 8 hops would be required. 3 squares are "involved" in a hop. And one of them will be guaranteed to be red after a hop.
This means that the 8 hops will involve cancelling out at least $\frac{1}{8}^{th}$ of the entire board. they must change from black to white squares 4 times in order to get out of where they are stuck and to finish painting the board. The dead-locks occur under the following conditions: (the first 3 are because the bunny's counterpart cannot reach a hopping square) a white bunny is anywhere where the 4 opposite colour squares surrounding him are coloured green a black bunny is anywhere where the 4 opposite colour squares surrounding him are coloured blue any bunny is anywhere where the 4 opposite colour squares surrounding him are coloured red a bunny is on a red square and cannot move to a non-red square without making it red (now hopping is not possible - as the other bunny would need to occupy a red square - i.e. a square he has already occupied in the past) I will turn this into something solid soon.
Image Dimensions
(Latest revision as of 10:55, 20 May 2013)

Disclaimer: This page's content is not official and not guaranteed to be free of mistakes. At the moment, it's even only a sum of personal thoughts to cast a bit of light onto synfig's image dimensions handling.

Describing the fields of the Canvas Properties Dialog

The user accesses the image dimensions in the Canvas Properties Dialog.

The 'Other' tab

Here some properties can simply be locked (so that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).

The 'Image' tab

Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:

- The on-screen size(?): The fields ''Width'' and ''Height'' tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.
- The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.

[Image: Non_square_pixels.png. Note the different scales at the rulers. Although the image is clearly 400x300 pixels big on screen, the rulers say it is only 400x200, which is what the ''Image Area'' values say.]
The mysterious Image Area

Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: $\scriptstyle\text{span}=\sqrt{\Delta x^2 + \Delta y^2}$). The unit seems to be not pixels but units, which are at 60 pixels each. If the ratio of the image size and image area dimensions is off, for example circles will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels.

Effects of the Image Area

Somehow the image area setting seems to be saved when copy&pasting between images, see also bug #2116947.

Possible intended effects of out-of-ratio image areas

As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia. Still, it is probably consensus that the image, as shown on screen while editing, should look as closely as possible like when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles.

Feature wishlist to simplify working across documents

See also Explanation by dooglus on the synfig-dev mailing list.
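The span formula and the units-to-pixels conversion above are simple enough to sketch directly. A minimal helper (the function names are ours, not Synfig's API; the 60-pixels-per-unit factor is the value quoted in the text):

```python
import math

# Sketch of the arithmetic described above. Function names are illustrative,
# not Synfig API; the 60-pixels-per-unit factor is the value quoted in the text.
PIXELS_PER_UNIT = 60

def image_span(top_left, bottom_right):
    """Diagonal 'span' between the two Image Area corner points, in units."""
    dx = bottom_right[0] - top_left[0]
    dy = bottom_right[1] - top_left[1]
    return math.sqrt(dx * dx + dy * dy)

def units_to_pixels(units):
    return units * PIXELS_PER_UNIT
```

For example, an image area from (0, 0) to (3, 4) has a span of 5 units, i.e. 300 pixels at the quoted scale.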
Convergence of measures
(Revision as of 00:35, 30 July 2012)

A concept in measure theory, determined by a certain topology in a space of measures that are defined on a certain $\sigma$-algebra $\mathcal{B}$ of subsets of a space $X$ or, more generally, in a space $\mathcal{M} (X, \mathcal{B})$ of charges, i.e. countably-additive real (resp. complex) functions $\mu: \mathcal{B}\to \mathbb R$ (resp. $\mathbb C$), often also called $\mathbb R$ (resp. $\mathbb C$) valued or signed measures. The total variation measure of a $\mathbb C$-valued measure is defined on $\mathcal{B}$ as:
\[
\abs{\mu}(B) :=\sup\left\{ \sum_i \abs{\mu(B_i)}: \text{$\{B_i\}\subset\mathcal{B}$ is a countable partition of $B$}\right\}.
\]
In the real-valued case the above definition simplifies as
\[
\abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right).
\]
The total variation of $\mu$ is then defined as $\left\|\mu\right\|_v := \abs{\mu}(X)$. The space $\mathcal{M}^b (X, \mathcal{B})$ of $\mathbb R$ (resp. $\mathbb C$) valued measures with finite total variation is a Banach space, and the following are the most commonly used topologies.

1) The norm or strong topology: $\mu_n\to \mu$ if and only if $\left\|\mu_n-\mu\right\|_v\to 0$.

2) The weak topology: a sequence of measures $\mu_n \rightharpoonup \mu$ if and only if $F (\mu_n)\to F(\mu)$ for every bounded linear functional $F$ on $\mathcal{M}^b$.

3) When $X$ is a topological space and $\mathcal{B}$ the corresponding $\sigma$-algebra of Borel sets, we can introduce on $\mathcal{M}^b$ the narrow topology. In this case $\mu_n$ converges to $\mu$ if and only if
\begin{equation}\label{e:narrow}
\int f\, \mathrm{d}\mu_n \to \int f\, \mathrm{d}\mu
\end{equation}
for every bounded continuous function $f:X\to \mathbb R$ (resp. $\mathbb C$). This topology is also sometimes called the weak topology; however, such notation is inconsistent with Banach space theory, see below. The following is an important consequence of narrow convergence: if $\mu_n$ converges narrowly to $\mu$, then $\mu_n (A)\to \mu (A)$ for any Borel set $A$ such that $\abs{\mu}(\partial A) = 0$.

4) When $X$ is a locally compact topological space and $\mathcal{B}$ the $\sigma$-algebra of Borel sets, yet another topology can be introduced: the so-called wide topology, sometimes referred to as the weak$^\star$ topology. A sequence $\mu_n\rightharpoonup^\star \mu$ if and only if \eqref{e:narrow} holds for continuous functions which are compactly supported. This topology is in general weaker than the narrow topology.
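The gap between the wide and narrow topologies can be seen in a standard example (ours, not from this entry): on $X=\mathbb R$, the Dirac masses $\delta_n$ converge widely to $0$, since every compactly supported test function eventually vanishes at $n$, but not narrowly, since the bounded continuous function $f\equiv 1$ always integrates to $1$. A tiny numeric sketch:

```python
# Standard illustration (not from this entry): on X = R, the Dirac masses
# delta_n converge to 0 in the wide (weak-star) topology but not narrowly.
def integrate_dirac(f, n):
    """Integral of f against the Dirac mass delta_n placed at the point n."""
    return f(n)

# A compactly supported continuous test function (a 'tent' supported on [-1, 1]):
# the wide topology only tests against such f, and the integrals vanish for n >= 1.
def tent(x):
    return max(0.0, 1.0 - abs(x))

wide_values = [integrate_dirac(tent, n) for n in range(1, 6)]             # all 0
narrow_values = [integrate_dirac(lambda x: 1.0, n) for n in range(1, 6)]  # all 1
```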
If $X$ is compact and Hausdorff, the Riesz representation theorem shows that $\mathcal{M}^b$ is the dual of the space $C(X)$ of continuous functions. Under this assumption the narrow and weak$^\star$ topologies coincide with the usual weak$^\star$ topology of Banach space theory. Since in general $C(X)$ is not a reflexive space, it turns out that the narrow topology is in general weaker than the weak topology. A topology analogous to the weak$^\star$ topology is defined in the more general space $\mathcal{M}^b_{loc}$ of locally bounded measures, i.e. those measures $\mu$ such that for any point $x\in X$ there is a neighborhood $U$ with $\abs{\mu}(U)<\infty$.
Extreme Points of the Closed Unit Ball of a Normed Linear Space

Recall from the Extreme Subsets and Extreme Points of a Set in a LCTVS page that if $X$ is a locally convex topological vector space and $K$ is a nonempty convex subset of $X$ then $E \subseteq K$ is said to be an extreme subset of $K$ if:

1. $E$ is nonempty.
2. $E$ is closed.
3. $E$ is convex.
4. Whenever $x \in E$ with $x = \lambda u + (1 - \lambda )v$ where $u, v \in K$ and $\lambda \in (0, 1)$, then $u, v \in E$.

Furthermore, a point $x \in K$ is said to be an extreme point of $K$ if $\{ x \}$ is an extreme subset of $K$. The following proposition tells us that if $X$ is a normed linear space then the extreme points $x \in B_X$ of the closed unit ball of $X$ are such that $\| x \| = 1$.

Proposition 1: Let $X$ be a normed linear space and let $B_X$ denote the closed unit ball of $X$. If $x \in B_X$ is an extreme point of $B_X$ then $\| x \| = 1$.

Proof: Let $x \in B_X$ be an extreme point of $B_X$. Since $x \in B_X$ we have that $\| x \| \leq 1$.

Suppose that $\| x \| = 0$. Then $x = 0$. By taking $y \in B_X$ with $\| y \| = 1$, we can consider the line segment joining $y$ and $-y$ given by the equation:

$$\tilde{x} = (1 - \lambda) y + \lambda (-y), \quad \lambda \in [0, 1]$$

Observe that when $\lambda = 1/2$ we have that $\tilde{x} = 0 = x$. But clearly $y, -y \not \in \{ x \}$ and so $\{ x \}$ is not an extreme subset of $B_X$, i.e., $x$ is not an extreme point of $B_X$ - a contradiction. So the assumption that $\| x \| = 0$ is false. Thus $0 < \| x \| \leq 1$.

Now suppose that $0 < \| x \| < 1$. We can consider the line segment joining $0$ and $\displaystyle{\frac{x}{\| x \|}}$ given by the equation:

$$\tilde{x} = (1 - \lambda) \cdot 0 + \lambda \frac{x}{\| x \|}, \quad \lambda \in [0, 1]$$

Observe that when $\lambda = \| x \| \in (0, 1)$ we have that $\tilde{x} = x$. But $0, \frac{x}{\| x \|} \not \in \{ x \}$, again contradicting $x$ being an extreme point of $B_X$. Thus the assumption that $0 < \| x \| < 1$ is false.

Therefore, if $x$ is an extreme point of $B_X$ then $\| x \| = 1$. $\blacksquare$
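The second case of the proof can be checked numerically. A small sketch (ours) in $\mathbb{R}^2$ with the Euclidean norm: any $x$ with $0 < \|x\| < 1$ is recovered as a proper convex combination of $0$ and $x/\|x\|$, both of which lie in the closed unit ball.

```python
import math

# Numeric check of the proof's second case in R^2 with the Euclidean norm:
# any x with 0 < ||x|| < 1 is a proper convex combination of 0 and x/||x||,
# both of which lie in the closed unit ball, so x cannot be an extreme point.
def decompose(x):
    lam = math.hypot(x[0], x[1])              # lambda = ||x||, in (0, 1)
    u = (x[0] / lam, x[1] / lam)              # u = x/||x||, on the unit sphere
    v = (0.0, 0.0)                            # v = 0, an interior point
    recombined = (lam * u[0] + (1 - lam) * v[0],
                  lam * u[1] + (1 - lam) * v[1])
    return lam, u, v, recombined

lam, u, v, recombined = decompose((0.3, 0.4))
```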
A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence

1. Centre for Mathematical Biology, Mathematical Institute, University of Oxford, 24-29 St Giles', Oxford, OX1 3LB, United Kingdom
2. Centre for Mathematical Biology, Mathematical Institute, University of Oxford, 24-29 St Giles', Oxford OX1 3LB, United Kingdom

Lyapunov functions for SIR and SEIR compartmental epidemic models with nonlinear incidence of the form $\beta I^p S^q$ for the case $p \leq 1$ are constructed. Global stability of the models is thereby established.

Keywords: nonlinear incidence, endemic equilibrium state, global stability, direct Lyapunov method.

Mathematics Subject Classification: 92D30, 34D20.

Citation: Andrei Korobeinikov, Philip K. Maini. A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence. Mathematical Biosciences & Engineering, 2004, 1 (1) : 57-60. doi: 10.3934/mbe.2004.1.57
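For context, the nonlinear incidence term $\beta I^p S^q$ is easy to explore numerically. A minimal forward-Euler sketch (the parameter values, and the absence of vital dynamics, are our own illustrative assumptions, not taken from the paper):

```python
# A minimal forward-Euler sketch of an SIR model with nonlinear incidence
# beta * I**p * S**q, p <= 1, as in the abstract. Parameter values and the
# absence of vital dynamics are illustrative assumptions, not from the paper.
def simulate_sir(beta=0.5, gamma=0.2, p=0.8, q=1.0,
                 S0=0.99, I0=0.01, dt=0.01, steps=10_000):
    S, I, R = S0, I0, 0.0
    for _ in range(steps):
        incidence = beta * (I ** p) * (S ** q)   # nonlinear incidence term
        S, I, R = (S - incidence * dt,
                   I + (incidence - gamma * I) * dt,
                   R + gamma * I * dt)
    return S, I, R

S, I, R = simulate_sir()
```

Without births and deaths the total population $S+I+R$ is conserved exactly by this scheme, which makes a convenient sanity check.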
and rotations and reflections can be represented using geometric products of vectors. For vectors in the plane, the rotation of a vector v through the angle between vectors a and b can be represented by right multiplying by the product \hat{a}\hat{b}:

v_\mathrm{rot.} = v \hat{a}\hat{b}

(Reminder on notation: in these posts, lower case latin letters like a and b represent vectors, greek letters like \theta and \phi represent real numbers such as lengths or angles, and \hat{a} represents a unit vector directed along a, so that \hat{a}^2=1 and a = |a|\hat{a}. Juxtaposition of vectors represents their geometric product, so that ab is the geometric product between vectors a and b, and the geometric product is non-commutative, so the order of terms is important.)

and the reflection of v in any vector c can be represented as the "sandwich product"

v_\mathrm{refl.} = c v c^{-1} = \hat{c} v \hat{c}

Notice that none of these formulae make direct reference to any angle measures. But without angle measures, won't it be hard to state and prove theorems that are explicitly about angles? Not really. Relationships between directions that can be represented by addition and subtraction of angle measures can be represented just as well using products and ratios of vectors with the geometric product. And the geometric product is better at representing reflections, which can sometimes provide fresh insights into familiar topics. We'll take as our example the inscribed angle theorem, because it is one of the simplest theorems about angles that doesn't seem intuitively obvious (at least, it doesn't seem obvious to me…).

In previous posts, I have shown how to visualize both the dot product and the wedge product of two vectors as parallelogram areas. In this post, I will show how the dot product and the wedge product are related through a third algebraic product: the geometric product.
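The rotation and reflection formulas above can be prototyped in a few lines. A sketch (ours, not the author's code) using the standard fact that in 2D the even subalgebra behaves like the complex numbers: encode a vector (x, y) as the complex number x + yi, and the geometric product of two vectors corresponds to conj(a)·b, whose real part is a·b and whose imaginary part is the a∧b coefficient.

```python
# Sketch (ours) of the 2D geometric product via the complex-number model:
# a vector (x, y) is encoded as x + yi; the geometric product ab of two
# *vectors* corresponds to conj(a) * b (real part a.b, imaginary part a^b).
def geometric_product(a, b):
    return a.conjugate() * b

def rotate(v, a, b):
    """v_rot = v (a_hat b_hat): rotate v through the angle from a to b."""
    rotor = geometric_product(a / abs(a), b / abs(b))
    return v * rotor

def reflect(v, c):
    """v_refl = c_hat v c_hat: reflect v in the line along c.

    In the complex model the vector sandwich c_hat v c_hat works out
    to c_hat**2 * conj(v)."""
    c_hat = c / abs(c)
    return c_hat * c_hat * v.conjugate()
```

For instance, rotating e1 (encoded as 1) through the angle from e1 to e2 gives e2 (encoded as i), and reflecting e2 in e1 gives -e2.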
Along the way, we will see that the geometric product provides a simple way to algebraically model all of the major geometric relationships between vectors: rotations, reflections, and projections. Before introducing the geometric product, let's review the wedge and dot products and their interpretation in terms of parallelogram areas.

Given two vectors, a and b, their wedge product, a \wedge b, is straightforwardly visualized as the area of the parallelogram spanned by these vectors. Recall that algebraically, the wedge product a \wedge b produces an object called a bivector that represents the size and direction (but not the shape or location) of a plane segment, in a similar way that a vector represents the size and direction (but not the location) of a line segment.

The dot product of the same two vectors, a \cdot b, can be visualized as a parallelogram formed by one of the vectors and a copy of the other that has been rotated by 90 degrees. Well, almost. When I originally wrote about this area interpretation of the dot product, I didn't want to get into a discussion of bivectors, but once you have the concept of bivector as directed plane segment, it's best to say that what this parallelogram depicts is not quite the dot product, a \cdot b, which is a scalar (real number), but rather the bivector (a \cdot b) I where I is a unit bivector. The scalar a \cdot b scales the unit bivector I to produce a bivector with magnitude/area a \cdot b. It's hard to draw a scalar on a piece of paper without some version of this trick. Once you're looking for it, you'll see that graphical depictions of real numbers/scalars almost always show how they scale some reference object. It could be a unit segment of an axis or a scale bar; here it is instead a unit area I.
Examining the way that the dot product and the wedge product can be represented by parallelograms suggests an algebraic relationship between them:

(a \cdot b) I = b \wedge a_\perp

where a_\perp represents the result of rotating a by 90 degrees. Since the dot product is symmetric, we also have

(a \cdot b) I = a \wedge b_\perp

To really understand this relationship, we'll need an algebraic way to represent how a_\perp is related to a; in other words, we'll need to figure out how to represent rotations algebraically.

A visual way of expressing that three vectors, a, b, and c, form a triangle is [a head-to-tail triangle diagram], and an algebraic way is

a + b + c = 0

In a previous post, I showed how to generate the law of cosines from this vector equation (solve for c and square both sides) and that this simplifies to the Pythagorean theorem when two of the vectors are perpendicular. In this post, I'll show a similarly simple algebraic route to the law of sines. In understanding the law of cosines, the dot product of two vectors, a \cdot b, played an important role. In understanding the law of sines, the wedge product of two vectors, a \wedge b, will play a similarly important role.

I recently posted a geometry puzzle about an autonomous lawn mower steered by a rope and a peg. How much rope remains unspooled from the peg when the mower collides with it? If you haven't seen the puzzle yet, go check out last week's post and give it a try. One of the joys of being an engineer at Desmos is that my collaborators occasionally bring me math problems that they need to solve to make something that they're building work right. I love tricky little geometry problems, and I'd solve them as a hobby if they weren't part of my job. When it helps our team get on with the work, so much the better. In today's post, I'd like to share one of these puzzles that came up while building Lawnmower Math, and invite you to solve it yourself.
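The dot/wedge relationship (a · b) I = b ∧ a_⊥ stated above is easy to spot-check numerically. A sketch (ours), representing the 2D wedge product by its single coefficient on the unit bivector I and taking a_⊥ to be a rotated 90 degrees counterclockwise:

```python
# Numeric check (ours) of the identity (a.b) I = b ^ a_perp in 2D, with the
# wedge product represented by its coefficient on the unit bivector I.
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def wedge(a, b):
    return a[0] * b[1] - a[1] * b[0]

def perp(a):
    """a rotated by 90 degrees counterclockwise."""
    return (-a[1], a[0])

a, b = (2.0, 1.0), (0.5, 3.0)
lhs = dot(a, b)            # coefficient of I on the left-hand side
```

Both b ∧ a_⊥ and the symmetric version a ∧ b_⊥ reproduce the dot product.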
I have a confession to make: I have always found symbolic algebra more intuitive than geometric pictures. I think you're supposed to feel the opposite way, and I greatly admire people who think and communicate in pictures, but for me, it's usually a struggle. For example, I have seen many pictorial "proofs without words" of the Pythagorean Theorem. I find some of them to be quite beautiful, but I also often find them difficult to unpack, and I never really think "oh, I could have come up with that myself."

I like this proof a lot. It's fairly simple to interpret (more so than some of the other examples in the genre), and quite convincing. We have c^2 = a^2 + b^2 because, along with the same four copies of a triangle, both sides of this equation fill up an identical area. Even so, it's odd to me that this diagram involves four copies of the triangle. This is one of those "I couldn't have come up with this myself" stumbling blocks.

For comparison, I'll give an algebraic proof (here and throughout I am using the word "proof" quite loosely; forgive me, I am a physicist, not a mathematician) of the Pythagorean theorem using vectors. The condition that three vectors a, b, and c traverse a triangle is that their sum is zero:

a + b + c = 0

Solving for c gives

c = -(a+b)

and then dotting each side with itself and distributing gives

\begin{aligned}c \cdot c &= \left(a+b\right) \cdot \left(a + b\right) \\&= a \cdot a + a \cdot b + b \cdot a + b \cdot b \\&= a^2 + b^2 + 2 a \cdot b\end{aligned}

The condition that vectors a and b form a right angle is just that a \cdot b = 0, and in that special case, we have the Pythagorean theorem:

c^2 = a^2 + b^2

The thing I like about this algebraic manipulation is that it is a straightforward application of simple rules in sequence.
There are dozens of ways to arrange 4 congruent triangles on a page (probably much more than dozens, really), but the algebra feels almost inevitable. (It does take practice to get a feel for which rules to apply to achieve a given goal, but there are really only a few rules to try: distributivity, commutativity, associativity, linearity over scalar multiplication, and that's about it.) Write down the algebraic condition that vectors forming the sides of any triangle must satisfy. We're interested in a function of one of the side vectors, c^2, so we solve for c and apply the function to both sides. We transform the right hand side by applying distributivity of the dot product across addition, and commutativity of the dot product, i.e. a \cdot b = b \cdot a. Right triangles in particular are a simplifying special case where one term drops out.

I also think it's important that the algebraic manipulation embeds the Pythagorean theorem as a special case of a relationship that holds for all triangles: the Law of Cosines. (The following diagram shows the relationship between the vector form of the law of cosines, c^2 = a^2 + b^2 + 2 a \cdot b, and the angle form of the law of cosines, c^2 = a^2 + b^2 - 2 |a||b|\cos C. In the angle form, C is an interior angle, but in the vector form, if a \cdot b = |a||b|\cos(\theta_{ab}), then \theta_{ab} is an exterior angle. This is the origin of the difference in sign of the final term between the two forms.) If you have a theorem about right triangles, then you'd really like to know whether it's ever true for non-right triangles, and how exactly it breaks down in cases where it isn't true. Perhaps there's a good way to deform Pythagoras' picture to illustrate the law of cosines, but I don't believe I've seen it. For these reasons, I've generally been satisfied with the algebraic way of thinking about the Pythagorean theorem.
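The vector form of the law of cosines, and its Pythagorean special case, can be spot-checked numerically. A sketch (ours) for a triangle closed by a + b + c = 0:

```python
# Spot-check (ours) of the vector law of cosines c.c = a.a + b.b + 2 a.b
# for a triangle closed by a + b + c = 0.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a = (3.0, 1.0)
b = (-1.0, 2.0)
c = (-(a[0] + b[0]), -(a[1] + b[1]))     # close the triangle

lhs = dot(c, c)
rhs = dot(a, a) + dot(b, b) + 2 * dot(a, b)

# Pythagorean special case: perpendicular sides, so a.b = 0
p, q = (3.0, 0.0), (0.0, 4.0)
r = (-(p[0] + q[0]), -(p[1] + q[1]))
```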
So satisfied, I recently realized, that I've barely even tried to think about what pictures would "naturally" illustrate the algebraic manipulation. In the remainder of this post, I plan to remedy this complacency.

Robert Vanderbei has written a beautiful series of articles and talks about a method for finding the radius of the earth based on a single photograph of a sunset over a large, calm lake. Vanderbei's analysis is an elegant and subtle exercise in classical trigonometry. In this post, I would like to present an alternative analysis in a different language: Geometric Algebra. I believe that geometric algebra is a more powerful system for formulating and solving trigonometry problems than the classical "lengths and angles" approach, and it deserves to be better known. Vanderbei's sunset problem is simple to understand and challenging to solve, so it makes a nice benchmark.

Here's Vanderbei's sunset problem. If the earth was flat, photographs of the sun setting over water would look like this: [diagram] Notice that the reflection dips just as far below the horizon as the sun peaks above it. Actual photographs of the sun setting over calm water (like Vanderbei's) look more like this: [diagram] (Update: I should have been more careful to note that most photographs of sunsets over water actually don't look like Vanderbei's photograph, or my diagram, because of waves and atmospheric effects, and that sensor saturation artifacts make it hard to interpret images like this. Reproducing Vanderbei's image may be somewhere between hard and impossible. More below.)

Kakuro is a number puzzle that is a bit like a combination between Sudoku and a crossword puzzle. Imagine a crossword puzzle where, instead of words, blocks of boxes are filled with combinations of digits between 1 and 9, and instead of clues about words, you are given sums that a block of digits must add up to.
When you’re solving a Kakuro puzzle, it’s helpful to be able to generate all the combinations of m different digits that add up to a given sum. A recent thread on the julia-users mailing list considered how to implement this task efficiently on a computer. In this post, I’d like to show a progression of a few different implementations of the solution of this same problem. I think the progression shows off one of Julia’s core strengths: in a single language, you are free to think in either a high level way that is close to your problem domain and easy to prototype, or a low level way that pays more attention to the details of efficient machine execution. I don’t know any other system that even comes close to making it as easy to switch back and forth between these modes as Julia does. Attention Conservation Notice: If you’re looking for information on how to solve Kakuro with a computer, you should probably look elsewhere. This post is a deep dive into a tiny, tiny subproblem. On the other hand, I’ll show how to speed up the solution of this tiny, tiny subproblem by a factor of either ten thousand or a million, depending how you count, so if that sounds fun you’re in the right place.
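The thread itself discussed Julia implementations; as a plain-language baseline for the task being optimized, here is the specification in Python (our sketch, not code from the thread):

```python
from itertools import combinations

# The Kakuro subproblem from the thread, sketched for reference: all sets of
# m distinct digits 1..9 that add up to a given sum.
def kakuro_combinations(m, total):
    return [c for c in combinations(range(1, 10), m) if sum(c) == total]
```

For example, two distinct digits summing to 4 can only be {1, 3}, and two digits summing to 17 can only be {8, 9}. The faster implementations in the thread compute the same sets while avoiding the brute-force filter over all C(9, m) candidates.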
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far I recall, being a long term powder gamer myself, powder game does not really have a diffusion like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and subjected to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule. 
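The two-state rule sketched at the end of that exchange ("A turns into B after 10 steps; B adjacent to an A turns back into A") is easy to prototype. A minimal 1-D cellular-automaton sketch, with all details (grid shape, neighbourhood, age limit) assumed for illustration:

```python
# A minimal 1-D prototype (all details assumed) of the rule sketched above:
# 'A' cells turn into 'B' after AGE_LIMIT steps; 'B' cells adjacent to an
# 'A' turn back into fresh 'A' cells.
AGE_LIMIT = 10

def step(cells):
    """cells: list of (kind, age) pairs, kind in {'A', 'B'}."""
    out = []
    for i, (kind, age) in enumerate(cells):
        if kind == 'A':
            out.append(('B', 0) if age + 1 >= AGE_LIMIT else ('A', age + 1))
        else:
            neighbours = [cells[j][0] for j in (i - 1, i + 1) if 0 <= j < len(cells)]
            out.append(('A', 0) if 'A' in neighbours else ('B', 0))
    return out

row = [('B', 0), ('A', 0), ('B', 0)]
next_row = step(row)
```

Iterating this on a larger grid would show whether the rule really produces reaction-diffusion-like fronts, as conjectured in the chat.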
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ...

Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a...

Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...

@ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)

What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? A microphone with a parabolic reflector or laser-reflection listening devices are available on the market, but are there any other devices on the planet which should allow ...

and endless whiteboards get doodled with boxes, grids, circled red markers and some scribbles. The documentary then showed a bird's eye view of the farmlands (pardon my sketchy drawing skills...). Most of the farmland is tiled into grids. Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid.
They are the index arrays and they notate the range of indices of the tensor array In some tiles, there's a swirl of dirt mound, they represent components with nonzero curl and in others grass grew Two blue steel bars were visible laying across the grid, holding up a triangle pool of water Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area I walked towards the boys and expressed interest in joining their project. They then said that I would be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be a tensor of a different order Presumably one can formulate it (using an example of a 4th order tensor) as follows: $$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range that the $\alpha,\beta$ indices are drawn from.
For example to encode a patch of nonzero curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$ However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully about whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only on PSE, and only for a year. That he got. After that his cage was extended to a 10-year network-wide one; that couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although this is no longer about Ron Maimon, I can't see the meaning of "campaign" being well-defined enough here. And yes, it is a little bit of a source of fear for me that maybe my behavior could also be read as "campaigning" for my own caging.
Below is my modified answer containing a complete bijection between the above sequences and Dyck paths: Let $a = (a_1,\ldots,a_n)$ be a sequence of $n$ integers. $a$ satisfies Property $A$ if it satisfies your two conditions above, namely: A1: $a_{k+1} \geq a_k-1$; A2: if $a$ contains an $i \geq 1$, then the first $i$ lies between two $i-1$'s. $a$ satisfies Property $B$ if: B1: $a_{k+1} \leq a_k+1$; B2: if $a$ contains an $i \geq 1$, then the first $i$ lies between two $i-1$'s. Interchanging neighbours that do not satisfy Property B1 does not interfere with Property A2 = B2 and should provide a bijection between sequences with Property $A$ and those with Property $B$. E.g. for $n=6$ there are $8$ Property $A$ sequences that do not satisfy Property B1:$$001021, 011021, 010021, 010210, 010211, 010212, 012102, 010221.$$Interchanging $0$'s and $2$'s where necessary gives$$001201, 011201, 012001, 012010, 012011, 012012, 012120, 012201.$$ $a$ satisfies Property $C$ if: C1: $a_1 = 0$; C2: $a_{k+1} \leq a_k+1$; C3: if $a$ contains an $i\geq 1$, then there is an $i-1$ after the first $i$. Properties $B$ and $C$ are equivalent, since B2 implies that $a_1 = 0$. This then implies that every $i \in a$ has an $i-1$ somewhere to its left, and we can drop this part of condition B2 to obtain Property C3. $a$ satisfies Property $D$ if it satisfies C1 and C2. Sequences with Property $D$ are item $u$ on Stanley's list, and are in natural bijection with Dyck paths (which are here lattice paths from $(0,0)$ to $(n,n)$ that never go below the diagonal $x=y$) by sending a path $D$ to the sequence $a$ where $a_k$ is the number of complete boxes between $D$ and the diagonal at height $k$ (this is a well-known bijection). Since Property $C$ is strictly stronger than Property $D$, we have now reached an embedding of sequences of length $n$ with Property $A$ into Dyck paths of length $2n$. Next, we apply the ''zeta map'' $\zeta$ as defined in Jim Haglund's book on $q,t$-Catalan numbers on page 50.
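As a small aside (my own sketch, not part of the answer above), the well-known bijection from Property $D$ sequences to Dyck paths can be written in a few lines of Python; the function name is hypothetical:

```python
def dyck_path(a):
    """Map an area sequence a (with a[0] == 0 and a[k+1] <= a[k] + 1,
    i.e. Property D) to a Dyck path, returned as a string of N (north)
    and E (east) steps: row k of the path leaves exactly a[k] complete
    boxes between the path and the diagonal."""
    n = len(a)
    steps = []
    for k in range(n):
        nxt = a[k + 1] if k + 1 < n else 0
        # one north step for row k, then enough east steps to drop
        # from a[k] boxes in this row to a[k+1] boxes in the next
        steps.append("N" + "E" * (a[k] - nxt + 1))
    return "".join(steps)
```

For instance, the staircase-free sequence $(0,0,0)$ maps to $NENENE$, the path hugging the diagonal.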
Given a sequence $a = (a_1,\ldots,a_n)$ satisfying Property $D$, this map returns a Dyck path as follows: first, build an intermediate Dyck path (the "bounce path") consisting of $d_1$ north steps, followed by $d_1$ east steps, followed by $d_2$ north steps and $d_2$ east steps, and so on, where $d_i$ is the number of $i-1$'s within $a$. For example, given $a = (0,1,2,2,2,3,1,2)$, we build the path $NE\ NNEE\ NNNNEEEE\ NE$ (this is the dashed path on the right of Figure 3 in the reference). Next, the rectangles between two consecutive peaks are filled. Observe that the rectangle between the $k$th and the $(k+1)$st peak must be filled by $d_k$ east steps and $d_{k+1}$ north steps. In the above example, the rectangle between the second and the third peak must be filled by $2$ east and $4$ north steps, the $2$ being the number of $1$'s in $a$, and the $4$ being the number of $2$'s. To fill such a rectangle, scan through the sequence $a$ from left to right, and add an east or a north step whenever you see a $k-1$ or a $k$, respectively. So to fill the $2 \times 4$ rectangle, we look for $1$'s and $2$'s in the sequence and see $122212$, so this rectangle gets filled with $ENNNEN$. The complete path we obtain is thus $NENNENNNENEEENEE$. This map sends the dinv statistic (that is, the number of pairs $k<\ell$ with $a_k-a_\ell \in \{0,1\}$) to the area, where the area below the bounce path comes from the pairs with $a_k-a_\ell = 0$, and the parts in between from the pairs with $a_k-a_\ell = 1$. Moreover, an inner touch point is reached if and only if, for some $i$, all $i$'s come after all $i-1$'s within $a$. In the example, this happens only for the $0$'s and $1$'s, thus giving one touch point. Given the last observation, we see that $a$ satisfies Property C3 if and only if $\zeta(a)$ touches the diagonal only at the very beginning and at the very end, and nowhere in between.
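To make the construction concrete, here is a short Python sketch of the zeta map exactly as described above, together with the dinv and area statistics (my own code, not taken from the cited reference):

```python
def zeta(a):
    """Zeta map sketch: a sequence a with Property D -> a Dyck path (N/E string).
    d[i] below counts the i's in a, so d[i] equals d_{i+1} in the text.
    The rectangle for level k is filled by scanning a left to right and
    emitting E for each k and N for each k+1."""
    m = max(a)
    d = [a.count(i) for i in range(m + 1)]
    path = ["N"] * d[0]                      # north steps of the first peak
    for k in range(m):
        path += ["E" if x == k else "N" for x in a if x in (k, k + 1)]
    path += ["E"] * d[m]                     # east steps of the last peak
    return "".join(path)

def dinv(a):
    """Number of pairs k < l with a[k] - a[l] in {0, 1}."""
    return sum(1 for k in range(len(a)) for l in range(k + 1, len(a))
               if a[k] - a[l] in (0, 1))

def area(path):
    """Number of complete boxes between a Dyck path and the diagonal."""
    x = y = boxes = 0
    for step in path:
        if step == "N":
            y += 1
        else:
            boxes += y - x - 1
            x += 1
    return boxes
```

Running this on the example sequence reproduces the path $NENNENNNENEEENEE$ and confirms that dinv is sent to area.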
We have thus reached a bijection between sequences satisfying Property $A$ and Dyck paths that have no inner returns to the diagonal. Finally, stripping off the first north and the last east step yields a Dyck path of length $2n-2$, and we have obtained a complete bijection. In order to make every step in my bijection visible, I have provided a Sage worksheet implementing each step for a better understanding: http://sage.lacim.uqam.ca/home/pub/21/ If anything is unclear or wrong, please let me know so I can try to fix it...
Path Connectedness of Open and Connected Sets in Euclidean Space Theorem 1: If $A$ is an open and connected subset of $\mathbb{R}^n$ (with the usual topology) then $A$ is path connected. Proof: Let $\mathbb{R}^n$ have the usual topology and let $A \subseteq \mathbb{R}^n$ be an open and connected subset of $\mathbb{R}^n$. Let $x, y \in A$ and let $\mathcal R$ denote the collection of all open balls contained in $A$: \begin{align} \quad \mathcal R = \{ B(c, r) : c \in A, r > 0, B(c, r) \subseteq A \} \end{align} Since $A$ is open and since $x \in A$ there exists an open ball in $\mathcal R$ that contains $x$; call it $B_1 = B(x, r_x)$. Set $B_2$ to be the union of all open balls $B \in \mathcal R$ such that $B \cap B_1 \neq \emptyset$: \begin{align} \quad B_2 = \bigcup \{ B \in \mathcal R : B \cap B_1 \neq \emptyset \} \end{align} In general, for all $i \in \mathbb{N}$, $i > 1$, set $B_i$ to be the union of all open balls $B \in \mathcal R$ such that $B \cap B_{i-1} \neq \emptyset$: \begin{align} \quad B_i = \bigcup \{ B \in \mathcal R : B \cap B_{i-1} \neq \emptyset \} \end{align} We claim that every ball $B \in \mathcal R$ is contained in some $B_i$. Suppose not. Let $\displaystyle{V = \bigcup_{i=1}^{\infty} B_i}$ and let $\displaystyle{W = \bigcup_{B \in \mathcal R^*} B}$ where $\displaystyle{\mathcal R^* = \{ B \in \mathcal R : B \not \subseteq B_i \: \forall i \in \mathbb{N} \}}$. Then $V$ and $W$ are both open sets since they are unions of open balls. Moreover, $V, W \neq \emptyset$, $V \cap W = \emptyset$, and $A = V \cup W$. So, $\{ V, W \}$ is a separation of $A$, which contradicts $A$ being connected. So for every $B \in \mathcal R$ there exists an $i \in \{1, 2, ... \}$ such that $B \subseteq B_i$. In particular, there exists an $n \in \{1, 2, ... \}$ such that $y \in B_n$.
Moreover, $\displaystyle{A = V = \bigcup_{i=1}^{\infty} B_i}$ is such that $B_i \cap B_{i+1} \neq \emptyset$ for all $i \in \{1, 2, 3, ... \}$. So, from the theorem presented on the Path Connectivity of Countable Unions of Connected Sets page we have that $V = A$ is path connected. $\blacksquare$
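The chain-of-balls idea behind the proof can be illustrated computationally. Below is a hedged Python sketch (my own simplification to finitely many balls in the plane, with hypothetical function names): a breadth-first search over the "overlap graph" of open balls finds a chain linking the ball containing $x$ to the ball containing $y$; concatenating straight segments through the overlaps then gives the desired path.

```python
from collections import deque

def overlaps(b1, b2):
    """Two open balls ((cx, cy), r) intersect iff their center distance
    is strictly less than the sum of the radii."""
    (c1, r1), (c2, r2) = b1, b2
    return (c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2 < (r1 + r2) ** 2

def ball_chain(balls, start, end):
    """BFS over the overlap graph of balls: returns a list of indices
    of pairwise-consecutive overlapping balls linking balls[start] to
    balls[end], or None if they lie in different components of the union."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        if i == end:
            chain = []
            while i is not None:
                chain.append(i)
                i = prev[i]
            return chain[::-1]
        for j in range(len(balls)):
            if j not in prev and overlaps(balls[i], balls[j]):
                prev[j] = i
                queue.append(j)
    return None
```

A `None` result plays the role of the separation $\{V, W\}$ in the proof: the union of balls is then disconnected.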
Vector Subspace Sums Examples 2 Recall from the Vector Subspace Sums page that if $U_1$, $U_2$, …, $U_m$ are all vector subspaces of the vector space $V$, then the sum of $U_1$, $U_2$, …, $U_m$, denoted $U_1 + U_2 + ... + U_m$, is defined to be the set of all possible sums $u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for each $i = 1, 2, ..., m$, that is: \begin{align} \quad U_1 + U_2 + ... + U_m = \{ u_1 + u_2 + ... + u_m : u_i \in U_i \: \text{for each} \: i = 1, 2, ..., m \} \end{align} Furthermore, we said that $U_1$, $U_2$, …, $U_m$ form a direct sum of the vector space $V$, written $V = U_1 \oplus U_2 \oplus ... \oplus U_m$, if every vector $v \in V$ can be written uniquely as $v = u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for each $i = 1, 2, ..., m$. We will now look at some more examples regarding vector subspace sums and vector subspace direct sums. Example 1 Prove by giving a counterexample that if $U_1 + U_2 = U_1 + U_3$ then $U_2$ need not equal $U_3$. Consider the following subspaces of $\mathbb{R}^2$: \begin{align} \quad U_1 = \mathbb{R}^2, \quad U_2 = \{ (x, 0) : x \in \mathbb{R} \}, \quad U_3 = \{ (0, y) : y \in \mathbb{R} \} \end{align} It's not hard to verify that each of $U_1$, $U_2$ and $U_3$ is indeed a subspace of $\mathbb{R}^2$. Furthermore $U_1 + U_2 = \mathbb{R}^2$ since $U_1 = \mathbb{R}^2$, and $U_1 + U_3 = \mathbb{R}^2$ for the same reason. However, clearly $U_2 \neq U_3$. Take a vector $(a, 0) \in U_2$ where $a \neq 0$. Then $(a, 0) \not \in U_3$. Example 2 Consider the subspace $U = \{ (c, c, c, d) \in \mathbb{R}^4 : c, d \in \mathbb{R} \}$ of $\mathbb{R}^4$. Find a subspace $W$ of $\mathbb{R}^4$ such that $\mathbb{R}^4 = U \oplus W$. Consider the subspace $W = \{ (0, a, b, 0) \in \mathbb{R}^4 : a, b \in \mathbb{R} \}$. Then for $(x, y, z, w) \in \mathbb{R}^4$ we have that: \begin{align} \quad (x, y, z, w) = \underbrace{(x, x, x, w)}_{\in U} + \underbrace{(0, y - x, z - x, 0)}_{\in W} \end{align} Clearly $U \cap W = \{ 0 \}$. To show this, consider a vector in $U \cap W$. Then we have that: \begin{align} \quad (c, c, c, d) = (0, a, b, 0) \end{align} Note that then $c = 0$, $c = a$, $c = b$ and $d = 0$. Thus $a = b = c = d = 0$, which implies that the vector we were considering was the zero vector.
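Example 2's decomposition can be sanity-checked numerically. The following Python sketch (my own, with a hypothetical function name) splits an arbitrary vector of $\mathbb{R}^4$ into its $U$ and $W$ parts:

```python
def decompose(v):
    """Split v = (x, y, z, w) in R^4 as u + w2 with
    u  = (x, x, x, w)         in U = {(c, c, c, d)} and
    w2 = (0, y - x, z - x, 0) in W = {(0, a, b, 0)}."""
    x, y, z, w = v
    u = (x, x, x, w)
    w2 = (0, y - x, z - x, 0)
    return u, w2
```

Since $U \cap W = \{0\}$, this is the unique such splitting, which is exactly what the direct sum $\mathbb{R}^4 = U \oplus W$ asserts.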
What you are looking for can be achieved by wrapping the labels in HoldForm, or if you prefer, HoldForm@InputForm. For example, here is a plot that combines both labeling issues you mentioned: f = x^2; Plot[f, {x, -2, 2}, AxesLabel -> {x, HoldForm[InputForm[E = f]]}] The two issues you mention are indeed separate: To get f instead of $x^2$ you should use HoldForm, but that still allows the display of TraditionalForm shorthand forms for built-in symbols such as E (which is the base of the natural logarithm but is pretty-printed as ⅇ). To prevent the replacement of E by ⅇ, InputForm can be used. As you noticed, using strings in labels (though sometimes perfectly fine) has undesirable effects on the font and requires more "finger-painting" with styles. The HoldForm approach is easier to use and the code is easier to read when labels get complicated. See also this related question. To expand on this topic: Sometimes you need more complicated labels that require "two-dimensional" typesetting, as in $\psi = \frac{1}{2}\int f(x) \, \mathrm{d}x$, see this image: Edit: how to get formatting into labels in general For output like the above, the essential ingredient is that the expression should be wrapped in a FormBox. Strings aren't made for two-dimensional display, but Mathematica has a way of sneaking FormBoxes into strings: see the documentation. Therefore, you can get a two-dimensional formula into a plot label either using HoldForm or a string. Using HoldForm, I got the formatting in the image by doing the following: Create the formula in a TraditionalForm environment. This could e.g. be in a Text cell by starting an equation with Ctrl-( and ending with Ctrl-). Copy this formula from within that math inset. Lay out your plot by typing something like Plot[f, {x, -2, 2}, AxesLabel -> {x, HoldForm[ ]}] In the blank space that I have left inside the HoldForm[ ], paste the copied equation.
You will be asked if you want to wrap the TraditionalForm expression in a FormBox, and the answer is yes. Provided that the pasted expression obeys Mathematica syntax, you should now be able to evaluate the plot cell and get the output shown above. Now in some cases you want to label a graph with a two-dimensional formula that doesn't obey Mathematica syntax, and in that case you would replace step 4 above by this: Plot[f, {x, -2, 2}, AxesLabel -> {x, ""}] Instead of HoldForm, I have now left an empty string "" in the label. Now proceed as above with step 4, for example using an equation like -(\[HBar]^2/(2m))\[PartialD]^2 \[Psi] entered in a math inset (in a text cell as in step 1). If you tried this with HoldForm, it would give a syntax error because the \[PartialD] is being used in a mathematically acceptable but syntactically incorrect way. Edit 2: the fastest way The way I described copying and pasting of TraditionalForm into strings was based on my habits, but it's actually not the fastest. I should adjust my habits to the following: In the code for your plot, type a single string placeholder letter for your label, such as "y". Using the mouse, highlight the y in your string and go to the menu item Cell > Convert To > TraditionalForm (or use the keyboard shortcut). This creates the all-important FormBox. Since this box is invisible, you now have to use the arrow keys or mouse to get inside this FormBox, right next to the placeholder y. From here, you can start typing any arbitrary formula, which will then be typeset as TraditionalForm in the plot label. So in conclusion, HoldForm is a very direct way of getting valid Mathematica expressions into labels without expanding them, and strings should be used in combination with the FormBox wrapper method above to typeset arbitrarily complicated labels.