Named data packets cut both ways in CCN. Perhaps their most beneficial quality is that they shift the communication emphasis from end hosts and fixed addresses to content. From a security perspective, however, this transition carries a steep cost. Application names, being often-overloaded URIs, can leak a significant amount of information about the data being requested, far beyond what is leaked by an IP packet's address and port tuple. (I've discussed this in a past post.) There are at least two ways to solve this problem: (1) wrap application names with encryption and carry them to their destination via unrelated network names (or locators), which is basically the Tor [1] equivalent for CCN [2]; or (2) obfuscate application names into flat (i.e., not wrapped) network names such that they can be compared to other network names without leaking the unobfuscated form. Since (1) has been around for some time, I'm focusing on (2) in this post.

The description I gave might not be very clear, so let me give an example. Assume you are a consumer and wish to obtain the data named /foo/bar. Moreover, assume you had the following two functions at your disposal: a function Hash that maps an application name $N$ to a network name $N' = $ Hash($N$), and a function Compare that decides whether two network names were derived from the same application name. Moreover, it should be difficult to learn the application name $N$ from its network name $N' = $ Hash($N$). I claim that you can build the same packet-forwarding logic using these two functions, albeit at a significant performance cost. But why would one bother? For one thing, the application name is no longer revealed to the network. This lets forwarders act on names without having any encryption context and without using encapsulation as in case (1) above.

From a theoretical perspective, such functions allow you to build name equivalence classes, that is, sets of different representations of the same name. We would like these equivalence classes to be formed such that, given two distinct names $N_1$ and $N_2$, the probability that Hash($N_1$) is in the equivalence class of Hash($N_2$) is negligibly small. I'll now present a trivial construction for these name equivalence classes by instantiating the Hash and Compare functions. The core idea is based on randomized hashing of the name and a deterministic comparison function.

To begin, let $N = N_1,\dots,N_k$ be a name of $k$ components that we want to map to an element of its equivalence class. Next, let $G$ be a finite cyclic group in which the discrete log problem is hard, and let $g \in G$ be a generator of this group. Assume that there is some entity responsible for comparing obfuscated names, e.g., a router. This entity generates a secret key $x \overset{\$}{\gets} \mathbb{Z}_{|G|}$ and the corresponding public key $p = g^x$, and gives the public key to other entities who wish to send names to be compared, e.g., consumers. Using these parameters, the Hash and Compare functions can be built as follows.

Hash($N$): For each component index $i = 1,\dots,k$, compute $N_i'$ as the cryptographic hash of the first $i$ components. Then sample a random $s \overset{\$}{\gets} \mathbb{Z}_{|G|}$. For each transformed segment $N_i'$, compute the two values $\alpha_i = p^s$ and $\beta_i = g^{N_i' + s}$, where $+$ is addition in the exponent, not concatenation. Return the list of pairs $(\alpha_1,\beta_1), \dots, (\alpha_k, \beta_k)$.

Compare($N_1'$, $N_2'$): Return False if the names are of different lengths. Let segment $i$ of these names be denoted $(\alpha_{i,1},\beta_{i,1})$ and $(\alpha_{i,2},\beta_{i,2})$.
Using these values, compute $p_1 = \beta_{i,1}^{x}\,\alpha_{i,1}^{-1}$ and $p_2 = \beta_{i,2}^{x}\,\alpha_{i,2}^{-1}$ for each segment $i$. If $p_1 = p_2$ for every segment, then it must be the case that $N_1$ and $N_2$ are in the same equivalence class, i.e., they are "hashed" versions of the same name. To prove this statement, let's do the actual computation. I'll use the notation $s_1$ and $s_2$ to refer to the distinct $s$ values for $N_1$ and $N_2$, respectively. Assuming that $N_{i,1}' = N_{i,2}'$ for each segment $i=1,\dots,k$, it follows that $$\beta_{i,1}^{x}\,\alpha_{i,1}^{-1} = g^{x(N_{i,1}' + s_1)}\, g^{-x s_1} = g^{x N_{i,1}'} = g^{x N_{i,2}'} = g^{x(N_{i,2}' + s_2)}\, g^{-x s_2} = \beta_{i,2}^{x}\,\alpha_{i,2}^{-1},$$ so the per-name randomness cancels and the two obfuscated names compare as equal.

I implemented this basic scheme using the group of integers modulo some sufficiently large prime $p$. (We could use a better group in which the DL problem is hard, but $\mathbb{Z}_p$ works to illustrate the idea.) The code proceeds as follows. First, it fixes $p$ and then generates a generator $g$ and a private and public key pair (denoted $x$ and $p$ in the previous section). Then the secret to be obfuscated is sampled at random along with the two random values $s_1$ and $s_2$. Finally, the program hashes both values, compares them for equality, and prints the result of this comparison. A runnable listing is given below so you can check the construction yourself.

While it's great that we now have some way to compare names without revealing them, this general approach presents some challenges. For instance, in vanilla CCN we rely on names as given (encoded) to perform matching and indexing into data structures such as the FIB, PIT, and CS; that is, we use exact matching for names. However, since the "verifier" (router) can only determine equality between a pair of obfuscated names by invoking the Compare function on each entry, the time to index these data structures is necessarily linear in the number of entries. I'm currently working on a paper that will show this result formally. The moral of the story is that wrapping names à la Tor is the best we can do if we wish to hide information within packet headers.
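Since the listing referenced above is not reproduced in this text, here is a minimal Python sketch of the same construction, working in the multiplicative group modulo the Mersenne prime 2^107 - 1. The prime, the generator, the hash-to-exponent encoding, and the example names /foo/bar and /foo/baz are illustrative choices of mine rather than the post's original parameters.

```python
# A sketch of the randomized name-hashing scheme described above, over Z_p^*.
# All parameters below (prime, generator, example names) are illustrative.
import hashlib
import secrets

p = 162259276829213363391578010288127   # the Mersenne prime 2^107 - 1 (toy-sized)
g = 5                                   # assumed generator of a large subgroup
x = secrets.randbelow(p - 1) + 1        # router's secret key
pk = pow(g, x, p)                       # public key p = g^x handed to consumers

def component_digest(components, i):
    """N_i': cryptographic hash of the first i name components, as an exponent."""
    joined = "/".join(components[:i]).encode()
    return int.from_bytes(hashlib.sha256(joined).digest(), "big") % (p - 1)

def name_hash(name):
    """Hash(N): one (alpha_i, beta_i) pair per component, all sharing one random s."""
    components = [c for c in name.split("/") if c]
    s = secrets.randbelow(p - 1) + 1
    pairs = []
    for i in range(1, len(components) + 1):
        n_i = component_digest(components, i)
        alpha = pow(pk, s, p)                   # alpha_i = p^s = g^(x*s)
        beta = pow(g, (n_i + s) % (p - 1), p)   # beta_i  = g^(N_i' + s)
        pairs.append((alpha, beta))
    return pairs

def compare(h1, h2):
    """Compare: only the holder of the secret key x can run this test."""
    if len(h1) != len(h2):
        return False
    for (a1, b1), (a2, b2) in zip(h1, h2):
        # beta^x * alpha^(-1) = g^(x*N_i'), independent of the per-name randomness
        if pow(b1, x, p) * pow(a1, -1, p) % p != pow(b2, x, p) * pow(a2, -1, p) % p:
            return False
    return True

print(compare(name_hash("/foo/bar"), name_hash("/foo/bar")))   # True
print(compare(name_hash("/foo/bar"), name_hash("/foo/baz")))   # False
```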
Question of our assignment: $A$ is a $3\times3$ real symmetric matrix such that $A^3 = I$ (the identity matrix). Does it imply that $A = I$? If so, why? If not, give an example. Any help will be appreciated.

Yes, because of the following: $A$ is diagonalizable with real eigenvalues, since it is real symmetric; further, these eigenvalues solve $\lambda^3=1$ because $A^3=I$; the only real solution is $\lambda=1$. Therefore $A=PIP^{-1}=I$.

For any vector $x$, let $y=(A-I)x$. Since $A$ is symmetric and $A^3=I$ (so $A^4 = A^3A = A$), we have \begin{align*} \|Ay\|^2 + \|A^2y\|^2 + \|y\|^2 &=y^T\left[A^TA+(A^2)^TA^2+I\right]y\\ &=y^T\left(A^2+A^4+I\right)y\\ &=y^T(A^2+A+I)y\\ &=y^T(A^2+A+I)(A-I)x\\ &=y^T(A^3-I)x\\ &=0. \end{align*} Therefore $\|y\|$ must be zero, i.e. $y=0$, or equivalently $Ax=x$. Since $x$ is arbitrary, we conclude that $A=I$.

$A$ satisfies the polynomial $x^3-1\in\mathbb R[x]$, which factors into irreducible factors as $$x^3-1=(x-1)(x^2+x+1)\text{ over }\mathbb R.$$ Since real symmetric matrices are diagonalizable over $\mathbb R$, the minimal polynomial $m_A$ of $A$ must factor into linear factors over $\mathbb R$. Together with $$m_A\mid x^3-1\text{ and }m_A(A)=0,$$ this leaves only the possibility $m_A = x-1$, that is, $A=I.$
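A quick numerical illustration of why the symmetry hypothesis matters (a numpy sketch; the 120-degree rotation and the random orthogonal matrix are my own example choices): a real but non-symmetric matrix can satisfy $A^3 = I$ without being $I$, while the diagonalization argument forces a symmetric cube root of $I$ to be $I$ itself.

```python
import numpy as np

# A non-symmetric real matrix with A^3 = I but A != I: rotation by 120 degrees.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
A = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(3)))  # True: A^3 = I
print(np.allclose(A, A.T))                                   # False: not symmetric

# The symmetric case: A = Q diag(eigenvalues) Q^T with real eigenvalues solving
# x^3 = 1, so every eigenvalue is 1 and the product collapses to the identity.
Q, _ = np.linalg.qr(np.random.randn(3, 3))   # a random orthogonal matrix
B = Q @ np.diag([1.0, 1.0, 1.0]) @ Q.T
print(np.allclose(B, np.eye(3)))             # True
```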
How to Obtain Fatigue Model Parameters

When simulating fatigue, you are faced with two main challenges. The first is to select a suitable fatigue model for your application, and the second is to obtain the material data for the selected model. I recently addressed the first challenge in the blog post "Which Fatigue Model Should I Choose?". Today, I will address the second challenge and discuss how you can obtain fatigue model parameters.

Predict Fatigue Using Many Different Models

Fatigue models are based on physical assumptions and are therefore said to be phenomenological. Since different micromechanical mechanisms govern fatigue under various conditions, many analytical and numerical relations are needed to cover the full spectrum of fatigue. These models, in turn, require dedicated material parameters. It is well known that fatigue testing is expensive. Many test specimens are necessary, since the impurities responsible for fatigue initiation are randomly distributed in the material. The scatter in fatigue life is clearly visible when you visualize all the test results in an S-N curve.

An S-N curve. The black squares represent individual fatigue tests.

Advice for Obtaining Model Parameters via the S-N Curve

Since the S-N curve — also called the Wöhler curve — is one of the oldest tools for fatigue prediction, there is a good chance that the material data is already available in this form. Many times, the data is given for a 50% failure risk. If you do not have access to the material data, you are faced with a testing campaign. When you are done, pay attention to the statistical aspect and, at each load level, select the same reliability when constructing an S-N curve. This is important since the S-N curve is expressed on a logarithmic scale, where a small difference in input has a large influence on the output. S-N curves for different reliability levels then fall below one another, and you should select an appropriate level for your application. For noncritical structures, a failure rate of 50% might be acceptable. However, for critical structures, a significantly lower failure rate should be chosen. Always pay attention when you combine fatigue data from different sources. Make sure that the testing conditions and the operating conditions are the same.

Advice for Running Fatigue Tests that Consider Mean Stress

Another aspect of fatigue testing concerns the mean stress, which has a substantial influence on the fatigue life. In general, fatigue tests performed at a tensile mean stress give a shorter life than tests performed at a compressive mean stress. This effect is also frequently expressed using the R-value (the ratio between the minimum and maximum stress in the load cycle). Thus, with decreasing mean stress (or R-value), the fatigue life increases. In the Fatigue Module, the Stress-Life models do not take this effect into account. When using these models, you need to choose material data obtained under the same testing conditions as the operating ones. In the cumulative damage model, the Palmgren-Miner linear damage summation uses an S-N curve. In this model, however, the S-N curve is specified with an R-value dependence, so the mean stress effect is accounted for.

The mean stress effect.

In case you use a material library and the fatigue data is specified using the maximum stress, you can easily convert it to the stress amplitude using

\sigma_a = \frac{1-R}{2}\,\sigma_{max}

where \sigma_a is the stress amplitude, \sigma_{max} is the maximum stress, and R is the R-value.
Advice for Obtaining Parameters for Findley and Matake Critical Plane Models

The stress-based models seem to be fairly simple. For example, the Findley and the Matake models each use a closely related critical-plane expression. They depend on only two material constants: f and k. These material parameters are, however, nonstandard material data that can be related to the endurance limit of the material. Note that the actual values of f and k differ between the two models. The analytical relation is somewhat cumbersome to obtain, since the stress-based models are based on the critical plane approach and you need to find the plane where the left-hand side of the criterion is maximized. This is basically done by expressing the shear and the normal stress as functions of the orientation using Mohr's stress circle, maximizing by setting the derivative to zero, and simplifying the resulting relation. The different steps of the data manipulation will not be shown here. For the Findley model, the material parameters are related to the standard fatigue data through a relation in which R is the R-value and \sigma_U(R) is the endurance limit; the argument of the endurance limit indicates that the stress is R-value dependent. For the Matake model, the relation is somewhat simpler. Since both relations contain two unknown material parameters, you need endurance limits from two different types of fatigue tests. To illustrate this, consider a case where one endurance limit is obtained by alternating the load between a tensile and a compressive value, R=-1. In the second case, the load is cycled between zero and a maximum load, R=0. For the Findley model, this leads to

\left\{\begin{array}{l} \dfrac{f}{\sigma_U(-1)}=\dfrac{1}{2}\left(k+\sqrt{1+k^2}\right)\\[2mm] \dfrac{f}{\sigma_U(0)}=\dfrac{1}{2}\left(2k+\sqrt{1+4k^2}\right) \end{array}\right.

The pair of equations must be solved numerically. Here is the strategy: Eliminate f between the two equations. This is trivial since it always appears as a linear term. Now you have a nonlinear equation for k only. Since k has a rather small variation (usually between 0.2 and 0.3), it is easy to solve even by pure trial and error. Given the computed k, evaluate f using either of the original equations. For the Matake model, the two fatigue tests lead to

\left\{\begin{array}{l} \dfrac{f}{\sigma_U(-1)}=\dfrac{1}{2}+\dfrac{k}{2}\\[2mm] \dfrac{f}{\sigma_U(0)}=\dfrac{1}{2}+k \end{array}\right.

which you can solve analytically. A short numerical sketch of this fitting strategy is given below.

Fatigue Model Examples

I would like to share a few examples where the discussed fatigue models are used: Findley and Matake models are used to predict fatigue in the example of High-Cycle Fatigue Analysis of a Cylindrical Test Specimen. The S-N curve is used in the tutorial model from the Structural Mechanics Module of a bracket. The S-N curve with R-value dependence is used in the fatigue prediction of a model of a frame with a cutout.
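To make the fitting strategy concrete, here is a small numerical sketch. The two endurance limits are made-up illustrative values rather than data for any real material, and scipy's brentq root finder stands in for the trial-and-error search over k.

```python
import numpy as np
from scipy.optimize import brentq

sigma_m1 = 300.0   # endurance limit at R = -1 (hypothetical value, MPa)
sigma_0 = 250.0    # endurance limit at R = 0  (hypothetical value, MPa)

# Findley: eliminate f between the two equations and solve the residual for k.
def findley_residual(k):
    return (sigma_m1 * (k + np.sqrt(1 + k**2))
            - sigma_0 * (2 * k + np.sqrt(1 + 4 * k**2)))

k_findley = brentq(findley_residual, 0.0, 1.0)
f_findley = 0.5 * sigma_m1 * (k_findley + np.sqrt(1 + k_findley**2))

# Matake: the two equations are linear in f and k, so they solve in closed form.
k_matake = (sigma_m1 - sigma_0) / (2 * sigma_0 - sigma_m1)
f_matake = 0.5 * sigma_m1 * (1 + k_matake)

print(f"Findley: k = {k_findley:.3f}, f = {f_findley:.1f} MPa")
print(f"Matake:  k = {k_matake:.3f}, f = {f_matake:.1f} MPa")
```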
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that. In fact, even our friend Max gets that: http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link, and I would have showcased it all on its own had I seen it first. The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics. That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with probabilistic movements of their own. Essentially, it's taking an average, and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, for y = 1/x, as x approaches infinity, y approaches 0, but we don't actually USE infinity in any calculations; we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance, space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense; it's a philosophical issue of the nonexistence of significance, because there is nothing in zero to be significant. 2) Einstein's general relativity uses Riemann's differential geometry. In the 1840s Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean, as long as they were "locally" Euclidean.
Like spheres, and tori, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. Sixty years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as before? dx = 0? 3) Fourier series link the physics of heat to the physics of the Internet, via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because, if you managed to survive high school trigonometry, it's not that hard to unpack. You're decomposing any motion into a sum of periodic sine and cosine waves, one wave for each whole-number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math, lol! What's the cosine of infinity? What's the infinite-th 'a'? 4) Quantum theory is functional analysis. If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite, and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don't need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow's weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that's more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say we *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I'm betting that we also need to let go of it." He said "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion.
Because if we can't have infinite time, then there must be a creator, and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert: his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything, and I'd be lying if I said I could see it. I'm not being stubborn; I feel like I'm walking on eggshells, being as amicable and conciliatory as possible in trying not to offend, and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it. ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology, and in particular to argue for the existence of God. That's what I've got against Craig. Craig is no friend of mine; I was simply listening to a debate on YouTube (I often let YouTube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig, lol. 5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity.) I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits, which would then make it not infinity. In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: If you begin by studying the flow of heat through an iron rod, you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs. the actual discrete. Heat flow is actually IR light flow, which is radiation from one molecule to another: a charged particle vibrates, and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle, which causes vibration, and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light.
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat: https://www.quora.com/What-is-heat-1 https://www.quora.com/What-is-meant-by-heat https://www.quora.com/What-is-heat-in-physics https://www.quora.com/What-is-the-definition-of-heat https://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are depends on who you ask. They could be high-frequency light, or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities, and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things, and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity, like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than of an existing, truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and is intimidated by your obvious mathematical prowess. Don't take my pushback too seriously. I'd prefer if we could collaborate as colleagues rather than competing.
Hi, I'm supposed to prove that Wien's Law: [tex] P(\lambda,T) = \frac{f(\lambda T)}{\lambda^5} [/tex] includes Stefan-Boltzmann's Law [tex] R(T) = \sigma T^4[/tex] and Wien's Displacement Law: [tex]\lambda_{max} T = b[/tex] For Wien's Displacement Law: I know that I would have to find when [tex]P(\lambda ,T)[/tex], graphed against [tex]\lambda[/tex], has a slope of 0. So I think I need to find the derivative with respect to [tex]\lambda[/tex]. But the only two equations for [tex]P(\lambda,T) [/tex] I have are [tex] P(\lambda,T) = \frac{f(\lambda T)}{\lambda^5} [/tex] and [tex]P(\lambda,T) = \frac{8\pi kT}{\lambda^4} [/tex] So if I take the derivative of [tex] P(\lambda,T) = \frac{8\pi kT}{\lambda^4} [/tex] with respect to [tex]\lambda[/tex] I have [tex]8\pi kT \cdot (-4) \cdot \lambda^{-5} = 0 [/tex] where I'm guessing that everything except [tex] \lambda [/tex] is being held constant, and I don't know what to do from there. Any hints or corrections of things I said would be appreciated. Thanks.
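Not a derivation, but a numerical sanity check of both claims, using the Planck-like choice f(x) = 1/(e^(1/x) - 1) in arbitrary units (my own choice of f, not something given in the problem): the peak location gives the same lambda_max*T at every temperature, and the integral over all wavelengths scales as T^4.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def P(lam, T):
    # Wien's form P = f(lam*T)/lam^5 with the Planck-like choice f(x) = 1/(exp(1/x) - 1)
    return 1.0 / (lam**5 * (np.exp(1.0 / (lam * T)) - 1.0))

for T in (1.0, 2.0, 3.0):
    peak = minimize_scalar(lambda lam: -P(lam, T), bounds=(0.01, 10.0 / T), method="bounded")
    # the small lower cutoff avoids floating-point overflow; the omitted tail is negligible
    total, _ = quad(P, 0.01, np.inf, args=(T,))
    print(f"T = {T}:  lam_max*T = {peak.x * T:.4f},  integral/T^4 = {total / T**4:.4f}")
# lam_max*T comes out the same for every T (displacement law) and integral/T^4 is
# constant (Stefan-Boltzmann scaling), even though neither law is used by the code.
```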
Up until now, we have dealt with double integrals in the Cartesian coordinate system. This is helpful in situations where the domain can be expressed simply in terms of \(x\) and \(y\). However, many problems are not so easy to graph. If the domain has the characteristics of a circle or cardioid, then it is much easier to solve the integral using polar coordinates.

Introduction

The Cartesian system focuses on navigating to a specific point based on its distance from the x, y, and sometimes z axes. In polar form, there are generally two parameters for navigating to a point: \(r\) and \( \theta \). \(r\) represents the magnitude of the vector that stretches from the origin to the desired point; in other words, \(r\) is the distance directly to that coordinate point. \( \theta \) represents the angle that the aforementioned vector makes with the x-axis. This creates a circular type of motion as we adjust the value of \( \theta \), which allows us to express a circle of radius 1 as \( r = 1 \) as opposed to \( x^2 + y^2 = 1 \) in Cartesian coordinates.

Polar Double Integration Formula

Many of the double integrals that we have encountered so far have involved circles or at least expressions with \(x^2 + y^2\). When we see these expressions, a bell should ring and we should shout, "Can't we use polar coordinates?" The answer is, "Yes," but only with care. Recall that when we changed variables in single-variable integration, such as \(u = 2x\), we needed to work out the stretching factor \(du = 2\,dx\). The idea is similar with two-variable integration. When we change to polar coordinates, there will also be a stretching factor. This is evident since the area of the "polar rectangle" is not just \( \Delta{r}\, \Delta{\theta} \), as one may expect. The picture is shown below. Even if \( \Delta{r}\) and \( \Delta{\theta}\) are very small, the area is not the product \( \Delta{r}\, \Delta{\theta} \). This comes from the definition of radians. An arc that extends \( \Delta{\theta}\) radians a distance \(r\) out from the origin has length \( r\, \Delta{\theta}\). If both \(\Delta r\) and \(\Delta \theta\) are very small, then the polar rectangle has area \[ \text{Area} = r\, \Delta{r}\, \Delta{\theta}. \] This leads us to the following theorem.

Theorem: Double Integration in Polar Coordinates

Let \(f(x,y)\) be a continuous function defined over a region \(R\) bounded in polar coordinates by \( r_1(\theta) < r < r_2(\theta) \) and \( \theta_1 < \theta < \theta_2 \). Then \[ \iint_R f(x,y)\,dy\,dx =\int_{\theta_1}^{\theta_2} \int_{r_1(\theta)}^{r_2(\theta)} f(r\cos \theta,r\sin \theta)\,r\,dr\,d\theta.\]

Theoretical discussion with descriptive elaboration

The area of a closed and bounded region \(R\) in the polar coordinate plane is given by \[ A = \iint_{R} r \, dr\, d \theta. \] To find the bounds for a domain in this form, we use a similar technique as with integrals in rectangular form. Beginning at the origin with \(r = 0 \), we increase the value of \(r\) until we find the maximum and minimum overall distances from the origin. Similarly, for \(\theta \) we start at \( \theta = 0 \) and find the minimum and maximum angles that the domain makes with the origin. The maximum distance from the origin and the maximum angle that the boundary makes with the origin give the upper bounds for the double integral. The minimum distance and the minimum angle give the lower bounds. For example, to find the bounds for \(r\), we look to see what the minimum and maximum overall distances from the origin are in terms of \(r\).
Sometimes problems will explicitly give you the curves that form the domain; other times you may need to look at a graph to determine the domain. Regardless, if the origin is contained in the domain, then the lower bound for \(r\) will be 0. The upper bound will be whatever curve encompasses the rest of the domain. \( \theta \) is usually simpler to compute: the lower and upper bounds are the minimum and maximum angles that the domain makes with the origin. Trigonometric functions can be used to determine the extreme angles that the boundary makes with the origin. If an equation is provided, it is helpful to use the conversions \[ x = r\, \cos \theta\] \[y = r\, \sin \theta \] to convert equations from Cartesian to polar form. If one must determine the bounds from a provided graph, it can be helpful to guess and check by plotting some test points to see if your bounds truly match the provided graph. If you need to convert an integral from Cartesian to polar form, graph the domain using the Cartesian bounds and your knowledge of curves in the Cartesian domain. Then use the method described above to derive the bounds in polar form. Once the integral is set up, it may be solved exactly like an integral using rectangular coordinates.

Example \(\PageIndex{1}\)

Find the volume of the part of the paraboloid \[ z = 9 - x^2 - y^2 \] that lies inside the cylinder \[ x^2 + y^2 = 4. \]

Solution

The surfaces are shown below. This is definitely a case for polar coordinates. The region \(R\) is the part of the xy-plane that is inside the cylinder. In polar coordinates, the cylinder has equation \[ r^2 = 4. \] Taking square roots and recalling that \(r\) is positive gives \[ r = 2. \] The inside of the cylinder is thus the polar rectangle \( 0 < r < 2 \), \( 0 < \theta < 2\pi\). The equation of the paraboloid becomes \[ z = 9 - r^2. \] We find the integral \[ \int _0^{2\pi}\int_0^2 \left(9-r^2\right) r\,dr\,d\theta.\] This integral is a matter of routine and evaluates to \( 28\pi\).

Example \(\PageIndex{2}\)

Find the volume of the part of the sphere of radius 3 that is left after drilling a cylindrical hole of radius 2 through the center.

Solution

The picture is shown below. The region this time is the annulus (washer) between the circles \(r = 2\) and \(r = 3\), as shown below. The sphere has equation \[ x^2 + y^2 + z^2 = 9 .\] In polar coordinates this reduces to \[ r^2 + z^2 = 9. \] Solving for \(z\) by subtracting \(r^2\) and taking a square root, we get top and bottom surfaces of \[z=\sqrt{9-r^2} \;\;\; \text{and} \;\;\; z=-\sqrt{9-r^2}. \] We get the double integral \[\int_0^{2\pi} \int_2^3 (\sqrt{9-r^2}+ \sqrt{9-r^2})\,r\; dr\, d\theta. \] This integral can be solved by letting \[u = 9 - r^2 \;\;\; \text{and} \;\;\;du = -2r\,dr.\] After substituting we get \[\begin{align} &-\dfrac{1}{2}\int_{0}^{2\pi}\int_{5}^{0} 2u^{\frac{1}{2}} \; du\, d\theta \\ &= -\dfrac{2}{3}\int_{0}^{2\pi}\left[u^{\frac{3}{2}}\right]_5^0 \; d\theta \\ &= \dfrac{20 \sqrt{5} \pi}{3}.\end{align} \]

Example \(\PageIndex{3}\)

Change the Cartesian integral into an equivalent polar integral, then solve it. \[ \int_{1}^{\sqrt{3}}\int_{1}^{x}dy\,dx \]

Solution

The point \((\sqrt{3}, 1)\) is at an angle of \(\pi/6\) from the origin. The point \((\sqrt{3}, \sqrt{3})\) is at an angle of \(\pi/4\) from the origin. In terms of \(r\), the domain is bounded by the two curves \(r=\csc\theta\) and \(r=\sqrt{3}\sec\theta\). Thus, the converted integral is \[ \int_{\pi/6}^{\pi/4}\int_{\csc\theta}^{\sqrt{3}\sec\theta}r\,dr\,d\theta.
\] Now the integral can be solved just like any other integral. \[\begin{align} &\int_{\pi/6}^{\pi/4} \int_{\csc\theta}^{\sqrt{3}\sec\theta}r\,dr\,d\theta \\ & =\int_{\pi/6}^{\pi/4} \left(\dfrac{3}{2} \sec^2\theta - \dfrac{1}{2} \csc^2\theta\right) d\theta \\ & = \left [ \dfrac{3}{2} \tan\theta + \dfrac{1}{2}\cot\theta \right ] _{\pi/6}^{\pi/4} \\ & =2 - \sqrt{3}. \end{align}\]

Example \(\PageIndex{4}\)

Find the area of the region cut from the first quadrant by the curve \( r = \sqrt{2 - \sin2\theta}\).

Solution

Note that it is not even necessary to draw the region in this case because all of the information needed is already provided. Because the region is in the first quadrant, the domain is bounded by \( \theta = 0 \) and \( \theta = \dfrac{\pi}{2} \). The sole boundary for \(r\) is \(r = \sqrt{2 - \sin2\theta}\), so the integral is \[\begin{align} & \int_{0}^{\pi/2}\int_{0}^{\sqrt{2 - \sin2\theta}} r\,dr\,d\theta \\ &= \int_{0}^{\pi/2} \left [ \dfrac{r^2}{2} \right ] _{0}^{\sqrt{2 - \sin2\theta}} d\theta \\ &= \int_{0}^{\pi/2} \dfrac{2 - \sin2\theta}{2}\, d\theta \\ &= \int_{0}^{\pi/2} \left(1 - \dfrac{\sin2\theta}{2}\right) d\theta \\ &= \left [ \theta + \dfrac{\cos2\theta}{4} \right ] _{0}^{\pi/2} \\ &= \left(\dfrac{\pi}{2} - \dfrac{1}{4}\right) - \dfrac{1}{4} \\ &= \dfrac{\pi}{2} - \dfrac{1}{2}. \end{align}\]

Contributors

Michael Rea (UCD), Larry Green. Integrated by Justin Marshall.
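As a cross-check on Example 1 (and on the \(r\,dr\,d\theta\) stretching factor), here is a short scipy sketch that evaluates the same volume once in Cartesian and once in polar form; the use of scipy and the variable names are my own choices, and both results agree with \(28\pi\).

```python
import numpy as np
from scipy.integrate import dblquad

# Cartesian: integrate 9 - x^2 - y^2 over the disk x^2 + y^2 <= 4
cart, _ = dblquad(lambda y, x: 9 - x**2 - y**2,
                  -2, 2,
                  lambda x: -np.sqrt(4 - x**2),
                  lambda x: np.sqrt(4 - x**2))

# Polar: integrate (9 - r^2) * r over 0 <= r <= 2, 0 <= theta <= 2*pi
polar, _ = dblquad(lambda r, theta: (9 - r**2) * r, 0, 2 * np.pi, 0, 2)

print(cart, polar, 28 * np.pi)   # all three values agree to numerical precision
```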
* Let $(A, \geq)$ be a partially ordered set. If $(A, +, \times)$ is a ring, we may add an order $\geq$ that cooperates with the binary operations if - $a\geq b \Rightarrow ac \geq bc$, if $c \geq 1$. - $a\geq b \Rightarrow a+c \geq b+c$. That in fact decides much of the order, and denies the existence of such an order for rings with finite characteristic or certain invertible elements. (That is why a complete totally ordered field is so rare, too.) Clearly a finite ring has finite characteristic, and it will be hard to put a partial order on it. If a space has a sub-structure that looks like a finite ring, it might be bad to have an order as well.

Example 1. We all know that $\mathbb{C}$ has no total order. Is it possible to assign a partial order to it that is consistent with the order on the reals? The answer is no, too.

Problem. Consider a poset $(\mathbb{C}, +, \times, \geq)$ that cooperates with the real order. Let $\omega = e^{q\pi i}\neq \pm 1$ where $q\in \mathbb{Q}$. Show that $\omega$ is not related to 1. Hence, or otherwise, show that such an order does not exist.

* To further demonstrate why orders cannot always cooperate with binary operations, we look at the following example.

Definition 2. Let $f:(A, \geq) \to (B, \geq ')$ be a function. $f$ is said to be increasing if $a\geq b$ implies $f(a) \geq ' f(b)$. Since the strength of different orders varies, it is very hard to compare maximality in different spaces.

Proposition 3. Let $f:(A, \geq) \to (B, \geq ')$ be an increasing bijection. Maximality of $a\in A$ does not imply maximality of $f(a)\in B$.

Proof. Take the identity map $f: (A, =)\to (A, \geq)$ where $A = \left\{ 1,2\right\}$ and consider 1. 1 is maximal under the equality order but not maximal under the $\geq$ order.

But crafting an example with respect to the same order is much harder.

Proposition 4. Let $f:(A, \geq) \to (A, \geq)$ be an increasing bijection. Maximality of $a\in A$ does not imply maximality of $f(a)\in A$.

Proof. Consider $A = \mathbb{Z}$ with the order $s \geq t$ iff $(s,t) = (2n+1,2n)$ where $n\in \mathbb{N}$, $n\geq 1$, plus the reflexive relations. The function $f(n) = n+2$ is an increasing bijection, but the maximality of $0$ does not imply the maximality of $f(0) = 2$ (since $3 > 2$, while nothing lies above $0$).

The reason this works is that the finite-poset argument below fails for infinite posets.

Theorem 5. Let $f:(A, \geq)\to (A, \geq)$ be an increasing bijection, where $|A| < \infty$. Then maximality of $a\in A$ implies maximality of $f(a)\in A$.

Proof. For $a\in A$ define $\langle a \rangle = \left\{ f^k(a) \mid k \in \mathbb{Z}\right\}$. If $A$ is finite then so is $\langle a \rangle$, so there exist distinct $p,q\in \mathbb{Z}$ such that $f^p(a) = f^q(a)$. Taking inverses, we have $f^k(a) = a$ for some $k \in \mathbb{N}$. [This is a standard algebra argument.] Suppose $a$ is maximal but $f(a)$ is not; then there is $c \in A$ with $c > f(a)$. But then $f^{k-1}(c) > f^{k}(a) = a$ (strictly, since $f$ is increasing and injective), which contradicts the maximality of $a$.

Problem. The function $f$ is decreasing if $a\geq b$ implies $f(a) \leq f(b)$. Do the above results [3, 4, 5] hold for decreasing bijections?
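Theorem 5 is easy to stress-test by brute force. The sketch below generates small random posets, enumerates every increasing bijection, and checks that maximal elements map to maximal elements; the poset size, edge probability, and random model are my own choices.

```python
import itertools
import random

def random_strict_order(n, prob=0.3):
    """A random strict partial order on range(n): random pairs (i, j) with i < j,
    closed under transitivity (so it is automatically antisymmetric and acyclic)."""
    rel = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < prob}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in itertools.product(list(rel), repeat=2):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

def maximal_elements(rel, n):
    # a is maximal iff no element lies strictly above it
    return {a for a in range(n) if all(x != a for (x, _) in rel)}

def increasing_bijections(rel, n):
    # permutations perm with: a < b implies perm[a] < perm[b]
    for perm in itertools.permutations(range(n)):
        if all((perm[a], perm[b]) in rel for (a, b) in rel):
            yield perm

random.seed(0)
n = 5
for _ in range(200):
    rel = random_strict_order(n)
    mx = maximal_elements(rel, n)
    for f in increasing_bijections(rel, n):
        assert all(f[a] in mx for a in mx), "Theorem 5 would be violated"
print("Theorem 5 held for every sampled poset and every increasing bijection.")
```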
In Dixmier ($C^*$-algebras), the Plancherel theorem states (I will not mention the right regular representation even though the theorem does talk about it): Let $G$ be unimodular, $\lambda$ be the left regular representation of $G$, $\mathcal{U}$ be the von Neumann algebra on $L^2(G)$ generated by $\lambda(G)$, $J$ the mapping $f \mapsto f^*$ on $L^2(G)$, and $t$ the natural trace on $\mathcal{U}^+$ defined by $\epsilon_1$. Then there exists a positive measure $\mu$ on $\hat{G}$ and an isomorphism $W$ of $L^2(G)$ onto $\int^\oplus (K(\zeta) \otimes \overline{K(\zeta)}) d\mu(\zeta)$ (I'm guessing the integral is over $\hat{G}$) with the following properties: ($i$) $W$ transforms $J$ into $\int^\oplus J_\zeta \ d\mu(\zeta)$, $\lambda$ into $\int^\oplus (\zeta \otimes 1) \ d\mu(\zeta)$, $\mathcal{U}$ into $\int^\oplus (\mathcal{L}(K(\zeta)) \otimes \mathbb{C}) \ d\mu(\zeta)$, and $t$ into $\int^\oplus t_\zeta \ d\mu(\zeta)$, etc. However, it seems that in the case where $G$ is a real reductive Lie group this decomposition can be refined, with the decomposition done over parabolic subgroups $P = MAN$, discrete series of $M$, and unitary induction on elements of $i\mathfrak{a}^*$. According to my colleague it should look something like this (in naive terms): $L^2(G) = \bigoplus_{P = MAN} \bigoplus_{\mu \in M^\wedge_{ds}} \int_{\nu \in i\mathfrak{a}^*} \pi_{P, \mu, \nu}\, d\xi$ where $M^\wedge_{ds}$ is the set of discrete series representations of $M$, $d\xi$ is some measure related to Harish-Chandra's $c$-functions, and $\pi_{P, \mu, \nu}$ is some class of representations of $G$ depending on these parameters. My main question is this: I am looking for references on this refinement, preferably very self-contained references. My second question is: in N. Wallach's book Real Reductive Groups II, he proves a very close thing, namely the decomposition of $L^2(G/K)$, where $K$ is a maximal compact subgroup obtained by a Cartan involution, and the decomposition of $L^2(G/N, \chi)$, where $N$ is the nilpotent radical of a parabolic subgroup $P = MAN$ and $\chi$ is a character of $N$. Can one obtain the refined decomposition of $L^2(G)$ from the above two decompositions in some way?
Any edge of a tree is a bridge. What is the minimum number of edges that I need to add so that there are no more bridges in the tree? I have seen a solution on the internet claiming the answer is $\frac{|V|}{2}$, where $|V|$ is the number of vertices in the tree. How can I prove it?

Let me describe the following algorithm. The basic idea is to add edges such that, for any pair of vertices $v_i$ and $v_j$, there are at least two different simple paths between $v_i$ and $v_j$. Initially all edges are unmarked. Select a simple path $v_i\dots v_j$ containing at least two unmarked edges such that $(v_i,v_j) \notin E$ (vertices $v_i$ and $v_j$ are not adjacent). If there is no path containing two unmarked edges, then select a path containing one unmarked edge (this means we are almost done). Connect $v_i$ and $v_j$ (thereby creating a cycle). Mark all edges of the (newly created) cycle $v_i\dots v_j$. If there is an unmarked edge, go to step 2; otherwise halt.

Claim 1: The algorithm adds at most $\frac{|V|}{2}$ edges.

Proof: At step 2 we select a path whose length is at least $2$ (i.e., it has at least two edges) containing at least two unmarked edges. Such a path exists as long as there are at least two unmarked edges, since there is always a simple path between any two vertices $v_i$ and $v_j$ in a connected tree. So each execution of step 2 decreases the number of unmarked edges by at least $2$, except possibly the last one, which may remove only one. Since the tree initially has $|V|-1$ unmarked edges, the algorithm adds at most $\big\lceil{\frac{|V|-1}{2}}\big\rceil$ edges. But $\big\lceil{\frac{|V|-1}{2}}\big\rceil \leq \big\lfloor\frac{|V|}{2}\big\rfloor \leq \frac{|V|}{2}$.

Claim 2: The resulting graph created by the algorithm has no bridges.

Proof: Consider a partition $(V',V- V')$. Let $v_i \in V'$ and $v_j \in V-V'$ be such that $(v_i, v_j)$ is an edge of the initial tree. Such an edge obviously exists since the tree is connected. At some point the algorithm selects a path containing the edge $(v_i, v_j)$ and creates a simple cycle which includes the edge $(v_i, v_j)$. So there are two different paths between the vertices $v_i$ and $v_j$, and hence there are at least two edges connecting the partitions $V'$ and $V-V'$.

This algorithm does not compute the optimal number of edges for all possible input instances. For example, for a tree which is a path it is enough to add a single edge connecting the two endpoints to transform the tree into a bridgeless graph. My goal is to establish the least upper bound for the number of edges required to transform a tree into a bridgeless graph. For example, a star-like tree with an even number of vertices is a worst case, in which we need exactly $\frac{|V|}{2}$ edges. (A runnable sketch of the marking algorithm is given below.)
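Here is a sketch of the marking algorithm using networkx; the example tree, the pair-selection order, and the one-unmarked-edge fallback are simplified choices of mine rather than a verbatim transcription. It adds at most $\lfloor|V|/2\rfloor$ edges and the resulting graph has no bridges.

```python
import itertools
import networkx as nx

def make_bridgeless(tree):
    G = tree.copy()
    unmarked = {frozenset(e) for e in tree.edges()}
    added = []
    while unmarked:
        best = None
        for u, v in itertools.combinations(tree.nodes(), 2):
            if G.has_edge(u, v):
                continue
            path = nx.shortest_path(tree, u, v)               # the unique tree path
            path_edges = {frozenset(e) for e in zip(path, path[1:])}
            count = len(path_edges & unmarked)
            if count >= 2:                                    # step 2: prefer two unmarked edges
                best = (u, v, path_edges)
                break
            if count == 1 and best is None:                   # fallback: one unmarked edge left
                best = (u, v, path_edges)
        if best is None:                                      # no usable pair (tiny trees)
            break
        u, v, path_edges = best
        G.add_edge(u, v)                                      # close the cycle ...
        added.append((u, v))
        unmarked -= path_edges                                # ... and mark its edges
    return G, added

tree = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5), (0, 6), (6, 7)])  # example tree
G, added = make_bridgeless(tree)
print("added edges:", added)
print("remaining bridges:", list(nx.bridges(G)))              # expected: []
assert len(added) <= tree.number_of_nodes() // 2
```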
Answer C=20$^\circ$ Work Step by Step C=$\frac{5}{9}$(F-32) C=$\frac{5}{9}$(68-32) C=$\frac{5}{9}$(36) C=$\frac{5\times36}{9}=\frac{180}{9}$ C=20
Practice Paper 1 Question 4

What is the units digit of the number \(\sum_{n=1}^{1337}(n!)^4\)?

Hints

Hint 1: What can you say about the units digit of \(n!\) for \(n\ge5\)?
Hint 2: What about the units digits for \(n < 5\)?
Hint 3: It's sufficient to take only the units digit when multiplying or raising the number to any power.

Solution

Every factorial from \(5!\) onwards has both \(5\) and \(2\) as factors, hence its units digit is 0. This means that we only need to worry about \(n \in \{1,2,3,4\}\). It's sufficient to take only the units digit when multiplying or raising to any power. We have: \(1^4\rightarrow1\), \((2!)^4=2^4\rightarrow6\), \((3!)^4=6^4\rightarrow6\), \((4!)^4=24^4\rightarrow4^4\rightarrow6\). The units digit of the sum is therefore the units digit of \(1+6+6+6\), i.e. \(9\).
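For anyone who wants to double-check the arithmetic, a short brute force that tracks only units digits (exactly as the hints suggest) confirms the answer:

```python
units, fact_mod10 = 0, 1
for n in range(1, 1338):
    fact_mod10 = (fact_mod10 * n) % 10           # units digit of n!
    units = (units + pow(fact_mod10, 4, 10)) % 10
print(units)   # 9
```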
Volume 9, Number 5, 2019, Pages 1750-1768. DOI: 10.11948/20180315

$H_{\infty}$ feedback controls based on discrete-time state observations for singular hybrid systems with nonhomogeneous Markovian jump

Zhiyong Ye, Suying Pan, Jin Zhou

Keywords: Discrete-time state observation, stochastically stable, $H_{\infty}$ feedback control, nonhomogeneous Markovian jump, singular system.

Abstract: In this paper, the $H_{\infty}$-control problem for singular Markovian jump systems (SMJSs) with variable transition rates under feedback controls based on discrete-time state observations is studied. The mode-dependent, time-varying transition rates are assumed to be piecewise constant. By designing a feedback controller based on discrete-time state observations, employing a stochastic Lyapunov-Krasovskii functional, and combining these with linear matrix inequality (LMI) techniques, sufficient conditions for the case of nonhomogeneous transition rates are developed such that the controlled system is regular, impulse-free, and stochastically stable. Subsequently, the upper bound on the duration $\tau$ between two consecutive state observations and the prescribed $H_{\infty}$ performance $\gamma$ are derived. Moreover, the obtained results can be easily checked with the MATLAB LMI Toolbox. Finally, two numerical examples are presented to show the effectiveness of the proposed methods.
These are homework exercises to accompany Miersemann's "Partial Differential Equations" Textmap. This is a textbook targeted for a one semester first course on differential equations, aimed at engineering students. Partial differential equations are differential equations that contains unknown multivariable functions and their partial derivatives. Prerequisite for the course is the basic calculus sequence. Q3.1 Let \(\chi\): \({\mathbb{R}^n}\to {\mathbb{R}^1}\) in \(C^1\), \(\nabla\chi\not=0\). Show that for given \(x_0\in {\mathbb{R}^n}\) there is in a neighborhood of \(x_0\) a local diffeomorphism \(\lambda=\Phi(x)\), \(\Phi:\ (x_1,\ldots,x_n)\mapsto(\lambda_1,\ldots,\lambda_n)\), such that \(\lambda_n=\chi(x)\). Q3.2 Show that the differential equation $$a(x,y)u_{xx}+2b(x,y)u_{xy}+c(x,y)u_{yy}+\mbox{lower order terms}=0$$ is elliptic if \(ac-b^2>0\), parabolic if \(ac-b^2=0\) and hyperbolic if \(ac-b^2<0\). Q3.3 Show that in the hyperbolic case there exists a solution of \(\phi_x+\mu_1\phi_y=0\), see equation (3.9), such that \(\nabla\phi\not=0\). Hint: Consider an appropriate Cauchy initial value problem. Q3.4 Show equation (3.4). Q3.5 Find the type of $$Lu:=2u_{xx}+2u_{xy}+2u_{yy}=0$$ and transform this equation into an equation with vanishing mixed derivatives by using the orthogonal mapping (transform to principal axis) \(x=Uy,\ U\) orthogonal. Q3.6 Determine the type of the following equation at \((x,y)=(1,1/2)\). $$Lu:=xu_{xx}+2yu_{xy}+2xyu_{yy}=0.$$ Q3.7 Find all \(C^2\)-solutions of $$u_{xx}-4u_{xy}+u_{yy}=0.$$ Hint: Transform to principal axis and stretching of axis lead to the wave equation. Q3.8 Oscillations of a beam are described by \begin{eqnarray*} w_x-{1\over E}\sigma_t&=& 0\\ \sigma_x-\rho w_t&=&0, \end{eqnarray*} where \(\sigma\) stresses, \(w\) deflection of the beam and \(E,\ \rho\) are positive constants. Determine the type of the system. Transform the system into two uncoupled equations, that is, \(w,\ \sigma\) occur only in one equation, respectively. Find non-zero solutions. Q3.9 Find nontrivial solutions (\(\nabla \chi\not=0\)) of the characteristic equation to $$x^2u_{xx}-u_{yy}=f(x,y,u,\nabla u),$$ where \(f\) is given. Q3.10 Determine the type of $$u_{xx}-xu_{yx}+u_{yy}+3u_x=2x,$$ where \(u=u(x,y)\). Q3.11 Transform equation $$u_{xx}+(1-y^2)u_{xy}=0,$$ \(u=u(x,y)\), into its normal form. Q3.12 Transform the Tricomi-equation $$yu_{xx}+u_{yy}=0,$$ \(u=u(x,y)\), where \(y<0\), into its normal form. Q3.13 Transform equation $$x^2u_{xx}-y^2u_{yy}=0,$$ \(u=u(x,y)\), into its normal form. Q3.14 Show that $$\lambda=\dfrac{1}{\left(1+|p|^2\right)^{3/2}},\ \ \Lambda=\dfrac{1}{\left(1+|p|^2\right)^{1/2}}.$$ are the minimum and maximum of eigenvalues of the matrix \((a^{ij})\), where $$a^{ij}=\left(1+|p|^2\right)^{-1/2}\left(\delta_{ij}-\dfrac{p_ip_j}{1+|p|^2}\right).$$ Q3.15 Show that Maxwell equations are a hyperbolic system. Q3.16 Consider Maxwell equations and prove that \(\text{div}\ E=0\) and \(\text{div}\ H=0\) for all \(t\) if these equations are satisfied for a fixed time \(t_0\). Hint. \(\text{div}\ \text{rot} \ A=0\) for each \(C^2\)-vector field \(A=(A_1,A_2,A_3)\). Q3.17 Assume a characteristic surface \(\mathcal{S}(t)\) in \(\mathbb{R}^3\) is defined by \(\chi(x,y,z,t)=const.\) such that \(\chi_t=0\) and \(\chi_z\not=0\). Show that \(\mathcal{S}(t)\) has a nonparametric representation \(z=u(x,y,t)\) with \(u_t=0\), that is \(\mathcal{S}(t)\) is independent of \(t\). Q3.18 Prove formula (3.22) for the normal on a surface. 
Q3.19 Prove formula (3.23) for the speed of the surface \(\mathcal{S}(t)\). Q3.20 Write the Navier-Stokes system as a system of type (3.4.1). Q3.21 Show that the following system (linear elasticity, stationary case of (3.4.1.1) in the two-dimensional case) is elliptic $$ \mu\triangle u+(\lambda+\mu)\mbox{\ grad(div}\ u)+f=0, $$ where \(u=(u_1,u_2)\). The vector \(f=(f_1,f_2)\) is given and \(\lambda,\ \mu\) are positive constants. Q3.22 Discuss the type of the following system in stationary gas dynamics (isentrop flow) in \(\mathbb{R}^2\). \begin{eqnarray*} \rho u u_x+\rho v u_y+ a^2\rho_x&=&0\\ \rho u v_x+\rho v v_y+ a^2\rho_y&=&0\\ \rho (u_x+v_y)+u\rho_x+ v\rho_y&=&0. \end{eqnarray*} Here are \((u,v)\) velocity vector, \(\rho\) density and \(a=\sqrt{p'(\rho)}\) the sound velocity. Q3.23 Show formula 7. (directional derivative). Hint: Induction with respect to \(m\). Q3.24 Let \(y=y(x)\) be the solution of: \begin{eqnarray*} y'(x)&=&f(x,y(x))\\ y(x_0)&=&y_0, \end{eqnarray*} where \(f\) is real analytic in a neighborhood of \((x_0,y_0)\in \mathbb{R}^2\). Find the polynomial \(P\) of degree 2 such that $$ y(x)=P(x-x_0)+O(|x-x_0|^3) $$ as \(x\to x_0\). Q3.25 Let \(u\) be the solution of \begin{eqnarray*} \triangle u&=&1\\ u(x,0)&=&u_y(x,0)=0. \end{eqnarray*} Find the polynomial \(P\) of degree 2 such that $$ u(x,y)=P(x,y)+O((x^2+y^2)^{3/2}) $$ as \((x,y)\to(0,0)\). Q3.26 Solve the Cauchy initial value problem \begin{eqnarray*} V_t&=&{Mr\over r-s-NV}(1+N(n-1)V_s)\\ V(s,0)&=&0. \end{eqnarray*} Hint: Multiply the differential equation with \((r-s-NV)\). Q3.27 Write \(\triangle^2 u=-u\) as a system of first order. Hint: \(\triangle^2 u\equiv\triangle(\triangle u)\). Q3.28 Write the minimal surface equation $${\partial\over\partial x}\left({u_x\over\sqrt{1+u_x^2+u_y^2}}\right)+{\partial\over\partial y}\left({u_y\over\sqrt{1+u_x^2+u_y^2}}\right)=0$$ as a system of first order. Hint: \(v_1:= u_x/\sqrt{1+u_x^2+u_y^2},\ v_2:=u_y/\sqrt{1+u_x^2+u_y^2}.\) Q3.29 Let \(f:\ \mathbb{R}^1\times\mathbb{R}^m\to\mathbb{R}^m\) be real analytic in \((x_0,y_0)\). Show that a real analytic solution in a neighborhood of \(x_0\) of the problem \begin{eqnarray*} y'(x)&=&f(x,y)\\ y(x_0)&=&y_0 \end{eqnarray*} exists and is equal to the unique \(C^1[x_0-\epsilon, x_0+\epsilon]\)-solution from the Picard-Lindel\"of theorem, \(\epsilon>0\) sufficiently small. Q3.30 Show (see the proof of Proposition A7) $$\dfrac{\mu\rho(r-x_1-\ldots-x_n)}{\rho r-(\rho+mM)(x_1+\ldots+x_n)} <<\dfrac{\mu\rho r}{\rho r-(\rho+mM)(x_1+\ldots+x_n)}.$$ Hint: Leibniz's rule.
Practice Paper 1 Question 7

The Taylor expansion of \(\ln(1+x)\) is given by \(\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots\). Expand \(\ln\!\left(\frac{1-x}{1+x^2}\right)\) up to and including the 4th power of \(x\).

Hints

Hint 1: What properties of \(\log\) can you use to break down \(\ln\!\left(\frac{1-x}{1+x^2}\right)\)?
Hint 2: How would you relate each term in the above breakdown to the given identity?

Solution

Although this question involves a Taylor expansion, we do not need to know how to formally expand a function into a Taylor series to answer it. Let \(f(x)=\ln(1+x)\). Notice that \(\ln\!\left(\frac{1-x}{1+x^2}\right) = \ln(1-x)-\ln(1+x^2)= f(-x)-f(x^2).\) Since the Taylor expansion of \(f(x)\) is given, substitute \(-x\) and \(x^2\) to obtain the terms up to \(x^4\): \[ \begin{align} \ln\!\left(\frac{1-x}{1+x^2}\right) &=\left(-x-\frac{x^2}{2}-\frac{x^3}{3}-\frac{x^4}{4}-\ldots\right) \\ &\qquad-\left(x^2-\frac{x^4}{2}+\ldots\right)\\ &=-x-\frac{3x^2}{2}-\frac{x^3}{3}+\frac{x^4}{4}+\ldots \end{align} \]
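The manipulation is easy to verify with a computer algebra system; this short sympy sketch (sympy being an extra dependency, not part of the intended solution) reproduces the same series:

```python
from sympy import symbols, ln, series

x = symbols("x")
print(series(ln((1 - x) / (1 + x**2)), x, 0, 5))
# -x - 3*x**2/2 - x**3/3 + x**4/4 + O(x**5)
```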
Cassegrain reflector

In a symmetrical Cassegrain both mirrors are aligned about the optical axis, and the primary mirror usually contains a hole in the centre, thus permitting the light to reach an eyepiece, a camera, or a light detector. Alternatively, as in many radio telescopes, the final focus may be in front of the primary. In an asymmetrical Cassegrain, the mirror(s) may be tilted to avoid obscuration of the primary or the need for a hole in the primary mirror (or both). The classic Cassegrain configuration uses a parabolic reflector as the primary while the secondary mirror is hyperbolic. [1] Modern variants often have a hyperbolic primary for increased performance (for example, the Ritchey–Chrétien design), or the primary and/or secondary are spherical or elliptical for ease of manufacturing. The Cassegrain reflector is named after a published reflecting telescope design that appeared in the April 25, 1672 Journal des sçavans, which has been attributed to Laurent Cassegrain. [2] Similar designs using convex secondaries have been found in Bonaventura Cavalieri's 1632 writings describing burning mirrors [3] [4] and Marin Mersenne's 1636 writings describing telescope designs. [5] James Gregory's 1662 attempts to create a reflecting telescope included a Cassegrain configuration, judging by a convex secondary mirror found among his experiments. [6] The Cassegrain design is also used in catadioptric systems.

Cassegrain designs

The "Classic" Cassegrain

The "Classic" Cassegrain has a parabolic primary mirror, and a hyperbolic secondary mirror that reflects the light back down through a hole in the primary. Folding the optics makes this a compact design. On smaller telescopes, and camera lenses, the secondary is often mounted on an optically flat, optically clear glass plate that closes the telescope tube. This support eliminates the "star-shaped" diffraction effects caused by a straight-vaned support spider. The closed tube stays clean, and the primary is protected, at the cost of some loss of light-gathering power. The design makes use of the special properties of parabolic and hyperbolic reflectors. A concave parabolic reflector will reflect all incoming light rays parallel to its axis of symmetry to a single point, the focus. A convex hyperbolic reflector has two foci and will reflect all light rays directed at one of its two foci towards its other focus. The mirrors in this type of telescope are designed and positioned so that they share one focus and so that the second focus of the hyperbolic mirror will be at the same point at which the image is to be observed, usually just outside the eyepiece. The parabolic mirror reflects parallel light rays entering the telescope to its focus, which is also the focus of the hyperbolic mirror. The hyperbolic mirror then reflects those light rays to its other focus, where the image is observed. The radii of curvature of the primary and secondary mirrors, respectively, in the classic configuration are R_1 = -\frac{2DF}{F - B} and R_2 = -\frac{2DB}{F - B - D}, where F is the effective focal length of the system, B is the back focal length (the distance from the secondary to the focus), and D is the distance between the two mirrors.
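As a small worked example of these relations, with arbitrary illustrative numbers for F, B, and D rather than the parameters of any real telescope:

```python
F = 2000.0   # effective focal length of the system (mm), assumed
D = 380.0    # distance between the two mirrors (mm), assumed
B = 480.0    # back focal length, secondary to focus (mm), assumed

R1 = -2 * D * F / (F - B)       # primary radius of curvature
R2 = -2 * D * B / (F - B - D)   # secondary radius of curvature

print(f"R1 = {R1:.1f} mm  (primary focal length |R1|/2 = {abs(R1) / 2:.1f} mm)")
print(f"R2 = {R2:.1f} mm")
# With these numbers: R1 = -1000 mm (a 500 mm focal-length primary) and R2 = -320 mm.
```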
If, instead of B and D, the known quantities are the focal length of the primary mirror, f_1, and the distance to the focus behind the primary mirror, b, then D = f_1(F - b)/(F + f_1) and B = D + b. The conic constant of the primary mirror is that of a parabola, K_1 = -1, and that of the secondary mirror, K_2, is chosen to shift the focus to the desired location: K_2 = -1 - \alpha - \sqrt{\alpha(\alpha+2)}, where \alpha = \frac{1}{2}\left[ \frac{4DBM}{(F + BM - DM)(F - B - D)}\right] ^2, and M=(F-B)/D is the secondary magnification. Ritchey-Chrétien The Ritchey-Chrétien is a specialized Cassegrain reflector which has two hyperbolic mirrors (instead of a parabolic primary). It is free of coma and spherical aberration, and was developed by George Willis Ritchey and Henri Chrétien in the early 1910s. Dall-Kirkham The Dall-Kirkham Cassegrain telescope's design was created by Horace Dall in 1928 and took on the name in an article published in Scientific American in 1930 following discussion between amateur astronomer Allan Kirkham and Albert G. Ingalls, the magazine editor at the time. It uses a concave elliptical primary mirror and a convex spherical secondary. While this system is easier to grind than a classic Cassegrain or Ritchey-Chrétien system, it does not correct for off-axis coma and field curvature so the image degrades quickly off-axis. Because this is less noticeable at longer focal ratios, Dall-Kirkhams are seldom faster than f/15. Off-axis configurations An unusual variant of the Cassegrain is the Schiefspiegler telescope ("skewed" or "oblique reflector", also known as "Kutter telescope" after its inventor Anton Kutter [7]) which uses tilted mirrors to avoid the secondary mirror casting a shadow on the primary. However, while eliminating diffraction patterns this leads to several other aberrations that must be corrected. Several different off-axis configurations are used for radio antennas. [8] Another off-axis, unobstructed design and variant of the Cassegrain is the 'Yolo' reflector invented by Arthur Leonard. This design uses a spherical or parabolic primary and a mechanically warped spherical secondary to correct for off-axis induced astigmatism. When set up correctly the Yolo can give uncompromising, unobstructed views of planetary objects and non-wide-field targets, with no lack of contrast or image quality caused by spherical aberration. The lack of obstruction also eliminates the diffraction associated with Cassegrain and Newtonian reflector astrophotography. Catadioptric Cassegrains Schmidt-Cassegrain The Schmidt-Cassegrain was developed from the wide-field Schmidt camera, although the Cassegrain configuration gives it a much narrower field of view. The first optical element is a Schmidt corrector plate. The plate is figured by placing a vacuum on one side, and grinding the exact correction required to correct the spherical aberration caused by the primary mirror. Schmidt-Cassegrains are popular with amateur astronomers. An early Schmidt-Cassegrain camera was patented in 1946 by artist/architect/physicist Roger Hayward, [9] with the film holder placed outside the telescope. Maksutov-Cassegrain The Maksutov-Cassegrain is a variation of the Maksutov telescope named after the Soviet/Russian optician and astronomer Dmitri Dmitrievich Maksutov. It starts with an optically transparent corrector lens that is a section of a hollow sphere. It has a spherical primary mirror, and a spherical secondary that in this application is usually a mirrored section of the corrector lens.
Argunov-Cassegrain In the Argunov-Cassegrain telescope all optics are spherical, and the classical Cassegrain secondary mirror is replaced by a sub-aperture corrector consisting of three air spaced lens elements. The element farthest from the primary mirror is a Mangin mirror, in which the element acts as a second surface mirror, having a reflective coating applied to the surface facing the sky. Klevtsov-Cassegrain The Klevtsov-Cassegrain, like the Argunov-Cassegrain, uses a sub-aperture corrector. It consists of a small meniscus lens and a Mangin mirror as its "secondary mirror". [10] Cassegrain radio antennas Cassegrain designs are also utilized in satellite telecommunications earth station antennas and radio telescopes, ranging in size from 2.4 metres to 70 metres. The centrally located sub-reflector serves to focus radio frequency signals in a similar fashion to optical telescopes. See also Refracting telescope Catadioptric system Celestron (Schmidt-Cassegrains, Maksutov Cassegrains) List of telescope types Meade Instruments (Schmidt-Cassegrains, Maksutov Cassegrains) Questar (Maksutov Cassegrains) Vixen (Cassegrains, Klevtsov-Cassegrain) References 1. "Diccionario de astronomía y geología. Las ciencias de la Tierra y del Espacio al alcance de todos. Cassegrain". AstroMía. 2. André Baranne and Françoise Launay, "Cassegrain: a famous unknown of instrumental astronomy", Journal of Optics, 1997, vol. 28, no. 4, pp. 158-172. 3. Bonaventura Cavalieri, Lo specchio ustorio, overo, Trattato delle settioni coniche. 4. Stargazer, the Life and Times of the Telescope, by Fred Watson, p. 134. 5. Stargazer, p. 115. 6. Stargazer, pp. 123 and 132. 7. telescopemaking.org - The Kutter Schiefspiegler. 8. Milligan, T.A. (2005). Modern antenna design. Wiley-IEEE Press. pp. 424-429. 9. US Patent 2,403,660, Schmidt-Cassegrain camera. 10. New optical systems for small-size telescopes.
In the first propagation step of the radical chain mechanism of alkane halogenation, why does the halogen actually abstract hydrogen from the alkane? My organic chemistry textbook so far has gone into detail on MOs, hybridized orbitals, and general thermodynamics of reactions, but has been lacking on the big-picture why of reactions. Is the $\ce{C-H}$ bond broken by sufficient collisions from free halogen atoms, with the hydrogen going to the halogen as a result of $\ce{F}$/$\ce{Cl}$/$\ce{Br}$ being more electronegative than carbon, or what should cause this to happen?

Look at the two tables of bond energies on this webpage. The second or bottom table shows the $\ce{C-H}$ bond energies for various carbon-containing compounds. In the top table we see that the H-Cl bond strength is 103 kcal/mol. If we start with a chlorine radical, then if we break a C-H bond and make a Cl-H bond, the energetics of the reaction will be $$\ce{\Delta H=bonds~ broken~-~bonds~made}$$ $$\ce{\Delta H=C-H ~bond~ broken~-~H-Cl ~bond~made}$$ $$\ce{\Delta H=C-H ~bond~ broken~-~103~ kcal/mol}$$ Look at the top row in the bottom table again. As long as our chlorine radical breaks any C-H bond that has a dissociation energy less than 103 kcal/mol, our reaction will be exothermic, and the reaction will continue. Edit: As I understand it, the first propagation step in the free-radical monochlorination of methane is slightly endothermic but is driven to completion by the exothermic nature of the second propagation step. The first propagation step involves breaking a $\ce{C-H}$ bond (103 kcal/mol) and making an $\ce{H-Cl}$ bond (103 kcal/mol), so it is roughly thermoneutral according to the above link. $$\ce{CH3-H + Cl{.} -> CH3{.} + H-Cl}$$ $$\ce{CH3{.} + Cl-Cl -> CH3-Cl + Cl{.}}$$ The second propagation step involves breaking a $\ce{Cl-Cl}$ bond (58 kcal/mol) and making a $\ce{CH3-Cl}$ bond (81 kcal/mol), so it is quite exothermic. These steps are consistent with my earlier argument. You are correct that using free energies would be better (more accurate) than enthalpies, but the tables I could find were based on enthalpies - I suspect they provide us with a reasonably accurate "back of the envelope" calculation and view of what is going on. As to the actual mechanism of the reaction, once we've created a chlorine radical, we've created a reactive species. It is reactive because it doesn't have an octet of electrons. By undergoing the above reactions it can regain its octet and become a stable entity. Here is a diagram of a reaction coordinate (note we are using free energy as the Y-axis in this diagram). Generally, the more exothermic (exergonic for free energy) a reaction, the lower the activation energy required to reach the transition state and pass over from reactants to products. So our reactive chlorine radical bounces around probing different parts of the reaction coordinate (e.g. different $\ce{C-H}$ bonds, different angles and approaches to these bonds) until it encounters a bond from the right direction, right angle, etc., such that the collision has enough energy to pass over the top of our reaction coordinate and become a product.
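For concreteness, the back-of-the-envelope arithmetic above can be laid out in a couple of lines, using the bond energies quoted in this answer (treat the numbers as approximate):

    # Delta H = bonds broken - bonds made, with the bond energies quoted above (kcal/mol).
    CH3_H, H_Cl, Cl_Cl, CH3_Cl = 103, 103, 58, 81

    step1 = CH3_H - H_Cl     # CH4 + Cl.  -> CH3. + HCl
    step2 = Cl_Cl - CH3_Cl   # CH3. + Cl2 -> CH3Cl + Cl.

    print(f"Step 1: {step1:+d} kcal/mol (roughly thermoneutral)")
    print(f"Step 2: {step2:+d} kcal/mol (exothermic)")
    print(f"Overall: {step1 + step2:+d} kcal/mol")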
The first mistake you make is speaking of "the radical chain reaction". There are many reactions with different mechanisms which are radical chains. Re the cause: all reactions take place because there is some energy (more specifically: $\Delta G$) gain. Any step in that radical chain has to fulfil this requirement. The initiation step of all these radical halogenations is the homolytic cleavage of a halogen molecule: $\ce{Cl2 ->[{h \cdot \nu}] 2 Cl^{.}}$ These radicals rapidly move away from each other so immediate recombination is not an option. Also, the radical concentration is generally rather low, so it is unlikely for the radicals to find other possible recombination partners. That is what gets the chain started. So the radical has the choice of capturing one of the other species present in the mixture, which are the alkane $\ce{C_$n$H_{2$n$ +2}}$ or unreacted halogen $\ce{Cl2}$ (or previously reacted haloalkanes). In all cases, one bond would be cleaved and one formed, giving a new radical species. If the radical happens to meet $\ce{Cl2}$ then nothing apparent happens; only if you coloured one chlorine atom yellow, the second blue and the third red would you observe a change. If the halogen radical meets an alkane and the Gibbs free energy change is favourable (refer to ron’s answer for numbers) it will react. It cannot directly bind to carbon, because that would induce a $\ce{H^{.}}$ radical which is much more unstable than anything else — $\Delta G$ is too high. Instead, it cleaves a $\ce{C-H}$ bond by abstracting hydrogen: $\ce{Cl^{.} + H-C_{$n$}H_{2$n$+1} -> Cl-H + ^{.}C_{$n$}H_{2$n$ + 1}}$. This new radical then goes looking for reaction partners and usually finds a $\ce{Cl2}$ molecule first according to what we write — but it could also find another alkane and transfer the radical from one alkane molecule to another.
Practice Paper 1 Question 9 Player \(A\) rolls one die. Player \(B\) rolls two dice. If \(A\) rolls a number greater than or equal to the largest number rolled by \(B\), then \(A\) wins, otherwise \(B\) wins. What is the probability that \(B\) wins? Warm-up Questions You roll a die 3 times. What is the probability that at least one roll is greater than 2? You roll two dice. What is the probability that their sum is less than 7? A coin is flipped 3 times, displaying either heads (\(H\)) or tails (\(T\)). What is the probability that you do not get \(\text{H H H}\)? Hints Hint 1: Let \(a\) represent the number rolled by \(A\). What is the probability that \(B\) wins in terms of \(a\)? Hint 2: \(A\) wins if both of \(B\)'s rolls are smaller than or equal to \(a\). Hint 3: Consider the scenario from the previous hint. What is the probability that \(B\) does not win for a given \(a\)? Hint 4: How can we combine the expression from the previous hint over all possible values of \(a\)?

Solution Let \(a \in \{1,2,3,4,5,6\}\) represent the number rolled by \(A\). \(B\) wins if at least one of \(B\)'s rolls is higher than \(a\). This is \(1\) minus the probability that both of \(B\)'s rolls are smaller than or equal to \(A\)'s. For a given \(a\), there exist \(a\) numbers smaller than or equal to \(a\) that may be rolled. Therefore, for a given \(a\) the probability of \(B\) winning is \(1-(\frac{a}{6})^2\). Now, consider all possible values of \(a\). The probability of any value of \(a \in \{1,2,3,4,5,6\}\) being rolled is \(\frac{1}{6}.\) So, the overall probability that \(B\) wins is \(\sum_{a=1}^{6} \frac{1}{6}\big(1-(\frac{a}{6})^2\big) = \frac{125}{216}.\) See properties of summations here to aid in evaluating the sum.
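A quick Monte Carlo simulation (not needed for the solution, but a useful sanity check) agrees with \(\frac{125}{216} \approx 0.5787\):

    # Monte Carlo sanity check of P(B wins) = 125/216.
    import random

    trials = 200_000
    b_wins = 0
    for _ in range(trials):
        a = random.randint(1, 6)
        b = max(random.randint(1, 6), random.randint(1, 6))
        if b > a:          # B wins when the larger of B's rolls exceeds A's roll
            b_wins += 1

    print(b_wins / trials, 125 / 216)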
1. Logarithm: the exponent or power to which a base must be raised to yield a given number. Mathematically: x is the logarithm of n to the base b if \( b^x \) = \( n \), and this is represented as \( x \) = \( log_{b}n \).

2. Common logarithms: Logarithms to the base 10 are known as common logarithms.

3. Characteristic: The integral part of the logarithm of a number is called its characteristic. For example: \( log_{10}15 \) = 1.176 = 1+0.176. The integer part is 1, so the characteristic is 1. The characteristic of the logarithm of a number is an integer, negative or positive. For example: \( log_{10}0.5 \) = -0.301 = -1+0.699. Here the characteristic is -1, which is also written as \(\bar{1}\). (i) When the number is greater than 1: the characteristic is one less than the number of digits to the left of the decimal point in the given number. Examples: \( log_{10}15 \) = 1.176 = 1+0.176. The number of digits in 15.0 to the left of the decimal is 2, so the characteristic is 2-1=1. \( log_{10}183.5 \) = 2.2636 = 2+0.2636. The number of digits in 183.5 to the left of the decimal is 3, so the characteristic is 3-1=2. (ii) When the number is positive and less than 1: the characteristic is one more than the number of zeros between the decimal point and the first significant digit of the number, and it is negative. Instead of writing the minus sign in front, the negative characteristic is written with a bar, e.g. \(\bar{1}\) instead of -1. Examples: \( log_{10}0.6 \) = -0.2218 = -1+0.7782. The number of zeros between the decimal point and the first significant digit of 0.6 is 0, so the characteristic is 0+1=1 with a negative sign, i.e. the characteristic is -1. \( log_{10}0.008\) = -2.0969 = -3+0.9031. The number of zeros between the decimal point and the first significant digit of 0.008 is 2, so the characteristic is 2+1=3 with a negative sign, i.e. the characteristic is -3.

4. Mantissa: The decimal part of the logarithm of a number is known as its mantissa. The mantissa can never be negative. For example: \( log_{10}15 \) = 1.176 = 1+0.176. The decimal part is 0.176, so the mantissa is 0.176. There is also a log table to look up the mantissa.

Example 1 Use the properties of logarithms to rewrite the expression as a single logarithm: \(2log_{b}x\) + \(\frac{1}{2}log_{b}(x + 4)\) Solution: \(2log_{b}x + \frac{1}{2}log_{b}(x + 4) = log_{b}x^{2} + log_{b}(x+4)^{1/2} = log_{b}\left(x^{2}\sqrt{x+4}\right)\).

Example 2 Use the properties of logarithms to rewrite the expression as a single logarithm: \(4log_{b}(x + 2) – 3log_{b}(x – 5)\) Solution: \(4log_{b}(x + 2) – 3log_{b}(x – 5) = log_{b}\frac{(x+2)^{4}}{(x-5)^{3}}\).

Example 3 Use the properties of logarithms to express the following logarithm in terms of logarithms of \(x\), \(y\) and \(z\). \(log_{b}(xy^{2})\) Solution: \(log_{b}(xy^{2}) = log_{b}x + 2log_{b}y\).

Example 4 Use the properties of logarithms to express the following logarithm in terms of logarithms of \(x\), \(y\) and \(z\). \(log_{b}\frac{x^{2}\sqrt{y}}{z^{5}}\) Solution: \(log_{b}\frac{x^{2}\sqrt{y}}{z^{5}} = 2log_{b}x + \frac{1}{2}log_{b}y - 5log_{b}z\).

2. If \( log_{\sqrt{8}}x \) = 3 \(\frac{1}{3}\), find the value of \(x\)? Solution: \( log_{\sqrt{8}}x = \frac{10}{3}\) means \( x = (\sqrt{8})^{10/3} = 8^{5/3} = (2^{3})^{5/3} = 2^{5} = 32\).

3. Simplify: \(log \frac{75}{16}\) – 2\(log \frac{5}{9}\) + \(log \frac{32}{243}\)? Solution: \(log \frac{75}{16} - 2log \frac{5}{9} + log \frac{32}{243} = log\left(\frac{75}{16}\cdot\frac{81}{25}\cdot\frac{32}{243}\right) = log 2\).

4. If \( log_{10}2 \) = 0.30103, find the value of \( log_{10}50 \)? Solution: \( log_{10}50 = log_{10}\frac{100}{2} = 2 - log_{10}2 = 2 - 0.30103 = 1.69897\).

5. Simplify : \(\frac{1}{log_{xy}xyz} + \frac{1}{log_{yz}xyz} + \frac{1}{log_{zx}xyz}\)? Solution: Each term satisfies \(\frac{1}{log_{xy}xyz} = log_{xyz}(xy)\) (and similarly for the others), so the sum is \( log_{xyz}(xy) + log_{xyz}(yz) + log_{xyz}(zx) = log_{xyz}(xyz)^{2} = 2\).

Properties of Logarithms: 1. \(\log_{a}\) 1 = 0 2. \(\log_{a}\) a = 1 3. \(\log_{a}\) 0 = \(\begin{cases} – \infty\;\;if\;\;a > 1\\ + \infty\;\;if\;\;a < 1 \end{cases}\) 4. \(\log_{a}\) (xy) = \(\log_{a}\) x + \(\log_{a}\) y 5. \(\log_{a}\;\frac{x}{y}\) = \(\log_{a}\) x – \(\log_{a}\) y 6. \(\log_{a}\;\sqrt[n]{x}\) = \(\frac{1}{n}\) \(\log_{a}\) x 7. \(\log_{a}\;x\) = \(\frac{\log_{c}\;x}{\log_{c}\;a}\) = \(\log_{c}\;x\) \(\cdot\) \(\log_{a}\;c\), c \(>\) 0, c \(\neq\) 1 8. \(\log_{a}\;c\) = \(\frac{1}{\log_{c}\;a}\) 9. x = \(a^{\log_{a}\;x}\) 10. Logarithm to Base 10: \(\log_{10}\;x\) = log x 11. Natural Logarithm: \(\log_{e}\;x\) = ln x, where e = \(\displaystyle{\lim_{k \to \infty}}(1 + \frac{1}{k})^k\) = 2.718281828… 12. log x = \(\frac{1}{\ln 10}\) ln x = 0.434294 ln x 13. ln x = \(\frac{1}{\log e}\) log x = 2.302585 log x
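A short script can reproduce the characteristic/mantissa split described above (just a small illustration, assuming Python is at hand):

    # Split log10(n) into an integer characteristic and a non-negative mantissa.
    import math

    def char_and_mantissa(n):
        L = math.log10(n)
        characteristic = math.floor(L)    # e.g. -1 for 0.5 (written "bar 1")
        mantissa = L - characteristic     # always in [0, 1)
        return characteristic, mantissa

    for n in (15, 183.5, 0.6, 0.008):
        c, m = char_and_mantissa(n)
        print(n, c, round(m, 4))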
Given that $ \operatorname{Var}(X)=\frac{3}{4}\theta^2$, I want to find the variance of the estimator $\hat{\theta_1} = \frac{2}{3n}\sum_{i=1}^nX_i$. EDIT: $X_1,...,X_n$ are independent and identically distributed (iid) I proceed as follows: $$ \operatorname{Var}(\hat{\theta}) = \operatorname{Var}\left(\frac{2}{3n}\sum_{i=1}^nX_i\right)=\frac49 \frac1n \operatorname{Var}(X_1)= \frac49 \frac1n \frac34 \theta^2 = \frac{1}{3n}\theta^2 $$ or $$ \operatorname{Var}(\hat{\theta}) = \operatorname{Var}\left(\frac{2}{3n}\sum_{i=1}^nX_i\right)= \operatorname{Var}\left(\frac{2}{3n}nX_1\right)= \operatorname{Var}\left(\frac{2}{3}X_1\right)=\frac49 \frac34\theta^2 = \frac13\theta^2 $$ Our professor provided us with the solution, so I know the first approach is the correct one; however, I do not understand what is wrong with the second approach?
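A quick simulation makes the discrepancy visible. This is just my own sanity check; I assume $X_i \sim \mathrm{Uniform}(0, 3\theta)$, which is one distribution with $\operatorname{Var}(X)=\frac34\theta^2$ (any other such distribution would do):

    # Compare the simulated variance of the estimator with the two candidate answers.
    import numpy as np

    theta, n, reps = 2.0, 10, 100_000
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 3 * theta, size=(reps, n))      # Var(X) = 3*theta^2/4
    theta_hat = (2 / (3 * n)) * X.sum(axis=1)

    print("simulated Var:", theta_hat.var())
    print("theta^2/(3n): ", theta**2 / (3 * n))        # first approach
    print("theta^2/3:    ", theta**2 / 3)              # second approach

The simulated variance agrees with the first approach, $\theta^2/(3n)$, not with $\theta^2/3$.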
Is the following sequence $x_{n}=\frac{1 +(-1)^n}{n}$ Cauchy? I got that it is not Cauchy, but would appreciate someone checking this. Thank you! So I got that the sequence is bounded by 2 and that $|x_n-x_{n+1}|\le 2$; then if we pick $\epsilon=1/2$ the sequence is not Cauchy... is my proof sufficient? Hint: Recall that a convergent sequence is automatically Cauchy. Next, note that the triangle inequality yields $$|x_n| \leq \frac{1 + |(-1)^n|}{n} =\frac{2}{n}.$$ From here can you show that $\lim{x_n}$ exists? $|x_n-x_m| \leq \frac 1 n +\frac 1 n +\frac 1 m +\frac 1 m <\epsilon $ whenever $n, m \geq \frac 4 {\epsilon}$. If $n$ is even then $x_n = \frac {1+1}n = \frac 2n$ and if $n$ is odd then $x_n = \frac {1-1}n = 0$. It should be well known that $\frac 1n \to 0$, so $\frac 2n\to 0$ and, of course, $0\to 0$; so $x_n \to 0$, and converging sequences are Cauchy, so this should be Cauchy. Just as we prove that $b_n = \frac 1n$ is a Cauchy sequence by saying that for any $\epsilon > 0$, if we let $N = \frac 2{\epsilon}$, then for $n,m > N$ we have $\frac 1n < \frac 1N = \frac \epsilon 2$ and $\frac 1m <\frac \epsilon 2$, so $|b_n - b_m|=|\frac 1n - \frac 1m| \le |\frac 1n| + |\frac 1m| < \frac \epsilon 2 + \frac \epsilon 2$... We prove $x_n$ is Cauchy by pointing out that for any $\epsilon > 0$, if we let $N = \frac 4\epsilon$, then for $n,m > N$: if $n$ is even then $x_n =\frac 2{n}< \frac 2N = \frac 12\epsilon$, and if $n$ is odd then $x_n = 0 < \frac 12 \epsilon$, so either way $x_n < \frac 12 \epsilon$. The same holds for $x_m$, and $|x_n - x_m| \le |x_n| + |x_m| < \frac 12 \epsilon + \frac 12 \epsilon = \epsilon$.
"We have spent a decade collecting measurements of 1.2 million galaxies over one quarter of the sky to map out the structure of the Universe over a volume of 650 cubic billion light years,” says. Jeremy Tinker of New York University, a co-leader of the scientific team that led this effort. Hundreds of scientists are part of the Sloan Digital Sky Survey III (SDSS-III) team. These new measurements were carried out by the Baryon Oscillation Spectroscopic Survey (BOSS) programme of SDSS-III. Shaped by a continuous tug-of-war between dark matter and dark energy, the map revealed by BOSS allows astronomers to measure the expansion rate of the Universe by determining the size of the so-called baryonic acoustic oscillations (BAO) in the three-dimensional distribution of galaxies. Ref: The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample. arXiv - Astrophysics (11 July 2016) | arXiv:1607.03155 | PDF (Open Access) ABSTRACT We present cosmological results from the final galaxy clustering data set of the Baryon Oscillation Spectroscopic Survey, part of the Sloan Digital Sky Survey III. Our combined galaxy sample comprises 1.2 million massive galaxies over an effective area of 9329 deg^2 and volume of 18.7 Gpc^3, divided into three partially overlapping redshift slices centred at effective redshifts 0.38, 0.51, and 0.61. We measure the angular diameter distance DM and Hubble parameter H from the baryon acoustic oscillation (BAO) method after applying reconstruction to reduce non-linear effects on the BAO feature. Using the anisotropic clustering of the pre-reconstruction density field, we measure the product DM*H from the Alcock-Paczynski (AP) effect and the growth of structure, quantified by f{\sigma}8(z), from redshift-space distortions (RSD). We combine measurements presented in seven companion papers into a set of consensus values and likelihoods, obtaining constraints that are tighter and more robust than those from any one method. Combined with Planck 2015 cosmic microwave background measurements, our distance scale measurements simultaneously imply curvature {\Omega} K =0.0003+/-0.0026 and a dark energy equation of state parameter w = -1.01+/-0.06, in strong affirmation of the spatially flat cold dark matter model with a cosmological constant ({\Lambda}CDM). Our RSD measurements of f{\sigma}8, at 6 per cent precision, are similarly consistent with this model. When combined with supernova Ia data, we find H0 = 67.3+/-1.0 km/s/Mpc even for our most general dark energy model, in tension with some direct measurements. Adding extra relativistic species as a degree of freedom loosens the constraint only slightly, to H0 = 67.8+/-1.2 km/s/Mpc. Assuming flat {\Lambda}CDM we find {\Omega}_m = 0.310+/-0.005 and H0 = 67.6+/-0.5 km/s/Mpc, and we find a 95% upper limit of 0.16 eV/c^2 on the neutrino mass sum. ADDITIONAL RESEARCH Anisotropic galaxy clustering in Fourier-space. arXiv:1607.03150 Combining correlated Gaussian posterior distributions. arXiv:1607.03146 Angular clustering tomography and its cosmological implications. arXiv:1607.03144 Cosmological implications of the Fourier space wedges of the final sample. arXiv:1607.03143
Answer Name of the figure $=$ square Perimeter $= 9$ ft Area $ \approx 5.1$ sq ft Work Step by Step 1. Name of the figure Square ($4$ equal sides) 2. Perimeter Let $P =$ perimeter of the square $P = 4 \times 2.25$ (Note that $2.25 = 2\frac{1}{4}$) $P = 9$ ft 3. Area Let $A =$ area of the square $A = 2.25 \times 2.25$ $A = 5.0625$ ft$^{2}$ $A \approx 5.1$ ft$^{2}$
Integrate $$I=\int \frac{dx}{\sqrt{4-\sin^2 x}}$$ We can write it as $$I=\sqrt{2} \times \int \frac{dx}{\sqrt{7+\cos 2x}}$$ Putting $\tan x=t$ we get $$I=\sqrt{2}\int \frac{dt}{\sqrt{1+t^2}\sqrt{8+6t^2}}$$ Is it possible to continue? Nope, this is an elliptic integral as per Botond's comment. The best you can do is express it as an elliptic integral of the first kind: $$\frac{1}{2}\int (1 - \frac{1}{4} \sin^2 x)^{-1/2} \, \mathrm{d}x = \frac{1}{2}F\left(x \, \bigg | \,\frac{1}{4}\right)$$
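For what it's worth, the identity can be checked numerically; this is a quick sketch assuming SciPy is available (scipy.special.ellipkinc(phi, m) implements $F(\phi\,|\,m)$ in the same parameter convention used above):

    # Numerical check of (1/2) F(x | 1/4) against direct integration.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import ellipkinc

    x = 1.3
    numeric, _ = quad(lambda t: 1 / np.sqrt(4 - np.sin(t)**2), 0, x)
    print(numeric, 0.5 * ellipkinc(x, 0.25))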
Does someone know any inequality between $||u||_{p}$ and $||\gamma (u)||_{p, \partial \Omega}$, where $\gamma$ is the trace operator? I need to find something like $||u||_{p}\leq C||\gamma (u)||_{p, \partial \Omega}$ or $||u||_{p}\leq C(||\gamma (u)||_{p, \partial \Omega} + ||\nabla u||_{p})$, but I couldn't find any in the bibliography. Thanks in advance. Here $ ||u||_{p}=||u||_{L^{p}(\Omega)} $ and $\Omega$ is a Lipschitz domain.
Every day it rains with probability $ \frac12 $; otherwise the weather is sunny. The weather forecast is wrong with probability $ \frac13 $ (if the forecast says it will be sunny, then with probability $ \frac23 $ it really will be sunny). The professor always takes an umbrella if the forecast announces rain. If sunny weather is announced, the professor takes an umbrella with probability $ \frac13 $. (1) Calculate the probability that the forecast announces rain. My approach is this: there are two possible announcements, rain or sun. Therefore, $ \Omega = \{s, r \} $. As both events have an equal chance of occurrence, I can use classical probability. So the answer is $ \frac12 $. I am not sure this is right, though; what could be wrong? (2) Assuming it rains, calculate the probability that the professor does not have an umbrella. I will think about the sample space, because I have to find it. I do not really see how to choose it, but here is my attempt: it depends on what the forecast said. If rain was predicted, then the professor took his umbrella. However, if sun was predicted, the professor took the umbrella with probability $ \frac13 $. That is, given that it rains, with probability $ \frac23 $ the forecast was correct and the professor has an umbrella; if the forecast was wrong, which has probability $ \frac13 $, then the professor took an umbrella with probability $ \frac13 $. From my deduction this gives $ \frac13 \cdot \frac13 + \frac23 \cdot1 $. But is this right? If not, can you explain what is wrong? It is very important for me to understand it.
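Not an answer, just a quick way to check your reasoning numerically: a small simulation of the setup as I read it (forecast correct with probability 2/3; umbrella always taken when rain is announced, with probability 1/3 otherwise):

    # Monte Carlo check of both parts of the problem.
    import random

    N = 200_000
    forecast_rain = rain_days = rain_and_no_umbrella = 0
    for _ in range(N):
        rain = random.random() < 0.5
        correct = random.random() < 2 / 3
        says_rain = rain if correct else not rain
        if says_rain:
            forecast_rain += 1
            umbrella = True
        else:
            umbrella = random.random() < 1 / 3
        if rain:
            rain_days += 1
            if not umbrella:
                rain_and_no_umbrella += 1

    print("P(forecast says rain) ~", forecast_rain / N)
    print("P(no umbrella | rain) ~", rain_and_no_umbrella / rain_days)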
Let me slightly change the notation. You have some sequence $\mathbf{a}=a_0,a_1,\cdots$ with each $a_i$ either $0$ or $1$. Define a sequence $u_0,u_1,u_2,\cdots$ by setting $u_m=0$ for $m \lt 0$, $u_0=1$ and for any $n \ge 0$, $u_{n+1}=\sum_{i=1}^{\infty}a_iu_{n-i}.$ As you note in a comment, if $a_1=1$ then $u_i$ is a non-decreasing and eventually increasing sequence (excluding a trivial case). The sequence $u_i$ will eventually be increasing as long as the set of $j$ with $a_j=1$ has a $\gcd$ of $1$. Consider the power series $f(r)=r-(a_0+\frac{a_1}{r}+\frac{a_2}{r^2}+\cdots).$ Then $f(r)$ is defined and increasing for $r \gt 1.$ There is a unique $\lambda=\lambda_{\mathbf{a}}>1$ with $f(\lambda)=0.$ I think that there should be a constant $c \le 1$ with $\lim \frac{u_n}{c\lambda^n}=1$. We can also say that $\lambda \le 2$ with equality only in the degenerate case that $a_i=1$ for all $i$. For any $1 \lt r \lt 2$ we can create an $\mathbf{a}$ with $\lambda_{\mathbf{a}}=r$ by simply choosing the $a_i$ one at a time to keep the (non-decreasing) partial sums of $f(r)$ non-negative. We can partially order the possible sequences $\mathbf{a}=(a_i)$ by saying $\mathbf{a}<\mathbf{b}$ when $a_i=b_i$ for $0 \le i \lt j$ and $0=a_j<b_j=1.$ In this case, $\lambda_{\mathbf{a}}<\lambda_{\mathbf{b}}$. In case $a_i=0$ from some point on, we know how to find $\lambda$ as the root of a polynomial, and if $a_i=1$ from some point on we again know how to find $\lambda$ as the root of a polynomial (perhaps with a coefficient equal to $2$). So this gives us both a lower bound and an upper bound on $\lambda_{\mathbf{a}}$ based on any initial portion of $\mathbf{a}.$ I'll stop there for now although probably more can be said. Consider the $2^d$ sequences $\mathbf{a}$ with $a_{d+m}=0$ for $m \gt 0.$ If all the roots, real and complex, of $f(r)$ are plotted then you should get a picture something like the one below. This plot is for $d=12$; you can see a similar one at this question
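To make this concrete, here is a small numerical sketch (my own illustration, not part of the original answer) that finds $\lambda_{\mathbf{a}}$ for a finite initial segment of $\mathbf{a}$ by bisection, using the fact that $f$ is increasing and changes sign on $(1,2]$:

    # Find lambda_a for a finite 0/1 sequence a = (a_0, ..., a_d) as the root
    # in (1, 2] of f(r) = r - sum_i a_i / r^i, by bisection.
    def lam(a, tol=1e-12):
        f = lambda r: r - sum(ai / r**i for i, ai in enumerate(a))
        lo, hi = 1.0, 2.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) > 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    print(lam([1, 1]))      # root of r = 1 + 1/r, about 1.618
    print(lam([1, 0, 1]))   # root of r = 1 + 1/r^2, about 1.466
    print(lam([1] * 50))    # approaches 2 as more of the a_i equal 1

Longer and longer initial segments give the lower and upper bounds on $\lambda_{\mathbf{a}}$ mentioned above.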
Dictionary: Q (Quality)

1. Quality factor, the ratio of 2π times the peak energy to the energy dissipated in a cycle; the ratio of 2π times the power stored to the power dissipated. The seismic Q of rocks is of the order of 50 to 300. Q is related to other measures of absorption (see below):

\frac{1}{Q} = \frac{\alpha V}{\pi f} = \frac{\alpha \lambda}{\pi} = \frac{hT}{\pi} = \frac{\delta}{\pi} = \frac{2\Delta f}{f_\mathrm{r}}

where V, f, λ, and T are, respectively, velocity, frequency, wavelength, and period. [1] The absorption coefficient α is the term for the exponential decrease of amplitude with distance because of absorption; the amplitude of plane harmonic waves is often written as

A\mathrm{e}^{-\alpha x} \sin 2 \pi f ( t - \tfrac{x}{V} )

where x is the distance traveled. The logarithmic decrement δ is the natural log of the ratio of the amplitudes of two successive cycles. The last equation above relates Q to the sharpness of a resonance condition; f_r is the resonance frequency and Δf is the change in frequency that reduces the amplitude by 1/√2. The damping factor h relates to the decrease in amplitude with time,

A(t) = A_0\mathrm{e}^{-ht} \cos \omega t

2. The ratio of the reactance of a circuit to the resistance.

3. A term to describe the sharpness of a filter; the ratio of the midpoint frequency to the bandpass width (often at 3 dB).

4. A designation for Love waves (q.v.).

5. Symbol for the Koenigsberger ratio (q.v.).

6. See Q-type section.

References
1. Sheriff, R. E. and Geldart, L. P., 1995, Exploration Seismology, 2nd Ed., Cambridge Univ. Press.
2. Sheriff, R. E., 1989, Geophysical methods, pg. 330: Prentice Hall Inc.
Content Description Regression problems occur in many metrological applications, e.g. in everyday calibration tasks (as illustrated in Annex H.3 of the GUM ), in the evaluation of interlaboratory comparisons, the characterization of sensors [Matthews et al., 2014], determination of fundamental constants [Bodnar et al., 2014], interpolation or prediction tasks [Wübbeler et al., 2012], and many more. Such problems arise when the quantity of interest cannot be measured directly, but has to be inferred from measurement data (and their uncertainties) using a mathematical model that relates the quantity of interest to the data. For example, regressions may serve to evaluate the functional relation between variables. Definition and Examples Regression problems often take the form $$ \begin{equation*} y_i = f_{\boldsymbol{\theta}}(x_i) + \varepsilon_i , \quad i=1, \ldots, n \,, \end{equation*} $$ where the measurements $\boldsymbol{y}=(y_1, \ldots, y_n)^\top$ are explained by a function $f_{\boldsymbol{\theta}}$ evaluated at values $\boldsymbol{x}=(x_1, \ldots, x_n)^\top$ and depending on unknown parameters $\boldsymbol{\theta}=(\theta_1, \ldots, \theta_p)^\top$. The measurement error $\pmb{\varepsilon}=(\varepsilon_1, \ldots, \varepsilon_n)^\top$ follows a specified distribution $p(\pmb{\varepsilon} | \boldsymbol{\theta}, \boldsymbol{\sigma}).$ Regressions may be used to describe the relationship between a traceable, highly accurate reference device with values denoted by $x$ and a device to be calibrated with values denoted by $y$. The pairs $(x_i,y_i)$ then denote simultaneous measurements made by the two devices of the same measurand such as, for example, temperature. A simple example is the Normal straight line regression model (as illustrated in Figure 1): $$ \begin{equation} \label{int_reg_eq1} y_i = \theta_1 + \theta_2 x_i + \varepsilon_i , \quad \varepsilon_i \stackrel{iid}{\sim} \text{N}(0, \sigma^2), \quad i=1, \ldots, n \,. \end{equation} $$ The basic goal of regression tasks is to estimate the unknown parameters $\pmb{\theta}$ of the regression function and possibly also the unknown parameters of the error distribution $\pmb{\sigma}$. The estimated regression model may then be used to evaluate the shape of the regression function, predictions or interpolations of intermediate or extrapolated $x$-values, or to invert the regression function to predict $x$-values for new measurements. Research Decisions based on regression analyses require a reliable evaluation of measurement uncertainty. The current state of the art in uncertainty evaluation in metrology (i.e. the GUM and its supplements) provides little guidance for regression, however. One reason is that the GUM guidelines are based on a model that relates the quantity of interest (the measurand) to the input quantities. Yet, regression models cannot be uniquely formulated as such a measurement function. By way of example, Annex H.3 of the GUM nevertheless suggests a possibility for analyzing regression problems. However, this analysis contains elements from both classical (least squares) and Bayesian statistics such that the results are not deduced from state-of-knowledge distributions and usually differ from a purely classical or Bayesian approach which was shown in [Elster et al., 2011]. Consequently, there is a need for guidance and research in metrology for uncertainty evaluation in regression problems. The Joint Committee for Guides in Metrology (JCGM) has recognized this need. 
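As a small illustration of the straight-line model (1) above (this is my own sketch, not code from the Guide or from the PTB software described below), one can simulate data from the model and compute ordinary least-squares estimates together with their standard uncertainties:

    # Simulate data from model (1) and fit it by ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(1)
    theta1, theta2, sigma, n = 0.5, 2.0, 0.3, 25
    x = np.linspace(0, 10, n)
    y = theta1 + theta2 * x + rng.normal(0, sigma, n)

    X = np.column_stack([np.ones(n), x])            # design matrix
    theta_hat, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = res[0] / (n - 2)                           # estimated error variance
    cov = s2 * np.linalg.inv(X.T @ X)               # parameter covariance
    print("estimates:", theta_hat)
    print("standard uncertainties:", np.sqrt(np.diag(cov)))

The Bayesian treatments referenced on this page go further, for example by incorporating prior knowledge and reporting full posterior distributions for the parameters and predictions.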
PTB Working Group 8.42 led the development of guidance for Bayesian inference of regression problems within the EMRP project NEW04, which is summarized in a Guide [Elster et al., 2015]. This Guide also contains template solutions for specific regression problems with known values $\boldsymbol{x}$ and is available free of charge at the NEW04 project web page. For regression problems with Gaussian measurement errors and linear regression functions (such as in formula (1)), [Klauenberg et al., 2015_2] provide guidance when extensive numerical calculations (such as Markov Chain Monte Carlo methods) are to be avoided in a Bayesian inference. In addition, PTB Working Group 8.42 carries out research emerging from metrological applications involving regression. For example, for the analysis of magnetic field fluctuation thermometry, [Wübbeler et al., 2012] propose and validate a Bayesian approach and [Wübbeler et al., 2013] a simplified approach to perform interpolations or predictions based on regression results; for the determination of fundamental constants, [Bodnar et al., 2014] provide an objective Bayesian inference and compare it to the Birge ratio method; for the analysis of immunological tests called ELISA, [Klauenberg et al., 2015] have developed informative prior distributions which are widely applicable; for the calibration of flow meters, [Kok et al., 2015] provide a Bayesian analysis which accounts for constraints on the values of the regression curve. Software In order to facilitate the application of the methods developed in the working group, the following software implementations are made available free of charge. MCMC implementation for the analysis of magnetic field fluctuation thermometry Bayesian approaches to performing regression often require numerical methods such as Markov Chain Monte Carlo (MCMC) sampling. For the analysis in magnetic field fluctuation thermometry, PTB Working Group 8.42 has developed a MATLAB software package to perform MCMC sampling from the posterior distribution of the calibration parameters and to subsequently estimate temperatures. This software is available in the electronic supplement to the related publication. Related publication G. Wübbeler, F. Schmähling, J. Beyer, J. Engert, and C. Elster (2012). Analysis of magnetic field fluctuation thermometry using Bayesian inference. Meas. Sci. Technol. 23, 125004 (9pp), [DOI: 10.1088/0957-0233/23/12/125004]. WinBUGS software for the analysis of immunoassay data The Bayesian approach enables the inclusion of additional prior knowledge in regression problems, but often requires numerical methods such as Markov Chain Monte Carlo (MCMC) sampling. For the analysis of immunoassay data, PTB Working Group 8.42 has developed WinBUGS software code to perform MCMC sampling from the posterior distribution for the calibration parameters and the unknown concentration. This software is available in A Guide to Bayesian Inference for Regression Problems. Related publications K. Klauenberg, M. Walzel, B. Ebert, and C. Elster (2015). Informative prior distributions for ELISA analyses. Biostatistics 16, 454-464, [DOI: 10.1093/biostatistics/kxu057]. C. Elster, K. Klauenberg, M. Walzel, G. Wübbeler, P. Harris, M. Cox, C. Matthews, I. Smith, L. Wright, A. Allard, N. Fischer, S. Cowen, S. Ellison, P. Wilson, F. Pennecchi, G. Kok, A. van der Veen, and L. Pendrill (2015).
A Guide to Bayesian Inference for Regression Problems Deliverable of EMRP project NEW04 “Novel mathematical and statistical approaches to uncertainty evaluation”, [download (pdf)]. Rejection sampling for the flow meter calibration problem Bayesian approaches to Normal linear regression problems yield analytical solutions under certain circumstances. Nevertheless, accounting for constraints on the values of the regression curve when calibrating flow meters requires a Monte Carlo procedure combined with an accept/reject algorithm to obtain samples from the posterior distribution. MATLAB source code implementing this algorithm is available in A Guide to Bayesian Inference for Regression Problems Related publications G. J. P. Kok, A. M. H. van der Veen, P. M. Harris, I.M. Smith, C. Elster (2015). Bayesian analysis of a flow meter calibration problem. Metrologia52, 392-399, [DOI: 10.1088/0026-1394/52/2/392]. C. Elster, K. Klauenberg, M. Walzel, G. Wübbeler, P. Harris, M. Cox, C. Matthews, I. Smith, L. Wright, A. Allard, N. Fischer, S. Cowen, S. Ellison, P. Wilson, F. Pennecchi, G. Kok, A. van der Veen, and L. Pendrill (2015). A Guide to Bayesian Inference for Regression Problems Deliverable of EMRP project NEW04 “Novel mathematical and statistical approaches to uncertainty evaluation”, [download (pdf)]. Software for Bayesian Normal linear regression Under certain circumstances, Bayesian approaches to Normal linear regression problems yield analytical solutions. In connection with a tutorial, PTB Working Group 8.42 provides software to calculate the posterior distribution of all regression parameters, the regression curve, predictions as well as for estimates, most uncertainties and credible intervals, and also graphically represent these quantities. MATLAB and R source code for these calculations is available at Related Publications C. Elster, K. Klauenberg, M. Walzel, G. Wübbeler, P. Harris, M. Cox, C. Matthews, I. Smith, L. Wright, A. Allard, N. Fischer, S. Cowen, S. Ellison, P. Wilson, F. Pennecchi, G. Kok, A. van der Veen, and L. Pendrill (2015). A Guide to Bayesian Inference for Regression Problems Deliverable of EMRP project NEW04 “Novel mathematical and statistical approaches to uncertainty evaluation”, [download (pdf)]. An introductory example for Markov chain Monte Carlo (MCMC) When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. PTB Working Group 8.42 has developed a concise introduction, illustrated by a simple, typical example from metrology. Accompanied with few lines of software code to implement the most basic and yet flexible MCMC method, interested readers are invited to get started. MATLAB as well as R source code are available in the related publication. Related Publication K. Klauenberg und C. Elster Markov chain Monte Carlo methods: an introductory example. Metrologia, 53(1), S32, 2016. [DOI: 10.1088/0026-1394/53/1/S32] Publications • J. Martin, G. Bartl and C. Elster Application of Bayesian model averaging to the determination of thermal expansion of single-crystal silicon. Measurement Science and Technology, 30 045012, 2019. [DOI: 10.1088/1361-6501/ab094b] • G. Wübbeler, O. 
Bodnar and C. Elster Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant. Metrologia, 55(1), 20, 2018. [DOI: 10.1088/1681-7575/aa98aa] • J. Lehnert, G. Wübbeler, C. Kolbitsch, A. Chiribiri, L. Coquelin, G. Ebrard, N. Smith, T. Schäffter and C. Elster Physics in Medicine & Biology, 63 215017, 2018. [DOI: 10.1088/1361-6560/aae758] • C. Elster and G. Wübbeler Bayesian inference using a noninformative prior for linear Gaussian random coefficient regression with inhomogeneous within-class variances. Comput. Stat., 32(1), 51--69, 2017. [DOI: 10.1007/s00180-015-0641-3] • M. Dierl, T. Eckhard, B. Frei, M. Klammer, S. Eichstädt and C. Elster Journal of the Optical Society of America A, 33(7), 1370--1376, 2016. [DOI: 10.1364/JOSAA.33.001370] • C. Elster and G. Wübbeler Metrologia, 53(1), S10, 2016. [DOI: 10.1088/0026-1394/53/1/S10] • K. Klauenberg and C. Elster Metrologia, 53(1), S32, 2016. [DOI: 10.1088/0026-1394/53/1/S32] • C. Elster, K. Klauenberg, M. Walzel, P. M. Harris, M. G. Cox, C. Matthews, L. Wright, A. Allard, N. Fischer, S. Ellison, P. Wilson, F. Pennecchi, G. J. P. Kok, A. Van der Veen and L. Pendrill EMRP NEW04, , 2015 • K. Klauenberg, M. Walzel, B. Ebert and C. Elster Biostatistics, 16(3), 454--64, 2015. [DOI: 10.1093/biostatistics/kxu057] • G. J. P. Kok, A. M. H. van der Veen, P. M. Harris, I. M. Smith and C. Elster Metrologia, 52(2), 392-399, 2015. [DOI: 10.1088/0026-1394/52/2/392] • K. Klauenberg, G. Wübbeler, B. Mickan, P. Harris and C. Elster Metrologia, 52(6), 878--892, 2015. [DOI: 10.1088/0026-1394/52/6/878] • O. Bodnar and C. Elster Metrologia, 51(5), 516--521, 2014. [DOI: 10.1088/0026-1394/51/5/516] • S. Eichstädt and C. Elster Journal of Physics: Conference Series, 490(1), 012230, 2014. • S. Heidenreich, H. Gross, M.-A. Henn, C. Elster and M. Bär J. Phys. Conf. Ser., 490(1), 012007, 2014. • C. Matthews, F. Pennecchi, S. Eichstädt, A. Malengo, T. Esward, I. M. Smith, C. Elster, A. Knott, F. Arrhén and A. Lakka Metrologia, 51(3), 326-338, 2014. [DOI: 10.1088/0026-1394/51/3/326] • G. Wübbeler and C. Elster Measurement Science and Technology, 24(11), 115004, 2013. • G. Wübbeler, F. Schmähling, J. Beyer, J. Engert and C. Elster Measurement Science and Technology, 23(12), 125004, 2012. • C. Elster and B. Toman Bayesian uncertainty analysis for a regression model versus application of GUM Supplement 1 to the least-squares estimate. Metrologia, 48(5), 233--240, 2011. [DOI: 10.1088/0026-1394/48/5/001] • I. Lira, C. Elster and W. Wöger Metrologia, 44(5), 379--384, 2007. [DOI: 10.1088/0026-1394/44/5/014]
In this note, I consider an application of generalized Mobius Inversion to extract information of arithmetical sums with asymptotics of the form $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1 – \epsilon})$ for a fixed $latex j$ and a constant $latex a_1$, so that the sum is over both $latex n$ and $latex k$. We will see that $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 – \epsilon})$. For completeness, let’s prove the Mobius Inversion formula. Suppose we have an arithmetic function $latex \alpha(n)$ that has Dirichlet inverse $latex \alpha^{-1}(n)$, so that $latex \alpha * \alpha^{-1} (n) = [n = 1] = \begin{cases} 1 & \text{if } n = 1 \\ 0 & \text{else} \end{cases}$, where I use $latex [n = 1]$ to denote the indicator function of the condition $latex n = 1$ Then if $latex F(x)$ and $latex G(x)$ are complex-valued functions on $latex [1, \infty)$, then Mobius Inversion $latex \displaystyle F(x) = \sum_{n \leq x} \alpha(n) G\left(\frac{x}{n}\right) = \alpha * G(x)$ if and only if $latex \displaystyle G(x) = \sum_{n \leq x} \alpha^{-1}(n) F\left(\frac{x}{n}\right) = \alpha^{-1} * F(x)$ Proof:Suppose that $latex \displaystyle F(x) = \sum_{n \leq x}\alpha(n) G\left(\frac{x}{n}\right)$. Then $latex \displaystyle \sum_{n \leq x} \alpha^{-1}(n) F\left(\frac{x}{n}\right) = \sum_{n \leq x} \alpha^{-1}(n) \sum_{m \leq x/n} \alpha(m)G\left(\frac{x}{mn}\right) =$ $latex \displaystyle = \sum_{n \leq x} \sum_{m \leq x/n} \alpha^{-1}(n) \alpha (m) G\left(\frac{x}{mn}\right)$. Let’s collect terms. For any given $latex d$, the number of times $latex G\left(\frac{x}{d}\right)$ will occur will be one for every factorization of $latex d$ (that is, one time for every way of writing $latex mn = d$). So reorganizing the sum, we get that it’s equal to $latex \displaystyle \sum_{d \leq x} G\left(\frac{x}{d}\right) \sum_{e | d} \alpha(e)\alpha^{-1}\left( \frac{d}{e}\right) = \sum_{d \leq x} G\left(\frac{x}{d}\right) (\alpha * \alpha^{-1})(d)$. Since we know that $latex \alpha$ and $latex \alpha^{-1}$ are Dirichlet inverses, the only term that survives is when $latex d = 1$. So we have, after the dust settles, that our sum is nothing more than $latex G(x)$. And the converse is the exact same argument. $latex \diamondsuit$. We know some Dirichlet inverses already. If $latex u(n)$ is the function sending everything to $latex 1$, i.e. $latex u(n) \equiv 1$, then we know that $latex \displaystyle \mu * u(n) = \sum_{d | n} \mu(d) = [n = 1]$, where $latex \mu(n)$ is the Mobius function. This means that $latex \displaystyle \sum_n \frac{u(n)}{n^s} \sum_m \frac{\mu(m)}{m^s} = \zeta(s) \sum \frac{\mu(m)}{m^s} = 1$. This also means that we know that $latex \displaystyle \sum \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}$. Let me take a brief aside to mention a particular notation I endorse: the Iverson bracket notation, which says that $latex [P] = \begin{cases} 1 & \text{if } P \text{ true} \\ 0 & \text{else} \end{cases}$. Why? Well, a while ago, I really liked to use the bolded 1 indicator function, but then I started doing things with modules and representations, and someone bolded 1 became a multiplicative identity. This is done in Artin’s algebra text too, which I like (but did not learn out of). Then I really liked the $latex chi$-style indicator notation, until it became the default letter for group characters. So I’ve turned to the Iverson Bracket, which saves space compared to the other two anyway. Back to Dirichlet inverses. 
Some are a bit more challenging to find. We might say, let’s find the Dirichlet inverse of the function $latex [n = a^j]$ for a fixed $latex j$, i.e. the $latex j$th-power indicator function. This is one of those things that we’ll be using later, and I’m acting like one of those teachers who just happens to consider the right thing at the right time. Claim: the Dirichlet inverse of the function $latex [n = a^j]$ is the function $latex [n = a^j]\mu(a)$, which through slightly abusive notation is to say the function that is zero on non-$latex j$th-powers, and takes the value of $latex \mu(a)$ when $latex n = a^j$. Possible Motivation of claim: This doesn’t need to have descended as a gift from the gods. Examining $latex [n = a^j]$ on powers makes it seem like we want $latex \mu$-like function, and a little computation would give a good suggestion as to what to try. Proof: Both sides are $latex 1$ on $latex 1$. That’s a good start. Note also that both sides are zero on non-$latex j$-th-powers, so we only consider $latex j$th powers. For some $latex m > 1$, consider $latex n = m^j$. Then $latex \displaystyle \sum_{d | m^j} [d = a^j]\left[\frac{m^j}{d} = a’^j\right]\mu(a) = $ $latex \displaystyle \sum_{d^j | m^j} [d^j = a^j]\left[\frac{m^j}{d^j} = a^j\right] \mu(a’) = $ $latex \displaystyle \sum_{d^j | m^j} \mu(d) = \sum_{d|m} \mu(d) = 0$ as $latex m > 1$. So we are done. $latex \diamondsuit$ That’s sort of nice. Let’s go on to the main bit of the day. Lemma: $latex \displaystyle \sum_{nk^j \leq x} f(n) = \sum_{n \leq x} f * [n = k^j \text{ for some } k] (n)$. Proof of lemma:$latex \displaystyle \sum_{nk^j \leq x} f(n) = \sum_{m \leq x} \sum_{nk^j = m} f(n) = $ $latex \displaystyle \sum_{m \leq x} \sum_{d | m} [d = k^j]f\left(\frac{m}{d}\right) = \sum_{m \leq x} f * [m = k^j] (m)$ $latex \diamondsuit$ Proposition: $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 – \epsilon})$, where $latex 0 < \epsilon < 1 – \frac{j}{4}$. Proof: We now know that $latex \displaystyle F(x) = \sum_{nk^j \leq x} f(n) = \sum_{n \leq x} f * [n = k^j] = $ $latex \displaystyle = \sum_{mn \leq x} [m = k^j] f(n) = \sum_{m \leq x} [m = k^j] \sum_{n \leq x/m} f(n)$, which is of the form $latex \displaystyle F(x) = \sum \alpha (n)G(x/n)$. So by Mobius inversion, $latex \displaystyle F(x) = \sum_{m \leq x} [m = k^j] \sum_{n \leq x/m} f(n) = a_1x + O(x^{1 – \epsilon}) \iff $ $latex \displaystyle \iff \sum_{n \leq x} f(n) = \sum_{n \leq x} [n = k^j]\mu(k) F(x/n)$. Let’s look at this last sum. $latex \displaystyle \sum_{n \leq x} [n = k^j]\mu(k) F(x/n) = \sum_{n^j \leq x} [n^j = k^j]\mu(n) F(x/n^j) = $ $latex \displaystyle = \sum_{n^j \leq x} \mu(n) \left( a_1\frac{x}{n^j} + O\left( \frac{x^{1 – \epsilon}}{n^{j(1 – \epsilon)}}\right)\right) = $ $latex \displaystyle = \sum_{n^j \leq x} \mu(n)a_1 \frac{x}{n^j} + \sum_{n^j \leq x} \mu(n)O\left( \frac{x^{1 – \epsilon}}{n^{j(1 – \epsilon)}}\right) = $ $latex \displaystyle = a_1 x \sum_{n^j \leq x} \frac{\mu(n)}{n^j} + \sum_{n^j \leq x} \mu(n)O\left( \frac{x^{1 – \epsilon}}{n^{j(1 – \epsilon)}}\right)$. For the main term, recall that we know that $latex \displaystyle \frac{1}{\zeta(s)} = \sum_n \frac{\mu(n)}{n^s}$, so that (asymptotically) we know that $latex \displaystyle \sum_{n^j \leq x} \frac{\mu(n)}{n^j} to \frac{1}{\zeta(j)}$. 
For the error term, note that $latex |\mu(n)| \leq 1$, so $latex \displaystyle |\sum_{n^j \leq x} \mu(n)O\left( \frac{x^{1 – \epsilon}}{n^{j(1 – \epsilon)}}\right)| \leq $ $latex \displaystyle \leq O\left( x^{1 – \epsilon} \sum \frac{1}{n^{j(1 – \epsilon)}}\right) = O(x^{1 – \epsilon})$, where the convergence of this last sum is guaranteed by the condition on $latex \epsilon$ being not too big (in a sense, we can’t maintain arbitrarily small error). Every step is reversible, so putting it all together we get $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 – \epsilon})$, as we’d wanted. $latex \diamondsuit$ In general, if one has error term $latex O(x^\alpha)$ for some $latex \alpha<1$, then this process will yield an error term $latex O(x^{\alpha’})$ with $latex \alpha’ \geq \alpha$, but it will still be true that $latex \alpha’ < 1$. I have much more to say about applying Mobius Inversion to asymptotics of this form, and will follow this up in another note.
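To see the proposition in action, here is a quick numerical check of one classical instance (my own illustration, not part of the original note): take f(n) = μ(n)², the indicator of squarefree numbers, and j = 2. Since every positive integer factors uniquely as a squarefree number times a square, the left-hand sum is exactly the number of integers up to x, so a_1 = 1 and the proposition predicts that the count of squarefree numbers up to x is about x/ζ(2) = 6x/π².

    # Count squarefree integers up to x and compare with 6x/pi^2.
    import math

    def squarefree_count(x):
        sieve = [True] * (x + 1)
        for p in range(2, int(x**0.5) + 1):
            for m in range(p * p, x + 1, p * p):
                sieve[m] = False
        return sum(sieve[1:])

    x = 10**6
    print(squarefree_count(x), 6 * x / math.pi**2)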
LaTeX supports many worldwide languages by means of some special packages. In this article is explained how to import and use those packages to create documents in Spanish. Contents Spanish language has some special characters, such as the ñ and some accentuated words. For this reason the preamble of your document must be modified accordingly to support these characters and some other features. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} \begin{document} \tableofcontents \vspace{2cm} %Add a 2cm space \begin{abstract} Este es un breve resumen del contenido del documento escrito en español. \end{abstract} \section{Sección introductoria} Esta es la primera sección, podemos agregar algunos elementos adicionales y todo será escrito correctamente. Más aún, si una palabra es demasiado larga y tiene que ser truncada, babel tratará de truncarla correctamente dependiendo del idioma. \section{Sección con teoremas} Esta sección es para ver qué pasa con los comandos que definen texto \end{document} There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections. Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle a variety of input encodings used for different groups of languages and/or on different computer platforms LaTeX employs the inputenc package to set up input encoding. In this case the package properly displays characters in the Spanish alphabet. To use this package add the next line to the preamble of your document: \usepackage[utf8]{inputenc} The recommended input encoding is utf-8. You can use other encodings depending on your operating system. To proper LaTeX document generation you must also choose a font encoding which has to support specific characters for Spanish language, this is accomplished by the package: fontenc \usepackage[T1]{fontenc} Even though the default encoding works well in Spanish, using this specific encoding will avoid glitches with some specific characters. The default LaTeX encoding is OT1. To extended the default LaTeX capabilities, for proper hyphenation and translating the names of the document elements, import the babel package for the Spanish language. \usepackage[spanish]{babel} As you may see in the example at the introduction, instead of "abstract" and "Contents" the Spanish words "Resumen" and "Índice" are used. An extra parameter can be passed when importing the babel package with spanish support: \usepackage[spanish, mexico]{babel} This will set a localization for the language. By now only mexico and mexico-com are available, the latter will use a comma instead of a dot as the decimal marker in mathematical mode. Mathematical commands can also be imported specifically for the Spanish language. \section{Sección con teoremas} Esta sección es para ver que pasa con los comandos que definen texto \[ \lim x = \sen{\theta} + \max \{3.52, 4.22\} \] El paquete también agrega un comportamiento especial a <<estas marcas para hacer citas textuales>> tal como lo indican las reglas de la RAE. You can see that \sen, \max and \lim are properly displayed. For a complete list of mathematical symbols in Spanish see the reference guide. For this commands to be available you must add the next line to the preamble of your document: \def\spanishoperators{} Notice also that << and >> have a special format in Spanish, this can conflict with some packages. 
If you don't need these, or you want to use the direct keyboard input « », set the parameter es-noquotes, comma separated inside the brackets of the babel statement. Sometimes for formatting reasons some words have to be broken up in syllables separated by a - (hyphen) to continue the word in a new line. For example, matemáticas could become mate-máticas. The package babel, whose usage was described in the previous section, usually does a good job breaking up the words correctly, but if this is not the case you can use a couple of commands in your preamble. \usepackage{hyphenat} \hyphenation{mate-máti-cas recu-perar} The first command will import the package hyphenat and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.

Spanish LaTeX commands in mathematical mode

LaTeX command   Output
\sen            sen
\tg             tg
\arcsen         arc sen
\arccos         arc cos
\arctg          arc tg
\lim            lím
\limsup         lím sup
\liminf         lím inf
\max            máx
\inf            ínf
\min            mín

For more information see
How can I find a closed-form expression for the following improper integral in a slick way? $$\mathcal{I}= \int_0^\infty \frac{x^{23}}{(5x^2+7^2)^{17}}\,\mathrm{d}x$$

Starting from Thomas Andrews's reformulation, repeated integration by parts gives $$\int_a^\infty{(x-a)^{11}\over x^{17}}dx={11\over16}\int_a^\infty{(x-a)^{10}\over x^{16}}dx=\cdots={11\cdot10\cdots1\over16\cdot15\cdots6}\int_a^\infty{dx\over x^6}={1\over{16\choose5}}{1\over5a^5}$$ so your answer, if I've done all the arithmetic correctly, is $${1\over2\cdot5^{12}}{1\over{16\choose5}}{1\over5\cdot7^{10}}={1\over2^5\cdot3\cdot5^{13}\cdot7^{11}\cdot13}$$

Let $u=5x^2+49$; then this integral is: $$\frac{1}{10}\int_{49}^{\infty}\frac{ \left(\frac{u-49}{5}\right)^{11}}{u^{17}}\,du=\frac{1}{2\cdot 5^{12}}\int_{49}^\infty \frac{(u-49)^{11}}{u^{17}}\,du$$ That's going to be messy, but it isn't hard. We get: $$\frac{(u-49)^{11}}{u^{17}}=\sum_{i=0}^{11}\binom{11}{i}(-49)^{i}u^{-6-i}$$ So an indefinite integral is: $$\sum_{i=0}^{11} \frac{-1}{5+i}\binom{11}{i}(-49)^i u^{-5-i}$$ which is zero at $\infty$, so we only subtract the value at $u=49$, which gives: $$\int_{49}^\infty \frac{(u-49)^{11}}{u^{17}}\,du = \frac{1}{49^5}\sum_{i=0}^{11}\frac{(-1)^i}{5+i}\binom{11}{i}$$

This is in the form $$ J=\int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, \tag{1} $$ with $a=49,b=5,n=2,m=17,s=24$. It is clear that this converges. To do (1), first change variables to $y=(b/a) x^n$, so $dy/y=n\,dx/x$, and $$ J = \frac{1}{n} \int_0^{\infty} \frac{(a/b)^{s/n}y^{s/n-1}}{a^m(1+y)^m} \, dy = \frac{1}{n}\frac{a^{s/n-m}}{b^{s/n}} \int_0^{\infty} \frac{y^{s/n-1}}{(1+y)^m} \, dy. $$ Now, at this point you can stick the numbers in, and do $\int_0^{\infty} \frac{y^{11}}{(1+y)^{17}} \, dy$, but I'm going to do (1) in general, which is now a matter of evaluating $$ J' = \int_0^{\infty} \frac{y^{s/n-1}}{(1+y)^m} \, dy. $$ There are still a number of ways to do this: contour integration is a possibility, although problematic if $s/n$ is an integer, as in this case. The easier way turns out to be to write $$ \frac{1}{(1+y)^m} = \frac{1}{(m-1)!}\int_0^{\infty} \alpha^{m-1} e^{-(1+y)\alpha} \, d\alpha. \tag{2} $$ (Of course, this generalises to non-integers by using the Gamma function.) Putting this into $J'$ and changing the order of integration gives $$ J' = \frac{1}{(m-1)!} \int_0^{\infty} \alpha^{m-1}e^{-\alpha} \left( \int_0^{\infty} y^{s/n-1} e^{-\alpha y} \, dy \right) \, d\alpha. $$ Doing the inner integral is just a matter of using (2) again: it is $$ \int_0^{\infty} y^{s/n-1} e^{-\alpha y} \, dy = \alpha^{-s/n} (s/n-1)!. $$ Then we just have to do the outer integral, which is $$ J' = \frac{(s/n-1)!}{(m-1)!} \int_0^{\infty} \alpha^{m-s/n-1}e^{-\alpha} \, d\alpha = \frac{(s/n-1)!}{(m-1)!}(m-s/n-1)!, $$ applying (2) yet again. Hence the original integral evaluates to $$ J = \frac{a^{s/n-m}}{b^{s/n}}\frac{(s/n-1)!\,(m-s/n-1)!}{n\,(m-1)!} = \frac{a^{s/n-m}}{b^{s/n}} \frac{1}{s} \binom{m-1}{s/n}^{-1} . $$ Sticking the numbers in gives $$ \mathcal{I} = \frac{49^{-5}}{5^{12}}\frac{11!\,4!}{2\,(16!)}, $$ which is easy enough to calculate.
$$\mathcal{I}= \int_0^{\infty} \frac{x^{23}}{(5x^2+49)^{17}}\,\mathrm{d}x$$ Let $y=\left(\frac{49b}{1-b}\right)^{1/2}$, which is the sum total of some 4 or 5 substitutions in one go; it is not advisable to use this directly, but rather to go with $y=x^2$, then $z=5+49/y$, then $a=1-5/z$, and then $b=1-a$. Note that the result could be evaluated at $a$, but the $(1-a)^{11}$ is tedious to expand, so I changed the form. In any case, by the beta function there is no difference. Now: $$\mathcal{I}= \frac{1}{2\cdot49^5\cdot5^{12}}\int_{0}^{1} b^{11}(1-b)^4\,\mathrm{d}b$$ Now, using the beta function, or expanding, integrating and adding: $$\int_{0}^{1} b^{11}(1-b)^4\,\mathrm{d}b=\frac{11!\cdot4!}{16!}$$ So the answer is: $$\boxed{\displaystyle\large\qquad\mathcal{I}=\frac{11!\cdot4!}{16!\cdot2\cdot49^5\cdot5^{12}}=\frac1{3012333710039062500000}\qquad}$$
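Since the answers above reach the same closed form by quite different routes, a quick numerical cross-check is cheap insurance. Below is a minimal sketch, assuming Python with the mpmath library is available (any arbitrary-precision quadrature would do); it compares a direct numerical evaluation of the integral with the boxed closed form.

import mpmath as mp

mp.mp.dps = 40  # work with 40 significant digits

# the original integrand
f = lambda x: x**23 / (5 * x**2 + 49)**17

numeric = mp.quad(f, [0, mp.inf])

# the closed form 11! * 4! / (16! * 2 * 49^5 * 5^12) derived above
closed = (mp.factorial(11) * mp.factorial(4)) / (mp.factorial(16) * 2 * 49**5 * 5**12)

print(numeric)   # roughly 3.32e-22
print(closed)    # same value, 1/3012333710039062500000
print(mp.almosteq(numeric, closed))  # True if the quadrature converged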
Editorial Find the first bus Serval can see in each route and take the earliest one. For each route, finding the first bus Serval sees can be done in $$$O(1)$$$. Alternatively, for every time no more than $$$\max(s_i,t+\max(d_i))$$$, mark which bus would come (or that no bus comes), then search for the nearest one.

Editorial Fill in all the bricks, and then remove all bricks you must remove (those positions that are empty in some view). This can be solved in $$$O(nm)$$$. Author & preparation: Serval

Editorial First let ( be $$$+1$$$, ) be $$$-1$$$ and ? be a missing place, so we will replace all the missing places in the new $$$+1$$$,$$$-1$$$ sequence by $$$+1$$$ and $$$-1$$$. Obviously, for each prefix of a correct parenthesis sequence, the sum of the new $$$+1$$$,$$$-1$$$ sequence is not less than $$$0$$$. And for the correct parenthesis sequence itself, the sum of the new sequence should be $$$0$$$. So we can calculate how many $$$+1$$$ (let $$$a$$$ denote it) and how many $$$-1$$$ (let $$$b$$$ denote it) we should fill into the missing places. According to the problem, our goal is to fill the missing places with $$$+1$$$ and $$$-1$$$ so that no strict prefix (a prefix other than the whole sequence itself) has sum equal to $$$0$$$. This can be solved greedily. We want the prefix sums to be as large as possible, to avoid the sum touching $$$0$$$. So let the first $$$a$$$ missing places be filled with $$$+1$$$ and the last $$$b$$$ missing places be filled with $$$-1$$$. At the end, check whether the result is a correct parenthesis sequence (a sketch of this greedy appears after these editorials). The complexity is $$$O(n)$$$. Author & preparation: bzh

Editorial If we want to check whether $$$x$$$ is the answer (I didn't say I want to do binary search), then we can set all the numbers no less than $$$x$$$ to $$$1$$$, and the numbers less than $$$x$$$ to $$$0$$$. Then we can use $$$dp_i$$$ to represent the minimum number of ones needed among the leaves in the subtree of $$$i$$$ so that the number on node $$$i$$$ is one; equivalently, the maximum number attainable on node $$$i$$$ is the $$$dp_i$$$-th largest leaf value within the subtree of $$$i$$$. There should be at least $$$dp_i$$$ ones in the subtree of $$$i$$$ for the number on $$$i$$$ to be one. Then $$$k+1-dp_1$$$ is the final answer. Complexity $$$O(n)$$$.

Code

#include <cstdio>
using namespace std;
struct node{ int to; node *next;};
int i,j,m,n,a[1000005],f[1000005],dp[1000005],deg[1000005],k;
node *nd[1000005];
void addd(int u,int v){
    node *p=new node();
    p->to=v; p->next=nd[u]; nd[u]=p;
}
void dfs(int u){
    node *p=nd[u];
    if ((u>1)&&(deg[u]==1)){ //note that u>1
        dp[u]=1; k++; return;
    }
    if (a[u]) dp[u]=1000000000; else dp[u]=0;
    while(p){
        dfs(p->to);
        if (a[u]){
            if (dp[p->to]<dp[u]) dp[u]=dp[p->to];
        }else{
            dp[u]+=dp[p->to];
        }
        p=p->next;
    }
}
int main(){
    scanf("%d",&n);
    for(i=1;i<=n;i++){ scanf("%d",&a[i]); }
    for(i=2;i<=n;i++){ scanf("%d",&f[i]); deg[i]++; deg[f[i]]++; addd(f[i],i); }
    dfs(1);
    printf("%d\n",k+1-dp[1]);
    return 0;
}

Another Solution We can solve this problem greedily. First we use $$$leaf_u$$$ to represent the number of leaves in the subtree whose root is $$$u$$$, and $$$f_u$$$ to represent the maximum number we can get on the node $$$u$$$. Note that since we are considering the subtree of $$$u$$$, we just number those $$$leaf_u$$$ nodes from $$$1$$$ to $$$leaf_u$$$, and $$$f_u$$$ is between $$$1$$$ and $$$leaf_u$$$, too. Let's think about how to find the maximum number a node can get. If the operation of the node $$$u$$$ we are concerned with is $$$\max$$$, then among all the nodes $$$v$$$ whose father or parent is $$$u$$$, we can find the minimum $$$leaf_v-f_v$$$. Let $$$v_{min}$$$ denote the node that reaches the minimum.
And we can construct an arrangement so that the number written in the node $$$u$$$ can be $$$leaf_u-(leaf_{v_{min}}-f_{v_{min}})$$$. When we number the leaves in the subtree of $$$u$$$ from $$$1$$$ to $$$leaf_u$$$, we number the leaves in the other subtrees of children of $$$u$$$ first, and then number the leaves in the subtree of $$$v_{min}$$$. It can be proved that this arrangement is optimal. If the operation of the node $$$u$$$ is $$$\min$$$, we can construct an arrangement of the numbers written in the leaves to make the number written in $$$u$$$ as large as possible. For all sons or children $$$v$$$ of $$$u$$$, we number the first $$$f_v-1$$$ leaves in the subtree of $$$v$$$ first, according to the optimal arrangement of the node $$$v$$$. Then, no matter how we arrange the remaining numbers, the number written in $$$u$$$ is $$$1+\sum_{v \text{ is a son of } u} (f_v-1)$$$. This is the optimal arrangement. We can use this method to get the final answer $$$f_1$$$.

Code for Another Solution

#include <cstdio>
using namespace std;
const int N=1000005;
int n,p;
int h[N],nx[N];
int t[N],sz[N];
void getsize(int u){
    if (!h[u]) sz[u]++;
    for (int i=h[u];i;i=nx[i]) { getsize(i); sz[u]+=sz[i]; }
}
int getans(int u){
    if (!h[u]) return 1;
    int ret=0,tmp;
    if (t[u]) {
        for (int i=h[u];i;i=nx[i]) { tmp=getans(i)+sz[u]-sz[i]; if (tmp>ret) ret=tmp; }
        return ret;
    }
    for (int i=h[u];i;i=nx[i]) ret+=getans(i)-1;
    return ret+1;
}
int main(){
    scanf("%d",&n);
    for (int i=1;i<=n;i++) scanf("%d",&t[i]);
    for (int i=2;i<=n;i++) { scanf("%d",&p); nx[i]=h[p]; h[p]=i; }
    getsize(1);
    printf("%d\n",getans(1));
    return 0;
}

Editorial If the answer to a rectangle is odd, there must be exactly one head or tail in that rectangle. Otherwise, there must be an even number ($$$0$$$ or $$$2$$$) of heads and tails in the given rectangle. We make queries for each of the columns except the last one; then we know, for each column, whether it contains an odd number of heads and tails or not. Because the total sum is even, we can deduce the parity of the last column. If the head and tail are in different columns, we can find the two columns with odd answers and get them. Then we can do binary search in each of those two columns separately and get the answer in no more than $$$999+10+10=1019$$$ queries in total. If the head and tail are in the same column, we will get all even answers and know that fact. Then we apply the same method to rows. Then we can just do binary search in one of the rows, and use the fact that the other is in the same column as this one. In this case, we have made no more than $$$999+999+10=2008$$$ queries. Bonus: How to save more queries? How to save one more query? We first make queries for row $$$2$$$ to row $$$n-1$$$. If we find any ones, we make the last query for the rows and use the method above. If we cannot find any ones, we make $$$n-1$$$ queries for columns. If none of them provides one, we know that row $$$1$$$ and row $$$n$$$ must each contain exactly one head or tail, and that they are in the same column. In this case, we do binary search in one of the rows, so the total number of queries is $$$998+999+10=2007$$$. If we can find two ones in the columns, we know that: if one of them is in row $$$2$$$ to row $$$n-1$$$, the other must be in the same row, because for row $$$2$$$ to row $$$n$$$, we know that there is an even number of heads and tails, and they can't appear in the other columns.
Then we do binary search; when we divide the length into two halves, we let the one close to the middle be the longer one, and the one close to one end be the shorter one. Then, if it turns out that the answer is in row $$$1$$$ (or row $$$n$$$), the number of queries must be $$$\log n$$$ rounded down, and we can use one more query to identify whether the head or tail in the other column is in row $$$1$$$ or row $$$n$$$. If it turns out that the answer is in one of the rows from row $$$2$$$ to row $$$n$$$, we may have used $$$\log n$$$ queries rounded up, but in this case we don't need that extra query. So the total number of queries is $$$999+998+9+1=2007$$$ (or $$$999+998+10=2007$$$). In fact, if the interactor is not adaptive and we query for columns and rows randomly, we can use far fewer than $$$2007$$$ queries. And if we query for rows and columns alternately, we can save more queries.

Editorial Without loss of generality, assume that $$$l=1$$$. For a given covering by segments, the total length of the legal intervals is the probability that another point $$$P$$$, chosen on this segment uniformly at random, lies in the legal intervals. Since all $$$2n+1$$$ points ($$$P$$$ and the endpoints of each segment) are chosen randomly and independently, we only need to find the probability that the point $$$P$$$ is in the legal intervals. Note that only the order of these $$$2n+1$$$ points matters. Because the points are chosen in the segment, the probability that some of them coincide is $$$0$$$, so we can assume that all points are distinct. Now the problem is how to calculate the number of arrangements in which $$$P$$$ is between at least $$$k$$$ pairs of endpoints. It can be solved by dynamic programming in time complexity $$$O(n^2)$$$. We define $$$f(i,j,x)$$$ as the number of arrangements for the first $$$i$$$ positions, with $$$j$$$ points not yet matched, and $$$P$$$ having appeared $$$x$$$ times (obviously $$$x=0$$$ or $$$1$$$). So we get three different types of transition for the $$$i$$$-th position:

Place $$$P$$$ at the $$$i$$$-th position (if $$$j\geq k$$$): $$$f(i-1,j,0)\rightarrow f(i,j,1)$$$

Start a new segment (if $$$i+j+x<2n$$$): $$$f(i-1,j-1,x)\rightarrow f(i,j,x)$$$

Match a started segment, noting that we have $$$j$$$ choices of segments: $$$f(i-1,j+1,x)\times (j+1)\rightarrow f(i,j,x)$$$

Then $$$f(2n+1,0,1)$$$ is the number of legal arrangements. Obviously, the total number of arrangements is $$$(2n+1)!$$$. However, there are $$$n$$$ pairs of endpoints whose indices can be swapped, and the indices of the $$$n$$$ segments can be rearranged. So the final answer is $$$\frac{f(2n+1,0,1)\times n! \times 2^n}{(2n+1)!}$$$.

Code

#include <cstdio>
using namespace std;
const int mod=998244353;
const int N=4005;
const int K=2005;
int n,k,l;
int fac,ans;
int f[N][K][2];
int fpw(int b,int e=mod-2){
    if (!e) return 1;
    int ret=fpw(b,e>>1);
    ret=1ll*ret*ret%mod;
    if (e&1) ret=1ll*ret*b%mod;
    return ret;
}
int main(){
    scanf("%d%d%d",&n,&k,&l);
    f[0][0][0]=1;
    for (int i=1;i<=2*n+1;i++)
        for (int j=0;j<=n;j++)
            for (int x=0;x<=1;x++)
                if (f[i-1][j][x]) {
                    if (j) f[i][j-1][x]=(f[i][j-1][x]+1ll*f[i-1][j][x]*j)%mod;
                    if (i+j-1<2*n+x) f[i][j+1][x]=(f[i][j+1][x]+f[i-1][j][x])%mod;
                    if (j>=k&&!x) f[i][j][1]=(f[i][j][1]+f[i-1][j][x])%mod;
                }
    fac=1;
    for (int i=n+1;i<=2*n+1;i++) fac=1ll*fac*i%mod;
    ans=f[2*n+1][0][1];
    ans=1ll*ans*fpw(2,n)%mod;
    ans=1ll*ans*fpw(fac)%mod*l%mod;
    printf("%d\n",ans);
    return 0;
}

UPD: We fixed some mistakes and added another solution for D.
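For the parenthesis-sequence editorial above, here is a minimal sketch of the described greedy, written in Python for illustration; the function name and input conventions are my own and not part of the original editorial.

def restore_parentheses(s):
    # s is a string over '(', ')' and '?'; return a correct sequence or None.
    n = len(s)
    if n % 2:
        return None
    # a = number of '?' that must become '(' so that the total sum is 0
    a = n // 2 - s.count('(')
    b = s.count('?') - a
    if a < 0 or b < 0:
        return None
    out, seen = [], 0
    for c in s:
        if c == '?':
            out.append('(' if seen < a else ')')
            seen += 1
        else:
            out.append(c)
    # verify: every strict prefix must keep a strictly positive sum,
    # and the whole sequence must sum to 0
    balance = 0
    for i, c in enumerate(out):
        balance += 1 if c == '(' else -1
        if i < n - 1 and balance <= 0:
            return None
    return ''.join(out) if balance == 0 else None

# e.g. restore_parentheses("(??)") -> "(())", restore_parentheses("?(") -> None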
The formula :- \begin{equation}\tag{A}\frac{1}{P_{sid}^{-1}-P_{syn}^{-1}} = P_{e}\end{equation} (where $P_{e}$ is the sidereal period of the Earth in its orbit around the Sun) can be derived with the aid of Fig. 1 below, using the different axes frames, and making the following simplifying assumptions about the motions of the Sun, Earth and Moon :- Earth has a circular orbit around the Sun of constant speed and a fixed period $P_{e}$ Moon has a circular orbit around the Earth of constant speed and a fixed period $P_{sid}$. (Thus the moon's path wrt the fixed Sun frame is an epicycle, ie a small circle whose center moves along the circumference of a larger circle, in this case the Earth's orbit). The above two orbits are confined to a common plane (the Ecliptic Plane) From the direction we are viewing (ie `North') both Earth and Moon orbit in the anti-clockwise direction In the diagram the angles $\theta$, $\phi$, $\psi$, and $\pi + \theta$ are all `polar coordinate angles' wrt the relevant $x$-axis. The axes-frame $x'y'$ always points towards the Sun, and it moves with the Earth in its orbit. The $y'$ axis is the 'noon line' always pointing towards the Sun. The phases of the Moon are determined by the phase angle $\psi$ which the Moon makes wrt the $x'$ axis :- Conjunction means in line with and in same direction as the Sun, opposition means in line with but in opposite direction as the Sun. From the diagram it is readily seen that :- $$\psi = \pi/2 + (\phi - \theta)$$ But we could write :- \begin{eqnarray*}\theta & = & \omega_{e}t + \theta_{0} \\\phi & = & \omega_{m}t + \phi_{0} \\\end{eqnarray*} where $\omega_{e}$ = constant angular velocity of the Earth (radians per second), $\omega_{m}$ = constant angular velocity of the Moon, and $\theta_{0}$, $\phi_{0}$ are the angles at time $t$ = $0$. Thus $$\psi = (\omega_{m} - \omega_{e})t + (\phi_{0} - \theta_{0}) + \pi/2$$ and so change $\Delta \psi$ in $\psi$ after a time $\Delta t$ is given by $$\Delta\psi = (\omega_{m} - \omega_{e}) \Delta t$$ Thus moon phase angle $\psi$ varies linearly with time at the constant rate of $\omega_{m} - \omega_{e}$. Since $\omega_{m} = 2\pi / P_{sid}$ and $\omega_{e} = 2\pi / P_{e}$ we then have $$\Delta\psi = 2\pi (1 / P_{sid} - 1 / P_{e}) \Delta t$$ The synodic period of the moon is simply the time it takes for moon phase angle $\psi$ to make a full cycle, ie $2\pi$ radians (= $360^{\circ}$) so putting $\Delta \psi$ = $2\pi$ in the latter formula gives $P_{syn}$ = $\Delta t$ ie $$P_{syn} = \frac{1}{1 / P_{sid} - 1 / P_{e}}$$ or $$\frac{1}{P_{syn}} = \frac{1}{P_{sid}} - \frac{1}{P_{e}}$$ which is easily rearranged to obtain the required formula (A). In short we are just subtracting two angular velocities -- but these go by the reciprocal of the relevant orbital period $P$, hence the reciprocals in the formula. Taking the values $P_{sid} = 27.321661$ days (from https://en.wikipedia.org/wiki/Moon))) and $P_{e} = 365.256363$ days (from https://en.wikipedia.org/wiki/Sidereal_year) this gives as the moon's synodic period :- $$P_{syn} = 29.530588 \mbox{ days}$$ which differs by only 0.000001 from the Wikipedia value of 29.530589 days, where 'day' is SI day = 86,400 SI seconds). So the figures given on Wikipedia for these three orbital periods $P_{syn}$, $P_{sid}$, and $P_{e}$ are very closely related by the formula (A), so that one of them has been calculated from the other two. Thus these are not entirely actual experimentally observed figures that have been quoted on Wikipedia (nor on WolframAlpha). 
For we could NOT expect the actual figures to be related so exactly by the formula (A) because of the simplifying assumptions (1) - (4) above that are required to derive that formula (the actual orbits are not perfectly circular and they are not exactly in the same plane). (NOTE: The figure I have for $P_{e}$ is 365.256363 days, and for the tropical year 365.242189 days, from Wikipedia (https://en.wikipedia.org/wiki/Sidereal_year), where 'day' is SI day = 86,400 SI seconds. This differs slightly from the values you have quoted.) A reciprocal type formula is also applicable to the synodic period between two planets orbiting the Sun (see Fig. 2 below), again with the simplifying assumptions of circular orbits in the same plane :- $$\frac{1}{P_{syn}} = \frac{1}{P_{1}} - \frac{1}{P_{2}}$$ where $P_{1}$ and $P_{2}$ are the orbital periods of the planets, and $P_{1} < P_{2}$. This same type of formula is derived because :- $$\psi = \phi - \theta$$ and so the above derivation applies very similarly.
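Plugging the quoted sidereal periods into formula (A) is a one-line computation. The sketch below, assuming Python, reproduces the 29.530588-day synodic month and also applies the analogous planet-planet formula; the Venus period in the second example is quoted only for illustration.

def synodic(p_inner, p_outer):
    # 1/P_syn = 1/P_inner - 1/P_outer, valid under the circular, coplanar assumptions above
    return 1.0 / (1.0 / p_inner - 1.0 / p_outer)

P_sid = 27.321661    # sidereal month, days
P_e = 365.256363     # sidereal year, days

print(round(synodic(P_sid, P_e), 6))  # about 29.530588, matching the value derived above

# The same reciprocal formula for two planets (P1 < P2), e.g. Venus and Earth:
P1, P2 = 224.701, 365.256363          # Venus sidereal period, for illustration only
print(synodic(P1, P2))                # about 584 days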
Chances are high that the Standard Model will be killed in 2016 If I were just a little bit more certain than I am, I would add the picture below and the subtitle would say:The summary of the main 2015 results of the ATLAS and CMS collaborations at the LHC was entertaining. Many channels have been shown compatible with the Standard Model. Some searches for black holes, gluinos, sbottom squarks (deficit!), etc. have extended their limits by \(200\GeV\) or so – in one search for gluinos even close to \(1.8\TeV\). The old CMS edge at \(78.7\GeV\) anomaly as well as ATLAS' on-Z anomaly was said not to exist by CMS. REVOLUTION IN SCIENCE New theory of the Universe The Standard Model and Weinbergian ideas overthrown Can you see a similarity between the ATLAS and CMS diphoton charts? I chose one of the most modest CMS graphs – they had a stronger signal when they treated it differently, especially when they added the 2012 evidence, see figure 7 in the CMS diphoton paper. A reason why CMS has weaker signals is that they only build on 2.6/fb of data (75% of the time, the CMS magnet worked fine); ATLAS has processed 3.2/fb, see the ATLAS diphoton paper, too. The \(2.9\TeV\) dilepton event could have gotten many siblings but it hasn't, so I am reclassifying the event as a "probable fluke deserved to be forgotten". Similar comments probably apply to the \(5\TeV\) dijet events etc. The situation of the \(2\TeV\) new gauge boson is somewhere in between. Such a new particle can't be "actively" excluded now. But it could have gotten some new evidence. However, it hasn't gotten any new evidence either from ATLAS or CMS. Let me describe the right attitude differently: If we hadn't seen the intermediate data and only evaluated the "total" data we have now, which may be a sensible attitude, the excesses at \(2\TeV\) would be (and are) so small that we wouldn't talk about them at all, and that's why we should probably stop talking about them now, too. We've got a new "pet bump", however. Within the error margins, the rumors were confirmed just like 4 years ago when we learned about the \(125\GeV\) Higgs boson for the first time. But before I get to this new "bump", I want to notice the only other interesting deviation from the Standard Model that I noticed. In March 2015, ATLAS announced a 3-sigma excess in a search for supersymmetry, Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in \(\sqrt{s}=8\TeV\) \(pp\) collisions with the ATLAS detector.It's often referred to as the "Z+MET ATLAS excess", see e.g. this paper offering an explanation within the MSSM. \(10.8\pm 2.2\) events were predicted, 29 events were seen in 2012. The interesting news is that this 3-sigma excess from 2012 seems to get support from the fresh 2015 data as well, although the excess is 2.2 sigma so far only [new paper]. \(10.4\pm 2.4\) events were expected, 21 events were obtained. See how impressive the bump in the \(80\)-\(100\GeV\) bin looks.) In combination, this is getting to the 3.5-sigma realm and that's potentially interesting enough for us to look at the theory paper I mentioned or to turn on our brains. With an optimistic world view, there's actually one more 2-sigma excess in the search for gluinos, in the six-jet hard-lepton [muon only] signal region of ATLAS. 8 instead of \(2.5\pm 0.8\) events are observed. 
On Figures 6a,7, this 2-sigma excess has prevented them from excluding a \(1.2\TeV\) gluino with a \(600\GeV\) LSP, a choice that I found intriguing even in the 2012 data. And even a new CMS Figure 4a shows a slight repulsion from the point (or a nearby one). (I was also intrigued by some \(450\)-\(500\GeV\) excess in ATLAS' Higgs-like tetraleptons [Figure 11] and ATLAS' tetrajets at \(2.6\TeV\) [Figure 2b] but because the speaker said that these 2-sigma+ exceeses were "absolutely" insignificant, I have only reserved this space in the parentheses for the channels. And I don't remember he mentioned a 2.6-sigma excess in \(W'\to e\nu\) at \(600\GeV\) [Figure 2a] at all. I am personally intrigued by Figure 2 in \(\ell\ell q q\) at \(180\GeV\), too.) The new boson As the rumors indicated, both ATLAS and CMS have seen truly eye-catching excess in the diphoton events (events with two photons) whose invariant mass is close to... \(750\GeV\) in both cases. Needless to say, this detailed agreement between ATLAS and CMS makes the bumps far more interesting. The higher-mass, later version of the rumors was more correct than the earlier, \(700\GeV\) version of the rumor. I switched to the "\(750\GeV\) belief cult" in the morning because I saw some previous, 2012 CMS evidence in favor of this mass. There were actually two channels in CMS that saw an excess in that mass territory in 2012, the diphoton channel (2 sigma or so) and the \(\ell \nu q \bar q'\) channel, about 2.5 sigma. But in the latter, we only know that the two excesses were between \(700\) and \(800\GeV\). CMS has realized the previous evidence in favor of this bump. So combining the local 2.6-sigma bump from 2015 around \(760\GeV\) (or \(758 \GeV\)) with the evidence from 2012, they got above 3 sigma (locally) in the combination. ATLAS got a big bump with the central mass value of \(747\GeV\). The significance is either 3.6 or 3.9 sigma, depending on details of the method. Needless to say, adding the most optimistic numbers, we get locally 4.9 sigma from the ATLAS+CMS combo. I think it's still fair to say that the ATLAS+CMS combination remains close to 4.5 sigma if we take the look-elsewhere effect into account. It's damn close to the discovery. Incidentally, experimenters apparently love to apply the look-elsewhere correction immediately which is why you often hear ludicrously conservative if not reactionary claims such as "the evidence is just 1.2 sigma". It's right to be aware of the existence of the look-elsewhere effect. But you must also be particularly careful not to apply the look-elsewhere correction twice, to both ATLAS and CMS figures, which would mean that their combination overlooks the nontrivial agreement in their opinion "where" you should look. Just to be sure, in the same diphoton channel, ATLAS also saw a 2.5-sigma local bump near \(1.6\TeV\); CMS seems to produce negligible deviations over there. On the contrary, CMS has a less-than-two-sigma excess around \(1.15\TeV\); ATLAS sees basically nothing there. And CMS also saw some more-than-two-sigma excess around \(580\GeV\); no confirmation from ATLAS over there. That's why I will only talk about the \(750\GeV\) hints seriously at this moment. The situation is less clear than 4 years ago when the first Higgs bosonhad to exist, for the electroweak theory to be consistent (unitary), and all other places for that God particle to hide had been excluded. So it had to be there and 3 sigma was really enough to be basically certain. 
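For readers who want to reproduce the back-of-the-envelope combination quoted above, one common rough estimate is to add independent local significances in quadrature; this is only an approximation (not the collaborations' official statistical combination), but it lands near the quoted 4.9 sigma. A minimal sketch, assuming Python:

from math import hypot

atlas_local = 3.9   # most optimistic ATLAS local significance quoted above
cms_local = 3.0     # approximate CMS local significance with the 2012 data folded in

# naive quadrature combination of independent Gaussian significances (rough estimate only)
print(round(hypot(atlas_local, cms_local), 1))   # about 4.9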
What theory describes the boson Now, there is no equally unavoidable reason for a new particle – which would probably be a new scalar particle and we could even say a "new Higgs boson". The Standard Model works pretty well with one God particle – much like many religions are satisfied with one God. But the correct theory of physics may still require two or, more precisely, five God particles in total. ;-) The minimum supersymmetric models such as the MSSM predict five physical Higgs bosons – two charged (antiparticles of one another) and three neutral ones (which includes one CP-odd boson). The \(750\GeV\) boson could be one of the new neutral Higgs bosons in a supersymmetric model. So its discovery would increase the probability that supersymmetry exists at low energies – but less so than a discovery of a superpartner (new Higgs bosons are needed in SUSY models, much like superpartners are needed, but they are not superpartners). For some time, I have considered NMSSM – the next-to-minimal supersymmetric standard model – to be more natural than the MSSM. It has some extra scalar particles. And it's probably (even) easier to reconcile the new particle (if it is real) with the NMSSM than with MSSM. But completely different models may be the right ones to describe the new particle. Maybe some "two Higgs doublets" theories with no SUSY or something else. I should also remind you that \(750\GeV\) would be close to the upper bound for the (only) Standard Model boson if it were the "main" one. So four years ago, it would be viewed as the "almost maximum" value of the Higgs mass. But if there are at least two Higgs bosons, the situation is more flexible. The end of the Standard Model epoch? A bet. We've often heard that the Standard Model represents the end of particle physics. It's a theory to be here for the eternity, new physics and BSM physics is religion, and so on. I am willing to bet against this common wisdom – and bet that this new \(750\GeV\) boson will be confirmed. I feel fairly convinced that this is possible but I still want you to offer me 10-to-1 odds if you want to become a Standard Model advocate. I would pay to you $200 if I lose and you will pay to me $2,000 if you lose, OK? The rules of the bet: If the first released diphoton paper of either collaboration evaluating at least \(20\) inverse femtobarns of data at the energy of at least \(13\TeV\) that reaches at least up to masses of \(800\GeV\) contains no discovery of a particle whose (mean value of) mass is in the \(720\)-\(780\GeV\) window, I will pay you the money. If such a discovery takes place, you will pay to me.Sounds good? If there's some interest, we can turn it into the largest bet in the history of physics. You know, I want the 10-to-1 odds because my claim is basically that all the "Standard Model forever" worshipers are imbeciles while Glashow, Salam, Weinberg, Gross, Wilczek, Politzer, and others were sloppy physicists. ;-) There are other reasons to keep some doubts about the particle (and reasons for me to demand better odds). While ATLAS would like the width to be some \(45\GeV\), because the excess appears in four adjacent bins, CMS could imagine the width at \(10\GeV\) – but that could be a coincidence for CMS to get the events too close to each other and ATLAS to get them far from each other. Figure 3 in the ATLAS diphoton paper shows the excess as both tall and wide. It is worrisome – ATLAS does seem to have some nontrivial evidence supporting a greater width. 
Also, it's ethical for me to warn you that I have already won a $500 Higgs bet in the past. ;-) The scalar-photon-photon coupling from a loop Adam Falkowski wrote a helpful text. As a former boyfriend of SUSY's who would lose $10,000 if SUSY were discovered, he prefers to give you "simpler" models. Instead of all the extra Higgs doublets and superpartners, he wants you do add a scalar \(S\) in the Standard Model. Well, you also need some new heavy "quarks" (the light ordinary ones are probably not good enough; we probably need the vector-like new quarks i.e. particles that don't break the left-right symmetry at all) that are coupled to the electric charge and maybe also the color. Then, you generate the cubic interaction capable of the \(S\to \gamma\gamma\) or \(S\to g g\) decay (two photons or two gluons) via a one-loop diagram with the new "quarks" running in the triangular loop. The effective dimension-5 interactions will be\[ {\mathcal L}_{\rm eff} =c_{s\gamma\gamma} \frac{e^2}{4v} S F_{\mu\nu}F^{\mu\nu} +c_{sgg} \frac{g_s^2}{4v}S G^{a}_{\mu\nu}G_a^{\mu\nu} \] with some new vev \(\langle S\rangle = v\), if I understood the symbol well. We need the interaction of \(S\) with the 2 photons for the final diphoton state to emerge from \(S\); but we also need the analogous interaction with two gluons because that's how \(S\) is supposedly created from the initial state, \(gg\) inside \(pp\). But you see that he probably needs something like new vector-like "quarks" aside from \(S\), too. Who ordered that, you would ask? Supersymmetric models give some deep reason to exist(a deep one, not just an explanation of some bizarre bumps) to all the new players. There is a difference between fixing problems with your Lagrangians ad hoc, whenever they break down in Switzerland, and maintaining a big picture of physics where many observations have some purpose and you try to learn some deeper lessons. I and top-down theorists choose the latter. Adam Falkowski prefers the former and one may survive in this way. But is this type of physics valuable when you never make important and new enough predictions about anything that should happen in the future? You just predict the mosquitos flying around you (according to the established model) while you say "I won't be hit by a stone" up to the moment when you're hit by a stone and when you start to say "I won't be hit by a rock". People like me want to understand not only the mosquitos but also the stones and perhaps rocks as soon as possible – all these stones, rocks, and also asteroids, are parts of physics whether we're colliding with them today or not! Before 6 pm tonight, the last nuclei orbited the ring before the beams were shut down for 2015. So the gadgets will finally enjoy a silent night tonight. They deserve it and they must relax before much more work they're expected to do in 2016.
Output Answers See the comments in the code for further explanation, but here's a summary of how I addressed your questions: I used \begin{scope} and \end{scope} around the code for the semantic trees so that they can be scaled and shifted independently. The surface scope tree has [xshift=9cm,scale=.8], so it's shifted right by 9 cm and shrunk by 20%. The inverse scope tree has [xshift=9cm,yshift=-8cm,scale=.8], so it's shifted right by 9 cm, down by 8 cm, and shrunk by 20%. I used \node at (5,1.5) {\textbf{Surface Scope}}; for the caption for the first semantic tree. The positioning of this node is relative to the tikz coordinate system within the scope environment. A similar node is used for the inverse scope tree. The \Rightarrow is placed within its own scope environment so it can be shifted relative to the whole tikzpicture, and I put \Huge in the node text to increase the arrow's size. Because the three trees are all within the same tikzpicture, you can connect them with lines that refer to named nodes in the different trees. For example, I named the node that contains the universal quantifier in the surface scope tree (universal-surface), and then \draw[thick, color=red] (everyone)..controls +(0,-9) and +(0,-8)..(universal-surface); connects that with the (everyone) node in the syntactic tree. By using the fit library, you can place shapes around a set of nodes that you specify. For example, \node [draw=blue, ellipse, thick, inner sep=-4pt, fit = (root-surface) (NPleft-surface) (NPright-surface)] {}; puts a blue ellipse around the nodes I named (root-surface), (NPleft-surface), and (NPright-surface). The blue labels were added using nodes positioned relative to named nodes in the tree. Drawing lines between the syntactic tree and the inverse scope tree works exactly the same as drawing lines between the syntactic tree and the surface scope tree. Code I cleaned up the code for your trees considerably by defining commands for your ordered sets, types, bracketed elements, etc. This way you can just tweak the formatting of the command definition and it will apply throughout the tree. 
\documentclass[12pt,a4paper]{article} \usepackage{tikz-qtree} \usepackage{tikz-qtree-compat} \usetikzlibrary{fit,shapes} % tikz libraries that are necessary to make the blue ellipses \usepackage{amsmath} % for the \text{} command that exits math mode \usepackage[normalem]{ulem} % provides \sout \usepackage{lscape} % for page rotation % Custom commands to clean the code up \newcommand{\ord}[1]{\ensuremath{\langle#1\rangle}} % for your ordered sets; you can change back to < > if you want \newcommand{\type}[2]{\ensuremath{\text{#1}_{#2}}} % for your types \newcommand{\br}[1]{\ensuremath{\lbrack \thinspace #1 \thinspace \rbrack}} % for your bracketed elements \newcommand{\arr}{\ensuremath{\rightarrow}} % shorter macro for the arrow \newcommand{\stm}[1]{\ensuremath{\text{\sout{$#1$}}}} % a strikeout command that words in math mode % commands for the three types you're using \newcommand{\tone}{\type{t}{1}} \newcommand{\etwo}{\type{e}{2}} \newcommand{\ethree}{\type{e}{3}} \begin{document} \begin{landscape} \scalebox{.96}{ % scales the entire contents \begin{tikzpicture}[baseline=(current bounding box.center)] \Tree [.TP [.DP [.\node(everyone){Everyone}; % label nodes you want to draw lines/arrows to or from ] ] [.T\1 [.T ] [.AspP [.Asp ] [.\emph{v}P [.\emph{v} ] [.VP [.V hugs ] [.DP [.\node(someone){someone}; ] ] ] ] ] ] ] \begin{scope}[xshift=5.5cm,yshift=-4cm] % use scope to be able to shift parts of the tikzpicture around and scale them independently of the rest of the tikzpicture \node {\Huge$\Rightarrow$}; % put the arrow in a node and make it \Huge \end{scope} \begin{scope}[xshift=9cm,scale=.8] % shift the first tree 9 cm to the right \Tree [.\node(root-surface){$\ord{\tone, \br{\emptyset}, \br{\emptyset}}$}; [.\node(NPleft-surface){$\ord{(\etwo \arr \tone) \arr \tone), \br{\ethree}}$}; [.\node(universal-surface){$\forall$}; ] ] [.$\ord{(\etwo \arr \tone), \br{\stm{\ethree}}, \br{\emptyset}}$ [.\node(ne-surface){$\etwo$}; ] [.\node(NPright-surface){$\ord{\tone, \br{\etwo}, \br{\emptyset}}$}; [.\node(VPleft-surface){$\ord{(\ethree \arr \tone) \arr \tone, \br{\etwo}, \br{\emptyset}}$}; [.\node(existential-surface){$\exists$}; ] ] [.$\ord{(\ethree \arr \tone), \br{\etwo}, \br{\stm{\ethree}}}$ [.\node(ue-surface){$\ethree$}; ] [.$\ord{\tone, \br{\etwo}, \br{\ethree}}$ [.\node [circle,draw] (me-surface) {$\etwo$}; ] [.$\ord{(\etwo \arr \tone), \br{\ethree}}$ [.$\ord{(\ethree \arr \etwo \arr \tone), \br{\emptyset}}$ ] [. \node [circle,draw] (le-surface) {$\ethree$}; ] ] ] ] ] ] ] \draw[semithick, dashed, ->, >=stealth] (le-surface)..controls +(-1,-1) and +(0,-4) .. (ue-surface); \draw[semithick, dashed, ->, >=stealth] (me-surface)..controls +(-1,-1) and +(0,-4) .. 
(ne-surface); % the caption node is positioned relative to the scope \node at (5,1.5) {\textbf{Surface Scope}}; % draw a blue ellipse around the nodes listed after fit =, tweak inner sep to make it slightly bigger/smaller \node [draw=blue, ellipse, thick, inner sep=-4pt, fit = (root-surface) (NPleft-surface) (NPright-surface)] {}; \node [draw=blue, ellipse, thick, inner sep=-5pt, fit = (VPleft-surface) (le-surface)] {}; % position the label of the ellipses to relative to one of the named nodes in the tree \node [above=1cm, right=1cm] at (NPright-surface) {\color{blue} NP}; \node [above=1.5cm, right=0.5cm] at (le-surface) {\color{blue} VP}; \end{scope} \begin{scope}[xshift=9cm,yshift=-8cm,scale=.8] % the inverse scope tree is positioned 9 cm to the right and 8 cm down \Tree [.$\ord{\tone, \br{\emptyset}}$ [.$\ord{(\ethree \arr \tone) \arr \tone), \br{\emptyset}}$ [.\node(existential-inverse){$\exists$}; ] ] [.$\ord{(\ethree \arr \tone), \br{\stm{\ethree}}}$ [.\node(ue-inverse){$\ethree$}; ] [.\node[draw]{$\ord{\tone, \br{\ethree}}$}; [.$\ord{(\etwo \arr \tone) \arr \tone), \br{\ethree}}$ [.\node(universal-inverse){$\forall$}; ] ] [.$\ord{(\etwo \arr \tone), \br{\ethree}}$ [.$\ord{(\ethree \arr \etwo \arr \tone), \br{\ethree}}$ ] [.\node [circle,draw] (le-inverse) {$\ethree$}; ] ] ] ] ] \draw[semithick, dashed, ->, >=stealth] (le-inverse)..controls +(-2,-1) and +(0,-4) .. (ue-inverse); \node at (5,1) {\textbf{Inverse Scope}}; \end{scope} % These are the red lines, drawn between nodes in different scopes \draw[thick, color=red] (someone)..controls +(0,-2) and +(-1,-1)..(existential-inverse); \draw[thick, color=red] (someone)..controls +(0,-2) and +(0,-3)..(existential-surface); \draw[thick, color=red] (everyone)..controls +(0,-10) and +(-1,-1)..(universal-inverse); \draw[thick, color=red] (everyone)..controls +(0,-9) and +(0,-8)..(universal-surface); \end{tikzpicture} } \end{landscape} \end{document}
1) Given \(\vecs r(t)=(3t^2−2)\,\hat{\mathbf{i}}+(2t−\sin t)\,\hat{\mathbf{j}}\), a. find the velocity of a particle moving along this curve. b. find the acceleration of a particle moving along this curve. Answer: a. \(\vecs v(t)=6t\,\hat{\mathbf{i}}+(2−\cos t)\,\hat{\mathbf{j}}\) b. \(\vecs a(t)=6\,\hat{\mathbf{i}}+\sin t\,\hat{\mathbf{j}}\)

In questions 2 - 5, given the position function, find the velocity, acceleration, and speed in terms of the parameter \(t\).

2) \(\vecs r(t)=e^{−t}\,\hat{\mathbf{i}}+t^2\,\hat{\mathbf{j}}+\tan t\,\hat{\mathbf{k}}\)

3) \(\vecs r(t)=⟨3\cos t,\,3\sin t,\,t^2⟩\) Answer: \(\vecs v(t)=-3\sin t\,\hat{\mathbf{i}}+3\cos t\,\hat{\mathbf{j}}+2t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-3\cos t\,\hat{\mathbf{i}}-3\sin t\,\hat{\mathbf{j}}+2\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{9 + 4t^2}\)

4) \(\vecs r(t)=t^5\,\hat{\mathbf{i}}+(3t^2+2t- 5)\,\hat{\mathbf{j}}+(3t-1)\,\hat{\mathbf{k}}\)

5) \(\vecs r(t)=2\cos t\,\hat{\mathbf{j}}+3\sin t\,\hat{\mathbf{k}}\). The graph is shown here: Answer: \(\vecs v(t)=-2\sin t\,\hat{\mathbf{j}}+3\cos t\,\hat{\mathbf{k}}\) \(\vecs a(t)=-2\cos t\,\hat{\mathbf{j}}-3\sin t\,\hat{\mathbf{k}}\) \(\text{Speed}(t) = \|\vecs v(t)\| = \sqrt{4\sin^2 t+9\cos^2 t}=\sqrt{4+5\cos^2 t}\)

In questions 6 - 8, find the velocity, acceleration, and speed of a particle with the given position function.

6) \(\vecs r(t)=⟨t^2−1,t⟩\)

7) \(\vecs r(t)=⟨e^t,e^{−t}⟩\) Answer: \(\vecs v(t)=⟨e^t,−e^{−t}⟩\), \(\vecs a(t)=⟨e^t, e^{−t}⟩,\) \( \|\vecs v(t)\| = \sqrt{e^{2t}+e^{−2t}}\)

8) \(\vecs r(t)=⟨\sin t,t,\cos t⟩\). The graph is shown here:

9) The position function of an object is given by \(\vecs r(t)=⟨t^2,5t,t^2−16t⟩\). At what time is the speed a minimum? Answer: \(t = 4\)

10) Let \(\vecs r(t)=r\cosh(ωt)\,\hat{\mathbf{i}}+r\sinh(ωt)\,\hat{\mathbf{j}}\). Find the velocity and acceleration vectors and show that the acceleration is proportional to \(\vecs r(t)\).

11) Consider the motion of a point on the circumference of a rolling circle. As the circle rolls, it generates the cycloid \(\vecs r(t)=(ωt−\sin(ωt))\,\hat{\mathbf{i}}+(1−\cos(ωt))\,\hat{\mathbf{j}}\), where \(\omega\) is the angular velocity of the circle and \(b\) is the radius of the circle: Find the equations for the velocity, acceleration, and speed of the particle at any time. Answer: \(\vecs v(t)=(ω−ω\cos(ωt))\,\hat{\mathbf{i}}+(ω\sin(ωt))\,\hat{\mathbf{j}}\) \(\vecs a(t)=(ω^2\sin(ωt))\,\hat{\mathbf{i}}+(ω^2\cos(ωt))\,\hat{\mathbf{j}}\) \(\begin{align*} \text{speed}(t) &= \sqrt{(ω−ω\cos(ωt))^2 + (ω\sin(ωt))^2} \\ &= \sqrt{ω^2 - 2ω^2 \cos(ωt) + ω^2\cos^2(ωt) + ω^2\sin^2(ωt)} \\ &= \sqrt{2ω^2(1 - \cos(ωt))} \end{align*} \)

12) A person on a hang glider is spiraling upward as a result of the rapidly rising air on a path having position vector \(\vecs r(t)=(3\cos t)\,\hat{\mathbf{i}}+(3\sin t)\,\hat{\mathbf{j}}+t^2\,\hat{\mathbf{k}}\). The path is similar to that of a helix, although it is not a helix. The graph is shown here: Find the following quantities: a. The velocity and acceleration vectors b. The glider’s speed at any time Answer: \(∥\vecs v(t)∥=\sqrt{9+4t^2}\) c. The times, if any, at which the glider’s acceleration is orthogonal to its velocity

13) Given that \(\vecs r(t)=⟨e^{−5t}\sin t,e^{−5t}\cos t,4e^{−5t}⟩\) is the position vector of a moving particle, find the following quantities: a. The velocity of the particle Answer: \(\vecs v(t)=⟨e^{−5t}(\cos t−5\sin t),−e^{−5t}(\sin t+5\cos t),−20e^{−5t}⟩\) b. The speed of the particle c.
The acceleration of the particle Answer: \(\vecs a(t)=⟨e^{−5t}(−\sin t−5\cos t)−5e^{−5t}(\cos t−5\sin t), −e^{−5t}(\cos t−5\sin t)+5e^{−5t}(\sin t+5\cos t),100e^{−5t}⟩\) 14) Find the maximum speed of a point on the circumference of an automobile tire of radius 1 ft when the automobile is traveling at 55 mph. 15) Find the position vector-valued function \(\vecs r(t)\), given that \(\vecs a(t)=\hat{\mathbf{i}}+e^t \,\hat{\mathbf{j}}, \quad \vecs v(0)=2\,\hat{\mathbf{j}}\), and \(\vecs r(0)=2\,\hat{\mathbf{i}}\). 16) Find \(\vecs r(t)\) given that \(\vecs a(t)=−32\,\hat{\mathbf{j}}, \vecs v(0)=600\sqrt{3} \,\hat{\mathbf{i}}+600\,\hat{\mathbf{j}}\), and \(\vecs r(0)=\vecs 0\). 17) The acceleration of an object is given by \(\vecs a(t)=t\,\hat{\mathbf{j}}+t\,\hat{\mathbf{k}}\). The velocity at \(t=1\) sec is \(\vecs v(1)=5\,\hat{\mathbf{j}}\) and the position of the object at \(t=1\) sec is \(\vecs r(1)=0\,\hat{\mathbf{i}}+0\,\hat{\mathbf{j}}+0\,\hat{\mathbf{k}}\). Find the object’s position at any time. Answer: \(\vecs r(t)=0\,\hat{\mathbf{i}}+(\frac{1}{6}t^3+4.5t−\frac{14}{3})\,\hat{\mathbf{j}}+(\frac{t^3}{6}−\frac{1}{2}t+\frac{1}{3})\,\hat{\mathbf{k}}\) Projectile Motion 18) A projectile is shot in the air from ground level with an initial velocity of 500 m/sec at an angle of 60° with the horizontal. The graph is shown here: a. At what time does the projectile reach maximum height? Answer: \(44.185\) sec b. What is the approximate maximum height of the projectile? c. At what time is the maximum range of the projectile attained? Answer: \(t=88.37\) sec d. What is the maximum range? e. What is the total flight time of the projectile? Answer: \(t=88.37\) sec 19) A projectile is fired at a height of 1.5 m above the ground with an initial velocity of 100 m/sec and at an angle of 30° above the horizontal. Use this information to answer the following questions: a. Determine the maximum height of the projectile. b. Determine the range of the projectile. Answer: The range is approximately 886.29 m. 20) A golf ball is hit in a horizontal direction off the top edge of a building that is 100 ft tall. How fast must the ball be launched to land 450 ft away? 21) A projectile is fired from ground level at an angle of 8° with the horizontal. The projectile is to have a range of 50 m. Find the minimum velocity (speed) necessary to achieve this range. Answer: \(v=42.16\) m/sec e. Prove that an object moving in a straight line at a constant speed has an acceleration of zero.
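Several of the answers above are easy to double-check symbolically. The following sketch, assuming Python with SymPy, verifies exercise 3; the same three lines of calculus apply to any of the position functions in this set.

import sympy as sp

t = sp.symbols('t', real=True)

# Exercise 3: r(t) = <3 cos t, 3 sin t, t^2>
r = sp.Matrix([3*sp.cos(t), 3*sp.sin(t), t**2])

v = r.diff(t)                          # velocity
a = v.diff(t)                          # acceleration
speed = sp.simplify(sp.sqrt(v.dot(v))) # magnitude of the velocity

print(v.T)      # Matrix([[-3*sin(t), 3*cos(t), 2*t]])
print(a.T)      # Matrix([[-3*cos(t), -3*sin(t), 2]])
print(speed)    # sqrt(4*t**2 + 9)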
When I click on the Wolfram output, I see $-\frac{2}{3} \sqrt{1 - \cos(3t)}$ only in the indefinite integral calculation: there are no bounds at all. (There are some specific, numerically computed definite integrals below the symbolic calculation, but this is not where $-\frac{2}{3} \sqrt{1 - \cos(3t)}$ is.) This is important, because you cannot in general evaluate definite integrals by grabbing Wolfram Alpha's symbolically calculated output for an indefinite integral, and then plugging in bounds. The issue is that the output is not guaranteed to work for all possible choices of bounds. In general it can only be expected to work when the bounds lie in a specific interval. This is what that "restricted $t$ values" business is all about. Wolfram Alpha is telling you to look out, because it has detected that the function its algorithm produced may only work as an indefinite integral on an interval of $t$ values--- not everywhere. In slightly more precise language: you're putting in $f(t) = \sqrt{1 + \cos(t) \cos(2t) - \sin(t) \sin(2t)}$ and asking for an indefinite integral. An indefinite integral of $f$ is by definition an antiderivative of $f$, that is, a function whose derivative is $f$. Its output $F(t) = -\frac{2}{3} \sqrt{1 - \cos(3t)}$ has the property that $F'(t) = f(t)$, but only for $t$ in a restricted interval. The equality $F'(t) = f(t)$ does not hold for all $t$. So you can't evaluate all definite integrals involving $f$, by plugging into $F$. Recall the hypotheses of the theorem that lets you evaluate integrals with antiderivatives, namely, the fundamental theorem of calculus. It says: if you have a nice enough function $f(t)$ defined on $[a,b]$ and you have some function $F$ that satisfies $F'(t) = f(t)$ for all $t$ in $[a,b]$, then$$ \int_a^b f(t) \, dt = F(b) - F(a). $$If $F'(t) = f(t)$ does not hold for all $t$ in $[a,b]$, then the FTC doesn't apply, and you have no reason to expect $\int_a^b f(t) \, dt$ to be $F(b) - F(a)$. That's what's going on here. To convince yourself that that's what's going on here, note that your $f(t)$ is nonnegative for all $t$ (graph it). If $F(t)$ were to satisfy $F'(t) = f(t)$ for all $t$, then because the derivative of $F$ is nonnegative, the function $F$ would have to be nondecreasing, on the whole real line. But you can see (look at the plot of $F$) that it isn't. Another way to convince yourself: ask Wolfram to plot the derivative of $F$, and also $f$, on the same axes. You'll see that the graphs match, but only on some intervals, and not on the entire interval $[0, 2\pi]$ (in particular: $F'$ has jump discontinuities in this interval, and $f$ doesn't). So: you can't expect the integral to be $F(2\pi) - F(0)$. Why is Wolfram bugging out? I don't know the precise algorithm it's using, but roughly, it's because the function $f$ you're asking to integrate on $[0,2\pi]$ is not differentiable on all of $[0,2\pi]$--- the graph has several sharp corners. Symbolic calculators aren't good at handling functions like this. (Humans aren't, either: doing calculus with such functions generally requires working in pieces, and paying very close attention to the hypotheses of each calculational step--- ie, you're not just manipulating symbols anymore.) Here is another perspective. If you look up what "substitution" for definite integrals actually says--- as a statement that, under specific hypotheses, one can rewrite one definite integral as another--- you'll find hypotheses preventing the calculation suggested by Wolfram Alpha from working for your bounds. 
The familiar process of symbolic replacement we know for "doing a change of variables"--- where you replace symbols with other symbols and put in new numbers as the bounds--- is guaranteed to work only when the relationship between the old and new variable defines a one-to-one function. If you plot $s = \cos(u) + 1$ as a function of $u = 3t$ on the $u$-interval from $3 \cdot0$ to $3 \cdot 2\pi$ you see it's a wavelike graph, not a one-to-one function (it fails the horizontal line test). I hope one of these perspectives helped. [I should say: your issue isn't all that uncommon. You can expect it almost any time you are symbolically evaluating definite integrals using symbolic formulas found via "trigonometric substitutions". I don't think any software package is any better at it than Wolfram Alpha is.]
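The mismatch described above is easy to confirm numerically. A small sketch, assuming Python with NumPy and SciPy, integrates f directly over [0, 2*pi] and compares the result with F(2*pi) - F(0); the two disagree, exactly as the answer explains.

import numpy as np
from scipy.integrate import quad

f = lambda t: np.sqrt(1 + np.cos(t)*np.cos(2*t) - np.sin(t)*np.sin(2*t))
F = lambda t: -(2.0/3.0) * np.sqrt(1 - np.cos(3*t))

# The integrand has corners at t = pi/3, pi, 5*pi/3, so tell quad about them.
val, err = quad(f, 0, 2*np.pi, points=[np.pi/3, np.pi, 5*np.pi/3])

print(val)                  # about 5.657, i.e. 4*sqrt(2)
print(F(2*np.pi) - F(0))    # 0.0 -- plugging the bounds into F gives the wrong value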
Exponential boundary stabilization for nonlinear wave equations with localized damping and nonlinear boundary condition

Division of Mathematical Sciences, Graduate School of Comparative Culture, Kurume University, Miimachi, Kurume, Fukuoka 839-8502, Japan

The paper studies a nonlinear wave equation on a domain $D\subset \mathbb{R}^{d}$ with boundary $\Gamma=\Gamma_{0}\cup\Gamma_{1}$, $\bar{\Gamma}_{0}\cap\bar{\Gamma}_{1}=\emptyset$:
$$\left\{ \begin{gathered} u_{tt}(t)-\rho(t)\Delta u(t)+b(x)u_{t}(t)=f(u(t)), \\ u(t)=0 \ \ \text{on } \Gamma_{0}\times(0,T), \\ \dfrac{\partial u(t)}{\partial\nu}+\gamma(u_{t}(t))=0 \ \ \text{on } \Gamma_{1}\times(0,T), \\ u(0)=u_{0},\qquad u_{t}(0)=u_{1}, \end{gathered} \right.$$
under conditions of the form $\left\| u_{0} \right\| < \lambda_{\beta}$ and $E(0) < d_{\beta}$, where $\lambda_{\beta}$ and $d_{\beta}$ are constants defined in the paper.

Mathematics Subject Classification: Primary: 35L05; Secondary: 35L20, 35B40.

Citation: Takeshi Taniguchi. Exponential boundary stabilization for nonlinear wave equations with localized damping and nonlinear boundary condition. Communications on Pure & Applied Analysis, 2017, 16 (5): 1571-1585. doi: 10.3934/cpaa.2017075
We show that a directed graph $E$ is a finite graph with no sinks if and only if, for each commutative unital ring $R$, the Leavitt path algebra $L_{R}(E)$ is isomorphic to an algebraic Cuntz–Krieger algebra if and only if the $C^{\ast }$-algebra $C^{\ast }(E)$ is unital and $\text{rank}(K_{0}(C^{\ast }(E)))=\text{rank}(K_{1}(C^{\ast }(E)))$. Let $k$ be a field and $k^{\times }$ be the group of units of $k$. When $\text{rank}(k^{\times })<\infty$, we show that the Leavitt path algebra $L_{k}(E)$ is isomorphic to an algebraic Cuntz–Krieger algebra if and only if $L_{k}(E)$ is unital and $\text{rank}(K_{1}(L_{k}(E)))=(\text{rank}(k^{\times })+1)\text{rank}(K_{0}(L_{k}(E)))$. We also show that any unital $k$-algebra which is Morita equivalent or stably isomorphic to an algebraic Cuntz–Krieger algebra is isomorphic to an algebraic Cuntz–Krieger algebra. As a consequence, corners of algebraic Cuntz–Krieger algebras are algebraic Cuntz–Krieger algebras.

Keller proved in 1999 that the Gerstenhaber algebra structure on the Hochschild cohomology of an algebra is an invariant of the derived category. In this paper, we adapt his approach to show that the Gerstenhaber algebra structure on the Tate–Hochschild cohomology of an algebra is preserved under singular equivalences of Morita type with level, a notion introduced by the author in previous work.

Nakayama automorphisms play an important role in the fields of noncommutative algebraic geometry and noncommutative invariant theory. However, their computations are not easy in general. We compute the Nakayama automorphism ν of an Ore extension R[x; σ, δ] over a polynomial algebra R in n variables for an arbitrary n. The formula of ν is obtained explicitly. When σ is not the identity map, the invariant EG is also investigated in terms of Zhang’s twist, where G is a cyclic group sharing the same order with σ.

In this paper we consider the algebraic crossed product ${\mathcal{A}}:=C_{K}(X)\rtimes _{T}\mathbb{Z}$ induced by a homeomorphism $T$ on the Cantor set $X$, where $K$ is an arbitrary field with involution and $C_{K}(X)$ denotes the $K$-algebra of locally constant $K$-valued functions on $X$. We investigate the possible Sylvester matrix rank functions that one can construct on ${\mathcal{A}}$ by means of full ergodic $T$-invariant probability measures $\mu$ on $X$. To do so, we present a general construction of an approximating sequence of $\ast$-subalgebras ${\mathcal{A}}_{n}$ which are embeddable into a (possibly infinite) product of matrix algebras over $K$.
This enables us to obtain a specific embedding of the whole$\ast$-algebra${\mathcal{A}}$into${\mathcal{M}}_{K}$, the well-known von Neumann continuous factor over$K$, thus obtaining a Sylvester matrix rank function on${\mathcal{A}}$by restricting the unique one defined on${\mathcal{M}}_{K}$. This process gives a way to obtain a Sylvester matrix rank function on${\mathcal{A}}$, unique with respect to a certain compatibility property concerning the measure$\unicode[STIX]{x1D707}$, namely that the rank of a characteristic function of a clopen subset$U\subseteq X$must equal the measure of$U$. Let R→U be an associative ring epimorphism such that U is a flat left R-module. Assume that the related Gabriel topology$\mathbb{G}$of right ideals in R has a countable base. Then we show that the left R-module U has projective dimension at most 1. Furthermore, the abelian category of left contramodules over the completion of R at$\mathbb{G}$fully faithfully embeds into the Geigle–Lenzing right perpendicular subcategory to U in the category of left R-modules, and every object of the latter abelian category is an extension of two objects of the former one. We discuss conditions under which the two abelian categories are equivalent. Given a right linear topology on an associative ring R, we consider the induced topology on every left R-module and, for a perfect Gabriel topology$\mathbb{G}$, compare the completion of a module with an appropriate Ext module. Finally, we characterize the U-strongly flat left R-modules by the two conditions of left positive-degree Ext-orthogonality to all left U-modules and all$\mathbb{G}$-separated$\mathbb{G}$-complete left R-modules. We show that a$\mathbb{P}$-object and simple configurations of$\mathbb{P}$-objects have a formal derived endomorphism algebra. Hence the triangulated category (classically) generated by such objects is independent of the ambient triangulated category. We also observe that the category generated by the structure sheaf of a smooth projective variety over the complex numbers only depends on its graded cohomology algebra. We give a proof of the formality conjecture of Kaledin and Lehn: on a complex projective K3 surface, the differential graded (DG) algebra$\operatorname{RHom}^{\bullet }(F,F)$is formal for any sheaf$F$polystable with respect to an ample line bundle. Our main tool is the uniqueness of the DG enhancement of the bounded derived category of coherent sheaves. We also extend the formality result to derived objects that are polystable with respect to a generic Bridgeland stability condition. In order to better unify the tilting theory and the Auslander–Reiten theory, Xi introduced a general transpose called the relative transpose. Originating from this, we introduce and study the cotranspose of modules with respect to a left A-module T called n-T-cotorsion-free modules. Also, we give many properties and characteristics of n-T-cotorsion-free modules under the help of semi-Wakamatsu-tilting modules AT. We introduce the class of partially invertible modules and show that it is an inverse category which we call the Picard inverse category. We use this category to generalize the classical construction of crossed products to, what we call, generalized epsilon-crossed products and show that these coincide with the class of epsilon-strongly groupoid-graded rings. 
We then use generalized epsilon-crossed groupoid products to obtain a generalization, from the group-graded situation to the groupoid-graded case, of the bijection from a certain second cohomology group, defined by the grading and the functor from the groupoid in question to the Picard inverse category, to the collection of equivalence classes of rings epsilon-strongly graded by the groupoid. We study Van den Bergh's non-commutative symmetric algebra 𝕊nc(M) (over division rings) via Minamoto's theory of Fano algebras. In particular, we show that 𝕊nc(M) is coherent, and its proj category ℙnc(M) is derived equivalent to the corresponding bimodule species. This generalizes the main theorem of [8], which in turn is a generalization of Beilinson's derived equivalence. As corollaries, we show that ℙnc(M) is hereditary and there is a structure theorem for sheaves on ℙnc(M) analogous to that for ℙ1. We prove formulas of different types that allow us to calculate the Gerstenhaber bracket on the Hochschild cohomology of an algebra using some arbitrary projective bimodule resolution for it. Using one of these formulas, we give a new short proof of the derived invariance of the Gerstenhaber algebra structure on Hochschild cohomology. We also give some new formulas for the Connes differential on the Hochschild homology that lead to formulas for the Batalin–Vilkovisky (BV) differential on the Hochschild cohomology in the case of symmetric algebras. Finally, we use one of the obtained formulas to provide a full description of the BV structure and, correspondingly, the Gerstenhaber algebra structure on the Hochschild cohomology of a class of symmetric algebras. We apply the Auslander–Buchweitz approximation theory to show that the Iyama and Yoshino's subfactor triangulated category can be realized as a triangulated quotient. Applications of this realization go in three directions. Firstly, we recover both a result of Iyama and Yang and a result of the third author. Secondly, we extend the classical Buchweitz's triangle equivalence from Iwanaga–Gorenstein rings to Noetherian rings. Finally, we obtain the converse of Buchweitz's triangle equivalence and a result of Beligiannis, and give characterizations for Iwanaga–Gorenstein rings and Gorenstein algebras. We show that silting modules are closely related with localizations of rings. More precisely, every partial silting module gives rise to a localization at a set of maps between countably generated projective modules and, conversely, every universal localization, in the sense of Cohn and Schofield, arises in this way. To establish these results, we further explore the finite-type classification of tilting classes and we use the morphism category to translate silting modules into tilting objects. In particular, we prove that silting modules are of finite type. Let$k$be a commutative ring, let${\mathcal{C}}$be a small,$k$-linear, Hom-finite, locally bounded category, and let${\mathcal{B}}$be a$k$-linear abelian category. We construct a Frobenius exact subcategory${\mathcal{G}}{\mathcal{P}}({\mathcal{G}}{\mathcal{P}}_{P}({\mathcal{B}}^{{\mathcal{C}}}))$of the functor category${\mathcal{B}}^{{\mathcal{C}}}$, and we show that it is a subcategory of the Gorenstein projective objects${\mathcal{G}}{\mathcal{P}}({\mathcal{B}}^{{\mathcal{C}}})$in${\mathcal{B}}^{{\mathcal{C}}}$. Furthermore, we obtain criteria for when${\mathcal{G}}{\mathcal{P}}({\mathcal{G}}{\mathcal{P}}_{P}({\mathcal{B}}^{{\mathcal{C}}}))={\mathcal{G}}{\mathcal{P}}({\mathcal{B}}^{{\mathcal{C}}})$. 
We show in examples that this can be used to compute${\mathcal{G}}{\mathcal{P}}({\mathcal{B}}^{{\mathcal{C}}})$explicitly. We consider the unital Banach algebra$\ell ^{1}(\mathbb{Z}_{+})$and prove directly, without using cyclic cohomology, that the simplicial cohomology groups${\mathcal{H}}^{n}(\ell ^{1}(\mathbb{Z}_{+}),\ell ^{1}(\mathbb{Z}_{+})^{\ast })$vanish for all$n\geqslant 2$. This proceeds via the introduction of an explicit bounded linear operator which produces a contracting homotopy for$n\geqslant 2$. This construction is generalised to unital Banach algebras$\ell ^{1}({\mathcal{S}})$, where${\mathcal{S}}={\mathcal{G}}\cap \mathbb{R}_{+}$and${\mathcal{G}}$is a subgroup of$\mathbb{R}_{+}$. If H is a monoid and a = u1 ··· uk ∈ H with atoms (irreducible elements) u1, … , uk, then k is a length of a, the set of lengths of a is denoted by Ⅼ(a), and ℒ(H) = {Ⅼ(a) | a ∈ H} is the system of sets of lengths of H. Let R be a hereditary Noetherian prime (HNP) ring. Then every element of the monoid of non-zero-divisors R• can be written as a product of atoms. We show that if R is bounded and every stably free right R-ideal is free, then there exists a transfer homomorphism from R• to the monoid B of zero-sum sequences over a subset Gmax(R) of the ideal class group G(R). This implies that the systems of sets of lengths, together with further arithmetical invariants, of the monoids R• and B coincide. It is well known that commutative Dedekind domains allow transfer homomorphisms to monoids of zero-sum sequences, and the arithmetic of the latter has been the object of much research. Our approach is based on the structure theory of finitely generated projective modules over HNP rings, as established in the recent monograph by Levy and Robson. We complement our results by giving an example of a non-bounded HNP ring in which every stably free right R-ideal is free but which does not allow a transfer homomorphism to a monoid of zero-sum sequences over any subset of its ideal class group. We solve two problems in representation theory for the periplectic Lie superalgebra$\mathfrak{p}\mathfrak{e}(n)$, namely, the description of the primitive spectrum in terms of functorial realisations of the braid group and the decomposition of category${\mathcal{O}}$into indecomposable blocks. To solve the first problem, we establish a new type of equivalence between category${\mathcal{O}}$for all (not just simple or basic) classical Lie superalgebras and a category of Harish-Chandra bimodules. The latter bimodules have a left action of the Lie superalgebra but a right action of the underlying Lie algebra. To solve the second problem, we establish a BGG reciprocity result for the periplectic Lie superalgebra. We introduce what is meant by an AC-Gorenstein ring. It is a generalized notion of Gorenstein ring that is compatible with the Gorenstein AC-injective and Gorenstein AC-projective modules of Bravo–Gillespie–Hovey. It is also compatible with the notion of$n$-coherent rings introduced by Bravo–Perez. So a$0$-coherent AC-Gorenstein ring is precisely a usual Gorenstein ring in the sense of Iwanaga, while a$1$-coherent AC-Gorenstein ring is precisely a Ding–Chen ring. We show that any AC-Gorenstein ring admits a stable module category that is compactly generated and is the homotopy category of two Quillen equivalent abelian model category structures. 
One is projective with cofibrant objects that are Gorenstein AC-projective modules while the other is an injective model structure with fibrant objects that are Gorenstein AC-injectives. We introduce the notion of a perfect path for a monomial algebra. We classify indecomposable non-projective Gorenstein-projective modules over the given monomial algebra via perfect paths. We apply the classification to a quadratic monomial algebra and describe explicitly the stable category of its Gorenstein-projective modules. We introduce properties of metric spaces and, specifically, finitely generated groups with word metrics, which we call coarse coherence and coarse regular coherence. They are geometric counterparts of the classical algebraic notion of coherence and the regular coherence property of groups defined and studied by Waldhausen. The new properties can be defined in the general context of coarse metric geometry and are coarse invariants. In particular, they are quasi-isometry invariants of spaces and groups. The new framework allows us to prove structural results by developing permanence properties, including the particularly important fibering permanence property, for coarse regular coherence.
When we describe a curve using polar coordinates, it is still a curve in the \(x\)-\(y\) plane. We would like to be able to compute slopes and areas for these curves using polar coordinates. We have seen that \(x=r\cos\theta\) and \(y=r\sin\theta\) describe the relationship between polar and rectangular coordinates. If in turn we are interested in a curve given by \(r=f(\theta)\), then we can write \(x=f(\theta)\cos\theta\) and \(y=f(\theta)\sin\theta\), describing \(x\) and \(y\) in terms of \(\theta\) alone. The first of these equations describes \(\theta\) implicitly in terms of \(x\), so using the chain rule we may compute \[{dy\over dx}={dy\over d\theta}{d\theta\over dx}.\] Since \(d\theta/dx=1/(dx/d\theta)\), we can instead compute \[ {dy\over dx}={dy/d\theta\over dx/d\theta}= {f(\theta)\cos\theta + f'(\theta)\sin\theta\over -f(\theta)\sin\theta + f'(\theta)\cos\theta}. \] Example \(\PageIndex{1}\): Find the points at which the curve given by \(r=1+\cos\theta\) has a vertical or horizontal tangent line. Since this function has period \(2\pi\), we may restrict our attention to the interval \([0,2\pi)\) or \((-\pi,\pi]\), as convenience dictates. First, we compute the slope: \[ {dy\over dx}={(1+\cos\theta)\cos\theta-\sin\theta\sin\theta\over -(1+\cos\theta)\sin\theta-\sin\theta\cos\theta}= {\cos\theta+\cos^2\theta-\sin^2\theta\over -\sin\theta-2\sin\theta\cos\theta}. \] This fraction is zero when the numerator is zero (and the denominator is not zero). The numerator is \(2\cos^2\theta+\cos\theta-1\) so by the quadratic formula $$ \cos\theta={-1\pm\sqrt{1+4\cdot2}\over 4} = -1 \quad\hbox{or}\quad {1\over 2}. $$ This means \(\theta\) is \(\pi\) or \(\pm \pi/3\). However, when \(\theta=\pi\), the denominator is also \(0\), so we cannot conclude that the tangent line is horizontal. Setting the denominator to zero we get $$\eqalign{ -\sin\theta-2\sin\theta\cos\theta &= 0\cr \sin\theta(1+2\cos\theta)&=0,\cr} $$ so either \(\sin\theta=0\) or \(\cos\theta=-1/2\). The first is true when \(\theta\) is \(0\) or \(\pi\), the second when \(\theta\) is \(2\pi/3\) or \(4\pi/3\). However, as above, when \(\theta=\pi\), the numerator is also \(0\), so we cannot conclude that the tangent line is vertical. Figure 10.2.1 shows points corresponding to \(\theta\) equal to \(0\), \(\pm\pi/3\), \(2\pi/3\) and \(4\pi/3\) on the graph of the function. Note that when \(\theta=\pi\) the curve hits the origin and does not have a tangent line. We know that the second derivative \(f''(x)\) is useful in describing functions, namely, in describing concavity. We can compute \(f''(x)\) in terms of polar coordinates as well. We already know how to write \(dy/dx=y'\) in terms of \(\theta\), then \[ {d\over dx}{dy\over dx}= {dy'\over dx}={dy'\over d\theta}{d\theta\over dx}={dy'/d\theta\over dx/d\theta}.\] Example \(\PageIndex{2}\): We find the second derivative for the cardioid \(r=1+\cos\theta\): \[\eqalign{ {d\over d\theta}{\cos\theta+\cos^2\theta-\sin^2\theta\over -\sin\theta-2\sin\theta\cos\theta}\cdot{1\over dx/d\theta} &=\cdots= {3(1+\cos\theta)\over (\sin\theta+2\sin\theta\cos\theta)^2} \cdot{1\over-(\sin\theta+2\sin\theta\cos\theta)}\cr &={-3(1+\cos\theta)\over(\sin\theta+2\sin\theta\cos\theta)^3}.\cr}\] The ellipsis here represents rather a substantial amount of algebra. We know from above that the cardioid has horizontal tangents at \(\pm \pi/3\); substituting these values into the second derivative we get \( y''(\pi/3)=-\sqrt{3}/2\) and \( y''(-\pi/3)=\sqrt{3}/2\), indicating concave down and concave up respectively.
This agrees with the graph of the function.
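The tangent-line computation above is easy to double-check with a computer algebra system. Here is a short sketch (my addition, not part of the original text) using Python's sympy: it recomputes dy/dθ and dx/dθ for the cardioid r = 1 + cos θ, solves for where each vanishes, and evaluates the second derivative at ±π/3.

import sympy as sp

theta = sp.symbols('theta')
f = 1 + sp.cos(theta)                 # the cardioid r = f(theta)

x = f * sp.cos(theta)                 # x = f(theta) cos(theta)
y = f * sp.sin(theta)                 # y = f(theta) sin(theta)

num = sp.simplify(sp.diff(y, theta))  # dy/dtheta
den = sp.simplify(sp.diff(x, theta))  # dx/dtheta

# Horizontal-tangent candidates: cos(theta) = 1/2 or -1,
# i.e. theta = ±pi/3 (mod 2pi) and the degenerate theta = pi.
print("dy/dtheta = 0 at:", sp.solve(num, theta))

# Vertical-tangent candidates: sin(theta) = 0 or cos(theta) = -1/2,
# i.e. theta = 0, pi and theta = ±2pi/3 (mod 2pi).
print("dx/dtheta = 0 at:", sp.solve(den, theta))

# Second derivative y'' = d/dtheta(dy/dx) / (dx/dtheta), evaluated at ±pi/3;
# these should come out to -sqrt(3)/2 and +sqrt(3)/2 as in the example.
ypp = sp.diff(num / den, theta) / den
print(sp.simplify(ypp.subs(theta, sp.pi / 3)))
print(sp.simplify(ypp.subs(theta, -sp.pi / 3)))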
Consider a holomorphic function $f:\mathbb C \backslash \{0\}\rightarrow \mathbb C$. I want to prove that the square: $\partial R=[1+i,-1+i]+[-1+i,-1-i]+[-1-i,1-i]+[1-i,1+i]$ And the triangle: $\partial\Delta=[1-i,i]+[i,-1-i]+[-1-i,1-i]$ have the same line integral: $\int_{\partial R}f(z)dz=\int_{\partial \Delta}f(z)dz$ Any tips what to use here? Is the reasoning that the set is open, therefore the line integral of the triangle is zero, and therefore the line integral of the square composed of two triangles is zero? I feel that something is wrong with the answer...
This wordpress blog is configured to use the mathjax-latex plugin, along with a slightly customized MathJax configuration. This allows inline latex equations and display latex to be specified with the plugin's delimiters, or with amsmath environments such as the one in the sample below. Here is a sample (Stokes theorem) For blades \(F \in \bigwedge^{s}\), and \(m\) volume element \(d^k \mathbf{x}, s < k\), \begin{equation*} \int_V d^k \mathbf{x} \cdot (\boldsymbol{\partial} \wedge F) = \int_{\partial V} d^{k-1} \mathbf{x} \cdot F. \end{equation*} Here the volume integral is over a \(m\) dimensional surface (manifold), \(\boldsymbol{\partial}\) is the projection of the gradient onto the tangent space of the manifold, and \(\partial V\) indicates integration over the boundary of \(V\). This was typeset with: For blades \(F \in \bigwedge^{s}\), and \(m\) volume element \(d^k \mathbf{x}, s < k\), \begin{equation*} \int_V d^k \mathbf{x} \cdot (\boldsymbol{\partial} \wedge F) = \int_{\partial V} d^{k-1} \mathbf{x} \cdot F. \end{equation*} Here the volume integral is over a \(m\) dimensional surface (manifold), \(\boldsymbol{\partial}\) is the projection of the gradient onto the tangent space of the manifold, and \(\partial V\) indicates integration over the boundary of \(V\). My old (wordpress.com) hosted blog used the wp-latex plugin (implicitly), which is far inferior in many many ways (but is much faster to render): Any latex text had to be in one (sometimes very long) line, which was a maintenance pain. There was no support for equation numbering. The mathjax-latex plugin (when the AMS numbering option is enabled) allows an author to use \label, and \eqref markers. mathjax-latex, if a custom copy of MathJax is used, can be configured to support \newcommand macros. I can use a set of mathjax macros, for those places that I use peeters_macros.sty in standalone latex. This allows me, for example, to write \Bx for \(\Bx\) instead of \mathbf{x}, or \lr{…} for \left( … \right). Because of these features I should be able to scrap my old latex-to-wordpress monstrosity of a script that I wrote to cater to the primitive subset of latex that wordpress.com allows by default. customizing MathJax A customized version of MathJax requires laying down a copy of the distribution cd /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins/mathjax-latex wget https://github.com/mathjax/MathJax/zipball/v2.3-latest mv v2.3-latest mathjax-v2.3-latest.zip unzip mathjax-v2.3-latest.zip mv mathjax-MathJax-78ea6af MathJax editing the configuration file (MathJax/config/default.js) as desired, and then pointing your wordpress plugin configuration at this modified configuration. Under the location setting you put a fully qualified URL for the MathJax.js in your unzipped distribution. For this site, after sftp copy of the MathJax directory to mathjax-latex, this was: http://peeterjoot.com/wp-content/plugins/mathjax-latex/MathJax/MathJax.js When trying this out on an amazon EC2 VM, this configuration step was easier than with godaddy, since godaddy only offers sftp access (on an EC2 image you can unzip on the VM directly which is much faster and easier).
Let $S_{g,1}$ be the surface of genus $g \geq 1$ and $1$ boundary component. Let $Mod(S_{g,1})$ be the mapping class group in which we allow isotopies to rotate the action on the boundary (equivalently think of it as the mapping class group of the once-punctured surface of genus $g$). Is every element of $Mod(S_{g,1})$ a composition of right-handed Dehn twists? Note that this is true for $S_{g,0}$ as stated in page 124 of A primer on Mapping Class Groups by Farb and Margalit under the name of "a strange fact". Edit: I am going to comment ThiKu's answer to avoid further confusion. In A. Wand: Factorisation of Surface Diffeomorphisms and in Baker, Etnyre and Van Horn-Morris: Cabling, Contact Structures and and Mapping Class Monoids the authors, independently, provide with examples of diffeomorphisms in $Veer(\Sigma_{2,1},\partial \Sigma_{2,1})$ which are not in $Dehn^+(\Sigma_{2,1}, \partial \Sigma_{2,1})$. That is, right-veering diffeomorphisms which are not a product of right-handed Dehn twists. However, the mapping class group in which these results hold is $Mod( \Sigma_{2,1}, \partial \Sigma_{2,1})$, that is, the mapping class group of automorphisms fixing the boundary and isotopies fixing the boundary as well. This, a priori, does not yield counter-examples to my question (unless it does together with some other result that I do not know). Observe that for all $g \geq 1$ there is a central extension $$1 \to \mathbb{Z} \to Mod(\Sigma_{g,1}, \partial \Sigma_{g,1}) \to Mod( \Sigma_{g,1}) \to 1 $$ which is not split in general.
[Back] The Schnorr identification scheme was defined in 1991 [1] and supports a zero knowledge proof method. It uses discrete logarithms. The prover (Peggy) has a proving public key of \((g,X)\) where \(g\) is a generator, and \(X= g^x \pmod N \). \(x\) is the secret value that the prover (Peggy) must prove knowledge of. After Peggy generates her public proving key, she will then be challenged by Victor to produce the correct result. In the following we define a secret value (\(x\)), a generator (\(g\)) and a modulus value (\(N\)). We can determine the possible values of g from [here]: Schnorr identification scheme Method With Schnorr identification, Peggy (the prover) has a proving public key of \((N,g,X)\) and a proving secret key of \((N,x)\). \(N\) is a prime number for the modulus operation, and \(x\) is the secret, and where: \(X \leftarrow g^x \pmod N \) On the registration of the secret, Peggy generates a random value (\(y\)), and then computes \(Y\): \(Y \leftarrow g^y \pmod N\) This value is sent to Victor (who is the verifier). Victor then generates a random value \((c)\) and sends this to Peggy. This is a challenge to Peggy to produce the correct result. Peggy then computes: \(z \leftarrow y+ xc\) She then sends this to Victor in order to prove that she knows \(x\). Victor then computes two values: \(val_1 = Y X^c \pmod N\) \(val_2 =g^z \pmod N\) If the values are the same (\(val_1 \equiv val_2\)), Peggy has proven that she knows \(x\). This works because: \(Y X^c = g^y {g^x}^c = g^{y+cx}\) \(g^z = g^{y+cx}\) In a formal definition (taken from this paper) [2], the method is: Coding
import random

N = 89
g = 3

x = random.randint(1, N - 1)
X = pow(g, x, N)

y = random.randint(1, N - 1)
Y = pow(g, y, N)

print("Peggy (the Prover) generates these values:")
print("x(secret)=\t", x)
print("N=\t\t", N)
print("X=\t\t", X)

print("\nPeggy generates a random value (y):")
print("y=", y)

print("\nPeggy computes Y = g^y (mod N) and passes to Victor:")
print("Y=", Y)

print("\nVictor generates a random value (c) and passes to Peggy:")
c = random.randint(1, N - 1)
print("c=", c)

print("\nPeggy calculates z = y + c*x and sends to Victor (the Verifier):")
z = y + c * x
print("z=", z)

print("\nVictor now computes val1 = g^z (mod N) and val2 = Y X^c (mod N) and checks that they match\n")
val1 = pow(g, z, N)
val2 = (Y * pow(X, c, N)) % N
print("val1=\t", val1)
print("val2=\t", val2)

if val1 == val2:
    print("Peggy has proven that she knows x")
else:
    print("Failure to prove")
References [1] C. P. Schnorr. Efficient signature generation by smart cards. Journal of Cryptology, 4(3):161–174, 1991. [paper] [2] Bellare, M., & Palacio, A. (2002, August). GQ and Schnorr identification schemes: Proofs of security against impersonation under active and concurrent attacks. In Annual International Cryptology Conference (pp. 162-177). Springer, Berlin, Heidelberg. [paper]
Prove or disprove: if $\sum_{n=0}^\infty a_n x^n$ converges at $x=x_1$, then $\sum_{n=0}^\infty n\cdot a_n \cdot x^{n-1}$ converges at $x=x_1$ I am quite new to this material (and Taylor series especially). I am pretty sure that if I differentiate a power series, the radius of convergence stays the same, but: I'm not sure why. If $R=x_1$ (the radius of convergence) and the original series converges there, I don't think it still holds for the derivative. Would love some guidelines.
3. M.I. VS mod In most cases, M.I. would not be the most preferred choice to prove the divisibility of an expression because we have another powerful tool: modular arithmetic. I would not put too many technical formulae about mod because it could be very difficult. Definition 1. If $a=b+kn$ for some integer $k$, where n is a positive integer larger than 1, then we say $a\equiv b\mod n$. Corollary 1. If a is divisible by b, then $a\equiv 0\mod b$. Corollary 2. For every a there is a unique least non-negative integer b so that $a\equiv b\mod n$; we can say b is the principal value of a mod n, or the remainder when a is divided by n. Corollary 3. $a+bn\equiv a\mod n$; then by the binomial theorem, $(a+bn)^k\equiv a^k \mod n$. Proof: exercise. By such methods we can now prove all previous examples. Example 7. (Example 3 - 4 revisited) $6^n\equiv 1\mod 5$ and $6^n(5n-1)\equiv -1\mod 25$ The first one is trivial because $6^n=(1+5)^n\equiv 1\mod 5$. The second statement can't be proved directly in mod 25. Instead we show that $\frac{6^n(5n-1)+1}{5}\equiv 0\mod 5$. $\frac{6^n(5n-1)+1}{5}=n6^n-(1+6+6^2+...+6^{n-1})\equiv n-n=0\mod 5$. For example 6, we know that $7\pm \sqrt{13}$ are the roots of $x^2-14x+36=0$, by the binomial theorem, $(7+\sqrt{13})^n+(7-\sqrt{13})^n=2(C^n_07^n+C^n_27^{n-2}(13)+...)$, but at this level we can't find the bridge to show the divisibility through mod, but we can actually prove this by M.I. through this formula. Lemma 1. (Extension of M.I.) The proposition is true for all natural numbers n if: a) The proposition is true for n = 1,2,3...,i b) Assuming n = k is correct, then n = i + k is correct. From (b), for every $a\equiv b\mod i$ where a is smaller than b, if n = a is true, then n = b is true; eventually it's true for all natural numbers n. By dividing into the cases of odd n and even n and completing M.I. separately, we can prove the statement as well. 4. Sequence and inequality Example 8. (HKALE 1994 Pure I2) If $\sum a_i=(\frac{1+a_n}{2})^2$, prove that $a_n=2n-1$ if all terms are positive numbers. For sequences, we usually prove the n = k+1 part by comparing differences. Define $s_n=\sum^{n}_{i=1}a_i=\frac{1}{4}(1+a_n)^2$. Now $a_{n+1}=s_{n+1}-s_n$. Assuming that $a_n=2n-1$, then $s_n=\frac{1}{4}(1+2n-1)^2=n^2$. Now $a_{n+1}=\frac{1}{4}(1+a_{n+1})^2-n^2$ $\frac{1}{4}(1-a_{n+1})^2=n^2$, $a_{n+1}=2n+1$, where all negative terms are rejected. Apart from some trivial geometric sequences, some sequences can also be transferred into something like geometric sequences, and we will introduce one of the sequences in the form $a_n=c+dr^n$. Example 9. Find the general term of the sequence $a_{n+1}=xa_n+y$. Define $b_n=a_{n+1}-a_n$, then $b_{n+1}=a_{n+2}-a_{n+1}=(xa_{n+1}+y)-(xa_n+y)=x(a_{n+1}-a_n)=xb_{n}$, therefore $b_n=x^{n-1}(a_2-a_1)$ is a geometric sequence. Now $a_n=a_1+b_1+b_2+...+b_{n-1}=a_1+(a_2-a_1)(1+x+...+x^{n-2})$ $=a_1+(a_2-a_1)\frac{1-x^{n-1}}{1-x}$. One of the stupid examples is that $a_1=1$, $a_n=2a_{n-1}-1$, then $a_2=1$, $a_n=1+(1-1)\frac{1-2^{n-1}}{1-2}=1$, and therefore the sequence {1,1,1,1,...} is an A.S., G.S., as well as a recurrence sequence. Exercise 4. It's given that the sum of a sequence is given by $S_n=4a_n+3n-4$, find its general term by i) Guessing and proving by M.I. ii) Sequential analysis Example 10. (HKALE 1995 Pure I6) For a non-negative integral sequence so that $n\leq \sum^{n}_{i=1}a_i^2\leq n+1+(-1)^n$ for all natural numbers n, prove that the sequence is identically 1. This inequality is quite trivial. Proof 1.
$1\leq a_{n+1}^2\leq 3$ when we subtract case n from case (n+1); since it's a non-negative integer, $a_i=1$. Proof 2. We use another lemma from MI. Lemma 2. The proposition is true for all natural n if: 1) n = 1 is true. 2) if n = 1,2,...,(k-1) is true, then n = k is true. Assuming that $a_1=a_2=...=a_n=1$, then $n+1\leq n+a_{n+1}^2\leq n+2+(-1)^{n+1}\leq n+3$, the same result is given. Exercise 5. Try to prove the above statement by separating n into odd and even cases. Example 11 (BAS ex. 9.4.8, p.84) For positive reals so that $\prod (1+a_i)=2^n$, show that $\prod a_i \leq 1$. Assuming n = k is true. Now if one more number $a_{k+1}$ is added so that $\prod (1+a_i)=2^{k+1}$ with $\prod a_i$ greater than 1. Then $a_{n+1}=(a_1a_2...a_n)^{-1}$ which is greater than 1, then $\prod (1+a_i)=2^n(1+a_{n+1})\neq 2^{n+1}$. The above proof is incomplete and it would be nice for readers to finish the proof themselves. Read the rest of this passage: Part I Part III Part IV
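These notes lend themselves to quick numerical spot checks. The following Python sketch (my addition, not part of the original notes; the starting values a1, x, y are made up for illustration) verifies the two congruences of Example 7 for a range of n and checks the closed form derived in Example 9 against the recurrence.

# Quick sanity checks for Examples 7 and 9 (illustrative only).

# Example 7: 6^n ≡ 1 (mod 5) and 6^n (5n - 1) ≡ -1 (mod 25) for n ≥ 1.
for n in range(1, 50):
    assert pow(6, n, 5) == 1
    assert (pow(6, n, 25) * (5 * n - 1)) % 25 == 24   # 24 ≡ -1 (mod 25)

# Example 9: a_{n+1} = x a_n + y has closed form
# a_n = a_1 + (a_2 - a_1)(1 - x^(n-1)) / (1 - x), provided x != 1.
def closed_form(a1, x, y, n):
    a2 = x * a1 + y
    return a1 + (a2 - a1) * (1 - x ** (n - 1)) / (1 - x)

a1, x, y = 3, 2, 5          # hypothetical starting values
a = a1
for n in range(1, 15):
    assert abs(a - closed_form(a1, x, y, n)) < 1e-9
    a = x * a + y            # step the recurrence a_{n+1} = x a_n + y

print("All checks passed.")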
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that . In fact even our friend Max gets that.http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link and I would have showcased it all on its own had I seen it first The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant. 2) Einstein's general relativity uses Riemann's differential geometry. In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean.
Like spheres, and tori, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as previous? dx=0? 3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'? 4) Quantum theory is functional analysis. If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don't need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow's weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that's more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I'm betting that we also need to let go of it." He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion.
Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it. ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig. Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol 5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity). I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity. In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light.
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat.https://www.quora.com/What-is-heat-1https://www.quora.com/What-is-meant-by-heathttps://www.quora.com/What-is-heat-in-physicshttps://www.quora.com/What-is-the-definition-of-heathttps://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and are intimidated by your obvious mathematical prowess Don't take my pushback too seriously I'd prefer if we could collaborate as colleagues rather than competing.
Homework Statement In the figure below an electron is shot directly toward the center of a large metal plate that has surface charge density -1.20 × 10^-6 C/m2. If the initial kinetic energy of the electron is 1.60 × 10^-17 J and if the electron is to stop (due to electrostatic repulsion from the plate) just as it reaches the plate, how far from the plate must the launch point be? Homework Equations F=ma KE=(1/2)mV^2 I am trying to solve the following problem: My attempt at the problem: $$m = 9.11 * 10^{-31}kg$$ $$q = 1.6 * 10^{-19} C$$ $$V_0 = \sqrt{\frac{2KE}{m}} = \sqrt{\frac{2 * 1.6 * 10^{-17}}{9.11 * 10^{-31}}} = 5,926,739 m/s$$ $$F_{net} = -F_{e} = ma$$ $$a = \frac{-F_{e}}{m}$$ $$E = \frac{\phi}{2\epsilon_0}$$ $$F_e = qE = \frac{q\phi}{2\epsilon_0}$$ $$a = -\frac{q\phi}{2m\epsilon_0} = \frac{1.6 * 10^{-19} * -1.20 * 10^{-6}}{2 * 9.11 * 10^{-31} * 8.85 * 10^{-12}} = -1.191 * 10^{16} m/s^2$$ $$V_f^2 = V_0^2 + 2ad$$ $$V_f = 0$$ $$-2ad = V_0^2$$ $$d = -\frac{V_0^2}{2a} = -\frac{(5,926,739)^2}{2 * -1.191 * 10^{16}} = 0.001475m$$ I got that the answer is 0.001475m, but this is not correct. Does anyone know what I am doing wrong?
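As a supplement (my addition, not part of the original post), here is a short Python sketch that just redoes the arithmetic of the attempt above, via both the kinematics route and the work-energy route d = KE/F. It takes the attempt's field expression E = φ/(2ε₀) as given rather than endorsing it, so it only checks the algebra, not the physics of which field expression the problem intends.

# Reproduce the arithmetic in the attempt above (illustrative check only).
# Uses the same field expression as the attempt, E = sigma / (2 eps0).

eps0 = 8.85e-12          # F/m
q = 1.6e-19              # C, elementary charge
m = 9.11e-31             # kg, electron mass
sigma = 1.20e-6          # C/m^2, magnitude of the surface charge density
KE = 1.60e-17            # J, initial kinetic energy

E_field = sigma / (2 * eps0)       # field assumed in the attempt
F = q * E_field                    # decelerating force on the electron
a = F / m                          # magnitude of the deceleration

v0 = (2 * KE / m) ** 0.5           # initial speed
d_kinematics = v0**2 / (2 * a)     # from v^2 = v0^2 - 2 a d with v = 0
d_energy = KE / F                  # same distance, via the work-energy theorem

print(f"v0 = {v0:.4g} m/s")
print(f"a  = {a:.4g} m/s^2")
print(f"d (kinematics)  = {d_kinematics:.4g} m")
print(f"d (work-energy) = {d_energy:.4g} m")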
My first thought was to randomly generate samples from $f(x)$ and to compute the average value of $\log(g(x))$ from the other mixture. That would give me $E_f[\log(g(X))] = -H(f,g)$ and $KL(f,g) = H(f,g) - H(f)$ where $H(f,g)$ is the cross entropy and $H(f)$ is the entropy. This converges at order $1/\sqrt{n}$ and it is easy to program. My second thought was to borrow an idea from Unscented Kalman filters. I thought about creating a non-random sampling from one distribution at specific points and estimating $E_f[\log(g(X))]$ from those points. My third thought was to try Google. Google suggested “Lower and Upper Bounds for Approximation of the Kullback-Leibler Divergence Between Gaussian Mixture Models” by Durrieu, Thiran, and Kelly (2012) and “Approximating the Kullback Leibler divergence between Gaussian Mixture Models” by Hershey and Olsen (2007). Here are some notes from their papers: 1. The KL distance between two Gaussians $f$ and $g$ is $D_{KL}( f || g ) = {1\over2}\left( \log\left(\frac{\det(\Sigma_g)}{\det(\Sigma_f)}\right) + Tr( \Sigma_g^{-1} \Sigma_f) + ||\mu_f - \mu_g||_g^2 -d \right)$ where $d$ is the dimension of the space, $\Sigma$ is the covariance matrix, $\mu$ is the mean, $Tr$ is the trace, and $|| x ||^2_g = x^T (\Sigma_g^{-1}) x$. 2. Hershey and Olsen review several methods for estimating the divergence: Monte-Carlo methods, Unscented methods (unscented methods are simple and an unscented approximation of $\int f(x) g(x) dx$ is exact if $f$ is a Gaussian and $g$ is quadratic), Gaussian Approximation (this is bad, don’t do it, if you do do it, “I told you so”), Product of Gaussian approximations using Jensen’s inequality (this is cute, I like it, but I’m not sure how accurate it is), and Match Bound approximation by Do (2003) and Goldberg et al (2003) (just match each Gaussian with another Gaussian in the other mixture and compute those KL distances). 3. Hershey and Olsen introduce a delightful improvement over Match Bound approximation using variational methods. They have the same idea as Match Bound, but they significantly reduce the error in Jensen’s inequality by introducing weighted averages. Since Jensen’s inequality produces a lower bound using the weighted average, they maximize the lower bound under all possible weightings. The maximizer happens to have a very simple form, so the bound is also very simple to compute. Very nice. (Numerical results are given at the end of the paper.) I’ve got to try this one out. 4. Durrieu, Thiran, and Kelly improve on the Hershey and Olsen method, but I’m not sure how much better the new method is. More research required.
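To make the first (Monte Carlo) idea concrete, here is a small Python sketch, not from the original post, that estimates KL(f‖g) between two Gaussian mixtures by sampling from f and averaging log f(X) − log g(X). The mixture parameters at the bottom are made up purely for illustration.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_logpdf(x, weights, means, covs):
    """Log density of a Gaussian mixture at points x of shape (n, d)."""
    comps = np.stack([
        w * multivariate_normal(mean=m, cov=c).pdf(x)
        for w, m, c in zip(weights, means, covs)
    ])
    return np.log(comps.sum(axis=0))

def gmm_sample(rng, n, weights, means, covs):
    """Draw n samples from a Gaussian mixture (simple, not the fastest way)."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in ks])

def kl_monte_carlo(f, g, n=50_000, seed=0):
    """Estimate KL(f || g) = E_f[log f(X) - log g(X)]; error is O(1/sqrt(n))."""
    rng = np.random.default_rng(seed)
    x = gmm_sample(rng, n, *f)
    return np.mean(gmm_logpdf(x, *f) - gmm_logpdf(x, *g))

# Made-up 2-D mixtures, purely for illustration.
f = ([0.6, 0.4],
     [np.array([0.0, 0.0]), np.array([3.0, 0.0])],
     [np.eye(2), 0.5 * np.eye(2)])
g = ([0.5, 0.5],
     [np.array([0.5, 0.0]), np.array([2.5, 0.5])],
     [np.eye(2), np.eye(2)])

print("KL(f || g) ≈", kl_monte_carlo(f, g))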
1. Introduction

When I first learned the mean value theorem as a high schooler, I was thoroughly unimpressed. Part of this was because it's just like Rolle's Theorem, which feels obvious. But I think the greater part is because I thought it was useless. And I continued to think it was useless until I began my first proof-oriented treatment of calculus as a second year at Georgia Tech. Somehow, in the intervening years, I learned to value intuition and simple statements.

I have since completely changed my view on the mean value theorem. I now consider essentially all of one variable calculus to be the Mean Value Theorem, perhaps in various forms or disguises. In my earlier note An Intuitive Introduction to Calculus, we state and prove the Mean Value Theorem, and then show that we can prove the Fundamental Theorem of Calculus with the Mean Value Theorem and the Intermediate Value Theorem (which also felt silly to me as a high schooler, but which is not silly).

In this brief note, I want to consider one small aspect of the Mean Value Theorem: can the "mean value" be chosen continuously as a function of the endpoints? To state this more clearly, first recall the theorem:

Suppose $latex {f}$ is a differentiable real-valued function on an interval $latex {[a,b]}$. Then there exists a point $latex {c}$ between $latex {a}$ and $latex {b}$ such that $$ \frac{f(b) - f(a)}{b - a} = f'(c), \tag{1}$$ which is to say that there is a point where the slope of $latex {f}$ is the same as the average slope from $latex {a}$ to $latex {b}$.

What if we allow the interval to vary? Suppose we are interested in a differentiable function $latex {f}$ on intervals of the form $latex {[0,b]}$, and we let $latex {b}$ vary. Then for each choice of $latex {b}$, the mean value theorem tells us that there exists $latex {c_b}$ such that $$ \frac{f(b) - f(0)}{b} = f'(c_b). $$ Then the question we consider today is, as a function of $latex {b}$, can $latex {c_b}$ be chosen continuously? We will see that we cannot, and we'll see explicit counterexamples. This, after the fold.

2. A Counterexample

For ease, we will restrict ourselves to intervals of the form $latex {[0,b]}$, as mentioned above. A particularly easy counterexample is given by $$ f(x) = \begin{cases} x^2 - 2x & x \leq 1\\ -1 & 1 \leq x \leq 2\\ x^2 - 4x + 3 & x \geq 2 \end{cases} $$ This is a flattened parabola, that is, a parabola with a flattened middle section. Clearly, the slope of the function $latex {f}$ is negative until $latex {x = 1}$, where it is $latex {0}$. It becomes (and stays) positive at $latex {x = 2}$. So if you consider intervals $latex {[0,b]}$ as $latex {b}$ is varying, since $latex {f(b) < 0}$ for $latex {b < 3}$, we must have that $latex {c_b}$ is at a point where $latex {f'(c_b) < 0}$, meaning that $latex {c_b \in [0, 1]}$. But as soon as $latex {b > 3}$, $latex {f(b) > 0}$ and $latex {c_b}$ must be a point with $latex {f'(c_b) > 0}$, meaning that $latex {c_b \in [2,b]}$. In particular, $latex {c_b}$ jumps from at most $latex {1}$ to at least $latex {2}$ as $latex {b}$ goes from the left of $latex {3}$ to the right of $latex {3}$. So there is no way to choose the $latex {c_b}$ values locally in a neighborhood of $latex {3}$ to make the mean values continuous there.

In the gif below, we have animated the process. The red line is the secant line passing through $latex {(0, f(0))}$ and $latex {(b, f(b))}$. The two red dots indicate the two points of intersection. The green line is the line guaranteed by the mean value theorem.
The small green dot is, in particular, what we’ve been calling $latex {c_b}$: the guaranteed mean value. Notice that it jumps when the red dot passes $latex {3}$. That is the essence of this proof. Further, although this example is not smooth, it is easy to see that if we “smoothed” off the connection between the parabola and the straight line, like through the use of bump functions, then the spirit of this counterexample works, and not even smooth functions have locally continuous choices of the mean value. This post appears on the author’s personal website davidlowryduda.com. It is also available in pdf note form. It was typeset in \TeX, hosted on WordPress sites, converted using the utility github.com/davidlowryduda/mse2wp, and displayed with MathJax. As usual, if there are any comments or questions, please let me know.
This book studies generalized Donaldson-Thomas invariants $\bar{DT}{}^\alpha(\tau)$. They are rational numbers which 'count' both $\tau$-stable and $\tau$-semistable coherent sheaves with Chern character $\alpha$ on $X$; strictly $\tau$-semistable sheaves must be counted with complicated rational weights. The $\bar{DT}{}^\alpha(\tau)$ are defined for all classes $\alpha$, and are equal to $DT^\alpha(\tau)$ when it is defined. They are unchanged under deformations of $X$, and transform by a wall-crossing formula under change of stability condition $\tau$. To prove all this, the authors study the local structure of the moduli stack $\mathfrak M$ of coherent sheaves on $X$. They show that an atlas for $\mathfrak M$ may be written locally as $\mathrm{Crit}(f)$ for $f:U\to{\mathbb C}$ holomorphic and $U$ smooth, and use this to deduce identities on the Behrend function $\nu_{\mathfrak M}$. They compute the invariants $\bar{DT}{}^\alpha(\tau)$ in examples, and make a conjecture about their integrality properties. They also extend the theory to abelian categories $\mathrm{mod}$-$\mathbb{C}Q/I$ of representations of a quiver $Q$ with relations $I$ coming from a superpotential $W$ on $Q$.

Read or Download A Theory of Generalized Donaldson-Thomas Invariants (Memoirs of the American Mathematical Society) PDF

Best Algebraic Geometry books

This new-in-paperback edition provides a general introduction to algebraic and arithmetic geometry, starting with the theory of schemes, followed by applications to arithmetic surfaces and to the theory of reduction of algebraic curves. The first part introduces basic objects such as schemes, morphisms, base change, and local properties (normality, regularity, Zariski's Main Theorem).

Contents and treatment are fresh and very different from the standard treatments. Presents a fully constructive version of what it means to do algebra. The exposition is not only clear, it is friendly, philosophical, and considerate even to the most naive or inexperienced reader.

First textbook-level account of basic examples and techniques in this area. Suitable for self-study by a reader who knows a little commutative algebra and algebraic geometry already. David Eisenbud is a well-known mathematician and current president of the American Mathematical Society, as well as a successful Springer author.

In the library at Trinity College, Cambridge in 1976, George Andrews of Pennsylvania State University discovered a sheaf of pages in the handwriting of Srinivasa Ramanujan. Soon designated as "Ramanujan's Lost Notebook," it contains extensive material on mock theta functions and certainly dates from the last year of Ramanujan's life.

Extra info for A Theory of Generalized Donaldson-Thomas Invariants (Memoirs of the American Mathematical Society)
How many solutions are there to $\sin \theta + 7 = 8$ in the domain $[0^\circ, 1000^\circ]$?

What is the minimum value of a positive number $x$ such that the value of $\sin\left(x+\frac{\pi}{8}\right)$ is $0$?

How many $x$'s in the interval $0 \leq x \leq 15\pi$ satisfy $\tan x = -\sqrt{3}$?

What are the solutions to $12\sin\left(2x-\frac{7}{6}\pi\right) = 6\sqrt{2}$ in the interval $x \in [0,\pi]$?

Which of the following is a solution to $\sin(2\theta - 31^\circ) = \cos 49^\circ$?
Multi-index notation

$\def\a{\alpha}$ $\def\b{\beta}$

An abbreviated form of notation in analysis, imitating the vector notation by single letters rather than by listing all vector components.

Rules

A point with coordinates $(x_1,\dots,x_n)$ in the $n$-dimensional space (real, complex or over any other field $\Bbbk$) is denoted by $x$. For a multi-index $\a=(\a_1,\dots,\a_n)\in\Z_+^n$ the expression $x^\a$ denotes the product $x^\a=x_1^{\a_1}\cdots x_n^{\a_n}$. Other expressions related to multi-indices are expanded as follows:
$$\begin{aligned}|\a|&=\a_1+\cdots+\a_n\in\Z_+,\\\a!&=\a_1!\cdots\a_n!\qquad\text{(as usual, }0!=1!=1),\\x^\a&=x_1^{\a_1}\cdots x_n^{\a_n}\in \Bbbk[x]=\Bbbk[x_1,\dots,x_n],\\\a\pm\b&=(\a_1\pm\b_1,\dots,\a_n\pm\b_n)\in\Z^n.\end{aligned}$$
The convention extends to the binomial coefficients ($\a\geqslant\b$ means, quite naturally, that $\a_1\geqslant\b_1,\dots,\a_n\geqslant\b_n$):
$$\binom{\a}{\b}=\binom{\a_1}{\b_1}\cdots\binom{\a_n}{\b_n}=\frac{\a!}{\b!(\a-\b)!},\qquad \text{if}\quad \a\geqslant\b.$$
The partial derivative operators are also abbreviated:
$$\partial_x=\biggl(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\biggr)=\partial\quad\text{if the choice of $x$ is clear from context.}$$
The notation for partial derivatives is also quite natural: for a differentiable function $f(x_1,\dots,x_n)$ of $n$ variables,
$$\partial^\a f=\frac{\partial^{|\a|} f}{\partial x^\a}=\frac{\partial^{\a_1}}{\partial x_1^{\a_1}}\cdots\frac{\partial^{\a_n}}{\partial x_n^{\a_n}}f=\frac{\partial^{|\a|}f}{\partial x_1^{\a_1}\cdots\partial x_n^{\a_n}}.$$
If $f$ is itself a vector-valued function of dimension $m$, the above partial derivatives are $m$-vectors. The notation
$$\partial f=\bigg(\frac{\partial f}{\partial x}\bigg)$$
is used to denote the Jacobian matrix of a function $f$ (in general, only rectangular).

Caveat

The notation $\a>0$ is ambiguous, especially in mathematical economics, as it may either mean that $\a_1>0,\dots,\a_n>0$, or $0\ne\a\geqslant0$.

Examples

Binomial formula
$$(x+y)^\a=\sum_{0\leqslant\b\leqslant\a}\binom\a\b x^{\a-\b} y^\b.$$

Leibniz formula for higher derivatives of multivariate functions
$$\partial^\a(fg)=\sum_{0\leqslant\b\leqslant\a}\binom\a\b \partial^{\a-\b}f\cdot \partial^\b g.$$
In particular,
$$\partial^\a x^\b=\begin{cases} \frac{\b!}{(\b-\a)!}x^{\b-\a},\qquad&\text{if }\a\leqslant\b, \\ \quad 0,&\text{otherwise}. \end{cases}$$

Taylor series of a smooth function

If $f$ is infinitely smooth near the origin $x=0$, then its Taylor series (at the origin) has the form
$$\sum_{\a\in\Z_+^n}\frac1{\a!}\partial^\a f(0)\cdot x^\a.$$

Symbol of a differential operator

If $$D=\sum_{|\a|\le d}a_\a(x)\partial^\a$$ is a linear partial differential operator with variable coefficients $a_\a(x)$, then its principal symbol is the function of $2n$ variables $S(x,p)=\sum_{|\a|=d}a_\a(x)p^\a$.

How to Cite This Entry: Multi-index notation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Multi-index_notation&oldid=25759
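As a quick illustration of the notation, the following SymPy sketch (the test functions and the particular multi-index are my own arbitrary choices) checks the Leibniz formula above for one multi-index in two variables.

```python
import itertools
from sympy import symbols, sin, exp, binomial, expand, simplify

x, y = symbols('x y')
f = sin(x*y)            # arbitrary smooth test functions (my choice)
g = exp(x + 2*y)
alpha = (2, 1)          # multi-index: differentiate twice in x, once in y

def d(h, beta):
    """Apply the multi-index derivative partial^beta."""
    return h.diff(x, beta[0]).diff(y, beta[1])

def multi_binom(a, b):
    """Multi-index binomial coefficient: product of componentwise binomials."""
    return binomial(a[0], b[0]) * binomial(a[1], b[1])

lhs = d(f*g, alpha)
rhs = sum(multi_binom(alpha, b) * d(f, (alpha[0]-b[0], alpha[1]-b[1])) * d(g, b)
          for b in itertools.product(range(alpha[0]+1), range(alpha[1]+1)))
print(simplify(expand(lhs - rhs)))   # 0, confirming the Leibniz formula for this alpha
```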
This post picks up from the previous post on Summer@Brown number theory from 2013. Now that we'd established ideas about solving the modular equation $latex ax \equiv c \mod m$, solving the linear diophantine equation $latex ax + by = c$, and about general modular arithmetic, we began to explore systems of modular equations. That is, we began to look at questions like the following:

Suppose $latex x$ satisfies the following three modular equations (rather, the following system of linear congruences):

$latex x \equiv 1 \mod 5$

$latex x \equiv 2 \mod 7$

$latex x \equiv 3 \mod 9$

Can we find out what $latex x$ is? This is a clear parallel to solving systems of linear equations, as is usually done in algebra I or II in secondary school. A common way to solve systems of linear equations is to solve for a variable and substitute it into the next equation. We can do something similar here. From the first equation, we know that $latex x = 1 + 5a$ for some $latex a$. Substituting this into the second equation, we get that $latex 1 + 5a \equiv 2 \mod 7$, or that $latex 5a \equiv 1 \mod 7$. So $latex a$ will be the modular inverse of $latex 5 \mod 7$. A quick calculation (or a slightly less quick Euclidean algorithm in the general case) shows that the inverse is $latex 3$. Multiplying both sides by $latex 3$ yields $latex a \equiv 3 \mod 7$, or rather that $latex a = 3 + 7b$ for some $latex b$. Back substituting, we see that this means that $latex x = 1+5a = 1+5(3+7b)$, or that $latex x = 16 + 35b$. Now we repeat this work, using the third equation. $latex 16 + 35b \equiv 3 \mod 9$, so that $latex 8b \equiv 5 \mod 9$. Another quick calculation (or Euclidean algorithm) shows that this means $latex b \equiv 4 \mod 9$, or rather $latex b = 4 + 9c$ for some $latex c$. Putting this back into $latex x$ yields the final answer:

$latex x = 16 + 35(4 + 9c) = 156 + 315c$

$latex x \equiv 156 \mod 315$

And if you go back and check, you can see that this works. $latex \diamondsuit$

There is another, very slick, method as well. This was a clever solution mentioned in class. The idea is to construct a solution directly. The way we're going to do this is to set up a sum, where each part only contributes to one of the three modular equations. In particular, note that if we take something like $latex 7 \cdot 9 \cdot [7\cdot9]_5^{-1}$, where this inverse means the modular inverse with respect to $latex 5$, then this vanishes mod $latex 7$ and mod $latex 9$, but gives $latex 1 \mod 5$. Similarly $latex 2\cdot 5 \cdot 9 \cdot [5\cdot9]_7^{-1}$ vanishes mod 5 and mod 9 but leaves the right remainder mod 7, and $latex 3 \cdot 5 \cdot 7 \cdot [5\cdot 7]_9^{-1}$ vanishes mod 5 and mod 7, but leaves the right remainder mod 9. Summing them together yields a solution (Do you see why?). The really nice thing about this algorithm to get the solution is that it parallelizes really well, meaning that you can give different computers separate problems, and then combine the things together to get the final answer. This is going to come up again later in this post.

These are two solutions that follow along the idea of the Chinese Remainder Theorem (CRT), which in general says that as long as the moduli are relatively prime (and each $latex a_i$ is invertible mod $latex m_i$), then the system

$latex a_1 x \equiv b_1 \mod m_1$

$latex a_2 x \equiv b_2 \mod m_2$

$latex \cdots$

$latex a_k x \equiv b_k \mod m_k$

will always have a unique solution $latex \mod m_1m_2 \ldots m_k$.
Note, this is two statements: there is a solution (statement 1), and the statement is unique up to modding by the product of this moduli (statement 2). Proof Sketch: Either of the two methods described above to solve that problem can lead to a proof here. But there is one big step that makes such a proof much easier. Once you’ve shown that the CRT is true for a system of two congruences (effectively meaning you can replace them by one congruence), this means that you can use induction. You can reduce the n+1st case to the nth case using your newfound knowledge of how to combine two equations into one. Then the inductive hypothesis carries out the proof. Note also that it’s pretty easy to go backwards. If I know that $latex x \equiv 12 \mod 30$, then I know that $latex x$ will also be the solution to the system $latex x \equiv 2 \mod 5$ $latex x \equiv 0 \mod 6$ In fact, a higher view of the CRT reveals that the great strength is that considering a number mod a set of relatively prime moduli is the exact same (isomorphic to) considering a number mod the product of the moduli. The remainder of this post will be about why the CRT is cool and useful. Application 1: Multiplying Large Numbers Firstly, the easier application. Suppose you have two really large integers $latex a,b$ (by really large, I mean with tens or hundreds of digits at least – for concreteness, say they each have $latex n$ digits). When a computer computes their product $latex ab$, it has to perform $latex n^2$ digit multiplications, which can be a whole lot if $latex n$ is big. But a computer can calculate mods of numbers in something like $latex \log n$ time, which is much much much faster. So one way to quickly compute the product of two really large numbers is to use the Chinese Remainder Theorem to represent each of $latex a$ and $latex b$ with a set of much smaller congruences. For example (though we’ll be using small numbers), say we want to multiply $latex 12$ by $latex 21$. We might represent $latex 12$ by $latex 12 \equiv 2 \mod 5, 5 \mod 7, 1 \mod 11$ and represent $latex 21$ by $latex 21 \equiv 1 \mod 5, 0 \mod 7, 10 \mod 11$. To find their product, calculate their product in each of the moduli: $latex 2 \cdot 1 \equiv 2 \mod 5, 5 \cdot 0 \equiv 0 \mod 7, 1 \cdot 10 \equiv 10 \mod 11$. We know we can get a solution to the resulting system of congruences using the above algorithm, and the smallest positive solution will be the actual product. This might not feel faster, but for much larger numbers, it really is. As an aside, here’s one way to make it play nice for parallel processing (which vastly makes things faster). After you’ve computed the congruences of $latex 12$ and $latex 21$ for the different moduli, send the numbers mod 5 to one computer, the numbers mod 7 to another, and the numbers mod 11 to a third (but also send each computer the list of moduli: 5,7,11). Each computer will calculate the product in their modulus and then use the Euclidean algorithm to calculate the inverse of the product of the other two moduli, and multiply these together. Afterwards, the computers resend their data to a central computer, which just adds the result and takes it mod $latex 5 \cdot 7 \cdot 11$ (to get the smallest positive solution). Since mods are fast and all the multiplication is with smaller integers (no bigger than the largest mod, ever), it all goes faster. And since it’s parallelized, you’re replacing a hard task with a bunch of smaller easier tasks that can all be worked on at the same time. Very powerful stuff! 
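To make the two ideas above concrete, here is a small plain-Python sketch of my own: a back-substitution CRT solver applied to the example system, followed by the 12 times 21 multiplication through residues mod 5, 7, 11.

```python
def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli by back-substitution."""
    x, m = residues[0], moduli[0]
    for r, n in zip(residues[1:], moduli[1:]):
        # need x + m*t = r (mod n), so t = (r - x) * m^{-1} (mod n)
        t = ((r - x) * pow(m, -1, n)) % n
        x, m = x + m * t, m * n
    return x % m, m

# The system from this post: x = 1 (mod 5), x = 2 (mod 7), x = 3 (mod 9)
print(crt([1, 2, 3], [5, 7, 9]))           # -> (156, 315)

# Multiplying 12 by 21 through residues mod 5, 7, 11 (their product 385 exceeds 12*21)
moduli = [5, 7, 11]
a, b = 12, 21
residue_products = [(a % m) * (b % m) % m for m in moduli]
product, _ = crt(residue_products, moduli)
print(product, "==", a * b)                # the smallest positive solution is the true product
```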
I have actually never seen someone give the optimal running time that would come from this sort of procedure, though I don’t know why. Perhaps I’ll look into that one day. Application 2: Secret Sharing in Networks of People This is really slick. Let’s lay out the situation: I have a secret. I want you, my students, to have access to the secret, but only if at least six of you decide together that you want access. So I give each of you a message, consisting of a number and a modulus. Using the CRT, I can create a scheme where if any six of you decide you want to open the message, then you can pool your six bits together to get the message. Notice, I mean any six of you, instead of a designated set of six. Further, no five people can recover the message without a sixth in a reasonable amount of time. That’s pretty slick, right? The basic idea is for me to encode my message as a number $latex P$ (I use P to mean plain-text). Then I choose a set of moduli, one for each of you, but I choose them in such a way that the product of any $latex 5$ of them is smaller than $latex P$, but the product of any $latex 6$ of them is greater than $latex P$ (what this means is that I choose a lot of primes or near-primes right around the same size, all right around the fifth root of $latex P$). To each of you, I give you the value of $latex P \mod m_i$ and the modulus $latex m_i$, where $latex m_i$ is your modulus. Since $latex P$ is much bigger than $latex m_i$, it would take you a very long time to just happen across the correct multiple that reveals a message (if you ever managed). Now, once six of you get together and put your pieces together, the CRT guarantees a solution. Since the product of your six moduli will be larger than $latex P$, the smallest solution will be $latex P$. But if only five of you get together, since the product of your moduli is less than $latex P$, you don’t recover $latex P$. In this way, we have our secret sharing network. To get an idea of the security of this protocol, you might imagine if I gave each of you moduli around the size of a quadrillion. Then missing any single person means there are hundreds of trillions of reasonable multiples of your partial plain-text to check before getting to the correct multiple. A similar idea, but which doesn’t really use the CRT, is to consider the following problem: suppose two millionaires Alice and Bob (two people of cryptological fame) want to see which of them is richer, but without revealing how much wealth they actually have. This might sound impossible, but indeed it is not! There is a way for them to establish which one is richer but with neither knowing how much money the other has. Similar problems exist for larger parties (more than just 2 people), but none is more famous than the original: Yao’s Millionaire Problem. Alright – I’ll see you all in class.
Three radial positive solutions for semilinear elliptic problems in $\mathbb{R}^N$

Wang Suyun, Zhang Yanhong, Ma Ruyun

Keywords: Semilinear elliptic problem, radial positive solutions, eigenvalue, bifurcation, connected component.

Abstract: This paper is concerned with the semilinear elliptic problem
$$\left\{\begin{aligned} &-\Delta u=\lambda h(|x|)f(u) && \text{in }\ \mathbb{R}^N,\\ & u(x)>0 && \text{in }\ \mathbb{R}^N,\\ & u\to 0 && \text{as }\ |x|\to \infty, \end{aligned}\right.$$
where $\lambda$ is a real parameter and $h$ is a positive weight function. We show the existence of three radial positive solutions under suitable conditions on the nonlinearity. Proofs are mainly based on the bifurcation technique.
Usually I write the weak form by hand for my FEM code, but it's a little annoying and mechanical sometimes. So, I wonder, is there any way to generate the symbolic weak form in Mathematica? For instance, if I have this strong form: $$\nabla\cdot(\nabla u) = f$$ I'd like to have an automation of multiplying by a shape function $\phi$ and integrating, obtaining $$-\int \nabla \phi \cdot \nabla u = \int \phi f.$$ So the procedure is as follows.

take the left hand side and left-multiply with $\phi$: $\phi \; (\nabla\cdot(\nabla u))$

integrate by parts (usually): $\int \phi \; (\nabla\cdot(\nabla u)) = \int_\partial \phi \, (\nabla u \cdot n) - \int \nabla\phi \cdot \nabla u$

by definition $\phi$ is zero on the boundary: $\int_\partial \phi \, (\nabla u \cdot n) = 0$

hence only the volume term remains: $-\int \nabla\phi \cdot \nabla u$

The very same procedure could be applied to the right hand side. This would be great to achieve symbolically, as I am planning to implement FEM software, not using NDSolve or other functions. Even better would be generating the symbolic sum with Jacobians and Gauss integration points, but this is maybe asking too much! In a very classical FEM formulation, as for instance here in deal.II for a simple Laplace problem, $u$ and $\phi$ are just interpolated with a shape function, for instance $\phi$ itself, as $u(x) = \sum_j U_j \phi_j(x)$. The integral is then reduced to a sum of integrals over a triangulated domain, but as I said, I'd be glad to just extract the weak form in general (not just for linear problems, but also for instance for $\nabla \cdot ( A(x) \nabla u) = -f$ for a general $A(x)$). Thanks!
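Not a Mathematica answer, but as a sketch of how mechanical the steps are in a CAS, here is a 1-D SymPy version of the procedure (the interval endpoints and function names are my own placeholders); the multidimensional, general-coefficient case is of course what the question is really after.

```python
from sympy import symbols, Function, integrate, diff

# 1-D sketch of "multiply by a test function, integrate by parts, drop the boundary term".
# The names (u, phi, a, b) are placeholders of mine, not any package's API.
x, a, b = symbols('x a b')
u = Function('u')(x)
phi = Function('phi')(x)

# Start from the weighted residual of the strong form u'' = f (the 1-D Laplacian):
residual = integrate(phi * diff(u, x, 2), (x, a, b))

# Integration by parts: int_a^b phi u'' dx = [phi u']_a^b - int_a^b phi' u' dx
boundary_term = (phi * diff(u, x)).subs(x, b) - (phi * diff(u, x)).subs(x, a)
volume_term = -integrate(diff(phi, x) * diff(u, x), (x, a, b))

# With phi = 0 on the boundary, only the volume term survives; this is the weak-form LHS.
print(volume_term)   # -Integral(Derivative(phi(x), x)*Derivative(u(x), x), (x, a, b))
```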
I'm trying to find the Maximum Independent Set of a Bipartite Graph. I found the following in some notes "May 13, 1998 - University of Washington - CSE 521 - Applications of network flow":

Problem: Given a bipartite graph $G = (U,V,E)$, find an independent set $U' \cup V'$ which is as large as possible, where $U' \subseteq U$ and $V' \subseteq V$. A set is independent if there are no edges of $E$ between elements of the set.

Solution: Construct a flow graph on the vertices $U \cup V \cup \{s,t\}$. For each edge $(u,v) \in E$ there is an infinite capacity edge from $u$ to $v$. For each $u \in U$, there is a unit capacity edge from $s$ to $u$, and for each $v \in V$, there is a unit capacity edge from $v$ to $t$. Find a finite capacity cut $(S,T)$, with $s \in S$ and $t \in T$. Let $U' = U \cap S$ and $V' = V \cap T$. The set $U' \cup V'$ is independent since there are no infinite capacity edges crossing the cut. The size of the cut is $|U - U'| + |V - V'| = |U| + |V| - |U' \cup V'|$. Thus, in order to make the independent set as large as possible, we make the cut as small as possible.

So let's take this as the graph:

A - B - C
    |
D - E - F

We can split this into a bipartite graph as follows: $(U,V)=(\{A,C,E\},\{B,D,F\})$

We can see by brute force search that the sole Maximum Independent Set is $A,C,D,F$.

Let's try to work through the solution above. The constructed flow network adjacency matrix would be:

$$\begin{matrix} & s & t & A & B & C & D & E & F \\ s & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ t & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ A & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ B & 0 & 1 & \infty & 0 & \infty & 0 & \infty & 0 \\ C & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ D & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ E & 1 & 0 & 0 & \infty & 0 & \infty & 0 & \infty \\ F & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ \end{matrix}$$

Here is where I am stuck; the smallest finite capacity cut I see is a trivial one: $(S,T) =(\{s\},\{t,A,B,C,D,E,F\})$ with a capacity of 3. Using this cut leads to an incorrect solution of:

$$ U' = U \cap S = \{\}$$ $$ V' = V \cap T = \{B,D,F\}$$ $$ U' \cup V' = \{B,D,F\}$$

Whereas we expected $U' \cup V' = \{A,C,D,F\}$? Can anyone spot where I have gone wrong in my reasoning/working?
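For reference, here is a small networkx sketch of the construction from the notes (the graph encoding and variable names are mine). On this example it reports a minimum cut of capacity 2, not 3, and reads off {A, C, D, F}; that suggests the cut ({s}, everything else) considered above simply isn't a minimum cut.

```python
import networkx as nx

U, V = ['A', 'C', 'E'], ['B', 'D', 'F']
graph_edges = [('A', 'B'), ('B', 'C'), ('B', 'E'), ('D', 'E'), ('E', 'F')]

G = nx.DiGraph()
for u in U:
    G.add_edge('s', u, capacity=1)            # unit edges s -> U
for v in V:
    G.add_edge(v, 't', capacity=1)            # unit edges V -> t
for a, b in graph_edges:
    u, v = (a, b) if a in U else (b, a)       # orient each original edge from U to V
    G.add_edge(u, v)                          # no capacity attribute = infinite capacity

cut_value, (S, T) = nx.minimum_cut(G, 's', 't')
independent = (set(U) & S) | (set(V) & T)
print(cut_value, sorted(independent))         # 2 ['A', 'C', 'D', 'F']
```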
We have dealt extensively with vector equations for curves, ${\bf r}(t)=\langle x(t),y(t),z(t)\rangle$. A similar technique can be used to represent surfaces in a way that is more general than the equations for surfaces we have used so far. Recall that when we use ${\bf r}(t)$ to represent a curve, we imagine the vector ${\bf r}(t)$ with its tail at the origin, and then we follow the head of the arrow as $t$ changes. The vector "draws'' the curve through space as $t$ varies. Suppose we instead have a vector function of two variables, $${\bf r}(u,v)=\langle x(u,v),y(u,v),z(u,v)\rangle.$$ As both $u$ and $v$ vary, we again imagine the vector ${\bf r}(u,v)$ with its tail at the origin, and its head sweeps out a surface in space. A useful analogy is the technology of CRT video screens, in which an electron gun fires electrons in the direction of the screen. The gun's direction sweeps horizontally and vertically to "paint'' the screen with the desired image. In practice, the gun moves horizontally through an entire line, then moves vertically to the next line and repeats the operation. In the same way, it can be useful to imagine fixing a value of $v$ and letting ${\bf r}(u,v)$ sweep out a curve as $u$ changes. Then $v$ can change a bit, and ${\bf r}(u,v)$ sweeps out a new curve very close to the first. Put enough of these curves together and they form a surface. Example 16.6.1 Consider the function ${\bf r}(u,v)=\langle v\cos u,v\sin u, v\rangle$. For a fixed value of $v$, as $u$ varies from 0 to $2\pi$, this traces a circle of radius $v$ at height $v$ above the $x$-$y$ plane. Put lots and lots of these together,and they form a cone, as in figure 16.6.1. Example 16.6.2 Let ${\bf r}=\langle v\cos u, v\sin u, u\rangle$. If $v$ is constant, the resulting curve is a helix (as in figure 13.1.1). If $u$ is constant, the resulting curve is a straight line at height $u$ in the direction $u$ radians from the positive $x$ axis. Note in figure 16.6.2 how the helixes and the lines both paint the same surface in a different way. This technique allows us to represent many more surfaces than previously. Example 16.6.3 The curve given by $${\bf r}=\langle (2+\cos(3u/2))\cos u, (2+\cos(3u/2))\sin u, \sin(3u/2)\rangle$$ is called a trefoil knot. Recall that from the vector equation of the curve we can compute the unit tangent $\bf T$, the unit normal $\bf N$, and the binormal vector ${\bf B}={\bf T}\times{\bf N}$; you may want to review section 13.3. The binormal is perpendicular to both $\bf T$ and $\bf N$; one way to interpret this is that ${\bf N}$ and ${\bf B}$ define a plane perpendicular to $\bf T$, that is, perpendicular to the curve; since ${\bf N}$ and ${\bf B}$ are perpendicular to each other, they can function just as $\bf i$ and $\bf j$ do for the $x$-$y$ plane. Of course, $\bf N$ and $\bf B$ are functions of $u$, changing as we move along the curve ${\bf r}(u)$. So, for example, ${\bf c}(u,v)={\bf N}\cos v+{\bf B}\sin v$ is a vector equation for a unit circle in a plane perpendicular to the curve described by $\bf r$, except that the usual interpretation of $\bf c$ would put its center at the origin. We can fix that simply by adding $\bf c$ to the original $\bf r$: let ${\bf f}={\bf r}(u) +{\bf c}(u,v)$. For a fixed $u$ this draws a circle around the point ${\bf r}(u)$; as $u$ varies we get a sequence of such circles around the curve $\bf r$, that is, a tube of radius 1 with $\bf r$ at its center. 
We can easily change the radius; for example ${\bf r}(u) +a{\bf c}(u,v)$ gives the tube radius $a$; we can make the radius vary as we move along the curve with ${\bf r}(u) +g(u){\bf c}(u,v)$, where $g(u)$ is a function of $u$. As shown in figure 16.6.3, it is hard to see that the plain knot is knotted; the tube makes the structure apparent. Of course, there is nothing special about the trefoil knot in this example; we can put a tube around (almost) any curve in the same way. We have previously examined surfaces given in the form $f(x,y)$. It is sometimes useful to represent such surfaces in the more general vector form, which is quite easy: ${\bf r}(u,v)=\langle u,v,f(u,v)\rangle$. The names of the variables are not important of course; instead of disguising $x$ and $y$, we could simply write ${\bf r}(x,y)=\langle x,y,f(x,y)\rangle$. We have also previously dealt with surfaces that are not functions of $x$ and $y$; many of these are easy to represent in vector form. One common type of surface that cannot be represented as $z=f(x,y)$ is a surface given by an equation involving only $x$ and $y$. For example, $x+y=1$ and $y=x^2$ are "vertical'' surfaces. For every point $(x,y)$ in the plane that satisfies the equation, the point $(x,y,z)$ is on the surface, for every value of $z$. Thus, a corresponding vector form for the surface is something like $\langle f(u),g(u),v\rangle$; for example, $x+y=1$ becomes $\langle u,1-u,v\rangle$ and $y=x^2$ becomes $\langle u,u^2,v\rangle$. Yet another sort of example is the sphere, say $x^2+y^2+z^2=1$. This cannot be written in the form $z=f(x,y)$, but it is easy to write in vector form; indeed this particular surface is much like the cone, since it has circular cross-sections, or we can think of it as a tube around a portion of the $z$-axis, with a radius that varies depending on where along the axis we are. One vector expression for the sphere is $\langle \sqrt{1-v^2}\cos u,\sqrt{1-v^2}\sin u, v\rangle$—this emphasizes the tube structure, as it is naturally viewed as drawing a circle of radius $\sqrt{1-v^2}$ around the $z$-axis at height $v$. We could also take a cue from spherical coordinates, and write $\langle \sin u\cos v,\sin u\sin v,\cos u\rangle$, where in effect $u$ and $v$ are $\phi$ and $\theta$ in disguise. It is quite simple in Sage to plot any surface for which you have a vector representation. Using different vector functions sometimes gives different looking plots, because Sage in effect draws the surface by holding one variable constant and then the other. For example, you might have noticed in figure 16.6.2 that the curves in the two right-hand graphs are superimposed on the left-hand graph; the graph of the surface is just the combination of the two sets of curves, with the spaces filled in with color. Here's a simple but striking example: the plane $x+y+z=1$ can be represented quite naturally as $\langle u,v,1-u-v\rangle$. But we could also think of painting the same plane by choosing a particular point on the plane, say $(1,0,0)$, and then drawing circles or ellipses (or any of a number of other curves) as if that point were the origin in the plane. For example, $\langle 1-v\cos u-v\sin u,v\sin u,v\cos u\rangle$ is one such vector function. Note that while it may not be obvious where this came from, it is quite easy to see that the sum of the $x$, $y$, and $z$ components of the vector is always 1. Computer renderings of the plane using these two functions are shown in figure 16.6.4. 
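Here is a short matplotlib sketch (Python rather than Sage, and the parameter ranges are arbitrary choices of mine) that renders the plane $x+y+z=1$ with both parametrizations just described; the two plots are covered by different families of grid curves.

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 4))

# First parametrization: r(u, v) = <u, v, 1 - u - v>
u, v = np.meshgrid(np.linspace(-1, 1, 30), np.linspace(-1, 1, 30))
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot_surface(u, v, 1 - u - v, alpha=0.7)

# Second parametrization: r(u, v) = <1 - v cos u - v sin u, v sin u, v cos u>
u, v = np.meshgrid(np.linspace(0, 2*np.pi, 60), np.linspace(0, 1.5, 30))
x, y, z = 1 - v*np.cos(u) - v*np.sin(u), v*np.sin(u), v*np.cos(u)
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
ax2.plot_surface(x, y, z, alpha=0.7)          # x + y + z = 1 for every (u, v)

plt.show()
```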
Suppose we know that a plane contains a particular point $(x_0,y_0,z_0)$ and that two vectors ${\bf u}=\langle u_0,u_1,u_2\rangle$ and ${\bf v}=\langle v_0,v_1,v_2\rangle$ are parallel to the plane but not to each other. We know how to get an equation for the plane in the form $ax+by+cz=d$, by first computing ${\bf u}\times{\bf v}$. It's even easier to get a vector equation: $${\bf r}(u,v) = \langle x_0,y_0,z_0\rangle + u{\bf u} + v{\bf v}.$$ The first vector gets to the point $(x_0,y_0,z_0)$ and then by varying $u$ and $v$, $u{\bf u} + v{\bf v}$ gets to every point in the plane. Returning to $x+y+z=1$, the points $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ are all on the plane. By subtracting coordinates we see that $\langle -1,0,1\rangle$ and $\langle -1,1,0\rangle$ are parallel to the plane, so a third vector form for this plane is $$\langle 1,0,0\rangle + u\langle -1,0,1\rangle + v\langle -1,1,0\rangle = \langle 1-u-v,v,u\rangle.$$ This is clearly quite similar to the first form we found. We have already seen (section 15.4) how to find the area of a surface when it is defined in the form $f(x,y)$. Finding the area when the surface is given as a vector function is very similar. Looking at the plots of surfaces we have just seen, it is evident that the two sets of curves that fill out the surface divide it into a grid, and that the spaces in the grid are approximately parallelograms. As before this is the key: we can write down the area of a typical little parallelogram and add them all up with an integral. Suppose we want to approximate the area of the surface ${\bf r}(u,v)$ near ${\bf r}(u_0,v_0)$. The functions ${\bf r}(u,v_0)$ and ${\bf r}(u_0,v)$ define two curves that intersect at ${\bf r}(u_0,v_0)$. The derivatives of $\bf r$ give us vectors tangent to these two curves: ${\bf r}_u(u_0,v_0)$ and ${\bf r}_v(u_0,v_0)$, and then ${\bf r}_u(u_0,v_0)\,du$ and ${\bf r}_v(u_0,v_0)\,dv$ are two small tangent vectors, whose lengths can be used as the lengths of the sides of an approximating parallelogram. Finally, the area of this parallelogram is $|{\bf r}_u\times{\bf r}_v|\,du\,dv$ and so the total surface area is $$\int_a^b\int_c^d |{\bf r}_u\times{\bf r}_v|\,du\,dv.$$ Example 16.6.4 We find the area of the surface $\langle v\cos u,v\sin u,u\rangle$ for $0\le u \le \pi$ and $0\le v\le 1$; this is a portion of the helical surface in figure 16.6.2. We compute ${\bf r}_u = \langle -v\sin u,v\cos u,1\rangle$ and ${\bf r}_v = \langle \cos u,\sin u,0\rangle$. The cross product of these two vectors is $\langle \sin u,-\cos u,v\rangle$ with length $\sqrt{1+v^2}$, and the surface area is $$\int_0^\pi\int_0^1 \sqrt{1+v^2}\,dv\,du={\pi\sqrt2\over2}+ {\pi\ln(\sqrt2+1)\over 2}.$$ Exercises 16.6 You can use these Sage cells to graph surfaces. The first example is a tube around the Trefoil knot, the second is a cone. Ex 16.6.1Describe or sketch the surface with the given vector function. a. ${\bf r}(u,v) = \langle u+v,3-v,1+4u+5v\rangle$ b. ${\bf r}(u,v) = \langle 2\sin u, 3\cos u, v\rangle$ c. ${\bf r}(s,t) = \langle s,t,t^2-s^2\rangle$ d. ${\bf r}(s,t) = \langle s\sin 2t, s^2, s\cos 2t\rangle$ Ex 16.6.2Find a vector function ${\bf r}(u,v)$ for the surface. a. The plane that passes through the point $(1,2,-3)$ and is parallel to the vectors $\langle 1,1,-1\rangle$ and $\langle 1,-1,1\rangle$. b. The lower half of the ellipsoid $2x^2+4y^2+z^2=1$. c. The part of the sphere of radius 4 centered at the origin that lies between the planes $z=-2$ and $z=2$. 
Ex 16.6.3 Find the area of the portion of $x+2y+4z=10$ in the first octant. (answer)

Ex 16.6.4 Find the area of the portion of $2x+4y+z=0$ inside $x^2+y^2=1$. (answer)

Ex 16.6.5 Find the area of $z=x^2+y^2$ that lies below $z=1$. (answer)

Ex 16.6.6 Find the area of $z=\sqrt{x^2+y^2}$ that lies below $z=2$. (answer)

Ex 16.6.7 Find the area of the portion of $x^2+y^2+z^2=a^2$ that lies in the first octant. (answer)

Ex 16.6.8 Find the area of the portion of $x^2+y^2+z^2=a^2$ that lies above $x^2+y^2\le b^2$, $b\le a$. (answer)

Ex 16.6.9 Find the area of $z=x^2-y^2$ that lies inside $x^2+y^2=a^2$. (answer)

Ex 16.6.10 Find the area of $z=xy$ that lies inside $x^2+y^2=a^2$. (answer)

Ex 16.6.11 Find the area of $x^2+y^2+z^2=a^2$ that lies above the interior of the circle given in polar coordinates by $r=a\cos \theta$. (answer)

Ex 16.6.12 Find the area of the cone $z=k\sqrt{x^2+y^2}$ that lies above the interior of the circle given in polar coordinates by $r=a\cos \theta$. (answer)

Ex 16.6.13 Find the area of the plane $z=ax+by+c$ that lies over a region $D$ with area $A$. (answer)

Ex 16.6.14 Find the area of the cone $z=k\sqrt{x^2+y^2}$ that lies over a region $D$ with area $A$. (answer)

Ex 16.6.15 Find the area of the cylinder $x^2+z^2=a^2$ that lies inside the cylinder $x^2+y^2=a^2$. (answer)

Ex 16.6.16 The surface $f(x,y)$ can be represented with the vector function $\langle x,y,f(x,y)\rangle$. Set up the surface area integral using this vector function and compare to the integral of section 15.4.
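As a quick numerical cross-check of Example 16.6.4 above (my own sketch, using scipy), integrating $|{\bf r}_u\times{\bf r}_v|=\sqrt{1+v^2}$ over the given rectangle should reproduce $\pi\sqrt2/2+\pi\ln(\sqrt2+1)/2\approx 3.606$.

```python
import numpy as np
from scipy.integrate import dblquad

# Integrand |r_u x r_v| = sqrt(1 + v^2) for r(u, v) = <v cos u, v sin u, u>.
# dblquad integrates the first argument (v) over the inner limits.
integrand = lambda v, u: np.sqrt(1 + v**2)

area, _ = dblquad(integrand, 0, np.pi, 0, 1)        # u in [0, pi], v in [0, 1]
exact = np.pi*np.sqrt(2)/2 + np.pi*np.log(np.sqrt(2) + 1)/2
print(area, exact)                                   # both ~ 3.6059
```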
First, a recent gem from MathStackExchange:

Task: Calculate $latex \displaystyle \sum_{i = 1}^{69} \sqrt{ \left( 1 + \frac{1}{i^2} + \frac{1}{(i+1)^2} \right) }$ as quickly as you can with pencil and paper only.

Yes, this is just another cute problem that turns out to have a very pleasant solution. Here's how this one goes. (If you're interested – try it out. There's really only a few ways to proceed at first – so give it a whirl and any idea that has any promise will probably be the only idea with promise).

Looking at $latex 1 + \frac{1}{i^2} + \frac{1}{(i+1)^2}$, find a common denominator and add to get $latex \dfrac{i^4 + 2i^3 + 3i^2 + 2i + 1}{i^2(i+1)^2} = \dfrac{(i^2 + i + 1)^2}{i^2(i+1)^2}$. Aha – it's a perfect square, so we can take its square root, and now the calculation is very routine, almost. The next clever idea is to say that $latex \dfrac{ (i^2 + i + 1)}{i(i+1)} = \dfrac{(i^2 + 2i + 1)}{i(i+1)} - \dfrac{i}{i(i+1) }$, which we can rewrite as $latex \dfrac{(i+1)^2}{i(i+1)} - \dfrac{1}{i+1} = 1 + \dfrac{1}{i} - \dfrac{1}{i+1}$. So it telescopes and behaves very, very nicely. In particular, we get $latex 69 + 1 - \frac{1}{70}$.

With that little intro out of the way, I get into my main topics of the day. I've been reading a lot of different papers recently. The collection of journals that I have access to at Brown is a little different than the collection I used to get at Tech. And I mean this in two senses: firstly, there are literally different journals and databases to read from (the print collections are surprisingly comparable – I didn't realize how good of a math resource Tech's library really was). But in a second sense, the amount of math that I comprehend is greater, and the amount of time I'm willing to spend on a paper to develop the background is greater as well. That aside, I revisited a topic that I used to think about all the time at the start of my undergraduate studies: math education. It turns out that there are journals dedicated solely to math education, see here for example. And almost all the journals are either on JSTOR or have open-access straight from Springerlink, which is great. I have no intention of becoming a high school teacher or anything, but I became interested as soon as I began to come across people with radically different high school experiences than I did. I left my high school with a bad taste in my mouth. It was the sort of place that, in short, held me back in the following sense: they wouldn't let anyone take 'too hard' of a course-load for fear that they would overwork themselves and therefore fail, or do poorly, or overstress, in everything. In more direct terms, this meant that you had to petition to take 3 AP classes and had to really work to take 4. Absolutely no one was allowed to take more than 4 in one school year – so that many of my friends had to choose what science to take. Those of us who were willing all had sort of the same schedule in mind – if you did an art (band/choir/orchestra, usually), then in 10th grade you took AP Statistics, 11th AP Language, 12th AP Lit, AP Calc, AP (foreign language or Gov or European History or Econ), and an AP science – if no art, then you could take an additional AP science in 11th grade. At least, that's how it worked while I was around. So the big decisions were always around the senior year. For me, I had to ask: should I take AP Chem or AP Physics? (I ended up taking Physics, which was great – it was the curiosity and intuition from mechanics that led to me becoming a mathematician now).
Many of my friends asked the same sort of questions. And it was very annoying – I hate the idea that the school holds us back, ever. It also turned out that one of my classes, AP Lit, was terrible (I love lit, too – throughout the year, we only read one book, and it was a literature course – but I did a lot of reading on my own, I suppose, with that free time). And I was so annoyed that one of my four choices ended up being bad that I wrote a regretful and scathing letter to the administration at the end of the school year – one of the relatively few things I regret now. In short, I felt slighted by the system, and I’ve considered the system ever since. One of the articles I read was about the general idea that the sciences taught in schools and even at entry-undergraduate level in college are fundamentally different in both motivation and skill set from the ideas held by scientists and those who progress those subjects. The interesting part about the article was the amount of feedback that the journal received – enough to merit multiple copies of letters back and forth to make it to the next printings of the journal. That particular article was very careful to simply assert that the current paths of education in the sciences and the sciences themselves are different, as opposed to positing that any particular idea or method is above or better than any other. But of course, it’s perhaps the most natural response. Should they be different? Why does one learn math or the sciences in school? For that matter, why does one learn history (also oblique and hard to answer, but something that I maintain is important for at least the reason that it was the only substitute I ever had for an ethics class in my primary and secondary education). These are hard questions, and ones I’m not willing to directly address here at this time. But I will quickly note that in both Tech and Brown, I am stunned at how many people lack any sort of intuition for the four basic operations – (I once tutored someone who, upon being asked what 748 times 342 was, responded that it didn’t even matter because “math was made up at that point. It’s not like someone has sat down and counted that high.” oof. That hurts. Let’s not even talk about being able to add or subtract fractions. As a worker at the ‘Math Resource Center,’ I’ve learned that about a quarter of the time, helping people with their calculus classes is really a matter of helping these people manipulate fractions. So if the purpose of primary and secondary education is to get people to understand arithmetic operations and fractions, it’s not doing so well. John Allen Paulos should write yet another book, perhaps (Innumeracy is a good read). Should they be different? That is, is there much reason for the sciences and the education of the sciences to align in method and motivation? I’m not certain, but perhaps they shouldn’t pretend to be the same. I only ever learned arithmetic, as opposed to math, throughout my primary and secondary education with 2 exceptions: geometry (which had a surprisingly large logic content for me, and introduced me to interesting ideas) and calculus. Calling it math is a disservice – as Paulos mentions in his books, the general negativity towards math allows people to claim innumeracy ( “I’m not really a numbers person”) with pride – no one would ever say that they weren’t very good with letters. But reading is useful, or rather widely recognized to be useful and expressive. 
I end by mentioning that I think it is more important to come across real ideas of science and math at an early age, say elementary school, then middle school. In elementary and middle school, there really isn't much difference between the maths and the sciences, so I clump them together. But in my mind, the initial goals of science and math education should be to spark creativity and wonder, while English and reading courses stress critical thinking (somehow, math, science, and English all get the boring end of the stick while reading gets full hold over the realm of creativity – how backwards I must be). But those 4th graders whose teacher guided them towards the bee research, which has now been published under the 4th graders' names – don't you think that their view of science will be a much happier and, ultimately, more accurate one? Exciting, collaborative, uncertain, with a scientific method-based structure. But then again, perhaps the lesson that my friends and I learned from our own high school is the most relevant: if you want to do something, then don't let others stand in your way. A little motivation and discipline goes a long way.
\(\text{on the line }y=x, \text{ oddly enough for every point }y=x\\ \text{thus we have}\\ \dfrac{x^2-x}{2x-x+2} = a\\ x^2-x=a x + 2a\\ x^2 -(1+a)x -2a=0\\ x = \dfrac{(1+a)\pm \sqrt{(1+a)^2+8a}}{2}\)

\(\text{You then have two points of the form }(x_k,x_k),~k=1,2\\ \text{Where the }x_k \text{ are the roots above}\\ \text{Find the distance between these two points. It's }\\ d=\sqrt{2} \sqrt{\left | a^2+10 a+1\right| }\\ \text{Solve for }d=6 \text{ and choose the value of }a \text{ that is positive}\\ \text{and leads to real points }x,y\\ a=\sqrt{42}-5 \text{ is the only one}\)
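A quick numerical check of this answer (my own sketch): with $a=\sqrt{42}-5$ the two intersection points really are a distance $6$ apart.

```python
import numpy as np

a = np.sqrt(42) - 5
disc = np.sqrt((1 + a)**2 + 8*a)                  # discriminant of x^2 - (1+a)x - 2a = 0
x1, x2 = ((1 + a) + disc) / 2, ((1 + a) - disc) / 2
p1, p2 = np.array([x1, x1]), np.array([x2, x2])   # both points lie on the line y = x
print(np.linalg.norm(p1 - p2))                    # ~ 6.0
```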
I'll start with Earth. Earth is hurtling through space at a speed of approximately $29.78\ km/s$. If the sun were to disappear, the Earth would move in a straight line until the sun reappears. Since there are $259,200$ seconds in three days, that gives Earth the time to travel $29.78\ km/s \times 259,200\ s = 7,718,976\ km$. That's quite a distance. Since the distance between the Earth and the sun varies between $147,098,290\ km$ and $152,098,232\ km$, I'll average that down to about 150 million kilometers for calculations. Using Pythagoras, we can get the distance from the sun when it comes back after 3 days: $\sqrt{150,000,000^{2} + 7,700,000^{2}} \approx 150,197,500$. This puts us about $200,000\ km$ out of orbit, peanuts compared to the difference between the Earth's aphelion and its perihelion, which is about 5 million kilometers.

What about the influence of other planets? Good point, Jupiter is huge and can get reasonably close to Earth [citation needed]. We'll assume a worst case scenario and place Jupiter at a distance of 600,000,000 km from Earth. Jupiter is significantly slower than Earth, but in the span of three days, this is not going to make a huge difference considering the distance between them. You can calculate the acceleration of a body under gravitational influence by another body as: $G\frac{m}{r^{2}}$ where G is the gravitational constant, m is the mass of the body attracting (Jupiter in our case) and r is the distance between the two bodies. Filling this in gives us: $6.673\times10^{-11}\frac{1.8986\times10^{27}}{600,000,000,000^{2}} = 3.51926606\times10^{-7}\ m/s^{2}$, which means that the Earth will accelerate towards Jupiter at a rate of $3.51926606\times10^{-7}$ m/s every second. After 3 days we will have traveled $\frac{3.51926606\times10^{-7}\times259200^{2}}{2} = 11822.0311653\ m$ towards Jupiter, not even $12\ km$!

Mars is closer though. I see your point, but assuming Mars is as close as 50 million km, we get a shift towards Mars of less than a kilometer. Not really significant.

How will other planets fare? Well, Mercury will be worst off. If there's no significant change there, there won't be a significant change anywhere. As it is traveling at about $47.362\ km/s$, it could travel a distance of more than 12 million km in 3 days. Taking into account its smaller orbit, this would take it about 1.2 million km out of orbit, not bad. But still not much compared to the variance in its orbit, which is almost 14 million km.

Conclusion: If Fenrir eats the sun, there are more important things to worry about than where the planets will be in 3 days, when Fenrir needs to go to the bathroom.

Edit: But wait, the Earth is now going too fast for its distance from the sun. You're right. And it's slightly turned away from the sun too. And I must admit, I underestimated the effect of this. As some intelligent people in the comments pointed out, this would change the eccentricity of the Earth's orbit from 0.016 to 0.06. Using this calculator we can then figure out that Earth's orbit will now vary between 141 million km and 159 million km. The difference has nearly quadrupled! While in the grand scheme of things our orbit will still be relatively similar, this might be enough to seriously influence weather patterns.

Another possible effect: since gravity cannot travel faster than the speed of light, the effect of the sun disappearing can only propagate at the speed of light. Gravity needs about 4 seconds to traverse the diameter of the sun, so gravity will drop from 100% to 0 over the course of 4 seconds.
Additionally, there will be about a 0.04 second lag between the part of the Earth facing the sun and the most distant part. The acceleration due to the sun's gravity is $\frac{6.67\times10^{-11}\times1.9891\times10^{30}}{(1.496\times10^{11})^{2}} = 5.928151\times10^{-3}\ m/s^{2}$. Dropping from this value down to 0 over the course of 4 seconds with a maximum lag of 0.04 seconds doesn't seem bad enough to cause anything major, but maybe it is enough to cause some earthquakes? I'll leave that to geologists to decide.
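For reference, here is a short Python sketch reproducing the back-of-the-envelope numbers above (constants rounded the same way as in the text).

```python
import math

G = 6.673e-11                  # gravitational constant, m^3 kg^-1 s^-2
t = 3 * 24 * 3600              # three days, in seconds (259,200 s)

# Straight-line coasting of the Earth while the sun is gone
v_earth = 29.78e3                              # m/s
drift = v_earth * t                            # ~7.72e9 m, i.e. ~7.7 million km
r0 = 150e6 * 1e3                               # rounded Earth-sun distance, m
r_new = math.hypot(r0, drift)                  # distance when the sun reappears
print(drift / 1e3, "km travelled;", (r_new - r0) / 1e3, "km further out")

# Pull toward Jupiter over the same period (same worst-case distance as in the text)
m_jupiter = 1.8986e27                          # kg
d_jupiter = 600e9                              # m
a_jupiter = G * m_jupiter / d_jupiter**2
print(0.5 * a_jupiter * t**2, "m toward Jupiter")   # ~1.18e4 m
```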
As generator $g$ is used in DH how do you find a combination of prime $p$ and $g$? eg: if we choose $p=23$ and its generator is $7$ (given in the book) how do we find the generator? Mike gave you the answer for the specific question you asked. I'll try to give you an answer to the question you should have asked: For Diffie-Hellman, what criteria should I use to select a secure $p$ and $g$? This question is important, because not every large cyclic group is actually secure. It turns out that, for the group $\mathbb{Z}_p^*$, the factorization of $p-1$ is critical. If $p-1$ has a factor $q$, and $g^{(p-1)/q} \ne 1$, then given $g$ and $g^x \bmod p$, we can determine $x \bmod q$ in $O(\sqrt{q})$ time. What does this mean? Well, if we pick a $p$ where $p-1$ has a bunch of small factors $q_1, q_2, q_3$, and we give $g$ to be a primitive element (so $g^{(p-1)/q} \ne 1$ for any $q > 1$), then we transmit $g^x \bmod p$ as a part of the DH exchange, the attacker can efficiently derive $x \bmod q_1q_2q_3$; we're effectively giving him $\log_2 q_1q_2q_3$ bits of our secret exponent. This means that, with a random prime $p$ and either a random $g$, or a primitive $g$, we have a good possibility of leaking quite a bit of information. So, what do we do? Well, first of all, we make sure that $p-1$ has a large prime factor $q$ that we know. There are two common practices: Select a prime $p$ with $(p-1)/2$ prime as well (often called a safe prime). If we do that, then $q = (p-1)/2$ is certainly large enough (assuming $p$ is large enough). Select a prime value $q$ (perhaps 256 to 512 bits), and then search for a large prime $p = kq + 1$ (perhaps 1024 to 2048 bits). This is called a Schnorr prime Once we have our values $p$ and $q$, we then select a generator $g$ that is within the subgroup of size $q$. Members of this subgroup have the property that $g^{(p-1)/r} = 1$ for any factors $r$ of $p-1$ other than $q$ (and $p-1$ itself), hence the above observation does not apply. One easy way of selecting a random generator is to select a random value $h$ between 2 and $p-1$, and compute $h^{(p-1)/q} \bmod p$; if that value is not 1 (and with high probability, it won't be), then $h^{(p-1)/q} \bmod p$ is your random generator. An alternative method of finding a generator $g$: if you selected a safe prime, and if your safe prime also satisfied the condition $p = 7 \bmod 8$, then the value $g=2$ will always be a generator for the group of size $q$. It won't obviously be a random generator, however, we can also show that, with a safe prime, if you can solve the computational Diffie-Hellman problem with $g=2$, you can solve it with any $g$ (with a polynomial number of queries), hence $g=2$ cannot be weak. I'm assuming you meant "how to efficiently find generator $g$ in a cyclic group?" Small groups For small values $p$, bruteforce is efficient. Large groups with known factorization of group order The order of the group $\mathbb{Z}_p^*$ is $p-1$. The order of every element divides the order of the group, so the factorization of $p-1$ reveals the possible orders of elements. Using this information, one can fairly efficiently find the order of any element in the group. See also Algorithm 4.79. Note: this will also work for small groups as you should be able to factor $p-1$ for small values of $p$. Large groups with unknown factorization of group order There is no efficient method for finding the order of group elements. 
With DH, however, since you get to choose $p$, there are some things you can do to find generators of the full group $\mathbb{Z}_p$ or a generator of a large cyclic subgroup with in $\mathbb{Z}_p$. See 4.6.1 of HAC Ch 4. See also another question here.
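To make the "pick $q$, then find $p = kq + 1$, then take $g = h^{(p-1)/q}$" recipe concrete, here is a toy-sized Python sketch (sympy for primality testing; the bit sizes are deliberately tiny and purely illustrative, and real parameters should be far larger and generated by a vetted library).

```python
import random
from sympy import isprime, randprime

def schnorr_group(qbits=32, pbits=64):
    """Toy Schnorr group: prime q, prime p = k*q + 1, generator g of the order-q subgroup."""
    while True:
        q = randprime(2**(qbits - 1), 2**qbits)
        # search for k such that p = k*q + 1 is prime and roughly pbits bits long
        for _ in range(10_000):
            k = random.randrange(2**(pbits - qbits - 1), 2**(pbits - qbits))
            p = k * q + 1
            if isprime(p):
                # random element of the subgroup of order q: g = h^((p-1)/q) mod p, g != 1
                while True:
                    h = random.randrange(2, p - 1)
                    g = pow(h, (p - 1) // q, p)
                    if g != 1:
                        return p, q, g

p, q, g = schnorr_group()
assert pow(g, q, p) == 1      # g has order dividing q; since q is prime and g != 1, order is q
print("p =", p, "q =", q, "g =", g)
```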
Adimurthi and Chaudhuri, Nirmalendu and Ramaswamy, Mythily (2001) An Improved Hardy-Sobolev Inequality and its Application. In: Proceedings of the American Mathematical Society, 130 (2). pp. 489-505.

Abstract

For $\Omega \subset \mathbb{R}^n$, $n \geq 2$, a bounded domain, and for $1 < p < n$, we improve the Hardy-Sobolev inequality by adding a term with a singular weight of the type $\left(\frac{1}{\log(1/|x|)}\right)^2$. We show that this weight function is optimal in the sense that the inequality fails for any other weight function more singular than this one. Moreover, we show that a series of finite terms can be added to improve the Hardy-Sobolev inequality, which answers a question of Brezis-Vazquez. Finally, we use this result to analyze the behaviour of the first eigenvalue of the operator $L_\mu u := -\mathrm{div}(|\nabla u|^{p-2}\nabla u) - \frac{\mu}{|x|^p}|u|^{p-2}u$ as $\mu$ increases to $\left(\frac{n-p}{p}\right)^p$ for $1 < p < n$.

Item Type: Journal Article
Additional Information: Copyright of this article belongs to the American Mathematical Society.
Keywords: Hardy-Sobolev inequality; eigenvalue; p-Laplacian
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
URI: http://eprints.iisc.ac.in/id/eprint/1852
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Solve $\sin\theta = 2\cos\theta$ for $\theta$. This question is giving me some issues because I know $\sin$ and $\cos$ are equal at $45$ degrees, but there is a $2$ on the right side in front of the cosine. So how would I get $\theta$?
Square both sides to get ${\sin ^2}\theta = 4{\cos ^2}\theta \Rightarrow 1 - {\cos ^2}\theta = 4{\cos ^2}\theta \Rightarrow {\cos ^2}\theta = \frac{1}{5}$.
Alternatively: let $\theta \in \mathbb{R}$; note that $\cos^{2}\theta = 1 - \sin^{2}\theta$. Then by assumption $\cos^{2}\theta = 1 - 4\cos^{2}\theta$, so $\cos^{2}\theta = 1/5$.
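A throwaway numerical check of the algebra above (not part of either answer): since $\sin\theta = 2\cos\theta$ means $\tan\theta = 2$, one solution is $\theta = \arctan 2 \approx 63.4^\circ$, and indeed $\cos^2\theta = 1/5$ there.

```python
import math

theta = math.atan(2)                       # one solution of sin(theta) = 2*cos(theta)
assert math.isclose(math.sin(theta), 2 * math.cos(theta))
assert math.isclose(math.cos(theta) ** 2, 1 / 5)   # matches cos^2(theta) = 1/5
print(math.degrees(theta))                 # about 63.43 degrees, not 45
```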
Claim: $L$ is context-free. Proof Idea: There has to be at least one difference between the first and second half; we give a grammar that makes sure to generate one and leaves the rest arbitrary. Proof: For sake of simplicity, assume a binary alphabet $\Sigma = \{a,b\}$. The proof readily extends to other sizes. Consider the grammar $G$: $\qquad\begin{align} S &\to AB \mid BA \\ A &\to a \mid aAa \mid aAb \mid bAa \mid bAb \\ B &\to b \mid aBa \mid aBb \mid bBa \mid bBb \end{align}$ It is quite clear that it generates $\qquad \mathcal{L}(G) = \{ \underbrace{w_1}_k x \underbrace{w_2v_1}_{k+l}y\underbrace{v_2}_l \mid |w_1|=|w_2|=k, |v_1|=|v_2|=l, x\neq y \} \subseteq \Sigma^*;$ the suspicious may perform a nested induction over $k$ and $l$ with case distinction over pairs $(x,y)$. Now, $w_2$ and $v_1$ commute (intuitively speaking, $w_2$ and $v_1$ can exchange symbols because both contain symbols chosen independently from the rest of the word). Therefore, $x$ and $y$ have the same position (in their respective half), which implies $\mathcal{L}(G) = L$ because $G$ imposes no other restrictions on its language. The interested reader may enjoy two follow-up problems: Exercise 1: Come up with a PDA for $L$! Exercise 2: What about $\{xyz \mid |x|=|y|=|z|, x\neq y \lor y \neq z \lor x \neq z\}$?
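A quick brute-force cross-check of the claim (here $L$ denotes $\{xy \mid |x| = |y|,\ x \ne y\}$, the language from the original question, which is not restated in this excerpt): the sketch below derives every word of length at most 10 from $G$ and compares the result against a direct test of membership in $L$. It is only a sanity check over short words, not a substitute for the induction.

```python
from itertools import product

RULES = {
    "S": ["AB", "BA"],
    "A": ["a", "aAa", "aAb", "bAa", "bAb"],
    "B": ["b", "aBa", "aBb", "bBa", "bBb"],
}

def min_yield(form: str) -> int:
    # length of the shortest terminal string reachable from a sentential form
    return sum(2 if c == "S" else 1 for c in form)

def generated(maxlen: int) -> set:
    """All terminal words of length <= maxlen derivable from S (leftmost derivations)."""
    words, stack = set(), ["S"]
    while stack:
        form = stack.pop()
        i = next((i for i, c in enumerate(form) if c.isupper()), None)
        if i is None:
            words.add(form)
            continue
        for rhs in RULES[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if min_yield(new) <= maxlen:     # safe pruning: min_yield never decreases
                stack.append(new)
    return words

def in_L(w: str) -> bool:
    n = len(w)
    return n % 2 == 0 and w[: n // 2] != w[n // 2:]

MAXLEN = 10
lang = {"".join(w) for n in range(MAXLEN + 1)
        for w in product("ab", repeat=n) if in_L("".join(w))}
assert generated(MAXLEN) == lang
print("L(G) and L agree on all words up to length", MAXLEN)
```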
Click to zoom in. The counter on the right side shows 99,999,962. That's a pity because the person who manages to obtain a screenshot with the number 100,000,000 gets a $100 million coffee mug. That's a lot of money. As far as I remember, I didn't hold $100 million in my hands for years if not decades. It's even hard to imagine how the mug may be this expensive. It's not just the money. The British aristocracy promised to celebrate the 100,000,000th visitor to Watts' blog in the British Commonwealth during the January 7th Watts Up Day. So one had a chance to obtain a giant platinum coffee cup and share the niceties with the British royal family. At any rate, a fast shift-reload didn't give me any better number (it was the same: the next one I got a long time later was 100,004,397) and someone else was lucky. You could still be impressed that I got pretty close. Suspicious values such as 99,999,962 are referred to as fine-tuning. How is it possible? One explanation is publication bias. People including myself have also gotten more boring numbers during the years. In fact, they were so boring that I didn't publish a TRF blog entry about them. So the numbers from the blog entries can't be viewed as representative. They don't follow the same statistical distribution as the actual values that the counter shows. These comments may sound like jokes but because of their universal validity, they are important for serious situations, too. You simply can't view the literature – not even scientific literature – as being representative of the actual importance of various problems or the actual likelihood that some figures are right (although the scientific literature should still be closer to the truth than a random guess by a layman: but even this sometimes fails to be the case). Some values may receive many more papers not because they're likely or important but because they would be more interesting if they were true. Or because of pure ignorance. Or due to chance. The anthropic explanation is that TRF's obtaining a number similar to 99,999,962 is needed for the intelligent life on Earth to exist. It's a good try but you see that it doesn't really work. One can conceive of other civilizations where TRF doesn't produce this number. The same failure applies to the strong CP-problem: the \(\theta\)-angle of QCD weighting the instantons in the action (which cause additional CP-violation) is very small according to the observations. But it's apparently not needed for life, either. Some other parameters may have to be more accurately fine-tuned but it still doesn't mean that there doesn't exist a much more precise, non-anthropic calculation of these values as well. In some cases, there exist totally rational explanations of such coincidences. For example, \[\exp(\pi \sqrt{163}) \approx 262{,}537{,}412{,}640{,}768{,}743.99999999999925 \] That's painfully close to an integer, but it isn't an integer. The probability that you get 12 copies of "9" after the decimal point is just \(10^{-12}\), one part in a trillion. You shouldn't be so lucky with simple formulae such as one involving the simple number 163. There are far fewer "simple formulae" of this complexity than a trillion. So statistically speaking, there has to exist an explanation. And there does. It's based on the \(j\)-function, a function that naturally parameterizes the integration region for one-loop (toroidal) diagrams in string theory.
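A quick way to see the twelve nines for yourself (a sketch assuming the mpmath package is available): compute the constant to 50 digits and compare it with \(640320^3 + 744\), the integer that the leading terms of the \(j\)-function expansion produce.

```python
from mpmath import mp, exp, pi, sqrt, floor

mp.dps = 50                        # work with 50 significant digits
x = exp(pi * sqrt(163))
print(x)                           # 262537412640768743.99999999999925007...
print(x - floor(x))                # ~0.99999999999925: within 7.5e-13 of an integer
print(640320**3 + 744 - x)         # ~7.5e-13, the tiny gap to the j-function integer
```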
You may rewrite the particular number above using another expansion whose first term is an integer and the following term is already manifestly tiny. Let me just assure a reader that his or her "surprising observations" including the neutron mass or proton mass don't have any such explanation and they're coincidences, indeed. If he calculates how likely it is for the coincidences to work at the given accuracy, he gets a probability vastly less tiny than \(10^{-12}\), too. So the statistical argument is much weaker. Also, only dimensionless parameters – those independent of our choice of units, which is just a part of the "messy cultural baggage" – may be expected to have any rational explanations. From the viewpoint of the Standard Model, the Higgs boson mass, which is probably close to 125 GeV as we learned last year – and we will probably learn for sure in 2012 – is extremely tiny. Quantum corrections would naturally drive it (both the vev and the mass) towards the fundamental scale, which is 15 orders of magnitude higher. This puzzle is known as the hierarchy problem, and supersymmetry remains the most viable solution according to the evidence we have as of today. Congratulations to Anthony.
Sum of All Ring Products is Closed under Addition
Theorem: Let $\left({R, +, \circ}\right)$ be a ring. Let $\left({S, +}\right)$ and $\left({T, +}\right)$ be additive subgroups of $\left({R, +, \circ}\right)$. Let $S T$ be defined as: $\displaystyle S T = \left\{{\sum_{i \mathop = 1}^n s_i \circ t_i: s_i \in S, t_i \in T, i \in \left[{1 \,.\,.\, n}\right], n \in \mathbb{N}}\right\}$ Then $\left({S T, +}\right)$ is a closed subset of $\left({R, +}\right)$.
Proof: Let $x_1, x_2 \in S T$. Then: $\displaystyle x_1 = \sum_{i \mathop = 1}^j s_i \circ t_i, \quad x_2 = \sum_{i \mathop = 1}^k s_i \circ t_i$ for some $s_i \in S$, $t_i \in T$ and $j, k \in \mathbb{N}$. By renaming the indices, we can express $x_2$ as: $\displaystyle x_2 = \sum_{i \mathop = j+1}^{j+k} s_i \circ t_i$ and hence: $\displaystyle x_1 + x_2 = \sum_{i \mathop = 1}^j s_i \circ t_i + \sum_{i \mathop = j+1}^{j+k} s_i \circ t_i = \sum_{i \mathop = 1}^{j+k} s_i \circ t_i$ So $x_1 + x_2 \in S T$ and $\left({S T, +}\right)$ is shown to be closed. $\blacksquare$
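As a toy illustration of the theorem (my own example, not part of the entry above): take $R = \mathbb{Z}$, $S = 2\mathbb{Z}$, $T = 3\mathbb{Z}$, so $ST$ works out to $6\mathbb{Z}$. Adding two elements of $ST$ just concatenates their lists of products, exactly the re-indexing step in the proof.

```python
import random

def random_terms(n: int = 4, bound: int = 10):
    """A random list of (s_i, t_i) pairs with s_i in 2Z and t_i in 3Z."""
    return [(2 * random.randint(-bound, bound), 3 * random.randint(-bound, bound))
            for _ in range(n)]

x1_terms, x2_terms = random_terms(), random_terms()
x1 = sum(s * t for s, t in x1_terms)
x2 = sum(s * t for s, t in x2_terms)
combined = x1_terms + x2_terms          # indices 1..j followed by j+1..j+k
assert x1 + x2 == sum(s * t for s, t in combined)   # the sum is again a sum of products
assert (x1 + x2) % 6 == 0               # consistent with ST = 6Z in this example
```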
Search Now showing items 1-10 of 21 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at s√ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... 
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV (American Physical Society, 2015-06) We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
Up until now, we have dealt with double integrals in the Cartesian coordinate system. This is helpful in situations where the domain can be expressed simply in terms of \(x\) and \(y\). However, many problems are not so easy to graph. If the domain has the characteristics of a circle or cardioid, then it is much easier to solve the integral using polar coordinates.
Introduction
The Cartesian system focuses on navigating to a specific point based on its distance from the x, y, and sometimes z axes. In polar form, there are generally two parameters for navigating to a point: \(r\) and \( \theta \). \(r\) represents the magnitude of the vector that stretches from the origin to the desired point; in other words, \(r\) is the distance directly to that coordinate point. \( \theta \) represents the angle that the aforementioned vector makes with the x-axis. This creates a circular type of motion as we adjust the value of \( \theta \), which allows us to express a circle of radius 1 as \( r = 1 \) as opposed to \( x^2 + y^2 = 1 \) in Cartesian coordinates.
Polar Double Integration Formula
Many of the double integrals that we have encountered so far have involved circles or at least expressions with \(x^2 + y^2\). When we see these expressions a bell should ring and we should shout, "Can't we use polar coordinates?" The answer is, "Yes," but only with care. Recall that when we changed variables in single variable integration, such as \(u = 2x\), we needed to work out the stretching factor \(du = 2\,dx\). The idea is similar with two variable integration: when we change to polar coordinates, there will also be a stretching factor. This is evident since the area of the "polar rectangle" is not just \( \Delta{r}\, \Delta{\theta} \) as one may expect. The picture is shown below. Even if \( \Delta{r}\) and \( \Delta{\theta}\) are very small, the area is not the product \( \Delta{r}\, \Delta{\theta} \). This comes from the definition of radians: an arc that extends \( \Delta{\theta}\) radians a distance \(r\) out from the origin has length \( r\, \Delta{\theta}\). If both \(\Delta r\) and \(\Delta \theta\) are very small then the polar rectangle has area \[ \text{Area} = r\, \Delta{r}\, \Delta{\theta}. \] This leads us to the following theorem.
Theorem: Double Integration in Polar Coordinates
Let \(f(x,y)\) be a continuous function defined over a region \(R\) bounded in polar coordinates by \( r_1(\theta) < r < r_2(\theta) \) and \( \theta_1 < \theta < \theta_2 \). Then \[ \iint_R f(x,y)\,dy\,dx =\int_{\theta_1}^{\theta_2} \int_{r_1(\theta)}^{r_2(\theta)} f(r\cos \theta,r\sin \theta)\,r\,dr\,d\theta.\]
Theoretical discussion with descriptive elaboration
The area of a closed and bounded region \(R\) in the polar coordinate plane is given by \[ A = \iint_{R} r \, dr\, d \theta. \] To find the bounds for a domain in this form, we use a similar technique as with integrals in rectangular form. Beginning at the origin with \(r = 0 \), we increase the value of \(r\) until we find the maximum and minimum overall distances from the origin. Similarly, for \(\theta \) we start at \( \theta = 0 \) and find the minimum and maximum angles that the domain makes with the origin. The maximum distance from the origin and the maximum angle that the boundary makes with the origin give the upper bounds for the double integral; the minimum distance and the minimum angle give the lower bounds. For example, to find the bounds for \(r\), we look to see what the minimum and maximum overall distances from the origin are in terms of \(r\).
Sometimes problems will explicitly give you the curves that form the domain; other times you may need to look at a graph to determine the domain. Regardless, if the origin is contained in the domain, then the lower bound for \(r\) will be 0. The upper bound will be whatever curve encompasses the rest of the domain. \( \theta \) is usually simpler to compute. The lower and upper bounds are the minimum and maximum angles that the domain makes with the origin. Trigonometric functions can be used to determine the extreme angles that the boundary makes with the origin. If an equation is provided, it is helpful to use the conversions: \[ x = r\, \cos \,\theta\] \[y = r\, \sin \,\theta \] to convert equations from Cartesian to polar form. If one must determine the bounds from a provided graph, it can be helpful to guess and check by plotting some test points to see if your bounds truly match the provided graph. If you need to convert an integral from Cartesian to polar form, graph the domain using the Cartesian bounds and your knowledge of curves in the Cartesian domain. Then use the method described above to derive the bounds in polar form. Once the integral is set up, it may be solved exactly like an integral using rectangular coordinates.
Example \(\PageIndex{1}\) Find the volume of the part of the paraboloid \[ z = 9 - x^2 - y^2 \] that lies inside the cylinder \[ x^2 + y^2 = 4. \] Solution The surfaces are shown below. This is definitely a case for polar coordinates. The region \(R\) is the part of the xy-plane that is inside the cylinder. In polar coordinates, the cylinder has equation \[ r^2 = 4. \] Taking square roots and recalling that \(r\) is positive gives \[ r = 2. \] The inside of the cylinder is thus the polar rectangle \( 0 < r < 2 \), \( 0 < \theta < 2\pi\). The equation of the paraboloid becomes \[ z = 9 - r^2. \] We find the integral \[ \int _0^{2\pi}\int_0^2 \left(9-r^2\right) r\,dr\,d\theta.\] This integral is a matter of routine and evaluates to \( 28\pi\).
Example \(\PageIndex{2}\) Find the volume of the part of the sphere of radius 3 that is left after drilling a cylindrical hole of radius 2 through the center. Solution The picture is shown below. The region this time is the annulus (washer) between the circles \(r = 2\) and \(r = 3\) as shown below. The sphere has equation \[ x^2 + y^2 + z^2 = 9 .\] In polar coordinates this reduces to \[ r^2 + z^2 = 9. \] Solving for \(z\) by subtracting \(r^2\) and taking a square root, we get top and bottom surfaces of \[z=\sqrt{9-r^2} \;\;\; \text{and} \;\;\; z=-\sqrt{9-r^2}. \] We get the double integral \[\int_0^{2\pi} \int_2^3 (\sqrt{9-r^2}+ \sqrt{9-r^2})r\; dr\, d\theta. \] This integral can be solved by letting \[u = 9 - r^2 \;\;\; \text{and} \;\;\;du = -2r\,dr.\] After substituting we get \[\begin{align} &-\dfrac{1}{2}\int_{0}^{2\pi}\int_{5}^{0} 2u^{\frac{1}{2}} \; du\, d\theta \\ &= -\dfrac{2}{3}\int_{0}^{2\pi}[u^{\frac{3}{2}}]_5^0 \; d\theta \\ &= \dfrac{20 \sqrt{5} \pi}{3}.\end{align} \]
Example \(\PageIndex{3}\) Change the Cartesian integral into an equivalent polar integral, then solve it. \[ \int_{1}^{\sqrt{3}}\int_{1}^{x}dydx \] Solution The point at (\(\sqrt{3}\), 1) is at an angle of \(\pi/6\) from the origin. The point at (\(\sqrt{3}, \sqrt{3}\)) is at an angle of \(\pi/4\) from the origin. In terms of \(r\), the domain is bounded by the two equations \(r=\csc\theta\) and \(r=\sqrt{3}\sec\theta\). Thus, the converted integral is \[ \int_{\pi/6}^{\pi/4}\int_{\csc\theta}^{\sqrt{3}\sec\theta}r\,dr\,d\theta.
\] Now the integral can be solved just like any other integral. \[\begin{align} &\int_{\pi/6}^{\pi/4} \int_{\csc\theta}^{\sqrt{3}\sec\theta}r\,dr\,d\theta \\ & =\int_{\pi/6}^{\pi/4} \left(\dfrac{3}{2} \sec^2\theta - \dfrac{1}{2} \csc^2\theta\right) d\theta \\ & = \left [ \dfrac{3}{2} \tan\theta + \dfrac{1}{2}\cot\theta \right ] _{\pi/6}^{\pi/4} \\ & =2 - \sqrt{3}. \end{align}\]
Example \(\PageIndex{4}\) Find the area of the region cut from the first quadrant by the curve \( r = \sqrt{2 - \sin2\theta}\). Solution Note that it is not even necessary to draw the region in this case because all of the information needed is already provided. Because the region is in the first quadrant, the domain is bounded by \( \theta = 0 \) and \( \theta = \dfrac{\pi}{2} \). The sole boundary for \(r\) is \(r = \sqrt{2 - \sin2\theta}\), so the integral is \[\begin{align} & \int_{0}^{\pi/2}\int_{0}^{\sqrt{2 - \sin2\theta}} r\,dr\,d\theta \\ &= \int_{0}^{\pi/2} \left [ \dfrac{r^2}{2} \right ] _{0}^{\sqrt{2 - \sin2\theta}} d\theta \\ &= \int_{0}^{\pi/2} \dfrac{2 - \sin2\theta}{2} d\theta \\ &= \int_{0}^{\pi/2} \left(1 - \dfrac{\sin2\theta}{2}\right) d\theta \\ &= \left [ \theta + \dfrac{\cos2\theta}{4} \right ] _{0}^{\pi/2} \\ &= \left(\dfrac{\pi}{2} - \dfrac{1}{4}\right) - \dfrac{1}{4} \\ &= \dfrac{\pi}{2} - \dfrac{1}{2}. \end{align}\]
Contributors: Michael Rea (UCD), Larry Green. Integrated by Justin Marshall.
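As a quick numerical sanity check on the worked examples above (a sketch assuming SciPy is available; `dblquad` integrates over the inner variable, here \(r\), first):

```python
import numpy as np
from scipy.integrate import dblquad

# Example 1: volume under z = 9 - r^2 over the disk r <= 2, expected 28*pi.
vol, _ = dblquad(lambda r, th: (9 - r**2) * r, 0, 2 * np.pi, 0, 2)
print(vol, 28 * np.pi)            # both ~87.96

# Example 4: area inside r = sqrt(2 - sin(2*theta)) in the first quadrant,
# expected pi/2 - 1/2.
area, _ = dblquad(lambda r, th: r, 0, np.pi / 2,
                  0, lambda th: np.sqrt(2 - np.sin(2 * th)))
print(area, np.pi / 2 - 0.5)      # both ~1.0708
```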
What is the smallest possible size of a set of points in $\mathbb{F}_q^3$ which intersects (blocks) every line? Clearly the union of three affine hyperplanes that intersect in a singleton, say $x = 0, y = 0$ and $z = 0$, forms such a set. If we denote by $s(q)$ the smallest possible size, then this gives us $$s(q) \leq 3q^2 - 3q + 1$$ for all $q$. Since any two points of $\mathbb{F}_2^3$ form a line, we get $s(2) = 2^3 - 1 = 7 = 12 - 6 + 1$. For $q = 3$ we can do better. In $\mathbb{F}_3^3$ the complement of such a set is a cap, i.e., a set of points no three of which are collinear. We know that the largest size of a cap in $\mathbb{F}_3^3$ is $9$ (corresponding to a quadric), and hence $s(3) = 18 < 27 - 9 + 1$. (side note: the problem of finding such a blocking set in $\mathbb{F}_3^n$ is thus equivalent to the famous cap set problem. See the survey article by Bierbrauer and Edel, large caps in projective Galois spaces and the paper by Bateman and Katz, new bounds on cap sets) Question 1: Can we improve the upper bound in general? We can also give a lower bound on $s(q)$. Jamison/Brouwer-Schrijver proved using the polynomial method that the smallest possible size of a blocking set in $\mathbb{F}_q^2$ is $2q - 1$. See this, this, this and this for various proofs of their result. Now take any $q$ parallel affine planes in $\mathbb{F}_q^3$, then the intersection of a blocking set with these hyperplanes must have size at least $2q - 1$, and hence $$2q^2 - q \leq s(q).$$ Question 2: Can we improve this lower bound in general? The Jamison/Brouwer-Schrijver result gives us another way of constructing a blocking set of size $3q^2 - 3q + 1$. Again take $q$ parallel hyperplanes $H_1, \dots, H_q$. Let $B_2, \dots, B_{q}$ be blocking sets of size $2q - 1$ in $H_2, \dots, H_{q}$. Then $B = H_1 \cup B_2 \cup \dots \cup B_{q}$ is a blocking set of size $(q-1)(2q-1) + q^2 = 3q^2 - 3q + 1$. Note that the problem of determining $s(q)$ is trivial for projective spaces. It's a classical result that a line blocking set in $PG(3,q)$ has size at least $1 + q + q^2$ with equality if and only if it is a hyperplane. See Chapter 3 of current research topics in Galois geometry for a recent survey on projective blocking sets. Edit 1: After Douglas Zare's answer below we have $s(q) \geq 2q^2 - 1$ for all $q$ and $s(q) \leq 3q^2 - 3q$ for $q \geq 3$. Can this be improved further? I have also found two references that prove this lower bound of $2q^2 - 1$, Proposition 4.1 in Nuclei of pointsets in $PG(n,q)$ (1997) and Theorem 3.1 in On Nuclei and Blocking Sets in Desarguesian Spaces (1999). In fact, Sziklai has mentioned the same argument as Douglas Zare after his proof of Proposition 4.1. Their proofs are generalisations of the polynomial technique introduced by Blokhuis in On nuclei and affine blocking sets (1994).
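For what it's worth, the $3q^2-3q+1$ construction is easy to verify by brute force for small $q$. The Python sketch below (my own check, $q=3$) enumerates all $117$ lines of $\mathbb{F}_3^3$ and confirms that the $19$-point union of the three coordinate hyperplanes blocks every one of them.

```python
from itertools import product

q = 3
points = list(product(range(q), repeat=3))

# One representative per direction: first nonzero coordinate scaled to 1.
dirs = [d for d in product(range(q), repeat=3)
        if any(d) and d[next(i for i, x in enumerate(d) if x)] == 1]

# Every affine line, each stored once as a frozenset of its q points.
lines = {frozenset(tuple((p[i] + t * d[i]) % q for i in range(3)) for t in range(q))
         for p in points for d in dirs}
assert len(lines) == q**2 * (q**3 - 1) // (q - 1)    # 117 lines for q = 3

# Union of the hyperplanes x=0, y=0, z=0: size 3q^2 - 3q + 1 = 19.
B = {p for p in points if 0 in p}
assert len(B) == 3 * q**2 - 3 * q + 1
assert all(line & B for line in lines)               # B meets every line
print("the 19-point set blocks all", len(lines), "lines of AG(3,3)")
```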
The question is definitely not trivial. +1 for OP. The solution follows from the following theorem: Theorem: If $a > 0$ and $n$ is a rational number then $$\lim_{x \to a}\frac{x^{n} - a^{n}}{x - a} = na^{n - 1}\tag{1}$$ This is one of the standard limits which can be used to evaluate many limits involving algebraic functions. The proof of the above theorem is easy if $n$ is an integer. For positive integers we can simply use $$x^{n} - a^{n} = (x - a)\sum_{i = 0}^{n - 1}x^{n - 1 - i}a^{i}$$ For $n = 0$ the result is obvious. For negative integer $n = -m$ we can use $x^{n} = 1/x^{m}$ and the fact that the result holds for positive integers. Similarly if the result holds positive rational number $n$ we can show that it holds for negative rational $n$ also. Thus we need to show that if $n = p/q$ with integers $p > 0, q > 1$ then the formula $(1)$ holds. Let $b = a^{1/q}$ so that $a = b^{q}$. We know that $$\lim_{y \to b}\frac{y^{q} - b^{q}}{y - b} = qb^{q - 1}\tag{2}$$ From $(2)$ it follows that the ratio $(y^{q} - b^{q})/(y - b)$ is bounded and away from $0$ as $y \to b$. Hence its reciprocal is also bounded and away from $0$ as $y \to b$. Also note that when $y \to b$ then $x = y^{q} \to b^{q} = a$ (and vice-versa because $f(y) = y^{q}$ is strictly monotone in $[0, \infty)$). Thus the ratio $(x^{1/q} - a^{1/q})/(x - a)$ is bounded when $x = y^{q} \to a$ and therefore $$\lim_{x \to a}x^{1/q} = a^{1/q}\tag{3}$$ (this by the way proves continuity of $x^{1/q}$). Now we have\begin{align}L &= \lim_{x \to a}\frac{x^{n} - a^{n}}{x - a}\notag\\&= \lim_{x \to a}\frac{x^{p/q} - a^{p/q}}{x - a}\notag\\&= \lim_{t \to b}\frac{t^{p} - b^{p}}{t^{q} - b^{q}}\text{ (putting }x = t^{q}, a = b^{q}\text{ and using (3))}\notag\\&= \lim_{t \to b}\dfrac{\dfrac{t^{p} - b^{p}}{t - b}}{\dfrac{t^{q} - b^{q}}{t - b}}\notag\\&= \frac{pb^{p - 1}}{qb^{q - 1}}\notag\\&= \frac{p}{q}b^{p - q}\notag\\&= na^{n - 1}\notag\end{align}There is another way to prove this (via inequalities and squeeze theorem) without using the continuity of $x^{1/q}$. Let me know if you are interested in that version. Update: On request of OP I am providing a proof of formula $(1)$ based on Squeeze Theorem. The credit for this proof must go to G. H. Hardy! In what follows all the numbers are positive (whether they are integers, rationals or reals will be mentioned as and when needed). Let $a, b$ be real numbers with $a > 1 > b > 0$. Let $r$ be an integer. Clearly we have $a^{r} > a^{i}$ for all $i = 0, 1, 2, \ldots, r - 1$. Hence on adding these inequalities we get $$ra^{r} > 1 + a + a^{2} + \cdots + a^{r - 1}$$ Multiplying by $(a - 1) > 0$ we get $$ra^{r}(a - 1) > a^{r} - 1$$ Adding $r(a^{r} - 1)$ on both sides, and dividing by $r(r + 1)$, we obtain $$\frac{a^{r + 1} - 1}{r + 1} > \frac{a^{r} - 1}{r}\tag{4}$$ Similarly we can prove that $$\frac{1 - b^{r + 1}}{r + 1} < \frac{1 - b^{r}}{r}\tag{5}$$ It follows that if $r, s$ are positive integers with $r > s$ then $$\frac{a^{r} - 1}{r} > \frac{a^{s} - 1}{s},\,\frac{1 - b^{r}}{r} < \frac{1 - b^{s}}{s}\tag{6}$$ If we put $s = 1$ we get $$a^{r} - 1 > r(a - 1),\, 1 - b^{r} < r(1 - b)\tag{7}$$ for $r > 1$. Next we show that the inequalities $(6), (7)$ hold when $r, s$ are positive rational numbers with $r > s$. Let $r = k/l, s = m/n$ and $r > s$ implies that $kn > lm$. Let $c = a^{1/ln}$ so that $c > 1$. 
In the first inequality of $(6)$ we can replace $a$ by $c$, $r$ by $kn$ and $s$ by $lm$ to get $$\frac{c^{kn} - 1}{kn} > \frac{c^{lm} - 1}{lm}$$ or $$\frac{a^{r} - 1}{r} > \frac{a^{s} - 1}{s}$$ In similar manner we can prove that other inequalities also hold when $r, s$ are rational numbers. Now that $r, s$ are rational, it is possible to take $r = 1$ in $(6)$ to get $$a^{s} - 1 < s(a - 1),\,1 - b^{s} > s(1 - b)\tag{8}$$ for rational $s$ with $0 < s < 1$. Thus we have inequalities $(6)-(8)$ for all positive rational numbers $r, s$ with $r > 1 > s$. In what follows we will assume that $a, b$ are real with $a > 1 > b > 0$ (same as before) and $r, s$ are rational with $r > 1 > s > 0$. Clearly $1/b > 1$ and hence replacing $a$ by $1/b$ and $b$ by $1/a$ in $(7)$ we get $$a^{r} - 1 < ra^{r - 1}(a - 1),\, 1 - b^{r} > rb^{r - 1}(1 - b)\tag{9}$$ Similarly from $(8)$ we get $$a^{s} - 1 > sa^{s - 1}(a - 1),\, 1 - b^{s} < sb^{s - 1}(1 - b)\tag{10}$$ Combining $(7)$ and $(9)$ we get $$ra^{r - 1}(a - 1) > a^{r} - 1 > r(a - 1)\tag{11}$$ Writing $a = x/y$ we get $$rx^{r - 1}(x - y) > x^{r} - y^{r} > ry^{r - 1}(x - y)\tag{12}$$ for $x > y > 0$. Similarly from $(8)$ and $(10)$ we get $$sx^{s - 1}(x - y) < x^{s} - y^{s} < sy^{s - 1}(x - y)\tag{13}$$ for $x > y > 0$. From the above inequalities it is clear that the function $f(x) = x^{r}$ is continuous for $x > 0$. Taking reciprocals it is easy to see that the function $f(x)$ is continuous even if $r$ is negative rational number. Further if we divide by $(x - y) > 0$ and let $x \to y^{+}$ we get via Squeeze Theorem the fundamental result $$\lim_{x \to y^{+}}\frac{x^{r} - y^{r}}{x - y} = ry^{r - 1}$$ for all positive rational numbers $r$ and $y > 0$. Interchanging the roles of $x, y$ it is easy to see that the limit holds for $x \to y^{-}$. This proves the formula $(1)$ for positive rational values of $n$. This is the way Hardy proves the formula $$\frac{d}{dx}(x^{n}) = nx^{n - 1}$$ for rational $n$ in his classic text "A Course of Pure Mathematics".
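Neither proof needs it, but a quick numerical illustration of formula $(1)$ for a fractional exponent may be reassuring: with $n = 3/2$ and $a = 2$, the difference quotient below approaches $na^{n-1} = \tfrac{3}{2}\sqrt{2}$ as $x \to a$.

```python
a, n = 2.0, 1.5
expected = n * a ** (n - 1)            # (3/2) * sqrt(2) ~ 2.1213
for h in (1e-2, 1e-4, 1e-6, 1e-8):
    x = a + h
    quotient = (x**n - a**n) / (x - a)
    print(h, quotient, abs(quotient - expected))   # error shrinks with h
```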
Perturbed Renewal Equations with Non-Polynomial Perturbations 2010 (English)Licentiate thesis, comprehensive summary (Other academic) Abstract [en] This thesis deals with a model of nonlinearly perturbed continuous-time renewal equation with nonpolynomial perturbations. The characteristics, namely the defect and moments, of the distribution function generating the renewal equation are assumed to have expansions with respect to a non-polynomial asymptotic scale: $\{\varphi_{\nn} (\varepsilon) =\varepsilon^{\nn \cdot \w}, \nn \in \mathbf{N}_0^k\}$ as $\varepsilon \to 0$, where $\mathbf{N}_0$ is the set of non-negative integers, $\mathbf{N}_0^k \equiv \mathbf{N}_0 \times \cdots \times \mathbf{N}_0, 1\leq k <\infty$ with the product being taken $k$ times and $\w$ is a $k$ dimensional parameter vector that satisfies certain properties. For the one-dimensional case, i.e., $k=1$, this model reduces to the model of nonlinearly perturbed renewal equation with polynomial perturbations which is well studied in the literature. The goal of the present study is to obtain the exponential asymptotics for the solution to the perturbed renewal equation in the form of exponential asymptotic expansions and present possible applications. The thesis is based on three papers which study successively the model stated above. Paper A investigates the two-dimensional case, i.e. where $k=2$. The corresponding asymptotic exponential expansion for the solution to the perturbed renewal equation is given. The asymptotic results are applied to an example of the perturbed risk process, which leads to diffusion approximation type asymptotics for the ruin probability. Numerical experimental studies on this example of perturbed risk process are conducted in paper B, where Monte Carlo simulation are used to study the accuracy and properties of the asymptotic formulas. Paper C presents the asymptotic results for the more general case where the dimension $k$ satisfies $1\leq k <\infty$, which are applied to the asymptotic analysis of the ruin probability in an example of perturbed risk processes with this general type of non-polynomial perturbations. All the proofs of the theorems stated in paper C are collected in its supplement: paper D. Place, publisher, year, edition, pages Västerås: Mälardalen University , 2010. , p. 98 Series Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 116 Keywords [en] Renewal equation, perturbed renewal equation, non-polynomial perturbation, exponential asymptotic expansion, risk process, ruin probability National Category Probability Theory and Statistics Research subject Mathematics/Applied Mathematics IdentifiersURN: urn:nbn:se:mdh:diva-9354ISBN: 978-91-86135-58-4 (print)OAI: oai:DiVA.org:mdh-9354DiVA, id: diva2:302104 Presentation 2010-05-07, Kappa, Hus U, Högskoleplan 1, Mälardalen University, 13:15 (English) Opponent Belyaev, Yuri, Professor Supervisors Silvestrov, Dmitrii, ProfessorMalyarenko, Anatoliy, Docent 2010-03-042010-03-042015-06-29Bibliographically approved List of papers
09:00 Parallel Heavy flavour - William Alexander Horowitz (University of Cape Town (ZA)) (until 10:20) (COSMOS) 09:00 Data-driven analysis of the temperature and momentum dependence of the heavy quark transport coefficients - Yingru Xu (Duke University) (COSMOS) 09:20 Heavy flavor energy loss from AdS/CFT: A novel diffusion coefficient derivation and its predictions - Robert Hambrock (University of Cape Town) (COSMOS) 09:40 Correlation between heavy flavour production and multiplicity in pp collisions at high energy in the multi-pomeron exchange model - Vladimir Kovalenko (St Petersburg State University (RU)) (COSMOS) 10:00 Coupled dynamics of heavy and light flavor flow harmonics from EPOSHQ - Pol Gossiaux (Subatech) (COSMOS) 09:00 Parallel Hydrodynamics 1 -Prof. Marcus Bleicher (FIAS and ITP, Goethe University Frankfurt) (until 10:20) (BBG 165) 09:00 Study of Lambda polarization at RHIC BES energies - Iurii Karpenko (INFN Firenze) (BBG 165) 09:20 Anisotropic flow of identified hadrons in $\sqrt{s_{\rm NN}}$~=~5.02~TeV Pb–-Pb collisions at ALICE - Redmer Alexander Bertens (Nikhef National institute for subatomic physics (NL)) (BBG 165) 09:40 Skewness of Event-by-event Elliptic Flow Fluctuations in PbPb collisions at $\sqrt{s_{NN}} = 5.02$~TeV with CMS - Elizaveta Nazarova (M.V. Lomonosov Moscow State University (RU)) (BBG 165) 10:00 Collective flow of open heavy flavour in heavy ion collisions at the LHC energies with CMS - Yen-Jie Lee (Massachusetts Inst. of Technology (US)) (BBG 165) 09:00 Parallel Small systems - Marek Gazdzicki (Johann-Wolfgang-Goethe Univ. (DE)) (until 10:20) (BBG 161) 09:00 Intrinsic momentum anisotropy for relativistic particles from quantum mechanic - Denes Molnar (Purdue University) (BBG 161) 09:20 First measurement of $\Sigma^{0}$-production in proton induced reactions on a nuclear target at E$_{kin}$ = 3.5 GeV* - Tobias Kunz (BBG 161) 09:40 Quarkonium production in pp and p-A collisions with ALICE at the LHC - Astrid Morreale (Centre National de la Recherche Scientifique (FR)) (BBG 161) 10:00 $J/\psi$ production as a function of event multiplicity in pp and p-Pb collisions with ALICE - Ionut Cristian Arsene (University of Oslo (NO)) (BBG 161) 09:00 Parallel Strangeness - Domenico Elia (INFN Bari) (until 10:20) (BBG 169) 09:00 Processes of hypernuclei formation in relativistic ion collisions - Dr Alexander Botvina (FIAS and ITP, Goethe University Frankfurt) (BBG 169) 09:20 Kaon and Phi Production in Pion-Nucleus Reactions at $1.7$~GeV/c* - Joana Wirth (Technische Universität München) (BBG 169) 09:40 Particle production and azimuthal anisotropy of strange hadrons in U+U collisions at STAR - Mr Vipul Bairathi (National Institute of Science Education and Research) (BBG 169) 10:00 Phi Meson Measurements at RHIC with the PHENIX Detector - Murad Sarsour (Georgia State University) (BBG 169) 10:20 --- Coffee break --- 10:50 Parallel BES - Alexandre Suaide (IFUSP) (until 12:30) (BBG 165) 10:50 Kaon femtoscopy in Au+Au collisions from the Beam Energy Scan at the STAR experiment - Jindřich Lidrych (Czech Technical University in Prague) (BBG 165) 11:10 Beam energy and system dependence of rapidity-even dipolar flow - Mr Niseem Abdelrahman"Magdy" (Stony Brook University) (BBG 165) 11:30 Dynamical critical fluctuations near the QCD critical point - Dr Lijia Jiang (Frankfurt Institute for Advanced Studies) (BBG 165) 11:50 Fluctuating fluid dynamics for the QGP in the LHC and BES era - Marcus Bluhm (University of Wroclaw) (BBG 165) 12:10 Identify QCD transition in heavy-ion 
collisions with Deep Learning - Dr Kai Zhou (FIAS, Goethe-University Frankfurt am Main) (BBG 165) 10:50 Parallel Heavy flavour - Steffen A. Bass (Duke University) (until 12:30) (COSMOS) 10:50 Overview of Heavy-Flavored Jets at CMS - Kurt Eduard Jung (University of Illinois at Chicago (US)) (COSMOS) 11:10 Measurements of charm hadron production and anisotropic flow in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV with the STAR experiment at RHIC - Dr Sooraj Radhakrishnan (Lawrence Berkeley National Laboratory) (COSMOS) 11:30 Measurement of D-meson nuclear modification factor and elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC - Fabrizio Grosa (Politecnico di Torino (IT)) (COSMOS) 11:50 Measurement of heavy-flavour production, correlations and jets with ALICE in Pb-Pb collisions with ALICE - Shingo Sakai (Istituto Nazionale Fisica Nucleare (IT)) Shingo Sakai (University of Tsukuba (JP)) (COSMOS) 12:10 Charmonium production in pPb and PbPb collisions at 5.02 TeV with CMS - Javier Martin Blanco (Centre National de la Recherche Scientifique (FR)) (COSMOS) 10:50 Parallel Small systems - Thomas Peitzmann (Nikhef National institute for subatomic physics (NL)) (until 12:30) (BBG 161) 10:50 Insight into particle production mechanisms via angular correlations of identified particles measured with ALICE in pp collisions at $\sqrt{s} = 7$ TeV - Yiota Foka (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE)) Malgorzata Anna Janik (Warsaw University of Technology (PL)) (BBG 161) 11:10 Energy and multiplicity dependence of strange and non-strange particle production in pp collisions at the LHC with ALICE - Fiorella Fionda (University of Bergen (NO)) (BBG 161) 11:30 Heavy Quark Flow as Better Probes of QGP Properties - Zi-Wei Lin (Central China Normal University & East Carolina University) (BBG 161) 11:50 Subthreshold \Xi production in p+A collisions - Dr Miklós Zétényi (Wigner RCP) (BBG 161) 12:10 Particle production and collectivity in high-multiplicity pp and pPb collisions at the LHC with CMS - George Stephans (Massachusetts Inst. of Technology (US)) (BBG 161) 10:50 Parallel Strangeness - Dieter Roehrich (University of Bergen (NO)) (until 12:30) (BBG 169) 10:50 J/psi production in proton-lead collisions at 8 TeV with the LHCb detector - Oleksandr Okhrimenko (National Academy of Sciences of Ukraine (UA)) (BBG 169) 11:10 Strange and heavy hadrons production from coalescence plus fragmentation in AA collisions at RHIC and LHC - Salvatore Plumari (University of Catania (Italy)) Francesco Scardina (INFN Catania) Vincenzo Greco (University of Catania) (BBG 169) 11:30 $\phi$ meson production at forward rapidity in pp and Pb-Pb collisions with ALICE at the LHC - Alessandro De Falco (Universita e INFN, Cagliari (IT)) (BBG 169) 11:50 Strangeness production in Pb-Pb collisions at LHC energies with ALICE - Michal Sefcik (Pavol Jozef Safarik University (SK)) (BBG 169) 12:10 Strange and Multi-strange Particle Production in pPb and PbPb with CMS - Hong Ni (Vanderbilt University (US)) (BBG 169)
Usually doing calculations of this kind is not hard. Roughly, you start from some idea of where the pH is going to end up. For example, if you are only adding these phosphates, count the number of phosphate ions and the number of sodium ions. Divide the number of sodium ions by the number of phosphate ions. You (hopefully) get a number $x$ between $0$ and $3$, say $x=1.5$. Almost all phosphates will then be either $\ce{NaH2PO4}$ or $\ce{Na2HPO4}$. Pretend that these are the only ions and treat the problem like an ordinary two-component buffer. Then you are done. This is an approximation, but it will work rather well as long as the $\mathrm{p}K_\mathrm{a}$ values of the various acids are different enough. If this is not good enough, you just treat this as a bunch of acids, with a bunch of $\mathrm{p}K_\mathrm{a}$ values, and use the acid-base equilibrium equation over and over, that is $$\frac{[\ce{H+}][\ce{B}_i]}{[\ce{A}_i]} = K_{\mathrm{a},i}$$ where $[\ce{A}_i]$ is the concentration of an acid "$i$" and $[\ce{B}_i]$ that of the conjugate base. Given the pH, this gives you the ratio of the concentration of each base to each acid. But, really, for phosphate buffers, you are very well off if you just use one acid-base pair. For acids like terephthalic acid or isophthalic acid, which have two $\mathrm{p}K_\mathrm{a}$ values that are rather close to each other, that is they differ by less than $1.5$ or so, finding and solving such polynomial equations possibly makes sense. Otherwise, other approximations we routinely make are more important. For an explicit example, suppose that we consider a compound $\ce{BH2}$ that can give up two $\ce{H+}$ ions, making both $\ce{BH-}$ and $\ce{B^2-}$. Suppose that we have a total molar concentration of $\ce{B}$, $$B_t=[\ce{BH2}]+[\ce{BH-}]+[\ce{B^2-}],$$ and that we have also added strong bases or strong acids to change the $[\ce{H+}]$, so that the total molar concentration of acidic hydrogen is $$H_t=[\ce{H^+}]+[\ce{BH^-}]+2[\ce{BH_2}].$$ We then also have the relationships \begin{align}[\ce{H+}][\ce{BH-}] &= K_{\mathrm{a},\ce{BH_2}}[\ce{BH_2}]\\[\ce{H+}][\ce{B^2-}] &= K_{\mathrm{a},\ce{BH-}}[\ce{BH-}]\\\end{align} If we now call $H=[\ce{H+}]$ and $B=[\ce{B^2-}]$ we find the equations \begin{align}H_t &= H + B\,\frac{H}{K_{\mathrm{a},\ce{BH-}}}\left(1 + \frac{2H}{K_{\mathrm{a},\ce{BH2}}}\right)\tag{1},\\B_t &= B\left(1 + \frac{H}{K_{\mathrm{a},\ce{BH-}}}\left(1 + \frac{H}{K_{\mathrm{a},\ce{BH2}}}\right)\right)\tag{2}.\\\end{align} We need to solve these equations for $B$ and $H$. Equation $(2)$ is easy to solve for $B$, and we can then substitute into equation $(1)$ to find $$H_t = H + B_t\, \frac{\dfrac{H}{K_{\mathrm{a},\ce{BH-}}}\left(1 + \dfrac{2H}{K_{\mathrm{a},\ce{BH_2}}}\right)}{1 + \dfrac{H}{K_{\mathrm{a},\ce{BH-}}}\left(1 + \dfrac{H}{K_{\mathrm{a},\ce{BH_2}}}\right)}\tag{3}.$$ We need to solve this formula for $H$, and (of course) $\mathrm{pH}=-\log_{10}(H)$. Some algebra from this point gets us to a cubic equation for $H$. Of course this has three possible solutions. But only one is sensible, provided we insist on "physical sense", e.g. all the equilibrium constants and concentrations are positive real numbers. Somehow, this must be easy to see from the rules about roots of polynomials. I am not currently willing to do that proof. Physically, however, the total amount of hydrogen bound to the base will increase with increasing (positive) $H$. So the right hand side of $(3)$ increases with increasing $H$ - from $H$ for $H=0$ to $2B_t + H$ as $H\to\infty$. As the right hand side of $(3)$ increases with increasing $H$, there is exactly one positive solution.
However, this right hand side is also not a very "nice" function - the rational fraction is approximately constant except when we are near a "buffer" condition, where it changes quickly. So, probably an effective way to solve the equation would be to use a computer and "regula falsi" or something like that. A little thought will allow sensible guesses for any set of $H_t$ and $B_t$. Then regula falsi will converge quickly. Or, from a good enough initial guess, even Newton-Raphson. Or, possibly it would be better in many cases to figure out which acid/base pair is closest to being in the "buffer" region. And then, suppose that you have $H$ in that buffer region. Solve for $H$ with only that acid-base pair. Then, figure out what you did wrong - there can be $\ce{H+}$'s that are not bound to the base in the buffer pair you have chosen, or $\ce{H+}$'s that are bound to stronger acids. Assume that $H$ calculated from a single buffer pair is correct. Calculate how many $\ce{H+}$'s are unexpectedly bound or unbound from your assumed buffer pair. Use this to calculate a new $H_t$. If $H_t$ has not changed much, which will be the case unless there is another acid-base pair with very nearly the same $\mathrm{p}K_\mathrm{a}$ - differing I suppose by less than an integer from that of the pair you are considering - then use this value of $H_t$ to solve again for $H$, and recurse. For the stated problem, all the $\mathrm{p}K_\mathrm{a}$ values are very different, so the $K_\mathrm{a}$ values differ by orders of magnitude. And, really, all you need to do is figure out which buffer region you are near. In principle, you can then find tiny errors using the recursion. But this is silly in most circumstances. This will get a clear, continuous curve which always has the correct slope. But it will vastly overstate what you know about the problem. The changes will be smaller than (I expect) lots of other effects: deviations from ideal solutions, errors in $K_\mathrm{a}$ values, etc.
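Here is a small Python sketch of the numerical route described above, using the corrected equation $(3)$ for a diprotic acid and SciPy's Brent root finder in place of regula falsi. The dissociation constants and concentrations are purely illustrative (roughly carbonic-acid-like), and water autoprotolysis is neglected, as in the answer.

```python
from scipy.optimize import brentq
import math

Ka_BH2 = 10**-6.35     # BH2 <-> BH- + H+   (first dissociation, illustrative value)
Ka_BH  = 10**-10.33    # BH- <-> B2- + H+   (second dissociation, illustrative value)

def total_H(H, B_t):
    """Right-hand side of equation (3): total acidic hydrogen as a function of [H+]."""
    bound = (H / Ka_BH) * (1 + 2 * H / Ka_BH2)        # protons bound to B, per unit B
    denom = 1 + (H / Ka_BH) * (1 + H / Ka_BH2)        # B_t / B
    return H + B_t * bound / denom

B_t, H_t = 0.10, 0.15   # e.g. 0.05 M BH2 + 0.05 M NaHB, so 0.15 mol/L of acidic H
H = brentq(lambda h: total_H(h, B_t) - H_t, 1e-16, H_t)   # bracket: RHS is increasing in H
print("pH =", -math.log10(H))    # lands near pKa1 = 6.35 for this 1:1 buffer, as expected
```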
In a nutshell
The name Kleene closure is clearly intended to mean closure under some string operation. However, careful analysis (thanks to a critical comment by the OP mallardz) shows that the Kleene star cannot be closure under concatenation, which rather corresponds to the Kleene plus operator. The Kleene star operator actually corresponds to a closure under the power operation derived from concatenation. The name Kleene star comes from the syntactic representation of the operation with a star *, while closure is what it does. This is further explained below. Recall that closure in general, and Kleene star in particular, is an operation on sets, here on sets of strings, i.e. on languages. This will be used in the explanation.
Closure of a subset under an operation always defined
A set $C$ is closed under some $n$-ary operation $f$ iff $f$ is always defined for any $n$-tuple of arguments in $C$ and $C=\{f(c_1,\ldots,c_n)\mid \forall c_1,\ldots,c_n \in C\}$. By extending $f$ to sets of values in the usual way, i.e. $$f(S_1,\ldots,S_n)=\{f(s_1,\ldots,s_n)\mid \forall s_i\in S_i,\ 1\leq i\leq n\}$$ we can rewrite the condition as a set equation: $$C=f(C,\ldots,C)$$ For a domain (or set) $D$ with an operation $f$ that is always defined on $D$, and a set $S\subset D$, the closure of $S$ under $f$ is the smallest set $S_f$ containing $S$ that satisfies the equation: $S_f=\{f(s_1,\ldots,s_n)\mid \forall s_1,\ldots,s_n \in S_f\}$. More tersely, with a set equation, the closure of $S$ under $f$ may be defined by: $$S_f \text{ is the smallest set such that } S\subset S_f \text{ and } S_f=f(S_f,\ldots,S_f)$$ This is an example of a least fixed-point definition, often used in semantics, and also used in formal languages. A context-free grammar can be seen as a system of language equations (i.e. string set equations), where the non-terminals stand for language variables. The least fixed-point solution associates a language to each variable, and the language thus associated to the initial symbol is the one defined by the CF grammar.
Extending the concept
The closure as defined above is only intended to extend a subset $S$ into a minimal set $S_f$ such that the operation $f$ is always defined. As remarked by the OP mallardz, this is not a sufficient explanation, since it will not include the empty word $\epsilon$ in $S_f$ when it is not already in $S$. Indeed this closure corresponds to the definition of the Kleene plus + and not to the Kleene star *. Actually, the idea of closure can be extended, or considered in different ways.
Extension to other algebraic properties
One way to extend it (though it is no longer called closure) considers more generally an extension to a set $S_f$ having specific algebraic properties with respect to the operation $f$. If you define $S_f$ as the smallest set containing $S$ that is a Monoid for the binary function $f$, then you require both closure and a neutral element, which is the empty word $\epsilon$.
Extension through a derived operation
There is a second way which is more properly a closure issue. When you define the closure of $S\subset D$, you can consider it with respect to some of the arguments, while you allow values from the whole set $D$ for the other arguments.
Considering (for simplicity) a binary function $f$ over $D$, you can define $S_{f,1}$ as the smallest set containing $S$ that satisfies the equation: $$S_{f,1}=\{f(s_1,s_2)\mid \forall s_1\in S_{f,1}\wedge\forall s_2\in D\}$$ or with set equations: $$S_{f,1} \text{ is the smallest set such that } S\subset S_{f,1} \text{ and } S_{f,1}=f(S_{f,1},D)$$ This also makes sense when the arguments do not belong to the same set. Then you may have closure with respect to some arguments in one set, while considering all possible values for the other arguments (many variations are possible).
Given a Monoid $(M,f,\epsilon)$ $-$ for example the monoid of strings with concatenation $-$ where $f$ is an associative binary operation on the elements of the set $M$ with an identity element $\epsilon$, you can define the powers of an element $u\in M$ as: $$\forall u\in M.\; u^0=\epsilon\; \text{ and }\; \forall n\in\mathbb N\; u^n=f(u,u^{n-1})$$ This exponentiation $u^n$ is an operation that takes as arguments an element of $M$ and a non-negative integer in $\mathbb N_0$. However, the natural extension of this operation to subsets of $M$ is not the usual one, which would be, for a given value of $n$, $U^n=\{u^n\mid u\in U\}$. It should rather take into account the original definition of $u^n$ from the operation $f$, which would give: $$\left\{ \begin{array}{l} U^0=\{u^0\mid u\in U\}=\{\epsilon\}\\ \forall n\in\mathbb N,\; U^n=f(U,U^{n-1}) \end{array} \right.$$ so as to be consistent with the natural extension of the operation $f$ to subsets of $M$. Now we can define the closure $U_{\wedge,1}$ of $U\subset M$ for the first argument of the power operation, as indicated above with the set notation, as: $$U_{\wedge,1} \text{ is the smallest set such that } U\subset U_{\wedge,1} \text{ and } U_{\wedge,1}=\{u^n\mid u\in U_{\wedge,1},\ n\in\mathbb N_0\}$$ And this does give us the Kleene star operation when the construction is applied to the concatenation operation of the free Monoid of strings. To be completely honest, I am not sure I have not been cheating. But a definition is only what you make it, and that was the only way I found to actually turn the Kleene star into a closure. I may be trying too hard. Comments are welcome.
Closing a set under an operation that is not always defined
This is a slightly different view and use of the concept of closure. This view is not really answering the question, but it seems good to keep it in mind to avoid some possible confusions. The above implies that the function $f$ is always defined in the reference set $D$. That may not always be the case. Then closure can also be a mathematical technique to extend a set so that some operation will always be defined. The way it works in practice is as follows:
1. start with the set $D$ where $f$ is not always defined;
2. build another set $D'$ constructed from elements of $D$, with an operation $f'$ that is always defined, such that you can ...
3. show that there is an isomorphism between $D$ and a subset of $D'$ that is such that $f$ is the image of $f'$ restricted to that subset.
Then the set $D'$ with the operation $f'$ is a closed extension of $D$ with $f$. That is how integers are built from natural numbers, considering the set of pairs of natural numbers quotiented by an equivalence relation (two pairs are equivalent iff the two elements are in the same order and have the same difference). This is also how rationals can be built from the integers. And this is how classical reals can be built from the rationals, though the construction is more complex.
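To make the distinction concrete, here is a small Python sketch (my own toy, truncated to a length bound since the true closures are infinite): the fixed-point iteration computes the closure of $S$ under concatenation (the Kleene plus), while adding $u^0 = \epsilon$ on top of it gives the closure under the derived power operation (the Kleene star).

```python
def kleene_plus(S, maxlen):
    """Closure of S under concatenation (Kleene plus), restricted to words of length <= maxlen."""
    closure = set(S)
    changed = True
    while changed:                      # least fixed point by iteration
        changed = False
        for u in list(closure):
            for v in list(closure):
                w = u + v
                if len(w) <= maxlen and w not in closure:
                    closure.add(w)
                    changed = True
    return closure

def kleene_star(S, maxlen):
    """Closure under the derived power operation: also contains u^0 = '' (the empty word)."""
    return {""} | kleene_plus(S, maxlen)

S = {"ab", "c"}
print(sorted(kleene_plus(S, 4)))        # no empty word: 'ab', 'abab', 'abc', 'c', 'cab', ...
print(sorted(kleene_star(S, 4)))        # the same set plus ''
```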
Arithmetic Sequences
Definition: Arithmetic sequences are patterns of numbers that increase (or decrease) by a set amount each time you advance to a new term. You can determine the next term by adding the common difference to the final term. Let \(a\) be the initial term and \(d\) be the difference; then the \( n^{th}\) term of the arithmetic sequence can be expressed as \(t_n = a + (n - 1)d\).
Example \(\PageIndex{1}\):
\(n\) | \(t_n\) | \(t_n - t_{n - 1}\)
5 | 11 | 2
6 | 13 | 2
7 | 15 | 2
8 | 17 | 2
As you can see, this sequence's terms increase by 2 each time.
Example \(\PageIndex{2}\): Given the following sequence, can we determine \(a\) and \(d\) and give the sequence's general form? 1, 4, 7, 10, 13, 16, 19... So: \(a = 1\), because that is the first term in the sequence. \(d = 3\), because the terms increase by 3 each time. So the general form for the sequence is: \(t_n = 1 + (n - 1)3 = 3n - 2\).
Is a sequence still an arithmetic sequence if the difference changes with each iteration, even if it is still added? Why or why not? What truly defines an arithmetic sequence?
Thinking Out Loud: Finite Sum of Arithmetic Sequences
There are two equivalent formulas for determining the finite sum of an arithmetic sequence. Here, we shall derive both formulas and show how they are equal.
Example \(\PageIndex{1}\): Consider \(S_n = a \,\,+ (a+d) \,\,+ \cdots +(a+(n-2)d)+(a+(n-1)d)\). Now, writing the same sum in reverse order, \(S_n = (a+(n-1)d)+(a+(n-2)d)+ \cdots+ (a+d)\,+a\). Adding the two expressions term by term, \(2S_n= n(a+ a+(n-1)d) \,= n(2a+(n-1)d)\). Hence, \(S_n=\displaystyle \frac {n}{2}(t_1+ t_n) \,= \displaystyle \frac {n}{2}(2a+(n-1)d)\).
Let's illustrate these formulas by using the sequence \(t_n = 5 + (n - 1)2= 2n+3\).
Example \(\PageIndex{3}\): The first formula we can use looks like this: \(S_n = n \left(\displaystyle \frac{a_1 + a_n}{2} \right)\) As we can see, this formula takes the average of the first and last terms, and multiplies it by the number of terms in the series. So, if we use our series \(t_n = 5 + (n - 1)2\) and we want the sum of the first 15 terms, our calculation will look like this: \(S_{15} = 15 \left( \frac{5 + 33}{2} \right) = 15 \left( \frac{38}{2} \right) = 15(19) = 285\)
Example \(\PageIndex{4}\): The second formula we can use looks like: \(S_n = \displaystyle \frac{n}{2} \left(2a + (n - 1)d \right)\) As we can see, this method doesn't need us to know the value of the \(n^{th}\) term, just which term it is. Using our series \(t_n = 5 + (n - 1)2\), our calculations look like this when we are looking for the sum of the first 15 terms: \(S_{15} = \displaystyle \frac{15}{2} \left(2(5) + ((15) - 1)(2) \right) = \displaystyle \frac{15}{2} \left(10 + (14)(2) \right) = \displaystyle \frac{15}{2} \left( 38 \right) = 285\)
So these two methods look to be equivalent so far. Let's show that this is true in the general case:
Example \(\PageIndex{5}\): Consider \(S = \displaystyle \frac{n}{2} \left(2a + (n - 1)d \right)\), where \(n\) is the number of the term that is the endpoint, \(a\) is the series' starting value, and \(d\) is the difference between any two consecutive terms.
Consider \(S = n \left(\displaystyle \frac{a_1 + a_n}{2} \right)\) Since \(a_n = t_n = a + (n - 1)d\), And \(a_1 = a\), Then \(S = n \left(\displaystyle \frac{a + [a + (n - 1)d]}{2} \right)\) \(= n \left(\displaystyle \frac{2a + (n - 1)d}{2}\right)\) \(= \displaystyle \frac{n}{2} \left(2a + (n - 1)d\right)\) Let's explore summation notation which will be useful to represent finite sums: Sigma (Summation) Notation Finite sum requires adding up long strings of numbers. To make it easier to write down these lengthy sums, we look at some new notation here, called sigma notation (also known as summation notation). The Greek capital letter \(Σ\), sigma, is used to express long sums of values in a compact form. For example, if we want to add all the integers from 1 to 20 without sigma notation, we have to write \[1+2+3+4+5+6+7+8+9+10+11+12+13+14+15+16+17+18+19+20.\] We could probably skip writing a couple of terms and write \[1+2+3+4+⋯+19+20,\] which is better, but still cumbersome. With sigma notation, we write this sum as \[\sum_{i=1}^{20}i\] which is much more compact. Typically, sigma notation is presented in the form \[\sum_{i=1}^{n}a_i\] where \(a_i\) describes the terms to be added, and i is called the \(index\). Each term is evaluated, then we sum all the values, beginning with the value when \(i=1\) and ending with the value when \(i=n.\) For example, an expression like \(\sum_{i=2}^{7}s_i\) is interpreted as \(s_2+s_3+s_4+s_5+s_6+s_7\). Note that the index is used only to keep track of the terms to be added; it does not factor into the calculation of the sum itself. The index is therefore called a dummy variable. We can use any letter we like for the index. Typically, mathematicians use i, j, k, m, and n for indices. Let’s try a couple of examples of using sigma notation. Example \(\PageIndex{1}\): Using Sigma Notation Write in sigma notation and evaluate the sum of terms \(3^i\) for \(i=1,2,3,4,5.\) Write the sum in sigma notation: \[1+\dfrac{1}{4}+\dfrac{1}{9}+\dfrac{1}{16}+\dfrac{1}{25}.\] Solution Write \[\sum_{i=1}^{5}3^i=3+3^2+3^3+3^4+3^5=363.\] The denominator of each term is a perfect square. Using sigma notation, this sum can be written as \(\sum_{i=1}^5\dfrac{1}{i^2}\). Exercise \(\PageIndex{1}\) Write in sigma notation and evaluate the sum of terms \(2^i\) for \(i=3,4,5,6.\) Hint Use the solving steps in Exampleas a guide. Answer \(\sum_{i=3}^{6}2^i=2^3+2^4+2^5+2^6=120\) The properties associated with the summation process are given in the following rule. Rule: Properties of Sigma Notation Let \(a_1,a_2,…,a_n\) and \(b_1,b_2,…,b_n\) represent two sequences of terms and let c be a constant. The following properties hold for all positive integers n and for integers m, with \(1≤m≤n.\) \(\sum_{i=1}^nc=nc\) \(\sum_{i=1}^n ca_i=c\sum_{i=1}^na_i\) \(\sum_{i=1}^n(a_i+b_i)=\sum_{i=1}^na_i+\sum_{i=1}^nb_i\) \(\sum_{i=1}^n(a_i−b_i)=\sum_{i=1}^na_i−\sum_{i=1}^nb_i\) \(\sum_{i=1}^na_i=\sum_{i=1}^ma_i+\sum_{i=m+1}^na_i\) Proof: We prove properties 2. and 3. here, and leave proof of the other properties to the Exercises. 2. We have \(\sum_{i=1}^nca_i=ca_1+ca_2+ca_3+⋯+ca_n=c(a_1+a_2+a_3+⋯+a_n)=c\sum_{i=1}^na_i\). 3. We have \[ \begin{align} \sum_{i=1}^{n}(a_i+b_i)&=(a_1+b_1)+(a_2+b_2)+(a_3+b_3)+⋯+(a_n+b_n) \\ &=(a_1+a_2+a_3+⋯+a_n)+(b_1+b_2+b_3+⋯+b_n) \\ & =\sum_{i=1}^na_i+\sum_{i=1}^nb_i. \end {align}\]□ A few more formulas for frequently found functions simplify the summation process further. 
These are shown in the next rule, for sums and powers of integers, and we will explore them further in later examples. Rule: Sums and Powers of Integers 1. The sum of the first \(n\) integers is given by \[\sum_{i=1}^ni=1+2+⋯+n=\dfrac{n(n+1)}{2}.\] 2. The sum of consecutive integers squared is given by \[\sum_{i=1}^ni^2=1^2+2^2+⋯+n^2=\dfrac{n(n+1)(2n+1)}{6}.\] 3. The sum of consecutive integers cubed is given by \[\sum_{i=1}^ni^3=1^3+2^3+⋯+n^3=\dfrac{n^2(n+1)^2}{4}.\] Proof We leave proof (by induction) of the rules to the Exercises. Geometric Sequences Definition Geometric sequences are patterns of numbers that increase (or decrease) by a set ratio with each iteration. You can determine the ratio by dividing a term by the preceding one. Let \(a\) be the initial term and \(r\) be the ratio, then the \(n^{th}\) term of a geometric sequence can be expressed as \(t_n = ar^{n - 1}\). Example \(\PageIndex{6}\):
\(n\) | \(t_n\) | \(t_n / t_{n - 1}\)
1 | 3 | —
2 | 6 | 2
3 | 12 | 2
4 | 24 | 2
So we can see that \(r = 2\), since the ratio between any two consecutive terms is \(2\). Example \(\PageIndex{7}\): Given the sequence \( -3, 6, -12, 24, -48...\), can we: Determine \(a\) and \(r\) Express the general form of the sequence So: \(a = -3\), because that is the sequence's initial term. \(r = -2\), because if we divide any term by the preceding one, that is the result. So the general form for the sequence is: \(t_n = -3(-2)^{n - 1}\). Finite Sum of Geometric Sequences Let's use the Gauss method for finding a general case for the sum of a geometric sequence: Example \(\PageIndex{8}\): Let \(r \ne 1.\) Consider \(S_n = a + ar + ar^2 + ar^3 + ... + ar^{n - 1}\) Now, \((1)S_n = \, a \, + ar + ar^2 + ar^3 + ... + ar^{n - 1}\) and \((r)S_n = ar + ar^2 + ar^3 + ... + ar^{n - 1} + ar^n\). Subtracting the second from the first, \(\left(1 - r\right)S_n = a - ar^n\) \(S_n = \displaystyle \frac{a(1 - r^n)}{1 - r}\) That is, $$ \sum_{k=0}^{(n-1)} a r^k = \displaystyle \frac{a(1 - r^n)}{1 - r}, r \ne 1.$$ Sum of Integers Observe: \(1 = 1\) \(1 + 2 = 3\) \(1 + 2 + 3 = 6\) \(1 + 2 + 3 + 4 = 10\) \(1 + 2 + 3 + 4 + 5 = 15\) \(1 + 2 + 3 + ... + n = ?\) This is the finite sum of the first \(n\) positive integers. Below we have shown two ways of finding this sum: Example \(\PageIndex{9}\): Let's figure out a general case for a sum of integers beginning with \(1\) and ending with \(n\) using Gauss' method: Let \(S_n = 1 \, + \, \, \, \, \, \, \, 2 \, \, \, \, \, \, \, + \, \, \, \, \, \, \, \, 3 \, \, \, \, \, \, \, \, + ... + n\) \( + S_n = n + (n - 1) + (n - 2) + ... + 1\) So \(2S_n = n(n + 1)\). This is because, when you add \(S_n\) to itself (with the order reversed), you get \(n + 1\) repeated \(n\) times. Then, \(S_n = \displaystyle\frac{n(n + 1)}{2}\) That is \( \sum_{k=1}^{n} k = \displaystyle\frac{n(n + 1)}{2}\). Example \(\PageIndex{10}\): Here's that same concept being proven inductively: Prove that \(1 + 2 + ... + n = \displaystyle \frac{n(n + 1)}{2}, \, \forall n \in \mathbb{Z}^{+}\) Base step: Choose \(n = 1\). Then L.H.S. \(= 1\) and R.H.S. \( = \frac{(1)(1 + 1)}{2}=1\). Induction Assumption: Assume that \( 1 + 2 + ... +k= \displaystyle\frac{k(k + 1)}{2}\), for some \(k \in \mathbb{Z}^{+}\). We shall show that \(1 + 2 + ... + k + (k + 1) = \displaystyle\frac{(k + 1)[(k + 1) + 1]}{2} = \frac{(k + 1)(k + 2)}{2}\) Consider \(1 + 2 + ... + k + (k + 1) \) \(= \displaystyle \frac{k(k + 1)}{2} + (k + 1)\) \(= (k + 1) \left( \displaystyle\frac{k}{2} + \displaystyle\frac{2}{2}\right)\) \(= (k + 1) \left( \displaystyle\frac{k + 2}{2}\right)\) \(= \displaystyle \frac{(k + 1)(k + 2)}{2}\). 
Thus, by induction we have \(1 + 2 + ... + n = \displaystyle\frac{n(n + 1)}{2}, \, \forall n \in \mathbb{Z}^{+}\). Can we determine \(S\) for a sequence of consecutive integers that does not start at \(1\)? Thinking Out Loud: Sum of Positive Odd Integers The edges of an equilateral triangle are divided into \(n\) equal segments by inserting \(n− 1\) points. Lines are drawn through each of these points parallel to each of the three edges, forming a set of small triangles. How many of the small triangles are there? Justify your answer. Thinking Out Loud: Observe: \(1 = 1\) \(1 + 3 = 4\) \(1 + 3 + 5 = 9\) \(1 + 3 + 5 + 7 = 16\) \(1 + 3 + 5 + 7 + 9 = 25\) \(1 + 3 + 5 + ... + (2n-1) = ?\) Example \(\PageIndex{11}\): Let's look at a table of the sums of the first \(n\) positive odd integers:
\(n\) | \(S_n\)
1 | 1
2 | 4
3 | 9
4 | 16
5 | 25
As we can see, the sum of the first \(n\) positive odd integers \(= n^2\) Example \(\PageIndex{12}\): Let's try a visual proof for this one as well. Remember, square numbers can be arranged into perfectly square arrays. As we can see, when we arrange odd integers into an array (each new term is represented by a new color), we always have an array with \(n^2\) points. Exercise \(\PageIndex{1}\) By using induction, prove that \(1 + 3 + 5 + ... + (2n-1) = n^2,\) for all \(n \geq1\). Sum of Positive Even Integers Observe: \(2 = 2\) \(2 + 4 = 6\) \(2 + 4 + 6 = 12\) \(2 + 4 + 6 + 8 = 20\) \(2 + 4 + 6 + 8 + 10 = 30\) \(2 + 4 + 6 + ... +2n = ?, n \in \bf{N}\) Example \(\PageIndex{13}\): Let's try deriving this using what we already know: If we write the sum of positive even integers as \(2 + 4 + 6 + 8 + ... + 2n\), We see we can factor out the 2: \(2(1 + 2 + 3 + ... + n)\). This is great news! We already know the sum of a finite set of positive integers. It is \(\displaystyle \frac{n(n + 1)}{2}\) So then the sum of a series of positive even integers is: \(2 \left( \displaystyle \frac{n(n + 1)}{2}\right) \) Or \(n (n + 1)\). Example \(\PageIndex{14}\): Let's try a visual proof: Here is the sum of the first 4 positive even integers, or \(n = 4\): Now, if we move some of the points to make a rectangular array... ...we can see that, for \(n\) terms, our array is described by: \(n(n + 1)\)
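As a quick sanity check on the formulas above, here is a small sketch in plain Python (the particular sequences and the range n = 10 are arbitrary choices, not from the text) comparing each closed form against brute-force addition: the two arithmetic-sum formulas applied to \(t_n = 5 + (n - 1)2\), the three power-sum rules, the odd and even sums, and the geometric-sum formula.

def t_arith(a, d, n):            # n-th term of an arithmetic sequence
    return a + (n - 1) * d

a, d, N = 5, 2, 15               # the example sequence t_n = 5 + (n-1)*2
brute = sum(t_arith(a, d, n) for n in range(1, N + 1))
formula1 = N * (t_arith(a, d, 1) + t_arith(a, d, N)) / 2   # S_n = n(a_1 + a_n)/2
formula2 = N / 2 * (2 * a + (N - 1) * d)                   # S_n = n/2 (2a + (n-1)d)
assert brute == formula1 == formula2 == 285

n = 10                           # arbitrary check point for the power sums
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(i**2 for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
assert sum(2 * i - 1 for i in range(1, n + 1)) == n**2             # odd integers
assert sum(2 * i for i in range(1, n + 1)) == n * (n + 1)          # even integers

a, r, n = -3, -2, 8              # geometric example t_n = -3(-2)^(n-1)
brute_geo = sum(a * r**k for k in range(n))
assert brute_geo == a * (1 - r**n) / (1 - r)

print("all closed-form sums match brute force")

Integer arithmetic (// in the power-sum checks) keeps those comparisons exact, so the agreement is not an artifact of floating-point rounding.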
By peaks do you mean just maxima?I'm sorry that isn't make sense to me.Why does that work? Also I tried to do what you said and got for all of them 100 Hz (2 peaks for the maximum wave) which doesn't make sense... 1. Homework StatementSo let's say that I have a signal of fundamental frequency 50Hz. I then have a band-pass filter that passes the band between 800 and 1000 Hz of my signal. I don't know the expression of the signals I just know the graphics:[![enter image description here][1]][1]... Oh you're right it is from zero to N! I just realized that now. Now I realize what you meant on incorporating the 2 factor when defining ##A_k##. Then I can simply write the series as:$$x(t) = a_0 + \sum_{k=1}^{k=N} A_k \cos ( k\omega_0 t + \phi_k)$$$$x(t) = \sum_{k=0}^{k=N} A_k \cos... Hi! Thanks for your reply!Ok rewriting the series like that makes things easier.However I didn't quite understand your last suggestion, can you explain it again please?What I thought to do was, using the rewritten series, I would get to$$x(t) = a_0 + \sum_{k=1}^{k=N} 2A_k \cos (j... 1. Homework StatementConsider the fourier series of a signal given by$$x(t)=\sum_{k=-\infty}^{\infty} a_ke^{jk\omega_0t}$$Let's consider an approaches to this series given by the truncated series.$$x_N(t)=\sum_{k=-N}^{N} a_ke^{jk\omega_0t}$$a- Show that if $x(t)$ is real then the... 1. Homework StatementCompute the work of the vector field $$H: \mathbb{R^2} \setminus{(0,0}) \to \mathbb{R}$$$$H(x,y)=\bigg(y^2-\frac{y}{x^2+y^2},1+2xy+\frac{x}{x^2+y^2}\bigg)$$in the path $$g(t) = (1-t^2, t^2+t-1)$ with $t\in[-1,1]$$2. Homework Equations3. The Attempt at a Solution... Is what you mean equivalent to the derivative of a composition of functions?I think I got it. What I did was to differentiate both the equations given obtaining:$$\frac{df}{dx}(t,t) + \frac{df}{dy}(t,t) = 3t^2+1$$$$\frac{df}{dx}(t,-2t) -2 \frac{df}{dy}(t,-2t) = 2$$Then making t=0... 1. Homework Statement$$f:\mathbb{R^2}\to\mathbb{R}$$ a differentiable function in the origin so:$$f(t,t) =t^3+t$$ and $$f(t,-2t)=2t$$Calculate $$D_vf(0,0)$$$$v=(1,3)$$2. Homework Equations3. The Attempt at a SolutionI have no idea on how to approach this problem.I know that... Looks wrong? What do you mean? Well I know that with other path the the work is still the same (unless the direction was the opposite, in which we would have the opposite sign).I'm going to try then, thanks. 1. Homework StatementCompute the work of the vector field ##F(x,y)=(\frac{y}{x^2+y^2},\frac{-x}{x^2+y^2})##in the line segment that goes from (0,1) to (1,0).2. Homework Equations3. The Attempt at a SolutionMy attempt (please let me know if there is an easier way to do this)I...
I just wanted some quick help with a lab I'm doing, based on absorption spectroscopy. I'm given an aqueous stock solution of Yellow Dye #5 with a concentration of 54.5 micromolar. I'm asked to create a standard curve. It gives me the diluted volume as 50 mL, and has a data table with the volume of stock solution and the absorbance value for each dilution. It asks me to create a standard curve (absorbance versus concentration) in Excel for the Yellow Dye #5. First off, would I just use the constant concentration of 54.5 micromolar for this standard curve graph and the absorbance values in the data table? I'm not familiar with what a standard curve is, or if the volume of stock solution affects the concentration value in the graph. If anyone replies I'll probably comment on the right answer with a couple more, thanks for the help! It means a lot! Supposing that Yellow Dye #5 refers to the FD&C numbering, this is a food colorant better known as tartrazine outside the U.S. A standard curve correlates absorbance at a particular wavelength with the concentration of the dye. Provided that no distortion (e.g. due to aggregation effects) is present, the relation is given by the Lambert-Beer law: $$E_\lambda = \epsilon_\lambda \cdot c \cdot d$$ where $E_\lambda$ is the absorbance, $\epsilon_\lambda$ is the molar absorption coefficient (specific for a particular dye at a particular wavelength $\lambda$ in a certain solvent), $c$ is the concentration of the dye and $d$ is the path length of the cuvette. Please pay attention to the units in your calculations! In order to create a standard curve:
1. Record the UV/Vis spectrum of the stock solution. If you have a two-channel spectrometer, put a second cuvette with the neat solvent in that channel.
2. Get a couple of volumetric flasks and dilute the stock solution.
3. Record the UV/Vis spectra for each of these solutions.
4. Plot absorbance (probably at around 420 nm) vs. concentration.
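To make the procedure concrete, here is a minimal sketch in Python of what the spreadsheet would do. The stock volumes and absorbances below are made-up placeholders, not the values from the lab handout; each diluted concentration follows from c_stock · V_stock / V_final, and the standard curve is the least-squares line through the (concentration, absorbance) points.

import numpy as np

stock_conc_uM = 54.5            # stock concentration (micromolar)
diluted_volume_mL = 50.0        # final volume of every dilution
stock_volume_mL = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # hypothetical volumes
absorbance      = np.array([0.11, 0.22, 0.45, 0.66, 0.88]) # hypothetical readings

# Dilution: c_diluted = c_stock * V_stock / V_final
conc_uM = stock_conc_uM * stock_volume_mL / diluted_volume_mL

# Least-squares line A = slope * c + intercept; the slope plays the role of
# epsilon_lambda * d in the Lambert-Beer law
slope, intercept = np.polyfit(conc_uM, absorbance, 1)
print("concentrations (uM):", conc_uM)
print("slope = %.4f per uM, intercept = %.4f" % (slope, intercept))

# An unknown sample's concentration is then read off the fitted line:
unknown_A = 0.50
print("estimated concentration: %.2f uM" % ((unknown_A - intercept) / slope))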
I've been reading Complex Analysis by E. Freitag as alternative material of A First Course in Modular Forms by F. Diamond and I am struggling to comprehend some definitions. In the page 326, he made a remark about the terminology that he is introducing for the $q$-expansions of real periodic functions (defined in $\mathbb{H}$) which are holomorphic in some neighborhood of $i\infty$. To be precise, he says: Let \begin{align} f:U_C\to\mathbb{C} \end{align} be an analytic function on an upper half-plane \begin{align} U_C=\{z\in\mathbb{H}: \mathrm{Im} z >C\},\quad C>0. \end{align} We assume that $f$ is periodic, i.e. there exists a suitable $N$ with \begin{align} f(z+N)=f(z),\quad N\neq 0,\quad N\in\mathbb{R}. \end{align} The periodicity allows a Fourier expansion ($q$-expansion) \begin{align} f(z)=\sum_{n=-\infty}^{\infty}a_ne^{2\pi i nz/N}, \end{align} which in fact corresponds to a Luarent expansion \begin{align} \tilde{f}(z)=\sum_{n=-\infty}^{\infty}a_nq^n\quad\big(q=e^{\frac{2\pi i z}{N}}\big) \end{align} in the punctured disk around the origin having radius $e^{-2\pi C/N}$ Terminology. The function $f$ is (a) non-essentially singularat $i\infty$, iff $\tilde{f}$ is non-essentially singular at the origin. (b) regularat $i\infty$, iff $\tilde{f}$ has a removable singularity at the origin. In the case of regularity, one defines \begin{align} f(i\infty):=\tilde{f}(0)\quad (=a_0). \end{align} The notions do not depend on the choice of the period $N$, (If $f$ is non-constant, the set of all periods is a cyclic group.) I can't see why do not depend of the choice of the period. I'm aware that if $f$ is not a constant, the set of periods form a discrete group of $(\mathbb{R},+)$ and is generated by a rational number in the case of modular forms.
Due to the amount of confusion and the large number of emails, I have written up the solution to Problem 1 from Test 2. The Problem Determine the path of steepest descent along the surface $latex z = 2 + x + 2y - x^2 - 3y^2 $ from the point $latex (0,0,2).$ There are a few things to note – the first thing we must do is find which direction points ‘downwards’ the most. So we note that for a function $latex f(x,y) = z, $ we know that $latex \nabla f $ points ‘upwards’ the most at all points where it isn’t zero. So at any point $latex P, $ we go in the direction $latex -\nabla f.$ The second thing to note is that we seek a path, not a direction. So let us take a curve that parametrizes our path: $latex {\bf C} (t) = x(t) \hat{i} + y(t) \hat{j}.$ So $latex -\nabla f = (2x -1)\hat{i} + (6y -2)\hat{j}.$ As the velocity of the curve points in the direction of the curve, our path satisfies: $latex x'(t) = 2x(t) -1; x(0) = 0$ $latex y'(t) = 6y(t) - 2; y(0) = 0$ These are two ODEs that we can solve by separation of variables (something that is, in theory, taught in 1502 – for more details, look at chapter 9 in Salas, Hille, and Etgen). Let’s solve the y one: $latex y' = 6y - 2$ $latex \frac{dy}{dt} = 6y - 2$ $latex \frac{dy}{6y-2} = dt$ $latex \frac{1}{6}\ln(6y-2) = t + k$ for a constant k $latex 6y - 2 = e^{6t + 6k} = Ae^{6t}$ for a constant A $latex y = Ae^{6t} + 1/3$ for a new constant A $latex y(0) = 0 \Rightarrow A = -1/3 $ Solving both yields: $latex x = \frac{1}{2} -\frac{1}{2} e^{2t} $ $latex y = \frac{1}{3} - \frac{1}{3} e^{6t} $ Now let’s get rid of the t. Note that $latex (1 - 3y) = e^{6t}$ and $latex (1 - 2x) = e^{2t}$. Using these together, we can get rid of t by noting that $latex \dfrac{1 - 3y}{(1 - 2x)^3} = 1,$ or equivalently $latex \dfrac{3y-1}{(2x - 1)^3} = 1.$ Rewriting, we get $latex 3y = (2x-1)^3 + 1.$ So the path is given by $latex 3y = (2x-1)^3 + 1$ Good luck on your next test!
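For anyone who wants to double-check the algebra, here is a small Python sketch (not part of the original solution) that verifies numerically that the curves x(t) and y(t) found above satisfy the two ODEs and the implicit path equation 3y = (2x - 1)^3 + 1.

import math

# The closed-form solution curves from the write-up above.
def x(t): return 0.5 - 0.5 * math.exp(2 * t)
def y(t): return 1.0 / 3 - (1.0 / 3) * math.exp(6 * t)

# Central-difference derivatives for checking the ODEs x' = 2x - 1, y' = 6y - 2.
def dx(t, h=1e-6): return (x(t + h) - x(t - h)) / (2 * h)
def dy(t, h=1e-6): return (y(t + h) - y(t - h)) / (2 * h)

for t in [0.0, 0.1, 0.25, 0.5]:
    assert abs(dx(t) - (2 * x(t) - 1)) < 1e-4
    assert abs(dy(t) - (6 * y(t) - 2)) < 1e-4
    assert abs(3 * y(t) - ((2 * x(t) - 1) ** 3 + 1)) < 1e-9

print("x(t), y(t) satisfy the ODEs and 3y = (2x - 1)^3 + 1")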
Hi all, I have a trigonometric function series $$f(x)={1 \over 2}{\Lambda _0} + \sum\limits_{l = 1}^\infty {{\Lambda _l}\cos \left( {lx} \right)} $$ with the normalization condition $$\Lambda_0 + 2\sum\limits_{l = 1}^\infty {{\Lambda _l} = 1} $$ and ##\Lambda_l## being monotonically decreasing weights, i.e. ##\Lambda_0>\Lambda_1>\Lambda_2...## Clearly from these two latter conditions one can prove ##f(x)## exists, but my problem is to characterise this function. Which theorems would you suggest as helpful to this aim?
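One way to get a feel for such a series is to pick a concrete weight sequence satisfying both conditions and look at its partial sums numerically. The sketch below (Python/NumPy) uses the hypothetical choice ##\Lambda_l = \frac{1-q}{1+q} q^l## with 0 < q < 1, which is monotonically decreasing and obeys the normalization; it is only an illustration of one member of the family, not a characterisation of the general case.

import numpy as np

# Hypothetical weights: Lambda_l = (1-q)/(1+q) * q**l, 0 < q < 1, so that
# Lambda_0 + 2*sum_{l>=1} Lambda_l = 1 and Lambda_0 > Lambda_1 > Lambda_2 > ...
q = 0.7
L = 2000                                  # truncation order for the partial sum
lam = (1 - q) / (1 + q) * q ** np.arange(L)

print(lam[0] + 2 * lam[1:].sum())         # ~1: the normalization condition

x = np.linspace(-np.pi, np.pi, 9)
f = 0.5 * lam[0] + sum(lam[l] * np.cos(l * x) for l in range(1, L))
print(f)                                  # nonnegative and peaked at x = 0 for this choice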
Determinants Consider row reducing the standard 2x2 matrix. Suppose that \(a\) is nonzero. \[\begin{pmatrix} a &b \\ c &d \end{pmatrix}\nonumber \] \[\frac{1}{a} R_1 \rightarrow R_1, \;\;\; R_2-cR_1 \rightarrow R_2\nonumber \] \[\begin{pmatrix} 1 &\frac{b}{a} \\ c &d \end{pmatrix}\nonumber \] \[\begin{pmatrix} 1 & \frac{b}{a} \\ 0 & d-\frac{cb}{a}\end{pmatrix}\nonumber \] Now notice that we cannot make the lower right corner a 1 if \[d - \frac{cb}{a} = 0\nonumber \] or \[ad - bc = 0.\nonumber \] Definition: The Determinant We call \(ad - bc\) the determinant of the 2 by 2 matrix \[\begin{pmatrix} a &b \\ c &d \end{pmatrix}\nonumber \] it tells us when it is possible to row reduce the matrix and find a solution to the linear system. Example \(\PageIndex{1}\) The determinant of the matrix \[\begin{pmatrix} 3 & 1\\ 5 & 2 \end{pmatrix}\nonumber \] is \[3(2) - 1(5) = 6 - 5 = 1.\nonumber \] Determinants of 3 x 3 Matrices We define the determinant of a triangular matrix \[\begin{pmatrix} a &d &e \\ 0 &b &f \\ 0 &0 &c \end{pmatrix}\nonumber \] by \[\text{det} = abc.\nonumber \] Notice that if we multiply a row by a constant \(k\) then the new determinant is \(k\) times the old one. We list the effect of all three row operations below. Theorem The effect of the the three basic row operations on the determinant are as follows Multiplication of a row by a constant multiplies the determinant by that constant. Switching two rows changes the sign of the determinant. Replacing one row by that row + a multiply of another row has no effect on the determinant. To find the determinant of a matrix we use the operations to make the matrix triangular and then work backwards. Example \(\PageIndex{2}\) Find the determinant of \[\begin{pmatrix} 2 & 6 &10 \\ 2 &4 &-3 \\ 0 &4 &2 \end{pmatrix}\nonumber \] We use row operations until the matrix is triangular. \[\dfrac{1}{2}R_1 \rightarrow R_1 \text{(Multiplies the determinant by } \dfrac{1}{2})\nonumber \] \[\begin{pmatrix} 1 & 3 &5 \\ 2 &4 &-3 \\ 0 &4 &2 \end{pmatrix}\nonumber \] \[R_2 - 2R_1 \rightarrow R_2 \text{ (No effect on the determinant)}\nonumber \] \[\begin{pmatrix} 1 & 3 &5 \\ 0 &-2 &-13 \\ 0 &4 &2 \end{pmatrix}\nonumber \] Note that we do not need to zero out the upper middle number. We only need to zero out the bottom left numbers. \[R_3 + 2R_2 \rightarrow R_3 \text{ (No effect on the determinant)}.\nonumber \] \[\begin{pmatrix} 1 & 3 &5 \\ 0 &-2 &-13 \\ 0 &0 &-24 \end{pmatrix}\nonumber \] Note that we do not need to make the middle number a 1. The determinant of this matrix is 48. Since this matrix has \(\frac{1}{2}\) the determinant of the original matrix, the determinant of the original matrix has \[\text{determinant} = 48(2) = 96.\nonumber \] Inverses We call the square matrix I with all 1's down the diagonal and zeros everywhere else the identity matrix. It has the unique property that if \(A\) is a square matrix with the same dimensions then \[AI = IA = A.\nonumber \] Definition If \(A\) is a square matrix then the inverse \(A^{-1}\) of \(A\) is the unique matrix such that \[AA^{-1}=A^{-1}A=I.\nonumber \] Example \(\PageIndex{3}\) Let \[A=\begin{pmatrix} 2 &5 \\ 1 &3 \end{pmatrix}\nonumber \] then \[A^{-1}= \begin{pmatrix} 3 &-5 \\ -1 &2 \end{pmatrix} \nonumber \] Verify this! Theorem: ExistEnce The inverse of a matrix exists if and only if the determinant is nonzero. To find the inverse of a matrix, we write a new extended matrix with the identity on the right. 
Then we completely row reduce; the resulting matrix on the right will be the inverse matrix. Example \(\PageIndex{4}\) \[\begin{pmatrix} 2 &-1 \\ 1 &-1 \end{pmatrix}\nonumber \] First note that the determinant of this matrix is \[-2 + 1 = -1\nonumber \] hence the inverse exists. Now we set the augmented matrix as \[\begin{pmatrix}\begin{array}{cc|cc}2&-1&1&0 \\1&-1&0&1\end{array}\end{pmatrix}\nonumber \] \[R_1 {\leftrightarrow} R_2\nonumber \] \[\begin{pmatrix}\begin{array}{cc|cc}1&-1&0&1 \\2&-1&1&0\end{array}\end{pmatrix}\nonumber \] \[ R_2 - 2R_1 {\rightarrow} R_2\nonumber \] \[\begin{pmatrix}\begin{array}{cc|cc}1&-1&0&1 \\0&1&1&-2\end{array}\end{pmatrix}\nonumber \] \[ R_1 + R_2 {\rightarrow} R_1 \nonumber \] \[\begin{pmatrix}\begin{array}{cc|cc}1&0&1&-1 \\0&1&1&-2\end{array}\end{pmatrix}\nonumber \] Notice that the left hand part is now the identity. The right hand side is the inverse. Hence \[A^{-1}= \begin{pmatrix} 1&-1 \\ 1&-2 \end{pmatrix} \nonumber \] Solving Equations Using Matrices Example \(\PageIndex{5}\) Suppose we have the system \[2x - y = 3\nonumber \] \[ x - y = 4\nonumber \] Then we can write this in matrix form \[Ax = b\nonumber \] where \[A=\begin{pmatrix} 2&-1 \\ 1&-1 \end{pmatrix}, \;\;\; x= \begin{pmatrix} x \\ y \end{pmatrix}, \;\;\; \text{and} \; b=\begin{pmatrix} 3\\4 \end{pmatrix}\nonumber \] We can multiply both sides by \(A^{-1}\): \[A^{-1}A x = A^{-1}b\nonumber \] or \[x = A^{-1}b\nonumber \] From before, \[A^{-1}=\begin{pmatrix} 1&-1 \\ 1&-2 \end{pmatrix} \nonumber \] Hence our solution is \[\begin{pmatrix} -1 \\ -5 \end{pmatrix}\nonumber \] or \[x = -1 \text{ and } y = -5\nonumber \] Contributors Larry Green (Lake Tahoe Community College) Integrated by Justin Marshall.
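To tie the three ideas above together (determinant by row reduction, inverse by row reducing an augmented matrix, and solving Ax = b with the inverse), here is a small plain-Python sketch, written from scratch for illustration, that reproduces the determinant 96, the inverse from Example 4, and the solution x = -1, y = -5. It pivots on the largest available entry and assumes the matrix is invertible, so its intermediate steps differ from the hand computation even though the results agree.

def det_by_row_reduction(M):
    # Gaussian elimination, tracking how each row operation changes the determinant.
    A = [row[:] for row in M]
    n, det = len(A), 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < 1e-12:
            return 0.0                                 # singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]        # row swap flips the sign
            det = -det
        det *= A[col][col]                             # scaling a row scales det
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(col + 1, n):
            factor = A[r][col]                         # row replacement: no effect on det
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return det

def inverse_gauss_jordan(M):
    # Augment with the identity, fully row reduce, read the inverse off the right half.
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

print(det_by_row_reduction([[2, 6, 10], [2, 4, -3], [0, 4, 2]]))   # 96.0
Ainv = inverse_gauss_jordan([[2, -1], [1, -1]])
print(Ainv)                                                        # [[1.0, -1.0], [1.0, -2.0]]
b = [3, 4]
print([sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)])  # [-1.0, -5.0]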
In this problem, we are asked to compute the line integral \[ \int_C \frac{-y\,dx + x\,dy}{x^2 + y^2} \] where \(C\) is the line segment from \((1,0)\) to \((0,1)\). It turns out that this integral is particularly easy to compute in polar coordinates (although the work that I did in class probably made it seem hopelessly complicated!). In polar coordinates, we have \(x = r \cos \theta\) and \(y = r \sin \theta\). To get \(dx\) and \(dy\) in terms of \(r\), \(\theta\) and \(d\theta\) we have to do a little work differentiating. Since we are integrating over the curve \(C\), we will eventually have \(r = r(\theta)\), a function of theta. We compute \[ dx = d(r \cos \theta) = \left(\frac{dr}{d\theta} \cos \theta - r \sin \theta \right)\, d\theta \] and similarly \[ dy = d(r \sin \theta) = \left(\frac{dr}{d\theta} \sin \theta + r \cos \theta \right)\, d\theta. \] Using these formulas, the numerator of the integrand simplifies immensely: \[ -y\, dx + x\, dy = r^2 \, d\theta. \] Therefore, the integral becomes (in polar coordinates) \[ \int_C \frac{-y\,dx + x\,dy}{x^2 + y^2} = \int_C \, d\theta. \] The curve \(C\) can be written in polar coordinates as \[ r(\theta) = \frac{1}{\cos \theta + \sin \theta} \] where \(0 \leq \theta \leq \pi / 2\). To see this, notice the line satisfies \(y = 1 - x\), or equivalently \(x + y = 1\). In polar coordinates, this equation is \(r \cos \theta + r \sin \theta = 1\), and solving for \(r\) gives the equation for \(C\). Therefore, we can compute the integral \[ \int_C \, d\theta = \int_0^{\pi / 2} \, d\theta = \frac{\pi}{2}. \] So despite the imposing setup to the problem, the final integral turns out to be very simple.
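As an independent check, here is a short Python sketch (not part of the original solution) that parametrizes the segment directly as (x, y) = (1 - t, t) for t in [0, 1] and approximates the line integral by a midpoint Riemann sum; the result agrees with pi/2.

import math

N = 200000
total = 0.0
for k in range(N):
    t = (k + 0.5) / N            # midpoint of each subinterval
    x, y = 1 - t, t
    dx, dy = -1.0 / N, 1.0 / N   # dx = x'(t) dt, dy = y'(t) dt with dt = 1/N
    total += (-y * dx + x * dy) / (x * x + y * y)

print(total, math.pi / 2)        # both ~ 1.5707963...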
DISCLAIMER: Very rough notes from class. Some additional side notes, but otherwise barely edited. These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz. Determinant of Lorentz transformations We require that Lorentz transformations leave the dot product invariant, that is \( x \cdot y = x’ \cdot y’ \), or \begin{equation}\label{eqn:qftLecture3:20} x^\mu g_{\mu\nu} y^\nu = {x’}^\mu g_{\mu\nu} {y’}^\nu. \end{equation} Explicitly, with coordinate transformations \begin{equation}\label{eqn:qftLecture3:40} \begin{aligned} {x’}^\mu &= {\Lambda^\mu}_\rho x^\rho \\ {y’}^\mu &= {\Lambda^\mu}_\rho y^\rho \end{aligned} \end{equation} such a requirement is equivalent to demanding that \begin{equation}\label{eqn:qftLecture3:500} \begin{aligned} x^\mu g_{\mu\nu} y^\nu &= {\Lambda^\mu}_\rho x^\rho g_{\mu\nu} {\Lambda^\nu}_\kappa y^\kappa \\ &= x^\mu {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu y^\nu, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:60} g_{\mu\nu} = {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu \end{equation} multiplying by the inverse we find \begin{equation}\label{eqn:qftLecture3:200} \begin{aligned} g_{\mu\nu} {\lr{\Lambda^{-1}}^\nu}_\lambda &= {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu {\lr{\Lambda^{-1}}^\nu}_\lambda \\ &= {\Lambda^\alpha}_\mu g_{\alpha\lambda} \\ &= g_{\lambda\alpha} {\Lambda^\alpha}_\mu. \end{aligned} \end{equation} This is now amenable to expressing in matrix form \begin{equation}\label{eqn:qftLecture3:220} \begin{aligned} (G \Lambda^{-1})_{\mu\lambda} &= (G \Lambda)_{\lambda\mu} \\ &= ((G \Lambda)^\T)_{\mu\lambda} \\ &= (\Lambda^\T G)_{\mu\lambda}, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:80} G \Lambda^{-1} = (G \Lambda)^\T. \end{equation} Taking determinants (using the normal identities for products of determinants, determinants of transposes and inverses), we find \begin{equation}\label{eqn:qftLecture3:100} det(G) det(\Lambda^{-1}) = det(G) det(\Lambda), \end{equation} or \begin{equation}\label{eqn:qftLecture3:120} det(\Lambda)^2 = 1, \end{equation} or \( det(\Lambda) = \pm 1 \). We will generally ignore the case of reflections in spacetime that have a negative determinant. Smart-alec Peeter pointed out after class last time that we can do the same thing more easily in matrix notation \begin{equation}\label{eqn:qftLecture3:140} \begin{aligned} x’ &= \Lambda x \\ y’ &= \Lambda y \end{aligned} \end{equation} where \begin{equation}\label{eqn:qftLecture3:160} \begin{aligned} x’ \cdot y’ &= (x’)^\T G y’ \\ &= x^\T \Lambda^\T G \Lambda y, \end{aligned} \end{equation} which we require to be \( x \cdot y = x^\T G y \) for all four vectors \( x, y \), that is \begin{equation}\label{eqn:qftLecture3:180} \Lambda^\T G \Lambda = G. \end{equation} We can find the result \ref{eqn:qftLecture3:120} immediately without having to first translate from index notation to matrices. Field theory The electrostatic potential is an example of a scalar field \( \phi(\Bx) \) unchanged by SO(3) rotations \begin{equation}\label{eqn:qftLecture3:240} \Bx \rightarrow \Bx’ = O \Bx, \end{equation} that is \begin{equation}\label{eqn:qftLecture3:260} \phi'(\Bx’) = \phi(\Bx). \end{equation} Here \( \phi'(\Bx’) \) is the value of the (electrostatic) scalar potential in a primed frame. However, the electrostatic field is not invariant under Lorentz transformation. 
We postulate that there is some scalar field \begin{equation}\label{eqn:qftLecture3:280} \phi'(x’) = \phi(x), \end{equation} where \( x’ = \Lambda x \) is an SO(1,3) transformation. There are actually no stable particles (fields that persist at long distances) described by Lorentz scalar fields, although there are some unstable scalar fields such as the Higgs, Pions, and Kaons. However, much of our homework and discussion will be focused on scalar fields, since they are the easiest to start with. We need to first understand how derivatives \( \partial_\mu \phi(x) \) transform. Using the chain rule \begin{equation}\label{eqn:qftLecture3:300} \begin{aligned} \PD{x^\mu}{\phi(x)} &= \PD{x^\mu}{\phi'(x’)} \\ &= \PD{{x’}^\nu}{\phi'(x’)} \PD{{x}^\mu}{{x’}^\nu} \\ &= \PD{{x’}^\nu}{\phi'(x’)} \partial_\mu \lr{ {\Lambda^\nu}_\rho x^\rho } \\ &= \PD{{x’}^\nu}{\phi'(x’)} {\Lambda^\nu}_\mu \\ &= \PD{{x’}^\nu}{\phi(x)} {\Lambda^\nu}_\mu. \end{aligned} \end{equation} Multiplying by the inverse \( {\lr{\Lambda^{-1}}^\mu}_\kappa \) we get \begin{equation}\label{eqn:qftLecture3:320} \PD{{x’}^\kappa}{} = {\lr{\Lambda^{-1}}^\mu}_\kappa \PD{x^\mu}{} \end{equation} This should be familiar to you, and is an analogue of the transformation of the \begin{equation}\label{eqn:qftLecture3:340} d\Br \cdot \spacegrad_\Br = d\Br’ \cdot \spacegrad_{\Br’}. \end{equation} Actions We will start with a classical action, and quantize to determine a QFT. In mechanics we have the particle position \( q(t) \), which is a classical field in 1+0 time and space dimensions. Our action is \begin{equation}\label{eqn:qftLecture3:360} S = \int dt \LL(t) = \int dt \lr{ \inv{2} \dot{q}^2 – V(q) }. \end{equation} This action depends on the position of the particle that is local in time. You could imagine that we have a more complex action where the action depends on future or past times \begin{equation}\label{eqn:qftLecture3:380} S = \int dt’ q(t’) K( t’ – t ), \end{equation} but we don’t seem to find such actions in classical mechanics. Principles determining the form of the action. relativity (action is invariant under Lorentz transformation) locality (action depends on fields and the derivatives at given \((t, \Bx)\). Gauge principle (the action should be invariant under gauge transformation). We won’t discuss this in detail right now since we will start with studying scalar fields. Recall that for Maxwell’s equations a gauge transformation has the form \begin{equation}\label{eqn:qftLecture3:520} \phi \rightarrow \phi + \dot{\chi}, \BA \rightarrow \BA – \spacegrad \chi. \end{equation} Suppose we have a real scalar field \( \phi(x) \) where \( x \in \mathbb{R}^{1,d-1} \). We will be integrating over space and time \( \int dt d^{d-1} x \) which we will write as \( \int d^d x \). Our action is \begin{equation}\label{eqn:qftLecture3:400} S = \int d^d x \lr{ \text{Some action density to be determined } } \end{equation} The analogue of \( \dot{q}^2 \) is \begin{equation}\label{eqn:qftLecture3:420} \begin{aligned} \lr{ \PD{x^\mu}{\phi} } \lr{ \PD{x^\nu}{\phi} } g^{\mu\nu} &= (\partial_\mu \phi) (\partial_\nu \phi) g^{\mu\nu} \\ &= \partial^\mu \phi \partial_\mu \phi. \end{aligned} \end{equation} This has both time and spatial components, that is \begin{equation}\label{eqn:qftLecture3:440} \partial^\mu \phi \partial_\mu \phi = \dotphi^2 – (\spacegrad \phi)^2, \end{equation} so the desired simplest scalar action is \begin{equation}\label{eqn:qftLecture3:460} S = \int d^d x \lr{ \dotphi^2 – (\spacegrad \phi)^2 }. 
\end{equation} The measure transforms using a Jacobian, which we have seen is the Lorentz transform matrix, and has unit determinant \begin{equation}\label{eqn:qftLecture3:480} d^d x’ = d^d x \Abs{ det( \Lambda^{-1} ) } = d^d x. \end{equation} Problems. Question: Four vector form of the Maxwell gauge transformation. Show that the transformation \begin{equation}\label{eqn:qftLecture3:580} A^\mu \rightarrow A^\mu + \partial^\mu \chi \end{equation} is the desired four-vector form of the gauge transformation \ref{eqn:qftLecture3:520}, that is \begin{equation}\label{eqn:qftLecture3:540} \begin{aligned} j^\nu &= \partial_\mu {F’}^{\mu\nu} \\ &= \partial_\mu F^{\mu\nu}. \end{aligned} \end{equation} Also relate this four-vector gauge transformation to the spacetime split. Answer \begin{equation}\label{eqn:qftLecture3:560} \begin{aligned} \partial_\mu {F’}^{\mu\nu} &= \partial_\mu \lr{ \partial^\mu {A’}^\nu – \partial^\nu {A’}^\mu } \\ &= \partial_\mu \lr{ \partial^\mu \lr{ A^\nu + \partial^\nu \chi } – \partial^\nu \lr{ A^\mu + \partial^\mu \chi } } \\ &= \partial_\mu {F}^{\mu\nu} + \partial_\mu \partial^\mu \partial^\nu \chi – \partial_\mu \partial^\nu \partial^\mu \chi \\ &= \partial_\mu {F}^{\mu\nu}, \end{aligned} \end{equation} by equality of mixed partials. Expanding \ref{eqn:qftLecture3:580} explicitly we find \begin{equation}\label{eqn:qftLecture3:600} {A’}^\mu = A^\mu + \partial^\mu \chi, \end{equation} which is \begin{equation}\label{eqn:qftLecture3:620} \begin{aligned} \phi’ = {A’}^0 &= A^0 + \partial^0 \chi = \phi + \dot{\chi} \\ \BA’ \cdot \Be_k = {A’}^k &= A^k + \partial^k \chi = \lr{ \BA – \spacegrad \chi } \cdot \Be_k. \end{aligned} \end{equation} The last of which can be written in vector notation as \( \BA’ = \BA – \spacegrad \chi \).
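A quick numerical illustration of the earlier statements, sketched in Python/NumPy with an arbitrarily chosen boost along x (not part of the lecture): the matrix satisfies the defining relation Lambda^T G Lambda = G, its determinant is +1 (so the integration measure is unchanged), and dot products of four-vectors are preserved.

import numpy as np

eta = 0.8                                    # arbitrary rapidity for a boost along x
ch, sh = np.cosh(eta), np.sinh(eta)
G = np.diag([1.0, -1.0, -1.0, -1.0])         # metric with signature (+,-,-,-)
L = np.array([[ch, sh, 0, 0],
              [sh, ch, 0, 0],
              [0,  0,  1, 0],
              [0,  0,  0, 1]])

assert np.allclose(L.T @ G @ L, G)           # metric is preserved
assert np.isclose(np.linalg.det(L), 1.0)     # proper transformation, |det| = 1

# invariance of the dot product x . y = x^T G y for random four-vectors
x, y = np.random.randn(4), np.random.randn(4)
assert np.isclose(x @ G @ y, (L @ x) @ G @ (L @ y))
print("Lambda^T G Lambda = G, det(Lambda) = 1, dot products invariant")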
Quick definition of the Laplacian matrix on a triangular mesh. I give an in depth explanation here. I will discuss various ways you can ship a Maya plugin to users. Leaving [ C++ code ] to decompose a 3x3 matrix into rotation and scale/shear with polar decomposition (this is done through a 3x3 SVD and 3x3 Eigen decomposition). Keywords: Polar decomposition of a 3x3 matrix 3x3 matrix SVD (Singular Value decomposition) Eigen decomposition (extract eigenvalues and eigenvectors) of a 3x3 matrix M (assumes symmetric matrix) Showcasing simple procedures with C++ code to smooth / diffuse per vertex weights over a triangle mesh. I summarize the Phong illumination model with the above equation and explain all the terms one by one. Here is how you can make furigana appear above kanjis in your anki flash cards like this: お箸上手ですね! Learning a language is quite a challenge, here I will gather some resources and thoughts about learning Japanese. Some C++ Maya API code to set skin weights (multi attributes) of a skin cluster node. DQS with scale applied on the second to last joint. Left, globally propagates until the last bone, right, scale localized to each joint. C++ code snippet to correct the Dual Quaternion Skinning bulge artifact. Left: standard DQS, right bulge correction, (automatic skin weight with Maya "smooth bind"). Code snippet in C++ to be able to update the color of a mesh with Maya API on a per vertex basis. A trick to pause the script until evalDeferred / scriptJob / idle are executed. Dropping a procedure to compute the Laplacian smoothing of a 3D mesh (with cotangent weights). $$w_{ij} = \cot(\alpha_{ij}) + \cot(\beta_{ij})$$ Changing the shape of a 3D mesh is a very common operation in computer graphics. For instance, rotations, translations etc. are very basic forms of mesh deformations. Of course more complicated algorithms exists to deform your mesh such as wrap deformers, linear blending skinning and so on... During the design of these algorithms you might face a question: how do I transform the normal of the original shape in rest pose to the deformed shape? This entry will give you some pointers to compute it. The goal of this tutorial is to describe step by step the process of skinning a simple cylinder with two joints in Blender 2.5 and superior. This is a beginner lesson in the sense that I will describe as much as possible shortcuts and actions, however, there will be no high level explanations. Note for the scholars: blender implements the heat diffusion method to compute skinning weights, linear blending and dual quaternion skinning are used to deform the mesh. Just leaving some notes to differentiate expressions with the \( \nabla \) operator to compute gradients of various functions. Disclaimer The standalone application is written in Qt/C++ and CUDA, therefore it only works on NVIDIA GPUs but should run on Windows or Linux. This code is a research project developed on three Linux machines with similar configurations and "tested" by very few people: expect difficulties to get it to work. The documentation is also very thin. That being said I will be happy to answer questions in case of troubles. Publication: ACM SIGGRAPH ASIA, 2014 1IRIT - Université de Toulouse, 2University of Victoria, 3Inria Bordeaux, 4LJK - Grenoble Universités - Inria Comparison of Implicit Skinning with our new Elastic Implicit Skinning. 
From right to left: Jeff in T-pose, Jeff rigged with implicit skinning, Jeff rigged with elastic implicit skinning, female model with implicit skinning then elastic implicit skinning. With elastic implicit skinning the skin stretches automatically (without skinning weights) and the vertices distribution is more pleasing (notice the belly button stretch). Our approach is more robust, for instance the angle's range of joints is larger than implicit skinning (notice the triangle inversion of implicit skinning at the knee joint) Draft work in progress Dropping my code of the voro++ library under windows with cmake.. It helped me to compile the This is the second part of my tutorial series on bounded harmonic functions. For a quick introduction and examples of use of harmonic functions read the first part. In this part I define harmonic functions and their properties. This is the hard part with a lot of mathematics. But it's a mandatory step to understand how harmonic functions work. This will allow you to apply them in a broader context and understand many scientific papers relying on these. In addition this will be my entry point to introduce Finite Element Method in future posts. So hang on it's worth it! I ran into the infamous message "Syncing error, type: 409, message: Conflict" while syncing my AnkiDroid with AnkiWeb on my Nexus 4. Here is the solution: Introduction To represent a 2D function \( z = f(x,y) \) one can draw a 3D surface. The higher the \(z\) value the higher the point on the surface (see above). (a) (b) (c) (d) Generating an Summary In this tutorial I give the basics to be able to compute harmonic functions \( f: \mathbb R^n \rightarrow \mathbb R\) over a regular grid/texture (in 1D, 2D or 3D). All you need is to set a few values on the grid that define a closed region (see figure above), then the remaining values are automatically computed using the so-called Laplace operator and solving a linear system of equations. After a quick and "mathless" presentation of an easy way to implement this in C++, I will explain the mathematical jargon. This tutorial is the first of series which will lead to more complicated techniques like Finite Element Methods ( FEM). Here you can [ download the C++ code] to do a Poisson disk sampling of a triangular mesh. The code is a simple wrapper for the vcglib library. It takes in input a std::vector of triangles and vertex position/normals and outputs std::vector of samples positions/normals computed with the Poisson disk sampling. EDIT: Victor Martins pointed out that the above code doesn't compile on MAC and kindly provided an [ update] bundled with a newer version of vcglib. N.B: if you are looking for a way to do Poisson disk sampling fast/in real-time take a look at this siggraph paper and matlab code. There is also a C++ implementation here. This is a simple demonstrator of the HRBF technique presented and explained here. You can visualize and edit the implicit surface generated with HRBF. Dropping on github C++ code to compute spline curves of arbitrary dimensions 3D curves or 2D splines anything is possible. The class can be instantiated with any point type (1d (float), 2d, 3d etc.) as long as the appropriate operator overload are implemented. This class use the efficient blossom algorithm to compute a position on the curve. 
Publication: ACM SIGGRAPH, 2013 Rodolphe Vaillant 1,2, Loïc Barthe 1, Gaël Guennebaud 3, Marie-Paule Cani 4, Damien Rhomer 5, Brian Wyvill 2, Olivier Gourmel 1 and Mathias Paulin 1 1IRIT - Université de Toulouse, 2University of Victoria, 3Inria Bordeaux, 4LJK - Grenoble Universités - Inria, 5CPE Lyon - Inria (e) (a) (b) (c) (d) (a) Self-intersection of the Armadillo's knee with dual quaternions skinning. (b) Implicit Skinning produces the skin fold and contact. (c), (d) two poses of the finger with Implicit Skinning generating the contact surface and organic bugle. (e) Implicit Skinning of an animated character around 95fps. In this entry I provide [ C++ codes ] to deform a mesh with the famous Dual Quaternion Skinning ( DQS) deformer. In mathematics functions are sometime characterized as compactly or globally supported. I will explain what it means and how to convert a function from global to compact support in the context of implicit surface modeling (i.e. boolean modeling with implicit surfaces as objects). I'm leaving some code to handle a simple trackball. The code actually wasn't done by me, but I can't remember where I took it! Originally it was C code which I've wrapped up in a C++ class with way more comments and an example of how to use it. [ C++ trackball code ]. Here I describe the discreet Laplace-Beltrami operator for a triangle mesh and how to derive the Laplacian matrix for that mesh. Then I provide [ C++ code ] to compute harmonic weights over a triangular mesh by solving the Laplace equation. Theories around alternative systems for teaching and learning has always interested me, so I decided to open an entry about that. My first contribution will be the English transcript of an inspiring video about education. I might put more articles/videos on this topic later. In this entry I'll explain and give code for the easiest method I know to reconstruct a surface represented by a scalar field \(f: \mathbb R^3 \rightarrow \mathbb R\) by interpolating a set of point with their normals. The method is called Hermite Radial Basis Function (HRBF) interpolation. The audience aimed is anyone with some basic knowledge about implicit surfaces (which people often know as metaballs and render with the marching cube algorithm) Quick links: [ HRBF core sources ] [ HRBF toy app sources ] [ HRBF toy app binaries ] [ maths details ] [ maths summary ] [ tex sources (FR) (EN) ] glBegin(GL_TRIANGLES); glVertex3f(1.0f, 0.0f, 0.0f); glVertex3f(0.0f, 1.0f, 0.0f); glVertex3f(0.0f, 0.0f, 1.0f); glEnd(); [ compact version ] | [ modular version ] Remember the old days when you were able to simply draw a few primitives with GL_POINTS, GL_LINES or GL_QUADS within a pair of good old begin() end(). Well I'm providing a C++ class which will enable you to do this again under OpenGL 3.1 or higher. Just leaving some code here to invert either column or row major 4x4 matrices. You have just upgraded to CUDA 5.0 and the function cudaMemcpyToSymbol() throws you the infamous "invalid device symbol" error. Here is what to do. Edit: the usage of cudaMemcpyToSymbol describded below is deprecated since CUDA 4.1 (See also my new entry Upgrade to CUDA 5.0: cudaMemcpyToSymbol invalid device symbol error) Today I want to discuss some issues I had with CUDA constant memory and share some workarounds. This has upset me for some time and I finally found a forum entry solving this problem. The problem: I wasn't able to drag&drop in QTDesigner any widgets from the widgets list to the form I was editing. 
Solution found here: qtforum.org You only need to add this entry to your /etc/X11/xorg.conf:
Section "Module"
    Load "extmod"
EndSection
Well I'm using QtCreator to code my CUDA project and there have been a lot of things bothering me. Among them is this F4 shortcut which doesn't work. The shortcut enables switching between the header and source file but apparently the extension .cu is not recognized as a C++ source file. Here is how to fix it.
Populations are dynamic entities. Populations consist of all of the individuals of a species living within a specific area, and populations fluctuate based on a number of factors: seasonal and yearly changes in the environment, natural disasters such as forest fires and volcanic eruptions, and competition for resources between and within species. The statistical study of population dynamics, demography, uses a series of mathematical tools to investigate how populations respond to changes in their biotic and abiotic environments. Many of these tools were originally designed to study human populations. For example, life tables, which detail the life expectancy of individuals within a population, were initially developed by life insurance companies to set insurance rates. In fact, while the term “demographics” is commonly used when discussing humans, all living populations can be studied using this approach. Population Size and Density The study of any population usually begins by determining how many individuals of a particular species exist, and how closely associated they are with each other. Within a particular habitat, a population can be characterized by its population size (N), the total number of individuals, and its population density, the number of individuals within a specific area or volume. Population size and density are the two main characteristics used to describe and understand populations. For example, populations with more individuals may be more stable than smaller populations based on their genetic variability, and thus their potential to adapt to the environment. Alternatively, a member of a population with low population density (more spread out in the habitat) might have more difficulty finding a mate to reproduce compared to a population of higher density. As shown in Figure 1 below, smaller organisms tend to be more densely distributed than larger organisms. Art Connection As this graph shows, population density typically decreases with increasing body size. Why do you think this is the case? Population Research Methods The most accurate way to determine population size is to simply count all of the individuals within the habitat. However, this method is often not logistically or economically feasible, especially when studying large habitats. Thus, scientists usually study populations by sampling a representative portion of each habitat and using this data to make inferences about the habitat as a whole. A variety of methods can be used to sample populations to determine their size and density. For immobile organisms such as plants, or for very small and slow-moving organisms, a quadrat may be used. A quadrat is a way of marking off square areas within a habitat, either by staking out an area with sticks and string, or by the use of a wood, plastic, or metal square placed on the ground. After setting the quadrats, researchers then count the number of individuals that lie within their boundaries. Multiple quadrat samples are performed throughout the habitat at several random locations. All of this data can then be used to estimate the population size and population density within the entire habitat. The number and size of quadrat samples depends on the type of organisms under study and other factors, including the density of the organism. For example, if sampling daffodils, a 1 m² quadrat might be used whereas with giant redwoods, which are larger and live much further apart from each other, a larger quadrat of 100 m² might be employed. 
This ensures that enough individuals of the species are counted to get an accurate sample that correlates with the habitat, including areas not sampled. For mobile organisms, such as mammals, birds, or fish, a technique called mark and recapture is often used. This method involves marking a sample of captured animals in some way (such as tags, bands, paint, or other body markings), and then releasing them back into the environment to allow them to mix with the rest of the population; later, a new sample is collected, including some individuals that are marked (recaptures) and some individuals that are unmarked. Using the ratio of marked and unmarked individuals, scientists determine how many individuals are in the sample. From this, calculations are used to estimate the total population size. This method assumes that the larger the population, the lower the percentage of tagged organisms that will be recaptured since they will have mixed with more untagged individuals. For example, if 80 deer are captured, tagged, and released into the forest, and later 100 deer are captured and 20 of them are already marked, we can determine the population size ( N) using the following equation: [latex]\displaystyle\frac{\text{number marked first catch}\times\text{total number of second catch}}{\text{number marked second catch}}=N[/latex] Using our example, the population size would be estimated at 400: [latex]\displaystyle\frac{80\times{100}}{20}=400[/latex] Therefore, there are an estimated 400 total individuals in the original population. There are some limitations to the mark and recapture method. Some animals from the first catch may learn to avoid capture in the second round, thus inflating population estimates. Alternatively, animals may preferentially be retrapped (especially if a food reward is offered), resulting in an underestimate of population size. Also, some species may be harmed by the marking technique, reducing their survival. A variety of other techniques have been developed, including the electronic tracking of animals tagged with radio transmitters and the use of data from commercial fishing and trapping operations to estimate the size and health of populations and communities. Species Distribution In addition to measuring simple density, further information about a population can be obtained by looking at the distribution of the individuals. Species dispersion patterns (or distribution patterns) show the spatial relationship between members of a population within a habitat at a particular point in time. In other words, they show whether members of the species live close together or far apart, and what patterns are evident when they are spaced apart. Individuals in a population can be more or less equally spaced apart, dispersed randomly with no predictable pattern, or clustered in groups. These are known as uniform, random, and clumped dispersion patterns, respectively. Uniform dispersion is observed in plants that secrete substances inhibiting the growth of nearby individuals (such as the release of toxic chemicals by the sage plant Salvia leucophylla, a phenomenon called allelopathy) and in animals like the penguin that maintain a defined territory. An example of random dispersion occurs with dandelion and other plants that have wind-dispersed seeds that germinate wherever they happen to fall in a favorable environment. 
A clumped dispersion may be seen in plants that drop their seeds straight to the ground, such as oak trees, or animals that live in groups (schools of fish or herds of elephants). Clumped dispersions may also be a function of habitat heterogeneity. Thus, the dispersion of the individuals within a population provides more information about how they interact with each other than does a simple density measurement. Just as lower density species might have more difficulty finding a mate, solitary species with a random distribution might have a similar difficulty when compared to social species clumped together in groups. Demography While population size and density describe a population at one particular point in time, scientists must use demography to study the dynamics of a population. Demography is the statistical study of population changes over time: birth rates, death rates, and life expectancies. Each of these measures, especially birth rates, may be affected by the population characteristics described above. For example, a large population size results in a higher birth rate because more potentially reproductive individuals are present. In contrast, a large population size can also result in a higher death rate because of competition, disease, and the accumulation of waste. Similarly, a higher population density or a clumped dispersion pattern results in more potential reproductive encounters between individuals, which can increase birth rate. Lastly, a female-biased sex ratio (the ratio of males to females) or age structure (the proportion of population members at specific age ranges) composed of many individuals of reproductive age can increase birth rates. In addition, the demographic characteristics of a population can influence how the population grows or declines over time. If birth and death rates are equal, the population remains stable. However, the population size will increase if birth rates exceed death rates; the population will decrease if birth rates are less than death rates. Life expectancy is another important factor; the length of time individuals remain in the population impacts local resources, reproduction, and the overall health of the population. These demographic characteristics are often displayed in the form of a life table. Life Tables Life tables provide important information about the life history of an organism. Life tables divide the population into age groups and often sexes, and show how long a member of that group is likely to live. They are modeled after actuarial tables used by the insurance industry for estimating human life expectancy. Life tables may include the probability of individuals dying before their next birthday (i.e., their mortality rate), the percentage of surviving individuals dying at a particular age interval, and their life expectancy at each interval. An example of a life table is shown in the Table below from a study of Dall mountain sheep, a species native to northwestern North America. Notice that the population is divided into age intervals (column A). The mortality rate (per 1000), shown in column D, is based on the number of individuals dying during the age interval (column B) divided by the number of individuals surviving at the beginning of the interval (Column C), multiplied by 1000. 
[latex]\displaystyle\text{mortality rate}=\frac{\text{number of individuals dying}}{\text{number of individuals surviving}}\times{1000}[/latex] For example, between ages three and four, 12 individuals die out of the 776 that were remaining from the original 1000 sheep. This number is then multiplied by 1000 to get the mortality rate per thousand. [latex]\displaystyle\text{mortality rate}=\frac{12}{776}\times{1000}\approx{15.5}[/latex] As can be seen from the mortality rate data (column D), a high death rate occurred when the sheep were between 6 and 12 months old, and then increased even more from 8 to 12 years old, after which there were few survivors. The data indicate that if a sheep in this population were to survive to age one, it could be expected to live another 7.7 years on average, as shown by the life expectancy numbers in column E. Table 1. Life Table of Dall Mountain Sheep [1]
Age interval (years) | Number dying in age interval out of 1000 born | Number surviving at beginning of age interval out of 1000 born | Mortality rate per 1000 alive at beginning of age interval | Life expectancy or mean lifetime remaining to those attaining age interval
0-0.5 | 54 | 1000 | 54.0 | 7.06
0.5-1 | 145 | 946 | 153.3 | —
1-2 | 12 | 801 | 15.0 | 7.7
2-3 | 13 | 789 | 16.5 | 6.8
3-4 | 12 | 776 | 15.5 | 5.9
4-5 | 30 | 764 | 39.3 | 5.0
5-6 | 46 | 734 | 62.7 | 4.2
6-7 | 48 | 688 | 69.8 | 3.4
7-8 | 69 | 640 | 107.8 | 2.6
8-9 | 132 | 571 | 231.2 | 1.9
9-10 | 187 | 439 | 426.0 | 1.3
10-11 | 156 | 252 | 619.0 | 0.9
11-12 | 90 | 96 | 937.5 | 0.6
12-13 | 3 | 6 | 500.0 | 1.2
13-14 | 3 | 3 | 1000 | 0.7
Survivorship Curves Another tool used by population ecologists is a survivorship curve, which is a graph of the number of individuals surviving at each age interval plotted versus time (usually with data compiled from a life table). These curves allow us to compare the life histories of different populations. Humans and most primates exhibit a Type I survivorship curve because a high percentage of offspring survive their early and middle years—death occurs predominantly in older individuals. These types of species usually have small numbers of offspring at one time, and they give a high amount of parental care to them to ensure their survival. Birds are an example of an intermediate or Type II survivorship curve because birds die more or less equally at each age interval. These organisms also may have relatively few offspring and provide significant parental care. Trees, marine invertebrates, and most fishes exhibit a Type III survivorship curve because very few of these organisms survive their younger years; however, those that make it to an old age are more likely to survive for a relatively long period of time. Organisms in this category usually have a very large number of offspring, but once they are born, little parental care is provided. Thus these offspring are “on their own” and vulnerable to predation, but their sheer numbers assure the survival of enough individuals to perpetuate the species. Section Summary Populations are individuals of a species that live in a particular habitat. Ecologists measure characteristics of populations: size, density, dispersion pattern, age structure, and sex ratio. Life tables are useful to calculate life expectancies of individual population members. Survivorship curves show the number of individuals surviving at each age interval plotted versus time. Data Adapted from Edward S. Deevey, Jr., “Life Tables for Natural Populations of Animals,” The Quarterly Review of Biology 22, no. 4 (December 1947): 283–314.
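The two worked calculations above (the mark-and-recapture estimate and the life-table mortality rate) are easy to reproduce programmatically; here is a small Python sketch, using only the numbers quoted in this section, that recomputes the estimate of 400 deer and the per-1000 mortality rates for the Dall sheep life table.

# Mark-recapture (Lincoln-Petersen) estimate from the deer example above.
marked_first, total_second, marked_second = 80, 100, 20
N_est = marked_first * total_second / marked_second
print(N_est)     # 400.0

# Mortality rate per 1000 for the Dall sheep life table: deaths in the interval
# divided by the number alive at the start of the interval, times 1000.
dying     = [54, 145, 12, 13, 12, 30, 46, 48, 69, 132, 187, 156, 90, 3, 3]
surviving = [1000, 946, 801, 789, 776, 764, 734, 688, 640, 571, 439, 252, 96, 6, 3]
rates = [round(1000 * d / s, 1) for d, s in zip(dying, surviving)]
print(rates)     # e.g. the 3-4 year interval gives 1000 * 12 / 776 = 15.5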
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that . In fact even our friend Max gets that.http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link and I would have showcased it all on its own had I seen it first The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant. 2) Einstein's gneral relativity uses Riemann's differential geometry. In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. 
Like spheres, and tori, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. Sixty years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics.

Isn't this the same problem as previous? dx = 0?

3) Fourier series link the physics of heat to the physics of the Internet, via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He claimed that essentially any function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're decomposing any motion into a sum of periodic sine and cosine waves, one wave for each whole-number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math.

I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'?

4) Quantum theory is functional analysis. If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions, so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way.

Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite, and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM, rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don't need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow's weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that's more deep and elegant than the hacks we use for our computer simulations."

We can *claim* physics is based on infinity, but I think it's more accurate to say we *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I'm betting that we also need to let go of it." He said "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion.
Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert: his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything, and I'd be lying if I said I could see it. I'm not being stubborn; I feel like I'm walking on eggshells, being as amicable and conciliatory as possible in trying not to offend, and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it.

ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig.

Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol

5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity.)

I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits, which would then make it not infinity.

In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: if you begin by studying the flow of heat through an iron rod, you will inexorably discover transfinite set theory.

Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow, which is radiation from one molecule to another: a charged particle vibrates, and vibrations involve accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle, which causes vibration, and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light.
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat:

https://www.quora.com/What-is-heat-1
https://www.quora.com/What-is-meant-by-heat
https://www.quora.com/What-is-heat-in-physics
https://www.quora.com/What-is-the-definition-of-heat
https://www.quora.com/What-distinguishes-work-and-heat

Physics is a mess. What gamma rays are depends on whom you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing....

I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality.

It just means we're using averages rather than discrete actualities, and it's close enough.

I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is.

I think it means there are really no separate things, and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity, like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than of an existing truly boundless thing. Infinity simply means you're looking at yourself.

Anyway, great post! Please don't be mad. Everyone here values your presence and is intimidated by your obvious mathematical prowess. Don't take my pushback too seriously; I'd prefer if we could collaborate as colleagues rather than compete.
General Advance-Placement (AP) Statistics Curriculum - Inferences About Two Means: Dependent Samples

In the previous chapter we saw how to do significance testing in the case of a single random sample. Now we show how to do hypothesis testing comparing two samples, beginning with the simple case of paired samples.

Inferences About Two Means: Dependent Samples

In all study designs, it is critical to clearly identify whether the samples we compare come from dependent or independent populations. There is a general formulation for significance testing when the samples are independent. Because there may be uncountably many different types of dependencies, we cannot have one analysis protocol for all dependent-sample cases. However, in one specific case - paired samples - the significance-testing protocol generalizes directly. Two populations (or samples) are dependent because of pairing (paired) if they are linked in some way, usually by a direct relationship; for example, measuring the weight of subjects before and after a six-month diet.

Paired Designs

These are the most common paired designs, in which the idea of pairing is that members of a pair are similar to each other with respect to extraneous variables:
- Randomized block experiments with two units per block
- Observational studies with individually matched controls (e.g., clinical trials of drug efficacy - patient pre- vs. post-treatment results are compared)
- Repeated (time- or treatment-affected) measurements on the same individual
- Blocking by time - formed implicitly when replicate measurements are made at different times

Recall that for a random sample $\{X_1, X_2, \ldots, X_n\}$ of the process, the population mean may be estimated by the sample average, $\overline{X}=\frac{1}{n}\sum_{i=1}^{n}X_i$. The standard error of $\overline{X}$ is given by $\frac{s}{\sqrt{n}}$, where $s$ is the sample standard deviation.

Analysis Protocol for Paired Designs

To study paired data, we examine the differences between each pair. Suppose $\{X_1, \ldots, X_n\}$ and $\{Y_1, \ldots, Y_n\}$ represent the two paired samples. Then we study the difference sample $\{d_i = X_i - Y_i,\ i = 1, \ldots, n\}$. Notice the effect of the pairing of each $X_i$ and $Y_i$: the group effect (group differences) is directly represented in the $\{d_i\}$ sequence. The one-sample T test is the proper strategy to analyze the difference sample $\{d_i\}$, provided the $X_i$ and $Y_i$ samples come from Normal distributions.

Since we are focusing on the differences, we can use the same reasoning as in the single-sample case to calculate the standard error (i.e., the standard deviation of the sampling distribution of $\overline{d}$) of $\overline{d}$. Thus, the standard error of $\overline{d}$ is given by $SE(\overline{d})=\frac{s_d}{\sqrt{n}}$, where $s_d=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(d_i-\overline{d})^2}$.

Confidence Interval of the Difference of Means

The interval estimate (confidence interval) of the difference of two means is constructed as follows. Choose a confidence level $(1-\alpha)100\%$, where $\alpha$ is small (e.g., 0.1, 0.05, 0.025, 0.01, 0.001, etc.). Then a $(1-\alpha)100\%$ confidence interval for $\mu_1-\mu_2$ is defined in terms of the T-distribution:
$$CI(\mu_1-\mu_2):\ \overline{d}\ \pm\ t_{(df=n-1,\ \alpha/2)}\times SE(\overline{d}).$$

Both the confidence intervals and the hypothesis-testing methods in the paired design require Normality of both samples. If these parametric assumptions are invalid, we must use a non-parametric (distribution-free) test, even if the latter is less powerful.
Hypothesis Testing about the Difference of Means

Null Hypothesis: $H_o: \mu_1 - \mu_2 = \mu_o$ (e.g., $\mu_1 - \mu_2 = 0$)

Alternative Research Hypotheses:
- One-sided (uni-directional): $H_1: \mu_1 - \mu_2 > \mu_o$, or $H_1: \mu_1 - \mu_2 < \mu_o$
- Double-sided: $H_1: \mu_1 - \mu_2 \neq \mu_o$

Test Statistics

If the two populations that the $\{X_i\}$ and $\{Y_i\}$ samples were drawn from are approximately Normal, then the test statistic is:
$$T_o = \frac{\overline{d} - \mu_o}{SE(\overline{d})} = \frac{\overline{d} - \mu_o}{s_d/\sqrt{n}} \sim T(df = n-1).$$

Effects of Ignoring the Pairing

The SE estimate will be smaller for correctly paired data. If we look within each sample at the data, we notice variation from one subject to the next. This information gets incorporated into the SE for the independent t-test via $s_1$ and $s_2$. The original reason we paired was to try to control for some of this inter-subject variation, which is not of interest in the paired design. Notice that the inter-subject variation has no influence on the SE for the paired test, because only the differences are used in the calculation. The price of pairing is smaller degrees of freedom for the T-test. However, this can be compensated for by a smaller SE if we pair correctly.

Pairing is used to reduce bias and increase precision in our inference. By matching/blocking we can control variation due to extraneous variables. For example, if two groups are matched on age, then a comparison between the groups is free of any bias due to a difference in age distribution.

Pairing is a strategy of design, not an analysis tool. Pairing needs to be carried out before the data are observed. It is not correct to use the observations to form pairs after the data have been collected.

Example

Suppose we measure the thickness of plaque (mm) in the carotid artery of 10 randomly selected patients with mild atherosclerotic disease. Two measurements are taken: thickness before treatment with Vitamin E (baseline) and after two years of taking Vitamin E daily. Formulate a testable hypothesis and make inference about the effect of the treatment at $\alpha = 0.05$. What makes this paired data rather than independent data? Why would we want to use pairing in this example?

Data in row format:
Before: 0.66, 0.72, 0.85, 0.62, 0.59, 0.63, 0.64, 0.70, 0.73, 0.68
After: 0.60, 0.65, 0.79, 0.63, 0.54, 0.55, 0.62, 0.67, 0.68, 0.64

Data in column format:

Subject | Before | After | Difference
1 | 0.66 | 0.60 | 0.06
2 | 0.72 | 0.65 | 0.07
3 | 0.85 | 0.79 | 0.06
4 | 0.62 | 0.63 | -0.01
5 | 0.59 | 0.54 | 0.05
6 | 0.63 | 0.55 | 0.08
7 | 0.64 | 0.62 | 0.02
8 | 0.70 | 0.67 | 0.03
9 | 0.73 | 0.68 | 0.05
10 | 0.68 | 0.64 | 0.04
Mean | 0.682 | 0.637 | 0.045
SD | 0.0742 | 0.0709 | 0.0264

We begin by exploring the data visually using various SOCR EDA tools: a line chart of the two samples, a box-and-whisker plot of the two samples, and an index plot of the differences.

Inference

Null Hypothesis: $H_o: \mu_{before} - \mu_{after} = 0$

(One-sided) Alternative Research Hypothesis: $H_1: \mu_{before} - \mu_{after} > 0$

Test statistic: using the sample summary statistics,
$$T_o = \frac{\overline{d} - 0}{SE(\overline{d})} = \frac{0.045}{0.0264/\sqrt{10}} \approx 5.4022 \sim T(df=9).$$
The p-value is $P(T(df=9) > 5.4022) = 0.000216$ for this one-sided test. Therefore, we can reject the null hypothesis at $\alpha = 0.05$! The white area in the right tail of the $T(df=9)$ distribution depicts graphically the probability of interest, which represents the strength of the evidence (in the data) against the null hypothesis. In this case, this area is 0.000216, which is much smaller than the preset Type I error rate $\alpha = 0.05$, so we reject the null hypothesis. You can also use the SOCR Analyses (One-Sample T-Test) to carry out these calculations as shown in the figure below.
This SOCR One Sample T-test Activity provides additional hands-on demonstrations of one-sample hypothesis testing for the difference in paired experiments.

The 95% = (1 − 0.05)100% ($\alpha = 0.05$) confidence interval for the difference (before − after) is:
$$CI(\mu_{before} - \mu_{after}):\ \overline{d} \pm t_{(df=9,\ 0.025)} \times SE(\overline{d}) = 0.045 \pm 2.262 \times \frac{0.0264}{\sqrt{10}} \approx [0.026;\ 0.064].$$

Conclusion

These data show that the true mean thickness of plaque after two years of treatment with Vitamin E is statistically significantly different from before the treatment (p = 0.000216). In other words, Vitamin E appears to be effective in changing carotid artery plaque after treatment. The practical effect appears to be < 60 microns; however, this may be clinically sufficient and justify patient treatment.

Paired Test Validity

Both the confidence intervals and the hypothesis testing methods in the paired design require Normality of both samples. If these parametric assumptions are invalid, we must use a non-parametric (distribution-free) test, even if the latter is less powerful. The plots below (a quantile-quantile data-data plot of the two datasets and a QQ-Normal plot of the before data) indicate that Normal assumptions are not unreasonable for these data, and hence we may be justified in using the one-sample T-test in this case.

Paired vs. Independent Testing

Suppose we accidentally analyzed the groups independently (using the independent T-test) rather than using this paired test (this would be an incorrect way of analyzing this before-after data). How would this change our results and findings?

\(T_o = {\overline{x}-\overline{y} - \mu_o \over SE(\overline{x}-\overline{y})} \sim T(df=17)\)

\(T_o = {\overline{x}-\overline{y} - \mu_o \over \sqrt{SE^2(\overline{x})+SE^2(\overline{y})}}= {0.682 -0.637\over \sqrt{{0.0742^2\over 10}+ {0.0709^2\over 10}}}={0.682 -0.637\over 0.0325}=1.38\)

\(p\text{-value}=P(T>1.38)= 0.100449\), and we would have failed to reject the null hypothesis (incorrect!).

Similarly, had we incorrectly used the independent design and constructed the corresponding confidence interval, we would obtain an incorrect inference:

\(CI: \overline{x}-\overline{y} \pm t_{(df=17, \alpha/2)} \times SE(\overline{x}-\overline{y}) = 0.045 \pm 1.740\times 0.0325 = [-0.0116 ; 0.1016].\)

SOCR Home page: http://www.socr.ucla.edu
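As a quick cross-check (my own sketch, not part of the SOCR materials; it assumes SciPy 1.6 or later for the `alternative` keyword), the paired analysis and the incorrect independent analysis can be reproduced as follows:

```python
# Paired vs. (incorrectly) independent analysis of the plaque data with SciPy.
from scipy import stats

before = [0.66, 0.72, 0.85, 0.62, 0.59, 0.63, 0.64, 0.70, 0.73, 0.68]
after  = [0.60, 0.65, 0.79, 0.63, 0.54, 0.55, 0.62, 0.67, 0.68, 0.64]

# Correct: paired (dependent-samples) one-sided t-test on the differences.
t_paired, p_paired = stats.ttest_rel(before, after, alternative='greater')
print(t_paired, p_paired)   # roughly t = 5.40, p = 0.0002 -> reject H_o

# Incorrect for this design: treating the two columns as independent samples.
t_indep, p_indep = stats.ttest_ind(before, after, equal_var=False,
                                   alternative='greater')
print(t_indep, p_indep)     # roughly t = 1.39, p = 0.09 -> fail to reject
```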
LaTeX/Picture

The picture environment allows programming pictures directly in LaTeX. On the one hand, there are rather severe constraints, as the slopes of line segments and the radii of circles are restricted to a narrow choice of values. On the other hand, the picture environment of LaTeX2e brings with it the \qbezier command, "q" meaning quadratic. Many frequently used curves such as circles, ellipses, and catenaries can be satisfactorily approximated by quadratic Bézier curves, although this may require some mathematical toil. If a programming language like Java is used to generate \qbezier blocks of LaTeX input files, the picture environment becomes quite powerful.

Although programming pictures directly in LaTeX is severely restricted, and often rather tiresome, there are still reasons for doing so. The documents thus produced are "small" with respect to bytes, and there are no additional graphics files to be dragged along. Packages like pict2e, epic, eepic or pstricks enhance the original picture environment, and greatly strengthen the graphical power of LaTeX.

Basic commands

A picture environment is available in any LaTeX distribution, without the need of loading any external package. This environment is created with one of the two commands

\begin{picture}(x, y) ... \end{picture}

or

\begin{picture}(x, y)(x0, y0) ... \end{picture}

The first pair, (x, y), affects the reservation, within the document, of rectangular space for the picture. The optional second pair, (x0, y0), assigns arbitrary coordinates to the bottom left corner of the reserved rectangle. The values x, y, x0, y0 are lengths in units of \unitlength, which can be reset at any time (but not within a picture environment) with a command such as

\setlength{\unitlength}{1.2cm}

The default value of \unitlength is 1pt. Most drawing commands have one of the two forms

\put(x, y){object}

or

\multiput(x, y)(dx, dy){n}{object}

Bézier curves are an exception. They are drawn with the command

\qbezier(x1, y1)(x2, y2)(x3, y3)

With the picture package, absolute dimensions (like 15pt) and expressions are allowed, in addition to numbers relative to \unitlength.

Line segments

Line segments are drawn with the command:

\put(x, y){\line(x1, y1){length}}

The \line command has two arguments: a direction vector and a "length" (sort of: this argument is the vertical extent in the case of a vertical line segment and in all other cases the horizontal extent of the line, rather than the length of the segment itself). The components of the direction vector are restricted to the integers (−6, −5, ..., 5, 6) and they have to be coprime (no common divisor except 1). The figure below illustrates all 25 possible slope values in the first quadrant. The length is relative to \unitlength.
\setlength{\unitlength}{5cm}\begin{picture}(1,1)\put(0,0){\line(0,1){1}}\put(0,0){\line(1,0){1}}\put(0,0){\line(1,1){1}}\put(0,0){\line(1,2){.5}}\put(0,0){\line(1,3){.3333}}\put(0,0){\line(1,4){.25}}\put(0,0){\line(1,5){.2}}\put(0,0){\line(1,6){.1667}}\put(0,0){\line(2,1){1}}\put(0,0){\line(2,3){.6667}}\put(0,0){\line(2,5){.4}}\put(0,0){\line(3,1){1}}\put(0,0){\line(3,2){1}}\put(0,0){\line(3,4){.75}}\put(0,0){\line(3,5){.6}}\put(0,0){\line(4,1){1}}\put(0,0){\line(4,3){1}}\put(0,0){\line(4,5){.8}}\put(0,0){\line(5,1){1}}\put(0,0){\line(5,2){1}}\put(0,0){\line(5,3){1}}\put(0,0){\line(5,4){1}}\put(0,0){\line(5,6){.8333}}\put(0,0){\line(6,1){1}}\put(0,0){\line(6,5){1}}\end{picture} Arrows[edit] Arrows are drawn with the command \put(x, y){\vector(x1, y1){length}} For arrows, the components of the direction vector are even more narrowly restricted than for line segments, namely to the integers ( −4, −3, ... , 3, 4). Components also have to be coprime (no common divisor except 1). Notice the effect of the \thicklines command on the two arrows pointing to the upper left. \setlength{\unitlength}{0.75mm}\begin{picture}(60,40)\put(30,20){\vector(1,0){30}}\put(30,20){\vector(4,1){20}}\put(30,20){\vector(3,1){25}}\put(30,20){\vector(2,1){30}}\put(30,20){\vector(1,2){10}}\thicklines\put(30,20){\vector(-4,1){30}}\put(30,20){\vector(-1,4){5}}\thinlines\put(30,20){\vector(-1,-1){5}}\put(30,20){\vector(-1,-4){5}}\end{picture} Circles[edit] The command \put(x, y){\circle{diameter}} draws a circle with center (x, y) and diameter (not radius) specified by diameter. The picture environment only admits diameters up to approximately 14mm, and even below this limit, not all diameters are possible. The \circle* command produces disks (filled circles). As in the case of line segments, one may have to resort to additional packages, such as eepic, pstricks, or tikz. \setlength{\unitlength}{1mm}\begin{picture}(60, 40)\put(20,30){\circle{1}}\put(20,30){\circle{2}}\put(20,30){\circle{4}}\put(20,30){\circle{8}}\put(20,30){\circle{16}}\put(20,30){\circle{32}}\put(40,30){\circle{1}}\put(40,30){\circle{2}}\put(40,30){\circle{3}}\put(40,30){\circle{4}}\put(40,30){\circle{5}}\put(40,30){\circle{6}}\put(40,30){\circle{7}}\put(40,30){\circle{8}}\put(40,30){\circle{9}}\put(40,30){\circle{10}}\put(40,30){\circle{11}}\put(40,30){\circle{12}}\put(40,30){\circle{13}}\put(40,30){\circle{14}}\put(15,10){\circle*{1}}\put(20,10){\circle*{2}}\put(25,10){\circle*{3}}\put(30,10){\circle*{4}}\put(35,10){\circle*{5}}\end{picture} There is another possibility within the picture environment. If one is not afraid of doing the necessary calculations (or leaving them to a program), arbitrary circles and ellipses can be patched together from quadratic Bézier curves. See Graphics in LaTeX2e for examples and Java source files. Text and formulae[edit] As this example shows, text and formulae can be written in the environment with the \put command in the usual way: \setlength{\unitlength}{0.8cm}\begin{picture}(6,5)\thicklines\put(1,0.5){\line(2,1){3}}\put(4,2){\line(-2,1){2}}\put(2,3){\line(-2,-5){1}}\put(0.7,0.3){$A$}\put(4.05,1.9){$B$}\put(1.7,2.95){$C$}\put(3.1,2.5){$a$}\put(1.3,1.7){$b$}\put(2.5,1.05){$c$}\put(0.3,4){$F=\sqrt{s(s-a)(s-b)(s-c)}$}\put(3.5,0.4){$\displaystyle s:=\frac{a+b+c}{2}$}\end{picture} \multiput and \linethickness[edit] The command \multiput(x, y)(dx, dy ){n}{object} has 4 arguments: the starting point, the translation vector from one object to the next, the number of objects, and the object to be drawn. 
The \linethickness command applies to horizontal and vertical line segments, but neither to oblique line segments, nor to circles. It does, however, apply to quadratic Bézier curves!

\setlength{\unitlength}{2mm}
\begin{picture}(30,20)
\linethickness{0.075mm}
\multiput(0,0)(1,0){26}%
{\line(0,1){20}}
\multiput(0,0)(0,1){21}%
{\line(1,0){25}}
\linethickness{0.15mm}
\multiput(0,0)(5,0){6}%
{\line(0,1){20}}
\multiput(0,0)(0,5){5}%
{\line(1,0){25}}
\linethickness{0.3mm}
\multiput(5,0)(10,0){2}%
{\line(0,1){20}}
\multiput(0,5)(0,10){2}%
{\line(1,0){25}}
\end{picture}

Ovals

The command

\put(x, y){\oval(w, h)}

or

\put(x, y){\oval(w, h)[position]}

produces an oval centered at (x, y) and having width w and height h. The optional position arguments b, t, l, r refer to "bottom", "top", "left", "right", and can be combined, as the example illustrates.

Line thickness can be controlled by two kinds of commands: \linethickness{length} on the one hand, \thinlines and \thicklines on the other. While \linethickness{length} applies only to horizontal and vertical lines (and quadratic Bézier curves), \thinlines and \thicklines apply to oblique line segments as well as to circles and ovals.

\setlength{\unitlength}{0.75cm}
\begin{picture}(6,4)
\linethickness{0.075mm}
\multiput(0,0)(1,0){7}%
{\line(0,1){4}}
\multiput(0,0)(0,1){5}%
{\line(1,0){6}}
\thicklines
\put(2,3){\oval(3,1.8)}
\thinlines
\put(3,2){\oval(3,1.8)}
\thicklines
\put(2,1){\oval(3,1.8)[tl]}
\put(4,1){\oval(3,1.8)[b]}
\put(4,3){\oval(3,1.8)[r]}
\put(3,1.5){\oval(1.8,0.4)}
\end{picture}

Multiple use of predefined picture boxes

A picture box can be declared by the command

\newsavebox{name}

then defined by

\savebox{name}(width,height)[position]{content}

and finally arbitrarily often be drawn by

\put(x, y){\usebox{name}}

The optional position parameter has the effect of defining the "anchor point" of the savebox. In the example it is set to "bl", which puts the anchor point into the bottom left corner of the savebox. The other position specifiers are top and right. The name argument refers to a LaTeX storage bin and therefore is of a command nature (which accounts for the backslashes in the current example). Boxed pictures can be nested: in this example, \foldera is used within the definition of \folderb. The \oval command had to be used because the \line command does not work if the segment length is less than about 3 mm.

\setlength{\unitlength}{0.5mm}
\begin{picture}(120,168)
\newsavebox{\foldera}
\savebox{\foldera}(40,32)[bl]{% definition
  \multiput(0,0)(0,28){2}{\line(1,0){40}}
  \multiput(0,0)(40,0){2}{\line(0,1){28}}
  \put(1,28){\oval(2,2)[tl]}
  \put(1,29){\line(1,0){5}}
  \put(9,29){\oval(6,6)[tl]}
  \put(9,32){\line(1,0){8}}
  \put(17,29){\oval(6,6)[tr]}
  \put(20,29){\line(1,0){19}}
  \put(39,28){\oval(2,2)[tr]}}
\newsavebox{\folderb}
\savebox{\folderb}(40,32)[l]{% definition
  \put(0,14){\line(1,0){8}}
  \put(8,0){\usebox{\foldera}}}
\put(34,26){\line(0,1){102}}
\put(14,128){\usebox{\foldera}}
\multiput(34,86)(0,-37){3}{\usebox{\folderb}}
\end{picture}

Quadratic Bézier curves

The command

\qbezier(x1, y1)(x, y)(x2, y2)

draws a quadratic Bézier curve where (x1, y1) and (x2, y2) denote the end points, and (x, y) denotes the intermediate control point. The respective tangent slopes at the end points, m1 and m2, can be obtained from the equations

$$m_1 = \frac{y - y_1}{x - x_1} \qquad\text{and}\qquad m_2 = \frac{y_2 - y}{x_2 - x}. \tag{*}$$

See Graphics in LaTeX2e for a Java program which generates the necessary \qbezier command line.
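To make equation (*) concrete, here is a small Python sketch (my own illustration, not part of the original text) that inverts it: given the two end points and the tangent slopes there, it returns the control point as the intersection of the two end tangents. The values it prints match the control points used in the catenary and tanh examples below.

```python
# Computing the \qbezier control point as the intersection of the end tangents.
import math

def control_point(p1, m1, p2, m2):
    """Intersection of the tangent lines through p1 (slope m1) and p2 (slope m2)."""
    (x1, y1), (x2, y2) = p1, p2
    x = (y2 - y1 + m1 * x1 - m2 * x2) / (m1 - m2)
    y = y1 + m1 * (x - x1)
    return x, y

# Right half of the catenary y = cosh(x) - 1 on [0, 2]
print(control_point((0, 0), 0.0, (2, math.cosh(2) - 1), math.sinh(2)))
# -> approximately (1.2384, 0.0)

# Positive branch of beta = tanh(chi) on [0, 2]
print(control_point((0, 0), 1.0, (2, math.tanh(2)), 1 / math.cosh(2) ** 2))
# -> approximately (0.8853, 0.8853)
```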
\setlength{\unitlength}{0.8cm}
\begin{picture}(6,4)
\linethickness{0.075mm}
\multiput(0,0)(1,0){7}{\line(0,1){4}}
\multiput(0,0)(0,1){5}{\line(1,0){6}}
\thicklines
\put(0.5,0.5){\line(1,5){0.5}}
\put(1,3){\line(4,1){2}}
\qbezier(0.5,0.5)(1,3)(3,3.5)
\thinlines
\put(2.5,2){\line(2,-1){3}}
\put(5.5,0.5){\line(-1,5){0.5}}
\linethickness{1mm}
\qbezier(2.5,2)(5.5,0.5)(5,3)
\thinlines
\qbezier(4,2)(4,3)(3,3)
\qbezier(3,3)(2,3)(2,2)
\qbezier(2,2)(2,1)(3,1)
\qbezier(3,1)(4,1)(4,2)
\end{picture}

As this example illustrates, splitting up a circle into 4 quadratic Bézier curves is not satisfactory. At least 8 are needed. The figure again shows the effect of the \linethickness command on horizontal or vertical lines, and of the \thinlines and the \thicklines commands on oblique line segments. It also shows that both kinds of commands affect quadratic Bézier curves, each command overriding all previous ones.

Catenary

\setlength{\unitlength}{1cm}
\begin{picture}(4.3,3.6)(-2.5,-0.25)
\put(-2,0){\vector(1,0){4.4}}
\put(2.45,-.05){$x$}
\put(0,0){\vector(0,1){3.2}}
\put(0,3.35){\makebox(0,0){$y$}}
\qbezier(0.0,0.0)(1.2384,0.0)(2.0,2.7622)
\qbezier(0.0,0.0)(-1.2384,0.0)(-2.0,2.7622)
\linethickness{.075mm}
\multiput(-2,0)(1,0){5}{\line(0,1){3}}
\multiput(-2,0)(0,1){4}{\line(1,0){4}}
\linethickness{.2mm}
\put(.3,.12763){\line(1,0){.4}}
\put(.5,-.07237){\line(0,1){.4}}
\put(-.7,.12763){\line(1,0){.4}}
\put(-.5,-.07237){\line(0,1){.4}}
\put(.8,.54308){\line(1,0){.4}}
\put(1,.34308){\line(0,1){.4}}
\put(-1.2,.54308){\line(1,0){.4}}
\put(-1,.34308){\line(0,1){.4}}
\put(1.3,1.35241){\line(1,0){.4}}
\put(1.5,1.15241){\line(0,1){.4}}
\put(-1.7,1.35241){\line(1,0){.4}}
\put(-1.5,1.15241){\line(0,1){.4}}
\put(-2.5,-0.25){\circle*{0.2}}
\end{picture}

In this figure, each symmetric half of the catenary $y = \cosh(x) - 1$ is approximated by a quadratic Bézier curve. The right half of the curve ends in the point (2, 2.7622), the slope there having the value m = 3.6269. Using again equation (*), we can calculate the intermediate control points. They turn out to be (1.2384, 0) and (−1.2384, 0). The crosses indicate points of the real catenary. The error is barely noticeable, being less than one percent.

This example points out the use of the optional argument of the \begin{picture} command. The picture is defined in convenient "mathematical" coordinates, whereas by the command \begin{picture}(4.3,3.6)(-2.5,-0.25) its lower left corner (marked by the black disk) is assigned the coordinates (−2.5,−0.25).

Plotting graphs

\setlength{\unitlength}{1cm}
\begin{picture}(6,6)(-3,-3)
\put(-1.5,0){\vector(1,0){3}}
\put(2.7,-0.1){$\chi$}
\put(0,-1.5){\vector(0,1){3}}
\multiput(-2.5,1)(0.4,0){13}{\line(1,0){0.2}}
\multiput(-2.5,-1)(0.4,0){13}{\line(1,0){0.2}}
\put(0.2,1.4){$\beta=v/c=\tanh\chi$}
\qbezier(0,0)(0.8853,0.8853)(2,0.9640)
\qbezier(0,0)(-0.8853,-0.8853)(-2,-0.9640)
\put(-3,-2){\circle*{0.2}}
\end{picture}

The control points of the two Bézier curves were calculated with formulas (*). The positive branch is determined by the end point $P_1 = (0, 0)$ with slope $m_1 = 1$ and the end point $P_2 = (2, \tanh 2)$ with slope $m_2 = 1/\cosh^2 2$. Again, the picture is defined in mathematically convenient coordinates, and the lower left corner is assigned the mathematical coordinates (−3,−2) (black disk).

The picture environment and gnuplot

The powerful scientific plotting package gnuplot has the capability to output directly to a LaTeX picture environment. It is often far more convenient to plot directly to LaTeX, since this saves having to deal with potentially troublesome postscript files.
Plotting scientific data (or, indeed, mathematical figures) this way gives much greater control, and of course typesetting ability, than is available from other means (such as PostScript). Such pictures can then be added to a document with an \input{} command. N.B. gnuplot is a powerful piece of software with a vast array of commands. A full discussion of gnuplot lies beyond the scope of this note. See [1] for a tutorial.
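As a rough illustration of that workflow (the file name below is my own placeholder; older gnuplot releases provide the latex terminal for picture-environment output, while recent ones provide pict2e), the two sides might look like this:

```latex
% Sketch of the gnuplot-to-picture workflow (file name and terminal choice are
% illustrative assumptions, not taken from the text above). In gnuplot:
%     set terminal latex        # or "set terminal pict2e" in recent releases
%     set output 'sineplot.tex'
%     plot sin(x)
% Then pull the generated picture environment into the LaTeX document:
\documentclass{article}
\begin{document}
  \input{sineplot.tex}
\end{document}
```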
As $T$ is separable and complete, by Ulam's theorem, for each $j\geq 1$ we can find a compact subset of $T$, say $K_j$, such that $\rho(K_j)\geq 1-j^{-1}$. If $P\in \mathcal M^{\rho}(T\times A)$, then $$P(K_j\times A)=\rho(K_j)\geq 1-j^{-1}.$$ As $K_j\times A$ is a compact subset of $T\times A$, the set $\mathcal M^{\rho}(T\times A)$ is uniformly tight. By Prokhorov's theorem, as $T\times A$ is separable and complete, $\mathcal M^{\rho}(T\times A)$ has a compact closure in the narrow topology. As this topology is metrizable, to see that $\mathcal M^{\rho}(T\times A)$ is closed, we just have to check sequential closedness. Let $\{ P_n\}\subset \mathcal M^{\rho}(T\times A)$ be a sequence converging in law to $P$. We just have to show that for each open set $O$ of $T$, $P(O\times A)=\rho(O)$. By the portmanteau theorem, $$P(O\times A)\leq \liminf_n P_n(O\times A)=\rho(O).$$ As a closed set in a metric space is a countable intersection of open sets, we also get $P(O^c\times A)\leq \rho(O^c)$, and hence $\rho(O)=P(O\times A)$ for every open set $O$.

Theorem (Ulam). Let $(S,d)$ be a complete separable metric space. Then each Borel probability measure on $S$ is tight.

Proof: Let $\{x_n\}$ be a dense sequence in $S$, and let $\varepsilon>0$. For each $j$, let $N_j$ be an integer such that $P\left(\bigcup_{l=1}^{N_j}B(x_l,j^{-1})\right)\geq 1-2^{-j}\varepsilon$. Then $K:=\bigcap_{j\geq 1}\overline{\bigcup_{l=1}^{N_j}B(x_l,j^{-1})}$ is totally bounded and closed, hence compact (by completeness), and $P(K)\geq 1-\varepsilon$.

Theorem (Prokhorov). Let $(S,d)$ be a separable metric space, and $\mathcal P$ a family of Borel probability measures on $S$. If $\mathcal P$ is uniformly tight, that is, for every $\varepsilon>0$ we can find a compact subset $K$ of $S$ such that $P(K)\geq 1-\varepsilon$ for each element $P$ of $\mathcal P$, then $\mathcal P$ is relatively compact for the narrow topology.
Let $X$ be a non-vanishing real analytic vector field on an open manifold $M$. What kind of obstructions would appear when we search for a Riemannian metric on $M$ such that the space of harmonic functions would be invariant under the derivation operator $f \mapsto X.f$? A harmonic function is a function $f$ which satisfies $\Delta_g (f)=0$, where $\Delta_g$ is the Laplace operator associated to the metric $g$.

(A partial answer; the full picture is likely to be very complicated, with special behaviors in dimensions 1 and 2 compared to higher ones.)

Dimension 1

Given a Riemannian metric on a one-dimensional manifold there is, up to a sign, a unique "unit-length" vector field $Y$ tangent to $M$, and in this case you must have $\Delta_g = Y^2$. It is easy to see that $X$ preserves harmonic functions if and only if $[X,Y] = 0$, which implies $X = cY$ for some constant $c$. Therefore the only obstruction to the existence of a suitable metric is whether $X$ vanishes: if $X$ is a non-vanishing vector field, then you can define a co-metric by $X\otimes X$. If $X$ vanishes at some point, then such a metric doesn't exist.

General Dimensions

Given a metric $g$, it is well known that $$ [X, \Delta_g]f = - {}^{(0)}\pi^{ab} \nabla^2_{ab} f - g^{ab} \cdot {}^{(1)}\pi_{ab}{}^c \nabla_c f \tag{*}$$ where $$ -{}^{(0)}\pi^{ab} = \mathcal{L}_X g^{ab} = - \nabla^a X^b - \nabla^b X^a \tag{1}$$ and $${}^{(1)}\pi_{ab}{}^c = \frac12 g^{cd} (\nabla_a {}^{(0)}\pi_{bd} + \nabla_b {}^{(0)}\pi_{ad} - \nabla_d {}^{(0)} \pi_{ab} ). \tag{2} $$

For $X$ to preserve the space of harmonic functions, a sufficient condition (necessary and sufficient if you work only locally) can be found by examining the equations above. First note that it is not necessary that $[X,\Delta_g] = 0$. If we are only interested in preserving the harmonic functions, it suffices that $[X, \Delta_g] \propto \Delta_g$. Examining (*), this means that:

- We want ${}^{(0)}\pi^{ab} = k g^{ab}$ for some function $k$. This means that $X$ is a conformal isometry of $g$.
- We want ${}^{(1)}\pi_{ab}{}^c g^{ab}$ to vanish. Together with the previous statement, this requires $$ 0 = (2-n) \nabla^c k,$$ where $n$ is the number of spatial dimensions.

So (as is well known), we conclude that in 2 dimensions a sufficient condition is that $X$ is a conformal isometry of $g$, and in higher dimensions we require that $X$ is a homothety. Now, homotheties also obey the Jacobi equation, in the form $$ \nabla_a \nabla_b X_c+ R_{bcad} X^d = 0 $$ (for one choice of sign of the Riemann curvature tensor). A particular consequence is that if $X$ and $\nabla X$ both vanish at a point $p$, then $X$ vanishes everywhere. This means that we have an obstruction:

Obstruction: Let $n \geq 3$. If there exists a point $p$ such that the vector field $X$ vanishes to second order (in the sense that for any function $f$ and any other vector field $Y$, both $Xf|_p = 0 = YXf|_p$), then there does not exist any metric $g$ on $M$ such that $X$ preserves the harmonic functions.

Remark: In $n = 2$, this is not an obstruction. Consider the vector field $(y^2 - x^2) \partial_x - 2 xy \partial_y$ on $\mathbb{R}^2$. It is simple to check that this vector field preserves the harmonic functions on $\mathbb{R}^2$, but it vanishes to second order at the origin.
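As a quick sanity check of that remark (my own verification, not part of the answer above), one can confirm with SymPy that for this particular vector field the commutator with the flat Laplacian is proportional to the Laplacian, namely $\Delta(Xf)=X(\Delta f)-4x\,\Delta f$, so harmonic functions are sent to harmonic functions:

```python
# Verify: for X = (y^2 - x^2) d/dx - 2xy d/dy on R^2,
#   Laplacian(Xf) = X(Laplacian f) - 4x * Laplacian f
# for an arbitrary smooth f, hence X preserves harmonic functions.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

a, b = y**2 - x**2, -2*x*y                 # components of X
Xf = a * sp.diff(f, x) + b * sp.diff(f, y)

lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)

identity = sp.expand(
    lap(Xf)
    - (a * sp.diff(lap(f), x) + b * sp.diff(lap(f), y))
    + 4 * x * lap(f)
)
print(sp.simplify(identity))  # 0, confirming the claimed identity
```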
This question already has an answer here: Series converges implies $\lim n a_n = 0$ (14 answers)

This came up in a friend's exam and it must be one of those $\epsilon, N(\epsilon)$ arguments I could do in a snap in my twenties, but now I can't figure out how the proof should go: for a positive non-increasing sequence $\{a_k\}_{k \in \mathbb{N}}$, if $$\sum_{k=1}^{\infty}a_k$$ converges, then $b_n=na_n \to 0$. Thanks in advance!
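For reference, here is one standard way the argument can go (a sketch I am adding here for convenience, not taken from the linked duplicate), using the Cauchy criterion together with monotonicity. Writing $S_n=\sum_{k=1}^n a_k$,

$$\frac{n}{2}\,a_n \;\le\; \bigl(n-\lfloor n/2\rfloor\bigr)\,a_n \;\le\; \sum_{k=\lfloor n/2\rfloor+1}^{n} a_k \;=\; S_n-S_{\lfloor n/2\rfloor}\;\xrightarrow[n\to\infty]{}\;0,$$

where the middle inequality uses $a_k\ge a_n$ for $k\le n$, and the limit holds because $(S_n)$ is a Cauchy sequence. Hence $n\,a_n\le 2\bigl(S_n-S_{\lfloor n/2\rfloor}\bigr)\to 0$.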
Bernoulli, Volume 20, Number 4 (2014), 2217-2246.

Adaptive sensing performance lower bounds for sparse signal detection and support estimation

Abstract

This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in (IEEE Trans. Inform. Theory 57 (2011) 6222–6235) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if $\mathbf{x}\in\mathbb{R}^{n}$ is a sparse vector with $s$ non-zero components, then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds $\sqrt{2/s}$. Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds $\sqrt{2\log s}$. Notably there is no dependence on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal, and cannot be substantially improved. In addition, these results provide further insights on the limits of adaptive compressive sensing.

Article information

Source: Bernoulli, Volume 20, Number 4 (2014), 2217-2246.
Dates: First available in Project Euclid: 19 September 2014
Permanent link to this document: https://projecteuclid.org/euclid.bj/1411134458
Digital Object Identifier: doi:10.3150/13-BEJ555
Mathematical Reviews number (MathSciNet): MR3263103
Zentralblatt MATH identifier: 1357.94030

Citation: Castro, Rui M. Adaptive sensing performance lower bounds for sparse signal detection and support estimation. Bernoulli 20 (2014), no. 4, 2217--2246. doi:10.3150/13-BEJ555. https://projecteuclid.org/euclid.bj/1411134458
Title: Single molecule magnetism in a family of mononuclear β-diketonate lanthanide(III) complexes: rationalization of magnetic anisotropy in complexes of low symmetry
Publication Type: Journal Article
Year of Publication: 2013
Authors: Chilton, NF, Langley, SK, Moubaraki, B, Soncini, A, Batten, SR, Murray, KS
Journal: Chemical Science
Volume: 4
Pagination: 1719–1730

Abstract

The use of an amino-pyridyl substituted $\beta$-diketone, N-(2-pyridyl)-ketoacetamide (paaH), has allowed for the isolation of two new families of isostructural mononuclear lanthanide complexes with general formulae [Ln(paaH*)$_2$(H$_2$O)$_4$][Cl]$_3$$\cdot$H$_2$O (Ln = Gd (1), Tb (2), Dy (3), Ho (4), Er (5) and Y (6)) and [Ln(paaH*)$_2$(NO$_3$)$_2$(MeOH)][NO$_3$] (Ln = Tb (7), Dy (8), Ho (9) and Er (10)). The dysprosium members of each family (3 and 8) show interesting slow magnetic relaxation features. Compound 3 displays Single Molecule Magnet (SMM) behaviour in zero DC field with an energy barrier to thermal relaxation of $E_a = 177(4)$ K (123(2) cm$^{-1}$) with $\tau_0 = 2.5(8) \times 10^{-7}$ s, while compound 8 shows slow relaxation of the magnetization under an optimum DC field of 0.2 T with an energy barrier to thermal relaxation of $E_a = 64$ K (44 cm$^{-1}$) with $\tau_0 = 6.2 \times 10^{-7}$ s. Ab initio multiconfigurational calculations of the Complete Active Space type have been employed to elucidate the electronic and magnetic structure of the low-lying energy levels of compounds 2-5 and 8. The orientations of the anisotropic magnetic moments for compounds 2-5 are rationalized using a clear, succinct, and chemically intuitive method based on the electrostatic repulsion of the aspherical electron density distributions of the lanthanides.

DOI: 10.1039/C3SC22300K
Library: Soncini Group
Edit: unfortunately I did botch the crucial computation. So, surprisingly, the map $f$ defined below need not be locally Lipschitz in the $C^{k,\alpha}$ space. Let me give an example in dimension $1$: let $v(x)=\frac23 |x|^{3/2}\in C^{1,\frac12}$, and consider for small $\varepsilon$ the functions $g(x)=\varepsilon +x$ and $h(x)=x$. Then obviously $\lVert g-h\rVert_{C^{1,\frac12}}=\varepsilon$, but we have for $x>0$
$$(v\circ g-v\circ h)'(x)=\sqrt{\varepsilon+x}-\sqrt{x}=\frac{\varepsilon}{\sqrt{\varepsilon+x}+\sqrt{x}} \to \sqrt{\varepsilon} \quad\mbox{as } x\to 0,$$
so $\lVert v\circ g-v\circ h\rVert_{C^{1,\frac12}} \ge \sqrt{\varepsilon}$. Even if this counterexample does not rule out the possibility that what I asked holds (the flow of $v$ above does have the wanted property), it still makes me doubt.

Previous version (hopefully still useful in other regularities)

Since references seem elusive, let me propose a simple proof. The idea is simply to apply the Picard–Lindelöf theorem to the right object.

First, since $M$ is compact it has positive injectivity radius, and its (global) exponential map $\exp:TM\to M$ induces a diffeomorphism $(x,u)\mapsto (x,\exp_x(u))$ from a neighborhood of the zero section of $TM$ to a neighborhood of the diagonal in $M\times M$ (of course this is already the Picard–Lindelöf theorem, but in the more classical smooth regularity and without the need for the "global derivative" part of the question). Pulling back by $\exp$, we can identify the diffeomorphisms of $M$ which are uniformly close to the identity with vector fields (which are then uniformly close to zero). We use the letter $\Phi$ to denote a diffeomorphism seen as a point of the resulting open set $\Omega$ of $\Gamma^{k,\alpha}$ (the space of $C^{k,\alpha}$ vector fields), and the letter $V$ to denote an element of $\Gamma^{k,\alpha}$, seen as a tangent vector to $\Omega$.

Let $v\in \Gamma^{k,\alpha}$ be our given vector field, and define $f:\Omega\to \Gamma^{k,\alpha}$ by
$$f(\Phi) = v\circ \Phi.$$
Then $f$ is locally Lipschitz with respect to the $C^{k,\alpha}$ norm (the norm that makes $\Gamma^{k,\alpha}$ a Banach space). This follows from a certainly classical, but somewhat tedious, computation that I hope I got right (the same computation needed to show that $C^{k,\alpha}$ regularity is stable under products and under composition; this last item requires $k\ge1$).

Applying the Picard–Lindelöf theorem to the differential equation $\Phi'(t)=f(\Phi(t))$ in $\Omega\subset \Gamma^{k,\alpha}$ thus yields a unique maximal solution starting at $\Phi(0)=\mathrm{Id}_M$. This solution is obviously the flow of $v$, and the wanted differentiability in the $C^{k,\alpha}$ norm now follows from the equation itself.
How to Model Moisture Flow in COMSOL Multiphysics®

Computing laminar and turbulent moisture flows in air is both flexible and user friendly with the Moisture Flow multiphysics interfaces and coupling in the COMSOL Multiphysics® software. Available as of version 5.3a, this comprehensive set of functionality can be used to model coupled heat and moisture transport in air and building materials. Let's learn how the Moisture Flow interface complements existing functionality, while highlighting its benefits.

Modeling Heat and Moisture Transport

Modeling the transport of heat and moisture through porous materials, or from the surface of a fluid, often involves including the surrounding media in the model in order to get accurate estimates of the conditions at the material surfaces. In investigations of the hygrothermal behavior of building envelopes, food packaging, and other common engineering problems, the surrounding medium is typically moist air (air with water vapor).

Moist air is the surrounding medium for applications such as building envelopes (illustration, left) and solar food drying (right). Right image by ArianeCCM — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.

When considering porous media, the moisture transport process, which includes capillary flow, bulk flow, and binary diffusion of water vapor in air, depends on the nature of the material. In moist air, moisture is transported by diffusion and advection, where the advecting flow field in most cases is turbulent.

Computing heat and moisture transport in moist air requires the resolution of three sets of equations:
- The Navier-Stokes equations, to compute the airflow velocity field \mathbf{u} and pressure p
- The energy equation, to compute the temperature T
- The moisture transport equation, to compute the relative humidity \phi

These equations are coupled together through the pressure, temperature, and relative humidity, which are used to evaluate the properties of air (density \rho(p,T,\phi), viscosity \mu(T,\phi), thermal conductivity k(T,\phi), heat capacity C_p(T,\phi), and molecular diffusivity D(T)), and through the velocity field used for convective transport. With the addition of the Moisture Flow multiphysics interface in version 5.3a, COMSOL Multiphysics defines all three of these equations in a few steps, as shown in the figure below.

Using the Moisture Flow Multiphysics Interface

Whenever studying the flow of moist air, two questions should be asked:
- Does the flow depend on the moisture distribution?
- Does the nature of the flow require the use of a turbulence model?

If the answer is "yes" to at least one of these questions, then you should consider using the Moisture Flow multiphysics interfaces, found under the Chemical Species Transport branch.

The Moisture Flow group under the Chemical Species Transport branch of the Physics Wizard, with the single-physics interfaces and coupling node added with each version of the Moisture Flow predefined multiphysics interface.

The Laminar Flow version of the multiphysics interface combines the Moisture Transport in Air interface with the Laminar Flow interface and adds the Moisture Flow coupling. Similarly, each version under Turbulent Flow combines the Moisture Transport in Air interface and the corresponding Turbulent Flow interface and adds the Moisture Flow coupling.
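To make the moist-air property evaluation mentioned above concrete, here is a small standalone sketch (plain Python, not COMSOL code; the Magnus-type saturation-pressure fit and the gas constants are common textbook values that I am assuming here) of how the mixture density depends on pressure, temperature, and relative humidity:

```python
# Moist-air density rho(p, T, phi) as an ideal mixture of dry air and water vapor.
import math

R_DRY, R_VAP = 287.058, 461.495   # specific gas constants, J/(kg*K)

def saturation_pressure(T):
    """Saturation vapor pressure in Pa (Magnus-type fit), T in kelvin."""
    t_c = T - 273.15
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def moist_air_density(p, T, phi):
    """Density of moist air: p total pressure [Pa], T [K], phi relative humidity in [0, 1]."""
    p_vap = phi * saturation_pressure(T)   # partial pressure of water vapor
    p_dry = p - p_vap                      # partial pressure of dry air
    return p_dry / (R_DRY * T) + p_vap / (R_VAP * T)

print(moist_air_density(101325.0, 293.15, 0.0))   # dry air at 20 C, ~1.204 kg/m^3
print(moist_air_density(101325.0, 293.15, 0.5))   # 50% RH air is slightly lighter
```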
Besides providing a user-friendly way to define the coupled set of equations of the moisture flow problem, the multiphysics interfaces for turbulent flow handle the moisture-related turbulence variables required for the fluid flow computation.

Automatic Coupling Between Single-Physics Interfaces

One advantage of using the Moisture Flow multiphysics interface is its usability. When adding the Moisture Flow node through the predefined interface, the software automatically couples the Navier-Stokes equations for the fluid flow and the moisture transport equation (center screenshot in the image below) by using the following variables:
- The density and dynamic viscosity in the Navier-Stokes equations, which depend on the relative humidity variable from the Moisture Transport interface through a mixture formula based on dry air and pure steam properties (left screenshot below)
- The velocity field and absolute pressure variables from the Single-Phase Flow interface, which are used in the moisture transport equation (right screenshot below)

Support for Turbulent Fluid Flow

The performance of the Moisture Flow multiphysics interface is especially attractive when dealing with a turbulent moisture flow. For turbulent flows, the turbulent mixing caused by the eddy diffusivity in the moisture convection is automatically accounted for by the COMSOL® software by enhancing the moisture diffusivity with a correction term based on the turbulent Schmidt number. The Kays-Crawford model is the default choice for the evaluation of the turbulent Schmidt number, but a user-defined value or expression can also be entered directly in the graphical user interface.

Selection of the model for the computation of the turbulent Schmidt number in the user interface of the Moisture Flow coupling.

In addition, for coarse meshes that may not be suitable for resolving the thin boundary layer close to walls, wall functions can be selected or are automatically applied by the software. The wall functions are such that the computational domain is assumed to be located at a distance from the wall, the so-called lift-off position, corresponding to the distance from the wall where the logarithmic layer meets the viscous sublayer (or would meet it if there were no buffer layer in between). The moisture flux at the lift-off position, g_{wf}, which accounts for the flux to and from the wall, is automatically defined by the Moisture Flow interface, based on the relative humidity.

Approximation of the flow field and the moisture flux close to walls when using wall functions in the turbulence model for fluid flow.

Note that the Low-Reynolds and Automatic options for Wall Treatment are also available for some of the RANS models. For more information, read this blog post on choosing a turbulence model.

Mass Conservation Across Boundaries

By using the Moisture Flow interface, appropriate mass conservation is ensured in the fluid flow problem by the Screen and Interior Fan boundary conditions. A continuity condition is also applied to the vapor concentration at boundaries where the Screen feature is applied. For the Interior Fan condition, the mass flow rate is conserved in an averaged way and the vapor concentration is homogenized at the fan outlet, as shown in the figure below.

Average mass flow rate conservation across a boundary with the Interior Fan condition.
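As a rough sketch of the eddy-diffusivity correction described above (my own simplification, not COMSOL internals; the constant turbulent Schmidt number below is a placeholder assumption, not the Kays-Crawford expression the software uses by default):

```python
# Effective moisture diffusivity with a turbulent correction: D_eff = D + nu_T / Sc_T.
def effective_moisture_diffusivity(D_molecular, nu_turbulent, Sc_turbulent=0.7):
    """Molecular diffusivity plus eddy-diffusivity contribution (all in m^2/s)."""
    return D_molecular + nu_turbulent / Sc_turbulent

# Example: vapor-in-air diffusivity ~2.5e-5 m^2/s, eddy viscosity ~1e-3 m^2/s:
print(effective_moisture_diffusivity(2.5e-5, 1.0e-3))  # turbulent mixing dominates
```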
Example: Modeling Evaporative Cooling with the Moisture Flow Interface

Let's consider evaporative cooling at the water surface of a glass of water placed in a turbulent airflow. The Turbulent Flow, Low Reynolds k-ε interface, the Moisture Transport in Air interface, and the Heat Transfer in Moist Air interface are coupled through the Nonisothermal Flow, Moisture Flow, and Heat and Moisture coupling nodes. These couplings compute the nonisothermal airflow passing over the glass, the evaporation from the water surface with the associated latent heat effect, and the transport of both heat and moisture away from this surface.

By using the Automatic option for Wall treatment in the Turbulent Flow, Low Reynolds k-ε interface, wall functions are used if the mesh resolution is not fine enough to fully resolve the velocity boundary layer close to the walls. Convective heat and moisture fluxes at the lift-off position are added by the Nonisothermal Flow and Moisture Flow couplings. The temperature and relative humidity solutions after 20 minutes are shown below, along with the streamlines of the airflow velocity field.

Temperature (left) and relative humidity (right) solutions with the streamlines of the velocity field after 20 minutes.

The temperature and relative humidity fields have a strong resemblance here, which is quite natural since the fields are strongly coupled and since both transport processes have similar boundary conditions in this case. In addition, heat transfer is given by conduction and advection, while mass transfer is described by diffusion and advection. The two transport processes originate from the same physical phenomena: conduction and diffusion come from molecular interactions in the gas phase, while advection is given by the bulk motion of the fluid. Also, the contributions of the eddy diffusivity to the turbulent thermal conductivity and to the turbulent diffusivity originate from the same physical phenomenon, which adds further to the similarity of the temperature and moisture fields.

Next Steps

Learn more about the key features and functionality included with the Heat Transfer Module, an add-on to COMSOL Multiphysics. Read the following blog posts to learn more about heat and moisture transport modeling:
- How to Model Heat and Moisture Transport in Porous Media with COMSOL®
- How to Model Heat and Moisture Transport in Air with COMSOL®

Get a demonstration of the Nonisothermal Flow and Heat and Moisture couplings in the related tutorial models.