GR8677 #5

Alternate Solutions

archard 2010-08-04 14:05:45
It's super easy if you write the kinetic energy as $p^2/2m$; since momentum is conserved, you don't even need to worry about whether or not the collision is elastic. Just write $mgh_1 = p^2/2m$ for ball A before the collision and $Mgh_2 = p^2/2M$ for the combined mass $M$ afterwards. Divide the equations and solve for $h_2$.

flyboy621 2010-11-09 20:18:08
Nice!

timtammy 2011-10-11 00:57:49
far and away the best answer

Shahbaz Ahmed Chughtai 2012-06-09 22:45:14
Nice and Comprehensive Answer. :-)

PorcelainMouse 2008-11-05 15:12:32
Thanks a19grey2. I completely forgot that. I was working this out and the three-step process took me longer than I wanted. I think there is a quicker way. How about this?... Recognize that the first and last steps are really the same: conservation of total energy, where energy moves completely from potential to kinetic and back. For any complete conversion, $mgh = \frac{1}{2}mv^2$. Right? So now we know that $gh_1 = \frac{1}{2}v_1^2$, where $v_1$ is the velocity of ball A just before impact and $h_1$ is its initial height (where all of the energy for this whole situation originates). And we also know, from the same calculation, that $gh_2 = \frac{1}{2}v_2^2$, where $v_2$ is the final velocity of both (stuck-together) balls and $h_2$ is their final height. We want to know the relationship between $h_1$ and $h_2$, so let's try this: $h_2/h_1 = v_2^2/v_1^2$. That's pretty close already. (And you can see where the 1/16 comes from: it's the ratio of the squares of the velocities.) Now, since a19grey2 pointed out we need to use momentum to analyze the impact, we can write $mv_1 = Mv_2$ for the combined mass $M$. But now we only need the ratio $v_2/v_1$. Let's square this, to match the ratio of squares we had relating the two heights: $(v_2/v_1)^2 = (m/M)^2 = 1/16$. Putting this last equation together with the previous height ratio, we can find what we want: $h_2 = h_1/16$. That makes $h_1$ 16 times as large as $h_2$. Okay, it looks harder, but I think it's much faster because the algebra is quicker and easier.

nakib 2010-04-02 11:27:15
Thanks for the neat expression. Memorizing this will save me a minute in the exam for sure [assuming I get a similar kind of problem].

Comments

Blake7 2007-09-11 18:30:50
The bookkeeping is deceptively simple, so get a good night's sleep and commit to the time expenditure in drudge cranking.

tercel 2006-12-01 11:18:28
I'm confused. Why doesn't conservation of energy work to get directly from the initial condition to the solution? In other words, where does the initial energy, $mgh_1$, go?

VanishingHitchwriter 2006-12-01 14:10:43
Not sure if this is what you're asking, but the initial potential energy is converted to kinetic energy. However, when the two objects collide, momentum is transferred and the velocity is changed (hence the kinetic energy is changed). The collision is completely inelastic (a la conservation of momentum).

a19grey2 2008-11-02 21:07:32
To clarify, ANY time that two objects stick together after a collision, energy is lost. Therefore, NEVER use conservation of energy between the before/after parts of a "stick together" collision.

flyboy621 2010-11-09 20:17:31
The energy lost in the collision typically goes into thermal energy, i.e. raising the temperature of the masses. Or it could go into vibrations or some other form of energy that does not contribute to translational motion.

jax 2005-12-01 09:01:20
It seems that only 19% of people got this one right, even though it seems easy. I guess most people probably assumed that the collision was elastic, which is not true. I messed up on that one too... ugh, I hope I don't do something dumb like that on the exam! :)

dnvlgm 2007-11-11 19:22:58
Well, it says the particles stick together. I guess most people read too fast or carelessly, and that's a big problem that I honestly have a lot. Gotta be careful! Good luck with your exam!!! :D
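To check the arithmetic in the solutions above, here is a small Python sketch of the energy/momentum/energy bookkeeping. It assumes ball A of mass $m$ falls from height $h_1$ and sticks to form a combined mass of $4m$, which is what the factor of 16 quoted in the thread implies; treat it as an illustration, not the official solution.

```python
from sympy import symbols, Rational, simplify

m, g, h1 = symbols('m g h1', positive=True)
M = 4*m          # assumed combined mass after the perfectly inelastic collision

# 1) Free fall of ball A:  m*g*h1 = (1/2)*m*v1**2
v1 = (2*g*h1)**Rational(1, 2)

# 2) Momentum conservation through the collision:  m*v1 = M*v2
v2 = m*v1/M

# 3) Rise of the combined mass:  M*g*h2 = (1/2)*M*v2**2
h2 = v2**2/(2*g)

print(simplify(h1/h2))   # -> 16, i.e. h2 = h1/16
```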
Newform invariants

Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.

Basis of coefficient ring in terms of a root \(\nu\) of \(x^{3} - x^{2} - 2519 x + 43659\):

\(\beta_{0} = 1\)
\(\beta_{1} = 64 \nu^{2} + 1664 \nu - 108053\)
\(\beta_{2} = -896 \nu^{2} + 124160 \nu + 1463595\)

\(1 = \beta_0\)
\(\nu = (\beta_{2} + 14 \beta_{1} + 49147)/147456\)
\(\nu^{2} = (-13 \beta_{2} + 970 \beta_{1} + 123838145)/73728\)

For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label.

This newform does not admit any (nontrivial) inner twists.

This newform can be constructed as the kernel of the linear operator \(T_{3}^{3} - 23732 T_{3}^{2} - 1785165264 T_{3} + 70\!\cdots\!20\) acting on \(S_{20}^{\mathrm{new}}(\Gamma_0(8))\).
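As a quick consistency check of the basis data above (a sympy sketch of my own, not LMFDB code), the stated expressions for \(\nu\) and \(\nu^2\) in terms of \(\beta_1, \beta_2\) invert the definitions exactly; in fact they hold as polynomial identities, before even reducing modulo the minimal polynomial of \(\nu\):

```python
from sympy import symbols, expand

nu = symbols('nu')
beta1 = 64*nu**2 + 1664*nu - 108053
beta2 = -896*nu**2 + 124160*nu + 1463595

# the stated inverse relations collapse to nu and nu**2 on expansion
print(expand((beta2 + 14*beta1 + 49147) / 147456))           # prints nu
print(expand((-13*beta2 + 970*beta1 + 123838145) / 73728))   # prints nu**2
```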
GR8677 #6

Problem: Mechanics / Vectors

Since there is only one force acting, i.e., the gravitational force, one can find the tangential acceleration by projecting gravity onto the tangential direction. Equivalently, one dots gravity with the tangential unit vector, $\hat{t}$. There's a long way to do this, wherein one writes out the full Gibbsian vector formalism, and then there's a short and elegant way. (The elegant solution is due to Teodora Popa.) The problem gives $y = x^2/4$. Thus, $dy/dx = x/2 = \tan\theta$, where in the last step one notes that the ratio forms the tangent of the indicated angle. One recalls the Pythagorean identity $\sin^2\theta + \cos^2\theta = 1$, and the definition of $\tan\theta$ in terms of $\sin\theta$ and $\cos\theta$. Thus, one gets $\tan\theta = \sin\theta/\sqrt{1-\sin^2\theta}$. Square both sides to get $x^2/4 = \sin^2\theta/(1-\sin^2\theta)$. Solve to get $\sin\theta = x/\sqrt{x^2+4}$. The angle between the vectors $\vec{g}$ and $\hat{t}$ is $\pi/2 - \theta$, and thus the tangential acceleration is $g\sin\theta = gx/\sqrt{x^2+4}$. Beautiful problem.

Alternate Solutions

pranav 2012-11-02 22:23:53
(Same concept as the one given... but simpler math.) Differentiating y with respect to x gives $dy/dx = x/2 = \tan\theta$. Draw a right triangle with one angle $\theta$. Now label the adjacent side as "2" and the opposite side as "x". The hypotenuse becomes $\sqrt{x^2+4}$. Now we want the tangential acceleration, which would be $g$ multiplied by the cosine of the angle opposite the side labeled "2", i.e. $\cos(90^\circ - \theta) = x/\sqrt{x^2+4}$, since $\vec{g}$ is downward, parallel to the side we labeled "x" (i.e. the dot product, basically). Therefore, the answer is (D), $gx/\sqrt{x^2+4}$.

malianil 2011-11-05 05:52:47
Those who are not inclined to solve it with trigonometric functions can take a snapshot drawing of the force diagram at an arbitrary location (x, y) and then use Euclidean geometry to find that the tangential component of the force is a function of x. It is simpler this way for me.

nakib 2010-04-02 11:51:51
(A) Lol! (B) Of course not! The tangent is not pointing downwards. (C) Wrong units. (D) Right units, maybe correct. Also, it goes to $g$ as $x$ goes to $\infty$. (E) Wrong units. (D) is the answer. The rigorous solutions are beautiful, but are not feasible under GRE exam conditions...

archard 2010-06-05 18:04:39
The problem specifies that the coordinates are dimensionless units, so you can't eliminate C and E.

physicsworks 2010-07-09 09:40:46
nakib is right: as $x \to \infty$ the acceleration must approach $g$. There is only one appropriate choice for this, no matter what dimensions are given.

Shahbaz Ahmed Chughtai 2012-06-09 22:51:44
Very Nice. GRE is designed to use common sense!!! But I salute the website owners for such hard work!

eighthlock 2013-08-11 13:07:06
Notice that the prompt states x and y are dimensionless units. Therefore you can't use dimensional analysis, because (D) and (E) have the same dimensions.

mike1999 2014-07-16 14:15:40
Actually, the problem specifies that x and y are dimensionless, so you can't 100% narrow it down like that (although perhaps you could if you substituted the dimensional quantity). Anyway, you can also get it from limiting cases. As x → 0, the answer should go to 0. As x → ∞, the answer should go to g (because the tangent gets infinitely steep). Thus D is correct.

p3ace 2008-05-15 06:11:52
I apologize for what I just said. It came out wrong and I feel terrible after having reread it. What I meant to say was: if you want to know how to just crank out the answer, this is how I would do it.

p3ace 2008-05-15 06:08:15
The process of elimination is great in a pinch on a test, but it doesn't demonstrate any physics, or in this case math. To me the most straightforward way to do this is: The tangential direction is just the direction of the r vector, r = ix + jy = ix + j(x^2)/4. r hat, the unit vector in the tangential direction, is just r divided by its magnitude, i.e. r hat = [ix + j(x^2)/4]/{x^2 + [(x^2)/4]^2}^(1/2). You can factor an x out of the denominator and cancel it with the x in the numerator, leaving r hat = (i + jx/4)/[1 + (x^2)/16]^(1/2). Now dot the acceleration vector, a = jg, with the unit vector in the tangential direction to get the tangential acceleration, a_tangential = g(x/4)/[1 + x^2/16]^(1/2). Now, to get the form in the answer, multiply through in the denominator by 4. Inside the radical it becomes 16, so that you have a_tangential = gx/(16 + x^2)^(1/2). Voilà, choice D. Sorry, I'm not a LaTeX jockey. To me this is radically simple, more so than the other solutions, just because this is what makes sense to me. I know that others see it differently, and whatever works for you is OK, so this is offered up to those of us who think in these terms.

neon37 2008-10-05 11:42:48
Hey p3ace, you didn't quite get the answer though. Choice D does not have the 16.

ajkp2557 2009-10-27 11:18:17
Good approach, but note that the normalized tangent vector is the time derivative of the position vector (r-dot) divided by the magnitude of r-dot.

shen 2010-08-11 08:25:04
Actually you got the wrong tangential vector. It should be found using the gradient dy/dx: r = [1, x/2]/(1 + x^2/4)^(1/2). You got the correct answer through a careless mistake that happens to make up for your wrong formulation. Cheers. You can check your answers.

kevglynn 2006-10-31 09:10:31
I'm surprised no one noticed this one... As x → ∞, the acceleration must approach g (a → g), so choice (D) is the only possibility.

carlospardo 2007-10-02 17:34:09
Yeah, and also studying units and considering that, obviously, it is not a constant.

Richard 2007-10-31 12:28:05
That's how I did it. There is a similar problem on another GRE exam... the limiting technique works there as well.

Poop Loops 2008-11-02 16:23:51
Limits and boundary conditions are your friends! A math teacher of mine used to tell me: the more math you do, the more room for error there is.

testtest 2010-11-11 18:45:16
That does not eliminate (E).

testtest 2010-11-11 18:46:34
Tzzzzz, what am I saying... Time to go to bed! (Yes, it does eliminate E.)

clmw 2005-11-02 09:08:00
One can remove some of the trig nastiness in the above solution by just noting that the tangential acceleration equals the normalized tangent vector times g. Using x as our simple parameter, the y component of the tangent vector equals dy/dx = x/2 and the x component equals dx/dx = 1. If we normalize this vector (1, x/2) and then multiply g by the normalized y component, we get the answer pretty straightforwardly (without using sin/cos identities). -Chris

Jeremy 2007-11-11 11:57:51
I think this solution is much faster than the official one, so I thought I'd write out the equations. Let $\hat{t}$ represent a unit vector in the tangential direction. We want to find the net tangential acceleration $\vec{g}\cdot\hat{t} = g\cos\alpha$, where $\alpha$ is the angle between $\vec{g}$ and $\hat{t}$. In the end, I guess this is the same idea expressed in the official solution, but without undue trigonometric hardship.

ajkp2557 2009-10-27 11:15:13
Great solution! Side note for those who (like me) have forgotten: the tangent vector is the time derivative of the position vector. (Which makes sense physically, if you think about what the velocity vector is telling us.)

maryrose 2005-11-01 13:08:02
It can also be solved by noting the units and realizing that it cannot be zero or g. That leaves only D.

yosun 2005-11-01 16:04:57
Actually, maryrose, the problem gives "dimensionless units"; thus one can't eliminate the other choices (other than 0 and g) that easily...

mrmeep 2008-09-07 18:46:07
The problem specifically says y and x are unitless, so does that thinking still hold?

rreyes 2005-10-31 09:47:42
Nice solution! :) Let me just note that we can also answer this problem by the method of elimination, by observing that as x → ∞, a must → g. This eliminates all answers except D.

Comments

calvin_physics 2014-03-27 15:04:37
Just ignore the math. Take limits. x = 0: acceleration must be zero. x = big: acceleration must be g. We only have two choices left, (D) and (E). E gives the wrong unit (it gives you an extra x); D has the right unit. Bingo.

fkvkfdlek 2012-08-04 21:07:40
Can this be solved using a Lagrange multiplier? I set up $L' = L - \lambda(\text{constraint})$ and then was trying to rewrite the tangential acceleration, but I can't quite finish the substitution to get the inside of the square root purely in terms of x.

whatever 2011-11-11 15:28:18
A parametrization of the curve is $(x, x^2/4)$; taking its derivative with respect to x and dividing by its length to get the tangential unit vector, we get $(1, x/2)/\sqrt{1+x^2/4}$. Our vector for the acceleration is $(0, g)$. The tangential component of the acceleration is then their inner product, $gx/\sqrt{x^2+4}$: D.

sirius 2008-11-05 19:37:19
Here's an easier way. Only (A), (B), and (D) have the correct units, first of all. You know that the particle is accelerating, and that its acceleration can't be greater than or even equal to g, no matter what x is. Only (D) satisfies these conditions. Why can't the acceleration be g? The track can never be vertical, since y is constrained to a one-to-one function.

jmason86 2009-08-10 19:41:52
As Yosun pointed out below, the problem states that x and y are unitless, so you can't (technically) use dimensional analysis. Limits (x → 0 and x → ∞) solve this whole problem without the need for units, and it is still quick.

StrangeQuark 2007-06-16 09:25:22
I did this problem by noting infinity conditions, which on the test I am more than happy to do; however, in studying I would like a more concrete answer, which I found on your site (thank you). However, I have a question: you say that gravity is the only force acting, but isn't there a constraining force from the track, i.e. a normal force that points perpendicular to the track, that needs to be accounted for?

Jeremy 2007-11-11 11:24:55
I wondered about the official answer's omission of the normal force as well, but now I understand why it's not necessary. We only care about forces that have tangential components, and thus contribute to the tangential acceleration.
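For completeness, here is a short sympy sketch of the "dot gravity with the unit tangent" computation described in the solutions above, assuming the track $y = x^2/4$ and a uniform downward gravitational acceleration of magnitude $g$:

```python
from sympy import symbols, simplify, Matrix

x, g = symbols('x g', positive=True)

y = x**2/4                        # the track
tangent = Matrix([1, y.diff(x)])  # unnormalized tangent (1, dy/dx) = (1, x/2)
t_hat = tangent/tangent.norm()    # unit tangent vector

gravity = Matrix([0, -g])         # gravitational acceleration

# magnitude of the tangential acceleration
a_t = simplify(abs(t_hat.dot(gravity)))
print(a_t)                        # equivalent to g*x/sqrt(x**2 + 4), choice (D)
```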
In the first week of teaching my Calculus 1 discussion section this term, I decided to give the students a Precalc Review Worksheet. Its purpose was to refresh their memories of the basics of arithmetic, algebra, and trigonometry, and see what they had remembered from high school. Surprisingly, it was the arithmetic part that they had the most trouble with. Not things like multiplication and long division of large numbers – those things are taught well in our grade schools – but when they encountered a complicated multi-step arithmetic problem such as the first problem on the worksheet, they were stumped: Simplify: $1+2-3\cdot 4/5+4/3\cdot 2-1$ Gradually, some of the groups began to solve the problem. But some claimed it was $-16/15$, others guessed that it was $34/15$, and yet others insisted that it was $-46/15$. Who was correct? And why were they all getting different answers despite carefully checking over their work? The answer is that the arithmetic simplification procedure that one learns in grade school is ambiguous and sometimes incorrect. In American public schools, students are taught the acronym “PEMDAS”, which stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. This is called the order of operations, which tells you which arithmetic operations to perform first by convention, so that we all agree on what the expression above should mean. But PEMDAS doesn’t work properly in all cases. (This has already been wonderfully demonstrated in several YouTube videos such as this one, but I feel it is good to re-iterate the explanation in as many places as possible.) To illustrate the problem, consider the computation $6-2+3$. Here we’re starting with $6$, taking away $2$, and adding back $3$, so we should end up with $7$. This is what any modern calculator will tell you as well (try typing it into Google!) But if you follow PEMDAS to the letter, it tells you that addition comes before subtraction, and so we would add $2+3$ first to get $5$, and then end up with $6-5=1$. Even worse, what happens if we try to do $6-3-2$? We should end up with $1$ since we are taking away $2$ and $3$ from $6$, and yet if we choose another order in which to do a subtraction first, say $6-(3-2)=6-1$, we get $5$. So, subtraction can’t even properly be done before itself, and the PEMDAS rule does not deal with that ambiguity. Mathematicians have a better convention that fixes all of this. What we’re really doing when we’re subtracting is adding a negative number: $6-2+3$ is just $6+(-2)+3$. This eliminates the ambiguity; addition is commutative and associative, meaning no matter what order we choose to add several things together, the answer will always be the same. In this case, we could either do $6+(-2)=4$ and $4+3=7$ to get the answer of $7$, or we could do $(-2)+3$ first to get $1$ and then add that to $6$ to get $7$. We could even add the $6$ and the $3$ first to get $9$, and then add $-2$, and we’d once again end up with $7$. So now we always get the same answer! There’s a similar problem with division. Is $4/3/2$ equal to $4/(3/2)=8/3$, or is it equal to $(4/3)/2=2/3$? PEMDAS doesn’t give us a definite answer here, and has the further problem of making $4/3\cdot 2$ come out to $4/(3\cdot 2)=2/3$, which again disagrees with Google Calculator. As in the case of subtraction, the fix is to turn all division problems into multiplication problems: we should think of division as multiplying by a reciprocal. 
So in the exercise I gave my students, we’d have $4/3\cdot 2=4\cdot \frac{1}{3}\cdot 2=\frac{8}{3}$, and all the confusion is removed. To finish the problem, then, we would write $$\begin{eqnarray*} 1+2-3\cdot 4/5+4/3\cdot 2-1&=&1+2+(-\frac{12}{5})+\frac{8}{3}+(-1) \\ &=&2+\frac{-36+40}{15} \\ &=&\frac{34}{15}. \end{eqnarray*} $$ The only thing we need to do now is come up with a new acronym. We still follow the convention that Parentheses, Exponents, Multiplication, and Addition come in that order, but we no longer have division and subtraction since we replaced them with better operators. So that would be simply PEMA. But that’s not quite as catchy, so perhaps we could add in the “reciprocal” and “negation” rules to call it PERMNA instead. If you have something even more catchy, post it in the comments below!
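If you want to see these conventions in an actual implementation, here is a tiny Python check. Python (like Google's calculator) applies the same left-to-right rule for operators of equal precedence that the "add the opposite, multiply by the reciprocal" viewpoint produces:

```python
from fractions import Fraction as F

print(6 - 2 + 3)        # 7, not 1: equal-precedence operators go left to right
print(6 - 3 - 2)        # 1
print(F(4) / 3 / 2)     # 2/3, i.e. (4/3)/2, again left to right
print(F(4) / 3 * 2)     # 8/3, not 2/3

# the worksheet problem, done with exact fractions
print(1 + 2 - 3 * F(4, 5) + F(4, 3) * 2 - 1)   # 34/15
```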
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections.

Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...

Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...

Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ...

The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.

This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.

Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$.

So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = the signed sum of the values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$. This is an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described, and taking $\text{vol}(I^2) \to 0$.

For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube.

Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.

Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...

@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with.

(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.)

@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homs $TM \to E$). Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-0 exterior derivative in a bundle-valued theory of differential forms.

So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature.

Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.

Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?

Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!

So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up.

If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?

Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.

@Ultradark I don't know what you mean, but you seem down in the dumps, champ. Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method.

I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations about the $x$, $y$ and $z$ axes. But composing the 180 rotation about $x$ with a 180 rotation about $y$ gives you a 180 rotation about $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
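For anyone who wants to machine-check that list of subgroup orders, here is a small sympy sketch of my own (using 0-indexed permutations, so the thread's $(1,2)$ and $(1,2,3)$ become $(0,1)$ and $(0,1,2)$):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# one subgroup of S_4 for each divisor d of 24, with generators as in the discussion
subgroups = {
    1:  PermutationGroup(Permutation([0, 1, 2, 3])),                      # trivial group
    2:  PermutationGroup(Permutation(0, 1, size=4)),                      # <(1 2)>
    3:  PermutationGroup(Permutation(0, 1, 2, size=4)),                   # <(1 2 3)>
    4:  PermutationGroup(Permutation(0, 1, 2, 3)),                        # <(1 2 3 4)>
    6:  PermutationGroup(Permutation(0, 1, size=4),
                         Permutation(0, 1, 2, size=4)),                   # a copy of S_3
    8:  PermutationGroup(Permutation(0, 1, 2, 3),
                         Permutation(0, 2, size=4)),                      # dihedral Sylow 2-subgroup
    12: PermutationGroup(Permutation(0, 1, 2, size=4),
                         Permutation(1, 2, 3, size=4)),                   # A_4
    24: PermutationGroup(Permutation(0, 1, 2, 3),
                         Permutation(0, 1, size=4)),                      # S_4 itself
}

for d, G in subgroups.items():
    assert G.order() == d, (d, G.order())
print("S_4 has a subgroup of every order dividing 24")
```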
Cantor's Theorem (Strong Version)/Proof 2

Theorem

Let $S$ be a set. Let $\mathcal P^n \left({S}\right)$ be defined recursively by: $\mathcal P^n \left({S}\right) = \begin{cases} S & : n = 0 \\ \mathcal P \left({\mathcal P^{n-1} \left({S}\right)}\right) & : n > 0 \end{cases}$ where $\mathcal P \left({S}\right)$ denotes the power set of $S$. Then $S$ is not equivalent to $\mathcal P^n \left({S}\right)$ for any $n > 0$.

Proof

The proof proceeds by induction. For all $n \in \N_{> 0}$, let $P \left({n}\right)$ be the proposition: There is no surjection from $S$ onto $\mathcal P^n \left({S}\right)$.

Basis for the Induction

$P \left({1}\right)$ is Cantor's Theorem. This is our basis for the induction.

Induction Hypothesis

Now we need to show that, if $P \left({k}\right)$ is true, where $k \ge 1$, then it logically follows that $P \left({k+1}\right)$ is true. So this is our induction hypothesis: There is no surjection from $S$ onto $\mathcal P^k \left({S}\right)$. Then we need to show: There is no surjection from $S$ onto $\mathcal P^{k+1} \left({S}\right)$.

Induction Step

This is our induction step: Suppose that $P \left({k}\right)$ is true. Aiming for a contradiction, suppose there is a surjection $f: S \to \mathcal P^{k+1} \left({S}\right)$. Define the mapping $g: S \to \mathcal P^k \left({S}\right)$ as: $\displaystyle g \left({x}\right) = \bigcup f \left({x}\right)$ This is actually a mapping into $\mathcal P^k \left({S}\right)$, as follows: $f \left({x}\right) \in \mathcal P^{k+1} \left({S}\right) = \mathcal P \left({\mathcal P^k \left({S}\right)}\right)$. By the definition of power set: $f \left({x}\right) \subseteq \mathcal P^k \left({S}\right)$. Thus each element of $f \left({x}\right)$ is a subset of $\mathcal P^{k-1} \left({S}\right)$. Thus by Union of Subsets is Subset: $\displaystyle \bigcup f \left({x}\right) \subseteq \mathcal P^{k-1} \left({S}\right)$ Therefore: $\displaystyle \bigcup f \left({x}\right) \in \mathcal P^k \left({S}\right)$ That is, $g$ is a mapping into $\mathcal P^k \left({S}\right)$. Next we have that: $\forall y \in \mathcal P^k \left({S}\right): \left\{{y}\right\} \in \mathcal P^{k+1} \left({S}\right)$ Since $f$ is surjective: $\exists x \in S: f \left({x}\right) = \left\{{y}\right\}$ Then: $\displaystyle g \left({x}\right) = \bigcup \left\{{y}\right\} = y$ As this holds for all such $y$, $g$ is surjective. But this contradicts the induction hypothesis. Thus there is no surjection from $S$ onto $\mathcal P^{k+1} \left({S}\right)$, so $P \left({k+1}\right)$ holds. Thus we conclude that the theorem holds for all $n$. $\blacksquare$
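The induction above is pure set theory, but the base case is easy to poke at computationally for a tiny finite set. The following brute-force Python check (an illustration of my own, not part of the proof) runs Cantor's diagonal argument over every map from a 3-element set to its power set:

```python
from itertools import combinations, product

S = [0, 1, 2]
# all 2**3 = 8 subsets of S
power_set = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# check every function f : S -> P(S); Cantor's diagonal set D = {x : x not in f(x)}
# is never in the image of f, so no f is surjective
for images in product(power_set, repeat=len(S)):
    f = dict(zip(S, images))
    diagonal = frozenset(x for x in S if x not in f[x])
    assert diagonal not in f.values()

print("no map {0,1,2} -> P({0,1,2}) is surjective")
```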
This question will be based roughly on the Bourgade–Keating review on the zeta function and eigenvalue asymptotics (BK): https://link.springer.com/chapter/10.1007/978-3-0348-0697-8_4

To set up the question, we consider the Riemann zeta function $\zeta(s)$ with zeros on the critical line $\frac{1}{2} + i t_n$. The unfolded zeros are defined as $w_n = \frac{t_n}{2\pi} \log \frac{t_n}{2\pi}$. The name "unfolded" is justified by the fact that the distribution of the $w_n$ is asymptotically uniform along the critical axis. In terms of these unfolded zeros, we can define the following integral: \begin{equation} R_{2,\zeta}(f,W) = \int_{-\infty}^{\infty} f(x) \frac{1}{W} \sum_{\substack {j\neq k \\ w_j,w_k \leq W}} \delta(x-w_j+w_k) dx \end{equation} If the $w_n$'s had no pair correlations, the integral above would simply equal $\int f(x) dx$. However, the $w_n$'s clearly have pair correlations! So the deviation of the integral from $\int f(x) dx$ roughly measures the pair correlations between the unfolded zeros $w_n$, up to some cutoff $W$. For the zeta function Montgomery proved an important theorem about this integral, stated in BK as follows:

Theorem 1 (Montgomery): Assume the Riemann Hypothesis. Then for test functions $f(x)$ such that $$ \hat f(\tau) = \int_{-\infty}^{\infty} e^{2 \pi i x \tau} f(x) dx$$ has support in $(-1,1)$, the following limit exists: $$ \lim_{W \rightarrow \infty} R_{2,\zeta}(f,W) = \int_{-\infty}^{\infty} f(x) R_2(x) dx $$ with $$R_2(x) = 1 - \left(\frac{\sin(\pi x)}{\pi x}\right)^2 $$

The confusion I have is: this $R_2(x)$ seems to carry unnecessary information. To show that, suppose I Fourier transform the integral: \begin{equation}\int_{-\infty}^{\infty} f(x) R_2(x) dx = \int_{-\infty}^{\infty} \hat f(\tau) K(-\tau) d \tau, \end{equation} where $K(\tau)$ is the Fourier transform of the pair correlation function, sometimes referred to as the spectral form factor. Then since $\hat f(\tau)$ only has support on $(-1,1)$, I can restrict the $\tau$ integral to $(-1,1)$. But that would mean the integral is only sensitive to $K(\tau)$ for $\tau \in (-1,1)$. Within that domain, there are other functions that produce the same $K(\tau)$. For example, following equation (26) in BK, we can define $$ \tilde R_2(x) = 1 - \frac{1}{2 (\pi x)^2}, \qquad R_2(x) = 1 - \left(\frac{\sin(\pi x)}{\pi x}\right)^2. $$ These functions have the same Fourier transform within $(-1,1)$.

Montgomery's Theorem is usually stated as a connection between random matrix theory and the zeta function because $R_2(x)$ is precisely the pair correlation of Wigner random matrix ensembles. However, the calculation above would suggest that $\tilde R_2(x)$ would do just as well within the domain of interest. Hence, I feel that $R_2(x)$ carries unnecessary information about the pair correlations, and it now seems rather artificial that it matches the random matrix results.

This seems like a simple enough question, but I haven't found any explanation of it in various review articles on RMT-zeta function connections. So I would like some help from the experts: why did Montgomery put the RMT correlation function in his theorem if it carries unnecessary information? Was it just an inspired guess or is there something deep I am missing?

Note: The referenced review article seems to be on the boundary between math and physics. But since my question is more about the mathematical side, I thought it would be best to pose it here. If the moderators feel this is better suited to Physics SE or Math SE, please help me move the question to the right place. Thanks
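One concrete way to see the band-limited nature of the statement is to check numerically that the Fourier transform of $(\sin \pi x/\pi x)^2$, the oscillatory part of $R_2$, is the triangle function supported in $[-1,1]$. The following is a rough numpy sketch of my own (crude quadrature, with truncation error of order $1/X$ from the slowly decaying integrand), not anything taken from BK:

```python
import numpy as np

# FT of sinc^2(x) = (sin(pi x)/(pi x))^2 should be the triangle max(1 - |tau|, 0),
# i.e. supported in [-1, 1]; outside that window K carries no extra information.
X, dx = 400.0, 0.01
x = np.arange(-X, X, dx)
sinc2 = np.sinc(x) ** 2              # np.sinc is the normalized sinc

for tau in [0.0, 0.5, 0.9, 1.5, 2.0]:
    K = (sinc2 * np.exp(2j * np.pi * x * tau)).sum().real * dx
    print(tau, round(K, 3), max(1.0 - abs(tau), 0.0))   # numeric vs. exact triangle
```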
News

A recent result from PandaX-II was published in Physical Review Letters (Phys. Rev. Lett. 119, 181302, an "Editor's Suggestion") on October 30, 2017, back-to-back with the first result from the XENON1T experiment. The papers are highlighted by a Physics "Viewpoint" commentary by Dan Hooper from FNAL and the University of Chicago, commenting that "...

The PandaX-II collaboration released the official WIMP search results using a 54 ton-day exposure on Aug. 23, 2017. No excess events were found above the expected background. The most stringent upper limit on the spin-independent WIMP-nucleon cross section was set for a WIMP mass greater than $100\ \mathrm{GeV}/c^{2}$, with the lowest exclusion at $8.6\times10^{-47}\ \mathrm{cm}^{2}$ at $40\ \mathrm{GeV}/c^{2}$. The result reported here is more conservative than the preliminary result shown during the TeVPA 2017 conference, due to the adoption of updated photon/electron detection efficiencies.

Prof. Xiangdong Ji of Shanghai Jiao Tong University and the University of Maryland, spokesperson of the PandaX Collaboration, announced new results on the dark matter (DM) search from the PandaX-II experiment on Monday, Aug. 7, during the TeV Particle Astrophysics 2017 Conference at Columbus, Ohio, the United States. No DM candidate was identified within the data from an exposure of 54 ton-days, the largest reported DM direct detection data set to date.

The PandaX observatory uses xenon as target and detector to search for WIMP particles as well as neutrinoless double beta decay ($0\nu\beta\beta$) in ${}^{136}\mathrm{Xe}$. At present, the PandaX-II experiment is in operation in CJPL-I. The future PandaX program will pursue the following three main directions:

Events

Tuesday, June 25, 2019 - 08:00, YuGang Bao Library
Following the activity on Exotic Hadrons initiated last year at the T. D. Lee Institute in Shanghai, we are glad to announce the Workshop "Exotic Hadrons: Theory and Experiment at Lepton and Hadron Colliders", to be held at the T. D. Lee Institute, located at Shanghai Jiao Tong University, June 25-27, 2019.

Sunday, April 28, 2019 - 18:00, T. D. Lee Library
GEANT4 is a powerful toolkit for detector simulations which is widely used in many applications of high energy physics, space and radiation, medical physics, and so on.

Wednesday, January 9, 2019 - 09:00, T. D. Lee Library
Answer

$s=r\theta\cdot\dfrac{180^{\mathrm{o}}}{\pi}$

Work Step by Step

Converting between Degrees and Radians:
1. Multiply a degree measure by $\dfrac{\pi}{180}$ radian and simplify to convert to radians.
2. Multiply a radian measure by $\dfrac{180^{\mathrm{o}}}{\pi}$ and simplify to convert to degrees.

Since $\theta$ is in radians (and we want degrees), we apply case 2: $s=r\theta\cdot\dfrac{180^{\mathrm{o}}}{\pi}$
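As a quick sanity check of the two conversion rules (a small Python sketch, not part of the textbook answer), the standard library implements exactly these factors:

```python
import math

theta_deg = 120.0
theta_rad = theta_deg * math.pi / 180      # rule 1: degrees -> radians
back_to_deg = theta_rad * 180 / math.pi    # rule 2: radians -> degrees

print(theta_rad, math.radians(theta_deg))  # both approximately 2.0944 (= 2*pi/3)
print(back_to_deg, math.degrees(theta_rad))  # both approximately 120
```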
GR8677 #9

Alternate Solutions

thinkexist 2012-10-12 14:37:10
You can solve this by common sense and a little physics. Firstly, it doesn't make much sense for an electron to be traveling at speeds comparable to the size of a proton per second. Exclude A, B, C. You also know that current works by the propagation of the Coulomb force, so when you put one electron into a filled tube of electrons, a different electron on the other side of the tube pops out because of the Coulomb force. Having said that, it doesn't really make much sense for the electron to be traveling at 10^3 m/s when there are a ton of electrons it can bounce into, including the atoms. Exclude E. What we have left is D.

flyboy621 2010-11-09 20:51:08
$I = e n A v_d$, where $e$ is the electron charge, $n$ is the electron density, $A$ is the cross-sectional area of the wire, and $v_d$ is the drift velocity. Solve for $v_d$, plug in rough numbers, and you get roughly $2\times 10^{-4}\ \mathrm{m/s}$, which matches (D).

walczyk 2011-04-06 16:27:46
This is by far the best way to do this. It's perfect when you can derive it so quickly and easily like that; I just wish I recognized it too. Forget all the equations. Follow this. Intuition is our friend.

Ben 2005-12-01 13:23:38
I did it using unit analysis, sort of. Boils down to the same thing, but I get much closer to 2E-4. 100 A = 100 C/s. (100 C/s)*(1/1.6E-19 e-/C) = 6.25E20 e-/s. $\pi r^2 = \pi \times$ 1E-4 m². ($\pi \times$ 1E-4 m²)*(1E28 e-/m³) = $\pi \times$ 1E24 e-/m (of wire). Finally: (6.25E20 e-/s)*(1/($\pi \times$ 1E24) m/e-), which gives pretty close to 2E-4. (Hopefully my LaTeX pi's worked.)

yosun 2005-12-02 01:10:35
ben: your LaTeX came out iffy. You should have used a \ (back-slash) before your pi's (and other LaTeX commands) so that you get $\pi$ type-set instead of pi. It's been manually corrected. (For everyone else with typos I didn't get to catch: over winter break, I'll code options for editing posts, among other features.)

Comments

natestree 2011-03-22 14:47:19
So, I figured out the correct answer without any knowledge of drift velocity, using some simple unit analysis. Note the units of the quantities given: current is charge per time; diameter is meters; density is number of particles per cubic meter. We want something in units of meters per second. Therefore, we can use the quantities given to come up with the relation $v = \frac{I}{\rho e d^2}$, where $I$ is the current, $\rho$ is the density, $e$ is the charge of an electron, and $d$ is the diameter. This gives an answer closest to option D.

kicksp 2007-10-29 09:42:47
Inspection shows that the choices are separated by several orders of magnitude. Hence, we need only an order-of-magnitude estimate: the answer is clearly (D). Throw in the factor of $\pi$ and you obtain the more precise value. Go maroons!

Andresito 2006-03-14 22:16:53
Your solution is still showing the area as proportional to $d^{2}ne$. Although, it is not to worry. Thanks, Yosun.

Andresito 2006-03-14 22:18:21
"Nevermind"

wishIwasaphysicist 2006-01-24 11:22:51
Oops... you were right. I took the diameter squared and not the radius squared. It comes out to 2E-4 m/s and not 3E-4 m/s though.

wishIwasaphysicist 2006-01-24 11:08:34
I don't see where the 4 in front of the current (I) came from. If you remove it, you get exactly the answer choice: 2E-4 m/s.

grae313 2007-10-07 15:11:30
It comes from squaring the radius (d/2) to get the cross-sectional area.
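Putting numbers to the estimates above, here is a rough Python sketch. It uses the values quoted in the thread (100 A, a 2 cm diameter, a free-electron density of about $10^{28}\ \mathrm{m^{-3}}$); since the original problem statement isn't reproduced here, treat those inputs as assumptions.

```python
import math

I = 100.0     # current, A
d = 2e-2      # wire diameter, m  (radius 1e-2 m)
n = 1e28      # free electron density, m^-3
e = 1.6e-19   # elementary charge, C

A = math.pi * (d / 2) ** 2    # cross-sectional area, ~3.1e-4 m^2
v_drift = I / (n * e * A)     # from I = n e A v_drift

print(v_drift)                # ~2e-4 m/s, matching answer (D)
```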
This is something I haven't seen online yet: indicator functions with values in a finite field. Probably for a good reason, but I would like to know why, and whether there are still things that can be said. For instance, what can we say about the relationship (if any) between the support of the convolution of two indicator functions with values in a finite field and the sumset of the sets they indicate?

More precisely: Let $F$ be a finite field and let $A, B$ be sets with additive cyclic structure, say $A, B \subseteq \mathbb{Z}_N$ with $N$ coprime to the characteristic of $F$. Define the "characteristic functions" $1_A, 1_B : \mathbb{Z}_N \to F$ by $1_A(x) = 1$ if $x \in A$, and $1_A(x) = 0$ otherwise. Similarly for $1_B$. Is there any relationship/proposition we can infer between the set $A + B$ and the support of the (cyclic) convolution $$ 1_A \ast 1_B(k) = \sum_{j \in \mathbb{Z}_N} 1_A(j) 1_B(k-j)? $$ For instance, if $1_A \ast 1_B(k) = 0$ for all $k \in \mathbb{Z}_N$, would this say anything at all about $A + B$?

Note that in the case of the usual characteristic functions with values in $\mathbb{R}$ we have the nice property that $A + B$ is precisely the support of $1_A \ast 1_B$. So I wondered whether we can at least get something (although probably not quite this, I imagine) when the values of the characteristic functions are either the additive or multiplicative identity in a finite field. Thanks!

Remark: Recall that the convolution here is taken modulo $p$, where $p$ is the characteristic of the field. Moreover, note that the support of $1_A \ast 1_B$ is contained in $A + B$, since, if the convolution is non-zero modulo $p$, it is non-zero in $\mathbb{R}$, and so the support lies in $A + B$ by the fact mentioned above. This, however, does not say anything about whether or not the support is empty modulo $p$...
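To get a feel for the question, here is a small Python experiment of my own (the field is just $\mathbb{Z}/p\mathbb{Z}$ and the sets are toy choices): it computes the cyclic convolution of two indicator functions modulo $p$ and compares its support with the sumset $A + B$. Already for $p = 2$ the support can be a proper subset of $A + B$, because counts that are even vanish mod 2.

```python
def conv_support_vs_sumset(A, B, N, p):
    """Support of 1_A * 1_B over F_p (cyclic convolution mod N) vs. the sumset A + B."""
    conv = [sum((j % N in A) and ((k - j) % N in B) for j in range(N)) % p
            for k in range(N)]
    support = {k for k, c in enumerate(conv) if c != 0}
    sumset = {(a + b) % N for a in A for b in B}
    return support, sumset

# toy example in Z_4 over F_2
support, sumset = conv_support_vs_sumset({0, 1}, {0, 1}, N=4, p=2)
print(support)   # {0, 2}    -- the count at k = 1 is 2, which is 0 mod 2
print(sumset)    # {0, 1, 2} -- so supp(1_A * 1_B) is strictly contained in A + B
```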
I'm trying to create a prediction model using regression. This is the diagnostic plot for the model that I get from using lm() in R:

What I read from the Q-Q plot is that the residuals have a heavy-tailed distribution, and the Residuals vs Fitted plot seems to suggest that the variance of the residuals is not constant. I can tame the heavy tails of the residuals by using a robust model: fitRobust = rlm(formula, method = "MM", data = myData) But that's where things come to a stop. The robust model weighs several points 0. After I remove those points, this is what the residuals and the fitted values of the robust model look like:

The heteroscedasticity seems to still be there. Using logtrans(model, alpha) from the MASS package, I tried to find an $\alpha$ such that rlm(formula, method = "MM") with formula being $\log(Y + \alpha) \sim X_1+\cdots+X_n$ has residuals with constant variance. Once I find the $\alpha$, the resulting robust model obtained for the above formula has the following Residuals vs Fitted plot:

It looks to me as if the residuals still do not have constant variance. I've tried other transformations of the response (including Box-Cox), but they don't seem like an improvement either. I am not even sure that the second stage of what I'm doing (i.e. finding a transformation of the response in a robust model) is supported by any theory. I'd very much appreciate any comments, thoughts, or suggestions.
This post is about gambling and how to reason about the odds of winning wagers in multiplayer games. Last week my mom found herself in an intriguing gambling situation during a game of Mahjong. There were 3 other players in the game—I'll call them Alice, Betty, and Clara. At one point, Alice proposed a wager: if she won the next game, each player had to pay her \$20, otherwise she would pay each player \$20. When Alice proposed the wager, the number of games won by each player was as follows:

Alice's bet seems reasonable; she has won over half of the total games played. Her perceived dominance likely prompted her into proposing the wager, believing that she had a likely chance of winning the next game. The question is, should the other players take Alice's bet? Who does this bet favor, Alice or the other players?

I was interested in this problem because it resembled a famous gambling puzzle called the problem of points, worked out in the 17th century through a series of correspondences between Pascal and Fermat. Pascal and Fermat were interested in determining how to divvy up a pot of winnings between two players if the game was suddenly stopped. Their work on this problem is widely regarded as the birth of modern probability theory. Alice's bet is a more complex variant of the problem of points with an added twist. When the Mahjong game is suddenly terminated after the next game, instead of splitting a pot of earnings, each player is interested in estimating the probability that Alice will win the game, so that they can decide whether to take the bet.

If I were trying to maximize my chances of winning this wager, I would model the situation as follows. With the data in Figure 1, I had one realization of a sample drawn from a discrete distribution—$N \sim Discrete(\theta)$, where $N \in \{1, … , K\}$ is a categorical random variable over $K$ players. Let $N_j$ denote the number of games won by the $j$th player and $\theta_j$ denote the probability of the $j$th player winning a game. The maximum likelihood estimator for $\theta_j$ is simply $\hat{\theta}_j = \frac{N_j}{N}$.

The number of games won by each player can be thought of as an outcome from a multinomial distribution. The multinomial is a generalization of the binomial distribution and models the probability of $K$ mutually exclusive outcomes—in this case, the number of wins in games of Mahjong. The multinomial distribution is comprised of two components—the number of ways in which a fixed number of wins can be assigned to $K$ players, and the corresponding probability of each of these outcomes. Assuming outcomes are independent events, the probability of an outcome is given by the product of these two components, which is the probability mass function of the multinomial: $$ f(n_1, n_2, \dots, n_k | \theta_1, \theta_2, \dots, \theta_k) = \frac{\left(\sum_{j=1}^{K}n_j\right)!}{\prod_{j=1}^{K}n_j!} \prod_{j=1}^K \theta_j^{n_j} $$ The multinomial is a member of the exponential family, and accordingly, is conjugate with another distribution, the Dirichlet. Together, this conjugate pair forms the Dirichlet-multinomial model. This relationship is desirable for modeling in a Bayesian framework because the product of the likelihood and the prior produces a recognizable posterior kernel that is integrable.
The posterior distribution of the multinomial, when normalized, forms a Dirichlet: $$ \begin{eqnarray} \Pr(\theta | N) &\propto& Multi(N|\theta)Dir(\theta|\alpha) \\ &\propto& \left( \prod_{j=1}^K \theta_j^{n_j} \right) \left( \prod_{j=1}^K \theta_j^{\alpha_j-1} \right) \\ &\propto& \prod_{j=1}^K \theta_j^{n_j+\alpha_j - 1} \\ &=& Dir(N+\alpha) \\ \end{eqnarray} $$ The Dirichlet distribution is a generalization of the beta distribution to vectors of multinomial parameters, with concentration parameters $(\alpha_1, \alpha_2, …, \alpha_K)$. The concentration parameters in the model can be thought of as pseudo-counts representing games already won by each player; they regularize the estimate like a weighted average of the prior mean $\alpha_j/\alpha$ and the maximum likelihood estimate $\hat{\theta}_j$, where (with a slight abuse of notation) $\alpha = \sum_j \alpha_j$ is the total number of pseudo-counts and $N = \sum_j N_j$ the total number of observed games. The posterior expected value of $\theta_j$ is then: $$ E[\theta_j | N] = \frac{\alpha}{N + \alpha}\,\frac{\alpha_j}{\alpha} + \frac{N}{N+\alpha}\, \hat{\theta}_j = \frac{N_j + \alpha_j}{N + \alpha} $$ Noting again that the posterior is a Dirichlet gives more than just the expected values, because the marginal distribution of each $\theta_j$ under the posterior $Dir(N+\alpha)$ is a beta distribution: $$ \theta_j \mid N \sim Beta(N_j + \alpha_j,\; N + \alpha - N_j - \alpha_j) $$ With the full posterior of each player in hand, it is possible to calculate the variance, the maximum a posteriori estimate, credible intervals, or almost anything else that is of interest in this model. The only remaining piece of information I needed for my model was to choose a prior. What is my prior belief that each player will win the next game? To answer this, I consulted my mom, as she has played Mahjong with the other players enough to be able to form a reasonable estimate. She estimated the prior probabilities to be: $\hat{\theta}_{Alice}=0.20$, $\hat{\theta}_{Betty}=0.35$, $\hat{\theta}_{Mom}=0.30$, and $\hat{\theta}_{Clara}=0.15$. The question remaining was how to weight these priors in the model. How strongly do I believe that my mom's estimates are accurate? In most situations, Bayesian reasoning proceeds by gathering data, building a model, and conditioning the probabilities of interest on the observed data. As more and more data is introduced, the model is updated to reflect the new evidence. This recalibration is sometimes referred to as Bayesian updating. In my model, I had to work in reverse. The data was fixed; I had only observed the outcome in Figure 1. In this situation, the data was held constant and the choice of prior was the sole variable affecting inference. As the size and confidence of the prior grows, it pushes the posterior expected values toward the prior probabilities. To understand how my model was affected by the choice of the prior, I built a visualization: The above visualization shows the posterior distributions of the expected value for each player winning the next game. The x-axis shows the relative probability that a given player will win the next game and the y-axis shows relative density. The size of the prior is adjustable by moving the slider below the main visualization. Hovering over a player's name in the legend shows the 95% credible interval for the specified player at the selected prior. When the slider is used to set the prior size to 0, there is no prior information and the resulting probabilities of each player winning the next game are simply the maximum likelihood estimates derived from the data.
This is essentially what Alice probably used to conclude she would likely win the wager. From the data alone, Alice has a $53\%$ chance of winning the next game. Under these conditions, she is making a reasonable bet. However, the 95% credible interval is $0.29 < \hat{\theta}_{Alice} < 0.77$, indicating a large degree of uncertainty around this estimate. I was skeptical that the observed data alone would be a good estimate of the true underlying probability that Alice would win the next game. In a small sample of only 15 games, the observed probabilities could deviate wildly from the true odds that each player would win the next game. The wide credible intervals confirm this notion. I was willing to place more trust in my mom's ability to generate reasonable estimates; however, the observed data was still valuable information. Thus, I wanted a model that balanced the influence of the data and the prior in a way that reflected this logic. I weighted my mom's estimates at 4x the weight of the data, a prior size of 60 pseudo-counts. Setting the prior to this value suggested that Alice would win the next game $26.6\%$ of the time, with a credible interval of $[17\%, 37\%]$. Interestingly, this model also suggests that the probability of each player winning a game is close to random chance: $\hat{\theta}_{Alice}=0.27$, $\hat{\theta}_{Betty}=0.33$, $\hat{\theta}_{Mom}=0.27$, and $\hat{\theta}_{Clara}=0.13$. In the actual game, the other players took Alice's bet and subsequently won the wager. My mom won the next game and Alice had to pay each player \$20. With odds of roughly 3:1 against Alice winning, I would also have taken this bet if I were one of the players in the game. Based on my model, I would conclude that Alice likely committed a base rate fallacy. She placed too much credence in her short-term winning streak and too little weight on her long-run performance.
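To make the update concrete, here is a minimal Python sketch of the Dirichlet-multinomial model described above. The win counts are an assumption reconstructed from the figures quoted in the post (Alice won 8 of the 15 games, and the posterior means at a prior weight of 60 come out near 0.27/0.33/0.27/0.13); the post's Figure 1 and the interactive visualization are not reproduced here.

from scipy.stats import beta

players = ["Alice", "Betty", "Mom", "Clara"]
wins    = [8, 4, 2, 1]                  # assumed Figure-1 counts; 15 games in total
prior_p = [0.20, 0.35, 0.30, 0.15]      # Mom's prior estimates
prior_n = 60                            # total prior weight in pseudo-counts, 4x the data

alpha = [prior_n * p + w for p, w in zip(prior_p, wins)]   # posterior Dir(N + alpha)
total = sum(alpha)                                          # = N + prior_n = 75

for name, a in zip(players, alpha):
    lo, hi = beta.ppf([0.025, 0.975], a, total - a)         # marginal is Beta(a, total - a)
    print(f"{name}: mean {a / total:.3f}, 95% CI [{lo:.2f}, {hi:.2f}]")

Setting prior_n to 0 reproduces the maximum likelihood estimates from the data alone, and increasing it pulls the means toward the prior probabilities, which is exactly the behaviour of the slider in the visualization.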
Last semester, I attended Sage Days 54 at UC Davis. In addition to learning about Sage development (perhaps a topic for a later blog post), I was introduced to FindStat, a new online database of combinatorial statistics. You may be familiar with the Online Encyclopedia of Integer Sequences; the idea of FindStat is similar, and somewhat more general. The Online Encyclopedia of Integer Sequences is a database of mathematically significant sequences, and to search the database you can simply enter a list of numbers. It will return all the sequences containing your list as a consecutive subsequence, along with the mathematical significance of each such sequence and any other relevant information. FindStat does the same thing, but with combinatorial statistics instead of sequences. A combinatorial statistic is any integer-valued function defined on a set of combinatorial objects (such as graphs, permutations, posets, and so on). Some common examples of combinatorial statistics are: The number of edges of a finite simple graph, The length of a permutation, that is, the smallest length of a decomposition of the permutation into transpositions, The number of parts of a partition, The diameter of a tree. The FindStat database has a number of combinatorial objects programmed in, with various statistics assigned to them, which can all be viewed in the Statistics Database tab. The search functionality is under Statistic Finder, in which you can choose a combinatorial object, say graphs, and enter some values for some of the graphs. It will then tell you what statistics, if any, on graphs match the values you have entered. So this is strictly more general than OEIS: we can think of integer sequences as combinatorial statistics on some collection of combinatorial objects represented by the nonnegative integers, such as finite collections of indistinguishable balls. Not that FindStat should be used for integer sequences – OEIS already does a splendid job of that – but FindStat provides something that OEIS cannot: an organized database of mathematical data that doesn’t necessarily have a natural linear ordering. The last, and most interesting, feature of FindStat is its “maps” functionality. There are many known natural maps of combinatorial objects, such as the map $\phi:P\to B$ sending a permutation to its corresponding binary search tree, where $P$ denotes the set of all permutations and $B$ the set of all binary search trees. (See here for all the maps currently implemented on the Permutations class in FindStat.) Now, given a statistic $s:B\to \mathbb{Z}$ on $B$, we automatically get a statistic $$s\circ \phi:P\to \mathbb{Z}.$$ FindStat uses this fact to give the user more information: it will give you not only the matching statistics on the combinatorial object that you chose, but the matching statistics on all other possible combinatorial objects linked by any relevant map in the database! This can help the working combinatorialist discover new ways of thinking about their statistics.
The amsmath package provides a handful of options for displaying equations. You can choose the layout that better suits your document, even if the equations are really long, or if you have to include several equations in the same line. The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces, which will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character &, to set the points where the equations are vertically aligned. This is a simple step; if you use LaTeX frequently you surely already know this. In the preamble of the document include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as Euler's identity. For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not. \begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} The split environment is very similar to multline. Use it to break an equation and to align the pieces in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment; for an example, check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it: Usually the relation symbols (>, < and =) are the ones aligned, for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by a &, and that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment; a short example is given below. The asterisk trick to set/unset the numbering of equations also works here. For more information see the amsmath documentation.
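For completeness, here is a minimal example of the gather environment just mentioned (the equations themselves are arbitrary placeholders):

\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}

Each line is centered on its own row; no ampersand alignment points are needed.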
Difference between revisions of "Prime counting function" (→Breaking the square root barrier) (Minor smoothing of the first section) Line 1: Line 1: − One way to [[finding primes|find primes]] is to find a polynomial time algorithm to compute <math>\pi(x)</math>, the number of primes less than x, to reasonable accuracy + One way to [[finding primes|find primes]] is to find a polynomial time algorithm to compute <math>\pi(x)</math>, the number of primes less than x, to reasonable accuracy. if we can better than <math>10^{k/2}</math> accuracy for k-digit x, we can break the square root barrier. We don't necessarily have to do this for all x; just having a targeted x for which we can show <math>\pi(x+y)-\pi(x) > 0</math> for some <math>x \sim 10^k</math> and <math>y=o(10^{k/2})</math> would suffice. − Now, perhaps instead of trying to prove that intervals like [x, x + (\log x)^A] contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval [y, y + \sqrt{x}] with y \in [x,2x], contains a prime number that we can discover computationally. + Now, perhaps instead of trying to prove that intervals like [x, x + (\log x)^A]contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval [y, y + \sqrt{x}]with y \in [x,2x], contains a prime number that we can discover computationally. − How? Well, let’s start by assuming that we can computationally locate all the O( \sqrt{x} \log x) zeros in the critical strip up to height \sqrt{x}. Then what we can do is some kind of “binary search” to locate an interval [y, y + \sqrt{x}] containing loads of primes: say at the + How? Well, let’s start by assuming that we can computationally locate all the O( \sqrt{x} \log x)zeros in the critical strip up to height \sqrt{x}. Then what we can do is some kind of “binary search” to locate an interval [y, y + \sqrt{x}]containing loads of primes: say at the step in the iteration we have that some interval [u,v]has loads of primes. Then, using the explicit formula for \psi(z) = \sum_{n \leq z} \Lambda_0(n) = z - \sum_{\rho : \zeta(\rho)=0} z^\rho/\rho(actually, the usual quantitative form of the identity), we can decide which of the two intervals [u, (u+v)/2]or [(u+v)/2, v]contains loads of primes (maybe both do — if so, pick either one for the next step in the iteration). We keep iterating like this until we have an interval of width around x^{1/2 + \epsilon}or so that we know contains primes. − Ok, but how do you locate the zeros of the zeta function up to height \sqrt{x}? Well, I’m not sure, but maybe one can try something like the following: if we can understand the value of \zeta(s) for enough well-spaced, but close together, points up the half-line, then by taking local interpolations with these points, we can locate the zeros to good precision. And then to evaluate \zeta(s) at these well-spaced points, maybe we can use a Dirichlet polynomial approximations, and then perhaps apply some variant of Fast Fourier Transforms (if this is even possible with Dirichlet polynomials, which are not really polynomials) to evaluate them at lots of values s = 1/2 + \delta quickly — perhaps FFTs can speed things up enough so that the whole process doesn’t take more than, say, 10^{k/2} polylog(k) bit operations. 
Latest revision as of 13:58, 31 August 2009 One way to find primes is to find a polynomial time algorithm to compute [math]\pi(x)[/math], the number of primes less than x, to reasonable accuracy. For example, if we can find [math]\pi(x)[/math] to better than [math]10^{k/2}[/math] accuracy for k-digit x, we can break the square root barrier. We don't necessarily have to do this for all x; just having a targeted x for which we can show [math]\pi(x+y)-\pi(x) \gt 0[/math] for some [math]x \sim 10^k[/math] and [math]y=o(10^{k/2})[/math] would suffice. Now, perhaps instead of trying to prove that intervals like [math][x, x + (\log x)^A][/math] contain primes unconditionally, we should first try to be much less ambitious and aim to show that *some* interval [math][y, y + \sqrt{x}][/math] with [math]y \in [x,2x][/math], contains a prime number that we can discover computationally. How? Well, let’s start by assuming that we can computationally locate all the [math]O( \sqrt{x} \log x)[/math] zeros in the critical strip up to height [math]\sqrt{x}[/math]. Then what we can do is some kind of “binary search” to locate an interval [math][y, y + \sqrt{x}][/math] containing loads of primes: say at the [math]i[/math]th step in the iteration we have that some interval [math][u,v][/math] has loads of primes. Then, using the explicit formula for [math]\psi(z) = \sum_{n \leq z} \Lambda_0(n) = z - \sum_{\rho : \zeta(\rho)=0} z^\rho/\rho[/math] (actually, the usual quantitative form of the identity), we can decide which of the two intervals [math][u, (u+v)/2][/math] or [math][(u+v)/2, v][/math] contains loads of primes (maybe both do — if so, pick either one for the next step in the iteration). We keep iterating like this until we have an interval of width around [math]x^{1/2 + \epsilon}[/math] or so that we know contains primes. Ok, but how do you locate the zeros of the zeta function up to height [math]\sqrt{x}[/math]?
Well, I’m not sure, but maybe one can try something like the following: if we can understand the value of [math]\zeta(s)[/math] for enough well-spaced, but close together, points up the half-line, then by taking local interpolations with these points, we can locate the zeros to good precision. And then to evaluate [math]\zeta(s)[/math] at these well-spaced points, maybe we can use a Dirichlet polynomial approximation, and then perhaps apply some variant of Fast Fourier Transforms (if this is even possible with Dirichlet polynomials, which are not really polynomials) to evaluate them at lots of values [math]s = 1/2 + \delta[/math] quickly — perhaps FFTs can speed things up enough so that the whole process doesn’t take more than, say, [math]10^{k/2}[/math] polylog(k) bit operations. Keep in mind also that our Dirichlet polynomial approximation only needs to hold “on average” once we are sufficiently high up the half-line, so it seems quite plausible that this could work. Note that for [math]s[/math] near to 1/2 we would need to be more careful, and get the sharpest approximation we can, because those terms contribute more in the explicit formula.

Computing the parity of [math]\pi(x)[/math]

Interestingly, there is an elementary way to compute the parity of [math]\pi(x)[/math] in [math]x^{1/2+o(1)}[/math] time. The observation is that for square-free n, the divisor function [math]\tau(n)[/math] (the number of divisors of n) is equal to 2 mod 4 if n is prime, and is divisible by 4 otherwise. This gives the identity [math] 2 \pi(x) = \sum_{n\lt x} \tau(n) \mu(n)^2 \hbox{ mod } 4.[/math] Thus, to compute the parity of [math]\pi(x)[/math], it suffices to compute [math]\sum_{n\lt x} \tau(n) \mu(n)^2[/math]. But by Mobius inversion, one can express [math] \tau(n) \mu(n)^2 = \sum_{d^2|n} \mu(d) \tau(n)[/math] and so [math]\sum_{n\lt x} \tau(n) \mu(n)^2 = \sum_{d \lt x^{1/2}} \mu(d) \sum_{n\lt x: d^2 | n} \tau(n).[/math] Since one can compute all the [math]\mu(d)[/math] for [math]d \lt x^{1/2}[/math] in [math]x^{1/2+o(1)}[/math] time, it would suffice to compute [math]\sum_{n\lt x: d^2 | n} \tau(n)[/math] in [math](x/d^2)^{1/2} x^{o(1)}[/math] time for each d. One can use the multiplicativity properties of [math]\tau[/math] to decompose this sum as a combination of [math]x^{o(1)}[/math] sums of the form [math] \sum_{n\lt y} \tau(n)[/math] for various [math]y \leq x/d^2[/math], so it suffices to show that [math] \sum_{n\lt y} \tau(n)= \sum_{a,b: ab \lt y} 1[/math] can be computed in [math]y^{1/2+o(1)}[/math] time. But this can be done by the Gauss hyperbola method, indeed [math] \sum_{a,b: ab \lt y} 1 = 2 \sum_{a \lt \sqrt{y}} \lfloor \frac{y}{a} \rfloor - \lfloor \sqrt{y} \rfloor^2.[/math] The same method lets us compute [math]\pi(x)[/math] mod 3 efficiently provided one can compute [math] \sum_{n\lt x} \tau_2(n) = \sum_{a,b,c: abc \lt x} 1[/math] efficiently. Unfortunately, so far the best algorithm for this takes time [math]x^{2/3+o(1)}[/math]. If one can compute [math]\pi(x)[/math] mod q for every modulus q up to [math]O(\log x)[/math], one can compute [math]\pi(x)[/math] by the Chinese remainder theorem. Related to this approach, there is a nice identity of Linnik. Let [math]\Lambda(n)[/math] be the von Mangoldt function and [math]t_{j}(n)[/math] the number of representations of n as ordered products of integers greater than 1, then [math]\Lambda(n) = \ln(n) \sum_{j=1}^{\infty} \frac{(-1)^{j-1}}{j} t_{j}(n)[/math]. The sum is rather short since [math]t_{j}(n)=0[/math] for j larger than about [math]\ln(n)[/math].
Note that the function [math]t_{j}(n)[/math] is related to [math]\tau_{k}(n)[/math] by the relation [math]t_j(n) = \sum_{k=0}^{j}(-1)^{j-k} {j \choose k} \tau_{k}(n)[/math]. Again, [math]t_{2}(n)[/math] is computable in [math]n^{1/2}[/math] steps, however [math]t_{j}(n)[/math], for larger j, appears more complicated. Curiously this is a fundamental ingredient in the Friedlander and Iwaniec work.

Breaking the square root barrier

It is known that breaking the square root barrier for [math]\sum_{n \leq x} \tau(n)[/math] breaks the square root barrier for the parity of [math]\pi(x)[/math] also: specifically, if the former can be computed in time [math]x^{1/2-\epsilon+o(1)}[/math] for some [math]\epsilon \lt 1/4[/math], then the latter can be computed in time [math]x^{1/4+1/(4+16\epsilon/3) + o(1)}[/math]. Details are here. Using Farey sequences, one can compute the former sum in time [math]x^{1/3+o(1)}[/math] (and hence the latter sum in time [math]x^{5/11+o(1)}[/math]): The argument is similar to elementary proofs (such as the one waved at in the exercises to Chapter 3 of Vinogradov's Elements of Number Theory) of the fact that the number of points under the hyperbola equals [math](main term) + O(N^{1/3} (\log N)^2)[/math]. What we must do is compute [math]\sum_{n \lt= N^{ 1/2 } } \lfloor N/n \rfloor[/math] in time [math]O(N^{1/3} (\log N)^3)[/math].

Lemma 1. Let [math] x[/math] be about [math]N^{\theta}[/math]. Assume that [math]N/x^2 = a/q + \eta/q^2, \hbox{gcd}(a,q)=1, \eta\lt=1/\sqrt{5}[/math]. Assume furthermore that [math]q\lt=Q[/math], where [math]Q=N^{\theta-1/3}/10[/math]. Then the sum [math]\sum_{x\lt=n\lt x+q} \{N/n\}[/math] can be computed in time [math]O(\log x)[/math] with an error term [math]\lt 1/2[/math].

Proof. We can write [math]N/n = N/x - N/x^2 t + \eta_2 t^2 / N^{3\theta-1} = N/x - (a/q + \eta/q^2) t + \eta_2 t^2 / N^{3\theta-1}[/math], where [math]n = x + t, 0\lt=t\lt q[/math] an integer, [math]|\eta|\lt=1/\sqrt{5}[/math] and [math]|\eta_2|\lt=1[/math] independent of n. Since [math]q\lt=Q[/math] and [math]Q=N^{\theta-1/3}/10[/math], we have [math]\eta_2 t^2 / N^{3\theta-1} \lt 1/(1000 q)[/math]. We also have [math](\eta/q^2) t \lt= 1/(\sqrt{5} q)[/math]. Thus, [math]\eta_2 t^2 / N^{3\theta-1} + (\eta/q^2) t \lt 1/(2 q)[/math]. It follows that [math]|\{N/n\} - \{N/x - at/q\}| \lt 1/(2 q)[/math] except when [math]\{N/x-at/q\} \gt 1 - 1/(2 q)[/math]. That exception can happen for only one value of [math]t=0 \ldots q-1[/math] (namely, when [math]at[/math] is congruent mod q to the integer closest to [math]\{N/x\} q[/math]) and we can easily find that [math]t[/math] (and isolate it and compute its term exactly) in time [math]O(\log n)[/math] by taking the inverse of [math]a \mod q[/math]. Thus, we get the sum [math]\sum_{x\lt=n\lt x+q} \{N/n\}[/math] in time [math]O(\log n)[/math] with an error term less than [math](1/(2 q)) \cdot q = 1/2[/math] once we know the sum [math]\sum_{0\lt=t\lt q} \{N/x - at/q\}[/math] exactly. But this sum is equal to [math]\sum_{0\lt=r\lt q} \{r/q + \epsilon/q\}[/math], where [math]\epsilon := \{qN/x\}[/math], and that sum is simply [math](q-1)/2 + \epsilon[/math]. Thus, we have computed the sum [math]\sum_{x\lt=n\lt x+q} \{N/n\}[/math] in time [math]O(\log n)[/math]. QED

Now we show why the lemma is enough for attaining our goal (namely, computing [math]\sum_{n\leq \sqrt{N}} \lbrack N/n\rbrack[/math] with no error term).
We know that [math]\sum_{x\lt=n\lt x+q} \lbrack N/n \rbrack = \sum_{x\lt=n\lt x+q} N/n - \sum_{x\lt=n\lt x+q} \{N/n\} = N \cdot (\log(x+q)-\log(x)) - \sum_{x\lt=n\lt x+q} \{N/n\}.[/math] We also know that [math]\sum_{x\lt=n\lt x+q} \lbrack N/n \rbrack[/math] is an integer. Thus, it is enough to compute [math]\sum_{x\lt=n\lt x+q} \{N/n\}[/math] with an error term [math]\lt 1/2[/math] in order to compute [math]\sum_{x\lt=n\lt x+q} \lbrack N/n\rbrack[/math] exactly. We now partition the range [math]n=\{N^\theta,\ldots,2 N^\theta\}, 1/3\lt=\theta\lt=1/2[/math], into intervals of the form [math]x\lt=n\lt x+q[/math], where q is the denominator of a good approximation to [math]N/x^2[/math], that is to say, an approximation of the form [math]a/q, \hbox{gcd}(a,q)=1, q\lt=Q[/math] with an error term [math]\lt= 1/(\sqrt{5} q^2)[/math]. Such good approximations are provided to us by Hurwitz's approximation theorem. Moreover, it shouldn't be hard to show that, as x varies, the q's will be fairly evenly distributed in [1,Q]. (Since Hurwitz's approximation is either one of the ends of the interval containing [math]N/x^2[/math] in the Farey series with upper bound Q/2 or the new Farey fraction produced within that interval, it is enough to show that Dirichlet's more familiar approximations have fairly evenly distributed denominators.) This means that 1/q should be about [math](\log Q)/Q[/math] on average. Thus, the number of intervals of the form [math]x\lt=n\lt x+q[/math] into which [math]\{N^\theta,\ldots ,2 N^\theta\}[/math] has been partitioned should be about [math](\log Q) N^\theta / Q[/math]. Since the contribution of each interval to the sum [math]\sum_{N^\theta\lt=n\lt=2N^\theta} \lfloor N/n\rfloor[/math] can (by Lemma 1 and the paragraph after its proof) be computed exactly in time [math]O(\log x)[/math], we can compute the entire sum [math]\sum_{N^\theta \lt= n\lt= 2 N^\theta} \lfloor N/n\rfloor[/math] in time [math]O((\log x) (\log Q) N^\theta/Q) = O((\log N)^2 N^{1/3})[/math]. (There are bits of the sum (at the end and the beginning) that belong to two truncated intervals, but those can be computed in time [math]O(Q) \ll O(N^{1/6})[/math].) We partition [math]\{1,2,\ldots ,\sqrt{N}\}[/math] into [math]O(\log N)[/math] intervals of the form [math]\{N^\theta,\ldots ,2 N^\theta\}[/math], and obtain a total running time of [math]O((\log N)^3 N^{1/3})[/math], as claimed.
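Nothing below implements the sub-square-root ideas above, but the two elementary ingredients, the Gauss hyperbola formula for the divisor summatory function and the parity identity for pi(x), are easy to check numerically. The following Python sketch is only a brute-force sanity check with small test values of my own choosing; note that the n = 1 term (tau(1) = 1) is left out of the parity sum.

from math import isqrt

def divisor_summatory(y):
    """D(y) = sum of tau(n) for n <= y, in O(sqrt(y)) steps via the Gauss
    hyperbola method: D(y) = 2 * sum_{a <= sqrt(y)} floor(y/a) - floor(sqrt(y))**2."""
    r = isqrt(y)
    return 2 * sum(y // a for a in range(1, r + 1)) - r * r

def parity_identity_holds(x):
    """Brute-force check that 2*pi(x) and the sum of tau(n) over squarefree n
    with 2 <= n < x agree mod 4 (pi(x) counts primes below x)."""
    spf = list(range(x))                     # smallest-prime-factor sieve
    for p in range(2, isqrt(x - 1) + 1):
        if spf[p] == p:
            for m in range(p * p, x, p):
                if spf[m] == m:
                    spf[m] = p
    pi, rhs = 0, 0
    for n in range(2, x):
        if spf[n] == n:
            pi += 1
        m, tau, squarefree = n, 1, True      # factor n with the sieve
        while m > 1:
            p, e = spf[m], 0
            while m % p == 0:
                m //= p
                e += 1
            if e > 1:
                squarefree = False
            tau *= e + 1
        if squarefree:
            rhs += tau
    return (2 * pi - rhs) % 4 == 0

print(divisor_summatory(10**6))   # grows like y*log(y) + (2*gamma - 1)*y, gamma = Euler-Mascheroni constant
print(all(parity_identity_holds(x) for x in (10, 100, 1000, 5000)))   # True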
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the fappendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. 
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... @JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad It's enough to show everything works for generating cofaces and codegeneracies the codegeneracies are free, the 0 and nth cofaces are free all of those can be done treating frak{C} as a black box the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers. 
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
Banach Fixed-Point Theorem

Theorem

Let $\left({M, d}\right)$ be a complete metric space. Let $f: M \to M$ be a contraction. That is, there exists $q \in \left[{0 \,.\,.\, 1}\right)$ such that for all $x, y \in M$: $d \left({f \left({x}\right), f \left({y}\right)}\right) \le q\, d \left({x, y}\right)$ Then there exists a unique fixed point of $f$.

Proof

Uniqueness: Let $f$ have two fixed points $p_1, p_2 \in M$. Let's prove $p_1=p_2$:
\(\displaystyle d \left( p_1,p_2 \right)\) \(=\) \(\displaystyle d \left({f \left({p_1}\right), f \left({p_2}\right)}\right)\) Definition of fixed point.
\(\displaystyle \) \(\le\) \(\displaystyle q\, d \left({p_1, p_2}\right)\) Map $f$ is a contraction.
\(\displaystyle \) \(\le\) \(\displaystyle d \left({p_1, p_2}\right)\) Assumed $q\lt 1$.
Then $d \left( p_1,p_2 \right) = 0$. Metric space property (M4) implies $p_1 = p_2$.

Existence: The plan is to obtain $p=\lim_{n\to\infty} a_n$, where $a_0 \in M$ is arbitrary and $a_{n+1}=f\left(a_n\right)$. Induction on $n$ applies to obtain the contractive estimate: $d \left( a_{n+1}, a_n \right) \le q^n\, d \left( a_1, a_0\right)$
Induction details, $n=1$:
\(\displaystyle d \left(a_2, a_1 \right)\) \(=\) \(\displaystyle d \left( f\left(a_1\right), f\left(a_0\right) \right)\) Definition of sequence $\sequence{a_n}$.
\(\displaystyle \) \(\le\) \(\displaystyle q\, d \left( a_1, a_0\right)\) Map $f$ is a contraction.
Assume the contractive estimate for $n=k$. Induction details for $n=k+1$:
\(\displaystyle d \left(a_{k+2}, a_{k+1} \right)\) \(=\) \(\displaystyle d \left( f\left(a_{k+1}\right), f\left(a_k\right) \right)\)
\(\displaystyle \) \(\le\) \(\displaystyle q\, d \left( a_{k+1}, a_k \right)\) Map $f$ is a contraction.
\(\displaystyle \) \(\le\) \(\displaystyle q\, q^k\, d \left( a_1, a_0\right)\) Induction hypothesis applied.
Induction complete.
Let's prove the sequence $\sequence{a_n}$ is a Cauchy sequence in $M$ by showing that $d \left( a_{n+m}, a_n \right)$ can be made arbitrarily small, uniformly in $m$, by taking $n$ large:
\(\displaystyle d(a_{n+m},a_n)\) \(\le\) \(\displaystyle \sum_{j=n}^{n+m-1} d(a_{j+1},a_j)\) Metric space triangle inequality (M2) and a telescoping sum.
\(\displaystyle \) \(\le\) \(\displaystyle \sum_{j=n}^{n+m-1} q^j\, d(a_1,a_0)\) Apply the contractive estimate.
\(\displaystyle \) \(\le\) \(\displaystyle q^n\left(\dfrac{1-q^m}{1-q}\right) d(a_1,a_0)\) Geometric sum identity applied on the right.
Known facts are $\lim_{n\to\infty} q^n=0$ and $\dfrac{1-q^m}{1-q}\le \dfrac{1}{1-q}$. Hence $d\left(a_{n+m},a_n\right) \le \dfrac{q^n}{1-q}\, d\left(a_1,a_0\right) \to 0$ as $n \to \infty$, uniformly in $m$. The sequence $\sequence{a_n}$ is a Cauchy sequence, convergent to some $p$ in $M$. Then:
\(\displaystyle d \left({f \left(p\right), p}\right)\) \(\le\) \(\displaystyle d \left({f \left(p\right), f \left(a_n\right)}\right) + d \left({f \left(a_n\right), p}\right)\) Metric space triangle inequality (M2).
\(\displaystyle \) \(\le\) \(\displaystyle q\, d \left( p , a_n \right) + d \left( a_{n+1}, p \right)\) Map $f$ is a contraction and $a_{n+1}=f\left(a_n\right)$.
The right side has limit zero as $n \to \infty$. Then $d(f(p),p)=0$. Then $f(p)=p$ by metric space property (M4). $\blacksquare$

Also known as

Also known as the Contraction Mapping Theorem, Contraction Theorem, Banach Contraction Theorem and Contraction Lemma.

Source of Name

This entry was named for Stefan Banach.
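As a concrete illustration of the existence argument, here is a small Python sketch (my own example, not part of the theorem) that iterates $a_{n+1} = f \left({a_n}\right)$ for the contraction $f = \cos$ on the complete metric space $\left[{0 \,.\,.\, 1}\right]$, where $q = \sin 1 \lt 1$, and compares the residual at the computed point with the a priori bound $\dfrac{q^n}{1-q} d \left({a_1, a_0}\right)$ obtained by letting $m \to \infty$ in the Cauchy estimate above.

import math

f = math.cos                 # a contraction on [0, 1]: |cos'(x)| = |sin x| <= sin(1) < 1
q = math.sin(1.0)            # contraction constant

a = 0.5                      # a_0, any point of [0, 1]
d10 = abs(f(a) - a)          # d(a_1, a_0)
for n in range(30):          # compute a_1, a_2, ..., a_30
    a = f(a)

print(a, abs(f(a) - a))      # a_30 is close to the fixed point p ~ 0.739085, and f(a_30) ~ a_30
print(q**30 / (1 - q) * d10) # a priori bound on d(a_30, p); the actual error is smaller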
Definition:Divisor Function

Definition

The divisor function: $\displaystyle \sigma_\alpha \left({n}\right) = \sum_{m \mathop \backslash n} m^\alpha$

Also see

Definition:Tau Function: $\sigma_0 \left({n}\right)$ is the number of divisors of $n$ and is frequently written $d \left({n}\right)$, or $\tau \left({n}\right)$
For an adiabatic system like a piston where $\delta Q = 0$, using the first law of thermodynamics gives you the following expression: $$\mathrm dU = \delta Q + \delta W$$ $$\mathrm dU = - p\,\mathrm dV$$ This expression is pretty much useless by itself, however, in that you can't integrate it directly, since $T$, $V$, and $p$ are all changing interdependently in the system. However, if you assume the gas is ideal and make a few substitutions... $$C_V\,\mathrm dT = \frac{-nRT}{V}\,\mathrm dV$$ $$C_V \frac{1}{T}\,\mathrm dT = -nR\frac{1}{V}\,\mathrm dV$$ $$C_V \int_{T_1}^{T_2} \frac{1}{T}\,\mathrm dT = -nR\int_{V_1}^{V_2} \frac{1}{V}\,\mathrm dV$$ $$C_V \cdot \ln\left(\frac{T_2}{T_1}\right) = -nR \cdot \ln\left(\frac{V_2}{V_1}\right)$$ $$C_V \cdot \ln\left(\frac{T_2}{T_1}\right) = -(C_p - C_V) \cdot \ln\left(\frac{V_2}{V_1}\right) = (C_p - C_V) \cdot \ln\left(\frac{V_1}{V_2}\right)$$ $$\ln\left(\frac{T_2}{T_1}\right) = \left(\frac{C_p}{C_V}-1\right) \cdot \ln\left(\frac{V_1}{V_2}\right)$$ $$\ln\left(\frac{T_2}{T_1}\right) = \ln\left[\left(\frac{V_1}{V_2}\right)^{\gamma-1}\right], \qquad \gamma = \frac{C_p}{C_V}$$ $$\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1}$$ From here it's just more cycling through variables; see if you can work the mathematics a bit to get your second relationship.
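As a quick numerical check of the final relation (the diatomic gas and the 2:1 compression below are arbitrary example values, not part of the question):

gamma = 7 / 5               # Cp/Cv for a diatomic ideal gas
T1 = 300.0                  # initial temperature in K
V1_over_V2 = 2.0            # the gas is compressed to half its volume

T2 = T1 * V1_over_V2 ** (gamma - 1)
print(f"T2 = {T2:.1f} K")   # about 395.9 K: adiabatic compression heats the gas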
Difference between revisions of "Timeline of prime gap bounds" Line 841: Line 841: | | | [http://math.mit.edu/~drew/admissible_41588_474266.txt 474,266]? [EH] [m=4] ([http://terrytao.wordpress.com/2013/12/20/polymath8b-iv-enlarging-the-sieve-support-more-efficient-numerics-and-explicit-asymptotics/#comments Sutherland]) | [http://math.mit.edu/~drew/admissible_41588_474266.txt 474,266]? [EH] [m=4] ([http://terrytao.wordpress.com/2013/12/20/polymath8b-iv-enlarging-the-sieve-support-more-efficient-numerics-and-explicit-asymptotics/#comments Sutherland]) + + + + + + | | |} |} Revision as of 08:26, 28 January 2014 Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments 10 Aug 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) 14 May 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. 21 May 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations 28 May 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] 30 May 59,470,640 (Morrison) 58,885,998? (Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m 31 May 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] 1 Jun 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] 2 Jun 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) 3 Jun 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. 4 Jun 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples 5 Jun 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? (Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve 6 Jun 387,960 (Angelveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. 7 Jun 26,024? (vo8ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? 
(Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? 
(Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen]) 59,451 [m=2]#? (Nielsen]) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380? [m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? 
[m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 309,954? [EH] [m=5] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland) 1,512,832,950? [m=4] (Sutherland) 2,186,561,568#? [m=4] (Sutherland) 131,161,149,090#? [m=5] (Sutherland) Dec 24 474,320? [EH] [m=4] (Sutherland) 1,497,901,734? [m=4] (Sutherland) Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54? (Nielsen) 270? (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [GEH] - bound is conditional the generalized Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
Package Description
de.lmu.ifi.dbs.elki.algorithm.itemsetmining.associationrules: Association rule mining.
de.lmu.ifi.dbs.elki.algorithm.itemsetmining.associationrules.interest: Association rule interestingness measures.

Field and Description
protected InterestingnessMeasure AssociationRuleGeneration.interestingness: Interestingness measure to be used.
protected InterestingnessMeasure AssociationRuleGeneration.Parameterizer.interestMeasure: Parameter for interestingness measure.

Constructor and Description
AssociationRuleGeneration(AbstractFrequentItemsetAlgorithm frequentItemAlgo, InterestingnessMeasure interestMeasure, double minmeasure): Constructor.
AssociationRuleGeneration(AbstractFrequentItemsetAlgorithm frequentItemAlgo, InterestingnessMeasure interestMeasure, double minmeasure, double maxmeasure): Constructor.

Class and Description
class AddedValue: Added value (AV) interestingness measure: \( \text{confidence}(X \rightarrow Y) - \text{support}(Y) = P(Y|X)-P(Y) \).
class CertaintyFactor: Certainty factor (CF; Loevinger) interestingness measure: \( \tfrac{\text{confidence}(X \rightarrow Y) - \text{support}(Y)}{\text{support}(\neg Y)} \).
class Confidence: Confidence interestingness measure: \( \tfrac{\text{support}(X \cup Y)}{\text{support}(X)} = \tfrac{P(X \cap Y)}{P(X)}=P(Y|X) \).
class Conviction: Conviction interestingness measure: \(\frac{P(X) P(\neg Y)}{P(X\cap\neg Y)}\).
class Cosine: Cosine interestingness measure: \(\tfrac{\text{support}(A\cup B)}{\sqrt{\text{support}(A)\text{support}(B)}} =\tfrac{P(A\cap B)}{\sqrt{P(A)P(B)}}\).
class GiniIndex: Gini-index based interestingness measure, using the weighted squared conditional probabilities compared to the non-conditional priors.
class Jaccard: Jaccard interestingness measure: \[\tfrac{\text{support}(A \cup B)}{\text{support}(A \cap B)} =\tfrac{P(A \cap B)}{P(A)+P(B)-P(A \cap B)} =\tfrac{P(A \cap B)}{P(A \cup B)}\]
class JMeasure: J-Measure interestingness measure.
class Klosgen: Klösgen interestingness measure.
class Leverage: Leverage interestingness measure.
class Lift: Lift interestingness measure.
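Not ELKI's API, but for orientation, here is a plain-Python sketch of a few of the measures listed above, written directly in terms of the rule supports P(X), P(Y) and P(X and Y) for a rule X -> Y; the function names and the example numbers are mine.

def confidence(pxy, px):        return pxy / px                    # P(Y|X)
def added_value(pxy, px, py):   return confidence(pxy, px) - py    # P(Y|X) - P(Y)
def lift(pxy, px, py):          return pxy / (px * py)
def conviction(pxy, px, py):    return px * (1 - py) / (px - pxy)  # P(X)P(not Y) / P(X, not Y)
def cosine(pxy, px, py):        return pxy / (px * py) ** 0.5
def jaccard(pxy, px, py):       return pxy / (px + py - pxy)

# Example rule with support(X) = 0.4, support(Y) = 0.5, support of the combined itemset = 0.3:
print(confidence(0.3, 0.4), added_value(0.3, 0.4, 0.5), lift(0.3, 0.4, 0.5))   # 0.75 0.25 1.5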
It looks like you're new here. If you want to get involved, click one of these buttons! We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The pullback is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). 
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
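Since everything here involves only finite sets, the three maps and both adjunctions can be checked mechanically. Below is a small Python sketch (the toy set X, the map f, and the function names are my own, not from the lecture) that builds \(f^{\ast}\), \(f_{!}\) and \(f_{\ast}\) and tests the two adjunction equivalences on one choice of S and T.

```python
# A finite-set sketch of the three maps discussed above. X plays the role of the
# "system states", Y the "measurement readings", and f a toy measurement.
X = {0, 1, 2, 3, 4, 5}
Y = {0, 1, 2}
f = lambda x: x % 3                       # a toy "measurement" f : X -> Y

def preimage(T):          # f^*(T) = {x : f(x) in T}
    return {x for x in X if f(x) in T}

def image(S):             # f_!(S) = {y : y = f(x) for SOME x in S}
    return {f(x) for x in S}

def direct_image_all(S):  # f_*(S) = {y : every x with f(x) = y lies in S}
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

S, T = {0, 3}, {0}
# Left adjoint:  f_!(S) <= T  iff  S <= f^*(T)
print(image(S) <= T, S <= preimage(T))             # True True
# Right adjoint: f^*(T) <= S  iff  T <= f_*(S)
print(preimage(T) <= S, T <= direct_image_all(S))  # True True
```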
Classical Euclidean geometry is one of the oldest branches of mathematics, and in my opinion still as beautiful as ever. Here is a cute little geometric gemstone that was introduced to me recently by a fellow staff member at MOSP. Consider a triangle $ABC$ drawn in the plane. Let $M$ be the midpoint of $BC$, and let $O$ be the circumcenter of triangle $ABC$, that is, the center of the circle $\Omega$ passing through $A$, $B$, and $C$. Now let $\omega$ the circumcircle of triangle $BOC$, and let $D$ be the point on $\omega$ outside of $ABC$ for which $MD$ is perpendicular to $BC$. Then it turns out that the line $AD$ can be obtained by reflecting the median $AM$ across the angle bisector of angle $A$! This reflection of $AM$ is called a symmedian of the triangle. How can we go about proving this? Well, it suffices to show that angle $BAD$ is congruent to angle $CAM$. One way to go about showing angle measures are equal is by finding similar triangles. Triangles $AMC$ and $ABD$ look promising; let’s start by calculating angle $ABD$. This is equal to angle $B$ plus angle $CBD$, which is an inscribed angle on circle $\omega$. By the inscribed angle theorem (there is a beautiful interactive demonstration at that link!) angle $CBD$ is equal to angle $DOC$, which is half the arc $BC$ on circle $\Omega$. Thus angle $DOC$, and hence angle $CBD$, is equal to angle $A$ of the triangle. We now have that angle $ABD$ is equal to angle $B$ plus angle $A$, which is $180-C$ (in degrees). So angle $ADB$ is $C-\theta$ where $\theta$ is angle $BAD$. Unfortunately this is not congruent to angle $C$, so triangles $AMC$ and $ABD$ are not similar. However, they’re rather close to being similar, in the following sense. Suppose we extend line $AB$ to a point $X$ for which $XD=DB$. Then triangle $BXD$ is isosceles, so angle $AXD$ is equal to angle $XBD$, which is supplementary to angle $ABD$. By our calculation above, angle $ABD$ is $A+B$ and so $AXD=XBD=C$. So, it is possible that triangles $ADX$ and $AMC$ are similar; this would suffice. (Notice also that this is a classic case of the falsity of “Angle Side Side” similarity – if it only takes an angle and two side lengths, then $ABD$ and $AMC$ would be similar too!) Finally, we also extend $XD$ to meet the extension of $AC$ at $Y$. Then by looking at the sum of the angles in triangle $AXY$, we have that the angle at $Y$ is angle $B$. We also know by symmetry that angle $DCY$ is $B$, and so $DCY$ is isosceles and $DC=DY$. But $DC=DB$ as well since $D$ lies on the perpendicular bisector of $BC$. So $$DY=DC=DB=DX,$$ and so $D$ is the midpoint of $XY$! We are now almost done. Triangles $ABC$ and $AYX$ are now similar since all their angles are congruent, so we have $$\frac{AC}{AX}=\frac{BC}{XY}=\frac{2MC}{2DX}=\frac{MC}{DX},$$ so by Side Angle Side similarity, triangles $AMC$ and $ADX$ are similar. So indeed, angle $BAD$ is equal to angle $MAC$.
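If you want to convince yourself numerically before (or after) following the proof, here is a small check with coordinates. The particular triangle is an arbitrary choice of mine, and BC is placed on the x-axis so that the perpendicular to BC through M is easy to write down.

```python
# A quick numerical sanity check of the symmedian claim, for one concrete
# (arbitrarily chosen) triangle with BC on the x-axis and A above it.
import numpy as np

def circumcenter(P, Q, R):
    # Solve 2(Q-P)·X = |Q|^2 - |P|^2 and 2(R-P)·X = |R|^2 - |P|^2 for X.
    Amat = 2.0 * np.array([Q - P, R - P])
    b = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(Amat, b)

def angle(u, v):
    return np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

A = np.array([0.3, 2.1]); B = np.array([-1.0, 0.0]); C = np.array([2.0, 0.0])
M = (B + C) / 2
O = circumcenter(A, B, C)        # circumcenter of ABC
Oc = circumcenter(B, O, C)       # center of omega, the circle through B, O, C
r = np.linalg.norm(B - Oc)

# D lies on omega with MD perpendicular to BC; since BC is on the x-axis, that
# perpendicular is vertical, and we take the intersection below BC (the side
# away from A, i.e. "outside" the triangle for this configuration).
D = np.array([M[0], Oc[1] - np.sqrt(r**2 - (M[0] - Oc[0])**2)])

print(angle(B - A, D - A), angle(C - A, M - A))  # both ~33.5 degrees here
```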
Edward Frenkel (together with Boris Feigin and others) has proven many interesting results connecting the representation theory of an affine Kac-Moody algebra at the critical level with the geometry of opers. Most of these results are collected in Frenkel's book "Langlands correspondence for loop groups". Assuming that $G$ is a connected simply-connected algebraic group with a simple Lie algebra $\mathfrak{g}$, he proves for example the following results. (1) There is an (equivariant with respect to some group actions) isomorphism between the centre $Z(\hat{\mathfrak{g}})$ of the completed universal enveloping algebra of $\hat{\mathfrak{g}}$ at the critical level and the algebra $\mathrm{Fun}\ \mathrm{Op}_{{}^LG}(D^\times)$ of functions on the space of ${}^LG$-opers on the punctured formal disc, where ${}^LG$ denotes the Langlands dual group. (2) There is also an isomorphism between the centre $\mathfrak{z}(\hat{\mathfrak{g}})$ of the vertex algebra $V_{\kappa_c}(\hat{\mathfrak{g}})$ (whose underlying vector space is the vacuum module) and the algebra $\mathrm{Fun}\ \mathrm{Op}_{{}^LG}(D)$ of functions on the space of ${}^LG$-opers on the formal disc. (3) There is an isomorphism between the endomorphism ring $\mathrm{End}_{\hat{\mathfrak{g}}}(\mathbb{M}_\lambda)$ of the Verma module $\mathbb{M}_\lambda$ and the algebra $\mathrm{Fun}\ \mathrm{Op}_{{}^LG}^{RS}(D)_{\varpi(-\lambda-\rho)}$ of functions on the space of ${}^LG$-opers on the formal disc with regular singularities and residue $\varpi(-\lambda-\rho)$. (4) Let $\lambda$ be a dominant integral weight. There is an isomorphism between the endomorphism ring $\mathrm{End}_{\hat{\mathfrak{g}}}(\mathbb{V}_\lambda)$ of the Weyl module $\mathbb{V}_\lambda$ and the algebra $\mathrm{Fun}\ \mathrm{Op}_{{}^LG}^\lambda$, where $\mathrm{Op}_{{}^LG}^\lambda \subset \mathrm{Op}_{{}^LG}^{RS}(D)_{\varpi(-\lambda-\rho)}$ is the subspace of opers with trivial monodromy. My question is: do these results hold verbatim if we replace $\mathfrak{g}$ by a reductive Lie algebra? I am essentially interested in the case $G=GL_n, \mathfrak{g}=\mathfrak{gl}_n$. Or is some modification required?
The Annals of Statistics Ann. Statist. Volume 25, Number 3 (1997), 948-969. On nonparametric estimation of density level sets Abstract Let $X_1, \dots, X_n$ be independent identically distributed observations from an unknown probability density $f(\cdot)$. Consider the problem of estimating the level set $G = G_f(\lambda) = \{x \in \mathbb{R}^2: f(x) \geq \lambda\}$ from the sample $X_1, \dots, X_n$, under the assumption that the boundary of $G$ has a certain smoothness. We propose piecewise-polynomial estimators of $G$ based on the maximization of local empirical excess masses. We show that the estimators have optimal rates of convergence in the asymptotically minimax sense within the studied classes of densities. We also find the optimal convergence rates for estimation of convex level sets. A generalization to the $N$-dimensional case, where $N > 2$, is given. Article information Source Ann. Statist., Volume 25, Number 3 (1997), 948-969. Dates First available in Project Euclid: 20 November 2003 Permanent link to this document https://projecteuclid.org/euclid.aos/1069362732 Digital Object Identifier doi:10.1214/aos/1069362732 Mathematical Reviews number (MathSciNet) MR1447735 Zentralblatt MATH identifier 0881.62039 Citation Tsybakov, A. B. On nonparametric estimation of density level sets. Ann. Statist. 25 (1997), no. 3, 948--969. doi:10.1214/aos/1069362732. https://projecteuclid.org/euclid.aos/1069362732
EDIT: here's a better approach; in particular, the formula given in my original answer for $d(n!)$ is completely incorrect as pointed out by Gerhard Paseman. Divide the primes in $[1,n]$ into the "small primes" $p \le \sqrt{n}$ and the "large primes" $p \ge \sqrt{n}$. Denote by $\pi[a,b]$ the number of primes in the interval $[a,b]$. For any two small primes $p \neq q$ we have $v_p(n!) \neq v_q(n!)$ - just compare the terms $\lfloor n/p \rfloor$ and $\lfloor n/q \rfloor$. For any large prime $p$ we have $v_p(n!) = \lfloor n/p \rfloor$. In addition, we have $v_p(n!) < n$ for any (small or large) prime $p$. Now the number of divisors of $n!$ is $\prod_{p \le n\text{ prime}} (1+v_p(n!))$. We split this product into small and large primes. The product of the small-prime factors is the product of $\pi[1,\sqrt{n}]$ distinct integers between $\sqrt{n}$ and $n$, and hence divides $n!$. In fact, aside from $1+v_2(n!)$, the small-prime factors all lie between $\sqrt{n}$ and $n/2$, and so their product divides $(n/2)!$. For the large primes a bit more work is needed. We count the number of primes with $\lfloor n/p \rfloor = k$: this is just (up to boundary terms) $\pi\left[\frac{n}{k+1}, \frac{n}{k}\right].$ So the product of the factors for large primes looks something like $$P = 2^{\pi\left[\frac{n}{2},n\right]} \cdot 3^{\pi\left[\frac{n}{3},\frac{n}{2}\right]} \cdot 4^{\pi\left[\frac{n}{4},\frac{n}{3}\right]} \cdots \sqrt{n}^{\pi\left[\sqrt{n},\sqrt{n}+1\right]}.$$ We claim that for large $n$, this product divides $(n/3)!$. We have $$v_2(P) = \pi[n/2,n] + 2\pi[n/4,n/3] + \pi[n/6,n/5] + 3\pi[n/8,n/7] + \cdots.$$ We know that the partial sums of the sequence $\{v_2(2k)\}_{k \ge 1} = \{1, 2, 1, 3, 1, 2, 1, \cdots\}$ are always less than $2k$, so provided $\log n$ is large enough that PNT asymptotics are reasonably valid (e.g., $\pi[n/3, n/2] \ge \pi[n/4, n/3]$) we have $$v_2(P) \le \pi[1,n] \approx \frac{n}{\log n} < v_2((n/3)!) \approx \frac{n}{3}$$ for large enough $n$. An analogous argument gives $v_p(P) \le \pi[1,n/(p-1)]$ which is again eventually less than $v_p((n/3)!)$. To finish, we note that our total product is at most $(n/2)!(n/3)!\cdot (1+v_2(n!))$. Either $1 + v_2(n!)$ is a prime greater than $n/2$ or it is divisible by $(n/6)!$ for large enough $n$. Finally, we have $(n/2)!(n/3)!(n/6)!$ divides $n!$ by an elementary modular arithmetic argument. This doesn't give any explicit bound on $n$, but it shows why such a result must be true for large enough $n$. With a fair bit more analytic number theory and cleverness, an explicit bound could presumably be found. Vinoth is right: this is suitable for talented high school students. Here's a proof sketch. $d(n)$, the number of divisors, is multiplicative. I'll write $\pi[a,b]$ to denote the number of primes $p$ with $\lfloor a+1 \rfloor \le p \le \lfloor b \rfloor$. The number of divisors of $n!$ is then $$d(n!) = 2^{\pi[\sqrt{n},n]} \cdot 3^{\pi[\sqrt[3]{n},\sqrt{n}]} \cdot 4^{\pi[\sqrt[4]{n},\sqrt[3]{n}]} \cdots$$ The power of a prime $p$ dividing $n!$ is just $$v_p(n!)
= \left\lfloor \frac{n}{p} \right\rfloor + \left\lfloor \frac{n}{p^2} \right\rfloor + \left\lfloor \frac{n}{p^3} \right\rfloor + \cdots.$$ So one only needs to establish inequalities of the form $$\pi[\sqrt{n},n] + 2\pi[\sqrt[4]{n},\sqrt[3]{n}] + \cdots \le \left\lfloor \frac{n}{2} \right\rfloor + \left\lfloor \frac{n}{4} \right\rfloor + \left\lfloor \frac{n}{8} \right\rfloor + \cdots,$$ $$\pi[\sqrt[3]{n},\sqrt{n}] + \pi[\sqrt[6]{n},\sqrt[5]{n}] + 2\pi[\sqrt[9]{n},\sqrt[8]{n}] + \cdots \le \left\lfloor \frac{n}{3} \right\rfloor + \left\lfloor \frac{n}{9} \right\rfloor + \left\lfloor \frac{n}{27} \right\rfloor + \cdots,$$ and so on for primes $p \ge 5$. If we could establish the following general inequality for $m \le n$, we'd be done: $$\sum_{k \ge 1} \pi\left[\sqrt[km]{n},\sqrt[km-1]{n}\right] \le \left\lfloor \frac{n}{m} \right\rfloor.$$ The left-hand side is bounded above by $\pi[1,\sqrt[m-1]{n}]$, and the inequality $$\pi[1,\sqrt[m-1]{n}] \le \left\lfloor \frac{n}{m} \right\rfloor$$ is straightforward ($\pi[a,b] \le b - a + 1$) for large $n$ unless $m = 2$. In this case we can use an explicit PNT estimate of the form $\pi[1,n] \le n/C\log n$ for some $1 < C < 2$, or just use an estimate of the form $\pi(n) \le 2 + \frac{n}{3}$, which is true for all $n$ by arithmetic modulo 6.
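For experimentation, here is a short script of my own (using Legendre's formula) that computes $v_p(n!)$ and $d(n!)$ and reports for which small $n$ the divisor count actually divides $n!$. As the argument above indicates, the divisibility is only claimed for large enough $n$, and it does fail for a few small values.

```python
# v_p(n!) via Legendre's formula, the divisor count d(n!), and a report on
# which small n satisfy d(n!) | n!.
from math import factorial
from sympy import primerange

def v_p(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def d_factorial(n):
    """Number of divisors of n!."""
    result = 1
    for p in primerange(2, n + 1):
        result *= 1 + v_p(n, p)
    return result

for n in range(2, 31):
    print(n, d_factorial(n), factorial(n) % d_factorial(n) == 0)
```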
The fill rate is the fraction of customer demand that is met through immediate stock availability, without backorders or lost sales. The fill rate differs from the service level indicator. The fill rate has a considerable appeal to practitioners because it represents the fraction of the demand that is likely to be recovered or better serviced if the inventory performance were improved. The fill rate is measured empirically by averaging the number of correctly serviced requests over the total number of requests. Fill rate and service level are distinct The service level is often confused with the fill rate, and vice versa. Yet, the two indicators are numerically different. While the two indicators are quite correlated, it is possible to find real-world situations where a high service level does not translate into a high fill rate, and the other way around. Such situations tend to arise more frequently when demand is sparse (as for spare parts, for example) or when demand is erratic (as in the case of books). Example: let’s consider a bookseller selling a school manual. There is 1 order per day on average. Let’s assume that on average, out of 20 requests for the book, 19 requests come from individual students who require only a single copy of the book. In addition to this, 1 request out of 20 comes from a school teacher (we are still looking at the average), and the teacher asks for 20 copies because she is buying for her whole classroom. If the bookseller keeps 10 book units in stock, and if we assume the lead time to be 1 day, then the service level is 95% (19/20=0.95) as nearly all the students will be served with their book. However, the teacher’s order request will be systematically declined as the stock never gets big enough to cover a whole classroom. Thus, in this case, the fill rate is close to 50% (19/(19+20) ≈ 0.5) as the teacher’s request accounts for slightly more than half of the total demand. Formal definition In order to shed some light on the exact respective definitions of the fill rate and the service level, we need to introduce a certain degree of formalism. Let $X$ be a random variable representing the demand over the next cycle. Let $s$ be the stock available, that is, the quantity of stock readily available to service incoming requests. The service level $\tau_1$ is written as: $$\tau_1(s) = \mathbf{P}(X \leq s)$$ The fill rate $\tau_2$ is written as: $$\tau_2(s) = \frac{\mathbb{E}[\text{min}(X,s)]}{\mathbb{E}[X]}$$ Indeed $\text{min}(X,s)$ represents the restriction that the available stock imposes on the quantities to be serviced without delay. If the actual demand value $x$ is lower than $s$, then $x$ units get served without delay; otherwise, only $s$ units get served without delay. Calculating the fill rate with Envision Lokad offers the possibility to compute a probabilistic forecast of the demand. The result of the forecast is a distribution of probabilities which can be turned into a distribution of fill rates with a dedicated function, namely the fillrate() function, as illustrated with: The variable FR is also a distribution of probabilities, and represents the marginal fill rate increments. In other words, FR contains the marginal contribution of each extra unit kept in stock in fulfilling the future demand.
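As a quick numeric illustration (my own sketch, not Envision code), the bookseller example can be replayed directly. Note that the informal example treats the oversized order as declined entirely, while the formal definition of $\tau_2$ assumes partial fulfilment, so the two readings give different numbers.

```python
import numpy as np

# Bookseller example from the text: out of 20 requests, 19 are for 1 copy and
# 1 is for 20 copies; 10 units are kept in stock.
stock = 10
requests = np.array([1] * 19 + [20])

# Service level: fraction of requests that can be served in full from stock.
service_level = np.mean(requests <= stock)               # 0.95

# Fill rate as read in the example: a request too large for the stock is
# declined entirely, so only the units of fully served requests count.
served = np.where(requests <= stock, requests, 0)
fill_rate_example = served.sum() / requests.sum()         # 19/39 ~ 0.49

# Fill rate from the formal definition E[min(X, s)] / E[X], which instead
# allows partial fulfilment of the oversized request.
fill_rate_formal = np.minimum(requests, stock).sum() / requests.sum()  # ~ 0.74

print(service_level, fill_rate_example, fill_rate_formal)
```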
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such simulator is the computational cost. GR calculations with its highly nonlinear nature, might be more costy than a computation of a protein. However I can see some ways approaching it. Recall how Slereah was building some kind of spaceitme database, that could be the first step. Next, one might be looking for machine learning techniques to help on the simulation by using the classifications of spacetimes as machines are known to perform very well on sign problems as a recent paper has shown Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (consicous desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream still yet to show any sign of revisiting the exact same idea, and there are no known instance of either sequel dreams nor recurrence dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go? enumaris: From what I understood from the dream, the warp drive showed here may be some variation of the alcuberrie metric with a global topology that has 4 holes in it whereas the original alcuberrie drive, if I recall, don't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? 
(one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging) Well. I'm a begginer in the study of General Relativity ok? My knowledge about the subject is based on books like Schutz, Hartle,Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet. So, what I meant about "Gravitational Double slit experiment" is: There's and gravitational analogue of the Double slit experiment, for gravitational waves? @JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes GWs would interfere just like light wave. Thank you @Secret and @JohnRennie . But for conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then like water waves, and light, we will see the pattern? So, if the source (like a Black Hole binary) are sufficent away, then in the regions of destructive interference, space-time would have a flat geometry and then with we put a spherical object in this region the metric will become schwarzschild-like. if** Pardon, I just spend some naive-phylosophy time here with these discussions** The situation was even more dire for Calculus and I managed! This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probabilty of h bar having software system infrastructure conversations, which confuse me like hell and is why I take refugee in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school ,regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We are also taught the 4 spacebar indentation convention @JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online(for free)? 
Apparently another one of my institutions great inspirations is to have a matlab-oriented computational physics course without having matlab on the universities pcs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa. @Blue Ok, thanks. I found a way by connecting to the servers of the university( the program isn't installed on the pcs on the computer room, but if I connect to the server of the university- which means running remotely another environment, i found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Existence and Uniqueness of Cycle Decomposition

Theorem Let $S_n$ denote the symmetric group on $n$ letters. Every $\sigma \in S_n$ may be expressed as a product of disjoint cycles, and this expression is unique up to the order in which the cycles are written.

Proof

Construction of Disjoint Permutations Let $\sigma \in S_n$ be a permutation. Let $\N_n$ be used to denote the (one-based) initial segment of natural numbers: $\N_n = \closedint 1 n = \set {1, 2, 3, \ldots, n}$ Let $\N_n / \mathcal R_\sigma = \set {E_1, E_2, \ldots, E_m}$ be the quotient set of $\N_n$ determined by $\mathcal R_\sigma$, the equivalence relation induced by $\sigma$ (so that $x \mathrel {\mathcal R_\sigma} y$ if and only if $y = \map {\sigma^t} x$ for some $t \in \Z$). Note that: $E \in \N_n / \mathcal R_\sigma \implies E \subseteq \N_n$ For any $E_i \in \N_n / \mathcal R_\sigma$, let $\rho_i: \paren {\N_n \setminus E_i} \to \paren {\N_n \setminus E_i}$ be the identity mapping on $\N_n \setminus E_i$. Also, let $\phi_i = \tuple {E_i, E_i, R}$ be a relation where $R$ is defined as: $\forall x, y \in E_i: \tuple {x, y} \in R \iff \map \sigma x = y$ It is easily seen that $\phi_i$ is many-to-one. For all $x \in E_i$ we have $x \mathrel {\mathcal R_\sigma} \map \sigma x$, hence $\map \sigma x \in E_i$, and so $\sigma \sqbrk {E_i} \subseteq E_i$, which shows that $\phi_i$ is left-total. Thus $\phi_i$ is a mapping with $\map {\phi_i} x = \map \sigma x$, and it is injective because $\sigma$ is. So by Injection from Finite Set to Itself is Permutation, $\phi_i$ is a permutation on $E_i$. Since $E_i \cup \paren {\N_n \setminus E_i} = \N_n$, by Union of Bijections with Disjoint Domains and Codomains is Bijection we may define the permutation $\sigma_i \in S_n$ by: $\map {\sigma_i} x = \map {\paren {\phi_i \cup \rho_i} } x = \begin{cases} \map \sigma x & : x \in E_i \\ x & : x \notin E_i \end{cases}$ $\Box$

These Permutations are Cycles It is now to be shown that all of the $\sigma_i$ are cycles. From Order of Element Divides Order of Finite Group, there exists $\alpha \in \Z_{\gt 0}$ such that $\sigma_i^\alpha = e$, and so: $\map {\sigma_i^\alpha} x = \map e x = x$ By the Well-Ordering Principle, let $k = \min \set {\alpha \in \N_{\gt 0}: \map {\sigma_i^\alpha} x = x}$ Because $\sigma_i$ fixes each $y \notin E_i$, it suffices to show that: $E_i = \set {x, \map {\sigma_i} x, \ldots, \map {\sigma_i^{k - 1} } x}$ for some $x \in E_i$. If $x \in E_i$, then for all $t \in \Z$: $x \mathrel {\mathcal R_\sigma} \map {\sigma_i^t} x \implies \map {\sigma_i^t} x \in E_i$ It has been shown that: $(1) \quad \set {x, \map {\sigma_i} x, \ldots, \map {\sigma_i^{k - 1} } x} \subseteq E_i$ Now let $x, y \in E_i$. Then $x \mathrel {\mathcal R_\sigma} y$, so $\map {\sigma_i^t} x = y$ for some $t \in \Z$, by Permutation Induces Equivalence Relation. Writing $t = k q + r$ with $q \in \Z$ and $0 \le r \lt k$ by the Division Theorem, we have $\map {\sigma_i^r} {\map {\sigma_i^{k q} } x} = y$, and hence $\map {\sigma_i^r} x = y$ by Fixed Point of Permutation is Fixed Point of Power. It has been shown that: $(2) \quad E_i \subseteq \set {x, \map {\sigma_i} x, \ldots, \map {\sigma_i^{k - 1} } x}$ Combining $(1)$ and $(2)$ yields: $E_i = \set {x, \map {\sigma_i} x, \ldots, \map {\sigma_i^{k - 1} } x}$ $\Box$

The Product of These Cycles Forms the Permutation Finally, it is now to be shown that $\sigma = \sigma_1 \sigma_2 \cdots \sigma_m$.
$x \in \N_n \implies x \in E_j$ for some $j \in \set {1, 2, \ldots, m}$. Therefore: $\map {\sigma_1 \sigma_2 \cdots \sigma_m} x = \map {\sigma_1 \sigma_2 \cdots \sigma_j} x = \map {\sigma_j} x = \map \sigma x$ where the first two equalities hold because every $\sigma_i$ with $i \ne j$ fixes $E_j$ pointwise while $\sigma_j \sqbrk {E_j} = E_j$, and the last follows from the definition of $\sigma_j$. And so existence of a cycle decomposition has been shown. $\Box$

Uniqueness of Cycle Decomposition Take the cycle decomposition of $\sigma$ constructed above, $\sigma_1 \sigma_2 \cdots \sigma_m$, and let $\tau_1 \tau_2 \cdots \tau_s$ be any other expression of $\sigma$ as a product of disjoint cycles. It is assumed that each product describes $\sigma$ completely and does not contain any duplicate $1$-cycles. Let $x$ be a moved element of $\sigma$. Then there exists a $j \in \set {1, 2, \ldots, s}$ such that $\map {\tau_j} x \ne x$. And so: $\map \sigma x = \map {\tau_1 \tau_2 \cdots \tau_j} x = \map {\tau_j} x$ by Power of Moved Element is Moved. It has already been shown that $x \in E_i$ for some $i \in \set {1, 2, \ldots, m}$. Therefore: $\map {\sigma_i} x = \map {\tau_j} x$ and $\map {\sigma_i^2} x = \map {\tau_{j'} \tau_j} x = \map {\tau_j^2} x$ by Power of Moved Element is Moved, because $\map {\sigma_i^2} x \ne \map {\sigma_i} x$ and the product is disjoint. Continuing in this way up to $\map {\sigma_i^{k - 1} } x = \map {\tau_j^{k - 1} } x$, this effectively shows that $\sigma_i = \tau_j$. Doing this for every $E_i$ implies that $m = s$ and that there exists a $\rho \in S_m$ such that: $\sigma_{\map \rho i} = \tau_i$ In other words, $\tau_1 \tau_2 \cdots \tau_m$ is just a reordering of $\sigma_1 \sigma_2 \cdots \sigma_m$. $\blacksquare$
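The construction in the existence part is essentially an algorithm: follow the orbit of each element until it closes up. A minimal Python rendering of that idea (my own; the permutation is given as a dictionary) is:

```python
# Starting from a permutation sigma of {1, ..., n}, follow each element's orbit
# (its equivalence class E_i under R_sigma) to obtain the disjoint cycles.
def cycle_decomposition(sigma):
    """Return the disjoint cycles of a permutation given as {i: sigma(i)}."""
    seen, cycles = set(), []
    for start in sorted(sigma):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:        # build the orbit of `start`
            seen.add(x)
            cycle.append(x)
            x = sigma[x]
        if len(cycle) > 1:          # omit fixed points (1-cycles)
            cycles.append(tuple(cycle))
    return cycles

sigma = {1: 3, 2: 2, 3: 5, 4: 6, 5: 1, 6: 4}
print(cycle_decomposition(sigma))   # [(1, 3, 5), (4, 6)]
```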
I've been working my way through an old post, but I don't think the solution offered can be correct. The question is: Find the generating function (within a choice of sign) for: $$c_{n+1} = 2\sum_{k=0}^{n}c_k c_{n-k},\;\;\;n=1,2,3,4,\dots\\c_0=1, \;c_1=3$$ I think this recurrence relation generates the numbers $$1, 3, 12, 66, 408, 2712, ...$$ The solution offered is: Let $$g(x)=\sum_{n\ge 0}c_nx^n$$ be the ordinary generating function for the sequence. Then the standard formula for the Cauchy product of two summations yields $$\big(g(x)\big)^2=\left(\sum_{n\ge 0}c_nx^n\right)^2=\sum_{n\ge 0}\left(\sum_{k=0}^nc_kc_{n-k}\right)x^n=\frac12\sum_{n\ge 0}c_{n+1}x^n\;,$$ and multiplication by $2x$ gives us $$2x\big(g(x)\big)^2=x\sum_{n\ge 0}c_{n+1}x^n=\sum_{n\ge 1}c_nx^n=g(x)-c_0\;.$$ This is a quadratic in $g(x)$, so it can straightforwardly be solved for $g(x)$. From this I've deduced that $$g(x)=\frac{1-\sqrt{1-8x}}{4x}$$ I've gone through this proof carefully and can't see an error, but I know from Wolfram Alpha that it does not generate the numbers I was expecting. It also makes no use of the fact that $c_1 = 3$, which can't be right. I think the correct generating function is $$GF=\frac{1-\sqrt{1-8x-8x^2}}{4x}$$ but can't see how to obtain this. It generates the numbers I was expecting and is in Sloane's OEIS: https://oeis.org/search?q=1%2C3%2C12%2C66%2C408&language=english&go=Search FYI: The original post, from 2013, is here: How do you find generating function? For anyone interested in the Catalan Numbers, this would be a good 'one step on' question, so if anyone can spot where the glitch is, I think it would be most useful.
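Not an answer, but an easy way to check the numbers: the short sympy script below (my own) regenerates the sequence from the recurrence and expands the candidate generating function, confirming that the latter reproduces 1, 3, 12, 66, 408, 2712.

```python
from sympy import symbols, sqrt, series

# Recurrence check: c_{n+1} = 2 * sum_{k=0}^{n} c_k c_{n-k}, valid for n >= 1.
c = [1, 3]
for n in range(1, 5):
    c.append(2 * sum(c[k] * c[n - k] for k in range(n + 1)))
print(c)   # [1, 3, 12, 66, 408, 2712]

# Series expansion of the candidate generating function from the question.
x = symbols('x')
g = (1 - sqrt(1 - 8*x - 8*x**2)) / (4*x)
print(series(g, x, 0, 6))   # should start 1 + 3*x + 12*x**2 + 66*x**3 + ...
```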
The traction is defined as $$\mathbf{t} = \mathbf{n} \cdot \boldsymbol{\sigma}$$ In terms of components, the zero-traction condition is $$ t_j = \sum_i n_i \sigma_{ij} = 0$$ From the above you can see that the stress components don't necessarily have to be zero for the traction to be zero. For a linear elastic material, $$ \boldsymbol{\sigma} = \mathsf{C}:\nabla{\mathbf{u}}$$ In component form, $$ \sigma_{ij} = \sum_k \sum_l C_{ijkl} \frac{\partial u_k}{\partial x_l}$$ Once again, since there is no requirement that all the $C_{ijkl}$ values have to be positive, you don't need all the displacement gradients to be zero for the stresses to be zero (which is not strictly necessary for zero-tractions anyway). However, zero displacement gradients will lead to zero tractions.
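A concrete numerical illustration of the point (the numbers are arbitrary): a stress state with nonzero in-plane components still produces zero traction on the face whose normal is \(e_z\), because only the components acting on that face enter \(t_j = \sum_i n_i \sigma_{ij}\).

```python
import numpy as np

# Plane-stress-like state: in-plane components nonzero, all components acting
# on the z-face zero, so the traction on the surface with normal e_z vanishes.
sigma = np.array([[50.0, 10.0, 0.0],
                  [10.0, -20.0, 0.0],
                  [0.0,   0.0,  0.0]])   # illustrative values (e.g. MPa)
n = np.array([0.0, 0.0, 1.0])            # outward unit normal of the free surface

t = n @ sigma                            # traction vector t_j = n_i sigma_ij
print(t)                                 # [0. 0. 0.] even though sigma != 0
```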
We study pattern formation in the new framework of multiplex networks, where activator and inhibitor species occupy separate nodes in different layers. Species react across layers but diffuse only within their own layer of distinct network topology. This multiplicity generates heterogeneous patterns with significant differences from those observed in single-layer networks. Remarkably, diffusion-induced instability can occur even if the two species have the same mobility rates, a condition which can never destabilize single-layer networks. The instability condition is revealed using perturbation theory and expressed by a combination of degrees in the different layers. Our theory demonstrates that the existence of such topology-driven instabilities is generic in multiplex networks, providing a new mechanism of pattern formation.

Activator-inhibitor system organized in a multiplex network

We consider multiplex networks of activator and inhibitor populations, where the different species occupy separate network nodes in distinct layers. Species react across layers according to the mechanism defined by the activator-inhibitor dynamics, and diffuse to other nodes in their own layer through connecting links (see figure). Such a process can be described by the equations $$\dot{u}_i(t) = f(u_i,v_i) + \sigma^{(u)}\sum_{j=1}^{N} L_{ij}^{(u)}\,u_j$$ $$\dot{v}_i(t) = g(u_i,v_i) + \sigma^{(v)}\sum_{j=1}^{N} L_{ij}^{(v)}\,v_j$$ where \(u_i\) and \(v_i\) are the densities of activator and inhibitor species in nodes \(i^{(u)}\) and \(i^{(v)}\) of layers \(G^{(u)}\) and \(G^{(v)}\), respectively. The superscripts \((u)\) and \((v)\) refer to activator and inhibitor. The activator nodes are labeled by indices \(i=1,2,\ldots,N\) in order of decreasing degrees. The same index ordering is applied to the inhibitor layer. The functions \(f(u_i,v_i)\) and \(g(u_i,v_i)\) specify the activator-inhibitor dynamics. The summation terms with the Laplacian matrices \(L^{(u)}\) and \(L^{(v)}\) describe diffusion processes in the two layers, and the constants \(\sigma^{(u)}\) and \(\sigma^{(v)}\) are the corresponding mobility rates. The onset of the instability occurs when \(\text{Re}\,\lambda=0\) for some pair of nodes \(i^{(u)}\) and \(i^{(v)}\). The instability condition is fulfilled when these nodes possess a combination of degrees \(k^{(u)}\) and \(k^{(v)}\) such that the inequality $$ k^{(u)} \leq \frac{f_u g_v - f_v g_u - f_u \sigma^{(v)} k^{(v)}}{g_v \sigma^{(u)} - \sigma^{(u)} \sigma^{(v)} k^{(v)}} $$ is satisfied. Here, \(f_u, f_v, g_u, g_v\) are partial derivatives at the uniform steady state. (A small numerical sketch of this condition is given after the reading list below.) The last inequality, taken as an equality, denotes a transcritical bifurcation where the Turing instability occurs; it is shown as the red curve in the \(k^{(u)} - k^{(v)}\) plane. In the same plane the blue curve stands for a saddle-node bifurcation [1], and points denote the degrees of pairs of nodes of a given multiplex network. The unstable nodes, where the Turing instability occurs and the non-uniform pattern starts to develop, are denoted by stars. As seen in the following video, these nodes leave the uniform steady state first, and the other nodes then differentiate, giving the final shape of the pattern, as shown in the movie below: If many pairs of nodes have degrees in the unstable regime defined by the instability condition above, then the non-uniform pattern that develops contains more nodes which differentiate from the uniform steady state. See an example in the following movie: Further reading: Pattern formation in multiplex networks. N. E. Kouvaris, S. Hata and A. Diaz-Guilera.
Scientific Reports 5, 10840 (2015). [Journal] Turing patterns in network-organized activator–inhibitor systems. H. Nakao and A. S. Mikhailov. Nature Physics 6, 544–550 (2010). [Journal] The Turing bifurcation in network systems: Collective patterns and single differentiated nodes. M. Wolfrum. Physica D: Nonlinear Phenomena 241 (16), 1351–1357 (2012). [Journal]
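As a rough numerical companion to the inequality quoted above, one can scan degree pairs and list those that satisfy the condition as written. The Jacobian entries and mobility rates below are made-up illustrative values (not taken from the paper), chosen only so that the reaction part alone is stable and the two mobilities are equal.

```python
# Evaluate the quoted degree-based instability condition on a grid of degrees.
f_u, f_v = 1.0, -1.0          # activator: self-activating, inhibited by v
g_u, g_v = 2.0, -1.5          # inhibitor: driven by u, self-damping
sigma_u = sigma_v = 0.5       # equal mobility rates, as emphasized in the text
# (f_u + g_v < 0 and f_u*g_v - f_v*g_u > 0, so the uniform state is stable
#  in the absence of diffusion)

def unstable(k_u, k_v):
    num = f_u * g_v - f_v * g_u - f_u * sigma_v * k_v
    den = g_v * sigma_u - sigma_u * sigma_v * k_v
    return den != 0 and k_u <= num / den

pairs = [(k_u, k_v) for k_u in range(1, 11) for k_v in range(1, 11)
         if unstable(k_u, k_v)]
print(pairs)   # e.g. low-degree activator nodes paired with high-degree inhibitor nodes
```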
I am seeking an already-written R package which could help with an optimization technique which is called Difference of Convex functions. This technique is sketched here and could be very useful for reinforcement learning processes. Does anyone know something that could be helpful? I have found a package called cccp, but it is all about conic optimization and I don't understand how it could be used for such a problem. I would like to solve the following (a bit simplified) minimization problem, which decomposes into a difference of two convex functions: $$\min_\mathbf{w} S(\mathbf{w}) =\min_\mathbf{w} \sum_{i=1}^{n} \min \left( \dfrac{|X_i\mathbf{w}|}{\phi}, 1 \right) = \min_\mathbf{w}~ (S_1(\mathbf{w}) - S_2(\mathbf{w}))$$ $$S_1(\mathbf{w}) = \sum_{i=1}^{n} \dfrac{|X_i\mathbf{w}|}{\phi} $$ $$S_2(\mathbf{w}) = \sum_{i=1}^{n} \left( \dfrac{|X_i\mathbf{w}|}{\phi} - 1 \right)^+$$ where $(x)^+ = \max(x, 0)$. If you have any suggestions or useful related links with examples, they would be much appreciated. PS. I have found that the difference-of-convex-functions optimization problem is closely related to the "concave-convex optimization procedure" (CCCP). If anyone has thoughts on how the aforementioned problem could be solved by CCCP, please share!
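In case it helps while looking for a package: the DC / CCCP iteration itself is only a few lines, since each step linearizes $S_2$ at the current iterate and minimizes the resulting convex surrogate. Below is a generic, hedged sketch in Python (not R, and not tied to the specific $S_1, S_2$ above); the inner solver is just a placeholder, and in practice you would exploit the structure of $S_1$ and add whatever constraints or normalization the real problem carries (note that, exactly as written, $S(\mathbf{w})$ is already minimized by $\mathbf{w}=0$).

```python
# A minimal sketch of the DC algorithm (DCA / convex-concave procedure):
# minimize S(w) = S1(w) - S2(w) by repeatedly linearizing S2 at the current
# iterate and minimizing the resulting convex upper bound.
import numpy as np
from scipy.optimize import minimize

def dca(S1, grad_S2, w0, n_iter=50):
    """Generic DCA loop: w_{t+1} in argmin_w S1(w) - <grad_S2(w_t), w>."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        g = grad_S2(w)                    # (sub)gradient of the concave part
        surrogate = lambda v: S1(v) - g @ v   # convex majorizer, up to a constant
        # Placeholder inner solver; replace with an LP/QP or proximal method
        # suited to the actual structure of S1 (it is nonsmooth here).
        w = minimize(surrogate, w, method="Nelder-Mead").x
    return w
```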
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [ie, \(\theta_1,\ldots,\theta_4\)] although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formula 21 and 22 are simply different 95% confidence procedures. Formula 21 is has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855. 
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1 - \alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\theta\) will differ from \(\alpha\) by more than \( 2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042 \). This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \( .956 \leq \theta \leq 1.227 \). On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355.
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between.688 and .956 or between1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that \( \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} \) Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
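To complement the excerpt, here is a Monte Carlo version of the sampling experiment in Python. The excerpt does not reproduce formulae (21) and (22), so the sketch below assumes a uniform\((0,\theta)\) model, which is consistent with the numbers quoted: the "Assistant" interval is the exact 95% interval built from the sample maximum, and the "Boss" interval is an approximate (CLT-based) one built from the sample mean.

```python
# Monte Carlo coverage check for two 95% confidence procedures, assuming
# twelve uniform(0, theta) observations per sample.
import numpy as np

rng = np.random.default_rng(0)
n, reps, theta = 12, 100_000, 3.0
c = 1.96 / np.sqrt(3 * n)            # relative half-width for the mean-based interval

hits_max = hits_mean = 0
width_max = width_mean = 0.0
for _ in range(reps):
    x = rng.uniform(0, theta, n)
    # Exact interval from the maximum: P(theta in [M, M * 20**(1/n)]) = 0.95.
    m = x.max()
    hits_max += (m <= theta <= m * 20 ** (1 / n))
    width_max += m * (20 ** (1 / n) - 1)
    # Approximate interval from the mean, solving |2*xbar - theta| <= c*theta.
    est = 2 * x.mean()
    lo, hi = est / (1 + c), est / (1 - c)
    hits_mean += (lo <= theta <= hi)
    width_mean += hi - lo

print(hits_max / reps, hits_mean / reps)     # both close to 0.95
print(width_max / reps, width_mean / reps)   # the max-based intervals are shorter
```

Running it also makes Neyman's side remark easy to see: both procedures cover the true \(\theta\) about 95% of the time, yet the maximum-based intervals come out substantially shorter on average.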
Revista Matemática Iberoamericana Full-Text PDF (368 KB) | Metadata | Table of Contents | RMI summary Volume 19, Issue 3, 2003, pp. 873–917 DOI: 10.4171/RMI/373 Published online: 2003-12-31 The Pressure Equation in the Fast Diffusion RangeEmmanuel Chasseigne [1]and Juan Luis Vázquez [2](1) Université François Rabelais, Tours, France (2) Universidad Autónoma de Madrid, Spain We consider the following degenerate parabolic equation $$ v_{t}=v\Delta v-\gamma|\nabla v|^{2}\quad\mbox{in $\mathbb{R}^{N} \times(0,\infty)$,} $$ whose behaviour depends strongly on the parameter $\gamma$. While the range $\gamma < 0$ is well understood, qualitative and analytical novelties appear for $\gamma>0$. Thus, the standard concepts of weak or viscosity solution do not produce uniqueness. Here we show that for $\gamma>\max\{N/2,1\}$ the initial value problem is well posed in a precisely defined setting: the solutions are chosen in a class $\mathcal{W}_s$ of local weak solutions with constant support; initial data can be any nonnegative measurable function $v_{0}$ (infinite values also accepted); uniqueness is only obtained using a special concept of initial trace, the $p$-trace with $p=-\gamma < 0$, since the standard concepts of initial trace do not produce uniqueness. Here are some additional properties: the solutions turn out to be classical for $t>0$, the support is constant in time, and not all of them can be obtained by the vanishing viscosity method. We also show that singular measures are not admissible as initial data, and study the asymptotic behaviour as $t\to \infty$. Keywords: Pressure equation, fast diffusion; well-posed problem, non-uniqueness; measure as initial trace, optimal initial data Chasseigne Emmanuel, Vázquez Juan Luis: The Pressure Equation in the Fast Diffusion Range. Rev. Mat. Iberoam. 19 (2003), 873-917. doi: 10.4171/RMI/373
Search Now showing items 1-10 of 155 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Revista Matemática Iberoamericana Full-Text PDF (244 KB) | Metadata | Table of Contents | RMI summary Volume 19, Issue 3, 2003, pp. 919–942 DOI: 10.4171/RMI/374 Published online: 2003-12-31 Calderón-Zygmund theory for non-integral operators and the $H^{\infty}$ functional calculusSönke Blunck [1]and Peer Christian Kunstmann [2](1) Université de Cergy-Pontoise, Cergy-Pontoise, France (2) Karlsruher Institut für Technologie (KIT), Germany We modify Hörmander's well-known weak type (1,1) condition for integral operators (in a weakened version due to Duong and McIntosh) and present a weak type $(p,p)$ condition for arbitrary operators. Given an operator $A$ on $L_2$ with a bounded $H^\infty$ calculus, we show as an application the $L_r$-boundedness of the $H^\infty$ calculus for all $r\in(p,q)$, provided the semigroup $(e^{-tA})$ satisfies suitable weighted $L_p\to L_q$-norm estimates with $2\in(p,q)$. This generalizes results due to Duong, McIntosh and Robinson for the special case $(p,q)=(1,\infty)$ where these weighted norm estimates are equivalent to Poisson-type heat kernel bounds for the semigroup $(e^{-tA})$. Their results fail to apply in many situations where our improvement is still applicable, e.g. if $A$ is a Schrödinger operator with a singular potential, an elliptic higher order operator with bounded measurable coefficients or an elliptic second order operator with singular lower order terms. Keywords: Calderón-Zygmund theory, functional calculus, elliptic operators Blunck Sönke, Kunstmann Peer Christian: Calderón-Zygmund theory for non-integral operators and the $H^{\infty}$ functional calculus. Rev. Mat. Iberoam. 19 (2003), 919-942. doi: 10.4171/RMI/374
It looks like you're new here. If you want to get involved, click one of these buttons! In the last few lectures, we have seen how to describe complex systems made out of many parts. Each part has abstract wires coming in from the left standing for 'requirements' - things you might want it to do - and from the right standing for 'resources' - things you might need to give it, so it can do what you want: We've seen that these parts can be stuck together in series, by 'composition': and in parallel, using 'tensoring': One reason I wanted to show you this is for you to practice reasoning with diagrams in situations where you can both compose and tensor morphisms. Examples include: functions between sets linear maps between vector spaces electrical circuits PERT charts the example we spent a lot of time on: feasibility relations or more generally, \(\mathcal{V}\)-enriched profunctors. The kind of structure where you can compose and tensor morphisms is called a 'monoidal category'. This is a category \(\mathcal{C}\) together with: a functor \(\otimes \colon \mathcal{C} \times \mathcal{C} \to \mathcal{C} \) called tensoring, an object \(I \in \mathcal{C}\) called the unit for tensoring, a natural isomorphism called the associator $$ \alpha_{X,Y,Z} \colon (X \otimes Y) \otimes Z \stackrel{\sim}{\longrightarrow} X \otimes (Y \otimes Z) $$ $$ \lambda_X \colon I \otimes X \stackrel{\sim}{\longrightarrow} X $$ $$ \rho_x \colon X \otimes I \stackrel{\sim}{\longrightarrow} X $$ We need the associator and unitors because in examples it's usually not true that \( (X \otimes Y) \otimes Z\) is equal to \(X \otimes (Y \otimes Z)\), etc. They're just isomorphic! But we want the associator and unitors to obey equations because they're just doing boring stuff like moving parentheses around, and if we use them in two different ways to go from, say, $$ ((W \otimes X) \otimes Y) \otimes Z) $$ to $$ W \otimes (X \otimes (Y \otimes Z)) $$ we want those two ways to agree! Otherwise life would be too confusing. If you want to see exactly what equations the associator and unitors should obey, read this: But beware: these equations, discovered by Mac Lane in 1963, are a bit scary at first! They say that certain diagrams built using tensoring, the associator and unitors commute, and the point is that Mac Lane proved a theorem saying these are enough to imply that all diagrams of this sort commute. This result called 'Mac Lane's coherence theorem'. It's rather subtle; if you're curious about the details try this: Note: monoidal categories may not necessarily have a natural isomorphism $$ \beta_{X,Y} \colon X \otimes Y \stackrel{\sim}{\longrightarrow} Y \otimes X .$$ When we have that, obeying some more equations, we have a 'braided monoidal category'. You can see the details in my notes. And when our braided monoidal category has the feature that braiding twice: $$ X\otimes Y \stackrel{\beta_{X,Y}}{\longrightarrow } Y \otimes X \stackrel{\beta_{Y,X}}{\longrightarrow } X \otimes Y $$is the identity, we have a 'symmetric monoidal category'. In this case we call the braiding a symmetry and often write it as $$ \sigma_{X,Y} \colon X \otimes Y \stackrel{\sim}{\longrightarrow} Y \otimes X $$ since the letter \(\sigma\) should make you think 'symmetry'. All the examples of monoidal categories I listed are actually symmetric monoidal - unless you think of circuit diagrams as having wires in 3d space that can actually get tangled up with each other, in which case they are morphisms in a braided monoidal category. Puzzle 278. 
Use the definition of monoidal category to prove the interchange law $$ (f \otimes g) (f' \otimes g') = ff' \otimes gg' $$ whenever \(f,g,f',g'\) are morphisms making either side of the equation well-defined. (Hint: you only need the part of the definition I explained in my lecture, not the scary diagrams I didn't show you. For a concrete sanity check of this law in the category of sets, see the sketch below.) Puzzle 279. Draw a picture illustrating this equation. Puzzle 280. Suppose \(f : I \to I\) and \(g : I \to I\) are morphisms in a monoidal category going from the unit object to itself. Show that $$ fg = gf .$$ To read other lectures go here.
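Here is a quick computational sanity check of the interchange law from Puzzle 278 in the most familiar example on the list above: sets and functions, with \(\otimes\) given by the Cartesian product. This is my own illustrative sketch, not part of the lecture, and the particular functions are arbitrary.

```python
# Interchange law in (Set, x): (f ⊗ g)(f' ⊗ g') = ff' ⊗ gg',
# where juxtaposition means composition, (f f')(x) = f(f'(x)).
def compose(f, fp):
    return lambda x: f(fp(x))

def tensor(f, g):
    return lambda pair: (f(pair[0]), g(pair[1]))

# arbitrary test functions
f  = lambda n: n + 1
fp = lambda n: 2 * n
g  = lambda s: s.upper()
gp = lambda s: s + "!"

lhs = compose(tensor(f, g), tensor(fp, gp))   # (f ⊗ g)(f' ⊗ g')
rhs = tensor(compose(f, fp), compose(g, gp))  # ff' ⊗ gg'

for x in range(5):
    for s in ["a", "ok"]:
        assert lhs((x, s)) == rhs((x, s))
print("interchange law checked on these inputs")
```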
For some reason, I had the dubious idea to watch Snakes On A Plane yesterday. If you haven’t seen the movie, the summary says it all: An FBI agent takes on a plane full of deadly and poisonous snakes, deliberately released to kill a witness being flown from Honolulu to Los Angeles to testify against a mob boss. As I struggled to pay attention throughout the movie, I started to wonder just how many snakes could possibly be on that plane? Then I started to wonder what would be the maximum number of snakes that could theoretically fit on the plane? Estimation is integral to most of the things I do. 1 I routinely use back of the envelope calculations in my work and in every day situations to estimate how long a project will take or to eat cheeseburgers. Making accurate approximations is an invaluable skill, composed of part art and part science. I’ve learned a lot about estimation simply by observing how other people work through these types of problems. There’s usually no right or wrong approach to making estimates. Accurate solutions can come from disparate strategies. This post is about some of the techniques I routinely use to make estimations and how I applied these techniques to estimate how many snakes could fit on a plane. My approach to this estimation problem showcases 4 main techniques which will make repeated appearances throughout the estimation. I’ve highlighted these techniques because they are generalizable to most estimation problems: In the movie, the snakes on the plane 2 range in size from tiny to large. To simplify the problem, I wanted to estimate the volume of a single snake as a fixed spherical volume. Here in New England, the majority of snakes are small, so for this problem, I used a golf ball as a proxy for the volume of a snake. Using a golf ball also allowed me to determine how accurate my estimation was, because the size of a golf ball is easy to determine. 3 The first thing I usually do in estimation problems is to break them apart into smaller sub-problems. I call this approach Divide and Conquer based on the algorithm design paradigm of the same name. I use Divide/Conquer because it’s conceptually easier to reason about smaller estimations. How far is it from Chicago to Coffee Club Island? I don’t know. It’s easier for me to start by estimating how far it is from Chicago to Cleveland or other familiar distances and then use this information to construct a final estimate. 4 Following this logic, I divided the problem into 3 main sub-problems: The sub-problems I’ve chosen may seem odd, but I deliberately set them up as ratio estimations. Ratios are dimensionless and frequently remove unwieldy unit conversions. I prefer ratios because I find them easier to reason about. 5 The number of rows in a 747 cabin is much easier for me to estimate than the length of the cabin in meters. I’ve walked the aisle of a 747 many times, so my estimation in units of seats is more precise than if I were to try and make an arbitrary estimation in meters. I attempted the first sub-problem by approximating the number of golf balls that would lay along each dimension of a shoe box. I estimated that the length, width, and height of a shoe box are between 7-14 golf balls long, 5-9 golf balls wide, and 2-4 golf balls high, respectively. These estimations are limits: not a single value, but a range of plausible values. I’ve bounded my approximations with a minimum and maximum value between which I feel reasonably confident the true value lies.
One of the over-arching concepts of good estimation technique is not necessarily to try to approximate the correct answer, but rather to make estimations that are unlikely to be egregiously wrong. Estimation is less about finding the right answer and more about avoiding massive errors in logic and estimation. Finding limits is a very powerful estimation technique, especially when the range of the limits is large. I worked in bioinformatics for many years and routinely needed to make estimations on extremely large and extremely small numbers. One trick I learned to exploit was to make estimations using the geometric mean of the limits. Imagine trying to estimate a value where the true answer is somewhere between 10 and 1000. Many people would be tempted to make an estimate of 500 by approximating the average of the limits. The problem with an estimate of 500 is that if the true value is 1000, I would be off by a factor of 2. If the true value was 10, I would be off by a factor of 50. A better approach is to use the geometric mean. The geometric mean is also easy to approximate by averaging the exponents of the limits. In this case, the geometric mean produces a more sensible estimation of approximately 100. In the worst case I can only be off by a factor of 10 in either direction. The geometric mean is yet another application of ratios: the value of the estimation $x$ satisfies the cross-product property $\frac{i}{x} = \frac{x}{j}$, where $i$ and $j$ are the limits. I knew that the volume of a rectangular prism was the product of its width, height, and length. I estimated the number of golf balls in a shoe box, $B_s$, to be 200: I then repeated the logic I used for the first sub-problem to generate estimates for the number of shoe boxes on an airplane seat and the number of seats in a 747 cabin. Here are all the estimations I made in the sub-problems:

| Sub-problem | Length | Width | Height |
| --- | --- | --- | --- |
| Golf balls in a shoebox | 10 | 7 | 3 |
| Shoeboxes in a seat | 3 | 3 | 6 |
| Seats in a Boeing 747 | 50 | 8 | 3 |

The end result of these sub-problems yielded the following conversion calculation and a final estimate of $1.5 \times 10^7$ golf balls/747 cabin: By using many smaller estimations I am able to leverage statistics in my favor. I constructed my sub-problems so that they would produce approximations of roughly the same magnitude even though the units are different. I made nine total estimates with values ranging from 3 to 50. I did this because estimation error frequently follows a symmetrical distribution. In a series of estimates, the probability of under or over estimating a quantity is often approximately equivalent. By linking a series of estimations together as I’m doing in this problem, estimation error can be mitigated because the under and over estimations tend to cancel each other out in the final estimation. Choosing sub-problems where the estimations are roughly the same size protects against any one bad estimation from skewing the final estimation. According to Boeing, the interior cabin volume of a 747 is 876 cubic meters. The diameter of a golf ball is 42.67 millimeters. Using these measurements and the average density of close-packed spheres, I can determine an accurate solution for how many golf balls would fit in a 747: $$ \frac{\pi}{3\sqrt2}\cdot \frac{876}{\frac{4}{3}\pi\,(2.13\times 10^{-2})^{3}} \approx 1.59 \times 10^7 $$ My estimation for the number of snakes on a plane was $1.50 \times 10^7$ compared to the true value that I worked out above: $1.59 \times 10^7$.
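To make the two computations above concrete, here is a tiny script reproducing them. The specific numbers (the limits 10 and 1000, the 876 m³ cabin volume, the 42.67 mm ball diameter, and the close-packing density) all come from the post itself; the code is only my own back-of-the-envelope helper, not something from the original.

```python
import math

# Geometric mean of a pair of limits: a better single estimate than the
# arithmetic average when the limits span orders of magnitude.
def geometric_mean(lo, hi):
    return math.sqrt(lo * hi)

print(geometric_mean(10, 1000))             # = 100, vs. the arithmetic mean 505

# "True" number of golf balls in a 747 cabin: close-packing density pi/(3*sqrt(2))
# times cabin volume, divided by the volume of one ball.
cabin_volume = 876                          # m^3, per Boeing
r = 42.67e-3 / 2                            # ball radius in m
ball_volume = 4 / 3 * math.pi * r**3
packing = math.pi / (3 * math.sqrt(2))
print(packing * cabin_volume / ball_volume) # ~1.59e7
```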
Any time I can make an estimate that’s within an order of magnitude of the true answer, I consider it a win. My estimate was only off by about 900,000 golf balls, or roughly 5%. This is a very good approximation. The remarkable property of estimation is that it frequently involves almost no math. My solution only required that I know how to calculate the volume of a rectangular prism and be able to do simple multiplication. If you’re reading this Samuel L. Jackson, I’m available for Snakes On A Plane 2. If you should need me, I already have an idea for the plot. Learning estimation is important. Sound estimation techniques should be a mandatory component of high school education. ↩ In the movie, it’s explicitly stated that the plane is a Boeing 747. ↩ Percentages and probabilities are also useful for many of the same reasons. ↩
In the first week of teaching my Calculus 1 discussion section this term, I decided to give the students a Precalc Review Worksheet. Its purpose was to refresh their memories of the basics of arithmetic, algebra, and trigonometry, and see what they had remembered from high school. Surprisingly, it was the arithmetic part that they had the most trouble with. Not things like multiplication and long division of large numbers – those things are taught well in our grade schools – but when they encountered a complicated multi-step arithmetic problem such as the first problem on the worksheet, they were stumped: Simplify: $1+2-3\cdot 4/5+4/3\cdot 2-1$ Gradually, some of the groups began to solve the problem. But some claimed it was $-16/15$, others guessed that it was $34/15$, and yet others insisted that it was $-46/15$. Who was correct? And why were they all getting different answers despite carefully checking over their work? The answer is that the arithmetic simplification procedure that one learns in grade school is ambiguous and sometimes incorrect. In American public schools, students are taught the acronym “PEMDAS”, which stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. This is called the order of operations, which tells you which arithmetic operations to perform first by convention, so that we all agree on what the expression above should mean. But PEMDAS doesn’t work properly in all cases. (This has already been wonderfully demonstrated in several YouTube videos such as this one, but I feel it is good to re-iterate the explanation in as many places as possible.) To illustrate the problem, consider the computation $6-2+3$. Here we’re starting with $6$, taking away $2$, and adding back $3$, so we should end up with $7$. This is what any modern calculator will tell you as well (try typing it into Google!) But if you follow PEMDAS to the letter, it tells you that addition comes before subtraction, and so we would add $2+3$ first to get $5$, and then end up with $6-5=1$. Even worse, what happens if we try to do $6-3-2$? We should end up with $1$ since we are taking away $2$ and $3$ from $6$, and yet if we choose another order in which to do a subtraction first, say $6-(3-2)=6-1$, we get $5$. So, subtraction can’t even properly be done before itself, and the PEMDAS rule does not deal with that ambiguity. Mathematicians have a better convention that fixes all of this. What we’re really doing when we’re subtracting is adding a negative number: $6-2+3$ is just $6+(-2)+3$. This eliminates the ambiguity; addition is commutative and associative, meaning no matter what order we choose to add several things together, the answer will always be the same. In this case, we could either do $6+(-2)=4$ and $4+3=7$ to get the answer of $7$, or we could do $(-2)+3$ first to get $1$ and then add that to $6$ to get $7$. We could even add the $6$ and the $3$ first to get $9$, and then add $-2$, and we’d once again end up with $7$. So now we always get the same answer! There’s a similar problem with division. Is $4/3/2$ equal to $4/(3/2)=8/3$, or is it equal to $(4/3)/2=2/3$? PEMDAS doesn’t give us a definite answer here, and has the further problem of making $4/3\cdot 2$ come out to $4/(3\cdot 2)=2/3$, which again disagrees with Google Calculator. As in the case of subtraction, the fix is to turn all division problems into multiplication problems: we should think of division as multiplying by a reciprocal. 
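As a quick aside (my addition, not part of the original post): programming languages bake in exactly this convention. In Python, `-` shares precedence with `+`, `/` shares precedence with `*`, and both groups associate left to right, so the interpreter agrees with the calculator answers above and with the reciprocal-and-negation reading.

```python
print(6 - 2 + 3)   # 7, not 1: evaluated left to right as (6 - 2) + 3
print(6 - 3 - 2)   # 1: (6 - 3) - 2
print(4 / 3 * 2)   # 2.666... = 8/3: (4 / 3) * 2, not 4 / (3 * 2)
print(4 / 3 / 2)   # 0.666... = 2/3: (4 / 3) / 2
```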
So in the exercise I gave my students, we’d have $4/3\cdot 2=4\cdot \frac{1}{3}\cdot 2=\frac{8}{3}$, and all the confusion is removed. To finish the problem, then, we would write $$\begin{eqnarray*} 1+2-3\cdot 4/5+4/3\cdot 2-1&=&1+2+(-\frac{12}{5})+\frac{8}{3}+(-1) \\ &=&2+\frac{-36+40}{15} \\ &=&\frac{34}{15}. \end{eqnarray*} $$ The only thing we need to do now is come up with a new acronym. We still follow the convention that Parentheses, Exponents, Multiplication, and Addition come in that order, but we no longer have division and subtraction since we replaced them with better operators. So that would be simply PEMA. But that’s not quite as catchy, so perhaps we could add in the “reciprocal” and “negation” rules to call it PERMNA instead. If you have something even more catchy, post it in the comments below!
For example: $\color{red}{\text{Show that}}$$$\color{red}{\frac{4\cos(2x)}{1+\cos(2x)}=4-2\sec^2(x)}$$ In high school my maths teacher told me: To prove equality of an equation, you start on one side and manipulate it algebraically until it is equal to the other side. So starting from the LHS: $$\frac{4\cos(2x)}{1+\cos(2x)}=\frac{4(2\cos^2(x)-1)}{2\cos^2(x)}=\frac{2(2\cos^2(x)-1)}{\cos^2(x)}=\frac{4\cos^2(x)-2}{\cos^2(x)}=4-2\sec^2(x)$$ $\large\fbox{}$ At University, my Maths Analysis teacher tells me: To prove a statement is true, you must not use what you are trying to prove. So using the same example as before: LHS = $$\frac{4\cos(2x)}{1+\cos(2x)}=\frac{4(2\cos^2(x)-1)}{2\cos^2(x)}=\frac{2(2\cos^2(x)-1)}{\cos^2(x)}=\frac{2\Big(2\cos^2(x)-\left[\sin^2(x)+\cos^2(x)\right]\Big)}{\cos^2(x)}=\frac{2(\cos^2(x)-\sin^2(x))}{\cos^2(x)}=\bbox[yellow]{2-2\tan^2(x)}$$ RHS = $$4-2\sec^2(x)=4-2(1+\tan^2(x))=\bbox[yellow]{2-2\tan^2(x)}$$ So I have shown that the two sides of the equality in $\color{red}{\rm{red}}$ are equal to the same highlighted expression. But is this a sufficient proof? Since I used both sides of the equality (which is effectively using what I was trying to prove) to show that $$\color{red}{\frac{4\cos(2x)}{1+\cos(2x)}=4-2\sec^2(x)}$$ One of the reasons why I am asking this question is because I have a bounty question which is suffering from the exact same issue that this post is about. EDIT: Comments and answers below seem to indicate that you can use both sides to prove equality. So does this mean that my high school maths teacher was wrong? $$\bbox[#AFF]{\text{Suppose we have an identity instead of an equality:}}$$ $$\bbox[#AFF]{\text{Is it plausible to manipulate both sides of an identity to prove the identity holds?}}$$ Thank you.
Now showing items 1-9 of 9

Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...

Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...

Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...

Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...

Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...

J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...

Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...

Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...

Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Wheel diameter depends on the ground clearance of the robot mower chassis. Wheels with a diameter of 30 cm are chosen: this guarantees the minimum distance to the ground required in the general specifications and leaves some room for the layout of the electric propulsion motors. In addition, wheels with a diameter of 30 cm allow the use of inflated (low pressure) tyres, which makes it easier to move on slightly uneven terrain, as is the case with crops. The relationship between the linear speed and the speed of rotation is given by: \[ V= r\cdot \omega \] where: \( V \) is the speed in m/s, \( r \) is the wheel radius in m, and \( \omega \) is the speed of rotation in rad/s. The speed of rotation can be calculated from the linear speed as follows: \[ \omega =\frac{V}{r} \] or, in our case: \[ \omega =\frac{0.5}{0.15}=3.33\ \text{rad/s} \approx 0.53\ \text{rev/s} \approx 32\ \text{rpm} \]
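A quick numeric check of the conversion above (my own snippet; 0.5 m/s is the forward speed assumed in the calculation and 0.15 m is the wheel radius):

```python
import math

V = 0.5                      # forward speed in m/s
r = 0.15                     # wheel radius in m

omega = V / r                # angular speed in rad/s
rps = omega / (2 * math.pi)  # revolutions per second
rpm = 60 * rps

print(omega, rps, rpm)       # ~3.33 rad/s, ~0.53 rev/s, ~31.8 rpm
```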
I'm trying to find the best approximation to the function $e^x$ in the finite dimensional polynomial space $P_4$ with respect to the standard basis $B=\{1,x,x^2,x^3,x^4\}$ and the inner product $$(f,g)=\int_0^1 \frac{f(x)g(x)}{\left(x(1-x)\right)^{\frac{1}{2}}}\,dx$$ Let $w=\sum_{j=0}^4 a_j\phi_j$ where $\phi_j=x^j$. Then the best approximation $w$ to $e^x$ satisfies the orthogonality property: $$(e^x-w,\phi_i)=0$$ for $i=0,\dots,4$. Hence, $$\sum_{j=0}^4 a_j (\phi_j,\phi_i) = (e^x,\phi_i)$$ for $i=0,\dots,4$. This generates a linear system of equations which I'm trying to solve. Unfortunately, the integrals are a bit difficult to evaluate by hand. So, I tried using a three-point Gaussian quadrature rule to approximate the integrals and solve the system. Unfortunately, I keep getting a nearly singular system. When I observed this, I tried using a higher order quadrature rule, but the matrix remains ill-conditioned. I looked at my code over and over again and I can't seem to find any errors in the implementation. I'm tempted to believe that these inner products simply can't be evaluated well with a Gaussian quadrature rule. Is this common for Gram matrices? Should I use a different quadrature rule? Or is it a bug in my code? I have provided my MATLAB code assembling this matrix system below for reference. Any help would be greatly appreciated.

n=5;
% Gauss-Legendre quadrature points on [-1,1]
z1=-sqrt(3/5); z2=0; z3=sqrt(3/5);
% Quadrature points mapped to [0,1]
x1=z1/2+1/2; x2=z2/2+1/2; x3=z3/2+1/2;
A=zeros(n,n); b=zeros(n,1);
% assemble Gram matrix and right-hand side
for i=1:n
    for j=1:n
        F1=x1^(i-1+j-1)/sqrt(x1*(1-x1));
        F2=x2^(i-1+j-1)/sqrt(x2*(1-x2));
        F3=x3^(i-1+j-1)/sqrt(x3*(1-x3));
        A(i,j)=(1/2)*((5/9)*F1+(8/9)*F2+(5/9)*F3);
    end
    G1=x1^(i-1)*exp(x1)/sqrt(x1*(1-x1));
    G2=x2^(i-1)*exp(x2)/sqrt(x2*(1-x2));
    G3=x3^(i-1)*exp(x3)/sqrt(x3*(1-x3));
    b(i)=(1/2)*((5/9)*G1+(8/9)*G2+(5/9)*G3);
end
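Not an answer to the bug question, but one way to cross-check the assembled matrix is to compare it with the exact Gram matrix, whose entries reduce to Beta functions: $(x^i,x^j)=\int_0^1 x^{i+j-1/2}(1-x)^{-1/2}\,dx = B(i+j+\tfrac12,\tfrac12)$. The following Python/SciPy snippet (my own, independent of the MATLAB code above) builds that exact matrix and prints its condition number for comparison with the quadrature-based one.

```python
import numpy as np
from scipy.special import beta

n = 5
# Exact Gram matrix of the monomial basis {1, x, ..., x^4} under the weight
# (x(1-x))^(-1/2) on [0,1]:  A[i,j] = B(i+j+1/2, 1/2).
A_exact = np.array([[beta(i + j + 0.5, 0.5) for j in range(n)] for i in range(n)])

print(A_exact)
print("condition number:", np.linalg.cond(A_exact))
```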
Now showing items 1-1 of 1

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Defining parameters

Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.bt (of order \(6\) and degree \(2\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 9 \)
Character field: \(\Q(\zeta_{6})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).

| | Total | New | Old |
| --- | --- | --- | --- |
| Modular forms | 72 | 6 | 66 |
| Cusp forms | 0 | 0 | 0 |
| Eisenstein series | 72 | 6 | 66 |

The following table gives the dimensions of subspaces with specified projective image type.

| | \(D_n\) | \(A_4\) | \(S_4\) | \(A_5\) |
| --- | --- | --- | --- | --- |
| Dimension | 0 | 0 | 0 | 0 |
Last time we saw how to tensor enriched profunctors. Now let's see caps and cups for enriched profunctors! We are building up to a kind of climax of this course: the theory of 'compact closed categories'. This is the right language for studying many kinds of complex systems, from collaborative design and PERT charts to electrical circuits and control theory. But we need to climb the mountain patiently. We've already seen caps and cups for feasibility relations in Lecture 68. We can just generalize what we did. As usual, let's assume \(\mathcal{V}\) is a commutative quantale, so we get a category \(\mathbf{Prof}_\mathcal{V}\) whose objects are \(\mathcal{V}\)-enriched categories and whose morphisms are \(\mathcal{V}\)-enriched profunctors. To keep my hands from getting tired, from now on in this lecture I'll simply write 'enriched' when I mean '\(\mathcal{V}\)-enriched'. Let \(\mathcal{X}\) be an enriched category. Then there's an enriched profunctor called the cup $$ \cup_{\mathcal{X}} \colon \mathcal{X}^{\text{op}} \otimes \mathcal{X} \nrightarrow \textbf{1} $$ drawn as follows: To define it, remember that enriched profunctors \(\mathcal{X}^{\text{op}} \otimes \mathcal{X} \nrightarrow \textbf{1}\) are really just enriched functors \( (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \otimes \textbf{1} \to \mathcal{V} \). Also, remember that \(\mathcal{X}\) comes with a hom-functor, which is the enriched functor $$ \mathrm{hom} \colon \mathcal{X}^{\text{op}} \otimes \mathcal{X} \to \mathcal{V} $$ sending any object \( (x,x') \) to \( \mathcal{X}(x,x')\). So, we define \(\cup_\mathcal{X}\) to be the composite $$ (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \otimes \textbf{1} \stackrel{\sim}{\to} (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \stackrel{\sim}{\to} (\mathcal{X}^{\text{op}})^\text{op} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X}^{\text{op}} \otimes \mathcal{X} \stackrel{\text{hom}}{\to} \mathcal{V} $$ where the arrows with squiggles over them are isomorphisms, most of which I explained last time. There's also an enriched profunctor called the cap $$ \cap_\mathcal{X} \colon \textbf{1} \nrightarrow \mathcal{X} \otimes \mathcal{X}^{\text{op}} $$ drawn like this: To define this, remember that enriched profunctors \(\textbf{1} \nrightarrow \mathcal{X} \otimes \mathcal{X}^{\text{op}} \) are enriched functors \(\textbf{1}^{\text{op}} \otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) \to \mathcal{V} \). But \(\textbf{1}^{\text{op}} = \textbf{1}\), so we define the cap to be the composite $$ \textbf{1}^{\text{op}} \otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) = \textbf{1}\otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) \stackrel{\sim}{\to} \mathcal{X} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X}^{\text{op}} \otimes \mathcal{X} \stackrel{\text{hom}}{\to} \mathcal{V} . $$ As we've already seen for feasibility relations, the cap and cup obey the snake equations, also known as zig-zag equations or yanking equations. (Everyone likes making up their own poetic names for these equations.) The first snake equation says In other words, the composite is the identity, where the arrows with squiggles over them are obvious isomorphisms that I described last time. The second snake equation says In other words, the composite is the identity. Besides caps and cups, \(\mathbf{Prof}_{\mathcal{V}}\) also has symmetries $$ \sigma_{\mathcal{X}, \mathcal{Y}} \colon \mathcal{X} \otimes \mathcal{Y} \nrightarrow \mathcal{Y} \otimes \mathcal{X} $$ obeying certain rules.
These let us switch the order of objects in a tensor product: in terms of diagrams, it means wires can cross each other! And finally, when every object in a symmetric monoidal category has a cap and cup obeying the snake equations, we say that category is compact closed. I will define all these concepts more carefully soon. For now I just want you to know that \(\mathbf{Prof}_{\mathcal{V}}\) is a symmetric monoidal category, and also that \(\mathbf{Prof}_{\mathcal{V}}\) is an example of a compact closed category. If you're impatient to learn more, try Section 4.4 of the book. Puzzle 227. Prove the snake equations in \(\mathbf{Prof}_{\mathcal{V}}\). For this, I should state the snake equations more precisely! The first one says this composite: is the identity, where \(\alpha\) is the associator and \(\lambda, \rho\) are the left and right unitors, defined last time. The second snake equation says this composite: is the identity.
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [ie, \(\theta_1,\ldots,\theta_4\)] although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formulae 21 and 22 are simply different 95% confidence procedures. Formula 21 has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855.
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1 - \alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\theta\) will differ from \(\alpha\) by more than \( 2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042. \) This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \( .956 \leq \theta \leq 1.227 \). On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355.
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between .688 and .956 or between 1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that $$ \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} $$ Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
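Neyman's sampling experiment is easy to reproduce today without tables of random numbers. The excerpt does not reproduce formulae (21) and (22), so the interval in the sketch below is my own choice: an exact 95% interval for the scale parameter \(\theta\) of a uniform distribution on \((0,\theta)\), based on the largest observation. The point is only to watch the relative frequency of successes settle near .95, exactly as the excerpt describes.

```python
# A modern version of Neyman's sampling experiment (my own sketch; the interval
# below is NOT Neyman's formula (21), which the excerpt does not reproduce).
# For X_1,...,X_n ~ Uniform(0, theta), W = max(X_i)/theta has CDF w^n on [0,1],
# so P(a <= W <= b) = b^n - a^n. Choosing a^n = .025 and b^n = .975 gives an
# exact 95% confidence interval [max/b, max/a] for theta.
import random

def interval(sample, alpha=0.95):
    n = len(sample)
    a = ((1 - alpha) / 2) ** (1 / n)        # lower quantile of W = max/theta
    b = (1 - (1 - alpha) / 2) ** (1 / n)    # upper quantile of W
    m = max(sample)
    return m / b, m / a

random.seed(0)
N, n, successes = 10_000, 12, 0
for _ in range(N):
    theta = random.choice([1, 2, 3, 4])     # the true value may vary; it doesn't matter
    sample = [random.uniform(0, theta) for _ in range(n)]
    lo, hi = interval(sample)
    successes += (lo <= theta <= hi)
print(successes / N)   # close to 0.95, within about 2*sqrt(.95*.05/N)
```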
This question is broken into two sections really: Symmetry in RSA I have been analyzing raw RSA and I have noticed some interesting symmetrical properties of the algorithm. Assume that $M$ is a positive integer between $2$ and ${N\over2}$ (where $N$ is the RSA modulus). Then: $$ M_s = N - M \\ C = M^e \pmod N \\ C_s = M_s^e \pmod N $$ This means that $C + C_s = N$, $M + M_s = N$, $C - M = C_s - M_s$ and therefore $C_s = N - C$. Also, $$(N-M)^e \mod N = N - (M^e \mod N)$$ Let the RSA modulus ($N$) be the circumference of a circle radius $r$. Converting each $M$ value from $1$ through to $N - 1$ into segments of the circumference, each with an equal angle between segments, then it will be possible to convert each $M$ and its corresponding $C$ into Cartesian coordinates ($X, Y$) using the formula: $$ (X,Y)= \left(r \cdot \cos({M\over360}) , r \cdot \sin({M\over360}) \right) $$ Use $X$ and $Y$ to plot a line across the circle from $M$ to its corresponding $C$ value. In some cases when calculating $C$, $C$ will be equal to $M$ (for example when $M = 1$). In these cases draw a loop back to $M$. In all cases the end diagram will be symmetrical through the diameter of the circle from 0 to 180 degrees. Example: RSA using $P=11$ and $Q=7$ (These small numbers were chosen deliberately in order to show the symmetrical pattern. The symmetrical property is the same with large primes, but because there are so many more $M$ values the image becomes a blur of lines, which are indistinguishable from one another. It is also impossible to spot the loop backs (see below)). This also means that one can accurately predict the ciphertext of a symmetric partner of $M$. Inherently weak primes and RSA Some prime numbers are very weak with regards to RSA. Examples include $P = 257$ or $Q = 193$. This is because $P-1$ is smooth, e.g. $P-1$ is smooth to $2$ ($2^8$), $Q-1$ is smooth to $3$ ($2^6, 3^1$). When these two primes are combined as $N=PQ=49601$, this produces $16705$ “loop backs” (where $C = M$). The interesting thing about this is the value of $M$ when the “loop back” occurs. For this combination the first ten loop backs are: $$1, 3, 8, 9, 11, 13, 14, 20, 23, 24$$ Why is this significant? If $Z = C \pmod M$ then in most cases $Z = M$. Where this doesn’t happen we can say that: If $Z = 0$ then $T_1 = C$ else $T_1 = Z$ This means that in most cases $T_1 = M$. Thus we can say: If $T_1 \neq M$ then $T_2 = \gcd(T_1, N)$ If $T_2 = 1$ then $T_3 = M - C$ In most cases $T_3 = P$ or $Q$. When $T_3 \neq P$ and $T_3\neq Q$: - $T_4 = \gcd(T_3, N)$ - $T_4$ will be either $P$ or $Q$. $T_5 = {T_3 \over T_4}$. The value of $T_5$ will always be one of the loop back $M$ values, e.g. $1, 3, 8, 9, 11, 13, 14, 20, 23, 24, \dots$ Although this is easy to demonstrate with small value weak prime numbers, the same case is applicable to large value RSA with strong primes. These are harder and more time consuming to calculate because of the vast size of the data to be collected. To perform this analysis one must iterate over every $M$ value between $2$ and $N - 1$. Example: Loop back image for $P=257, Q= 193$ My question: Is it possible to use any of this information to form a cryptanalytic attack against RSA?
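A quick numerical check of the symmetry described above (my own sketch, using the toy parameters $P=11$, $Q=7$ from the question and an arbitrary valid exponent $e=13$). The underlying reason it holds is that RSA exponents are always odd, so $(N-M)^e \equiv -M^e \pmod N$, which is exactly the claimed relation $C_s = N - C$.

```python
# Verify C_s = N - C for every M in [2, N/2] with small toy parameters.
from math import gcd

P, Q = 11, 7
N = P * Q
phi = (P - 1) * (Q - 1)
e = 13                               # arbitrary public exponent, coprime to phi
assert gcd(e, phi) == 1

for M in range(2, N // 2 + 1):
    C = pow(M, e, N)
    C_s = pow(N - M, e, N)
    assert (C + C_s) % N == 0        # i.e. C_s = N - C (mod N)
print("symmetry holds for all M in [2, N/2]")
```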
CGAL 4.14.1 - 2D Conforming Triangulations and Meshes

#include <CGAL/Delaunay_mesh_size_criteria_2.h>

The shape criterion on triangles is given by a bound \( B\) such that for good triangles \( \frac{r}{l} \le B\), where \( l\) is the shortest edge length and \( r\) is the circumradius of the triangle. By default, \( B=\sqrt{2}\), which is the best bound one can use with the guarantee that the refinement algorithm will terminate. The upper bound \( B\) is related to a lower bound \( \alpha_{min}\) on the minimum angle in the triangle: \[ \sin{ \alpha_{min} } = \frac{1}{2 B} \] so \( B=\sqrt{2}\) corresponds to \( \alpha_{min} \ge 20.7\) degrees. This traits class also defines a size criterion: all segments of all triangles must be shorter than a bound \( S\). CDT must be a 2D constrained Delaunay triangulation.

Delaunay_mesh_size_criteria_2 (): Default constructor with \( B=\sqrt{2}\).
Delaunay_mesh_size_criteria_2 (double b=0.125, double S=0): Construct a traits class with bound \( B=\sqrt{\frac{1}{4 b}}\).
CGAL::Delaunay_mesh_size_criteria_2< CDT >::Delaunay_mesh_size_criteria_2 ( ): Default constructor with \( B=\sqrt{2}\). No bound on size.
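As a worked example of the relations above (my own illustration, not part of the CGAL documentation): with the default \(b = 0.125\),
\[
B=\sqrt{\tfrac{1}{4b}}=\sqrt{\tfrac{1}{0.5}}=\sqrt{2},
\qquad
\sin\alpha_{min}=\frac{1}{2\sqrt{2}}\approx 0.354,
\qquad
\alpha_{min}\approx 20.7^{\circ}.
\]
Conversely, asking for a minimum angle of about \(25^{\circ}\) would mean \(B = \frac{1}{2\sin 25^{\circ}} \approx 1.18\), i.e. \(b = \frac{1}{4B^2} \approx 0.18\); since this \(B\) is below the \(\sqrt{2}\) threshold, the termination guarantee quoted above no longer applies.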
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
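If you like to experiment, here is a small computational sanity check of the three operations and the two adjunctions on a finite example. This is my own illustration; the function and the sets are made up, not the room/thermometer example from the lecture.

```python
# Finite check of f^*, f_! and f_* and the adjunctions f_! ⊣ f^* ⊣ f_*.
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}          # an arbitrary function f : X -> Y

def inverse_image(T):           # f^*(T) = {x : f(x) in T}
    return {x for x in X if f[x] in T}

def image(S):                   # f_!(S) = {y : y = f(x) for some x in S}
    return {f[x] for x in S}

def direct_image_star(S):       # f_*(S) = {y : every x with f(x) = y lies in S}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(A):
    A = list(A)
    return [set(c) for c in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

for S in subsets(X):
    for T in subsets(Y):
        assert (image(S) <= T) == (S <= inverse_image(T))              # f_! ⊣ f^*
        assert (inverse_image(T) <= S) == (T <= direct_image_star(S))  # f^* ⊣ f_*
print("both adjunctions hold for this example")
```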
A few weeks ago I attended the AWM (Association of Women in Mathematics) Research Symposium in Houston, TX. I gave a talk in my special session, speaking on queer supercrystals for the first time, to a room full of female mathematicians. I was a bit disappointed when, at the end of my talk, no one raised their hand to ask any questions. It’s usually the classic sign of an uninteresting or inappropriately aimed talk, so I figured that maybe I had to revisit my slides and make them more accessible for the next time I spoke on the subject. Afterwards, however, several of the women in my session came up to me privately to ask specific questions about my research. When I told my husband about this after the conference, he pointed out that perhaps they just were the kind of people to prefer asking questions one-on-one rather than raising their hands during or after the lecture. “Did anyone in your session ask questions after the other talks?” he asked me, testing his theory. I thought about it, and was surprised when I realized the answer. “Woah, I think you’re right,” I said. “I asked at least one question after nearly every talk. But I think I was the only one. Once in a while one other woman would ask something too. But the rest kept their hands down and went up to the speaker during the break to ask their questions.” Upon further reflection, I realized that this was even true during the plenary talks. During an absolutely fantastic lecture by Chelsea Walton, I was intrigued by something she said. She mentioned that the automorphism group of the noncommutative ring $$\mathbb{C}\langle x,y\rangle/(xy-qyx)$$ is $\mathbb{C}^{\times} \times \mathbb{C}^{\times}$ for all $q\neq \pm 1$, but the answer is different at $q=1$ and $q=-1$. I knew that many of the standard $q$-analogs arise naturally in computations in this particular ring, such as the $q$-numbers $$[n]_q=1+q+q^2+\cdots +q^{n-1}.$$ So, I wondered if the exceptions at $q=1$ and $q=-1$ were happening because $q$ was a root of unity, making some of the $q$-numbers be zero. So maybe she was considering $q$ as a real parameter? I raised my hand to ask. “Is $q$ real or complex in this setting?” “It’s complex,” Chelsea answered. “Any nonzero complex parameter $q$.” “Really?” I asked. “And there are no exceptions at other roots of unity?” “Nope!” she replied with a smile, getting excited now. “Just at $1$ and $-1$. The roots of unity get in your way when looking at the representation theory. But for the automorphism group, there are only two exceptional values for $q$.” Fascinating! No one else asked any mathematical questions during or after that talk. Now, I have the utmost faith in womankind. And I would normally have chalked the lack of questions and outspokenness up to it being a less mathematically cohesive conference than most, because the participants were selected from only a small percentage of mathematicians (those that happened to be female). But it reminded me of another time, several years ago, that I had been surprised to discover the same phenomenon among a group of women in mathematics. One summer I was visiting the Duluth REU, a fantastic research program for undergraduates run by Joe Gallian in the beautiful and remote city of Duluth, Minnesota. As a former student at the program myself, I visited for a couple of weeks to hang out and talk math with the students. I attended all the weekly student talks, and as usual, participated heavily, raising my hand to ask questions and give suggestions. 
The day before I left, Joe took me aside. “I wanted to thank you for visiting,” he said. “Before you came, the women never raised their hand during the other students’ talks. But after they saw you doing it, suddenly all of them are participating and raising their hands!” I was floored. I didn’t know that being a woman had anything to do with asking questions. I have always felt a little out of place at AWM meetings. They are inevitably host to many conversations about the struggles faced by women in competitive male-dominant settings, which I have never really related to on a personal level. I love the hyper-competitive setting of academia. I live for competition; I thrive in it. And it never occurs to me to hold back from raising my hand, especially when I’m genuinely curious about why $q$ can be a complex root of unity without breaking the computation. But, clearly, many women are in the habit of holding back, staying in the shadows, asking their questions in a one-on-one setting and not drawing attention to themselves. And I wonder how much this phenomenon plays a role in the gender imbalance and bias in mathematics. At the reception before the dinner at the AWM conference, I spotted Chelsea. She was, unsurprisingly, quite popular, constantly engaged in conversation with several people at once. I eventually made my way into a conversation in a group setting with her in it, and I introduced myself. “Hi, I just wanted to say I really enjoyed your talk! I was the one asking you whether $q$ was real.” Her expression suddenly shifted from ‘oh-no-not-another-random-person-I-have-to-meet’ to a warm, smiling face of recognition. “Oh! I liked your question!” she exclaimed. The conversation immediately turned to math, and she was nice enough to walk me through enough computations to convince me that $q=\pm 1$ were special cases in computing the automorphism group of the noncommutative ring. (See Page 2 of this post for the full computation!) The entire experience got me thinking. It was because I raised my hand that Chelsea recognized me, that she was happy to talk to me and mathematics was communicated. It was because I raised my hand that I got the question out in the open so that other participants could think about it as well. It was because I raised my hand that women were doing mathematics together. And perhaps it is because I raise my hand that I have no problem interacting in a male-dominant environment. After all, they raise their hands all the time. It is tempting to want to ask the men in mathematics to take a step back and let the women have the limelight once in a while. But I don’t think that’s the answer in this case. Men should keep raising their hands. It’s part of how mathematics gets done. It helps to communicate ideas more efficiently, to the whole room at once rather than only in private one-on-one settings. It draws visibility to the interesting aspects of a talk that other participants may not have thought of. What we really need is for women to come out of the shadows. So, to my fellow women in mathematics: I’m calling on all of us to ask all our questions, to engage with the seminar room, to not hold back in those immensely valuable times when we are confused. And raise our hands!
In the first week of teaching my Calculus 1 discussion section this term, I decided to give the students a Precalc Review Worksheet. Its purpose was to refresh their memories of the basics of arithmetic, algebra, and trigonometry, and see what they had remembered from high school. Surprisingly, it was the arithmetic part that they had the most trouble with. Not things like multiplication and long division of large numbers – those things are taught well in our grade schools – but when they encountered a complicated multi-step arithmetic problem such as the first problem on the worksheet, they were stumped: Simplify: $1+2-3\cdot 4/5+4/3\cdot 2-1$ Gradually, some of the groups began to solve the problem. But some claimed it was $-16/15$, others guessed that it was $34/15$, and yet others insisted that it was $-46/15$. Who was correct? And why were they all getting different answers despite carefully checking over their work? The answer is that the arithmetic simplification procedure that one learns in grade school is ambiguous and sometimes incorrect. In American public schools, students are taught the acronym “PEMDAS”, which stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. This is called the order of operations, which tells you which arithmetic operations to perform first by convention, so that we all agree on what the expression above should mean. But PEMDAS doesn’t work properly in all cases. (This has already been wonderfully demonstrated in several YouTube videos such as this one, but I feel it is good to re-iterate the explanation in as many places as possible.) To illustrate the problem, consider the computation $6-2+3$. Here we’re starting with $6$, taking away $2$, and adding back $3$, so we should end up with $7$. This is what any modern calculator will tell you as well (try typing it into Google!) But if you follow PEMDAS to the letter, it tells you that addition comes before subtraction, and so we would add $2+3$ first to get $5$, and then end up with $6-5=1$. Even worse, what happens if we try to do $6-3-2$? We should end up with $1$ since we are taking away $2$ and $3$ from $6$, and yet if we choose another order in which to do a subtraction first, say $6-(3-2)=6-1$, we get $5$. So, subtraction can’t even properly be done before itself, and the PEMDAS rule does not deal with that ambiguity. Mathematicians have a better convention that fixes all of this. What we’re really doing when we’re subtracting is adding a negative number: $6-2+3$ is just $6+(-2)+3$. This eliminates the ambiguity; addition is commutative and associative, meaning no matter what order we choose to add several things together, the answer will always be the same. In this case, we could either do $6+(-2)=4$ and $4+3=7$ to get the answer of $7$, or we could do $(-2)+3$ first to get $1$ and then add that to $6$ to get $7$. We could even add the $6$ and the $3$ first to get $9$, and then add $-2$, and we’d once again end up with $7$. So now we always get the same answer! There’s a similar problem with division. Is $4/3/2$ equal to $4/(3/2)=8/3$, or is it equal to $(4/3)/2=2/3$? PEMDAS doesn’t give us a definite answer here, and has the further problem of making $4/3\cdot 2$ come out to $4/(3\cdot 2)=2/3$, which again disagrees with Google Calculator. As in the case of subtraction, the fix is to turn all division problems into multiplication problems: we should think of division as multiplying by a reciprocal. 
So in the exercise I gave my students, we’d have $4/3\cdot 2=4\cdot \frac{1}{3}\cdot 2=\frac{8}{3}$, and all the confusion is removed. To finish the problem, then, we would write $$\begin{eqnarray*} 1+2-3\cdot 4/5+4/3\cdot 2-1&=&1+2+(-\frac{12}{5})+\frac{8}{3}+(-1) \\ &=&2+\frac{-36+40}{15} \\ &=&\frac{34}{15}. \end{eqnarray*} $$ The only thing we need to do now is come up with a new acronym. We still follow the convention that Parentheses, Exponents, Multiplication, and Addition come in that order, but we no longer have division and subtraction since we replaced them with better operators. So that would be simply PEMA. But that’s not quite as catchy, so perhaps we could add in the “reciprocal” and “negation” rules to call it PERMNA instead. If you have something even more catchy, post it in the comments below!
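For readers who want to check the simplification mechanically, here is a small Python snippet (my own addition, not part of the original post) that applies exactly the convention described above: subtraction becomes adding a negative, division becomes multiplying by a reciprocal, and exact fractions are used throughout.

```python
from fractions import Fraction as F

# 1 + 2 - 3*4/5 + 4/3*2 - 1, with each term rewritten per the convention above
terms = [
    F(1),
    F(2),
    -F(3) * F(4) * F(1, 5),   # -3*4/5  ->  -(3 * 4 * (1/5)) = -12/5
    F(4) * F(1, 3) * F(2),    #  4/3*2  ->   4 * (1/3) * 2   =  8/3
    -F(1),
]
total = sum(terms)
print(total)                  # 34/15
assert total == F(34, 15)
```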
No, message commitment by disclosing its HMAC-MD5 with a key later revealed is no longer secure, because of the ease with which MD5 collisions can now be found. There's however no compelling evidence that it's insecure for messages constrained to belong to a small arbitrary set that no adversary can choose or influence. Still, whatever the constraints on the messages, the narrow 128-bit output width of HMAC-MD5 allows an attack with a relatively modest (and clearly feasible) $2^{65}$ evaluations of HMAC-MD5. HMAC builds a PRF from a hash function $H$ with Merkle-Damgård structure, message block width $w$ and output width $h$, with $w\ge h$, as$$\operatorname{HMAC}_H(K,m)=\begin{cases}H\Big((K\oplus\text{opad})\mathbin\|H\big((K\oplus\text{ipad})\mathbin\|m\big)\Big) &\text{if $|K|\le w$}\\\operatorname{HMAC}_H\big(H(K),m\big) &\text{if $|K|>w$}\end{cases}$$where $\text{opad}$ (resp. $\text{ipad}$) is the 0x5c5c5c… (resp. 0x363636…) pattern with width $w$, and $\oplus$ is bitwise exclusive-OR with the shortest operand right-padded using zero bits. If the compression function used to build $H$ has suitable properties, then for random unknown $K$ with $|K|\ge h$, the function $m\mapsto \operatorname{HMAC}_H(K,m)$ is indistinguishable from random with effort less than about $2^h$ evaluations of $H$. We have no compelling evidence that this does not hold for $H=\operatorname{MD5}$ (for which $h=128$, $w=512$). For more details see Mihir Bellare, New Proofs for NMAC and HMAC: Security without Collision Resistance, Journal of Cryptology, 2015 (originally in proceedings of Crypto 2006). When $H$ is MD5, or any $H$ that is not collision-resistant, the attack in the question renders insecure a commitment protocol where Alice (1) secretly chooses $m$ and $K$, (2) computes and publishes $\operatorname{HMAC}_H(K,m)$ as a commitment of $m$, (3) performs some action dependent on $m$ (like: offer a bet about the first bit of $m$), (4) later reveals $m$, and (5) reveals $K$, allowing a verifier to compute $\operatorname{HMAC}_H(K,m)$ on the $m$ Alice alleges and compare against Alice's commitment. Notice however that even with an ideal $H$, there's an attack with effort about $2^{h/2}$ evaluations of $H$, where Alice finds $m$ and $m'$ with $H\big((K\oplus\text{ipad})\mathbin\|m\big)=H\big((K\oplus\text{ipad})\mathbin\|m'\big)$; thus the mere output size of HMAC-MD5 limits its security level to a modest $2^{64}$ evaluations of MD5 in this protocol where $m$ is unconstrained. On the other hand, I see no attack (much better than brute force on $K$) on the use of HMAC-MD5 (or HMAC with a non-collision-resistant $H$) in the variant of this protocol where Alice is constrained to choose $m$ from a small arbitrary set that no adversary can choose or influence, like $\{\text{“stone”},\text{“paper”},\text{“scissors”}\}$, as considered in the suggestion referenced in the question. Alice has so little choice on the messages that she must be clever in her choice of $K$, rather than $m$, in order to cause a collision. My intuition is that because $K$ enters twice in the computation of $\operatorname{HMAC}_H(K,m)$, with at least one execution of the compression function in between (for heavily constrained $m$), finding a theoretical shortcut would be extremely hard, well beyond what the current cryptanalysis status of MD5 allows.
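To make the commit/reveal flow above concrete, here is a minimal Python sketch (my own illustration, not code from the question or answer). It is parameterized by the hash function; following the answer's conclusion, a collision-resistant hash such as SHA-256 is used by default rather than MD5, and the function and variable names are mine.

```python
import hmac, hashlib, secrets

def commit(message: bytes, digest=hashlib.sha256):
    """Alice's side: pick a random key K and publish HMAC(K, m) as the commitment."""
    key = secrets.token_bytes(32)                    # keep K (and m) secret until reveal
    tag = hmac.new(key, message, digest).hexdigest()
    return tag, key

def verify(tag: str, message: bytes, key: bytes, digest=hashlib.sha256) -> bool:
    """Verifier's side: once Alice reveals (m, K), recompute and compare."""
    expected = hmac.new(key, message, digest).hexdigest()
    return hmac.compare_digest(tag, expected)

# Variant where m comes from a small fixed set, as discussed above.
m = b"paper"
tag, key = commit(m)
# ... Alice performs some action that depends on m, then reveals m and K ...
assert verify(tag, m, key)
assert not verify(tag, b"stone", key)
```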
Gravity itself

@aeronalias is absolutely right. Given the gravitational acceleration of $g=9.81m/s^2$ on the ground and a perfect spherical earth of radius $R_E=6370km$ with homogeneous (at least: radially symmetric) density, one can calculate the gravitational acceleration at an altitude of $h=12km$ by $$g(h)=g\cdot\frac{R_E^2}{(h+R_E)^2}= 9.773\ \mathrm{m/s^2}$$ Expressed in terms of $g$, the difference is $$g_\mathrm{diff} = 0.0368565736\ \mathrm{m/s^2} = 0.003757g$$

Centrifugal forces

The question also asks for the centrifugal effect on the aircraft as it travels round the curve of the earth, which has not been answered yet. The effect is considered small but, compared to the effect on gravity itself, it isn't always. I got some heavy objections on my answer and I have to admit, I really don't see their point. Therefore, I've edited this section and hope this helps. In general, an object moving on a circular path experiences a centrifugal acceleration, pointing away from the center of the circle: $$a_c=\omega^2r=\frac{v^2}{r}$$ $\omega=\frac{\alpha}{t}$ is the angular speed, i.e. the angle $\alpha$ (in radians) the object travels in a given time $t$ (in seconds). Now let's consider a "perfect" Earth as described above, plus no wind. A balloon hovering stationary over a point at the equator at 12km altitude will do one revolution ($\alpha=2\pi[=360°]$) in 24 hours, so $\omega=\frac{2\pi}{24\cdot60\cdot60s}$. Together with $r=R_E+h$, one gets for the balloon: $$a_{cb}=0.03374061\ \mathrm{m/s^2} = 0.0034394098\ g$$ The circumference of the circle the balloon flies is $2\pi(R_E+h)=40099km$. Now consider an aircraft flying east along the equator at the same altitude at 250m/s (900km/h, 485kt) with respect to the surrounding air. (Keep in mind: no wind.) In 24h, this aircraft travels a distance of 21600km, or 0.539 of the circumference. This means the aircraft does 1.539 revolutions of the circle in 24h, so its angular speed is $\omega=1.539\cdot\frac{2\pi}{24\cdot60\cdot60s}$. Thus, the centrifugal acceleration on the aircraft flying east is $$a_\mathrm{ce} = 0.0799053814\ \mathrm{m/s^2} = 0.0081452988\ g$$ The same way, one can calculate what happens when the aircraft flies west: $\omega=(1-0.539)\cdot\frac{2\pi}{24\cdot60\cdot60s}$ $$a_\mathrm{cw} = 0.0071833292\ \mathrm{m/s^2} = 0.0007322456\ g$$

Comparison

Let's write the values together to compare them. I've also added how much lighter a 100kg (220lb) person would feel due to each effect ("weight loss"):

    g_diff = 0.0368565736 m/s² = 0.003757 g     | 376 gram (0.829 lb)
    a_cb   = 0.03374061 m/s²   = 0.0034394098 g | 344 gram (0.758 lb)
    a_ce   = 0.0799053814 m/s² = 0.0081452988 g | 815 gram (1.797 lb)
    a_cw   = 0.0071833292 m/s² = 0.0007322456 g |  73 gram (0.161 lb)

Note: The 100kg is what a scale at the North Pole (i.e. without any centrifugal effect) shows. The person already feels 344 gram lighter on the ground at the equator, and the balloon doesn't change this (much). But moving east or west changes the weight by an amount comparable to the gravity reduction itself (for the east-bound flight, even larger), and a person flying west feels heavier than one hovering in the balloon at the same altitude! Maybe another table, showing the weight of the person:

                                                 kg       lb
    1. Man at north pole                       100.00   220.46
    2. Man at equator                           99.66   219.70
    3. Man at equator, in balloon               99.28   218.88
    4. Man at equator, in aircraft flying east  98.81   217.84
    5. Man at equator, in aircraft flying west  99.55   219.47   <- more than 3.

The numbers shown are only valid at the equator and for flights east / west. In other cases, it becomes a little more complex.
EDIT: Being curious about how this depends on latitude, I created this plot of the absolute acceleration an aircraft experiences. The radius in the equation of the centrifugal force is the distance of the aircraft to the axis of the Earth. That radius clearly decreases when moving away from the equator, and so does the centrifugal acceleration. The speed of the aircraft flying west cancels out the rotation speed of the earth at about 57° N / S, i.e. there is no centrifugal force there. At larger latitudes, the aircraft flies in the opposite direction around the axis of the earth, building up a centrifugal force again. Near the poles, both aircraft become centrifuges (theoretically): e.g. flying a circle of 500m radius at 250m/s gives an acceleration of 12.7g. This is why the curves rise towards infinity there. (When doing the math, one has to keep in mind that gravity always points to the center of the earth, while the centrifugal force points away from the axis, so you can't simply add their magnitudes.)
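For anyone who wants to reproduce the equator numbers above, here is a short Python sketch (my own, with my own variable names); it uses the same simplifications as the answer (perfect sphere, 24-hour rotation, no wind).

```python
import math

g0, R_E, h = 9.81, 6_370_000.0, 12_000.0      # m/s^2, m, m (values used above)
r = R_E + h
day = 24 * 3600.0                             # one revolution of the "perfect" Earth
v_air = 250.0                                 # aircraft speed relative to the (calm) air, m/s

# Gravity at altitude and its reduction
g_h = g0 * R_E**2 / r**2
print(f"g(h) = {g_h:.3f} m/s^2,  g_diff = {g0 - g_h:.5f} m/s^2 = {(g0 - g_h)/g0:.6f} g")

# Revolutions per day the aircraft adds (east) or removes (west) relative to the ground
frac = v_air * day / (2 * math.pi * r)        # ~0.539
omega_balloon = 2 * math.pi / day
for name, omega in [("balloon", omega_balloon),
                    ("east", (1 + frac) * omega_balloon),
                    ("west", (1 - frac) * omega_balloon)]:
    a_c = omega**2 * r
    print(f"a_c {name:7s} = {a_c:.5f} m/s^2 = {a_c/g0:.7f} g")
```

Running it reproduces the values above: g_diff ≈ 0.0369 m/s², a_cb ≈ 0.0337, a_ce ≈ 0.0799 and a_cw ≈ 0.0072 m/s².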
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks site, to which my university hopefully has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a... @MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Category:Primitives Jump to navigation Jump to search Let: $\forall x \in \left({a \,.\,.\, b}\right): F' \left({x}\right) = f \left({x}\right)$ Then $F$ is a primitive of $f$, and is denoted: $\displaystyle F = \int f \left({x}\right) \, \mathrm d x$ Subcategories This category has the following 54 subcategories, out of 54 total. E I ► Integral Substitutions (16 P) P ► Primitives involving a squared minus x squared (3 C, 22 P) ► Primitives involving a x + b (12 C, 23 P) ► Primitives involving a x + b and p x + q (3 C, 16 P) ► Primitives involving a x squared plus b x plus c (3 C, 18 P) ► Primitives involving Cosecant Function (1 C, 16 P) ► Primitives involving Cosine Function (6 C, 44 P) ► Primitives involving Cotangent Function (15 P) ► Primitives involving Exponential Function (6 C, 23 P) ► Primitives involving Hyperbolic Cosine Function (1 C, 46 P) ► Primitives involving Hyperbolic Cotangent Function (1 C, 12 P) ► Primitives involving Hyperbolic Sine Function (1 C, 41 P) ► Primitives involving Hyperbolic Tangent Function (1 C, 14 P) ► Primitives involving Inverse Cosine Function (2 C, 8 P) ► Primitives involving Inverse Sine Function (2 C, 8 P) ► Primitives involving Inverse Tangent Function (1 C, 7 P) ► Primitives involving Logarithm Function (2 C, 20 P) ► Primitives involving Reciprocals (10 C, 27 P) ► Primitives involving Root of a squared minus x squared (1 C, 31 P) ► Primitives involving Root of a x + b (1 C, 25 P) ► Primitives involving Root of a x squared plus b x plus c (3 C, 19 P) ► Primitives involving Root of x squared minus a squared (3 C, 32 P) ► Primitives involving Root of x squared plus a squared (4 C, 31 P) ► Primitives involving Secant Function (1 C, 16 P) ► Primitives involving Sine Function (4 C, 50 P) ► Primitives involving Sine Function and Cosine Function (3 C, 51 P) ► Primitives involving Tangent Function (2 C, 18 P) ► Primitives involving x cubed plus a cubed (2 C, 12 P) ► Primitives involving x squared minus a squared (3 C, 24 P) ► Primitives involving x squared plus a squared (6 C, 21 P) ► Primitives of Hyperbolic Functions (6 C, 23 P) ► Primitives of Inverse Trigonometric Functions (6 C, 1 P) ► Primitives of Quadratic Functions (1 C) ► Primitives of Rational Functions (8 C, 15 P) ► Primitives of Trigonometric Functions (7 C, 11 P) Pages in category "Primitives" The following 43 pages are in this category, out of 43 total. 
P Primitive of Composite Function Primitive of Constant Primitive of Constant Multiple of Function Primitive of Cosine Integral Function Primitive of Error Function Primitive of Exponential Function/General Result Primitive of Exponential Integral Function Primitive of Function of Constant Multiple Primitive of Function under its Derivative Primitive of Periodic Function Primitive of Pointwise Sum of Functions Primitive of Power Primitive of Sine Integral Function Primitives involving a squared minus x squared Primitives involving a squared minus x squared squared Primitives involving a x squared plus b x plus c Primitives involving Power of a squared minus x squared Primitives involving Power of x squared minus a squared Primitives involving Power of x squared plus a squared Primitives involving Root of a squared minus x squared Primitives involving Root of a squared minus x squared cubed Primitives involving Root of x squared minus a squared Primitives involving Root of x squared minus a squared cubed Primitives involving Root of x squared plus a squared Primitives involving Root of x squared plus a squared cubed Primitives involving x squared minus a squared Primitives involving x squared minus a squared squared Primitives involving x squared plus a squared Primitives involving x squared plus a squared squared Primitives of Functions involving a x + b and p x + q Primitives of Functions involving Power of Root of a x + b Primitives of Functions involving Root of a x + b Primitives of Functions involving Root of a x + b and p x + q Primitives of Functions involving Root of a x + b and Root of p x + q Primitives of Hyperbolic Functions Primitives of Rational Functions involving a x + b Primitives of Rational Functions involving a x + b cubed Primitives of Rational Functions involving a x + b squared Primitives of Rational Functions involving Power of a x + b Primitives of Trigonometric Functions Primitives which Differ by Constant
By Joannes Vermorel, June 2016

Probabilistic forecasts assign a probability to every possible future. Yet, not all probabilistic forecasts are equally accurate, and metrics are needed to assess the respective accuracy of distinct probabilistic forecasts. Simple accuracy metrics such as MAE (Mean Absolute Error) or MAPE (Mean Absolute Percentage Error) are not directly applicable to probabilistic forecasts. The Continuous Ranked Probability Score (CRPS) generalizes the MAE to the case of probabilistic forecasts. The CRPS is one of the most widely used accuracy metrics where probabilistic forecasts are involved.

Overview The CRPS is frequently used to assess the respective accuracy of two probabilistic forecasting models. In particular, this metric can be combined with a backtesting process in order to stabilize the accuracy assessment by leveraging multiple measurements over the same dataset. This metric notably differs from simpler metrics such as MAE because of its asymmetric expression: while the forecasts are probabilistic, the observations are deterministic. Unlike the pinball loss function, the CRPS does not focus on any specific point of the probability distribution, but considers the distribution of the forecasts as a whole.

Formal definition Let $X$ be a random variable. Let $F$ be the cumulative distribution function (CDF) of $X$, such that $F(y)=\mathbf{P}\left[X \leq y\right]$. Let $x$ be the observation, and $F$ the CDF associated with an empirical probabilistic forecast. The CRPS between $x$ and $F$ is defined as:$$CRPS(F, x) = \int_{-\infty}^{\infty}\Big(F(y)- 𝟙(y - x)\Big)^2dy$$where $𝟙$ denotes the Heaviside step function, i.e. the function on the real line that takes the value 1 if its argument is positive or zero, and the value 0 otherwise. The CRPS is expressed in the same unit as the observed variable. The CRPS generalizes the mean absolute error; in fact, it reduces to the mean absolute error (MAE) if the forecast is deterministic.

Envision syntax Lokad's scripting language provides built-in support for the CRPS through the crps() function: Accuracy = crps(Z, X) where Z is expected to be a distribution representing the probabilistic forecast, and X is expected to be a number representing the observed value.

Known properties Gneiting and Raftery (2004) show that the continuous ranked probability score can be equivalently written as:$$CRPS(F,x) = \mathbf{E}\Big[|X-x|\Big]-\frac{1}{2}\mathbf{E}\Big[|X-X^*|\Big]$$where $X$ and $X^*$ are independent copies of the random variable associated with the cumulative distribution function $F$, and $\mathbf{E}[X]$ is the expected value of $X$.

Numerical evaluation From a numerical perspective, a simple way of computing the CRPS consists of breaking down the original integral into two integrals on well-chosen boundaries to simplify the Heaviside step function, which gives:$$CRPS(F, x) = \int_{-\infty}^x F(y)^2dy + \int_x^{\infty}\Big(F(y)- 1\Big)^2dy$$In practice, since $F$ is an empirical distribution obtained through a forecasting model, the corresponding random variable $X$ has finite support, meaning that there are only finitely many points where $\mathbf{P}[X = x] \gt 0$. Thus, the integrals can be turned into discrete finite sums.

References Gneiting, T. and Raftery, A. E. (2004). Strictly proper scoring rules, prediction, and estimation. Technical Report no. 463, Department of Statistics, University of Washington, Seattle, Washington, USA.
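As an addendum to the numerical-evaluation section: the Gneiting–Raftery form above also makes the CRPS easy to estimate directly from forecast samples. Here is a small NumPy sketch (my own illustration, not Lokad/Envision code; the function name is mine).

```python
import numpy as np

def crps_from_samples(samples, x):
    """Estimate CRPS(F, x) where F is represented by draws from the forecast,
    using CRPS(F, x) = E|X - x| - 0.5 * E|X - X*|."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - x))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

# Deterministic forecast (a single point): the CRPS reduces to the absolute error.
assert crps_from_samples([3.0], 5.0) == 2.0

# Probabilistic forecast given as 2000 samples, scored against the observation 11.0
rng = np.random.default_rng(42)
forecast = rng.normal(loc=10.0, scale=2.0, size=2000)
print(crps_from_samples(forecast, 11.0))
```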
It looks like you're new here. If you want to get involved, click one of these buttons! Last time I introduced natural transformations, and I think it's important to solve a bunch more puzzles to get a feel for what they're like. First I'll remind you of the basic definitions. I'll go through 'em quickly and informally: Definition. A category \(\mathcal{C}\) consists of: a collection of objects and a set of morphisms \(f : x \to y\) from any object \(x\) to any object \(y\), such that: a) each pair of morphisms \(f : x \to y\) and \(g: y \to z\) has a composite \(g \circ f : x \to z \) and b) each object \(x\) has a morphism \(1_x : x \to x\) called its identity, for which i) the associative law holds: \(h \circ (g \circ f) = (h \circ g) \circ f\), and ii) the left and right unit laws hold: \(1_y \circ f = f = f \circ 1_x \) for any morphism \(f: x \to y\). A category looks like this: Definition. Given categories \(\mathcal{C}\) and \(\mathcal{D}\), a functor \(F: \mathcal{C} \to \mathcal{D} \) maps each object \(x\) of \(\mathcal{C}\) to an object \(F(x)\) of \(\mathcal{D}\), each morphism \(f: x \to y\) in \(\mathcal{C}\) to a morphism \(F(f) : F(x) \to F(y) \) in \(\mathcal{D}\) , in such a way that: a) it preserves composition: \(F(g \circ f) = F(g) \circ F(f) \), and b) it preserves identities: \(F(1_x) = 1_{F(x)}\). A functor looks sort of like this, leaving out some detail: Definition. Given categories \(\mathcal{C},\mathcal{D}\) and functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x \in \mathcal{C}\), such that for each morphism \(f : x \to y\) in \(\mathcal{C}\) we have $$ G(f) \alpha_x = \alpha_y F(f) ,$$or in other words, this naturality square commutes: A natural transformation looks sort of like this: You should also review the free category on a graph if you don't remember that. Okay, now for a bunch of puzzles! If you're good at this stuff, please let beginners do the easy ones. Puzzle 129. Let \(\mathbf{1}\) be the free category on the graph with one node and no edges: Let \(\mathbf{2}\) be the free category on the graph with two nodes and one edge from the first node to the second: How many functors are there from \(\mathbf{1}\) to \(\mathbf{2}\), and how many natural transformations are there between all these functors? It may help to draw a graph with functors \(F : \mathbf{1} \to \mathbf{2} \) as nodes and natural transformations between these as edges. Puzzle 130. Let \(\mathbf{3}\) be the free category on this graph: How many functors are there from \(\mathbf{1}\) to \(\mathbf{3}\), and how many natural transformations are there between all these functors? Again, it may help to draw a graph showing all these functors and natural transformations. Puzzle 131. How many functors are there from \(\mathbf{2}\) to \(\mathbf{3}\), and how many natural transformations are there between all these functors? Again, it may help to draw a graph. Puzzle 132. For any category \(\mathcal{C}\), what's another name for a functor \(F: \mathbf{1} \to \mathcal{C}\)? There's a simple answer using concepts you've already learned in this course. Puzzle 133. For any category \(\mathcal{C}\), what's another name for a functor \(F: \mathbf{2} \to \mathcal{C}\)? Again, there's a simple answer using concepts you've already learned here. Puzzle 134. 
For any category \(\mathcal{C}\), what's another name for a natural transformation \(\alpha : F \Rightarrow G\) between functors \(F,G: \mathbf{1} \to \mathcal{C}\)? Yet again there's a simple answer using concepts you've learned here. Puzzle 135. For any category \(\mathcal{C}\), classify all functors \(F : \mathcal{C} \to \mathbf{1} \). Puzzle 136. For any natural number \(n\), we can define a category \(\mathbf{n}\) generalizing the categories \(\mathbf{1},\mathbf{2}\) and \(\mathbf{3}\) above: it's the free category on a graph with nodes \(v_1, \dots, v_n\) and edges \(f_i : v_i \to v_{i+1}\) where \(1 \le i < n\). How many functors are there from \(\mathbf{m}\) to \(\mathbf{n}\)? Puzzle 137. How many natural transformations are there between all the functors from \(\mathbf{m}\) to \(\mathbf{n}\)? I think Puzzle 137 is the hardest; here are two easy ones to help you recover: Puzzle 138. For any category \(\mathcal{C}\), classify all functors \(F : \mathbf{0} \to \mathcal{C}\). Puzzle 139. For any category \(\mathcal{C}\), classify all functors \(F : \mathcal{C} \to \mathbf{0} \).
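If it helps to see the first definition in very concrete terms, here is a small Python sketch (my own, not part of the lectures) that encodes the category \(\mathbf{2}\) as plain data and checks the unit and associative laws by brute force; all names are mine.

```python
# The category "2": objects v1, v2; morphisms id_v1, id_v2, and f : v1 -> v2.
objects = ["v1", "v2"]
morphisms = {"id_v1": ("v1", "v1"), "id_v2": ("v2", "v2"), "f": ("v1", "v2")}
identity = {"v1": "id_v1", "v2": "id_v2"}

def compose(g, f):
    """Composite g∘f, defined only when target(f) == source(g)."""
    fs, ft = morphisms[f]
    gs, gt = morphisms[g]
    assert ft == gs, "not composable"
    if f == identity[fs]:
        return g          # g ∘ id = g
    if g == identity[gt]:
        return f          # id ∘ f = f
    raise ValueError("no such composite in this category")

# Unit laws for every morphism ...
for name, (s, t) in morphisms.items():
    assert compose(identity[t], name) == name == compose(name, identity[s])

# ... and associativity on every composable triple.
for h in morphisms:
    for g in morphisms:
        for f in morphisms:
            if morphisms[f][1] == morphisms[g][0] and morphisms[g][1] == morphisms[h][0]:
                assert compose(h, compose(g, f)) == compose(compose(h, g), f)
print("2 is a category")
```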
Consider a manifold and a complex where cochains are sections of vector bundles and coboundary maps are differential operators, which are locally exact except in lowest degree (think de Rham complex). I'd like to know the relationship between the cohomology of this complex and the cohomology of the formal adjoint complex with compact supports (for the de Rham complex, this is again the de Rham complex, but with compact supports, and the relationship is given by Poincaré duality). Update: Just added a bounty to raise the question's profile. The biggest obstacle, as came out of the discussion on an unsuccessful previous answer, to a straightforward application of Verdier duality is that it's hard to see how to connect the dual sheaf $\mathcal{V}^\vee$ with the sections of the dual density vector bundles $\Gamma(\tilde{V}^{\bullet*})$. The basic construction of $\mathcal{V}^\vee$ requires, for an open $U\subset M$, the assignment $U\mapsto \mathrm{Hom}_\mathbb{Z}(\Gamma(U,V^\bullet),\mathbb{Z})$, where $\mathrm{Hom}_\mathbb{Z}$ is taken in the category of abelian groups, which is MUCH bigger than $\Gamma(U,V^{\bullet*})$ itself. Let me be more explicit, which unfortunately requires some notation. Let $M$ be the manifold, $V^i\to M$ be the vector bundles (non-zero for only finitely many $i$) and $d^i \colon \Gamma(V^i) \to \Gamma(V^{i+1})$ be the coboundary maps. Then $H^i(\Gamma(V^\bullet),d^\bullet) = \ker d^i/\operatorname{im} d_{i-1}$. By local exactness I mean that for every point $x\in M$ there exists an open neighborhood $U_x$ such that $H^i(\Gamma(V^\bullet|_{U_x}), d^\bullet) = 0$ for all except the smallest non-trivial $i$. Now, for each vector bundle $V\to M$, I can define a densitized dual bundle $\tilde{V}^* = V^*\otimes_M \Lambda^{\dim M} T^*M$, which is just the dual bundle $V^*$ tensored with the bundle of volume forms (aka densities). For any differential operator $d\colon \Gamma(V) \to \Gamma(W)$ between vector bundles $V$ and $W$ over $M$, I can define its formal adjoint $d^*\colon \Gamma(\tilde{W}^*) \to \Gamma(\tilde{V}^*)$, locally, by using integration by parts in local coordinates or, globally, by requiring that there exist a bidifferential operator $g$ such that $w\cdot d[v] - d^*[w]\cdot v = \mathrm{d} g[w,v]$. Thus, the formal adjoint complex is defined by the coboundary maps $d^{i*}\colon \Gamma(\tilde{V}^{(i+1)*}) \to \Gamma(\tilde{V}^{i*})$. There is a natural, non-degenerate, bilinear pairing $\langle u, v \rangle = \int_M u\cdot v$ for $v\in \Gamma(V)$ and $u\in \Gamma_c(\tilde{V}^*)$, where subcript $c$ refers to compactly supported sections. Because $\langle u^{i+1}, d^i v^i \rangle = \langle d^{i*} u^{i+1}, v^i \rangle$ this paring descends to a bilinear pairing in cohomology $$ \langle-,-\rangle\colon H^i(\Gamma_c(\tilde{V}^{\bullet*}),d^{(\bullet-1)*}) \times H^i(\Gamma(V^\bullet),d^i) \to \mathbb{R} . $$ Finally, my question can be boiled down to the following: is this pairing non-degenerate (and if not what is its rank)? As I mentioned in my first paragraph, the case $V^i = \Lambda^i T^*M$ with $d^i$ the de Rham differential is well known. Its formal adjoint complex is isomorphic to the de Rham complex itself. Essentially, Poincaré duality states that the natural pairing in cohomology is non-degenerate. I am hoping that a more general result can be deduced from Verdier duality applied to the sheaf $\mathcal{V}$ resolved by the complex $(\Gamma(V^\bullet),d^\bullet)$. 
I know that the sheaf cohomology $H^i(M,\mathcal{V})$ can be identified with $H^i(\Gamma(V^\bullet),d^\bullet)$. I also know that the abstract form of the duality states that the algebraic dual $H^i(M,\mathcal{V})^*$ is given by the sheaf cohomology with compact supports $H^i_c(M,\mathcal{V}^\vee)$ with coefficients in the "dualizing sheaf" $\mathcal{V}^\vee$. Unfortunately, I'm having trouble extracting the relationship between $\mathcal{V}^\vee$ and my formal adjoint complex $(\Gamma_c(\tilde{V}^{\bullet*}), d^{(\bullet-1)*})$ from standard references (e.g., the books of Iversen or Kashiwara and Schapira).
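For completeness, here is the standard one-line computation (not specific to this question) behind the statement in the first paragraph that the de Rham complex is, up to sign, its own formal adjoint under the wedge pairing: for $\omega\in\Omega^k(M)$ and compactly supported $\eta\in\Omega^{n-k-1}_c(M)$, Stokes' theorem gives
$$ 0=\int_M \mathrm{d}(\omega\wedge\eta)=\int_M \mathrm{d}\omega\wedge\eta+(-1)^k\int_M \omega\wedge \mathrm{d}\eta, \qquad\text{i.e.}\qquad \int_M \mathrm{d}\omega\wedge\eta=(-1)^{k+1}\int_M \omega\wedge \mathrm{d}\eta, $$
which is exactly the defining relation $\langle u, d^i v\rangle=\langle d^{i*}u, v\rangle$ with $d^{i*}=\pm\mathrm{d}$ once $\Gamma(\tilde{V}^{i*})$ is identified with $\Omega^{n-i}(M)$.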
[K. Freese and M. Lewis, Cardassian Expansion: a Model in which the Universe is Flat, Matter Dominated, and Accelerating, arXiv: 0201229] Cardassian Model is a modification to the Friedmann equation in which the Universe is flat, matter dominated, and accelerating. An additional term, which contains only matter or radiation (no vacuum contribution), becomes the dominant driver of expansion at a late epoch of the universe. During the epoch when the new term dominates, the universe accelerates. The authors named this period of acceleration the Cardassian era. (The name Cardassian refers to a humanoid race in Star Trek whose goal is to take over the universe, i.e., accelerated expansion. This race looks foreign to us and yet is made entirely of matter.) Pure matter (or radiation) alone can drive an accelerated expansion if the first Friedmann equation is modified by the addition of a new term on the right hand side as follows:\[H^2=A\rho+B\rho^n,\]where the energy density $\rho$ contains only ordinary matter and radiation, and $n<2/3$ (a short sketch of why $n<2/3$ produces acceleration is given after the bulk-viscosity equations below). In the usual Friedmann equation $B=0$. To be consistent with the usual result, we take \[A=\frac{8\pi}{3M_{Pl}^2},\] where $M_{Pl}^2\equiv1/G$. A Universe filled with a perfect fluid represents quite a simple model which seems to be in good agreement with cosmological observations. But, on a more physical and realistic basis, we can replace the energy-momentum tensor of the simplest perfect fluid by introducing cosmic viscosity. The energy-momentum tensor with bulk viscosity is given by\[T_{\mu\nu}=(\rho+p-\xi\theta)u_\mu u_\nu+(p-\xi\theta)g_{\mu\nu},\]where $\xi$ is the bulk viscosity, and $\theta\equiv3H$ is the expansion scalar. This modifies the equation of state of the cosmic fluid. The Friedmann equations with inclusion of the bulk viscosity, i.e.
using the energy-momentum tensor $T_{\mu\nu}$, read\begin{align}\nonumber \frac{\dot a^2}{a^2}&=\frac13\rho,\quad \rho=\rho_m+\rho_\Lambda,\quad 8\pi G=1;\\\nonumber \frac{\ddot a}{a}&=-\frac16(\rho+3p-9\xi H).\end{align} Problems #150_8-#150_14 are inspired by A. Avelino and U. Nucamendi, Can a matter-dominated model with constant bulk viscosity drive the accelerated expansion of the universe? arXiv:0811.3253
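Returning to the Cardassian term above, here is a short sketch (my own, following the logic of the Freese–Lewis paper cited there) of why $n<2/3$ is exactly the condition for acceleration. Once the $B\rho^n$ term dominates and the universe is matter dominated, $\rho\propto a^{-3}$, so
\[H^2\simeq B\rho^n\propto a^{-3n}\quad\Longrightarrow\quad a(t)\propto t^{2/(3n)}\quad(0<n<2/3),\]
and the power-law exponent $2/(3n)$ exceeds $1$, i.e. $\ddot a>0$, precisely when $n<2/3$ (for $n\le 0$ the expansion accelerates even faster).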
The property follows from the property of the corresponding (weak form of the) partial differential equation; this is one of the advantages of finite element methods compared to, e.g., finite difference methods. To see that, first recall that the finite element method starts from the weak form of the Poisson equation (I'm assuming Dirichlet boundary conditions here): Find $u\in H^1_0(\Omega)$ such that$$ a(u,v):= \int_\Omega \nabla u\cdot \nabla v \,dx = \int_\Omega fv\,dx \qquad\text{for all }v\in H^1_0(\Omega).$$The important property here is that$$ a(v,v) = \|\nabla v\|_{L^2}^2 \geq c \|v\|_{H^1}^2 \qquad\text{for all }v\in H^1_0(\Omega). \tag{1}$$(This follows from Poincaré's inequality.) Now the classical finite element approach is to replace the infinite-dimensional space $H^1_0(\Omega)$ by a finite-dimensional subspace $V_h\subset H^1_0(\Omega)$ and find $u_h\in V_h$ such that $$ a(u_h,v_h):= \int_\Omega \nabla u_h\cdot \nabla v_h \,dx = \int_\Omega fv_h\,dx \qquad\text{for all }v_h\in V_h.\tag{2}$$The important property here is that you are using the same $a$ and a subspace $V_h\subset H^1_0(\Omega)$ (a conforming discretization); that means that you still have$$ a(v_h,v_h) \geq c \|v_h\|_{H^1}^2 >0 \qquad\text{for all }v_h\in V_h. \tag{3}$$ Now for the last step: To transform the variational form to a system of linear equations, you pick a basis $\{\varphi_1,\dots,\varphi_N\}$ of $V_h$, write $u_h =\sum_{i=1}^N u_i\varphi_i$ and insert $v_h=\varphi_j$, $1\leq j\leq N$ into $(2)$. The stiffness matrix $K$ then has the entries $K_{ij}=a(\varphi_i,\varphi_j)$ (which coincides with what you wrote). Now take an arbitrary vector $\vec v=(v_1,\dots,v_N)^T\in \mathbb{R}^N$ and set $v_h:=\sum_{i=1}^Nv_i \varphi_i\in V_h$. Then we have by $(3)$ and the bilinearity of $a$ (i.e., you can move scalars and sums into both arguments)$$ \vec v^T K \vec v = \sum_{i=1}^N\sum_{j=1}^N v_iK_{ij} v_j =\sum_{i=1}^N\sum_{j=1}^N a(v_i\varphi_i,v_j\varphi_j) = a(v_h,v_h) >0.$$Since $\vec v$ was arbitrary, this implies that $K$ is positive definite. TL;DR: The stiffness matrix is positive definite because it comes from a conforming discretization of a (self-adjoint) elliptic partial differential equation.
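As a concrete (and entirely standard) illustration of the conclusion, here is a small NumPy sketch — my own, with made-up names — for the simplest case: the 1D Poisson problem on $(0,1)$ with homogeneous Dirichlet conditions and piecewise-linear hat functions on a uniform mesh, where $K_{ij}=\int_0^1\varphi_i'\varphi_j'\,dx$ gives the familiar tridiagonal matrix $\tfrac1h\,\mathrm{tridiag}(-1,2,-1)$.

```python
import numpy as np

N = 8                                  # number of interior nodes
h = 1.0 / (N + 1)
K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h   # stiffness matrix

# All eigenvalues are strictly positive ...
print(np.linalg.eigvalsh(K).min() > 0)          # True
# ... and v^T K v = a(v_h, v_h) > 0 for any nonzero coefficient vector v.
v = np.random.default_rng(1).standard_normal(N)
print(v @ K @ v > 0)                            # True
```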
Second SSC CGL level Solution Set, topic Trigonometry This is the second solution set of 10 practice problem exercise for SSC CGL exam on topic Trigonometry. Students must complete the corresponding question set in prescribed time first and then only refer to the solution set. It is emphasized here that answering in MCQ test is not at all the same as answering in a school test where you need to derive the solution in perfectly elaborated steps. In MCQ test instead, you need basically to deduce the answer in shortest possible time and select the right choice. None will ask you about what steps you followed. Based on our analysis and experience we have seen that, for accurate and quick answering, the student must have complete understanding of the basic concepts of the topics is adequately fast in mental math calculation should try to solve each problem using the most basic concepts in the specific topic area and does most of the deductive reasoning and calculation in his head rather than on paper. Actual problem solving happens in item 3 and 4 above. But how to do that? You need to use your your problem solving abilities only. There is no other recourse. If you have not taken the corresponding test yet, first take the test by referring to in prescribed time and then only return to these solutions for gaining maximum benefits. SSC CGL level Question Set 2 on Trigonometry Watch the video solutions below. Second solution set- 10 problems for SSC CGL exam: topic Trigonometry - time 20 mins Q1. If $0^0 < \theta < 90^0$ and $2sin^2\theta + 3cos\theta = 3$ then the value of $\theta$ is, $30^0$ $60^0$ $45^0$ $75^0$ Solution: You identify the given equation as a second order equation mixed in $sin\theta$ and $cos\theta$. You reason that to find the value of $\theta$ you must know the value of $sin\theta$ or $cos\theta$, and that you can get from a second order quadratic equation in either $sin\theta$ or $cos\theta$, but not both mixed up. It implies that you must convert either $sin\theta$ to $cos\theta$ or $cos\theta$ to $sin\theta$. This is Deductive reasoning starting from the target, that is, the end point. This type of analysis falls under the powerful End State Analysis Approach. Most of the tricky math problems can be quickly solved by intelligent use of this approach. Thus you decide at this point to convert the $sin^2\theta$ to $cos^2\theta$ by the most important trigonometric identity, $sin^2\theta + cos^2\theta = 1$. This is the first transformation on the given equation, and we get given equation as, \begin{align} E = & 2(sin^2\theta + cos^2\theta) \\ & -2cos^2\theta + 3cos\theta = 3 \end{align} $$Or,\quad 2cos^2\theta - 3cos\theta + 1 = 0 $$ $$Or,\quad (2cos\theta - 1)(cos\theta - 1)=0. $$ This is simple algebra where you have to use your knowledge in factorization of quadratic equation at the least. Hint - noticing $2cos^2\theta$ in the second order term and $1$ in the third numeric term, you straightway decide one factor to be $(2cos\theta - 1)$. Now only you use the first condition in the problem. In case of second root, $cos\theta=1$ and $\theta=0^0$ that violates the given condition. The first root $cos\theta=\frac{1}{2}$ gives $\theta=60^0$, the answer. If you find it difficult to remember the values of $sin\theta$ and $cos\theta$ for various values of $\theta$, you just have to remember the following picture depicting variation of $sin\theta$ and $cos\theta$ for various values of $\theta$. 
From the figure it can never be forgotten that $cos\theta=1$ when $\theta=0^0$ and its value reduces to $0$ when value of $\theta$ goes on increasing to $90^0$. In between there are two significant values of $cos\theta$, namely $\frac{1}{2}$ and $\frac{\sqrt{3}}{2}$ at $\theta$ values either $30^0$ or $60^0$. As $\frac{1}{2}$ is smaller than $\frac{\sqrt{3}}{2}$, and also as value of $cos\theta$ reduces from 1 to 0, it follows that $cos60^0=\frac{1}{2}$. This is direct application of the Principle of Less facts and more procedure. Answer: Option b: $60^0$. Key concept used: Elimination of $sin\theta$ -- factorization -- use of given condition. To remember values of $cos\theta$ without loading the memory use of powerful problem solving Principle of Less facts more procedures. Q2. If $sin\theta=\frac{a}{\sqrt{a^2 + b^2}}$, then the value of $cot\theta$ will be, $\frac{b}{a}$ $\frac{a}{b}$ $\frac{a}{b} + 1$ $\frac{b}{a} + 1$ Solution: We reason, to find the value of $cot\theta$, we need values of $sin\theta$ and $cos\theta$, one of which is given. Using the identity, $ sin^2\theta + cos^2\theta = 1$ we can mentally arrive at the result, $cos^2\theta = 1 - sin^2\theta =\frac{b^2}{a^2 + b^2}$ Or, $cos\theta = \frac{b}{\sqrt{a^2 + b^2}}$ So, $cot\theta = \frac{b}{a}$. Answer: Option a : $\frac{b}{a}$ . Key concept used: Find the unknown element in the target $cot\theta$ using given $sin\theta$. Alternative method: We remember that there is a relationship between $sin\theta$ and $cot\theta$ via $cosec\theta$ using the identity, $cosec^2\theta = 1 + cot^2\theta$. We simply inverse the given equation and square it to get, \begin{align} cosec^2\theta & = 1 + cot^2\theta \\ & = \frac{a^2 + b^2}{a^2} \\ & = 1 + \frac{b^2}{a^2} \end{align} which gives in turn, $cot\theta = \frac{b}{a}$. Can you spot the difference between the two methods? Apparently both are quick ways to reach the solution. Think for a few minutes. The first method worked from the very basics and the second used a rich concept. Usually we prefer to solve a problem using the most basic concept. That is fastest. But in this case, with repeated use, the two rich concepts involving $tan^2\theta + 1 = sec^2\theta$ and $cot^2\theta + 1 = cosec^2\theta$ can very well be used as first level resource in establishing relationship between $sec\theta$, $tan\theta$ and $cosec\theta$, $cot\theta$. Q3. If $tan\theta=\frac{3}{4}$ and $0<\theta<\frac{\pi}{2}$ and $25xsin^2\theta{cos\theta}=tan^2\theta$, then the value of $x$ is, $\frac{7}{64}$ $\frac{9}{64}$ $\frac{3}{64}$ $\frac{5}{64}$ Solution: In any problem of this type, you examine first the target expression. Here target expression is the third one as it is the most complex. The second expression here is a condition and the first is a simple identity. Best plan is to use these simpler given information to simplify the most complex expression. This again is another important principle of problem solving. If two facts or expressions are given, one much more complex than the other, always know that, you will start simplifying the second using the first. This principle we call as Principle of relative complexity. We use the less complex information as a tool to simplify the more complex information. In the second expression, we cancel out the $sin^2\theta$ on both sides and take the $cos\theta$ on the RHS to get, $25x = sec\theta{sec^2\theta}$ Now we use the simpler expression $tan\theta=\frac{3}{4}$ to derive, $sec^2\theta=1 + tan^2\theta=\frac{25}{16}$. 
Substituting we get, $25x=\frac{25}{16}sec\theta$, Or, $x = \frac{sec\theta}{16}$ Now is the time to use the second condition in the expression, $sec\theta = \pm{\frac{5}{4}=\frac{5}{4}}$. Finally we then get $x=\frac{5}{64}$. Answer: Option d: $\frac{5}{64}$. Key concept used: Simplifying more complex trigonometric expression -- using the given simpler information suitably. Q4. If $xsin\theta - ycos\theta = \sqrt{x^2 + y^2}$ and $\frac{cos^2\theta}{a^2} + \frac{sin^2\theta}{b^2} = \frac{1}{x^2 + y^2}$ then, $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$ $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ $\frac{x^2}{b^2} - \frac{y^2}{a^2} = 1$ Solution: At first the expressions seem to be a little complex, but on examining the first expression (as usual the resource to be used as a tool) against the second expression and the choices, you will find first, there is no square root. So without hesitation you undertake to square the expression and simplify, $x^2sin^2\theta + y^2cos^2\theta$ $\qquad - 2xysin\theta{cos\theta} = x^2 + y^2$ Transposing and collecting $sin^2\theta$ and $cos^2\theta$ it further simplifies to, $x^2cos^2\theta + y^2sin^2\theta + 2xysin\theta{cos\theta} = 0$ Or, $(xcos\theta + ysin\theta)^2 = 0$ Or, $xcos\theta = -ysin\theta$ Or, $cot^2\theta = \frac{y^2}{x^2}$ Or, $cot^2\theta + 1 = cosec^2\theta = \frac{y^2 + x^2}{x^2}$ Or, $sin^2\theta = \frac{x^2}{x^2 + y^2}$ Or, $cos^2\theta = 1 - sin^2\theta = \frac{y^2}{x^2 + y^2}$ Now your target is to eliminate $cos\theta$ and $sin\theta$, by substituting values of $cos^2\theta=\frac{y^2}{x^2 + y^2}$, and $sin^2\theta=\frac{x^2}{x^2 + y^2}$ in the second expression, giving, $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$. Answer: Option b: $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$. Key concept used: Identifying no square root -- squaring and simplifying -- eliminating $sin\theta$ and $cos\theta$ in second expression. Q5. The value of $sin^21^0 + sin^23^0 + sin^25^0 + ...$ $... + sin^287^0 + sin^289^0$ is, $22$ $22\frac{1}{2}$ $23$ $22\frac{1}{4}$ Solution: We must have the basic knowledge that, $sin\theta = cos(90^0 - \theta)$ for positive $\theta$ less than or equal to $90^0$. Using this knowledge we can form pairs sum of each of which will be 1 as below, \begin{align} sin^21^0 + sin^289^0 & = cos^289^0 + sin^289^0 \\ & = 1 \end{align} Question now is how many terms does the expression have? Without using any formula, just test for a similar expression ending with $sin^27^0$ and another with the next one $sin^29^0$. We know the first to have 4 terms and the second 5 terms. Knowing this we try the conjecture that number of terms for the first = $\frac{7 + 1}{2} = 4$ and for the second, $\frac{9 + 1}{2} = 5$ both tallying with our direct knowledge. Thus confirming our conjecture, we use the often used Principle of induction to get the number of terms in the given expression = $\frac{89 + 1}{2} = 45$. With 45 terms, we get 22 pairs giving sum as 22 and a single term in the middle, the 23rd term. The second term has angle $2\times{(2 - 1)} + 1 = 3^0$, the third, $2\times{(3 - 1)} + 1 = 5^0$. It tallies. Using induction again, we get the angle of the middle term = $2\times{(23 - 1)} + 1 = 45^0$. As $sin45^0 = \frac{1}{\sqrt{2}}$, we get the given sum = $22\frac{1}{2}$. Answer: Option b: $22\frac{1}{2}$. Key concept used: Identifying the useful pattern of pairing using the relationship $sin\theta = cos(90^0 - \theta)$ and of course $sin^2\theta + cos^2\theta = 1$ --- finding number of terms and the middle term. Q6. 
The minimum value of $cos^2\theta + sec^2\theta$ is, 0 1 2 3 Solution: The value of $cos\theta$ ranges from -1 to 1 intermediate values being less than 1. That is $-1\leq\cos\theta\leq1$. This gives, $0\leq\cos^2\theta\leq1$. Thus assuming $cos^2\theta = \frac{1}{x}$, we get $x\geq1$, that is, a positive real number with minimum value 1. Substituting in the given expression, we get, $E=\frac{1}{x} + x = \frac{x^2 + 1}{x}$, where $x\geq1$. As $x\geq1$ and numerator grows much faster than the denominator with increasing value of $x$, the expression will have minimum value when $x$ has minimum value of 1. So minimum value of expression will be 2. Answer: Option c : 2. Key concept used: Transforming the expression in such a way that positive valued variables greater than 1 form the numerator and the denominator allowing comparison of the growth of the two with increasing value of $x$ -- using the knowledge of how changing variable values affect an expression. Q7. If $cos\theta + sec\theta = 2$ $(0^0\leq{\theta}\leq{90^0})$ then the value of $cos{10}\theta + sec{11}\theta$ is, 0 1 2 -1 Solution: After initial examination we find the target expression a little hard to break. Recalling that simpler initial expression is to be used for evaluating more complex expression, we concentrate on the first expression. $cos\theta + sec\theta = 2$ Or, $cos^2\theta - 2cos\theta + 1 = 0$ Or, $cos\theta=1$, giving $\theta=0^0$. So given expression = 2. Answer: Option c: 2. Key concept used: Simplification of given expression automatically resolves the problem of $10\theta$ and $11\theta$. Q8. If $tan\theta=\frac{3}{4}$ and $\theta$ is acute then, $cosec\theta$ is equal to, $\frac{5}{3}$ $\frac{5}{4}$ $\frac{4}{3}$ $\frac{4}{5}$ Solution: $cot\theta=\frac{4}{3}$, or, $cot^2\theta = cosec^2\theta - 1 = \frac{16}{9}$, or, $cosec^2\theta = \frac{25}{9}$, or, $cosec\theta = \frac{5}{3}$. Answer: Option a: $\frac{5}{3}$. Key concept used: Recognition and use of $cosec^2\theta$ and $cot^2\theta$ relation. Q9. If $\displaystyle\frac{sin\theta + cos\theta}{sin\theta - cos\theta} = 3$ then the numerical value of $sin^4\theta - cos^4\theta$ is, $\frac{1}{2}$ $\frac{2}{5}$ $\frac{3}{5}$ $\frac{4}{5}$ Solution: Adding 1 to both sides, subtracting 1 from both sides and taking ratio of the two we get, $tan\theta = 2$. Target expression, \begin{align} E & = (sin^2\theta + cos^2\theta)(sin^2\theta - cos^2\theta) \\ & = (sin^2\theta - cos^2\theta) \\ & = 1 - 2cos^2\theta \end{align} Again $tan\theta = 2$, or, $tan^2\theta = 4 = sec^2\theta - 1$, or, $sec^2\theta = 5$, or, $cos^2\theta = \frac{1}{5}$ Thus $E = 1 - \frac{2}{5} = \frac{3}{5}$ Answer: Option c: $\frac{3}{5}$. Key concept used: Simplification of given expression to get value of $tan\theta$, and then $cos^2\theta$ - simplifying target expression in terms of $cos^2\theta$. Simplification using componendo dividendo technique which is a standard technique when the numerator and denominator are in this form. Getting $tan\theta$ we can always get $sec\theta$ and hence $cos\theta$. Q10. The minimum value of $2sin^2\theta + 3cos^2\theta$ is, 0 3 2 1 Solution: Target expression, $E = 2 + cos^2\theta$ Minimum value of $cos^2\theta$ being 0, minimum value of target expression is 2. Answer: Option c: 2. Key concept used: Simplification of given expression -- minimum value of $cos^2\theta$. Note: You will observe that in many of the Trigonometric problems rich algebraic concepts and techniques are to be used. In fact that is the norm. 
Algebraic concepts are frequently used for elegant solutions of Trigonometric problems. Resources on Trigonometry and related topics You may refer to our useful resources on Trigonometry and other related topics especially algebra. Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry SSC CGL level question and solution sets in Trigonometry SSC CGL level Solution Set 2 on Trigonometry
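One last remark (my own addition): when practising, a two-line numerical check in Python is a handy way to confirm answers such as Q1 and Q5 above without a table of values.

```python
import math

# Q1: 2sin^2(theta) + 3cos(theta) = 3 should hold at theta = 60 degrees.
t = math.radians(60)
print(abs(2 * math.sin(t)**2 + 3 * math.cos(t) - 3) < 1e-12)        # True

# Q5: sin^2(1) + sin^2(3) + ... + sin^2(89) (in degrees) should equal 22.5.
s = sum(math.sin(math.radians(d))**2 for d in range(1, 90, 2))
print(abs(s - 22.5) < 1e-12)                                        # True
```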
Let $\nu(x)$ be a symmetric probability measure with respect to the origin on $x\in[-1,1]$ such that $\nu(\{0\})\neq 1$. Consider a random walk started at $S_0=0$, denoted $S_n=X_1+\dotsb+X_n$, where $X_1,X_2, \dotsc$ are i.i.d. random variables such that $X_i\sim \nu(x)$. Fix $1\leq L<\infty$, and put $\tau=\inf\{n\geq0: S_n>L\}$ and $\hbar_{\nu,L}=\mathbb{E}(S_\tau)-L$. In other words, $\hbar_{\nu,L}$ is the mean value of exitpoint distance from $L$. My question is how to derive the explicit formula for $\hbar_{\nu,L}$. Maybe one can start by fixing $L = 1$ and choosing some simple $\nu(x)$, say with probability density function $\mu(x)$ given by $\mu(x)=1/2$, $x\in[-1,1]$ or $\mu(x)=\frac{2}{\pi}\sqrt{1-x^2}$, $x\in[-1,1]$. Could you recommend some relevant papers or books for me? Anyway, any hints or help would be appreciated.
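Not an answer to the explicit-formula question, but while waiting for references it may help to have numbers to test candidate formulas against. Below is a crude Monte Carlo sketch (my own; the function names and the rejection sampler are just illustrative) estimating $\hbar_{\nu,L}$ for $L=1$ and the two example densities mentioned above.

```python
import random

def mean_overshoot(sample_step, L=1.0, n_walks=5000, max_steps=10**5, seed=0):
    """Crude Monte Carlo estimate of E[S_tau] - L.  Walks that have not crossed
    L after max_steps steps are skipped; tau is a.s. finite but heavy-tailed,
    so this truncation introduces a small bias."""
    random.seed(seed)
    overshoots = []
    for _ in range(n_walks):
        s, n = 0.0, 0
        while s <= L and n < max_steps:
            s += sample_step()
            n += 1
        if s > L:
            overshoots.append(s - L)
    return sum(overshoots) / len(overshoots)

# nu uniform on [-1, 1], i.e. mu(x) = 1/2
print(mean_overshoot(lambda: random.uniform(-1.0, 1.0)))

# nu with the semicircle density mu(x) = (2/pi) sqrt(1 - x^2), by rejection sampling
def semicircle_step():
    while True:
        x, u = random.uniform(-1.0, 1.0), random.uniform(0.0, 1.0)
        if u <= (1.0 - x * x) ** 0.5:
            return x
print(mean_overshoot(semicircle_step))
```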
Basic Statistics Basic statistics for a discrete variable X: Mean(μ) = Expect Value E[X] =\(\frac{1}{n} \sum_{i=1}^{n} x_{i} \) Median If n is odd then \(x_{\frac{n+1}{2}}\) else \(\frac{x_{\frac{n}{2}} + x_{\frac{n+2}{2}}}{2}\) Variance (\(\sigma^{2}=E_{x \sim p(x)}[(X-E[X])^2]\)) (n-1 is called degree of freedom) =\(\frac{1}{n-1}\sum_{i=1}^{n} (x_{i} -\mu)^{2}\) Standard deviation (\(\sigma\)) =\(\sqrt{\sigma^{2}}\) Mode is the value x at which its probability mass function takes its maximum value. For example the mode of {1,1,1,2,2,3,4} is 1 because it appears 3 times Covariance(X,Y) \(= E[(X-E[X])(Y-E[Y])]\) \(= E[XY] -E[X]E[Y]\) \(= \frac{1}{n-1} \sum_{i=1}^{n} (X_i – μ_X)(Y_i – μ_Y)\) Correlation(X,Y) =\(\frac{Covariance(X,Y)}{\sqrt{Var(X).Var(Y)}}\) Standard error \(= \frac{σ}{\sqrt{n}}\) Basic statistics for a continuous variable X: Mean(μ) = Expect Value E[X] =\(\int_{all\, x} p(x)\,x\,dx\) Median m such as p(x<=m) = .5 Variance (\(\sigma^{2}=E_{x \sim p(x)}[(X-E[X])^2]\)) =\(\int_{all\, x} p(x)\,(x -\mu)^{2}\,dx\) Standard deviation (\(\sigma\)) =\(\sqrt{\sigma^{2}}\) Mode is the value x at which its probability density function has its maximum value Examples: For a set {-1, -1, 1, 1} => Mean = 0, Variance = 1.33, Standard deviation = 1.15 If \(x \in [0, +\infty]\) and p(x) = exp(-x) => Mean = 1, Variance = 1, Standard deviation = 1 Expected value Expectations are linear. E[X+Y] = E[X] + E[Y] Probability Distributions A random variable is a set of outcomes from a random experiment. A probability distribution is a function that returns the probability of occurrence of an outcome. For discrete random variables, this function is called “Probability Mass Function”. For continuous variables, this function is called “Probability Density Function”. A joint probability distribution is a function that returns the probability of joint occurrence of outcomes from two or more random variables. If random variables are independent then the joint probability distribution is equal to the product of the probability distribution of each random variable. A conditional probability distribution is the probability distribution of a random variable given another random variables. Example: A 0.2 B 0.8 P(X) is the probability distribution of X. C 0.1 D 0.9 P(Y) is the probability distribution of Y. A C 0.1 A D 0.1 B D 0.8 P(X,Y) is the joint probability distribution of X and Y. A 0.1/0.9 B 0.8/0.9 P(X|Y=D) is the conditional probability distribution of X given Y = D. Marginal probability Sometimes we know the probability distribution over a set of variables and we want to know the probability distribution over just a subset of them. The probability distribution over the subset is known as the marginal probability distribution. For example, suppose we have discrete random variables x and y, and we know P(x, y). We can find P(x) with the sum rule: \(P(x) = \sum_y P(x,y)\) Below the statistical properties of some distributions. Binomial distribution Nb of output values = 2 (like coins) n = number of trials p = probability of success P(X) = \(C_n^X * p^{X} * (1-p)^{n-X}\) Expected value = n.p Variance = n.p.(1-p) Example: If we flip a fair coin (p=0.5) three time (n=3), what’s the probability of getting two heads and one tail? P(X=2) = P(2H and 1T) = P(HHT + HTH + THH) = P(HHT) + P(HTH) + P(THH) = p.p(1-p) + p.(1-p).p + (1-p).p.p = \(C_3^2.p^2.(1-p)\) Bernoulli distribution Bernoulli distribution is a special case of the binomial distribution with n=1. 
Number of possible output values = 2 (like coins); n = 1 (number of trials); p = probability of success; X \(\in\) {0,1}.
P(X) = \(p^{X} * (1-p)^{1-X}\)
Expected value = p
Variance = p.(1-p)
Example: If we flip a fair coin (p=0.5) one time (n=1), what's the probability of getting 0 heads? P(X=0) = P(0H) = P(1T) = 1-p = \(p^0.(1-p)^1\)
Multinomial distribution
It's a generalization of the Binomial distribution. In a Multinomial distribution we can have more than two outcomes, and to each outcome we assign a probability of success.
Normal (Gaussian) distribution
P(x) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{x-\mu}{\sigma})^{2})\)
σ and μ are sufficient statistics (sufficient to describe the whole curve).
Expected value = \(\int_{-\infty}^{+\infty} p(x) x \, dx\)
Variance = \(\int_{-\infty}^{+\infty} (x - \mu)^2 p(x) \, dx\)
Standard Normal distribution (Z-Distribution)
It's a normal distribution with mean = 0 and standard deviation = 1.
P(z) = \(\frac{1}{\sqrt{2\pi}} exp(-\frac{1}{2} z^2)\)
Cumulative distribution function: \(P(x \leq z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} exp(-\frac{1}{2} x^2) \, dx\)
Exponential Family distribution
\(P(x;\theta) = h(x)\exp \left(\eta (θ)\cdot T(x)-A(θ)\right) \), where T(x), h(x), η(θ), and A(θ) are known functions. θ = vector of parameters. T(x) = vector of "sufficient statistics". A(θ) = cumulant generating function.
The Binomial distribution is an Exponential Family distribution. Its probability mass function \(P(x)=C_n^x\ p^{x}(1-p)^{n-x},\quad x\in \{0,1,2,\ldots ,n\}\) can equivalently be written as: \(P(x)=C_n^x\ exp (log(\frac{p}{1-p}).x - (-n.log(1-p)))\)
The Normal distribution is an Exponential Family distribution. Consider a random variable distributed normally with mean μ and variance \(σ^2\). The probability density function can be written as \(P(x;θ) = h(x)\exp(η(θ).T(x)-A(θ)) \) with: \(h(x)={\frac{1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {x^{2}}{2\sigma ^{2}}}}\), \(T(x)={\frac {x}{\sigma }}\), \(A(\mu)={\frac {\mu ^{2}}{2\sigma ^{2}}}\), \(\eta(\mu)={\frac {\mu }{\sigma }}\)
Poisson distribution
The Poisson distribution is popular for modelling the number of times an event occurs in an interval of time or space (e.g. number of arrests, number of fish in a trap...). In a Poisson distribution, values are discrete and can't be negative. The probability mass function is defined as \(P(x=k)=\frac{λ^k.e^{-λ}}{k!}\), where k is the number of occurrences and λ is the expected number of occurrences.
Exponential distribution
The exponential distribution has a probability density with a sharp point at x = 0: \(P(x; λ) = λ.1_{x≥0}.exp (−λx)\)
Laplace distribution
The Laplace distribution has a sharp peak of probability mass at an arbitrary point μ. The probability density function is defined as \(Laplace(x;μ,γ) = \frac{1}{2γ} exp(-\frac{|x-μ|}{γ})\)
The Laplace distribution is symmetrical and more "peaky" than a normal distribution; the dispersion of the data around the mean is higher than that of a normal distribution. It is also sometimes called the double exponential distribution.
Dirac distribution
The probability density function is defined as \(P(x;μ) = δ(x-μ)\), where δ(x-μ) = 0 when x ≠ μ and \(\int_{-∞}^{∞} δ(x-μ)\, dx= 1\), so all of the probability mass is concentrated at the single point μ.
Empirical Distribution
Other known Exponential Family distributions: Dirichlet.
Laplace Smoothing
Given a set S={a1, a1, a1, a2}.
Laplace smoothed estimate for P(x) with domain of x in {a1, a2, a3}: \(P(x=a1)=\frac{3 + 1}{4 + 3}\), \(P(x=a2)=\frac{1 + 1}{4 + 3}\), \(P(x=a3)=\frac{0 + 1}{4 + 3}\)
Maximum Likelihood Estimation
Given three independent data points \(x_1=1, x_2=0.5, x_3=1.5\), what is the mean μ of the normal distribution that these three points are most likely to have come from (we suppose the variance = 1)?
If μ = 4, then the probabilities \(P(X=x_1), P(X=x_2), P(X=x_3)\) will be low, and \(P(x_1,x_2,x_3) = P(X=x_1)*P(X=x_2)*P(X=x_3)\) will also be low.
If μ = 1, then the probabilities \(P(X=x_1), P(X=x_2), P(X=x_3)\) will be high, and \(P(x_1,x_2,x_3) = P(X=x_1)*P(X=x_2)*P(X=x_3)\) will also be high. This means that the three points are more likely to come from a normal distribution with mean μ = 1.
The likelihood function is defined as: \(P(x_1,x_2,x_3; μ)\)
Central Limit Theorem
The central limit theorem states that if you have a population with mean μ and standard deviation σ and take sufficiently large random samples from the population with replacement, then the distribution of the sample means will be approximately normally distributed.
Bayesian Network
X1, X2 are random variables. P(X1,X2) = P(X2,X1) = P(X2|X1) * P(X1) = P(X1|X2) * P(X2)
P(X1) is called the prior probability. P(X1|X2) is called the posterior probability.
Example: A mixed school has 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees from a distance a student wearing trousers. What is the probability that this student is a girl?
The prior probability P(Girl): 0.4
The posterior probability P(Girl|Trouser): \(\frac{P(Trouser|Girl)*P(Girl)}{P(Trouser|Girl) * P(Girl) + P(Trouser|Boy) * P(Boy)} = 0.25\) (a quick numeric check appears at the end of this section).
Parameter estimation: Bayesian Approach vs Frequentist Approach
There are two approaches that can be used to estimate the parameters of a model.
The frequentist approach (maximum likelihood): \(arg\ \underset{θ}{max} \prod_{i=1}^m P(y^{(i)}|x^{(i)};θ)\)
The Bayesian approach (maximum a posteriori): \(arg\ \underset{θ}{max} P(θ|\{(x^{(i)}, y^{(i)})\}_{i=1}^m)\)\(=arg\ \underset{θ}{max} \frac{P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m|θ) * P(θ)}{P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m)}\)\(=arg\ \underset{θ}{max} P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m|θ) * P(θ)\)
If the \(\{(x^{(i)}, y^{(i)})\}\) are independent, then: \(=arg\ \underset{θ}{max} \prod_{i=1}^m P((y^{(i)},x^{(i)})|θ) * P(θ)\)
To calculate P(θ) (called the prior), we assume that θ is Gaussian with mean 0 and variance \(\sigma^2\). \(=arg\ \underset{θ}{max} log(\prod_{i=1}^m P((y^{(i)},x^{(i)})|θ) * P(θ))\) \(=arg\ \underset{θ}{max} log(\prod_{i=1}^m P((y^{(i)},x^{(i)})|θ)) + log(P(θ))\)
After a few derivations, we find that the expression is equivalent to the L2-regularized linear cost function: \(=arg\ \underset{θ}{min} \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} + λ θ^Tθ\)
Because of the prior, Bayesian algorithms are less susceptible to overfitting.
Cumulative distribution function (CDF)
Given a random continuous variable S with density function p(s), the cumulative distribution function is \(F(s) = p(S<=s) = \int_{-∞}^{s} p(t)\, dt\), so that F'(s) = p(s).
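As a quick numeric check of the Bayes example and the Laplace smoothing above, here is a minimal Python sketch (added for illustration; the numbers are exactly the ones used in the text):

from collections import Counter

# Bayes rule for the school example: P(Girl) = 0.4, P(Trouser|Girl) = 0.5, P(Trouser|Boy) = 1.0
p_girl, p_boy = 0.4, 0.6
p_trouser_given_girl, p_trouser_given_boy = 0.5, 1.0
posterior = (p_trouser_given_girl * p_girl) / (
    p_trouser_given_girl * p_girl + p_trouser_given_boy * p_boy
)
print(posterior)  # 0.25

# Laplace (add-one) smoothing for S = {a1, a1, a1, a2} over the domain {a1, a2, a3}
counts = Counter(["a1", "a1", "a1", "a2"])
domain = ["a1", "a2", "a3"]
n = sum(counts.values())
smoothed = {x: (counts[x] + 1) / (n + len(domain)) for x in domain}
print(smoothed)  # {'a1': 4/7, 'a2': 2/7, 'a3': 1/7}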
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the build in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. 
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
Mathematics
Here, I will briefly review the terms and then derive the equation system for homography. If you follow this, you will see why you need to detect at least 4 points, no three of which lie on the same line.
Terms
Correspondence. Imagine you have 2 photos of the same object taken from slightly different positions. Then the points \(p_1\) and \(p_2\) on the respective images are corresponding if they are projections of the same physical point. The sign for correspondence is \(p_1\ \hat{=}\ p_2\).
Homogeneous coordinates. This is a coordinate system used in projective geometry and will be used here from now on as well. A 2D vector \(\begin{bmatrix} x\\ y \end{bmatrix}\) in cartesian coordinates is expressed as the 3D vector \(\begin{bmatrix} wx\\ wy\\ w \end{bmatrix}, \forall w\neq 0\) in homogeneous coordinates. Similarly, 3D vectors in cartesian coordinates are 4D vectors in homogeneous coordinates. Also, note that \(\begin{bmatrix} w_{1}x\\ w_{1}y\\ w_1 \end{bmatrix}=\begin{bmatrix} w_{2}x\\ w_{2}y\\ w_2 \end{bmatrix}, \forall w_1\neq 0,\ w_2\neq 0\) in homogeneous coordinates.
Matrices are used to represent certain geometric transformations in homogeneous coordinates. The transformation of a point \(p\) is realized by a simple multiplication, so that \(p'=Mp\). In addition, transformations can be merged into a single one by standard matrix multiplication.
Homography Equations
Wikipedia says that any two images of the same planar surface in space are related by a homography (assuming a pinhole camera model). In other words, if \(I\) and \(I'\) are 2 images containing the same planar surface, then there exists a 3×3 matrix \(H\) which maps points \(p\) into corresponding points \(p'\), such that \(p'=Hp\). Remember that these points must be on that plane. Let's write down the equations in more detail.
\[ H=\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \\ \begin{bmatrix} w'x' \\ w'y' \\ w' \end{bmatrix}=\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\begin{bmatrix} wx \\ wy \\ w \end{bmatrix} \\ \begin{bmatrix} w'x' \\ w'y' \\ w' \end{bmatrix}=\begin{bmatrix} h_{11}wx + h_{12}wy + h_{13}w \\ h_{21}wx + h_{22}wy + h_{23}w \\ h_{31}wx + h_{32}wy + h_{33}w \end{bmatrix} \]
The goal is to figure out the 9 elements of the matrix \(H\). Without losing any generality, you can assume that \(w = 1\) and switch to cartesian coordinates by division. This gives the following equation.
\[ \begin{bmatrix} x' \\ y' \end{bmatrix}=\begin{bmatrix} \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}} \\ \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \end{bmatrix} \]
This equation system has 9 unknowns. Luckily, you can multiply all elements of \(H\) by a non-zero \(k\) without affecting the solution at all. This removes 1 degree of freedom and opens 2 possible ways to a solution. The first way is to set \(h_{33} = 1\); you can do this as long as \(h_{33}\neq 0\). The second, more general way is to impose a unit-norm constraint such as \(h_{11}^2+h_{12}^2+h_{13}^2+h_{21}^2+h_{22}^2+h_{23}^2+h_{31}^2+h_{32}^2+h_{33}^2=1\). Here I will use the first way because it seems to be more intuitive and better supported by the numerical libraries.
Homography Solution
Setting \(h_{33}=1\) will give the following.
\[ \begin{bmatrix} x' \\ y' \end{bmatrix}=\begin{bmatrix} \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1} \\ \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1} \end{bmatrix} \]
After separating into components, multiplying, and reorganizing, you get these 2 equations.
\[ x'=h_{11}x + h_{12}y + h_{13} - h_{31}xx' - h_{32}yx' \\ y'=h_{21}x + h_{22}y + h_{23} - h_{31}xy' - h_{32}yy' \]
These are linear equations with 8 unknowns. Therefore, in theory, it is required to have 8 equations (with certain preconditions to make sure the system is not degenerate) to be able to figure out the unknowns. In practice, we have the estimated 4 corner points of the marker paper. Although there are some errors carried over from the image processing part, no three of these points lie on a single line. Therefore it is possible to plug them into the equations and use numerical methods to get the approximated solution with minimal error. This is what the equations look like.
\[ \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1x_1' & -y_1x_1' \\ 0 & 0 & 0 &x_1 & y_1 & 1 & -x_1y_1' & -y_1y_1' \\ x_2 & y_2 & 1 & 0 & 0 & 0 & -x_2x_2' & -y_2x_2' \\ 0 & 0 & 0 &x_2 & y_2 & 1 & -x_2y_2' & -y_2y_2' \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{bmatrix} \begin{bmatrix} h_{11} \\ h_{12} \\ h_{13} \\ h_{21} \\ h_{22} \\ h_{23} \\ h_{31} \\ h_{32} \end{bmatrix} = \begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ \cdots \end{bmatrix} \]
I won't go into numerics here; I just use the solver provided by the mathematics library to get a solution like this one (a small NumPy sketch of this step follows at the end of this section).
\[ H=\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix} \\ \]
The source code for homography estimation is located inside the Ar class, method estimateHomography.
Use Case
Homography has several use cases; you can easily find them on the internet. Here is just the one relevant to this project. Let's estimate the homography such that the detected rectangle corresponds to the fixed rectangle. Then draw a blue square into the fixed rectangle and the corresponding square into the original image. The result is right below.
Summary
This chapter covered homography, which allows you to draw onto planar surfaces in the image. In the next chapter, you will learn how to extend homography to get the projection matrix and be able to draw 3D objects lying on top of that plane.
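As a companion to the linear system derived above, here is a minimal NumPy sketch (added for illustration) that estimates H from four correspondences with a least-squares solver. The point pairs below are made up; the real project uses the detected marker corners via the Ar class's estimateHomography method.

import numpy as np

def estimate_homography(src, dst):
    """Estimate H (with h33 = 1) mapping src points to dst points.

    src, dst: arrays of shape (N, 2) with N >= 4 corresponding points,
    no three of them collinear.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical correspondences: a unit square mapped to a skewed quadrilateral.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[10, 10], [110, 20], [100, 120], [5, 115]], float)
H = estimate_homography(src, dst)

# Check: applying H to a source point reproduces the destination point.
p = np.array([1.0, 0.0, 1.0])
q = H @ p
print(q[:2] / q[2])  # ~ [110, 20]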
Unrecognised TeX code (2)
Why is the following LaTeX code correctly recognised by the view() function (for outputting results) but not by the text() function (for inclusion in a plot)?
Dy='$\\begin{align}\\Delta y&=y_2-y_1 \\\\ &=m\\cdot\\Delta x \\end{align}$'
view(Dy) gives a correctly formatted LaTeX formula, but text(Dy,...) returns an error message.
Natural number
The numbers which are generally used in our day-to-day life for counting are termed natural numbers. They are also referred to as "counting" numbers: \(\mathbb{N} = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, \ldots\}\)
Even number
A number that is divisible by 2 without remainder. Even numbers are denoted 2N: \(2\mathbb{N} = \{0, 2, 4, 6, 8, \ldots\}\)
Odd number
A number that leaves a remainder when divided by 2. Odd numbers are denoted 2N+1: \(2\mathbb{N}+1 = \{1, 3, 5, 7, 9, \ldots\}\)
Prime number
A number greater than 1 that is divisible without remainder only by 1 and itself. Prime numbers are denoted P: \(\mathbb{P} = \{2, 3, 5, 7, 11, \ldots\}\)
Integer
Signed numbers: \(\mathbb{I} = (-I, 0, +I) = (I<0,\ I=0,\ I>0)\)
Fraction
\(\frac{a}{b}\)
Complex number
A number made up of a real and an imaginary part: \(Z = a + ib = \sqrt{a^{2}+b^{2}}\ \angle \tan^{-1}\frac{b}{a}\)
Imaginary number
\(i = \sqrt{-1}\), for example \(i9\).
Mathematical operations on arithmetic numbers
Addition: \(A+B=C\), e.g. \(2+3=5\)
Subtraction: \(A-B=C\), e.g. \(2-3=-1\)
Multiplication: \(A\times B=C\), e.g. \(2\times 3=6\)
Division: \(\frac{A}{B}=C\), e.g. \(\frac{2}{3}\approx 0.667\)
Exponentiation: \(A^{n}=C\), e.g. \(2^{3}=2\times 2\times 2=8\)
Root: \(\sqrt{A}=C\), e.g. \(\sqrt{9}=3\)
Logarithm: \(\log A=C\), e.g. \(\log 100=2\)
Natural logarithm: \(\ln A=C\), e.g. \(\ln 9\approx 2.2\)
Example: \(ax,\ y^{2}\)
Order of operations on an arithmetic expression
Operations are performed in the following order: parentheses ({}, [], ()), then powers, then add, subtract, multiply, divide (+, -, ×, /). Examples: \((x-y)^{2}+y=z\), \(x+y^{2}=6\)
Cartesian Coordinate and Polar Coordinate
Real number coordination: a point A can be presented as \((X, Y)\) in the XY (Cartesian) coordinate system and as \((R, \theta)\) in the R, θ (polar) coordinate system.
Scalar maths: \(R\angle\theta = \sqrt{X^{2}+Y^{2}}\ \angle \tan^{-1}\frac{Y}{X}\), i.e. \(R = \sqrt{X^{2}+Y^{2}}\) and \(\theta = \tan^{-1}\frac{Y}{X}\)
Vector maths: \(X(\theta) = R\cos\theta\), \(Y(\theta) = R\sin\theta\), \(R(\theta) = X(\theta) + Y(\theta) = R(\cos\theta + \sin\theta)\), \(\nabla\cdot R(\theta) = X(\theta) = R\cos\theta\), \(\nabla\times R(\theta) = Y(\theta) = R\sin\theta\)
Complex number coordination: \(Z: (X, jY),\ (Z, \theta)\) and its conjugate \(Z^{*}: (X, -jY),\ (Z, -\theta)\), with \(X(\theta) = Z\cos\theta\), \(jY(\theta) = jZ\sin\theta\), \(-jY(\theta) = -jZ\sin\theta\).
\(Z\angle\theta = \sqrt{X^{2}+Y^{2}}\ \angle\tan^{-1}\frac{Y}{X}\) and \(Z\angle(-\theta) = \sqrt{X^{2}+Y^{2}}\ \angle\left(-\tan^{-1}\frac{Y}{X}\right)\), i.e. \(Z = \sqrt{X^{2}+Y^{2}}\) and \(\theta = \tan^{-1}\frac{Y}{X}\)
\(Z(\theta) = X(\theta) + jY(\theta) = Z(\cos\theta + j\sin\theta)\), with \(\nabla\cdot Z(\theta) = X(\theta) = Z\cos\theta\) and \(\nabla\times Z(\theta) = jY(\theta) = jZ\sin\theta\)
\(Z^{*}(\theta) = X(\theta) - jY(\theta) = Z(\cos\theta - j\sin\theta)\), with \(\nabla\cdot Z^{*}(\theta) = X(\theta) = Z\cos\theta\) and \(\nabla\times Z^{*}(\theta) = -jY(\theta) = -jZ\sin\theta\)
\(\cos\theta = \frac{Z(\theta)+Z^{*}(\theta)}{2Z}\), \(\sin\theta = \frac{Z(\theta)-Z^{*}(\theta)}{2jZ}\), \(-\sin\theta = \frac{Z^{*}(\theta)-Z(\theta)}{2jZ}\)
Arithmetic Function
Definition
A function is an arithmetical expression which relates 2 variables. A function is denoted \(f(x)=y\), meaning that for any value of x there is a corresponding value y = f(x), where x is the independent variable, y is the dependent variable, and f(x) is the function of x.
Graph of function
\(f(x)=x\): for x = -2, -1, 0, 1, 2 the values are f(x) = -2, -1, 0, 1, 2; a straight line passing through the origin (0,0) with slope 1.
\(f(x)=2x\): for x = -2, -1, 0, 1, 2 the values are f(x) = -4, -2, 0, 2, 4; a straight line passing through the origin (0,0) with slope 2.
\(f(x)=2x+3\): for x = -2, -1, 0, 1, 2 the values are f(x) = -1, 1, 3, 5, 7; a straight line with slope 2, x-intercept (-3/2, 0) and y-intercept (0, 3).
Types of Functions
Mathematical operations on functions
An arithmetic equation is an expression of a function of a variable set equal to zero, \(f(x)=0\). Arithmetic equations can be solved to find the value of the variable that satisfies the equation. The process of finding this value is called root finding. All values of the variable that make the function equal to zero are called roots of the equation.
Example
Equation: \(2x+5=9\). Root: \(x={\frac{9-5}{2}}={\frac{4}{2}}=2\). \(x=2\) is the root of the equation, since substituting this value of x into the equation gives \(2(2)+5=9\).
Types of equations
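The coordinate conversions above are easy to check numerically. Here is a small Python sketch (my own illustration, not part of the original page); note that math.atan2 handles all four quadrants, whereas the plain tan⁻¹(Y/X) written above is only valid for X > 0.

import math

def to_polar(x, y):
    """Cartesian (X, Y) -> polar (R, theta), with theta = atan2(Y, X) in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Polar (R, theta) -> Cartesian (R*cos(theta), R*sin(theta))."""
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3.0, 4.0)
print(r, math.degrees(theta))   # 5.0, ~53.13 degrees
print(to_cartesian(r, theta))   # ~(3.0, 4.0)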
Passing between measures In this article we show how to pass between dense sets under the uniform measure on [math][3]^n[/math] and dense sets under the equal-slices measure on [math][3]^n[/math], at the expense of localising to a combinatorial subspace. Let's first show how to pass between two product distributions. Given [math]p_1 + p_2 + p_3 = 1[/math], let [math]\mu^n_{p_1,p_2,p_3}[/math] denote the product distribution on [math][3]^n[/math] where each coordinate is independently chosen to be [math]j[/math] with probability [math]p_j[/math], [math]j=1,2,3[/math]. Let [math]A \subseteq [3]^n[/math], which we can think of as a set or as an event. We are interested in [math]A[/math]'s probability under different product distributions; say, [math]\mu = \mu^n_{p_1,p_2,p_3}[/math] and [math]\mu' = \mu^n_{p'_1,p'_2,p'_3}[/math]. By definition, [math]|\mu(A) - \mu'(A)| \leq d_{TV}(\mu, \mu')[/math], where [math]d_{TV}[/math] denotes total variation distance. Since the measures are product measures, it's more convenient to work with Hellinger distance [math]d_{H}[/math] (see this article for background information). It is known that [math]d_{TV} \leq d_{H}[/math]. The convenient aspect of Hellinger distance is that if [math]\nu = \nu_1 \times \nu_2[/math], [math]\lambda = \lambda_1 \times \lambda_2[/math] are any two product distributions then we have the "triangle inequality" [math]d_H^2(\nu, \lambda) \leq d_H^2(\nu_1, \lambda_1) + d_H^2(\nu_2, \lambda_2)[/math]. Hence [math]d_H(\mu, \mu') \leq \sqrt{n} d_H(\mu^1_{p_1,p_2,p_3}, \mu^1_{p'_1,p'_2,p'_3})[/math]. In particular, if the Hellinger distance between the two 1-coordinate distributions is small compared to [math]1/\sqrt{n}[/math], the overall total variation distance is small. To bound these 1-coordinate Hellinger distances we can use the fact that [math]d_H \leq \sqrt{2 d_{TV}}[/math]. Or if you are desperate to save small constant factors, one can check: Fact: If the 1-coordinate distribution [math](p'_1, p'_2, p'_3)[/math] is a mixture of the 1-coordinate distributions [math](p_1,p_2,p_3)[/math] and [math](1/3, 1/3, 1/3)[/math], with mixing weights [math]1-\epsilon[/math] and [math]\epsilon[/math], then [math]d_{H}(\mu^1_{p_1,p_2,p_3}, \mu^1_{p'_1,p'_2,p'_3}) \sim \sqrt{\epsilon}[/math] for small [math]\epsilon[/math], and is always at most [math]\sqrt{2\epsilon}[/math].
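These inequalities are easy to sanity-check numerically. The following Python sketch (added for illustration) uses the convention d_H^2(ν, λ) = Σ_i (√ν_i − √λ_i)^2, which is consistent with both d_TV ≤ d_H and d_H ≤ √(2 d_TV) as used above; some references include an extra factor of 1/2 in the definition.

import numpy as np

def d_tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def d_h(p, q):
    """Hellinger distance with d_H^2 = sum_i (sqrt(p_i) - sqrt(q_i))^2."""
    return np.sqrt(((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# 1-coordinate distributions on [3]: an arbitrary (p1, p2, p3) and its
# epsilon-mixture with the uniform distribution, as in the Fact above.
p = np.array([0.5, 0.3, 0.2])
eps = 0.01
q = (1 - eps) * p + eps * np.ones(3) / 3

print(d_tv(p, q), d_h(p, q), np.sqrt(2 * d_tv(p, q)))
# The printed values satisfy d_TV <= d_H <= sqrt(2 * d_TV).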
\[\sum_{n=0}^{\infty} \dfrac{1}{2015^{2^{n}} - 2015^{-(2^{n})}}\]
If the closed form of the series above is \(\frac{a}{b}\), where \(a\) and \(b\) are positive coprime integers, then find \(b - a\).
Motivation: Earlier today I was talking to a researcher about how well a normal distribution could approximate a uniform distribution over an interval. I gave a few arguments for why I thought a normal distribution wouldn't be good, but I didn't have the exact answer at the top of my head, so I decided to find out. Although the following analysis involves nothing fancy, I consider it useful as it's easily generalised to higher dimensions (i.e. multivariate uniform distributions) and we arrive at a result which I wouldn't consider intuitive. For those who appreciate numerical experiments, I wrote a small TensorFlow script to accompany this blog post. Statement of the problem: We would like to minimise the KL-Divergence: \begin{equation} \mathcal{D_{KL}}(P\|Q) = \int_{-\infty}^\infty p(x) \ln \frac{p(x)}{q(x)}dx \end{equation} where \(P\) is the target uniform distribution and \(Q\) is the approximating Gaussian: \begin{equation} p(x)= \frac{1}{b-a} \mathbb{1}_{[a,b]}(x) \implies p(x \notin [a,b]) = 0 \end{equation} and \begin{equation} q(x)= \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}} \end{equation} Now, if we assume that the interval \([a,b]\) is fixed, our loss may be expressed in terms of \(\mu\) and \(\sigma\): \begin{equation}\begin{split}\mathcal{L}(\mu,\sigma) & = \int_{a}^b p(x) \ln \frac{p(x)}{q(x)}dx \\ & = -\ln(b-a) + \frac{1}{2}\ln(2\pi\sigma^2)+\frac{\frac{1}{3}(b^3-a^3)-\mu(b^2-a^2)+\mu^2(b-a)}{2\sigma^2(b-a)} \end{split} \end{equation} Minimising with respect to \(\mu\) and \(\sigma\): We can easily show that the mean and variance of the Gaussian which minimises \(\mathcal{L}\) correspond to the mean and variance of a uniform distribution over \([a,b]\): \begin{equation} \frac{\partial}{\partial \mu} \mathcal{L}(\mu,\sigma) = \frac{2\mu}{2\sigma^2} - \frac{(b+a)}{2\sigma^2} = 0 \implies \mu = \frac{a+b}{2} \end{equation} \begin{equation} \frac{\partial}{\partial \sigma} \mathcal{L}(\mu,\sigma) = \frac{1}{\sigma}-\frac{\frac{1}{3}(b^2+a^2+ab)-\frac{1}{4}(b+a)^2}{\sigma^3} =0 \implies \sigma^2 = \frac{(b-a)^2}{12} \end{equation} Although I wouldn't have guessed this result, the careful reader will notice that it readily generalises to higher dimensions. Analysing the loss with respect to optimal Gaussians: After entering the optimal values of \(\mu\) and \(\sigma\) into \(\mathcal{L}\) and simplifying the resulting expression, we have the following residual loss: \begin{equation} \mathcal{L}^* = \frac{1}{2}\Big(\ln \big(\frac{\pi}{6}\big)+1\Big) \approx 0.18 \end{equation} I find this result surprising because I didn't expect the dependence on the interval length \(b-a\) to vanish. That said, my current intuition for this result is that if we tried fitting \([a,b]\) to the optimal Gaussian we would obtain: \begin{equation} [a,b] = [\mu - \sqrt{3}\sigma, \mu + \sqrt{3}\sigma] \end{equation} so this minimisation problem corresponds to a linear re-scaling of the uniform parameters in terms of \(\mu\) and \(\sigma\). Remark: The reader may experiment with the following TensorFlow function, which outputs the approximating mean and variance of a Gaussian given a uniform distribution on the interval \([a,b]\).
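Since the TensorFlow script itself is not reproduced here, the following NumPy sketch (a stand-in of my own, not the author's code) checks the closed-form result numerically: it computes the optimal μ and σ for an arbitrary interval and evaluates the residual KL-divergence by quadrature.

import numpy as np

def kl_uniform_gaussian(a, b, mu, sigma, n=200_001):
    """Numerically evaluate KL(U[a,b] || N(mu, sigma^2)) with the trapezoidal rule."""
    x = np.linspace(a, b, n)
    p = 1.0 / (b - a)
    log_q = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
    return np.trapz(p * (np.log(p) - log_q), x)

a, b = -2.0, 5.0                   # any interval; the residual should not depend on it
mu = (a + b) / 2                   # optimal mean
sigma = (b - a) / np.sqrt(12)      # optimal standard deviation

print(kl_uniform_gaussian(a, b, mu, sigma))   # ~0.176
print(0.5 * (np.log(np.pi / 6) + 1))          # closed-form residual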
Last time I threw the definition of 'adjoint functor' at you. Now let me actually explain adjoint functors! As we learned long ago, the basic idea is that adjoints give the best possible way to approximately recover data that can't really be recovered. For example, you might have a map between databases that discards some data. You might like to reverse this process. Strictly speaking this is impossible: if you've truly discarded some data, you don't know what it is anymore, so you can't restore it. But you can still do your best! There are actually two kinds of 'best': left adjoints and right adjoints. Remember the idea. We have a functor \(F : \mathcal{A} \to \mathcal{B}\). We're looking for a nice functor \(G: \mathcal{B} \to \mathcal{A} \) that goes back the other way, some sort of attempt to reverse the effect of \(F\). We say that \(G\) is a right adjoint of \(F\) if there's a natural one-to-one correspondence between morphisms \(F(a) \to b\) and morphisms \(a \to G(b)\) whenever \(a\) is an object of \(\mathcal{A}\) and \(b\) is an object of \(\mathcal{B}\). In this situation we also say \(F\) is a left adjoint of \(G\). The tricky part in this definition is the word 'natural'. That's why I had to explain natural transformations. But let's see how far we can get understanding adjoint functors without worrying about this. Let's do an example. There's a category \(\mathbf{Set}\) where objects are sets and morphisms are functions. And there's a much more boring category \(\mathbf{1}\), with exactly one object and one morphism. Today let's call that one object \(\star\), so the one morphism is \(1_\star\). In Puzzle 135 we saw there is always exactly one functor from any category to \(\mathbf{1}\). So, there's exactly one functor $$ F: \mathbf{Set} \to \mathbf{1} $$ This sends every set to the object \(\star\), and every function between sets to the morphism \(1_\star\). This is an incredibly destructive functor! It discards all the information about every set and every function! \(\mathbf{1}\) is like the ultimate trash can, or black hole. Drop data into it and it's gone. So, it seems insane to try to 'reverse' the functor \(F\), but that's what we'll do. First let's look for a right adjoint $$ G: \mathbf{1} \to \mathbf{Set} .$$ For \(G\) to be a right adjoint, we need a natural one-to-one correspondence between morphisms $$ m: F(S) \to \star $$ and morphisms $$ n: S \to G(\star) $$ where \(S\) is any set. Think about what this means! We know \(F(S) = \star\): there's nothing else it could be, since \(\mathbf{1}\) has just one object. So, we're asking for a natural one-to-one correspondence between the set of morphisms $$ m : \star \to \star $$ and the set of functions $$ n : S \to G(\star) .$$ This has got to work for every set \(S\). This should tell us a lot about \(G(\star)\). Well, there's just one morphism \( m : \star \to \star\), so there had better be just one function \(n : S \to G(\star)\), for any set \(S\). This forces \(G(\star)\) to be a set with just one element. And that does the job! We can take \(G(\star)\) to be any set with just one element, and that gives us our right adjoint \(G\). Well, okay: we have to say what \(G\) does to morphisms, too. But the only morphism in \(\mathbf{1}\) is \(1_\star\), and we must have \(G(1_\star) = 1_{G(\star)} \), thanks to how functors work. (Furthermore you might wonder about the 'naturality' condition, but this example is so trivial that it's automatically true.)
So: if you throw away a set into the trash bin called \(\mathbf{1}\), and I say "wait! I want that set back!", and I have to make up something, the right adjoint procedure says "pick any one-element set". Weird but true. When you really understand adjoints, you'll have a good intuitive sense for why it works this way. What about the left adjoint procedure? Let's use \(L\) to mean a left adjoint of our functor \(F\): Puzzle 149. Again suppose \(F: \mathbf{Set} \to \mathbf{1}\) is the functor that sends every set to \(\star\) and every function to \(1_\star\). A left adjoint \(L : \mathbf{1} \to \mathbf{Set} \) is a functor for which there's a natural one-to-one correspondence between functions $$ m: L(\star) \to S $$ and morphisms $$ n: \star \to F(S) $$ for every set \(S\). On the basis of this, try to figure out all the left adjoints of \(F\). Let's also try some slightly harder examples. There is a category \(\mathbf{Set}^2\) where an object is a pair of sets \( (S,T)\). In this category a morphism is a pair of functions, so a morphism $$ (f,g): (S,T) \to (S',T') $$ is just a function \(f: S \to S'\) together with function \(g: T \to T'\). We compose these morphisms componentwise: $$ (f,g) \circ (f',g') = (f\circ f', g \circ g') . $$ You can figure out what the identity morphisms are, and check all the category axioms. There's a functor $$ F: \mathbf{Set}^2 \to \mathbf{Set} $$ that discards the second component. So, on objects it throws away the second set: $$ F(S,T) = S $$ and on morphisms it throws away the second function: $$ F(f,g) = f .$$ Puzzle 150. Figure out all the right adjoints of \(F\). Puzzle 151. Figure out all the left adjoints of \(F\). There's also a functor $$ \times: \mathbf{Set}^2 \to \mathbf{Set} $$ that takes the Cartesian product, both for sets: $$ \times (S,T) = S \times T $$ and for functions: $$ \times (f,g) = f \times g $$ where \((f\times g)(s,t) = (f(s),g(t))\) for all \(s \in S, t \in T\). Puzzle 152. Figure out all the right adjoints of \(\times\). Puzzle 153. Figure out all the left adjoints of \(\times\). Finally, there's also a functor $$ + : \mathbf{Set}^2 \to \mathbf{Set} $$ that takes the disjoint union, both for sets: $$ + (S,T) = S + T $$ and for functions: $$ +(f,g) = f + g .$$ Here \(S + T\) is how category theorists write the disjoint union of sets \(S\) and \(T\). Furthermore, given functions \(f: S \to S'\) and \(g: T \to T'\) there's an obvious function \(f+g: S+T \to S'+T'\) that does \(f\) to the guys in \(S\) and \(g\) to the guys in \(T\). Puzzle 152. Figure out all the right adjoints of \(+\). Puzzle 153. Figure out all the left adjoints of \(+\). I think it's possible to solve all these puzzles even if one has a rather shaky grasp on adjoint functors. At least try them! It's a good way to start confronting this new concept.
Search Now showing items 1-10 of 18 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Defining parameters
Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.dm (of order \(12\) and degree \(4\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 360 \)
Character field: \(\mathbb{Q}(\zeta_{12})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).
                   Total  New  Old
Modular forms         96    0   96
Cusp forms             0    0    0
Eisenstein series     96    0   96
The following table gives the dimensions of subspaces with specified projective image type.
Image type   \(D_n\)  \(A_4\)  \(S_4\)  \(A_5\)
Dimension        0       0       0       0
Is there a way to convert a FIR to an IIR filter with the most similar behavior? I would say that the answer to your question - if taken literally - is 'no', there is no general way to simply convert an FIR filter to an IIR filter. I agree with RBJ that one way to approach the problem is to look at the FIR filter's impulse response and use a time domain method (such as Prony's method) to approximate that impulse response by an IIR filter. If you start from the frequency response then you have lots of methods for designing IIR filters. Even though it was published about 25 years ago, I believe that the method by Chen and Parks is still one of the better ways to approach the design problem. Another very simple method for the frequency domain design of IIR filters is the equation error method, which is described in the book Digital Filter Design by Parks and Burrus. I've explained it in this answer. If the phase response is of importance to you, then one problem you will be facing when designing IIR filters in the frequency domain is the exact choice of the desired phase response. If the overall shape of the desired phase is given you still have one degree of freedom, which is the delay. E.g., if the desired phase is $\phi_D(\omega)$, and the desired magnitude is $M_D(\omega)$ then your desired frequency response can be chosen as $$H_D(\omega)=M_D(\omega)e^{j(\phi(\omega)-\omega\tau)}\tag{1}$$ where $\tau$ is an unknown delay parameter. Of course you can say that if $\phi_D(\omega)$ is given then you don't want to modify it with an additional (positive or negative) delay. But it turns out that in practice the average delay is not always important, and - more importantly - for certain values of $\tau$ your approximation will be much better for a given filter order than for others. So the delay $\tau$ can become an additional design parameter and should be chosen optimally or at least reasonably. I've written a thesis on the design of digital filters with prescribed magnitude and phase responses. One chapter deals with the frequency domain design of IIR filters. That method can be used to design IIR filters with approximately linear phase in the pass-bands, or to approximate any other desired phase (and magnitude) response. The filters are not only guaranteed to be stable, but you can also prescribe a maximum pole radius, i.e., you can define a certain stability margin. You can also find this method in a paper published in the IEEE Transactions on Signal Processing. Matt L's answer is the best from a DSP perspective. There exist a whole array of techniques from the control literature that might also do what you're asking. While this is not explicitly turning an FIR filter into an IIR, the techniques will generally find an IIR solution unless some other constraints are applied. Some of the techniques are: Optimal Hankel Norm Approximation uses the infinity-like Hankel norm to approximate a high order system with one of a lower order. There is a matlab implementation of it in the Robust Control Toolbox. Yet another method that might be able approximate (not exactly match) a given arbitrary frequency response (such as one described by some given FIR filter) by an IIR filter, is Differential Evolution. Differential Evolution is a type of genetic algorithm that, for this use, iteratively selects and adapts a set of poles and zeros in an attempt to minimize a computed difference error. 
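To make the equation-error idea mentioned above a bit more concrete, here is a rough Python/NumPy sketch (my own illustration, assuming SciPy is available; it is not the exact formulation from Parks and Burrus). It fits a low-order IIR filter to the frequency response of a given FIR prototype by linear least squares; the FIR taps and the orders nb, na are arbitrary placeholders, and nothing here enforces stability, so the resulting pole radii should be checked.

import numpy as np
from scipy.signal import freqz

# Hypothetical FIR prototype; any set of FIR taps would do here.
fir = np.hanning(16)
fir /= fir.sum()

nb, na = 3, 3                    # assumed IIR numerator/denominator orders
w, H = freqz(fir, worN=512)      # target frequency response of the FIR

# Equation-error formulation: B(w) - H(w)*(A(w) - 1) ~= H(w),
# which is linear in the unknown coefficients b[0..nb] and a[1..na].
E = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))
M = np.hstack([E[:, :nb + 1], -H[:, None] * E[:, 1:na + 1]])

# Stack real and imaginary parts so the least-squares problem is real-valued.
A_ls = np.vstack([M.real, M.imag])
y_ls = np.concatenate([H.real, H.imag])
theta, *_ = np.linalg.lstsq(A_ls, y_ls, rcond=None)

b = theta[:nb + 1]
a = np.concatenate([[1.0], theta[nb + 1:]])

# Compare the fitted IIR against the FIR prototype and check pole radii.
_, H_iir = freqz(b, a, worN=512)
print("max magnitude error:", np.max(np.abs(np.abs(H_iir) - np.abs(H))))
print("pole radii:", np.abs(np.roots(a)))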
There seem to be a few IEEE papers on the topic, as well as a chapter in one of Rick Lyons' books ("Streamlining DSP"). If you're trying to match the impulse response of the IIR to a given impulse response, however it's mathematically defined (I guess the FIR is as good a definition as any), I've always thought that the Prony method was the first stab at the problem. If you're trying to match the frequency response of the IIR to a given frequency response, however it's mathematically defined (I guess the frequency response of the FIR is as good a definition as any), I've recently thought that Greg Berchin's FDLS might be the way to go. Richard Lyons (who commented on your question) published a monograph where Greg had a chapter describing the method. Matt L has also researched and published on the problem.
Well, yes, since you didn't require an exact equivalent, but not without grief. An FIR filter is equivalent to a polynomial, so one can derive a Padé approximation. It will not necessarily be stable, it is very sensitive to scaling, and the result isn't thrilling. Below I use a Hanning window as an FIR example and the Pade routine in the Symbolic toolbox (which most people don't have, but GNU Maxima does). My other idea, which I haven't pursued, would be to generate a pseudorandom MA process and then use an ARMA estimator to recover the rational transfer function.
p = poly2sym(sym(round(100*hanning(16)))) % scaled hanning
p = 3*x^15 + 13*x^14 + 28*x^13 + 45*x^12 + 64*x^11 + 80*x^10 + 93*x^9 + 99*x^8 + 99*x^7 + 93*x^6 + 80*x^5 + 64*x^4 + 45*x^3 + 28*x^2 + 13*x + 3
h = pade(p,'Order',[3 3])
h = -(2534*x^3 + 11071*x^2 + 10368*x + 2961)/(- 2213*x^3 + 1964*x^2 + 821*x - 987)
[n,d] = numden(h)
n = - 2534*x^3 - 11071*x^2 - 10368*x - 2961
d = - 2213*x^3 + 1964*x^2 + 821*x - 987
num = sym2poly(n)
num = -2534 -11071 -10368 -2961
den = sym2poly(d)
den = -2213 1964 821 -987
fir = sym2poly(p);
rn = roots(num)
rn = -3.2067 + 0.0000i, -0.5812 + 0.1633i, -0.5812 - 0.1633i
rd = roots(den)
rd = -0.6679 + 0.0000i, 0.7777 + 0.2510i, 0.7777 - 0.2510i
num = num/sum(abs(num)); % normalizing coefficients
den = den/sum(abs(den));
fir = fir/sum(abs(fir));
[h,z] = freqz(num,den,1024);
figure(1)
plot(z,log10(abs(h))); ylabel('dB')
figure(2)
[h,z] = freqz(fir,1,1024);
plot(z,log10(abs(h))); ylabel('dB')
echo off
It is tempting to speculate that if a windowed impulse response h of length L can be "well modelled" by a low-order (relative to L) IIR filter, then the latter can be used to extrapolate the FIR filter beyond its original length.
What are the practical pros and cons of using Prony's method (time domain) versus invfreqz (frequency domain)? -k
I was reading the following equivalence (which is my summarized version of writing Theorem 4.2 in Rudin in one line; look at the Appendix in my question to see exactly what this means): $$ \lim_{x\to p} f(x) = q \iff \lim_{p_n \to p, p_n \neq p } f(p_n) = q$$ What I am interested in is showing that the RHS implies the LHS rigorously. I saw Rudin's proof via contrapositive/contradiction. I think I understand it, but I was a bit surprised that it was via that method rather than a direct method, because the statement seems quite natural to me, though there is one step that is tricky to make formal (perhaps the reason a direct proof doesn't exist?). Basically what we know is that the RHS is true, so that for every sequence $(p_n)$ in $E$ that converges to $p$ with $p_n \neq p$, $f(p_n)$ converges to $q$. So whenever we are $\delta$ close to $p$ via $n \geq N_{\delta}$, we have that $f(p_n)$ is epsilon close to $q$. The issue is that the RHS doesn't actually talk about $\delta$'s but about $N$'s (since it's about sequences). But the idea is simple. Choose a $\delta >0$ regardless. Then because every element in the neighbourhood defined by $\delta$ is an element of $E$, we can define any sequence we want from those elements (or they can be the tail end of some sequence, by appending whatever elements we want that are not within $\delta >0$ but that are still in $E$). Thus, since the RHS is in terms of every sequence in $E$, we can guarantee that we can cover any neighbourhood for any $\delta$ (because the RHS is in terms of every sequence in $E$, which is the key part for things to work, of course). Thus, since we can cover every neighbourhood of size $\delta$ because enough sequences will cover it, it means that for those terms we have $f(p_n)$ within $\epsilon$ of $q$, which is what we needed. I am pretty sure the proof is correct but I had a hard time expressing it in a clean way that makes it 100% clear that it's correct. Does anyone know how to do this? (Essentially, write the direct proof in a convincing, unambiguous way.) I noticed that the key part to show is that, because we have a property for every sequence, we can actually cover the whole neighbourhood for any $\delta$. Given a $\delta >0$, consider some terms inside it. Then that corresponds to some sequence. Now consider the remaining points, and define another sequence. Keep doing this forever. Then, because the RHS guarantees every sequence is covered (regardless of whether it's in the neighbourhood captured by $\delta >0$ or not), it means that any countable sequence we choose from the neighbourhood has some sequence with the properties of the RHS. Since this is true, it must mean that we covered the whole neighbourhood. This holds true for any $\delta$, so it doesn't matter which $\delta$ we choose. Appendix 1: Let $X$, $Y$ be metric spaces, $E \subset X$ and $p \in LP(E)$ (i.e. $p$ is a limit point of $E$). Then: $$ \lim_{x \to p} f(x) = q$$ if and only if: $$ \lim_{n \to \infty} f(p_n) = q $$ for every sequence $\{ p_n \}$ in $E$ such that: $$ p_n \neq p, \quad \lim_{n \to \infty } p_n = p $$ Appendix 2: To make the question self-contained I will write Rudin's definition of the limit, i.e. what $$ \lim_{x \to p} f(x) = q$$ means: there is a point $q \in Y$ with the following property: $$ \forall \epsilon >0\ \exists \delta >0: \ x \in E \text{ and } 0 < d_{X}(x,p) < \delta \implies d_Y(f(x),q) < \epsilon$$
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be an useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer. 
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a... @MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Search Now showing items 1-1 of 1 Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Search Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. 
Centrality classes are determined via the energy ...
Cauchy problem of semilinear inhomogeneous elliptic equations of Matukuma-type with multiple growth terms

1. School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi 710062, China
2. Department of Mathematics and Computer Science, John Jay College of Criminal Justice, CUNY, New York, NY 10019, USA
3. School of Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China

$ \Delta u+\sum\limits_{i = 1}^{k}K_i(|x|)u^{p_i}+\mu f(|x|) = 0, \quad x\in\mathbb{R}^n, $

Keywords: Cauchy problem, positive radial solutions, stability, sub- and super-solutions, Matukuma-type equation.
Mathematics Subject Classification: Primary: 35J10, 35J20; Secondary: 35J65.
Citation: Yunfeng Jia, Yi Li, Jianhua Wu, Hong-Kun Xu. Cauchy problem of semilinear inhomogeneous elliptic equations of Matukuma-type with multiple growth terms. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2019227
How To Sort By Average Rating
Sunday August 17, 2014

Evan Miller's well-known How Not To Sort By Average Rating points out problems with ranking by "wrong solution #1" (by differences, upvotes minus downvotes) and "wrong solution #2" (by average ratings, upvotes divided by total votes). Miller's "correct solution" is to use the lower bound of a Wilson score confidence interval for a Bernoulli parameter. I think it would probably be better to use Laplace smoothing, because: (1) Laplace smoothing is much easier, and (2) Laplace smoothing is not always negatively biased.

This is the Wilson scoring formula given in Miller's post, which we'll use to get 95% confidence interval lower bounds (use minus where it says plus/minus to calculate the lower bound):

\[ \frac{\hat{p} + \frac{z_{\alpha/2}^2}{2n} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z_{\alpha/2}^2}{4n^2}}}{1 + \frac{z_{\alpha/2}^2}{n}} \]

Here \( \hat{p} \) is the observed fraction of positive ratings, \( z_{\alpha/2} \) is the \( (1-\alpha/2) \) quantile of the standard normal distribution, and \( n \) is the total number of ratings.

Now here's the formula for doing Laplace smoothing instead: \[ \frac{\text{upvotes} + \alpha}{\text{total votes} + \beta} \] Here \( \alpha \) and \( \beta \) are parameters that represent our estimation of what rating is probably appropriate if we know nothing else (cf. Bayesian prior). For example, \( \alpha = 1 \) and \( \beta = 2 \) means that a post with no votes gets treated as a 0.5. The Laplace smoothing method is much simpler to calculate - there's no need for statistical libraries, or even square roots!

Does it successfully solve the problems of "wrong solution #1" and "wrong solution #2"? First, the problem with "wrong solution #1", which we might summarize as "the problem with large sample sizes":

              upvotes  downvotes  wrong #1  wrong #2  Wilson  Laplace
first item        209         50       159      0.81  0.7545     0.80
second item       118         25        93      0.83  0.7546     0.82

All the methods agree except for "wrong solution #1" that the second item should rank higher. Then there's the problem with "wrong solution #2", which we might summarize as "the problem with small sample sizes":

              upvotes  downvotes  wrong #1  wrong #2  Wilson  Laplace
first item          1          0         1      1.0     0.21     0.67
second item       534         46       488      0.92    0.90     0.92

All the methods agree except for "wrong solution #2" that the second item should rank higher.

How similar are the results for the Wilson method and the Laplace method overall? Take a look: here color encodes the score, with blue at 0.0, white at 0.5, and red at 1.0. They're so similar, you might say, that you would need a very good reason to justify the complexity of the calculation for the Wilson bound. But also, the differences favor the Laplace method! The Wilson method, because it's a lower bound, is negatively biased everywhere. It's certainly not symmetrical. Let's zoom in: with the Wilson method, you could have three upvotes, no downvotes and still rank lower than an item that is disliked by 50% of people over the long run. That seems strange.

The Laplace method does have its own biases. By choosing \( \alpha=1 \) and \( \beta=2 \), the bias is toward 0.5, which I think is reasonable for a ranking problem like this. But you could change it: \( \alpha=0 \) with \( \beta=1 \) biases toward zero, \( \alpha=1 \) with \( \beta=0 \) biases toward one. And \( \alpha=100 \) with \( \beta=200 \) biases toward 0.5 very strongly. With the Wilson method you can tune the size of the interval, adjusting the confidence level, but this only adjusts how strongly you're biased toward zero.

Here's another way of looking at the comparison. How do the two methods compare for varying numbers of upvotes with a constant number (10) of downvotes?
Those are similar curves. Not identical - but is there a difference to justify the complexity of the Wilson score? In conclusion: just adding a little bit to your numerators and denominators (Laplace smoothing) gives you a scoring system that is as good as or better than calculating Wilson scores. [code for this post] This post was originally hosted elsewhere.
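To make the comparison concrete, here is a short, self-contained Python sketch (my addition, not part of the original post) that reproduces the table values above; 1.96 is the z-value for a 95% interval, and alpha=1, beta=2 is the prior used in the post, while the print labels are just my own names for the post's example vote counts.

import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    # Lower bound of the Wilson score interval for a Bernoulli parameter.
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p_hat = upvotes / n
    centre = p_hat + z**2 / (2 * n)
    spread = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - spread) / (1 + z**2 / n)

def laplace_score(upvotes, downvotes, alpha=1, beta=2):
    # Laplace-smoothed rating: (upvotes + alpha) / (total votes + beta).
    return (upvotes + alpha) / (upvotes + downvotes + beta)

for label, up, down in [("first item", 209, 50), ("second item", 118, 25),
                        ("small sample", 1, 0), ("large sample", 534, 46)]:
    print(f"{label:12s}  Wilson={wilson_lower_bound(up, down):.4f}  "
          f"Laplace={laplace_score(up, down):.2f}")

Running this gives 0.7545/0.80 and 0.7546/0.82 for the first pair and 0.21/0.67 and 0.90/0.92 for the second, matching the tables above.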
Large time behavior of ODE type solutions to nonlinear diffusion equations

1. Mathematical Institute, Tohoku University, Aoba, Sendai 980-8578, Japan
2. Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan

$ \begin{equation} \left\{ \begin{array}{ll} \partial_t u = \Delta u^m+u^\alpha & \quad\mbox{in}\quad{\bf R}^N\times(0,\infty),\\ u(x,0) = \lambda+\varphi(x)>0 & \quad\mbox{in}\quad{\bf R}^N, \end{array} \right. \end{equation} $

where $m>0$, $\alpha\in(-\infty,1)$, $\lambda>0$, and $\varphi\in BC({\bf R}^N)\,\cap\, L^r({\bf R}^N)$ with $1\le r<\infty$ and $\inf_{x\in{\bf R}^N}\varphi(x)>-\lambda$, together with the associated ODE $\zeta' = \zeta^\alpha$ on $(0,\infty)$, whose solutions tend to $+\infty$ as $t\to\infty$.

Keywords: ODE type solutions, nonlinear diffusion equation, large time behavior, higher order asymptotic expansions, Gauss kernel.
Mathematics Subject Classification: Primary: 35B40, 35K55.
Citation: Junyong Eom, Kazuhiro Ishige. Large time behavior of ODE type solutions to nonlinear diffusion equations. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2019229
Search Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at s√ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... 
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Search Now showing items 1-10 of 53 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ... Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ... System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at sNN=5.02 TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ... Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ... Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Let $(X_{n, \theta})_{n \in \mathbb{N}, \theta \in \Theta}$ be a sequence of parameter-dependent real-valued random variables, where $\Theta$ is some parameter space. Assume that $X_{n, \theta}$ converges uniformly to $X_\theta$, i.e. for any continuous and bounded $f: \mathbb{R} \to \mathbb{R}$ $$ \sup_{\theta} \left|E(f(X_{n, \theta})) - E(f(X_\theta)) \right| \to 0 $$ as $n \to \infty$. Let $(y_\theta)_{\theta \in \Theta}$ be some family of real numbers. Does $(X_{n, \theta}, y_\theta)$ then converge uniformly to $(X_\theta, y_\theta)$, i.e. do we have, for any continuous and bounded $f: \mathbb{R}^2 \to \mathbb{R}$, $$ \sup_{\theta} \left|E(f(X_{n, \theta}, y_\theta)) - E(f(X_\theta, y_\theta)) \right| \to 0 $$ as $n \to \infty$? Intuitively I find it hard to believe that adding a constant that does nothing would change this convergence, but perhaps I need some assumptions such as boundedness of $y_\theta$ (which would be fine); I just can't figure out a way to show it. Usually arguments like this take the form: note that $g(x) = f(x, y_\theta)$ is continuous and then we're done. But here $g$ is $\theta$-dependent, and therefore I don't think the argument works. Any ideas?
Mini Series: Designing a Satellite for Dummies Are you an aspiring aerospace engineer, a space enthusiast, a parent checking your child’s homework or simply interested in the specifics of how to design certain satellite parts? Then this is the place to be. In this mini series we will go through the basics of designing and scaling a satellite, ranging from solar arrays to propellant tanks and even orbital parameters. If you would like us to cover other space-related topics, feel free to reach out to engineering@valispace.com. Part 2: How to size a Battery In the first part of this series we learned how to size the solar arrays of a satellite. We will now continue with another main part of the power subsystem: The battery pack! The batteries of an Earth orbiting satellite are primarily used to power the satellite during eclipse, so when the Earth is in between the satellite and the Sun. Another use case for batteries is when the system has a high power demand for short periods of time. In this case, it is not useful to size the solar arrays to meet this demand, so batteries are used. However, to keep this tutorial simple, we will focus on the battery usage during eclipse. We will start this tutorial by finding the required capacity of the batteries, after which we will perform the sizing of the batteries itself. Throughout this tutorial we will assume parameters for Lithium-Ion (Li-Ion) batteries. That is because, as can be seen in the figure below, Li-Ion batteries currently have the highest volumetric and specific energy density compared to other batteries, which make them very suitable for space applications. The required capacity In eclipse, the solar arrays cannot provide power to the satellite. This means that for the duration of the eclipse the batteries must provide power continuously to the satellite. The amount of energy the battery must be able to store is called the capacity ($C_{req}$). This is expressed as follows: $$C_{req} = P_{req} \cdot t_e \; \; [Wh]$$ In here, $P_{req}$ is the expected maximum average power required during eclipse. This average can be accurately found by calculating a weighted average of all power modes. However, for initial sizing purposes an estimate of the average required power is sufficient. $t_e$ is the time of eclipse and is dependent on your satellite orbit. The capacity at End of Life (EOL) To evaluate the capacity which the battery must actually have, it is necessary to take into account its inherent efficiency ($\eta_{bat}$) and the Depth of Discharge( $DOD$). Values for these parameters obviously change along with the type of battery. For example, for Li-Ion batteries they can be assumed to be $DOD = 40 \; \%$ and $\eta_{bat} = 95 \; \%$. As this is the capacity that is also required at the end of the satellites life, this is called the EOL capacity ($C_{EOL}$), which then is: $$C_{EOL} = \frac{C_{req}}{\eta_{bat} \cdot DOD} \; \; [Wh] $$ The capacity at Beginning of Life (BOL) Over the lifetime ($N_{years}$) of a satellite, the batteries will degrade. There is a strong link between the DOD and the degradation of the batteries, a higher DOD results in higher degradation. This is one of the reasons that choosing what batteries are suitable for your mission is not as straightforward as just choosing the highest values at BOL. The fact that the batteries will degrade means that the EOL capacity will be significantly lower than the BOL capacity. 
This is taken into account by using the fading factor ($F_{fading}$), which can be assumed to be $0.92 \; \% / year$ for Li-Ion batteries. The BOL capacity will then be: $$ C_{BOL} = \frac{C_{EOL}}{1 - F_{fading} \cdot N_{years}} \; \; [Wh] $$

The required mass and size

Now that we know the required capacity of the batteries, we can go on to determine the mass and size of the batteries. This is done using empirical relations, based on the specific energy ($m_{sp}$) and the energy density ($e_{density}$). For Li-Ion batteries they can be assumed to be $m_{sp} = 170 \; [Wh/kg]$ and $e_{density} = 300 \; [Wh/liter]$. The mass of the battery is then: $$M_{bat} = \frac{C_{BOL}}{ m_{sp}} \; \; [kg]$$ And the volume of the battery is: $$V_{bat} = \frac{C_{BOL}}{e_{density} } \; \; [liter]$$ If you followed the steps correctly, you have now performed the sizing of your own battery pack, congratulations! We hope you liked this mini-tutorial! If you want to learn how the batteries and the power subsystem are related to other subsystems in a satellite, or how to design a complete satellite using Valispace and practical examples, also check our Satellite Tutorial by Calum Hervieu and Paolo Guardabasso. Stay tuned for more and feel free to give us feedback at contact-us@valispace.com! Valispace is a single source of truth and collaboration platform for all your engineering data. Click here to get a demo and try it for free.
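To see the whole sizing chain in one place, here is a small Python sketch of the procedure above (our own illustration rather than part of the tutorial). The eclipse power, eclipse duration and mission lifetime are arbitrary placeholder values; the DOD, efficiency, fading rate, specific energy and energy density are the Li-Ion assumptions quoted in the text, and the capacity fade is applied linearly over the mission life as assumed above.

# Battery sizing sketch following the formulas above (illustrative placeholder inputs).

P_req = 500.0      # [W]  average power demand during eclipse (placeholder)
t_e = 0.6          # [h]  eclipse duration per orbit (placeholder)
N_years = 7        # [yr] mission lifetime (placeholder)

# Li-Ion assumptions quoted in the tutorial
DOD = 0.40         # depth of discharge
eta_bat = 0.95     # battery efficiency
F_fading = 0.0092  # capacity fade per year (0.92 %/year)
m_sp = 170.0       # [Wh/kg] specific energy
e_density = 300.0  # [Wh/liter] energy density

C_req = P_req * t_e                       # [Wh] energy drawn per eclipse
C_EOL = C_req / (eta_bat * DOD)           # [Wh] capacity required at end of life
C_BOL = C_EOL / (1 - F_fading * N_years)  # [Wh] capacity required at beginning of life

M_bat = C_BOL / m_sp                      # [kg] battery mass
V_bat = C_BOL / e_density                 # [liter] battery volume

print(f"C_req = {C_req:.0f} Wh, C_EOL = {C_EOL:.0f} Wh, C_BOL = {C_BOL:.0f} Wh")
print(f"battery mass = {M_bat:.1f} kg, volume = {V_bat:.1f} liter")

With these placeholder inputs the chain gives roughly 300 Wh per eclipse, about 790 Wh at EOL and 840 Wh at BOL, i.e. a battery of about 5 kg and 2.8 liters.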
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the build in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. 
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
Liouville's rigidity theorem (1850) states that a map $f:\Omega\subset R^d \to R^d$ that satisfies $Df \in SO(d)$ is an affine map. Reshetnyak (1967) generalized this result and showed that if a sequence $f_n$ satisfies $Df_n \to SO(d)$ in $L^p$, then $f_n$ converges to an affine map. In this talk I will discuss generalizations of these theorems to mappings between manifolds and sketch the main ideas of the proof (using techniques from the calculus of variations and from harmonic analysis). Finally, I will describe how these rigidity questions are related to weak notions of convergence of manifolds and present some open questions. Based on a joint work with Asaf Shachar and Raz Kupferman. Date: Thu, 15/12/2016 - 14:30 to 15:30 Location: Manchester Building (Hall 2), Hebrew University Jerusalem
Definition:Angle/Unit/Degree

Definition

The degree (of arc) is a measurement of plane angles, symbolized by $\degrees$.

$1$ degree $= 60$ minutes $= 60 \times 60 = 3600$ seconds $= \dfrac 1 {360}$ full angle (by definition)

$1 \degrees = \dfrac {\pi} {180} \radians \approx 0.01745 \ 32925 \ 19943 \ 29576 \ 92 \ldots \radians$

The $\LaTeX$ code for \(\degrees\) is \degrees .

Sources
1986: David Wells: Curious and Interesting Numbers: $60$
1986: David Wells: Curious and Interesting Numbers: $3600$
1997: David Wells: Curious and Interesting Numbers (2nd ed.): $60$
1997: David Wells: Curious and Interesting Numbers (2nd ed.): $3600$
When the measures of the angles of a triangle are placed in order, the difference between the middle angle and smallest angle is equal to the difference between the middle angle and largest angle. If one of the angles of the triangle has measure 23, then what is the measure in degrees of the largest angle of the triangle?

\(\text{Let $\alpha+\beta+\gamma=180^\circ$ with $\alpha < \beta < \gamma$ } \) \(\begin{array}{|rcll|} \hline \gamma-\beta &=& \beta-\alpha \\ \mathbf{\gamma} &=& \mathbf{2\beta-\alpha} \quad | \quad \alpha=180^\circ-\beta-\gamma \\ \gamma &=& 2\beta-(180^\circ-\beta-\gamma) \\ \gamma &=& 2\beta-180^\circ+\beta+\gamma \quad | \quad -\gamma \\ 0 &=& 2\beta-180^\circ+\beta \\ 180^\circ &=& 3\beta \\ \beta &=& \dfrac{180^\circ}{3} \\ \mathbf{\beta} &=& \mathbf{60^\circ} \\ \hline \end{array}\) \(\begin{array}{|rcll|} \hline \gamma &=& 2\beta-\alpha \quad | \quad \alpha = 23^\circ,\ \beta=60^\circ \\ \gamma &=& 2\cdot 60^\circ-23^\circ \\ \gamma &=& 120^\circ-23^\circ \\ \mathbf{\gamma} &=& \mathbf{97^\circ} \\ \hline \end{array}\) The measure in degrees of the largest angle of the triangle is \(\mathbf{97^\circ}\)
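As a quick check of this result (added for convenience): the three angles are then \(23^\circ\), \(60^\circ\) and \(97^\circ\), and indeed \(60^\circ-23^\circ = 37^\circ = 97^\circ-60^\circ\) while \(23^\circ+60^\circ+97^\circ = 180^\circ\), so both conditions of the problem are satisfied.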
We give a partial answer to a question attributed to Chris Miller on algebraic values of certain transcendental functions of order less than one. We obtain $C(\log H)^{\eta}$ bounds for the number of algebraic points of height at most $H$ on certain subsets of the graphs of such functions. The constant $C$ and exponent $\eta$ depend on data associated with the functions and can be effectively computed from them. Let $B$ be a rational function of degree at least two that is neither a Lattès map nor conjugate to $z^{\pm n}$ or $\pm T_{n}$. We provide a method for describing the set $C_{B}$ consisting of all rational functions commuting with $B$. Specifically, we define an equivalence relation $\underset{B}{\sim}$ on $C_{B}$ such that the quotient $C_{B}/\underset{B}{\sim}$ possesses the structure of a finite group $G_{B}$, and describe generators of $G_{B}$ in terms of the fundamental group of a special graph associated with $B$. In this article, we establish a new estimate for the Gaussian curvature of open Riemann surfaces in Euclidean three-space with a specified conformal metric regarding the uniqueness of the holomorphic maps of these surfaces. As its applications, we give new proofs on the unicity problems for the Gauss maps of various classes of surfaces, in particular, minimal surfaces in Euclidean three-space, constant mean curvature one surfaces in the hyperbolic three-space, maximal surfaces in the Lorentz–Minkowski three-space, improper affine spheres in the affine three-space and flat surfaces in the hyperbolic three-space. This work studies slice functions over finite-dimensional division algebras. Their zero sets are studied in detail along with their multiplicative inverses, for which some unexpected phenomena are discovered. The results are applied to prove some useful properties of the subclass of slice regular functions, previously known only over quaternions. Firstly, they are applied to derive from the maximum modulus principle a version of the minimum modulus principle, which is in turn applied to prove the open mapping theorem. Secondly, they are applied to prove, in the context of the classification of singularities, the counterpart of the Casorati-Weierstrass theorem. We give some sufficient conditions for the periodicity of entire functions based on a conjecture of C. C. Yang, using the concepts of value sharing, unique polynomial of entire functions and Picard exceptional value. We give an upper estimate for the order of the entire functions in the Nevanlinna parameterization of the solutions of an indeterminate Hamburger moment problem.
Under a regularity condition this estimate becomes explicit and takes the form of a convergence exponent. Proofs are based on transformations of canonical systems and I.S.Kac' formula for the spectral asymptotics of a string. Combining with a lower estimate from previous work, we obtain a class of moment problems for which order can be computed. This generalizes a theorem of Yu.M.Berezanskii about spectral asymptotics of a Jacobi matrix (in the case that order is ⩽ 1/2). We prove several results concerning the relative position of points in the postsingular set P(f) of a meromorphic map f and the boundary of a Baker domain or the successive iterates of a wandering component. For Baker domains we answer a question of Mihaljević-Brandt and Rempe-Gillen. For wandering domains we show that if the iterates Un of such a domain have uniformly bounded diameter, then there exists a sequence of postsingular values pn such that${\rm dist} (p_n, U_n)\to 0$as$n\to \infty $. We also prove that if$U_n \cap P(f)=\emptyset $and the postsingular set of f lies at a positive distance from the Julia set (in ℂ), then the sequence of iterates of any wandering domain must contain arbitrarily large disks. This allows to exclude the existence of wandering domains for some meromorphic maps with infinitely many poles and unbounded set of singular values. We investigate several quantitative properties of entire and meromorphic solutions to some differential-difference equations and generalised delay differential-difference equations. Our results are sharp in a certain sense as illustrated by several examples. Let$\{\mathbf{F}(n)\}_{n\in \mathbb{N}}$and$\{\mathbf{G}(n)\}_{n\in \mathbb{N}}$be linear recurrence sequences. It is a well-known Diophantine problem to determine the finiteness of the set${\mathcal{N}}$of natural numbers such that their ratio$\mathbf{F}(n)/\mathbf{G}(n)$is an integer. In this paper we study an analogue of such a divisibility problem in the complex situation. Namely, we are concerned with the divisibility problem (in the sense of complex entire functions) for two sequences$F(n)=a_{0}+a_{1}f_{1}^{n}+\cdots +a_{l}f_{l}^{n}$and$G(n)=b_{0}+b_{1}g_{1}^{n}+\cdots +b_{m}g_{m}^{n}$, where the$f_{i}$and$g_{j}$are nonconstant entire functions and the$a_{i}$and$b_{j}$are non-zero constants except that$a_{0}$can be zero. We will show that the set${\mathcal{N}}$of natural numbers such that$F(n)/G(n)$is an entire function is finite under the assumption that$f_{1}^{i_{1}}\cdots f_{l}^{i_{l}}g_{1}^{j_{1}}\cdots g_{m}^{j_{m}}$is not constant for any non-trivial index set$(i_{1},\ldots ,i_{l},j_{1},\ldots ,j_{m})\in \mathbb{Z}^{l+m}$. We prove two main results on Denjoy–Carleman classes: (1) a composite function theorem which asserts that a function$f(x)$in a quasianalytic Denjoy–Carleman class${\mathcal{Q}}_{M}$, which is formally composite with a generically submersive mapping$y=\unicode[STIX]{x1D711}(x)$of class${\mathcal{Q}}_{M}$, at a single given point in the source (or in the target) of$\unicode[STIX]{x1D711}$can be written locally as$f=g\circ \unicode[STIX]{x1D711}$, where$g(y)$belongs to a shifted Denjoy–Carleman class${\mathcal{Q}}_{M^{(p)}}$; (2) a statement on a similar loss of regularity for functions definable in the$o$-minimal structure given by expansion of the real field by restricted functions of quasianalytic class${\mathcal{Q}}_{M}$. 
Both results depend on an estimate for the regularity of a${\mathcal{C}}^{\infty }$solution$g$of the equation$f=g\circ \unicode[STIX]{x1D711}$, with$f$and$\unicode[STIX]{x1D711}$as above. The composite function result depends also on a quasianalytic continuation theorem, which shows that the formal assumption at a given point in (1) propagates to a formal composition condition at every point in a neighbourhood. Let$\unicode[STIX]{x1D70C}\in (0,\infty ]$be a real number. In this short note, we extend a recent result of Marques and Ramirez [‘On exceptional sets: the solution of a problem posed by K. Mahler’, Bull. Aust. Math. Soc.94 (2016), 15–19] by proving that any subset of$\overline{\mathbb{Q}}\cap B(0,\unicode[STIX]{x1D70C})$, which is closed under complex conjugation and contains$0$, is the exceptional set of uncountably many analytic transcendental functions with rational coefficients and radius of convergence$\unicode[STIX]{x1D70C}$. This solves the question posed by K. Mahler completely. We consider the uniqueness of an entire function and a linear differential polynomial generated by it. One of our results improves a result of Li and Yang [‘Value sharing of an entire function and its derivatives’, J. Math. Soc. Japan51(4) (1999), 781–799]. This paper concerns the problem of algebraic differential independence of the gamma function and${\mathcal{L}}$-functions in the extended Selberg class. We prove that the two kinds of functions cannot satisfy a class of algebraic differential equations with functional coefficients that are linked to the zeros of the${\mathcal{L}}$-function in a domain$D:=\{z:0<\text{Re}\,z<\unicode[STIX]{x1D70E}_{0}\}$for a positive constant$\unicode[STIX]{x1D70E}_{0}$. In this paper, in terms of the hyperbolic metric, we give a condition under which the image of a hyperbolic domain of an analytic function contains a round annulus centred at the origin. From this, we establish results on the multiply connected wandering domains of a meromorphic function that contain large round annuli centred at the origin. We thereby successfully extend the results of transcendental meromorphic functions with finitely many poles to those with infinitely many poles. In this paper, we prove some value distribution results which lead to normality criteria for a family of meromorphic functions involving the sharing of a holomorphic function by more general differential polynomials generated by members of the family, and improve some recent results. In particular, the main result of this paper leads to a counterexample to the converse of Bloch’s principle. Let be a non-constant elliptic function. We prove that the Hausdorff dimension of the escaping set of f equals 2q/(q+1), where q is the maximal multiplicity of poles of f. We also consider the escaping parameters in the family fβ = βf, i.e. the parameters β for which the orbit of one critical value of fβ escapes to infinity. Under additional assumptions on f we prove that the Hausdorff dimension of the set of escaping parameters ε in the family fβ is greater than or equal to the Hausdorff dimension of the escaping set in the dynamical space. This demonstrates an analogy between the dynamical plane and the parameter plane in the class of transcendental meromorphic functions. We generalize Siegel’s theorem on integral points on affine curves to integral points of bounded degree, giving a complete characterization of affine curves with infinitely many integral points of degree$d$or less over some number field. 
Generalizing Picard's theorem, we prove an analogous result characterizing complex affine curves admitting a nonconstant holomorphic map from a degree $d$ (or less) analytic cover of $\mathbb{C}$.
This week, we covered Linear Regression, Logistic Regression, cross-validation, and gradient descent. If you've taken Andrew Ng's Machine Learning course (video lectures here), then a lot of these concepts will be review. I found myself reading about more regularization techniques (we covered L1 & L2 this week). Regularization is a way to protect your model from overfitting by adding a regularization term to your cost function. This regularization term penalizes some summation of your feature coefficients by a factor of $\lambda$. $\lambda$ is called the shrinkage parameter, because in order to minimize your cost function, you must shrink your feature coefficient values. When shrinking coefficient values, your model is less likely to fit noise, and your model becomes more "lean" and "efficient" in a sense, as you are effectively getting rid of features that don't add much on top of what other feature coefficients are capturing due to correlation. In other words, with an increased $\lambda$, both model variance and multicollinearity is reduced. However, too big of a value for $\lambda$ will drive your model closer to a horizontal line that spits out $y = \beta_0$ (underfitting), and too small of a value for $\lambda$ approximates normal regression without the regularization term, so $\lambda$ should be tuned using cross-validation. L1 (Lasso) & L2 (Ridge) Regression In L2 regression (Ridge regression), the regularization parameter penalizes the sum of the squared coefficient values. I did some googling around to figure out why it's called Ridge, and this is my understanding: In unregularized regression with multicollinearity present, there is a "ridge" (or long valley) in the likelihood function surface. In Ridge regression with multicollinearity minimized, the "ridge" gets minimized and takes more of a "bowl" shape. See the surface plots in these posts. $$Ridge \ cost \ function = \sum\limits_{i=0}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} \beta^2_j$$ However, the squared penalty might seem a bit harsh for the cost function, so let's look at L1 regression. In L1 regression (Lasso regression), the regularization parameter penalizes the sum of the absolute coefficient values. Thus the name: LASSO = Least Absolute Shrinkage and Selection Operator. $$Lasso \ cost \ function = \sum\limits_{i=0}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} |\beta_j|$$ Previously, I had only learned Ridge. I'm guessing this is what I learned first because implementing the derivative of the cost function with a squared regularization term is much more straight-forward than implementing the derivative of the cost function with an absolute value regularization term. We explored Ridge and Lasso regression using the sklearn diabetes dataset in one of our assignments last week. I'll share some code to illustrate the effect of regularization on feature coefficients. First code block is for setting up our environment. 
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import scale
from sklearn.linear_model import Ridge, Lasso
from sklearn.datasets import load_diabetes

# Load diabetes data
diabetes = load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]

Side note: if you're like me and wondering what the data and target values in the diabetes dataset represent, lucky for us that last year, someone complained about the lack of documentation, which you can now find here: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/descr/diabetes.rst

Next, I wrote a function to iterate through a list of shrinkage parameters and plot the corresponding coefficient values. Yes... my code says alpha, which is inconsistent with the $\lambda$ that I've been throwing around in this post. I blame it on sklearn: their models take in a shrinkage parameter with the keyword argument alpha, so I stuck with their nomenclature.

def plot_coef(linmodel, alphas):
    # Store parameters corresponding to each alpha
    for i, a in enumerate(alphas):
        X_data = scale(X)
        fit = linmodel(alpha=a, normalize=True).fit(X_data, y)
        params[i] = fit.coef_
    # Plot: coefficient vs. alpha
    fig = plt.figure(figsize=(14, 8))
    sns.set_palette(sns.color_palette("Paired", len(params.T)))
    for i, param in enumerate(params.T):
        plt.plot(alphas, param, label='x{}'.format(i+1))
    plt.legend(loc='lower right', ncol=5, fontsize=16)
    plt.xlabel('alpha', fontsize=16)
    plt.ylabel('coefficient', fontsize=16)
    plt.title('{} Regression Coefficients'.format(linmodel.__name__), fontsize=24)
    plt.show()

Now to generate some plots!

# L1 (Lasso) Regression
alphas = np.linspace(0.1, 3)
params = np.zeros((len(alphas), X.shape[1]))
plot_coef(Lasso, alphas)

# L2 (Ridge) Regression
alphas = np.logspace(-2, 2)
params = np.zeros((len(alphas), X.shape[1]))
plot_coef(Ridge, alphas)

These plots show that Lasso can drive coefficients to zero, whereas Ridge just makes them very small. So Lasso does double duty! It does variable selection automatically on top of coefficient shrinking. The Lasso plot is also much more interpretable. Another side note: When I first generated these plots using the default matplotlib color palette, the plot colors started repeating after plotting 8 lines. In my quest to find a color palette that wouldn't repeat colors for my 10 feature coefficients, I found that seaborn has a seaborn.xkcd_palette from xkcd's color-naming survey! The xkcd blog post describing the survey results is pretty entertaining. The 954 most common color identifications from the survey are listed here (to be used with seaborn.xkcd_palette). So of course, I played around with it a bit... Anyways.

In our assignment with the diabetes dataset, we compared Ridge, Lasso, and LinearRegression model errors (Lasso had the lowest error), which was how I was planning to pick between regularization techniques in the future. I did come across a thread: Cross Validated: When should I use lasso vs ridge?

"Generally, when you have many small/medium sized effects you should go with ridge. If you have only a few variables with a medium/large effect, go with lasso." (Hastie, Tibshirani, Friedman)

Overall, it seems like there's no clear answer that one regularization is better than the other, as it is problem dependent. I believe Ridge is easier to implement due to its nice differentiability, but it can't zero out feature coefficients like Lasso.
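Earlier in the post I mentioned that $\lambda$ (alpha in sklearn) should be tuned with cross-validation but didn't show that step, so here is a minimal sketch of one way to do it with scikit-learn's LassoCV and RidgeCV estimators (my addition; the candidate alpha grids are arbitrary, and X and y are the truncated diabetes arrays from the snippet above).

import numpy as np
from sklearn.preprocessing import scale
from sklearn.linear_model import LassoCV, RidgeCV

X_scaled = scale(X)  # same scaling as inside plot_coef

# Lasso: pick the alpha with the lowest 5-fold cross-validated error
lasso_cv = LassoCV(alphas=np.linspace(0.01, 3, 100), cv=5).fit(X_scaled, y)

# Ridge: RidgeCV defaults to an efficient leave-one-out scheme
ridge_cv = RidgeCV(alphas=np.logspace(-2, 2, 100)).fit(X_scaled, y)

print("best Lasso alpha:", lasso_cv.alpha_)
print("best Ridge alpha:", ridge_cv.alpha_)

The fitted estimators can then be used directly for prediction, so the comparison of Ridge, Lasso, and LinearRegression errors mentioned above can be repeated at each model's best alpha rather than at a hand-picked one.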
Journal of Operator Theory
Volume 82, Issue 2, Fall 2019, pp. 369-382.

Commutators close to the identity

Authors: Terence Tao
Author institution: UCLA Department of Math., Los Angeles, CA 90095-1555, U.S.A.

Summary: Let $D,X \in B(H)$ be bounded operators on an infinite dimensional Hilbert space $H$. If the commutator $[D,X] = DX-XD$ lies within $\varepsilon$ in operator norm of the identity operator $1_{B(H)}$, then it was observed by Popa that one has the lower bound $\| D \| \|X\| \geqslant \frac{1}{2} \log \frac{1}{\varepsilon}$ on the product of the operator norms of $D,X$; this is a quantitative version of the Wintner--Wielandt theorem that $1_{B(H)}$ cannot be expressed as the commutator of bounded operators. On the other hand, it follows easily from the work of Brown and Pearcy that one can construct examples in which $\|D\| \|X\| = O(\varepsilon^{-2})$. In this note, we improve the Brown--Pearcy construction to obtain examples of $D,X$ with $\| [D,X] - 1_{B(H)} \| \leqslant \varepsilon$ and $\| D\| \|X\| = O( \log^{5} \frac{1}{\varepsilon} )$.

DOI: http://dx.doi.org/10.7900/jot.2018may28.2206
It didn't seem to be a particularly difficult problem, with the conventional solution visible not far away

For some problems that don't look particularly difficult, the conventional steps to the solution may not seem to take long, but they invariably involve a few mundane and wasteful extra steps. The elegant, lit-up way would have been just a few quick steps from the goal, waiting to be believed and discovered.

In this session we will showcase a seemingly innocuous problem that doesn't seem difficult. With such problems we often take the conventional path, so the first solution presented will be the deductive, conventional one. This is how we usually think and act. Though this conventional solution does not take many steps, it involves squaring and simplification, an additional burden. We will then present two more solutions of gradually decreasing efficiency. Contrasting these three, the fourth solution will show how quickly and effortlessly the answer can be reached using the rich potential embedded in the nature of the problem itself. It is not only about saving time and cost; it is also about elegance and effortlessness.

Before going ahead, you should refer to our concept tutorials on Trigonometry.

Chosen Problem

If $cot \theta + cosec \theta =3$, and $\theta$ is an acute angle, the value of $cos \theta$ is,

a: $1$
b: $\displaystyle\frac{1}{2}$
c: $\displaystyle\frac{4}{5}$
d: $\displaystyle\frac{3}{4}$

First solution: Conventional approach with no early function conversion, involving a quadratic expression with one-step conversion to linear

The problem looks simple at first glance, and that creates a comfort zone in the mind. Knowing that $cosec^2 \theta - cot^2 \theta=1$, a feasible path to the solution is to move $cot \theta$ to the RHS and square the equation, so that only $cot \theta$ is left in the expression. From the value of $cot \theta$ we get $tan \theta$ and then $cos \theta$, using the relation $1+tan^2 \theta=sec^2 \theta$. The solution is visible not too far away.

The given expression is,

$cot \theta + cosec \theta =3$,

Or, $cosec \theta = 3 - cot \theta$.

Squaring,

$cosec^2 \theta=9-6cot \theta + cot^2 \theta$,

Or, $9-6cot \theta=cosec^2 \theta - cot^2 \theta=1$,

Or, $6cot \theta = 8$,

Or, $tan \theta = \displaystyle\frac{3}{4}$.

Squaring and adding 1,

$tan^2 \theta + 1=sec^2 \theta =\displaystyle\frac{25}{16}$,

Or, $sec \theta = \displaystyle\frac{5}{4}$, as $\theta$ is acute, $sec \theta$ is not negative.

So, $cos \theta=\displaystyle\frac{4}{5}$.

Answer: Option c: $\displaystyle\frac{4}{5}$.

We will now present a second possible solution.

Second solution: Early conversion of trigonometric functions, using a quadratic expression with two terms and a single function variable

The decision taken in this solution is to convert both input functions into $sin$ and $cos$ terms. The reason behind this decision is that the target is a $cos$ function.

Given expression,

$cot \theta + cosec \theta =3$,

Or, $\displaystyle\frac{cos \theta}{sin \theta}+\displaystyle\frac{1}{sin \theta}=3$,

Or, $cos \theta=3sin \theta -1$,

Squaring,

$cos^2 \theta=9sin^2\theta - 6sin\theta +1$,

Or, using $cos^2 \theta=1-sin^2 \theta$,

$10sin^2\theta=6sin\theta$, reducing the number of variables to one and the number of terms to two,

Or, $sin \theta = \displaystyle\frac{3}{5}$, as with $\theta$ acute, $sin \theta \neq 0$,

Or, $cos \theta = \sqrt{1-\displaystyle\frac{3^2}{5^2}}=\displaystyle\frac{4}{5}$.

Answer: Option c: $\displaystyle\frac{4}{5}$.
You might think at this point that no more variation in solution approach is possible. The following solution, yet another variant, is a strong example of the Many ways technique.

Third solution: Early trigonometric function conversion, involving a quadratic expression with three terms

In the previous solution you will find that we could adhere to the powerful principle of minimum number of terms and variables during the simplification process: we reduced the quadratic expression to two terms in a single variable. This allowed us to avoid the additional burden of solving a full quadratic equation. The principle says,

If you can reduce the number of terms as well as the number of variables in the expression involved in the simplification to the minimum, the faster you will reach the solution.

The initial approach in this third solution is the same as in the previous solution. The $cot$ and $cosec$ functions are converted to $cos$ and $sin$ functions and the resulting expression squared, but with an important difference.

Given expression,

$cot \theta + cosec \theta =3$,

Or, $\displaystyle\frac{cos \theta}{sin \theta}+\displaystyle\frac{1}{sin \theta}=3$,

Or, $cos \theta+1=3sin \theta$.

Squaring,

$cos^2 \theta+2cos \theta+1=9sin^2\theta$,

Or, $cos^2 \theta+2cos \theta+1=9(1-cos^2\theta)$,

Or, $10cos^2 \theta+2cos \theta -8=0$,

Or, $5cos^2 \theta +cos \theta -4=0$,

Or, $(5cos \theta - 4)(cos \theta +1)=0$,

As $\theta$ is acute, $cos \theta \neq -1$. So, $cos \theta =\displaystyle\frac{4}{5}$.

Answer: Option c: $\displaystyle\frac{4}{5}$.

Compare the shortcomings of the three solutions and identify the reasons behind them. Finally, we present the elegant, most efficient problem solver's solution.

Fourth solution: Elegant solution: Problem solver's approach using the principle of friendly trigonometric function pairs and linear expressions

We are already aware of the principle of friendly trigonometric function pairs. The results of this rich principle are simple and based on fundamental concepts, but they have far-reaching potential in simplifying trigonometric expressions quickly. Two of the results of this principle are,

$sec \theta + tan \theta =\displaystyle\frac{1}{sec \theta - tan \theta}$, and

$cosec \theta + cot \theta = \displaystyle\frac{1}{cosec \theta - cot \theta}$,

both following directly from $sec^2 \theta - tan^2 \theta=1$ and $cosec^2 \theta - cot^2 \theta=1$.

In this problem, the given expression directly conforms to this principle, and the rest of the steps are trivial algebraic simplification in linear expressions.

Given expression,

$cot \theta + cosec \theta =3$,

Or, $\displaystyle\frac{1}{cosec \theta - cot \theta}=3$, only the LHS changes,

Or, $cosec \theta - cot \theta =\displaystyle\frac{1}{3}$.

Adding the two equations,

$2cosec \theta=3+\displaystyle\frac{1}{3}=\displaystyle\frac{10}{3}$, so $cot \theta$ is eliminated and we get the most desired single-variable value,

Or, $cosec \theta = \displaystyle\frac{5}{3}$,

Or, $sin \theta =\displaystyle\frac{3}{5}$,

Or, $cos \theta =\sqrt{1-\displaystyle\frac{3^2}{5^2}}=\displaystyle\frac{4}{5}$, as $\theta$ is acute, $cos \theta$ can't be negative.

Answer: Option c: $\displaystyle\frac{4}{5}$.

Key concepts used: key pattern identification -- the rich concept of friendly trigonometric function pairs -- basic algebraic concepts -- solving simple linear equations -- the Many ways technique.

In this elegant solution, we have reduced the number of variables (in this case distinct trigonometric functions) to just one using simple algebraic manipulations, with no squaring and no dealing with quadratic expressions.
This refers to yet another basic principle of algebraic simplification, the minimum order simplest solution principle. Primarily, the more you increase the order (or power) of the terms in the deductive process, the more you deviate from the shortest-path solution and the harder the problem becomes to solve.

This fundamental algebraic principle of minimum order simplest solution states,

In a solution process, if you keep the order of the variables to the minimum, generally linear of unit power, you will have the simplest solution.

Adhering to this principle, we always try to keep the expressions involved linear. You will notice that in the elegant solution, too, the expressions were all linear.

Important

The elegant solution showcased above exemplifies that we should always start solving a problem with a strategy suitable and specific to the problem type and problem solving objectives. The first objective is of course to solve the problem, but more importantly, to solve it along the most efficient, shortest path. Even if the conventional solution seems within reach, one must analyze and search for a more efficient, concept-based solution before embarking on the conventional path. Any random approach will invariably take you along a longer and generally confusing path to the solution.

The tendency and capacity to use basic and rich concepts for elegant and efficient solutions are not acquired in a day; they grow over a period of time and become reflexive through constant search for, and execution of, elegant solution concept structures.

Specifically, the friendly trigonometric function pair concepts have far-reaching potential in simplifying many trigonometric expressions quickly. If such a friendly pair is explicitly present in an expression in a problem, it will invariably result in a quick solution.

To achieve elegant solutions in Trigonometry consistently, the role of basic and rich algebraic principles is critically important.

While solving the problem in four different ways, we have applied the Many ways technique, which tested our skill in solving a problem in multiple ways as well as gave us the opportunity to compare the multiple solutions with each other.

Even simple things need to be made simpler. Only then can more difficult things be made simple with ease.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra:

Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry
How to solve not so difficult SSC CGL level problem in a few light steps, Trigonometry 7

A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques apply to all efficient Trigonometry problem solving.
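As a final cross-check of the answer obtained in all four solutions (this is my own addition, not part of the original session), a short sympy snippet can confirm both symbolically and numerically that the acute angle satisfying $cot \theta + cosec \theta = 3$ has $cos \theta = \displaystyle\frac{4}{5}$:

```python
import sympy as sp

# Symbolic check: if cos(theta) = 4/5 with theta acute,
# then cot(theta) + csc(theta) should simplify to 3.
theta = sp.acos(sp.Rational(4, 5))          # acute angle with cos = 4/5
print(sp.simplify(sp.cot(theta) + sp.csc(theta)))   # should print 3

# Numerical cross-check: solve the original equation near an acute guess.
t = sp.symbols('t')
root = sp.nsolve(sp.cot(t) + sp.csc(t) - 3, t, 0.6)
print(sp.cos(root))                          # approximately 0.8 = 4/5
```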
Answer

$9 \div 1 \times 2 \times 2 \div (11-2) = 4$

Work Step by Step

Follow BEDMAS. In this case, we have Brackets, Division and Multiplication. Division and multiplication have equal priority, so once the brackets are done, they are carried out from left to right.

1. Complete the brackets: $(11 - 2) = 9$

New equation $ = 9 \div 1 \times 2 \times 2 \div 9$

2. Work from left to right:

a) $9 \div 1 = 9$

b) $9 \times 2 = 18$

c) $18 \times 2 = 36$

d) $36 \div 9 = 4$

Therefore the answer is $4$. Equivalently, each division can be rewritten as multiplication by a reciprocal: $9 \times 2 \times \frac{2}{9} = \frac{36}{9} = 4$.
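As a quick sanity check (my own addition, not part of the original answer), most programming languages apply the same equal-priority, left-to-right rule for division and multiplication, so evaluating the expression directly reproduces the result. For example, in Python:

```python
# Brackets first, then division/multiplication left to right
result = 9 / 1 * 2 * 2 / (11 - 2)
print(result)  # 4.0
```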
1. Homework Statement

I found this question on a website. Prove that

[tex]\int_{0}^{\pi}\frac{1-\cos(nx)}{1-\cos(x)}\, dx=n\pi, \ \ n \in \mathbb{N}[/tex]

2. The attempt at a solution

Here's my attempt using induction. Let [itex]P(n)[/itex] be the statement given by

[tex]\int_{0}^{\pi}\frac{1-\cos(nx)}{1-\cos(x)}\, dx=n\pi, \ \ n \in \mathbb{N}[/tex]

P(1):

[tex]\int_{0}^{\pi}\frac{1-\cos(x)}{1-\cos(x)}\, dx=\int_{0}^{\pi}dx=\pi=(1)\pi[/tex]

so P(1) holds true. Now let [itex]P(n)[/itex] be true; we need to show that [itex]P(n+1)[/itex] is true:

[tex]\int_{0}^{\pi} \frac{1-\cos((n+1)x)}{1-\cos(x)}\,dx=\int_{0}^{\pi}\frac{1-[\cos(nx)\cos(x)-\sin(nx)\sin(x)]}{1-\cos(x)}\, dx[/tex]

I don't know how to proceed from here. Furthermore, I would like to know if I could prove the statement by evaluating the integral directly.
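One standard way to proceed from this point (my own sketch, not from the original thread): instead of expanding [itex]\cos((n+1)x)[/itex] alone, pair it with [itex]\cos((n-1)x)[/itex] using the product-to-sum identity, which turns the problem into a two-term recurrence that closes a strong induction on the two preceding cases.

[tex]\cos((n+1)x)+\cos((n-1)x)=2\cos(nx)\cos(x)[/tex]

so that

[tex][1-\cos((n+1)x)]+[1-\cos((n-1)x)]=2[1-\cos(nx)]+2\cos(nx)[1-\cos(x)].[/tex]

Writing [itex]I_n[/itex] for the integral, dividing by [itex]1-\cos(x)[/itex] and integrating over [itex][0,\pi][/itex], where [itex]\int_0^\pi\cos(nx)\,dx=0[/itex] for [itex]n\ge 1[/itex], gives

[tex]I_{n+1}+I_{n-1}=2I_n,\qquad\text{i.e.}\qquad I_{n+1}=2I_n-I_{n-1}.[/tex]

Since [itex]I_0=0[/itex] and [itex]I_1=\pi[/itex], induction then yields [itex]I_n=n\pi[/itex].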
Completely revised: If $a\in A$, I’ll write $R(a)$ for $\{b\in B:\langle a,b\rangle\in R\}$, and if $A_0\subseteq A$, I’ll write $R[A_0]$ for $$\left\{b\in B:\exists a\in A_0\Big(\langle a,b\rangle\in R\Big)\right\}=\bigcup_{a\in A_0}R(a)\;.$$ Your conditions are that $R(a)$ is finite for each $a\in A$ and that $|R[A_0]|\le n|A_0|$ for each finite $A_0\subseteq A$. You want to conclude that there are pairwise disjoint sets $B_1,\ldots,B_n\subseteq B$ and bijections $f_k:B_k\to A$ for $k=1,\dots,n$.

Note that the second condition implies the first; in fact, it implies that $|R(a)|\le n$ for each $a\in A$. However, neither condition matters, since the desired conclusion does not depend in any way on $R$; are you sure that you stated the problem correctly?

Since $B$ is countably infinite, there is a bijection $h:\Bbb N\times\{1,\ldots,n\}\to B$. For $k\in\{1,\dots,n\}$ let $$B_k=h\big[\Bbb N\times\{k\}\big]=\big\{h(\langle m,k\rangle):m\in\Bbb N\big\}\;;$$ the sets $B_1,\ldots,B_n$ are pairwise disjoint (and their union is $B$, though that wasn’t required). Since $A$ is also countably infinite, there is a bijection $g:\Bbb N\to A$, and for $k\in\{1,\ldots,n\}$ we can define $f_k:B_k\to A$ as follows: if $b\in B_k$, there is a unique $m\in\Bbb N$ such that $b=h(\langle m,k\rangle)$, and we define $f_k(b)=g(m)$. I leave it to you to check that $f_k$ is then a bijection.

(To me countable means finite or countably infinite, but I’m assuming that you were using it to mean countably infinite. If not, the result can be false when $B$ is finite. For example, take $A=\Bbb N$ and $B=\{0\}$, and let $R=A\times B$; your hypotheses are satisfied with $n=1$, but there is no function from $B$ onto $A$.)
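To make the construction concrete (my own illustration, not part of the original answer), here is a small finite-window sketch in Python, taking $A=B=\Bbb N$, $g$ the identity, and the pairing $h(\langle m,k\rangle)=nm+(k-1)$, so that the sets $B_k$ are simply the residue classes mod $n$:

```python
# Finite-window illustration: A = B = the natural numbers, g = identity,
# and h(m, k) = n*m + (k - 1), a bijection from N x {1,...,n} onto N.
n = 3

def h(m, k):
    return n * m + (k - 1)

def f(k, b):
    # f_k(b) = g(m) where b = h(m, k); since g is the identity, recover m.
    assert b % n == (k - 1), "b is not in B_k"
    return (b - (k - 1)) // n

for k in range(1, n + 1):
    B_k = [h(m, k) for m in range(5)]          # first few elements of B_k
    print(k, B_k, [f(k, b) for b in B_k])      # the B_k are disjoint; f_k maps each back onto 0,1,2,...
```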
Somehow, in all the time I’ve posted here, I’ve not yet described the structure of my favorite graded $S_n$-modules. I mentioned them briefly at the end of the Springer Correspondence series, and talked in depth about a particular one of them – the ring of coinvariants – in this post, but it’s about time for… The Garsia-Procesi modules!

Also known as the cohomology rings of the Springer fibers in type $A$, or as the coordinate ring of the intersection of the closure of a nilpotent conjugacy class of $n\times n$ matrices with a torus, with a link between these two interpretations given in a paper of DeConcini and Procesi. But the work of Tanisaki, and Garsia and Procesi, allows us to work with these modules in an entirely elementary way.

Using Tanisaki’s approach, we can define $$R_{\mu}=\mathbb{C}[x_1,\ldots,x_n]/I_{\mu},$$ where $I_{\mu}$ is the ideal generated by the partial elementary symmetric functions defined as follows. Recall that the elementary symmetric function $e_d(z_1,\ldots,z_k)$ is the sum of all square-free monomials of degree $d$ in the set of variables $z_i$. Let $S\subset\{x_1,\ldots,x_n\}$ with $|S|=k$. Then the elementary symmetric function $e_r(S)$ in this subset of the variables is called a partial elementary symmetric function, and we have $$I_{\mu}=(e_r(S) : k-d_k(\mu) \lt r \le k, |S|=k).$$ Here, $d_k(\mu)=\mu'_n+\mu'_{n-1}+\cdots+ \mu'_{n-k+1}$ is the number of boxes in the last $k$ columns of $\mu$, where we pad the transpose partition $\mu'$ with $0$’s so that it has $n$ parts.

This ring $R_\mu$ inherits the natural action of $S_n$ on $\mathbb{C}[x_1,\ldots,x_n]$ by permuting the variables, since $I_\mu$ is fixed under this action. Since $I_\mu$ is also a homogeneous ideal, $R_\mu$ is a graded $S_n$-module, graded by degree.

To illustrate the construction, suppose $n=4$ and $\mu=(3,1)$. Then to compute $I_{\mu}$, first consider subsets $S$ of $\{x_1,\ldots,x_4\}$ of size $k=1$. We have $d_1(\mu)=0$ since the fourth column of the Young diagram of $\mu$ is empty, and so in order for $e_r(S)$ to be in $I_\mu$ we must have $1-0\lt r \le 1$, which is impossible. So there are no partial elementary symmetric functions in $1$ variable in $I_\mu$.

For subsets $S$ of size $k=2$, we have $d_2(\mu)=1$ since there is one box among the last two columns (columns $3$ and $4$) of $\mu$, and we must have $2-1\lt r\le 2$. So $r$ can only be $2$, and we have the partial elementary symmetric functions $e_2(S)$ for all subsets $S$ of size $2$. This gives us the six polynomials $$x_1x_2,\hspace{0.3cm} x_1x_3,\hspace{0.3cm} x_1x_4,\hspace{0.3cm} x_2x_3,\hspace{0.3cm} x_2x_4,\hspace{0.3cm} x_3x_4.$$

For subsets $S$ of size $k=3$, we have $d_3(\mu)=2$, and so $3-2 \lt r\le 3$. We therefore have $e_2(S)$ and $e_3(S)$ for each such subset $S$ in $I_\mu$, and this gives us the eight additional polynomials $$x_1x_2+x_1x_3+x_2x_3, \hspace{0.5cm}x_1x_2+x_1x_4+x_2x_4,$$ $$x_1x_3+x_1x_4+x_3x_4,\hspace{0.5cm} x_2x_3+x_2x_4+x_3x_4,$$ $$x_1x_2x_3, \hspace{0.4cm} x_1x_2x_4, \hspace{0.4cm} x_1x_3x_4,\hspace{0.4cm} x_2x_3x_4.$$

Finally, for $S=\{x_1,x_2,x_3,x_4\}$, we have $d_4(\mu)=4$ and so $4-4\lt r\le 4$. Thus all of the full elementary symmetric functions $e_1,\ldots,e_4$ in the four variables are also relations in $I_{\mu}$.
All in all we have $$\begin{align*} I_{(3,1)}= &(e_2(x_1,x_2), e_2(x_1,x_3),\ldots, e_2(x_3,x_4), \\ & e_2(x_1,x_2,x_3), \ldots, e_2(x_2,x_3,x_4), \\ & e_3(x_1,x_2,x_3), \ldots, e_3(x_2,x_3,x_4), \\ & e_1(x_1,\ldots,x_4), e_2(x_1,\ldots,x_4), e_3(x_1,\ldots,x_4), e_4(x_1,\ldots,x_4)) \end{align*}$$

As two more examples, it’s clear that $R_{(1^n)}=\mathbb{C}[x_1,\ldots,x_n]/(e_1,\ldots,e_n)$ is the ring of coinvariants under the $S_n$ action, and $R_{(n)}=\mathbb{C}$ is the trivial representation. So $R_\mu$ is a generalization of the coinvariant ring, and in fact the graded Frobenius characteristic of $R_\mu$ is the Hall-Littlewood polynomial $\widetilde{H}_\mu(x;q)$.

Where do these relations come from?

The rings $R_\mu$ were originally defined as follows. Let $A$ be a nilpotent $n\times n$ matrix over $\mathbb{C}$. Then $A$ has all $0$ eigenvalues, and so it is conjugate to a matrix in Jordan normal form whose Jordan blocks have all $0$’s on the diagonal. The sizes of the Jordan blocks, written in nonincreasing order, form a partition $\mu'$, and this partition uniquely determines the conjugacy class of $A$. In other words: There is exactly one nilpotent conjugacy class $C_{\mu'}$ in the space of $n\times n$ matrices for each partition $\mu'$ of $n$.

The closures of these conjugacy classes $\overline{C_{\mu'}}$ form closed matrix varieties, and their coordinate rings were studied here. However, they are easier to get a handle on after intersecting with the set $T$ of diagonal matrices, leading to an interesting natural question: what is the subvariety of diagonal matrices in the closure of the nilpotent conjugacy class $C_{\mu'}$? Defining $R_\mu=\mathcal{O}(\overline{C_{\mu'}}\cap T)$, we obtain the same modules as above.

Tanisaki found the presentation for $R_\mu$ given above using roughly the following argument. Consider the matrix $A-tI$, where $A\in C_{\mu'}$. Then one can show (see, for instance, the discussion of invariant factors and elementary divisors in the article on Smith normal form on Wikipedia) that the largest power of $t$ dividing all of the $k\times k$ minors of $A-tI$, say $t^{d_k}$, is fixed under conjugation, so we can assume $A$ is in Jordan normal form. Then it’s not hard to see, by analyzing the Jordan blocks, that this power of $t$ is $t^{d_k(\mu)}$, where $\mu$ is the transpose partition of $\mu'$ and $d_k(\mu)$ is defined as above – the sums of the ending columns of $\mu$.

It follows that any element of the closure of $C_{\mu'}$ must also have this property, and so if $X=\mathrm{diag}(x_1,\ldots,x_n)\in \overline{C_{\mu'}}\cap T$ then we have $$t^{d_k(\mu)} | (x_{i_1}-t)(x_{i_2}-t)\cdots (x_{i_k}-t)$$ for any subset $S=\{x_{i_1},\ldots,x_{i_k}\}$ of $\{x_1,\ldots,x_n\}$. Expanding the right hand side as a polynomial in $t$ using Vieta’s formulas, we see that the elementary symmetric functions $e_r(S)$ vanish on $X$ as soon as $r \gt k-d_k(\mu)$, which are exactly the relations that describe $I_\mu$ above.

It takes somewhat more work to prove that these relations generate the entire ideal, but this can be shown by showing that $R_\mu$ has the right dimension, namely the multinomial coefficient $\binom{n}{\mu}$. And for that, we’ll discuss on page 2 the monomial basis of Garsia and Procesi.
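To make the Tanisaki construction concrete (this is my own sketch, not part of the original post), here is a short sympy snippet that generates the partial elementary symmetric generators $e_r(S)$ of $I_\mu$ exactly as described above; for $\mu=(3,1)$ it recovers the $6+8+4=18$ generators listed in the example.

```python
from itertools import combinations
import sympy as sp

def d(k, mu, n):
    """Number of boxes in the last k columns of mu (transpose padded to n parts)."""
    mu_t = [sum(1 for part in mu if part > j) for j in range(n)]  # transpose, padded with 0's
    return sum(mu_t[n - k:])

def e(r, S):
    """Elementary symmetric polynomial of degree r in the variables S."""
    return sp.Add(*[sp.Mul(*c) for c in combinations(S, r)])

def tanisaki_generators(mu, n):
    x = sp.symbols('x1:%d' % (n + 1))
    gens = []
    for k in range(1, n + 1):
        for S in combinations(x, k):
            # include e_r(S) whenever k - d_k(mu) < r <= k
            for r in range(k - d(k, mu, n) + 1, k + 1):
                gens.append(e(r, S))
    return gens

gens = tanisaki_generators((3, 1), 4)
print(len(gens))   # 18: six e_2's in 2 variables, eight partial e_2/e_3's in 3 variables, four full e_r's
```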