Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that, i'm bookmarking all of these under 3500. is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
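(The formula above is easy to sanity-check numerically; here is a small brute-force Python sketch, counting ordered representations with signs, purely as an illustration.)

from itertools import product

def r4(n):
    # number of ways to write n as an ordered sum of four squares
    # (signs and order both count)
    m = int(n ** 0.5)
    return sum(1 for a, b, c, d in product(range(-m, m + 1), repeat=4)
               if a*a + b*b + c*c + d*d == n)

def jacobi(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

for n in range(1, 13):
    assert r4(n) == jacobi(n), n
print("formula verified for n = 1..12")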
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by $\begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}$ and $\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$, then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, where $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and at $-1$ has no constant coefficient is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
|
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, which might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame dragging? (one possible physical scenario where this can occur may be when two massive rotating objects with opposite angular velocities are on course to merge)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor.
So, what I meant by a "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we would see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then with we put a spherical object in this region the metric will become Schwarzschild-like.
if**
Pardon, I just spend some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h Bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school, regardless of the code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university, which means remotely running another environment, I found an older version of matlab). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
1 - Eugenio Calabi shows that every closed nonsingular 1-form on a closed manifold is intrinsically harmonic ("An intrinsic characterization of harmonic 1-forms", 1969). This is easily proved by first observing that such forms are transitive. The dual case is an open problem, but I believe it is true: a closed nonsingular $(n-1)$-form that is non-null in cohomology is intrinsically harmonic iff the volume-preserving flow induced by this form admits a cross section (a closed codimension-one submanifold cutting every orbit of the flow), or equivalently iff it admits a complementary foliation (a foliation transverse to the flow, at least $C^2$). It is possible to prove this fact in the spirit of Calabi's work. This problem has consequences for the flat characterization of circle bundles.
2 - In the case of closed $p$-forms of rank $p$, the problem has a translation into foliation theory. It is possible to show that if a closed $p$-form $\omega$ of rank $p$ is transitive, then there exists a complementary form $\eta$, that is, a closed form $\eta$ such that $\omega\wedge\eta$ is a volume form. This is proved using the theory of foliation cycles (see Sullivan's paper "Cycles for the dynamical study of foliated manifolds and complex manifolds"). However, in a fiber bundle $\xi=(\pi,F,E,M)$ with simply connected base and compact total space, if $\Omega_M$ is any volume form on $M$, the form $\pi^*\Omega_M$ is intrinsically harmonic iff $\xi$ is trivial. We can move away from trivial cases by showing that $[F]\neq 0\in H_{\dim F}(E;\mathbb{R})$, and there exist examples of such bundles with a section. This gives the examples cited by Dan Fox. The problem is the dimension of the kernel of $\eta$. In the cases $p=1$ or $p=n-1$ (without singularities), under the transitivity condition, the dimension of the kernel of $\eta$ is $n-1$ (case $p=1$) or $1$ (case $p=n-1$), and Calabi's argument applies.
3 - It is also an open problem to show that harmonic forms are transitive, as observed by Katz ("Harmonic forms and near-minimal singular foliations"). We can show that if $\omega$ is a harmonic $p$-form of rank $p$, then we have on $M$ two complementary $SL(*)$-foliations induced by $\ker\omega$ and $\ker *\omega$.
4 - I have studied the problem of decomposable forms. By Tischler's argument (and other considerations; see "On fibering certain foliated manifolds over $\mathbb{S}^1$") it is sufficient to consider bundles $\xi=(F,\pi,E,\mathbb{T}^{p})$. If this bundle admits a transverse foliation with holonomy group contained in $SL(*)$, then the form $\pi^*(\Omega_{\mathbb{T}^{p}})$ is intrinsically harmonic. The idea of studying particular examples is to learn whether we can rule out the hypothesis of transitivity. In any bundle $\xi=(\pi,F,E,M)$ with compact total space, $[F]\neq 0$ and $\pi_1(M)$ finite, the form $\pi^*(\Omega)$ is intrinsically harmonic.
Ps. The above remarks are part of the development of my doctoral project and are under analysis.
|
EXAMPLE: A 7-segment display shows any number from 0 to 9 at random (equal probabilities).
Let $X$ be the indicator random variable of whether the blue segment is on. Similarly, $Y$ is the indicator for the red segment. Find the conditional distribution of $Y$ given $X.$ SOLUTION: Here $X,Y$ both take values in $\{0,1\}.$ We need to find $P(Y=y | X=x)$ for $x,y\in\{0,1\}.$ Now $P(Y=1|X=1) = P(X=1,Y=1)/P(X=1).$ Both the blue and the red segments are on in only the numbers 3,4,5,6,8,9. So $P(X=1,Y=1) = \frac{6}{10}.$ The blue segment is on in the numbers 2,3,4,5,6,8,9. So $P(X=1) = \frac{7}{10}.$ Hence $P(Y=1|X=1) = P(X=1,Y=1)/P(X=1) = \frac 67.$ You should now be able to work out the other three conditional probabilities similarly. We can define the conditional CDF or conditional PMF in the obvious way. It is important to understand that the conditional expectation/variance is a random variable, which is a function of the conditioning random variable.
Remember the theorem of total probability: $$P(A) = P(B) P(A|B) + P(B^c)P(A|B^c),$$ where we combined the two conditional probabilities of $A$ to arrive at the (unconditional) probability of $A$? Well, we can do similar things with conditional expectation/variance also.
Proof: Let $X$ take values $x_1,x_2,...$ and $Y$ take values $y_1,y_2,...$. Let the joint PMF of $(X,Y)$ be $$P(X=x_i~\&~Y=y_j) = p_{ij}.$$ Then $P(Y=y_j | X=x_i) = \frac{p_{ij}}{p_{i\bullet}}.$
So $E(Y|X=x_i) = \sum_j y_j \frac{p_{ij}}{p_{i\bullet}}.$ The expectation of this is $$\sum_i E(Y|X=x_i) p_{i\bullet} = \sum_i \sum_j y_j\frac{p_{ij}}{p_{i\bullet}}p_{i\bullet} = \sum_i \sum_j y_j p_{ij} = \sum_j y_j \sum_i p_{ij} = \sum_j y_j p_{\bullet j} = E(Y),$$ as required. [QED] Many expectation problems can be handled step-by-step using this result. Here are some examples.
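(Before the examples, here is a quick numerical sanity check of $E(E(Y|X)) = E(Y)$ with an arbitrary made-up joint PMF; the Python snippet below is only an illustration, not part of the original notes.)

import numpy as np

rng = np.random.default_rng(0)
p = rng.random((3, 4))                        # p[i, j] = P(X = x_i, Y = y_j)
p /= p.sum()                                  # normalize to a joint PMF
y_vals = np.array([10., 20., 30., 40.])

p_x = p.sum(axis=1)                           # marginal PMF of X
E_Y_given_X = (p * y_vals).sum(axis=1) / p_x  # E(Y | X = x_i)

lhs = (E_Y_given_X * p_x).sum()               # E( E(Y | X) )
rhs = (p.sum(axis=0) * y_vals).sum()          # E(Y)
print(np.isclose(lhs, rhs))                   # True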
EXAMPLE: A casino has two gambling games:
Roll a fair die, and win Rs. $D$ if $D$ is the outcome.
Roll two fair dice, and win Rs 5 if both show the same number, but lose Rs 5 otherwise.
You throw a coin with $P(Head)=\frac 13$ and decide to play game 1 if $Head,$ and game 2 if $Tail.$ What is your expected gain? SOLUTION: Let $X$ be your gain (in Rs), and let $Y$ be the outcome of the toss. Then $E(X|Y=Head) = 3.5$ and $E(X|Y=Tail) = 5\times\frac{6-30}{36}=-\frac{10}{3}.$ So, by the tower property, $E(X) = E(X|Y=Head)\times P(Y=Head)+E(X|Y=Tail)\times P(Y=Tail) = \cdots.$ The tower property is very useful for computing expectations involving a random number of random variables. Here is an example.
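(A quick check of the casino example, both exactly and by simulation; the Python below is just a sketch.)

import random

E_game1 = sum(range(1, 7)) / 6                    # 3.5
E_game2 = 5 * 6/36 - 5 * 30/36                    # -10/3
print((1/3) * E_game1 + (2/3) * E_game2)          # exact: -19/18, about -1.06

def play():
    if random.random() < 1/3:                     # Head: game 1
        return random.randint(1, 6)
    a, b = random.randint(1, 6), random.randint(1, 6)
    return 5 if a == b else -5                    # Tail: game 2

print(sum(play() for _ in range(200_000)) / 200_000)   # simulated, close to -1.06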
EXAMPLE: A random number $N$ of customers enter a shop in a day, where $N$ takes values in $\{1,...,100\}$ with equal probabilities. The $i$-th customer pays a random amount $X_i$, where $X_i$ takes values in $\{1,2,...,10+i\}$ with equal probabilities. Assuming that $N,X_1,...,X_N$ are all independent, find the total expected payments by the customers on that day.
SOLUTION: We have $E(X_i) = \frac{11+i}{2}.$ So $E\left(\sum_1^N X_i|N\right) = \sum_1^N E(X_i|N) = \sum_1^N E(X_i) = \sum_1^N \frac{11+i}{2} = 5.5N+\frac{N(N+1)}{4}.$ By the tower property, the required answer is $E\left(5.5N+\frac{N(N+1)}{4}\right)=\cdots.$
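(The remaining expectation, left as "$\cdots$", can be evaluated directly; a short Python sketch:)

answer = sum(5.5 * N + N * (N + 1) / 4 for N in range(1, 101)) / 100
print(answer)   # 1136.25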
EXAMPLE: 10 holes, numbered 1 to 10, in a row. 5 balls are dropped randomly into them (a hole may contain any number of balls). Call a ball "lonely" if there is no other ball in its hole or the adjacent holes. Find the expected number of lonely balls.
SOLUTION: Define the indicators $I_1,...,I_5$ as $$I_i = \left\{\begin{array}{ll}1&\text{if the }i\text{-th ball is lonely}\\0&\text{otherwise.}\end{array}\right.$$ Then the total number of lonely balls is $X = \sum I_i.$ So we are to find $E(X) = \sum E(I_i).$ Let $Y_i = $ the hole where the $i$-th ball has fallen. Then $E(I_i|Y_i=1)$ is the conditional probability that all the balls except the $i$-th one have landed in holes $3,...,10$, given that the $i$-th ball has landed in hole 1. You should be able to compute this easily. Similarly, you can compute $E(I_i|Y_i=k)$ for $k=1,...,10.$ Notice that $Y_i$ can take values $1,...,10$ with equal probabilities. So the tower property should provide the answer as $$E(X) = \sum E(E(I_i|Y_i)) = \cdots.$$
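(A Monte Carlo check of this example, together with the exact value from the conditional expectations; Python sketch only.)

import random

def count_lonely():
    holes = [random.randint(1, 10) for _ in range(5)]
    return sum(1 for i, h in enumerate(holes)
               if all(abs(h - holes[j]) > 1 for j in range(5) if j != i))

n_trials = 200_000
print(sum(count_lonely() for _ in range(n_trials)) / n_trials)   # about 1.37

# Exact: E(I_i | Y_i = k) = ((10 - m)/10)**4, with m = 2 forbidden holes
# for the edge holes k = 1, 10 and m = 3 otherwise; average over k and
# multiply by the 5 balls.
print(5 * (2 * (8/10)**4 + 8 * (7/10)**4) / 10)                  # 1.37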
Proof:This follows directly from the tower property.
If $X,Y,Z$ are jointly distributed random variables, then we can talk about the conditional distribution of $Z$ given $(X,Y)$, or $X$ given $Z$, or $(X,Z)$ given $Y,$ etc. We can even condition step by step. For example, we can talk about $E(E(Z|X,Y)|X).$ This is a function of $X$ alone.
Let $I_j$ be the indicator variable for whether there is a record at position $j.$ Then $P(I_j=1)$ may be computed by total probability: $$P(I_j=1) = \sum_{k=j}^n P(X_j=k)P(I_j=1|X_j=k).$$ Similarly for $P(I_jI_k=1).$
The problem is basically optimising $\sum P_i^2$ subject to $\sum P_i$ being fixed. Cauchy-Schwarz might help.
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message zedler Joined: 03 Mar 2006 Posts: 15
Posted: Thu Mar 09, 2006 9:20 am Post subject: spacing Hello,
Please have a look at the spacing of these equations
- $[]_{\langle n\times m\rangle}$ (the brackets touch)
- $Z_{F1}$ (F and 1 spacing)
- $\frac{L_L}{L_R}$ (the subscript L too close to the fraction rule)
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Thu Mar 09, 2006 10:37 am Post subject: Re: spacing
zedler wrote: Hello,
Please have a look at the spacing of these equations
- $[]_{\langle n\times m\rangle}$ (the brackets touch)
- $Z_{F1}$ (F and 1 spacing)
- $\frac{L_L}{L_R}$ (the subscript L too close to the fraction rule)
Michael
The $\frac{L_L}{L_R}$ is not a font issue. Things are just as bad for
Computer Modern (actually, slightly worse for Computer Modern); it's a matter of how TeX sets fractions in this (somewhat strange) situation.
Notice that for a displayed equation $$\frac{L_L}{L_R}$$ there is no
problem at all. If you really want $\frac{L_L}{L_R}$ rather than
$L_L/L_R$, then you would have to adjust spacing yourself (for example, by replacing the _L with something like _{\astrut L} where \astrut was
some strut that had some extra depth to move things up).
I guess I could go and add kerns for upper-case letters and the numeral 1 (which has more space on each side than the other numerals). I'm interested to know how this expression arises. It's uncommon to have a character {\it followed\/} by the factor 1. [Also note that, if you know a bit you can always go in and change any kerning that you would like to adjust to your own specifications: use tftopl to go from mt2mi*.tfm to
mt2mi*.pl, go into mt2mi*.pl with any reasonable text editor to change, or add, a kern, and then use pltotf to go back to the mt2mi*.tfm.
As for $[]$, yes the brackets [almost] touch, but even the brackets in
Computer Modern almost touch. Are you sure this is what you really want, and not something like $[\ ]$ or $[\,]$, etc? zedler Joined: 03 Mar 2006 Posts: 15
Posted: Tue Apr 18, 2006 7:44 am Post subject: Re: spacing Hello,
what do you think about the following, is it worth to introduce kerning pairs with the comma?
\documentclass{minimal}
\usepackage{mtpro2}
%\usepackage{MinionPro}
%\usepackage{lucimatx}
%\usepackage{fourierx}
\begin{document}
\[\phi_{N,n}\]
\end{document}
Best,
Michael Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Tue Apr 18, 2006 9:26 am Post subject: Re: spacing
zedler wrote: Hello,
what do you think about the following, is it worth to introduce kerning pairs with the comma?
\[\phi_{N,n}\]
Best,
Michael
This again is not a font question, but a TeX question. A comma in math mode is a "punctuation" (see The TeXBook, pg.154), and the spacing before an ordinary symbol, as given on pg.170, is (1), meaning that there
is a thin space after the punctuation, but only in display and text sizes, not in script and scriptscript styles. So there is no space after the comma.
Moreover, it is not possible to kern the comma with the n, because even
if you put a kern in the tfm file it will be ignored, since the comma, as a punctuation, has its own rules for spacing.
(Similarly, a subscript a+b will not have any space around the + sign, which may or may not bother you. Some typists may decide to add
their own \, in various places of a subscript. It's sort of sad that TeX doesn't provide an option for things of this sort being done automatically, but that's the way it is.)
|
Now showing items 1-10 of 17
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ (1S) and Υ (2S) are measured in pp collisions at s√=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
|
There were a number of upsets in the NBA this past Christmas week. Here, we offer no explanation, but do attempt to quantify just how bad those upsets were, taken in aggregate. Short answer: real bad! To argue this point, we review and then apply a very simple predictive model for sporting event outcomes — python code given in footnotes.
Quick review of x-mas week
The Christmas holiday week$^1$ (Dec. 19 – 25) provided a steady stream of frustrating upsets. The two most perplexing, perhaps, were the Lakers win over the Warriors and the Jazz win over the Grizzlies: two of this year’s greats losing to two of its most lackluster. In all, $24$ of the $49$ games that week were upsets (with an upset defined here to be one where the winning team started the game with a lower win percentage than the loser). That comes out to an upset ratio just under $49\%$, much higher than the typical rate, about $34\%$.
A general sporting model
A $49\%$ upset rate sounds significant. However, this metric does not quite capture the emotional magnitude of the debacle. To move towards obtaining such a metric, we first review here a “standard”$^2$ sporting model that will allow us to quantify the probability of observing a week as bad as this just past. For each team $i$, we introduce a variable $h_i$ called its mean scoring potential: Subtracting from this the analogous value for team $j$ gives the expected number of points team $i$ would win by, were it to play team $j$. More formally, if we let the win-difference for any particular game be $y_{ij}$, we have $$h_i - h_j \equiv \langle score(i) - score(j) \rangle \equiv \langle y_{ij} \rangle, $$ where we average over hypothetical outcomes on the right in order to account for the variability characterizing each individual game.
By taking into account the games that have already occurred this season, one can estimate the set of $\{h_i\}$ values. For example, summing the above equation over all past games played by team $1$, we obtain $$ \sum_{j\text{ (past opponents of 1)}} (h_1 - h_j) = \sum_j \langle y_{1j} \rangle \approx \sum_j y_{1j}.$$ Here, in the sum on the right we have approximated the averaged sum in the middle by the score differences actually observed in the games already played (note that in the sum on $j$ here, each team appears exactly the number of times they have already played team $1$ — this could be zero, once, twice, etc.) Writing down all equations analogous to this last one (one for each team) returns a system of $30$ linear equations in the $30$ $\{h_i\}$ variables. This system can be easily solved using a computer$^3$. We did this, applying the algorithm to the complete set of 2014-15 games played prior to the Christmas week, and obtained the set of $h$ values shown at right$^4$. The ranking looks quite reasonable, from top to bottom.
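(The footnotes link to the authors' actual code; the snippet below is only a minimal Python sketch of the same least-squares idea, with made-up placeholder games rather than the real 2014-15 schedule.)

import numpy as np

games = [  # (team_i, team_j, score_i - score_j): placeholder data
    (0, 1, 7), (1, 2, 3), (2, 0, -5), (0, 2, 12), (1, 0, -2),
]
n_teams = 3

A = np.zeros((len(games), n_teams))
y = np.zeros(len(games))
for row, (i, j, diff) in enumerate(games):
    A[row, i], A[row, j] = 1.0, -1.0     # h_i - h_j ≈ observed margin
    y[row] = diff

# The h's are only defined up to a common shift (footnote 3), so take the
# minimum-norm least-squares solution rather than inverting a matrix.
h, *_ = np.linalg.lstsq(A, y, rcond=None)
print(h - h.mean())                       # centered mean scoring potentials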
A Gaussian NBA
Now that we have the $\{h_i\}$ values, we can use them to estimate the mean score difference for any game. For example, in a Warriors-76ers game, we’d expect the Warriors to win, since they have the larger $h$ value. Further, on average, we’d expect them to win by about $h_{\text{War’s}} - h_{\text{76’s}}$ $ = 9.24 - (-11.96) \approx 21$ points. These two actually played this week, on Dec 30, and the Warriors won by $40$, a much larger margin than predicted.
The distinction between our predicted and the actual Warriors-76ers outcome motivates further consideration of the variability characterizing NBA games. It turns out that if we analyze the complete set of games already played this year, something simple pops out: plotting a histogram of our estimate errors, $\epsilon_{ij} \equiv (h_i - h_j) - y_{ij}$, we see that the actual score difference distribution of NBA games looks a lot like a Gaussian, or bell curve. This is centered about our predicted value and has a standard deviation of $\sigma \approx 11$ points, as shown in the figure at right. These observations allow us to estimate various quantities of interest. For instance, we can estimate the frequency with which the Warriors should beat the 76ers by 40 or more points, as they did this week. This is simply equal to the frequency with which we underestimate the winning margin by at least $40 - 21 = 19$ points. This, in turn, can be estimated by counting how often this has already occurred in past games, using our histogram. Alternatively, we can use the fact that our errors are Gaussian distributed to write this as $$ P(\epsilon \leq -19) = \frac{1}{\sqrt{2 \pi \sigma^2}} \int_{-\infty}^{-19} e^{-\frac{\epsilon^2}{2\sigma^2}} d \epsilon \approx 0.042,$$
where we have evaluated the integral by computer. This result says that a Warriors win by 40 or more points will only occur about $4.2\%$ of the time. Using a similar argument, one can show that the 76ers should beat the Warriors only about $2.8 \%$ of the time.
Christmas week, quantified
It is now a simple matter to extend our analysis method so that we can estimate the joint likelihood of a given set of outcomes all happening the same week: we need only make use of the fact that the mean estimate error $\langle \epsilon \rangle$ of our predictions on a set of $N$ games $(\langle \epsilon \rangle = \frac{1}{N}\sum_{\text{games }i = 1}^N \epsilon_i)$ will also be Gaussian distributed, but now with standard deviation $\sigma/ \sqrt{N}$. The $1/\sqrt{N}$ factor here reduces the width of the mean error distribution, relative to that of the single games — it takes into account the significant cancellations that typically occur when you sum over many games, some with positive and some with negative errors. A typical week has about $50$ games, so the mean error standard deviation will usually be about $11/\sqrt{50} \approx 1.6$.
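(The tail probabilities quoted in this post are easy to reproduce with scipy; a sketch, using the post's $\sigma = 11$:)

from math import sqrt
from scipy.stats import norm

sigma = 11.0
print(norm.cdf(-19, scale=sigma))              # ~0.042: Warriors win by 40+
print(norm.sf(0.5, scale=sigma / sqrt(53)))    # ~0.37-0.38: week-1 overestimate
print(norm.sf(5.7, scale=sigma / sqrt(49)))    # ~1e-4: Christmas week (the 0.01%)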
In the four figures below, we plot histograms of our prediction errors for four separate weeks: Christmas week is shown last (in red), and the other subplots correspond to the three weeks preceding it (each in green). We also show in each subplot (in gray) a histogram of all game errors preceding the week highlighted in that subplot — notice that each is quite well-fit by a Gaussian. In the first week, $53$ games were played, and our average error on these games was just $\langle \epsilon \rangle = 0.5$ points. The probability of observing an average overestimate of $0.5$ or greater in such a week is given by, $$P(\langle \epsilon \rangle \geq 0.5) = \frac{1}{\sqrt{2 \pi \sigma^2/53}} \int_{0.5}^{\infty} e^{-\frac{\epsilon^2}{2\sigma^2/53}} d \epsilon \approx 0.38.$$ That is, a weekly average overestimate of $\langle \epsilon \rangle \geq 0.5$ will happen about $38\%$ of the time, and so is pretty common. Similarly, in the second, third, and fourth weeks, the number of games played and average estimate errors were $(N,\langle \epsilon \rangle) = (52,0.8),$ $(55,2.2)$, and $(49,5.7)$, respectively. Calculating as above, overestimates of these magnitudes or larger occur with frequency $30\%$, $7\%$, and $0.01 \%$, respectively. The previous two are both fairly common,
but — on average — it would apparently take about ten thousand trials to find a week as bad as Christmas week 2014.
Discussion
A week in ten thousand is equivalent to about one week in every $400$ seasons! We don’t really take this estimate too seriously. In fact, we suspect that one of the following might be happening here: a) there may have been something peculiar about the games held this Christmas week that caused their outcomes to not be distributed in the same manner as other games this season$^5$, b) alternatively, there may be long tails in the error distribution that we can’t easily observe, or c) it may be that improvements to our model (e.g., taking into account home team advantage, etc.) would result in a larger frequency estimate. Maybe all three are true, or maybe this really was a week in ten thousand. Either way, it’s clear that this past Christmas week was a singular one.
Footnotes
[1] The NBA workweek starts on Friday.
[2] We first read about this modeling method here. In the addendum, it’s stated that the author thinks that nobody in particular is credited with having developed it, and that it’s been around for a long time.
[3] Notice that we can shift all $h_i \to h_i +c$, with $c$ some common constant. This invariance means that the solution obtained by solving the system of equations is not unique. Consequently, the matrix of coefficients is not invertible, and the system needs to be solved by Gaussian elimination, or some other irritating means.
[4] Python code and data for evaluating the NBA $h$ values given here.
[5] Note, however, that carrying out a similar analysis over the past 9 seasons showed no similar anomalies in their respective Christmas weeks.
|
The answer is... it is not so simple. Some quantum mechanics follow, but the
TL;DR version is that while $m_l=0$ corresponds to $p_z$, the orbitals for $m_l=+1$ and $m_l=-1$ lie in the $xy$-plane, but not on the axes. The reason for this outcome is that the wavefunctions are usually formulated in spherical coordinates to make the maths easier, but graphs in the Cartesian coordinates make more intuitive sense for humans. The $p_x$ and $p_y$ orbitals are constructed via a linear combination approach from radial and angular wavefunctions and converted into $xyz$. Thus, it is not possible to directly correlate the values of $m_l=\pm1$ with specific orbitals. The notion that we can do so is sometimes presented in introductory courses to make a complex mathematical model just a little bit simpler and more intuitive.
From
Physical Chemistry by Atkins and DePaula, the three wavefunctions for $n=2$ and $l=1$ are as follows.
$$\begin{align}&\Psi_{2,1,0}&&=r\cos{\theta}f(r)\\&\Psi_{2,1,+1}&&=-\dfrac{r}{\sqrt{2}}\sin{\theta}\mathrm{e}^{\mathrm{i}\phi}f(r)\\&\Psi_{2,1,-1}&&=\dfrac{r}{\sqrt{2}}\sin{\theta}\mathrm{e}^{-\mathrm{i}\phi}f(r)\\&\end{align}$$
The notation is $\Psi_{n,l,m_l}$, $r$ is the radius, $\theta$ is the angle with respect to the $z$-axis and $\phi$ is the angle with respect to the $xz$-plane. $$f(r)=\sqrt{\dfrac{Z^5}{32\pi a_0^5}}\mathrm{e}^{-Zr/2a_0}$$
in which $Z$ is the atomic number (or probably better nuclear charge) and $a_0$ is the Bohr radius.
In switching from spherical to Cartesian coordinates, we make the substitution $z=r\cos{\theta}$, so:$$\Psi_{2,1,0}=zf(r)$$
This is $\Psi_{2p_z}$ since the value of $\Psi$ is dependent on $z$: when $z=0;\ \Psi=0$, which is expected since $z=0$ describes the $xy$-plane.
The other two wavefunctions are unhelpfully degenerate in the $xy$-plane. An equivalent statement is that these two orbitals do not lie on the $x$- and $y$-axes, but rather bisect them. Thus it is typical to take linear combinations of them to make the equation look prettier. Linear combinations are allowed by the maths of quantum mechanics. If any set of wavefunctions is a solution to the Schrödinger equation, then any set of linear combinations of these wavefunctions must also be a solution. We can do this because orbitals and the wavefunctions that describe them are not real physical objects. They constitute a mathematical model.
In the equations below, we're going to make use of some trigonometry, notably Euler's formula:
$$\mathrm{e}^{\mathrm{i}\phi}=\cos{\phi}+\mathrm{i}\sin{\phi}$$$$\sin{\phi} = \frac{\mathrm{e}^{\mathrm{i}\phi}-\mathrm{e}^{-\mathrm{i}\phi}}{2\mathrm{i}}$$$$\cos{\phi} = \frac{\mathrm{e}^{\mathrm{i}\phi}+\mathrm{e}^{-\mathrm{i}\phi}}{2}$$
We're also going to use $x=\sin{\theta}\cos{\phi}$ and $y=\sin{\theta}\sin{\phi}$.
$$\begin{align}\Psi_{2p_x}=\frac{1}{\sqrt{2}}\left(\Psi_{2,1,-1}-\Psi_{2,1,+1}\right)=\frac{1}{2}\left(\mathrm{e}^{\mathrm{i}\phi}+\mathrm{e}^{-\mathrm{i}\phi} \right)r\sin{\theta}f(r)=r\sin{\theta}\cos{\phi}f(r)=xf(r) \\\Psi_{2p_y}=\frac{\mathrm{i}}{\sqrt{2}}\left(\Psi_{2,1,+1}+\Psi_{2,1,-1}\right)=\frac{1}{2\mathrm{i}}\left(\mathrm{e}^{\mathrm{i}\phi}-\mathrm{e}^{-\mathrm{i}\phi} \right)r\sin{\theta}f(r)=r\sin{\theta}\sin{\phi}f(r)=yf(r)\\\end{align}$$
So, while $m_l=0$ corresponds to $\Psi_{p_z}$, $m_l=+1$ and $m_l=-1$ cannot be directly assigned to $\Psi_{p_x}$ and $\Psi_{p_y}$. Rather $m_l=\pm1$ corresponds to $\{\Psi_{p_x},\Psi_{p_y} \}$. Put another way, I suppose we could say that $m_l=+1$ might correspond to $\Psi_{p_{x+y}}$ and $m_l=-1$ might correspond to $\Psi_{p_{x-y}}$.
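(In case it helps, the linear-combination step can be checked symbolically; the Python/sympy sketch below keeps the radial factor $f(r)$ symbolic and uses the phase conventions of the wavefunctions quoted above.)

import sympy as sp

r, theta, phi = sp.symbols('r theta phi', real=True, positive=True)
f = sp.Function('f')(r)                                          # radial part f(r)

psi_p1 = -r/sp.sqrt(2) * sp.sin(theta) * sp.exp( sp.I*phi) * f   # m_l = +1
psi_m1 =  r/sp.sqrt(2) * sp.sin(theta) * sp.exp(-sp.I*phi) * f   # m_l = -1

px = (psi_m1 - psi_p1) / sp.sqrt(2)
py = sp.I * (psi_p1 + psi_m1) / sp.sqrt(2)

x = r*sp.sin(theta)*sp.cos(phi)
y = r*sp.sin(theta)*sp.sin(phi)

print(sp.simplify((px - x*f).rewrite(sp.cos)))   # 0, i.e. psi_2px = x f(r)
print(sp.simplify((py - y*f).rewrite(sp.cos)))   # 0, i.e. psi_2py = y f(r)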
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE, to be previewed in the built-in PDF previewer in Firefox:
\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
It sounds like you could simply use the Cost Minimization Problem:$$\underset{z_1,...z_N}{min}\sum_{i=1}^N q_iz_i$$$$s.t.\quad f(z_1,...,z_N)\geq \bar{y}$$$$z_1,...z_n\geq0$$Where $z_i$ and $q_i$ are the quantity and price of input $i$, respectively, $\bar{y}$ is some predetermined level of output, and $f(\cdot)$ is the production function.
A production function relates physical output of a production process to factors of production. Of course, it seems to me that it may be a challenge to characterize a production function for your case. However, the production function can be any function which satisfies the following:
1) Strict Monotonicity: If $ z'>z$ then $f(z')>f(z)$
2) Quasi-concavity: $V(y)=\{ z:f(z)\geq y\}$ is a convex set
3) $V(y)$ is closed and non-empty
4) $f(z)$ is finite, nonnegative, real valued, and single valued $\forall z\geq 0$
5) $f(z)$ is a $C^2$ function
To be more specific to your case (data is the input and output) the problem reduces to:$$\underset{d_i,l,k}{min}\; d_iq+wl+rk$$ $$s.t.\quad f(d_i,l,k)\geq \bar{d_o}$$$$d_i,l,k\geq 0$$
Where $d_i$ is the data used as input, $d_o$ is the data output, $q$ is the price of input data, $w$ is wage, $l$ is labor hours, $r$ is the rental price of capital and $k$ is quantity of capital.
Again, this type of production process is foreign to me, so I cannot make an informed suggestion regarding specification of the production function, but maybe someone who is more apt on this site may offer a suggestion. I hope this helps!
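(Purely as an illustration of the setup above: a numerical Python sketch with scipy, assuming a Cobb-Douglas production function and made-up prices, since, as noted, the actual production function for this kind of data-processing firm would need to be specified separately.)

import numpy as np
from scipy.optimize import minimize

q = np.array([2.0, 1.0, 3.0])           # prices of data input, labour, capital
y_bar = 10.0                             # required output level

def f(z):                                # assumed Cobb-Douglas production function
    return z[0]**0.5 * z[1]**0.3 * z[2]**0.2

res = minimize(
    lambda z: q @ z,                                         # total cost
    x0=np.full(3, 10.0),
    constraints=[{'type': 'ineq', 'fun': lambda z: f(z) - y_bar}],
    bounds=[(1e-9, None)] * 3,
)
print(res.x, q @ res.x)                  # cost-minimizing input bundle and cost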
|
As you have already pointed out in your question, it is not possible (without using optimization methods) to compute an exact L2 solution for the frequency domain design problem of IIR filters due to the non-linear relationship between the filter coefficients and the error function. There is, however, a method which can come close and which transforms the problem to a linear one: the Equation Error Method. Instead of defining the error measure as
$$E_0=\sum_k\left |H_k -\frac{B_k}{A_k}\right|^2\tag{1}$$
(where $k$ is the frequency index, $H_k$ is the complex desired frequency response at frequency $\omega_k$, and $B_k$ and $A_k$ are the numerator and denominator polynomials, respectively, also evaluated at the frequency point with index $k$), one defines an error measure
$$E_1=\sum_k\left |H_kA_k -B_k\right|^2\tag{2}$$
Minimizing $E_1$ is a linear problem in the filter coefficients. You get an overdetermined system of linear equations which can be solved (in the $l_2$ sense) by solving a set of linear equations. (Of course, the number of frequency points must be greater than the number of filter coefficients). The result of minimizing (2) is identical to solving a weighted $l_2$ problem with weight function $|A_k|^2$:
$$E_1=\sum_kW_k\left|H_k -\frac{B_k}{A_k}\right|^2\quad\textrm{with}\quad W_k=|A_k|^2$$
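(To make the procedure concrete, here is a minimal numpy sketch of setting up and solving (2) in the least-squares sense; the function name, orders, and toy target below are illustrative choices of mine, not something given in the answer.)

import numpy as np

def eq_error_fit(w, H, nb, na):
    # Fit B(z)/A(z) (orders nb, na, with a[0] = 1) to the desired complex
    # response H at frequencies w (rad/sample) by minimizing
    # sum_k |H_k A_k - B_k|^2, which is linear in the coefficients.
    E = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))
    M = np.hstack([E[:, :nb + 1], -H[:, None] * E[:, 1:na + 1]])
    A = np.vstack([M.real, M.imag])           # stack real/imag parts so the
    rhs = np.concatenate([H.real, H.imag])    # coefficients come out real
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[:nb + 1], np.concatenate([[1.0], x[nb + 1:]])

# toy usage: the target below is itself a stable 2nd-order IIR response,
# so the equation-error fit recovers it (almost) exactly
w = np.linspace(0.01, np.pi - 0.01, 200)
Hd = 1.0 / (1 - 1.2*np.exp(-1j*w) + 0.5*np.exp(-2j*w))
b, a = eq_error_fit(w, Hd, nb=0, na=2)
print(np.round(b, 3), np.round(a, 3))         # ~[1.]  [1. -1.2  0.5]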
There are two (related) problems with this method:
The specification must not only include the desired magnitude response but also the desired phase. If the phase specification is chosen in a way that does not fit the chosen filter order and the general properties of IIR filters, the approximation error will be large, and the filter might be unstable (which brings us to the next point).
Stability is not considered in the design process. Depending on the specification, the best approximation could either be stable or unstable.
Of course, poles outside the unit circle can be reflected inside the unit circle without affecting the designed magnitude response, but the approximation error might still be large, because an unstable filter indicates that the specification is not well suited to the chosen filter order. So it takes quite some experience to choose an appropriate desired phase response.
An in-depth explanation of the equation error method can be found in the book
Digital Filter Design by Parks and Burrus. You can find a good overview here.
|
In Wikipedia's QHO page there is a moment when the following is stated:
I don't know why "the ground state in the
position representation is determined by $a|0\rangle=0$". I would say that the position representation of the ground state is rather $\langle x|0\rangle$, isn't it?
However, there are other things that I'm not being able to understand about this procedure:
Why $\langle x|a|0\rangle=0$? I thought that the annihilation operator couldn't be applied to the ground state. Does it return a $0$ if one does that? Is it possible to get operators out of a bra and a ket? I mean, for any operator $\hat{A}$, is $\langle\phi|\hat{A}|\psi\rangle=\hat{A}\langle\phi|\psi\rangle$ true? In the first case I would be doing the inner product between a bra ($\langle\phi|$) and a ket ($\hat{A}|\psi\rangle$), but in the second case I'm applying the operator to a constant. So... that doesn't seem right to me, but I'd appreciate it if you told me.
Related to the last item: what happens when an operator is applied to a constant? Do I get another operator?
How does it jump from the second line to the third one (I mean from the one with the derivative in it to the one with the $\exp$ function)? I have absolutely no idea about that.
|
Ananthanarayan, B and Rindani, Saurabh D (2004)
CP violation at a linear collider with transverse polarization. In: Physical Review D, 70 (3). 036005/1-6. This is the latest version of this item.
PDF
A25anantnaryanab.pdf
Download (225kB)
Abstract
We show how transverse beam polarization at $e^+e^-$ colliders can provide a novel means to search for CP violation by observing the distribution of a single final-state particle without measuring its spin. We suggest an azimuthal asymmetry which singles out interference terms between the standard model contribution and new-physics scalar or tensor effective interactions in the limit in which the electron mass is neglected. Such terms are inaccessible with unpolarized or longitudinally polarized beams. The asymmetry is sensitive to CP violation when the transverse polarizations of the electron and positron are in opposite senses. The sensitivity of planned future linear colliders to new-physics CP violation in $e^+e^- \rightarrow t\bar{t}$ is estimated in a model-independent parametrization. It would be possible to put a bound of ${\sim}7$ TeV on the new-physics scale $\Lambda$ at the 90% C.L. for $\sqrt{s} = 500\ \mathrm{GeV}$ and $\int L\,dt = 500\ \mathrm{fb}^{-1}$, with transverse polarizations of 80% and 60% for the electron and positron beams, respectively.
|
Here is a detailed explanation of Jech's proof.
Let $D$ be a normal measure on $\kappa$. Suppose, towards a contradiction, that $\kappa$ is not Mahlo. Then there is some club $C \subseteq \kappa$ such that$$C \cap \{ \alpha < \kappa \mid \mathrm{cof}(\alpha) = \alpha \} = \emptyset.$$
Since $D$ is normal, it contains all clubs. In particular $C \in D$. Since $D$ is closed under intersections, we therefore must have that $\{ \alpha < \kappa \mid \mathrm{cof}(\alpha) = \alpha \} \not \in D$ and hence that$$\{ \alpha < \kappa \mid \mathrm{cof}(\alpha) < \alpha \} = \kappa \setminus \{ \alpha < \kappa \mid \mathrm{cof}(\alpha) = \alpha \} \in D.$$
By normality there is some $\lambda < \kappa$ such that
$$E_\lambda = \{ \alpha < \kappa \mid \mathrm{cof}(\alpha) = \lambda \} \in D.$$
By replacing $E_\lambda$ with $E_\lambda \setminus \lambda$ we may and shall assume that $E_\lambda \cap \lambda = \emptyset$.
For each $\alpha \in E_\lambda$ fix a strictly increasing, cofinal function$$f_\alpha \colon \lambda \to \alpha.$$Now, for each $\xi < \lambda$, the function$$g_\xi \colon E_\lambda \to \kappa, \ \alpha \mapsto f_\alpha(\xi)$$is regressive, since $f_\alpha(\xi) < \alpha$. Hence, by normality, there is some $A_\xi \in D$ and some $y_\xi < \kappa$ such that $f_\alpha(\xi) = y_\xi$ for all $\alpha \in A_\xi$.
Let
$$A = \bigcap_{\xi < \lambda} A_\xi.$$
Since $\lambda < \kappa$ and $D$ is $\kappa$-complete, we have that $A \in D$.
Now let $\alpha \in A$. For all $\xi < \lambda$ we have that $f_\alpha(\xi) = y_\xi$ is independent of $\alpha$ (by the construction of $A_\xi$). But$$\alpha = \sup_{\xi < \lambda} f_\alpha(\xi) = \sup_{\xi < \lambda} y_\xi$$is completely determined by the sequence $(y_\xi \mid \xi < \lambda)$.
Hence $A$ contains at most one element. This is a contradiction, since $D$ is non-principal.
It follows that $\kappa$ is Mahlo after all!
|
I got a simple modulating signal $x(t)=\sin(2\pi\alpha t)\sin(2 \pi \beta t)$ with carrier frequency $\alpha$ and modulation frequency $\beta$. The spectral correlation will obviously have components ...
Some days ago I asked here (parseval for a continuous but limited signal) whether Parseval can be applied to a limited signal. Can you recommend a book or a paper that I can use as a reference for this?...
I have a question about the Parseval relation written here: https://en.wikipedia.org/wiki/Parseval%27s_theorem (in the section Notation used in physics). If I have a signal that is continuous but limited (so ...
I'm trying to check Parseval's theorem for a Gaussian signal. It's well known that the Fourier transform of $\exp(-t^2)$ is $\sqrt{\pi}\exp(-\pi^2 k^2)$. So I implement it using quad and simps. I think ...
Let $x(n)$ be a sequence of length $N$, which is zero outside the interval $(0,N-1)$. Let $X(k), k=0,1,\cdots,N-1$ be the FFT coefficients of $x(n)$, that is, $X(k)=\sum_{n=0}^{N-1}x(n) \exp\left( -\...
|
I'm going through the article in the following link lately and one point confuses me a lot. https://arxiv.org/pdf/1509.05001.pdf
So, the goal of this paper is to solve the following constrained binary quadratic programming. Here the parameters $A$, $Q$ and $b$ are all of integer values.
$\max\ x^{T}Qx \quad \text{s.t.}\quad Ax\leq b \ \text{ and } \ x\in \{0,1\}^{n}.$
The authors consider the lagrangian relaxation of this problem in the following.
So, we have the following optimization problem $L_{\lambda}$.
$d(\lambda) = \min_{x}\ x^{T}Qx+\lambda^{T}(Ax-b) \quad \text{s.t.}\quad x\in \{0,1\}^{n}.$
And the further optimization problem $L$ in the following.
$L:\ \max_{\lambda \in \mathbb{R}_{+}^{m}}\ d(\lambda).$
Then, a branch-and-bound tree is created (which I do not quite understand why). In each node $u$ of the branch-and-bound tree, a bound is computed by solving the problem $L$, and the primal-dual pair $(x^{u},\lambda^{u})$ is obtained. We define the slack of constraint $i$ at a point $x$ to be $s_{i} = b_{i} - a_{i}^{T}x$, where $a_{i}$ is the $i$-th row of $A$. Then the set of violated constraints at $x$ is the set $V = \{i : s_i < 0\}$. If $x^{u}$ is infeasible for the original problem, it must violate one or more constraints. Additionally, we define the change in slack for constraint $i$ resulting from flipping variable $j$ in $x^{u}$ to be $$\delta_{ij}=a_{ij}(2x_{j}^{u}-1).$$
I do not understand why $\delta_{ij}$ is defined in this way. To my understanding, $x^{u}\in \{0,1\}^{n}$, so when you flip variable $j$ of $x^{u}$, you change it either from 0 to 1 or from 1 to 0, and in either case the change in slack for constraint $i$ can be calculated directly.
Did I miss something here? Could anyone shed some light on this? Many thanks for your time and attention.
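For what it's worth, a quick numerical check (with made-up integer data for $A$, $b$, and a random binary $x^u$; none of this comes from the paper) confirms that flipping bit $j$ changes every slack $s_i$ by exactly $\delta_{ij}=a_{ij}(2x_j^u-1)$, in the sense $s_i^{\text{new}}-s_i^{\text{old}}=\delta_{ij}$ (the paper's sign convention may differ):
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6))      # made-up integer constraint matrix
b = rng.integers(0, 8, size=4)            # made-up right-hand side
x = rng.integers(0, 2, size=6)            # some binary point x^u

slack = b - A @ x                          # s_i = b_i - a_i^T x
j = 2                                      # flip variable j
x_flip = x.copy()
x_flip[j] = 1 - x_flip[j]
slack_flip = b - A @ x_flip

delta = A[:, j] * (2 * x[j] - 1)           # delta_{ij} = a_{ij}(2 x_j^u - 1), for all i
print(np.allclose(slack_flip - slack, delta))   # True: flipping j changes s_i by delta_{ij}
So the definition simply packages, for every constraint at once, the slack change caused by a single flip, which is presumably what the authors use when repairing an infeasible $x^u$.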
|
Let's say you want to estimate a quantity $\mu$, but you have only access to unbiased estimates of its logarithm, i.e., $\log\mu$. Can you obtain an unbiased estimate of $\mu$?
2019/06/14 2016/10/26
Say that you have a dynamical process of interest $X_1,\ldots,X_n$ and you can only observe the process with some noise, i.e., you get an observation sequence $Y_1,\ldots,Y_n$. What is the optimal way to estimate $X_n$ conditioned on the whole sequence of observations $Y_{1:n}$?
2016/09/23
If I give you a function on $[0,1]$ and a computer and want you to find the minimum, what would you do? Since you have the computer, you can be lazy: Just compute a grid on $[0,1]$, evaluate the grid points and take the minimum. This will give you something close to the true minimum. But how much?
2016/01/17
Suppose that you sample from a probability measure $\pi$ to estimate the expectation $\pi(f) := \int f(x) \pi(\mbox{d}x)$ and formed an estimate $\pi^N(f)$. How close are you to the true expectation $\pi(f)$?
2015/09/07
I submitted a preprint on matrix factorisations and linear filters. I managed to derive some factorisation algorithms as linear filtering algorithms. In the paper, I deferred one discussion to here: estimating parameters via maximising the marginal likelihood. So here it is.
2015/06/19
I arXived a new preprint titled Online Matrix Factorization via Broyden Updates.
Around this April, I was reading about quasi-Newton methods (from this very nice paper of Philipp Hennig), and when I saw the derivation of the Broyden update, I immediately realized that this idea could be used for computing factorisations. Furthermore, it leads to an online scheme, which is even better!
The idea is to solve the following optimization problem at each iteration $k$:\begin{align*} \min_{x_k,C_k} \big\| y_k - C_k x_k \big\|_2^2 + \lambda \big\|C_k - C_{k-1}\big\|_F^2.\end{align*}
The motivation behind this cost is in the manuscript.
Although the basic idea was simple, I set a few goals. First of all, I wanted a method where one can sample any column of the dataset and use it immediately, so I modified the notation a bit, as you can see from Eq. (2) in the manuscript. Secondly, I wanted it to be possible to use mini-batches as well, i.e. a group of columns at each time. Thirdly, it was obvious that a modern matrix factorisation method must handle missing data, so I had to extend the algorithm accordingly. I have sorted out all of this except a rule for missing data with mini-batches, which turned out to be harder, so I left that out of this work.
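As a rough illustration only (this is my sketch of one alternating scheme for the cost above, not necessarily the algorithm in the manuscript, and the function name is made up): with $x_k$ fixed the minimiser over $C_k$ has a closed form, and $x_k$ itself can be obtained by least squares against $C_{k-1}$:
import numpy as np

def online_factor_step(C_prev, y, lam=1.0):
    # x_k: least-squares code for column y under the previous factor C_{k-1}
    x, *_ = np.linalg.lstsq(C_prev, y, rcond=None)
    # C_k: closed-form minimiser of ||y - C x||^2 + lam*||C - C_prev||_F^2 with x fixed
    C = (lam * C_prev + np.outer(y, x)) @ np.linalg.inv(lam * np.eye(x.size) + np.outer(x, x))
    return C, x

# toy run on a random data matrix, processing one column at a time
rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 50))
C = rng.standard_normal((20, 5))        # initial factor of rank 5
for k in range(Y.shape[1]):
    C, x = online_factor_step(C, Y[:, k])
The matrix being inverted is a rank-one perturbation of $\lambda I$, which is why the update can be seen as Broyden-like.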
2015/03/08
I was tinkering with the logistic map $x_{n+1} = a x_n (1 - x_n)$ today and I wondered what happens if I plot the histogram of the generated sequence $(x_n)_{n\geq 0}$. Can it possess some statistical properties?
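A quick way to try this yourself (my own snippet; the choice $a=4$, the seed, and the burn-in are arbitrary):
import numpy as np
import matplotlib.pyplot as plt

a, n_iter, n_burn = 4.0, 100_000, 1_000   # a = 4 is the fully chaotic regime
x = 0.1
samples = []
for i in range(n_iter):
    x = a * x * (1.0 - x)
    if i >= n_burn:                        # discard the transient
        samples.append(x)

# for a = 4 the histogram approaches the invariant density 1/(pi*sqrt(x(1-x)))
plt.hist(samples, bins=200, density=True)
plt.show()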
2015/03/04
Suppose we have a continuous random variable $X \sim p(x)$ and we would like to estimate its tail probability, i.e. the probability of the event $\{X \geq t\}$ for some $t \in \mathbb{R}$. What is the most intuitive way to do this?
2014/06/12
Fisher's identity is useful to use in maximum-likelihood parameter estimation problems. In this post, I give its proof. The main reference is Douc, Moulines, Stoffer; Nonlinear time series theory, methods and applications.
|
Timeline of prime gap bounds
Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison)
58,885,998? (Tao)
59,093,364 (Morrison)
57,554,086 (Morrison)
Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison)
2,618,607 (Morrison)
48,112,378 (Morrison)
42,543,038 (Morrison)
42,342,946 (Morrison)
Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison)
4,802,222 (Morrison)
Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu)
1/240?? (v08ltu)
4,801,744 (Sutherland)
4,788,240 (Sutherland)
Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz)
4,717,560 (Sutherland)
397,110? (Sutherland)
4,656,298 (Sutherland)
389,922 (Sutherland)
388,310 (Sutherland)
388,284 (Castryck)
388,248 (Sutherland)
387,982 (Castryck)
387,974 (Castryck)
[math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance.
[math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve
Jun 6 387,960 (Angeltveit)
387,904 (Angeltveit)
Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7
26,024? (v08ltu)
387,534 (pedant-Sutherland)
Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland)
285,752 (pedant-Sutherland)
values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here.
An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired.
Jun 12 22,951 (Tao/v08ltu)
22,949 (Harcos)
249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu)
6,329? (Harcos)
6,329 (v08ltu)
60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu)
5,672? (v08ltu)
5,459? (v08ltu)
5,454? (v08ltu)
5,453? (v08ltu)
60,740 (xfxie)
58,866? (Sun)
53,898? (Sun)
53,842? (Sun)
A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu)
5,453? (v08ltu)
5,452? (v08ltu)
53,774? (Sun)
53,672*? (Sun)
Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao)
[math]148\varpi + 33\delta \lt 1[/math]? (Tao)
Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu)
1,467 (v08ltu)
12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu)
[math]140\varpi + 32 \delta \lt 1[/math]? (Tao)
1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes)
1,007? (Hannes)
10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen)
[math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao)
962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao)
873? (Hannes)
Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao)
Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility
Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao)
632 (Harcos)
4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma)
12 [EH] (Maynard)
Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard)
5 [EH] (Maynard)
600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard)
582#*? (Nielsen)
59,451 [m=2]#? (Nielsen)
42,392 [m=2]? (Nielsen)
356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie)
448#*? (Nielsen)
43,134 [m=2]#? (Nielsen)
698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen)
10,000,000? [m=3] (Tao)
1,700,000? [m=3] (Tao)
38,000? [m=2] (Tao)
300#? (Clark-Jarvis)
182,087,080? [m=3] (Sutherland)
179,933,380? [m=3] (Sutherland)
More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20
55#? (Nielsen)
36,000? [m=2] (xfxie)
175,225,874? [m=3] (Sutherland)
27,398,976? [m=3] (Sutherland)
Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck)
75,000,000? [m=4] (Castryck)
3,400,000,000? [m=5] (Castryck)
5,511? [EH] [m=3] (Sutherland)
2,114,964#? [m=3] (Sutherland)
309,954? [EH] [m=5] (Sutherland)
395,154? [m=2] (Sutherland)
1,523,781,850? [m=4] (Sutherland)
82,575,303,678? [m=5] (Sutherland)
A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland)
1,512,832,950? [m=4] (Sutherland)
2,186,561,568#? [m=4] (Sutherland)
131,161,149,090#? [m=5] (Sutherland)
Dec 24 474,320? [EH] [m=4] (Sutherland)
1,497,901,734? [m=4] (Sutherland)
Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54# (Nielsen) 270# (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Feb 22 51?# (Nielsen) 252?# (Clark-Jarvis) More efficient matrix computation allows for higher degrees to be used Mar 4 Jan 6 computations confirmed Apr 14 50?# (Nielsen) 246?# (Clark-Jarvis) A 2-week computer calculation! Apr 17 35,410? [m=2]* (xfxie) Redoing the m=2,3,4,5 computations using the confirmed MPZ estimates rather than the unconfirmed ones Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [GEH] - bound is conditional the generalized Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted
See also the article on
Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
|
Starting with a finite set of 3D points, Plotly can generate a
Mesh3d object which, depending on a key value, can be the convex hull of that set, its Delaunay triangulation, or an alpha shape.
This notebook is devoted to the presentation of the alpha shape as a computational geometric object, its interpretation, and visualization with Plotly.
The alpha shape of a finite point set $S$ is a polytope whose structure depends only on the set $S$ and a parameter $\alpha$.
Although it is less known in comparison to other computational geometric objects, it has been used in many practical applications in pattern recognition, surface reconstruction, molecular structure modeling, porous media, astrophysics.
In order to understand how the algorithm underlying
Mesh3d works, we present shortly a few notions of Computational Geometry.
Let $S$ be a finite set of 2D or 3D points. A point is called a $0$-simplex or vertex. The convex hull of two distinct points is a $1$-simplex (segment), the convex hull of three non-collinear points is a $2$-simplex (triangle), and the convex hull of four non-coplanar points is a $3$-simplex (tetrahedron):
from IPython.display import IFrameIFrame('https://plot.ly/~empet/13475/', width=800, height=350)
If $T$ is the set of points defining a $k$-simplex, then any proper subset of $T$ defines an $\ell$-simplex, $\ell<k$. These $\ell$-simplexes (or $\ell$-simplices) are called faces.
A 2-simplex has three $1$-simplexes and three $0$-simplexes as faces, whereas a tetrahedron has as faces four 2-simplexes, six 1-simplexes and four 0-simplexes.
k-simplexes are building blocks for different structures in Computational Geometry, mainly for creating meshes from point clouds.
Let $S$ be a finite set in $\mathbb{R}^d$, $d=2,3$ (i.e. a set of 2D or 3D points). A collection $\mathcal{K}$ of k-simplexes, $0\leq k\leq d$, having as vertices the points of $S$, is a
simplicial complex if its simplexes have the following properties:
The next figure illustrates a simplicial complex (left), and a collection of $k$-simplexes (right), $0\leq k\leq 2$, that do not form a simplicial complex because condition 2 in the definition above is violated.
IFrame('https://plot.ly/~empet/13503/', width=600, height=475)
Triangular meshes used in computer graphics are examples of simplicial complexes.
The underlying space of a simplicial complex, $\mathcal{K}$, denoted $|\mathcal{K}|$, is the union of its simplexes, i.e. it is a region in plane or in the 3D space, depending on whether d=2 or 3.
A
subcomplex of the simplicial complex $\mathcal{K}$ is a collection, $\mathcal{L}$, of simplexes in $\mathcal{K}$ that also form a simplicial complex.
The points of a finite set $S$ in $\mathbb{R}^2$ (respectively $\mathbb{R}^3$) are in
general position if no $3$ (resp 4) points are collinear (coplanar), and no 4 (resp 5) points lie on the same circle (sphere).
A particular simplicial complex associated to a finite set of 2D or 3D points, in general position, is the
Delaunay triangulation.
A triangulation of a finite point set $S \subset \mathbb{R}^2$ (or $\mathbb{R}^3$) is a collection $\mathcal{T}$ of triangles (tetrahedra), such that:
A Delaunay triangulation of the set $S\subset\mathbb{R}^2$ ($\mathbb{R}^3$) is a triangulation with the property that the open balls bounded by the circumcircles (circumspheres) of the triangulation triangles (tetrahedra) contain no point in $S$. One says that these balls are empty.
If the points of $S$ are in general position, then the Delaunay triangulation of $S$ is unique.
Here is an example of Delaunay triangulation of a set of ten 2D points. It illustrates the emptiness of two balls bounded by circumcircles.
IFrame('https://plot.ly/~empet/13497/', width=550, height=550)
An intuitive description of the alpha shape was given by Edelsbrunner and his coauthor in a preprint of the last paper mentioned above:
A huge mass of ice cream fills a region of 3D space, and the point set $S$ consists of hard chocolate pieces spread through the ice-cream mass. Using a sphere-shaped ice-cream spoon we carve out the ice cream while avoiding bumping into the chocolate pieces. At the end of this operation the region containing the chocolate pieces and the remaining ice cream is bounded by caps, arcs and points of chocolate. Straightening all round faces to triangles and line segments we get the intuitive image of the alpha shape of the point set $S$.
Now we give the steps of the computational alpha shape construction.
Let $S$ be a finite set of points from $\mathbb{R}^d$, in general position, $\mathcal{D}$ its Delaunay triangulation and $\alpha$ a positive number.
Select the d-simplexes of $\mathcal{D}$ (i.e. triangles in the case d=2, respectively tetrahedra for d=3) whose circumsphere has radius less than $\alpha$. These simplexes and their faces form a simplicial subcomplex of the Delaunay triangulation $\mathcal{D}$. It is denoted $\mathcal{C}_\alpha$, and called the $\alpha$-complex.
The $\alpha$-shape of the set $S$ is defined by its authors, either as the underlying space of the $\alpha$-complex, i.e. the union of all its simplexes or as the boundary of the $\alpha$-complex.
The boundary of the $\alpha$-complex is the subcomplex consisting of all k-simplexes, $0\leq k<d$, that are faces of a single $d$-simplex (these are called external faces).
In the ice-cream example the alpha shape was defined as the boundary of the alpha-complex.
The underlying space of the $\alpha$-complex is the region where the ice-cream spoon has no access, because its radius ($\alpha$) exceeds the radius of circumscribed spheres to tetrahedra formed by pieces of chocolate.
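As a small aside (this helper is mine, not part of the notebook, and it assumes the 2D case with 2-simplexes given as index triples), the external faces mentioned above can be extracted by keeping the edges that belong to exactly one triangle of the $\alpha$-complex:
from collections import Counter

def alpha_complex_boundary(simplexes):
    # count each undirected edge over all triangles; the boundary ("external faces")
    # consists of the edges that occur in exactly one triangle
    edge_count = Counter()
    for s in simplexes:
        for e in [(s[0], s[1]), (s[1], s[2]), (s[0], s[2])]:
            edge_count[tuple(sorted(e))] += 1
    return [e for e, c in edge_count.items() if c == 1]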
To get insight into the process of construction of an alpha shape we illustrate it first for a set of 2D points.
The following panel displays the Delaunay triangulation of a set of 2D points, and a sequence of $\alpha$-complexes (and alpha shapes):
IFrame('https://plot.ly/~empet/13479/', width=825, height=950)
We notice that the Delaunay triangulation has as boundary a convex set (it is a triangulation of the convex hull of the given point set).
Each $\alpha$-complex is obtained from the Delaunay triangulation, removing the triangles whose circumcircle has radius greater or equal to alpha.
In the last subplot the triangles of the $0.115$-complex are filled in with light blue. The filled in region is the underlying space of the $0.115$-complex.
The $0.115$-alpha shape of the given point set can be considered either the filled in region or its boundary.
This example illustrates that the underlying space of an $\alpha$-complex is neither convex nor necessarily connected. It can consist of several connected components (in our illustration above, $|\mathcal{C}_{0.115}|$ has three components).
In a family of alpha shapes, the parameter $\alpha$ controls the level of detail of the associated alpha shape. If $\alpha$ decreases to zero, the corresponding alpha shape degenerates to the point set, $S$, while if it tends to infinity the alpha shape tends to the convex hull of the set $S$.
In order to generate the alpha shape of a given set of 3D points corresponding to a parameter $\alpha$, the Delaunay triangulation, or the convex hull, we define an instance of the
go.Mesh3d class. The real value of the key
alphahull determines the mesh type to be generated:
alphahull=$1/\alpha$ generates the $\alpha$-shape, -1 corresponds to the Delaunay triangulation, and 0 to the convex hull of the point set.
The other parameters in the definition of a
Mesh3d are given here.
Mesh3d generates and displays an $\alpha$-shape as the boundary of the $\alpha$-complex.
An intuitive idea of how the topological structure changes as $\alpha=1/$
alphahull varies can be gained from the following three different alpha shapes of the same point set:
IFrame('https://plot.ly/~empet/13481/', width=900, height=950)
We notice in the subplots above that as
alphahull increases, i.e. $\alpha$ decreases, some parts of the alpha shape shrink and develop enclosed void regions. The last plotted alpha shape is a polytope that contains faces of tetrahedra, and patches of triangles.
In some cases as $\alpha$ varies it is also possible to develop components that are strings of edges and even isolated points.
Such experimental results suggested the use of alpha shapes in modeling molecular structure. A web search gives many results related to applications of alpha shapes in structural molecular biology.
Here is an alpha shape illustrating a molecular-like structure associated to a point set of 5000 points.
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools as tls
Load data:
pts = np.loadtxt('Data/data-file.txt')
x, y, z = zip(*pts)
Define two traces: one for plotting the point set and another for the alpha shape:
points = go.Scatter3d(mode='markers', name='', x =x, y= y, z= z, marker=dict(size=2, color='#458B00'))
simplexes = go.Mesh3d(alphahull =10.0, name = '', x =x, y= y, z= z, color='#90EE90', opacity=0.15)
axis = dict(showbackground=True, backgroundcolor="rgb(245, 245, 245)", gridcolor="rgb(255, 255, 255)", gridwidth=2, zerolinecolor="rgb(255, 255, 255)", tickfont=dict(size=11), titlefont =dict(size=12))
x_style = dict(axis, range=[-2.85, 4.25], tickvals=np.linspace(-2.85, 4.25, 5)[1:].round(1))
y_style = dict(axis, range=[-2.65, 1.32], tickvals=np.linspace(-2.65, 1.32, 4)[1:].round(1))
z_style = dict(axis, range=[-3.67, 1.4], tickvals=np.linspace(-3.67, 1.4, 5).round(1))
layout = go.Layout(title='Alpha shape of a set of 3D points. Alpha=0.1', width=500, height=500, scene=dict(xaxis=x_style, yaxis=y_style, zaxis=z_style))
fig = go.FigureWidget(data=[points, simplexes], layout=layout)
#fig
fig = go.FigureWidget(data=[points, simplexes], layout=layout)
#py.plot(fig, filename='3D-AlphaS-ex')
'https://plot.ly/~empet/13499'
IFrame('https://plot.ly/~empet/13499/', width=550, height=550)
We construct the alpha shape of a set of 2D points from the Delaunay triangulation, defined as a
scipy.spatial.Delaunay object.
from scipy.spatial import Delaunay
def sq_norm(v):  # squared norm
    return np.linalg.norm(v)**2
def circumcircle(points, simplex):
    A = [points[simplex[k]] for k in range(3)]
    M = [[1.0]*4]
    M += [[sq_norm(A[k]), A[k][0], A[k][1], 1.0] for k in range(3)]
    M = np.asarray(M, dtype=np.float32)
    S = np.array([0.5*np.linalg.det(M[1:, [0, 2, 3]]), -0.5*np.linalg.det(M[1:, [0, 1, 3]])])
    a = np.linalg.det(M[1:, 1:])
    b = np.linalg.det(M[1:, [0, 1, 2]])
    return S/a, np.sqrt(b/a + sq_norm(S)/a**2)  # center=S/a, radius=np.sqrt(b/a+sq_norm(S)/a**2)
Filter out the Delaunay triangulation to get the $\alpha$-complex:
def get_alpha_complex(alpha, points, simplexes):
    # alpha is the parameter for the alpha shape
    # points are the given data points
    # simplexes is the list of indices in the array of points
    # that define 2-simplexes in the Delaunay triangulation
    return filter(lambda simplex: circumcircle(points, simplex)[1] < alpha, simplexes)
pts = np.loadtxt('Data/data-ex-2d.txt')
tri = Delaunay(pts)
colors = ['#C0223B', '#404ca0', 'rgba(173,216,230, 0.5)']# colors for vertices, edges and 2-simplexes
Get data for the Plotly plot of a subcomplex of the Delaunay triangulation:
def Plotly_data(points, complex_s):
    # points are the given data points,
    # complex_s is the list of indices in the array of points defining 2-simplexes (triangles)
    # in the simplicial complex to be plotted
    X = []
    Y = []
    for s in complex_s:
        X += [points[s[k]][0] for k in [0, 1, 2, 0]] + [None]
        Y += [points[s[k]][1] for k in [0, 1, 2, 0]] + [None]
    return X, Y
def make_trace(x, y, point_color=colors[0], line_color=colors[1]):
    # define the trace for an alpha complex: vertices and edges
    return go.Scatter(mode='markers+lines',
                      name='',
                      x=x,
                      y=y,
                      marker=dict(size=6.5, color=point_color),
                      line=dict(width=1.25, color=line_color))
figure = tls.make_subplots(rows=1, cols=2, subplot_titles=('Delaunay triangulation', 'Alpha shape, alpha=0.15'), horizontal_spacing=0.1, )
This is the format of your plot grid: [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]
title = 'Delaunay triangulation and Alpha Complex/Shape for a Set of 2D Points'
figure.layout.update(title=title,
                     font=dict(family="Open Sans, sans-serif"),
                     showlegend=False,
                     hovermode='closest',
                     autosize=False,
                     width=800,
                     height=460,
                     margin=dict(l=65, r=65, b=85, t=120));
axis_style = dict(showline=True, mirror=True, zeroline=False, showgrid=False, showticklabels=True, range=[-0.1,1.1], tickvals=[0, 0.2, 0.4, 0.6, 0.8, 1.0], ticklen=5 )
for s in range(1, 3):
    figure.layout.update({'xaxis{}'.format(s): axis_style})
    figure.layout.update({'yaxis{}'.format(s): axis_style})
alpha_complex = list(get_alpha_complex(0.15, pts, tri.simplices))
X, Y = Plotly_data(pts, tri.simplices)       # get data for Delaunay triangulation
figure.append_trace(make_trace(X, Y), 1, 1)
X, Y = Plotly_data(pts, alpha_complex)       # data for alpha complex
figure.append_trace(make_trace(X, Y), 1, 2)
shapes = []
for s in alpha_complex:  # fill in the triangles of the alpha complex
    A = pts[s[0]]
    B = pts[s[1]]
    C = pts[s[2]]
    shapes.append(dict(path=f'M {A[0]}, {A[1]} L {B[0]}, {B[1]} L {C[0]}, {C[1]} Z',
                       fillcolor='rgba(173, 216, 230, 0.5)',
                       line=dict(color=colors[1], width=1.25),
                       xref='x2', yref='y2'))
figure.layout.shapes = shapes
py.plot(figure, filename='2D-AlphaS-ex', width=850)
'https://plot.ly/~empet/13501'
IFrame('https://plot.ly/~empet/13501', width=800, height=460)
|
Why don't people discuss the eigenstate of the field operator? For example, the real scalar field the field operator is Hermitian, so its eigenstate is an observable quantity.
The answer is simple. They are rarely realized in nature. Moreover, they are not stationary states, so such a state evolves in time into a state that contains fluctuations of the field variable $\phi$ over all of space, i.e. an eigenstate of $\hat{\phi}$ evolves into a non-eigenstate of $\hat{\phi}$. These fluctuations grow and spread with time. In practice, to prepare this state you would need to measure $\phi$ over all of space with considerable precision relative to the Compton wavelength of the field, and we know that on this scale fluctuations of the field start to show up as soon as the time evolution starts.
The fields that we probe classically are actually coherent states:$$|\phi_{cl}\rangle=\exp\left(\int \phi_{cl}(x)\hat{\phi}(x)\right)|0\rangle.$$This state is not an eigenstate of $\hat{\phi}$, so the problems of the first paragraph are avoided. Actually this state has
minimal fluctuations, and the uncertainty is constant in time. The expectation value of the field operator is:$$\langle\hat{\phi}(x)\rangle=\langle \phi_{cl}|\hat{\phi}(x)|\phi_{cl}\rangle=\phi_{cl}(x).$$You can check this directly from the definition of $|\phi_{cl}\rangle$.
Note that the vacuum state $|0\rangle$ is also a coherent state associated with the trivial classical solution $\phi_{cl}(x)=0$.
As $\phi(f)$ and $\pi(f)$, which are self-adjoint, satisfy the same commutation relations as $X$ and $P$, the closure of the space generated by polynomials of the former pair of operators applied to $\lvert 0\rangle$ is isomorphic to $L^2(\mathbb R)$. Therefore the spectrum of $\phi(f)$ and $\pi(f)$ is purely continuous and coincides with $\mathbb R$, and there are no proper eigenvectors, only formal ones, analogous to $\lvert x\rangle$ and $\lvert p\rangle$.
One thing missing in the other answers... people actually
do discuss eigenstates of the field operator, or at least they are important in QFT. A complete set of field eigenstates are used to prove that n-point functions can be written in terms of a path integral, which is a critical result. But they are not used as "states after a field measurement", as $|x\rangle$ and $|p\rangle$ are used in quantum mechanics. At least not in mainstream applications / that I know of.
In case you're curious to know how exactly they are used, I'll sketch it out below, with the real scalar field as an example. In deriving the path integral, it is necessary to write the identity in a basis of field eigenstates
$$1 = \int \mathcal{D}\phi \, |\phi_t(\vec{x})\rangle \langle \phi_t(\vec{x}) |$$ with (operators have hats, numbers do not) $$\hat{\phi}(t, \vec{x})|\phi_t(\vec{x})\rangle = \phi_t(\vec{x})|\phi_t(\vec{x})\rangle \,\,\,\,\,\,\,\,\, \forall \vec{x}$$
So at a given time $t$, each eigenvector $|\phi_t(\vec{x})\rangle$ is a simultaneous eigenstate of all field operators of different $\vec{x}$ but equal $t$. The eigenvalue $\phi_t(\vec{x})$ depends on which $\vec{x}$ is chosen in the field argument, so we write it as a function of $\vec{x}$. This defines a classical field (a map from $\mathbb{R}^3\to \mathbb{R})$. Operators can be simultaneously diagonalized if they commute*, and luckily the usual QFT commutation relations give exactly this for equal times:
$$[ \phi(t,\vec{x}), \phi(t, \vec{y})] = 0$$
The diagonalization process can be repeated for any time because the commutation relation above holds for any $t$.
*I think that strictly there are more things to worry about for infinite-dimensional operators, so you might want to take the commutation relations as an indication that they can be simultaneously diagonalized, rather than a proof. I don't know enough to expand on this.
|
Digital electronics is based on the ancient philosophy of
logic.
Logical variables are two-valued, or
binary—either true or false.
Logic variables are combined with a short list of
operators—AND, OR, and NOT.
We represent logic several ways—with equations, symbols, and truth tables.
Written by Willy McAllister.
Contents Background
Logic is a branch of philosophy. In the fourth century BCE the philosophers Plato and Aristotle sat around with their buddies and asked questions about truth. They tried to prove a series of statements were either true or false,
Arnie likes round things.
Apples are round. Therefore Arnie likes apples.
All squares are rectangles.
All rectangles have four sides. Therefore all squares have four sides.
For hundreds of years philosophers studied the nuances of true and false. They organized logic into categories and entertained each other with logic puzzles.
Then in the mid-1800’s English mathematician George Boole demonstrated how logic could be described with just three terms,
AND,
OR, and
NOT. He showed how logic could be described with mathematical notation. He invented an unusual logic algebra that we now call Boolean algebra. People thought this was interesting but for the next century it was taught only as a philosopher’s amusement.
Then in 1934, an MIT student named Claude Shannon spent a summer internship at Bell Telephone Laboratories. His job was to babysit a complicated analog computer called a differential analyzer. He had learned about Boole’s logic in a philosophy class. While watching the machine work for hours on end, Shannon made the connection between logic and electrical circuits. That realization is a key moment in the history of the digital world we live in today.
Boolean logic
Now we define the abstract notions of Boolean Logic. Don’t worry how circuits actually do this. We’ll get to that soon.
Variables
In Boole’s system a
logical variable is allowed to have two values, TRUE or FALSE, abbreviated as T and F. We often indicate TRUE with the symbol 1 and false with 0. Don’t think of 1 and 0 as numbers—yet—just think of them as two distinct symbols that are easy to tell apart. Some variables are Boolean, some are not.
Not everything can be modeled as a Boolean variable. For example,
“What day is it?”
is not TRUE or FALSE because it has seven possible answers (Sunday, Monday, Tuesday, Wednesday …). However, if we ask a slightly different question,
“Is today Friday?”
this can be modeled as a Boolean variable because the answer to this question is either TRUE or FALSE.
Operators
A
logical operator combines logic variables and produces another. There are three operators: AND, OR, and NOT. The AND operation
The output of AND is true if
all of its inputs are true.
Joe will get a good grade if he completes the work and writes his name on the page.
Joe completed the work. Joe forgot to write his name. Did Joe get a good grade?
The way the problem statement is written suggests the AND operator. Begin by defining some logic variables,
A = Did Joe complete the work? B = Did Joe write his name at the top? C = Did Joe get a good grade?
Work out the values of the variables,
A is TRUE. B is FALSE. C is FALSE because A AND B are not both true.
We can write the problem as a Boolean equation,
A $\cdot$ B = C $\qquad$ where the $\cdot$ symbol is one way to indicate AND.
This crisp little equation captures the entire meaning of the problem.
The AND operator can be indicated several ways in equations and programming languages,
AB $\qquad$ A $\cdot$ B $\qquad$ A & B $\qquad$ A && B $\qquad$ A $\land$ B
We can also draw the AND function with a logic symbol,
The OR operation
The output of OR is true if
any of its inputs are true.
Lily goes to the library on Tuesdays or if it is raining.
It is raining. It is Wednesday. Does Lily go to the library?
The problem statement suggests the OR operator. It also suggests these logic variables,
A = Is it raining?
B = Is it Tuesday? C = Does Lily go to the library?
A is TRUE. B is FALSE.
Therefore C is TRUE because at least one input is TRUE.
We write this in equation form,
A + B = C $\qquad$ where the + symbol is one way to indicate OR.
You will see the OR operator indicated like this in equations and programming languages,
A $+$ B $\qquad$ A || B $\qquad$ A $\lor$ B
We can draw the problem with the logic symbol for OR,
The NOT operation
The NOT operator works on a single input. The output is the opposite of the input.
If A is TRUE, then NOT A is FALSE.
If A is FALSE, then NOT A is TRUE.
It’s as simple as that.
The NOT operation is referred to as
logical inversion or negation. An electronic circuit that performs inversion is called an inverter.
You will see several different notations for the NOT operator,
$-$A $\qquad$ A$-$ $\qquad$ ~A $\qquad$ $\overline{\text{A}}$
The last one is pronounced A
bar.
Variable names might indicate negation with n, N, *, or ! as a prefix or suffix,
nReset $\qquad$ resetN $\qquad$ reset* $\qquad$ reset!
The last one is pronounced reset
bang! just for fun.
The symbol for an inverter introduces the
bubble notation,
The triangle shape is there so the bubble has something to attach to. The triangle is not the logic function. All the action is in the bubble.
If the input happens to be inverted, like $\overline{\text{A}}$, you might want to draw the bubble on the input side of the triangle, so the bubble is on the same side as $\overline{\text{A}}$,
Both variations of the inverter symbol mean exactly the same thing. The logic input variable is inverted going through the gate. You will see why this is a good idea when we study bubble matching and logical
assertion. The XOR operation
The
exclusive OR (XOR) is a variation of the OR operation. The output is true if either but not both of the inputs are true. Here’s an example of exclusive OR in a sentence,
I would like either the pie or the cake.
This means: I will have pie or I will have cake but I won’t have both. The key word is
either. That’s what tells you the OR is the exclusive flavor. The XOR is important when we add binary numbers.
In textbooks or in writing the XOR function is a plus sign in a circle, $A \oplus B$. Most programming languages don't have a symbol for XOR; you call a library function or assemble the operation from other operators. The language C++ uses the ^ caret symbol for bitwise-XOR. You will see this in Arduino code.
The symbol for XOR is a variation on the OR symbol,
That’s the four symbols for the basic gates. We will learn a bunch more soon.
Truth table
A
truth table is another way to understand the logic operations. It is common to use the 1/0 notation, 1 = true, 0 = false. Again, don’t think of 1 and 0 as numbers, yet. They are just two symbols easy to tell apart. A truth table lists all possible variations of the inputs and the outcome in the last column.
The truth table for AND looks like this,
This truth table has four rows because that’s how many distinct combinations of A and B there are. (Two input variables have $2^2=4$ combinations. Three variables would have $2^3=8$ possible combinations.) For a second, think about the 1’s and 0’s as numbers. Notice how similar the AND function is to multiplication. That is where the notation for AND comes from, AB or A$\cdot$B.
The truth table for OR looks like this,
The OR function sort of resembles addition—but not quite. That’s why the written form of OR is A+B.
And the XOR truth table—the only difference is the last row,
The XOR operation is closer to a true addition. Its main job in digital systems is doing the add operation.
I bet you can write out the truth table for NOT. It has two rows.
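If you want to check your tables, here is a short Python snippet (my addition, not part of the article) that prints the truth tables for the four operators:
from itertools import product

ops = {
    'AND': lambda a, b: a & b,
    'OR':  lambda a, b: a | b,
    'XOR': lambda a, b: a ^ b,
}

for name, op in ops.items():
    print(f"\n A B | {name}")
    for a, b in product([0, 1], repeat=2):   # all 2^2 input combinations
        print(f" {a} {b} |  {op(a, b)}")

print("\n A | NOT")
for a in [0, 1]:
    print(f" {a} |  {1 - a}")                # NOT of a 0/1 value is 1 - A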
Concept check Write logic equations representing these thoughts,
Sid saw a seagull and a seesaw.
show answer
Let A = seagull, B = seesaw, and C = what Sid sees.
C = A $\cdot$ B.
Mary went to the store to buy apples and bananas.
Burt wants either an iPhone or a Samsung phone.
I don’t ride a bicycle. What do I ride?
You write with a pen or a pencil.
show answers
Mary bought = apples $\cdot$ bananas
Burt wants = iPhone $\oplus$ Samsung
I ride = bicycle
You write with = pen + pencil
Create a truth table with three logical inputs.
hint: How many rows does a 3-input truth table have?
Then fill in columns for the output of 3-input AND and OR. show answer
3-input truth table
Pro tip: You want to be able to write out the input rows of a truth table really fast. Look at this truth table and find a pattern for each column that you can memorize. For example, the C column alternates 0 and 1. What pattern do you see for the A and B columns?
What is the equation represented by this logic diagram?
show answer
D = AB + reset
Extra credit: Create a truth table for this function. The 8 input rows are the same as before, with column headings A, B, reset. The output is D.
Summary
The logic operations are AND, OR, NOT (and XOR).
In digital logic they mean essentially the same as their everyday English meanings.
A truth table is a convenient way to list out the result of an operator.
We haven’t shown how to create these logic functions with electronic circuits, but that comes soon.
References
“A Mind At Play — How Claude Shannon Invented the Information Age”, Jimmy Soni and Rob Goodman, 2017. This is a biography about one of the most influential engineers of the 20th century. You may not have heard of him, but I guarantee he has had an influence on your life.
A Mathematical Theory of Communication - original paper, Claude E. Shannon,
Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July and October, 1948. This groundbreaking paper created the field of Information Theory. The term bit appears in print for the very first time near the end of page 1.
A mathematical theory of communication - Khan Academy video demonstrates the information redundancy in English.
|
The Erdos-Rado sunflower lemma
The problem
A
sunflower (a.k.a. Delta-system) of size [math]r[/math] is a family of sets [math]A_1, A_2, \dots, A_r[/math] such that every element that belongs to more than one of the sets belongs to all of them. A basic and simple result of Erdos and Rado asserts that Erdos-Rado Delta-system theorem: There is a function [math]f(k,r)[/math] so that every family [math]\cal F[/math] of [math]k[/math]-sets with more than [math]f(k,r)[/math] members contains a sunflower of size [math]r[/math].
(We denote by [math]f(k,r)[/math] the smallest integer that suffices for the assertion of the theorem to be true.) The simple proof giving [math]f(k,r)\le k! (r-1)^k[/math] can be found here.
The best known general upper bound on [math]f(k,r)[/math] (in the regime where [math]r[/math] is bounded and [math]k[/math] is large) is
[math]\displaystyle f(k,r) \leq D(r,\alpha) k! \left( \frac{(\log\log\log k)^2}{\alpha \log\log k} \right)^k[/math]
for any [math]\alpha \lt 1[/math], and some [math]D(r,\alpha)[/math] depending on [math]r,\alpha[/math], proven by Kostochka in 1996. The objective of this project is to improve this bound, ideally to obtain the Erdos-Rado conjecture
[math]\displaystyle f(k,r) \leq C^k [/math]
for some [math]C=C(r)[/math] depending on [math]r[/math] only. This is known for [math]r=1,2[/math](indeed we have [math]f(k,r)=1[/math] in those cases) but remains open for larger r.
Variants and notation
Given a family [math]\cal F[/math] of sets and a set S, the
star of S is the subfamily of those sets in [math]\cal F[/math] containing S, and the link of S is obtained from the star of S by deleting the elements of S from every set in the star. (We use the terms link and star because we do want to consider eventually hypergraphs as geometric/topological objects.)
We can restate the delta system problem as follows: f(k,r) is the maximum size of a family of k-sets such that the link of every set A does not contain r pairwise disjoint sets.
Let f(k,r;m,n) denote the largest cardinality of a family of k-sets from {1,2,…,n} such that the link of every set A of size at most m-1 does not contain r pairwise disjoint sets. Thus f(k,r) = f(k,r;k,n) for n large enough.
Conjecture 1: [math]f(k,r;m,n) \leq C_r^k n^{k-m}[/math] for some [math]C_r[/math] depending only on r.
This conjecture implies the Erdos-Rado conjecture (set m=k). The Erdos-Ko-Rado theorem asserts that
[math]f(k,2;1,n) = \binom{n-1}{k-1}[/math] (1)
when [math]n \geq 2k[/math], which is consistent with Conjecture 1. More generally, Erdos, Ko, and Rado showed
[math]f(k,2;m,n) = \binom{n-m}{k-m}[/math]
when [math]n[/math] is sufficiently large depending on k,m. The case of smaller n was treated by several authors culminating in the work of Ahlswede and Khachatrian.
Erdos conjectured that
[math]f(k,r;1,n) = \max( \binom{rk-1}{k}, \binom{n}{k} - \binom{n-r}{k} )[/math]
for [math]n \geq rk[/math], generalising (1), and again consistent with Conjecture 1. This was established for k=2 by Erdos and Gallai, and for r=3 by Frankl (building on work by Luczak-Mieczkowska).
A family of k-sets is
balanced (or k-colored) if it is possible to color the elements with k colors so that every set in the family is colorful.
Reduction (folklore): It is enough to prove the Erdos-Rado Delta-system conjecture for the balanced case. Proof: Divide the elements into k color classes at random and take only the colorful sets. The expected size of the surviving colorful family is [math]k!/k^k \cdot |\cal F|[/math].
Hyperoptimistic conjecture: The maximum size of a balanced collection of k-sets without a sunflower of size r is (r-1)^k.
Disproven for [math]k=3,r=3[/math]: set [math]|V_1|=|V_2|=|V_3|=3[/math] and use ijk to denote the 3-set consisting of the i^th element of V_1, the j^th element of V_2, and the k^th element of V_3. Then 000, 001, 010, 011, 100, 101, 112, 122, 212 is a balanced family of 9 3-sets without a 3-sunflower.
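A brute-force check of this construction is easy to run (my snippet, not from the wiki; it encodes the 3-set ijk as the set of pairs {(0,i),(1,j),(2,k)} and tests every triple of sets):
from itertools import combinations

family = ['000', '001', '010', '011', '100', '101', '112', '122', '212']
sets = [frozenset(enumerate(word)) for word in family]   # ijk -> {(0,i),(1,j),(2,k)}

def is_sunflower(triple):
    # a sunflower: every pairwise intersection equals the common core
    core = triple[0] & triple[1] & triple[2]
    return all((a & b) == core for a, b in combinations(triple, 2))

# expect False if the family is indeed 3-sunflower-free as claimed
print(any(is_sunflower(t) for t in combinations(sets, 3)))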
A
weak sunflower ( weak Delta-system) of size [math]r[/math] is a family of [math]r[/math] sets, [math] A_1,\ldots,A_r[/math], such that their pairwise intersections have the same size, i.e., [math] |A_i\cap A_j|=|A_{i'}\cap A_{j'}|[/math] for every [math] i\ne j[/math] and [math] i'\ne j'[/math]. If we denote the size of the largest family of [math]k[/math]-sets without an [math]r[/math]-weak sunflower by [math]g(k,r)[/math], by definition we have [math]g(k,r)\le f(k,r)[/math]. Also, if we denote by [math]R_r(k)-1[/math] the size of the largest complete graph whose edges can be colored with [math]r[/math] colors such that there is no monochromatic clique on [math]k[/math] vertices, then we have [math]g(k,r)\le R_r(k)-1[/math], as we can color the edges running between the [math]k[/math]-sets of our weak sunflower-free family with the intersection sizes. For all three functions only exponential lower bounds and factorial type upper bounds are known.
Denote by [math]3DES(n)[/math] the largest integer such that there is a group of size [math]n[/math] and a subset [math]S[/math] of size [math]3DES(n)[/math] without three
disjoint equivoluminous subsets, i.e., there is no [math]S=S_1\cup^* S_2\cup^* S_3\cup^* S_{rest}[/math] such that [math]\sum_{s\in S_1} s=\sum_{s\in S_2} s=\sum_{s\in S_3} s[/math]. Then [math]{3DES(n) \choose DES(n)} / n \le g(DES(n),3)[/math] holds, thus if [math]g(k,3)[/math] grows exponentially, then [math]3DES(n)=O(\log n)[/math]. Small values
Below is a collection of known constructions for small values, taken from Abbott-Exoo. Boldface stands for matching upper bound (and best known upper bounds are planned to be added to other entries). Also note that for [math]k[/math] fixed we have [math]f(k,r)=r^k+o(r^k)[/math] from Kostochka-Rödl-Talysheva.
r\k | 2 | 3 | 4 | 5 | 6 | ...k
3 | 6 | 20 | 54- | 160- | 600- | ~3.16^k
4 | 10 | 38- | 114- | 380- | 1444- | ~3.36^k
5 | 20 | 88- | 400- | 1760- | 8000- | ~4.24^k
6 | 27 | 146- | 730- | 3942- | 21316- | ~5.26^k
Threads
Polymath10: The Erdos Rado Delta System Conjecture, Gil Kalai, Nov 2, 2015. Inactive
Polymath10, Post 2: Homological Approach, Gil Kalai, Nov 10, 2015. Inactive
Polymath 10 Post 3: How are we doing?, Gil Kalai, Dec 8, 2015. Inactive
Polymath10-post 4: Back to the drawing board?, Gil Kalai, Jan 31, 2016. Active
Erdos-Ko-Rado theorem (Wikipedia article)
Sunflower (mathematics) (Wikipedia article)
What is the best lower bound for 3-sunflowers? (Mathoverflow)
Bibliography
Edits to improve the bibliography (by adding more links, Mathscinet numbers, bibliographic info, etc.) are welcome!
On set systems not containing delta systems, H. L. Abbott and G. Exoo, Graphs and Combinatorics 8 (1992), 1–9.
On finite Δ-systems, H. L. Abbott and D. Hanson, Discrete Math. 8 (1974), 1–12.
On finite Δ-systems II, H. L. Abbott and D. Hanson, Discrete Math. 17 (1977), 121–126.
Intersection theorems for systems of sets, H. L. Abbott, D. Hanson, and N. Sauer, J. Comb. Th. Ser. A 12 (1972), 381–389.
Hodge theory for combinatorial geometries, Karim Adiprasito, June Huh, and Eric Katz.
The Complete Nontrivial-Intersection Theorem for Systems of Finite Sets, R. Ahlswede, L. Khachatrian, Journal of Combinatorial Theory, Series A 76 (1996), 121–138.
On set systems without weak 3-Δ-subsystems, M. Axenovich, D. Fon-Der-Flaass, A. Kostochka, Discrete Math. 138 (1995), 57–62.
Intersection theorems for systems of finite sets, P. Erdős, C. Ko, R. Rado, The Quarterly Journal of Mathematics, Oxford, Second Series 12 (1961), 313–320.
Intersection theorems for systems of sets, P. Erdős, R. Rado, Journal of the London Mathematical Society, Second Series 35 (1960), 85–90.
On the Maximum Number of Edges in a Hypergraph with Given Matching Number, P. Frankl.
An intersection theorem for systems of sets, A. V. Kostochka, Random Structures and Algorithms 9 (1996), 213–221.
Extremal problems on Δ-systems, A. V. Kostochka.
On Systems of Small Sets with No Large Δ-Subsystems, A. V. Kostochka, V. Rödl, and L. A. Talysheva, Comb. Probab. Comput. 8 (1999), 265–268.
On Erdos' extremal problem on matchings in hypergraphs, T. Luczak, K. Mieczkowska.
Intersection theorems for systems of sets, J. H. Spencer, Canad. Math. Bull. 20 (1977), 249–254.
|
Since the Goldbach conjecture is in $\Pi_1^0$, if it were proven to be independent of Peano Arithmetic, it would follow that the Goldbach conjecture is
true (i.e. true in the standard model), since informally speaking, if it were indeed false, we would have a counterexample of the Goldbach conjecture, contradicting the claim that the Goldbach conjecture is independent.
This argument initially led me to the intuition that quantifiers (or at least $\forall$; I'm not sure whether the method applies equally to statements in $\Sigma^0_1$) allow us to remove a single level of undecidability, turning it into negation, while still leaving open the possibility that the decidability of the Goldbach conjecture is itself undecidable, in which case we would have no metamathematical proof of the truth of the Goldbach conjecture. However, how this argument works for other statements in the arithmetical hierarchy is not clear to me.
Certainly in $\Pi^0_0$, nothing can be done to reduce independence to truth or falsity via metamathematical means, so statements in $\Pi^0_0$ proven to be independent are "actually independent". However, I run into problems thinking about $\Pi^0_n$ for $ n > 1 $. My initial thought was that possibly, building on my intuition, $\Pi^0_n$ statements should reduce "n-fold undecidability" to truth, but this intuition does not seem to be correct. For example, this answer to a math overflow question claims that since the twin primes conjecture is a $\Pi^0_2$ statement, "Neither it nor its negation are obviously true if verified independent", plus, if I am interpreting this other answer correctly, we cannot actually prove n-fold undecidability for $n > 1$. My question then is, what are the possibilities for a general statement in $\Pi_n^0$ which has been proven to be independent? When can we reduce such a statement to truth or falsity using metamathematical means, and when can we say that it is "absolutely undecidable"?
|
I have area
A and the length of the base s. I need to describe the triangle of minimal perimeter using this information. From the given data I can find the height h, and the perimeter is equal to $ s + \sqrt{x^2 + h^2} + \sqrt{(x - s)^2 + h^2} $, where $x$ is the horizontal position of the apex (the foot of the altitude $h$). The problem is to choose $x$ so as to minimize the expression above. How do I do that?
If I understand the problem correctly, both $s$ and $h$ are fixed, and you want to choose $x$ so that $p(x)=s + \sqrt{x^2 + h^2} + \sqrt{(s-x)^2 + h^2}$ is minimized. Note that $x$ can be negative, in which case one of the base angles is obtuse.
Computing $p'(x)$ and simplifying gives a fraction whose numerator is $$ (x-s)\sqrt{h^2+x^2}+x \sqrt{h^2+(s-x)^2}.$$ Set equal to zero and solve for $x$, giving $x=\frac{s}{2}$. Thus the minimal-perimeter triangle is isosceles on its base $s$.
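As a quick sanity check, here is a small sympy sketch (not part of the original answer) confirming that $p'(x)$ vanishes at $x=s/2$:

import sympy as sp

x, s, h = sp.symbols('x s h', positive=True)
p = s + sp.sqrt(x**2 + h**2) + sp.sqrt((s - x)**2 + h**2)
print(sp.simplify(sp.diff(p, x).subs(x, s/2)))   # -> 0, so x = s/2 is the critical point
print(sp.simplify(p.subs(x, s/2)))               # -> the minimal perimeter, equal to s + sqrt(s**2 + 4*h**2)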
Do you know any other way to approach this problem?
For an alternative approach, note that the given area and base determine the altitude $h=2A/s\,$, so the third point must lie on a line parallel to the base and at distance $h$ from it. Also, the base is fixed, so minimizing the perimeter of the triangle is equivalent to minimizing the sum of distances to the two endpoints of the base.
In the case at hand, let $AB$ be the fixed side, the dashed line $L$ the parallel to $AB$ at distance $h$ on which the third vertex $C$ must lie, and $B'$ the symmetric of $B$ across $L$. Let $C = AB' \cap L\,$, then:
$\triangle ABC$ is isosceles in $C$;
for every $C' \in L$, the sum of distances $C'A+C'B = C'A+C'B'\ge AB' = CA+CB$.
|
There are many factors to be considered, such as covalent character and electron-electron interactions in ionic solids. But for simplicity, let us consider an ionic solid as a collection of positive and negative ions. In this simple view, an appropriate number of cations and anions come together to form a solid. Each ion experiences attraction from ions of opposite charge and repulsion from ions of the same charge. The Madelung constant is a property of the crystal structure and depends on the lattice parameters, anion-cation distances, and molecular volume of the crystal.
1D Crystal
Before considering a three-dimensional crystal lattice, we shall discuss the calculation of the energetics of a linear chain of ions of alternate signs (Figure \(\PageIndex{1}\)).
Figure \(\PageIndex{1}\): A hypothetical one-dimensional \(\ce{NaCl}\) lattice.
Let us select the positive sodium ion in the middle (at \(x=0\)) as a reference and let \(r_0\) be the shortest distance between adjacent ions (the sum of ionic radii). The Coulomb energy of this reference sodium ion due to all the other ions in the 1D lattice can be decomposed by proximity (or "shells").
Nearest Neighbors (first shell): This reference sodium ion has two negative chloride ions as its neighbors on either side at \(\pm r_0\), so the Coulombic energy of these interactions is \[ \underbrace{ \dfrac{-e^2}{4 \pi \epsilon_o r_o}}_{\text{left chloride ion}} + \underbrace{ \dfrac{-e^2}{4 \pi \epsilon_o r_o}}_{\text{right chloride ion}} = - \dfrac{2e^2}{4 \pi \epsilon_o r_o} \label{eq1}\]
Next Nearest Neighbors (second shell): Similarly, the repulsive energy due to the next two positive sodium ions at a distance of \(2r_0\) is \[ \underbrace{ \dfrac{+e^2}{4 \pi \epsilon_o (2r_o)}}_{\text{left sodium ion}} + \underbrace{ \dfrac{+e^2}{4 \pi \epsilon_o (2r_o)}}_{\text{right sodium ion}} = + \dfrac{2e^2}{4 \pi \epsilon_o (2r_o)} \label{eq2}\]
Next Next Nearest Neighbors (third shell): The attractive Coulomb energy due to the next two chloride ion neighbors at a distance \(3r_0\) is \[ \underbrace{ \dfrac{-e^2}{4 \pi \epsilon_o (3r_o)}}_{\text{left chloride ion}} + \underbrace{ \dfrac{-e^2}{4 \pi \epsilon_o (3r_o)}}_{\text{right chloride ion}} = - \dfrac{2e^2}{4 \pi \epsilon_o (3r_o)} \label{eq3}\]
and so on. Thus the total energy due to all the ions in the linear array is
\[ E = - \dfrac{2e^2}{4 \pi \epsilon_o r_o} + \dfrac{2e^2}{4 \pi \epsilon_o (2r_o)} - \dfrac{2e^2}{4 \pi \epsilon_o (3r_o)} + \ldots\]
or
\[ E= -\dfrac{e^2}{4 \pi \epsilon_o r_o} \left[ 2 \left (1 -\dfrac{1}{2} + \dfrac{1}{3} - \dfrac{1}{4} + \ldots \right) \right] \label{eq6}\]
We can use the following Maclaurin expansion
\[ \ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}- \frac{x^4}{4} + \cdots\]
to simplify the sum in the parenthesis of Equation \ref{eq6} as \(\ln (1+ 1)\) to obtain
\[ \begin{align} E &= -\dfrac{e^2}{4 \pi \epsilon_o r_o} \left[ 2 \ln 2 \right] \label{eq7} \\[4pt] &= -\dfrac{e^2}{4 \pi \epsilon_o r_o} M \end{align} \]
The first factor of Equation \ref{eq7} is the Coulomb energy for a single pair of sodium and chloride ions, while the \(2 \ln 2\) factor is the
Madelung constant (\(M = 2 \ln 2 \approx 1.386 \)) per molecule. The Madelung constant is named after Erwin Madelung and is a geometrical factor that depends on the arrangement of ions in the solid. If the lattice were different (when considering 2D or 3D crystals), then this constant would naturally differ.
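As a small aside, a minimal Python sketch (an illustration, not part of the original text) shows the partial sums of the series above approaching \(2 \ln 2 \approx 1.386\):

import math

def madelung_1d(n_terms):
    # Partial sum of 2*(1 - 1/2 + 1/3 - 1/4 + ...), the 1D Madelung constant.
    return 2 * sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))

for n_terms in (10, 100, 10000):
    print(n_terms, madelung_1d(n_terms))
print("2 ln 2 =", 2 * math.log(2))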
3D Crystal

In three dimensions the series presents greater difficulty and it is not possible to sum it as conveniently as in the case of the one-dimensional lattice. As an example, let us consider the \(\ce{NaCl}\) crystal. In the following discussion, let \(r\) be the distance between \(\ce{Na^{+}}\) and \(\ce{Cl^-}\) ions. The nearest neighbors of \(\ce{Na^{+}}\) are six \(\ce{Cl^-}\) ions at a distance \(\sqrt{1}\,r\), then 12 \(\ce{Na^{+}}\) ions at a distance \(\sqrt{2}\,r\), eight \(\ce{Cl^-}\) ions at \(\sqrt{3}\,r\), six \(\ce{Na^{+}}\) ions at \(\sqrt{4}\,r\), 24 \(\ce{Cl^-}\) ions at \(\sqrt{5}\,r\), and so on. Thus, the electrostatic energy of a single ion in the crystal, approximating the surrounding ions by point charges, is:
\[ E_{ion-lattice} = -\dfrac{Z^2e^2}{4\pi\epsilon_or} M \label{12.5.4}\]
For NaCl, the Madelung constant is given by a poorly converging series of interaction energies:
\[ M= \dfrac{6}{\sqrt{1}} - \dfrac{12}{\sqrt{2}} + \dfrac{8}{\sqrt{3}} - \dfrac{6}{\sqrt{4}} + \dfrac{24}{\sqrt{5}} - \ldots \label{21.5.5}\]
with
\(Z\) is the number of charges of the ions (e.g., 1 for NaCl), \(e\) is the charge of an electron (\(1.6022 \times 10^{-19}\; C\)), and \(4\pi \epsilon_o\) is \(1.11265 \times 10^{-10}\; C^2/(J\, m)\).
The Madelung constant depends on the structure type, and Equation \(\ref{21.5.5}\) is applicable only for the sodium chloride (i.e., rock salt) lattice geometry. Values for other structural types are given in Table \(\PageIndex{2}\). \(A\) is the number of anions coordinated to each cation and \(C\) is the number of cations coordinated to each anion.
Compound         Crystal Lattice   M         A : C   Type
NaCl             NaCl              1.74756   6 : 6   Rock salt
CsCl             CsCl              1.76267   8 : 8   CsCl type
CaF2             Cubic             2.51939   8 : 4   Fluorite
CdCl2            Hexagonal         2.244
MgF2             Tetragonal        2.381
ZnS (wurtzite)   Hexagonal         1.64132
TiO2 (rutile)    Tetragonal        2.408     6 : 3   Rutile
β-SiO2           Hexagonal         2.2197
Al2O3            Rhombohedral      4.1719    6 : 4   Corundum
There are other factors to consider for the evaluation of lattice energy and the treatment by Max Born and Alfred Lande led to the formula for the evaluation of lattice energy for a mole of
crystalline solid. The Born–Landé equation (Equation \(\ref{21.5.6}\)) is a means of calculating the lattice energy of a crystalline ionic compound and derived from the electrostatic potential of the ionic lattice and a repulsive potential energy term
\[ U= -\dfrac{N_A M Z^2e^2}{4\pi \epsilon_o r} \left( 1 - \dfrac{1}{n} \right) \label{21.5.6}\]
where
\(N_A\) is the Avogadro constant;
\(M\) is the Madelung constant for the lattice;
\(z^+\) is the charge number of the cation;
\(z^−\) is the charge number of the anion;
\(e\) is the elementary charge, \(1.6022 \times 10^{−19}\; C\);
\(ε_0\) is the permittivity of free space;
\(r_0\) is the distance to the closest ion;
\(n\) is the Born exponent, a number related to the electronic configurations of the ions involved (Table \(\PageIndex{3}\)); it typically lies between 5 and 12 and is determined experimentally.
Atom/Molecule   n
He              5
Ne              7
Ar              9
Kr              10
Xe              12
LiF             5.9
LiCl            8.0
LiBr            8.7
NaCl            9.1
NaBr            9.5
Example \(\PageIndex{1}\): \(\ce{NaCl}\)
Estimate the lattice energy for \(\ce{NaCl}\).
SOLUTION: Using the values given in the discussion above, the estimate is
\[\begin{align*} U_{NaCl} &= -\dfrac{(6.022 \times 10^{23}\, /mol) (1.74756 ) (1.6022 \times 10 ^{-19}\,C)^2}{ 4\pi \, (8.854 \times 10^{-12}\, C^2/(J\,m) ) (282 \times 10^{-12}\; m)} \left( 1 - \dfrac{1}{9.1} \right) \nonumber \\[4pt] &\approx - 756 \,kJ/mol\nonumber \end{align*} \nonumber\]
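As a quick numerical check, here is a minimal Python sketch of the same estimate (values as quoted in this section; the result lands within a couple of percent of the \(-756\; kJ/mol\) figure, with the small difference coming from rounding of \(r_0\) and \(n\)):

import math

N_A = 6.022e23     # Avogadro constant, 1/mol
M = 1.74756        # Madelung constant for rock salt
e = 1.6022e-19     # elementary charge, C
eps0 = 8.854e-12   # permittivity of free space, C^2/(J m)
r0 = 282e-12       # Na-Cl distance, m
n = 9.1            # Born exponent for NaCl

U = -N_A * M * e**2 / (4 * math.pi * eps0 * r0) * (1 - 1/n)   # J/mol
print(U / 1000, "kJ/mol")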
Much more should be considered in order to evaluate the lattice energy accurately, but the above calculation leads you to a good start. When methods to evaluate the energy of crystallization or lattice energy lead to reliable values, these values can be used in the Born-Haber cycle to evaluate other chemical properties, for example the electron affinity, which is really difficult to determine directly by experiment.
|
I am calculating the Gibbs energy of dissolving one mole of solid glucose in pure water such that the final solution has a volume of one liter and a concentration of one mole per liter. I have three different ways of calculating it, and I get two different answers.
The Gibbs energy is a state function, so it should not matter which path I take. At the beginning of the reaction, there is one mole of solid glucose and about one liter of pure water, and at the end of the reaction, there is one mole of glucose dissolved in one liter of solution.
I will use $\Delta_r G$ for the molar Gibbs energy of reaction (dimensions: energy/amount) and $\Delta G$ for the change in Gibbs energy from start to finish (extensive quantity, dimensions: energy).
1. Calculation using Gibbs energies of formation
This corresponds to two processes, turning the reactants into elements and then turning the elements into products. In the problem, all species are at standard state, so there are no correction terms for concentration.
$$\Delta G_{\text{total}} = \pu{1 mol} \Delta G_f(\text{Glucose(aq)}) - \pu{1 mol } \Delta G_f(\text{Glucose(s)})$$
$$= \pu{1 mol } \Delta_r G^\circ\text{(dissolution)}$$
2. Integrating over Gibbs energy
The reaction starts with no glucose in solution, and then the glucose concentration increases gradually until it reaches the final concentration. During this process, the Gibbs energy of reaction changes because it is concentration-dependent:
$$ \Delta_r G\text{(dissolution)} = \Delta_r G^\circ\text{(dissolution)} + R T \ln(Q)$$
I will use the concentration of glucose divided by 1 mol/L as the integration variable x. Q is equal to x. The amount of glucose dissolved is $c\ V = (x\ \pu{mol/L) }V = x\ \pu{ mol}$. We have to integrate $ \Delta_r G\text{(dissolution)}$ from zero to one:
$$\Delta G_{\text{total}} = \pu{1 mol }\int_0^1 \left[ \Delta_r G^\circ\text{(dissolution)} + R T \ln(x) \right] dx $$
Taking constants and constant factors out of the integral, we get:
$$\Delta G_{\text{total}} = \pu{1 mol } \Delta_r G^\circ\text{(dissolution)} + \pu{1 mol } R T \int_0^1 \ln(x) dx$$
The value of the integral is negative one, so overall we have:
$$\Delta G_{\text{total}} = \pu{1 mol } (\Delta_r G^\circ\text{(dissolution)} - R T) $$
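As a quick side check, here is a minimal sympy sketch (not part of the original question) confirming the value of the integral used above:

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.ln(x), (x, 0, 1)))   # evaluates to -1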
3. Running the reaction at a constant concentration of 1 M
Here, we will use a process that keeps the glucose concentration constant. We place a semi-permeable membrane into the pure water, separating it into two compartments. At the beginning, one compartment (the one in contact with the solid glucose) has a volume of zero. As glucose dissolves, we move the membrane, increasing the volume of the compartment so that the glucose concentration remains at 1 mol/L. We keep doing that over the course of the dissolution reaction until the volume of solution is one liter at the end (and the volume of pure water is zero).
Because all species are at standard state at all time, we can use the standard Gibbs energy of reaction without a term correcting for concentration. This is one component of the total change in Gibbs energy. The other one is work against the osmotic pressure difference between pure water and 1 M glucose:
$$ w = \Pi \times V = \Delta c R T \times V = \pu{1 mol } R T$$
This work represents the difference between dissolving 1 mol glucose into pure water and dissolving 1 mol glucose into a 1 M glucose solution, so we have to add (or subtract?) it to get the Gibbs energy of our original process.
Which calculation is correct, and where are the problems with the other ones?
Calculation 2 and 3 are off by a difference of $\pu{1 mol }R T$ compared to calculation 1.
I have two hunches where the problem might be. First, the concentration of water changes during the reaction (in an ideal solution, it would change by 1 mol/L, I think). I did not include the water in the first calculation, and I wonder if this is related to the discrepancy. Second, I realize that 1 M glucose in water is not an ideal solution, and that I should use activities rather than concentrations. I don't know what happens in an example with much lower concentrations.
|
Uniformly Accelerated Motion
Objects near the earth's surface are accelerated towards the centre of the earth at a rate of 9.8 m/sec². If we raise an object above the surface of the earth and then drop it, it starts from rest and its velocity increases by 9.8 m/sec for each second it falls towards the earth's surface, until the object strikes the ground.
What is Uniformly Accelerated Motion?
If the acceleration always remains constant, then that acceleration is called uniform acceleration. A movement with uniformly increasing or decreasing speed is called Uniformly Accelerated Motion. An object is said to be moving with uniform acceleration if equal changes in velocity take place in equal intervals of time, however small these intervals may be. In uniform acceleration, the velocity changes at a uniform rate.
The acceleration of a freely falling object due to gravitation is a uniform acceleration. A constant net force acts on an object undergoing uniform acceleration, so both the magnitude and the direction of the acceleration remain constant. Thus, uniform acceleration occurs when the velocity of an object changes at a constant rate.
In the above figure, uniform acceleration is shown by the successive change of velocity with time along a straight line. A uniform acceleration of 9 m/sec² means that the velocity of the body changes by 9 m/sec in each second, in the same direction.
How to find Uniformly Accelerated Motion?
Problem: A car was travelling at a speed of 4.9 m/sec when the driver saw a cat on the road and slammed on the brakes. After 3 seconds the car came to a stop. How far did the car travel from the point where the brakes were first applied to the point where the car stopped?
Solution: Given,
Initial Velocity (u) = 4.9 m/ sec
Final Velocity (V) = 0 m/ sec [∵ After some time car came to stop],
Time (t) = 3 sec
Distance (s) =?
Using Kinematic Relation:
\(\text{Distance}\,\,(s)=\frac{(V+u)}{2}\times t\).
\(\text{Distance}\,\,(s)=\frac{(4.9+0)}{2}\times 3=7.35\,m \).
Therefore, the car travelled 7.35 m before stopping.
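As a quick numerical check (a minimal Python sketch, not part of the original lesson):

u, v, t = 4.9, 0.0, 3.0          # initial velocity (m/sec), final velocity (m/sec), time (sec)
distance = (u + v) / 2 * t       # distance = (V + u)/2 * t
acceleration = (v - u) / t       # the constant (negative) acceleration while braking
print(distance, acceleration)    # 7.35 m and about -1.63 m/sec^2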
|
If $0 \rightarrow X \xrightarrow{i} Y \xrightarrow{\pi} Z \rightarrow 0$ is a short exact sequence and $W$ is a vector space, then the sequence $$0 \rightarrow X \otimes W \xrightarrow{i\otimes \text{id}_W}Y \otimes W\xrightarrow{\pi \otimes \text{id}_W}Z \otimes W \rightarrow 0 $$ is also exact. In particular,if $X \subset Y$ is a subspace and $W$ is arbitrary, then show that $$\frac{Y \otimes W}{X \otimes W} \simeq \frac{Y}{X} \otimes W$$
Before proving it, here is a small lemma that we shall need.
If $\phi:W \to Y$ and $\psi: X \to Z$ are injective linear maps, then $$\phi \otimes \psi: W \otimes X \to Y \otimes Z$$ is also injective.
Proof: Suppose that $\{w_i\}_{i \in I}$ is a basis for $W$. Let $v \in W \otimes X$. Then there exists a unique family $\{x_i\}_{i \in I} \subset X$, with all but finitely many $x_i$ equal to zero, such that $v =\sum_{i \in I}w_i \otimes x_i$; after relabeling, say the possibly nonzero ones are $x_1,\ldots,x_n$. Since $\phi$ is injective, $\{\phi(w_i)\}_{i \in I}$ is a linearly independent set in $Y$. If $\phi \otimes \psi(v)=0$, then we have $\sum_{i=1}^n \phi(w_i) \otimes \psi(x_i)=0$. Let $y_i^* \in Y^*$ be such that $y_i^*(\phi(w_j))=\delta_{ij}$ for each $i=1,2,\ldots,n$ and $j=1,2,\ldots,n$. Let $z^* \in Z^*$ be arbitrary. Then $$0=y_j^* \otimes z^*\left(\sum_{i=1}^n \phi(w_i) \otimes \psi(x_i)\right)=z^*(\psi(x_j))$$
Since this holds for all such $z^*$, we must have $\psi(x_j)=0$ for each $j=1,2,\ldots,n$, which further implies that $x_j=0$ for each $j$, as $\psi$ is one to one. Thus, $v=0$.
From the above lemma, we see that $i \otimes \text{id}_W$ is one-to-one. Since $\pi$ is surjective, $\pi \otimes \text{id}_W$ is also surjective. Now we just need to show the exactness in the middle. Now suppose that $\{w_i\}_{i \in I}$ is a basis for $W$. Let $v \in \text{Image}(i \otimes \text{id}_W)$.Then there exists unique $\{y_i\}_{i \in I} \subset Y$ such that $v=\sum_{i \in I} y_i \otimes w_i$. Also there exists $u \in X \otimes W$ such that $v=i \otimes \text{id}_W (u)$. Moreover there exists unique $\{x_i\}_{i \in I}$ such that $u=\sum_{i \in I}x_i \otimes w_i$. Thus, we have $$\sum_{i \in I}y_i\otimes w_i=i \otimes \text{id}_W\left(\sum_{i \in I}x_i \otimes w_i \right)=\sum_{i \in I}i(x_i) \otimes w_i$$ which gives that $$\sum_{i \in I}\left(y_i-i(x_i)\right) \otimes w_i=0$$ By a similar argument as we did above, since $\{w_i\}_{i \in I}$ are linearly independent, we have that $i(x_i)=y_i$ for each $i \in I$. Since $\text{Image}(i)=\text{Kernel}(\pi)$, we have $\pi(y_i)=0$ for each $i \in I$. Thus, $\pi \otimes \text{id}_W(v)=0$.
For the other direction suppose that $v \in \text{Kernel}(\pi \otimes \text{id}_W)$. Then $v$ can be written as $\sum_{i \in I}y_i \otimes w_i$ for a unique $\{y_i\}_{i \in I} \subset Y$. Hence we have $\left( \pi \otimes \text{id}_W \right)(v)=\sum_{i \in I}\pi(y_i) \otimes w_i=0$. Again, since $\{w_i\}_{i \in I}$ are linearly independent, $\pi(y_i)=0$. So, $y _i \in \text{Kernel}(\pi)=\text{Image}(i)$ and hence can be written as $i(x_i)$ for some $x_i \in X$. Thus, $$v=\sum_{i \in I}i(x_i) \otimes w_i=i \otimes \text{id}_W \left(\sum_{i \in I}x_i \otimes w_i\right)$$
For the second part: We show the isomorphism between $\frac{Y \otimes W}{X \otimes W}$ and $\frac{Y}{X} \otimes W$ by exhibiting a map $\frac{Y \otimes W}{X \otimes W} \to \frac{Y}{X} \otimes W$ and a map $\frac{Y}{X} \otimes W \to \frac{Y \otimes W}{X \otimes W}$ that are mutually inverse. How do I show this?
Thanks for the help!!
|
What you have is a MultinormalDistribution. The quadratic and linear forms in the exponential can be rewritten in terms of $-\frac12(\vec{x}-\vec{\mu})^\top\Sigma^{-1}(\vec{x}-\vec{\mu})$ where $\vec{\mu}$ represents the mean and $\Sigma$ the covariance matrix, see the documentation.
With this, you can do integrals of the type given in the question by invoking Expectation, as in this example:
Expectation[
x^2 y^3, {x, y} \[Distributed]
MultinormalDistribution[{μ1, μ2},
{{σ1^2, ρ σ1 σ2},
{ρ σ1 σ2, σ2^2}}]]
The result is:
$\mu_1^2 \mu_2^3+\mu_2^3 \sigma_1^2+6 \mu_1 \mu_2^2 \rho \sigma_1 \sigma_2+3 \mu_1^2 \mu_2 \sigma_2^2+3 \mu_2 \sigma_1^2 \sigma_2^2+6 \mu_2 \rho^2 \sigma_1^2 \sigma_2^2+6 \mu_1 \rho \sigma_1 \sigma_2^3$
Edit
Regarding the normalization prefactor mentioned in Sjoerd's comment, you can use the fact that for any dimension $n$
$\iint\exp(-\frac{1}{2}\vec{z}^\top \Sigma^{-1}\vec{z})\mathrm dz^n = (2\pi)^{n/2}\sqrt{\det(\Sigma)}$
Hopefully these hints will be enough for you to fill in the missing linear-algebra steps to make the connection to your given matrix.
Edit 2
In response to the comment by chris, for polynomials as prefactors one can also use the slightly simpler but equivalent form
Moment[
MultinormalDistribution[{μ1, μ2},
{{σ1^2, ρ σ1 σ2},
{ρ σ1 σ2, σ2^2}}],
{2, 3}]
This is the same example as above, with the powers of x and y appearing in the second argument. See the documentation for Moment.
The difference between Moment and Expectation is that Moment is restricted to the expectation values of polynomials.
Edit 3
Before going on with the symbolic manipulations that I assumed are desired here, let me also point out that you can do your integrals pretty straightforwardly if your integrand contains no symbolic parameters. Then you just need to do a numerical integral by replacing Integrate with NIntegrate.
But now back to the symbolic part:
A follow-up question arose how to complete the square in the exponential to get to the standard form of the multinormal distribution, starting from a form like this:
$$\exp(\,\vec{x}^\top A\vec{x}+\vec{v}^\top\vec{x})$$
The matrix $A$ in the exponential is symmetric, $A^\top=A$, and negative definite (so that the integral converges). Therefore $A$ is invertible, and the inverse is symmetric,
$$\left(A^{-1}\right)^\top=A^{-1}$$
With this, you can verify
$$\left(\vec{x}+\frac{1}{2}A^{-1}\vec{v}\right)^\top A\left(\vec{x}+\frac{1}{2}A^{-1}\vec{v}\right)=\vec{x}^\top A\vec{x}+\vec{v}^\top\vec{x}+\frac{1}{4}\vec{v}^\top A^{-1}\vec{v} $$
by directly multiplying out the factors on the left. Therefore,
$$\exp(\vec{x}^\top A\vec{x}+\vec{v}^\top \vec{x})=\exp(\left(\vec{x}-\vec{\mu}\right)^\top A\left(\vec{x}-\vec{\mu}\right)-\frac{1}{4}\vec{v}^\top A^{-1}\vec{v})$$
where
$$\vec{\mu}\equiv-\frac{1}{2}A^{-1}\vec{v} $$
Compare this to the standard form of the Gaussian integral, and you see that in the notation of Mathematica's documentation
$$A \equiv -\frac{1}{2} \Sigma^{-1}$$
and our integral differs from the standard Gaussian one by a factor
$$\exp(-\frac{1}{4}\vec{v}^\top A^{-1}\vec{v})$$
Now we have all the pieces that are needed, except that you still have to calculate the inverse matrix $A^{-1} \equiv -2\Sigma$, using Inverse[mat], if I go back to your original notation where the matrix $A$ is called mat.
Edit 4:
In view of the other answers, I put together the above steps in a module so that my approach can be compared more easily to the alternatives. The result is quite compact and is not significantly slower than the fastest alternative (by @ybeltukov):
gaussMoment[fPre_, fExp_, vars_] :=
Module[{coeff, dist, ai, μ, norm},
coeff = CoefficientArrays[fExp, vars, "Symmetric" -> True];
ai = Inverse[2 coeff[[3]]];
μ = -ai.coeff[[2]];
dist = MultinormalDistribution[μ, -ai];
norm = 1/PDF[dist, vars] /. Thread[vars -> μ];
Simplify[
norm Exp[1/2 coeff[[2]].μ + coeff[[1]]] Distribute@
Expectation[fPre, vars \[Distributed] dist]]]
The normalization factor can be easily obtained from the PDF. I used the same approach as @ybeltukov to extract the matrix $A$ from the exponent, except that I added a factor of $2$ at that stage to prevent that factor from popping up twice at later points.
Here are some tests:
RepeatedTiming[
gaussMoment[(x^2 + x^4 + x^6), -(x - 1)^2, {x}]]
$$\left\{0.0023,\frac{223 \sqrt{\pi }}{8}\right\}$$
RepeatedTiming[
gaussMoment[(x^2 + x y) , -(x - a1)^2 - (y - a2)^2 - (x - y)^2, {x, y}]]
$$\left\{0.0014,\frac{\pi \left(4 \text{a1}^2+6 \text{a1}
\text{a2}+2 \text{a2}^2+3\right) e^{-\frac{1}{3}
(\text{a1}-\text{a2})^2}}{6 \sqrt{3}}\right\}$$
This is about four orders of magnitude faster than doing the integrals using plain
Integrate.
One other big advantage (in addition to its simplicity and speed) is that it can deal with non-polynomial prefactors. Here is an example (it takes longer to run, but the other methods cannot do it at all):
gaussMoment[
Sin[2 Pi x] Cos[Pi x] , -(x - a1)^2 - (y - a2)^2 - (x -
y)^2, {x, y}]
$$\frac{i \pi e^{-\text{a1}^2-\frac{\text{a2}^2}{2}}
\left(e^{\frac{1}{6} (2 \text{a1}+\text{a2}-i \pi
)^2}-e^{\frac{1}{6} (2 \text{a1}+\text{a2}+i \pi
)^2}+e^{\frac{1}{6} (2 \text{a1}+\text{a2}-3 i \pi
)^2}-e^{\frac{1}{6} (2 \text{a1}+\text{a2}+3 i \pi
)^2}\right)}{4 \sqrt{3}}$$
|
Let $f: A\rightarrow B$ be a function and consider a subset $Y\subseteq B$.
Is $f(f^{-1}(Y)) = Y$ always true?
No, it is not always true; here is a counterexample. Let $A = B = \mathbb{R}$ and let $f(x) = x^2$. Let $Y = \{x \in \mathbb{R} : x \leq 0\}$. Then $$f(f^{-1}(Y))= f(\{0\}) = \{0\} \subsetneq Y.$$
It's always true that $f(f^{-1}(Y)) \subset Y$, as if $x \in f^{-1}(Y)$ then by definition $f(x) \in Y$.
If $Y \subset f(A)$, then it is true, as if $y \in Y$, then since $Y \subset f(A)$ there exists some $x \in A$ such that $f(x) = y$, thus $y = f(x)\in f(f^{-1}(Y))$. Therefore $$Y \subset f(f^{-1}(Y)).$$
No. For example, set $A=B=Y=\mathbb{N}$, and $f:x\mapsto 2x$. Then $f^{-1}(Y)=\mathbb{N}$, but $$\mbox{$f(\mathbb{N})=\{$evens$\}\subsetneq Y$.}$$
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
If the natural density of $A = \{a_i\}$ exists, then we can show that it must be zero.
Let $\displaystyle S_{n} = \frac{|A \cap [1,n]|}{n}$
Now $\displaystyle \{\frac{n}{a_n}\}$ is a subsequence of $S_{n}$ and so if the limit is $\displaystyle 2\delta > 0 $ then we have that for all $\displaystyle n > N_0$, $\displaystyle \frac{n}{a_n} > \delta$ and so $\displaystyle \frac{1}{a_n} > \frac{\delta}{n}$ for all $\displaystyle n > N_0$ and so $\displaystyle \sum \frac{1}{a_n}$ diverges.
The main problem is actually showing that the limit exists.
It is easy to show that $\liminf S_n$ is zero: if the $\liminf$ were $\displaystyle 2\delta > 0$, then for all $n > N_{0}$ we would have $S_{n} > \delta$, and an argument similar to the above works.
Now suppose $\displaystyle \limsup S_n = 2\delta > 0$. Then there is a subsequence $\displaystyle S_{N_1}, S_{N_2}, ..., S_{N_k}, \dots $ which converges to $\displaystyle 2\delta$.
Now we can choose the subsequence so that $\displaystyle S_{N_i} > \delta$ and $\displaystyle N_{k+1} > \frac{2N_{k}}{\delta}$
Now the number of elements of $\displaystyle A$ in the interval $\displaystyle (N_{k}, N_{k+1}]$ is at least $\displaystyle \delta N_{k+1} - N_k \ge \delta N_{k+1} - \frac{\delta N_{k+1}}{2} \ge \frac{\delta N_{k+1}}{2}$, and so the sum of reciprocals over that interval is at least $\displaystyle \frac{\delta N_{k+1}}{2} \cdot \frac{1}{N_{k+1}} = \displaystyle \frac{\delta}{2}$
And so the sum of reciprocals must diverge.
Hence $\displaystyle \limsup S_n = 0 = \liminf S_n$ and thus $\displaystyle \lim S_n = 0$ and thus the natural density is zero.
|
Here, we introduce — and outline a solution to — a generalized SIR model for infectious disease. This is referenced in our following post on measles and vaccination rates. Our generalized SIR model differs from the original SIR model of Kermack and McKendrick in that we allow for two susceptible sub-populations, one vaccinated against disease and one not. We conclude by presenting some python code that integrates the equations numerically. An example solution obtained using this code is given below.
The model
The equations describing our generalized SIR model are
\begin{eqnarray}\label{eq1} \dot{S}_{U} &=& - b_{U} S_{U} I\\ \label{eq2} \dot{S}_{V} &=& - b_{V} S_{V} I\\ \label{eq3} \dot{R} &=& k I\\ 1 &=& I + R + S_U + S_V \label{Ieq} \end{eqnarray} Here, $S_{U}$, $S_{V}$, $I$, and $R$ are population fractions corresponding to those unvaccinated and as yet uninfected, vaccinated and as yet uninfected, currently infected and contagious, and once contagious but no longer (recovered, perhaps), respectively. The first two equations above are instances of the law of mass action. They approximate the infection rates as being proportional to the rates of susceptible-infected individual encounters. We refer to $b_{U}$ and $b_{V}$ here as the infection rate parameters of the two subpopulations. The third equation above approximates the dynamics of recovery: The form chosen supposes that an infected individual has a fixed probability of returning to health each day. We will refer to $k$ as the recovery rate parameter. The final equation above simply states that the subpopulation fractions have to always sum to one.

Parameter estimation
We can estimate the values $b_{U}$ and $b_{V}$ by introducing a close contact number ($ccn$) variable, which is the average number of close contacts that individual infected, contagious people make per day. As a rough ball park, let us suppose that $ccn \approx 3$. According to the CDC, an un-vaccinated person making close contact with someone with measles has a 90$\%$ chance of contracting the illness. On the other hand, those who have been vaccinated a single time have a 95$\%$ chance of being immune to the disease. Let’s estimate that the combined population of individuals who have been vaccinated have a 1$\%$ chance of contracting the illness upon close contact. These considerations suggest
\begin{eqnarray} b_{U} \approx 3 \times 0.9 = 2.7, \ \ b_{V} \approx 3 \times 0.01 = 0.03 \end{eqnarray} The value of $k$ can be simply estimated using the fact that infected individuals are only contagious for about $8$ days, only four of which occur before the rash appears. Assuming those who are showing symptoms quickly stop circulating, this suggests about five “effectively contagious” days, or \begin{eqnarray} k \approx 1/5 = 0.2. \end{eqnarray} Note that here and elsewhere, we measure time in units of days.
It’s important to note that, although the qualitative properties of the solutions to our model are insensitive to parameter value variations, this is not true for the numerical values that it predicts. We have chosen parameter values that seem reasonable to us. Further, with these choices, many of the model’s key quantitative values line up with corresponding CDC estimates. Those interested can experiment to see what sort of flexibility is allowed through modest parameter variation.
Solution by quadrature
Equations (\ref{eq1}-\ref{eq3}) give
\begin{eqnarray}\label{Svals} S_{U} = S_{U0} e^{ - \frac{b_{U} R}{k}}, \ \ \ S_{V} = S_{V0} e^{- \frac{b_{V} R}{k}}. \end{eqnarray} Combining with (\ref{Ieq}) and integrating gives \begin{eqnarray} \frac{\dot{R}}{k} =I_0 -S_{U0} \left [ e^{ - \frac{b_{U} R}{k}}- 1 \right ] - S_{V0} \left [e^{ - \frac{b_{V} R}{k}}- 1 \right ] - R \end{eqnarray} Integrating again, \begin{eqnarray} kt = \int_{0}^R \frac{d R^{\prime}}{I_0 -S_{U0} \left [ e^{ - \frac{b_{U} R^{\prime}}{k}}- 1 \right ] - S_{V0} \left [e^{ - \frac{b_{V} R^{\prime}}{k}}- 1 \right ] - R^{\prime}} \label{solution}. \end{eqnarray} This implicitly defines $R$ as a function of time.

Small time behavior
At small $t$, $R$ is also small, so (\ref{solution}) can be approximated as
\begin{eqnarray} k t = \int_{0}^R \frac{d R^{\prime}}{I_0 + \left [ \frac{ b_{U} S_{U0}}{k} +\frac{ b_{V} S_{V0}}{k} - 1 \right ]R^{\prime}}. \end{eqnarray} This form can be integrated analytically. Doing so, and solving for $R$, we obtain \begin{eqnarray} R = \frac{k}{b_{U} S_{U0} + b_{V} S_{V0} - k} \left \{e^{ (b_{U} S_{U0} + b_{V} S_{V0} - k )t } -1 \right \}, \ \ \ I = I_0 e^{ (b_{U} S_{U0} + b_{V} S_{V0} - k )t}. \end{eqnarray} Early disease spread is characterized by either exponential growth or decay, governed by the sign of the parameter combination $b_{U} S_{U0} + b_{V} S_{V0} - k$: a phase transition!

Total contractions
The total number of people infected in an outbreak can be obtained by evaluating $R$ at long times, where $I = 0$. In this limit, using (\ref{Ieq}) and (\ref{Svals}), we have
\begin{eqnarray} S_{U0} e^{- \frac{b_{U} R}{k}}+ S_{V0} e^{ - \frac{b_{V} R}{k}}+ R = 1. \end{eqnarray} This equation can be solved numerically to obtain the total contraction count as a function of the model parameters and initial conditions. A plot against $S_{U0}$ of such a solution for our measles-appropriate parameter estimates is given in our next post.
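For concreteness, here is a small Python sketch (an illustration using the parameter estimates above and an assumed initial condition $S_{U0}=0.2$, $I_0=0.001$, not code from the original post) that solves this equation for the total fraction ever infected using scipy's brentq root finder:

import numpy as np
from scipy.optimize import brentq

b_U, b_V, k = 2.7, 0.03, 0.2    # infection/recovery rate parameters estimated above
I0, S_U0 = 0.001, 0.2           # assumed initial conditions
S_V0 = 1.0 - I0 - S_U0

def residual(R):
    # Difference between the left- and right-hand sides of the long-time equation.
    return S_U0 * np.exp(-b_U * R / k) + S_V0 * np.exp(-b_V * R / k) + R - 1.0

# residual(0) = -I0 < 0 while residual(1) > 0, so a root lies in (0, 1).
R_total = brentq(residual, 0.0, 1.0)
print("total fraction ever infected:", R_total)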
Numerical integration in python

Below, we provide code that can be used to integrate (\ref{eq1}-\ref{Ieq}). The plot shown in our introduction provides one example solution. It’s quite interesting to see how the solutions vary with parameter values, and we suggest that those interested try it out.
#Solving the SIR model for infectious disease. JSL 2/18/2015
import math

ccn = 3     #``close contact number" = people per day
            #interacting closely with typical infected person
k = 1./5    #Rate of 'recovery' [1].
b1 = ccn*0.9    #Approximate infection rate, un-vaccinated [3].
b2 = ccn*0.01   #Approximate infection rate, vaccinated [4].

#Initial conditions (fraction of people in each category)
I0 = 0.001          #initial population fraction infected.
S10 = 0.2           #population fraction unvaccinated.
S20 = 1 - I0 - S10  #population fraction vaccinated.
R0 = 0.0            #initial recovered fraction.

dt = 0.01   #integration time step
days = 100  #total days considered

I = [I0 for i in range(int(days/dt))]
S1 = [S10 for i in range(int(days/dt))]
S2 = [S20 for i in range(int(days/dt))]
R = [R0 for i in range(int(days/dt))]

for i in range(1, int(days/dt)):
    S1[i] = S1[i-1] - b1 * S1[i-1] * I[i-1] * dt
    S2[i] = S2[i-1] - b2 * S2[i-1] * I[i-1] * dt
    I[i] = I[i-1] + (b1 * S1[i-1] * I[i-1] + \
        b2 * S2[i-1] * I[i-1] - k * I[i-1]) * dt
    R[i] = R[i-1] + k * I[i-1] * dt

time = [dt * i for i in range(0, int(days/dt))]

%pylab inline
plt.plot(time, I, color = 'red')
plt.plot(time, S1, color = 'blue')
plt.plot(time, S2, color = 'green')
plt.plot(time, R, color = 'black')
plt.plot(time[:1400], [I0 * math.exp((b1*S10 + b2*S20 - k)*t) \
    for t in time[:1400]], color = 'purple')
plt.axis([0, 100, 10**(-4), 1])
plt.yscale('log')
plt.xlabel('time [days]')
plt.ylabel('population %\'s')
plt.show()

#[1] Measles patients are contagious for eight days,
#    four of which are before symptoms appear. [2]
#[2] http://www.cdc.gov/measles/about/transmission.html
#[3] Assume infected have close contact with three people/day.
#    90% of the un-vaccinated get sick in such situations.
#[4] Single vaccination gives ~95% immunity rate [5]. Many
#    have two doses, which drops the rate to very low.
#[5] http://www.cdc.gov/mmwr/preview/mmwrhtml/00053391.htm
|
The key here is that the potential comes from the fact that there are losses. Consider a circuit element like a resistor; if instead this were an ideal wire, then the work needed to move a charge (neglecting external fields such as gravity) would be zero, because there is no force applied to the charge. Now if we replace this ideal wire with a resistor, a force is needed to move the charge across the element due to the dissipation of energy (to heat in this case). This is much like pushing a block up a frictionless hill vs. across a frictionless table.
We can then derive the expression for work from the definition:
$$ W = \int_a^b \mathbf{F} \cdot \mathbf{\mathop{dr}} $$
Where $\mathbf{\mathop{dr}}$ is the displacement along the path.
Then since $\mathbf{F} = Q \mathbf{E}$ we get:
$$ W = Q \int_a^b \mathbf{E} \cdot \mathbf{\mathop{dr}} $$
But for a circuit element, the integral is the potential difference between $a$ and $b$. So we can write this as:
$$W = QV$$
Or, for a battery, where we call the potential difference the EMF $\epsilon$:
$$ W = Q\epsilon$$
As you wanted to show.
|
I have asked this question before, in the sense of asking what high-frequency and low-frequency components signify in an image, and I got satisfactory answers. Now I want to know how I can get the high-frequency and the low-frequency component separately from the image.
I mean, what changes do I have to make in the equations of bilateral filters and their implementation in MATLAB so that I can get both the high-frequency and the low-frequency component of that image?
Because in my experiment I need both the high-frequency and the low-frequency component of an image.
The bilateral filter takes a weighted sum of the pixels in a local neighborhood; the weights depend on both the spatial distance and the intensity distance.
The value assigned to a pixel is given as
$$ BF[I_p] = \frac{1}{W_p} \sum_{\substack{ q\in S }} G_{\sigma_s}(\parallel p-q \parallel) G_{\sigma_r}(\mid I_p - I_q\mid) I_q $$
Do I have to make changes to this equation of the bilateral filter to get the high-frequency and low-frequency components separately?
Which component does this equation give: the high-frequency component or the low-frequency component?
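For what it's worth, here is a minimal, unoptimized NumPy sketch (an illustration with assumed window size and sigmas, not the MATLAB implementation referred to above) of the filter defined by the equation, together with one common way of splitting an image: the filter output serves as the low-frequency (smoothed) component and the difference image minus output as the high-frequency (detail) component.

import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=5):
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # G_sigma_s term
    padded = np.pad(img, radius, mode='reflect')
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # G_sigma_r term
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)        # 1/W_p normalization
    return out

img = np.random.rand(64, 64)          # stand-in for a grayscale image in [0, 1]
low = bilateral_filter(img)           # low-frequency (large-scale) component
high = img - low                      # high-frequency (detail) component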
|
A mapping $f$ between topological spaces $(X, \mathcal I_X)$ and $(Y, \mathcal I_Y)$ is continuous in $x \in X$ if $f^{-1}(V)$ is a neighborhood of $x$ for every neighborhood $V$ of $f(x)$. I want to show that this
agrees with the usual definition of continuity if $X$ and $Y$ are metric spaces and $\mathcal I_X$ and $\mathcal I_Y$ are the induced topologies of open sets.
I've shown that if $f$ is continuous between metric spaces in $x \in X$ then it is also continuous between topological spaces in $x \in X$.
However, how can I prove that if $f$ is continuous between topological spaces in $x \in X$, then it is also continuous between metric spaces in $x \in X$?
I must find a $\delta > 0$ given $\epsilon > 0$, but the only thing I know is that $x \in U \in \mathcal I_X$ and $f(x) \in U' \in \mathcal I_Y$, both open.
|
In
Classical Fourier Analysis by Loukas Grafakos we have in Proposition 2.3.25 the following definition for $\mathcal{S}_\infty(\mathbf{R}^n)$, namely that these are all the Schwartz functions $\phi$ such that for all multi-indices $\alpha$ we have that
$$\int_{\mathbf{R}^n} x^\alpha \phi(x) \, dx = 0.$$
Now I'm trying to find non-trivial functions in this space. I know that the Fourier transform maps the Schwartz functions to themselves, so I note that the requirement is actually that the Fourier transform of $x^\alpha \phi(x)$ vanishes at $0$. Since, up to constants, $x^\alpha$ corresponds to $d^\alpha/dx^\alpha$ in the Fourier domain, we actually want a function $\phi$ such that (let's take $n = 1$):
$$\left . \frac{d^\alpha}{dx^\alpha} \widehat{\phi}(x) \right |_{x = 0} = 0$$
for all $\alpha \geq 0$. An obvious candidate for $\widehat{\phi}$ is $f(x) = \text{exp}(-1/x^2)$ for $x > 0$ and $0$ otherwise.
However, if I now compute the Fourier transform (or the inverse) of this function (with Maple), I get another function, say $g$, but if I plot the real part of $g$ it is not smooth (it has a cusp at $0$); how is this possible? Further, the integral which I want to be zero is only zero for odd $n$; for even $n$ it is complex! What goes wrong?
|
The Yoneda Lemma
Welcome to our third and final installment on the Yoneda lemma! In the past couple of weeks, we've slowly unraveled the mathematics behind the Yoneda perspective, i.e. the categorical maxim that an object is completely determined by its relationships to other objects.
Last week we divided this maxim into two points:
point #1: Everything we need to know about X is encoded in hom(--,X). In effect, the object X represents the functor hom(--,X).
point #2: X and Y are isomorphic if and only if their represented functors hom(--,X) and hom(--,Y) are isomorphic.
Point #1, we noticed, is the informal way of saying that the Yoneda embedding $\mathscr{Y}:\mathsf{C}\to\mathsf{Set}^{\mathsf{C}^{op}}$ that sends an object $X$ to the functor $\text{hom}(-,X)$ is fully faithful. In other words, the function from $\text{hom}(X,Y)$ (a set of morphisms) to $\mathsf{Nat}(\text{hom}(-,X),\text{hom}(-,Y))$ (a set of natural transformations) that sends $f$ to $f_*$ is a bijection. (Here, $f_*$ sends a morphism $g:Z\to X$ to the composition $f\circ g:Z\to Y$.)
This means that for every $f:X\to Y$, there is
exactly one natural transformation $\text{hom}(-,X)\to\text{hom}(-,Y)$, cooked up from $f$ itself. Conversely, if $\eta:\text{hom}(-,X)\to\text{hom}(-,Y)$ is any natural transformation, there is exactly one morphism $X\to Y$ that's obtained from $\eta$ itself. And this is where we left off last time.
Now here's a simple - yet crucial - observation: notice
the set $\text{hom}(X,Y)$ lies in the image of the functor $\text{hom}(-,Y):\mathsf{C}^{op}\to \mathsf{Set}$! "But," you ask, "why is that important?"
Because it allows us to rephrase last week's result in the following way:
For any object $X$ in $\mathsf{C}$, natural transformations $\text{hom}(-,X)\to \text{hom}(-,Y)$ are in bijection with elements in the set $\text{hom}(X,Y)$.
Pretty pithy, right?
And you know what's amazing? It's true not only for functors of the form $\text{hom}(-,Y)$. It's true for ALL functors from $\mathsf{C}^{op}$ to $\mathsf{Set}$. ALL OF THEM!
And
that is the Yoneda lemma.
The Yoneda Lemma The Yoneda Lemma: For any functor $F:\mathsf{C}^{op}\to\mathsf{Set}$ and any object $X$ in $\mathsf{C}$, natural transformations $\text{hom}(-,X)\to F$ are in bijection with elements in the set $F(X)$. That is,$$\mathsf{Nat}(\text{hom}(-,X),F)\cong F(X).$$
Do you see the import here? The set of natural transformations $\text{hom}(-,X)\to F$ could be
m-a-s-s-i-v-e, a dense forest of unknowable, untamable, and frankly unhelpful weeds. "Except," the Yoneda lemma tells us, " it's not!" The only natural transformations that exist are those which can be cooked up from elements in the set obtained by evaluating $F$ at the object of interest, $X$.
And what's the recipe?
Given an element $c\in F(X)$, define $\eta:\text{hom}(-,X)\to F$ by declaring $\eta_Y:\text{hom}(Y,X)\to F(Y)$ to be the morphism that sends a map $g:Y\to X$ to the element $Fg(c)$ in $F(Y)$ where $Fg$ denotes the image of $g$ under $F$.
On the flip side, any natural transformation $\eta:\text{hom}(-,X)\to F$ gives rise to an element in $F(X)$, namely $\eta_X(\text{id}_X)$. (Here $\eta_X$ denotes the morphism $\text{hom}(X,X)\to F(X)$ and $\text{id}_X$ is the identity morphism on $X$.)
It remains to check that these assignments really are inverses of each other (and that $\eta$ is a bona fide natural transformation). But as Tom Leinster once said, "To understand the question is very nearly to know the answer... there is only one possible way to proceed."
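For the record, here is a minimal sketch of that check (kept brief, in the spirit of the quote above). Starting from $c \in F(X)$, the associated $\eta$ satisfies
$$\eta_X(\text{id}_X) = F(\text{id}_X)(c) = \text{id}_{F(X)}(c) = c,$$
so the round trip recovers $c$. Conversely, starting from $\eta$ and setting $c = \eta_X(\text{id}_X)$, naturality applied to any $g: Y \to X$ gives
$$\eta_Y(g) = \eta_Y(\text{id}_X \circ g) = Fg\big(\eta_X(\text{id}_X)\big) = Fg(c),$$
which is exactly the recipe above.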
An immediate consequence of the Yoneda lemma is the content of last week's discussion:
First Corollary (Point #1): The Yoneda embedding $\mathscr{Y}:\mathsf{C}\to\mathsf{Set}^{\mathsf{C}^{op}}$ is fully faithful.

Proof of Corollary 1:
Injectivity of the map $\text{hom}(X,Y)\to\mathsf{Nat}(\text{hom}(-,X),\text{hom}(-,Y))$ given by $f\mapsto f_*$ is clear. (If $f\neq g$ then $f_*\neq g_*$.) The Yoneda lemma gives us surjectivity. To see this, set $F=\text{hom}(-,Y)$ in the statement of the lemma. Then we have a bijection $$\mathsf{Nat}(\text{hom}(-,X),\text{hom}(-,Y))\cong \text{hom}(X,Y) .$$ Now suppose $\eta:\text{hom}(-,X)\to\text{hom}(-,Y)$ is any natural transformation. We need to show the existence of a morphism $f:X\to Y$ so that $\eta=f_*$. Here, "$\eta=f_*$" means that for every object $W$ in $\mathsf{C}$ and for any map $g:W\to X$, $$\eta_W(g)=f\circ g.$$ So what should the map $f$ be? There's really only one choice! According to the Yoneda lemma, $\eta$ gives rise to exactly one morphism $\eta_X(\text{id}_X):X\to Y$. So let's choose that one! That is, $$\text{let }\; f:=\eta_X(\text{id}_X).$$ Now we just need to verify that this works. But by definition, the phrase "$\eta$ is a natural transformation" means for any pair of objects $Z,W\in\mathsf{C}$ and for any map $g:W\to Z$ we have the equality $\eta_W\circ g^*=g^*\circ\eta_Z,$ which is to say, for any $h:Z\to X$, $$\eta_W(h\circ g)=\eta_Z( h)\circ g$$
Since this holds
for all $Z$ and $h$, it holds in the special case when $Z=X$ and $h= \text{id}_X\in \text{hom}(X,X)$. Now the naturality condition gives us exactly what we want: $\eta_W(g)=fg$ for all $g:W\to X$. And hence $\eta = f_*$.
So an object $X$ is effectually the same as its representable functor. So far, we've focused on the contravariant functors $\text{hom}(-,X)$, but it turns out there's a
contravariant version of the Yoneda embedding (and a version of the Yoneda lemma for covariant functors $F$) and so there's an analogous "Corollary 1" with $\text{hom}(-,X)$ replaced by $\text{hom}(X,-)$. But the essence is the same - $X$ is determined by its relationships to other objects.
Better yet, it is
completely determined by its relationships to other objects. The word "completely" is given to us by point #2 mentioned above, which is actually a second corollary of the Yoneda lemma.
Second Corollary (Point #2): $X\cong Y$ if and only if $\text{hom}(-,X)\cong\text{hom}(-,Y)$.

Proof of Corollary 2:
One direction follows simply because $\mathscr{Y}$ is a functor: if $X$ and $Y$ are isomorphic, then so are $\text{hom}(-,X)$ and $\text{hom}(-,Y)$. The converse follows because $\mathscr{Y}$ is fully faithful. This is a general fact: if $F:\mathsf{C}\to\mathsf{D}$ is a fully faithful functor and if $F(X)\cong F(Y)$, then $X\cong Y$. (Proof in footnote.*)
We saw an illustration of this last week: two topological spaces $X$ and $Y$ have the same cardinality if and only if $\text{hom}(\bullet,X)\cong \text{hom}(\bullet,Y)$; they have the same path space if and only if $\text{hom}(I,X)\cong\text{hom}(I,Y)$; they have the same loop space if and only if $\text{hom}(S^1,X)\cong\text{hom}(S^1,Y)$; and so on. Probing $X$ and $Y$ with various spaces gives us more information. Probing them with
all spaces gives us all information.
Of course, looking at maps
out of $X$ provides useful information, too. For instance, $X$ is connected if and only if every map $X\to \{0,1\}$ is constant. Interestingly enough, if we consider the same set $\{0,1\}$ endowed with the Sierpinski topology, then (as we've seen before) the set $\text{hom}(X,\{0,1\})$ captures the full topology on $X$. Further, maps - when considered up to homotopy - from $X$ to an Eilenberg-MacLane space give rise to the cohomology groups of $X$. On the other hand, homotopy classes of maps from the $n$-sphere into $X$ form the homotopy groups of $X$.
The heavy emphasis on morphisms is really a consequence of the Yoneda perspective (and hence the Yoneda lemma);
it's all about relationships!
I'd like to close this series with one more example. The Yoneda lemma is sometimes described as a generalization of Cayley's theorem from group theory. And rightly so. We can use the Yoneda lemma to
prove Cayley's theorem.
Cayley's theorem: a proof
Any group $G$ can be viewed as a category, call it $\mathsf{B}G$, with a single object $\bullet$ and a morphism for each group element. A functor $F$ from $\mathsf{B}G^{op}$ to $\mathsf{Set}$ is a right $G$-set. It sends $\bullet$ to a set $X$ and a morphism (i.e. group element) $g$ to the function $X\to X$ that multiplies on the right by $g$. In particular, when $F=\text{hom}(-,\bullet)$, the set $\text{hom}(\bullet,\bullet)$ is $G$
itself viewed as a right $G$-set. Then according to the Yoneda lemma, we have a bijection $$\mathsf{Nat}(\text{hom}(-,\bullet),\text{hom}(-,\bullet))\cong \text{hom}(\bullet,\bullet).$$
The right-hand side is simply the set of all elements in $G$. But what about the left-hand side? First notice that natural transformations $\text{hom}(-,\bullet)\to \text{hom}(-,\bullet)$ are simply $G$-equivariant functions $G\to G$.
But
which $G$-equivariant functions are they? According to the bijection in Corollary 1, they are constructed from elements in $G$. In short, the set $\mathsf{Nat}(\hom(-,\bullet),\hom(-,\bullet))$ is nothing more than the set of all functions $f_g:G\to G$ defined by $x\mapsto xg$. And these are precisely the permutations of $G$ that arise from multiplication by a fixed element!
The left-hand side is therefore a subgroup of the group of all permutations on $G$. Moreover, this subgroup is - by the Yoneda lemma - isomorphic to the group $G$ itself. And this is Cayley's theorem.
Further Reading
For more on the Yoneda lemma, I highly recommend Tom Leinster's Basic Category Theory as well as his incredibly clear The Yoneda Lemma: What's it All About? I also recommend Emily Riehl's Category Theory in Context (her examples are particularly enriching) and, for some really meaty math, the nLab. At those links, you'll notice that there's a third classic corollary of the Yoneda lemma, which we did not cover in this series. Perhaps in a future post!
If you enjoyed the "probing objects with other objects" idea, you'll be happy to know that it's part of a
philosophy of generalized points. For more, check out Leinster's Doing Without Diagrams and William Lawvere's An Elementary Theory of the Category of Sets (the 2005 version).
Finally, there's a neat result called the density theorem (e.g. Theorem 6.5.8 here) that tells us every functor $F:\mathsf{C}^{op}\to\mathsf{Set}$ is really
built up from the represented functors $\text{hom}(-,X)$. Formally, every such $F$ is a colimit of certain $\text{hom}(-,X)$. This is really a fantastic result (and has wonderful mathematics - like Kan extensions! - behind it). But I'll postpone the discussion - we haven't talked about colimits or limits! Yet.
*
Proof. Suppose $h:F(X)\to F(Y)$ is an isomorphism with inverse $h^{-1}$. Because $F$ is fully faithful, there is a unique morphism $f:X\to Y$ so that $Ff=h$. Similarly, there is a unique morphism $g:Y\to X$ so that $Fg=h^{-1}$. Then $\text{id}_{F(X)}=h^{-1}\circ h=Fg\circ Ff=F(g\circ f)$. But $F(\textit{id}_X)$ also maps to $\text{id}_{F(X)}$. Therefore, $g\circ f=\textit{id}_X$ since $F$ is faithful. A similar argument shows $f\circ g=\textit{id}_Y$ and so $f$ is an isomorphism.
In this series:
|
In the previous post, I discussed the function fitting view of supervised learning. It is theoretically impossible to find the best fitting function from an infinite search space. In this post, I will discuss how we can restrict the search space in function fitting with assumptions.
In the following example, we do not know for sure what the fitting function \(f\) should look like. We may guess the function is linear, polynomial, or non-linear, but our guess is as good as randomly drawing some lines connecting all the points. There are an infinite number of functions in the space, and it is in theory impossible to iterate over all of them to find the best one.
A lazy and naive way to predict unseen data is to use the "proximity rule" with the k nearest neighbor (KNN) algorithm. For a new input data point \(x^{\prime}\) in \(X_{test}\), our KNN assumption is that the predicted output \(\hat y_{test}\) can be represented by the \(k\) nearest neighbors in the training data \(X_{train}\), and we use the aggregated training output as the predicted output:
\(\hat y_{test}( x_i^{\prime}) = \frac {1} {k} \sum_{x_i \in N_k(x_i^{\prime})} y_i \)
Here, \(N_k(x_i^{\prime}) \) is the neighborhood of \(x_i^{\prime}\) defined by the \(k\) nearest training points \(x_i\).
In this figure, the red triangle is the real testing data point and the light blue triangle is the KNN prediction with \(k = 1\): the prediction takes the nearest training data point and assigns its \(y_{train}\) to the testing data.
As we increase \(k\) to \(2\), we can see the prediction (darker blue triangle) gets closer to the real output (red triangle).
We can play with different \(k\) values and select the one with lowest prediction error (mean square error). Prediction error is related to bias variance trade-off: at low \(k\), the prediction heavily depends on a small local neighborhood with small number of training data, thus with low bias but high variance; at high \(k\), the prediction is made from larger number of training data set and becomes more global, thus with low variance but high bias.
KNN is also applicable to discrete output (classification problem). In the following example with 2-D input, we use \(k=1\) to make prediction for testing data.
As shown in the bottom right figure, of the 20 testing data, 2 are misclassified (red) based on 100 training data. The prediction error of classification is usually not computed by mean squared error, but by different loss functions (summarized in later posts).
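For reference, here is a minimal NumPy sketch of the KNN prediction rule described above (with hypothetical toy data, not the data used in the figures):

import numpy as np

def knn_predict(X_train, y_train, X_test, k=2):
    preds = []
    for x in X_test:
        dist = np.abs(X_train - x)        # distances to all training points
        nn = np.argsort(dist)[:k]         # indices of the k nearest neighbors
        preds.append(y_train[nn].mean())  # aggregate (average) their outputs
    return np.array(preds)

X_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hypothetical 1-D training inputs
y_train = np.array([0.1, 0.9, 2.2, 2.8, 4.1])   # hypothetical training outputs
X_test = np.array([1.4, 3.6])
print(knn_predict(X_train, y_train, X_test, k=2))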
From a fitting function perspective, KNN builds a function \(f\) which performs pair-wise computation between all \(X_{train}\) and \(X_{test}\). Therefore, KNN is computationally expensive, particularly for large data. With \(n\) training data points of \(p\) dimensions and \(m\) new testing data points, we have to compute the \(m \times n\) similarity matrix, and then get the top \(k\) smallest distances for each testing data point. The time complexity of KNN is O(np + nk) per testing point.
To better restrict the search space for fitting functions and to reduce computation cost, most of the time we make some assumptions about the data and functions.
A commonly used fitting function is Ordinary Least Squares (OLS), which has 5 assumptions that guarantee the OLS estimator is BLUE (best linear unbiased estimator), introduced in most statistics 101 courses. In practice, when we apply linear regression fitting, we need to define and assume the form of the linear function (linear, polynomial, or piece-wise), but we do not need to make any assumptions to compute the parameters that minimize the total loss function, shown in the next post.
These assumptions are critical, however, if we would like to make any inferences about the real parameter values. Say we want to understand whether the output \(y\) is correlated with an input \(X_1\). We can always use OLS to compute the parameter \(\hat \beta_1\). But how confident are we about the estimated parameter value? Is the correlation significant?
It is worth noting that function fitting and statistical inference are related yet distinct topics; the latter usually comes with stricter assumptions.
The 5 OLS assumptions are:

1. Linear in parameters (the model can still be non-linear in the input): \(y = \beta_0 + \beta_1 x + \varepsilon\), \(y = \beta_0 + \beta_1 x + \beta_2 x^2+ \varepsilon\), \(y = \beta_0 + \beta_1 \log(x)+ \varepsilon\). Counterexample (non-linear in parameters): \(y = \beta_0 + \beta_1 ^ 2 x \).
2. Each data point is randomly and independently sampled, \((x_i, y_i) \sim i.i.d.\), and the error term \(\varepsilon\) should also be independent and random. The number of data points \(N\) should be at least the number of parameters \(p\): \(N \ge p\). Otherwise, the resulting parameters will not be unique (discussed in a later post).
3. Zero conditional mean of the error, \(E(\varepsilon|X) = 0 \), which in particular implies \(cov(x_i, \varepsilon_i) = 0 \): the error does not depend on the input.
4. No perfect collinearity in the inputs. Counterexample: if \(X_1 = 2X_2\), then \(y = \beta_1X_1 +\beta_2X_2 + \beta_3X_3\) can be written as \(y = \beta_1 (2X_2) +\beta_2X_2 + \beta_3X_3 = (2\beta_1 +\beta_2)X_2 + \beta_3X_3 = \beta_{12}X_2 + \beta_3X_3\). We may compute a unique \(\beta_{12}\), but cannot derive unique values for \(\beta_1\) and \(\beta_2\), as there are infinitely many solutions of \(\beta_{12} = 2 \beta_1+\beta_2 \). As a result, we cannot disentangle the effects of \(X_1\) and \(X_2\).
5. Homoskedastic error, \(Var(\varepsilon|X) = \sigma^2 \). Error terms for each data point should be i.i.d., i.e. there is no autocorrelation between error terms: \(Cov(\varepsilon_i \varepsilon_j | X) = 0\) for \(i \neq j\).
The linear function is one of the most basic supervised learning fitting functions, and it is extremely powerful. I will discuss linear functions in detail in the next post.
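Before that post, here is a minimal OLS sketch in Python/NumPy (the toy data and variable names are my own, not the author's GitHub demo): the coefficients that minimize the squared loss come straight from a least-squares solve, with no distributional assumptions needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: y = 2 + 3x + noise
x = rng.uniform(-1, 1, size=50)
y = 2 + 3 * x + rng.normal(scale=0.5, size=50)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])

# Least-squares coefficients: argmin over beta of ||y - X beta||^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope =", beta)   # close to (2, 3)
```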
Another commonly used function is the Gaussian Process (GP). A GP places a prior distribution over fitting functions and assumes that the function values at any finite set of inputs have a joint Gaussian distribution. Say we have input data \(x_1, x_2, \ldots, x_N\); a GP assumes that \(p(f(x_1), f(x_2), \ldots, f(x_N))\) is joint Gaussian.
For example, before we see any data, we can define a GP prior distribution of fitting functions for a 1-D input with mean of 0 as shown below.
As we have more training data (red dots below), we can update the GP posterior distribution with the GP mean (dashed line) and standard deviation (purple area). As we can see, more training data will decrease variance for prediction.
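A minimal sketch of this prior-to-posterior update, assuming scikit-learn's GaussianProcessRegressor with an RBF kernel (the kernel choice and toy data are my assumptions, not necessarily what the post's figures used):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# A few 1-D training points (the "red dots" in the figures)
X_train = rng.uniform(0, 5, size=6).reshape(-1, 1)
y_train = np.sin(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_train, y_train)

# Posterior mean and standard deviation on a grid
X_grid = np.linspace(0, 5, 100).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)
print("max posterior std:", std.max())   # shrinks as more training data are added
```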
GP has been widely applied to the optimization of black-box functions. As will be discussed in later posts, quite a lot of supervised learning algorithms and fitting functions have a differentiable loss function and rely on gradient-based approaches for optimization. However, for gradient-free problems, particularly when the objective function is highly convoluted and not directly differentiable, GP has great advantages and applications, such as Bayesian optimization for hyperparameter tuning. I will discuss Gaussian Processes in detail in later posts.
Take home message
Finding the best fitting function from an infinite function space is theoretically impossible. In order to practically find the “best” fitting function, we usually restrict the function space and make key assumptions about the data and the fitting function. Under these assumptions, we can optimize the fitting function with manageable computational complexity.
Demo code can be found on my Github.
|
If I'm standing at a lat of 30.000$^\circ$ and a long of 30.000$^\circ$ and I move to a lat of 30.001$^\circ$ and a long 30.000$^\circ$, how far have I gone? Is there a relationship between decimal places and lat and long distances?
I don't understand the last sentence about decimal places, but I can tell you about the relationship between
lat, long and distance.
Over two centuries ago, the meter was defined as one ten-millionth (1/10 000 000) of the length of a quadrant along the Earth's meridian; that is, the distance from the Equator to the North Pole. So, for
latitude the number of degrees from the pole to the equator is $90^\circ\!$, and the number of meters is 10 million (or 10,000 kilometers). That means $1^\circ\!$ of latitude is $10,000/90 \approx 111$ kilometers, and $0.001^\circ \approx 0.111$ kilometers or $111$ meters, essentially an American football field plus both endzones.
The total length of the Equator is about equal to four times the distance from a Pole to the Equator. (Slightly more because the Earth's figure is a little oblate, like a slightly flattened ball.) Think of a great circle going from the North pole, to the Equator, to the South pole, to the Equator on the opposite side of the world, then back to North Pole ($360^\circ\!\!$ of a circle). That would be (almost) equivalent to the great circle ($360^\circ\!\!$ of
longitude) of the Equator. So, at the Equator, $0.001^\circ\!$ of longitude will also be $111$ meters. But...
The other circles of
latitude are smaller than the great circle of the Equator, yet still have $360^\circ\!$, so those degrees cover less than $111$ kilometers.
In fact, the size of a degree of
longitude as a function of latitude scales as the cosine of the latitude. So $0.001^\circ\!$ of longitude change at $30^\circ\!$ latitude would be $111 \times \cos (30^\circ\!) = 111 \times 0.866 = 96$ meters.
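The same arithmetic in a short Python sketch (the 111 km per degree figure is the approximate spherical value used above, so treat the output as approximate):

```python
import math

METERS_PER_DEG = 111_000  # approximate meters per degree of latitude

def step_distance(dlat_deg, dlon_deg, lat_deg):
    """Approximate distance (m) of small lat/long changes near latitude lat_deg."""
    dy = dlat_deg * METERS_PER_DEG
    dx = dlon_deg * METERS_PER_DEG * math.cos(math.radians(lat_deg))
    return math.hypot(dx, dy)

print(step_distance(0.001, 0.0, 30.0))   # ~111 m for a 0.001 deg latitude change
print(step_distance(0.0, 0.001, 30.0))   # ~96 m for a 0.001 deg longitude change at 30 deg
```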
Latitudes are "parallels" while longitudes are "meridians" that all meet at the poles. For latitude, there is a set distance traveled per degree latitude no matter where you are on a spherical globe. For longitude, it depends what latitude you are at.
Image source: https://www.learner.org/jnorth/tm/LongitudeIntro.html
Image Credit: Illinois State University
Here is a nice calculator for you online: http://www.stevemorse.org/nearest/distance.php
1 degree of latitude in physical distance is 68.94 statute miles or 59.91 nautical miles (110.95 km) -- for a spherical earth assumption. So, a change of 0.001 degrees latitude is 0.06894 statute miles or 0.05991 nautical miles. An ellipsoidal earth approximation does introduce a small variation in latitudinal distances, though. Incidentally, longitude has the same distance between each degree
if you are at the equator. At the poles, the distance between lines of longitude is zero. Note that all distances are "as the crow flies" which means the terrain is disregarded in the distance calculation.
|
Let's say that you have $n$ independent and identically distributed (real valued) random variables $(X_{1},\ldots,X_{n})$ with distribution $\mathcal{N}(\mu,\sigma^{2})$. Here, $(\mu,\sigma^{2})$ are unknown parameters which we can estimate. Note that we assume that all random variables $X_{i}$ have the same mean $\mu$. In other words, we could say that we assume the following model :
$$ \forall i \in \left\{1,\ldots,n \right\}, \; X_{i} = \mu + \varepsilon_{i} \tag{$\star$}$$
where $(\varepsilon_{1},\ldots,\varepsilon_{n})$ are independent and identically distributed random variables with distribution $\mathcal{N}(0,\sigma^{2})$. One way to estimate the mean $\mu$ is to consider its
least squares estimate $\hat{\mu}_{n}$, where $\hat{\mu}_{n}$ is defined as follows :
$$ \hat{\mu}_{n} = \mathop{\mathrm{argmin}} \limits_{\mu \in \mathbb{R}} \sum_{i=1}^{n} \big( X_{i} - \mu \big)^{2} $$
To determine this estimate, you want to minimize the function $f$ given by :
$$ f(\mu) = \sum_{i=1}^{n} \big( X_{i} - \mu \big)^{2} = \sum_{i=1}^{n} X_{i}^{2} - 2\mu \sum_{i=1}^{n} X_{i} + n\mu^{2}.$$ $f$ is a differentiable function whose derivative is :
$$ f'(\mu) = -2 \sum_{i=1}^{n} X_{i} + 2n \mu $$
So, $f'(\mu) = 0$ if and only if $\displaystyle \mu = \frac{1}{n} \sum_{i=1}^{n} X_{i}$. Since $\displaystyle \lim \limits_{\mu \to \pm \infty} f(\mu) = +\infty$, the critical point of $f$ we found is actually the global minimum of $f$. As a conclusion :
$$ \boxed{\displaystyle \hat{\mu}_{n} = \frac{1}{n} \sum_{i=1}^{n} X_{i}} $$
Minimizing a sum of squared "errors" is not the only way to determine the parameters in $(\star)$. The, perhaps, most common way is to determine
maximum likelihood estimates. This method consists in maximizing a function (the likelihood function), which, in the case of the model $(\star)$, is written as follows :
$$ \ell \big( \mu,\sigma \big) = \frac{1}{\big( \sigma \sqrt{2\pi} \big)^{n}} \exp \Big( - \frac{1}{2\sigma^{2}} \sum_{i=1}^{n} (x_{i} - \mu)^{2} \Big) $$
Instead of maximizing $\ell$, it is more convenient to maximize $\log \ell$, the
log-likelihood. Maximizing $\log \ell$ with respect to $\mu$ (consider $\sigma$ fixed) yields :
$$\begin{align*} \mathop{\mathrm{argmax}} \limits_{\mu \in \mathbb{R}} \log \ell & = {} \mathop{\mathrm{argmin}} \limits_{\mu \in \mathbb{R}} \sum_{i=1}^{n} \big( X_{i}-\mu \big)^{2} \\&= \hat{\mu}_{n} \\\end{align*}$$
So, our least squares estimate is also (in this case) the maximum likelihood estimate of the mean.
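As a quick sanity check, here is a small simulation sketch (my own code, not part of the argument above) confirming numerically that the minimizer of $f$ coincides with the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=1000)   # simulated N(mu=5, sigma=2) sample

# Least squares: minimize f(mu) = sum (X_i - mu)^2 numerically
res = minimize_scalar(lambda mu: np.sum((X - mu) ** 2))

print("sample mean    :", X.mean())
print("argmin of f(mu):", res.x)   # agrees with the sample mean
```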
|
I suspect there is in general not much difference between GMRES and CG for an SPD matrix.
Let's say we are solving $ Ax = b $ with $ A $ symmetric positive definite and the starting guess $ x_0 = 0 $, and generating iterates with CG and GMRES, call them $ x_k^c $ and $ x_k^g $. Both iterative methods will be building $ x_k $ from the same Krylov space $ K_k = \operatorname{span}\{ b, Ab, A^2b, \ldots, A^{k-1}b \} $. They will do so in slightly different ways.
CG is characterized by minimizing the error $ e_k^c = x - x_k^c $ in the energy norm induced by $ A $, so that\begin{equation} (A e_k^c, e_k^c) = (A (x - x_k^c), x - x_k^c) = \min_{y \in K} (A (x-y), x-y).\end{equation}
GMRES minimizes instead the residual $ r_k = b - A x^g_k $, and does so in the discrete $ \ell^2 $ norm, so that\begin{equation} (r_k, r_k) = (b - A x_k^g, b - A x_k^g) = \min_{y \in K} (b - Ay, b - Ay).\end{equation}Now using the error equation $ A e_k = r_k $ we can also write GMRES as minimizing\begin{equation} (r_k, r_k) = (A e_k^g, A e_k^g) = (A^2 e_k^g, e_k^g)\end{equation}where I want to emphasize that this only holds for an SPD matrix $ A $. Then we have CG minimizing the error with respect to the $ A $ norm and GMRES minimizing the error with respect to the $ A^2 $ norm. If we want them to behave very differently, intuitively we would need an $ A $ such that these two norms are very different. But for SPD $ A $ these norms will behave quite similarly.
To get even more specific, in the first iteration with the Krylov space $ K_1 = \{ b \} $, both CG and GMRES will construct an approximation of the form $ x_1 = \alpha b $. CG will choose\begin{equation} \alpha = \frac{ (b,b) }{ (Ab,b) }\end{equation}and GMRES will choose\begin{equation} \alpha = \frac{ (Ab,b) }{ (A^2b,b) }.\end{equation}If $ A $ is diagonal with entries $ (\epsilon,1,1,1,\ldots) $ and $ b = (1,1,0,0,0,\ldots) $ then as $ \epsilon \rightarrow 0 $ the first CG step becomes twice as large as the first GMRES step. Probably you can construct $ A $ and $ b $ so that this factor of two difference continues throughout the iteration, but I doubt it gets any worse than that.
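For the curious, here is a small sketch (my own code, just evaluating the two first-step formulas above) showing the ratio of the step sizes approaching two as $\epsilon \to 0$:

```python
import numpy as np

def first_steps(eps):
    A = np.diag([eps, 1.0, 1.0, 1.0])
    b = np.array([1.0, 1.0, 0.0, 0.0])
    alpha_cg = (b @ b) / (b @ A @ b)                # (b,b)/(Ab,b)
    alpha_gmres = (b @ A @ b) / (b @ A @ A @ b)     # (Ab,b)/(A^2 b,b)
    return alpha_cg, alpha_gmres

for eps in (1.0, 0.1, 0.01, 1e-6):
    cg, gm = first_steps(eps)
    print(f"eps={eps:g}  alpha_cg={cg:.4f}  alpha_gmres={gm:.4f}  ratio={cg/gm:.3f}")
```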
|
In my classes, I use one "simple" situation that might help you wonder and perhaps develop a gut feeling for what a degree of freedom may mean.
It is kind of a "Forrest Gump" approach to the subject, but it is worth the try.
Consider you have 10 independent observations $X_1, X_2, \ldots, X_{10}\sim N(\mu,\sigma^2)$ that came right from a normal population whose mean $\mu$ and variance $\sigma^2$ are unknown.
Your observations bring to you collectively information both about $\mu$ and $\sigma^2$. After all, your observations tend to be spread around one central value, which ought to be close to the actual and unknown value of $\mu$ and, likewise, if $\mu$ is very high or very low, then you can expect to see your observations gather around a very high or very low value respectively. One good "substitute" for $\mu$ (in the absence of knowledge of its actual value) is $\bar X$, the average of your observation.
Also, if your observations are very close to one another, that is an indication that you can expect that $\sigma^2$ must be small and, likewise, if $\sigma^2$ is very large, then you can expect to see wildly different values for $X_1$ to $X_{10}$.
If you were to bet your week's wage on which should be the actual values of $\mu$ and $\sigma^2$, you would need to
choose a pair of values in which you would bet your money. Let's not think of anything as dramatic as losing your paycheck unless you guess $\mu$ correctly until its 200th decimal position. Nope. Let's think of some sort of prizing system that the closer you guess $\mu$ and $\sigma^2$ the more you get rewarded.
In some sense, your better, more informed, and more polite guess for $\mu$'s value could be $\bar X$. In that sense, you
estimate that $\mu$ must be some value around $\bar X$. Similarly, one good "substitute" for $\sigma^2$ (not required for now) is $S^2$, your sample variance, which makes a good estimate for $\sigma^2$.
If you were to believe that those substitutes are the actual values of $\mu$ and $\sigma^2$, you would probably be wrong, because the chances are very slim that you were so lucky that your observations coordinated themselves to get you the gift of $\bar X$ being equal to $\mu$ and $S^2$ equal to $\sigma^2$. Nah, it probably didn't happen.
But you could be at different levels of wrong, varying from a bit wrong to really, really,
really miserably wrong (a.k.a., "Bye-bye, paycheck; see you next week!").
Ok, let's say that you took $\bar X$ as your guess for $\mu$. Consider just two scenarios: $S^2=2$ and $S^2=20,000,000$. In the first, your observations sit pretty close to one another. In the latter, your observations vary wildly. In which scenario should you be more concerned with your potential losses? If you thought of the second one, you're right. Having an estimate of $\sigma^2$ changes your confidence in your bet quite reasonably, for the larger $\sigma^2$ is, the more you can expect $\bar X$ to vary.
But, beyond information about $\mu$ and $\sigma^2$, your observations also carry some amount of pure random fluctuation that is informative about neither $\mu$ nor $\sigma^2$.
How can you notice it?
Well, let's assume, for sake of argument, that there is a God and that He has spare time enough to give Himself the frivolity of telling you specifically the real (and so far unknown) values of both $\mu$ and $\sigma$.
And here is the annoying plot twist of this lysergic tale: He tells it to you
after you placed your bet. Perhaps to enlighten you, perhaps to prepare you, perhaps to mock you. How could you know?
Well, that makes the information about $\mu$ and $\sigma^2$ contained in your observations quite useless now. Your observations' central position $\bar X$ and variance $S^2$ are no longer of any help to get closer to the actual values of $\mu$ and $\sigma^2$, for you already know them.
One of the benefits of your good acquaintance with God is that you actually know by how much you failed to guess correctly $\mu$ by using $\bar X$, that is, $(\bar X - \mu)$ your estimation error.
Well, since $X_i\sim N(\mu,\sigma^2)$, then $\bar X\sim N(\mu,\sigma^2/10)$ (trust me in that if you will), also $(\bar X - \mu)\sim N(0,\sigma^2/10)$ (ok, trust me in that on too) and, finally,$$\frac{\bar X - \mu}{\sigma/\sqrt{10}} \sim N(0,1)$$(guess what? trust me in that one as well), which carries absolutely no information about $\mu$ or $\sigma^2$.
You know what? If you took any of your individual observations as a guess for $\mu$, your estimation error $(X_i-\mu)$ would be distributed as $N(0,\sigma^2)$. Well, between estimating $\mu$ with $\bar X$ and any $X_i$, choosing $\bar X$ would be better business, because $Var(\bar X) = \sigma^2/10 < \sigma^2 = Var(X_i)$, so $\bar X$ was less prone to be astray from $\mu$ than an individual $X_i$.
Anyway, $(X_i-\mu)/\sigma\sim N(0,1)$ is also absolutely uninformative about either $\mu$ or $\sigma^2$.
"Will this tale ever end?" you may be thinking. You also may be thinking "Is there any more random fluctuation that is non informative about $\mu$ and $\sigma^2$?".
[I prefer to think that you are thinking of the latter.]
Yes, there is!
The square of your estimation error for $\mu$ with $X_i$, divided by $\sigma^2$, $$\frac{(X_i-\mu)^2}{\sigma^2}= \left(\frac{X_i-\mu}{\sigma}\right)^2\sim \chi^2$$has a Chi-squared distribution, which is the distribution of the square $Z^2$ of a standard Normal $Z\sim N(0,1)$, which I am sure you noticed carries absolutely no information about either $\mu$ or $\sigma^2$, but conveys information about the variability you should expect to face.
That is a very well known distribution that arises naturally from the very scenario of your gambling problem for every single one of your ten observations and also from your mean:$$\frac{(\bar X-\mu)^2}{\sigma^2/10}= \left(\frac{\bar X-\mu}{\sigma/\sqrt{10}}\right)^2= \left(N(0,1)\right)^2\sim\chi^2$$and also from the gathering of your ten observations' variation:$$\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \left(\frac{X_i-\mu}{\sigma}\right)^2=\sum_{i=1}^{10} \left(N(0,1)\right)^2=\sum_{i=1}^{10} \chi^2.$$Now that last guy does not have a Chi-squared distribution, because it is the sum of ten such Chi-squared distributions, all of them independent from one another (because so are $X_1, \ldots, X_{10}$). Each one of those single Chi-squared distributions is one contribution to the amount of random variability you should expect to face, with roughly the same amount of contribution to the sum.
The value of each contribution is not mathematically equal to the other nine, but all of them have the same expected behavior in distribution. In that sense, they are somehow symmetric.
Each one of those Chi-square is one contribution to the amount of pure, random variability you should expect in that sum.
If you had 100 observations, the sum above would be expected to be bigger just because it has more sources of contributions.
Each of those "sources of contributions" with the same behavior can be called
degree of freedom.
Now take one or two steps back, re-read the previous paragraphs if needed to accommodate the sudden arrival of your quested-for
degree of freedom.
Yep, each degree of freedom can be thought of as one unit of variability that is obligatorily expected to occur and that brings nothing to the improvement of guessing of $\mu$ or $\sigma^2$.
The thing is, you start to count on the behavior of those 10 equivalent sources of variability. If you had 100 observations, you would have 100 independent equally-behaved sources of strictly random fluctuation to that sum.
That sum of 10 Chi-squares gets called a Chi-squared distribution with 10 degrees of freedom from now on, and is written $\chi^2_{10}$. We can describe what to expect from it starting from its probability density function, which can be mathematically derived from the density of that single Chi-squared distribution (from now on called a Chi-squared distribution with one degree of freedom and written $\chi^2_1$), which in turn can be derived from the density of the normal distribution.
"So what?" --- you might be thinking --- "That is of any good only if God took the time to tell me the values of $\mu$ and $\sigma^2$, of all the things He could tell me!"
Indeed, if God Almighty were too busy to tell you the values of $\mu$ and $\sigma^2$, you would still have those 10 sources, those 10 degrees of freedom.
Things start to get weird (Hahahaha; only now!) when you rebel against God and try and get along all by yourself, without expecting Him to patronize you.
You have $\bar X$ and $S^2$, estimators for $\mu$ and $\sigma^2$. You can find your way to a safer bet.
You could consider calculating the sum above with $\bar X$ and $S^2$ in the places of $\mu$ and $\sigma^2$:$$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10}=\sum_{i=1}^{10} \left(\frac{X_i-\bar X}{S/\sqrt{10}}\right)^2,$$but that is not the same as the original sum.
"Why not?" The term inside the square of both sums are very different. For instance, it is unlikely but possible that all your observations end up being larger than $\mu$, in which case $(X_i-\mu) > 0$, which implies $\sum_{i=1}^{10}(X_i-\mu) > 0$, but, by its turn, $\sum_{i=1}^{10}(X_i-\bar X) = 0$, because $\sum_{i=1}^{10}X_i-10 \bar X =10 \bar X - 10 \bar X = 0$.
Worse, you can prove easily (Hahahaha; right!) that $\sum_{i=1}^{10}(X_i-\bar X)^2 \le \sum_{i=1}^{10}(X_i-\mu)^2$ with strict inequality when at least two observations are different (which is not unusual).
"But wait! There's more!"$$\frac{X_i-\bar X}{S/\sqrt{10}}$$doesn't have standard normal distribution,$$\frac{(X_i-\bar X)^2}{S^2/10}$$doesn't have Chi-squared distribution with one degree of freedom,$$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2/10}$$doesn't have Chi-squared distribution with 10 degrees of freedom$$\frac{\bar X-\mu}{S/\sqrt{10}}$$doesn't have standard normal distribution.
"Was it all for nothing?"
No way. Now comes the magic! Note that$$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{[X_i-\mu+\mu-\bar X]^2}{\sigma^2}=\sum_{i=1}^{10} \frac{[(X_i-\mu)-(\bar X-\mu)]^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\mu)^2-2(X_i-\mu)(\bar X-\mu)+(\bar X-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\mu)^2-(\bar X-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-\sum_{i=1}^{10} \frac{(\bar X-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-10\frac{(\bar X-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-\frac{(\bar X-\mu)^2}{\sigma^2/10}$$or, equivalently,$$\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}=\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2}+\frac{(\bar X-\mu)^2}{\sigma^2/10}.$$Now we get back to those known faces.
The first term has Chi-squared distribution with 10 degrees of freedom and the last term has Chi-squared distribution with one degree of freedom(!).
We simply split a Chi-square with 10 independent equally-behaved sources of variability into two parts, both positive: one part is a Chi-square with one source of variability, and the other we can prove (leap of faith? win by W.O.?) to be also a Chi-square with 9 (= 10 - 1) independent equally-behaved sources of variability, with both parts independent from one another.
This is already good news, since now we have its distribution.
Alas, it uses $\sigma^2$, to which we have no access (recall that God is amusing Himself on watching our struggle).
Well,$$S^2=\frac{1}{10-1}\sum_{i=1}^{10} (X_i-\bar X)^2,$$so$$\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2}=\frac{\sum_{i=1}^{10} (X_i-\bar X)^2}{\sigma^2}=\frac{(10-1)S^2}{\sigma^2}\sim\chi^2_{(10-1)}$$therefore$$\frac{\bar X-\mu}{S/\sqrt{10}}=\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\frac{S}{\sigma}}=\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{S^2}{\sigma^2}}}=\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{\frac{(10-1)S^2}{\sigma^2}}{(10-1)}}}=\frac{N(0,1)}{\sqrt{\frac{\chi^2_{(10-1)}}{(10-1)}}},$$which is a distribution that is not the standard normal, but whose density can be derived from the densities of the standard normal and the Chi-squared with $(10-1)$ degrees of freedom.
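If you prefer simulation to a leap of faith, here is a minimal sketch (my own code, not part of the original tale) that draws many samples of size 10 and checks that $(10-1)S^2/\sigma^2$ behaves like a $\chi^2_9$ and $(\bar X-\mu)/(S/\sqrt{10})$ like a $t_9$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 10, 100_000

X = rng.normal(mu, sigma, size=(reps, n))
xbar = X.mean(axis=1)
S2 = X.var(axis=1, ddof=1)                  # sample variance with n - 1

chi2_stat = (n - 1) * S2 / sigma**2         # should follow chi-squared with 9 df
t_stat = (xbar - mu) / np.sqrt(S2 / n)      # should follow Student t with 9 df

print("mean of chi2 stat:", chi2_stat.mean(), "(theory: 9)")
print("var of t stat    :", t_stat.var(), "(theory:", 9 / (9 - 2), ")")
print("KS p-value vs t(9):", stats.kstest(t_stat, "t", args=(9,)).pvalue)
```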
One very, very smart guy did that math[^1] in the beginning of 20th century and, as an unintended consequence, he made his boss the absolute world leader in the industry of Stout beer. I am talking about William Sealy Gosset (a.k.a. Student; yes,
that Student, from the $t$ distribution) and Saint James's Gate Brewery (a.k.a. Guinness Brewery), of which I am a devout.
[^1]: @whuber told in the comments below that Gosset did not do the math, but
guessed instead! I really don't know which feat is more surprising for that time.
That, my dear friend, is the origin of the $t$ distribution with $(10-1)$ degrees of freedom: the ratio of a standard normal and the square root of an independent Chi-square divided by its degrees of freedom, which, in an unpredictable turn of the tides, winds up describing the expected behavior of the estimation error you undergo when using the sample average $\bar X$ to estimate $\mu$ and using $S^2$ to estimate the variability of $\bar X$.
There you go. With an awful lot of technical details grossly swept under the rug, but not depending solely on God's intervention to dangerously bet your whole paycheck.
|
In our final application of group theory, we will investigate the way in which symmetry considerations influence the interaction of light with matter. We have already used group theory to learn about the molecular orbitals in a molecule. In this section we will show that it may also be used to predict which electronic states may be accessed by absorption of a photon. We may also use group theory to investigate how light may be used to excite the various vibrational modes of a polyatomic molecule.
Last year, you were introduced to spectroscopy in the context of electronic transitions in atoms. You learned that a photon of the appropriate energy is able to excite an electronic transition in an atom, subject to the following selection rules:
\[\begin{array}{rcl} \Delta n & = & \text{integer} \\ \Delta l & = & \pm 1 \\ \Delta L & = & 0, \pm 1 \\ \Delta S & = & 0 \\ \Delta J & = & 0, \pm 1; J=0 \not \leftrightarrow J=0 \end{array} \tag{27.1}\]
What you may not have learned is where these selection rules come from. In general, different types of spectroscopic transition obey different selection rules. The transitions you have come across so far involve changing the
electronic state of an atom, and involve absorption of a photon in the UV or visible part of the electromagnetic spectrum. There are analogous electronic transitions in molecules, which we will consider in more detail shortly. Absorption of a photon in the infrared (IR) region of the spectrum leads to vibrational excitation in molecules, while photons in the microwave (MW) region produce rotational excitation. Each type of excitation obeys its own selection rules, but the general procedure for determining the selection rules is the same in all cases. It is simply to determine the conditions under which the probability of a transition is not identically zero.
The first step in understanding the origins of selection rules must therefore be to learn how transition probabilities are calculated. This requires some quantum mechanics.
Last year, you learned about operators, eigenvalues and eigenfunctions in quantum mechanics. You know that if a function is an eigenfunction of a particular operator, then operating on the eigenfunction with the operator will return the observable associated with that state, known as the eigenvalue (i.e. \(\hat{A} \Psi = a \Psi\)). What you may not know is that operating on a function that is NOT an eigenfunction of the operator leads to a change in state of the system. In the transitions we will be considering, the molecule interacts with the
electric field of the light (as opposed to NMR spectroscopy, in which the nuclei interact with the magnetic field of the electromagnetic radiation). These transitions are called electric dipole transitions, and the operator we are interested in is the electric dipole operator, usually given the symbol \(\hat{\boldsymbol{\mu}}\), which describes the electric field of the light.
If we start in some initial state \(\Psi_i\), operating on this state with \(\hat{\boldsymbol{\mu}}\) gives a new state, \(\Psi = \hat{\boldsymbol{\mu}} \Psi_i\). If we want to know the probability of ending up in some particular final state \(\Psi_f\), the probability amplitude is simply given by the overlap integral between \(\Psi\) and \(\Psi_f\). This probability amplitude is called the transition dipole moment, and is given the symbol \(\boldsymbol{\mu}_{fi}\):
\[\boldsymbol{\mu}_{fi} = \langle\Psi_f | \Psi\rangle = \langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle \tag{27.2}\]
Physically, the transition dipole moment may be thought of as describing the ‘kick’ the electron receives or imparts to the electric field of the light as it undergoes a transition. The transition probability is given by the square of the probability amplitude.
\[P_{fi} = |\boldsymbol{\mu}_{fi}|^2 = |\langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle|^2 \tag{27.3}\]
Hopefully it is clear that in order to determine the selection rules for an electric dipole transition between states \(\Psi_i\) and \(\Psi_f\), we need to find the conditions under which \(\boldsymbol{\mu}_{fi}\) can be non-zero.
In section \(17\), we showed how to use group theory to determine whether or not an integral may be non-zero. This forms the basis of our consideration of selection rules.
Electronic transitions in molecules
Assume that we have a molecule in some initial state \(\Psi_i\). We want to determine which final states \(\Psi_f\) can be accessed by absorption of a photon. Recall that for an integral to be non-zero, the representation for the integrand must contain the totally symmetric irreducible representation. The integral we want to evaluate is
\[\boldsymbol{\mu}_{fi} = \int \Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i d\tau \tag{27.4}\]
so we need to determine the symmetry of the function \(\Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i\). As we learned in Section \(18\), the product of two functions transforms as the direct product of their symmetry species, so all we need to do to see if a transition between two chosen states is allowed is work out the symmetry species of \(\Psi_f\), \(\hat{\boldsymbol{\mu}}\) and \(\Psi_i\) , take their direct product, and see if it contains the totally symmetric irreducible representation for the point group of interest. Equivalently (as explained in Section \(18\)), we can take the direct product of the irreducible representations for \(\hat{\boldsymbol{\mu}}\) and \(\Psi_i\) and see if it contains the irreducible representation for \(\Psi_f\). This is best illustrated using a couple of examples.
Earlier in the course, we learned how to determine the symmetry of molecular orbitals. The symmetry of an electronic state is found by identifying any unpaired electrons and taking the direct product of the irreducible representations of the molecular orbitals in which they are located. The ground state of a closed-shell molecule, in which all electrons are paired, always belongs to the totally symmetric irreducible representation\(^7\). As an example, the electronic ground state of \(NH_3\), which belongs to the \(C_{3v}\) point group, has \(A_1\) symmetry. To find out which electronic states may be accessed by absorption of a photon, we need to determine the irreducible representations for the electric dipole operator \(\hat{\boldsymbol{\mu}}\). Light that is linearly polarized along the \(x\), \(y\), and \(z\) axes transforms in the same way as the functions \(x\), \(y\), and \(z\) in the character table\(^8\). From the \(C_{3v}\) character table, we see that \(x\)- and \(y\)-polarized light transforms as \(E\), while \(z\)-polarized light transforms as \(A_1\). Therefore:
For \(x\)- or \(y\)-polarized light, \(\Gamma_\hat{\boldsymbol{\mu}} \otimes \Gamma_{\Psi 1}\) transforms as \(E \otimes A_1 = E\). This means that absorption of \(x\)- or \(y\)-polarized light by ground-state \(NH_3\) (see figure below left) will excite the molecule to a state of \(E\) symmetry. For \(z\)-polarized light, \(\Gamma_\hat{\boldsymbol{\mu}} \otimes \Gamma_{\Psi 1 }\) transforms as \(A_1 \otimes A_1 = A_1\). Absorption of \(z\)-polarized light by ground state \(NH_3\) (see figure below right) will excite the molecule to a state of \(A_1\) symmetry.
Of course, the photons must also have the appropriate energy, in addition to having the correct polarization to induce a transition.
We can carry out the same analysis for \(H_2O\), which belongs to the \(C_{2v}\) point group. We showed previously that \(H_2O\) has three molecular orbitals of \(A_1\) symmetry, as well as orbitals of \(B_1\) and \(B_2\) symmetry.
The electronic ground state has two electrons in a \(B_2\) orbital, giving a state of \(A_1\) symmetry (\(B_2 \otimes B_2 = A_1\)). The first excited electronic state has the configuration \((1B_2)^1(3A_1)^1\) and its symmetry is \(B_2 \otimes A_1 = B_2\). It may be accessed from the ground state by a \(y\)-polarized photon. The second excited state is accessed from the ground state by exciting an electron to the \(2B_1\) orbital. It has the configuration \((1B_2)^1(2B_1)^1\), its symmetry is \(B_2 \otimes B_1 = A_2\). Since neither \(x\)-, \(y\)- or \(z\)-polarized light transforms as \(A_2\), this state may not be excited from the ground state by absorption of a single photon.
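This symmetry bookkeeping is easy to automate. Below is a small illustrative Python sketch (my own, not part of the original text) that uses the \(C_{2v}\) character table to form direct products and check which polarizations connect the ground state to a given excited state; the irrep labels and dipole assignments follow the standard \(C_{2v}\) table.

```python
import numpy as np

# Characters of the C2v irreps over the operations (E, C2, sigma_v(xz), sigma_v'(yz))
irreps = {
    "A1": np.array([1,  1,  1,  1]),
    "A2": np.array([1,  1, -1, -1]),
    "B1": np.array([1, -1,  1, -1]),
    "B2": np.array([1, -1, -1,  1]),
}
dipole = {"x": "B1", "y": "B2", "z": "A1"}   # how x, y, z transform in C2v

def contains_A1(chars):
    """Reduction formula: multiplicity of A1 in a (product) representation."""
    return int(round(np.dot(chars, irreps["A1"]) / 4)) > 0

def allowed_polarizations(initial, final):
    return [p for p, ir in dipole.items()
            if contains_A1(irreps[final] * irreps[ir] * irreps[initial])]

print(allowed_polarizations("A1", "B2"))   # ['y']  ground state -> B2 state
print(allowed_polarizations("A1", "A2"))   # []     one-photon forbidden
```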
Vibrational transitions in molecules
Similar considerations apply for vibrational transitions. Light polarized along the \(x\), \(y\), and \(z\) axes of the molecule may be used to excite vibrations with the same symmetry as the \(x\), \(y\) and \(z\) functions listed in the character table.
For example, in the \(C_{2v}\) point group, \(x\)-polarized light may be used to excite vibrations of \(B_1\) symmetry, \(y\)-polarized light to excite vibrations of \(B_2\) symmetry, and \(z\)-polarized light to excite vibrations of \(A_1\) symmetry. In \(H_2O\), we would use \(z\)-polarized light to excite the symmetric stretch and bending modes, and \(x\)-polarized light to excite the asymmetric stretch. Shining \(y\)-polarized light onto a molecule of \(H_2O\) would not excite any vibrational motion.
Raman Scattering
If there are vibrational modes in the molecule that may not be accessed using a single photon, it may still be possible to excite them using a two-photon process known as Raman scattering\(^9\). An energy level diagram for Raman scattering is shown below.
The first photon excites the molecule to some high-lying intermediate state, known as a
virtual state. Virtual states are not true stationary states of the molecule (i.e. they are not eigenfunctions of the molecular Hamiltonian), but they can be thought of as stationary states of the ‘photon + molecule’ system. These types of states are extremely short lived, and will quickly emit a photon to return the system to a stable molecular state, which may be different from the original state. Since there are two photons (one absorbed and one emitted) involved in Raman scattering, which may have different polarizations, the transition dipole for a Raman transition transforms as one of the Cartesian products \(x^2\), \(y^2\), \(z^2\), \(xy\), \(xz\), \(yz\) listed in the character tables.
Vibrational modes that transform as one of the Cartesian products may be excited by a Raman transition, in much the same way as modes that transform as \(x\), \(y\), or \(z\) may be excited by a
one-photon vibrational transition.
In \(H_2O\), all of the vibrational modes are accessible by ordinary one-photon vibrational transitions. However, they may also be accessed by Raman transitions. The Cartesian products transform as follows in the \(C_{2v}\) point group.
\[\begin{array}{clcl} A_1 & x^2, y^2, z^2 & B_1 & xz \\ A_2 & xy & B_2 & yz \end{array} \tag{27.5}\]
The symmetric stretch and the bending vibration of water, both of \(A_1\) symmetry, may therefore be excited by any Raman scattering process involving two photons of the same polarization (\(x\)-, \(y\)- or \(z\)-polarized). The asymmetric stretch, which has \(B_1\) symmetry, may be excited in a Raman process in which one photon is \(x\)-polarized and the other \(z\)-polarized.
\(^7\)It is important not to confuse
molecular orbitals (the energy levels that individual electrons may occupy within the molecule) with electronic states (arising from the different possible arrangements of all the molecular electrons amongst the molecular orbitals), e.g. the electronic states of \(NH_3\) are NOT the same thing as the molecular orbitals we derived earlier in the course. These orbitals were an incomplete set, based only on the valence \(s\) electrons in the molecule. Inclusion of the \(p\) electrons is required for a full treatment of the electronic states. The \(H_2O\) example above should hopefully clarify this point.
\(^8\)‘\(x\)-polarized’ means that the electric vector of the light (an electromagnetic wave) oscillates along the direction of the \(x\) axis.
\(^9\)You will cover Raman scattering (also known as Raman spectroscopy) in more detail in later courses. The aim here is really just to alert you to its existence and to show how it may be used to access otherwise inaccessible vibrational modes.
Contributors
Claire Vallance (University of Oxford)
|
Are there any famous problems/algorithms in scientific computing that cannot be sped up by parallelisation? It seems to me whilst reading books on CUDA that most things can be.
The central issue is the length of the critical path $C$ relative to the total amount of computation $T$. If $C$ is proportional to $T$, then parallelism offers at best a constant speed-up. If $C$ is asymptotically smaller than $T$, there is room for more parallelism as the problem size increases. For algorithms in which $T$ is polynomial in the input size $N$, the best case is $C \sim \log T$ because very few useful quantities can be computed in less than logarithmic time.
Examples

- $C = T$ for a tridiagonal solve using the standard algorithm. Every operation is dependent on the previous operation completing, so there is no opportunity for parallelism. Tridiagonal problems can be solved in logarithmic time on a parallel computer using a nested dissection direct solve, multilevel domain decomposition, or multigrid with basis functions constructed using harmonic extension (these three algorithms are distinct in multiple dimensions, but can exactly coincide in 1D).
- A dense lower-triangular solve with an $m\times m$ matrix has $T = N = \mathcal O(m^2)$, but the critical path is only $C = m = \sqrt T$, so some parallelism can be beneficial.
- Multigrid and FMM both have $T = N$, with a critical path of length $C = \log T$.
- Explicit wave propagation for a time $\tau$ on a regular mesh of the domain $(0,1)^d$ requires $k = \tau / \Delta t \sim \tau N^{1/d}$ time steps (for stability), therefore the critical path is at least $C = k$. The total amount of work is $T = k N = \tau N^{(d+1)/d}$. The maximum useful number of processors is $P = T/C = N$; the remaining factor $N^{1/d}$ cannot be recovered by increased parallelism.

Formal complexity
The NC complexity class characterizes those problems that can be solved efficiently in parallel (i.e., in polylogarithmic time). It is unknown whether $NC = P$, but it is widely hypothesized to be false. If this is indeed the case, then P-complete characterizes those problems that are "inherently sequential" and cannot be sped up significantly by parallelism.
To give a theoretical aspect to this, $NC$ is defined as the complexity class that is solvable in $O(\log^c n)$ time on a system with $O(n^k)$ parallel processors. It is still unknown whether $P=NC$ (although most people suspect it's not) where $P$ is the set of problems solvable in polynomial time. The "hardest" problems to parallelize are known as $P$-complete problems in the sense that every problem in $P$ can be reduced to a $P$-complete problem via $NC$ reductions. If you show that a single $P$-complete problem is in $NC$, you prove that $P=NC$ (although that's probably false as mentioned above).
So any problem that is $P$-complete would intuitively be hard to parallelize (although big speedups are still possible). A $P$-complete problem for which we don't have even very good constant factor speedups is Linear Programming (see this comment on OR-exchange).
Start by grokking Amdahl's Law. Basically, anything with a large number of serial steps will benefit insignificantly from parallelism. A few examples include parsing, regex matching, and most high-ratio compression.
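A minimal sketch of Amdahl's Law (my own helper, not from the answer): with serial fraction \(s\), the speedup on \(p\) processors is bounded by \(1/(s + (1-s)/p)\).

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup when a fraction of the work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 10% serial work, 1024 processors give less than a 10x speedup.
for p in (2, 16, 1024):
    print(p, round(amdahl_speedup(0.10, p), 2))
```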
Aside from that, the key issue is often a bottleneck in memory bandwidth. In particular, with most GPUs your theoretical flops vastly outstrip the rate at which you can feed floating point numbers to your ALUs, so algorithms with low arithmetic intensity (flops per memory access) will spend the vast majority of their time waiting on RAM.
Lastly, any time a piece of code requires branching, it is unlikely to get good performance, as ALUs typically far outnumber the control logic.
In conclusion, a really simple example of something that would be hard to speed up on a GPU is simply counting the number of zeros in an array of integers: you may have to branch often, you perform at most one operation (increment by one) when you find a zero, and you make at least one memory fetch per operation.
An example free of the branching problem is computing a vector which is the cumulative sum of another vector ( [1,2,1] -> [1,3,4] ), as sketched below.
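A tiny illustration of the contrast (my own code, using NumPy on the CPU; on a GPU the same contrast shows up between a branchy per-element loop and data-parallel reduction/scan primitives):

```python
import numpy as np

a = np.array([3, 0, 5, 0, 0, 7])

# Branchy, element-at-a-time version of "count the zeros"
count = 0
for v in a:
    if v == 0:
        count += 1

# Branch-free, data-parallel formulations of the two problems
count_vec = int((a == 0).sum())   # reduction instead of a per-element branch
prefix = np.cumsum(a)             # scan: the [1,2,1] -> [1,3,4] style cumulative sum

print(count, count_vec)           # 3 3
print(prefix)                     # [ 3  3  8  8  8 15]
```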
I don't know if these count as "famous" but there is certainly a large number of problems that parallel computing will not help you with.
The (famous) fast marching method for solving the Eikonal equation cannot be sped up by parallelization. There are other methods (for example fast sweeping methods) for solving the Eikonal equation that are more amenable to parallelization, but even here the potential for (parallel) speedup is limited.
The problem with the Eikonal equation is that the flow of information depends on the solution itself. Loosely speaking, the information flows along the characteristics (i.e. light rays in optics), but the characteristics depend on the solution itself. And the flow of information for the discretized Eikonal equation is even worse, requiring additional approximations (like implicitly present in fast sweeping methods) if any parallel speedup is desired.
To see the difficulties for parallelization, imagine a nice labyrinth like in some of the examples on Sethian's webpage. The number of cells on the shortest path through the labyrinth (probably) is a lower bound for the minimal number of steps/iterations of any (parallel) algorithm solving the corresponding problem.
(I write "(probably) is", because lower bounds are notoriously difficult to prove, and often require some reasonable assumptions on the operations used by an algorithm.)
Another class of problems that are hard to parallelize in practice are problems sensitive to rounding errors, where numerical stability is achieved by serialization.
Consider for example the Gram–Schmidt process and its serial modification, modified Gram–Schmidt. The algorithm works with vectors, so you might use parallel vector operations, but that does not scale well. If the number of vectors is large and the vector size is small, using parallel classical Gram–Schmidt with reorthogonalization might be stable and faster than a single modified Gram–Schmidt pass, although it involves doing several times more work.
|
A quadratic equation is an equation of the form \(a{x}^{2}+bx+c=0\), where x is the unknown, and a, b, and c are known numbers, with a ≠ 0. The numbers \(a\), \(b\) and \(c\) are the coefficients of the equation and are called, respectively, the quadratic coefficient, the linear coefficient and the constant term. Note that although \(a\) must not be zero, \(b\) or \(c\) could be. This means some quadratic equations might be missing the linear coefficient or the constant term, but they are perfectly valid. For example, \(2{x}^{2}-64=0\) lacks the linear coefficient (b=0), while \(3{x}^{2}+8x=0\) is missing the constant term (c=0). There are many ways to solve quadratic equations, such as through factoring, completing the square, or using the quadratic formula. In the following section, we will demonstrate using the quadratic formula.
Quadratic Formula
For a quadratic equation, which has the form \(a{x}^{2}+bx+c=0\), its solutions are described by the quadratic formula. In other words, the values of \(x\) follow the quadratic formula:

\(x=\frac{-b \pm \sqrt{{b}^{2}-4ac}}{2a}\)

In case you are not familiar with the \(\pm\) symbol, it indicates that the expression on the right side of the equation can be expanded into two expressions, one with "\(+\)" and one with "\(-\)". Therefore, there are in fact two solutions to the quadratic equation. To be explicit, they are as follows:

\(x=\frac{-b+\sqrt{{b}^{2}-4ac}}{2a}\) and \(x=\frac{-b-\sqrt{{b}^{2}-4ac}}{2a}\)
Quadratic Equation Practice
For practice, let’s solve the following quadratic equation using the quadratic formula.
\({x}^{2}+6x-8=3x+7\)
First, Let's move all the terms to one side, which gives:
\({x}^{2}+6x-8-3x-7=0\)
Then, simplify the equation again by combining terms:
\({x}^{2}+3x-15=0\)
Now, we have a quadratic equation that follows the form:
\(a{x}^{2}+bx+c=0\)
This means we can use the quadratic formula, because we see that \(a=1\), \(b=3\) and \(c=-15\). Substituting these values into the quadratic formula yields:
\(x=\frac{-3 \pm \sqrt{{3}^{2}-(4)(-15)}}{2}\)
After simplification, we have:
\(x=\frac{-3 \pm \sqrt{69}}{2}\)
We are done! We have found the solutions (values of x) of the original equation.
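The whole procedure also fits in a few lines of Python (a sketch of my own, not part of the original lesson); it reproduces the worked example above.

```python
import math

def solve_quadratic(a, b, c):
    """Return the two real solutions of ax^2 + bx + c = 0 (assumes b^2 - 4ac >= 0)."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# x^2 + 3x - 15 = 0  ->  x = (-3 ± sqrt(69)) / 2
print(solve_quadratic(1, 3, -15))   # approximately (2.653, -5.653)
```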
What's Next
Want to get better at solving quadratic equations and using the quadratic formula? Start with our practice problems at the top of this page and see if you can solve them. If you run into trouble, try the Cymath quadratic equation calculator to get the full solution and see the steps. At Cymath, it is our goal to help students get better at math. Ready to take your learning to the next level with “how” and “why” steps? Sign up for Cymath Plus today.
|
I have been using terms like underfitting/overfitting and bias-variance tradeoff for quite some while in data science discussions and I understand that underfitting is associated with high bias and over fitting is associated with high variance. But what is the reason of such association or in terms of a model what is high bias and high variance, How can one understand it intuitively?
Let us assume our model to be described by $y = f(x) +\epsilon$, with $E[\epsilon]=0, \sigma_{\epsilon}\neq 0$. Let furthermore $\hat{f}(x)$ be our regression function, i.e. the function whose parameters are the ones that minimise the loss (whatever this loss is). Given a new observation $x_0$, the expected error of the model is$$ E[(y-\hat{f}(x))^2|x=x_0].$$ This expression can be reduced (by means of more or less tedious algebra) to $$E[(y-\hat{f}(x))^2|x=x_0] = \sigma_{\epsilon}^2 + (E[\hat{f}(x_0)]-f(x_0))^2 + E[(\hat{f}(x_0)-E[\hat{f}(x_0)])^2]$$where the second term is the difference between the expected value of our estimator $\hat{f}$ and its true value (therefore the
bias of the estimator) and the last term is the definition of variance.
Now, for the sake of the example, consider a very complex model (say, a polynomial with many parameters or similar) which you are fitting to the training data. Because of the presence of these many parameters, the model can be adapted very closely to the training data (there are enough parameters to follow it almost point by point); as a consequence the bias term is reduced drastically. On the other hand, though, it is generally the case that whenever you have many parameters their least squares estimates come with high variance: as already mentioned, since they have been adapted so closely to the training data, they might not generalise well to new unseen data. Since we have many parameters (a complex model), a small error in each of them sums up to a big error in the overall prediction.
The converse situation may happen when one has a model that is very rigid (imagine very few parameters): the variances do not add up to much (because there are few of them), but the trade-off is that the estimated mean might not correspond closely to the true value of the regression function.
In the literature one refers to the former behaviour as
overfit, to the latter as underfit. In the description I have given, you can see that they may be related to the complexity of the model but need not necessarily be: you may well have particularly complex models that do not necessarily overfit (because of the way they are constructed; one above all is the random forest) and simple models that do not necessarily underfit (for instance linear regression when the data are actually linear).
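As a concrete (hypothetical) illustration of the two regimes, the sketch below fits a degree-1 and a degree-15 polynomial to the same noisy sample and compares train/test errors: the rigid model underfits (high bias) and the flexible one overfits (high variance). The data-generating function and degrees are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, size=n)
    y = np.sin(3 * x) + rng.normal(scale=0.2, size=n)
    return x, y

x_tr, y_tr = make_data(30)
x_te, y_te = make_data(200)

for degree in (1, 15):
    coeffs = np.polyfit(x_tr, y_tr, deg=degree)     # least-squares polynomial fit
    mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train MSE={mse_tr:.3f}  test MSE={mse_te:.3f}")
```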
Check out the answer provided by Brando Miranda in the following quora question:
"High variance means that your estimator (or learning algorithm) varies a lot depending on the data that you give it."
"Underfitting is the “opposite problem”. Underfitting usually arises because you want your algorithm to be somewhat stable, so you are trying to restrict your algorithm to much in some way. This might make it more robust to noise but if you restrict it too much it might miss legitimate information that your data is telling you. This usually results in bad train and test errors. Usually underfitting is also caused by biasing you model too much."
How can one understand it intuitively?
Underfitting is called a "simplifying assumption" (the model is HIGHLY BIASED towards its assumption). Your model will think a linear hyperplane is good enough to classify your data, which may not be true. Consider you are shown a picture of a cat 1000 times. Now you are blindfolded; no matter what you are shown the 1001st time, the probability that you will say "cat" is very high (you are HIGHLY BIASED that the next picture is also going to be a cat). It's because you believe it's going to be a cat anyway. Here you are making a simplifying assumption.
In statistics, variance informally means how far your data are spread out. Overfitting is like memorising 10 questions for your exam, and on exam day only one question from those 10 appears on the paper. You will answer that one question correctly, just like in the book, but you have no idea about the remaining questions (the questions are HIGHLY VARIED from what you read). In overfitting, the model will memorise the entire training data, so it will give high accuracy on train but will do poorly on test. Hope this helps.
|
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seem to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information on my posted answers. Will reply to you shortly in chat. — Wei Zhong29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0. Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its releasing process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would alternatively use an HTTP POST request, with the query JSON-encoded. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for the "query link" problem you addressed. The first is to use the browser back/forward buttons to navigate among the query history.
@MartinSleziak Second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort though.)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after the exact match.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabet letters.
@MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on Math StackExchange. This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question which software one uses should be a valid poll while the question which book one uses is not. — quid7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I saw such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
|
A unit of measurement is a definite amount of a physical quantity, defined and adopted by convention, that is used as a standard for measurement of the same physical quantity of any amount. A unit is given a universally recognised symbol that represents the definite amount of the physical quantity. For measurement, a pure number is written before the unit that indicates how many times the predefined amount of the concerned physical quantity is meant.

For example, length is a physical quantity. The metre is a unit of length that represents a definite predetermined length. The symbol of the metre is 'm'. When we say 10 metres or 10 m, we actually mean 10 times the definite predetermined length called the metre.

The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to this day. Disparate systems of measurement used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system. The SI has been or is in the process of being adopted throughout the world.
In trade, weights and measures is often a subject of governmental regulation, to ensure fairness and transparency. The Bureau international des poids et mesures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI). Metrology is the science for developing nationally and internationally accepted units of weights and measures.

In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures developed long ago for commercial purposes. Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life and indicate them more precisely. The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis).

In the social sciences, there are no standard units of measurement and the theory and practice of measurement is studied in psychometrics and the theory of conjoint measurement.
Systems of measurement

Traditional systems

Prior to the near global adoption of the metric system many different systems of measurement had been in use. Many of these were related to some extent or other. Often they were based on the dimensions of the human body according to the proportions described by Marcus Vitruvius Pollio. As a result, units of measure could vary not only from location to location, but from person to person.

Metric systems

A number of metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units. An important feature of modern systems is standardization. Each unit has a universally recognized size.

Both the Imperial units and US customary units derive from earlier English units. Imperial units were mostly used in the British Commonwealth and the former British Empire. US customary units are still the main system of measurement used in the United States despite Congress having legally authorized metric measure on 28 July 1866. Some steps towards US metrication have been made, particularly the redefinition of basic US units to derive exactly from SI units, so that in the US the inch is now defined as 0.0254 m (exactly), and the avoirdupois pound is now defined as 453.59237 g (exactly).
Natural systems

While the above systems of units are based on arbitrary unit values, formalised as standards, some unit values occur naturally in science. Systems of units based on these are called natural units. Similar to natural units, atomic units (au) are a convenient system of units of measurement used in atomic physics.

Also a great number of unusual and non-standard units may be encountered. These may include the Solar mass, the Megaton (1,000,000 tons of TNT), and the Hiroshima atom bomb.

Legal control of weights and measures

To reduce the incidence of retail fraud, many national statutes have standard definitions of weights and measures that may be used (hence "statute measure"), and these are verified by legal officers.
Base and derived units

Different systems of units are based on different choices of a set of fundamental units. The most widely used system of units is the International System of Units, or SI. There are seven SI base units. All other SI units can be derived from these base units.

For most quantities a unit is absolutely necessary to communicate values of that physical quantity. For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given.

But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units. Other units are derived units. Derived units are a matter of convenience, as they can be expressed in terms of basic units. Which units are considered base units is a matter of choice.

The base units of SI are actually not the smallest set possible. Smaller sets have been defined. For example, there are unit sets in which the electric and magnetic field have the same unit. This is based on physical laws that show that electric and magnetic field are actually different manifestations of the same phenomenon.
Calculations with units

Units as dimensions

Any value of a physical quantity is expressed as a comparison to a unit of that quantity. For example, the value of a physical quantity Z is expressed as the product of a unit [Z] and a numerical factor:

Z = n \times [Z] = n [Z].

The multiplication sign is usually left out, just as it is left out between variables in scientific notation of formulas. The conventions used to express quantities are referred to as quantity calculus. In formulas the unit [Z] can be treated as if it were a specific magnitude of a kind of physical dimension: see dimensional analysis for more on this treatment.

A distinction should be made between units and standards. A unit is fixed by its definition, and is independent of physical conditions such as temperature. By contrast, a standard is a physical realization of a unit, and realizes that unit only under certain physical conditions. For example, the metre is a unit, while a metal bar is a standard. One metre is the same length regardless of temperature, but a metal bar will be one metre long only at a certain temperature.
Guidelines

Treat units algebraically. Only add like terms. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metres per second (m/s). See dimensional analysis. A unit can be multiplied by itself, creating a unit with an exponent (e.g. m²/s²). Put simply, units obey the laws of indices. (See Exponentiation.)

Some units have special names, however these should be treated like their equivalents. For example, one newton (N) is equivalent to one kg·m/s². Thus a quantity may have several unit designations, for example: the unit for surface tension can be referred to as either N/m (newtons per metre) or kg/s² (kilograms per second squared). Whether these designations are equivalent is disputed amongst metrologists.

Expressing a physical value in terms of another unit

Conversion of units involves comparison of different standard physical values, either of a single physical quantity or of a physical quantity and a combination of other physical quantities.
Starting with:
Z = n_i \times [Z]_i
just replace the original unit [Z]_i with its meaning in terms of the desired unit [Z]_j, e.g. if [Z]_i = c_{ij} \times [Z]_j, then:

Z = n_i \times (c_{ij} \times [Z]_j) = (n_i \times c_{ij})\times [Z]_j

Now n_i and c_{ij} are both numerical values, so just calculate their product.
Or, which is just mathematically the same thing, multiply Z by unity, the product is still Z:

Z = n_i \times [Z]_i \times ( c_{ij} \times [Z]_j/[Z]_i )

For example, you have an expression for a physical value Z involving the unit feet per second ([Z]_i) and you want it in terms of the unit miles per hour ([Z]_j):

1. Find facts relating the original unit to the desired unit: 1 mile = 5280 feet and 1 hour = 3600 seconds.
2. Next use the above equations to construct a fraction that has a value of unity and that contains units such that, when it is multiplied with the original physical value, will cancel the original units: 1 = \frac{1\,\mathrm{mi}}{5280\,\mathrm{ft}}\quad\mathrm{and}\quad 1 = \frac{3600\,\mathrm{s}}{1\,\mathrm{h}}
3. Last, multiply the original expression of the physical value by the fraction, called a conversion factor, to obtain the same physical value expressed in terms of a different unit. Note: since valid conversion factors are dimensionless and have a numerical value of one, multiplying any physical quantity by such a conversion factor (which is 1) does not change that physical quantity.

52.8\,\frac{\mathrm{ft}}{\mathrm{s}} = 52.8\,\frac{\mathrm{ft}}{\mathrm{s}} \frac{1\,\mathrm{mi}}{5280\,\mathrm{ft}} \frac{3600\,\mathrm{s}}{1\,\mathrm{h}} = \frac {52.8 \times 3600}{5280}\,\mathrm{mi/h} = 36\,\mathrm{mi/h}
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:

\mathrm{\frac{9\,\rm{L}}{100\,\rm{km}}} = \mathrm{\frac{9\,\rm{L}}{100\,\rm{km}}} \mathrm{\frac{1000000\,\rm{\mu L}}{1\,\rm{L}}} \mathrm{\frac{1\,\rm{km}}{1000\,\rm{m}}} = \frac {9 \times 1000000}{100 \times 1000}\,\mathrm{\mu L/m} = 90\,\mathrm{\mu L/m}
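As a quick illustration of the conversion-factor manipulations in the two worked examples above, here is a minimal Python sketch (not part of the original article); the function and constant names are mine and purely illustrative.

FT_PER_MILE = 5280.0   # 1 mile = 5280 feet
S_PER_HOUR = 3600.0    # 1 hour = 3600 seconds

def feet_per_second_to_miles_per_hour(v_ft_per_s):
    # multiply by two conversion factors, each numerically equal to one
    return v_ft_per_s * (1.0 / FT_PER_MILE) * S_PER_HOUR

def litres_per_100km_to_microlitres_per_metre(fuel_l_per_100km):
    # 1 L = 1,000,000 uL and 1 km = 1000 m
    return fuel_l_per_100km * 1000000.0 / (100.0 * 1000.0)

print(feet_per_second_to_miles_per_hour(52.8))        # 36.0 mi/h
print(litres_per_100km_to_microlitres_per_metre(9))   # 90.0 uL/m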
Real-world implications

One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to the planet Mars in September 1999 instead of entering orbit, due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Enormous amounts of effort, time, and money were wasted.

On April 15, 1999, Korean Air cargo flight 6316 from Shanghai to Seoul was lost due to the crew confusing tower instructions (in metres) and altimeter readings (in feet). Three crew and five people on the ground were killed. Thirty-seven were injured.

In 1983, a Boeing 767 (which came to be known as the Gimli Glider) ran out of fuel in mid-flight because of two mistakes in figuring the fuel supply of Air Canada's first aircraft to use metric measurements. This accident is apparently the result of confusion both due to the simultaneous use of metric & Imperial measures as well as mass & volume measures.
|
I have been trying to find the coefficient of $a^2x^3$, in the expansion of $(a+x+c)^2(a+x+d)^2$, without success.
I am having trouble expanding the above expression, because I can't find a way to merge them into one.
How can I solve this? Thanks
Here is a variation to determine the coefficient of $a^2x^3$
without a full expansion of the expression. It is convenient to use the coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ in an expression.
We obtain \begin{align*} [a^2x^3]&(a+x+c)^2(a+x+d)^2\\ &=[a^2]\left([x^2](a+x+c)^2\right)\left([x^1](a+x+d)^2\right)\\ &\qquad +[a^2]\left([x^1](a+x+c)^2\right)\left([x^2](a+x+d)^2\right)\tag{1}\\ &=[a^2]\left(2(a+d)\right)+[a^2]\left(2(a+c)\right)\tag{2}\\ &=0 \end{align*}
Comment:
In (1) we observe that in order to obtain the coefficient of $x^3$ one factor has to contribute $x$ and the other factor has to contribute $x^2$. There are no other possibilities to obtain a term with $x^3$.
In (2) we select the coefficient of $x$ resp. $x^2$ according to the binomial formula \begin{align*} (a+x+u)^2=(a+u)^2+2(a+u)x+x^2 \end{align*} Since there is no term with a factor $a^2$ the result is $0$.
Well, I would just do this by `brute force', as they say, with a lot of distributing:
$(a+x+c)^2(a+x+d)^2=(a+x+c)(a+x+c)(a+x+d)(a+x+d)=(a+x+c)(a+x+c)[a(a+x+d)+x(a+x+d)+d(a+x+d)]$
and keep going from there. Does this pattern make sense?
You have $(a + x + c)^2(a + x + d)^2$ and want to find the coefficient of $a^2x^3$. In other words, you have four boxes: the first two each have 2 $a$s, 2 $x$s, and 2 $c$s, while the second two each have 2 $a$s, 2 $x$s, and 2 $d$s. You want to pick exactly one letter from each box and in the end have 2 $a$s and 3 $x$s. Obviously, this isn't possible because you need 5 letters, and thus the coefficient is 0.
(Suppose that you instead wanted the coefficient of $a^2x^2$. Then you would choose 2 of the 4 boxes to pick an $a$ from, and pick an $x$ from the other 2. This means the coefficient is the number of ways you can choose 2 from 4, $\binom{4}{2} = 6$.
What about $ac^2d$? The 2 $c$s can only come from the first two boxes, so you can take an $a$ and a $d$ from the second two, in a total of 2 ways.
This method is generalizable.)
It is easy to see that all monomials in your expression are of the form $a^{\alpha}c^{\gamma}d^{\delta}x^{\chi}$ for some non-negative integers $\alpha$, $\gamma$, $\delta$ and $\chi$ such that $\alpha + \gamma + \delta + \chi = 4$. You need the coefficient of $a^2x^3$, whose total degree is $5 \neq 4$; therefore the coefficient is $0$.
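For readers who want to double-check the combinatorial arguments above, here is a small sketch (mine, not part of any of the answers) that simply expands the product symbolically with sympy; the symbol names mirror the question.

import sympy as sp

a, x, c, d = sp.symbols('a x c d')
expr = sp.expand((a + x + c)**2 * (a + x + d)**2)

# coefficient of a^2 x^3: zero, since five letters would be needed from only four factors
print(expr.coeff(a, 2).coeff(x, 3))   # 0
# coefficient of a^2 x^2: binomial(4, 2) = 6, matching the boxes argument
print(expr.coeff(a, 2).coeff(x, 2))   # 6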
|
Example 1
Question:
Prove that there is only one circle with $AB$ as its diameter.
Assumption:
Assume that there are 3 circles $C_1$, $C_2$, and $C_3$ passing through the points $A$ and $B$. $C_1$ and $C_2$ are concentric and $C_1$ and $C_3$ are not concentric. $C_1$ and $C_2$ have different radii and $C_3$ has any radius. Let $C_1$ be centered at the midpoint of $AB$ so that $AB$ is its diameter.
Contradicting Arguments:
As $C_1$ and $C_2$ have different radii, points $A$ and $B$ cannot be on the circle $C_2$.
As the center of $C_3$ is not the midpoint of $AB$, $AB$ cannot be its diameter.
Conclusion:
So there is only one circle $C_1$ with $AB$ as its diameter.
Example 2
Question:
Prove that $\sqrt{2}$ is an irrational number.
Assumption:
Let $\sqrt 2$ be a rational number. So it can be represented as $\sqrt{2}=\frac{m}{n}$ where $m$ and $n$ are natural numbers without common factors other than $1$.
Contradicting Arguments:
Squaring both sides, we get\begin{align}2 &=\frac{m^2}{n^2}\\m^2 &= 2n^2\end{align}
Because $m^2$ is a multiple of $2$, $m^2$ is an even number. Recall that
the square of an odd number is odd.
By the contrapositive, this implies that $m$ is also even. Let $m=2k$ where $k$ is a natural number. Substituting this for $m$, we get\begin{align}(2k)^2 &=2n^2\\4k^2 &= 2 n^2\\n^2 &= 2k^2\end{align}
By the same reasoning, $n$ is even. As both $m$ and $n$ are even numbers, $2$ is a common factor, which contradicts the assumption that they have no common factors other than 1.
Conclusion:
$\sqrt 2$ cannot be represented as a ratio of two natural numbers without common factor other than 1. It implies that $\sqrt 2$ is irrational.
|
I am having some problems with both the notation and the geometrical side.
1) I don't know what kind of objects $N,L$ are precisely. In Lemma 1.1 the author says that $L$ is a smooth $(1,0)$-tangent vector field on $\partial \Omega$, so I think it should be something like $$ L=\sum_{j=1}^na_j\frac{\partial}{\partial z_j} $$ where the $a_j$'s are some smooth functions. Are they complex valued?
2) At the end of page 3, he writes $$ \operatorname{Hess}_{\rho}(L,N)=g(\nabla_L\nabla r,N) $$ where $g$ stands for the euclidean metric in $\Bbb C^n$. Thus $g$ should represent the standard scalar product between its first entry and the conjugate of the second, that is $$ \nabla_L\nabla r\cdot \overline N, $$ am I right? What does $$ \nabla_L $$ mean?
3) Let us look at the proof of Lemma 1.1; for what reason should $N_r-\overline{N_r}$ be tangent to $\partial\Omega$? How should I check this? Why does this imply $(N_r-\overline{N_r})\overline L\rho=0$? Does "$z$ approaches $p$ from the normal direction" play any relevant role in the computations?
I would have other questions about it, but I hope these questions could help you to understand my gaps, and hopefully give an answer or some suitable reference.
I know my questions are basic, but the paper is MO-level material, I think; that's why I wrote here instead of on MSE.
|
Consider the equations of motion
$$\begin{cases} \dot{x}(t) & = v(t) \\ \dot{v}(t) & = -\frac{\lambda}{m} v(t) + \frac{1}{m}F^{c}(x(t)) = a(x(t), v(t)) \end{cases},$$
where $x$ is the position, $v$ is the velocity, $m$ is the mass, $\lambda$ is the damping coefficient, $F^c(x(t))$ is a conservative force and $a$ is the net acceleration.
One way to numerically integrate the equations of motion is the Velocity Verlet algorithm (VV). Let $x_i = x(i\Delta t)$ and $v_i = v(i \Delta t)$, where $\Delta t > 0$ is the desired time step. VV steps are the following:
$x_{i+1} = x_i + v_i \Delta t + \frac{1}{2}a(x_i, v_i) \Delta t^2$ $v_{inter} = v_i + \frac{1}{2}a(x_i, v_i) \Delta t$ $v_{i+1} = v_{inter} + \frac{1}{2}a(x_{i+1}, v_{inter}) \Delta t$
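A minimal Python sketch of these three steps (mine, not taken from the question) may make the bookkeeping concrete; the damped harmonic conservative force $F^c(x) = -kx$ and the parameter names m, lam, k, dt are illustrative assumptions.

import numpy as np

def velocity_verlet(x0, v0, m=1.0, lam=0.1, k=1.0, dt=1e-3, n_steps=10000):
    def accel(x, v):
        # a(x, v) = (F_c(x) - lambda * v) / m with F_c(x) = -k * x assumed
        return (-k * x - lam * v) / m

    xs = np.empty(n_steps + 1)
    vs = np.empty(n_steps + 1)
    xs[0], vs[0] = x0, v0
    for i in range(n_steps):
        a_i = accel(xs[i], vs[i])
        xs[i + 1] = xs[i] + vs[i] * dt + 0.5 * a_i * dt**2        # position update
        v_half = vs[i] + 0.5 * a_i * dt                           # intermediate velocity
        vs[i + 1] = v_half + 0.5 * accel(xs[i + 1], v_half) * dt  # final velocity
    return xs, vs

xs, vs = velocity_verlet(x0=1.0, v0=0.0)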
How does this scheme change when we also consider the presence of a random force? Formally, the equations of motion are: $$\begin{cases} \dot{x}(t) & = v(t) \\ \dot{v}(t) & = -\frac{\lambda}{m}v(t) + \frac{1}{m}F^{c}(x(t)) + \frac{1}{m}F^{r} = a(x(t), v(t), F^r) \end{cases},$$
where $F^r$ is a random force with zero mean and variance $\sigma^2$.
Moreover, given a generic numerical integrator
$$\begin{cases} x_{i+1} & = f(x_i, v_i) \\ v_{i+1} & = g(x_i, v_i) \end{cases}$$
how does it change when we also consider the presence of random forces?
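As a purely illustrative sketch (mine, and not a validated stochastic integrator), one naive reading of the modified equations is to draw $F^r$ once per step from a zero-mean normal distribution with variance $\sigma^2$ and feed it into the same Velocity Verlet update; proper Langevin integrators treat the noise term more carefully, so this only shows where the random force would enter.

import numpy as np

rng = np.random.default_rng(0)

def accel(x, v, f_r, m=1.0, lam=0.1, k=1.0):
    # a(x, v, F_r) = (F_c(x) + F_r - lambda * v) / m with F_c(x) = -k * x assumed
    return (-k * x + f_r - lam * v) / m

def noisy_vv_step(x, v, dt=1e-3, sigma=0.5):
    f_r = rng.normal(0.0, sigma)   # random force held fixed during the step
    a0 = accel(x, v, f_r)
    x_new = x + v * dt + 0.5 * a0 * dt**2
    v_half = v + 0.5 * a0 * dt
    v_new = v_half + 0.5 * accel(x_new, v_half, f_r) * dt
    return x_new, v_new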
|
Quantitative Biology > Biomolecules
Title: Void distributions reveal structural link between jammed packings and protein cores
(Submitted on 31 Oct 2018)
Abstract: Dense packing of hydrophobic residues in the cores of globular proteins determines their stability. Recently, we have shown that protein cores possess packing fraction $\phi \approx 0.56$, which is the same as dense, random packing of amino acid-shaped particles. In this article, we compare the structural properties of protein cores and jammed packings of amino acid-shaped particles in much greater depth by measuring their local and connected void regions. We find that the distributions of surface Voronoi cell volumes and local porosities obey similar statistics in both systems. We also measure the probability that accessible, connected void regions percolate as a function of the size of a spherical probe particle and show that both systems possess the same critical probe size. By measuring the critical exponent $\tau$ that characterizes the size distribution of connected void clusters at the onset of percolation, we show that void percolation in packings of amino acid-shaped particles and protein cores belong to the same universality class, which is different from that for void percolation in jammed sphere packings. We propose that the connected void regions of proteins are a defining feature of proteins and can be used to differentiate experimentally observed proteins from decoy structures that are generated using computational protein design software. This work emphasizes that jammed packings of amino acid-shaped particles can serve as structural and mechanical analogs of protein cores, and could therefore be useful in modeling the response of protein cores to cavity-expanding and -reducing mutations. Submission historyFrom: John Treado [view email] [v1]Wed, 31 Oct 2018 19:29:05 GMT (5732kb,D)
|
Let $\left\{ A_{n}\right\} $ be a sequence of Lebesgue measurable sets on $\mathbb{R}$ such that $\mu\left(A_{n}\right)<\infty$ for every integer $n$. Let
$$A={ \bigcup_{m=1}^{\infty}}{ \left(\bigcap_{k\geq m}^{\infty}A_{k}\right)}$$
Do we have the following inequality:
$$ \mu(A) \leq \liminf_{n\to\infty} \mu(A_n) ?$$
And can
$$\mu(A) < \liminf_{n\to\infty}\mu(A_n)?$$
My question is the second inequality (the first is well-known).
Thank you in advance.
|
One usually
starts from the CCR for the creation/annihilation operators and derives from there the commutation rules for the fields.However, one can start from either (see for example here about this).Suppose we want then to start from the equal-time anticommutation rules for a Dirac field $\psi_\alpha(x)$:$$ \tag{1} \{ \psi_\alpha(\textbf{x}), \psi_\beta^\dagger(\textbf{y}) \} = \delta_{\alpha \beta} \delta^3(\textbf{x}-\textbf{y}),$$where $\psi_\alpha(x)$ has an expansion of the form$$ \tag{2} \psi_\alpha(x) = \int \frac{d^3 p} {(2\pi)^3 2E_\textbf{p}} \sum_s\left\{ c_s(p) [u_s(p)]_\alpha e^{-ipx} + d_s^\dagger(p) [v_s(p)]_\alpha e^{ipx} \right\}$$or more concisely$$ \psi(x) = \int d\tilde{p} \left( c_p u_p e^{-ipx} + d_p^\dagger v_p e^{ipx} \right), $$
and we want to
derive the CCR for the creation/annihilation operators:$$ \tag{3} \{ a_s(p), a_{s'}^\dagger(q) \} = (2\pi)^3 (2 E_p) \delta_{s s'}\delta^3(\textbf{p}-\textbf{q}).$$To do this, we want to express $a_s(p)$ in terms of $\psi(x)$. We have:$$ \tag{4} a_s(\textbf{k}) = i \bar{u}_s(\textbf{k}) \int d^3 x \left[ e^{ikx} \partial_0 \psi(x) - \psi(x) \partial_0 e^{ikx} \right]\\= i \bar{u}_s(\textbf{k}) \int d^3 x \,\, e^{ikx} \overset{\leftrightarrow}{\partial_0} \psi(x) $$$$ \tag{5} a_s^\dagger (\textbf{k}) = -i \bar{u}_s(\textbf{k}) \int d^3 x \left[ e^{-ikx} \partial_0 \psi(x) - \psi(x) \partial_0 e^{-ikx} \right] \\=-i \bar{u}_s(\textbf{k}) \int d^3 x \,\, e^{-ikx} \overset{\leftrightarrow}{\partial_0} \psi(x) $$which you can verify by pulling the expansion (2) into (4) and (5).Note that these hold for any $x_0$ on the RHS.
Now you just have to insert in the anticommutator on the LHS of
(3) these expressions and use (1) (I can expand a little on this calculation if you need it).
most sources simply 'pull the $u, u^\dagger$ out of the commutators' to get (anti)commutators of only the creation/annihilation operators. How is this justified?
There is a big difference between a polarization spinor $u$ and a creation/destruction operator $c,c^\dagger$.
For fixed polarization $s$ and momentum $\textbf{p}$, $u_s(\textbf{p})$ is a four-component spinor, meaning that $u_s(\textbf{p})_\alpha \in \mathbb{C}$ for each $\alpha=1,2,3,4$. Conversely, for fixed polarization $s$ and momentum $\textbf{p}$, $c_s(\textbf{p})$ is an operator in the Fock space.
Not just a number, which makes meaningful wondering about (anti)commutators.
|
One of my data analysis pet peeves is false precision. Just because it is possible to calculate a quantity to three decimal places doesn't mean all of those decimal places are meaningful. This post explores how much precision is justified in the context of two common sports statistics: batting average in Major League Baseball and save percentage in the National Hockey League. Using Bayesian hierarchical models, we find that though these quantities are conventionally calculated to three decimal places, only the first two decimal places of precision are justified.
%matplotlib inline
import arviz as az
from matplotlib import pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import numpy as np
import pandas as pd
import pymc3 as pm
import scipy as sp
import seaborn as sns

sns.set(color_codes=True)
svpct_formatter = ba_formatter = StrMethodFormatter("{x:.3f}")
We begin by loading hitting data for the 2018 MLB season from Baseball Reference.
def get_data_url(filename): return f"https://www.austinrochford.com/resources/sports_precision/{filename}"
def load_data(filepath, player_col, usecols):
    df = pd.read_csv(filepath, usecols=[player_col] + usecols)

    return (pd.concat((df[player_col]
                         .str.split('\\', expand=True)
                         .rename(columns={0: 'name', 1: 'player_id'}),
                       df.drop(player_col, axis=1)),
                      axis=1)
              .rename(columns=str.lower)
              .groupby(['player_id', 'name'])
              .first()  # players that switched teams have their entire season stats listed once per team
              .reset_index('name'))
mlb_df = load_data(get_data_url('2018_batting.csv'), 'Name', ['AB', 'H'])
mlb_df.head()
            name             ab   h
player_id
abreujo02   Jose Abreu       499  132
acunaro01   Ronald Acuna     433  127
adamewi01   Willy Adames     288  80
adamja01    Jason Adam       0    0
adamsau02   Austin L. Adams  0    0
batter_df = mlb_df[mlb_df['ab'] > 0]
n_player, _ = batter_df.shape
This data set covers nearly 1,000 MLB players.
n_player
984
Batting average(https://en.wikipedia.org/wiki/Batting_average_(baseball%29) is the most basic summary of a player’s batting performance and is defined as their number of hits divided by their number of at bats. In order to assess the amount of precision that is justified when calculating batting average, we build a hierarchical logistic model. Let \(n_i\) be the number of at bats for the \(i\)-th player and let \(y_i\) be their number of hits. Our model is
\[ \begin{align*} \mu_{\eta} & \sim N(0, 5^2) \\ \sigma_{\eta} & \sim \textrm{Half-}N(2.5^2) \\ \eta_i & \sim N(\mu, \sigma_{\eta}^2) \\ \textrm{ba}_i & = \textrm{sigm}(\eta_i) \\ y_i\ |\ n_i & \sim \textrm{Binomial}(n_i, \textrm{ba}_i). \end{align*} \]
def hierarchical_normal(name, shape, μ=None):
    if μ is None:
        μ = pm.Normal(f"μ_{name}", 0., 5.)

    Δ = pm.Normal(f"Δ_{name}", shape=shape)
    σ = pm.HalfNormal(f"σ_{name}", 2.5)

    return pm.Deterministic(name, μ + Δ * σ)
with pm.Model() as mlb_model:
    η = hierarchical_normal("η", n_player)
    ba = pm.Deterministic("ba", pm.math.sigmoid(η))

    hits = pm.Binomial("hits", batter_df['ab'], ba, observed=batter_df['h'])
We proceed to sample from the model’s posterior distribution.
CHAINS = 3
SEED = 88564  # from random.org, for reproducibility

SAMPLE_KWARGS = {
    'draws': 1000,
    'tune': 1000,
    'chains': CHAINS,
    'cores': CHAINS,
    'random_seed': list(SEED + np.arange(CHAINS))
}
with mlb_model: mlb_trace = pm.sample(**SAMPLE_KWARGS)
Auto-assigning NUTS sampler...Initializing NUTS using jitter+adapt_diag...Multiprocess sampling (3 chains in 3 jobs)NUTS: [σ_η, Δ_η, μ_η]Sampling 3 chains: 100%|██████████| 6000/6000 [00:44<00:00, 133.94draws/s]
Before drawing conclusions from the posterior samples, we use
arviz to verify that there are no obvious problems with the sampler diagnostics.
az.plot_energy(mlb_trace);
az.gelman_rubin(mlb_trace).max()
<xarray.Dataset>
Dimensions:  ()
Data variables:
    μ_η      float64 1.01
    Δ_η      float64 1.0
    σ_η      float64 1.0
    η        float64 1.0
    ba       float64 1.0
First we’ll examine the posterior distribution of Mike Trout’s batting average.
fig, ax = plt.subplots(figsize=(8, 6))

trout_ix = (batter_df.index == 'troutmi01').argmax()
ax.hist(mlb_trace['ba'][:, trout_ix], bins=30, alpha=0.5);
ax.vlines(batter_df['h']
                   .div(batter_df['ab'])
                   .loc['troutmi01'],
          0, 275,
          linestyles='--', label="Actual batting average");

ax.xaxis.set_major_formatter(ba_formatter);
ax.set_xlabel("Batting average");
ax.set_ylabel("Posterior density");

ax.legend();
ax.set_title("Mike Trout");
We see that the posterior places significant mass between .260 and .320, quite a wide range of batting averages. This range roughly corresponds to the 95% credible interval for his 2018 batting average.
np.percentile(mlb_trace['ba'][:, trout_ix], [2.5, 97.5])
array([ 0.25516468, 0.32704036])
We will use the width of the 95% credible interval for each player’s batting average to determine how many digits of precision are justified.
mlb_df = batter_df.assign( width_95=sp.stats.iqr(mlb_trace["ba"], axis=0, rng=(2.5, 97.5)))
The following plot shows the width of these intervals, grouped by the number of at bats the player had in 2018.
def plot_ci_width(grouped, width):
    fig, ax = plt.subplots(figsize=(8, 6))

    low = grouped.quantile(0.025)
    high = grouped.quantile(0.975)

    ax.fill_between(low.index, low, high,
                    alpha=0.25,
                    label=f"{width:.0%} interval");

    grouped.mean().plot(ax=ax, label="Average")

    ax.set_ylabel("Width of 95% credible interval");
    ax.legend(loc=0);

    return ax
ax = plot_ci_width(mlb_df['width_95'].groupby(mlb_df['ab'].round(-2)), 0.95)

ax.set_xlim(0, mlb_df['ab'].max());
ax.set_xlabel("At bats");

ax.set_ylim(bottom=0.);
ax.yaxis.set_major_formatter(ba_formatter);

ax.set_title("Batting average");
We see that, on average, about 100 at bats are required to justify a single digit of precision in a player’s batting average. Even in the limit of very many at bats (600 at bats corresponds to just under four at bats per game across a 162 game season) the 95% credible interval has an average width approaching 0.060. This limit indicates that batting average is at most meaningful to the second digit, and even the second digit has a fair bit of uncertainty. This result is not surprising; calculating batting average to three decimal places is a historical convention, but I don’t think many analysts rely on the third digit for their decisions/arguments. While intuitive, it is pleasant to have a somewhat rigorous justification for this practice.
We apply a similar analysis to save percentage in the NHL. First we load 2018 goaltending data from Hockey Reference.
nhl_df = load_data(get_data_url('2017_2018_goalies.csv'), 'Player', ['SA', 'SV'])
nhl_df.head()
            name               sa    sv
player_id
allenja01   Jake Allen         1614  1462
andercr01   Craig Anderson     1768  1588
anderfr01   Frederik Andersen  2211  2029
appleke01   Ken Appleby        55    52
bernijo01   Jonathan Bernier   1092  997
n_goalie, _ = nhl_df.shape
This data set consists of the goaltending performance of just under 100 players.
n_goalie
95
Our save percentage model is almost identical to the batting average model. Let \(n_i\) be the number of at shots the \(i\)-th goalie faced and let \(y_i\) be the number of saves they made. The model is
\[ \begin{align*} \mu_{\eta} & \sim N(0, 5^2) \\ \sigma_{\eta} & \sim \textrm{Half-}N(2.5^2) \\ \eta_i & \sim N(\mu, \sigma_{\eta}^2) \\ \textrm{svp}_i & = \textrm{sigm}(\eta_i) \\ y_i\ |\ n_i & \sim \textrm{Binomial}(n_i, \textrm{svp}_i). \end{align*} \]
with pm.Model() as nhl_model:
    η = hierarchical_normal("η", n_goalie)
    svp = pm.Deterministic("svp", pm.math.sigmoid(η))

    saves = pm.Binomial("saves", nhl_df['sa'], svp, observed=nhl_df['sv'])
with nhl_model: nhl_trace = pm.sample(nuts_kwargs={'target_accept': 0.9}, **SAMPLE_KWARGS)
Auto-assigning NUTS sampler...Initializing NUTS using jitter+adapt_diag...Multiprocess sampling (3 chains in 3 jobs)NUTS: [σ_η, Δ_η, μ_η]Sampling 3 chains: 100%|██████████| 6000/6000 [00:17<00:00, 338.38draws/s]
Once again, the convergence diagnostics show no cause for concern.
az.plot_energy(nhl_trace);
az.gelman_rubin(nhl_trace).max()
<xarray.Dataset>
Dimensions:  ()
Data variables:
    μ_η      float64 1.0
    Δ_η      float64 1.0
    σ_η      float64 1.0
    η        float64 1.0
    svp      float64 1.0
We examine the posterior distribution of Sergei Bobrovsky’s save percentage.
fig, ax = plt.subplots(figsize=(8, 6))

bobs_ix = (nhl_df.index == 'bobrose01').argmax()
ax.hist(nhl_trace['svp'][:, bobs_ix], bins=30, alpha=0.5);
ax.vlines(nhl_df['sv']
                .div(nhl_df['sa'])
                .loc['bobrose01'],
          0, 325,
          linestyles='--', label="Actual save percentage");

ax.xaxis.set_major_formatter(ba_formatter);
ax.set_xlabel("Save percentage");
ax.set_ylabel("Posterior density");

ax.legend(loc=2);
ax.set_title("Sergei Bobrovsky");
We see that the posterior places significant mass between .905 and .925. We see below that the best and worst save percentages (for goalies that faced at least 200 shots in 2018) are separated by about 0.070.
(nhl_df['sv'] .div(nhl_df['sa']) [nhl_df['sa'] > 200] .quantile([0., 1.]))
0.0    0.866995
1.0    0.933712
dtype: float64
Sergei Bobrovsky’s 0.020-wide credible interval is a significant proportion of this 0.070 total range.
np.percentile(nhl_trace['svp'][:, bobs_ix], [2.5, 97.5])
array([ 0.90683748, 0.92526507])
As with batting average, we plot the width of each goalie’s interval, grouped by the number of shots they faced.
nhl_df = nhl_df.assign( width_95=sp.stats.iqr(nhl_trace["svp"], axis=0, rng=(2.5, 97.5)))
ax = plot_ci_width(nhl_df['width_95'].groupby(nhl_df['sa'].round(-2)), 0.95)

ax.set_xlim(0, nhl_df['sa'].max());
ax.set_xlabel("Shots against");

ax.set_ylim(bottom=0.);
ax.yaxis.set_major_formatter(svpct_formatter);

ax.set_title("Save percentage");
This plot shows that even goalies that face many (2000+) shots have credible intervals wider than 0.010, a significant proportion of the total variation between goalies.
This post is available as a Jupyter notebook here.
|
$$\begin{cases}&-z^{\prime\prime}(t)=\lambda(1+(N-2)t)^{\frac1{2-N}(2(N-1)+\alpha)}f(z(t)),\quad t\in(0,+\infty)\\&z(0)=z^\prime(+\infty)=0\end{cases}$$
I'm trying to solve the above differential equation. I'm using the shooting method, but for each initial guess a different solution appears. Which solution is correct? Is it possible to monitor the convergence rate and the iteration steps of the shooting method? Any help or hint is welcome!
Here is the code:
NN = 3.;p = 3.;xf = 10000.;sols = Map[{z -> NDSolveValue[{-z''[ t] == (1 + (NN - 2) t)^(1/(2 - NN) (2 (NN - 1) + 1)) z[t]^ p, z[0] == 0, z'[xf] == 0}, z, {t, 0, xf}, Method -> {"Shooting", "StartingInitialConditions" -> {z[0] == 0, z'[0] == #}}]} &, {0, 5, 10, 20, 40, 60, 140}];Plot[Evaluate[z[t] /. sols], {t, 0, 100}, PlotRange -> All]
|
Just this week I started a Data Structures and Algorithms course at my alma mater, San Jose State. And in just this short week I have noticed that one's ability to properly evaluate one's code will take you a long way. Moreover, the skills developed through practicing the analysis of algorithms can go far in other disciplines and studies.
A firm understanding of and comfort with summations is fundamental to properly analyzing one's algorithms. In fact, coming from an EE background, I have seen summations many times. And I have always felt as though I understood their underlying principle and implications. However, with that being said, I want to refresh myself; more specifically, to create a document that I can later refer to and add to if needed.
Prior to this class, I had always thought of summations as discrete representations of analog integrals. However, when I naively assumed in the first reading that the trivial summation $$ \sum_{i=1}^{n} i$$ was easily translated to the continuous integral $$ \int_1^n i \, di, $$ I quickly realized I was wrong once I did the math. For, $$ \sum_{i=1}^{n} i = \frac{n(n+1)}{2},\text{ (1) and }\int_1^n i \, di = \frac{n^2-1}{2} \text{ (2)}$$
At that point I figured it would be a great investment to really look at how the definite continuous integral relates back to the discrete summation.
Definition:
Much of what I read in preparation for this post can be found here. Thank you UC Davis.
According to the resources linked above, the definition of a definite integral in terms of discrete summations is as follows: $$ \int_a^b f(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(c_{i})\cdot \Delta x_{i}\text{ (3)}$$ where $$ \Delta x_{i} = \frac{b-a}{n},\text{ the length of the step interval, (4)}$$ and $$ c_{i} = a + \left(\frac{b-a}{n}\right)i,\text{ the right-end point of the sampling interval. (5)}$$
Objective:
1. Solve one of the sample problems from the link using eq. (3).
2. Then work backwards from (1) with (3) to get the equivalent continuous-integral representation (2).

Solve Example
I decided to solve problem 2. It had a representation similar to the continuous equation which I thought eq. (1) would yield, so I figured doing an example similar to my problem would offer more insight.
The UC Davis site contains solutions, so for an in-depth solution I suggest visiting their site. Also, there are many more solved examples to practice on.
Problem 2:
Use the limit definition of definite integral to evaluate $\int_0^1 (2x + 3) dx$
With $f(x) = 2x + 3$, $a=0$ and $b=1$, we get $$x = c_{i} = a + \left(\frac{b-a}{n}\right)i = \frac{i}{n} $$ and $$ \Delta x_{i} = \frac{b-a}{n} = \frac{1}{n}.$$ Combining these equations in the right-hand side of eq. (3), we obtain $$ \lim_{n\to\infty} \sum_{i=1}^n \left(\frac{2i}{n^2} + \frac{3}{n}\right) = \lim_{n\to\infty} \left( \frac{2}{n^2} \sum_{i=1}^n i + \sum_{i=1}^n \frac{3}{n} \right). $$ From here, one can view the detailed solution at the UC Davis link. I merely present up to this point to show how I set up the problem and used eqs. (3), (4), (5).
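A quick numeric sanity check (mine, not from the original post) of the two observations above -- that eq. (1) and eq. (2) genuinely disagree, and that the Riemann sums of eq. (3) for this problem approach the exact value 4:

n = 1000
discrete_sum = n * (n + 1) / 2          # eq. (1)
continuous_integral = (n**2 - 1) / 2    # eq. (2)
print(discrete_sum - continuous_integral)   # (n + 1) / 2 = 500.5, so they differ

def riemann_sum(n):
    # right-endpoint sum for f(x) = 2x + 3 on [0, 1], following eqs. (3)-(5)
    return sum((2 * (i / n) + 3) * (1 / n) for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, riemann_sum(n))   # 4.1, 4.01, 4.001 -> approaches 4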
|
Visualizing high-dimensional data is a demanding task since we are restricted to our three-dimensional world. A common approach to tackle this problem is to apply some dimensionality reduction algorithm first. This maps \(n\) data points \(\fvec{x}_i \in \mathbb{R}^d\) in the feature space to \(n\) projection points \(\fvec{y}_i \in \mathbb{R}^r\) in the projection space. If we choose \(r \in \{1,2,3\}\), we reach a point where we can successfully visualize the data. However, this mapping does not come at no cost since it is just not possible to visualize high-dimensional data in a low-dimensional space without the loss of at least some information. Hence, different algorithms focus on different aspects. \(t\)-Distributed Stochastic Neighbor Embedding (\(t\)-SNE) [video introduction] is such an algorithm which tries to preserve local neighbour relationships at the cost of distance or density information.
The general idea is to use probabilities for both the data points and the projections which reflect the local neighbourhood. For the data points, conditional probabilities\begin{equation} \label{eq:ProbCondFeature} p_{j|i} = \frac {e^{-\left\| \fvec{x}_i - \fvec{x}_j \right\|^2 / 2\sigma_i^2}} {\sum_{k \neq i} e^{-\left\| \fvec{x}_i - \fvec{x}_k \right\|^2 / 2\sigma_i^2}} \quad \text{with} \quad p_{i|i} = 0. \end{equation}
are used and symmetrized\begin{equation*} p_{ij} = \frac{p_{j|i} + p_{i|j}}{2 n}. \end{equation*}
The parameter \(\sigma_i\) controls how many neighbours are considered in the probability distribution \(p_{j|i}\) for each point. This is demonstrated in the following animation for a two-dimensional example dataset.
We also define probabilities for the projection points\begin{equation} \label{eq:ProbProj} q_{ij} = \frac{1}{n} \frac {\left( 1 + \left\| \fvec{y}_i - \fvec{y}_j \right\|^2 \right)^{-1}} {\sum_{k \neq i} \left( 1 + \left\| \fvec{y}_i - \fvec{y}_k \right\|^2 \right)^{-1}} \quad \text{with} \quad q_{ii} = 0. \end{equation}
Our goal is to find points with corresponding probabilities \(p_{ij} = q_{ij}\). Even though it is unlikely that we get equality here, we can still try to make the two probability distributions \(P\) and \(Q\) as close as possible. Note that the probabilities \(p_{ij}\) are fixed and so we already know how the matrix \(P\) looks like.
What is left is to change our projections points \(\fvec{y}_i\) so that \(Q\) is more like \(P\). To achieve this goal, \(t\)-SNE defines a cost function based on the Kullback–Leibler divergence\begin{equation} \label{eq:KLDivergence} D_Q(P) = \sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij} \cdot \log_2\left( \frac{p_{ij}}{q_{ij}} \right) \end{equation}
and calculates the gradient for this function\begin{equation} \label{eq:Gradient} \frac{\partial D_Q(P)}{\partial \fvec{y}_i} = 4 \sum_{j=1}^{n} (p_{ij} - q_{ij}) (\fvec{y}_i - \fvec{y}_j) \left( 1 + \left\| \fvec{y}_i - \fvec{y}_j \right\|^2 \right)^{-1}. \end{equation}
This allows defining learning rules for randomly initialized projection points \(\fvec{y}_i(0)\)\begin{align*} \fvec{y}_i(t + 1) &= \fvec{y}_i(t) - \eta \cdot \frac{\partial D_Q(P)}{\partial \fvec{y}_i} \\ &= \fvec{y}_i(t) - \eta \cdot 4 \sum_{j=1}^{n} (p_{ij} - q_{ij}(t)) (\fvec{y}_i(t) - \fvec{y}_j(t)) \left( 1 + \left\| \fvec{y}_i(t) - \fvec{y}_j(t) \right\|^2 \right)^{-1}. \end{align*}
\(\eta > 0\) is a learning rate controlling the speed of convergence of the algorithm (but setting this value too high may cause problems). How the complete algorithm iterates to a solution for the example dataset is shown in the following animation.
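A compact NumPy sketch (mine, not from the original article) of one such gradient-descent step, following the definitions of \(q_{ij}\) and the gradient above; the matrix P of the \(p_{ij}\) is assumed to be precomputed and symmetrized, and the uniform P used below is only a placeholder so the snippet runs on its own.

import numpy as np

def tsne_step(Y, P, eta=10.0):
    n = Y.shape[0]
    diff = Y[:, None, :] - Y[None, :, :]            # y_i - y_j, shape (n, n, r)
    w = 1.0 / (1.0 + np.sum(diff**2, axis=-1))      # (1 + ||y_i - y_j||^2)^(-1)
    np.fill_diagonal(w, 0.0)
    Q = w / (n * w.sum(axis=1, keepdims=True))      # q_ij as defined above, with q_ii = 0
    grad = 4.0 * np.sum(((P - Q) * w)[:, :, None] * diff, axis=1)   # gradient of the KL cost
    return Y - eta * grad                           # one learning step with rate eta > 0

# usage: iterate from a small random initialization
rng = np.random.default_rng(0)
n, r = 50, 2
Y = 1e-4 * rng.standard_normal((n, r))
P = np.full((n, n), 1.0 / (n * (n - 1)))            # placeholder p_ij; use real data in practice
np.fill_diagonal(P, 0.0)
for _ in range(100):
    Y = tsne_step(Y, P)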
Plotting the cost function of \eqref{eq:KLDivergence} as a function of the iteration time \(t\) reveals how the algorithm approaches a local minimum. However, it has not reached it since only 1000 iterations of the algorithm are shown.
← Back to the overview page
|
I hope this isn't a silly question. I'm learning single variable calc, and having lots of fun with optimization problems. This isn't exactly an optimization problem, but something that came up while working on one.
Let's say I have a small circular garden with a short brick border. This border is perhaps 1 foot tall, so that any sun or rain that reaches the flowers has to come from directly overhead. Suppose that the radius from the center of the circular flowerbed to the outermost edge of the circular brick border is $r.$ I plant a metal rod at the circle's center. At the top of the rod is a fan blade of sorts: it's flat, thin, parallel to the ground, and has the shape of a circular sector with radius $r.$ This blade is opaque, so it provides some shade for that part of the flowerbed beneath it.
I give the blade a spin: as it's spinning, all of the flowerbed receives some shade. Then I get an idea: I automate the spinning of the blade. I can control the angular velocity, $w,$ of the blade with a remote control. Let $l$ be the amount of light (or, if you want, rain) admitted to the flowerbed. My question is this: Is it the case that $$\lim_{w\to \infty}l=0?$$
I have reasons for thinking this is the case, and other reasons for thinking it's nonsense. And if it is the case, then it's true regardless of the value $\theta$ of the central angle of the circular sector, right?
|
Linear Least-Squares L2-Regularized Regression Algorithm
A Linear Least-Squares L2-Regularized Regression Algorithm is a regularized regression algorithm that can be implemented by an L2-Regularized Optimization System to solve a linear least-squares L2-regularized regression task.
AKA: ℓ2 Ridge Regression Method, Tikhonov Regularization Technique, Ridge Regression Algorithm, Phillips-Twomey Algorithm.
Context: ...
Example(s):
Counter-Example(s):
See: L2-Norm Regularizer, Regularization, Supervised Estimation Algorithm, Regularized Supervised Learning Algorithm, Parameter Shrinkage, Support Vector Machine, Non-Linear Least Squares.

References

2017 (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Tikhonov_regularization Retrieved: 2017-8-13.

Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.
Suppose that for a known matrix [math] A [/math] and vector [math] \mathbf{b} [/math] , we wish to find a vector [math] \mathbf{x} [/math] such that: : [math] A\mathbf{x}=\mathbf{b} [/math] The standard approach is ordinary least squares linear regression. However, if no [math] \mathbf{x} [/math] satisfies the equation or more than one [math] \mathbf{x} [/math] does — that is, the solution is not unique — the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined (over-fitted), or more often an underdetermined (under-fitted) system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where [math] A [/math] maps [math] \mathbf{x} [/math] to [math] \mathbf{b} [/math] . Therefore, in solving the inverse-problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of [math] \mathbf{x} [/math] that is in the null-space of [math] A [/math] , rather than allowing for a model to be used as a prior for [math] \mathbf{x} [/math] . Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as: : [math] \|A\mathbf{x}-\mathbf{b}\|^2 [/math] where [math] \left \| \cdot \right \| [/math] is the Euclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization: : [math] \|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2 [/math] for some suitably chosen
Tikhonov matrix, [math] \Gamma [/math]. In many cases, this matrix is chosen as a multiple of the identity matrix ([math] \Gamma= \alpha I [/math]), giving preference to solutions with smaller norms; this is known as L2 regularization. In other cases, lowpass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by [math] \hat{x} [/math], is given by: [math] \hat{x} = (A^\top A+ \Gamma^\top \Gamma )^{-1}A^\top\mathbf{b} [/math]. The effect of regularization may be varied via the scale of matrix [math] \Gamma [/math]. For [math] \Gamma = 0 [/math] this reduces to the unregularized least squares solution, provided that [math] (A^\top A)^{-1} [/math] exists. L2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.

2011a (Quadrianto & Buntine, 2011) ⇒ Novi Quadrianto and Wray L. Buntine (2011). "Linear Regression" In: (Sammut & Webb, 2011) pp 747-750.

QUOTE: Ridge Regression - The regularization term is in the form of
[math]R(w)=\sum_{d=1}^Dw^2_d \quad\quad[/math](10)
[math](Xw−y)^T(Xw−y)+λw^Tw \quad\quad[/math](11)
Since the additional term is a quadratic of [math]w[/math], the regularized objective function is still quadratic in [math]w[/math], thus the optimal solution is unique and can be found in closed form. As before, setting the first derivative of (11) with respect to [math]w[/math] to zero, the optimal weight vector is in the form of
[math]\partial_w E_{reg}(w)=2X^T(Xw−y)+2\lambda w=0\quad\quad[/math](12)
[math]w∗=(X^TX+\lambda I)^{−1}X^Ty\quad\quad[/math](13)
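As an illustration of the closed-form solution in (13), here is a minimal NumPy sketch (mine, not part of the referenced texts); X, y, and the regularization strength lam are placeholders.

import numpy as np

def ridge_fit(X, y, lam=1.0):
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)   # X^T X + lambda * I
    return np.linalg.solve(A, X.T @ y)       # solve the linear system instead of inverting

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(100)
print(ridge_fit(X, y, lam=0.1))   # close to w_true for a small lambda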
QUOTE: 2011b (Zhang) ⇒ Xinhua Zhang (2017)"Regularization" In: "Encyclopedia of Machine Learning and Data Mining" (Sammut & Webb, 2017), Springer US, Boston MA, pp 1083-1088. [ISBN:978-1-4899-7687-1], DOI:10.1007/978-1-4899-7687-1_718 QUOTE: Ridge regression is illustrative of the use of regularization. It tries to fit the label [math]y[/math] by a linear model [math] \langle w,x \rangle [/math] (inner product). So we need to solve a system of linear equations in [math]w: (x_1,\cdots, x_n)^T w= y[/math], which is equivalent to a linear least square problem: [math]min_{w\in\mathcal{R}^p} \parallel X^Tw-y \parallel^2[/math]. If the rank of [math]X[/math] is less than the dimension of [math]w[/math], then it is overdetermined and the solution is not unique.
To approach this ill-posed problem, one needs to introduce additional assumptions on what models are preferred, i.e., the regularizer. One choice is to pick a matrix [math]\Gamma [/math] and regularize [math]w[/math] by [math]\parallel \Gamma w \parallel[/math]. As a result we solve [math]min_{w\in\mathcal{R}^p} \parallel X^Tw-y \parallel ^2+\lambda \parallel \Gamma^T w\parallel^2 [/math], and the solution has a closed form [math]w^*=(XX^T+\lambda \Gamma\Gamma^T)^{-1}Xy[/math]. Here [math]\Gamma[/math] can be simply the identity matrix, which encodes our preference for small norm models. The use of regularization can also be justified from a Bayesian point of view. Treating [math]\exp\left(-\parallel X^TW-y \parallel^2\right)[/math] as the likelihood, the minimizer of [math]\parallel X^TW-y \parallel^2[/math] is just a maximum likelihood estimate of [math]w[/math]. However, we may also assume a prior distribution over [math]w[/math], e.g., a Gaussian prior [math]\exp\left(-\parallel\lambda^TW\parallel^2\right)[/math]. Then the solution of the ridge regression is simply the maximum a posteriori estimate of [math]w[/math].
|
Consider the functor $F: B \rightarrow Set$ where $B$ is a locally small category.
Is it true that if $F$ has a left adjoint then it is representable?
Yes, it is. See for instance "Algebraic Theories: A Categorical Introduction to General Algebra" by J. Adámek, J. Rosický, E. M. Vitale, page 7 (chapter 0, section 0.10 about representable functors).
Proof: Let $L \dashv F$ and $1 = \{*\}$. Then,\begin{align}Fb \cong \mathbf{Set}(1,Fb) \cong B(L1,b)\end{align}naturally in $b \in B$, hence $F \cong H^{L1}$.
|
Superstrong
Superstrong cardinals were first utilized by Hugh Woodin in 1981 as an upper bound of consistency strength for the axiom of determinacy. However, Shelah had then discovered that Shelah cardinals were a weaker bound that still sufficed to imply the consistency strength of $\text{(ZF+)AD}$. After this, it was found that the existence of infinitely many Woodin cardinals was equiconsistent to $\text{AD}$. Woodin-ness is a significant weakening of superstrongness.
Most results in this article can be found in [1] unless indicated otherwise.

Definitions
There are, like most critical point variations on measurable cardinals, multiple equivalent definitions of superstrongness. In particular, there is an elementary embedding definition and an extender definition.
Elementary Embedding Definition
A cardinal $\kappa$ is $n$-superstrong (or $n$-fold superstrong when referring to the $n$-fold variants) iff it is the critical point of some elementary embedding $j:V\rightarrow M$ such that $M$ is a transitive class and $V_{j^n(\kappa)}\subset M$ (in this case, $j^{n+1}(\kappa):=j(j^n(\kappa))$ and $j^0(\kappa):=\kappa$).

A cardinal is superstrong iff it is $1$-superstrong.
The definition quite clearly shows that $\kappa$ is $j^n(\kappa)$-strong. However, the least superstrong cardinal is never strong.
Extender Definition
A cardinal $\kappa$ is
$n$-superstrong (or $n$-fold superstrong) iff there is a $(\kappa,\beta)$-extender $\mathcal{E}$ for a $\beta>\kappa$ with $V_{j^n_{\mathcal{E}}(\kappa)}\subseteq$ $Ult_{\mathcal{E}}(V)$ (where $j_{\mathcal{E}}$ is the canonical ultrapower embedding from $V$ into $Ult_{\mathcal{E}}(V)$).
A cardinal is
superstrong iff it is $1$-superstrong.
Relation to other large cardinal notions
The consistency strength of $n$-superstrongness follows the [[n-fold variants|double helix pattern]] <cite>Kentaro2007:DoubleHelix</cite>. Specifically:
* [[measurable]] = $0$-superstrong = [[huge|almost $0$-huge]] = $0$-huge
* $n$-superstrong
* $n$-fold supercompact
* $(n+1)$-fold strong, $n$-fold extendible
* $(n+1)$-fold Woodin, $n$-fold Vopěnka
* $(n+1)$-fold Shelah
* almost $n$-huge
* super almost $n$-huge
* $n$-huge
* super $n$-huge
* $(n+1)$-superstrong
Let $M$ be a transitive class such that there exists an elementary embedding $j:V\to M$ with $V_{j(\kappa)}\subseteq M$, and let $\kappa$ be its superstrong critical point. While $j(\kappa)$ need not be an inaccessible cardinal in $V$, it is always worldly and the rank model $V_{j(\kappa)}$ satisfies $\text{ZFC+}$"$\kappa$ is strong" (although $\kappa$ may not be strong in $V$).
Superstrong cardinals have strong upward reflection properties, in particular there are many measurable cardinals
above a superstrong cardinal. Every $n$-huge cardinal is $n$-superstrong, and so $n$-huge cardinals also have strong reflection properties. Note, however, that if $\kappa$ is strong or supercompact, then it is consistent that there are no inaccessible cardinals larger than $\kappa$: this is because if $\lambda>\kappa$ is inaccessible, then $V_\lambda$ satisfies $\kappa$'s strongness/supercompactness. Thus it is clear that supercompact cardinals need not be superstrong, even though they have higher consistency strength. In fact, because of the downward reflection properties of strong/supercompact cardinals, if there is a superstrong above a strong/supercompact $\kappa$, then there are $\kappa$-many superstrong cardinals below $\kappa$; same with hugeness instead of superstrongness. In particular, the least superstrong is strictly smaller than the least strong (and thus smaller than the least supercompact).
Every $1$-extendible cardinal is superstrong and has a normal measure containing all of the superstrongs less than said $1$-extendible. This means that the set of all superstrong cardinals less than it is stationary. Similarly, every cardinal $\kappa$ which is $2^\kappa$-supercompact is larger than the least superstrong cardinal and has a normal measure containing all of the superstrongs less than it.
Every superstrong cardinal is Woodin and has a normal measure containing all of the Woodin cardinals less than it. Thus the set of all Woodin cardinals below it is stationary, and so is the set of all measurables smaller than it. Superstrongness is consistency-wise stronger than Hyper-Woodinness.
Letting $\kappa$ be superstrong, $\kappa$ can be forced to $\aleph_2$ with an $\omega$-distributive, $\kappa$-c.c. notion of forcing, and in this forcing extension there is a normal $\omega_2$-saturated ideal on $\omega_1$. [3]
Superstrongness is not Laver indestructible. [4]
* Every $C^{(n)}$-superstrong cardinal belongs to $C^{(n)}$.
* Every superstrong cardinal is $C^{(1)}$-superstrong.
* For every $n \geq 1$, if $\kappa$ is $C^{(n+1)}$-superstrong, then there is a $\kappa$-complete normal ultrafilter $U$ over $\kappa$ such that $\{\alpha < \kappa : \alpha$ is $C^{(n)}$-superstrong$\} \in U$. Hence, the first $C^{(n)}$-superstrong cardinal, if it exists, is not $C^{(n+1)}$-superstrong.
* If $\kappa$ is $2^\kappa$-supercompact and belongs to $C^{(n)}$, then there is a $\kappa$-complete normal ultrafilter $U$ over $\kappa$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $\kappa$ belongs to $U$.
* If $\kappa$ is $\kappa+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $\kappa$ is $C^{(n)}$-superstrong and there is a $\kappa$-complete normal ultrafilter $U$ over $\kappa$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $\kappa$ belongs to $U$.
* Assuming $\mathrm{I3}(\kappa, \delta)$, if $\delta$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(\kappa) : m \in \omega\}$ where $j$ is the elementary embedding. Then $\kappa$ and $j^m(\kappa)$ are $C^{(n)}$-superstrong (inter alia) in $V_\delta$, for all $n$ and $m$.
References
1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
3. Jech, Thomas J. Set Theory. The third millennium edition, revised and expanded, Springer-Verlag, Berlin, 2003.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
5. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
|
At a dance party there are 100 men and 20 women. Each man selects a group of women as potential dance partners (his list), but in such a way that given any group of 20 men it is always possible to pair those 20 men with the 20 women, with each man paired with someone on his list. What is the smallest number $L$, where $L$ is the sum over all men of the number of women on each man's list, that will guarantee this?
I came here to verify my solution to this problem, exercise 20 from chapter 2 of the fourth edition of
Introductory Combinatorics by Richard A. Brualdi. My solution: 1981.
Problem text: "At a dance-hop there are $100$ men and $20$ women. For each $i$ from $1, 2, \ldots, 100$, the $i$th man selects a group of $a_i$ women as potential dance partners (his dance list), but in such a way that given any group of $20$ men, it is always possible to pair the $20$ men up with the $20$ women with each man paired up with a woman on his dance list. What is the smallest sum $a_1 + a_2 + \cdots + a_{100}$ that will guarantee this?"
Clarification: In order for the problem to make sense, we assume $1 \leq a_i \leq 20$ for each $i$.
Pedantic nitpicking: First, we have to establish that, if $n > 100$ does not guarantee this, then no number $100 \leq m < n$ can guarantee this. Suppose $n$ does not guarantee this. Then there is a choice of lists such that, for some group of $20$ men, it is not possible to pair each man up with a partner from his list. Now, since $n > 100$, some list must contain at least two names. If we strike out one of the names from that list, we have a sum of $n-1$ total names. If this configuration were valid, then the pairing chosen would be valid for the original configuration, so $n-1$ does not guarantee a pairing.
Argument: First, we show that $1980$ does not guarantee the condition. Let $a_i = 20$ for $1 \leq i \leq 80$, and $a_i = 19$ for $81 \leq i \leq 100$. If the $20$ men with $19$ women on their lists are chosen, and each list is the same, then those $20$ men cannot each be paired up with a woman from their list, so $1980 = 80 \cdot 20 + 20 \cdot 19$ does not work.
Now, we prove that $1981$ guarantees the condition. Choose any configuration of lists of names with sum of lengths equal to $1981$. Consider any $k \leq 19$ lists with less than $20$ women on the list. The total number of women referenced by the lists is at least $20 - \lfloor 19/k \rfloor$, since a name has to be struck off $k$ lists to lose a reference, and there are at most $19$ missing names. By Hall's Marriage Theorem, there is a matching if $20 - \lfloor 19/k \rfloor \geq k$, but this clearly holds for all $1 \leq k \leq 19$.
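The counterexample and the Hall-type argument can also be checked mechanically. The sketch below (a hypothetical encoding, not part of the original solution) uses Kuhn's augmenting-path algorithm to compute a maximum matching and confirms that 20 identical lists of 19 women cannot all be matched:

```python
def max_matching(lists):
    """Maximum bipartite matching (Kuhn's augmenting-path algorithm).
    lists[i] is the set of women acceptable to man i."""
    match = {}  # woman -> man currently matched to her

    def try_augment(man, seen):
        for woman in lists[man]:
            if woman in seen:
                continue
            seen.add(woman)
            # woman is free, or her current partner can be re-matched elsewhere
            if woman not in match or try_augment(match[woman], seen):
                match[woman] = man
                return True
        return False

    return sum(try_augment(man, set()) for man in range(len(lists)))

# The failing configuration from the argument: 20 men whose lists are the
# same 19 women (encoded here as the numbers 0..18).
bad_group = [set(range(19)) for _ in range(20)]
print(max_matching(bad_group))  # 19 < 20: no perfect matching, as claimed
```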
|
Let the sequence $\{a_{n}\}$ be defined by $$a_{1}=1,\qquad a_{n+1}=1+\dfrac{n}{a_{n}}.$$
show that: $$a_{n}=\sqrt{n}+\dfrac{1}{2}-\dfrac{1}{8\sqrt{n}}+o\left(\dfrac{1}{\sqrt{n}}\right)?$$
I will present two approaches to this question. The first is less advanced and easier but also yields a less optimal result.
Consider the auxiliary sequence $b_n =\sqrt{n+\frac{1}{4}}+\frac{1}{2}$. This is the unique positive solution of the equation $P_n(X)=0$, where $P_n(X)=X^2-X-n$.
${\bf Lemma~1.}$ For every $n\geq1$, we have $b_{n+1}\geq 1+\dfrac{n}{b_{n-1}}$.
Proof. Indeed, let $u=1+\frac{n}{b_{n-1}}$ then$$u^2-u=\frac{(n+b_{n-1})n}{b_{n-1}^2}=\frac{(n+b_{n-1})n}{b_{n-1}+n-1}=n+\frac{n}{b_{n-1}+n-1}\leq n+1$$since $b_{n-1}\geq 1$. This shows that $P_{n+1}(u)\leq0$ and consequently $u\leq b_{n+1}$ as desired.$\qquad \square$
${\bf Lemma~2.}$ For every $n\geq1$, we have $b_{n-1}\leq a_n \leq b_n$.
Proof. This is now an easy induction. Clearly true for $n=1$, and if it is true for some $n$ then$$b_n=1+\frac{n}{b_n}\leq 1+\frac{n}{a_n}=a_{n+1}\leq 1+\frac{n}{b_{n-1}}\leq b_{n+1},$$ and the result follows.$\qquad \square$
It follows that $$ \sqrt{n-\frac{3}{4}}-\sqrt{n}\leq a_n-\sqrt{n}-\frac{1}{2}\leq \sqrt{n+\frac{1}{4}}-\sqrt{n} $$ Thus $$\sqrt{n}\left\vert a_n-\sqrt{n}-\frac{1}{2}\right\vert\leq \frac{3}{4(1+\sqrt{1-3/(4n)})}\leq \frac{1}{2}$$ So, we have proved that, for every $n\geq 1$ we have $$ a_n= \sqrt{n}+\frac{1}{2}+{\cal O} \left(\frac{1}{\sqrt{n}}\right) $$ This is not the desired expansion but it has the merit to be easy to prove.
The general reference for this part is D. Knuth, "The Art of Computer Programming", Vol. III, second edition, pp. 63--65.
Let $I(n)$ be the number of involutions in the symmetric group $S_n$ (i.e. permutations $\sigma\in S_n$ such that $\sigma^2=\mathrm{id}$). It is well known that $I(n)$ can be calculated inductively by $$ I(0)=I(1)=1,\qquad I(n+1)=I(n)+nI(n-1). $$ This shows that our sequence $\{a_n\}$ is related to these numbers by the formula $$a_n=\frac{I(n)}{I(n-1)}.$$
So we can use what we know about these numbers, in particular the following asymptotic expansion from Knuth's book: $$ I(n+1)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n^{3/4}}\right)\right). $$ Now it is a "simple" matter to conclude from this that $a_n$ has the desired asymptotic expansion.
Edit: In fact the term $\mathcal{O}(n^{-3/4})$ effectively destroys the asymptotic expansion, as mercio noted. But in fact we have $$I(n+1)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n}\right)\right).$$ This is shown by Wimp and Zeilberger in their paper "Resurrecting the Asymptotics of Linear Recurrences", which can be found here.
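A quick numerical check of the claimed expansion, independent of the two analytic approaches (this is evidence, not a proof; N is an arbitrary cutoff):

```python
import math

# Numerical sanity check of a_n = sqrt(n) + 1/2 - 1/(8 sqrt(n)) + o(1/sqrt(n)).
N = 10**6            # arbitrary cutoff
a = 1.0              # a_1
for k in range(1, N):
    a = 1 + k / a    # a_{k+1} = 1 + k / a_k, so the loop ends with a = a_N
approx = math.sqrt(N) + 0.5 - 1 / (8 * math.sqrt(N))
print(a - approx)                     # difference is tiny
print((a - approx) * math.sqrt(N))    # still tiny, consistent with the o(1/sqrt(n)) term
```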
It will be simpler in what follows to use two steps at once, so we can start by computing the relationship $a_{n+2} = \frac{a_n(n+2) + n}{a_n + n}$, and letting $b_n = a_n / \sqrt n$ we get $b_{n+2} = \frac {b_n (1 + 2/n) + 1/\sqrt n}{b_n\sqrt{1/n+2/n^2} + \sqrt{1+2/n}}$.
Let's denote $\begin{bmatrix} a & b \\ c & d \end{bmatrix} x = \frac {ax+b}{cx+d}$. If we have a relation $b_{n+2} = \begin{bmatrix} 1+A & B \\ C & 1+D \end{bmatrix} b_n$ where each of $A,B,C,D$ is an $O(1/\sqrt n)$ and we then look at some $c_n = (b_n-k)\sqrt n$, we obtain on the sequence $c_n$ the same kind of relation (so with dominant term still being $I_2$) with
$$A' = (1+A-kC)\sqrt{1+2/n}-1, \\B' = (B-kD+kA-k^2C)\sqrt{n+2}, \\ C' = C/\sqrt n, \\ D' = D+kC$$.
Right now, $C$ is still as big as everyone else, so to make $B'$ not an order of magnitude bigger than the rest, we have to solve a degree $2$ equation in $k$, which gives $k= \pm 1$.
If we define $c_n = (b_n-1)\sqrt n$, we get $A' = -n^{-1/2} + O(n^{-1}), D' = +n^{-1/2} + O(n^{-1}), C' = O(n^{-1}), B' = O(n^{-1/2})$.
From now on $C'$ will continue to plummet, the dominant terms of $A'$ and $D'$ will not be able to change, and so to make the next $B$ into an $O(n^{-1/2})$ term, we will have to pick $k = $ the constant term of $B/(A-D)$.
What this tells us is that there are exactly two asymptotic developments for $a_n$ whose error terms at each level obey a recurrence relation of the form $r_{n+2} = [I_2 + O(n^{-1/2})] r_n$.
Since all of this can be done algebraically, those two power series are $\sqrt n$ times the two solutions in $\Bbb Q[[n^{-1/2}]]$ to the original equation on $a_n/\sqrt n$ (and you go from one to the other by switching the sign of $\sqrt n$).
Now we can look at $u_n = ((((a_n/\sqrt n - 1)\sqrt n - \frac 12)\sqrt n + \frac 18)\sqrt n + \frac 18)\sqrt n$. It satisfies $u_{n+2} = \begin{bmatrix}1 - n^{-1/2} + O(n^{-1}) && \frac 7 {64}n^{-1} + O(n^{-3/2}) \\ O(n^{-5/2}) && 1 + n^{-1/2} + O(n^{-1}) \end{bmatrix} u_n$, (and each constant in those $O$ can be effectively computed)
This means that if you have $u_n$ close enough to $0$, then $u_{n+2} = (1-2n^{-1/2})u_n + O(n^{-1})$. So it should mean that for each bound $\delta$ there is an $n_0$ such that $\forall n > n_0,\ |u_n| < \delta \implies |u_{n+2}| < \delta$.
If you effectively compute $n_0$ for, say, $\delta = 1$, this allows you to provably check that a sequence $u_n$ stays bounded by finding a term $u_n$ with $n$ large enough that is below the $\delta$. If the asymptotic development is valid, a finite computation will be able to prove it.
I would love to see a proof that the asymptotic development is valid for every starting value of $a_n$ (except the one value that will stay away infinitely from the asymptotic formula and will instead obey the other asymptotic development)
|
A Support vector machine (SVM) is a popular choice for a classifier and radial basis functions (RBFs) are commonly used kernels to apply SVMs also to non-linearly separable problems. There are two hyperparameters in this case. First, the margin is maximized by minimizing the function\begin{equation*} \varphi(\fvec{w}, \delta) = \frac{\left\| \fvec{w} \right\|_2^2}{2} + C \sum_{i=1}^{N} \delta_i \end{equation*}
with the weight vector \(\fvec{w}\) and the slack variables \(\delta_i \geq 0\). Here, we have to tune the regularization parameter \(C \in \mathbb{R}^+\). Second, the RBF kernel\begin{equation*} k(\fvec{x}_i, \fvec{x}_j) = e^{-\gamma \left\| \fvec{x}_i - \fvec{x}_j \right\|_2^2} \end{equation*}
which measures the similarity of two data points \(\fvec{x}_i\) and \(\fvec{x}_j\) via their squared Euclidean distance, introduces the tunable scaling parameter \(\gamma \in \mathbb{R}^+\).
In the following animation, you can control both parameters and switch between a linear and an RBF kernel. It uses data points from the Iris flower dataset showing two features and two classes (selected to be non-separable). The idea is inspired by this sklearn example.
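For readers without the notebook, a minimal scikit-learn sketch of the same setup follows; which two Iris features and classes (and which value of \(\gamma\)) are used here are assumptions for illustration, not necessarily those of the animation:

```python
from sklearn import datasets
from sklearn.svm import SVC

iris = datasets.load_iris()
# versicolor vs. virginica on the first two features (assumed selection,
# chosen because these classes are not linearly separable there).
mask = iris.target > 0
X, y = iris.data[mask][:, :2], iris.target[mask]

linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
rbf_svm = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)  # gamma chosen arbitrarily
print(linear_svm.score(X, y), rbf_svm.score(X, y))
```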
List of attached files:
SVMParametersRBF.ipynb (Jupyter notebook used to create the visualization)
← Back to the overview page
|
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
|
Consider the algorithm LastMatch below, which returns the offset (shift) of the last occurrence of the pattern P in text T, or -1 if P does not occur in T:
LastMatch(T, P)
    for s = T.length - P.length downto 0
        j = 1
        while j <= P.length and P[j] == T[s + j]
            j++
        if j == P.length + 1
            return s
    return -1
I've been given a loop invariant for the while loop:
$\forall k(1 \leq k<j \rightarrow P[k] ==T[s+k])$
The initialisation of this invariant confuses me. Before we enter the while loop, $j=1$. So we're asking: is there a $k$ with $1\leq k<1$ such that $P[k] == T[s+k]$?
I cannot find a $k$ which satisfies this inequality, so I do not understand what this is saying. So why is the invariant satisfied before we enter the loop? Is it because when I cannot find a $k$ it implies that $P[k]$ and $T[s+k]$ are both equal to the empty set?
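For concreteness, here is a 0-indexed Python translation of the pseudocode (hypothetical, not part of the original exercise); the comment marks the point at which the invariant holds vacuously:

```python
def last_match(T, P):
    """0-indexed Python translation of LastMatch."""
    for s in range(len(T) - len(P), -1, -1):
        j = 0
        # Right here j == 0, so the invariant "P[k] == T[s+k] for all
        # 0 <= k < j" quantifies over an empty range: it is vacuously true,
        # there is simply nothing to check yet.
        while j < len(P) and P[j] == T[s + j]:
            j += 1
        if j == len(P):
            return s
    return -1

print(last_match("abracadabra", "abra"))  # 7, the shift of the last occurrence
```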
|
I want to show the following:
Let $f: \mathbb{R} \to \mathbb{R}$ be continuous.
a) If $f$ is differentiable for $x \ne 0$ and the limit $\lim_{x\to 0} f'(x) = A$ exists, then $f$ is differentiable at $x = 0$ and $f'(0) = A$.
b) Show that the converse is false, i.e. there exists a function $f$ which is differentiable at $x = 0$, but $\lim_{x\to 0} f'(x)$ does not exist.
For a), I worked out a proof, but I am unsure if the limit manipulations I used are okay and rigorous enough, so please can you comment on my solution and point out possible flaws?
Ok, in a) I have to show that if the limit exists, then it obeys the continuity condition at $x = 0$, i.e. $$ \lim_{x\to 0} f'(x) = f'(0) $$ so I calculate (by using the limit definition of the derivative) \begin{align*} \lim_{x\to 0} f'(x) & = \lim_{x \to 0} \left[ \lim_{h \to 0} \left( \frac{f(x+h)-f(x)}{h} \right) \right] \\ & = \lim_{h \to 0} \left[ \lim_{x \to 0} \left( \frac{f(x+h)-f(x)}{h} \right) \right] \\ & = \lim_{h \to 0} \left[ \frac{1}{h} \lim_{x \to 0} \left( f(x+h)-f(x) \right) \right] \\ & = \lim_{h \to 0} \left[ \frac{1}{h} ( f(h) - f(0) ) \right] \\ & = f'(0) \end{align*} That's my proof; in the last step I used the continuity of $f$, and the manipulations are possible, I think, because all limits exist. I have never seen such manipulations; most proofs in my textbooks use $\epsilon/\delta$-arguments, so I am unsure how valid such limit-exchange operations are.
For b) the function $f(x) := |x|$ is differentiable at $x = 0$ but its derivative is not continuous at $x = 0$.
|
When we have a logarithmic equation of second degree, we need to get rid of the logarithms and obtain an equivalent equation of second degree.
$$$\log(x^2+2x)-\log 8=0$$$ We can move the constant to the other side of the equation in order to eliminate the logarithms. We obtain a second degree equation that we know how to solve: $$$\log(x^2+2x)=\log 8 \Rightarrow x^2+2x=8 \Rightarrow x^2+2x-8=0$$$ To solve it it is necessary to remember the formula: $$$\displaystyle x= \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$$ So that: $$$\displaystyle x=\frac{-2 \pm \sqrt{4-4\cdot (-8)}}{2}=\frac{-2 \pm \sqrt{4+32}}{2}=\frac{-2\pm \sqrt{36}}{2}=\frac{-2\pm 6}{2}$$$ Therefore, the solutions to the equation will be: $$$\displaystyle x=\frac{-2+6}{2}=\frac{4}{2}=2 \\ \displaystyle x=\frac{-2-6}{2}=\frac{-8}{2}=-4$$$
But it is necessary to bear in mind that some of the solutions may not be valid for a logarithmic equation, since the logarithm is only defined for numbers greater than $$0$$. So we need to verify that when substituting $$x$$ we do not obtain negative (or zero) arguments inside the logarithms. Let's check:
When $$x=2$$ $$$x^2+2x \Rightarrow 2^2+2 \cdot 2=4+4=16 >0$$$ Then, the solution is valid.
For $$x =-4$$ $$$x^2+2x \Rightarrow (-4)^2+2 \cdot (-4)=16-8=8 >0$$$ Therefore, this solution is also valid.
In the previous case, we obtained an equation of second degree that we could immediately recognise. Sometimes this might be a bit more tricky.
$$$\log(9-x^2)=2\log(3x-3)$$$ By the power property of logarithms we can obtain (this step is valid only when $$3x-3>0$$): $$$\log(9-x^2)=\log(3x-3)^2$$$ Thus, the logarithms can be eliminated and we can work with the equations: $$$9-x^2=(3x-3)^2 \Rightarrow 9-x^2=9x^2-18x+9 \Rightarrow 9x^2+x^2-18x+9-9=0 \Rightarrow$$$ $$$\Rightarrow 10x^2-18x=0$$$ Note that we can simplify our equation and we are left with no constant. We can then extract the common factor $$x$$ and obtain: $$$x(10x-18)=0$$$ Thus: $$$\begin{array}{l}x=0\\ \\ 10x-18=0 \Rightarrow \displaystyle x=\frac{18}{10}\Rightarrow x=\frac{9}{5} \end{array}$$$
And these are the two candidate solutions. We need to verify that they are well defined, that is, that when we put them back into the logarithm we do not get a negative number:
When $$x=0$$ $$$9-x^2 \Rightarrow 9-0=9>0$$$ but $$$3x-3 \Rightarrow 3\cdot 0-3=-3 < 0$$$ so the right-hand side $$2\log(3x-3)$$ of the original equation is not defined. Therefore $$x=0$$ is not a solution of the logarithmic equation and must be rejected.
When $$x =\displaystyle \frac{9}{5}$$ $$$\displaystyle 9-x^2\Rightarrow 9-\Big( \frac{9}{5}\Big)^2=9-\frac{81}{25}=\frac{144}{25}>0$$$ and $$$\displaystyle 3x-3 \Rightarrow 3\cdot \frac{9}{5}-3=\frac{27-15}{5}=\frac{12}{5}>0$$$ Then, this is a valid solution, and it is the only one.
Let's see a last example before going on to the exercises:
$$$\displaystyle \log \sqrt{2x}=\log (x-3)+\log 2$$$ As in the previous cases, it is necessary to try to get rid of the logarithms.
To do this we will use the fact that the sum of two logarithms is the logarithm of their product: $$$\displaystyle \log\sqrt{2x}=\log (2 \cdot (x-3))$$$ Thus we can eliminate the logarithms to obtain the following equation: $$$\displaystyle \sqrt{2x}=2 \cdot (x-3) \Rightarrow \sqrt{2x}=2x-6$$$ Now it is necessary to get rid of the square root. In order to do so we can square both sides, and we can see that we obtain an equation of second degree: $$$2x=(2x-6)^2$$$ We can expand the right-hand side in order to obtain the second degree equation: $$$2x=4x^2-24x+36 \Rightarrow 4x^2-24x-2x+36=0 \Rightarrow 4x^2-26x+36=0$$$
Note that we can divide all the coefficients by $$2$$ and nothing changes, obtaining a simplified equation: $$$\displaystyle \frac{4x^2-26x+36}{2}=0 \Rightarrow 2x^2-13x+18=0$$$
We can now use the formula to solve for $$x$$: $$$\displaystyle x=\frac{13 \pm \sqrt{13^2-4 \cdot 2 \cdot 18}}{2 \cdot 2}=\frac{13 \pm \sqrt{169-144}}{4}=\frac{13 \pm \sqrt{25}}{4}=\frac{13 \pm 5}{4}$$$
So the possible solutions will be: $$$\begin{array}{rcl}x & = &\displaystyle \frac{13+5}{4}=\frac{18}{4}=\frac{9}{2}\\ x&=&\frac{13-5}{4}=\frac{8}{4}=2\end{array}$$$
Now it is necessary to verify that the values we found are actually a solution, since we cannot take the logarithm of a negative number. Let's check both solutions:
If $$x =\displaystyle \frac{9}{2}$$: $$$x-3 \Rightarrow \frac{9}{2} -3 =\frac{9-6}{2}=\frac{3}{2} >0$$$ Therefore, the first value is a solution of the logarithmic equation.
If $$x=2$$: $$$x-3 \Rightarrow 2-3=-1 < 0$$$ Note that we obtain a negative number so we have to rule out this solution.
Thus we are left with only one solution, namely: $$x=\displaystyle \frac{9}{2}$$.
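The same procedure (solve the derived quadratic, then discard candidates that make a logarithm argument non-positive) can be automated; a small sketch assuming the sympy library is available:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# The quadratic obtained after squaring: 2x = (2x - 6)^2.
candidates = sp.solve(sp.Eq(2*x, (2*x - 6)**2), x)
# Keep only candidates for which every logarithm argument is positive:
# 2x > 0 for log(sqrt(2x)) and x - 3 > 0 for log(x - 3).
valid = [r for r in candidates if 2*r > 0 and r - 3 > 0]
print(candidates, valid)  # candidates 2 and 9/2, only 9/2 survives
```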
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
Author Message murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Wed Mar 15, 2006 10:53 pm Post subject: Y&Y TeX vs. MiKTeX with MathTimePro2 fonts I used updated fonts just prior to posting of 0.98.
Initially I had different page breaks with a test document of my own in the two TeX systems. But that seems to have disappeared once I updated geometry.sty in Y&Y to the same version I use with MiKTeX.
There are two other peculiarities, indirectly related to the fonts:
1. With Y&Y, when mtpro2.sty is loaded, I get messages that \vec is already defined, and then ditto for \grave, \acute, \check, \breve, \bar, \hat, \dot, \tilde, \ddot. Perhaps this is due to different versions of amsmath.sty?
2. In Y&Y, I must include
\usepackage[T1]{fontenc}
for otherwise I get message:
OT1/ptm/m/n/10.95=ptm7t at 10.95pt not loadable:
Metric (TFM) file not found
I am clearly using different psnfss package files with Y&Y than with MiKTeX. I tried updating the Y&Y versions to be the same as those for MiKTeX, but then all hell breaks loose over encodings -- with Y&Y expecting to find TeXnAnsi encodings and not finding them. (It may be that in Y&Y I have to update tfm's for Times, too. But I am loathe to mess further with Y&Y with respect to a working font configuration.) WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Thu Mar 16, 2006 3:47 am Post subject: please, send me <w.a.schmidt@gmx.net> your test document and the
log files that would result with and without \usepackage[T1]{fontenc}
TIA
Walter WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Fri Mar 17, 2006 9:27 am Post subject: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX.
Y&Y-TeX supports Times and other fonts from the non-TeX world
only with LY1 encoding.
2) Updating psnfss on Y&Y-TeX is pointless. The psnfss collection
supports the Base35 fonts with OT1 and T1/TS1 encoding, which
does not work on Y&Y-TeX; see above.
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
More info on Sunday. murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Fri Mar 17, 2006 7:49 pm Post subject: Your answers identified the problems & solutions!
WaS wrote: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX....
2) Updating psnfss on Y&Y-TeX is pointless....
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
Re 3): Yes, \usepackage[LY1]{fontenc} in my test document avoids the error about OT1.
5) Yes, the error about \vec, etc., was due to an obsolete amsmath.sty. Refreshing the amsmath files fixed this.
Thank you!
|
Risk-neutral default probability implied from CDS is approximately $P=1-e^{-St/(1-R)}$, where $S$ is the flat CDS spread and $R$ is the recovery rate. The CDS spread can be recovered using the inverse: $$S=\ln(1-P) \frac{R-1}{t}$$ Here $S$ is the spread expressed in percentage terms (not basis points), $t$ is the years to maturity, and $R$ is the recovery rate ...
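A small sketch of these two formulas (the numbers are illustrative only, e.g. a 200 bp flat spread, 5 years to maturity and 40% recovery):

```python
import math

def default_prob(S, t, R):
    """P = 1 - exp(-S*t / (1 - R)), with S in decimal terms."""
    return 1 - math.exp(-S * t / (1 - R))

def cds_spread(P, t, R):
    """Inverse relation: S = ln(1 - P) * (R - 1) / t."""
    return math.log(1 - P) * (R - 1) / t

P = default_prob(0.02, 5, 0.4)       # 200 bp flat spread, 5y, 40% recovery
print(P, cds_spread(P, 5, 0.4))      # the spread round-trips back to 0.02
```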
You can use the "credit triangle" which states that the (annualised) credit spread $S$ equals the annualised probability of default $p$ times the loss given default LGD which equals par minus the expected recovery amount $R$, i.e. $S=p(1-R)$. This is a "back-of-the-envelope" approximation to a full hazard rate credit model - from experience I find that the ...
For an individual firm, a theoretical model of the capital structure was developed by Robert Merton in 1974.The simplest form of this model assumes the firm has zero-coupon debt maturing at some future time $T$. Default is defined as the condition where the value of the firm's assets fall below the outstanding debt. The firm equity is viewed as a call ...
Firstly, the use of the logit models to estimate the PDs is particularly appreciated in some credit industries, as, for instance, the credit retail one. The logit model predicts pretty well the PD on loans, consumer credit, credit cards, ... and all concerns the retail consumer world.Mainly, those listed are the principal sub-industries in the credit ...
You cannot do it.It is an under-determined problem. That is to say, a whole multitude (subspace of $\mathbb{R}^{N\times N}$) of migration matrices will agree with any given table of default probabilities.Say you want to find a transition matrix for 2 states (IG, HY) plus default$$\left(\begin{matrix}p_{11} & p_{12} & p_{1D} \\p_{21} &...
This is what Moody's does to calculate default probabilities, but I don't believe they give a whole lot of detail on their exact methodology because they sell their models as software. I quickly found this which gives a brief overview: http://www.moodysanalytics.com/~/media/Brochures/Enterprise-Risk-Solutions/RiskCalc/RiskCalcPlus-Fact-Sheet.ashxEdit- ...
"Debt issuer default risk" and "counterparty risk" are very similar. From Risk magazine:Counterparty RiskThe risk that a counterparty to a transaction or contract will default (fail to perform) on its obligation under the contract. Counterparty risk is not limited to credit risk (the risk that the counterparty cannot fulfill its contractual ...
The relationship between volatility and CDS is very interesting. Volatility in finance is synonym of risk. There are many aspects of volatility. There are 2 primary ways to find CDS premium, one is using structural model and the other is reduced form or intensity based model. Structural models use equity valuation, outstanding debt and equity volatility to ...
I believe the answer can be further improved for all those being directed here by google after 3 years.A common way to model the default probability is by the hazard rate. As @Bob correctly mentions, a traditional requirement is for it to satisfy (see Option Futures and Other Derivatives section 23.4 in which the author discusses also other more exact ...
Firstly it's good to straighten out our goal.You correctly say, that IFRS9 requires analysis of expected losses.There are two components of expected losses.1) Expected probability of a default event2) Expected recovery rateSo, not only do we need the probability but also the recovery rate.Luckily, both are approximated by the credit spread, which ...
Actually, there is a practical way to do it.You can use you PoD estimates to assign a credit rating to your securities and then use a published transition matrix for your purposes.Or you can estimate transition probabilities by linear interpolation based on the PoD values that you have.Here is a publication containing transition matrices from Moody'...
Let $\tau$ be the default time, $\lambda$ be the constant hazard rate, and $T=1.0$ be the bond maturity. The value of the defaultable zero-coupon bond is given by\begin{align*}D(0, T) &= e^{-rT}P(\tau > T).\end{align*}Then the default probability is given by\begin{align*}P(\tau \le T) &= 1- P(\tau > T)\\&=1-D(0, T) \times e^{rT}\\&...
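A numerical illustration of this relation under a constant hazard rate (all parameter values below are hypothetical):

```python
import math

r, lam, T = 0.03, 0.02, 1.0                 # rate, hazard rate, maturity (made up)
survival = math.exp(-lam * T)               # P(tau > T) under a constant hazard rate
D = math.exp(-r * T) * survival             # defaultable zero-coupon bond price
p_default = 1 - D * math.exp(r * T)         # backing the default probability out of D
print(p_default, 1 - survival)              # the two numbers agree
```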
Based on your definition, they are certainly not the same. Generally, the marginal default probability is the probability that the default happens in a given time period, such as $[t, t+\Delta]$, that is, $P(t < \tau \le t+\Delta)$. Here, $\tau$ is the default time. See Chapter 10 of the book Counterparty Credit Risk and Credit Value Adjustment for ...
The question sounds like a conditional probability problem. However, note that, for conditional probability, people will generally say if survived to or conditional on. Here it says that survived in year one and (i.e., followed by) will default in year two. Then we should not treat this as a conditional or marginal probability.Based on the above ...
Let $\tau_{(1)} = \min(\tau_1, \ldots, \tau_K)$ be the first-to-default time. Moreover, for $1< m \le K$, let\begin{align*}\tau_{(m)} = \min\left(\tau_k: k=1, \ldots, K, \tau_{k} > \tau_{(m-1)}\right).\end{align*}be the $m^{\rm th}$-to-default time. In particular, $\tau_{(K)} = \max(\tau_1, \ldots, \tau_K)$.Note that, for $t \ge 0$,\begin{align*}...
The chapter in Hull on Credit Risk gives the same formula as emcor as a first approximation with a justification:Consider first an approximate calculation. Suppose that a bond yields 200 basis points more than a similar risk-free bond and that the expected recovery rate in the event of a default is 40%. The holder of a corporate bond must be expecting to ...
This does not sum to 1 because you have forgotten to add the 6th scenario, the NonDefault (ND). If Ps is the probability of survival and Pd the probability of default, the ND scenario has probability Ps^5. This makes: Pd + Ps*Pd + ... + Ps^4*Pd + Ps^5 = Pd*(1+Ps+...+Ps^4) + Ps^5 = Pd*(1-Ps^5)/(1-Ps) + Ps^5 = (1-Ps)*(1-Ps^5)/(1-Ps) + Ps^5 = (1-Ps^5) + Ps^5 = 1.
I think the problem is that, for countries with a sizeable risk of hyperinflation, you will not have deep and mature markets to extract market expectations from.Argentina is a good example. Hyperinflation is just 'very big inflation', but you don't have inflation swaps in ARS. The CDS that you mention will pay in USD, and are therefore immune to ARS ...
RRL's answer is entirely correct in terms of the theoretical reason underpinning the relationship between equity IV and CDS spreads."CDS spreads are not “pure” default risk compensation" - no they are not since the ISDA Quoted Spreads assume a homogeneous Poisson process (implying that instantaneous default risk is a constant over the life of a contract) ...
I believe that your problem can be formulated as:Find PD matrix that is as close as possible to a given PD matrix (result of some previous calibration, or the matrix computed using average hazard rate, or any other "target", or the penalty on non-smoothness) subject to the following constraints:The values that are given must be matched exactly...
I suggest you to start from the Altman's model, that is the basic model to implement the kind of econometric analysis you're looking for.You can find the original paper at my Dropbox public folder.After that reading, you can find a number of paper about scoring models on SSRN or Google Scholar. Moreover, I suggest you to look for all academic papers that ...
Suppose I give you objective probabilities $\mathbb{P}(S_T \geq K)$ of an equity finishing above a certain level $K$ at a future time $T$ (or in your case a survival probability in the form of default rates). Can you convert these to risk-neutral probabilities $\mathbb{Q}(S_T \geq K)$ ? Not immediately.First, I need to give you a model for the behaviour ...
Re LGD: you can look at Mark-iT for ISDA Credit Event Auction settlements, here's a link actually: http://www.creditfixings.com/CreditEventAuctions/fixings.jsp Obviously these will give recoveries as determined by bond auctions, varying according to SUB and SENIOR in the type of names you are looking at. These outcomes are a bit pathological for banks as SUB ...
I'm no expert on this topic but here's my two cents. Hopefully if I'm wrong someone will correct me.From the 2 relations you wrote, we see that$$ DD_q = -N^{-1}(P) - \lambda R \sqrt{T} $$or equivalently\begin{align}DD_q &= DD_p - \lambda R \sqrt{T} \\&= \frac{\ln(A/D)+((\mu-\lambda \sigma R) - 0.5\sigma^2)T}{\sigma \sqrt{T}}\end{align}where ...
First, I would emphasize that default protection is bought and sold on debt securities , not on assets. To answer your question, you cannot sell protection on your own debt. You can sell protection on sovereign debt, including the sovereign where your company is based. However, the buyer of this protection understands that there may be a high correlation ...
It is mainly subjective, depends on country and sector. E.g. when I worked in private equity in a distressed fund a highly levered company was a company with a net-debt to EBITDA ratio > 7.0.Those are back of the envelope numbers. They actually do not tell you much about the health of the company nor its risk. A company with 7.0x net-debt to EBITDA ratio ...
If there are 10 issuers and one defaults this year, the issuer weighted probability of default is 0.1. But if the one issuer that defaults is one with a larger than average amount of debt outstanding, the dollar volume weighted rate of default for the year is going to be > 0.1.Moody's tries to predict the default of issuers, so they mostly work with ...
The correlation does not play any role for a linear portfolio, such as a CDS index, However, for a portfolio with nonlinear dependence on the loss of underlying entities, such as the case for a CDO or an $m$-th to default swap, the correlation plays a role. Here, certain techniques such as copula may be needed, depending on the complexity of the structure.
|
I really need help with this question. I am required to show that the set of odd natural numbers is closed under the operation * defined by a*b=a+b+ab, and I'm not quite sure how. Any work/help is greatly appreciated.
Let $a = 2n + 1$ and let $b = 2m + 1$ where $n, m \geq 0$.
We want to show that the set of natural odd numbers are closed under the defined operation $*$.
So: $$a*b = a + b + ab$$ $$= (2n + 1) + (2m + 1) + (2n + 1)(2m + 1)$$ $$= (2n + 2m + 2) + (4nm + 2n + 2m + 1)$$ $$= (4n + 4m + 4mn + 2) + 1$$ $$= 2(2n + 2m + 2mn + 1) + 1,$$
which is again an odd natural number. Thus the set of odd natural numbers is closed under the operation $*$.
show a+b+ab is odd whenever a and b are odd
You could use the fact that $$a*b=(a+1)(b+1)-1$$ We have that $$(2a+1)*(2b+1)=(2a+2)(2b+2)-1$$ Do you see why that number must be odd?
If you want to see it immediately: if $a$ and $b$ are odd, then $a + b$ is even and $a\cdot b$ is odd; and even plus odd is odd.
$ \newcommand{odd}[1]{#1\text{ is odd}} \newcommand{even}[1]{#1\text{ is even}} $Just for fun, here is a slightly different (a "logical") approach compared to the existing answers.
"The set of odd natural number is closed under $\;*\;$" means that if any $\;a\;$ and $\;b\;$ are odd natural numbers, then also $\;a * b\;$ is an odd natural number.
Therefore we ask ourselves: when is $\;a * b\;$ an
odd natural number? First, from the definition of $\;*\;$ it is clear that if $\;a,b\;$ are natural numbers, then $\;a * b\;$ also is a natural number.
So, what about the
oddness of $\;a * b\;$? Let's calculate:
\begin{align}& \odd{a * b} \\\equiv & \qquad \text{"definition of $\;*\;$"} \\& \odd{a + b + a \times b} \\\equiv & \qquad \text{"sum is odd if exactly one is odd"} \\& \odd{a + b} \;\not\equiv\; \odd{a \times b} \\\equiv & \qquad \text{"sum is odd if exactly one is odd; product is odd if both are odd"} \\& \odd{a} \;\not\equiv\; \odd{b} \;\not\equiv\; \odd{a} \;\land\; \odd{b} \\\equiv & \qquad \text{"logic: simplify by removing double negation"} \\& \odd{a} \;\equiv\; \odd{b} \;\equiv\; \odd{a} \;\land\; \odd{b} \\\equiv & \qquad \text{"logic: golden rule"} \\& \odd{a} \;\lor\; \odd{b} \\\end{align}So $\;a * b\;$ is odd iff either $\;a\;$ or $\;b\;$ is odd, so certainly if
both are odd.
This completes the proof.
Note how both $\;\not\equiv\;$ and $\;\equiv\;$ are associative, so that we could safely leave out the parentheses in the above proof. The golden rule mentioned above is $$ P \;\equiv\; Q \;\equiv\; P \land Q \;\equiv\; P \lor Q $$ for any boolean expressions $\;P,Q\;$.
Definition: A number is odd if and only if it can be written in the form [(2*n)+1] where n is an integer.
Now, "a" and "b" both qualify as odd. But, you don't want to write them in the same form for this question, since they might not equal each other. Thus, select one letter to go in the blank space of [(2*_)+1] for "a" and another letter to go in the blank space for "b". Now put those things in place of "a" and "b" in a+b+ab. Then expand a+b+ab. After expanding a+b+ab, select letters to represent certain equations. Eventually, you should end up with something of the form [(2*n)+1].
For instance, say I wanted to show that if x, and y are even, that x+y is even. I would first let x=2a, and y=2b. Then we can see that x+y=2a+2b=2(a+b). Letting a+b=z we then have x+y=2z. But all the variables are arbitrary (within the set of even numbers), and thus z is arbitrary also. Consequently, in "2z" z indicates an arbitrary variable, which means that z means the same thing as an arbitrary variable in the definition of an even number. Therefore, x+y is even.
a+b+ab=(2k+1)+(2j+1)+(2k+1)(2j+1)=(2k+2j)+2+(4kj+2k+2j+1)=2(k+j+1+2kj+k+j)+1=2z+1.
To just answer the question, simplest is to observe that $a+b+ab$ is the sum of three numbers that are each positive and odd (if $a,b$ are so), and hence itself positive and odd.
To better understand what this operation does, one can compute$$ (x-1)*(y-1) = (x-1)+(y-1)+(x-1)(y-1)=xy-1$$for any $x,y$, which would be in practice taken to be positive
even numbers. This shows the operation is just the multiplication of positive even numbers in disguise, the disguise consisting of representing each such number systematically (on input and on output) by the odd number before it. Thus one sees for instance immediately that '$*$' is (commutative and) associative, which would otherwise require a bit of computation to verify. Multiplication of even numbers has the property that only numbers divisible by$~4$ ever occur as value of the operation, and more generally that combining $n$ numbers results in a number divisible by$~2^n$; correspondingly, combining $n$ odd numbers by '$*$' result in a number that is congruent to$~{-}1$ modulo$~2^n$. Finally just as the even positive numbers remain closed under multiplication if one adds $0$ to the set (which acts as absorbing element: $0x=x0=0$ for all$~x$), one could add $-1$ to the set of odd positive numbers, which would become an absorbing element for '$*$'.
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
Learning Objectives
To recognize the SI base units and explain the system of prefixes used with them.
People who live in the United States measure weight in pounds, height in feet and inches, and a car’s speed in miles per hour. In contrast, chemistry and other branches of science use the International System of Units (also known as
SI after Système Internationale d’Unités), which was established so that scientists around the world could communicate efficiently with each other. Many countries have also adopted SI units for everyday use. The United States is one of the few countries that has not.
Base SI Units
Base (or basic) units are the fundamental units of SI. There are seven base units, which are listed in Table \(\PageIndex{1}\). Chemistry uses five of the base units: the mole for amount, the kilogram for mass, the meter for length, the second for time, and the kelvin for temperature. The degree Celsius (°C) is also commonly used for temperature. The numerical relationship between kelvins and degrees Celsius is as follows:
\[K = °C + 273 \label{Eq1}\]
Table \(\PageIndex{1}\): SI Base Units
Property             Unit        Abbreviation
length               meter       m
mass                 kilogram    kg
time                 second      s
amount               mole        mol
temperature          kelvin      K
electrical current   ampere      amp
luminous intensity   candela     cd
The United States uses the English (sometimes called Imperial) system of units for many quantities. Inches, feet, miles, gallons, pounds, and so forth, are all units connected with the English system of units. There have been many mistakes due to the improper conversion of units between the SI and English systems.
The size of each base unit is defined by international convention. For example, the
kilogram is defined as the quantity of mass of a special metal cylinder kept in a vault in France (Figure \(\PageIndex{1}\)). The other base units have similar definitions and standards. The sizes of the base units are not always convenient for all measurements. For example, a meter is a rather large unit for describing the width of something as narrow as human hair. Instead of reporting the diameter of hair as 0.00012 m or as 1.2 × 10\(^{-4}\) m using scientific notation as discussed in section 1.4, SI also provides a series of prefixes that can be attached to the units, creating units that are larger or smaller by powers of 10.
Common prefixes and their multiplicative factors are listed in Table \(\PageIndex{2}\). (Perhaps you have already noticed that the base unit
kilogram is a combination of a prefix, kilo- meaning 1,000 ×, and a unit of mass, the gram.) Some prefixes create a multiple of the original unit: 1 kilogram equals 1,000 grams, and 1 megameter equals 1,000,000 meters. Other prefixes create a fraction of the original unit. Thus, 1 centimeter equals 1/100 of a meter, 1 millimeter equals 1/1,000 of a meter, 1 microgram equals 1/1,000,000 of a gram, and so forth.
Table \(\PageIndex{2}\): Prefixes Used with SI Units
Prefix    Abbreviation    Multiplicative Factor    Multiplicative Factor in Scientific Notation
giga-     G               1,000,000,000 ×          10\(^{9}\) ×
mega-     M               1,000,000 ×              10\(^{6}\) ×
kilo-     k               1,000 ×                  10\(^{3}\) ×
deca-     D               10 ×                     10\(^{1}\) ×
deci-     d               1/10 ×                   10\(^{-1}\) ×
centi-    c               1/100 ×                  10\(^{-2}\) ×
milli-    m               1/1,000 ×                10\(^{-3}\) ×
micro-    µ*              1/1,000,000 ×            10\(^{-6}\) ×
nano-     n               1/1,000,000,000 ×        10\(^{-9}\) ×
*The letter µ is the Greek lowercase letter for m and is called “mu,” which is pronounced “myoo.”
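The prefix factors in Table \(\PageIndex{2}\) can also be used programmatically; a small sketch (the dictionary below is an illustrative encoding of the table, not part of the text):

```python
# Illustrative encoding of the prefix factors from Table 2.
prefixes = {"giga": 1e9, "mega": 1e6, "kilo": 1e3, "deca": 1e1,
            "deci": 1e-1, "centi": 1e-2, "milli": 1e-3,
            "micro": 1e-6, "nano": 1e-9}

diameter_m = 1.2e-4                                   # width of a human hair in meters
print(diameter_m / prefixes["micro"], "micrometers")  # 120.0 micrometers
```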
Both SI units and prefixes have abbreviations, and the combination of a prefix abbreviation with a base unit abbreviation gives the abbreviation for the modified unit. For example, kg is the abbreviation for kilogram. We will be using these abbreviations throughout this book.
What is the difference between “mass” and “weight”?
The mass of a body is a measure of its inertial property or how much matter it contains. The weight of a body is a measure of the force exerted on it by gravity or the force needed to support it. Gravity on earth gives a body a downward acceleration of about 9.8 m/s\(^2\). In common parlance, weight is often used as a synonym for mass in weights and measures. For instance, the verb “to weigh” means “to determine the mass of” or “to have a mass of.” The incorrect use of weight in place of mass should be phased out, and the term mass used when mass is meant. The SI unit of mass is the kilogram (kg). In science and technology, the weight of a body in a particular reference frame is defined as the force that gives the body an acceleration equal to the local acceleration of free fall in that reference frame. Thus, the SI unit of the quantity weight defined in this way (force) is the newton (N).
Derived SI Units
Derived units are combinations of SI base units. Units can be multiplied and divided, just as numbers can be multiplied and divided. For example, the area of a square having a side of 2 cm is 2 cm × 2 cm, or 4 cm\(^2\) (read as “four centimeters squared” or “four square centimeters”). Notice that we have squared a length unit, the centimeter, to get a derived unit for area, the square centimeter.
Volume is an important quantity that uses a derived unit.
Volume is the amount of space that a given substance occupies and is defined geometrically as length × width × height. Each distance can be expressed using the meter unit, so volume has the derived unit m × m × m, or m\(^3\) (read as “meters cubed” or “cubic meters”). A cubic meter is a rather large volume, so scientists typically express volumes in terms of 1/1,000 of a cubic meter. This unit has its own name—the liter (L). A liter is a little larger than 1 US quart in volume. (Table \(\PageIndex{3}\) gives approximate equivalents for some of the units used in chemistry.)
Table \(\PageIndex{3}\): Approximate Equivalents
1 m ≈ 39.36 in. ≈ 3.28 ft ≈ 1.09 yd
1 in. ≈ 2.54 cm
1 km ≈ 0.62 mi
1 kg ≈ 2.20 lb
1 lb ≈ 454 g
1 L ≈ 1.06 qt
1 qt ≈ 0.946 L
As shown in Figure \(\PageIndex{2}\), a liter is also 1,000 cm\(^3\). By definition, there are 1,000 mL in 1 L, so 1 milliliter and 1 cubic centimeter represent the same volume.
\[1\; mL = 1\; cm^3 \label{Eq2}\]
Example \(\PageIndex{1}\)
Give the abbreviation for each unit and define the abbreviation in terms of the base unit.
a. kiloliter
b. microsecond
c. decimeter
d. nanogram
Answer a
The abbreviation for a kiloliter is kL. Because kilo means “1,000 ×,” 1 kL equals 1,000 L.
Answer b
The abbreviation for microsecond is µs. Micro implies 1/1,000,000th of a unit, so 1 µs equals 0.000001 s.
Answer c
The abbreviation for decimeter is dm. Deci means 1/10th, so 1 dm equals 0.1 m.
Answer d
The abbreviation for nanogram is ng and equals 0.000000001 g.
Exercise \(\PageIndex{1}\)
Give the abbreviation for each unit and define the abbreviation in terms of the base unit.
a. kilometer
b. milligram
c. nanosecond
d. centiliter
Energy, another important quantity in chemistry, is the ability to perform work, such as moving a box of books from one side of a room to the other side. It has a derived unit of kg·m\(^2\)/s\(^2\). (The dot between the kg and m units implies the units are multiplied together.) Because this combination is cumbersome, this collection of units is redefined as a joule (J). An older unit of energy, but likely more familiar to you, the calorie (cal), is also widely used. There are 4.184 J in 1 cal. Energy changes occur during all chemical processes and will be discussed in a later chapter.
To Your Health: Energy and Food
The food in our diet provides the energy our bodies need to function properly. The energy contained in food could be expressed in joules or calories, which are the conventional units for energy, but the food industry prefers to use the kilocalorie and refers to it as the Calorie (with a capital C). The average daily energy requirement of an adult is about 2,000–2,500 Calories, which is 2,000,000–2,500,000 calories (with a lowercase c).
If we expend the same amount of energy that our food provides, our body weight remains stable. If we ingest more Calories from food than we expend, however, our bodies store the extra energy in high-energy-density compounds, such as fat, and we gain weight. On the other hand, if we expend more energy than we ingest, we lose weight. Other factors affect our weight as well—genetic, metabolic, behavioral, environmental, cultural factors—but dietary habits are among the most important.
In 2008 the US Centers for Disease Control and Prevention issued a report stating that 73% of Americans were either overweight or obese. More alarmingly, the report also noted that 19% of children aged 6–11 and 18% of adolescents aged 12–19 were overweight—numbers that had tripled over the preceding two decades. Two major reasons for this increase are excessive calorie consumption (especially in the form of high-fat foods) and reduced physical activity. Partly because of that report, many restaurants and food companies are working to reduce the amounts of fat in foods and provide consumers with more healthy food options.
Density is defined as the mass of an object divided by its volume; it describes the amount of matter contained in a given amount of space.
\[\mathrm{density=\dfrac{mass}{volume}}\label{Eq3}\]
Thus, the units of density are the units of mass divided by the units of volume: g/cm³ or g/mL (for solids and liquids), g/L (for gases), kg/m³, and so forth. For example, the density of water is about 1.00 g/cm³, while the density of mercury is 13.6 g/mL. (Remember that 1 mL equals 1 cm³.) Mercury is over 13 times as dense as water, meaning that it contains over 13 times the amount of matter in the same amount of space. The density of air at room temperature is about 1.3 g/L.
Example \(\PageIndex{2}\): Density of Bone
What is the density of a section of bone if a 25.3 cm³ sample has a mass of 27.8 g?
SOLUTION
Because density is defined as the mass of an object divided by its volume, we can set up the following relationship:
\[ \begin{align*} \mathrm{density} &=\dfrac{mass}{volume} \\[4pt] &= \dfrac{27.8\:g}{25.3\:cm^3} \\[4pt] &=1.10\:g/cm^3 \end{align*}\]
Note that we have limited our final answer to three significant figures.
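If you prefer to check such arithmetic in code, here is a minimal sketch (the helper name is mine, not part of the text):

```python
# Density = mass / volume; units follow whatever is passed in.
def density(mass, volume):
    return mass / volume

bone = density(mass=27.8, volume=25.3)   # g and cm^3, from the example
print(f"{bone:.2f} g/cm^3")              # 1.10 g/cm^3, three significant figures
```

The same helper can be reused for Exercise \(\PageIndex{2}\) with the mass in grams and the volume in liters.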
Exercise \(\PageIndex{2}\): Density of Oxygen
What is the density of oxygen gas if a 15.0 L sample has a mass of 21.7 g?
Concept Review Exercises
What is the difference between a base unit and a derived unit? Give two examples of each type of unit.
Do units follow the same mathematical rules as numbers do? Give an example to support your answer.
Answers
Base units are the seven fundamental units of SI; derived units are constructed by making combinations of the base units; base units: kilograms and meters (answers will vary); derived units: grams per milliliter and joules (answers will vary).
yes; \(\mathrm{mL\times\dfrac{g}{mL}=g}\) (answers will vary)
Key Takeaways
Recognize the SI base units and derived units. Combining prefixes with base units creates new units of larger or smaller sizes.
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
I tried to figure out the MO-scheme of the tetragonal-bipyramidal complex
trans-$\ce{[Co(en)2(NCS)2]SCN}$ in which the isothiocyanate ligands are bound to the $\ce{Co^3+}$-Ion in $\eta^{1}$-mode (en = ethylenediamine). While undertaking this task I noticed that I don't know the MO-scheme of the isothiocyanate ion, which I need to determine whether it acts as a $\sigma$ and/or $\pi$ acceptor or donor.
I tried to construct the MO via a symmetry-based treatment but ran into the problem that the isothiocyanate ion - belonging to the $C_{\infty v}$ point group - lacks symmetry elements perpendicular to its principal axis. Thus, I wasn't able to construct the bonding and antibonding sigma-orbitals with this method.
I could construct those $\sigma$-interactions by hand or use $\ce{CO2}$ as a model but I don't know whether this will give me the right results. I haven't found the MO-scheme on the internet, so I would be happy if someone could advise me on how to construct a qualitative MO-scheme of $\ce{SCN-}$ or simply show it to me. I would also be very grateful for some information on how exactly the isothiocyanate ion acts as a ligand (which $\pi$-interactions are present and how strong are they, is there also $\sigma$-donation as with carbonyl ligands, etc.).
Edit:I forgot to mention explicitly that the $\ce{SCN-}$ ion is bound to the metal center via its nitrogen atom.
|
Goal
I wish to prove that it is possible to relate the Sobolev norms on an arbitrary triangle $K$ to Sobolev norms on a reference triangle $\hat{K}$.
Preliminaries
To this end: Let $F \colon \hat{K} \to K$ be an invertible affine map given by $F(\hat{x}) = B\hat{x} + c$. For a function $\hat{v} \in C^2(\hat{K})$, we define the corresponding function $v \in C^2(K)$ by $$ v(x) = (\hat{v}\circ F^{-1})(x). $$
I am interested in giving a bound for the Sobolev semi-norm $|v|_{2, K}$ in terms of $|\hat{v}|_{2, \hat{K}}$ and the matrix $B$.
I use the definitions
$$ |v|_{2, K} = \left(\int_{K} \sum_{|\alpha| = 2} |\partial^\alpha v(x)|^2 dx \right)^{1/2} $$ where $\partial^\alpha v$ denotes the mixed partial derivatives of order $|\alpha| = 2$.
What I have tried
I started by trying to bound $|\partial^\alpha v(x)|$ in terms of the directional derivatives
$$ |\partial^\alpha v(x)| \leq \sup_{\|\xi_1\|, \|\xi_2\| \leq 1} |\nabla ((\nabla v(x))\cdot \xi_1)\cdot \xi_2| $$ and then use that since $v(x) = \hat{v}(\hat{x})$ this equals
$$ |\partial^\alpha v(x)| \leq \sup_{\|\xi_1\|, \|\xi_2\| \leq 1} |\nabla ((\nabla v(x))\cdot \xi_1)\cdot \xi_2| \\ = \sup_{\|\xi_1\|, \|\xi_2\| \leq 1} |\nabla ((\nabla v(x))\cdot B^{-1}\xi_1)\cdot B^{-1}\xi_2| \\ \leq \sup_{\|\xi_1\|, \|\xi_2\| \leq 1} |\nabla ((\nabla v(x))\cdot \xi_1)\cdot \xi_2|\| B^{-1} \|^{2}, $$
however - I get lost in the notation. Am I on the right path, and is there any better notation for working with derivatives in this fashion?
|
I once spent far too long getting nowhere with this.
Is there a way of finding the real roots of $ax^k-bx^{k-1}+b-a=0$ where $a, b, k\in \mathbb N$ and $b\gt a$ and $k\gt 1$? I know that there is no general formula for solving polynomials of degree greater than 4, but with so few $x$s I thought it might be possible. Note that $x=1$ is always a solution. Because of the dearth of $x$s the stationary points are easy to find, and I know a solution exists between $x=\frac{b(k-1)}{ak}$ and $x=\frac{b}{a}$.
Let's examine the specific case where $a = 1,$ $b = 2$ and $k = 6$, i.e. consider the polynomial $$f(X) = X^6 - 2X^5 + 1.$$
As you've pointed out, $1$ is a root of $f$ and hence $X-1$ divides $f$ in $\mathbb{Q}[X].$ In fact,
$$f(X) = (X -1)(X^5 -X^4 -X^3 -X^2 - X - 1).$$
So let's instead consider the polynomial $$g(X) = X^5 -X^4 -X^3 -X^2 - X - 1.$$ We claim $g(X)$ is not solvable by radicals.
First observe that $g(X)$ is irreducible over $\mathbb{Q}$ as its reduction modulo $5$ is irreducible over $\mathbb{F}_5.$
Let $L$ be the splitting field of $g$ over $\mathbb{Q}$ and $G = Gal (L/\mathbb{Q}).$ There is a faithful representation $G \rightarrow S_5$ given by the action of $G$ on the roots of $g$; identify $G$ with its image under this representation.
As $L$ is the splitting field of an irreducible fifth-degree polynomial, we have $5 \mid |G|$, and so $G$ contains an element of order $5$. As the only elements of $S_5$ of order $5$ are $5$-cycles, we obtain that $G$ contains a $5$-cycle.
We claim that complex conjugation restricted to $G$ has a cycle decomposition equal to the product of two $2$-cycles. Note that this is equivalent to showing $g$ has one real root. So we show the latter.
Observe
$$f'(X) = 6X^5 - 10X^4 = 2X^4(3X-5).$$
has $2$ real roots. It follows $f$ has at most $3$ real roots. So $g$ has at most $2$ real roots. As complex roots of a polynomial over $\mathbb{R}$ necessarily come in conjugate pairs and every odd degree polynomial over $\mathbb{R}$ has a real root, it must be the case that $g$ has exactly one real root.
So $G$ contains a five cycle and an element with a cycle decomposition equal to the product of two $2$-cycles. It follows $G$ contains $A_5$ and $G$ is not solvable. Consequently, $g$ will not be solvable by radicals.
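As a purely numerical sanity check of the real-root count used above (not part of the proof), one can compute the roots of $g$; a sketch assuming NumPy is available:

```python
# g(X) = X^5 - X^4 - X^3 - X^2 - X - 1; count how many roots are (numerically) real.
import numpy as np

roots = np.roots([1, -1, -1, -1, -1, -1])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(len(real_roots), real_roots)   # expect one real root, roughly 1.97
```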
|
In this article, we functorially associate definable sets to $k$-analytic curves, and definable maps to analytic morphisms between them, for a large class of $k$-analytic curves. Given a $k$-analytic curve $X$, our association allows us to have definable versions of several usual notions of Berkovich analytic geometry such as the branch emanating from a point and the residue curve at a point of type 2. We also characterize the definable subsets of the definable counterpart of $X$ and show that they satisfy a bijective relation with the radial subsets of $X$. As an application, we recover (and slightly extend) results of Temkin concerning the radiality of the set of points with a given prescribed multiplicity with respect to a morphism of $k$-analytic curves. In the case of the analytification of an algebraic curve, our construction can also be seen as an explicit version of Hrushovski and Loeser’s theorem on iso-definability of curves. However, our approach can also be applied to strictly $k$-affinoid curves and arbitrary morphisms between them, which are currently not in the scope of their setting.
In this paper we prove the Rigidity Theorem for motives of rigid analytic varieties over a non-Archimedean valued field $K$. We prove this theorem both for motives with transfers and without transfers in a relative setting. Applications include the construction of étale realization functors, an upgrade of the known comparison between motives with and without transfers and an upgrade of the rigid analytic motivic tilting equivalence, extending them to $\mathbb{Z}[1/p]$-coefficients.
In this note, we prove the logarithmic $p$-adic comparison theorem for open rigid analytic varieties. We prove that a smooth rigid analytic variety with a strict simple normal crossing divisor is locally $K(\pi,1)$ (in a certain sense) with respect to $\mathbb{F}_{p}$-local systems and ramified coverings along the divisor. We follow Scholze’s method to produce a pro-version of the Faltings site and use this site to prove a primitive comparison theorem in our setting. After introducing period sheaves in our setting, we prove the aforesaid comparison theorem.
In this article we prove the explicit Mordell Conjecture for large families of curves. In addition, we introduce a method, of easy application, to compute all rational points on curves of quite general shape and increasing genus. The method is based on some explicit and sharp estimates for the height of such rational points, and the bounds are small enough to successfully implement a computer search. As evidence of the simplicity of its application, we present a variety of explicit examples and explain how to produce many others. In the appendix our method is compared in detail to the classical method of Manin–Demjanenko and the analysis of our explicit examples is carried to conclusion.
Let ${\mathcal{X}}$ be a regular variety, flat and proper over a complete regular curve over a finite field such that the generic fiber $X$ is smooth and geometrically connected. We prove that the Brauer group of ${\mathcal{X}}$ is finite if and only if Tate’s conjecture for divisors on $X$ holds and the Tate–Shafarevich group of the Albanese variety of $X$ is finite, generalizing a theorem of Artin and Grothendieck for surfaces to arbitrary relative dimension. We also give a formula relating the orders of the groups under the assumption that they are finite, generalizing the known formula for a surface.
For a proper, smooth scheme $X$ over a $p$-adic field $K$, we show that any proper, flat, semistable ${\mathcal{O}}_{K}$-model ${\mathcal{X}}$ of $X$ whose logarithmic de Rham cohomology is torsion free determines the same ${\mathcal{O}}_{K}$-lattice inside $H_{\text{dR}}^{i}(X/K)$ and, moreover, that this lattice is functorial in $X$. For this, we extend the results of Bhatt–Morrow–Scholze on the construction and the analysis of an $A_{\text{inf}}$-valued cohomology theory of $p$-adic formal, proper, smooth ${\mathcal{O}}_{\overline{K}}$-schemes $\mathfrak{X}$ to the semistable case. The relation of the $A_{\text{inf}}$-cohomology to the $p$-adic étale and the logarithmic crystalline cohomologies allows us to reprove the semistable conjecture of Fontaine–Jannsen.
Given a finite group $\text{G}$ and a field $K$, the faithful dimension of $\text{G}$ over $K$ is defined to be the smallest integer $n$ such that $\text{G}$ embeds into $\operatorname{GL}_{n}(K)$. We address the problem of determining the faithful dimension of a $p$-group of the form $\mathscr{G}_{q}:=\exp (\mathfrak{g}\otimes _{\mathbb{Z}}\mathbb{F}_{q})$ associated to $\mathfrak{g}_{q}:=\mathfrak{g}\otimes _{\mathbb{Z}}\mathbb{F}_{q}$ in the Lazard correspondence, where $\mathfrak{g}$ is a nilpotent $\mathbb{Z}$-Lie algebra which is finitely generated as an abelian group. We show that in general the faithful dimension of $\mathscr{G}_{p}$ is a piecewise polynomial function of $p$ on a partition of primes into Frobenius sets. Furthermore, we prove that for $p$ sufficiently large, there exists a partition of $\mathbb{N}$ by sets from the Boolean algebra generated by arithmetic progressions, such that on each part the faithful dimension of $\mathscr{G}_{q}$ for $q:=p^{f}$ is equal to $fg(p^{f})$ for a polynomial $g(T)$. We show that for many naturally arising $p$-groups, including a vast class of groups defined by partial orders, the faithful dimension is given by a single formula of the latter form. The arguments rely on various tools from number theory, model theory, combinatorics and Lie theory.
In this article we construct a $p$-adic three-dimensional eigenvariety for the group $U(2,1)(E)$, where $E$ is a quadratic imaginary field and $p$ is inert in $E$. The eigenvariety parametrizes Hecke eigensystems on the space of overconvergent, locally analytic, cuspidal Picard modular forms of finite slope. The method generalizes the one developed in Andreatta, Iovita and Stevens [$p$-adic families of Siegel modular cuspforms, Ann. of Math. (2) 181 (2015), 623–697] by interpolating the coherent automorphic sheaves when the ordinary locus is empty. As an application of this construction, we reprove a particular case of the Bloch–Kato conjecture for some Galois characters of $E$, extending the results of Bellaiche and Chenevier to the case of a positive sign.
We study the problem of how the dual complex of the special fiber of a strict normal crossings degeneration $\mathscr{X}_{R}$ changes under products. We view the dual complex as a skeleton inside the Berkovich space associated to $X_{K}$. Using the Kato fan, we define a skeleton $\text{Sk}(\mathscr{X}_{R})$ when the model $\mathscr{X}_{R}$ is log-regular. We show that if $\mathscr{X}_{R}$ and $\mathscr{Y}_{R}$ are log-smooth, and at least one is semistable, then $\text{Sk}(\mathscr{X}_{R}\times _{R}\mathscr{Y}_{R})\simeq \text{Sk}(\mathscr{X}_{R})\times \text{Sk}(\mathscr{Y}_{R})$. The essential skeleton $\text{Sk}(X_{K})$, defined by Mustaţă and Nicaise, is a birational invariant of $X_{K}$ and is independent of the choice of $R$-model. We extend their definition to pairs, and show that if both $X_{K}$ and $Y_{K}$ admit semistable models, $\text{Sk}(X_{K}\times _{K}Y_{K})\simeq \text{Sk}(X_{K})\times \text{Sk}(Y_{K})$. As an application, we compute the homeomorphism type of the dual complex of some degenerations of hyper-Kähler varieties. We consider both the case of the Hilbert scheme of a semistable degeneration of K3 surfaces, and the generalized Kummer construction applied to a semistable degeneration of abelian surfaces. In both cases we find that the dual complex of the $2n$-dimensional degeneration is homeomorphic to a point, $n$-simplex, or $\mathbb{C}\mathbb{P}^{n}$, depending on the type of the degeneration.
We obtain a new lower bound on the size of the value set $\mathscr{V}(f)=f(\mathbb{F}_{p})$ of a sparse polynomial $f\in \mathbb{F}_{p}[X]$ over a finite field of $p$ elements when $p$ is prime. This bound is uniform with respect to the degree and depends on some natural arithmetic properties of the degrees of the monomial terms of $f$ and the number of these terms. Our result is stronger than those that can be extracted from the bounds on multiplicities of individual values in $\mathscr{V}(f)$.
We develop a theory of enlarged mixed Shimura varieties, putting the universal vectorial bi-extension defined by Coleman into this framework to study some functional transcendental results of Ax type. We study their bi-algebraic systems, formulate the Ax-Schanuel conjecture and explain its relation with the logarithmic Ax theorem and the Ax-Lindemann theorem which we shall prove. All these bi-algebraic and transcendental results extend their counterparts for mixed Shimura varieties. In the end we briefly discuss the André–Oort and Zilber–Pink type problems for enlarged mixed Shimura varieties.
In this paper we establish some constraints on the eigenvalues for the action of a self map of a proper variety on its $\ell$-adic cohomology. The essential ingredients are a trace formula due to Fujiwara, and the theory of weights.
The Chabauty–Kim method allows one to find rational points on curves under certain technical conditions, generalising Chabauty’s proof of the Mordell conjecture for curves with Mordell–Weil rank less than their genus. We show how the Chabauty–Kim method, when these technical conditions are satisfied in depth 2, may be applied to bound the number of rational points on a curve of higher rank. This provides a non-abelian generalisation of Coleman’s effective Chabauty theorem.
This note is about certain locally complete families of Calabi–Yau varieties constructed by Cynk and Hulek, and certain varieties constructed by Schreieder. We prove that the cycle class map on the Chow ring of powers of these varieties admits a section, and that these varieties admit a multiplicative self-dual Chow–Künneth decomposition. As a consequence of both results, we prove that the subring of the Chow ring generated by divisors, Chern classes, and intersections of two cycles of positive codimension injects into cohomology via the cycle class map. We also prove that the small diagonal of Schreieder surfaces admits a decomposition similar to that of K3 surfaces. As a by-product of our main result, we verify a conjecture of Voisin concerning zero-cycles on the self-product of Cynk–Hulek Calabi–Yau varieties, and in the odd-dimensional case we verify a conjecture of Voevodsky concerning smash-equivalence. Finally, in positive characteristic, we show that the supersingular Cynk–Hulek Calabi–Yau varieties provide examples of Calabi–Yau varieties with “degenerate” motive.
We suggest an analog of the Bass–Quillen conjecture for smooth affinoid algebras over a complete non-archimedean field. We prove this in the rank-1 case, i.e. for the Picard group. For complete discretely valued fields and regular affinoid algebras that admit a regular model (automatic if the residue characteristic is zero) we prove a similar statement for the Grothendieck group of vector bundles $K_{0}$.
The main aim of this paper is to show that a cyclic cover of $\mathbb{P}^n$ branched along a very general divisor of degree $d$ is not stably rational, provided that $n \geq 3$ and $d \geq n + 1$. This generalizes the result of Colliot-Thélène and Pirutka. Generalizations for cyclic covers over complete intersections and applications to suitable Fano manifolds are also discussed.
Let $X$ be a smooth projective curve of genus $g\geq 2$ over an algebraically closed field $k$ of characteristic $p>0$. We show that for any integers $r$ and $d$ with $0<r<p$, there exists a maximally Frobenius destabilised stable vector bundle of rank $r$ and degree $d$ on $X$ if and only if $r\mid d$.
In this series of papers, we explore moments of derivatives of L-functions in function fields using classical analytic techniques such as character sums and approximate functional equation. The present paper is concerned with the study of mean values of derivatives of quadratic Dirichlet L-functions over function fields when the average is taken over monic and irreducible polynomials P in 𝔽q[T]. When the cardinality q of the ground field is fixed and the degree of P gets large, we obtain asymptotic formulas for the first moment of the first and the second derivative of this family of L-functions at the critical point. We also compute the full polynomial expansion in the asymptotic formulas for both mean values.
|
Evening All,Just enquiring as to whether someone can help me with the exact location for purchasing a 2016 Examination Response (for my younger sibling).I am on Students Online. I have done the following Go to My Details > Results Services - But can't seem to see the link to purchase...
Evening All,I'm new to completing a Tax Return.... Could anyone give me some advice on completing/submitting a Tax Return please?? :)Tips/ Hints etc would be appreciated. :)Thanks in advance.Regards,Smile
Evening AllI was just wondering if anyone knew how many words one could go over the word limit without being penalised??I presume it would probably be different for each school - is there a certain % of words that you can go without being penalised??And should the school have...
Afternoon AllHow do you do the following questions?I know you integrate and then sub in each x value......1. \int^{\pi}_{\frac{\pi}{2}} \sin\frac{x}{2} dx2. \int^{\frac{\pi}{2}}_{0} \cos 3x dxThanks in advance. :)
Hello AllCould someone please assist me with the following:'Given that the wingspan of an aeroplane is 30m, find the plane's altitude to the nearest metre if the wingspan subtends an angle of 14' when it is directly over head..."Thanks in advance. :)Smile. :)
Hello AllCould someone please help me with the following:'A store offers furniture on hire purchase at 20% p.a. over 5 years, with no repayments for 6 months. Ali buys furniture worth $12000.What are the monthly repayments?'Thanks in advance. :)
Hello AllCould someone please help me with the following question:"Water evaporates from a pond at an average rate of 7% each week.""What percentage is left after 15 weeks?"Thanks for your help in advance. :)CheersSmile
Hello All (those who have gone before me, experienced with writing resumes)....I am about to start writing my first resume. :)Is anyone able to share with me, the structure of the resume they used?Any general rules/ tips you could share with me???Thanks for your assistance in...
Hello All... :)Could someone please assist me with the following questions?1. Use a calculator to find, to 2 d.p where appropriate the approximate value of:a)limh--> 0\left (\frac{10^h - 1}{h} \right )2. Find the value of k, correct to 3 d.p if:ln k = 1.9.Thanks for your help...
Hello AllIf I have the line y = 5x + 4 and the parabola y = (x - 4)^2How do I find the area between the x axis (on the bottom), the y axis (on the LH side), y = 5x + 4 (on the top) and the parabola on the RH side?? Hope this makes sense!Thanks in advance. :)
Hello All... :)Could someone please help me with the following question??$The normal to the curve y $ = \frac{1}{8}x^2 $ at P $ (2, \frac{1}{2} ) $ meets the curve again at Q. Find the co-ordinates of Q.$Thanks in advance. :)
Hello All :)I know this is probably thinking a bit far ahead but.....When is the 2014 HSC Timetable released?? On the Board of Studies it says 'Term 2 of your HSC year' ...Is this Term 1 2014 - Seeing though Term 4 was Year 12???Thanks in advance. :):):)
Hello All...'Consider the curve given by y = 2x^3 - \frac{1}{2}x^2 where -2\le x \le 5.''a) Find the stationary points and determine their nature'.I've got the two stationary points at x = 0 and x = 1/6... But something I'm doing is wrong from there...Thanks for your help in advance. :)
|
Energy changes which accompany chemical reactions are almost always expressed by
thermochemical equations, such as
\[\text{C} H_{4} (g) + 2 \text{O}_{2} (g) \rightarrow \text{C} \text{O}_{2} (g) + 2 \text{H}_{2} \text{O} (l) \text{ (25°C, 1 atm pressure)} \\ \Delta H_{m} = –890 \text{kJ} \label{1}\]
which is displayed on the atomic level below. To get an idea of what this reaction looks like on the macroscopic level, check out the flames on the far right.
Here the \(\Delta H_{m}\) (delta H subscript m) tells us whether heat energy is released or absorbed when the reaction occurs as written, and also enables us to find the actual quantity of energy involved. By convention, if \(\Delta H_{m}\) is positive, heat is absorbed by the reaction; i.e., it is endothermic. More commonly, \(\Delta H_{m}\) is negative, as in Eq. \(\ref{1}\), indicating that heat energy is released rather than absorbed by the reaction, and that the reaction is exothermic. This convention as to whether \(\Delta H_{m}\) is positive or negative looks at the heat change in terms of the matter actually involved in the reaction rather than its surroundings. In the reaction in Eq. \(\ref{1}\), the C, H, and O atoms have collectively lost energy and it is this loss which is indicated by a negative value of \(\Delta H_{m}\).
It is important to notice that \(\Delta H_{m}\) is the energy for the reaction as written. In the case of Equation \(\ref{1}\), that represents the formation of 1 mol of carbon dioxide and 2 mol of water. The quantity of heat released or absorbed by a reaction is proportional to the amount of each substance consumed or produced by the reaction. Thus Eq. \(\ref{1}\) tells us that 890.4 kJ of heat energy is given off for every mole of \(\ce{CH4}\) which is consumed. Alternatively, it tells us that 890.4 kJ is released for every 2 moles of \(\ce{H2O}\) produced. Seen in this way, \(\Delta H_{m}\) is a conversion factor enabling us to calculate the heat absorbed or released when a given amount of substance is consumed or produced. If \(q\) is the quantity of heat absorbed or released and \(n\) is the amount of substance involved, then
Example \(\PageIndex{1}\) : Heat Energy
How much heat energy is obtained when 1 kg of ethane gas, \(\ce{C2H6}\), is burned in oxygen according to the equation:
\[2 \text{C}_{2} \text{H}_{6} (g) + 7 \text{O}_{2} (g) \rightarrow 4 \text{C} \text{O}_{2} (g) + 6 \text{H}_{2} \text{O} (l) \\ \Delta H_{m} = –3120 \text{ kJ}\label{3}\]
Solution: The mass of \(\ce{C2H6}\) is easily converted to the amount of \(\ce{C2H6}\), from which the heat energy \(q\) is easily calculated by means of Eq. (2). The value of \(\Delta H_{m}\) is –3120 kJ per 2 mol \(\ce{C2H6}\). The road map is
\(\large m_{\text{C}_{\text{2}}\text{H}_{\text{6}}}\text{ }\xrightarrow{M}\text{ }n_{\text{C}_{\text{2}}\text{H}_{\text{6}}}\text{ }\xrightarrow{\Delta H_{m}}\text{ }q\)
so that
\(\begin{align} q &= 1 \times 10^3 \text{ g }\ce{C2H6} \times \frac{\text{1 mol }\ce{C2H6}}{\text{30.07 g }\ce{C2H6}} \times \frac{-3120\text{ kJ}}{\text{2 mol }\ce{C2H6}} \\ &= -\text{51 879 kJ} = -\text{51.88 MJ} \end{align} \)
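The same road-map calculation can be written out as a short Python sketch (the variable names are mine; the molar mass and \(\Delta H_{m}\) are the values used above):

```python
# mass -> amount -> heat, following the road map in the example.
M_C2H6 = 30.07        # g/mol
dH_m = -3120.0        # kJ per 2 mol C2H6 consumed, from the equation above

mass_g = 1.0e3                      # 1 kg of ethane
n = mass_g / M_C2H6                 # mol of C2H6
q = n * dH_m / 2.0                  # scale dH_m by the 2 mol in the equation
print(f"{q:,.0f} kJ")               # about -51,879 kJ = -51.88 MJ
```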
By convention, a negative value of \(q\) corresponds to a release of heat energy by the matter involved in the reaction. \(\Delta H_{m}\) is referred to as an enthalpy change for the reaction. In this context the symbol Δ (delta) signifies “change in” while \(H\) is the symbol for the quantity being changed, namely the enthalpy. We will deal with the enthalpy in some detail in Chap. 15. For the moment we can think of it as a property of matter which increases when matter absorbs energy and decreases when matter releases energy.
It is important to realize that the value of \(\Delta H_{m}\) given in thermochemical equations like \(\ref{1}\) or \(\ref{3}\) depends on the physical state of both the reactants and the products. Thus, if water were obtained as a gas instead of a liquid in the reaction in Eq. \(\ref{1}\), the value of \(\Delta H_{m}\) would be different from –890.4 kJ. It is also necessary to specify both the temperature and pressure since the value of \(\Delta H_{m}\) depends very slightly on these variables. If these are not specified they usually refer to 25°C and to normal atmospheric pressure.
Two more characteristics of thermochemical equations arise from the law of conservation of energy. The first is that
writing an equation in the reverse direction changes the sign of the enthalpy change. For example,
\[ \text{H}_{2} \text{O} (l) \rightarrow \text{H}_{2} \text{O} (g) \\ \Delta \text{H}_{m} = 44 \text{ kJ} \quad (4a) \]
In the image above, the flames input energy into the water, giving it the energy necessary to transition to the gas phase. Since flames provide the energy for the phase transition, this is an endothermic reaction (energy is absorbed). Equation (4a) tells us that when a mole of liquid water vaporizes, 44 kJ of heat is absorbed. This corresponds to the fact that heat is absorbed from your skin when perspiration evaporates, and you cool off. Condensation of 1 mol of water vapor, on the other hand, gives off exactly the same quantity of heat: \[ \text{H}_{2} \text{O} (g) \rightarrow \text{H}_{2} \text{O} (l) \\ \Delta \text{H}_{m} = –44 \text{ kJ} \quad (4b) \]
It's counterintuitive, but the common summer occurrence seen above is actually exothermic. Since the reaction isn't highly exothermic (like the combustion of \(\ce{CH4}\)), we find it hard to associate it with a release of energy. Thermodynamics allows us to better understand on a micro level energy changes like this one.
\[ \Delta \text{H}_{m}\text{(forward)} = –\Delta \text{H}_{m}\text{(reverse)} \]
To see why this must be true, suppose that \(\Delta H_{m}\) [Eq. (4a)] = 44 kJ while \(\Delta H_{m}\) [Eq. (4b)] = –50.0 kJ. If we took 1 mol of liquid water and allowed it to evaporate, 44 kJ would be absorbed. We could then condense the water vapor, and 50.0 kJ would be given off. We could again have 1 mol of liquid water at 25°C but we would also have 6 kJ of heat which had been created from nowhere! This would violate the law of conservation of energy. The only way the problem can be avoided is for \(\Delta H_{m}\) of the reverse reaction to be equal in magnitude but opposite in sign to \(\Delta H_{m}\) of the forward reaction. That is, \(\Delta \text{H}_{m}\text{(forward)} = –\Delta \text{H}_{m}\text{(reverse)}\).
|
Royden's (Real analysis, 4th edition, p.231) definition:
Let $X$ be a nonempty set and consider a collection of mappings $F=\{f_\alpha:X\rightarrow X_\alpha\}_{\alpha \in \Delta}$, where each $X_\alpha $ is a topological space. The weakest (coarsest) topology for $X$ that contains the collection of sets $\mathbb{F}=\{f_\alpha^{-1}(A_\alpha):f_\alpha \in F, A_\alpha \text{ open in } X_\alpha\}$ is called the
weak topology for $X$ induced by $F$.
Let $\tau_w$ be the weak topology for $X$ induced by $F$. I want to show that $\tau_w$ is a topology. So I thought to proceed in the following way.
(1) I start by showing that $\mathbb{F}=\{f_\alpha^{-1}(A_\alpha):f_\alpha \in F, A_\alpha \ open \ in \ X_\alpha\}$ is a subbasis for some topology $\tau$ on $X$ $^{[*1]}$. For this I show that the collection of all finite intersections of elements of $\mathbb{F}$, say $\mathbb{F}'$, is a basis for $\tau$ on $X$. So far, I assumed that $\tau$ is a topology on $X$.
(2) Now I invoke the result: let $\mathcal{B}$ be a basis for a topology $\tau$ on $X$. Then $\tau$ equals the collection of all unions of elements of $\mathcal{B}$. Now, I gave a form for the topology generated by the subbasis $\mathbb{F}$ (which is a topology).
(3) Next, I show that this $\tau$ is actually the coarsest topology which contains $\mathbb{F}$. That is, $\tau = \tau_w$.
Do you think this is correct? Do you think it is too much? I feel like it is much more straightforward than that. How would you take a stab at this question? Is there a way to show this by directly showing that $\tau_w$ (the topology generated by the subbasis $\mathbb{F}$) satisfies the axioms of topological spaces?
Thanks in advance!
$^{[*1]}$Do I need to put $\emptyset$ and $X$ together with $\mathbb{F}$ to obtain a subbasis? I didn't require $f_\alpha$ to be surjective.
|
Good day to everyone. In my research work I came out with a function which looks like this (it is the pdf of some random variable):$$f(x,\rho,\psi)=\frac{2}{\pi }+\sqrt{\frac{2}{\pi }} e^{-\frac{\rho ^2}{4}} \rho \sum _{k=1}^{\infty } \cos (2 k x ) \cos (2 k \psi )\left(I_{k+\frac{1}{2}}\left(\frac{\rho ^2}{4}\right)+I_{k-\frac{1}{2}}\left(\frac{\rho ^2}{4}\right)\right) $$ where $\rho >0, 0<\psi<\frac{\pi}{2}, 0<x<\frac{\pi}{2}$, $I_k(x)$ is the modified Bessel function of the first kind, and the summation is over all odd indices.
The thing is that the series converges slowly (because of the cosine terms), so it is hard to use this representation in practice. At first I tried to sum up the series but eventually gave up. But I know that in similar problems the pdf is usually assumed to be the von Mises or wrapped normal. So at first I noticed that the factor before the sum compensates the growth of the Bessel functions: $\lim_{\rho \to \infty }\sqrt{\frac{2}{\pi }} e^{-\frac{\rho ^2}{4}} \rho \left(I_{k+\frac{1}{2}}\left(\frac{\rho ^2}{4}\right)+I_{k-\frac{1}{2}}\left(\frac{\rho ^2}{4}\right)\right) =\frac{4}{\pi }$. And then, manipulating the von Mises pdf, I obtained: $$f_1(x,\rho,\psi)=\frac{e^{\frac{1}{4} \rho^2 \cos (2 (x -\psi ))}+e^{\frac{1}{4} \rho^2 \cos (2 (x +\psi ))}}{\pi I_0\left(\frac{\rho ^2}{4}\right)}$$
The same with wrapped normal.
$$f_2(x,\rho,\psi)=\frac{\vartheta _3\left(x -\psi ,e^{-\frac{2}{\rho ^2}}\right)+\vartheta _3\left(x +\psi ,e^{-\frac{2}{\rho ^2}}\right)}{\pi }$$ where $\vartheta _3$ is the Jacobi theta function. Well, but this is just "manipulating". Maybe you can give me a hint on how to show it analytically?
After that I tried to compare those approximations. There are a lot of criteria to define how close those distributions are, and they all come down to a computational procedure with different parameters $\rho, \psi$, which is not very nice/descriptive. What I really want to find is a strict enough proof that this or that approximation (in analytic form) is superior for different $\rho, \psi$ in application to those pdf's. Do you think it is possible?
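In case it helps to reproduce the comparison numerically, here is a small sketch (SciPy is assumed; the truncation level and the test point are arbitrary choices of mine, not part of the derivation):

```python
# Truncated series for f versus the von Mises-style approximation f1.
import numpy as np
from scipy.special import iv, i0

def f_series(x, rho, psi, K=201):
    k = np.arange(1, K + 1, 2)                 # odd indices only, as stated above
    a = rho**2 / 4.0
    terms = np.cos(2*k*x) * np.cos(2*k*psi) * (iv(k + 0.5, a) + iv(k - 0.5, a))
    return 2/np.pi + np.sqrt(2/np.pi) * np.exp(-a) * rho * terms.sum()

def f1(x, rho, psi):
    a = rho**2 / 4.0
    return (np.exp(a*np.cos(2*(x - psi))) + np.exp(a*np.cos(2*(x + psi)))) / (np.pi * i0(a))

x, rho, psi = 0.7, 2.0, 0.4
print(f_series(x, rho, psi), f1(x, rho, psi))
```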
|
When we are confronted with a new dataset, it is often of interest to analyse the structure of the data and search for patterns. Are the data points organized into groups? How close are the groups together? Are there any other interesting structures? These questions are addressed in the cluster analysis field. It contains a collection of unsupervised algorithms (meaning that they don't rely on class labels) which try to find these patterns. As a result, each data point is assigned to a cluster.
In \(k\)-means clustering, we define the number of clusters \(k\) in advance and then search for \(k\) groups in the data. Heuristic algorithms exist to perform this task computationally efficiently, even though there is no guarantee of finding a global optimum. Hartigan's method is one algorithm of this family. It is not to be confused with Lloyd's algorithm, which is the standard algorithm in \(k\)-means clustering and so popular that people often only talk about \(k\)-means when what they mean is that they apply Lloyd's algorithm to achieve a \(k\)-means clustering result.
Whatever algorithm is used, they all belong to the family of centroid-based clustering where we calculate a representative for each cluster based on the points which belong to it. It is a new data point which condenses the major information of the cluster and a common approach is to use the centre of gravity\begin{equation*} c_j = \frac{1}{\left| C_j \right|} \sum_{\fvec{x} \in C_j} \fvec{x}. \end{equation*}
\(C_j\) is the set with all data points which are assigned to the \(j\)-th cluster. Also, we need a measure which tells us how good our current clustering is in a single numerical value. It is common to use the variance of the current partition\begin{equation*} D_{var}(\mathfrak{C}) = \sum_{j=1}^{\left|\mathfrak{C}\right|} \sum_{\fvec{x} \in C_j} \left\| \fvec{x} - \fvec{c}_j \right\|^2. \end{equation*}
\( \mathfrak{C} = \left\{ C_1, C_2, \ldots \right\}\) denotes the set of all clusters. This basically calculates the squared Euclidean distance from each data point \(\fvec{x}\) to its assigned cluster centre \(\fvec{c}_j\), summed over all clusters. The lower the value of \(D_{var}(\mathfrak{C})\), the less variance there is inside the clusters, meaning that the data points are closer together. Usually, this is what we want. Hence, the \(k\)-means clustering algorithms try to minimize \(D_{var}(\mathfrak{C})\), but since they only work in a heuristic fashion, we may end up only in a local minimum.
Here, Hartigan's method is of interest. The underlying idea is sometimes also known as exchange clustering algorithm: start with an arbitrary initial clustering and then iterate over each data point. When doing so, re-assign the point to a different cluster on a trial basis and if this move improves \(D_{var}(\mathfrak{C})\), then accept the exchange; otherwise, when \(D_{var}(\mathfrak{C})\) gets worse, reject the exchange and revert it. For example, let's say that the data point \(\fvec{x}\) belongs initially to the cluster \(C_1\). We then remove it from \(C_1\) and add it to the set \(C_2\). After re-calculating \(D_{var}(\mathfrak{C})\) (the partition changed so it is likely that the variance changed as well), we decide if we keep the exchange or not. Summarizing, we apply the following steps:
\(t=0.25\): select a new point which should be tested.
\(t=0.5\): move the data point to a different cluster on a trial basis.
\(t=0.75\): calculate the variance after the exchange is performed. This also means that, for the calculation of the variance, the new cluster centres are used (the ones which arise due to the exchange). If the variance is lower than before, we keep the change. Otherwise, it is reverted.
\(t=1\): show the result after the data point is processed. The data point either keeps its old label (if the exchange resulted in no improvement of variance) or takes over the new label of the other cluster (if the exchange did result in an improvement).
The following animation shows an example for a two-dimensional dataset with the initial clustering\begin{align} \begin{split} C_1 &= \{ (1, 1), (3, 3), (2, 2) \} \\ C_2 &= \{ (2, 1), (0, 2), (3, 5), (4, 4), (5,4) \}. \end{split} \label{eq:ExchangeClusteringAlgorithm_Dataset} \end{align}
It shows each of the above-mentioned steps in increments of \(\Delta t = 0.25\). The algorithm runs two times over the complete dataset \(X = C_1 \cup C_2 \) and in every run each point is processed in the order shown in the animation (the order matters since a previous exchange can influence whether a later exchange is good or not since the cluster centres are re-calculated after each performed exchange). For this example, already in the second run there is no change and the algorithm converged (local minimum of \(D_{var}(\mathfrak{C})\) found).
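To make the exchange step concrete, here is a minimal Python sketch of Hartigan's method on the dataset above (an illustrative re-implementation, not the Mathematica notebooks attached below; the function names are mine):

```python
import numpy as np

def variance(X, labels, k):
    """D_var: squared distance of every point to its (re-computed) cluster centre."""
    total = 0.0
    for j in range(k):
        members = X[labels == j]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

def hartigan(X, labels, k, n_runs=2):
    labels = labels.copy()
    for _ in range(n_runs):                       # two full runs, as in the example
        for i in range(len(X)):
            best_j, best_var = labels[i], variance(X, labels, k)
            for j in range(k):
                if j != labels[i]:
                    labels[i] = j                 # trial exchange
                    v = variance(X, labels, k)
                    if v < best_var:
                        best_j, best_var = j, v
            labels[i] = best_j                    # accept the best cluster, or revert
    return labels

X = np.array([(1, 1), (3, 3), (2, 2), (2, 1), (0, 2), (3, 5), (4, 4), (5, 4)], float)
labels = np.array([0, 0, 0, 1, 1, 1, 1, 1])       # initial clustering C_1, C_2
print(hartigan(X, labels, k=2))
```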
If we plot the variance \(D_{var}(\mathfrak{C})\) over the iteration time \(t\), we see that the algorithm converged at around \(t=5\).
Maybe you want to explore Hartigan's method on a different dataset. For this purpose, you can use the general Mathematica notebook. Here, you can manually create your own dataset and then apply the algorithm from above.
List of attached files:
ExchangeClusteringAlgorithm_General.nb [PDF] (Mathematica notebook with the general implementation of the algorithm and where you can create your own dataset) ExchangeClusteringAlgorithm_Manual.nb [PDF] (Mathematica notebook which goes through the above example step-by-step) ExchangeClusteringAlgorithm_Functions.nb [PDF] (Mathematica notebook with auxiliary functions used by the two other notebooks. Note that all notebooks must be stored in the same folder when executing)
← Back to the overview page
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
My question is how to evaluate the Riemann zeta function for values with $\Re(s)<0$ using the functional equation?
I know I must use the functional equation which is defined as $\zeta (s)=2^s\pi {}^{s-1}\sin \left(\frac{\pi s}{2}\right)\Gamma (1-s)\zeta (1-s)$.
Say I wanted to find $\zeta (-3+2i)$. I know how to calculate $2^s\pi {}^{s-1}\sin \left(\frac{\pi s}{2}\right)\Gamma (1-s)$ at $-3+2i$, more specifically my question is about the $\zeta (1-s)$ term. Would I use the sum formula of the zeta function since $1-(-3+2i)=4-2i$ and thus $\Re(s)>1$?
Thanks for any help received.
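Edit: in case it is useful, here is the numerical cross-check I had in mind (a sketch assuming mpmath is available; the precision setting is arbitrary):

```python
# Evaluate zeta(-3+2i) via the functional equation and compare with mpmath's zeta.
from mpmath import mp, mpc, zeta, gamma, sin, pi, nsum, inf

mp.dps = 25
s = mpc(-3, 2)

rhs = 2**s * pi**(s - 1) * sin(pi*s/2) * gamma(1 - s) * zeta(1 - s)
series = nsum(lambda n: n**(-(1 - s)), [1, inf])   # Dirichlet series, valid since Re(1-s) = 4 > 1

print(rhs)                        # value from the functional equation
print(zeta(s))                    # should agree
print(abs(zeta(1 - s) - series))  # the sum formula does work for zeta(1-s) here
```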
|
$$\int_{}^{} {{e^{ax}}dx} = \frac{{{e^{ax}}}}{a}+C$$
Unless $a=0$, in which case we're integrating $1$, and the answer is $x+C$.
This discontinuity is jarring, and seemingly odd. If we were to just substitute $a=0$ into $\frac{{{e^{ax}}}}{a}$, we don't get an indeterminate form that could possibly turn into $x$ if you took the limit instead — you just get infinity, which is nuts.
The key to solving this weirdness lies in the $+C$ term — because of its presence, you can't intelligently evaluate such a limit. After all, perhaps you should take $C$ to be equal to minus infinity in some way. The way to handle this is to use a definite integral. If one integrates instead between two limits — say, $0$ and $x$ — the arbitrary constant disappears.
$$\int_0^x {{e^{ax}}dx} = \frac{{{e^{ax}} - 1}}{a}$$
Meanwhile, integrating $1$ between 0 and $x$ just gives you $x$.
Now, the limit $\mathop {\lim }\limits_{a \to 0} \frac{{{e^{ax}} - 1}}{a}$ is easy to take — just do a bit of L'Hôpital, and you see that indeed:
$$\mathop {\lim }\limits_{a \to 0} \frac{{{e^{ax}} - 1}}{a} = x$$
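As a quick sanity check of this limit, a short SymPy sketch (nothing here is specific to the argument; it just confirms the L'Hôpital computation):

```python
import sympy as sp

a, x = sp.symbols('a x')
print(sp.limit((sp.exp(a*x) - 1) / a, a, 0))   # prints: x
```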
Like I said, this integral shows up a lot when we're dealing with complex functions. For example, the integral:
$$\int\limits_{ - \infty }^\infty {{e^{-i\omega t}}dt} $$
is zero for all values of $\omega$ except $\omega=0$, where it goes to infinity. We call this function the "Dirac delta function" $\delta(\omega)$. The integral is exactly the same as before, but
this time, taking the limit won't work either — the limit of the integral as $\omega\to0$ is 0, not infinity.
How do we understand this? Well, notice that the integral is really a Fourier transform — it's the Fourier transform of the function "1", but the same integral is also important in the Fourier transform of any function of the form ${e^{i{\omega _n}t}}$, that is —
$$\int\limits_{ - \infty }^\infty {{e^{i({\omega _n} - \omega )t}}dt} $$
Similarly as above, the integral goes crazy when $\omega = {\omega _n}$, so the integral equals $\delta(\omega_n)$. So the limit is still 0 as you approach $\omega_n$.
What changed in our integral that made the limit argument no longer apply? Could it be that our use of complex variables made everything weirder by introducing periodicity? Well, no — our evaluation of the limit didn't assume anything about $a$ being real. The only reason we choose periodicity here is so the improper integral doesn't diverge. Well, the other change was our
use of an infinite domain of integration. Could this have made $F(\omega)$ discontinuous?
Watch the video above. You should
really watch the video above (it's 3blue1brown) if you want to understand what I'm going to say next. The idea is this: the Fourier transform is zero when the wrapped-up plot has its centre of mass at 0. When the domain of your integration is infinite, this is true whenever $\omega\neq\omega_n$, because the discrepancy between $\omega$ and $\omega_n$, however small, means the little cardioid keeps getting rotated a tiny little bit each winding, and finally gets smeared around the entire circle, so the centre of mass is at zero.
Meanwhile when $\omega=\omega_n$, the cardioid keeps returning to the same point, so the Fourier transform goes to infinity, because a non-zero centre of mass is getting added an infinite number of times.
On the other hand, when you're only Fourier-transforming a finite piece of the function (i.e. the limits of your integral are not infinite), the cardioid doesn't get smeared all across the circle, so the value of $F(\omega)$ starts to rise even before $\omega=\omega_n$.
If the domain of the Fourier transform were infinite, the cardioid would have
been smeared further, winding around the circle an infinite number of times.
In general, when you have an asymmetric shape forming from the wrapped-up plot, there is some number $N$ so that after $N$ windings, the asymmetric shape returns to its original position after winding around tons of places, and the resulting shape is symmetric. Or if $\frac{\phi}{2\pi}$ (where $\phi$ is the phase) is not rational, then you can get as close as you want to such a symmetric shape by approximating it with a sufficiently close rational number, and the actual value of $N$ would be infinite.
Calculate $N$.
However, when using a finite domain for the Fourier transform, only those winding frequencies $\omega$ for which $N$ is less than the domain of winding —
i.e. values where the phase difference is "sufficiently rational"— allow this symmetry to form, so only these values of $\omega$ show up as zero in the finite-domain Fourier transform.
Meanwhile, the main peak where $\omega = \omega_n$ isn't quite infinitely tall, because you're only adding up the centre of mass a finite number of times ($\omega t/2\pi$ times).
So the finite-Fourier transform actually ends up looking like this:
We've actually been considering the x-coordinate (real part) of the Fourier
transform of $\cos(\omega_n t)$ in these illustrations, but this is really essentially
the same as the Fourier transform of $e^{i\omega_n t}$, as in our calculations.
Which
isn't a discontinuous Dirac delta function! As the domain of the transform widens, the true peaks above become narrower and narrower, taller and taller, the wavy stuff flattens out, and the Fourier transform approaches a Dirac delta function!
So this tells us exactly what we need — we do still need to take a limit, but we need to take a limit of what
function $F(\omega)$ the integral approaches as the domain $(-T,T)\to(-\infty,\infty)$. And this is simple.
$$\int_{ - T}^T {{e^{ - i\omega t}}dt} = \frac{{{e^{i\omega T}} - {e^{ - i\omega T}}}}{{i\omega }} = \frac{2}{\omega }\sin (\omega T)$$
It is left as an exercise to the reader to prove that this converges to the delta function $2\pi\delta(\omega)$ in the limit where $T\to\infty$.
To prove the coefficient $2\pi$ on the delta function, consider the area under the curve.
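One way to carry out that area computation, as a sketch in SymPy (the substitution \(u=\omega T\), which makes the area independent of \(T\), is mine):

```python
import sympy as sp

u = sp.symbols('u', real=True)
# With u = omega*T (T > 0): integral of 2*sin(omega*T)/omega d(omega) = 2 * integral of sin(u)/u du
print(2 * sp.integrate(sp.sin(u)/u, (u, -sp.oo, sp.oo)))   # prints: 2*pi
```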
Here's another way you could've arrived at the idea of taking a finite-limit integral: Fourier transforms are pretty common in practical settings, except they're typically done over finite domains of time, since it's kind of impractical to play signals forever. It seems unlikely you'd get some crazy Dirac delta in standard signal processing. So it seems sensible to expect that the discontinuity only arises when you integrate over all of $\mathbb{R}$.
Explain similar limiting cases in the following integrals:
The integral of $x^n$ as $n\to-1$; the integral of $a^x$ as $a\to1$ (hint: this isn't really different from the integral of $e^{ax}$).
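A sketch of the first case (my own working, not from the text): choose the constant of integration so that the antiderivative vanishes at $x=1$; then
$$\int_1^x u^{n}\,du=\frac{x^{n+1}-1}{n+1}\ \longrightarrow\ \ln x\quad\text{as }n\to-1,$$
by L'Hôpital's rule, so, just as with the finite-domain transform, the apparent discontinuity at $n=-1$ disappears once you take the limit of the right object.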
|
We are going to try to solve the following example with the basic tools of Probabilities.
A screw factory has two machines: M1, which is old and produces 75% of all the screws, and M2, newer but smaller, which produces 25% of the screws. M1 produces 4% defective screws, while M2 produces only 2% defective screws. If we choose a screw at random, what is the probability that it turns out to be defective?
We consider the following events:
$$M1$$ = "being produced by the machine 1"
(and therefore, $$\overline{M1}=M2$$ ="being produced by the machine 2")
$$D$$ = "defective screw"
If we represent our problem in a tree, we can compute the probability more easily.
In the diagram we can see the branches that we are interested in (in dark orange).
The top branch, "being produced by machine 1 and being defective", has probability $$0,75 \cdot 0,04 = 0,03$$, that is, $$3$$%. The bottom branch, "being produced by machine 2 and being defective", has probability $$0,25\cdot 0,02 = 0,005$$, that is to say, $$0,5$$%.
Therefore, the probability of being defective is:
$$$P(D) = 0,75\cdot 0,04 + 0,25\cdot 0,02 = 0,035$$$
Let's analyse what we have done. The probabilities with which we have worked are in a way conditional probabilities: $$P(D/M1)$$ is the probability that it turns out to be defective given that it has been produced with machine 1.
So, we can re-write our result as:
$$$ P(D) = P(M1)\cdot P(D/M1) + P(M2)\cdot P(D/M2)$$$
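A minimal computational sketch of the same tree calculation (the variable names are mine; decimal points replace the decimal commas used in the text):

p_m1, p_m2 = 0.75, 0.25              # P(M1), P(M2)
p_d_m1, p_d_m2 = 0.04, 0.02          # P(D/M1), P(D/M2)
p_d = p_m1 * p_d_m1 + p_m2 * p_d_m2  # law of total probability
print(p_d)                           # 0.035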
We can generalize this result with the theorem or law of total probability.
Law of total probability:
Let $$A_1,A_2,\ldots, A_n$$ be a complete system of events and $$B$$ any event associated with the same experiment. Then, we have :
$$$ P(B)=P(A_1)\cdot P(B/A_1)+P(A_2)\cdot P(B/A_2)+\ldots+ P(A_n)\cdot P(B/A_n) $$$
Let's recall that a complete system of events, or partition, is a set of events that are pairwise incompatible (that is, they cannot happen simultaneously) and whose union is the whole sample space. A particular example of a partition of the sample space is the set of its elementary events. Unless we say otherwise, we will always assume that the events in a partition have non-zero probability; otherwise we could eliminate such an event and obtain a smaller partition of the sample space.
Let's see other examples where we can apply this result.
We have three boxes with light bulbs. The first one contains $$10$$ bulbs, with $$4$$ of them broken; in the second one there are $$6$$ light bulbs, and only one broken, and in the third one there are three broken light bulbs out of eight. What is the probability that if we choose a box at random and we take a light bulb, this is broken?
We will write $$C1$$, $$C2$$, $$C3$$ if we choose the boxes 1, 2, and 3, respectively.
Since the choice is random, we have probability $$\dfrac{1}{3}$$ of choosing each box.
The event that we are interested in is $$F =$$ "broken light bulb" (and we also write $$\overline{F} =$$ "working bulb").
We represent it in a tree:
In dark orange we observe the branches of the tree that lead to the event that we are interested in, i.e. "broken light bulbs".
$$(C1, C2, C3)$$ is a partition of the sample space, since we will always choose one of three boxes (in other words, its union is the whole), and we cannot choose more than one box (i.e., they are two by two incompatible).
We apply, then, the law of total probability:
$$$ \begin{array}{rl} P(F) =& P(C1)\cdot P(F/C1) + P(C2)\cdot P(F/C2) + P(C3)\cdot P(F/C3) \\ =& \dfrac{1}{3}\cdot\dfrac{4}{10}+\dfrac{1}{3}\cdot\dfrac{1}{6}+ \dfrac{1}{3}\cdot\dfrac{3}{8} = \dfrac{4}{30}+\dfrac{1}{18}+\dfrac{3}{24}= \dfrac{113}{360} \end{array}$$$
As we can see, to apply the theorem of the total probability is the same as to compute the probability using the tree.
The only thing we have to care about is to compute the probabilities for each of the branches.
A bag contains three red balls and two blue balls. We do two experiments:
i) We remove successively, and with replacement, two balls and observe their colour. What is the probability of $$S =$$ "to extract a red ball and a blue one, without taking into account the order"?
ii) We remove successively, but without replacement, two balls and observe their colour. What is the $$P(S)$$ in this case?
This is a very frequent experiment: when we remove something with replacement, it means that when we remove the ball from the bag, we then introduce it back. If it is without replacement, we keep the ball out. This will influence the probability of the second ball being one color or another.
Let's consider the events: $$R =$$ "to remove a red ball", $$A =\overline{R} =$$ "to remove a blue ball".
Our sample space is: $$\Omega=\{RR,RA,AR,AA\}$$, and the events that we are interested in are $$RA$$ and $$AR$$.
i) We represent our problem in a tree.
Using the law of total probability, $$$ P (S) = P (R)\cdot P (A / R) + P (A)\cdot P (R / A) = \dfrac{3}{5}\cdot\dfrac{2}{5}+\dfrac{2}{5}\cdot\dfrac{3}{5}= \dfrac{12}{25}$$$
ii) In this case, the second time that we remove a ball, the probabilities will be different, depending on whether the first ball was red or blue.
For example, $$P (A / R) =$$ "probability of removing a blue ball the second time, knowing that in the first one we have removed a red one" $$= \dfrac{2}{4}$$, since upon removing a red ball from the bag, there will be two blue balls remaining.
In this case, our tree is the following (compute the conditional probabilities, and verify that you obtain the same result):
And so, by virtue of the law of total probability, we have a different result from that of the previous case: $$$P (S) = P (R)\cdot P (A / R) + P (A)\cdot P (R / A) = \dfrac{3}{5}\cdot\dfrac{2}{4}+\dfrac{2}{5}\cdot\dfrac{3}{4}= \dfrac{12}{20}$$$
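A quick simulation sketch (my own check, not part of the exercise) reproduces both answers:

import random

def run(trials, replacement):
    hits = 0
    for _ in range(trials):
        bag = ["R"] * 3 + ["A"] * 2
        first = random.choice(bag)
        if not replacement:
            bag.remove(first)
        second = random.choice(bag)
        hits += (first != second)        # one red and one blue, in either order
    return hits / trials

print(run(200_000, replacement=True))    # ≈ 12/25 = 0.48
print(run(200_000, replacement=False))   # ≈ 12/20 = 0.60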
We will finish this level with a more complicated problem that will be useful to see that the way of solving more complicated problems is the same that what we have done until now.
Suppose we have $$250$$ doctors from Europe meeting in a conference. Among these, $$115$$ are German; $$65$$, French; and $$70$$, English. We also know that 75% of the Germans, 60% of the French and 65% of the English are in favour of using a new vaccine for the flu. In order to decide whether the vaccine is finally used, they agree on the following: among all the doctors they select three doctors at random (with replacement), who answer whether they are in favour or not. Remember that with replacement means that the same doctor can be selected all three times (or two times, for that matter). The vaccine is approved if, out of these three picks, at least two agree on using the vaccine. What is the probability that this occurs?
Let's consider the following events: $$A =$$ "German doctor", $$F =$$ "French doctor", $$I =$$ "English doctor", as well as $$V =$$ "to be in favour of the vaccine" (and therefore, $$\overline{V} =$$ "to be against of the vaccine").
This is a compound experiment: we choose three doctors, each of whom can be in favour of or against using the vaccine. Our sample space would be: $$$\Omega=\{ (V,V,V),(V,V,\overline{V}),(V,\overline{V},V),(V,\overline{V},\overline{V}), \\ (\overline{V},V,V),(\overline{V},V,\overline{V}),(\overline{V},\overline{V},V),(\overline{V},\overline{V},\overline{V}) \}$$$
There are four favorable cases: $$(V,V,V)$$, $$(V,V,\overline{V})$$, $$(V,\overline{V},V)$$, $$(\overline{V},V,V)$$.
Now, in order to choose each doctor, we perform another compound experiment: we randomly choose a country among $$A$$, $$F$$, $$I$$, and once the doctor is chosen he must say whether he is in favour ($$V$$) of, or against ($$\overline{V}$$), the vaccine.
We represent our problem in a tree. We have to be sure that we repeat this experiment three times, one for every vote. Note again that it is a case with replacement, since the same doctor can be picked more than once. Thus we have three votes coming out from the same tree, represented below:
Our sample space for every experiment is $$\Omega_i=\{ (A,V), (A,\overline{V}), (F,V), (F,\overline{V}), (I,V), (I,\overline{V}) \}$$.
What is the probability that a doctor chosen at random is in favor of the vaccine?
Using the law of total probability,
$$$ P (V) = P (A)\cdot P (V / A) + P (F)\cdot P (V / F) + P (I)\cdot P (V / I)$$$
Seen another way, we add the probabilities of all the branches that finish in $$V$$.
Substituting, $$$P(V)=\dfrac{115}{250}\cdot 0,75 +\dfrac{65}{250}\cdot 0,6+ \dfrac{70}{250}\cdot 0,65= 0,345+ 0,156+ 0,182 = 0,683 $$$
That is, a probability of 68,3%. Since we know that $$P(\overline{V})=1-P(V)$$, we also have that $$P(\overline{V})=1-0,683=0,317$$, or in words, a probability of 31,7%.
We repeat this experiment three times. We have four favorable cases:
Case $$(V,V,V)$$: its probability is $$0,683\cdot0,683\cdot0,683 = 0,319$$.
Case $$(V,V,\overline{V})$$: its probability is $$0,683\cdot 0,683\cdot 0,317 = 0,148$$.
Case $$(V,\overline{V},V)$$: its probability is $$0,683\cdot 0,317\cdot 0,683 = 0,148$$.
Case $$(\overline{V},V,V)$$: its probability is $$0,317\cdot 0,683\cdot 0,683 = 0,148$$.
Finally, the probability of approving the vaccine is: $$$0,319 + 0,148 + 0,148 + 0,148 = 0,763$$$ that is to say, of 76,3%.
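The whole chain can be verified in a few lines (a sketch; the names are mine):

p_v = (115/250) * 0.75 + (65/250) * 0.60 + (70/250) * 0.65   # P(V) by total probability
p_approve = p_v**3 + 3 * p_v**2 * (1 - p_v)                  # at least two of the three in favour
print(round(p_v, 3), round(p_approve, 3))   # 0.683 0.762 (the text's 0,763 comes from rounding each case first)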
Observation 1:
Since we are not concerned with the order with which the doctors are chosen, but rather with whether they are in favor of the vaccine or not, we might think that our sample space is $$\Omega=\{\{V,V,V\}, \{V,V,\overline{V}\}, \{V,\overline{V},V\}, \{\overline{V},V,V\}\}$$.
Note that, instead of writing the elementary events with parentheses, "( )", we write them with braces, "{ }", to signify that our results are without order. We will analyze this in more depth in the section on combinatorial analysis.
There are two favorable cases: $$\{V,V,V\}$$ and $$\{V,V,\overline{V}\}$$, although we have to be sure that the second one counts three times, since it corresponds in fact to three ordered events. Thus we would calculate, in the same way, the probability of every case, and would obtain the total probability as $$P(\text{"at least two affirmative votes"}) = P(\{V,V,V\})+ 3P(\{V,V,\overline{V}\})$$. The result does not change if we follow this strategy.
Observation 2:
It may look a bit weird that a single doctor can be chosen three times. In fact, if we do not allow the same doctor to vote more than once, the result does not change much, because of the large number of doctors taking part in the conference. You are welcome, however, to redo the problem assuming that each doctor can vote only once, that is, that there is no replacement.
|
Continuous Functions, Discontinuous Supremum
A function $f:\mathbb{R}\to\mathbb{R}$ is said to be
continuous if the preimage of any open set is open. Analogously, we might say that a function is measurable if the preimage of a measurable set is measurable. This is a generalization of the more standard definitions of a measurable function: "$f:\mathbb{R}\to\mathbb{R}$ is measurable if for any $a\in\mathbb{R}$ the set $f^{-1}((a,\infty))=\{x\in\mathbb{R}:f(x)>a\}$ is measurable," or equivalently, "$f$ is measurable if the preimage of every Borel set is a measurable set."
It's not hard to show that if $\{f_n:\mathbb{R}\to\mathbb{R}\}$ is a sequence of measurable functions, then $\sup_n f_n$, $\inf_n f_n$, $\limsup_n f_n$ and $\liminf_n f_n$ are
also measurable functions*. But here the analogy between continuity and measurability breaks down. It is not true that if each $f_n$ is a continuous function, then $\sup_n f_n$, $\inf_n f_n$, $\limsup_n f_n$ and $\liminf_n f_n$ are continuous as well.
Below is a counterexample (which is
not as bad as it looks!) - we have a sequence of functions $\{f_n:[0,1]\to[0,1]\}$, each of which is continuous, yet $\sup_n f_n$ is not! (Counter)Example
For each $n\in\mathbb{N}$ define $f_n:[0,1]\to[0,1]$ by
$$f_n(x)=\begin{cases} 2n(n+1)\left(x-\dfrac{1}{n+1}\right), & \dfrac{1}{n+1}\le x\le \dfrac{2n+1}{2n(n+1)},\\[2mm] 2n(n+1)\left(\dfrac{1}{n}-x\right), & \dfrac{2n+1}{2n(n+1)}\le x\le \dfrac{1}{n},\\[2mm] 0, & \text{otherwise.}\end{cases}$$
I know this looks like a hot mess! But it's not bad at all. The graph of each $f_n$ consists of the four lines which connect $(0,0)$, $(1/(n+1),0)$, $((2n+1)/(2n(n+1)),1)$, $(1/n,0)$ and $(1,0)$, in that order. Or to put it simply,
$f_n$ forms a triangle centered at $\frac{2n+1}{2n(n+1)}$ with a base of width $\frac{1}{n(n+1)}$;
the right base point of $f_{n+1}$ equals the left base point of $f_n$;
and $f_n(0)=0$ for all $n$.
Here's what the first 3 functions look like:
Clearly each $f_n$ is continuous. We claim $g(x)=\sup_n\{f_n(x)\}$ is not. Indeed, for each $k\in\mathbb{N}$ let $$x_k=\frac{2k+1}{2k(k+1)}$$ (the value of $x$ at which $f_k$ attains its peak) and observe that $\displaystyle{ \lim_{k\to\infty} x_k=0}$. Assuming to the contrary that $g$
is continuous, we must have $$ \lim_{k\to\infty}g(x_k)=g(0). $$ On the right hand side, we have $g(0)=\sup_n\{f_n(0)\}=0$ since $f_n(0)=0$ for all $n$. But on the left hand side, for a fixed $k\in\mathbb{N}$, we see that $g(x_k)=\sup\{f_n(x_k)\}=1$ since \begin{align*} f_n(x_k)=\begin{cases} 1, &\text{if $n=k$}\\ 0, &\text{if $n\neq k.$} \end{cases} \end{align*}This implies that $$ 1= \lim_{k\to\infty}g(x_k)=g(0)=0 $$ which is, of course, a contradiction.
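A quick numerical illustration (a sketch with my own naming, not from the post): evaluating the tent functions at the peak points shows $g(x_k)=1$ for every $k$ while $g(0)=0$, even though $x_k\to 0$.

def f(n, x):
    # piecewise-linear "tent": 0 outside [1/(n+1), 1/n], peak value 1 at (2n+1)/(2n(n+1))
    left, peak, right = 1 / (n + 1), (2 * n + 1) / (2 * n * (n + 1)), 1 / n
    slope = 2 * n * (n + 1)
    if left <= x <= peak:
        return slope * (x - left)
    if peak < x <= right:
        return slope * (right - x)
    return 0.0

def g(x, n_max=2000):
    # finite approximation of sup_n f_n(x); enough terms for the points sampled below
    return max(f(n, x) for n in range(1, n_max + 1))

for k in (1, 10, 100, 1000):
    x_k = (2 * k + 1) / (2 * k * (k + 1))
    print(f"x_{k} = {x_k:.6f},  g(x_k) = {g(x_k):.6f}")   # always 1
print("g(0) =", g(0.0))                                   # 0.0, even though x_k -> 0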
Footnotes:
* Here's the proof that $\sup_n f_n$ and $\limsup_n f_n$ are measurable: Fix $a\in\mathbb{R}$ and let $g=\sup f_n$ (i.e. for a fixed $x\in \mathbb{R}$, $g(x)=\sup_n\{f_n(x)\}$). We claim $$g^{-1}((a,\infty])=\bigcup_{n=1}^\infty f_n^{-1}((a,\infty]).$$ If this is true, then indeed the function $g$ is measurable since we can express $g^{-1}((a,\infty])$ as a countable union of measurable sets. To prove the claim, notice that $x\in g^{-1}((a,\infty])$ if and only if $g(x)>a$ if and only if $\sup_n\{f_n(x)\}>a$ if and only if $f_n(x)>a$ for some $n$. And this last bit is true if and only if $x\in \bigcup_{n=1}^\infty f_n^{-1}((a,\infty])$.
To see that $\limsup f_n$ is measurable, suppose we have proved that, for any measurable $f_n$, the function $\inf f_n$ is measurable (the proof is very similar to the previous paragraph). Write $$\limsup f_n=\inf_{n\geq1}\sup_{k\geq n}f_k$$ and let $g_n=\sup_{k\geq n}f_k$. By above, we know each $g_n$ is a measurable function, and so by assumption $\inf_n g_n$ -- and hence $\limsup f_n$ -- is measurable.
By letting $h=\inf f_n$ and claiming $h^{-1}([-\infty,a))=\bigcup_{n=1}^\infty f_n^{-1}([-\infty,a))$, one can use similar techniques to show that $\inf f_n$ and $\liminf f_n$ are measurable.
|
As we mentioned in the last post, there are currently over 2000 active speed loop detectors within the Bay Area highway system. The information provided by these loops is often highly redundant because speeds at neighboring sites typically differ little from one another. This observation suggests that a higher level, “macro” picture of traffic conditions could provide more insight: Rather than stating the speed at each detector, we might instead offer info like “101S is rather slow right now”. In fact, we aim to characterize traffic conditions as efficiently as possible. To move towards this goal, we have carried out a principal component analysis (PCA)$^1$ of the full 2014 (year to date) PEMS data set.
As described in [1] below, PCA provides us with a slick, automated method for identifying the most common “traffic patterns” or “modes” that get excited in our system. By adding together these patterns — with appropriate time-specific amplitudes — we can reconstruct the site-by-site traffic conditions observed at any particular moment. Importantly, summing over only the most significant modes will provide us with a system-tailored, minimal-loss method of data compression that will simplify our later prediction analysis. We will discuss this compression benefit further in the next post. Here, we present the two dominant modes of the Bay Area traffic system (see figures above). Notice that the first is fairly uniform, which presumably captures some nearly-site-independent changes in mean speed associated with night vs. daytime driving. In contrast, the second mode captures some interesting structure, showing slowdowns for some highways/directions and speedups for others. Evidently, this structure is the second most highly exhibited pattern in the Bay Area system; we couldn't have intuited this pattern, but it has been captured automatically via our PCA.
[1]
Statistical physics of PCA: One way of thinking about PCA as applied here is to imagine that the traffic system is harmonic. That is, we suppose that the traffic dynamics observed can be characterized by an energy cost function that is quadratic in the speeds of the different loops, measured relative to their average values, $E = \frac{\beta^{-1}}{2} \delta \textbf{v}^{T} \cdot H \cdot \delta \textbf{v}$. Here, $\delta v_i = v_i - \langle v_i \rangle$ and $H$ is a matrix Hamiltonian. Under some effective, thermal driving, the pair correlation for two sites will be given by $\langle \delta v_a \delta v_b \rangle \equiv \frac{1}{Z} \int_{\{\delta \textbf{v}_i\}} e^{- \frac{1}{2} \delta \textbf{v}^{T} \cdot H \cdot \delta \textbf{v}}\, \delta v_a \delta v_b = H^{-1}_{ab}$. It is this pair correlation function that is measured when one carries out a PCA analysis, and the matrix $H^{-1}$ is called the covariance matrix. Its eigenvectors are the modes of the system — the independent traffic patterns that we discuss above. The low-lying modes are those with a larger $H^{-1}$ eigenvalue; these have low energy, are consequently often highly excited, and generally dominate the traffic conditions that we observe.
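As a generic illustration of the procedure (a sketch, not the authors' actual pipeline; the array shapes and names are assumptions), PCA of a times-by-sites speed matrix amounts to an SVD of the mean-subtracted data:

import numpy as np

rng = np.random.default_rng(0)
n_times, n_sites = 5000, 200
speeds = 60 + 5 * rng.standard_normal((n_times, n_sites))   # placeholder speed data, mph

centered = speeds - speeds.mean(axis=0)          # subtract each site's mean speed
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)                  # fraction of variance captured by each mode
mode_1, mode_2 = Vt[0], Vt[1]                    # dominant and second site-by-site patterns
amplitudes = centered @ Vt[:2].T                 # time-specific amplitudes of the two modes
print(explained[:5])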
|
This is a problem known as finding 'moments of moments'.
Notation: define the power sum $s_r$:
$$s_r=\sum _{i=1}^n X_i^r$$
Your problem only involves $s_1$.
The Problem
Let $\left(X_1,\ldots,X_n\right)$ denote a random sample of size $n$ from a population random variable $X$.
The problem is to find:
$$ E\Big [\Big (\frac1n\sum_{i=1}^n X_i\Big)^2\Big ] = E\Big [\big(\frac{s_1}{n}\big)^2\Big]$$
i.e. we seek the expectation of $\big(\frac{s_1}{n}\big)^2$ ...
i.e. the 1st raw moment of $\big(\frac{s_1}{n}\big)^2$ ... so the solution (expressed in terms of central moments of the population) is:
$$E\Big[\Big(\frac{s_1}{n}\Big)^2\Big]=\frac{\mu_2}{n}+\acute{\mu}_1^2$$
where:
RawMomentToCentral is a function from the
mathStatica package for Mathematica,
$\acute{\mu}_1$ denotes the 1st raw moment of the random variable $X$ (i.e. the mean of $X$), and
$\mu_2$ denotes the 2nd central moment of random variable $X$ (i.e. the variance of $X$).
In your case, $X \sim N(\bar{x}_{0}, \sigma^2)$, so $\acute{\mu}_1 = \bar{x}_{0}$ and $\mu_2 = \sigma^2$. Substituting these values into the expression above yields: $$\frac{\sigma^2}{n} + \bar{x}_{0}^2 \quad \quad \text{(as required)}$$ All done.
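A quick Monte Carlo sketch (my own check, not from the book) confirming $E\big[(\bar{X})^2\big]=\mu_2/n+\acute{\mu}_1^2$ for normal data:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 3.0, 10, 200_000
samples = rng.normal(mu, sigma, size=(reps, n))
mean_sq = (samples.mean(axis=1) ** 2).mean()     # Monte Carlo estimate of E[(sample mean)^2]
print(mean_sq, sigma**2 / n + mu**2)             # both ≈ 4.9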
More detail
There is an extensive discussion of
moments of moments in Chapter 7 of our book: Rose and Smith, " Mathematical Statistics with Mathematica", Springer, NY
A free download of the chapter is available here:
http://www.mathstatica.com/book/Rose_and_Smith_2002edition_Chapter7.pdf
|
$$(2\sqrt 2 - 2)x^2 + \sqrt8 x + (1+\sqrt 2)=0$$
Now the discriminant of this is $0$, so it has one real repeated root. A plot on Desmos confirms this.
However, Wolfram Alpha displays the following (see image). The solution contains $i$ and doesn't agree with what it should be $= -1.707...$
What is happening?
[The solution should be as I said above because $x = \frac{-\sqrt8 \pm 0 }{2(2\sqrt2 - 2)} = -1 - \frac{1}{2} \sqrt 2 \approx -1.707...$ after simplifying]
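For what it's worth, an exact symbolic check (a sympy sketch) confirms the repeated real root, which suggests the Wolfram Alpha output is just an unsimplified expression whose imaginary part cancels:

import sympy as sp

x = sp.symbols('x')
expr = (2*sp.sqrt(2) - 2)*x**2 + sp.sqrt(8)*x + (1 + sp.sqrt(2))

print(sp.simplify(sp.discriminant(expr, x)))   # 0  -> a single repeated real root
roots = sp.solve(expr, x)
print([sp.radsimp(r) for r in roots])          # [-1 - sqrt(2)/2]
print(float(roots[0]))                         # -1.7071067811865475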
|
Introduction to LaTeX
Aleksandar Petrov, EUROAVIA Delft, October 8, 2015

Contents: What is LaTeX; The Basics of the Basics; Document Layout; Sectioning; Line Breaks; Table of contents; Big project management; Special characters; Typesetting Math; Math environments; Typing symbols; Bracketing; Equation numbering; Floats; Figures; Tables; Referencing & Citing; Referencing; Citing; Additional resources

What is LaTeX
Document Layout
A report class example:
\documentclass{report}
% Title Page
\title{Report Title}
\author{Author name}
\begin{document}
\maketitle
\begin{abstract}
Sum up what it is all about
\end{abstract}
\chapter{Introduction}
Background information
\end{document}
Sectioning
Line Breaks
This is the first paragraph; it contains some text to show how amazing an author I am. I know it is magnificent, you don't need to tell me this.

We are continuing with the fancy stuff. As you can see, the line of my thought has not changed. \par Indeed, I am still wondering if you are impressed enough... \newline For sure this is what it is. \\

To sum up:
empty line - starts a new paragraph (introducing a new topic)
\par - starts a new paragraph without you adding an empty line in the source (same effect as an empty line)
\\ - breaks the line without starting a new paragraph (keeping the topic the same)
\newline - breaks the line without starting a new paragraph (same effect as \\)
Table of contents
\begin{document}
\maketitle
\tableofcontents
\newpage
\chapter{Introduction}
\chapter{Literature study}
\section{Historical remarks}
\section{Modern development}
\chapter{Design proposal}
\appendix
\chapter{Additional information}
\end{document}
Big project management

Sometimes LaTeX source code can get pretty huge. That is why it is handy to separate it over several files, which can then be linked to the main file. The way to do this is the \input{filename} command. When the compiler sees it, it first processes the external file and then continues with the main file as if it were one whole document, meaning that no breaks in page and section numbering, referencing and labeling will occur. When you are interested in previewing only one part of the document, you can comment out the \input commands for the others. That will reduce the build time significantly.
Special characters
Note: there are some reserved characters, and using them directly in your text will most probably result in an error message from the compiler:
# $ % ^ & _ { } \
If you still want to use any of them, just put a backslash before it (for example \#, \$, \%, \&, \_, \{, \}; the caret and the backslash have the special forms \textasciicircum and \textbackslash).
Typesetting Math

The mathematical typesetting capabilities of LaTeX are extremely advanced. They present one of the main strengths of LaTeX. However, in order to unlock the full capability we need to use the AMS-LaTeX package. Adding the AMS-LaTeX package to your file can be done in the same way as adding any other extension package: just use \usepackage{amsmath} in the preamble.
\documentclass{report}
\usepackage{amsmath}
% Title Page
\title{Report Title}
\author{Author name}
\begin{document}
\maketitle
...
For example, $a^2 + b^2 = c^2$. The slides go on to show several numbered display equations, among them $7 = \frac{21}{3}$, $X_{avg} = \frac{X_1 + X_2 + X_3}{3}$ and $\lim_{x\to 0}\frac{\sin x}{x} = 1$.
Greek letters
$\alpha, \beta, \gamma, \omega, \psi, \eta, \theta, \mu, \nu, \delta$
render as the corresponding lowercase Greek letters. Some of the capital Greek letters can also be typed in LaTeX (e.g. $\Gamma, \Delta, \Theta, \Pi, \Omega$).
$$\sum_{k=0}^{\infty}\frac{(-1)^k\, z^{2k+1}}{(2k+1)!}=\sin z$$
(shown both inline and in display style, alongside an integral and an infinite product as further numbered examples).
Dots
$$ 2 \cdot 16\pi = 32\pi $$
$$ X_n = \{ x_1, ... , x_n \} $$
$$ X_n = \{ x_1, \ldots , x_n \} $$
$$ S_n = x_1 + \cdots + x_n $$
Typing the dots as "..." places them on the baseline, while \ldots gives properly spaced low dots and \cdots gives vertically centred dots: $X_n = \{x_1, \ldots, x_n\}$ and $S_n = x_1 + \cdots + x_n$.
Function names should be typeset with their dedicated commands rather than as plain letters (compare italic $cos$ with upright $\cos$):
$$\sin, \cos, \tan, \cot, \log, \ln, \arccos, \arcsin, \arctan, \exp, \min, \max, \ldots$$
which render as upright sin, cos, tan, cot, log, ln, arccos, arcsin, arctan, exp, min, max, ...
Bracketing

These brackets don't look good:
$$f(x) = (\frac{x^2 + 1}{x})(7 - x)$$
Using \left( and \right) makes the brackets scale with their contents:
$$f(x) = \left(\frac{x^2 + 1}{x}\right)(7 - x)$$
Equation numbering
Did you notice that some of the equations above had numbers on the right?
\begin{equation}
f(x) = \sqrt{x}(x^2 + 1)^3
\end{equation}
This numbering makes referencing easy, as we will see later. Note also that only display equations can have numbering. Removing the numbering of a given equation can be done in two ways:
Use the equation* (or any other starred) environment instead of equation
Put the \nonumber tag right after the equation (on the same line)
Floats

Floats are content that is considered separate from the main text. As such, they can float around the pages and position themselves at a suitable place. Another important aspect of floats is that (in the general case) they cannot be broken over two or more pages. Two types of objects are considered floats:
Figures
Tables
Figures
The basic code structure for including a figure (image) into a LaTeX document is the following:
\begin{figure}[position preference]
\centering
\includegraphics[width=somewidth]{path to image}
\caption{Some caption}
\end{figure}
Position preference: although LaTeX positions the figure at a place it considers optimal, you can still specify a preference. However, keep in mind that this is only a preference and LaTeX might not follow it.
h - Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at the spot).
t - Position at the top of the page.
b - Position at the bottom of the page.
p - Put on a special page for floats only.
! - Override internal parameters LaTeX uses for determining good float positions.
H - Places the float at precisely the location in the LaTeX code. Requires the float package (\usepackage{float}). This is somewhat equivalent to h!.
(Taken from https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions)
Width: this can be measured in any of the units LaTeX understands (e.g. cm, px, in). However, for many cases it is most convenient to specify the width of the picture as a percentage of the text width. For example:
\includegraphics[width=0.5\textwidth]{path to image}
Here we present an example of an imported picture:
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{./logo}
\caption{EUROAVIA logo}
\end{figure}
The caption of the figure can appear above the image if the \caption tag is placed before the \includegraphics tag.
Tables
An example table generated with TablesGenerator.com:
\begin{table}[b]
\centering
\caption{My caption}
\begin{tabular}{|ll|c|}
\hline
Column A & Column B & Column C \\ \hline
1 & 2 & abc \\
4 & 5 & def \\
7 & 8 & ghi \\ \hline
\end{tabular}
\end{table}
This produces "Table: My caption" with columns A (1, 4, 7), B (2, 5, 8) and C (abc, def, ghi).
Making sure that all the referencing and citing within your report is correct can be quite a tedious task if you type your document in a typical word processor. LaTeX incorporates a powerful, yet simple and flexible methodology to keep track of all the references and citations.
Referencing
Labels
References
Citing

Citing external sources can be done in a similar way. The simplest way to do this is to create a list of sources between the \begin{thebibliography} and \end{thebibliography} tags. Then, wherever in the main text a reference to a source is needed, use \cite{cite_label} in the same way as \ref.
\LaTeX{} is widely used for scientific papers. \cite{lamport94}
\begin{thebibliography}{9}
\bibitem{lamport94}
Leslie Lamport,
\emph{\LaTeX: a document preparation system},
Addison Wesley, Massachusetts,
2nd edition,
1994.
\end{thebibliography}
Additional resources

Wikipedia.org
CTAN.org
The Not So Short Introduction to LaTeX2e, Tobias Oetiker, https://tobi.oetiker.ch/lshort/lshort.pdf
Google
tex.stackexchange.com
|
Thermodynamics is the study of thermal, electrical, chemical, and mechanical forms of energy. The study of thermodynamics crosses many disciplines, including physics, engineering, and chemistry. Of the various branches of thermodynamics, the most important to chemistry is the study of the change in energy during a chemical reaction.
Consider, for example, the general equilibrium reaction shown in Equation \(\ref{6.1}\), involving the species A, B, C, and D, with stoichiometric coefficients
a, b, c, and d.
\[a\textrm A+b\textrm B\rightleftharpoons c\textrm C+d\textrm D\label{6.1}\]
For obvious reasons, we call the double arrow, ⇋, an equilibrium arrow.
By convention, we identify species on the left side of the equilibrium arrow as reactants, and those on the right side of the equilibrium arrow as products. As Berthollet discovered, writing a reaction in this fashion does not guarantee that the reaction of A and B to produce C and D is favorable. Depending on initial conditions, the reaction may move to the left, move to the right, or be in a state of equilibrium. Understanding the factors that determine the reaction’s final, equilibrium position is one of the goals of chemical thermodynamics.
The direction of a reaction is that which lowers the overall free energy. At a constant temperature and pressure, typical of many bench-top chemical reactions, a reaction’s free energy is given by the
Gibbs free energy function
\[\Delta G=\Delta H-T\Delta S\label{6.2}\]
where
\(T\) is the temperature in Kelvin and \(ΔG\), \(ΔH\), and \(ΔS\) are the differences in the Gibbs free energy, the enthalpy, and the entropy between the products and the reactants. Enthalpy is a measure of the flow of energy, as heat, during a chemical reaction. Reactions releasing heat have a negative Δ H and are called exothermic. Endothermic reactions absorb heat from their surroundings and have a positive Δ H. Entropy is a measure of energy that is unavailable for useful, chemical work. The entropy of an individual species is always positive and tends to be larger for gases than for solids, and for more complex molecules than for simpler molecules. Reactions producing a large number of simple, gaseous products usually have a positive Δ S.
The sign of Δ
G indicates the direction in which a reaction moves to reach its equilibrium position. A reaction is thermodynamically favorable when its enthalpy, Δ H, decreases and its entropy, Δ S, increases. Substituting the inequalities Δ H < 0 and Δ S > 0 into Equation \(\ref{6.2}\) shows that a reaction is thermodynamically favorable when Δ G is negative. When Δ G is positive the reaction is unfavorable as written (although the reverse reaction is favorable). A reaction at equilibrium has a Δ G of zero.
Equation \(\ref{6.2}\) shows that the sign of Δ
G depends on the signs of Δ H and Δ S, and the temperature, T. The following table summarizes the possibilities.
Δ H negative, Δ S positive: Δ G < 0 at all temperatures.
Δ H negative, Δ S negative: Δ G < 0 at low temperatures.
Δ H positive, Δ S positive: Δ G < 0 at high temperatures.
Δ H positive, Δ S negative: Δ G > 0 at all temperatures.
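For a concrete illustration (with made-up numbers), take Δ H = -100 kJ/mol and Δ S = -0.20 kJ/(mol·K); then
\[\Delta G=\Delta H-T\Delta S=-100\ \textrm{kJ/mol}+\left(0.20\ \tfrac{\textrm{kJ}}{\textrm{mol}\cdot\textrm{K}}\right)T,\]
which is negative below \(T=\Delta H/\Delta S=500\) K and positive above it, matching the second case above.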
As a reaction moves from its initial, non-equilibrium condition to its equilibrium position, the value of Δ
G approaches zero. At the same time, the chemical species in the reaction experience a change in their concentrations. The Gibbs free energy, therefore, must be a function of the concentrations of reactants and products.
As shown in Equation \(\ref{6.3}\), we can split the Gibbs free energy into two terms.
\[\Delta G=\Delta G^\circ+RT\ln Q\label{6.3}\]
The first term, Δ
G°, is the change in Gibbs free energy when each species in the reaction is in its standard state, which we define as follows: gases with partial pressures of 1 atm, solutes with concentrations of 1 mol/L, and pure solids and pure liquids. The second term, which includes the reaction quotient, Q, accounts for non-standard state pressures or concentrations. For reaction \(\ref{6.1}\), the reaction quotient is
\[Q=\dfrac{[\textrm C]^c[\textrm D]^d}{[\textrm A]^a[\textrm B]^b}\label{6.4}\]
where the terms in brackets are the concentrations of the reactants and products. Note that we define the reaction quotient with the products in the numerator and the reactants in the denominator. In addition, we raise the concentration of each species to a power equivalent to its stoichiometry in the balanced chemical reaction. For a gas, we use partial pressure in place of concentration. Pure solids and pure liquids do not appear in the reaction quotient.
Note
Although not shown here, each concentration term in Equation \(\ref{6.4}\) is divided by the corresponding standard state concentration; thus, the term \([\textrm C]^c\) really means
\[\mathrm{\left\{\dfrac{[C]}{[C]^o}\right\}}^c\]
where \([\textrm C]^o\) is the standard state concentration for C. There are two important consequences of this: (1) the value of Q is unitless; and (2) the ratio has a value of 1 for a pure solid or a pure liquid. This is the reason that pure solids and pure liquids do not appear in the reaction quotient.
At equilibrium the Gibbs free energy is zero, and Equation \(\ref{6.3}\) simplifies to
\[\Delta G^\circ=-RT\ln K\]
where
K is an equilibrium constant that defines the reaction’s equilibrium position. The equilibrium constant is just the numerical value of the reaction quotient, Q, when substituting equilibrium concentrations into Equation \(\ref{6.4}\) .
\[K=\mathrm{\dfrac{[C]_{eq}^\mathit c[D]_{eq}^\mathit d}{[A]_{eq}^\mathit a[B]_{eq}^\mathit b}}\label{6.5}\]
Here we include the subscript “eq” to indicate a concentration at equilibrium. Although we usually will omit the “eq” when writing equilibrium constant expressions, it is important to remember that the value of
K is determined by equilibrium concentrations.
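As a quick numerical sketch (the values are made up for illustration), the relation \(\Delta G^\circ=-RT\ln K\) converts a standard-state free energy change into an equilibrium constant:

import math

R = 8.314   # J/(mol*K)
T = 298.15  # K

def equilibrium_constant(delta_g_standard_joules):
    # K = exp(-dG0 / (R*T)); a negative dG0 gives K > 1, i.e. products are favoured
    return math.exp(-delta_g_standard_joules / (R * T))

print(equilibrium_constant(-10_000))   # dG0 = -10 kJ/mol  ->  K ≈ 56.6
print(equilibrium_constant(+10_000))   # dG0 = +10 kJ/mol  ->  K ≈ 0.018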
As written, Equation \(\ref{6.5}\) is a limiting law that applies only to infinitely dilute solutions where the chemical behavior of one species is unaffected by the presence of other species. Strictly speaking, Equation \(\ref{6.5}\) should be written in terms of activities instead of concentrations. We will return to this point in Section 6.9. For now, we will stick with concentrations as this convention is already familiar to you.
|
I am trying to compute a similarity measure between the orientation of 3D objects accounting for symmetry invariance. I have a set of 3D objects, which are defined by a center of mass, and 3 unit vectors associated to the orientation of this object such that $vec_1$ defines the orientation of the main axis, $vec_2$ the orientation of the second axis and $vec_3$ the orientation of the third axis.
I would like to compare the objects orientations by comparing their reference frame defined by these vectors. Quaternions seemed to be a nice way to do so, and I saw already several questions about it which helped me getting started.
In order to measure the relative rotation between two orientations, I first define the rotation matrices using the orientation vectors sorted by importance
$$ R_{obj_i} = \begin{bmatrix}vec_{1_i} \\ vec_{2_i} \\ vec_{3_i}\end{bmatrix} $$
and I derive the corresponding "reference" quaternions $q_{obj_i}$ of each object. I found that one way to measure the relative angle $\theta$ between two quaternions $q_{obj_i}$ and $q_{obj_j}$ is to compute $$ cos(\theta) = 2⟨q_{obj_i},q_{obj_j}⟩^2−1 $$ where $⟨q_{obj_i},q_{obj_j}⟩$ denotes the dot product between the two quaternions. However this method does not account for the fact that several similar orientations have distinct quaternions. In my case, I could define 3 others quaternions using the rotation matrices:
$$ \begin{bmatrix}-vec_{1_i} \\ -vec_{2_i} \\ vec_{3_i}\end{bmatrix},\quad \begin{bmatrix}-vec_{1_i} \\ vec_{2_i} \\ -vec_{3_i}\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}vec_{1_i} \\ -vec_{2_i} \\ -vec_{3_i}\end{bmatrix} $$
which should be considered the same orientation as the first one. In order to account for this, I changed the measure of the relative angle $\theta$ to $$ \cos(\theta) = \max_{k}\left( 2⟨p_k\times q_{obj_i},q_{obj_j}⟩^2−1 \right) $$
where $p_k$ are the quaternions defined with the coordinates (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1).
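If it helps, here is a sketch of that procedure in code (my own helper, not an established routine; it uses scipy's Rotation class, and the side on which the flip multiplies depends on whether your matrices map body to world or world to body):

import numpy as np
from scipy.spatial.transform import Rotation as R

def symmetry_aware_cos(R_i, R_j):
    # cos(theta) = max_k ( 2*<p_k q_i, q_j>^2 - 1 ), where the p_k are the identity
    # and the three 180-degree flips about the object's axes
    flips = [R.identity(),
             R.from_rotvec([np.pi, 0.0, 0.0]),
             R.from_rotvec([0.0, np.pi, 0.0]),
             R.from_rotvec([0.0, 0.0, np.pi])]
    q_j = R.from_matrix(R_j).as_quat()                 # scipy stores quaternions as (x, y, z, w)
    best = -1.0
    for p in flips:
        q_i = (p * R.from_matrix(R_i)).as_quat()       # left-composition matches negating rows of R_obj
        best = max(best, 2.0 * float(np.dot(q_i, q_j)) ** 2 - 1.0)
    return best

A = R.random().as_matrix()                             # a random orthonormal frame
flipped = np.diag([-1.0, -1.0, 1.0]) @ A               # same object with vec_1 and vec_2 negated
print(symmetry_aware_cos(A, A), symmetry_aware_cos(A, flipped))   # both approximately 1.0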
My problems are:
1. I am unsure about my method. Is there a simpler way to perform this kind of analysis?
2. Using only a uniform random distribution of quaternions, I found that $Average(\cos(\theta)) \approx 0.2357$. I have been trying to figure out where this comes from using this formalism (hyperspherical coordinates). In the simplest case, not accounting for any symmetry, $Average(\cos(\theta))$ should be $$ Average(\cos(\theta)) = \frac{1}{2\pi^2}\int_0^\pi\int_0^\pi\int_0^{2\pi} \cos(2\alpha)\sin^2(\alpha)\sin(\beta)\ d\gamma\ d\beta\ d\alpha = -0.5 $$ I have been trying to reduce the 4-space to obtain 0.235, however I am not sure that this is the right way to proceed.
Any help would be really appreciated.
|
Okay, so I understand that the equation is a downwards facing parabola with a y-intercept at 12. I don't understand what it means by upper vertices? I know that the answer is 32 but I don't understand how to get there. Can someone please guide me and explain to me the process of solving this problem? Thank you!
Let $t_1$ and $t_2$ be the abscissas of the lower vertices $(t_2>t_1)$. The upper vertices clearly have equal ordinates, so $$12-t_1^2=12-t_2^2\iff t_1=-t_2\quad\text{since}\; t_1\ne t_2,$$ and the area of the rectangle is $$(t_2-t_1) (12-t_2^2)=2t_2(12-t_2^2),$$ hence to answer the question we should maximize the function $$f(t)=t(12-t^2),$$ and since $$f'(t)=12-t^2-2t^2=12-3t^2=0\iff t=\pm2,$$ we see easily that $t_2=2$ and the area is $$2f(2)=32.$$
Hint: Draw a picture of the downward-facing parabola, and of a rectangle of the type described.
Let $(x,y)$ be the upper right-hand corner of the rectangle. Then by symmetry the base of the rectangle has length $2x$, and the height is $y$, that is, $12-x^2$.
So the area $A(x)$ of the rectangle is given by $$A(x)=2x(12-x^2).$$ Maximize, using the usual tools. Note that we must have $0\le x\le \sqrt{12}$.
The rectangle has $4$ vertices, now the lower two are on the $x$-axis, and the upper two are on a parallel line to the $x$-axis, say at height $h$.
Then, find the $x$ coordinates of the vertices using $h$: the upper vertices are on the parabola at height $y=h$, so the $x$ coordinates satisfy $h=12-x^2$, i.e. $$x^2\ =\ 12-h$$ Then you get two solutions, $x_1$ and $x_2$; finally, maximize the area of the rectangle ($h\cdot(x_2-x_1)$) with respect to $h$.
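A symbolic check of the answer (a sympy sketch following the setup of the second answer above):

import sympy as sp

x = sp.symbols('x', positive=True)
area = 2*x*(12 - x**2)                  # base 2x, height 12 - x^2
crit = sp.solve(sp.diff(area, x), x)    # critical point of the area
print(crit, area.subs(x, crit[0]))      # [2] 32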
|