Peretto, Nicolas, Fuller, G., Zijlstra, A. and Patel, N. 2007. The massive expanding molecular torus in the planetary nebula NGC 6302. Astronomy and Astrophysics 473 (1), pp. 207-217. doi:10.1051/0004-6361:20066973
Abstract
Aims. We measure the mass and kinematics of the massive molecular torus in the prototypical butterfly planetary nebula NGC 6302. Determining the mass-loss history of the source is an important step in understanding the origin and formation of the wing-like morphology. Methods. Using the SMA interferometer we have imaged both the continuum emission and the J=2-1 transitions of 12CO and 13CO at arcsecond resolution. These data are analysed in combination with observations of both the J=2-1 and J=3-2 transitions of 12CO and 13CO made with the JCMT. Results. The 12CO and 13CO emission matches the dark lane seen in absorption in the H$\alpha$ image of the object and traces an expanding torus of material. The CO indicates a torus mass of $2 \pm 1\,M_\odot$. The torus is expanding with a velocity of ~8 km s$^{-1}$, centred at $V_{\rm lsr}=-31.5\rm\,km\,s^{-1}$. The size and expansion velocity of the torus indicate that it was ejected between ~7500 yr and 2900 yr ago, with a mass-loss rate of $5\times10^{-4}\,M_\odot\,\rm yr^{-1}$. In addition we detect a ballistic component in the CO images which has a velocity gradient of 140 km s$^{-1}$ pc$^{-1}$. Conclusions. The derived mass-loss history of the torus favours binary interaction as the cause of the ejection of the torus, and we predict the existence of a companion with an orbital period $P\lesssim 1\,$month.
Item Type: Article
Date Type: Publication
Status: Published
Schools: Physics and Astronomy
Subjects: Q Science > QB Astronomy
Uncontrolled Keywords: stars: AGB and post-AGB; planetary nebulae: general; planetary nebulae: individual: NGC 6302
Additional Information: PDF uploaded in accordance with publisher's policy at http://www.sherpa.ac.uk/romeo/issn/0004-6361/ (accessed 17/04/2014)
Publisher: EDP Sciences
ISSN: 0004-6361
Date of First Compliant Deposit: 30 March 2016
Last Modified: 04 Jun 2017 06:05
URI: http://orca-mwe.cf.ac.uk/id/eprint/56282
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go
oh well, "what would Cotton Mather do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir relax im calm we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
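A small sketch of the point behind the question above (my own illustration, with a hand-rolled polynomial arithmetic helper): the reduction polynomial $M(x)$ is essential, because the coordinate representation of a product depends on which monic irreducible you quotient by.

```python
# Sketch: multiplication in GF(p^n) represented as Z/pZ[x]/(M(x)).
# polymul_mod is a hypothetical helper written for this example.

def polymul_mod(a, b, M, p):
    """Multiply polynomials a, b (coefficient lists, lowest degree first)
    over Z/pZ, then reduce modulo the monic polynomial M."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    n = len(M) - 1
    # reduce: repeatedly subtract (leading coeff) * x^(deg-n) * M; M is monic
    while len(prod) > n:
        lead = prod.pop()
        for k in range(n):
            prod[-n + k] = (prod[-n + k] - lead * M[k]) % p
    return prod + [0] * (n - len(prod))

# GF(4) = F_2[x]/(x^2 + x + 1); an element [c0, c1] means c0 + c1*x
M = [1, 1, 1]          # x^2 + x + 1
x = [0, 1]
print(polymul_mod(x, x, M, 2))         # x * x = x^2 = x + 1 -> [1, 1]
print(polymul_mod([1, 1], [1, 1], M, 2))  # (x+1)^2 = x^2 + 1 = x -> [0, 1]
```

With a different irreducible $M$ (when one exists) the field is isomorphic, but the coefficient vector of the product changes, which is exactly why the product cannot be read off without knowing $M$.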
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
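The two facts above are easy to check by machine; here is a quick sketch (the helper functions `norm` and `is_rational_prime` are my own, not from the chat):

```python
# The norm on Z[i]: N(a + bi) = a^2 + b^2, the product of a + bi with its conjugate.
def norm(a, b):
    return a * a + b * b

def is_rational_prime(n):
    """Naive trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# N(1 + 2i) = 5 is a rational prime, so 1 + 2i is a Gaussian prime.
print(norm(1, 2), is_rational_prime(norm(1, 2)))
# 5 itself splits in Z[i]: 5 = (1 + 2i)(1 - 2i), so 5 is not a Gaussian prime.
print((1 + 2j) * (1 - 2j))   # (5+0j)
# total multiplicativity: N(alpha * beta) = N(alpha) N(beta), e.g. (1+2i)(2+i) = 5i
print(norm(0, 5) == norm(1, 2) * norm(2, 1))   # True
```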
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = \Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go down that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
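A worked instance of this trick, using the classical example $\alpha = \sqrt[3]{2}$ (my choice of example, not part of the original discussion):

\[
\begin{aligned}
&\alpha=\sqrt[3]{2},\qquad f(x)=x^3-2 \text{ is 2-Eisenstein},\qquad \Delta(\Bbb Z[\alpha])=-108=-2^2\cdot 3^3,\\
&\text{so } 2\nmid[\mathcal O_K:\Bbb Z[\alpha]].\\
&\text{For } 1+\alpha:\quad f(x-1)=(x-1)^3-2=x^3-3x^2+3x-3 \text{ is 3-Eisenstein},\\
&\text{so } 3\nmid[\mathcal O_K:\Bbb Z[1+\alpha]]=[\mathcal O_K:\Bbb Z[\alpha]].\\
&\text{Since } \Delta(\Bbb Z[\alpha])=[\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)\text{, only 2 and 3 could divide the index,}\\
&\text{hence } [\mathcal O_K:\Bbb Z[\alpha]]=1 \text{ and } \mathcal O_K=\Bbb Z[\sqrt[3]{2}].
\end{aligned}
\]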
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial $f$ of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit, you should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general-purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
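The "number surviving exceeds unity on average" condition above is the classic branching-process threshold; a toy simulation sketch (my own illustration: each cell divides in two, and each daughter independently survives with probability s, so the mean offspring count is 2s):

```python
# Toy branching-process sketch: exponential growth iff mean offspring 2s > 1.
import random

def simulate(s, generations, start=100, seed=1):
    random.seed(seed)
    n = start
    for _ in range(generations):
        # each of the n cells divides into 2 daughters; each survives w.p. s
        n = sum(1 for _ in range(2 * n) if random.random() < s)
    return n

print(simulate(0.75, 10))  # 2s = 1.5 > 1: the population grows
print(simulate(0.40, 10))  # 2s = 0.8 < 1: the population dies out
```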
As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no groups will generate creativity and other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years
Well I guess the lesson you need to learn here, champ, is that online interaction isn't something that was built into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next-door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AIs may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power become so strong that they can outdo human thinking
But, I am not worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough, given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, without paying enough attention to whether it has interpreted our instructions correctly
That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($x(xy)=(xx)y$ and $(yx)x=y(xx)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave. I.e. $a(bc)=(ab)c\iff a=c$)
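One cheap way to get a feel for how much weaker flexibility is than associativity is brute force over tiny magmas; a sketch (my own, counting all 16 binary operations on a two-element set):

```python
# Enumerate all binary operations on {0,1}; count how many are flexible
# ((x*y)*x == x*(y*x)) and how many are fully associative.
from itertools import product

S = (0, 1)
flexible = associative = both = 0
for table in product(S, repeat=4):              # table[2*x + y] = x * y
    op = lambda x, y, t=table: t[2 * x + y]
    flex = all(op(op(x, y), x) == op(x, op(y, x)) for x in S for y in S)
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x in S for y in S for z in S)
    flexible += flex
    associative += assoc
    both += flex and assoc
print(flexible, associative, both)
```

Already on two elements, flexibility is strictly weaker than associativity (every associative table is flexible, but not conversely), so "strictly flexible" behaviour is genuinely new information about the operation.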
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
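Expanding through the connection works; a sketch of the Christoffel-symbol route (not the Koszul route from the question) for a concrete diagonal metric, the round 2-sphere with $g = \mathrm{diag}(1, \sin^2\theta)$, where $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle = \sin^2\theta$, i.e. sectional curvature 1 (this assumes sympy is available):

```python
# Compute <R(d1,d2)d2, d1> for the round 2-sphere via Christoffel symbols.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])
ginv = g.inv()

def Gamma(k, i, j):
    # Christoffel symbols of the Levi-Civita connection of g
    return sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
               - sp.diff(g[i, j], x[l])) for l in range(2)) / 2

def R(l, i, j, k):
    # l-th component of R(d_i, d_j) d_k (sign conventions vary by source)
    expr = sp.diff(Gamma(l, j, k), x[i]) - sp.diff(Gamma(l, i, k), x[j])
    expr += sum(Gamma(l, i, m) * Gamma(m, j, k)
                - Gamma(l, j, m) * Gamma(m, i, k) for m in range(2))
    return sp.simplify(expr)

# <R(d1,d2)d2, d1> = sum_l g_{l,1} R^l with (i,j,k) = (1,2,2) (0-indexed below)
val = sp.simplify(sum(g[l, 0] * R(l, 0, 1, 1) for l in range(2)))
print(val)
```

For an orthonormal coordinate frame at a point this is usually less error-prone than expanding Koszul's formula by hand, though both amount to the same computation.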
I realized today that the possible x inputs to Round(x^(1/2)) covers x^(1/2+epsilon). In other words we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time have Round(x^(1/2))=Round(x^(1/2+epsilon)). Am I right?
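Numerically the claim checks out for a fixed x, as long as √x is not exactly on a rounding boundary (a half-integer); a quick sketch:

```python
# For fixed x, a small enough epsilon changes the exact value of the root
# but not its rounded value.
x = 10.0
eps = 1e-9
a, b = x ** 0.5, x ** (0.5 + eps)
print(a != b, round(a) == round(b))   # True True
```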
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$ ?
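Exactly: implicitness means $f^{n+2} = f(t_{n+2}, y^{n+2})$ contains the unknown $y^{n+2}$, so for a general nonlinear $f$ each step requires solving an equation, e.g. by fixed-point iteration or Newton's method. A sketch for $y' = -y$, $y(0)=1$ (my own illustration; I seed $y^1$ with the exact value, where in practice a one-step starter method would be used):

```python
# Implicit Simpson (Milne) step solved by fixed-point iteration each step.
import math

def f(t, y):
    return -y

h, N = 0.01, 200
t = [n * h for n in range(N + 1)]
y = [1.0, math.exp(-h)]              # y^0 given, y^1 from a starter method
for n in range(N - 1):
    rhs = y[n] + h / 3 * (4 * f(t[n + 1], y[n + 1]) + f(t[n], y[n]))
    z = y[n + 1]                     # initial guess for y^{n+2}
    for _ in range(50):              # solve z = rhs + (h/3) f(t^{n+2}, z)
        z = rhs + h / 3 * f(t[n + 2], z)
    y.append(z)
print(abs(y[N] - math.exp(-t[N])))   # small global error
```

The inner iteration converges here because its contraction factor is about $h/3$; for stiff problems one would use Newton's method instead.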
@anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues |
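The definition just given translates directly to a few lines of numpy (my own sketch):

```python
# Graph energy: sum of absolute values of the adjacency-matrix eigenvalues.
import numpy as np

def graph_energy(adj):
    eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.abs(eigenvalues)))

# Complete graph K_3: eigenvalues 2, -1, -1, so the energy is 4.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(graph_energy(K3))   # 4.0
```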
Rocky Mountain Journal of Mathematics, Volume 44, Number 2 (2014), 521-529.
Algebraic polynomials with symmetric random coefficients
Abstract
This paper provides an asymptotic estimate for the expected number of real zeros of algebraic polynomials $P_n (x)=a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1}x^{n-1}$, where the $a_j$'s ($j=0,1,2,\cdots,n-1$) are a sequence of independent standard normal random variables with the symmetric property $a_j \equiv a_{n-1-j}$. It is shown that the expected number of real zeros in this case still remains asymptotic to $(2/\pi)\log n$. In a previous study it was shown that, for random trigonometric polynomials, this expected number of real zeros is halved when the above symmetry is assumed.
Article information
Source: Rocky Mountain J. Math., Volume 44, Number 2 (2014), 521-529.
Dates: First available in Project Euclid: 4 August 2014
Permanent link: https://projecteuclid.org/euclid.rmjm/1407154912
Digital Object Identifier: doi:10.1216/RMJ-2014-44-2-521
Mathematical Reviews number (MathSciNet): MR3240512
Zentralblatt MATH identifier: 1296.60137
Citation
Farahmand, K.; Gao, Jianliang. Algebraic polynomials with symmetric random coefficients. Rocky Mountain J. Math. 44 (2014), no. 2, 521--529. doi:10.1216/RMJ-2014-44-2-521. https://projecteuclid.org/euclid.rmjm/1407154912
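The paper's setting is easy to explore by Monte Carlo (a sketch of the setup only, not of the proof; the tolerance for calling a numerically computed root "real" is my own choice):

```python
# Draw standard normal coefficients with the palindromic symmetry
# a_j = a_{n-1-j}, count real zeros numerically, and compare the average
# with the asymptotic (2/pi) log n.
import numpy as np

def mean_real_zeros(n, trials=200, seed=0, tol=1e-6):
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(trials):
        half = rng.standard_normal(n // 2)
        coeffs = np.concatenate([half, half[::-1]])  # a_j = a_{n-1-j}, n even
        roots = np.roots(coeffs)
        counts.append(int(np.sum(np.abs(roots.imag) < tol)))
    return float(np.mean(counts))

n = 100
avg = mean_real_zeros(n)
print(avg, 2 / np.pi * np.log(n))
```

The agreement is only rough at such small n (the result is asymptotic), but the average sits near $(2/\pi)\log n$ rather than near half of it, in line with the paper's contrast with the trigonometric case.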
In the previous article, we discussed the camera transformation which maps a vertex from world space into the camera space. Recall that the camera spans an orthonormal coordinate system with the three vectors \(\vec{u}\), \(\vec{v}\) and \(\vec{w}\), where \(-\vec{w}\) points along the viewing direction.
In this section we will deal with the projection of a 3D vertex in camera space onto a 2D view plane. In OpenGL, what follows is clipping and mapping to so-called normalized device coordinates, which are tightly coupled to the construction of the projection matrix. In fact, the pure mathematical construction of the projection matrix is easy. What makes it difficult is the clipping part.
As you may remember from school, there are two important kinds of projection:
orthogonal projection, and perspective projection.
We will cover both successively.
Orthographic Projection Matrix
Orthogonal projection itself is pretty straightforward; you simply dismiss the depth coordinate (the component along \(\vec{w}\), i.e. the \(z\)-coordinate in camera space) for the projected 2D point. By that, you discard the depth information. So a pure orthogonal projection matrix looks like
\[ P_{ortho} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]
Note that it looks almost like the identity matrix, except that the 1 in the 3rd dimension is replaced by 0. The 0 causes the 3rd dimension to be zeroed out when the matrix is applied to a point.
However, OpenGL needs to clip triangles that lie outside of the viewing frustum. So it requires some means to quickly determine whether a given vertex lies outside the clipping volume or inside. The general idea is to map the viewing volume to a box such that its faces range from -1 to +1 for all three coordinate axes. More concretely, it is scaled to the unit cube which is defined by having a minimum corner at (-1,-1,-1) and a maximum corner at (1,1,1). The faces are called
clipping planes and are defined as \(left, right, top, bottom, far\) and \(near\). This new coordinate system is referred to as clip space. The clip space (i.e. the unit cube) is centered around the line of sight (negative z axis) of the camera space.
Once a point is in clip space, OpenGL checks whether each coordinate falls in the range between -1 and +1. If not, the point is marked as lying outside clip space and discarded.
Once clipping has been performed, OpenGL takes the last dimension of the point in clip space and divides all other dimensions by it. This is a valid operation in homogeneous coordinates. Recall that in homogeneous coordinates a vector \((x, y, z, \omega)\) carries an additional dimension \(\omega\), usually set to 1. Since \(\omega\) is usually 1, it does not affect the vector. This division basically puts the point into
normalized device coordinates (NDC) from where it is further processed towards the fragment shader.
Clipping and NDC conversion is internal in OpenGL and kinda trivial. What’s not so trivial is the projection part in combination with the mapping to clip space.
To translate the clip space to center around the origin, we subtract \(\frac{left+right}{2}\) (the mid-point between \(right\) and \(left\)) from the point. Then we scale it to fit in the range -1 to +1 with respect to \(right-left\), i.e. we scale the point by \(\frac{1-(-1)}{right-left}\). We do the same for the y and z coordinates. We now express the translation \(T\) and scaling \(S\) operations in one matrix multiplication.
\( ST = \begin{pmatrix} \frac{2}{right-left} & 0 & 0 & 0 \\ 0 & \frac{2}{top-bottom} & 0 & 0 \\ 0 & 0 & \frac{2}{far-near} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & -\frac{left+right}{2} \\ 0 & 1 & 0 & -\frac{top+bottom}{2} \\ 0 & 0 & 1 & -\frac{far+near}{2} \\ 0 & 0 & 0 & 1 \end{pmatrix}\)
which yields
\( P = \begin{pmatrix} \frac{2}{right-left} & 0 & 0 & -\frac{right+left}{right-left} \\ 0 & \frac{2}{top-bottom} & 0 & -\frac{top+bottom}{top-bottom} \\ 0 & 0 & \frac{2}{far-near} & -\frac{far+near}{far-near} \\ 0 & 0 & 0 & 1 \end{pmatrix}\)
If we apply this matrix to a vector \(\vec{a}=(a_x,a_y,a_z,1)^{\top}\), we get
\( P\vec{a}= \begin{pmatrix} \frac{2a_x}{right-left} - \frac{right+left}{right-left} \\ \frac{2a_y}{top-bottom}-\frac{top+bottom}{top-bottom} \\ \frac{2a_z}{far-near} - \frac{far+near}{far-near} \\ 1 \end{pmatrix} \)
A quick check with \(a_x = left, a_y = top, a_z=near\) (a corner of the viewing volume) assures us that \(P\) scales and translates into clip space correctly. Computing \(P\vec{a}\) becomes
\( \begin{pmatrix} \frac{2left-right-left}{right-left} \\ \frac{2top-top-bottom}{top-bottom} \\ \frac{2near-far-near}{far-near} \\ 1 \end{pmatrix}= \begin{pmatrix} \frac{left-right}{-1(left-right)} \\ \frac{top-bottom}{top-bottom} \\ \frac{near-far}{-1(near-far)} \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1\end{pmatrix}\)
which maps the frustum boundaries to the correct clip space boundaries.
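The same check is easy to do numerically; a sketch using numpy as a stand-in for whatever matrix library you use (the boundary values are arbitrary):

```python
# Sanity check of the orthographic matrix P derived above: the minimum
# corner (left, bottom, near) must map to (-1, -1, -1).
import numpy as np

l, r, b, t, n, f = -2.0, 4.0, -1.0, 3.0, 1.0, 11.0
P = np.array([
    [2 / (r - l), 0, 0, -(r + l) / (r - l)],
    [0, 2 / (t - b), 0, -(t + b) / (t - b)],
    [0, 0, 2 / (f - n), -(f + n) / (f - n)],
    [0, 0, 0, 1],
])
mapped = P @ np.array([l, b, n, 1.0])
print(mapped)   # [-1. -1. -1.  1.]
```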
It is important to understand the transformation of a point into clip space for the orthographic case, because we are going to use the same principle for the perspective projection case.
Perspective Projection Matrix
The way to simulate perspective viewing is making use of so-called
foreshortening. It basically means that points converge closer towards the center of projection the further away they are from the camera. So let \(x_p\) and \(y_p\) be the 2D coordinates on the projection plane located at \(z=-near\) (the camera looks along the negative \(z\)-axis, so \(-z > 0\) for points in front of it). With similar triangles,
\[ \frac{x_p}{near}=\frac{x}{-z} \hspace{2 mm} \rightarrow \hspace{2 mm} x_p=x\frac{near}{-z}\]
and
\[ \frac{y_p}{near}=\frac{y}{-z} \hspace{2 mm} \rightarrow \hspace{2 mm} y_p=y\frac{near}{-z}\]
So much for simulating foreshortening. But we still need to deal with clipping. In the perspective case, the viewing volume is not a simple box but a frustum, and the mapping to the unit cube needs to take the foreshortening into account. So we need to use the above foreshortening equations in our clip space mapping.
We already know how to scale and translate a point into clip space, so filling the perspective projection formulas into the clip space mapping formulas from the above matrix \(P\vec{a}\) we get the following equations
\[ \begin{aligned} a_x = &\frac{2 x \frac{near}{-z}}{right-left} -\frac{right+left}{right-left} \\ a_y = &\frac{2 y \frac{near}{-z}}{top-bottom} - \frac{top+bottom}{top-bottom} \end{aligned}\]
The problem is that \(a_x\) and \(a_y\) depend on \(z\), which keeps us from setting up a simple matrix-vector multiplication as we did in the orthographic case. Instead, we are going to apply an extremely awesome trick: we scale all factors by \(-z\) and wait until the perspective divide, where OpenGL automatically divides it out again. This is what the perspective divide and normalized device coordinates are about; recall that for homogeneous coordinates the following holds
\[ \begin{pmatrix}a_x \\ a_y \\ a_z \\ a_{\omega} \end{pmatrix} = \begin{pmatrix}a_x /a_{\omega} \\ a_y/a_{\omega} \\ a_z/a_{\omega} \\ 1 \end{pmatrix}\]
That means we need to restructure all projection mappings such that they can be divided by \(-z\) later, and we need the last dimension \(\omega\) of the vertex to become \(-z\).
We first deal with restructuring \(a_x\) to become divisible by \(-z\)
\[ \begin{aligned} a_x &= \frac{2 x \frac{near}{-z}}{right-left} -\frac{right+left}{right-left} \\ & = \frac{2x \times near}{-z(right-left)} -\frac{-z(right+left)}{-z(right-left)} \\ &= x \frac{\frac{2near}{right-left}}{-z} + \frac{z\frac{right+left}{right-left}}{-z}\end{aligned}\]
Similarly, we restructure \(a_y\) in the same manner
\[ \begin{aligned} a_y = y \frac{\frac{2near}{top-bottom}}{-z} +\frac{z\frac{top+bottom}{top-bottom}}{-z}\end{aligned}\]
To see where we are going, let us take a first look at the final projection matrix. Note how the division by \(-z\) is absent from the matrix itself; instead, the last row contains \(-1\) in the \(z\) column, so that after the matrix-vector multiplication \(a_{\omega}\) is set to \(-z\) and the division happens later, during the perspective divide.
\[ \begin{pmatrix} \frac{2near}{right-left} & 0 & \frac{right+left}{right-left} & 0\\ 0 & \frac{2near}{top-bottom} & \frac{top+bottom}{top-bottom} & 0 \\ 0 & 0 & d & q \\ 0 & 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1\end{pmatrix}\]
We have not said anything yet about the \(z\) coordinate, denoted by \(d\) and \(q\) in the matrix above. Deriving the mapping equation from the matrix and considering the later perspective divide, we get
\[ a_z = \frac{d \times z+q}{-z}\]
Similar to what we did in the orthographic case, we fill in the clip space boundaries (\(z=-near \mapsto -1\) and \(z=-far \mapsto 1\)) so that we get two equations in two unknowns.
\[ \begin{aligned} \frac{-d \times near+q}{near} & = -1 & \rightarrow & \quad d = \frac{q}{near}+1 \\ \frac{-d\times far+q}{far} &= 1 & \rightarrow & \quad q = far(1+d) \end{aligned} \]
By solving for \(d\) and \(q\) (which you can do easily by hand), we get
\[ \begin{aligned} d = & -\frac{far+near}{far-near} \\ q = & -\frac{2 \times far \times near}{far-near}\end{aligned}\]
so that we are finally done and can complete the projection matrix
\[ \begin{pmatrix} \frac{2near}{right-left} & 0 & \frac{right+left}{right-left} & 0\\ 0 & \frac{2near}{top-bottom} & \frac{top+bottom}{top-bottom} & 0 \\ 0 & 0 & -\frac{far+near}{far-near} & -\frac{2 \times far \times near}{far-near} \\ 0 & 0 & -1 & 0 \end{pmatrix} \]
We are done. Yay! Now you understand the projection matrix and how to set it up.
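As a final sanity check, the completed matrix can be verified numerically (again with numpy standing in for your matrix library): a corner of the near plane, e.g. \((left, bottom, -near)\), should land on \((-1,-1,-1)\) after the perspective divide.

```python
# Verify the perspective projection matrix derived above.
import numpy as np

l, r, b, t, n, f = -1.0, 1.0, -1.0, 1.0, 1.0, 10.0
P = np.array([
    [2 * n / (r - l), 0, (r + l) / (r - l), 0],
    [0, 2 * n / (t - b), (t + b) / (t - b), 0],
    [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
    [0, 0, -1, 0],
])
clip = P @ np.array([l, b, -n, 1.0])   # the camera looks along -z
ndc = clip / clip[3]                   # the perspective divide
print(ndc)   # [-1. -1. -1.  1.]
```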
OpenGL obviously does much more than just clipping and mapping to normalized device coordinates, but this post is just about the projection matrix. In my humble opinion, once you understand the viewing and projection matrix, the rest will not be difficult and, honestly, is not that important.
Code Examples
Here is some code that will help you set up your own projection matrix.
/**
 * Constructs a projection matrix.
 * @param left   the left-hand side clipping plane in camera space
 * @param right  the right-hand side clipping plane in camera space
 * @param bottom the lower clipping plane in camera space
 * @param top    the upper clipping plane in camera space
 * @param near   the closer clipping plane in camera space
 * @param far    the clipping plane further away in camera space
 */
public Matrix4f perspectiveFrustum(float left, float right, float bottom,
                                   float top, float near, float far) {
    Matrix4f projection = new Matrix4f();
    // note the signature: set(COLUMN, ROW, value)
    // it may be different in the matrix implementation that you use
    projection.set(0, 0, (2f * near) / (right - left));
    projection.set(2, 0, (right + left) / (right - left));
    projection.set(1, 1, (2f * near) / (top - bottom));
    projection.set(2, 1, (top + bottom) / (top - bottom));
    projection.set(2, 2, -(far + near) / (far - near));
    projection.set(3, 2, -2f * (far * near) / (far - near));
    projection.set(2, 3, -1);
    projection.set(3, 3, 0);
    return projection;
}
It is usually more intuitive to set up the projection matrix with a field-of-view angle.
/**
 * Constructs a projection matrix out of a field-of-view angle.
 * @param viewAngle field-of-view angle in degrees
 * @param width the width of the screen in camera space
 * @param height the height of the screen in camera space
 * @param nearClippingPlaneDistance the near clipping plane (projection plane)
 * @param farClippingPlaneDistance the far clipping plane
 */
public Matrix4f projection(float viewAngle, float width, float height,
                           float nearClippingPlaneDistance,
                           float farClippingPlaneDistance) {
    // convert angle from degrees to radians
    final float radians = (float) (viewAngle * Math.PI / 180f);
    float halfHeight = (float) (Math.tan(radians / 2) * nearClippingPlaneDistance);
    float halfScaledAspectRatio = halfHeight * (width / height);
    return perspectiveFrustum(-halfScaledAspectRatio, halfScaledAspectRatio,
                              -halfHeight, halfHeight,
                              nearClippingPlaneDistance, farClippingPlaneDistance);
}
Additional Resources
Some additional resources that helped me while writing this post.
On blocking sets in projective Hjelmslev planes
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bonchev str. bl. 8, Sofia 1113, Bulgaria
The image of an $(n q^{m-1}(q+1), n)$-blocking multiset with $n <$ char$R$ under the canonical map $\pi^{(1)}$ is a ''sum of lines''. In particular, the smallest $(k, 1)$-blocking set is the characteristic function of a line and its cardinality is $k = q^{m-1}(q+1)$. We prove that if $R$ has a subring $S$ with $\sqrt{|R|}$ elements that is a chain ring such that $R$ is free over $S$, then the subplane PHG($S^3_S$) is an irreducible $1$-blocking set in PHG($R^3_R$). Corollaries are derived for chain rings with $|R| = q^2$, $R/$rad$R \cong \mathbb F_q$.
In the case of chain rings $R$ with $|R| = q^2$, $R/$rad$R \cong \mathbb F_q$ and $n = 1$, we prove that the size of the second smallest irreducible $(k, 1)$-blocking set is $q^2 + q + 1$. We classify all blocking sets with this cardinality. It turns out that if char$R = p$ there exist (up to isomorphism) two such sets; if char$R = p^2$ the irreducible $(q^2 + q + 1, 1)$-blocking set is unique. We introduce a class of irreducible $(q^2 + q + s, 1)$-blocking sets for every $s \in \{1, \ldots, q + 1\}$. Finally, we discuss briefly the codes over $\mathbb F_q$ obtained from certain blocking sets.
Keywords: blocking set, affine Hjelmslev plane, chain rings, linear codes over finite chain rings, arcs, projective Hjelmslev plane, Galois rings.
Mathematics Subject Classification: Primary: 51E26, 51E21, 51E22; Secondary: 94B0.
Citation: Ivan Landjev. On blocking sets in projective Hjelmslev planes. Advances in Mathematics of Communications, 2007, 1 (1): 65-81. doi: 10.3934/amc.2007.1.65
Defining parameters
Level: \( N = 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k = 1 \)
Character orbit: \([\chi] = \) 4000.z (of order \(8\) and degree \(4\))
Character conductor: \(\operatorname{cond}(\chi) = 160 \)
Character field: \(\Q(\zeta_{8})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).
Total New Old
Modular forms 40 0 40
Cusp forms 0 0 0
Eisenstein series 40 0 40
The following table gives the dimensions of subspaces with specified projective image type.
\(D_n\) \(A_4\) \(S_4\) \(A_5\)
Dimension 0 0 0 0 |
In attempting to answer this question, I reduced it to a seemingly simple generating functions question, but after days of work was unable to construct a proof. Since I do not have experience trying to do asymptotics with generating functions, I would like to know if a proof is salvageable from these methods.
The problem introduces the sequence $a_n$, defined by $a_0 = 1$ and $$ a_{n}=a_{\left\lfloor n/2\right\rfloor}+a_{\left\lfloor n/3 \right\rfloor}+a_{\left\lfloor n/6\right\rfloor} $$ and asks for a proof that $$ \lim_{n\to\infty}\dfrac{a_{n}}{n}=\dfrac{12}{\log{432}}. $$ Writing the generating function $\displaystyle A(x) = \sum_{n \ge 0} a_n x^n$, this translates to $$ A(x) = (1 + x)A(x^2) + (1 + x + x^2) A(x^3) + (1 + x + x^2 + \cdots + x^5)A(x^6) - 2 $$
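As a numerical sanity check (my own addition, with hypothetical function names), the recurrence is cheap to evaluate with memoisation, since only $O(\log^2 n)$ distinct arguments ever occur, so one can at least watch $a_n/n$ approach the claimed limit:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def a(n):
    # a_0 = 1, a_n = a_{floor(n/2)} + a_{floor(n/3)} + a_{floor(n/6)}
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

# compare the ratio a_n / n with the conjectured limit 12 / log 432
print(a(10**6) / 10**6, 12 / math.log(432))
```

This is evidence, not a proof: the ratio fluctuates because $b_n$ is supported on the 3-smooth numbers $2^l 3^m$.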
Even better, let $b_0 = a_0$ and $b_n = a_n - a_{n-1}$ for all $n \ge 1$, and define the generating function $\displaystyle B(x) = \sum_{n \ge 0} b_n x^n = (1 - x)A(x)$. Multiplying the above by $(1-x)$ gives $$ (1 - x)A(x) = (1 - x^2)A(x^2) + (1 - x^3)A(x^3) + (1 - x^6)A(x^6) + 2x - 2 $$ i.e. $$ B(x) = B(x^2) + B(x^3) + B(x^6) + 2x - 2 \tag{1} $$
After unsuccessfully trying to do asymptotics with the above elegant formula, I used it to find an explicit representation of $B$, using the Delannoy numbers:
$$ B(x) = 1 + 2 \sum_{l, m \ge 0} \sum_{d \ge 0} 2^d {l \choose d}{m \choose d} x^{2^l 3^m} $$
It follows that in fact \begin{align*} b_n&= \begin{cases} 1 &n=0 \\ 2 \sum_{d \ge 0} 2^d \binom{l}{d} \binom{m}{d} &n =2^l3^m \\ 0 &\text{otherwise} \end{cases} \\[10pt] a_n&=1+2\sum_{d\ge0}2^d\sum_{\begin{matrix}l,m\ge0\\2^l 3^m \le n\end{matrix}}{l \choose d}{m \choose d} \tag{2} \end{align*}
One can do naive bounds on the sum in (2) - replacing the condition $2^l 3^m \le n$ with $2^l 2^m \le n$ and $3^l 3^m \le n$ for upper and lower bounds, respectively. But this isn't good enough; it gives (after algebra and combinatorial work) approximately $$ \frac{n^{\log_3(1 + \sqrt{2}) - 1}}{2} < \frac{a_n}{n} < \frac{n^{\log_2(1 + \sqrt{2}) - 1}}{2} $$
This seems to suggest trying to approximate (2) with the condition $(1 + \sqrt{2})^l (1 + \sqrt{2})^m \le n$, but I have no idea how to justify that.
At any rate, I've made too much of what feels like progress to give up on the problem, and if anyone can think of a way to use (2) to get a solution or else to use (1) and find the asymptotics directly, I'd be very thankful. |
On the subject of categorical versus set-theoretic foundations there is too much complicated discussion about structure that misses the essential point about whether "collections" are necessary.

It doesn't matter exactly what your personal list of mathematical requirements may be -- rings, the category of them, fibrations, 2-categories or whatever -- developing the appropriate foundational system for it is just a matter of "programming", once you understand the general setting.

The crucial issue is whether you are taken in by the Great Set-Theoretic Swindle that mathematics depends on collections (completed infinities). (I am sorry that it is necessary to use strong language here in order to flag the fact that I reject a widely held but mistaken opinion.)

Set theory as a purported foundation for mathematics does not and cannot turn collections into objects. It just axiomatises some of the intuitions about how we would like to handle collections, based on the relationship called "inhabits" (eg "Paul inhabits London", "3 inhabits N"). This binary relation, written $\epsilon$, is formalised using first order predicate calculus, usually with just one sort, the universe of sets. The familiar axioms of (whichever) set theory are formulae in first order predicate calculus together with $\epsilon$.

(There are better and more modern ways of capturing the intuitions about collections, based on the whole of the 20th century's experience of algebra and other subjects, for example using pretoposes and arithmetic universes, but they would be a technical distraction from the main foundational issue.)

Lawvere's "Elementary Theory of the Category of Sets" axiomatises some of the intuitions about the category of sets, using the same methodology. Now there are two sorts (the members of one are called "objects" or "sets" and of the other "morphisms" or "functions"). The axioms of a category or of an elementary topos are formulae in first order predicate calculus together with domain, codomain, identity and composition.

Set theorists claim that this use of category theory for foundations depends on prior use of set theory, on the grounds that you need to start with "the collection of objects" and "the collection of morphisms". Curiously, they think that their own approach is immune to the same criticism.
I would like to make it clear that I do NOT share this view of Lawvere's.
Prior to 1870 completed infinities were considered to be nonsense.
When you learned arithmetic at primary school, you learned some rules that said that, when you had certain symbols on the page in front of you, such as "5+7", you could add certain other symbols, in this case "=12". If you followed the rules correctly, the teacher gave you a gold star, but if you broke them you were told off.

Maybe you learned another set of rules about how you could add lines and circles to a geometrical figure ("Euclidean geometry"). Or another one involving "integration by parts". And so on. NEVER was there a "completed infinity".

Whilst the mainstream of pure mathematics allowed itself to be seduced by completed infinities in set theory, symbolic logic continued and continues to formulate systems of rules that permit certain additions to be made to arrays of characters written on a page. There are many different systems -- the point of my opening paragraph is that you can design your own system to meet your own mathematical requirements -- but a certain degree of uniformity has been achieved in the way that they are presented.
We need an inexhaustible supply of VARIABLES for which we may substitute.
There are FUNCTION SYMBOLS that form terms from variables and other terms.
There are BASE TYPES such as 0 and N, and CONSTRUCTORS for forming new types, such as $\times$, $+$, $/$, $\to$, ....

There are TRUTH VALUES ($\bot$ and $\top$), RELATION SYMBOLS ($=$) and CONNECTIVES and QUANTIFIERS for forming new predicates.

Each variable has a type, formation of terms and predicates must respect certain typing rules, and each formation, equality or assertion of a predicate is made in the CONTEXT of certain type-assignments and assumptions.
There are RULES for asserting equations, predicates, etc.
We can, for example, formulate ZERMELO TYPE THEORY in this style. It has type-constructors called powerset and {x:X|p(x)} and a relation-symbol called $\epsilon$. Obviously I am not going to write out all of the details here, but it is not difficult to make this agree with what ordinary mathematicians call "set theory" and is adequate for most of their requirements.

Alternatively, one can formulate the theory of an elementary topos in this style, or any other categorical structure that you require. Then a "ring" is a type together with some morphisms for which certain equations are provable.

If you want to talk about "the category of sets" or "the category of rings" WITHIN your type theory then this can be done by adding types known as "universes", terms that give names to objects in the internal category of sets and a dependent type that provides a way of externalising the internal sets.

So, although the methodology is the one that is practised by type theorists, it can equally well be used for category theory and the traditional purposes of pure mathematics. (In fact, it is better to formalise a type theory such as my "Zermelo type theory" and then use a uniform construction to turn it into a category such as a topos. This is easier because the associativity of composition is awkward to handle in a recursive setting. However, this is a technical footnote.)

A lot of these ideas are covered in my book "Practical Foundations of Mathematics" (CUP 1999), http://www.PaulTaylor.EU/Practical-Foundations. Since writing the book I have written things in a more type-theoretic than categorical style, but they are equivalent. My programme called "Abstract Stone Duality", http://www.PaulTaylor.EU/ASD, is an example of the methodology above, but far more radical than the context of this question in its rejection of set theory, ie I see toposes as being just as bad. |
When $x$ is discrete, the KL divergence is $D_{KL}(P||Q)=\sum\limits_{x}P(x)\log \frac{P(x)}{Q(x)}$; when $x$ is continuous, $D_{KL}(P||Q)=\int\limits_{x}p(x)\log \frac{p(x)}{q(x)}dx$. However, when the random variable $x$ is defined on a mixed continuous and discrete space, what is the KL divergence?
For example, $x=(r,a)$, where $r$ is a continuous variable that follows Gaussian distribution, $a$ is a discrete variable that follows Bernoulli distribution. $r$ and $a$ are independent of each other.
Under $P(x)$, $r\sim \mathcal{N}(\mu_1,\sigma^2)$ and $a\sim \text{Bernoulli} (\beta)$, i.e., $$P(r,a)=\left\{\begin{matrix} \mathcal{N}(\mu_1,\sigma^2)\cdot\beta, \quad a = 1, \forall r\in R\\ \mathcal{N}(\mu_1,\sigma^2)\cdot(1-\beta), \quad a = 0, \forall r\in R \end{matrix}\right.$$
Under $Q(x)$, $r\sim \mathcal{N}(\mu_2,\sigma^2)$ and $a\sim \text{Bernoulli} (1-\beta)$, i.e., $$Q(r,a)=\left\{\begin{matrix} \mathcal{N}(\mu_2,\sigma^2)\cdot(1-\beta), \quad a = 1, \forall r\in R\\ \mathcal{N}(\mu_2,\sigma^2)\cdot \beta, \quad a = 0, \forall r\in R \end{matrix}\right.$$
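Since $r$ and $a$ are independent under both $P$ and $Q$, a natural definition uses densities with respect to the product of Lebesgue and counting measure: sum over the discrete coordinate, integrate over the continuous one. The sketch below (my own illustration, not part of the original post) evaluates that sum-plus-integral numerically and compares it against the sum of the two factorwise divergences, $\frac{(\mu_1-\mu_2)^2}{2\sigma^2} + (2\beta-1)\log\frac{\beta}{1-\beta}$:

```python
import math

def normal_pdf(r, mu, sigma):
    return math.exp(-((r - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def kl_mixed(mu1, mu2, sigma, beta, lo=-12.0, hi=13.0, n=100000):
    """D_KL(P||Q): sum over the Bernoulli coordinate a, midpoint-rule
    integral over the Gaussian coordinate r."""
    h = (hi - lo) / n
    total = 0.0
    # P gives a=1 probability beta; Q gives it 1-beta (and vice versa for a=0)
    for p_a, q_a in [(beta, 1.0 - beta), (1.0 - beta, beta)]:
        for k in range(n):
            r = lo + (k + 0.5) * h
            p = p_a * normal_pdf(r, mu1, sigma)
            q = q_a * normal_pdf(r, mu2, sigma)
            total += p * math.log(p / q) * h
    return total

def kl_closed_form(mu1, mu2, sigma, beta):
    # KL adds over independent coordinates: Gaussian part + Bernoulli part
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2) + (2 * beta - 1) * math.log(beta / (1 - beta))
```

For instance, with $\mu_1=0$, $\mu_2=1$, $\sigma=1$, $\beta=0.3$ the two computations agree to several decimal places.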
What is the KL divergence of $P$ and $Q$? Thank you very much for the help! |
Living alone with Mitty in the Abyss, Nanachi has developed an interest in number theory problems. Nanachi recently came up with the following problem but is unable to solve it.
Let \(\phi(n)\) denote the count of positive integers up to n which are coprime with n. Since summations bore our Nanachi, Nanachi decided to evaluate the function \(f(n)\) instead, which is defined as:

\(\displaystyle f(n) = \prod_{d|n} \phi(d)\)

where \(a|b\) means a divides b. Nanachi seeks your help in finding the value of \(f(n)\).

INPUT: A single integer n.

OUTPUT: The value of \(f(n)\). Since this value can be large, output it modulo \(10^9 + 7\).

CONSTRAINTS: \(1 \leq n \leq 10^{12}\)
For 10% of the testcases, \(n \leq 10^6\)
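For the small subtask, a direct implementation of the definition is enough (this sketch and its helper names are mine, not part of the problem): factor n by trial division, enumerate its divisors, and multiply their totients modulo \(10^9+7\). The full \(n \leq 10^{12}\) constraint would need a smarter use of the prime factorisation, but this is handy for checking answers:

```python
MOD = 10**9 + 7

def prime_factors(n):
    # trial division up to sqrt(n)
    f = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def f(n):
    fac = prime_factors(n)
    divisors = [1]
    for p, e in fac.items():
        divisors = [d * p**k for d in divisors for k in range(e + 1)]
    result = 1
    for d in divisors:
        phi = d
        for p in fac:          # the primes of d are a subset of the primes of n
            if d % p == 0:
                phi -= phi // p
        result = (result * (phi % MOD)) % MOD
    return result
```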
The divisors of \(12\) are \(1, 2, 3, 4, 6\) and \(12\), i.e. \(f(12) = \phi(1)\cdot\phi(2)\cdot\phi(3)\cdot\phi(4)\cdot\phi(6)\cdot \phi(12)\)
\(\Rightarrow f(12) = 1\cdot 1\cdot 2\cdot 2\cdot 2\cdot 4 = 32\) |
The PARTITION problem is NP-complete:
INSTANCE: finite set $A$ and a size $s(a) \in \mathbb{Z}^+$ for each $a \in A$
QUESTION: Is there a subset $A' \subseteq A$ such that $\sum_{a \in A'} s(a) = \sum_{a \in A \setminus A'} s(a)$?
The problem remains NP-complete even if the elements are ordered as $a_1,a_2,...,a_{2n}$ and we require that $A'$ contains exactly one of $a_{2i-1},a_{2i}$ for $1 \leq i \leq n$ (Garey and Johnson, Computers and Intractability).
This variant should be known as EVEN-ODD PARTITION.
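To make the pairing constraint concrete, here is a brute-force checker (my own illustration, exponential time, so only for tiny instances): $A'$ must take exactly one element from each consecutive pair.

```python
from itertools import product

def even_odd_partition(sizes):
    """Return a selection taking one element from each pair (a_{2i-1}, a_{2i})
    whose sum is half the total, or None if no such selection exists."""
    assert len(sizes) % 2 == 0
    total = sum(sizes)
    if total % 2:
        return None
    pairs = [(sizes[i], sizes[i + 1]) for i in range(0, len(sizes), 2)]
    for choice in product((0, 1), repeat=len(pairs)):
        picked = [pair[c] for pair, c in zip(pairs, choice)]
        if 2 * sum(picked) == total:
            return picked
    return None
```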
Do you see a quick reduction to prove its hardness? (or do you know the paper where it was first defined and proved) |
Cross-posted to Math Educators Stack Exchange. (link)
I am looking for high school algebra/mathematics textbooks targeted at talented students, as preparation for fully rigorous calculus à la Spivak. I am interested in the best materials available in English, French, German or Hebrew.
Ideally, the book(s) should provide a comprehensive introduction to algebra at this level, starting from the most basic operations on polynomials. It should include necessary theory (e.g., Bezout's remainder theorem on polynomials, proof of the fundamental theorem of arithmetic, Euclid's algorithm, a more honest discussion of real numbers than usual, proofs of the properties of rational exponents, etc., and a general attitude that all statements are to be proved, with few exceptions). It should also have problems that range from exercises acquainting students with the basic algebraic manipulations on polynomials to much more difficult ones.
Specifically, I am looking for something similar in spirit to a series of excellent Russian books by Vilenkin for students in so-called "mathematical schools" from grades 8 to 11, although I am only looking for the equivalent of the grade 8 and 9 books, which are at precalculus level. To give you an idea, here are a sample of typical problems from the grade-8 book.
Perform the indicated operations. $\frac{3p^2mq}{2a^2 b^2} \cdot \frac{3abc}{8x^2 y^2} : \frac{9a^2 b^2 c^3}{28pxy}$
Prove that when $a \ne 0$, the polynomial $x^{2n} + a^{2n}$ is divisible neither by $x + a$ nor by $x - a$.
Prove that if $a + b + c = 0$, then $a^3 + b^3 + c^3 + 3(a + b)(a + c) (b + c) = 0$.
Prove that if $a > 1$, then $a^4 + 4$ is a composite number.
Prove that if $n$ is relatively prime to $6$, then $n^2 - 1$ is divisible by 24.
Simplify $\sqrt{36x^2}$.
Simplify $\sqrt{12 + \sqrt{63}}$.
Prove that the difference of the roots of the equation $5x^2 -2(5a + 3)x + 5a^2 + 6a + 1 = 0$ does not depend on $a$.
Solve the inequality $|x - 6| \leq |x^2 - 5x + 2|$.
And here are the chapter titles for the grade 8 and 9 books.
Grade 8: Fractions. Polynomials. Divisibility; prime and composite numbers. Real numbers. Quadratic equations; systems of nonlinear equations; resolution of inequalities.
Grade 9: Elements of set theory. Functions. Powers and roots. Equations and inequalities, and systems thereof. Sequences. Elements of trigonometry. Elements of combinatorics and probability theory.
Broadly similar questions have been asked elsewhere; however, the suggestions made there are not satisfactory for my purposes.
The English translations of Gelfand's books are good; however they are not a sufficiently broad introduction to high school algebra, and do not have enough material on computational technique. They are more in the nature of supplements to an ordinary textbook.
Some 19th century books like Hall and Knight have been suggested. On conceptual material, these tend to be too old in language and outlook.
Basic Mathematics by Serge Lang seems more to dabble in various topics than to provide a thorough introduction to algebra.
I am not inclined towards books with a very strong "New Math" orientation (1971-1983 France, for example). I don't think a student should need to understand the group of affine transformations of $\mathbb{R}$ to know what a line is.
Also, previous questions have perhaps focused implicitly on material in English. I have in mind a student who can also easily read French, German or Hebrew if something better can be found in those languages.
Edit. I'd like to clarify that I'm not asking for something identical to these books, just something as close as possible to their spirit. Fundamentally, this means: 1. It is a substitute for, rather than just a complement to, a regular school algebra textbook. 2. It is directed at the most able students. 3. It conveys the message that proofs and creative problem-solving are central to mathematics. |
Any ideas on how I could prove the veracity or falseness of the following inequality?
Let $X:\Omega \to \mathbb{R}$ be a random variable such that the expressions below are well-defined. Then
$$E[e^X] \leq 1 + e^{E[|X|]}.$$
I have the feeling that this is true but I do not know how to show it. I was thinking of Jensen's inequality, but it goes the wrong way.
One more question: if the above inequality is false, is there a way to upper bound it as $E[e^X] \leq f(E|X|)$?
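A throwaway numeric probe (entirely my own, not from the post) makes it easy to test candidate inequalities on simple finite distributions before attempting a proof; here the left side is $E[e^X]$ and the right side is $1 + e^{E[|X|]}$:

```python
import math

def both_sides(values, probs):
    # LHS = E[e^X], RHS = 1 + e^{E[|X|]} for a finite discrete distribution
    lhs = sum(p * math.exp(v) for v, p in zip(values, probs))
    rhs = 1 + math.exp(sum(p * abs(v) for v, p in zip(values, probs)))
    return lhs, rhs

# a point mass satisfies the inequality ...
print(both_sides([1.0], [1.0]))
# ... but a rare large value makes E[e^X] far exceed 1 + e^{E|X|}
print(both_sides([10.0, 0.0], [0.1, 0.9]))
```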
Thank you very much for your help. |
Related but not necessary to know: here
Looking at the temperature distribution in an infinitely long cylinder of metal with insulated sides and initial temperature distribution $f(x)= \left\{\begin{align}0,\quad|x|\lt L \\ C,\quad|x| \gt L \end{align} \right.$
$C$ is constant.
Now I want to workout $B_n$ for the fourier series, and I thought that I would want:
$$B_n = \frac2L\int_L^\infty C\sin\left(\frac{n\pi x}{L}\right) dx$$
But perhaps I haven't set up the integral correctly. Thank you for listening.
I based my choice on the general form of the solution to the heat equation: $u(x,t)=\sum \limits_{n=1}^\infty B_n e^{-({n\pi C/L})^2 t} \sin\left(\frac{n\pi x}{L}\right)$ With $B_n = \frac2L \int_0^L \sin\left(\frac{n\pi x}{L}\right) f(x) dx$ |
Answer
$\frac{\pi}{4}$
Work Step by Step
RECALL: Since $\tan{x}$ and $\tan^{-1}{x}$ are inverse functions of each other, then, for all valid values of $x$, $\tan^{-1}{(\tan(x))}=x$ Thus, $\tan^{-1}{(\tan{(\frac{\pi}{4})})}=\frac{\pi}{4}$
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
CGAL 4.7 - Combinatorial Maps
The concept Dart defines a d-dimensional dart. A dart mainly stores handles to the darts linked with itself by \( \beta_i\), \( \forall\) i, 0 \( \leq\) i \( \leq\) d. Moreover, it also stores handles to each non void attribute associated with itself.

Creation

A dart d0 is never constructed directly, but always created within a combinatorial map cm by using the method cm.create_dart(). A new dart is initialized to be i-free, \( \forall\) i: 0 \( \leq\) i \( \leq\) dimension, and has all its attribute handles initialized to NULL, for each non void attribute.
CombinatorialMap::null_dart_handle is no longer a static data member. This implies moving the following methods of the Dart concept into the CombinatorialMap concept: is_free, highest_nonfree_dimension, opposite and other_extremity. You can define the CGAL_CMAP_DEPRECATED macro to keep the old behavior.
static unsigned int dimension: the dimension of the dart.

Types:

typedef unspecified_type Dart_handle: Dart handle type.
typedef unspecified_type Dart_const_handle: Dart const handle type.
template<unsigned int i> using Attribute_handle = unspecified_type: Attribute_handle<i>::type is a handle to i-attributes, with 0 \( \leq \) i \( \leq \) dimension.
template<unsigned int i> using Attribute_const_handle = unspecified_type: Attribute_const_handle<i>::type is a const handle to i-attributes, with 0 \( \leq \) i \( \leq \) dimension.

Member functions:

Dart_handle beta(unsigned int i): to simplify a future implementation, it is recommended to not use this function and to use cmap.beta(dh,i) instead.
Dart_const_handle beta(unsigned int i) const: same recommendation, for a const dart.
Dart_handle beta_inv(unsigned int i): it is recommended to use cmap.beta(dh,CGAL_BETAINV(i)) instead.
Dart_const_handle beta_inv(unsigned int i) const: same recommendation, for a const dart.
template<unsigned int i> Attribute_handle<i>::type attribute(): it is recommended to use cmap.attribute(dh) instead.
template<unsigned int i> Attribute_const_handle<i>::type attribute() const: same recommendation, for a const dart.
Attribute_const_handle<i>::type is a const handle to i-attributes, with 0 \( \leq \) i \( \leq \) dimension. Must be a model of the ConstHandle concept.

Attribute_handle<i>::type is a handle to i-attributes, with 0 \( \leq \) i \( \leq \) dimension. Must be a model of the Handle concept.

Dart_const_handle: Dart const handle type. Must be a model of the ConstHandle concept.
Attribute_handle<i>::type Dart::attribute(): returns a handle to the i-attribute associated to the dart. The i-attribute must be non void. To simplify a future implementation, it is recommended to not use this function and to use cmap.attribute(dh) instead.

Attribute_const_handle<i>::type Dart::attribute() const: returns a const handle to the i-attribute associated to the dart, when the dart is const. The i-attribute must be non void. The same recommendation applies.

Dart_handle Dart::beta(unsigned int i): returns \( \beta_i\)(*this). To simplify a future implementation, it is recommended to not use this function and to use cmap.beta(dh,i) instead.

Dart_const_handle Dart::beta(unsigned int i) const: returns \( \beta_i\)(*this) when the dart is const.

Dart_handle Dart::beta_inv(unsigned int i): returns \( \beta_i^{-1}\)(*this). To simplify a future implementation, it is recommended to use cmap.beta(dh,CGAL_BETAINV(i)) instead. |
We begin with the integral with exponentials in the denominator, which is$$I=\int_{-\infty}^{\infty} \frac{x^2}{(2+e^{x}+e^{-x})^2} \ dx.$$
Let $x=\ln(u)$, so that $dx =\frac{1}{u} \ du$, which gives
$$I=\int_{0}^{\infty} \frac{u \ (\ln(u))^2}{(u+1)^4} \ du.$$
Now split $I$ into
$$I=\int_{0}^{1} \frac{u \ (\ln(u))^2}{(u+1)^4} \ du + \int_{1}^{\infty} \frac{u \ (\ln(u))^2}{(u+1)^4} \ du.$$ Perform a change of variables $u=\frac{1}{z}$ to see that the two integrals are equal to one another, and thus,
$$I=\int_{0}^{1} \frac{2u \ (\ln(u))^2}{(u+1)^4} \ du.$$
Now here is the fun part. Consider the triple integral
$$J=\int_{0}^{1}\int_{1}^{u}\int_{1}^{y} \frac{4u}{zy(1+u)^4} \ dz \ dy \ du.$$ If we integrate this in the order presented, we get $J=I.$ On the other hand, we reverse the order of integration.
$$J=-\int_{0}^{1}\int_{0}^{y}\int_{1}^{y} \frac{4u}{zy(1+u)^4} \ dz \ du \ dy,$$ where the minus sign comes from reversing the orientation of the $y$-integration, and integrating (by parts the second time) gives us
$$I= -\int_{0}^{1} \frac{2y(3+y) \ln(y)}{3(1+y)^3} \ dy.$$ Expand the integrand with partial fractions and see
$$I= \int_{0}^{1} - \frac{2 \ln(y)}{3(y+1)} \ dy - \int_{0}^{1} \frac{2 \ln(y)}{3(y+1)^2} \ dy + \int_{0}^{1} \frac{4 \ln(y)}{3(y+1)^3} \ dy$$
Now the first term $$\int_{0}^{1} -\frac{2 \ln(y)}{3(y+1)} \ dy=\frac{\zeta(2)}{3}$$ which we can obtain by converting the integrand into a geometric series and using the fact that $$\sum_{n=0}^{\infty} \frac{(-1)^{n}}{(n+1)^2} =\eta(2)=\frac{\zeta(2)}{2}=\frac{\pi^2}{12}$$ by the Basel Problem.
The second term $$\int_{0}^{1} -\frac{2 \ln(y)}{3(y+1)^2} \ dy=\frac{2 \ln(2)}{3},$$ which can be proved by integration by parts $u=\frac{2}{3} \ln(y)$ and $dv = \frac{-1}{(y+1)^2} \ dy$ and a use of partial fractions (on the $\int_{0}^{1} v \ du$ part).
Similarly, apply the same reasoning to show that $$\int_{0}^{1} \frac{4 \ln(y)}{3(y+1)^3} \ dy=-\frac{1+ 2\ln(2)}{3}$$
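As a numerical sanity check (my own addition), a crude midpoint rule reproduces the three values just computed, despite the logarithmic singularity at $0$:

```python
import math

def quad(f, lo, hi, n=200000):
    # composite midpoint rule; accurate enough here for the log singularity at 0
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

t1 = quad(lambda y: -2 * math.log(y) / (3 * (y + 1)), 0.0, 1.0)        # zeta(2)/3
t2 = quad(lambda y: -2 * math.log(y) / (3 * (y + 1) ** 2), 0.0, 1.0)   # 2 ln(2)/3
t3 = quad(lambda y: 4 * math.log(y) / (3 * (y + 1) ** 3), 0.0, 1.0)    # -(1 + 2 ln 2)/3
print(t1, t2, t3, t1 + t2 + t3)
```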
Combining the values together we get that $$I= \frac{-1+\zeta(2)}{3}.$$ Finally multiply this value by $\frac{1}{(\ln(2))^3}$ to get the value of your original integral. |
Let us start with a problem of the form
$$(\mathcal{L} + k^2) u=0$$
with a set of given boundary conditions (Dirichlet, Neumann, Robin, Periodic, Bloch-Periodic). This corresponds with finding the eigenvalues and eigenvectors for some operator $\mathcal{L}$, under some geometry, and boundary conditions. One can obtain a problem like this in acoustics, electromagnetism, elastodynamics, quantum mechanics, for example.
I know that one can discretize the operator using different methods, e.g, Finite Difference Methods to obtain
$$[A]\{U\} = k^2 \{U\}$$
or using, Finite Element Methods to obtain
$$[K]\{U\} = k^2 [M]\{U\} \enspace .$$
Some thoughts

The method of Manufactured Solutions is not useful in this case since there is no source term to balance the equation.
One can verify that the matrices $[K]$ and $[M]$ are well captured using a frequency domain problem with source term, e.g.
$$[\nabla^2 + \omega^2/c^2] u(\omega) = f(\omega) \enspace ,\quad \forall \omega \in [\omega_\min, \omega_\max]$$
instead of
$$[\nabla^2 + k^2] u = 0 \enspace .$$
But this will not check the solver issues.
Maybe, one can compare solutions for different methods, like FEM and FDM.
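Along the same lines, the most common concrete verification is against a model problem with a known analytic spectrum. The sketch below (pure Python, all names mine) discretises $-u''=k^2u$ on $(0,\pi)$ with Dirichlet conditions, whose exact eigenvalues are $k^2=1,4,9,\dots$, extracts the smallest discrete eigenvalue with inverse power iteration, and checks the expected second-order convergence of the error under mesh refinement:

```python
import math

def smallest_eigenvalue_fd(m, L=math.pi, iters=100):
    """Smallest eigenvalue of the 3-point FD Laplacian on (0, L) with
    Dirichlet BCs (m interior nodes), via inverse power iteration."""
    h = L / (m + 1)
    off = -1.0 / h**2
    diag = 2.0 / h**2

    def solve(b):
        # Thomas algorithm for the constant tridiagonal system A x = b
        c = [0.0] * m
        d = [0.0] * m
        c[0] = off / diag
        d[0] = b[0] / diag
        for i in range(1, m):
            denom = diag - off * c[i - 1]
            c[i] = off / denom
            d[i] = (b[i] - off * d[i - 1]) / denom
        x = [0.0] * m
        x[-1] = d[-1]
        for i in range(m - 2, -1, -1):
            x[i] = d[i] - c[i] * x[i + 1]
        return x

    x = [1.0] * m
    for _ in range(iters):
        y = solve(x)
        nrm = math.sqrt(sum(v * v for v in y))
        x = [v / nrm for v in y]
    # Rayleigh quotient x^T A x (x already has unit norm)
    ax = [diag * x[i] + off * ((x[i - 1] if i else 0.0) + (x[i + 1] if i < m - 1 else 0.0))
          for i in range(m)]
    return sum(xi * axi for xi, axi in zip(x, ax))
```

Halving $h$ should divide the error against $k^2=1$ by about four; a wrong value points at the discretisation, a wrong rate at the solver or the boundary handling.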
Question
What is the way to verify the solutions (eigenvalue-eigenvector pairs) for discretization schemes due to numerical methods like FEM and FDM for eigenvalue problems? |
Fujimura's problem

Revision as of 04:45, 13 February 2009
Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid
[math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math]
which contains no equilateral triangles. Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain "hyper-optimistic" version of DHJ(3).
n=0
It is clear that [math]\overline{c}^\mu_0 = 1[/math].
n=1
It is clear that [math]\overline{c}^\mu_1 = 2[/math].
n=2
It is clear that [math]\overline{c}^\mu_2 = 4[/math] (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).
n=3
Deleting (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math] shows that [math]\overline{c}^\mu_3 \geq 6[/math]. In fact [math]\overline{c}^\mu_3 = 6[/math]: note that (3,0,0) (or a point symmetric to it) has to be removed, leaving 3 pairwise disjoint triangles, so 3 more removals are required.
n=4
[math]\overline{c}^\mu_4=9[/math]:
The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0, has 9 elements and doesn’t contain any equilateral triangles.
Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], then for each [math]x=1,2,3,4[/math] at most one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] can be in S. Thus at most 5 elements of S have [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S has at most 4 elements in this set, and [math]|S|\leq 4+5=9[/math]. The same holds if S contains (0,4,0) or (4,0,0), so if [math]|S|\gt9[/math], S contains none of these three points. Also, S cannot contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]; similarly for [math](3,0,1), (1,0,3), (1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So we have found 6 elements not in S, and since [math]|\Delta_4|=15[/math], [math]|S|\leq 15-6=9[/math].
n=5
[math]\overline{c}^\mu_5=12[/math]
The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.
Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], then for each x=1,2,3,4,5 at most one of (0,x,5-x) and (x,0,5-x) can be in S. Thus at most 6 elements of S have a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S has at most 6 elements in this set, and [math]|S|\leq 6+6=12[/math]. The same holds if S contains (0,5,0) or (5,0,0), so if |S|>12, S contains none of these. S can contain at most 2 points of each of the following equilateral triangles:
(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)
So now we have found 9 elements not in S, and since [math]|\Delta_5|=21[/math], [math]|S|\leq 21-9=12[/math].
General n
[Cleanup required here]
A lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), attained by the set of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.
A trivial upper bound is
[math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math]
since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set.
Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math]. |
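The small cases above can be checked by brute force. A sketch (assuming, consistently with the triangle count [math]\binom{n+2}{3}[/math] above, that the forbidden configurations are the triples {(a+r,b,c), (a,b+r,c), (a,b,c+r)} with r ≥ 1):

```python
def delta(n):
    # The triangular grid: all (a, b, c) with a, b, c >= 0 and a + b + c = n.
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    # Equilateral triangles {(a+r,b,c), (a,b+r,c), (a,b,c+r)} with r >= 1;
    # there are C(n+2, 3) of them, matching the count in the text.
    return [((a + r, b, c), (a, b + r, c), (a, b, c + r))
            for r in range(1, n + 1)
            for a in range(n - r + 1)
            for b in range(n - r + 1 - a)
            for c in [n - r - a - b]]

def fujimura(n):
    # Exhaustive search over all subsets; feasible only for small n.
    pts = delta(n)
    idx = {p: i for i, p in enumerate(pts)}
    masks = [sum(1 << idx[v] for v in t) for t in triangles(n)]
    best = 0
    for s in range(1 << len(pts)):
        if all((s & m) != m for m in masks):
            best = max(best, bin(s).count("1"))
    return best
```

This reproduces the values 1, 2, 4, 6, 9 for n = 0, ..., 4; for n = 5 the grid already has 21 points, so this naive 2^21-subset search becomes slow and a smarter search (or integer programming) is preferable.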
Grothendieck's basic assumptions mean we are dealing with a connected atomic site $\mathcal{C}$ with a point, whose inverse image is the fiber functor $F: \mathcal{C} \to \mathcal{S}et$:
(i) Every arrow $X \to Y$ in $\mathcal{C}$ is a strict epimorphism.
(ii) For every $X \in \mathcal{C}$, $F(X) \neq \emptyset$.
(iii) $F$ preserves strict epimorphisms.
(iv) The diagram of $F$, $\Gamma_F$, is a cofiltered category.
Let $G = Aut(F)$ be the localic group of automorphisms of $F$.
Let $F: \widetilde{\mathcal{C}} \to \mathcal{S}et$ also denote the corresponding point of the atomic topos $\widetilde{\mathcal{C}}$ of sheaves for the canonical topology on $\mathcal{C}$. We can assume that the objects of $\mathcal{C}$ are the connected objects of $\widetilde{\mathcal{C}}$.
(i) means that the objects are connected, (ii) that the topos is connected, (iii) that $F$ is continuous, and (iv) that it is flat.
By considering stronger finite-limit-preserving conditions (iv) on $F$ (corresponding to stronger cofiltering conditions on $\Gamma_F$) we obtain different Grothendieck-Galois situations (for details and full proofs see [1]):
S1) $F$ preserves all inverse limits in $\widetilde{\mathcal{C}}$ of objects in $\mathcal{C}$, that is, $F$ is essential. In this case $\Gamma_F$ has an initial object $(a,A)$ (we have a "universal covering"), $F$ is representable, $a: [A, -] \cong F$, and $G = Aut(A)^{op}$ is a discrete group.
S2) $F$ preserves arbitrary products in $\widetilde{\mathcal{C}}$ of a same $X \in \mathcal{C}$ (we introduce the name "proessential" for such a point [1]). In this case Galois closures exist (which is a cofiltering-type property of $\Gamma_F$), and $G$ is a prodiscrete localic group, the inverse limit in the category of localic groups of the discrete groups $Aut(A)^{op}$, with $A$ running over all the Galois objects in $\mathcal{C}$.
S2-finite) $F$ takes values in finite sets. This is the original situation in SGA1. In this case the condition "$F$ preserves finite products in $\widetilde{\mathcal{C}}$ of a same $X \in \mathcal{C}$" holds automatically by condition (iv) ($F$ preserves finite limits); thus Galois closures exist, the groups $Aut(A)^{op}$ are finite, and $G$ is a profinite group, the inverse limit in the category of topological groups of the finite groups $Aut(A)^{op}$.
NOTE. The projections of an inverse limit of finite groups are surjective; this is a key property. The projections of an inverse limit of arbitrary groups are not necessarily surjective, but if the limit is taken in the category of localic groups, they are indeed surjective (proved by Joyal-Tierney). This is the reason we have to take a localic group in S2). Grothendieck follows an equivalent approach in SGA4 by taking the limit in the category of pro-groups.
S3) No condition on $F$ other than preservation of finite limits (iv). This is the case of a general pointed atomic topos. We call the development of this case "localic Galois theory" (see [2]); its fundamental theorem was first proved by Joyal-Tierney.
[1] "On the representation theory of Galois and Atomic topoi", JPAA 186 (2004)
[2] "Localic galois theory", Advances in Mathematics 175/1 (2003).
I have this claim left as an exercise in my course:
Let $f:M\to N$ be some function between two smooth manifolds $M$ and $N$ (of dimensions $m$ and $n$, respectively). Prove that if, for every smooth function $\mu:N\to\mathbb{R}$, the composition $\mu\circ f$ is smooth, then $f$ is smooth. (Here, "smooth" means "smooth of class $C^{k}$", $k\geq 1$.) Hint: use the chain rule.
Here is how I tried:
Let $(U_{i},\varphi_{i})_{i\in I}$ be some atlas of $M$ and $(V_{j},\psi_{j})_{j\in J}$ some atlas of $N$. Now take $(i,j)\in I\times J$ such that $f^{-1}(V_{j})\cap U_{i}\neq\emptyset$. Proving that $f$ is smooth means showing that \begin{equation}\psi_{j}\circ f\circ\varphi^{-1}_{i}\left.\right\vert_{\varphi_{i}(f^{-1}(V_{j})\cap U_{i})}\end{equation} is of class $C^{k}$ for all such $(i,j)\in I\times J$.
We know that every component of $\psi_{j}$ is smooth, so by the hypothesis on $f$, every component of $\psi_{j}\circ f$ would be smooth. I posted this on another forum and their conclusion was that I cannot state this without constructing a function that extends $\psi_{j}$ $C^{k}$-smoothly to all of $N$. We can do that by constructing a function $$\alpha_{j}=\begin{cases}1 &\text{on}\ V'_{j}\\ 0 &\text{outside}\ V_{j}\end{cases}$$ with $\alpha_{j}$ some $C^{k}$ function on $V_{j}\setminus V'_{j}$, where $V'_{j}$ is some open set of $N$ such that $\overline{V'}_{j}\subset V_{j}$. If this is correct up to here, my main problem is proving the existence of such an open set $V'_{j}$.
I suppose the chain rule is there to prove the smoothness of the composition in the first equation above. |
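For what it's worth, one standard way to obtain such a $V'_j$ (an assumption on my part that this is the intended route) is to work inside the chart, where closed balls are compact:

```latex
% For q \in V_j, choose r > 0 with \overline{B(\psi_j(q), 2r)} \subset \psi_j(V_j), and set
V'_j := \psi_j^{-1}\big(B(\psi_j(q), r)\big).
% Since \psi_j^{-1}\big(\overline{B(\psi_j(q), r)}\big) is compact, it is closed in N, hence
\overline{V'_j} \subseteq \psi_j^{-1}\big(\overline{B(\psi_j(q), r)}\big) \subset V_j .
% A smooth bump \beta : \mathbb{R}^n \to [0,1] with \beta \equiv 1 on B(\psi_j(q), r)
% and \operatorname{supp}\beta \subset B(\psi_j(q), 2r) then yields
\alpha_j :=
\begin{cases}
\beta \circ \psi_j & \text{on } V_j,\\[2pt]
0 & \text{on } N \setminus V_j,
\end{cases}
```

which is $C^k$ on all of $N$ because the two definitions agree off the (compact, hence closed) support of $\beta\circ\psi_j$.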
A $(p,q)$
tensor $T$ is a MULTILINEAR MAP that takes $p$ copies of $V^*$ and $q$ copies of $V$ and maps multilinearly (linearly in each entry) to $K$:
$$T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}\rightarrow K\tag 1$$
The
$(p,q)$ TENSOR SPACE is defined as a set:
$$\begin{align}T^p_q\,V &= \underset{p}{\underbrace{V\otimes\cdots\otimes V}} \otimes \underset{q}{\underbrace{V^*\otimes\cdots\otimes V^*}}:=\{T\, |\, T\ \text{is a}\ (p,q)\ \text{tensor}\}\tag2\\[3ex]&=\{T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}\rightarrow K\}\tag3\end{align}$$
This is the set of all $(p,q)$ tensors, equipped with pointwise addition and scalar multiplication.
I can't find an example online to get an idea of what these expressions mean. I have followed, for example, all 25 lectures on tensors in this series, but these expressions are not even mentioned. I'd like to see an example that is not completely trivial and that could have been dealt with using linear algebra: something with "arrow vectors" and matrices, perhaps, so that the linear functionals in $V^*$ and the vectors in $V$ are clearly spelled out, together with the operations involved ($\otimes$).
If asking for an example is not a good question, a step-by-step translation in English of what these expressions are saying would be great. |
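Here is a minimal concrete example of the kind being asked for (my own illustration, so the specific choices are arbitrary): take $V = \mathbb{R}^2$ with basis $e_1, e_2$ and dual basis $\epsilon^1, \epsilon^2$.

```latex
% A (1,1)-tensor, i.e. an element of T^1_1 V = V \otimes V^*:
T = e_1 \otimes \epsilon^2, \qquad T(\phi, v) := \phi(e_1)\,\epsilon^2(v).
% Feeding it one functional and one vector, say \phi = 3\epsilon^1 + \epsilon^2
% and v = 2e_1 + 5e_2:
T(\phi, v) = \phi(e_1)\,\epsilon^2(v) = 3 \cdot 5 = 15 .
% Under the identification V \otimes V^* \cong \operatorname{Hom}(V, V), T is the matrix
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
% since v \mapsto e_1\,\epsilon^2(v) sends e_2 \mapsto e_1 and e_1 \mapsto 0.
```

A general element of $T^1_1 V$ is a sum of such simple tensors, exactly as every $2\times 2$ matrix is a linear combination of the four $e_i \otimes \epsilon^j$.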
In the book
Mixed Finite Element Methods and Applications by Boffi, Brezzi, and Fortin there is a fairly long discussion of why the Raviart-Thomas (RT) projection is only defined for functions in $H^1$ (Remark 2.5.1). In addition, for DG methods there are similar projections, also only defined for functions in $H^1$.
To set the stage for my question, I introduce a linear elliptic PDE. Let $\Omega\subset \mathbb{R}^d$ with $d\in \{2,3\}$ be a bounded polyhedral domain. Suppose the source function $\sigma\colon \Omega\to \mathbb{R}$ is in $L^2(\Omega)$, and that the Dirichlet boundary datum $u_d\colon \partial\Omega_D\to \mathbb{R}$ and the Neumann boundary datum $\vec{g}_N\cdot\vec{n}\colon \partial\Omega_N \to \mathbb{R}$ are regular enough that the following PDE is well defined.
Our PDE is \begin{align*} 0 &= \vec{q} + \vec{\nabla} u&& x\in \Omega,\\ \sigma &= \vec{\nabla} \cdot \vec{q}&& x\in \Omega,\\ u_d &= u&& x\in \partial\Omega_D,\\ \vec{g}_N\cdot \vec{n}&= \vec{q}\cdot \vec{n} && x\in \partial\Omega_N. \end{align*}
What are weak conditions on the data (the domain, the source term, and the boundary conditions) that guarantee $\vec{q}\in H^1(\Omega)$? Am I correct in assuming that we need some sort of elliptic regularity?
References to the literature are appreciated. I have found many papers dealing with elliptic regularity, but they all treat even simpler PDEs (homogeneous BCs, smooth boundary, no Neumann data, etc.).
For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
Abah, Obinna
The understanding of memory effects arising from the interaction between system and environment is key to engineering quantum thermodynamic devices beyond the standard Markovian limit. We study the performance of a measurement-based thermal machine whose working-medium dynamics is subject to backflow of information from the reservoir, via a collision-based model. In this study, the non-Markovian effect is introduced by allowing for additional unitary interactions between the environments. We present two strategies for realizing non-Markovian dynamics and study their influence on the work produced by the engine. Moreover, we show that system-environment memory effects can benefit the engine performance at short times.
Barthel, Thomas
The entanglement entropies in ground states of typical condensed matter systems obey area and log-area laws. In contrast, subsystem entropies in random and thermal states are extensive, i.e., obey a volume law. For energy eigenstates, one expects a crossover from the groundstate scaling at low energies and small subsystem sizes to the extensive scaling at high energies and large subsystem sizes. We elucidate this crossover. Due to eigenstate thermalization (ETH), the eigenstate entanglement can be related to subsystem entropies in thermodynamic ensembles. For one-dimensional critical systems, the universal crossover function follows from conformal field theory (CFT) and can be adapted to better capture nonlinear dispersion. For critical fermions in two dimensions, we obtain a crossover function by employing the 1+1d CFT result for contributions from lines perpendicular to the Fermi surface. Scaling functions for gapped systems additionally depend on a mass parameter. Using ETH, we also easily obtain the distribution function for eigenstate entanglement. The results are demonstrated numerically for quadratic fermionic systems, finding excellent data collapse to the scaling functions. Ref: Q. Miao and T. Barthel, arXiv:1905.07760 (2019)
Barthel, Thomas
Lie-Trotter-Suzuki decompositions are an efficient way to approximate operator exponentials exp(tH) when H is a sum of n (non-commuting) terms which, individually, can be exponentiated easily. They are employed in time-evolution algorithms for tensor network states, digital quantum simulation protocols, path integral methods like quantum Monte Carlo, and splitting methods for symplectic integrators in classical Hamiltonian systems. We provide optimized decompositions up to sixth order. The leading error term is expanded in nested commutators (Hall bases) and we minimize the 1-norm of the coefficients. For n=2 terms, several of the optima we find are close to those in [McLachlan, SlAM J. Sci. Comput. 16, 151 (1995)]. Generally, our results substantially improve over unoptimized decompositions by Forest, Ruth, Yoshida, and Suzuki. We explain why these decompositions are sufficient to efficiently simulate any one- or two-dimensional lattice model with finite-range interactions. This follows by solving a partitioning problem for the interaction graph. Ref: T. Barthel and Y. Zhang, arXiv:1901.04974 (2019)
Białończyk, Michał
I will consider the one-dimensional quantum Ising chain in a transverse field, driven from the paramagnetic phase to the critical point, and study its free evolution there. I will discuss how the system size and the quench-induced scaling relations from the Kibble-Zurek theory of non-equilibrium phase transitions are encoded in the transverse magnetization and the Loschmidt echo, and I will present methods to compute these observables analytically. Finally, I will show how the behaviour of the longitudinal magnetization differs and how it can be used for phase detection.
Correa, Luis
The reaction-coordinate mapping is a useful technique to study complex quantum dissipative dynamics into structured environments. In essence, it aims to mimic the original problem by means of an 'augmented system', which includes a suitably chosen collective environmental coordinate---the 'reaction coordinate'. This composite then couples to a simpler 'residual reservoir' with short-lived correlations. If, in addition, the residual coupling is weak, a simple quantum master equation can be rigorously applied to the augmented system, and the solution of the original problem just follows from tracing out the reaction coordinate. But, what if the residual dissipation is strong? Here we consider an exactly solvable model for heat transport---a two-node linear "quantum wire" connecting two baths at different temperatures. We allow for a structured spectral density at the interface with one of the reservoirs and perform the reaction-coordinate mapping, writing a perturbative master equation for the augmented system. We find that: (a) strikingly, the stationary state of the original problem can be reproduced accurately by a weak-coupling treatment even when the residual dissipation on the augmented system is very strong; (b) the agreement holds throughout the entire dynamics under large residual dissipation in the overdamped regime; (c) and that such master equation can grossly overestimate the stationary heat current across the wire, even when its non-equilibrium steady state is captured faithfully. These observations can be crucial when using the reaction-coordinate mapping to study the largely unexplored strong-coupling regime in quantum thermodynamics.
Flynn, Vincent
We take steps towards developing exact solutions for open dynamical systems, for which translational symmetry is broken by boundary conditions. Specifically, we leverage a recently proposed generalization of Bloch's theorem to obtain the spectrum and exact normal modes of a bosonic analogue to the familiar Kitaev-Majorana chain, which exhibits effective non-Hermitian Hamiltonian dynamics and extreme sensitivity to boundary conditions. We present exact analytical solutions for the chain under periodic, anti-periodic, open, and $\pi/2$-twisted boundary conditions for which we find the system can only be made dynamically stable in the latter two cases. We identify the breakdown of dynamical stability with a spontaneous breaking of a generalized $\mathcal{P}\mathcal{T}$-symmetry and employ tools from non-Hermitian quantum mechanics to characterize the extreme sensitivity of the system dynamics to boundary conditions.
Luoma, Kimmo
We derive a family of Gaussian non-Markovian stochastic Schrödinger equations for the dynamics of open quantum systems. The different unravelings correspond to different choices of squeezed coherent states, reflecting different measurement schemes on the environment. Consequently, we are able to give a single shot measurement interpretation for the stochastic states and microscopic expressions for the noise correlations of the Gaussian process. By construction, the reduced dynamics of the open system does not depend on the squeezing parameters. They determine the non-Hermitian Gaussian correlation, a wide range of which are compatible with the Markov limit. We demonstrate the versatility of our results for quantum information tasks in the non-Markovian regime. In particular, by optimizing the squeezing parameters, we can tailor unravelings for improving entanglement bounds or for environment-assisted entanglement protection.
Mathey, Steven
We investigate the critical dynamics of driven classical and quantum systems. Specifically, we consider slowly time-dependent couplings. We have developed an adiabatic dynamical Renormalization Group formalism and we use it to access the critical regime. We recover Kibble--Zurek phenomenology when the system is quenched across its phase boundary. We obtain the scaling of the correlation length with the quench speed ${\xi \sim v^{-\nu/(1+z \nu)}}$ from first principles. Moreover, we find another scaling regime, which is visible when the system is quenched along the phase boundary. In this regime, exponents that are sub-leading at equilibrium become dominant and observable without any fine-tuning.
Nation, Charlie
Random matrix theory has provided early insights into the theoretical understanding of the foundations of quantum statistical mechanics. In particular, Deutsch [1991 Phys. Rev. A 43 2046] presented a random matrix model that could be shown to thermalize as the eigenstates themselves formed an effective microcanonical ensemble. This was the foundation of the Eigenstate Thermalization Hypothesis (ETH), which has since become a leading contender for the mechanism behind thermalization. We extend this model, developing a method to find arbitrary correlation functions by including the effect of interactions between random wave-functions due to orthogonality. We derive the complete ETH ansatz, also guaranteeing that fluctuations are small in the thermodynamic limit. Further, from the developed framework, we derive an expression for the time-averaged fluctuations that resembles a classical fluctuation-dissipation theorem of Brownian motion; as such, we observe hints towards the emergence of classical statistical physics from chaotic quantum systems. We will further discuss a proposal to perform a measurement of the density of states of a quantum simulator by confirmation of the fluctuation-dissipation theorem.
Peronaci, Francesco
We present a theoretical study of a strongly repulsive Fermi-Hubbard model with a periodically driven interaction and a coupling to an external bath. Such a model is directly relevant for many current experiments. We use non-perturbative numerical methods and analytical Floquet expansions to show that the Mott-insulating phase is reshaped into a stationary state with enhanced local pairing correlations: a signature of the exotic eta-pairing superconducting phase. This suggests a path to stabilize intriguing far-from-equilibrium phases in Mott insulators, using driving protocols of current experimental relevance.
Petiziol, Francesco
Adiabatic driving is one of the pillars of time-dependent quantum control. However, the limitations imposed by coherence times are typically in sharp contrast with the necessity of slow evolutions imposed by the adiabatic theorem. We present a shortcut-to-adiabaticity control protocol for few-level quantum systems. The method works by assisting the accelerated adiabatic drive with fast oscillations in the intrinsic parameters of the original Hamiltonian: the oscillations mediate a counterdiabatic Hamiltonian dynamically compensating for nonadiabatic transitions. This construction remarkably avoids the necessity of introducing new complex time-dependent interactions and is robust against parameter biases. We further discuss how the method can be combined with strategies to counteract dissipation for realizing an efficient counterdiabatic driving in an open-system scenario. Our results are applied to realistic implementations based on molecular nanomagnets, superconducting circuits, and ultracold atoms.
Poggi, Pablo
The notion of quantum speed limit (QSL) refers to the fundamental fact that two quantum states become completely distinguishable upon dynamical evolution only after a finite amount of time, called the QSL time. A different but related concept is that of minimum control time (MCT), which is the minimum evolution time needed for a state to be driven (by suitable, generally time-dependent, control fields) to a given target state. While the QSL can give information about the MCT, it usually imposes few restrictions on it, and is thus impractical for control purposes. In this work we revisit this issue by first presenting a theory of geometrical QSLs for unitary transformations, rather than for states, and discuss its implications and limitations. Then, we propose a framework for bounding the MCT for realizing unitary transformations that goes beyond the QSL results and gives much more meaningful information for understanding the controlled dynamics of the system at short times.
Ptaszyński, Krzysztof
We report [1] two results complementing the second law of thermodynamics for Markovian open quantum systems coupled to multiple reservoirs with different temperatures and chemical potentials. First, we derive a nonequilibrium free energy inequality providing an upper bound for the maximum power output, which for systems with inhomogeneous temperature is not equivalent to the Clausius inequality. Secondly, we derive local Clausius and free energy inequalities for the subsystems of a composite system. These inequalities, which generalize an influential result obtained previously for classical bipartite systems [2], differ from the total-system ones by the presence of an information-related contribution and lay the groundwork for a thermodynamics of quantum information processing. Our theory is used to study an autonomous quantum Maxwell demon based on quantum dots. [1] K. Ptaszyński and M. Esposito, arXiv:1901.01093 (2019). [2] J. M. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014).
Tobalina, Ander
We present a procedure to accelerate the relaxation of an open quantum system towards its equilibrium state. The control protocol, termed Shortcut to Equilibration, is obtained by reverse engineering the non-adiabatic master equation. This is a non-unitary control task aimed at rapidly changing the entropy of the system. Such a protocol serves as a shortcut to an abrupt change in the Hamiltonian, i.e., a quench. As an example, we study the thermalization of a particle in a harmonic well. We observe that for short protocols there is a four orders of magnitude improvement in accuracy.
Yoshioka, Nobuyuki
We propose a new variational scheme based on neural-network quantum states to simulate the stationary states of open quantum many-body systems. Using the high expressive power of the variational ansatz described by restricted Boltzmann machines, which we dub the neural stationary state ansatz, we compute the stationary states of quantum dynamics obeying Lindblad master equations. Mapping the stationary-state search problem onto finding a zero-energy ground state of an appropriate Hermitian operator allows us to apply the conventional variational Monte Carlo method for the optimization. Our method is shown to simulate various spin systems efficiently, namely the transverse-field Ising model in both one and two dimensions and the XYZ model in one dimension.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, then backwards along $Y$ for the same time, leads you to a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
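A quick numerical illustration of this (my own example, with hypothetically chosen fields on $\mathbb{R}^2$): take $X = \partial_x$ and $Y = x\,\partial_y$, so $[X, Y] = \partial_y$. Both flows are known in closed form, so the square of flows can be composed exactly:

```python
import numpy as np

# Example on R^2: X = d/dx, Y = x d/dy, so [X, Y] = d/dy.
# Exact time-s flows: along X, (x, y) -> (x + s, y); along Y, (x, y) -> (x, y + x*s).
def flow_X(p, s):
    return np.array([p[0] + s, p[1]])

def flow_Y(p, s):
    return np.array([p[0], p[1] + p[0] * s])

t = 0.01
s = np.sqrt(t)
p = np.array([0.0, 0.0])

# X for sqrt(t), then Y for sqrt(t), then back along X, then back along Y:
q = flow_Y(flow_X(flow_Y(flow_X(p, s), s), -s), -s)

# Compare with flowing along [X, Y] = d/dy for time t starting from p:
r = p + np.array([0.0, t])
```

For this particular pair the two endpoints agree exactly, because all higher brackets vanish; in general they agree only up to second order in $t$.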
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
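The $k=1$ case of this formula can be checked symbolically. A sketch with arbitrarily chosen $\omega$, $X$, $Y$ on $\mathbb{R}^2$ (the specific components are illustrative, nothing more):

```python
import sympy as sp

x, y = sp.symbols('x y')

# An arbitrary 1-form w = f dx + g dy and vector fields X, Y on R^2,
# written as component pairs.
f, g = x**2 * y, sp.sin(x) + y
X = (y, x)                   # X = y d/dx + x d/dy
Y = (sp.Integer(1), x * y)   # Y = d/dx + x*y d/dy

def vf(V, h):
    # V(h): derivative of the function h along the vector field V.
    return V[0] * sp.diff(h, x) + V[1] * sp.diff(h, y)

def w(V):
    # Evaluate the 1-form on a vector field: w(V) = f V^1 + g V^2.
    return f * V[0] + g * V[1]

# Lie bracket, componentwise: [X, Y]^i = X(Y^i) - Y(X^i).
XY = (vf(X, Y[0]) - vf(Y, X[0]), vf(X, Y[1]) - vf(Y, X[1]))

# Invariant formula: dw(X, Y) = X(w(Y)) - Y(w(X)) - w([X, Y]).
lhs = sp.expand(vf(X, w(Y)) - vf(Y, w(X)) - w(XY))

# Coordinate formula: dw = (dg/dx - df/dy) dx ^ dy, evaluated on (X, Y).
rhs = sp.expand((sp.diff(g, x) - sp.diff(f, y)) * (X[0] * Y[1] - X[1] * Y[0]))
```

Both sides come out equal, as the invariant formula says they must for any smooth choices.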
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $(\nabla_X s)(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. The Poisson bracket gives the space of all smooth functions on a symplectic manifold a Lie algebra structure, and then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
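One quick way to sanity-check a claim like this is a Monte Carlo estimate. A sketch, assuming the shape–scale convention $\Gamma(k=2,\,\theta=2/\lambda)$ (the numbers change under the shape–rate convention); for any i.i.d. $X_i$ with mean $\mu$ and variance $\sigma^2$ we have $\mathbb{E}[\bar{X}^2/2] = (\mu^2 + \sigma^2/n)/2$:

```python
import numpy as np

# Monte Carlo check of E[((X_1 + ... + X_n)/n)^2 / 2] for X_i ~ Gamma(2, 2/lambda).
# Assumption: shape-scale convention, i.e. shape k = 2, scale theta = 2/lam.
rng = np.random.default_rng(0)
lam, n, trials = 2.0, 5, 200_000

samples = rng.gamma(shape=2.0, scale=2.0 / lam, size=(trials, n))
estimate = np.mean(samples.mean(axis=1) ** 2) / 2

# Analytic value: E[Xbar^2 / 2] = (mu^2 + sigma^2 / n) / 2, with
# mu = k * theta and sigma^2 = k * theta^2 for this parametrization.
mu = 2.0 * (2.0 / lam)
s2 = 2.0 * (2.0 / lam) ** 2
analytic = (mu**2 + s2 / n) / 2

print(estimate, analytic)
```

Comparing both numbers against the conjectured $\frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ for a few values of $n$ and $\lambda$ makes it easy to see whether the claim holds.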
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
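A lazy way to confirm the claim (though not to make the proof more elegant) is to note that every subgroup order of $S_4$ already occurs among subgroups generated by at most two elements, which a brute-force script can enumerate. A throwaway sketch:

```python
from itertools import permutations, combinations

def compose(p, q):
    # (p . q)(i) = p[q[i]] for permutations stored as tuples.
    return tuple(p[i] for i in q)

def closure(gens):
    # Subgroup generated by gens: multiply until the set stabilises
    # (a finite set closed under multiplication is a subgroup).
    elems = set(gens) | {tuple(range(4))}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

s4 = list(permutations(range(4)))
orders = {1}  # the trivial subgroup
for a, b in combinations(s4, 2):
    orders.add(len(closure((a, b))))

print(sorted(orders))  # -> [1, 2, 3, 4, 6, 8, 12, 24]
```

Every divisor of $24$ shows up, matching the case analysis above.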
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Let $X$ be an Abelian variety over a field $k$; $L$ line bundle on $X$. I would like to calculate the cohomology of the Mumford line bundle $\Lambda(L)=m^*L\otimes p_1^*L^{-1}\otimes p_2^*L^{-1}$; where $m$ is the group law on $X$ and $p_i$ are the standard projections $X\times X \to X$.
I know that $R^np_{2,*}\Lambda(L)=0$ if $n\neq g:=\dim X$ and $R^gp_{2,*}\Lambda(L)=i_*O_{K(L)}$, where $K(L)$ is the kernel of the map $\phi_L:X\to \widehat{X}$ and $i$ its inclusion into $X$. Let us suppose $K(L)$ is finite.
I want to use the Leray spectral sequence, and what I've got so far is that, since the stable page is everywhere $0$ except in the $g$-th row, $H^n(X\times X,\Lambda(L))=0$ for each $n<g$ and $H^n(X\times X,\Lambda(L))=H^{n-g}(X,i_*O_{K(L)})=H^{n-g}(K(L),O_{K(L)})$, and hence it equals $0$ for each $n>g$ since $\dim K(L)=0$.
The questions are the following:
1) Since this is the first time I have done a calculation with spectral sequences: is what I've written above correct?
2) Is it true that $\dim H^g(X\times X, \Lambda(L))=\deg\phi_L$? If the answer is yes, why? |
I want to find the roots for $\kappa$ for the equation
$$\sqrt{\alpha - 1} \cos{\left (\frac{\sqrt{2} \sqrt{\alpha - 1}}{2 \sqrt{\epsilon}} \right )} \cosh{\left (\frac{\sqrt{2} \sqrt{\alpha + 1}}{2 \sqrt{\epsilon}} \right )} - \sqrt{\alpha - 1} \\ -\frac{1}{\sqrt{\alpha + 1}} \sin{\left (\frac{\sqrt{2} \sqrt{\alpha - 1}}{2 \sqrt{\epsilon}} \right )} \sinh{\left (\frac{\sqrt{2} \sqrt{\alpha + 1}}{2 \sqrt{\epsilon}} \right )} = 0 \enspace ,$$ where $\alpha=\sqrt{1 + 4\epsilon\kappa^2}$. This equation has infinitely many roots, but I am interested in the first $N$ of them.
One option to solve this problem is to use Newton's method; the problem is choosing the initial points, since the values of the function can be extremely large.
This problem comes from finding the eigenvalues of
$$\frac{d^2 u}{ds^2} - \epsilon \frac{d^4u}{ds^4} + \kappa^2 u = 0$$
from which I can obtain an approximation using perturbation methods, i.e., the eigenvalues are approximated by
$$\kappa_n^2 = n^2\pi^2 + \epsilon n^4\pi^4$$
for small $\epsilon$. Then, for small values of $\epsilon$, I can use the values $\kappa_n^2$ as initial guesses to the Newton algorithm. But when $\epsilon$ increases these initial guesses fail.
Since I know the original differential equation, I can use FEM or FDM to find the eigenvalues, but I am interested in other methods. I compared FDM (1001 points), the perturbation solution and Newton's method (using the perturbation solution as the initial guess) for $\epsilon=10^{-3}$, but I couldn't make those guesses work for larger values of $\epsilon$.
Question: Is there any other method to solve this problem? Maybe some kind of transformation that can be applied to the equation? |
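One workable alternative (a sketch of my own, valid for small-to-moderate $\epsilon$): substitute $\theta = \sqrt{(\alpha-1)/(2\epsilon)}$, so that $\alpha = 1 + 2\epsilon\theta^2$ and the hyperbolic argument is $\phi = \sqrt{(\alpha+1)/(2\epsilon)}$, then divide the equation by $\cosh\phi$ so that all terms stay $O(1)$. Since the roots satisfy $\tan\theta \approx \sqrt{\alpha^2-1} > 0$, each root can be bracketed in $(n\pi, (n+\tfrac12)\pi)$ and found by bisection, with no delicate initial guesses:

```python
import math

def scaled_equation(theta, eps):
    # Original equation divided by cosh(phi), with theta = sqrt((alpha-1)/(2*eps))
    # and phi = sqrt((alpha+1)/(2*eps)); sech is written to avoid overflow.
    alpha = 1.0 + 2.0 * eps * theta**2
    phi = math.sqrt((alpha + 1.0) / (2.0 * eps))
    sech = 2.0 * math.exp(-phi) / (1.0 + math.exp(-2.0 * phi))
    return (math.sqrt(alpha - 1.0) * (math.cos(theta) - sech)
            - math.sin(theta) * math.tanh(phi) / math.sqrt(alpha + 1.0))

def bisect(f, lo, hi, eps):
    # Plain bisection; the endpoints are chosen so f changes sign on [lo, hi].
    flo = f(lo, eps)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        fmid = f(mid, eps)
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def first_roots(eps, N):
    # Each root has tan(theta) ~ sqrt(alpha^2 - 1) > 0, so theta lies in
    # (n*pi, (n + 1/2)*pi); map theta back to kappa afterwards.
    kappas = []
    for n in range(1, N + 1):
        theta = bisect(scaled_equation, n * math.pi + 1e-12,
                       (n + 0.5) * math.pi, eps)
        alpha = 1.0 + 2.0 * eps * theta**2
        kappas.append(math.sqrt((alpha**2 - 1.0) / (4.0 * eps)))
    return kappas

print(first_roots(1e-3, 3))
```

For $\epsilon=10^{-3}$ this reproduces roots close to the perturbation guesses $\kappa_n^2 \approx n^2\pi^2 + \epsilon n^4\pi^4$, and the bracketing does not depend on the quality of an initial guess.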
If we factor $n_k$ into primes, $n_k = p_{1}^{b_{1}}\cdots p_r^{b_{r}}$, then the Chinese Remainder Theorem tells us that $x\equiv a_k\pmod{n_k}$ is equivalent to the system of congruences$$\begin{align*}
x&\equiv a_k\pmod{p_1^{b_{1}}}\\
x&\equiv a_k\pmod{p_2^{b_{2}}}\\
&\vdots\\
x&\equiv a_k\pmod{p_r^{b_{r}}}
\end{align*}$$Thus, we can replace the given system of congruences with one in which every modulus is a prime power, $n_i = p_i^{b_i}$.
Note that the assumption that $a_i\equiv a_j\pmod{\gcd(n_i,n_j)}$ "goes through" this replacement (if they were congruent modulo $\gcd(n_i,n_j)$, then they are congruent modulo the gcds of the prime powers as well).
So, we may assume without loss of generality that every modulus is a prime power.
I claim that we can deal with each prime separately, again by the Chinese Remainder Theorem. If we can solve all congruences involving the prime $p_1$ to obtain a solution $x_1$ (which will be determined modulo the highest power of $p_1$ that occurs); and all congruences involving the prime $p_2$ to obtain a solution $x_2$ (which will be determined modulo the highest power of $p_2$ that occurs); and so on until we obtain a solution $x_n$ for all congruences involving the prime $p_n$ (determined modulo the highest power of $p_n$ that occurs), then we can obtain a simultaneous solution by solving the usual Chinese Remainder Theorem system$$\begin{align*}
x &\equiv x_1 \pmod{p_1^{m_1}}\\
&\vdots\\
x &\equiv x_n\pmod{p_n^{m_n}}
\end{align*}$$(where $m_i$ is the highest power of $p_i$ that occurs as a modulus).
So we are reduced to figuring out whether we can solve the system$$\begin{align*}
x &\equiv a_1\pmod{p^{b_1}}\\
x &\equiv a_2\pmod{p^{b_2}}\\
&\vdots\\
x & \equiv a_n\pmod{p^{b_n}}
\end{align*}$$with, without loss of generality, $b_1\leq b_2\leq\cdots\leq b_n$.
When can this be solved? Clearly, this can be solved if and only if $a_i\equiv a_j\pmod{p^{b_{\min(i,j)}}}$: any solution must satisfy this condition, and if this condition is satisfied, then $a_n$ is a solution.
For example: say the original moduli had been $n_1 = 2^3\times 3\times 7^2$, $n_2= 2^2\times 5\times 7$, $n_3=3^2\times 5^3$. First we replace the system with the system of congruences$$\begin{align*}
x&\equiv a_1 \pmod{2^3}\\
x&\equiv a_2\pmod{2^2}\\
x&\equiv a_1\pmod{3}\\
x&\equiv a_3\pmod{3^2}\\
x&\equiv a_2\pmod{5}\\
x&\equiv a_3\pmod{5^3}\\
x&\equiv a_1\pmod{7^2}\\
x&\equiv a_2\pmod{7}.
\end{align*}$$Then we separately solve the systems:$$\begin{align*}
x_1&\equiv a_1 \pmod{2^3} &x_2&\equiv a_1\pmod{3}\\
x_1&\equiv a_2\pmod{2^2}&x_2&\equiv a_3\pmod{3^2}\\
\strut\\
x_3&\equiv a_2\pmod{5}&x_4&\equiv a_1\pmod{7^2}\\
x_3&\equiv a_3\pmod{5^3}&x_4&\equiv a_2\pmod{7}.
\end{align*}$$
Assuming we can solve these, $x_1$ is determined modulo $2^3$, $x_2$ modulo $3^2$, $x_3$ modulo $5^3$, and $x_4$ modulo $7^2$, so we then solve the system$$\begin{align*}
x &\equiv x_1\pmod{2^3}\\
x &\equiv x_2\pmod{3^2}\\
x&\equiv x_3 \pmod{5^3}\\
x&\equiv x_4\pmod{7^2}
\end{align*}$$and obtain a solution to the original system.
Hence, if the condition $a_i\equiv a_j\pmod{\gcd(n_i,n_j)}$ holds in the original system, then we obtain a solution for each prime, and from the solution for each prime we obtain a solution to the original system by applying the usual Chinese Remainder Theorem twice. |
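The two-stage argument above collapses into a short program if one merges congruences pairwise: a pair $x \equiv a_1 \pmod{n_1}$, $x \equiv a_2 \pmod{n_2}$ is solvable iff $a_1 \equiv a_2 \pmod{\gcd(n_1,n_2)}$, and merging produces a single congruence modulo $\operatorname{lcm}(n_1,n_2)$. A sketch (the function name is mine):

```python
from math import gcd

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for non-coprime moduli by pairwise merging.

    Returns (a, n) describing the solution set {a + k*n}, or None when some
    pair violates the compatibility condition r_i = r_j (mod gcd(m_i, m_j)).
    """
    a, n = 0, 1
    for a2, n2 in zip(residues, moduli):
        g = gcd(n, n2)
        if (a2 - a) % g:
            return None  # incompatible pair: no solution exists
        # Solve n*t = a2 - a (mod n2); n/g is invertible mod n2/g.
        t = ((a2 - a) // g * pow(n // g, -1, n2 // g)) % (n2 // g)
        a, n = a + n * t, n // g * n2
    return a % n, n

# The moduli from the example: 2^3*3*7^2, 2^2*5*7, 3^2*5^3.
print(crt([123456 % 1176, 123456 % 140, 123456 % 1125],
          [1176, 140, 1125]))  # -> (123456, 441000)
```

The returned modulus $441000 = 2^3\cdot3^2\cdot5^3\cdot7^2$ is exactly the product of the highest prime powers, as in the argument above.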
I've recently had occasion (providing an engineering colleague with a little mathematical help) to solve a nonlinear system
$\begin{align*}f(x,y)&=0,\\ g(x,y)&=0.\end{align*}$
If derivatives of $f$ and $g$ were available, I could use Newton's method
$\mathbf{x}_{n+1}=\mathbf{x}_n-J_n^{-1}\mathbf{F}(\mathbf{x}_n)$, with $\mathbf{F}=(f,g)$,
where $J$ is of course the Jacobian
$\displaystyle{\frac{\partial(f,g)}{\partial(x,y)}}$.
Since the functions are too complicated to be easily differentiated (computing each of $f$ and $g$ requires multiple steps), my next approach would be to use a secant method: something like Broyden's method.
However, given that this is only a 2D system, I wonder what is wrong with using a Newton-type iteration where the partial derivatives in the Jacobian are approximated by finite differences, so that, for example:
$\displaystyle{\frac{\partial f}{\partial x}\approx \frac{f(x_n,y_n)-f(x_{n-1},y_n)}{x_n-x_{n-1}}}$.
Broyden's formula aims to simplify the updating of the Jacobian approximation each step, and chooses that approximation to be a matrix $J_n$ for which
$J_n(\mathbf{x}_n-\mathbf{x}_{n-1})=\mathbf{F}(\mathbf{x}_n)-\mathbf{F}(\mathbf{x}_{n-1})$.
The use of finite differences provides a Jacobian approximation which doesn't satisfy any nice properties, but on the other hand it is very easy to implement (only a few lines of code), and seems to be quite fast. And for a 2D system I would guess it to be reasonably efficient.
My question is: is there any good reason for using Broyden's method for a 2D system, over the simpler finite-difference method? (Which, if it has a formal name, I don't know it). I'm not a numerical analyst (my knowledge is limited to the simple methods I've taught as part of elementary numerical methods courses), so if anybody can offer some expert opinions, I'll be glad to hear them. |
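For concreteness, the finite-difference variant really is only a few lines; here is a sketch (with a made-up test system, and using a small fixed step $h$ for the differences rather than the previous iterate, which behaves a little more predictably):

```python
import numpy as np

def newton_fd(F, x0, h=1e-7, tol=1e-12, maxit=50):
    """Newton's method with a forward-difference Jacobian approximation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = np.empty((len(Fx), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - Fx) / h  # column j approximates dF/dx_j
        x = x - np.linalg.solve(J, Fx)
    return x

# Made-up test system: circle x^2 + y^2 = 4 intersected with hyperbola xy = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = newton_fd(F, [2.0, 0.5])
print(root, F(root))
```

For a 2D system the extra function evaluations per step (two here) are usually negligible, which is presumably why the savings from Broyden's rank-one update matter less in low dimensions.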
Question: I'd like to know if there is some reference or reasonable way to develop curve theory in a plane with degenerate metric $(\Bbb R^2, {\rm d}s^2 ={\rm d}x^2)$. Context: In Lorentz-Minkowski space $\Bbb L^3 = (\Bbb R^3, \langle \cdot,\cdot\rangle_L = {\rm d}x^2+{\rm d}y^2 - {\rm d}z^2)$ one can consider unit-speed spacelike curves $\alpha\colon I \to \Bbb L^3$ with lightlike normal direction (meaning $T(s) = \alpha'(s)$, $N(s) = \alpha''(s)$). Assume that $(T(s),N(s))$ is a positive basis of the plane they span (for example, assume that the Euclidean cross product $T(s)\times_E N(s)$ is future-directed).
Say I now complete that frame with a lightlike "binormal vector" $B(s)$ orthogonal to $T(s)$ that makes the basis $(T(s),N(s),B(s))$ positive (with normalization constraint $\langle N(s),B(s)\rangle_L = -1$). We have a Frenet-like system $$ \begin{pmatrix} T'(s) \\ N'(s) \\ B'(s)\end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & \tau(s) & 0 \\ 1 & 0 & -\tau(s)\end{pmatrix}\begin{pmatrix} T(s) \\ N(s) \\ B(s) \end{pmatrix},$$where $\tau(s)$ is the so-called pseudo-torsion of $\alpha$.
Assuming $\alpha(0) = 0$ and calling $\mathcal{F} = (T(0),N(0),B(0))$, Taylor expansion gives $$\alpha(s) - R(s) = \left(s, \frac{s^2}{2} + \tau(0) \frac{s^3}{6},0 \right)_{\mathcal{F}},$$for some $R(s)$ with $R(s)/s^3 \to 0$. This in particular hints that every curve in these conditions is a plane curve (this is actually true, not that hard to prove).
Projecting that in the osculating plane of the curve, we can consider $\gamma\colon I \to \Bbb R^2$ given by $\gamma(s) = (s, \frac{s^2}{2} + \tau(0) \frac{s^3}{6})$, where the metric in this $\Bbb R^2$ is only ${\rm d}x^2$.
I'd like to know if there is any reasonable notion of curvature for $\gamma$ related to the pseudo-torsion $\tau(0)$. Naively I could compute $$\kappa_\gamma(s) = \frac{\det(\gamma'(s),\gamma''(s))}{\|\gamma'(s)\|^3} = 1+\tau(0)s,$$but this is far from satisfactory for me. |
Homework 1
Hint for complex chain rule:
Let
$ \frac{f(z)-f(a)}{z-a} - f'(a) = E_f(z). $
Then
$ f(z)= f(a)+f'(a)(z-a)+E_f(z)(z-a). $
Note that if $ f(z) $ is complex differentiable at $ a $, then
$ \lim_{z\to a} E_f(z)=0. $
Next, let
$ \frac{g(w)-g(A)}{w-A} - g'(A) = E_g(w), $
where $ A=f(a). $
Now
$ g(w)= g(A)+g'(A)(w-A)+E_g(w)(w-A). $
To prove the Chain Rule, let $ w=f(z) $ in the formula above and write out the difference quotient for $ g(f(z)) $. The limit should become obvious. You will need to use the fact that $ f $ complex differentiable at $ a $ implies that $ f $ is continuous at $ a $ in order to deduce that $ E_g(f(z)) $ tends to zero as $ z $ tends to $ a $. --Steve Bell |
I am trying to prove that if $f:\mathbb{R} \to \mathbb{R}$ satisfies $f(x+y) = f(x) + f(y)$ for all $x,y\in\mathbb{R}$ and is continuous at $0$, then $f$ is continuous on $\mathbb{R}$.
I am fairly sure it is, for that property seems to be a property of linear polynomials, and we know polynomials are continuous where defined (all reals)
Any ideas for rigor?
$|x-a| \lt \delta \implies |f(x+a)-f(x)-f(a)| \lt \epsilon$
No idea where to go from here, this is my first time doing $\epsilon-\delta$ stuff. |
How to solve the following optimization problem?
$$\begin{array}{ll} \text{minimize} &\displaystyle\sum_{i=1}^{T} \| \mathbf{A}_i\mathbf{x} - \mathbf{b}_i \|^2\\ \text{subject to} & \mathbb{1}^\top {\mathbf{x}} = 1\\ & \mathbf{x} \geq 0\end{array}$$
where $\mathbf{x} \in \mathbb R^{n}$ is a vector, $\mathbf{b}_i \in \mathbb R^{m}$ is a sequence of vectors, and $\mathbf{A}_i \in \mathbb R^{m \times n}$ is a sequence of matrices. $T$ is the number of terms. The constraints mean that all elements of the vector $\mathbf{x}$ are non-negative and that the sum of its elements is equal to $1$.
The single-term case ($T=1$) is solved in How to optimize $\|Ax - b\|^2$ subject to $\mathbb{1}^\top x = 1$, $x\geq 0$
This time, my goal is to jointly optimize over those $T$ terms.
My second question is the following: how can I show that the constraint set for $\mathbf{x}$ is a compact convex set? |
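Since $\sum_{i=1}^{T} \|\mathbf{A}_i\mathbf{x} - \mathbf{b}_i\|^2 = \|\tilde{\mathbf{A}}\mathbf{x} - \tilde{\mathbf{b}}\|^2$ for the vertically stacked $\tilde{\mathbf{A}} = [\mathbf{A}_1; \ldots; \mathbf{A}_T]$ and $\tilde{\mathbf{b}} = [\mathbf{b}_1; \ldots; \mathbf{b}_T]$, the joint problem reduces to the single-term simplex-constrained least-squares problem. A sketch with a generic constrained solver (one option among many; the data here are random placeholders):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, m, n = 4, 6, 3
As = [rng.normal(size=(m, n)) for _ in range(T)]
bs = [rng.normal(size=m) for _ in range(T)]

# Stack the T terms: sum_i ||A_i x - b_i||^2 == ||A x - b||^2.
A = np.vstack(As)
b = np.concatenate(bs)

obj = lambda x: np.sum((A @ x - b) ** 2)
res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
x = res.x
print(x, obj(x))
```

As for the second question: the feasible set is the standard simplex, which is convex as an intersection of half-spaces with the hyperplane $\mathbb{1}^\top\mathbf{x}=1$, and compact because it is closed and bounded in $\mathbb{R}^n$.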
Going through my functional analysis course notes, I feel like there are two different proofs for the following theorem.
In $\mathbb{R}^n$ (or $\mathbb{C}^n$), any two norms are equivalent.
One uses the compactness-related extreme value theorem (i.e., a continuous function on a compact set must achieve its maximum and minimum values), while the other uses the open mapping theorem (i.e., for every continuous linear mapping $T$ from a Banach space $X$ onto another Banach space $Y$, and every open $U \subseteq X$, $T(U)$ is open). These two theorems use different hypotheses and are not equivalent. Therefore, I am suspicious that I am doing something wrong.
My question is whether both of these proofs are correct, or if I am doing something wrong here.

Common steps of both proofs

Define (recall) the $\|.\|_\text{sup}$ norm as $\|x\| = \sup_{i} |x_i|$.

(*) Show that for any norm $\|.\|_b$ on $\mathbb{R}^n$, there exists an $M_b > 0$, such that for all $x \in \mathbb{R}^n$, $\|x\|_b \leq M_b \|x\|_\text{sup}$ (see for example here on how to find $M_b$).

Proof with extreme value theorem

From (*) we deduce that any norm $\|x\|_b$ is continuous w.r.t. the $\|x\|_\text{sup}$ norm.

(+) Use the fact that the unit sphere (of the sup norm) is compact in $\mathbb R^n$, (*), and the extreme value theorem to deduce that $\|x\|_b$ achieves a minimum $m_b$ on the unit sphere (of the sup norm). In other words, there exists $m_b > 0$ such that $\|x\|_b \geq m_b \|x\|_\text{sup}$ for all $x \in \mathbb R^n$.

Combining (+) and (*), we get that any norm $\|.\|_b$ and $\|.\|_\text{sup}$ are equivalent $\blacksquare$

Proof with the open mapping theorem
This is the proof that I am not sure about.
From the open mapping theorem, one can prove that (see for example here for a proof):
Let $\|.\|_1$ and $\|.\|_2$ be norms on a Banach space $X$, such that $\|x\|_1 \leq\|x\|_2$. Then, the norms are equivalent.
Now combine this with (*) with the fact that $\mathbb R^n$ is complete (Banach), and you get that any norm in $\mathbb{R}^n$ is equivalent to the $\|.\|_\text{sup}$ norm $\blacksquare$ |
The expression $0/0$ is meaningless because the operation of division $(a,b) \mapsto a/b$ is not defined for pairs $(a,b)$ where $b$ is zero, just as it is not defined for pairs $(a,b)$ where $b$ is an elephant.
Sometimes you will hear that $0/0$ is an "indeterminate form" that can equal different things depending on the context. This is a horribly imprecise way of speaking. What it really means is that if you are evaluating a limit and you do the computation
$$\lim_{x \to 0} \frac{f(x)}{g(x)} = \frac{\lim_{x \to 0} f(x)}{\lim_{x \to 0} g(x)} = \frac{0}{0}$$
Then you have made a mistake in the first step and you have to evaluate the limit in a different way,
e.g. with l'Hospital's rule, to get a valid answer (which could be anything depending on the particulars of the problem, hence "indeterminate"). The reason that the first step in the displayed calculation is a mistake is that the rule "the limit of a quotient is the quotient of the limits" is not true in general; it is only true when the limit of the denominator is nonzero. |
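The "can equal different things" point is easy to see numerically: here are three quotients whose numerator and denominator both tend to $0$ at $x=0$, approaching three different limits:

```python
import math

# Each quotient is "0/0" at x = 0 yet approaches a different limit.
examples = {
    "sin(x)/x": lambda x: math.sin(x) / x,                  # limit 1
    "(1-cos(x))/x**2": lambda x: (1 - math.cos(x)) / x**2,  # limit 1/2
    "x**2/x": lambda x: x**2 / x,                           # limit 0
}
for name, f in examples.items():
    print(name, f(1e-6))
```

Evaluating near $0$ rather than at $0$ is exactly the distinction the limit laws are making.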
Max Koecher (for example, in
The Minnesota Notes on Jordan Algebras and Their Applications; new edition: Springer Lecture Notes in Mathematics, number 1710, 1999), defined a domain of positivity for a symmetric nondegenerate bilinear form $B: X \times X \rightarrow \mathbb{R}$ on a finite dimensional real vector space $X$, to be an open set $Y \subseteq X$ such that $B(x,y) > 0$ for all $x,y \in Y$, and such that if $B(x,y) > 0$ for all $y \in Y$, then $x \in Y$. (More succinctly, perhaps, we could say it's a maximal set $Y \subseteq X$ such that $B(Y,Y) > 0$.) Aloys Krieger and Sebastian Walcher, in their notes to chapter 1 of this book, state that "In the language used today, a domain of positivity is a self-dual open proper convex cone." [I now believe this is wrong; see my answer below for what I think is true instead.] It's quite easy to prove that it's an open proper convex cone. (Proper means it contains no nonzero linear subspace of $X$, i.e. that its closure is pointed.) But, although I have a vague recollection of having encountered a proof once in a paper on homogeneous self-dual cones, I haven't succeeded in finding it again, or in supplying it myself. I'm pretty sure Krieger and Walcher's claim is correct—for example, the 1958 paper by Koecher that is generally cited (along with a 1960 paper by Vin'berg) for the proof of the celebrated result that the (closed) finite-dimensional homogeneous self-dual cones are precisely the cones of squares in finite dimensional formally real Jordan algebras, is titled "The Geodesics of Domains of Positivity" (but in German).
The most natural way to prove this would be to find a positive semidefinite nondegenerate $B'$, such that the cone is a domain of positivity for $B'$ as well. In principle, $B'$ might depend on the domain $Y$. (While maximal in the subset ordering, domains of positivity for a given form $B$ are not unique.) But a tempting possibility, independent of $Y$, is to transform to a basis for $X$ in which $B$ is diagonal, with diagonal elements $\pm 1$, change the minus signs to plus signs, and transform back to obtain $B'$.
To clarify the question: we will define a cone $K$ in a real vector space $X$ to be self-dual iff there
exists an inner product—that is, a positive definite bilinear form $\langle . , . \rangle: X \times X \rightarrow \mathbb{R}$—such that $K = K^*_{\langle . , . \rangle}$. Here $K^*_{\langle . , . \rangle}$ is the dual with respect to the inner product $\langle . , . \rangle$, that is $K^*_{\langle . , . \rangle} := \{ y \in X: \forall x \in \overline{K}\setminus\{0\} ~\langle y, x \rangle > 0 \}$. So in asking for a proof that a domain of positivity is a self-dual cone, we are asking whether some inner product $\langle . , . \rangle$ with respect to which $K$ is self-dual exists. Above, I considered the case $K=Y$, and called the inner product I was looking for, $B'$.
Does anyone know, or can anyone come up with, a proof? |
Challenging integral marathon!
The idea is poached from the AOPS forum.
Rules: Solve the integral in the previous post (show your working) and provide the next challenging integral for the next person. This is open to suggestion/refinement, I'm just getting us started..
Problem 1 (by greg1313)
Solution 1 (by Skipjack)
Problem 2
$$ \int_0^\infty x e^{1-x} - \left\lfloor{x}\right\rfloor e^{1 - \left\lfloor{x}\right\rfloor} \, dx$$
\begin{align}\int_0^\infty x e^{1-x} - \left\lfloor{x}\right\rfloor e^{1 - \left\lfloor{x}\right\rfloor} \, \mathrm dx &= \int_0^\infty x e^{1-x} \,\mathrm dx - \left( 1+2e^{-1} + 3e^{-2} + \ldots \right) \\ &= -xe^{1-x}\bigg|_0^\infty + \int_0^\infty e^{1-x} \,\mathrm dx - \left( 1\left(e^{-1}\right)^0+2\left(e^{-1}\right)^1 + 3\left(e^{-1}\right)^2 + \ldots \right) \\ &= 0 - e^{1-x}\bigg|_0^\infty - \left(\frac{\mathrm d}{\mathrm dx} \frac{1}{1-x}\right)_{x=e^{-1}} \\ &= e-\left(\frac{1}{(1-x)^{2}}\right)_{x=e^{-1}} \\ &= e - \left( \frac1{1-e^{-1}}\right)^2 \\ &= e -\frac{e^2}{e^2-2e+1} \\ &= \frac{{e^3-2e^2+e}-e^2}{e^2-2e+1} \\ &= \frac{e^3-3e^2+e}{e^2-2e+1}
\end{align}
Rewrite it as a sum
\[ \int_0^\infty x e^{1-x} - \lfloor x \rfloor e^{1-\lfloor x \rfloor} \, dx = \sum_{n=0}^\infty \int_n^{n+1} x e^{1-x} \ dx - ne^{1-n} \]
Now compute the integral via $\frac{d}{dx}(x+1)e^{1-x} = -xe^{1-x}$ and evaluate on $[n,n+1]$: the $n$th term is $(n+1)e^{1-n} - (n+2)e^{-n} - ne^{1-n} = (e-2-n)e^{-n}$. This simplifies the sum to
\[ \sum_{n=0}^\infty (e-2-n)e^{-n} \]
but this splits into a geometric series and the derivative of one: since $\sum_{n=0}^\infty x^n = \frac{1}{1-x}$ converges uniformly for $|x| \leq r < 1$, we have $\sum_{n=0}^\infty e^{-n} = \frac{e}{e-1}$ and $\sum_{n=0}^\infty n e^{-n} = \frac{e}{(e-1)^2}$, so \[ \sum_{n=0}^\infty (e-2-n)e^{-n} = (e-2)\frac{e}{e-1} - \frac{e}{(e-1)^2} = \frac{e(e^2-3e+1)}{(e-1)^2}\]
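A numeric sanity check is cheap here, since $x e^{1-x}$ has the exact antiderivative $-(x+1)e^{1-x}$ on each interval $[n, n+1]$:

```python
import math

# Sum exact per-interval contributions of x*e^(1-x) minus the floor term
# n*e^(1-n); the tail beyond n = 60 is negligible (order e^(-58)).
def anti(x):
    return -(x + 1.0) * math.exp(1.0 - x)

total = sum((anti(n + 1) - anti(n)) - n * math.exp(1.0 - n) for n in range(60))
closed_form = math.e - math.e**2 / (math.e - 1.0) ** 2
print(total, closed_form)  # both ~0.21563
```

which agrees with $e - \dfrac{e^2}{(e-1)^2} = \dfrac{e^3-3e^2+e}{(e-1)^2} \approx 0.21563$.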
Alternate solutions are good but please provide a new problem afterwards so we can keep it going :).
$$\int \frac{\sqrt{x+1} - \sqrt{x-1}}{\sqrt{x+1}+\sqrt{x-1}} \,\mathrm dx$$
Answers without trigonometric or hyperbolic functions.
This one is quite good. I still haven't found a solution without trig/complex exponentials. Looking forward to reading the solution when somebody smarter than me figures it out.
I don't mean that you can't use them to get to your answer, but the final answer shouldn't contain them.
Start by noticing the factorization $\displaystyle \frac{2}{\sqrt{x+1}+\sqrt{x-1}} = {\sqrt{x+1}-\sqrt{x-1}}$. Substituting for the denominator of the integrand and simplifying, we have $\displaystyle F = \int x - \sqrt{x^2 - 1} \ dx = \frac{x^2}{2} - \int \sqrt{x^2 - 1} \ dx$
Next, we recall the integral identity $\displaystyle \int f(x) \ dx = xf(x) - \int xf'(x) dx $ which holds for $f \in C^1\!$. Applying this to $f = \sqrt{x^2-1}$ we get $\displaystyle F = \frac{x^2}{2} - \int \sqrt{x^2 - 1} \ dx = \frac{x^2}{2} - x\sqrt{x^2-1} + \int \frac{x^2}{\sqrt{x^2-1}} \ dx $
Write $\displaystyle \frac{x^2}{\sqrt{x^2-1}} = \sqrt{x^2-1} + \frac{1}{\sqrt{x^2-1}}$ so we can solve for the integral of $f$:
$\displaystyle 2 \int \sqrt{x^2-1}\,dx = x \sqrt{x^2-1} - \int \frac{1}{\sqrt{x^2-1}}\,dx $
Finally, we can solve this last integral by trig substitution with $x = \sec (s)$ and noting that $\int \sec s \ ds = \log( \tan s + \sec s)$.
Adding up this mess, we get $\displaystyle F = \frac{x^2}{2} - \frac{1}{2} \left(x \sqrt{x^2-1} - \log{\large(}\sqrt{x^2-1} + x{\large)} \right) $.
Sorry about the formatting. For some reason, only inline math mode will compile properly.
Anyways, I remembered the following integral which kept me busy for a few hours (or more) in grad school. Enjoy.
Compute
$\int_0^{2\pi} \log \Gamma (\frac{x}{2 \pi}) \exp(\cos x) \sin(x + \sin x) \ dx $ where $\Gamma$ is the usual gamma function.
The correlated colour temperature, denoted $T_{cp}$ and abbreviated $CCT$, is the temperature of the Planckian radiator having the chromaticity nearest the chromaticity associated with the given spectral distribution on a diagram where the (CIE 1931 2° Standard Observer based) $u^\prime, \cfrac{2}{3}v^\prime$ coordinates of the Planckian locus and the test stimulus are depicted. [2]
The CIE Standard Illuminant A, CIE Standard Illuminant D65 and CIE Illuminant E illuminants plotted in the CIE 1960 UCS Chromaticity Diagram:
import colour
from colour.plotting import *

colour_style();

with colour.utilities.suppress_warnings(python_warnings=True):
    plot_planckian_locus_in_chromaticity_diagram_CIE1960UCS(['A', 'D65', 'E']);

# Zooming into the *Planckian Locus*.
with colour.utilities.suppress_warnings(python_warnings=True):
    plot_planckian_locus_in_chromaticity_diagram_CIE1960UCS(
        ['A', 'D65', 'E'], bounding_box=[0.15, 0.35, 0.25, 0.45]);
The concept of correlated colour temperature should not be used if the chromaticity of the test source differs by more than $\Delta C=5\cdot10^{-2}$ from the Planckian radiator, with: [2]$$ \Delta C= \Biggl[ \Bigl(u_t^\prime-u_p^\prime\Bigr)^2+\cfrac{4}{9}\Bigl(v_t^\prime-v_p^\prime\Bigr)^2\Biggr]^{1/2} $$
where $u_t^\prime$, $u_p^\prime$ refer to the test source, $v_t^\prime$, $v_p^\prime$ to the Planckian radiator.
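The $\Delta C$ criterion is straightforward to implement directly from the formula above (a hypothetical helper, not part of the colour API):

```python
def delta_C(uv_test, uv_planckian):
    """Distance between test source and Planckian radiator in the
    (u', 2/3 v') diagram; CCT is not meaningful when this exceeds 5e-2."""
    ut, vt = uv_test
    up, vp = uv_planckian
    return ((ut - up) ** 2 + (4.0 / 9.0) * (vt - vp) ** 2) ** 0.5

print(delta_C((0.2501, 0.3332), (0.2500, 0.3333)))
```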
Colour implements various methods for computing the correlated colour temperature $T_{cp}$ from chromaticity coordinates $xy$ or $uv$, and for computing chromaticity coordinates $xy$ or $uv$ from correlated colour temperature:
The Robertson (1968) method is based on computing $T_{cp}$ by linear interpolation between two adjacent members of a defined set of 31 isotemperature lines. [3]
In the CIE 1960 UCS chromaticity diagram, the distance $d_i$ of the chromaticity point of a given source ($u_s$, $v_s$) from each chromaticity point ($u_i$, $v_i$) through which the $i$th isotemperature line of slope $t_i$ passes is calculated as follows: [3]$$ \begin{equation} d_i=\cfrac{(v_s-v_i)-t_i(u_s-u_i)}{\bigl(1+t_i^2\bigr)^{1/2}} \end{equation} $$
The chromaticity point ($u_s$, $v_s$) is located between the adjacent isotemperature lines $j$ and $j + 1$ if $d_j/d_{j+1} < 0$, in which case$$ \begin{equation} T_c=\Biggl[\cfrac{1}{T_j}+\cfrac{\theta_1}{\theta_1+\theta_2}\biggl(\cfrac{1}{T_{j+1}}-\cfrac{1}{T_j}\biggr)\Biggr]^{-1} \end{equation} $$
where $\theta_1$ and $\theta_2$ are respectively the angles between the isotemperature lines $T_j$ and $T_{j+1}$ and the line joining ($u_s$, $v_s$) to their intersection. Since the isotemperature lines are narrow spaced $\theta_1$ and $\theta_2$ are small enough that one can set $\theta_1/\theta_2 = \sin\theta_1/\sin\theta_2$. The above equation can then be written:$$ \begin{equation} T_c=\Biggl[\cfrac{1}{T_j}+\cfrac{d_j}{d_j-d_{j+1}}\biggl(\cfrac{1}{T_{j+1}}-\cfrac{1}{T_j}\biggr)\Biggr]^{-1} \end{equation} $$
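The final interpolation step can be isolated into a tiny function (a sketch; $d_j$, $d_{j+1}$ are the signed distances to the two adjacent isotemperature lines, which have opposite signs when the point lies between them):

```python
def robertson_interpolate(Tj, Tj1, dj, dj1):
    """Interpolate 1/T linearly between isotemperature lines j and j+1."""
    return 1.0 / (1.0 / Tj + dj / (dj - dj1) * (1.0 / Tj1 - 1.0 / Tj))

# Made-up example: a source 30% of the way from line j to line j+1.
print(robertson_interpolate(6000.0, 6500.0, 0.3, -0.7))
```

Note that the interpolation is linear in reciprocal temperature (mired-like), not in $T$ itself.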
The colour.uv_to_CCT_Robertson1968 definition is used to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$ ($d_i$):
colour.temperature.uv_to_CCT_Robertson1968((0.19783451566098664, 0.31221744678060825))
array([ 6.50303994e+03, 3.25561654e-03])
The colour.uv_to_CCT definition is implemented as a wrapper for various correlated colour temperature computation methods:
colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'Robertson 1968')
array([ 6.50303994e+03, 3.25561654e-03])
Note: 'robertson1968' is defined as a convenient alias for 'Robertson 1968':
colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'robertson1968')
array([ 6.50303994e+03, 3.25561654e-03])
Converting from correlated colour temperature $T_{cp}$ and distance $D_{uv}$ to chomaticity coordinates $uv$:
colour.CCT_to_uv((6503.03994225557, 0.0032556165414977167), 'Robertson 1968')
array([ 0.19783447, 0.31221739])
colour.CCT_to_uv((6503.03994225557, 0.0032556165414977167), 'robertson1968')
array([ 0.19783447, 0.31221739])
Ohno (2013) presented new practical and accurate methods to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$, with an error of 1 K for $T_{cp}$ in the range from 1000 K to 20,000 K and $D_{uv}$ in the range $\pm$0.03. [4]
The correlated colour temperature is calculated by searching for the closest point on the Planckian locus in the CIE 1960 UCS chromaticity diagram, but without the complexity of the Robertson (1968) method.
A table of coordinates ($U_i$, $V_i$) on the Planckian locus (the Planckian ($u$, $v$) table) is generated over the estimated range of correlated colour temperature needed, and then the distance $d_i$ from the chromaticity coordinates ($U_x$, $V_x$) of the test light source is calculated.
The point $i = m$ is the point where $d_i$ is the smallest in the table ensuring that the correlated colour temperature to be obtained lies between $T_{m-1}$ and $T_{m+1}$.
The previous computation is repeated $n$ times through cascade expansion in order to reduce errors.
A triangle is then formed by the chromaticity point ($U_x$, $V_x$) of the test light source and the chromaticity points on the Planckian locus at $T_{m-1}$ and $T_{m+1}$. The blackbody temperature $T_x$ for the closest point to the line between $T_{m-1}$ and $T_{m+1}$ is calculated as follows: [4]$$ \begin{equation} T_x=T_{m-1}+(T_{m+1}-T_{m-1})\cdot\cfrac{x}{l} \end{equation} $$
with$$ \begin{equation} \begin{aligned} x&=\cfrac{d_{m-1}^2-d_{m+1}^2+l^2}{2l}\\ l&=\sqrt{(u_{m+1}-u_{m-1})^2+(v_{m+1}-v_{m-1})^2} \end{aligned} \end{equation} $$
$D_{uv}$ is then calculated as follows:$$ \begin{equation} D_{uv}=(d_{m-1}^2-x^2)^{1/2}\cdot SIGN(v_x-v_{T_x}) \end{equation} $$
with$$ \begin{equation} \begin{aligned} v_{T_x}&=v_{m-1}+\{v_{m+1}-v_{m-1}\}\cdot x/l\\ SIGN(z)&=1\ for\ z \geq0\ and\ SIGN(z)=-1\ for\ z <0 \end{aligned} \end{equation} $$
Errors due to the non-linearity of the correlated colour temperature scale on ($u$, $v$) coordinates are reduced by applying the following correction:$$ \begin{equation} T_{x,cor}=T_x\times 0.99991 \end{equation} $$
This correction is not needed for Planckian ($u$, $v$) table with steps of 0.25% or smaller.
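The triangular solution above is a few lines of plane geometry; here is a sketch directly from the formulas (the function and example numbers are mine, with ($u_{m\pm1}$, $v_{m\pm1}$) the bracketing Planckian table entries):

```python
import math

def ohno_triangular(Tm1, um1, vm1, Tp1, up1, vp1, ux, vx):
    """Triangular CCT/Duv solution between table points T_{m-1} and T_{m+1}."""
    l = math.hypot(up1 - um1, vp1 - vm1)
    dm1 = math.hypot(ux - um1, vx - vm1)
    dp1 = math.hypot(ux - up1, vx - vp1)
    x = (dm1**2 - dp1**2 + l**2) / (2.0 * l)
    Tx = Tm1 + (Tp1 - Tm1) * x / l
    vTx = vm1 + (vp1 - vm1) * x / l
    Duv = math.copysign(math.sqrt(max(dm1**2 - x**2, 0.0)), vx - vTx)
    return Tx * 0.99991, Duv  # includes the non-linearity correction factor

# Made-up near-locus example: test point slightly above the chord.
T, Duv = ohno_triangular(6400.0, 0.1980, 0.3118, 6600.0, 0.1960, 0.3110,
                         0.1970, 0.3118)
print(T, Duv)
```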
After finding $T_{m-1}$ and $T_{m+1}$ as described in the triangular solution method above, $d_{m−1}$, $d_m$, $d_{m+1}$ are fitted to a parabolic function. The polynomial is derived from $d_{m−1}$, $d_m$, $d_{m+1}$ and $T_{m−1}$, $T_m$, $T_{m+1}$ as: [4]$$ \begin{equation} d(T)=aT^2+bT+c \end{equation} $$
where$$ \begin{equation} \begin{aligned} a&\ =[T_{m-1}(d_{m+1}-d_m)+T_m(d_{m-1}-d_{m+1})+T_{m+1}(d_m-d_{m-1})]\cdot X^{-1}\\ b&\ =-[T_{m-1}^2(d_{m+1}-d_m)+T_m^2(d_{m-1}-d_{m+1})+T_{m+1}^2(d_m-d_{m-1})]\cdot X^{-1}\\ c&\ =-[d_{m-1}(T_{m+1}-T_m)\cdot T_m\cdot T_{m+1}+d_m(T_{m-1}-T_{m+1})\cdot T_{m-1}\cdot T_{m+1}+d_{m+1}(T_m-T_{m-1})\cdot T_{m-1}\cdot T_m]\cdot X^{-1} \end{aligned} \end{equation} $$
with$$ \begin{equation} X=(T_{m+1}-T_m)(T_{m-1}-T_{m+1})(T_m-T_{m-1}) \end{equation} $$
The correlated colour temperature $T=T_x$ is then obtained as follows:$$ \begin{equation} T_X=-\cfrac{b}{2a}\qquad\because d^\prime(T)=2aT_x+b=0 \end{equation} $$
The nonlinearity correction producing $T_{x,cor}$ is applied as described in the triangular solution method.
$D_{uv}$ is then calculated as follows:$$ \begin{equation} D_{uv}=SIGN(v_x-v_{T_x})\cdot(aT_{x,cor}^2+bT_{x,cor}+c) \end{equation} $$
with$$ \begin{equation} SIGN(z)=1\ for\ z \geq0\ and\ SIGN(z)=-1\ for\ z <0 \end{equation} $$
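A hedged sketch of the parabolic step in plain Python, directly transcribing the coefficient equations above (the name is illustrative, not the library's API):

```python
# Fit d(T) = a*T^2 + b*T + c through three samples (T_{m-1}, T_m, T_{m+1})
# and return the vertex T_x = -b/(2a), following the equations above.
def parabolic_cct(Ts, ds):
    (T0, T1, T2), (d0, d1, d2) = Ts, ds
    X = (T2 - T1) * (T0 - T2) * (T1 - T0)
    a = (T0 * (d2 - d1) + T1 * (d0 - d2) + T2 * (d1 - d0)) / X
    b = -(T0 ** 2 * (d2 - d1) + T1 ** 2 * (d0 - d2) + T2 ** 2 * (d1 - d0)) / X
    c = -(d0 * (T2 - T1) * T1 * T2 + d1 * (T0 - T2) * T0 * T2
          + d2 * (T1 - T0) * T0 * T1) / X
    return -b / (2 * a), (a, b, c)
```

For synthetic distances sampled from an exact parabola, the fit recovers the vertex and the distance at the vertex exactly.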
The parabolic solution works accurately except on or near the Planckian locus. Taking the triangular solution results for $|D_{uv}| < 0.002$ and the parabolic solution results elsewhere solves that problem. [4]
The colour.temperature.uv_to_CCT_Ohno2013 definition is used to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$ ($d_i$):
colour.temperature.uv_to_CCT_Ohno2013((0.19783451566098664, 0.31221744678060825))
array([ 6.50364955e+03, 3.20563638e-03])
Precision can be changed by passing a value to the iterations argument:
for i in range(10):
    print(colour.temperature.uv_to_CCT_Ohno2013(
        (0.19783451566098664, 0.31221744678060825), iterations=i + 1))
[ 1.68848187e+04 2.66789445e-03]
[ 6.79456522e+03 3.46904097e-03]
[ 6.55383806e+03 3.48674101e-03]
[ 6.50607833e+03 3.20681676e-03]
[ 6.50374738e+03 3.20565144e-03]
[ 6.50364955e+03 3.20563638e-03]
[ 6.50364731e+03 3.20563610e-03]
[ 6.50364722e+03 3.20563604e-03]
[ 6.50364721e+03 3.20563264e-03]
[ 6.50364721e+03 3.20561976e-03]
Using the colour.uv_to_CCT wrapper definition:
colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'Ohno 2013')
array([ 6.50364955e+03, 3.20563638e-03])
Note: 'ohno2013' is defined as a convenient alias for 'Ohno 2013':
colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'ohno2013')
array([ 6.50364955e+03, 3.20563638e-03])
Converting from correlated colour temperature $T_{cp}$ and distance $D_{uv}$ to chromaticity coordinates $uv$:
colour.CCT_to_uv((6503.03994225557, 0.0032556165414977167), 'Ohno 2013')
array([ 0.19779726, 0.31225121])
McCamy (1992) proposed an equation to compute the correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ using a chromaticity epicenter ($x_e$, $y_e$), toward which the isotemperature lines converge over part of the correlated colour temperature range, and the inverse slope $n$ of the line that connects it to ($x$, $y$). [5]
The cubic approximation equation is defined as follows: [5]$$ \begin{equation} T_{cp}=-449n^3+3525n^2-6823.3n+5520.33 \end{equation} $$
where$$ \begin{equation} n=\cfrac{x-x_e}{y-y_e}\\ x_e=0.3320\qquad y_e=0.1858 \end{equation} $$
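McCamy's cubic can be transcribed directly into plain Python (a sketch; the colour-science API differs):

```python
# Direct transcription of McCamy's (1992) cubic approximation:
# n is the inverse slope of the line to the epicentre (x_e, y_e).
def cct_mccamy(x, y):
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n ** 3 + 3525.0 * n ** 2 - 6823.3 * n + 5520.33
```

Evaluating it at the D65 chromaticity used in the examples reproduces the documented result to well under a kelvin.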
The colour.temperature.xy_to_CCT_McCamy1992 definition is used to calculate the correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:
colour.temperature.xy_to_CCT_McCamy1992((0.31271, 0.32902))
6504.3893830489724
The colour.xy_to_CCT definition is implemented as a wrapper for the various correlated colour temperature computation methods from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:
colour.xy_to_CCT((0.31271, 0.32902), 'McCamy 1992')
6504.3893830489724
Note: 'mccamy1992' is defined as a convenient alias for 'McCamy 1992':
colour.xy_to_CCT((0.31271, 0.32902), 'mccamy1992')
6504.3893830489724
Hernandez-Andres, Lee and Romero (1999) extended McCamy's (1992) work by using a second epicenter to extend the accuracy over a wider range of correlated colour temperatures and chromaticity coordinates ($3000K$–$10^6K$). [6]
The new extended equation to calculate the correlated colour temperature $T_{cp}$ is defined as follows: [6]$$ \begin{equation} T_{cp}=A_0+A_1\exp(-n/t_1)+A_2\exp(-n/t_2)+A_3\exp(-n/t_3) \end{equation} $$
where$$ \begin{equation} n=\cfrac{x-x_e}{y-y_e} \end{equation} $$
with
| Constants | $T_{cp}$ Range ($K$): $3000$-$50,000$ | $T_{cp}$ Range ($K$): $50,000$-$8\times10^5$ |
|---|---|---|
| $A_0$ | $-949.86315$ | $36284.48953$ |
| $A_1$ | $6253.80338$ | $0.00228$ |
| $t_1$ | $0.92159$ | $0.07861$ |
| $A_2$ | $28.70599$ | $5.4535\times10^{-36}$ |
| $t_2$ | $0.20039$ | $0.01543$ |
| $A_3$ | $0.00004$ | |
| $t_3$ | $0.07125$ | |
| $x_e$ | $0.3366$ | $0.3356$ |
| $y_e$ | $0.1735$ | $0.1691$ |
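The exponential model can be sketched in plain Python with the low-range constants ($3000K$–$50,000K$) from the table above; illustrative only, not the library's implementation:

```python
import math

# Hernandez-Andres et al. (1999) exponential model, low-range constants.
def cct_hernandez(x, y):
    n = (x - 0.3366) / (y - 0.1735)  # inverse slope to the first epicentre
    return (-949.86315
            + 6253.80338 * math.exp(-n / 0.92159)
            + 28.70599 * math.exp(-n / 0.20039)
            + 0.00004 * math.exp(-n / 0.07125))
```

At the D65 chromaticity used in the examples this reproduces the documented result to a small fraction of a kelvin.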
The colour.temperature.xy_to_CCT_Hernandez1999 definition is used to calculate the correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:
colour.temperature.xy_to_CCT_Hernandez1999((0.31271, 0.32902))
6500.0421533365825
Using the colour.xy_to_CCT wrapper definition:
colour.xy_to_CCT((0.31271, 0.32902), 'Hernandez 1999')
6500.0421533365825
Note: 'hernandez1999' is defined as a convenient alias for 'Hernandez 1999':
colour.xy_to_CCT((0.31271, 0.32902), 'hernandez1999')
6500.0421533365825
Krystek (1985) proposed a polynomial approximation valid from $1000K$ to $15000K$. [7]
The CIE UCS colourspace chromaticity coordinates $u$, $v$ are given by the following equations: [7]
The colour.temperature.CCT_to_uv_Krystek1985 definition is used to calculate the CIE UCS colourspace chromaticity coordinates $u$, $v$ from correlated colour temperature $T_{cp}$:
colour.temperature.CCT_to_uv_Krystek1985(6504.389383048972)
array([ 0.18376696, 0.30934437])
Using the colour.CCT_to_uv wrapper definition:
colour.CCT_to_uv(6504.389383048972, 'Krystek 1985')
array([ 0.18376696, 0.30934437])
Kang et al. (2002) proposed an advanced colour-temperature control system for HDTV applications in the range from $1667K$ to $25000K$. [8]
The CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ are given by the following equations: [8]
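The source equations are not reproduced above. As a hedged sketch, the upper branch of the Kang et al. (2002) cubic approximation can be written as follows; the coefficients are quoted from the literature as commonly published, an assumption rather than something taken from this document:

```python
# Kang et al. (2002) cubic-spline approximation, upper branch only
# (coefficients assumed from the commonly published form).
def cct_to_xy_kang(T):
    if not 4000 < T <= 25000:
        raise ValueError("sketch covers only the 4000-25000 K branch")
    x = (-3.0258469e9 / T ** 3 + 2.1070379e6 / T ** 2
         + 0.2226347e3 / T + 0.240390)
    y = (3.0817580 * x ** 3 - 5.8733867 * x ** 2
         + 3.7511299 * x - 0.3700148)
    return x, y
```

This reproduces the library output shown below for $T_{cp} = 6504.389$ to about five decimal places, which suggests the assumed coefficients match.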
The colour.temperature.CCT_to_xy_Kang2002 definition is used to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ from correlated colour temperature $T_{cp}$:
colour.temperature.CCT_to_xy_Kang2002(6504.389383048972)
array([ 0.313426 , 0.32359597])
The colour.CCT_to_xy definition is implemented as a wrapper for the various CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ computation methods from correlated colour temperature:
colour.CCT_to_xy(6504.389383048972, 'Kang 2002')
array([ 0.313426 , 0.32359597])
Note: 'kang2002' is defined as a convenient alias for 'Kang 2002':
colour.CCT_to_xy(6504.389383048972, 'kang2002')
array([ 0.313426 , 0.32359597])
Judd et al. (1964) defined the following equations to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x_D$, $y_D$ of a CIE Illuminant D Series: [9]
The colour.temperature.CCT_to_xy_CIE_D definition is used to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ of a CIE Illuminant D Series from correlated colour temperature $T_{cp}$:
colour.temperature.CCT_to_xy_CIE_D(6504.389383048972)
array([ 0.31270775, 0.32911283])
Using the colour.CCT_to_xy wrapper definition:
colour.CCT_to_xy(6504.389383048972, 'CIE Illuminant D Series')
array([ 0.31270775, 0.32911283])
Note: 'cie_d' is defined as a convenient alias for 'CIE Illuminant D Series':
colour.CCT_to_xy(6504.389383048972, 'cie_d')
array([ 0.31270775, 0.32911283]) |
I was looking at a book on FEM for diffusion-transport problems.
$$-\operatorname{div}(\mu \nabla u) + b \cdot \nabla u + \gamma u = f \qquad \text{in } \Omega$$ $$u = 0 \qquad \text{on } \partial\Omega$$
It says that if $\displaystyle \frac{|b|}{\mu} \gg 1$ then the problem is a problem dominated by transport.
Taking some stuff from the previous chapter, more precisely the Galerkin analysis of stability and convergence and the approximation of the error, we have
$$\displaystyle M \cong \mu + |b| \qquad \textrm{continuity constant}$$ $$\alpha = \mu \qquad \textrm{coercivity constant}$$
Then one has $$\displaystyle \frac{M}{\alpha} \cong1+\frac{|b|}{\mu} \gg 1$$
As a consequence of that, it concludes that the error estimate is $$\displaystyle ||u -u_h|| \leq C\frac{M}{\alpha}h^r|u|_{H^{r+1}(\Omega)}$$
which tells us that Galerkin's method could give unsatisfactory results if the space step $h$ isn't small enough.
I can't understand how it can conclude such a thing. Any idea? Please, do not give complex explanations; I'm just trying to understand this FEM stuff, since I'm a computer science student more than a math one.
A quadratic of the form
$$h(t)=gt^2+v_0t+h_0$$
where $t$ is time in seconds, $g$ is acceleration due to gravity, and $h(t)$ is height in meters - can be used to approximate the theoretical (no wind resistance, etc.) trajectory of a projectile. If $h_0=0$ and the ground is level, the initial velocity is $v_0$, and the firing angle is $\theta = \arcsin{(v_0)}$. Accordingly, increasing $v_0$ rapidly increases $\theta$.
Here is my question:
At the level of a high school geometry student, how can I change my $v_0$ while leaving $\theta$ constant? In other words, how can I modify my $h(t)$ equation so that $\theta$ is fixed but $v_0$ can be modified? Assume understanding of basic trig ratios, but very little understanding of trig functions. For instance, how could I obtain a quadratic model with a $v_0$ of 40 m/s and an initial firing angle of $\frac{\pi}{4}$ radians ($45°$)?
The goal is to allow students to find a particular quadratic model (that accounts for gravity correctly but not other factors) that can be used to launch a projectile to hit a target.
Posting solutions of any kind is appreciated; posting a solution in high school level math is greatly appreciated. |
Longitude lines are perpendicular and latitude lines are parallel to the equator.
A geographic coordinate system is a coordinate system that enables every location on the Earth to be specified by a set of numbers or letters. The coordinates are often chosen such that one of the numbers represents vertical position, and two or three of the numbers represent horizontal position. A common choice of coordinates is latitude, longitude and elevation. [1]
Contents
1 Geographic latitude and longitude
2 UTM and UPS systems
3 Stereographic coordinate system
4 Geodetic height
5 Cartesian coordinates
6 Shape of the Earth
7 Expressing latitude and longitude as linear units
8 Datums often encountered
9 Geostationary coordinates
10 On other celestial bodies
11 See also
12 Notes
13 References
14 External links

Geographic latitude and longitude
The "latitude" (abbreviation: Lat., φ, or phi) of a point on the Earth's surface is the angle between the equatorial plane and the straight line that passes through that point and is normal to the surface of a reference ellipsoid which approximates the shape of the Earth. [n 1] This line passes a few kilometers away from the center of the Earth except at the poles and the equator where it passes through Earth's center. [n 2] Lines joining points of the same latitude trace circles on the surface of the Earth called parallels, as they are parallel to the equator and to each other. The north pole is 90° N; the south pole is 90° S. The 0° parallel of latitude is designated the equator, the fundamental plane of all geographic coordinate systems. The equator divides the globe into Northern and Southern Hemispheres.
The "longitude" (abbreviation: Long., λ, or lambda) of a point on the Earth's surface is the angle east or west from a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses (often improperly called great circles), which converge at the north and south poles.
A line, which was intended to pass through the Royal Observatory, Greenwich (a suburb of London, UK), was chosen as the international zero-longitude reference line, the Prime Meridian. Places to the east are in the eastern hemisphere, and places to the west are in the western hemisphere. The antipodal meridian of Greenwich is both 180°W and 180°E. The zero/zero point is located in the Gulf of Guinea about 625 km south of Tema, Ghana.
In 1884 the United States hosted the International Meridian Conference, at which delegates from twenty-five nations voted to adopt the Greenwich meridian as the world's prime meridian. [2]
See also
- Mathematics Topics-Coordinate Systems

Notes
1. ^ The surface of the Earth is closer to an ellipsoid than to a sphere, as its equatorial diameter is larger than its north-south axis.
2. ^ The greatest distance between an ellipsoid normal and the center of the Earth is 21.9 km at a latitude of 45°, using Earth radius#Radius at a given geodetic latitude and Latitude#Numerical comparison of auxiliary latitudes: (6367.5 km)×tan(11.67')=21.9 km.
3. ^ The French Institut Géographique National (IGN) maps still use longitude from a meridian passing through Paris, along with longitude from Greenwich.
4. ^ WGS 84 is the default datum used in most GPS equipment, but other datums can be selected.

References
1. ^ a b c d e f A Guide to coordinate systems in Great Britain, v1.7, October 2007, D00659, accessed 14.4.2008.
2. ^ Greenwich 2000 Limited (9 June 2011). "The International Meridian Conference". Wwp.millennium-dome.com. Retrieved 31 October 2012.
3. ^ DMA Technical Report Geodesy for the Layman, The Defense Mapping Agency, 1983.
4. ^ "Making maps compatible with GPS". Government of Ireland 1999. Archived from the original on 21 July 2011. Retrieved 15 April 2008.

External links
- Geographic coordinates of countries (CIA World Factbook)
- Coordinates conversion tool (batch conversions of Decimal, DM, DMS and UTM)
- FCC coordinates conversion tool (DD to DMS/DMS to DD)
- Coordinate converter, formats: DD, DMS, DM
- Latitude and Longitude

Portions of this article are from Jason Harris' "Astroinfo" which is distributed with KStars, a desktop planetarium for Linux/KDE. See The KDE Education Project - KStars.
On other celestial bodies
Similar coordinate systems are defined for other celestial bodies such as:
Geostationary coordinates
Geostationary satellites (e.g., television satellites) are over the equator at a specific point on Earth, so their position related to Earth is expressed in longitude degrees only. Their latitude is always zero, that is, over the equator.
Datums often encountered
Latitude and longitude values can be based on different geodetic systems or datums, and one datum can sometimes be roughly changed into another using a simple translation. For example, to convert from ETRF89 (GPS) to the Irish Grid add 49 metres to the east, and subtract 23.4 metres from the north. [4] More generally one datum is changed into any other datum using a process called Helmert transformations. This involves converting the spherical coordinates into Cartesian coordinates, applying a seven parameter transformation (translation and three-dimensional rotation), and converting back. [1]

In popular GIS software, data projected in latitude/longitude is often represented as a 'Geographic Coordinate System'. For example, data in latitude/longitude if the datum is the North American Datum of 1983 is denoted by 'GCS North American 1983'.
Expressing latitude and longitude as linear units

On the GRS80 or WGS84 spheroid at sea level at the equator, one latitudinal second measures 30.715 metres, one latitudinal minute is 1843 metres and one latitudinal degree is 110.6 kilometres. The circles of longitude, meridians, meet at the geographical poles, with the west-east width of a second naturally decreasing as latitude increases. On the equator at sea level, one longitudinal second measures 30.92 metres, a longitudinal minute is 1855 metres and a longitudinal degree is 111.3 kilometres. At 30° a longitudinal second is 26.76 metres, at Greenwich (51° 28' 38" N) 19.22 metres, and at 60° it is 15.42 metres.

On the WGS84 spheroid, the length in meters of a degree of latitude at latitude φ (that is, the distance along a north-south line from latitude (φ - 0.5) degrees to (φ + 0.5) degrees) is about

$$111132.954 - 559.822\, \cos 2\varphi + 1.175\, \cos 4\varphi$$

(Those coefficients can be improved, but as they stand the distance they give is correct within a centimeter.)

To estimate the length of a longitudinal degree at latitude $\phi$ we can assume a spherical Earth (to get the width per minute and second, divide by 60 and 3600, respectively):

$$\frac{\pi}{180}M_r\cos \phi$$

where Earth's average meridional radius $M_r$ is 6,367,449 m. Since the Earth is not spherical that result can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude $\phi$ is

$$\frac{\pi}{180}a \cos \beta$$

where Earth's equatorial radius $a$ equals 6,378,137 m and $\tan \beta = \frac{b}{a}\tan\phi$; for the GRS80 and WGS84 spheroids, $b/a$ calculates to be 0.99664719. ($\beta$ is known as the reduced (or parametric) latitude.) Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 meter of each other if the two points are one degree of longitude apart.

Longitudinal length equivalents at selected latitudes:

| Latitude | City | Degree | Minute | Second | ±0.0001° |
|---|---|---|---|---|---|
| 60° | Saint Petersburg | 55.80 km | 0.930 km | 15.50 m | 5.58 m |
| 51° 28' 38" N | Greenwich | 69.47 km | 1.158 km | 19.30 m | 6.95 m |
| 45° | Bordeaux | 78.85 km | 1.31 km | 21.90 m | 7.89 m |
| 30° | New Orleans | 96.49 km | 1.61 km | 26.80 m | 9.65 m |
| 0° | Quito | 111.3 km | 1.855 km | 30.92 m | 11.13 m |
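The degree-length formulas above can be sketched in a few lines of Python (the constants $a = 6{,}378{,}137$ m and $b/a = 0.99664719$ are taken from the text; the function names are illustrative):

```python
import math

A = 6378137.0        # WGS84 equatorial radius in metres (from the text)
B_OVER_A = 0.99664719

def latitude_degree_length(phi_deg):
    """Metres per degree of latitude at latitude phi (series approximation)."""
    phi = math.radians(phi_deg)
    return 111132.954 - 559.822 * math.cos(2 * phi) + 1.175 * math.cos(4 * phi)

def longitude_degree_length(phi_deg):
    """Metres per degree of longitude along the parallel at latitude phi."""
    beta = math.atan(B_OVER_A * math.tan(math.radians(phi_deg)))  # reduced latitude
    return math.pi / 180 * A * math.cos(beta)
```

At the equator this gives about 111.3 km per longitudinal degree and at 30° about 96.5 km, matching the table of longitudinal length equivalents.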
The Earth is not static as points move relative to each other due to continental plate motion, subsidence, and diurnal movement caused by the Moon and the tides. The daily movement can be as much as a metre. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the melting of the ice sheets of the last ice age, but neighbouring Scotland is rising by only 0.2 cm. These changes are insignificant if a local datum is used, but are statistically significant if the global GPS datum is used. [1]
Though early navigators thought of the sea as a flat surface that could be used as a vertical datum, this is not actually the case. The Earth has a series of layers of equal potential energy within its gravitational field. Height is a measurement at right angles to this surface, roughly toward the centre of the Earth, but local variations make the equipotential layers irregular (though roughly ellipsoidal). The choice of which layer to use for defining height is arbitrary. The reference height that has been chosen is the one closest to the average height of the world's oceans. This is called the geoid. [1] [3]
Shape of the Earth
The Earth is not a sphere, but an irregular shape approximating a biaxial ellipsoid. It is nearly spherical, but has an equatorial bulge making the radius at the equator about 0.3% larger than the radius measured through the poles. The shorter axis approximately coincides with the axis of rotation. Map-makers choose the true ellipsoid that best fits their need for the area they are mapping. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid. In the United Kingdom there are three common latitude, longitude, height systems in use. The system used by GPS, WGS84, differs at Greenwich from the one used on published maps OSGB36 by approximately 112m. The military system ED50, used by NATO, differs by about 120m to 180m. [1]
Cartesian coordinates

Every point that is expressed in ellipsoidal coordinates can be expressed as an x y z (Cartesian) coordinate. Cartesian coordinates simplify many mathematical calculations. The origin is usually the center of mass of the earth, a point close to the Earth's center of figure.

With the origin at the center of the ellipsoid, the conventional setup is the expected right-hand:

Z-axis along the axis of the ellipsoid, positive northward

X- and Y-axis in the plane of the equator, X-axis positive toward 0 degrees longitude and Y-axis positive toward 90 degrees east longitude.

An example is the NGS data for a brass disk near Donner Summit, in California. Given the dimensions of the ellipsoid, the conversion from lat/lon/height-above-ellipsoid coordinates to X-Y-Z is straightforward: calculate the X-Y-Z for the given lat-lon on the surface of the ellipsoid and add the X-Y-Z vector that is perpendicular to the ellipsoid there and has length equal to the point's height above the ellipsoid. The reverse conversion is harder: given X-Y-Z we can immediately get longitude, but no closed formula for latitude and height exists. See "Geodetic system." Using Bowring's formula in 1976 Survey Review, the first iteration gives latitude correct within $10^{-11}$ degree as long as the point is within 10,000 meters above or 5,000 meters below the ellipsoid.
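The forward (geodetic to Cartesian) conversion described above can be sketched as follows, assuming the WGS84 ellipsoid parameters ($a = 6{,}378{,}137$ m, $f = 1/298.257223563$, which are assumptions not stated in this article):

```python
import math

A = 6378137.0              # WGS84 semi-major axis (assumed)
F = 1 / 298.257223563      # WGS84 flattening (assumed)
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude/height to Earth-centred X, Y, Z."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # N is the prime-vertical radius of curvature at this latitude.
    N = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

As a sanity check, a point at latitude 0, longitude 0, height 0 lands on the X-axis at one equatorial radius, and the pole lands on the Z-axis at the semi-minor axis length.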
Geodetic height
To completely specify a location of a topographical feature on, in, or above the Earth, one has to also specify the vertical distance from the centre of the Earth, or from the surface of the Earth. Because of the ambiguity of "surface" and "vertical", it is more commonly expressed relative to a precisely defined vertical datum which holds fixed some known point. Each country has defined its own datum. For example, in the United Kingdom the reference point is Newlyn, while in Canada, Mexico and the United States, the point is near Rimouski, Quebec, Canada. The distance to Earth's centre can be used both for very deep positions and for positions in space. [1]
Stereographic coordinate system
During medieval times, the stereographic coordinate system was used for navigation purposes. The stereographic coordinate system was superseded by the latitude-longitude system. Although no longer used in navigation, the stereographic coordinate system is still used in modern times to describe crystallographic orientations in the fields of crystallography, mineralogy and materials science.
UTM and UPS systems
The Universal Transverse Mercator (UTM) and Universal Polar Stereographic (UPS) coordinate systems both use a metric-based cartesian grid laid out on a conformally projected surface to locate positions on the surface of the Earth. The UTM system is not a single map projection but a series of map projections, one for each of sixty 6-degree bands of longitude. The UPS system is used for the polar regions, which are not covered by the UTM system.
The combination of these two components specifies the position of any location on the planet, but does not consider altitude nor depth. This latitude/longitude "webbing" is known as the graticule. A graticule representing latitude and longitude of the Earth does not constitute a hierarchy of geographical areas; that is, it is not an arrangement of related information or data. [n 3]
In fact, it is nothing but a vague and, strictly speaking, false requirement that can be found in some physically minded books (also of very good level).
There are however physical situations where regularity of the functions used implies that they must vanish at infinity. If one is solving the stationary Schroedinger equation and the potential is sufficiently regular, the eigenvectors must be correspondingly regular because of known theorems (especially due to Weyl) on elliptic regularity. In some cases regularity, plus the requirement that the function belongs to $L^2$, and some control of the asymptotic value of some derivatives (arising from a nice asymptotic shape of the potential), imply that the wavefunction must vanish at infinity.
On the other hand, from a physical viewpoint, it is impossible to prepare a state in space working arbitrarily far from a given position where physical instruments are localized. It is therefore reasonable to assume that physically realisable states are described by wavefunctions vanishing outside a sufficiently large spatial region. This, in turn, implies a corresponding requirement on physically accessible observables and their realistic realisation (it does not make sense to assume to fill in the universe with detectors). However these are physical requirements that should not be confused with mathematical restrictions.
From a pure mathematical viewpoint, instead, no requirement about a rapid decay at infinity applies to wavefunctions in $L^2$ (it is easy to construct smooth $L^2(\mathbb R)$ functions with larger and larger oscillations as $x\to \infty$, and corresponding strongly unphysical Hamiltonian operators of which these monsters are eigenfunctions). Nor does any strong regularity condition apply. Indeed, these functions are defined up to zero-measure sets, and all operators representing observables, in order to be properly self-adjoint (not simply Hermitian), are the closure of differential operators whose final domains are therefore Sobolev-like spaces: derivatives are required to exist in a weak sense at most.
The only exceptions are probably eigenfunctions of Hamiltonian operators with sufficiently regular potentials, where both Sobolev regularity and elliptic regularity results can be applied; outside singularities of the potential, such eigenfunctions are $C^2$ (or even smooth) in a proper sense.
Restriction to the Schwartz space may make sense because most operators have self-adjointness domains including that space (which sometimes is also a core of the operators) and also because the Schwartz space is dense in $L^2$.
However it turns out to be too strong a restriction even in some elementary cases. Think of a 1D Hamiltonian operator with a potential with some mild discontinuity. It does not admit eigenfunctions of Schwartz type.
Example.
I construct here a true monster without physical meaning, but permitted by mathematical requirements of QM.
Consider a function $\psi \in L^2(\mathbb R)$ constructed this way. It attains the value $0$ everywhere except for every interval $$I_n= \left(n -\frac{1}{2(n^4+1)^{2}},\:\: n+ \frac{1}{2(n^4+1)^{2}}\right) $$ thus of length $(n^4+1)^{-2}$ and centered on every $n\in \mathbb Z$. In these intervals $$\psi(x) = n^2 \quad x\in I_n\:.$$ It is clear that, for some constant $C>0$, $\int_{\mathbb R} |\psi(x)|^2 dx$ satisfies $$\int_{\mathbb R} |\psi(x)|^2 dx \leq C \sum_{n=0}^{+\infty} \frac{n^4}{(n^4+1)^2} <+\infty\:.$$ It is clear that this function oscillates with larger and larger oscillation as $|x|\to +\infty$, but these oscillations are sharper and sharper.
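The convergence of the bounding series can be checked numerically in a couple of lines; each term $n^4/(n^4+1)^2$ is dominated by $1/n^4$, so the sum is below $\zeta(4)=\pi^4/90$:

```python
import math

# Numerical sanity check of the convergence claim: every term
# n^4/(n^4+1)^2 is strictly less than 1/n^4 for n >= 1, so the
# partial sums stay below zeta(4) = pi^4/90.
partial = sum(n ** 4 / (n ** 4 + 1) ** 2 for n in range(1, 100000))
assert partial < math.pi ** 4 / 90
```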
Next, smoothly modify $\psi$ in every interval $I_n$ to produce a smooth non-negative function $0\leq \phi(x) \leq \psi(x)$ with $\phi(n)=\psi(n)$ thus preserving finiteness of the integral of $|\phi|^2$.
Finally define$$ \begin{equation} \Psi(x) := \phi(x) + \frac{1}{1+x^2}\:. \end{equation} $$It is easy to see that, since $\phi(x)\geq 0$, it must be $$\Psi(x) >0\tag{1}$$ and $$\Psi \in L^2(\mathbb R)\cap C^\infty(\mathbb R)$$ because $\Psi$ is the sum of two $L^2$ (smooth) functions and $L^2(\mathbb R)$ is a vector space. A possible qualitative shape of $\Psi$ is shown in the (omitted) figure.
Observe that $\Psi$ is an eigenfunction (a proper eigenvector, since it belongs to $L^2$) of the Hamiltonian$$H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + U(x)\:, $$where $$U(x) := \frac{\hbar^2}{2m}\frac{\Psi''(x)}{\Psi(x)}$$is smooth by construction, in particular because (1) holds.
It holds$$H\Psi = E \Psi$$with $E=0$. Obviously all this construction is physically meaningless, but it is permitted by the general requirements of the QM formalism.
Newspace parameters
Level: \( N = 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k = 1 \)
Character orbit: \([\chi]\) = 3600.bh (of order \(4\) and degree \(2\))

Newform invariants
Self dual: No
Analytic conductor: \(1.79663404548\)
Analytic rank: \(0\)
Dimension: \(4\)
Relative dimension: \(2\) over \(\Q(i)\)
Coefficient field: \(\Q(i, \sqrt{6})\)
Coefficient ring: \(\Z[a_1, \ldots, a_{19}]\)
Coefficient ring index: \( 1 \)
Projective image: \(D_{6}\)
Projective field: Galois closure of 6.2.450000.1
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{4} + 9\):

\(\beta_{0} = 1\)
\(\beta_{1} = \nu\)
\(\beta_{2} = \nu^{2}/3\)
\(\beta_{3} = \nu^{3}/3\)

Conversely:

\(1 = \beta_0\)
\(\nu = \beta_{1}\)
\(\nu^{2} = 3\beta_{2}\)
\(\nu^{3} = 3\beta_{3}\)

Character Values
We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/3600\mathbb{Z}\right)^\times\).
| \(n\) | \(577\) | \(901\) | \(2801\) | \(3151\) |
|---|---|---|---|---|
| \(\chi(n)\) | \(\beta_{2}\) | \(1\) | \(1\) | \(1\) |
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
| Label | \( a_{2} \) | \( a_{3} \) | \( a_{4} \) | \( a_{5} \) | \( a_{6} \) | \( a_{7} \) | \( a_{8} \) | \( a_{9} \) | \( a_{10} \) |
|---|---|---|---|---|---|---|---|---|---|
| 2593.1 | 0 | 0 | 0 | 0 | 0 | −1.22474 + 1.22474i | 0 | 0 | 0 |
| 2593.2 | 0 | 0 | 0 | 0 | 0 | 1.22474 − 1.22474i | 0 | 0 | 0 |
| 3457.1 | 0 | 0 | 0 | 0 | 0 | −1.22474 − 1.22474i | 0 | 0 | 0 |
| 3457.2 | 0 | 0 | 0 | 0 | 0 | 1.22474 + 1.22474i | 0 | 0 | 0 |
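As a small consistency check (a sketch, not part of the LMFDB page): the four embedded \(a_7\) values listed above should all be roots of \(x^4 + 9\), matching the linear operator \(T_7^4 + 9\) whose kernel defines this newform:

```python
# The four embedded a_7 values are (+-1 +- i) * sqrt(3/2); each should
# satisfy a_7^4 + 9 = 0, consistent with the operator T_7^4 + 9.
SQRT32 = 1.224744871391589  # sqrt(3/2)
a7_values = [complex(-SQRT32, SQRT32), complex(SQRT32, -SQRT32),
             complex(-SQRT32, -SQRT32), complex(SQRT32, SQRT32)]
for a7 in a7_values:
    assert abs(a7 ** 4 + 9) < 1e-9
```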
| Char. orbit | Parity | Mult. | Self Twist | Proved |
|---|---|---|---|---|
| 1.a | Even | 1 | trivial | yes |
| 3.b | Odd | 1 | CM by \(\Q(\sqrt{-3})\) | yes |
| 5.b | Even | 1 | | yes |
| 5.c | Odd | 2 | | yes |
| 15.d | Odd | 1 | | yes |
| 15.e | Even | 2 | | yes |
This newform can be constructed as the kernel of the linear operator \(T_{7}^{4} + 9 \) acting on \(S_{1}^{\mathrm{new}}(3600, [\chi])\).
Ah yes, a fave topic of mine. Basically, there is no universally agreed-on way to do this. The problem is that, in general, there isn't a unique way to interpolate the values of tetration at integer "height" (which is what the "number of exponents in the 'tower'" may be called). So in theory, you could define it to be anything.
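At integer heights, tetration itself is unambiguous; a quick sketch (the function name is mine, not standard notation):

```python
# Integer-height tetration: ^n a = a^(a^(...^a)) with n copies of a,
# evaluated top-down, e.g. ^3 2 = 2^(2^2) = 16.
def tetrate(a, n):
    result = 1  # ^0 a = 1 by the usual convention
    for _ in range(n):
        result = a ** result
    return result
```

The whole question is how to fill in the gaps between these integer heights.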
In the case of exponentiation, one has the useful identity $a^{n + m} = a^n a^m$, which enables a "natural" extension to non-integer values of the exponent. Namely, you can see, for example, that $a^1 = a^{1/2 + 1/2} = (a^{1/2})^2$, from which we can say that we need to define $a^{1/2} = \sqrt{a}$ if we want that identity to hold in the extended exponentiation. No such identities exist for tetration.
You may also want to look at Qiaochu Yuan's answer here, where he explores some of this from a viewpoint of higher math:
https://math.stackexchange.com/a/56710/11172
One could, perhaps, compare this problem to the question of the interpolation of factorial $n!$ to non-integer values of $n$. There is, in general, no simple identity that provides a natural extension for this, either.
But, when an extension is desired, the usual choice is to use what is called the "Gamma function", defined by
$$\Gamma(x) = \int_{0}^{\infty} e^{-t} t^{x-1} dt$$.
Then, you can extend $n!$ to non-integer $x$ by $x! = \Gamma(x+1)$. However, usually one does not use $x!$ for non-integer factorials, but rather the Gamma function notation.
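To make this concrete, here is a minimal sketch (my own illustration) using Python's standard-library Gamma function to extend the factorial via $x! = \Gamma(x+1)$:

```python
import math

def fact(x):
    """Extend the factorial to non-integer x via x! = Gamma(x + 1)."""
    return math.gamma(x + 1)

# Agrees with the ordinary factorial at non-negative integers...
assert abs(fact(5) - 120.0) < 1e-9
# ...and interpolates in between; e.g. (1/2)! = Gamma(3/2) = sqrt(pi)/2.
assert abs(fact(0.5) - math.sqrt(math.pi) / 2) < 1e-12
```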
One can give a uniqueness theorem involving some simple analytic conditions; it is called the Bohr–Mollerup theorem. In addition, the Gamma function has various nice number-theoretic and analytic properties, and turns up in a number of different areas of math.
But in the case of tetration, there are no nice integral representations known. Henryk Trappmann and some others recently proved a theorem that gives a simple uniqueness criterion for the inverse of tetration (with respect to the "height"), assuming extension not just to the reals but to the complex numbers:
http://www.ils.uec.ac.jp/~dima/PAPERS/2009uniabel.pdf
The solution that satisfies the condition is one that was developed by Hellmuth Kneser around 1950. I call it "Kneser's tetrational function" or simply "Kneser's function". It defies simple description.
On this site:
http://math.eretrandre.org/tetrationforum/index.php
an algorithm was posted to compute the Kneser solution (though I'm not sure if it's been proven) for various bases of tetration. Using this solution, the answer to your question would be
$$^{4/3} 3_\mathrm{Kneser} = 4.834730793026332...$$
Other interpolations for tetration have been proposed, some of which give different results. But this is the only one that seems to satisfy "nice" properties like analyticity and has a simple uniqueness theorem via its inverse. Yet as I said in the beginning, I don't believe that it's universally agreed by the general mathematical community that this is "the" answer. |
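To see the non-uniqueness concretely, here is a sketch (my own illustration) of the simplest competing interpolation, the piecewise-linear one: define $^{x}b = 1+x$ on $(-1, 0]$ and extend upward by $^{x}b = b^{\,^{x-1}b}$. It is continuous, reproduces the integer heights, but disagrees with Kneser's value:

```python
def tet_linear(b, x):
    """Piecewise-linear tetration: ^x b = 1 + x on (-1, 0],
    extended upward via ^x b = b ** (^(x-1) b)."""
    if x <= 0:
        return 1.0 + x  # valid on (-1, 0]
    return b ** tet_linear(b, x - 1)

# Integer heights come out right: ^1 3 = 3, ^2 3 = 27.
assert tet_linear(3, 1) == 3.0 and tet_linear(3, 2) == 27.0

val = tet_linear(3, 4 / 3)   # about 4.88, not Kneser's 4.8347...
```

So the two interpolations already differ at height $4/3$, which is exactly the ambiguity described above.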
I'm trying to find out the area of an equilateral triangle with respect to one side. Anything wrong with my proof?
An equilateral triangle with sides of length $a$ can be divided in half along the base to create two triangles with right angles. The sides would be $h$ (height), $y$ (hypotenuse) and $b$ (base). I know what some of these sides are:
$$y = a \\ b = 0.5a \\ h = ?$$
Using Pythagorean theorem, I can find the length of $h$:
$$\begin{align}y^2 &= h^2 + b^2 \\ h^2 &= y^2 - b^2 \\ &= a^2 - (0.5a)^2 \\ &= \tfrac{3}{4}a^2 \\ h &= \pm\frac{a\times\sqrt{3}}{2} \end{align}$$
Since the length can only be positive, we're left with $h =\frac{a\times\sqrt{3}}{2}$.
The area $A$ of a triangle is $A = (b \times h) / 2$. We can plug in our numbers:
$$\begin{align} A &= \frac{b \times h}{2} \\ &= \frac{\frac{a}{2} \times \frac{a\times\sqrt{3}}{2}}{2} \\ &= \frac{a^2\times\sqrt{3}}{8} \end{align}$$
But since we halved the equilateral triangle, the area of the equilateral is twice this number, since we recombine the two right triangles:
$$\begin{align} A &= 2\times\frac{a^2\times\sqrt{3}}{8} \\ &= \frac{a^2\times\sqrt{3}}{4} \end{align}$$
This is indeed the area of an equilateral triangle with respect to side $a$. |
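As a quick numerical sanity check (my addition, not part of the original question), the formula can be compared against Heron's formula for a triangle with three equal sides:

```python
import math

def area_formula(a):
    # The derived formula: A = a^2 * sqrt(3) / 4
    return a * a * math.sqrt(3) / 4

def area_heron(a):
    # Heron's formula for a triangle with sides a, a, a.
    s = 3 * a / 2  # semi-perimeter
    return math.sqrt(s * (s - a) ** 3)

for a in (1.0, 2.5, 10.0):
    assert abs(area_formula(a) - area_heron(a)) < 1e-9
```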
The aim is to determine the group velocity of a wave packet with the general form
$$\Psi\left(x,t\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi\left(k\right)e^{i\left(kx-\omega t\right)}dk.$$
Quoting from
Introduction to Quantum Mechanics by David Griffiths:
Since the integrand is negligible except in the vicinity of $\ k_{0}$, we may as well Taylor expand the function $\ \omega\left(k\right)$ about that point and keep only the leading terms: $$\omega\left(k\right) \approx \omega_{0}+\omega_{0}'\left(k-k_{0}\right)$$ where $\omega_{0}'$ is the derivative of $\omega$ with respect to $k$, evaluated at $k_0$.
What is unclear to me here is: why do we Taylor-expand the function $\omega\left(k\right)$?
Proceeding on, the author performed a change of coordinate variables from $k$ to $s\equiv k-k_{0}$ so that
$$\Psi\left(x,t\right)\approx \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi\left(k_{0}+s\right)e^{i\left[\left(k_{0}+s\right)x-\left(\omega_{0}+\omega_{0}'s\right)t\right]}ds$$
Perhaps I am not understanding the preceding argument completely but what is the motivation for performing this change of variables from $k$ to $s$?
Then, the author factored out an overall phase so that we have
$$\Psi\left(x,t\right)\approx \frac{1}{\sqrt{2\pi}}e^{i\left(-\omega_{0}t+k_{0}\omega_{0}'t\right)}\int_{-\infty}^{\infty} \phi\left(k_{0}+s\right)e^{i\left(k_{0}+s\right)\left(x-\omega_{0}'t\right)}ds.$$
A fuzzy intuition I can conjure up to account for the change of variables is that the wave packet behaves like a moving wave front, so the best way to keep the coordinate system from 'moving' relative to the packet is to introduce a change of variables that travels along with it. An explanation would really be appreciated.
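For what it is worth, the point of the manipulation can be checked numerically: the envelope of the packet travels at $d\omega/dk|_{k_0}$ (the group velocity), not at the phase velocity $\omega_0/k_0$. A small sketch of mine, where the Gaussian $\phi$ and the dispersion relation $\omega(k)=k^2$ are assumed purely for illustration:

```python
import cmath
import math

k0, sigma = 5.0, 0.5

def phi(k):
    # Gaussian spectral amplitude sharply peaked at k0 (assumed example)
    return math.exp(-(k - k0) ** 2 / (2 * sigma ** 2))

def abs_psi(x, t):
    # |Psi(x, t)| for omega(k) = k^2, by direct numerical integration over k
    ks = [k0 - 4 * sigma + 8 * sigma * i / 200 for i in range(201)]
    dk = ks[1] - ks[0]
    return abs(sum(phi(k) * cmath.exp(1j * (k * x - k * k * t)) for k in ks)) * dk

def peak(t):
    # Position of the envelope maximum on a grid
    xs = [i * 0.02 for i in range(-200, 501)]
    return max(xs, key=lambda x: abs_psi(x, t))

v_group = (peak(0.5) - peak(0.0)) / 0.5
# omega'(k0) = 2*k0 = 10, while the phase velocity omega(k0)/k0 is only 5.
assert abs(v_group - 2 * k0) < 0.2
```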
Here's one technique to enumerate the best $n$ assignments, for any instance of the assignment problem. I suspect my approach isn't optimal, but it does run in polynomial time: it uses $O(nm)$ invocations of the Hungarian algorithm, where $m$ denotes the number of agents in the problem instance. In your example, $m=26$, so my approach requires $O(n)$ invocations of the Hungarian algorithm.
Let $A_1,A_2,A_3,\dots$ denote the assignments, from best to worst. $A_1$ is the best assignment; $A_2$ is the next-best; and so on. Our goal is to enumerate $A_1,\dots,A_n$.
You can find $A_1$ by solving the original assignment problem, e.g., with the Hungarian algorithm.
How can we find $A_2$, the second-best assignment? The idea is to use a case analysis. Let $v_1,\dots,v_m$ denote the $m$ agents in the problem instance, and let $A(v)$ denote the task assigned to agent $v$ by assignment $A$. We'll break down the space $\mathcal{S}$ of possible candidates for $A_2$ (i.e., the space of all assignments other than $A_1$) into the disjoint union $\mathcal{S} = \mathcal{S}_1 \cup \dots \cup \mathcal{S}_m$, where $\mathcal{S}_i$ is the space of assignments that agree with $A_1$ for $v_1,\dots,v_{i-1}$ but disagree with $A_1$ on $v_i$. (In other words, we look at the first agent that receives a different assignment in $A_1$ vs $A_2$. Then there are $m$ possibilities for that agent; we let $i$ denote its index, i.e., the index of the first agent whose assignment in $A_1$ is different from its assignment in $A_2$. This breaks down the space $\mathcal{S}$ into subspaces $\mathcal{S}_1,\dots, \mathcal{S}_m$, as listed before.)
Now the approach will be to find the best assignment in each $\mathcal{S}_i$, separately.
$\mathcal{S}_1$: We find the best assignment $A$ such that $A(v_1) \ne A_1(v_1)$ using one invocation of the Hungarian algorithm, by changing the cost of the edge $(v_1,A_1(v_1))$ to $\infty$ (or some very large positive number) and then re-running the Hungarian algorithm. This finds the best assignment out of all assignments that assign $v_1$ to something different than $A_1$ did.
$\mathcal{S}_2$: We find the best assignment $A$ such that $A(v_1) = A_1(v_1)$ and $A(v_2) \ne A_1(v_2)$ using one invocation of the Hungarian algorithm: change the cost of the edge $(v_1,A_1(v_1))$ to $0$, and change the cost of the edge $(v_2,A_1(v_2))$ to $\infty$.
$\mathcal{S}_i$: Similarly, for each $i$, we can find the best assignment $A$ such that $A(v_j) = A_1(v_j)$ for all $j=1,2,\dots,i-1$ and such that $A(v_i) \ne A_1(v_i)$, using one invocation of the Hungarian algorithm.
This gives us $m$ assignments, i.e., $m$ candidates for $A_2$. By construction, each one of these assignments is different from $A_1$. Also, by construction, they cover the entire space of assignments that differ from $A_1$. Therefore, $A_2$ is simply the best of these $m$ candidates.
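Here is a compact sketch of the $A_2$ construction (illustration only; the exhaustive `best_assignment` stands in for the Hungarian algorithm, which is what you would use in practice, e.g. `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

BIG = 10 ** 6  # stands in for "infinity" in the cost matrix

def best_assignment(cost):
    """Stand-in for the Hungarian algorithm: exhaustive minimization.
    Returns (assignment, total cost); assignment[i] is agent i's task."""
    m = len(cost)
    return min(((p, sum(cost[i][p[i]] for i in range(m)))
                for p in permutations(range(m))), key=lambda t: t[1])

def second_best(cost):
    m = len(cost)
    A1, _ = best_assignment(cost)
    candidates = []
    for i in range(m):
        # Subspace S_i: agree with A1 on agents 0..i-1, differ on agent i.
        c = [row[:] for row in cost]
        for j in range(i):                 # force A(v_j) = A1(v_j)
            for t in range(m):
                if t != A1[j]:
                    c[j][t] = BIG
        c[i][A1[i]] = BIG                  # forbid A(v_i) = A1(v_i)
        A, modified_cost = best_assignment(c)
        if modified_cost < BIG:            # subspace is non-empty
            candidates.append((A, sum(cost[k][A[k]] for k in range(m))))
    return min(candidates, key=lambda t: t[1])
```

Note the guard `modified_cost < BIG`: some subspace (e.g. "agree everywhere except the last agent") can be empty, and in that case the modified problem is forced to use a forbidden edge, so its candidate is discarded.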
That finds the second-best assignment. How can we find $A_3$, the third-best assignment? Well, the same ideas apply: we'll use a case split, but now the case-split will be a little more involved. Suppose that $v_i$ is the first agent where $A_1$ and $A_2$ disagree (i.e., $A_1$ and $A_2$ agree on $v_1,\dots,v_{i-1}$ but disagree on $v_i$, so that $A_2 \in \mathcal{S}_i$). Then we can break down the space of possibilities for $A_3$ by looking at the first agent that receives a different assignment from $A_2$, or from $A_1$.
In particular, let $\mathcal{T}$ denote the space of possible candidates for $A_3$ (i.e., the space of all assignments other than $A_1$ or $A_2$). We can partition it into the disjoint union
$$\mathcal{T} = \mathcal{S}_1 \cup \dots \cup \mathcal{S}_{i-1} \cup (\mathcal{T}_1 \cup \dots \cup \mathcal{T}_m) \cup \mathcal{S}_{i+1} \cup \dots \cup \mathcal{S}_m.$$
In other words, since $A_2 \in S_i$ and we now want to exclude $A_2$ from the space of allowable assignments, we partition $S_i$ into $S_i = \{A_2\} \cup \mathcal{T}_1 \cup \dots \cup \mathcal{T}_m$ and remove $A_2$. Here $\mathcal{T}_j$ denotes the set of assignments that agree with $A_2$ on $v_1,\dots,v_{j-1}$ but disagree with $A_2$ on $v_j$ (and, if $j=i$, disagree with $A_1$ on $v_j$ as well).
Now, we use the Hungarian algorithm to find the best assignment in each of $\mathcal{S}_1, \dots, \mathcal{S}_{i-1}, \mathcal{T}_1, \dots, \mathcal{T}_m, \mathcal{S}_{i+1}, \dots, \mathcal{S}_m$. This is doable using the techniques shown above, using one invocation of the Hungarian algorithm per subspace. Finally, we let $A_3$ be best of all the solutions found.
We can continue in this way, at each step identifying the next-best by decomposing the space of remaining assignments into multiple subspaces and invoking the Hungarian algorithm on each subspace. At each step, we introduce at most $m$ new subspaces, and we can reuse the previously-obtained results for the other subspaces. Therefore, on each step we make at most $m$ invocations of the Hungarian algorithm, so the total number of invocations of the Hungarian algorithm is $O(nm)$.
There's probably a better way to do it, but if you can't find any other algorithm, this is one you could use. Note that this is a general technique for problem of enumerating the $n$ best assignments to any instance of the assignment problem. It's not specific to your substitution-cipher example. |
The inclusion-exclusion principle, which from now on will also be called simply the principle, is a famous and very useful technique in combinatorics, probability and counting.
For the purpose of this article, the most common application of the principle will be considered first: counting the cardinality of the sum (union) of $$n$$ sets. Later, more applications will be given.
Cardinality of the sum of sets
For a given set $$A$$ of $$n$$ sets $$A_1, A_2, \ldots, A_n$$, let $$S_n$$ be the cardinality of the sum of these sets; more formally, $$S_n = \bigg|\bigcup_{i=1}^n A_i\bigg|$$
The principle gives a direct formula for computing $$S_n$$. It is important to notice that since any two of given sets can have a non-empty intersection, the cardinality of their sum can be smaller than the sum of their cardinalities. This is the reason why inclusion-exclusion principle is so useful.
The formula:
$$$S_n = \sum_{i = 1}^n |A_i| - \sum_{1 \leq i < j \leq n} |A_i \cap A_j| + \sum_{1 \leq i < j < k \leq n} |A_i \cap A_j \cap A_k| - \ldots + (-1)^{n - 1} \cdot |A_1 \cap \ldots \cap A_n|$$$
If for a given set $$A$$ of $$n$$ sets $$A_1, A_2, \ldots, A_n$$, the cardinality of intersection of sets in any subset of $$A$$ can be computed efficiently, then the formula can be used in order to get the value of $$S_n$$.
A very important thing to notice is that the formula can be quite long. Actually, each non-empty subset of $$A$$ is considered exactly once in the formula, thus the formula for a set of $$n$$ sets has $$2^n - 1$$ components. This is a very important fact to keep in mind. The inclusion-exclusion principle has many different applications, but in some of them the complexity cost of using it is too big to be practical.
Example:
Let $$A$$ be a set of $$4$$ sets:
$$A_1 = \{1,2,3\}$$
$$A_2 = \{2,3,4\}$$
$$A_3 = \{1,3,5\}$$
$$A_4 = \{2,3\}$$
The goal is to compute the cardinality of the sum of all the above sets. For the purpose of the example, let $$A_{i, j}, A_{i, j, k}, A_{i, j, k, l}$$ denote, respectively, the intersection of sets $$A_i$$ and $$A_j$$, the intersection of sets $$A_i, A_j, A_k$$, and the intersection of sets $$A_i, A_j, A_k, A_l$$.
Then the formula for the above example looks like this:
$$$|A_1| + |A_2| + |A_3| + |A_4| - |A_{1,2}| - |A_{1,3}| - |A_{1, 4}| - |A_{2,3}| - |A_{2,4}| - |A_{3,4}| + |A_{1,2,3}| + |A_{1,2,4}| + |A_{1,3,4}| + |A_{2,3,4}| - |A_{1,2,3,4}|$$$
Since in the example the cardinality of each above intersection can be computed just by looking at the sets in the intersection, the formula is transformed to:
$$$3 + 3 + 3 + 2 - 2 - 2 - 2 - 1 - 2 - 1 + 1 + 2 + 1 + 1 - 1 = 5$$$
This example might seem trivial, because computing the cardinality of the sum of all sets is as straightforward as computing the cardinality of their intersections. However, in many cases the complexity of computing these two values differs significantly. At the end of this article, there will be an example problem demonstrating such a case. But first, a very handy method of implementing the formula will be given.
Implementation:
For the purpose of the article, let
int intersectionCardinality(vector<int> indices)
be a function returning the cardinality of intersection of sets $$A_i$$, where i is in the indices vector.
Then the value of the formula can be computed by generating all subsets of $$A$$, which is the set of sets $$A_1, \ldots, A_n$$, and for each such subset, by computing the cardinality of its intersection and adding this value with an appropriate sign to the final result.
int n; // the number of sets in the set A
int result = 0; // final result: the cardinality of the sum of all sets in A
for (int b = 1; b < (1 << n); ++b) { // b = 0 is the empty subset, so skip it
    vector<int> indices;
    for (int k = 0; k < n; ++k) {
        if (b & (1 << k)) {
            indices.push_back(k);
        }
    }
    int cardinality = intersectionCardinality(indices);
    if (indices.size() % 2 == 1)
        result += cardinality;
    else
        result -= cardinality;
}
cout << result << endl; // printing the final result
This method of iterating over all subsets of a set using an indicator vector is extremely handy, and it is important to remember and master it. The idea is pretty simple: since there are $$2^n$$ subsets of a set of $$n$$ elements, each integer in the range $$[0, 2^n)$$, when interpreted as a binary number, represents the indicator vector of one such subset.
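As a taste of a case where the principle genuinely helps (my own illustration, written in Python for brevity, not the article's promised problem): counting how many integers in $$[1, n]$$ are divisible by at least one of a few given primes. Each intersection cardinality is a single floor division, while enumerating the union directly would take $$O(n)$$ time.

```python
from itertools import combinations

def count_divisible(n, primes):
    """Count integers in [1, n] divisible by at least one of the given primes.
    Here A_i = multiples of primes[i], and the intersection of any subset of
    the A_i has cardinality n // (product of the chosen primes)."""
    total = 0
    for k in range(1, len(primes) + 1):
        sign = 1 if k % 2 == 1 else -1  # inclusion-exclusion sign
        for combo in combinations(primes, k):
            prod = 1
            for p in combo:
                prod *= p
            total += sign * (n // prod)
    return total

# 50 + 33 + 20 - 16 - 10 - 6 + 3 = 74 multiples of 2, 3 or 5 up to 100
assert count_divisible(100, [2, 3, 5]) == 74
```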
Lagrange Multiplier with more constraint variables
Extrema of f(x,y) with constraint g(x,y)=0
$\displaystyle f_{x}dx+f_{y}dy=0$ at an extremum, but dx and dy not independent:
$\displaystyle g_{x}dx+g_{y}dy=0$
$\displaystyle dy=-\frac{g_{x}}{g_{y}}dx$
$\displaystyle f_{x}dx+f_{y}(-\frac{g_{x}}{g_{y}}dx)=0$
$\displaystyle (f_{x}-\frac{f_{y}}{g_{y}}g_{x})dx=0$ for arbitrary dx $\displaystyle \rightarrow f_{x}-\frac{f_{y}}{g_{y}}g_{x}=0$
Let $\displaystyle \lambda =\frac{f_{y}}{g_{y}}$
Then
$\displaystyle f_{y}-\lambda g_{y}=0$
$\displaystyle f_{x}-\lambda g_{x}=0$
$\displaystyle g(x,y)=0$
are 3 eqs in 3 unknowns x,y,$\displaystyle \lambda$
As a purely formal rule, the equations can be gotten by setting equal to zero the partial derivatives wrt x,y,$\displaystyle \lambda$ of:
L(x,y,$\displaystyle \lambda$)=f(x,y)-$\displaystyle \lambda$ g(x,y)
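A concrete check of the rule (my own toy example, not from the thread): extremize f(x,y)=xy subject to g(x,y)=x+y-4=0. The equations f_x-λg_x=0, f_y-λg_y=0, g=0 give y=λ, x=λ, x+y=4, hence x=y=λ=2:

```python
def f(x, y):
    return x * y

# Lagrange conditions: y - lam = 0, x - lam = 0, x + y - 4 = 0  =>  x = y = lam = 2
x = y = lam = 2.0
assert x + y - 4 == 0  # the constraint holds at the stationary point

# Brute-force scan along the constraint line y = 4 - x confirms the maximum.
best_t = max((i / 1000 for i in range(-2000, 6001)), key=lambda t: f(t, 4 - t))
assert abs(best_t - x) < 1e-9
```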
------------------------------------------------------------------------------------------------------
For extrema of f(x,y,z) with constraint g(x,y,z)=0:
$\displaystyle f_{x}dx+f_{y}dy+f_{z}dz=0$ at an extremum, but dx,dy,dz not independent:
$\displaystyle g_{x}dx+g_{y}dy+g_{z}dz=0$
$\displaystyle dy=-\frac{g_{x}}{g_{y}}dx-\frac{g_{z}}{g_{y}}dz$
$\displaystyle f_{x}dx+f_{y}(-\frac{g_{x}}{g_{y}}dx-\frac{g_{z}}{g_{y}}dz)+f_{z}dz=0$
$\displaystyle (f_{x}-\frac{f_{y}}{g_{y}}g_{x})dx+(f_{z}-\frac{f_{y}}{g_{y}}g_{z})dz=0$
arbitrary dx and dz $\displaystyle \rightarrow f_{x}-\frac{f_{y}}{g_{y}}g_{x}=f_{z}-\frac{f_{y}}{g_{y}}g_{z}=0$
Let $\displaystyle \lambda = \frac{f_{y}}{g_{y}}$ and you have Lagrange's multiplier.
As a formal rule to get the equations, set to 0 partial derivatives wrt x,y,z,$\displaystyle \lambda$ of L(x,y,z,$\displaystyle \lambda$)=f(x,y,z)-$\displaystyle \lambda$ g(x,y,z):
$\displaystyle \frac{\partial L }{\partial x}=f_{x}-\lambda g_{x}=0$
$\displaystyle \frac{\partial L }{\partial y}=f_{y}-\lambda g_{y}=0$
$\displaystyle \frac{\partial L }{\partial z}=f_{z}-\lambda g_{z}=0$
$\displaystyle \frac{\partial L }{\partial \lambda}=g(x,y,z)=0$
--------------------------------------------------------------------------------------
Now consider extrema of f(x,y) subject to g(x,y,z) = 0
$\displaystyle f_{x}dx+f_{y}dy=0$
$\displaystyle g_{x}dx+g_{y}dy+g_{z}dz=0$
To change the independent variables to, say, dx and dz, solve the latter equation for dy and plug into the former to get:
$\displaystyle (f_{x}-\frac{f_{y}}{g_{y}}g_{x})dx-\frac{f_{y}}{g_{y}}g_{z}dz=0$
Now setting the coefficients of dx and dz (arbitrary) to zero and letting
$\displaystyle \lambda = \frac{f_{y}}{g_{y}}$, gives the modified form of Lagrange's equations for this case:
$\displaystyle f_{x}-\lambda g_{x}=0$
$\displaystyle f_{y}-\lambda g_{y}=0$
$\displaystyle \lambda g_{z}=0$
$\displaystyle g(x,y,z)=0$
Formally, you get the equations, as before, from:
L(x,y,z,$\displaystyle \lambda$) = f(x,y)-$\displaystyle \lambda$ g(x,y,z)
by taking partial derivatives wrt x,y,z,$\displaystyle \lambda$ and setting them equal to 0.
The latter procedure was used by JeffM1 in:
http://mymathforum.com/calculus/3448...straint-2.html
--------------------------------------------------------------------------------------
For the case of more than one constraint:
Extrema of f(x,y,z,w) subject to g(x,y,z,w)=0 and h(x,y,z,w)=0:
Solve for the differentials dz and dw in terms of dx and dy from dg=0 and dh=0.
Then substitute into df=0 and equate the coefficients of dx and dy to zero. It will be obvious what to label $\displaystyle \lambda_1$ and $\displaystyle \lambda_2$, and everything will fall into place.
Forget the formulas, grasp the simple principle:
At an extremum, df(x,y,z,w,..)=0 for arbitrary dx, dy, dz, dw... (independent). So their coefficients must be zero.
I have taken pains to do this because I never came across Lagrange multipliers except as a rule which I didn't understand.
Copyright © 2019 My Math Forum. All rights reserved. |
Prologue: The big $O$ notation is a classic example of the power and ambiguity of notation as part of the language loved by the human mind. No matter how much confusion it has caused, it remains the notation of choice for conveying ideas that we can easily identify and agree on efficiently.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$ , where $T(n)$ is running time of an algorithm on input of size $n$.
Sorry, but you do not have an issue if you understand the meaning of big $O$ notation.
I understand the semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n) = O(f(n))$. If one asks you what the value of $O(f(n))$ is, what would be your answer? There is no answer.
What is important is the semantics. What is important is (how) people can agree easily on (one of) its precise interpretations that will describe asymptotic behavior or time or space complexity we are interested in. The default precise interpretation/definition of $T(n)=O(f(n))$ is, as translated from Wikipedia,
$T$ is a real or complex valued function and $f$ is a real valued function, both defined on some unbounded subset of the real positive numbers, such that $f(n)$ is strictly positive for all large enough values of $n$. Then $T(n)=O(f(n))$ if the absolute value of $T(n)$ is at most a positive constant multiple of $f(n)$ for all sufficiently large values of $n$. That is, there exists a positive real number $M$ and a real number $n_0$ such that
$${|T(n)|\leq M f(n){\text{ for all }}n\geq n_{0}.}$$
Please note this interpretation is considered the definition. All other interpretations and understandings, which may help you greatly in various ways, are secondary and corollary. Everyone (well, at least every answerer here) agrees to this interpretation/definition/semantics. As long as you can apply this interpretation, you are probably good most of the time. Relax and be comfortable. You do not want to think too much, just as you do not think too much about some of the irregularities of English or French or most natural languages. Just use the notation by that definition.
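The definition is entirely checkable once witnesses are chosen. For instance (my own example), $T(n)=3n^2+5n$ satisfies $T(n)=O(n^2)$ with witnesses $M=4$ and $n_0=5$, since $5n\le n^2$ for $n\ge 5$:

```python
def T(n):
    return 3 * n * n + 5 * n

def f(n):
    return n * n

M, n0 = 4, 5  # witnesses: |T(n)| <= M * f(n) for all n >= n0
assert all(abs(T(n)) <= M * f(n) for n in range(n0, 10000))

# The bound genuinely needs n0: it fails below it, e.g. at n = 1.
assert abs(T(1)) > M * f(1)
```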
$T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n) = O(f(n))$. If one asks you what the value of $O(f(n))$ is, what would be your answer? There is no answer.
Indeed, there could be no answer, since the question is ill-posed. $T(n)$ does not mean an exact number. It is meant to stand for a function whose name is $T$ and whose formal parameter is $n$ (which is sort of bound to the $n$ in $f(n)$). It is just as correct, and even more so, if we write $T=O(f)$. If $T$ is the function that maps $n$ to $n^2$ and $f$ is the function that maps $n$ to $n^3$, it is also conventional to write $T(n)=O(n^3)$ or $n^2=O(n^3)$. Please also note that the definition does not say whether $O$ is a function or not. It does not say the left hand side is supposed to be equal to the right hand side at all! You are right to suspect that the equal sign does not mean equality in its ordinary sense, where you can switch both sides of the equality and it should be backed by an equivalence relation. (Another even more famous example of abuse of the equal sign is its use to mean assignment in most programming languages, instead of the more cumbersome := as in some languages.)
If we are only concerned about that one equality (I am starting to abuse language as well: it is not an equality, and yet it is one, since there is an equal sign in the notation, or it could be construed as some kind of equality), $T(n)=O(f(n))$, this answer is done.
However, the question actually goes on. What does it mean by, for example, $f(n)=3n+O(\log n)$? This equality is not covered by the definition above. We would like to introduce another convention,
the placeholder convention. Here is the full statement of placeholder convention as stated in Wikipedia.
In more complicated usage, $O(\cdots)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $n\to \infty$.
$(n+1)^{2}=n^{2}+O(n)$
$(n+O(n^{1/2}))(n+O(\log n))^{2}=n^{3}+O(n^{5/2})$
$n^{O(1)}=O(e^{n})$
The meaning of such statements is as follows: for any functions which satisfy each $O(\cdots)$ on the left side, there are some functions satisfying each $O(\cdots)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$."
You may want to check here for another example of placeholder convention in action.
You might have noticed by now that I have not used the set-theoretic explanation of the big $O$-notation. All I have done is just to show even without that set-theoretic explanation such as "$O(f(n))$ is a set of functions", we can still understand big $O$-notation fully and perfectly. If you find that set-theoretic explanation useful, please go ahead anyway.
You can check the section in "asymptotic notation" of CLRS for a more detailed analysis and usage pattern for the family of notations for asymptotic behavior, such as big $\Theta$, $\Omega$, small $o$, small $\omega$, multivariable usage and more. The Wikipedia entry is also a pretty good reference.
Lastly, there is some inherent ambiguity/controversy with big $O$ notation with multiple variables; see 1 and 2. You might want to think twice when you are using those.
How do I integrate this?
$$\int_0^{2\pi}\frac{dx}{2+\cos{x}}, x\in\mathbb{R}$$
I know the substitution method from real analysis, $t=\tan{\frac{x}{2}}$, but since this problem is in a set of problems about complex integration, I thought there must be another (easier?) way.
I tried computing the poles in the complex plane and got $$\text{Re}(z_0)=\pi+2\pi k, k\in\mathbb{Z}; \text{Im}(z_0)=-\log (2\pm\sqrt{3})$$ but what contour of integration should I choose? |
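As a numerical sanity check for whatever contour one ends up choosing (my addition): by the standard result $\int_0^{2\pi} \frac{dx}{a+\cos x} = \frac{2\pi}{\sqrt{a^2-1}}$, the value here should be $2\pi/\sqrt{3}$, which a quick quadrature confirms:

```python
import math

# Midpoint rule over one full period; for smooth periodic integrands
# the midpoint/trapezoid rule converges extremely fast.
N = 2000
h = 2 * math.pi / N
val = h * sum(1.0 / (2 + math.cos((i + 0.5) * h)) for i in range(N))

expected = 2 * math.pi / math.sqrt(3)
assert abs(val - expected) < 1e-9
```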
Note that
$$\begin{array}{rl} x_1 = 0 \lor x_1 \geq 10 &\equiv (x_1 \geq 0 \land x_1 \leq 0) \lor x_1 \geq 10\\\\ &\equiv x_1 \geq 0 \land (x_1 \leq 0 \lor x_1 \geq 10)\end{array}$$
We can handle the disjunction $x_1 \leq 0 \lor x_1 \geq 10$ using the Big M method. We introduce binary variables $z_1, z_2 \in \{0,1\}$ such that $z_1 + z_2 = 1$, i.e., either $(z_1,z_2) = (1,0)$ or $(z_1,z_2) = (0,1)$. We introduce also a large constant $M \gg 10$ so that we can write the disjunction in the form
$$x_1 \leq M z_1 \land x_1 \geq 10 - M z_2$$
If $(z_1,z_2) = (1,0)$, we have $x_1 \leq M$ and $x_1 \geq 10$, which is roughly "equivalent" to $x_1 \geq 10$. If $(z_1,z_2) = (0,1)$, we have $x_1 \leq 0$ and $x_1 \geq 10 - M$, which is roughly "equivalent" to $x_1 \leq 0$.
Thus, we have a mixed-integer linear program (MILP):
$$\begin{array}{ll} \text{maximize} & 1.5x_1 + 2x_2\\ \text{subject to} & x_1, x_2 \leq 300\\ & x_1 \geq 0\\ & x_1 - M z_1\leq 0\\ & x_1 + M z_2 \geq 10\\ & z_1 + z_2 = 1\\ & z_1, z_2 \in \{0,1\}\end{array}$$
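One can sanity-check the Big M encoding by enumeration (my illustration, taking $M=1000$ and letting $x_1$ range over the feasible interval $[0,300]$): a value of $x_1$ admits a feasible binary pair $(z_1,z_2)$ exactly when the original disjunction holds.

```python
M = 1000  # M >> 10, and also larger than the upper bound 300 on x1

def encoding_feasible(x1):
    # Is there a binary pair (z1, z2) with z1 + z2 = 1 satisfying
    # x1 <= M*z1  and  x1 >= 10 - M*z2 ?
    return any(x1 <= M * z1 and x1 >= 10 - M * z2
               for z1, z2 in [(1, 0), (0, 1)])

def disjunction(x1):
    # The original condition: x1 = 0 or x1 >= 10
    return x1 == 0 or x1 >= 10

# Over the feasible range of x1, the two agree.
for x1 in range(0, 301):
    assert encoding_feasible(x1) == disjunction(x1)
```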
For a quick overview of MILP, read Mixed-Integer Programming for Control by Arthur Richards and Jonathan How. |
Last year I made a post about the universal program, a Turing machine program $p$ that can in principle compute any desired function, if it is only run inside a suitable model of set theory or arithmetic. Specifically, there is a program $p$, such that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$, there is a model $M\models\text{PA}$ — or of $\text{ZFC}$, whatever theory you like — inside of which program $p$ on input $n$ gives output $f(n)$.
This theorem is related to a very interesting theorem of W. Hugh Woodin’s, which says that there is a program $e$ such that $\newcommand\PA{\text{PA}}\PA$ proves $e$ accepts only finitely many inputs, but such that for any finite set $A\subset\N$, there is a model of $\PA$ inside of which program $e$ accepts exactly the elements of $A$. Actually, Woodin’s theorem is a bit stronger than this in a way that I shall explain.
Victoria Gitman gave a very nice talk today on both of these theorems at the special session on Computability theory: Pushing the Boundaries at the AMS sectional meeting here in New York, which happens to be meeting right here in my east midtown neighborhood, a few blocks from my home.
What I realized this morning, while walking over to Vika’s talk, is that there is a very simple proof of the version of Woodin’s theorem stated above. The idea is closely related to an idea of Vadim Kosoy mentioned in my post last year. In hindsight, I see now that this idea is also essentially present in Woodin’s proof of his theorem, and indeed, I find it probable that Woodin had actually begun with this idea and then modified it in order to get the stronger version of his result that I shall discuss below.
But in the meantime, let me present the simple argument, since I find it to be very clear and the result still very surprising.
Theorem.

1. There is a Turing machine program $e$, such that $\PA$ proves that $e$ accepts only finitely many inputs.
2. For any particular finite set $A\subset\N$, there is a model $M\models\PA$ such that inside $M$, the program $e$ accepts all and only the elements of $A$.
3. Indeed, for any set $A\subset\N$, including infinite sets, there is a model $M\models\PA$ such that inside $M$, program $e$ accepts $n$ if and only if $n\in A$.

Proof. The program $e$ simply performs the following task: on any input $n$, search for a proof from $\PA$ of a statement of the form "program $e$ does not accept exactly the elements of $\{n_1,n_2,\ldots,n_k\}$." Accept nothing until such a proof is found. For the first such proof that is found, accept $n$ if and only if $n$ is one of those $n_i$'s.
In short, the program $e$ searches for a proof that $e$ doesn’t accept exactly a certain finite set, and when such a proof is found, it accepts exactly the elements of this set anyway.
Clearly, $\PA$ proves that program $e$ accepts only a finite set, since either no such proof is ever found, in which case $e$ accepts nothing (and the empty set is finite), or else such a proof is found, in which case $e$ accepts only that particular finite set. So $\PA$ proves that $e$ accepts only finitely many inputs.
But meanwhile, assuming $\PA$ is consistent, then you cannot refute the assertion that program $e$ accepts exactly the elements of some particular finite set $A$, since if you could prove that from $\PA$, then program $e$ actually would accept exactly that set (for the shortest such proof), in which case this would also be provable, contradicting the consistency of $\PA$.
Since you cannot refute any particular finite set as the accepting set for $e$, it follows that it is consistent with $\PA$ that $e$ accepts any particular finite set $A$ that you like. So there is a model of $\PA$ in which $e$ accepts exactly the elements of $A$. This establishes statement (2).
Statement (3) now follows by a simple compactness argument. Namely, for any $A\subset\N$, let $T$ be the theory of $\PA$ together with the assertions that program $e$ accepts $n$, for any particular $n\in A$, and the assertions that program $e$ does not accept $n$, for $n\notin A$. Any finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. Any model of this theory realizes statement (3).
QED
One uses the Kleene recursion theorem to show the existence of the program $e$, which makes reference to $e$ in the description of what it does. Although this may look circular, it is a standard technique to use the recursion theorem to eliminate the circularity.
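To see that such self-reference is not magic, here is the simplest instance of the recursion-theorem trick in code (my illustration): a template applied to its own quoted form yields a program fragment that reconstructs its own text.

```python
# A template applied to its own quoted form gives a fixed point:
# executing program_text re-creates program_text itself.
src = 'src = %r\nprogram_text = src %% (src,)'
program_text = src % (src,)

namespace = {}
exec(program_text, namespace)
assert namespace['program_text'] == program_text  # the fixed point
```

The program $e$ in the theorem is built by the same device, except that instead of merely printing itself, it searches for proofs about itself.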
This theorem immediately implies the classical result of Mostowski and Kripke that there is an independent family of $\Pi^0_1$ assertions, since the assertions $n\notin W_e$ are exactly such a family.
The theorem also implies a strengthening of the universal program theorem that I proved last year. Indeed, the two theorems can be realized with the same program!
Theorem. There is a Turing machine program $e$ with the following properties:

1. $\PA$ proves that $e$ computes a finite function;
2. For any particular finite partial function $f$ on $\N$, there is a model $M\models\PA$ inside of which program $e$ computes exactly $f$.
3. For any partial function $f:\N\to\N$, finite or infinite, there is a model $M\models\PA$ inside of which program $e$ on input $n$ computes exactly $f(n)$, meaning that $e$ halts on $n$ if and only if $f(n)\downarrow$ and in this case $\varphi_e(n)=f(n)$.

Proof. The proof of statements (1) and (2) is just as in the earlier theorem. It is clear that $e$ computes a finite function, since either it computes the empty function, if no proof is found, or else it computes the finite function mentioned in the proof. And you cannot refute any particular finite function for $e$, since if you could, it would have exactly that behavior anyway, contradicting $\text{Con}(\PA)$. So statement (2) holds. But meanwhile, we can get statement (3) by a simple compactness argument. Namely, fix $f$ and let $T$ be the theory asserting $\PA$ plus all the assertions either that $\varphi_e(n)\uparrow$, if $n$ is not in the domain of $f$, or that $\varphi_e(n)=k$, if $f(n)=k$. Every finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. But any model of this theory exactly fulfills statement (3). QED
Woodin’s proof is more difficult than the arguments I have presented, but I realize now that this extra difficulty is because he is proving an extremely interesting and stronger form of the theorem, as follows.
Theorem. (Woodin) There is a Turing machine program $e$ such that $\PA$ proves $e$ accepts at most a finite set, and for any finite set $A\subset\N$ there is a model $M\models\PA$ inside of which $e$ accepts exactly $A$. And furthermore, in any such $M$ and any finite $B\supset A$, there is an end-extension $M\subset_{end} N\models\PA$, such that in $N$, the program $e$ accepts exactly the elements of $B$.
This is a much more subtle claim, as well as philosophically interesting for the reasons that he dwells on.
The program I described above definitely does not achieve this stronger property, since my program $e$, once it finds the proof that $e$ does not accept exactly $A$, will accept exactly $A$, and this will continue to be true in all further end-extensions of the model, since that proof will continue to be the first one that is found. |
The simplest approach (in terms of programming effort) might be to try using an existing graph layout tool. Those solve a related problem: given a graph with distances on the edges, try to find the best layout to draw the graph on the plane. You can treat your problem as an instance of the graph layout problem: we have one vertex per point, and for each pair of points $v,w$ with distance bounds $[\ell,u]$, we create an edge $v \to w$ with length $(\ell+u)/2$. However, this does have some limitations: typical graph layout algorithms try to get the distances between vertices correct, but also try to avoid edges that cross each other; whereas in your case you don't care about crossings. So, your problem might be easier.
Another possibility is to apply the ideas used for graph layout to your problem. There are several algorithmic techniques for graph layout. For instance, you could use a spring-based model, where you have a spring between each pair of vertices that have a distance bound, and the spring tries to keep that pair of vertices a suitable distance apart.
A third approach is to use black-box mathematical optimization. Introduce an objective function $\Phi$ which, given a set of locations for the points, calculates a penalty value (how "badly" the arrangement violates your constraints), and then try to find an arrangement that minimizes $\Phi$.
For instance, suppose for each pair $v,w$ of points, you have a lower bound $\ell_{v,w}$ and an upper bound $u_{v,w}$. You could define
$$\Phi(x_1,\dots,x_n) = \sum_{i,j} \frac{[||x_i - x_j||_2 - (u_{i,j} + \ell_{i,j})/2]^2}{(u_{i,j} - \ell_{i,j})^2},$$
and then use some optimization technique to find an arrangement $x_1,\dots,x_n$ that minimizes $\Phi(x_1,\dots,x_n)$. For instance, you could try using hillclimbing, gradient descent, or other general-purpose numerical optimization methods (note that $\Phi$ is generally not convex). This approach might be sensitive to the initial values for $x_1,\dots,x_n$, so you might want to repeat it multiple times with different random choices for the initial value, and take the best result.
Finally, you could try using simulated annealing.
The latter two approaches can be easily adjusted to incorporate angle constraints, simply by modifying the objective function appropriately to add a term that penalizes angles that differ from the desired value. |
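A minimal sketch of the penalty-minimization approach: the bound matrices `lower`/`upper` and the crude hillclimbing loop below are illustrative assumptions, not a prescribed implementation (a library optimizer such as `scipy.optimize.minimize` could replace the loop).

```python
import numpy as np

def penalty(x, lower, upper):
    """The objective Phi: squared, normalized deviation from each target distance.

    x     : (n, 2) array of candidate point locations
    lower : (n, n) array of lower distance bounds (hypothetical input format)
    upper : (n, n) array of upper distance bounds
    """
    total = 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(x[i] - x[j])
            mid = (upper[i, j] + lower[i, j]) / 2
            width = upper[i, j] - lower[i, j]
            total += ((d - mid) / width) ** 2
    return total

def hillclimb(lower, upper, n, steps=3000, seed=0):
    """Crude hillclimbing: perturb one random point, keep the move if Phi drops."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 10.0, size=(n, 2))
    best = penalty(x, lower, upper)
    for _ in range(steps):
        i = rng.integers(n)
        candidate = x.copy()
        candidate[i] += rng.normal(scale=0.3, size=2)
        p = penalty(candidate, lower, upper)
        if p < best:
            x, best = candidate, p
    return x, best
```

As noted above, in practice you would restart this from several random initial arrangements and keep the best result, since the outcome depends on where you start.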
Let $\lambda$ be a nonzero real constant. Find all functions $f,g: \Bbb R \rightarrow \Bbb R$ that satisfy the functional equation $f(x+y)+g(x−y)=\lambda f(x)g(y)$.
I try this :
Let $y=0$ in the equation to get $f(x)+g(x)=\lambda f(x)g(0)$
Here we have two cases:

Case 1: $g(0)=0$. Then $f(x)=-g(x)$.

Case 2: $g(0) \neq 0$. Then $g(x)=\beta f(x)$, where $\beta = \lambda g(0)-1$.
Is this correct? And if so, how can I complete the solution, especially in case two?
I was reading about the applications of the Euler-Lagrange equation in Mathematics for Physics by Stone and Goldbart; they were showing the application of the variational principle to the central force problem $F = -\partial_r V(r)$, where $V$ is the scalar potential; the authors then concluded $\frac{d}{dt}\left(mr^2\dot \theta\right) = 0$, which implies $l =mr^2\dot \theta = \textrm{const.}$
However, then they wrote:
Warning: We might realize, without having gone to the trouble of deriving it from the Lagrange equations, that rotational invariance guarantees that the angular momentum $l = mr^2\dot\theta $ is constant. Having done so, it is almost irresistible to try to short-circuit some of the labour by plugging this prior knowledge into $$L =\frac12 m\left(\dot r^2 + r^2\dot \theta^2\right) - V (r) \tag{1.53}$$
so as to eliminate the variable $\dot \theta$ in favour of the constant $l$. If we try this we get $$L \stackrel{?}{\to}\frac12 m\dot r^2 +\frac{l^2}{2mr^2} - V (r). \tag{1.54}$$
We can now directly write down the Lagrange equation for $r$, which is $$m\ddot r + \frac{l^2}{mr^3} \stackrel{?}{=}-\frac{\partial V}{\partial r} \tag{1.55}$$
Unfortunately this has the wrong sign before the $l^2/mr^3$ term!
The lesson is that we must be very careful in using consequences of a variational principle to modify the principle.
Indeed the sign is wrong; there should be a '$-$' and not a '$+$' in $(1.55)$. However, I don't understand why it yielded the wrong result, that is, the wrong sign.
After all, we have used the correct conclusion that came from the variational principle itself; still a wrong sign appeared. Why is it so?
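For reference, here is where the sign discrepancy shows up: varying the full Lagrangian $(1.53)$ first and substituting $l=mr^2\dot\theta$ only afterwards gives (a standard check, sketched here)

```latex
% Euler-Lagrange equation for r from the full Lagrangian (1.53):
\frac{d}{dt}\left(m\dot r\right) - mr\dot\theta^2 + \frac{\partial V}{\partial r} = 0
% substituting \dot\theta = l/(mr^2) only after varying:
\qquad\Longrightarrow\qquad
m\ddot r - \frac{l^2}{mr^3} = -\frac{\partial V}{\partial r}
```

which carries the '$-$' sign, in contrast with $(1.55)$.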
What did the authors mean by the last line?
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.

Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations end with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metrizable by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
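The final-digits order is concrete enough to sketch in code. The comparison below follows the right-to-left lexical rule stated in the proof; it is an illustration only, not part of the paper.

```python
def bits_lsb_first(n):
    """Binary digits of n, least-significant digit first; 0 gives the empty list."""
    digits = []
    while n:
        digits.append(n & 1)
        n >>= 1
    return digits

def fd_less(n, m):
    """The final-digits order: compare binary digits from the right; at the first
    disagreement, 0 is lower; if one representation runs out, the longer number
    is lower exactly when its next digit is 0."""
    a, b = bits_lsb_first(n), bits_lsb_first(m)
    for x, y in zip(a, b):
        if x != y:
            return x < y
    if len(a) == len(b):
        return False  # the numbers are equal
    longer = a if len(a) > len(b) else b
    next_digit = longer[min(len(a), len(b))]
    return (next_digit == 0) == (len(a) > len(b))
```

With this comparison, the even numbers land in the left half and the odd numbers in the right half, with $0$ in the middle, as described above; and the school-child's observation corresponds to the fact that the final digits of a sum or product depend only on the final digits of the inputs.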
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
OGLE-2017-BLG-0173Lb: Low-mass-ratio Planet in a "Hollywood" Microlensing Event
(2018)
We present microlensing planet OGLE-2017-BLG-0173Lb, with planet-host mass ratio either $q\simeq 2.5\times 10^{-5}$ or $q\simeq 6.5\times 10^{-5}$, the lowest or among the lowest ever detected. The planetary perturbation ...
OGLE-2016-BLG-1045: A Test of Cheap Space-Based Microlens Parallaxes
(2018)
Microlensing is a powerful and unique technique to probe isolated objects in the Galaxy. To study the characteristics of these interesting objects based on the microlensing method, measurement of the microlens parallax ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-0329L: A Microlensing Binary Characterized with Dramatically Enhanced Precision Using Data from Space-based Observations
(2018)
Mass measurements of gravitational microlenses require one to determine the microlens parallax PIe, but precise PIe measurement, in many cases, is hampered due to the subtlety of the microlens-parallax signal combined ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...
KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion
(2018)
We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ... |
Answer
Amplitude $=2$; Period $=\pi$ sec; frequency $=\frac{1}{\pi}$ rotations per sec
Work Step by Step
Since the particle moves uniformly around a circle of radius $2$ units, its amplitude $a$ is $2$. Also, it is given that the angular speed $\omega$ is $2$ radians per second. As we are interested in the displacement $s(t)$ of the particle from the equilibrium position, the equation is: $s(t)=a\sin \omega t$, so $s(t)=2\sin 2t$. We know that the amplitude is $2$. In addition, $\omega$ can be used to find the period: Period $=\frac{2\pi}{\omega}=\frac{2\pi}{2}=\pi$ sec. Since frequency is the reciprocal of the period, frequency $=\frac{1}{\pi}$ rotations per sec.
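The arithmetic above can be checked in a few lines; this is just a sketch, with `omega` taken as the angular speed in radians per second.

```python
import math

amplitude = 2                  # radius of the circular motion, in units
omega = 2                      # angular speed, in radians per second
period = 2 * math.pi / omega   # Period = 2*pi / omega
frequency = 1 / period         # frequency is the reciprocal of the period
```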
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site that has this book, to which my university hopefully has a subscription? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
I am trying to prove that
$\qquad L=\{\langle M\rangle \mid M \text{ is a TM }, \exists w. \text{ in } M(w) \text{ the head moves only right and } M(w)\!\uparrow \}$
is decidable.
I thought about the following solution:
Let's build $\hat{M}$, a TM that will decide $L$. On input $\langle N \rangle$:

1. $\Sigma^{|Q|+1}$ is a finite set, so it is decidable and has an enumerator $f$.
2. For every word $w\in\Sigma^{|Q|+1}$, simulate $N$ on $w$ in parallel for $|Q|+1$ steps: if $N$ reaches a blank cell, then $N$ moved only right and is stuck in a loop.
3. If all of those simulations stopped at a blank, accept; else reject.
I am quite sure I am missing something here. Can you help please? |
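For intuition, the key observation behind step 2 can be sketched in code: once the head is in the all-blank region and only moves right, the configuration is determined by the current state alone, so a repeated state there means an infinite loop (pigeonhole). The TM encoding below, a `delta` dictionary mapping `(state, symbol)` to `(state, symbol, move)`, is made up for illustration and is not the question's exact construction.

```python
def runs_right_forever(delta, start, word, blank="_"):
    """Simulate the machine on `word`. Return True if the head only ever moves
    right and, once in the blank region, revisits a state (hence loops forever);
    return False if the machine halts or moves left."""
    tape = dict(enumerate(word))
    state, pos = start, 0
    seen_in_blanks = set()
    while True:
        sym = tape.get(pos, blank)
        if (state, sym) not in delta:
            return False                # no transition: the machine halts
        state, tape[pos], move = delta[(state, sym)]
        if move != "R":
            return False                # the head moved left: not our case
        pos += 1
        if pos >= len(word):            # head is now in the all-blank region
            if state in seen_in_blanks:
                return True             # same state on a fresh blank: loops
            seen_in_blanks.add(state)
```

Since there are only $|Q|$ states, the simulation above always terminates within $|Q|+1$ steps after leaving the input, which is the bound used in the question.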
Implementation of CERN secondary beam lines T9 and T10 in BDSIM / D'Alessandro, Gian Luigi (CERN ; JAI, UK) ; Bernhard, Johannes (CERN) ; Boogert, Stewart (JAI, UK) ; Gerbershagen, Alexander (CERN) ; Gibson, Stephen (JAI, UK) ; Nevay, Laurence (JAI, UK) ; Rosenthal, Marcel (CERN) ; Shields, William (JAI, UK) CERN has a unique set of secondary beam lines, which deliver particle beams extracted from the PS and SPS accelerators after their interaction with a target, reaching energies up to 400 GeV. These beam lines provide a crucial contribution for test beam facilities and host several fixed target experiments. [...] 2019 - 3 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW069 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW069
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085
Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064
The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063
The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061
Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066
Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca ; Insubria U., Varese) ; Brizzolari, C (INFN, Milan Bicocca ; Insubria U., Varese) ; Cecchini, S (INFN, Bologna) ; Chignoli, F (INFN, Milan Bicocca ; Milan Bicocca U.) ; Cindolo, F (INFN, Bologna) ; Collazuol, G (INFN, Padua) ; Delogu, C (INFN, Milan Bicocca ; Milan Bicocca U.) ; Gola, A (Fond. Bruno Kessler, Trento ; TIFPA-INFN, Trento) ; Jollet, C (Strasbourg, IPHC) ; Longhin, A (INFN, Padua) et al. Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061
Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023
Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.)/Baby MIND The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018 - 7 p. - Published in : PoS NuFact2017 (2018) 078 In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.078
Here is the Gamma PDF:
$f(x) = \frac{1}{\Gamma(a) b} (\frac{x}{b})^{a-1} e^{-x/b} \;\; x\geq 0; a , b>0$
The mean is $ab$ and the variance is $ab^2$. When $a=1$ it is equivalent to the exponential distribution. In fact, when $a$ is an integer, it is equivalent to the sum of $a$ independent exponentially distributed random variables, each of which has mean $b$. It is shaped like the exponential distribution with a spike at 0 for $a<1$, but has a mode at $(a-1)b$ for $a>1$ (see the Wikipedia article).
MacKay suggests representing the positive real variable x in terms of its logarithm z=ln x (ITILA, pp. 314). This will give us a better idea about the order of magnitude of typical x in terms of a and b. The distribution in terms of z is:
$f(z) = \frac{1}{\Gamma(a)} (\frac{x}{b})^a e^{-x/b} \;\; z \in \mathbb{R};\ x=e^z;\ a, b>0$
We can get an idea about the shape of f(z) by looking at its first two derivatives with respect to z:
$f'(z) = f(z) (a-\frac{x}{b})$
$f''(z) = f(z) (a^2 - (2a+1)\frac{x}{b} + (\frac{x}{b})^2)$
The graph above shows f(z) and its two derivatives for a=1/10 and b=10. The first derivative tells us that f(z) has a single mode at x=ab. Note that x=ab is the mean of f(x) but only the mode (not the mean) of f(z). The curve rises slowly to the left of the mode and falls sharply to the right. The second derivative has two roots, which give the points of minimum and maximum slope:
$x = ab + \frac{b}{2} \pm \frac{b}{2} \sqrt{1+4a}$.
Now we are going to look at the limit where a<<1, typically used as a vague prior. The height of the mode at x=ab is $a^a e^{-a}/\Gamma(a)$. $\Gamma(a)$ is well approximated by $1/a$ for small $a$, and $a^a$ and $e^{-a}$ both go to 1, so f(z) ≈ a at the mode.
Next, let's look at the right side (x>ab) where f(z) seems to fall sharply. According to the roots of the second derivative given above, the minimum slope occurs at around x=b (if we ignore the terms with a<<1). The value of f(z) when x=b is 1/(e Γ(a)). Γ(a) is well approximated by 1/a for small a, so this value is approximately a/e. The slope at x=b is approximately -a/e and if we fit a line at that point the line would cross 0 at x=eb. Thus for small a, the probability can be considered negligible for x>eb.
Next, let's look at the left side (x < ab) where f(z) appears more flat. The maximum slope occurs around x=a²b (if we approximate $\sqrt{1+4a}$ with $1+2a-2a^2$). The slope at x=a²b is approximately a², which gives a flat shape for x<ab when a<<1.
In summary, when used with a<<1, f(z) rises slowly for x<ab (with approximate slope a²) and falls sharply for x>ab (with approximate slope -a/e). You are unlikely to see x values larger than eb from such a distribution, but you may see values much smaller than the mean ab. Thus a vague Gamma prior is practically putting an upper bound on your positive value. The figure below shows how the f(z) distribution starts looking like a step function as the shape parameter approaches 0 (b=1/a and the peak heights have been matched for comparison).
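These small-$a$ approximations are easy to sanity-check numerically. The sketch below is mine, not part of the original post; the values a=0.1, b=10 match the figure described above, and the slope is estimated by a central difference.

```python
import math

# Density of z = ln(x) when x ~ Gamma(shape a, scale b):
# f(z) = (x/b)^a * exp(-x/b) / Gamma(a), with x = e^z.
def f_z(z, a, b):
    x = math.exp(z)
    return (x / b) ** a * math.exp(-x / b) / math.gamma(a)

a, b = 0.1, 10.0

# Height at the mode x = a*b: a^a * e^(-a) / Gamma(a), roughly a for small a.
peak = f_z(math.log(a * b), a, b)

# Slope at x = b (the steepest descent), roughly -a/e.
h = 1e-6
z_b = math.log(b)
slope = (f_z(z_b + h, a, b) - f_z(z_b - h, a, b)) / (2 * h)

print(peak, slope)
```

For a=0.1 the peak comes out near 0.076 and the slope near -0.035, in line with the a and -a/e approximations.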
I should also note that in the limit where a→0 and ab=1, we get an improper prior where f(z) becomes flat and the Gamma distribution becomes indifferent to the order of magnitude of the random variable. However it flattens a lot faster on the left than on the right.
If $A^2$ is regular, does it follow that $A$ is regular?
My attempt on a proof:
Yes, for contradiction assume that $A$ is not regular. Then $A^2 = A \cdot A$.
Since the concatenation of two non-regular languages is not regular, $A^2$ cannot be regular. This contradicts our assumption. So $A$ is regular. So if $A^2$ is regular, then $A$ is regular.
Is the proof correct?
Can we generalize this to $A^3$, $A^4$, etc.? And is it also the case that if $A^*$ is regular, then $A$ need not be regular?
Example: $A=\lbrace 1^{2^i} \mid i \geq 0\rbrace$ is not regular but $A^*$ is regular. |
I want to define a notion of "closeness" between two regular languages of finite words in $\Sigma^*$ (and/or infinite words in $\Sigma^\omega$). The basic idea is that we want two languages to be close if they don't differ by many words. We could also use the edit distance in some way... I could not find good references on this issue.
I don't call it a distance because I don't require all the distance axioms to be true (although it's not bad if they are).
A first attempt is to define $$d(L,K)= \limsup_{n\to\infty} \frac{|L_n\Delta K_n|}{|L_n\cup K_n|}$$ where $L_n$ and $K_n$ are the restrictions of $L$ and $K$ to $\Sigma^n$, and $\Delta$ is the symmetric difference.
Is this "distance" studied? Are there references on the subject (possibly with alternative choices for distance function)? Any help or pointer would be appreciated, thanks. |
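For any fixed $n$, the quantity inside the limsup is directly computable from automata for the two languages. A small illustrative sketch (the two example languages, the DFA encoding, and all names are my own, not from the question) brute-forces $|L_n\Delta K_n|/|L_n\cup K_n|$ over $\Sigma^n$:

```python
from itertools import product

# Run a DFA given as delta[state][symbol] -> state.
def accepts(delta, start, accepting, word):
    s = start
    for c in word:
        s = delta[s][c]
    return s in accepting

# L: words over {0,1} with an even number of 1s.
L = ({0: {'0': 0, '1': 1}, 1: {'0': 1, '1': 0}}, 0, {0})
# K: words ending in 0.
K = ({0: {'0': 1, '1': 0}, 1: {'0': 1, '1': 0}}, 0, {1})

# n-th term of the proposed "distance": |L_n Δ K_n| / |L_n ∪ K_n|.
def d_n(A, B, n):
    sym = uni = 0
    for w in product('01', repeat=n):
        a, b = accepts(*A, w), accepts(*B, w)
        sym += a != b
        uni += a or b
    return sym / uni if uni else 0.0

print([d_n(L, K, n) for n in range(1, 8)])
```

For large $n$ one would instead count accepting paths with the product automaton's transition matrix rather than enumerating words.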
Is the number represented by the decimal
0.001001002003005008013021034055089144233377610...
rational, or irrational? Please justify your answer.
Note by Matt Enlow 5 years, 6 months ago
Let $x=\sum_{n=1}^{\infty} F(n)\,10^{-3n}$. A little fiddling with this, using an approach you (yes, you, Matt) used in another problem similar to this, gets us the equation to solve: $1000(1000x-x-1)=x$, which means $x$ is rational. I leave it to the reader as an exercise to find that value of $x$. Thanks for the tip on how to do this, Matt!
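The resulting value $\frac{1000}{998999}$ is easy to confirm with exact rational arithmetic; the following check is mine, not part of the thread. It compares a long partial sum of $\sum F(n)\,10^{-3n}$ against the closed form:

```python
from fractions import Fraction

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a  # F(1) = F(2) = 1, F(3) = 2, ...

# x = sum_{n >= 1} F(n) * 10^(-3n): each Fibonacci number occupies a
# three-digit "slot"; overlaps past F(16) = 987 simply carry.
partial = sum(Fraction(fib(n), 1000 ** n) for n in range(1, 40))
target = Fraction(1000, 998999)
print(target - partial)  # a tiny positive tail
```

The difference is the positive tail of the series, smaller than $10^{-30}$ after 39 terms, which is strong numerical evidence for the identity.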
The other version that I like: Is the decimal
0.1123581321345589144...
rational or irrational?
It simplifies the consideration of "What happens if the fibonacci number has 4 digits or more?"
@Calvin Lin Hmmm, I remember this one from somewhere else, not sure where.
To prove it: it can be shown that the Fibonacci digit sequence is never eventually periodic, which goes to show that the number is irrational.
I think we discussed this on brilliant a while back.
Yup, I believe that there was a past discussion about it. This is a gem, and worth revisiting.
@Calvin Lin – Ok lets see If i remember we write the above sequence like
$F=0.f_1f_2f_3\ldots f_n$, $F(n)\equiv f(n) \bmod 10$. Then $f_{n+2}=f_{n+1}+f_n \bmod 10$.
We can develop on this to show the existence of a Fibonacci number congruent to $-1 \bmod 10^k$, but we will have to show that for any modulus $p$, $F_n \bmod p$ for $n$ (from $-\infty$ to $\infty$) is periodic.
I will have to think about it a bit more; my number theory is a bit rough.
@Beakal Tiliksew – Can you post the link to the previous discussion?
@Eddie The Head – I found the answer before I could figure it out. Since it is a solution of @Calvin Lin, I won't post it here, so as not to spoil it and so others can try it. I could mail you the link if you want.
You can try by proving that
For every $k$ there exists a Fibonacci number whose decimal representation ends in $k$ 9's,
One idea that we can try is proving that given any digit, and any number n, there always exist some Fibonacci number that has a sequence of that digit n times. If we were to prove that we can always find in some Fibonacci number a sequence like 11111.. as arbitrarily long as we'd like, that would pretty much prove that the decimal never repeats, and therefore it'd be irrational.
Would I want to try proving this? Ah, maybe too much for me now.
If I were to make a snap judgement on this one, I'd say not only it's probably irrational, it's probably even transcendental. I'm almost afraid to try touching this one.
As a matter of fact, Calvin, your suggested decimal number is similar to Champernowne's constant, which is 0.123456789101112131415..., which was shown to be transcendental. Not easy to do that.
Your answers are complex. Why is math so complex?
The first question I'd have is, what does the number look like after you get to four digit Fibonacci numbers? Would the thousand's digit of one segment add to the one's digit of the previous?
Yes, great question. And I would say that your suggestion as to how to resolve this ambiguity is the one that makes the most sense. (Also see @Calvin Lin 's comment.)
It equals $\frac{1000}{998999}$, a rational number (see my response to Eddie the Head).
Since it can be converted into p/q form, it should be a rational number. What do you say?
How can you convert it to p/q form? It has no repeating parts as far as I see :p ... But I would love to see a rigorous solution.
The generating function for the Fibonacci sequence is
$F(x)=f_0+f_1x+f_2x^2+f_3x^3+\dots$
Since $f_n=f_{n-1}+f_{n-2}$, we have to cancel some terms
F(x)=f0+f1x+f2x2+f3x3+…F(x)=f_0+f_1x+f_2x^2+f_3x^3+\dotsF(x)=f0+f1x+f2x2+f3x3+…xF(x)=f0x+f1x2+f2x3+f3x4+…xF(x)=f_0x+f_1x^2+f_2x^3+f_3x^4+\dotsxF(x)=f0x+f1x2+f2x3+f3x4+…x2F(x)=f0x2+f1x3+f2x4+f3x5+…x^2F(x)=f_0x^2+f_1x^3+f_2x^4+f_3x^5+\dotsx2F(x)=f0x2+f1x3+f2x4+f3x5+…
Gathering the coefficients of $(1-x-x^2)F(x)$, we get
$(1-x-x^2)F(x)=f_0+(f_1-f_0)x+(f_2-f_1-f_0)x^2+(f_3-f_2-f_1)x^3+\dots=f_0+(f_1-f_0)x$
Hence,
$F(x)=\frac{f_0+(f_1-f_0)x}{1-x-x^2}=\frac{x}{1-x-x^2}=f_0+f_1x+f_2x^2+f_3x^3+\dots$ (using $f_0=0$, $f_1=1$)
If we want to encode this into a decimal, we just let $x=\frac{1}{1000}$. The answer is
$f_0+f_1\cdot\frac{1}{1000}+f_2\cdot\frac{1}{1000^2}+f_3\cdot\frac{1}{1000^3}+\dots=\frac{\frac{1}{1000}}{1-\frac{1}{1000}-\frac{1}{1000^2}}=\frac{1000}{998999}$
But beware of convergence issues! $\frac{1}{1000}$ is sufficiently small, so I assumed it converged, which it did. Challenge: find $0.001002003004\dots$ and $0.001004009016025\dots$.
@Cody Johnson – $\textbf{Challenge 1.}$
$F(x) = x + 2x^{2} + 3x^{3} + 4x^{4} + \dots$
$xF(x) = x^{2} + 2x^{3} + 3x^{4} + \dots$
$(1-x)F(x) = x + x^{2} + x^{3} + x^{4} + \dots$, so $F(x) = \frac{x}{(1-x)^{2}}$. Hence we have $F(\frac{1}{1000}) = \frac{1000}{998001}$.
We can clearly see that the terms converge.
$\textbf{Challenge 2.}$
In this problem we must use the same approach twice; the resulting generating function is $F(x) = \frac{x(1+x)}{(1-x)^{3}}$.
Hence $F(\frac{1}{1000}) = \frac{1001000}{997002999}$.
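As an independent sanity check (code and values mine, not from the thread), the standard generating function for the squares, $\sum_{n\ge1} n^2x^n=\frac{x(1+x)}{(1-x)^3}$, can be evaluated at $x=\frac{1}{1000}$ with exact rational arithmetic and compared against a long partial sum:

```python
from fractions import Fraction

x = Fraction(1, 1000)
closed = x * (1 + x) / (1 - x) ** 3          # x(1+x)/(1-x)^3
partial = sum(n * n * x ** n for n in range(1, 60))
print(closed, closed - partial)
```

The closed form reduces to $\frac{1001000}{997002999}$, whose decimal expansion begins 0.001004009016025...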
@Cody Johnson – But what will your logic be for convergence? I think it is because the denominator is increasing exponentially and the numerator is not.
@Eddie The Head – The Fibonacci numbers do increase exponentially with ratio $\frac{\sqrt5+1}{2}$, but this is less than $1000$.
Just do partial fraction decomposition on the LHS and you can find the interval of convergence.
@Cody Johnson – Nice... I'm familiar with the usage of generating functions in association with the terms of a series, but in this case it didn't come to my mind at first glance.
What value of p and q would you say they are??
Rational, because nothing is repeated till now.
irrational
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides and squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
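One subtlety worth seeing concretely: $K$ need not itself be prime, but its prime factors are always new. A quick demonstration (my own sketch, using the first six primes):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [2, 3, 5, 7, 11, 13]
K = 1
for p in primes:
    K *= p
K += 1  # K = 2*3*5*7*11*13 + 1 = 30031, which is composite...

# ...yet none of its prime factors is in the original list.
factors = [d for d in range(2, K + 1) if K % d == 0 and is_prime(d)]
print(K, factors)  # → 30031 [59, 509]
```

So the argument never claims $K$ is prime, only that either way a prime outside $G$ must exist.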
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
The first player in Hex has a winning strategy.
There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof:Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But $p$ and $q$ must each have degree exactly $5$ (their degrees sum to $\deg r = 10$), and a real polynomial of odd degree always has a real zero. Therefore, $r(x)=p(x)q(x)$ has a real zero. A contradiction.
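The key fact, that $r(x)=\frac{1}{11}(1+x+\dots+x^{10})$ is strictly positive on the real line, can be spot-checked numerically (my own sketch, not part of the answer):

```python
def r(x):
    # r(x) = (1 + x + ... + x^10) / 11
    return sum(x ** i for i in range(11)) / 11.0

# For x != 1, r(x) = (x^11 - 1) / (11(x - 1)): numerator and denominator
# always share the same sign, so r stays positive; sample a grid to see it.
samples = [i / 100.0 for i in range(-500, 501)]
print(min(r(x) for x in samples))
```

A grid check is of course not a proof, but it makes the "no real zeros" step tangible.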
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine, you remove two tiles, from two opposite corners of the original square. Prove that is is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles with the same color. Thus, it is no longer possible to cover the remaining area.
(Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the
rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17),\,$ We can iterate this process with the new points $\,B,C,D\,$ doing the same we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (-3,4,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
I like the proof that there are infinitely many Pythagorean triples.
Theorem:There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof:$$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips onto the sphere of which the disk is an equatorial cross-section. Each strip projects to a spherical zone of area at most $2\pi R w = 100\pi$ (here $R=50$, $w=1$), which is $1/100$th of the sphere's area $4\pi R^2 = 10000\pi$. Since the strips cover the disk, their projections would have to cover the whole sphere, but 99 strips cover at most $99/100$ of it.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements of $S$, it must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it's easier to follow if you see it for a specific $n$ first.
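The statement is small enough to verify exhaustively for $2n=10$ (choosing 6 of the first 10 integers); a brute-force check of both the bound and its sharpness (my own sketch):

```python
from itertools import combinations

def has_divisible_pair(s):
    s = sorted(s)
    return any(b % a == 0 for i, a in enumerate(s) for b in s[i + 1:])

# Every 6-element subset of {1..10} contains such a pair...
assert all(has_divisible_pair(c) for c in combinations(range(1, 11), 6))
# ...while 5 elements can avoid it.
print(has_divisible_pair({6, 7, 8, 9, 10}))  # → False
```

The set {6, 7, 8, 9, 10} shows that "more than half" is the right threshold: exactly half the elements can avoid a divisible pair.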
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of sine and cosine functions using Euler's formula:
$e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$
$e^{-i\theta} = \frac{1}{e^{i\theta}} = \frac{1}{\cos\theta + i\sin\theta} = \frac{\cos\theta - i\sin\theta}{\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$
$\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$
Thus
$\cos(-\theta) = \cos\theta$
$\sin(-\theta) = -\sin\theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. He tackled it quicker than his peers or his teacher could, $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \space times}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
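The pairing argument is trivially machine-checkable; a few lines (mine, not part of the answer) confirm the closed form across a range of $n$:

```python
# n(n+1)/2 matches the direct sum for a range of n.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

print(sum(range(1, 101)))  # → 5050
```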
If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\bigcap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem from noting that modulo a prime p we have for $a\neq 0$:
$$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$
Proposition (No universal set): There does not exist a set which contains all sets (even itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: Assuming that one is working in ZF (as almost everywhere :P)
(In particular this proof really impressed me too much the first time and also is very simple)
Most proofs concerning the Cantor Set are simple but amazing.
The total length (measure) of the set is zero.
It is uncountable.
Every number in the set can be represented in ternary using just 0 and 2. No number that requires a 1 in its ternary representation appears in the set.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge, which is a 3d extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.
The derivation of differentiation from first principles is amazing, easy, useful and simply outstanding in all aspects. I put it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ where coordinates are $(x + Δx, y + Δy)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan θ = \frac{Δy}{Δx} = \frac{(y + Δy ) − y}{Δx}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $Δy$ and $Δx$ decrease and approach zero; though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish.
What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$, as shown in Figure (b). This means that $\tan θ$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$
The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as:
$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \left[\frac{f(x+Δx)-f(x)}{Δx}\right]$,
which is the required formula.
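To see the limit in action, here is a quick numerical sketch (my addition, not the answerer's) that evaluates the difference quotient for shrinking $Δx$:

```python
def slope(f, x, dx):
    """The difference quotient (f(x + Δx) - f(x)) / Δx from the derivation."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2          # its derivative is 2x exactly
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, slope(f, 3.0, dx))   # the quotient approaches 6 as Δx -> 0
```

For $f(x)=x^2$ at $x=3$ the quotients are $7, 6.1, 6.001, \dots$, converging to the true slope $6$.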
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $.
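A quick numerical check (my addition) of the bound $1 \le n^{1/n} < 1 + 3n^{-1/2}$ derived above:

```python
# Verify the squeeze 1 <= n^(1/n) < 1 + 3/sqrt(n) for a few n
for n in (2, 10, 100, 10 ** 6):
    lhs = n ** (1 / n)
    bound = 1 + 3 / n ** 0.5
    print(n, lhs, bound)
    assert 1 <= lhs < bound   # n^(1/n) is squeezed toward 1
```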
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
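The colouring argument can be spelled out in a few lines of Python (my sketch):

```python
# Every knight move changes row + col by an odd amount, so the square
# colour ((row + col) % 2) flips on each move.
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
assert all((dr + dc) % 2 == 1 for dr, dc in moves)

# Visiting all 64 squares from a corner takes 63 moves, i.e. an odd
# number of colour flips -- yet opposite corners share the same colour.
start, end = (0, 0), (7, 7)
print(sum(start) % 2 == sum(end) % 2)  # True: same colour, so no such tour
```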
The eigenvalues of a skew-Hermitian matrix are purely imaginary.
The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide it from both sides, leaving $\lambda = -\lambda^*$, so $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian, $A^{T*} = -A$. Using the definition of Hermitian or unitary matrices instead yields corresponding statements about the eigenvalues of those matrices.
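A numerical illustration (not part of the original answer) using NumPy: any matrix of the form $B - B^{H}$ is skew-Hermitian by construction, and its eigenvalues indeed have vanishing real part.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B - B.conj().T            # (B - B^H)^H = -(B - B^H): skew-Hermitian

eigvals = np.linalg.eigvals(A)
print(eigvals)
assert np.allclose(eigvals.real, 0)   # purely imaginary up to rounding
```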
I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep. |
Visualization Math
The formula of the normal distribution is
$$ f(x)=\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$
where $\mu$ controls the “center” or “peak” of the distribution and $\sigma$ tells us how “wide” or “disperse” the distribution is.
To understand the distribution, we take some limits.
$x = \mu$
First of all, when $x = \mu$ the exponent vanishes and we have
$$ f(\mu)=\frac{1}{\sigma\sqrt{2\pi}}. $$
Notice the argument of the exponential is minus a squared value, so it can never be positive; the exponential is therefore at most $1$, with the maximum attained at $x=\mu$. This gives us the peak value.
$x=\mu-a$ and $x=\mu + a$
For $x=\mu-a$, we have
$$ f(\mu-a)=\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{a^2}{2\sigma^2}}. $$
For $x=\mu + a$, we have
$$ f(\mu+a)=\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{a^2}{2\sigma^2}}, $$
which is exactly the same as the previous case.
The distribution is symmetric around $x=\mu$.
$x=\pm \infty$
We have $f(x) \to 0$ in both cases.
Tricks Integral
We integrate distributions a lot. For the Gaussian distribution, it is quite helpful to remember the following identity:
$$ \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}. $$
It tells us that for $\mu=0$ and $\sigma=1/\sqrt{2}$, the area under the (unnormalized) distribution is $\sqrt{\pi}$.
Hey, it is time to ask the question. Where the hell is the circle?
Error Function
The error function is defined as
$$ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt. $$
Obviously, the coefficient $\frac{2}{\sqrt{\pi}}$ normalizes the function to lie within $[-1,1]$. |
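Both the $\sqrt{\pi}$ identity and the erf normalisation can be sanity-checked numerically (my sketch, using only the standard library):

```python
import math

# crude Riemann sum of e^{-x^2} over [-10, 10); the tails are negligible
dx = 1e-4
total = dx * sum(math.exp(-(k * dx) ** 2) for k in range(-100_000, 100_000))
print(total, math.sqrt(math.pi))   # both about 1.7724538...

# erf saturates at +-1, confirming the normalising coefficient
print(math.erf(10), math.erf(-10))
```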
If your goal is to assume the fluid inside is frictionless, then consider a rotating hollow sphere with a non-rotating mass inside. Include the total mass of the shell and water in $m$, but only include the inertia from the shell in $I$. Secondly, if you want the acceleration then you can't rely on energy methods, and need to write a free body diagram in 2D. I've oriented the x-axis along the downward ramp direction, and the y-axis perpendicular to that.
$a_x$ — acceleration of the object's center along the downward ramp [$m/s^2$]
$\alpha$ — angular acceleration (about the z-axis) of the object [$rad/s^2$]
$f_x$ — friction force between shell and ramp (along the x-axis, pointed opposite $a_x$) [$N$]
$m$ — the total mass of the object (fluid + shell) [$kg$]
$g$ — acceleration due to gravity [$m/s^2$]
$\theta$ — angle between the ramp and the horizontal ground [$rad$]
$I$ — moment of inertia about the center of mass for the shell [$kg \cdot m^2$]
$N=mg\text{cos}(\theta)$ — normal force perpendicular to the ramp surface (positive y-direction) [$N$]
$R$ — outer radius of the shell [$m$]
$M_z$ — moments about the z-axis [$Nm$]
First solve for the friction force $f$ assuming that no slip occurs, implying the shell's angular acceleration is matched to its center acceleration: $$a_x=\alpha R$$ Summing the moments (about the z-axis) to solve for the friction force $f$: $$\sum M_z = I\alpha = fR$$
$$f = \frac{Ia_x}{R^2}$$
For completeness here is the y-direction equation of motion, though it's not needed:$$\sum F_y= 0 = N - mg \cdot \text{cos}(\theta)$$Next, create the x-direction equation of motion:$$\sum F_x = ma_x = mg \cdot \text{sin}(\theta)-f$$Substitute in the previous $f$ and solve for $a_x$:
$$ a_x = \frac{mg \cdot \text{sin}\theta}{m+I/R^2}$$ |
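As a worked example (the numbers and the thin-shell inertia $I=\tfrac{2}{3}m_{shell}R^2$ below are my own illustration values, not from the question):

```python
import math

def ramp_acceleration(m_total, I_shell, R, theta, g=9.81):
    """a_x = m g sin(theta) / (m + I / R^2), the result derived above."""
    return m_total * g * math.sin(theta) / (m_total + I_shell / R ** 2)

# 1 kg thin shell of radius 0.1 m holding 3 kg of frictionless fluid,
# rolling without slip down a 30-degree ramp; only the shell has inertia.
m_shell, m_fluid, R = 1.0, 3.0, 0.1
I = (2 / 3) * m_shell * R ** 2        # thin spherical shell about its center
a = ramp_acceleration(m_shell + m_fluid, I, R, math.radians(30))
print(a)   # slightly below g*sin(30 deg) = 4.905 because of the rotation
```

With $I=0$ (no rotational inertia at all) the formula collapses to plain frictionless sliding, $a_x = g\sin\theta$, which is a useful sanity check.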
I want to understand the tensor product $\mathbb C$-algebra $\mathbb{C}\otimes_\mathbb{R} \mathbb{C}$. Of course it must be isomorphic to $\mathbb{C}\times\mathbb{C}.$ How can one construct an explicit isomorphism?
An explicit isomorphism of $\mathbb C$-algebras is given (on generators) by $ \mathbb C\otimes _\mathbb R \mathbb C\stackrel {\cong }{\to} \mathbb C\times \mathbb C: z\otimes w \mapsto (z\cdot w,z\cdot\bar w)$.
Here $ \mathbb C \otimes _\mathbb R \mathbb C$ is considered as a $\mathbb C$-algebra through its first factor: $z_1\cdot (z\otimes w)=z_1 z\otimes w $
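As a quick numerical sanity check (my addition), one can verify that the map $z\otimes w \mapsto (zw, z\bar w)$ is multiplicative on simple tensors, where $(z_1\otimes w_1)(z_2\otimes w_2)=z_1z_2\otimes w_1w_2$:

```python
import random

def phi(z, w):
    """Image of the simple tensor z (x) w under the isomorphism above."""
    return (z * w, z * w.conjugate())

random.seed(1)
rand_c = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(100):
    z1, w1, z2, w2 = rand_c(), rand_c(), rand_c(), rand_c()
    lhs = phi(z1 * z2, w1 * w2)                              # product mapped
    rhs = [a * b for a, b in zip(phi(z1, w1), phi(z2, w2))]  # componentwise
    assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
print("phi is multiplicative on simple tensors")
```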
Write $\mathbb C=\mathbb R[x]/\langle x^2+1\rangle$ for one of the copies. Then, using a universal property of tensor products, $$ \mathbb C\otimes_{\mathbb R} \mathbb C \;\approx\; \mathbb R[x]/\langle x^2+1\rangle \otimes_{\mathbb R}\mathbb C \;\approx\; \mathbb C[x]/\langle (x+i)(x-i)\rangle \;\approx\; \mathbb C[x]/\langle x+i\rangle \oplus \mathbb C[x]/\langle x-i\rangle $$ the last isomorphism via Sun-Ze's theorem (a.k.a. "Chinese Remainder Theorem"). That last isomorphism can be made explicit by choice of polynomials $A(x),B(x)$ such that $A(x)\cdot (x+i) + B(x)\cdot (x-i)=1$.
Edit: this treats "right" $\mathbb C$-algebra, but/and reversing the roles gives the same outcome as "left" algebra.
As a minor complement to Georges's answer, I'll try to convince the reader that it's better to write $$\mathbb C\otimes_{\mathbb R}\mathbb C\simeq\mathbb C\times\overline{\mathbb C}$$ than $$\mathbb C\otimes_{\mathbb R}\mathbb C\simeq\mathbb C\times\mathbb C.$$ For each complex vector space $V$ define the
conjugate vector space $\overline V$ as being the abelian group $V$ with the multiplication by (complex) scalars $*$ defined by $z*v:=\overline zv$. [Of course $\overline{\mathbb C}$ is canonically isomorphic to $\mathbb C$, but in general $\overline V$ is not canonically isomorphic to $V$.] Then the map $$\mathbb C\otimes_{\mathbb R}V\to V\times\overline V,\quad z\otimes v\mapsto(zv,z*v)$$ is a $\mathbb C$-linear isomorphism.
More generally, let $L/K$ be a finite Galois extension with Galois group $G$ and let $A$ be an $L$-algebra. [In this post, an $L$-algebra is just an $L$-vector space $A$ together with an $L$-bilinear map $A\times A\to A$.] For each $\sigma$ in $G$ let $*_\sigma$ be the multiplication by scalars in $L$ defined on $A$ by $z*_\sigma a:=\sigma(z)\,a$, and let $A_\sigma$ be the resulting $L$-algebra. Then the map $$ \phi_A:L\otimes_KA\to\prod_{\sigma\in G}A_\sigma,\quad z\otimes a\mapsto(z*_\sigma a)_{\sigma\in G} $$ is an $L$-algebra isomorphism.
This is proved as Proposition 8 on page A.V.64 of the book
Algebra, Chapters 4-7 by Bourbaki, book freely and legally available here.
Here is a simple proof:
It suffices to prove the statement obtained by replacing the $L$-algebra $A$ with an $L$-vector space $V$. If $(V_i)$ is a family of $L$-vector spaces such that each $\phi_{V_i}$ is an isomorphism, then $\phi_{\oplus V_i}$ is also an isomorphism. It suffices thus to prove that $\phi_L$ is an isomorphism.
Let $B$ be a $K$-basis of $L$. It suffices to show that the $\phi_L(1\otimes b)$, when $b$ runs over $B$, form an $L$-basis of $\prod L_\sigma$. It even suffices to check that the $\phi_L(1\otimes b)$ are linearly independent over $L$.
Suppose by contradiction that there are $x_b$ in $L$, not all zero, such that $\sum_bx_b\,\phi_L(1\otimes b)=0$, that is $\sum_b\sigma(x_b)\,b=0$ for all $\sigma$. The latter condition is equivalent to $\sum_bx_b\,\sigma(b)=0$ for all $\sigma$. If this condition holds, then the square matrix $(\sigma(b))_{\sigma,b}$ is singular, and there are $y_\sigma$ in $L$, not all zero, such that $\sum_\sigma y_\sigma\,\sigma(b)=0$ for all $b$, which contradicts Dedekind Independence Theorem.
To fill in some details of paul garrett's answer, to prove $\mathbb{R}[x]/⟨x^2+1⟩ \otimes_{\mathbb{R}} \mathbb{C} ≈ \mathbb{C}[x]/⟨x^2+1⟩$, prove the general fact $R[x_1, . . . , x_n]/I \otimes_R R[y_1, . . . , y_m]/J ≈ R[x_1, . . . , x_n, y_1, . . . , y_m]/⟨I, J⟩$.
Then observe that $$\mathbb{R}[x]/⟨x^2+1⟩ \otimes_{\mathbb{R}} \mathbb{C} ≈ \mathbb{R}[x]/⟨x^2+1⟩ \otimes_{\mathbb{R}} \mathbb{R}[y]/⟨y^2+1⟩ \\≈ \mathbb{R}[x,y]/⟨x^2+1,y^2+1⟩ ≈ \mathbb{R}[y,x]/⟨y^2+1,x^2+1⟩ ≈ \mathbb{C}[x]/ ⟨x^2+1⟩. $$ |
How do I prove group of order 15 is abelian?
Is there any general strategy to prove that a group of particular order(composite order) is abelian?
Let $G$ be a group of order 15. We know $G$ has subgroups of order 3 and order 5, say $P_3$ and $P_5$ from Sylow theory. These must be cyclic (why?) so write $P_3 = \langle a \rangle$, $P_5 = \langle b \rangle$.
Using the lemma below, show $G = P_3P_5$. Prove the lemma if it's not something you already know.
Lemma. For subgroups $H$ and $K$ of a finite group $G$, $|HK| = |H||K|/ |H \cap K|$, where $HK = \{hk \mid h \in H, k \in K\}$.
Using Sylow theory, show $P_3$ is normal.
Then $bab^{-1} \in \langle a \rangle$. If $bab^{-1} = a$, we have $ba = ab$, so $G$ is abelian. Observe $bab^{-1} \neq 1$ (why?). The only "bad" possibility now is that $bab^{-1} = a^2$.
Suppose, to get a contradiction, that $bab^{-1} = a^2$. Then $ba = a^2b$. Using this identity repeatedly to fill in the $ \cdots $, show $a = b^5a = \cdots = a^2b^5 = a^2$. But $a \neq a^2$, so this is a contradiction.
PS - Since $P_3$ and $P_5$ are both normal, you could instead argue that $G = P_3P_5$ implies $G \simeq P_3 \times P_5$. In general, you can adapt this argument to show for primes $p,q$ with $p > q$ and $q \nmid p - 1$, every group of order $pq$ is abelian.
Here is a 2000 paper of Pakianathan and Shankar which gives characterizations of the set of positive integers $n$ such that every group of order $n$ is (i) cyclic, (ii) abelian, or (iii) nilpotent.
Say that a positive integer $n > 1$ is a nilpotent number if $n = p_1^{a_1} \cdots p_r^{a_r}$ (here the $p_i$'s are distinct prime numbers) and for all $1 \leq i,j \leq r$ and $1 \leq k \leq a_i$, $p_i^k \not \equiv 1 \pmod{p_j}$. Also, let us say that $1$ is a nilpotent number.
(So, for instance, any prime power is a nilpotent number. A product of two distinct primes $pq$ is a nilpotent number unless $p \equiv 1 \pmod q$ or $q \equiv 1 \pmod p$.)
Then, for $n \in \mathbb{Z}^+$:
(i) (Pazderski, 1959) Every group of order $n$ is nilpotent iff $n$ is a nilpotent number.
(ii) (Dickson, 1905) Every group of order $n$ is abelian iff $n$ is a cubefree nilpotent number.
(iii) (Szele, 1947) Every group of order $n$ is cyclic iff $n$ is a squarefree nilpotent number.
For example, if $n = pq$ is a product of distinct primes, then $n$ is squarefree, so every group of order $n$ is nilpotent iff every group of order $n$ is abelian iff every group of order $n$ is cyclic iff $p \not \equiv 1 \pmod q$ and $q \not \equiv 1 \pmod p$. In particular, every group of order $15$ is cyclic.
Addendum: This 2006 paper of T. Müller is a natural followup. Rather than describing it myself, let me quote the MathSciNet review.
It is a popular problem to find for which positive integers n do all groups of order n have a given property (e.g., cyclicity, are abelian, etc.). The article under review contains a contribution to this problem which seems to have escaped notice. Define a multiplicative function $\psi$ on the positive integers by letting $\psi(1)=1$, and $\psi(p^ν)=(p^{ν}−1)(p^{ν−1}−1)\cdots(p−1)$ if $p$ is a prime and $ν\geq 1$. The author proves that every group of order $n$ is nilpotent of class at most $c$ if and only if $\operatorname{gcd}(n,\psi(n))=1$ and $n$ is $(c+2)$-power free. Setting $c=\infty$ yields a result of G. Pazderski [Arch. Math. 10 (1959), 331--343; MR0114863 (22 #5681)] describing the case of nilpotency; and setting $c=1$ yields the classic result of L. E. Dickson [Trans. Amer. Math. Soc. 6 (1905), no. 2, 198--204; MR1500706] describing the case of abelianness. (Reviewed by Arturo Magidin)
Hint: Any non-trivial subgroup is a Sylow subgroup. OTOH Sylow theorems tell that there is only one of order 3, and only one of order 5. Therefore there are 15-5-3+1=8 elements that don't belong to a proper subgroup, so...
In addition to the answers of Hans and Pete: it is well-known that if $n$ is a natural number, there is only one group of order $n$ if and only if $\gcd(n,\phi(n))=1$. Here $\phi$ is the Euler totient function. For $n=15$ this applies. |
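The gcd criterion is easy to play with in code (my sketch; `every_group_cyclic` is a hypothetical helper name):

```python
from math import gcd

def phi(n):
    """Euler's totient function by trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def every_group_cyclic(n):
    """True iff every group of order n is cyclic: gcd(n, phi(n)) == 1."""
    return gcd(n, phi(n)) == 1

print(every_group_cyclic(15))   # True: gcd(15, 8) == 1
print(every_group_cyclic(6))    # False: S_3 is a non-abelian group of order 6
print([n for n in range(2, 30) if every_group_cyclic(n)])
```

Besides the primes, the list up to 30 contains only $15$, matching the discussion above.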
Hardy hierarchy
Latest revision as of 03:51, 20 May 2018
The Hardy hierarchy, named after G. H. Hardy, is a family of functions \((H_\alpha:\mathbb N\rightarrow\mathbb N)_{\alpha<\mu}\) where \(\mu\) is a large countable ordinal such that a fundamental sequence is assigned for each limit ordinal less than \(\mu\). The Hardy hierarchy is defined as follows:
\(H_0(n)=n\)
\(H_{\alpha+1}(n)=H_\alpha(n+1)\)
\(H_\alpha(n)=H_{\alpha[n]}(n)\) if \(\alpha\) is a limit ordinal,
where \(\alpha[n]\) denotes the \(n\)th element of the fundamental sequence assigned to the limit ordinal \(\alpha\).
Every nonzero ordinal \(\alpha<\varepsilon_0=\min\{\beta|\beta=\omega^\beta\}\) can be represented in a unique Cantor normal form \(\alpha=\omega^{\beta_{1}}+ \omega^{\beta_{2}}+\cdots+\omega^{\beta_{k-1}}+\omega^{\beta_{k}}\) where \(\alpha>\beta_1\geq\beta_2\geq\cdots\geq\beta_{k-1}\geq\beta_k\).
If \(\beta_k>0\) then \(\alpha\) is a limit and we can assign to it a fundamental sequence as follows
\(\alpha[n]=\omega^{\beta_{1}}+ \omega^{\beta_{2}}+\cdots+\omega^{\beta_{k-1}}+\left\{\begin{array}{lcr} \omega^\gamma n \text{ if } \beta_k=\gamma+1\\ \omega^{\beta_k[n]} \text{ if } \beta_k \text{ is a limit.}\\ \end{array}\right.\)
If \(\alpha=\varepsilon_0\) then \(\alpha[0]=0\) and \(\alpha[n+1]=\omega^{\alpha[n]}\).
Using this system of fundamental sequences we can define the Hardy hierarchy up to \(\varepsilon_0\).
For \(\alpha<\varepsilon_0\) the Hardy Hierarchy relates to the fast-growing hierarchy as follows
\(H_{\omega^\alpha}(n)=f_\alpha(n)\)
and at \(\varepsilon_0\) the Hardy hierarchy "catches up" to the fast-growing hierarchy i.e.
\(f_{\varepsilon_0}(n-1) \le H_{\varepsilon_0}(n) \le f_{\varepsilon_0}(n+1)\) for all \(n \ge 1\).
There are much stronger systems of fundamental sequences you can see on the following pages:
*List of systems of fundamental sequences
*Madore's ψ function
*Buchholz's ψ functions
*Jäger's ψ functions
*Collapsing functions based on a weakly Mahlo cardinal
The Hardy hierarchy has the following property \(H_{\alpha+\beta}(n)=H_\alpha(H_\beta(n))\).
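The definition and this composition property can be tried out for small ordinals. The sketch below (my addition) encodes ordinals below \(\omega^2\) as pairs \((a, b)\) meaning \(\omega a + b\), with the standard fundamental sequence \((\omega (a+1))[n] = \omega a + n\):

```python
def H(alpha, n):
    """Hardy hierarchy H_alpha(n) for alpha = omega*a + b < omega^2."""
    a, b = alpha
    if a == 0 and b == 0:
        return n                      # H_0(n) = n
    if b > 0:
        return H((a, b - 1), n + 1)   # H_{alpha+1}(n) = H_alpha(n + 1)
    return H((a - 1, n), n)           # limit step: H_alpha(n) = H_{alpha[n]}(n)

print(H((0, 3), 5))    # H_3(5) = 8, since H_m(n) = n + m
print(H((1, 0), 5))    # H_omega(5) = 10, since H_omega(n) = 2n
# H_{alpha+beta}(n) = H_alpha(H_beta(n)) with alpha = omega, beta = 3:
print(H((1, 3), 5), H((1, 0), H((0, 3), 5)))   # 16 16
```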
See also

References
Hardy, G. H. A theorem concerning the infinite cardinal numbers. Quarterly Journal of Mathematics (1904), vol. 35, pp. 87–94. |
Table of Contents
Coupled coincidence point results for contraction of $C$-class mappings in ordered uniform spaces Article References A. H. Ansari, D. Binbasioglu, D. Turkoglu 3-13
Some weaker sufficient conditions of $L$-index boundedness in direction for functions analytic in the unit ball Article References A. I. Bandura 14-25
Asymptotics of the entire functions with $\upsilon$-density of zeros along the logarithmic spirals Article References M. V. Zabolotskyj, Yu. V. Basiuk 26-32
Representation of a quotient of solutions of a four-term linear recurrence relation in the form of a branched continued fraction Article References I. Bilanyk, D. I. Bodnar, L. Buyak 33-41
Note on bases in algebras of analytic functions on Banach spaces Article References I. V. Chernega, A. V. Zagorodnyuk 42-47
Spectral approximations of strongly degenerate elliptic differential operators Article References M. I. Dmytryshyn, O. V. Lopushansky 48-53
On some of convergence domains of multidimensional S-fractions with independent variables Article References R. I. Dmytryshyn 54-58
Ricci soliton and Ricci almost soliton within the framework of Kenmotsu manifold Article References A. Ghosh 59-69
Interconnection between Wick multiplication and integration on spaces of nonregular generalized functions in the Levy white noise analysis Article References N. A. Kachanovsky, T. Kachanovska 70-88
Algebraic basis of the algebra of block-symmetric polynomials on $\ell_1 \oplus \ell_{\infty}$ Article References V. V. Kravtsiv 89-95
The relationship between algebraic equations and $(n,m)$-forms, their degrees and recurrent fractions Article References I. I. Lishchynsky 96-106
Inverse problem for $2b$-order differential equation with a time-fractional derivative Article References A. O. Lopushansky, H. P. Lopushanska 107-118
Some inequalities for strongly $(p,h)$-harmonic convex functions Article References M. A. Noor, K. I. Noor, S. Iftikhar 119-135
Characterizations of regular and intra-regular ordered $\Gamma$-semihypergroups in terms of bi-$\Gamma$-hyperideals Article References S. Omidi, B. Davvaz, K. Hila 136-151
On a new application of quasi power increasing sequences Article References H. S. Özarslan 152-157
On approximation of homomorphisms of algebras of entire functions on Banach spaces Article References H. M. Pryimak 158-162
On the solutions of a class of nonlinear integral equations in cone $b$-metric spaces over Banach algebras Article References L. T. Quan, T. Van An 163-178
Classification of generalized ternary quadratic quasigroup functional equations of the length three Article References F. M. Sokhatsky, A. V. Tarasevych 179-192
On integral representation of the solutions of a model $\vec{2b}$-parabolic boundary value problem Article References N. I. Turchyna, S. D. Ivasyshen 193-203
The journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported. |
I recall seeing that the category of schemes can be captured by a general construction as follows.
Let $\mathbf{Spec}\colon \mathbf{CRing}^{op}\to \mathbf{LRS}$ be the usual functor from the category of commutative rings to the category of locally ringed spaces, sending a ring to its spectrum equipped with its structure sheaf.
Now, this is where I am a bit hesitant to continue, since we are to take the Yoneda extension along $\mathbf y\colon \mathbf{CRing}^{op}\to \mathbf{Set}^{\mathbf{CRing}}$ to an essentially unique functor $L\colon \mathbf{Set}^{\mathbf{CRing}}\to \mathbf{LRS}$ such that $L\circ \mathbf y=\mathbf{Spec}$. By the general construction, $L$ has a right adjoint, $R\colon \mathbf{LRS}\to \mathbf{Set}^{\mathbf{CRing}}$.
My concern is that the usual construction of $L$ is taken to be $L(F)=colim_{el(F)}\mathbf{Spec}\circ \pi_F$, where $\pi_F\colon el(F)\to \mathbf{CRing}$ is the natural projection. However, $el(F)$ is not guaranteed to be small and $\mathbf{LRS}$ does not have all large colimits.
Is there a way to make this idea work so that the category of schemes is equivalent to the full subcategory consisting of locally ringed spaces $(X,\mathscr O_X)$ such that the unit of the adjunction $\eta_X\colon LR(X)\to X$ is an isomorphism? Perhaps we should restrict $\mathbf{Set}^{\mathbf{CRing}}$ to be the smallest subcategory which includes only those objects which have colimits we want. |
Summary: What happens to the invariant mass of an object when it gets closer to or further from a gravitational body?
In Special Relativity, you learn that invariant mass is computed by taking the difference between energy squared and momentum squared. (For simplicity, I'm saying c = 1).
[tex] m^2 = E^2 - \vec{p}^2 [/tex]

This can also be written with the Minkowski metric as:

[tex] m^2 = \eta_{\mu\nu} p^\mu p^\nu [/tex]

More generally, if there is a different metric (for example Schwarzschild), you would write it as:

[tex] m^2 = g_{\mu\nu} p^\mu p^\nu [/tex]

Now the question is, if invariant mass does not change from one metric to the other, you get the equation:

[tex] 0 = (g_{\mu\nu} - \eta_{\mu\nu})p^\mu p^\nu [/tex]

This seems to give unphysical results.

I solved for a photon in the Schwarzschild metric, and the only physical solution available is if the Schwarzschild radius is 0. So this seems to imply that invariant mass (or lack thereof) is not invariant under gravitational fields.

Any help here would be much appreciated. Thank you. |
In the previous article we learned about spaces and how to position and orient objects in world space by applying transformation matrices on them. We also learned about camera space, that is simply another coordinate system within the world space.
Recall that during rendering, a vertex first gets mapped from world space into camera space and then projected onto the 2D view plane using a projection matrix (roughly speaking). So in this post, we deal with the question of how to set up the camera space by positioning and orienting the camera, and how to derive a matrix from it so that we can map a vertex from world space into camera space. In OpenGL, this matrix (called the “view matrix”) plays a big role and must be specified in every OpenGL program.

Camera Space
The coordinate system of the camera, as discussed in the previous section, is spanned by three orthonormal vectors \( \vec{u}, \vec{v},\vec{w}\in \mathbb{R}^3\). The position of the camera is defined by its focal point (or eye) named \(\vec{c}\) in world space. Note that \(\vec{c}\) is also the origin of the camera coordinate system. By convention the camera looks into a direction \( -\vec{w} \), which is most often calculated by subtracting a “look-at” point \(\vec{l}\) from the focal point and normalizing
\[ \vec{w} = \frac{\vec{c} - \vec{l}}{\| \vec{c} - \vec{l} \|} \]
Now we compute \(\vec{u}\) and \(\vec{v}\) with the help of an “up-vector”, (0, 0, 1), that basically points upwards. Using the cross-product we can now compute the (normalized) vectors
\[ \begin{aligned} \vec{u} & = \frac{(0,0,1)^{\top} \times \vec{w}}{\| (0,0,1)^{\top} \times \vec{w} \|} \\ \vec{v} & = \frac{\vec{w} \times \vec{u}}{\| \vec{w} \times \vec{u} \|} \end{aligned} \]
which span our new orthonormal camera coordinate system.
Camera Transformation
Now that we set up the camera space, we need to construct a matrix that maps from world space into camera space. More concretely, to map a given vertex \(\vec{a}\) from world space to camera space, we apply the following two steps:
translate \(\vec{a}\) with respect to the camera position, and then map the translated point into the coordinate system \(\vec{u},\vec{v},\vec{w}\).
These two steps will later be combined into one matrix which are then together called
camera transformation.
The translation part is fairly easy. With our given camera position \(\vec{c}\), we use a translation matrix \(T\) to move \(\vec{a}\) relative to the camera position
\[ T(-\vec{c})\vec{a} = \begin{pmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 &0& -c_y \\ 0 & 0 &1& -c_z\\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix}a_x\\a_y\\a_z\\1\end{pmatrix}=\begin{pmatrix}a_x-c_x\\a_y-c_y\\a_z-c_z\\1\end{pmatrix}\]
Ok. Now for the mapping part, we have two options how to proceed: Either we try to set up a rotation matrix that rotates the vertex into place in camera space. This would require to determine the angles between the axis coordinates so that we can use them to rotate the dimensions of the point. Or, we make use of a wonderful trick that is applicable when dealing with orthonormal coordinate systems.
We are going to do the latter.
Let us quickly review the definition of the dot product, which says that for two given vectors \(\vec{a}\), \(\vec{b}\), where \(\|\vec{b}\|=1\), the dot product yields
\[ \vec{a} \cdot \vec{b} = \| \vec{a}\| \cos(\theta)\]
and \(\theta\) is the angle between the two vectors. The dot product basically computes the scaling factor of both \(\vec{a}\) and \(\vec{b}\) to the point where \(\vec{a}\) is orthogonally projected on \(\vec{b}\) (and vice versa). This may be a little confusing, which is why I tried to elaborate on the dot product a bit more deeply in the math appendix.
Now, \(\vec{u}\), \(\vec{v}\) and \(\vec{w}\) are orthogonal to each other, meaning that
\[ \vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{u} = 0\]
holds true, so that they span a so-called
orthonormal basis in world space. This allows us to set up a mapping matrix that “rotates” a point from world space into camera space by simply computing the dot product between the point and each coordinate axis vector
\[ R(\vec{u},\vec{v},\vec{w})\vec{a} = \begin{pmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ w_x & w_y & w_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix}a_x\\a_y\\a_z\\1\end{pmatrix}=\begin{pmatrix}\vec{u} \cdot \vec{a} \\ \vec{v} \cdot \vec{a} \\ \vec{w} \cdot \vec{a}\\1 \end{pmatrix}\]
Now that we have both transformations ready, we are able to multiply them into one matrix \(V = R(\vec{u},\vec{v},\vec{w})T(-\vec{c}) \). This is called the
view matrix.
\[ V = \begin{pmatrix} u_x & u_y & u_z & -c_{x}u_x-c_{y}u_y-c_{z}u_z \\ v_x & v_y & v_z & -c_{x}v_x-c_{y}v_y-c_{z}v_z \\ w_x & w_y & w_z & -c_{x}w_x-c_{y}w_y-c_{z}w_z \\ 0 & 0 & 0 & 1 \end{pmatrix}\]
The result of applying the view matrix \(V\) to \(\vec{a} \) in world space is a new set of coordinates in camera space. It is important to understand that the coordinates are just scaling coefficients for the axes \(\vec{u},\vec{v},\vec{w}\), which themselves lie in world space. The linear combination \(a_x\vec{u}+ a_y\vec{v}+a_z\vec{w}\) describes the position of the point in camera space. But the point has not been “moved”; its position is now just described relative to the origin and orientation of the camera space.
Code Example
Let us apply the above example in code. I make use of my own vector and matrix implementation, which is very similar to the countless other implementations that can be found on the web.
Note that I use homogeneous coordinates right from the beginning.
// the position of the camera, called 'eye'
Vector3f c = new Vector3f(5, -5, 8);
Vector3f u = new Vector3f();
Vector3f v = new Vector3f();
Vector3f w = new Vector3f();

// compute "negative" look direction by subtracting
// c from the look-at point (3,4,0)
w.subAndAssign(c, new Vector3f(3, 4, 0)); // w = c - (3,4,0)
w.normalize();

// compute cross products
u.crossAndAssign(new Vector3f(0, 0, 1), w); // side = (0,0,1) x w
u.normalize();
v.crossAndAssign(w, u); // up = side x look
v.normalize();

Matrix4f rotation = new Matrix4f(); // identity
rotation.setIdentity();
// note the format: set(COLUMN, ROW, value)
// it may be different for your matrix implementation
rotation.set(0, 0, u.x);
rotation.set(1, 0, u.y);
rotation.set(2, 0, u.z);
rotation.set(0, 1, v.x);
rotation.set(1, 1, v.y);
rotation.set(2, 1, v.z);
rotation.set(0, 2, w.x);
rotation.set(1, 2, w.y);
rotation.set(2, 2, w.z);

Matrix4f translation = new Matrix4f(); // identity
translation.set(3, 0, -c.x);
translation.set(3, 1, -c.y);
translation.set(3, 2, -c.z);

// view matrix
Matrix4f view = new Matrix4f();
view.multAndAssign(rotation, translation); // view = rotation * translation

// print matrix on console
view.print();
At the end of the code snippet we print the matrix to the console. This is what it says.
0.9761871 0.21693046 0.0 -3.7962832 -0.1421731 0.6397789 0.75529456 -2.1325965 0.16384639 -0.73730874 0.65538555 -9.748859 0.0 0.0 0.0 1.0
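The same construction can be cross-checked in a few lines of NumPy (my sketch, independent of the Java classes above). Note that it also confirms a basic property: the camera position itself must map to the origin of camera space.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build V = R(u, v, w) * T(-c) as derived in the text."""
    c = np.asarray(eye, dtype=float)
    w = c - np.asarray(target, dtype=float)
    w /= np.linalg.norm(w)
    u = np.cross(up, w)
    u /= np.linalg.norm(u)
    v = np.cross(w, u)                 # already unit length
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = u, v, w
    V[:3, 3] = -V[:3, :3] @ c          # rotation applied to the translation
    return V

V = look_at((5, -5, 8), (3, 4, 0))
print(np.round(V, 4))    # reproduces the matrix printed above
```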
If you are building your own Matrix class, I recommend incorporating a lookAt method that sets up the matrix. Also, I recommend creating a camera object that caches the \(\vec{u}\), \(\vec{v}\) and \(\vec{w}\) vectors and provides general helper methods to deal with positioning, pitch, yaw, roll and other stuff that you may need (e.g. a matrix stack). |
(I am looking at Wikipedia page on Robinson Arithmetic)
Hmmm, I run into a difficulty as well: how to go from $a + s(d) = b$ to $s(a) + d = b$
ADDENDUM
Aha! As I suspected ... you
cannot prove $\forall x \forall y \ s(x) + y = x + s(y)$ in Robinson Arithmetic, and thus the proof above cannot be completed. In fact, your statement $\forall x \forall y (x \le y \leftrightarrow (x = y \lor s(x) < y))$ cannot be proven in Robinson Arithmetic in any way. Below is a non-standard model $M$ that satisfies all the axioms of Robinson Arithmetic, but in which these two specific claims are false:
Domain
$D^M = \{ q_0, q_1, q_2, q_3, d_0, d_1, d_2, ... \}$ (in other words, a countably infinite number of objects $d_i$ that of course serve the role of the natural numbers as intended, plus 4 more objects)
$0^M = d_0$
Interpretation of successor function
$s^M(q_0) = q_1$
$s^M(q_1) = q_0$
$s^M(q_2) = q_3$
$s^M(q_3) = q_2$
$s^M(d_i) = d_{i+1}$
This will satisfy axiom 1 ($d_0$ is not the successor of any object), axiom 2 (no two different objects have the same successor), and axiom 3 (every object other than $d_0$ has a predecessor (i.e is the successor of some other object).
Interpretation of addition function
(rows are left argument, columns right argument, e.g $q_0 +^M q_1=q_1$ and $q_1 +^M q_0=q_0$)
\begin{array}{c|cccccccc}& q_0 & q_1 & q_2 & q_3 & d_0 & d_{2k+1} & d_{2k+2}\\\hlineq_0 & q_0 & q_1 & q_2 & q_3 & q_0 & q_1 & q_0\\q_1 & q_0 & q_1 & q_2 & q_3 & q_1 & q_0 & q_1\\q_2 & q_2 & q_3 & q_2 & q_3 & q_2 & q_3 & q_2\\q_3 & q_2 & q_3 & q_2 & q_3 & q_3 & q_2 & q_3\\d_0 & q_0 & q_1 & q_2 & q_3 & d_0 & d_{2k+1} & d_{2k + 2}\\d_{i+1} & q_2 & q_3 & q_2 & q_3 & d_{i+1}& d_{2k+1+ i+1} & d_{2k+2+i+1}\\\end{array}
This will satisfy axiom 4 ($x +^M d_0 = x$ for any object $x$) and axiom 5 ($\forall x \forall y \ x + s(y) = s(x + y)$ ... this is a bit tedious to verify).
Interpretation of multiplication function
(rows are left argument, columns right argument, e.g. $q_0 \cdot^M q_1=q_0$ and $q_1 \cdot^M q_0=q_1$)
\begin{array}{c|cccccc}
 & q_0 & q_1 & q_2 & q_3 & d_0 & d_{i+1}\\
\hline
q_0 & q_0 & q_0 & q_0 & q_0 & d_0 & q_0\\
q_1 & q_1 & q_1 & q_1 & q_1 & d_0 & q_1\\
q_2 & q_2 & q_2 & q_2 & q_2 & d_0 & q_2\\
q_3 & q_3 & q_3 & q_3 & q_3 & d_0 & q_3\\
d_0 & q_0 & q_0 & q_2 & q_2 & d_{2k} & d_{2k+1}\\
d_{2k+1} & q_2 & q_3 & q_2 & q_3 & d_{2k+1} & d_{2k+1+i+1}\\
d_{2k+2} & q_2 & q_2 & q_2 & q_2 & d_{2k+2} & d_{2k+2+i+1}\\
\end{array}
This will satisfy axiom 6 ($x \cdot^M d_0 = d_0$ for any object $x$) and axiom 7 ($\forall x \forall y \ x \cdot s(y) = (x \cdot y) + x$ ... this is again tedious to verify).
OK, but now notice that:
$s(d_0) +^M q_0 = d_1 +^M q_0 = q_2$, but $d_0 +^M s(q_0) = d_0+^M q_1 = q_1$, so $\forall x \forall y \ s(x) + y = x + s(y)$ is false in this model.
Also notice that while $d_0 \le q_0$ is true, since $\exists z \ d_0 + z = q_0$ is true (simply choose $z = q_0$), $d_0 = q_0 \lor s(d_0) \le q_0$ is false, since $d_0 \not = q_0$, and since $s(d_0) = d_1$ and there is no $z$ such that $d_1 + z = q_0$. So, $\forall x \forall y \, (x \le y \leftrightarrow (x = y \lor s(x) \le y))$ is also false in this model.
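A hand-built model like this is easy to get wrong, so it can help to encode the tables mechanically and spot-check the claims. Below is a small sketch (my own encoding, not part of the original answer) that represents the domain elements as pairs like ('q', 0) and ('d', i), implements $s^M$ and $+^M$ from the tables above, and verifies the failing instances as well as axiom 5 on a finite sample:

```python
# Elements of the domain: ('q', k) for k in 0..3, and ('d', i) for i >= 0.

def succ(x):
    """The successor function s^M from the table above."""
    kind, i = x
    if kind == 'q':
        return ('q', {0: 1, 1: 0, 2: 3, 3: 2}[i])
    return ('d', i + 1)                      # s(d_i) = d_{i+1}

def add(x, y):
    """The addition function +^M, transcribed from the addition table."""
    (kx, i), (ky, j) = x, y
    if kx == 'd' and ky == 'd':              # d_i + d_j = d_{i+j}
        return ('d', i + j)
    if kx == 'd':                            # d_i + q_j
        return ('q', j) if i == 0 else ('q', 2 if j % 2 == 0 else 3)
    if ky == 'd':                            # q_i + d_j (depends on parity of j)
        if i in (0, 1):
            return ('q', i if j % 2 == 0 else 1 - i)
        return ('q', i if j % 2 == 0 else 5 - i)
    # q_i + q_j: the top-left 4x4 block of the table
    return ('q', [[0, 1, 2, 3], [0, 1, 2, 3], [2, 3, 2, 3], [2, 3, 2, 3]][i][j])

d0, d1, q0 = ('d', 0), ('d', 1), ('q', 0)
sample = [('q', k) for k in range(4)] + [('d', i) for i in range(20)]

# The two counterexamples:
assert add(succ(d0), q0) != add(d0, succ(q0))        # s(x)+y = x+s(y) fails
assert any(add(d0, z) == q0 for z in sample)         # d_0 <= q_0 holds ...
assert not any(add(d1, z) == q0 for z in sample)     # ... but s(d_0) <= q_0 fails

# Spot-check axiom 5, x + s(y) = s(x + y), over the sample:
assert all(add(x, succ(y)) == succ(add(x, y)) for x in sample for y in sample)
print("counterexamples and axiom 5 spot-check confirmed")
```

The check over `sample` is of course only a finite spot check, but it catches transcription slips in the tables quickly.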
I'm a newbie both in calculus and Python/SciPy, so I apologize if this question is too dumb. I'm trying to model flow between two pressure vessels. Let's say we have two points and a link between them, like this:
[$Vc_1$, $P_1$]----($A$)----[$Vc_2$, $P_2$]
$Vc_1$, $Vc_2$ are the constant volumes of the nodes (vessels) and $P_1$, $P_2$ are the varying pressures of the respective nodes.
I ended up writing the differential equations below. Never mind the physical meaning; I just want to get the math correct.
$\frac{\mathrm{dP_1} }{\mathrm{d} t} = \frac{\mathrm{dVa} }{\mathrm{d} t} \cdot \frac{1}{Vc_1 \cdot B}$
$\frac{\mathrm{dP_2} }{\mathrm{d} t} = \frac{\mathrm{dVa} }{\mathrm{d} t} \cdot \frac{1}{Vc_2 \cdot B}$
Here $B$ is compressibility.
$\frac{\mathrm{dVa} }{\mathrm{d} t} = A \cdot K \cdot \sqrt {P_1 - P_2}$
$\frac{\mathrm{dVa} }{\mathrm{d} t}$ is the amount of "flow", or the change of the additional volume between the nodes. $K$ is some constant coefficient and $A$ is the link "throughput". ($P_1-P_2$) can change sign, so I've adjusted for this in the software.
Below is the Python program that I wrote to evaluate this.
#!/usr/bin/env python
import math
from scipy.integrate import odeint
from time import time
import numpy

B_compressibility = 0.0000033 # water compressibility
K = 0.747871759938 # coefficient
Vc_1 = 20
Vc_2 = 50
A = 0.01
P_1 = 4000
P_2 = 2000

def deriv(state, t):
    _P_1 = state[0]
    _P_2 = state[2]
    diff_P = _P_1 - _P_2
    flow_direction = math.copysign(1, diff_P)
    dVa = flow_direction * A * K * math.sqrt(abs(diff_P))
    dP_1 = -(dVa/Vc_1)/B_compressibility
    dP_2 = (dVa/Vc_2)/B_compressibility
    #print 'IN ', state
    #print 'OUT ', [dP_1, -dVa, dP_2, dVa]
    return [dP_1, -dVa, dP_2, dVa]

if __name__ == '__main__':
    Va_1 = Vc_1 * P_1 * B_compressibility
    Va_2 = Vc_2 * P_2 * B_compressibility
    odeIterations = 10
    timeperiod = numpy.linspace(0.0, 1.0, odeIterations)
    initial_state = [P_1, Va_1, P_2, Va_2]
    t0 = time()
    state_array = odeint(deriv, initial_state, timeperiod)
    t1 = time()
    print 'runtime %fs' % (t1-t0)
    print state_array
    P_1, Va_1, P_2, Va_2 = state_array[odeIterations-1]
Below is the output from the program:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
runtime 0.041000s
[[ 4.00000000e+003 2.64000000e-001 2.00000000e+003 3.30000000e-001]
 [ 3.49242034e+003 2.30499743e-001 2.20303186e+003 3.63500257e-001]
 [ 3.09580400e+003 2.04323064e-001 2.36167840e+003 3.89676936e-001]
 [ 2.81015098e+003 1.85469965e-001 2.47593961e+003 4.08530035e-001]
 [ 2.63546127e+003 1.73940444e-001 2.54581549e+003 4.20059556e-001]
 [ 2.57173487e+003 1.69734501e-001 2.57130605e+003 4.24265499e-001]
 [ 2.57142857e+003 1.69714286e-001 2.57142857e+003 4.24285714e-001]
 [ 1.83357137e-299 1.80790662e-299 1.83145695e-299 1.87152166e-299]
 [ 1.83276935e-299 1.80681296e-299 1.83182150e-299 1.87141230e-299]
 [ 1.83379011e-299 1.80746916e-299 1.83320682e-299 1.83229543e-299]]
lsoda-- at current t (=r1), mxstep (=i1) steps taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.6110315150411E+00
odeint gives correct results up to the 7th line and then something goes seriously wrong. I have searched on Google and it looks like I'm not the only one who struggles with scipy. Everybody suggests increasing mxstep, but that doesn't solve my problem. In addition, it slows down the method significantly. Somebody suggested reducing the accuracy, but I don't know how to do that. Decreasing the accuracy is OK if that helps, as I don't need super accuracy from odeint. A couple of digits after the dot is more than enough for me. Also, I just need the final values, so in fact I want to decrease the number of steps. Any help is greatly appreciated!
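For reference, odeint does expose the accuracy knobs mentioned in the question: rtol and atol set the relative and absolute error tolerances, mxstep raises the internal step limit, and full_output=1 returns a diagnostics dictionary. A minimal sketch on a toy decaying ODE (a stand-in, not the vessel model above) showing loose tolerances:

```python
import numpy as np
from scipy.integrate import odeint

def deriv(y, t):
    # Simple exponential decay standing in for the vessel equations.
    return -0.5 * y

t = np.linspace(0.0, 1.0, 10)
# Loose tolerances: a couple of digits of accuracy lets lsoda take
# far fewer internal steps; full_output=1 exposes step diagnostics.
y, info = odeint(deriv, [4000.0], t,
                 rtol=1e-3, atol=1e-2, mxstep=5000, full_output=1)
print(y[-1, 0])   # final value, close to 4000*exp(-0.5)
```

Only the final row of y is needed for end-state values, and loosening rtol/atol is the documented way to trade accuracy for speed.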
Logical AND: Use the linear constraints $y_1 \ge x_1 + x_2 - 1$, $y_1 \le x_1$, $y_1 \le x_2$, $0 \le y_1 \le 1$, where $y_1$ is constrained to be an integer. This enforces the desired relationship. (Pretty neat that you can do it with just linear inequalities, huh?)
Logical OR: Use the linear constraints $y_2 \le x_1 + x_2$, $y_2 \ge x_1$, $y_2 \ge x_2$, $0 \le y_2 \le 1$, where $y_2$ is constrained to be an integer.
Logical NOT: Use $y_3 = 1-x_1$.
Logical implication: To express $y_4 = (x_1 \Rightarrow x_2)$ (i.e., $y_4 = \neg x_1 \lor x_2$), we can adapt the construction for logical OR. In particular, use the linear constraints $y_4 \le 1-x_1 + x_2$, $y_4 \ge 1-x_1$, $y_4 \ge x_2$, $0 \le y_4 \le 1$, where $y_4$ is constrained to be an integer.
Forced logical implication: To express that $x_1 \Rightarrow x_2$ must hold, simply use the linear constraint $x_1 \le x_2$ (assuming that $x_1$ and $x_2$ are already constrained to boolean values).
XOR: To express $y_5 = x_1 \oplus x_2$ (the exclusive-or of $x_1$ and $x_2$), use linear inequalities $y_5 \le x_1 + x_2$, $y_5 \ge x_1-x_2$, $y_5 \ge x_2-x_1$, $y_5 \le 2-x_1-x_2$, $0 \le y_5 \le 1$, where $y_5$ is constrained to be an integer.
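Since the variables are all constrained to $\{0,1\}$, each encoding above can be checked exhaustively. The sketch below (my own verification code, not tied to any particular solver) brute-forces the AND, OR, and XOR inequality sets and confirms that, for every input pair, exactly one feasible $y$ exists and it equals the gate's output:

```python
from itertools import product

def feasible_and(x1, x2, y):
    return y >= x1 + x2 - 1 and y <= x1 and y <= x2 and 0 <= y <= 1

def feasible_or(x1, x2, y):
    return y <= x1 + x2 and y >= x1 and y >= x2 and 0 <= y <= 1

def feasible_xor(x1, x2, y):
    return (y <= x1 + x2 and y >= x1 - x2 and y >= x2 - x1
            and y <= 2 - x1 - x2 and 0 <= y <= 1)

for x1, x2 in product((0, 1), repeat=2):
    # For each input pair, exactly one y is feasible, and it is the gate value.
    assert [y for y in (0, 1) if feasible_and(x1, x2, y)] == [x1 & x2]
    assert [y for y in (0, 1) if feasible_or(x1, x2, y)] == [x1 | x2]
    assert [y for y in (0, 1) if feasible_xor(x1, x2, y)] == [x1 ^ x2]
print("all gate encodings verified")
```

The same four-line loop is a handy sanity check whenever you derive a new linearization.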
And, as a bonus, one more technique that often helps when formulating problems that contain a mixture of zero-one (boolean) variables and integer variables:
Cast to boolean (version 1): Suppose you have an integer variable $x$, and you want to define $y$ so that $y=1$ if $x \ne 0$ and $y=0$ if $x=0$. If you additionally know that $0 \le x \le U$, then you can use the linear inequalities $0 \le y \le 1$, $y \le x$, $x \le Uy$; however, this only works if you know an upper and lower bound on $x$. Or, if you know that $|x| \le U$ (that is, $-U \le x \le U$) for some constant $U$, then you can use the method described here. This is only applicable if you know an upper bound on $|x|$.
Cast to boolean (version 2): Let's consider the same goal, but now we don't know an upper bound on $x$. However, assume we do know that $x \ge 0$. Here's how you might be able to express that constraint in a linear system. First, introduce a new integer variable $t$. Add inequalities $0 \le y \le 1$, $y \le x$, $t=x-y$. Then, choose the objective function so that you minimize $t$. This only works if you didn't already have an objective function. If you have $n$ non-negative integer variables $x_1,\dots,x_n$ and you want to cast all of them to booleans, so that $y_i=1$ if $x_i\ge 1$ and $y_i=0$ if $x_i=0$, then you can introduce $n$ variables $t_1,\dots,t_n$ with inequalities $0 \le y_i \le 1$, $y_i \le x_i$, $t_i=x_i-y_i$ and define the objective function to minimize $t_1+\dots + t_n$. Again, this only works if nothing else needs to define an objective function (i.e., if, apart from the casts to boolean, you were planning to just check the feasibility of the resulting ILP, not try to minimize/maximize some function of the variables).
For some excellent practice problems and worked examples, I recommend Formulating Integer Linear Programs: A Rogues' Gallery.
Let $t\ge 0$ and let $f_{s}(t)$ be the sampled version of $f(t)$.
If $$f(t)=\int_{0}^{t}\begin{cases}\exp\left(-\frac{1}{1-s^{2}}\right),&|s|<1, \\ 0,&\text{otherwise},\end{cases}\;ds$$
I used
Plot[NIntegrate[
  Piecewise[{{Exp[-1/(1 - s^2)], Abs[s] < 1}, {0, Abs[s] >= 1}}],
  {s, 0, t}], {t, 0, 2}]
which looks like this:
which is as I would expect.
Then we sample for arbitrary time interval $T=1/10$ to get
$$\begin{aligned}f_{s}(t)&=\sum_{n=-\infty}^{\infty}f(n/10)\,\delta(t-n/10) \\ &=\sum_{n=-\infty}^{\infty}\left(\int_{0}^{n/10}\begin{cases}\exp\left(-\frac{1}{1-s^{2}}\right),&|s|<1, \\ 0,&\text{otherwise},\end{cases}\;ds\right)\delta(t-n/10). \end{aligned}$$
However, for the sampled function, when I use
DiscretePlot[
 Accumulate[
  Table[
   NIntegrate[
    Piecewise[{{Exp[-1/(1 - s^2)]*DiscreteDelta[t - n*0.1], Abs[s] < 1},
      {0, Abs[s] >= 1}}],
    {s, 0, n*0.1}, AccuracyGoal -> 20],
   {n, -20, 20}]],
 {t, 0, 2}]
this yields:
which clearly isn't how it should look.
Moreover, when I attempt to perform a discrete time Fourier transform on the sampled data using
Plot[
 FourierSequenceTransform[
  NIntegrate[
   Piecewise[{{Exp[-1/(1 - s^2)], Abs[s] < 1}, {0, Abs[s] >= 1}}],
   {s, 0, 0.1 n}, AccuracyGoal -> 20],
  {n, -20, 20}, ω],
 {ω, -20, 20}]
I fail to get the n in the FourierSequenceTransform argument, since it is in the integral limit, I think. But there must be a way to bypass this obstacle?
Let us consider a problem of the form
$$(\mathcal{L} + k^2) u(\mathbf{x})=0\, ,\quad \forall \mathbf{x} \in \Omega$$
with Dirichlet boundary conditions
$$u(\mathbf{x}) = 0, \quad \forall \mathbf{x} \in \partial\Omega\, .$$
We discretize this problem using, for example, the finite element method, to obtain the following generalized eigenvalue problem
$$[K]\{U\} = k^2 [M]\{U\} \, .$$
To impose boundary conditions one can do one of the following:
Reduce the system by deleting rows/columns corresponding to boundary degrees of freedom in matrices $[K]$ and $[M]$.
Zero the row/column values for the boundary degrees of freedom in both matrices $[K]$ and $[M]$, and set the diagonal values in $[K]$ to any finite number. The problem with this approach is that the matrix $[M]$ loses its positive definiteness (if it had it), and eigensolvers do not like this.
Zero row/column values for the boundary degrees of freedom of both matrices $[K]$ and $[M]$, and fix the diagonal values in $[M]$ to any finite number.
For eigenvalue problems I have (almost) exclusively used method 1, while I have used the other methods (or equivalent) for solving linear systems of equations.
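To make method 1 concrete, here is a small sketch (my own illustration, using 1-D linear finite elements for $-u'' = k^2 u$ on $[0,\pi]$ with homogeneous Dirichlet ends) in which the boundary rows/columns of $[K]$ and $[M]$ are simply sliced away before calling a symmetric-definite generalized eigensolver; the reduced problem's smallest eigenvalues approach the exact $k^2 = 1, 4, 9, \dots$:

```python
import numpy as np
from scipy.linalg import eigh

n = 100                      # interior nodes
h = np.pi / (n + 1)          # uniform mesh on [0, pi]
N = n + 2                    # total nodes, boundary included

# Element-by-element assembly of stiffness K and consistent mass M
# for linear elements, boundary nodes still included.
K = np.zeros((N, N)); M = np.zeros((N, N))
for e in range(N - 1):
    Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += Ke
    M[np.ix_(idx, idx)] += Me

# Method 1: delete the rows/columns of the Dirichlet nodes (first and last).
keep = np.arange(1, N - 1)
Kr = K[np.ix_(keep, keep)]
Mr = M[np.ix_(keep, keep)]

vals = eigh(Kr, Mr, eigvals_only=True)
print(vals[:3])              # close to the exact 1, 4, 9
```

Because $[M]$ stays symmetric positive definite after the deletion, eigh can exploit the generalized symmetric-definite form, which is exactly what option 2 destroys.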
Questions
What is the most common method used for imposing Dirichlet boundary conditions?
Is there any other method that I have not mentioned?
This question has been partially answered here.
Time limit: 2 seconds. Memory limit: 128 MB. Submissions: 8. Accepted: 0. Solvers: 0. Success ratio: 0.000%.
Computing the number of fixed points and, more generally, the number of periodic orbits within a dynamical system is a question attracting interest from different fields of research. However, dynamics may turn out to be very complicated to describe, even in seemingly simple models. In this task you will be asked to compute the number of periodic points of period \(n\) of a piecewise linear map \(f\) mapping the real interval \(\left[ 0, m \right]\) into itself. That is to say, given a map \(f : \left[ 0, m \right] \rightarrow \left[ 0, m \right]\) you have to calculate the number of solutions to the equation \(f^n(x) = x\) for \(x \in \left[ 0, m \right] \), where \(f^n\) is the result of iterating function \(f\) a total of \(n\) times, i.e.
\[f^n =\overbrace { f \circ \cdots \circ f \circ f } ^ {\text{n f's}}, \]
where \(\circ\) stands for the composition of maps, \((g \circ h)(x) = g(h(x))\).
Fortunately, the maps you will have to work with satisfy some particular properties:
Figure 1: Graphs of the third map in the sample input, \(f_3\) (left), and of its square, \(f^2_3\) (right).
Since there might be many periodic points you will have to output the result modulo an integer.
The input consists of several test cases, separated by single blank lines. Each test case begins with a line containing the integer \(m\) (1 ≤ \(m\) ≤ 80). The following line describes the map \(f\); it contains the \(m+1\) integers \(f(0), f(1), \dots , f(m)\), each of them between 0 and \(m\) inclusive. The test case ends with a line containing two integers separated by a blank space, \(n\) (1 ≤ \(n\) ≤ 5 000) and the modulus used to compute the result, mod (2 ≤ mod ≤ 10 000).
The input will finish with a line containing 0.
For each case, your program should output the number of solutions to the equation \(f^n(x) = x\) in the interval \(\left[ 0, m \right]\) modulo mod. If there are infinitely many solutions, print Infinity instead.
Sample input:
2
2 0 2
2 10

3
0 1 3 2
1 137

3
2 3 0 3
20 10000

0

Sample output:
4
Infinity
9074
Code Listing by Fernando Nieuwveldt
I originally posted this code in Recipe 577132 and this is a repost of that recipe with corrections, since there was an error in the original recipe. Added here is an error analysis to show the effectiveness of the Laplace inversion method for pricing European options. One can test the accuracy of this method against finite difference schemes. The Laplace transform of the Black-Scholes PDE was taken and the result was inverted using the Talbot...
I present a method of computing the ${}_1F_1(a,b,x)$ function using a contour integral. The method is based on a numerical inversion, basically the Laplace inversion. The integral is $${}_1F_1(a,b,x) = \frac{\Gamma(b)}{2\pi i}\int_{\rho} e^{zx}\,z^{-b}\left(1+\frac{x}{z}\right)^{-a}\,dz,\quad \rho...$$
This is a fast and highly accurate numerical method for the inversion of the Laplace transform
It is almost Valentine's Day. So if $a$ and $b$ are positive integers such that $a\ne b$ and $ab=N$, call $a$ and $b$ lovers. There are two cases to consider: (i) $N$ is not a perfect square and (ii) $N$ is a perfect square.
Case (i): If $N$ is not a perfect square, then the positive divisors of $N$ are divided into pairs of lovers. The number of pairs is $\frac{d(N)}{2}$, where $d(N)$ is the number of positive divisors of $N$.
The product of the divisors of $N$ is therefore $N^{d(N)/2}$. This is $N^2$ (meaning that $N$ is multiplicatively perfect) precisely if $d(N)=4$. This is the case if $N$ is the product of $2$ distinct primes or if $N$ is the cube of a prime. Here we have used the fact that if $N=p_1^{a_1}\cdots p_k^{a_k}$, where the $p_i$ are distinct primes, then $d(N)=(a_1+1)\cdots (a_k+1)$, though one can do it with less machinery.
Case (ii): Let $N$ be a perfect square, say $m^2$. Then the divisors of $N$ consist of pairs of lovers, together with the solitary divisor $m$. So the product of the divisors is $N^{(d(N)-1)/2}\sqrt{N}$, which is $N^{d(N)/2}$. For this to be $N^2$ we want either $N=1$ or $\frac{d(N)}{2}=2$; the latter is impossible if $N$ is a perfect square, since a perfect square has an odd number of divisors. So in addition to the examples you mentioned, $1$ is also multiplicatively perfect.
Does the mass of matter falling into a black hole affect the size of an event horizon the moment it passes through it, or when it has been incorporated into the singularity?
Indeed, as commenters already suggested, for test particles the question of when they cross the horizon (and when they are 'incorporated into the singularity') is rather difficult to define if we wish to take the point of view of an outside observer. However, your question does have an unambiguous answer (at least in terms of order-of-magnitude calculations). The key is that usually such a 'test particle' is assumed to have zero mass, while we want to calculate when the mass of this particle is 'felt' by the black hole. To do that we need to assume a finite mass for the falling matter and consider the backreaction of this mass on the metric. We will then see that the point when the mass is incorporated into the black hole by increasing its horizon radius occurs at a finite time as measured by the asymptotic observer.
For simplicity, instead of one point-like observer, let us consider a spherical shell (of dust-like matter with mass $\delta M$) falling into a Schwarzschild black hole (of mass $M$). This situation possesses spherical symmetry, and so Birkhoff's theorem applies: we have a Schwarzschild metric with mass $M$ and gravitational radius $r_g=2 M$ inside the falling shell, and with mass $M+\delta M$ and gravitational radius $r_g'= r_g + 2\, \delta M$ outside of it. We could find the motion of the shell by using appropriate junction conditions, but if for simplicity we assume that $\delta M \ll M$, then the motion of this shell coincides with a radial geodesic for the unperturbed mass. And so the moment when the shell has radius $r=r_g'$ occurs at a finite time (let us denote it $t'$) by the clock of an outside observer. That is the time when the matter of the shell is fully incorporated into the black hole: no observer outside the black hole could detect any trace of the shell after that. And if there were, say, a transmitter falling with the shell, the latest moment at which a signal from it could reach the outside is $t'$ (of course, such a signal must also spend some time climbing to a finite distance away from the black hole).
If the mass $\delta M$ is falling from a distance of several $r_g$ to a distance $r$ slightly larger than the (unperturbed) $r_g$, that fall lasts approximately $$\Delta t = t - t_0 \approx \frac{r_g}{c} \ln \frac{r_g}{r-r_g},$$ (the necessary equations can be found e.g. here; we are interested in the case of zero angular momentum and the leading term diverging at $r=r_g$). Replacing $r$ with $r_g'$ we obtain: $$\Delta t \approx \frac{r_g}{c} \ln \frac{M}{\delta M}.$$ That is the (order of magnitude) answer. The factor in front is the Schwarzschild radius light-crossing time. We see that for $\delta M$ comparable with $M$ the logarithm is rather small and the time in question is simply 'several light-crossing times'. But even for very large ratios, the logarithm keeps the time reasonably small. For example, let us consider an astronaut, $\delta M\approx 70\,\text{kg}$, falling into the Sagittarius A* black hole. The logarithm is $\approx 81$, $r_g/c$ is about 40 s, and $\Delta t\approx 53\,\text{min}$, which is quite finite from a human perspective.
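The quoted numbers can be reproduced in a few lines. The sketch below (my own back-of-the-envelope script, taking $M \approx 4\times10^6\,M_\odot$ for Sgr A* as an assumed round value) evaluates $\Delta t \approx (r_g/c)\ln(M/\delta M)$:

```python
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

M = 4.0e6 * M_sun   # assumed Sgr A* mass (rough round value)
dM = 70.0           # falling astronaut, kg

r_g = 2 * G * M / c**2           # Schwarzschild radius, ~1.2e10 m
t_cross = r_g / c                # light-crossing time
dt = t_cross * math.log(M / dM)  # Delta t ~ (r_g/c) * ln(M / dM)

print(round(t_cross), round(math.log(M / dM)), round(dt / 60))
```

With these inputs the script reports $r_g/c \approx 39$ s, a logarithm of $\approx 81$, and $\Delta t \approx 53$ minutes, matching the estimate above.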
The answer would not change much if we included aspherical configurations and nonzero angular momentum in the consideration. But in that case one needs to remember that such aspherical accretion tends to produce gravitational radiation which perturbs the black hole horizon. These perturbations (the so-called 'ringdown') decay exponentially with time constants of about the Schwarzschild radius light-crossing time.
For more technical discussion one can look at:
Frolov, V., & Novikov, I. (2012). Black hole physics: basic concepts and new developments (Vol. 96). Springer Science & Business Media, Google Books.
The smile is there exactly because the model is wrong.
The reason it's used though (despite being wrong) is that it provides a convenient space to look at the underlying - the vol*
The (undiscounted) value of an option is given by:
$$ \int_0^\infty \mathrm{PDF}(s) (s-K)^+ \mathrm{d}s $$
where $\mathrm{PDF}(s)$ is the real probability density of the underlying. This is model independent.
Under BS, the value is the following:
$$ \int_0^\infty \mathrm{PDF_{LN}}(F,\sigma)(s) (s-K)^+ \mathrm{d}s $$
where $\mathrm{PDF_{LN}}(F,\sigma)$ is the pdf of a lognormal variable with an expected value of $F$ and vol of $\sigma$.
So now, we just need to make them match - and we have one parameter to solve for - $\sigma$. This gives us the BS vol. If the distribution truly were lognormal, you'd obtain the same vol everywhere. Since it's not, you get a changing vol.
*BS vol.
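The matching step is easy to make concrete: price the call once by integrating the lognormal density directly, and once with the closed-form (undiscounted) Black-Scholes expression, and check that they agree. A sketch with illustrative numbers ($F=100$, $K=110$, $\sigma=0.2$, $T=1$, all chosen arbitrarily here):

```python
import math

F, K, sigma, T = 100.0, 110.0, 0.2, 1.0
sd = sigma * math.sqrt(T)
mu = math.log(F) - 0.5 * sd ** 2        # lognormal location so that E[S] = F

def Phi(x):                              # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf_ln(s):                           # lognormal density of the underlying
    z = (math.log(s) - mu) / sd
    return math.exp(-0.5 * z * z) / (s * sd * math.sqrt(2.0 * math.pi))

# int_0^inf PDF(s) (s - K)^+ ds, truncated well into the right tail
hi, n = F * math.exp(6.0 * sd), 100_000
h = (hi - K) / n
integral = sum(pdf_ln(K + i * h) * (i * h) for i in range(1, n + 1)) * h

# Closed-form undiscounted value under the same lognormal assumption
d1 = (math.log(F / K) + 0.5 * sd ** 2) / sd
d2 = d1 - sd
bs = F * Phi(d1) - K * Phi(d2)

print(integral, bs)   # the two values should agree closely
```

Repeating this for several strikes with a single $\sigma$ only matches the market if the terminal distribution really is lognormal; the failure of one $\sigma$ to fit every strike is exactly what the smile records.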
26th SSC CGL tier II level Solution Set, 2nd on Time and work problems
This is the 26th solution set of the 10-problem practice exercise for the SSC CGL Tier II exam, and the 2nd on Time and Work problems. For maximum gains, the test should be taken first; that is obvious. But more importantly, to absorb the concepts, techniques and deductive reasoning elaborated through these solutions, one must solve many problems in a systematic manner using this conceptual analytical approach. One can learn well only by practicing: learning by doing.
Before going through these solutions you should take the test by referring to SSC CGL Tier II level Question Set 26 on Time and Work problems 2.
26th solution set - 10 problems for SSC CGL Tier II exam: 2nd on Time and Work problems - time 15 mins
Problem 1.
In 16 days A can do 50% of a job. B can do one-fourth of the job in 24 days. In how many days can they do three-fourths of the job while working together?
a: 21, b: 9, c: 18, d: 24
Solution 1: Problem analysis and conceptual solution by work portion done in a day and working together concepts
As number of days to complete a portion of a job is directly proportional to the portion of work done by a worker, by the first statement, A completes the whole job in,
$16\times{2}=32$ days, as $50\text{%}=\frac{1}{2}$
By the same concept, as B completes $\frac{1}{4}$th of the job in 24 days, the whole job is completed by B in,
$24\times{4}=96$ days.
This is the use of the first concept of direct proportionality of work done to number of days the worker worked.
Solution 1: Problem solving second stage: Working together concept of summing up work portion done in a day
When A and B work together, the total work portion done by them in a day is given by summing up the work portion done by each of them in a day. Inverting the total work portion done in a day, you get the number of days required to complete the work by them while working together.
To get the work portion done in a day for a worker, just invert the number of days required by the worker to complete the work.
Using these concepts, the work done in a day by A and B working together is,
$\displaystyle\frac{1}{32}+\displaystyle\frac{1}{96}=\displaystyle\frac{4}{96}=\displaystyle\frac{1}{24}$.
This means, the whole work will be completed by the two working together in 24 days, and $\displaystyle\frac{3}{4}$th of the job will be completed in,
$24\times{\displaystyle\frac{3}{4}}=18$ days.
Answer: c: 18 days.
Key concepts used:
-- Work portion done to number of days of work direct proportionality
-- Work portion done in a day as inverse of number of days to complete the work
-- Working together concept to get portion of work done in a day by summing up portions of work done by each worker in a day
-- Number of days to complete the work as inverse of work portion done in a day.
If you are used to these common concepts of time and work, you can easily solve the problem in mind by being a little careful.
Problem 2.
If each of them had worked alone, B would have taken 10 hours more than A to complete a job. Working together, they can complete the job in 12 hours. How many hours would B take to do 50% of the job?
a: 30, b: 20, c: 10, d: 15
Solution 2: Problem analysis and execution: by mathematical reasoning and the working together concept
We have to introduce one variable for the work completion time of either A or B, not two. This follows the general principle of mathematical problem solving, and is supported by common sense. We assume $b$ hours as the time taken by B to complete the work, because the target duration involves B's completion time.
So completion time for A is,
$a=b-10$, which is in terms of $b$.
We would be dealing with a single variable.
As, working together, A and B complete the job in 12 hours, applying the working together concept (the total work portion done by the two per hour is the sum of the work portions done by each individually per hour), we get
$\displaystyle\frac{1}{b-10}+\displaystyle\frac{1}{b}=\displaystyle\frac{1}{12}$.
Cross-multiplying and rearranging terms we get the quadratic equation in $b$ as,
$b^2-34b+120=0$.
4 times 30 is 120, and 4 plus 30 is 34. So the factors of the quadratic equation are,
$(b-30)(b-4)=0$.
$b=4$ is not possible as $a=b-10$ will then be negative.
So, $b=30$ hours.
To complete 50% of the job, B will then take half of 30, that is, 15 hours.
Answer: d: 15.
Key concepts used:
-- Working together concept: work portion done in a day by two workers as the sum of their individual work portions done in a day
-- Work portion done in a day as inverse of number of days to complete the work
-- Formation and factorization of a quadratic equation.
In this form of time and work problems, it is hard to avoid formation and factorization of a quadratic equation. But usually this is easy.
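The root found above can be verified mechanically with exact rational arithmetic; a quick sketch using Python's fractions module:

```python
from fractions import Fraction

b = 30                                   # candidate completion time for B (hours)
a = b - 10                               # A is 10 hours faster
together = Fraction(1, a) + Fraction(1, b)
assert together == Fraction(1, 12)       # together they finish in 12 hours
print(b // 2)                            # 50% of the job takes B 15 hours
```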
Problem 3.
Two workers P and Q are engaged to do a piece of work. Working alone, P would take 8 hours more to complete the work than when they work together. Working alone, Q would take $4\frac{1}{2}$ hours more than when they work together. The time required to finish the work together is,
a: 5 hours, b: 6 hours, c: 4 hours, d: 8 hours
Solution 3: Problem analysis and solution by work per unit time and working together concept
Though the problem is quite interestingly framed, it is easy to set up the equation for per hour work portion done when P and Q work together as,
$\displaystyle\frac{1}{T}=\displaystyle\frac{1}{T+8}+\displaystyle\frac{1}{T+4.5}$, where $T$ is the working together work completion time you have to find.
The two denominators represent the two work completion times in terms of $T$ for P and Q.
Cross-multiply and simplify to form the desired quadratic equation as,
$(T+8)(T+4.5)=T(2T+12.5)$,
Or, $T^2=36$,
Or, $T=6$
An unexpectedly quick result, if you have followed the right path.
Cancellation of $12.5T$ on both sides of the equation makes things simpler.
Answer: b: 6 hours.
Key concepts used:
-- Work portion done per unit time
-- Working together concept.
Problem 4.
A contractor employed 200 men to complete a certain work in 150 days. If only one-fourth of the work gets completed in 50 days, then how many more men the contractor must employ to complete the whole work in time?
a: 100, b: 300, c: 200, d: 600
Solution 4: Problem analysis and execution: Mandays concept
200 men do one-fourth of the work in 50 days. Assuming that the work rate (work portion done by a man in a day) remains the same for all men, a total of $4\times{50}$, that is, 200 days would have been required for 200 men to finish the job. Obviously the contractor misjudged the work rate capacity of the men. That's why, to meet the target of 150 days, he needs to employ more men.
The reason for the need of more men being clear, let's get on with our main task of calculating the number of extra men required.
In 50 days, $\displaystyle\frac{1}{4}$th of work is done by 200 men,
So the total work amount in terms of mandays is,
$50\times{200}\times{4}=40000$ mandays.
To complete the remaining three-fourths of this work, that is, 30000 mandays of work, in the remaining 100 days, the number of men required will simply be,
$\displaystyle\frac{30000}{100}=300$.
The contractor has to employ then 100 more men to finish the job in 150 days.
Answer: a: 100.
Key concepts used:
-- Work amount in terms of mandays concept
-- Work rate assessment
-- Mandays technique to find the number of extra men required.
Problem 5.
A, B and C are engaged to do a work for Rs.5290. A and B together are supposed to do $\displaystyle\frac{19}{23}$rd of the work and B and C together $\displaystyle\frac{8}{23}$rd of the work. Then A should be paid,
a: Rs.4250, b: Rs.3450, c: Rs.2290, d: Rs.1950
Solution 5: Problem analysis and execution: Earning share concept, worker compensation proportional to work portion done
As B and C together complete $\displaystyle\frac{8}{23}$rd of the work, the rest of the work must be completed by A alone.
So A completes,
$1-\displaystyle\frac{8}{23}=\displaystyle\frac{15}{23}$rd of the work.
Total amount of Rs.5290 is to be paid proportionate to the work amount done. So A will be paid,
$\displaystyle\frac{15}{23}\times{5290}=15\times{230}=\text{Rs.}3450$.
The first statement of work portion done by A and B is there to create a diversion and is not required for getting the answer. But we can satisfy our curiosity by calculating that, with A doing $\displaystyle\frac{15}{23}$rd portion of the work, B would have done,
$\displaystyle\frac{19}{23}-\displaystyle\frac{15}{23}=\displaystyle\frac{4}{23}$rd of work and so C's work portion will be,
$\displaystyle\frac{8}{23}-\displaystyle\frac{4}{23}=\displaystyle\frac{4}{23}$rd portion of whole work.
This is why the total work given by the two statements adds up to $\displaystyle\frac{27}{23}$, the extra $\displaystyle\frac{4}{23}$ coming from B's contribution being counted twice.
Answer: b: Rs.3450.
Key concepts used:
-- Earning share concept
-- Worker compensation proportional to work portion done.
Problem 6.
Ruchi does $\displaystyle\frac{1}{4}$th of a job in 6 days and Bivas completes rest of the same job in 12 days. Then they together complete the job in,
a: $9\frac{3}{5}$ days, b: $9$ days, c: $7\frac{1}{3}$ days, d: $8\frac{1}{8}$ days
Solution 6: Problem analysis and solution: Working together concept
The first step is to accurately evaluate portion of job completed by each separately in 1 day.
As Ruchi does $\displaystyle\frac{1}{4}$th of the job in 6 days, her work rate in terms of work portion done in a day is,
$\displaystyle\frac{1}{4}\times{\displaystyle\frac{1}{6}}=\displaystyle\frac{1}{24}$.
Bivas completes the rest of the job, that is, $\displaystyle\frac{3}{4}$th of the job in 12 days. So the portion of job he completes in a day is,
$\displaystyle\frac{3}{4}\times{\displaystyle\frac{1}{12}}=\displaystyle\frac{1}{16}$.
The portion of the job they complete together in 1 day is then,
$\displaystyle\frac{1}{24}+\displaystyle\frac{1}{16}=\displaystyle\frac{5}{48}$.
And number of days they take to complete the job is inverse of this portion of total work done in a day by the two, which is,
$\displaystyle\frac{48}{5}=9\frac{3}{5}$ days.
Answer: a: $9\frac{3}{5}$ days.
Key concepts used:
-- Work portion done directly proportional to number of days of work
-- Work rate in terms of work portion done in a day is work portion done divided by number of days of work
-- Working together per unit time concept
-- Number of days to complete the work as inverse of work portion done in a day.
Easy to solve in mind with a little care.
Problem 7.
P and Q together can do a job in 6 days, and Q and R finish the same job in $\displaystyle\frac{60}{7}$ days. Starting the work alone, P worked for 3 days. Then Q and R continued for 6 days to complete the work. What is the difference in days in which R and P can complete the job, each working alone?
a: 15, b: 8, c: 12, d: 10
Solution 7: Problem analysis and solution: Work rate technique and working together concept
Assume, $p$, $q$ and $r$ to be the portion of work done by P, Q and R respectively in 1 day, each working alone.
By the first statement then,
$6(p+q)=W$, where $W$ is the total work amount.
So, $(p+q)=\displaystyle\frac{1}{6}W$
By the second statement similarly,
$\displaystyle\frac{60}{7}(q+r)=W$,
Or, $(q+r)=\displaystyle\frac{7}{60}W$.
And by the third statement,
$3p+6(q+r)=W$,
Or, $3p=W-6(q+r)=W\left(1-\displaystyle\frac{7}{10}\right)=\displaystyle\frac{3}{10}W$
So, $10p=W$.
It means P completes the work in 10 days working alone.
Subtracting $(q+r)$ from $(p+q)$, you get,
$p-r=W\left(\displaystyle\frac{1}{6}-\displaystyle\frac{7}{60}\right)=\displaystyle\frac{1}{20}W$,
Or, $20p-20r=W$,
Or, $20r=20p-W=2W-W=W$, since $10p=W$.
This means R will complete the work in 20 days working alone, and the desired difference in days is,
$20-10=10$.
Answer: d: 10.
Key concepts used:
-- Work rate technique
-- Working together concept
-- Sequencing of events
-- Algebraic simplification techniques.
The solution is speeded up by bypassing the need to evaluate $q$.
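The derivation above is easy to verify with exact rational arithmetic; a minimal Python sketch (normalizing the total work to $W=1$):

```python
from fractions import Fraction

# Verification sketch for Problem 7, with the total work normalized to W = 1.
W = Fraction(1)
pq = W / 6                  # p + q: P and Q finish in 6 days
qr = W * Fraction(7, 60)    # q + r: Q and R finish in 60/7 days

# Third statement: 3p + 6(q + r) = W, so p = (W - 6*qr) / 3
p = (W - 6 * qr) / 3
# (p + q) - (q + r) = p - r, so r = qr - (pq - p)
r = qr - (pq - p)

days_P = W / p   # 10 days
days_R = W / r   # 20 days
```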
Problem 8.
A man is twice as fast as a woman, who is twice as fast as a boy, in doing a piece of work. If one of each of them works together and they finish the work in 7 days, in how many days would a boy finish the work when working alone?
a: 7, b: 6, c: 49, d: 42

Solution 8: Problem analysis and solution: Work rate technique and Worker equivalence concept
Assume, $m$, $w$ and $b$ to be the portion of work done in a day by a man, a woman and a boy respectively when working alone. This is use of work rate technique. This approach reduces fraction calculation and thus speeds up solution.
So by the given efficiency statements, as in a day, a man does twice the work portion done by a woman, and a woman does twice the work portion done by a boy,
$m=2w=4b$.
Basically this means 1 man is equivalent to 4 boys and 1 woman is equivalent to 2 boys. This is the Worker equivalence concept: worker efficiency leads to worker equivalence.
So by the working together statement,
$7(m+w+b)=W$ where $W$ is the work amount.
Or, $7(4b+2b+b)=49b=W$,
This means, a boy working alone would complete the work in 49 days.
Answer: c: 49.

Key concepts used: Work rate technique -- Worker equivalence concept -- Working together concept -- Worker efficiency concept.

Problem 9.
While A can do a job working alone in 27 hours, B can do it in 54 hours also working alone. Find the share of C (in Rs.) if A, B and C get paid Rs.4320 for completing the job in 12 hours working together.
a: 1440, b: 960, c: 1280, d: 1920

Solution 9: Problem analysis and solution: Earning share proportional to work portion done and Working together concept
In 12 hours, work portion done by A and B is,
$12\left(\displaystyle\frac{1}{27}+\displaystyle\frac{1}{54}\right)=\displaystyle\frac{2}{3}$.
So the remaining $\displaystyle\frac{1}{3}$rd portion of the work is completed by C.
As share of earning is proportional to work portion done, and total work is worth Rs.4320, the earning by C is one-third of Rs.4320,
$\displaystyle\frac{1}{3}\times{4320}=1440$.
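A quick exact-arithmetic check of the share (job normalized to 1):

```python
from fractions import Fraction

# Verification sketch for Problem 9, job normalized to 1.
done_by_AB = 12 * (Fraction(1, 27) + Fraction(1, 54))   # = 2/3
share_C = (1 - done_by_AB) * 4320                       # pay proportional to work
```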
Answer: a: 1440.

Key concepts used: Earning share concept -- Earning to work done proportionality -- Working together concept -- Work portion left concept.

Problem 10.
While A and B together finish a work in 15 days, A and C take 2 more days than B and C working together to finish the same work. If A, B and C complete the work in 8 days, in how many days would C complete it working alone?
a: $20$ days, b: $40$ days, c: $24$ days, d: $17\frac{1}{7}$ days

Solution 10: Problem analysis and solution: Strategic problem definition, Work rate technique and Working together concept
The strategy of problem definition is to first form the algebraic relation that contains the maximum amount of certain information. Of the four given statements, the fourth carries the maximum amount of certain information, so we'll first form its corresponding equation as,
$8(a+b+c)=W$.
By work rate technique we have assumed variables $a$, $b$ and $c$ to be the work portion done per day by A, B and C respectively and $W$ as the total work amount.
Next we'll form the equation corresponding to the first statement as it involves no uncertainty,
$15(a+b)=W$.
It is easy to see that $c$ can be evaluated from these two equations by eliminating $(a+b)$.
From the first equation,
$(a+b+c)=\displaystyle\frac{W}{8}$, and from the second equation,
$(a+b)=\displaystyle\frac{W}{15}$.
Subtracting the second result from the first,
$c=W\left(\displaystyle\frac{1}{8}-\displaystyle\frac{1}{15}\right)=W\left(\displaystyle\frac{7}{120}\right)$.
Inverse of this work rate of C is the number of days to complete the work by C working alone. It is then,
$\displaystyle\frac{120}{7}=17\frac{1}{7}$.
Answer: d: $17\frac{1}{7}$ days.

Key concepts used: Strategy of problem definition -- Work rate technique -- Working together concept -- Solving in mind.
This is a good example of diversionary tactics in a question. The second and the third statements are not required at all in finding the answer.
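The two-equation shortcut can be checked with exact fractions; a minimal Python sketch (total work normalized to 1, using only the first and fourth statements, matching the observation above):

```python
from fractions import Fraction

# Verification sketch for Problem 10 (total work normalized to 1).
abc = Fraction(1, 8)     # a + b + c: A, B and C finish in 8 days
ab = Fraction(1, 15)     # a + b: A and B finish in 15 days
c = abc - ab             # = 7/120
days_C = 1 / c           # = 120/7 days
```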
Task for you: What would be the number of days to complete the work for B working alone?

Note: Observe that most, if not all, of the problems can be solved quickly in mind if you use the right concepts and techniques. Problem analysis and clear problem definition play an important role in such quick solutions.

Useful resources to refer to: Guidelines, Tutorials and Quick methods to solve Work Time problems -- SSC CGL Tier II level Work Time, Work wages and Pipes cisterns Question and solution sets -- SSC CGL Tier II level Solution set 26 on Time-work Work-wages 2 |
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice $\mathbb{Z}^d$. We first quantify the ergodicity of the environment viewed from the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar

March 28, Shamgar Gurevich, UW-Madison
Title:
Harmonic Analysis on $GL_n$ over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M

Title: Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes. |
I am working with beamforming algorithms for speaker arrays. One of the problems we face is that we need to
estimate the maximum amplitude of our test signals after we applied FIR/IIR filters to them. I am looking for ways to express the reduction in headroom that a given set of filters would give us. Ultimately I want to know by how much I have to attenuate my input signal to definitely avoid clipping in the DAC, with respect to the filter.
For sine sweeps, this is easy: the maximum of the filter's magnitude response is also the maximum amplitude gain we can expect. However, for broadband signals, this is not as simple.
I considered adding up $N$ sine waves of frequencies $\omega_n$ covering the range of 0Hz - Nyquist. If my filter input signal consisted of sines with amplitude 1, the maximum amplitude of this "stacked sine" signal would be $y_{max} = N$.
Looking at an output signal of the filter, the same method can be used to estimate the maximum amplitude: This time, the amplitudes would not be 1 but $|H(\omega_n)|$ and the maximum output amplitude would be $\sum_{n=0}^{N-1}|H(\omega_n)|$.
I could now compare the maximum input amplitude with the maximum output amplitude:
$AmplitudeGain = \frac{\sum_{n=0}^{N-1}|H(\omega_n)|}{N}$
My hope was that I may see by what amount I'd have to attenuate my broadband input signal to avoid clipping. The truth is that I'm really calculating the average of my magnitude response and that there are signals for which the calculated gain would produce clipping. Consider the following magnitude response, which is the result of a superdirective beamforming optimization:
As you can see, the maximum gain is about 13dB, but my formula would return the average which is in this case -12.8dB. This tells me that there is an error in my thinking. Adding up the respective maximum values of all sine-components of a broadband signal should give me the highest amplitude this signal could ever reach, right? Apparently my approach in itself is wrong.
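To see the gap numerically, here is a small Python sketch (the FIR coefficients are made up for illustration; the actual beamforming filters aren't given in the question). It computes the average-magnitude estimate described above alongside the $\ell_1$ norm of the impulse response, which bounds the worst-case output peak for any input bounded by 1:

```python
import numpy as np

# Made-up example FIR filter (the real filters come from the beamforming
# optimization and are not shown here).
h = np.array([0.5, 0.3, -0.2, 0.1])

# Sample |H(w_n)| at N frequencies from 0 up to Nyquist (w = pi).
N = 512
w = np.linspace(0.0, np.pi, N, endpoint=False)
H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h

# The "stacked sine" estimate discussed above: the average of |H|.
amplitude_gain = np.sum(np.abs(H)) / N

# For comparison: for any input with |x[n]| <= 1 the output obeys
# |y[n]| <= sum_k |h[k]|, i.e. the l1 norm of the impulse response is the
# true worst-case peak gain (attained by a worst-case sign pattern).
worst_case_gain = np.sum(np.abs(h))
```

For this made-up filter the average-magnitude figure sits below the $\ell_1$-norm bound, mirroring the discrepancy described above.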
How could I better estimate the maximum amplitude gain of my filter for broadband signals? |
The value of the directional derivative in its direction of greatest increase (i.e., as you said, along the direction of the gradient) is just that. It is the rate of change of the function along that direction. This answers the question, 'if I move a tiny bit $\epsilon$ in the direction of greatest increase, how does the function change?' The answer is that it changes by $\epsilon$ times the derivative in the direction of greatest increase.
It seems something is confusing you so perhaps a quick recap of the directional derivative is in order. Say to avoid clutter we're considering the derivative at the origin. Then we have the first order Taylor expansion $$ f(\mathbf x) \approx f(\mathbf 0) + \nabla f(\mathbf 0)\cdot\mathbf x$$ where $\nabla f$ is the gradient. Recall that the directional derivative of $f$ in the direction of a unit vector $\hat{\mathbf u}$ is $D_{\hat{\mathbf u}}f = \nabla f\cdot\hat{\mathbf u}.$ Thus if we move a small amount $\epsilon $ in the direction $\hat{\mathbf u}$ from the origin then $\mathbf x = \epsilon\hat{\mathbf u}$ and we have $$ f(\mathbf x)-f(\mathbf 0) = \nabla f\cdot(\epsilon\hat{\mathbf u}) = \epsilon D_{\hat{\mathbf u}}f(0).$$
Why is the gradient the direction of greatest derivative? Well, the derivative is the dot product of the gradient with the direction... of course that's largest when the direction is parallel to the gradient.
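A quick numerical sanity check of these claims (the function $f(x,y)=x^2+3y$ and the base point are my own example, not from the question): sampling the directional derivative over many unit directions, the maximum indeed occurs along the gradient direction, with value $|\nabla f|$.

```python
import numpy as np

# Example function f(x, y) = x**2 + 3*y with gradient (2x, 3); base point (1, 2).
def f(p):
    x, y = p
    return x**2 + 3*y

p0 = np.array([1.0, 2.0])
g = np.array([2*p0[0], 3.0])          # gradient of f at p0

# Numerically estimate the directional derivative over many unit directions.
eps = 1e-6
thetas = np.linspace(0.0, 2*np.pi, 3600, endpoint=False)
units = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
derivs = np.array([(f(p0 + eps*u) - f(p0)) / eps for u in units])

u_best = units[np.argmax(derivs)]     # direction of the largest derivative
```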
And what is the value of the derivative in this direction? It's just the gradient dotted into the unit vector in the same direction, so it's the magnitude of the gradient. Going a little more explicitly, the direction of maximum increase is $\hat{\mathbf u}_{max} = \frac{\nabla f}{|\nabla f|}$ so we have a maximal derivative $$D_{max} = \nabla f\cdot \hat{\mathbf u}_{max} = \frac{\nabla f\cdot \nabla f}{|\nabla f|} = |\nabla f|. $$ So the magnitude of the gradient represents the rate of change in the direction of greatest increase. And as discussed before, the direction of the gradient is the direction of greatest increase. Thus, both the magnitude and direction have nice interpretations. |
Views for "Measurement of $\eta$ meson production in $\gamma\gamma$ interactions and $\Gamma\left(\eta \rightarrow \gamma\gamma\right)$ with the KLOE detector": 131 total.
Monthly views, April 2019 to October 2019: 1, 0, 2, 1, 0, 2, 0.
File views: pytko_czerwinski_moskal_silarski_zdebik_et-al_measurement_of_eta_meson_production_2013.pdf: 1; pytko_czerwinski_moskal_silarski_zdebik_et-al_measurement_of_eta_meson_production_2013.odt: 1.
Views by country: Poland 119, United States 5, Germany 4, China 1, India 1, Ukraine 1.
Views by city: Warsaw 77, Kraków 31, Jacksonville 2, Woodbridge 2, Changsha 1, Mountain View 1, New Delhi 1, Troszyn 1. |
Let $A \in \mathbb{R}^{n\times n}$ be a symmetric positive semi-definite matrix with exactly one zero eigenvalue and $B \in \mathbb{R}^{n\times n}$ be a symmetric matrix having $k$ positive eigenvalues.
Is it possible to infer the number of positive eigenvalues of the GEP
$Av = \lambda B v$
given the above information? Or some bounds on the number of positive eigenvalues?
I assume that the generalized eigenvalues will be real in this case, but I'm not sure about the proof. Following the classic proof for the basic eigenvalue problem results in
$u^{*T}Bu(\lambda^* - \lambda) = 0$
with $u^{*T}Bu$ not necessarily being nonzero if $B$ is just a real symmetric matrix.
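Small numerical experiments are one way to build intuition here. In this made-up diagonal sketch (my own example, chosen so the generalized eigenvalues are transparent), $B$ is invertible, so $Av=\lambda Bv$ reduces to the ordinary eigenproblem for $B^{-1}A$ and the eigenvalue counts can be read off:

```python
import numpy as np

# Hypothetical example: A is symmetric PSD with exactly one zero eigenvalue,
# B is symmetric with k = 2 positive eigenvalues. Diagonal matrices are used
# so the generalized eigenvalues are simply the ratios a_i / b_i.
A = np.diag([0.0, 1.0, 2.0, 3.0])
B = np.diag([1.0, 1.0, -1.0, -1.0])

# With B invertible, A v = lam B v is equivalent to (B^{-1} A) v = lam v.
lams = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
n_positive = int(np.sum(lams > 0))   # here: 1
```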
A similar question assumes a general matrix $A$, not a real PSD one.
Another related question points out that the number of generalized eigenvalues equal to zero will be the same as the number of such eigenvalues of $A$, but I don't understand the argument. |
I am given a matrix $A = (a,b;c,d)$ in $GL(2,\mathbb{C})$ and a real algebra, say $V$, with basis $X,Y,Z$ such that $[X,Y]=0, [X,Z]=aX+bY, [Y,Z]=cX+dY$. I have to show that $V$ is a real Lie algebra.
My attempt: it's a vector space over $\mathbb{R}$ (duh!). I think we first need to find $[X,X], [Y,Y], [Z,Z]$, which I don't know how to do... and then use them to show bilinearity. Similarly, first somehow find $[Y,X], [Z,Y], [Z,X]$ to verify antisymmetry.
Assuming both bilinearity and antisymmetry, it's sufficient to verify the Jacobi identity for elements $\alpha, \beta, \gamma$ where $\alpha, \beta, \gamma \in \{X,Y,Z\}$.
Consider the three cases: all three elements are different from one another, all are the same, or two of them are the same and one differs from the other two. Now, using antisymmetry and the above computed values of the Lie bracket helps us verify the Jacobi identity.
But as trivial as it seems, I don't seem to have a clue about how to show bilinearity and compute the Lie brackets $[X,X],\dots$ I'd appreciate any hints. Please do not post a solution. Thank you very much! |
Let $K$ be a number field. For any $\alpha, \beta \in \mathcal{O}_K$ such that $N_{K/\mathbb{Q}}(\alpha) | N_{K/\mathbb{Q}}(\beta)$, is there a $\gamma \in \mathcal{O}_K$ such that $N_{K/\mathbb{Q}}(\gamma) = N_{K/\mathbb{Q}}(\beta)/N_{K/\mathbb{Q}}(\alpha)$?
Obviously, we have $N_{K/\mathbb{Q}}(\beta/\alpha) = N_{K/\mathbb{Q}}(\beta)/N_{K/\mathbb{Q}}(\alpha)$, but it is not true in general that $\beta/\alpha \in \mathcal{O}_K$. For example, take $K = \mathbb{Q}(i)$ and $\alpha = 5, \beta = 6 + 8i$. Then $N_{K/\mathbb{Q}}(\alpha) = 25 | 100 = N_{K/\mathbb{Q}}(\beta)$, but $\beta/\alpha = \frac{6 + 8i}{5} \not\in \mathbb{Z}[i] = \mathcal{O}_K$. Of course we know that $N_{K/\mathbb{Q}}(2) = 4 = \frac{100}{25}$ so this is not a counterexample to the question.
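The $\mathbb{Q}(i)$ numbers above are easy to double-check with a few lines (using $N(a+bi)=a^2+b^2$):

```python
# Norm on Z[i]: N(a + bi) = a^2 + b^2, checking the numbers used above.
def norm(a, b):
    return a*a + b*b

n_alpha = norm(5, 0)          # N(5) = 25
n_beta = norm(6, 8)           # N(6 + 8i) = 100
ratio = n_beta // n_alpha     # 4 = N(2)
mu_num = norm(3, 4)           # (3 + 4i)/5 has norm 25/25 = 1
```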
But perhaps there is a chance that given $\beta/\alpha$, we could find some element $\mu \in K$ such that $N_{K/\mathbb{Q}}(\mu) = 1$ and such that $\mu\beta/\alpha \in \mathcal{O}_K$. If $\beta/\alpha \not\in \mathcal{O}_K$ then we cannot have $\mu \in \mathcal{O}_K$, but there exist in general plenty of elements of unit norm in $K$ that are not algebraic integers, so the limitation is not as stringent as that given by the structure of the unit group of $\mathcal{O}_K$. For an example of this, see again $K = \mathbb{Q}(i)$ and the element $\frac{3+4i}{5}$ which has norm $1$ but is not in $\mathbb{Z}[i]$ (this is also what I used to produce the example in the paragraph above).
I know that in the case of quadratic fields $K = \mathbb{Q}(\sqrt{d})$, we can at least parametrize the set $\{x + y\sqrt{d}: x^2 - dy^2 = 1 \text{ and } x, y \in \mathbb{Q}\}$ using the method of choosing a starting point like $(1, 0)$ and constructing the intersection of lines of rational slopes passing through this point with the curve defined by $x^2 - dy^2 = 1$. But I don't know if that is of much help. |
Now showing items 1-10 of 26
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
Let $B_x$ be the $x$-section of a $\mu_x\otimes \mu_y$-measurable set $B$, defined by $$B_x=\{y\in Y:(x,y)\in B\},$$ where $\mu_x\otimes \mu_y$, which I will call $\mu$, is the Lebesgue extension of the product measure $\mu_x\times \mu_y$ (both measures being $\sigma$-additive complete measures defined on $\sigma$-algebras of subsets of $X$ and $Y$, respectively). I read that if $\mu(B)=0$ then for almost all $x\in X$, $\mu_y(B_x)=0$, but although it sounds very intuitive, I cannot prove it to myself. Could anybody explain this interesting fact? I $\infty$-ly thank you!
Recall that $\mu(B)=\int_X\mu_Y(B_x)\mathrm d\mu_X(x)$ hence, if $\mu(B)=0$ then $\mu_Y(B_x)=0$ for $\mu_X$-almost every $x$.
To show this, consider $A_n=\{x\in X\mid n\mu_Y(B_x)\geqslant1\}$ and note that, for every $n$, $0=n\mu(B)\geqslant\int_{A_n}n\mu_Y(B_x)\mathrm d\mu_X(x)\geqslant\mu_X(A_n)$ hence $\mu_X(A_n)=0$. This implies that $\mu_X(A_0)=0$ where $A_0=\bigcup\limits_{n\geqslant1}A_n=\{x\in X\mid\mu_Y(B_x)\ne0\}$, QED. |
How can I generate the higher $n$ quantum harmonic oscillator wavefunction (in position space) numerically? Here, higher means around $n=500$, or say $n=2000$, where $n$ is the $n$th oscillator wavefunction.
The position-space wavefunction of the $n$th state involves Hermite polynomials of order $n$ (see Griffiths' book for the detailed form of the solution). I can generate wavefunctions only up to $n\approx 160$ using the 'gsl' library in C, because the library doesn't offer Hermite polynomials of order higher than $\approx 170$.
To get rid of this limitation, I found that the higher $n$ wavefunctions can be obtained by using the 'raising operator', repeatedly acting with it on the previous wavefunction to get the next one, and so on.
Raising operator: $a^{\dagger} = \frac{1}{\sqrt{2}} \left(x - \frac{d}{dx}\right)$. (Consider, constants in the expression to be 1.)
In the expression, the derivative term, i.e., $d/dx$ needs to be evaluated. It can be solved either by the finite-difference method or by using Fourier transform to get 1st derivative (which is more accurate than the finite-difference method).
I wrote a Matlab program for $a^{\dagger}$ (shown below) using the 2nd method, i.e., the Fourier transform. After evaluating states greater than $n=10$, the program becomes unstable, and even worse for higher $n$ (instability plot is shown below in the link). The same issue appeared when I used the finite-difference method to evaluate the derivative (code is not attached).
Solution to the issue or an alternate approach to the problem will be helpful.
UPDATE: I'm also interested to know the reason for the development of instability while using 'fft' in the code.
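One alternative worth noting (my suggestion, not part of the question): the normalized eigenfunctions satisfy the exact three-term recurrence $\psi_k(x)=\sqrt{2/k}\,x\,\psi_{k-1}(x)-\sqrt{(k-1)/k}\,\psi_{k-2}(x)$ (with $\hbar=m=\omega=1$), which follows from $H_{k+1}=2xH_k-2kH_{k-1}$ and is algebraically equivalent to repeated application of $a^{\dagger}$, but needs neither an FFT nor finite differences. A Python sketch:

```python
import numpy as np

def oscillator_state(n, x):
    """Normalized n-th harmonic oscillator eigenfunction on the grid x
    (hbar = m = omega = 1), built from the three-term recurrence."""
    psi_prev = np.zeros_like(x)                  # psi_{-1} = 0
    psi = np.pi ** -0.25 * np.exp(-0.5 * x**2)   # psi_0
    for k in range(1, n + 1):
        psi, psi_prev = (np.sqrt(2.0 / k) * x * psi
                         - np.sqrt((k - 1.0) / k) * psi_prev), psi
    return psi

# n = 500 comfortably exceeds what direct Hermite-polynomial evaluation allows.
dx = 0.01
x = np.arange(-40.0, 40.0, dx)
psi500 = oscillator_state(500, x)

# If the recurrence is stable, the grid norm should still be ~1.
norm = np.sum(psi500**2) * dx
```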
sigma = 1.0;
xmin = -10.0;
xmax = 10.0;
npts = 512;
nstates = 14;
dx = (xmax - xmin)/npts;
x = xmin + dx*(0:npts-1);

% -- initial state/wavefunction
psi_init = exp(-0.5*x.^2/sigma^2)*(pi*sigma^2)^(-0.25);
psi = zeros(nstates, npts);   % -- list to store oscillator states
psi(1,:) = psi_init;
for nn = 2:nstates
    psi(nn,:) = raising_psi(psi(nn-1,:), xmin, xmax, npts, sigma);
end

function adag_fn = raising_psi(previous_fn, a, b, n, sigma)
    % -- raising operator
    dx = (b - a)/n;
    x = a + dx*(0:n-1);
    % -- going into Fourier space using 'fft'
    fwd_fft = fft(previous_fn);
    k = (2*pi/(b - a))*[0:n/2-1, 0, -n/2+1:-1];
    dfk_dx = 1i*k.*fwd_fft;       % -- 1st derivative in Fourier space
    df_dx = ifft(dfk_dx);         % -- back into position space
    % -- a^dagger acting on the previous state
    adag_fn = (x.*previous_fn/sigma - sigma*real(df_dx))/sqrt(2);
    norm_fn = adag_fn*transpose(adag_fn);
    adag_fn = adag_fn/sqrt(norm_fn);   % -- normalization
end
|
Difference between revisions of "Kakeya problem"
(6 intermediate revisions by 4 users not shown)

A '''Kakeya set''' in <math>{\mathbb F}_3^n</math> is a subset <math>A\subset{\mathbb F}_3^n</math> that contains an [[algebraic line]] in every direction; that is, for every <math>d\in{\mathbb F}_3^n</math>, there exists <math>a\in{\mathbb F}_3^n</math> such that <math>a,a+d,a+2d</math> all lie in <math>A</math>. Let <math>k_n</math> be the smallest size of a Kakeya set in <math>{\mathbb F}_3^n</math>.
Clearly, we have <math>k_1=3</math>, and it is easy to see that <math>k_2=7</math>. Using a computer, it is not difficult to find that <math>k_3=13</math> and <math>k_4\le 27</math>. Indeed, it seems likely that <math>k_4=27</math> holds, meaning that in <math>{\mathbb F}_3^4</math> one cannot get away with just <math>26</math> elements.
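For the record, the small cases mentioned here really can be checked by exhaustive search; a short Python sketch (my own illustration, feasible only for tiny <math>n</math>) that recovers <math>k_2=7</math>:

```python
from itertools import combinations, product

# Brute-force search over all subsets of F_3^2 for the smallest Kakeya set.
pts = list(product(range(3), repeat=2))
# One representative per direction class, since d and 2d = -d span the same lines.
dirs = [(0, 1), (1, 0), (1, 1), (1, 2)]

def is_kakeya(subset):
    S = set(subset)
    # S must contain some line {x, x+d, x+2d} for every direction d.
    return all(
        any(all(((x[0] + t*d[0]) % 3, (x[1] + t*d[1]) % 3) in S for t in range(3))
            for x in S)
        for d in dirs)

k2 = min(r for r in range(1, 10)
         if any(is_kakeya(S) for S in combinations(pts, r)))
```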
== ==
Trivially, we have
:<math>k_n\le k_{n+1}\le 3k_n</math>.
Since the Cartesian product of two Kakeya sets is another Kakeya set,
:<math>k_{n+m} \leq k_m k_n</math>;
this implies that <math>k_n^{1/n}</math> converges to a limit as <math>n</math> goes to infinity.
To each of the <math>(3^n-1)/2</math> directions in <math>{\mathbb F}_3^n</math> there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, <math>\binom{k_n}{2}\ge 3\cdot(3^n-1)/2</math>, and hence
−
:<math>k_n\
+
:<math>k_n\3^{(n+1)/2}.</math>
One can derive essentially the same conclusion using the "bush" argument, as follows. Let <math>E\subset{\mathbb F}_3^n</math> be a Kakeya set, considered as a union of <math>N := (3^n-1)/2</math> lines in all different directions. Let <math>\mu</math> be the largest number of lines that are concurrent at a point of <math>E</math>. The number of point-line incidences is at most <math>|E|\mu</math> and at least <math>3N</math>, whence <math>|E|\ge 3N/\mu</math>. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity <math>\mu</math>, we see that <math>|E|\ge 2\mu+1</math>. Comparing the two last bounds one obtains
One can derive essentially the same conclusion using the "bush" argument, as follows. Let <math>E\subset{\mathbb F}_3^n</math> be a Kakeya set, considered as a union of <math>N := (3^n-1)/2</math> lines in all different directions. Let <math>\mu</math> be the largest number of lines that are concurrent at a point of <math>E</math>. The number of point-line incidences is at most <math>|E|\mu</math> and at least <math>3N</math>, whence <math>|E|\ge 3N/\mu</math>. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity <math>\mu</math>, we see that <math>|E|\ge 2\mu+1</math>. Comparing the two last bounds one obtains
<math>|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}</math>.
<math>|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}</math>.
−
A better bound follows by using the "slices argument". Let <math>A,B,C\subset{\mathbb F}_3^{n-1}</math> be the three slices of a Kakeya set <math>E\subset{\mathbb F}_3^n</math>. Form a bipartite graph <math>G</math> with the partite sets <math>A</math> and <math>B</math> by connecting <math>a</math> and <math>b</math> by an edge if there is a line in <math>E</math> through <math>a</math> and <math>b</math>. The restricted sumset <math>\{a+b\colon (a,b)\in G\}</math> is contained in the set <math>-C</math>, while the difference set <math>\{a-b\colon (a,b)\in G\}</math> is all of <math>{\mathbb F}_3^{n-1}</math>. Using an estimate from [http://front.math.ucdavis.edu/math.CO/9906097 a paper of Katz-Tao], we conclude that <math>3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}</math>, leading to <math>|E|\ge 3^{6(n-1)/11}</math>. Thus,
+ + + + + + +
A better bound follows by using the "slices argument". Let <math>A,B,C\subset{\mathbb F}_3^{n-1}</math> be the three slices of a Kakeya set <math>E\subset{\mathbb F}_3^n</math>. Form a bipartite graph <math>G</math> with the partite sets <math>A</math> and <math>B</math> by connecting <math>a</math> and <math>b</math> by an edge if there is a line in <math>E</math> through <math>a</math> and <math>b</math>. The restricted sumset <math>\{a+b\colon (a,b)\in G\}</math> is contained in the set <math>-C</math>, while the difference set <math>\{a-b\colon (a,b)\in G\}</math> is all of <math>{\mathbb F}_3^{n-1}</math>. Using an estimate from [http://front.math.ucdavis.edu/math.CO/9906097 a paper of Katz-Tao], we conclude that <math>3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}</math>, leading to <math>|E|\ge 3^{6(n-1)/11}</math>. Thus,
:<math>k_n \ge 3^{6(n-1)/11}.</math>
:<math>k_n \ge 3^{6(n-1)/11}.</math>
−
==
+
== ==
We have
We have
Latest revision as of 00:35, 5 June 2009
A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
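The small cases can be reproduced by exhaustive search. The sketch below is a minimal brute-force check (the helper names are ours, not from the source); it confirms [math]k_1=3[/math] and [math]k_2=7[/math], while [math]n=3[/math] is already out of reach for this naive enumeration.

```python
from itertools import combinations, product

def is_kakeya(S, n):
    """True if S contains a line {e, e+d, e+2d} in every direction d of F_3^n.

    A line in direction d coincides, as a set, with the line in direction 2d,
    so it suffices to check one representative per direction pair {d, 2d}:
    we take the d whose first nonzero coordinate is 1, giving (3^n - 1)/2
    directions in all.
    """
    pts = set(S)
    dirs = [d for d in product(range(3), repeat=n)
            if any(d) and next(x for x in d if x) == 1]
    return all(
        any(all(tuple((e[i] + t * d[i]) % 3 for i in range(n)) in pts
                for t in (1, 2))
            for e in pts)
        for d in dirs)

def smallest_kakeya(n):
    """k_n by exhaustive search over subsets (feasible only for n <= 2)."""
    pts = list(product(range(3), repeat=n))
    for size in range(1, 3 ** n + 1):
        if any(is_kakeya(S, n) for S in combinations(pts, size)):
            return size
```

For [math]n=2[/math] this searches all 465 subsets of size at most 6, finds none of them Kakeya, and then finds a 7-element example.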
Basic Estimates
Trivially, we have
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to
[math]k_{n+m} \leq k_m k_n[/math];
this implies (by Fekete's subadditivity lemma, applied to [math]\log k_n[/math]) that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
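The product construction is easy to exercise concretely: concatenating coordinates of a 7-element Kakeya set in [math]{\mathbb F}_3^2[/math] with itself gives a 49-element Kakeya set in [math]{\mathbb F}_3^4[/math] (weaker than the computer bound [math]k_4\le 27[/math], but it illustrates the submultiplicativity). A sketch, using a hand-picked 7-point set that is not from the source but is easy to check by hand:

```python
from itertools import product

def is_kakeya(S, n):
    """True if S contains a line {e, e+d, e+2d} for one representative d
    of each of the (3^n - 1)/2 direction pairs {d, 2d}."""
    pts = set(S)
    dirs = [d for d in product(range(3), repeat=n)
            if any(d) and next(x for x in d if x) == 1]
    return all(
        any(all(tuple((e[i] + t * d[i]) % 3 for i in range(n)) in pts
                for t in (1, 2)) for e in pts)
        for d in dirs)

# A 7-element Kakeya set in F_3^2 (one of several minimal examples).
S2 = {(0, 0), (1, 0), (2, 0), (0, 1), (0, 2), (1, 1), (2, 2)}

# Coordinate concatenation realizes the Cartesian product: a line in
# direction (d, d') is the coordinatewise pairing of a line in direction d
# with a line in direction d'.
S4 = {a + b for a in S2 for b in S2}
```

Checking `is_kakeya(S4, 4)` confirms that the 49-point product set contains a line in each of the 40 directions of [math]{\mathbb F}_3^4[/math].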
Lower Bounds
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\ge 3^{(n+1)/2}.[/math]
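The pair-counting inequality already gives nontrivial numbers in small dimensions. The sketch below (the helper name is ours) computes the smallest [math]k[/math] with [math]\binom{k}{2}\ge 3(3^n-1)/2[/math]; for [math]n=3[/math] it yields 10, consistent with the known value [math]k_3=13[/math].

```python
from math import comb

def pair_count_bound(n):
    """Smallest k with C(k, 2) >= 3 * (3^n - 1)/2.

    A Kakeya set supplies at least three point pairs per direction, since a
    full line {e, e+d, e+2d} determines the direction d through the pairs
    (e, e+d), (e+d, e+2d), and (e, e+2d).
    """
    need = 3 * (3 ** n - 1) // 2
    k = 2
    while comb(k, 2) < need:
        k += 1
    return k
```

As [math]n[/math] grows, `pair_count_bound(n)` behaves like [math]\sqrt{3}\cdot 3^{n/2}=3^{(n+1)/2}[/math].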
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
The better estimate
[math]k_n\ge (9/5)^n[/math]
is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements).
A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao (http://front.math.ucdavis.edu/math.CO/9906097), we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
Upper Bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
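This construction can be verified mechanically for small [math]n[/math]. The sketch below (names ours) builds the set of vectors in which the digit 1 or the digit 2 does not occur and checks both its size and the Kakeya property.

```python
from itertools import product

def missing_digit_set(n):
    """Vectors in F_3^n in which the digit 1 or the digit 2 does not occur."""
    return {v for v in product(range(3), repeat=n)
            if 1 not in v or 2 not in v}

def is_kakeya(S, n):
    """True if S contains a line {e, e+d, e+2d} for one representative d
    of each of the (3^n - 1)/2 direction pairs {d, 2d}."""
    pts = set(S)
    dirs = [d for d in product(range(3), repeat=n)
            if any(d) and next(x for x in d if x) == 1]
    return all(
        any(all(tuple((e[i] + t * d[i]) % 3 for i in range(n)) in pts
                for t in (1, 2)) for e in pts)
        for d in dirs)
```

Concretely, for a direction [math]d[/math] the line starting at [math]e[/math] with [math]e_i=2[/math] where [math]d_i=2[/math] and [math]e_i=0[/math] elsewhere stays inside the set: its three points avoid the digit 1, the digit 2, and the digit 1, respectively.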
This estimate can be improved using an idea due to Ruzsa (seemingly unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{n/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math] having essentially disjoint supports, and then [math]d_1,d_1+d,d_1+2d\in E[/math], since [math]d_1+d=2(d_1+d_2)\in B[/math] and [math]d_1+2d=d_2\in A[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
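The size claim can be sanity-checked numerically without Stirling's formula: [math]|A|[/math] and [math]|B|[/math] are dominated by the binomial coefficients [math]\binom{n}{n/3}[/math] and [math]\binom{n}{2n/3}[/math] (equal by symmetry), so the exponential growth rate of [math]|E|[/math] should approach [math]\tfrac13\log(27/4)[/math]. A small sketch (the variable names are ours):

```python
from math import comb, log

# Growth rate of log C(n, n/3) per coordinate, for a moderately large n;
# by Stirling this tends to the binary-entropy value (1/3) * log(27/4).
n = 3000
rate = log(comb(n, n // 3)) / n
target = log(27 / 4) / 3
```

The discrepancy is of order [math]\log n / n[/math], so already at [math]n=3000[/math] the two agree to about two decimal places.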
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
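The numerical constants are straightforward to confirm; the snippet below also includes the Dvir-Kopparty-Saraf-Sudan base [math]9/5[/math] for comparison, showing that the slices argument gives the slightly stronger lower bound.

```python
# Bases of the exponential bounds on k_n discussed above.
lower_slices = 3 ** (6 / 11)     # slices-argument lower bound, ~1.8207
lower_dkss = 3 / (2 - 1 / 3)     # (q/(2-1/q)) at q=3, i.e. 9/5 = 1.8
upper = (27 / 4) ** (1 / 3)      # Ruzsa-construction upper bound, ~1.8899
```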