I'm wondering how a set of keys could be assigned to nodes in a 2-3-4 tree in order to minimize the height of the tree? Does the sequence of insertion matter with 2-3-4 trees? The insertion order is relevant for the height of the tree. Inserting (in this order) 1,2,3,4,5,6,7,8 gives a tree of height 3, while inserting these keys in the order 1,3,4,6,7,8,2,5 gives a tree of height 2. In order to create a tree of minimal height, you can place the keys with ranks $\lceil \frac{n}4\rceil$, $\lceil \frac{n}2\rceil$, and $\lfloor \frac{3n}4\rfloor$ in the root, partition the remaining keys accordingly into the four subtrees and apply this recursively to each of the subtrees. Depending on the number of keys, you may have to shift a few keys around between leaves and their parents to make sure that all leaves are at the same level. Another answer argues: No, the sequence of insertion does not matter, as 2-3-4 trees are self-balancing data structures. A 2-3-4 tree of $N$ nodes has the following height: $$ \frac{1}{2} \log (N + 1) \leq \text{height} \leq \log (N + 1) $$ That holds because, as per Wikipedia: 2–3–4 trees are B-trees of order 4 (Knuth 1998); like B-trees in general, they can search, insert and delete in $O(\log n)$ time. One property of a 2–3–4 tree is that all external nodes are at the same depth.
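Here is a minimal Python sketch (my own illustration, not from the original answer) of the rank-splitting construction described above. It builds a nested-dict representation of the tree directly rather than implementing insertion, and it omits the leaf-level adjustments the answer mentions.

```python
# Minimal sketch (assumption: keys are sorted and distinct). Builds a nested
# representation of the rank-splitting construction from the answer above:
# put the keys near ranks n/4, n/2, 3n/4 in the root and recurse on the four
# remaining blocks. The leaf-level rebalancing mentioned in the answer is omitted,
# and height here counts edges, so conventions may differ from the answer's counts.

def build_min_height(keys):
    n = len(keys)
    if n == 0:
        return None
    if n <= 3:
        return {"keys": list(keys), "children": []}
    i1, i2, i3 = n // 4, n // 2, (3 * n) // 4   # approximate ranks of the separators
    seps = [keys[i1], keys[i2], keys[i3]]
    blocks = [keys[:i1], keys[i1 + 1:i2], keys[i2 + 1:i3], keys[i3 + 1:]]
    return {"keys": seps, "children": [build_min_height(b) for b in blocks]}

def height(node):
    if node is None or not node["children"]:
        return 0
    return 1 + max(height(c) for c in node["children"])

if __name__ == "__main__":
    print(height(build_min_height(list(range(1, 9)))))    # 8 keys
    print(height(build_min_height(list(range(1, 41)))))   # 40 keys
```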
There are thousands of NP-complete problems in the literature, and most pairs do not have explicit reductions. Since polynomial-time many-one reductions compose, it suffices for researchers to stop when the graph of published reductions is strongly connected, making research into NP-completeness a much more scalable activity. Although I really don't see the point, I'll humor you by giving a reasonably simple reduction from 3-PARTITION to BALANCED PARTITION, with a few hints about how the proof of correctness goes. Let the input to the reduction be $x_1, \ldots, x_{3n}, B \in \mathbb Z$, an instance of 3-PARTITION. Verify that $\sum_{i\in[3n]} x_i = nB$. Let $\beta$ be a large number to be chosen later. For every $i \in [3n]$ and every $j \in [n]$, output two numbers$$x_i \beta^j + \beta^{n+j} + \beta^{2n+i} + \beta^{(i+4)n+j}\\\beta^{(i+4)n+j}.$$Intuitively, the first number means that $x_i$ is assigned to 3-partition $j$, and the second number means the opposite. The $x_i \beta^j$ term is used to track the sum of 3-partition $j$. The $\beta^{n+j}$ term is used to track the cardinality of 3-partition $j$. The $\beta^{2n+i}$ term is used to ensure that each $x_i$ is assigned exactly once. The $\beta^{(i+4)n+j}$ term is used to force these numbers into different balanced partitions. Output two more numbers$$1 + \sum_{j\in[n]} \Bigl((n-2)B\beta^j + (3n-6)\beta^{n+j}\Bigr) + \sum_{i\in[3n]} (n-2)\beta^{2n+i}\\1.$$The first number identifies its balanced partition as “true”, and the other, as “false”. The $1$ term is used to force these numbers into different balanced partitions. The other terms make up the difference between the sum of a 3-partition and the sum of its complement and the size of a 3-partition and the size of its complement and the number of times $x_i$ is assigned. $\beta$ should be chosen large enough to ensure that “overflow” cannot occur.
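A minimal Python sketch of the construction just described (my own illustration, not part of the original answer). The choice of $\beta$ below is only a heuristic guess at a "large enough" base; a real proof would have to justify that no carries between $\beta$-digits can occur.

```python
# Sketch of the reduction described above, from a 3-PARTITION instance
# (x_1..x_{3n}, B) to a multiset of numbers for BALANCED PARTITION.
# Indices i in [3n] and j in [n] are 1-based, matching the text.
# beta is chosen heuristically; the answer only requires it to be large
# enough that no "overflow" (carrying) occurs between beta-digits.

def reduce_3partition_to_balanced_partition(x, B):
    n3 = len(x)
    assert n3 % 3 == 0
    n = n3 // 3
    assert sum(x) == n * B
    beta = 4 * n3 * (B + 1)  # assumption: large enough to prevent carries
    out = []
    for i in range(1, n3 + 1):
        for j in range(1, n + 1):
            yes = (x[i - 1] * beta**j + beta**(n + j)
                   + beta**(2 * n + i) + beta**((i + 4) * n + j))
            no = beta**((i + 4) * n + j)
            out += [yes, no]
    # the two extra numbers identifying the "true" and "false" sides
    true_side = 1
    true_side += sum((n - 2) * B * beta**j + (3 * n - 6) * beta**(n + j)
                     for j in range(1, n + 1))
    true_side += sum((n - 2) * beta**(2 * n + i) for i in range(1, n3 + 1))
    out += [true_side, 1]
    return out
```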
I've been trying to either prove or find a counter-example to the idea of jointly-WSS being transitive. In other words: does ($x$ and $y$ are jointly WSS) $\wedge$ ($y$ and $z$ are jointly WSS) imply that $x$ and $z$ are jointly WSS? Obviously it boils down to only asking about the cross-correlation condition, so one can ask: Does $\forall t, \Delta t : C_{x,y}(t, t+\Delta t)=C_{x,y}(0, \Delta t) \text{ and } C_{y,z}(t, t+\Delta t)=C_{y,z}(0, \Delta t)$ imply that $\forall t, \Delta t : C_{x,z}(t, t+\Delta t)=C_{x,z}(0, \Delta t)$? My basic intuition is that this is not the case, but I cannot find an appropriately pathological set of signals, nor have I managed to prove transitivity. Any help would be very much appreciated! Thank you. The simplest counter-example I can think of (and it is certainly pathological) is $y(t)=0$ for any $x,z$ which are not jointly WSS.
Suppose $f:\mathbb{R}_+ \to \mathbb{R}$ is a continuous and strictly increasing function. Define $g(x) = \frac{f(x+1)-f(1)}{f(x)-f(0)}$. For which $f$ does the function $g$ satisfy $g(x) \geq g(1)$ for all $x \geq 0$? Comment: this is part of a larger project where the existence of a solution boils down to the condition $g(x) \geq g(1)$ above. I am looking for necessary and sufficient conditions on $f$ which guarantee this. Examples: $f(x)=\frac{1}{a} \left[1-e^{-a x} \right]$ implies that $g(x) = e^{-a}$ independently of $x$, and therefore the condition is satisfied. $f(x)=x^a$ for $a > 0$ implies that $g(x) = \frac{(x+1)^a-1}{x^a}$, so that $g(x) \geq 1$ if and only if $a \geq 1$.
Let $a\mathop{.}b \stackrel{\text{def}}{=} aba^{-1}$ denote conjugation by $a$. Suppose we define a matrix $M$, the "conjugation table", associated with our finite group $G = (X,*_{\small{G}})$ as follows. (I'm considering the cells of $M$ to be formal sums of group elements (with the product of monomials defined in terms of the group operation), but I'm only using that machinery to talk about equivalence up to relabelling.) $$ M_{ij} \stackrel{\text{def}}{=} x_i \mathop{.} x_j = x_i x_j x_i^{-1} $$ I'm also thinking of two matrices $M$ and $M'$ as equivalent if they only differ by a permutation / relabelling, so $$ M \sim M' \stackrel{\text{def}}{\iff} MP=M' \;\;\text{where $P$ is a permutation matrix} $$ or equivalently $$ M \sim M' \stackrel{\text{def}}{\iff} M_{ij} = M'_{\sigma i \sigma j} \;\;\text{where $\sigma$ is a permutation} $$ I can think of a case where an $M$ does not uniquely identify a group and a case where an $M$ does uniquely identify a group. I think a group is Abelian if and only if the following holds (the "if" direction is trivial): $$ x_i \mathop{.} x_j = x_j \;\;\forall i,j $$ So, if $G$ has four elements and is Abelian, then it could be the cyclic group on four elements $Z_4$ or the Klein four group $V_4$. $Z_4$ and $V_4$ are indistinguishable by their "conjugation tables". However, if $G$ has three elements, it can only be $Z_3$, since there's only one group of order 3. So, there are at least some circumstances under which a given $M$ is associated with exactly one group. Do we know what those circumstances are?
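A small Python sketch (my own illustration) of the four-element case mentioned above: for any abelian group the table is just $M_{ij}=x_j$, so $Z_4$ and $V_4$ produce literally the same table once their elements are labelled $0,1,2,3$.

```python
# Sketch: conjugation tables M[i][j] = x_i * x_j * x_i^{-1} for two groups of
# order 4, with elements labelled 0..3. For abelian groups every row equals
# the group's element list, so Z_4 and V_4 give identical tables.

def conj_table(elements, op, inv):
    return [[op(a, op(b, inv(a))) for b in elements] for a in elements]

# Z_4: integers mod 4 under addition
z4 = list(range(4))
z4_table = conj_table(z4, lambda a, b: (a + b) % 4, lambda a: (-a) % 4)

# V_4: length-2 bit strings under XOR (Klein four-group), labelled 0..3
v4 = list(range(4))
v4_table = conj_table(v4, lambda a, b: a ^ b, lambda a: a)

print(z4_table == v4_table)  # True: the two tables coincide entry for entry
```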
In this worked example we use source transformation to simplify a circuit. Source transformation is introduced here. Written by Willy McAllister. Review Source transformation between Thévenin and Norton forms: The resistor value is the same for the Thévenin and Norton forms, $\text R_\text T = \text R_\text N$. Convert Thévenin to Norton: set $\text I_\text N = \text V_\text T / \text R_\text T$. Convert Norton to Thévenin: set $\text V_\text T = \text I_\text N \, \text R_\text N$. Thévenin and Norton forms are equivalent because they have the same $i$-$v$ behavior from the viewpoint of the output port. Strategy Think about source transformation when the homework problem asks about a single voltage or current for one specific component. Everything besides that one component is a candidate for source transformation. Your goal when you do a source transformation is to increase the number of resistors in series or in parallel and create chances for simplification. The One Rule for source transformation is: don't include the component with the requested $i$ or $v$ in a source transformation. The strategy: Read the problem carefully. Identify the voltage or current being asked for. Scan your eyes over the circuit. Look for the familiar pattern of the two forms: a Thévenin form is a voltage source in series with a resistor; a Norton form is a current source in parallel with a resistor. Identify a candidate and do the source transformation. Simplify the circuit: merge resistors into their series or parallel equivalent. Redraw the circuit and look for another chance to transform sources. Solve for the requested variable in the simpler circuit. Example Find $\blueD i$. We could go after $\blueD i$ with methods we've learned before, like Node Voltage or Mesh Current. But this time we will do it with source transformation. What is asked for? We are asked to find $\blueD i$ in $\text R1$, the $470 \,\Omega$ resistor. Are there any Thévenin or Norton forms? Yes, one of each. The voltage source with $\text R1$ is a Thévenin form. The current source with $\text R2$ is a Norton form. The two little port circles split the forms, but they won't be there in your circuit problems. Which ones are candidates for source transformation? The Norton form on the right is a candidate for transformation. From the One Rule, the Thévenin form on the left is not a candidate for transformation. That's because we've been asked to find the current in $\text R1$. We must not disturb that component if we want to get the right answer. Anticipate: what good thing would happen if we did a source transformation? If we transform the Norton form we'll end up with the two resistors in series. That creates the opportunity to simplify. Do the source transformation and redraw the circuit. $\text R2 = $ ________ $\text V2 = $ ________ Transform the Norton form to the equivalent Thévenin form. $\text R2$ is the same for both: $\text R2 = 330\,\Omega$. The Thévenin voltage source is $\text V2 = \text I2 \cdot \text R2 = 2\,\text{mA} \cdot 330\,\Omega = 0.66\,\text V$. Is it a good idea to try another transformation? Not really. Current $i$ flows through $\text R1$. Anything else we try would involve touching $\text R1$, which would violate the One Rule. Simplify and find $i$. Source transformation gave us two resistors in series. The voltage across the series resistors is $\text V1 - \text V2$.
Ohm's Law gives us, $i = \dfrac{\text V1 - \text V2}{(\text R1 + \text R2)}$ $i = \dfrac{3.3 - 0.66}{(470 + 330)} = \dfrac{2.64}{800}$ $i = 3.3\,\text{mA}$ Simulation model Open this simulation model in another tab. The top circuit is the original example. The bottom circuit shows the Norton to Thévenin source transformation, but $\text R2\text b$ and $\text V2$ don't have the right values. You have to fix them! Click on DC in the menu bar to perform a DC analysis. Design challenge Double-click on $\text R2\text b$ and $\text V2$ and fill in the Thévenin equivalent values you calculated above. Then run another DC analysis. Is $i$ the same in both schematics? Here's the circuit with the correct values filled in. Things to notice $\text R1$ and $\text R1\text b$ have the same current and the same voltage. That is what it means for these two circuits to be equivalent from the viewpoint of the chosen port. Notice the current in $\text V2$ is not the same as current $\text I2$. That's okay, because our focus is on the current and voltage for $\text R1$ and $\text R1\text b$. Is source transformation easier or harder than analyzing by Node Voltage or Mesh Current methods? What do you think? We simplified the circuit down to something we could solve with one application of Ohm's Law. Compare that to how you would solve this circuit with Node Voltage or Mesh Current. In the next two articles we learn how to simplify any complex network of many resistors and sources down to a Thévenin equivalent or Norton equivalent.
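A short Python sketch of the same arithmetic (mine, not part of the original article): convert the Norton form to its Thévenin equivalent, then apply Ohm's Law to the resulting single loop. The component names mirror the example above.

```python
# Sketch of the worked example: Norton (I2, R2) -> Thévenin (V2, R2),
# then one application of Ohm's Law around the single remaining loop.

V1 = 3.3        # volts, Thévenin source on the left
R1 = 470.0      # ohms, resistor whose current we want
I2 = 2e-3       # amps, Norton current source on the right
R2 = 330.0      # ohms, Norton resistor

V2 = I2 * R2                    # Norton -> Thévenin: V_T = I_N * R_N
i = (V1 - V2) / (R1 + R2)       # Ohm's Law on the series loop

print(f"V2 = {V2:.2f} V")       # 0.66 V
print(f"i  = {i*1e3:.2f} mA")   # 3.30 mA
```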
Three Important Riemann Surfaces It occurred to me recently that my little blog is woefully short of posts about complex analysis. To remedy this situation, I've decided to use the next few posts to prove the following four facts: But before I do, let's have A Little Motivation Perhaps you're wondering, Why should I care what the automorphisms of these four spaces look like? (Er, besides the fact that I need it for tonight's homework!) Well, according to the uniformization theorem, every simply connected Riemann surface is conformally equivalent to either $\hat{\mathbb{C}}$, or $\mathbb{C}$, or $\Delta$ (and the latter is conformally equivalent to $\mathcal{U}$). But why is this nice? Because, from a topological viewpoint, it tells us that the universal cover of any Riemann surface will - up to conformal equivalence - either be $\hat{\mathbb{C}}$ or $\mathbb{C}$ or $\Delta$! Okay, so? Soooo, the universal cover is a very important object in mathematics. And the fact that there are only three options (complex-analytically speaking) for the universal cover of any given Riemann surface is quite nice! And why are universal covers so important?! In short, it's because they make life easier! You see, sometimes it can be difficult to prove a result on a topological space $X$ (say, a Riemann surface). So to get around the difficulty, you “lift” the mathematics “upstairs” to the universal cover where - presumably - you have more tools at your disposal. (In our case, we have lots of tools at our disposal because we know TONS about $\mathbb{C}$ and hence $\hat{\mathbb{C}}$ and $\Delta$ as well.) Then after solving your problem upstairs, you can then project the result back down onto X. And VOILA! Result proved. Crisis averted. This - the idea of lifting your (mathematical) problems to the universal cover - is just one technique that mathematicians use to gain information about a space $X$. Another well-known technique is to study functions on or to $X$. (For example, one may want to study functions $f$ from a circle $S^1$ into $X$. This results in a very powerful tool called the fundamental group.) In particular, one can study maps from $X$ to itself - and those are precisely the automorphisms of the space. So you see? We really do care about the automorphisms of $\hat{\mathbb{C}}$, $\mathbb{C}$, and $\Delta\cong\mathcal{U}$! (To my more knowledgeable readers: feel free to provide us with more motivation in the comments.) So far I've just rambled on a bit, but I hope this gives you at least a little motivation/background for the proofs in this series. Admittedly, the next few posts will be mostly computational and thus - in my opinion - not terribly exciting. But even so, I want to include them on the blog just in case a student or two may find them helpful. Next time, I'll start by proving that all automorphisms of the unit disc $\Delta$ can be expressed in the form $f(z)=\frac{az+b}{\bar bz+\bar a}$ where $a,b\in\mathbb{C}$ satisfy $|a|^2-|b|^2=1$. Until then!
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20). The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2015-06). We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09). We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09). Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (Springer, 2015-09). We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10). The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $L$ be the language $L=\{<,=,+,-,\cdot, 0,1\}$, with standard interpretations, and let $\mathcal{A}=\langle\mathbb{R}, <,=,+,-,\cdot,0,1\rangle$. Let $S\subseteq\mathbb{R}^n$. Show that if $S$ is definable, then the topological closure of $S$, given as $$\bar{S}:=\{a\in\mathbb{R}^n:\text{every open ball centered at $a$, contains a point of $S$}\},$$ is also definable. My attempt at a solution: Clearly, the set $\bar{S}$ is the set of all points in $S$ as well as the boundary of $S$. So really, what we want is the union between the set $S$ and the set $S'$, which I use to denote the set of all boundary points of $S$. We already have that $S$ is definable by some $L$-formula in the given structure, say $\phi^{\mathcal{A}}$. It remains to show that the boundary $S'$ is also definable. The boundary is the set of all points $a\in\mathbb{R}^n$ such that the following hold: $a\notin S$; $\forall\epsilon>0\exists s\in S$ such that $d(a,s)<\epsilon$, where $d$ is the distance function between two points. Let's call the set of elements satisfying the latter property $B$. Clearly the set we are seeking is $$\phi^{\mathcal{A}}\cup\bigl((\mathbb{R}^n\setminus\phi^{\mathcal{A}})\cap B\bigr).$$ It remains to show that $B$ is definable then. This is where I'm having some trouble. I can't see how to express this in my given language.
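One way to finish (my suggestion, not part of the original post): the language has $+$, $-$, $\cdot$ and $<$, so the Euclidean distance condition can be expressed with squares, never needing a square root or a distance symbol.

```latex
% Sketch (my own suggestion): if S is defined by \phi(v_1,\dots,v_n), then B (and hence
% the closure) is defined, without any distance function, by
\bar S \;=\; \Bigl\{\,\bar a \in \mathbb{R}^n \;:\;
   \forall \varepsilon\,\bigl(\, 0<\varepsilon \;\rightarrow\;
   \exists \bar s\,\bigl(\, \phi(\bar s) \;\wedge\;
   (a_1-s_1)\cdot(a_1-s_1)+\dots+(a_n-s_n)\cdot(a_n-s_n) < \varepsilon \,\bigr)\bigr)\,\Bigr\}
% Here \varepsilon plays the role of \epsilon^2: since \epsilon ranges over all positive
% reals, so does \epsilon^2, and d(a,s)<\epsilon iff \sum_i (a_i-s_i)^2 < \epsilon^2.
```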
I am looking for a proof that the derivative is the best linear approximation that makes this property "feel" fundamental. I have a proof below that somewhat gets close to what I am going for. But, first, I will explain my problems with it. Ideally, "best linear approximation" would mean that, for a given point and for any linear approximation at this point, the derivative gives an at-least-as-good approximation (ideally, strictly better except at the given point) than this linear approximation everywhere within some neighborhood of this point. But the proof below only shows that, for any linear function, there exists a neighborhood of the point such that, in this neighborhood, the maximum error of the derivative as an approximation is less than the maximum error of this linear approximation. Also, the following proof does not make this property feel fundamental. A large part of this is that the proof depends on the derivative being unique. Ideally, it would be nice to understand how this property makes the derivative unique. Here is the proof: Let $f$ be a function from $\mathbb R$ to $\mathbb R$ that is differentiable at a point $x_0$. Then, there exists a function $\phi$ such that $$ f(x) = f(x_0) + f'(x_0) \cdot (x-x_0) + \phi(x) $$ and $\lim_{x \to x_0} \frac{\phi(x)}{x-x_0} = 0$. Let $L$ be a real number with $L \neq f'(x_0)$, and let $\psi$ be the function $f(x) - (f(x_0) + L \cdot (x - x_0))$, so that $$ f(x) = f(x_0) + L \cdot (x - x_0) + \psi (x). $$ Since the derivative is unique and $L \neq f'(x_0)$, $\frac{\psi (x)}{x - x_0}$ does not tend to $0$; that is, there exists a positive real number $\epsilon$ such that, for any positive real number $\delta$, $$ \left\vert \frac{\psi (x)}{x - x_0} \right\vert \geq \epsilon $$ for some $x$ that is $\delta$-near $x_0$. Take $\delta$ to be such that $$ \left\vert \frac{\phi(x)}{x-x_0} \right\vert < \epsilon $$ for every $x$ that is $\delta$-near $x_0$. Thus, measuring error relative to $|x - x_0|$, the derivative is a better approximation than the line with slope $L$ on the ball of radius $\delta$ around $x_0$.
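A small numerical illustration (my own, with the obvious caveat that it checks one concrete function rather than proving anything): compare the worst error-relative-to-$|x-x_0|$ of the tangent line against another line on shrinking neighborhoods of the point.

```python
# Numerical illustration: for f(x) = sin(x) at x0 = 0.5, compare the worst
# |f(x) - linear(x)| / |x - x0| of the tangent line (slope f'(x0)) against a
# different line with slope L, on shrinking neighborhoods of x0.
import math

f = math.sin
x0 = 0.5
fp = math.cos(x0)      # f'(x0)
L = fp + 0.1           # some other slope (a hypothetical competitor)

def worst_relative_error(slope, delta, samples=1000):
    worst = 0.0
    for k in range(1, samples + 1):
        for sign in (-1, 1):
            x = x0 + sign * delta * k / samples
            err = abs(f(x) - (f(x0) + slope * (x - x0))) / abs(x - x0)
            worst = max(worst, err)
    return worst

for delta in (0.5, 0.1, 0.02):
    print(delta,
          round(worst_relative_error(fp, delta), 4),   # shrinks toward 0
          round(worst_relative_error(L, delta), 4))    # stays near |L - f'(x0)| = 0.1
```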
The papers you should read for this question are Vaidman, Lev. "Torque and force on a magnetic dipole." Am. J. Phys 58.10 (1990): 978-983 (Paywall-free version) and Haus, H. A., and P. Penfield. "Force on a current loop." Physics Letters A 26.9 (1968): 412-413. (References 8 and 9 in Vaidman's paper are also worth reading for more context.) The gist of the answer is this: if and only if there are no magnetic monopoles, magnetic dipoles and current loops are equivalent. As you have identified in your point (1), there is indeed a problem with the term magnetic dipole. Let's unpack the term starting with dipole. The simplest system in electromagnetism is the electric point charge at rest, which produces a spherically symmetric $1/r$ potential (it's easier to work with the potential here). This is called a monopole field, and a point charge is a monopole. Now consider two equal but opposite charges a distance $a$ from each other. This breaks rotational symmetry so the field will not be spherically symmetric, i.e., it is angle-dependent. Now expand the potential at a distance $r \gg a$ from the charges in powers of $1/r$. Since electromagnetism is linear, the $1/r$ terms are equal but opposite and cancel, but there remains a term of order $1/r^2$. Because the potential is a scalar, it must have the form $\mathbf p \cdot \mathbf r / r^3$ where $\mathbf p$ is a vector called the dipole moment. You can continue the expansion; the next term is $Q_{ij} x_i x_j / r^5$ where the tensor $Q_{ij}$ is the quadrupole moment, and so on. (The names reflect the minimum number of point charges you need to have a nonzero moment of that order: one for a monopole, two for a dipole, four for a quadrupole... All the details of this multipole expansion and what it's good for are in Jackson, of course.) So by a dipole field we mean a field whose potential looks like $\mathbf p \cdot \mathbf r /r^3$, and its source is an electric dipole. Of course the magnetic field has a vector potential rather than a scalar potential, so by a magnetic dipole field we should mean one where the vector potential is like $\mathbf A \sim \mathbf m \times \mathbf r / r^3$, and its source is a magnetic dipole. Now, one way to construct a magnetic dipole is by the obvious analogy: take two magnetic charges, i.e., two magnetic monopoles at a small distance. Well, that's easier said than done because no one has found any magnetic monopoles. We'll have to go with currents, then. Since the $1/r$ expansion is possible only if the size of the system is finite and charge is conserved, we'll have to use current loops, and conversely, any current loop will be a magnetic dipole. In this sense and this sense only are magnetic dipoles and current loops equivalent. If you were to find some magnetic monopoles and arrange them such that the dipole moment is $\mathbf m$, then the force on the system is $$\mathbf F_\text{MM} = (\mathbf m \cdot \nabla) \mathbf B - \frac{1}{c}\dot{\mathbf m} \times \mathbf E$$ where $\dot{\mathbf m}$ is the time derivative, whereas the force on a current loop is $$\mathbf F_\text{CL} = \nabla (\mathbf m \cdot \mathbf B) - \frac{1}{c}\frac{d}{dt} ( \mathbf m \times \mathbf E ).$$ If you expand $\mathbf F_\text{CL}$ and use Ampère's law with Maxwell's current, you see that these forces differ by $k\mathbf m \times \mathbf J$ where $\mathbf J$ is the current density and $k$ is a constant that depends on your unit system. Clearly these magnetic dipoles are not equivalent.
(However, several authors erroneously calculate the force using the magnetic charge model and think it must be true for current loops, which, as shown by Vaidman, is not the case.) There is one objection, and that is that we know about spin, the intrinsic magnetic moment of particles such as electrons. I don't think it is obvious whether spin should be treated as a magnetic charge dipole or as a current loop. For an elementary particle to be a current loop certainly seems strange, but it's not really less strange to think about it as a system of magnetic monopoles. One would think it's an experimental question, then, but Bohr and Pauli argued in the 20s that the spin of an individual electron is rather inaccessible to experiment (see Morrison, Margaret. "Spin: All is not what it seems." Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 38.3 (2007): 529-557 for an account). In any experiment with electrons, the Lorentz force would anyway dominate the magnetic dipole force, so one would have to turn to neutrons, which come with other difficulties. Vaidman discusses this briefly. However, theoretically, if the situation is analyzed correctly, that is, using the Foldy-Wouthuysen transformation (the original paper is Foldy, Leslie L., and Siegfried A. Wouthuysen. "On the Dirac theory of spin 1/2 particles and its non-relativistic limit." Physical Review 78.1 (1950): 29, which is a real gem and should be read by everyone studying quantum mechanics), it is found that the current-loop model is correct. You can square this with the apparent contradiction between "current loop" and "elementary particle" by realizing that this is a quantum mechanics thing, and in quantum mechanics you don't have to have point particles. In fact, you can't, by Heisenberg's principle. The electron is always a bit spread out, and in such a way as to produce a current-loop magnetic dipole moment.
Let's begin with the following experiment: we throw a six-sided die and see what the result is. Let's consider the following events $$A = \{ 2, 3 \}$$, $$B = \{ 1 , 2 \}$$, $$C = \{ 5 \}$$. We observe that if we extract $$2$$, then $$A$$ is satisfied as well as $$B$$. We say that the events are compatible; this means that they can happen simultaneously. On the contrary, events $$B$$ and $$C$$ are incompatible, since the two of them cannot happen simultaneously. To see when two events are compatible or not, we can observe that $$A$$ and $$B$$ have a common element: $$2$$, therefore they will be compatible. On the contrary, $$A$$ and $$C$$ do not have any common element, and therefore they are incompatible. We express this by saying that two events $$A$$ and $$B$$ are compatible if $$$A \cap B \neq \emptyset$$$ and, on the contrary, they are incompatible if $$$A \cap B = \emptyset$$$ If we have three or more events, we say that they are incompatible two by two (pairwise incompatible) if any two of them are incompatible (similarly, they are compatible two by two if any two of them are compatible). In our case, $$A, B$$ and $$C$$ are not incompatible two by two, since, although $$A$$ and $$C$$, as well as $$B$$ and $$C$$, are incompatible, $$A$$ and $$B$$ are compatible. How is this related to complementary events? In our experiment of throwing a die, we have our event $$A = \{ 2, 3 \}$$, so let's analyze what happens with its complementary event. In this case $$\overline{A}=\{1,4,5,6\}$$, since it contains all the elementary events that do not satisfy $$A$$. It turns out that $$A$$ and $$\overline{A}$$ are incompatible, since they cannot happen simultaneously. For any event $$A$$ we calculate its complementary event as $$\overline{A}=\Omega - A$$; then $$A \cap \overline{A}=\emptyset$$, that is to say, two complementary events will always be incompatible. Let's suppose that $$D=$$"to extract an even number"$$=\{ 2, 4, 6 \}$$. Its complementary event is $$\overline{D}=$$"to extract an odd number"$$=\{ 1, 3, 5 \}$$. Then, $$D\cup \overline{D} = $$"to extract an even or odd number"$$= \{ 1, 2, 3, 4, 5, 6 \} = \Omega$$, that is to say, it is a sure event. By the definition of a complementary event, this will always happen, since one of the two is always satisfied, and as they are incompatible, either one or the other is satisfied.
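A tiny Python sketch (my own) of the set test just described: two events are compatible exactly when their intersection is non-empty.

```python
# Events from the die-throwing example above, as Python sets.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 3}
B = {1, 2}
C = {5}

def compatible(E, F):
    # Compatible means E and F can happen simultaneously: E ∩ F ≠ ∅.
    return len(E & F) > 0

print(compatible(A, B))          # True: they share the outcome 2
print(compatible(B, C))          # False: no common outcome
A_bar = omega - A                # complementary event of A
print(compatible(A, A_bar))      # False: an event and its complement are always incompatible
print(A | A_bar == omega)        # True: together they form the sure event
```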
A cardinal $\kappa$ is $\eta$-extendible for an ordinal $\eta$ if and only if there is an elementary embedding $j:V_{\kappa+\eta}\to V_\theta$, with critical point $\kappa$, for some ordinal $\theta$. The cardinal $\kappa$ is extendible if and only if it is $\eta$-extendible for every ordinal $\eta$. Equivalently, for every ordinal $\alpha$ there is a nontrivial elementary embedding $j:V_{\kappa+\alpha+1}\to V_{j(\kappa)+j(\alpha)+1}$ with critical point $\kappa$. Alternative definition Given cardinals $\lambda$ and $\theta$, a cardinal $\kappa\leq\lambda,\theta$ is jointly $\lambda$-supercompact and $\theta$-superstrong if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $\mathrm{crit}(j)=\kappa$, $\lambda<j(\kappa)$, $M^\lambda\subseteq M$ and $V_{j(\theta)}\subseteq M$. That is, a single embedding witnesses both $\lambda$-supercompactness and (a strengthening of) superstrongness of $\kappa$. The least supercompact cardinal is never jointly $\lambda$-supercompact and $\theta$-superstrong for any $\lambda,\theta\geq\kappa$. A cardinal is extendible if and only if it is jointly supercompact and $\kappa$-superstrong, i.e. for every $\lambda\geq\kappa$ it is jointly $\lambda$-supercompact and $\kappa$-superstrong. [1] One can show that extendibility of $\kappa$ is in fact equivalent to "for all $\lambda,\theta\geq\kappa$, $\kappa$ is jointly $\lambda$-supercompact and $\theta$-superstrong". A similar characterization of $C^{(n)}$-extendible cardinals exists. The ultrahuge cardinals are defined in a way very similar to this, and one can (very informally) say that "ultrahuge cardinals are to superhuges what extendibles are to supercompacts". These cardinals are superhuge (and stationary limits of superhuges) and strictly below almost 2-huges in consistency strength.
To be expanded: Extendibility Laver Functions. Relation to Other Large Cardinals Extendible cardinals are related to various kinds of measurable cardinals. Supercompactness Extendibility is connected in strength with supercompactness. Every extendible cardinal is supercompact, since from the embeddings $j:V_\lambda\to V_\theta$ we may extract the induced supercompactness measures $X\in\mu\iff j''\delta\in j(X)$ for $X\subset \mathcal{P}_\kappa(\delta)$, provided that $j(\kappa)\gt\delta$ and $\mathcal{P}_\kappa(\delta)\subset V_\lambda$, which one can arrange. On the other hand, if $\kappa$ is $\theta$-supercompact, witnessed by $j:V\to M$, then $\kappa$ is $\delta$-extendible inside $M$, provided $\beth_\delta\leq\theta$, since the restricted elementary embedding $j\upharpoonright V_\delta:V_\delta\to j(V_\delta)=M_{j(\delta)}$ has size at most $\theta$ and is therefore in $M$, witnessing $\delta$-extendibility there. Although extendibility itself is stronger and larger than supercompactness, $\eta$-supercompactness is not necessarily too much weaker than $\eta$-extendibility. For example, if a cardinal $\kappa$ is $\beth_{\eta}(\kappa)$-supercompact (in this case, the same as $\beth_{\kappa+\eta}$-supercompact) for some $\eta<\kappa$, then there is a normal measure $U$ over $\kappa$ such that $\{\lambda<\kappa:\lambda\text{ is }\eta\text{-extendible}\}\in U$. Strong Compactness Interestingly, extendibility is also related to strong compactness. A cardinal $\kappa$ is strongly compact iff the infinitary language $\mathcal{L}_{\kappa,\kappa}$ has the $\kappa$-compactness property. A cardinal $\kappa$ is extendible iff the infinitary language $\mathcal{L}_{\kappa,\kappa}^n$ (the infinitary language but with $(n+1)$-th order logic) has the $\kappa$-compactness property for every natural number $n$. [2] Given a logic $\mathcal{L}$, the minimum cardinal $\kappa$ such that $\mathcal{L}$ satisfies the $\kappa$-compactness theorem is called the strong compactness cardinal of $\mathcal{L}$. The strong compactness cardinal of $\omega$-th order finitary logic (that is, the union of all $\mathcal{L}_{\omega,\omega}^n$ for natural $n$) is the least extendible cardinal. Variants $C^{(n)}$-extendible cardinals (Information in this subsection from [3] unless noted otherwise) A cardinal $κ$ is called $C^{(n)}$-extendible if for all $λ > κ$ it is $λ$-$C^{(n)}$-extendible, i.e. if there is an ordinal $µ$ and an elementary embedding $j : V_λ → V_µ$, with $\mathrm{crit(j)} = κ$, $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$. For $λ ∈ C^{(n)}$, a cardinal $κ$ is $λ$-$C^{(n)+}$-extendible iff it is $λ$-$C^{(n)}$-extendible, witnessed by some $j : V_λ → V_µ$ which (besides $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$) satisfies that $µ ∈ C^{(n)}$. $κ$ is $C^{(n)+}$-extendible iff it is $λ$-$C^{(n)+}$-extendible for every $λ > κ$ such that $λ ∈ C^{(n)}$. Properties: The notions of $C^{(n)}$-extendible cardinals and $C^{(n)+}$-extendible cardinals are equivalent.[4] Every extendible cardinal is $C^{(1)}$-extendible. If $κ$ is $C^{(n)}$-extendible, then $κ ∈ C^{(n+2)}$. For every $n ≥ 1$, if $κ$ is $C^{(n)}$-extendible and $κ+1$-$C^{(n+1)}$-extendible, then the set of $C^{(n)}$-extendible cardinals is unbounded below $κ$. Hence, the first $C^{(n)}$-extendible cardinal $κ$, if it exists, is not $κ+1$-$C^{(n+1)}$-extendible. In particular, the first extendible cardinal $κ$ is not $κ+1$-$C^{(2)}$-extendible. For every $n$, if there exists a $C^{(n+2)}$-extendible cardinal, then there exists a proper class of $C^{(n)}$-extendible cardinals.
The existence of a $C^{(n+1)}$-extendible cardinal $κ$ (for $n ≥ 1$) does not imply the existence of a $C^{(n)}$-extendible cardinal greater than $κ$. For if $λ$ is such a cardinal, then $V_λ \models$ “$κ$ is $C^{(n+1)}$-extendible”. If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$. For $n ≥ 1$, the following are equivalent ($VP$ — Vopěnka's principle): $VP(Π_{n+1})$; $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$; there exists a $C^{(n)}$-extendible cardinal. “For every $n$ there exists a $C^{(n)}$-extendible cardinal.” is equivalent to the full Vopěnka's principle. Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-extendible (inter alia) in $V_δ$, for all $n$ and $m$. $(\Sigma_n,\eta)$-extendible cardinals There are some variants of extendible cardinals because of the interesting jump in consistency strength from $0$-extendible cardinals to $1$-extendibles. These variants specify the elementarity of the embedding. A cardinal $\kappa$ is $(\Sigma_n,\eta)$-extendible if there is a $\Sigma_n$-elementary embedding $j:V_{\kappa+\eta}\to V_\theta$ with critical point $\kappa$, for some ordinal $\theta$. These cardinals were introduced by Bagaria, Hamkins, Tsaprounis and Usuba [5]. $\Sigma_n$-extendible cardinals The special case of $\eta=0$ leads to a much weaker notion. Specifically, a cardinal $\kappa$ is $\Sigma_n$-extendible if it is $(\Sigma_n,0)$-extendible, or more simply, if $V_\kappa\prec_{\Sigma_n} V_\theta$ for some ordinal $\theta$. Note that this does not necessarily imply that $\kappa$ is inaccessible, and indeed the existence of $\Sigma_n$-extendible cardinals is provable in ZFC via the reflection theorem. For example, every $\Sigma_n$-correct cardinal is $\Sigma_n$-extendible, since from $V_\kappa\prec_{\Sigma_n} V$ and $V_\lambda\prec_{\Sigma_n} V$, where $\kappa\lt\lambda$, it follows that $V_\kappa\prec_{\Sigma_n} V_\lambda$. So in fact there is a closed unbounded class of $\Sigma_n$-extendible cardinals. Similarly, every Mahlo cardinal $\kappa$ has a stationary set of inaccessible $\Sigma_n$-extendible cardinals $\gamma<\kappa$. $\Sigma_3$-extendible cardinals cannot be Laver indestructible. Therefore $\Sigma_3$-correct, $\Sigma_3$-reflecting, $0$-extendible, (pseudo-)uplifting, weakly superstrong, strongly uplifting, superstrong, extendible, (almost) huge or rank-into-rank cardinals also cannot.[5] $A$-extendible cardinals (this subsection from [6]) Definitions: A cardinal $κ$ is $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$ (such that $λ < j(κ)$ — removing this requirement does not change which cardinals are extendible). $λ$ is called the degree of $A$-extendibility of an embedding. A cardinal $κ$ is $(Σ_n)$-extendible iff it is $A$-extendible, where $A$ is the $Σ_n$-truth predicate. (This is a different notion than that of $\Sigma_n$-extendible cardinals.)[4] Results: The Vopěnka principle is equivalent over GBC to both of the following statements: For every class $A$, there is an $A$-extendible cardinal.
For every class $A$, there is a stationary proper class of $A$-extendible cardinals. Virtually extendible cardinals Definitions: A cardinal $κ$ is virtually extendible iff for every $α > κ$, in a set-forcing extension there is an elementary embedding $j : V_α → V_β$ with $\mathrm{crit(j)} = κ$ and $j(κ) > α$. A cardinal $κ$ is (weakly) virtually $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that in a set-forcing extension, there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$. For (strongly) virtually $A$-extendible $κ$, we additionally require $λ < j(κ)$.[4] A cardinal $κ$ is $n$-remarkable, for $n > 0$, iff for every $η > κ$ in $C^{(n)}$, there is $α<κ$ also in $C^{(n)}$ such that in $V^{Coll(ω, < κ)}$ there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$. A cardinal is completely remarkable iff it is $n$-remarkable for all $n > 0$.[8] A cardinal $κ$ is weakly or strongly virtually $(Σ_n)$-extendible iff it is respectively weakly or strongly virtually $A$-extendible, where $A$ is the $Σ_n$-truth predicate.[4] Equivalence and hierarchy: $1$-remarkability is equivalent to remarkability. A cardinal is virtually $C^{(n)}$-extendible iff it is $n+1$-remarkable (virtually extendible cardinals are virtually $C^{(1)}$-extendible).[8] Weakly and strongly virtually $A$-extendible cardinals are non-equivalent notions, although in the non-virtual context the weak and strong forms of $A$-extendibility coincide.[4] It is relatively consistent with GBC that every class $A$ admits a (weakly) virtually $A$-extendible cardinal (and so the generic Vopěnka principle holds), but no class $A$ admits a (strongly) virtually $A$-extendible cardinal.[4] Every $n$-remarkable cardinal is in $C^{(n+1)}$.[8] Every $n+1$-remarkable cardinal is a limit of $n$-remarkable cardinals.[8] Upper limits for strength: If $κ$ is virtually Shelah for supercompactness or 2-iterable, then $V_κ$ is a model of proper class many virtually $C^{(n)}$-extendible cardinals for every $n < ω$.[7] If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals.[7] Completely remarkable cardinals can exist in $L$.[8] For a $2$-iterable cardinal $κ$, $V_κ$ is a model of proper class many completely remarkable cardinals.[8] If $0^\#$ exists, then every Silver indiscernible is, in $L$, completely remarkable and virtually $A$-extendible for every definable class $A$.[4, 8] Lower limit for strength: Virtually extendible cardinals are remarkable limits of remarkable cardinals and 1-iterable limits of 1-iterable cardinals.[7] The following are equiconsistent: $gVP(Π_n)$; $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$; there is an $n$-remarkable cardinal. The following are equiconsistent: $gVP(\mathbf{Π_n})$; $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$; there is a proper class of $n$-remarkable cardinals. Unless there is a transitive model of ZFC with a proper class of $n$-remarkable cardinals: if for some cardinal $κ$, $gVP(κ, \mathbf{Σ_{n+1}})$ holds, then there is an $n$-remarkable cardinal; if $gVP(Π_n)$ holds, then there is an $n$-remarkable cardinal; if $gVP(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals. $κ$ is the least cardinal for which $gVP^∗(κ, \mathbf{Σ_{n+1}})$ holds $\iff$ $κ$ is the least $n$-remarkable cardinal. If $gVP^∗(Π_n)$ holds, then there is an $n$-remarkable cardinal.
If $gVP^∗(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals. If there is a proper class of $n$-remarkable cardinals, then $gVP(Σ_{n+1})$ holds.[4] If $gVP(Σ_{n+1})$ holds, then either there is a proper class of $n$-remarkable cardinals or there is a proper class of virtually rank-into-rank cardinals.[4] The generic Vopěnka principle holds iff for every class $A$ there is a proper class of (weakly) virtually $A$-extendible cardinals.[4] The generic Vopěnka scheme is equivalent over ZFC to the scheme asserting of every definable class $A$ that there is a proper class of weakly virtually $A$-extendible cardinals.[4] Open problems: Must there be an $n$-remarkable cardinal if $gVP(κ, \mathbf{Σ_{n+1}})$ holds for some $κ$? If $gVP(Π_n)$ holds? In set-theoretic geology This article is a stub. Please help us to improve Cantor's Attic by adding information. References Usuba, Toshimichi. "Extendible cardinals and the mantle." Archive for Mathematical Logic 58(1-2):71-75, 2019. Kanamori, Akihiro. The Higher Infinite. Second edition, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings; paperback reprint of the 2003 edition.) Bagaria, Joan. "$C^{(n)}$-cardinals." Archive for Mathematical Logic 51(3-4):213-240, 2012. Gitman, Victoria and Hamkins, Joel David. "A model of the generic Vopěnka principle in which the ordinals are not Mahlo." 2018. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; and Usuba, Toshimichi. "Superstrong and other large cardinals are never Laver indestructible." Archive for Mathematical Logic 55(1-2):19-35, 2013. Hamkins, Joel David. "The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme." 2016. Gitman, Victoria and Schindler, Ralf. "Virtual large cardinals." Bagaria, Joan; Gitman, Victoria; and Schindler, Ralf. "Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom." Archive for Mathematical Logic 56(1-2):1-20, 2017.
Everywhere I search, in every book, I find that linear momentum is defined as the amount of motion ("quantity of motion") contained in a body, or simply as mass $\times$ velocity. So, what is an appropriate definition of linear momentum? What did Newton think when he discovered it? He certainly did not think of it as the amount of speed in a body. Newton thought of momentum as "quantity of motion" - as we can see in the translated version of the 'Principia'. In particular, he defined momentum in the following words: The quantity of motion is the measure of the same, arising from the velocity and quantity of matter conjointly. So yes, that is the definition of momentum. The question you actually have in mind, I think, is why we defined momentum the way we did. Well, the answer can be thought of like this. In physics, we try to find combinations of directly measured quantities of the objects whose appropriate summation remains constant in time - no matter what process the system is going through. There exist different such groups of terms with the property that the sum of the terms in a group remains constant in time, although the individual terms do not, in general. We address each term of a particular group by one name - i.e. by the name of that group. Also we assign further names to individual terms of the group to further identify them. For example, there exists a group of terms $\frac12 mv^2 + mgh + \dots$ which remains constant in time. We call each term of it an energy and then further associate a different identification with each term, like Kinetic Energy, Gravitational Potential Energy, etc. A similar group of terms is known and we call each term of it some momentum. Classically we had just the sum of the $m\mathbf v$ terms in this group. But later on, we found that there were some other terms in this group as well, and all of them together remain constant in time - for example the momentum of electromagnetic fields. So we don't know a priori how and what we should define as momentum. But we observe some traits in nature, and to keep track of these patterns we define some things and associate some intuition with them (particularly while naming - like Newton naming it "quantity of motion"). Newton (if I recall correctly) typically referred to the concept of inertia, which was an object's resistance to changes in velocity when subjected to external forces. You are right about him not thinking about it as just the speed of the object, because this is where the mass term comes in. Many people think of Newton's second law as being written as $F = ma$, and while this is true, I think that Newton liked to write it in terms of momentum as $$F = \frac{dp}{dt}$$ Obviously not with this notation, since that came later. So, this can be thought of in the following way: the change in an object's momentum is equal to the product of the force applied and the time interval over which the force is applied. Or, $$\Delta p = F\Delta t$$ Starting from rest, this will give you the total momentum of the object, $p$. Considering inertia, objects with a higher mass will exhibit more resistance to velocity change. If the object doesn't change mass, then $\Delta p = m\Delta v$. Starting from rest, where $v_0 = 0$ and $p_0 = 0$, you end up with $p=mv$ for the total linear momentum of an object. What you're looking for is an intuitive explanation or a way to visualize momentum. You can think of momentum as the quantity/amount of motion, or "how much would I not want to be in the path of this body."
I'm going to try and provide some intuition through a few examples: A car of mass 1000 kg moving at 5 m/s would have the same "quantity/amount of motion" as a truck of mass 5000 kg moving at 1 m/s or a bicycle of mass 20 kg moving at 250 m/s. If a rocket of mass 500 kg moving at 250 m/s uses up 100 kg of stored fuel to accelerate to 312.5 m/s, then although the rocket is moving faster, its mass has decreased by 100 kg and hence the "quantity of motion" has remained the same - or "how much would I not want to be in the path of this rocket" has remained the same. Therefore, Newton defined the force as something that changes the "amount of motion" that a body has in a certain amount of time, or: $F = \frac{dp}{dt}$ where $p$ represents momentum and $t$ represents time. I suppose Newton may have devised the momentum equation to numerically express how objects of equal speeds (but different densities) would create different effects upon impact, and perhaps how much energy would be needed to move such objects to a given speed. Consider the following: a wood ball (25 g, 33.5 cc) hurled at a sheet-metal target at an average speed of 9 m/s has p = 225 g·m/s; a lead sphere (380 g, 33.5 cc) hurled at a sheet-metal target at an average speed of 9 m/s has p = 3420 g·m/s. Most people would probably intuitively know which of those objects would cause the greatest impact. We can calculate that the lead sphere needs over 15 times the momentum (force) to get to the same average velocity as the wood ball - which also means more force or energy is needed to move the lead to that average speed. It also means its impact force is greater. First of all, you cannot separate linear from angular momentum. They work together just like linear and angular velocities do (or forces and torques). I am going to answer your question from the perspective of geometry. The quantity of momentum is not so important as the geometrical construction that momentum implies. Let's see if you can follow: All possible movements of a rigid body can be idealized as a rotation about some axis (finite or at infinity). This axis is actually a 3D line perfectly defined from the 6 motion components of a rigid body. The motion causes the rigid body to have linear and angular momentum at the center of mass. This combination is actually a manifestation of an instantaneous linear momentum along another 3D line. This line is called the axis of percussion of the rigid body for the said rotation (also known as the sweet spot). This axis is perpendicular to the rotation axis and at a distance $r = \frac{\rho^2}{c}$ away from the center of mass. The pivot-to-center-of-mass distance is $c$ and the radius of gyration about the rotation is $\rho$. The geometrical interpretation of momentum is the axis along which, if you apply an equal and opposite impulse, the body is going to instantaneously stop rotating. Read this answer for more details on how to find the axis of a pair of free and line vectors like linear and angular momenta. Essentially, if you have a moving body with linear momentum $\boldsymbol{p} = m \boldsymbol{v}_{cm}$ and angular momentum at the center of mass $\boldsymbol{L}_{cm}=\mathrm{I}_{cm} \boldsymbol{\omega}$, the direction of the percussion axis is $$\boldsymbol{n} = \frac{\boldsymbol{p}}{\|\boldsymbol{p}\|}$$ The location of the percussion axis relative to the center of mass is $$\boldsymbol{r} = \frac{\boldsymbol{p} \times \boldsymbol{L}_{cm} }{\| \boldsymbol{p} \|^2} $$ NOTE: $\times$ is the vector cross product.
A simple example is a thin rod of length $\ell$ (like a baseball bat) and mass $m$ rotating about one end. The center of mass is half way along the rod, at $c=\frac{\ell}{2}$. Place the rod horizontally (along the x-axis) and rotate it about the z-axis with $\boldsymbol{\omega} = (0,0,\dot{\theta})$. The linear momentum of this configuration is $\boldsymbol{p}=(0,m\, v_{cm},0) = (0,m \frac{\ell}{2} \dot{\theta},0)$. The angular momentum about the center of mass is $\boldsymbol{L}_{cm} = I_{cm} \boldsymbol{\omega} = (0,0,\frac{\ell^2}{12} m \dot{\theta})$, since the mass moment of inertia of a thin rod about its center is $m \frac{\ell^2}{12}$. Note that the radius of gyration about the center is $\rho =\sqrt{\frac{I_{cm}}{m}}= \frac{1}{\sqrt{12}} \ell$. The direction of the axis of percussion is along the y-axis: $$\boldsymbol{n} = \frac{(0,m \frac{\ell}{2} \dot{\theta},0)}{\|(0,m \frac{\ell}{2} \dot{\theta},0)\|} = (0,1,0)$$ The position of the percussion axis from the center is along the rod (x-axis): $$\boldsymbol{r} = \frac{(0,m \frac{\ell}{2} \dot{\theta},0) \times (0,0,\frac{\ell^2}{12} m \dot{\theta})}{\|(0,m \frac{\ell}{2} \dot{\theta},0)\|^2} = \frac{(\frac{m^2 \ell^3 \dot{\theta}^2}{24},0,0)}{\frac{m^2 \ell^2 \dot{\theta}^2}{4}} = (\frac{\ell}{6},0,0)$$ The distance of the percussion axis (CoP) from the pivot is then $\frac{\ell}{2} + \frac{\ell}{6} = \frac{2}{3} \ell$ (the same value as in slide 12 of http://www.iitg.ac.in/asil/Lecture-12.pdf). You can compare the distance to the CoP with the expression $$\frac{\rho^2}{c} = \frac{ \frac{m}{12} \ell^2}{\frac{\ell}{2}} = \frac{\ell}{6}$$ So you can arrive at the center of percussion either from the shorthand expression $r=\frac{\rho^2}{c}$ or by the combination of the momentum vectors $$\boxed{ \boldsymbol{r} = \frac{\boldsymbol{p} \times \boldsymbol{L}_{cm} }{\| \boldsymbol{p} \|^2} }$$ For me, momentum always describes the axis of percussion.
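A short numerical check (my own) of the rod example: compute $\boldsymbol r = \boldsymbol p \times \boldsymbol L_{cm} / \|\boldsymbol p\|^2$ for arbitrary values of $m$, $\ell$, $\dot\theta$ and confirm that the offset comes out to $\ell/6$ from the center of mass.

```python
# Numerical check of the thin-rod example: the center of percussion sits at
# p x L_cm / |p|^2 = (l/6, 0, 0) from the center of mass, i.e. 2l/3 from the pivot.
import numpy as np

m, l, theta_dot = 2.0, 1.5, 3.0        # arbitrary mass, length, angular rate

p = np.array([0.0, m * (l / 2) * theta_dot, 0.0])          # linear momentum
L_cm = np.array([0.0, 0.0, (m * l**2 / 12) * theta_dot])   # angular momentum about CoM

r = np.cross(p, L_cm) / np.dot(p, p)

print(r)              # [l/6, 0, 0]
print(l / 2 + r[0])   # 2*l/3, the distance from the pivot end
```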
Also known as "Nate slowly deciphers ESL to conceptual understanding/plainer language", part two (see part one). Help me understand this (bullets added): The term $\hat{β}_0$ is the intercept, also known as the bias in machine learning. Often it is convenient to include the constant variable 1 in $X$, include $\hat{β}_0$ in the vector of coefficients $\hat{β}$, and then write the linear model in vector form as an inner product $\hat{Y}= X^\top\hat{β}$ where $X^\top$ denotes vector or matrix transpose ($X$ being a column vector). Here we are modeling a single output, so $\hat{Y}$ is a scalar; in general $\hat{Y}$ can be a $K$-vector, in which case $β$ would be a $p \times K$ matrix of coefficients. In the $(p + 1)$-dimensional input-output space, $(X, \hat{Y} )$ represents a hyperplane. If the constant is included in $X$, then the hyperplane includes the origin and is a subspace; if not, it is an affine set cutting the $Y$-axis at the point $(0, \hat{β}_0)$. From now on we assume that the intercept is included in $\hat{β}$. My questions: $K$ in the context of a $K$-vector and $p \times K$ matrix of coefficients -- that value is obviously different than $p$; is it different than $N$ -- the number of observations? What does the notation $(X, \hat{Y} )$ mean? How do they mean "hyperplane"? For example, in the 2-D example? How do they mean "subspace"? For example, in the 2-D example? How do they mean "affine set"? For example, in the 2-D example? When they "assume the intercept is included" -- which did they choose, option 1 or option 2? How do they mean "$\hat{β}_0$ is the bias" -- why is that word used? Is it related to bias vs variance? Is there indeed a typo as suggested in the quoted part here; should it be (... in which case $\mathbf{\hat{β}}$ would be a $p \times K$ matrix of coefficients...)? In other words, when can $β$ take its hat off? Guesses at answers: Yes, $p$ is the number of columns/variables (i.e. age, weight) and $N$ is the number of rows/observations (i.e. Andy, Olly -- though this linear model operates on one row at a time), so $K$ is yet another (orthogonal?) axis (i.e. Andy's age and weight at age 3, age 10, age 20)? It looks like a Cartesian coordinate, but it's generic (uses $X$ and $\hat{Y}$). In 2-D, (1,2) represents a point on a 2-D graph. So does (x,y) represent a set of points, i.e. a line? I have trouble reconciling a scalar $x$ with a vector $X$. Hyperplane: they mean it cuts the (..."space"?) into two parts. In 2-D, a line cuts (... ${\rm I\!R^2}$ ?) space on a graph into two separate portions. Indeed, that's the whole point of the "linear model" binary classification (two parts); it could be called a "hyperplane model" for higher dimensions. Subspace: not sure. How can I think of "If the constant is included in $X$", in 2-D space, where $X$ is just a scalar? Do they mean the hyperplane coincides with a plane formed by the intersection of $p$ dimensions? In 2-D, like a line x=0 or y=0? Affine set: they mean it does not have an origin? Because the "intercept" has moved it away from the origin, like the "$b$" in $y=mx+b$? I am naive about "affine". They've gone with option 1, where the constant ($1$) is included in $X$ and the intercept ($\hat{β}_0$) is included in $\hat{β}$. "Bias" and "weight vector" are two relevant terms here explained at the link... I am trying to understand why they would use the word "bias"; in 2-D space the "bias" is the y-intercept...
is it because when "x" is 0, we know we can't actually estimate "y", but a bias suggests there is some non-zero value for "y"? It is different than bias vs variance (...?) Yes, it should be $\mathbf{\hat{β}}$ -- we only remove the hat when we start "viewing this as a function" (i.e. $f(X) = X^\topβ$ )
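Since the quoted passage is about absorbing the intercept into the coefficient vector, a tiny sketch may help; this is my own illustration (not from ESL), with made-up numbers, showing that prepending a constant 1 to $X$ lets the inner product $\hat{Y}=X^\top\hat{\beta}$ reproduce $\hat{\beta}_0 + \sum_j x_j\hat{\beta}_j$:

import numpy as np

# Hypothetical coefficients: intercept (bias) followed by two slopes
beta_hat = np.array([0.5, 2.0, -1.0])   # [beta_0, beta_1, beta_2]

x = np.array([3.0, 4.0])                 # one observation with p = 2 inputs
x_aug = np.concatenate(([1.0], x))       # prepend the constant 1, making X a (p+1)-vector

y_hat_inner = x_aug @ beta_hat                      # X^T beta with the intercept absorbed
y_hat_explicit = beta_hat[0] + x @ beta_hat[1:]     # beta_0 + sum_j x_j * beta_j
print(y_hat_inner, y_hat_explicit)                  # both 2.5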
Here's my take on it: those conditions are wrong when it comes to deciding whether the feedback is positive or negative. 1-) "β being negative causes a 180 degree phase shift so there is positive feedback." For this block diagram, if \$A\$ is considered the plant, the feedback is negative if \$\beta>0\$ (this doesn't mean it will stabilize the plant/loop or anything). If the plant is something after \$A\$ (a unity-gain block?), then the feedback is negative for this block diagram if \$A\beta>0\$. 2-) "Condition for positive feedback: |1 + β*A| < 1." β being negative alone is not enough for positive feedback (or saturation). For the loop to oscillate, you need to meet the Barkhausen stability criterion, where \$|A\beta|\ = 1\$ and also (in this case due to the negative sum junction) you must have either \$A < 0\$ or \$\beta < 0\$. Also, the book states three things on this page that are particularly confusing. In the Eq. 9.3 the feedback voltage \$V_f\$ is presented to the input circuit in subtractive fashion. The denominator \$|1+A\beta| > 1\$ and the feedback is negative. Equation 9.5 then shows that \$|A'|<|A|\$ and the gain of the system with feedback is less than the internal amplifier gain. Thus gain is sacrificed with negative feedback. I read the paragraph as: if \$|1+A\beta| > 1\$ and the feedback is negative (\$\beta > 0\$, since the picture shows it just multiplies the output and goes to a negative sum junction), then \$|A'|<|A|\$. Or logically, $$|1+A\beta| > 1,~ (\beta>0)^* \rightarrow |\frac{A}{1+A\beta}| =|A'|<|A|.$$ *Notice this is useless to the proof; removing this premise doesn't break the conclusion. It is probably mentioned because it is common to have negative feedback systems where \$\beta>0, ~A>0\$. If \$A\$ is negative, as is usual in C-E amplifiers, we reverse \$V_f\$ from the \$\beta\$ network, resulting in a positive \$A\beta\$ term in Eq. 9.5 and so retain the negative feedback. I suppose it means: if \$A<0\$ we need \$\beta < 0~\$ ("... we reverse Vf from the β network") to keep \$|A'|<|A|\$. Or logically, $$A<0,~ \beta<0 \rightarrow |1+A\beta| > 1 \rightarrow |\frac{A}{1+A\beta}|=|A'|<|A|.$$ Finally, If the phase of \$V_f\$ reverses, as may happen with nonresistive \$\beta\$ networks, the feedback voltage \$V_f\$ becomes additive to \$V_s\$ in Eq. 9.3 and the denominator of Eq. 9.5 shows that \$|1+A\beta|<1\$ and the feedback is positive. The closed-loop gain is \$|A'|>|A|\$ and the gain of the feedback system is greater than the internal gain. Having \$\beta<0\$ and \$|1+A\beta|<1\$, we have that \$|A'|>|A|\$. Also, logically, $$ (\beta<0)^*,~ |1+A\beta|<1 \rightarrow |\frac{A}{1+A\beta}|=|A'|>|A|.$$ *Also useless to the conclusion, but probably used to hint that \$A>0\$.
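As a quick numerical illustration of the closed-loop gain expression discussed above (my own sketch, not from the book; the sample values of A and β are arbitrary), the snippet below evaluates \$A' = A/(1+A\beta)\$ for a few sign combinations and checks whether \$|1+A\beta|>1\$ (gain reduced, negative feedback) or \$|1+A\beta|<1\$ (gain increased, positive feedback):

# Closed-loop gain A' = A / (1 + A*beta) for the subtractive summing junction
def closed_loop(A, beta):
    return A / (1 + A * beta)

for A, beta in [(100, 0.01), (-100, -0.01), (100, -0.005), (-100, 0.005)]:
    denom = abs(1 + A * beta)
    Ap = closed_loop(A, beta)
    kind = "negative feedback (|A'| < |A|)" if denom > 1 else "positive feedback (|A'| > |A|)"
    print(f"A={A:5}, beta={beta:+.3f}, |1+A*beta|={denom:.2f}, A'={Ap:8.2f}  -> {kind}")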
1. Measurements of the branching fractions for $D^+\to K^0_SK^0_SK^+$, $K^0_SK^0_S\pi^+$ and $D^0\to K^0_SK^0_S$, $K^0_SK^0_SK^0_S$ Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 02/2017, Volume 765, pp. 231 - 237 Journal Article
2. Solubility of N-phenylanthranilic acid in nine organic solvents from T=(283.15 to 318.15)K: Determination and modelling The Journal of Chemical Thermodynamics, ISSN 0021-9614, 12/2016, Volume 103, pp. 218 - 227 The solubility of N-phenylanthranilic acid in nine pure organic solvents including methanol, ethanol, acetone, acetonitrile, ethyl acetate, n-propanol,... N-Phenylanthranilic acid | Thermodynamic model | Mixing property | Solubility | Thermal properties | Thermodynamics | Analysis | Toluene | Nitriles | Esters | Models | Acetone Journal Article
3. Cross section measurements of $e^+e^-\to K^+K^-K^+K^-$ and $\phi K^+K^-$ at center-of-mass energies from 2.10 to 3.08 GeV Physical Review D, ISSN 2470-0010, 08/2019, Volume 100, Issue 3, p. 1 Journal Article
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 08/2018, Volume 783, Issue C, pp. 200 - 206 Journal Article
5. Thermodynamic study of solubility for 2-amino-4-chloro-6-methoxypyrimidine in twelve organic solvents at temperatures from 273.15 K to 323.15 K Journal of Chemical Thermodynamics, ISSN 0021-9614, 02/2017, Volume 105, pp. 187 - 197 The solubility of 2-amino-4-chloro-6-methoxypyrimidine in methanol, ethanol, chloroform, toluene, ethyl acetate, acetonitrile, n-propanol, acetone,... 2-Amino-4-chloro-6-methoxypyrimidine | Mixing property | Thermodynamic model | Solubility | THERMODYNAMICS | PURE SOLVENTS | INFINITE-DILUTION | EQUILIBRIUM | PYRIMIDINES | CHEMISTRY, PHYSICAL | PLUS METHANOL | WATER Journal Article
6. Birnessite Nanosheet Arrays with High K Content as a High-Capacity and Ultrastable Cathode for K-Ion Batteries Advanced Materials, ISSN 0935-9648, 06/2019, Volume 31, Issue 24, pp. e1900060 - n/a Potassium-ion batteries (PIBs) are one of the emerging energy-storage technologies due to the low cost of potassium and theoretically high energy density.... 
cathode | potassium-ion battery | ultrahigh stability | birnessite | high K content | STORAGE | PHYSICS, CONDENSED MATTER | PHYSICS, APPLIED | OXIDE | ELECTRODE MATERIAL | MATERIALS SCIENCE, MULTIDISCIPLINARY | CHEMISTRY, PHYSICAL | NANOSCIENCE & NANOTECHNOLOGY | TOTAL-ENERGY CALCULATIONS | CHEMISTRY, MULTIDISCIPLINARY | Batteries | Interlayers | Structural stability | Lithium | Nanostructure | Intercalation | Storage batteries | Rechargeable batteries | Lithium-ion batteries | Electrode materials | Flux density | Anode effect | Potassium | Cathodes | Energy storage Journal Article
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 02/2005, Volume 607, Issue 3-4, pp. 243 - 253 A partial wave analysis is presented of $J/\psi\to\phi\pi^+\pi^-$ and $\phi K^+K^-$ from a sample of 58M $J/\psi$ events in the BES II detector. The $f_0(980)$ is observed clearly in both sets... Journal Article
8. Solubility determination and correlation for o-phenylenediamine in (methanol, ethanol, acetonitrile and water) and their binary solvents from T=(283.15–318.15)K The Journal of Chemical Thermodynamics, ISSN 0021-9614, 02/2017, Volume 105, pp. 179 - 186 In the present study, the solubility of o-phenylenediamine in four pure solvents (methanol, ethanol, acetonitrile and water) and three binary solvents (methanol... Jouyban-Acree | Dissolution enthalpy | Solid-liquid equilibrium | o-Phenylenediamine | Solubility | MIXTURES | ACID | THERMODYNAMICS | PURE SOLVENTS | CHEMISTRY, PHYSICAL | PLUS METHANOL | LIQUID PHASE-EQUILIBRIUM | Nitriles | Alcohol | Alcohol, Denatured | Methanol Journal Article
9. Measurements of the branching fractions for $D^+\to K^0_SK^0_SK^+$, $K^0_SK^0_S\pi^+$ and $D^0\to K^0_SK^0_S$, $K^0_SK^0_SK^0_S$ 11/2016 Phys Lett B 765 (2017) 231 By analyzing $2.93\ \rm fb^{-1}$ of data taken at the $\psi(3770)$ resonance peak with the BESIII detector, we measure the branching... Physics - High Energy Physics - Experiment Journal Article
Physical Review Letters, ISSN 0031-9007, 01/2019, Volume 122, Issue 1 Applying e+e- annihilation data of 2.93 fb-1 collected at center-of-mass energy $\sqrt{s}$ = 3.773 GeV with the BESIII detector, we calculate the absolute... NUCLEAR PHYSICS AND RADIATION PHYSICS Journal Article
Advanced Materials, ISSN 0935-9648, 11/2018, Volume 30, Issue 46, pp. e1804011 - n/a The development of high-performance dendrite-free liquid-metal anodes at room temperature is of great importance for the advancement of alkali metal batteries.... 
Na–K alloys | alkali metal batteries | self-healing | wetting promotion | oxide layers | BATTERY | PHYSICS, CONDENSED MATTER | ELECTROLYTE | PHYSICS, APPLIED | MATERIALS SCIENCE, MULTIDISCIPLINARY | CHEMISTRY, PHYSICAL | NANOSCIENCE & NANOTECHNOLOGY | CHEMISTRY, MULTIDISCIPLINARY | METAL | ANODE | Textile fabrics | Iron compounds | Alloys | Electrochemistry | Electrolytes | Liquid metals | Batteries | Energy management systems | Electric properties | Electrochemical analysis | Alkali metals | Electric batteries | Rechargeable batteries | Carbon fibers | Liquid alloys | Pigments | Cloth | Healing | Anodes | Potassium | Dendritic structure | Energy storage Journal Article
12. K-Birnessite Electrode Obtained by Ion Exchange for Potassium-Ion Batteries: Insight into the Concerted Ionic Diffusion and K Storage Mechanism Advanced Energy Materials, ISSN 1614-6832, 01/2019, Volume 9, Issue 1, pp. 1802739 - n/a Novel and low-cost rechargeable batteries are of considerable interest for application in large-scale energy storage systems. In this context, K-Birnessite is... concerted ionic diffusion | potassium-ion batteries | K-Birnessite | structural changes | PHYSICS, CONDENSED MATTER | SODIUM | PHYSICS, APPLIED | ENERGY & FUELS | MATERIALS SCIENCE, MULTIDISCIPLINARY | CHEMISTRY, PHYSICAL | TOTAL-ENERGY CALCULATIONS | CRYSTAL | HIGH-TEMPERATURE DECOMPOSITION | CATHODE MATERIAL | CHALLENGES | LOW-COST | TRANSFORMATIONS | MORPHOLOGY | Phase transformations | Storage systems | X-ray diffraction | Batteries | Phase transitions | Storage batteries | Design optimization | Rechargeable batteries | Electrodes | Ion diffusion | Ion exchanging | Reaction kinetics | Chemical synthesis | Potassium | Energy storage Journal Article
Scientific Reports, ISSN 2045-2322, 2013, Volume 3, Issue 1, p. 1216 Journal Article
What's a Quotient Group, Really? Part 1 I realize that most of my posts for the past, er, few months have been about some pretty heavy-duty topics. Today, I'd like to dial it back a bit and chat about some basic group theory! So let me ask you a question: When you hear the words "quotient group," what do you think of? In case you'd like a little refresher, here's the definition: Definition: Let $G$ be a group and let $N$ be a normal subgroup of $G$. Then $G/N=\{gN:g\in G\}$ is the set of all cosets of $N$ in $G$ and is called the quotient group of $N$ in $G$. Personally, I think answering the question "What is a quotient group?" with the words "the set of all cosets" isn't very enlightening or satisfying. Here's what I think is a more intuitive answer: $G/N$ is sorta all the things in $G$ that don't belong to $N$. But let me explain what I mean by "sorta." Recall that belonging to a subgroup $N$ simply means you satisfy a special property: $n\mathbb{Z}\subset\mathbb{Z}$ means: "You live in $n\mathbb{Z}$ iff you're an integer and a multiple of $n$." $SL_n(F)\subset GL_n(F)$ means: "You live in $SL_n(F)$ iff you're an invertible $n\times n$ matrix with entries in a field $F$ and your determinant is 1." $\ker\phi\subset G$ means: "You live in $\ker\phi$ iff you get sent to the identity $e\in H$ under a homomorphism $\phi:G\to H$." $Z(G)\subset G$ means: "You live in $Z(G)$ (the center of $G$) iff you commute with every element of $G$." $[G,G]\subset G$ means: "You live in $[G,G]$ (the commutator subgroup) iff you look like a finite product of things of the form $ghg^{-1}h^{-1}$ where $g,h\in G$." and so on... So when you hear something like, "Form the quotient $G/N$..." or "Mod out $G$ by the subgroup $N$..." what the speaker really means is, "Consider all the elements of $G$ that don't satisfy the property of belonging to $N$." But in general, there are many ways to fail to satisfy the property to be in $N$. So there's a little more to say here. To get an idea for this, we can imagine all the elements of $G$ taking an online survey: Now suppose we were to collect the survey results and sort the elements of $G$ according to their answers. The story might go something like this: [Mathematician enters room full of elements of $G$ chatting quietly amongst themselves] Hi folks. How are we today? Doin' well? Great. Listen, would those of you who answered "yes" to question #1 please raise your hand? Fantastic, hi there. Thank you. Now, if you would, please huddle together in a single pile. Yes, just like that. You're doin' fine, folks, just fine. Alright, from now on we will refer to you collectively as "$N$" or - on a good day - we might also call you "the trivial coset." But we no longer care about ya'll as individuals. Sorry. You'll get used to it. [Mathematician turns her attention to the folks not in $N$] Hey there, everyone. Would you please raise your hand if you selected "not too badly" for question #2? Great, how you folks doin'? Good. Look, although none of you satisfy the property to belong to $N$, you do satisfy a different property: You all fail not too badly (ntb). Congrats! Now please form your own huddle over in that corner. Quickly now, folks. Okay perfect. Listen, we no longer care about you individually - ya'll are all indistinguishable to us. For this reason, we'll refer to you as "(ntb)N" or sometimes "the coset ntb." [Mathematician addresses remaining elements in the room] Hi there, ya'll, thanks for waiting. Would those of you who fail to belong to $N$ "pretty badly" (pb) please form your own pile? Sure, you can stand in that corner. 
That's right, go ahead. Now because you all possess the special property of 'failing pretty badly,' you're all the same to us, and so we'll just call all of you "(pb)N" or "the coset pb." Alright now, I see ya'll who are "not even close" (nec) to meeting the requirements of belonging to $N$ have already huddled together. Thanks so much, folks. Now now, stop all that crying! It's not such a bad thing. You, too, satisfy a very special property: you all fail really badly. Isn't that great? It sure is. So we'll collectively refer to you all as "(nec)N" or "the coset nec." [Mathematician happily exits the room] [Group elements resume quiet chatter] So you see? We can organize the entire group $G$ based on how the elements relate to the subgroup $N$. Those who belong to $N$, well, belong to $N$. And those who don't can be sorted together according to how badly they miss the mark. Of course, labels like "not too badly" and "pretty badly" and "not even close" are rather fabricated, and there can certainly be more than three options. In fact, it's better to replace "how badly they fail" with just "how they fail." But in any case, this is the bird's-eye-view. Those little organization piles are precisely the cosets of $G/N$. And taking our analogy one step further, this action of 'administering the survey,' i.e. of organizing the members of $G$ according to their relationship to $N$, is precisely what the so-called natural projection homomorphism $\varphi:G\to G/N$ is doing! (Here, $\varphi$ sends an element $g$ to the coset $gN$.) As I like to tell my college algebra students, functions are like verbs! They do things according to some rule. And the same thing is true of group homomorphisms such as $\varphi$. It tells the elements of $G$ to get organized - that's the verb - according to the rule: "If you fail *this* badly, then go stand in the appropriate coset." Well, I hope this was a little helpful! We'll continue this discussion next time by looking at the quotient group $\mathbb{Z}/n\mathbb{Z}$. I'll also say a word or two about the other examples listed at the beginning of this post. Until then!
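To see the "sorting by how you fail" idea in actual code, here is a small sketch of my own (not from the post): it takes the symmetric group $S_3$, uses the sign homomorphism to play the role of the survey, and piles the six permutations into the two cosets of $A_3=\ker(\mathrm{sign})$.

from itertools import permutations

def sign(p):
    # Parity of a permutation given as a tuple of images: +1 for even, -1 for odd
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

# "Survey" every element of S_3 and pile them up by their answer
piles = {}
for g in permutations(range(3)):
    piles.setdefault(sign(g), []).append(g)

print(piles[1])    # the trivial coset A_3: the even permutations
print(piles[-1])   # the single nontrivial coset: the odd permutations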
Learning Outcomes Create and interpret a line of best fit Data rarely fit a straight line exactly. Usually, you must be satisfied with rough predictions. Typically, you have a set of data whose scatter plot appears to “fit” a straight line. This is called a Line of Best Fit or Least-Squares Line. Example A random sample of 11 statistics students produced the following data, where x is the third exam score out of 80, and y is the final exam score out of 200. Can you predict the final exam score of a random student if you know the third exam score?
x (third exam score)   y (final exam score)
65   175
67   133
71   185
71   163
66   126
75   198
67   153
70   163
71   159
69   151
69   159
Table showing the scores on the final exam based on scores from the third exam. Scatter plot showing the scores on the final exam based on scores from the third exam. try it SCUBA divers have maximum dive times they cannot exceed when going to different depths. The data in the table show different depths with the maximum dive times in minutes. Use your calculator to find the least squares regression line and predict the maximum dive time for 110 feet.
X (depth in feet)   Y (maximum dive time)
50   80
60   55
70   45
80   35
90   25
100   22
[latex]\displaystyle\hat{{y}}={127.24}-{1.11}{x}[/latex] At 110 feet, a diver could dive for only five minutes. The third exam score, x, is the independent variable and the final exam score, y, is the dependent variable. We will plot a regression line that best “fits” the data. If each of you were to fit a line “by eye,” you would draw different lines. We can use what is called a least-squares regression line to obtain the best fit line. Consider the following diagram. Each point of data is of the form (x, y) and each point of the line of best fit using least-squares linear regression has the form [latex]\displaystyle{({x},\hat{{y}})}[/latex]. The [latex]\displaystyle\hat{{y}}[/latex] is read “y hat” and is the estimated value of y. It is the value of y obtained using the regression line. It is not generally equal to y from the data. The term [latex]\displaystyle{y}_{0}-\hat{y}_{0}={\epsilon}_{0}[/latex] is called the “error” or residual. It is not an error in the sense of a mistake. The absolute value of a residual measures the vertical distance between the actual value of y and the estimated value of y. In other words, it measures the vertical distance between the actual data point and the predicted point on the line. If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for y. If the observed data point lies below the line, the residual is negative, and the line overestimates the actual data value for y. In the diagram above, [latex]\displaystyle{y}_{0}-\hat{y}_{0}={\epsilon}_{0}[/latex] is the residual for the point shown. Here the point lies above the line and the residual is positive. ε = the Greek letter epsilon For each data point, you can calculate the residuals or errors, [latex]\displaystyle{y}_{i}-\hat{y}_{i}={\epsilon}_{i}[/latex] for i = 1, 2, 3, …, 11. Each |ε| is a vertical distance. For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 ε values. If you square each ε and add, you get [latex]\displaystyle{({\epsilon}_{{1}})}^{{2}}+{({\epsilon}_{{2}})}^{{2}}+\ldots+{({\epsilon}_{{11}})}^{{2}}=\sum_{{i}={1}}^{{11}}{\epsilon}_{i}^{{2}}[/latex] This is called the Sum of Squared Errors (SSE). 
Using calculus, you can determine the values of a and b that make the SSE a minimum. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation: [latex]\displaystyle\hat{{y}}={a}+{b}{x}[/latex] where [latex]\displaystyle{a}=\overline{y}-{b}\overline{{x}}[/latex] and [latex]{b}=\frac{{\sum{({x}-\overline{{x}})}{({y}-\overline{{y}})}}}{{\sum{({x}-\overline{{x}})}^{{2}}}}[/latex]. The sample means of the x values and the y values are [latex]\displaystyle\overline{{x}}[/latex] and [latex]\overline{{y}}[/latex]. The slope b can be written as [latex]\displaystyle{b}={r}{\left(\frac{{s}_{{y}}}{{s}_{{x}}}\right)}[/latex] where [latex]s_y[/latex] = the standard deviation of the y values and [latex]s_x[/latex] = the standard deviation of the x values; r is the correlation coefficient, which is discussed in the next section. Least Squares Criteria for Best Fit The process of fitting the best-fit line is called linear regression. The idea behind finding the best-fit line is based on the assumption that the data are scattered about a straight line. The criterion for the best-fit line is that the sum of the squared errors (SSE) is minimized, that is, made as small as possible. Any other line you might choose would have a higher SSE than the best fit line. This best fit line is called the least-squares regression line. Note Computer spreadsheets, statistical software, and many calculators can quickly calculate the best-fit line and create the graphs. The calculations tend to be tedious if done by hand. Instructions to use the TI-83, TI-83+, and TI-84+ calculators to find the best-fit line and create a scatterplot are shown at the end of this section. Example: Third Exam vs Final Exam The graph of the line of best fit for the third-exam/final-exam example is as follows: The least squares regression line (best-fit line) for the third-exam/final-exam example has the equation: [latex]\displaystyle\hat{{y}}=-{173.51}+{4.83}{x}[/latex] Remember, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use a best fit line to make predictions for y given x within the domain of x-values in the sample data, but not necessarily for x-values outside that domain. You could use the line to predict the final exam score for a student who earned a grade of 73 on the third exam. You should NOT use the line to predict the final exam score for a student who earned a grade of 50 on the third exam, because 50 is not within the domain of the x-values in the sample data, which are between 65 and 75. Understanding Slope The slope of the line, b, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English. Interpretation of the Slope: The slope of the best-fit line tells us how the dependent variable (y) changes for every one unit increase in the independent (x) variable, on average. Third Exam vs Final Exam Example: Slope: The slope of the line is b = 4.83. Interpretation: For a one-point increase in the score on the third exam, the final exam score increases by 4.83 points, on average. 
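Since the section above gives the formulas [latex]{b}=\frac{\sum(x-\overline{x})(y-\overline{y})}{\sum(x-\overline{x})^2}[/latex] and [latex]{a}=\overline{y}-b\overline{x}[/latex], here is a short Python check (my own sketch, not part of the course text) that applies them to the 11 third-exam/final-exam pairs and reproduces the stated best-fit line ŷ ≈ –173.51 + 4.83x:

x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]              # third exam scores
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]   # final exam scores

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Least-squares slope and intercept
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

print(round(a, 2), round(b, 2))   # approximately -173.51 and 4.83
print(round(a + b * 73, 1))       # predicted final exam score for a third-exam score of 73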
Using the Linear Regression T Test: LinRegTTest In the STAT list editor, enter the X data in list L1 and the Y data in list L2, paired so that the corresponding (x, y) values are next to each other in the lists. (If a particular pair of values is repeated, enter it as many times as it appears in the data.) On the STAT TESTS menu, scroll down with the cursor to select the LinRegTTest. (Be careful to select LinRegTTest, as some calculators may also have a different item called LinRegTInt.) On the LinRegTTest input screen enter: Xlist: L1 ; Ylist: L2 ; Freq: 1 On the next line, at the prompt β or ρ, highlight “≠ 0” and press ENTER Leave the line for “RegEq:” blank Highlight Calculate and press ENTER. The output screen contains a lot of information. For now we will focus on a few items from the output, and will return later to the other items. The second line says y = a + bx. Scroll down to find the values a = –173.513, and b = 4.8273; the equation of the best fit line is ŷ = –173.51 + 4.83x. The two items at the bottom are r² = 0.43969 and r = 0.663. For now, just note where to find these values; we will discuss them in the next two sections. Graphing the Scatterplot and Regression Line We are assuming your X data is already entered in list L1 and your Y data is in list L2 Press 2nd STATPLOT ENTER to use Plot 1 On the input screen for PLOT 1, highlight On, and press ENTER For TYPE: highlight the very first icon which is the scatterplot and press ENTER Indicate Xlist: L1 and Ylist: L2 For Mark: it does not matter which symbol you highlight. Press the ZOOM key and then the number 9 (for menu item “ZoomStat”); the calculator will fit the window to the data To graph the best-fit line, press the “Y=” key and type the equation –173.5 + 4.83X into equation Y1. (The X key is immediately left of the STAT key). Press ZOOM 9 again to graph it. Optional: If you want to change the viewing window, press the WINDOW key. Enter your desired window using Xmin, Xmax, Ymin, Ymax Note Another way to graph the line after you create a scatter plot is to use LinRegTTest. Make sure you have done the scatter plot. Check it on your screen. Go to LinRegTTest and enter the lists. At RegEq: press VARS and arrow over to Y-VARS. Press 1 for 1:Function. Press 1 for 1:Y1. Then arrow down to Calculate and do the calculation for the line of best fit. Press Y = (you will see the regression equation). Press GRAPH. The line will be drawn. The Correlation Coefficient r Besides looking at the scatter plot and seeing that a line seems reasonable, how can you tell if the line is a good predictor? Use the correlation coefficient as another indicator (besides the scatterplot) of the strength of the relationship between x and y. The correlation coefficient, r, developed by Karl Pearson in the early 1900s, is numerical and provides a measure of the strength and direction of the linear association between the independent variable x and the dependent variable y. The correlation coefficient is calculated as [latex]{r}=\frac{{n}\sum{({x}{y})}-{(\sum{x})}{(\sum{y})}}{\sqrt{\left[{n}\sum{x}^{2}-{(\sum{x})}^{2}\right]\left[{n}\sum{y}^{2}-{(\sum{y})}^{2}\right]}}[/latex] where n = the number of data points. If you suspect a linear relationship between x and y, then r can measure how strong the linear relationship is. What the VALUE of r tells us: The value of r is always between –1 and +1: –1 ≤ r ≤ 1. The size of the correlation r indicates the strength of the linear relationship between x and y. 
Values of r close to –1 or to +1 indicate a stronger linear relationship between x and y. If r = 0 there is absolutely no linear relationship between x and y (no linear correlation). If r = 1, there is perfect positive correlation. If r = –1, there is perfect negative correlation. In both these cases, all of the original data points lie on a straight line. Of course, in the real world, this will not generally happen. What the SIGN of r tells us: A positive value of r means that when x increases, y tends to increase and when x decreases, y tends to decrease (positive correlation). A negative value of r means that when x increases, y tends to decrease and when x decreases, y tends to increase (negative correlation). The sign of r is the same as the sign of the slope, b, of the best-fit line. Note Strong correlation does not suggest that x causes y or y causes x. We say “correlation does not imply causation.” (a) A scatter plot showing data with a positive correlation. 0 < r < 1 (b) A scatter plot showing data with a negative correlation. –1 < r < 0 (c) A scatter plot showing data with zero correlation. r = 0 The formula for r looks formidable. However, computer spreadsheets, statistical software, and many calculators can quickly calculate r. The correlation coefficient r is the bottom item in the output screens for the LinRegTTest on the TI-83, TI-83+, or TI-84+ calculator (see previous section for instructions). The Coefficient of Determination The variable r² is called the coefficient of determination and is the square of the correlation coefficient, but is usually stated as a percent, rather than in decimal form. It has an interpretation in the context of the data: r², when expressed as a percent, represents the percent of variation in the dependent (predicted) variable y that can be explained by variation in the independent (explanatory) variable x using the regression (best-fit) line. 1 – r², when expressed as a percentage, represents the percent of variation in y that is NOT explained by variation in x using the regression line. This can be seen as the scattering of the observed data points about the regression line. The line of best fit is [latex]\displaystyle\hat{{y}}=-{173.51}+{4.83}{x}[/latex] The correlation coefficient is r = 0.6631. The coefficient of determination is r² = 0.6631² = 0.4397. Interpretation of r² in the context of this example: Approximately 44% of the variation (0.4397 is approximately 0.44) in the final-exam grades can be explained by the variation in the grades on the third exam, using the best-fit regression line. Therefore, approximately 56% of the variation (1 – 0.44 = 0.56) in the final exam grades can NOT be explained by the variation in the grades on the third exam, using the best-fit regression line. (This is seen as the scattering of the points about the line.) Concept Review A regression line, or a line of best fit, can be drawn on a scatter plot and used to predict outcomes for the x and y variables in a given data set or sample data. There are several ways to find a regression line, but usually the least-squares regression line is used because it creates a uniform line. Residuals, also called “errors,” measure the distance between the actual value of y and the estimated value of y. The Sum of Squared Errors, when set to its minimum, calculates the points on the line of best fit. Regression lines can be used to predict values within the given set of data, but should not be used to make predictions for values outside the set of data. 
The correlation coefficient r measures the strength of the linear association between x and y. The variable r has to be between –1 and +1. When r is positive, the x and y will tend to increase and decrease together. When r is negative, x will increase and y will decrease, or the opposite, x will decrease and y will increase. The coefficient of determination r2, is equal to the square of the correlation coefficient. When expressed as a percent, r2 represents the percent of variation in the dependent variable y that can be explained by variation in the independent variable x using the regression line.
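As a quick check of the correlation formula above and of the quoted r² ≈ 0.44 (again my own sketch, not part of the course text, reusing the same 11 exam-score pairs):

from math import sqrt

x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]
n = len(x)

# r = (n*Sxy - Sx*Sy) / sqrt([n*Sxx - Sx^2][n*Syy - Sy^2])
Sx, Sy = sum(x), sum(y)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
Sxx = sum(xi * xi for xi in x)
Syy = sum(yi * yi for yi in y)

r = (n * Sxy - Sx * Sy) / sqrt((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2))
print(round(r, 4), round(r ** 2, 4))   # roughly 0.6631 and 0.4397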
How can I evaluate this indefinite integral? $$ \int { \frac { 1 }{ \sqrt { 8-{ x }^{ 2 }-2x } } } \,dx$$ I know it involves completing the square but I don't know how to do it. Hint: $$8 - x^2 - 2x = 8 - (x^2 + 2x) = 8 - (x^2 + 2x + 1) + 1 = 9 - (x + 1)^2$$ Can you take it from here? Hints: $$8-x^2-2x=9-(x+1)^2\implies \sqrt{8-x^2-2x}=3\sqrt{1-\left(\frac{x+1}3\right)^2}$$ and now do remember the derivative of $\;\arcsin\;$ ...
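For completeness, one way the hints can be carried through (my own worked step, not part of either answer): substituting $u = x+1$ and using $\frac{d}{du}\arcsin\left(\frac{u}{3}\right) = \frac{1}{\sqrt{9-u^2}}$ gives

$$\int \frac{dx}{\sqrt{8-x^2-2x}} = \int \frac{du}{\sqrt{9-u^2}} = \arcsin\left(\frac{u}{3}\right) + C = \arcsin\left(\frac{x+1}{3}\right) + C.$$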
What's a Quotient Group, Really? Part 2 Today we're resuming our informal chat on quotient groups. Previously we said that belonging to a (normal, say) subgroup $N$ of a group $G$ just means you satisfy some property. For example, $5\mathbb{Z}\subset\mathbb{Z}$ means "You belong to $5\mathbb{Z}$ if and only if you're divisible by 5". And the process of "taking the quotient" is the simple observation that every element in $G$ either #1) belongs to N or #2) doesn't belong to N and noting that the elements of $G$ can be grouped together according to HOW they satisfy either #1 or #2. The resulting 'piles' are precisely the cosets $gN$ of $G/N$. And the actual process of creating the piles is what the so-called "natural projection homomorphism" $\phi:G\to G/N$ is doing when it sends an element $g$ to the coset $gN$. The representative $g$ simply indicates/describes how #1 or #2 is satisfied. Notice there is only one way to satisfy #1---you simply belong to $N$---but in general there can be many ways to satisfy #2, i.e. there can be many ways to fail to be in $N$. That's why $G/N$ has exactly one "trivial" coset but often more than one "nontrivial" coset. This reminds me of the saying, There are many ways to be wrong, but only one way to be right! and is why I like to think of $G/N$ as roughly all the things in $G$ that don't belong to $N$. Sure, it's true that $G/N$ also contains all the folks that do belong to $N$ (after all, $N\in G/N$!), but there's only one way to satisfy this property, and therefore there's nothing interesting to talk about. It's trivial. As an example, let's look more closely at the case when $G=\mathbb{Z}$ and $N=5\mathbb{Z}$. A Simple Example We might think of $\mathbb{Z}/5\mathbb{Z}$ as all the integers that aren't multiples of 5, i.e. of those who fail to belong to the subgroup $5\mathbb{Z}$. Though to be efficient, we'll consider some integers as being the 'same' if they fail to belong to $5\mathbb{Z}$ in the same way: All the folks in $\{\ldots,-9,-4,1,6,11,\ldots\}$ aren't in $5\mathbb{Z}$ precisely because they're off by 1. So we call them the coset $[1]=1+5\mathbb{Z}$. All the folks in $\{\ldots,-8,-3,2,7,12,\ldots\}$ aren't in $5\mathbb{Z}$ precisely because they're off by 2. So we call them the coset $[2]=2+5\mathbb{Z}$. All the folks in $\{\ldots,-7,-2,3,8,13,\ldots\}$ aren't in $5\mathbb{Z}$ precisely because they're off by 3. So we call them the coset $[3]=3+5\mathbb{Z}$. All the folks in $\{\ldots,-6,-1,4,9,14,\ldots\}$ aren't in $5\mathbb{Z}$ precisely because they're off by 4. So we call them the coset $[4]=4+5\mathbb{Z}$. Of course everyone in $\{\ldots,-10,-5,0,5,10,\ldots\}$ is a multiple of 5. They're all off by 0! So we call them $0+5\mathbb{Z}$ or simply $[0]$. So you see? Every integer is either divisible by 5 or it's not. If it's not, then we can ask the additional question, "Why?" And since there are four possible answers---either it's off by 1 or 2 or 3 or 4---we get four non-trivial cosets. This is why $\mathbb{Z}/5\mathbb{Z}$ is a group of order five: there's exactly one way to be a member of $5\mathbb{Z}$ but four ways to not be. You probably noticed that the difference between any two integers in $[1]=\{\ldots,-9,-4,1,6,11,\ldots\}$ is a multiple of 5. And the same observation holds for any two integers in $[2]$, and any two integers in $[3]$, and so on. This notion of "taking the difference" is precisely how we determine which group elements belong in which coset. 
Two elements are thought of as "the same" whenever their difference lies in the normal subgroup that we're modding out by. This is exactly what's going on when your textbook says something like, "Suppose $N$ is normal in $G$ and let $a,b\in G$. Then two cosets $aN$ and $bN$ are equal if and only if $b^{-1}a\in N$." Notice that if $G$ is abelian, then we write $b^{-1}a$ as $-b+a$, i.e. "$b$ inverse plus $a$." But when $G$ is non-abelian, we replace "plus" by "times" to obtain $b^{-1}a$. This is the multiplicative version of a difference. Next week we'll close out this mini-series by taking an intuitive look at a few more quotient groups: $G/\ker\phi$ where $\phi:G\to H$ is any group homomorphism $GL_n(F)/SL_n(F)$ where $F$ is a field, $GL_n(F)$ is the general linear group, and $SL_n(F)$ is the special linear group $G/Z(G)$ where $G$ is any group and $Z(G)$ is its center
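Returning to the $\mathbb{Z}/5\mathbb{Z}$ example above, here is a tiny sketch of my own (not from the post) that sorts integers into cosets by "how they fail" to be in $5\mathbb{Z}$ and checks the textbook criterion that $a+5\mathbb{Z}=b+5\mathbb{Z}$ exactly when $a-b\in 5\mathbb{Z}$ (the additive version of $b^{-1}a\in N$):

# Sort a window of integers into the five cosets of 5Z, keyed by "how far off" they are
cosets = {r: [] for r in range(5)}
for k in range(-10, 15):
    cosets[k % 5].append(k)

for r, members in cosets.items():
    print(f"[{r}] = {r} + 5Z contains {members}")

# Coset-equality criterion: a + 5Z == b + 5Z  iff  a - b is a multiple of 5
a, b = 7, -13
print((a % 5 == b % 5) == ((a - b) % 5 == 0))   # True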
This is the problem: We can determine the solubility equilibrium for silver bromide using the cell: Ag (s) | AgNO3 (aq) || KBr (aq) | AgBr (s) | Ag (s) We know that: $\text{AgBr} + \text{e}^- \rightarrow \text{Ag} + \text{Br}^- \ \ \ \text{E}^0=0.095 \text{V}$ $\text{Ag}^+ + \text{e}^- \rightarrow \text{Ag}\ \ \ \text{E}^0=0.799 \text{V}$ First of all, isn't that cell diagram incorrect? As far as I understand, it is the silver from the silver nitrate that is being reduced to metallic silver, thus acting as the cathode. And vice versa for the silver bromide, which is produced as silver is oxidized. Shouldn't the cell diagram be like this instead: KBr (aq) | AgBr (s) | Ag (s) || Ag (s) | AgNO3 (aq) Anyways, if we consider the Nernst equation: $$E = E^0-\frac{RT}{nF}\ln{k}$$ and assume that the galvanic cell is at equilibrium when there is no voltage (i.e. $E=0$), we can then write: $$\ln{k}=\frac{E^0nF}{RT} \\ \leftrightarrow k=e^{\frac{E^0nF}{RT}}$$ Then plug in the numbers: $$\ln{k}=\frac{(-0.095+0.799)\ \text{V}\cdot 1 \ \text{mol}\cdot 96485.31 \ \frac{\text{C}}{\text{mol}}}{8.31451\ \frac{\text{J}}{\text{mol}\cdot \text{K}}\cdot 293.15 \ \text{K}} \\ \leftrightarrow k= 1.26754\cdot 10^{12}$$ But the right answer is: $$1.26 \cdot 10^{-12}\ \text{mol}^2/\text{dm}^6$$ Any ideas what I am doing wrong?
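Just to pin down the arithmetic in the question (a sketch of my own; it does not decide which sign convention for the cell potential the exercise intends), the snippet below evaluates $k = e^{E^0 nF/RT}$ at the question's temperature for both signs of the 0.704 V difference, showing how strongly the sign choice affects the order of magnitude:

from math import exp

R, T, F, n = 8.31451, 293.15, 96485.31, 1   # J/(mol K), K, C/mol, electrons transferred

for E0 in (0.799 - 0.095, 0.095 - 0.799):   # volts; the two possible sign choices for the cell potential
    k = exp(E0 * n * F / (R * T))
    print(f"E0 = {E0:+.3f} V  ->  k = {k:.3e}")   # +0.704 V reproduces the 1.27e12 in the question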
Limited regularity of solutions to fractional heat and Schrödinger equations Department of Mathematical Sciences, Copenhagen University, Universitetsparken 5, DK-2100 Copenhagen, Denmark When $P$ is the fractional Laplacian $(-\Delta)^a$, $0<a<1$, or a pseudodifferential generalization thereof, the Dirichlet problem for the associated heat equation over a smooth set $\Omega \subset{\Bbb R}^n$: $r^+Pu(x, t)+\partial_tu(x, t) = f(x, t)$ on $\Omega \times \, ]0, T[\,$, $u(x, t) = 0$ for $x\notin\Omega$, $u(x, 0) = 0$, is known to be solvable in relatively low-order Sobolev or Hölder spaces. We now show that in contrast with differential operator cases, the regularity of $u$ in $x$ at $\partial\Omega$ when $f$ is very smooth cannot in general be improved beyond a certain estimate. An improvement requires the vanishing of a Neumann boundary value. --- There is a similar result for the Schrödinger Dirichlet problem $r^+Pv(x)+Vv(x) = g(x)$ on $\Omega$, $\text{supp } v\subset\overline\Omega$, with $V(x)\in C^\infty$. The proofs involve a precise description, of interest in itself, of the Dirichlet domains in terms of regular functions and functions with a $\text{dist}(x, \partial\Omega)^a$ singularity. Keywords: Fractional Laplacian, stable process, pseudodifferential operator, fractional heat equation, fractional Schrödinger Dirichlet problem, Lp and Hölder estimates, limited spatial regularity. Mathematics Subject Classification: Primary: 35K05, 35K35; Secondary: 35S11, 47G30, 60G52. Citation: Gerd Grubb. Limited regularity of solutions to fractional heat and Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6): 3609-3634. doi: 10.3934/dcds.2019148
Let $f$ be a meromorphic function in a domain $D$. The set of zeros $Z_f$ and the set of poles $P_f$ are both discrete in $D$; this means that there does not exist a sequence of zeros (resp. a sequence of poles) that converges to a zero of $f$ (resp. to a pole of $f$). My question is the following: Let $a\in D$ such that $a\not\in Z_f $ and $a\notin P_f$; does there exist a sequence $\{b_n\}$ with $b_n\in Z_f\cup P_f$ such that $b_n\rightarrow a$? Roughly speaking, can $a$ be an accumulation point of $Z_f\cup P_f$? I've tried to give myself an answer but I don't know if it is correct: If $f$ is continuous then $$\lim_{b_n\to a} f(b_n)=f\big(\lim_{b_n\to a}b_n\big)$$ but in the above case $$\lim_{b_n\to a}f(b_n)=0,\infty$$ or it doesn't exist, and $$f\big(\lim_{b_n\to a}b_n\big)=f(a)$$ Contradiction!
Main Page Contents The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two-dimensional point set) has a corner. The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines that intersect the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. 
Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length [math]d[/math]. It has a one-to-one mapping to [math][4]^d[/math]: given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math], [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement. Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A. Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. 
When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits [ x_i x'_i ], [ y_i y'_i ], [ z_i z'_i ] form one of the following six columns (each column read top to bottom as those three pairs):
[ 0 0 ] [ 0 0 ] [ 0 1 ] [ 1 0 ] [ 1 1 ] [ 1 1 ]
[ 0 0 ] [ 0 1 ] [ 0 1 ] [ 1 0 ] [ 1 0 ] [ 1 1 ]
[ 0 0 ] [ 1 0 ] [ 0 1 ] [ 1 0 ] [ 0 1 ] [ 1 1 ]
? McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3. High-dimensional Sperner Kalai.29: There is an analogue for Sperner but with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?); those are the usual suspects.) Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the set. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7. 
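Gowers's observation above, that any combinatorial line meeting such a set A in two points automatically meets it in all three, is easy to check by brute force for a small modulus (the comment itself notes that 7 can be replaced by another small number). The sketch below is my own and not part of the wiki page; it uses modulus 3 and n = 6, enumerates every combinatorial line in [3]^6, and confirms that no line meets A in exactly two points:

from itertools import product

n, m = 6, 3   # string length and modulus (the comment notes 7 can be replaced by a smaller number)
in_A = lambda s: all(s.count(c) % m == 0 for c in "123")

violations = 0
# A combinatorial line is described by a template over {1,2,3,'x'} with at least one wildcard 'x'
for template in product("123x", repeat=n):
    if "x" not in template:
        continue
    line = ["".join(digit if c == "x" else c for c in template) for digit in "123"]
    if sum(in_A(p) for p in line) == 2:   # two points in A but not the third: a "broken" line
        violations += 1

print(violations)   # 0: no combinatorial line meets A in exactly two points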
DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic. Bibliography H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
I understand that in the CLT we have $\sqrt{N} \left( \frac{\bar{X}_N - \mu}{\sigma} \right) \overset{d}\to N\left(0,1\right) \qquad (*)$ where $\bar{X}_N := \frac{1}{N} \sum_{i=1}^N X_i$ is the sample mean, and $\sqrt{N}$ inflates $\frac{\bar{X}_N - \mu}{\sigma}$ to $\mathcal{O}_p(1)$, while without the $\sqrt{N}$, $\frac{\bar{X}_N - \mu}{\sigma} \overset{p}\to 0$. $\textbf{My Question is:}$ since $\sqrt{N} \left( \frac{\bar{X}_N - \mu}{\sigma} \right) = \frac{\bar{X}_N - \mu}{\frac{\sigma}{\sqrt{N}}} = \frac{\bar{X}_N - \mathbb{E}\left[\bar{X}_N\right]}{SD\left(\bar{X}_N\right)}$, is it correct to view the CLT as $\frac{\bar{X}_N - \mathbb{E}\left[\bar{X}_N\right]}{SD\left(\bar{X}_N\right)} \overset{d}\to N\left(0,1\right)$? If yes, are we considering the $\sqrt{N}$ from expression $(*)$ as part of the standard deviation of the sample mean, as opposed to an inflator, and does that mean that, as long as we can show that $SD\left(\bar{X}_N\right) \in \mathcal{O}_p(1)$, we do not need to consider the rate of inflation (e.g. $\sqrt{N}$) separately?
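Not part of the question, but a quick numerical illustration of this reading of the CLT may help; the exponential distribution, the sample size and the number of replications below are arbitrary choices. Standardizing the sample mean by $SD(\bar X_N)=\sigma/\sqrt N$ should produce something close to $N(0,1)$:

import numpy as np

rng = np.random.default_rng(0)
N, reps = 200, 20_000          # sample size and number of replications (arbitrary)
mu, sigma = 1.0, 1.0           # mean and standard deviation of Exponential(1)

# Draw `reps` samples of size N and standardize each sample mean by SD(X_bar) = sigma / sqrt(N)
samples = rng.exponential(scale=1.0, size=(reps, N))
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(N))

# If the reading above is right, z should look approximately standard normal
print("mean  (should be ~0):", z.mean())
print("std   (should be ~1):", z.std())
print("P(z < 1.96) (should be ~0.975):", (z < 1.96).mean())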
Search Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
A balanced chemical equation, such as the one for the catalytic oxidation of ammonia, \[ 4 \text{ N}\text{H}_{3} (g) + 5 \text{ O}_{2} (g) \rightarrow 4 \text{ N}\text{O} (g) + 6 \text{ H}_{2}\text{O} (g) \label{1}\] not only tells how many molecules of each kind are involved in a reaction, it also indicates the amount of each substance that is involved. Equation \(\ref{1}\) says that 4 NH3 molecules can react with 5 O2 molecules to give 4 NO molecules and 6 H2O molecules. It also says that 4 mol NH3 would react with 5 mol O2, yielding 4 mol NO and 6 mol H2O. The balanced equation does more than this, though. It also tells us that \(2 \cdot 4 = 8 \text{ mol NH}_3\) will react with \(2 \cdot 5 = 10 \text{ mol O}_2\), and that \(\small\frac{1}{2} \cdot 4 = 2 \text{ mol NH}_3\) requires only \(\small\frac{1}{2} \cdot 5 = 2.5 \text{ mol O}_2\). In other words, the equation indicates that exactly 5 mol O2 must react for every 4 mol NH3 consumed. For the purpose of calculating how much O2 is required to react with a certain amount of NH3, therefore, the significant information contained in Equation \(\ref{1}\) is the ratio \[\frac{\text{5 mol O}_{\text{2}}}{\text{4 mol NH}_{\text{3}}}\label{2}\] We shall call such a ratio derived from a balanced chemical equation a stoichiometric ratio and give it the symbol S. Thus, for Equation \(\ref{1}\), \[\text{S}\left( \frac{\text{O}_{\text{2}}}{\text{NH}_{\text{3}}} \right)=\frac{\text{5 mol O}_{\text{2}}}{\text{4 mol NH}_{\text{3}}} \label{3}\] The word stoichiometric comes from the Greek words stoicheion, “element,“ and metron, “measure.“ Hence the stoichiometric ratio measures one element (or compound) against another. Example \(\PageIndex{1}\): Stoichiometric Ratios Derive all possible stoichiometric ratios from Equation \(\ref{1}\). Solution Any ratio of amounts of substance given by coefficients in the equation may be used: \[\begin{align*} &\text{S}\left(\frac{\ce{NH3}}{\ce{O2}}\right) = \frac{\text{4 mol NH}_3}{\text{5 mol O}_2} &\text{S}\left(\frac{\ce{O2}}{\ce{NO}}\right) &= \frac{\text{5 mol O}_2}{\text{4 mol NO}} \\ { } \\ &\text{S}\left(\frac{\ce{NH3}}{\ce{NO}}\right) = \frac{\text{4 mol NH}_3}{\text{4 mol NO}} &\space\text{S}\left(\frac{\ce{O2}}{\ce{H2O}}\right) &= \frac{\text{5 mol O}_2}{\text{6 mol }\ce{H2O}} \\ { } \\ &\text{S}\left(\frac{\ce{NH3}}{\ce{H2O}}\right) = \frac{\text{4 mol NH}_3}{\text{6 mol }\ce{H2O}} &\space\text{S}\left(\frac{\ce{NO}}{\ce{H2O}}\right) &= \frac{\text{4 mol NO}}{\text{6 mol }\ce{H2O}} \end{align*} \] There are six more stoichiometric ratios, each of which is the reciprocal of one of these. [Equation \(\ref{3}\) gives one of them.] When any chemical reaction occurs, the amounts of substances consumed or produced are related by the appropriate stoichiometric ratios.
Using Equation \(\ref{2}\) as an example, this means that the ratio of the amount of O2 consumed to the amount of NH3 consumed must be the stoichiometric ratio S(O2/NH3): \[\frac{n_{\text{O}_{\text{2}}\text{ consumed}}}{n_{\text{NH}_{\text{3}}\text{ consumed}}} =\text{S} \left(\frac{\text{O}_2}{\text{NH}_3}\right) = \frac{\text{5 mol O}_{\text{2}}}{\text{4 mol NH}_{3}}\label{9}\] Similarly, the ratio of the amount of H2O produced to the amount of NH3 consumed must be S(H2O/NH3): \[\frac{n_{\text{H}_{\text{2}}\text{O produced}}}{n_{\text{NH}_{\text{3}}\text{ consumed}}} =\text{S}\left( \frac{\text{H}_{\text{2}}\text{O}}{\text{NH}_{3}} \right) = \frac{\text{6 mol H}_{\text{2}}\text{O}}{\text{4 mol NH}_{3}} \label{10}\] In general we can say that \[\text{Stoichiometric ratio }\left( \frac{\text{X}}{\text{Y}} \right)=\frac{\text{amount of X consumed or produced}}{\text{amount of Y consumed or produced}} \label{11}\] or, in symbols, \[\text{S}\left( \frac{\text{X}}{\text{Y}} \right)= \frac{n_{\text{X consumed or produced}}}{n_{\text{Y consumed or produced}}} \label{12}\] Note that in the word Equation \(\ref{11}\) and the symbolic Equation \(\ref{12}\), \(X\) and \(Y\) may represent any reactant or any product in the balanced chemical equation from which the stoichiometric ratio was derived. No matter how much of each reactant we have, the amounts of reactants consumed and the amounts of products produced will be in the appropriate stoichiometric ratios. Example \(\PageIndex{2}\): Ratio of Water Find the amount of water produced when 3.68 mol NH3 is consumed according to Equation \(\ref{10}\). Solution The amount of water produced must be in the stoichiometric ratio S(H2O/NH3) to the amount of ammonia consumed: \[\text{S}\left( \dfrac{\text{H}_{\text{2}}\text{O}}{\text{NH}_{\text{3}}} \right)=\dfrac{n_{\text{H}_{\text{2}}\text{O produced}}}{n_{\text{NH}_{\text{3}}\text{ consumed}}} \nonumber\] Multiplying both sides by \(n_{\text{NH}_3\text{ consumed}}\), we have \[\begin{align} n_{\text{H}_{\text{2}}\text{O produced}} &= n_{\text{NH}_{\text{3}}\text{ consumed}} \normalsize \cdot\text{S}\left( \frac{\ce{H2O}}{\ce{NH3}} \right) \\ { } \\ & =\text{3.68 mol NH}_3 \cdot\frac{\text{6 mol }\ce{H2O}}{\text{4 mol NH}_3} \\ & =\text{5.52 mol }\ce{H2O} \end{align}\] This is a typical illustration of the use of a stoichiometric ratio as a conversion factor. Example \(\PageIndex{2}\) is analogous to Examples 1 and 2 from Conversion Factors and Functions, where density was employed as a conversion factor between mass and volume. Example \(\PageIndex{2}\) is also analogous to Examples 2.4 and 2.6, in which the Avogadro constant and molar mass were used as conversion factors. As in these previous cases, there is no need to memorize or do algebraic manipulations with Equation \(\ref{9}\) when using the stoichiometric ratio. Simply remember that the coefficients in a balanced chemical equation give stoichiometric ratios, and that the proper choice results in cancellation of units. In road-map form \[ \text{amount of X consumed or produced}\overset{\begin{smallmatrix} \text{stoichiometric} \\ \text{ ratio X/Y} \end{smallmatrix}}{\longleftrightarrow}\text{amount of Y consumed or produced}\] or, symbolically, \[ n_{\text{X consumed or produced}}\text{ }\overset{S\text{(X/Y)}}{\longleftrightarrow}\text{ }n_{\text{Y consumed or produced}}\] When using stoichiometric ratios, be sure you always indicate moles of what. You can only cancel moles of the same substance.
In other words, 1 mol NH3 cancels 1 mol NH3 but does not cancel 1 mol H2O. The next example shows that stoichiometric ratios are also useful in problems involving the mass of a reactant or product. Example \(\PageIndex{3}\): Mass Produced Calculate the mass of sulfur dioxide (SO2) produced when 3.84 mol O2 is reacted with FeS2 according to the equation \[\ce{4FeS2 + 11O2 -> 2Fe2O3 + 8SO2} \nonumber\] Solution The problem asks that we calculate the mass of SO2 produced. As we learned in Example 2 of The Molar Mass, the molar mass can be used to convert from the amount of SO2 to the mass of SO2. Therefore this problem in effect is asking that we calculate the amount of SO2 produced from the amount of O2 consumed. This is the same problem as in Example 2. It requires the stoichiometric ratio \(\text{S}\left( \frac{\text{SO}_{\text{2}}}{\text{O}_{\text{2}}} \right)=\frac{\text{8 mol SO}_{\text{2}}}{\text{11 mol O}_{\text{2}}}\) The amount of SO2 produced is then \[\begin{align*} n_{\ce{SO2}\text{ produced}} & = n_{\ce{O2}\text{ consumed}}\text{ }\normalsize\cdot\text{ conversion factor} \\ & =\text{3.84 mol O}_2\cdot\frac{\text{8 mol SO}_2}{\text{11 mol O}_2} \\ & =\text{2.79 mol SO}_2 \end{align*}\] The mass of SO2 is \[\begin{align*}\text{m}_{\text{SO}_{\text{2}}} & =\text{2.79 mol SO}_2\cdot\frac{\text{64.06 g SO}_2}{\text{1 mol SO}_2} \\& =\text{179 g SO}_2 \end{align*}\] With practice this kind of problem can be solved in one step by concentrating on the units. The appropriate stoichiometric ratio will convert moles of O2 to moles of SO2 and the molar mass will convert moles of SO2 to grams of SO2. A schematic road map for the one-step calculation can be written as \[ n_{\text{O}_{\text{2}}}\text{ }\xrightarrow{S\text{(SO}_{\text{2}}\text{/O}_{\text{2}}\text{)}}\text{ }n_{\text{SO}_{\text{2}}}\text{ }\xrightarrow{M_{\text{SO}_{\text{2}}}}\text{ }m_{\text{SO}_{\text{2}}} \nonumber\] Thus \[ \text{m}_{\text{SO}_{\text{2}}}=\text{3}\text{.84 mol O}_{\text{2}}\cdot\text{ }\frac{\text{8 mol SO}_{\text{2}}}{\text{11 mol O}_{\text{2}}}\normalsize\text{ }\cdot\text{ }\frac{\text{64}\text{.06 g}}{\text{1 mol SO}_{\text{2}}}=\normalsize\text{179 g} \nonumber\] These calculations can be organized as a table, with entries below the respective reactants and products in the chemical equation. You may verify the additional calculations.

             \(4\text{ FeS}_2\)    \(+\ 11\text{ O}_2\)    \(\rightarrow 2\text{ Fe}_2\text{O}_3\)    \(+\ 8\text{ SO}_2\)
m (g)        168                  123                     111                                        179
M (g/mol)    120.0                32.0                    159.7                                      64.06
n (mol)      1.40                 3.84                    0.698                                      2.79

The chemical reaction in this example is of environmental interest. Iron pyrite (FeS2) is often an impurity in coal, and so burning this fuel in a power plant produces sulfur dioxide (SO2), a major air pollutant. Our next example also involves burning a fuel and its effect on the atmosphere. Example \(\PageIndex{4}\): Mass of Oxygen What mass of oxygen would be consumed when \(3.3 \times 10^{15}\) g, 3.3 Pg (petagrams), of octane (C8H18) is burned to produce CO2 and H2O? Solution First, write a balanced equation \[\ce{2C8H18 + 25O2 -> 16CO2 + 18H2O} \nonumber\] The problem gives the mass of C8H18 burned and asks for the mass of O2 required to combine with it. Thinking the problem through before trying to solve it, we realize that the molar mass of octane could be used to calculate the amount of octane consumed. Then we need a stoichiometric ratio to get the amount of O2 consumed. Finally, the molar mass of O2 permits calculation of the mass of O2.
Symbolically \[ m_{\text{C}_{\text{8}}\text{H}_{\text{18}}}\text{ }\xrightarrow{M_{\text{C}_{\text{8}}\text{H}_{\text{18}}}}\text{ }n_{\text{C}_{\text{8}}\text{H}_{\text{18}}}\text{ }\xrightarrow{S\text{(O}_{\text{2}}\text{/C}_{\text{8}}\text{H}_{\text{18}}\text{)}}\text{ }n_{\text{O}_{\text{2}}}\xrightarrow{M_{\text{O}_{\text{2}}}}\text{ }m_{\text{O}_{\text{2}}} \nonumber\] \[\begin{align*} m_{\text{O}_{\text{2}}} & =\text{3}\text{.3 }\cdot\text{ 10}^{\text{15}}\text{ g }\cdot\text{ }\frac{\text{1 mol C}_{\text{8}}\text{H}_{\text{18}}}{\text{114 g}}\text{ }\cdot\text{ }\frac{\text{25 mol O}_{\text{2}}}{\text{2 mol C}_{\text{8}}\text{H}_{\text{18}}}\text{ }\cdot \text{ }\frac{\text{32}\text{.00 g}}{\text{1 mol O}_{\text{2}}} \\ & =\text{1}\text{.2 }\cdot\text{ 10}^{\text{16}}\text{ g } \end{align*}\] Thus 12 Pg (petagrams) of O2 would be needed. The large mass of oxygen obtained in this example is an estimate of how much O2 is removed from the earth’s atmosphere each year by human activities. Octane, a component of gasoline, was chosen to represent coal, gas, and other fossil fuels. Fortunately, the total mass of oxygen in the air (\(1.2 \times 10^{21}\) g) is much larger than the yearly consumption. If we were to go on burning fuel at the present rate, it would take about 100 000 years to use up all the O2. Actually we will consume the fossil fuels long before that! One of the least of our environmental worries is running out of atmospheric oxygen.
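The road-map calculations above translate directly into a few lines of code. Here is a minimal Python sketch of Example 4; the rounded molar masses are the ones used in the text, and the stoichiometric ratio S(O2/C8H18) = 25/2 comes from the balanced equation:

# Mass-to-mass conversion for 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O,
# following the road map m(C8H18) -> n(C8H18) -> n(O2) -> m(O2).
M_octane = 114.0      # g/mol, rounded molar mass of C8H18 used in the text
M_O2 = 32.00          # g/mol

m_octane = 3.3e15     # g of octane burned

n_octane = m_octane / M_octane          # mol C8H18
n_O2 = n_octane * 25 / 2                # stoichiometric ratio S(O2/C8H18) = 25/2
m_O2 = n_O2 * M_O2                      # g O2

print(f"n(C8H18) = {n_octane:.2e} mol")
print(f"n(O2)    = {n_O2:.2e} mol")
print(f"m(O2)    = {m_O2:.2e} g")       # about 1.2e16 g, i.e. 12 Pg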
When proving the functional equation for the Riemann zeta function one starts from the definition of the gamma function $$\Gamma(s) = \int_0^{\infty} x^{s-1} e^{-x}\,\mathrm dx\tag1$$ After a few steps we arrive at $$ \pi^{-\frac{s}{2}} \Gamma \left(\frac{s}{2} \right) \zeta(s) = \frac{1}{s(s-1)} + \int_1^{\infty} \left( x^{\frac{s}{2}} + x^{\frac{1-s}{2}} \right) \frac{\psi(x)}{x}\,\mathrm dx\tag2$$ where $\psi(x)=\sum_{n=1}^\infty e^{-\pi n^2 x}$. Observing that the RHS of equation $(2)$ does not change when we replace $s$ with $(1-s)$, we get the functional equation below: $$ \pi^{-\frac{s}{2}} \Gamma \left(\frac{s}{2} \right) \zeta(s) = \pi^{-\frac{1-s}{2}} \Gamma \left(\frac{1-s}{2} \right) \zeta(1-s) \tag3 $$ You can watch a detailed proof here. I am trying to find a similar generalized proof for any Dirichlet $L$-function. In particular I want to see what equation $(2)$ looks like. I tried searching with Google but could not find it. Does anybody know a reference for this proof? Thank you!
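Not a reference, but while searching it can be useful to verify the completed functional equation (3) numerically; below is a small sketch using mpmath, with an arbitrary test point away from the poles. For a Dirichlet L-function the analogous check would use the completed function $\Lambda(s,\chi)$ with its conductor and Gauss-sum factor, which is not shown here.

from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30  # working precision (decimal digits)

def completed_zeta(s):
    # Lambda(s) = pi^(-s/2) * Gamma(s/2) * zeta(s), the left-hand side of equation (3)
    return pi**(-s/2) * gamma(s/2) * zeta(s)

s = mpc('0.3', '4.7')       # arbitrary test point, away from s = 0 and s = 1
lhs = completed_zeta(s)
rhs = completed_zeta(1 - s)
print(lhs)
print(rhs)
print("difference:", abs(lhs - rhs))   # should be tiny (on the order of the working precision)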
Rigorous Theory for Accurate Multi-scale Analysis with Non-Well-Separated Length Scales The Generalized Finite Element Method (GFEM) is a domain decomposition method based upon a partition of unity of a computational domain. This method was initiated and developed in (Babuska, Caloz, and Osborn, 1994), (Babuska and Melenk, 1997), (Babuska, Banerjee, and Osborn, 2004). This approach is well suited to modeling heterogeneous elastic media and is a Galerkin scheme based upon choosing a finite dimensional approximation space inside each subdomain that supports the partition of unity. For example, if the subdomains are triangles and the partition of unity functions are the classic hat functions, then the GFEM is the well known linear Finite Element (FE) approximation. The approximation property of the FE basis improves as we refine the triangular mesh. On the other hand, the GFEM approximation can improve if we increase the dimension of the local approximation space associated with each subdomain. This feature allows for an approximation theory based on the dimension of the approximation space, which is not possible in either the classic FE approximation or in homogenization theory. This provides a new opportunity for developing an approximation theory of heterogeneous media in the absence of scale separation. This approach is distinct from classical homogenization, which requires a separation of length scales between the length scale of heterogeneity and that of the boundary loading and body forces. It is demonstrated in (Babuska and Melenk, 1997) that the global error is controlled by the local approximation error. So the principal theoretical question surrounding GFEM is to find an optimal local approximation space for each subdomain participating in the partition of unity and to estimate its rate of convergence. These questions are answered in collaboration with I. Babuska in (Babuska and Lipton, 2011) and with I. Babuska and my Ph.D. student X. Huang in (Babuska, Xu, and Lipton, 2014). The first paper (Babuska and Lipton, 2011) solves this problem for scalar problems and the second (Babuska, Xu, and Lipton, 2014) applies the methodology to linear elasticity. In these papers we consider Neumann boundary conditions, and inside the domain $D$ the displacement $u$ is a solution of $$ {\rm div}(\mathbb{A}(x)e(u))=0.$$ Here $\mathbb{A}(x)=\mathbb{A}_{ijkl}(x)$ is an elastic tensor with $L^\infty$ elements that satisfies ellipticity and boundedness $$0\leq c_1|e|^2\leq \mathbb{A}e:e\leq c_2|e|^2. $$ A function satisfying the homogeneous equation is said to be $A$-harmonic. We consider a local domain $S$ associated with the partition of unity. We assume that $S\subset \subset D$ is compactly contained inside the domain of interest $D$. (Local domains touching the boundary of $D$ can be handled in a modified but similar way.) We then take a slightly larger domain $S^*$ such that $S\subset \subset S^*\subset \subset D$ and consider all $N$-dimensional spaces of $A$-harmonic functions on $S^*$. We ask: of all such $N$-dimensional subspaces, can we find the one that best approximates the solution in the energy norm restricted to $S$? Since we consider the energy norm we are locally approximating the solution up to a constant, but the constant can be determined. It turns out that the restriction operator $Ru=u_{|_{\scriptscriptstyle{S}}}$ is a compact operator on the space of $A$-harmonic functions defined on $S^*$.
The best basis with optimal approximation properties is shown to be the span of the first $N$ eigenfunctions of $RR^*$, and the convergence rate is given by the square root of the $N^{th}$ singular value of $R$. This singular value can be estimated using an iterated application of the Caccioppoli inequality, and it decays almost exponentially with respect to $N$. From this we deduce that the error of the optimal global approximation decreases nearly exponentially with the dimension of the local approximation spaces. We call the associated numerical method the Multiscale Spectral Generalized Finite Element Method (MS-GFEM). This method serves two roles: 1) it provides a dimensionally reduced coarse scale version of the original physical problem, and 2) it allows for an efficient and accurate post-processing step to resolve local fields at lower length scales. This feature is distinct from pure upscaling methods where only coarse scale field information is obtained. The simplest multiscale method is the well known method of periodic homogenization with correctors; see for example (Bensoussan, Lions, and Papanicolaou, 1978). However, for non-periodic problems with non-well-separated length scales the homogenization approach is no longer applicable. On the other hand, MS-GFEM is designed for such problems and provides computational recovery of coarse scale information through a global solve while providing selective resolution of local fields as a post-processing step. The scheme is highly parallel and can be implemented on large parallel machines. The work discussed here has appeared in I. Babuska and R. Lipton, Optimal Local Approximation Spaces for Generalized Finite Element Methods with Application to Multiscale Problems, Multiscale Modeling and Simulation, SIAM 9 (2011) 373-406, and in I. Babuska, Xu Huang, and R. Lipton, Machine Computation Using the Exponentially Convergent Multiscale Spectral Generalized Finite Element Method, Mathematical Modeling and Numerical Analysis (M2AN), 48 Number 2 (2014) 493-515. This work is supported by NSF Grant DMS-1211066.
References
I. Babuska, G. Caloz, and J. E. Osborn, Special finite element methods for a class of second order elliptic problems with rough coefficients, SIAM J. Numer. Anal. 31 (1994), pp. 945-981.
I. Babuska and J. Melenk, The partition of unity finite element method, Internat. J. Numer. Methods Engrg., 40 (1997), pp. 727-758.
I. Babuska, U. Banerjee, and J. Osborn, Generalized finite element methods - main ideas, results and perspective, Int. J. Comput. Methods, 1 (2004), pp. 67-103.
I. Babuska and R. Lipton, Optimal Local Approximation Spaces for Generalized Finite Element Methods with Application to Multiscale Problems, Multiscale Modeling and Simulation, SIAM 9 (2011), pp. 373-406.
I. Babuska, Xu Huang, and R. Lipton, Machine Computation Using the Exponentially Convergent Multiscale Spectral Generalized Finite Element Method, Mathematical Modeling and Numerical Analysis (M2AN), 48 (2014), pp. 493-515.
A. Bensoussan, J. L. Lions, and G. C. Papanicolaou, Asymptotic Analysis for Periodic Structures, North Holland, Amsterdam, 1978.
Siril processing tutorial
Convert your images in the FITS format Siril uses (image import)
Work on a sequence of converted images
Pre-processing images
Registration (PSF image alignment)
→ Stacking

Stacking
The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only the selected images, or the best images according to the FWHM values computed previously. Siril proposes several algorithms for the stacking computation.

Sum Stacking
This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images.

Average Stacking With Rejection
Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images).
Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is larger than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack.
Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2].
Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) to the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.
These algorithms are very efficient at removing satellite/plane tracks.

Median Stacking
This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].

Pixel Maximum Stacking
This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater.

In the case of the M8-M20 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=2[/math]). The output console then gives the following result:
21:58:19: Pixel rejection in channel #0: 2.694% - 4.295%
21:58:19: Pixel rejection in channel #1: 1.987% - 3.620%
21:58:19: Pixel rejection in channel #2: 0.484% - 4.297%
21:58:19: Rejection stacking complete. 119 have been stacked.
21:58:28: Noise estimation (channel: #0): 4.913e-05
21:58:28: Noise estimation (channel: #1): 3.339e-05
21:58:28: Noise estimation (channel: #2): 3.096e-05
The noise estimation is a good indicator of the quality of your stacking process. In our example, the red channel has almost 1.5 times more noise than the green or blue channels. This probably means that the DSLR is unmodified: most of the red photons are stopped by the original filter, leading to a noisier channel. We also note that in this example the high rejection seems to be a bit strong. Setting the high rejection to [math]\sigma_{high}=4[/math] could produce a better image, and this is what you have in the image below. After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust the levels if you want to see it better, or use the different display modes. In our example the file is the stack result of all files, i.e., 119 files. The images above picture the result in Siril using the Histogram Equalization rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]38/3.9 \approx 9.7[/math], to be compared with the ideal [math]\sqrt{119} \approx 10.9[/math]; you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Here is a comparison between the same crop of a calibrated single frame and the stacked result. Now the processing of the image should start, with cropping, background extraction (to remove the gradient), and other operations to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril:
Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
Juan Conejero, ImageIntegration, PixInsight Tutorial
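To make the rejection idea concrete, here is a small illustrative numpy sketch of an iterative sigma-clipping average along a stack of registered frames. This is not Siril's actual implementation (in particular the winsorization step of the "Winsorized Sigma Clipping" variant is omitted), and the synthetic frames below are made up.

import numpy as np

def sigma_clip_stack(stack, sigma_low=4.0, sigma_high=2.0, iters=3):
    """Average a stack of registered frames (shape: n_frames x H x W),
    iteratively rejecting pixels that lie more than sigma_low below or
    sigma_high above the per-pixel median, in units of the per-pixel std."""
    data = np.asarray(stack, dtype=float)
    mask = np.ones_like(data, dtype=bool)              # True = pixel kept
    for _ in range(iters):
        med = np.nanmedian(np.where(mask, data, np.nan), axis=0)
        std = np.nanstd(np.where(mask, data, np.nan), axis=0)
        dev = data - med
        mask &= (dev >= -sigma_low * std) & (dev <= sigma_high * std)
    kept = np.where(mask, data, np.nan)
    return np.nanmean(kept, axis=0), 1.0 - mask.mean()  # stacked image, rejected fraction

# Synthetic example: 119 noisy frames of a flat field, one frame with a bright "satellite track"
rng = np.random.default_rng(1)
frames = 100 + rng.normal(0, 5, size=(119, 64, 64))
frames[7, 30, :] += 500                                 # bright track across row 30 of frame 7
stacked, rejected = sigma_clip_stack(frames)
print(f"rejected fraction: {rejected:.3%}")
print("mean of row 30 (should stay close to 100):", stacked[30].mean())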
I am going to consider Freivalds' algorithm in the field mod 2. In this algorithm we want to check whether $$AB = C$$ and be correct with high probability. The algorithm chooses a random $n$-bit vector $r$ and if $$A(Br) = Cr$$ then it outputs YES, otherwise it outputs NO. I want to show that it has one-sided error with success probability at least 1/2. For that I want to show that when $AB = C$ the algorithm is correct with probability 1, and when $AB \neq C$ the probability of it being correct is at least 1/2. When $AB = C$ it is clear that the algorithm is always correct because: $$ A(Br) = (AB)r = Cr$$ So, if $AB=C$, then $A(Br)$ is always equal to $Cr$. The case when $AB \neq C$ is a little trickier. Let $D = AB - C$. When the products are not equal, $D \neq 0$. Let a good $r$ be a vector that discovers the incorrect multiplication (i.e. $D \neq 0$ and $Dr \neq 0$). Let a bad $r$ be a vector that makes us mess up, i.e. conclude the multiplication is correct when it is not. In other words, $D \neq 0$ (i.e. the multiplication is incorrect, $AB \neq C$) but when we use $r$ to check this we get the wrong conclusion (i.e. $Dr = 0$, when in fact $D \neq 0$). The high level idea of the proof is: if we can show that for every bad $r$ there is a good $r$, then at least half of the $r$'s are good, so our algorithm is correct at least half of the time. This high level idea of the proof makes sense to me; however, what is not clear to me is the precise detail of the inequality (whether $Pr[error] \geq \frac{1}{2}$ or the other way round). So consider the case where $D \neq 0 $, i.e. $AB \neq C$. In this case at least one entry $(i,j)$ of $D$ is not zero (it is 1 because we are working mod 2). Let that entry be $d_{i,j}$. Let $v$ be the vector that picks up that entry, i.e. $Dv \neq 0$ with $(Dv)_{i} = d_{i,j}$. In this case, if we have a bad $r$ (i.e. an $r$ such that $D \neq 0$ but $Dr = 0$) then we can make it into a good vector $r'$ by flipping its $j$-th coordinate, i.e. $$ r' = r + v$$ This mapping from bad to good is clearly one to one: the equation $ r' = r + v$ maps $r$ to a single $r'$, so it cannot be one to many. Now let's see that it is not many to one either. If there were another $\tilde r$ mapping to the same $r'$, then $$r' = r+ v = \tilde r + v \;\; (\operatorname{mod} \; 2) \implies r = \tilde r \;\; (\operatorname{mod} \; 2)$$ So it is one to one. Therefore, for each bad $r$ there is a good $r'$. However, it is not clear to me why that would imply: $$ Pr[A(Br) \neq Cr] = Pr[Dr \neq 0] \geq \frac{1}{2}$$ I see why the good $r$'s are at least as numerous as the bad ones, but it is not clear to me at all why that implies the above inequality.
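For concreteness, here is a short numpy sketch of the check over GF(2) (my own illustration, not from a textbook), together with an empirical confirmation that a single-trial test rejects a wrong product at least half of the time:

import numpy as np

rng = np.random.default_rng(0)

def freivalds_mod2(A, B, C, trials=1):
    """Return True if A @ B == C is accepted by `trials` independent random tests mod 2."""
    n = A.shape[0]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)                      # random 0/1 vector
        if ((A @ ((B @ r) % 2)) % 2 != (C @ r) % 2).any():  # A(Br) != Cr mod 2
            return False                                     # witness found: AB != C
    return True

n = 50
A = rng.integers(0, 2, size=(n, n))
B = rng.integers(0, 2, size=(n, n))
C_good = (A @ B) % 2
C_bad = C_good.copy()
C_bad[3, 7] ^= 1                                             # introduce a single wrong entry

# A correct product is never rejected; a wrong one is rejected with probability >= 1/2 per trial.
runs = 2000
reject_rate = sum(not freivalds_mod2(A, B, C_bad) for _ in range(runs)) / runs
print("always accepts correct C:", all(freivalds_mod2(A, B, C_good) for _ in range(100)))
print(f"single-trial rejection rate for wrong C: {reject_rate:.2f}  (expected >= 0.5)")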
I am seeing how the Berry connection $\mathcal{A}(k)$ transforms under time reversal symmetry. I seem to have a hiccup over something simple. I may have overcomplicated things but I think it points to some misconceptions I may have. From the definition of the Berry curvature as per usual: \begin{align} \mathcal{A}(k) &= i \langle \psi(k) | \frac{d}{dk} |\psi(k)\rangle \\ &= i \int \psi^\star (k) \frac{d \psi (k)}{dk} d\vec{r} \end{align} Applying time reversal $\hat{\mathcal{T}}$, \begin{align} \hat{\mathcal{T}} \mathcal{A}(k) &= i \langle \hat{\mathcal{T}} \psi(k) | \frac{d}{dk} | \hat{\mathcal{T}} \psi(k)\rangle \\ &= i \int \hat{\mathcal{T}} \psi^\star (k) \frac{d \hat{\mathcal{T}} \psi (k)}{dk} d\vec{r} \\ &= i \int \psi(-k) \frac{d \psi^\star (-k)}{dk} d\vec{r} \end{align} Since $\mathcal{A}$ must be real, we can conjugate it. \begin{align} &= -i \int \psi^\star(-k) \frac{d \psi(-k)}{dk} d\vec{r} \\ &= - \mathcal{A}(-k) \end{align} So If I am in a time-reversal invariant system, I have \begin{align} \hat{\mathcal{T}} \mathcal{A} (k) &= \mathcal{A} (k) \\ \implies \mathcal{A} (k) &= - \mathcal{A}(-k) \end{align} Which is wrong, I am off by the minus sign. My Questions: Is it true that the Berry connection is always real? That was my justification for conjugating it. I think they do this step in both sources listed below. Is this an abuse of notation if I did this (putting the derivative with the ket): \begin{align} \mathcal{A}(k) &= i \langle \psi(k) | \frac{d \psi(k)}{dk} \rangle \\ \hat{\mathcal{T}} \mathcal{A} (k) &= i \langle \hat{\mathcal{T}} \psi(k) | \hat{\mathcal{T}} \frac{d \psi(k)}{dk} \rangle \end{align} And if this were the case, how does $\hat{\mathcal{T}}$ act on the differential operator? Would \begin{align} \hat{\mathcal{T}} \frac{d }{dk} = \frac{d }{d(-k)} \hat{\mathcal{T}} \end{align} be true? I feel like you can't just take the derivative out without a minus sign, or else how would the velocity operator become negative? Why does $\hat{\mathcal{T}} $ not act on the $i$ outside the braces in $\hat{\mathcal{T}} \mathcal{A}(k) = i \langle \hat{\mathcal{T}} \psi(k) | \frac{d}{dk} | \hat{\mathcal{T}} \psi(k)\rangle $? Was there something else wrong in my derivation? Sources: They define the Berry connection with an extra minus sign but that shouldn't matter http://www-personal.umich.edu/~sunkai/teaching/Fall_2012/chapter3_part8.pdf Topological States on Interfaces Protected by Symmetry, by Takahashi 2015. The derivation from that is the following: \begin{align} \mathbf { a } ^ { \alpha } ( \mathbf { - k } ) & = - i \left\langle u ^ { \alpha } ( - \mathbf { k } ) | \nabla u ^ { \alpha } ( - \mathbf { k } ) \right\rangle \\ & = - i \left\langle \nabla \Theta u ^ { \alpha } ( - \mathbf { k } ) | \Theta u ^ { \alpha } ( - \mathbf { k } ) \right\rangle \\ & = i \left\langle \Theta u ^ { \alpha } ( - \mathbf { k } ) | \nabla \Theta u ^ { \alpha } ( - \mathbf { k } ) \right\rangle \\ & = \mathbf { a } ^ { \beta } ( \mathbf { k } ) + i \nabla \chi ( \mathbf { k } ) \end{align}
The distribution or table of frequencies is a table of the statistical data with its corresponding frequencies. Absolute frequency: the number of times that a value appears. It is represented as $$f_i$$ where the subscript represents each of the values. The sum of the absolute frequencies is equal to the total number of data, represented by $$N$$. $$$f_1+f_2+f_3+\ldots+f_n=N$$$ equivalent to: $$$\sum_{i=1}^n f_i=N$$$ Relative frequency: the result of dividing the absolute frequency of a certain value by the total number of data. It is represented as $$n_i$$. The sum of the relative frequencies is equal to $$1$$; we can prove this easily by factoring out $$N$$. $$$n_i=\displaystyle \frac{f_i}{N}$$$ Cumulative frequency: the sum of the absolute frequencies of all the values equal to or less than the considered value. This is represented as $$F_i$$. Relative cumulative frequency: the result of dividing the cumulative frequency by the total number of data. It is represented by $$N_i$$ (when we are dealing with cumulative frequencies, the letters representing them are capital letters). $$15$$ students answer the question of how many brothers or sisters they have. The answers are: $$$1, 1, 2, 0, 3, 2, 1, 4, 2, 3, 1, 0, 0, 1, 2$$$ Then we can construct a table of frequencies:

Brothers | Absolute frequency $$f_i$$ | Relative frequency $$n_i$$ | Cumulative frequency $$F_i$$ | Relative cumulative frequency $$N_i$$
$$0$$ | $$3$$ | $$\displaystyle \frac{3}{15}$$ | $$3$$ | $$\displaystyle \frac{3}{15}$$
$$1$$ | $$5$$ | $$\displaystyle \frac{5}{15}$$ | $$3+5=8$$ | $$\displaystyle\frac{3}{15}+\frac{5}{15} =\frac{8}{15}$$
$$2$$ | $$4$$ | $$\displaystyle \frac{4}{15}$$ | $$3+5+4=12$$ | $$\displaystyle \frac{12}{15}$$
$$3$$ | $$2$$ | $$\displaystyle \frac{2}{15}$$ | $$3+5+4+2=14$$ | $$\displaystyle \frac{14}{15}$$
$$4$$ | $$1$$ | $$\displaystyle \frac{1}{15}$$ | $$3+5+4+2+1=15$$ | $$\displaystyle\frac{15}{15}$$
$$\sum$$ | $$15$$ | $$1$$ | |

Notice that the difference between the cumulative frequency and the relative cumulative frequency is only that in the relative case we must divide by the total number of data. This can help us avoid unnecessary calculations.
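The same table can be produced with a few lines of code; here is a small sketch using the fifteen answers from the example:

from collections import Counter

data = [1, 1, 2, 0, 3, 2, 1, 4, 2, 3, 1, 0, 0, 1, 2]
N = len(data)

counts = Counter(data)
cumulative = 0
print("x   f_i   n_i      F_i   N_i")
for value in sorted(counts):
    f_i = counts[value]                 # absolute frequency
    n_i = f_i / N                       # relative frequency
    cumulative += f_i                   # cumulative frequency F_i
    N_i = cumulative / N                # relative cumulative frequency
    print(f"{value}   {f_i}     {n_i:.3f}    {cumulative}    {N_i:.3f}")
print(f"sum f_i = {sum(counts.values())},  sum n_i = {sum(c / N for c in counts.values()):.1f}")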
Answer $$\cos\frac{\theta}{2}=\frac{R-b}{R}$$ Work Step by Step We see in the figure that $BC$ is a circular arc with $OB=OC=R$. That means $OA=R$ as well, since $A$ is a point on the arc. Thus, $OH=OA-AH=R-b$. As triangle $OHC$ is a right triangle, we can compute $\cos\frac{\theta}{2}$ from the sides of the triangle: $$\cos\frac{\theta}{2}=\frac{OH}{OC}$$ $$\cos\frac{\theta}{2}=\frac{R-b}{R}$$
I'm stuck on one really simple example, I can't figure out what's happening to energy here... (This is not homework) Let's consider an uncharged electric cable, we'll model it by an infinite cylinder on the axis $(Oz)$ with radius $a$ and conductivity $\gamma$ with uniform, constant current $I$, and we'll obviously use cylindrical coordinates $(\vec{e_r},\vec{e_\theta},\vec{e_z})$. If I haven't made any mistake, we should have the electric and magnetic fields $\vec{E},\vec{B}$ as follows : $\vec{E}=\begin{cases}\frac{1}{\gamma}\frac{I}{\pi a^2}\vec{e_z}&r<a\\\vec{0}& r>a\end{cases}$, $\vec{B}=\begin{cases}\frac{\mu_0I}{2\pi}\frac{r}{a^2}\vec{e_\theta}&r\le a\\\frac{\mu_0I}{2\pi}\frac{1}{r}\vec{e_\theta}&r\ge a\end{cases}$ Thus the poynting vector $\vec{\Pi}=\begin{cases}\frac{-r}{2\pi^2\gamma a^4}I^2\vec{e_r}&r< a\\\vec{0}&r> a\end{cases}$ Hence, if we consider an lateral surface $(\mathcal{S})$ orientated towards the inside and a height $h$ of cable, $$\iint_{(\mathcal{S})}\vec{\Pi}\cdot\vec{dS}=\begin{cases}\frac{h}{\pi\gamma a^2}I^2&r< a\\\vec{0}&r> a\end{cases}$$. That expression is most puzzling. The energy comes from the side, however there is no energy outside.... where does the energy come from ? Something weird is going on here. At first I thought it was because we had made the hypothesis of an infinite cable, but it doesn't seem to be related at all. To allow the energy flux, there would need to be charges inside the cable, which is initially not charged. Thus I thought there might be some effect like the Hall effect with local charges appearing, but once again I do not see why that would be.
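One bookkeeping check that can be done numerically (it does not by itself answer the deeper question of where the energy comes from): the inward Poynting flux through the lateral surface at $r=a$ equals the Joule power dissipated in the enclosed length of wire. A minimal sketch with made-up values for $I$, $a$, $\gamma$ and $h$:

import numpy as np

# Arbitrary illustrative values (not from the question)
I = 10.0          # A
a = 1e-3          # m, wire radius
gamma = 5.8e7     # S/m, conductivity (roughly copper)
h = 1.0           # m, length of the considered section

# Inward Poynting flux through the lateral surface r = a:
# |Pi(a)| = E * (B/mu0), times the lateral area 2*pi*a*h
E = I / (gamma * np.pi * a**2)        # axial electric field inside the wire
H = I / (2 * np.pi * a)               # B/mu0 at r = a
flux_in = E * H * (2 * np.pi * a * h)

# Joule dissipation in the same section: R * I^2 with R = h / (gamma * pi * a^2)
P_joule = h / (gamma * np.pi * a**2) * I**2

print(flux_in, P_joule)               # the two agree: the radial inflow exactly feeds the dissipation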
Symmetry of singular solutions for a weighted Choquard equation involving the fractional $ p $-Laplacian
Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
The article concerns solutions $ u \in L_{sp} \cap C^{1, 1}_{\rm loc}(\mathbb{R}^n\setminus\{0\}) $ of the weighted Choquard equation
$$ (-\Delta)^s_p u = \left(\frac{1}{|x|^{n-\beta }} * \frac{u^q}{|x|^\alpha}\right) \frac{u^{q-1 }}{|x|^\alpha} \quad\text{ in } \mathbb{R}^n \setminus \{0\}, $$
where $ 0 < s < 1 $, $ 0 < \beta < n $, $ p>2 $, $ q\ge 1 $ and $ \alpha>0 $.
Keywords: Choquard equations, fractional p-Laplacian, symmetry of solutions, positive solutions, singular solutions.
Mathematics Subject Classification: 35R11, 35J92, 35J75, 35B06.
Citation: Phuong Le. Symmetry of singular solutions for a weighted Choquard equation involving the fractional $ p $-Laplacian. Communications on Pure & Applied Analysis, 2020, 19 (1): 527-539. doi: 10.3934/cpaa.2020026
You are right. A way to confirm your intuition is that the step response exhibits a single time constant \$\tau=1/4\$ (i.e., one pole, at \$s=-4\$) and a steady state of \$y(\infty)=1\$. Even without going through calculations, you can then argue $$ T(s)=\frac{4}{s+4}. $$ Anyway your friend's answer is wrong not only because of the chosen parameters or form, but because of the use he makes of it. What defines the response of a system to an input in the time domain is the result of a convolution product, not of a pointwise product: $$ y(t) = T(t)\star u(t) = \int_0^tT(\tau)\cdot u(t-\tau)\,d\tau \neq T(t)\cdot u(t)$$ What \$u(t)\$ does to the system at a time \$t=t_x\$ has an impact on \$y(t)\$ (theoretically) forever. In a pointwise multiplication you simply don't have that. Obviously the type of multiplication used is not the result of a design choice; it is derived mathematically. But this is a simple argument to show your friend he can't be right. The main reason the Laplace transform is so popular is exactly that the Laplace transform of a convolution product is a simple product: $$ L[T(t)\star u(t)] = T(s)U(s).$$ It makes representations easier to deal with and more elegant.
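A short numerical illustration of the convolution point (a sketch; the time step and horizon are arbitrary): convolving the impulse response \$T(t)=4e^{-4t}\$ of \$T(s)=\frac{4}{s+4}\$ with a unit step reproduces the step response \$1-e^{-4t}\$, while a pointwise product of the two time signals does not.

import numpy as np

dt = 1e-3
t = np.arange(0, 2, dt)

h = 4 * np.exp(-4 * t)          # impulse response of T(s) = 4/(s+4)
u = np.ones_like(t)             # unit step input

# Convolution y = (h * u)(t), approximated by a Riemann sum
y_conv = np.convolve(h, u)[:len(t)] * dt

# Naive pointwise "multiplication" of the two time signals (what the friend did)
y_prod = h * u

y_exact = 1 - np.exp(-4 * t)    # known step response
print("max |convolution - exact| :", np.max(np.abs(y_conv - y_exact)))   # small (discretization error)
print("max |pointwise   - exact| :", np.max(np.abs(y_prod - y_exact)))   # large: wrong result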
The Pseudo-Hyperbolic Metric and Lindelöf's Inequality Let $\Delta$ denote the unit disc in the complex plane $\mathbb{C}$. For any two points $z,a\in \Delta$, we define the pseudo-hyperbolic metric on $\Delta$ by $$d(z,a)=|\varphi_a(z)|=\bigg|\frac{z-a}{1-\bar a z}\bigg|.$$Observe that if $f:\Delta\to\Delta$ is holomorphic then by the Schwarz-Pick Theorem$$d(f(z),f(a))=\bigg|\frac{f(z)-f(a)}{1-\overline{f(a)}f(z)}\bigg|\leq \bigg|\frac{z-a}{1-\bar a z}\bigg|=d(z,a),$$and if $f\in \text{Aut}(\Delta)$ ($f$ is an automorphism of $\Delta$) then equality holds, i.e.$$d(f(z),f(a))=d(z,a) \qquad \text{for all $z,a\in \Delta$}.$$In today's post, we prove that $d$ is actually a metric on $\Delta$. Next time, as an application, we will use a useful inequality concerning $d$ to prove the following theorem: Theorem (Lindelöf's Inequality). Every holomorphic function $f:\Delta\to\Delta$ satisfies $$\frac{|f(0)|-|z|}{1-|f(0)||z|}\leq |f(z)|\leq \frac{|f(0)|+|z|}{1+|f(0)||z|}\quad\text{for all $z\in\Delta$}.$$ Note: Today's post is purely computational and is simply meant to be a reference (for homework-checking or the like). It's usually the case that calculations like those below are "left as an exercise." For this reason I wanted to include them on the blog just in case a student or two might find them helpful, because, well, occasionally we all be like --> Another note: I've tried to make sure there are no typos, but undoubtedly I've missed some. (One can get lost in a forest of inequalities.) Let me know if you spot any, and I'll correct them! The Pseudo-Hyperbolic Metric IS a Metric Proof. Let $z,a\in \Delta$. That $d(z,a)\geq 0$ is clear, as is the fact that $d(z,a)=0$ if and only if $z=a$. To see that $d(z,a)=d(a,z)$, notice that \begin{align}\label{one}d(z,a)=|\varphi_a(z)|= \bigg|\frac{z-a}{1-\bar a z}\bigg| \qquad \text{and} \qquad d(a,z)=|\varphi_z(a)|=\bigg|\frac{a-z}{1-\bar z a}\bigg|.\end{align}But \begin{align*}|1-\bar a z|^2 &= (1-\bar a z)(1-a\bar z)\\&= (1-\bar za)(1-z\bar a)\\&=|1-\bar z a|^2\end{align*}and so $|1-\bar a z|=|1-\bar z a|$. This implies, referring back to (\ref{one}), that $d(z,a)=d(a,z)$. It remains to show the triangle inequality, and we claim it suffices to prove \begin{align}\label{suffice} d(t_1,t_2)\leq |t_1|+|t_2|, \quad \text{for all $t_1,t_2\in\Delta$.} \end{align} Indeed for $z,w,a\in \Delta$, the triangle inequality $d(z,w)\leq d(z,a)+d(a,w)$ holds if and only if $d(\varphi_a(z),\varphi_a(w))\leq d(\varphi_a(z),0)+d(0,\varphi_a(w))$ since $\varphi_a\in \text{Aut}(\Delta)$ and $\varphi_a(a)=0$. Letting $t_1=\varphi_a(z)$ and $t_2=\varphi_a(w)$, this becomes $d(t_1,t_2)\leq d(t_1,0)+d(0,t_2)$ which is precisely (\ref{suffice}) since $d(z,0)=|z|$ for any $z\in \Delta$. Thus our goal is to prove (\ref{suffice}), but in fact we aim to prove a much stronger result, namely: Proposition. For any $t_1,t_2\in\Delta$,$$d(t_1,t_2)\leq \frac{|t_1|+|t_2|}{1+|t_1||t_2|.}$$ This of course implies (\ref{suffice}) since $1/(1+|t_1||t_2|)\leq 1$. 
So prove the proposition, let $t_1,t_2\in \Delta$ and observe that \begin{align}\label{here} 1-d(t_1,t_2)^2 \geq \frac{(1-|t_1|^2)(1-|t_2|^2)}{(1+|t_1||t_2|)^2} \end{align} since \begin{align*} 1-d(t_1,t_2)^2 &= 1-\bigg| \frac{t_1-t_2}{1-t_1\bar t_2} \bigg|^2 \\[5pt] &= \frac{(1-t_1\bar t_2)(1-\bar t_1 t_2)-(t_1-t_2)(\bar t_1-\bar t_2)}{|1-t_1\bar t_2|^2} \\[5pt] &= \frac{(1-|t_1|^2)(1-|t_2|^2)}{|1-t_1\bar t_2|^2} \\[5pt] &\geq \frac{(1-|t_1|^2)(1-|t_2|^2)}{(1+|t_1||t_2|)^2}, \end{align*} where the last line follows from the triangle inequality. We also compute the following: \begin{align*} d(|t_1|,-|t_2|)&=\bigg| \frac{|t_1|-(-|t_2|)}{1-|t_1|(\overline{-|t_2|})} \bigg| \\[5pt] &= \bigg| \frac{|t_1|+|t_2|}{1+|t_1||t_2|} \bigg| \\[5pt] &= \frac{|t_1|+|t_2|}{1+|t_1||t_2|} \end{align*} and so \begin{align*} 1-d(|t_1|,-|t_2|)^2 = 1-\left( \frac{|t_1|+|t_2|}{1+|t_1||t_2|} \right)^2 = \frac{(1-|t_1|^2)(1-|t_2|^2)}{( 1+|t_1||t_2|)^2}. \end{align*} Comparing the previous line with (\ref{here}) we see that $$ 1- d(t_1,t_2)^2\geq 1 - d(|t_1|,-|t_2|)^2$$ which implies $$d(t_1,t_2) \leq d(|t_1|,-|t_2|) = \frac{|t_1|+|t_2|}{1+|t_1||t_2|} .$$ This proves the proposition and hence the claim that $d$ satisfies the triangle inequality. Thus $d$ is indeed a metric.
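As a sanity check (not part of the proof), the proposition and the triangle inequality can be spot-checked numerically on random points of the disc:

import numpy as np

rng = np.random.default_rng(0)

def d(z, a):
    # pseudo-hyperbolic distance |(z - a) / (1 - conj(a) z)|
    return abs((z - a) / (1 - np.conj(a) * z))

def random_disc_points(m):
    # points with |z| < 1; sqrt on the radius gives a uniform distribution over the disc
    r = np.sqrt(rng.random(m))
    theta = 2 * np.pi * rng.random(m)
    return r * np.exp(1j * theta)

z, a, w = (random_disc_points(100_000) for _ in range(3))

prop_ok = d(z, w) <= (np.abs(z) + np.abs(w)) / (1 + np.abs(z) * np.abs(w)) + 1e-12
tri_ok = d(z, w) <= d(z, a) + d(a, w) + 1e-12
print("proposition holds on all samples:", prop_ok.all())
print("triangle inequality holds on all samples:", tri_ok.all())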
The assertion $\text{Ord}$ is Mahlo is the scheme expressing that the proper class REG consisting of all regular cardinals is a stationary proper class, meaning that it has elements from every definable (with parameters) closed unbounded proper class of ordinals. In other words, the scheme asserts for every formula $\varphi$, that if for some parameter $z$ the class $\{\alpha\mid \varphi(\alpha,z)\}$ is a closed unbounded class of ordinals, then it contains a regular cardinal. If $\kappa$ is Mahlo, then $V_\kappa\models\text{Ord is Mahlo}$. Consequently, the existence of a Mahlo cardinal implies the consistency of $\text{Ord}$ is Mahlo, and the two notions are not equivalent. Moreover, since the ORD is Mahlo scheme is expressible as a first-order theory, it follows that whenever $V_\gamma\prec V_\kappa$, then also $V_\gamma$ satisfies the Levy scheme. Consequently, if there is a Mahlo cardinal, then there is a club of cardinals $\gamma\lt\kappa$ for which $V_\gamma\models\text{Ord is Mahlo}$. A simple compactness argument establishes that $\text{Ord}$ is Mahlo is equiconsistent over $\text{ZFC}$ with the existence of an inaccessible reflecting cardinal. On the one hand, if $\kappa$ is an inaccessible reflecting cardinal, then since $V_\kappa\prec V$ it follows that any class club definable in $V$ with parameters below $\kappa$ will be unbounded in $\kappa$ and hence contain $\kappa$ as an element and consequently contain an inaccessible cardinal. On the other hand, if $\text{Ord}$ is Mahlo is consistent, then every finite fragment of the theory asserting that $\kappa$ is an inaccessible reflecting cardinal (which is after all asserted as a scheme) is consistent, and hence by compactness the whole theory is consistent. If there is a pseudo uplifting (proof in that article) cardinal, or indeed, merely a pseudo $0$-uplifting cardinal, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus $\text{Ord}$ is Mahlo.[1] The Vopěnka principle implies that $Ord$ is Mahlo: every club class contains a regular cardinal and indeed, an extendible cardinal and more. If the Vopěnka scheme holds, then there is a class-forcing extension $V[C]$ where it continues to hold, yet in which the Vopěnka principle fails and Ord is not Mahlo, although it remains definably Mahlo.
It is relatively consistent that GBC and the generic Vopěnka principle hold, yet $Ord$ is not Mahlo. It is relatively consistent that ZFC and the generic Vopěnka scheme hold, yet $Ord$ is not definably Mahlo and not even $∆_2$-Mahlo. In such a model, there can be no $Σ_2$-reflecting cardinals and therefore also no remarkable cardinals.
References
Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals. 2014.
Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
Gabriel Romon
Student at ENSAE Paristech and ENS Paris-Saclay (MVA Master's degree). Interested in statistics and machine learning. A few recent good answers of mine: DCT for convergence in probability; $\frac{S_n}{\sqrt n}$ is dense in $\mathbb R$ almost surely; $\cos(2^n)$ is dense in $[-1,1]$; Showing $(X_n >c_n \text{ i.o.})=(\max_{1\leq i\leq n}X_i >c_n \text{ i.o.})$; Derivative of the MGF; Infinite convex combination of characteristic functions is a characteristic function; Different $\mathcal C^\infty$ characteristic functions that coincide in a neighborhood of $0$; Different metrics that metrize convergence in probability; Relations between different definitions of the Gaussian width; Weak consistency from asymptotic unbiasedness; $(\sum_{j=1}^{n} X_{j}) / b_{n} \overset {P}{\to} C$ implies $b_{n}\sim b_{n+1}$; CLT and pointwise convergence of densities; If $X\in L^1$, $P(X>x)=o\left(\frac 1x\right)$; Convex function with directional derivatives in all directions is differentiable; Concentration of the $q$-norm of a Gaussian vector; Almost sure convergence of $\sum_n \frac{X_n}n$.
Top network posts: Polynomials such that roots=coefficients; A game with $\delta$, $\epsilon$ and uniform continuity; Find the liar in the library; Draw a line through all doors; 'Obvious' theorems that are actually false; Show that the closure of a subset is bounded if the subset is bounded.
I was watching Geoff Hinton's lecture from May 2013 about the history of deep learning and his comments on the rectified linear units (ReLUs) made more sense than my previous reading on them had. Essentially he noted that these units were just a way of approximating the activity of a large number of sigmoid units with varying biases. "Is that true?" I wondered. "Let's try it and see ... "

%pylab inline

Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.

Let's first define a logistic sigmoid unit. This looks like \[ f(x;\alpha) = \frac{1}{1+ \exp(-x + \alpha)} \] where $ \alpha $ is the offset parameter that sets the value at which the logistic evaluates to 0.5. Programmatically this looks like:

def logistic(x, offset):
    # x is an array of numbers at which to evaluate the logistic unit, offset is the offset value
    return 1 / (1 + np.exp(-x + offset))

When evaluated for a number of values we see a distinctive 's'-shape. As the limits of the function evaluation expand this shape becomes much more 'squashed.' This is one of the difficulties of using such a function to limit the input values to a learning system. If you're unsure of the range of values to be inputted then the output can easily saturate for very large or very small values. Input normalization can help this, but sometimes this isn't practical (e.g. if you have a bimodal input distribution with heavy tails.) The $ \alpha $ parameter lets you adjust for this.

x = np.linspace(-10, 10, 200)
y = logistic(x, 0)
fig = plot(x, y)
xlabel('input x')
ylabel('output value of logistic')
gcf().set_size_inches(6, 5)

So what happens if you were to sum many of these functions, all with a different bias?

N_sum = 100  # Number of logistic units to sum
offsets = np.linspace(0, np.max(x), N_sum)
y_sum = np.zeros(np.shape(x))
for offset in offsets:
    y_sum += logistic(x, offset)
y_sum = y_sum / N_sum  # normalize by the number of units

plot(x, y_sum)
xlabel('Input value')
ylabel('Logistic Unit Summation output')

Yep, that definitely is starting to look like the ReLUs. So it turns out that you can approximate this summation using the equation \[ f(x) = \log(1+\exp(x)) \]

x = np.linspace(-10, 10, 200)
inp = np.linspace(-10, 10, 200)
inp[inp < 0] = 0
f2 = plot(x, inp)
xlabel('Input value')
ylabel('ReLU output')
relu_approx = np.log(1 + np.exp(x))
f3 = plot(x, relu_approx)

From here Hinton said, "do you really need the 'log' and the 'exp', or could I just take $ max(0,input)$? And that works fine", thus giving you the ReLU. Hinton's discussion is embedded below. He starts talking about different learning units at 27 minutes, 10 seconds.

from IPython.display import YouTubeVideo
YouTubeVideo('vShMxxqtDDs?t=27m10s')
Suppose $f:\mathbb R\to\mathbb R^n$ satisfies $$ f'(t) = A(t)f(t), $$ where $A$ is a smooth matrix-valued function. If I know that the matrix $A(t)$ is asymptotically nilpotent, how could I prove a sub-exponential estimate for the solution $f$? To be more explicit, suppose $A(t)^2\to0$ but $A(t)\not\to0$ as $t\to\infty$. Then one would expect slower than exponential (perhaps even linear) growth for $f$; if $A$ and $A^{-1}$ had roughly constant norm, then one would expect exponential growth. My main interest is in the case when $A(t)^2\to0$, but also $A(t)^k\to0$ for $k>2$ is interesting. If I apply Grönwall's inequality to the function $t\mapsto|f(t)|^2$ and observe that $$ \frac{d}{dt}|f(t)|^2 = 2\langle f(t),A(t)f(t)\rangle \leq 2\|A(t)\|\cdot|f(t)|^2, $$ I get the exponential estimate $$ |f(t)| \leq |f(0)|\exp\left(\int_0^t\|A(s)\|ds\right) $$ for $t>0$. This estimate is much worse than I would expect in an asymptotically nilpotent case, but I don't know how to get a polynomial (or other sub-exponential) estimate. Example: $n=2$ and $A(t)=\begin{pmatrix}0&1\\(1+t^2)^{-2}&0\end{pmatrix}$.Now $A(t)^2=(1+t^2)^{-2}I$ which goes to zero as $t\to\infty$.The solution to our ODE with $f(0)=(a,b)$ is$$f(t)=\begin{pmatrix}\sqrt{1+t^2}(a+b\arctan(t))\\\frac1{\sqrt{1+t^2}}(at+b+bt\arctan(t))\end{pmatrix}.$$The solution grows essentially linearly: $|f(t)|\leq C|f(0)|(1+t)$ for any $t>0$ and some constant $C$.On the other hand, if I use Grönwall's inequality, I have the estimate$$2\langle f(t),A(t)f(t)\rangle=2f_1(t)f_2(t)[1+(1+t^2)^{-2}]\leq|f(t)|^2[1+(1+t^2)^{-2}],$$which cannot be significantly improved.Plugging this into Grönwall's inequality gives an exponential growth estimate for $f$, which much weaker than the linear estimate from the explicit solution.[The example ends here.] I could promote the ODE to a second order one: $f''(t)=[A(t)^2+A'(t)]f(t)$. Now the coefficient $A(t)^2$ is asymptotically small, but $A'(t)$ need not be. And even if it were, I don't know how to use Grönwall for a second order ODE. If $A$ was constant, I could use nilpotency to get $f(t)=e^{At}f(0)=(I+At)f(0)$. There is a series expansion also for time-dependent $A$ (the Dyson series), but I couldn't see how to turn that into a rigorous estimate. I do not assume that $A(t)$ is nilpotent for any $t$, just that some power tends to zero as $t\to\infty$. Question:Given some assumptions on the decay rate of $A(t)^2$ (or $A(t)^k$ for some $k>2$), what tools could I use to prove a growth estimate for norm of the solution $f(t)$?I am looking for an estimate that I could play with to see how different decay rates for $A^2$ give different growth rates for $f$. Edit:If we denote $B(t)=A(t)+\phi(t)I$ for some scalar function $\phi$ and $g(t)=\exp\left(\int_0^t\phi(s)ds\right)f(t)$, then $g'(t)=B(t)g(t)$.One could try to get estimates for $g$ and convert them to estimates for $f$, but it seems to me that this method cannot add much.(The exponentials of integrals coming from this change of functions and Grönwall's estimate cancel each other.)This is a generalization of an idea Normal Human gave in a comment below (there $\phi$ was constant).
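For experimenting with different decay rates, one can integrate the system directly and compare the observed growth of $|f(t)|$ with the explicit solution and with the Grönwall bound. A minimal sketch for the example above (fixed-step RK4; the step size, horizon and initial data are arbitrary choices):

import numpy as np

def A(t):
    # the example matrix: A(t)^2 = (1+t^2)^{-2} * I, which tends to 0 as t -> infinity
    return np.array([[0.0, 1.0],
                     [(1.0 + t**2)**-2, 0.0]])

def rk4_step(t, f, dt):
    k1 = A(t) @ f
    k2 = A(t + dt/2) @ (f + dt/2 * k1)
    k3 = A(t + dt/2) @ (f + dt/2 * k2)
    k4 = A(t + dt) @ (f + dt * k3)
    return f + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

dt, T = 1e-3, 50.0
f = np.array([1.0, 1.0])        # f(0) = (a, b) = (1, 1)
int_normA = 0.0                 # accumulates int_0^T ||A(s)||_2 ds for the Gronwall bound
t = 0.0
while t < T:
    int_normA += np.linalg.norm(A(t), 2) * dt
    f = rk4_step(t, f, dt)
    t += dt

# Explicit solution from the question with a = b = 1
f1_exact = np.sqrt(1 + T**2) * (1 + np.arctan(T))
f2_exact = (T + 1 + T * np.arctan(T)) / np.sqrt(1 + T**2)

print("numerical |f(T)|           :", np.linalg.norm(f))                  # grows roughly linearly in T
print("explicit-solution |f(T)|   :", np.hypot(f1_exact, f2_exact))
print("Gronwall exponential bound :", np.sqrt(2) * np.exp(int_normA))     # vastly larger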
Why are metals opaque? Is it due to the free electrons in a metal or a material's intrinsic properties?

Metals have a high concentration $n$ of quasi-free electrons constituting an electron plasma. This results in plasma frequencies $$\omega_p=\sqrt{\frac{n e^2}{\epsilon m_e}}$$ which typically lie between the visible and the ultraviolet. Below the plasma frequency, an ideal plasma (neglecting collisions) has a negative permittivity: it reflects transverse EM waves perfectly and supports only imaginary wave vectors, i.e., purely exponentially damped waves. Only above the plasma frequency is EM wave propagation possible. This explains the opacity of metals in the frequency range of visible light and below.
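To see why the plasma frequency lands above the visible range, here is a back-of-the-envelope sketch (my own numbers; the free-electron density is an assumed textbook value for copper):

import math

n   = 8.5e28        # free electrons per m^3 (assumed value for copper)
e   = 1.602e-19     # elementary charge, C
eps = 8.854e-12     # vacuum permittivity, F/m
m_e = 9.109e-31     # electron mass, kg

omega_p = math.sqrt(n * e**2 / (eps * m_e))      # plasma frequency, rad/s
lambda_p = 2 * math.pi * 3.0e8 / omega_p         # corresponding vacuum wavelength, m

print(f"omega_p ~ {omega_p:.2e} rad/s, lambda_p ~ {lambda_p * 1e9:.0f} nm")
# roughly 1.6e16 rad/s, i.e. ~115 nm -- in the ultraviolet, so visible light is reflected/damped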
In general, you won't be able to replicate the option by a portfolio of the form $\Delta_t S_t + B_t$, though it is possible to do so with a portfolio of the form $\Delta_t^1 S_t + \Delta_t^2B_t$; see Chapter 3 of this book. Here, $B_t=e^{rt}$ is the value of the money-market account, and $r$ is the risk-free interest rate.On the other hand, you can create ... This is a basic fact about futures trading and the storage of commodities.The phrase that was used by futures traders in the old days (and probably still today) was "the contango is limited by the carrying cost, there is no limit to the backwardation". This means that for example if spot gold is at 1200, gold dated one year from now cannot possibly sell ... There is certainly much more to quantitative finance than technical analysis, and a previous question does a decent job of outlining the different areas, as does the wikipedia on "quantitative analyst".Even for what wikipedia terms an "algorithmic trading quant" or what Mark Joshi terms a "statistical arbitrage quant", technical analysis is just one tool ... I had read some of them; actually, it does not exist an on-line library that collected them (or, better, it existed here, but it seems the website does not work anymore).I reported here below some of them that you did not find:More Than You Ever Wanted To Know* About Volatility SwapsModel RiskThe Volatility Smile And Its implied TreeEnhanced Numerical ... The best overview I have seen so far is this paper which lists 214 (!) factors (or anomalies if you like) on over one hundred (!) pages:Harvey, Campbell R. and Liu, Yan and Zhu, Caroline, …and the Cross-Section of Expected Returns (February 3, 2015). Available at SSRN: https://ssrn.com/abstract=2249314 or http://dx.doi.org/10.2139/ssrn.2249314Abstract: ... I think you might find this answer in The future language of quant programming? useful.People get this problem wrong because they always end up discussing the theoretical advantages of these languages rather than the practical uses of these languages.Theoretically speaking:Haskell is elegant and has many of the theoretical advantages (language ... A hurst exponent, H, between 0 to 0.5 is said to correspond to a mean reverting process (anti-persistent), H=0.5 corresponds to Geometric Brownian Motion (Random Walk), while H >= 0.5 corresponds to a process which is trending (persistent).The hurst exponent is limited to a value between 0 to 1, as it corresponds to a fractal dimension between 1 and 2 (D=2-... Hi Quantitative Finance has in my opinion two main streams.The first is about of valuation of some derivative contracts in a consistent way. This is a theory and once paradigms accepted it is coherent, it can considered as science at the same level as economy can pretend to this kind of terminology.The second is about making (or trying to) prediction(s) ... C++Think in C++ can be a starting point. This is free. And, you might study Beginning Visual C++ 2010 by Ivan HortonQuantitative finance and C++ (if you are derivatives-oriented)You might find Mark Joshi as well as Daniel Duffy's writings of (great) interest.It is easy to find the references of both their books on a website such as Amazon.You can also ... 
For a basic introduction, the three chapters in Hull's Options, Futures, and Other Derivatives on Binomial Trees, Wiener Processes and Ito's Lemma, and The Black-Scholes-Merton Model helped me start to understand the basic concepts within a broader context.After that, Shreve's two books seems to be pretty popular (see here and here). He explains things ... Quantopian provides both the fundamental data (from Morningstar), as well as the backtest platform to reproduce results from the books you mentioned. Here's the introduction to our fundamentals offering: https://www.quantopian.com/posts/fundamental-data-from-morningstar-now-available-for-backtesting(disclosure: I'm the ceo of quantopian) Quant in trading creates system that can be backtested, has a certain risk valuation. It is more like playing chess when you need to calculate multistep strategy.Let say certain instrument moves 1% a day. Our goal is to create strategy for one year (250 step strategy). If we use stock + options we get 50 or more entries a day into our system for analysis. ... A detailed description of the Hurst Exponent can be found here. A further (rather short search of Google) turned up this site claiming to provide an Excel Workbook with, among other things, Hurst Exponent estimation. There is no guarantee you can improve the Sharpe in this case, depending on the correlation of the returns streams. For the two asset case (you can model your strategies as assets and take a linear combination of them), if the correlation of the two assets is equal to the ratio of Sharpes (smaller to larger), there is zero diversification benefit.For ... Unfortunately, there is no correct answer for this question, it's like what car you should drive on your weekend.C++ is a popular language in quantitative finance, but it's usually (but not always!) only used to build the application backbone, such as derivative pricing. Why C++? C++ is a good choice because C++ is platform independent, we can natively ... This thread will inevitably close because it doesn't meet community guidelines, but I respect your passion in this field and my best suggestion for you is that if you're trying to emulate a MFE education, go look up the course listings of any reputable MFE program, and then look into the sites for those (past) classes and see the recommended readings and ... Yes. Mark Joshi's book is a good preparation.For this question you are given some function random() yielding a uniform random number and what we want is a function next() which yields realizations of a random $X$ variable with values $v_j$ such that $P(X=v_j)=p_j$.From standard textbooks we know the following transformation: If $u_i$ are uniform random ... Yes, you can say they are traded on listed options, but only for a few limited markets, and not that liquid relative to options on a single asset.For instance, the commodity futures space, there are options on commodity spreads listed, and a strike of 0 would be the same as an exchange option.These options have some liquidity in energy and grain markets,... I found these nice lecture note by Karl Sigman on the web. On page three you see if $X\sim N(\mu,\sigma)$ then the moment generating function (mgf) of $X$ is given by$$M_X(s) = E(exp(sX)) = \exp( \mu s + \sigma^2 s^2 /2)$$Thus for Brownian motion with drift $X_t$ you get$$M_{X_t}(s) = E(exp(s X_t)) = \exp( \mu t s + \sigma^2 s^2 t /2).$$Finally for $... 
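One of the excerpts above, about producing realizations of a discrete random variable $X$ with $P(X=v_j)=p_j$ from a uniform random(), breaks off mid-formula. The standard transformation it alludes to is inverse-transform sampling; a minimal sketch of that idea (the values and probabilities below are made up for illustration):

import random
import bisect
import itertools

values = [10.0, 20.0, 30.0]
probs  = [0.2, 0.5, 0.3]
cdf = list(itertools.accumulate(probs))            # [0.2, 0.7, 1.0]

def next_sample():
    u = random.random()                            # uniform on [0, 1)
    return values[bisect.bisect_right(cdf, u)]     # first index whose cumulative probability exceeds u

samples = [next_sample() for _ in range(100_000)]
for v, p in zip(values, probs):
    print(v, samples.count(v) / len(samples), "target", p)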
If you have a fairly good model of regime separation (of course requiring a good quantitative measure of regime state classifications -- momentum and reverting) and predictive likelihood (using something like a markov state transition matrix)-- one could weight contributions corresponding to next state probabilities. Of course, you will rarely get a ... It depends the kind of information you look for.Questions and answer.This web site is really the best I know on quant finance. You can browse "tags" and go the the associated wiki pages to have summarized information.Wilmott Forum is not that bad;Nuclear Phynance is good too.Generic knowledge.It depends on what area on finance you are interested in.... The general ideaFor equity securities, a simple backtest will typically consist of two steps:Computation of the portfolio return resulting from your portfolio formation rule (or trading strategy)Risk-adjustment of portfolio returns using an asset pricing modelStep 2 is simply a regression and computationally very simple in Matlab. What's trickier is ... There is not a single 'interest-rate' to reduce, there are various interest rates in play.The central bank mandate is usually to control CPI or a similar measure of inflation (e.g. Bank of England's 2% inflation target for GBP). There are various tools for them to do this, including QE and setting the central bank rate.However, at the moment, the central ... Just an update on my playlist, It has 33 videos now, roughly 3x more vids. I have included some more general economics and machine learning and programming vids, which have relevant applications in Q finance.https://www.youtube.com/watch?v=jXFNpDcYOxM&list=PLqMiStH7exaXmQqV7y-tg68f2ZYZK3Yur 1) In an academic sense could it be enough to use ML to create a new factor portfolio?The original FF papers (92,93) said something deep because they contradicted the dominant theory of the day. When you say in an academic sense, you may not get much respect from serious academics if you data mine a factor these days. However, as a statistical exercise, ... Another way of staying "time-varying risk-premium", is saying that the risk-premium is predictable. However, that the fact that the risk-premium is predictable does not means that you can make money out of this.The best two references to understand this are:Cochrane (2008) - The dog that did not barkGoyal and Welch (2007)The first tells you what ... I'm assuming you're talking about a European option. I did a similar problem for my homework recently, I used the in-out parity for pricing the up and in barrier option.Basically European Option = Knock up and in Option + Knock up and out optionYou can price the up and out easily using Binomial and use BS formula for pricing the European Option, then ...
By definition, a renormalizable quantum field theory (RQFT) has the following two properties (only the first one matters in regard to this question): i) Existence of a formal continuum limit: The ultraviolet cut-off may be taken to infinite, the physical quantities are independent of the regularization procedure (and of the renormalization subtraction point, if it applies). ii) There are no Landau-like poles: All the (adimensionalized) couplings are asymptotically safe (roughly, their value remain finite for all values — including arbitrarily high values — of the cut-off.) (Footnote: Here one has to notice that there are Gaussian and non Gaussian fixed points.) Thus, the answer to this question is: "The only condition is the renormalizability of the theory." The fact that in renormalizable theories some results seem to depend on the regularization procedure (dimensional regularization, Pauli-Villars, sharp cut-off in momentum space, lattice, covariant and non-covariant higher derivatives,etc.) and on the renormalization subtraction point (for example, minimal subtraction MS or renormalization at a given momentum) is due to the fact that what we call 'results' in QFT are expressions that relate a measurable magnitude, such as a cross section, to non-measurable magnitudes, such as coupling constants, which depend on the regularization or renormalization prescription. If we could express the measurable magnitudes in terms of other measurable magnitudes, then these relations would not depend on the regularization or renormalization prescription. That is, in QFT results usually have the form: $$P_i=P_i \, (c_1, …, c_n)$$ where $P_i$ are physical (directly measurable) magnitudes, such as cross-sections at different values of the incoming momenta, and $c_i$ are renormalized, but not physical, parameters, such as renormalized coupling constants. The $c_i$'s are finite and regularization/renormalization dependent. The $P_i$'s are finite and renormalization/regularization independent. Therefore the equations above are regularization/renormalization dependent. However, if we could obtain an expression that involved only physical magnitudes $P_i$, $$P_i=f_i\, (P_1,…, P_{i-1}, P_{i+1},… ,P_m)\,,$$ then the relation would be regularization/renormalization independent. Example: Considerer the following regularized (à la Pauli-Villars) matrix element (it is not a cross-section, but it is directly related) before renormalization (up to pure numbers everywhere) $$A(s,t,u)=g_B+g_B^2\,(\ln\Lambda^2/s+\ln\Lambda^2/t+\ln\Lambda^2/u)$$ where $g_B$ is the bare coupling constant, $\Lambda$ is the cut-off, and $s, t, u$ are the Mandelstan variable. At a different energy, one obviously has $$A(s',t',u')=g_B+g_B^2(\ln\Lambda^2/s'+\ln\Lambda^2/t'+\ln\Lambda^2/u')$$ And then $$A(s,t,u)=A(s',t',u')+A^2(s',t',u')\,(\ln s'/s+\ln t'/t+\ln u'/u)$$ This equation relates physical magnitudes and is regularization/renormalization independent. If we had chosen dimensional regularization, we would have obtained (up to pure numbers): $$A(s,t,u)=g_B+g_B^2\,(1/\epsilon +\ln\mu^2/s+\ln\mu^2/t+\ln\mu^2/u)$$ $$A(s',t',u')=g_B+g_B^2\,(1/\epsilon +\ln\mu^2/s'+\ln\mu^2/t'+\ln\mu^2/u')$$ And again $$A(s,t,u)=A(s',t',u')+A^2(s',t',u')\,(\ln s'/s+\ln t'/t+\ln u'/u)$$ is regularization/renormalization independent. The amplitudes $A$ are the previous $P_i$. The problem is that matrix elements aren't usually that simple and, in general, it is not possible to get rid of non measurable parameters. 
But the reason is technical rather than fundamental. The best we can usually do is to choose some $s',t',u'$ that do not correspond to any physical configuration, so that the "coupling" is a matrix element at a non-physical point of momentum space. This is called momentum-dependent subtraction. But even this is often problematic for technical reasons, so that we have to use minimal subtraction, where the renormalized coupling does not correspond to any amplitude. These couplings are the previous $c$'s.

Symmetries and regulators

Let's assume that a classical theory has some given symmetries. Then there are two alternatives:

i) There is not any regularization that respects all the symmetries. Then there is an anomaly. If this anomaly does not destroy essential properties of the quantum theory, such as unitarity or the existence of a vacuum, then the quantum theory has fewer symmetries than the classical one, but the quantum theory is consistent. These are anomalies related to global (non-gauge) symmetries.

ii) There exists at least one regularization that respects all the symmetries of the theory. Nevertheless, we are not forced to use one of these regularizations. We can use a regularization that doesn't respect the symmetries of the classical theory, provided that we add to the action (in the path integral) all the (counter)terms compatible with the symmetries preserved by both the classical theory and the regularization. For example, in QED one can use a gauge-violating regularization; then the only thing one has to do is add a term $\sim A^2$ to the action.

Therefore, the fact that a regularization respects a symmetry has nothing to do with the dependence of results on the regularization. One can use the regularization one likes best as long as one is consistent. Of course, in most cases, regularizations that respect the symmetries are technically more convenient.

This post imported from StackExchange Physics at 2014-03-31 22:27 (UCT), posted by SE-user drake
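As a small numerical illustration of the regularization-independence point (my own toy calculation, using the one-loop expressions from the example above with arbitrary kinematics and a small bare coupling): the difference of the amplitude between two kinematic points comes out the same in both "regularizations", because the cut-off, the pole and the subtraction scale drop out.

import math

g_B = 0.01                      # bare coupling (arbitrary, small)
s, t, u    = 1.0, 2.0, 3.0      # first kinematic point (arbitrary)
sp, tp, up = 4.0, 5.0, 6.0      # second kinematic point (arbitrary)

def A_pauli_villars(s, t, u, Lam=1.0e6):
    # one-loop toy amplitude with a sharp cut-off, as in the example above
    return g_B + g_B**2 * (math.log(Lam**2 / s) + math.log(Lam**2 / t) + math.log(Lam**2 / u))

def A_dimreg(s, t, u, eps=0.01, mu=1.0):
    # the same amplitude with a dimensional-regularization-like 1/eps pole
    return g_B + g_B**2 * (1.0 / eps + math.log(mu**2 / s) + math.log(mu**2 / t) + math.log(mu**2 / u))

for name, A in (("Pauli-Villars", A_pauli_villars), ("dim. reg.", A_dimreg)):
    print(name, "A(s,t,u) - A(s',t',u') =", A(s, t, u) - A(sp, tp, up))

# Both differences equal g_B^2 * (ln(s'/s) + ln(t'/t) + ln(u'/u)); the cut-off,
# the pole and the scale mu have dropped out of the relation between the two points.
print("g_B^2 * (ln(s'/s) + ln(t'/t) + ln(u'/u)) =",
      g_B**2 * (math.log(sp / s) + math.log(tp / t) + math.log(up / u)))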
I got Aircraft Design: A conceptual approach for Christmas, and I'm having a hard time with lift coefficients because I honestly have no idea what "lift force per unit span" means, so can someone please explain this to me? The concept of lift force per unit span comes from potential flow theory. It will need some background information to explain what it means, so bear with me. In the early years of flight, electricity was new and exciting, and it just happened that the equations which could calculate the strength of an electromagnetic field worked equally well when calculating the local flow change effected by a wing. What is the electrical current in a wire became the vorticity in a vortex, and the strength and orientation of the induced magnetic field were equivalent to the induced flow changes. So the vocabulary of electricity was copied over to aerodynamics, just like brain research used vocabulary from computer science when that was a hot topic. Now we are left with abstract concepts like induced drag or lift per unit span. It would be so much more descriptive to use proper names, but the authors of technical books learned it that way and are much too lazy to explain aerodynamics any better. In potential flow theory, you have sources, sinks and vortices. Sources and sinks are used to generate the displacement effect of a physical body moving through air, and vortices are used to explain why wings bend the flow and create lift. In order to calculate the lift force $L$ of a single vortex in two-dimensional flow, the circulation strength $\Gamma$ of the vortex is multiplied by the airspeed $u_{\infty}$ and air density $\rho$. You will find an equation like $L = -\Gamma\cdot u_{\infty}\cdot\rho$ in many treatises about numerical aerodynamics. To expand that into the third dimension (and, consequently, into reality), you need to add something measured in spanwise direction - but you have already lift, and adding the third dimension would give a moment (lift times distance) where only lift would make sense. Therefore, this two-dimensional lift is now called "lift per unit of span" so there is still space for a third dimension where two-dimensional flow did already produce lift (counter to any sound intuition). And no, this is never constant over span. In all cases the vorticity is gradually reduced towards the tips, or explained in a better way, the suction force acting on the wing is gradually reduced when you approach the tips because when the wing ends, nothing can prevent the air from flowing from the high-pressure region below to the low-pressure region on the upper surface of the wing. While the potential flow mentioned above is the mathematical way of looking at aircraft, lift coefficients are the engineer's way of expressing things. From tests it was soon clear that the lift force of a wing scales with the dynamic pressure $q$ of the flow, that is the product of air density and the square of airspeed: $q = \frac{\rho}{2}\cdot v^2$. The next observation of engineers was that lift also scales with wing area $S$. To make the lifting force independent of wing size and dynamic pressure, they stripped both from the lift (physical unit of Kilopond, Newton or pound-force) so they arrived at a dimensionless figure which they called lift coefficient $c_L$. Doing so made it much easier to compare measurements or scale up known designs for the next, better design. 
The lift equation now becomes $L = c_L\cdot S\cdot\frac{\rho}{2}\cdot v^2$ Imagine that the wing is a carrot, and chop it up as you would chop a carrot into discs. The lift (force) produced by a slice of thickness 1 is the lift (force) per unit span of that slice. ("Thickness 1" could be in whatever units you choose, so another way to look at that is to divide the lift by the thickness of the slice.) For a uniform (straight, not tapered, swept, or twisted) wing, every slice produces the same amount of lift, so as Riccati points out, the lift per unit span is just the total lift divided by the wing span. However, on a wing whose shape varies from the fuselage to the tip, each slice is slightly different. A tapered wing might look a bit like a very conical carrot, and the lift per unit span decreases smoothly from the root to the tip, just like the diameter of each disc decreases as you get towards the tip of the carrot. (I'm not saying the shape of the carrot is related at all: it's just a way of thinking about considering each slice separately.) While you can use total lift to compare different wings, you can use lift per unit span to compare wings in a way that's independent of their span. A wing twice as long will produce twice the lift (ignoring real-world effects like flex and prop wash), but it will have the same lift per unit span, because it has the same thickness and shape as the shorter wing. More usefully, you can use it to look at different parts of the same wing: to compare the root and the tip. Later in your book, you'll see charts showing how the lift per unit span varies along the length for different shapes/designs of wing. Consider a finite (three dimensional) wing producing lift. It would be difficult for us to calculate the total lift and exact lift distribution of the wing unless it is quite simple. One way to deal with it is to 'slice' the wing into a number of segments for which the lift force can be found and the take into account the effects of variation of various wing parameters like: Chord Geometric twist Aerodynamic twist (airfoil shape). The lift per unit span of the wing can be found from the lift coefficient of the airfoil- basically, we are assuming that the flow over a finite wing can be treated as locally two-dimensional and finding the forces on the wing using this. As an example, take a three dimensional wing and then slice it into small pieces so that the lift is essentially constant within each (i.e. the airfoil section and angle of attack is constant). For each of the slices, it is possible to find the lift (from airfoil and flow characteristics). This gives the lift per unit span (unit span here means the size satisfying above conditions). Now, the total lift can be found by simply adding the lifts from various sections. Another thing is that the spanwise variation of the lift per unit span gives the lift distribution of the wing- helping us to compare various wing planforms- like elliptical vs rectangular etc.
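To make the "slice the wing like a carrot" picture concrete, here is a small sketch (the assumptions are mine: an elliptical chord distribution and a constant section lift coefficient, which are not taken from the answers above). It computes the lift per unit span of each strip as $L'(y) = c_l \cdot \frac{\rho}{2} V^2 \cdot c(y)$ and sums the strips to recover the total lift:

import numpy as np

rho    = 1.225        # air density, kg/m^3
V      = 50.0         # airspeed, m/s
b      = 10.0         # wing span, m
c_root = 1.5          # root chord, m
c_l    = 0.5          # section (2D) lift coefficient, assumed constant along the span

q = 0.5 * rho * V**2                            # dynamic pressure
y = np.linspace(-b / 2, b / 2, 1001)            # spanwise stations
dy = y[1] - y[0]
chord = c_root * np.sqrt(1.0 - (2 * y / b)**2)  # assumed elliptical chord distribution c(y)

lift_per_span = c_l * q * chord                 # N per metre of span, varies along the span
total_lift = np.sum(lift_per_span) * dy         # sum up the slices

S = np.sum(chord) * dy                          # wing area from the same slices
print("total lift:", total_lift, "N")
print("check c_L * q * S:", c_l * q * S, "N")   # same number here, since c_l is uniform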
The average number of positions addressed is $$ \frac{1}{2} \left(1 + \frac{1}{1-\alpha}\right) $$for linear probing and $1 + \alpha/2$ for chained hashing. Here $\alpha$ is the load factor. For each constant $\alpha$, this average is constant, but it depends on $\alpha$. The expression $1/(1-\alpha)$ blows up as the table gets full ($\alpha \to 1$), which is probably what is meant by the dependence is at least linear. The dependence is on the load factor rather than on the absolute number of elements in the table, which is not the deciding factor. Usually, a linear dependence (not to be confused with the same terminology from linear algebra!) would be $\Theta(\alpha)$, at least linear would be $\Omega(\alpha)$, and more than linear $\omega(\alpha)$; but here $\alpha \to 1$ rather than $\alpha \to \infty$ so terminology is not standard. Perhaps the correct variable, which does go to $\infty$, is $1/(1-\alpha)$. In this case, we do have linear dependence for linear probing.
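A quick empirical sketch (my own, not from the answer) showing how the cost of a successful search with linear probing tracks $\frac{1}{2}\left(1+\frac{1}{1-\alpha}\right)$ and blows up as $\alpha \to 1$:

import random

def build_and_measure(m, alpha):
    table = [None] * m
    keys = random.sample(range(10 * m), int(alpha * m))
    for k in keys:                       # insert with linear probing, hash = k % m
        i = k % m
        while table[i] is not None:
            i = (i + 1) % m
        table[i] = k
    probes = 0
    for k in keys:                       # successful searches
        i, p = k % m, 1
        while table[i] != k:
            i, p = (i + 1) % m, p + 1
        probes += p
    return probes / len(keys)

for alpha in (0.25, 0.5, 0.75, 0.9):
    measured = build_and_measure(100_000, alpha)
    predicted = 0.5 * (1 + 1 / (1 - alpha))
    print(f"alpha={alpha}: measured ~{measured:.2f}, formula {predicted:.2f}")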
Preprints (Red Series) of the Department of Mathematics

We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (\log(1/r)/\pi)^p\); more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{\log |\log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (\log(1/r)/\pi)^p} \frac{dr}{r\ \log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, are given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.

We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function \(t^2 \log(1/t)\). This is a surprising result, as it seems to be the first instance where gauge functions other than \(t^s\) and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist, and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
We derive the $\Delta \leftrightarrow \text Y$ transformation equations. The first derivation is from $\Delta$ to $\text Y$. The second derivation goes the opposite direction, from $\text Y$ to $\Delta$. The algebra for this one is harder and there are two versions presented. The first version does it strictly by algebraic manipulation. It works, but it seems a bit convoluted. The second version casts the resistors as conductances before diving into the math. The result is pleasingly symmetric. Find a full introduction to the Delta-Wye transformation in this article. Written by Willy McAllister. Contents Objective $\Delta$ to $\text Y$ derivation $\text Y$ to $\Delta$ derivation There is no need to memorize these transformation equations. If the need arises, you can look them up. No engineer ever needs to produce the following derivations on the spot. They are presented here for your amusement. Objective The resistance between any pair of terminals has to be the same in both the $\Delta$ and $\text Y$ configurations. For example, in $\Delta$, the resistance across the top between terminals $x$ and $y$ is $Rc \parallel (Ra+Rb)$. In $\text Y$, the resistance between terminals $x$ and $y$ is $R1 + R2$. Given the three resistor values of one configuration we are going to derive the resistor values for the other, going both ways. $\Delta$ to $\text Y$ derivation This derivation was contributed by Khan Academy learner phidot. Let’s figure out $R1$ in the $\text Y$ configuration in terms of $\Delta$ resistors $(Ra, Rb, Rc)$. We write a set of three simultaneous equations describing the resistance between each pair of terminals. $R_{xy}: \quad R1 + R2 = Rc \parallel (Ra+Rb)$ $R_{yz}: \quad R2 + R3 = Ra \parallel (Rb+Rc)$ $R_{zx}: \quad R3 + R1 = Rb \parallel (Rc+Ra)$ The left side is the resistance in the $\text Y$ configuration, the right side is the resistance in the $\Delta$ configuration. $R_{xy}$ stands for the resistance between terminals $x$ and $y$. The symbol $\parallel$ is shorthand notation for “in parallel with.” $R_i \parallel R_j = R_i \,R_j / (R_i + R_j)$ When we look at terminals $x$ and $y$, we assume terminal $z$ isn’t connected to anything, so the current in $\text R3$ is $0$. Tell me more about the assumption We can make this assumption because we know resistors are linear devices and we can apply the principle of superposition. (If you haven't studied linearity and superposition yet, please trust me for now that the assumption is a good one.) Let’s attempt to isolate $R1$ on the left side. We combine the equations with this operation, $(\quad[R_{xy}]\quad + \quad [R_{zx}]\quad - \quad [R_{yz}]\quad)\, / 2$ Fill in all the left sides, $( [R1 + R2] + [R3 + R1] - [R2 + R3] ) / 2$ $(R1+\,\cancel{R2}\,+\,\cancel{R3}+R1\,-\,\cancel{R2}\,-\,\cancel{R3})/2 = 2R1/2 = R1$ This verifies the operation isolates $R1$ on the left side. 
Now do the same operation on the right side, $R1 = ( \,Rc \parallel (Ra+Rb) + Rb \parallel (Rc+Ra) - Ra \parallel (Rb+Rc)\, ) / 2$ Replace the $\parallel$ symbol with the proper formula for two parallel resistors, $R1 = (Rc(Ra+Rb)/(Rc+(Ra+Rb) \,+$ $\qquad\quad Rb(Rc+Ra)/(Rb+(Rc+Ra) \,-$ $\qquad\quad Ra(Rb+Rc)/(Ra+(Rb+Rc)$ $\qquad \quad ) \,/ 2$ Notice the denominator is the same in all three terms, $R1 = \dfrac{Rc(Ra+Rb) + Rb(Rc+Ra) - Ra(Rb+Rc)}{2(Ra+Rb+Rc)}$ Multiply everything out and search for cancellations, $R1 = \dfrac{\cancel{RaRc} \,+ RcRb + RbRc + \,\cancel{RbRa} \,- \,\cancel{RaRb} \,-\, \cancel{RaRc}}{2(Ra+Rb+Rc)}$ $R1 = \dfrac{\cancel{2}\,RbRc}{\cancel{2}( Ra+Rb+Rc )}$ $R1 = \dfrac{Rb\,Rc}{Ra+Rb+Rc}$ That’s it! This expression tells us how to compute $\text Y$ resistor $R1$ from the given $\Delta$ resistors. The procedure for finding $R2$ and $R3$ is the same, with different subscripts. The specific operations you use on the equations are, To isolate $R2: ([R_{xy}] + [R_{yz}] - [R_{zx}]) / 2 $ To isolate $R3: ([R_{yz}] + [R_{zx}] - [R_{xy}]) / 2 $ $\text Y$ to $\Delta$ derivation Going in this direction the algebra is trickier. I found two derivations I admire. Start with the $\Delta \rightarrow \text Y$ equastions and use algebra to solve in reverse Convert the resistors ($R$) to conductances ($G$) and start from scratch $\text Y$ to $\Delta$ derivation using just algebra This first derivation is based on a video by Mohiuddin Jewel. We’re going to start with the three equations from the $\Delta$ to $\text Y$ transformation and solve for the lettered resistors $(Ra, Rb, Rc)$. To review, $R1 = \dfrac{Rb\,Rc}{Ra + Rb + Rc}$ $R2 = \dfrac{Ra\,Rc}{Ra + Rb + Rc}$ $R3 = \dfrac{Ra\,Rb}{Ra + Rb + Rc}$ These steps may seem a bit goofy, but they work. First, divide the $R3$ equation by the $R1$ equation, $\dfrac{R3}{R1} = \cfrac{\cfrac{Ra\,Rb}{Ra + Rb + Rc}}{\cfrac{Rb\,Rc}{Ra + Rb + Rc}}$ The $Ra + Rb + Rc$ terms are common to the top and bottom, so they cancel, leaving, $\dfrac{R3}{R1} = \dfrac{Ra\,Rb}{Rb\,Rc} = \dfrac{Ra}{Rc}$ or, solving for $Ra$, $Ra = \dfrac{R3\,Rc}{R1}$ Next, divide the equation for $R3$ by the equation for $R2$. The same kind of cancellation happens, giving us, $\dfrac{R3}{R2} = \dfrac{Ra\,Rb}{Ra\,Rc} = \dfrac{Rb}{Rc}$ or, solving for $Rb$, $Rb = \dfrac{R3\,Rc}{R2}$ Now we plug in our expressions for $Ra$ and $Rb$ into the equation for $R2$, $R2 = \cfrac{\cfrac{R3\,Rc}{R1}\,Rc}{\cfrac{R3\,Rc}{R1} + \cfrac{R3\,Rc}{R2} + Rc}$ A bunch of common $Rc$ terms cancel out, leaving, $R2 = \cfrac{\cfrac{R3\,Rc}{R1}}{\dfrac{R3}{R1} + \cfrac{R3}{R2} + 1}$ Now we work on the denominator. The LCM of the terms in the denominator is $R1\,R2$. Perform the addition by multiplying each term by the appropriate form of $1$, $R2 = \cfrac{\dfrac{R3\,Rc}{R1}}{\cfrac{R3}{R1}\cfrac{R2}{R2} + \cfrac{R3}{R2}\cfrac{R1}{R1} + 1\cfrac{R1}{R1}\cfrac{R2}{R2}}$ $R2 = \dfrac{\dfrac{R3\,Rc}{R1}}{\cfrac{R3\,R2 + R3\,R1 + R1\,R2}{R1\,R2}}$ Two $R1$ terms and two $R1$ terms cancel, $\cancel{R2} \rightarrow 1 = \cfrac{\cfrac{R3\,Rc}{\cancel{R1}}}{\cfrac{R3\,R2 + R3\,R1 + R1\,R2}{\cancel{R1}\,\cancel{R2}}}$ $1 = \dfrac{R3\,Rc}{R3\,R2 + R3\,R1 + R1\,R2}$ and now we can solve for $Rc$, $Rc = \dfrac{R3\,R2 + R3\,R1 + R1\,R2}{R3}$ Done! This expression tells us how to compute $\Delta$ resistor $Rc$ from the given $\text Y$ resistors. You can derive $Ra$ and $Rb$ with the same technique using different patterns of dividing equations. 
$\text Y$ to $\Delta$ derivation using conductance This second derivation involves treating the resistors as conductances. After changing to conductance, the derivation follows exactly the steps we did in the first derivation in this article, $\Delta$ to $\text Y$. The resistance model and conductance model are duals of each other. Each resistor is replaced with its equivalent conductance, $G = \dfrac{1}{R}.$ The rule for conductances in parallel is the sum of the conductances, $G_{\text{parallel}} = G_i + G_j$ The rule for two conductances in series is similar to two resistors in parallel, $G_{\text{series}} = \dfrac{G_i\,G_j}{G_i + G_j}$ Our strategy is to short circuit the conductance opposite the terminal in question and figure out the equivalent conductance. For example, suppose we are looking at terminal $x$. We short out $Ga$ on the opposite side by connecting a wire between $y$ and $z$. You can think of it as connecting both $y$ and $z$ to ground. Shorting out $Ga$ is the dual of leaving resistor $R1$ open while figuring out $R_{yz}$ up above. Then we compute the conductance between terminal $x$ and ground and call this $G_x$. In the $\Delta$ configuration, the conductance from $x$ to ground is $Gb$ in parallel with $Gc$, $G_x = Gb + Gc$ In the $\text Y$ configuration, the conductance from $x$ to ground is $G1$ in series with the parallel combination of $G2$ and $G3$, or $G_x = G1 + (G2 \parallel G3) = \dfrac{G1 \, (G2+G3)}{G1+(G2+G3)}$ $G_x$ has to be the same for $\Delta$ and $\text Y$, so we set them equal, $G_x: Gb + Gc = G1(G2+G3)/(G1+G2+G3)$ Now construct two more equations to describe all three terminals, $G_y: Ga + Gc = G2(G1+G3)/(G1+G2+G3)$ $G_z: Ga + Gb = G3(G1+G2)/(G1+G2+G3)$ Notice how these three equations are nearly identical to their duals shown at the beginning of the $\Delta$ to $\text Y$ derivation, $R_{xy}, R_{yz}, R_{zx}$. Let’s use the three equations to isolate $Gb$. We combine the equations with this operation, $(\quad[G_x]\quad + \quad [G_z]\quad - \quad [G_y]\quad)\,/2$ The left side becomes, $([Gb + Gc] + [Ga + Gb] - [Ga + Gc])\,/2$ We get a bunch of cancellation, $(Gb+\,\cancel{Gc}\,+\,\cancel{Ga}\,+\,Gb\,-\,\cancel{Ga}\,-\,\cancel{Gc})\,/2 = \dfrac{2Gb}{2} = Gb$ This verifies that the operation isolates $Gb$. Now apply the same operation to the right side, $Gb = ($ $\qquad\quad G1(G2+G3)/(G1+G2+G3) \,+$ $\qquad\quad G3(G1+G2)/(G1+G2+G3) \,-$ $\qquad\quad G2(G1+G3)/(G1+G2+G3)$ $\qquad ) \,/ 2$ Notice the denominator is the same in all three terms, $Gb = \dfrac{G1(G2+G3) + G3(G1+G2) - G2(G1+G3)}{2(G1+G2+G3)}$ Multiply everything out and search for cancellations, $Gb = \dfrac{\cancel{G1\,G2} \,+ G1\,G3 + G3\,G1 + \,\cancel{G3\,G2} \,-\, \cancel{G2\,G1} \,+ \,\cancel{G2\,G3}}{2(G1+G2+G3)}$ $Gb = \dfrac{\cancel{2}\,G1\,G3}{\cancel{2}(G1+G2+G3)}$ $Gb = \dfrac{G1\,G3}{G1+G2+G3}$ This tells us how to compute $\Delta$ conductance $Gb$ from the given $\text Y$ conductances. The procedure for finding $Ga$ and $Gc$ is the same. You get to come up with the combining operation to apply to the simultaneous equations. If you want to convert $Gb$ to resistance replace $G$ with $\dfrac{1}{R}$, etc. $\dfrac{1}{Rb}=\cfrac{\cfrac{1}{R1\,R3}}{\cfrac{1}{R1}+\cfrac{1}{R2}+\cfrac{1}{R3}}$ Go to work on the denominator. The least common multiple of the three fractions is $R1\,R2\,R3$. Multiply each term by an appropriate form of $1$ and add fractions. 
$\dfrac{1}{Rb}=\cfrac{\cfrac{1}{R1\,R3}}{\cfrac{R2\,R3}{R1\,R2\,R3}+\cfrac{R1\,R3}{R1\,R2\,R3}+\cfrac{R1\,R2}{R1\,R2\,R3}}$ $\dfrac{1}{Rb}=\cfrac{\cfrac{1}{R1\,R3}}{\cfrac{R2\,R3+R1\,R3+R1\,R2}{R1\,R2\,R3}}$ Now we turn a somersault to bring $R1\,R2\,R3$ to the numerator, $\dfrac{1}{Rb}=\dfrac{\dfrac{1}{\cancel{R1}\,\cancel{R3}}\,(\cancel{R1}\,R2\,\cancel{R3})}{R2\,R3+R1\,R3+R1\,R2}$ And a bit more cancellation gives us, $\dfrac{1}{Rb}=\dfrac{R2}{R2\,R3+R1\,R3+R1\,R2}$ One more flip to get the expression for $Rb$ we’ve been looking for, $Rb=\dfrac{R2\,R3+R1\,R3+R1\,R2}{R2}$ This matches the result from the algebraic derivation we did for $\text Y$ to $\Delta$. Derive the equations for $Ra$ and $Rc$ with the same process.
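For readers who prefer to check the algebra numerically, here is a small sketch (mine, not part of the original article) of both conversions, with a round-trip test that $\Delta \rightarrow \text Y \rightarrow \Delta$ reproduces the original resistors:

def delta_to_wye(Ra, Rb, Rc):
    s = Ra + Rb + Rc
    R1 = Rb * Rc / s
    R2 = Ra * Rc / s
    R3 = Ra * Rb / s
    return R1, R2, R3

def wye_to_delta(R1, R2, R3):
    n = R1 * R2 + R2 * R3 + R3 * R1
    Ra = n / R1
    Rb = n / R2
    Rc = n / R3
    return Ra, Rb, Rc

Ra, Rb, Rc = 30.0, 60.0, 90.0                       # arbitrary delta resistors, ohms
R1, R2, R3 = delta_to_wye(Ra, Rb, Rc)
print("wye:", R1, R2, R3)                           # 30.0 15.0 10.0
print("back to delta:", wye_to_delta(R1, R2, R3))   # (30.0, 60.0, 90.0) again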
Electronic Communications in Probability Electron. Commun. Probab. Volume 24 (2019), paper no. 57, 12 pp. On the eigenvalues of truncations of random unitary matrices Abstract We consider the empirical eigenvalue distribution of an $m\times m$ principle submatrix of an $n\times n$ random unitary matrix distributed according to Haar measure. Earlier work of Petz and Réffy identified the limiting spectral measure if $\frac{m} {n}\to \alpha $, as $n\to \infty $; under suitable scaling, the family $\{\mu _{\alpha }\}_{\alpha \in (0,1)}$ of limiting measures interpolates between uniform measure on the unit disc (for small $\alpha $) and uniform measure on the unit circle (as $\alpha \to 1$). In this note, we prove an explicit concentration inequality which shows that for fixed $n$ and $m$, the bounded-Lipschitz distance between the empirical spectral measure and the corresponding $\mu _{\alpha }$ is typically of order $\sqrt{\frac {\log (m)}{m}} $ or smaller. The approach is via the theory of two-dimensional Coulomb gases and makes use of a new “Coulomb transport inequality” due to Chafaï, Hardy, and Maïda. Article information Source Electron. Commun. Probab., Volume 24 (2019), paper no. 57, 12 pp. Dates Received: 6 December 2018 Accepted: 20 July 2019 First available in Project Euclid: 13 September 2019 Permanent link to this document https://projecteuclid.org/euclid.ecp/1568361883 Digital Object Identifier doi:10.1214/19-ECP258 Citation Meckes, Elizabeth; Stewart, Kathryn. On the eigenvalues of truncations of random unitary matrices. Electron. Commun. Probab. 24 (2019), paper no. 57, 12 pp. doi:10.1214/19-ECP258. https://projecteuclid.org/euclid.ecp/1568361883
In this article, I want to discuss how the intersection area of two circles can be calculated. Given are only the two circles, each with its centre point and radius, and the result is the area which both circles share in common. First, I want to take a look at how the intersection area can be calculated and then how the needed variables are derived from the given data. At the end of the article, I supply running code in C++. The following figure illustrates the general problem. A small and a large circle are shown and both share a common area at the right part of the first circle. As the figure already depicts, the problem is solved by calculating the area of the two circular segments formed by the two circles. The total intersecting area is then simply\begin{equation*} A_0 + A_1. \end{equation*} As equation 15 from MathWorld shows, the area of one circular segment is calculated as (all angles are in radians)\begin{equation*} \begin{split} A &= \frac{1}{2} r^2 (\theta - \sin(\theta)), \\ &= \frac{1}{2} r^2 \theta - \frac{1}{2} r^2 \sin(\theta). \end{split} \end{equation*} The formula consists of two parts. The left part is the formula for the area of the circular sector (the complete wedge limited by the radii), which is similar to the formula for the complete circle area (\( r^2\pi \)), where the arc covers a complete round of the circle. Here instead, the arc angle is explicitly specified by \(\theta\) rather than being the full \(2\pi\). If you plug a complete round into \(\theta\), you get the same result: \( \frac{1}{2} r^2 2\pi = r^2\pi \). The right part calculates the area of the isosceles triangle (the triangle with the radii as sides and the heights as baseline), which is a little bit harder to see. With the double-angle formula\begin{equation*} \sin(2x) = 2\sin(x)\cos(x) \end{equation*} \(\sin(\theta)\) can be rewritten as\begin{equation*} \sin(\theta) = 2\sin\left(\frac{1}{2}\theta\right) \cos\left(\frac{1}{2}\theta\right). \end{equation*} This leaves for the right part of the above formula\begin{equation*} \frac{1}{2} r^2 \sin(\theta) = r^2 \sin\left(\frac{1}{2}\theta\right) \cos\left(\frac{1}{2}\theta\right). \end{equation*} Also, note that \(r \sin\left(\frac{1}{2}\theta\right) = a\) and \( r \cos\left(\frac{1}{2}\theta\right) = h\) (imagine the angle \(\frac{\alpha}{2}\) from the above figure in a unit circle), which results in\begin{equation*} r^2 \sin\left(\frac{1}{2}\theta\right) \cos\left(\frac{1}{2}\theta\right) = ah \end{equation*} and since we have an isosceles triangle, this is exactly the area of the triangle. Originally, the formula is only defined for angles \(\theta < \pi\) (and probably \(\theta \geq 0\)). In this case, \(\sin(\theta)\) is non-negative and the area of the circular segment is the subtraction of the triangle area from the circular sector area (\( A = A_{sector} - A_{triangle} \)). But as far as I can see, this formula also works for \(\theta \geq \pi\), if the angle stays in the range \([0;2\pi]\). In this case, the triangle area and the area of the circular sector need to be added up (\( A = A_{sector} + A_{triangle} \)), which is accounted for in the formula by a negative \(\sin(\theta)\) (note the negative sign in front of the \(\sin(\theta)\) term). The next figure also depicts this situation. The following table gives a small example of these two elementary cases (circular segment of one circle). 
\(r\) \(\theta\) \(a = \frac{1}{2} r^2 \theta\) \(b = \frac{1}{2} r^2 \sin(\theta)\) \(A = a - b\) \(2\) \(\frac{\pi}{3} = 60°\) \(\frac{2 \pi }{3}\) \(\sqrt{3}\) \(\frac{2 \pi }{3} - \sqrt{3} = 0.362344\) \(2\) \(\frac{4\pi}{3} = 240°\) \(\frac{8 \pi }{3}\) \(-\sqrt{3}\) \(\frac{8 \pi }{3}- (-\sqrt{3}) = 10.1096\) It is also from interest to see the area of the circular segment as a function of \(\theta\): It is noticeable that the area of one circular segment (green line) starts degressively from the case where the two circles just touch each other, because here the area of the triangle is subtracted. Beginning from the middle at \(\theta = \pi\) the area of the triangle gets added and the green line proceeds progressively until the two circles contain each other completely (full circle area \(2^2\pi=4\pi\)). Of course, the function itself is independent of any intersecting scenario (it gives just the area for a circular segment), but the interpretation fits to our intersecting problem (remember that in total areas of two circular segments will get added up). Next, we want to use the formula. The radius \(r\) of the circle is known, but we need to calculate the angle \(\theta\). Let's start with the first circle. The second then follows easily. With the notation from the figure, we need the angle \(\alpha\). Using trigonometric functions, this can be done by\begin{equation*} \begin{split} \tan{\frac{\alpha}{2}} &= \frac{\text{opposite}}{\text{adjacent}} = \frac{h}{a} \\ \text{atan2}(y, x) &= \text{atan2}(h, a) = \frac{\alpha}{2} \end{split} \end{equation*} The \(\text{atan2}(y, x)\) function is the extended version of the \(\tan^{-1}(x)\) function where the sign of the two arguments is used to determine a resulting angle in the range \([-\pi;\pi]\). Please note that the \(y\) argument is passed first. This is common in many implementations, like also in the here used version of the C++ standard library std::atan2(double y, double x). For the intersection area the angle should be be positive and in the range \([0;2\pi]\) as discussed before, so in total we have Firstly, the range is expanded to \([-2\pi;2\pi]\) (factor from the previous equation, since the height \(h\) covers only half of the triangle). Secondly, positivity is ensured by adding \(+2\pi\) leaving a resulting interval of \([0;4\pi]\). Thirdly, the interval is shrinked to \([0;2\pi]\) to stay inside one circle round. Before we can calculate the \(\alpha\) angle, we need to find \(a\) and \(h\) 1. Let's start with \(a\). The two circles build two triangles (not to be confused with the previous triangle used to calculate the area of the circular segment) with the total baseline \(d=a+b = \left\| C_0 - C_1 \right\|_2 \) and the radii (\(r_0,r_1\)) as sides, which give us two equations The parameter \(b\) in the second equation can be omitted (using \(d-a=b\))\begin{equation*} r_1^2 = b^2 + h^2 = (d-a)^2 + h^2 = d^2 - 2da + a^2 + h^2 \end{equation*} and the equation solved by \(h^2\)\begin{equation*} h^2 = r_1^2 - d^2 + 2da - a^2. \end{equation*} Plugging this into the equation for the first triangle\begin{equation*} \begin{split} r_0^2 &= a^2 + r_1^2 - d^2 + 2da - a^2 \\ r_0^2 - r_1^2 + d^2 &= 2da \\ a &= \frac{r_0^2 - r_1^2 + d^2}{2d} \end{split} \end{equation*} results in the desired distance \(a\). This directly gives us the height\begin{equation*} h = \sqrt{r_0^2 - a^2}. 
\end{equation*} Using the existing information the angle \(\beta\) for the second circle can now easily be calculated\begin{equation*} \beta = \text{atan2}(h, d-a) \cdot 2 + 2 \pi \mod 2 \pi. \end{equation*} Now we have every parameter we need to use the area function and it is time to summarize the findings in some code. /** * @brief Calculates the intersection area of two circles. * * @param center0 center point of the first circle * @param radius0 radius of the first circle * @param center1 center point of the second circle * @param radius1 radius of the second circle * @return intersection area (normally in px²) */ double intersectionAreaCircles(const cv::Point2d& center0, const double radius0, const cv::Point2d& center1, const double radius1) { CV_Assert(radius0 >= 0 && radius1 >= 0); const double d_distance = cv::norm(center0 - center1); // Euclidean distance between the two center points if (d_distance > radius0 + radius1) { /* Circles do not intersect */ return 0.0; } if (d_distance <= fabs(radius0 - radius1)) // <= instead of <, because when the circles touch each other, it should be treated as inside { /* One circle is contained completely inside the other, just return the smaller circle area */ const double A0 = PI * std::pow(radius0, 2); const double A1 = PI * std::pow(radius1, 2); return radius0 < radius1 ? A0 : A1; } if (d_distance == 0.0 && radius0 == radius1) { /* Both circles are equal, just return the circle area */ return PI * std::pow(radius0, 2); } /* Calculate distances */ const double a_distanceCenterFirst = (std::pow(radius0, 2) - std::pow(radius1, 2) + std::pow(d_distance, 2)) / (2 * d_distance); // First center point to the middle line const double b_distanceCenterSecond = d_distance - a_distanceCenterFirst; // Second centre point to the middle line const double h_height = std::sqrt(std::pow(radius0, 2) - std::pow(a_distanceCenterFirst, 2)); // Half of the middle line /* Calculate angles */ const double alpha = std::fmod(std::atan2(h_height, a_distanceCenterFirst) * 2.0 + 2 * PI, 2 * PI); // Central angle for the first circle const double beta = std::fmod(std::atan2(h_height, b_distanceCenterSecond) * 2.0 + 2 * PI, 2 * PI); // Central angle for the second circle /* Calculate areas */ const double A0 = std::pow(radius0, 2) / 2.0 * (alpha - std::sin(alpha)); // Area of the first circula segment const double A1 = std::pow(radius1, 2) / 2.0 * (beta - std::sin(beta)); // Area of the second circula segment return A0 + A1; } Basically, the code is a direct implementation of the discussed points. The treatment of the three special cases (no intersection, circles completely inside each other, equal circles) are also from Paul Bourke's statements. Beside the functions of the C++ standard library I also use some OpenCV datatypes (the code is from a project which uses this library). But they play no important role here, so you can easily replace them with your own data structures. I also have a small test method which covers four basic cases. The reference values are calculated in a Mathematica notebook. 
void testIntersectionAreaCircles() { /* Reference values from IntersectingCirclesArea_TestCases.nb */ const double accuracy = 0.00001; CV_Assert(std::fabs(intersectionAreaCircles(cv::Point2d(200, 200), 100, cv::Point2d(300, 200), 120) - 16623.07332) < accuracy); // Normal intersection CV_Assert(std::fabs(intersectionAreaCircles(cv::Point2d(200, 200), 100, cv::Point2d(220, 200), 120) - 31415.92654) < accuracy); // Touch, inside CV_Assert(std::fabs(intersectionAreaCircles(cv::Point2d(200, 200), 100, cv::Point2d(400, 200), 100) - 0.0) < accuracy); // Touch, outside CV_Assert(std::fabs(intersectionAreaCircles(cv::Point2d(180, 200), 100, cv::Point2d(220, 200), 120) - 28434.24854) < accuracy); // Angle greater than 180° } List of attached files:
Ex.5.4 Q5 Arithmetic progressions Solutions - NCERT Maths Class 10 Question A small terrace at a football ground comprises of \(15\) steps each of which is \(50\, \rm{m}\) long and built of solid concrete. Each step has a rise of \(\begin{align}\frac{1}{4}\,\rm{m} \end{align}\) and a tread of \(\begin{align}\frac{1}{2}\,\rm{m} \end{align}\) (See figure) calculate the total volume of concrete required to build the terrace. Text Solution What is Known? \(15\) steps each of which is \(50 \,\rm{m}\) long and each step has a rise of \(\begin{align} \frac{1}{4}\,\rm{m} \end{align}\) and a tread of \(\begin{align} \frac{1}{2}\,\rm{m} \end{align}\) What is Unknown? Total volume of concrete required to build the terrace. Reasoning: Sum of the first \(n\) terms of an AP is given by \(\begin{align} {S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right] \end{align}\) Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms. Steps: From the figure, it can be observed that Height of 1 st step is \(\begin{align}\frac{1}{4} \,\rm{m} \end{align}\) Height of 2nd step is \(\begin{align} \left( {\frac{1}{4} + \frac{1}{4}} \right) \,\rm{m} = \frac{1}{2} \,\rm{m} \end{align}\) Height of 3rd step is \(\begin{align} \left( {\frac{1}{2} + \frac{1}{4}} \right)\,\rm{m} = \frac{3}{4} \,\rm{m} \end{align}\) Therefore, height of the each step is increasing by \(\begin{align} \frac{1}{4} \,\rm{m} \end{align}\) length \(50 \,\rm{m}\) and width (tread) \(\begin{align} \frac{1}{2} \,\rm{m} \end{align}\) remain the same for each of the steps. Volume of Step can be considered as \( \text{Volume of Cuboid}= Length \times Breadth \times Height\) Volume of concrete in 1st step \(\begin{align} = 50m \times \frac{1}{2} \,\rm{m} \times \frac{1}{4} \,\rm{m} = 6.25\,{m^3} \end{align}\) Volume of concrete in 2nd step \(\begin{align} = 50 \,\rm{m} \times \frac{1}{2} \,\rm{m} \times \frac{1}{2} \,\rm{m} = 12.50\,{m^3} \end{align}\) Volume of concrete in 3rd step \(\begin{align} = 50 \,\rm{m} \times \frac{1}{2} \,\rm{m} \times \frac{3}{4} \,\rm{m} = 18.75\,{m^3} \end{align}\) It can be observed that the volumes of concrete in these steps are in an A.P. \[6.25\,\rm{m^3},12.50\,\rm{m^3},\,18.75\,\rm{m^3},.......\] First term, \(a = 6.25\) Common difference, \(d = 6.25\) Number of steps, \(n = 15\) Sum of n terms, \(\begin{align} {S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right] \end{align}\) \[\begin{align}{S_{15}} &= \frac{{15}}{2}\left[ {2 \times 6.25 + \left( {15 - 1} \right) \times 6.25} \right]\\&= \frac{{15}}{2}\left[ {12.50 + 14 \times 6.25} \right]\\ &= \frac{{15}}{2}\left[ {12.50 + 87.50} \right]\\ &= \frac{{15}}{2} \times 100\\& = 750\end{align}\] Therefore, Volume of concrete required to build the terrace is \(750\;\rm{m^3}\).
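A quick numerical check of the arithmetic-progression sum above (my own addition):

volumes = [50 * 0.5 * (0.25 * k) for k in range(1, 16)]   # 15 steps, k-th step rises k/4 m
print(sum(volumes))                                       # 750.0 cubic metres, as derived above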
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Essay and Opinion A new graph density Version 1Released on 08 April 2015 under Creative Commons Attribution 4.0 International License Authors' affiliations Laboratoire Electronique, Informatique et Image (Le2i). CNRS : UMR6306 - Université de Bourgogne - Arts et Métiers ParisTech Keywords Community discovery Density @Graph theory Graph properties Graph theory Metric spaces Abstract For a given graph $G$ we propose the non-classical definition of its true density: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G)$, where the $\mathcal{M}ass$ of the graph $G$ is a total mass of its links and nodes, and $\mathcal{V}ol (G)$ is a size-like graph characteristic, defined as a function from all graphs to $\mathbb{R} \cup \infty$. We show how the graph density $\rho$ can be applied to evaluate communities, i.e “dense” clusters of nodes. Background and motivation Take a simple graph $G = (V, E)$ with $n$ nodes and $m$ links. The standard definition of graph density, i.e. the ratio between the number of its links and the number of all possible links between $n$ nodes, is not very suitable when we are talking about the true density in the physical sense. More precisely, by “the true density” we mean: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G) \,,$ where the $\mathcal{M}ass$ of the graph $G$ equals to the total mass of its links and nodes, and the $\mathcal{V}ol$ is a size-like characteristic of $G$. Consider again the usual graph density: $D = \frac{2 m}{n \left( n - 1 \right)}$. Rewriting $D$ in the “mass divided by volume” form, one obtain the following definitions of graph mass and volume: \begin{align*} \mathcal{M}ass_D (G) &= 2 m \,, \\ \mathcal{V}ol_D (G) &= n \left( n - 1 \right) ; \end{align*} Note, that $\mathcal{V}ol_D (G)$ depends only of the number of nodes, so it is very rough estimation of the actual graph volume. Moreover, any function of the number of nodes (and the number links) will give somewhat strange results, because we neglect the actual graph structure in this way. In the next section of this article we give a formal definition of the actual graph volume. For the moment, just take a look at the Fig. 1, where different graphs with 6 nodes and 6 links are shown. Intuitively, graph $C$ is larger (more voluminous) than $B$ and $A$. But it is not clear which graph is larger: $A$ or $B$. True graph density $\mathcal{M}ass (G)$ It seems a god idea to define $\mathcal{M}ass(G)$ as the total mass of its nodes and links. The simple way consists in assuming that the mass of one link (or node) equals to $1$. \begin{equation} \tag{MASS} \mathcal{M}ass(G) = n + m \label{eq:mass} \end{equation} $\mathcal{V}ol (G)$ We cannot use any classical measure (e.g. Lebesgue-like) to define a volume of a graph $G$, because all measures are additive. Let us explain why the additivity is bad. Observing that $G$ is the union of its links and nodes, and assuming that the volume of a link (node) equals to one, we obtain: \[ \mathcal{C}lassical\mathcal{V}ol(G) = n + m \,,\] where $m$ is the number of links in $G$, and $n$ equals to the number of nodes. The graph structure disappears again, and we should find “another definition of volume”. A clever person can develop a notion of “volume” for any given metric space. Since any graph can be regarded as a metric space, we can use this as a solution of our problem. Here we briefly describe how Feige in his paper [2] defined the volume of a finite metric space $(S,d)$ of $n$ points. 
A a function $\phi : S \to \mathbb{R}^{n-1}$ is a contraction if for every $u,v \in S$, $d_{\mathbb{R}} \big( \phi (u) - \phi (v) \big) \le d(u,v)$, where $d_{\mathbb{R}}$ denotes usual Euclidean distance between points in $\mathbb{R}^{n-1}$. The Fiege's volume $\mathit{Vol} \big( (S,d) \big)$ is the maximum $(n-1)$ dimensional Euclidean volume of a simplex that has the points of $\{\phi(s) | s \in S \}$ as vertices, where the maximum is taking over all contractions $\phi : S \to \mathbb{R}^{n-1}$. Sometimes in order to calculate Fiege's volume, we need to modify the original metric. Abraham et al. deeply studied Fiege-like embeddings in [1]. Another approach is to find a good mapping $g : S \to \mathbb{R}^{n-1}$, trying to preserve original distances as much as possible, and calculate the $\mathit{Vol} \big( (S,d ) \big)$ as the volume of convex envelop that contains all $\{g(s) | s \in S\}$. The interested reader can refer to the Matoušek's book [3], which gives a good introduction into such embeddings. But we should note that not all finite metric spaces can be embedded into Euclidean space with exact preservation of distances. In this paper we chose another approach: instead of doing approximative embeddings, we compute the “volume” directly. First of all, let us introduce some natural properties that must be satisfied by the graph volume. A graph volume is a function from set of all graphs $\mathcal{G}$ to $\mathbb{R} \cup \infty$: \[ \mathcal{V}ol : \mathcal{G} \to \mathbb{R} \cup \infty \,,\] Note that our volume has no such parameter as dimension. The absence of dimension allows us directly compare volumes of any two graphs. Let the volume of any complete graph be equal to $1$: \begin{equation} \tag{I} \mathcal{V}ol (K_x) = 1 \label{eq:I} \end{equation} Then, for any disconnected graph, denoted by $G_{\bullet^\bullet_\bullet}$, let the volume be equal to infinity: \begin{equation} \tag{II} \mathcal{V}ol (G_{\bullet^\bullet_\bullet}) = \infty \label{eq:II} \end{equation} Intuitively, here one can make an analogy with a gas. Since gas molecules are “not connected”, they fill an arbitrarily large container in which they are placed. When we add a new edge between two existed vertices, the new volume (after edge addition) cannot be greater than the original volume: \begin{equation} \tag{III} \mathcal{V}ol (G) \ge \mathcal{V}ol (G + e) \label{eq:III} \end{equation} When we add a new vertex $v^1$ with degree $1$, the new volume cannot be less than the original one: \begin{equation} \tag{IV} \mathcal{V}ol (G) \le \mathcal{V}ol (G + v^1) \label{eq:IV} \end{equation} For a given graph $G = (V, E)$ the eccentricity $\epsilon(v)$ of a node $v$ equals to the greatest distance between $v$ and any other node from $G$: \[ \epsilon(v) = \max_{u \in V}{d(v,u)} \,, \] where $d(v,u)$ denotes the length of a shortest path between $v$ and $u$. Finally, we define the volume of a graph $G$ as a product of all eccentricities: \begin{equation} \mathcal{V}ol (G) = \sqrt[|V|]{\prod_{v \in V} \epsilon(v)} \tag{VOLUME} \label{eq:volume} \end{equation} Obviously properties \ref{eq:I}, \ref{eq:II} and \ref{eq:III} hold for this definition. But \ref{eq:IV} is needed to be proved or disproved. Reconsidering graphs from Fig. 1, we have $\mathcal{V}ol(A) = \sqrt[6]{3^3 2^3} \approx 2.45$, $ \mathcal{V}ol(B) = \sqrt[6]{3^3 2^3} \approx 2.45$ and $\mathcal{V}ol(C) = \sqrt[6]{3^6} = 3$. Possible applications Quality of communities Consider two graphs $A$ and $B$. 
We say that $A$ is better than $B$ if and only if $\rho(A) > \rho(B)$. Using this notion one can define the quality of a graph partition.
The volume of finite metric spaces
Our approach can be applied to calculate the “volume” of any finite metric space $(S,d)$: \[ \mathcal{V}ol \big( (S,d) \big) = \sqrt[|S|]{\prod_{s \in S} \epsilon(s)} \,,\] where $\epsilon(s) = \max_{p \in S}{d(s,p)} $.
References
[1] I. Abraham, Y. Bartal, O. Neiman, and L. J. Schulman, Volume in general metric spaces, in Proceedings of the 18th Annual European Conference on Algorithms: Part II, ESA'10, Berlin, Heidelberg, 2010, Springer-Verlag, pp. 87–99.
[2] U. Feige, Approximating the bandwidth via volume respecting embeddings, Journal of Computer and System Sciences, 60 (2000), pp. 510–539.
[3] J. Matoušek, Lectures on Discrete Geometry, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2002.
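The definitions above are easy to experiment with. Below is a minimal sketch (not part of the original paper) of how $\rho(G) = \mathcal{M}ass(G)/\mathcal{V}ol(G)$ could be computed with networkx, using $\mathcal{M}ass = n + m$ and the geometric mean of eccentricities as $\mathcal{V}ol$; the function name and the convention of returning $0$ for disconnected graphs (where $\mathcal{V}ol = \infty$) are my own choices.

```python
# Sketch of the proposed density rho(G) = Mass(G) / Vol(G):
# Mass = n + m (eq. MASS), Vol = |V|-th root of the product of eccentricities
# (eq. VOLUME).  For a disconnected graph Vol is infinite, so rho is 0 here.
import math
import networkx as nx

def true_density(G):
    n, m = G.number_of_nodes(), G.number_of_edges()
    mass = n + m
    if not nx.is_connected(G):
        return 0.0                                 # Vol = infinity  =>  rho = 0
    ecc = nx.eccentricity(G)                       # dict: node -> eccentricity
    vol = math.prod(ecc.values()) ** (1.0 / n)     # geometric mean of eccentricities
    return mass / vol

K6 = nx.complete_graph(6)   # Vol = 1 by property (I), so rho = n + m = 21
P6 = nx.path_graph(6)       # a "long" graph: larger volume, smaller density
print(true_density(K6), true_density(P6))
```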
The time series is governed by the equation $S(T)=S(0)e^{(\mu-\frac{\delta^2}{2})T+\delta(w(T)-w(0))}$, in which $w(t)$ is a standard Brownian motion. Now given the data $\{S(t)\}_{t=0}^{t=T}$, how to estimate $\delta$ and $\mu$? By considering the log of the time series, i.e. $$ \{\log{S(t)}\}_{t=0}^{t=T}$$ we have $$ \log{S(0)} + (\mu - \delta^2/2)t + \delta w(t) $$ ( $w(0) = 0$ ). Taking first differences of this series, $\log{S_{t_i}} - \log{S_{t_{i-1}} }$, gives a new series: $$ (\mu - \delta^2/2)( t_i - t_{i-1} ) + \delta( w(t_i) - w(t_{i-1} ) )$$ This new series is independent and Normally distributed, with mean $(\mu - \delta^2/2)( t_i - t_{i-1} )$ and standard deviation $\delta\sqrt{ t_i - t_{i-1} }$. One can use MLE to find the "best" estimators of the two unknowns.
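A small sketch of that MLE step, assuming equally spaced observations with step $dt$ (the function and variable names are mine): the sample mean and variance of the log-increments estimate $(\mu - \delta^2/2)\,dt$ and $\delta^2\,dt$, from which $\mu$ and $\delta$ follow.

```python
# Minimal sketch of the estimator described above, assuming equally spaced
# observations S[0..N] with time step dt.  The log-increments
# x_i = log S_i - log S_{i-1} are i.i.d. Normal((mu - delta^2/2) dt, delta^2 dt),
# so the sample mean/variance of x give the MLE of mu and delta.
import numpy as np

def estimate_gbm(S, dt):
    x = np.diff(np.log(S))        # log-returns
    m = x.mean()                  # estimates (mu - delta^2/2) * dt
    v = x.var()                   # MLE variance, estimates delta^2 * dt
    delta = np.sqrt(v / dt)
    mu = m / dt + 0.5 * delta**2
    return mu, delta

# quick self-check on simulated data
rng = np.random.default_rng(0)
mu_true, delta_true, dt, N = 0.1, 0.3, 1/252, 100_000
x = (mu_true - 0.5*delta_true**2)*dt + delta_true*np.sqrt(dt)*rng.standard_normal(N)
S = 100 * np.exp(np.concatenate(([0.0], np.cumsum(x))))
print(estimate_gbm(S, dt))   # should be close to (0.1, 0.3)
```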
Why do we use the term "asymptotic" in complexity? Although I know what an asymptote is, what is an asymptote doing here? There are several answers to this. 1. "Asymptotic" here means "as something tends to infinity". It has indeed nothing to do with curves. 2. There is no such thing as "complexity notation". We denote "complexities" using asymptotic notation, more specifically Landau notation. 3. "Complexity" is a mostly empty, overused and overloaded term. However, in the context of algorithms in TCS, it is usually agreed upon that it means "the $\Theta$-class (also "order of growth") of the worst-case running-time cost function" of a given algorithm. Similarly, the "complexity" of a problem means "the best worst-case complexity among all algorithms for this problem". These can be overridden by adding qualifiers, e.g. "average-case space complexity". Note that item 3 is my own opinion; some people disagree. You will indeed find "complexity" used for many things in the literature and on this site. In case of doubt or ambiguity, ask the author. I would like to quote from "Concrete Mathematics" (Chapter 9) by Ronald Graham, Donald Knuth, and Oren Patashnik. It does mention curves and asymptotes. The word asymptotic stems from a Greek root meaning "not falling together". When ancient Greek mathematicians studied conic sections, they considered hyperbolas like the graph of $y = \sqrt{1 + x^2}$, which has the lines $y = x$ and $y = -x$ as "asymptotes". The curve approaches but never quite touches these asymptotes, when $x \to \infty$. Nowadays we use "asymptotic" in a broader sense to mean any approximate value that gets closer and closer to the truth, when some parameter approaches a limiting value [emphasis added]. For us, asymptotics means "almost falling together".
If $\sqrt{64}$ is equal to $\pm{}8$, is $\sqrt{-64}$ equal to $\pm{}8i$, or just $8i$? $\sqrt{a}$, for real $a$, is almost always defined to be the positive solution of the equation $x^2 - a = 0$, so $\sqrt{64}$ is $8$ and not $\pm 8$. The reason the square root only takes on one value is that it is a function, and so each element in the domain can be mapped to at most one element in the codomain. As for your second question, $i$ is defined as a number satisfying the equation $i^2 + 1 = 0$ and so we can say that $\sqrt{-1} = i$. Following from this, we have $\sqrt{-64} = \sqrt{-1} \sqrt{64} = i\sqrt{64} = 8i$. Well, in theory, the square root of any number should return both its negative and positive root. Meaning $\sqrt{x^2}=\pm x$. But if you think about the geometric meaning of the square root, it’s finding the side length which makes a square of area $x^2$. So some people think that the square root should only be a positive answer, since length cannot be negative. In the case where we do want both the negative and positive root, here’s what you can do. $\sqrt{-16}=\sqrt{-1} \cdot \sqrt{16}$, with $\sqrt{-1}=i$ and $\sqrt{16}=\pm 4$, which gives us $i \cdot \pm 4=\pm 4i$.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing data point for calculating time correlation, you can run two exactly the simulation in parallel separated by the time lag dt. Then there is no need to store all snapshot and spatial points. @DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?) The x axis is the index in the array -- so I have 200 time series Each one is equally spaced, 1e-9 seconds apart The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are The solid blue line is the abs(shear strain) and is valued on the right axis The dashed blue line is the result from scipy.signal.correlate And is valued on the left axis So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe... Because I don't know how the result is indexed in time Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... 
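For the lag question above, a minimal sketch (variable and function names are illustrative, not from the original discussion): subtract the means first, use scipy.signal.correlate with mode="full" (which is why the output is roughly twice as long as the input), and read the lag off with correlation_lags and argmax.

```python
# Sketch of the lag-estimation step: two equally sampled 1-D signals x and y
# with spacing dt.  Removing the mean avoids the sign/offset confusion, and
# the index of the maximum of the full cross-correlation gives the lag of y
# relative to x (positive means y lags x).
import numpy as np
from scipy import signal

def estimate_lag(x, y, dt):
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    corr = signal.correlate(y, x, mode="full")                    # length 2N - 1
    lags = signal.correlation_lags(len(y), len(x), mode="full")
    return lags[np.argmax(corr)] * dt

# y is x delayed by 5 samples, so the estimated lag should be 5*dt
dt = 1e-9
x = np.sin(np.linspace(0, 20, 200))
y = np.concatenate([np.zeros(5), x[:-5]])
print(estimate_lag(x, y, dt))
```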
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. 
@Dilaton also, have a look at the topvoted answers on both. Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
The answer is yes for continuous functions $f$ on continuous curves $C$. The reason for this is that the line integral (with respect to arc length) satisfies the inequality $$ \min(f) \cdot L_C \leq \int_C f(x_1, \ldots x_n) \,ds \leq \max(f) \cdot L_C $$ which can be rearranged as $$ \min(f) \leq \cfrac{1}{L_C} \int_C f(x_1, \ldots x_n) \,ds \leq \max(f) $$ We know that the minimum and maximum of the continuous function $f$ on $C$ exist because $C$ is compact, being the continuous image of $[0, 1]$. But $C$ is also connected for the same reason, so the range of $f$ must be an interval. The only option is that the range of $f$ must be the interval $[\min(f), \max(f)]$. This is important because this number $\cfrac{1}{L_C} \int_C f(x_1, \ldots x_n) \,ds$ was just shown to be in that interval, and therefore is in the range of $f$. So, pick a value $\vec{c} \in C$ with $f(\vec{c}) = \cfrac{1}{L_C} \int_C f(x_1, \ldots x_n) \,ds$ and the theorem is proved. EDIT: The added part of your question about the integral of a gradient vector field $f(x) = \nabla F(x)$ will have a negative answer. For example, a line integral of a gradient across a loop (initial point = endpoint) will give you zero, but the gradient vector field you are integrating might never be zero. Example: Integrate the gradient of $F(x, y) = x + y$ along the counterclockwise oriented unit circle.
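A quick numerical illustration of the statement (a sketch; the particular $f$ and curve are my own choice): for $f(x,y) = x + y^2$ on the unit circle, the average $\frac{1}{L_C}\int_C f\,ds$ equals $1/2$, which indeed lies between $\min f = -1$ and $\max f = 5/4$ on the circle, so some point of the circle attains it.

```python
# f(x, y) = x + y**2 on the unit circle: the average (1/L_C) * int_C f ds
# lands in [min f, max f], so it is attained somewhere on the curve.
import numpy as np

t = np.linspace(0.0, 2*np.pi, 200001)
dt = t[1] - t[0]
x, y = np.cos(t), np.sin(t)
f = x + y**2
speed = np.hypot(np.gradient(x, t), np.gradient(y, t))   # |r'(t)|, equal to 1 here

L = np.sum(speed[:-1]) * dt                # arc length, approx 2*pi
avg = np.sum((f * speed)[:-1]) * dt / L    # average value, approx 0.5
print(L, avg, f.min() <= avg <= f.max())

i = np.argmin(np.abs(f - avg))             # a parameter value where f hits the average
print(t[i], f[i])
```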
Since moving to a Mac a bit over a year ago, I've had only a few reasons to look back (the business with the HP LJ1022 printer being one of them). I'm now rather close to the end of my tether, and the reason is fonts. As an academic and a computer scientist, I end up writing quite a lot of papers and presentations with maths in them. Like any sensible person, I use LaTeX for typesetting the maths; it's a lot easier to type $\sum_{i=0}^{i=n-1} i^2$ than to wrestle with the equation editor in Word. I've also been using LaTeX for rendering mathematical expressions in lecture slides; there are two tools - LaTeXit and LaTeX Equation Editor - which make putting maths in Powerpoint or KeyNote a drag-and-drop operation. However, I've spent quite a lot of time over the last week trying to debug a problem with the font rendering of TeX-generated PDF files on OS X. If I wrote a LaTeX file containing the following: \documentclass{article} \begin{document} \section{This is a test} \[e = mc^2 \rightarrow \chi \pi \ldots r^2 \] \end{document} then I'd expect it to render something like this: Preview renders it like that, but not reliably - perhaps one time in eight. The rest of the time, it randomly substitutes a sans serif font for the various Computer Modern fonts. Sometimes it looks like this (missing the italic font): Sometimes it looks like this (missing the bold and italic fonts): And sometimes it looks like this (missing the bold and symbol fonts): It isn't predictable which rendering I get. The problem also isn't limited to CM, but appears whenever you have a subset of a Type1 font embedded in PDF (on my machine, at least); TeX isn't the problem. The problem didn't exist on 10.4. The best guess from the Mac communities is that it's a cache corruption problem with the OS X PDF-rendering component on 10.5 (which would explain why I see the same problem in LaTeXit, LEE and Papers, but not in Acrobat). I really don't see how Apple could have let a release out of the door with a bug like this - this is surely a critical bug for anyone in publishing. Edited to add links: Apple forums [1] [2] [3] Macscoop on 10.5.2 update Another report of the problem Clearing the font cache
I came to a conclusion here by applying the Euler-Lagrange operator to the given Lagrangian: $L^{2}=g_{ij}(x)\dot{x}^{i}\dot{x}^{j}$ Doing the algebra I find that: $\ddot{x}^{i}+\Gamma^{i}_{jk}\dot{x}^{j}\dot{x}^{k}=\frac{dL}{ds}\dot{x}^{i} \qquad(1)$ Now, according to this webpage the condition for a vector to be transported along a curve without changing its direction is: $\nabla_{x}V^{i}=\lambda(\tau)V^{i} \qquad (2)$ where $\nabla_x$ is the covariant derivative. Moreover, if the transported vector keeps the same magnitude, then the condition holds for $\lambda(\tau)=0$, which is only true if one chooses $\tau$ to be an affine parameter. If I take the covariant derivative of $\dot{x}^{i}$, then: $\nabla_{x}\dot{x}^{i}=\dot{x}^{k}\bigg(\frac{\partial \dot{x}^{i}}{\partial x^{k}}+\Gamma^{i}_{jk}\dot{x}^{j}\bigg)=\ddot{x}^{i}+\Gamma^{i}_{jk}\dot{x}^{j}\dot{x}^{k}$ which is the same as the left-hand side of the equation $(1)$. According to the parallel transport condition $(2)$, if I am choosing an affine parameter $s$, it must be true that: $\ddot{x}^{i}+\Gamma^{i}_{jk}\dot{x}^{j}\dot{x}^{k}=0$ This is the same as equation (1) only if $\boxed{\frac{dL}{ds}=0}$. This means that such condition for the Lagrangian must be true only if one chooses $s$ to be an affine parameter.
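As a side illustration (not part of the derivation above), the Christoffel symbols $\Gamma^i_{jk}$ entering equation (1) can be computed symbolically; here is a small sympy sketch for the unit 2-sphere metric $g = \mathrm{diag}(1, \sin^2\theta)$, with the metric and coordinate names chosen purely for the example.

```python
# Compute the Christoffel symbols Gamma^i_{jk} for the unit 2-sphere metric
# g = diag(1, sin(theta)^2) in coordinates (theta, phi).
import sympy as sp

theta, phi = sp.symbols('theta phi')
q = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(i, j, k):
    # Gamma^i_{jk} = 1/2 g^{il} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[i, l] *
        (sp.diff(g[l, k], q[j]) + sp.diff(g[l, j], q[k]) - sp.diff(g[j, k], q[l]))
        for l in range(2)))

for i in range(2):
    for j in range(2):
        for k in range(2):
            G = christoffel(i, j, k)
            if G != 0:
                print(f"Gamma^{q[i]}_{{{q[j]} {q[k]}}} =", G)
# Nonzero components: Gamma^theta_{phi phi} = -sin(theta)cos(theta),
# Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta).
```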
$\alpha$ need not be a random variable. The most natural choice for $(\Omega, \mathcal{F}, P)$ is product space: let $\Omega = [0,1]^{[0,1] \cup \{2\}}$ and for $A \subset [0,1] \cup 2$, let $\pi_A : \Omega \to [0,1]^A$ be the projection map. (For $a \in [0,1] \cup 2$, we let $\pi_a$ denote $\pi_{\{a\}} : \Omega \to [0,1]^{\{a\}} = [0,1]$.) Let $\mathcal{F}$ be the product $\sigma$-field on $\Omega$, which is by definition the smallest $\sigma$-field that makes all $\pi_a : \Omega \to [0,1]$ measurable. It is then not hard to show that every $B \in F$ is of the form $B = \pi_A^{-1}(C)$ for some countable $A \subset [0,1] \cup \{2\}$ and some $C \subset [0,1]^A$ which is measurable with respect to the product $\sigma$-field on $[0,1]^A$. That is, a measurable subset of $\Omega$ can only look at countably many coordinates. Now for each countable $A \subset [0,1] \cup \{2\}$, let $\mu_A$ be the measure on $[0,1]^A$ which is the infinite product of Lebesgue measure, and for $B = \pi_A^{-1}(C) \in \mathcal{F}$, set $\mu(B) = \mu_A(C)$. It's easy to verify that $\mu$ is well defined and countably additive (note that $\bigcup_n \pi_{A_n}^{-1}(C_n)$ is of the form $\pi_A^{-1}(C)$ where $A = \bigcup_n A_n$ is countable). Moreover, $\mu$ is a probability measure and, under $\mu$, the $\pi_a$ are iid $U(0,1)$ random variables. So $(\Omega, \mathcal{F}, \mu)$ satisfies the hypotheses, taking $\xi_a = \pi_a$ and $u = \pi_2$. We then define $\alpha$ as you say, via $\alpha(\omega) = \pi_{\pi_2(\omega)}(\omega)$. Then $\alpha$ is certainly not a random variable. If it were, then $\alpha^{-1}([0,1/2])$ would be of the form $\pi_A^{-1}(C)$ for $A$ countable and $C \subset [0,1]^A$. Let $b \in [0,1] \setminus A$. Define $\omega$ via $\omega(2)=b$, $\omega(b)=1$, and $\omega(a) = 0$ for all $a \in [0,1] \setminus \{b\}$. Define $\omega'$ similarly but with $\omega'(b)=0$. Then $\alpha^{-1}([0,1/2])$ contains $\omega'$ but not $\omega$, whereas $\pi_A^{-1}(C)$ must contain both of $\omega,\omega'$ or neither. This doesn't rule out the possibility of being able to choose some more exotic $(\Omega, \mathcal{F}, P)$ (which should perhaps be left to the set theorists). But even if you could, I agree with fedja that no good can come of it. For example, working formally, you might observe that for each finite $A \subset [0,1]$, $\alpha$ is independent of $\{\xi_a : a \in A\}$ (since almost surely $u \notin A$), whence $\alpha$ is independent of $\{\xi_a : a \in [0,1]\}$, which appears to be absurd.
The Law of Large Numbers does not say as much as you seem to think it says. There is a great deal of misunderstanding about this. Suppose that the probability of a coin's toss being heads is one half. The Law of Large Numbers says nothing at all about what the asymptotic results will be. It only says what the probability is that the asymptotic results will be $X$. That the asymptotic result of the ratio of heads will be unity has a probability of zero, for example. But probability zero does not mean physically impossible, and hence the Law does not say that the asymptotic result will not be unity. The great English analyst Littlewood explained this clearly in a famous maths club talk, later published in his collection of essays, A Mathematician's Miscellany, entitled "The Dilemma of Probability". Kolmogorov himself said essentially the same thing, in print. For more references and a discussion (oh, and other people have also referred to the play Rosencrantz and Guildenstern Are Dead, by Tom Stoppard), see my "Logic of Physical Probability Assertions", http://arxiv.org/abs/quant-ph/0508059 , and prof. Jan von Plato's important article cited there. The excerpt from Littlewood is as follows: Mathematics \dots has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics, the \it meaning\rm\ of «probability» must relate to the real world; and there must be one or more «primitive» propositions about the real world, from which we can then proceed deductively (i.e., mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the «probability axiom», and we will call it «$A$» for short. \dots the «real» probability problem; what are the axiom $A$ and the meaning of «probability» to be, and how can we justify $A$? It will be instructive to consider the attempt called the «frequency theory». It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated $n$ times the proportion of 6's will, with certainty, tend to a limit, $p$ say, as $n \rightarrow \infty$. (Attempts are made to sublimate the limit into some Pickwickian sense---«limit» in inverted commas. But either you mean the ordinary limit, or else you have the problem of explaing how «limit» behaves, and you are no further. You do not make an illegitimate conception legitimate by putting it into inverted commas.) If we take this proposition as $A$ we can at least settle off-hand the other problem, of the meaning of probability, we can define its measure for the event in question to be the number $p$. But for the rest this $A$ takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? $A$ does not say. We have, then, to add to it something about the rate of convergence. Now an $A$ cannot assert a certainty about a particular number $n$ of throws, such as «the proportion of 6's will certainly be within $p\pm\epsilon$ for large enough $n$ (the largeness depending on $\epsilon$)». It can only say « the proportion will lie between $p\pm\epsilon$ with at least such and such probability (depending on $\epsilon$ and $n_o$) whenever $n>n_o$.» The vicious circle is apparent. We have not merely failed to justify a workable $A$; we have failed even to state one which would work if its truth were granted. 
http://www.library.uu.nl/digiarchief/dip/diss/1957294/c4.pdf and http://philsci-archive.pitt.edu/archive/00000367/00/ergodic.ps are two reviews of prof. von Plato's ergodic theory of probability, which itself is not on-line.
Research | Open Access
(t,n) multi-secret sharing scheme extended from Harn-Hsu’s scheme
EURASIP Journal on Wireless Communications and Networking, volume 2018, Article number: 71 (2018)
Abstract
Multi-secret sharing schemes have been well studied in recent years. In most multi-secret sharing schemes, all secrets must be recovered synchronously; the shares cannot be reused any more. In 2017, Harn and Hsu proposed a novel and reasonable feature for multiple secret sharing: the multiple secrets should be reconstructable asynchronously, and the recovery of previous secrets should not leak any information on unrecovered secrets. Harn and Hsu also proposed a $(t,n)$ multi-secret sharing scheme intended to satisfy this feature. However, the analysis of Harn-Hsu’s scheme is wrong, and their scheme fails to satisfy this feature: if one secret is reconstructed, all the other unrecovered secrets can be computed by any $t-1$ shareholders illegitimately. Another problem in Harn-Hsu’s work is that the parameters are unreasonable, as will be shown below. In this paper, we prove the incorrectness of Harn-Hsu’s scheme and propose a new $(t,n)$ multi-secret sharing scheme which is extended from Harn-Hsu’s scheme; our proposed scheme satisfies the feature introduced by Harn and Hsu.
Introduction
A secret sharing scheme [1, 2] is a fundamental cryptographic protocol that can protect information security among a group of participants. In a traditional $(t,n)$ secret sharing scheme, each of the $n$ participants keeps a share of a secret $s$ in such a way that any $t$ or more participants can reconstruct the secret $s$, while fewer than $t$ participants cannot get any information on $s$. Secret sharing is a useful building block for other cryptographic protocols [3, 4]. Due to the low efficiency of secret reconstruction in traditional $(t,n)$ secret sharing (the shares are used to reconstruct only one secret), multiple secret sharing has become more popular in recent years [5–7], since it improves the use efficiency of the shares. In most multiple secret sharing schemes, all secrets are reconstructed synchronously. This characteristic limits their applications in some asynchronous systems. In [8], Harn and Hsu introduced a new feature such that, in multiple secret sharing, the multiple secrets should have the capability to be reconstructed asynchronously. A multiple secret sharing scheme with this new feature meets higher security requirements in some asynchronous systems and expands the application scope of multi-secret sharing. In [8], a $(t,n)$ multi-secret sharing scheme based on a bivariate polynomial was also proposed to fit the new feature (many verifiable secret sharing schemes are based on bivariate polynomials [9, 10]). However, their scheme does not satisfy the new feature. Their analysis is incomplete and overlooks a clever attack from inside attackers. In addition, the parameters in their scheme are not reasonable either. In this paper, we present this attack from inside attackers, prove that Harn-Hsu’s scheme does not satisfy the new feature, and analyze why their parameters are unreasonable. Although their scheme does not work, the new feature of asynchronous secret reconstruction is worthwhile to study. We then introduce a new $(t,n)$ multi-secret sharing scheme that satisfies the new feature.
Review of Harn-Hsu’s scheme
In [8], Harn and Hsu introduced a new security requirement for multi-secret sharing schemes: the secrets should be reconstructable asynchronously. The following definition gives the security model for this new feature:
Definition 1 In a multiple-secret sharing scheme with asynchronous secret reconstruction, reconstructed secrets do not leak any information on the unrecovered secrets.
The $(t,n)$ multi-secret sharing scheme based on a bivariate polynomial proposed in [8] is briefly described below.
Harn-Hsu’s scheme
Share generation phase: A dealer selects a bivariate polynomial $F(x,y)$ over $GF(p)$, where $x$ has degree $t-1$ and $y$ has degree $h-1$. The $k$ multiple secrets are $s_1 = F(1,0), s_2 = F(2,0), \ldots, s_k = F(k,0)$. All the parameters satisfy $th > (t+h)(t-1) + (k-1)$. The dealer computes $f_i(x) = F(x, v_i)$ and $g_i(y) = F(v_i, y)$, $i = 1,2,\ldots,n$, and sends $f_i(x)$, $g_i(y)$ to each shareholder $P_i$, where $v_i$ is the public value associated with $P_i$.
Secret reconstruction phase: Let $P_1, P_2, \ldots, P_t$ be the participating shareholders. Each pair of shareholders $P_i$ and $P_j$ can compute a pairwise key $K_{i,j} = F(v_i, v_j)$ from their shares, using the public values $v_i$ and $v_j$. For a secret $s_r \in \{s_1, s_2, \ldots, s_k\}$, each of the $t$ shareholders computes his Lagrange component on $s_r$. Each shareholder $P_i$ sends his share information on $s_r$ to the other shareholders $P_j$, $j \neq i$, using the secure channel built up with $K_{i,j}$. Each shareholder can then reconstruct the secret $s_r$.
There are two main contributions of the above scheme.
Contribution 1 The shares of the shareholders can be used not only to reconstruct secrets, but also to generate pairwise keys for each pair of shareholders. By transferring information over secure channels built up with the pairwise keys, the scheme can resist attacks from outsiders.
Contribution 2 In [8], it is also claimed that the scheme satisfies Definition 1. It is proved that even if $k-1$ secrets have been reconstructed, any $t-1$ shareholders still cannot get any information on the last secret.
Results and discussion
Proof of security in Harn-Hsu’s work
In [8], Contribution 2 is proved by the following theorem:
Theorem 1 In [8], all $k$ multiple secrets can be reconstructed asynchronously such that $t-1$ shareholders get no information on unrecovered secrets from reconstructed secrets.
Proof The bivariate polynomial $F(x,y)$ has $th$ coefficients in total. On the other hand, each shareholder can establish $t+h$ independent equations on those $th$ coefficients from his shares; therefore, any $t-1$ shareholders can build up $(t-1)(t+h)$ equations. Suppose $k-1$ secrets have been recovered, which means that $k-1$ additional equations are available. Since the parameters $t, h, k$ satisfy $th > (t+h)(t-1) + (k-1)$, the $t-1$ shareholders cannot get enough independent equations on those $th$ coefficients to recover $F(x,y)$. As a result, the last secret cannot be reconstructed. □
Comments on Harn-Hsu’s work
In this part, we show that the conclusion of the above Theorem 1 is not correct: $t-1$ shareholders do not need to reconstruct $F(x,y)$ to compute unrecovered secrets; there exists a clever attack by these $t-1$ shareholders.
Theorem 2 In Harn-Hsu’s work, any $t-1$ shareholders can recover all $k-1$ remaining secrets with only one reconstructed secret.
Proof The $k$ multiple secrets in Harn-Hsu’s work are $s_1 = F(1,0), s_2 = F(2,0), \ldots, s_k = F(k,0)$. Let $f(x) = F(x,0)$; then the $k$ secrets are $k$ points on $f(x)$ (namely $s_i = f(i)$), which is of degree $t-1$. On the other hand, each shareholder receives a share $g_i(y) = F(v_i, y)$ from the dealer; he can compute the value $g_i(0) = F(v_i, 0) = f(v_i)$, a point on $f(x)$. Hence $t-1$ shareholders already have $t-1$ points on $f(x)$.
Therefore, once a secret $s_r$ has been reconstructed, any $t-1$ shareholders can obtain $t-1+1 = t$ points on the degree-$(t-1)$ polynomial $f(x)$; then $f(x)$ can be reconstructed by the Lagrange formula, and all the other secrets $s_i$, $i = 1,2,\ldots,k$, $i \neq r$, are recovered by these $t-1$ shareholders. □
In addition, Harn-Hsu’s scheme requires that $th > (t+h)(t-1) + (k-1)$, which means the parameter $h$ must be as large as $t^2$. In this case, the size of the share $g_i(y) = F(v_i, y)$ for each shareholder is expanded too much compared with other multi-secret sharing schemes, which is also unreasonable in practical applications.
Proposed scheme
Although Harn-Hsu’s work fails to satisfy the feature of asynchronous secret reconstruction, this new feature is still reasonable and practical. In this part, we propose a new $(t,n)$ multi-secret sharing scheme which satisfies the new feature. Our scheme is also based on a bivariate polynomial and is inspired by Harn-Hsu’s work.
Proposed scheme
Share generation phase: A dealer selects a bivariate polynomial $F(x,y)$ over $GF(p)$, where both $x$ and $y$ have degree $t-1$. The $t$ multiple secrets are $s_1 = F(1,0), s_2 = F(2,0), \ldots, s_t = F(t,0)$. The dealer computes $f_i(x) = F(x, v_i)$, $i = 1,2,\ldots,n$, and sends $f_i(x)$ to each shareholder $P_i$, where $v_i$ is the public value associated with $P_i$.
Secret reconstruction phase: Let $P_1, P_2, \ldots, P_t$ be the participating shareholders. To reconstruct a secret $s_r \in \{s_1, s_2, \ldots, s_t\}$, each shareholder $P_i$ computes $e_i = f_i(r)$. The secret $s_r$ is then reconstructed from $e_1, e_2, \ldots, e_t$.
Theorem 3 Our proposed scheme satisfies the feature of asynchronous secret reconstruction.
Proof First we prove the correctness of our scheme. Each shareholder computes $e_i = f_i(r) = F(r, v_i)$, $i = 1,2,\ldots,t$. Let $g_r(y) = F(r, y)$ (note that $g_r(y)$ is of degree $t-1$); then each $e_i$ is a point on $g_r(y)$, since $e_i = g_r(v_i)$, and $g_r(y)$ can be reconstructed from these $t$ points using the Lagrange formula. The secret is $s_r = F(r,0) = g_r(0)$.
Now suppose $t-1$ secrets $s_1, s_2, \ldots, s_{t-1}$ have been recovered. In this case, any $t-1$ shareholders obtain $t-1$ points on the polynomial $f(x) = F(x,0)$. In order to recover the secret $s_t$, these $t-1$ shareholders need to obtain one more point on $f(x)$. However, these $t-1$ shareholders cannot build any more linear equations on the $t$ coefficients of $f(x)$, based on the properties of the asymmetric bivariate polynomial [5, 6]. In other words, with the $t-1$ recovered secrets $s_1, s_2, \ldots, s_{t-1}$, any $t-1$ shareholders will find that each value $u \in GF(p)$ could be the last legal secret $s_t$, all with equal probability: $\left\{ \Pr(u = s_t) = \frac{1}{p} \mid u \in GF(p) \right\}$. Therefore, $t-1$ shareholders cannot reconstruct the secret $s_t$ from all previously reconstructed secrets. □
In [8], each pair of shareholders computes a common pairwise key from their shares, which can be used to build up a secure channel between any two shareholders. This secure channel protects information from attacks by outsiders. In the above scheme, no pairwise key exists, and all $t$ shareholders share a common key to build up a secure platform. The security level provided by one common key is weaker than that of pairwise keys between any two shareholders. Therefore, we improve our proposed scheme as shown in the revised scheme below.
Revised scheme
Share generation phase: A dealer selects an asymmetric bivariate polynomial $F(x,y)$ over $GF(p)$, where both $x$ and $y$ have degree $t-1$. The $t$ multiple secrets are $s_1 = F(1,0), s_2 = F(2,0), \ldots, s_t = F(t,0)$. The dealer also selects a symmetric bivariate polynomial $G(x,y)$ over $GF(p)$, where both $x$ and $y$ have degree $t-1$.
The dealer computes $f_i(x) = F(x, v_i)$, $i = 1,2,\ldots,n$, and sends $f_i(x)$ to each shareholder $P_i$, where $v_i$ is the public value associated with $P_i$. The dealer also computes $u_i(x) = G(x, v_i)$, $i = 1,2,\ldots,n$, and sends $u_i(x)$ to each shareholder $P_i$.
Secret reconstruction phase: Let $P_1, P_2, \ldots, P_t$ be the participating shareholders. Each pair of shareholders $P_i$ and $P_j$ shares a pairwise key $K_{i,j} = G(v_i, v_j)$: $P_i$ computes $K_{i,j} = u_i(v_j)$ and $P_j$ computes $K_{i,j} = u_j(v_i)$, which agree because $G$ is symmetric. The reconstruction then proceeds as in the proposed scheme, but the information is transmitted over the secure channels built up with the keys $K_{i,j}$.
The revised scheme satisfies both Contributions 1 and 2 of Harn-Hsu’s work, and the size of each share is much smaller than in their scheme. Comparisons between Harn-Hsu’s work and our schemes are shown in Table 1. Both our proposed scheme and its revised version are reasonable and practical. In a system that requires a high security level, the revised version is more practical; otherwise, our proposed scheme has an advantage in systems that require higher computational efficiency and speed.
Conclusions
Asynchronous secret reconstruction is a reasonable and practical feature of $(t,n)$ multi-secret sharing schemes which was first introduced in Harn-Hsu’s recent work [8]. However, in this paper, we prove that Harn-Hsu’s scheme does not satisfy this new feature: once a secret is recovered by $t$ shareholders, any $t-1$ shareholders can reconstruct all the remaining secrets illegitimately. We then propose a new $(t,n)$ multi-secret sharing scheme which satisfies this new feature. In the revised version, each pair of shareholders can compute a common pairwise key to build up a secure channel, which is consistent with Harn-Hsu’s work.
Method
In this work, we aim to point out the mistake in Harn-Hsu’s scheme and give a modification of their work that overcomes the problem. The security analysis of Harn-Hsu’s work is based only on the properties of interpolation polynomials.
References
1. GR Blakley, Safeguarding cryptographic keys, in AFIPS 1979 National Computer Conference, vol. 48 (1979), pp. 313–317.
2. A Shamir, How to share a secret. Commun. ACM 22(11), 612–613 (1979).
3. CM Tang, SH Gao, CL Zhang, The optimal linear secret sharing scheme for any given access structure. J. Syst. Sci. Complex. 26(4), 634–649 (2013).
4. CM Tang, CL Cai, Verifiable mobile online social network privacy-preserving location sharing scheme. Concurr. Comput. Pract. Experience 29(24), 1–10 (2017).
5. L Harn, Secure secret reconstruction and multi-secret sharing schemes with unconditional security. Secur. Commun. Netw. 7(3), 567–573 (2014).
6. J Herranz, A Ruiz, G Saez, New results and applications for multi-secret sharing schemes. Des. Codes Cryptography 73(3), 841–864 (2014).
7. YX Liu, Efficient (n,t,n) secret sharing schemes. J. Syst. Softw. 85(6), 1325–1332 (2012).
8. L Harn, CF Hsu, (t,n) multi-secret sharing scheme based on bivariate polynomial. Wireless Pers. Commun. 95(2), 1–10 (2017).
9. J Katz, CY Koo, R Kumaresan, Improving the round complexity of VSS in point-to-point networks. Inf. Comput. 207(8), 889–899 (2009).
10. R Kumaresan, A Patra, CP Rangan, The round complexity of verifiable secret sharing: the statistical case, in ASIACRYPT 2010, LNCS, vol. 6477 (Springer, Heidelberg, 2010), pp. 431–447.
Funding
The research presented in this paper is supported in part by the China National Natural Science Foundation (No. 61502384), Xi’an Science and Technology Project (No. 2017080CG/RC043(XALG004)), Industrial Science and Technology Project of Shaanxi Province (No. 2016GY-140), and Science Research Project of the Key Laboratory of Shaanxi Provincial Department of Education (No. 15JS078).
Availability of data and materials Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Ethics declarations Competing interests The authors declare that they have no competing interests. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
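To make the interpolation attack of Theorem 2 concrete, here is a toy sketch with small made-up parameters (the prime, the threshold, and the identities $v_i$ are illustrative and not taken from [8]): the secrets $s_i = f(i)$ lie on the degree-$(t-1)$ polynomial $f(x) = F(x,0)$, each insider already holds the point $f(v_i) = g_i(0)$, so $t-1$ insiders plus one reconstructed secret suffice to rebuild $f$ and hence every other secret.

```python
# Toy illustration of the attack in Theorem 2 (parameters are made up).
import random

p = 2**61 - 1          # a prime modulus (illustrative)
t, k = 4, 6            # threshold and number of secrets (illustrative)

def lagrange_at(points, x0, p):
    # Evaluate at x0 the unique polynomial through `points`, modulo p.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Dealer's hidden polynomial f(x) = F(x, 0) of degree t-1, and the k secrets.
coeffs = [random.randrange(p) for _ in range(t)]
def f(x):
    return sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p
secrets = [f(i) for i in range(1, k + 1)]

# t-1 insiders know f(v_i) from their shares; one leaked secret adds one more point.
insider_points = [(v, f(v)) for v in (101, 102, 103)]      # t-1 = 3 points
known = insider_points + [(1, secrets[0])]                 # t points in total
recovered = [lagrange_at(known, i, p) for i in range(2, k + 1)]
print(recovered == secrets[1:])                            # True: all secrets leak
```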
When performing orbital calculations, under what circumstances should I assume:
Since this answer is a bit longish, a TL;DR is in order. Depending on the application, one should use:
- A point mass / spherical mass distribution model.
- A model that incorporates the effects of the Earth's oblateness. This is subtly different from using the reference ellipsoid.
- A model that incorporates the effects of the Earth's not-quite ellipsoidal shape, based on very detailed analyses of the orbits of existing satellites. For the Earth, there is but one approach, which is to use a spherical harmonics model of the Earth's gravitational potential field.
- A model that accounts for temporal variations in those spherical harmonics models. For the Earth, the largest temporal variations (by far) are short term variations due to the tides. Longer term variations are observable by specially designed satellites.
- A very different kind of model for small bodies such as asteroids or comets that look more like lumpy potatoes than distorted beach balls.
When performing orbital calculations, under what circumstances should I assume: a perfectly spherical Earth
A point mass model works quite nicely when you are doing rather simplistic mission planning, analyzing an object that is very far from the Earth, or doing undergraduate homework. Beyond that, you'll need to account for the fact that the Earth is not quite spherical.
a reference ellipsoid
Calculating gravitation from an ellipsoid is nontrivial. There is missing information (e.g., a density model), and even with that information, the calculation will lead to elliptic integrals. However, the largest deviation from a point mass / spherical mass distribution model is captured quite nicely by the Earth's second dynamic form factor $J_2$. With this factor, the gravitational potential as a function of orbital radius $r$ and geocentric latitude $\phi$ is $$ U(r,\phi) = -\frac{\mu_E} r \left( 1-J_2\left(\frac a r\right)^2 \left(\frac{3\sin^2\phi -1}2\right) \right) $$ where $a$ is the Earth's equatorial radius, $J_2$ is the Earth's second dynamic form factor, which is caused by the Earth's equatorial bulge, and $\mu_E$ is the Earth's gravitational parameter 1. Take the gradient of the potential, negate, and voila! you'll have the gravitational acceleration. There's another issue with using the reference ellipsoid: the value of $J_2$ would be exactly determined by the Earth's oblateness and its rotation rate if the Earth were in hydrostatic equilibrium. The observed value of $J_2$ and the value calculated from the reference ellipsoid and the Earth's rotation rate differ slightly. The reason is that the Earth is not quite in hydrostatic equilibrium. It is instead still recovering from the huge masses of ice that covered large tracts of the Northern Hemisphere up until about 12000 years ago.
a geoid
Just as you don't want to use the reference ellipsoid, you don't want to use the geoid, either. Calculating gravitation from a geoid model is ridiculously difficult. What you want instead is a spherical harmonics model. Geoid models are calculated from spherical harmonics models. The spherical harmonics model is what you want. The $J_2$ term discussed above is the leading term in the non-spherical part of the Earth's spherical harmonics model.
As a function of distance to the center of the Earth $r$, geocentric latitude $\phi$, and geocentric longitude $\lambda$ 2, the spherical harmonics expansion of the Earth's gravitational potential is$$U = -\frac{\mu_E}r \left(1 + \sum_{n=2}^N\sum_{m=0}^n \left(\frac a r\right)^n\,\overline{P_{n,m}}(\cos\phi)\left(\overline{C_{n,m}}\cos(m\lambda) + \overline{S_{n,m}}\sin(m\lambda)\right)\right)$$where $\overline{P_{n,m}}(\cos\phi)$ are the fully normalized associated Legendre functions of the first kind, and $\overline{C_{n,m}}$ and $\overline{S_{n,m}}$ are the fully normalized spherical harmonics coefficients for the model. Take the gradient of the potential, negate, and voila! you'll have the gravitational acceleration. The fully normalized associated Legendre function and the fully normalized coefficients are used for two reasons. One is that the unnormalized coefficients (which is what physicists tend to use) tend result in all kinds of numerical problems. The other is that the fully normalized coefficient are the de facto standard. These coefficients are available for the Earth and for many bodies besides the Earth. You can find detailed descriptions in Vallado and many other texts. A free somewhat dated online paper that describes spherical harmonics as used in gravitation is The Evolution of Earth Gravity Models Used in Astrodynamics. A very recent open paper that describes how one such global gravity model was constructed is A GOCE only gravity model GOSG01S and the validation of GOCE related satellite gravity models. You can find multiple implementations online, in a number of languages, that use spherical harmonics to model gravitation. Regarding the coefficients themselves, the International Center for Global Gravity Field Models maintains a catalog of static global gravity models at http://icgem.gfz-potsdam.de/tom_longtime. The models listed on the page cited above are static. You'll need to account for temporal variations if you want to be extremely accurate. The largest temporal variations result from how the gravitational forces by the Moon and Sun distort the shape of the Earth. These solid Earth tides subtly affect satellite orbits out to GEO and beyond. There are also seasonal effects such as the buildup and melting of snow in Siberia and the buildup and drying of soil moisture in tropical rain forests. These are observable by specially designed satellites. Even longer term, the ice sheets over Greenland and Antarctica are melting, and lands in the far north are still rebounding from the glaciation that ended 12000 years ago. How to model these temporal effects is beyond the scope of this answer. no particular format at all There's always going to be some format / model. A spherical harmonics model does not work well for a small object whose shape is more like a lumpy potato than distorted ball. (Note well: This does not apply to the Earth.) This is getting into PhD land, quite literally. For example, here's a PhD thesis on how to compute the gravitation in the vicinity of a potato-shaped object. Footnotes 1 Regarding the gravitational parameter: Conceptually, this is the product of the gravitational constant $G$ and the Earth's mass $M_E$: $\mu_E = GM_E$. In practice, the Earth's mass is calculated from the gravitational parameter: $M_E = \mu_E/G$. The problem is that $G$ is only known to four or five places of accuracy while $\mu_E$ is known to about nine places of accuracy. 
This means that almost all of the uncertainty in the estimates of the Earth's mass is due to the uncertainty in the gravitational constant. Never use $GM$ (the product of $G$ and $M$) if you know the gravitational parameter to more than five places of accuracy. For the Earth, we know $\mu_E$ to about nine places of accuracy.
2 That you need to know longitude means you need a model of the Earth's rotational state. These range from the very simple (the Earth rotates at a constant rate) to the ridiculously complex (thousands and thousands of terms). The ridiculously complex models target the milliarcsecond level of accuracy needed by radio astronomers. A very simple, constant rate model might be good for a few orbits, but you'll still need a good initial rotational state. The Standards of Fundamental Astronomy provides a library of functions (both in Fortran and C) that calculate the Earth's orientation for you. So does JPL's SPICE Toolkit.
Called2Voyage is right. It depends on the application - and on distance. For the influence on probes far away at other planets, a point mass is sufficient. For spaceflight around the Moon or in the vicinity of Lagrange points, a sphere is okay. You'll want a geoid model for LEO satellites in any orbits other than equatorial. You'll need an even more accurate model for sun-synchronous satellites, taking local gravitational anomalies into account.
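To illustrate the "oblateness" level of fidelity discussed above, here is a minimal sketch of a two-body plus $J_2$ acceleration model (the constants are approximate, the $J_2$ term is the standard Cartesian form of the gradient of the potential quoted in the answer, and the function names are mine).

```python
# Two-body + J2 acceleration sketch.  Positions are in meters, ECI-like frame
# with z along the Earth's rotation axis.
import numpy as np

MU = 3.986004418e14     # m^3/s^2, Earth's gravitational parameter mu_E
RE = 6378137.0          # m, Earth's equatorial radius a
J2 = 1.08262668e-3      # Earth's second dynamic form factor

def accel_point_mass(pos):
    r = np.linalg.norm(pos)
    return -MU * pos / r**3

def accel_j2(pos):
    # Gradient (negated) of the J2 term of the potential, in Cartesian form.
    x, y, z = pos
    r = np.linalg.norm(pos)
    factor = 1.5 * J2 * MU * RE**2 / r**5
    return factor * np.array([x * (5*z**2/r**2 - 1),
                              y * (5*z**2/r**2 - 1),
                              z * (5*z**2/r**2 - 3)])

# Example: a point 700 km above the equator; J2 adds roughly 0.1% to the pull here.
pos = np.array([RE + 700e3, 0.0, 0.0])
print(accel_point_mass(pos), accel_j2(pos))
```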
Ex.4.4 Q2 Quadratic Equations Solutions - NCERT Maths Class 10
Question
Find the value of \(k\) for each of the following quadratic equations, so that they have two equal roots.
(i) \(2x^\text{2}+kx+3=0\)
(ii) \(kx \left(x-2\right)+6=0\)
Text Solution
What is unknown?
The value of \(k.\)
What is known?
The quadratic equation has two equal real roots.
Reasoning:
Since the quadratic equation has equal real roots, the discriminant \(b^\text{2}-4ac=0.\)
Steps:
(i) \(2x^\text{2}+kx+3=0\)
\[a= 2,\;b = k,\;c = 3\]
\[\begin{align}{b^2} - 4ac &= 0\\{{(k)}^2} - 4(2)(3) &= 0\\{k^2} - 24 &= 0\\{k^2} &= 24\\k &= \pm \sqrt {24} \\k &= \pm \sqrt {2 \times 2 \times 2 \times 3} \\k& = \pm 2\sqrt 6 \end{align}\]
(ii) \(kx \left(x-2\right)+6=0\)
\[a = k,\;b = - 2k,\;c = 6\]
\[\begin{align}{b^2} - 4ac &= 0\\{{( - 2k)}^2} - 4(k)(6) &= 0\\4{k^2} - 24k &= 0\\4k(k - 6) &= 0\\k = 6 & \qquad k = 0\\\end{align}\]
If we take the value of \(k\) as \(0,\) then the equation will no longer be quadratic. Therefore, \(k = 6.\)
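As a quick optional check (not part of the textbook solution), the same discriminant condition can be verified symbolically; note that \(k=0\) still has to be discarded, because the equation would then no longer be quadratic.

```python
# Verify the "equal roots" condition discriminant = 0 for both equations.
import sympy as sp

k, x = sp.symbols('k x')
print(sp.solve(sp.discriminant(2*x**2 + k*x + 3, x), k))   # [-2*sqrt(6), 2*sqrt(6)]
print(sp.solve(sp.discriminant(k*x*(x - 2) + 6, x), k))     # [0, 6]; k = 0 is rejected
```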
Geometry and Topology Seminar Contents Fall 2016 Spring 2017 date speaker title host(s) Jan 20 Carmen Rovi (University of Indiana Bloomington) "The mod 8 signature of a fiber bundle" Maxim Jan 27 Feb 3 Feb 10 Feb 17 Yair Hartman (Northwestern University) "TBA" Dymarz Feb 24 March 3 Mark Powell (Université du Québec à Montréal) "TBA" Kjuchukova March 10 March 17 March 24 Spring Break March 31 April 7 April 14 April 21 Joseph Maher (CUNY) "TBA" Dymarz April 28 Bena Tshishiku (Harvard) "TBA" Dymarz Fall Abstracts Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud). Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surfaces, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \C^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is a joint work with Jeff Viaclovsky. Sean Howe Representation stability and hypersurface sections We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}! Nan Li Quantitative estimates on the singular sets of Alexandrov spaces The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber. Yu Li In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. 
A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature. Peyman Morteza We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic we prove an existence result. This is a joint work with Jeff Viaclovsky. Caglar Uyanik Geometry and dynamics of free group automorphisms A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups; mapping class groups and outer automorphism groups of free groups.We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups. Bing Wang The extension problem of the mean curvature flow We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li. Ben Weinkove Gauduchon metrics with prescribed volume form Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti. Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set. Yu Zeng Short time existence of the Calabi flow with rough initial data Calabi flow was introduced by Calabi back in 1950’s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits short time solution for any continuous initial Kaehler metric. This is a joint work with Weiyong He. Spring Abstracts Carmen Rovi The mod 8 signature of a fiber bundle In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. 
In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles. Bena Tshishiku "TBA" Archive of past Geometry seminars 2015-2016: Geometry_and_Topology_Seminar_2015-2016 2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology
The 2nd law of thermodynamics can be stated in terms of entropy as follows: $dS \geq \frac{dQ}{T},$ which holds for all quasistatic processes (reversible and irreversible ones). Is there a generalization of this statement to a general process between two equilibrium states $e_1$ and $e_2$ (a non-quasistatic process)? I.e. can one write down a similar inequality for $\Delta S = S(e_2) - S(e_1)$ (linking it to $\Delta Q$ and so on)? Or at the very least, is it possible to derive the well-known $\Delta S \geq 0$ for an isolated system? I'm aware of the fact that one can always write $\Delta S = \int_{\gamma} \frac{dQ}{T}$ for any reversible process $\gamma$ driving the system from $e_1$ to $e_2$. However, it's not obvious how to exploit this, if at all.
I'll write down the idea of the "method of Lagrange multipliers" here for my notes. For simplicity, the number of variables will be 3. Suppose a function $\psi: \Omega \to \mathbb{R}$ is differentiable in each variable, and consider the constraint $\psi(x,y,z)=0$. If a function $f : \mathbb{R}^3 \to \mathbb{R}$ attains an extremum at $p=(x_0, y_0, z_0)\in \Omega \subseteq \mathbb{R}^3$ subject to this constraint (and $\nabla\psi(p) \neq 0$), then the following equations hold for some constant $\lambda\in\mathbb{R}$: $\displaystyle \frac{\partial f}{\partial x}(p)-\lambda\frac{\partial \psi}{\partial x}(p) =\frac{\partial f}{\partial y}(p)-\lambda\frac{\partial \psi}{\partial y}(p)=\frac{\partial f}{\partial z}(p)-\lambda\frac{\partial \psi}{\partial z}(p)=0$
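A small symbolic sketch of solving this stationarity system for a concrete choice, say $f(x,y,z) = x+y+z$ with the constraint $\psi(x,y,z) = x^2+y^2+z^2-1$ (the example is mine, not part of the note above):

```python
# Solve the Lagrange-multiplier system df/dv - lambda * dpsi/dv = 0, psi = 0.
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)
f   = x + y + z
psi = x**2 + y**2 + z**2 - 1

eqs = [sp.diff(f, v) - lam * sp.diff(psi, v) for v in (x, y, z)] + [psi]
print(sp.solve(eqs, [x, y, z, lam], dict=True))
# Two candidate points: (1,1,1)/sqrt(3) (the maximum) and -(1,1,1)/sqrt(3) (the minimum).
```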
Is the triplet representation of $SU(2)$ the same as its adjoint representation? The convention used here for the adjoint representation is the particle-physics one, where the structure constants are real and totally antisymmetric: $$ \mathrm{ad}(t^b_G)_{ac} = i f^{abc} $$ I was under the impression that it was, but I see two different forms of the generators of the triplet representation in use: one is just the set of real, skew-symmetric generators of the $SO(3)$ rotation group (up to a factor of $i$), which agrees with the adjoint representation, and the other is: $$ T^1 = \frac{1}{\sqrt{2}} \left(\begin{matrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0\end{matrix}\right) \quad T^2 = \frac{1}{\sqrt{2}} \left(\begin{matrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0\end{matrix}\right) \quad T^3= \left(\begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{matrix}\right)$$ These two sets of matrices do not agree, so I assume that my idea about the adjoint representation of $SU(2)$ being its triplet representation is wrong, but why?
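A quick numerical check (my own aside, using numpy) that both sets of 3x3 matrices satisfy the same $su(2)$ algebra $[T^a,T^b]=i\epsilon^{abc}T^c$; since any two irreducible three-dimensional representations of $su(2)$ are unitarily equivalent, the two forms are the same (triplet = adjoint) representation written in different bases (spherical vs. Cartesian).

```python
import numpy as np
from itertools import product

s = 1 / np.sqrt(2)
# Spin-1 generators in the "spherical" basis, as quoted in the question
T1 = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
T2 = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
T3 = np.diag([1.0, 0.0, -1.0]).astype(complex)
spherical = [T1, T2, T3]

# Adjoint ("Cartesian") generators: (L_a)_{bc} = -i * epsilon_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in product(range(3), repeat=3):
    eps[a, b, c] = (a - b) * (b - c) * (c - a) / 2   # Levi-Civita symbol
cartesian = [-1j * eps[a] for a in range(3)]

def satisfies_su2(T):
    """Check [T_a, T_b] = i eps_{abc} T_c for all a, b."""
    for a, b in product(range(3), repeat=2):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        if not np.allclose(comm, rhs):
            return False
    return True

print(satisfies_su2(spherical), satisfies_su2(cartesian))   # True True
```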
There are already several good answers. However, the off-shell aspect related to Noether Theorem has not been addressed so far. (The words on-shell and off-shell refer to whether the equations of motion (e.o.m.) are satisfied or not.) Let me rephrase the problem as follows. Consider a (not necessarily isolated) Hamiltonian system with $N$ degrees of freedom (d.o.f.). The phase space has $2N$ coordinates, which we denote $(z^1, \ldots, z^{2N})$. (We shall have nothing to say about the corresponding Lagrangian problem.) 1) Symplectic structure. Usually, we work in Darboux coordinates $(q^1, \ldots, q^N; p_1, \ldots, p_N)$, with the canonical symplectic potential one-form $$\vartheta=\sum_{i=1}^N p_i dq^i.$$ However, it turns out to be more efficient in later calculations, if we instead from the beginning consider general coordinates $(z^1, \ldots, z^{2N})$ and a general (globally defined) symplectic potential one-form $$\vartheta=\sum_{I=1}^{2N} \vartheta_I(z;t) dz^I,$$ with non-degenerate (=invertible) symplectic two-form $$\omega = \frac{1}{2}\sum_{I,J=1}^{2N} \omega_{IJ} \ dz^I \wedge dz^J = d\vartheta,\qquad\omega_{IJ} =\partial_{[I}\vartheta_{J]}=\partial_{I}\vartheta_{J}-\partial_{J}\vartheta_{I}. $$ The corresponding Poisson bracket is $$\{f,g\} = \sum_{I,J=1}^{2N} (\partial_I f) \omega^{IJ} (\partial_J g), \qquad \sum_{J=1}^{2N} \omega_{IJ}\omega^{JK}= \delta_I^K. $$ 2) Action. The Hamiltonian action $S$ reads $$ S[z]= \int dt\ L_H(z^1, \ldots, z^{2N};\dot{z}^1, \ldots, \dot{z}^{2N};t),$$ where $$ L_H(z;\dot{z};t)= \sum_{I=1}^{2N} \vartheta_I(z;t) \dot{z}^I- H(z;t) $$ is the Hamiltonian Lagrangian. By infinitesimal variation $$\delta S = \int dt\sum_{I=1}^{2N}\delta z^I \left( \sum_{J=1}^{2N}\omega_{IJ} \dot{z}^J-\partial_I H - \partial_0\vartheta_I\right)+ \int dt \frac{d}{dt}\sum_{I=1}^{2N}\vartheta_I \delta z^I, \qquad \partial_0 \equiv\frac{\partial }{\partial t},$$ of the action $S$, we find the Hamilton e.o.m. $$ \dot{z}^I \approx \sum_{J=1}^{2N}\omega^{IJ}\left(\partial_J H + \partial_0\vartheta_J\right) = \{z^I,H\} + \sum_{J=1}^{2N}\omega^{IJ}\partial_0\vartheta_J. $$ (We will use the $\approx$ sign to stress that an equation is an on-shell equation.) 3) Constants of motion. The solution $$z^I = Z^I(a^1, \ldots, a^{2N};t)$$ to the first-order Hamilton e.o.m. depends on $2N$ constants of integration $(a^1, \ldots, a^{2N})$. Assuming appropriate regularity conditions, it is in principle possible to invert locally this relation such that the constants of integration $$a^I=A^I(z^1, \ldots, z^{2N};t)$$ are expressed in terms of the $(z^1, \ldots, z^{2N})$ variables and time $t$. These functions $A^I$ are $2N$ constants of motion (c.o.m.), i.e., constant in time $\frac{dA^I}{dt}\approx0$. Any function $B(A^1, \ldots, A^{2N})$ of the $A$'s, but without explicit time dependence, will again be a c.o.m. In particular, we may express the initial values $(z^1_0, \ldots, z^{2N}_0)$ at time $t=0$ as functions $$Z^J_0(z;t)=Z^J(A^1(z;t), \ldots, A^{2N}(z;t); t=0)$$ of the $A$'s, so that $Z^J_0$ become c.o.m. Now, let $$b^I=B^I(z^1, \ldots, z^{2N};t)$$ be $2N$ independent c.o.m., which we have argued above must exist. The question is if there exist $2N$ off-shell symmetries of the action $S$, such that the corresponding Noether currents are on-shell c.o.m.? Remark. 
It should be stressed that an on-shell symmetry is a vacuous notion, because if we vary the action $\delta S$ and apply e.o.m., then $\delta S\approx 0$ vanishes by definition (modulo boundary terms), independent of what the variation $\delta$ consists of. For this reason we often just shorten off-shell symmetry into symmetry. On the other hand, when speaking of c.o.m., we always assume e.o.m. 4) Change of coordinates. Since the action $S$ is invariant under change of coordinates, we may simply change coordinates $z\to b = B(z;t)$ to the $2N$ c.o.m., and use the $b$'s as coordinates (which we will just call $z$ from now on). Then the e.o.m. in these coordinates are just $$\frac{dz^I}{dt}\approx0,$$ so we conclude that in these coordinates, we have $$ \partial_J H + \partial_0 \vartheta_J=0$$ as an off-shell equation. [An aside: This implies that the symplectic matrix $\omega_{IJ}$ does not depend explicitly on time, $$\partial_0\omega_{IJ} =\partial_0\partial_{[I}\vartheta_{J]}=\partial_{[I} \partial_0\vartheta_{J]}=-\partial_{[I}\partial_{J]} H=0.$$ Hence the Poisson matrix $\{z^I,z^J\}=\omega^{IJ}$ does not depend explicitly on time. By Darboux Theorem, we may locally find Darboux coordinates $(q^1, \ldots, q^N; p_1, \ldots, p_N)$, which are also c.o.m.] 5) Variation. We now perform an infinitesimal variation $\delta= \varepsilon\{z^{I_0}, \cdot \}$, $$\delta z^J = \varepsilon\{z^{I_0}, z^J\}=\varepsilon \omega^{I_0 J},$$ with Hamiltonian generator $z^{I_0}$, where $I_0\in\{1, \ldots, 2N\}$. It is straightforward to check that the infinitesimal variation $\delta= \varepsilon\{z^{I_0}, \cdot \}$ is an off-shell symmetry of the action (modulo boundary terms) $$\delta S = \varepsilon\int dt \frac{d f^0}{dt}, $$ where $$f^0 = z^{I_0}+ \sum_{J=1}^{2N}\omega^{I_0J}\vartheta_J.$$ The bare Noether current is $$j^0 = \sum_{J=1}^{2N}\frac{\partial L_H}{\partial \dot{z}^J} \omega^{I_0 J}=\sum_{J=1}^{2N}\omega^{I_0J}\vartheta_J,$$ so that the full Noether current $$ J^0=j^0-f^0=-z^{I_0} $$ becomes just (minus) the Hamiltonian generator $z^{I_0}$, which is conserved on-shell $\frac{dJ^0}{dt}\approx 0$ by definition. So the answer is yes in the Hamiltonian case.
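To see point 3) in the simplest possible setting, here is a small sympy check of my own (not from the original answer) for the 1D harmonic oscillator $H=(p^2+q^2)/2$ in Darboux coordinates: the functions expressing the initial data in terms of $(q,p,t)$ are constants of motion, and they define a canonical change of variables.

```python
import sympy as sp

t, q, p = sp.symbols('t q p', real=True)
H = (p**2 + q**2) / 2

# Flow: q(t) = q0 cos t + p0 sin t, p(t) = p0 cos t - q0 sin t.  Inverting gives
# the initial data as functions of (q, p, t):
A1 = q * sp.cos(t) - p * sp.sin(t)   # = q0
A2 = q * sp.sin(t) + p * sp.cos(t)   # = p0

def total_time_derivative(F):
    """dF/dt along the flow = explicit t-derivative + Poisson bracket {F, H}."""
    bracket = sp.diff(F, q) * sp.diff(H, p) - sp.diff(F, p) * sp.diff(H, q)
    return sp.simplify(sp.diff(F, t) + bracket)

print(total_time_derivative(A1), total_time_derivative(A2))   # 0 0: both are c.o.m.

# The change of variables (q, p) -> (A1, A2) is canonical: its Jacobian has det 1
J = sp.Matrix([[sp.diff(A1, q), sp.diff(A1, p)],
               [sp.diff(A2, q), sp.diff(A2, p)]])
print(sp.simplify(J.det()))   # 1
```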
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook referes to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first ( it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.Vertex:$$ie(P_A+P_B)^{\mu}$$External Boson: $1$Photon: $\epsilon_{\mu}$Multiplying these will give the inv... As I am now studying on the history of discovery of electricity so I am searching on each scientists on Google but I am not getting a good answers on some scientists.So I want to ask you to provide a good app for studying on the history of scientists? I am working on correlation in quantum systems.Consider for an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$ under the assumption which fulfilled continuity.My question is that would it be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! 
:) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate. If you want to experiment lens and optics, the you may use Mistibushi Renderer, those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars.== Method ===== One handed ===One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months.Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. 
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
Two principles here: (1) when dealing with a differential equation, you define intermediate state variables so that everything is in terms of first derivatives; (2) this system is nonlinear, so the state-space equations won't be in terms of matrices. Applying these principles, we define a state vector:$$\mathbf x = [x_1, x_2]^T,$$where:$$x_1 = y \\x_2 = \dot y$$Note that:$$\dot x_1 = x_2$$Substituting these into your original equation yields:$$\dot x_1 = x_2 = f_1(x_2) \\\dot x_2 = g - \frac{C}{m} \frac{u^2}{x_1^2} = f_2(x_1, u)$$which is of the form:$$\dot{\mathbf{x}} = \mathbf f(\mathbf x, u, t)$$Since the model coefficients (i.e. $C, g, m$) don't depend on time, we can drop the $t$:$$\dot{\mathbf{x}} = \mathbf f(\mathbf x, u)$$Now that you have put the ODE in explicit form, you can take derivatives to linearize about an operating point. The basic idea is to approximate $\mathbf f(\mathbf x, u)$ with a first-order Taylor approximation about some operating point $(\mathbf x_0, u_0)$:$$\mathbf f(\mathbf x, u) \approx \mathbf f(\mathbf x_0, u_0) + (\mathbf x - \mathbf x_0) \frac{\partial \mathbf f}{\partial \mathbf x}\Bigg|_{(\mathbf x_0, u_0)} + (u - u_0) \frac{\partial \mathbf f}{\partial u} \Bigg|_{(\mathbf x_0, u_0)}$$ EDIT: From the referenced lecture slides, note that:$$\mathbf f(\mathbf x, u) = \mathbf f(\mathbf x_0 + \Delta \mathbf x, u_0 + \Delta u) = \dot{\mathbf{x}}|_{(\mathbf x_0, u_0)} + \dot{\Delta \mathbf{x}},$$where $\Delta \mathbf x = \mathbf x - \mathbf x_0$, $\Delta u = u - u_0$, and$$\dot{\mathbf{x}}|_{(\mathbf x_0, u_0)} = \mathbf f(\mathbf x_0, u_0).$$This leads to a linear, time-invariant system in terms of the state variable $\Delta \mathbf x$:$$\dot{\Delta \mathbf{x}} = \mathrm A \Delta \mathbf x + \mathrm B \Delta u,$$where:$$\mathrm A = \frac{\partial \mathbf f}{\partial \mathbf x}\Bigg|_{(\mathbf x_0, u_0)} \\\mathrm B = \frac{\partial \mathbf f}{\partial u}\Bigg|_{(\mathbf x_0, u_0)}$$The solution of this coupled set of equations yields the displacements $\Delta \mathbf x$ from the operating point $(\mathbf x_0, u_0)$. This is true for any operating point $(\mathbf x_0, u_0)$, regardless of whether or not $\mathbf f(\mathbf x_0, u_0) = 0$. Also note that this system is LTI mathematically, not physically. In reality, the matrices $\mathrm A$ and $\mathrm B$ can also change with time. Note that a good control algorithm based on a linearized model will have to update the operating point at a fast enough rate that the nonlinearities don't cause any problems.
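To make this concrete, here is a small numpy sketch of the linearization for this particular $\mathbf f$ (the parameter values $C$, $m$, $g$, the chosen operating point, and the analytic Jacobians are my own illustration, not taken from the question):

```python
import numpy as np

# Illustrative parameter values (not from the original question)
C, m, g = 1.0, 1.0, 9.81

def f(x, u):
    """Nonlinear state equations: x = [x1, x2], x1' = x2, x2' = g - (C/m) u^2 / x1^2."""
    x1, x2 = x
    return np.array([x2, g - (C / m) * u**2 / x1**2])

def jacobians(x, u):
    """Analytic A = df/dx and B = df/du for this particular f."""
    x1, _ = x
    A = np.array([[0.0, 1.0],
                  [2.0 * (C / m) * u**2 / x1**3, 0.0]])
    B = np.array([[0.0],
                  [-2.0 * (C / m) * u / x1**2]])
    return A, B

# Pick an equilibrium operating point: x2 = 0 and u0 chosen so that f(x0, u0) = 0
x1_0 = 0.5
u0 = x1_0 * np.sqrt(g * m / C)
x0 = np.array([x1_0, 0.0])
assert np.allclose(f(x0, u0), 0.0)

A, B = jacobians(x0, u0)
print(A)   # [[0, 1], [2 g / x1_0, 0]]
print(B)   # [[0], [-2 sqrt(g m / C) / x1_0]]
```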
Here is a shot in the dark (Disclosure: I really know nothing about this problem). Let $G:=\mathsf{SU}(2)$ act on $G^3$ by simultaneous conjugation; namely, $$g\cdot(a,b,c)=(gag^{-1},gbg^{-1},gcg^{-1}).$$ Then the quotient space is homeomorphic to $S^6$ (see Bratholdt-Cooper). The evaluation map shows that the character variety $\mathfrak{X}:=\mathrm{Hom}(\pi_1(\Sigma),G)/G$ is homeomorphic to $G^3/G,$ where $\Sigma$ is an elliptic curve with two punctures. Fixing generic conjugation classes around the punctures, by results of Mehta and Seshadri (Math. Ann. 248, 1980), gives the moduli space of fixed determinant rank 2 degree 0 parabolic vector bundles over $\Sigma$ (where we now think of the punctures as marked points with parabolic structure). In particular, these subspaces are projective varieties. Letting the boundary data vary over all possibilities gives a foliation of $\mathfrak{X}\cong G^3/G\cong S^6$. Therefore, we have a foliation of $S^6$ where generic leaves are projective varieties; in particular, complex. Moreover, the leaves are symplectic, given by Goldman's 2-form, making them Kähler (generically). The symplectic structures on the leaves globalize to a Poisson structure on all of $\mathfrak{X}$. Is it possible that the complex structures on the generic leaves also globalize? Here are some issues:

1. As far as I know, the existence of complex structures on the leaves is generic. It is known to exist exactly when there is a correspondence to a moduli space of parabolic bundles. This happens for most, but perhaps not all, conjugation classes around the punctures (or marked points). So I would first want to show that all the leaves of this foliation do in fact admit a complex structure. Given how explicit this construction is, if it is true, it may be possible to establish it by brute force.

2. Assuming item 1., one then needs to show that the structures on the leaves globalize to a complex structure on all of $\mathfrak{X}$. Given that in this setting the foliation is given by the fibers of the map $\mathfrak{X}\to [-2,2]\times [-2,2]$, $[\rho]\mapsto (\mathrm{Tr}(\rho(c_1)),\mathrm{Tr}(\rho(c_2)))$, with respect to a presentation $\pi_1(\Sigma)=\langle a,b,c_1,c_2\ |\ aba^{-1}b^{-1}c_1c_2=1\rangle$, it seems conceivable that the structures on the leaves might be compatible.

3. Moreover, $\mathfrak{X}$ is not a smooth manifold. It is singular despite being homeomorphic to $S^6$. So lastly, one would have to argue that everything in play (leaves, total space and complex structure) can be "smoothed out" in a compatible fashion. This to me seems like the hardest part, if 1. and 2. are even true.

Anyway, it is a shot in the dark, probably this is not possible...just the first thing I thought of when I read the question.
On small oscillations of mechanical systems with time-dependent kinetic and potential energy
1. Dipartimento di Matematica Pura e Applicata, Università de L'Aquila, I-67100 L'Aquila
2. Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H-6720 Szeged, Hungary
$$\sum_{k=1}^{n}\bigl(a_{ik}(t)\ddot q_k+c_{ik}(t)q_k\bigr)=0, \qquad (i=1,2,\ldots,n). \tag{*}$$
A nontrivial solution $q_1^0,\ldots ,q_n^0$ is called small if $\lim_{t\to \infty}q_k^0(t)=0$ $(k=1,2,\ldots,n)$. It is known that in the scalar case ($n=1$, $a_{11}(t)\equiv 1$, $c_{11}(t)=:c(t)$) there exists a small solution if $c$ is increasing and it tends to infinity as $t\to \infty$. Sufficient conditions for the existence of a small solution of the general system (*) are given in the case when the coefficients $a_{ik}$, $c_{ik}$ are step functions. The method of proof is based upon a transformation reducing the ODE (*) to a discrete dynamical system. The results are illustrated by the examples of the coupled harmonic oscillator and the double pendulum.
Mathematics Subject Classification: Primary: 70J25, 34D05; Secondary: 39A1.
Citation: Nicola Guglielmi, László Hatvani. On small oscillations of mechanical systems with time-dependent kinetic and potential energy. Discrete & Continuous Dynamical Systems - A, 2008, 20 (4): 911-926. doi: 10.3934/dcds.2008.20.911
De Bruijn-Newman constant For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. De Bruijn and Newman showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math].
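As a small numerical aside (mine, not part of the wiki page), the definitions above are easy to evaluate directly by truncating the n-sum and the u-integral; the cutoffs nmax and umax below are ad hoc but generous given the super-exponential decay of Phi, and the comment about where the first real zero of H_0 sits rests on the standard normalization, which should be verified rather than taken for granted.

```python
import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=50):
    """Truncation of the super-exponentially decaying sum defining Phi(u)."""
    n = np.arange(1, nmax + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, umax=6.0):
    """H_t(z) = int_0^inf exp(t u^2) Phi(u) cos(z u) du, truncated at u = umax."""
    val, _ = quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(z * u), 0.0, umax, limit=200)
    return val

# H_0 should be positive at z = 0 and change sign near z ~ 28.27 (twice the first
# zeta zero) if the usual normalization H_0(z) = xi(1/2 + i z/2)/8 applies -- treat
# the exact location as something to check, not as a given.
for z in (0.0, 20.0, 28.0, 29.0):
    print(z, H(0.0, z))
```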
For example, rationalizing expressions like $$\frac{1}{\pm \sqrt{a} \pm \sqrt{b}}$$ is straightforward. Moreover, cases like $$\frac{1}{\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c}}$$ and $$\frac{1}{\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c} \pm \sqrt{d}}$$ are still easy to rationalize. But my question is about the more general case $$\frac{1}{\pm \sqrt{a_1} \pm \sqrt{a_2} \cdots \pm \sqrt{a_n}}$$ where $n \ge 5$. Are they always rationalizable? If so, what would an algorithm to rationalize them look like? If not, then a proof of impossibility must exist. From my point of view, I can't find an obvious way to rationalize the case $n=5$, since grouping the radicals into a group of 3 and a group of 2 radicals and then applying the identity $$(a-b)(a+b)=a^2-b^2$$ just changes the denominator from $$\pm \sqrt{a} \pm \sqrt{b} \pm \sqrt{c} \pm \sqrt{d} \pm \sqrt{e}$$ to $$\pm v \pm \sqrt{w} \pm \sqrt{x} \pm \sqrt{y} \pm \sqrt{z}.$$ Will this help at all? Or is a different method or identity needed?
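For what it's worth, yes, such expressions are always rationalizable, and one concrete algorithm (a sketch, not claimed to be optimal) is: multiply numerator and denominator by every other sign pattern $\pm\sqrt{a_1}\pm\cdots\pm\sqrt{a_n}$. The product over all $2^n$ sign patterns is invariant under each substitution $\sqrt{a_i}\to-\sqrt{a_i}$ (flipping a sign only permutes the factors), so after expansion only even powers of each radical survive and the result is rational. The price is $2^n-1$ conjugate factors, so the method is exponential in $n$. A small sympy sketch (the helper name and the sample radicands are mine):

```python
from itertools import product
from sympy import sqrt, expand

def rationalize_denominator(radicands):
    """Clear the denominator of 1/(sqrt(a_1) + ... + sqrt(a_n)).
    Returns (numerator, denominator) with the denominator a plain rational number."""
    roots = [sqrt(a) for a in radicands]
    denom = sum(roots)
    numerator, cleared = 1, denom
    for signs in product([1, -1], repeat=len(roots)):
        if all(s == 1 for s in signs):
            continue                                   # skip the original denominator itself
        conjugate = sum(s * r for s, r in zip(signs, roots))
        numerator = expand(numerator * conjugate)
        cleared = expand(cleared * conjugate)          # stays collected at every step
    return numerator, cleared

num, den = rationalize_denominator([2, 3, 5, 7, 11])   # the n = 5 case from the question
print(den, den.is_rational)                            # a (possibly large) integer, True
```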
Black holes are states at thermal equilibrium, at a positive temperature inversely proportional to the mass, and they therefore emit black body radiation at that temperature. Everybody knows that. But why? By far the most common explanation in popular accounts is in terms of pair production of particles at the horizon, which is a shame because I personally find it very opaque. The most popular among physicists instead is the "standard" derivation based on diagonalizing the equations of motion for the fields in two different charts and the Bogoliubov transformations and blah blah blah. Technical. As reliable and as sexy as a brick. It would be cool if there was a more elegant argument. Perhaps simpler, depending on your definition of simpler. Well, there is. Here it is: Let's use natural units. \( \hbar = c = k_B = 1\). (The inverse) temperature is periodicity in imaginary time. Consider an observable of a system which is a function of a duration of time, \( A(t)\). For example, if I give it a kick at time \(t'\), how big of a noise is it going to make at time \(t'+t\)? The answer to this question is an observable depending on the time interval \(t\). Ok, then more often than not these \(A(t)\) will be analytic functions and to some extent you will be able to extend them to functions of complex \(t\). Then the statement is: If all of these functions \( A(t)\) are periodic of period \(\beta\) in imaginary time, that is \(A(t + i \beta) = A(t)\), then the system is actually in thermal equilibrium at temperature \(T = 1/\beta\). This is a very important and fundamental relationship. I can't really introduce it simply, the most elementary introduction I've seen yet is this. Now, assuming this is known, let's see the geometry part of this post. The Schwarzschild metric is: $$ ds^2 = - (1-\frac{r_S}{r}) dt^2 + (1-\frac{r_S}{r})^{-1} dr^2 + r^2 d\Omega^2$$ but as we will see all that matters is what happens close to the horizon at \( r = r_S\). So we change variables \( r = r_S(1+\epsilon)\) and take \( \epsilon\) very small so we study the near-horizon geometry. (It's not actually necessary to take this limit - it just simplifies the following calculations without compromising the result). So $$ds^2 = - \epsilon dt^2 + \frac{r_S^2}{\epsilon} d\epsilon^2 + r_S^2 d\Omega^2$$ Cool, now let's take a final change of variables to send this into a decent form, we try \( \frac{\rho^2}{4 r_S^2} = \epsilon\): $$ ds^2 = - \frac{\rho^2}{4r_S^2} dt^2 + d \rho^2 + r_S^2 d\Omega^2$$ This is actually all well-known, it's just the Rindler metric. Spacetime right outside a black hole is well approximated by uniformly accelerating spacetime, makes a lot of sense. ☆ But now, here's the actual quantum magic. Let's extend the metric to complex t. In particular we care about the imaginary axis, so \( t = i \tau\) with \( \tau\) real. So $$ ds^2 = \frac{\rho^2}{4r_S^2} d \tau^2 + d\rho^2 + r_S^2 d\Omega^2$$ Now forget about the \( \Omega\) part - that's not an issue. Look at the τ - ρ part. Look familiar? It's just the metric for the flat Euclidean plane, in polar coordinates: \( ds^2 = dr^2 + r^2 d\theta^2\). You just need to identify \( \theta = \tau/(2 r_S) \). Actually, however, the identification is not perfect because of the periodicity of \( \theta \), which is of course \( 2\pi \). If the periodicity is anything other than that, this geometry will have an angular defect or a conical singularity.
For example, if the period is \( \pi \), then you can build this by taking a sheet of paper, marking a point along an edge, and gluing together the two pieces of the edge on the two sides of the marked point. You very clearly get a cone with a sharp tip. The tip has curvature, but we don't want curvature: curvature is proportional to stress-energy in general relativity, but black holes have no energy lying about at the horizon - they are empty. The tip must go away and so the period must be \( 2\pi \). So the period of \( \tau \) is \( 4\pi r_S = 8\pi G M \). So black holes are states of temperature \( T = 1/(8\pi G M) \), as measured by observers whose time is \( t \), therefore those at infinity. Simply glorious. It's very close to being elementary, because except for the relationship between temperature and imaginary time everything is pretty much changes of variables. It's such a cool idea that the thermodynamics of the quantum black hole metric is encoded somewhat in its geometrical structure. And it's also very general: we did not need to setup any specific quantum field theory (= a quantum particle theory) on the black hole metric, as you do in the other derivations. In fact, we know whatever theory actually describes quantum gravity, it cannot be a QFT. So this kind of argument is very powerful. On the other hand, this is a pretty informal argument. It's very creative, so to speak, but not particularly rigorous. There are a million points where you could nitpick. Therefore, this is only seen as a secondary reinforcement of the results of the more precise derivations. I won't comment further on all these million subtleties, because my intent here was only to introduce the argument in its deceiving beauty. Bonus round By the way, from the ☆ onwards this is also a proof of the Unruh effect - obvious because the latter and Hawking radiation are more or less the same thing. In the case of Unruh radiation the imaginary-time metric really is flat Euclidean space. In our case of a Schwarzschild black hole, this is only true in the near-horizon limit. If we take into account the whole exterior region, the imaginary-time metric is describable as and is in fact called "a cigar". The tip of the cigar is smooth (because we imposed it!) and of course when you zoom in becomes the flat plane. This doesn't really change the required periodicity of \( \tau\), which runs as an angular coordinate around the cigar. Refs I believe the first time the imaginary-time periodicity argument for Hawking radiation was appreciated is this 1977 Hawking + Gibbons paper. Also please take a look at the last sentence of the abstract - pretty interesting.
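A small numerical postscript of my own (not part of the original post): restoring \( \hbar \), \( c \), \( k_B \) in \( T = 1/(8\pi G M) \) gives \( T = \hbar c^3/(8\pi G M k_B) \), which is easy to evaluate for, say, a solar-mass black hole.

```python
from math import pi
from scipy.constants import hbar, c, G, k as k_B

M_sun = 1.989e30                                  # kg, one solar mass
T_H = hbar * c**3 / (8 * pi * G * M_sun * k_B)    # T = hbar c^3 / (8 pi G M k_B)
print(f"{T_H:.3g} K")                             # ~6e-8 K: far colder than the CMB
```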
Colloquia/Fall18

Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018
| date | speaker | title | host(s) |
| January 29 (Monday) | Li Chao (Columbia) | Elliptic curves and Goldfeld's conjecture | Jordan Ellenberg |
| February 2 (Room: 911) | Thomas Fai (Harvard) | The Lubricated Immersed Boundary Method | Spagnolie, Smith |
| February 5 (Monday, Room: 911) | Alex Lubotzky (Hebrew University) | High dimensional expanders: From Ramanujan graphs to Ramanujan complexes | Ellenberg, Gurevitch |
| February 6 (Tuesday 2 pm, Room 911) | Alex Lubotzky (Hebrew University) | Groups' approximation, stability and high dimensional expanders | Ellenberg, Gurevitch |
| February 9 | Wes Pegden (CMU) | The fractal nature of the Abelian Sandpile | Roch |
| March 2 | Aaron Bertram (University of Utah) | Stability in Algebraic Geometry | Caldararu |
| March 16 (Room: 911) | Anne Gelb (Dartmouth) | Reducing the effects of bad data measurements using variance based weighted joint sparsity | WIMAW |
| April 4 (Wednesday) | John Baez (UC Riverside) | TBA | Craciun |
| April 6 | Edray Goins (Purdue) | Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups | Melanie |
| April 13 | Jill Pipher (Brown) | TBA | WIMAW |
| April 16 (Monday) | Christine Berkesch Zamaere (University of Minnesota) | TBA | Erman, Sam |
| April 25 (Wednesday) | Hitoshi Ishii (Waseda University) | Wasow lecture, TBA | Tran |

Spring Abstracts January 29 Li Chao (Columbia) Title: Elliptic curves and Goldfeld's conjecture Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture and illustrate the key ideas and ingredients behind this recent progress. February 2 Thomas Fai (Harvard) Title: The Lubricated Immersed Boundary Method Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges.
Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics. February 5 Alex Lubotzky (Hebrew University) Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes Abstract: Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and more recently also in pure math. The first explicit construction of bounded degree expanding graphs was given by Margulis in the early 70's. In the mid 80's Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs which are optimal such expanders. In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of existence of such bounded degree complexes of dimension d>1. This question was answered recently affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2 and by S. Evra and T. Kaufman for general d) by showing that the d-skeletons of (d+1)-dimensional Ramanujan complexes provide such topological expanders. We will describe these developments and the general area of high dimensional expanders. February 6 Alex Lubotzky (Hebrew University) Title: Groups' approximation, stability and high dimensional expanders Abstract: Several well-known open questions, such as: are all groups sofic or hyperlinear?, have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm. The strategy is via the notion of "stability": certain higher dimensional cohomology vanishing phenomena are proven to imply stability, and using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated. All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom. February 9 Wes Pegden (CMU) Title: The fractal nature of the Abelian Sandpile Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation).
We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research. March 2 Aaron Bertram (Utah) Title: Stability in Algebraic Geometry Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area. March 16 Anne Gelb (Dartmouth) Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data. April 6 Edray Goins (Purdue) Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math] This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. 
[/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus. This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel, with assistance by Edray Goins and Abhishek Parab.
$\DeclareMathOperator{\erfc}{erfc} \DeclareMathOperator{\Ei}{Ei} $ What is the series expansion of $f$ for small $q$? \begin{align} U(q) &= q e^{q^2}\erfc q\\ I(q,q') &= \int_0^{2\pi} \frac{d\phi}{2\pi} U(\sqrt{q^2 + q'^2 -2qq'\cos\phi})-U(q')\\ f(q) &= \int_{0}^{\infty} dq'\,I(q,q') \end{align} Even better, perhaps you can integrate this analytically to find $f$ [or $I(q,q')]$? When I numerically integrate, the results are not grid dependent for small $q$: i.e. I get consistent results with double the gridpoints and/or double the grid maximum, so it seems to be well-behaved (and the result looks approximately quadratic in $q$). Here is my attempt to find the series expansion of $f$ by differentiating under both integrals, then integrating over $\phi$ analytically: \begin{align} D_n(q') &\equiv \left[\frac{\partial ^{n} I(q,q')}{\partial q^n}\right]_{q=0}\\ D_0(q') &= 0\\ D_1(q') &= \int_0^{2\pi} \frac{d\phi}{2\pi} \left[ \frac{U'(\sqrt{q^2 + q'^2 -2qq'\cos\phi})}{2\sqrt{q^2 + q'^2 -2qq'\cos\phi}} (2q - 2q'\cos\phi) \right]_{q=0}\\ &= -U'(q') \int_0^{2\pi} \frac{d\phi}{2\pi} \cos\phi = 0\\ D_2(q') &= \int_0^{2\pi} \frac{d\phi}{2\pi} \left[\sin^2\phi \frac{U'(q')}{q'} + \cos^2\phi \, U''(q')\right] \\ &= \frac{U'(q')}{2q'} + \frac{U''(q')}2 \\&= \left( 2q'^3+4q'+\frac1{2q'} \right)e^{q'^2}\erfc q' -\frac{3+2q'^2}{\sqrt\pi} \\ D_3(q') &= 0\\ f(q) &= \frac{q^2}2\int_{0}^{\infty} dq'\,D_2(q') + O(q^4) \end{align} $D_2(q') = \frac12 q'^{-1} + O(1)$ so the last integral diverges logarithmically for small $q'$. Also it seems that $D_{2n}(q') = O(q'^{1-2n})$ so the higher terms get worse. I don't know why the Taylor series looks like a sum of divergent terms: perhaps the series expansion of $f$ is in non-integer powers of $q$ rather than a Taylor series? In case it is helpful here is some Mathematica: U[q_] := q Exp[q^2] Erfc[q]Dint[qd_, n_] := Simplify[Integrate[(D[U[Sqrt[q^2 + qd^2 - 2 q qd Cos[phi]]] -U[qd], {q, n}]) /. q -> 0, {phi, 0, 2 Pi}]/(2 Pi), qd >= 0] so that Dint[qd, 2] gives the above expression for $D_2(q')$. Integrate[Dint[qd,2], qd] gives an expression in terms of $_{2}F_2$ and that expression is divergent for $q'\to0$. EDIT: If I try to evaluate $I_>$ from joriki's answer, I note that $q$ is small and $x$ is large, so $qx$ can be anything, but $\epsilon=q\left(\sqrt{1+x^2-2x\cos\phi}-x\right) = q[-\cos\phi + \sin^2\phi/2x + O(x^{-2})]$ is small, so \begin{align} I_> &= q\int_1^\infty dx \int_0^{2\pi} \frac{d\phi}{2\pi} U(qx +\epsilon) - U(qx) \\ &= \sum_{n=1}^\infty \frac{q^{n+1}}{n!}\int_1^\infty dx \, U^{(n)}(qx) \int_0^{2\pi} \frac{d\phi}{2\pi} \left(\sqrt{1+x^2-2x\cos\phi}-x\right)^n \\ &= \frac{q^2}{2}\int_1^\infty dx \, U'(qx) \int_0^{2\pi} \frac{d\phi}{2\pi} \left(\sqrt{1+x^2-2x\cos\phi}-x\right)\\ &+ \frac{q^3}{6}\int_1^\infty dx \, U''(qx) \int_0^{2\pi} \frac{d\phi}{2\pi} \left(\sqrt{x^2+1-2x\cos\phi}-x\right)^2 + O(q^4)\\ &= \frac{q^2}{2}\int_1^\infty dx \, U'(qx) f(x) + \frac{q^3}{6}\int_1^\infty dx \, U''(qx) [1-2x f(x)] + O(q^4)\\ f(x) &= \frac{(x+1)E[4x/(x+1)^2] + (x-1)E[-4x/(x-1)^2]}{\pi} - x = \frac{1}{4x} + O(x^{-3})\\ \end{align} so, since $U'(q) = -\frac{2 q}{\sqrt\pi} + (1+2 q^2)e^{q^2} \erfc(q)$, the first $x$ integral diverges logarithmically and the next one is worse. Expanding in $\epsilon$ gives an expansion in $q$ which is divergent. Perhaps someone has an idea of how to proceed to get an expansion in $1/q$ for small $q$.
Let $S_n$ be the symmetric group on $n$ elements. The Robinson-Schensted-Knuth (RSK) correspondence sends a permutation $\pi\in S_n$ to a pair of Standard Young Tableaux $(P,Q)$ with equal shapes $\mbox{sh}(P)=\mbox{sh}(Q)=\lambda$, where $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r)$ and $\lambda_1\geq\lambda_2\geq\cdots\geq \lambda_r$. A well known fact is that if $\pi$ is an involution ($\pi^2=1$) then $P=Q$. A nice way to think about involutions is by their cycle types: they must have only fixed points or 2-cycles. I have a few questions on what is known in this area. 1) Suppose we fix a shape $\lambda$. Is anything known about the class of involutions whose RSK correspondence gives tableaux of shape $\lambda$? Is there anything particularly special about them? For general permutations $\pi$, this question is essentially intractable but, my hope is for involutions there is something extra special that can be said. The only things that I can think of are stuff like Greene's theorem on how the lengths of each row/column relate to the longest increasing/decreasing subsequences of the permutation. As well, the number of odd-length columns equals the number of fixed points. This unfortunately doesn't say much about the involutions involved. 2) Are there any other (not necessarily directly related to RSK) known bijections between involutions and Standard Young Tableaux? In particular, are these other bijections more amenable to answering questions similar to (1) about which subset of involutions maps to a specific shape $\lambda$?
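As a concrete illustration of the $P=Q$ fact (my own aside, with a hypothetical helper name `rsk`), here is a short row-insertion implementation; note how the single odd-length column of the resulting shape matches the single fixed point of the chosen involution.

```python
def rsk(perm):
    """Row-insertion RSK for a permutation of 1..n in one-line notation.
    Returns the pair (P, Q) of standard Young tableaux as lists of rows."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):                       # fell off the bottom: start a new row
                P.append([x]); Q.append([step]); break
            r = P[row]
            bump = next((j for j, y in enumerate(r) if y > x), None)
            if bump is None:                        # x is largest: append to this row
                r.append(x); Q[row].append(step); break
            r[bump], x = x, r[bump]                 # bump the displaced entry to the next row
            row += 1
    return P, Q

# The involution (1 3)(2 5) in one-line notation:
P, Q = rsk([3, 5, 1, 4, 2])
print(P)        # [[1, 2], [3, 4], [5]]
print(P == Q)   # True, as expected for an involution
# Shape (2, 2, 1): exactly one odd-length column, matching the one fixed point (4).
```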
Homology, Homotopy and Applications Homology Homotopy Appl. Volume 12, Number 1 (2010), 1-10. The gluing problem does not follow from homological properties of $\Delta_p(G)$ Abstract Given a block $b$ in $kG$ where $k$ is an algebraically closed field of characteristic $p$, there are classes $\alpha_Q \in H^2 (Aut_\mathcal{F}(Q);k^\times)$, constructed by Külshammer and Puig, where $\mathcal{F}$ is the fusion system associated to $b$ and $Q$ is an $\mathcal{F}$-centric subgroup. The gluing problem in $\mathcal{F}$ has a solution if these classes are the restriction of a class $\alpha \in H^2(\mathcal{F}^c;k^\times)$. Linckelmann showed that a solution to the gluing problem gives rise to a reformulation of Alperin's weight conjecture. He then showed that the gluing problem has a solution if for every finite group $G$, the equivariant Bredon cohomology group $H^1_G(|\Delta_p(G)|;\mathcal{A}^1)$ vanishes, where $|\Delta_p(G)|$ is the simplicial complex of the non-trivial $p$-subgroups of $G$ and $\mathcal{A}^1$ is the coefficient functor $G/H \hookrightarrow \rm{Hom} (H, k^\times)$. The purpose of this note is to show that this group does not vanish if $G=\Sigma_{p^2}$ where $p \geq 5$. Article information Source Homology Homotopy Appl., Volume 12, Number 1 (2010), 1-10. Dates First available in Project Euclid: 28 January 2011 Permanent link to this document https://projecteuclid.org/euclid.hha/1296223817 Mathematical Reviews number (MathSciNet) MR2594678 Zentralblatt MATH identifier 1196.20014 Citation Libman, Assaf. The gluing problem does not follow from homological properties of $\Delta_p(G)$. Homology Homotopy Appl. 12 (2010), no. 1, 1--10. https://projecteuclid.org/euclid.hha/1296223817
Practical and theoretical implementation discussion. Post Reply 10 posts • Page 1 of 1

I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at point \(\vec{p}\) at some surface can be derived as: \(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \) Assuming that the brdf is constant, \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\), the outgoing radiance from that surface point due to that point light can be computed as: \(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\) \( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \) \(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \) which agrees with what one is used to seeing in implementations in real-time graphics. How does one motivate similar expressions for more complex brdfs, considering the fact that radiance for a point light is not defined?

Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid angle version of the rendering integral (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and then again replace \(L_i\) by a positional delta function. Click here. You'll thank me later.

I am aware that a point light is not physical, I am just wondering how the commonly accepted formulae in real-time cg are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention that you're replacing the radiance with a delta function? That will also cause even more problems mathematically, as it will turn out that you're multiplying distributions in some cases. What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Paper/book suggestions are welcome.
Let $T = {\mathbb R}/{\mathbb Z}$ be the $1$-torus. Let $a_{ij}$ be integers, $1 \leq i \leq m$, $1 \leq j \leq n$, and $A$ the $m \times n$ matrix whose $(i,j)$ entry is $a_{ij}$. Consider the following system of $m$ linear equations: $$\left\{\begin{array}{rl} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n & = \overline{0}\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n & = \overline{0}\\ \vdots &\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n & = \overline{0} \end{array}\right.$$ where $x_1,x_2,...,x_n \in T$. The set of solutions $S$ is obviously a subgroup of $T^n$. Let $S_0$ be the connected component containing the trivial solution $(0,0,...,0)$. I would like to understand the quotient group $S/S_0$. By "understand" I mean compute it in an algorithmic way, something which can be implemented on a computer. What is the right way to think about something like this? Thank you. Remark: For instance, if $m=n$ and $A$ is invertible, then it is fairly easy to show that $S/S_0$ has order $|\det A|$. (One way to do it is geometric, taking the cup product of the Poincare duals of the codimension 1 submanifolds corresponding to solutions of individual equations.) Of course, this doesn't come close to answering the question. P.S. My motivation for asking this question comes from algebraic geometry.
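Not a full answer, but a plausible computational handle: unimodular row and column operations on $A$ change the system only by an automorphism of $T^n$, so one can reduce to the Smith normal form $\mathrm{diag}(d_1,\dots,d_r)$, for which the equations decouple into $d_i x_i = \overline{0}$; this suggests $S/S_0 \cong \bigoplus_i \mathbb{Z}/d_i\mathbb{Z}$, consistent with the $|\det A|$ remark in the square invertible case. A sketch of the computation with sympy (the example matrix is arbitrary):

```python
# A sketch (not a verified solution) of the reduction suggested above: bring the
# integer matrix A into Smith normal form; the nonzero diagonal entries d_i then
# describe the finite part of the solution group, and in the square invertible
# case their product equals |det A|, matching the remark in the question.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])          # arbitrary example matrix

D = smith_normal_form(A, domain=ZZ)
diag = [D[i, i] for i in range(min(D.shape)) if D[i, i] != 0]
print(D)                           # diagonal matrix diag(d_1, ..., d_r)
print(diag, abs(A.det()))          # product of the d_i equals |det A| here
```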
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at a point \(\vec{p}\) on some surface can be derived as:

\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

Assuming that the brdf is constant, \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\), the outgoing radiance from that surface point due to that point light can be computed as:

\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

This agrees with what one is used to seeing in real-time graphics implementations. How does one motivate similar expressions for more complex brdfs, considering the fact that radiance is not defined for a point light?

Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid-angle version of the rendering equation (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and then again replace \(L_i\) by a positional delta function.

I am aware that a point light is not physical; I am just wondering how the commonly accepted formulae in real-time CG are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention that you're replacing the radiance with a delta function? That will also cause even more problems mathematically, as it will turn out that you're multiplying distributions in some cases. What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
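For concreteness, here is a minimal numerical sketch of the final formula in the opening post, \(E = I\cos\theta/r^2\) and \(L_o = L_e + C\,E\) for a constant brdf. This is not any poster's code; the scene values are made up for illustration.

```python
# Minimal sketch (assumed example values) of point-light irradiance and the
# constant-brdf outgoing radiance L_o = L_e + C * I * cos(theta) / r^2.
import numpy as np

def point_light_radiance(p, n, c, I, C, L_e=0.0):
    """Evaluate L_o(p) = L_e + C * I * cos(theta) / ||c - p||^2 for a point light at c."""
    to_light = c - p
    r2 = float(np.dot(to_light, to_light))
    cos_theta = max(0.0, float(np.dot(n, to_light)) / np.sqrt(r2))  # clamp back-facing
    return L_e + C * I * cos_theta / r2

p = np.array([0.0, 0.0, 0.0])      # shading point
n = np.array([0.0, 0.0, 1.0])      # surface normal
c = np.array([1.0, 0.0, 2.0])      # light position (assumed)
print(point_light_radiance(p, n, c, I=10.0, C=0.18 / np.pi))
```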
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

"That will also cause even more problems mathematically as it will turn out that you're multiplying distributions in some cases."
Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them: approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out. I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.

I understand it in the sense that you want just that one direction; however, I don't see how that solves the intensity vs radiance issue. After all, the rendering equation considers radiance and not intensity.
"If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications."
I would be very grateful if you could refer me to a publication of his that tackles this issue; I am not exactly sure how to find his publications from his username.
"Yup. You know what else causes problems?"
I mean purely theoretical problems, not specifically the ones you are referring to, though I meant precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.
"I don't have a specific reference for this, but the first place I'd look for one is the PBRT book."
PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, I want something a bit more formal. The closest thing I could find on the topic was from http://www.oceanopticsbook.info/view/li ... f_radiance, namely: "Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy." And seeing as the rendering equation uses radiance, I want to understand how a point light source for which radiance is not defined fits into this framework.

vchizhov wrote: ↑Fri Apr 26, 2019 3:27 pm PBRT doesn't really go much further beyond implementation details, most of the arguments are of "an intuitive" nature, I want something a bit more formal.
Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrand
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.

"It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas."
Thank you, I have looked through pbrt already, but just saying "it's a Dirac delta" is hardly mathematically robust - there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\). The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations). For one thing the brdf is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined, I want to understand the details of why it is so, and not just rely on hand-waving arguments. As in, have a similar proof to what I did above for the diffuse case; obviously the thing I derived above does not apply to a brdf that actually depends on \(\omega_i\), however.

vchizhov wrote: ↑Fri Apr 26, 2019 7:27 pm The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations).
If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the \(d\omega\) or \(dA\) terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something. Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as

\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)

Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:

\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)

The whole point of this exercise is, in my understanding, notational convenience: to write a specific discrete value \(f(x_0)\) as an integral in order to avoid special-casing when you're working in an integral framework. So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though, that's why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.

Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
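To make the "special-case the singular light" recipe from the thread concrete, here is a small Monte Carlo sketch. It is my own illustration (assumed scene values, not any poster's code): a constant-brdf surface is lit by a uniformly radiant sphere whose radius shrinks while its intensity is held fixed; brute-force hemisphere sampling of the reflectance integral approaches the closed-form point-light term \(C\,I\cos\theta/r^2\) that the special case evaluates directly, while the variance blows up, which is exactly why the delta case is handled separately in renderers.

```python
# Monte Carlo sketch (illustration only): area light of shrinking radius vs. the
# point-light special case. Scene values and sample counts are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 0.0])            # shading point, normal = +z
n = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 0.0, 2.0])            # light center (assumed)
I, C = 10.0, 0.18 / np.pi                # light intensity and constant brdf value

def hits_sphere(origin, direction, center, radius):
    oc = center - origin
    b = np.dot(direction, oc)
    return b > 0.0 and np.dot(oc, oc) - b * b <= radius * radius

def mc_estimate(radius, samples=100_000):
    L = I / (np.pi * radius * radius)    # sphere radiance chosen so the intensity stays I
    # Uniform hemisphere sampling, pdf = 1 / (2*pi).
    u, v = rng.random(samples), rng.random(samples)
    cos_t = u
    sin_t = np.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * np.pi * v
    dirs = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    hit = np.array([hits_sphere(p, d, c, radius) for d in dirs])
    return 2.0 * np.pi * np.mean(C * L * cos_t * hit)

r2 = np.dot(c - p, c - p)
cos_theta = (c - p)[2] / np.sqrt(r2)
print("delta special case:", C * I * cos_theta / r2)
for radius in (0.5, 0.2, 0.1):
    print("area light, radius", radius, "->", mc_estimate(radius))
```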
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Machine learning algorithms are designed to generalize from past observations across different problem settings. The goal of learning theory is to analyze statistical and computational properties of learning algorithms and to provide guarantees on their performance. To do so, it poses these tasks in a rigorous mathematical framework and deals with them under various assumptions on the data-generating process.

In [ ] we initiate a formal analysis of compressing a data sample so as to encode a set of functions consistent with (or of minimal error on) the data. We propose several formal requirements (exact versus approximate recovery and worst-case versus statistical data generation) for such compression and identify parameters of function classes that characterize the resulting compression sizes.

In [ ] we provide a novel analysis for a lifelong learning setup where performance guarantees are required for every encountered task. Such a setup had previously been analyzed only under rather restrictive assumptions on the data-generating process. Our study generalizes a natural lifelong learning scheme (where at every step, if possible, a predictor is created as an ensemble over previously learned ones using only a small amount of data) and identifies conditions of task relatedness that render such a scheme data efficient.

In [ ] we show that active learning can provide label savings in non-parametric learning settings. Previously this had mostly been done in parametric learning of a classifier from a fixed class of bounded capacity. We develop a novel active query procedure that takes in unlabeled data and constructs a compressed version of the underlying labeled sample while automatically adapting the number of label queries. We then show that this procedure maintains the performance guarantees of nearest neighbor classification.

In recent years the kernel mean embedding (KME) of distributions has started to play an important role in various machine learning tasks, including independence testing, density estimation, implicit generative models, and more. Given a reproducing kernel and its corresponding reproducing kernel Hilbert space (RKHS), the KME maps a distribution $P$ over the input domain to the element $\mu_P := \int k(X,\cdot) \, dP(X)$ in the RKHS. An important step in many KME-based learning methods is to estimate the distribution embedding $\mu_P$ using observations $x_1,\dots,x_n$ sampled from $P$. Inspired by the James-Stein estimator, in [ ] we introduced a new type of KME estimator, called kernel mean shrinkage estimators (KMSEs), and proved that they can converge faster than the empirical KME estimator $\hat{\mu}_P := \sum_{i=1}^n k(x_i, \cdot) / n$. This improvement is due to the bias-variance tradeoff: the shrinkage estimator reduces variance substantially at the expense of a small bias. We also empirically showed that KMSE is particularly useful when the sample size $n$ is small compared to the input space dimensionality.

We have studied the optimality of KME estimators in the minimax sense. In [ ] we show that the rate $O(n^{-1/2})$ achieved by $\hat{\mu}_P$, KMSE, and many other methods published in the literature is optimal and cannot be improved. This holds for any continuous translation-invariant kernel and for various classes of distributions, including both discrete and smooth distributions with infinitely differentiable densities.
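As a concrete illustration (my own sketch, not the authors' code), the empirical KME and a simple shrinkage variant that pulls the embedding toward zero can be evaluated pointwise as follows; the Gaussian kernel, its bandwidth, and the shrinkage amount are arbitrary choices here.

```python
# Sketch of the empirical KME \hat{mu}_P(t) = (1/n) sum_i k(x_i, t) and a simple
# shrinkage variant (1 - lam) * \hat{mu}_P (smaller variance at the cost of a
# small bias). Kernel, bandwidth, and lam are assumed example choices.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2.0 * sigma ** 2))

def empirical_kme(sample, query, sigma=1.0):
    # Evaluate \hat{mu}_P at each query point: mean over the sample of k(x_i, t).
    return np.array([gaussian_kernel(sample, t, sigma).mean() for t in query])

def shrinkage_kme(sample, query, lam=0.1, sigma=1.0):
    # Shrink the empirical embedding toward zero.
    return (1.0 - lam) * empirical_kme(sample, query, sigma)

rng = np.random.default_rng(0)
sample = rng.normal(size=(50, 3))        # 50 points drawn from P in R^3
query = rng.normal(size=(5, 3))          # points at which to evaluate the embedding
print(empirical_kme(sample, query))
print(shrinkage_kme(sample, query, lam=0.2))
```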
In [ ] we also study the minimax optimal estimation of the maximum mean discrepancy (MMD) between two probability distributions, which is defined as the RKHS distance between their KMEs: $\mathrm{MMD}(P,Q) := \|\mu_P - \mu_Q\|$. We show that for any radial universal kernel the rate $O(n^{-1/2} + m^{-1/2})$ achieved by several estimators published in the literature is minimax optimal.

The properties of MMD are known to depend on the underlying kernel and have been linked to three fundamental concepts: universal, characteristic, and strictly positive definite kernels. In [ ] we show that these concepts are essentially equivalent and give the first complete characterization of those kernels whose associated MMD metrizes the weak convergence of probability measures. Finally, we show that the KME can be extended to Schwartz distributions and analyze properties of these distribution embeddings.

While MMDs are known to metrize convergence in distribution, the underlying conditions are too stringent when one only aims to metrize convergence to a fixed distribution, which is the case for instance in goodness-of-fit tests. To address this, in [ ] we derive necessary and sufficient conditions for MMD to metrize tight convergence to a fixed target distribution. We use our characterizations to analyze the convergence properties of the targeted kernel Stein discrepancies (KSDs) commonly employed in goodness-of-fit testing. The results validate the use of KSDs for a broader set of targets, kernels, and approximating distributions.

The problem of estimating the distribution of a function of random variables plays an important role in the field of probabilistic programming, where it can be used to generalize functional operations to distributions over data types. In [ ] we proposed a non-parametric way to estimate the distribution of $f(X)$ for any continuous function $f$ of a random variable $X$. The proposed KME-based estimators are proven to be asymptotically consistent. We also provide finite-sample guarantees under stronger assumptions.

Motivated by recent advances in privacy-preserving machine learning and building upon the results of [ ], we have proposed a theoretical framework for a novel database release mechanism that allows third parties to construct consistent estimators of population statistics while ensuring that the privacy of each individual contributing to the database is protected [ ]. Our framework is based on newly introduced differentially private and consistent estimators of KMEs, of interest in their own right.
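For reference, the MMD defined at the top of this passage has a standard plug-in estimator obtained by replacing the embeddings with their empirical versions; a small numpy sketch (my own illustration, in the biased V-statistic form for brevity, with an arbitrary Gaussian kernel):

```python
# Plug-in estimator of MMD^2(P, Q) = ||mu_P - mu_Q||^2 from samples X ~ P, Y ~ Q.
# Biased V-statistic version, Gaussian kernel with an assumed bandwidth.
import numpy as np

def gram(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x_i, y_j).
    sq = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return gram(X, X, sigma).mean() + gram(Y, Y, sigma).mean() - 2.0 * gram(X, Y, sigma).mean()

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(150, 2))   # shifted distribution, so MMD^2 should be clearly positive
print(mmd2(X, Y))
```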
While training objectives in VAEs and GANs are based on f-divergences, it has been recently shown that other divergences, in particular, optimal transport distances, may be better suited to the needs of generative modeling. In [ ], starting from Kantorovich’s primal formulation of the optimal transport problem, we show that it can be equivalently written in terms of probabilistic encoders, which are constrained to match the latent posterior and prior distributions. We then apply this result to train latent variable generative models in [ ]. When relaxed, the constrained optimization problem leads to a new regularized autoencoder algorithm which we call Wasserstein auto-encoders (WAEs). WAEs share many of the properties of VAEs (stable training, nice latent manifold structure) while generating samples of better quality, as measured by the Frechet Inception score across multiple datasets. In [ ] and [ ] we focus on properties of the latent representations learned by WAEs and draw several interesting conclusions based on various experiments. First, we show that there are fundamental problems when training WAEs with deterministic encoders when the intrinsic dimensionality of the data is different from the latent space dimensionality. Second, we point out that training WAEs with probabilistic encoders is a challenging problem, and propose a heuristic approach leading to promising results on several datasets. Over the past four years, many deep neural network based architectures have been proven vulnerable to so-called adversarial attacks. In the case of natural image classifiers, carefully chosen but imperceptible image perturbations can lead to drastically changing predictions. We showed in [ ] that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the $\ell_1$-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size.
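The input-gradient quantity referred to in the last paragraph can be measured directly with automatic differentiation; a minimal PyTorch sketch (illustration only, with a toy linear classifier and a made-up image size, not the paper's experimental setup):

```python
# Measure the l1-norm of the gradient of the training loss with respect to the
# *input*, the quantity the text links to adversarial vulnerability.
import torch
import torch.nn as nn

side = 32                                                         # assumed image side length
model = nn.Sequential(nn.Flatten(), nn.Linear(side * side, 10))   # toy classifier
x = torch.randn(1, 1, side, side, requires_grad=True)             # one fake image
label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(x), label)
(input_grad,) = torch.autograd.grad(loss, x)
print(input_grad.abs().sum().item())                              # l1-norm of d(loss)/d(input)
```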
Arithmetic (Category: 5th Class Arithmetic)
'Percent' means 'for every hundred'. The symbol for percentage is %.
To convert a percentage to a decimal, divide the number by 100. e.g., 68% = \[\frac{68}{100}\] = 0.68
To convert a decimal to a percentage, multiply the number by 100%. e.g., \[0.59 = 0.59\times 100\% = 59\%\]
To convert a percentage to a fraction, write the number with denominator 100 and reduce the fraction to its lowest terms. e.g., 45% = \[\frac{45}{100}=\frac{9}{20}\]
To convert a fraction to a percentage, multiply the fraction by 100%. e.g., \[\frac{9}{20}\times 100\%=45\%\]
To find a percent of a quantity, multiply them and simplify. e.g., 30% of Rs 100 = \[\frac{30}{100}\times \text{Rs }100=\text{Rs }30\]
\[\text{Average = }\frac{\text{The sum of quantities}}{\text{The number of quantities}}\]
(a) The comparison of two quantities of the same kind by division gives their ratio.
(b) The two quantities compared are written with a : (colon) between them, e.g., a : b, read as 'a is to b'.
(c) A ratio of two numbers can be thought of as a fraction, and all the rules for operations with fractions can be used.
(d) Double, triple, four times, etc., can be expressed as ratios: 2:1, 3:1, 4:1, etc.
(e) A ratio can be expressed as a fraction, e.g., 2:5 is the same as \[\frac{2}{5}\].
(f) In a ratio a : b, the first term 'a' is called the antecedent and the second term 'b' is called the consequent. The order of the terms of a ratio is important, i.e., 1:4 is not the same ratio as 4:1.
(g) To find the ratio of two like quantities, they should be changed into the same unit of measurement.
(h) While writing a ratio, co-prime numbers are generally used; that is, the ratio is often expressed in its lowest terms by cancelling the common factors from both numbers.
(i) A ratio does not have any unit of measurement.
\[\text{Speed}=\frac{\text{Distance}}{\text{Time}}\]
\[\text{Average speed}=\frac{\text{Total distance covered}}{\text{Total time taken}}\]
\[\text{Distance} = \text{Speed}\times \text{Time}\]
\[\text{Time}=\frac{\text{Distance}}{\text{Speed}}\]
Simple interest: \[I=\frac{\text{PTR}}{100}\], where I = Interest, P = Principal, T = Time, R = Rate per annum.
Amount (A) = P + I \[\Rightarrow\] I = A - P and also P = A - I.
(i) The price of an article is called its cost price, denoted C.P.
(ii) The price at which an article is sold is called its selling price, denoted S.P.
(iii) If the selling price is greater than the cost price, there is a gain/profit, which is equal to the difference of the selling price and the cost price. ∴ If S.P. > C.P., gain = S.P. - C.P.
(iv) If S.P. < C.P., there is a loss, which is equal to the difference of the cost price and the selling price. ∴ If S.P. < C.P., loss = C.P. - S.P.
Profit or loss is incurred on the cost price. So, percentage profit = \[\frac{\text{Profit}}{\text{C.P.}}\times 100\%\] and percentage loss = \[\frac{\text{Loss}}{\text{C.P.}}\times 100\%.\]
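A short worked example of the formulas above (all values are made up for illustration):

```python
# Percent of a quantity, average speed, simple interest, and percentage profit.
def percent_of(rate, amount):
    return rate / 100 * amount

def simple_interest(principal, time_years, rate_percent):
    # I = P * T * R / 100
    return principal * time_years * rate_percent / 100

print(percent_of(30, 100))                 # 30% of Rs 100 = Rs 30
print(150 / 3)                             # average speed: 150 km in 3 hours = 50 km/h

principal, years, rate = 5000, 3, 8        # assumed values
interest = simple_interest(principal, years, rate)
print(interest, principal + interest)      # interest I and amount A = P + I

cost_price, selling_price = 400, 460       # assumed values, S.P. > C.P. so there is a gain
profit = selling_price - cost_price
print(profit / cost_price * 100)           # percentage profit is taken on the cost price
```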
Answer
The reduction formula is $$\sin\theta$$
Work Step by Step
*Summary of the method: For a formula $f(Q\pm\theta)$
1) See whether $Q$ terminates on the $x$ or $y$ axis. If it terminates on the $x$ axis, go for Case 1. If it terminates on the $y$ axis, go for Case 2.
2) Case 1:
- For a small positive value of $\theta$, determine which quadrant $Q\pm\theta$ lies in.
- If $f\gt0$ in that quadrant, use a $+$ sign. If $f\lt0$, use a $-$ sign.
- The reduced form will have that sign, $f$ as the function, and $\theta$ as the angle.
3) Case 2:
- For a small positive value of $\theta$, determine which quadrant $Q\pm\theta$ lies in.
- If $f\gt0$ in that quadrant, use a $+$ sign. If $f\lt0$, use a $-$ sign.
- The reduced form will have that sign, the cofunction of $f$ as the function, and $\theta$ as the angle.
$$\cos(270^\circ+\theta)$$
1) $270^\circ$ terminates on the $y$ axis. We go for Case 2.
2) As $\theta$ is a very small positive value, which means $\theta\gt0$, $$270^\circ\lt(270^\circ+\theta)\lt360^\circ$$ So $270^\circ+\theta$ lies in quadrant IV.
3) Cosine is positive in quadrant IV. So we use a $+$ sign.
4) In Case 2, we use the cofunction of the given function, which here is sine, combined with the positive sign found above. Overall, the reduced form is $$\sin\theta$$
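As an independent check of the result (not part of the method above), a one-line symbolic computation confirms the reduction:

```python
# Check that cos(270° + θ) reduces to sin(θ).
import sympy as sp

theta = sp.symbols('theta')
print(sp.simplify(sp.cos(3 * sp.pi / 2 + theta)))   # -> sin(theta)
# Expanding directly: cos(270°)cos(θ) - sin(270°)sin(θ) = 0·cos(θ) - (-1)·sin(θ) = sin(θ).
```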
Cross-posted from MSE, where this question was asked over a year ago with no answers. Suppose I have a large system of polynomial equations in a large number of real-valued variables. \begin{align} f_1(x_1, x_2, &\dots, x_m) = 0 \\ f_2(x_1, x_2, &\dots, x_m) = 0 \\ &\vdots \\ f_n(x_1, x_2, &\dots, x_m) = 0 \\ \end{align} (In my particular case, I have about $n \approx 1000$ equations of degree $10$ in about $m \approx 200$ variables.) By numerical means, I've found an approximate solution vector $(\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m)$ at which the value of each $f_j$ is very small: $$\lvert f_j(\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m) \rvert < 10^{-16} \quad \forall j = 1, \dots, n.$$ This leads me to believe that a genuine solution of my system exists somewhere in a small neighborhood of $(\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m)$, and that the small residuals I see are due to round-off error in finite (IEEE double) precision arithmetic. However, it could conceivably be the case that the zero loci of my polynomials $f_j$ come very close to each other (within $10^{-16}$) but do not mutually intersect. How can I rigorously tell which is the case? I could, of course, further refine my solution using quadruple- or extended-precision arithmetic to push the residuals even closer to zero, but this would only provide supporting empirical evidence. If it helps, all of my polynomials have integer coefficients and can be evaluated exactly on integer and rational inputs. However, my approximate solution $(\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m)$ is probably irrational. In principle, there are methods of computational algebraic geometry (Groebner bases, cylindrical decomposition, etc.) that can algorithmically decide the existence of a true mathematical solution to a polynomial system, but my system is completely out of reach of all such algorithms I know. Buchberger's algorithm, for example, has doubly-exponential time complexity in the number of input variables. Note that interval/ball arithmetic won't help, because even if I can show that each $f_j$ exactly assumes the value $0$ in a small neighborhood of $(\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_m)$, it could be the case that a different point zeroes out each $f_j$, and no single point simultaneously zeroes out all of them.
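One way to act on the "evaluate exactly on rational inputs" observation above is to round the floating-point approximation to nearby rationals and evaluate the residuals with exact arithmetic, so the value printed contains no round-off at all; a small sketch (the polynomial and the numbers below are stand-ins, not the actual $f_j$ or solution):

```python
# Exact rational evaluation of an integer-coefficient polynomial at a rational
# point near the numerical solution; the residual is free of floating-point error.
from fractions import Fraction

def f_example(x, y):
    # stand-in for one f_j with integer coefficients
    return x**3 * y - 2 * x * y**2 + 5 * y - 1

x_tilde, y_tilde = 0.2013718742, 0.3310045210            # assumed numerical solution
x_q = Fraction(x_tilde).limit_denominator(10**12)         # nearby rational point
y_q = Fraction(y_tilde).limit_denominator(10**12)
print(f_example(x_q, y_q))                                # exact rational residual
```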
I was doing my homework when I came across this question: Three equal point charges, each with charge $1.40 \, \rm\mu C$ , are placed at the vertices of an equilateral triangle whose sides are of length $0.250 \,\rm m$. What is the electric potential energy $U$ of the system? (Take the potential energy of the three charges as zero when they are infinitely far apart) I had been thinking about voltage and electric potential in terms of gravitational potential energy, so I decided to consider the system as a single point charge. I used $$k\frac{q\cdot q}r$$ to get the potential from each charge at the center of the system. This was incorrect. The correct method was to get the potential between each of the charges and add them up. Why was my approach incorrect? Does calculating gravitational potential energy of a system work the same way?
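For reference, the pairwise-sum method described as correct above amounts to the following bookkeeping (numbers rounded, using $k \approx 8.99\times10^{9}\ \mathrm{N\,m^2/C^2}$; this worked value is my own check, not part of the original problem statement): $$U=\sum_{\text{pairs}}\frac{k\,q_iq_j}{r_{ij}}=3\,\frac{kq^2}{r}=3\cdot\frac{(8.99\times10^{9})(1.40\times10^{-6})^2}{0.250}\approx 0.21\ \mathrm{J}.$$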
Revision as of 21:04, 11 February 2009 The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the n numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^n[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner. The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product.
Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is square if [math]d[/math] is large enough compare to [math]c[/math]. Every point of the Cartesian product has two coordinates, a 0,1 sequence of length [math]d[/math]. It has a one to one mapping to [math][4]^d[/math]; Given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0, z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement. Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C, where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A. 
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits [ x_i x'_i ] [ y_i y'_i ] [ z_i z'_i ] are equal to one of the following: [ 0 0 ] [ 0 0 ] [ 0, 1 ] [ 1 0 ] [ 1 1 ] [ 1 1 ] [ 0 0 ], [ 0 1 ], [ 0, 1 ], [ 1 0 ], [ 1 0 ], [ 1 1 ], [ 0 0 ] [ 1 0 ] [ 0, 1 ] [ 1 0 ] [ 0 1 ] [ 1 1 ] ? McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3. High-dimensional Sperner Kalai.29: There is an analogous for Sperner but with high dimensional combinatorial spaces instead of "lines" but I do not remember the details (Kleitman(?) Katona(?) those are ususal suspects.) Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at [math]f=1_A[/math] and express using the Fourier expansion of [math]f[/math] the expression [math]\int f(x)f(y)1_{x\lty}[/math] where [math]x\lty[/math] is the partial order (=containment) for 0-1 vectors. Then one may hope that if [math]f[/math] does not have a large Fourier coefficient then the expression above is similar to what we get when [math]A[/math] is random and otherwise we can raise the density for subspaces. (OK, you can try it directly for the [math]k=3[/math] density HJ problem too but Sperner would be easier;)This is not unrealeted to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the [math]k=3[/math] case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in [math]\null [3]^n[/math] was the basis you get by thinking of [math]\null [3]^n[/math] as the group [math]\mathbb{Z}_3^n[/math]. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that [math]n[/math] is a multiple of 7, and you look at the set [math]A[/math] of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set [math]A[/math] has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that [math]A[/math] has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. 
But you can also pick some arbitrary subset [math]W[/math] of [math]\null[n][/math] and just ask that the numbers of 0s, 1s and 2s inside [math]W[/math] are multiples of 7. DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic. Bibliography H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
Creating Symbolic Links The function CreateSymbolicLink allows you to create symbolic links using either an absolute or relative path. Symbolic links can be either absolute or relative links. Absolute links are links that specify each portion of the path name; relative links are determined relative to where relative-link specifiers are in a specified path. Relative links are specified using the following conventions:
Dot (. and ..) conventions—for example, "..\" resolves the path relative to the parent directory.
Names with no slashes (\)—for example, "tmp" resolves the path relative to the current directory.
Root relative—for example, "\Windows\System32" resolves to the "current drive:\Windows\System32" directory.
Current working directory–relative—for example, if the current working directory is "C:\Windows\System32", "C:File.txt" resolves to "C:\Windows\System32\File.txt".
Note: If you specify a current working directory–relative link, it is created as an absolute link, due to the way the current working directory is processed based on the user and the thread.
A symbolic link can also contain junction points and mounted folders as part of the path name. Symbolic links can point directly to a remote file or directory using a UNC path. Relative symbolic links are restricted to a single volume.
Example of an Absolute Symbolic Link In this example, the original path contains a component, "x", which is an absolute symbolic link. When "x" is encountered, the fragment of the original path up to and including "x" is completely replaced by the path that "x" points to. The remainder of the path after "x" is appended to this new path. The result is the modified path.
X: "C:\alpha\beta\absLink\gamma\file"
Link: "absLink" maps to "\\machineB\share"
Modified Path: "\\machineB\share\gamma\file"
Example of a Relative Symbolic Link In this example, the original path contains a component, "x", which is a relative symbolic link. When "x" is encountered, "x" is completely replaced by the new fragment that "x" points to. The remainder of the path after "x" is appended to the new path. Any dots (..) in this new path replace the components that precede them; each set of dots replaces the preceding component. If the number of dot (..) sets exceeds the number of components, an error is returned. Otherwise, when all component replacement has finished, the final, modified path remains.
X: C:\alpha\beta\link\gamma\file
Link: "link" maps to "..\..\theta"
Modified Path: "C:\alpha\beta\..\..\theta\gamma\file"
Final Path: "C:\theta\gamma\file"
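As a hedged illustration only (not part of the original documentation): calling this API from Python with ctypes might look like the sketch below. It assumes a Windows system; the link and target paths are invented purely to mirror the examples above, and the SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag additionally requires Developer Mode or administrator rights.

# Sketch: create directory symbolic links with CreateSymbolicLinkW via ctypes.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateSymbolicLinkW.argtypes = (wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.DWORD)
kernel32.CreateSymbolicLinkW.restype = wintypes.BOOLEAN

SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1
SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE = 0x2

def make_dir_symlink(link_name, target):
    flags = SYMBOLIC_LINK_FLAG_DIRECTORY | SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE
    if not kernel32.CreateSymbolicLinkW(link_name, target, flags):
        raise ctypes.WinError(ctypes.get_last_error())

# Absolute link: the target spells out every component (here a UNC path).
make_dir_symlink(r"C:\alpha\beta\absLink", r"\\machineB\share")

# Relative link: the target is resolved relative to the directory containing the link.
make_dir_symlink(r"C:\alpha\beta\link", r"..\..\theta")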
The aim of this example is to determine the "shape of highest probability" for the hydrogen molecule ion. For a given volume, the task is to use Monte Carlo integration to find the ratio between the semi-principal axes of an ellipsoid, such that the probability of finding the electron inside the ellipsoid is maximised. If you are unfamiliar with Monte Carlo integration, take a look at the short introduction given in this notebook. We are interested in determining the shape of the hydrogen molecule ion, that is, one electron bound to two protons. The Hamiltonian for this system is $$H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{e^2}{4 \pi \epsilon_0}\left(\frac{1}{r_1}+\frac{1}{r_2}\right). $$ Using the 1s wave function $\psi_{100}$ for hydrogen as a basis, we suggest the trial function $$ \psi = A[\psi_{100}(r_1)+\psi_{100}(r_2)].$$ The normalization constant is (see e.g. Griffiths p. 306 [1]) $$ A = \sqrt{\frac{1}{2(1+I)}}$$ with $$ I = \exp(-R/a)\left[1+\frac{R}{a}+\frac{1}{3}\left(\frac{R}{a}\right)^2\right],$$ where $R$ is the distance between the protons. As calculated on page 308 in Griffiths [1], the distance which minimizes the energy is $R=2.4a$, where $a= 0.529$ Å is the Bohr radius. However, in order to avoid rounding errors, we will set $a=1$ in the following. We assume azimuthal symmetry, i.e. no $\phi$-dependence, such that the semi-principal axes in two directions, say the $x$- and $y$-directions, are equal. The two protons are then placed on the $z$-axis, e.g. at $\pm R/2\,\hat{z}$. For a given volume of the ellipsoid, $$V_0 = \frac{4}{3}\pi(2a)^3,$$ we want to find the ratio between the semi-principal axes $b$ and $c$ of the ellipsoid which maximizes the probability of finding the electron inside the ellipsoid. We will also need that the surface of an ellipsoid is described by $$ \frac{x^2}{b^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2} = 1$$ and that the volume is given by $$ V = \frac{4}{3}\pi b^2 c.$$ Now we're good to go!

%matplotlib inline
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
import random
import matplotlib.collections as collections

# Set common figure parameters:
newparams = {'axes.labelsize': 14, 'axes.linewidth': 1, 'savefig.dpi': 200,
             'lines.linewidth': 1.0, 'figure.figsize': (2, 2),
             'ytick.labelsize': 10, 'xtick.labelsize': 10,
             'ytick.major.pad': 5, 'xtick.major.pad': 5,
             'legend.fontsize': 10, 'legend.frameon': False,
             'legend.handlelength': 1.5, 'figure.dpi': 150}
plt.rcParams.update(newparams)

First of all we define a function psi(r1, r2, R) which returns the value of the wave function for given distances $r_1$ and $r_2$ to the two protons, and the distance $R$ between the protons.

def psi(r1, r2, R):
    """Wavefunction for an electron in a potential from two protons.
    Input:
        r1  Distance to proton 1
        r2  Distance to proton 2
        R   Distance between protons"""
    a = 1  # Bohr radius (set to one)
    I = np.exp(-R/a)*(1+R/a+1/3*(R/a)**2)
    A = np.sqrt(1/(2*(1+I)))
    return A/np.sqrt(np.pi*a**3)*(np.exp(-r1/a)+np.exp(-r2/a))

Next we define the number of random points, $N$, and use the general procedure described here to calculate the probability inside an ellipsoid with semi-principal axes $b$ and $c$. Since the volume of the ellipsoid should be equal to the volume $V_0$, $c$ is determined from a chosen value of $b$.

N = 1.0e5  # Number of random numbers
a = 1      # Bohr radius (set to one)
R = 2.4*a  # From Griffiths p. 308
i = 0
n = 0
V_0 = (4/3)*np.pi*(2*a)**3  # Given volume

In the following code, we find the value of $b$ which maximizes the probability.
$2a$ is chosen as an upper bound, since it seems reasonable that the ellipsoid should be stretched in the $z$-direction, meaning $b<c$ and hence $b<2a$, where $b=2a$ corresponds to a sphere with the given volume $V_0$.

"""The following code calculates the value of b which maximizes the probability inside the ellipsoid.
This is however not a very efficient solution."""

b_min = 1*a     # Lower limit for b
b_max = 2*a     # Upper limit for b, corresponds to sphere
b_steps = 6     # Number of steps between b_min and b_max
b_ = np.linspace(b_min, b_max, b_steps)  # Array of b-values, the length of the semi-principal axis
                                         # in the x- and y-direction.
acc = 0.0001*a  # Wanted accuracy
prob = np.zeros(b_steps)

while (b_max-b_min) > acc:
    for j, b in enumerate(b_):
        c = V_0*3/(4*np.pi*b**2)  # Length of semi-principal axis in z-direction, calculated from V_0 and b
        while i < N:
            x = random.uniform(-b, b)
            y = random.uniform(-b, b)
            z = random.uniform(-c, c)
            check = (x/b)**2+(y/b)**2+(z/c)**2  # Used to check if point is inside ellipsoid
            r1 = np.sqrt(x**2 + y**2 + (z-R/2)**2)
            r2 = np.sqrt(x**2 + y**2 + (z+R/2)**2)
            if check <= 1:
                n = n + abs(psi(r1, r2, R))**2
            i = i + 1
        prob[j] = n/N*8*b**2*c
        n = 0
        i = 0
    b_max = b_[max(prob)==prob][0] + (b_max-b_min)/b_steps
    b_min = b_[max(prob)==prob][0] - (b_max-b_min)/b_steps
    b_ = np.linspace(b_min, b_max, b_steps)

prob_max = max(prob)
b = b_[prob_max==prob][0]
c = V_0*3/(4*np.pi*b**2)
print("Maximum probability is: %s" % prob_max)
print("b/a = %s" % (b/a))
print("c/a = %s" % (c/a))
print("The ratio b/c is: %s" % (b/c))

Maximum probability is: 0.658615309675
b/a = 1.63442609867
c/a = 2.99474197578
The ratio b/c is: 0.54576524852

The value of the ratio $b/c$ which gives the highest probability is approximately 0.6, which means that the ellipsoid is stretched out quite a bit along the $z$-axis. To illustrate whether this seems reasonable, the probability density in the $xz$-plane is plotted below, together with the ellipse with the $c/b$ ratio found above.

p = 1000
xs = np.linspace(-1.1*c, 1.1*c, p, True)
X, Z = np.meshgrid(xs, xs)
psi2 = np.zeros([p, p])
r1 = np.sqrt(X**2+(Z-R/2)**2)
r2 = np.sqrt(X**2+(Z+R/2)**2)
psi2 = abs(psi(r1, r2, R))**2

plt.figure(figsize=(6,4.5))
levels = np.linspace(0, 1, 100, True)
C = plt.contourf(X/a, Z/a, psi2/psi(0,R,R)**2, levels)
plt.title('Probability density for hydrogen molecule ion')
plt.ylabel(r'$z/a$')
plt.xlabel(r'$x/a$')
cbar = plt.colorbar(C)
cbar.ax.set_ylabel('Probability density (relative at maximum)')
x = lambda v: b/a*np.cos(v)
z = lambda v: c/a*np.sin(v)
theta = np.linspace(0, 2*np.pi, 1000)
p1, = plt.plot(x(theta), z(theta), 'r')
p2, = plt.plot([0,0], [-R/a/2, R/a/2], 'r+')

As we see, the determined ratio seems quite reasonable! To check the obtained result, it is also interesting to integrate the probability density numerically using built-in functions from scipy.integrate. This is done below, using the functions dblquad and tplquad, which let you integrate in two and three dimensions respectively. The two-dimensional integral function can be used since we have azimuthal symmetry, which means that the integration over $\phi$ only contributes a factor $2\pi$. This is also definitely the most efficient code to run.
from scipy.integrate import dblquad  # Two dimensional integral function

def f2D(r, theta):
    a = 1.0  # Bohr radius (set to one)
    I = np.exp(-R/a)*(1+R/a+1/3*(R/a)**2)
    A = np.sqrt(1/(2*(1+I)))
    f = (A/np.sqrt(np.pi*a**3)*(np.exp(-np.sqrt(r**2+R**2/4-R*r*np.cos(theta))/a)
        +np.exp(-np.sqrt(r**2+R**2/4+R*r*np.cos(theta))/a)))**2*r**2*np.sin(theta)
    return f

# Integration limits
r1 = 0
r2 = lambda theta: b*c/np.sqrt(c**2*np.sin(theta)**2+b**2*np.cos(theta)**2)
t1 = 0
t2 = np.pi
I = 2*np.pi*dblquad(f2D, t1, t2, lambda theta: r1, lambda theta: r2(theta))[0]
print("The probability is: %s" % I)

The probability is: 0.6545479922964303

from scipy.integrate import tplquad  # Three dimensional integral function

def f3D(phi, r, theta):
    a = 1.0  # Bohr radius (set to one)
    I = np.exp(-R/a)*(1+R/a+1/3*(R/a)**2)
    A = np.sqrt(1/(2*(1+I)))
    f = (A/np.sqrt(np.pi*a**3)*(np.exp(-np.sqrt(r**2+R**2/4-R*r*np.cos(theta))/a)
        +np.exp(-np.sqrt(r**2+R**2/4+R*r*np.cos(theta))/a)))**2*r**2*np.sin(theta)
    return f

# Integration limits
r1 = 0
r2 = lambda theta: b*c/np.sqrt(c**2*np.sin(theta)**2+b**2*np.cos(theta)**2)
t1 = 0
t2 = np.pi
p1 = 0
p2 = 2*np.pi
I = tplquad(f3D, t1, t2, lambda theta: r1, lambda theta: r2(theta), lambda theta, r: p1, lambda theta, r: p2)[0]
print("The probability is: %s" % I)

The probability is: 0.6545479922883506

We see that the result we got using Monte Carlo integration agrees quite well with the results obtained using the built-in functions. As a test of our previous results, we can also try to find the optimal solution using the function optimize.minimize from the scipy library. We here use the two-dimensional integration method, since this was the most efficient one.

from scipy.optimize import minimize

def obj_func(b, V):
    """Objective function which returns the probability for a given value
    of the semi-principal axis b"""
    B = b*a
    C = V*3/(4*np.pi*B**2)
    r1 = 0
    r2 = lambda theta: B*C/np.sqrt(C**2*np.sin(theta)**2+B**2*np.cos(theta)**2)
    t1 = 0
    t2 = np.pi
    I = 2*np.pi*dblquad(f2D, t1, t2, lambda theta: r1, lambda theta: r2(theta))[0]
    return -I

def optimizeRatio(V, tol):
    b_0 = 0.01
    res = minimize(fun=obj_func, x0=b_0, args=(V,), jac=False, tol=tol)
    ratio = res.x[0]*a/(V*3/(4*np.pi*res.x[0]**2)/a**2)
    return (ratio, res)

rat, res = optimizeRatio(V_0, 1e-5)
print(res.message)
print("b/a = %s" % res.x[0])
print("Maximum probability = %s" % (res.fun*(-1)))  # parentheses so the sign flip applies to res.fun, not the string
print("Ratio b/c = %s" % rat)

Optimization terminated successfully.
b/a = 1.7091338453
Ratio b/c = 0.624077084894

As we see, the results agree quite well with what we have found earlier. An interesting next question is: how does the ratio $b/c$ change as the volume $V_0$ of the ellipsoid changes? Thinking about how the protons are located, one might guess that $b/c$ will decrease for decreasing volumes, i.e. the ellipsoid becomes very narrow in order to contain the two protons. For large volumes, $b/c$ should approach one. Let's check these assumptions. In order to save computing time, we will use the minimize function from the scipy library, as was done above.

V = np.logspace(-3, 3, 7)
ratios = np.zeros(np.size(V))
results = np.zeros(np.size(V))

# We need higher accuracy when the volume gets larger, in order to find the correct ratio:
for i, V_ in enumerate(V):
    (ratios[i], res) = optimizeRatio(V_, 1e-4)

plt.figure(figsize=(8,3))
plt.loglog(V, ratios, '-')
plt.ylabel(r"$b/c$")
plt.xlabel(r"$V$");

We see that our guess seems to fit quite well!
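As an aside that is not part of the original notebook: the notebook's own comment notes that the loop-based Monte Carlo search is not very efficient. A hedged sketch of a vectorized version of the innermost estimate, reusing psi, R and the box-volume bookkeeping defined above, could look like this:

def prob_inside(b, c, N=100000):
    """Vectorized Monte Carlo estimate of the probability of finding the
    electron inside the ellipsoid with semi-principal axes (b, b, c)."""
    x = np.random.uniform(-b, b, N)
    y = np.random.uniform(-b, b, N)
    z = np.random.uniform(-c, c, N)
    inside = (x/b)**2 + (y/b)**2 + (z/c)**2 <= 1
    r1 = np.sqrt(x**2 + y**2 + (z - R/2)**2)
    r2 = np.sqrt(x**2 + y**2 + (z + R/2)**2)
    integrand = np.where(inside, np.abs(psi(r1, r2, R))**2, 0.0)
    return integrand.mean() * 8*b**2*c  # sampling box volume is 2b * 2b * 2c

# For example, at the optimum found above:
# print(prob_inside(b, c))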
A square is a topological manifold with boundary but not a smooth manifold with boundary because of its corners. But I am confused about it. I think that since for a specific corner $p$ there is only one chart covering $p$ (in order to be compatible), say $(U,f)$, the transition maps in a neighborhood of $p$ can only be $ff^{-1}$ and $f^{-1}f$, so the charts are compatible, and so a square has a smooth structure. I feel confused about it. Could you tell me where I went wrong? Thank you! The topological space known as the square definitely does have a smooth structure, since it's homeomorphic to a disc. However, it does not have a smooth structure such that the inclusion map $i: \square \to \Bbb R^2$ is a smooth embedding. Proof: Put a smooth structure on the square. Let $(U,\varphi)$ be a boundary chart about one of the corners $p$ such that $\varphi(p)=0$. If $i$ is a smooth embedding, then the differential $di_p$ is an isomorphism. The tangent cone of a point $q$ is the set of vectors $v \in T_qM$ such that $v = \gamma'(0)$, where $\gamma: [0,\varepsilon) \to M$ is smooth. For an interior point, the tangent cone is the whole of $T_qM$; for a boundary point of a smooth manifold with boundary, it's a half-space, precisely the half-space of "inward-pointing" tangent vectors. (This is a useful notion when considering other sorts of 'manifolds with corners'; see my answer here for another application.) Now note that tangent cones are functorial: if $f: M \to N$ is a smooth map, then $df_p(C_p) \subset C_{f(p)}$. (If $f$ is a diffeomorphism, we see that the linear isomorphism $df_p$ preserves the tangent cones.) Now let's see how this contradicts our assumption on the smooth map $i$. Because $p$ is a boundary point of a manifold with boundary, $C_p$ is a half-space. But $C_{(0,0)}$ is a quadrant of $\Bbb R^2$, and there is no linear isomorphism that sends a half-space into a quadrant. Relatedly, it is worth thinking about the upper right quadrant in $\Bbb C = \Bbb R^2$, and why the map $z \mapsto z^2$ is a homeomorphism onto the upper half plane but not a diffeomorphism, and how the idea of this argument comes from that fact.
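To make the closing remark about $z \mapsto z^2$ a bit more explicit (this is only the standard one-line computation, added here for convenience): the differential of $z \mapsto z^2$ at the corner is $$\left.\frac{d}{dz}\,z^2\right|_{z=0} = 2z\Big|_{z=0} = 0,$$ which is not invertible, so although the map is a homeomorphism from the closed quadrant onto the closed upper half-plane, it cannot be a diffeomorphism at the corner; equivalently, the inverse $w \mapsto \sqrt{w}$ fails to be differentiable at $0$.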
Let $\{A_\alpha:\alpha \in \Lambda\}$ be an indexed collection of sets. If $\bigcap \{A_\alpha:\alpha \in \Lambda\} \neq \emptyset$, then for each $\beta \in \Lambda$, $A_\beta \neq \emptyset$. My thought was a proof by contraposition: Assume $A_\beta = \emptyset$ for some $\beta \in \Lambda.$ It would follow that the intersection of $A_\beta$ with another set $A_\gamma$, where $\gamma \in \Lambda$, would yield the empty set. Thus $\bigcap \{A_\alpha:\alpha \in \Lambda\} = \emptyset$. Is this proof valid, or did I maybe overlook something? Any thoughts would be appreciated.