S S Talwar Articles written in Pramana – Journal of Physics Volume 67 Issue 1 July 2006 pp 121-134 The Langmuir–Blodgett (LB) process is an important route to the development of organized molecular layered structures of a variety of organic molecules with suitably designed architecture and functionality. LB multilayers have also been used as templates and precursors to develop nano-structured thin films. In this article, studies on the molecular packing and three-dimensional structure of prototypic cadmium arachidate (CdA), zinc arachidate (ZnA) and mixed CdA-ZnA LB multilayers are presented. The formation of semiconducting nano-clusters of CdS, ZnS and Cd$_x$Zn$_{1-x}$S alloys within the organic multilayer matrix, using arachidate LB multilayers as precursors, is also discussed. Volume 87 Issue 4 October 2016 Article ID 0056 Regular We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticle-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma(-\omega; \omega, -\omega, \omega)$ of the samples has been investigated by the antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using a femtosecond mode-locked Ti:sapphire laser in the spectral range of 720–820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of the three-essential-states model, and a correlation between the electronic structure and optical nonlinearity of the samples has been established. The energy of the two-photon state, the transition dipole moments and the linewidth of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticle-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to those of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and the $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity.
Let $G_1, G_2$ be groups. Find a group $G$ and homomorphisms $\iota_1: G_1 \rightarrow G, \iota_2: G_2 \rightarrow G$ which satisfy the following property: For every group $A$ and any homomorphisms $\phi_i: G_i \rightarrow A,\quad i=1,2$, there is a unique homomorphism $f: G \rightarrow A$ with $\phi_i=f \circ \iota_i$ for $i=1,2$. I know this is the property of the coproduct, but I think that I should write down $(G, \iota_1,\iota_2)$ explicitly. I know that for abelian groups the Cartesian product $G=G_1 \times G_2$ and the maps $g_1 \mapsto (g_1,e_{G_2})$, $g_2 \mapsto (e_{G_1},g_2)$ would satisfy the property. But in the category of groups, e.g. for $G_1=G_2=\mathbb{Z}$, the construction I just stated would fail. I had another idea of constructing $G$ via the free product of groups, but this didn't take me very far because it opened more questions for me than it answered, especially how to choose the homomorphisms $\iota_{1,2}$. ADDITIONAL QUESTION: I should prove that $\iota_1, \iota_2$ are injective if $(C, \iota_1, \iota_2)$ is the coproduct of the groups $G_1, G_2$. And argue why this isn't the case in the class of all rings with unit. My proof: Since $C$ is the coproduct, for any group $D$ and homomorphism $\phi_1: G_1 \rightarrow D$ there is a unique homomorphism $f: C \rightarrow D$ with $\phi_1=f \circ \iota_1$. Let $D:=G_1$ and $\phi_1=id_{G_1}$. $\iota_1(g)=\iota_1(h) \Rightarrow f(\iota_1(g))=f(\iota_1(h)) \Rightarrow \phi_1(g)=\phi_1(h) \Rightarrow g=h$ Hence $\iota_1$ is injective. Analogously for $\iota_2$. First of all, have I missed something? And secondly, which step doesn't work for rings with unit? (My guess was that the zero ring could make trouble, but that's no more than a guess right now.)
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or from TeX syntax.

\frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right)

List of currently supported elements: \frac; _ and ^; \left and \right; \sum, \prod and \int; \vec; limits (\lim); \sqrt; \begin{matrix} ... \end{matrix}. Features that are partially implemented (and what is left to finish them): see the TODO.md file for more features to come.

First specify the platform you want to use: cli is for command-line tests, with no visualization (PC); sdl2 is an SDL interface with visualization (PC); fx9860g builds the library for fx-9860G targets (calculator); fxcg50 builds the library for fx-CG 50 targets (calculator). For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb. The install directory of the library is guessed by asking the compiler; you can override it with --prefix. Example for an SDL setup:

% ./configure --platform=sdl2

Then you can make the program, and if it's a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed.

% make
% make install  # fx9860g and fxcg50 only

Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely TeX_intf_pixel, TeX_intf_line, TeX_intf_size and TeX_intf_text. The three rendering functions are available in fxlib; for monospaced fonts the fourth can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext().

The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter, display, is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX).

char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}";
struct TeX_Env *formula = TeX_parse(code, 1);

The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives):

TeX_draw(formula, 0, 0, BLACK);

The same formula can be drawn several times. When it is no longer needed, free it with TeX_free():

TeX_free(formula);
This mod integrates the MathJax library into the SMF forum software. MathJax is the modern JavaScript-based LaTeX rendering solution for the Internet. The mod uses the MathJax CDN. The CDN will automatically arrange for your readers to download MathJax files from a fast, nearby server. And since bug fixes and patches are deployed to the CDN as soon as they become available, your pages will always be up to date with the latest browsers and devices. https://www.mathjax.org GitHub repository: https://github.com/realdigger/SMF-MathJax-Mod [latex] E=mc^2 [/latex] Some text before [latex=inline] E=mc^2 [/latex] some text after \[ E=mc^2 \] Some text before \( E=mc^2 \) some text after Here is Bayes' theorem: [latex]{\displaystyle P(A\mid B)={\frac {P(B\mid A)\,P(A)}{P(B)}},}[/latex] \[ {\displaystyle P(A\mid B)={\frac {P(B\mid A)\,P(A)}{P(B)}},} \] ...and a more complex example - Bayes' theorem applied to drug testing: [latex]{\displaystyle {\begin{aligned}P({\text{User}}\mid {\text{+}})&={\frac {P({\text{+}}\mid {\text{User}})P({\text{User}})}{P(+)}}\\&={\frac {P({\text{+}}\mid {\text{User}})P({\text{User}})}{P({\text{+}}\mid {\text{User}})P({\text{User}})+P({\text{+}}\mid {\text{Non-user}})P({\text{Non-user}})}}\\[8pt]&={\frac {0.99\times 0.005}{0.99\times 0.005+0.01\times 0.995}}\\[8pt]&\approx 33.2\%\end{aligned}}} [/latex] \[ {\displaystyle {\begin{aligned}P({\text{User}}\mid {\text{+}})&={\frac {P({\text{+}}\mid {\text{User}})P({\text{User}})}{P(+)}}\\&={\frac {P({\text{+}}\mid {\text{User}})P({\text{User}})}{P({\text{+}}\mid {\text{User}})P({\text{User}})+P({\text{+}}\mid {\text{Non-user}})P({\text{Non-user}})}}\\[8pt]&={\frac {0.99\times 0.005}{0.99\times 0.005+0.01\times 0.995}}\\[8pt]&\approx 33.2\%\end{aligned}}} \] Features of MathJax: JavaScript-based LaTeX rendering. High-quality typography. MathJax™ uses modern CSS and web fonts, instead of equation images or Flash, so equations scale with surrounding text at all zoom levels. See how this works in the scaling math demo. Works in all modern browsers. This allows the math in your content to be seen clearly by virtually all readers, even those using smartphones. See supported browsers. Simple integration. Using MathJax with blogs, wikis, web pages and other web apps is easy. Learn more about installing MathJax with popular platforms like WordPress, MediaWiki, Drupal, and more. Copy and paste math. Let readers copy equations from your web pages into Word and LaTeX documents, science blogs, research wikis, calculation software like Maple, Mathematica and more. Watch the copy and paste demo. A rich API. Allows developers to create interactive course materials, advanced authoring tools, and math-enabled web apps. Learn more about programming with MathJax. Accessible math. MathJax is compatible with screen readers used by people with vision disabilities, and the zoom feature allows all readers to see small details like scripts, primes and hats. See how to make your math accessible.
This is a continuation of The Springer Correspondence, Part I. Here we will work with unipotent matrices to construct the Springer resolution and the cohomology of its fibers. Unipotent Matrices and Partitions A unipotent element of a linear algebraic group $G$ is any element $u\in G$ such that $1-u$ is nilpotent. That is, $u=1+n$ where $n^k=0$ for some $k$. To get a sense of what unipotent matrices look like, consider the type A situation in which $\DeclareMathOperator{\GL}{GL}\newcommand{\CC}{\mathbb{C}} G=\GL_n(\CC)$. Given a unipotent element $u$, we can conjugate it by some matrix to put it in Jordan normal form. It will look something like this: $$gug^{-1}=\left(\begin{array}{ccccccc} \lambda_1 & 1 & & & & & \\ & \lambda_1 & 1 & & & & \\ & & \lambda_1 & & & & \\ & & & \lambda_2 & 1 & & \\ & & & & \lambda_2 & & \\ & & & & & \ddots & \\ & & & & & & \lambda_k \end{array}\right)$$ It turns out that the matrix above is particularly simple in this case: The eigenvalues $\lambda_i$ of a unipotent matrix are all $1$. To see this, suppose $\lambda$ is an eigenvalue of $u$. We have $uv=\lambda v$ for some vector $v$, and so $$(1-u)v=(1-\lambda)v.$$ Since $1-u=n$ is nilpotent, say with $n^k=0$, we have $$(1-u)^kv=(1-\lambda)^kv=0,$$ so $(1-\lambda)^k=0$. Since $\lambda\in\CC$ and $\CC$ is a field, it follows that $\lambda=1$, as claimed. Therefore, every unipotent matrix is conjugate to a matrix having all $1$’s on the diagonal, $0$’s or $1$’s on the off-diagonal, and $0$’s everywhere else. The blocks of $1$’s on the off-diagonal split the matrix into Jordan blocks, which we can order by size from greatest to least. Let the sizes of the Jordan blocks be $\mu_1,\mu_2,\ldots,\mu_k$. Then $\mu=(\mu_1,\ldots,\mu_k)$ is a partition of $n$, and determines the conjugacy class of a given unipotent matrix. For instance, the partition $\mu=(3,2,2)$ corresponds to the conjugacy class of unipotent matrices with the Jordan canonical form below: $$\left(\begin{array}{ccccccc} 1 & 1 & & & & & \\ & 1 & 1 & & & & \\ & & 1 & & & & \\ & & & 1 & 1 & & \\ & & & & 1 & & \\ & & & & & 1 & 1 \\ & & & & & & 1 \end{array}\right)$$ This can all be summed up in the following fact: The unipotent conjugacy classes in $\GL_n$ are in one-to-one correspondence with the partitions of $n$. Now, I know what you are thinking: “Maria, if the unipotent conjugacy classes of $\GL_n$ and the irreducible representations of $S_n$ are both indexed by the partitions of $n$, shouldn’t there be some nice geometric construction that relates them directly?” Indeed there is! The Springer correspondence gives just that – and furthermore relates the unipotent conjugacy classes of any Lie group $G$ to the representations of its Weyl group. The Springer Resolution In what follows, let $G$ be a Lie group, and let $U$ be the subvariety of $G$ consisting of all unipotent elements. The variety $U$ is not smooth in general, and to resolve the singularities we construct the variety $\widetilde{U}\subset U\times \mathcal{B}$ by $$\widetilde{U}=\{(u,B):u\in B\}.$$ Recall from the previous post that $\mathcal{B}$ is the variety of all Borel subgroups of $G$ and is isomorphic to the Flag variety $G/B$ for any Borel $B$. If we interpret $\mathcal{B}$ as the Flag variety in the case $G=\GL_n$, we can alternatively define $\widetilde{U}$ as the set of all pairs $(u,F)$ where $F$ is a flag and $u$ fixes $F$, that is, $uF_i=F_i$ for each $i$. It turns out that $\widetilde{U}$ is smooth and the projection map $$\pi:\widetilde{U}\to U$$ is proper, so it resolves the singularities in $U$. This map is known as the Springer resolution. The theory gets rather deep at this point, so in what follows I will state the main facts without proof.
For full details I refer the interested reader to the exposition in Chapter 3 of Representation Theory and Complex Geometry by Chriss and Ginzburg. Springer Fibers and Weyl Group Action For any $x\in U$, define the Springer fiber $\mathcal{B}_x$ to be the fiber $\pi^{-1}(x)$ of the Springer resolution over $x$, that is, the set of all Borel subgroups of $G$ that contain $x$. Now, consider the cohomology ring $H^\ast(\mathcal{B}_x)$ over $\CC$. It turns out that there is an action of the Weyl group $W$ on this cohomology ring, called the Springer action. There is unfortunately no direct way of defining this action. To get some intuition for where the action comes from, notice that the Springer resolution above can be lifted to the entire group: one can define $\widetilde{G}$ to be the subvariety of $G\times \mathcal{B}$ consisting of all pairs $(g,B)$ such that $g\in B$. Now, let $x$ be a regular semisimple element of $G$. In the case $G=\GL_n$, a regular semisimple element is simply a diagonalizable element $x$ with $n$ distinct nonzero eigenvalues. If $x$ is of this form, any subspace of $\CC^n$ fixed by $x$ is a direct sum of its (linear) eigenspaces. So, if $V_1,\ldots,V_n$ are the eigenspaces corresponding to the distinct eigenvalues of $x$, any flag fixed by $x$ is of the form $$V_{\sigma(1)}\subset V_{\sigma(1)}\oplus V_{\sigma(2)}\subset \cdots \subset V_{\sigma(1)}\oplus \cdots \oplus V_{\sigma(n)}$$ for some permutation $\sigma$ of $\{1,2,\ldots,n\}$. It follows that $\mathcal{B}_x$ consists of exactly $n!$ flags, and has a natural action of $S_n$ via permuting the eigenspaces $V_i$. Therefore, $S_n$ acts on $\mathcal{B}_x$ and therefore on $H^\ast(\mathcal{B}_x)$. In general, if $x$ is regular and semisimple, the fiber $\mathcal{B}_x$ is a finite set of size $|W|$ where $W$ is the Weyl group of $G$. The regular semisimple elements form a dense subset of $G$, and one can use this to extend the action to all cohomology rings $H^\ast(\mathcal{B}_x)$ for any $x\in G$. This is the tricky part, and involves many more constructions than fit in a reasonable-length blog post, so again I refer the reader to this awesome book. The Springer Correspondence We’re finally ready to state the Springer correspondence. For $x\in G$, let $d(x)$ be the dimension of the Springer fiber $\mathcal{B}_x$. In the case $G=\GL_n$, the top cohomology groups $H^{d(x)}(\mathcal{B}_x)$ are $S_n$-modules due to the Springer action described above. Notice also that $\mathcal{B}_x$ depends only on the conjugacy class of $x$, so for $x$ in the unipotent conjugacy class with shape $\mu$, we write $\mathcal{B}_\mu$ to denote this Springer fiber, with $d(\mu)$ its dimension. It turns out that these $S_n$-modules are precisely the irreducible representations of $S_n$. The $S_n$-module $H^{d(\mu)}(\mathcal{B}_\mu)$ is isomorphic to the irreducible representation $V_\mu$ of $S_n$ corresponding to $\mu$. And there you have it. For general Lie groups $G$, the Springer correspondence is not quite as nice; the top cohomology groups $H^d(\mathcal{B}_u)$ (where $u$ is a unipotent conjugacy class) are not in general irreducible $W$-modules. However, all of the irreducible $W$-modules occur exactly once as a summand among the modules $H^d(\mathcal{B}_u)$, and there is a correspondence between the irreducible representations of $W$ and pairs $(u,\xi)$ where $u$ is a unipotent conjugacy class in $G$ and $\xi$ is an irreducible $G$-equivariant local system on $u$.
Hall-Littlewood Polynomials The fact that the top cohomology groups $H^{d(\mu)}(\mathcal{B}_\mu)$ are so nice naturally raises the question: what about the other cohomology groups? What $S_n$-modules do we get in each degree? In particular, let $R_\mu=H^\ast(\mathcal{B}_\mu)$. Then $R_\mu$ is a graded $S_n$-module with grading $$R_\mu=\bigoplus (R_\mu)_i=\bigoplus H^i(\mathcal{B}_\mu),$$ and so we can construct the Frobenius series $$F_t(R_\mu)=\sum_{i=0}^{d(\mu)}F((R_{\mu})_i)t^i$$ where $F$ is the Frobenius map that sends $S_n$-modules to symmetric functions. The Hall-Littlewood polynomials $\widetilde{H}_\mu(\mathbf{x};t)$ are defined to be the Frobenius characteristics $F_t(R_\mu)$, and are therefore a class of symmetric polynomials in the variables $\mathbf{x}=x_1,x_2,\ldots$ with coefficients in $\mathbb{Q}[t]$. They have incredibly rich combinatorial structure which reveals the decomposition of $R_\mu$ into irreducible $S_n$-modules… structure that I will save for a later post. Stay tuned!
When thinking about some other problem, I stumbled upon the following innocent-looking question, which is natural enough to have been considered (and, possibly, solved) many years ago. However, my attempts to search the literature for an answer resulted in next to nothing. Let $K\subset\mathbb R^n$ be a convex cone and let $K^*=\{y:\langle x,y\rangle\ge 0\text{ for every }x\in K\}$ be its dual cone. Suppose that $K\supset K^*$ (or, if you prefer, even that $K=K^*$). What is the minimal possible ratio $\frac{|K\cap B|}{|B|}$, where $B$ is the unit ball in $\mathbb R^n$, when $n$ is large? The answer should probably be of order $2^{-n}$ (positive orthant), but the best clean lower bound I can prove myself with my "homemade tools" is $(\sqrt 2+1)^{-n}$ (it can be improved a bit further to something like $2.317^{-n}$, but the argument gets somewhat messy and it is clear that this way won't lead to the optimal estimate). Any help would be appreciated.
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, since machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame dragging?
(one possible physical scenario in which I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, would we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric would become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. My opinion is that I need you, Kaumudi, to decrease the probability of the h Bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true, though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just use tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group at my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is Emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)?
Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: the 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, which means remotely running another environment, I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Consider the $p$-th power of the Schatten $p$-norm $||q||_p$ of a probability distribution $q$, i.e., the function $\sum_j q_j^p$, where $\sum_j q_j = 1$ and $q_j \geq 0$. For fixed $q$, this is a nonincreasing function of $p$ for $p>1$. The question is: given $p_1, p_2$ both $>1$, is it true that whenever $\left(||q||_{p_1} \right)^{p_1} \geq \left(||s||_{p_1} \right)^{p_1}$, then also $\left(||q||_{p_2} \right)^{p_2} \geq \left(||s||_{p_2} \right)^{p_2}$? In other words, do all these functions define the same partial order on the set of probability distributions? Note: in information-theoretic terms, the question can also be equivalently stated in terms of the Tsallis entropies $T_p(q) = \frac{1-\sum_j q_j^p}{p-1}$, and also the closely related Rényi entropies $R_p(q) = \frac{1}{1-p} \ln \sum_j q_j^p$.
By Joannes Vermorel, February 2012. More accurate demand forecasts are obviously good as far as inventory optimization is concerned. However, the quantitative assessment of the financial gains generated by an increase in forecasting accuracy typically remains a fuzzy area for many retailers and manufacturers. This article details how to compute the benefits generated by an improved forecast. The viewpoint adopted in this article is a best fit for high-turnover inventories, with turnovers above 15. For high turnover values, the dominant effect is not so much stockouts, but rather the sheer amount of inventory, and its reduction through better forecasts. If such is not your case, you can check out our alternative formula for low turnover. The formula. The detail of the proof is given below, but let's start with the final result. Let's introduce the following variables: $D$ the turnover (total annual sales); $m$ the gross margin; $\alpha$ the cost of stockout to gross margin ratio; $p$ the service level achieved with the current error level (and current stock level); $\sigma$ the forecast error of the system in place, expressed in MAPE (mean absolute percentage error); $\sigma_n$ the forecast error of the new system being benchmarked (hopefully lower than $\sigma$). The yearly benefit $B$ of revising the forecasts is given by: $$B = D (1 - p) m \alpha \frac{\sigma - \sigma_n}{\sigma}$$ Download Excel sheet: accuracy-gains.xlsx (illustrated calculation). It is possible to replace the MAPE error measurements by MAE (mean absolute error) measures within the formula. This replacement is actually strongly advised if slow movers exist in your inventory. Practical example. Let's consider a large retail network that can obtain a 10% reduction of the (relative) forecast error through a new forecasting system. $D=1,000,000,000€$ (1 billion Euros); $m=0.2$ (i.e. a gross margin of 20%); $p=0.97$ (i.e. a service level of 97%); $\alpha=3$ (stockouts cost 3x the gross margin loss); $\sigma=0.2$ (MAPE of 20%); $\sigma_n=0.18$ (MAPE of 18%, relatively 10% lower than the previous error). Based on the formula above, we obtain a gain of $B=1,800,000€$ per year. If we assume that the overall profitability of the retailer is 5%, then we see that a 10% improvement in forecasting accuracy already contributes 4% of the overall profitability. Proof of the formula. At a fundamental level, inventory optimization is a tradeoff between excess inventory costs and excess stockout costs. Let's assume, for now, that, for a given stock level, the stockout frequency is proportional to the forecasting error. This point will be demonstrated in the next section. The total volume of sales lost through stockouts is simple to estimate: it's $D(1-p)$, at least for any reasonably high value of $p$. In practice, this estimation is very good if $p$ is greater than 90%. Hence, the total volume of margin lost through stockouts is $D(1-p)m$. Then, in order to model the real cost of the stockout, which is not limited to the loss of margin (think loss of customer loyalty, for example), we introduce the coefficient $\alpha$. So the total economic loss caused by stockouts becomes $D(1-p)m\alpha$. Based on the assumption (demonstrated below) that stockouts are proportional to the error, we need to apply the factor $(\sigma - \sigma_n) / \sigma$ as the evolution of the stockout cost caused by the new average forecast error.
Hence, in the end, we obtain: $$B = D (1 - p) m \alpha \frac{\sigma - \sigma_n}{\sigma}$$ Stockouts are proportional to the error. Let's now demonstrate the statement that, for a given inventory level, stockouts are proportional to the forecasting error. In order to do that, let's start with a service level of 50% ($p=0.5$). In this context, the safety stock formula indicates that safety stocks are at zero. Several variants of the safety stock formula exist, but they all behave similarly in this respect. With zero safety stocks, it becomes easier to evaluate the loss caused by forecast errors. When the demand is greater than the forecast (which happens here 50% of the time, by definition of $p=0.5$), the average percentage of sales lost is $\sigma$. Again, this is only the consequence of $\sigma$ being the mean absolute percentage error. However, with the new forecasting system, the loss is $\sigma_n$ instead. Thus, we see that with $p=0.5$, stockouts are indeed proportional to the error. The reduction of the stockouts when replacing the old forecast with the new one will be $\sigma_n / \sigma$. Now, what about $p \not= 0.5$? By choosing a service level distinct from 50%, we are transforming the mean forecasting problem into a quantile forecasting problem. Thus, the appropriate error metric for quantile forecasts becomes the pinball loss function, instead of the MAPE. However, since we can assume here that the two mean forecasts (the old one and the new one) will be extrapolated as quantiles (to compute the reorder point) through the same formula, the ratio of the respective errors will remain the same. In particular, if the safety stock is small (say, less than 20%) compared to the primary stock, then this approximation is excellent in practice. Cost of stockouts (α). The factor $\alpha$ has been introduced to reflect the real impact of a stockout on the business. At a minimum, we have $\alpha = 1$ because the loss caused by an extra stockout is at least equal to the volume of gross margin being lost. Indeed, when considering the marginal cost of a stockout, all infrastructure and manpower costs are fixed, hence the gross margin should be considered. However, the cost of a stockout is typically greater than the gross margin. Indeed, a stockout causes: a loss of client loyalty; a loss of supplier trust; more erratic stock movements, stressing supply chain capacities (storage, transport, ...); overhead efforts for downstream teams who try to mitigate stockouts one way or another; ... Among several large food retail networks, we have observed that, as a rule of thumb, practitioners assume $\alpha=3$. This high cost of stockouts is also the reason why, in the first place, the same retail networks typically seek high service levels, above 95%. Misconceptions about safety stocks. In this section, we debunk one recurrent misconception about the impact of extra accuracy, which can be expressed as "extra accuracy only reduces safety stocks". Looking at the safety stock formula, one might be tempted to think that the impact of a reduced forecasting error will be limited to lowering the safety stock, all other variables remaining unchanged (stockouts in particular). This is a major misunderstanding. Classical safety stock analysis splits inventory into two components: the primary stock, equal to the lead demand, that is to say the average forecast demand multiplied by the lead time, and
the safety stock, equal to the demand error multiplied by a safety coefficient that depends mostly on $p$, the service level. Let's go back to the situation where the service level equals 50%. In this situation, safety stocks are at zero (as seen before). If the forecast error were only impacting the safety stock component, then it would imply that the primary stock was immune to a poor forecast. However, since there is no inventory here beyond the primary stock, we end up with the absurd conclusion that the whole inventory has become immune to arbitrarily bad forecasts. Obviously, this does not make sense. Hence, the initial assumption that only safety stocks are impacted is wrong. Despite being incorrect, the "safety stock only" assumption is tempting because, when looking at the safety stock formula, it looks like an immediate consequence. However, one should not jump to conclusions too hastily: it is not the only consequence. The primary stock is built on top of the demand forecast as well, and it's the first one to be impacted by a more accurate forecast. Advanced topics. In this section, we delve into further details that have been omitted from the discussion above for the sake of clarity and simplicity. Impact of varying lead times. The formula above indicates that reducing the forecast error to 0% should bring stockouts to zero as well. On the one hand, if customer demand could be anticipated with 100% accuracy 1 year in advance, achieving near-perfect inventory levels would not be so outstanding. On the other hand, some factors, such as the varying lead time, complicate the task. Even if the demand is perfectly known, a varying timing of delivery might generate further uncertainties. In practice, we observe that the uncertainty related to the lead time is typically small compared to the uncertainty related to the demand. Hence, neglecting the impact of varying lead times is reasonable as long as forecasts remain somewhat inaccurate (say, for MAPEs higher than 10%). Lokad gotchas. Delivering superior forecasts is the number one priority for Lokad. For companies with advanced forecasting systems in place, benchmarks performed by our clients indicate that we typically reduce the relative forecasting error by 10% or more. For companies with few practices in place, the gain can go up to 30%. However, don't take our word for granted; benchmark your inventory practices yourself for free with our forecasting engine using our 30-day free trial.
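As a quick cross-check of the formula and the practical example above, here is a minimal Python sketch; the function and variable names are my own, and only the formula itself comes from the article.

```python
def yearly_benefit(D, p, m, alpha, sigma, sigma_new):
    """Yearly benefit B = D * (1 - p) * m * alpha * (sigma - sigma_new) / sigma."""
    return D * (1 - p) * m * alpha * (sigma - sigma_new) / sigma

# Numbers from the practical example above.
B = yearly_benefit(D=1_000_000_000, p=0.97, m=0.2, alpha=3, sigma=0.20, sigma_new=0.18)
print(f"B = {B:,.0f} EUR per year")  # B = 1,800,000 EUR per year
```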
Consider a monotonic predicate $P$ over the powerset $2^{|n|}$ (ordered by inclusion). By "monotonic" I mean: $\forall x, y \in 2^{|n|}$ such that $x \subset y$, if $P(x)$ then $P(y)$. I am looking for an algorithm to find all the minimal elements of $P$, i.e., the $x \in 2^{|n|}$ such that $P(x)$ but $\forall y \subset x$, $\neg P(y)$. Since the width of $2^{|n|}$ is $n \choose n/2$, there could be exponentially many minimal elements, and therefore the running time of such an algorithm could be exponential in general. However, could there exist an algorithm for this task which is polynomial in the size of the output? [Context: A more general question was asked but there was no attempt in the answers to evaluate the complexity of the algorithm in the size of the output. If I assume that there is only one minimal element, for instance, then I can perform a binary search following this answer and find it. However, if I want to continue finding more minimal elements, I need to maintain the current information I have about $P$ in a way which would make it tractable to continue the search without wasting time on what is already known. Is it possible to do this and find all the minimal elements in polynomial time in the size of the output?] Ideally, I would like to understand if this can be done with general DAGs, but I already don't know how to answer the question for $2^{|n|}$.
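For concreteness, here is a minimal Python sketch of one standard way to find a single minimal element with at most $n$ predicate evaluations, by greedily discarding elements; this is not necessarily the binary-search method of the linked answer, and the predicate is assumed to be given as a callable on frozensets.

```python
def one_minimal_element(P, n):
    """Given a monotone predicate P on subsets of {0, ..., n-1} with P(full set) true,
    return one minimal set satisfying P, using at most n calls to P."""
    current = set(range(n))
    assert P(frozenset(current)), "P must hold on the full set"
    for i in range(n):
        # Try dropping element i; keep it dropped if P still holds. By monotonicity,
        # any element whose removal broke P at some stage also breaks P at the end,
        # so the final set is minimal.
        if P(frozenset(current - {i})):
            current.discard(i)
    return frozenset(current)

# Example: P(x) = "x contains {1, 3} or contains 2" (a monotone predicate).
P = lambda x: {1, 3} <= x or 2 in x
print(one_minimal_element(P, 5))  # frozenset({2}) with this elimination order
```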
In the 2SAT problem, you are given a set of clauses, where each clause is the disjunction (OR) of two literals (a literal is a Boolean variable or the negation of a Boolean variable). You are looking for a way to assign a value ${\tt true}$ or ${\tt false}$ to each of the variables so that all clauses are satisfied — that is, there is at least one true literal in each clause. For example, here’s an instance of 2SAT: $$(x_1 \lor \overline{x}_2) \land (\overline{x}_1 \lor \overline{x}_3) \land (x_1 \lor x_2) \land (\overline{x}_3 \lor x_4) \land (\overline{x}_1 \lor x_4)\, .$$ This instance has a satisfying assignment: set $x_1$, $x_2$, $x_3$, and $x_4$ to ${\tt true}$, ${\tt false}$, ${\tt false}$, and ${\tt true}$, respectively. Are there other satisfying truth assignments of this 2SAT formula? If so, find them all. Give an instance of 2SAT with four variables, and with no satisfying assignment. The purpose of this problem is to lead you to a way of solving 2SAT efficiently by reducing it to the problem of finding the strongly connected components of a directed graph. Given an instance $I$ of 2SAT with $n$ variables and $m$ clauses, construct a directed graph $G_I = (V, E)$ as follows. $G_I$ has $2n$ nodes, one for each variable and its negation. $G_I$ has $2m$ edges: for each clause $(\alpha \lor \beta)$ of $I$ (where $\alpha, \beta$ are literals), $G_I$ has an edge from the negation of $\alpha$ to $\beta$, and one from the negation of $\beta$ to $\alpha$. Note that the clause $(\alpha \lor \beta)$ is equivalent to either of the implications $\overline{\alpha} \Rightarrow \beta$ or $\overline{\beta} \Rightarrow \alpha$. In this sense, $G_I$ records all implications in $I$. Carry out this construction for the instance of 2SAT given above, and for the instance you constructed in 2. Show that if $G_I$ has a strongly connected component containing both $x$ and $\overline{x}$ for some variable $x$, then $I$ has no satisfying assignment. Now show the converse of 4: namely, that if none of $G_I$’s strongly connected components contain both a literal and its negation, then the instance $I$ must be satisfiable. (Hint: Assign values to the variables as follows: repeatedly pick a sink strongly connected component of $G_I$. Assign value ${\tt true}$ to all literals in the sink, assign ${\tt false}$ to their negations, and delete all of these. Show that this ends up discovering a satisfying assignment.) Conclude that there is a linear-time algorithm for solving 2SAT. Given: A positive integer $k \le 20$ and $k$ 2SAT formulas represented as follows. The first line gives the number of variables $n \le 10^3$ and the number of clauses $m \le 10^4$; each of the following $m$ lines gives a clause of length $2$ by specifying two different literals: e.g., a clause $(x_3 \lor \overline{x}_5)$ is given by 3 -5. Return: For each formula, output $0$ if it cannot be satisfied or $1$ followed by a satisfying assignment otherwise.
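The construction in the later parts translates almost directly into code. Below is a minimal Python sketch (not a submission-ready solution for the Given/Return format) that builds the implication graph, computes strongly connected components with Tarjan's algorithm, and reads off satisfiability and an assignment; the literal encoding and helper names are my own choices.

```python
import sys
sys.setrecursionlimit(1 << 20)  # recursive Tarjan; acceptable for a sketch

def solve_2sat(n, clauses):
    """n variables (1..n); clauses are pairs of nonzero ints, e.g. (3, -5) for (x3 v ~x5).
    Returns None if unsatisfiable, else a list of booleans for x1..xn."""
    lit = lambda v: 2 * (abs(v) - 1) + (v < 0)   # literal -> node index; node ^ 1 is its negation
    graph = [[] for _ in range(2 * n)]
    for a, b in clauses:
        # (a v b) is equivalent to (~a => b) and (~b => a)
        graph[lit(a) ^ 1].append(lit(b))
        graph[lit(b) ^ 1].append(lit(a))

    # Tarjan's SCC: components are numbered in reverse topological order (sinks first).
    index = [None] * (2 * n); low = [0] * (2 * n); comp = [None] * (2 * n)
    stack, on_stack, counter, ncomp = [], [False] * (2 * n), [0], [0]

    def dfs(u):
        index[u] = low[u] = counter[0]; counter[0] += 1
        stack.append(u); on_stack[u] = True
        for v in graph[u]:
            if index[v] is None:
                dfs(v); low[u] = min(low[u], low[v])
            elif on_stack[v]:
                low[u] = min(low[u], index[v])
        if low[u] == index[u]:                   # u is the root of an SCC
            while True:
                v = stack.pop(); on_stack[v] = False; comp[v] = ncomp[0]
                if v == u:
                    break
            ncomp[0] += 1

    for u in range(2 * n):
        if index[u] is None:
            dfs(u)

    # x and ~x in the same SCC -> unsatisfiable; otherwise make each literal true
    # whose SCC is more "sink-ward" than its negation's (the hint's sink-first rule).
    if any(comp[2 * i] == comp[2 * i + 1] for i in range(n)):
        return None
    return [comp[2 * i] < comp[2 * i + 1] for i in range(n)]

# The example instance from the problem statement:
clauses = [(1, -2), (-1, -3), (1, 2), (-3, 4), (-1, 4)]
print(solve_2sat(4, clauses))  # a satisfying assignment, e.g. [True, False, False, True]
```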
Given $n$, the number of vertices, what is the number of labeled triangle-free graphs on $n$ vertices? There shouldn't be any sensible exact formula, as Ira Gessel says. But there are very good asymptotics and a structural description. An old result of Erdős, Kleitman and Rothschild is that almost all triangle-free graphs are bipartite. Proemel, Schickinger and Steger refined this to show that almost all triangle-free graphs which are not bipartite can be made bipartite by removing one edge; and almost all of the rest can be made bipartite by removing two edges; and so on. It's easy to count bipartite graphs (there are roughly $2^{n+n^2/4}$; you can easily enough find accurate asymptotics which depend on the parity of $n$) and similarly the Proemel-Schickinger-Steger classes are not too hard to enumerate asymptotically (I don't know of these being explicitly in the literature). There are also similar results if you fix the number of edges (which get less precise for sparse graphs). user36212 has already essentially answered the question; the state of the art has not yet been pointed out, though (by that I mean a mention of the latest relevant publications and of the hypergraph container method, which the OP might find useful being told about): due to work of Balogh and coworkers, the asymptotics are also known for the class of maximal triangle-free graphs (and these differ from the asymptotics for all triangle-free graphs). In [BP2014] József Balogh, Šárka Petříková, Number of maximal triangle-free graphs, Bull. London Math. Soc. 46(5) (2014), 1003-1006, it was shown that the lower bound of $2^{\frac18 n^2 + o(n^2)}$ for labelled maximal triangle-free graphs does give the correct asymptotics (to within this precision). Incidentally, while [BP2014] avers without reference that the lower bound was "known much earlier", the first public reference seems to be this answer of Douglas Zare, which gives a slightly more complicated construction than [BP2014], but in return gives a little more detail regarding the proof that the construction actually produces enough maximal triangle-free graphs. An exposition giving full detail does not exist as far as I know, though writing one would be easy: for the construction in [BP2014], that is, if one takes a perfect matching $M$ consisting of $n/4$ edges, adds a new independent set $S$ of $n/2$ vertices, and then decides independently for each $(u,vw)\in S\times M$ whether to join $u$ to $v$ or to $w$ (and does exactly one of those), then evidently in the labelled sense precisely $2^{\frac{n}{2}\cdot\frac{n}{4}}=2^{n^2/8}$ graphs are constructed, and what remains to be proved is that $2^{n^2/8 - r}$ of those, with $r\in o(n^2)$, are maximal triangle-free; this is an exercise. The proof of the upper bound in [BP2014] uses a result of Saxton and Thomason sometimes referred to as the method of hypergraph containers. One should note that, in view of the results from the 1970s already mentioned by user36212, [BP2014] implies $$\#\{\text{maximal triangle-free graphs}\}\quad\sim_{n\to\infty}\quad\sqrt{\#\{\text{triangle-free graphs}\}}.$$ Even more recently, in [BLPS2015] József Balogh, Hong Liu, Šárka Petříková, Maryam Sharifzadeh, The Typical Structure of Maximal Triangle-Free Graphs, Forum of Mathematics, Sigma (2015), Vol.
3, e20, 19 pages, two structural statements were proved, 'structural' in the sense that they shed some light on what most of the members of the set $\mathbb{K}$ of finite maximal triangle-free graphs 'look like': (0) Almost every member of $\mathbb{K}$ can be constructed according to what is arguably the most straightforward construction of a maximal triangle-free graph: start with a perfect matching $M$, add an independent set $S$, and then add edges between $V(M)$ and $S$ until the graph is maximal triangle-free. (Evidently, not every member of $\mathbb{K}$ can be so constructed: e.g., the Petersen graph cannot.) (1) By [BLPS2015, Lemma 2.4], for every $n\in\omega$ and every maximal triangle-free $n$-vertex graph $G$, $$\#\{\text{maximal independent sets in } G\} \;\leq\; 2^{\frac12 n - \frac{1}{25}\,\#\{\text{vertex-disjoint three-vertex paths in } G\}}.$$ It seems not to be known whether there is any triangle-free graph for which the bound in the above lemma is attained.
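Since the matching-plus-independent-set construction described above is so concrete, here is a minimal Python sketch of it; the function names are my own. It samples one labelled graph from the construction and verifies triangle-freeness, which holds for every choice of joins, while maximality only holds for almost all choices.

```python
import random

def bp2014_sample(n):
    """Sample a graph from the construction on n vertices (n divisible by 4):
    a perfect matching M on vertices 0..n/2-1 (n/4 edges) plus an independent set
    S = {n/2, ..., n-1}; each u in S joins exactly one endpoint of each matching edge."""
    assert n % 4 == 0
    matching = [(2 * i, 2 * i + 1) for i in range(n // 4)]
    edges = set(matching)
    for u in range(n // 2, n):
        for v, w in matching:
            edges.add((u, random.choice((v, w))))
    return n, edges

def is_triangle_free(n, edges):
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    # A triangle exists iff some edge's endpoints share a common neighbour.
    return all(not (adj[a] & adj[b]) for a, b in edges)

n, edges = bp2014_sample(16)
print(len(edges), is_triangle_free(n, edges))  # n/4 + (n/2)*(n/4) = 36 edges, True
```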
K-Means Clustering. Introduction. K-Means falls in the general category of clustering algorithms. Clustering is a form of unsupervised learning that tries to find structures in the data without using any labels or target values. Clustering partitions a set of observations into separate groupings such that an observation in a given group is more similar to another observation in the same group than to another observation in a different group. For more information, refer to “A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining” and “Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values” by Zhexue Huang. Defining a K-Means Model. model_id: (Optional) Specify a custom name for the model to use as a reference. By default, H2O automatically generates a destination key. training_frame: (Required) Specify the dataset used to build the model. NOTE: In Flow, if you click the Build a model button from the Parse cell, the training frame is entered automatically. validation_frame: (Optional) Specify the dataset used to evaluate the accuracy of the model. x: Specify a vector containing the names or indices of the predictor variables to use when building the model. If x is missing, then all columns are used. nfolds: Specify the number of folds for cross-validation. keep_cross_validation_predictions: Enable this option to keep the cross-validation predictions. keep_cross_validation_fold_assignment: Enable this option to preserve the cross-validation fold assignment. fold_assignment: (Applicable only if a value for nfolds is specified and fold_column is not specified) Specify the cross-validation fold assignment scheme. The available options are AUTO (which is Random), Random, Modulo, or Stratified (which will stratify the folds based on the response variable for classification problems). fold_column: Specify the column that contains the cross-validation fold index assignment per observation. ignored_columns: (Optional, Python and Flow only) Specify the column or columns to be excluded from the model. In Flow, click the checkbox next to a column name to add it to the list of columns excluded from the model. To add all columns, click the All button. To remove a column from the list of ignored columns, click the X next to the column name. To remove all columns from the list of ignored columns, click the None button. To search for a specific column, type the column name in the Search field above the column list. To only show columns with a specific percentage of missing values, specify the percentage in the Only show columns with more than 0% missing values field. To change the selections for the hidden columns, use the Select Visible or Deselect Visible buttons. ignore_const_cols: (Optional) Specify whether to ignore constant training columns, since no information can be gained from them. This option is enabled by default. score_each_iteration: (Optional) Specify whether to score during each iteration of the model training. k: Specify the number of clusters (groups of data) in a dataset that are similar to one another. estimate_k: Specify whether to estimate the number of clusters (<=k) iteratively (independent of the seed) and deterministically (beginning with k=1,2,3...). If enabled, for each k the estimate will go up to max_iterations. This option is disabled by default. user_points: Specify a dataframe, where each row represents an initial cluster center. max_iterations: Specify the maximum number of training iterations. The range is 0 to 1e6.
standardize: Enable this option to standardize the numeric columns to have a mean of zero and unit variance. Standardization is highly recommended; if you do not use standardization, the results can include components that are dominated by variables that appear to have larger variances relative to other attributes as a matter of scale, rather than true contribution. This option is enabled by default. Note: If standardization is enabled, each column of numeric data is centered and scaled so that its mean is zero and its standard deviation is one before the algorithm is used. At the end of the process, the cluster centers are reported on both the standardized scale (centers_std) and the de-standardized scale (centers). To de-standardize the centers, the algorithm multiplies by the original standard deviation of the corresponding column and adds the original mean. Enabling standardization is mathematically equivalent to using h2o.scale in R with center = TRUE and scale = TRUE on the numeric columns. seed: Specify the random number generator (RNG) seed for algorithm components dependent on randomization. The seed is consistent for each H2O instance so that you can create models with the same starting conditions in alternative configurations. init: Specify the initialization mode. The options are Random, Furthest, PlusPlus, or User. Random initialization randomly samples k rows of the training data as cluster centers. PlusPlus initialization chooses one initial center at random and weights the random selection of subsequent centers so that points furthest from the first center are more likely to be chosen. Furthest initialization chooses one initial center at random and then chooses the next center to be the point furthest away in terms of Euclidean distance. User initialization requires the corresponding user_points parameter. Note that the user-specified points dataset must have the same number of columns as the training dataset. Note: If PlusPlus is specified, the initial Y matrix is chosen by the final cluster centers from the K-Means PlusPlus algorithm. max_runtime_secs: Maximum allowed runtime in seconds for model training. Use 0 to disable. categorical_encoding: Specify one of the following encoding schemes for handling categorical features: auto or AUTO: Allow the algorithm to decide (default); in K-Means, the algorithm will automatically perform enum encoding. enum or Enum: 1 column per categorical feature. one_hot_explicit: N+1 new columns for categorical features with N levels. binary or Binary: No more than 32 columns per categorical feature. eigen or Eigen: k columns per categorical feature, keeping only projections of the one-hot-encoded matrix onto the k-dimensional eigen space. label_encoder or LabelEncoder: Convert every enum into the integer of its index (for example, level 0 -> 0, level 1 -> 1, etc.). export_checkpoints_dir: Specify a directory to which generated models will automatically be exported. Interpreting a K-Means Model. By default, the following output displays: A graph of the scoring history (number of iterations vs. within-cluster sum of squares); Output (model category, validation metrics if applicable, and centers std); Model Summary (number of clusters, number of categorical columns, number of iterations, total within sum of squares, total sum of squares, total between sum of squares;
note that Flow also returns the number of rows); Scoring history (duration, number of iterations, number of reassigned observations, number of within-cluster sum of squares); Training metrics (model name, checksum name, frame name, frame checksum name, description if applicable, model category, scoring time, predictions, MSE, RMSE, total within sum of squares, total sum of squares, total between sum of squares); Centroid statistics (centroid number, size, within-cluster sum of squares); Cluster means (centroid number, column). K-Means randomly chooses starting points and converges to a local minimum of centroids. The number of clusters is arbitrary and should be thought of as a tuning parameter. The output is a matrix of the cluster assignments and the coordinates of the cluster centers in terms of the originally chosen attributes. Your cluster centers may differ slightly from run to run as this problem is Non-deterministic Polynomial-time (NP)-hard. Estimating k in K-Means. The steps below describe the method that K-Means uses in order to estimate k. Beginning with one cluster, run K-Means to compute the centroid. Find the variable with the greatest range and split at the mean. Run K-Means on the two resulting clusters. Find the variable and cluster with the greatest range, and then split that cluster on the variable's mean. Run K-Means again, and so on. Continue running K-Means until a stopping criterion is met. H2O uses proportional reduction in error (\(PRE\)) to determine when to stop splitting. The \(PRE\) value is calculated based on the sum of squares within (\(SSW\)): \(PRE=\frac{(SSW\text{[before split]} - SSW\text{[after split]})} {SSW\text{[before split]}}\) H2O stops splitting when \(PRE\) falls below a \(threshold\), which is a function of the number of variables and the number of cases as described below: \(threshold\) takes the smaller of these two values: either 0.8 or \(\big[0.02 + \frac{10}{number\_of\_training\_rows} + \frac{2.5}{number\_of\_model\_features^{2}}\big]\). FAQ. How does the algorithm handle missing values during training? Missing values are automatically imputed by the column mean. K-Means also handles missing values by assuming that missing feature distance contributions are equal to the average of all other distance term contributions. How does the algorithm handle missing values during testing? Missing values are automatically imputed by the column mean of the training data. What happens when you try to predict on a categorical level not seen during training? An unseen categorical level in a row does not contribute to that row's prediction. This is because the unseen categorical level does not contribute to the distance comparison between clusters, and therefore does not factor in predicting the cluster to which that row belongs. Does it matter if the data is sorted? No. Should data be shuffled before training? No. What if there are a large number of columns? K-Means suffers from the curse of dimensionality: all points are roughly at the same distance from each other in high dimensions, making the algorithm less and less useful. What if there are a large number of categorical factor levels? This can be problematic, as categoricals are one-hot encoded on the fly, which can lead to the same problem as datasets with a large number of columns. K-Means Algorithm. The number of clusters \(K\) is user-defined and is determined a priori.
1. Choose \(K\) initial cluster centers \(m_{k}\) according to one of the following:
Random: Choose \(K\) centers from the set of \(N\) observations at random so that each observation has an equal chance of being chosen.
Furthest (default): Choose one center \(m_{1}\) at random. Calculate the distance between \(m_{1}\) and each of the remaining \(N-1\) observations \(x_{i}\): \(d(x_{i}, m_{1}) = \|x_{i}-m_{1}\|^2\). Choose \(m_{2}\) to be the \(x_{i}\) that maximizes \(d(x_{i}, m_{1})\). Repeat until \(K\) centers have been chosen.
PlusPlus: Choose one center \(m_{1}\) at random. Calculate the distance between \(m_{1}\) and each of the remaining \(N-1\) observations \(x_{i}\): \(d(x_{i}, m_{1}) = \|x_{i}-m_{1}\|^2\). Let \(P(i)\) be the probability of choosing \(x_{i}\) as \(m_{2}\). Weight \(P(i)\) by \(d(x_{i}, m_{1})\) so that those \(x_{i}\) furthest from \(m_{1}\) have a higher probability of being selected than those \(x_{i}\) close to \(m_{1}\). Choose the next center \(m_{2}\) by drawing at random according to this weighted probability distribution. Repeat until \(K\) centers have been chosen.
User initialization allows you to specify a file (using the user_points parameter) that includes a vector of initial cluster centers.
2. Once \(K\) initial centers have been chosen, calculate the distance between each observation \(x_{i}\) and each of the centers \(m_{1},\ldots,m_{K}\), where the distance is the squared Euclidean distance taken over the \(p\) parameters: \[d(x_{i}, m_{k})=\sum_{j=1}^{p}(x_{ij}-m_{kj})^2=\|x_{i}-m_{k}\|^2\]
3. Assign \(x_{i}\) to the cluster \(k\) defined by the \(m_{k}\) that minimizes \(d(x_{i}, m_{k})\).
4. When all observations \(x_{i}\) have been assigned to a cluster, calculate the mean of the points in each cluster: \[\bar{x}(k)=\left(\bar{x}_{k1},\ldots,\bar{x}_{kp}\right)\]
5. Set each \(\bar{x}(k)\) as the new cluster center \(m_{k}\).
6. Repeat steps 2 through 5 until the specified maximum number of iterations is reached or the cluster assignments of the \(x_{i}\) are stable.
References¶
Xiong, Hui, Junjie Wu, and Jian Chen. "K-means Clustering Versus Validation Measures: A Data-distribution Perspective." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39.2 (2009): 318-331.
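To make the iteration in steps 2 through 5 above concrete, here is a minimal NumPy sketch of the described procedure (Furthest initialization followed by the assign/re-center loop). This is my own illustration of the textbook algorithm, not H2O's implementation; all function and variable names are invented for the example.

```python
import numpy as np

def furthest_init(X, k, rng):
    """Step 1 (Furthest): pick one center at random, then repeatedly add the
    observation with the largest squared Euclidean distance to its nearest
    already-chosen center."""
    centers = [X[rng.integers(len(X))]]
    while len(centers) < k:
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers.append(X[np.argmax(d2)])
    return np.array(centers)

def kmeans(X, k, max_iterations=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = furthest_init(X, k, rng)
    for _ in range(max_iterations):
        # Steps 2-3: squared distances to every center, then nearest-center assignment.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Steps 4-5: each new center is the mean of the points assigned to it
        # (an empty cluster simply keeps its old center in this sketch).
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Step 6: stop when the centers (equivalently, the assignments) are stable.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Tiny usage example on synthetic 2-D data with three well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
centers, labels = kmeans(X, k=3)
print(np.round(centers, 2))
```

Because step 1 is randomized, different seeds can give slightly different final centers, which is the run-to-run variation mentioned above.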
Math Contents Introduction TeX was designed for ease of typesetting books that contained mathematics. As ConTeXt is built on top of TeX, it inherits all those features. In addition to these, ConTeXt adds lot of macros to make the typesetting of mathematics easier. For typesetting of mathematics follows different rules than that of normal text, TeX uses something called "math mode" where some characters get a different meaning to enable a simple syntax for complicated formulas. Simple Math Typesetting mathematics can be divided into two parts, inline math (mathematical formulas set within ordinary paragraphs as part of the text) and display math mathematics set on lines by themselves, often with equation numbers). Inline math consists of maths that is typed in a sentence. For example There are two ways of typing inline math. The TeX way is to surround what you want to type within $... $. Thus, the above will be typed as Pythagoras formula, stating $a^2 + b^2 = c^2$ was one of the first trigonometric results ConTeXt also provides an alternative way of typing the same result. Instead of dollars, you can write the material for maths inside \mathematics or \math (which is shorter). Thus, an alternate way to type the above is, Pythagoras formula, stating \mathematics{a^2 + b^2 = c^2} was one of the first trigonometric results Choose the method that suits your style. The famous result (once more) is given by \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: Numbering Formulae The famous result (once more) is given by \placeformula \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: The \placeformula command is optional, and produces the equation number; leaving it off produces an unnumbered equation. Changing format of numbers You can use \setupformulas to change the format of numbers. For example to get bold numbers inside square brackets use \setupformulas[left={[},right={]},numberstyle=bold] which gives To get alphabets instead of numbers, use \setupformulas[conversion=Character] which gives Not so Simple Maths ConTeXt's base mathematics support is built on the mathematics support in plain TeX, thus allowing quite complicated formulas. (There are also some additional macros, such as the \text command for text-mode notes within math.) For instance: A more complicated equation: \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \pmatrix{a_{11}&a_{12}&\ldots&a_{1n}\cr a_{21}&a_{22}&\ldots&a_{2n}\cr \vdots&\vdots&\ddots&\vdots\cr a_{n1}&a_{n2}&\ldots&a_{nn}\cr} \pmatrix{b_1 \cr b_2 \cr \vdots \cr b_n} + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n=1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula which produces Context provides a wrapper around tex \pmatrix. The above can be typeset in a contextish way as A more complicated equation: \definemathmatrix[pmatrix][left={\left(\,},right={\,\right)}] \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \startpmatrix \NC a_{11} \NC a_{12} \NC \ldots \NC a_{1n} \NR \NC a_{21} \NC a_{22} \NC \ldots \NC a_{2n} \NR \NC \vdots \NC \vdots \NC \ddots \NC \vdots \NR \NC a_{n1} \NC a_{n2} \NC \ldots \NC a_{nn} \NR \stoppmatrix \startpmatrix b_1 \NR b_2 \NR \vdots \NR b_n \NR \stoppmatrix + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n = 1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula Equation alignment is covered on a separate page. 
Sub-Formula Numbering As mentioned above, formulas can be numbered using the \placeformula command. This command (and the related \placesubformula command) has an optional argument which can be used to produce sub-formula numbering. For example:
Examples:
\placeformula{a}
\startformula c^2 = a^2 + b^2 \stopformula
\placesubformula{b}
\startformula c^2 = a^2 + b^2 \stopformula
What's going on here is simpler than it might appear at first glance. Both \placeformula and \placesubformula produce equation numbers with the optional tag added at the end; the sole difference is that the former increments the equation number first, while the latter does not (and thus can be used for the second and subsequent formulas that use the same formula number but presumably have different tags). This is sufficient for cases where the standard ConTeXt equation numbers suffice, and where only one equation number is needed per formula. However, there are many cases where this is insufficient, and \placeformula defines \formulanumber and \subformulanumber commands, which provide hooks to allow the use of ConTeXt-managed formula numbers with plain TeX equation numbering. These, when used within a formula, simply return the formula number in properly formatted form, as can be seen in this simple example with plain TeX's \eqno. Note that the optional tag is inherited from \placeformula.
More examples:
\placeformula{c}
\startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \eqno{\formulanumber} \stopformula
In order for this to work properly, we need to turn off ConTeXt's automatic formula number placement; thus the \let command that empties \doplaceformulanumber, which must be placed after the start of the formula. In many practical examples, however, this is not necessary; ConTeXt redefines \displaylines and \eqalignno to do this automatically. For more control over sub-formula numbering, \formulanumber and \subformulanumber have an optional argument parallel to that of \placeformula, as demonstrated in this use of plain TeX's \eqalignno, which places multiple equation numbers within one formula.
Yet more examples:
\placeformula
\startformula \eqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula
Note that both \formulanumber and \subformulanumber can be used within the same formula, and the formula number is incremented as expected. Also, if an optional argument is specified in both \placeformula and \formulanumber, the latter takes precedence.
More examples for a left-located equation number:
\setupformulas[location=left]
\placeformula{d}
\startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \leqno{\formulanumber} \stopformula
and
\placeformula
\startformula \leqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula
-- 23:46, 15 Aug 2005 (CEST) Prinse Wang
If you want a named subformula with a reference, see the solution proposed by Aditya Mahajan on the mailing-list [1] (2006-10-29). This feature should be added to the core eventually.
List of Formulas You can have a list of the formulas contained in a document by using \placenamedformula instead of \placeformula. Only the formulas written with \placenamedformula are put in the list, so that you can control precisely the content of the list.
Example:
\subsubject{List of Formulas}
\placelist[formula][criterium=text,alternative=c]
\subsubject{Formulas}
\placenamedformula[one]{First listed Formula}
\startformula a = 1 \stopformula
\endgraf
\placeformula
\startformula a = 2 \stopformula
\endgraf
\placenamedformula{Second listed Formula}{b}
\startformula a = 3 \stopformula
\endgraf
Gives:
Other Methods There are two different math modules on CTAN, nath and amsl, and there is a new math module in the distribution. ConTeXt now has built-in support for multiline equations. It is also possible to use most LaTeX equations in ConTeXt with a relatively small set of supporting definitions. The "native" ConTeXt way of math is MathML, an application of XML - rather verbose but mighty.
Related pages: Number Formatting, Math Fonts, Bold Math, Euler in ConTeXt (using the Euler math font, by Adam Lindsay), rsfs (using Ralph Smith's Formal Script), Product integral symbol.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
I think I can safely say that nobody understands quantum mechanics. ~ Richard Feynman.
Hilbert space
A Hilbert space is a generalization of Euclidean space. In a Hilbert space we can have an infinite number of dimensions. A vector in an infinite-dimensional Hilbert space is represented as an infinite vector \((x_1, x_2, \ldots)\). The inner product of two vectors is \(\langle x|y\rangle = \sum_{i=1}^{\infty} x_i y_i\). The norm is \( \|x\| = \langle x|x\rangle^{\frac{1}{2}} = \sqrt{\sum_{i=1}^{\infty} x_i^2} < \infty\). The tensor product of two vectors is \(\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \otimes \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} a_1 b_1 \\ a_1 b_2 \\ a_2 b_1 \\ a_2 b_2 \end{bmatrix}\).
Qubit
In quantum computing, a qubit or quantum bit is a unit of quantum information. It is the analog of a bit for quantum computation. A qubit is a two-state quantum-mechanical system. The state 0 can be represented as the vector \(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \equiv |0\rangle\) (horizontal polarization) and state 1 as the orthogonal vector \(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \equiv |1\rangle\) (vertical polarization). A generic qubit state corresponds to a vector \(\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 0 \\ 1 \end{bmatrix} \equiv \alpha\,|0\rangle + \beta\,|1\rangle \) in a two-dimensional Hilbert space, such that \(|\alpha|^2 + |\beta|^2 = 1\) (normalized vector). \(|\alpha|^2\) and \(|\beta|^2\) are the probabilities for the particle to be in state 0 and state 1. \(\alpha\) and \(\beta\) are complex numbers and are called amplitudes. Another way to write the state \(\psi\) of a qubit is by using cos and sin functions: \(|\psi\rangle = \cos(\theta)\,|0\rangle + \sin(\theta)\,|1\rangle\). More details about qubits can be found here: https://arxiv.org/pdf/1312.1463.pdf
Two qubits
A state in a combined two-qubit system corresponds to a vector in a four-dimensional Hilbert space: \(|\gamma\rangle = e\,|00\rangle + f\,|01\rangle + g\,|10\rangle + h\,|11\rangle = \begin{bmatrix} e \\ f \\ g \\ h \end{bmatrix} \). In general, the state vector of two combined systems is the tensor product of the state vectors of each system. Some expressions and their equivalents: \(|0\rangle\otimes|0\rangle\otimes|0\rangle \equiv |000\rangle\); \(|0\rangle\otimes|1\rangle\otimes|0\rangle \equiv |010\rangle\); \((\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle)\otimes(\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle) \equiv \frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle + \frac{1}{2}|11\rangle\).
Bloch sphere
A state \(\psi\) of a qubit can also be written as \(|\psi\rangle = \cos(\tfrac{\theta}{2})\,|0\rangle + \sin(\tfrac{\theta}{2})\exp(i\varphi)\,|1\rangle\), and it can be visualized as a vector of length 1 on the Bloch sphere. The angles \(\theta, \varphi\) correspond to the polar and azimuthal angles of spherical coordinates (\(\theta\in[0,\pi]\), \(\varphi\in[0,2\pi]\)).
Bra-ket notation
\(|01\rangle \equiv |0\rangle\otimes|1\rangle \equiv \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \) (column vector); \(\langle 01| \equiv \langle 0|\otimes\langle 1| \equiv \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} \) (row vector).
Quantum superposition
An arbitrary state of a qubit can be described as a superposition of the 0 and 1 states.
N qubits
A 3-qubit quantum system can store numbers such as 3 or 7: \(|011\rangle \equiv |3\rangle\), \(|111\rangle \equiv |7\rangle\). But it can also store the two numbers simultaneously, using a superposition state on the last qubit: \((\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle)\otimes|1\rangle\otimes|1\rangle \equiv \sqrt{\tfrac{1}{2}}\,(|011\rangle + |111\rangle)\). In general, a quantum computer with n qubits can be in an arbitrary superposition of up to \(2^n\) different states simultaneously.
Quantum gates
Hadamard H gate
The Hadamard gate acts on a single qubit. It maps the state \(|0\rangle\) to a superposition state with equal probabilities on states 0 and 1: \(\sqrt{\tfrac{1}{2}}|0\rangle + \sqrt{\tfrac{1}{2}}|1\rangle\).
The Hadamard matrix is defined as \(H = \frac {1}{\sqrt {2}} \begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix}\), and \(H \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \sqrt{\frac{1}{2}} \\ \sqrt{\frac{1}{2}} \end{bmatrix}\).
Pauli X gate (NOT gate)
The Pauli-X gate acts on a single qubit. It is the quantum equivalent of the NOT gate for classical computers. The Pauli X matrix is defined as \(X = \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}\), and \(X \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\).
Pauli Y gate
The Pauli-Y gate acts on a single qubit. It equates to a rotation around the Y-axis of the Bloch sphere by \(\pi\) radians. It maps \(|0\rangle\) to \(i\,|1\rangle\) and \(|1\rangle\) to \(-i\,|0\rangle\). The Pauli Y matrix is defined as \(Y = \begin{bmatrix}0 & -i \\ i & 0 \end{bmatrix}\), and \(Y \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ i \end{bmatrix}\).
Pauli Z gate
The Pauli-Z gate acts on a single qubit. It equates to a rotation around the Z-axis of the Bloch sphere by \(\pi\) radians. It leaves the basis state \(|0\rangle\) unchanged and maps \(|1\rangle\) to \(-|1\rangle\). The Pauli Z matrix is defined as \(Z = \begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}\), and \(Z \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\).
Other gates are described in this link: https://en.wikipedia.org/wiki/Quantum_gate
State measurement
State measurement can be done using polarizing filters. Filters transmit or block particles based on the angle of polarization.
IBM Q
IBM Q is a cloud quantum computing platform. Below is a simple experiment where we have two qubits. No transformation is applied to the first qubit. A Pauli X transformation is applied to the second qubit. As a result there is one possible state, \(|10\rangle\), with probability equal to 1.
Data representation
Given a vector \(x = (x_1, x_2, \ldots, x_n)\), the quantum system needed to store the vector x has \(\log_2(n)\) qubits. For example, the quantum system representation of the following vector requires only 3 qubits: \((0, 0.2, 0.2, 0, 0.1, 0.2, 0.1, 0.3) \equiv 0\,|000\rangle + 0.2\,|001\rangle + 0.2\,|010\rangle + 0\,|011\rangle + 0.1\,|100\rangle + 0.2\,|101\rangle + 0.1\,|110\rangle + 0.3\,|111\rangle\)
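To make the linear algebra above concrete, here is a small NumPy sketch (my own illustration, not tied to any particular quantum SDK) that builds the |0⟩ and |1⟩ column vectors, forms a two-qubit state with a tensor (Kronecker) product, and applies the Hadamard and Pauli-X matrices defined above.

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)   # |0>
ket1 = np.array([[0], [1]], dtype=complex)   # |1>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # Pauli-X (NOT)

# H|0> has equal probabilities 1/2 on |0> and |1>, as stated above.
plus = H @ ket0
print(np.abs(plus.ravel()) ** 2)         # [0.5 0.5]

# X|0> = |1>.
print((X @ ket0).ravel())                # [0, 1], i.e. |1>

# Two-qubit states via the tensor (Kronecker) product: |01> = |0> (x) |1>.
ket01 = np.kron(ket0, ket1)
print(ket01.ravel())                     # [0 1 0 0], matching the bra-ket section

# Applying X to one qubit of |00> gives a single definite basis state with
# probability 1. (Which printed component counts as the "first" or "second"
# qubit depends on the ordering convention; here the left Kronecker factor
# is the one flipped.)
state = np.kron(X @ ket0, ket0)
print(np.abs(state.ravel()) ** 2)        # probability 1 on the |10> component
```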
Exponential of Real Number is Strictly Positive
Theorem
Let $x$ be a real number. Let $\exp$ denote the (real) exponential function. Then:
$\forall x \in \R : \exp x > 0$
This proof assumes the series definition of $\exp$. That is, let:
$\displaystyle \exp x = \sum_{n \mathop = 0}^\infty \dfrac {x^n} {n!}$
First, suppose $0 < x$. Then for every $n$:
$0 < x^n$ by Power Function is Strictly Increasing over Positive Reals: Natural Exponent, hence $0 < \dfrac {x^n} {n!}$ by Real Number Ordering is Compatible with Multiplication, hence $0 < \displaystyle \sum_{n \mathop = 0}^\infty \frac{x^n}{n!}$ by Ordering of Series of Ordered Sequences, hence $0 < \exp x$ by the definition of $\exp$.
So $\exp$ is strictly positive on $\R_{>0}$.
From Exponential of Zero, $\exp 0 = 1$.
Finally, suppose that $x < 0$. Then $0 < -x$ by Order of Real Numbers is Dual of Order of their Negatives, hence $0 < \exp \paren {-x}$ from above, hence $0 < \dfrac 1 {\exp x}$ by Reciprocal of Real Exponential, hence $0 < \exp x$ by Ordering of Reciprocals.
So $\exp$ is strictly positive on $\R_{<0}$.
Hence the result.
$\blacksquare$
This proof assumes the limit definition of $\exp$. That is, let:
$\displaystyle \exp x = \lim_{n \mathop \to \infty} \map {f_n} x$
where $\map {f_n} x = \paren {1 + \dfrac x n}^n$.
First, fix $x \in \R$. Let $N = \ceiling {\size x}$, where $\ceiling {\, \cdot \,}$ denotes the ceiling function. Then:
$\displaystyle \exp x = \lim_{n \mathop \to \infty} \map {f_n} x = \lim_{n \mathop \to \infty} \map {f_{n + N} } x$ by Tail of Convergent Sequence, and this limit is $\ge \map {f_{n + N} } x$ by Exponential Sequence is Eventually Increasing and Limit of Bounded Convergent Sequence is Bounded, which is $> 0$ by the Corollary to Exponential Sequence is Eventually Increasing.
$\blacksquare$
This proof assumes the definition of $\exp x$ as the unique continuous extension of $e^x$. Since $e > 0$, the result follows immediately from Power of Positive Real Number is Positive over Rationals.
$\blacksquare$
This proof assumes the definition of $\exp$ as the inverse mapping of $\ln$, where $\ln$ denotes the natural logarithm. Recall that the domain of $\ln$ is $\R_{>0}$. That is, the image of $\exp$ is $\R_{>0}$. Hence the result.
$\blacksquare$
This proof assumes the definition of $\exp$ as the solution to an initial value problem. That is, suppose $\exp$ satisfies:
$(1): \quad D_x \exp x = \exp x$
$(2): \quad \exp 0 = 1$
on $\R$.
First it is noted that:
$\forall x \in \R: \exp x \ne 0$
$\Box$
Aiming for a contradiction, suppose that $\exists \alpha \in \R: \exp \alpha < 0$. Since $\exp 0 = 1$, it follows that $0 \in \left({\exp \alpha \,.\,.\, 1}\right)$. As $\exp$ is differentiable, it is continuous, so by the Intermediate Value Theorem:
$\exists \zeta \in \left({\alpha \,.\,.\, 0}\right): \exp \zeta = 0$
This contradicts $\exp \zeta \ne 0$. Hence $\exp x$ is never negative, and combined with $\exp x \ne 0$ this gives $\exp x > 0$ for all $x$.
$\blacksquare$
If they are asking this question, I would expect at this point you have learned at least one method to solving the equation $Ax = y$ with $A$ a matrix, and $x,y$ vectors (i.e. single column matrices). Here in this problem, they are taking it a step further to illustrate the problem where $y=0$ (the zero-vector). Those solutions, $x$, to the equation $Ax = 0$ are said to reside in the nullspace of $A$ (also commonly referred to as the kernel of $A$) which is commonly denoted as $N(A)$ (or as $ker(A)$ respectively). ( technically this is the right-nullspace, whereas the left nullspace is for the similar problem $xA = 0$). The nullspace itself satisfies the properties of being a vector space (meaning that for any $x_1, x_2\in N(A)$ you have $x_1+x_2\in N(A)$, you have $0\in N(A)$, and you have $\alpha x_1\in N(A)$ for any scalar $\alpha$). The nullspace is closely related to other subspaces, as shown in theorems like the fundamental theorem of linear algebra. As a vector space, the nullspace will have a minimal set of basis vectors, $b_1, b_2, \dots, b_k$ (where $k=\dim N(A)$) such that $N(A) = \text{span}\{b_1,b_2,\dots,b_k\}$. In other words, every vector $x\in N(A)$ can be written as a linear combination of the basis elements. ( note: the dimension of the nullspace is able to be any natural number up to the size of the matrix $A$ and is not necessarily always two) ( by linear combination we mean to say there exists some constants $\alpha_1,\dots,\alpha_k$ such that $x=\alpha_1 b_1 + \alpha_2 b_2 + \dots + \alpha_k b_k$. We call this a linear combination because the terms involved are all of the simple form $\alpha_i b_i$ and terms of a different form such as $b_1 b_2$, $b_1^2$, or $\sin(b_3)$ don't appear). This exercise is asking you to find a basis for the nullspace of your particular matrix $A$. One such method which should have been learned early on is to set up the matrix: $$\left[\begin{array}{cccc|c}1 & 2 & -1 & 0 & 0\\1 & 3 & -1 & -2 & 0\\-1 & 0 & 1 & -4 & 0\\2 & 3 & -2 & 2 & 0\end{array}\right]$$ and row reduce to get it into reduced row echelon form. You will arrive at a matrix looking something like: $$\left[\begin{array}{cccc|c}1 & 2 & 0 & 2 & 0\\0 & 0 & 1 & -2 & 0\\0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0\end{array}\right]$$ ( note this is not the actual result of row reduction in this case but an example of what it might look like) This tells you that a solution to the equation, $x = \begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ x_4\end{bmatrix}$, will necessarily have: $x_1 = -2x_2 - 2x_4$ and $x_3 = 2 x_4$, with $x_2$ and $x_4$ as free variables. As such any solution will be of the form, $x_4\cdot \begin{bmatrix} -2 \\ 0 \\ 2 \\ 1\end{bmatrix} + x_2\cdot \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0\end{bmatrix}$. These vectors, $\begin{bmatrix} -2 \\ 0 \\ 2 \\ 1\end{bmatrix}, \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0\end{bmatrix}$ are precisely our basis vectors, $b_1, b_2$ for the nullspace, and you will have that $Ab_1 = 0, Ab_2 = 0$ and $A(rb_1 + sb_2) = 0$ for any scalars $r$ and $s$. Again, to solve your exact problem as it is worded, set up your matrix with an extra column of zeroes on the right, row-reduce, and use that information to find in what way each entry of a solution vector depends on the other entries. ( it is useful to note that due to the fact that the righthand column will have all zeroes, after row reduction it will continue to have all zeroes. 
This is an important fact which is necessary for the nullspace to actually be a subspace and why finding the solutions to the equation $Ax=0$ is as important as it is)
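For readers who want to check the row reduction mechanically, here is a short SymPy sketch (my own addition, using the matrix from the system above) that computes the reduced row echelon form and a basis for the nullspace. Note that, as the answer warns, the rref shown above is only illustrative, so the basis SymPy returns for this particular $A$ need not match the example vectors.

```python
from sympy import Matrix

A = Matrix([[ 1, 2, -1,  0],
            [ 1, 3, -1, -2],
            [-1, 0,  1, -4],
            [ 2, 3, -2,  2]])

rref, pivot_cols = A.rref()     # reduced row echelon form and the pivot columns
basis = A.nullspace()           # list of basis vectors b_i with A*b_i = 0

print(rref)
print(pivot_cols)
for b in basis:
    print(b.T, (A * b).T)       # each product should be the zero vector
```

The free variables correspond to the non-pivot columns reported by rref(), exactly as in the hand computation described above.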
Second SSC CGL level Question Set, topic Trigonometry
This is the second question set of 10 practice problems for the SSC CGL exam on the topic Trigonometry. Students should complete this set within the prescribed time first, and only then refer to the corresponding solution set.
It is emphasized here that answering in an MCQ test is not at all the same as answering in a school test, where you need to derive the solution in perfectly elaborated steps. In an MCQ test, instead, you basically need to deduce the answer in the shortest possible time and select the right choice. Nobody will ask you what steps you followed.
Based on our analysis and experience we have seen that, for accurate and quick answering, the student
1. must have complete understanding of the basic concepts of the topics,
2. is adequately fast in mental math calculation,
3. should try to solve each problem using the most basic concepts in the specific topic area, and
4. does most of the deductive reasoning and calculation in his head rather than on paper.
Actual problem solving happens in items 3 and 4 above. But how to do that? You need to use your problem solving abilities only. There is no other recourse.
Recommendation: Before taking the test you may refer to the tutorial Basic and rich concepts in Trigonometry and its applications.
Second Question set - 10 problems for SSC CGL exam: topic Trigonometry - time 20 mins
Q1. If $0^0 < \theta < 90^0$ and $2sin^2\theta + 3cos\theta = 3$ then the value of $\theta$ is, $30^0$ $60^0$ $45^0$ $75^0$
Q2. If $sin\theta=\displaystyle\frac{a}{\sqrt{a^2 + b^2}}$, then the value of $cot\theta$ will be, $\displaystyle\frac{b}{a}$ $\displaystyle\frac{a}{b}$ $\displaystyle\frac{a}{b} + 1$ $\displaystyle\frac{b}{a} + 1$
Q3. If $tan\theta=\frac{3}{4}$ and $0<\theta<\frac{\pi}{2}$ and $25xsin^2\theta{cos\theta}=tan^2\theta$, then the value of $x$ is, $\frac{7}{64}$ $\frac{9}{64}$ $\frac{3}{64}$ $\frac{5}{64}$
Q4. If $xsin\theta - ycos\theta = \displaystyle\sqrt{x^2 + y^2}$ and $\displaystyle\frac{cos^2\theta}{a^2} + \frac{sin^2\theta}{b^2} = \frac{1}{x^2 + y^2}$ then, $\displaystyle\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ $\displaystyle\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$ $\displaystyle\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ $\displaystyle\frac{x^2}{b^2} - \frac{y^2}{a^2} = 1$
Q5. The value of $sin^21^0+sin^23^0+sin^25^0+...$ $...+sin^287^0+sin^289^0$ is, $22$ $22\frac{1}{2}$ $23$ $22\frac{1}{4}$
Q6. The minimum value of $cos^2\theta + sec^2\theta$ is, 0 1 2 3
Q7. If $cos\theta + sec\theta = 2$ $(0^0\leq{\theta}\leq{90^0})$ then the value of $cos^{10}\theta + sec^{11}\theta$ is, 0 1 2 -1
Q8. If $tan\theta=\frac{3}{4}$ and $\theta$ is acute then, $cosec\theta$ is equal to, $\frac{5}{3}$ $\frac{5}{4}$ $\frac{4}{3}$ $\frac{4}{5}$
Q9. If $\displaystyle\frac{sin\theta + cos\theta}{sin\theta - cos\theta} = 3$ then the numerical value of $sin^4\theta - cos^4\theta$ is, $\frac{1}{2}$ $\frac{2}{5}$ $\frac{3}{5}$ $\frac{4}{5}$
Q10. The minimum value of $2sin^2\theta + 3cos^2\theta$ is, 0 3 2 1
You will find the detailed conceptual solutions to these questions in SSC CGL level Solution Set 2 on Trigonometry. And video solutions below.
Note: You will observe that in many of these trigonometric problems, rich algebraic concepts and techniques are to be used. In fact, that is the norm: algebraic concepts are frequently used for elegant solutions of trigonometric problems.
Answers to the questions
Problem 1. Answer: Option b: $60^0$.
Problem 2. Answer: Option a: $\frac{b}{a}$.
Problem 3. Answer: Option d: $\frac{5}{64}$.
Problem 4.
Answer: Option b: $\frac{x^2}{b^2} + \frac{y^2}{a^2} = 1$. Problem 5. Answer: Option b: $22\frac{1}{2}$. Problem 6. Answer: Option c : 2. Problem 7. Answer: Option c: 2. Problem 8. Answer: Option a: $\frac{5}{3}$. Problem 9. Answer: Option c: $\frac{3}{5}$. Problem 10. Answer: Option c: 2. Resources on Trigonometry and related topics You may refer to our useful resources on Trigonometry and other related topics especially algebra. Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry SSC CGL level question and solution sets in Trigonometry SSC CGL level Question Set 2 on Trigonometry
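As a quick numerical self-check of a few of the listed answers (my own addition; the conceptual solutions are in the linked solution set), a few lines of Python suffice:

```python
import math

# Q1: 2*sin^2(theta) + 3*cos(theta) = 3 should hold at theta = 60 degrees (option b).
t = math.radians(60)
print(abs(2 * math.sin(t)**2 + 3 * math.cos(t) - 3) < 1e-12)    # True

# Q6: minimum of cos^2(theta) + sec^2(theta) -- listed answer 2 (option c).
vals = [math.cos(x)**2 + 1 / math.cos(x)**2
        for x in (i * 0.001 for i in range(1, 1571))]            # avoid cos = 0
print(round(min(vals), 6))                                       # -> 2.0

# Q10: minimum of 2*sin^2(theta) + 3*cos^2(theta) = 2 + cos^2(theta) -- answer 2 (option c).
vals = [2 * math.sin(x)**2 + 3 * math.cos(x)**2
        for x in (i * 0.001 for i in range(6284))]
print(round(min(vals), 6))                                       # -> 2.0
```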
Degree $n$: $22$
Transitive number $t$: $7$
Group: $C_{11}\times D_{11}$
Parity: $-1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,14,6,16,11,18,5,20,10,22,4,13,9,15,3,17,8,19,2,21,7,12), (1,14)(2,21)(3,17)(4,13)(5,20)(6,16)(7,12)(8,19)(9,15)(10,22)(11,18)
$|\Aut(F/K)|$: $11$
$|G/N|$ — Galois groups for stem field(s): 2: $C_2$; 11: $C_{11}$; 22: $D_{11}$, 22T1. Resolvents shown for degrees $\leq 47$.
Degree 2: $C_2$; Degree 11: None.
22T7 x 4. Siblings are shown with degree $\leq 47$.
A number field with this Galois group has no arithmetically equivalent fields.
There are 77 conjugacy classes of elements. Data not shown.
Order: $242 = 2 \cdot 11^{2}$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: [242, 3]
Character table: Data not available.
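One way to sanity-check this record (a sketch of my own, not part of the database page) is to feed the two listed generators to SymPy's permutation-group machinery and confirm the stated order $242 = 2 \cdot 11^2$ and the abelian/solvable flags:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Generators as listed above, acting on points 1..22 (index 0 is unused and fixed).
g1 = Permutation([[1, 14, 6, 16, 11, 18, 5, 20, 10, 22, 4, 13, 9, 15,
                   3, 17, 8, 19, 2, 21, 7, 12]], size=23)
g2 = Permutation([[1, 14], [2, 21], [3, 17], [4, 13], [5, 20], [6, 16],
                  [7, 12], [8, 19], [9, 15], [10, 22], [11, 18]], size=23)

G = PermutationGroup([g1, g2])
print(G.order())        # expected 242, per the data above
print(G.is_abelian)     # expected False ("Abelian: No")
print(G.is_solvable)    # expected True  ("Solvable: Yes")
```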
Free probability on $C^{*}$-algebras induced by Hecke algebras over primes
1. St. Ambrose Univ., Dept. of Math. & Stat., 421 Ambrose Hall, 518 W. Locust St., Davenport, IA 52803, USA
2. Univ. of Iowa, Dept. of Math., 14 McLean Hall, Iowa City, IA 52242, USA
In this paper, we establish free-probabilistic models $\left( \mathcal{H}(G_{p}),\ \psi _{p}\right)$ on Hecke algebras $\mathcal{H}(G_{p})$, and construct Hilbert-space representations of $\mathcal{H}(G_{p})$, preserving free-probabilistic information from $\left( \mathcal{H}(G_{p}),\ \psi _{p}\right)$, for primes $p$. From such free-probabilistic structures with representations, we study spectral properties of operators in $C^{*}$-algebras generated by $\left\{ \mathcal{H}(G_{p})\right\}_{p:\,\text{primes}}$, via their free distributions.
Mathematics Subject Classification: Primary: 46L10, 46L53, 46L54, 47L55; Secondary: 05E15, 11G15, 11R47, 11R56.
Citation: Ilwoo Cho, Palle Jorgensen. Free probability on $C^{*}$-algebras induced by Hecke algebras over primes. Discrete & Continuous Dynamical Systems - S, 2019, 12 (8): 2221-2252. doi: 10.3934/dcdss.2019143
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$ Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. 
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T"$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $orb(u) \neq orb (v)$ and then use the fact that the spanning tree contains exactly vertex from each orbit. But I can't seem to prove that orb(u) \neq orb(v)... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$) Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. 
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate. Let $X_{i},....,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}([\frac{(\frac{X_{1}+...+X_{n}}{n})^2}{2}] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually was too bad. 
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$ itself. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say
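The subgroup claims in this exchange are easy to check by machine. Below is a small SymPy sketch (my own, with the generator choices taken from the discussion above, plus $A_4$ for order 12) confirming that $S_4$ has subgroups of orders 2, 3, 4, 6, 8, 12 and 24:

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

def order_of(*gens):
    return PermutationGroup(list(gens)).order()

a = Permutation([[0, 1]], size=4)        # the 2-cycle (1 2), written on {0,1,2,3}
b = Permutation([[0, 1, 2]], size=4)     # the 3-cycle (1 2 3)
c = Permutation([[0, 1, 2, 3]], size=4)  # the 4-cycle (1 2 3 4)
d = Permutation([[0, 2]], size=4)        # (1 3), to build a dihedral Sylow 2-subgroup

print(order_of(a))        # 2
print(order_of(b))        # 3
print(order_of(c))        # 4
print(order_of(a, b))     # 6, the <(1 2), (1 2 3)> = S_3 from the discussion
print(order_of(c, d))     # 8, a dihedral Sylow 2-subgroup
print(AlternatingGroup(4).order())   # 12
print(SymmetricGroup(4).order())     # 24
```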
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724 Like this: [/url][/wiki][/url] [/wiki] [/url][/code] Many different combinations work. To reproduce, paste the above into a new post and click "preview". x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X I wonder if this works on other sites? (Remove/Change ) Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Related:[url=http://a.com/] [/url][/wiki] My signature gets quoted. This too. And my avatar gets moved down Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote: Related: [ Code: Select all [wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki] ] My signature gets quoted. This too. And my avatar gets moved down It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places. Here, I'll fix it: [/wiki][url]conwaylife.com[/url] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: It appears I fixed @Saka's open <div>. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce toroidalet Posts: 1018 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? The post before the one you quoted. The code was: Code: Select all [wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. 
Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones eith unique effects popped up. Appart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these: -Adian Mode: A combination of url, wiki, and code tags that leaves the page shaterred in pieces. Future replies are large and centered, making the page look somewhat old-ish. -Saka Quote: A combination of a dilluted Aidan Mode and quotes, leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages. -Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes bottom bar to the side. Signature gets coded. -Aidan Superqoute: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by software. Leaves the rest of the page white and quotes. Replies and post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage. Last edited by Saka on June 21st, 2017, 10:51 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA I actually laughed at the terminology. "IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways, [/wiki] I like making rules Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it... -Fluffykitty Pusher Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] Screenshot? New one yay. -Adian Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side. 
Last edited by Saka on June 21st, 2017, 10:20 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA Someone should create a phpBB-based forum so we can experiment without mucking about with the forums. This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X The testing grounds have now become similar to actual military testing grounds. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm We also have this thread. Also, is now officialy the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you: Code: Select all [wiki][viewer][/wiki][viewer][/viewer][/viewer] Last edited by fluffykitty on June 22nd, 2017, 11:50 am, edited 1 time in total. I like making rules 83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact: oh my, i want to quote somebody and now i have to look in a diffrent scrollbar to type this. intersting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar. EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale. Code: Select all x = 8, y = 10, rule = B3/S23 3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo! No football of any dui mauris said that. 
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki] This dosen't do good things Edit: Code: Select all [wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url] Neither does this ^ What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer] I get about five different scroll bars when I preview this Edit: Code: Select all [viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki] Makes a really long post and makes the rest of the thread large and centred Edit 2: Code: Select all [url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer] Just don't do this (Sorry I'm having a lot of fun with this) ^ What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy Here's another small one: Code: Select all [url][wiki][viewer][/wiki][/url][/viewer] fg Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: Code: Select all [wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki] [/code] Is a pinch broken Doesn’t this thread belong in the sandbox? I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" 77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm Well, it started out as a thread to documents "Bugs & Errors" in the forum's code... Moosey Posts: 2483 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: 77topaz wrote:Well, it started out as a thread to documents "Bugs & Errors" in the forum's code... Now it's half an aidan mode testing grounds. Also, fluffykitty's messmaker: Code: Select all [viewer][wiki][*][/viewer][/*][/wiki][/quote] I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf
Suspension Bridges Mountain villages like to attract tourists by building suspension bridges, such as the one depicted here in the Harz Mountains in Germany. These bridges allow adventurously-inclined people to seek their thrills by crossing over deep gorges. To make sure that everyone gets just the right amount of excitement, the sag at the deepest point of the bridge should be significant relative to the distance the bridge covers. Given the distance between the anchor points where the bridge is attached, and given a desired amount of sag, compute how long each of the cables holding the suspension bridge needs to be! To help you solve this task, here is some background: A free-hanging suspension bridge will take on the form of a catenary curve ( catena is Latin for chain), just like a free-hanging chain between two poles. Given the horizontal distance $d$ between two anchor points and the desired amount $s$ the cable is sagging in the center, there exists a positive parameter $a$ such that $a + s = a \cdot \cosh \left(\frac{d}{2 a}\right)$. The length of the cable is then given by $\ell (a,d) = 2 a \cdot \sinh \left(\frac{d}{2 a}\right)$. The functions $\sinh $ and $\cosh $ denote the hyperbolic sine and hyperbolic cosine, respectively, which are defined as follows:\begin{align*} \sinh x & = \frac{e^ x - e^{-x}}{2} & \cosh x & = \frac{e^ x + e^{-x}}{2} \end{align*} Input The input consists of a single test case with two space-separated integers $d$ and $s$ given on a single line such that $0 < d \le 1\, 000$ and $0 < s \le 1\, 000$. The number $d$ denotes the distance between the anchor points and $s$ is the desired sag at the center of the bridge. More precisely, the “sag” here is the vertical distance between the cable’s lowest point in the center and the horizontal line formed by the $2$ anchor points on either side of the gorge. Output Output the length of cable needed to cover the distance between the anchor points to achieve the desired sag. Your answer should be correct within an absolute error of $10^{-4}$. Sample Input 1 Sample Output 1 400 40 410.474747252
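A straightforward way to attack this is to find the parameter a numerically from the sag equation and then plug it into the length formula. The following Python sketch (my own solution outline, not an official one) uses bisection: f(a) = a·cosh(d/(2a)) − a − s is huge and positive for small a and tends to −s for large a, so a sign change is easy to bracket.

```python
import math

def cable_length(d, s):
    """Solve a + s = a*cosh(d/(2a)) for a, then return l(a, d) = 2a*sinh(d/(2a))."""
    f = lambda a: a * math.cosh(d / (2 * a)) - a - s
    # Lower bracket: small enough that f(lo) > 0, but with cosh argument <= 700
    # so math.cosh does not overflow for d up to 1000.
    lo, hi = d / 1400.0, 1.0
    while f(hi) > 0:          # grow the upper bracket until the sign flips
        hi *= 2
    for _ in range(100):      # bisection; far more precision than the 1e-4 needed
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    a = (lo + hi) / 2
    return 2 * a * math.sinh(d / (2 * a))

d, s = map(int, input().split())
print(cable_length(d, s))
```

On the sample input 400 40 this prints approximately 410.4747, matching the sample output within the allowed absolute error.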
According to my lecture, Fuzzy c-Means tries to minimize the following objective function: $$J(X,B,U)=\sum_{i=1}^c\sum_{j=1}^n u_{ij}^w \, d^2(\vec{\beta_i},\vec{x_j})$$ where $X$ are the data points, $B$ are the cluster 'prototypes', and $U$ is the matrix containing the fuzzy membership degrees. $d$ is a distance measure. A constraint is that the membership degrees of a single data point w.r.t. all clusters sum to $1$: $\sum_{i=1}^c u_{ij}=1$ for every $j$. Now in the first equation, what is the role of the exponent $w$? I read that one could use any convex function instead of $(\cdot)^w$. But why use anything at all? Why not just use the membership degrees themselves? My lecture says using the fuzzifier is necessary but doesn't explain why.
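For reference, here is a small NumPy sketch (my own illustration; the array names are mine) of where the exponent enters in practice: the standard alternating-optimization membership update and the objective $J$, assuming squared Euclidean distances:

import numpy as np

def fcm_memberships(X, B, w):
    # X: (n, p) data points, B: (c, p) prototypes, w > 1: fuzzifier.
    # Standard update: u_ij = 1 / sum_k (d_ij / d_kj)^(2 / (w - 1)),
    # written here with squared distances d2, i.e. exponent 1 / (w - 1).
    d2 = ((B[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)   # (c, n)
    d2 = np.maximum(d2, 1e-12)                                # avoid division by zero
    ratio = d2[:, None, :] / d2[None, :, :]                   # ratio[i, k, j] = d2[i, j] / d2[k, j]
    return 1.0 / (ratio ** (1.0 / (w - 1.0))).sum(axis=1)     # columns sum to 1

def fcm_objective(X, B, U, w):
    # J(X, B, U) = sum_i sum_j u_ij^w * d^2(beta_i, x_j)
    d2 = ((B[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return float((U ** w * d2).sum())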
I realize turbines are more efficient than piston engines, but if that were true, then why don't turbo-props reach jet speeds?

The thrust of a propeller is proportional to the inverse of airspeed, while the thrust of a pure turbojet is roughly constant over airspeed in the subsonic region. This means that two airplanes with the same static thrust, one propeller-powered and the other jet-powered, will reach very different top speeds.

A piston engine produces a constant torque $\tau$, independent of speed. This torque drives the propeller which produces work $W$ per unit of time on the air passing through it. This work per unit of time is power $P$ and proportional to the product of torque and propeller speed $\omega$, which is again constant over airspeed $v$. The power to propel the aircraft is the product of thrust and airspeed and equals engine power times propeller efficiency $\eta_{Prop}$. When airspeed goes up, thrust must go down proportionally for power to stay constant. $$P = \tau\cdot\omega = \frac{T\cdot v}{\eta_{Prop}}$$ $$\Rightarrow T \varpropto \frac{1}{v}$$

Turbojets, on the other hand, profit from flight speed because the intake pre-compresses the air when it slows down ahead of and in the intake. This pre-compression lifts the pressure level of the whole engine, so it sees an increased mass flow with increasing speed, producing higher thrust. This effect by itself would increase thrust in proportion with the square of flight speed, but the same effect which reduces the thrust of a propeller acts on a turbojet as well. This effect, however, is less pronounced because the jet accelerates less air to a higher speed, and both roughly cancel each other. Turbofans are more similar to propeller engines, so here the thrust goes down over airspeed, and more so for higher bypass ratios. Turboprops are even closer to piston-powered propellers, so their thrust drops even more with increasing speed. This means that the speed at which drag equals thrust drops when you move from turbojets to turbofans, and further to turboprops, and is lowest for piston-powered propeller aircraft.

I'd like to visually illustrate why propellers aren't suited for high-speed flight. First, let's consider a variable-pitch propeller mounted on a capable engine. If you study the two sketches I drew above, you'll notice that as the horizontal speed increases (right image), the thrust direction tilts away from the forward direction, despite the increased airflow and local lift on the blade. This is what many textbooks erroneously call "taking a bigger bite", but as you can see, it's the same bite (angle of attack). So the faster you fly a propeller, the less forward thrust you get, making you unable to go any faster. The graph above shows $T_A$ (thrust available) for a propeller decreasing the faster the plane flies.

Another factor is that when you fly as fast as the propeller's tip speed, say Mach 0.75 (a Boeing 737 Classic in cruise), the airfoil of each blade will be flying much faster than your forward speed (compare the hypotenuse to the vertical/horizontal sides in the topmost image). The airflow around the airfoil will be supersonic. Straight wings (and blades) don't do well at supersonic speeds (too much nasty drag).

For a fixed-pitch propeller, say we dropped a plane with a fixed-pitch propeller from a mothership at high speed: the airflow will hit the top surface of the propeller blades, creating backward thrust and effectively propelling the plane backwards.
It's best when the plane's forward speed keeps the propeller disk from approaching the sound barrier and doesn't result in the thrust being directed too much away from the forward direction.

For a turboprop the limiting factor, to some extent, is the efficiency of the propeller, not the engine. There have been a bunch of attempts at very fast turboprops over the years, with the fastest most likely being the XF-84H Thunderscreech, but in all of these cases the propeller becomes the limiting factor. First off, to get going in the speed range you mention with a usable-size propeller, your tips will go supersonic, which presents problems of its own. The XF-84H suffered from this, and you can read up on why that was a noise issue. On top of the noise issue, it simply becomes more efficient to use a jet at those speeds, so in practice it's the power plant of choice. Many of today's top-performing turboprops easily encroach on the speeds of light jets.
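To make the thrust-lapse argument above concrete, here is a tiny Python sketch with made-up numbers (my own assumptions, not figures from the answers): a propeller's available thrust T = eta * P / v falls with speed, while a turbojet's thrust is taken as constant, so the speed at which thrust meets a simple D = k * v^2 drag curve is much lower for the propeller.

import numpy as np

# Illustrative numbers only (assumptions, not data from the answers above).
eta_prop = 0.8          # propeller efficiency
shaft_power = 2.0e6     # W, turboprop shaft power
jet_thrust = 20.0e3     # N, turbojet thrust, taken as constant with speed
k_drag = 0.35           # N/(m/s)^2, crude parasite-drag coefficient in D = k*v^2

v = np.linspace(40.0, 350.0, 1000)          # airspeed, m/s
thrust_prop = eta_prop * shaft_power / v    # T = eta * P / v  (falls off with speed)
drag = k_drag * v**2

# Top speed is roughly where thrust available meets drag required.
v_max_prop = v[np.argmin(np.abs(thrust_prop - drag))]
v_max_jet = v[np.argmin(np.abs(jet_thrust - drag))]
print(f"propeller-limited top speed ~ {v_max_prop:.0f} m/s")
print(f"jet-limited top speed       ~ {v_max_jet:.0f} m/s")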
Al-Zamil, Qusay and Montaldi, James (2010) Witten-Hodge theory for manifolds with boundary. [MIMS Preprint]
PDF: Witten-Hodge.pdf (133kB)

Abstract
We consider a compact, oriented, smooth Riemannian manifold $M$ (with or without boundary) and we suppose $G$ is a torus acting by isometries on $M$. Given $X$ in the Lie algebra and corresponding vector field $X_M$ on $M$, one defines Witten's inhomogeneous operator $\mathrm{d}_{X_M} = \mathrm{d}+\iota_{X_M}\colon \Omega_G^\pm \to\Omega_G^\mp$ (even/odd invariant forms on $M$). Witten \cite{Witten} showed that the resulting cohomology classes have $X_M$-harmonic representatives (forms in the null space of $\Delta_{X_M} = (\mathrm{d}_{X_M}+\delta_{X_M})^2$), and the cohomology groups are isomorphic to the ordinary de Rham cohomology groups of the fixed point set. Our principal purpose is to extend these results to manifolds with boundary. In particular, we define relative (to the boundary) and absolute versions of the $X_M$-cohomology and show the classes have representative $X_M$-harmonic fields with appropriate boundary conditions. To do this we present the relevant version of the Hodge-Morrey-Friedrichs decomposition theorem for invariant forms in terms of the operator $\mathrm{d}_{X_M}$ and its adjoint $\delta_{X_M}$; the proof involves showing that certain boundary value problems are elliptic. We also elucidate the connection between the $X_M$-cohomology groups and the relative and absolute equivariant cohomology, following work of Atiyah and Bott \cite{AB}. This connection is then exploited to show that every harmonic field with appropriate boundary conditions on $F$ has a unique extension to an $X_M$-harmonic field on $M$, with corresponding boundary conditions.

Item Type: MIMS Preprint
Uncontrolled Keywords: Hodge theory, manifolds with boundary, equivariant cohomology, Killing vector fields
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 35 Partial differential equations; > 53 Differential geometry; > 55 Algebraic topology; > 57 Manifolds and cell complexes
Depositing User: Dr James Montaldi
Date Deposited: 11 Apr 2010
Last Modified: 08 Nov 2017 18:18
URI: http://eprints.maths.manchester.ac.uk/id/eprint/1433
A few simple steps to the solution compared to the conventional complex procedure

At high school level we often find math problems solved following a long series of steps. This is what we call the conventional approach to solving problems. This approach not only involves a large number of steps; in most cases the steps themselves introduce a higher level of complexity and increase the chances of error. More importantly, the conventional, inefficient problem solving approach curbs the out-of-the-box thinking skills of the students.

While dealing with school-level but complex Trigonometry problems in a competitive exam scenario, the student is forced to solve such a problem in a minute, not in many minutes. The pressure to find the solution along the shortest path gains immense importance for successful performance in such tests as SSC CGL. Though at school level all steps to the solution are to be written down, that does not take up most of the time; the bulk of the time is actually consumed in inefficient problem solving, that is, in finding the path and steps to the solution.

We will take up a few apparently difficult Trigonometry problems from the SSC CGL test level that actually belong to school level and appear in MCQ form in the competitive test scenario. The thinking process that we will highlight here through the solution of the problems will help SSC CGL aspirants as well as high school students to solve problems efficiently in a few steps like a problem solver, using deductive reasoning, powerful strategies, techniques and basic subject concepts, rather than being constrained by the long and costly routine approach.

Problem example 1

If $\sec\theta = x + \displaystyle\frac{1}{4x}$, where $0^{\circ} \lt \theta \lt 90^{\circ}$, then $\sec\theta + \tan\theta$ is,
$\displaystyle\frac{x}{2}$
$\displaystyle\frac{1}{2x}$
$x$
$2x$

First try to solve this problem yourself and only then go ahead. You might be able to reach the elegant solution to this problem yourself.

Efficient solution in a few steps

Deductive reasoning: First stage analysis: one must analyze the problem first. We know that the basic relation between $\sec\theta$ and $\tan\theta$, $\sec^2\theta = \tan^2\theta + 1$, will lead us towards the solution whenever $\sec\theta$ and $\tan\theta$ appear together in a problem. In our problem, though the two terms appear together, they are in unit power form, and so the given expression must be squared, the resulting expression simplified using the basic relationship mentioned above, and finally a square root taken to arrive at the desired result. This is what we call deductive reasoning based on problem analysis, using the subject concept and the form of the problem. The outcome of this analysis is a clear pathway to the solution. Think it over. Do you find this thread of reasoning sensible? Can you find any flaw in it?

So we decide that we should square up the given equation first.

Second clue - use of the principle of inverses

We find a special property in the given expression - it has an $x$ and also an inverse of $x$. If we square up this expression, the middle term won't have any $x$ in it. This property in general helps to reach the solution quickly in so many problems that we have named it the powerful problem solving principle of inverses and repeatedly used it with great benefits. You may refer to a detailed treatment of its use here. So in the first stage action, we will first square up the given equation and use the principle of inverses to simplify further. Let's see how.
First stage action: We have the given expression, $\sec\theta = x + \displaystyle\frac{1}{4x}$.

Squaring both sides, $\sec^2\theta = x^2 + \displaystyle\frac{1}{16x^2} + \frac{1}{2}$.

By the grace of the principle of inverses, the middle term on the RHS has turned into a simple fraction without any trace of $x$. Continuing further,

$\tan^2\theta + 1 = x^2 + \displaystyle\frac{1}{16x^2} + \displaystyle\frac{1}{2}$

Now we will use another great principle, the principle of collection of friendly terms. In the given expression we spot the possibility of significant gains if we transfer the 1 from the LHS to the RHS, so that the middle term changes its sign and forms the expression of another square:

$\tan^2\theta = x^2 + \displaystyle\frac{1}{16x^2} - \displaystyle\frac{1}{2} = \left(x - \frac{1}{4x}\right)^2$,

Or, $\tan\theta = x - \displaystyle\frac{1}{4x}$, as $\tan\theta$ can't be negative as per the given condition.

Summing it up now with $\sec\theta$ from the given expression, $\sec\theta + \tan\theta = 2x$.

Answer: d: $2x$.

Conventional solution

We have the given expression, $\sec\theta = x + \displaystyle\frac{1}{4x}$.

Squaring up the two sides of the equation we have,

$\sec^2\theta = \displaystyle\frac{(4x^2 + 1)^2}{(4x)^2}$,

Or, $\sec^2\theta - 1 = \displaystyle\frac{(4x^2 + 1)^2 - (4x)^2}{(4x)^2}$,

Or, $\tan^2\theta = \displaystyle\frac{16x^4 + 8x^2 + 1 - 16x^2}{(4x)^2}$

$=\displaystyle\frac{16x^4 - 8x^2 + 1}{(4x)^2}$

$=\displaystyle\frac{(4x^2 -1)^2}{(4x)^2}$.

So, $\tan\theta = \displaystyle\frac{4x^2 -1}{4x} = x - \frac{1}{4x}$ and finally, $\sec\theta + \tan\theta = 2x$.

Compare the two solutions yourself regarding ease, complexity, chances of error and time taken to reach the solution.

Problem example 2

If $tan\theta = \displaystyle\frac{1}{\sqrt{11}}$, and $0 \lt {\theta} \lt \displaystyle\frac{{\pi}}{2}$, then the value of $\displaystyle\frac{cosec^2\theta - sec^2\theta}{{cosec^2\theta} + sec^2\theta}$ is,
$\displaystyle\frac{3}{4}$
$\displaystyle\frac{6}{7}$
$\displaystyle\frac{4}{5}$
$\displaystyle\frac{5}{6}$

Efficient solution in a few steps

Problem analysis

By looking at the problem, we recognize the target expression to be absolutely ready to be subjected to the well-known algebraic technique of componendo dividendo. Though the name is a bit awkward, the concept is rather easy. We will apply the concept but not the formula here. We have a strong aversion to using formulae without using our brains.

The target expression,

$E = \displaystyle\frac{cosec^2\theta - sec^2\theta}{{cosec^2\theta} + sec^2\theta}$

We add 1 to both sides and simplify,

$E + 1 = \displaystyle\frac{cosec^2\theta - sec^2\theta}{{cosec^2\theta} + sec^2\theta} + 1$

$= \displaystyle\frac{2cosec^2\theta}{{cosec^2\theta} + sec^2\theta}$.

A second time we subtract 1 from both sides of the original equation,

$E - 1 = \displaystyle\frac{cosec^2\theta - sec^2\theta}{{cosec^2\theta} + sec^2\theta} - 1$

$=\displaystyle\frac{-2sec^2\theta}{{cosec^2\theta} + sec^2\theta}$.

Dividing the earlier result for $E + 1$ by this result,

$\displaystyle\frac{E + 1}{E - 1} = \frac{cosec^2\theta}{-sec^2\theta} = -cot^2\theta = -11$.

Adding and subtracting 1 on both sides we have,

$\displaystyle\frac{2E}{E - 1} = -10$, and $\displaystyle\frac{2}{E - 1} = -12$.

Taking the ratio, $E = \displaystyle\frac{10}{12} = \displaystyle\frac{5}{6}$

Answer: d: $\displaystyle\frac{5}{6}$.
Cumbersome solution

In the most cumbersome solution, you can expand both terms, $cosec^2\theta = 1 + cot^2\theta$ and $sec^2\theta = 1 + tan^2\theta$, and substitute in the already complex target expression to get the target only in terms of $tan^2\theta$ and $cot^2\theta$, the values of both of which are known. We can say of this approach that you don't have to think at all; you just have to go on deducing mechanically.

Judge and choose yourself. Always think: is there any shorter, better way to the solution? And use your brains more than your factual memory and mass of mechanical routine procedures.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra.
Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry
How to solve difficult SSC CGL level School math problems in a few simple steps, Trigonometry 3

A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to an MCQ-type question, and secondly, the same set of problem solving reasoning and techniques has been used for any efficient Trigonometry problem solving.
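As a quick numeric sanity check of both results above (my own addition, using an arbitrary test value of $x$ in the first problem):

import math

# Problem example 1: if sec(theta) = x + 1/(4x) with 0 < theta < 90 deg,
# then sec(theta) + tan(theta) should equal 2x.
x = 1.3                                   # arbitrary test value with x - 1/(4x) > 0
theta = math.acos(1.0 / (x + 1.0 / (4 * x)))
print(1.0 / math.cos(theta) + math.tan(theta), 2 * x)   # both ~2.6

# Problem example 2: if tan(theta) = 1/sqrt(11), the ratio
# (cosec^2 - sec^2) / (cosec^2 + sec^2) should equal 5/6.
theta = math.atan(1.0 / math.sqrt(11))
cosec2 = 1.0 / math.sin(theta) ** 2
sec2 = 1.0 / math.cos(theta) ** 2
print((cosec2 - sec2) / (cosec2 + sec2), 5 / 6)          # both ~0.8333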
I am looking for a computationally cheap way to compute $x$ such that $$(L L^T + \mu^2 I)x = y$$ where $L \in \mathbb{R}^{n \times n}$ is a lower triangular, positive definite matrix (with some very small eigenvalues), and $y \in \mathbb{R}^n$ and $\mu \in \mathbb{R}$ are known. If necessary, I can assume that $$\mu \ll 1,$$ but $\mu^2$ is larger than the smallest eigenvalue of $LL^T$. Basically, I would like to make the most of my knowledge of the Cholesky decomposition $L L^T$. Eventually, I hope to be able to compute $x$ in $\mathcal{O}(n^2)$. Approximate approaches are also welcome. I have seen here that this does not seem to be doable in a more general situation, but I hope the smallness of $\mu$ may help... Any idea, reference or warning? Thanks for your help.
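One approximate route that fits an $\mathcal{O}(n^2)$-per-iteration budget (my own suggestion, not something from the question): treat $LL^T$ as a preconditioner and run preconditioned conjugate gradients on $(LL^T+\mu^2 I)x=y$. Each iteration needs one product with $LL^T+\mu^2I$ and two triangular solves, all $\mathcal{O}(n^2)$; how fast it converges depends on how $\mu^2$ compares with the small eigenvalues of $LL^T$. A minimal sketch:

import numpy as np
from scipy.linalg import solve_triangular

def solve_regularized(L, y, mu, tol=1e-10, maxiter=50):
    # Approximately solve (L L^T + mu^2 I) x = y by preconditioned CG.
    # The preconditioner M = L L^T is applied through two triangular solves,
    # so each iteration costs O(n^2).  This is a sketch, not tuned code.
    n = L.shape[0]

    def A(v):                       # matrix-vector product with L L^T + mu^2 I
        return L @ (L.T @ v) + mu**2 * v

    def Minv(v):                    # preconditioner solve: (L L^T)^{-1} v
        w = solve_triangular(L, v, lower=True)
        return solve_triangular(L.T, w, lower=False)

    x = np.zeros(n)
    r = y - A(x)
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x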
Explicit computation of these expectations appears to be out of the question once the index exceeds two or three, so I will focus on issues that had been emphasized in earlier versions of the question: What happens (asymptotically) as $i$ increases? What happens as $\alpha$ increases? The answers turn out to be interesting, unusual, and perhaps counterintuitive. I was surprised by the results and therefore felt it worthwhile to post such a lengthy answer for those who appreciate the subtle interplay of intuition, simulation, and analysis and to expose any flaws in my analysis to critical examination. Exploration and Intuition Let's get a handle on this process by describing it in words. A sequence of random values $(X_i)$ generates sequences $(Y_i)$ and $(Z_i)$. $Z_i$ is the smaller of (a) the current value of $X$ (namely, $X_i$) and (b) the average of all its previous values (which is called $Y_{i-1}$). This has two effects: Because $Z_i$ cannot be any larger than $Y_{i-1}$, it follows (by inducting on $i$) that averaging $Z_i$ with all the previous $Y_j$ cannot increase $Y$. Thus, $(Y_i)$ is a non-increasing sequence. It decreases only when a value of $X$ falls below the running average of the $Z_i$. The further along we go (that is, the larger the index $i$ is), the smaller is the possible change from $Y_{i-1}$ to $Y_i$, because the weight of $Z_i$ in the average is just $1/i$. These effects show that any single realization of the process $(Y_i)$ must decrease more and more slowly, leveling off to a horizontal asymptote (because the $X_i$ are bounded below by $0$). Furthermore, one's intuition might suggest that when smaller values of $X_i$ are rare, then this asymptote ought to be positive. That is precisely what simulations suggest, as in the left hand plot in the figure which shows one realization of $(X_i)$ (as gray dots) and the corresponding $(Y_i)$ (as a dark graph) and $(Z_i)$ (as a faint red graph bouncing between the graph of $(Y_i)$ and the lowest values of the $(X_i)$): The right hand plot displays, in red, $50$ independent realizations of $(Y_i)$, again for $\alpha=2$. (The black curve will be explained later.) Indeed, all these realizations seem to level off quickly to asymptotic values. The striking thing, though, is that these values differ quite a bit. The differences are induced by the large changes occurring very early on in the processes: when the very first one or two of the $X_i$ are small, all subsequent values of $(Y_i)$ must be even smaller. Here we have an example of a stochastic process with an extremely high degree of autocorrelation. If this intuition is correct, then the expectations $\mathbb{E}(Y_i)$ ought to be a decreasing function of $i$ and level off to some nonzero value depending on $\alpha$. Most of this intuition is good--except that the probability that $(Y_i)$ levels off to a nonzero value is nil. That is, despite all appearances, essentially all realizations of $(Y_i)$ eventually go to zero! One of the more amazing results concerns how long this will take. First, though, let's see why a nonzero asymptotic value is so unlikely. Analysis of $(Y_i)$ Consider one realization of $(X_i)$, which can be denoted $(x_i) = x_1, x_2, \ldots, x_i, \ldots$. Associated with it are realizations of $(Y_i)$ and $(Z_i)$, similarly denoted with lower case letters. Suppose that $(y_i)$ reaches some value $y \gt 0$ asymptotically. 
Since $(y_i)$ is non-increasing, this means that for any $\epsilon\gt 0$ there exists an integer $n$ such that $$y + \epsilon > y_i \ge y$$ for all $i \ge n$.

Consider what happens when $x_i$ has a value smaller than $y_{i-1}$. The change in the running average $(y_i)$ is $$y_{i} - y_{i-1} = \frac{(i-1)y_{i-1} + x_i}{i} - y_{i-1} = \frac{x_i-y_{i-1}}{i}.$$ I am going to underestimate the size of that change by replacing it by $0$ if $z_i \ge y$ and, otherwise, by $(z_i - y)/i$. In other words, let's only count the amount by which $x_i$ is less than the asymptotic value $y$. I will further underestimate the rate at which such decreases occur. They happen exactly when $x_i \le y_{i-1}$, which is more often than when $x_i \le y$. Writing $F$ for the CDF of $X$, this rate is $F(y)$, allowing us to express the expected value of the change at step $i$ as being an amount more negative than $$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i}.$$

Consider, now, what happens to $(y_i)$ starting at $y_n$ and continuing for a huge number of steps $m \gg n$ to $y_{m+n}$. Accumulating these conservative estimates of the decreases causes $y_n$ to drop to a value expected to be less than the sum of the preceding fractions. I will underestimate the size of that sum by replacing the denominators $n+1, n+2, \ldots, n+m$ by the largest denominator $m+n$. Because there are now $m$ identical terms in the sum, this underestimate equals $$\frac{m F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{m+n}.$$

Finally, we need to compare this expectation to what really is happening with the realization $(y_i)$. The Weak Law of Large Numbers says that for sufficiently large $m$, this realization is almost certain to exhibit a net decrease that is extremely close to the expected decrease. Let's accommodate this sense of "extremely close" by (a) taking $m$ much larger than $n$ but (b) underestimating $m/(m+n)$ as $1/2$. Thus, it is almost certain that the change from $y_n$ to $y_{m+n}$ is greater in magnitude than $$\beta(y) = \frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{2} \le 0.$$ It is of logical importance that $m$ did not really depend on $n$: the only relationship among them is that $m$ should be much larger than $n$.

Return, then, to the original setting where we supposed that $y$ was a nonzero horizontal asymptote of $(y_i)$. Choose $\epsilon = -\beta(y)$. Provided this was nonzero, it determined the value of $n$ (at which the sequence $(y_i)$ finally approaches within $\epsilon$ of its asymptote). By taking a sufficiently large $m$, we have concluded that $(y_i)$ eventually must decrease by more than $\epsilon$. That is, $$y = y + \epsilon + \beta(y) > y_{m+n}.$$ Therefore we must have been wrong: either $y$ is not an asymptote of $(y_i)$ or else $\beta(y) = 0$. However, $\beta(y) = 0$ only when $y$ is smaller than all numbers in the support of $X$. In the case of a Beta distribution (any Beta distribution), the support is always the full unit interval $[0,1]$. Consequently, the only possible value at which almost all realizations $(y_i)$ can level off is $0$. It follows immediately that $$\lim_{i\to \infty} \mathbb{E}(Y_i) = 0.$$

Conclusions

For $X \sim $ Beta$(\alpha, 1)$, $F(x) = x^\alpha$ concentrates more and more of the probability near $1$ as $\alpha$ increases.
Consequently it is obvious (and easily proven) that $$\lim_{\alpha \to \infty} \mathbb{E}(Y_i) = 1.$$ Since $Z_i \le Y_i$ by definition, its expectation is squeezed between the expectation of $Y_i$ and zero, whence $$\lim_{i\to \infty} \mathbb{E}(Z_i) = 0.$$

Comments

Notice that the results for the limiting values with respect to $i$ did not require that the $X_i$ have Beta distributions. Upon reviewing the argument it becomes clear that indeed the $X_i$ do not need to have identical distributions, nor do they need to be independent: the key idea behind the definition of $\beta(y)$ is that there needs to be a nonzero chance of seeing values of the $X_i$ that are appreciably less than $y$. This prevents most realizations $(y_i)$ from leveling off to any value $y$ for which $\beta(y)\ne 0$.

The rates at which realizations reach zero, however, can be astonishingly slow. Consider the problem setting again, in which the $X_i$ are iid with common CDF $F(x) = x^\alpha$. According to the previous estimates the expected rate of change is approximately $$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i} = \frac{y^\alpha \left(\frac{\alpha}{\alpha+1}y - y\right)}{i}.$$ The solution can be closely approximated by taking these differences to be derivatives of the expectation $f(i) = \mathbb{E}(Y_i)$ and integrating the resulting differential equation, yielding (for $\alpha \gt 1$) $$f(i) \approx \left(\frac{\alpha}{\alpha+1}\left(\log(i)+C\right)\right)^{-1/\alpha}$$ for some constant of integration $C$ (which we may ignore when studying the asymptotics for $i\to\infty$). Remember, this was obtained by consistently underestimating the rate of decrease of $f$. Therefore, $f$ approximates an upper bound of the realizations of $(Y_i)$ with probability $1$. Its graph (for $C=0$) is the thick black curve shown in the right-hand plot of the figure. It has the right shape and actually seems to be a pretty good approximation to the upper envelope of these realizations.

This is a very slowly decreasing function. For instance, we might inquire how long it would take for the realizations to draw close to $0$: say, down to $y$. The general solution (ignoring $C$, which makes relatively little difference) is $$i \approx \exp\left(\frac{\alpha+1}{\alpha}y^{-\alpha}\right).$$ Even with $\alpha=2$ (where each $X$ has an appreciable chance of being close to $0$), and $y=0.1$ (which isn't even terribly close to $0$), the solution is $i\approx 1.4\times 10^{65}$. For $\alpha=100$ and $y=1/2$, $i$ is near $10^{10^{30}}$. I am not going to wait around for that simulation to finish!

The moral here is that simulations can sometimes deceive. Their correct interpretation must be informed by an analysis of the underlying phenomenon being simulated.

These asymptotic approximations appear to be pretty good, bringing us at least partway back from the purely limiting results obtained above towards the request in the question for information about the individual expectations $\mathbb{E}(Z_i)$, which will be less than but close to $\mathbb{E}(Y_i)$.
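For readers who want to reproduce the simulations discussed above, here is a minimal sketch (my own code; it assumes the recursion is started at $Y_1 = X_1$, since the original question's initialization is not restated here):

import numpy as np

rng = np.random.default_rng(0)

def simulate_y(alpha=2.0, n=10_000):
    # One realization of (Y_i): Y_i is the running mean of Z_1, ..., Z_i,
    # where Z_i = min(X_i, Y_{i-1}) and X_i ~ Beta(alpha, 1).
    x = rng.beta(alpha, 1.0, size=n)
    y = np.empty(n)
    y[0] = x[0]                                      # assumed start of the running mean
    for i in range(1, n):
        z = min(x[i], y[i - 1])
        y[i] = y[i - 1] + (z - y[i - 1]) / (i + 1)   # running-mean update
    return y

ys = np.array([simulate_y() for _ in range(50)])
print("average of Y_n over 50 realizations:", ys[:, -1].mean())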
I am looking for results concerning the following stationary PDE $$u\cdot\nabla u + \Delta u = F(x),$$ where $$u\colon\Omega\to\mathbb{R}^2,$$ and $\Omega\subset\mathbb{R}^2$ is a 2D domain (bounded or unbounded), preferably the torus ($\mathbb{T}^2$). I do not specify any boundary condition, as I'm interested in results for any of periodic, Dirichlet or Neumann boundary conditions, or an unbounded domain. This equation can be written in scalar form as $$uu_x+vu_y+u_{xx}+u_{yy}=f\\uv_x+vv_y+v_{xx}+v_{yy}=g$$ This equation can be interpreted as the time-independent 2D forced viscous Burgers' equation. I am looking for any known qualitative theoretical results. I've been surprised that so little is available in the literature. I searched the net a bit for any existence, uniqueness or multiplicity of solutions result for this equation, but ended up with a bunch of numerical papers and none theoretical. It looks like this equation is often used as a test case for numerical methods. I am aware that the case $F(x)=0$ is simpler, as the equation reduces to a linear one by the Hopf-Cole transform, at least in some specific cases, and there are some results for that. I am interested in the cases $\|F\|\neq 0$, and especially $\|F\|$ large. I would also appreciate results for the time-dependent version, i.e. $u_t=u\cdot\nabla u + \Delta u + F$.
I'm using the textbook Electricity and Magnetism by Purcell. In the section about continuous charge distributions I found the following formula
[tex] \mathbf{E}(x,y,z)= \frac{1}{4\pi\epsilon_0 } \int \frac{\rho(x',y',z')\boldsymbol{\hat r}\, dx'dy'dz'}{r^{2}} [/tex]
It's stated that (x,y,z) is fixed while we let the variables x', y' and z' range over the domain of integration. What puzzles me is that radial unit vector, which is supposed to point from (x', y', z') to (x,y,z), making the integrand a vector-valued function. What am I missing?
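One way to see concretely what the vector-valued integrand means (my own addition, in standard notation rather than Purcell's exact symbols) is to write the unit vector out in components, so the single vector equation is really three scalar integrals:

% With \vec{r} = (x-x',\, y-y',\, z-z') pointing from the source point to the
% field point, \hat{r} = \vec{r}/|\vec{r}| and \hat{r}/r^2 = \vec{r}/|\vec{r}|^3, so
\[
E_x(x,y,z) = \frac{1}{4\pi\epsilon_0}\int
  \frac{\rho(x',y',z')\,(x-x')}
       {\bigl[(x-x')^2+(y-y')^2+(z-z')^2\bigr]^{3/2}}\,dx'\,dy'\,dz',
\]
% and similarly for E_y and E_z with (y-y') and (z-z') in the numerator.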
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. 
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
Some mathematical elements change their style depending on the context, whether they are in line with the text or in an equation-type environment. This article explains how to manually adjust the display style. Let's see an example:

Depending on the value of $x$ the equation \( f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \) may diverge or converge.
\[ f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \]

Superscripts, subscripts and fractions are formatted differently. The maths styles can be set explicitly. For instance, if you want an in-line mathematical element to display as an equation-like element, put \displaystyle before that element. There are some more maths style-related commands that change the size of the text.

In-line maths elements can be set with a different style: \(f(x) = \displaystyle \frac{1}{1+x}\). The same is true the other way around:
\begin{eqnarray*}
f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \\
\textstyle f(x) = \textstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\
\scriptstyle f(x) = \scriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\
\scriptscriptstyle f(x) = \scriptscriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x}
\end{eqnarray*}
For more information see
2016-03-01
A functional central limit theorem for a Markov-modulated infinite-server queue
Publication: Methodology and Computing in Applied Probability, Volume 18, Issue 1, p. 153-168

We consider a model in which the production of new molecules in a chemical reaction network occurs in a seemingly stochastic fashion, and can be modeled as a Poisson process with a varying arrival rate: the rate is $\lambda_i$ when an external Markov process $J(\cdot)$ is in state $i$. It is assumed that molecules decay after an exponential time with mean $\mu^{-1}$. The goal of this work is to analyze the distributional properties of the number of molecules in the system, under a specific time-scaling. In this scaling, the background process is sped up by a factor $N^{\alpha}$, for some $\alpha>0$, whereas the arrival rates become $N\lambda_i$, for $N$ large. The main result of this paper is a functional central limit theorem (F-CLT) for the number of molecules, in that the number of molecules, after centering and scaling, converges to an Ornstein-Uhlenbeck process. An interesting dichotomy is observed: (i) if $\alpha>1$ the background process jumps faster than the arrival process, and consequently the arrival process behaves essentially as a (homogeneous) Poisson process, so that the scaling in the F-CLT is the usual $\sqrt{N}$, whereas (ii) for $\alpha\leq1$ the background process is relatively slow, and the scaling in the F-CLT is $N^{1-\alpha/2}$. In the latter regime, the parameters of the limiting Ornstein-Uhlenbeck process contain the deviation matrix associated with the background process $J(\cdot)$.

Additional Metadata
Keywords: Ornstein-Uhlenbeck processes, Markov modulation, Central limit theorems, Martingale methods
MSC: Queueing theory (msc 60K25), Processes in random environments (msc 60K37), Functional limit theorems; invariance principles (msc 60F17)
THEME: Life Sciences (theme 5)
Persistent URL: dx.doi.org/10.1007/s11009-014-9405-8
Journal: Methodology and Computing in Applied Probability
Project: Coarse grained stochastic methods for biochemical reactions
Citation: Anderson, D.F, Blom, J.G, Mandjes, M.R.H, Thorsdottir, H, & deTurck, K.E.E.S. (2016). A functional central limit theorem for a Markov-modulated infinite-server queue. Methodology and Computing in Applied Probability, 18(1), 153–168. doi:10.1007/s11009-014-9405-8
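To make the model concrete, here is a small simulation sketch of a Markov-modulated infinite-server queue (my own illustration; the two-state generator, the rates and the parameter values are arbitrary choices, not taken from the paper):

import numpy as np

rng = np.random.default_rng(1)

def simulate(T=20.0, N=100, alpha=1.0, lam=(1.0, 3.0), q=(0.5, 1.0), mu=1.0):
    # Markov-modulated M/M/infinity sample path via a simple Gillespie loop.
    # Background chain J has 2 states with switching rates q, sped up by N**alpha;
    # arrivals occur at rate N*lam[J]; each molecule decays at rate mu.
    t, j, m = 0.0, 0, 0
    while t < T:
        rates = np.array([N**alpha * q[j],      # background jump
                          N * lam[j],           # arrival of a new molecule
                          mu * m])              # one of the m molecules decays
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            j = 1 - j
        elif event == 1:
            m += 1
        else:
            m -= 1
    return m

counts = [simulate() for _ in range(50)]
print("mean count", np.mean(counts), "std", np.std(counts))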
GTU First Year Engineering (Semester 2)
Vector Calculus and Linear Algebra
May 2012
Total marks: -- Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary

1 (a) Solve the following homogeneous system of linear equations by using Gauss-Jordan elimination. \[ 2x_1+2x_2-x_3+x_5=0 \\ -x_1-x_2+2x_3-3x_4+x_5=0 \\ x_1+x_2-2x_3-x_5=0 \\ x_3+x_4+x_5=0 \] 4 M
Attempt the following:
1 (b) (i) Find A^{-1} using row operations if \[ A=\begin{bmatrix}1 &2 &3 \\2 &5 &3 \\1 &0 &8 \end{bmatrix} \] 3 M
1 (b) (ii) Solve the system of equations \[ -2b+3c=1 \\ 3a+6b-3c=-2 \\ 6a+6b+3c=5 \] by Gaussian elimination. 3 M
Attempt the following:
1 (c) (i) Find the rank of the matrix \[ A = \begin{bmatrix}1 &-1 &2 &0 \\3 &1 &0 &0 \\-1 &2 &4 &0 \end{bmatrix} \] in terms of determinants. 2 M
1 (c) (ii) Use Cramer's rule to solve \[ x+2y+z=5 \\ 3x-y+z=6 \\ x+y+4z=7 \] 2 M
2 (a) Define the rank and nullity. Find the rank and nullity of the matrix \[ A=\begin{bmatrix}-1 &2 &0 &4 &5 &-3 \\3 &-7 &2 &0 &1 &4 \\2 &-5 &2 &4 &6 &1 \\4 &-9 &2 &-4 &-4 &7 \end{bmatrix} \] 5 M
Attempt the following:
2 (b) (i) \[ \text{If } \vec{r}=x\widehat{i} + y\widehat{j} + z\widehat{k}, \text{ then show that } \nabla r^n = nr^{n-2}\,\vec{r} \] 2 M
2 (b) (ii) \[ \text{Prove that } \nabla^2 f(r) = f''(r) + \dfrac{2}{r} f'(r) \] 3 M
Attempt the following:
2 (c) (i) Show that f_1 = 1, f_2 = e^x, f_3 = e^{2x} form a linearly independent set of vectors in C^2(-\infty, \infty). 2 M
2 (c) (ii) Check whether the set \[ W= \left\{ a_0 + a_1 x + a_2 x^2 + a_3 x^3 \ \text{where} \ a_0 + a_1 + a_2 + a_3 = 0, \ a_i \in \mathbb{R} \right\} \] is a subspace of P_3. 2 M
3 (a) Show that the set of all 2 × 2 matrices of the form \[ \begin{bmatrix} a &1 \\1 &b \end{bmatrix} \] with addition defined by \[ \begin{bmatrix} a &1 \\1 &b \end{bmatrix}+ \begin{bmatrix} c &1 \\1 &d \end{bmatrix}= \begin{bmatrix}a+c &1 \\1 &b+d \end{bmatrix} \] and scalar multiplication \[ k \begin{bmatrix}a &1 \\1 &b \end{bmatrix} = \begin{bmatrix}ka &1 \\1 &kb \end{bmatrix} \] is a vector space. 5 M
Attempt the following:
3 (b) (i) Find a standard basis vector that can be added to the set S={(1,0,3), (2,1,4)} to produce a basis of R^3. 3 M
3 (b) (ii) Find the co-ordinate vector of p relative to the basis \[ S= \{ p_1, p_2 , p_3\}, \quad \text{where} \ p=2-x+x^2, \ p_1 = 1 + x, \ p_2 =1+x^2, \ p_3 = x +x^2 \] 2 M
Attempt the following:
3 (c) (i) Find two vectors in R^2 with Euclidean norm 1 whose inner product with (-3,1) is zero. 2 M
3 (c) (ii) If v_1, v_2, ..., v_r are pairwise orthogonal vectors in R^n, then show that \[ \|v_1+v_2+\cdots+v_r\|^2 = \|v_1\|^2 + \|v_2\|^2 + \cdots + \|v_r\|^2 \] 2 M
4 (a) Let R^3 have the Euclidean inner product. Use the Gram-Schmidt process to transform the basis {u_1, u_2, u_3} into an orthonormal basis, where u_1=(1,0,0), u_2=(3,7,-2), u_3=(0,4,1). 4 M
Attempt the following:
4 (b) (i) Find the least squares solution of the linear system Ax=B given by \[ x_1-x_2=4 \\ 3x_1+2x_2=1 \\ -2x_1+4x_2=3 \] and find the orthogonal projection of B on the column space of A.
3 M
4 (b) (ii) Let the vector space P_2 have the inner product \[ (p,q)= \int^1_{-1} p(x)q(x)\,dx \] (i) Find ||p|| for p = x^2. (ii) Find d(p,q) if p = 1 and q = x. 3 M
Attempt the following:
4 (c) (i) Define eigenvalue and eigenvector. Find the eigenvalues of A^9 for \[ A= \begin{bmatrix} 1 &3 &7 &11 \\0 &1/2 &3 &8 \\0 &0 &0 &4 \\0 &0 &0 &2\end{bmatrix} \] 2 M
4 (c) (ii) Find k, l and m to make A a Hermitian matrix: \[ A=\begin{bmatrix} -1 &k &-i \\3-5i &0 &m \\l &2+4i &2 \end{bmatrix} \] 2 M
5 (a) Find bases for the eigenspaces of \[ A= \begin{bmatrix} 0 &0 &2 \\1 &2 &1 \\1 &0 &3 \end{bmatrix} \] 4 M
Attempt the following:
5 (b) (i) Use the Cayley-Hamilton theorem to find A^{-1} for \[ A= \begin{bmatrix} 1&3&7 \\ 4&2&3 \\ 1&2&1\end{bmatrix} \] 3 M
5 (b) (ii) Find a matrix P that diagonalizes \[ A = \begin{bmatrix}0&0&-2 \\ 1&2&1 \\ 1&0&3 \end{bmatrix} \] 3 M
5 (c) Find a change of variable that will reduce the quadratic form \[ x_1^2-x_3^2-4x_1 x_2+ 4x_2 x_3 \] to a sum of squares, and express the quadratic form in terms of the new variables. 4 M
6 (a) Verify Green's theorem for the field f(x,y)=(x-y)i+xj and the region R bounded by the unit circle C: r(t)=(cos t)i+(sin t)j, 0 ≤ t ≤ 2π. 4 M
Attempt the following:
6 (b) (i) Find the flux of F=4xz i - y^2 j + yz k outward through the surface of the cube cut from the first octant by the planes x=1, y=1 and z=1. 3 M
6 (b) (ii) Determine whether T: R^2 → R^2 is a linear operator: \[ 1) \ T(x,y)= \left(\sqrt[3]{x}, \sqrt[3]{y} \right ) \qquad 2) \ T(x,y) = (x,0) \] 3 M
Attempt the following:
6 (c) (i) Find the derivative of f(x,y)=x^2 sin 2y at the point (1, π/2) in the direction of v=3i-4j. 2 M
6 (c) (ii) Use matrix multiplication to find the image of the vector (-2, 1, 2) if it is rotated -45° about the y-axis. 2 M
7 (a) Verify Stokes' theorem for the hemisphere S: x^2+y^2+z^2=9, z ≥ 0, its bounding circle C: x^2+y^2=9, z=0, and the field F=yi-xj. 4 M
Attempt the following:
7 (b) (i) Consider the basis S={v_1, v_2, v_3} for R^3, where v_1=(1,2,1), v_2=(2,9,0) and v_3=(3,3,4), and let T: R^3 → R^2 be the linear transformation such that T(v_1)=(1,0), T(v_2)=(-1,1), T(v_3)=(0,1). Find a formula for T(x_1, x_2, x_3) and use that formula to find T(7, 13, 7). 3 M
7 (b) (ii) Let T: R^2 → R^3 be the linear transformation defined by \[ T \left ( \begin{bmatrix}x_1\\x_2\end{bmatrix} \right )= \begin{bmatrix}x_2\\-5x_1+13x_2 \\-7x_1 + 16x_2 \end{bmatrix}. \] Find the matrix for the transformation T with respect to the bases \[ B=\{u_1, u_2 \}\ \text{for} \ R^2 \ \text{and} \ B' = \{ v_1, v_2, v_3 \}\ \text{for} \ R^3, \\ \text{where} \ u_1 = \begin{bmatrix}3 \\1 \end{bmatrix} , \ u_2=\begin{bmatrix}5 \\2\end{bmatrix}, \ v_1 = \begin{bmatrix}1 \\ 0 \\-1 \end{bmatrix} , \ v_2 = \begin{bmatrix}-1 \\ 2 \\ 2 \end{bmatrix}, \ v_3 = \begin{bmatrix}0\\ 1 \\2\end{bmatrix} \] 3 M
Attempt the following:
7 (c) (i) Determine whether the linear transformation T: R^2 → R^3, where T(x,y)=(x, y, x+y), is one-to-one. 2 M
7 (c) (ii) Find the standard matrix for the linear operator on R^2 given by an orthogonal projection on the y-axis followed by a contraction with factor k=1/3. 2 M
More question papers from Vector Calculus and Linear Algebra
26th SSC CGL tier II level Solution Set, 2nd on Time and work problems This is the 26th solution set of 10 practice problem exercise for SSC CGL Tier II exam and the 2nd on Time and work problems. For maximum gains, the test should be taken first, that is obvious. But more importantly, to absorb the concepts, techniques and deductive reasoning elaborated through these solutions, one must solve many problems in a systematic manner using this conceptual analytical approach. One can learn well only by practicing learning by doing. Before going through these solutions you should take the test by referring to . SSC CGL Tier II level Question Set 26 on Time and Work problems 2 26th solution set - 10 problems for SSC CGL Tier II exam: 2nd on Time Work problems - time 15 mins Problem 1. In 16 days A can do 50% of a job. B can do one-fourth of the job in 24 days. In how many days can they do three-fourths of the job while working together? 21 9 18 24 Solution 1: Problem analysis and conceptual solution by work portion done in a day and working together concepts As number of days to complete a portion of a job is directly proportional to the portion of work done by a worker, by the first statement, A completes the whole job in, $16\times{2}=32$ days, as $50\text{%}=\frac{1}{2}$ By the same concept, as B completes $\frac{1}{4}$th of the job in 24 days, the whole job is completed by B in, $24\times{4}=96$ days. This is the use of the first concept of direct proportionality of work done to number of days the worker worked. Solution 1: Problem solving second stage: Working together concept of summing up work portion done in a day When A and B work together, total work portion done by them in a day is given by summing up the work portion done by each of them in a day. Inverting the total work portion done in a day, you will get the number of days required to complete the work by them while working together. To get the work portion done in day for a worker, just invert the number of days required by the worker to complete the work. Using these concepts work done in a day by A and B working together is, $\displaystyle\frac{1}{32}+\displaystyle\frac{1}{96}=\displaystyle\frac{4}{96}=\displaystyle\frac{1}{24}$. This means, the whole work will be completed by the two working together in 24 days, and $\displaystyle\frac{3}{4}$th of the job will be completed in, $24\times{\displaystyle\frac{3}{4}}=18$ days. Answer: c: 18 days. Key concepts used: -- Work portion done to number of days of work direct proportionality -- Work portion done in a day as inverse of number of days to complete the work -- Working together concept to get portion of work done in a day by summing up portions of work done by each worker in a day Number of days to complete the work as inverse of work portion done in a day. If you are used to these common concepts of time and work, you can easily solve the problem in mind by being a little careful. Problem 2. If each of them had worked alone, B would have taken 10 hours more than what A would have taken to complete a job. Working together, they can complete the job in 12 hours. How many hours B would take to do 50% of the job? 30 20 10 15 Solution 2: Problem analysis and execution: By Mathematical reasoning and Working together concept We have to introduce one variable for work completion time for either A or B, not two. By general principle of mathematical problem solving, that is supported by common sense. 
We would assume $b$ hours as the time taken by B to complete the work, because target duration involves B's completion time. So completion time for A is, $a=b-10$, which is in terms of $b$. We would be dealing with a single variable. As working together A and B complete the job in 12 hours, applying the of total work portion done by the two in a day as the sum of work portion done by each individually in a day, working together concept $\displaystyle\frac{1}{b-10}+\displaystyle\frac{1}{b}=\displaystyle\frac{1}{12}$. Cross-multiplying and rearranging terms we get the quadratic equation in $b$ as, $b^2-34b+120=0$. 4 times 30 is 120 as well as 4 plus 30 is 34. So the factors of the quadratic equation are, $(b-30)(b-4)=0$. $b=4$ is not possible as $a=b-10$ will then be negative. So, $b=30$ hours. To complete 50% of the job then, B willl take half of 30, that is, 15 hours. Answer: d: 15. Key concepts used: -- Working together concept as work portion done in a day by two workers by summing up their individual work portion done in a day Work portion done in a day as inverse of number of days to complete the work -- Formation and factorization of quadratic equation. In this form of time and work problems, it is hard to avoid formation and factorization of a quadratic equation. But usually this is easy. Problem 3. Two workers P and Q are engaged to do a piece of work. Working alone P would take 8 hours more to complete the work than when working together. Working alone Q would take $4\frac{1}{2}$ hours more than when they work together. The time required to finish the work together is, 5 hours 6 hours 4 hours 8 hours Solution 3: Problem analysis and solution by work per unit time and working together concept Though the problem is quite interestingly framed, it is easy to set up the equation for per hour work portion done when P and Q work together as, $\displaystyle\frac{1}{T}=\displaystyle\frac{1}{T+8}+\displaystyle\frac{1}{T+4.5}$, where $T$ is the working together work completion time you have to find. The two denominators represent the two work completion times in terms of $T$ for P and Q. Cross-multiply and simplify to form the desired quadratic equation as, $(T+8)(T+4.5)=T(2T+12.5)$, Or, $T^2=36$, Or, $T=6$ An unexpected quick result if you have followed the right path. Cancellation of $12.5T$ on both sides of the equation makes things simpler. Answer: b: 6 hours. Key concepts used: -- Work portion done per unit time . Working together concept Problem 4. A contractor employed 200 men to complete a certain work in 150 days. If only one-fourth of the work gets completed in 50 days, then how many more men the contractor must employ to complete the whole work in time? 100 300 200 600 Solution 4 : Problem analysis and execution: Mandays concept 200 men do one-fourth of the work in 50 days. Assuming that work rate (work portion done by a man in a day) remains same for all men, a total of $4\times{50}$, that is, 200 days would have required for 200 men to finish the job. Obviously the contractor misjudged the work rate capacity of the men. That's why to meet the target of 150 days he would need to employ more men. The reason for the need of more men being clear, let's get on with our main task of calculating the number of extra men required. In 50 days, $\displaystyle\frac{1}{4}$th of work is done by 200 men, So the total work amount in terms of mandays is, $50\times{200}\times{4}=40000$ mandays. 
To complete three-fourth remaining part of this work, that is, 30000 mandays work, in remaining 100 days, number of men required will simply be, $\displaystyle\frac{30000}{100}=300$. The contractor has to employ then 100 more men to finish the job in 150 days. Answer: a: 100. Key concepts used: -- Work amount in terms mandays concept -- Work rate assessment Mandays technique to find the number of extra men required . Problem 5. A, B and C are engaged to do a work for Rs.5290. A and B together are supposed to do $\displaystyle\frac{19}{23}$rd of the work and B and C together $\displaystyle\frac{8}{23}$rd of the work. Then A should be paid, Rs.4250 Rs.3450 Rs.2290 Rs.1950 Solution 5: Problem analysis and execution: Earning share concept, Worker compensation proportional to work portion done As B and C together complete $\displaystyle\frac{8}{23}$rd of the work, the rest of the work must be completed by A alone. So A completes, $1-\displaystyle\frac{8}{23}=\displaystyle\frac{15}{23}$rd of the work. Total amount of Rs.5290 is to be paid proportionate to the work amount done. So A will be paid, $\displaystyle\frac{15}{23}\times{5290}=15\times{230}=\text{Rs.}3450$. The first statement of work portion done by A and B is to create diversion and is not required for getting the answer. But we can satisfy our curiosity by calculating that with A doing $\displaystyle\frac{15}{23}$rd portion of work, B would have done, $\displaystyle\frac{19}{23}-\displaystyle\frac{15}{23}=\displaystyle\frac{4}{23}$rd of work and so C's work portion will be, $\displaystyle\frac{8}{23}-\displaystyle\frac{4}{23}=\displaystyle\frac{4}{23}$rd portion of whole work. This is the reason of total work given by two statements becoming $\displaystyle\frac{27}{23}$, the extra $\displaystyle\frac{4}{23}$ coming from C contributing twice. Answer: b: Rs.3450. Key concepts used: -- Earning share concept Worker compensation proportional to work portion done . Problem 6. Ruchi does $\displaystyle\frac{1}{4}$th of a job in 6 days and Bivas completes rest of the same job in 12 days. Then they together complete the job in, $9\frac{3}{5}$ days $9$ days $7\frac{1}{3}$ days $8\frac{1}{8}$ days. Solution 6 : Problem analysis and solution: Working together concept The first step is to accurately evaluate portion of job completed by each separately in 1 day. As Ruchi does $\displaystyle\frac{1}{4}$th of the job in 6 days, her work rate in terms of work portion done in a day is, $\displaystyle\frac{1}{4}\times{\displaystyle\frac{1}{6}}=\displaystyle\frac{1}{24}$. Bivas completes the rest of the job, that is, $\displaystyle\frac{3}{4}$th of the job in 12 days. So the portion of job he completes in a day is, $\displaystyle\frac{3}{4}\times{\displaystyle\frac{1}{12}}=\displaystyle\frac{1}{16}$. Together they complete the portion of job in 1 day is then, $\displaystyle\frac{1}{24}+\displaystyle\frac{1}{16}=\displaystyle\frac{5}{48}$. And number of days they take to complete the job is inverse of this portion of total work done in a day by the two, which is, $\displaystyle\frac{48}{5}=9\frac{3}{5}$ days. Answer: a: $9\frac{3}{5}$ days. Key concepts used: Work portion done directly proportional to number of days of work -- -- Work rate in terms of work portion done in a day is work portion done divided by number of days of work . Working together per unit time concept -- Number of days of completion of work is inverse of work portion done in a day Easy to solve in mind with a little care. Problem 7. 
P and Q together can do a job in 6 days and Q and R finishes the same job in $\displaystyle\frac{60}{7}$ days. Starting the work alone P worked for 3 days. Then Q and R continued for 6 days to complete the work. What is the difference in days in which R and P can complete the job, each working alone? 15 8 12 10 Solution 7: Problem analysis and solution: Work rate technique and Working together concept Assume, $p$, $q$ and $r$ to be the portion of work done by P, Q and R respectively in 1 day, each working alone. By the first statement then, $6(p+q)=W$, where $W$ is the total work amount. So, $(p+q)=\displaystyle\frac{1}{6}W$ By the second statement similarly, $\displaystyle\frac{60}{7}(q+r)=W$, Or, $(q+r)=\displaystyle\frac{7}{60}W$. And by the third statement, $3p+6(q+r)=W$, Or, $3p=W\left(1-\displaystyle\frac{7}{10}\right)=\displaystyle\frac{3}{10}W$ So, $10p=W$. It means P completes the work in 10 days working alone. Subtracting $(q+r)$ from $(p+q)$, you get, $p-r=W\left(\displaystyle\frac{1}{6}-\displaystyle\frac{7}{60}\right)=\displaystyle\frac{1}{20}W$, Or, $20p-20r=W$, Or, $20r=2W-W=W$. This means R will complete the work in 20 days working alone, and the desired difference in days is, $20-10=10$. Answer: d: 10. Key concepts used: -- Work rate technique -- Working together concept Sequencing of events -- Algebraic simplification techniques. The solution is speeded up because of bypassing the need of evaluating $q$. Problem 8. A man is twice as fast as a woman who is twice as fast as a boy in doing a piece of work. If one each of them work together and finish the work in 7 days, in how many days would a boy finish the work when working alone? 7 6 49 42 Solution 8: Problem analysis and solution: Work rate technique and Worker equivalence concept Assume, $m$, $w$ and $b$ to be the portion of work done in a day by a man, a woman and a boy respectively when working alone. This is use of work rate technique. This approach reduces fraction calculation and thus speeds up solution. So by the given efficiency statements, as in a day, a man does twice the work portion done by a woman, and a woman does twice the work portion done by a boy, $m=2w=4b$. Basically this means 1 man is equivalent to 4 boys and 1 woman is equivalent to 2 boys. This is Worker equivalence concept. Worker efficiency leads to worker equivalence. So by the working together statement, $7(m+w+b)=W$ where $W$ is the work amount. Or, $7(4b+2b+b)=49b=W$, This means, a boy working alone would complete the work in 49 days. Answer: c: 49. Key concepts used: -- Work rate technique Worker equivalence concept -- Working together concept -- Worker efficiency concept . Problem 9. While A can do a job working alone in 27 hours, B can do it in 54 hours also working alone. Find the share of C (in Rs.) if A, B and C get paid Rs.4320 for completing the job in 12 hours working together. 1440 960 1280 1920 Solution 9: Problem analysis and solution: Earning share proportional to work portion done and Working together concept In 12 hours, work portion done by A and B is, $12\left(\displaystyle\frac{1}{27}+\displaystyle\frac{1}{54}\right)=\displaystyle\frac{2}{3}$. So rest $\displaystyle\frac{1}{3}$rd portion of the work is completed by C. As share of earning is proportional to work portion done, and total work is worth Rs.4320, the earning by C is one-third of Rs.4320, $\displaystyle\frac{1}{3}\times{4320}=1440$. Answer: a: 1440. 
Key concepts used: -- Earning share concept -- Earning to work done proportionality -- Working together concept -- Work portion left concept. Problem 10. While A and B together finish a work in 15 days, A and C take 2 more days than B and C working together to finish the same work. If A, B and C complete the work in 8 days, in how many days would C complete it working alone? $20$ days $40$ days $24$ days $17\frac{1}{7}$ days Solution 10: Problem analysis and solution: Strategic problem definition, Work rate technique and Working together concept. The strategy of problem definition is to first form the algebraic relations that contain the maximum amount of certain information. Out of the four given statements, the fourth carries the maximum amount of certain information, so we first form the corresponding equation as, $8(a+b+c)=W$. By the work rate technique we have assumed variables $a$, $b$ and $c$ to be the work portion done per day by A, B and C respectively, and $W$ to be the total work amount. Next we form the equation corresponding to the first statement, as it involves no uncertainty, $15(a+b)=W$. It is easy to see that $c$ can be evaluated from these two equations by eliminating $(a+b)$. From the first equation, $(a+b+c)=\displaystyle\frac{W}{8}$, and from the second equation, $(a+b)=\displaystyle\frac{W}{15}$. Subtracting the second result from the first, $c=W\left(\displaystyle\frac{1}{8}-\displaystyle\frac{1}{15}\right)=W\left(\displaystyle\frac{7}{120}\right)$. The inverse of this work rate of C is the number of days C takes to complete the work working alone. It is then, $\displaystyle\frac{120}{7}=17\frac{1}{7}$. Answer: d: $17\frac{1}{7}$ days. Key concepts used: -- Strategy of problem definition -- Work rate technique -- Working together concept -- Solving in mind. This is a good example of diversionary tactics in a question. The second and the third statements are not required at all for finding the answer. Task for you: What would be the number of days to complete the work for B working alone? Note: Observe that most, if not all, of the problems can be solved quickly in mind if you use the right concepts and techniques. Problem analysis and clear problem definition play an important role in such quick solutions. Useful resources to refer to: Guidelines, Tutorials and Quick methods to solve Work Time problems; SSC CGL Tier II level Work Time, Work wages and Pipes cisterns Question and solution sets; SSC CGL Tier II level Solution set 26 on Time-work Work-wages 2
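None of these solutions needs a computer, but the fraction arithmetic is easy to cross-check with a few lines of code. The sketch below (Python, using only the standard fractions module; problem numbers refer to the solutions above) re-derives the answers to Problems 6 and 9.

```python
from fractions import Fraction as F

# Problem 6: Ruchi does 1/4 of the job in 6 days, Bivas does the remaining 3/4 in 12 days.
ruchi_per_day = F(1, 4) / 6          # 1/24 of the job per day
bivas_per_day = F(3, 4) / 12         # 1/16 of the job per day
together = ruchi_per_day + bivas_per_day
print(1 / together)                  # 48/5, i.e. 9 3/5 days

# Problem 9: A alone takes 27 h, B alone 54 h; the job is finished in 12 h for Rs.4320.
done_by_a_and_b = 12 * (F(1, 27) + F(1, 54))   # 2/3 of the job
c_share = (1 - done_by_a_and_b) * 4320         # C did the remaining 1/3
print(c_share)                       # 1440
```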
This answer says A helicopter uses a LOT more fuel hovering than it does in forward flight. Is this correct? Why? Yes, it is correct that helicopters use more fuel when hovering: the engine needs to apply more power to overcome drag. Here is a graph of the engine power required for different airspeeds, from J. Gordon Leishman, Principles Of Helicopter Aerodynamics: The line for total power goes down between 0 and 70 kts with increasing airspeed; this is caused by the line for induced power: the power required to overcome the induced drag of the helicopter blade. The total required engine power is the summation of several contributions (induced, profile and parasite power). Induced power is dominant in the hover. Induced drag is caused by the backwards tilt of the lift vector: the higher the angle between blade and free stream, the more the vector is tilted backwards, which causes both loss of lift and increase of drag. The equation for lift L is: $$ L = C_L \cdot \frac{1}{2} \cdot \rho \cdot V^2 \cdot S$$ and at a given altitude, the two variables here are $C_L$ (lift coefficient) and $V$ (airspeed at the blade). $C_L$ is an approximately linear function of angle of attack at the blade, so lift increases linearly with blade tilt-back and quadratically with increasing airspeed over the blade. The graph above, from Leishman, shows the velocity distribution over the blades when hovering, and at airspeed. Quite a complicated situation - when hovering, the airspeed reaching the blade is only the rotational speed of the rotor; at forward speed the blade going forward has rotational speed plus airspeed. The helicopter does not roll over and both forward blade and retreating blade deliver the same amount of lift, with the rearward going blade tilted back more than it was in the hover. But the forward going blade is tilted back a lot less: airspeed has a quadratic influence. Note that the circle in the plot at fwd airspeed is not stalled flow, but reverse flow: the air streams in at the back of the blade. So drag is now negative, the airstream helps to propel the blade! However there is loss of lift in the reverse flow area. Induced power reduces with airspeed at first according to the simple 1-D impulse consideration (more air mass through the disk), and later increases as the disk is increasingly tilted forward and must do more work to overcome losses from rotor profile drag, airframe parasitic drag, and compressibility drag. There is also an interference effect of the downwash over the fuselage: in the hover the air streams straight down, while in forward flight the rotor wash is more aligned with the fuselage, catching more of a streamline shape. Parasitic drag is of course dominant at top speed, while offloading the rotor by using fixed wing surfaces reduces the induced power at high speeds - but from hover to moderate forward speeds it is purely the reduction in lift induced power that creates translational lift. Yes, it is correct, if the helicopter doesn't fly too fast. A helicopter will produce the necessary lift most efficiently at a moderate forward speed. In a hover all the airflow which is available for lift creation must be generated by the rotation of the main rotor. This means that a small amount of air must be accelerated by a lot. If the helicopter adds forward speed, it can achieve a higher mass flow through the rotor, and now less acceleration of air is needed to achieve the same lift. 
This improves the efficiency of lift creation. If the helicopter goes faster than its speed for maximum rate of climb, aerodynamic drag grows too high and reduces efficiency again. At high speed, the tips of the advancing blades might reach transonic speeds, which produces a noticeable drag increase, and the inner part of the retreating blade will see very little airspeed, and to still produce lift, the whole blade will pitch to a high angle of attack, causing the inner part to stall, which again produces a noticeable drag increase. There is a sweet spot between hover and fast speed where the required power reaches a minimum. Yeah, I'm not a physics student, but I work on Black Hawks. If you conceptualise a helicopter as just a main rotor disc producing lift, then Peter Kampf's answer about mass-flow through the rotor disc is the greatest factor. (Remember the disc is tilted forward as the helicopter moves forward). However, your question actually asked why they burn less fuel: well, thousands of little design features on the airframe each help save precious pounds of fuel in forward flight. (You might want to do a Google image search to look at while you read this.) The Black Hawk has a cambered vertical fin which unloads the tail rotor above 60kts, and this torque is redirected into the main rotor. It has a variable stabilator which changes angle with fwd airspeed (= changing main rotor downwash angle) in order to provide lift, further unloading the main rotor. The tail rotor is canted at an angle and spins backwards into the main rotor wash, again to unload the main rotor, freeing up more power for forward speed. It has on-flight computers and a mixer unit which flattens out the airframe in flight, so that it doesn't present a flat cabin roof into the airstream at high forward airspeeds. The flatter you can keep the disc into the relative airflow, the smaller the pitching angles of the blades, and the less parasitic drag from the rotor disc. The main rotor blade tips are swept backwards to delay the onset of transonic tip drag as the advancing blade sees higher relative airspeeds in forward flight. Other helicopters have airframe fairings that generate lift off the cabin body in forward flight. All of these aerodynamic savings are present in forward flight, but not in the hover. And lastly, your turbine engine air inlets will benefit from some ram-air effect in forward flight, which means burning less fuel for the same torque. Every helicopter in the world uses some or all of these features to save fuel in flight, and if you compare generations of helicopters (Bell 47, Bell UH-1, Bell 412, Black Hawk), you can see these features gradually develop. There are other considerations when a helicopter is hovering just off the ground, but I've tried to list just some of the ways helicopters are designed to save fuel in flight. Hope some of this helps. The concept is known as "translational lift". When moving in forward flight, a helicopter's rotor disc acts a lot like an airplane's wing - it has a significant lift-to-drag ratio. The required thrust to maintain level flight is reduced by that ratio, and therefore necessary engine power and fuel flow are also reduced. In hover, the engine+rotor system has to supply thrust fully equal to the weight of the helicopter. When in a hover, the air has more time to set up into an induced wash from further upwards that translates into higher down flow speed by the time the induced wash reaches the plane of the rotor. 
When in translational flight, the rotor is continuously moving into clean air, so the down flow speed by the time the air reaches the plane of the rotor is less than that of a hover. Power equals force times speed; in this case consider the power output to the air. In both cases, the force is the same (equal to the weight of the helicopter), but in a hover, the down wash speed through the plane of the rotor is greater than during translational flight, so the required power in a hover is greater than in translational flight, until the translational drag becomes an issue. Another issue is tip vortices. In a hover, these can get quite large, again due to all the time for the vortices to set up and the rotor tips moving into the vortices induced by the other rotor tip(s). In translational flight, the vortices are "washed" off by the relative horizontal wind, reducing the size of the tip vortices. Another point to consider is whether the helicopter has supplemental wings. A rather famous example is the Mi-24 family of attack helicopters, where the weapon pylons work as wings. "At high speed, the wings provide considerable lift (up to a quarter of total lift)." At high altitudes with full load the recommended lift-off procedure is to gain horizontal speed so the wings pick up some lift. If gravity were the only force acting on an aircraft, then at each moment in time the aircraft would be gaining a certain amount of downward momentum. So to maintain altitude, the aircraft must transfer that momentum to some other mass (i.e. air). That is, there's going to be some air that starts with zero velocity (in the simplest case) and ends up with some downward velocity. Since momentum is mass times velocity, the velocity that the air has to be accelerated to will be inversely proportional to the mass of air accelerated: velocity = momentum/mass. However, the energy of that air is $mv^2/2$. When we substitute velocity into that equation, we get energy = mass × momentum²/(2 × mass²). One power of mass cancels out, giving energy = momentum²/(2 × mass). Thus, doubling the amount of air accelerated downward halves the energy required. When an airplane is traveling at high velocity, a large amount of air is coming into contact with its wings, meaning that it does not have to expend much energy to generate lift (of course, the faster it travels, the more drag it experiences, giving a lift-drag trade-off). A helicopter experiences something similar: when it is traveling horizontally, it naturally moves into new air. When it hovers, there's less air to accelerate downward, and what air there is has to be pulled towards the rotor by the rotor's own effort.
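The "more air mass through the disk" argument can be made quantitative with simple momentum (actuator-disc) theory. The sketch below is only an illustration: the helicopter mass, rotor radius and air density are invented round numbers, and Glauert's implicit formula for the induced velocity in forward flight captures only the induced-power part of the curves discussed above, not profile or parasite power.

```python
import math

RHO = 1.225          # air density, kg/m^3 (sea level)
MASS = 2200.0        # assumed helicopter mass, kg (illustrative only)
RADIUS = 5.3         # assumed rotor radius, m (illustrative only)

T = MASS * 9.81                          # thrust needed to hover = weight, N
A = math.pi * RADIUS ** 2                # rotor disc area, m^2
v_hover = math.sqrt(T / (2 * RHO * A))   # momentum-theory induced velocity in the hover

def induced_velocity(V, iters=100):
    """Glauert: v = v_hover^2 / sqrt(V^2 + v^2), solved by fixed-point iteration."""
    v = v_hover
    for _ in range(iters):
        v = v_hover ** 2 / math.sqrt(V ** 2 + v ** 2)
    return v

print(" V (m/s)   ideal induced power (kW)")
for V in range(0, 81, 10):
    P_induced = T * induced_velocity(V) / 1000.0
    print(f"{V:8d}   {P_induced:10.1f}")
# The induced power drops steeply with forward speed (translational lift);
# total power would additionally include profile and parasite power, which rise with speed.
```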
I would say that it's almost a sham - not in the sense that it's worthless, just that it markets to people who are apt to exaggerate its worth. Let's consider a few of the things that people might do with such 'bought' primes: they might try to decrypt something, encipher something, or have some sort of sentimental value for a particular prime. First: trying to decrypt something by buying lists of 400 digit primes. It seems to me that somehow, we have figured out that a message is enciphered using a 400 digit prime (likely the product of that prime and another, larger prime). We are trying to brute force attack it - but how many 400 digit primes are there? By the prime number theorem, we know there are about $\frac{x}{\log x}$ primes up to $x$, so we consider $\dfrac{10^{400}}{400 \log 10} - \dfrac{10^{399}}{399 \log 10}$, which is of the order $10^{397}$. Even if the list had each of these primes on it, just trying them all out is computationally improbable (to be... generous). It's akin to saying: we know this door is locked by a key of about this size, so let's get every such key in the world and try them. Secondly: we are trying to encipher something. I will assume that we are going to use some public key cryptography and are trying to come up with something inconveniently large to decrypt, perhaps using RSA. But then we will probably use 2 primes, resulting in a number of over 800 digits. That's annoyingly large. Worse, it is very easy to find your own 400 digit prime. One would expect to find one at random, by checking consecutive odd 400 digit numbers, every thousandth guess or so (on average about $400 \log 10 \approx 920$ candidates need to be checked, before accounting for the fact that even numbers can be skipped). One would assume that needing such incredible security would mean that you at least own a computer, and therefore have the capabilities of finding your own large primes. Thirdly, sentimental value. This reminds me of the 'Name a Star after Someone' campaign, like here. The author John Allen Paulos, I think, once quipped in his book Innumeracy that he should like to offer to name numbers after people, maybe charging more for prime numbers, etc. But in regards to your fear of buying a number that someone else has bought - suppose that they choose 100 different 400 digit primes randomly for each list they sell. Then it wouldn't be until something like their $10^{200}$th sale that we would expect someone to have bought the same number as you. However, if we consider this instead like the same-birthday 'paradox,' this number would be several dozen orders of magnitude reduced, and yet still meaninglessly large. There isn't enough money in the world to get to that level. In fact, they could name a different 400 digit prime after each human, every day, for longer than our sun will be around. I should start selling primes on ebay. But the awards for computing larger and larger primes are not because of the use of the resulting prime number, but instead because it inspires development of computational and algorithmic methods. Finding large primes quickly becomes very, very challenging. I believe the current award is for a prime of 100,000,000 digits, a very challenging number even to store in a computer's memory, let alone manipulate.
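Both the prime-count estimate and the "every thousandth guess" claim are easy to check numerically. Here is a minimal sketch (Python with sympy, which is an extra dependency; testing 400-digit candidates takes a noticeable but modest amount of time):

```python
import math
import random
from sympy import isprime

# Prime number theorem estimate of the number of 400-digit primes:
# 10^400/(400 ln 10) - 10^399/(399 ln 10), computed via its base-10 logarithm.
log10_count = 399 + math.log10(10 / (400 * math.log(10)) - 1 / (399 * math.log(10)))
print(f"roughly 10^{log10_count:.1f} primes with 400 digits")   # about 10^397.0

# Find a 400-digit prime by scanning consecutive odd numbers from a random start.
n = random.randrange(10**399, 10**400) | 1    # random odd 400-digit number
checked = 1
while not isprime(n):
    n += 2
    checked += 1
print(f"prime found after checking {checked} odd candidates")   # typically a few hundred
```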
GTU First Year Engineering (Semester 2) Vector Calculus and Linear Algebra December 2013 Total marks: -- Total time: -- INSTRUCTIONS (1) Assume appropriate data and state your reasons (2) Marks are given to the right of every question (3) Draw neat diagrams wherever necessary
1 (a) (i) Show that the differential form under the integral of \[ I= \int^{(2,4,0)}_{(0,-1,1)} e^{x-y+z^2} (dx - dy + 2z\,dz) \] is exact in space and evaluate the integral. 5 M
1 (a) (ii) A parametric representation of a surface is given. Identify and sketch the surface: \[ \bar{r}(u,v) = a \cos u\, \widehat{i} + a \sin u\, \widehat{j} + v\, \widehat{k}, \] where $u, v$ vary in the rectangle $R: 0 \le u \le 2\pi$, $-1 \le v \le 1$. 2 M
1 (b) For which values of $a$ will the following system have no solution? Exactly one solution? Infinitely many solutions? \[ x + 2y - 3z = 4, \quad 3x - y + 5z = 2, \quad 4x + y + (a^2 - 14)z = a + 2. \] 4 M
1 (c) Let A be the matrix \[ \begin{bmatrix}3 &1 \\2 &1 \end{bmatrix}. \] Let $P_1(x) = x^2 - 9$, $P_2(x) = x + 3$, and $P_3(x) = x - 3$. Show that $P_1(A) = P_2(A)\,P_3(A)$. 3 M
2 (a) (i) Verify the Gauss divergence theorem for \[ \bar{F}= 7x\, \widehat{i} - z\, \widehat{k} \] over the sphere $x^2 + y^2 + z^2 = 4$. 5 M
2 (a) (ii) Find the directional derivative of $f(x,y,z) = xyz$ at the point $P: (-1,1,3)$ in the direction of the vector \[ \bar{a}= \widehat{i}-2\widehat{j}+2\widehat{k}. \] 2 M
2 (b) Find the inverse of the matrix \[ A= \begin{bmatrix}1 &2 &3 \\ &5 &3 \\1 &0 &8 \end{bmatrix} \] using row operations. 4 M
2 (c) Determine whether the set of all polynomials $a_0 + a_1 x + a_2 x^2 + a_3 x^3$ for which $a_0, a_1, a_2$ and $a_3$ are integers is a subspace of $P_3$. 3 M
3 (a) (i) Show that the set of all 2×2 matrices of the form \[ \begin{bmatrix}a &1 \\1 &b \end{bmatrix} \] with addition defined by \[ \begin{bmatrix}a &1 \\ 1 &b \end{bmatrix} + \begin{bmatrix}c &1 \\ 1 &d \end{bmatrix} = \begin{bmatrix}a+c &1 \\ 1 &b+d \end{bmatrix} \] and scalar multiplication defined by \[ k\begin{bmatrix}a&1 \\ 1 &b \end{bmatrix} = \begin{bmatrix}ka&1 \\ 1 &kb \end{bmatrix} \] is a vector space. 5 M
3 (a) (ii) Find the area of the parallelogram determined by the vectors \[ \bar{u}=(2,3,0), \quad \bar{v}=(-1, 2,-2). \] 2 M
3 (b) Using Green's theorem, evaluate the line integral $\oint_C (\sin y\, dx + \cos x\, dy)$ counterclockwise, where C is the boundary of the triangle with vertices (0,0), (π,0), (π,1). 4 M
3 (c) The velocity vector \[ \bar{v}= \bar{r}\,'(t)=x^3 \widehat{k} \] of a fluid motion is given. Is the flow irrotational? Incompressible? Find the path of the particle. 3 M
4 (a) (i) Let $P_1 = 1+x$, $P_2 = 1+x^2$ and $P_3 = x+x^2$. Show that the set $S = \{P_1, P_2, P_3\}$ is a basis for $P_2$. Find the coordinate vector of $P = 2 - x + x^2$ with respect to S. 5 M
4 (a) (ii) Use appropriate identities, where required, to determine which of the following sets of vectors in $F(-\infty, \infty)$ are linearly dependent: (i) $x, \cos x$; (ii) $\cos 2x, \sin^2 x, \cos^2 x$. 2 M
4 (b) Find the rank of the matrix \[ A = \begin{bmatrix}1&4&5&2 \\ 2&1&3&0 \\ -1&3&2&2 \end{bmatrix}. \] 4 M
4 (c) Use Cramer's rule to solve the system $x_1 + 3x_2 + x_3 = 4$, $2x_1 - x_2 = -2$, $4x_1 - 3x_3 = 0$. 3 M
5 (a) (i) Let W be the subspace of $\mathbb{R}^5$ spanned by the vectors \[ \bar{v}_1 = (1,4,5,6,9), \ \bar{v}_2 = (3,-2,1,4,-1), \\ \bar{v}_3= (-1,0,-1,-2,-1), \ \bar{v}_4= (2,3,5,7,8).\] Find a basis for the orthogonal complement $W^{\perp}$ of W. 5 M
5 (a) (ii) Sketch the unit circle in an xy-coordinate system in $\mathbb{R}^2$ using the inner product \[ \left( \bar{u}, \bar{v} \right) = \dfrac{1}{4}u_1v_1 + \dfrac{1}{16}u_2v_2. \] 2 M
5 (b) Find the least squares solution of the linear system $AX = b$ given by $2x - 2y = 2$, $x + y = -1$, $3x + y = 1$. Also find the orthogonal projection of b on the column space of A. 4 M
5 (c) Let $\mathbb{R}^2$ have the Euclidean inner product. Use the Gram-Schmidt process to transform the basis vectors \[ \bar{u}_1=(1,-3), \ \bar{u}_2=(2,2) \] into an orthogonal basis. 3 M
6 (a) Find a matrix P that diagonalizes \[ A = \begin{bmatrix}-1 &4 &-2 \\-3 &4 &0 \\-3 &1 &3 \end{bmatrix} \] and determine $P^{-1}AP$. 7 M
6 (b) (i) Find the geometric and algebraic multiplicity of each eigenvalue of \[ \begin{bmatrix}2&0 \\ 1&2 \end{bmatrix}. \] 2 M
6 (b) (ii) Let $u = (u_1, u_2)$, $v = (v_1, v_2)$ be vectors in $\mathbb{R}^2$. Verify that the weighted Euclidean inner product $(u, v) = 3u_1v_1 + 5u_2v_2$ satisfies the four inner product axioms. 2 M
6 (c) Given the quadratic equation $x^2 - 16y^2 + 128y = 256$. A translation will put the conic in standard position. Name the conic and give its equation in the translated coordinate system. 3 M
7 (a) (i) Find the standard matrix for the stated composition of linear operators on $\mathbb{R}^2$: (a) A rotation of 60°, followed by an orthogonal projection on the x-axis, followed by a reflection about the line y = x. (b) A dilation with factor k = 2, followed by a rotation of 45°, followed by a reflection about the y-axis. 5 M
7 (a) (ii) Determine whether the function $T: V \to \mathbb{R}$, where V is an inner product space and $T(u) = \|u\|$, is a linear transformation. Justify your answer. 2 M
7 (b) Let $T: \mathbb{R}^2 \to \mathbb{R}^3$ be the linear transformation defined by \[ T \left( \begin{bmatrix}x_1\\x_2\end{bmatrix} \right)= \begin{bmatrix}x_2\\-5x_1+13x_2 \\-7x_1 + 16x_2 \end{bmatrix}. \] Find the matrix for the transformation T with respect to the bases \[ B=\{\bar{u}_1, \bar{u}_2 \} \text{ for } \mathbb{R}^2 \text{ and } B' = \{ \bar{v}_1, \bar{v}_2, \bar{v}_3 \} \text{ for } \mathbb{R}^3, \] where \[ \bar{u}_1 = \begin{bmatrix}3 \\1 \end{bmatrix}, \ \bar{u}_2 = \begin{bmatrix}5 \\2\end{bmatrix}, \ \bar{v}_1 = \begin{bmatrix}1 \\ 0 \\-1 \end{bmatrix}, \ \bar{v}_2 = \begin{bmatrix}-1 \\ 2 \\ 2 \end{bmatrix}, \ \bar{v}_3 = \begin{bmatrix}0\\ 1 \\2\end{bmatrix}. \] 3 M
7 (c) Show that the linear operator $T: \mathbb{R}^2 \to \mathbb{R}^2$ defined by the equations \[ w_1=x_1+2x_2, \quad w_2 = -x_1+ x_2 \] is one-to-one, and find $T^{-1}(w_1, w_2)$. 3 M
K. J. DUNCAN Articles written in Journal of Astrophysics and Astronomy Volume 40 Issue 2 April 2019 Article ID 0009 We report optical observations of TGSS J1054$+$5832, a candidate high-redshift ($z = 4.8 \pm 2$) steep-spectrum radio galaxy, in $r$ and $i$ bands, using the faint object spectrograph and camera mounted on the 3.6-m Devasthal Optical Telescope (DOT). The source, previously detected at 150 MHz from the Giant Metrewave Radio Telescope (GMRT) and at 1420 MHz from the Very Large Array, has a known counterpart in near-infrared bands with a $K$-band magnitude of AB 22. The source is detected in the $i$-band with AB 24.3 $\pm$ 0.2 magnitude in the DOT images presented here. The source remains undetected in the $r$-band image at a 2.5$\sigma$ depth of AB 24.4 mag over a $1.2^{\prime\prime}\times 1.2^{\prime\prime}$ aperture. An upper limit to the $i-K$ color is estimated to be $\sim$2.3, suggesting youthfulness of the galaxy with active star formation. These observations highlight the importance and potential of the 3.6-m DOT for detections of faint galaxies.
A Upadhyay Articles written in Pramana – Journal of Physics Volume 78 Issue 4 April 2012 pp 613-623 Research Articles A sequential three-dimensional (3D) particle-in-cell simulation code PICPSI-3D with a user-friendly graphical user interface (GUI) has been developed and used to study the interaction of plasma with ultrahigh intensity laser radiation. A case study of laser–plasma-based electron acceleration has been carried out to assess the performance of this code. Simulations have been performed for a Gaussian laser beam of peak intensity $5 \times 10^{19}$ W/cm$^2$ propagating through an underdense plasma of uniform density $1 \times 10^{19}$ cm$^{-3}$, and for a Gaussian laser beam of peak intensity $1.5 \times 10^{19}$ W/cm$^2$ propagating through an underdense plasma of uniform density $3.5 \times 10^{19}$ cm$^{-3}$. The electron energy spectrum has been evaluated at different time-steps during the propagation of the laser beam. When the plasma density is $1 \times 10^{19}$ cm$^{-3}$, simulations show that the electron energy spectrum forms a monoenergetic peak at $\sim 14$ MeV, with an energy spread of $\pm 7$ MeV. On the other hand, when the plasma density is $3.5 \times 10^{19}$ cm$^{-3}$, simulations show that the electron energy spectrum forms a monoenergetic peak at $\sim 23$ MeV, with an energy spread of $\pm 7.5$ MeV.
I have a Boolean function that outputs a one on half of its inputs and outputs a zero on half of its inputs; the inputs are assumed to be coming from the uniform distribution. Another way of saying this is that the output is one half the time and zero half the time, on average. What is the standard notation for describing this output scenario? If you absolutely insist on using symbolic notation, you can state that $\mathbb{E}[f] = 1/2$ (i.e., the expectation of $f$ is $1/2$, presumably with respect to the uniform measure over all inputs). You could also state that $\hat{f}(\emptyset) = 1/2$ (i.e., the Fourier coefficient at the empty set equals $1/2$), but $\mathbb{E}[f] = 1/2$ is probably better. Both of these assume that you already know that $f$ is Boolean. A Boolean function is called balanced if it is zero on half its inputs and one on half its inputs. So, "balanced" might be the term you are looking for. If there's a word for it, use that one (see other answers). Mathematically speaking, you look at functions $f : X \to \{0,1\}$ so that $\qquad\displaystyle |f^{-1}(0)| = |f^{-1}(1)|$ where $f^{-1}$ denotes the inverse image of $f$. So this is how one could define the property: Let $X$ and $Y$ be finite sets with $|X| \in |Y|\mathbb{N}$. A function $f : X \to Y$ is balanced if $\qquad\displaystyle |f^{-1}(y)| = \frac{|X|}{|Y|}$ for all $y \in Y$. You could say that that Boolean function follows the Bernoulli distribution, with an outcome of success (outputting a 1) having probability $p = 0.5$, and an outcome of failure (outputting a 0) having probability $q = (1-p) = 1-0.5 = 0.5$. Depending on your context you can also make an output of 0 your success condition and flip your failure condition. In terms of notation, if $x$ represents an arbitrary input and you have such a function $f$, you could say $f(x) \sim \mathrm{Bernoulli}(0.5)$ and $\Pr(f(x)=1) = 0.5$, meaning that the probability of outputting a one on input $x$ is 0.5, and the former portion means that $f(x)$ follows the Bernoulli distribution.
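For small input sizes, the "balanced" property and the statement $\mathbb{E}[f] = 1/2$ can be checked by brute force. A minimal sketch (Python; the example functions are only illustrations):

```python
from itertools import product

def is_balanced(f, n):
    """True iff the Boolean function f on n bits outputs 1 on exactly half of all inputs."""
    ones = sum(f(x) for x in product((0, 1), repeat=n))
    return ones == 2 ** (n - 1)

def expectation(f, n):
    """E[f] under the uniform distribution on {0,1}^n (the Fourier coefficient at the empty set)."""
    return sum(f(x) for x in product((0, 1), repeat=n)) / 2 ** n

parity = lambda x: sum(x) % 2            # a classic balanced function
majority3 = lambda x: int(sum(x) >= 2)   # majority on 3 bits is also balanced

print(is_balanced(parity, 4), expectation(parity, 4))        # True 0.5
print(is_balanced(majority3, 3), expectation(majority3, 3))  # True 0.5
```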
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). It is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). 
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
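Puzzle 23 and the adjunction in the theorem above can also be checked exhaustively on a small example. Below is a minimal sketch (Python; the sets, the map f and the variable names are invented purely for illustration) that encodes \(f^{\ast}\), \(f_{!}\) and \(f_{\ast}\) for finite sets and verifies both adjunctions by brute force.

```python
from itertools import combinations

def preimage(f, S):                  # f*  : P(Y) -> P(X)
    return {x for x in f if f[x] in S}

def image(f, S):                     # f_! : P(X) -> P(Y), "for some"
    return {f[x] for x in S}

def direct_image_forall(f, S, Y):    # f_* : P(X) -> P(Y), "for all" (vacuously true if no preimage)
    return {y for y in Y if all(x in S for x in f if f[x] == y)}

def subsets(A):
    A = list(A)
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

# toy "thermometer" map f : X -> Y; nothing maps to 'c'
X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}

# f_! is left adjoint to f*:  f_!(S) ⊆ T  iff  S ⊆ f*(T)
assert all((image(f, S) <= T) == (S <= preimage(f, T))
           for S in subsets(X) for T in subsets(Y))

# f* is left adjoint to f_*:  f*(T) ⊆ S  iff  T ⊆ f_*(S)
assert all((preimage(f, T) <= S) == (T <= direct_image_forall(f, S, Y))
           for S in subsets(X) for T in subsets(Y))
print("both adjunctions verified on this example")
```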
Let $f: \Omega\rightarrow \mathbb{R}$ where $\Omega\subset\mathbb{R}^d$ is bounded with Lipschitz smooth boundary. Further suppose that $f\in\mathcal{H}^{\tau}(\Omega)$, $\tau>\frac{d}{2}$ (i.e. $f$ is continuous) and $f$ is zero on the boundary. Let $$ \Omega_{\delta} = \{ x\in\Omega : \inf_{y\in\partial\Omega} \left\|x-y\right\|_2 > \delta \}, $$ where $\delta>0$ is small enough to preserve smoothness of the boundary of $\Omega_{\delta}$. See: Shrinking a Lipschitz smooth domain. Is it true that for sufficiently small $\delta>0$: $\left\|f\right\|_{L_2(\Omega\setminus\Omega_{\delta})} \leq C\delta^\alpha\left\|f\right\|_{L_2(\Omega)}$, with $\alpha\geq 1$ and $C$ a constant not depending on $\delta$ or $f$? If this is not possible, what is the largest $\alpha\in(0,1)$ so that the above inequality holds? Thanks in advance. Note this is almost identical to my previous post Bounding a smooth function near the boundary, but presented in a clearer fashion. I would be grateful for any comments or help.
I have to show that the function $f(x)=\langle x,(3,4)\rangle$ is a linear function. I understand that to prove a function is not linear one shows $f(x+y)\neq f(x)+f(y)$. But honestly I have no idea where to start to prove it. Any ideas or advice? Thanks! I think that $f(x)=\langle x,(3,4)\rangle$ has the following meaning: $(3,4)$ is a given vector in $\mathbb R^2$ and with $x=(x_1,x_2) \in \mathbb R^2$ we have $$f(x)=\langle(x_1,x_2),(3,4)\rangle,$$ where $\langle \cdot,\cdot\rangle$ denotes the usual inner product on $\mathbb R^2.$ Hence $$f(x)=3x_1+4x_2.$$ Now it is your turn to show that $$f(x+y)=f(x)+f(y)$$ and $$f( \alpha x)=\alpha f(x)$$ for all $x,y \in \mathbb R^2$ and all $\alpha \in \mathbb R.$
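If you want a quick symbolic sanity check of those two identities before writing the proof by hand, here is a minimal sketch (Python with sympy; purely illustrative, and no substitute for the proof itself):

```python
import sympy as sp

x1, x2, y1, y2, a = sp.symbols('x1 x2 y1 y2 a')

def f(v):                      # f(v) = <v, (3, 4)> = 3*v1 + 4*v2
    return 3 * v[0] + 4 * v[1]

x, y = (x1, x2), (y1, y2)
print(sp.simplify(f((x1 + y1, x2 + y2)) - (f(x) + f(y))))   # 0  -> additivity holds
print(sp.simplify(f((a * x1, a * x2)) - a * f(x)))          # 0  -> homogeneity holds
```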
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. 
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
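The "division box" construction described in the first message of this chat excerpt reads like the Euclidean algorithm: divide, test the remainder for zero, and otherwise feed the divisor and remainder back into the box. A minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b);
    when the remainder reaches zero, a holds the gcd."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21
```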
Definition:Subtraction Let $\N$ be the set of natural numbers. Let $m, n \in \N$ such that $m \le n$. Let $p \in \N$ such that $n = m + p$. Then we define the operation subtraction as: $n - m = p$ The natural number $p$ is known as the difference between $m$ and $n$. As the set of integers is the Inverse Completion of Natural Numbers, it follows that elements of $\Z$ are the isomorphic images of the elements of equivalence classes of $\N \times \N$ where two tuples are equivalent if the difference between the two elements of each tuple is the same. It follows that: $\forall a, b, c, d \in \N: \eqclass {\tuple {a, b} } \boxminus - \eqclass {\tuple {c, d} } \boxminus = \eqclass {\tuple {a, b} } \boxminus + \paren {-\eqclass {\tuple {c, d} } \boxminus} = \eqclass {\tuple {a, b} } \boxminus + \eqclass {\tuple {d, c} } \boxminus$ $\forall x, y \in \Z: x - y = x + \paren {-y}$ Let $\struct {\Q, +, \times}$ be the field of rational numbers. The operation of subtraction is defined on $\Q$ as: $\forall a, b \in \Q: a - b := a + \paren {-b}$ where $-b$ is the negative of $b$ in $\Q$. Let $\struct {\R, +, \times}$ be the field of real numbers. The operation of subtraction is defined on $\R$ as: $\forall a, b \in \R: a - b := a + \paren {-b}$ where $-b$ is the negative of $b$ in $\R$. Let $\struct {\C, +, \times}$ be the field of complex numbers. The operation of subtraction is defined on $\C$ as: $\forall a, b \in \C: a - b := a + \paren {-b}$ where $-b$ is the negative of $b$ in $\C$. Let $\overline \R$ denote the extended real numbers. Define extended real subtraction or subtraction on $\overline \R$, denoted $-_{\overline \R}: \overline \R \times \overline \R \to \overline \R$, by: $\forall x, y \in \R: x -_{\overline \R} y := x -_{\R} y$ where $-_\R$ denotes real subtraction; $\forall x \in \R: x -_{\overline \R} \paren {+\infty} = \paren {-\infty} -_{\overline \R} x := -\infty$ $\forall x \in \R: x -_{\overline \R} \paren {-\infty} = \paren {+\infty} -_{\overline \R} x := +\infty$ $\paren {-\infty} -_{\overline \R} \paren {+\infty} := -\infty$ $\paren {+\infty} -_{\overline \R} \paren {-\infty} := +\infty$ In particular, the expressions: $\paren {+\infty} -_{\overline \R} \paren {+\infty}$ $\paren {-\infty} -_{\overline \R} \paren {-\infty}$ are considered void and should be avoided. Let $\struct {R, +, \circ}$ be a ring. The operation of subtraction $a - b$ on $R$ is defined as: $\forall a, b \in R: a - b := a + \paren {-b}$ where $-b$ is the (ring) negative of $b$. Also known as: the value $a - b$ (for any of the above definitions) is often called the difference between $a$ and $b$. In this context, whether $a - b$ or $b - a$ is being referred to is often irrelevant, but it pays to be careful. In certain historical texts, the term subduction can sometimes be seen. These symbols first appeared in print in $1481$.
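The case analysis for extended real subtraction above translates directly into code. A minimal sketch (Python; note that IEEE floating point would silently return NaN for the two void cases, so they are raised as errors here to match the definition):

```python
import math

def ext_sub(x, y):
    """Subtraction on the extended reals; the two 'void' cases raise an error."""
    if math.isinf(x) and math.isinf(y) and x == y:
        raise ValueError("(+inf) - (+inf) and (-inf) - (-inf) are undefined")
    if math.isinf(x):
        return x     # (+inf) - y = +inf and (-inf) - y = -inf for finite or opposite-sign y
    if math.isinf(y):
        return -y    # x - (+inf) = -inf and x - (-inf) = +inf for finite x
    return x - y     # ordinary real subtraction

print(ext_sub(3.0, math.inf))          # -inf
print(ext_sub(-math.inf, math.inf))    # -inf
```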
Short answer: because it's a complex torus. Explanation below would take as through many topics. Topological covers The curve should be considered over complex numbers, where it can be seen as a Riemann surface, therefore a two-dimensional oriented closed variety. How to find out whether this particular one is a sphere, torus or something else? Just consider a two-fold covering onto $x$-axis and count the Euler characteristics as $-2 \cdot 2 + 4 = 0$ (don't forget the point at infinity.) Complex tori So this is a torus; now a torus with complex structure can be always defined as a quotient $\mathbb C/\Lambda$, where $\Lambda$ is the lattice of periods. It can be written as integrals $\int_\gamma \omega$ of any differential form $\omega$ over all elements $\gamma \in \pi_1$. The choice of differential form is unique up to $\lambda \in \mathbb C$. Algebraic addition A complex map of a torus into itself that leaves lattice $\Lambda$ fixed can be only given by a shift. Once you select a base point, these shifts are in one-to-one correspondence with points of $E$. We have unique distinguished point — infinity — so let's choose it as the base point. It follows that we now have an addition map $(u, v) \to u\oplus v$, though defined purely algebraically so far. Geometric meaning Now let's stop and ask ourselves: how to see this addition geometrically? For a start, consider map that sends $u$ to the third point of intersection with the line containing both $u$ and 0 (the infinity point). It's not hard to see that we fix 0 but change every class $\gamma$ in a fundamental group into $-\gamma$, so we must have the map $u\mapsto -u$ here. Group theory laws What would happen if you took a line through $u$ and $v$? By temporarily changing coordinates so that $u$ becomes the infinity point, one writes down that map as $(u, v) \mapsto -(u+v)$. Now if you took three points, there would be two different ways to add them; those would lead to $(u+v)+w$ and $u+(v+w)$ as complex numbers, which we know to be associative. Logically proven In the above, we worked over complex numbers, but we proved associativity which is a formal theorem about substitution of some rational expressions into others. Since it works over complex fields, it is required to work over all fields. (In any case, the big discovery of mid-20th century was that you actually can take all of the intuition described above and apply it to the case of elliptic curves over arbitrary field) Analytic computations (bonus) Consider a line that passes through points $u$, $0$ and $-u$. This line is actually vertical, and $y$ is a well-defined function there which has two zeroes and one double pole at infinity. After a shift and multiplication of several such functions we'll be getting a meromorphic function on a complex torus with poles $p_i$ and zeroes $z_i$ having the property $\sum p_i = \sum z_i$. This method can give all such functions and only them; it's not hard to see that only meromorphic functions with this property are allowed on elliptic curve. For example, $\wp'$-functions are the ones that have triple pole at 0 and single zeroes at points $\frac12w_1, \frac12w_2, \frac12(w_1+ w_2)$ where $w_1, w_2$ are generators of $\Lambda$. Jacobian of a curve (bonus 2) The formula above describes what types of functions are allowed on our curve. It is a good idea to organize this information into a curve: in this case, the information is that a single expression $p_1 + p_2 + \cdots + p_n - z_1 - \cdots - z_n$, considered a point of the curve, must vanish. 
For curves of higher genus, more relations are necessary; for $\mathbb C\mathbb P^1$, no relations beyond number of poles = number of zeroes are necessary. Those are relations in the group of classes of divisors (= Jacobian of a curve) mentioned in other answers. In particular, elliptic curves coincide with their Jacobian and that's another explanation for the additive law.
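The associativity claim can also be checked directly, without any complex analysis, by brute force over a finite field. The sketch below (Python; the curve $y^2 = x^3 + 2x + 3$ over $\mathbb{F}_{97}$ is an arbitrary illustrative choice, not taken from the discussion above) implements the chord-tangent addition law with the point at infinity as identity and tests associativity on triples of sampled points.

```python
import random

P_MOD, A, B = 97, 2, 3          # curve y^2 = x^3 + A*x + B over F_97 (nonsingular, illustrative)

def on_curve(P):
    x, y = P
    return (y * y - (x ** 3 + A * x + B)) % P_MOD == 0

def add(P, Q):
    """Chord-tangent addition; None plays the role of the point at infinity (the identity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                           # P + (-P) = identity
    if P == Q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD           # chord slope
    x3 = (m * m - x1 - x2) % P_MOD
    y3 = (m * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

points = [None] + [(x, y) for x in range(P_MOD) for y in range(P_MOD) if on_curve((x, y))]

sample = random.sample(points, 8)
assert all(add(add(u, v), w) == add(u, add(v, w))
           for u in sample for v in sample for w in sample)
print(f"{len(points)} points on the curve; associativity holds on the sampled triples")
```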
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [begin excerpt, p. 211-215] [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [ie, \(\theta_1,\ldots,\theta_4\)] although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formula 21 and 22 are simply different 95% confidence procedures. Formula 21 is has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855. 
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1 – \alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\alpha\) will differ from \(\alpha\) by more than \( 2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042 \) This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \( .956 \leq \theta \leq 1.227. \) On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355. 
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between .688 and .956 or between 1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that \( \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} \) Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
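The excerpt does not reproduce formulae (21) and (22) themselves, but the quoted numbers are consistent with twelve observations drawn from a Uniform(0, θ) distribution and a maximum-based interval [X, 20^{1/12}·X] (for example, X ≈ 0.956 gives the upper limit 1.227). Taking that as an assumption, the sampling experiment Neyman describes is a few lines of code; the point is only to watch the relative frequency of correct assertions settle near α = 0.95, whatever the true θ is.

```python
import random

ALPHA = 0.95
N_OBS = 12
K = (1.0 / (1.0 - ALPHA)) ** (1.0 / N_OBS)   # 20**(1/12), about 1.2836

def ci_from_max(sample):
    """Assumed 'Assistant'-style interval: [X, K*X] where X is the sample maximum."""
    X = max(sample)
    return X, K * X

def coverage(theta, trials=20000):
    hits = 0
    for _ in range(trials):
        sample = [random.uniform(0.0, theta) for _ in range(N_OBS)]
        lo, hi = ci_from_max(sample)
        hits += (lo <= theta <= hi)
    return hits / trials

for theta in (1.0, 2.0, 3.0, 4.0):
    print(theta, round(coverage(theta), 3))   # each close to 0.95, regardless of theta
```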
Defining parameters

Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.de (of order \(12\) and degree \(4\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 720 \)
Character field: \(\Q(\zeta_{12})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).

                      Total   New   Old
Modular forms            48    16    32
Cusp forms                0     0     0
Eisenstein series        48    16    32

The following table gives the dimensions of subspaces with specified projective image type.

               \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension          0        0        0        0
Discretization-invariant Bayesian inversion and Besov space priors

1. Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68 (Gustaf Hallstromin katu 2b), FI-00014, Finland
2. Tampere University of Technology, Institute of Mathematics, P.O. Box 553, 33101 Tampere, Finland

The inverse problem of recovering an unknown function $U$ from the noisy measurement $M = AU + \varepsilon$ is considered, where $U$ is a function on a domain of $\mathbb{R}^d$. Here $A$ is a smoothing linear operator and $\varepsilon$ is Gaussian white noise. The data is a realization $m_k$ of the random variable $M_k = P_k A U + P_k\varepsilon$, where $P_k$ is a linear, finite-dimensional operator related to the measurement device. To allow computerized inversion, the unknown is discretized as $U_n = T_n U$, where $T_n$ is a finite-dimensional projection, leading to the computational measurement model $M_{kn} = P_k A U_n + P_k\varepsilon$. Bayes' formula then gives the posterior distribution
$$\pi_{kn}(u_n \mid m_{kn}) \sim \Pi_n(u_n)\exp\Big(-\tfrac{1}{2}\,\|m_{kn} - P_k A u_n\|_2^2\Big),$$
and the mean $\overline{u}_{kn} := \int u_n\, \pi_{kn}(u_n \mid m_{kn})\, du_n$ is considered as the reconstruction of $U$. We discuss a systematic way of choosing prior distributions $\Pi_n$ for all $n \geq n_0 > 0$ by achieving them as projections of a distribution in an infinite-dimensional limit case. Such a choice of prior distributions is discretization-invariant in the sense that the $\Pi_n$ represent the same a priori information for all $n$ and that the mean $\overline{u}_{kn}$ converges to a limit estimate as $k, n \to \infty$. Gaussian smoothness priors and wavelet-based Besov space priors are shown to be discretization-invariant. In particular, Bayesian inversion in dimension two with a $B^1_{11}$ prior is related to penalizing the $\ell^1$ norm of the wavelet coefficients of $U$.

Keywords: wavelet, discretization invariance, inverse problem, statistical inversion, Besov space, Bayesian inversion, reconstruction.
Mathematics Subject Classification: 60F17, 65C20, 42C40.
Citation: Matti Lassas, Eero Saksman, Samuli Siltanen. Discretization-invariant Bayesian inversion and Besov space priors. Inverse Problems & Imaging, 2009, 3 (1): 87-122. doi: 10.3934/ipi.2009.3.87
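As a rough illustration of the abstract's last point, the connection between a $B^1_{11}$-type prior and an $\ell^1$ penalty on wavelet coefficients, here is a minimal one-dimensional sketch. It assumes the simplest possible setting (identity forward operator $A = I$, additive noise, single-level orthonormal Haar transform), none of which is taken from the paper; in that setting the MAP estimate is soft-thresholding of the wavelet coefficients.

    import numpy as np

    def haar(x):
        # single-level orthonormal Haar transform (length of x must be even)
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
        return np.concatenate([a, d])

    def ihaar(c):
        half = len(c) // 2
        a, d = c[:half], c[half:]
        x = np.empty(len(c))
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        return x

    def soft(c, t):
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    rng = np.random.default_rng(0)
    u = np.repeat([0.0, 1.0, 0.5, 0.0], 64)          # piecewise-constant "truth"
    m = u + 0.1 * rng.standard_normal(u.size)        # measurement m = u + noise (A = I)

    lam, sigma2 = 2.0, 0.1 ** 2
    u_map = ihaar(soft(haar(m), lam * sigma2))       # MAP estimate under the l1 wavelet prior
    # the thresholded reconstruction is typically closer to u than the raw data
    print(float(np.linalg.norm(m - u)), float(np.linalg.norm(u_map - u)))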
Airspeed is the speed of an aircraft relative to the air. Among the common conventions for qualifying airspeed are: indicated airspeed ("IAS"), calibrated airspeed ("CAS"), true airspeed ("TAS"), equivalent airspeed ("EAS") and density airspeed. During cruising flight at altitudes, airspeeds, and temperatures common for airliners, the four speeds trace a shape that looks like the mathematical square root symbol (√). Starting with indicated airspeed, calibrated is normally very close to the indicated airspeed, while equivalent is normally less than both indicated and calibrated, and true is normally higher than the other three. [1]

The measurement and indication of airspeed is ordinarily accomplished on board an aircraft by an airspeed indicator ("ASI") connected to a pitot-static system. The pitot-static system comprises one or more pitot probes (or tubes) facing the on-coming air flow to measure pitot pressure (also called stagnation, total or ram pressure) and one or more static ports to measure the static pressure in the air flow. These two pressures are compared by the ASI to give an IAS reading.

Indicated airspeed

Indicated airspeed (IAS) is the airspeed indicator reading (ASIR) uncorrected for instrument, position, and other errors. From current EASA definitions: Indicated airspeed means the speed of an aircraft as shown on its pitot static airspeed indicator calibrated to reflect standard atmosphere adiabatic compressible flow at sea level uncorrected for airspeed system errors. [2]

Outside the former Soviet bloc, most airspeed indicators show the speed in knots (nautical miles per hour). Some light aircraft have airspeed indicators showing speed in statute miles per hour or kilometers per hour. An airspeed indicator is a differential pressure gauge with the pressure reading expressed in units of speed, rather than pressure. The airspeed is derived from the difference between the ram air pressure from the pitot tube, or stagnation pressure, and the static pressure. The pitot tube is mounted facing forward; the static pressure is frequently detected at static ports on one or both sides of the aircraft. Sometimes both pressure sources are combined in a single probe, a pitot-static tube. The static pressure measurement is subject to error due to inability to place the static ports at positions where the pressure is true static pressure at all airspeeds and attitudes. The correction for this error is the position error correction (PEC) and varies for different aircraft and airspeeds. Further errors of 10% or more are common if the airplane is flown in “uncoordinated” flight.

Calibrated airspeed

Calibrated airspeed (CAS) is indicated airspeed corrected for instrument errors, position error (due to incorrect pressure at the static port) and installation errors. Calibrated airspeed values less than the speed of sound at standard sea level (661.4788 knots) are calculated as follows:

$$ V_c = A_0\sqrt{5\left[\left(\frac{q_c}{P_0}+1\right)^\frac{2}{7}-1\right]} $$

minus position and installation error correction, where

$V_c$ is the calibrated airspeed,
$q_c$ is the impact pressure (inches Hg) sensed by the pitot tube,
$P_0$ is 29.92126 inches Hg, the static air pressure at standard sea level,
$A_0$ is 661.4788 knots, the speed of sound at standard sea level.

Units other than knots and inches of mercury can be used, if used consistently. 
This expression is based on the form of Bernoulli's equation applicable to a perfect, compressible gas. The values for $P_0$ and $A_0$ are consistent with the ISA, i.e. the conditions under which airspeed indicators are calibrated.

Equivalent airspeed

Equivalent airspeed (EAS) is defined as the speed at sea level that would produce the same incompressible dynamic pressure as the true airspeed at the altitude at which the vehicle is flying. An aircraft in forward flight is subject to the effects of compressibility. Likewise, the calibrated airspeed is a function of the compressible impact pressure. EAS, on the other hand, is a measure of airspeed that is a function of incompressible dynamic pressure. Structural analysis is often in terms of incompressible dynamic pressure, so that equivalent airspeed is a useful speed for structural testing. At standard sea level pressure, calibrated airspeed and equivalent airspeed are equal. Up to about 200 knots CAS and 10,000 ft (3,000 m) the difference is negligible, but at higher speeds and altitudes CAS must be corrected for compressibility error to determine EAS. The significance of equivalent airspeed is that, at Mach numbers below the onset of wave drag, all of the aerodynamic forces and moments on an aircraft are proportional to the square of the equivalent airspeed. The equivalent airspeed is closely related to the indicated airspeed shown by the airspeed indicator. Thus, the handling and 'feel' of an aircraft, and the aerodynamic loads upon it, at a given equivalent airspeed, are very nearly constant and equal to those at standard sea level irrespective of the actual flight conditions.

True airspeed

True airspeed ($V_t$) is the speed of the aircraft relative to the atmosphere. The true airspeed and heading of an aircraft constitute its velocity relative to the atmosphere. The vector relationship between the true airspeed and the speed with respect to the ground ($V_g$) is:

$$ V_t = V_g - V_w $$

where $V_w$ is the windspeed vector.

Aircraft flight instruments, however, don't compute true airspeed as a function of groundspeed and windspeed. They use impact and static pressures as well as a temperature input. True airspeed is equivalent airspeed that is corrected for pressure altitude and temperature (which define density). The result is the true physical speed of the aircraft relative to the surrounding body of air. At standard sea level conditions, true airspeed, calibrated airspeed and equivalent airspeed are all equal.

The simplest way to compute true airspeed is using a function of Mach number:

$$ V_t = a_0 \cdot M \sqrt{\frac{T}{T_0}} $$

where
$a_0$ is the speed of sound at standard sea level (661.4788 knots),
$M$ is the Mach number,
$T$ is the temperature (kelvin),
$T_0$ is the standard sea level temperature (288.15 kelvin).

Or, if the Mach number is not known:

$$ V_t = a_0 \cdot \sqrt{5\left[\left(\frac{q_c}{P}+1\right)^\frac{2}{7}-1\right] \cdot \frac{T}{T_0}} $$

where
$a_0$ is the speed of sound at standard sea level (661.4788 knots),
$q_c$ is the impact pressure (inHg),
$P$ is the static pressure (inHg),
$T$ is the temperature (kelvin),
$T_0$ is the standard sea level temperature (288.15 kelvin).

The above equation is only for Mach numbers less than 1.0. True airspeed differs from the equivalent airspeed because the airspeed indicator is calibrated at sea-level ISA conditions, where the air density is 1.225 kg/m³, whereas the air density in flight normally differs from this value. 
$$ \frac{1}{2} \rho V^2 = q = \frac{1}{2} \rho_0 V_e^2 $$

Thus

$$ \frac{V}{V_e} = \sqrt{\frac{\rho_0}{\rho}} $$

where $\rho$ is the air density at the flight condition.

The air density may be calculated from:

$$ \frac{\rho}{\rho_0} = \frac{p \, T_0}{p_0 \, T} $$

where
$p$ is the air pressure at the flight condition,
$p_0$ is the air pressure at sea level = 1013.2 hPa,
$T$ is the air temperature at the flight condition,
$T_0$ is the air temperature at sea level, ISA = 288.15 K.

Source: Aerodynamics of a Compressible Fluid. Liepmann and Puckett 1947. Publishers John Wiley & Sons Inc.

Groundspeed

Ground speed is the speed of the aircraft relative to the ground rather than through the air, which can itself be moving relative to the ground.

References

^ http://www.calpoly.edu/~rcumming/Airspeed.pdf
^ http://easa.europa.eu/ws_prod/g/doc/Agency_Mesures/Certification_Spec/decision_ED_2003_11_RM.pdf
Glauert H., The Elements of Aerofoil and Airscrew Theory, Chapter 2, Cambridge University Press, 1947
Liepmann H. W. and A. E. Pucket, Introduction to Aerodynamics of a Compressible Fluid, John Wiley and Sons, Inc. 1947

External links

A free windows calculator which converts between various airspeeds (true / equivalent / calibrated) according to the appropriate atmospheric (standard and not standard) conditions
Calculate True and Equivalent Airspeed
Calculate Ground Speed and Wind Triangles
True, Equivalent, and Calibrated Airspeed at MathPages
Measurement of Aircraft Airspeed and Altitude (NASA Publication 1046)
Newbyte airspeed converter
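The formulas above translate directly into code. The following sketch uses the units and constants given in the article (impact and static pressure in inches of mercury, speeds in knots, temperatures in kelvin) and applies only to subsonic flight; the example numbers at the end are arbitrary placeholders, not values from the text.

    import math

    A0 = 661.4788      # speed of sound at standard sea level, knots
    P0 = 29.92126      # standard sea level pressure, inHg
    T0 = 288.15        # standard sea level temperature, K

    def cas(qc):
        # calibrated airspeed (knots) from impact pressure qc (inHg), subsonic formula
        return A0 * math.sqrt(5.0 * ((qc / P0 + 1.0) ** (2.0 / 7.0) - 1.0))

    def tas(qc, p, t):
        # true airspeed (knots) from impact pressure qc, static pressure p (inHg), temperature t (K)
        return A0 * math.sqrt(5.0 * ((qc / p + 1.0) ** (2.0 / 7.0) - 1.0) * t / T0)

    def eas(v_true, p, t):
        # equivalent airspeed from true airspeed, using rho/rho0 = (p*T0)/(p0*T)
        return v_true * math.sqrt((p * T0) / (P0 * t))

    # example: qc = 5 inHg, static pressure 20 inHg, outside air temperature -20 C
    qc, p, t = 5.0, 20.0, 253.15
    print(round(cas(qc), 1), round(tas(qc, p, t), 1), round(eas(tas(qc, p, t), p, t), 1))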
I'm looking for a reference for the following result, which is a generalization of the classical theorem of Dirichlet on the approximability of real irrationals by rational numbers: Let $k$ be a number field, $O$ its ring of integers, $v$ an infinite place of $k$, $\alpha$ any element of the completion $k_v$. Let $\|\cdot\|_v$ be the usual absolute value (or its square, if $v$ is a complex place). Let $H$ denote the multiplicative height function relative to $k$ -- that is, for any element $x\in k$, let $H(x)=\prod_w \max(1,\|x\|_w)$, where the product is over all places $w$ of $k$. Then there is a positive real constant $C$ depending only on $k$ such that $$\|\alpha-x\|_v < \frac{C}{H(x)^2}$$ for infinitely many $x\in k$. I think I can prove this, but I am surely not the first. If anyone can tell me a good place to point to for this result, I'd be very grateful -- thanks!
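For what it's worth, here is a quick numerical illustration of the statement in the classical case $k=\mathbb{Q}$ only (where $H(p/q)=\max(|p|,|q|)$ for $p/q$ in lowest terms), not a reference for the number-field version: it lists continued-fraction convergents of $\sqrt{2}$ and shows that $|\alpha - x|\,H(x)^2$ stays bounded, i.e. the inequality holds for some constant $C$.

    import math

    alpha = math.sqrt(2.0)
    x = alpha
    p_prev, p = 1, int(x)      # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    for _ in range(8):
        x = 1.0 / (x - int(x))
        a = int(x)
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        h = max(abs(p), abs(q))
        print(f"{p}/{q}   |alpha - p/q| * H^2 = {abs(alpha - p / q) * h * h:.3f}")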
I was trying to implement the algorithm from the paper "Adapting a Fourier pseudospectral method to Dirichlet boundary conditions for Rayleigh–Bénard convection". I am having a hard time understanding how the boundary conditions are imposed. The author rewrites the no-slip boundary conditions (on the upper and lower boundaries) as $$ \sum_{q} \tilde f_{\bot,pq} = 0 \quad \forall p$$ where $p$ and $q$ are the horizontal and vertical wavenumbers. How do we impose that?
Genotype Refinement workflow for germline short variants

Contents: Overview; Summary of workflow steps; Output annotations; Example; More information about priors; Mathematical details

1. Overview

The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples in a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.

While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.

If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).

After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below). Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before.

2. Summary of workflow steps

Input

Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.

Step 1: Derive posterior probabilities of genotypes

Tool used: CalculateGenotypePosteriors

Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. 
By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals. SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high. For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations. Step 2: Filter low quality genotypes Tool used: VariantFiltration After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF. Step 3: Annotate possible de novo mutations Tool used: VariantAnnotator Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion. Step 4: Functional annotation of possible biological effects Tool options: Funcotator (experimental) Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them. 3. Output annotations The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed. Population Priors New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset. Phred-Scaled Posterior Probability New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. 
The PPs represent a better calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs.

Genotype Quality

Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.

Joint Trio Likelihood

New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood (formula not reproduced in this excerpt) uses the GLs, the genotype likelihoods in [0, 1] probability space.

Joint Trio Posterior

New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior (formula not reproduced in this excerpt) uses the GPs, the genotype posteriors in [0, 1] probability space.

Low Genotype Quality

New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.

High and Low Confidence De Novo

New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.

4. Example

Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0

After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0

The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.

The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically, a low JL indicates that the posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)

5. More information about priors

The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio. 
Input-derived Population Priors If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant. Supporting Population Priors Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors. Family Priors The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently 10-6). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case. Caveats Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios. 6. Mathematical details Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together. Review of Bayes’s Rule HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values: $$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$ In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates. Calculation of Population Priors Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. 
In the biallelic case the priors can be calculated as follows:

$$ P(GT = HomRef) = \dbinom{2}{0} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$

$$ P(GT = Het) = \dbinom{2}{1} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$

$$ P(GT = HomVar) = \dbinom{2}{2} \ln \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$

where Γ is the Gamma function, an extension of the factorial function.

The prior genotype probabilities based on this distribution scale intuitively with the number of samples. For example, a set of 10 samples, 9 of which are HomRef, yields a prior probability of another sample being HomRef of about 90%, whereas a set of 50 samples, 49 of which are HomRef, yields about a 97% probability of another sample being HomRef.

Calculation of Family Priors

Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:

$$ P(G_M,G_F,G_C) = P(\vec{G}) = \cases{ 1-10\mu-2\mu^2 & no MV \cr \mu & 1 MV \cr \mu^2 & 2 MVs} $$

where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability $\mu$ and the two configurations with two Mendelian violations by $\mu^2$. The remaining configurations are considered valid and are assigned the remaining probability so that the total sums to one.

This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples, as in the following expression for the posterior probability of the child having a HomRef genotype:

$$ P(G_C = HomRef \mid D) = \frac{\sum_{G_M,G_F} P(D \mid G_M,G_F,G_C{=}HomRef)\, P(G_M,G_F,G_C{=}HomRef)}{\sum_{G_M,G_F,G_C} P(D \mid G_M,G_F,G_C)\, P(G_M,G_F,G_C)} $$

This quantity $P(G_C \mid D)$ is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs).
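To make the Bayes-rule machinery above concrete, here is a small sketch. It is an illustration only, not GATK source code: it (a) converts Phred-scaled PLs plus a prior into posterior genotype probabilities, and (b) computes the biallelic population prior, reading the gamma-ratio formula above in plain probability space (i.e. without the log) and treating "nSamples" as the total number of alleles in the supporting set; both of those readings are assumptions on my part.

    import math

    def pl_to_posterior(pl, prior):
        # Phred-scaled likelihoods (HomRef, Het, HomVar) plus a prior -> posterior probabilities
        lik = [10.0 ** (-p / 10.0) for p in pl]
        unnorm = [l * pr for l, pr in zip(lik, prior)]
        z = sum(unnorm)
        return [u / z for u in unnorm]

    def population_prior(ref_count, alt_count):
        # biallelic Dirichlet-multinomial prior from supporting allele counts
        n = ref_count + alt_count
        g = math.lgamma  # log Gamma
        hom_ref = math.comb(2, 0) * math.exp(g(n) + g(ref_count + 2) - g(n + 2) - g(ref_count))
        het     = math.comb(2, 1) * math.exp(g(n) + g(ref_count + 1) + g(alt_count + 1)
                                             - g(n + 2) - g(ref_count) - g(alt_count))
        hom_var = math.comb(2, 2) * math.exp(g(n) + g(alt_count + 2) - g(n + 2) - g(alt_count))
        return [hom_ref, het, hom_var]

    # 10 supporting samples, 9 HomRef and 1 Het -> 19 ref alleles, 1 alt allele
    prior = population_prior(19, 1)
    print([round(p, 3) for p in prior])                      # roughly [0.90, 0.09, 0.005]
    # a borderline HomRef call (PL = 0,6,60) is pulled further toward HomRef by this prior
    print([round(p, 3) for p in pl_to_posterior([0, 6, 60], prior)])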
I'm trying to understand the concept of spin in Quantum Mechanics. I'm reading "Road to Reality" by Penrose, which despite not being a textbook, is reputed to give one a deep insight into physical processes. Let us suppose that we have a spin $\frac{1}{2}$ particle. It has two eigenstates- $|\uparrow\rangle$ and $|\downarrow\rangle$. I would assume that spin $S$ is an operator such that when it acts on the wavefunction $\alpha |\uparrow\rangle+\beta|\downarrow\rangle$, it collapses it to one eigenstate, with an eigenvalue which would be the spin (so $\frac{1}{2}$ here). However, Penrose says that the spin can be thought of as the point $[\alpha:\beta]$ in $\Bbb{C}P^1$. Why is this? Why do we not have a collapse to an eigenstate? Let us suppose that we have a particle with spin $j$. Then $N=2j$. An angular momentum eigenstate of such a particle can be written as $\psi_{AB\dots N}$. I know that there are $N+1$ independent eigenstates, and that for even values of $N$, the eigenstates are spherical harmonic functions while for odd values of $N$ the eigenstates are spin-weighted spherical harmonic functions. But how do you calculate the spin of this particle? One might say that the spin is just $j$. However, from the example of spin $\frac{1}{2}$, I would figure that the spin is suppose to be an element of a projective space. Is this not true?
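One way to make the $\Bbb{C}P^1$ identification concrete (standard textbook material, not specific to Penrose's book) is to map the ray $[\alpha:\beta]$ to a point on the unit sphere, the Bloch sphere; overall phase and normalization drop out, which is exactly the projective-space statement. A small numerical sketch:

    import numpy as np

    def bloch_vector(alpha, beta):
        # map a spin-1/2 state alpha|up> + beta|down> to its point on the unit sphere
        psi = np.array([alpha, beta], dtype=complex)
        psi = psi / np.linalg.norm(psi)                   # normalization is irrelevant to the ray
        a, b = psi
        return np.array([2 * (np.conj(a) * b).real,       # <sigma_x>
                         2 * (np.conj(a) * b).imag,       # <sigma_y>
                         abs(a) ** 2 - abs(b) ** 2])      # <sigma_z>

    n1 = bloch_vector(1.0, 1.0)                                           # (|up> + |down>)/sqrt(2)
    n2 = bloch_vector(np.exp(1j * 0.7) * 1.0, np.exp(1j * 0.7) * 1.0)     # same ray, different phase
    print(np.allclose(n1, n2), np.isclose(np.linalg.norm(n1), 1.0))       # True True

Measuring the spin component along some axis still collapses the state to an eigenstate of that axis's operator, with eigenvalue ±1/2; the point $[\alpha:\beta]$ is not an eigenvalue but a label for the state itself, and for spin 1/2 it coincides with the direction along which the spin component is definitely +1/2.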
Mini Series: Designing a Satellite for Dummies

Are you an aspiring aerospace engineer, a space enthusiast, a parent checking your child’s homework or simply interested in the specifics of how to design certain satellite parts? Then this is the place to be. In this mini series we will go through the basics of designing and scaling a satellite, ranging from solar arrays to propellant tanks and even orbital parameters. If you would like us to cover other space-related topics, feel free to reach out to engineering@valispace.com.

Part 3: How to Calculate Satellite Disturbance Torques

In the first two tutorials of this series we discussed components of the satellite’s power system. We will now explain the necessary steps for designing the Attitude Determination and Control System (ADCS). During the lifetime of an Earth-bound satellite, its attitude (the direction it is pointed towards) gets continually disturbed by different forces, called disturbance torques. These torques have to be mitigated by the ADCS, to make sure the satellite is always pointed in the right direction. In this tutorial, we will discuss how to calculate these disturbance torques.

Different types of torques

We distinguish four different types of disturbance torques, namely gravity gradient, solar radiation pressure, magnetic field and aerodynamic torques. Because the ADCS must handle the maximum possible torques, we will try to find the worst-case torques for all different cases. Note that some of these forces can also cause orbit perturbations, but in this tutorial we will just focus on the attitude of the satellite. For a typical satellite, comparing the different torques over a range of altitudes shows that, up to an altitude of around 500 km, the aerodynamic torque is generally the largest, while higher up the gravity gradient torque takes over.

Gravity gradient torque

The Earth’s gravitational field varies inversely with the square of the distance from the Earth’s centre, so the gravitational acceleration varies slightly across a satellite’s length. This difference in acceleration causes a torque on the satellite. As a first order approximation, this torque is constant along the satellite’s orbit for an Earth-orientated vehicle and can be approximated as follows:

$$T_g = \frac{3 \mu}{2 R^3} |I_z - I_y| \sin(2\theta) \; \; [Nm]$$

Here, $\mu$ is Earth’s gravitational constant ($398600 \; km^3 / s^2$) and $R$ is the orbital radius (dependent on the satellite’s altitude). $I_z$ and $I_y$ are the moments of inertia about the z- and y-axis. The moments of inertia are highly variable depending on the exact satellite design; to find out how to calculate them, give these different sources a look. Note that $I_y$ is interchangeable with $I_z$, and as we are trying to find the worst-case torque, the moment of inertia which gives the highest torque should be used. $\theta$ is the maximum angle between the local vertical and the satellite’s z-axis.

Solar radiation pressure torque

Radiation (electromagnetic waves) carries momentum according to Maxwell’s theory of electromagnetism. Upon hitting a surface, this momentum can be transferred to the surface. The amount of momentum transferred depends on the type of surface being illuminated. It will be lowest for transparent surfaces, higher for absorbent surfaces and highest for reflective surfaces. In general, it can be said that solar arrays are absorbers and a spacecraft’s body is a reflector. 
The total force exerted on a spacecraft by solar radiation is as follows:

$$ F = \frac{J_s}{c} A_s \cos(I) (1 + q) \; \; [N]$$

Here, $J_s = 1367 \; W/m^2$ is the solar constant at Earth, $c = 3 \cdot 10^8 \; m/s$ is the speed of light, $A_s$ is the total surface area and $I$ is the angle of incidence of the solar radiation. Finally, $q$ is the reflectance factor. Obviously, this value is not constant over the spacecraft’s body, but as a first order approximation it can be assumed to be constant and equal to 0.6. Now, the torque exerted on the satellite will be:

$$T_{sp} = F ( c_{ps} - cg) \; \; [Nm]$$

Here, $cg$ is the center of gravity and $c_{ps}$ is the center of solar pressure. This center can be found in a similar way to finding the center of gravity, namely by finding the surfaces which are hit by radiation and their areas as follows:

$$ c_{ps} = \frac{\Sigma A_n x_n}{\Sigma A_n} \; \; [m]$$

Magnetic field torque

The magnetic field of the Earth causes a cyclic torque on the spacecraft, no matter its orientation, due to interactions with the spacecraft’s magnetic dipole. It can be approximated using the following formula:

$$T_m = D B \; \; [Nm], \qquad B \approx \frac{\mu_E}{c R^3} \; \; [T]$$

Here, $B$ is the Earth’s magnetic field strength, which is approximated using the magnetic moment of the Earth $\mu_E = 7.96 \cdot 10^{15} \; tesla \cdot m^3$, $c$ is an approximation constant, which can be taken as 1 for a polar orbit and as 2 for an equatorial orbit, and $R$ is the distance to the Earth’s dipole center in meters. $D$ is the satellite’s residual dipole, which is caused by electric currents and magnetic material within the vehicle. In the design of your satellite, it is important to minimize this value, for instance by including twist in the cables and avoiding large loops on the electronics boards! Due to different currents over time, this is hard to calculate. However, $D$ can be estimated as a function of mass as:

$$D = c \cdot 10^{-3} \cdot m_{sc} \; \; [A m^2]$$

Here, $m_{sc}$ is the satellite mass and $c$ is a (different) constant in the range of 1 to 10, depending on how magnetically clean the spacecraft is (NASA source). Also, the residual dipole can be measured with a magnetic moment test after the satellite is integrated!

Aerodynamic torque

An object flying through air is subjected to aerodynamic drag. For some low-flying satellites, this means that it is necessary to calculate the torques generated aerodynamically across the vehicle; for satellites above an altitude of $500 \; km$ this can generally be neglected. The torque can be estimated using the following approximation:

$$T_a = 0.5 [\rho C_d A V^2](c_{pa} - cg) \; \; [Nm] $$

Here, $\rho$ is the atmospheric density, $C_d$ is the drag coefficient (usually between 2 and 2.5), $A$ is the frontal area of the satellite, $V$ is the spacecraft’s velocity and $cg$ is the center of gravity. $c_{pa}$ is the center of aerodynamic pressure, which can be calculated in a similar way as $c_{ps}$.

If you followed the steps correctly, you have now calculated the different disturbance torques on your satellite and you are ready to start the sizing of your ADCS, congratulations! We hope you liked this mini-tutorial! If you want to learn how the disturbance torques and the ADCS are related to other subsystems in a satellite or how to design a complete satellite using Valispace and practical examples, also check our Satellite Tutorial. Stay tuned for more and feel free to give us feedback at contact-us@valispace.com! 
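Putting the four formulas above together, here is a small worst-case budget sketch. The numeric inputs in the example are placeholders for a generic small satellite in low Earth orbit, not values from the text, and the gravitational constant is converted to SI units (m^3/s^2).

    import math

    MU_EARTH = 3.986e14        # Earth's gravitational constant, m^3/s^2
    M_EARTH_DIPOLE = 7.96e15   # Earth's magnetic moment, tesla * m^3
    J_S = 1367.0               # solar constant, W/m^2
    C_LIGHT = 3.0e8            # speed of light, m/s

    def gravity_gradient(r, iz, iy, theta):
        return 1.5 * MU_EARTH / r ** 3 * abs(iz - iy) * math.sin(2.0 * theta)

    def solar_pressure(area, q, incidence, cps_minus_cg):
        force = J_S / C_LIGHT * area * math.cos(incidence) * (1.0 + q)
        return force * cps_minus_cg

    def magnetic(dipole, r, c_orbit):
        return dipole * M_EARTH_DIPOLE / (c_orbit * r ** 3)

    def aerodynamic(rho, cd, area, v, cpa_minus_cg):
        return 0.5 * rho * cd * area * v ** 2 * cpa_minus_cg

    r = 6371e3 + 500e3                                            # 500 km altitude orbit
    print(gravity_gradient(r, 90.0, 60.0, math.radians(45.0)))    # [Nm]
    print(solar_pressure(2.0, 0.6, 0.0, 0.2))                     # [Nm]
    print(magnetic(1.0, r, 1.0))                                  # [Nm], D ~ 1 A m^2, polar orbit
    print(aerodynamic(1e-13, 2.2, 1.5, 7600.0, 0.2))              # [Nm]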
While optimization problems have one decision maker that controls all decision variables, equilibrium problems are a collection of optimization problems and variational inequalities, each controlled by a different agent. We typically assume that each variable and each equation is controlled by or belongs to exactly one agent. Variables that are controlled by one agent but appear in the equations of a second agent are regarded as fixed or exogenous variables by that second agent: when taking first-order conditions, the second agent won't take derivatives wrt these exogenous variables. Later we will relax this assumption and introduce equilibrium problems with shared constraints and shared variables. Note that in this section we will discuss equilibrium problems of the Nash type, i.e. where all agents are on the same level, each assuming the decisions or strategies of the other agents are known and fixed. Equilibrium problems of the Stackelberg type, where there are leaders and followers, are covered in section Bilevel Programs.

Consider the following equilibrium problem with \(N\) agents solving minimization problems and one agent solving a variational inequality:

\begin{equation} \begin{array}{ll} \textrm{Find} & (x_{1}^{*}, \dots, x_{N}^{*}, p^{*}) \; \textrm{satisfying} \\ x_{i}^{*} \, \in \,&\textrm{arg min}_{x_i} \, f_i(x_i,x^{*}_{-i}) \\ & \textrm{s.t.} \quad \quad \quad g_i(x_i,x^{*}_{-i}) \leq 0, \; \textrm{for} \; i=1, \dots, N, \\ p^* \, \in &\textrm{SOL} \, (H(p,x^*), K(x^*)), \\ &\textrm{where} \; K(x^*) = \{p \, | \, w(p,x^*) \leq 0 \}. \tag {10} \\ \end{array} \end{equation}

Note that \(f_i(x_i,x_{-i})\) denotes the objective function of the problem of agent \(i\), \(g_i(x_i,x_{-i})\) are the constraints relating to this optimization problem and \(x_{-i}=(x_1, \dots , x_{i-1}, x_{i+1}, \dots , x_N)\) denotes the decisions of the other agents. Further, \(\textrm{SOL} \, (H,K)\) represents the solution set of the variational inequality \(VI(H,K)\). This problem can be implemented with EMP as follows:

Set i ;
Variables obj(i), x(i), p;
Equations deff(i), defg(i), defH, defw;
*Definitions of equations are omitted
Model equil / deff, defg, defH, defw /;
File myinfo /'%emp.info%'/;
put myinfo 'equilibrium';
loop(i,
   put / 'min', obj(i), x(i), deff(i), defg(i);
);
put 'vi defH p defw' /;
putclose myinfo;
solve equil using EMP;

Note that the GAMS variable obj(i) holds the value of \(f_i(x)\), x(i) represents the variable \(x_i\) and p denotes the variable \(p\), of course. The equations deff(i) and defg(i) are closed-form definitions of the objective function \(f_i(x)\) and the constraint \(g_i(x)\) respectively. The equation defH defines the variational inequality function \(H\) and the equation defw defines the set \(K\). The EMP annotations found in the file emp.info specify the equilibrium structure of the model:

equilibrium
min obj('1') x('1') deff('1') defg('1')
...
min obj('N') x('N') deff('N') defg('N')
vi defH p defw

The EMP keyword equilibrium indicates that the annotations are for an equilibrium problem. Each of the EMP keywords min begins a new optimization problem (owned by a new agent), where each problem has its own objective variable, decision variables, and equations / constraints. For example, the first agent minimizes obj('1'), controls or owns x('1'), and is subject to the constraints deff('1') and defg('1'). If other variables like x('2') and x('3') appear in deff('1') and defg('1'), they will be treated as exogenous by the first agent. 
This specification is consistent with the formulation above in (10). Each agent's optimization problem can be easily (re)constructed given the EMP annotations. Following the optimization problems of the \(N\) agents we have the VI specification for agent \(p\). The EMP keyword vi is followed by the equation-variable pair defH p defining the VI function \(H\) and the equation defw that defines the feasible set \(K\).

Consider the following example from Kim & Ferris (2017) [143]. In this economic equilibrium problem there are three agents: one profit-maximizing producer, one utility-maximizing consumer and a market that determines the price of three commodities based on production and demand. The problem data include a technology matrix \(A\), where the entry \(a_{ij} > 0\) denotes the output of the commodity \(i\) for each unit of activity of producer \(j\) and \(a_{ij} < 0\) denotes the respective input. Further, an initial endowment \(b\) and the demand function \(d(p)\) are given, where \(p\) is the price. The consumer maximizes her utility within her budget, which depends on the price \(p\) and the initial endowment \(b\). Let \(y\) represent the activity of the producer, \(x\) represent the demand of the consumer and \(p\) represent the prices of commodities. Then \((y^{*},x^{*}, p^{*})\) is a general equilibrium if it satisfies the following:

\begin{equation} \begin{array}{rll} -A^T p^* & \geq 0 & \textrm{No positive profit for each activity} \\ b+Ay^*-d(p^*) & \geq 0 & \textrm{No excess demand} \\ p^* \geq 0, \, y^* & \geq 0 & \textrm{Nonnegativity} \\ -A^T p^* & \perp y^* & \textrm{No activity for earning negative profit and positive activity implies balanced profit}\\ b+Ay^* - d(p^*) & \perp p^{*} & \textrm{Zero price for excess supply and market clearance for positive price} \\ \end{array} \end{equation}

The code for the respective model is given below. Note that instead of using the consumer demand function \(d(p)\) in its explicit form, we introduce a utility-maximizing consumer with demand \(x\).

set i 'commodities' / 1*3 /;
variable u 'consumer utility';
positive variables
   y    'activity of the producer'
   x(i) 'Marshallian demand of the consumer'
   p(i) 'prices'
   ;
parameters
   A(i) 'technology matrix' / 1 1, 2 -1, 3 -1 /
   s(i) 'budget share'      / 1 0.9, 2 0.1, 3 0 /
   b(i) 'endowment'         / 1 0, 2 5, 3 3 /
   ;
equations
   profit 'profit of activity'
   mkt(i) 'constraint on excess demand'
   udef   'Cobb-Douglas utility function'
   budget 'budget constraint'
   ;
profit..  -sum(i, A(i)*p(i)) =g= 0;
mkt(i)..  b(i) + A(i)*y - x(i) =g= 0;
udef..    u =e= sum(i, s(i)*log(x(i)));
budget..  sum(i, p(i)*x(i)) =l= sum(i, p(i)*b(i));

model m / mkt, profit, udef, budget /;

file empinfo /'%emp.info%'/;
putclose empinfo 'equilibrium' /
   ' max', u, 'x', udef, budget /
   ' vi profit y' /
   ' vi mkt p' /
   ;

* the second commodity is used as a numeraire
p.fx('2') = 1;
x.l(i) = 1;

solve m using EMP;

Observe that in the EMP annotations the problems of the three agents are specified after the EMP keyword equilibrium: the consumer solves a maximization problem (where the utility u is maximized) and the activities of the producer and the price-setting market are expressed as VIs. As there are three commodities, the VI for the market (mkt) actually generates three VI functions, one for each commodity. Thus there are three agents and four VI functions in the equilibrium problem. This is reflected in the EMP Summary in the listing file:

--- EMP Summary
    ...
    VI Functions      = 4
    Equilibrium Agent = 3
    ...

In the GAMS EMP Library there are several models that have a similar form, e.g. 
Scarf's activity analysis model [SCARFEMP-DEM] and the simple equilibrium problem [SIMPEQUIL]. The latter demonstrates that there are equilibrium problems where the optimization problems of the individual agents are solvable, but the overall equilibrium problem does not have a solution.

In many applications equilibrium problems come with a twist: the dual variable associated with a constraint in the problem of one agent appears exogenously in the problem of another agent. The following simple example with two agents is from model [DUALVAR] in the GAMS EMP Library.

\begin{equation} \tag {11} \begin{array}{lll} \textrm{Problem of the first agent:} & \textrm{Min}_{(v,w)} & z = v+w \\ & \textrm{s.t.} & \sqrt{v+1} + 2w \geq 2 \qquad (\perp \; u \geq 0) \\ & & v, w \geq 0 \\[2mm] \textrm{Problem of the second agent:} & VI: & F(y) := y-4u + 1 \quad (\perp \, y \; \textrm{free}) \\ \end{array} \end{equation}

N.B.: the variable \(u\) that appears in the problem of the second agent is the dual multiplier (aka shadow price) of the first agent's constraint. This equilibrium problem can be modeled in GAMS with EMP as follows:

positive variables
   v 'belongs to min agent'
   w 'belongs to min agent'
   u 'dual of min constraint'
   ;
free variables
   y 'belongs to VI agent'
   z 'objective var'
   ;
equations
   defz 'objective def'
   g    'constraint for min agent'
   Fy   'VI function'
   ;
defz.. v + w =e= z;
g..    sqrt(v+1) + 2*w =g= 2;
Fy..   y - 4*u + 1 =n= 0;

Model opt 'min agent and VI agent' / defz, g, Fy /;

File empinfo / '%emp.info%' /;
putclose empinfo
   'equilibrium' /
   ' min z v w defz g' /
   ' vi Fy y' /
   ' dualvar u g' /
   ;

defz.m = -1;
g.m = 0.5;
Fy.m = 1;
v.l = 0;
w.l = 0.5;
y.l = 1;
u.l = 0.5;

solve opt using emp;

The EMP info file contains the EMP keyword equilibrium followed by the specifications for the two agents: a minimization problem for the first agent and a VI for the second agent. The special relationship between the variable u and the equation g is declared via the EMP keyword dualvar followed by the respective variable-equation pair. Recall our usual assumption that each variable and each equation is owned or controlled by exactly one agent. Since variable u is tied to equation g and g is owned by the first agent, variable u is owned by the first agent also. Besides the number of equilibrium agents and the number of VI functions, the EMP Summary lists the number of dual variable maps:

--- EMP Summary
    ...
    Dual Variable Maps = 1
    ...
    VI Functions       = 1
    Equilibrium Agent  = 2
    ...

Note
Although the example above contained an optimizing agent and a VI agent, dual variables most often occur in equilibrium problems with several optimizing agents.

Other examples with dual variables in the GAMS EMP Library include a formulation of the well-known transportation model as an equilibrium problem [TRANSEQL], Scarf's activity analysis model [SCARFEMP-PRIMAL] and the general equilibrium model [TWO3EMP].

The EMP framework provides the following general syntax to specify equilibrium problems:

Equilibrium
{VIsol {equ}}
{Implicit {var equ}}
{MAX|MIN obj {var|*} {[-] equ}}
{VI {var|*} {[-] equ var} {[-] equ}}
{DualVar {var [-] equ}}

The EMP keyword Equilibrium indicates that the specifications that follow define the structure of an equilibrium problem. The MAX, MIN, and VI keywords specify agents in the problem, while the rest of the keywords are optional modifiers used to adjust the structure of the agent models or the meaning of the equations they contain. 
Note An equilibrium problem must contain at least one agent, i.e must contain one of the keywords MAX, MIN, or VI. The EMP keyword VIsol identifies a shared constraint(s) and specifies the MCP reformulation to use for it: see section Equilibrium Problems with Shared Constraints below for details. The keyword Implicit identifies a shared variable and its defining constraint: see section Equilibrium Problems with Shared Variables below for details. The keywords MAX and MIN each begin the specification of an optimization agent and are followed by the objective variable obj and the other variables and equations owned by the agent, as described in the formulation and example above. The keyword VI begins the specification of a VI agent: see the section on VI above for details. Finally, the EMP keyword DualVar specifies that the variable var is the Lagrange multiplier for the equation equ. For examples, see sections Equilibrium Problems with EMP: Example with Dual Variables and Embedded Complementarity Systems. The symbol '*' specifies that the default or automatic assignment of variables to this agent be used, i.e. the set of variables used in the equations owned by this agent but not explicitly or otherwise assigned to another agent. Note that if a variable occurs in equations owned by multiple agents and is not explicitly assigned to any agent, the default assignment is not well defined and using it will be flagged as an error. To avoid confusion and promote clarity, we recommend that modelers use explicit variable lists and avoid the '*' symbol. The "-" sign in the syntax above is used to flip (i.e. to reorient or negate) the marked equation, e.g. so that x**1.5 =L= y becomes y =G= x**1.5. Flipped equations in EMP behave in the same way as flipped equations in MCP.
Free implies residually finite

This article gives the statement and, possibly, proof of an implication relation between two group properties. That is, it states that every group satisfying the first group property (i.e., free group) must also satisfy the second group property (i.e., residually finite group).

Statement

Every free group is a residually finite group: for any non-identity element of a free group, there is a normal subgroup of finite index that does not contain it.

Related facts

Free implies residually nilpotent
Free implies residually solvable
Free abelian implies residually finite
Finitely generated abelian implies residually finite

Proof

Proof idea

The idea is to use the fact that finite groups are big enough to accommodate a particular word evaluating to a non-identity element.

Proof details

Given: A free group $F$ with freely generating set $T$. A non-identity element $a \in F$.

To prove: There exists a normal subgroup $N$ of $F$ such that $F/N$ is a finite group and $a \notin N$. 
Proof: We write

$a = a_1 a_2 \dots a_n$

as a reduced form expression for $a$ in terms of $T$. Thus, for each $i$, either $a_i \in T$ or $a_i^{-1} \in T$, and no letter occurs adjacent to its own inverse. We now define a function $f: T \to S_{n+1}$, where $S_{n+1}$ is the symmetric group on the set $\{1, 2, \dots, n+1\}$:

$f(t)$ is the identity element if $t$ is not equal to any of the $a_i$s or their inverses.
Suppose $A$ is the set of $i$s such that $a_i = t$ and $B$ is the set of $j$s such that $a_j^{-1} = t$. Then, set $f(t)$ as any permutation $\sigma$ that sends each $i \in A$ to $i + 1$, and for each $j \in B$, sends $j + 1$ to $j$. This is well-defined since an element and its inverse cannot occur adjacently in the reduced form expression for a word.

We have thus obtained a function $f: T \to S_{n+1}$. This extends uniquely to a homomorphism from $F$ to $S_{n+1}$, because $F$ is free. Moreover, under this homomorphism, we see that the image of $a$ sends $1$ to $n+1$, and is not the identity element. The kernel of this homomorphism is thus a normal subgroup of $F$ with finite quotient group (a subgroup of $S_{n+1}$). Moreover, the kernel does not contain $a$, because the permutation induced by $a$ sends $1$ to $n+1$ (as can be verified by noting that each $a_i$ sends $i$ to $i+1$, by construction).
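As a concrete illustration of the construction (an added example, not part of the original page), take $F$ free on $T = \{s, t\}$ and $a = s t s^{-1}$, so $n = 3$ and we work in $S_4$, composing permutations left to right (the image of $a_1$ is applied first). For $t = s$ we have $A = \{1\}$ and $B = \{3\}$, so $f(s)$ must send $1 \mapsto 2$ and $4 \mapsto 3$; the involution $(1\,2)(3\,4)$ works. For $t$ we have $A = \{2\}$ and $B = \emptyset$, so $f(t) = (2\,3)$ works. The image of $a = s t s^{-1}$ then sends $1 \mapsto 2 \mapsto 3 \mapsto 4$, so it moves $1$ to $4 = n+1$ and is not the identity; the kernel of the induced homomorphism $F \to S_4$ is a finite-index normal subgroup of $F$ not containing $a$.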
Problem Find every Householder reflection $H_v$ (with respect to $v$), such that $$ y = H_vx $$ where $x,y$ are unit vectors, i.e. $\Vert x \Vert_2 = \Vert y \Vert_2 = 1$, and $\langle x,y\rangle = \langle y,x\rangle$ with $x,y \in \mathbb{C}^n$. A Householder reflection with respect to $v$ is defined $$ H_v := I - 2\frac{vv^\ast}{v^\ast v} $$ which is the reflector with respect to the hyperplane orthogonal to $v$. Try I have found one. By defining $v := (x-y)$, we have $$ H_v = I - 2 \frac{(x-y)(x-y)^\ast}{\Vert x-y \Vert_2^2} $$ and since $\Vert x- y \Vert_2^2 = (x-y)^\ast (x-y) = 2 - 2x^\ast y$ and $(x-y)^\ast x = 1-x^\ast y$, we have $$ H_v x = x - (x-y) = y $$ Question I would like to know if every $H_v$ is expressed $$ H_v = I - 2\frac{vv^\ast}{v^\ast v} $$ where $v := c(x-y)$ for some constant $0 \neq c \in \mathbb{C}$. In other words, I would like to verify the uniqueness of the above $H_v$. But I cannot proceed from here. Any help will be appreciated.
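A quick numerical sanity check of the construction in the question (assuming $x \neq y$; this does not address the uniqueness question itself):

    import numpy as np

    def householder(v):
        # H_v = I - 2 v v* / (v* v)
        v = v.reshape(-1, 1)
        return np.eye(len(v)) - 2 * (v @ v.conj().T) / (v.conj().T @ v)

    rng = np.random.default_rng(0)
    n = 5
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    y /= np.linalg.norm(y)
    # rotate y by a phase so that <x, y> = x* y is real, as the problem assumes
    y *= np.exp(-1j * np.angle(np.vdot(x, y)))

    H = householder(x - y)
    print(np.allclose(H @ x, y))                    # True: H_v maps x to y
    print(np.allclose(H.conj().T @ H, np.eye(n)))   # True: H_v is unitary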
In my last post, we saw that the number of positive up-down walks (going either up by $1$ or down by $1$ at each step) of length $2n$ starting from $(0,0)$ is $\binom{2n}{n}$. Furthermore, the number of these that return to height $0$ at the end is the $n$th Catalan number $\frac{1}{n+1}\binom{2n}{n}$. It is natural to ask, then: how many positive walks of length $2n$ end at height $h$ for a given $h$? This question was brought up at the dinner table last week by Carlos Shine, who leads the Brazilian IMO team and was also one of my coworkers at the IdeaMath summer camp last week in San Jose. In classic mathematician style, we asked the waiter for a pen and began scribbling on the nearest napkin. Table of values and a recursion The first thing we noticed was that the ending height is always even, since we change the parity by $1$ at each of the $2n$ steps. So let the ending height be $2k$, and define $f(n,k)$ to be the number of positive up-down walks of length $2n$ ending at height $2k$. We can simply list out the possibilities for the first few values of $n$ to make the following table of values of $f(n,k)$. $$\begin{array}{c|ccccc} k: & 0 & 1 & 2 & 3 & 4 \\\hline n=0 & 1 & & & & \\ n=1 & 1 & 1 & & & \\ n=2 & 2 & 3 & 1 & & \\ n=3 & 5 & 9 & 5 & 1 & \\ n=4 & 14& 28& 20& 7 & 1 \\ \end{array} $$ As expected, we have $f(n,0)=\frac{1}{n+1}\binom{2n}{n}$ and $\sum_{k}f(n,k)=\binom{2n}{n}$. The pattern $f(n,n)=1$ is explained by the fact that we must take $2n$ up-steps to reach height $2n$. For the other values in the table, Carlos pointed out that they appear to satisfy the recurrence relation $$f(n,k)=f(n-1,k-1)+2f(n-1,k)+f(n-1,k+1)$$ for $k\ge 1$, and for $k=0$, $f(n,0)=f(n-1,0)+f(n-1,1)$. Indeed, we can prove this by considering the last two steps in a walk to height $2k$. If the last two steps are both up, then the number of such walks is the number of positive walks of length $2n-2$ to height $2k-2$, or $f(n-1,k-1)$. If the last two steps are up down or down up, then in each case we get a count of $f(n-1,k)$, and if the last two steps are both down, we get $f(n-1,k+1)$. The exception to this rule is when $k=0$ and so the last two steps could not have been down up or up up; in this case we have $f(n,0)=f(n-1,0)+f(n-1,1)$ as desired. An explicit formula and a bijective proof With a bit of guesswork and by unwrapping the recursion for $k=n-1,n-2,\ldots$, we can find that $$f(n,n-k)=\binom{2n}{k}-\binom{2n}{k-1}$$ satisfies the recursion. Indeed, it is a natural generalization of the closed formula for the Catalan numbers, which can also be written $$f(n,0)=\binom{2n}{n}-\binom{2n}{n-1}.$$ A few days later, Carlos noticed that we can prove this formula bijectively using the reflection principle. In an email, he explained: “…I remembered the proof of the Catalan numbers using the reflection principle and thought, well, why don’t we try it for the generalized version? So here’s a more direct counting proof. I’m going to change the notation: let $g(n,k) = f(n,n-k)$. In $g(n,k)$, $k$ is the number of “down steps”, because, in order to get to the point $(2n,2(n-k))$, you need $k$ steps down and $2n-k$ steps up. We are going to prove that $$g(n,k) = \binom{2n}k – \binom{2n}{k-1}.$$ If we do not restrict the paths (that is, they are allowed to go below the $x$-axis), there are $\binom{2n}k$ choices ($k$ steps down, $2n-k$ steps up). We have to subtract the number of paths that go below the $x$-axis. 
Consider (quite like your previous post) the first time it hits the line $y=-1$, at point $P=(t,-1)$ and then reflect the path before $P$, that is, the points $(0,0), (1,a_1), \ldots, (t-1,0)$, about the line $y=-1$. For example (figures omitted): a path that dips below the axis bijects to its reflection before $P$. This bijects to a path from $(0,-2)$ to $(2n,2(n-k))$, which has the same number of steps up and down as a path from $(0,0)$ to $(2n,2(n-k+1))$, which has $k-1$ steps down and $2n-k+1$ steps up. In fact, any path from $(0,-2)$ to $(2n,2(n-k))$ has to hit the line $y=-1$, so the process is invertible. The number of such paths is $\binom{2n}{k-1}$. The result then follows.” Beautiful!

The distribution

Now that we have a formula, can we predict the asymptotic behaviour of the distribution of values as $n\to \infty$? The values appear to rise to a peak and then decrease again. Below is a plot of $g(100,k)$ for $k=0,\ldots,100$. Is it a normal distribution? In fact, it turns out to approach the derivative of a normal distribution. The binomial distribution, that is, the discrete distribution having density function $B_{n}(x)=\frac{1}{2^n}\binom{n}{x}$, approaches a normal distribution as $n\to \infty$, having mean $n/2$ and standard deviation $\sqrt{n}/2$. Using the formula for a normal distribution and substituting $2n$ for $n$, we have that $\binom{2n}{x}$ approaches the function $$\frac{4^n}{\sqrt{\pi n}}e^{-(x-n)^2/n}.$$ Now, our formula for $g(n,k)$ is a “discrete derivative” of the sequence of binomial coefficients $\binom{2n}{x}$, so it will be approximated by the continuous derivative of the function defined above. This derivative comes out to: $$4^n\cdot \frac{2(n-x)}{\sqrt{\pi n^3}}e^{-(x-n)^2/n}$$ Indeed, the plot for $n=200$ matches fairly closely!

A generating function

As a generatingfunctionologist, I can’t help but wrap up this post with a generating function for $g(n,k)$. Since $g(n,k)$ is the difference of consecutive coefficients of the generating function $$(1+x)^{2n}=\sum_{k}\binom{2n}{k}x^k,$$ we have that the first $n+1$ coefficients of $$(1-x)(1+x)^{2n}$$ are the values of $g(n,k)=\binom{2n}{k}-\binom{2n}{k-1}$ for $k=0,\ldots,n$.
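As a quick sanity check on the closed form (this snippet is mine, not from the original post), one can brute-force the positive walks for small $n$ and compare against $f(n,k)=\binom{2n}{n-k}-\binom{2n}{n-k-1}$:

from math import comb
from itertools import product

def f_bruteforce(n, k):
    # count up-down walks of length 2n from 0, never going below 0, ending at height 2k
    count = 0
    for steps in product((+1, -1), repeat=2 * n):
        heights = [0]
        for s in steps:
            heights.append(heights[-1] + s)
        if min(heights) >= 0 and heights[-1] == 2 * k:
            count += 1
    return count

for n in range(1, 6):
    for k in range(n + 1):
        formula = comb(2 * n, n - k) - (comb(2 * n, n - k - 1) if k < n else 0)
        assert f_bruteforce(n, k) == formula
print("formula matches brute force for n <= 5")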
There is a fun little fact regarding polynomials in two variables $x$ and $y$: every such polynomial can be written uniquely as the sum of a symmetric polynomial and an antisymmetric polynomial. (To be more precise, this is true for polynomials over any field of characteristic not equal to $2$. For simplicity, in what follows we will assume that our polynomials have coefficients in $\mathbb{C}$.) Recall that a polynomial $g$ is symmetric if it does not change upon permuting its variables. In this case, with two variables, $g(x,y)=g(y,x)$. It is antisymmetric if swapping any two of the variables negates it, in this case $g(x,y)=-g(y,x)$. It is not hard to prove the fact above. To show existence of the decomposition, set $g(x,y)=\frac{f(x,y)+f(y,x)}{2}$ and $h(x,y)=\frac{f(x,y)-f(y,x)}{2}$. Then $$f(x,y)=g(x,y)+h(x,y),$$ and $g$ is symmetric while $h$ is antisymmetric. For instance, if $f(x,y)=x^2$, then we can write $$x^2=\frac{x^2+y^2}{2}+\frac{x^2-y^2}{2}.$$ For uniqueness, suppose $f(x,y)=g_0(x,y)+h_0(x,y)$ where $g_0$ is symmetric and $h_0$ is antisymmetric. Then $g_0+h_0=g+h$, and so $$g_0-g=h-h_0.$$ The left hand side of this equation is symmetric and the right hand side is antisymmetric, and so both sides must be identically zero. This implies that $g_0=g$ and $h_0=h$, so the unique decomposition is $f=g+h$. QED. This got me thinking… Is there an analogous decomposition for polynomials in three variables? Or any number of variables? The above decomposition doesn’t make sense in three variables, but perhaps every polynomial in $x$, $y$, and $z$ can be written uniquely as a sum of a symmetric, antisymmetric, and… some other particular type(s) of polynomials. Indeed, it can be generalized in the following sense. Notice that any antisymmetric polynomial in two variables is divisible by $x-y$, since setting $x=y$ gives us $h(x,x)=-h(x,x)=0$. Moreover, dividing by $x-y$ gives a symmetric polynomial: if $$h(x,y)=p(x,y)\cdot(x-y)$$ is antisymmetric, then $p(x,y)\cdot (x-y)=-p(y,x)\cdot(y-x)$, and so $p(x,y)=p(y,x)$. Thus any antisymmetric polynomial $h$ is equal to $x-y$ times a symmetric polynomial, and so we can restate our fact above in the following way: Any two variable polynomial $f(x,y)$ can be written uniquely as a linear combination of $1$ and $x-y$, using symmetric polynomials as the coefficients. For instance, $f(x,y)=x^2$ can be written as $$x^2=\left(\frac{x^2+y^2}{2}\right)\cdot 1+\left(\frac{x+y}{2}\right)\cdot (x-y).$$ Now, to generalize this to three variables, in place of $x-y$, we consider the polynomial $$\Delta=(x-y)(x-z)(y-z).$$ Also consider the five polynomials: $x^2-z^2+2yz-2xy$, $z^2-y^2+2xy-2xz$, $x-y$, $y-z$, and $1$, each of which is obtained by taking certain partial derivatives starting with $\Delta$. It turns out that every polynomial in three variables can be decomposed uniquely as a linear combination of these six polynomials, using symmetric polynomials as the coefficients! Where do these six polynomials come from? Turn to the next page to find out…
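Before turning the page, here is a quick SymPy check of the two-variable decomposition above (my own illustration; the test polynomial is arbitrary):

import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + 2*x**2*y - y                      # arbitrary test polynomial
swap = {x: y, y: x}

g = sp.expand((f + f.subs(swap, simultaneous=True)) / 2)   # symmetric part
h = sp.expand((f - f.subs(swap, simultaneous=True)) / 2)   # antisymmetric part

assert sp.expand(g + h - f) == 0
assert sp.expand(g - g.subs(swap, simultaneous=True)) == 0  # g(x,y) = g(y,x)
assert sp.expand(h + h.subs(swap, simultaneous=True)) == 0  # h(x,y) = -h(y,x)
assert sp.simplify(h.subs(x, y)) == 0                       # so (x - y) divides h
print(g, "|", h)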
Abstract We consider a connected semisimple Lie group $G$ with finite center, an admissible probability measure $\mu$ on $G$, and an ergodic $(G,\mu)$-space $(X,\nu)$. We first note (Lemma 0.1) that $(X,\nu)$ has a unique maximal projective factor of the form $(G/Q,\nu_0)$, where $Q$ is a parabolic subgroup of $G$, and then prove: 1. Theorem 1. If every noncompact simple factor of $G$ has real rank at least two, then the maximal projective factor is nontrivial, unless $\nu$ is a $G$-invariant measure. 2. Theorem 2. For any $G$ of real rank at least two, if the action has positive entropy and fails to have nontrivial projective factor, then $(X,\nu)$ has an equivariant factor space with the same properties, on which $G$ acts via a real-rank-one factor group. 3. Theorem 3. Write $\nu = \nu_0\ast \lambda$, where $\lambda$ is a $P$-invariant measure, $P = MSV$ a minimal parabolic subgroup [F2], [NZ1]. If the entropy $h_\mu(G/P,\nu_0)$ is finite, and every nontrivial element of $S$ is ergodic on $(X,\lambda)$ (or just a well chosen finite set, Theorem 9.1), then ($X,\nu)$ is a measure-preserving extension of its maximal projective factor. 4. The foregoing results are best possible (see Section 11, in particular Theorem 11.4). We also give some corollaries and applications of the main results. These include an entropy characterization of amenable actions, an explicit entropy criterion for the invariance of $\nu$, and construction of a projective factor for an action of a lattice in $G$ on a compact metric space.
Word Vectors · Skip-gram · Continuous Bag of Words (CBOW) · Negative Sampling · Hierarchical Softmax · Word2Vec

1. How to represent words?

With word vectors, we can quite easily encode this ability in the vectors themselves (using distance measures such as Jaccard, Cosine, Euclidean, etc.).

2. Word vectors encode word tokens into some vector (an N-dimensional space, N << 13 million) that is sufficient to encode all semantics of our language. Each dimension would encode some meaning that we transfer using speech.

one-hot vector: V is the size of the vocabulary; each word is a completely independent entity.

3. Iteration Based Methods - Word2Vec

2 algorithms: continuous bag-of-words (CBOW) and skip-gram. CBOW aims to predict a center word from the surrounding context in terms of word vectors. Skip-gram does the opposite, and predicts the distribution (probability) of context words from a center word.

2 training methods: negative sampling and hierarchical softmax. Negative sampling defines an objective by sampling negative examples, while hierarchical softmax defines an objective using an efficient tree structure to compute probabilities for all the vocabulary.

language model

Unigram model: \[P(w_1,w_2,...,w_n) = \prod_{i=1}^nP(w_i)\]

Bigram model: \[P(w_1,w_2,...,w_n) = \prod_{i=1}^nP(w_i|w_{i-1})\]

4. Continuous bag of words model (CBOW)

Predict the center word from the context: \[ \prod_{c=1}^{n}P(w^{(c)}|w^{(c-m)},...,w^{(c-1)},w^{(c+1)},...,w^{(c+m)})\]

negative log likelihood: \[J(\theta)= -\sum_{c=1}^{n}logP(w^{(c)}|w^{(c-m)},...,w^{(c-1)},w^{(c+1)},...,w^{(c+m)})\]

if the context words used to generate the center word are treated as dependent: \[J(\theta) = \dfrac{1}{n}\sum_{c=1}^T\sum_{-m\le j\le m}logp(w_c|w_{c+j})\]

How to represent this probability?

For one sentence: \[ \begin{align} minimize J &= -logP(w_c|w_{c-m},..,w_{c-1},w_{c+1},...,w_{c+m})\\ &= -log P(u_c|\hat v)\tag{1}\\ &= -log \dfrac{exp(u_c^T\hat v)}{\sum_{j=1}^{|V|}exp(u_j^T\hat v)}\tag{2}\\ &= -u_c^T\hat v + log\sum_{j=1}^{|V|}exp(u_j^T\hat v) \end{align} \]

Important: from word to vector. From (1) to (2), we use the softmax to represent the probability \[P(u_c|\hat v) = \dfrac{exp(u_c^T\hat v)}{\sum_{j=1}^{|V|}exp(u_j^T\hat v)}\tag{*}\]

In fact, word2vec can be understood like this: the more similar the contexts of two words are, the more similar their word vector representations become. For example, "he" and "she" usually occur in very similar contexts, i.e. the surrounding words v are close, so the probabilities obtained from inner products with those words are also close. Ultimately it is still a frequency-counting method, just one that uses unsupervised learning to obtain the distributed vectors; from this angle the model makes good sense. The mistaken interpretation is that whenever u and v appear in the same window, the probability given by their inner product is larger — that by itself explains nothing.

4.1 We can use a simple neural network to train these matrix weights

input: \(x^{(c)}\in R^{|V|\times 1}\), the input one-hot vectors of the context
labels: \(y^{(c)}\in R^{|V|\times 1}\), the one-hot vector of the known center word.
parameters:
\(w_i\): word i from vocabulary V
\(V \in R^{n\times |V|}\): input word matrix
\(v_i\): i-th column of \(V\), the input vector representation of word \(w_i\)
\(U\in R^{|V|\times n}\): output word matrix
\(u_i\): i-th row of \(U\), the output vector representation of word \(w_i\)
n is an arbitrary size which defines the size of our embedding space

There are some differences with the figure: \(W_1^{n\times |V|}\), \(W_2^{|V|\times n}\)

input: \(x_1.shape = (|V|, 1)\), \(x_2.shape = (|V|, 1)\), ..., \(x_{2m}.shape = (|V|, 1)\)
\(W_1\): input matrix V, \(W_1.shape = (n, |V|)\), each column is the representation of \(w_i\)
hidden layer: \(\hat v = \dfrac{V.dot(x_1)+...+V.dot(x_{2m})}{2m}\), \(\hat v.shape = (n,1)\)
\(W_2\): output matrix U, \(W_2.shape = (|V|, n)\), each row is the representation of \(w_i\)
score: \(u.shape = (|V|, 1)\)
output: \(\hat y = softmax(u)\), \(\hat y.shape=(|V|,1)\)

cross entropy: \[H(\hat y, y) = -\sum_{j=1}^{|V|}y_jlog(\hat y_j)\]

Because y is a one-hot vector, and i is the index whose value is 1: \[H(\hat y, y) = -y_ilog(\hat y_i)\]

Look at the paper word2vec Parameter Learning Explained; it is very careful and very well written! Its symbols are different from the ones used above.

4.2 one word context inference
4.3 one word context backpropagation
4.4 multi-words context

5. Skip-gram

\[ \begin{align} minimize J &=-logP(w_{c-m},...,w_{c-1},w_{c+1},..,w_{c+m}|w_c)\\ &=-log\prod_{j=0,j\neq m }^{2m}P(w_{c-m+j}|w_c)\\ &=-\sum_{j=0,j\neq m}^{2m}logP(w_{c-m+j}|w_c)\tag{3}\\ &=-\sum_{j=0,j\neq m}^{2m}log\dfrac{exp(u_{c-m+j}^Tv_c)}{\sum_{k=1}^{|V|}exp(u_{k}^Tv_c)}\tag{4}\\ &=-\sum_{j=0,j\neq m}^{2m}u_{c-m+j}^Tv_c+2m\ log\sum_{k=1}^{|V|}exp(u_{k}^Tv_c) \end{align} \]

Important: from word to vector. From (3) to (4), we use the softmax to represent the probability \[P(u_{c-m+j}|w_c) = \dfrac{exp(u_{c-m+j}^Tv_c)}{\sum_{k=1}^{|V|}exp(u_k^Tv_c)}\tag{*}\]

V is the input matrix, U is the output matrix.

5.1 We can use the simple neural networks to train matrix weights
5.2 inference and backpropagation

Skip-gram treats each context word equally: the model computes the probability for each word of appearing in the context independently of its distance to the center word.

6. Optimizing Computational Efficiency

6.1 Hierarchical Softmax

6.2 Negative Sampling

loss function: \[E = -log\sigma(v'^T_{w_O}h)-\sum_{w_j\in W_{neg}}log\sigma(-v'^T_{w_j}h)\]

In CBOW, \(h=\dfrac{1}{C}\sum_{c=1}^C v_{w_c}\); in skip-gram, \(h=v_{w_I}\).

How to choose the K negative samples? As described in (Mikolov et al., 2013b), word2vec uses a unigram distribution raised to the 3/4th power for the best quality of results.

The basic idea is to convert a multinomial classification problem (as it is the problem of predicting the next word) to a binary classification problem. That is, instead of using softmax to estimate a true probability distribution of the output word, a binary logistic regression (binary classification) is used instead. For each training sample, the enhanced (optimized) classifier is fed a true pair (a center word and another word that appears in its context) and a number of k randomly corrupted pairs (consisting of the center word and a randomly chosen word from the vocabulary). By learning to distinguish the true pairs from corrupted ones, the classifier will ultimately learn the word vectors. This is important: instead of predicting the next word (the "standard" training technique), the optimized classifier simply predicts whether a pair of words is good or bad.
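To make the notation concrete, here is a tiny NumPy sketch of a single CBOW forward pass (my own toy example; the vocabulary size, embedding dimension and word indices are made up):

import numpy as np

# Toy CBOW forward pass under the notation above:
# V ("input" matrix W1) has shape (n, |V|); U ("output" matrix W2) has shape (|V|, n).
rng = np.random.default_rng(0)
vocab_size, dim, m = 10, 4, 2            # |V|, embedding size n, context half-window m

V = rng.normal(size=(dim, vocab_size))   # input word matrix, columns are v_i
U = rng.normal(size=(vocab_size, dim))   # output word matrix, rows are u_i

context_ids = [1, 2, 4, 5]               # indices of the 2m context words
center_id = 3

v_hat = V[:, context_ids].mean(axis=1)   # average of the context input vectors
scores = U @ v_hat                       # u_j^T v_hat for every j
probs = np.exp(scores - scores.max())
probs /= probs.sum()                     # softmax
loss = -np.log(probs[center_id])         # cross-entropy with the one-hot center word
print(loss)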
Missing values occur in many domains and most datasets contain missing values (due to non-responses, lost records, machine failures, dataset fusions, etc.). These missing values have to be considered before or during analyses of these datasets. Now, if you have a method that deals with missing values, for instance imputation or estimation with missing values, how can you assess the performance of your method on a given dataset? If the data already contains missing values, then this does not help you since you generally do not have a ground truth for these missing values. So you will have to simulate missing values, i.e. you remove values -- which you therefore know to be the ground truth -- to generate missing values.

The mechanisms generating missing values can be various but usually they are classified into three main categories defined by Rubin 1976: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). The first two are also qualified as ignorable missing values mechanisms, for instance in likelihood-based approaches to handle missing values, whereas the MNAR mechanism generates nonignorable missing values. In the following we will briefly introduce each mechanism (with the definitions used widely in the literature) and propose ways of simulating missing values under these three mechanism assumptions. For more precise definitions we refer to references in the bibliography on the R-miss-tastic website.

Let's denote by $\mathbf{X}\in\mathcal{X_1}\times\dots\times\mathcal{X_p}$ the complete observations. We assume that $\mathbf{X}$ is a concatenation of $p$ columns $X_j\in\mathcal{X_j}$, $j\in\{1,\dots,p\}$, where $dim(\mathcal{X_j})=n$ for all $j$. The data can be composed of quantitative and/or qualitative values, hence $\mathcal{X_j}$ can be $\mathbb{R}^n$, $\mathbb{Z}^n$ or more generally $\mathcal{S}^n$ for any discrete set $\mathcal{S}$.

Missing values are indicated as np.nan (not a number) and we define an indicator matrix $\mathbf{R}\in\{0,1\}^{n\times p}$ such that $R_{ij}=1$ if $X_{ij}$ is observed and $R_{ij}=0$ otherwise. We call this matrix $\mathbf{R}$ the response (or missingness) pattern of the observations $\mathbf{X}$. According to this pattern, we can partition the observations $\mathbf{X}$ into observed and missing: $\mathbf{X} = (\mathbf{X}^{obs}, \mathbf{X}^{mis})$.

We generate a small example of two-dimensional normal observations $\mathbf{X} \sim \mathbf{N}(\mu, I_2), \mu \in \mathbb{R}^2$

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

# Generate the complete data
np.random.seed(0)  # fix the seed

n_samples = 900
# Generate normal data with Id_2 covariance.
mean = (0, 0)
cov = [[1, 0], [0, 1]]
X_complete = np.random.multivariate_normal(mean, cov, size=n_samples)
# X_complete = np.random.uniform(0, 1, size=(n_samples, 2))

# plot the data
sns.jointplot(X_complete[:, 0], X_complete[:, 1], label='Complete data');

We want to simulate a matrix X_obs from X_complete with missing entries (replacing some entries by np.nan). To do so we sample the indicator matrix R (missing pattern) with the same shape as X_complete indicating non-missing entries: $X_{obs} = X_{complete} \cdot \mathbb{1}_{\{R=1\}} + \mbox{np.nan} \cdot \mathbb{1}_{\{R=0\}}$

The observations are said to be Missing Completely At Random (MCAR) if the probability that an observation is missing is independent of the variables and observations: the probability that an observation is missing does not depend on $(\mathbf{X}^{obs},\mathbf{X}^{mis})$.
Formally this is: $$\mathbb{P}_R(R\,|\, X^{obs}, X^{mis}; \phi) = \mathbb{P}_R(R) \qquad \forall \, \phi.$$

In Python, the easiest way is to sample a Bernoulli mask:

import warnings

def ampute_mcar(X_complete, missing_rate=.2):
    # Mask completely at random some values
    M = np.random.binomial(1, missing_rate, size=X_complete.shape)
    X_obs = X_complete.copy()
    np.putmask(X_obs, M, np.nan)
    print('Percentage of newly generated missing values: {}'.format(
        np.round(np.sum(np.isnan(X_obs)) / X_obs.size, 3)))
    # warning if a full row is missing
    for row in X_obs:
        if np.all(np.isnan(row)):
            warnings.warn('Some row(s) contains only nan values.')
            break
    # warning if a full col is missing
    for col in X_obs.T:
        if np.all(np.isnan(col)):
            warnings.warn('Some col(s) contains only nan values.')
            break
    return X_obs

X_obs_mcar = ampute_mcar(X_complete)

Percentage of newly generated missing values: 0.192
/home/thomas/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: Some row(s) contains only nan values.
  from ipykernel import kernelapp as app

print('X_obs_mcar = ')
print(X_obs_mcar[:10])
print(' ...')

X_obs_mcar =
[[ 1.76405235         nan]
 [ 0.97873798  2.2408932 ]
 [ 1.86755799 -0.97727788]
 [        nan         nan]
 [-0.10321885  0.4105985 ]
 [ 0.14404357  1.45427351]
 [ 0.76103773  0.12167502]
 [ 0.44386323  0.33367433]
 [        nan -0.20515826]
 [        nan         nan]]
 ...

# plotting functions with seaborn
# scripts are at: https://github.com/R-miss-tastic/website/blob/master/static/how-to/python/utils_plot.py
from utils_plot import (hist_plot, scatter_plot_obs, scatter_plot_with_missing_completed)

print('Data with "MCAR" mechanism.')
scatter_plot_obs(X_obs_mcar)

Data with "MCAR" mechanism.

print('Plot completed with X_complete information.')
scatter_plot_with_missing_completed(X_obs_mcar, X_complete)

Plot completed with X_complete information.

hist_plot(X_obs_mcar, X_complete)

From the plots and histograms below, we can infer that missingness is independent of the observed and missing values.

The observations are said to be Missing At Random (MAR) if the probability that an observation is missing only depends on the observed data $\mathbf{X}^{obs}$. Formally, $$\mathbb{P}_R(R\,|\,X^{obs},X^{mis};\phi)=\mathbb{P}_R(R\,|\,X^{obs};\phi) \qquad \forall \,\phi,\, \forall \, X^{mis}.$$

Contrary to R, there is no Python implementation of amputation yet.

# The following code is work in progress
from sklearn.preprocessing import normalize

def ampute_mar(X_complete, missing_rate=.2, W=None):
    """
    Observed values will censor the missing ones.

    The proba of being missing: M_proba = X_obs.dot(W)
    So for each sample, some observed features (P=1) will influence the missingness
    of some other features (P=0) w.r.t. the weight matrix W (shape n_features x n_features).

    e.g.
    during a questionnaire, those who said they were busy (X_obs[:, 0] = 1)
    usually miss to fill the last question (X_obs[:, -1] = np.nan).
    So here W[0, -1] = 1
    """
    X_obs = X_complete.copy()
    M_proba = np.zeros(X_obs.shape)
    if W is None:
        # generate the weight matrix W
        W = np.random.randn(X_complete.shape[1], X_complete.shape[1])
    # Several iterations to have room for high missing_rate
    for i in range(X_obs.shape[1] * 2):
        # Sample a pattern matrix P
        # P[i,j] = 1 will correspond to an observed value
        # P[i,j] = 0 will correspond to a potential missing value
        P = np.random.binomial(1, .5, size=X_complete.shape)
        # potential missing entries do not take part in the missingness computation
        X_not_missing = np.multiply(X_complete, P)
        # sample from the proba X_obs.dot(W)
        sigma = np.var(X_not_missing)
        M_proba_ = np.random.normal(X_not_missing.dot(W), scale=sigma)
        # not missing should have M_proba = 0
        M_proba_ = np.multiply(M_proba_, 1 - P)  # M_proba[P] = 0
        M_proba += M_proba_
    threshold = np.percentile(M_proba.ravel(), 100 * (1 - missing_rate))
    M = M_proba > threshold
    np.putmask(X_obs, M, np.nan)
    print('Percentage of newly generated missing values: {}'.format(
        np.sum(np.isnan(X_obs)) / X_obs.size))
    return X_obs

W = np.array([[0, 10], [0, 0]])
# With this weight matrix W,
# missingness of X[:,1] depends on X[:,0] values
X_obs_mar = ampute_mar(X_complete, W=W)

Percentage of newly generated missing values: 0.2

print('Data with "MAR" mechanism.')
scatter_plot_obs(X_obs_mar)

Data with "MAR" mechanism.

From the scatter plot we see that the missing values with nan on the X[:,1] coordinate (y-axis), which are represented as orange stars at the bottom, are highly correlated with the X[:,0] values (x-axis): X[i,0] > .5 => X[i,1] = np.nan with high probability.

print('Samples completed with X_complete information.')
scatter_plot_with_missing_completed(X_obs_mar, X_complete)

Samples completed with X_complete information.

hist_plot(X_obs_mar, X_complete)

The histograms are overlapping: missingness does not depend on the missing values.

Otherwise, if neither MCAR nor MAR, values are Missing Not At Random (MNAR). Here we ampute high values with high probability. It might be called "censoring".

from scipy.special import expit as sigmoid  # logistic function

def ampute_mnar(X_complete, missing_rate=.2):
    """
    ampute X_complete with censoring (Missing Not At Random)

    The missingness depends on the values. This will tend to "censor" X[i,j]
    where X[i,j] is high compared to its column X[:,j]
    """
    # M depends on X_complete values
    M_proba = np.random.normal(X_complete)
    M_proba = normalize(M_proba, norm='l1')
    # compute threshold wrt missing_rate
    threshold = np.percentile(M_proba.ravel(), 100 * (1 - missing_rate))
    M = M_proba > threshold
    X_obs = X_complete.copy()
    np.putmask(X_obs, M, np.nan)
    print('Percentage of newly generated missing values: {}'.format(
        np.sum(np.isnan(X_obs)) / X_obs.size))
    return X_obs

X_obs_mnar = ampute_mnar(X_complete)

Percentage of newly generated missing values: 0.2

scatter_plot_obs(X_obs_mnar)
print('X_obs with "MNAR" mechanism')

X_obs with "MNAR" mechanism

print('Samples completed with X_complete information.')
scatter_plot_with_missing_completed(X_obs_mnar, X_complete)

Samples completed with X_complete information.

hist_plot(X_obs_mnar, X_complete)

The distribution of missing values is shifted to the right (compared to the observed values' distribution). The missingness pattern depends on the values of the missing entries.
Part 1 of (9): ##\phi: R \to S## is a ring epimorphism. Define ##I:= \phi^{-1}(J)##. It is well known that the inverse image of an ideal is an ideal, thus ##I## is an ideal. Define ##\psi: R/I \to S/J: [r] \mapsto [\phi(r)]##. This is well defined: If ##r \in I##, then ##\phi(r) \in J##. Clearly, this is also a ring morphism. For injectivity, assume ##[\phi(r)] = 0##, then ##\phi(r) \in J##, and ##r \in \phi^{-1}(J) = I##, thus ##[r] = 0##. The kernel is trivial and the map is injective. Surjectivity follows immediately by surjectivity of ##\phi##. It follows that ##\psi## is an isomorphism, and thus ##R/I \cong S/J##.
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...

Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
In my previous post on the major index, I mentioned the statistics $\DeclareMathOperator{\inv}{inv} \DeclareMathOperator{\maj}{maj} \DeclareMathOperator{\code}{code} \inv$ and $\maj$ on permutations, and the fact that they were equidistributed. I have since learned of a more enlightening way of proving this result, due to Carlitz. Let’s start by recalling the definitions of $\inv$ and $\maj$. An inversion of a permutation of the numbers $1,2,\ldots,n$ is a pair of numbers which appear “out of order” in the sequence, that is, with the larger number appearing first. For instance, for $n=4$, the permutation $3,1,4,2$ has three inversions: $(3,1)$, $(3,2)$, and $(4,2)$. We write $$\inv(3142)=3.$$ For the major index, define a descent of a permutation to be an index $i$ for which the $i$th number is greater than the $(i+1)$st. The major index is defined to be the sum of the descents. For instance, the permutation $3,1,4,2$ has two descents, in positions $1$ and $3$, so the major index is $4$. We write $$\maj(3142)=4.$$ It turns out that \begin{eqnarray} \sum_{w\in S_n} q^{\inv(w)} &=& (1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}) \label{eqn} \\ &=& \sum_{w\in S_n}q^{\maj(w)}, \end{eqnarray} so we say that $\inv$ and $\maj$ are equidistributed, that is, the number of permutations having $\maj w=k$ is equal to the number having $\inv w=k$ for all $k$. Carlitz’s proof, which is explained beautifully on pages 5 and 6 of this paper of Mark Skandera, cleanly shows each equality directly, without first requiring any algebraic manipulation. (Carlitz’s original paper can be found here, and it seems that by creating a MyJSTOR account, you can now access this article online for free! Thumbs up to this recent step towards open access.) The main idea of Carlitz’s bijection is to consider an intermediate combinatorial object called a permutation code. Given a permutation $w=w_1, w_2,\ldots,w_n$ of $\{1,2,\ldots,n\}$, define $\code(w)=a_1a_2\cdots a_n$ where $a_i$ is the number of entries of $w$ to the right of $w_i$ which are less than $w_i$. For instance, we have $$\code(41532)=30210,$$ since the $4$ is part of an inversion with three entries to its right, the $1$ with none, and so on. It is not hard to see that each permutation has a unique code, and a sequence is the code of some permutation if and only if its last entry is (at most) $0$, its second-to-last entry is at most $1$, its third-to-last entry is at most $2$, and so on. To formalize this, we define a code of length $n$ to be a sequence of $n$ nonnegative integers such that the $i$th entry is no larger than $n-i$. There are clearly $n!$ codes (one choice for the last digit, two choices for the second-to-last, and so on), and furthermore the $q$-series that counts codes by the sum of their entries is given by $$\sum_{c\text{ code}\\ |c|=n} q^{\sum c_i}=(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}).$$ The bijection $\code$ described above sends a permutation $w$ with $\inv w=k$ to a code having sum $k$, and so this immediately shows that \begin{eqnarray*} \sum_{w\in S_n} q^{\inv w}&=&\sum_{c\text{ code} |c|=n} q^{\sum c_i} \\ &=&(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}). \end{eqnarray*} So, it now suffices to find a bijection between codes of length $n$ and permutations of $n$, which I’ll call $\phi$, which sends the sum of the code to $\maj$ of the permutation. 
Given a code of length $n$, we read it backwards, and at each step insert the numbers $n, n-1,\ldots$ successively to form a permutation, so that at each step, the major index of the resulting word increases by precisely the code number read at each step. For example, consider the code $30210$ from above. We read it backwards, so the first entry is $0$, and we first place a $5$ in our permutation. So our first step yields the sequence: $$5$$ The next code entry from the right is $1$, so we must insert the $4$ in a way that increases the major index by exactly one. We see that the only way to do this is to insert it to the right of the $5$: $$5,4$$ The next code entry is $2$, so we insert the $3$ to the right of the $4$, as this is the only way to increase the major index by $2$: $$5,4,3$$ The next entry is $0$, and the only place to insert a $2$ so that the major index does not change is before the $3$. So our sequence is now: $$5,4,2,3$$ Finally, we insert the $1$ so as to increase the major index by $3$: $$5,4,2,1,3$$ So $\phi(30210)=54213.$ It is not hard to show that there is always exactly one valid insertion of the next number at each step in the process, and hence $\phi$ is a bijection, with the property that $\sum(c)=\maj(\phi(c))$ for any code $c$ of length $n$. This implies immediately that \begin{eqnarray*} \sum_{w\in S_n} q^{\maj w}&=&\sum_{c\text{ code} |c|=n} q^{\sum c_i} \\ &=&(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}), \end{eqnarray*} and the desired equation follows.
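For readers who like to experiment, here is a short Python sketch of my own implementing the code statistic and checking the equidistribution of $\inv$ and $\maj$ for a small $n$ (the inverse bijection $\phi$ is omitted):

from itertools import permutations
from collections import Counter

def code(w):
    # a_i = number of entries to the right of w_i that are smaller than w_i
    return tuple(sum(1 for x in w[i + 1:] if x < wi) for i, wi in enumerate(w))

def inv(w):
    return sum(code(w))

def maj(w):
    # sum of the positions i (1-based) with w_i > w_{i+1}
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

print(code((4, 1, 5, 3, 2)))   # (3, 0, 2, 1, 0), the example 30210 above

n = 5
perms = list(permutations(range(1, n + 1)))
assert Counter(inv(w) for w in perms) == Counter(maj(w) for w in perms)
print("inv and maj are equidistributed for n =", n)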
How can I find the frequency response of the following linear system? OK, in the comments you've shown that you can actually do it by yourself. This answer just summarizes the steps and shows you how to write your (correct) result in the form that you were given as an answer. First of all, note that you have two identical systems in series. If $G(\omega)$ is the frequency response of the system up to and including the first integrator (from the left), then the total frequency response is $$H(\omega)=G^2(\omega)\tag{1}$$ The response $G(\omega)$ can be seen as a multiplication of two subsystems, the system up to and including the adder, and the integrator. The first of the two has a frequency response $$G_1(\omega)=1-e^{-j\omega T}\tag{2}$$ Since $G_1(\omega)$ has a zero at $\omega=0$, we don't need to bother with any problems of the integrator at $\omega=0$ (I'm talking about the delta impulse). So for calculating $G(\omega)=G_1(\omega)G_2(\omega)$, we can simply use $$G_2=\frac{1}{j\omega}\tag{3}$$ which gives $$G(\omega)=\frac{1-e^{-j\omega T}}{j\omega}\tag{4}$$ and, from (1), $$H(\omega)=\frac{(1-e^{-j\omega T})^2}{(j\omega)^2}\tag{5}$$ In order to get from (5) the answer that you were given, use this trick to rewrite $G_1(\omega)$: $$G_1(\omega)=e^{-j\omega T/2}(e^{j\omega T/2}-e^{-j\omega T/2})=2je^{-j\omega T/2}\sin(\omega T/2)\tag{6}$$ Using (6) you can rewrite $G(\omega)$ as $$G(\omega)=2e^{-j\omega T/2}\frac{\sin(\omega T/2)}{\omega}= Te^{-j\omega T/2}\frac{\sin(\omega T/2)}{\omega T/2}\tag{7}$$ Squaring (7), using $\omega=2\pi f$ and $\text{sinc}(x)=\sin(\pi x)/(\pi x)$ will give you the final form of the result. Neat. This one you could also do in the time domain. Let's make x(t) a Dirac impulse. At t=0 it gets into the integrator and the output of the integrator is 1 for t >= 0. Now at t=T, the negative impulse gets into the integrator; the result of this is -1 for t >= T. This cancels the original impulse and the output becomes zero again. So the impulse response of the first stage is basically a rectangle: h(t) = 1, for 0 <= t < T, 0 otherwise. This corresponds, of course, to the sinc function in the frequency domain. Cascading two systems is done by convolving the two impulse responses, so we have to convolve two rectangles, which simply results in a triangle.
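A quick numerical cross-check of (5) against the squared-sinc form (my own snippet; the value T = 1 is an arbitrary choice):

import numpy as np

T = 1.0
f = np.linspace(0.01, 5, 500)            # avoid f = 0, where (5) is of the form 0/0
w = 2 * np.pi * f

H = (1 - np.exp(-1j * w * T))**2 / (1j * w)**2
H_sinc = T**2 * np.exp(-1j * w * T) * np.sinc(f * T)**2   # np.sinc(x) = sin(pi x)/(pi x)

print(np.allclose(H, H_sinc))            # True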
Revista Matemática Iberoamericana, Volume 24, Issue 3, 2008, pp. 865–894. DOI: 10.4171/RMI/558. Published online: 2008-12-31.

The real genus of the alternating groups
José Javier Etayo Gordejuela (1) and Ernesto Martínez (2)
(1) Universidad Complutense de Madrid, Spain (2) UNED, Madrid, Spain

A Klein surface with boundary of algebraic genus $\mathfrak{p}\geq 2$ has at most $12(\mathfrak{p}-1)$ automorphisms. The groups attaining this upper bound are called $M^{\ast}$-groups, and the corresponding surfaces are said to have maximal symmetry. The $M^{\ast}$-groups are characterized by a partial presentation by generators and relators. The alternating groups $A_{n}$ were proved to be $M^{\ast}$-groups when $n\geq 168$ by M. Conder. In this work we prove that $A_{n}$ is an $M^{\ast}$-group if and only if $n\geq 13$ or $n=5,10$. In addition, we describe topologically the surfaces with maximal symmetry having $A_{n}$ as automorphism group, in terms of the partial presentation of the group. As an application we determine explicitly all such surfaces for $n\leq 14$. Each finite group $G$ acts as an automorphism group of several Klein surfaces. The minimal genus of these surfaces is called the real genus of the group, $\rho(G)$. If $G$ is an $M^{\ast}$-group then $\rho(G)=\frac{o(G)}{12}+1$. We end our work by calculating the real genus of the alternating groups which are not $M^{\ast}$-groups.

Keywords: Alternating groups, real genus, M*-groups, bordered Klein surfaces

Etayo Gordejuela, José Javier; Martínez, Ernesto: The real genus of the alternating groups. Rev. Mat. Iberoam. 24 (2008), 865–894. doi: 10.4171/RMI/558
A small fact got me into trouble (spoiler: the intercept in effects coding represents the mean of conditions, not the data-mean).

Update 2016-08-11: I found a nice paper that remedies the last point: weighted effects coding
Update 2018-11-30: If you enjoyed reading this post, check out my successor post on Effect/Sum Coding
Update 2019-07-22: I highly recommend this recent paper: How to capitalize on a priori contrasts in linear (mixed) models: A tutorial (2019), which will explain all things of this blogpost in more space, more examples, more code and with better words!

The goal

We try to model two factors with two levels using a linear model. We therefore need a schema to model categorical variables as if they were continuous variables.

2×2 “ANOVA”

As you see here, there are two Factors, A and B, with two levels each. For example A could be “Drives a Car”, and B could be “Owns a Suit”. The dependent variable, which we try to explain, could be “Total Money”. Alternatively in Cognitive-Neuroscience, A could be “Drank coffee before experiment” and B could be “Slept before experiment”; the dependent variable in that case could be “Alpha-Band-EEG-Amplitude”. The data are split up by A, and color coded by B. A in addition is shape-coded. If A is “no” as well as B, we get the smallest dependent variable; if both are “yes” we get the largest one. I already put the linear regression model we are going to use in the image, as well as the respective means.

Main Effects

We are usually interested in the “main effects”, which are depicted in the next picture. How much does the dependent variable change if we move from A “no” to A “yes”? In this case the main effect of A is 30, and of B is 40. These categorical main effects can be estimated by linear regression. Because naively linear regression only works for continuous variables, we need a way to describe the categorical variables as continuous variables. In principle we want to fit the red and cyan lines depicted in the plots. There are two often used methods to solve this with linear model coding: Dummy Coding and Effects Coding.

Dummy Coding

Let’s start with Dummy Coding. We simply set the first level (‘no’) to 0, and the second (‘yes’) to 1. That is, we think of it as a continuous variable which has data only at two distinct values and code it as $X_A$. Thus for each factor we get one slope, which we call $\beta_A$ and $\beta_B$, but in addition we could have an interaction, thus we code this as well. The interaction is simply the multiplication of the two main Factors, thus it is coded with 1 only if A and B are both 1 as well ($\beta_{AB}$). We estimated the betas (see following image). How do we interpret the coefficients? From the image it should become clear that with dummy coding we are estimating the location of the cell means. In order to calculate the main effects we would need some additional calculations.

Effects Coding

For effects coding we set the ‘no’ to -1 (or -0.5 if you prefer) and the ‘yes’ to +1 (or +0.5). You can clearly see that the parameter estimates are the main effects, not the cell means anymore. $2\cdot\beta_A$ is the main effect of A. Why $\cdot 2$? Because we coded with -1 / +1 (thus the difference, e.g. the jump from -1 to 1, is 2); if we used -0.5 / +0.5 (thus the difference is 1, as in the dummy coding above), the parameter estimate would directly represent the main effects. We want to know the cell mean of B ‘yes’ if A is ‘no’. Thus $X_B = +1$ and $X_A = -1$.
As written before, $X_{AB}$ is the multiplication, thus $X_{AB} = X_B \cdot X_A = -1 \cdot +1 = -1$. $${\hat{y}} = \beta_0 + \beta_A \cdot -1+ \beta_B \cdot +1 + \beta_{AB} \cdot -1$$

Intercepts

As visible in the graphs, the intercept of dummy coding represents the reference category, here the value of A=’no’ and B=’no’. The intercept of effects coding represents the mean of the conditions. This can be very different from the total mean of the data if you have unbalanced data: here the A=’yes’ and B=’yes’ condition has 2.5 times more data points, thus it moves the total mean upwards. But the effects did not change, thus the condition means and the mean of the condition means did not change. It is actually quite useful that the effects coding intercept does not represent the total mean, but the mean of condition means. And that’s it. When should you use which? I don’t think there is a clear-cut case for one or the other. It boils down to interpretation and personal preference; in some cases it is more useful to have the one, and in others the other. See for example here under “why use effects coding”.
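Here is a small NumPy sketch (my own toy numbers, chosen to reproduce main effects of 30 and 40 with an unbalanced design) fitting the same saturated model under both codings; the printed intercepts illustrate the point about the reference category versus the mean of the condition means:

import numpy as np

# hypothetical cell means giving main effects of 30 (A) and 40 (B), no interaction;
# the (yes, yes) cell gets 2.5x more observations to make the design unbalanced
cells  = {('no', 'no'): 10, ('no', 'yes'): 50, ('yes', 'no'): 40, ('yes', 'yes'): 80}
counts = {('no', 'no'): 4,  ('no', 'yes'): 4,  ('yes', 'no'): 4,  ('yes', 'yes'): 10}

rows = [(a, b, cells[(a, b)]) for (a, b), n in counts.items() for _ in range(n)]
y = np.array([value for _, _, value in rows], dtype=float)

def design(coding):
    # columns: intercept, X_A, X_B, X_A * X_B
    return np.array([[1.0, coding[a], coding[b], coding[a] * coding[b]] for a, b, _ in rows])

for name, coding in [("dummy", {'no': 0, 'yes': 1}), ("effects", {'no': -1, 'yes': 1})]:
    beta, *_ = np.linalg.lstsq(design(coding), y, rcond=None)
    print(name, np.round(beta, 2))
# dummy:   [10. 30. 40.  0.]  -> intercept is the (no, no) cell mean
# effects: [45. 15. 20.  0.]  -> intercept is the mean of the 4 cell means (45), not the
#          grand mean of the unbalanced data (~54.5); 2*beta recovers the main effects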
The combinatorics behind the rule: symmetric function theory

$\newcommand{\PP}{\mathbb{P}} \newcommand{\CC}{\mathbb{C}} \newcommand{\RR}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Fl}{Fl} \DeclareMathOperator{\GL}{GL}$ The Littlewood-Richardson and Pieri rules come up in symmetric function theory as well, and the combinatorics is much easier to deal with in this setting. The ring of symmetric functions in infinitely many variables $x_1,x_2,\ldots$ is the ring $$\Lambda(x_1,x_2,\ldots)=\CC[x_1,x_2,\ldots]^{S_\infty}$$ of formal power series having bounded degree which are symmetric under the action of the infinite symmetric group on the indices. For instance, $x_1^2+x_2^2+x_3^2+\cdots$ is a symmetric function, because interchanging any two of the indices does not change the series.

The most important symmetric functions in this context are the Schur functions. They can be defined in many equivalent ways, from being characters of irreducible representations of $\GL_n$ to an expression as a ratio of determinants. We use the combinatorial definition here, since it is most relevant to this context. The Schur functions are the symmetric functions defined by $$s_\lambda=\sum_{T} x^T$$ where the sum ranges over all SSYT’s $T$ of shape $\lambda$.

It is known that the Schur functions are symmetric, they form a basis of $\Lambda_\CC$, and they satisfy the Littlewood-Richardson rule (see Fulton): $$s_\lambda\cdot s_\mu=\sum_{\nu} c^{\nu}_{\lambda\mu} s_\nu$$ the only difference being that here, the sum is not restricted by any Important Box. It follows that there is a surjective ring homomorphism $$\Lambda(x_1,x_2,\ldots)\to H^\ast(\Gr^n(\CC^m))$$ sending $s_\lambda\mapsto \sigma_\lambda$ if $\lambda$ fits inside the Important Box, and $s_\lambda\mapsto 0$ otherwise. In particular, this means that any relation involving symmetric functions translates to a relation on $H^\ast(\Gr^n(\CC^m))$. This connection makes the combinatorial study of symmetric functions an essential tool in Schubert calculus.
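As a tiny computational illustration (my own, not from the post), one can realize Schur polynomials in finitely many variables via the ratio-of-determinants (bialternant) formula and check the simplest instance of the rule, $s_{(1)} \cdot s_{(1)} = s_{(2)} + s_{(1,1)}$, in three variables:

import sympy as sp

xs = sp.symbols('x1 x2 x3')

def schur(lam, xs):
    # Schur polynomial via the bialternant formula det(x_i^{lam_j + n - j}) / det(x_i^{n - j})
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: xs[i] ** (lam[j] + n - 1 - j))
    den = sp.Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j))
    return sp.cancel(num.det() / den.det())

s1, s2, s11 = schur([1], xs), schur([2], xs), schur([1, 1], xs)
# Littlewood-Richardson / Pieri in a tiny case: s_1 * s_1 = s_2 + s_{1,1}
assert sp.expand(s1 * s1 - (s2 + s11)) == 0
print("s_1 * s_1 = s_2 + s_{1,1} checked in 3 variables")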
Abstract Let $g$ be a scattering metric on a compact manifold $X$ with boundary, i.e., a smooth metric giving the interior $X^\circ$ the structure of a complete Riemannian manifold with asymptotically conic ends. An example is any compactly supported perturbation of the standard metric on $\mathrm{R}^n$. Consider the operator $H = \frac{1}{2} \Delta + V$, where $\Delta$ is the positive Laplacian with respect to $g$ and $V$ is a smooth real-valued function on $X$ vanishing to second order at $\partial X$. Assuming that $g$ is nontrapping, we construct a global parametrix $\mathcal{U}(z, w,t)$ for the kernel of the Schrödinger propagator $U(t) = e^{-i t H}$, where $z, w \in X^{\circ}$ and $t \neq 0$. The parametrix is such that the difference between $\mathcal{U}$ and $U$ is smooth and rapidly decreasing both as $t \to 0$ and as $z \to \partial X$, uniformly for $w$ on compact subsets of $X^{\circ}$. Let $r = x^{-1}$, where $x$ is a boundary defining function for $X$, be an asymptotic radial variable, and let $W(t)$ be the kernel $e^{-ir^2/2t}U(t)$. Using the parametrix, we show that $W(t)$ belongs to a class of ‘Legendre distributions’ on $X \times X^{\circ} \times \mathbb{R}_{\geq 0}$ previously considered by Hassell-Vasy. When the metric is trapping, then the parametrix construction goes through microlocally in the nontrapping part of the phase space. We apply this result to determine the singularities of $U(t) f$, for any tempered distribution $f$ and for any fixed $t \neq 0$, in terms of the oscillation of $f$ near $\partial X$. If the metric is nontrapping then we precisely determine the wavefront set of $U(t) f$, and hence also precisely determine its singular support. More generally, we are able to determine the wavefront set of $U(t) f$ for $t> 0$, resp. $t < 0$ on the non-backward-trapped, resp. non-forward-trapped subset of the phase space. This generalizes results of Craig-Kappeler-Strauss and Wunsch.
7.3 Tests for convergence

There are many different tests that can be used to determine whether a sequence or series converges. I'll briefly state three of the most useful, with sketches of their proofs.

Bounded and increasing sequences: A sequence that always increases, but never surpasses a certain value, converges. This amounts to a restatement of the completeness axiom for the real numbers stated on page 157, and is therefore to be interpreted not so much as a statement about sequences but as one about the real number system. In particular, it fails if interpreted as a statement about sequences confined entirely to the rational number system, as we can see from the sequence 1, 1.4, 1.41, 1.414, ... consisting of the successive decimal approximations to √2, which does not converge to any rational-number value.

Example 1
◊ Prove that the geometric series 1+1/2+1/4+… converges.
◊ The sequence of partial sums is increasing, since each term is positive. Each term closes half of the remaining gap separating the previous partial sum from 2, so the sum never surpasses 2. Since the partial sums are increasing and bounded, they converge to a limit.

Once we know that a particular series converges, we can also easily infer the convergence of other series whose terms get smaller faster. For example, we can be certain that if the geometric series converges, so does the series whose terms get smaller faster than any base raised to the power n.

Alternating series with terms approaching zero: If the terms of a series alternate in sign and approach zero, then the series converges. Sketch of a proof: The even partial sums form an increasing sequence, the odd sums a decreasing one. Neither of these sequences of partial sums can be unbounded, since the difference between partial sums n and n+1 would then have to be unbounded, but this difference is simply the nth term, and the terms approach zero. Since the even partial sums are increasing and bounded, they converge to a limit, and similarly for the odd ones. The two limits must be equal, since the terms approach zero.

Example 2
◊ Prove that the series 1-1/2+1/3-1/4+… converges.
◊ Its convergence follows because it is an alternating series with decreasing terms. The sum turns out to be ln 2, although the convergence of the series is so slow that an extremely large number of terms is required in order to obtain a decent approximation.

The integral test: If the terms of a series $a_n$ are positive and decreasing, and $f(x)$ is a positive and decreasing function on the real number line such that $f(n)=a_n$, then the sum $\sum_{n=1}^\infty a_n$ converges if and only if $\int_1^\infty f(x)\,dx$ does. Sketch of proof: Since the theorem is supposed to hold for both convergence and divergence, and is also an “if and only if,” there are actually four cases to prove, of which we pick the representative one where the integral is known to converge and we want to prove convergence of the corresponding sum. The sum and the integral can be interpreted as the areas under two graphs: one like a smooth ramp and one like a staircase. Sliding the staircase half a unit to the left, it lies entirely underneath the ramp, and therefore the area under it is also finite.

Example 3
◊ Prove that the series 1+1/2+1/3+… diverges.
◊ The integral of 1/x is ln x, which diverges as x approaches infinity, so the series diverges as well.

The ratio test: If the limit $R=\lim_{n\to\infty}|a_{n+1}/a_n|$ exists, then the sum of $a_n$ converges if $R<1$ and diverges if $R>1$.
The proof can be obtained by comparing with a geometric series.

Example 4
◊ Prove that the series $1+1/2^2+1/3^3+\cdots$ converges.
◊ $R$ is easily proved to be 0, so the sum converges by the ratio test.

At this point it will seem like a mystery how anyone could have proved the exact results claimed for some of the “special” series, such as 1-1/2+1/3-1/4+…=ln 2. Problems like these are not the main focus of the chapter, and in fact there is no well-defined toolbox of techniques that will allow any such “nice” series to be evaluated exactly. Even a relatively innocent-looking example like $1^{-2}+2^{-2}+3^{-2}+\cdots$ defeated some of the best mathematicians of Europe for years (see problem 16, p. 116). It is currently unknown whether some apparently simple series such as $\sum_{n=1}^\infty 1/(n^3 \sin^2 n)$ converge.
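The slow convergence mentioned in Example 2 and the divergence in Example 3 are easy to see numerically; here is a short Python sketch (mine, not part of the text) printing partial sums:

from math import log

# partial sums of the alternating harmonic series (-> ln 2, slowly)
# and of the harmonic series (which diverges like ln n)
for N in (10, 100, 1000, 10000):
    alt = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
    harm = sum(1 / n for n in range(1, N + 1))
    print(N, round(alt, 6), round(abs(alt - log(2)), 6), round(harm, 3))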
A few months ago I discussed my approach towards effective information processing. Filtering useful content from large amounts of data is perhaps the most difficult step in this process. Nowhere is the problem more pronounced than on the internet where there are 10 billion indexable pages. How can useful information be found without investing the time to manually examine thousands of web pages? To remedy this problem, I use a machine learning tool to find useful information for me. This post explains some of the mechanics behind my personal recommendation engine called Pinborg. Pinborg is a mashup—part Pinboard URL aggregator and part machine learning cyborg. Pinborg takes a set of user provided URLs and recommends other URLs that will likely be useful to the user. I use Pinborg for two different applications—for general interests and for topic-specific research. If Pinborg is fed my entire bookmark collection it can find other URLs covering a broad range of topics that I will find interesting. I also use Pinborg for research by feeding it a given set of URLs focused on a narrow topic to find additional information about the same topic. The output of Pinborg is an RSS feed of my top scoring recommendations: I created Pinborg primarily as a playground to try out new machine learning, statistic, and data mining tools. A nice side-effect of this project is that I also get a tool that has a useful purpose. Pinborg is capable of finding things that search engines and commercial content recommender systems like Zite and Prismatic cannot. Part of Pinborg’s performance is attributed to a fantastic starting data set. In my previous post I highlighted why I like Pinboard for information mining: Pinboard represents a human curated pre-filtered information data set containing the best Internet content updated in nearly real time. Without quality data even the most sophisticated recommendation techniques are ineffective. I’ve found Pinboard superior to social networks and news aggregation sites for finding information because Pinboard costs money and people only save things they themselves find valuable. Another contributing factor to Pinborg’s performance is the use of matrix factorization (MF) techniques for generating URL recommendations. Many of the top algorithms from the Netflix competition also used latent factor models based on matrix factorization to recommend movies to Netflix customers. Matrix factorization is conceptually intuitive, displays high performance, and is useful in a large number of applications like image analysis, data compression, and gene expression analysis. In Pinborg, matrix factorization is used because of its high performance on sparse data with parts-based representations. The Netflix competition also demonstrated that recommendation systems frequently perform best by employing ensemble methodologies, which combine numerous techniques to make recommendations to users. The winning Netflix team used over 100 predictors to recommend new movies to Netflix customers. The Pinborg engine also uses several methods to recommend URLs, but in this post I will focus on the application of non-negative matrix factorization (NMF). 1 Below I provide an overview of NMF and some of the implementation I use in Pinborg. In Pinboard there are users and bookmarks. Generating the top URL recommendations for a user can be formalized as a discrete matrix completion problem. Let users be denoted as $ u_{1},u_{2},…,u_{m} $ and bookmarks be denoted as $ b_{1},b_{2},…,b_{n} $. 
In Pinborg, this data is expressed as a matrix $V$ composed of users $U$ and bookmarks $B$:

$$ V \in \{0,1\}^{U \times B} $$

The matrix is constructed as a proxy for bookmark preference. Let a given user's rating for a specific bookmark be denoted as $ v_{ub} $. If $ v_{ub} = 0 $, this represents an unknown—either a given user has seen a given bookmark and chosen not to save it or the user has simply never seen the bookmark. If a user has saved a given bookmark to Pinboard, let the initial value of $ v_{ub} = 1 $. Matrix factorization with discrete binary or boolean data is difficult, mathematically complex, and frequently shows poor predictive ability for unknown ratings in my experience. Therefore, a weight is applied to $ v_{ub} $ based on the popularity of the bookmark. The goal of NMF is to then use known $ v_{ub} $ values to learn user ratings where $ v_{ub} = 0 $.

To make recommendations, NMF is used to decompose $ V $ into two submatrices, $ M $ and $ N $, such that the product of the submatrices approximates the original data matrix $ V $. The factorization yields submatrices containing $ k $ latent features, which describe underlying attributes within the data. This decomposition allows $ v_{ub} $ to be approximated by calculating the inner product of the corresponding feature vectors:

$$ \hat{v}_{ub} = \sum_{j=1}^{k} m_{uj}n_{jb} $$

In Pinborg, latent features could describe attributes for a given URL, but the actual significance of the attributes is unknown. For example, users who like Apple products may share a bookmark about a git logging script from Brett Terpstra. The URL has nothing to do with Apple, however many Mac users follow Brett Terpstra's blog because he's a Mac developer. Users who have this bookmark may like Apple products, for example.

To approximate $ V $ and minimize the error $ \epsilon $, the system learns the model by fitting previously observed bookmarks using stochastic gradient descent, where $ \epsilon_{ub} $ is the associated prediction error of a given rating:

$$ m_{uk} := m_{uk} + \alpha(\epsilon_{ub}n_{kb} - \beta m_{uk}) $$
$$ n_{kb} := n_{kb} + \alpha(\epsilon_{ub}m_{uk} - \beta n_{kb}) \textrm{, where } $$
$$ \epsilon_{ub} = v_{ub} - \hat{v}_{ub} $$

Pinborg initializes $ M $ and $ N $ from a random distribution and then modifies the values within the submatrices accordingly to converge on a minimized error. The learning rate $ \alpha $ and regularization constant $ \beta $ are introduced to control the rate of convergence and overfitting, respectively.

Pinborg uses Python and runs on an Amazon EC2 instance. Public user bookmarks are collected from Pinboard every few minutes and inserted into a MongoDB database on an Elastic Block Store. Using the PyMongo API, Pinborg captures JSON data from Pinboard and processes the URL, user name, and user tags for each bookmark. Each document in the database contains a unique URL. An example document from the database looks like this:

{
    "url" : "http://en.wikipedia.org/wiki/Curse_of_dimensionality",
    "users" : [ "joey", "sally", "jenny" ],
    "tags" : [ "statistics", "machinelearning", "bayesian", "data_analysis" ]
}

When the database reaches 100,000 URLs, Pinborg carries out matrix factorization to make URL recommendations to the user. The matrix factorization component of Pinborg is heavily memory-bound as the matrix of users and bookmarks is held in memory while the computations are performed. The data is constructed as a sparse matrix with several billion unique values using SciPy.
These computations would be virtually impossible without SciPy, which provides speed, storage efficiency, and other tools that facilitate linear algebra. The sparse matrix is then converted to a NumPy array and NMF is carried out using the following method:

import scipy as sp

def nmf(self, V, M, N, K, iters=3000, error=0.02, alpha=0.005, beta=0.02):
    """Factor V into M (users x K) and N (bookmarks x K) by stochastic gradient descent."""
    N = N.T
    for t in xrange(iters):
        it = sp.nditer(V, flags=['multi_index'])
        while not it.finished:
            if it[0] > 0:                      # only known (non-zero) ratings are fitted
                e = 0
                value, x = (it[0], it.multi_index)
                eub = value - sp.dot(M[x[0], :], N[:, x[1]])   # prediction error for v_ub
                e = e + pow(eub, 2)
                for k in xrange(K):
                    M[x[0], k] += alpha * (eub * N[k, x[1]] - beta * M[x[0], k])
                    N[k, x[1]] += alpha * (eub * M[x[0], k] - beta * N[k, x[1]])
                    e = e + (beta / 2) * (pow(M[x[0], k], 2) + pow(N[k, x[1]], 2))
                if e < error:
                    break
            it.iternext()
    return M, N.T

What constitutes good or bad performance in Pinborg is subjective. To evaluate performance, I measure the number of my previously saved bookmarks that score in the top fifty recommendations returned by Pinborg. Using this simple metric, I've optimized the error threshold, learning rate, and regularization constant to yield the best recommendation performance with respect to computation time.

The final step of Pinborg is to generate an RSS feed of the top scoring recommendations. I use the Pinboard API for this task using python-pinboard, which provides an easy way to upload bookmarks to Pinboard. The method below reads a text file containing the top 20 URL recommendations and uploads the bookmarks to my Pinboard account.

import pinboard

def post_url(self, bm, title, pin_tags):
    """ Send new bookmarks to Pinboard """
    d = "{0}".format(self.cur_date).split("-")
    bm_date = (int(d[0]), int(d[1]), int(d[2]))
    self.p.add(url = bm, description = title, extended = "",
               tags = ['Pinborg'], date = bm_date)

Each uploaded URL is tagged with the Pinborg tag. Pinboard provides RSS feeds for individual user tags, so I simply subscribe to the Pinborg tag feed and all new recommendations made by Pinborg are available through this RSS subscription.

There are a number of excellent machine learning libraries for Python. These libraries would likely perform better than my implementation. I've chosen to write my own implementations for learning and experimentation purposes.
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...

Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
I'm trying to develop a thermoelectric (Peltier tile-based) beverage cooling system. Ideally, I'd also like a device to solve the warm beer problem if the thermodynamics give me any confidence. First, I'll detail my thought flow in the theory sense and then I'll get to my specific setup that I had in mind. Obviously, if one immerses a 100 W, 10% efficient Peltier tile in a glass of water at 298 K (let's say, containing exactly 10 mol), and if the container system is assumed to be adiabatic in nature, dropping the temperature to 274 K (roughly 34 °F) requires: $$ Q = m \, c_p \, \Delta T = 180.2 \times 4.1813 \times 24 = 18083.3 \:\mathrm{J} $$ Then, since the efficiency gives the nominal "heat transfer" power to be 10 W: $$ t = \frac{18083.3\:\mathrm{J}}{10\:\mathrm{J/s}} = 1808.33 \:\mathrm{s} $$ So, the cooling would take about 30 minutes to chill the glass down, and considerably more once you consider that a glass of water doesn't adhere closely to adiabatic behavior. The actual setup (poorly-drawn hand sketch follows): Now, obviously there are four sets of values to consider here: The specific heat and thermal conductivity of the beverage itself. The specific heat and thermal conductivity of the glass container holding the beverage, easily approximated by making the assumption that the glass can be characterized by fused silica. The specific heat and thermal conductivity of the water transfer fluid, well-established. The specific heat and thermal conductivity of the 6061 aluminum working chamber, which is again well-established. My problem is how to model the system and calculate the transfer at each step, so that, for example, I can solve for how much a given cumulative wattage of Peltier tiles will reduce the temperature of the system in a given amount of time. As before, I'm willing to concede, for the purposes of discussion, that this system can be assumed to be adiabatic with respect to the external environment; that is, no heat is absorbed from the environment during the cooling process.
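One way to get a first-order answer to a question like this is a lumped-capacitance model: treat the beverage, glass, transfer fluid and aluminium chamber as four thermal masses connected by thermal resistances and integrate the resulting ODEs. The sketch below is a minimal illustration of that idea, not a validated design tool; all thermal masses, resistances and the constant 10 W of heat removal are hypothetical placeholder values.

    import numpy as np

    # Lumped-capacitance sketch: four nodes (beverage, glass, water jacket, Al chamber).
    # C[i] = thermal mass in J/K (mass * specific heat), hypothetical values
    # R[i] = thermal resistance in K/W between node i and node i+1, hypothetical values
    C = np.array([180.2 * 4.18, 200.0 * 0.74, 500.0 * 4.18, 300.0 * 0.90])  # J/K
    R = np.array([0.5, 0.3, 0.2])    # K/W between adjacent nodes
    P_cool = 10.0                    # W removed from the aluminium chamber by the tiles

    T = np.full(4, 298.0)            # initial temperatures, K
    dt, t = 1.0, 0.0                 # 1 s time step
    while T[0] > 274.0:              # run until the beverage reaches 274 K
        q = (T[:-1] - T[1:]) / R     # heat flow from node i to node i+1, W
        dT = np.zeros(4)
        dT[:-1] -= q / C[:-1]        # heat leaving node i
        dT[1:]  += q / C[1:]         # heat arriving at node i+1
        dT[-1]  -= P_cool / C[-1]    # Peltier removes heat from the chamber node
        T += dT * dt
        t += dt
    print("time to chill beverage: %.0f s" % t)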
Introduction: Why is it that whenever balloons are inflated they converge towards the shape of a sphere regardless of their initial geometry? On one level this may be a purely geometrical problem due to thermodynamic constraints on a body with finite surface area. This suggests the necessity of solving a global optimisation problem. However, the sequence of deformations undergone may be facilitated by the elastic material the balloons are made of. In this article I consider the contribution of the latter by analysing the problem in two dimensions and demonstrate that a minimal surface may be entirely due to local mechanical instabilities. The role of material properties: Let's consider an object that is only allowed to extend in one dimension. If you were to elongate such an object it would assume a roughly cylindrical shape. It follows that we must pay careful attention to the material properties of the balloon. Reasonable assumptions: A two-dimensional balloon is essentially an elastic loop that initially has a perimeter of length: \begin{equation} \lvert \partial \mathcal{B}(t=0) \rvert = l_0 \end{equation} Furthermore, we may make the following reasonable assumptions: The balloon contains an astronomical number of gas particles that collectively satisfy the ideal gas equation. The balloon is surrounded by a heat bath. The balloon itself is made of elastic filaments that form a Hamiltonian circuit. Furthermore, we may assume that the mechanical behaviour of the balloon is largely driven by energy-minimisation processes that I shall detail in the next couple of sections. Isobaric inflation as a consequence of energy minimisation: If we consider the force required to elongate the elastic boundary of the balloon we may define an associated potential energy function: \begin{equation} U((\lvert \partial \mathcal{B}(t) \rvert - l_0)^2) \geq 0 \end{equation} such that: \begin{equation} U(\cdot) = 0 \iff \lvert \partial \mathcal{B}(t) \rvert = l_0 \end{equation} Now, if we consider that physical systems tend to minimise potential energy we may infer that the balloon would tend to increase in volume without increasing $\lvert \partial \mathcal{B}(t) \rvert$, the length of its perimeter. In the case of inflation, after accumulating a pressure difference with respect to its environment, the evolution of $\mathcal{B}(t)$ would be guided by an approximately isobaric process provided that the elastic term $U$ remains negligible, i.e. $\lvert \partial \mathcal{B}(t) \rvert \approx l_0$: \begin{equation} PV = nRT \end{equation} \begin{equation} \frac{\Delta V}{V} = \frac{\Delta T}{T} \end{equation} We can go further with this type of reasoning. Not only does the elastic membrane constrain the type of thermodynamic process that is likely to guide inflation; it also constrains the mechanism for modifying the geometry of the balloon. Local deformations of elastic filaments lead to minimal surfaces: If we assume that the balloon contains an ideal gas that may be modelled as an astronomical number of Newtonian particles, it's reasonable to suppose that equal pressure is applied to equal areas. Now, if this is the case we may consider pressure-driven deformations of $\partial \mathcal{B}(t)$ that exploit a local mechanism that is operational everywhere on the boundary. What might such a mechanism look like? Under a coarse-grained approximation, the boundary consists of a large chain of cylindrical elastic rods. If each individual rod is much longer than the characteristic length over which bending occurs, any amount of bending will produce tensile stress.
It follows that the elastic membrane will try, as much as possible, to increase the enclosed volume while minimising the elongation globally. This global minimisation happens by minimising the bending angle locally. No global coordination is required. Another way of understanding this process is that deformations of the elastic membrane are mainly driven by local mechanical instabilities that lead to a global minimisation of potential energies. A polygonal approximation to two-dimensional elastic boundaries: One approach to modelling the activity of elastic boundaries is to approximate them as polygons with $N$ sides of equal length, where $N$ is large. Given that the sum of the interior angles must add up to $\pi \cdot (N-2)$, we may define the potential energy: \begin{equation} U = \frac{1}{2} \sum_{i=1}^N (\theta_i - \pi \cdot \big(\frac{N-2}{N}\big))^2 \end{equation} \begin{equation} \sum_{i=1}^N \theta_i = \pi \cdot (N-2) \end{equation} where: \begin{equation} \frac{\partial U}{\partial \theta_i} = \theta_i - \pi \cdot \big(\frac{N-2}{N}\big) \end{equation} \begin{equation} \Delta \theta_i \propto \frac{\partial U}{\partial \theta_i} \end{equation} and we find that if we choose the local update $\Delta \theta_i = \lambda \cdot \frac{\partial U}{\partial \theta_i}$ with $0 \lt \lambda \lt 1$: \begin{equation} \begin{split} \theta_i^{t+1} & = \theta_i^{t} - \Delta \theta_i \\ & = \theta_i^{t} - \lambda \frac{\partial U}{\partial \theta_i} \\ & = (1-\lambda) \cdot \theta_i^t + \lambda \cdot \pi \cdot \big(\frac{N-2}{N}\big) \end{split} \end{equation} and we can show that $\theta_i^t \to \pi \cdot \big(\frac{N-2}{N}\big)$ very quickly since: \begin{equation} x_{n+1} = (1-\lambda) \cdot x_n + \lambda \cdot \alpha \implies x_{n+1} - \alpha = (1-\lambda) \cdot (x_n - \alpha) \end{equation} \begin{equation} \frac{(x_{n+1}-\alpha)^2}{(x_n - \alpha)^2} = (1-\lambda)^2 \end{equation} so if we define: \begin{equation} \epsilon_{n+1}^2 = (x_{n+1}-\alpha)^2 \end{equation} \begin{equation} \epsilon_{n}^2 = (x_{n}-\alpha)^2 \end{equation} we find that: \begin{equation} \lim_{n \to \infty} \epsilon_{n+1}^2 = \epsilon_1^2 \cdot \prod_{n=1}^\infty \frac{\epsilon_{n+1}^2}{\epsilon_{n}^2} = \lim_{n \to \infty} \epsilon_1^2 \cdot (1-\lambda)^{2n} = 0 \end{equation} so we have exponentially fast convergence of every interior angle to the regular-polygon value, i.e. to a circular geometry, the two-dimensional analogue of a sphere. Discussion: In this article I propose the existence of a local mechanical instability present everywhere in a closed elastic membrane with aspherical geometry. Surprisingly, the net action of this instability leads to exponentially fast convergence to the global minimum. But this analysis may be refined. The above analysis is entirely based on phenomenological studies of rubber bands, by alternately dropping and manipulating rubber bands on a table. This led me to a useful phenomenological model which may help simulate the kinematics but doesn't approximate the forces involved, i.e. the dynamics.
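As a quick numerical sanity check of the angle-relaxation argument above (my own illustration, not part of the original derivation), the local update can be applied to a random set of interior angles that satisfies the polygon constraint; the maximum deviation from the regular-polygon angle is then seen to shrink like $(1-\lambda)^t$:

    import numpy as np

    N, lam, steps = 100, 0.1, 60          # number of sides, update rate, iterations
    target = np.pi * (N - 2) / N          # interior angle of a regular N-gon

    # random interior angles normalised so that sum(theta) = pi*(N-2)
    theta = np.random.uniform(0.5, 2.5, N)
    theta *= np.pi * (N - 2) / theta.sum()

    for t in range(steps):
        theta = (1 - lam) * theta + lam * target      # the local update from the text
        if t % 10 == 0:
            print(t, np.max(np.abs(theta - target)))  # decays like (1-lam)^t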
We are organizing a masterclass in Angers, from the 17th to the 19th of December, 2019. The organizing board will pay for housing with breakfast and for lunches. Dinners are not paid for by the organizers. For travel expenses, we will do our best within the limits of our budget. Registration is closed as we have reached the limit on the number of participants. Lectures will be in the morning sessions and exercises in the afternoon. We will have two parallel sessions: In this masterclass we will explain the formalism for encoding the infinitesimal deformations of structures coming from algebra, topology or geometry. We will discuss some applications, such as Kontsevich's quantization of Poisson structures and some quantum invariants in low-dimensional topology. The other masterclass is devoted to the proof of the celebrated Weyl asymptotic law. The aim is to estimate sums of eigenvalues of the Dirichlet Laplacian on a regular enough open set. The word "asymptotic" refers to the semiclassical limit $h\to 0$. More precisely, one will show that\[\begin{align*}\mathrm{Tr}(-h^2\Delta-1)_-=&L_d|\Omega|h^{-d}-\frac{1}{4}L_{d-1}|\partial\Omega|h^{-d+1}\\&+o(h^{-d+1})\end{align*}\]where\[L_d=(2\pi)^{-d}\int_{\mathbb{R}^d}(\xi^2-1)_-\,\mathrm{d}\xi\,.\]The meaning of the symbol $\mathrm{Tr}$ will be explained in the first part of the lecture through the theory of trace-class and Hilbert-Schmidt operators. In the second part, we will prove the asymptotic formula by means of "standard" semiclassical tools. This second part can be seen as a lecture/exegesis on the following article: R. L. Frank and L. Geisinger. Two-term spectral asymptotics for the Dirichlet Laplacian on a bounded domain. In Mathematical results in quantum physics, pages 138--147. World Sci. Publ., Hackensack, NJ, 2011.
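For orientation (this small computation is my own addition, not part of the announcement), the semiclassical constant $L_d$ can be evaluated in closed form using $(\xi^2-1)_-=(1-\xi^2)_+$ and polar coordinates:\[L_d=(2\pi)^{-d}\int_{\mathbb{R}^d}(1-\xi^2)_+\,\mathrm{d}\xi=(2\pi)^{-d}\,\frac{2\pi^{d/2}}{\Gamma(d/2)}\int_0^1(1-r^2)\,r^{d-1}\,\mathrm{d}r=\frac{1}{(4\pi)^{d/2}\,\Gamma\big(2+\frac{d}{2}\big)}\,,\]so that, for instance, $L_1=\frac{2}{3\pi}$ and $L_2=\frac{1}{8\pi}$.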
The problem does not have a polynomial kernel unless NP is in coNP/poly. The cross-composition technique from our paper applies in a nontrivial way. Let me show how the classic Vertex Cover problem OR-cross-composes into the $k$-FLIP SAT problem; by the results in the cited paper, this is sufficient. Concretely, we build a polynomial-time algorithm whose input is a sequence of Vertex Cover instances $(G_1,k), (G_2, k), \ldots, (G_t, k)$ that all share the same value of $k$ and all have exactly $n$ vertices. The output is an instance of $k$-FLIP SAT with a parameter value of $O(k + \log t)$, which is sufficiently small for a cross-composition, such that the $k$-FLIP SAT instance has answer yes iff one of the input graphs has a vertex cover of size $k$. By duplicating one input (which does not change the value of the OR) we can ensure that the number of inputs $t$ is a power of two. The composition proceeds as follows. Number the vertices in each input graph $G_i$ as $v_{i,1}, v_{i,2}, \ldots, v_{i,n}$. Make a corresponding variable in the $k$-FLIP SAT instance for each vertex of each input graph. Additionally, make a selector variable $u_i$ for each input instance number $i \in [t]$. For each input graph $G_i$, we add some clauses to the formula. For each edge $\{v_{i,x}, v_{i,y}\}$ of graph $G_i$, add the clause $(v_{i,x} \vee v_{i,y} \vee \neg u_i)$ to the formula, which encodes "either one of the endpoints of this edge is set to true, or instance $i$ is not active". In the initial assignment, all vertex-variables are set to false and all selector variables $u_i$ are set to false, so that these clauses are all satisfied. To build the OR-behavior into the composition we will augment the formula to ensure that a satisfying assignment sets at least one selector to true, and must then also form a vertex cover of the selected graph. To make sure we can do this selection while keeping the flip distance small compared to the number of inputs $t$, we use the structure of a complete binary tree with $t$ leaves, which has height $\log t$. Number the leaves from $1$ to $t$ and associate the $i$-th leaf with the variable $u_i$ that controls whether input $i$ is active or not. Create a new variable for each internal node of the binary tree. For each internal node, let its corresponding variable be $x$ and the variables of its two children be $y$ and $z$. Add the clause $(\neg x \vee y \vee z)$ to the formula, which captures the implication $(x \rightarrow (y \vee z))$, enforcing that $x$ can only be true if one of its children is true. To complete the formula, add a singleton clause saying that the variable of the root node of the binary tree must be true. In the initial truth assignment, the values of all variables for internal nodes are set to false, which satisfies all clauses of the formula except for the singleton clause requiring the root node of the tree to have its variable true. This completes the description of the formula and truth assignment. Set the parameter $k'$ of the $k$-FLIP SAT instance to be equal to $k + \log t + 1$, which is suitably bounded for a cross-composition. It remains to show that we can flip at most $k'$ variables to make the formula true iff some input graph $G_i$ has a vertex cover of size $k$. In the reverse direction, suppose that $G_i$ has a size-$k$ vertex cover. Set the $k$ variables corresponding to the $k$ vertices in the cover to true by flipping them.
Set the selector variable $u_i$ to true to encode that input $i$ is activated, and flip the variables of the $\log t$ internal binary tree nodes on the path of leaf $i$ to the root to true. It is easy to verify that this is a satisfying assignment: the implications in the binary tree are all satisfied, the root node's value is set to true, the clauses that check edges of $G_{i'}$ for $i' \neq i$ remain satisfied because $u_{i'}$ remains false, while the clauses for graph $G_i$ are satisfied because for every edge we set at least one endpoint to true. For the forward direction, suppose that the formula can be satisfied by flipping at most $k + \log t + 1$ variables. Then we must flip the variable of the root node to true. The implications in the binary tree enforce that at least one selector variable of a leaf is set to true, say $u_i$. To satisfy the implications encoded in the binary tree, all internal nodes on the path from $u_i$ to the root were set to true, accounting for $1 + \log t$ flips. Since $u_i$ is set to true, the clauses made for graph $G_i$ are not satisfied on the literal $\neg u_i$, so they are satisfied because one of the endpoints of each edge of $G_i$ is set to true. Since at least $1 + \log t$ variables of the binary tree were flipped, at most $k$ vertex-variables are flipped to true in this solution. This encodes a vertex cover of size $k$ in $G_i$ and proves that one of the inputs is a YES-instance. This completes the proof.
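To make the gadget concrete, here is a minimal sketch of how the composed formula and initial assignment could be generated (my own illustration with hypothetical data structures and a hypothetical helper name, not code from the paper). Instances are given as edge lists over vertices $0,\ldots,n-1$, the binary tree is stored heap-style with the root at index $1$ and the leaf for instance $i$ at index $t+i$:

    import math

    def compose(instances):
        """instances: list of edge lists; all Vertex Cover instances share the same k."""
        t = len(instances)
        t_pow = 1 << max(1, math.ceil(math.log2(t)))
        instances = instances + [instances[0]] * (t_pow - t)   # pad to a power of two
        t = t_pow

        var = {}                                   # name -> variable index (1-based)
        def v(name):
            return var.setdefault(name, len(var) + 1)

        clauses = []
        for i, edges in enumerate(instances):      # per-instance edge clauses
            for (x, y) in edges:                   # (v_ix or v_iy or not u_i)
                clauses.append([v(('x', i, x)), v(('x', i, y)), -v(('node', t + i))])

        for node in range(1, t):                   # internal tree nodes: x -> (y or z)
            clauses.append([-v(('node', node)),
                            v(('node', 2 * node)), v(('node', 2 * node + 1))])
        clauses.append([v(('node', 1))])           # the root must be true

        assignment = {idx: False for idx in var.values()}   # everything false initially
        return clauses, assignment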
Does it contradict the axioms of a field? I think not. For this to happen there would need to be an $a \in \mathbb{Z}_7$ with $3+a=0$ and $3\cdot a=1$, but I cannot find such an $a \in \mathbb{Z}_7$. If $a+b=0$, then $b=-a$. If simultaneously $1=ab$, then $1=-a^2$, which is equivalent to $a^2=-1$. This is possible in $\Bbb{Z}_p$ if and only if $p=2$ or $p\equiv1\pmod4$. So you're right. This cannot happen in $\Bbb{Z}_7$. Absolutely! I wrote a Python program a while ago to generate some examples for general quotient rings $\mathbb{Z}_n$. Here are some: $2$ and $3$ $\pmod{5}$; $3$ and $7$ $\pmod{10}$; $5$ and $8$ $\pmod{13}$; $4$ and $13$ $\pmod{17}$. Anyone interested can find my Python code here. In general, let's suppose a pair of such elements exists in the ring $\mathbb{Z}_m$. They will satisfy the system $x+y \equiv 0 \pmod{m}$ and $xy \equiv 1 \pmod{m}$; substituting $y \equiv -x$ into the second congruence gives $-x^2 \equiv 1 \pmod{m}$, so such a pair exists if and only if there exists an $x \in \mathbb{Z}_m$ such that $x^2 \equiv -1 \pmod{m}$. To expand on your specific question regarding $\mathbb{Z}_p$ when $p$ is prime, note that if there is an element $x$ such that $x^2 \equiv -1 \pmod{p}$, then in particular $x^4 \equiv 1 \pmod{p}$. That is, there must exist an element of order $4$ in the multiplicative group $\mathbb{Z}_p^\times$ (of order exactly $4$, since $x^2 \equiv -1 \not\equiv 1$ for odd $p$). Now, it is a theorem that $\mathbb{Z}_p^\times \cong \mathbb{Z}_{p-1}$ when $p$ is prime. Finally, since $\mathbb{Z}_{p-1}$ is cyclic, it will contain an element of order $4$ $\iff$ $4|(p-1)$. So for odd primes this phenomenon occurs in $\mathbb{Z}_p \iff p \equiv 1 \pmod{4}$; the case $p=2$ works trivially, since $1=-1$ there.
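The answerer's script itself isn't shown, but a minimal sketch of such a search (my own reconstruction, not the linked code) could look like this. It uses the observation above that a pair exists in $\mathbb{Z}_m$ exactly when $x^2 \equiv -1 \pmod{m}$ has a solution:

    def negative_inverse_pairs(m):
        """Pairs (x, y) in Z_m with x + y = 0 and x * y = 1 (mod m)."""
        return [(x, (-x) % m) for x in range(m) if (x * x) % m == m - 1]

    for m in range(2, 20):
        pairs = negative_inverse_pairs(m)
        if pairs:
            print(m, pairs)   # reproduces e.g. (2, 3) mod 5 and (3, 7) mod 10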
Astrophysics > Cosmology and Nongalactic Astrophysics
Title: Impact of kinetic and potential self-interactions on scalar dark matter
(Submitted on 3 Jun 2019 (v1), last revised 22 Aug 2019 (this version, v2))
Abstract: We consider models of scalar dark matter with a generic interaction potential and non-canonical kinetic terms of the K-essence type that are subleading with respect to the canonical term. We analyze the low-energy regime and derive, in the nonrelativistic limit, the effective equations of motions. In the fluid approximation they reduce to the conservation of matter and to the Euler equation for the velocity field. We focus on the case where the scalar field mass $10^{-21} \ll m \lesssim 10^{-4} \, {\rm eV}$ is much larger than for fuzzy dark matter, so that the quantum pressure is negligible on cosmological and galactic scales, while the self-interaction potential and non-canonical kinetic terms generate a significant repulsive pressure. At the level of cosmological perturbations, this provides a dark-matter density-dependent speed of sound. At the nonlinear level, the hydrostatic equilibrium obtained by balancing the gravitational and scalar interactions imply that virialized structures have a solitonic core of finite size depending on the speed of sound of the dark matter fluid. For the most relevant potential in $\lambda_4 \phi^4/4$ or K-essence with a $(\partial \phi)^4$ interaction, the size of such stable cores cannot exceed 60 kpc. Structures with a density contrast larger than $10^6$ can be accommodated with a speed of sound $c_s\lesssim 10^{-6}$. We also consider the case of a cosine self-interaction, as an example of bounded nonpolynomial self-interaction. This gives similar results in low-mass and low-density halos whereas solitonic cores are shown to be absent in massive halos.
Submission history From: Patrick Valageas [view email] [v1] Mon, 3 Jun 2019 12:10:54 GMT (235kb) [v2] Thu, 22 Aug 2019 08:57:54 GMT (235kb)
There is a fun little fact regarding polynomials in two variables $x$ and $y$: (To be more precise, this is true for polynomials over any field of characteristic not equal to $2$. For simplicity, in what follows we will assume that our polynomials have coefficients in $\mathbb{C}$.) Recall that a polynomial $g$ is symmetric if it does not change upon permuting its variables. In this case, with two variables, $g(x,y)=g(y,x)$. It is antisymmetric if swapping any two of the variables negates it, in this case $g(x,y)=-g(y,x)$. It is not hard to prove the fact above. To show existence of the decomposition, set $g(x,y)=\frac{f(x,y)+f(y,x)}{2}$ and $h(x,y)=\frac{f(x,y)-f(y,x)}{2}$. Then $$f(x,y)=g(x,y)+h(x,y),$$ and $g$ is symmetric while $h$ is antisymmetric. For instance, if $f(x,y)=x^2$, then we can write $$x^2=\frac{x^2+y^2}{2}+\frac{x^2-y^2}{2}.$$ For uniqueness, suppose $f(x,y)=g_0(x,y)+h_0(x,y)$ where $g_0$ is symmetric and $h_0$ is antisymmetric. Then $g_0+h_0=g+h$, and so $$g_0-g=h-h_0.$$ The left hand side of this equation is symmetric and the right hand side is antisymmetric, and so both sides must be identically zero. This implies that $g_0=g$ and $h_0=h$, so the unique decomposition is $f=g+h$. QED. This got me thinking… Is there an analogous decomposition for polynomials in three variables? Or any number of variables? The above decomposition doesn’t make sense in three variables, but perhaps every polynomial in $x$, $y$, and $z$ can be written uniquely as a sum of a symmetric, antisymmetric, and… some other particular type(s) of polynomials. Indeed, it can be generalized in the following sense. Notice that any antisymmetric polynomial in two variables is divisible by $x-y$, since setting $x=y$ gives us $f(x,x)=-f(x,x)=0$. Moreover, dividing by $x-y$ gives a symmetric polynomial: if $$h(x,y)=p(x,y)\cdot(x-y)$$ is antisymmetric, then $p(x,y)\cdot (x-y)=-p(y,x)\cdot(y-x)$, and so $p(x,y)=p(y,x)$. Thus any antisymmetric polynomial $h$ is equal to $x-y$ times a symmetric polynomial, and so we can restate our fact above in the following way: Any two variable polynomial $f(x,y)$ can be written uniquely as a linear combination of $1$ and $x-y$, using symmetric polynomials as the coefficients. For instance, $f(x,y)=x^2$ can be written as $$x^2=\left(\frac{x^2+y^2}{2}\right)\cdot 1+\left(\frac{x+y}{2}\right)\cdot (x-y).$$ Now, to generalize this to three variables, in place of $x-y$, we consider the polynomial $$\Delta=(x-y)(x-z)(y-z).$$ Also consider the five polynomials: $x^2-z^2+2yz-2xy$, $z^2-y^2+2xy-2xz$, $x-y$, $y-z$, and $1$, each of which is obtained by taking certain partial derivatives starting with $\Delta$. It turns out that every polynomial in three variables can be decomposed uniquely as a linear combination of these six polynomials, using symmetric polynomials as the coefficients! Where do these six polynomials come from? Turn to the next page to find out…
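As a quick computational illustration of the two-variable decomposition (my own example, using SymPy; not part of the original post), one can split an arbitrary polynomial into its symmetric and antisymmetric parts and check the divisibility-by-$(x-y)$ claim:

    import sympy

    x, y = sympy.symbols('x y')
    f = x**3 + 2*x**2*y - y
    swap = {x: y, y: x}

    g = sympy.expand((f + f.subs(swap, simultaneous=True)) / 2)   # symmetric part
    h = sympy.expand((f - f.subs(swap, simultaneous=True)) / 2)   # antisymmetric part

    assert sympy.expand(g + h - f) == 0                            # f = g + h
    assert sympy.expand(g - g.subs(swap, simultaneous=True)) == 0  # g(x,y) =  g(y,x)
    assert sympy.expand(h + h.subs(swap, simultaneous=True)) == 0  # h(x,y) = -h(y,x)

    p = sympy.cancel(h / (x - y))                                  # h = p * (x - y)
    assert sympy.expand(p - p.subs(swap, simultaneous=True)) == 0  # with p symmetric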
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the fappendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. 
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... @JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad It's enough to show everything works for generating cofaces and codegeneracies the codegeneracies are free, the 0 and nth cofaces are free all of those can be done treating frak{C} as a black box the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers. 
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
GR9677 #31
Alternate Solutions
There are no Alternate Solutions for this problem. Be the first to post one!
Comments
Ryry013 2019-09-24 06:27:18 For choosing between D and E, you can imagine dropping a feather vs a ball in air. A feather goes slowly, a ball goes quickly, so mass matters. Choose E. (See the other user answers for how to eliminate A-C).
Skribb 2009-09-26 04:57:38 (A) Since the sphere starts at rest the kinetic energy must increase as the velocity increases to its terminal velocity. (B) The sphere reaches a terminal velocity due to the retarding force but the retarding force doesn't completely stop the sphere, so the kinetic energy doesn't go to zero. (C) It doesn't make physical sense for the sphere to exceed the terminal velocity and to then return back down to it, else it wouldn't be the terminal velocity. (D) The description finally makes sense but we need to figure out what the velocity is dependent on. Set up your force diagram with the force of gravity pointing down and the retarding force pointing up. At terminal velocity the net force is zero; solve for v and you get v = mg/b, so v must be dependent on the mass m and the constant b. (E) As per the explanation in D, v is dependent on b and m, so this is the correct answer.
jmason86 2009-09-21 20:19:44 (A) (B) and (C) were all pretty easy to eliminate because they just don't make any physical sense. The difference between (D) and (E), then, is just whether or not your speed will depend on your mass at all. Think limits: if you have 0 mass, then you have no surface area to drag with.. but I think this is basically the buoyant force that the problem statement declares negligible. Hmpf.
dumbguy 2007-10-16 11:30:43 Just think of it like a free falling object in which air is the viscous medium. What happens to a free falling object then will happen to this sphere, which leaves you at E.
keflavich 2005-11-11 11:15:19 You can also recall terminal velocity happens when mg = bv, i.e. when the acceleration is zero, which clearly shows that v depends on b and m.
u0455225 2008-06-22 14:32:03 This implies that the equation given in part (D) has a sign error. mg-bv=0 when a=0, but the equation in part (D) contends (perhaps unintentionally) that mg+bv=0 when a=0.
physicsisgod 2008-10-28 15:53:26 u0455225, you are right. Which term is positive or negative depends on how you define the y-axis, but I would probably write ma = mg - bv, where acceleration is positive in the negative-y direction. It doesn't really matter though, as long as they're opposite signs.
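To see the limiting behaviour numerically (a small illustration of my own, with made-up values for m, b and g, not part of the original comments), one can integrate m dv/dt = mg - bv and watch the speed level off at v = mg/b, which depends on both m and b as choice (E) requires:

    m, b, g = 0.5, 2.0, 9.8          # kg, kg/s, m/s^2  (hypothetical values)
    v, dt = 0.0, 1e-3                # start from rest, 1 ms time step

    for _ in range(int(5 / dt)):      # integrate for 5 seconds
        v += dt * (g - (b / m) * v)   # dv/dt = g - (b/m) v

    print(v, m * g / b)               # both approximately 2.45 m/s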
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. 
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
It looks like you're new here. If you want to get involved, click one of these buttons! We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The pullback is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). 
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
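Since everything here involves finite sets, the three functors and both adjunctions can be checked mechanically. The following sketch is my own illustration (with a toy "room"-to-"thermometer" map, not anything from Seven Sketches): it implements \(f^{\ast}\), \(f_{!}\) and \(f_{\ast}\) for subsets of finite sets and verifies \(f_{!} \dashv f^{\ast}\) and \(f^{\ast} \dashv f_{\ast}\) on all pairs of subsets.

    from itertools import combinations

    def subsets(S):
        S = list(S)
        return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

    def pullback(f, T):            # f*(T)  = {x : f(x) in T}
        return frozenset(x for x in f if f[x] in T)

    def image(f, S):               # f_!(S) = {y : y = f(x) for some x in S}
        return frozenset(f[x] for x in S)

    def direct_image(f, S, Y):     # f_*(S) = {y : every x with f(x) = y lies in S}
        return frozenset(y for y in Y if all(x in S for x in f if f[x] == y))

    # toy example: states of a room mapped to thermometer readings
    X = {'cat+warm', 'cat+cold', 'nocat+warm'}
    Y = {'warm', 'cold'}
    f = {'cat+warm': 'warm', 'cat+cold': 'cold', 'nocat+warm': 'warm'}

    for S in subsets(X):
        for T in subsets(Y):
            assert (image(f, S) <= T) == (S <= pullback(f, T))            # f_! -| f*
            assert (pullback(f, T) <= S) == (T <= direct_image(f, S, Y))  # f* -| f_*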
11th SSC CGL Tier II level Question Set, topic Trigonometry 2

This is the 11th question set of 10 practice problems for the SSC CGL Tier II level exam and the 2nd on the topic of Trigonometry. We repeat the method of taking the test. It is important to follow result-bearing methods even in a practice test environment.

Method of taking the test for getting the best results from the test:

Before you start, you may refer to our tutorial or any short but good material to refresh your concepts if you so require: Basic and rich Trigonometric concepts and applications.

Answer the questions in an undisturbed environment with no interruption, full concentration and an alarm set at 12 minutes.

When the time limit of 12 minutes is over, mark up to where you have answered, but go on to complete the set.

At the end, refer to the answers given at the end to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time.

Identify and analyze the problems that you couldn't do, to learn how to solve those problems.

Identify and analyze the problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of a shortcoming in topic knowledge, improve it by referring to only that part of the concept from the best source you can get hold of. You might google it. If it is because of your method of answering, analyze and improve those aspects specifically.

Identify and analyze the problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques.

Give a gap before you take a 10 problem practice test again.

Important: both mock tests and practice tests must be timed, analyzed, improving actions taken and then repeated. With an intelligent method, it is possible to reach the highest level of excellence in performance.

Resources that should be useful for you: You may refer to 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests to access all the valuable student resources that we have created specifically for SSC CGL, or our section on SSC CGL generally for any hard MCQ test. If you like, you may subscribe to get the latest content from this place.

11th question set - 10 problems for SSC CGL Tier II exam: 2nd on Trigonometry - testing time 12 mins

Problem 1. If $5 cos \theta +12 sin \theta=13$, and $0^0 \lt \theta \lt 90^0$, then the value of $\sin \theta$ is,
a. $-\displaystyle\frac{12}{13}$
b. $\displaystyle\frac{12}{13}$
c. $\displaystyle\frac{5}{13}$
d. $\displaystyle\frac{6}{13}$

Problem 2. The value of $(cosec \theta -sin \theta)(sec \theta - cos \theta)(tan \theta +cot \theta)$ is,
a. 1
b. 2
c. 4
d. 6

Problem 3. If $tan A = n tan B$ and $sin A = m sin B$, then the value of $cos^2 A$ is,
a. $\displaystyle\frac{m^2+1}{n^2-1}$
b. $\displaystyle\frac{m^2+1}{n^2+1}$
c. $\displaystyle\frac{m^2 -1}{n^2+1}$
d. $\displaystyle\frac{m^2 -1}{n^2-1}$

Problem 4. If $\theta$ is a positive acute angle and $3(sec^2 \theta + tan^2 \theta)=5$, then the value of $cos 2\theta$ is,
a. $\displaystyle\frac{1}{\sqrt{2}}$
b. $1$
c. $\displaystyle\frac{1}{2}$
d. $\displaystyle\frac{\sqrt{3}}{2}$

Problem 5. If $tan \alpha = 2$, then the value of $\displaystyle\frac{cosec^2 \alpha - sec^2 \alpha}{cosec^2 \alpha+sec^2 \alpha}$ is,
a. $-\displaystyle\frac{3}{5}$
b. $-\displaystyle\frac{15}{9}$
c. $\displaystyle\frac{17}{5}$
d. $\displaystyle\frac{3}{5}$

Problem 6.
If $\sin (\theta + 30^0)=\displaystyle\frac{\sqrt{3}}{2}$, the value of $cos^2 \theta$ is,
a. $\displaystyle\frac{1}{2}$
b. $\displaystyle\frac{1}{4}$
c. $\displaystyle\frac{3}{4}$
d. $\displaystyle\frac{\sqrt{3}}{2}$

Problem 7. $(1 + sec 20^0 + cot 70^0)(1 - cosec 20^0 + tan 70^0)$ is equal to,
a. $1$
b. $0$
c. $-1$
d. $2$

Problem 8. If $tan \theta - cot \theta =0$, and $\theta$ is a positive acute angle, then the value of $\displaystyle\frac{tan (\theta+15^0)}{tan(\theta-15^0)}$ is,
a. $3$
b. $\displaystyle\frac{1}{\sqrt{3}}$
c. $\sqrt{3}$
d. $\displaystyle\frac{1}{3}$

Problem 9. If $sec \theta - tan \theta=\displaystyle\frac{1}{\sqrt{3}}$, then the value of $sec \theta.tan \theta$ is,
a. $\displaystyle\frac{2}{3}$
b. $\displaystyle\frac{4}{\sqrt{3}}$
c. $\displaystyle\frac{1}{\sqrt{3}}$
d. $\displaystyle\frac{2}{\sqrt{3}}$

Problem 10. If $tan (5x - 10^0)=cot (5y+20^0)$, then the value of $x+y$ is,
a. $15^0$
b. $16^0$
c. $20^0$
d. $24^0$

Answers to the problems

Problem 1. Answer: b: $\displaystyle\frac{12}{13}$.
Problem 2. Answer: a: 1.
Problem 3. Answer: d: $\displaystyle\frac{m^2-1}{n^2-1}$.
Problem 4. Answer: c: $\displaystyle\frac{1}{2}$.
Problem 5. Answer: a: $-\displaystyle\frac{3}{5}$.
Problem 6. Answer: c: $\displaystyle\frac{3}{4}$.
Problem 7. Answer: d: 2.
Problem 8. Answer: a: 3.
Problem 9. Answer: a: $\displaystyle\frac{2}{3}$.
Problem 10. Answer: b: $16^0$.

For detailed explanations please refer to the companion conceptual solution set, SSC CGL Tier II level solution set 11 on Trigonometry 2. You may watch the video solutions in the two-part video below. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Resources on Trigonometry and related topics: You may refer to our useful resources on Trigonometry and other related topics, especially algebra. Tutorials on Trigonometry. General guidelines for success in SSC CGL. Efficient problem solving in Trigonometry. A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as, firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry. SSC CGL Tier II level Question set 11 Trigonometry 2.
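The companion solution set is not reproduced here, but as one illustration of the intended basic-and-rich-concept approach (a worked example of my own, not taken from the companion set), Problem 1 above can be done in a couple of lines:

$$5 \cos \theta + 12 \sin \theta = 13 = \sqrt{5^2 + 12^2},$$

so writing $\cos \alpha = \frac{5}{13}$ and $\sin \alpha = \frac{12}{13}$, the equation becomes

$$13(\cos \alpha \cos \theta + \sin \alpha \sin \theta) = 13 \Rightarrow \cos(\theta - \alpha) = 1 \Rightarrow \theta = \alpha,$$

hence $\sin \theta = \sin \alpha = \displaystyle\frac{12}{13}$, which is answer option b.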
GR9277 #20
Problem 20. In a double-slit interference experiment, d is the distance between the centers of the slits and w is the width of the slit, as shown in the figure above. For incident plane waves, an interference maximum on a distant screen will be "missing" when: 2d=w, 2d=3w, 3d=2w
Optics: Missing Fringes
Missing fringes in a double-slit interference experiment result when diffraction minima cancel interference maxima. From a bit of phasor analysis, one can derive the diffraction factor $\left(\frac{\sin \beta}{\beta}\right)^2$ and the interference factor $\cos^2 \delta$, with $\beta = \pi w \sin \theta/\lambda$ and $\delta = \pi d \sin \theta/\lambda$, where w is the width of the slits and d is the separation (taken from slit centers). The angles belong in the intensity equation given by $I(\theta) = I_0 \cos^2 \delta \left(\frac{\sin \beta}{\beta}\right)^2$. Thus, the condition for a double-slit diffraction minimum is given by $w \sin \theta = m\lambda$. Also, the condition for interference maximum is given by $d \sin \theta = n\lambda$. Now, one needs to find the choice that allows for an integer ratio $n/m = d/w$. This immediately eliminates choices (A) and (B). But, this leaves choices (C), (D), and (E). Among the remaining choices, there is only one choice that allows for slits that are smaller than the separation. This is choice (D). Take it.
Alternate Solutions
htr 2019-09-29 15:22:47 If this is a double slit, the distance between two slits is D=d-w. Then for interference maximum, … where n1 is any integer. Now actually there are two slits so, to have a minimum at some point, all the waves should cancel each other (consider the derivation of the single slit diffraction minimum; in this case the equivalent distance is d). So, … where n2 is an odd number. Then we have d/(d-w) = n2/n1. The relationship is an integer fraction and also requires d>w. The choice is D.
kstephe6 2012-11-05 07:09:44 Just note that d MUST be greater than w because... d = w + (a little bit), since d is from the middle of one slit to the other. So without the "little bit extra" the two slits merge to become one. Then note what many have shown from combining equations from the single and double slit equations... d*(some integer) = w*(some other integer). Combine the two facts and the ONLY valid answer is D. ***EXTRA INFO BELOW*** any number of integers will work here: d2=w, d3=w, d3=2w, etc... the only other restriction is that the integers must be small so as to keep in good agreement with the small angle approximation. In other words, if ETS offered d30=w12 then this would not be a valid answer.
Kabuto Yakushi 2010-11-12 09:36:44 The problem says that the interference maximum is missing (in other words it is a minimum). Recalling the diffraction and interference formulas for a minimum (dark screen): equ. 1: $d \sin \theta = (m + \frac{1}{2})\lambda$ (interference minimum); equ. 2: $w \sin \theta = m\lambda$ (diffraction minimum). Dividing equation 1 by equation 2 and solving one gets (D). m has to be 1, since 0 would get an undefined result.
nitin 2006-11-18 22:45:00 One easy way to solve this problem is as follows. The idea is that we need a minimum of the single-slit diffraction to coincide with a maximum of the double-slit interference pattern (which results in a "missing" interference maximum). The angular locations of the minima of the single-slit diffraction pattern (which modulates or envelopes the double-slit diffraction pattern) are given by: $w \sin \theta = m_1 \lambda$, where $m_1 = 1, 2, 3, \ldots$ The angular locations of the maxima (bright fringes) of the double-slit interference pattern are given by: $d \sin \theta = m_2 \lambda$, where $m_2 = 0, 1, 2, \ldots$ Take the ratio of the 2 equations: $\frac{d}{w} = \frac{m_2}{m_1}$. Now, it is obvious that $m_2$ will be in general greater than $m_1$, since we have several interference maxima inside a diffraction maximum. Also, $m_1$ and $m_2$ are integers.
The only instance which satisfies these 2 conditions is choice (D). Comments htr 2019-09-29 15:22:47 If this is a double slit, the distance between two slits is D=d-w. Then for interference maximum,\r\n. \r\n where n1 is any integer. \r\n\r\nNow actually there are two slits so, to have a minimum at some point, all the waves should be cancelled each other (consider derivation of single slit diffraction minimum, in this case the equivalent distance is d). So, \r\n .\r\n where n2 is an odd number. \r\n\r\nThen we have, d/(d-w) = n2/n1. \r\n\r\nThe relationship is an integer fraction and also, requires d>w.\r\n\r\nThe choice is D. htr 2019-09-29 15:18:15 If this is a double slit, the distance between two slits is D=d-w. Then for interference maximum where n1 is any integer. Now actually there are two slits so, to have a minimum at some point, all the waves should be cancelled each other (consider derivation of single slit diffraction minimum, in this case the equivalent distance is d). So, . where n2 is an odd number. \r\n\r\nThen we have, d/(d-w) = n2/n1. Relationship is an integer fraction and also, requires d>w. The choice is D. mahmoud 2017-09-25 23:41:25 It is so simple but tricky as well . As d is from the centre to the center , and there is a separation distance between the slits, d must be longer than we w. Then the only valid answer is D djh101 2014-08-29 15:40:47 The two slits can't overlap. Such an obvious condition that just didn't occur to me... Ole 2013-10-31 17:45:02 This sinc modulated with cos single slit w*sin(th) = m*lamb min m = 1 ,2 3 first min at angle th1 w*sin(th1)=lamb with in this min there are 2*L max coming from the double "slit zero width" double "slit zero width" d*sin(tho) = l/2*lamb min l = 1, 3 5 max at l = 0, 2 4 With in sin(th1) there are d*sin(th1) = L/2*lamb L max one disappears L-1 at same angle w*sin(th1) = lamb d/w =(l-1)/2 for each l=L 0, 2, 4 6 there is max d/w=3/2 is one of them Ole 2013-10-31 17:41:28 This sinc modulated with cos single slit w*sin(th) = m*lamb min m = 1 ,2 3 first min at angle th1 w*sin(th1)=lamb with in this min there are 2*k max coming from the double "slit zero width" double "slit zero width" d*sin(tho) = l/2*lamb min l = 1, 3 5 max at l = 0, 2 4 With in sin(th1) there are d*sin(th1) = L/2*lamb M max one disappears K-1 at same angle w*sin(th1) = lamb d/w =(l-1)/2 for each l=L 0, 2, 4 6 there is max d/w=3/2 is one of them kstephe6 2012-11-05 07:09:44 Just note that d MUST be greater than w because... d= w + (a little bit) since d is from the middle of one slit to the other. So without the "little bit extra" this two slits merge to become one. Then note what many have shown from combining equations from the single and double slit equations... d*(some integer) = w*(some other integer) combine the two facts and the ONLY valid answer is D ***EXTRA INFO BELOW*** any number of integers will work here: d2=w, d3=w, d3=2w, etc... the only other restriction is that the integers must be small so as to keep in good agreement with the small angle approximation. In other words, if ETS offer d30=w12 then this would not be a valid answer. kstephe6 2012-11-05 07:14:43 oops on the "EXTRA" info... I should have written other possible answers would be d=2w, d=3w, 2d=3w, 3d=4w... and a poor answer due to the small angle approximation would be 12d=30w... sorry walczyk 2012-10-15 13:24:38 I think the problem is asking for when the interference maximum coincides with the diffraction maximum, thus making it "disappear." 
Solving for this condition gets you the correct answer. I found the exact answer for intensity, and the only time diffraction minima cancel out is when the slit is an integer multiple of the distance between them. See: http://web.mit.edu/8.02t/www/materials/StudyGuide/guide14.pdf Kabuto Yakushi 2010-11-12 09:36:44 The problem says that the interference maximum is missing (in other words it is a minimum). Recalling the diffraction and interference formulas for a minimum (dark screen): equ. 1: interference minimum equ. 2: diffraction minimum Dividing equation 1 by equation 2 and solving one gets (D). m has to be 1, since 0 would get an undefined result. pam d 2011-09-23 18:08:27 This approach is incorrect because you have not used the condition of an interference maximum combined with a diffraction minimum. You used the condition for an interference minimum. You must arrive at a "missing" interference maximum, i.e. it was supposed to be there but is not. Also, the m's do not have to be the same here. Just look at the Fresnel interference pattern of a double slit and you'll see that many interference minima and maxima occur within the diffraction pattern. I hope this post was clear enough. Tbot 2011-10-25 05:47:09 The question asks "when is a maximum missing?" That must mean that the double slit maximum coincides with a single slit minimum. To solve the problem, we need to assume that the angle corresponds to a Maximum of the double slit diffraction and Minimum of the single slit diffraction. It is incorrect to select an angle for which both are at a minimum. cz 2012-11-04 22:53:22 This argument is wrong. The interference maximum is missing means simply that it is there but cannot be seen. It cannot be seen because it has been made dark by the diffraction minimum. (The diffraction effect has masked the interference effect.) The interference pattern always is always attenuated by the diffraction envelope. kstephe6 2012-11-05 07:11:02 this aligns the minima... exactly the opposite of what you want to do here. adhumunt 2015-10-20 21:27:24 This is perhaps the simplest and most intuitive method to solving this problem. RusFortunat 2015-10-22 12:16:49 Thank you chemicalsoul 2009-11-04 11:44:04 for double slit diffraction one can remember this little formula; if d=n*w , d and w as in the question and n is the missing interference maximum form the center with the condition that d is greater than w. In this particular question for a distant screen, (D) satisfies it. pavamanacs 2009-09-16 02:06:33 how can we explain uncertainity principle using this cz 2012-11-04 22:54:47 Uncertainty principle really has nothing to do with this.. Richard 2007-10-17 18:32:26 I think that there are two important pedagogical aspects to this problem. First being, that it shows how the GRE problems usually require you to use your "physics brain." What I mean is, you often have to rule out a few possibilities due to a "physical" argument of some sort: A few of the choices in this problem give you slit widths that are larger than the distance between them. Second, I think the phrase "an interference maximum...will be missing when..." is particularly important. Note they didn't say, "Find THE distance for which an interference maximum will be missing..." So it's not necessarily the FIRST missing interference maximum or the last (of course there is no last possible...). As with many problems there is some degree of ambiguity. Anyway, I hope this helps someone. 
nitin 2006-11-18 22:45:00 One easy way to solve this problem is as follows. The idea is that we need a minimum of the single-slit diffraction to coincide with a maximum of the double-slit interference pattern (which results in a "missing" interference maximum). The angular locations of the minima of the single-slit diffraction pattern (which modulates or envelopes the double-slit interference pattern) are given by $w\sin\theta = m_1\lambda$, where $m_1 = 1, 2, 3, \dots$ The angular locations of the maxima (bright fringes) of the double-slit interference pattern are given by $d\sin\theta = m_2\lambda$, where $m_2 = 0, 1, 2, \dots$ Take the ratio of the 2 equations: $d/w = m_2/m_1$. Now, it is obvious that $m_2$ will in general be greater than $m_1$, since we have several interference maxima inside a diffraction maximum. Also, $m_1$ and $m_2$ are integers. The only instance which satisfies these 2 conditions is choice (D). jw111 2008-11-03 22:28:33 I think that m2 > m1 comes from the fact that d > w for the double slit. lin 2015-09-06 10:45:34 This answer is pretty clear to understand. blackholepioneer 2006-10-29 18:19:39 I'm a little confused about your derivations, so I did the following; check and see if I messed up somewhere. = distance between slits, = width of a slit, = distance on the screen, = distance between the 2 screens, = wavelength, = an integer. max: , min: ; for small angles, , thus , the λ's cancel, so we have . The only 2 choices that satisfy this condition are C and D; we pick D, because is the central maximum, so needs to be larger than or equal to 2. dgelbi 2006-10-25 00:00:54 The conditions are mixed. The first one is related to the interference (cos is the interference term) and the second one to the diffraction (sin is the diffraction term). astro_allison 2005-11-20 16:33:06 Isn't there an easier way to solve this? When I took this as a practice test, I did this: we know that (m+.5)(lambda)=dsin(theta) for it to be dark, and (change in y)=wsin(theta), where w = width of slit. For small angles, sin(theta) ~ (theta). Solving each equation for (theta) and setting them equal, we have: [(lambda)(m+.5)/d]=[(change in y)/w]. Solve for it in terms of w and d, and neglect (lambda) for now: w=[(change in y)/(m+.5)]*d. y~m, so take it to first order, i.e. m=1. Then: w=[1/(3/2)]d or w=(2/3)d... 2d=3w. This might be a wrong approach though... tau1777 2008-11-03 16:46:13 I like this approach as opposed to the other ones; to be honest they're just too confusing. And this is the method I was trying to use when doing it, but I was missing the crucial fact that delta y = sin(theta)w. Thanks for this post. I also think, as a general comment, that the simple solution is always the best one, as long as it gets you the right answer. It's a timed exam and I think the ETS wants us to be able to see these easier solutions.
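Summarising the algebra that keeps reappearing in this thread (a sketch assuming the standard small-angle Fraunhofer conditions, with the same symbols as in nitin's post above): double-slit maxima occur where $d\sin\theta = m_2\lambda$ and single-slit minima where $w\sin\theta = m_1\lambda$, so an interference maximum goes missing exactly when $$\frac{d}{w} = \frac{m_2}{m_1}, \qquad m_1, m_2 \textrm{ positive integers}, \quad m_2 > m_1 \textrm{ since } d > w.$$ Choice (D), $2d = 3w$, i.e. $d/w = 3/2$, is the only option consistent with both requirements.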
Cox Proportional Hazards (CoxPH)
Note: CoxPH is not yet supported in Python. It is supported in R and Flow only.
Cox proportional hazards models are the most widely used approach for modeling time to event data. As the name suggests, the hazard function, which computes the instantaneous rate of an event occurrence and is expressed mathematically as \(h(t) = \lim_{\Delta t \downarrow 0} \frac{Pr[t \le T < t + \Delta t \mid T \ge t]}{\Delta t},\) is assumed to be the product of a baseline hazard function and a risk score. Consequently, the hazard function for observation \(i\) in a Cox proportional hazards model is defined as \(h_i(t) = \lambda(t)\exp(\mathbf{x}_i^T\beta)\) where \(\lambda(t)\) is the baseline hazard function shared by all observations and \(\exp(\mathbf{x}_i^T\beta)\) is the risk score for observation \(i\), which is computed as the exponentiated linear combination of the covariate vector \(\mathbf{x}_i^T\) using a coefficient vector \(\beta\) common to all observations. This combination of a non-parametric baseline hazard function and a parametric risk score results in Cox proportional hazards models being described as semi-parametric. In addition, a simple rearrangement of terms shows that, unlike generalized linear models, an intercept (constant) term in the risk score adds no value to the model fit, due to the inclusion of a baseline hazard function. An R demo is available here. This uses the CoxPH algorithm along with the WA_Fn-UseC_-Telco-Customer-Churn.csv dataset.
Defining a CoxPH Model
model_id: (Optional) Specify a custom name for the model to use as a reference. By default, H2O automatically generates a destination key.
training_frame: (Required) Specify the dataset used to build the model. NOTE: In Flow, if you click the Build a model button from the Parse cell, the training frame is entered automatically.
start_column: (Optional) The name of an integer column in the source data set representing the start time. If supplied, the value of the start_column must be strictly less than the stop_column in each row.
stop_column: (Required) The name of an integer column in the source data set representing the stop time.
y: (Required) Specify the column to use as the dependent variable. The data can be numeric or categorical.
ignored_columns: (Optional, Python and Flow only) Specify the column or columns to be excluded from the model. In Flow, click the checkbox next to a column name to add it to the list of columns excluded from the model. To add all columns, click the All button. To remove a column from the list of ignored columns, click the X next to the column name. To remove all columns from the list of ignored columns, click the None button. To search for a specific column, type the column name in the Search field above the column list. To only show columns with a specific percentage of missing values, specify the percentage in the Only show columns with more than 0% missing values field. To change the selections for the hidden columns, use the Select Visible or Deselect Visible buttons.
weights_column: Specify a column to use for the observation weights, which are used for bias correction. The specified weights_column must be included in the specified training_frame. Python only: To use a weights column when passing an H2OFrame to x instead of a list of column names, the specified training_frame must contain the specified weights_column. Note: Weights are per-row observation weights and do not increase the size of the data frame.
This is typically the number of times a row is repeated, but non-integer values are supported as well. During training, rows with higher weights matter more, due to the larger loss function pre-factor.
offset_column: Specify a column to use as the offset. Note: Offsets are per-row "bias values" that are used during model training. For Gaussian distributions, they can be seen as simple corrections to the response (y) column. Instead of learning to predict the response (y-row), the model learns to predict the (row) offset of the response column. For other distributions, the offset corrections are applied in the linearized space before applying the inverse link function to get the actual response values. For more information, refer to the following link.
stratify_by: A list of columns to use for stratification.
ties: The approximation method for handling ties in the partial likelihood. This can be either efron (default) or breslow. See the Cox Proportional Hazards Model Details section below for more information about these options.
init: (Optional) Initial values for the coefficients in the model. This value defaults to 0.
lre_min: A positive number to use as the minimum log-relative error (LRE) of subsequent log partial likelihood calculations to determine algorithmic convergence. The role this parameter plays in the stopping criteria of the model fitting algorithm is explained in the Cox Proportional Hazards Model Algorithm section below. This value defaults to 9.
max_iterations: A positive integer defining the maximum number of iterations during model training. The role this parameter plays in the stopping criteria of the model-fitting algorithm is explained in the Cox Proportional Hazards Model Algorithm section below. This value defaults to 20.
interactions: Specify a list of predictor column indices to interact. All pairwise combinations will be computed for this list.
interaction_pairs: (Internal only.) When defining interactions, use this option to specify a list of pairwise column interactions (interactions between two variables). Note that this is different than interactions, which will compute all pairwise combinations of specified columns. This option is disabled by default.
export_checkpoints_dir: Specify a directory to which generated models will automatically be exported.
Cox Proportional Hazards Model Results
Data
Number of Complete Cases: The number of observations without missing values in any of the input columns.
Number of Non Complete Cases: The number of observations with at least one missing value in any of the input columns.
Number of Events in Complete Cases: The number of observed events in the complete cases.
Coefficients
\(\tt{name}\): The name given to the coefficient. If the predictor column is numeric, the corresponding coefficient has the same name. If the predictor column is categorical, the corresponding coefficients are a concatenation of the name of the column with the name of the categorical level the coefficient represents.
\(\tt{coef}\): The estimated coefficient value.
\(\tt{exp(coef)}\): The exponentiated coefficient value estimate.
\(\tt{se(coef)}\): The standard error of the coefficient estimate.
\(\tt{z}\): The z statistic, which is the ratio of the coefficient estimate to its standard error.
Model Statistics
Cox and Snell Generalized \(R^2\): \(\tt{R^2} := 1 - \exp\bigg(\frac{2\big(pl(\beta^{(0)}) - pl(\hat{\beta})\big)}{n}\bigg)\)
Maximum Possible Value for Cox and Snell Generalized \(R^2\):
\(\tt{Max.\ R^2} := 1 - \exp\big(\frac{2 pl(\beta^{(0)})}{n}\big)\)
Likelihood Ratio Test: \(2\big(pl(\hat{\beta}) - pl(\beta^{(0)})\big)\), which under the null hypothesis of \(\hat{\beta} = \beta^{(0)}\) follows a chi-square distribution with \(p\) degrees of freedom.
Wald Test: \(\big(\hat{\beta} - \beta^{(0)}\big)^T I\big(\hat{\beta}\big) \big(\hat{\beta} - \beta^{(0)}\big)\), which under the null hypothesis of \(\hat{\beta} = \beta^{(0)}\) follows a chi-square distribution with \(p\) degrees of freedom. When there is a single coefficient in the model, the Wald test statistic value is that coefficient's z statistic.
Score (Log-Rank) Test: \(U\big(\beta^{(0)}\big)^T \hat{I}\big(\beta^{(0)}\big)^{-1} U\big(\beta^{(0)}\big)\), which under the null hypothesis of \(\hat{\beta} = \beta^{(0)}\) follows a chi-square distribution with \(p\) degrees of freedom.
In the above, \(n\) is the number of complete cases, \(p\) is the number of estimated coefficients, \(pl(\beta)\) is the log partial likelihood, \(U(\beta)\) is the derivative of the log partial likelihood, \(H(\beta)\) is the second derivative of the log partial likelihood, and \(I(\beta) = - H(\beta)\) is the observed information matrix.
Cox Proportional Hazards Model Details
A Cox proportional hazards model measures time on a scale defined by the ranking of the \(M\) distinct observed event occurrence times, \(t_1 < t_2 < \dots < t_M\). When no two events occur at the same time, the partial likelihood for the observations is given by \(PL(\beta) = \prod_{m=1}^M\frac{\exp(w_m\mathbf{x}_m^T\beta)}{\sum_{j \in R_m} w_j \exp(\mathbf{x}_j^T\beta)}\) where \(R_m\) is the set of all observations at risk of an event at time \(t_m\). In practical terms, \(R_m\) contains all the rows where (if supplied) the start time is less than \(t_m\) and the stop time is greater than or equal to \(t_m\). When two or more events are observed at the same time, the exact partial likelihood is given by \(PL(\beta) = \prod_{m=1}^M\frac{\exp(\sum_{j \in D_m} w_j\mathbf{x}_j^T\beta)}{(\sum_{R^* : \mid R^* \mid = d_m} [\sum_{j \in R^*} w_j \exp(\mathbf{x}_j^T\beta)])^{\sum_{j \in D_m} w_j}}\) where \(R_m\) is the risk set and \(D_m\) is the set of observations of size \(d_m\) with an observed event at time \(t_m\), respectively. Due to the combinatorial nature of the denominator, this exact partial likelihood becomes prohibitively expensive to calculate, leading to the common use of Efron's and Breslow's approximations.
Efron's Approximation
Of the two approximations, Efron's produces results closer to the exact combinatoric solution than Breslow's.
Under this approximation, the partial likelihood and log partial likelihood are defined as \(PL(\beta) = \prod_{m=1}^M \frac{\exp(\sum_{j \in D_m} w_j\mathbf{x}_j^T\beta)}{\big[\prod_{k=1}^{d_m}(\sum_{j \in R_m} w_j \exp(\mathbf{x}_j^T\beta) - \frac{k-1}{d_m} \sum_{j \in D_m} w_j \exp(\mathbf{x}_j^T\beta))\big]^{(\sum_{j \in D_m} w_j)/d_m}}\) \(pl(\beta) = \sum_{m=1}^M \big[\sum_{j \in D_m} w_j\mathbf{x}_j^T\beta - \frac{\sum_{j \in D_m} w_j}{d_m} \sum_{k=1}^{d_m} \log(\sum_{j \in R_m} w_j \exp(\mathbf{x}_j^T\beta) - \frac{k-1}{d_m} \sum_{j \in D_m} w_j \exp(\mathbf{x}_j^T\beta))\big]\)
Breslow's Approximation
Under Breslow's approximation, the partial likelihood and log partial likelihood are defined as \(PL(\beta) = \prod_{m=1}^M \frac{\exp(\sum_{j \in D_m} w_j\mathbf{x}_j^T\beta)}{(\sum_{j \in R_m} w_j \exp(\mathbf{x}_j^T\beta))^{\sum_{j \in D_m} w_j}}\) \(pl(\beta) = \sum_{m=1}^M \big[\sum_{j \in D_m} w_j\mathbf{x}_j^T\beta - (\sum_{j \in D_m} w_j)\log(\sum_{j \in R_m} w_j \exp(\mathbf{x}_j^T\beta))\big]\)
Cox Proportional Hazards Model Algorithm
H2O uses the Newton-Raphson algorithm to maximize the partial log-likelihood, an iterative procedure defined by the steps below. (To add numeric stability to the model fitting calculations, the numeric predictors and offsets are demeaned during the model fitting process.)
1. Set an initial value, \(\beta^{(0)}\), for the coefficient vector and assume an initial log partial likelihood of \(- \infty\).
2. Increment the iteration counter, \(n\), by 1.
3. Calculate the log partial likelihood, \(pl\big(\beta^{(n)}\big)\), at the current coefficient vector estimate.
4. Compare \(pl\big(\beta^{(n)}\big)\) to \(pl\big(\beta^{(n-1)}\big)\). If \(pl\big(\beta^{(n)}\big) > pl\big(\beta^{(n-1)}\big)\), then accept the new coefficient vector, \(\beta^{(n)}\), as the current best estimate, \(\tilde{\beta}\), and set a new candidate coefficient vector to be \(\beta^{(n+1)} = \beta^{(n)} - \tt{step}\), where \(\tt{step} := H^{-1}(\beta^{(n)}) U(\beta^{(n)})\), which is the product of the inverse of the second derivative of \(pl\) times the first derivative of \(pl\) based upon the observed data. If \(pl\big(\beta^{(n)}\big) \le pl\big(\beta^{(n-1)}\big)\), then set \(\tt{step} := \tt{step} / 2\) and \(\beta^{(n+1)} = \tilde{\beta} - \tt{step}\).
5. Repeat steps 2 - 4 until either \(n = \tt{iter\ max}\) or the log-relative error \(LRE\Big(pl\big(\beta^{(n)}\big), pl\big(\beta^{(n+1)}\big)\Big) \ge \tt{lre\ min}\), where \(LRE(x, y) = - \log_{10}\big(\frac{\mid x - y \mid}{y}\big)\) if \(y \ne 0\), and \(LRE(x, y) = - \log_{10}(\mid x \mid)\) if \(y = 0\).
References
Andersen, P. and Gill, R. (1982). Cox's regression model for counting processes, a large sample study. Annals of Statistics 10, 1100-1120.
Harrell, Jr. F.E., Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer-Verlag, 2001.
Therneau, T., Grambsch, P., Modeling Survival Data: Extending the Cox Model. Springer-Verlag, 2000.
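To make the Newton-Raphson loop described in the Algorithm section above concrete, here is a small illustrative sketch; it is not H2O's implementation, it handles only the simplest case (no tied event times, unit weights, no demeaning), and the data layout (a covariate matrix X, a vector of stop times, a 0/1 event indicator) and all helper names are invented for the example.

import numpy as np

def log_partial_likelihood(beta, X, times, event):
    """No-ties, unit-weight Cox log partial likelihood pl, score U and Hessian H."""
    eta = X @ beta
    p = X.shape[1]
    pl, U, H = 0.0, np.zeros(p), np.zeros((p, p))
    for i in np.where(event == 1)[0]:
        risk = times >= times[i]                   # risk set R_m at this event time
        w = np.exp(eta[risk])
        s0, s1 = w.sum(), X[risk].T @ w
        s2 = (X[risk] * w[:, None]).T @ X[risk]
        pl += eta[i] - np.log(s0)
        U  += X[i] - s1 / s0
        H  -= s2 / s0 - np.outer(s1 / s0, s1 / s0)
    return pl, U, H

def lre(x, y):
    """Log-relative error as defined in step 5 above."""
    return -np.log10(abs(x - y) / abs(y)) if y != 0 else -np.log10(abs(x))

def fit_coxph_sketch(X, times, event, max_iterations=20, lre_min=9.0):
    beta = np.zeros(X.shape[1])                    # step 1: beta^(0) = 0, pl = -infinity
    pl_prev, best, step = -np.inf, beta.copy(), None
    for _ in range(max_iterations):                # steps 2-5
        pl, U, H = log_partial_likelihood(beta, X, times, event)
        if pl > pl_prev:                           # step 4: accept and take a full Newton step
            best = beta.copy()
            step = np.linalg.solve(H, U)           # step := H^{-1} U
        else:                                      # otherwise halve the step and retry from best
            step = step / 2.0
        if np.isfinite(pl_prev) and lre(pl, pl_prev) >= lre_min:
            break
        beta, pl_prev = best - step, pl
    return best

A real implementation (including H2O's) adds the Efron/Breslow tie corrections, observation weights, stratification and the demeaning of predictors noted above; on simulated data the output of this sketch can be sanity-checked against, for example, the coxph function in R's survival package.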
GR8677 #100 Alternate Solutions Simplicio 2009-03-31 18:51:18 I would agree with hungrychemist and consider the order of magnitude. The reasonable values I used are , (A) ≈ , (B): Good luck making a camera pinhole the size of the wavelength... (C): This is even worse! (D): LOL! (E): Can this even be called a "pin"-hole? So by elimination, (A) gives a small yet reasonable number! Comments Cheese 2019-09-24 13:22:34 So I was out of time when I saw this question in my attempt. I thought that the quantity "sharpness" could be just a numerical measure (and hence dimensionless), much like how "magnification" is treated in optics, so I just locked in (A) for that reason alone. calvin_physics 2014-03-27 22:39:05 Just like others have suggested, answers from B to E are simply silly if we apply some logic. Visible light has wavelength 400~700 nm, and the distance for an image is usually a few meters. Only (A) gives a few millimeters as the answer, which fits the description of a pinhole. syreen 2013-09-23 18:45:05 I used the Rayleigh criterion and the small angle approximation. Rayleigh: sin(theta)=~theta=lambda/d; small angle: theta~=arclength/distance~="blur"/D. Set the two theta equations equal, solve for blur, then take the derivative of blur with respect to d and set it to 0 to minimize blur. Solving for d gives A. mpdude8 2012-04-19 17:18:46 Yeah, bag all the optics and look at it logically. If you know the wavelength of light is on the order of nanometers, a pinhole, while small, is probably on the order of millimeters, and the screen probably on the order of centimeters, you can use orders of magnitude to figure out which answers are possible and which are nonsense. rizkibizniz 2011-10-07 02:10:00 Though I'm quite sure Yosun's answer is more accurate, all this PGRE stuff is about approximation, isn't it? Can someone point out why it's a big mistake to approximate using Rayleigh's criterion? rizkibizniz 2011-10-07 02:01:41 Couldn't we approximate all this just using Rayleigh's criterion? For sharp images (to be able to distinguish two separate objects with small angle): , , Setareh 2011-10-29 01:07:31 Nice! rizkibizniz 2011-11-06 21:28:13 Hmm. I have come to realize that I mistakenly defined , which doesn't seem to be the correct reasoning to get the angle subtended by the objects through the pinhole on the screen, which is what is. What I had (mistakenly) defined was the angle subtended by lines from the edges of the pinhole to a point on the center of the screen. I think it might be a coincidence that the answers came out to be similar if not the same. Or can they both be in fact the same to some certain approximations? Anyone have any thoughts on this? livieratos 2011-11-08 07:30:56 I think that this is the case: somebody correct me if I'm wrong please... ewcikewqikd 2014-07-19 12:41:00 The problem with your solution is that the "d" in the single slit equation is the width of the slit (pinhole), not the width of the screen. The width of the pinhole is never given in the problem. Simplicio 2009-03-31 18:51:18 I would agree with hungrychemist and consider the order of magnitude. The reasonable values I used are , (A) ≈ , (B): Good luck making a camera pinhole the size of the wavelength... (C): This is even worse! (D): LOL!
(E): Can this even be called a "pin"-hole? So by elimination, (A) gives a small yet reasonable number! flyboy621 2010-11-15 20:48:43 Nice! thebigshow500 2008-10-14 09:44:48 Not sure if this makes sense: can we simply apply the single-slit diffraction equation? 1. Let m=1 to let the sharp image reach its 1st order minimum, and then we have 2. For the Fraunhofer diffraction, so that we can assume a small angle. Plug 2. back into 1., and we eventually get medellin 2008-11-04 13:01:09 I think you got the answer just by chance, but there is not really physics there. I think your procedure got the answer just by the orders of magnitude involved, but no more. medellin 2008-11-04 13:02:53 You need FRESNEL DIFFRACTION as mentioned below. tinytoon 2008-11-06 00:04:52 Actually, your solution makes perfect sense, physically also. h.fei10 2012-11-07 22:14:23 I think, in your second equation, you should use the radius of the blur, not just the diameter of the pinhole. It doesn't make sense. ewcikewqikd 2014-07-19 12:38:12 The problem with your solution is that the "d" in the single slit equation is the width of the slit (pinhole), not the width of the screen. The width of the pinhole is never given in the problem. Richard 2007-11-01 18:55:36 I did this one by relying on my intuition, namely, that if we increase the distance of the screen from the hole, the optimum diameter of the hole should increase, and also that if you let the wavelength become very large the diameter of the hole should increase to get a resolved image. The only solution that fits this is (A). For those wondering what is meant by "blur," it may be useful to look at the site Yosun quoted/pirated. "Blur" is basically a quantification of the interference on the image, how much "blur" there is. There is interference due to incident light entering the hole and interference due to diffraction around the hole's edges. These combine as Yosun quotes, and the sum's minimum is obtained by a differentiation. hungrychemist 2007-09-17 08:00:32 Use an order-of-magnitude approach. Say we set the wavelength = 1 unit of length. D is about 10^8 of that same unit. (The size of the camera should be on the order of m, while the visible light wavelength is of order nm.) Now, you want d (pinhole size) at the order of 10^-2 or so (about the size of mm). Choice E is way too big (bigger than the camera). Choice D is way too small (smaller than the wavelength of light). Choice C is still too small (still smaller than the wavelength of light). Choice B is the same as the wavelength of light (still quite small, and we know from the double slit experiment that a hole of this magnitude will create dispersion of the wave, which blurs things). Choice A is the only reasonable magnitude. yosun 2005-10-31 01:14:19 Fresnel diffraction would complicate things. Fraunhofer should be a good approximation, especially when one has only about a minute to answer the question. alpha 2005-10-31 12:47:49 Shouldn't Fresnel diffraction be used instead, as ? ee7klt 2005-11-11 04:44:44 Hi Alpha, I think the question says , thus justifying the Fraunhofer regime. I have one question: why is reasoning behind defining the blur equation as written? ee7klt 2005-11-11 04:45:27 oops.. I meant "what is the reasoning..". student2008 2008-10-14 07:38:54 The condition is needed to neglect the second term under the square root in the Fresnel zones formula.
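For reference, the blur-minimisation argument that Richard and Yosun allude to can be written in one line (a sketch that takes the blur to be the geometric shadow of the hole plus the Fraunhofer diffraction spread, using the same $d$, $D$ and $\lambda$ as in the comments): $$\text{blur} \approx d + \frac{\lambda D}{d}, \qquad \frac{\partial(\text{blur})}{\partial d} = 1 - \frac{\lambda D}{d^2} = 0 \;\Rightarrow\; d \approx \sqrt{\lambda D},$$ which for visible light ($\lambda \sim 500$ nm) and $D$ of order tens of centimetres gives $d$ of a fraction of a millimetre, consistent with the order-of-magnitude eliminations above.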
Abstract In the unit ball $B(0,1)$, let $u$ and $\Omega$ (a domain in $\mathbb{R}^N$) solve the following overdetermined problem: \[ \Delta u = \chi_{\Omega} \quad \mathrm{in}\; B(0,1),\qquad\qquad 0\; \in \partial\Omega, \qquad\qquad u = |\nabla u| = 0 \quad \mathrm{in}\; B(0,1) \backslash \Omega, \] where $\chi_\Omega$ denotes the characteristic function, and the equation is satisfied in the sense of distributions. If the complement of $\Omega$ does not develop cusp singularities at the origin, then we prove that $\partial \Omega$ is analytic in some small neighborhood of the origin. The result can be modified to apply to more general divergence-form operators. As an application, we obtain the regularity of the boundary of a domain without the Pompeiu property, provided its complement has no cusp singularities.
Nineteenth SSC CGL level Question Set, topic Trigonometry 3 This is the nineteenth question set of 10 practice problems for the SSC CGL exam and the 3rd on the topic of Trigonometry. Before taking the test you should go through the method for taking the test for general guidelines. Method of taking the test for getting the best results from the test: Before you start, go through the Tutorial on Basic and rich concepts in Trigonometry and its applications, or any short but good material, to refresh your concepts if you so require. Answer the questions in an undisturbed environment with no interruption, full concentration and an alarm set at 12 minutes. When the time limit of 12 minutes is over, mark the point up to which you have answered, but go on to complete the set. At the end, refer to the answers given at the end to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time. Identify and analyze the problems that you couldn't do, to learn how to solve those problems. Identify and analyze the problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of a shortcoming in topic knowledge, improve it by referring to only that part of the concept from the best source you can get hold of. You might google it. If it is because of your method of answering, analyze and improve those aspects specifically. Identify and analyze the problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques. Give a gap before you take a 10 problem practice test again. Important: both mock tests and practice tests must be timed, analyzed, improving actions taken and then repeated. With an intelligent method, it is possible to reach the highest excellence level in performance. Resources that should be useful for you: Before taking the test it is recommended that you refer to 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests to access all the valuable student resources that we have created, specifically for SSC CGL but useful generally for any hard MCQ test. You may also refer to the related resources. If you like, you may subscribe to get the latest content on competitive exams delivered to your mail as soon as we publish it. You should refer to the corresponding solution set for this question set only after you have taken the test as suggested above. Now set the stopwatch alarm and start taking this test. It is not difficult. Nineteenth question set - 10 problems for SSC CGL exam: 3rd on Trigonometry - time 12 mins Problem 1. The value of $tan1^0tan2^0tan3^0.....tan89^0$ is, $\sqrt{3}$ $0$ $1$ $\displaystyle\frac{1}{\sqrt{3}}$ Problem 2. The value of $cot18^0\left(cot72^0cos^222^0 + \displaystyle\frac{1}{tan72^0sec^268^0}\right)$ is, $\displaystyle\frac{1}{\sqrt{3}}$ $3$ $1$ $\sqrt{2}$ Problem 3. If $asin\theta + bcos\theta =c$, then the value of $acos\theta - bsin\theta$ is, $\pm \sqrt{-a^2 + b^2 + c^2}$ $\pm \sqrt{a^2 - b^2 + c^2}$ $\pm \sqrt{a^2 - b^2 - c^2}$ $\pm \sqrt{a^2 + b^2 - c^2}$ Problem 4. The value of $\left(\displaystyle\frac{cos^2\theta(sin\theta + cos\theta)}{cosec^2\theta(sin\theta - cos\theta)} + \displaystyle\frac{sin^2\theta(sin\theta - cos\theta)}{sec^2\theta(sin\theta + cos\theta)}\right)(sec^2\theta - cosec^2\theta)$ is, 1 2 3 4 Problem 5.
$\displaystyle\frac{tan\theta}{1 - cot\theta} + \displaystyle\frac{cot\theta}{1 - tan\theta}$ is equal to, $1 - tan\theta -cot\theta$ $1 + tan\theta + cot\theta$ $1 - tan\theta + cot\theta$ $1 + tan\theta - cot\theta$ Problem 6. If $tan\theta = \displaystyle\frac{sin\alpha - cos\alpha}{sin\alpha + cos\alpha}$ then $sin\alpha + cos\alpha$ is, $\pm \sqrt{2} sin\theta$ $\pm \sqrt{2} cos\theta$ $\pm \displaystyle\frac{1}{\sqrt{2}} cos\theta$ $\pm \displaystyle\frac{1}{\sqrt{2}} sin\theta$ Problem 7. If $cos^2\alpha - sin^2\alpha = tan^2\beta$, then $cos^2\beta - sin^2\beta = $ $tan^2\alpha$ $cot^2\alpha$ $cot^2\beta$ $tan^2\beta$ Problem 8. If $tan\alpha=ntan\beta$, and $sin\alpha = msin\beta$ then $cos^2\alpha$ is, $\displaystyle\frac{m^2 - 1}{n^2 - 1}$ $\displaystyle\frac{m^2 + 1}{n^2 + 1}$ $\displaystyle\frac{m^2}{n^2 + 1}$ $\displaystyle\frac{m^2}{n^2}$ Problem 9. If $A$, $B$ and $C$ are the three angles of a triangle, then the incorrect relation among the following is, $cos\displaystyle\frac{A + B}{2} = sin\displaystyle\frac{C}{2}$ $sin\displaystyle\frac{A + B}{2} = cos\displaystyle\frac{C}{2}$ $cot\displaystyle\frac{A + B}{2} = tan\displaystyle\frac{C}{2}$ $tan\displaystyle\frac{A + B}{2} = sec\displaystyle\frac{C}{2}$ Problem 10. If $\theta$ is a positive acute angle and $tan2\theta{tan3\theta} = 1$ then the value of $\left(2cos^2\displaystyle\frac{5\theta}{2} - 1\right)$ is, $0$ $1$ $-\displaystyle\frac{1}{2}$ $\displaystyle\frac{1}{2}$ You will find the detailed conceptual solutions to these questions in SSC CGL level Solution Set 19 on Trigonometry. You may also watch the video solutions in the two-part video. Part 1: Q1 to Q5 Part 2: Q6 to Q10 Note: You will observe that in many of the Trigonometric problems rich algebraic concepts and techniques are to be used. In fact that is the norm. Algebraic concepts are frequently used for elegant solutions of Trigonometric problems. A quick concept reminder relevant to several of the problems is sketched just below. Resources on Trigonometry and related topics You may refer to our useful resources on Trigonometry and other related topics, especially algebra. Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry SSC CGL level question and solution sets in Trigonometry SSC CGL level Question set 19 on Trigonometry
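Concept reminder (a brief sketch of the two complementary-angle facts that Problems 1, 2 and 9 lean on; the full worked solutions remain in the solution set linked above): since $tan(90^0 - \theta) = cot\theta$, products pair off as $tan\theta{tan(90^0 - \theta)} = tan\theta{cot\theta} = 1$, and since $A + B + C = 180^0$ in a triangle, $\displaystyle\frac{A + B}{2} = 90^0 - \displaystyle\frac{C}{2}$, so any trigonometric ratio of $\displaystyle\frac{A + B}{2}$ converts to the complementary ratio of $\displaystyle\frac{C}{2}$.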
What is a suitable model for two-wheeled robots? That is, what equations of motion describe the dynamics of a two-wheeled robot? Models of varying fidelity are welcome. This includes non-linear models, as well as linearized models. There isn't a lot of information here. Let's fix the wheels as separated by distance $b$, and each wheel has orientation $\theta_i$ with respect to the line joining them. Then assume each wheel can be independently driven with a velocity $v_i$. If the wheels are independently driven, but fixed in direction, $\theta_1=\theta_2=90^\circ$, you have something like a differential drive (tank treads). It's worth noting that, assuming the wheels do not slip perpendicular to their orientation, you can solve for the motion of the robot base in closed form given velocity commands which are fixed over a small time duration (as is usually the case with robots under software control). The iCreate is such a platform, as are the smaller Pioneers, and the Husky by Clearpath. Then the change in orientation of the base, labelled $\theta$ below, can be found in closed form. The usual model for these things, where $v_b$ is the base velocity and $\omega_b$ is the angular velocity of the base, is: $$v_b = \frac{1}{2}\cdot(v_1+v_2)$$ $$\omega_b=\frac{1}{b}(v_2-v_1)$$ For a fixed time increment, $\delta t$, you can find the change in orientation, and linear distance traveled, using these. Note that the robot travels along a circle in this time window. The distance along the circle is exactly $\delta t\cdot v_b$, and the radius of the circle is $R=\frac{b}{2}\cdot\frac{v_1+v_2}{v_2-v_1}$. That's enough to plug into these equations: circular segments -- particularly the chord length equation, which describes the distance the robot displaces from its original location. We know $R$ and $\theta$, so solve for $a$. So assuming the robot starts with orientation $0$ and position $(0,0)$, and moves over a time window $\delta t$ with velocities $v_1$ (left wheel) and $v_2$ (right wheel), its orientation will be: $$\theta_1=\frac{\delta t}{b}(v_2-v_1)$$ with position: $$p_x=\cos\left(\frac{\theta_1}{2}\right)\cdot\left(2 R \sin\left(\frac{\theta_1}{2}\right)\right)$$ $$p_y=\sin\left(\frac{\theta_1}{2}\right)\cdot\left(2 R \sin\left(\frac{\theta_1}{2}\right)\right)$$ Note that as $v_1\to v_2=v$ the limit is $$p_x=\delta t \cdot v$$ $$p_y=0$$ as expected. Update: why? Rearrange $p_x$ so that: $$ p_x = \cos \left( \frac{\delta t(v_2-v_1)}{2b} \right) \cdot 2 \cdot \left( \frac{b}{2}\frac{v_1+v_2}{v_2-v_1} \right) \cdot \sin\left( \frac{\delta t(v_2-v_1)}{2b} \right) $$ $$ p_x = \cos \left( \frac{\delta t(v_2-v_1)}{2b} \right) \cdot \frac{\delta t(v_2+v_1)}{2} \cdot \frac{\sin\left( \frac{\delta t(v_2-v_1)}{2b} \right)} {\frac{\delta t(v_2-v_1)}{2b}} $$ Now note that we have three limits as $v_2 \rightarrow v_1$: $$\cos \left( \frac{\delta t(v_2-v_1)}{2b} \right)\rightarrow 1$$ $$ \frac{\delta t(v_2+v_1)}{2} \rightarrow \delta t\, v_1 = \delta t\, v_2$$ $$ \frac{\sin\left( \frac{\delta t(v_2-v_1)}{2b}\right)}{\frac{\delta t(v_2-v_1)}{2b}} \rightarrow 1 \text{ (see the sinc function)}$$ This is covered all over the internet, but you might start here: http://rossum.sourceforge.net/papers/DiffSteer/ or here: https://web.cecs.pdx.edu/~mperkows/CLASS_479/S2006/kinematics-mobot.pdf If the wheels are not fixed in direction, that is, if you can vary both the speed and the orientation, it gets more complicated.
In that sense, a robot can become essentially holonomic (it can move in arbitrary directions and orientations on the plane). However, I bet for fixed orientation, you end up with the same model. There are other models for two wheels, such as a bicycle model, which is easy to imagine as setting the velocities and only varying one orientation. That's the best I can do for now. If you really want to dive into the mathematics of it, here's the seminal paper that unified and categorized most models for wheeled robots. The answer to this is simple, but the other answers obfuscate the dynamics. Differential drive robots can be modeled with unicycle dynamics of the form: $$\left[\begin{matrix}\dot{x}\\ \dot{y} \\ \dot{\theta} \end{matrix}\right] = \left[\begin{matrix}\cos(\theta)&0\\ \sin(\theta)&0\\0&1\end{matrix}\right] \left[\begin{matrix}v\\\omega\end{matrix}\right],$$ where $x$ and $y$ are Cartesian coordinates of the robot, and $\theta \in (-\pi,\pi]$ is the angle between the heading and the $x$-axis. The input vector $\left[v, \omega \right]^T$ consists of linear and angular velocity inputs.
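A minimal numerical sketch of the closed-form pose update described in the first answer above; the function and variable names are my own, the wheel speeds v1, v2 are taken as linear speeds at the wheels, and b is the wheel separation.

import math

def diff_drive_step(x, y, theta, v1, v2, b, dt):
    """Advance a differential-drive pose by dt under constant wheel speeds v1 (left), v2 (right)."""
    v_b = 0.5 * (v1 + v2)                      # base linear velocity
    w_b = (v2 - v1) / b                        # base angular velocity
    if abs(v2 - v1) < 1e-9:                    # straight-line limit v1 -> v2
        dx, dy = dt * v_b, 0.0
    else:
        R = 0.5 * b * (v1 + v2) / (v2 - v1)    # radius of the circular arc
        dth = w_b * dt                         # change in heading, theta_1 in the answer
        chord = 2.0 * R * math.sin(dth / 2.0)  # chord-length formula for the displacement
        dx = math.cos(dth / 2.0) * chord       # displacement expressed in the initial robot frame
        dy = math.sin(dth / 2.0) * chord
    # rotate the local displacement into the world frame and update the pose
    xw = x + dx * math.cos(theta) - dy * math.sin(theta)
    yw = y + dx * math.sin(theta) + dy * math.cos(theta)
    return xw, yw, theta + w_b * dt

# example: diff_drive_step(0.0, 0.0, 0.0, v1=0.9, v2=1.1, b=0.5, dt=0.1)
# with v1 == v2 == v the update reduces to (dt*v, 0, theta), matching the limit above.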
Revista Matemática Iberoamericana, Volume 7, Issue 1, 1991, pp. 1–24. DOI: 10.4171/RMI/103. Published online: 1991-04-30. Pointwise and Spectral Control of Plate Vibrations. Alain Haraux (1) and Stéphane Jaffard (2); (1) Université Pierre et Marie Curie, Paris, France; (2) Université Paris Est, Créteil, France. We consider the problem of controlling pointwise (by means of a time dependent Dirac measure supported by a given point) the motion of a vibrating plate $\Omega$. Under general boundary conditions, including the special cases of simply supported or clamped plates, but of course excluding the cases where some multiple eigenvalues exist for the biharmonic operator, we show the controllability of finite linear combinations of the eigenfunctions at any point of $\Omega$ where no eigenfunction vanishes, at any time greater than half of the plate's area. This result is optimal since no finite linear combination of the eigenfunctions other than 0 is pointwise controllable at a time smaller than half of the plate's area. Under the same condition on the time, but for an arbitrary domain $\Omega$ in $\mathbb R^2$, we solve the problem of internal spectral control, which means that for any open disk $\omega \subset \Omega$, any finite linear combination of the eigenfunctions can be set to equilibrium by means of a control function $h \in \mathcal D ((0, T) \times \Omega)$ supported in $(0, T) \times \omega$. Haraux, Alain and Jaffard, Stéphane: Pointwise and Spectral Control of Plate Vibrations. Rev. Mat. Iberoam. 7 (1991), 1–24. doi: 10.4171/RMI/103
Kakeya problem A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements. Some Basic Estimates Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, we also have [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. Lower Bounds From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below. To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\gtrsim 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math] Upper Bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). 
Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{n/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
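The small values quoted at the top ([math]k_1=3[/math], [math]k_2=7[/math]) are easy to confirm by exhaustive search. Below is a brute-force sketch (the function names are ad hoc, and since it enumerates all subsets of [math]{\mathbb F}_3^n[/math] in increasing size it is only feasible for [math]n\le 2[/math]).

from itertools import combinations, product

def is_kakeya(E, n):
    """Check that E (a collection of tuples in F_3^n) contains a line in every direction."""
    E = set(E)
    points = list(product(range(3), repeat=n))
    seen = set()
    for d in points:
        if not any(d):
            continue                                   # skip the zero vector
        if tuple((2 * c) % 3 for c in d) in seen:
            continue                                   # d and 2d give the same direction
        seen.add(d)
        has_line = any(
            all(tuple((e[i] + k * d[i]) % 3 for i in range(n)) in E for k in range(3))
            for e in points)
        if not has_line:
            return False
    return True

def k(n):
    """Smallest size of a Kakeya set in F_3^n, by brute force."""
    points = list(product(range(3), repeat=n))
    for size in range(1, 3 ** n + 1):
        for E in combinations(points, size):
            if is_kakeya(E, n):
                return size

# k(1) returns 3 and k(2) returns 7, matching the values stated above.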
First off, the fact that the board actually blocks the sunlight going into the house may have cooled down the house itself (the same effect as a solar screen). Since this question is about how the pressure and temperature will change after installing the Eco-Cooler air conditioner (the bottle board alone), I will give the following analysis. From the Gay-Lussac law, \begin{align}\frac{P_1}{T_1}=\frac{P_2}{T_2},\end{align} the ratio between pressure and temperature is a constant for a given, approximately fixed volume. When one installs the bottles on the windows, the shape of the bottles increases the air pressure before the air enters the room. This can be understood in the following manner: we assume the wind is entering an almost closed house, so that every cross section along the bottle pipe is approximately under the force-balanced/equilibrium condition $$F_a=S_aP_a\approx S_bP_b$$ for two arbitrary cross sections $S_a$ and $S_b$. Since the bottleneck area is the smallest, the pressure there may be the biggest before the air enters the house. In this process, however, the temperature may not change, since the air is in constant contact with the outdoor environment. When the air blows into the room, the pressure drops immediately to the normal, or maybe even below-normal (depending on the actual room pressure), atmospheric pressure, as there is no bottleneck-shaped pipe limiting its volume. As a result of the equation given above, the temperature goes down immediately inside the room. One important note regarding the other answers and the condition under which equation (2) above is valid: I have noticed the other answers have been focusing on the opening of the chimney and other windows, but here I don't need to assume that condition, and indeed I think the other outlets of the house should be kept closed to prevent heat exchange through those openings. Firstly, we shouldn't focus on whether there is a chimney or outlets on the other side of the house to explain the temperature change due to the bottle board installation. Whether or not the bottle board is installed, the chimney or other windows are always there if there were any, so they cannot be the cause of the change of temperature -- it is the installation of the bottle inlets that generates the temperature change. Secondly, since the house is relatively large compared to the openings shown in the video, the air flow experiences a friction effect when it enters the house (an effect the other answers have not considered). In other words, you can imagine the house as approximately a closed volume that resists the entering air and reduces its entering speed. Therefore, the air flow going through the bottle pipes is compressed at the bottleneck position, and the speed of entering the house is lower than it would be if the air entered a completely open space. This validates the condition for equation (2), namely that any cross section of the air flow along the bottle path is approximately in a force-balanced/equilibrium condition. Thirdly, a chimney may help to let warm air go out, and the houses shown in the video are made of wood and are indeed not air-tight, but it is actually important in practice to keep the other big windows of the house closed, to prevent hot air from entering the house and raising the temperature again! The video shows the room temperature can be $5^\circ$ lower than outside. If you keep other big windows open, it is very easy for the room temperature to rebalance back to a high value again.
This is the same common-sense requirement as when we turn on an air conditioner in summer, and it makes point 2 above valid. Obviously, the other answers may have ignored this common knowledge -- instead of analyzing how the bottle board helps, they argue that there must be openings to let air flow freely in and out of the house for the cooling process to be possible at all. Feasibility and conditions to make it work: We see from the video that there is a $5^\circ$ temperature difference. We can assume that the outside temperature is about $30^\circ C$ or $T_1=303\,K$, and the temperature inside is about $25^\circ C$ or $T_2=298\,K$. Therefore, the pressure raised at the bottleneck relative to the normal house pressure is \begin{align}\eta=\frac{P_1}{P_2}=\frac{T_1}{T_2}\approx 1.017,\end{align} which is a pressure increase of about $1.7\%$. From equation (2), since the bottleneck cross section is a lot smaller than the intake area, the ideal pressure increase can be a lot more than $1.7\%$ when equation (2) holds exactly. Considering that equation (2) holds exactly only when the bottleneck is completely closed on the house side, which is not totally true, and that the house is constantly exchanging heat with the environment through other non-ideal channels, we may find from this rough estimate that the $5^\circ$ temperature decrease is possible. To make the Eco-Cooler work well, it is crucial to have good insulation of the house and to make sure all the other windows/openings of the house are closed, so that the constraint is well satisfied. However, if there is no air flowing into the house from the bottles, this Eco-Cooler may not work well through the pressure-temperature mechanism, but it may still work to some extent by blocking the sunshine from shining into the house. A similar rule governs the case of evaporating water inside an open room: the volume of the water vapor is increased relative to the liquid water and its internal (chemical) energy is also changed, so that in the end the water vapor absorbs heat from the air. Hopefully this helps your understanding of the power of physics laws.
Abstract The generalized Korteweg-de Vries equations are a class of Hamiltonian systems in infinite dimension derived from the KdV equation where the quadratic term is replaced by a higher order power term. These equations have two conservation laws in the energy space $H^1$ ($L^2$ norm and energy). We consider in this paper the critical generalized KdV equation, which corresponds to the smallest power of the nonlinearity such that the two conservation laws do not imply a bound in $H^1$ uniform in time for all $H^1$ solutions (and thus global existence). From [15], there do exist for this equation solutions $u(t)$ such that $|u(t)|_{H^1} \to +\infty$ as $t\uparrow T$, where $T\le +\infty$ (we call them blow-up solutions). The question is to describe, in a qualitative way, how blow up occurs. For solutions with $L^2$ mass close to the minimal mass allowing blow up and with decay in $L^2$ at the right, we prove, after a rescaling and translation which leave the $L^2$ norm invariant, that the solution converges to a universal profile locally in space at the blow-up time $T$. From the nature of this profile, we improve the standard lower bound on the blow-up rate for finite time blow-up solutions.
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$ This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in a revolutionary paper. By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
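If you like to compute, here is a small finite-set sanity check of both adjunctions (the sets \(X\), \(Y\) and the function \(f\) are arbitrary toy choices; the code only illustrates the definitions above and does not replace the proofs asked for in the puzzles).

from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}          # a toy function f : X -> Y

def f_lower_shriek(S):                         # image, the left adjoint f_!
    return {f[x] for x in S}

def f_upper_star(T):                           # inverse image f^*
    return {x for x in X if f[x] in T}

def f_lower_star(S):                           # the "for all" operation, the right adjoint f_*
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(A):
    A = list(A)
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

for S in map(set, subsets(X)):
    for T in map(set, subsets(Y)):
        # left adjunction:  f_!(S) is contained in T  iff  S is contained in f^*(T)
        assert (f_lower_shriek(S) <= T) == (S <= f_upper_star(T))
        # right adjunction: f^*(T) is contained in S  iff  T is contained in f_*(S)
        assert (f_upper_star(T) <= S) == (T <= f_lower_star(S))

print("both adjunctions hold for every S in P(X) and T in P(Y)")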