$\frac{k}{n}\sum_{i=k}^{n-1}\frac{1}{i}$ What is the first derivative of this with respect to $k$? Thank you. There is an alternate definition of the Harmonic Numbers $$ H_n=\sum_{k=1}^\infty\left(\frac1k-\frac1{k+n}\right)\tag{1} $$ This agrees with the standard definition when $n\in\mathbb{Z}$, and is analytic except at the negative integers. Furthermore, we get $$ H_n'=\sum_{k=1}^\infty\frac1{(k+n)^2}\tag{2} $$ If we notice that $$ \sum_{j=k}^{n-1}\frac1j=H_{n-1}-H_{k-1}\tag{3} $$ we get the derivative with respect to $k$ to be $$ \begin{align} \frac{\partial}{\partial k}\sum_{j=k}^{n-1}\frac1j &=-H_{k-1}'\\ &=-\sum_{j=0}^\infty\frac1{(j+k)^2}\tag{4} \end{align} $$ Using $(4)$ and the product rule should give the derivative in the question. If you interpret the sum for non-integer $k$ as $$\sum_{k \le i \le n - 1}\frac1i = H_{n - 1} - H_{\lfloor k \rfloor},$$ the derivative is just $$\frac{\partial}{\partial k}\left(\frac{k}{n}\sum_{k \le i \le n-1}\frac1i\right) = \begin{cases} \frac{1}{n} \left(H_{n - 1} - H_{\lfloor k \rfloor}\right) & \text{if } k \text{ is not an integer} \\ \text{undefined} & \text{otherwise.} \end{cases}$$ But that is certainly a not-very-standard interpretation of your sum notation...
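As a quick numerical sanity check of $(3)$ and $(4)$: the extended harmonic number satisfies $H_m = \psi(m+1) + \gamma$, so $H_{n-1}-H_{k-1} = \psi(n)-\psi(k)$ and the derivative in $(4)$ is $-\psi'(k)$. Here is a minimal sketch, assuming SciPy is available (the helper name is mine):

from scipy.special import digamma, polygamma

def tail_sum(k, n):
    # H_{n-1} - H_{k-1} via the digamma function, valid for non-integer k
    return digamma(n) - digamma(k)

k, n, h = 2.7, 10, 1e-6
fd = (tail_sum(k + h, n) - tail_sum(k - h, n)) / (2 * h)  # finite difference in k
closed = -polygamma(1, k)  # equation (4): -sum_{j>=0} 1/(j+k)^2 = -psi'(k)
print(fd, closed)  # the two values agree to high precision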
Note that whether you roll again or not should be entirely based on the previous roll. That's because the eventual result is the same whatever you do. Now, if you stop after rolling $i$, then you should stop after rolling anything bigger than $i$. And if you continue when you roll an $i$, you should continue after any roll smaller than $i$. Then if the last roll is $i$, for $i=2,3,4,5,6$: Stopping after an $i$ result yields $i$. Continuing after an $i$ result yields $E$. So let $j$ be the largest roll on which we continue. Then the expected value $E_j$ is: $$ \begin{align}E_j&=\frac{1+(j-1)E_j+(j+1)+(j+2)+\cdots + 6}{6}\\&=\frac{1+(j-1)E_j+j(6-j)+(1+2+\cdots+(6-j))}{6}\\&=\frac{1+(j-1)E_j+j(6-j)+\frac{(6-j)(7-j)}{2}}6\\&=\frac{1+(j-1)E_j+\frac{(6-j)(7+j)}{2}}6\end{align}$$ Solving for $E_j$, we get: $$E_j=\frac{1+\frac{(6-j)(7+j)}2}{7-j}$$ So you want to pick $j$ from $1,2,3,4,5,6$ that maximizes this expression. So $E_1=\frac{7}{2}, E_2=\frac{19}{5}, E_3=\frac{16}{4}=4, E_4=\frac{12}{3}=4, E_5=\frac{7}{2}, E_6=1$. Interestingly, this means that two different strategies yield the maximum expected value of $4$.
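A quick brute-force check of the closed form above (a sketch; the function name is mine):

from fractions import Fraction

def E(j):
    # E_j = (1 + (6-j)(7+j)/2) / (7-j), the closed form derived above
    return Fraction(2 + (6 - j) * (7 + j), 2 * (7 - j))

for j in range(1, 7):
    print(j, E(j))
# prints 7/2, 19/5, 4, 4, 7/2, 1: the maximum of 4 is attained at both j=3 and j=4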
The states of an LFSR with characteristic polynomial $p(x)$ correspond to elements of the multiplicative group of $GF(2)[x]/(p(x))$. The period of an LFSR state is the order of the corresponding group element. So, we can rephrase this as a group theory question, and use facts we know from group theory to answer it. The LFSR state with minimal period (other than the all-zeros state) is exactly the group element $g \in G_c$ with minimal order (other than the identity element), where $G_c$ is the multiplicative group of $GF(2)[x]/(c(x))$. Thus, to find the minimal period you want, you need to find the order of the minimal-order group element of $G_c$ (other than the identity element $1$). Assuming $f,g$ have no common factor (i.e., $f\ne g$), the structure of $G_c$ is $G_c \cong G_f \times G_g$, where $G_f$ is the multiplicative group of $GF(2)[x]/(f(x))$ and $G_g$ is the multiplicative group of $GF(2)[x]/(g(x))$. This follows from an analog of the Chinese remainder theorem. Let $g_1$ be the group element of minimal order in $G_f$, and $g_2$ the group element of minimal order in $G_g$ (in each case excluding the identity). From what we know about the structure of $G_c$, the group element of minimal order in $G_c$ is either $(g_1,1)$ or $(1,g_2)$. So it suffices to find the group elements of minimal order in $G_f$ and $G_g$. From the fact that $f$ and $g$ are primitive, we know that $G_f,G_g$ are cyclic groups of order $2^{n_f}-1$, $2^{n_g}-1$, respectively, where $n_f = \deg f$ and $n_g = \deg g$. Because these groups are cyclic, the orders of their elements are exactly the divisors of $2^{n_f}-1$ and $2^{n_g}-1$, respectively (Lagrange's theorem gives one direction; cyclicity gives the converse). Let $d_f$ be the smallest divisor of $2^{n_f}-1$ such that $d_f>1$, and similarly for $d_g$. Then the group element of minimal order in $G_f$ will have order $d_f$, and the group element of minimal order in $G_g$ will have order $d_g$. It follows that the group element of minimal order in $G_c$ will have order $d=\min(d_f,d_g)$, and the LFSR state of minimal period will have period $d$. So, the answer to your question is: the minimal period (other than that of the all-zeros state) is exactly $\min(d_f,d_g)$, where $d_f$ is the smallest divisor of $2^{\deg f}-1$ other than 1 and $d_g$ is the smallest divisor of $2^{\deg g}-1$ other than 1.
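A small sketch of the resulting computation (plain trial division; function names are mine). For example, with $\deg f = 4$ we get $2^4-1 = 15$, whose smallest nontrivial divisor is 3, and with $\deg g = 5$ we get the prime $2^5-1 = 31$, so the minimal period is 3:

def smallest_nontrivial_divisor(m):
    """Smallest divisor of m greater than 1, i.e. the smallest prime factor."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

def minimal_lfsr_period(deg_f, deg_g):
    d_f = smallest_nontrivial_divisor(2 ** deg_f - 1)
    d_g = smallest_nontrivial_divisor(2 ** deg_g - 1)
    return min(d_f, d_g)

print(minimal_lfsr_period(4, 5))  # 3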
The force of gravity is constantly being applied to an orbiting object. And therefore the object is constantly accelerating. Why doesn't gravity eventually "win" over the object's momentum, like a force such as friction eventually slows down a car that runs out of gas? I understand (I think) how relativity explains it, but how does Newtonian mechanics explain it? Newtonian mechanics explains that they do fall toward the object they're orbiting, they just keep missing. Quick and dirty derivation for a circular orbit. Let the primary have mass $M$ and the satellite mass $m$ such that $m \ll M$ (it can also be done for other cases, but this saves on mathiness). Assume we start with an initial circular orbit of radius $r$ and velocity $v = \sqrt{G\frac{M}{r}}$. The acceleration of the satellite due to gravity is $a = G\frac{M}{r^2}$, which means we can also write $v = \sqrt{ar}$. The period of the orbit is $T = \frac{2\pi r}{v} = 2\pi \sqrt{\frac{r}{a}}$. Choose a coordinate system in which the initial position is $r\hat{i} + 0\hat{j}$ and the initial velocity points in the $+\hat{j}$ direction. Choose a short time $t \ll T$ and let's see how far from the primary the satellite ends up after that time. If we have chosen $t$ short enough, we can approximate gravity as having uniform strength through the time period (and we shall show later that this is justified). The new position is $\left(r - \frac{1}{2}at^2\right)\hat{i} + vt\hat{j}$, which lies at a distance $$ r_2 = \sqrt{r^2 - r a t^2 + \frac{1}{4}a^2 t^4 + v^2 t^2} $$ from the primary. Pulling out a factor of $r$ we get $$ r_2 = r \sqrt{1 - \frac{a}{r} t^2 + \frac{1}{4}\frac{a^2}{r^2} t^4 + \frac{v^2}{r^2} t^2} $$ and converting all the $\frac{a}{r}$ and $\frac{v}{r}$ terms into expressions of the period we get $$ r_2 = r \sqrt{1 - \left(2\pi\frac{t}{T}\right)^2 + \frac{1}{4}\left(2\pi\frac{t}{T}\right)^4 + \left(2\pi\frac{t}{T}\right)^2}$$ Finally, we drop the $(t/T)^4$ term as negligible and note that the $(t/T)^2$ terms cancel, so the result is $$r_2 = r$$ i.e. the radius never changed (which justifies the constant magnitude for the acceleration, and a small enough $t$ justifies both the constant direction and the dropping of the fourth-degree term). The force of gravity has little to do with friction. As dmckee says, what is happening is that the body falls, but precisely because it has enough momentum, it falls around the object towards which it gravitates instead of into it. Of course, this is not always the case; collisions do happen. Also, systems of astronomical bodies are complicated, and the combined effect of the action of several different bodies on one can destabilize trajectories that in a simple 2-body case would be stable ellipses. The result could be collision or escape of the body. In the 2-body case, however, the crucial aspect of gravity which guarantees the stability of the system is the fact that gravitation is a central force: it always acts towards the center of the other gravitating mass. One can show that this feature implies the conservation of angular momentum, which means that if the 2-body system had some angular momentum to begin with, it will keep the same angular momentum indefinitely. (Extra note: even in the 2-body case there can be collisions and escape to infinity, the first if there is not enough angular momentum (for instance one body having velocity directed towards the other body, like an apple falling from a tree), the other if there is too much energy, resulting in parabolic or hyperbolic trajectories.)
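The "falls but keeps missing" picture is easy to confirm numerically. Below is a minimal sketch in normalized units ($GM = 1$, $r = 1$, so $v = \sqrt{ar} = 1$ and $T = 2\pi$) that integrates one full period with velocity Verlet and checks that the radius does not decay:

import math

GM = 1.0
x, y = 1.0, 0.0      # start at radius r = 1
vx, vy = 0.0, 1.0    # circular speed v = sqrt(GM/r) = 1, tangential
dt = 1e-4

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3  # always points at the primary

ax, ay = accel(x, y)
for _ in range(int(2 * math.pi / dt)):  # one orbital period T = 2*pi
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax_new, ay_new = accel(x, y)
    vx += 0.5 * (ax + ax_new) * dt
    vy += 0.5 * (ay + ay_new) * dt
    ax, ay = ax_new, ay_new

print(math.hypot(x, y))  # ~1.0: the satellite keeps falling, and keeps missing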
I always think of it this way: gravity and the centrifugal force generated by the orbiting object are exactly in balance. If you tie a rope to an object and spin it around you, the centrifugal force will pull on the rope. You pull back with the same force to keep the object orbiting around you. That's exactly what gravity does to orbiting objects. You can also see that the speed of the object forces it into a particular orbit. If the object slows down, it will fall and either settle into a new lower orbit, or crash into the object it's rotating around.
[update 2012-10-04] In the meantime, I helped Nic with what he calls scratching some itches for ex-Google-Reader users: we now have gritttt, and gritttt has a) a means to import shared/starred items from g-reader into tt-rss; b) drive-by

There are many helpful photography resources out there; these are the ones I found most useful: James Danziger & Barnaby Conrad III: Interviews

To keep track of inspiring photographers I started this list, because nothing else worked. Irene Andessner Website: www.andessner.com Austrian fine artist. Collaborates with various photographers for »self portraits done by somebody else«. »Ursula K.«-series

Statistician Nic Marks gave a really interesting talk on happiness at TED. Starting from the observation that at the moment we focus too much on the problems and apocalyptic scenarios, we are constantly turning Martin Luther King's famous quote

Today I found a second way to achieve a strikethrough in LaTeX (what is done by »line-through« in CSS: strike out text). If you want to put a line across text, your choices are »ulem« and »cancel« (a minimal example follows at the end of this page): Strikethrough in LaTeX using »ulem«: \usepackage{ulem} in the preamble

I have phases in my work cycle where I want to limit internet access for myself. Thus, I created a »work-user« and in the user's properties I unticked the boxes Connect to internet using a modem / Connect to wireless and ethernet networks / Use modems. I thought that should do the trick, yet it didn't restrict internet

I was somewhat flabbergasted when I found out my mobile phone (Sony Ericsson Cyber-shot) was unable to play .mp4, .flv, .avi and whatever else I tried. It refuses to play all video formats save .3gp. I was unable to convert to this with avidemux. Google

In short: to avoid the standard pixel bitmap fonts and go for smooth, scalable PostScript ones, use one of the following: \usepackage{palatino} \usepackage{times} \usepackage{bookman} \usepackage{newcent} or, for standard PostScript fonts, \usepackage{pslatex}

My new job includes attending the odd job interview now and again. Today we had an applicant scheduled for 11 a.m. It turned eleven and it turned five past and ten past until at quarter past eleven the front desk finally rang to tell us the applicant

In CSS there is the handy absolute positioning. Today I found out how to do it in LaTeX: In the preamble \usepackage{textpos} In the document \begin{textblock}{2}[0,0](8,1.5) Lorem ipsum dolor sit amet \end{textblock} The arguments are as follows: \begin{textpos} { <width> }

This last weekend I understood a lot about the internet food chain: There are the smart guys and there are the monkeys. The smart guys find out how to crack a system. They publish their stuff and move on. The smart guys are too busy to play around. As

I left biting marks in the table on this one. I don't know if it's a general issue or just my document. Anyway: I wanted to have footnotes from inside sections, subsections, and subsubsections. They work similar to footnotes in tables, you

After graduation naturally comes application. Now it seems that application naturally comes with frustration. Each and every company knows exactly what they want to a degree that just saw me printing 14 pages of a single job description – the job

Yesterday we arrived in London for a short holiday. In the early evening we were rambling the streets, not sure what to do. When at last we made our minds up and opted for cinema, a couple stopped us. Something was with their daughter, I did not

Only today I did use stairs.
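As promised in the strikethrough note above, here is a minimal sketch of both options (ulem's \sout for text, cancel's \cancel for math mode):

\documentclass{article}
\usepackage{ulem}   % provides \sout for striking out text
\usepackage{cancel} % provides \cancel for math mode
\begin{document}
\sout{this text is struck out} and in math: $a + \cancel{b} = c$
\end{document}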
By default LaTeX starts the footnote counter at zero for each chapter when you use the class {book} or {scrbook}. If you want to avoid that and have a continuous enumeration, here is how it works (a minimal example follows at the end of this post): Create a folder <remreset> in your local package

There are various possibilities to include Greek text in your LaTeX document. The three I found are these: $\Gamma\rho\varepsilon\varepsilon\kappa$ gets you Γρεεκ all right, but it looks clumsy and lacks all the accents etc. betababel. It does not work with my customised control sequences, and I am too lazy to change them and learn them all anew. polutonikogreek. Neat, slim, worked straight away. Nos. 2 & 3 use ngerman, so make sure they don't start a fight with german.

Update: I had a slight problem with polutonikogreek and titletoc. Whenever I used something like \greek{p'olemos}, which referred to this entry in the preamble: \newcommand*{\greek}[1]% {\selectlanguage{polutonikogreek}{#1}% \selectlanguage{german}} the .toc file looked like this at the corresponding place: […] \contentsline {section}{\numberline {1.1}KAPITEL-1.1}{14} \contentsline {subsection}{\numberline {1.1.1}UNTERKAPITEL-1.1.1}{14} \select@language {polutonikogreek} \select@language {german} \select@language {polutonikogreek} \select@language {german} \contentsline {subsection}{\numberline {1.1.2}UNTERKAPITEL-1.1.2}{20} […] Wherever \select@language appeared in the toc, the styling of my toc entries at the subsection level was being messed up. I style subsection entries in the toc in a way that they all get written in a single line. It looks like this: \titlecontents*{subsection}[3.5em] {\vspace{-0.5mm}\itshape\footnotesize}{}% {}{\dots\normalfont\footnotesize% \thecontentspage.\enspace}% [\itshape][\vspace{1mm}] There are two solutions. Either ignore the problem, compile your document, open the .toc file, delete all \select@language entries and compile again (but only once); or use the following specifications in your preamble: \usepackage{ucs} \usepackage[utf8x]{inputenc} \usepackage[polutonikogreek,german]{babel} \newcommand{\gdir}%

How it does work: Here is what we do: We define the counter \newcounter{MyCounter} and then we add \renewcommand\theMyCounter{\roman{MyCounter}} How it does not work: When you define a new counter like this \newcounter{MyCounter} and later use it like this \refstepcounter{MyCounter}\label{example} \roman{MyCounter}. Example one and then reference it like this: And now I reference an example \ref{example}. \end{document} Then LaTeX still interprets it as

In the course of writing my thesis I came across Nimrod, grandson of Ham, great-grandson of Noah (Gen 10, 1-12). Now Nimrod is not only »a mighty hunter before the Lord: wherefore it is said, Even as
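As promised above, a minimal sketch of continuous footnote numbering: the remreset package provides \@removefromreset, which undoes the per-chapter reset of the footnote counter in {book}:

\documentclass{book}
\usepackage{remreset}
\makeatletter
% stop the footnote counter from being reset at every \chapter
\@removefromreset{footnote}{chapter}
\makeatother
\begin{document}
\chapter{One}
Text.\footnote{Footnote 1.}
\chapter{Two}
Text.\footnote{Footnote 2, not reset to 1.}
\end{document}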
Take the straightforward Fibonacci equation $$F_0 = 0, \quad F_1 = 1$$ $$F_{n-2} + F_{n-1} = F_n$$ Let's consider a holomorphic function $F: \mathbb{C} \to \mathbb{C}$ such that $$F(z)\Big{|}_{\mathbb{N}} = F_n$$ $$F(z-2) + F(z-1) = F(z)$$ Let's say such $F$ satisfies the Fibonacci equation in the complex plane. It is very easy to produce such functions. Taking the Binet identity $$F_n = \frac{\phi^{n} - \psi^n}{\phi - \psi}$$ where $$\phi = \frac{1 + \sqrt{5}}{2}$$ $$\psi = \frac{1 - \sqrt{5}}{2}$$ it follows for any $k,j \in \mathbb{Z}$, using the standard branch of $\phi^z$ and $\psi^z$, that the functions $$F_{jk}(z) = \frac{e^{2\pi i j z}\phi^z -e^{2 \pi i k z}\psi^z}{\phi - \psi}$$ are solutions of the Fibonacci equation in the complex plane. These can't be all solutions, though. Namely, if $F$ and $G$ are solutions where we've simply chosen different $k$ and $j$ for each one, then $$\frac{F}{2} + \frac{G}{2}$$ is equally a solution, which corresponds to no function from our list. This additionally implies that the infinite sum $$\mathcal{F} = \sum_{j=-\infty}^\infty \sum_{k=-\infty}^\infty a_{jk}F_{jk}(z)$$ is a solution to the Fibonacci equation in the complex plane if it converges everywhere and $$\sum_{j=-\infty}^\infty \sum_{k=-\infty}^\infty a_{jk} = 1$$ Are these all of the solutions?
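One can check the family numerically. Below is a minimal sketch (helper names are mine) that evaluates $F_{jk}$ with principal branches and verifies the recurrence $F(z-2)+F(z-1)=F(z)$ at a non-integer point:

import cmath

phi = (1 + 5 ** 0.5) / 2
psi = (1 - 5 ** 0.5) / 2

def cpow(base, z):
    # principal branch of base**z (psi is negative, so take the principal log)
    return cmath.exp(z * cmath.log(base))

def F_jk(z, j, k):
    tpi = 2j * cmath.pi
    return (cmath.exp(tpi * j * z) * cpow(phi, z)
            - cmath.exp(tpi * k * z) * cpow(psi, z)) / (phi - psi)

z = 0.3 + 0.7j
for j, k in [(0, 0), (1, -2), (3, 5)]:
    err = abs(F_jk(z - 2, j, k) + F_jk(z - 1, j, k) - F_jk(z, j, k))
    print(j, k, err)  # ~1e-15 for each: every F_{jk} solves the recurrence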
I am not very familiar with the common discretization schemes for PDEs. I know that Crank-Nicolson is a popular scheme for discretizing the diffusion equation. Is it also a good choice for the advection term? I am interested in solving the reaction-diffusion-advection equation, $\frac{\partial u}{\partial t} + \nabla \cdot \left( \boldsymbol{v} u - D\nabla u \right) = f$ where $D$ is the diffusion coefficient of substance $u$ and $\boldsymbol{v}$ is the velocity. For my specific application the equation can be written in the form, $\frac{\partial u}{\partial t} = \underbrace{D\frac{\partial^2 u}{\partial x^2}}_{\textrm{Diffusion}} + \underbrace{\boldsymbol{v}\frac{\partial u}{\partial x}}_{\textrm{Advection (convection)}} + \underbrace{f(x,t)}_{\textrm{Reaction}}$ Here is the Crank-Nicolson scheme I have applied, $\frac{u_{j}^{n+1} - u_{j}^{n}}{\Delta t} = D \left[ \frac{1 - \beta}{(\Delta x)^2} \left( u_{j-1}^{n} - 2u_{j}^{n} + u_{j+1}^{n} \right) + \frac{\beta}{(\Delta x)^2} \left( u_{j-1}^{n+1} - 2u_{j}^{n+1} + u_{j+1}^{n+1} \right) \right] + \boldsymbol{v} \left[ \frac{1-\alpha}{2\Delta x} \left( u_{j+1}^{n} - u_{j-1}^{n} \right) + \frac{\alpha}{2\Delta x} \left( u_{j+1}^{n+1} - u_{j-1}^{n+1} \right) \right] + f(x,t)$ Notice the $\alpha$ and $\beta$ terms. These let the scheme move between: $\beta=\alpha=1/2$: Crank-Nicolson; $\beta=\alpha=1$: fully implicit; $\beta=\alpha=0$: fully explicit. The two values can be different, which allows the diffusion term to be treated with Crank-Nicolson and the advection term with something else. What is the most stable approach? What would you recommend?
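For what it's worth, here is a minimal 1D sketch of exactly this $\alpha,\beta$-weighted scheme (periodic boundaries, made-up parameter values, zero reaction term), assuming NumPy/SciPy. It is only meant to show how the two implicit weights enter the linear system; for advection-dominated problems an upwinded advection discretization is often preferred over the centered one used here:

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

D, v = 0.1, 1.0            # diffusion coefficient and velocity (made-up values)
alpha = beta = 0.5         # Crank-Nicolson weights for advection and diffusion
N, L, dt = 200, 1.0, 1e-3
dx = L / N
x = np.linspace(0, L, N, endpoint=False)

e = np.ones(N - 1)
A2 = diags([e, -2 * np.ones(N), e], [-1, 0, 1], format="lil")  # second difference
A1 = diags([-e, e], [-1, 1], format="lil")                     # centered first difference
A2[0, -1] = A2[-1, 0] = 1                                      # periodic wrap-around
A1[0, -1], A1[-1, 0] = -1, 1
A2 = (D / dx ** 2) * A2.tocsc()
A1 = (v / (2 * dx)) * A1.tocsc()

I = identity(N, format="csc")
lhs = (I - dt * (beta * A2 + alpha * A1)).tocsc()              # implicit part
rhs = (I + dt * ((1 - beta) * A2 + (1 - alpha) * A1)).tocsc()  # explicit part
solve = splu(lhs).solve

u = np.exp(-200 * (x - 0.3) ** 2)  # initial pulse
f = np.zeros(N)                    # reaction term, zero in this sketch
for _ in range(500):
    u = solve(rhs @ u + dt * f)
print(u.max())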
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...

Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\mathbf{\sqrt{s_{{\rm NN}}} = 5.02}$ TeV (Elsevier, 2015-01) We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...

Elliptic flow of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Springer, 2015-06-29) The elliptic flow coefficient ($v_{2}$) of identified particles in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV was measured with the ALICE detector at the LHC. The results were obtained with the Scalar Product ...

Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at $\sqrt{s}$ = 2.76 TeV (American Physical Society, 2015-01-07) The pT-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at midrapidity in proton-proton collisions at $\sqrt{s}$ = 2.76 TeV in the transverse momentum range ...

Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2015-02-04) Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p-Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ...
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...

Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...

Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...

Beauty production in pp collisions at $\sqrt{s}$ = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp ...
2019-07-18 17:03 Precision measurement of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ baryon lifetimes / LHCb Collaboration We report measurements of the lifetimes of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ charm baryons using proton-proton collision data at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment. The charm baryons are reconstructed through the decays $\Lambda_c^+\to pK^-\pi^+$, $\Xi_c^+\to pK^-\pi^+$ and $\Xi_c^0\to pK^-K^-\pi^+$, and originate from semimuonic decays of beauty baryons. [...] arXiv:1906.08350; LHCb-PAPER-2019-008; CERN-EP-2019-122.- 2019-08-02 - 12 p. - Published in : Phys. Rev. D 100 (2019) 032001

2019-07-02 10:45 Observation of the $\Lambda_b^0\rightarrow \chi_{c1}(3872)pK^-$ decay / LHCb Collaboration Using proton-proton collision data, collected with the LHCb detector and corresponding to 1.0, 2.0 and 1.9 fb$^{-1}$ of integrated luminosity at the centre-of-mass energies of 7, 8, and 13 TeV, respectively, the decay $\Lambda_b^0\to \chi_{c1}(3872)pK^-$ with $\chi_{c1}\to J/\psi\pi^+\pi^-$ is observed for the first time. The significance of the observed signal is in excess of seven standard deviations. [...] arXiv:1907.00954; CERN-EP-2019-131; LHCb-PAPER-2019-023.- 2019-09-03 - 21 p. - Published in : JHEP 1909 (2019) 028

2019-06-21 17:31 Updated measurement of time-dependent CP-violating observables in $B^0_s \to J/\psi K^+K^-$ decays / LHCb Collaboration The decay-time-dependent {\it CP} asymmetry in $B^{0}_{s}\to J/\psi K^{+} K^{-}$ decays is measured using proton-proton collision data, corresponding to an integrated luminosity of $1.9\,\mathrm{fb^{-1}}$, collected with the LHCb detector at a centre-of-mass energy of $13\,\mathrm{TeV}$ in 2015 and 2016. Using a sample of approximately 117\,000 signal decays with an invariant $K^{+} K^{-}$ mass in the vicinity of the $\phi(1020)$ resonance, the {\it CP}-violating phase $\phi_s$ is measured, along with the difference in decay widths of the light and heavy mass eigenstates of the $B^{0}_{s}$-$\overline{B}^{0}_{s}$ system, $\Delta\Gamma_s$. [...] arXiv:1906.08356; LHCb-PAPER-2019-013; CERN-EP-2019-108.- Geneva : CERN, 2019-08-22 - 42 p. - Published in : Eur. Phys. J. C 79 (2019) 706

2019-06-21 17:07 Measurement of $C\!P$ observables in the process $B^0 \to DK^{*0}$ with two- and four-body $D$ decays / LHCb Collaboration Measurements of $C\!P$ observables in $B^0 \to DK^{*0}$ decays are presented, where $D$ represents a superposition of $D^0$ and $\bar{D}^0$ states. The $D$ meson is reconstructed in the two-body final states $K^+\pi^-$, $\pi^+ K^-$, $K^+K^-$ and $\pi^+\pi^-$, and, for the first time, in the four-body final states $K^+\pi^-\pi^+\pi^-$, $\pi^+ K^-\pi^+\pi^-$ and $\pi^+\pi^-\pi^+\pi^-$. [...] arXiv:1906.08297; LHCb-PAPER-2019-021; CERN-EP-2019-111.- Geneva : CERN, 2019-08-07 - 30 p.
- Published in : JHEP 1908 (2019) 041

2019-05-16 14:31 Measurement of $CP$-violating and mixing-induced observables in $B_s^0 \to \phi\gamma$ decays / LHCb Collaboration A time-dependent analysis of the $B_s^0 \to \phi\gamma$ decay rate is performed to determine the $CP$-violating observables $S_{\phi\gamma}$ and $C_{\phi\gamma}$, and the mixing-induced observable $\mathcal{A}^{\Delta}_{\phi\gamma}$. The measurement is based on a sample of $pp$ collision data recorded with the LHCb detector, corresponding to an integrated luminosity of 3 fb$^{-1}$ at center-of-mass energies of 7 and 8 TeV. [...] arXiv:1905.06284; LHCb-PAPER-2019-015; CERN-EP-2019-077.- 2019-08-28 - 10 p. - Published in : Phys. Rev. Lett. 123 (2019) 081802

2019-04-10 11:16 Observation of a narrow pentaquark state, $P_c(4312)^+$, and of two-peak structure of the $P_c(4450)^+$ / LHCb Collaboration A narrow pentaquark state, $P_c(4312)^+$, decaying to $J/\psi p$ is discovered with a statistical significance of $7.3\sigma$ in a data sample of $\Lambda_b^0\to J/\psi p K^-$ decays which is an order of magnitude larger than that previously analyzed by the LHCb collaboration. The $P_c(4450)^+$ pentaquark structure formerly reported by LHCb is confirmed and observed to consist of two narrow overlapping peaks, $P_c(4440)^+$ and $P_c(4457)^+$, where the statistical significance of this two-peak interpretation is $5.4\sigma$. [...] arXiv:1904.03947; LHCb-PAPER-2019-014; CERN-EP-2019-058.- Geneva : CERN, 2019-06-06 - 11 p. - Published in : Phys. Rev. Lett. 122 (2019) 222001

2019-04-03 11:16 Measurements of $CP$ asymmetries in charmless four-body $\Lambda^0_b$ and $\Xi_b^0$ decays / LHCb Collaboration A search for $CP$ violation in charmless four-body decays of $\Lambda^0_b$ and $\Xi_b^0$ baryons with a proton and three charged mesons in the final state is performed. To cancel out production and detection charge-asymmetry effects, the search is carried out by measuring the difference between the $CP$ asymmetries in a charmless decay and in a decay with an intermediate charmed baryon with the same particles in the final state. [...] arXiv:1903.06792; LHCb-PAPER-2018-044; CERN-EP-2019-13.- 2019-09-07 - 30 p. - Published in : Eur. Phys. J. C 79 (2019) 745

2019-04-01 11:42 Observation of an excited $B_c^+$ state / LHCb Collaboration Using $pp$ collision data corresponding to an integrated luminosity of $8.5\,\mathrm{fb}^{-1}$ recorded by the LHCb experiment at centre-of-mass energies of $\sqrt{s} = 7$, $8$ and $13\mathrm{\,Te\kern -0.1em V}$, the observation of an excited $B_c^+$ state in the $B_c^+\pi^+\pi^-$ invariant-mass spectrum is reported.
The state has a mass of $6841.2 \pm 0.6 {\,\rm (stat)\,} \pm 0.1 {\,\rm (syst)\,} \pm 0.8\,(B_c^+) \mathrm{\,MeV}/c^2$, where the last uncertainty is due to the limited knowledge of the $B_c^+$ mass. [...] arXiv:1904.00081; CERN-EP-2019-050; LHCb-PAPER-2019-007.- Geneva : CERN, 2019-06-11 - 10 p. - Published in : Phys. Rev. Lett. 122 (2019) 232001

2019-04-01 09:46 Near-threshold $D\bar{D}$ spectroscopy and observation of a new charmonium state / LHCb Collaboration Using proton-proton collision data, corresponding to an integrated luminosity of 9 fb$^{-1}$, collected with the LHCb detector between 2011 and 2018, a new narrow charmonium state, the $X(3842)$ resonance, is observed in the decay modes $X(3842)\rightarrow D^0\bar{D}^0$ and $X(3842)\rightarrow D^+D^-$. The mass and the natural width of this state are measured to be \begin{eqnarray*} m_{X(3842)} & = & 3842.71 \pm 0.16 \pm 0.12~ \text {MeV}/c^2\,, \\ \Gamma_{X(3842)} & = & 2.79 \pm 0.51 \pm 0.35 ~ \text {MeV}\,, \end{eqnarray*} where the first uncertainty is statistical and the second is systematic. [...] arXiv:1903.12240; CERN-EP-2019-047; LHCb-PAPER-2019-005.- Geneva : CERN, 2019-07-08 - 23 p. - Published in : JHEP 1907 (2019) 035
We prove that the integrality gap after tightening the standard LP relaxation for Vertex Cover with $\Omega(\sqrt{\log n/\log\log n})$ rounds of the SDP LS+ system is 2-o(1).

We study integrality gaps for SDP relaxations of constraint satisfaction problems, in the hierarchy of SDPs defined by Lasserre. Schoenebeck recently showed the first integrality gaps for these problems, showing that for MAX k-XOR, the ratio of the SDP optimum to the integer optimum may be as large as ...

This work considers the problem of approximating fixed predicate constraint satisfaction problems (MAX k-CSP(P)). We show that if the set of assignments accepted by $P$ contains the support of a balanced pairwise independent distribution over the domain of the inputs, then such a problem on $n$ variables cannot be approximated ...

We study the performance of the Sherali-Adams system for VERTEX COVER on graphs with vector chromatic number $2+\epsilon$. We are able to construct solutions for LPs derived by any number of Sherali-Adams tightenings by introducing a new tool to establish Local-Global Discrepancy. When restricted to $O(1/ \epsilon)$ tightenings we show ...

Lovász and Schrijver introduced several lift and project methods for $0$-$1$ integer programs, now collectively known as Lovász-Schrijver ($LS$) hierarchies. Several lower bounds have since been proven for the rank of various linear programming relaxations in the $LS$ and $LS_+$ hierarchies. In this paper we investigate rank bounds in the ...

We consider the complexity of LS$_+$ refutations of unsatisfiable instances of Constraint Satisfaction Problems (CSPs) when the underlying predicate supports a pairwise independent distribution on its satisfying assignments. This is the most general condition on the predicates under which the corresponding MAX-CSP problem is known to be approximation resistant. We ...

We show optimal (up to constant factor) NP-hardness for Max-k-CSP over any domain, whenever k is larger than the domain size. This follows from our main result concerning predicates over abelian groups. We show that a predicate is approximation resistant if it contains a subgroup that is ...

A boolean predicate $f:\{0,1\}^k\to\{0,1\}$ is said to be {\em somewhat approximation resistant} if for some constant $\tau > \frac{|f^{-1}(1)|}{2^k}$, given a $\tau$-satisfiable instance of the MAX-$k$-CSP$(f)$ problem, it is NP-hard to find an assignment that {\it strictly beats} the naive algorithm that outputs a uniformly random assignment. Let $\tau(f)$ denote ...

This brief survey gives a (roughly) self-contained overview of some complexity theoretic results about semi-algebraic proof systems and related hierarchies and the strong connections between them. The article is not intended to be a detailed survey on "Lift and Project" type optimization hierarchies (cf. Chlamtac and Tulsiani) or related proof ...

For a predicate $f:\{-1,1\}^k \mapsto \{0,1\}$ with $\rho(f) = \frac{|f^{-1}(1)|}{2^k}$, we call the predicate strongly approximation resistant if given a near-satisfiable instance of CSP$(f)$, it is computationally hard to find an assignment such that the fraction of constraints satisfied is outside the range $[\rho(f)-\Omega(1), \rho(f)+\Omega(1)]$. We present a characterization of ...
A predicate $f:\{-1,1\}^k \mapsto \{0,1\}$ with $\rho(f) = \frac{|f^{-1}(1)|}{2^k}$ is called {\it approximation resistant} if given a near-satisfiable instance of CSP$(f)$, it is computationally hard to find an assignment that satisfies at least $\rho(f)+\Omega(1)$ fraction of the constraints. We present a complete characterization of approximation resistant predicates under the ...
I do not intend to be rigorous here; I just want to give you the intuition. You can go deeper on this subject by reading textbooks or googling it. If you have one random variable $X$, you say that it follows a normal distribution if the density is $f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. Suppose now you have several random variables $X_1, X_2, \ldots, X_n$. They form a vector $\textbf{X}$, and you say that $\textbf{X}$ is multivariate Gaussian if the joint density $f_{\textbf{X}}(\textbf{x})$ has a specific form. This is just an extension of the definition we just saw in the univariate case. We are referring to the joint probability here, $P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n)$, that is, a multidimensional integral of the joint density $f_{\textbf{X}}(\textbf{x})$. A mixture of Gaussians is a different concept: we are talking about hierarchical models here. Suppose that the random variable $X$ has two parameters $\mu$ and $\sigma$, but both of them are also random, with $\mu \sim N (m_1, s_1)$ and $\sigma \sim N (m_2, s_2)$; it could be that the conditional distribution of $X$ given $\mu$ and $\sigma$ is normal. In this case you write $X \mid \mu, \sigma \sim N(\mu, \sigma)$, and that is a mixture of Gaussians.
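A small sketch contrasting the two constructions with NumPy (all numbers are made up; the abs() is my own way of keeping $\sigma$ positive, since a normal prior on a standard deviation can go negative):

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Multivariate Gaussian: one draw is a vector whose coordinates are jointly normal
mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])
X = rng.multivariate_normal(mean, cov, size=n)   # shape (n, 2)

# Hierarchical (mixture) construction: draw the parameters first, then the data
mu = rng.normal(0.0, 1.0, size=n)                # mu ~ N(m1, s1)
sigma = np.abs(rng.normal(1.0, 0.2, size=n))     # sigma ~ N(m2, s2), forced positive
Y = rng.normal(mu, sigma)                        # X | mu, sigma ~ N(mu, sigma)

print(X.shape, Y.mean(), Y.std())  # Y's marginal is a continuous Gaussian mixture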
SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative to commercial programs such as Maple, Matlab, and Mathematica. SageMath provides support for the following: Calculus: using Maxima and SymPy. Linear Algebra: using the GSL, SciPy and NumPy. Statistics: using R (through RPy) and SciPy. Graphs: using matplotlib. An interactive shell using IPython. Access to Python modules such as PIL, SQLAlchemy, etc.

Installation The main package contains the command-line version, with HTML documentation and inline help available from the command line. A separate package includes the browser-based notebook interface. The package has a number of optional dependencies for various features that will be disabled if the needed packages are missing.

Usage SageMath mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations. SageMath command line SageMath can be started from the command line: $ sage For information on the SageMath command line see this page. Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example: sage: plot(sin,(x,0,10)) SageMath opens a browser window with the Sage Notebook.

Sage Notebook A better suited interface for advanced usage of SageMath is the Notebook. To start the Notebook server from the command line, execute: $ sage -n The notebook will be accessible in the browser from http://localhost:8080 and will require you to log in. However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command: $ sage -c "notebook(automatic_login=True)"

Jupyter Notebook SageMath also provides a kernel for the Jupyter notebook. To use it, install the required packages, launch the notebook with the command $ jupyter notebook and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command, and 3D plots if the required package is installed.

Cantor Cantor is an application included in the KDE Edu project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath. Cantor can be installed from the official repositories.

Documentation For local documentation, one can compile it into multiple formats such as HTML or PDF.
To build the whole SageMath reference, execute the following command (as root): # sage --docbuild reference html This builds the HTML documentation for the whole reference tree (it may take longer than an hour). An option is to build a smaller part of the documentation tree, but you would need to know what it is you want. Until then, you might consider just browsing the online reference. For a list of documents see sage --docbuild --documents and for a list of supported formats see sage --docbuild --formats.

Optional additions SageTeX If you have installed TeX Live on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically, so you can start using it straight away. As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex): include the sagetex package in the preamble of your document with the usual \usepackage{sagetex} create a sagesilent environment in which you insert your code: \begin{sagesilent} dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1))) dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1))) p1 = plot(dob,(x, 1, 10), color='blue') p2 = plot(dpr,(x, 1, 10), color='red') ptot = p1 + p2 ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$']) \end{sagesilent} create the plot, e.g. inside a float environment: \begin{figure} \begin{center} \sageplot[width=\linewidth]{ptot} \end{center} \end{figure} compile your document with the following procedure: $ pdflatex <doc.tex> $ sage <doc.sage> $ pdflatex <doc.tex> Then you can have a look at your output document. The full documentation of SageTeX is available on CTAN.

Install Sage package If you installed sagemath from the official repositories, it is not possible to install Sage packages using the sage option sage -i packagename. Instead, you should install the required packages system-wide. For example, if you need jmol (for 3D plots): $ sudo pacman -S jmol An alternative would be to have a local installation of sagemath and to manage optional packages manually.

Troubleshooting TeX Live does not recognize SageTeX If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder): Copy the files to the texmf directory: # cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/ Refresh TeX Live: # texhash /usr/share/texmf/ texhash: Updating /usr/share/texmf/.//ls-R... texhash: Done.

Starting the Sage Notebook server throws an ImportError The Sage Notebook server is in an extra package. So, if you get an ImportError when launching % sage --notebook ┌────────────────────────────────────────────────────────────────────┐ │ Sage Version 6.4.1, Release Date: 2014-11-23 │ │ Type "notebook()" for the browser-based notebook interface. │ │ Type "help()" for help. │ └────────────────────────────────────────────────────────────────────┘ Please wait while the Sage Notebook server starts... Traceback (most recent call last): File "/usr/bin/sage-notebook", line 180, in <module> launcher(unknown) File "/usr/bin/sage-notebook", line 58, in __init__ from sagenb.notebook.notebook_object import notebook ImportError: No module named sagenb.notebook.notebook_object you most likely do not have that package installed.

sage -i doesn't work If you have installed Sage from the official repositories, then you have to install your additional packages system-wide.
See Install Sage package. 3D plot fails in notebook If you get the following error while trying to plot a 3D object: /usr/lib/python2.7/site-packages/sage/repl/rich_output/display_manager.py:570: RichReprWarning: Exception in _rich_repr_ while displaying object: Jmol failed to create file '/home/nicolas/.sage/temp/archimede/3188/dir_cCpcph/preview.png', see '/home/nicolas/.sage/temp/archimede/3188/tmp_JVpSqF.txt' for details RichReprWarning, Graphics3d Object then you are probably missing the jmol package. See Install Sage package to install it.
doi: 10.1685/journal.caim.531 Stochastic processes related to time-fractional diffusion-wave equation Abstract It is known that the solution to the Cauchy problem $$ D^\beta_* u(x,t)= R^\alpha u(x,t) \,, \quad u(x,0)=\delta(x) \,, \quad \frac{\partial}{\partial t}u(x,t=0) \equiv 0 \,, \quad -\infty < x < \infty \,, \quad t > 0 \,, $$ is a probability density if $1 < \beta \le \alpha \le 2$, where $D^\beta_*$ is the time-fractional Caputo derivative of order $\beta$, whereas $R^\alpha$ denotes the spatial Riesz fractional pseudo-differential operator. In the present paper we consider the question whether $u(x,t)$ can be interpreted in a natural way as the sojourn probability density (at point $x$, evolving in time $t$) of a randomly wandering particle starting at the origin $x=0$ at instant $t=0$. We show that this indeed can be done in the extreme case $\alpha=2$, that is $R^\alpha=\displaystyle{\frac{\partial^2}{\partial x^2}}$. Moreover, if $\alpha=2$ we can replace $D^\beta_*$ by an operator of distributed orders with a non-negative (generalized) weight function $b(\beta)$: $$ \displaystyle{\int_{(1,2]} \!\!\! b(\beta) \, D^\beta_* \dots d\beta} $$ For this case, too, $u(x,t)$ is a probability density.
Written on 01 Apr 2014 at 01:40 in Pubblicazioni. This page collects the publications produced through the professional or amateur work of ARA members. Many are pulled in automatically from ADSABS. Available from the Minor Planet Center.

The Javalambre-Physics of the Accelerated Universe Astrophysical Survey (J-PAS) is a narrow-band, very wide field cosmological survey to be carried out from the Javalambre Observatory in Spain with a purpose-built, dedicated 2.5 m telescope and a 4.7 sq. deg. camera with 1.2 Gpix. Starting in late 2015, J-PAS will observe 8500 sq. deg. of Northern sky and measure $0.003(1+z)$ photo-z for $9\times10^7$ LRG and ELG galaxies plus several million QSOs, sampling an effective volume of $\sim 14$ Gpc$^3$ up to $z=1.3$ and becoming the first radial BAO experiment to reach Stage IV. J-PAS will detect…

We analyse the rest-frame ultraviolet (UV) to near-infrared (near-IR) spectral energy distribution (SED) of Lyman-break galaxies (LBGs), star-forming (SF) BzK (sBzK) and UV-selected galaxies at 1.5 ≲ z ≲ 2.5 in the COSMOS, GOODS-N and GOODS-S fields. Additionally, we complement the multiwavelength coverage of the galaxies located in the GOODS fields with deep far-infrared (FIR) data taken from the GOODS-Herschel project. According to their best-fitting SED-derived properties we find that, because of their selection criterion involving UV measurements, LBGs tend to be UV-brighter and bluer…

We study a hundred galaxies from the spectroscopic Sloan Digital Sky Survey with individual detections in the far-infrared Herschel PACS bands (100 or 160 $\mu$m) and in the GALEX far-ultraviolet band up to z$\sim$0.4 in the COSMOS and Lockman Hole fields. The galaxies are divided into 4 spectral and 4 morphological types. For the star-forming and unclassifiable galaxies we calculate dust extinctions from the UV slope, the H$\alpha$/H$\beta$ ratio and the $L_{\rm IR}/L_{\rm UV}$ ratio. There is a tight correlation between the dust extinction and both $L_{\rm IR}$ and metallicity. We…

JPCam is a 14-CCD mosaic camera, using the new e2v 9k-by-9k 10 μm-pixel 16-channel detectors, to be deployed on a dedicated 2.55 m wide-field telescope at the OAJ (Observatorio Astrofísico de Javalambre) in Aragon, Spain. The camera is designed to perform a Baryon Acoustic Oscillations (BAO) survey of the northern sky. The J-PAS survey strategy will use 54 relatively narrow-band (13.8 nm) filters equi-spaced between 370 and 920 nm plus 3 broad-band filters to achieve unprecedented photometric redshift accuracies for faint galaxies over 8000 square degrees of sky. The cryostat, detector…
Since none of the comments gave the concrete answer, I'll write it explicitly here in case anyone needs it (like I did). Firstly, unfortunately, the inverse of a band-limited matrix is a full (non-band-limited) matrix in general, so just filling out the entries of the inverse matrix would take $\Omega\left(n^2\right)$. So I'll assume you just want to solve ...

The paper "SimRank++: Query Rewriting through Link Analysis of the Click Graph" describes a weighted SimRank algorithm that is similar to my attempt. It uses this formula to update similarity scores: $s_{k+1}(a, b) = \text{evidence}(a, b) \cdot C \cdot \sum_{i \in I(a)} \sum_{j \in I(b)} s_{k}(i, j) \cdot w(i, a) \cdot w(j, b) \cdot \text{spread}(i) \cdot \...

The state-of-the-art for any MILP like you describe is complicated software like CPLEX (or some other expensive proprietary package). Such packages do take into account issues of sparse linear algebra, but this is a problem of numerical stability just as much as efficiency. See for example: (http://www.gurobi.com/resources/getting-started/lp-basics). Those ...

In low dimension, Seidel's algorithm can be useful: if we have the optimal solution to $m$ constraints in $d$ dimensions, and you add one more constraint, then the amortized cost to find the optimal solution to those $m+1$ constraints is $O(d!)$, assuming the constraints are presented in a random order. (It is usually presented in the following form: ...

In view of a rose tree being just a special case of a graph, the matrix representation you mentioned is simply the adjacency matrix representation of a directed graph. Your particular encoding takes the rose tree's edges to always be directed consistently, either from parent to child or from child to parent. Questions about rose trees in this encoding thus can be ...

What you have encountered is a very common convention. The sparsity assumption is probably in some algorithmic context: given a sparse matrix, here is an efficient algorithm that solves some problem. For example: Given a matrix with $O(n)$ non-zero elements, there is an algorithm to do X in $O(n^2)$. What this really means is: For every $C > 0$ ...

Okay, here is a full answer. We will use the fact that any bipartite graph of maximum degree $d$ can be broken into (at most) $d$ matchings. In our case, this means that we can split $A$ into (at most) $\sqrt{k}$ disjoint sets of elements $S_i\subseteq\{(i,j)\in[n]\times[n] \mid A_{i,j}=1\}$ of size at most $n$ such that every element in each set has ...

You can use bipartite matching for that (see the sketch after this answer list). Nodes corresponding to rows in one set, nodes corresponding to columns in the other set. A row has an edge to every column in which it has a 1. If the maximum matching has size n, every row is matched up with a corresponding column and the matching gives the order. The index of the column that a row is matched up ...

Option 2: Compare full linear index #define COMPARE(ia,ja,ib,jb) (ia*N + ja < ib*N + jb ? -1 : ia*N + ja > ib*N + jb) Instead of N, use the minimal power of 2 that is not less than N. For example, if N = 10000, use $2^{14}=16384$. So we can have #define COMPARE(ia,ja,ib,jb) ((ia << 14) + ja < (ib << 14) + jb ? -1 : (ia << 14) + ja > (ib << 14) + jb)

Often we use $0$ to represent the all-zeros matrix, so the instruction to set $\Omega = 0$ might mean to set it to the all-zeros matrix (with a zero in every entry). You'll have to figure out from context whether that seems correct.
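A minimal sketch of the bipartite-matching answer above, assuming NetworkX is available (the matrix is a made-up example): rows and columns become the two node sets, and a perfect matching gives the column permutation.

import networkx as nx
from networkx.algorithms import bipartite

A = [[0, 1, 0],   # can we reorder columns so the diagonal is all ones?
     [1, 0, 1],
     [0, 0, 1]]
n = len(A)
G = nx.Graph()
rows = [("r", i) for i in range(n)]
G.add_nodes_from(rows, bipartite=0)
G.add_nodes_from((("c", j) for j in range(n)), bipartite=1)
G.add_edges_from((("r", i), ("c", j))
                 for i in range(n) for j in range(n) if A[i][j])

matching = bipartite.maximum_matching(G, top_nodes=rows)
if all(("r", i) in matching for i in range(n)):   # matching has size n
    print("column order:", [matching[("r", i)][1] for i in range(n)])
else:
    print("no such permutation exists")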
A sparse matrix is mostly made of zeros. Using a 2-dimensional array for all elements is an inefficient way to represent such data, as more than half of the array will be zeroes, which is the reason for the increased time cost for finding the bandwidth in your case. In your case, if you have a matrix $M$ of size $n \times m$ then you'll be using a 2-dimensional ...

The question doesn't make any sense. It doesn't make sense to ask for the running time of a computation, unless you specify an algorithm for that computation. So how would you calculate this product? That's the first thing you need to do: specify an algorithm for how you calculate the product. The next step: assuming that many array elements are zero, can you ...
The numbers $\theta_i$ here are real, of course. Also, there is only one question really; upon replacing $\boldsymbol{\alpha}$ with $(\boldsymbol{\alpha} - t \boldsymbol{\theta})/s$, the second problem reduces to the case $s = 1$ and $t = 0$. That being said, there is nothing inherently ineffective in either of the two best-known proofs of Kronecker's approximation theorem: the diophantine-approximation one based on the Dirichlet approximation theorem with an inductive scheme on $n$, and the harmonic-analysis one based on Weyl's equidistribution criterion. Into either of these proofs you may input Liouville's lower bound on non-vanishing algebraic quantities, in the form $\mathrm{dist}(\mathbf{k} \cdot \boldsymbol{\theta}, \mathbb{Z}) \geq (2n \cdot \| \mathbf{k} \|_{\infty} \cdot H)^{-D^n}$ for non-zero $\mathbf{k} \in \mathbb{Z}^n$ and $\| \mathbf{k} \|_{\infty} := \max_{i=1}^n |k_i|$. Let me give some indication of how the latter route goes. It is Weyl's theorem that Erdős and Turán made completely explicit; their inequality is where you were referred to. You will however need the extension of the Erdős-Turán estimate to the higher-dimensional torus $\mathbb{T}^n = (\mathbb{R}/\mathbb{Z})^n$, which is due to Koksma. A good reference for this is Theorem 1.21 of the book Sequences, Discrepancies and Applications by Michael Drmota and Robert Tichy (LNM 1651, 1997). This bounds the discrepancy $D_N := \sup_{I \subset \mathbb{T}^n} \big| \#\{ \mathbf{x}_j \in I, \, j \leq N\} - N \cdot \mathrm{vol}(I) \big|$ of a sequence $\mathbf{x}_j \in \mathbb{T}^n$ by an explicit linear form of its Fourier coefficients: Theorem (Erdős, Turán, Koksma). For any sequence $\mathbf{x}_j \in \mathbb{T}^n$ and every $K \in \mathbb{N}$ it holds that $$D_N = D_N(\mathbf{x}) \leq N \cdot (3/2)^n \cdot \Big( \frac{2}{K+1} + \sum_{0 < \|\mathbf{k}\| \leq K} \frac{1}{r(\mathbf{k})} \Big| \frac{1}{N} \sum_{j=1}^N e( \mathbf{k} \cdot \mathbf{x}_j ) \Big| \Big),$$ with $e(z) := e^{2\pi i z}$ and $r(\mathbf{k}) := \prod_{i=1}^n \max (1,|k_i|)$. Apply this theorem taking $\mathbf{x}_j := j \, \boldsymbol{\theta}$. The exponential sums in this case are geometric series with quotients of the form $e( \mathbf{k} \cdot \boldsymbol{\theta} )$ and $\mathbf{k} \neq \mathbf{0}$. They are bounded in magnitude by $(2 \, \mathrm{dist}(\mathbf{k} \cdot \boldsymbol{\theta}, \mathbb{Z}))^{-1}$, which in turn is bounded by the Liouville diophantine estimate above. This is the same point as in the Pólya-Vinogradov inequality on character sums.
My near-future hope is to set up my own weblog where each new blog item is my write-up of notes from my math lectures. The purpose of this is twofold: I then need to be able to write mathematical expressions in my HTML using TeX syntax and have the expressions converted to images. TeX (father of LaTeX (father of Itex)) looks like this: \[ \sum_{n=1}^\infty \frac{1}{n} \text{ is divergent, but } \lim_{n \to \infty} \left( \sum_{i=1}^n \frac{1}{i} - \ln n \right) \text{ exists.}\] You can hopefully see the results here. This needs to be converted somehow. Now, what I've found is itex2MML, which looks promising. There's no double-clickable installer to get this up and running, but altogether this might help me learn more about Debian (which is what this server runs on). I've so far had a look at latex2html, but it doesn't work yet, there's a lot of management with image files, and the conversion is pretty slow. If I can't work it out with MathML, I'll give latex2html another bash. Some links for this little project:
Astrophysics > Astrophysics of Galaxies Title: Star Formation Stochasticity Measured from the Distribution of Burst Indicators (Submitted on 4 Jan 2019 (v1), last revised 27 Mar 2019 (this version, v2)) Abstract: One of the key questions in understanding the formation and evolution of galaxies is how starbursts affect the assembly of stellar populations in galaxies over time. We define a burst indicator ($\eta$), which compares a galaxy's star formation rates on short ($\sim10$ Myr) and long ($\sim100$ Myr) timescales. To estimate $\eta$, we apply the detailed time-luminosity relationship for H$\alpha$ and near-ultraviolet emission to simulated star formation histories (SFHs) from semi-analytic models and the Mufasa hydrodynamical cosmological simulations. The average of $\eta$ is not a good indicator of star formation stochasticity (burstiness); indeed, we show that this average should be close to zero unless the population has an average SFH which is rising or falling rapidly. Instead, the width of the $\eta$ distribution characterizes the burstiness of a galaxy population's recent star formation. We find this width to be robust to variations in stellar initial mass function and metallicity. We apply realistic noise and selection effects to the models to generate mock HST and JWST galaxy catalogs and compare these catalogs with 3D-HST observations of 956 galaxies at $0.65<z<1.5$ detected in H$\alpha$. Measurements of $\eta$ are unaffected by dust measurement errors under the assumption that $E(B-V)_\mathrm{stars}=0.44\,E(B-V)_\mathrm{gas}$ (i.e., $Q_\mathrm{sg}=0.44$). However, setting $Q_\mathrm{sg}=0.8^{+0.1}_{-0.2}$ removes an unexpected dependence of the average value of $\eta$ upon dust attenuation and stellar mass in the 3D-HST sample while also resolving disagreements in the distribution of star formation rates. However, even varying the dust law cannot resolve all discrepancies between the simulated and the observed galaxies. Submission history: From: Adam Broussard. [v1] Fri, 4 Jan 2019 16:23:20 GMT (1623kb,D) [v2] Wed, 27 Mar 2019 14:38:36 GMT (1623kb,D)
A slight variation on this approximation can be derived using the delta-method via the fact that the chi-squared distribution converges to the normal distribution. Although both are asymptotically valid, an alternative is mentioned by Fisher, without subtracting one from the part in the square root (i.e., as $\chi_{n,q}^2 \approx \tfrac{1}{2} (z_q + \sqrt{2n})^2$). Your approximation approaches the true critical point values from below, and the Fisher approximation approaches the true critical point values from above. The one you mention is more accurate in terms of the relative error, and it is probably derived as a variation from the one used by Fisher. Derivation: It is well-known that the chi-squared distribution converges to the normal distribution as $n \rightarrow \infty$. More specifically, if $\chi_{n}^2$ has a chi-squared distribution with $n$ degrees-of-freedom then we have the following convergence-in-distribution: $$\frac{\chi_{n}^2-n}{\sqrt{2n}} \overset{\text{Dist}}{\longrightarrow} \text{N}(0,1) \quad \quad \quad \text{as } n \rightarrow \infty.$$ (More generally, the gamma distribution converges to the normal as the shape approaches infinity.) Hence, with a simple change in the multiplying constant we have the corresponding result: $$\sqrt{2n} \Big( \frac{\chi_{n}^2}{n}-1 \Big) \overset{\text{Dist}}{\longrightarrow} \text{N}(0,4).$$ We now apply the delta-method using a continuously differentiable transformation $g$, to give us an alternative asymptotic result: $$\sqrt{2n} \Big( g \Big( \frac{\chi_{n}^2}{n} \Big) - g(1) \Big) \overset{\text{Dist}}{\longrightarrow} \text{N} \Big( 0, 4 g'(1)^2 \Big).$$ Using the transformation $g(\theta) = \sqrt{\theta}$ gives us $g'(\theta)^2 = 1/4\theta$, which at $\theta = 1$ yields the alternative result: $$\sqrt{2n} \Big( \sqrt{\frac{\chi_{n}^2}{n}} - 1 \Big) \overset{\text{Dist}}{\longrightarrow} \text{N}(0, 1).$$ Hence, for large $n$ we have: $$\sqrt{2 \chi_{n}^2} -\sqrt{2 n} \overset{\text{Approx}}{\sim} \text{N}(0, 1).$$ (This approximation is mentioned in Fisher (1934) (p. 62), though he does not show the derivation.) Now, from the definition of the quantile you have: $$\begin{equation} \begin{aligned}q &\approx \mathbb{P}( \sqrt{2 \chi_{n}^2} -\sqrt{2 n} \geqslant z_q) \\[8pt]&= \mathbb{P}( \sqrt{2 \chi_{n}^2} \geqslant z_q + \sqrt{2 n}) \\[8pt]&= \mathbb{P}( 2 \chi_{n}^2 \geqslant (z_q + \sqrt{2 n})^2) \\[8pt]&= \mathbb{P} \Big( \chi_{n}^2 \geqslant \frac{1}{2} (z_q + \sqrt{2 n})^2 \Big). \\[8pt]\end{aligned} \end{equation}$$ Hence, we have the approximate quantile: $$\chi_{n,q}^2 \approx \frac{1}{2} (z_q + \sqrt{2 n})^2.$$ Your variation can be derived from the asymptotic result $\sqrt{2 \chi_{n}^2} -\sqrt{2 n-1} \overset{\text{Approx}}{\sim} \text{N}(0, 1)$, which is also asymptotically valid, since the effect of the minus-one vanishes as $n \rightarrow \infty$. By an analogous derivation, this alternative asymptotic form gives: $$\chi_{n,q}^2 \approx \frac{1}{2} (z_q + \sqrt{2 n-1})^2.$$ Remark: The approximation used by Fisher differs slightly from the approximation in your question, since it does not subtract one in the part in the square root. Both are valid asymptotically, since the effect of subtracting one vanishes as $n \rightarrow \infty$. It is worth noting that the approximation in your question is more accurate than the one used by Fisher, in terms of relative error.
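As a quick numerical sanity check of the two approximations (my own sketch, assuming SciPy is available; not part of the original answer):

```python
# Compare the exact upper-q chi-squared quantile against the two
# normal-based approximations discussed above.
from scipy.stats import chi2, norm

n, q = 50, 0.05                                   # degrees of freedom, upper-tail prob.
z_q = norm.ppf(1 - q)                             # upper-q standard-normal quantile
exact = chi2.ppf(1 - q, n)                        # exact upper-q quantile
fisher = 0.5 * (z_q + (2 * n) ** 0.5) ** 2        # form with sqrt(2n)
variant = 0.5 * (z_q + (2 * n - 1) ** 0.5) ** 2   # form with sqrt(2n - 1)

print(exact, fisher, variant)                     # exact lies between the two
```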
Given two undirected networks with the same number of nodes defined by $A_{ij}^1$ and $A_{ij}^2$, we take the unique unordered pairs $(i, j)$. After sorting them, we then have two binary vectors $\vec{A}^1 = (A_{12}^1, A_{13}^1, ...)$ and $\vec{A}^2 = (A_{12}^2, A_{13}^2, ...)$. For example, if an edge between nodes 1 and 2 is present in both networks, we have that $\vec{A}^1_1 = 1$ and $\vec{A}^2_1 = 1$. In other words, the two ordered vectors code the simultaneous presence $(00, 01, 10, 11)$ of the same edge $(i,j)$ in networks 1 and 2. The Jaccard similarity between $\vec{A}^1$ and $\vec{A}^2$ is then defined as $$ J(\vec{A}^1, \vec{A}^2) = \frac{\sum_i \vec{A}^1_i \wedge \vec{A}^2_i}{\sum_i \vec{A}^1_i \vee \vec{A}^2_i}.$$ Question 1: Quoting from Bargigli et al. (2013), The Jaccard similarity [between $\vec{A}^1$ and $\vec{A}^2$] can be defined as the probability of observing a link in a network conditional on the observation of the same link in the other network. I don't understand why, given $ij$, $$P(A_{ij}^2 = 1 | A_{ij}^1 = 1) = \frac{P(A_{ij}^1 = 1, A_{ij}^2 = 1)}{P(A_{ij}^1 = 1, A_{ij}^2 = 0) + P(A_{ij}^1 = 1, A_{ij}^2 = 1)}$$ amounts to that. The numerator and denominator hint at the definition of $J$ but I'm unsure. I think $ij$ also needs to be marginalized out and some symmetry exploited, but I'm stuck. Also, do we need to assume independence between the networks? Would any of this change for a directed network, i.e. if $A_{ij}^k$ need not be symmetric? Question 2: The Jaccard distance $d = 1 - J$ is a proper metric. When using it for hierarchical clustering with UPGMA linkage, the distance between two clusters $A$ and $B$ is defined as$$ d(A,B) = \frac{1}{|A||B|} \sum_{a \in A, b \in B} d(a, b) ,$$which, if question 1 is resolved, could be written as$$ d(A,B) = \frac{1}{|A||B|} \sum_{a \in A, b \in B} (1 - P(\text{link exists in $b$ given that it exists in $a$})).$$Can we deduce some interpretation from this in terms of the probability of observing edges in the networks in clusters $A$ and $B$? I'd be very grateful for any clue.
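For concreteness, a small sketch of the Jaccard computation described above (the function name and the toy matrices are my own, not from the question):

```python
# Jaccard similarity between two undirected networks given as 0/1
# adjacency matrices, using only the unique unordered pairs (i, j), i < j.
import numpy as np

def jaccard_similarity(A1, A2):
    iu = np.triu_indices(A1.shape[0], k=1)        # upper triangle = unordered pairs
    v1, v2 = A1[iu].astype(bool), A2[iu].astype(bool)
    return (v1 & v2).sum() / (v1 | v2).sum()      # |intersection| / |union|

A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # edges (1,2), (1,3)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # edges (1,2), (2,3)
print(jaccard_similarity(A1, A2))                 # 1 shared edge / 3 in union = 1/3
```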
$S^1_2$ is a theory of bounded arithmetic, that is, a weak axiomatic theory obtained by severely restricting the schema of induction of Peano arithmetic. It is one of the theories defined by Sam Buss in his thesis; other general references include Chapter V of Hájek and Pudlák’s Metamathematics of first-order arithmetic, Krajíček’s “Bounded arithmetic, ... If SAT had a subexponential-time algorithm, then you would disprove the exponential time hypothesis. For fun consequences: if you showed that circuit SAT over AND, OR, NOT with $n$ variables and $poly(n)$ circuit gates can be solved faster than the trivial $2^n poly(n)$ approach, then by Ryan Williams' paper you show that $NEXP \not\subseteq P/poly$. The most natural restriction on the proof DAG is that it be a tree – that is, any "lemma" (intermediate conclusion) is not used more than once. This property is called being "tree-like". General resolution is exponentially more powerful than tree-like resolution, as shown for example by Ben-Sasson, Impagliazzo and Wigderson. The concept has also been ... The basic sum-of-squares proof system, introduced under the name of Positivstellensatz refutations by Grigoriev and Vorobjov, is a “static” proof system for showing that a set of polynomial equations and inequations$$S=\{f_1=0,\dots,f_k=0,h_1\ge0,\dots,h_m\ge0\},$$where $f_1,\dots,f_k,h_1,\dots,h_m\in\mathbb R[x_1,\dots,x_n]$, has no common solution in $\... Here's a somewhat relevant example we are currently covering in my class. The "storage access function" is defined on $2^k+k$ bits as $SA(x_1,...,x_{2^k}, a_1,...,a_k) = x_{bin(a_1 \cdots a_k)}$, where $bin(a_1 \cdots a_k)$ is the unique integer in $\{1,\ldots,2^k\}$ corresponding to the string $a_1 \cdots a_k$. $SA$ has formulas of size about $O(k \... In one of my blog posts, I mentioned four problems (Factoring, Parity Games, Stochastic Games, A Lattice Problem) that are known to be in $NP \cap coNP$ but not known to be in $P$. Parity Games and Stochastic Games can be considered as "graph problems". Also, the Two Bicliques Problem is in $NP \cap coNP$. This is a natural graph problem that is not known ... On satisfiable instances of $PHP$, DPLL-based SAT solvers will furnish a satisfying assignment in linear time. To see why, observe how the CNF encoding of an unsatisfiable instance of $PHP$ with $n$ holes and $n + 1$ pigeons is syntactically identical to an instance of $k = n$ Graph Coloring, where the input graph is a clique of $n + 1$ vertices.... This is the same idea as Andrej's answer but with more details. Krajíček and Pudlák [LNCS 960, 1995, pp. 210-220] have shown that if $P(x)$ is a $\Sigma^b_1$-property that defines primes in the standard model and $$S^1_2 \vdash \lnot P(x) \to (\exists y_1,y_2)(1 < y_1, y_2 < x \land x = y_1y_2)$$ then there is a polynomial time factoring algorithm. ... The following example comes from the paper which gives a combinatorial characterization of resolution width by Atserias and Dalmau (Journal, ECCC, author's copy). Theorem 2 of the paper states that, given a CNF formula $F$, resolution refutations of width at most $k$ for $F$ are equivalent to winning strategies for Spoiler in the existential $(k+1)$-pebble ... 1) The only non-structural rule is resolution (on atoms): $$ \varphi\lor C, \psi\lor \overline{C} \over \varphi\lor \psi$$ However, a rule by itself doesn't give a proof system. See part 3. 2) Think about it this way: is Gentzen's sequent calculus PK complete if we are using some other set of connectives in place of $\{\land, \lor, \lnot\}$?
The logical ... This might be a slight reach, but the idea of XOR'ing a bunch of things to make a task "harder" shows up in cryptography. It first appeared in the guise of Yao's XOR lemma. If $X$ is a slightly unpredictable random variable, then $Y = X_1 \oplus X_2 \oplus \cdots \oplus X_k$ is extremely unpredictable if $k$ is large enough, where the $X_i$'s are independent ... 1, 2, 4) The best known lower bounds on extended Frege are the same as for Frege: a linear number of lines, and quadratic size. This applies e.g. to the tautologies $\neg^{2n}\top$ (basically, any tautology that is not a substitution instance of a shorter tautology, and whose sum of lengths of all subformulas is quadratic). This is proved in Krajíček’s Bounded ... It depends on what kind of a "beginner" level you wish to have. I don't think there is a really good undergraduate-level text on proof complexity (this is probably true for most specialized sub-areas in complexity). But for beginner (graduate-level) sources, I would recommend something like understanding well the basic exponential size lower bound on ... Clarification on terminology: the theorem proving community does not use the terms supervised and unsupervised. They use the terms interactive theorem proving or automated theorem proving. If you give a conjecture to the prover and it always comes back with a yes or no answer, such a prover is a decision procedure for a logical theory, or simply called a ... SOS can be considered as a proof system where lines are of the form $p(\vec{x}) \geq 0$, where $p(\vec{x})$ is a polynomial in the variables $\vec{x}$. The inference rules are: the axioms $x^2-x \geq 0$, $x-x^2 \geq 0$ and $p(\vec{x})^2\geq 0$; from $p(\vec{x}) \geq 0$ derive $p(\vec{x})\,x \geq 0$; from $p(\vec{x}) \geq 0$ derive $p(\vec{x})(1-x) \geq 0$; and from $p_1(\vec{x}) \geq 0, \... What proof system is being considered when discussing resolution? Is it just the resolution rule? What are the other rules? I discuss resolution in the context of "clauses", which are sequents made up of only literals. A classical clause would look like $$A_1,\ldots,A_n \to B_1,\ldots,B_m$$ But we can also write it as $${} \to \bar{A}_1,\ldots,\bar{A}_n, ... This example is a bit lower in the hierarchy than what Kaveh asks for, but it is an open problem whether the soundness of the uniform $\mathrm{TC}^0$ algorithms for integer division and iterated multiplication by Hesse, Allender, and Barrington can be proved in the corresponding theory $\mathit{VTC}^0$. The argument is pretty elementary, and there should be ... It sounds like you are interested in all-different constraints (and your last sentence is on the right track). These are non-trivial instances of the pigeonhole principle, where the number of pigeons is not necessarily greater than the number of holes, and in addition some pigeons may be barred from some of the holes. All-different constraints can be ... How about the edge coloring number in a dense graph (aka the chromatic index)? You are given the adjacency matrix of an $n$-vertex graph (an $n^2$-bit input), but the natural witness describing the coloring has size $n^2\log n$. Of course, there might be shorter proofs for class 1 graphs in Vizing's theorem. See also this possibly related question. The AKS primality test seems like a good candidate if Wikipedia is to be believed. However, I would expect such an example to be hard to find.
Existing proofs are going to be phrased so that they are obviously not done in bounded arithmetic, but they will likely be "adaptable" to bounded arithmetic with more or less effort (usually more). Natural examples of propositional proof systems that do not fall under this definition are algebraic proof systems where the lines in the proof are arbitrary polynomials (not necessarily fully expanded). To verify the correctness of such proofs, among other things one has to test the identity of polynomials, which is not known to be possible in deterministic ... It is over two years since this question was asked, but in that time, there have been more papers published about algorithms for computing Craig interpolants. This is a very active research area and it is not feasible to give a comprehensive list here. I have chosen articles rather arbitrarily below. I would suggest following articles that reference them and ... Pavel Pudlák has recently shown an exponential lower bound for Resolution refutations of the formulas derived from the Ramsey theorem $n\to (k)^2_2$ for $k=\lfloor \frac12 \log n \rfloor$. These formulas have clauses $\bigvee_{i,j\in K} x_{i,j}$ and $\bigvee_{i,j\in K} \neg x_{i,j}$ for every subset $K\subseteq \{ 1 , \ldots , n \}$ of size $|K| = k$; they ... Another hard example for resolution is the mutilated chessboard formulas. They state that a $2n \times 2n$ chessboard with two diagonally opposite corners missing cannot be covered with $2\times 1$ tiles. See: Michael Alekhnovich. Mutilated chessboard problem is exponentially hard for resolution. Theoretical Computer Science 310(1-3): 513-525 (2004). http:... Your definition of $D$ is not clear; if it is $D = \max_f (R(f) - ER(f))$, then it is exponential. There are DNFs whose shortest proofs in ER are exponentially shorter than their shortest proofs in R, e.g. PHP (the pigeonhole principle) has polynomial-size ER-proofs but only exponential-size R-proofs. Unsatisfiability of $f$ is the same as $\lnot f$ being a ... Think of $\Sigma^{*}$ as encoding some sort of objects, and $Q$ as the set of all objects satisfying some property. Think of $P$ as a function which accepts (the encoding of) a pair $(x, p)$ where $x$ is an object and $p$ is alleged "evidence" of $x \in Q$. The function $P$ is a "proof checker": it verifies that $p$ actually represents valid evidence that $x \... I came across some quite natural NP-complete problems that seemingly require long witnesses. The problems, parameterized by integers $C$ and $D$, are as follows: Input: A one-tape TM $M$. Question: Is there some $n\in\mathbb{N}$ such that $M$ makes more than $Cn+D$ steps on some input of length $n$? Sometimes the complement of the problem is easier to state: ... Müller and Szeider study Resolution proofs where the proof DAG has bounded tree-width or bounded path-width (for suitable extensions of these graph complexity measures to directed graphs). They show that the path-width of the DAG is essentially the same as the space complexity of the proof, and define a generalized notion of proof space which is equivalent ... The third approach mentioned in the linked paper refers to the framework of bounded applicative theories, which are subsystems of Feferman's theory of Explicit Mathematics just as Bounded Arithmetic theories are subsystems of Peano Arithmetic (or higher order extensions of it). These theories contain full lambda calculus (or combinatory logic rather) and ... Let $m$ be the number of pigeons and $n$ be the number of holes.
Let the propositional variables $B_{i,0}$ ... $B_{i,\log(n)}$ encode the binary representation of $j-1$ if the $i$th pigeon is put into the $j$th hole. (For example, if pigeon 1 were placed in hole 10, then $j - 1 = 9$, which is 1001 in binary. So $B_{1,3}$ = true, $B_{1,2}$ = false, $B_{1,1}$ = false and ...
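A tiny sketch of this binary encoding (my own illustration, with 1-indexed holes as in the text):

```python
# Binary hole encoding: pigeon i in hole j is represented by the bits of j - 1,
# i.e. B_{i,k} is the k-th bit of j - 1.
def hole_bits(j, n_bits):
    # Returns {bit index k: truth value of B_{i,k}} for hole j.
    return {k: bool((j - 1) >> k & 1) for k in range(n_bits)}

print(hole_bits(10, 4))   # {0: True, 1: False, 2: False, 3: True}, i.e. 1001
```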
Inferring posterior forms. Inferring posterior forms for a Gaussian distribution can get really tricky with so many terms. There is a simple trick, however, that can save a lot of time; this post presents that trick to save a lot of calculation. It specifically addresses the following topics: estimating the parameters of the Gaussian posterior in Bayesian inference, and finding the maximum likelihood parameters of a multivariate Gaussian. Let \({\bf X}\) be a random vector that follows a multivariate Gaussian distribution, where \({\bf \mu} = (\mu_1, \dots, \mu_D)^T\) is the mean vector and \({\bf \Sigma}\) is the covariance matrix. If \({\bf x}\) is an instantiation of the above random variable, expanding the exponent term gives a quadratic expression plus a \(\text{constant}\), where \(\text{constant}\) collects the terms that do not depend on \(\bf x\). All we care about in the exponent are the coefficient of the quadratic term and the linear term. Bayesian inference for the Gaussian. Let us confine ourselves to the univariate Gaussian distribution and divide the derivation into three cases. Case 1: \(\sigma^2\) is known. We shall find the parameter \(\mu\) given \(N\) independent observations \({\bf X} = \{x_1, x_2, \dots, x_N\}\), for which the likelihood follows. Choosing the prior such that prior and posterior form a conjugate pair, the posterior is proportional to a product of Gaussians in \(\mu\). Looking at the sum in the exponent of these terms, it is clear that it is of quadratic form, confirming that the posterior is a Gaussian. Let the posterior Gaussian be represented by \(\mathcal{N}(\mu \mid \mu_N, \sigma_N^2)\). To find the parameters of the Gaussian, we look for the quadratic and linear terms in the exponent for \(\mu\): comparing the quadratic coefficients gives \(\sigma_N^2\), and comparing the linear terms then gives \(\mu_N\). Case 2: \(\mu\) is known. In this case, we will find the parameter \(\lambda \equiv \sigma^{-2}\) given \(N\) independent observations \({\bf X} = \{x_1, x_2, \dots, x_N\}\), for which the likelihood follows. To form the conjugate pair, choose the prior such that it is proportional to \(\lambda^a \exp(\alpha \lambda)\), which is a gamma distribution. Choosing this gamma distribution with parameters \(a_0, b_0\) as the prior gives a posterior which is again a Gamma distribution, say \(\text{Gamma}(\lambda \mid a_N, b_N)\), and the parameters follow by comparing with the standard Gamma distribution. Case 3: both \(\mu\) and \(\sigma^2\) are unknown. Again using \(\lambda \equiv \sigma^{-2}\) for \(N\) independent observations \({\bf X} = \{x_1, x_2, \dots, x_N\}\), the likelihood follows, and the prior is chosen similarly to before, where \(c, d, \beta \) are constants. The posterior then takes the form of a product of a Gaussian and a Gamma, i.e. \(\mathcal{N}(\mu \mid \mu_N, \lambda_N)\, \text{Gamma}(\lambda \mid a_N, b_N)\), with parameters obtained by the same matching of coefficients. Maximum likelihood parameters for a multivariate Gaussian. Let \({\bf X} = \{x_1, x_2, \dots, x_N\}\) be a set of independent observations, with the likelihood function given by the product of the densities. We estimate the parameters of the above density model by first taking the log likelihood. Estimating the parameter \(\bf \mu\): differentiate the log likelihood with respect to \({\bf \mu}\) and set it to zero. Estimating the parameter \(\bf \Sigma\): differentiate with respect to \({\bf \Sigma}\), equate to zero, and solve.
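As a small code sketch of Case 1 (my own, using the standard known-variance conjugate-update formulas; the variable names and the prior \(\mathcal{N}(\mu_0, \sigma_0^2)\) are assumptions, since the post's displayed equations did not survive):

```python
# Posterior N(mu_N, sigma_N^2) for mu with sigma^2 known and prior
# mu ~ N(mu_0, sigma_0^2): the standard conjugate update.
import numpy as np

def posterior_mu(x, sigma2, mu0, sigma02):
    n, xbar = len(x), float(np.mean(x))
    sigma_N2 = 1.0 / (1.0 / sigma02 + n / sigma2)           # match quadratic term
    mu_N = sigma_N2 * (mu0 / sigma02 + n * xbar / sigma2)   # match linear term
    return mu_N, sigma_N2

x = np.random.default_rng(0).normal(2.0, 1.0, size=50)
print(posterior_mu(x, sigma2=1.0, mu0=0.0, sigma02=10.0))  # posterior mean near 2
```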
Let $(\Omega,\mathcal{A})$ be a measurable space, $E$ be a Polish space, $\mathcal{E}$ be the Borel $\sigma$-algebra on $E$, $I\subseteq\mathbb{R}$, and $X_t$ be measurable with respect to $\mathcal{A}$-$\mathcal{E}$ for all $t\in I$. Given all these beautiful objects, I would like to call $(X_t,t\in I)$ a stochastic process on $(\Omega,\mathcal{A})$ with time domain $I$ and values in $(E,\mathcal{E})$. However, most people first introduce a probability measure $\operatorname{P}$ and then say that $(X_t,t\in I)$ is a stochastic process on $(\Omega,\mathcal{A},\operatorname{P})$ with time domain $I$ and values in $(E,\mathcal{E})$. Why do we need $\operatorname{P}$ at this point? The only thing I can imagine is the following: When we talk about stochastic processes, we often talk about properties that hold almost surely (more exactly: $\operatorname{P}$-almost surely!), e.g. continuity properties of the path $$I\to E\;,\;\;\;t\mapsto X_t(\omega)$$ for $\omega\in\Omega$. So, am I right? Is this the reason why most definitions of stochastic processes include the probability measure $\operatorname{P}$?
This is not a Poisson distribution, but is related. It is a rather interesting distribution, but let me not disclose it at the moment. First note that, up to the multiple $n^n$, this is the probability generating function for the sum of $n$ iid random variables, which are uniformly distributed on the set $\{1,2,4,\dots,2^{n-1}\}$. Dividing by $2^{n-1}$, we get the sum of $n$ iid random variables $\xi_{1,n},\dots,\xi_{n,n}$, uniformly distributed on the set $\{1,2^{-1},2^{-2},\dots,2^{1-n}\}$. Then the characteristic function of the sum $S_n = \xi_{1,n}+\dots+\xi_{n,n}$ is$$\varphi_{S_n}(t) = \varphi_{\xi_{1,n}}(t)^n = \left(\frac1n \sum_{k=0}^{n-1}e^{it2^{-k}}\right)^n = \left(1+ \frac1n \sum_{k=0}^{n-1}\big(e^{it2^{-k}}-1\big)\right)^n\\\to \exp\left\{\sum_{k=0}^{\infty}\big(e^{it2^{-k}}-1\big)\right\} = \prod_{k=0}^{\infty}\exp\left\{e^{it2^{-k}}-1\right\},\quad n\to\infty.$$Therefore, $$S_n\overset{w}{\longrightarrow} S = \sum_{k=0}^\infty \frac{\zeta_k}{2^k},\quad n\to\infty,$$where $\zeta_k$ are iid with Poisson(1) distribution. Let us study the limit distribution $S$. As @A.S. wrote, its $k$-th cumulant is $(1-2^{-k})^{-1}$. Further, let $0.\alpha_1\alpha_2\alpha_3\dots$ be the binary expansion of the fractional part of $S$. Then the sequence $\{\alpha_n,n\ge 1\}$ of digits is stationary (in fact, this is the case for any random variable of the form $\sum_{k=0}^\infty \zeta_k 2^{-k}$ with iid integer-valued variables $\zeta_k$). It is possible to "identify" the marginal distribution of digits, that is, the probability $p = P(\alpha_1 = 1)$. Define for $k\ge 0$$$E_k = 1 - 2e^{-1}\sum_{n=0}^\infty \frac{1}{(2^k (2n+1))!}.$$Then $$p = \frac12\left(1-\prod_{k=0}^\infty E_k\right).$$If $p\neq 1/2$, which seems to be the case, then the distribution of $S$ is singular. (In the unlikely case that $p=1/2$, it is still singular since the digits $\alpha_n$ are dependent.) It is also continuous, but at the moment I don't see a simple way to prove this.
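A quick simulation sketch (my own check, not part of the answer) of the limit variable $S$ and the digit probability $p$:

```python
# Sample S = sum_k zeta_k / 2^k with zeta_k iid Poisson(1), truncated at K
# terms, and estimate p = P(alpha_1 = 1), the probability that the first
# fractional binary digit of S is 1 (i.e. frac(S) >= 1/2).
import numpy as np

rng = np.random.default_rng(1)
K, N = 60, 100_000
zeta = rng.poisson(1.0, size=(N, K))
S = (zeta * 2.0 ** -np.arange(K)).sum(axis=1)
frac = S - np.floor(S)
p_hat = np.mean(frac >= 0.5)       # alpha_1 = 1 iff frac(S) >= 1/2
print(p_hat)                       # empirical estimate of p
```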
"Very", "extra", etc. are adverbs that modify "long" (and each other), and can't directly modify "boat". They can only be used directly as adjectives in phrases like "This collision makes the object I want, plus an extra boat" or "That glider eliminated the very boat that was causing problems!"

mniemiec wrote: Small Life patterns were originally assigned arbitrary mnemonic names, which were adequately unique and descriptive for small patterns, but that nomenclature gets more and more strained the larger and more complicated the patterns get. It also becomes more and more futile as patterns get larger, as there are exponentially more of them at each given size. In Life's early days, there were unique names for all still-lifes up to 7 bits, about half of the 8-bit ones, and only one 9-bit one. In my pattern collections, I've tried to extrapolate meaningful names for small lists of objects (up to around a hundred objects or so), with mixed results, but after a point, using long chains of adjectives to describe a feature becomes tedious. Contractions like "long^5 boat" or "15-bit boat" or "length-10 snake" make much more sense after a point.

I would prefer "long^5 boat" to "very very very very long boat", personally. EDIT: Also, Catagolue puts this in PATHOLOGICAL:

Code: Select all x = 4, y = 8, rule = B3/S2-i34q2bo$bobo2$b3o3$bo$3o!

77topaz wrote: That's a good point. Who decided that the LifeWiki should get rid of the long^5 format names and replace them with the less adaptable adjectival names like "terribly long boat", anyway?

Well, guess who... yup, you're right.

I remember a discussion before that, though. In any case I am not a fan; I think that the exponent notation is far more concise. Can you offhand remember which one is called 'amazingly long boat'? Guess what, it's none of them, but nobody reading this knew that.
On the contrary, "amazingly long boat" is long^37, if you follow the footnotes for Long on the LifeWiki... which I definitely hope nobody does, since that thread has all the hallmarks of a "naming frenzy". People inevitably seem to get into naming frenzies every so often, and then hopefully come to regret them later. Me, I was just really happy to be able to go in and delete all the over-long pattern names in the LifeWiki collection that were messing up columnar lists, like Very_very_very_very_very_very_very_long_boat... so I didn't worry too much about whether the replacement names were really a good idea. They were a huge improvement if nothing else. I believe muzik added those very very very long names also, but it was some time before the most recent cleanup project. And just by the way, that 12-bit still life project was a whole heck of a lot of work on muzik's part, and it did definitely succeed in cleaning up a lot of things... though sometimes just by drawing attention to the problems, so Ian07's follow-up work also definitely deserves a good round of applause.

mniemiec wrote: I remember from the very old days, there were a small number of patterns that had qualifier names (e.g. long boat, long snake=python) but there was no real consistent nomenclature beyond that. Other than a few notable still-lifes (like paperclip), there generally wasn't even a standard nomenclature for still-lifes above 8 bits. When I started to systematically categorize pseudo-objects, it was much easier to use symbolic names (rather than empirical ones like 14.123 or 14P1.123), so I needed a way to concisely name the pieces involved. This meant coming up with ad-hoc names for objects up to 12 bits. I knew that the system would not hold up well much beyond that point, but it was never intended to. Also, as history tends to show, whenever anyone comes up with a name (regardless of how inappropriate it might turn out to be), with the lack of any better nomenclature, that name tends to stick.

*cough*jolson*cough*runnynose*cough*

77topaz wrote: I think "long^3 boat" would be the best naming scheme/format for these pages. What do others think?

Hard agree. Another somewhat controversial opinion I have is that everything above a certain number doesn't really deserve a page, but let's go one step at a time.

Fine by me.
Along with this, the LifeWiki needs standard pnames -- lowercase alphanumeric-only names for the RLE and plaintext pattern files. The one good thing about the Arbitrary Adjective naming system was that it avoided the whole '"long<sup>10</sup> boat" / "Long%5E10_boat"' mess and produced decent-looking pnames. I think "long3boat" is fine for a pname, though. I think the only Arbitrary Adjectival pname that I'm personally responsible for is "abominably long boat", which I pretty much made up in desperation to get rid of an even more abominable "very very very very..." name. I've now removed abominablylongboat.cells and abominablylongboat.rle from the server, uploaded long10boat.cells and long10boat.rle instead, and fixed the pname in the article. If other pnames can be patched up to be consistent with this, and if someone can keep a list of all the RLE:arbitraryadjectivelongsomething pages that get moved to RLE:veryNlongsomething, then I can go through at some point and delete all the arbitraryadjectivelongsomething.cells/.rle pattern files from the server, and the auto-upload script will take care of the rest. Maybe put the list, and any further discussion on this topic, on the Tiki Bar or someone's LifeWiki user page?

I feel that the arbitrary adjective name should be kept as an alternative name, as in: Long^103 doorjamb (or uselessly long doorjamb) is the long^103 equivalent of the doorjamb.

Minor, but prodigal is lowercase on Catagolue.

Sometimes when I tried to view not-yet-searched censuses, it says "No one has investigated this, investigate it yourself". But sometimes I just get an empty symmetry list.
As far as I can tell, the "It appears that no-one has yet investigated this combination of rule and symmetry options.™" message only appears if you also specify a symmetry. If you don't specify a symmetry (only a rule), you will get said empty symmetry list.

Here's an oddity: https://catagolue.appspot.com/object/xs ... jns23-ckqy It seems to use a different LifeViewer theme... the “inverse” theme. EDIT: Holy cow, it's EVERYWHERE on Catagolue! EDIT: Oh.

Hdjensofjfnen wrote: This: https://catagolue.appspot.com/object/xp ... v0rr/b3s23

What's odd about that? It's a perfectly legitimate P2 oscillator.

But it is called ":D".

Figure eight on pentadecathlon does not appear in the large objects section of the statistics page despite having a maximum population of 66. I'm guessing this is because its period is too high for Catagolue to calculate it.

It's there now, perhaps the page hadn't been updated yet?

I still don't see it. Are you sure you're looking at the "Large objects" section rather than the "Naturally-occurring high-period oscillators" section?

Don't mind me, can't read properly. Sorry.

I'm confirming this odd bug. Maybe Catagolue just needs some time for Adam P. Goucher to add xp120 to the list of objects that Catagolue considers "large". (e.g. the count doesn't consider still-life bins below cloverleaf interchange, since that would waste time)
EDIT: This looks fishy. Very fishy. http://catagolue.appspot.com/census/b345s4567/iC1 EDIT: By the way, the extremely high period of the new xp120 makes it the first object other than linear-growth patterns to break the preview.

I'm fairly certain this explanation is correct - Catagolue only displays population statistics (or, @Hdjen, animated GIFs) for objects of period <100 (or sometimes ≤100 - I think the threshold isn't always consistent across functions). So, the system doesn't know the xp120 belongs in that section.
I have separately computed the cepstrum and the energy of a finite discrete-time signal. If the energy of the signal is high, we would expect (on average) larger cepstral values than if the energy of the signal is low. I am therefore interested in normalizing the cepstral values by the energy in some fashion to adjust for this, because I want to be able to compare the cepstral values from different signals in an energy-invariant manner. However, while I know the units of the signal's energy $(\text{volts}^2\cdot\text{seconds})$, I have no idea what the units of the cepstral values are, and therefore do not know how to transform them in order to ensure that the normalization is performed appropriately. What would be the most unit-consistent way to perform this normalization? Never mind, I figured it out. For my purposes, all I really want is the cepstrum of the energy-normalized signal, which after working through the algebra is equivalent to adding a scaled delta function to the original cepstrum. (See below.) For a real, discrete, finite-time signal $\vec{s}$, we can compute its energy as $E(\vec{s})=\vec{s}\cdot\vec{s}$. To energy-normalize the signal (i.e. ensure that $\text{energy}=1$), we multiply $\vec{s}$ by the scalar $E(\vec{s})^{-1/2}$. The cepstrum of the signal $\vec{s}$ is defined as $\text{cep}(\vec{s}) = \mathfrak{F}^{-1}\big(\text{log}\big(\big|\mathfrak{F}(\vec{s})\big|^2\big)\big)$. Therefore the cepstrum of the energy-normalized signal can be computed as: $\text{cep}\big(E(\vec{s})^{-1/2}\cdot\vec{s}\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(\big|\mathfrak{F}\big(E(\vec{s})^{-1/2}\cdot\vec{s}\big)\big|^2\big)\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(\big|E(\vec{s})^{-1/2}\cdot\mathfrak{F}(\vec{s})\big|^2\big)\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(E(\vec{s})^{-1}\cdot\big|\mathfrak{F}(\vec{s})\big|^2\big)\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(E(\vec{s})^{-1}\big)\cdot\mathbf{1} + \text{log}\big(\big|\mathfrak{F}(\vec{s})\big|^2\big)\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(\big|\mathfrak{F}(\vec{s})\big|^2\big) - \text{log}\big(E(\vec{s})\big)\cdot\mathbf{1}\big)\\ = \mathfrak{F}^{-1}\big(\text{log}\big(\big|\mathfrak{F}(\vec{s})\big|^2\big)\big) - \mathfrak{F}^{-1}\big(\text{log}\big(E(\vec{s})\big)\cdot\mathbf{1}\big)\\ = \text{cep}(\vec{s}) - \big[\text{log}\big(E(\vec{s})\big)\;\; 0\;\; 0\;\; \dots\;\; 0\big]$ Where $\mathbf{1}$ denotes the vector of all 1's.
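A quick numerical check of this identity (my own sketch; NumPy's FFT conventions assumed):

```python
# Verify: cep(s / sqrt(E)) == cep(s) - [log E, 0, 0, ..., 0].
import numpy as np

def cep(s):
    # Real cepstrum of a real signal; ifft of the symmetric log-spectrum is real.
    return np.fft.ifft(np.log(np.abs(np.fft.fft(s)) ** 2)).real

rng = np.random.default_rng(0)
s = rng.normal(size=64)
E = np.dot(s, s)                    # signal energy

delta = np.zeros(64)
delta[0] = np.log(E)                # log(E) times a delta at index 0

print(np.allclose(cep(s / np.sqrt(E)), cep(s) - delta))   # True
```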
$t\cdot \delta (t)$ — does it equal zero? If so, how can we prove it? This answer is supposed to provide some more insight, even though I use "engineering math" that might not pass muster over at math.SE. One defining property of the Dirac delta impulse is $$\int_{-\infty}^{\infty}\delta(t)f(t)dt=f(0)\tag{1}$$ if $f(t)$ is continuous at $t=0$. Note that the Dirac delta impulse is not an ordinary function but a generalized function or distribution. For that reason it is only meaningful under an integral. The product of a distribution (such as $\delta(t)$) and an ordinary function $f(t)$ is also a distribution, and it is defined by $$\int_{-\infty}^{\infty}[\delta(t)f(t)]\phi(t)dt=\int_{-\infty}^{\infty}\delta(t)[f(t)\phi(t)]dt\tag{2}$$ where $\phi(t)$ is some test function. And since (from $(1)$) $$\int_{-\infty}^{\infty}\delta(t)[f(t)\phi(t)]dt=f(0)\phi(0)=\int_{-\infty}^{\infty}\delta(t)f(0)\phi(t)dt\tag{3}$$ we can conclude that $$\delta(t)f(t)=\delta(t)f(0)\tag{4}$$ if $f(t)$ is continuous at $t=0$. So for $f(t)=t$ we get $$t\,\delta(t)=0\tag{5}$$ since $f(t)=t$ is continuous at $t=0$ and $f(0)=0$.
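A numerical illustration (mine, not part of the answer): replacing $\delta$ by a narrow Gaussian "nascent delta" $\delta_\varepsilon$, the action of $t\,\delta_\varepsilon(t)$ on a test function tends to $0$ as $\varepsilon \to 0$:

```python
# int t * delta_eps(t) * phi(t) dt -> 0 as eps -> 0, matching f(0) = 0 for f(t) = t.
import numpy as np

t = np.linspace(-1.0, 1.0, 200_001)
dt = t[1] - t[0]
phi = np.exp(t)                                   # an arbitrary smooth test function

for eps in (1e-1, 1e-2, 1e-3):
    delta_eps = np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    val = np.sum(t * delta_eps * phi) * dt        # Riemann sum for the integral
    print(eps, val)                               # shrinks like eps^2 toward 0
```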
Prove the Contraction Mapping Theorem. Let $(X,d)$ be a complete metric space and $g : X \rightarrow X$ be a map such that $\forall x,y \in X, d(g(x), g(y)) \le \lambda d(x,y)$ for some $0<\lambda < 1$. Then $g$ has a unique fixed point $x^* \in X $, and it attracts everything, i.e. for any $x_0 \in X$, the sequence of iterates $x_0, g(x_0), g(g(x_0))$, ... converges to the fixed point $x^* \in X$. The hints I am given are: for existence and convergence, prove that the sequence is Cauchy; for uniqueness, choose two fixed points of $g$ and apply the map to both. I still don't quite know how to proceed after looking at the hints. Could anyone help me based on those hints?
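Not a proof, but a quick numerical illustration of what the theorem asserts (my own example; $g=\cos$ maps $\mathbb{R}$ into $[-1,1]$ and is a contraction once the iterates land in $[\cos 1, 1]$):

```python
# Iterating a contraction: x0, g(x0), g(g(x0)), ... converges to the unique
# fixed point regardless of the starting value.
import math

for x0 in (0.0, 5.0, -100.0):
    x = x0
    for _ in range(200):
        x = math.cos(x)      # on [cos 1, 1], |g'(x)| = |sin x| <= sin(1) < 1
    print(x0, x)             # all print ~0.7390851, the fixed point of cos
```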
In this paper we give improved constructions of several central objects in the literature of randomness extraction and tamper-resilient cryptography. Our main results are: (1) An explicit seeded non-malleable extractor with error $\epsilon$ and seed length $d=O(\log n)+O(\log(1/\epsilon)\log \log (1/\epsilon))$, that supports min-entropy $k=\Omega(d)$ and outputs $\Omega(k)$ bits. Combined with the protocol in \cite{DW09}, this gives a two round privacy amplification protocol with optimal entropy loss in the presence of an active adversary, for all security parameters up to $\Omega(k/\log k)$, where $k$ is the min-entropy of the shared weak random source. Previously, the best known seeded non-malleable extractors require seed length and min-entropy $O(\log n)+\log(1/\epsilon)2^{O(\sqrt{\log \log (1/\epsilon)})}$ \cite{CL16, Coh16}, and only give two round privacy amplification protocols with optimal entropy loss for security parameter up to $k/2^{O(\sqrt{\log k})}$. (2) An explicit non-malleable two-source extractor for min-entropy $k \geq (1-\gamma)n$, some constant $\gamma>0$, that outputs $\Omega(k)$ bits with error $2^{-\Omega(n/\log n)}$. We further show that we can efficiently and uniformly sample from the pre-image of any output of the extractor. Combined with the connection in \cite{CG14b} this gives a non-malleable code in the two-split-state model with relative rate $\Omega(1/\log n)$. This exponentially improves previous constructions, all of which only achieve rate $n^{-\Omega(1)}$.\footnote{The work of Aggarwal et al. \cite{ADKO15} had a construction which ``achieves'' constant rate, but recently the author found an error in their proof.} (3) Combined with the techniques in \cite{BDT16}, our non-malleable extractors give a two-source extractor for min-entropy $O(\log n \log \log n)$, which also implies a $K$-Ramsey graph on $N$ vertices with $K=(\log N)^{O(\log \log \log N)}$. Previously the best known two-source extractor in \cite{BDT16} requires min-entropy $\log n\, 2^{O(\sqrt{\log n})}$, which gives a Ramsey graph with $K=(\log N)^{2^{O(\sqrt{\log \log \log N})}}$. We further show a way to reduce the problem of constructing seeded $s$-source non-malleable extractors to the problem of constructing non-malleable $(s+1)$-source extractors. Using the non-malleable $10$-source extractor with optimal error in \cite{CZ14}, we obtain a seeded non-malleable $9$-source extractor with optimal seed length, which in turn gives a $10$-source extractor for min-entropy $O(\log n)$. Previously the best known extractor for such min-entropy requires $O(\log \log n)$ sources \cite{CohL16}. Independently of our work, Cohen \cite{Cohen16} obtained results similar to (1) and the two-source extractor, except that the dependence on $\epsilon$ is $\log(1/\epsilon) (\log \log (1/\epsilon))^{O(1)}$ and the two-source extractor requires min-entropy $\log n (\log \log n)^{O(1)}$.
I thought I knew this but have found it surprisingly difficult to find good references. I am interested in solving$$\left\{\begin{align}& \Delta \psi = - \rho & & \mbox{in } \mathbb{R}^3, &(1) \\& \psi(\infty) = 0, & & &(2)\end{align}\right.$$where $\rho$ is a compactly supported function. We know the answer should be given (up to constant factors) by \begin{equation} \psi(x) = \int_{\mathbb{R}^3} \frac{\rho(y)\,dy}{|x-y|}. \qquad (3) \end{equation} My first question is: what are the mildest conditions that can be imposed on $\rho$ for (1) and (2) to hold pointwise, and is the solution indeed given by (3)? I think Gilbarg and Trudinger give it for $\rho$ Hölder continuous with Hölder exponent $\alpha \in (0,1]$. My second question is: what if I relax the condition that (1) hold pointwise, and instead seek weak solutions, i.e. $\psi$ satisfying $$ \int_{\mathbb{R}^3}\left[ \nabla\psi\cdot\nabla\varphi - \rho\varphi\right]dx = 0, \qquad \psi(\infty)=0 \qquad (4) $$ for all test functions (i.e. $C_c^\infty$) $\varphi$? Then how does the generality improve? Can it be broadened to allow for measures $\rho$ if we replace $\int\varphi\rho\, dx$ by $\int \varphi\, d\rho$ in (4)? FYI, the books I have been consulting include Evans's PDE, Gilbarg and Trudinger's Elliptic PDE, Landkof's Potential Theory, Helms' Potential Theory, and Jackson's Electrodynamics. Thank you all in advance.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Hello! I think I got something wrong here, maybe someone can help me out. Let's consider an n-manifold. A differential n-form describing a signed volume element will then transform as: [tex]f(x^i) dx^1 \wedge dx^2 \wedge \cdots \wedge dx^n = f(y^i) \;\text{det}\left( \frac{\partial x^i}{\partial y^j}\right) dy^1 \wedge dy^2 \wedge \cdots \wedge dy^n,[/tex] which we can compare to the unsigned volume element in Euclidean space that transforms as: [tex]f(x^i) dx^1 dx^2 \cdots dx^n = f(y^i) \left|\text{det}\left( \frac{\partial x^i}{\partial y^j}\right)\right|dy^1 dy^2 \cdots dy^n.[/tex] Clearly we can get a sign wrong when integrating and considering coordinate changes between systems of different orientation. What I don't see is the need for the factor of [tex]\sqrt{|g|}[/tex] on Riemannian manifolds. Will this factor only fix the sign error, or have I misunderstood something basic?
Homework Statement: There is an infinite charged plate in the yz plane with surface charge density ##\sigma = 8*10^8 C/m^2## and a negatively charged particle at coordinate (4,0,0). Find the magnitude of the E-field at coordinate (4,4,0). Homework Equations: ##E= E_1+E_2## So I figured that to get the E-field at point (4,4,0), I need to find the resultant E-field from the negatively charged particle and the plate: ##E_{resultant}=E_{particle}+E_{plate}##, with ##E_{particle}=\frac{kq}{d^2}=\frac{(9*10^9)(-2*10^{-6})}{4^2}=-1125\,N/C##. Now the plate is where I'm confused. If this was a wire, it would have been okay for me, since I'd only need to deal with one dimension. Since what they requested was a plate in the yz plane, does this mean that my ##dq=\sigma \cdot x\,dy##, where ##dy## is the 'slice' I take and ##x## is the width of the plate? Is that accurate? If it is true, then to find the E-field created by that slice at the point: ##dE=\frac{k\,dq}{R^2}##, so ##dE=\frac{k\sigma \,x\,dy}{a^2+y^2}##. I know that the vertical components of the resultant E-field will cancel out because there are the same number of segments above and below the point. So I need to find ##dE_{x}##, which equals ##dE\cos\theta##, where ##\theta## is the angle between the slice's field and the x-axis: ##dE_{x} = dE\cos\theta = (\frac{k\sigma \,x\,dy}{a^2+y^2}) (\frac{a}{\sqrt{y^2+a^2}})##. Now the problem is I can't integrate this to find my resultant E-field because I do not know the value of x. If this was a wire in a plane it would have been solvable for me, but now I'm kind of stuck. Any clues/help? Thanks :)
I have the LP formulation at the link below for the following problem: Minimize: $\sum_{i=1}^{N_1} \sum_{j=1}^{N_2} x_{ij}$ Subject to: $\sum_{i=m}^{m+a-1} \sum_{j=n}^{n+b-1} x_{ij} \ge k$ for all $m, n$ with $1 \le m \le N_1 - a + 1$ and $1 \le n \le N_2 - b + 1$, and $x_{ij} \in \{0,1\}$. We have an $N_1 \times N_2$ grid. Each cell of the grid can have the value either 0 or 1. Assume that we have $a \times b$ windows as subsets of the $N_1 \times N_2$ grid, where $a < N_1$ and $b < N_2$, and we want at least $k$ of the cells in each window to have the value 1. We want to minimize the number of cells having the value 1. $x_{ij}$, in the formulation, represents the cell in the $i$th column and $j$th row. Actually, this problem is a reduction from Hitting Set. Now, I want to add another set of constraints which guarantees that the pairs formed out of the cells having value 1 at locations $(i,j)$ of the $N_1 \times N_2$ grid will have unique slope values. I've been thinking about this, but couldn't figure out whether it is doable using LP. Any ideas or suggestions? Thanks,
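For what it's worth, here is a sketch of the covering part of the formulation in code (my own encoding; PuLP is an assumed solver, and the grid sizes are arbitrary — the slope constraints are not modeled here):

```python
# ILP: minimize the number of 1-cells subject to every a x b window
# containing at least k ones.
import pulp

N1, N2, a, b, k = 6, 6, 3, 3, 2
prob = pulp.LpProblem("min_ones", pulp.LpMinimize)
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i in range(N1) for j in range(N2)}
prob += pulp.lpSum(x.values())                      # objective: number of 1-cells
for m in range(N1 - a + 1):                          # every a x b window ...
    for n in range(N2 - b + 1):
        prob += pulp.lpSum(x[i, j] for i in range(m, m + a)
                           for j in range(n, n + b)) >= k   # ... has >= k ones
prob.solve()
print(pulp.value(prob.objective))
```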
I'm studying the functional analysis book of Stein and Shakarchi. I have trouble solving some exercises. The question is about showing the non-existence of non-trivial bounded linear functionals on a weak-type space. Specifically, we define the weak-type space as the set of functions for which $m(\{ x : |f(x)| > \alpha \}) \leq \frac{A}{\alpha}$ for some $A$ and all $\alpha > 0$. Also, we define the quantity $\mathcal{N}(f)$ as the infimum of the $A$ satisfying the inequality above (although this quantity is not a norm). The exercise I have trouble with is showing that this space (the weak-type space) has no non-trivial bounded linear functional. To show this, I have assumed the existence of a bounded linear functional $l$ and want to show that actually $l(f) = 0$ for all functions $f$ of the weak-type space. The following is what I have tried. By assumption, we have some $M > 0$ with $|l(f)| \leq M \mathcal{N}(f)$ for all $f$, so $\mathcal{N}(f) \geq \frac{1}{M}|l(f)|$. I made an observation: $\mathcal{N}(f) > \beta$ is equivalent to the existence of some $\alpha > 0$ where $m(\{x : |f(x)| > \alpha\}) > \frac{\beta}{\alpha}$. With the observation, for all $f$, either there exists $\alpha > 0$ such that $m(\{|f(x)| > \alpha \}) > \frac{1}{M\alpha}|l(f)|$, or $\mathcal{N}(f) = \frac{|l(f)|}{M}$. Now I have made no further progress on this problem. Could you give me some hint or direction for solving this problem, please?
My course book bluntly mentions (free translation, without any proof):

Integrals of functions with the terms $x^{\alpha} \sin(\beta x)$, $x^{\alpha} \cos(\beta x)$ or $x^{\alpha}e^{\beta x}$ ($\alpha, \beta\in \mathbb R$) are elementary if $\beta=0$ or $\alpha\in \mathbb N\cup\{0\}$.

Unfortunately, I cannot express the function $\int \cos(x) \ln(x)\, dx$ in any of those forms -- I always get three terms. Is there some elegant way to know whether a function has an elementary antiderivative, beyond just looking at certain constants of certain functions? Could someone explain by which theorems the functions in these forms are elementary?

References: I am working through the book alone; for future readers, this is ex. 5 on page 529 (sorry, it is not an English book).
Takeuti (1987, 223) deduces a cut-elimination theorem for infinitary logic from the corresponding soundness-and-completeness theorems. However, is there a way to adapt the basic Gentzen-style argument? The relevant cut rule says (this is a rephrasing from Takeuti, 215):

From proofs of $\Gamma\to \Delta, A_\alpha$ for all $\alpha<\lambda$ and a proof of $\{A_\alpha\}_{\alpha<\lambda},\Pi\to \Lambda$, obtain a proof of $\Gamma,\Pi\to \Delta,\Lambda$.

The notion of rank used in the basic Gentzen argument assumes that an inference has only one upper-left sequent, whereas here we may have infinitely many. Perhaps one could redefine left-rank by a lexicographic ordering along the ranks of the upper-left sequents? I was able to adapt the transformations needed for reduction in right-rank, but then got caught in a hairball on the upper left. The problem is that the transformations given in the basic argument depend on the rule by which the upper-left sequent was obtained. Has this been worked out someplace? A reference (or a hint!) would be much appreciated. Thanks, Max

PS. The relevant notion of infinitary logic is what Takeuti (213) calls 'a system of infinitary logic with homogeneous quantifiers'. Basically, this adjusts the logical rules to fit the infinitary connectives as one would expect, but also allows infinitely many simultaneous applications of any one logical rule. (This is also apparent in the generalized form of the cut rule above.) Please let me know if more detail would be helpful here.

PPS. If it makes any difference, for my purposes the result is needed only for propositional infinitary logic.
Previously we talked a lot about location inference, which looks at the mean or median of the population distribution, or in fancier words, inference about centrality. In this chapter we explore whether our sample is consistent with being a random sample from a specified (continuous) distribution at the population level. Under $H_0$, the population distribution is completely specified with all the relevant parameters, such as a normal distribution with given mean and variance, or a uniform distribution with given lower and upper limits, or at least it is some specific family of distributions.

Kolmogorov's Test

Kolmogorov's test is a widely used procedure. The idea is to look at the empirical CDF $S_n(x)$, which is a step function that has jumps of $\frac{1}{n}$ at the observed data points. We mentioned before that as $n \rightarrow \infty$, it becomes "close" to the true CDF, $F(x) = P(X \leq x)$. So, $S_n(x)$ is a consistent estimator for $F(x)$. If our data really are a random sample from the distribution $F(x)$, we should see evidence of that: $S_n(x)$ and $F(x)$ should be "close". Under $H_0$, $F(x)$ is completely specified, so it's known. $S_n(x)$ is determined by the data, so it's known as well, which means we can compare them explicitly! The logic of the test statistic is that if $x_1, x_2, \cdots, x_n$ are a sample from a population with distribution $F(x)$, then the maximum difference between the CDF under $H_0$ and the empirical CDF should be small. The larger the maximum difference, the more evidence against $H_0$. It's generally a good idea to plot the empirical CDF together with the hypothesized CDF to see visually how close they are [1]. Our test statistic is the maximum vertical distance between $F(x)$ and $S_n(x)$:

\[T = \sup_x |F(x) - S_n(x)|\]

for a two-sided (two-tailed) alternative. Deriving the exact distribution in this case is much more complex. In R, the function ks.test does the job. You have to specify what distribution to compare to, e.g. ks.test(data, "pnorm", 2, 4) to test whether data look like a sample from a normal distribution with mean 2 and standard deviation 4 (R's parameterization of pnorm). More typically, though, we won't know the values of the parameters that define the distribution. In other words, we have unknown parameters that need to be estimated. If we use the Kolmogorov test with parameter values estimated from the sample, the distribution of the test statistic $T$ changes.

Lilliefors Test

The Lilliefors test is a simple modification of Kolmogorov's test. We have a sample $x_1, x_2, \cdots, x_n$ from some unknown distribution $F(x)$. Compute the sample mean and sample standard deviation as estimates of $\mu$ and $\sigma$, respectively:

\[\begin{aligned} \bar{x} &= \frac{1}{n}\sum\limits_{i=1}^n x_i \\ S &= \sqrt{\frac{1}{n-1} \sum\limits_{i=1}^n (x_i - \bar{x})^2} \end{aligned}\]

Use these to compute "standardized" or "normalized" versions of the data to test for normality:

\[\begin{aligned}Z_i = \frac{x_i - \bar{x}}{S} && i = 1, 2, \cdots, n\end{aligned}\]

Compare the empirical CDF of the $Z_i$ to the CDF of $N(0, 1)$, as with the Kolmogorov procedure. Alternatively, use the original data and compare to $N(\bar{x}, S^2)$. Here $H_0$: the random sample comes from a population with a normal distribution with unknown mean and standard deviation, and $H_1$: the population distribution is non-normal. This is a composite test of normality (testing multiple things simultaneously). We can obtain the distribution of the test statistic via simulation.
In R, we can use the function nortest::lillie.test.

Notes

We computed $\bar{x}$ and $S$ and used those as estimators for the normal mean and s.d. in the population, and then basically followed the Kolmogorov procedure with $\bar{x}$ and $S$ plugged in. Lilliefors vs. Kolmogorov: procedurally very similar, but the reference distribution for the test statistic changes because we estimate the population mean and standard deviation. Lilliefors found this reference distribution by simulation in the late 1960s. The idea was to generate random normal variates. For various values of the sample size $n$, these random numbers are grouped into "samples". For example, if $n = 8$, a simulated sample of size 8 from $N(0, 1)$ (under $H_0$) is generated. The $Z_i$ values are computed as described earlier. The empirical CDF is compared to the $N(0, 1)$ CDF, and the maximum vertical discrepancy is found and recorded. Repeat this thousands of times to build up the simulated reference distribution for the test statistic under $H_0$ when $n=8$. Repeat for many different sample sizes. As the number of simulations increases for a given sample size, the approximation improves.

Test for the Exponential

Let's look at a different example, the exponential. Our $H_0$: the random sample comes from the exponential distribution

\[F(x) = \begin{cases} 1 - e^{-\frac{x}{\theta}} & x \geq 0 \\ 0 & x < 0 \end{cases}\]

where $\theta$ is an unknown parameter, vs. $H_1$: the distribution is not exponential. Another composite null. We can compute

\[Z_i = \frac{x_i}{\bar{x}}\]

where we use $\bar{x}$ to estimate $\theta$. Consider the empirical CDF of $Z_1, Z_2, \cdots, Z_n$. Compare it to

\[F(x) = 1 - e^{-x}\]

and find the maximum vertical distance between the two. This is the test statistic for the Lilliefors test for exponentiality. Tables for the exact distribution exist for this case, but not in general. The R package KScorrect tests against many hypothesized distributions.

Another Test for Normality

The Shapiro-Wilk test is another important test for normality which is used quite often in practice. We again have a random sample $x_1, x_2, \cdots, x_n$ with unknown distribution $F(x)$. $H_0$: $F(x)$ is a normal distribution with unspecified mean and variance, vs. $H_1$: $F(x)$ is non-normal. The idea essentially is to look at the correlation between the ordered sample values (order statistics from the sample) and the expected order statistics from $N(0, 1)$. If the null holds, we'd expect this correlation to be near 1. Smaller values are evidence against $H_0$. A Q-Q plot has the same logic as this test. For the test more specifically:

\[W = \frac{1}{D} \left[ \sum\limits_{i=1}^k a_i \left( x_{(n - i + 1)} - x_{(i)} \right) \right]^2\]

$k = \frac{n}{2}$ if $n$ is even, otherwise $k = \frac{n-1}{2}$

$x_{(j)}$ are the order statistics for the sample

$a_j$ are coefficients based on the expected order statistics from $N(0, 1)$, obtained from tables

$D = \sum\limits_{i=1}^n (x_i - \bar{x})^2$

We may also see it written as

\[W =\frac {\left[ \sum\limits_{i=1}^n a_i x_{(i)} \right]^2}{D}\]

With large samples, the chance of rejecting $H_0$ increases: even small departures from normality will be detected and will formally lead to rejecting $H_0$, even if the data are "normal enough". Many parametric tests (such as the t-test) are pretty robust to departures from normality. The takeaway here is to always think about what you're doing. Don't apply tests blindly - think about the results, what they really mean, and how you will use them.
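As a quick sketch of both procedures in R: the first block runs Kolmogorov's test against a fully specified normal (as noted, R's extra arguments to ks.test are the mean and the standard deviation), and the second simulates the Lilliefors reference distribution for n = 8 as described above. The sample data and replication counts are arbitrary choices.

# Kolmogorov's test against a fully specified N(mean = 2, sd = 4)
set.seed(1)
x <- rnorm(30, mean = 2, sd = 4)        # toy data
ks.test(x, "pnorm", 2, 4)

# Simulated Lilliefors reference distribution for n = 8
n <- 8
D <- replicate(10000, {
  z  <- sort(scale(rnorm(n)))           # standardize with xbar and S
  Fz <- pnorm(z)
  i  <- 1:n
  max(pmax(abs(Fz - i / n), abs(Fz - (i - 1) / n)))   # max ECDF discrepancy
})
quantile(D, 0.95)                       # simulated 5% critical value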
Runs or Trends

The motivation here is that many basic analyses make the assumption of a random sample, i.e. independent, identically distributed (i.i.d.) observations. When this assumption doesn't hold, we need a different analysis strategy (e.g. time series, spatial statistics, etc.) depending on the characteristics of the data.

Cox-Stuart Test

When the data are taken over time (ordered in time), there may be a trend in the observations. Cox and Stuart (1955) proposed a simple test for a monotonically increasing or decreasing trend in the data. Note that monotonic doesn't mean linear, but simply a consistent tendency for values to increase or decrease. The procedure is based on the sign test. Consider a sample of independent observations $x_1, x_2, \cdots, x_n$. If $n = 2m$, take the differences

\[x_{m+1} - x_1, x_{m+2} - x_2, \cdots, x_{2m} - x_m\]

If $n = 2m + 1$, omit the middle value $x_{m+1}$ and calculate $x_{m+2} - x_1, \cdots$

If there is an increasing trend over time, we'd expect the observations earlier in the series to tend to be smaller, so the differences will tend to be positive, and vice versa if there is a decreasing trend. If there's no monotonic trend, the observations differ by random fluctuations about the center, and the differences are equally likely to be positive or negative. Under $H_0$ of no monotonic trend, the number of positive signs among the $m$ differences is $Bin(m, 0.5)$. That's a sign test scenario!

Example: The U.S. Dept. of Commerce publishes estimates, obtained from independent samples each year, of the mean annual mileages covered by various classes of vehicles in the U.S. The figures for cars and trucks (in $1000$s of miles) for the years $1970-1983$ are:

Cars: 9.8 9.9 10.0 9.8 9.2 9.4 9.5 9.6 9.8 9.3 8.9 8.7 9.2 9.3
Trucks: 11.5 11.5 12.2 11.5 10.9 10.6 11.1 11.1 11.0 10.8 11.4 12.3 11.2 11.2

Is there evidence of a monotonic trend in each case?

\[\begin{aligned} &H_0: \text{no monotonic trend} \\ \text{vs. } &H_1: \text{monotonic trend} \end{aligned}\]

We don't specify increasing or decreasing because we don't have prior information, so it's a two-sided alternative. For cars ($n = 14$, so $m = 7$), all seven differences are negative. When $X \sim Bin(7, 0.5)$, $P(X = 7) = 0.5^7 = 0.0078125$. We have a two-sided alternative, so we also need to consider $X=0$, which by symmetry has the same probability, so we get a p-value $\approx 0.0156$. This is reasonably strong evidence against $H_0$. For trucks, we have 4 negative differences and 3 positive differences, which is supportive of $H_0$; in fact, the most supportive you could be with just 7 differences.

Runs Test

Note that the sign test does not account for, or "recognize", the pattern in the signs for the trucks. There is evidence for some sort of trend, but since it's not monotonic, the sign test can't catch it. It also can't find periodic, cyclic, and seasonal trends, because it only counts the number of successes / failures. We need a different type of procedure. One possibility is the runs test, which looks for patterns in the successes / failures. We're looking for patterns that may indicate a "lack of randomness" in the data. Suppose we toss a coin 10 times, and see

\[\text{H, T, H, T, H, T, H, T, H, T}\]

We'd suspect non-randomness because of the constant switching back and forth. Similarly, if we saw

\[\text{H, H, H, T, T, T, T, H, H, H}\]

we'd suspect non-randomness because there are too few switches: too "blocky". For tests of randomness, both the numbers and the lengths of runs are relevant.
In the first case we have 10 runs of length 1 each, and in the second case we have 3 runs: one of length 3, followed by one of length 4, and another of length 3. Too many runs and too few are both indications of a lack of randomness. Let

\[\begin{aligned} &r\text{: the number of runs in the sequence} \\ &N \text{: the length of the sequence} \\ &m \text{: the number of observations of type 1 (e.g. H)} \\ &n = N - m \text{: the number of observations of type 2} \end{aligned}\]

\[\begin{aligned} &H_0: \text{independence / randomness} \\ \text{vs. } &H_1: \text{not }H_0 \end{aligned}\]

We reject $H_0$ if $r$ is too big or too small. To get a handle on this, we need to think about the distribution of the number of runs for a given sequence length. We'd like to know

\[P(R = r) = \frac{\text{\# of ways of getting r runs}}{\text{total number of ways of arranging H/Ts}}\]

This is conceptually easy, but doing it directly would be tedious for even a moderate $N$. We can use combinatorics to work it out. The denominator is the number of ways to pick $m$ out of $N$: $\binom{N}{m}$. As for the numerator, we need to think about all the ways to arrange $m$ Hs and $n$ Ts to get $r$ runs in total:

\[P(R = 2s + 1) = \frac{\binom{m-1}{s-1} \binom{n-1}{s} + \binom{m-1}{s} \binom{n-1}{s-1}}{\binom{N}{m}}\]

\[P(R = 2s) = \frac{2 \cdot \binom{m-1}{s-1} \binom{n-1}{s-1} }{\binom{N}{m}}\]

In principle, we can use these formulas to compute tail probabilities of events, and hence p-values, if $m$ and $n$ aren't too large (both $\leq 20$). We could run into numerical issues, and computing the tail probabilities is tedious, so we also have a normal approximation:

\[\begin{aligned} E(R) &= 1 + \frac{2mn}{N} \\ Var(R) &= \frac{2mn(2mn - N)}{N^2(N-1)} \end{aligned}\]

Asymptotically,

\[Z = \frac{R - E(R)}{\sqrt{Var(R)}} \dot\sim N(0, 1)\]

We can still improve this with a continuity correction: add $\frac{1}{2}$ to the numerator if $R < E(R)$ and subtract $\frac{1}{2}$ if $R > E(R)$. The question of interest overall is randomness, or lack thereof, so the test is two-sided by nature. There are two extremes of run behavior:

clustering or clumping of types - a small number of long runs is evidence of clustering (one-sided).

an alternating pattern of types - a large number of runs is evidence of an alternating pattern (again, from a one-sided perspective).

Runs Test for Multiple Categories

We may also take a simulation-based approach. The goal is to find critical values, or p-values, empirically based on simulation, rather than using the normal approximation. The procedure is to generate a large number of random sequences of length $N$, with $m$ type 1 events and $n$ type 2 events (e.g. use R to generate a random sequence of $0$s and $1$s; the counts $m$ and $n$ come from the original data, so this essentially permutes the original sequence). Count the number of runs in each sequence; the number of runs is also what we compute for our test statistic based on the data. The generated data are what we expect if the null is reasonable. Gathering all of these together gives an empirical distribution for the number of runs you might expect to see in a sequence of length $N$ ($m$ of type 1, $n$ of type 2) if $H_0$ is reasonable. If $N$ (hence also $m, n$) is small, we can compute the exact probabilities. Also, if $N$ is small or moderate and you generate a lot of random sequences, you will see a lot of repeated sequences. What if we have more than 2 types of events?
Smeeton and Cox (2003) described a method for estimating the distribution of the number of runs by simulating permutations of the sequence of categories. Suppose we have $k$ different outcomes / events, and let $n_i$ denote the number of observations of type $i$. We have $N = \sum\limits_{i=1}^k n_i$, the total length of the sequence, and $p_i = \frac{n_i}{N}$, the proportion of observations of type $i$. We can again use the simulation approach here: generate a lot of sequences of length $N$, with $n_1$ of type 1, $n_2$ of type 2, ..., $n_k$ of type $k$, and count the number of runs in each sequence.

p-values: suppose we have $1000$ random sequences of length $N$, and the number of runs ranges from $5$ to $25$. In the $1000$ simulations, we record how many showed $5$ runs, $6$ runs, ..., $25$ runs. If we observed $12$ runs in our data, the tail probability is $P(R \leq 12)$, and we find the tail probability on the other tail by symmetry (e.g. $(5, 6, 24, 25)$).

Normal approximation: Schuster and Gu (1997) proposed an asymptotic test based on the normal distribution that makes use of the mean and variance of the number of runs:

\[E(R) = N \left( 1 - \sum\limits_{i=1}^k {p_i^2}\right) + 1\]

\[Var(R) = N \left[ \sum\limits_{i=1}^k {\left(p_i^2 - 2p_i^3 \right)} + \left(\sum\limits_{i=1}^k {p_i^2} \right)^2\right]\]

Use these in a normal approximation:

\[Z = \frac{R - E(R)}{\sqrt{Var(R)}} \dot\sim N(0, 1)\]

where $R$ denotes the number of runs observed in our sample. Barton and David (1957) suggest that the normal approximation is adequate for $N > 12$, no matter what the number of categories is. So far we've been talking about inferences on single samples. Let's take a step further to paired samples.

Below is the R code for generating the plot: ↩︎

library(tidyverse)
set.seed(42)
tibble(
  Type = c(rep("Empirical", 1000), rep("Data", 10)),
  Value = c(
    round(rnorm(1000, mean = 60, sd = 15)),
    round(rnorm(10, mean = 60, sd = 15))
  )
) %>%
  ggplot(aes(Value, color = Type)) +
  stat_ecdf(geom = "step") +
  theme_minimal()
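A compact R sketch of the two runs-test calculations above; the toy sequence is an invented example, not data from the text.

# Exact P(R = r) for m observations of type 1 and n of type 2
runs_prob <- function(r, m, n) {
  N <- m + n
  if (r %% 2 == 0) {                      # r = 2s
    s <- r / 2
    2 * choose(m - 1, s - 1) * choose(n - 1, s - 1) / choose(N, m)
  } else {                                # r = 2s + 1
    s <- (r - 1) / 2
    (choose(m - 1, s - 1) * choose(n - 1, s) +
       choose(m - 1, s) * choose(n - 1, s - 1)) / choose(N, m)
  }
}
sum(sapply(2:20, runs_prob, m = 10, n = 10))   # sanity check: sums to 1

# Simulated null distribution of the run count for k categories
n_runs <- function(x) 1 + sum(x[-1] != x[-length(x)])
obs <- c("a","b","a","a","c","b","b","a","c","c","a","b")  # toy sequence
sim <- replicate(10000, n_runs(sample(obs)))   # permute, recount runs
mean(sim <= n_runs(obs))                       # empirical lower-tail probability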
I know of Shannon's work with entropy, but lately I have worked on succinct data structures in which empirical entropy is often used as part of the storage analysis. Shannon defined the entropy of the information produced by a discrete information source as $-\sum_{i=1}^k p_i \log{p_i}$, where $p_i$ is the probability of event $i$ occurring, e.g. a specific character being generated, and there are $k$ possible events. As pointed out by MCH in the comments, the empirical entropy is the entropy of the empirical distribution of these events, and is thus given by $-\sum_{i=1}^k \frac{n_{i}}{n} \log{\frac{n_{i}}{n}}$, where $n_{i}$ is the number of observed occurrences of event $i$ and $n$ is the total number of events observed. This is called zero-th order empirical entropy. Shannon's notion of conditional entropy has a similar higher-order empirical version. Shannon did not use the term empirical entropy, though he surely deserves some of the credit for this concept. Who first used this idea, and who first used the (very logical) name empirical entropy to describe it?
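(Tangential to the history question, but the zero-th order quantity itself is a one-liner; a toy R sketch:)

s <- strsplit("abracadabra", "")[[1]]   # observed symbol sequence
p <- table(s) / length(s)               # empirical distribution n_i / n
-sum(p * log2(p))                       # zero-th order empirical entropy, bits/symbol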
The acceleration can be employed in a similar fashion to the gravity force. A linear acceleration "creates'' a conservative force of constant magnitude and direction. The "potential'' of moving the mass in the field provides the energy. The force due to the acceleration field can be broken into three coordinates. Thus, the element of the potential is
\[ \label{ene:eq:acceleration3C} d\,PE_{a} = \pmb{a} \cdot d\pmb{\ell} \,dm \tag{92} \]
The total potential for an element of material is
\[ \label{ene:eq:elePE} PE_{a} = \int_{(0)}^{(1)} \pmb{a} \cdot d\pmb{\ell} \,dm = \left( a_x \left( x_1 - x_0 \right) + a_y \left( y_1 - y_0 \right) + a_z \left( z_1 - z_0 \right) \right) \,dm \tag{93} \]
At the origin (of the coordinates) \(x=0\), \(y=0\), and \(z=0\). Using this trick, the term \(a_x \left( x_1 - x_0 \right)\) can be replaced by \(a_x\,x\). The same can be done for the other two coordinates. The potential of a unit of material is
\[ \label{ene:eq:PEtotal} {PE_a}_{total} = \int_{sys} \left( a_x\,x + a_y\,y + a_z\,z \right) \,\rho \,dV \tag{94} \]
The change of the potential with time is
\[ \label{ene:eq:PEtotalDT} \dfrac{D}{Dt} {PE_a}_{total} = \dfrac{D}{Dt} \int_{sys} \left( a_x\,x + a_y\,y + a_z\,z \right) \,dm \tag{95} \]
This equation can be added to the energy equation as
\[ \label{ene:eq:EneAccl} \dot{Q} - \dot{W} = \dfrac{D}{Dt} \int_{sys} \left[ E_u + \dfrac{U^2}{2} + a_x\,x + a_y\, y + (a_z + g) z \right] \rho\,dV \tag{96} \]
The Reynolds Transport Theorem is used to transfer the calculations to the control volume:

Energy Equation in Linearly Accelerated Coordinates
\[ \nonumber \dot{Q} - \dot{W} = \dfrac{d}{dt} \int_{cv} \left[ E_u + \dfrac{U^2}{2} + a_x\,x + a_y\, y + (a_z + g) z \right] \rho\,dV \\ \label{ene:eq:ene:AccCV} + \int_{cv} \left( h + \dfrac{U^2}{2} + a_x\,x + a_y\, y + (a_z + g) z \right) U_{rn}\, \rho\,dA \nonumber + \int_{cv} P\,U_{bn} \,dA \tag{97} \]

Contributors

Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or the Potto license.
Let $w=a_1a_2a_3...$ be an infinite word over a finite alphabet and $\epsilon>0$. Do there exist integers $n,k$ such that $\frac{d(a_1a_2...a_n,a_{k+1}a_{k+2}...a_{k+n})}{n}<\epsilon$? ($d(u,v)$ is the Hamming distance.)

OK, let's go over it slowly. The alphabet will consist of 4 symbols: $x,u,b,c$. The infinite word will be $xU_1Q_2U_3Q_4U_5Q_6\dots$ where $U_m$ is the finite word consisting of $m$ symbols $u$ and $Q_m$ is the random word consisting of $m$ symbols each of which is $b$ or $c$ with probability $1/2$, with the convention that the choices of symbols at different positions are independent. So you get something like$$xubcuuubcbbuuuuucbbccbuuuuuuucccbcbbb\dots$$ It is easy to check (see the discussion here) that as $n\to\infty$, the string $a_1a_2\dots a_n$ contains one symbol $x$ ($a_1=x$), $\frac n2+O(\sqrt n)$ symbols $u$ and $\frac n2+O(\sqrt n)$ symbols each of which is $b$ or $c$. Now suppose that $n$ is large enough and $k>n^2$. Then the $u$'s in the word $a_{k+1}a_{k+2}\dots a_{k+n}$ form a single block and the non-$u$'s form another block. One of these blocks has length $\ell \ge n/2$. However, the corresponding block in $a_1a_2\dots a_n$ is occupied by $\frac \ell 2+O(\sqrt n)$ symbols $u$ and $\frac \ell 2+O(\sqrt n)$ symbols that are not $u$, so the Hamming distance in question is at least $\frac \ell 2+O(\sqrt n)\ge \frac n4+O(\sqrt n)\ge \frac n5$ if $n\ge n_0$. Thus we need to look only at $k\le n^2$ for large $n$. We have $\frac n2+O(\sqrt n)$ random symbols in $a_1a_2\dots a_n$ and, for fixed $k\ge 1$, the probability that each of them is matched in $a_{k+1}a_{k+2}\dots a_{k+n}$ is $0$ or $1/2$, the corresponding events being independent. Thus, the chance that we have at least $\frac n3$ matches instead of the expected $\le\frac n4$ is at most $Ce^{-cn}$ by the Bernstein (a.k.a. Chernoff, Hoeffding, etc.) bound. Since the series $\sum_n Cn^2e^{-cn}$ converges, we conclude that with probability close to $1$, the Hamming distance in question is at least $\frac n6+O(\sqrt n)>\frac n7$ for all $n\ge n_0$, $k\le n^2$. Finally, due to the uniqueness of $x$ in the word, the Hamming distance is always at least $1$, so the ratio in question is never less than $\min(\frac 17,\frac 1{n_0})$. I hope it is clearer now, but feel free to ask questions if something is still confusing.

By the way, the word "conjecture" means "a statement supported by extensive circumstantial evidence and several rigorous partial results", not "something that just came to my head" or "something I want to be true", so, since you put it in the title, I wonder what positive results you can prove here.
This question is based on this question: How to calculate a confidence level for a Poisson distribution? and its answers. In that post, the question is "How do I calculate the confidence interval of a poisson distribution with $n = 88$ and $\lambda =47.18$?" The answer came as $$ \lambda \pm 1.96\sqrt{\dfrac{\lambda}{n}}, $$ for the upper and lower bounds. It might be noted that this is an approximation which is okay when $n\lambda$ is big enough -- whatever big enough might be, apparently $4152$ is big enough. Now, as far as I know -- this might be completely wrong since I haven't really properly studied statistics yet -- the confidence interval gives you an interval such that the probability that the mean is in this interval is $95\%$. So, I'd think the probability of being outside this interval would be $2.5\%$? But it's not. So I'm confused.
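For concreteness, a small R sketch computing the quoted interval and illustrating where the 2.5% intuition goes astray: the interval is for the estimated mean, not for individual observations, so a single Poisson draw lies outside it far more often than 5% of the time.

lambda <- 47.18; n <- 88
ci <- lambda + c(-1, 1) * 1.96 * sqrt(lambda / n)   # interval for the mean
ci
# fraction of single Poisson(47.18) observations outside this interval:
draws <- rpois(1e5, lambda)
mean(draws < ci[1] | draws > ci[2])                 # far above 5%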
MathModePlugin

Add math formulas to TWiki topics using the LaTeX markup language

Description

This plugin allows you to include mathematics in a TWiki page, with a format very similar to LaTeX. The external program latex2html is used to generate gif (or png) images from the math markup, and the image is then included in the page. The first time a particular expression is rendered, you will notice a lag as latex2html is being run on the server. Once rendered, the image is saved as an attached file for the page, so subsequent viewings will not require re-renders. When you remove a math expression from a page, its image is deleted. Note that this plugin is called MathModePlugin, not LaTeXPlugin, because the only piece of LaTeX implemented is the rendering of images of mathematics.

Syntax Rules

<latex [attr="value"]* > formula </latex> generates an image from the contained formula. In addition, attribute-value pairs may be specified that are passed to the resulting img html tag. The only exceptions are the following attributes, which take effect in the latex rendering pipeline:

size: the latex font size; possible values are tiny, scriptsize, footnotesize, small, normalsize, large, Large, LARGE, huge or Huge; defaults to %LATEXFONTSIZE%
color: the foreground color of the formula; defaults to %LATEXFGCOLOR%
bgcolor: the background color; defaults to %LATEXBGCOLOR%

The formula will be displayed using a math latex environment by default. If the formula contains a latex linebreak ( \\ ) then a multline environment of amsmath is used instead. If the formula contains an alignment sequence ( & = & ) then an eqnarray environment is used. Note that the old notation using %$formula$% and %\[formula\]% is still supported but deprecated. If you want to recompute the images cached for the current page, append ?refresh=on to its url.

Examples

The following will only display correctly if this plugin is installed and configured correctly.

<latex title="this is an example"> \int_{-\infty}^\infty e^{-\alpha x^2} dx = \sqrt{\frac{\pi}{\alpha}} </latex>

<latex> {\cal P} & = & \{f_1, f_2, \ldots, f_m\} \\ {\cal C} & = & \{c_1, c_2, \ldots, c_m\} \\ {\cal N} & = & \{n_1, n_2, \ldots, n_m\} </latex>

<latex title="Calligraphics" color="orange"> \cal A, B, C, D, E, F, G, H, I, J, K, L, M, \\ \cal N, O, P, Q, R, S, T, U, V, W, X, Y, Z </latex>

<latex> \sum_{i_1, i_2, \ldots, i_n} \pi * i + \sigma </latex>

This is a new inline test.

Greek letters

\alpha \theta \beta \iota \gamma \kappa \delta \lambda \epsilon \mu \zeta \nu \eta \xi

Plugin Installation Instructions

Download the ZIP file and unzip it in your twiki installation directory. Content of MathModePlugin.zip:

data/TWiki/MathModePlugin.txt
lib/TWiki/Plugins/MathModePlugin/Core.pm
lib/TWiki/Plugins/MathModePlugin.pm
pub/TWiki/MathModePlugin/latex2img

This plugin makes use of three additional tools to convert latex formulas to images: latex, dvipng and convert. Make sure they are installed, and check the paths to the programs latex, dvipng and convert in the latex2img script shipped with this plugin. Edit the file <path-to-twiki>/pub/TWiki/MathModePlugin/latex2img accordingly and set execute permission for your webserver on it. Visit configure in your TWiki installation, and enable the plugin in the {Plugins} section.

Troubleshooting

If you get an error like "fmtutil: [some-dir]/latex.fmt does not exist", run fmtutil-sys --all on your server to recreate all latex format styles.
If your generated image of the latex formula does not show up, then you probably have encoding issues. Look into the source of the <img>-tag in your page's source code. Non-ASCII characters in file names might cause trouble. Check the localization settings in the TWiki configure page.

Configuration

There is a set of configuration variables that can be set in different places. All of the variables below can be set in your LocalSite.cfg file like this:

$TWiki::cfg{MathModePlugin}{<Name>} = <value>;

Some of the variables below can only be set this way; some of them may be overridden by defining the respective preference variable.

HashCodeLength (default 32): length of the hash code. If you switch to a different hash function, you will likely have to change this.
ImagePrefix (default '_MathModePlugin_'): string to be prepended to any auto-generated image.
ImageType (preference %LATEXIMAGETYPE%, default 'png'): extension of the image type; possible values are 'gif' and 'png'.
Latex2Img (default '.../TWiki/MathModePlugin/latex2img'): the script to convert a latex formula to an image.
LatexPreamble (preference %LATEXPREAMBLE%, default '\usepackage{latexsym}'): latex preamble to include additional packages (e.g. \usepackage{mathptmx} to change the math font); note that the packages amsmath and color are loaded too, as they are obligatory.
ScaleFactor (preference %LATEXSCALEFACTOR%, default 1.2): factor to scale images.
LatexFGColor (preference %LATEXFGCOLOR%, default black): default text color.
LatexBGColor (preference %LATEXBGCOLOR%, default white): default background color.
LatexFontSize (preference %LATEXFONTSIZE%, default normalsize): default font size.

Plugin Info
Under the auspices of the Computational Complexity Foundation (CCF) We consider the following basic problem: given an $n$-variate degree-$d$ homogeneous polynomial $f$ with real coefficients, compute a unit vector $x \in \mathbb{R}^n$ that maximizes $|f(x)|$. Besides its fundamental nature, this problem arises in diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. The homogeneous degree $2$ case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher degree case is NP-hard. We give approximation algorithms for this problem that offer a trade-off between the approximation ratio and running time: in $n^{O(q)}$ time, we get an approximation within factor $O_d((n/q)^{d/2-1})$ for arbitrary polynomials, $O_d((n/q)^{d/4-1/2})$ for polynomials with non-negative coefficients, and $O_d(\sqrt{m/q})$ for sparse polynomials with $m$ monomials. The approximation guarantees are with respect to the optimum of the level-$q$ sum-of-squares (SoS) SDP relaxation of the problem (though our algorithms do not rely on actually solving the SDP). Known polynomial time algorithms for this problem rely on ``decoupling lemmas.'' Such tools are not capable of offering a trade-off like our results as they blow up the number of variables by a factor equal to the degree. We develop new decoupling tools that are more efficient in the number of variables at the expense of less structure in the output polynomials. This enables us to harness the benefits of higher level SoS relaxations. Our decoupling methods also work with ``folded polynomials,'' which are polynomials with polynomials as coefficients. This allows us to exploit easy substructures (such as quadratics) by considering them as coefficients in our algorithms. We complement our algorithmic results with some polynomially large integrality gaps for $d$-levels of the SoS relaxation. For general polynomials this follows from known results for \emph{random} polynomials, which yield a gap of $\Omega_d(n^{d/4-1/2})$. For polynomials with non-negative coefficients, we prove an $\tilde{\Omega}(n^{1/6})$ gap for the degree $4$ case, based on a novel distribution of $4$-uniform hypergraphs. We establish an $n^{\Omega(d)}$ gap for general degree $d$, albeit for a slightly weaker (but still very natural) relaxation. Toward this, we give a method to lift a level-$4$ solution matrix $M$ to a higher level solution, under a mild technical condition on $M$. From a structural perspective, our work yields worst-case convergence results on the performance of the sum-of-squares hierarchy for polynomial optimization. Despite the popularity of SoS in this context, such results were previously only known for the case of $q = \Omega(n)$. Significant revisions and restructuring in many places; improvement in the SoS gap for non-negative polynomials; addition of method to lift SoS gaps to higher levels (based on what we call the Tetris theorem); certification and SoS gap results for random polynomials no longer appear in this paper. We consider the following basic problem: given an $n$-variate degree-$d$ homogeneous polynomial $f$ with real coefficients, compute a unit vector $x \in \mathbb{R}^n$ that maximizes $|f(x)|$. Besides its fundamental nature, this problem arises in many diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. 
The homogeneous degree $2$ case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher degree case is NP-hard. In this work, we give multiplicative approximation algorithms for this problem. Our algorithms leverage the tractability of the degree $2$ case, and output the best solution among a carefully constructed set of quadratic polynomials. They offer a trade-off between the approximation ratio and running time, which is governed by the number of quadratic problems we search over. Specifically, in $n^{O(q)}$ time, we get an approximation within factor $O_d((n/q)^{d/2-1})$ for arbitrary polynomials, and $O_d((n/q)^{d/4-1/2})$ for polynomials with non-negative coefficients. The approximation guarantees are with respect to the optimum of the level-$q$ SoS SDP relaxation of the problem, which the algorithm rounds to a unit vector. We also consider the case when $f$ is random with independent $\pm 1$ coefficients, and prove that w.h.p the level-$q$ SoS solution gives a certificate within factor $\tilde{O}_d((n/q)^{d/4-1/2})$ of the optimum. We complement our algorithmic results with some polynomially large integrality gaps for $d$-levels of the SoS relaxation. For the random polynomial case, we show a gap of $\Omega_d(n^{d/4-1/2})$, which precisely matches the exponent of our upper bound, and shows the necessity of our $\Omega(d)$ exponent in the approximation ratio for general polynomials. For polynomials with non-negative coefficients, we show an $\tilde{\Omega}(n^{1/12})$ gap for the $d=4$ case. To obtain our results, we develop general techniques which help analyze the approximation obtained by higher levels of the SoS hierarchy. We believe these techniques will also be useful in understanding polynomial optimization for other constrained settings.
This is a question that I've previously asked over on math.stackexchange, and I have yet to receive a useful answer. It was suggested that I post this here. The problem itself originally comes from simulation of wave propagation, and involves large two-dimensional arrays of complex data. I'll explain it here as a simplified one-dimensional problem, however. Suppose I have the following integral relationship involving a convolution: $$ f(x) = \int_{-\infty}^{+\infty}dx'\, g(x')\, K(x - x') $$ Assume that the functional form of the kernel $K$ is known, and suppose that the function $g$ has compact support on $[-a/2, +a/2]$ for some width $a>0$. Assume further that I have data about the function $g$ in the form of $N$ equally-spaced samples $g_n$: $$ g_n = g(x_n)\, , $$ where $$ x_n = -\frac{a}{2} + n\frac{a}{N}\, , \qquad n = 0,..., N-1 $$ My goal is to generate some $N$ evenly-spaced samples of the function $f$. If we suppose that we want to evaluate $f$ on the same domain on which $g$ lives, and for which we have samples of $g$, then it's easy. I discretize everything in the usual way, and the whole thing becomes a discrete convolution:$$f_n = f(x_n) = \sum_{m = 0}^{N-1} g_m K_{n - m}\, ,$$where$$K_{n - m} = K(x_n - x_m) = K\left(\frac{a}{N}(n-m)\right)\, .$$Everything is simple since I can use the discrete convolution theorem: I FFT the $\{g_n\}$ and $\{K_n\}$ sequences, multiply them, do an inverse FFT, and I'm done. I have my $\{f_n\}$ data. The whole thing will be an $O(N\log N)$ process. Suppose instead that I have to evaluate $f$ on a different-sized domain. For example, perhaps the $g$ data is narrow, but the kernel $K$ is very wide. Let's say I have to generate $N$ samples of $f$ on the domain $[-L/2, +L/2]$, for some domain width $L > a$. $L$ could even be much larger than a. If I try to discretize everything in a similar way, I'd have to first define a sequence of points $\{X_n\}$ in the wider domain: $$ X_n = -\frac{L}{2} + n\frac{L}{N}\, , \qquad n = 0,..., N-1 $$ Then the data I seek is $\{f_n\}$ where $f_n = f(X_n)$. The original integral is still over the smaller domain, so we have: \begin{align} f_n &= \int_{-a/2}^{+a/2}dx'\, g(x')\, K(X_n - x') \\ &\approx \sum_{m = 0}^{N-1} g_m\, K(X_n - x_m) \qquad \text{(discretize)}\\ &= \sum_{m = 0}^{N-1} g_m\, K\left(\frac{L-a}{2} + \frac{a}{N}(\alpha\, n - m)\right)\, , \end{align} where $\alpha = L/a > 1$ is the ratio of the two domain sizes. This is no longer a discrete convolution, since the second term is no longer a function of $n-m$, but instead of $\alpha\, n - m$. So I can no longer use the convolution theorem. I can imagine evaluating the sum above by brute force, but that would be an $O(N^2)$ process, which is prohibitive for my problem. Is there any way of efficiently calculating the discrete version of this convolution integral between two different domain sizes? Thanks.
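(Not an answer to the efficiency question, but a direct O(N^2) baseline of the final sum takes only a few lines of R and is useful for validating any faster scheme; g, K, and the sizes below are toy choices.)

N <- 512; a <- 1; L <- 4
g <- function(x) ifelse(abs(x) <= a / 2, cos(pi * x / a), 0)  # toy compactly supported g
K <- function(x) exp(-x^2)                                    # toy kernel
xm <- -a / 2 + (0:(N - 1)) * a / N     # source grid on [-a/2, a/2)
Xn <- -L / 2 + (0:(N - 1)) * L / N     # target grid on [-L/2, L/2)
f  <- as.vector(K(outer(Xn, xm, "-")) %*% g(xm))   # f_n = sum_m g_m K(X_n - x_m)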
You have several preconceived misconceptions about relativity in your question. I will try to address them all here.

1- Time Dilation Due To Relativity/Speed

It happens that time dilation due to relativity becomes really evident only at extremely high speeds. The time dilation experienced due to velocity is expressed as $$\Delta t = \frac{t}{\sqrt{1-\frac{v^2}{c^2}}}$$ where v is the velocity of the object in question and c is the speed of light in vacuum. If you solve the equation for various values of v, you will observe that while there is definitely some time dilation for every moving object, the effect is negligible until we approach at least 10% of the speed of light. Now let us examine the speeds at which we (humans on Earth) are travelling. Earth is moving around the sun at a speed of nearly 30 km/s. While being much, much faster than a rifle bullet, it is nothing when compared to the speed of light, which is 300,000 km/s. So we are moving at 0.01% of the speed of light that way. Next is our speed in the solar system, moving around the galactic center. A quick google search tells me this speed is 828,000 km/h or 230 km/s (source). Once again, while being mind-bogglingly fast, it is only about 0.076% of the speed of light. Time dilation at these speeds is negligible for all practical purposes. (A quick computation of the dilation factors at these speeds is sketched after the second part of this answer.) Our galaxy is also moving along in its cluster of galaxies and the cluster is adrift in the universe, but these speeds can hardly be calculated accurately due to the expansion of space in the universe, which increases the distance between far-flung galaxies at rates that can exceed the speed of light. So relativity equations don't really apply there.

2- If our star system happened to be close to the center of the galaxy, wouldn't it make sense to send a colony to slower parts of the galaxy and have them develop technology to be brought back to us?

While time is dilated for star systems near the galactic center (not due to higher speeds, but due to gravitation), it would be a completely impractical idea to try and settle a colony in regions of faster time so that they develop technology faster to be brought back to us. Some of the reasons I can think of are the following:

1- The galactic center is a really, really supermassive black hole with extremely high gravity. Sending a spaceship to large distances away from this monster would be very, very difficult, especially considering that you want to send the spaceship from near the galactic center to the outer reaches of the galaxy.

2- Before you send the huge spaceship containing thousands of people to start a colony on another planet in the outskirts of the galaxy, you first have to find a habitable planet in the outskirts of the galaxy. That is not an easy task, considering the startlingly long distances involved, the next-to-nothing technology we have for detailed mapping of all star systems and their planets at such vastness, and the fact that we cannot tell much about the habitability of planets at those distances. There is a 99.9999999999999999% chance we would send our pioneers to certain death.

3- Considering that the galaxy is really, really huge, and that interstellar space, however dark it looks to the eye, is fraught with horrors like black holes and neutron stars with horrific magnetic and gravitational fields, the spaceship would be in for nearly certain doom when travelling from the galactic center to the galactic outskirts.
4- And when those pioneers finally reached the outskirts of the galaxy (after something like a million years, even if they travel at 5% of the speed of light, which is a very fast speed by human standards), the people landing on the exoplanet would probably be biologically different from us and would definitely have a completely different psychology. And you can be 100% certain they would not be interested at all in sending back the results of the technological advancement they achieve. They would no longer be emotionally, culturally or biologically linked to us.

5- Also, forget any meaningful communication between the planets. Our galaxy is 100,000 light years across (source), meaning that it would take something like 50,000 years just for a one-way message between the center and the rim. And then you would have to consider where to aim your message (considering that the positions of the star systems and planets would be very different after 50,000 years) and then process the signal to undo the red or blue shift. Then there is the gravitational lensing effect, which might bend the communication waves away from their designated straight path. In short, forget any communication at all.

Edit to add: In response to Michael Kjörling

You have stated the correct statistics, but I'm afraid you have ended up with a limited view of the journey and the dangers it holds. While a straight trip from the galactic center to the outskirts of our galaxy would indeed take a few multiples of 50,000 years (when traveling at a significant fraction of the speed of light), travel is anything but straight (due to the gravitational and magnetic fields of stars). For one, you have not considered the possibility of the source civilization living on the north side of the galactic center and the destination star system lying on the southern outskirts of the galaxy. You cannot travel through the galactic center. You would have to go around it, and considering its immense gravity and vast event horizon, travelling at 5% of the speed of light, you would have to make a very, very large turn around it, taking tens of thousands of years. Furthermore, stellar density is much greater near the galactic center, implying that the spaceship would have to endure gravitational tugs from multiple sources the moment it reaches interstellar space. Neglecting the impossible fuel requirements, travel would hardly (if ever possible at all) be in a straight line, making the route very complex and lengthy. Thirdly, you have not accounted for the fact that in case (which is highly likely) the spaceship is a couple dozen thousand years late, it will find that its destination stellar system has moved millions of miles ahead, and it will have to actually chase it to reach it, further increasing journey time.
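As a footnote to part 1 of the answer above, the dilation factors at the quoted speeds are quick to evaluate (a sketch; speeds in km/s):

gfac <- function(v_kms, c_kms = 299792.458) 1 / sqrt(1 - (v_kms / c_kms)^2)
gfac(30)                  # Earth around the Sun: ~1.000000005
gfac(230)                 # Sun around the galactic center: ~1.0000003
gfac(0.10 * 299792.458)   # 10% of c: ~1.005, where dilation starts to show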
Example 11.3

Solution 11.3

\(\mathbf{M}\) \(\mathbf{\dfrac{T }{ T_0}}\) \(\mathbf{\dfrac{\rho }{ \rho_0}}\) \(\mathbf{\dfrac{A }{ A^{\star} }}\) \(\mathbf{\dfrac{P }{ P_0}}\) \(\mathbf{\dfrac{A \,P }{ A^{\star} \,P_0 }}\) \(\mathbf{\dfrac{F }{ F^{\star} }}\)

0.88639 0.86420 0.69428 1.0115 0.60000 0.60693 0.53105

\[ T = 0.86420338 \times (273+ 21) = 254.076\, K \\ \rho = \dfrac{\rho }{ \rho_0} \overbrace{\dfrac{P_{0}}{ R\, T_0}}^{\rho_0}= 0.69428839 \times \dfrac{5 \times 10^6 [Pa] }{ 287.0 \left[\dfrac{J }{ kg\, K}\right] \times 294 [K] } = 41.1416 \left[\dfrac{kg }{ m^3 }\right] \]

The velocity at that point is

\[ U = M \,\overbrace{\sqrt{k\,R\,T}}^{c} = 0.88638317 \times \sqrt { 1.4 \times 287 \times 254.076} \sim 283 [m/sec] \]

The tube area can be obtained from mass conservation as

\[ A = \dfrac{\dot {m} }{ \rho\, U} = 8.26 \times 10^{-5} [m^{2}] \]

For a circular tube the diameter is about 1 [cm].

Example 11.4

The Mach number at point \(\mathbf{A}\) on a tube is measured to be \(M=2\), the static pressure is \(1.5 [Bar]\), and the temperature is \(250 [K]\). Downstream at point \(\mathbf{B}\) the pressure was measured to be \(2.0 [Bar]\). Calculate the Mach number at point \(\mathbf{B}\) under the isentropic flow assumption. Also, estimate the temperature at point \(\mathbf{B}\). Assume that the specific heat ratio is \(k=1.4\) and assume a perfect gas model. Well, this question is for academic purposes; there is no known way for the author to directly measure the Mach number. The best approximation is to insert a cone into the supersonic flow and measure the oblique shock (were the flow subsonic, such a technique would not be suitable).

Solution 11.4

With the known Mach number at point \(\mathbf{A}\) all the ratios of the static properties to total (stagnation) properties can be calculated. Therefore, the stagnation pressure at point \(\mathbf{A}\) is known and the stagnation temperature can be calculated. At \(M=2\) (supersonic flow) the ratios are

Isentropic Flow Input: M k = 1.4

\(M\) \(\dfrac{T}{T_0}\) \(\dfrac{\rho}{\rho_0}\) \(\dfrac{A}{A^{\star} }\) \(\dfrac{P}{P_0}\) \(\dfrac{A\, P }{ A^{\star} \, P_0}\) \(\dfrac{F }{ F^{\star}}\)

2.0 0.555556 0.230048 1.6875 0.127805 0.21567 0.593093

With this information the pressure ratio at point \(\mathbf{B}\) can be expressed as

\[ \dfrac{P_{B} }{ P_{0}} = \overbrace{\dfrac{P_{A} }{ P_{0}} }^ {\text{from the table, isentropic @ M = 2}} \times \dfrac{P_{B} }{ P_{A}} = 0.12780453 \times \dfrac{2.0 }{ 1.5} = 0.17040604 \]

The corresponding Mach number for this pressure ratio is \(M_B = 1.8137788\), with \(\dfrac{T_{B}}{T_{0}} = 0.60315132\) and \(\dfrac{P_{B} }{ P_{0} }= 0.17040879\). The stagnation temperature can be "bypassed'' to calculate the temperature at point \(\mathbf{B}\):

\[ T_{B} = T_{A}\times \overbrace{\dfrac{T_{0} }{ T_{A}} }^{M=2} \times \overbrace{\dfrac{T_{B} }{ T_{0}}} ^{M=1.81..} = 250 [K] \times \dfrac{1 }{ 0.55555556} \times 0.60315132 \simeq 271.42 [K] \]

Example 11.5

Gas flows through a converging–diverging duct. At point "A'' the cross section area is \(50\) [\(cm^{2}\)] and the Mach number was measured to be 0.4. At point B in the duct the cross section area is 40 [\(cm^{2}\)]. Find the Mach number at point B. Assume that the flow is isentropic and the gas specific heat ratio is 1.4.

Solution 11.5

To obtain the Mach number at point B, find the ratio of the area to the critical area.
This relationship can be obtained by

\[ \dfrac{A_{B} }{ A^{\star} } = \dfrac{A _{B} }{ A_{A} } \times \dfrac{A_{A} }{ A^{\star}} = \dfrac{40 }{ 50} \times \overbrace{ 1.59014 }^{\text{from the table @ M = 0.4}} = 1.272112 \]

With this value of \(\dfrac{A_{B} }{ A^{\star} } \), two solutions can be obtained from the table or from Potto-GDC. The two possible solutions: the first supersonic, M = 1.6265306, and the second subsonic, M = 0.53884934. Both solutions are possible and acceptable. The supersonic branch solution is possible only if there was a transition at the throat, where M=1.

Isentropic Flow input: \(\dfrac{A}{A^{\star}}\) k = 1.4

\(M\) \(\dfrac{T}{T_0}\) \(\dfrac{\rho}{\rho_0}\) \(\dfrac{A}{A^{\star} }\) \(\dfrac{P}{P_0}\) \(\dfrac{A\, P }{ A^{\star} \, P_0}\) \(\dfrac{F }{ F^{\star}}\)

0.538865 0.945112 0.868378 1.27211 0.820715 1.04404 0.611863

1.62655 0.653965 0.345848 1.27211 0.226172 0.287717 0.563918

Example 11.6

An engineer needs to redesign a syringe for medical applications. The complaint is that the syringe is "hard to push.'' The engineer analyzes the flow and concludes that the flow is choked. Given this fact, what should the engineer do with the syringe: increase the pushing diameter or decrease the diameter? Explain.

Solution 11.6

This problem is typical of compressible flow in the sense that the solution is opposite to ordinary intuition. The diameter should be decreased. The pressure in the choked flow in the syringe is past the critical pressure ratio. Hence, the force is a function of the cross-sectional area of the syringe. So, to decrease the force one should decrease the area.

Contributors

Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or the Potto license.
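As a cross-check, the table entries above follow from the standard isentropic relations for a perfect gas; a small R sketch with k = 1.4:

k <- 1.4
iso <- function(M) {
  t <- 1 / (1 + (k - 1) / 2 * M^2)                     # T/T0
  c(M        = M,
    T.T0     = t,
    rho.rho0 = t^(1 / (k - 1)),                        # rho/rho0
    A.Astar  = (1 / M) * ((2 / (k + 1)) / t)^((k + 1) / (2 * (k - 1))),
    P.P0     = t^(k / (k - 1)))                        # P/P0
}
round(iso(2), 5)         # matches the M = 2 row above
round(iso(0.88639), 5)   # matches the Example 11.3 row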
Assume a wave function $\psi = \psi(x,t)$ where $x$ is position from the starting point $(0,0)$ and $t$ is time. Two oscillating points A and B are located at $x_1$ and $x_2$ respectively, with $x_2 \geq x_1$ and $x_1 > vt_1$, where $t_1$ is a moment in time. If I wanted to find the distance between them along the curve using calculus, could I do this:

\begin{align}s(x_2,x_1) &= \int^{x_2}_{x_1} ds \\ &= \int^{x_2}_{x_1} \sqrt{(dx)^2 + (d \psi)^2} \\ &= \int^{x_2}_{x_1} \sqrt{(dx)^2 + \bigg(\frac{\partial \psi}{\partial x}dx \bigg)^2} \\ &=\int^{x_2}_{x_1} \sqrt{1 + \bigg(\frac{\partial \psi}{\partial x}\bigg)^2} dx\end{align}

instead of finding $x = x_2 - x_1$, $y = \psi(x_2,t_1) - \psi(x_1,t_1)$, and then $s(x_2,x_1) = \sqrt{x^2 + y^2}$?
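(The first construction is the standard arc-length integral of the snapshot at time $t_1$; the second is just the straight-line chord between the endpoints. A toy R check with a hypothetical snapshot $\psi(x,t_1)=\sin x$, not taken from the question, shows how the two differ:)

x1 <- 1; x2 <- 3
psi  <- function(x) sin(x)    # hypothetical snapshot of the wave at t1
dpsi <- function(x) cos(x)    # its derivative in x
arc   <- integrate(function(x) sqrt(1 + dpsi(x)^2), x1, x2)$value
chord <- sqrt((x2 - x1)^2 + (psi(x2) - psi(x1))^2)
c(arc = arc, chord = chord)   # the arc length always dominates the chord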
The problem is that the quaternions given in

id: 4 distance: 1048 q0: 646 q1: -232 q2: -119 q3: 717

are not normalized. In fact,$$\|\textbf{q}\| = \sqrt{q_0^2+q_1^2+q_2^2+q_3^2} = 999.6950$$I suspect the wearable device gives the quaternions using fixed-point numbers that must be scaled by dividing by 1000. Another problem is that the quaternions are only given to three significant digits, so it is better to calculate the quaternion norm, $\|\textbf{q}\|$, first using the equation above, and then normalize the quaternions using $q_0\leftarrow q_0/\|\textbf{q}\|$, $q_1\leftarrow q_1/\|\textbf{q}\|$, $q_2\leftarrow q_2/\|\textbf{q}\|$, $q_3\leftarrow q_3/\|\textbf{q}\|$. Then you can directly calculate the Euler angles from the quaternions using\begin{eqnarray*}\phi &=& \tan^{-1}\left(\frac{2(q_2q_3+q_0q_1)}{q_0^2-q_1^2-q_2^2+q_3^2}\right) \\\theta &=& -\sin^{-1}\left(2q_1q_3-2q_0q_2\right) \\\psi &=& \tan^{-1}\left(\frac{2(q_1q_2+q_0q_3)}{q_0^2+q_1^2-q_2^2-q_3^2}\right)\end{eqnarray*}without going through the intermediate step of calculating the rotation matrix. Also make sure to use the four-quadrant arctan function, atan2(), to calculate $\phi$ and $\psi$ to obtain the correct quadrant. Here, $\phi$ is the roll angle, $\theta$ the pitch angle, and $\psi$ the yaw angle; all three angles in radians. Simply multiply by $180/\pi$ to obtain the angles in degrees. Your answer should be \begin{eqnarray*}\phi &=& -28.5815^\circ \\\theta &=& 10.3144^\circ \\\psi &=& 93.3298^\circ\end{eqnarray*}If you repeat the above calculation without first normalizing the quaternions, you will obtain the correct roll and yaw angles, but a complex pitch angle: \begin{eqnarray*}\phi &=& -28.5815^\circ \\\theta &=& (90-j732.7)^\circ \\\psi &=& 93.3298^\circ\end{eqnarray*} Hope this helps!
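A sketch of the whole computation in R, following the steps above (the raw integers are scaled by 1000 as suggested; indices 1:4 hold q0:q3):

q <- c(646, -232, -119, 717) / 1000       # q0, q1, q2, q3 from the device
q <- q / sqrt(sum(q^2))                   # normalize to unit length
phi   <- atan2(2 * (q[3] * q[4] + q[1] * q[2]),
               q[1]^2 - q[2]^2 - q[3]^2 + q[4]^2)    # roll
theta <- -asin(2 * q[2] * q[4] - 2 * q[1] * q[3])    # pitch
psi   <- atan2(2 * (q[2] * q[3] + q[1] * q[4]),
               q[1]^2 + q[2]^2 - q[3]^2 - q[4]^2)    # yaw
c(roll = phi, pitch = theta, yaw = psi) * 180 / pi   # -28.5815 10.3144 93.3298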
\(\text{Let }A=\{\text{string starts with 10}\}\\ B=\{\text{string contains exactly 4 0s}\}\)

1) \(P[0]=P[1]=\dfrac 1 2\\ \text{P[A] is the probability that the first two bits are 10, one of the 4 equally likely 2-bit combos}\\ P[A] = \dfrac 1 4\\ P[B] = \dfrac{\binom{8}{4}}{2^8} = \dfrac{35}{128}\\ P[A \cap B] = P[\text{bits 1-2 are 10 and bits 3-8 have exactly 3 0s}] = \dfrac 1 4 \cdot \dfrac{\binom{6}{3}}{2^6}=\dfrac{5}{64}\)

\(P[A \cup B] = P[A]+P[B]-P[A \cap B]\\ P[A \cup B] = \dfrac 1 4 + \dfrac{35}{128}-\dfrac{5}{64} = \dfrac{57}{128}\).

2) \(P[1]=\dfrac 3 5=p,~P[0]=\dfrac 2 5\)

\(P[A]=\dfrac 3 5 \cdot \dfrac 2 5 = \dfrac{6}{25}\\ P[B] = \dbinom{8}{4}\left(\dfrac 3 5\right)^4\left(\dfrac 2 5\right)^4 = \dfrac{18144}{78125}\\ P[A \cap B] = \dfrac{6}{25} \dbinom{6}{3}\left(\dfrac 3 5\right)^3\left(\dfrac 2 5\right)^3 = \dfrac{5184}{78125}\)

\(P[A \cup B] = \dfrac{6}{25} + \dfrac{18144}{78125}-\dfrac{5184}{78125}=\dfrac{6342}{15625}\).

3) \(P[A] = \left(1-\dfrac 1 2\right)\left(\dfrac 1 4\right) = \dfrac 1 8\\ P[B]= \text{.... are you supposed to be able to do this w/o software?}\\ \text{It's next to impossible by hand, unless I'm missing some major trick, which is certainly possible; see the sketch below.}\)
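For part 3, P[B] is mechanical by software: convolve the per-bit distributions (a Poisson-binomial calculation). The per-bit probabilities below are a guess consistent with part 3's P[A] = (1 - 1/2)(1/4), namely P[bit i = 0] = 2^{-i}; substitute whatever the exercise actually specifies.

p0 <- 2^-(1:8)        # assumed P[bit i = 0]; replace with the real values
probs <- 1             # probs[j + 1] = P[j zeros among bits processed so far]
for (p in p0) {
  probs <- c(probs * (1 - p), 0) + c(0, probs * p)   # fold in one more bit
}
probs[5]               # P[exactly four 0s] = P[B]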
I am studying Gross's and Zagier's proof of the BSD conjecture for elliptic curves of rank $\leq 1.$ Their calculation essentially boils down to the following ingredients: (1.) Finding a suitable imaginary-quadratic extension $K$ of $\mathbb{Q}$ with no rank growth and calculating the Néron-Tate height pairings $\left<x,\sigma(x)\right>$ of Heegner points $x$ of conductor $1$ with their conjugates. (2.) Showing that the function $\sum_{m\geq 1}\left<x,\sum_{\sigma\in\operatorname{Cl}_{K}}T_{m}\sigma(x)\right>$ pairs with $f$ under the Petersson scalar product to give $L'(f,1).$ When you read the paper for the first time, it feels like you are being hit in the face with a bombastic load of calculations whose sense you understand only when you have reached the final chapter. In order to get myself a little more "familiar" with the methodology of the paper, in particular with step one, I want to ask how it can be generalized. Remember that Heegner points of an elliptic curve $E$ of conductor $N$ over an imaginary quadratic field $K$ can be parametrized by equivalence classes of maps $$\mathbb{C}/\mathcal{O}_{c} \to \mathbb{C}/\mathcal{N}_{c}$$ where $\mathcal{N}$ is a ray class of $K$ dividing $N,$ $c$ is a natural number prime to the discriminant of $K,$ $$\mathcal{O}_{c}=\mathbb{Z}+c\cdot\mathcal{O}_{K},$$ and $\mathcal{N}_{c}:=\mathcal{O}_{c}\cap\mathcal{N}.$ My question is related to step (1.) mentioned above: Can the method of Gross and Zagier be adapted, or has it been adapted, to calculate the height pairings of Heegner points of conductor greater than $1$ with their conjugates? I am not asking about $p$-adic heights (there seems to be tons of literature about that) but about Néron-Tate heights. A reference would also be nice. Remark: This is not about "How do you solve the BSD conjecture"; I am rather trying to get a grasp of the methodology that is used.
Logic

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

Symbolic logic

The same kind of syllogism can be written in symbolic notation:

[math]\land[/math] is read like "and", meaning both of the two. [math]\lor[/math] is read like "or", meaning at least one of the two. [math]\rightarrow[/math] is read like "implies", or "If ... then ...". [math]\lnot[/math] is read like "not", or "it is not the case that ...". Parentheses (, ) are added for clarity and precedence; this means that what is in parentheses should be looked at before the things outside.

This is the same example using logic symbols:

[math]\rm((human\rightarrow mortal)\land(Socrates\rightarrow human))\rightarrow(Socrates\rightarrow mortal)[/math]

And this is the same example using general terms:

[math]((a\rightarrow b)\land(c\rightarrow a))\rightarrow(c\rightarrow b)[/math]

Finally, those talking about logic talk about statements. A statement is simply something like "Socrates is human" or "all humans are mortal". Statements have a truth value; they are either true or false, but not both. Mistakes in logic are called "fallacies".

Logical proof

A logical proof is a list of statements. Each statement in the proof is either an assumption or has been proven to follow from earlier statements in the proof. A proof shows that one statement, the conclusion, follows from the assumptions. One can, for example, prove that "Socrates is mortal" follows from "Socrates is a man" and "All men are mortal". There are statements that are always true. For example, [math](a \lor \lnot a)[/math] is always true ("Either it rains, or it does not rain"). Such a statement is called a tautology.

Uses

Logic is used in mathematics. People who study math create proofs that use logic to show that math facts are correct. There is an area of mathematics called mathematical logic that studies logic using mathematics. Logic is also studied in philosophy.
Previously we discussed some basic concepts in nonparametric statistics. Here we continue to present some basic tools (permutation test and sign test) and principles (order statistics, ranks, and efficiency) that will be useful moving forward.

Permutation (or Randomization) Test

Consider the following (pretty artificial) scenario, in which four out of nine subjects are selected at random to receive a new drug, and the other five get a placebo. After some time, all nine subjects are assessed on some outcome and are ranked from "best" ($rank = 1$) to "worst" ($rank = 9$). We assume no ties, for now.

Here's our question: if the new drug has no beneficial effect, what is the probability that the subjects who got it were ranked $(1, 2, 3, 4)$, i.e. the best responses?

First, we need to think about what it means when we say that the four were selected "at random". If the drug really has no effect, it means that the labels "new drug" and "placebo" are essentially meaningless, and any subject is equally likely to be ranked low, medium, or high after the treatment. The ranks are essentially assigned at random. There are in total $\binom{9}{4} = 126$ ways of picking four people out of the nine to receive the new drug. If the drug has no effect, then the set of ranks belonging to the chosen four is equally likely to be any of the 126 possible sets of ranks. Here are some of the possibilities:

A B C D
1 2 3 4 $\rightarrow$ most favorable to the new drug
1 2 3 5
1 2 3 6
1 2 3 7
$\vdots$
5 6 7 9
5 6 8 9
5 7 8 9
6 7 8 9 $\rightarrow$ least favorable to the new drug

We could think of testing \[\begin{aligned} & H_0: \text{new drug has no effect} \\ vs. \, &H_1: \text{new drug has an effect} \end{aligned}\] using the enumeration of the rank outcomes. This is the basic logic of the permutation test. If there is no difference / no effect ($H_0$), then the labels are arbitrary and the ranks likewise can be thought of as arbitrarily assigned, so we can look at the configuration of ranks actually observed and see how likely it is. With the two-sided alternative $H_1$, we consider both extremes, $(1, 2, 3, 4)$ and $(6, 7, 8, 9)$, giving $prob = \frac{2}{126} \approx 0.016$.

Usually, instead of working with the ranks, we work with a function of the ranks that has a low value if all of the ranks are low, a high value if all of the ranks are high, and an intermediate value if there is a mix of high and low ranks (in the "treated" group) - for instance, the sum (or average) of the ranks. For our example, the sum will range from 10 to 30:

$S$ (rank sum) | # of occurrences | Probability
10 | 1 | 0.008
11 | 1 | 0.008
12 | 2 | 0.0159
13 | 3 |
14 | 5 |
$\vdots$ | $\vdots$ | $\vdots$
26 | 5 |
27 | 3 |
28 | 2 |
29 | 1 |
30 | 1 |

The distribution is symmetric. We could also define a rejection region, as in classical testing, by looking for values of $S$ in each tail with $prob. < 0.025$ (for a two-tailed test, taking $\alpha = 0.05$). Each tail can hold $\{10, 11\}$ (resp. $\{29, 30\}$), with tail probability $\frac{2}{126} \approx 0.0159$; adding $S = 12$ would push the tail past $0.025$. So the rejection region is $S \in \{10, 11, 29, 30\}$, with overall level $\frac{4}{126} \approx 0.0317$: we can't attain the exact 0.05 level here. This is characteristic of permutation tests because we're dealing with discrete distributions for the test statistic. As the sample sizes increase, you can get closer.

We made no assumptions about what distribution the outcome was drawn from. But note that we also don't use the actual outcome values themselves - just the ranks! There is a loss of information for a gain in flexibility and robustness (a tradeoff, in this case).
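To make the enumeration concrete, here is a minimal Python sketch (my own illustration, not part of the notes) that rebuilds the null distribution of the rank sum $S$ and reproduces the numbers above:

```python
from itertools import combinations
from math import comb
from collections import Counter

# Under H0 the 4 treated subjects get a uniformly random 4-subset of ranks 1..9
counts = Counter(sum(c) for c in combinations(range(1, 10), 4))
n_sets = comb(9, 4)

print(n_sets)                              # 126
print(counts[10], counts[11], counts[12])  # 1 1 2
print((counts[10] + counts[30]) / n_sets)  # 2/126 ~ 0.016, the two-sided extreme
region = counts[10] + counts[11] + counts[29] + counts[30]
print(region / n_sets)                     # 4/126 ~ 0.0317, the attainable level
```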
Binomial Tests

Another key component: tests that are built on the binomial distribution. We'll introduce this with an example.

Motivating Example

We have data on survival times of ten patients with a certain type of cancer. But for one patient the precise survival time is not known - the study only followed subjects for 362 weeks, and that subject was still alive at that point. In this case, we say that the observation is censored. The data are (survival time in weeks): \[49, 58, 75, 110, 112, 132, 151, 276, 281, 362^*\] $*$: censored, survival time > 362 weeks.

Suppose we want to test the hypothesis that the median survival time (in the relevant population) is 200 weeks, vs. the alternative that it is not. Let $\theta$ be the median survival time in the population from which the sample was drawn. We're interested in: \[ \begin{aligned} &H_0: \theta = 200 \\ vs. \, &H_1: \theta \neq 200 \end{aligned} \] For classical (parametric) approaches, this scenario has two main complications: the censored observation needs special methods, and we're looking at the median rather than the mean. The binomial test offers a way around both of these issues simultaneously.

If we have a random sample from any continuous distribution with median 200, each sample value is equally likely to be above or below 200. Define a "success" to be a value above 200, and a "failure" to be a value below 200. Under $H_0$, a success is just as likely as a failure, since the median $\theta$ is 200 in the population. So, in a sample of size 10, the number of successes is $Bin(10, 0.5)$. We have 3 successes, including the censored observation (whose precise value no longer matters!). \[\text{F, F, F, F, F, F, F, S, S, S}\] Under $H_0$, $\text{Pr(3 successes)} = {10 \choose 3} 0.5^{10}$. We also need to consider the "more extreme" outcomes in this tail: 2 or 1 or 0 successes. In total: ${10 \choose 3}0.5 ^{10} + {10 \choose 2}0.5 ^{10} + {10 \choose 1}0.5 ^{10} + {10 \choose 0}0.5 ^{10} \approx 0.1719$. We have a two-sided test, so we also need to consider departures from $H_0$ in the other tail. Since $Pr(\text{success}) = Pr(\text{failure}) = 0.5$, this distribution is symmetric $\Rightarrow \text{p-value} = 2 \times 0.1719 = 0.3438$. This binomial test is also called the sign test, and is very popular as a nonparametric test due to its simplicity.

Interpreting Results

As in the previous example of the permutation test, because of the discreteness of the binomial, not all levels (p-values) are attainable. As $n$ increases, this becomes less of an issue, but for small or moderate sample sizes, this argues against strict cutoffs (fixed, but essentially arbitrary) such as "$\text{p-value} < 0.05 \Rightarrow \text{ 'statistically significant'}$". It is better to report the p-value itself, and if possible, also report a confidence interval for the population quantity of interest.

With $(0, 1, 9, 10)$ successes (low and high # of successes), we have a total probability of $0.0216$, which is "reasonable evidence" against $H_0$. If we add in 2 and 8, that probability jumps to $0.1094$, which you may or may not be willing to take as evidence against $H_0$. If we go with $0.0216$, we have between 2 and 8 successes if the median specified in $H_0$ has any value greater than 58 but less than 281. So $(58, 281)$ is a $97.84\%$ confidence interval for $\theta$, the population median. This is a very wide confidence interval - probably too wide for a clinical setting.
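The tail arithmetic above is easy to reproduce; a minimal sketch (my illustration, assuming scipy >= 1.7 for `binomtest`):

```python
from scipy.stats import binom, binomtest

n, k = 10, 3                      # 10 patients, 3 values above 200 weeks
one_tail = binom.cdf(k, n, 0.5)   # P(0, 1, 2 or 3 successes) under H0
print(one_tail)                   # 0.171875
print(2 * one_tail)               # 0.34375, the two-sided p-value

# scipy's built-in exact binomial (sign) test gives the same answer
print(binomtest(k, n, 0.5).pvalue)  # 0.34375
```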
We've gained flexibility in analysis, but we've thrown out a lot of the information in the data.

Asymmetric Cases

In a dental practice, experience has shown that $75\%$ of adult patients require treatment after a routine checkup. So, in a sample of 10 independent patients, the number, $S$, needing additional treatment is $Bin(10, 0.75)$. The probabilities for each outcome are:

$S$ (# of successes) | prob. of individual outcomes
0 | 0.0000
1 | 0.0000
2 | 0.0004
3 | 0.0031
4 | 0.0162
5 | 0.0584
6 | 0.1460
7 | 0.2503
8 | 0.2816
9 | 0.1877
10 | 0.0563

We can see that the distribution is no longer symmetric in the two tails! Suppose we had a sample of size 10 from a different dental practice and wanted to test: \[\begin{aligned} H_0&: p= 0.75 \\ vs.\, H_1&: p > 0.75 \end{aligned}\] For this one-sided alternative, values in the upper tail (high # of successes) are favorable to $H_1$. The most extreme result would be 10 successes in the second practice, which under $H_0\ (p = 0.75)$ has probability $0.0563$, just above the "standard" $\alpha = 0.05$ threshold. So with a traditional testing approach, we could never reject $H_0$. This is another situation where it makes more sense to report the p-value itself.

The other one-tailed test: \[\begin{aligned} H_0&: p= 0.75 \\ vs. \, H_1&: p < 0.75 \end{aligned} \] For this one-sided alternative, values in the lower tail (low # of successes) are favorable to $H_1$. Here we keep accumulating the lower-tail probabilities while we stay under the threshold: $\{0, 1, 2, 3, 4\}$ has total probability $\approx 0.0197$, while adding $S = 5$ would bring it to $\approx 0.0781$. In this case, we can reject $H_0$ in the traditional framework.

When the distribution is asymmetric and we have a two-tailed test, there are two options:

Find the point in the other tail with equal or lower cumulative probability, e.g. both tail probabilities should approach 0.025 to get a p-value of 0.05.
Take tails equidistant from the mean (in discrete cases, take the same number of bins).

Order Statistics and Ranks

Ranking is at the basis of many nonparametric tests. It's a major way of dropping distributional assumptions (while losing some of the information in the raw data). We can use ranks / ordered data. In general, if we have observations $x_1, x_2, \cdots, x_n$ from a continuous distribution (i.e. no ties), we denote the order statistics $x_{(1)} < x_{(2)} < \cdots < x_{(n)}$: $x_{(1)}$ is the smallest obs. in the sample, and $x_{(1)} < x_{(2)} < \cdots < x_{(n)}$ is the sample ordered from smallest to largest. The median of the sample can be defined in terms of the order statistics: \[\begin{aligned} &n \text{ odd} \Rightarrow \text{sample median is } x_{\left(\frac{n+1}{2}\right)} \\ &n = 2m \Rightarrow \text{sample median is } \frac{1}{2}\left(x_{(m)} + x_{(m+1)}\right) \end{aligned}\] We can also define measures of dispersion - range or interquartile range - in terms of the order statistics, e.g. $x_{(n)} - x_{(1)}$ is a possible measure of dispersion. Another key use of the order statistics is to build the empirical distribution function: \[S_n(x) = \frac{1}{n} (\#\text{ sample values} \leq x)\] In terms of the order statistics, \[S_n(x) = \begin{cases} 0, & x < x_{(1)} \\ \frac{i}{n}, & x_{(i)} \leq x < x_{(i+1)}, \quad i = 1, \ldots, n-1 \\ 1, & x \geq x_{(n)} \end{cases}\] This is a "step function" with jumps of $\frac{1}{n}$ at each of the observed data points, and it is an estimator of the population cumulative distribution function $F(x)$.
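A tiny sketch (my illustration) of the step function $S_n$ and the even-$n$ sample median, computed directly from the order statistics, using the survival data from the earlier example:

```python
import numpy as np

x = np.array([49, 58, 75, 110, 112, 132, 151, 276, 281, 362])
x_sorted = np.sort(x)                  # the order statistics x_(1) <= ... <= x_(n)

def ecdf(t):
    """S_n(t): fraction of sample values <= t."""
    return np.searchsorted(x_sorted, t, side="right") / len(x_sorted)

print(ecdf(48), ecdf(49), ecdf(200))   # 0.0 0.1 0.7 (jumps of 1/n at data points)
m = len(x_sorted) // 2
print(0.5 * (x_sorted[m - 1] + x_sorted[m]))  # sample median for even n: 122.0
```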
Power and Efficiency

How effective is a procedure in using the information in the sample? In general, nonparametric methods are less efficient than their parametric counterparts when the assumptions of the parametric approaches are met. Nonparametric methods, e.g. those based on ranks, replace the actual observed data with ranks / ordered values $\Rightarrow$ toss out information $\Rightarrow$ lose power / efficiency.

One way of looking at efficiency is through asymptotic relative efficiency (ARE). "Asymptotic" means as sample sizes increase, and "relative efficiency" means comparing two procedures. Consider two sequences of tests, $T_1$ and $T_2$ (the same tests applied at increasing sample sizes, e.g. 5, 6, 7, ... observations), where $\alpha$ (the probability of Type I error) is fixed. We let $H_1$ vary in such a way that $\beta$ (the probability of Type II error) remains constant as the sample size for test sequence $T_1$, call it $n_1$, increases. For each value of $n_1$, the idea is to determine $n_2$ such that the test sequence $T_2$ has the same $\beta$ for the particular alternative. In other words, we ask what sample sizes $n_1$ and $n_2$ the tests $T_1$ and $T_2$ need in order to achieve the same performance. The closer $n_1$ and $n_2$ are to each other, the closer the two tests are in efficiency - they use the data with (near) equal effectiveness.

A bit more rigorously, a bigger sample size usually leads to increased power for alternatives closer to the null: with a larger sample, you can detect smaller differences. For big samples, the ratio $\frac{n_1}{n_2}$ is informative, and it can be shown that (under some circumstances) $\frac{n_1}{n_2}$ tends to a limit as $n_1 \rightarrow \infty$. This limit is the asymptotic relative efficiency.

Crucially, nonparametric methods can be more powerful than their parametric counterparts when the assumptions of the latter don't hold. Next, we'll discuss location inference on a single sample, and the tool we'll be using is the Wilcoxon signed-rank test.
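Before moving on, here's a small Monte Carlo sketch of the sample-size-matching idea (entirely my own illustration; under normality the sign test's ARE relative to the t-test is the classical $2/\pi \approx 0.64$, so the sign test needs roughly $1/0.64$ times as many observations to match the t-test's power):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mc_power(test, n, shift, reps=4000, alpha=0.05):
    """Monte Carlo power of a level-alpha test against N(shift, 1) data."""
    return np.mean([test(rng.normal(shift, 1, n)) < alpha for _ in range(reps)])

t_test = lambda x: stats.ttest_1samp(x, 0).pvalue
sign_test = lambda x: stats.binomtest(int(np.sum(x > 0)), len(x), 0.5).pvalue

# Fixed alpha, fixed alternative (shift = 0.5): the t-test at n1 = 32 and the
# sign test at n2 ~ 32 / (2/pi) = 50 land at roughly the same power (~0.8).
print(mc_power(t_test, 32, 0.5))
print(mc_power(sign_test, 50, 0.5))
```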
I know they have a difference in scoping:

data a (n : Set) : Set where
  introA : a n

data b : Set -> Set where
  introB : {n : Set} -> b n

That's not what I care about. Are they different semantically?

Yes and no. The obvious difference is that indexed types are able to vary in the result type of each constructor. So you can do:

data T : ℕ → Set where
  t : T 5

You can do this with a parameterized type by taking an argument:

data T (n : ℕ) : Set where
  t : n ≡ 5 → T n

But ≡ is itself an indexed type, so you need something indexed at the bottom (≡ can be seen as the fundamental indexed type, though, if you want).

In older versions of Agda, there was another difference, where parameterized types could be smaller than indexed types on some occasions. For instance, your type b would be illegal, and would have to be:

data b : Set → Set1 where
  introB : {n : Set} → b n

This is because in general data types that contain values of Set must be large to avoid paradox. However, Agda now has a fairly sophisticated analysis that can tell that the value of the contained n is determined by the visible result type of the constructor, and so is 'safe'/equivalent to a parameterized type. So it allows it to be small, and this is no longer a difference between indexed and parameterized types. So, up to this analysis possibly failing (which it probably can, but I don't know how to trick it), indexed types are more general, I believe.

The following explanation lacks mathematical precision but should explain what is going on. A GADT is a special case of a recursive type. A recursive type $T$ is a solution of a type equation of the form $$T = \Phi(T).$$ (If this is not clear, please ask.) Sometimes $\Phi$ depends on a parameter $p : P$ of some given type $P$, so we have a parameterized equation $$T = \Phi(p, T).$$ Now the solution $T$ is not just a type, because it depends on $p$, so we get a dependent type $T : P \to \mathsf{Type}$ such that $$T(p) = \Phi(p, T(p))$$ for all $p : P$.

We may also consider recursive equations in collections other than $\mathsf{Type}$. For example, an equation in $\mathsf{Type} \times \mathsf{Type}$ is a mutual recursive type $$(T_1, T_2) = \Phi(T_1, T_2).$$ The types $T_1$ and $T_2$ can be arbitrarily entangled, i.e., there is no guarantee that we can separate the equation into two equations $T_1 = \Phi_1(T_1)$ and $T_2 = \Phi_2(T_2)$. If we could separate it like that, we would be back to the previous case of a parameterized fixed-point equation (the parameter ranging over $\{1, 2\}$). We could similarly solve simultaneously for three, four, etc. types. In general, given any type $I$, we can solve an equation which is simultaneous in $I$-many types $T_i$, one for each index $i : I$, which gives us an indexed equation $$(T_i)_{i : I} = \Phi((T_i)_{i : I}).$$ The above notation is a bit ugly and it would be better to write $$T = \Phi(T)$$ where it is understood that the unknown $T$ is a dependent type $T : I \to \mathsf{Type}$ indexed by $I$. Again, there is no guarantee that we can disentangle the equation into $I$-many separate equations of the form $T_i = \Phi_i(T_i)$.

The two kinds of equations translate back to recursive and inductive type definitions. Let us write $\mathsf{rec}_U\,\Phi$ for the solution of a fixed-point equation $X = \Phi(X)$, where $X : U$ and $\Phi : U \to U$.
We now have:

Given a map $\Phi : A \to \mathsf{Type} \to \mathsf{Type}$, the type family $R : A \to \mathsf{Type}$ given by $$R = \lambda a : A \,.\, \mathsf{rec}_{\mathsf{Type}}\,(\lambda X : \mathsf{Type} \,.\, \Phi(a, X))$$ is a recursive type with a parameter $a : A$, as in the first case.

Given a map $\Psi : (A \to \mathsf{Type}) \to (A \to \mathsf{Type})$, the type family $Q : A \to \mathsf{Type}$ given by $$Q = \mathsf{rec}_{A \to \mathsf{Type}}\,(\lambda F : A \to \mathsf{Type} \,.\, \Psi(F))$$ is a recursive type with index $a : A$, as in the second case.

It should be clear that $R$ and $Q$ are not the same thing. In $R$ we solve one fixed-point equation in $\mathsf{Type}$ for each parameter $a : A$, whereas in $Q$ we solve a single fixed-point equation in $A \to \mathsf{Type}$. As a sanity check you should try to translate your examples to the above notation.

Lastly, the indexed recursive types are more general than the parameterized ones. Indeed, $$R = \lambda a : A \,.\, \mathsf{rec}_{\mathsf{Type}}\,(\lambda X : \mathsf{Type} \,.\, \Phi(a, X))$$ can be written as $$R = \mathsf{rec}_{A \to \mathsf{Type}}\,(\lambda F : A \to \mathsf{Type} \,.\, \lambda a : A \,.\, \Phi(a, F(a))).$$ In the other direction, an indexed recursive type $Q$ as above may be converted to a parameterized one precisely when $\Psi$ can be separated, which means that it has the form $\Psi(F) = \lambda a : A \,.\, \Psi'(a, F(a))$ for some $\Psi' : A \times \mathsf{Type} \to \mathsf{Type}$, in which case we have $$Q = \lambda a : A \,.\, \mathsf{rec}_{\mathsf{Type}}\,(\lambda X \,.\, \Psi'(a, X)).$$
Large deviations for stochastic 3D Leray-$\alpha$ model with fractional dissipation

1. School of Mathematical Sciences, Nankai University, 300071 Tianjin, China
2. School of Mathematics and Statistics, Jiangsu Normal University, 221116 Xuzhou, China

In this paper we establish the Freidlin-Wentzell large deviation principle for the stochastic 3D Leray-$\alpha$ model with general fractional dissipation and small multiplicative noise. This model is the stochastic 3D Navier-Stokes equations regularized through a smoothing kernel of order $\theta_1$ in the nonlinear term and a $\theta_2$-fractional Laplacian. The main result generalizes the corresponding LDP result for the classical stochastic 3D Leray-$\alpha$ model ($\theta_1 = 1$, $\theta_2 = 1$), and it is also applicable to the stochastic 3D hyperviscous Navier-Stokes equations ($\theta_1 = 0$, $\theta_2 \geq \frac{5}{4}$) and the stochastic 3D critical Leray-$\alpha$ model ($\theta_1 = \frac{1}{4}$, $\theta_2 = 1$).

Keywords: Large deviation principle, Leray-$\alpha$ model, fractional Laplacian, Navier-Stokes equation, weak convergence approach.

Mathematics Subject Classification: Primary: 60H15, 60F10; Secondary: 35Q30, 35R11.

Citation: Shihu Li, Wei Liu, Yingchao Xie. Large deviations for stochastic 3D Leray-$\alpha$ model with fractional dissipation. Communications on Pure & Applied Analysis, 2019, 18 (5): 2491-2509. doi: 10.3934/cpaa.2019113
Let me begin by noting that models such as the CAPM have been extensively falsified, beginning with Mandelbrot in 1963 and ending with the Fama-MacBeth testing in 1973. Other falsifications followed, but really, it has been a zombie theory since the early seventies. The difficulty is that the falsification has been like the Michelson-Morley experiments in the 1880s. Once they were done, classical physics was falsified, but Planck and Einstein were not around yet. I have a paper in peer review proposing a new stochastic calculus for options pricing and portfolio construction, but there is nothing resembling a consensus on a replacement. The math is very different.

The core of your problem can be understood in two equations. The first is $x_{t+1}=\mu{x}_t+\epsilon_{t+1}$. The second is $r_t=\frac{p_{t+1}q_{t+1}}{p_tq_t}-1$. For the first equation, for someone to be willing to invest, $\mu>1$. It was proven in 1958 that this problem has no meaningful solution in Frequentist statistics. The CAPM is a Frequentist model. For the CAPM, this means that $\tilde{w}=R\bar{w}+\varepsilon$ only has a solution when $R$ is known, which is an assumption of the math underlying the model. In other words, models such as the CAPM and Black-Scholes depend on it being true that people know the true values of the parameters. If they do not, it is known that these models cannot be solved. The argument has been that the market behaves as if it knew the parameters. A Bayesian derivation of the CAPM results in a Cauchy likelihood, not a normal one.

The second equation makes returns a function of data rather than a known parameter. As such, it must be derived. I have a paper deriving the distribution of returns. The distribution is the product of the ratio of prices and the ratio of volumes if dividends are ignored. Dividends create their own complications. However, from auction theory, it can be determined that the dominant element of the resulting mixture distribution is the truncated Cauchy distribution. It has no first moment, so $\beta$ cannot exist as described in the CAPM. That is different from saying that a number cannot be computed from data; it implies the non-existence of the population parameter as the model describes it.

I left out the logarithmic models for two reasons. First, the CAPM has additive errors. Second, the distribution created by taking the log of Cauchy-distributed variables is the hyperbolic secant distribution. It has no covariance structure in the multivariate form, so the CAPM is defeated anyway.

A short proof that the CAPM cannot function would be to note that if $q_t=q_{t+1}$, which rules out mergers and bankruptcies, and $\delta_t=0,\forall{t}$, so there are no dividends, then the gross return is $\frac{p_{t+1}}{p_t}$. Because stocks are sold in a double auction, there is no winner's curse. The rational behavior is to bid your expectation. Using the assumption of many buyers and sellers from the CAPM, the limiting distribution of the limit book must be normal. If you center the distribution of prices around the equilibrium prices and assume the market is approximately in equilibrium, ruling out things such as a liquidity crisis, then you end up with a bivariate normal distribution. The equilibrium return is $r^*=\frac{p_{t+1}^*}{p_t^*}$. Errors could be thought of as excess return, plus or minus.
Noting that the slope is the same as the tangent of the angle, you quickly get very close to the Cauchy distribution by observing that $$\theta=\tan^{-1}(r).$$ The arctangent is the kernel of the cumulative distribution function of the Cauchy distribution. The angle is, minimally, a transform of returns via the Cauchy distribution. Solving for the ratio of two normal distributions by integrating around the equilibrium rather than $(0,0)$, in other words treating the equilibrium as the location of no error, or conceptually by operating in the error space rather than the price space, gives you a Cauchy distribution. It is well known in statistics that the ratio of two normal distributions centered around zero is a Cauchy. By treating the equilibrium as zero, it still comes out as a Cauchy distribution, truncated at -100%. If there exists just one stock that didn't pay a dividend, then the CAPM is intrinsically falsified.

So the answer to your question is: there isn't a solution. If you want to optimize a portfolio, use the Kelly bet instead.

Visually: I took the returns for Carnival Cruises. I didn't subtract one; I just took the ratio. I performed a kernel density estimate, having removed the return from dividends to make it consistent with the above. I then overlaid the truncated Cauchy distribution fitted by maximum likelihood. As you can see, even if everything I wrote above is false, there cannot be a $\beta$ as described in the CAPM. There are actually three ways to improve the fit, but I used the method most quants would use instead.
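For readers who want to replicate that style of check, here is a rough Python sketch (entirely my own illustration; synthetic data stands in for the Carnival series, which is not reproduced here), comparing a kernel density estimate of gross returns with a maximum likelihood Cauchy fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for daily gross returns p_{t+1}/p_t of a single stock
gross = 1.0 + 0.01 * stats.cauchy.rvs(size=2000, random_state=rng)
gross = gross[gross > 0]              # crude truncation at -100% return

kde = stats.gaussian_kde(gross)       # nonparametric view of the data
loc, scale = stats.cauchy.fit(gross)  # maximum likelihood Cauchy fit

grid = np.linspace(0.95, 1.05, 201)
# kde(grid) and stats.cauchy.pdf(grid, loc, scale) can now be plotted together
print(loc, scale)                     # center near 1, small scale
```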
The basic idea of asymptotic notation is that "constant factors shouldn't matter". This has several different interpretations. Taking as our example big O, the simplest interpretation is: For all $C,D>0$ and functions $f,g$: $$ f = O(g) \Longleftrightarrow Cf = O(Dg). $$ This means that big O should be scale-invariant, that is, it should treat the functions $f$ and $Cf$ exactly the same. Stated differently, if $f_1,f_2$ differ by a constant factor, then $f_1,f_2$ should be treated the same. In practice, the condition "$f_1/f_2=\text{const}$" is too restrictive. Instead, we would like to treat two functions $f_1,f_2$ in the same way as long as $f_1/f_2$ is bounded, that is, as long as $C_1 \leq f_1/f_2 \leq C_2$ for some constants $C_1,C_2>0$. That is, we would like that: For all $C_1,C_2,D_1,D_2>0$ and functions $f_1,f_2,g_1,g_2$ satisfying $C_1 \leq f_1/f_2 \leq C_2$ and $D_1 \leq g_1/g_2 \leq D_2$: $$ f_1 = O(g_1) \Longleftrightarrow f_2 = O(g_2). $$ You can check that the definition of big O satisfies this invariance property. The constant in the definition of big O ensures that. Indeed, while $2n \leq n$ doesn't hold, $2n = O(n)$ does hold. The integer constant – $N_0$ – in the definition of big O isn't really necessary. It is there so we can accommodate functions which aren't positive or aren't even defined for small $n$. If your functions aren't pathological in that sense, then you can do away with $N_0$. For these non-pathological functions (extended to non-integer points), the various asymptotic notations have equivalent definitions in terms of limits: $f = O(g)$ if $\limsup_{n\to\infty} f(n)/g(n) < \infty$. $f = \Omega(g)$ if $\liminf_{n\to\infty} f(n)/g(n) > 0$. $f = \Theta(g)$ if both these conditions hold. $f = o(g)$ if $\lim_{n\to\infty} f(n)/g(n) = 0$. $f = \omega(g)$ if $\lim_{n\to\infty} f(n)/g(n) = \infty$.
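As a quick worked check of these limit characterizations (my own example): take $f(n) = 2n^2 + 3n$ and $g(n) = n^2$. Then $$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty}\left(2 + \frac{3}{n}\right) = 2,$$ which is finite and nonzero, so $f = O(g)$ and $f = \Omega(g)$, hence $f = \Theta(g)$; the constant factor 2 is exactly what the definitions ignore. On the other hand, with $h(n) = n\log n$, $$\lim_{n\to\infty} \frac{h(n)}{g(n)} = \lim_{n\to\infty} \frac{\log n}{n} = 0,$$ so $h = o(g)$.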
In formal language theory, an alphabet is a finite, non-empty set. The elements of the set are called symbols. A finite sequence of symbols \(a_{1} a_{2} \ldots a_{n}\) from an alphabet is called a string over that alphabet.

Example 3.1. \(\Sigma=\{0,1\}\) is an alphabet, and \(011, 1010,\) and 1 are all strings over \(\Sigma\).

Note that strings really are sequences of symbols, which implies that order matters. Thus 011, 101, and 110 are all different strings, though they are made up of the same symbols. The strings \(x=a_{1} a_{2} \ldots a_{n}\) and \(y=b_{1} b_{2} \ldots b_{m}\) are equal only if \(m = n\) (i.e. the strings contain the same number of symbols) and \(a_{i}=b_{i}\) for all \(1 \leq i \leq n\).

Just as there are operations defined on numbers, truth values, sets, and other mathematical entities, there are operations defined on strings. Some important operations are:

1. length: the length of a string x is the number of symbols in it. The notation for the length of x is \(|x|\). Note that this is consistent with other uses of \(|\ |\), all of which involve some notion of size: \(|\text{number}|\) measures how big a number is (in terms of its distance from 0); \(|\text{set}|\) measures the size of a set (in terms of the number of elements). We will occasionally refer to a length-n string. This is a slightly awkward, but concise, shorthand for "a string whose length is n".

2. concatenation: the concatenation of two strings \(x=a_{1} a_{2} \ldots a_{m}\) and \(y=b_{1} b_{2} \ldots b_{n}\) is the sequence of symbols \(a_{1} \ldots a_{m} b_{1} \ldots b_{n}\). Sometimes · is used to denote concatenation, but it is far more usual to see the concatenation of x and y denoted by xy than by x · y. You can easily convince yourself that concatenation is associative (i.e. \((x y) z=x(y z)\) for all strings \(x, y\) and \(z\)). Concatenation is not commutative (i.e. it is not always true that \(x y=y x\): for example, if \(x=a\) and \(y=b\) then \(x y=a b\) while \(y x=b a\) and, as discussed above, these strings are not equal).

3. reversal: the reverse of a string \(x=a_{1} a_{2} \ldots a_{n}\) is the string \(x^{R}=a_{n} a_{n-1} \ldots a_{2} a_{1}\).

Example 3.2. Let \(\Sigma=\{a, b\}, x=a, y=abaa,\) and \(z=bab\). Then \(|x|=1, |y|=4,\) and \(|z|=3\). Also, \(xx=aa\), \(xy=aabaa\), \(xz=abab\), and \(zx=baba\). Finally, \(x^{R}=a, y^{R}=aaba,\) and \(z^{R}=bab\).

By the way, the previous example illustrates a naming convention standard throughout language theory texts: if a letter is intended to represent a single symbol in an alphabet, the convention is to use a letter from the beginning of the English alphabet (a, b, c, d); if a letter is intended to represent a string, the convention is to use a letter from the end of the English alphabet (u, v, etc.).

In set theory, we have a special symbol to designate the set that contains no elements. Similarly, language theory has a special symbol ε which is used to represent the empty string, the string with no symbols in it. (Some texts use the symbol \(\lambda\) instead.) It is worth noting that \(|\varepsilon|=0,\) that \(\varepsilon^{R}=\varepsilon\), and that \(\varepsilon \cdot x=x \cdot \varepsilon=x\) for all strings \(x\). (This last fact may appear a bit confusing. Remember that ε is not a symbol in a string with length 1, but rather the name given to the string made up of 0 symbols. Pasting those 0 symbols onto the front or back of a string x still produces x.)
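Example 3.2 is easy to check mechanically; a minimal Python sketch (my illustration), using Python strings for strings over \(\Sigma\):

```python
x, y, z = "a", "abaa", "bab"

print(len(x), len(y), len(z))        # 1 4 3   (the |.| operation)
print(x + x, x + y, x + z, z + x)    # aa aabaa abab baba   (concatenation)
print(x[::-1], y[::-1], z[::-1])     # a aaba bab           (reversal)

# concatenation is associative but not commutative:
print((x + y) + z == x + (y + z))    # True
print(x + z == z + x)                # False: abab vs baba

# the empty string behaves like epsilon:
print("" + y == y and y + "" == y)   # True
```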
The set of all strings over an alphabet \(\Sigma\) is denoted \(\Sigma^{*}\). (In language theory, the symbol \(*\) is typically used to denote "zero or more", so \(\Sigma^{*}\) is the set of strings made up of zero or more symbols from \(\Sigma\).) Note that while an alphabet \(\Sigma\) is by definition a finite set of symbols, and strings are by definition finite sequences of those symbols, the set \(\Sigma^{*}\) is always infinite. Why is this? Suppose \(\Sigma\) contains \(n\) elements. Then there is one string over \(\Sigma\) with 0 symbols, \(n\) strings with 1 symbol, \(n^{2}\) strings with 2 symbols (since there are \(n\) choices for the first symbol and \(n\) choices for the second), \(n^{3}\) strings with 3 symbols, etc.

Example 3.3. If \(\Sigma=\{1\},\) then \(\Sigma^{*}=\{\varepsilon, 1, 11, 111, \ldots\}\). If \(\Sigma=\{a, b\},\) then \(\Sigma^{*}=\{\varepsilon, a, b, aa, ab, ba, bb, aaa, aab, \ldots\}\).

Note that \(\Sigma^{*}\) is countably infinite: if we list the strings as in the preceding example (length-0 strings, length-1 strings in "alphabetical" order, length-2 strings similarly ordered, etc.) then any string over \(\Sigma\) will eventually appear. (In fact, if \(|\Sigma|=n \geq 2\) and \(x \in \Sigma^{*}\) has length \(k,\) then \(x\) will appear on the list within the first \(\frac{n^{k+1}-1}{n-1}\) entries.)

We now come to the definition of a language in the formal language theoretical sense.

Definition 3.1. A language over an alphabet \(\Sigma\) is a subset of \(\Sigma^{*}\).

Thus, a language over \(\Sigma\) is an element of \(\mathcal{P}\left(\Sigma^{*}\right),\) the power set of \(\Sigma^{*}\). In other words, any set of strings (over alphabet \(\Sigma\)) constitutes a language (over alphabet \(\Sigma\)).

Example 3.4. Let \(\Sigma=\{0,1\}\). Then the following are all languages over \(\Sigma\):

\(L_{1}=\{011, 1010, 111\}\)
\(L_{2}=\{0, 10, 110, 1110, 11110, \ldots\}\)
\(L_{3}=\left\{x \in \Sigma^{*} \mid n_{0}(x)=n_{1}(x)\right\}\), where the notation \(n_0(x)\) stands for the number of 0's in the string x, and similarly for \(n_1(x)\).
\(L_{4}=\{x \mid x \text{ represents a multiple of } 5 \text{ in binary}\}\)

Note that languages can be either finite or infinite. Because \(\Sigma^{*}\) is infinite, it clearly has an infinite number of subsets, and so there are an infinite number of languages over \(\Sigma\). But are there countably or uncountably many such languages?

Theorem 3.1. For any alphabet \(\Sigma,\) the number of languages over \(\Sigma\) is uncountable.

This fact is an immediate consequence of the result, proved in a previous chapter, that the power set of a countably infinite set is uncountable. Since the elements of \(\mathcal{P}(\Sigma^{*})\) are exactly the languages over \(\Sigma\), there are uncountably many such languages.

Languages are sets and therefore, as for any sets, it makes sense to talk about the union, intersection, and complement of languages. (When taking the complement of a language over an alphabet \(\Sigma,\) we always consider the universal set to be \(\Sigma^{*},\) the set of all strings over \(\Sigma\).) Because languages are sets of strings, there are additional operations that can be defined on languages, operations that would be meaningless on more general sets. For example, the idea of concatenation can be extended from strings to languages. For two sets of strings S and T, we define the concatenation of S and T (denoted \(S \cdot T\) or just \(ST\)) to be the set \(ST=\{st \mid s \in S \wedge t \in T\}\).
For example, if \(S=\{ab, aab\}\) and \(T=\{\varepsilon, 110, 1010\},\) then \(ST=\{ab, ab110, ab1010, aab, aab110, aab1010\}\). Note in particular that \(ab \in ST\) because \(ab \in S, \varepsilon \in T,\) and \(ab \cdot \varepsilon=ab\). Because concatenation of sets is defined in terms of the concatenation of the strings that the sets contain, concatenation of sets is associative and not commutative. (This can easily be verified.)

When a set S is concatenated with itself, the notation SS is usually scrapped in favour of \(S^{2}\); if \(S^{2}\) is concatenated with \(S,\) we write \(S^{3}\) for the resulting set, etc. So \(S^{2}\) is the set of all strings formed by concatenating two (possibly different, possibly identical) strings from \(S\), \(S^{3}\) is the set of strings formed by concatenating three strings from \(S,\) etc. Extending this notation, we take \(S^{1}\) to be the set of strings formed from one string in \(S\) (i.e. \(S^{1}\) is \(S\) itself), and \(S^{0}\) to be the set of strings formed from zero strings in \(S\) (i.e. \(S^{0}=\{\varepsilon\}\)). If we take the union \(S^{0} \cup S^{1} \cup S^{2} \cup \ldots,\) then the resulting set is the set of all strings formed by concatenating zero or more strings from \(S,\) and is denoted \(S^{*}\). The set \(S^{*}\) is called the Kleene closure of \(S,\) and the \(*\) operator is called the Kleene star operator.

Example 3.5. Let \(S=\{01, ba\}\). Then

\(S^{0}=\{\varepsilon\}\)
\(S^{1}=\{01, ba\}\)
\(S^{2}=\{0101, 01ba, ba01, baba\}\)
\(S^{3}=\{010101, 0101ba, 01ba01, 01baba, ba0101, ba01ba, baba01, bababa\}\)

etc., so \(S^{*}=\{\varepsilon, 01, ba, 0101, 01ba, ba01, baba, 010101, 0101ba, \ldots\}\).

Note that this is the second time we have seen the notation something\(^{*}\). We have previously seen that for an alphabet \(\Sigma, \Sigma^{*}\) is defined to be the set of all strings over \(\Sigma\). If you think of \(\Sigma\) as being a set of length-1 strings, and take its Kleene closure, the result is once again the set of all strings over \(\Sigma,\) and so the two notions of \(*\) coincide.

Example 3.6. Let \(\Sigma=\{a, b\}\). Then

\(\Sigma^{0}=\{\varepsilon\}\)
\(\Sigma^{1}=\{a, b\}\)
\(\Sigma^{2}=\{aa, ab, ba, bb\}\)
\(\Sigma^{3}=\{aaa, aab, aba, abb, baa, bab, bba, bbb\}\)
\(\Sigma^{*}=\{\varepsilon, a, b, aa, ab, ba, bb, aaa, aab, aba, abb, baa, bab, \ldots\}\)

Exercises

1. Let \(S=\{\varepsilon, ab, abab\}\) and \(T=\{aa, aba, abba, abbba, \ldots\}\). Find the following: a) \(S^{2}\) b) \(S^{3}\) c) \(S^{*}\) d) \(ST\) e) \(TS\)

2. The reverse of a language \(L\) is defined to be \(L^{R}=\left\{x^{R} \mid x \in L\right\}\). Find \(S^{R}\) and \(T^{R}\) for the \(S\) and \(T\) in the preceding problem.

3. Give an example of a language \(L\) such that \(L=L^{*}\).
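For experimenting with these definitions and the exercises, here is a small Python sketch (my own illustration) of language concatenation, powers, and finite slices of the Kleene closure, reproducing Examples 3.3 and 3.5:

```python
from itertools import count, product

def concat(S, T):
    """Language concatenation: ST = { st | s in S and t in T }."""
    return {s + t for s in S for t in T}

def power(S, k):
    """S^k: concatenations of k strings from S, with S^0 = {epsilon}."""
    result = {""}
    for _ in range(k):
        result = concat(result, S)
    return result

S = {"01", "ba"}
print(sorted(power(S, 2)))    # ['0101', '01ba', 'ba01', 'baba'], as in Example 3.5
print(len(power(S, 3)))       # 8

# A finite slice of the Kleene closure S* (the full set is infinite):
star_slice = set().union(*(power(S, k) for k in range(4)))
print(len(star_slice))        # 1 + 2 + 4 + 8 = 15

# Sigma* itself, generated shortest-first as in Example 3.3:
def kleene_star(sigma):
    for length in count(0):
        for letters in product(sorted(sigma), repeat=length):
            yield "".join(letters)

gen = kleene_star({"a", "b"})
print([next(gen) for _ in range(7)])   # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```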
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter). Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals. I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $\sum a_n z^n$ is summable

Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n \geq 1.$ Show that $\lim_{n\to\infty} n!\,g_n(t) = 0,$ for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.

I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of some independent functions of the proper function space. I now obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0) I get a set of $n$ equations, with $n$ the number of coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity to solve for the coefficients. I have problems now formulating the question. But it strikes me that a direct solution of the equations can be circumvented and instead the values of the functional are directly obtained by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or so to say a more general principle.

If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ in the mid way of $x, y$, which is a palindrome and digitsum(z)=digitsum(x).

> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.

(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)

It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
You can also look at it in the time domain. Your received signal is $$s(t)=s_I(t)\cos(\omega_c t)+s_Q(t)\sin(\omega_c t)\tag{1}$$ where $s_I(t)$ and $s_Q(t)$ are the in-phase and quadrature components, and $\omega_c$ is the carrier frequency (in rad/s). We have to assume that at the receiver the carrier frequency and phase are known (coherent demodulation), or can be retrieved somehow. After demodulation we obtain two signals: $$\begin{align}s(t)\cos(\omega_ct)&=s_I(t)\cos^2(\omega_ct)+s_Q(t)\sin(\omega_ct)\cos(\omega_ct)\\&=\frac12\big[s_I(t)(1+\cos(2\omega_ct))+s_Q(t)\sin(2\omega_ct)\big]\tag{2}\end{align}$$ and $$\begin{align}s(t)\sin(\omega_ct)&=s_I(t)\cos(\omega_ct)\sin(\omega_ct)+s_Q(t)\sin^2(\omega_ct)\\&=\frac12\big[s_I(t)\sin(2\omega_ct)+s_Q(t)(1-\cos(2\omega_ct))\big]\tag{3}\end{align}$$ The low-pass filters filter out all components at $2\omega_c$, so at the outputs of the filters we're left with $$r_I(t)=\frac12s_I(t)\tag{4}$$ and $$r_Q(t)=\frac12s_Q(t)\tag{5}$$
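As a quick illustration of (1)-(5), here is a minimal numerical sketch; all signal parameters are made up for the example, and the only real requirement is that the low-pass cutoff sit between the baseband bandwidth and $2\omega_c$:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000.0                        # sample rate (Hz), assumed for illustration
fc = 10_000.0                         # carrier frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)

sI = np.cos(2 * np.pi * 200 * t)      # in-phase baseband signal (assumed)
sQ = np.sin(2 * np.pi * 300 * t)      # quadrature baseband signal (assumed)
s = sI * np.cos(2 * np.pi * fc * t) + sQ * np.sin(2 * np.pi * fc * t)   # eq. (1)

# mix with the (known) carrier and low-pass away the 2*fc terms, cf. (2)-(5)
b, a = butter(5, 2000 / (fs / 2))
rI = filtfilt(b, a, s * np.cos(2 * np.pi * fc * t))   # ~ sI/2
rQ = filtfilt(b, a, s * np.sin(2 * np.pi * fc * t))   # ~ sQ/2

mid = slice(200, -200)                # ignore filter edge transients
print(np.allclose(rI[mid], sI[mid] / 2, atol=0.01),
      np.allclose(rQ[mid], sQ[mid] / 2, atol=0.01))
```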
Find the distance between C and the midpoint of AC with $\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$, and set this result equal to $\sqrt{(x_3-x_1)^2+(y_3-y_1)^2}$ (the distance between the midpoint of AC and A). You have the unknowns $x_3, y_3$, and you also have the equation $7x + 2y = 11$, so 2 equations and 2 unknowns. So you solve it! Try it alone! Hope this helps!

The line AB has equation $7x + 2y = 11$. The point $(\frac{5}{2}, 10)$ $(x_1, y_1)$ is the midpoint of AC. Find the coordinates of point A. The point C has coordinates $(-5, \frac{25}{2})$ $(x_2, y_2)$.

Hello YEEEEEET!

Two-point form \(y=\frac{y_2-y_1}{x_2-x_1}(x-x_1)+y_1\\ y=\frac{12.5-10}{-5-2.5}(x-2.5)+10\\ y=-\frac{1}{3}(x-2.5)+10\\ y=-\frac{1}{3}x+10\frac{5}{6}\\ \color{blue}y=-\frac{1}{3}x+\frac{65}{6}\)

\(7x+2y=11\\ y=-3.5x+5.5\\ \color{blue}y=-\frac{7}{2}x+\frac{11}{2}\)

\(-\frac{1}{3}x+\frac{65}{6}=-\frac{7}{2}x+\frac{11}{2}\\ (-\frac{1}{3}+\frac{7}{2})x=\frac{11}{2}-\frac{65}{6}\\ \frac{19}{6}x=-\frac{32}{6}\\ 19x=-32\\ x=-\frac{32}{19}\\ \color{blue}x=-1\frac{13}{19}\)

\(y=-\frac{7}{2}x+\frac{11}{2}\\ y=\frac{-7x+11}{2}\\ y=\frac{-7\cdot (-\frac{32}{19})+11}{2}\\ y=\frac{\frac{224}{19}+11}{2}\\ y=\frac{433}{38}\\ \color{blue}y=11\frac{15}{38}\)

The coordinates of point A are [\(-1\frac{13}{19}\), \(11\frac{15}{38}\)]!
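A quick numeric re-check of the intersection computed above (this just re-solves the same two line equations, with the coordinates taken from the posted problem):

```python
import numpy as np

# line through C and the midpoint: y = -x/3 + 65/6  =>  x/3 + y = 65/6
# line AB:                          7x + 2y = 11
A = np.linalg.solve([[1 / 3, 1], [7, 2]], [65 / 6, 11])
print(A)   # [-1.6842..., 11.3947...] = (-32/19, 433/38)
```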
Let's say I have a density matrix in the usual basis $$ \rho = \left( \begin{array}{cccc} \frac{3}{14} & \frac{3}{14} & 0 & 0 \\ \frac{3}{14} & \frac{3}{14} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{4}{7} \\ \end{array} \right) $$ built from these two states $$|v_1\rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{c} 1\\ 1 \\ 0 \\ 0 \\ \end{array} \right);\quad|v_2\rangle=\left( \begin{array}{c} 0\\ 0 \\ 0 \\ 1 \\ \end{array} \right)$$ with weights $\frac{3}{7}$ and $\frac{4}{7}$ respectively, and two observables A and B: $$A=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \\ \end{array} \right) $$ $$B=\left( \begin{array}{cccc} 3 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ \end{array} \right) $$ If you measure $A$ and $B$ at the same time, what you are doing is measuring $A$ with some state and measuring $B$ with a state that is not necessarily the same, though it may be. So if I am trying to compute the probability of measuring 2 and 4, is that $p(2)\times p(4)$? That's $p(2) = \frac{4}{7}$ and $p(4) = \frac{11}{14}$, so obtaining the two at the same time is $\frac{4}{7}\times\frac{11}{14} = \frac{22}{49}$. What confuses me is why this is lower than $\frac{4}{7}$: since $|v_2\rangle$ has that chance of being measured, the other state only adds chances of measuring 4 with B. What's really going on here?
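One way to look at the numbers concretely (a sketch, assuming the standard rule that a simultaneous measurement of commuting observables is described by the product of their eigenprojectors):

```python
import numpy as np

rho = np.array([[3/14, 3/14, 0, 0],
                [3/14, 3/14, 0, 0],
                [0,    0,    0, 0],
                [0,    0,    0, 4/7]])

P_A2 = np.diag([0., 0., 1., 1.])     # projector onto the eigenspace A = 2
P_B4 = np.diag([0., 1., 0., 1.])     # projector onto the eigenspace B = 4

print(np.trace(rho @ P_A2))          # p(A=2)      = 4/7
print(np.trace(rho @ P_B4))          # p(B=4)      = 11/14
print(np.trace(rho @ P_A2 @ P_B4))   # p(A=2, B=4) = 4/7, not (4/7)(11/14)
```

The joint probability is not the product of the marginals, because the two outcomes are correlated through $\rho$; multiplying $p(2)$ and $p(4)$ would only be correct for measurements on independently prepared copies of the state.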
Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel $k$-means Clustering

Taisuke Yasuda, David Woodruff, Manuel Fernandez; Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7055-7063, 2019.

Abstract

Kernel methods generalize machine learning algorithms that only depend on the pairwise inner products of the dataset by replacing inner products with kernel evaluations, a function that passes input points through a nonlinear feature map before taking the inner product in a higher dimensional space. In this work, we present nearly tight lower bounds on the number of kernel evaluations required to approximately solve kernel ridge regression (KRR) and kernel $k$-means clustering (KKMC) on $n$ input points. For KRR, our bound for relative error approximation of the argmin of the objective function is $\Omega(nd_{\mathrm{eff}}^\lambda/\varepsilon)$ where $d_{\mathrm{eff}}^\lambda$ is the effective statistical dimension, tight up to a $\log(d_{\mathrm{eff}}^\lambda/\varepsilon)$ factor. For KKMC, our bound for finding a $k$-clustering achieving a relative error approximation of the objective function is $\Omega(nk/\varepsilon)$, tight up to a $\log(k/\varepsilon)$ factor. Our KRR result resolves a variant of an open question of El Alaoui and Mahoney, asking whether the effective statistical dimension is a lower bound on the sampling complexity or not. Furthermore, for the important input distribution case of mixtures of Gaussians, we provide algorithms that bypass the above lower bounds.

Yasuda, T., Woodruff, D. & Fernandez, M. (2019). Tight Kernel Query Complexity of Kernel Ridge Regression and Kernel $k$-means Clustering. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:7055-7063.
Authors: Zdeněk Skalák. Source: [J] Annales Polonici Mathematici (IF 0.306), 2019, Vol. 122, pp. 193-199 (IMPAS). Abstract: We show that if $u$ is a solution of the Navier–Stokes equations in the whole three-dimensional space and $\partial _3 u \in L^p(0,T;L^q(\mathbb {R}^3))$, $T \gt 0$, where $2/p+3/q=1+3/q$ and $q \in (3,10/3]$, then $u$ is regular on $(0,T]$.

Authors: Brandon Williams. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 347-365 (IMPAS). Abstract: We prove that the generating function of overpartition $M2$-rank differences is, up to coefficient signs, a component of the vector-valued mock Eisenstein series attached to a certain quadratic form. We use this to compute analogs of the class number relations for $M2$-rank ...

Authors: Zhenchao Ge, Thái Hoàng Lê. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 381-390 (IMPAS). Abstract: We generalize an argument of Wirsing to vector spaces over finite fields, giving a quantitative bound for the size of the sumset of an arbitrary set $A$ and a basis of the space. Then we use it to give a simpler proof of a result of Sanders on the existence of subspaces in the di...

Authors: Mirjana Milijević. Source: [J] Annales Polonici Mathematici (IF 0.306), 2019, Vol. 122, pp. 181-192 (IMPAS). Abstract: We examine statistical hypersurfaces with shape operators having one or two constant eigenvalues, and establish the relation between the eigenvalues and the constant holomorphic sectional curvature of ambient manifolds.

Authors: Tomohiro Ooto. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 179-189 (IMPAS). Abstract: As an analogue of Mahler's classification for real numbers, Bundschuh introduced a classification for Laurent series over a finite field, divided into $A,S,T,U$-numbers. It is known that each of the sets of $A,S,U$-numbers is nonempty. On the other hand, the existence of $T$...

Authors: Chung-Chuan Chen, Kui-Yo Chen, Serap Öztop ... Source: [J] Annales Polonici Mathematici (IF 0.306), 2019, Vol. 122, pp. 129-142 (IMPAS). Abstract: Let $G$ be a locally compact group, and let $w$ be a weight on $G$. Let $\varPhi $ be a Young function. We give some characterizations for translation operators to be topologically transitive and chaotic on the weighted Orlicz space $L_w^\varPhi (G)$. In particular, transitivity ...

Authors: Benjamin Hutz, Michael Stoll. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 283-308 (IMPAS). Abstract: We develop an algorithm that determines, for a given squarefree binary form $F$ with real coefficients, a smallest representative of its orbit under $\operatorname{SL}(2,\mathbb Z)$, either with respect to the Euclidean norm or with respect to the maximum norm of the coefficient vec...

Authors: SoYoung Choi, Bo-Hae Im. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 190, pp. 57-74 (IMPAS). Abstract: We prove that certain infinitely many weakly holomorphic modular forms for $\Gamma_0(2)$ have all zeros on a part of a certain geodesic but not on the boundary of the fundamental domain $\mathfrak{F}$ of $\Gamma_0 (2)$, and prove that the zeros of one of these forms interlace wit...

Authors: Li-Xia Dai, Hao Pan. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 147-163 (IMPAS). Abstract: We prove several extensions of the Erdős–Fuchs theorem. For example, for two subsets $A=\{a_1,a_2,\ldots\}$ and $B=\{b_1,b_2,\ldots\}$ of ${\mathbb N}$, if $$a_i-b_i=o(a_i^{1/4})$$ as $i\to \infty$, then $$|\{(a,b): a\in A,\, b\in B,\, a+b\leq n\}|=cn+o(n^{1/4})$$ is impossible ...

Authors: Biswajyoti Saha, Jyoti Sengupta. Source: [J] Acta Arithmetica (IF 0.472), 2019, Vol. 189, pp. 165-178 (IMPAS). Abstract: We address the problem of determining a ${\bf GL}(2)$ Maass cusp form (of weight zero) from the central values of the Rankin–Selberg $L$-functions $L(1/2,f\otimes u_j)$ for infinitely many primes $p$ and all the normalised Maass cusp forms $u_j$ which are newforms for $\varG...
Recurrence relation

I am trying to find approximate solutions $T(n)$ of the recurrence relation $$ p\ T(n-1) - \left[p+q+\overline{S} + \varepsilon \tilde{S}(n)\right]T(n) + q\ T(n+1) = 0,\\ \text{for } n=2,\dotsc,M-1 $$ with the boundary conditions $$ p\ T(1) - q\ T(2) = \alpha,\\ p\ T(M-1) - (\beta + q)\ T(M) = 0, $$ where $\varepsilon \ll 1$ is a small parameter of the system, $p,q,\alpha,\beta,\overline{S} \ge 0$ are all constants, with $p\ge q$, and the coefficients $\tilde{S}(n)$ are "sparse" in the following sense: if $n = i\,\Delta$ for some positive integer $i$, then $\tilde{S}(n) > 0$; otherwise, $\tilde{S}(n) = 0$. Here $\Delta \in \mathbb{N}$ is the separation between the indices at which $\tilde{S}$ is nonzero; we denote the number of such indices by $N$ and assume $N \ll M$. Ideally, I would like to find solutions as a power series in $\varepsilon$ and/or $N$ (or some combination), valid as $\varepsilon \to 0$ and $M, N \to \infty$.

What I have tried

My attempts have focused on expressing the ratio $r_n := T(n+1)/T(n)$ as the finite continued fraction (following e.g. Risken Ch. 9), $$ -q\ r_n = \cfrac{a}{ b_{n+1} + \cfrac{a}{ b_{n+2} + \cfrac{a}{ b_{n+3} + \cfrac{\ddots}{ \ddots \cfrac{a}{ b_{M-1} + \cfrac{a}{b_M} }}}}}, $$ where \begin{align} &a = -p\ q,\\ &b_k = p + q + \overline{S} + \varepsilon\tilde{S}(k), \quad \text{ for } k = n+1,\dotsc,M-1,\\ &b_M = \beta + q. \end{align} Then one can use the Euler formula, after a little bit of rearranging, to write the continued fraction as a sum of products: $$ \frac{b_{n+1}}{p} r_n = 1 + \sum_{j=n+1}^{M-1}\left(\prod_{k=n+1}^j R_k \right),$$ where \begin{align} R_{n+1} &= \frac{p\ q}{\left[p+q+\overline{S}+\varepsilon\tilde{S}(n+1)\right]\left[p+q+\overline{S}+\varepsilon\tilde{S}(n+2)\right]},\\ R_{n+2} &= \frac{p\ q}{p+q+\overline{S}+\varepsilon\tilde{S}(n+3)},\\ R_{k} &= p\ q\frac{p+q+\overline{S}+\varepsilon\tilde{S}(k-1)}{p+q+\overline{S}+\varepsilon\tilde{S}(k+1)}, \text{ for } k = n+3,\dotsc,M-1. \end{align}

This is about as far as I got. The idea is to use the sparsity (as described above) of $\tilde{S}(n)$ to simplify the solution when expressed as a sum of products, maybe making it possible to extract an approximate solution. However, it is not clear to me exactly how to proceed. Perhaps someone can see a way forward with this, or suggest a different approach. Thanks.
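Whatever the asymptotics end up looking like, an exact numerical reference to test them against is cheap: the recurrence plus the two boundary conditions is just a tridiagonal linear system. A sketch, with entirely made-up parameter values:

```python
import numpy as np

def solve_T(M, p, q, alpha, beta, Sbar, eps, Delta, Stilde=1.0):
    """Direct solve of the recurrence as a linear system; unknowns T(1)..T(M)."""
    A = np.zeros((M, M))
    b = np.zeros(M)
    A[0, 0], A[0, 1], b[0] = p, -q, alpha               # p T(1) - q T(2) = alpha
    for n in range(2, M):                                # rows for n = 2 .. M-1
        S_n = eps * Stilde if n % Delta == 0 else 0.0    # the sparse coefficient
        A[n - 1, n - 2] = p
        A[n - 1, n - 1] = -(p + q + Sbar + S_n)
        A[n - 1, n] = q
    A[M - 1, M - 2] = p                                  # p T(M-1) - (beta+q) T(M) = 0
    A[M - 1, M - 1] = -(beta + q)
    return np.linalg.solve(A, b)

T = solve_T(M=200, p=1.0, q=0.8, alpha=1.0, beta=0.5, Sbar=0.1, eps=0.01, Delta=25)
```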
linebreak problem

Hi all,

I am typesetting a book and I have an issue with the \linebreak command. Sometimes I use the minipage environment to place an image alongside some text, and the text in the minipage environment is part of a larger paragraph, so I've learned to use the \linebreak command at the end of the minipage text to stretch the last line to the margin, so that it looks continuous with the text that comes after the minipage environment. However, now I have a new problem, which is that the \linebreak command seems to want to insert a new (blank) line after the command, so although the left-right alignment looks okay, the vertical alignment is off by a single blank line. I've tried the \vspace command but it doesn't usually do what I need it to do. Any suggestions?

Thanks, Ben

Here's an example of a piece of code which causes this problem. Obviously, you won't be able to see the image to the right of the initial text block, but you should be able to see the spacing problem with the text that comes after.

\noindent
\begin{minipage}{195pt}
As we will see, measurements of the distribution of scattered beam particles can lead to information about the structure or distribution of the target density $\rho(\vec{r})$. This is a remarkable fact, and is related to Fourier optics, in which the pattern of diffracted light on a screen can be used to infer information about the shape of a barrier through which the light was passed (for a review, see Appendix~\ref{ch:Fourier}). In our present example, particle beams, which are of course quantum mechanical probability waves, will be used to infer the shape of the target at which they are aimed.\linebreak
\end{minipage}
\begin{minipage}{180pt}
\begin{center}
\includegraphics[keepaspectratio,angle=90,height=.8\textheight,width=.8\textwidth]{./graphics/diffraction}
\end{center}
\end{minipage}
\noindent
We begin by revisiting Rutherford scattering, but now considering a nucleus of
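For what it's worth, one possible alternative to \linebreak (a sketch, not tested against this particular document): end the paragraph inside the minipage with a locally zeroed \parfillskip, which asks TeX to justify the paragraph's last line to the full width without requesting an extra line:

```latex
% inside the minipage, end the paragraph like this instead of using \linebreak:
... at which they are aimed.{\parfillskip=0pt\par}
\end{minipage}
```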
I wrote a Monte Carlo simulation of the 2D Lotka-Volterra model on a discrete lattice (with periodic boundary conditions). A video that I produced (which images the system after some number of Monte Carlo steps) can be found here. I am wondering, what is the best way to track the spatial "speed" of the oscillations of the two species across the lattice from one video frame to the next? My model uses the following rates, where $A$ denotes a predator node on the lattice, $B$ denotes a prey node on the lattice, and $\emptyset$ denotes an empty space on the lattice: $ A + B \xrightarrow{\lambda} A+A,\;(predator\;consuming\; prey)\\ B + \emptyset \xrightarrow{\sigma} B+B,\;(prey\;growing\;into\;empty\;adjacent\;spaces)\\ A \xrightarrow{\mu} \emptyset,\;(predators\;dying\;off)\\ $ For each Monte Carlo step I choose one space on the lattice randomly, do a cumsum(each neighbor's rate./sum(neighbors' rates)), and then generate a random number to decide which of the three reactions above will be done. In my video, $\lambda=1.6$, $\sigma=50$, and $\mu=0.1$. I am not sure if I should try to track a certain shape in each frame and measure its deformation, or if I should try to use some other sort of spatial metric. To be clear, the data for each video frame is stored in an N by N matrix (with values 0 for empty, 1 for predator, and 2 for prey), so I am not limited to an analysis of the video I provided; it is only for explanation. My source code for this project can be found here. What is the typical way of approaching this sort of problem? Perhaps someone can provide a reference. Thanks.
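One simple option along these lines is phase correlation between consecutive frames (a sketch; it assumes the pattern between two frames is mostly a rigid translation, which is only roughly true for these waves, and it exploits the periodic boundaries):

```python
import numpy as np

def frame_shift(f1, f2):
    """Estimate the (dy, dx) displacement between two lattice frames
    by phase correlation (FFT-based, so periodic boundaries are natural)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12                      # keep only phase information
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = f1.shape
    # map peak location to signed shifts
    return (dy - n if dy > n // 2 else dy), (dx - m if dx > m // 2 else dx)
```

Applied to, say, the prey field (`frame == 2`) of two frames spaced $k$ Monte Carlo steps apart, this gives a displacement, and hence a speed in lattice sites per step as `np.hypot(dy, dx) / k`.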
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (Roughly: "The 'path' only comes into existence because we observe it.") Google Translate says this means something ...

@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.

Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...

@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)

I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of $26^7 \approx 8$ billion characters to start getting a good chance that COVFEFE appears.

@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$

@BalarkaSen sorry, if you were in our discord you would know

@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.

@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.

@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.

Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.

Since I come from a rock n' roll background, the first thing is that I prefer tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition.

Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational-waves thing of reducing $R_{\mu\nu} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Woodin

Woodin cardinals are a generalization of the notion of strong cardinals and have been used to calibrate the exact proof-theoretic strength of the Axiom of Determinacy.
Woodin cardinals are weaker than superstrong cardinals in consistency strength and fail to be weakly compact in general, since they are not $\Pi_1^1$ indescribable. Their exact definition has several equivalent but different characterizations, each of which is somewhat technical in nature. Nevertheless, an inner model theory encapsulating infinitely many Woodin cardinals and slightly beyond has been developed.

Shelah cardinals

Shelah cardinals were introduced by Shelah and Woodin as a weakening of the necessary hypothesis required to show several regularity properties of sets of reals hold in the model $L(\mathbb{R})$ (e.g., every set of reals is Lebesgue measurable and has the property of Baire, etc.). In slightly more detail, Woodin had established that the Axiom of Determinacy (a hypothesis known to imply regularity properties for sets of reals) holds in $L(\mathbb{R})$ assuming the existence of a non-trivial elementary embedding $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ with critical point $<\lambda$. This axiom is known to be very strong and its use was first weakened to that of the existence of a supercompact cardinal. Following the work of Foreman, Magidor and Shelah on saturated ideals on $\omega_1$, Woodin and Shelah subsequently isolated the two large cardinal hypotheses which bear their name and turn out to be sufficient to establish the regularity properties of sets of reals mentioned above.

A cardinal $\kappa$ is Shelah if, for every function $f:\kappa\to\kappa$, there is a non-trivial elementary embedding $j:V\prec M$ with $M$ a transitive class, $\kappa$ the critical point of $j$, and $M$ containing the initial segment $V_{j(f)(\kappa)}$.

It turns out that Shelah cardinals have many large cardinals below them that suffice to establish the regularity properties, and as a result have mostly faded from view in the large cardinal research literature.

Woodin cardinals

Woodin cardinals are a refinement of Shelah cardinals. The primary difference is the requirement of a closure condition on the functions $f:\kappa\rightarrow\kappa$ and associated embeddings. Woodin cardinals are not themselves the critical points of any of their associated embeddings and hence need not be measurable. They are, however, Mahlo cardinals (and hence also inaccessible), since the set of measurable cardinals below a Woodin cardinal must be stationary.

Elementary Embedding Characterization

A cardinal $\kappa$ is "Woodin" if, for every function $f:\kappa\to\kappa$, there is some $\gamma<\kappa$ such that $f$ is closed under $\gamma$ and there is an associated non-trivial elementary embedding $j:V\prec M$ with critical point $\gamma$ where $M$ contains the initial segment $V_{(j(f))(\gamma)}$.

If $\kappa$ is Woodin then for any subset $A\subseteq V_\kappa$ some $\alpha <\kappa$ is $\gamma$-strong for every $\gamma <\kappa$. Intuitively this means that there are elementary embeddings $j_\gamma$ which preserve $A$, i.e., $A\cap V_{\alpha+\gamma}=j_\gamma(A)\cap V_{\alpha+\gamma}$, have critical point $\alpha$, and whose target transitive class contains the initial segment $V_{\alpha+\gamma}$.

There is a hierarchy of Woodin-type cardinals.

Analogue of Vopěnka's Principle

This material will be added later.

Stationary Tower Forcing

Role in $\Omega$-Logic and the Resurrection Theorem [1]

References

1. Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
Plane angle: $$a = s / r$$ Arc length ($s$) over radius ($r$).

Solid angle: $$\omega = \frac{A}{r^2}$$ Area subtended ($A$) over squared radius ($r^2$). In spherical coordinates, $$dA = r^2 \sin(\theta)\, d\theta\, d\phi$$ $$d\omega = \frac{dA}{r^2} = \sin(\theta)\, d\theta\, d\phi$$

Radiant energy ($Q$): Energy of electromagnetic radiation.

Radiant flux: $$\Phi = \frac{dQ}{dt}$$ Energy per unit time. Unit: Watt $[W]$. Radiant energy emitted, reflected, transmitted or received, per unit time. This is sometimes also called "radiant power". [1]

Radiant intensity: $$I_\Omega = \frac{d\Phi}{d\omega}$$ Radiant flux per unit solid angle. Unit: Watt per steradian $[W \cdot sr^{-1}]$. Radiant flux emitted, reflected, transmitted or received, per unit solid angle. This is a directional quantity. [1]

Irradiance: $$E = \frac{d\Phi}{dA}$$ Radiant flux per unit area. Unit: Watt per square meter $[W \cdot m^{-2}]$. Radiant flux received by a surface per unit area. [1]

Radiance: $$L_\Omega = \frac{d^2 \Phi}{d\omega\, dA \cos(\theta)}$$ Radiant flux per unit solid angle per unit projected area. Unit: Watt per steradian per square meter $[W \cdot sr^{-1} \cdot m^{-2}]$. Radiant flux emitted, reflected, transmitted or received by a surface, per unit solid angle per unit projected area. This is a directional quantity. This is sometimes also confusingly called "intensity". [1]
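As a quick worked example of the solid-angle element above: integrating $d\omega$ over a cone of half-angle $\theta_0$ about the polar axis gives
$$\omega = \int_0^{2\pi}\!\int_0^{\theta_0} \sin(\theta)\, d\theta\, d\phi = 2\pi\,(1-\cos\theta_0),$$
which recovers the full sphere, $\omega = 4\pi$, at $\theta_0 = \pi$.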
Every liquid in reality has a small but important compressible aspect. The ratio of the applied pressure to the resulting fractional change in volume is referred to as the bulk modulus of the material; the compressibility of the substance is the reciprocal of the bulk modulus. For example, the average bulk modulus for water is \(2.2 \times 10^9\) \(N/m^2\). At a depth of about 4,000 meters, the pressure is about \(4 \times 10^7\) \(N/m^2\). The fractional volume change is only about 1.8% even under this pressure; nevertheless, it is a change. The amount of compression of almost all liquids is seen to be very small, as given in the book "Fundamentals of Compressible Flow." The mathematical definition of the bulk modulus is as follows: \[ B_T = \rho \,{\dfrac{\partial P}{\partial \rho}} \tag{16} \] In physical terms the speed of sound can be written as

Liquid/Solid Sound Speed \[ c = \sqrt{\dfrac{elastic\;\; property }{ inertial\;\; property} } = \sqrt{\dfrac{B_T }{ \rho}} \tag{17} \] For example, for water \[ \nonumber c = \sqrt{\dfrac{2.2 \times 10^9\, N /m^2 }{ 1000\, kg /m^3}} \approx 1483\, m/s \tag{18} \] This value agrees well with the measured speed of sound in water, 1482 m/s at \(20^{\circ}C\). A list with various typical velocities for different liquids can be found in "Fundamentals of Compressible Flow" by this author. The interesting topic of sound in a liquid of variable compressibility is also discussed in the above book. It can be shown that the sound speed in solids and slightly compressible liquids is expressed by the same relation (17). In summary, the speed of sound in liquids is about 3 to 5 times the speed of sound in gases.

Contributors

Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later, or the Potto license.
For systems without dependent types, like the Hindley-Milner type system, the types correspond to formulas of intuitionistic logic. There we know that its models are Heyting algebras, and in particular, to disprove a formula, we can restrict to one Heyting algebra in which each formula is represented by an open subset of $\mathbb{R}$. For example, if we want to show that $\forall\alpha.\alpha\lor(\alpha\rightarrow\bot)$ is not inhabited, we construct a mapping $\phi$ from formulas to open subsets of $\mathbb{R}$ by defining: \begin{align} \phi(\alpha) &= (-\infty, 0) \end{align} Then \begin{align} \phi(\alpha\rightarrow\bot) &= \mbox{int}( [0, \infty) ) \\ & = (0,\infty) \\ \phi(\alpha\lor(\alpha\rightarrow\bot)) &= (-\infty, 0) \cup (0,\infty) \\ &= \mathbb{R} \setminus \{0\}. \end{align} This shows that the original formula cannot be provable, since we have a model where it's not true, or equivalently (by the Curry-Howard isomorphism) the type cannot be inhabited. Another possibility would be to use Kripke frames. Are there any similar methods for systems with dependent types? Like some generalization of Heyting algebras or Kripke frames? Note: I'm not asking for a decision procedure; I know there cannot be any. I'm asking just for a mechanism that allows one to witness unprovability of a formula - to convince someone that it's unprovable.
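The open-set computation above is easy to mechanize; here is a small sketch with sympy's set machinery, computing the Heyting negation as the interior of the complement:

```python
from sympy import Interval, S, oo

alpha = Interval.open(-oo, 0)           # phi(alpha)
compl = S.Reals - alpha                 # set complement: [0, oo)
neg_alpha = compl - compl.boundary      # interior of the complement: (0, oo)

excluded_middle = alpha.union(neg_alpha)   # phi(alpha or (alpha -> bot))
print(excluded_middle)                     # Union((-oo, 0), (0, oo))
print(excluded_middle == S.Reals)          # False: the formula is not valid
```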
Answer

a) $M=\frac{2\pi\rho_0wR^2}{3}$
b) $I=\frac{3MR^2}{5}$

Work Step by Step

a) We know that the mass is equal to the integral of the density over the volume. Slicing the disk into thin cylindrical shells of radius $r$, width $dr$, and thickness $w$, we find: $M = \int_0^R \frac{\rho_0 r}{R}\, 2\pi r w\, dr = \frac{2\pi\rho_0wR^2}{3}$

b) Using the definition of moment of inertia, we find: $I = \int_0^R r^2\, \frac{\rho_0 r}{R}\, 2\pi r w\, dr = \frac{2\pi\rho_0 w R^4}{5} = \frac{3MR^2}{5}$
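A quick symbolic check of both integrals (assuming, as the worked solution does, a disk of thickness $w$ with density $\rho(r)=\rho_0 r/R$ sliced into thin shells):

```python
from sympy import symbols, integrate, pi, simplify

r, R, w, rho0 = symbols('r R w rho0', positive=True)
dm = (rho0 * r / R) * 2 * pi * r * w     # mass of a shell of radius r, width dr

M = integrate(dm, (r, 0, R))
I = integrate(r**2 * dm, (r, 0, R))
print(simplify(M))                  # 2*pi*R**2*rho0*w/3
print(simplify(I / (M * R**2)))     # 3/5, i.e. I = 3*M*R**2/5
```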
Homework Statement

The electric field outside and an infinitesimal distance away from a uniformly charged spherical shell, with radius R and surface charge density σ, is given by Eq. (1.42) as $\sigma/\epsilon_0$. Derive this in the following way. (a) Slice the shell into rings (symmetrically located with respect to the point in question), and then integrate the field contributions from all the rings. You should obtain the incorrect result of ##\frac{\sigma}{2\epsilon_0}##. (b) Why isn't the result correct? Explain how to modify it to obtain the correct result of ##\frac{\sigma}{\epsilon_0}##. Hint: You could very well have performed the above integral in an effort to obtain the electric field an infinitesimal distance inside the shell, where we know the field is zero. Does the above integration provide a good description of what's going on for points on the shell that are very close to the point in question?

Homework Equations

Coulomb's Law

Hi! I need help with this problem. I tried to do it the way you can see in the picture. I then have this: ##dE_z=dE\cdot \cos\theta##, thus ##dE_z=\frac{\sigma dA}{4\pi\epsilon_0 L^2}\cos\theta=\frac{\sigma 2\pi L^2\sin\theta d\theta}{4\pi\epsilon_0 L^2}\cos\theta##. Then I integrated and ended up with ##E=\frac{\sigma}{2\epsilon_0}\int \sin\theta\cos\theta d\theta##. The problem is that I don't know what the limits of integration are. I first tried with ##\pi##, but I got 0. What am I doing wrong?
Why do we need the axiom of countable choice to prove the following theorem: every compact metric space is second countable? In which step is it "hidden"? Thank you for any help.

Usually the proof would go like this: For every $n$ define $\mathcal U_n=\{B(x,\frac1n)\mid x\in X\}$; this is clearly an open cover, so it has a finite subcover $\mathcal V_n$. Finally we can show that $\bigcup\mathcal V_n$ is a basis for the topology.

Countable choice is used twice here: once when choosing a finite subcover $\mathcal V_n$ for every $n$ simultaneously, and once when arguing that the countable union $\bigcup\mathcal V_n$ of finite sets is countable. Both things are a consequence of the axiom of countable choice, and cannot be proved in general without it.

In the following paper the authors show that for compact metric spaces the statement that the space is separable is equivalent to the statement that it is second-countable. This makes it easier to find a counterexample, as non-separable compact metric spaces are easier to come by.

Keremedis, Kyriakos; Tachtsis, Eleftherios. "Compact metric spaces and weak forms of the axiom of choice." MLQ Math. Log. Q. 47 (2001), no. 1, 117–128.

Models where these fail (i.e. there is a compact metric space which is not second-countable) are also given.
I'm taking Discrete Math this semester. While I understand the mechanics of proofs, I find that I must refine my understanding of how to work them. To that end, I'm working through some extra problems on spring break. Please read over this proof I did from an exercise from the book. I apologize in advance for poor formatting. I just couldn't figure out how to make this one big block of LaTeX commands. I'm still learning. Let A, B, C be subsets of a Universal set U. Given $A \cap B \subseteq C \wedge A^c \cap B \subseteq C \Rightarrow B \subseteq C$ Proof: Part 1: $$ \begin{array}{rcl} B & = & (A \cap B) \cup (A^c \cap B) \\ & = & (A \cap B) \cup B , \text{definition of intersection} \\ & = & B \, \square \end{array} $$ Part 2:Part 1 states $B = (A \cap B) \cup (A^c \cap B)$. By assumption, $(A \cap B) \subseteq C \wedge (A^c \cap B) \subseteq C$ $\therefore B \subseteq C$ Thanks Andy
Homework 6

Professor Bell, could you post the notes from Friday? Thanks. --Yu Suo 16:31, 18 October 2009 (UTC)

Yu, they are now on the MA 425 Home Page under Professor Bell.

You showed in class that we can't show that the integral around the curved portion for problem VI.12.2 goes to zero using the basic estimate, because when $ \theta = \pi/4 $ it turns out to be one. Can we use the basic estimate method for VI.12.1, because now $ \theta= \pi/8 $, which should not cause a problem? --Kevin Fernandes 10:05, 20 October 2009 (UTC)

Kevin, yes, the basic estimate works just fine on the curved part for VI.12.1. --Steve Bell

Problem 12.1
I) Integrate along the real axis. Let $ z = t $.
II) Integrate along the curve from $ t=0 $ to $ t=\frac{\pi}{8} $ and show that this tends to zero.
III) Integrate along the $ \frac{\pi}{8} $ line. Since the total integral along the closed curve equals zero, this integral must be the negative of the integral found in (I). To do this integral, let $ z=t\exp(i\frac{\pi}{8}) $. The real part is what we are looking for. Hint: $ \cos(\frac{\pi}{8})=\frac{1}{2}\sqrt{2+\sqrt{2}} $, and $ \sin(\frac{\pi}{8})=\frac{1}{2}\sqrt{2-\sqrt{2}} $. -Alex Krzywda

Note: In VI.12.1, it will be necessary to make a simple change of variables $ u=t/\sqrt{\sqrt{2}} $ to get the integral in the final answer. --Steve Bell

I got $ \frac{\sqrt{\pi}}{2\sqrt{2}} $ for both of the integrals in the second problem. Did anyone get anything besides that? --Kevin Fernandes 16:34, 20 October 2009 (UTC)

Yep, that looks right. Also, another interesting way to solve the problem is with the function $ e^{iz^2} $ instead of $ e^{-z^2} $, although the two quickly follow from each other. --Matt Davis
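A quick numerical sanity check of that value via scipy's Fresnel integrals (scipy normalizes them as $C(x)=\int_0^x \cos(\pi t^2/2)\,dt$, so a rescaling by $\sqrt{\pi/2}$ recovers $\int_0^\infty \cos(t^2)\,dt$):

```python
import numpy as np
from scipy.special import fresnel

S_inf, C_inf = fresnel(1e6)               # S(x) and C(x) both tend to 1/2
print(np.sqrt(np.pi / 2) * C_inf)         # ~ 0.6266571
print(np.sqrt(np.pi) / (2 * np.sqrt(2)))  # sqrt(pi)/(2*sqrt(2)) ~ 0.6266571
```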
Question - When Are the Convolution Operator (Kernel) and the Deconvolution Operator (Kernel) the Same?

Discrete convolution (cyclic) is described by a circulant matrix. Circulant matrices are diagonalized by the DFT matrix (denoted $ F $). So, given a convolution kernel $ g $ with matrix form $ G $, one has the diagonalization: $$ G = {F}^{H} \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) F $$ where $ \mathcal{F} \left( g \right) $ is the Discrete Fourier Transform (DFT) of the kernel $ g $. In the general case the deconvolution is given by: $$ x = G h \Rightarrow {G}^{-1} x = {G}^{-1} G h = h $$ where $ {G}^{-1} $ is the deconvolution operator (the inverse of the convolution operator $ G $). In our case we're looking for $ G = {G}^{-1} \Rightarrow G G = I $. Utilizing the diagonalization of discrete (cyclic) convolution operators by the DFT matrix (which is a unitary matrix): $$ G G = {F}^{H} \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) F {F}^{H} \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) F = {F}^{H} \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) F $$ Namely we need $ \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) \operatorname{diag} \left( \mathcal{F} \left( g \right) \right) = I $, which means $ \forall i, \, {\mathcal{F} \left( g \right)}_{i} \in \left\{ -1, 1 \right\} $; that is, the Fourier transform of $ g $ takes only the values $-1$ or $1$. So there is a set of kernels $ \mathcal{S} $ defined by: $$ \mathcal{S} = \left\{ g \mid \forall i, \, {\mathcal{F} \left( g \right)}_{i} \in \left\{ -1, 1 \right\} \right\} $$ One should notice that the identity kernel ($ I $) is indeed part of this set of kernels. The above kernels obey $ \delta \left[ n \right] = \left( g \circ g \right) \left[ n \right] $ where $ \circ $ is the circular convolution (which is multiplication of the DFTs). How could one generate such a kernel? Well, just generate a vector of numbers each of which is $ 1 $ or $ -1 $ and use its ifft(). Pay attention that if you are after a real operator, the vector must obey the symmetry of the DFT for real signals. You may find code to replicate the above in my StackExchange Signal Processing Q19646 GitHub Repository.
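A minimal numpy sketch of that recipe (the size and seed are arbitrary; the symmetrization step enforces the real-DFT symmetry mentioned at the end):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
s = rng.choice([-1.0, 1.0], size=n)       # DFT values, each +1 or -1
for k in range(1, (n + 1) // 2):
    s[n - k] = s[k]                        # Hermitian symmetry => real kernel

g = np.fft.ifft(s).real                    # the self-inverse kernel
check = np.fft.ifft(np.fft.fft(g) ** 2).real
print(np.round(check, 12))                 # ~ [1, 0, ..., 0]: g * g = delta
```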
question: I am looking for literature with the result or the computation of the Pin⁻ bordism group $\Omega^{Pin-}_{d}(B\mathbb{Z}_2)$. Can someone point out some useful approaches for this or any helpful refs?

Some helpful background: There is an isomorphism between the following Spin and Pin⁻ bordism groups, known as the Smith isomorphism: $$ \Omega^{Spin}_{d+1}(B\mathbb{Z}_2)' \to \Omega^{Pin-}_{d}(pt) $$ In particular, $\Omega^{Spin}_{d+1}(B\mathbb{Z}_2)'$ is not exactly the usual Spin bordism group $\Omega^{Spin}_{d+1}(B\mathbb{Z}_2)$, but its reduction, based on the relation: $$ \Omega^{Spin}_{d+1}(BG)=\Omega^{Spin}_{d+1}(BG)' \oplus \Omega^{Spin}_{d+1}(pt) $$ where the reduction of the Spin bordism group $\Omega^{Spin}_{d+1}(BG)$ to $\Omega^{Spin}_{d+1}(BG)'$ gets rid of the $\Omega^{Spin}_{d+1}(pt)$ part. This part has something to do with the kernel of the forgetful map to $\Omega^{Spin}_{d+1}(pt)$.

In principle, to compute $\Omega^{Pin-}_{d}(B\mathbb{Z}_2)$, we may try to prove and use the following relation (any comments on this approach are welcome): $$ \Omega^{Spin}_{d+1}(B(\mathbb{Z}_2)^2)' \to \Omega^{Pin-}_{d}(B\mathbb{Z}_2)? $$

Some useful info: $\Omega^{Pin-}_1(pt)=\mathbb{Z}_2, \Omega^{Pin-}_2(pt)=\mathbb{Z}_8, \Omega^{Pin-}_3(pt)=0, \Omega^{Pin-}_4(pt)=0$; $\Omega^{Spin}_1(B\mathbb{Z}_2)=\mathbb{Z}_2^2, \Omega^{Spin}_2(B\mathbb{Z}_2)=\mathbb{Z}_2^2, \Omega^{Spin}_3(B\mathbb{Z}_2)=\mathbb{Z}_8, \Omega^{Spin}_4(B\mathbb{Z}_2)=\mathbb{Z}$

This is the reference that I have at hand: Kirby-Taylor, Pin structures on low-dimensional manifolds. I am willing to hear some guidance along this line of thinking, or on related issues.
Huge cardinal

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\aleph_2$-saturated $\sigma$-complete ideal on $\omega_1$". [1]

Definitions

Their formulation is similar to that of superstrong cardinals; more precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties $n$-$P_0$ and $n$-$P_1$, $n$-$P_0$ has less consistency strength than $n$-$P_1$, which has less consistency strength than $(n+1)$-$P_0$, and so on. This phenomenon is seen only around the $n$-fold variants as of modern set theoretic concerns. [2]

Although they are very large, there is a first-order definition which is equivalent to $n$-hugeness, so the $\theta$-th $n$-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.

Elementary embedding definitions

Given a non-trivial elementary embedding $j:V\prec M$ into a transitive class $M$ with critical point $\kappa$:

$\kappa$ is almost $n$-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).

$\kappa$ is $n$-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).

$\kappa$ is almost $n$-huge iff it is almost $n$-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is $n$-huge iff it is $n$-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is super almost $n$-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost $n$-huge with target $\lambda$ (that is, the target can be made arbitrarily large).

$\kappa$ is super $n$-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is $n$-huge with target $\lambda$.

$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost $1$-huge, $1$-huge, etc. respectively.

Ultrafilter definition

The first-order definition of $n$-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is $n$-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2<\cdots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\mathrm{ot}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\mathrm{ot}(X)$ is the order-type of the poset $(X,\in)$. [1] This definition is, more intuitively, making $U$ very large, like most ultrafilter characterizations of large cardinals (supercompact, strongly compact, etc.). $\kappa$ is then super $n$-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is $n$-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. [1]

If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses $n$-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$: all members of the $\lambda_k$ sequence are.
Consistency strength and size

Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the $n$-fold variants) known as the double helix. This phenomenon is when, for one $n$-fold variant, letting a cardinal be called $n$-$P_0$ iff it has the property, and another variant, $n$-$P_1$, $n$-$P_0$ is weaker than $n$-$P_1$, which is weaker than $(n+1)$-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = almost 0-huge = super almost 0-huge = 0-huge = super 0-huge
$n$-superstrong
$n$-fold supercompact
$(n+1)$-fold strong, $n$-fold extendible
$(n+1)$-fold Woodin, $n$-fold Vopěnka
$(n+1)$-fold Shelah
almost $n$-huge
super almost $n$-huge
$n$-huge
super $n$-huge
$(n+1)$-superstrong

All huge variants lie at the top of the double helix restricted to some natural number $n$, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of $n$-huge cardinals, for all $n$. [1]

Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every $(n+1)$-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super $n$-huge". [1]

In terms of size, however, the least $n$-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because $n$-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an $n$-huge cardinal above it, $\kappa$ "reflects downward" that $n$-huge cardinal: there are $\kappa$-many $n$-huge cardinals below $\kappa$. On the other hand, the least super $n$-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least almost superhuge.

Every $n$-huge cardinal is $m$-huge for every $m\leq n$. Similarly with almost $n$-hugeness, super $n$-hugeness, and super almost $n$-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1]

References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Motivation: I am currently in a rather uncomfortable spot in my Analysis studies. In class we introduced the Jordan measure in a very vague way, meaning no proofs, no examples (because next semester we study the better Lebesgue measure theory). I do understand its concept as a generalization of the 1-dimensional case: approximating a given bounded set $\Omega\subset \mathbb{R}^n$ from above and below by rectangles (boxes, simple sets), and if it happens that the smallest of all the approximations from above is equal to the largest of all the approximations from below, we say by definition that $\Omega$ is Jordan measurable. However, when it comes to applying this intuitive idea, I run into trouble.

Problem: Show that the set $\Omega := \lbrace (u,v) \in \mathbb{R}^2 \mid |u| + |v| \leq 1 \rbrace \subset \mathbb{R}^2$ is Jordan measurable.

My approach: Clearly the set is bounded: it is contained in the square $[-1,1]^2$. I don't know how to work with the definition as explained above in the motivation to show that this set is Jordan measurable. The best thing I could do was to cheat my way around it by producing the following picture:

The red line shows the boundary $\partial \Omega$ of the set. Since it consists of four line segments in $\mathbb{R}^2$, it has Jordan measure zero with respect to the two-dimensional measure on $\mathbb{R}^2$. According to How to prove $E\subset R^n$ Jordan measurable is equivalent to $\bar{E}-E$ is Jordan measured null, this would 'show' that $\Omega$ is Jordan measurable.

My question is if I can formalize this idea into an actual proof rather than my hand-wavy explanation above. Or even better, is it possible to show that $\Omega$ is Jordan measurable by just relying on the definition of above/below approximations and checking the equality of the infimum and supremum? That definition is so far what I understand best; the iff statement linked is only covered in the $\implies$ direction in my course.
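For intuition (not a proof), the inner and outer approximations from the definition can at least be computed numerically. A rough sketch; classifying a grid cell by its corners is legitimate here because $\Omega$ is convex, so a cell lies in $\Omega$ iff all four of its corners do, and with $n$ even the vertices of the diamond sit on grid points:

```python
import numpy as np

def jordan_sums(n):
    """Inner and outer Jordan sums for {|u|+|v| <= 1} on an n-by-n grid
    over [-1,1]^2 (take n even)."""
    xs = np.linspace(-1.0, 1.0, n + 1)
    inside = np.add.outer(np.abs(xs), np.abs(xs)) <= 1.0   # corner membership
    full = inside[:-1, :-1] & inside[1:, :-1] & inside[:-1, 1:] & inside[1:, 1:]
    touched = inside[:-1, :-1] | inside[1:, :-1] | inside[:-1, 1:] | inside[1:, 1:]
    cell = (2.0 / n) ** 2
    return full.sum() * cell, touched.sum() * cell

for n in (8, 64, 512):
    print(n, jordan_sums(n))   # inner and outer sums both tend to the area 2
```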
Is this a new mathematical concept? $$ \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} + \cdots = \frac{1}{n-1} $$ If it isn't, then what is this called? I haven't been able to find anything like this anywhere.

If $n\neq 1$, let
$$ s=\frac{1}{n}+\frac{1}{n^2}+\frac{1}{n^3}+\frac{1}{n^4}+\cdots $$
Multiply $s$ by $\frac{1}{n}$:
$$ \frac{1}{n}s=\frac{1}{n^2}+\frac{1}{n^3}+\frac{1}{n^4}+\frac{1}{n^5}+\cdots $$
Now subtract; every term after the first cancels in pairs:
$$ s-\frac{1}{n}s =\frac{1}{n}+\left(\frac{1}{n^2}-\frac{1}{n^2}\right)+\left(\frac{1}{n^3}-\frac{1}{n^3}\right)+\cdots=\frac{1}{n} $$
so
$$ s\left(\frac{n-1}{n}\right)=\frac{1}{n} $$
$$ s=\frac{1}{n-1} $$

It is a geometric series with $a=r=1/n$.
$$\frac{1}{n}+\frac{1}{n^2}+\frac{1}{n^3}+\dots=\sum_{i=1}^{\infty} \frac{1}{n^i}=\sum_{i=1}^{\infty} \left (\frac{1}{n} \right )^i=\sum_{i=0}^{\infty} \left (\frac{1}{n} \right )^i-1=\frac{1}{1-\frac{1}{n}}-1=\frac{n}{n-1}-1=\frac{n-n+1}{n-1}=\frac{1}{n-1}$$

Fix $r>0$ and consider
$$ 1+r+r^2+r^3+\dots+r^k $$
Multiplying by $(1-r)$ you obtain
$$ (1+r+r^2+r^3+\dots+r^k)(1-r)=\\ (1-r)+(r-r^2)+(r^2-r^3)+\dots+(r^{k-1}-r^k)+(r^k-r^{k+1})=\\ 1-r^{k+1}\;\;. $$
Hence you have the following identity (valid, as you can see, for $r\neq1$):
$$ 1+r+r^2+r^3+\dots+r^k=\frac{1-r^{k+1}}{1-r}\;\;. $$
Then you can pass to the limit $k\to+\infty$, obtaining a finite quantity iff $|r|<1$. This quantity clearly is
$$ \frac1{1-r} $$
i.e.
$$ 1+r+r^2+r^3+\dots+r^k+\dots=\frac1{1-r}\;\;. $$
Now, in order to obtain your result just set $r=1/n$ for a fixed $n$; I guess it's a natural number, hence if $n\ge2$, clearly $r=1/n<1$ and the above formulas are valid and easily yield your result.
I seek a two-dimensional shape $S$, bounded by a Jordan curve, that optimally balances its isoperimetric ratio $r(S)$ against what I call its invisibility index $iv(S)$. Define the isoperimetric ratio $r(S)$ of $S$ to be $4 \pi A / L^2$, where $A$ is the area of $S$ and $L$ its perimeter. This ratio is in $(0,1]$ and achieves $1$ for $S$ a disk. See, e.g., the Wikipedia article on the isoperimetric inequality. Define the invisibility index $iv(S)$ to be the probability that a pair $(x,y)$ of random points in $S$ (chosen uniformly and independently) are invisible to one another, in the sense that the segment $xy$ includes a point strictly exterior to $S$. ($iv(S)$ is $1$ minus the Beer convexity of $S$.)

Q. What shape (or shapes) $S$ maximize the product $P(S) = r(S) \cdot iv(S)$?

If $S$ is a disk, $r(S)=1$ and $iv(S)=0$, so $P(S)=0$. If $S$ is a thin spiral, then $r(S)$ approaches $0$ and $iv(S)$ approaches $1$, so $P(S)$ approaches $0$. In between, $P(S) > 0$.

I've computed $P(S)$ for the very narrow class of symmetric Ls: unit squares with a square removed from one corner, as illustrated below.

Two symmetric Ls with different parameters $a$. Origin at lower left corner.

These shapes are determined by one parameter $a$ as illustrated. Among this class of shapes, it appears that the maximum product $P(S)$ is achieved when $a \approx \frac{1}{4}$, the left shape above. Plots of $r(\,)$, $iv(\,)$, and $P(\,)$ are shown below. The isoperimetric ratio for a square is $r(1) = \pi/4 \approx 0.79$.

Red: $r(a)$. Blue: $iv(a)$. Green: product $P(a)$.
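For readers who want to experiment, below is a minimal Monte Carlo sketch of how $iv(S)$ can be estimated for an L-shape (an editorial addition, not the poster's code; it parameterizes the shape by the side $s$ of the removed corner square, which may differ from the $a$ in the figures, and it tests visibility by sampling points along the segment, so the test is only approximate).

import random
from math import pi

def make_L(s):
    """Unit square with the s-by-s square removed from the top-right corner."""
    def inside(p):
        x, y = p
        if not (0 <= x <= 1 and 0 <= y <= 1):
            return False
        return not (x > 1 - s and y > 1 - s)  # removed corner region
    return inside

def sample_point(inside):
    # Rejection sampling: uniform in the L-shape
    while True:
        p = (random.random(), random.random())
        if inside(p):
            return p

def visible(p, q, inside, steps=200):
    """Approximate visibility: walk along segment pq and check membership."""
    for i in range(1, steps):
        t = i / steps
        m = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if not inside(m):
            return False
    return True

def iv_estimate(s, trials=20_000):
    inside = make_L(s)
    blocked = sum(
        not visible(sample_point(inside), sample_point(inside), inside)
        for _ in range(trials)
    )
    return blocked / trials

if __name__ == "__main__":
    for s in (0.25, 0.5, 0.75):
        A, L = 1 - s * s, 4.0        # area; the L's perimeter is always 4
        r = 4 * pi * A / (L * L)     # isoperimetric ratio
        iv = iv_estimate(s)
        print(f"s={s}: r={r:.3f}  iv~{iv:.3f}  P~{r * iv:.3f}")

Note the convenient fact that removing an axis-aligned corner square leaves the perimeter of the unit square unchanged at $4$, so only the area term in $r(S)$ varies with $s$.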
I am reading "A Primer in Econometric Theory" by John Stachurski and reading the part on Conditional Maximum Likelihood. There I have seen the same kind of maximization I have seen before in other sources too: In order to estimate the parameter of a distribution, author uses conditional maximum likelihood and he does not take into account the marginal density of inputs when maximizing the objective function. I have seen it being done before, but he openly says marginal density is independent of the parameter we are estimating. Let me explain what I am asking: Suppose we have inputs and outputs in our model, $x$ being the input and $y$ being the output: $x_1, x_2$...$x_N$ and $y_1, y_2$...$y_N$. They come in pairs. Each observation $x_i$ is probably a vector but $y_i$ is just a scalar. Our pair is $(x_i, y_i)$. Our aim is to estimate $\theta$ in $p(y|x;\theta)$ in order to pin down the conditional density of $y$ given $x$. So we maximize the following condition: $l(\theta) = \sum_{n=1}^{N}\text{ln}\,p(x_n,y_n;\theta)$ where $p$ is the joint density of $(x_n,y_n)$. Letting $\pi$ be the marginal density of $x$, we can decompose the joint density as: $p(x,y;\theta) = p(y|x;\theta)\pi(x)$. Here author says the following: "The density $\pi(x)$ is unknown but we have not parameterized it because we aren't trying to estimate it. We can now rewrite the log likelihood as $l(\theta) = \sum_{n=1}^{N}\text{ln}\,p(y_n|x_n;\theta) + \sum_{n=1}^{N}\text{ln}\,\pi(x_n)$ The second term on the right-hand side is independent of $\theta$ and as such it does not affect the maximizer" And he goes on just maximizing the first part, the conditional probability. Here is my question: Is not $\pi(x)$ dependent on $\theta$ somehow? My thinking is convoluted but both $p(y|x;\theta)$ and $\pi(x)$ are derived from the same underlying joint density: $p(x, y;\theta)$. $\pi(x)$ is just a short hand for $\pi(x) = \int p(x,y;\theta)dy$ which is clearly dependent on $\theta$. Once you plug this into your maximization problem, it becomes: $l(\theta) = \sum_{n=1}^{N}\text{ln}\,p(y_n|x_n;\theta) + \sum_{n=1}^{N}\text{ln}\,\int p(x,y;\theta)dy$ Now the second item is also dependent on $\theta$ and need to be taken into account when maximizing wrt $\theta$. Am I missing something here?
I know that if $C \subseteq [0,1]$ is uncountable, then there exists $a \in (0,1)$ such that $C \cap [a,1]$ is uncountable. Is it still true for any infinite set? That is, if $C \subseteq [0,1]$ is infinite, does there exist an $a \in (0,1)$ such that $C \cap [a,1]$ is infinite?

Not necessarily. Consider $C := \{\frac{1}{n} : n \in \mathbb{N}\}$.

Not true. Take $$C = \left\{1, \frac12, \frac13, \frac14, \ldots , \frac1N, \ldots \right\}.$$ [Jeez, everyone came up with the same example at once]

Every set $E$ such that
$$\forall \varepsilon >0, \quad E\cap(\varepsilon,1)\text{ is finite}$$
is a counterexample. You just need $0$ to be an accumulation point of your set. You can take
$$\left\{\frac 1n ,\ n\in \mathbb N\right\}$$
for instance.

The claim is not true; a counterexample is the set
$$\left\{\frac 1n ,\ n\in \mathbb N\right\}$$
Definition: Ring of Formal Power Series

Definition

Let $R$ be a commutative ring with unity. The ring of formal power series over $R$ in one variable consists of three parts: a commutative ring with unity $R[[X]]$; a unital ring homomorphism $\iota : R \to R[[X]]$, called the canonical embedding; and an element $X$ of $R[[X]]$, called the indeterminate. It may be defined as follows:

Let $\N$ be the additive monoid of natural numbers. Let $R[[\N]]$ be the big monoid ring of $R$ over $\N$. Let $\iota : R \to R[[\N]]$ be the embedding. Let $X \in R[[\N]]$ be the mapping $X : \N \to R$ defined by $X(n) = 1$ if $n = 1$ and $X(n) = 0$ otherwise.

The ring of formal power series over $R$ is the ordered triple $\left({R[[\N]], \iota, X}\right)$.
Decrypt the cipher text with a pinhole.

$ nc cry1.chal.ctf.westerns.tokyo 23464

pinhole.7z

Summary: attacking RSA using a decryption oracle that leaks 2 consecutive bits in the middle.

In this challenge we are given access to a decryption oracle, which leaks only 2 consecutive bits in the middle of the decrypted plaintext:

b = size(key.n) // 2

def run(fin, fout):
    alarm(1200)
    try:
        while True:
            line = fin.readline()[:4 + size(key.n) // 4]
            ciphertext = int(line, 16)  # Note: input is HEX
            m = key.decrypt(ciphertext)
            fout.write(str((m >> b) & 3) + "\n")
            fout.flush()
    except:
        pass

We are also given an encrypted flag, and our goal is to decrypt it.

Recall that plain RSA is multiplicatively homomorphic: we can multiply the ciphertext by $r^e$ and the plaintext is then multiplied by $r$; we need only the public key to do it.

Let's multiply the ciphertext by $2^{-e}$. Assume that the oracle gives bits $(a,b)$ for the ciphertext $C$ and $(c,d)$ for the ciphertext $2^{-e}C \pmod{N}$. Then there are two cases: if the message $M$ is even, then dividing by 2 is equivalent to shifting it right by one bit; otherwise, $M$ is transformed into $(M + N)/2$.

In the first case, due to the shift we must have $d = a$. In the second case, depending on the carries it can be anything. However, if $d \ne a$ then we learn for sure that the LSB of $M$ is 1. As a result we get this probabilistic LSB oracle:

$a,b = oracle(C);$
$c,d = oracle(2^{-e}C);$
if $d \ne a$ then $LSB(M) = 1.$

How can we use it? Let's assume that we have confirmed that $LSB(M) = 1$; otherwise we can "randomize" the ciphertext by multiplying it by some $r^e$ (we will be able to remove this constant after we fully decrypt the message) until the condition holds.

Remember that we can multiply the message by any number $d$. What do we learn from the oracle when it happens that $LSB(dM \bmod N) = 1$? Let $k = \lfloor dM/N \rfloor$; then:
$$dM - kN \equiv 1 \pmod{2}.$$
We know that $N$ is odd and $M$ is odd, hence
$$k \equiv d + 1 \pmod{2}.$$
We also know $d$, therefore we learn the parity of $k$.

If $d$ is small, we can enumerate all possible $k$, since $k < d$. Each candidate $k$ gives us a possible range for the message (from the definition of $k$):
$$\frac{kN}{d} \le M < \frac{(k+1)N}{d}.$$

Example: assume that $LSB(5M \bmod N) = 1$. Then $k$ is even and less than $5$. The possible candidates are $0,2,4$. That is, the message $M$ must lie in one of the three intervals:
$$0 \le M < N/5, \quad\text{or}$$
$$2N/5 \le M < 3N/5, \quad\text{or}$$
$$4N/5 \le M < N.$$

So we have reduced the possible message space. Note however that these intervals have size $N/d$. If we want to reduce the message space to only a few messages, we need a large $d$. But then we will not be able to check all candidates for $k$!

There is a nice trick: we can deduce the possible intervals for $k$ for a given $d$ from the previously obtained intervals for $M$. I learnt this trick from this article, explaining Bleichenbacher's attack (see the section "Narrowing the initial interval"). Indeed, if $l \le M \le r$ then $\lfloor dl/N \rfloor \le k \le \lfloor dr/N \rfloor$.

To sum up, here is the algorithm structure:

Set the possible range for $M$ to $[0,N-1]$. Set a small $d$. Loop:
- If the LSB oracle on $dM \bmod N$ is inconclusive, try another $d$.
- For each possible interval for $M$: deduce the possible range for $k$; iterate over all $k$ with parity different from $d \bmod 2$ and obtain a union of possible intervals for $M$.
- Intersect these intervals with the previous intervals for $M$.
- Increase $d$, for example double it.

There is a small detail.
If we keep doubling $d$, then the number of intervals for $M$ grows quickly and makes the algorithm slower. To keep the number of intervals small, we can multiply $d$ by, let's say, 1.5 instead of 2 when there are too many intervals. Here's the Python code (works locally by simulating the oracle using the secret key):

from libnum import invmod, len_in_bits
from libnum.ranges import Ranges  # added recently
from Crypto.PublicKey import RSA

with open("secretkey.pem", "r") as f:
    key = RSA.importKey(f.read())
with open("publickey.pem", "r") as f:
    pkey = RSA.importKey(f.read())

nmid = len_in_bits(pkey.n) // 2
C = int(open("ciphertext").read())
n = pkey.n
e = pkey.e
i2 = pow(invmod(2, n), e, n)

def oracle(c):
    m = key.decrypt(c)
    v = (m >> nmid) & 3
    a = v >> 1
    b = v & 1
    return a, b

def oracle_lsb(ct):
    a, b = oracle(ct)
    c, d = oracle((i2 * ct) % n)
    if d != a:
        return True
    return None

rng = Ranges((0, n - 1))
assert oracle_lsb(C), "need blinding..."
print "Good"

div = 2
ntotal = 0
ngood = 0
while 1:
    ntotal += 1
    div %= n
    C2 = (pow(div, e, n) * C) % n
    if not oracle_lsb(C2):
        div += 1
        continue
    ngood += 1
    cur = Ranges()
    for ml, mr in rng._segments:
        kl = ml * div / n
        kr = mr * div / n
        # ensure correct parity
        if kl % 2 == div % 2:
            kl += 1
        k = kl
        while k <= kr:
            l = k * n / div
            r = (k + 1) * n / div
            cur = cur | Ranges((l, r))
            k += 2
    rng = rng & cur
    print "#%d/%d" % (ngood, ntotal), "good", div, "unknown bits:", len_in_bits(rng.len), "num segments", len(rng._segments)
    if rng.len <= 100:
        print "Few candidates left, breaking"
        break
    # heuristic to keep fewer intervals for M
    if len(rng._segments) <= 10:
        div = 2*div
    else:
        div = div + (div / 2) + (div / 4)

M = int(open("message").read())
print "Message in the %d candidates left?" % rng.len, M in rng

One interesting thing is that the success probability of the described LSB oracle depends strongly on $N$. For some $N$ it is equal to 50% and for some $N$ it is only about 10%. This happens due to different carry chances depending on the middle bits of $N$. Have a look at @carllondahl's writeup, where he investigates more cases for the oracle.
Find an explicit expression for the solution of the IVP
$$ \begin{cases} u_{t}(t,x)+u_{x}(t,x)+u(t,x)=e^{t+2x}\\ \\ u(0,x)=0, \end{cases} $$
by using the method of characteristics.

This is a linear PDE, so
$$ u = u^h + u^p $$
with
$$ u_t^h+u_x^h=-u^h\\ u_t^p+u_x^p+u^p=e^{t+2x} $$
Using the method of characteristics for $u^h$ we have
$$ \begin{cases} \frac{dt}{d\tau} = 1, & t(0) = 0, & t(\tau) = \tau\\ \frac{dx}{d\tau} = 1, & x(0) = s, & x(\tau) = s+\tau\\ \frac{du^h}{d\tau} = -u^h, & u^h(0) = \phi(s), & u^h(\tau) = \phi(s)e^{-\tau} \end{cases} $$
or
$$ u^h(t,x) = e^{-t}\psi(x-t) $$
Regarding the particular solution, we have that $u^p(t,x) = \frac 14e^{t+2x}$ verifies the particular equation (indeed $u^p_t+u^p_x+u^p = \frac14(1+2+1)e^{t+2x} = e^{t+2x}$), so
$$ u(t,x) = e^{-t}\psi(x-t)+\frac 14e^{t+2x} $$
and for $t=0$ we have
$$ \psi(x) + \frac 14e^{2x}=0 $$
then $\psi(x) = -\frac 14 e^{2x}$ and finally
$$ u(t,x) = \frac 14\left(-e^{-t}e^{2(x-t)}+e^{t+2x}\right) $$

Yes, by using the method of characteristics, one can find the explicit expression for the solution:
$$u(t,x)=\frac14\left(e^{t+2x}-e^{-3t+2x} \right)$$
The question is concise. So is the answer.
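As a quick sanity check (an editorial addition, not part of either answer, assuming sympy is available), the closed form can be verified symbolically:

import sympy as sp

t, x = sp.symbols('t x')
u = (sp.exp(t + 2*x) - sp.exp(-3*t + 2*x)) / 4

# PDE residual: u_t + u_x + u - e^{t+2x} should vanish identically
residual = sp.diff(u, t) + sp.diff(u, x) + u - sp.exp(t + 2*x)
print(sp.simplify(residual))      # 0

# Initial condition: u(0, x) = 0
print(sp.simplify(u.subs(t, 0)))  # 0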
In Walter Rudin's Principles of Mathematical Analysis, Exercise 5.21, it is proved that for any closed subset $E\subseteq \mathbb{R}$, there exists a smooth function $f$ on $\mathbb{R}$ such that $E=\{x\in \mathbb{R}\mid f(x)=0\}$. Since $E^c$ is open in $\mathbb{R}$, $E^c$ is a countable disjoint union of open intervals $(a_i,b_i)$. On an interval $(a,b)$, if we let $f(x)=\exp(\frac{1}{(x-a)(x-b)})$ for $x\in(a,b)$ and $f(x)=0$ for $x\notin(a,b)$ (with $f(x)=\exp(-\frac{1}{x-a})$ on $(a,\infty)$ and $f(x)=\exp(\frac{1}{x-b})$ on $(-\infty,b)$ for the unbounded intervals), then $f$ is smooth. Hence we can get a smooth function on $\mathbb{R}$ such that $E$ is the zero set of $f$.

(1) How about a closed subset $E$ of $\mathbb{R}^n$? Is it true that for any closed subset $E\subseteq \mathbb{R}^n$, there exists a smooth function $f$ on $\mathbb{R}^n$ such that $E=\{x\in \mathbb{R}^n\mid f(x)=0\}$? An open set in $\mathbb{R}^n$ cannot in general be written as a countable disjoint union of open balls, hence the proof above is not valid. A partition of unity only gives that for an open set $U$, there exists an open $V$ with $\bar V\subseteq U$ and a smooth function $f$ on $\mathbb{R}^n$ such that $\operatorname{supp} f\subseteq U$ and $f|_V=1$.

(2) The Taylor series of $f(x)=\exp(\frac{1}{(x-a)(x-b)})$ does not converge to $f$ around the points $a,b$, hence $f$ is not analytic at $a,b$. Is it true that for any closed subset $E\subseteq \mathbb{R}$, there exists an analytic function $f$ on $\mathbb{R}$ such that $E=\{x\in \mathbb{R}\mid f(x)=0\}$?
Express each of the following in standard form:
(a) The mass of a proton in grams is \[\frac{1673}{1000000000000000000000000000}\]
(b) A helium atom has a diameter of 0.000000022 cm.

If the division \[N\div 5\] leaves a remainder of 4, what might be the one's digit of N?

Suppose that the division \[N\div 5\] leaves a remainder of 4 and the division \[N\div 2\] leaves a remainder of 1. What must be the one's digit of N?

The population of a place increased to 54,000 in 2003 at a rate of 5% per annum.
(a) Find the population in 2001.
(b) What would be its population in 2005?

(a) \[-13\] (b) \[\frac{-13}{19}\] (c) \[\frac{1}{5}\] (d) \[\frac{-5}{8}\times \frac{-3}{7}\]

Simplify:
(a) \[-pqr\left( {{p}^{2}}+{{q}^{2}}+{{r}^{2}} \right)\]
(b) \[(px+qy)(ax-by)\]

(a) Construct a quadrilateral BLUE in which BL = 5.3 cm, BE = 2.9 cm, \[\angle B=70{}^\circ ,\] \[\angle L=95{}^\circ\] and \[\angle U=85{}^\circ .\]
(b) A student attempted to draw a quadrilateral PLAY, where PL = 3 cm, LA = 4 cm, AY = 4.5 cm, PY = 2 cm, LY = 6 cm, but could not draw it. What is the reason?

(a) In a lottery, there are 10 prizes and 20 blanks. A ticket is chosen at random; what is the probability of getting a prize?
(b) Study the following pictograph and answer the questions given below it.
(i) How many cars were produced in the month of July?
(ii) In which month was the maximum number of cars produced?

(a) \[{{(-9)}^{3}}\div {{(-9)}^{8}}\]
(b) \[\frac{{{3}^{-5}}\times {{10}^{-5}}\times 125}{{{5}^{-7}}\times {{6}^{-5}}}\]

(a) \[{{x}^{2}}+xy+8x+8y\]
(b) \[15xy-6x+5y-2\]
(c) \[ax+bx-ay-by\]

(a) \[{{(b-7)}^{2}}\] (b) \[{{(xy+3z)}^{2}}\] (c) \[{{(6{{x}^{2}}-5y)}^{2}}\] (d) \[{{\left( \frac{2}{3}m+\frac{3}{2}n \right)}^{2}}\] (e) \[{{(0.4p-0.5q)}^{2}}\] (f) \[{{(2xy+5y)}^{2}}\]

(a) Water is poured into a cuboidal reservoir at the rate of 60 litres per minute. If the volume of the reservoir is 108 m³, find the number of hours it will take to fill the reservoir.
(b) If the radius and height of a cylindrical tank are 7 m and 10 m, find the capacity of the tank.

In a hypothetical sample of 20 people, the amount of money (in thousands of rupees) with each was found to be as follows: 114, 108, 100, 98, 101, 109, 117, 119, 126, 131, 136, 143, 156, 169, 182, 195, 207, 219, 235, 118. Draw a histogram of the frequency distribution, taking one of the class intervals as 50-100.

(a) \[\left\{ \frac{7}{5}\times \left( \frac{-3}{12} \right) \right\}+\left\{ \frac{7}{5}\times \frac{5}{12} \right\}\]
(b) \[\left\{ \frac{9}{16}\times \frac{4}{12} \right\}+\left\{ \frac{9}{16}\times \frac{-3}{9} \right\}\]

Hasan buys two kinds of cloth materials for school uniforms: shirt material that costs him Rs. 50 per metre and trouser material that costs him Rs. 90 per metre. For every 3 metres of the shirt material he buys 2 metres of the trouser material. He sells the materials at 12% and 10% profit, respectively. His total sale is Rs. 36,600. How much trouser material did he buy?

(a) \[{{71}^{2}}\] (b) \[{{99}^{2}}\] (c) \[{{102}^{2}}\] (d) \[{{998}^{2}}\] (e) \[{{5.2}^{2}}\] (f) \[297\times 303\] (g) \[78\times 82\] (h) \[{{8.9}^{2}}\]

The diagram of the adjacent picture frame has outer dimensions 24 cm \[\times \] 28 cm and inner dimensions 16 cm \[\times \] 20 cm. Find the area of each section of the frame, if the width of each section is the same.

In a primary school, the parents were asked about the number of hours they spend per day in helping their children to do homework.
There were 90 parents who helped for \[\frac{1}{2}\] hour to \[1\frac{1}{2}\] hours. The distribution of parents according to the time for which they said they helped is given in the adjoining figure: 20% helped for more than \[1\frac{1}{2}\] hours per day; 30% helped for \[\frac{1}{2}\] hour to \[1\frac{1}{2}\] hours; 50% did not help at all.
Using this, answer the following:
(a) How many parents were surveyed?
(b) How many said that they did not help?
(c) How many said that they helped for more than \[1\frac{1}{2}\] hours?

A 5 m 60 cm high vertical pole casts a shadow 3 m 20 cm long. Find at the same time
(a) the length of the shadow cast by another pole 10 m 50 cm high;
(b) the height of a pole which casts a shadow 5 m long.
Can somebody provide a visual comparison? What set should I buy on a low budget (I typically typeset handouts for my students)? Do I need Minion Pro, or does Minion Math provide a text font too?

About Optical Sizes

You said that you do not get the difference between optical size and weight. It is very simple. By weight a typographer means the thickness of the strokes which make up a glyph. A glyph with thicker strokes has more weight than one with thin strokes. Common weights are Bold or Light. To switch to a font with high weight in LaTeX use \textbf (or \mathbf).

Optical sizes or design sizes are fonts which are meant to be set at a specific size. Back when text was set in lead there had to be an extra set of glyphs for every conceivable size, but with the advent of computer typography designers started to make their fonts only for 12pt font size because it could be easily scaled to any other size. Unfortunately, this scaling is suboptimal and fonts with high contrast become barely legible at small sizes. Therefore people introduced optical sizes, which are fonts optimized for a certain range of sizes. The most common optical sizes, as introduced by Adobe, are Tiny, Caption, Text, Subhead, and Display. Their names reflect the intended place of use. The corresponding size ranges are

up to 6pt: Tiny
6pt-8.4pt: Caption
8.4pt-13pt: Text
13pt-19.9pt: Subhead
above 19.9pt: Display

Of course, weight and optical size are independent of each other and can be combined. That is why the full set of Minion Math has files called MinionMath-BoldSubh.otf, which contains Minion Math at the design size Subhead in bold weight.

About Minion Math

I own the Basic Set of Minion Math. As Minion Math is a math font it doesn't come with a text font, which is why I use the Minion Pro text font distributed with Adobe Illustrator. The basic set comprises the following OpenType font files:

MinionMath-Bold.otf
MinionMath-Capt.otf
MinionMath-Regular.otf
MinionMath-Tiny.otf

I also received Type 1 font files and macros for use in pdfTeX, but I would not be surprised if this support is dropped at some point. I don't even know if these macros provide access to all the glyphs currently covered by Minion Math. The OpenType fonts work nicely with LuaTeX and XeTeX. Recently I also put together a ConTeXt typescript file which I can provide on demand.

For most basic math typesetting you should own at least two fonts: Regular and Bold. Please do not consider only buying Regular and then using fake bold. The additional Caption and Tiny fonts which come with the Basic Set are in normal weight and run slightly wider than Regular. The strokes differ only very slightly. As such subtleties are certainly a very subjective perception, see the next section to form your own impression.

Visual Comparison

I typeset the example found at the very end of this »answer« twice with LuaTeX, once with the SizeFeatures block commented out (no opticals) and once with opticals.
No Opticals: Link to screenshot

The following fonts are embedded (output of pdffonts):

name                                 type         encoding    emb sub uni object ID
------------------------------------ ------------ ----------- --- --- --- ---------
KESRZU+MinionPro-Bold                CID Type 0C  Identity-H  yes yes yes      4  0
FWTMSK+MinionPro-Regular             CID Type 0C  Identity-H  yes yes yes      5  0
DRZEGS+MinionPro-It                  CID Type 0C  Identity-H  yes yes yes      6  0
XPNDTR+MinionMath-Regular            CID Type 0C  Identity-H  yes yes yes      7  0
KPJMUA+MinionMath-Regular            CID Type 0C  Identity-H  yes yes yes      8  0
KDMCFD+MinionPro-Bold                CID Type 0C  Identity-H  yes yes yes      9  0
UTEPWH+MinionMath-Regular            CID Type 0C  Identity-H  yes yes yes     10  0
VHZLAM+MinionPro-Regular             CID Type 0C  Identity-H  yes yes yes     11  0

With Opticals: Link to screenshot

The following fonts are embedded (output of pdffonts):

name                                 type         encoding    emb sub uni object ID
------------------------------------ ------------ ----------- --- --- --- ---------
KESRZU+MinionPro-Bold                CID Type 0C  Identity-H  yes yes yes      4  0
FWTMSK+MinionPro-Regular             CID Type 0C  Identity-H  yes yes yes      5  0
DRZEGS+MinionPro-It                  CID Type 0C  Identity-H  yes yes yes      6  0
XPNDTR+MinionMath-Regular            CID Type 0C  Identity-H  yes yes yes      7  0
PXNQGX+MinionMath-Capt               CID Type 0C  Identity-H  yes yes yes      8  0
KDMCFD+MinionPro-Bold                CID Type 0C  Identity-H  yes yes yes      9  0
VHZLAM+MinionPro-Regular             CID Type 0C  Identity-H  yes yes yes     10  0

Example

\documentclass[12pt]{article}
\pagestyle{empty}
\usepackage{amsmath}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\usepackage{unicode-math}% loads fontspec
\setmainfont[%
  Ligatures={TeX,Common},
  Numbers={Proportional,Lining},
  Kerning=On,
]{Minion Pro}
\setmathfont[
  SizeFeatures = {
    {Size = -6,     Font = MinionMath-Tiny,    Style = MathScriptScript},
    {Size = 6-8.4,  Font = MinionMath-Capt,    Style = MathScript},
    {Size = 8.4-13, Font = MinionMath-Regular},
  },
]{MinionMath-Regular}
\setmathfont{MinionMath-Bold.otf}[range={bfup->up,bfit->it}]
\begin{document}
\begin{theorem}[Residue theorem]
  Let $f$ be analytic in the region $G$ except for the isolated singularities
  $a_1,a_2,\dots,a_m$. If $\gamma$ is a closed rectifiable curve in $G$ which
  does not pass through any of the points $a_k$ and if $\gamma\approx 0$ in
  $G$, then
  \[
    \frac{1}{2\pi i}\int\limits_{\gamma}f\left(x^{\mathbf{N}\in\BbbC^{N\times 10}}\right)
      = \sum_{k=1}^m n(\gamma;a_k)\mathup{Res}(f;a_k)\,.
  \]
\end{theorem}
\begin{theorem}[Maximum modulus]
  Let $G$ be a bounded open set in $\BbbC$ and suppose that $f$ is a
  continuous function on $G^-$ which is analytic in $G$. Then
  \[
    \max\{|f(z)|\:z\in G^-\} = \max\{|f(z):z\in \partial G\}\,.
  \]
\end{theorem}
First some large operators both in text:
$\iiint\limits_{Q}f(x,y,z)\,\mathup{d}x\,\mathup{d}y\,\mathup{d}z$ and
$\prod_{\gamma\in\Gamma_{\overbar{C}}}\partial\left(\tilde{X}_\gamma\right)$;
and also on display
\[
  \iiiint\limits_{Q}f(w,x,y,z)\,\mathup{d}w\,\mathup{d}x\,\mathup{d}y\,\mathup{d}z
    \leq\oint_{\partial Q}f^\prime\left(\max\left\{
      \frac{\Vert w\Vert}{\vert w^2+x^2\vert};
      \frac{\Vert z\Vert}{\vert y^2+z^2\vert};
      \frac{\Vert w\oplus z\Vert}{\vert x\oplus y\vert}
    \right\}\right)
\]
\end{document}
Fractions and binomial coefficients are common mathematical elements with similar characteristics - one number goes on top of another. This article explains how to typeset them in LaTeX.

Using fractions and binomial coefficients in an expression is straightforward. The binomial coefficient is defined by the next expression:
\[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]
For these commands to work you must import the package amsmath by adding the next line to the preamble of your file:
\usepackage{amsmath}

The appearance of the fraction may change depending on the context. Fractions can be used alongside the text, for example \( \frac{1}{2} \), and in a mathematical display style like the one below:
\[\frac{1}{2}\]
As you may have guessed, the command \frac{1}{2} is the one that displays the fraction. The text inside the first pair of braces is the numerator and the text inside the second pair is the denominator. Also, the text size of the fraction changes according to the text around it. You can set this manually if you want. When displaying fractions in-line, for example \(\frac{3x}{2}\), you can set a different display style: \( \displaystyle \frac{3x}{2} \). This is also true the other way around:
\[ f(x)=\frac{P(x)}{Q(x)} \ \ \textrm{and} \ \ f(x)=\textstyle\frac{P(x)}{Q(x)} \]
The command \displaystyle will format the fraction as if it were in mathematical display mode. On the other side, \textstyle will change the style of the fraction as if it were part of the text.

The usage of fractions is quite flexible; they can be nested to obtain more complex expressions.

The fractions can be nested:
\[ \frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}} \]
Now a wild example:
\[ a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cdots}}} \]
The second fraction displayed in the previous example uses the command \cfrac{}{} provided by the package amsmath (see the introduction); this command displays nested fractions without changing the size of the font, which is especially useful for continued fractions.

Binomial coefficients are common elements in mathematical expressions; the command to display them in LaTeX is very similar to the one used for fractions. The binomial coefficient is defined by the next expression:
\[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]
And of course this command can be included in the normal text flow: \(\binom{n}{k}\). As you see, the command \binom{}{} will print the binomial coefficient using the parameters passed inside the braces.

A slightly different and more complex example of continued fractions:
\newcommand*{\contfrac}[2]{%
  {
    \rlap{$\dfrac{1}{\phantom{#1}}$}%
    \genfrac{}{}{0pt}{0}{}{#1+#2}%
  }
}
\[
  a_0 + \contfrac{a_1}{
          \contfrac{a_2}{
            \contfrac{a_3}{
              \genfrac{}{}{0pt}{0}{}{\ddots}
        }}}
\]
Hi,

We consider subspaces of $\mathbb{R}^N$. Suppose that we have a property called $\mbox{Prop}$ that applies to subspaces of $\mathbb{R}^N$, that is to say, a function from the set of subspaces of $\mathbb{R}^N$ into $\{0,1\}$. The variables $X_1,\dots,X_N$ are Gaussian, taking their values in $\mathbb{R}^N$. They are assumed i.i.d., drawn from an isotropic distribution (the covariance matrix is the identity). Prove that ($0\leqslant n\leqslant N$)
$$ P(\mbox{Prop}(Span(X_1,\dots,X_n))=1)= P(\mbox{Prop}(Span(X_1,\dots,X_{N-n})^\bot)=1) $$
without referring to the definition of $\mbox{Prop}$ other than the obvious, namely that
$$ \mbox{Prop}(Span(X_1,\dots,X_k)) $$
is a measurable function on $\Omega$, the sample set.

In other words, the probability for some property to hold on a subspace spanned by $n$ variables chosen at random (Gaussian isotropic) is equal to the probability for it to hold on the orthogonal complement of the space spanned by $N-n$ variables.

This may translate to this "funny" question: $X_1,\dots,X_6$ are six real-valued Gaussian i.i.d. variables. Prove that
$$ P\left(\sin\left(\frac{|X_1|}{|X_1|+|X_2|+|X_3|}\right)>0.2\right)= P\left(\sin\left(\frac{|Y_1|}{|Y_1|+|Y_2|+|Y_3|}\right)>0.2\right) $$
where $Y_1=X_2X_6-X_3X_5$, $Y_2=-X_1X_6+X_3X_4$ and $Y_3=X_1X_5-X_2X_4$. (The vector $Y$ (in $\mathbb{R}^3$) is orthogonal to the subspace spanned by $(X_1,X_2,X_3)$ and $(X_4,X_5,X_6)$.)

Thank you for your help,

Saïd.
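A quick Monte Carlo sanity check of the "funny" identity (an editorial sketch, not part of the original post, assuming numpy is available): both quantities depend only on the direction of the underlying vector, which is uniform on the sphere in both cases by rotational invariance, so the two estimated probabilities should agree up to sampling error.

import numpy as np

rng = np.random.default_rng(1)
n = 500_000

X = rng.standard_normal((n, 6))
u, v = X[:, :3], X[:, 3:]      # two i.i.d. isotropic Gaussian vectors in R^3
Y = np.cross(u, v)             # Y = (X2*X6 - X3*X5, X3*X4 - X1*X6, X1*X5 - X2*X4)

def prob(V):
    a = np.abs(V)
    return np.mean(np.sin(a[:, 0] / a.sum(axis=1)) > 0.2)

print(prob(u))   # lhs: built from (X1, X2, X3)
print(prob(Y))   # rhs: built from the cross product
# the two values agree to within Monte Carlo error (~1e-3)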
Multiple bounded variation solutions of a capillarity problem

1. Dipartimento di Matematica e Informatica, Università degli Studi di Trieste, Via A. Valerio 12/1, 34127 Trieste, Italy

$$-\frac{\nabla u\cdot\nu}{\sqrt{1+|\nabla u|^{2}}}=\kappa \quad\text{on }\partial\Omega$$

Keywords: prescribed mean curvature equation, bounded variation function, Neumann boundary condition, capillary surface, variational methods.

Mathematics Subject Classification: Primary: 35J25, 35J20, 35J66, 53A10, 49Q20; Secondary: 76B45, 76D45.

Citation: Franco Obersnel, Pierpaolo Omari. Multiple bounded variation solutions of a capillarity problem. Conference Publications, 2011, 2011 (Special): 1129-1137. doi: 10.3934/proc.2011.2011.1129
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks

@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)

2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus

Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown

Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them

I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.

@ooolb Even if that is really possible (I can always talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams

@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs

Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves of Go?

enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes

orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others

Btw, since gravity is nonlinear, do we expect that if a region where spacetime is frame-dragged in the clockwise direction is superimposed on a spacetime that is frame-dragged in the anticlockwise direction, the result is a spacetime with no frame drag?
(one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)

Well, I'm a beginner in the study of general relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have only poor knowledge yet. So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment, for gravitational waves?

@JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves.

Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern?

So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.

Pardon, I just spent some naive-philosophy time here with these discussions.

The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.

My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago

(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)

that's true. though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention

@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice

I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy

I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)

Hi to all. Does anyone know where I could write matlab code online (for free)?
Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks.

@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)

@Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs... meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA.

@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the server of the university - which means running another environment remotely - I found an older version of matlab). But thanks again.

@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding.

If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Another forthcoming change to the SAT is the number of answer choices per question: there will be four rather than five options for all questions. This is another way in which the new SAT will more closely resemble the ACT, which already uses four-choice questions for all the tests except Mathematics.

I can already picture the complaints that this makes the SAT easier. And if you are approaching it from the perspective of guessing, it's clearly easier to guess correctly with fewer choices. Your odds for a single question are 25% rather than 20%. But over the course of the whole test, guessing is not a productive strategy, as Randall Munroe pointed out. The odds of guessing correctly on all the multiple-choice questions on the current test are \[\frac{1}{5^{160}}\approx \frac{1}{6.84 \times 10^{111}}\] [Note that Munroe got the number of writing questions wrong, so I corrected his figure. The actual odds are even worse than he calculated.]

The proposed test spec just released has fewer multiple-choice questions (141) and fewer options to choose from, so your odds rise to $$\displaystyle\frac{1}{4^{141}} \approx \frac{1}{7.77 \times 10^{84}}$$ Of course that's still a staggeringly remote probability. If we recalculate Munroe's computer-guessing scenario for the new test, the odds of correctly guessing all the math questions alone after 5 billion years rise to only 5.9%.

There has been a lot of research about the optimal number of choices for multiple-choice questions, and it turns out that, as in other areas of life, having more choices is not necessarily better. In fact, as question writers know, it's very hard to come up with more than two or three plausible wrong answers for many questions. If you must come up with five options for every question, it's often the case that one or two are implausible fillers that few, if any, students will pick.

The number of options you have does affect how many questions you need to put on the test to have a reliable measurement. For example, you can make a good test with only two options per question, but you need to have more questions on the test. On a test the length of the SAT, the choice between four or five options doesn't significantly affect the reliability, and it doesn't simplify the test for the student because the deleted option would probably have been an implausible choice in the first place.
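For anyone who wants to reproduce the arithmetic, a short check (an editorial addition, not from the original post):

from math import log10

# Odds of guessing every multiple-choice question correctly
for options, questions in ((5, 160), (4, 141)):
    p = options ** questions
    print(f"1 in {p:.3g} (i.e., about 10^{log10(p):.1f})")
# prints 1 in 6.84e+111 and 1 in 7.77e+84, matching the figures above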
The vector difference between the wave vectors of the incident and the scattered beam, both of length \(\frac{2\ \pi }{\lambda }\), where \(\lambda \) is the wavelength of the scattered radiation in the medium.

Source: Purple Book, 1st ed., p. 65
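For a scattering angle \(\theta\) between the incident and scattered beams, the magnitude of this vector difference follows from the two equal lengths above (an editorial note, not part of the original entry):

\[ |\vec q| \;=\; 2\cdot\frac{2\pi}{\lambda}\sin\frac{\theta}{2} \;=\; \frac{4\pi}{\lambda}\sin\frac{\theta}{2}. \]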
1. Homework Statement

Consider a branch of [itex]\log{z}[/itex] analytic in the domain created with the branch cut [itex]x=-y,\ x\geq 0.[/itex] If, for this branch, [itex]\log{1}=-2\pi i[/itex], find the following.
[tex]\log{(\sqrt{3}+i)}[/tex]

2. Homework Equations

[tex]\log{z} = \ln{r} + i(\theta + 2k\pi)[/tex]

3. The Attempt at a Solution

This one is actually given in the textbook (odd-numbered problem), but I'm having trouble understanding how the answer was arrived at. The answer given: [itex]0.693 - i\frac{11\pi}{6}[/itex]

I can see easily that [itex]\log{\sqrt{(1)^2 + (\sqrt{3})^2}}=\ln{2} = 0.693...[/itex] The real part here makes sense, since it's the (real) log of the modulus of the given complex number [itex]\sqrt{3}+i[/itex]. I can also understand that the branch cut is made along [itex]x=-y[/itex]. Where I'm getting confused is how the cut actually affects this log.

So [itex]r = 2,\ \theta=\frac{\pi}{6}[/itex]. Winding around counterclockwise from 0, we reach [itex]\frac{\pi}{6}[/itex] easily, since it does not cross the branch cut at all. Does the restriction [itex]\log{1}=-2\pi i[/itex] actually restrict this to moving around the circle clockwise from [itex]-\frac{\pi}{4}[/itex], such that [itex]-\frac{9\pi}{4} < \theta \le -\frac{\pi}{4}[/itex]?

When using this log with principal values and restricted to a domain of analyticity of [itex]-\pi < \theta \le \pi[/itex], we traditionally wind around counterclockwise toward [itex]\pi[/itex] and clockwise toward [itex]-\pi[/itex]. This one, if I understand it correctly, winds around [itex]-2\pi[/itex] from the cut so that it's restricted to one set of values for an otherwise multi-valued log. Why do both of these ways of fixing a single-valued log (the traditional principal-valued log cut on the negative x axis and the one used in this problem) seem to involve winding around the axis in different ways?
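As a numeric cross-check (an editorial sketch, not part of the thread): compute the principal value, then shift the argument into the branch range [itex]-\frac{9\pi}{4} < \theta \le -\frac{\pi}{4}[/itex] suggested above.

import cmath
from math import pi, sqrt

z = sqrt(3) + 1j                 # the point sqrt(3) + i
w = cmath.log(z)                 # principal branch: Im(w) = pi/6

# Shift the argument into the assumed branch range (-9*pi/4, -pi/4]
theta = w.imag
while theta > -pi / 4:
    theta -= 2 * pi

print(w.real)   # 0.6931... = ln 2
print(theta)    # -5.7595... = -11*pi/6, matching the book's answer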