Citing Wikipedia, the Riemann zeta function is the analytic continuation of $$ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} $$ The series itself converges only in the right half of the complex plane where $\operatorname{Re}(s)>1$. What I would like to understand is how the series behaves near the line $1+it$, i.e. the boundary of this right half plane. On the boundary line we have $|\zeta(1+it)|<\infty$. Let's look at the specific example of $t=5$. For real $\epsilon>0$ I would write $$ \zeta(1+5i) = \lim_{\epsilon\to 0}\sum_{n=1}^{\infty} \frac{1}{n^{1+\epsilon+5i}} $$ and I assume that in this order of taking the two limits, this is actually a correct equation, since otherwise $\zeta$ would not be an analytic continuation. Does this mean that $\sum_{n=1}^{\infty} \frac{1}{n^{1+\epsilon+5i}}$ is bounded for each $\epsilon>0$? But if it is bounded yet does not converge as $\epsilon\to0$, does that mean that the sum, if understood as $\lim_{n\to\infty} \sum^n$, oscillates around the value of $\zeta(1+5i)$? What happens when $\epsilon$ goes towards $0$?
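The two regimes can be probed numerically. By a standard Euler-Maclaurin estimate, $\sum_{n\le N} n^{-s} = \zeta(s) - N^{1-s}/(s-1) + O(N^{-\sigma})$, so at $s=1+5i$ the partial sums stay bounded but oscillate around $\zeta(1+5i)$ with amplitude $|N^{-5i}/(5i)| = 1/5$. A rough Python sketch (the truncation points $N$ are arbitrary choices, not part of the question):

```python
def zeta_partial_sum(s, N):
    """Partial sum  sum_{n=1}^{N} n^(-s)  for complex s."""
    return sum(n ** (-s) for n in range(1, N + 1))

# For eps > 0 the series converges absolutely; doubling N barely
# moves the partial sum (the tail is O(N^{-eps})).
s = 1.5 + 5j
gap = abs(zeta_partial_sum(s, 20000) - zeta_partial_sum(s, 10000))
print(gap)  # a few times 1e-3

# On the line itself (eps = 0) the partial sums stay bounded but
# keep oscillating; their moduli hover around |zeta(1+5i)| without
# settling down.
for N in (1000, 2000, 4000, 8000):
    print(N, abs(zeta_partial_sum(1 + 5j, N)))
```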
What is the relationship of the following to other axioms of $\sf ZFC$? $\sf WB$: Every set $A$ is in bijection with a set well-founded by $\in$. Obviously, $\sf ZF$ implies $\sf WB$ (because every set is well-founded) and $\sf ZFC-Reg$ implies $\sf WB$ (because every set is equinumerous to an ordinal, which is well-founded). Is it known if $\sf ZF-Reg$ implies $\sf WB$? Another equivalent statement is: $\sf WB'$: Every set injects into ${\rm WF}:=\bigcup_\alpha V_\alpha$. (Since every set well-founded by $\in$ is in $\rm WF$, $\sf WB\to WB'$, and for the converse note that the range $B\subseteq {\rm WF}$ of any injection from set $A$ must also be a set so it is contained in $V_\delta$ where $\delta$ is the supremum of the ranks of elements of $B$.) When written this way it seems hard to believe that it could be false even without Regularity.
In Bosch's Algebra you're asked to prove that a commutative ring $R$ is Noetherian iff every ideal is finitely generated. I think I managed to prove the "if" part (I write it out just to be explicit and to check it): Let $a_1\subset a_2\subset\dots\subset R$ be an ascending chain of ideals and set $a:=\bigcup a_i$, which is still an ideal. Since every ideal is finitely generated by hypothesis, we have $a=(\alpha_1,\dots,\alpha_m)$ with $\alpha_j\in R$. Each generator $\alpha_j$ lies in some $a_{n_j}$, and since the chain is ascending we can take $n=\max_j n_j$, so that $\alpha_j\in a_n$ for all $j$; thus $a_m=a_n$ for all $m\geq n$. I don't know how to approach the "only if" part: is there any cardinality-based reasoning? After this exercise, another one has got me stuck: a commutative ring $R$ is Noetherian iff every prime ideal is finitely generated. Note: I would be really grateful if the proofs didn't need concepts such as modules and annihilators, as I'm not used to them.
I want to solve this: $$\int 3x \cos x^2 \, dx$$ I get this answer: $$ \frac{\sin 2x}{2}+\frac{\cos 2x}{4}+C $$ but the answer should be: $$ \frac{3 \sin x^2}{2}+C $$ Am I doing anything wrong, or is it possible to rewrite the solution in another way so that they are the same? EDIT Here is my calculation: $$3\int x\cos^2x\,dx = 3\int x\cdot\frac{1}{2}(1+\cos2x)\,dx=\frac{3}{2}\int (x+x\cos2x)\,dx=\frac{3x^2}{4}+\frac{3}{2}\int x\cos2x\,dx$$ and integrating by parts, $$\int x\cos2x\,dx=\frac{x\sin2x}{2}-\int \frac{\sin2x}{2}\,dx=\frac{x\sin2x}{2}+\frac{\cos2x}{4}$$ But as said, the assumption $\cos^2x=\cos x^2$ is clearly wrong.
Grid sensitivity analysis on the front wing of the McLaren MP4-29 F1 race car An important aspect to take into account when pushing for a good Computational Fluid Dynamics simulation is the process of verification and validation. Validation is usually carried out with wind tunnel testing, and that is way out of our scope, unless McLaren engineers were so kind as to provide us with the data. Nonetheless, we can do something on the verification side of the process towards a good computational simulation. Computational geometry and grid The computational geometry was generated in Cinema 4D by Juan David Barrera, a 3D and post-production senior composer. The provided CAD geometry was imported into STAR-CCM+ and a surface wrap was then generated so that the shape could be subtracted from a block to get the fluid volume region. Taking advantage of the inherent symmetry of the geometry, only half of the domain is meshed, which drastically reduces the computational requirements. Three different computational grids were made, with base grid sizes of 4, 5.66 and 8 cm (ratio r = 1.41). The computational domain has dimensions $L\times W\times H = 20\times 5 \times 5 \text{ m}^3$. The blockage ratio is 0.6 %. The total number of cells is 496,654, 892,642 and 1,767,964. A detailed view of the intermediate grid is given in Fig. 1. Boundary conditions and solver settings At the inlet of the domain, a constant wind field profile is imposed, the reference wind speed being 50 kph (~13.89 m/s). The floor is a wall with tangential velocity matching the inlet air velocity. Closure of the RANS equations is obtained with the realizable k–ε turbulence model with two-layer all y+ wall treatment. Convergence was assessed by monitoring the drag and downforce generated by the wing, winglets and nose. Fewer than 1000 iterations were needed to reach a satisfactory solution.
Richardson extrapolation If the grid refinement is performed with constant r, the order of the scheme p and the discretization error ε can be estimated by $$ p \approx \frac{\log \left( \frac{ f_3 - f_2}{f_2 - f_1} \right) }{\log r}, \qquad \varepsilon = \frac{f_1 - f_2}{r^p - 1}, $$ where f is the solution, subscript 1 indicating the finest grid. An estimate of the exact solution is then given by $$F = f_1 + \varepsilon$$ A rough estimate of the grid-independent solution is given below. The difference between the middle grid and the fine grid is 0.6 % (drag) and 1.5 % (downforce). The difference between the fine grid and the estimated grid-independent solution is 1.0 % for both drag and downforce. On the other hand, the middle grid is off by 1.6 % and 2.5 %, respectively. Further analysis A middle grid with mesh refinement in high-gradient zones will be used for further analysis. This way, we can approximate the solution of the fine mesh without dramatically increasing the computational cost. Acknowledgements References [1] Larsson, T., Sato, T. and Ullbrand, B. (2005). Supercomputing in F1 – Unlocking the Power of CFD. [2] Roache, P. (1997). Quantification of uncertainty in computational fluid dynamics. Annual Review of Fluid Mechanics, 29(1), pp. 123–160. [3] Tu, J., Yeoh, G. and Liu, C. (2013). Computational Fluid Dynamics: A Practical Approach. 2nd ed. Waltham, Mass.: Butterworth-Heinemann.
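The Richardson-extrapolation formulas above are easy to script. A minimal sketch in Python (the function and variable names are mine, not from any CFD package), checked on synthetic data that follows the assumed error model $f = F + C h^p$ exactly:

```python
import math

def richardson(f1, f2, f3, r):
    """Observed order p, error estimate eps, and extrapolated solution F
    from solutions on the fine (f1), medium (f2) and coarse (f3) grids,
    with constant refinement ratio r."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    eps = (f1 - f2) / (r ** p - 1)
    return p, eps, f1 + eps

# Synthetic check: if f = F + C*h^p exactly, the formulas recover
# both the order p and the exact solution F.
F_exact, C, r, p_exact = 1.0, 0.1, 1.41, 2.0
f1 = F_exact + C                       # finest grid,  h = 1
f2 = F_exact + C * r ** p_exact        # medium grid,  h = r
f3 = F_exact + C * r ** (2 * p_exact)  # coarse grid,  h = r^2
p, eps, F = richardson(f1, f2, f3, r)
print(p, F)  # ~2.0, ~1.0
```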
Suppose we are given a filtered probability space $(\Omega, \mathscr{F}, \{\mathscr{F}_t\}_{t \in [0,T]}, \mathbb{P})$, where $\{\mathscr{F}_t\}_{t \in [0,T]}$ is the filtration generated by standard $\mathbb P$-Brownian motion. Let $dX_t = \theta_tdt +dW_t$ be an Ito process where $(\theta_t)_{t \in [0,T]}$ is $\mathscr{F}_t$-adapted and $E[\int_0^T \theta_s^2 ds] < \infty$ and $$Y_t := X_tL_t, \ \ L_t = \exp Z_t, \ \ Z_t = -\int_0^t \theta_s dW_s - \frac{1}{2}\int_0^t\theta_s^2ds$$ Suppose Novikov's condition holds. Prove $Y_t$ is a $(\mathscr{F}_t, \mathbb{P})$-martingale. I was able to show that $dY_t = (L_t - \theta_tY_t)dW_t$ from deriving that $dZ_t = -\theta_tdW_t -\frac 1 2 \theta_t^2 dt$ and $dL_t = e^{Z_t}(-\theta_tdW_t)$. Assuming that this is right, does the fact that there is no drift term in $dY_t$ already establish that $Y_t$ is a $(\mathscr{F}_t, \mathbb{P})$-martingale and not merely that it is a local martingale or merely that $E[Y_t | \mathscr{F}_u] = Y_u$? Edit: It seems that according to this, a solution of an SDE is a martingale if it is unique. $E[Y_0^2] = E[X_0^2]< \infty$, I guess? No initial condition is given for $X_t$ Show $\exists K \in \mathbb R$ s.t. $|L_t - \theta_tx| \le K(1+|x|)$ $|(L_t - \theta_tx) - (L_t - \theta_ty)| \le K|x-y|$ We have: $$|L_t - \theta_tx| \le |L_t| + |\theta_t||x| \le |\theta_t|(1+|x|)$$ $$|(L_t - \theta_tx) - (L_t - \theta_ty)| \le |\theta_t||x-y|$$ I don't suppose $E[\int_0^T \theta_s^2 ds] < \infty$ means that $\theta_t$ is bounded, does it?
This question already has an answer here: Compute $\displaystyle\int_0^\infty \frac{dx}{1+x^3}$ by integrating $\dfrac{1}{1+z^3}$ over the contour $\gamma$ (defined below) and letting $R\rightarrow \infty$. The contour is $\gamma=\gamma_1+\gamma_2+\gamma_3$ where $\gamma_1(t)=t$ for $0\leq t \leq R$, $\gamma_2(t)=Re^{i\frac{2\pi}{3}t}$ for $0\leq t \leq 1$, and $\gamma_3(t)=(1-t)Re^{i\frac{2\pi}{3}}$ for $0\leq t \leq 1$. So, the contour is a wedge, and by letting $R\rightarrow \infty$ we're integrating over one third of the complex plane. I believe this means we are integrating over the entire complex plane under the substitution $u=x^3$. There are poles at $-\zeta$ for each third root of unity $\zeta$, so there's only one pole in this wedge. I'll just refer to that pole as $-\zeta$. I guess this means that we can use the residue theorem to say $$\int_{\gamma}\frac{1}{1+z^3}dz=2\pi i\eta(\gamma,-\zeta)\operatorname{Res}\left(\frac{1}{1+z^3},-\zeta\right)=2\pi i \lim_{z\rightarrow -\zeta}\left[(z+\zeta)\frac{1}{1+z^3}\right]$$ I can't evaluate this limit. Also I don't see how it involves $R$, which I'm supposed to be taking a limit of. I suspect I've done something wrong. What's the problem? How do I proceed? Also, after I do properly evaluate this integral, I am assuming that its value is supposed to be $\displaystyle\int_0^\infty\frac{dx}{1+x^3}$. Why? (I think I know why conceptually but I need to see how one rigorously writes that out.) (Note: This is exam review, not homework.)
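For the limit: the pole of $1/(1+z^3)$ is simple, so $\operatorname{Res}\left(\frac{1}{1+z^3}, z_0\right) = \frac{1}{3z_0^2}$, either by L'Hôpital or by writing $1+z^3 = (z-z_0)\,q(z)$ with $q(z_0) = 3z_0^2$. The radius $R$ never enters the residue: for every $R$ large enough to enclose the pole, the residue theorem gives the same value for $\int_\gamma$, and $R$ only matters when bounding the arc $\gamma_2$ (length $\sim R$, integrand $\sim R^{-3}$, so that piece vanishes as $R\to\infty$). A numerical sanity check of the residue formula (a sketch, with the pole written explicitly as $z_0 = e^{i\pi/3}$):

```python
import cmath

z0 = cmath.exp(1j * cmath.pi / 3)   # the only pole inside the wedge
res_formula = 1 / (3 * z0 ** 2)     # Res = 1/p'(z0) for p(z) = 1 + z^3

# Compare with the defining limit  (z - z0)/(1 + z^3)  near z0.
z = z0 + 1e-7
res_limit = (z - z0) / (1 + z ** 3)
print(abs(res_formula - res_limit))  # tiny

# The wedge argument ultimately yields the standard result
#   integral_0^inf dx/(1+x^3) = 2*pi/(3*sqrt(3)).
print(2 * cmath.pi / (3 * 3 ** 0.5))
```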
Given the following series $$\sum_{k=0}^\infty \frac{\sin 2k}{1+2^k}$$ I'm supposed to determine whether it converges or diverges. Am I supposed to use the comparison test for this? My guess would be to compare it to $\frac{1}{2^k}$ and, since that is a geometric series that converges, my original series would converge as well. I'm not all too familiar with comparing series that have trig functions in them. Hope I'm going in the right direction. Thanks!
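The comparison idea works because $|\sin 2k| \le 1$, so $\left|\frac{\sin 2k}{1+2^k}\right| \le \frac{1}{2^k}$ and the series converges absolutely. A small numerical sketch (truncation points chosen arbitrarily):

```python
import math

def term(k):
    return math.sin(2 * k) / (1 + 2 ** k)

# |sin 2k| <= 1, so |a_k| <= 1/(1 + 2^k) < (1/2)^k: the series
# converges absolutely by comparison with the geometric series.
partial_50 = sum(term(k) for k in range(51))
partial_100 = sum(term(k) for k in range(101))
print(abs(partial_100 - partial_50))  # below the geometric tail 2^{-50}
```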
Journal of the Mathematical Society of Japan J. Math. Soc. Japan Volume 67, Number 2 (2015), 753-761. A note on the stable equivalence problem Abstract We provide counterexamples to the stable equivalence problem in every dimension $d\geq2$. That means that we construct hypersurfaces $H_1, H_2\subset\mathbb{C}^{d+1}$ whose cylinders $H_1\times\mathbb{C}$ and $H_2\times\mathbb{C}$ are equivalent hypersurfaces in $\mathbb{C}^{d+2}$, although $H_1$ and $H_2$ themselves are not equivalent by an automorphism of $\mathbb{C}^{d+1}$. We also give, for every $d\geq2$, examples of two non-isomorphic algebraic varieties of dimension $d$ which are biholomorphic. Article information Dates First available in Project Euclid: 21 April 2015 Permanent link to this document https://projecteuclid.org/euclid.jmsj/1429624602 Digital Object Identifier doi:10.2969/jmsj/06720753 Mathematical Reviews number (MathSciNet) MR3340194 Zentralblatt MATH identifier 1338.14058 Citation Poloni, Pierre-Marie. A note on the stable equivalence problem. J. Math. Soc. Japan 67 (2015), no. 2, 753-761. doi:10.2969/jmsj/06720753.
Timeline of prime gap bounds Revision as of 20:38, 17 February 2014 Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison) 58,885,998? (Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? 
(v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? (Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve Jun 6 387,960 (Angeltveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7 26,024? (v08ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. 
Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). 
Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? 
(Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422? (Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen) 59,451 [m=2]#? (Nielsen) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? (Sutherland) Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? 
(Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland) Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380? [m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 309,954? [EH] [m=5] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland) 1,512,832,950? [m=4] (Sutherland) 2,186,561,568#? [m=4] (Sutherland) 131,161,149,090#? [m=5] (Sutherland) Dec 24 474,320? [EH] [m=4] (Sutherland) 1,497,901,734? [m=4] (Sutherland) Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54? (Nielsen) 270? (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? 
[m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional on the Elliott-Halberstam conjecture [GEH] - bound is conditional on the generalized Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
The following thought came to my mind: Given we have a function $f$, and for arbitrary $\varepsilon>0$, $f(a+\varepsilon)= 100\,000$ while $f(a) = 1$. Why is or isn't this function continuous? I thought that with the epsilon-delta definition, where we would choose delta just bigger than $100\,000$, the function could be shown to be continuous. Am I wrong? The function is discontinuous because there exists $\epsilon = 27$ such that for any $\delta > 0$, given $x \in (a, a + \delta), x \ne a$, we have that $|f(x) - f(a)| = 99999 \ge 27$. Edit: Since your question is about understanding the basic concept of the epsilon-delta definition, I've added a short explanation: To prove a function is continuous you must show that for any $\epsilon > 0$ (not of your "choosing"), you can choose a $\delta > 0$ (which you may choose after you learn of $\epsilon$, so it may depend on $\epsilon$), such that if $x$ is within $\delta$ of $a$, then $f(x)$ is within $\epsilon$ of $f(a)$. To prove a function is discontinuous you must show that the above cannot be solved for every $\epsilon$, which means you need to point out just one value of $\epsilon$ for which, no matter how $\delta$ is chosen, there is still some $x$ within $\delta$ of $a$ where $f(x)$ is not within $\epsilon$ of $f(a)$. If $f$ is continuous, by the intermediate value theorem, there is some $y>0$ with $f(a+y)=2$. But you've said $f(a+\epsilon)=100\,000$ for every $\epsilon>0$, so this is impossible. Hint: Choose $0<r<100\,000-1.$ Then $\forall~\epsilon>0,~|f(a+\epsilon)-f(a)|=100\,000-1>r.$
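The discontinuity argument can be made concrete with a hypothetical function realizing the stated values (taking $a = 0$ for illustration; the choice $\epsilon = 27$ is the one from the answer above, and any positive value below $99999$ would do):

```python
# A concrete function with the stated behaviour at a = 0:
# f(a) = 1 but f(a + e) = 100000 for every e > 0.
a = 0.0

def f(x):
    return 1.0 if x <= a else 100000.0

# The epsilon-delta game: for eps = 27, NO delta works, because
# x = a + delta/2 is always within delta of a, yet |f(x) - f(a)| = 99999.
eps = 27.0
for delta in (1.0, 1e-3, 1e-9):
    x = a + delta / 2
    assert abs(x - a) < delta            # x is within delta of a ...
    assert abs(f(x) - f(a)) >= eps       # ... but f(x) is far from f(a)
print("f is not continuous at a: every delta has a witness x")
```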
Construct Triangle from Altitude, Median and Inradius What Might This Be About? Problem Construct $\Delta ABC,$ given $m_a,$ $h_a,$ and the inradius $r.$ Analysis A construction of a triangle from $h_a,$ $m_a$ and $r$ is clear if the length $x$ from $E$ to $A$ can be found. A straightforward way to calculate $x$ is by means of analytic geometry. If we consider $A$ as a point with coordinates $(x_0, y_0),$ $y_0 = h_a - r$ (the origin being in the center of the incircle), then the intersection of the line given by $x_0x + y_0 y = r^2$ with the circle $x^2 + y^2 = r^2$ gives the points of tangency, say $(x_1, y_1)$ and $(x_2, y_2).$ Then the two tangents have the equations $x_1x + y_1y = r^2$ and $x_2x + y_2 y = r^2$ (see, e.g., Joachimsthal's Notations at https://www.cut-the-knot.org/Generalization/JoachimsthalsNotations.shtml). If these tangents are intersected with the line $y = - r$ we get the coordinates of $B$ and $C.$ One just has to solve quadratic equations. While the expressions for the $x_i$ and $y_i$ are rather involved, the expression for the distance $d$ (half the length of $BC$) turns out to be very simple: $\displaystyle d = \frac{rx_0}{r+x_0}.$ The condition $(d + x)^2 + h_a^2=m_a^2$ then determines $x$ and one finds $\displaystyle x = \frac{h_a-r}{h_a-2r}\sqrt{m_a^2-h_a^2}.$ This means that the two triangles $IHE$ and $EAF$ are similar, so that $AF\parallel EH;$ and this suggests the construction. Construction Step 1: Start with two parallel lines at distance $h_a$ from each other. Step 2: Draw a circle of radius $r$ tangent to one of the parallels. Step 3: Draw the perpendicular through the center of the circle to obtain points $E$ and $F.$ Step 4: Draw a segment of length $m_a$ from point $E$ to get point $G$ (this may produce two distinct points.) 
Step 5: Find point $H$ at distance $r$ from $G.$ Step 6: Draw the parallel to $HE$ through point $F.$ This line intersects the other parallel in vertex $A.$ Step 7: From $A$ construct the tangents to the circle to find vertices $B$ and $C.$ Acknowledgment The construction is due to Prof. Dr. René Sperb.
I'm studying for an exam and I don't have the solution, so I hope some of you can help me. I have tried a lot but I can't do this proof. Here is the task: Suppose we have the linear regression model $Y_i=\beta_kX_{k,i}+u_i,$ $i=1,2,\dots, n,$ where $k$ is an index which we will use later (it will be the $k$-th covariate in a more general linear regression model). The least squares estimator of $\beta_k$, which we call $\hat{b}_k$, is the minimizer of $L_k(b_k)= \sum_{i=1}^{n}(Y_i-b_kX_{k,i})^2$. Show that the first order equation $\frac{d}{db_k}L_k(b_k)=0$ has the single solution $\hat{b}_k=\frac{\sum ^n_{i=1}X_{k,i}Y_i }{\sum ^n_{i=1}X^{2}_{k,i}}$ You may without justification conclude that this is the minimizer of $L_k(b_k)$
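Since $L_k$ is a quadratic in $b_k$ with positive leading coefficient $\sum_i X_{k,i}^2$, setting $\frac{d}{db_k}L_k = -2\sum_i X_{k,i}(Y_i - b_kX_{k,i}) = 0$ gives the stated solution, and it is a global minimum. A numerical sanity check of the closed form (the data values are made up for illustration):

```python
# Sketch: the closed-form estimator b_hat = sum(X*Y)/sum(X^2) for the
# no-intercept model, checked against the loss L(b) on made-up data.
X = [1.0, 2.0, 3.0, 4.0]
Y = [2.1, 3.9, 6.2, 7.8]   # roughly Y = 2*X plus noise (made-up numbers)

b_hat = sum(x * y for x, y in zip(X, Y)) / sum(x * x for x in X)

def L(b):
    """Sum of squared residuals for slope b."""
    return sum((y - b * x) ** 2 for x, y in zip(X, Y))

print(b_hat)                 # 1.99 for this data
print(L(b_hat) <= L(b_hat + 0.01), L(b_hat) <= L(b_hat - 0.01))
```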
People who've been called for admissions for BSc (Hons.) Physics/Computer Science (in CMI), please comment below. Note by Deeparaj Bhat 3 years, 3 months ago @Agnishom Chattopadhyay Yep, I am excited about it! So, are you gonna join? I hope to see you there! @Deeparaj Bhat – Yes, I will join. I paid the DD already. @Agnishom Chattopadhyay – Awesome! Btw, after you paid the DD, did you get any further information? @Deeparaj Bhat – Yes, I did. They sent me an accounting receipt and some details relating to the hostel facilities. I've renamed this thread. I hope you do not mind. @Agnishom Chattopadhyay – Ok. I have to send the DD.😬 Yeah, sure. @Deeparaj Bhat – Great. I hope to see you there. By the way, did you start studying any UG math material yet? @Agnishom Chattopadhyay – Rather than having so many messages over here, could we talk over slack? :) @Agnishom Chattopadhyay – These details were sent to you via email right? @Deeparaj Bhat – Yes.. Did @Indraneel Mukhopadhyaya get selected? 
@Aditya Raut @Kishlaya Jaiswal @Deeparaj Bhat Can u get a Fields Medal/Abel Prize just by joining CMI/ISI? Concentrate on math. Despite ur stunts u will never get it. So chill. U will after all be an ordinary researcher at the most. I don't get the point of your comment. dude lets chat in slack. U didnt get my point
I've been having a think about the relationship between index $n$ subgroups of a group $G$ and homomorphisms $G \to \mathbb Z / n \mathbb Z$. Let's first have a look at the case $n = 2$. i) Suppose we have a non-trivial homomorphism $\phi : G \to \mathbb Z / 2 \mathbb Z $, where $G$ is a group. Then $H = \mathrm{ker}\phi$ is a normal subgroup of $G$. Since $\phi$ is non-trivial, $G/H \cong \mathbb Z / 2 \mathbb Z$ and we have that $H$ is a normal subgroup of index $2$ in $G$. So every non-trivial homomorphism $\phi$ gives rise to an index $2$ subgroup $\mathrm{ker}\phi$. ii) Suppose $H$ is a subgroup of index $2$ in $G$. Then $H$ is necessarily normal and not equal to $G$. It seems natural to define $\phi : G \to \mathbb Z / 2 \mathbb Z$ by $\phi(H) = 0$, $\phi(G \backslash H) = 1$. This is necessarily a homomorphism, since the product of two elements in $G \backslash H$ must be in $H$ (using normality of $H$). So every index $2$ subgroup gives rise to a non-trivial homomorphism, and the required correspondence is established. Let's see what happens when we try to generalise this method for higher $n$. If $n = 3$, any non-trivial homomorphism $\phi : G \to \mathbb Z / 3 \mathbb Z$ must be surjective, and so the argument used in i) still works; every non-trivial homomorphism $\phi$ gives rise to an index $3$ subgroup $\mathrm{ker}\phi$. I then come across two problems: I can't seem to generalise i) for $n$ higher than $3$, since I can't guarantee surjectivity. EDIT: I can guarantee surjectivity if $n$ is prime, though (since non-triviality implies the existence of a $g \in G$ with $\phi(g) = a \neq 0$, and $n$ prime implies the existence of a multiplicative inverse of $a \ (\mathrm{mod } \ n )$). I can't even generalise my argument in ii) to the case when $n = 3 $, since normality was key here. I suppose what I'd like to know is: A) Is there a nice generalisation of the $n=2$ result? Does it use the same method and, if so, why haven't I managed to get it to work? 
B) Is there some other approach to this problem / explanation of why no such approach exists? Perhaps it might be helpful to decompose $\mathbb Z / n \mathbb Z$ according to the prime factorisation of $n$? C) Is there anything else relevant to the problem that I haven't thought about? I can see complications may arise in the case where $G$ is not finite. Thanks.
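For a concrete sanity check on why the argument in ii) breaks at $n = 3$, one can brute-force $S_3$: it has three subgroups of index $3$ (none of them normal), yet the only homomorphism $S_3 \to \mathbb Z / 3 \mathbb Z$ is the trivial one, so the correspondence from the $n = 2$ case genuinely fails. A rough Python sketch (the representation and helper names are my own, not from the question):

```python
from itertools import permutations, product

# Represent S3 as tuples: p[i] is the image of i under the permutation.
S3 = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)

def is_subgroup(H):
    # Closed under composition; a finite closed subset is a subgroup.
    return all(compose(a, b) in H for a in H for b in H)

# Subgroups of order 2 have index 6/2 = 3: generated by an involution.
index3_subgroups = [
    frozenset({identity, g}) for g in S3
    if g != identity and compose(g, g) == identity
]
index3_subgroups = [H for H in index3_subgroups if is_subgroup(H)]

# Brute-force all 3^6 maps S3 -> Z/3Z and keep the homomorphisms.
homs = []
for values in product(range(3), repeat=len(S3)):
    phi = dict(zip(S3, values))
    if all(phi[compose(a, b)] == (phi[a] + phi[b]) % 3 for a in S3 for b in S3):
        homs.append(phi)

print(len(index3_subgroups))  # 3: the three transposition subgroups
print(len(homs))              # 1: only the trivial homomorphism
```

So three index-3 subgroups exist with no nontrivial homomorphism to witness them, matching the observation that normality was the key ingredient in ii).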
The basic idea is that you use the estimated error given to you (cheaply) by the embedded methods; you use a metric to define acceptance using user-defined relative and absolute tolerances; based on the order properties of the code and this metric, a new step size is computed; and you avoid large differences in step sizes from one step to the next by limiting the maximum increase and maximum decrease. From "Solving Ordinary Differential Equations, part I" by Hairer, Nørsett and Wanner: For two approximations to the solution $y_{1i}$ (using the lower order) and $\hat{y}_{1i}$ (using the higher order), we want the error to satisfy (componentwise) $$|y_{1i} - \hat{y}_{1i}| \leq sc_i, \quad sc_i = Atol_i + \max\left(|y_{0i}|,|y_{1i}|\right)\cdot Rtol_i.$$ As a measure of the error, one takes $$err = \sqrt{\frac{1}{n}\sum_{i=1}^n\left(\frac{y_{1i}-\hat{y}_{1i}}{sc_i}\right)^{2}}$$ and this value is compared to one. From the expected error behaviour $err \propto C\cdot h^{q+1}$, the optimal step size is obtained as $$h_{\text{opt}} = h\cdot\left(\frac{1}{err}\right)^{\frac{1}{q+1}}$$ where $h$ is the previous step size. Typically, one adds a safety factor so that the probability of having selected a good step size is increased. This safety factor $fac$ is taken as $fac = 0.8$ or as some power of the order like $fac = \left(0.25\right)^{\frac{1}{q+1}}$, so that $$h_{\text{opt}} = fac\cdot h\cdot\left(\frac{1}{err}\right)^{\frac{1}{q+1}}.$$ As a second measure of safety, one wants to prevent the step size from varying too much from one step to the next. Typically, one limits the new step size to at most five times the previous one. And in some codes, a step size increase is forbidden if the previous step was rejected (so if in a previous step $h$ was decreased, it can only increase again once a step with that step size has been accepted).
The initial step size for the algorithm can be set by the user, or it can be estimated using a procedure based on the order of the method; Hindmarsh and Shampine have developed methods to do so.
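The controller described above can be sketched in a few lines. The safety factor and the growth/shrink limits (0.8, 0.2, 5.0) below are illustrative defaults of my own choosing, not values fixed by the book:

```python
import math

def scaled_error(y1, y1_hat, y0, atol, rtol):
    """err = sqrt((1/n) * sum(((y1 - y1_hat) / sc)^2)), sc = atol + max(|y0|,|y1|)*rtol."""
    n = len(y1)
    total = 0.0
    for i in range(n):
        sc = atol + max(abs(y0[i]), abs(y1[i])) * rtol
        total += ((y1[i] - y1_hat[i]) / sc) ** 2
    return math.sqrt(total / n)

def new_step_size(h, err, q, safety=0.8, fac_min=0.2, fac_max=5.0,
                  previous_rejected=False):
    """Propose the next step size; the step itself is accepted when err <= 1."""
    if err == 0.0:
        factor = fac_max                         # error estimate vanished: grow maximally
    else:
        # h_opt = fac * h * (1/err)^(1/(q+1))
        factor = safety * (1.0 / err) ** (1.0 / (q + 1))
    factor = min(fac_max, max(fac_min, factor))  # limit step-size jumps
    if previous_rejected:
        factor = min(1.0, factor)                # no growth right after a rejection
    return h * factor
```

For example, with $err = 1$ the step size shrinks only by the safety factor, while a very small $err$ is clipped so the step grows by at most the factor of five mentioned above.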
To evaluate $S=k\ln(\Omega)$ it is necessary to use statistical mechanics to calculate $\Omega$. In quantum mechanics, for any independent system in a potential there is generally a set of permissible energy levels $\epsilon_1,\,\epsilon_2,\cdots$. The energy of the whole assembly of $N$ similar systems is $E$ and the volume $V$. The number of microstates or configurations (complexions) is $\Omega$ and this depends on $E,\,V,\, N$. It is assumed that all of the different microstates with the same values of $E,\,V,\,N$ are equally probable. There are $n_1$ systems with energy $\epsilon_1$, $n_2$ systems with energy $\epsilon_2$ and so on. The total number of different states of the assembly is the number of ways of dividing $N$ things into groups with $n_1, n_2\cdots$ in each, which is $N!/(n_1!n_2!\cdots)$. The total number of different states is then the total of all of these and is $$\sum_{\text{all possible sets }n_i} \frac{N!}{\prod_i n_i!}$$ As the total energy $E$ is fixed, as is $N$, then $$\Omega = \sum \frac{N!}{\prod_i n_i!} \quad \text{with constraints} \quad \sum_i n_i =N, \sum_i \epsilon_i n_i=E$$ This is a difficult expression to evaluate, but it happens that the largest term makes such an overwhelming contribution that only this term need be considered. The mathematical method to find the maximum is an interesting use of Lagrange's method of undetermined multipliers and is given in texts on stat mech, but it covers several pages so is too long to include here. The result is that $$S=k\left(N\ln\left(\sum_i e^{-\epsilon_i /(kT)} \right) +\frac{E}{kT}\right) $$ where $\displaystyle Z=\sum_i e^{-\epsilon_i /(kT)}$ is the partition function. The entropy thus comes down to calculating the partition function. In fact all other thermodynamic properties can be found once $Z$ is known. In a molecular system the $\epsilon_i$ are the energies of translation, vibration and rotation and electronic excited states (if any).
Knowing the energy levels, say from spectroscopy, then allows each of these terms to be calculated. The translational partition function is calculated using the Sackur-Tetrode equation and a value $\ln(Z)\approx 70 $ is typical. For rotational motion $\ln(Z)\approx 10$ and for vibrations $\ln(Z)\approx 1.5$ for each vibrational mode. The vibrational contribution is small because at room temperature $kT\approx 210$ wavenumbers and vibrations are typically several hundred to a few thousand wavenumbers. In the case of liquid water you have a particular problem, which may be mitigated somewhat by adding H bonds to a molecule as an extra vibration.
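As a rough numerical illustration of the point about vibrations, the vibrational partition function for a single harmonic mode can be summed directly. The 1000 cm⁻¹ mode and the level spacing model (evenly spaced levels measured from the ground state, zero-point energy omitted) are illustrative choices of mine, with $kT \approx 210$ cm⁻¹ as quoted above:

```python
import math

def partition_function(levels, kT):
    """Z = sum_i exp(-eps_i / kT); energies and kT in the same units."""
    return sum(math.exp(-e / kT) for e in levels)

# Harmonic-oscillator vibrational levels: eps_n = n * nu for a mode of
# wavenumber nu (zero-point energy omitted, energies in cm^-1).
nu = 1000.0    # an illustrative mid-range vibration, cm^-1
kT = 210.0     # roughly room temperature, cm^-1 (value quoted in the text)
levels = [n * nu for n in range(100)]   # 100 terms is far past convergence

Z = partition_function(levels, kT)
# Closed form for the same geometric series: Z = 1 / (1 - exp(-nu/kT))
Z_exact = 1.0 / (1.0 - math.exp(-nu / kT))
print(math.log(Z))  # a small contribution for this high-frequency mode
```

Since $\nu \gg kT$ here, $\ln Z$ comes out tiny; low-frequency modes (a few hundred cm⁻¹) give the larger per-mode values mentioned above.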
Consider the problem $y'' + \lambda y = 0$ with the following boundary conditions $y'(0)=0,$ $\,\,y(1)+y'(1)=0$. Find the normalized eigenfunctions. The normalized eigenfunctions are $\phi(n,x) = k_n \cos \sqrt{\lambda_n}\,x$, where $k_n = \left(\frac{2}{1+\sin^2 \sqrt{\lambda_n}}\right)^{1/2}$ This corresponded to the case where $\lambda > 0$. For $\lambda < 0,$ there exist only complex solutions, and solutions to Sturm-Liouville problems necessarily have real eigenvalues. However, for $\lambda = 0$, I obtain the non-trivial solution $y = c_2(1-\frac{1}{2}x)$. My book says that $\lambda=0$ is not an eigenvalue, and yet I have found a non-trivial solution (i.e. one where $c_1,c_2\neq0$). Why is this? Many thanks.
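As a sanity check on the $\lambda = 0$ case (my own verification, not part of the original question): the general solution of $y'' = 0$ is $y = c_1 + c_2 x$, and imposing both boundary conditions forces $c_1 = c_2 = 0$, so $\lambda = 0$ is indeed not an eigenvalue. In particular the proposed $y = c_2(1 - \tfrac{1}{2}x)$ has $y'(0) = -c_2/2$, which violates $y'(0)=0$ unless $c_2 = 0$. The two boundary conditions form a $2\times 2$ linear system for $(c_1, c_2)$, and a nontrivial solution exists only if its determinant vanishes:

```python
# For y = c1 + c2*x we have y' = c2, so:
#   BC1: y'(0) = 0         ->  0*c1 + 1*c2 = 0
#   BC2: y(1) + y'(1) = 0  ->  1*c1 + 2*c2 = 0
# A nontrivial (c1, c2) exists iff the determinant vanishes.
a, b = 0.0, 1.0   # coefficients of (c1, c2) in BC1
c, d = 1.0, 2.0   # coefficients of (c1, c2) in BC2
det = a * d - b * c
print(det)  # -1.0: nonzero, so only the trivial solution survives
```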
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely, Different set theories extending ZF are never bi-interpretable! For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss. The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information. To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations. 
(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year. We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006, (pdf, see pp. 52-55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM in his 2016 article, A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti : a tribute to Albert Visser / Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.) Pages, 99-110. ISBN, 978-1848902046. College Publications, London. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.) Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero. Two theories are thus mutually interpretable, if each of them is interpretable in the other. 
Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other. Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general. By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory. For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order. 
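The linear-order example can be made completely concrete with a toy round trip on a finite domain (an illustration of mine, not from the original post): interpret $<$ inside a structure for $\leq$, reinterpret $\leq$ inside the result, and observe that we land back exactly where we started.

```python
# A linear order <= on a small domain, given as a set of pairs.
domain = {0, 1, 2, 3}
leq = {(a, b) for a in domain for b in domain if a <= b}

# Interpret the strict order < inside (domain, <=): drop the diagonal ...
lt = {(a, b) for (a, b) in leq if a != b}

# ... then interpret <= back inside (domain, <), and compare.
leq_again = lt | {(a, a) for a in domain}

print(leq_again == leq)  # True: the two interpretations are inverse
```

This is exactly the "definable isomorphism back to the original structure" that distinguishes bi-interpretation from mere mutual interpretability.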
For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackermann encoding of hereditarily finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the set $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation. We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF. Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF. Proof. I mean "distinct" here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF.
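The Ackermann encoding is easy to experiment with on the standard model; here `members(m)` lists the $n$ with $n \mathrel{E} m$ (the helper names are my own):

```python
def members(m):
    """All n with n E m: the positions of the 1 bits in the binary expansion of m."""
    out, n = [], 0
    while m:
        if m & 1:
            out.append(n)
        m >>= 1
        n += 1
    return out

def encode(s):
    """Inverse direction: a finite set of codes back to its own code."""
    return sum(1 << n for n in s)

# 0 encodes the empty set; 3 = 0b11 encodes {0, 1}, i.e. the set {emptyset, {emptyset}}.
print(members(0))      # []
print(members(3))      # [0, 1]
print(encode({0, 1}))  # 3
```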
That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not. I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF. If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. 
This is impossible unless all three models are identical, since a simple $\in^M$-induction shows that the isomorphism $\pi$ of $M$ with $\bar M$ satisfies $\pi(y)=y$ for all $y$, because if this is true for the elements of $y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$, and so $N$ and $M$ satisfy the same theory, contrary to assumption. If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski's theorem on the non-definability of truth. The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that, from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$ It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals. A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
Given the space A of all programs that could possibly be written in a given language, and given the space B of all programs that could possibly be written in the same language but without any mutable state: can we be sure that with just B we can solve all the same problems we could solve with A? There is no general notion of "mutable state" for a generic language. I believe we need to restrict this question to some specific language. For instance, consider a basic imperative language with arithmetic expressions ($*,+,-$), read input, write output, assignment, composition, conditional ($e=0$ guards), and while loop. No user-defined functions, hence no recursion. Unbounded-precision integer variables. Forbidding mutability would greatly affect the computational power. Unrestricted, it is Turing powerful, but after the restriction it becomes a very weak language which is able (I believe) to compute only piecewise polynomials / divergence. If the language is instead a higher-order functional language with recursion and assignment (e.g. Scheme), forbidding mutability will not affect its computational power: it was and remains Turing powerful. Mutable state can still be simulated without mutability. A way to do it is to encode state-mutating functions $f$ as pure functions $f({\sf input},{\sf old\_ state})=({\sf output},{\sf new\_state})$ (see the so-called "state monad" in functional programming). Further, Turing & Church proved that the (pure) $\lambda$-calculus has the same power as the Turing machine. So this is a pretty established result! It is one of the very first discoveries that started computer science in the 30s. Yes, there is such a transformation. Consider a programming language which has a number of commands $c_1, \ldots, c_n$ that manipulate state. When we run a program, it starts in some initial state $s_0$.
As commands are executed, they change the state so that it goes through a series of configurations $s_0, \ldots, s_k$ (the series may be infinite if the program runs forever). Instead of there being one global state $s$ which changes with time, we will instead make sure that every command and every function takes in the current state and outputs the new state. We will then "pass along" the "current state" throughout the program in a systematic way. For instance, if there is a command $\mathtt{INCR}(\ell)$ which increases the counter at memory position $\ell$, we will change it to a stateless function $\mathtt{INCR}' : \mathtt{State} \times \mathtt{Location} \to \mathtt{State}$ which takes the "current state" and a location, and returns the new state. In general, any construct $c$ which takes as input $A$ and outputs $B$ will be transformed to a construct $c'$ which takes as input $\mathtt{State} \times A$ and outputs $\mathtt{State} \times B$. This sort of transformation can be performed systematically so that in the end we obtain a stateless program which takes as input the initial state and outputs the final state (as well as whatever output it was going to give). I did not explain all the details, but they are there and are well known. Various complications, such as local state, can be dealt with. A good exercise is this: take a language with a fixed number of stateful variables, say a, b, c, and think about transformations that replace the variables with state, say a triple (a,b,c), which is passed along (if there are any while loops they will have to be replaced with recursive functions). In fact, this is how state is implemented in purely functional languages. Haskell has the state monad, for instance, which does precisely what I described above. You need to define what "mutable state" means. In one interpretation space A is all Turing machines and space B is all the read-only Turing machines. Read-only Turing machines are only as powerful as DFAs.
If you allow one-time writing to new tape in space B then by copying the string over to new tape and making certain substitutions to the string, you can create a String Rewriting System which is Turing complete.
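The state-passing transformation sketched above can be written out for the $\mathtt{INCR}$ example. The representation of $\mathtt{State}$ as a tuple of counters, and the function names, are illustrative choices of mine, not tied to any particular language:

```python
from typing import Tuple

State = Tuple[int, ...]   # one counter per memory location

def incr(state: State, loc: int) -> State:
    """Pure version of INCR(loc): returns the NEW state, mutates nothing."""
    return tuple(v + 1 if i == loc else v for i, v in enumerate(state))

def incr_n(state: State, loc: int, n: int) -> State:
    """A while-loop replaced by recursion, as suggested in the exercise above."""
    return state if n == 0 else incr_n(incr(state, loc), loc, n - 1)

# A 'program' is then just function composition threading the state along:
s0 = (0, 0, 0)
s1 = incr(s0, 0)
s2 = incr(s1, 0)
s3 = incr(s2, 2)

print(s3)  # (2, 0, 1)
print(s0)  # (0, 0, 0) -- the original state is untouched
```

This is the same pattern the state monad packages up: each step consumes the current state and yields the next one, and no binding is ever mutated.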
Let $y = \sin^{-1} x$. Then $x = \sin y$. Differentiating both sides with respect to $x$, we get $1 = (\cos y){{dy} \over {dx}}$ Note that we used the chain rule to differentiate $\sin y$ with respect to $x$ because $\sin y$ is a function of $y$, and $y$ in turn is a function of $x$. or, ${{dy} \over {dx}} = {1 \over {\cos y}}$. or, ${{dy} \over {dx}} = {1 \over {\sqrt {1 - \sin ^2 y} }}$, taking the positive square root since $y \in (-\pi/2, \pi/2)$ on the principal branch, so $\cos y \geq 0$. or, ${{dy} \over {dx}} = {1 \over {\sqrt {1 - x^2 } }}$
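A quick numerical check of the result via a central difference (the step size is an illustrative choice):

```python
import math

def arcsin_derivative(x):
    # The formula derived above: d/dx arcsin(x) = 1 / sqrt(1 - x^2)
    return 1.0 / math.sqrt(1.0 - x * x)

def central_difference(f, x, h=1e-6):
    # Symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.5
approx = central_difference(math.asin, x)
exact = arcsin_derivative(x)
print(abs(approx - exact))  # tiny: the formula checks out numerically
```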
(Originally posted on Math-Stackexchange) https://math.stackexchange.com/questions/2982949/regular-languages-and-regular-expressions Notation: $\Sigma:=\{a_1,\cdots ,a_\Delta\}$ finite alphabet $\Sigma^n= \underbrace{\Sigma \times \Sigma \times \cdots \times \Sigma}_{n}$ Words of length $=n$ $\Sigma^*= \bigcup \limits_{n\geq0} \Sigma^n$ Set of all finite words $L \subseteq \Sigma^*$ Language I'm trying to prove that each of the following statements is either true or false. Unfortunately, I'm having a hard time finding formal proofs for my answers. I hope you can help me out a bit. Let $A,B$ be two subsets of $\{0,1\}^*$ If $A$ is regular pumpable and $B \subseteq A$, then $B$ is regular pumpable too. If $A$ is not regular pumpable and $A \subseteq B$, then $B$ is not regular pumpable either. If $A$ is regular and $B \subseteq A$, then $B$ is regular too. If $A \cap B$ is regular, then $A$ and $B$ are regular too. There is at least one finite language on the alphabet $\{0,1\}$ that doesn't satisfy the pumping lemma.
EDIT: As a consequence of reading the question too quickly, everything I've written below is about functions $\mathbb{R}\rightarrow\mathbb{R}$, not $[0, 1]\rightarrow\mathbb{R}$ - as an exercise, show that this doesn't affect anything. Easiest (if least illuminating) way: count them. There are $2^{2^{\aleph_0}}$-many functions from $\mathbb{R}$ to $\mathbb{R}$, but only $2^{\aleph_0}$-many of those are continuous (exercise - as well as the worst proof imaginable that there exist discontinuous functions). And the number of sequences of continuous functions is no bigger: $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$. Note that this proves a stronger result: the Baire hierarchy is the hierarchy of functions you get by starting with the continuous functions and iteratively taking pointwise limits. Baire class 1 is continuous, and for $\alpha>1$, Baire class $\alpha$ is the set of functions which are the limit of a sequence of functions each individually in some $<\alpha$-level of the Baire hierarchy. The Baire hierarchy goes on for $\omega_1$-many levels, and then you stop getting any new functions. The counting argument shows that there are functions which are not Baire class $\alpha$, for any fixed countable $\alpha$. And, if the continuum hypothesis fails - that is, if $2^{\aleph_0}>\aleph_1$ - then this argument shows there are functions which aren't in any level of the Baire hierarchy! (By the way, there's a similar hierarchy, the Borel hierarchy, and everything I've written about the Baire hierarchy holds of the Borel hierarchy too.) We can actually show that there are some functions not in the Baire hierarchy, without any assumptions on cardinal arithmetic. But this is a bit more complicated. It goes as follows: Fix a bijection $f$ from $\omega_1\times\mathbb{R}$ to $\mathbb{R}$. Basically, to each countable ordinal $\alpha$, $f$ associates continuum-many reals. 
Separately, for each $\alpha\in\omega_1$, fix a bijection $g_\alpha$ between $\mathbb{R}$ and the set of functions of Baire class $\alpha$. (Such a bijection exists, by the argument above; this uses transfinite induction.) Now we combine these! Let $\mathbb{B}$ be the set of all functions in the Baire hierarchy. We can get a function $h:\mathbb{R}\rightarrow \mathbb{B}$ as follows: given $r$, let $f^{-1}(r)=(\alpha, s)$ - we let $h(r)$ be $g_\alpha(s)$. At this point, check that $h$ is in fact a surjection from $\mathbb{R}$ to $\mathbb{B}$. And now we diagonalize! Let $F(r)=h(r)(r)+1$. Then $F\not\in\mathbb{B}$. Done! Note that this can be made explicit: there are lots of easily-describable (if a bit messy) bijections between $\mathbb{R}$ and the set of continuous functions. And there are also lots of reasonably natural injections from $\mathbb{R}^\omega$ into $\mathbb{R}$. Combining these, we get an explicit bijection $\beta$ from $\mathbb{R}$ to the set $\mathcal{S}$ of sequences of continuous functions. Now, we can use this to define a function $F$ which is not a pointwise limit of continuous functions as follows. If $r$ is a real, we let $F(r)$ be $1+\lim_{n\rightarrow\infty} \beta(r)(n)(r)$, if that limit exists, and $0$, if that limit doesn't exist. This $F$ has a perfectly explicit, if annoyingly messy, definition. And it diagonalizes against the sequences of continuous functions, so it's not Baire class 2. Similarly, we can find explicit-if-messy functions not in Baire class $\alpha$, for any fixed countable $\alpha$. Where this breaks down is in trying to get a function which isn't in the Baire hierarchy at all: it is consistent with ZF that every function is in the Baire hierarchy (this involves killing choice to a stupidly extreme degree, however - $\omega_1$ winds up being a countable union of countable sets!).
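The diagonalization step $F(r) = h(r)(r) + 1$ is the same trick as in the countable setting, where it can be run directly: given any enumeration of functions, the diagonal function differs from every one of them. A finite toy analogue of mine (not the real-valued construction, just the shape of the argument):

```python
# An enumeration of (some) functions N -> N, playing the role of h:
fs = [
    lambda n: 0,
    lambda n: n,
    lambda n: n * n,
    lambda n: 2 * n + 7,
]

def F(n):
    # Diagonalize: differ from the n-th function at input n.
    return fs[n](n) + 1

# F disagrees with every listed function at its own index:
print([F(i) != fs[i](i) for i in range(len(fs))])  # [True, True, True, True]
```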
I don't understand the different behaviour of the advection-diffusion equation when I apply different boundary conditions. My motivation is the simulation of a real physical quantity (particle density) under diffusion and advection. Particle density should be conserved in the interior unless it flows out through the edges. By this logic, if I enforce Neumann boundary conditions at the ends of the system, such as $\frac{\partial \phi}{\partial x}=0$ (on the left and the right sides), then the system should be "closed", i.e. if the flux at the boundary is zero then no particles can escape. For all the simulations below, I have applied the Crank-Nicolson discretization to the advection-diffusion equation and all simulations have $\frac{\partial \phi}{\partial x}=0$ boundary conditions. However, for the first and last rows of the matrix (the boundary condition rows) I allow $\beta$ to be changed independently of the interior value. This allows the end points to be fully implicit. Below I discuss 4 different configurations; only one of them is what I expected. At the end I discuss my implementation.

Diffusion only limit

Here the advection terms are turned off by setting the velocity to zero.

Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points

The quantity is not conserved, as can be seen by the pulse area reducing.

Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries

By using the fully implicit equation on the boundaries I achieve what I expect: no particles escape. You can see this by the area being conserved as the particles diffuse. Why should the choice of $\beta$ at the boundary points influence the physics of the situation? Is this a bug or expected?

Diffusion and advection

When the advection term is included, the value of $\beta$ at the boundaries does not seem to influence the solution. However, in all cases the boundaries seem to be "open", i.e.
particles can escape the boundaries. Why is this the case?

Advection and Diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points

Advection and Diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries

Implementation of the advection-diffusion equation

Starting with the advection-diffusion equation, $ \frac{\partial \phi}{\partial t} = D\frac{\partial^2 \phi}{\partial x^2} + \boldsymbol{v}\frac{\partial \phi}{\partial x} $ Writing this using Crank-Nicolson gives, $ \frac{\phi_{j}^{n+1} - \phi_{j}^{n}}{\Delta t} = D \left[ \frac{1 - \beta}{(\Delta x)^2} \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + \frac{\beta}{(\Delta x)^2} \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) \right] + \boldsymbol{v} \left[ \frac{1-\beta}{2\Delta x} \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + \frac{\beta}{2\Delta x} \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) \right] $ Note that $\beta$=0.5 for Crank-Nicolson, $\beta$=1 for fully implicit, and $\beta$=0 for fully explicit.
To simplify the notation let's make the substitution, $ s = D\frac{\Delta t}{(\Delta x)^2} \\ r = \boldsymbol{v}\frac{\Delta t}{2 \Delta x} $ and move the known value $\phi_{j}^{n}$ of the time derivative to the right-hand side, $ \phi_{j}^{n+1} = \phi_{j}^{n} + s \left( 1-\beta \right) \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + s \beta \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) + r \left( 1 - \beta \right) \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + r \beta \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) $ Factoring the $\phi$ terms gives, $ \underbrace{\beta(r - s)\phi_{j-1}^{n+1} + (1 + 2s\beta)\phi_{j}^{n+1} -\beta(s + r)\phi_{j+1}^{n+1}}_{\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}}} = \underbrace{ (1-\beta)(s - r)\phi_{j-1}^{n} + (1-2s[1-\beta])\phi_{j}^{n} + (1-\beta)(s+r)\phi_{j+1}^{n}}_{\boldsymbol{M\cdot}\boldsymbol{\phi^n}} $ which we can write in matrix form as $\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}} = \boldsymbol{M}\cdot\boldsymbol{\phi^{n}}$ where, $ \boldsymbol{A} = \left( \begin{matrix} 1+2s\beta & -\beta(s + r) & & 0 \\ \beta(r-s) & 1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & \beta(r-s) & 1+2s\beta \\ \end{matrix} \right) $ $ \boldsymbol{M} = \left( \begin{matrix} 1-2s(1-\beta) & (1 - \beta)(s + r) & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & (1 - \beta)(s - r) & 1-2s(1-\beta) \\ \end{matrix} \right) $

Applying Neumann boundary conditions

NB: working through the derivation again, I think I have spotted the error. I assumed a fully implicit scheme ($\beta$=1) when writing the finite difference of the boundary condition. If you assume a Crank-Nicolson scheme here, the complexity becomes too great and I could not solve the resulting equations to eliminate the nodes which are outside the domain.
However, it would appear possible (there are two equations with two unknowns), but I couldn't manage it. This probably explains the difference between the first and second plots above. I think we can conclude that only the plots with $\beta$=0.5 at the boundary points are valid. Assuming the flux at the left-hand side is known (assuming a fully implicit form), $ \frac{\partial\phi_1^{n+1}}{\partial x} = \sigma_L $ Writing this as a centred difference gives, $ \frac{\partial\phi_1^{n+1}}{\partial x} \approx \frac{\phi_2^{n+1} - \phi_0^{n+1}}{2\Delta x} = \sigma_L $ therefore, $ \phi_0^{n+1} = \phi_{2}^{n+1} - 2 \Delta x\sigma_L $ Note that this introduces a node $\phi_0^{n+1}$ which is outside the domain of the problem. This node can be eliminated by using a second equation. We can write the $j=1$ node as, $ \beta(r - s)\phi_0^{n+1} + (1+2s\beta)\phi_1^{n+1} - \beta(s+r)\phi_2^{n+1} = (1-\beta)(s - r)\phi_{0}^{n} + (1-2s[1-\beta])\phi_{1}^{n} + (1-\beta)(s+r)\phi_{2}^{n} $ Substituting in the value of $\phi_0^{n+1}$ found from the boundary condition gives the following result for the $j$=1 row, $ (1+2s\beta)\phi_1^{n+1} - 2s\beta\phi_2^{n+1} = (1-\beta)(s - r)\phi_{0}^{n} + (1-2s[1-\beta])\phi_{1}^{n} + (1-\beta)(s+r)\phi_{2}^{n} + 2\beta(r-s)\Delta x\sigma_L $ Performing the same procedure for the final row (at $j$=$J$) yields, $ -2s\beta\phi_{J-1}^{n+1} + (1+2s\beta)\phi_J^{n+1} = (1-\beta)(s - r)\phi_{J-1}^{n} + (1 - 2s(1-\beta))\phi_{J}^{n} + 2\beta(s+r)\Delta x\sigma_R $ Finally, making the boundary rows implicit (setting $\beta$=1) gives, $ (1+2s)\phi_1^{n+1} - 2s\phi_2^{n+1} = \phi_{1}^{n} + 2(r-s)\Delta x\sigma_L $ $ -2s\phi_{J-1}^{n+1} + (1+2s)\phi_J^{n+1} = \phi_{J}^{n} + 2(s+r)\Delta x\sigma_R $ Therefore with Neumann boundary conditions we can write the matrix equation, $\boldsymbol{A}\cdot\phi^{n+1} = \boldsymbol{M}\cdot\phi^{n} + \boldsymbol{b_N}$, where, $ \boldsymbol{A} = \left( \begin{matrix} 1+2s & -2s & & 0 \\ \beta(r-s) &
1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & -2s & 1+2s \\ \end{matrix} \right) $ $ \boldsymbol{M} = \left( \begin{matrix} 1 & 0 & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & 0 & 1 \\ \end{matrix} \right) $ $ \boldsymbol{b_N} = \left( \begin{matrix} 2 (r - s) \Delta x \sigma_L & 0 & \ldots & 0 & 2 (s + r) \Delta x \sigma_R \end{matrix} \right)^{T} $ My current understanding I think the difference between the first and second plots is explained by noting the error outlined above. Regarding the conservation of the physical quantity, I believe the cause is that, as pointed out here, the advection equation in the form I have written it doesn't allow propagation in the reverse direction, so the wave just passes through even with zero-flux boundary conditions. My initial intuition regarding conservation only applied when the advection term is zero (this is the solution in plot number 2, where the area is conserved). Even with Neumann zero-flux boundary conditions $\frac{\partial \phi}{\partial x} = 0$ the mass can still leave the system; this is because the correct boundary conditions in this case are Robin boundary conditions, in which the total flux is specified: $j = D\frac{\partial \phi}{\partial x} + \boldsymbol{v}\phi = 0$. Moreover the Neumann condition specifies that mass cannot leave the domain via diffusion; it says nothing about advection. In essence what we have here are boundary conditions that are closed to diffusion and open to advection. For more information see the answer here, Implementation of gradient zero boundary condition in advection-diffusion equation. Would you agree?
A reader asked “Why did the chicken cross the road?” It is well known that this is a hard question to analyze rigorously. However, by starting with the 1-dimensional chicken diffusion equation \[ \frac{\partial^2\chi}{\partial x^2} + \frac{1}{4\sqrt{\pi}}\frac{\partial \chi}{\partial t} \approx 0, \] where \(\chi\) is the local density of chickens, we may arrive at the answer, “Because it wanted to get to the other side (to a first order approximation).”
How many rectangular $m \times n$ $(0,1)$ matrices (where $n>m$) are there with prescribed row sums $r_i$ for $i=1$ to $m$ such that no two columns are the same? The count of $m\times n$ binary matrices with specified row sums $r_i, i=1,\ldots,m$ and distinct columns can be expressed as a product: $$ n! [1, 0, \ldots ,0] \left( \prod_{i=1}^m T_i \right) [0, \ldots ,0, 1]^T $$ where each $T_i$ is a sparse upper triangular matrix depending only on $n$ and $r_i$. The factor $n!$ accounts for permutations of the $n$ distinct columns. We suppress further consideration of that factor by requiring the columns to be ordered descendingly, taking the bit of an upper row to be more significant than one in a lower row. The matrix $T_i$ is the adjacency matrix of a directed multigraph on states that are partitions of the number of columns $n$, ordered by refinement, and whose edges correspond to refining one partition to another by assigning $r_i$ ones to the next row of the matrix (potentially distinguishing some columns that were identical up to that row). Note that initially (before any rows are assigned) all columns are identical, which corresponds to the trivial partition $[n]$. After all rows are assigned we will have all columns distinct, which corresponds to the slightly less trivial partition $[1,1,\ldots ,1]$. Note that this graph allows self-loops, but otherwise it has no cycles. Taking the product of matrices counts paths from one state to another, and we are interested in the count of paths from $[n]$ to $[1,1,\ldots ,1]$ as this corresponds (apart from column permutations) to the number of admissible binary matrices (specified row sums and distinct columns). Still omitting the $n!$ factor, I calculated by hand (and checked with bits of Prolog code) small examples of the form $2k \times 2k$ binary matrices with all row sums equal to $k$. For $k=1$ we get 2 solutions. For $k=2$ there are 52 solutions. For $k=3$ there are 83,680 solutions.
As a practical matter we do not need to consider all possible partitions of $n$, only those that are attainable. Taking into account that the first row uniquely transitions from $[n]$ to $[r_1,n-r_1]$ reduces the matrix product by one index and limits the possible partitions. For case $k=4$ in the examples described above, only eight partitions are needed, and the transition matrix can take the form: $$ T = \begin{pmatrix} 2 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 4 & 0 & 6 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 12 & 0 & 1 \\ 0 & 0 & 0 & 6 & 2 & 8 & 6 & 0 \\ 0 & 0 & 0 & 0 & 10 & 0 & 20 & 0 \\ 0 & 0 & 0 & 0 & 0 & 14 & 16 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 30 & 20 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 70 \end{pmatrix} $$ Thus for $k=4$ we'd get (apart from the factor $8!$) a count of $(T^7)_{1,8}$ or 13,849,902,752 solutions. The usefulness of this approach will be limited by how many partitions/states are needed by given parameters $m, n, r_i$. I'd be happy to post my Prolog snippets and/or attempt a larger problem if anyone is interested.
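The small counts above can be cross-checked by brute force. The following Python sketch (my own code, not the Prolog snippets mentioned) enumerates all $2k\times 2k$ binary matrices with row sums $k$, counts those with pairwise distinct columns, and divides by $(2k)!$ to remove the column-permutation factor:

```python
from itertools import combinations, product
from math import factorial

def ordered_count(k):
    """Count 2k x 2k 0-1 matrices with all row sums k and pairwise
    distinct columns, divided by (2k)!.  The division is exact because
    distinct columns make every column-permutation orbit have full
    size (2k)!."""
    n = 2 * k
    # all possible rows: 0-1 vectors of length n with exactly k ones
    rows = [tuple(1 if i in c else 0 for i in range(n))
            for c in combinations(range(n), k)]
    total = 0
    for mat in product(rows, repeat=n):  # choose each of the n rows
        cols = list(zip(*mat))
        if len(set(cols)) == n:          # all columns distinct
            total += 1
    assert total % factorial(n) == 0
    return total // factorial(n)
```

This reproduces the hand-computed values `ordered_count(1) == 2` and `ordered_count(2) == 52`; $k=3$ is already slow this way, since there are $\binom{6}{3}^6 = 64{,}000{,}000$ row choices to scan.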
Let $S\subseteq\Bbb R$ and $\alpha \in \Bbb R$. If $\alpha = \sup(S)$, then show that for any $\epsilon > 0$, there is some $x \in S$ such that $\alpha - \epsilon < x$. What I have done: since $\alpha$ is the supremum, $x\le\alpha$ for all $x \in S$, and $\alpha - \epsilon < \alpha$. I want to show there is an $x$ between $\alpha - \epsilon$ and $\alpha$, but I can't, and I'm not even sure it is possible. It's been a long time since I posted a question, so sorry if this seems like a total homework question.
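For what it's worth, here is a sketch of the standard contradiction argument (not part of the original question). Suppose some $\epsilon > 0$ admits no $x \in S$ with $\alpha - \epsilon < x$. Then $x \le \alpha - \epsilon$ for every $x \in S$, so $\alpha - \epsilon$ is an upper bound of $S$ that is strictly smaller than $\alpha$. This contradicts the fact that $\alpha = \sup(S)$ is the least upper bound. Hence some $x \in S$ satisfies $\alpha - \epsilon < x$, and automatically $x \le \alpha$, so in fact $\alpha - \epsilon < x \le \alpha$.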
Algebraic Geometry Seminar Spring 2015

The seminar meets on Fridays at 2:25 pm in Van Vleck B135. The schedule for the previous semester is here.

Algebraic Geometry Mailing List

Please join the Algebraic Geometry Mailing list to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Spring 2015 Schedule

•January 30: Manuel Gonzalez Villa (Wisconsin), Motivic infinite cyclic covers
•February 20: Jordan Ellenberg (Wisconsin), Furstenberg sets and Furstenberg schemes over finite fields (host: I invited myself)
•February 27: no talk scheduled
•March 6: Matt Satriano (Johns Hopkins), When is a variety a quotient of a smooth variety by a finite group? (host: Max)
•March 13: Jose Rodriguez (Notre Dame), Numerical irreducible decomposition of multiprojective varieties (host: Daniel)
•March 20: Dima Arinkin (Wisconsin), Smooth categorical representations of reductive groups
•March 27: Joerg Schuermann (Muenster), Chern classes and transversality for singular spaces (host: Max)
•April 17: Lee McEwan (OSU, Mansfield), TBA (hosts: Max and Gonzalez Villa)
•April 24: Matthew Woolf (UIC), TBA (host: Daniel)
•May 1: Byeongho Lee (Purdue), $G$-Frobenius manifolds (host: Andrei)
•May 8: Brian Lehmann (Boston College), TBA (host: Daniel)

Abstracts

Manuel Gonzalez Villa

Motivic infinite cyclic covers (joint work with Anatoly Libgober and Laurentiu Maxim) We associate with an infinite cyclic cover of a punctured neighborhood of a simple normal crossing divisor on a complex quasi-projective manifold (assuming certain finiteness conditions are satisfied) an element in the Grothendieck ring, which we call motivic infinite cyclic cover, and show its birational invariance. Our construction provides a unifying approach for the Denef-Loeser motivic Milnor fibre of a complex hypersurface singularity germ, and the motivic Milnor fiber of a rational function, respectively.
Jordan Ellenberg Furstenberg sets and Furstenberg schemes over finite fields (joint work with Daniel Erman) We prove a theorem of Kakeya type for the intersection of subsets of n-space over a finite field with k-planes. Let S be a subset of F_q^n with the "k-plane Furstenberg property": for every k-plane V, there is a k-plane W parallel to V which intersects S in at least q^c points. We prove that such a set has size at least a constant multiple of q^{cn/k}. The novelty is the method; we prove that the theorem holds, not only for subsets of the plane, but arbitrary 0-dimensional subschemes, and reduce the problem by Grobner methods to a simpler one about G_m-invariant subschemes supported at a point. The talk will not assume that everyone in the room is an algebraic geometer. Matt Satriano When is a variety a quotient of a smooth variety by a finite group? We explore the following local-global question: if X is locally a quotient of a smooth variety by a finite group, then is it globally of this form? We show that the answer is "yes" whenever X is quasi-projective and already known to be a quotient by a torus. In particular, this applies to all quasi-projective simplicial toric varieties. We discuss the proof and show how it can be made explicit in the case of toric varieties. This is joint work with Anton Geraschenko. Jose Rodriguez Numerical algebraic geometry is a growing area of algebraic geometry that involves describing solution sets of systems of polynomial equations. This area has already had an impact in kinematics, statistics, PDE's, and pure math. This talk will introduce key concepts in numerical algebraic geometry that are used to describe positive dimensional projective varieties. In particular, witness sets will be defined and the classic "regeneration procedure" will be described. The second part of the talk will describe a new "Multi-Regeneration Procedure". 
This technique gives an effective way of describing multiprojective varieties and determining their multidegrees. Throughout the talk motivating examples will be provided, and no previous knowledge of numerical algebraic geometry will be assumed. This is joint work with Jonathan Hauenstein. Dima Arinkin Smooth categorical representations of reductive groups Let G be a complex reductive group. Consider categories equipped with smooth (sometimes called strong) action of G. Natural and important examples of such categories arise from geometry: if X is a variety equipped with an action of G, for instance, X=G itself, or X=G/B is the flag space of G, then the category of D-modules on X carries a smooth action of G. We view such categories as (smooth) categorical representations of G. The theory of smooth categorical representations of G is similar to the representation theory of a reductive group over a finite field; I will discuss this similarity in my talk. The surprising twist (and the main result of this talk) is that the theory of smooth categorical representations is simpler than its classical counterpart: there are no cuspidal representations! Joerg Schuermann Chern classes and transversality for singular spaces Let $X$ and $Y$ be closed complex subvarieties in an ambient complex manifold $M$. We will explain the intersection formula $c(X) \cdot c(Y)= c(TM)\cap c(X\cap Y)$ for suitable notions of Chern classes and transversality for singular spaces. If $X$ and $Y$ intersect transversally in a Whitney stratified sense, this is true for the MacPherson Chern classes (of adapted constructible functions). If $X$ and $Y$ are "splayed" in the sense of Aluffi-Faber, then this formula holds for the Fulton-(Johnson-) Chern classes, and is conjectured for the MacPherson Chern classes.
We explain that the version for the MacPherson Chern classes is true under a micro-local "non-characteristic" condition for the diagonal embedding of $M$ with respect to $X\times Y$. This notion of non-characteristic is weaker than the Whitney stratified transversality as well as the splayedness assumption. Byeongho Lee $G$-Frobenius manifolds The goal of this talk is to introduce the problem of orbifolding Frobenius manifolds and the related concept of a $G$-Frobenius manifold for each finite group $G$. Frobenius manifolds are among the central players of classical mirror symmetry, and orbifolding them can be described as producing a new Frobenius manifold when the original one has a certain group symmetry. After giving some background, $G$-Frobenius manifolds will be introduced as an ingredient of the procedure of orbifolding.
Q. A block of mass m, lying on a smooth horizontal surface, is attached to a spring (of negligible mass) of spring constant k. The other end of the spring is fixed, as shown in the figure. The block is initially at rest in its equilibrium position. If now the block is pulled with a constant force F, the maximum speed of the block is: Solution: Maximum speed is at the mean position (equilibrium), where F = kx, so $x = \frac{F}{k}$. $ W_{F} + W_{sp} = \Delta KE $ $ F\left(x\right) - \frac{1}{2} kx^{2} = \frac{1}{2} mv^{2} - 0$ $ F\left(\frac{F}{k}\right)- \frac{1}{2} k \left(\frac{F}{k}\right)^{2} = \frac{1}{2} mv^{2} $ $ \Rightarrow v_{max} = \frac{F}{\sqrt{mk}}$ Questions from JEE Main 2019 5. One of the two identical conducting wires of length L is bent in the form of a circular loop and the other one into a circular coil of N identical turns. If the same current is passed in both, the ratio of the magnetic field at the centre of the loop $(B_L)$ to that at the centre of the coil $(B_C)$, i.e., $\frac{B_L}{B_C}$, will be 10. The magnetic field associated with a light wave is given, at the origin, by $B = B_0 [\sin((3.14 \times 10^7)\,ct) + \sin((6.28 \times 10^7)\,ct)]$. If this light falls on a silver plate having a work function of 4.7 eV, what will be the maximum kinetic energy of the photoelectrons? $(c = 3 \times 10^{8}\ m\,s^{-1}, h = 6.6 \times 10^{-34}\ J\,s)$ Physics Most Viewed Questions 1. An AC ammeter is used to measure current in a circuit. When a given direct current passes through the circuit, the AC ammeter reads 3 A. When another alternating current passes through the circuit, the AC ammeter reads 4 A. Then the reading of this ammeter, if DC and AC flow through the circuit simultaneously, is 9. A cannon of mass 1000 kg, located at the base of an inclined plane, fires a shell of mass 100 kg in a horizontal direction with a velocity of 180 $km\,h^{-1}$. The angle of inclination of the inclined plane with the horizontal is 45$^{\circ}$.
The coefficient of friction between the cannon and the inclined plane is 0.5. The height, in metres, to which the cannon ascends the inclined plane as a result of the recoil is (g = 10 $m\,s^{-2}$)
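Returning to the spring problem above, the energy-method result $v_{max} = F/\sqrt{mk}$ can be sanity-checked by integrating $m\ddot{x} = F - kx$ directly from rest. A small sketch (the parameter values $m$, $k$, $F$ are arbitrary choices of mine):

```python
import math

def max_speed(m, k, F, dt=1e-4, t_end=10.0):
    """Integrate m x'' = F - k x from rest with semi-implicit Euler
    and return the largest speed observed."""
    x, v, vmax = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v += (F - k * x) / m * dt   # update velocity from the net force
        x += v * dt                 # then position (symplectic ordering)
        vmax = max(vmax, abs(v))
    return vmax

m, k, F = 2.0, 8.0, 3.0
predicted = F / math.sqrt(m * k)    # = 0.75 for these values
```

The simulated maximum speed agrees with the predicted $F/\sqrt{mk}$ to within the integration tolerance, as expected since the exact solution is $v(t) = \frac{F}{\sqrt{mk}}\sin(\sqrt{k/m}\,t)$.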
I am trying to show that if $$F(z) = \sum_{n=1}^{\infty}\frac{z^n}{1-z^n}$$ then $$|F(r)| \geq \frac{c}{1-r}\log\left(\frac{1}{1-r}\right)$$ as $r \to 1^-$, where $r \in (0,1)$. I honestly have no idea where to start. One might start from noticing that for any $z\in(0,1)$ we have $$ F(z)=\sum_{n\geq 1} d(n) z^n $$ by expanding $\frac{z^n}{1-z^n}$ as a geometric series and rearranging the series. The behaviour of the divisor function $d(n)$ is quite erratic, but there is a simple trick for mitigating it: $$ \frac{1}{1-z}F(z) = \sum_{n\geq 1}\left(\sum_{m=1}^{n}d(m)\right)z^n $$ and by Dirichlet's hyperbola method: $$ \sum_{m=1}^{n} d(m) = n H_n + (\gamma-1) n + O(\sqrt{n}) $$ where $$ \sum_{n\geq 1}\left(n H_n +(\gamma-1) n \right)z^n = \frac{\gamma z-z\log(1-z)}{(1-z)^2}$$ $$ \sum_{n\geq 1}\frac{n\sqrt{\pi}\binom{2n}{n}}{4^n}z^n = \frac{z\sqrt{\pi}}{2(1-z)^{3/2}},\qquad \frac{n\sqrt{\pi}\binom{2n}{n}}{4^n}\sim\sqrt{n} $$ ensure that in a left neighbourhood of $z=1$ we have $$ F(z) \sim \frac{-\log(1-z)+\gamma}{(1-z)}. $$
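The asymptotic can be checked numerically. A quick sketch (the truncation limits and the test point $r=0.999$ are my own choices):

```python
import math

def F(r, nmax=200_000):
    """Truncated Lambert series sum_{n>=1} r^n / (1 - r^n)."""
    total = 0.0
    for n in range(1, nmax + 1):
        t = r ** n
        if t < 1e-18:          # remaining tail is negligible
            break
        total += t / (1.0 - t)
    return total

r = 0.999
gamma = 0.5772156649015329     # Euler-Mascheroni constant
approx = (-math.log(1 - r) + gamma) / (1 - r)
```

At $r = 0.999$ the truncated sum agrees with $(-\log(1-r)+\gamma)/(1-r)$ to within a few percent, and in particular it comfortably exceeds $\frac{1}{1-r}\log\frac{1}{1-r}$, consistent with the claimed lower bound (with $c=1$ at this point).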
In Wikipedia, it says that any epsilon number with a countable index is countable. How is that? Out of all those numbers, I especially want to know why $\epsilon_0$ is countable. Thanks. It is often understood from context which of the two is used, but it can still be quite confusing to people new to this. Cardinal exponentiation $\omega^\omega$ means we take the cardinality of all the functions from $\omega$ to $\omega$; this is of course of size continuum, which is an uncountable cardinality. On the other hand, $\omega^\omega$ in ordinal exponentiation means that we take some order type which is a supremum of countable ordinals. What are those ordinals? These are the ordinals $\omega^n$, themselves defined as limits of smaller ordinals, and so we can continue to unfold the definitions of ordinal arithmetic until we have that $\omega^\omega$ is the supremum of some much "simpler" set (it is actually much more complicated than just $\omega^n$, though). This "simpler" set contains only countable ordinals, and is itself countable. We know that the countable union of countable sets is countable, therefore $\omega^\omega$ is countable. What does all that have to do with $\epsilon_0$? Well, by induction we have that $\omega,\omega^\omega,\omega^{\omega^\omega},\ldots$ are all countable, and there are only countably many of those. From this we have that $\epsilon_0$ is also countable. It is a large countable ordinal. Now we can continue by induction: what is $\epsilon_1$? We repeat the same process, only we start from $\epsilon_0+1$ instead of $\omega$. This is again a countable process, so it must end at a countable ordinal as before; by induction we can see that $\epsilon_\alpha$ is countable for every countable index $\alpha$. What happens with $\epsilon_{\omega_1}$?
Well, for every $\alpha<\omega_1$ we have that $\epsilon_\alpha$ is countable, and if $\alpha<\beta$ then $\epsilon_\alpha<\epsilon_\beta$. Therefore we have $\aleph_1$ many distinct ordinals below $\epsilon_{\omega_1}$, so it is not a countable ordinal anymore. In fact $\epsilon_{\omega_1}$ is the limit of all those $\epsilon_\alpha$ for countable $\alpha$, and one can see that the supremum of uncountably many countable ordinals cannot be other than $\omega_1$. (If $\delta>\omega_1$ it cannot be the supremum of a set of countable ordinals!) Wikipedia gives the answer, in fact: it's the union of a countable set of countable ordinals $\left\{\omega,\omega^\omega,\omega^{\omega^\omega},\ldots\right\}$. As written on Wikipedia, it's because it's a countable union of countable sets. Let $u_0 = 1$ and $u_{n+1} = \omega^{u_n}$. It is easily shown by induction that $u_n$ is countable for all $n$. Finally, you have (by definition) $\epsilon_0 = \bigcup_{n < \omega} u_n$, a countable union of countable sets. Vsauce gives a really good introduction to the subject: https://www.youtube.com/watch?v=SrU9YDoXE88
Consider the function $f: \mathbb R \rightarrow \mathbb R$ defined by $$f(x)=\begin{cases}x & x\neq 1,2\\ 2 &x=1\\ 1 &x=2\end{cases}.$$ That is $f$ is the identity except it swaps $1$ and $2$. Then $f$ is Borel and bijective and maps countable dense sets to countable dense sets, since if $D \subset \mathbb R$ is a dense subset and $x \in D$ then $D\setminus \{x\}$ is also dense. Edit: I misread your second question. I think we can make a map that messes with the rationals and is still Borel. Let $g: \mathbb R \rightarrow \mathbb R$ be the identity on the irrationals. Let $g$ map the negative rationals to themselves and let $g$ send $\mathbb N$ to the positive non-integer rationals and send the positive non-integer rationals to $\mathbb N$. Then $g$ should still be Borel, but $\mathbb Q \setminus \mathbb N$ is a countable dense subset whose image is not dense under $g$.
This is a general question. A function is said to be continuous. Can it still have vertical asymptotes? Looking at the definition of continuity, I would say no, because near a vertical asymptote $x-\delta$ might have a $y$ close to minus infinity while $x+\delta$ might have a value near plus infinity, for example. It doesn't make sense to ask whether a function is continuous at a point where it's not defined. So if it has a vertical asymptote, then that's not a point of discontinuity, but rather a point that is not part of the domain. To make it clear what I mean by "doesn't make sense", the definition of continuity at a point $x$ involves the expression $|f(x) - f(y)|$. If $x$ is such that $f(x)$ doesn't make sense, then neither does that expression, so asking about continuity is meaningless. The function $f\colon \Bbb R\setminus\{0\}\to\Bbb R$ given by $f(x)=\frac 1x$ is continuous on all of its domain. Yet there is a vertical asymptote. An important notion not mentioned in any answer so far is compactness. There is a formal definition of this (https://en.wikipedia.org/wiki/Compact_space) which can be applied to any abstract domain that your function might be defined on, but if your domain is a subset of the number line then it's enough to check whether it's bounded (doesn't "stretch off" to infinity) and closed (includes all its "end points"). Other answers have given various examples of functions which were continuous on their domain and still included a vertical asymptote, but that asymptote was always an "end point" of the domain that was not included in the domain itself; so the domain was not closed and hence not compact. In general, the image of a compact region under any continuous function will always be compact; so, if a function has values in the number line (or any other metric space) then the set of values of the function on any compact region of its domain must be compact, and hence bounded, and there can be no vertical asymptote. Yes.
$1/x$ is continuous on $(0,1)$ and has a vertical asymptote at 0. The standard definition of continuity only considers points in the domain of the function. Note that by common understanding, a point where a function is undefined, like a vertical asymptote, is not included in its domain. Therefore, a function can have a vertical asymptote and still be a continuous function. For example, from Stein and Barcellos, Calculus and Analytic Geometry, 5th Edition (sec. 2.8): Definition Continuous function. Let f be a function whose domain is the x axis or is made up of open intervals. Then f is a continuous function if it is continuous at each number a in its domain. EXAMPLE 1 Use the definition of continuity to decide whether $f(x) = 1/x$ is continuous. SOLUTION [reasoning about definition]... Thus $1/x$ is continuous at every number in its domain. Hence it is a continuous function. Even though, of course, $1/x$ has a vertical asymptote at $x = 0$. "Because near a vertical asymptote x-delta might have an y of close to minus infinity, while x+delta might have a value of near +infinity, for example." Then you just have to choose a smaller delta. Take $f(x) = 1/x$ for example. It is continuous on all $x \ne 0$. And as $f(0)$ is undefined, it is continuous on all points in its domain. Pick a point $x_1 > 0$. Then the $\delta$ you choose must satisfy $\delta < |x_1 - 0|=x_1$. But that is always possible. If $\delta < x_1$ then $0 < x_1 - \delta < x_1 < x_1 + \delta$, and if $|x_1 - y| < \delta$ then $0< \frac 1{x_1+ \delta} < \frac 1y < \frac 1{x_1-\delta}$, which does not have the problem of containing a range of unbounded values. The same argument holds for $x_2 < 0$. ===== Practical example: Let $x_1 = 1/\text{googol} = 10^{-100}$. Is $f(x) = 1/x$ continuous at $x= x_1$? $x_1$ is pretty damned close to the asymptote, isn't it. Let $\epsilon > 0$. We want to find a $\delta > 0$ so that for all $y$ such that $|y- 10^{-100}| < \delta$ we have $|f(y) - f(10^{-100})| = |1/y - 10^{100}| < \epsilon$.
To find such a $\delta$ I need $-\epsilon < 1/y - 10^{100} < \epsilon$, or $10^{100} - \epsilon < 1/y < 10^{100} + \epsilon$, or $\frac 1{10^{100}+\epsilon} < y < \frac 1{10^{100} - \epsilon}$. Subtracting $x_1$ throughout: $\frac 1{10^{100} +\epsilon} -x_1 < y-x_1 < \frac 1{10^{100} - \epsilon}-x_1$. So for $\delta = \min \left(\left|10^{-100} - \frac 1{10^{100} +\epsilon}\right|,\ \left|\frac 1{10^{100} -\epsilon} - 10^{-100}\right|\right)$, as long as $|x_1 - y| < \delta$ then $|1/x_1 - 1/y| < \epsilon$. Note: $0 < \delta < 1/10^{100}$. So $f$ is continuous at $x = 1/\text{googol}$. But the delta we had to find was even smaller than $1/\text{googol}$. How about $f(x) = x^{1/3}$ (real branch)? The tangent at $(0,0)$ is vertical and the function is continuous on the reals.
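The googol calculation above can be verified with exact rational arithmetic (floating point is useless at $10^{-100}$). A sketch using Python's `fractions` module:

```python
from fractions import Fraction

x1 = Fraction(1, 10 ** 100)          # the point 1/googol
eps = Fraction(1, 10)                # an arbitrary epsilon > 0
inv = Fraction(10 ** 100)            # 1/x1

# delta from the two one-sided distances derived above
delta = min(x1 - 1 / (inv + eps), 1 / (inv - eps) - x1)
assert 0 < delta < x1

# any y strictly within delta of x1 satisfies |1/y - 1/x1| < eps
for t in (Fraction(1, 3), Fraction(9, 10)):
    for sign in (1, -1):
        y = x1 + sign * t * delta
        assert abs(1 / y - 1 / x1) < eps
```

Because `Fraction` arithmetic is exact, the assertions confirm the epsilon-delta argument literally, even this close to the asymptote.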
The project file used in this example is provided in the top-level page. Alternatively, the file can be created from scratch by following the steps described in the Modeling Instructions page. The figure below shows the step-index fiber geometry and index profile, together with a screenshot of the top XY plane view in Layout mode in FEEM after the simulation has been set up. (Left) Index profile of the step-index fiber. (Right) XY top plane view of the simulation setup in FEEM. Note that the cladding radius used in the simulation setup is much smaller than the actual physical value. This is justified by the fact that the modes of interest are mostly confined in the core so it is not necessary to simulate the entire cladding. As explained in the Modeling Instructions page, we assign PEC boundary conditions to the outer boundary of the cladding circle so this becomes the boundary of the simulated region. Open the step_index_fiber.ldev project file and run the simulation by clicking the "Run" button in the "FEEM" tab of the tabbed toolbar. The settings specified under FEEM Solver Region in the Modeling Instructions page have been chosen so that the solver will find the first twenty modes supported by the fiber for a wavelength of \(\lambda=1.55\mu\text{m}\). Here we will analyze those results and validate them with a semianalytical calculation for the TM modes. It is possible to find the effective indices of the TE, TM and mixed modes (EH and HE) supported by the step-index fiber by solving transcendental equations numerically (see Related references at the top-level page). Here we only consider the semianalytical solutions for TM modes, which are calculated in the MATLAB script step_index_fiber.m. You can download the text file step_index_fiber.txt, which contains the calculated results for the first couple of TM modes (this file and the MATLAB script were also used in this example). 
The MATLAB script uses a root finder to solve the characteristic equation for the effective index \( n \): $$ \frac{J_1(h \; a)}{h \; a \; J_0(h \; a)} + \bigg( \frac{n_2}{n_1}\bigg)^2\frac{K_1(q \; a)}{q \; a \; K_0(q \; a)}=0 $$ where \( n_1 = 1.44 \) and \( n_2 = 1.4 \) are the material index values for the core and cladding, respectively. The core radius is denoted by \( a = 10 \mu \text{m} \). \( J_{\nu}(x) \) is the Bessel function of the first kind and \( K_{\nu}(x) \) is the modified Bessel function of the second kind, with $$ h \equiv k_0 \sqrt{n_1^2 - n ^2}, \quad q \equiv k_0 \sqrt{n^2 - n_2 ^2}, \quad k_0 = \frac{2 \pi}{\lambda}. $$ The effective index can be visualized by selecting the FEEM in the Objects Tree, right clicking on the "modeproperties" result in "Result View - FEEM" and selecting Visualize>New Visualizer. The plot of effective index versus mode number will look like the one below. Note that the loss is exactly zero since we only used simple dielectric materials with real-valued refractive index. The staircase shape of the effective index plot is due to mode degeneracy in this fiber. Effective index of the first twenty modes supported by the step-index fiber calculated with FEEM. According to the semianalytical solution described above, the first TM mode, TM01, has effective index 1.43729 (up to five decimal figures). We can use this value to search for this mode and compare the FEEM and semianalytical results. To modify the mode search: •Switch to Layout mode and go to the Modal Analysis tab in the Edit properties window of the FEEM. •Disable the option "use max index" and provide the new target value for the effective index, n. •It is also convenient to reduce the number of trial modes to 1 so that only the mode with the closest effective index will be reported in the results. After running the new simulation, we find that the FEEM and semianalytical results are very close.
A careful comparison reveals that the relative error is less than 0.001%, which confirms the validity of the FEEM results. For a more detailed discussion of convergence testing in FEEM, see the Graded-index fiber example. The modal fields can be visualized by selecting the FEEM in the Objects Tree, right clicking on the "fields" result in "Result View - FEEM" and selecting Visualize>New Visualizer. The results for the magnetic field of the TM01 mode are shown below. We confirm that the polarization of this mode is indeed TM by comparing the amplitude of Hz with those of Hx and Hy. We find that the longitudinal component of the magnetic field is negligible compared to the transverse components. Modal field profile for TM01 mode. The Hz field component (not shown) is negligible for this mode.
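The semianalytical TM characteristic equation quoted earlier can be solved in Python/SciPy as well as in MATLAB. The following sketch is my own reconstruction from the equation as stated, not the contents of step_index_fiber.m; it brackets sign changes on a grid over \( (n_2, n_1) \) and keeps only genuine roots (sign flips at the poles of \( J_0 \) are rejected by checking the residual):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, kv

# fiber parameters from the example
n1, n2 = 1.44, 1.40     # core / cladding refractive indices
a = 10e-6               # core radius (m)
lam = 1.55e-6           # wavelength (m)
k0 = 2 * np.pi / lam

def char_eq(n):
    """TM characteristic equation; its roots are TM effective indices."""
    h = k0 * np.sqrt(n1**2 - n**2)
    q = k0 * np.sqrt(n**2 - n2**2)
    return (jv(1, h * a) / (h * a * jv(0, h * a))
            + (n2 / n1)**2 * kv(1, q * a) / (q * a * kv(0, q * a)))

def tm_effective_indices(samples=4000):
    """Scan (n2, n1) for sign changes and refine each with brentq."""
    grid = np.linspace(n2 + 1e-6, n1 - 1e-6, samples)
    vals = [char_eq(n) for n in grid]
    roots = []
    for lo, hi, flo, fhi in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if flo * fhi < 0:
            n = brentq(char_eq, lo, hi)
            if abs(char_eq(n)) < 1e-6:   # reject pole crossings of J0
                roots.append(n)
    return roots
```

The largest root returned is the TM01 effective index, which should reproduce the value 1.43729 quoted in the example.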
Let $K$ be a field having two field extensions $L\supseteq K$ and $M\supseteq K$. Does there exist a field $N$ along with embeddings $L\to N$ and $M\to N$, such that the diagram$$ \require{AMScd}\begin{CD} K @>>> L\\ @V V V @VV V\\ M @>>> N \end{CD}$$is commutative? To put it less formally, do $L$ and $M$ have a common field extension $N$ (with $K$ lying in the intersection)? If yes, consider this side question (but do leave an answer even if you can only answer the main question!): Does the above property of fields generalise to the following, stronger property? Let $p$ be either $0$ or a prime number. Does there exist a sequence of fields$$ \mathbb F_p = L_0\subseteq L_1\subseteq L_2\subseteq L_3\subseteq L_4\subseteq\cdots$$(where I use the convention $\mathbb F_0 = \mathbb Q$) such that any field $K$ of characteristic $p$ has an extension field among the $L_\alpha$? This sequence should be understood as enumerated by ordinal numbers. In other words, what I need is a function that assigns to each ordinal number $\alpha$ a field $L_\alpha$ containing all $L_\beta$ with $\beta < \alpha$. (Intuitively, I suspect that this might depend on the Axiom of Choice.) If the second property holds, it shows that you can essentially only extend a field in one "direction." It also shows that the class (it is obviously not a set) $\mathbb M_p = \bigcup_\alpha L_\alpha$ is a field (class). We can define all the usual field operations (addition, multiplication, division) here since all pairs of elements lie in $L_\alpha$ for a sufficiently large $\alpha$. This "monster field" of characteristic $p$ then contains all other set fields of that characteristic.
Anyone who has had to prepare for an algebra qualifying exam is familiar with the "Classify groups of order $X$" question. To illustrate my general question, which I postpone until the end, consider the following simple example in which I classify groups $G$ of order $3 \cdot 7$. Let $H$ and $K$ be the $7$- and $3$-Sylow subgroups, respectively. By Sylow's theorems, we find easily that $H$ is normal and $K$ is either normal or one of seven conjugate copies. Also $H \cong \mathbf{Z}_7$ and $K \cong \mathbf{Z}_3$. Let $x$ be a generator of $\mathbf{Z}_3$ and let $y$ be a generator of $\mathbf{Z}_7$, both viewed multiplicatively. Now $G$ is a semidirect product of $H$ and $K$, hence the possible structures of $G$ are determined by the possible group homomorphisms $$ \mathbf{Z}_3 \to \mathrm{Aut}(\mathbf{Z}_7) \cong \mathbf{Z}_6. $$ Such a group homomorphism is determined by the image of $x$; since the order of this image must divide the order of $x$, we see $x$ is either sent to the identity automorphism $\mathbf{1}$ or an automorphism of order three. We find that a generator of $\mathbf{Z}_6$ is the automorphism $\alpha \colon y \mapsto y^3$. Therefore, there are three possible group homomorphisms, determined by sending $x$ to $\mathbf{1}$, to $\alpha^2 \colon y \mapsto y^2$, or to $\alpha^4 \colon y \mapsto y^4$. It follows there are at most three possible groups of order $21$, generated by $x$ and $y$ and subject to the relations $x^3 = y^7 = 1$ as well as one of the following commutativity relations: $$xy = yx, \;\;\;\;\; xy = y^2 x, \;\;\;\;\; xy = y^4 x. $$ All such groups exist by employing the abstract construction of the semidirect product. What follows is always the most subtle part of the analysis. Which of these groups are duplicates? The first is the case $G \cong \mathbf{Z}_3 \times \mathbf{Z}_7$ which is clearly distinct from the remaining two. Let the second group be denoted $G_2$ and the third $G_4$.
If $G_2$ were isomorphic to $G_4$, then there would have to exist $X, Y \in G_4$ of orders three and seven, respectively, satisfying $XY = Y^2 X$ (or $X, Y \in G_2$ satisfying $XY = Y^4 X$). And it is easy to see that, in fact, this condition is sufficient for $x \mapsto X$, $y \mapsto Y$ to determine an isomorphism $G_2 \cong G_4$. Note that both $G_2$ and $G_4$ have seven $3$-Sylow subgroups (otherwise they would be Cartesian products). So there are $14$ candidates for $X$ and $6$ candidates for $Y$. Morally, at least in my opinion, these groups should be isomorphic, because the only difference in their definition occurs when we choose between the two generators $\alpha^2$ and $\alpha^4$ of the cyclic subgroup $\mathbf{Z}_3 \subset \mathrm{Aut}(\mathbf{Z}_7) \cong \mathbf{Z}_6$, and these generators are 'essentially the same'. This is indeed the case, but the proof feels 'lucky'. One finds by calculation that no map of the form $X = x$ and $Y = y^k$ satisfies $XY = Y^2 X \in G_4$. But this is satisfied by taking $X = x^2$ and $Y = y$: $$X Y = x^2 y = x y^4 x = y^{16} x^2 = y^2 x^2 = Y^2 X \in G_4. $$ We conclude there are two groups of order $21$ up to isomorphism. To give an example of how this problem becomes more complex, if instead one were computing groups of order $3 \cdot 7 \cdot 13$, then one must determine group homomorphisms $$\mathbf{Z}_3 \to \mathrm{Aut}(\mathbf{Z}_7 \times \mathbf{Z}_{13}) \cong \mathbf{Z}_6 \times \mathbf{Z}_{12}. $$ (Don't forget that the automorphism group of a direct product is the direct product of the automorphism groups when the orders of the factors are coprime!)
If $\alpha$ generates $\mathbf{Z}_6$ and $\beta$ generates $\mathbf{Z}_{12}$, then there are nine possible semidirect structures, corresponding to $x$ being sent to any of the following pairs: $$(\mathbf{1}, \mathbf{1}), \;\;\;\;\; (\alpha^2, \mathbf{1}), \\ (\alpha^4, \mathbf{1}), \;\;\;\;\; (\mathbf{1}, \beta^4), \\ (\mathbf{1}, \beta^8), \;\;\;\;\; (\alpha^2, \beta^4), \\ (\alpha^4, \beta^4), \;\;\;\;\; (\alpha^4, \beta^8), \;\;\;\;\; (\alpha^2, \beta^8). $$ Which of these are isomorphic? Hopefully at this point my general question is clear. First, in words: In considering semidirect products $G \cong H \rtimes K$, is there a (natural?) proof that shows the choice of generator(s) of $\mathrm{Aut}(H)$ affects the resulting group only up to the choice of 'non-equivalent' generators? Here is a precise phrasing for which I would be thrilled to receive an answer: Question: Prove or disprove. Let $p$ and $q$ be primes such that $q$ divides $p-1$. Consider semidirect products $G_\rho = \mathbf{Z}_p \rtimes_\rho \mathbf{Z}_q$ determined by group homomorphisms $$ \rho \colon \mathbf{Z}_q \to \mathrm{Aut}(\mathbf{Z}_p) \cong \mathbf{Z}_{p-1}. $$ Let $x$ multiplicatively generate $\mathbf{Z}_q$ and let $\alpha$ multiplicatively generate $\mathbf{Z}_{p-1}$. Setting $n = (p-1)/q$, the generators for $\mathbf{Z}_q \subset \mathbf{Z}_{p-1}$ are $\alpha^{nk}$ where $k = 1, \dots, q-1$. Let $\rho_k$ denote the group homomorphism determined by $x \mapsto \alpha^{nk}$. Then $G_{\rho_k} \cong G_{\rho_\ell}$ for all $1 \leq k, \ell \leq q-1$.
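As a mechanical sanity check on the order-21 example above (this is an addition, not part of the original post), the groups $G_k = \langle x, y \mid x^3 = y^7 = 1,\ x y x^{-1} = y^k \rangle$ can be modeled as pairs $(a, b)$ standing for $y^a x^b$, and the 'lucky' relation $XY = Y^2X$ for $X = x^2$, $Y = y$ verified inside $G_4$:

```python
def make_mul(k):
    """Multiplication in G_k = <x, y | x^3 = y^7 = 1, x y x^{-1} = y^k>,
    with elements written as y^a x^b and stored as pairs (a, b)."""
    def mul(g, h):
        a, b = g
        c, d = h
        # From x y = y^k x one gets x^b y^c = y^(k^b c) x^b, hence
        # (y^a x^b)(y^c x^d) = y^(a + k^b c) x^(b + d).
        return ((a + pow(k, b, 7) * c) % 7, (b + d) % 3)
    return mul

mul4 = make_mul(4)
x, y = (0, 1), (1, 0)

X = mul4(x, x)             # X = x^2, an element of order three
Y = y
lhs = mul4(X, Y)           # X Y
rhs = mul4(mul4(Y, Y), X)  # Y^2 X
assert lhs == rhs          # so x -> x^2, y -> y induces G_2 -> G_4
print(lhs)                 # (2, 2), i.e. y^2 x^2, matching the calculation above
```

The same pair representation extends straightforwardly to the order-$3 \cdot 7 \cdot 13$ case by replacing the modulus 7 with the pair of moduli (7, 13).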
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q, r$ such that $a = bq + r$ and $r<b$; then you check whether $r=0$, returning $b$ if we are done, and otherwise feeding $b, r$ back into the division box.

There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.

Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite-dimensional unitary representation?

Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.

Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P

Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?

Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be: "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?"

@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect?

It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.

It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.

You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.

@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.

@Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.

@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$).

@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.

@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ appears in the denominator.
Let $H$ be a Hilbert space, and let $T \colon H \to H$ be a bounded linear operator. Then how to show the following? The range of $T$ is finite-dimensional if and only if $T$ can be represented in the form $$Tx = \sum_{j=1}^n \langle x, v_j \rangle w_j \ \ \ [ v_j, w_j \in H].$$ My effort: Suppose that $\dim \mathscr{R}(T) = n$, and let $\{ e_1, \ldots, e_n \}$ be an orthonormal basis for $\mathscr{R}(T)$. Then, for every $x \in H$, we have $$Tx = \sum_{j=1}^n \langle Tx, e_j \rangle e_j = \sum_{j=1}^n \langle x, T^* e_j \rangle e_j,$$ where $T^*$ denotes the Hilbert adjoint operator of $T$. So we can take $w_j \colon= e_j$ and $v_j \colon= T^* e_j$. Is this reasoning correct? If so, then how to show the converse? Does the above representation mean that the same $v_j$s and $w_j$s will be used for every $x \in H$? If so, then the range of $T$ is spanned by the finitely many elements $w_1, \ldots, w_n$ and is clearly finite-dimensional.
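A finite-dimensional sanity check of the converse direction (purely illustrative, not part of the question): over $\mathbb{R}^d$ with the real inner product, a map of the given form is exactly the matrix $W V^{T}$, whose range sits inside the span of the $w_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 6, 2                        # illustrative sizes: ambient dim 6, two terms
V = rng.standard_normal((dim, n))    # columns are the v_j
W = rng.standard_normal((dim, n))    # columns are the w_j

# Tx = sum_j <x, v_j> w_j is the matrix T = W V^T acting on x
# (real inner product here, so no conjugation is needed)
T = W @ V.T

x = rng.standard_normal(dim)
Tx_sum = sum((x @ V[:, j]) * W[:, j] for j in range(n))
assert np.allclose(T @ x, Tx_sum)

# The range of T lies in span{w_1, ..., w_n}, so its rank is at most n
assert np.linalg.matrix_rank(T) <= n
```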
From a number theoretic perspective, there are a few famous problems related to ranks of elliptic curves, which a lot of modern research in the area is geared towards solving. For example, Manjul Bhargava recently received the Fields medal partly for his work on bounding average ranks of elliptic curves (and proving that the Birch and Swinnerton-Dyer conjecture is true for ever-increasing percentages of elliptic curves). To describe some of the results: an elliptic curve over $\mathbb{Q}$ is a rational smooth genus 1 projective curve with a rational point, or in less scary terms, the set of solutions to an equation that looks like$$E(\mathbb{Q}) = \{(x,y) \in \mathbb{Q}^2: y^2 = x^3 + ax + b\}$$where $a, b \in \mathbb{Q}$. It's a fact that any such set forms a finitely generated abelian group, so by the structure theorem for such objects the group of rational points is$$E(\mathbb{Q}) \cong \mathbb{Z}^r \oplus \Delta,$$where $\Delta$ is some finite group. Now, we have a complete description of what this group $\Delta$ can be - a theorem of Mazur limits it to a short list of just 15 possible finite groups, each of order at most 16. However the values of $r$ are much more mysterious. We define the rank of $E$ to be this $r = r(E)$. Now, we know quite a lot about $r$ - for example, in "100%" of cases the rank is $0$ or $1$ (where here "100%" is used in the probabilistic sense, not to mean that every elliptic curve has rank $0$ or $1$!). There is also the Birch and Swinnerton-Dyer Conjecture (BSD), which is one of the very open problems that you mention that nobody has any idea how to prove, but which most people believe. It relates the rank of the elliptic curve to the order of vanishing of its $L$-function at $s=1$. Perhaps the strongest evidence for it is that it's been proved in certain special cases, as well as Bhargava's work. So much of modern number theory research goes towards BSD, and it's one of the famous Millennium problems.
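As a concrete illustration of the data feeding into that $L$-function (an addition, not part of the original answer): the coefficients $a_p = p + 1 - \#E(\mathbb{F}_p)$ can be computed by naive point counting, as in the sketch below for the example curve $y^2 = x^3 - x$; Hasse's bound guarantees $|a_p| \le 2\sqrt{p}$.

```python
def count_points(a, b, p):
    """Number of points of y^2 = x^3 + a x + b over F_p, including infinity."""
    sq = {}  # how many y give each square value mod p
    for yy in range(p):
        sq[yy * yy % p] = sq.get(yy * yy % p, 0) + 1
    total = 1  # the point at infinity
    for xx in range(p):
        total += sq.get((xx ** 3 + a * xx + b) % p, 0)
    return total

# Example: the curve y^2 = x^3 - x
for p in [5, 7, 11, 13]:
    ap = p + 1 - count_points(-1, 0, p)  # the a_p coefficient of the L-function
    assert abs(ap) <= 2 * p ** 0.5       # Hasse bound
    print(p, ap)
```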
However, what we don't have much intuition about is: Question: Are the ranks of elliptic curves over $\mathbb{Q}$ bounded? That is, is there some $R$ such that for any elliptic curve $E/\mathbb{Q}$, we have $r(E) \leq R$? As of last year, it was very open - there were loose heuristics pointing both ways. The largest rank we've found so far is a curve with rank at least 28, due to Elkies, which has been the record-holder for a long time now. As I mentioned before, Bhargava has proved that the average rank is bounded above by 1.5, and this was enough to win a Fields medal. However, having said all that, I think there has been some excitement recently over some stronger heuristics that lean towards the rank being bounded. I don't know enough about these heuristics to comment any further, but there's more information here: http://quomodocumque.wordpress.com/2014/07/20/are-ranks-bounded/
How can I implement the computation of the diffusion coefficient $D$ using periodic boundary conditions (PBC)? I use molecular dynamics of a set of nbody particles with positions pos(3,nbody) in a box of length length. The implementation of the PBC is

    do k = 1, 3
      pos(k,i) = modulo(pos(k,i), length)
    end do

At the moment I'm using the following code for $D$:

    do it = 1, nstep
      diff = 0
      do i = 1, nbody
        pos2(:) = pos2(:) + pos(:,i)
        diff = diff + dot_product(pos(:,i), pos(:,i))
      end do
      diff = diff/nbody - dot_product(pos2(:), pos2(:))/nbody**2
    end do
    diff = diff/nstep/6

which I think corresponds to $D=\lim_{t\to\infty}\dfrac{\langle x^2\rangle-\langle x\rangle^2}{6t}$, but I'm not very sure that the PBC are taken into account in the right way. Can someone help me? Thanks, Matteo
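One common pitfall here is that $\langle x^2\rangle$ computed from wrapped coordinates stays bounded by the box size, so the Einstein relation never sees the true displacement. The usual remedy is to unwrap the trajectory (or accumulate displacements) before computing the mean-square displacement. A NumPy sketch of the idea, with an assumed trajectory array traj[t, i, k] rather than the Fortran arrays above:

```python
import numpy as np

def unwrap(traj, length):
    """Undo periodic wrapping of trajectory traj[t, i, k] in a cubic box.

    Assumes no particle moves more than length/2 in any coordinate
    between stored frames (minimum-image convention)."""
    disp = np.diff(traj, axis=0)
    disp -= length * np.round(disp / length)   # minimum-image displacements
    return np.concatenate([traj[:1], traj[:1] + np.cumsum(disp, axis=0)])

def msd(traj_unwrapped):
    """Mean-square displacement relative to the first frame."""
    d = traj_unwrapped - traj_unwrapped[0]
    return (d ** 2).sum(axis=2).mean(axis=1)

# D would then be estimated from the long-time slope: D ~ MSD(t) / (6 t) in 3D.
```

The minimum-image step is what makes a boundary crossing (a jump of nearly length in the wrapped coordinate) register as the small physical displacement it actually is.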
I am trying to solve the following mixed second-order elliptic PDE in the domain $D=[0, 1]^2$: \begin{eqnarray*} v+\nabla p=&0 \quad &\text{in} \quad D,\\ \text{div}(v)=&1/2 \quad &\text{in} \quad D,\\ v\cdot n =&0 \quad &\text{on} \quad \partial D,\\\end{eqnarray*}where $n$ is the unit outward normal vector for $D$. Divide the domain into four sub-squares $[0,0.5]^2$, $[0.5,1] \times [0, 0.5]$, $[0,0.5] \times [0.5,1]$ and $[0.5,1]^2$, and denote the partition by $\mathcal{T}^h$. We want to solve the above problem using the (rectangular) lowest-order Raviart-Thomas element $RT_0$. If we take the reference square to be $[0,1]^2$, then we can compute four $RT$ basis functions as follows: $$ \psi_1(x_1,x_2) = (x_1,0)^T $$ $$ \psi_2(x_1,x_2) = (0, x_2-1)^T $$ $$ \psi_3(x_1,x_2) = (x_1-1, 0)^T $$ $$ \psi_4(x_1,x_2) = (0,x_2)^T.$$ One can transform these basis functions to the sub-squares defined above. For example, on $[0,0.5]^2$, $$\tilde{\psi_1} (x_1,x_2) = (2x_1,0)^T.$$ The weak formulation of the above PDE is: find $(v,p)\in V_h \times Q$ such that\begin{eqnarray*} \int_D v \cdot w - \int_D p \,\text{div}(w)= & 0 \quad & \forall w \in V_h,\\ \int_D \text{div}(v) q = &\frac{1}{2}\int_D q \quad & \forall q \in Q,\end{eqnarray*}where $V_h = \{ w\in \big(L^2(D)\big)^2: w\cdot n =0\}$ and $Q = \{ q: q|_K = const, K\in \mathcal{T}^h\}.$ After we derive the weak formulation, the finite element system looks like$$ \left [ \begin{array}{cc}B & C \\C^T & 0 \end{array} \right ] \left [ \begin{array}{c}\tilde{v} \\\tilde{p}\end{array} \right ] = \left [ \begin{array}{c}f_v \\f_p\end{array} \right ].$$ Here is my question: in our special case, it should be that $B,C \in \mathbb{R}^{4\times 4}$ and $\tilde{f_v}$ is the zero vector. It is not clear to me which $RT$ basis functions I should consider when forming the finite element matrix. I think there is one basis function per interior edge for $RT_0$.
Also, I want to know what the matrices $B$ and $C$ and the vector $\tilde{f_p}$ look like exactly (both numerically and as analytical expressions). Another question: what is the support of these $RT$ basis functions on the corresponding sub-squares? Say $\tilde{\psi_1}$ lives on the sub-square $[0,0.5]^2$: what is its support? How about the other $\tilde{\psi_i}$ on the same sub-square $[0,0.5]^2$? Remark: To approximate $v$, we use the $RT_0$ basis functions. For $p$, we use piecewise constant functions.
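It may help to compute the reference-element integrals symbolically before assembling. The sketch below (SymPy, not from the original post) evaluates the building blocks of $B$ and $C$ on the reference square with the four basis functions as listed above; mapping to each sub-square then contributes the usual Jacobian/Piola scaling, which is not carried out here:

```python
import sympy as sp

x, y = sp.symbols("x y")
# The four RT0 basis functions on the reference square [0,1]^2, as listed above
psi = [(x, 0), (0, y - 1), (x - 1, 0), (0, y)]

def integrate_ref(f):
    """Integrate f over the reference square [0,1]^2."""
    return sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))

# Local "mass" matrix entries B_ij = int_{[0,1]^2} psi_i . psi_j
B = sp.Matrix(4, 4, lambda i, j: integrate_ref(
    psi[i][0] * psi[j][0] + psi[i][1] * psi[j][1]))

# int div(psi_i): each basis function has unit divergence, so each entry is 1;
# against piecewise-constant q these are the entries feeding the C matrix.
divs = [integrate_ref(sp.diff(p[0], x) + sp.diff(p[1], y)) for p in psi]

print(B)     # diagonal entries 1/3, opposite-edge couplings -1/6, zero otherwise
print(divs)
```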
Earlier this month, I did a preliminary assessment of Andrew Torrez's speculation on the Opening Arguments podcast that the Roberts Court has ushered in a new era of polarization on the U.S. Supreme Court. The answer, looking at 20 years of history, seemed to be no. A wider view of 75 years of history, meanwhile, suggests the answer is… still no. The Roberts Court is not significantly more polarized in its merit case votes than any other Court in this history. But the data suggest two interesting trends in Supreme Court unanimity over the past 75 years: a steady boom-and-bust cycle playing out over roughly three decades, and a significant Roberts Court uptick in the second derivative, suggesting that year over year, the consensus about consensus may be disappearing.

Framing the Question

The recent acceleration in U.S. political and ideological polarization has heightened voters' sensitivity to consensus — or lack thereof — in our political discourse. So it's only natural to ask whether the U.S. Supreme Court, whose perception as apolitical is rapidly evaporating, is becoming increasingly polarized in its decision-making. Have the Roberts Court's four conservative-bloc justices (Alito, Gorsuch, Kavanaugh, Thomas) and four liberal-bloc justices (Ginsburg, Breyer, Sotomayor, Kagan) been likely to retreat to their ideological corners in the Court's decisions? Has there been, as Torrez suspects, a significant increase in split-decision opinions (such as 5-4) and a decrease in unanimity? Are the justices as unlikely to find agreement on the politically charged issues of the day as is the rest of our political class? Torrez can be forgiven for suspecting so – and I wonder if one reason for it is an availability heuristic bias. The Court, on average, makes between 80 and 90 decisions on the merits each year, and many of these do not make national news. Those that do make news, though, are those most hotly contested along ideological lines. Whole Woman's Health. Obergefell. Citizens United.
Of course Bush v. Gore. Besides having hot political and public controversy in common, these decisions also have in common that only five justices voted with the majority. In other words, it might feel like there are more split decisions because so many of the first decisions that come to our mind were split. To test whether there is more to this suspicion than an availability effect, I wanted to look at a data set of Court decisions as far back into history as I could find (without straining my fall sabbatical). The indispensable SCOTUSBlog has compiled "stat pack" reports on each Supreme Court term since 1995, and my preliminary analysis used data from their reports, giving me roughly ten years of data each from the Rehnquist and Roberts courts. Then, as I will now, I operationalized "polarization" using the number of dissenting votes as a proxy. Naturally, with a maximum of nine justices voting, this number of dissents ranges from 0 to 4, with zero serving as my marker of "unanimity" or consensus and four as the maximum level of "polarization" or dissent. Most Court decisions have nine voting justices, though a few have fewer, for various reasons — I chose not to compensate for this effect since the number of cases with fewer than nine votes is relatively small. That preliminary analysis did not turn up evidence that polarization had increased in a significant way in the ten years after Roberts became chief justice relative to the ten years before. But I wasn't satisfied with only ten years on either side. I wanted to know how the Roberts Court stacks up against the rest of the 20th century. So I reached for a larger data set, clued in by a commenter on my previous post. (Thanks to you, commenter!) Well – to be honest, I didn't follow his advice at first. Instead I used my institutional membership in the wonderful ICPSR database to locate what, in the years before the internet, was probably the definitive data set on the Supreme Court: Spaeth, Harold J.
UNITED STATES SUPREME COURT JUDICIAL DATABASE, 1953-1997 TERMS [Computer file]. 9th ICPSR version. East Lansing, MI: Michigan State University, Dept. of Political Science [producer], 1998. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 1999. Spaeth’s database had everything I would need and much, much more: you could even use it to investigate questions like, which justices tended to vote in concert with which other justices? But, its data only ran up to 1997, so at first I attempted to use SCOTUSBlog’s data to fill in the years 1998–2018. That didn’t work; there were inconsistencies in the numbers of cases recorded in the years in which the two data sources overlapped. (Spaeth records many more decisions each year, even on its coarsest unit of analysis.) So I scrapped the idea of merging the two data sources and slunk back to the clue provided by my blog commenter, leading me to the Washington University Law Library. A View of the Data The Washington University Supreme Court Database, it turns out, appears to have been initially seeded by Spaeth’s data (indeed, it makes explicit references to Spaeth to assist anyone comparing the two). But, it spans the years from 1945 through 2018. I’d found my source. The smallest such data set, which counts each Court citation exactly once (multiple memoranda or decisions on the same matter are excluded), includes 8,966 decisions since 1945. Its numbers matched most closely to SCOTUSBlog’s summaries in the years the latter analyzed, but still there were a few more cases in the WU database each year than appeared in the blog’s stat packs. While I did use SPSS to recode its variables and extract only those I needed, I used Excel for analysis. (Being on sabbatical, my remote connection to university SPSS software is cumbersome. Please poke fun at me in the comments below.) 
After coding a variable to register the number of dissenting votes in each case, I focused on the average number of dissents in merit cases each year. Because this average "jitters" frequently on a year-to-year basis, the blue line displayed below is a smoothed four-year moving average taken "backward" (so, e.g., the data point for 2016 is an average of cases in the years 2013 through 2016). The orange line, plotted on the same scale, depicts the five-point, centered, discrete second derivative of the number of dissents. I say more about this quantity in my preliminary post. So, what do I make of seventy-plus years' worth of data?

Analysis

On the question of whether the Roberts court has seen a precedent-breaking increase in dissension, my "Andrew is wrong" diagnosis stands. If anything, the past several years have seen more consensus (a lower average number of dissents per case) than any other period since World War II. This is juiced in part by the surprisingly unanimous term of 2016, in which the Washington University database recorded only three 4-dissent opinions and forty-one unanimous decisions. The latter figure reflects a difference in how SCOTUSBlog and the WU database record vote tallies: SCOTUSBlog records seven 4-dissent decisions in 2016, by contrast. (I've not looked closely enough at the cases themselves to uncover why the sources disagree.) Presumably, the WU database's methodology has remained consistent each year since 1945, so since I've not used SCOTUSBlog's data here, this discrepancy ought not affect the analysis.

A Thirty-Year Boom/Bust Cycle

Far from a sudden increase in dissent, placed in historical context, the Roberts court has so far been marked by a dissension downswing – but one that, if trends continue, is ready to reverse.
What struck me about the historical record is how consistent the levels of dissent are across these seven decades: hovering around an average of 1.6 dissents per case and scarcely wandering further than 0.4 votes away from this mean before reverting back. This reversion has come in a steady cadence, taking about 15 years to oscillate from maximum to minimum and then another 15 back to maximum. Consensus on the Supreme Court appears to ebb and flow in a predictable decades-long oscillation. I used spectral analysis to quantify this behavior. Since this data appears to begin in 1945 at the top of a crest and end in 2018 at the bottom of a trough, I used the odd-half-integer exponentials \(\varphi_k(t)=e^{i(k+\frac12)\pi t / 73}\) as a basis for a discrete Fourier transform on the data. This found a modest peak at \(k=5\), and taking only this and the constant term (\(k=0\)) we obtain the oscillation shown as the dashed cosine curve in the plot: \(d(t) = 1.614 + 0.180 \cos \left( \frac{5.5\pi}{73}t - 1.126 \right)\), with \(t=0\) marking the year 1946. The wave number \(k=5\) gives an oscillation with a period of \(2 \cdot 73/5.5 \approx 27\) years. If this trend continues, then we would expect to see polarization increase over the next decade as dissension swings back upward from the relative trough in which the Roberts court now finds itself. Thinking about the political moment we appear to be in these days, this feels like a man-bites-dog proposition to me; but, of course, one we would then expect to see begin to reverse sometime in the 2030s.

A second derivative hockey-stick

The one Roberts court trend that was notable in my preliminary post — a sudden uptick in the dissent-over-time second derivative — remains notable even in the longer context of history. The first derivative (the slope of the blue curve in my chart) measures the direction and rate of change in unanimity: it's positive when dissent is increasing year-over-year and negative when it's decreasing.
Since the blue curve jitters up and down frequently over the decades, you can convince yourself that this first derivative bounces between positive and negative frequently. There's not much to note in the trend of the first derivative over time; I don't see, for example, a sustained period of positive first derivative that would suggest a steady increase in polarization in the Roberts court. The second derivative, meanwhile, measures what I call the "whiplash effect:" How rapidly are the Supremes changing course on unanimity? To use an airplane metaphor, the second derivative measures the position of their flight stick: pulled way up when it's positive or pushed way down when it's negative. And in the past five years they have yanked that stick further back than at any point in the history of this data. Over the last five years the Court has been changing its mind on consensus, turning toward polarization, at a quicker rate than at any point since World War II. One look at the last five years of data shows what I mean. 2014 was about as polarized (average 1.81 dissents per case) as was 2018 (1.78 dissents per case), but in between there was a precipitous decrease toward consensus (to a historic low point in 2016) and then a rapid rebound. All the passengers on this polarization airplane have spilled their drinks. Because I used a centered five-point second derivative estimate, \( f^{\prime\prime} (x_i) \approx \frac{1}{12} \bigl( -f(x_{i-2}) + 16f(x_{i-1}) - 30f(x_i) + 16f(x_{i+1}) - f(x_{i+2})\bigr)\), the course change over this five-year period becomes a single data point in the second derivative, attached to the year 2016, the last point on the orange second derivative graph. Note also that the orange curve is the magnitude of the second derivative only; negative signs have been discarded so that only the rate of acceleration is shown and not the direction.
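For reference, here is a short sketch of the two estimators described here (the backward moving average and the centered five-point second derivative); this is a re-implementation for illustration, not the author's actual code:

```python
import numpy as np

def moving_average(x, w=4):
    """Backward w-year moving average: entry t averages years t-w+1 .. t."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def second_derivative_5pt(f):
    """Centered five-point second-derivative estimate with unit (yearly) spacing."""
    return np.array([(-f[i - 2] + 16 * f[i - 1] - 30 * f[i]
                      + 16 * f[i + 1] - f[i + 2]) / 12
                     for i in range(2, len(f) - 2)])
```

On the quadratic series \(f(t)=t^2\) this stencil returns exactly 2 at every interior point, which is a quick way to check it is wired up correctly.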
Note that this data has been smoothed in three ways:

1. Using a five-point estimate for the second derivative rather than three. This captures a five-year trend in each data point when the minimum necessary is three.

2. To reduce noise, I computed the second derivative from the moving averages of the original data, i.e., from the blue curve in the graph, rather than from the one-year-at-a-time averages.

3. Finally, the "hockey stick" is most apparent from a further smoothing of this second derivative using a four-year moving average.

Discussion

Andrew was wrong. (There. I just had to say it one last time.) The Roberts Court has not been an era of precedent-breaking polarization in how the Supreme Court justices vote on merit cases. But, if this analysis is to be believed, we are right now at the nadir of a downswing toward unanimity and have begun to rebound toward dissension – snapping back perhaps more rapidly than at any other point in this historical record. I put more stock in the cyclical oscillation found here – the decades-long wave back and forth toward more consensus, then more polarization, and back again – than in the Roberts Court "hockey stick." For one, the boom-and-bust cycle calculation takes all 70-plus years of data into account, with each year's average used on its own. It's global behavior in this data. The "hockey stick" is a local phenomenon, created by only a small handful of data points (namely, the five four-year moving averages ending in 2014 through 2018). Because it originated from the moving averages, the year-to-year jitter has been muted — but whether the increase is statistically significant in the time series is a question I didn't address. (Nor indeed do I know how to address it offhand.) And it is entirely possible, maybe even likely, that the hockey stick is explained not by polarized behavior on the Court but rather by the unusually unanimous years of 2015-2016, two of the most consensual Court terms in this data.
Indeed, the levels of dissent in 2014 and 2018 (roughly 1.8 dissents per case) that bracket this five-year period were only slightly above the historical average (about 1.6). So it's not that the polarization got anomalously high in this period; it's that, for a brief time, it fell anomalously low. But this hockey stick does partially scratch my – and maybe Mr. Torrez's – confirmation bias itch. While the Roberts Court is not significantly more polarized, it appears to be headed in that direction. To mix one last metaphor, if this hockey stick is to be believed, the wheel on the S.S. Dissent has been decisively turned toward polarization in the past several years, and the ship is rapidly listing back away from the consensus of 2015-2016 and toward more dissension. That "feels right" in today's hyper-partisan world. But it will take a few more years to know for sure whether this change is real or ephemeral.

Acknowledgements and Admissions

My appreciation is due to the Opening Arguments podcast team – Andrew Torrez, Thomas Smith, and their production colleagues – for putting out a terrific show. It's frequently hilarious, always intellectually honest, and inescapably informative in these tumultuous legal and political times. I've always been a math aficionado, but only recently have I caught the law bug, and it was hanging around their podcast that did that for me. Check it out and support their show on Patreon if you are able. Thanks also to Tom Williams, whose comment on my preliminary analysis pointed me to the excellent Washington University database. Not only was it perfect for this work, I'm sure it'd be a great data source for teaching projects suitable for statistics and quantitative literacy courses at any level — particularly for pre-law students, political science majors, or any groups of students interested in trends in the judiciary. Lastly, it will be evident to any data scientist or statistician who reads these posts that I am neither.
So if you’re mortally wounded by my analysis or by any of the choices I made, let me offer my hobbyist’s apology. The great thing about an open, public database like WU’s is that you can download it and do your own analysis too. I hope you’ll share your results with me when you do, either in the comments below or via social media.
Fujimura's problem (revision as of 22:54, 6 March 2009)

Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture. The known values are:

n: 0 1 2 3 4 5
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12

n=0

[math]\overline{c}^\mu_0 = 1[/math]: This is clear.

n=1

[math]\overline{c}^\mu_1 = 2[/math]: This is clear.

n=2

[math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).

n=3

[math]\overline{c}^\mu_3 = 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that with only three removals, each of these (non-overlapping) triangles must have one removal:

set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)

Consider choices from set A:

(0,3,0) leaves the triangle (0,2,1) (1,2,0) (1,1,1);
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work;
(1,2,0) is symmetrical with (0,2,1).

n=4

[math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c equal to 0 has 9 elements and is triangle-free.
(Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so it would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.)

Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], then only one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] can be in S for each [math]x=1,2,3,4[/math], so there can be at most 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in that set, giving [math]|S|\leq 4+5=9[/math]. The same argument applies if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S contains none of these three corners. Also, S cannot contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]; similarly for [math](3,0,1), (1,0,3), (1,2,1)[/math] and for [math](1,3,0), (3,1,0), (1,1,2)[/math]. So we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].

Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

== n=5 ==

[math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c equal to 0 has 12 elements and contains no equilateral triangles.

Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], then only one of (0,x,5-x) and (x,0,5-x) can be in S for each x=1,2,3,4,5, so there can be at most 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in that set, giving [math]|S|\leq 6+6=12[/math]. The same argument applies if S contains (0,5,0) or (5,0,0). So if [math]|S|\gt12[/math], S contains none of these three corners.
S can contain at most 2 points from each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].

== n=6 ==

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]: [Incomplete: need to add rotations of solution II.]

[math]15 \leq \overline{c}^\mu_6[/math] follows from the bound for general n. Note that there are ten extremal solutions to [math]\overline{c}^\mu_3[/math]:

Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201

Also consider the same triangular lattice with the point 020 removed, making a trapezoid. The solutions based on I-III are:

Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201

Suppose we can remove all equilateral triangles from our 7x7x7 triangular lattice with only 10 removals. The triangle 141-411-114 must have at least one point removed. Remove 141, and note that because of symmetry any logic that follows also applies to 411 and 114. There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed. (Now only six removals remain.) The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses solution IV (by symmetry the same logic works for the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006, and again 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice.
Note this means the "top triangle" 060-330-033 has only four points removed, so because of the removal of 141 it must conform to either solution I or solution II.

Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'.

Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330, which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V.

Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves the triangle 321-024-051 open, and we have reached a contradiction.

[math]15 \leq \overline{c}^\mu_6 \leq 16[/math]: Here, "upper triangle" means the first four rows (with 060 at top) and "lower trapezoid" means the bottom three rows. Suppose 11 removals leave a triangle-free set. First, suppose that 5 removals come from the upper triangle and 6 come from the lower trapezoid.

Suppose the trapezoid 600-420-321-303 uses solution IV. There are three disjoint triangles 402-222-204, 213-123-114, and 105-015-006. The remainder of the points in the lower trapezoid (420, 321, 510, 501, 402, 312, 024) must be left open. 024 being open forces either 114 or 015 to be removed.

Suppose 114 is removed. Then 213 is open, and with 312 open that forces 222 to be removed. Then 204 is open, and with 024 that forces 006 to be removed. So the bottom trapezoid is a removal configuration of 600-411-303-222-114-006, and the rest of the points in the bottom trapezoid are open.
All 10 points in the upper triangle form equilateral triangles with bottom trapezoid points, hence 10 removals in the upper triangle would be needed, so removing 114 doesn't work.

Suppose 015 is removed. Then 006-024 forces 204 to be removed. Regardless of where the removal in 123-213-114 falls, the points 420, 321, 222, 024, 510, 312, 501, 402, 105, and 006 must be open. This forces upper-triangle removals at 330, 231, 042, 060, 051, 132, which is more than the 5 allowed, so removing 015 doesn't work either. So the trapezoid 600-420-321-303 doesn't use solution IV.

Suppose the trapezoid 600-420-321-303 uses solution VI. The trapezoid 303-123-024-006 can't be IV (already eliminated by symmetry) or VI' (it leaves the triangle 402-222-204). Suppose the trapezoid 303-123-024-006 is solution VI. The removals from the lower trapezoid are then 420, 501, 312, 123, 204, and 015, leaving the remaining points in the lower trapezoid open. The remaining open points force 10 upper-triangle removals, so the trapezoid 303-123-024-006 is not solution VI. Therefore the trapezoid 303-123-024-006 is solution V. The removals from the lower trapezoid are then 420, 510, 312, 204, 114, and 105. The remaining points in the lower trapezoid are open, and force 9 upper-triangle removals, hence the trapezoid 303-123-024-006 can't be V, and the solution for 600-420-321-303 can't be VI. Solution VI' for the trapezoid 600-420-321-303 is eliminated by the same logic by symmetry.

Therefore it is impossible for 5 removals to come from the upper triangle and 6 from the lower trapezoid. Therefore 4 removals come from the upper triangle and 7 from the lower trapezoid. At this point note that the triangle 141-411-114 must have one point removed, so let it be 141, and note that any logic that follows is also true for a removal of 411 or 114 by symmetry. This implies the upper triangle must have either solution I or II. Suppose it has solution II.
Note there are five disjoint triangles 600-510-501, 411-321-312, 402-222-204, 213-123-114, and 105-015-006.

Suppose 420 and 024 are both removed. Then, noting that 303 must be open, 600 must be removed, leaving 510 open. 510-240 forces 213 to be removed, and 510-150 forces 114 to be removed. But 213 and 114 are in the same disjoint triangle. Hence 420 and 024 cannot both be removed, so at least one of 420 and 024 is open. Let it be 420, noting that by symmetry identical logic applies if 024 is the open one. Then 321, 222, and 123 are removed, based on 420 and the open spaces in the upper triangle. This leaves four disjoint triangles 600-501-510, 402-303-312, 213-033-015, and 204-114-105. So 411 and 420 are open, forcing the removal of 510. This leaves 501 open, and 501-411 forces the removal of 402. 600, 330, and 303 are then open, forming an equilateral triangle. Therefore 420 isn't open, and the upper triangle can't have solution II.

Therefore the upper triangle has solution I. Suppose 222 is open. 222 together with the open points in the upper triangle forces 420, 321, 123, and 024 to be removed. This leaves four disjoint triangles 411-501-402, 213-303-204, 015-105-006, and 132-312-114, which would force 8 removals in the lower trapezoid. So 222 must be removed. There are six disjoint triangles 150-420-123, 051-321-024, 231-501-204, 132-402-105, 510-150-114, and 312-042-015. So 600, 411, 393, 114, and 006 are open. 600-240 open forces 204 to be removed, and 600-150 open forces 105 to be removed. This forces 501 and 402 to be open, but 411 is open, so there is the equilateral triangle 501-411-402. Therefore the solution of the upper triangle is not I, and we have a contradiction. So [math]\overline{c}^\mu_6 \neq 17[/math].
== n = 7 ==

[math]\overline{c}^\mu_{7} \leq 22[/math]: Using the same ten extremal solutions to [math]\overline{c}^\mu_3[/math] as in the previous proofs:

Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201

Suppose the 8x8x8 lattice can be made triangle-free with only 13 removals. Slice the lattice into region A (070-340-043), region B (430-700-403), and region C (034-304-007). Each region must have at least 4 points removed. Note there is an additional disjoint triangle 232-322-223 that must also have a point removed. Therefore the points 331, 133, and 313 are open. 331-313 open means 511 must be removed, 331-133 open means 151 must be removed, and 133-313 open means 115 must be removed. Given these three removals, the solutions for regions A, B, and C must each be either I or II. All possible combinations of these solutions leave several triangles open (for example 160-520-124). So we have a contradiction, and [math]\overline{c}^\mu_7 \leq 22[/math].
== n = 8 ==

[math]\overline{c}^\mu_{8} \geq 22[/math]:

008,026,044,062,107,125,134,143,152,215,251,260,314,341,413,431,440,512,521,620,701,800

== n = 9 ==

[math]\overline{c}^\mu_{9} \geq 26[/math]:

027,045,063,081,126,135,144,153,207,216,252,270,315,342,351,360,405,414,432,513,522,531,603,630,720,801

== n = 10 ==

[math]\overline{c}^\mu_{10} \geq 29[/math]:

028,046,055,064,073,118,172,181,190,208,217,235,262,316,334,352,361,406,433,442,541,550,604,613,622,721,730,901,1000

== Computer data ==

From integer programming, we have:

n=3, maximum 6 points, 10 solutions
n=4, maximum 9 points, 1 solution
n=5, maximum 12 points, 1 solution
n=6, maximum 15 points, 4 solutions
n=7, maximum 18 points, 85 solutions
n=8, maximum 22 points, 72 solutions
n=9, maximum 26 points, 183 solutions
n=10, maximum 31 points, 6 solutions
n=11, maximum 35 points, 576 solutions
n=12, maximum 40 points, 876 solutions

== General n ==

A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], obtained by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside that triangle. In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math]: take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has its third vertex outside the original example. An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), given by all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set.
We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math], which comes from deleting the two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.

Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them, so at least [math]\binom{n+2}{3}/n = (n+2)(n+1)/6[/math] points must be removed to destroy all triangles. Since [math]|\Delta_n| = (n+2)(n+1)/2[/math], this leaves (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].

== Asymptotics ==

The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
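The small values and the one-zero-coordinate construction above are easy to check by computer. Below is a minimal brute-force sketch (the helper names are mine; exhaustive search is only feasible for very small n):

```python
from itertools import combinations

def delta(n):
    """All lattice points (a, b, c) with a + b + c = n and nonnegative entries."""
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangle_free(s, n):
    """True if s (a subset of delta(n)) contains no equilateral triangle
    (a+r, b, c), (a, b+r, c), (a, b, c+r) with r > 0."""
    for m in range(n):            # the base point (a, b, c) has sum m = n - r
        r = n - m
        for (a, b, c) in delta(m):
            if ((a + r, b, c) in s and (a, b + r, c) in s
                    and (a, b, c + r) in s):
                return False
    return True

def max_triangle_free(n):
    """Exhaustive search for the largest triangle-free subset of delta(n)."""
    pts = delta(n)
    for k in range(len(pts), -1, -1):
        if any(triangle_free(set(sub), n) for sub in combinations(pts, k)):
            return k

def one_zero(n):
    """The lower-bound construction: exactly one coordinate equal to zero."""
    return {p for p in delta(n) if list(p).count(0) == 1}

print([max_triangle_free(n) for n in range(4)])        # [1, 2, 4, 6]
print(len(one_zero(5)), triangle_free(one_zero(5), 5))  # 12 True
```

The first line reproduces the table values for n = 0..3; the second confirms that the 3(n-1) construction is triangle-free for n = 5.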
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...

Considering this pseudo-code of a bubblesort:

FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...

Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block, and freeing a previously allocated block which is not used anymore. Also, as a requiremen...

Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...

People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...

Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...

I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post.
I never encountered the problem on other Stac...

This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...

There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...

Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...

Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):

$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...

I expect to see pseudo code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...

Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction: in superscalar execution, where branch prediction is very important and the cost lies mainly in execution delay rather than fetch delay; and in the instruction pipeline, where fetching is more of a problem since the inst...

Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...

NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...

I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436, however, the au...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science? Example: "How do I get the symme...

EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:

$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$

EPAL is the 'bane' of many parsing algorithms: I have yet to enc...

Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...

Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.

Inductive LTree : Set := Node : list LTree -> LTree.

The naive way of d...

I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...

To what extent are questions about proof assistants on-topic? I see four main classes of questions:

Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.
Proving theorems in a way that can be automated in the chosen formal setting.
Writing a co...

Should topics in applied CS be on topic?
These are not really considered part of TCS; examples include:

Computer architecture (Operating system, Compiler design, Programming language design)
Software engineering
Artificial intelligence
Computer graphics
Computer security

Source: http://en.wik...

I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.

I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches there are (a parent node with both left and right children nodes) with an assumed global counting variable. So far I have...

It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automaton. But, apparently, Buchi automata are a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...

Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise. Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...

One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two or more stacks or tapes, have been shown to be equi...

Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because, as you can probably tell from the answers, people seem to be unsure of where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Let $K$ be a field; say it is of type $n\in\mathbb{N}$ if all its irreducible polynomials are of degree $\leq n$ and there is an irreducible polynomial of degree $n$. Say $K$ is of infinite type if it has irreducible polynomials of all degrees. There are easy examples of fields of type $1$ ($\mathbb{C}$, and more generally algebraically closed fields), of type $2$ ($\mathbb{R}$), and of infinite type ($\mathbb{Q}$, finite fields, etc.). But I couldn't come up with a field of type $n>2$, so I was wondering if there is any.

There isn't any perfect field of (finite) type $n>2$: if $K$ is perfect and of type $n$, then the primitive element theorem shows that any algebraic extension of $K$ is of degree $\leq n$; in particular the algebraic closure $L$ of $K$ satisfies $[L:K]<\infty$, and (see this) this implies $K$ is of type at most $2$.

So in conclusion my question is: is it known whether there are (imperfect) fields of type $n>2$? If so, what are some examples, or what is a proof that there are none?

EDIT: while looking at material about the Artin-Schreier theorem mentioned in the comments, I found the following lemma: if $K$ is a field of characteristic $p>0$ and $a\in K\setminus K^p$, then for all $m\geq 1$, $X^{p^m}-a$ is irreducible over $K$. This gives irreducible polynomials of unbounded degree over any imperfect field, so an imperfect field is not of any finite type either, which settles the question.
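The concluding argument can be written out in full (a sketch; the reference to Lang is my addition):

```latex
\textbf{Claim.} An imperfect field $K$ of characteristic $p$ admits irreducible
polynomials of unbounded degree, hence is not of any finite type.

\textbf{Sketch.} Pick $a \in K \setminus K^p$. For every $m \geq 1$ the polynomial
$X^{p^m} - a$ is irreducible over $K$ (a standard lemma; see e.g.\ Lang,
\emph{Algebra}, Ch.~VI). Thus $K$ has irreducible polynomials of degree $p^m$
for every $m$.

\textbf{Consequence.} A field of finite type $n$ must therefore be perfect; by
the primitive element theorem every algebraic extension of such a field has
degree $\leq n$, so $[\overline{K}:K] < \infty$, and by the Artin--Schreier
theorem this forces $[\overline{K}:K] \leq 2$, i.e.\ $n \leq 2$.
```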
I am trying to solve an equation in Python. Basically what I want to do is to solve the equation: $$ \frac{1}{x^2}\frac{d}{dx}\left(Gam \frac{dL}{dx}\right)+L\left(\frac{a^2x^2}{Gam}-m^2\right)=0 $$ This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. We know $m$, and $Gam=x^2-2x$. The boundary conditions I know are $L\rvert_{2+\epsilon}=1$ and $L\rvert_{\infty}=0$. Notice that the asymptotic behavior of the equation is $$ L(x\to\infty)\to \frac{e^{\pm\sqrt{m^2-a^2}\,x}}{x} $$ Then, if $a^2>m^2$ we will have oscillatory solutions, while if $a^2 < m^2$ we will have a divergent and a decaying solution. What I am interested in is the decaying solution; however, when I transform the equation into a system of first-order differential equations and use the shooting method to find the $a$ that gives me the behavior I am interested in, I always get a divergent solution (for $0<a^2<m^2$). I suppose this happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem, perhaps using another method for solving my system of differential equations? If yes, which method?

Basically what I am doing is to add a new system of equations for $a$, $$\frac{d^2a}{dx^2}=0,\quad \frac{da}{dx}(2+\epsilon)=0,\quad a(2+\epsilon)=a_0,$$ in order to have $a$ as a constant. Then I consider different values of $a_0$ and ask whether my boundary conditions are fulfilled.

EDIT 1. Let me try to explain my issue a little more. I am incorporating the value at infinity by using the asymptotic behavior; this means I will have a relation between the field and its derivative.
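As an aside, the sign-flip logic that shooting relies on can be illustrated on a toy eigenvalue problem (the quantum harmonic oscillator, not the Schwarzschild equation above): for a generic parameter value the integrated solution blows up, and only at the eigenvalue does the decaying branch survive, so one can bisect on the sign of the solution at the outer boundary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for the eigenvalue search: u'' = (x^2 - E) u on [0, x_max],
# u(0) = 1, u'(0) = 0 (even ground state).  For generic E the solution blows
# up like exp(+x^2/2); only at the eigenvalue E = 1 does it follow the
# decaying branch exp(-x^2/2).  The sign of u(x_max) flips as E crosses the
# eigenvalue, so plain bisection on E finds it.

def endpoint(E, x_max=6.0):
    sol = solve_ivp(lambda x, y: [y[1], (x**2 - E) * y[0]],
                    (0.0, x_max), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lo, hi = 0.5, 1.5        # endpoint(lo) diverges to +inf, endpoint(hi) to -inf
for _ in range(60):      # bisection on the sign of u(x_max)
    mid = 0.5 * (lo + hi)
    if endpoint(mid) > 0:
        lo = mid
    else:
        hi = mid

print(0.5 * (lo + hi))   # close to 1.0, the exact ground-state eigenvalue
```

The same idea carries over to the radial equation above: integrate outward from the horizon and bisect on $a$ using the sign (or the asymptotic relation between $L$ and $L'$) at the outer boundary, instead of scanning a grid of values.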
I will post the code for you in case it is helpful:

from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode

These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use $L(2+\epsilon)=1$:

def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1, 0, om, 0])  # conditions near the horizon: [L_c, dL/dx, a, da/dx]

This is our system of equations:

def F_sch(IC, r, rho_c, m, lam, l, j=0, mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr, dph_dr, dom_dr, db_dr]

Then I try different values of om and ask whether my boundary conditions are fulfilled. p_sch holds the parameters of my model. (What I ultimately want to do is more complicated and will need more parameters than the purely massive case, but I need to start with the simplest case, which is what I am asking about here.)

p_sch = (1, 1, 0, 0)  # [rho_c, m, lam, l]; lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep, p_sch[1], n_omega)
r = np.linspace(2+ep_r, r_end, n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with omega =', omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch, ini, r, p_sch, mxstep=50000000)
    print(Y[-1, 0])
    # This is my boundary condition that I am implementing.
    # Basically this should be my condition at "infinity":
    if abs(Y[-1,0]*((p_sch[1]**2. - Y[-1,2]**2.)**(1/2.) + 1./(r[-1])) + Y[-1,1]) < tol:
        print(j, 'iterations in omega')
        print("R'(inf) =", Y[-1,0])
        print("omega =", omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break

Basically what I want to do here is to solve the system of equations for different initial conditions and find a value of $a$ ("om" in the code) that comes close to satisfying my boundary conditions. I need this because I can then feed such an initial guess to a secant method and try to find a better value for $a$. However, whenever I run this code I get divergent solutions, which is of course behavior I am not interested in.

I am trying the same thing with scipy.integrate.solve_bvp, but when I run the following code:

from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp

def bc(ya, yb, p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1, yb[0]-tol_s, ya[1], yb[1]+((m**2 - yb[2]**2)**(1/2) + 1/r_end)*yb[0], ya[2]-om, yb[2]-om, ya[3], yb[3]])

def fun(r, y, p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr, dph_dr, dom_dr, db_dr))

eps_r = 0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r, r_end, n_r)
y = np.zeros((4, r.size))
y[0] = 1
tol_s = 0.0001
p_sch = (1, 1, 0, 0, 0.8, tol_s, r_end)
sol = solve_bvp(fun, bc, r, y, p_sch)

I obtain this error:

ValueError: bc return is expected to have shape (11,), but actually has (8,).
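A note on that error: in solve_bvp the argument p is reserved for unknown parameters that the solver itself adjusts, and bc(ya, yb, p) must then return n + k residuals (here 4 states + the 7 entries of p_sch = 11). Fixed model constants are better captured in a closure, with only the eigenvalue in p. A minimal sketch on a standard Sturm-Liouville toy problem (y'' + k^2 y = 0, y(0) = y(1) = 0, adapted from the example in the scipy documentation, not the Schwarzschild system):

```python
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y, p):
    k = p[0]                  # the single *unknown* parameter
    return np.vstack((y[1], -k**2 * y[0]))

def bc(ya, yb, p):
    k = p[0]
    # n + k = 2 + 1 = 3 residuals: two boundary conditions plus one
    # normalization (y'(0) = k) that pins down the eigenfunction's scale.
    return np.array([ya[0], yb[0], ya[1] - k])

x = np.linspace(0, 1, 5)
y = np.zeros((2, x.size))
y[0, 1] = 1.0                 # rough guess for the one-full-period mode shape
y[0, 3] = -1.0
sol = solve_bvp(fun, bc, x, y, p=[6.0])

print(sol.p[0])               # converges to the eigenvalue k = 2*pi
```

Transferred to the problem above, one would let fun and bc close over the fixed constants (rho_c, m, lam, l, tol_s, r_end) and pass only p=[om_guess], so that bc returns 4 + 1 = 5 residuals.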
In addition to our technical support (e.g. via chat), you'll find resources on our website that may help you with your design using Dlubal Software, such as the Frequently Asked Questions (FAQ) below.

Answer: With the time history monitor, you can view all results over time. It is also possible to select several parts of the structure and then export the results directly to Excel.

Answer: With the Equivalent Loads and Forced Vibrations add-on modules, you can create result combinations that contain the governing combinations of seismic loads. To perform a design with them, they have to be combined further on the basis of the unusual combination. This combination is given, for example, in EN 1990, clause 6.4.3.4:

${\mathrm E}_{\mathrm d} = \sum_j {\mathrm G}_{{\mathrm k},j} + \mathrm P + {\mathrm A}_{\mathrm{Ed}} + \sum_i \psi_{2,i}\,{\mathrm Q}_{{\mathrm k},i}$

This unusual combination has to be defined manually in RFEM. Make sure that (for a direction combination with the 100/30% rule) both result combinations created by RF-/DYNAM Pro are added with the "Or" condition. Such a combination can be seen in Figure 02. It can then be used for further design: it is possible to evaluate the governing internal forces as well as to import and calculate this combination in the design modules.

Answer: No, this option does not necessarily have to be activated to consider the self-weight. If the masses are imported from a load case that already contains the self-weight, this option must not be activated; otherwise, the self-weight of the structure is doubled.

Answer: In the results tables of the RF-DYNAM Pro module, the stresses for solids are not displayed. To display the stresses and internal forces, it is necessary to export the results to a load case or to a result combination.
Then, you can look at the results in the load case or result combination as usual.

Answer: The two solution methods "Linear Modal Analysis" and "Linear Implicit Newmark Analysis" are available.

Linear Modal Analysis: This solution method uses a decoupled system based on the eigenvalues and mode shapes of the structure, so a natural vibration case must be assigned. This method should only be used if a sufficient number of eigenvalues of the structure have been calculated in the natural vibration case; in practice, care should be taken to achieve an effective modal mass factor of the total structure of approximately 1 in all governing directions. If this is not possible, this method will lead to inaccurate results.

Linear Implicit Newmark Analysis: This is a direct time-stepping method that does not require a natural vibration case, but it requires sufficiently small time steps to achieve accurate results. This method is recommended for complex structures, which would require a very large number of mode shapes to achieve an effective modal mass factor of around 1.

If a sufficient number of eigenvalues can be guaranteed for the linear modal analysis, both solution methods lead to approximately the same results. For more information about both methods, see the RF-DYNAM Pro manual.

Answer: For some solution methods, the Rayleigh coefficients are strictly required. Since the literature usually gives only Lehr's damping values, these have to be converted. The following formula is used for converting Lehr's damping values into Rayleigh coefficients:

${\mathrm D}_{\mathrm r} = \frac12\left(\frac{\alpha}{{\mathrm\omega}_{\mathrm r}} + \beta\,{\mathrm\omega}_{\mathrm r}\right)$

where α and β are the Rayleigh coefficients. One sets up a system of two such equations containing the natural angular frequencies of the two most dominant mode shapes.
In the case of these two mode shapes, the structure will then be damped with the specified damping value; all other mode shapes of the structure will have different damping values, which follow the curve displayed in Figure 01. The curve shows an example with the two natural angular frequencies of 10 and 20 rad/s and Lehr's damping of 0.015. It is also possible to use the 'Calculate from Lehr's Damping ...' button to open the corresponding conversion tool.

Answer: The results of the RF-/DYNAM Pro add-on modules Forced Vibrations, Nonlinear Time History, and Equivalent Loads are not listed directly in the printout report. This is mainly because dynamic calculations produce a large amount of data and results. In each of the mentioned modules, it is possible to create a result combination with the envelope results. In this generated result combination, you can find the same results as in the main programs and display them in the printout report as usual. Additionally, you can print pictures in the printout report as usual, and there is an option to display the time history graphically in the printout report.

Answer: The RF-/DYNAM Pro - Equivalent Loads add-on module only performs a linear analysis of the structure. If you apply a nonlinear model for the calculation, RF-/DYNAM Pro - Equivalent Loads will modify it internally and treat it as a linear model. The nonlinearity in your model represents the masonry, which cannot absorb any tensile forces. The problem is as follows: RF-/DYNAM Pro - Equivalent Loads linearly calculates the equivalent loads and exports load cases from them. However, the load cases are subsequently calculated nonlinearly on the basis of the material model, which is not entirely consistent. In addition, the results are superimposed according to the SRSS or CQC method, which results in both tensile and compressive forces being present in the model. In this case, you could change, e.g.,
the masonry to an isotropic linear material model and work with linear material properties. Additionally, it is possible to introduce line hinges at this location, which could be used, for example, to avoid moment restraint.

Answer: The differences between the two modules are explained in this FAQ. In general, both add-on modules should produce the same results if the settings are identical. However, this does not hold in the presence of nonlinearities, because no nonlinearities are considered within the RF-/DYNAM Pro add-on module. If the results are output via the Forced Vibrations add-on module, all nonlinearities are ignored. In contrast, the equivalent loads are calculated on a linear system, but the exported load cases are then calculated on the real system, that is, with all nonlinearities in RFEM or RSTAB. This may lead to inconsistent results. If you deactivate the nonlinearities for the exported load cases, both should give identical results. The way nonlinearities are considered in the response spectrum analysis is described, using tension members as an example, in this FAQ.

Answer: The complete quadratic combination (CQC rule) must be applied when analyzing spatial models with mixed torsional/translational mode shapes in which adjacent modes have periods that differ by less than 10%. If this is not the case, the square root of the sum of the squares (SRSS) rule may be applied instead.
The CQC rule is defined as follows: $E_{\mathrm{CQC}}=\sqrt{\sum_{i=1}^{p}\sum_{j=1}^{p}E_{i}\,\varepsilon_{ij}\,E_{j}}$ with the correlation coefficient: $\varepsilon_{ij}=\frac{8\sqrt{D_{i}D_{j}}\,\left(D_{i}+r\,D_{j}\right)r^{3/2}}{\left(1-r^{2}\right)^{2}+4D_{i}D_{j}\,r\left(1+r^{2}\right)+4\left(D_{i}^{2}+D_{j}^{2}\right)r^{2}}$ with: $r=\frac{\omega_{j}}{\omega_{i}}$ The correlation coefficient simplifies if the viscous damping value $D$ is selected to be the same for all mode shapes: $\varepsilon_{ij}=\frac{8D^{2}\left(1+r\right)r^{3/2}}{\left(1-r^{2}\right)^{2}+4D^{2}\,r\left(1+r\right)^{2}}$ In analogy to the SRSS rule, the CQC rule can also be executed as an equivalent linear combination. The formula of the modified CQC rule is as follows: $E_{\mathrm{CQC}}=\sum_{i=1}^{p}f_{i}\,E_{i}$ with: $f_{i}=\frac{\sum_{j=1}^{p}\varepsilon_{ij}\,E_{j}}{\sqrt{\sum_{i=1}^{p}\sum_{j=1}^{p}E_{i}\,\varepsilon_{ij}\,E_{j}}}$
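As a worked illustration of the CQC rule above (a sketch only, using the equal-damping form of the correlation coefficient):

```python
from math import sqrt

def cqc(E, omega, D):
    """CQC combination of peak modal responses.

    E: peak modal responses, omega: natural angular frequencies [rad/s],
    D: viscous damping ratio, assumed equal for all modes."""
    p = len(E)
    total = 0.0
    for i in range(p):
        for j in range(p):
            r = omega[j] / omega[i]
            # equal-damping correlation coefficient (eps = 1 when i == j)
            eps = (8.0 * D**2 * (1.0 + r) * r**1.5) / (
                  (1.0 - r**2)**2 + 4.0 * D**2 * r * (1.0 + r)**2)
            total += E[i] * eps * E[j]
    return sqrt(total)

# For well-separated modes the cross terms vanish and CQC approaches
# the SRSS result sqrt(3^2 + 4^2) = 5:
result = cqc([3.0, 4.0], [5.0, 80.0], 0.05)
```

For closely spaced modes the correlation coefficient approaches 1 and the combination approaches the absolute sum, which is exactly why the CQC rule is required in that case.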
I know it's been a long time, but in case someone is still having trouble with exercise II.4.15 from "Algebra: Chapter 0" by P. Aluffi: Since $p$ is prime, every non-identity element of $C_p$ has order $p$ (the order of an element divides the order of the group). Any automorphism $C_p \rightarrow C_p$ is determined by where the generator $g$ is mapped. It is easy to see that there are $p-1$ such automorphisms, with the identity morphism being the identity element; they form a group of order $p-1$. One may treat this group as the multiplicative group modulo $p$ (if it's not clear, let $\phi \in Aut_{Grp}(G)$ map the generator $g$ to $g^m$; then $\phi^2(g) = \phi(\phi(g)) = g^{m^2}$, so composition is just multiplication of the exponents). As was pointed out in the comments, in order to conclude that $Aut_{Grp}(C_p)$ is cyclic of order $p-1$, we need to show that $(\mathbb{Z}/p\mathbb{Z})^*$ has an element of order $p-1$ (making it cyclic). Let $G = (\mathbb{Z}/p\mathbb{Z})^*$ and let $g \in G$ be an element of maximal order. Since the group is abelian, the order of every element divides $|g|$. Therefore, for every $h \in G$, $h^{|g|} = 1$, so the equation $x^{|g|} = 1$ has $p-1$ solutions modulo $p$ (namely, every element of the group, whose order is $p-1$). From the theorem that $x^r = 1$ has at most $r$ solutions modulo $p$, we may conclude that $|g|$ is at least $p-1$; it is also at most $p-1$, since the order of an element is at most the order of the group. Now, suppose the order of $[m]_p$ in $(\mathbb{Z}/p\mathbb{Z})^*$ equals $p-1$. Then, with $\phi$ defined as above, $\phi^{p-1}(g) = g^{m^{p-1}} = g$, and no smaller power of $\phi$ is the identity. Hence, you may conclude that having an element of order $p-1$ makes $Aut_{Grp}(C_p)$ cyclic of order $p-1$ and thus isomorphic to $C_{p-1}$.
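For small primes, the argument can also be checked by brute force; this sketch finds an element of order $p-1$ in $(\mathbb{Z}/p\mathbb{Z})^* \cong Aut_{Grp}(C_p)$:

```python
def multiplicative_order(a, p):
    """Order of a in (Z/pZ)*, assuming gcd(a, p) = 1."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def generator_mod(p):
    """Return some m whose class generates (Z/pZ)*, i.e., has order p-1."""
    return next(a for a in range(1, p) if multiplicative_order(a, p) == p - 1)

# (Z/pZ)* is cyclic for every prime p, so a generator always exists:
for p in (3, 5, 7, 11, 13, 101):
    assert multiplicative_order(generator_mod(p), p) == p - 1
```

The automorphism $g \mapsto g^m$ with $m$ a generator then has order $p-1$ in $Aut_{Grp}(C_p)$, exactly as in the argument above.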
I would like to solve the following equation and make a log-log plot of $M$ against $\lambda$: $$\bigg|\frac{2\lambda^{2}}{(4\pi)^{2}} \frac{m^{2}}{m_{h}^{2}} \left[\log\left(\frac{m_{h}^{2}}{m^{2}}\right) - \frac{7}{6} \right]\bigg| = 10^{-12}$$ What is wrong with the following code?

table = Table[
  {Exp[M],
   λ /. NSolve[
      (((2*(λ^2))/(4*Pi)^(2))*(1/Exp[2 M])*[Log[Exp[2 M]] - (7/6)]) == 10^(-12),
      λ][[2]]},
  {M, -15, 15, 0.03}];
ListLogLogPlot[table]
I am trying to set things up so that occurrences of $$ are replaced by \] if in math mode, \[ otherwise. My strategy is to make $ active and have it behave differently depending on whether the following token is also a $. However, I cannot seem to make this work. Here is my attempt:

\documentclass{article}
\usepackage{lipsum}
\def\unactiveDollar{$}
\catcode`\$=\active
\def\activeDollar{$}
\def\executedollar{%
  \if\isDollar\activeDollar%
    \let\next=\executeDDollar%
  \else%
    \let\next=\unactiveDollar%
  \fi%
  \next%
}
\def\executeDDollar{%
  \ifmmode%
    \let\next=\]%
  \else%
    \let\next=\[%
  \fi%
  \next%
  \let\absorbnext= }
\def${\futurelet\isDollar\executedollar}
\begin{document}
\lipsum*[7]
$$\int x^2 \; dx = \frac{1}{3}x^3 + C$$
\lipsum[9]
%Make sure $ $ is not mistaken for $$:
It follows that $E = m c^2$ $\forall x \in X$.
\end{document}

The resulting error is:

./doubledollar.tex:29: TeX capacity exceeded, sorry [input stack size=5000].
\isDollar ->\if \isDollar \activeDollar \let \next =\executeDDollar \else \l...
l.29 $$ \int x^2 \; dx = \frac{1}{3}x^3 + C$$
./doubledollar.tex:29: ==> Fatal error occurred, no output PDF file produced!

Other variations (e.g., substituting \ifx for \if) give different errors or simply incorrect output, but nothing gets it right. As far as I can tell, the key problem is that my method for determining whether the following token is a $ (and absorbing it if so) simply does not work. What am I doing wrong, and how can I fix it? Additional note: an answer that simply throws an error whenever $$ is used directly (other macros/environments must still be able to use it internally) would also be acceptable, since it answers the titular question.
A Detection of CMB-Cluster Lensing using Polarization Data from SPTpol

Abstract: We report the first detection of gravitational lensing due to galaxy clusters using only the polarization of the cosmic microwave background (CMB). The lensing signal is obtained using a new estimator that extracts the lensing dipole signature from stacked images formed by rotating the cluster-centered Stokes $Q/U$ map cutouts along the direction of the locally measured background CMB polarization gradient. Using data from the SPTpol 500 deg$^{2}$ survey at the locations of roughly 18,000 clusters with richness $\lambda \ge 10$ from the Dark Energy Survey (DES) Year-3 full galaxy cluster catalog, we detect lensing at $4.8\sigma$. The mean stacked mass of the selected sample is found to be $(1.43 \pm 0.4) \times 10^{14}\ {\rm M_{\odot}}$, which is in good agreement with optical weak lensing based estimates using DES data and CMB-lensing based estimates using SPTpol temperature data. This measurement is a key first step for cluster cosmology with future low-noise CMB surveys, like CMB-S4, for which CMB polarization will be the primary channel for cluster lensing measurements.

Research Org.: Argonne National Lab. (ANL), Argonne, IL (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Fermi National Accelerator Lab.
(FNAL), Batavia, IL (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Contributing Org.: SPT; DES
OSTI Identifier: 1568875
Report Number(s): arXiv:1907.08605; FERMILAB-PUB-19-429-AE; DES-2018-0395; oai:inspirehep.net:1744661
DOE Contract Number: AC02-07CH11359
Resource Type: Journal Article
Journal Name: TBD
Country of Publication: United States
Language: English
Subject: 79 ASTRONOMY AND ASTROPHYSICS
Citation: Raghunathan, S., et al. "A Detection of CMB-Cluster Lensing using Polarization Data from SPTpol." United States: N. p., 2019. Web. https://www.osti.gov/servlets/purl/1568875
The differential equation $$(1-x^2)y''-xy'+p^2y=0$$ can be assumed to have a series solution of the form $$y(x) = x^q \sum_0^\infty a_n x^n$$ with $q = 0, 1$ and recurrence relation $$a_{n+2} = \frac{(n+q)^2-p^2}{(n+q+1)(n+q+2)} a_n$$ Find the terminating polynomial series for $p = 2$. First of all, I don't understand why $q$ is not simply $0$ in all cases, because the series expansion is clearly about $x=0$, which is an ordinary point of the equation, so I don't see why to invoke Fuchs's theorem. However, approaching the question as suggested, I think the series can terminate in two ways: Case 1: $q = 0$, $n = 2$. This gives $a_2 = -2a_0$, hence $$T_2(x) = a_0 -2a_0x^2$$ or, if we use the common normalisation $T(1) = 1$, we get: $$T_2(x) = 2x^2-1$$ So far, all good. However, I think there is also a second case: Case 2: $q = 1$, $n = 1$. In this case we terminate with $a_3 = 0$, so we have the series: $$T'_2(x) = x^1 (a_1x) = a_1x^2 = x^2$$ after normalising. I think this solution satisfies both the termination condition and the recurrence relation found for the series. However, upon substitution I notice that this is not a valid solution to the equation! Why doesn't case $2$ work?
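A quick numerical check of both candidates (a sketch): substituting into the left-hand side of the ODE shows that $T_2(x)=2x^2-1$ gives residual $0$ at every sample point, while $x^2$ leaves the constant residual $2$, consistent with the observation above that case 2 fails:

```python
def residual(y, dy, ddy, x, p=2):
    """Residual of (1 - x^2) y'' - x y' + p^2 y at the point x."""
    return (1.0 - x * x) * ddy(x) - x * dy(x) + p * p * y(x)

# Case 1: T_2(x) = 2x^2 - 1  (q = 0 chain terminating at n = 2)
T2, dT2, ddT2 = (lambda x: 2 * x * x - 1), (lambda x: 4 * x), (lambda x: 4.0)

# Case 2 candidate: x^2  (from q = 1, the a_1 chain)
f, df, ddf = (lambda x: x * x), (lambda x: 2 * x), (lambda x: 2.0)

samples = (-0.7, 0.0, 0.3, 1.5)
assert all(abs(residual(T2, dT2, ddT2, x)) < 1e-12 for x in samples)
# x^2 leaves a constant residual of 2, so it is not a solution:
assert all(abs(residual(f, df, ddf, x) - 2.0) < 1e-12 for x in samples)
```

The residual of $x^2$ is $(1-x^2)\cdot 2 - 2x^2 + 4x^2 = 2$ identically, which the sampled check confirms.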
Given the Fourier series expansion of a signal $x(t)$ with fundamental frequency $\omega_0$ gives $c_k$, I am trying to find the Fourier coefficients $b_k$ for the signal $x(1-t)$. Instead of using the time shifting property directly, I am trying to obtain the result by first principles. $$b_k=\frac{1}{T} \, \int_T \, x(1-t) \, \text{e}^{-j\, k\, \omega_0\, t} \, \text{d}t$$ Making a substitution $u=1-t$, I obtain $$b_k= - \frac{1}{T} \, \int_T \, x(u) \, \text{e}^{-j\, k\, \omega_0\, (1-u)} \, \text{d}u$$ $$=- \frac{1}{T} \, \int_T \, x(u) \, \text{e}^{-j\, k\, \omega_0} \, \text{e}^{j\, k\, \omega_0 \, u} \, \text{d}u$$ $$=- \text{e}^{-j\, k\, \omega_0} \, \frac{1}{T} \, \int_T \, x(u) \, \text{e}^{j\, k\, \omega_0 \, u} \, \text{d}u$$ I can see that the integral is the conjugate of the analysis equation. If $x(t)$ is a real signal, then $c_k=c^{\ast}_{-k}$ or $c^{\ast}_k=c_{-k}$. Thus I get $b_k=-\text{e}^{-j\, k\, \omega_0} \, c^{\ast}_k$ or $b_k=-\text{e}^{-j\, k\, \omega_0} \, c_{-k}$. I would say that for $x(1-t)$, its Fourier series coefficients are $b_k=-\text{e}^{-j\, k\, \omega_0} \, c_{-k}$ when $c_k$ are the Fourier series coefficients of $x(t)$. Is the above answer correct? Thank you.
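One way to double-check the algebra numerically (a sketch with an arbitrary real test signal; note that the substitution $u=1-t$ also reverses the limits of integration, which contributes a second sign flip) is to compare the computed $b_k$ against $\text{e}^{-j k \omega_0} c_{-k}$:

```python
import math, cmath

T = 2.0
w0 = 2.0 * math.pi / T  # fundamental angular frequency

def x(t):
    # arbitrary real test signal with period T
    return 1.5 * math.cos(w0 * t) + 0.5 * math.sin(2.0 * w0 * t) + 0.25

def fs_coeff(sig, k, n=4096):
    """k-th Fourier series coefficient: (1/T) * integral over one period
    of sig(t) e^{-j k w0 t}, via the rectangle rule (exact for band-limited
    periodic signals up to roundoff)."""
    dt = T / n
    return sum(sig(i * dt) * cmath.exp(-1j * k * w0 * i * dt)
               for i in range(n)) * dt / T

for k in range(-3, 4):
    b_k = fs_coeff(lambda t: x(1.0 - t), k)            # coefficients of x(1-t)
    predicted = cmath.exp(-1j * k * w0) * fs_coeff(x, -k)
    assert abs(b_k - predicted) < 1e-9
```

The check passes for the version without the leading minus sign, which is worth comparing carefully against the derivation above.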
Away from the singular point the standard quadrature methods should be fine, so the problem is really about computing an integral of the form$$ \mathrm{P.V.}\int_{-h}^{h} \frac{f(x)}{x}\,dx. $$ I would suggest trying the following things: If you know a closed-form Taylor series for $f(x)=G(\omega+x)$, then you can choose $h$ small enough for the series approximation to be accurate, and calculate the principal value in closed form for each series term. Alternatively, you can try to interpolate $f$ numerically, using something like polyfit, which will give you the Taylor series numerically. Depending on how well-behaved your function is, it is difficult to say from the start whether any of this will definitely work. Still, the simplest way (although numerically unstable) is to write the integral as$$ \int_0^h \frac{f(x)-f(-x)}{x}\,dx, $$and apply the standard quadrature algorithm to this. The issue is that the calculation $f(x)-f(-x)$ will lead to inaccuracies on the order of $\int_0^h (\epsilon_{\mathrm{mach}}/x)\,dx$, which may or may not be tolerable. If you have a way to simplify the expression $f(x)-f(-x)$ to make it numerically stable, this is probably the best option. There are ways to derive quadrature nodes and weights for computing principal values of such integrals (just search for them; even GSL has them), but I couldn't find this in MATLAB. These are numerical methods for principal value integrals on the real line. You can also try to compute the integral as$$ \mathrm{P.V.}\int_{-h}^{h}\frac{f(x)}{x}\,dx = \int_\gamma \frac{f(x)}{x}\,dx + \pi \mathrm{i}f(0), $$where $\gamma$ is a contour going from $-h$ to $h$ passing above the real line. If you can evaluate $G$ for complex $\omega$, then even standard quadrature routines will be able to compute the integral over $\gamma$.
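As a concrete sketch of the symmetrized approach, the snippet below integrates $g(x) = (f(x)-f(-x))/x$ with the composite midpoint rule, which never places a node at the removable singularity $x=0$; for $f(x)=e^x$ the exact principal value over $[-1,1]$ has the series $2\sum_{k\ge0} \frac{1}{(2k+1)\,(2k+1)!}$:

```python
import math

def pv_over_x(f, h, n=200_000):
    """P.V. of the integral of f(x)/x over [-h, h], computed as the
    ordinary integral of g(x) = (f(x) - f(-x))/x over [0, h] with the
    composite midpoint rule (no node at the singularity x = 0)."""
    dx = h / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += (f(x) - f(-x)) / x
    return total * dx

# Example: f(x) = exp(x), so g(x) = 2*sinh(x)/x (smooth at 0), and
# P.V. int_{-1}^{1} e^x / x dx = 2 * sum_{k>=0} 1/((2k+1)*(2k+1)!)
approx = pv_over_x(math.exp, 1.0)
exact = 2.0 * sum(1.0 / ((2 * k + 1) * math.factorial(2 * k + 1))
                  for k in range(12))
```

For this smooth integrand the midpoint rule converges at second order, so the result is accurate to well beyond single precision; the cancellation issue mentioned above only bites when $f(x)-f(-x)$ is tiny relative to $f$ itself.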
Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong (e.g., the missing square puzzle). Do you know other examples?

The never-ending chocolate bar! If only I knew of this as a child.. The trick here is that the left piece that is three bars wide grows at the bottom when it slides up. In reality, what would happen is that there would be a gap at the right between the three-bar piece and the cut. This gap is three bars wide and one-third of a bar tall, explaining how we ended up with an "extra" piece. Side by side comparison: Notice how the base of the three-wide bar grows. Here's what it would look like in reality$^1$: 1: Picture source https://www.youtube.com/watch?v=Zx7vUP6f3GM

A bit surprised this hasn't been posted yet. Taken from this page: Visualization can be misleading when working with alternating series. A classical example is \begin{align*} \ln 2=&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots,\\ \frac{\ln 2}{2}=&\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\ldots \end{align*} Adding the two series, one finds \begin{align*}\frac32\ln 2=&\left(\frac11+\frac13+\frac15+\ldots\right)-2\left(\frac14+\frac18+\frac1{12}+\ldots\right)=\\ =&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots=\\ =&\ln2. \end{align*}

Here's how to trick students new to calculus (applicable only if they don't have graphing calculators at the time): $0$. Ask them to find the inverse of $x+\sin(x)$, which they will be unable to do. Then, $1$. Ask them to draw the graph of $x+\sin(x)$. $2$.
Ask them to draw the graph of $x-\sin(x)$. $3$. Ask them to draw $y=x$ on both graphs. Here's what they will do: $4$. Ask them, "What do you conclude?". They will say that the two functions are inverses of each other, and then get very confused.

Construct a rectangle $ABCD$. Now identify a point $E$ such that $CD = CE$ and the angle $\angle DCE$ is a non-zero angle. Take the perpendicular bisector of $AD$, crossing at $F$, and the perpendicular bisector of $AE$, crossing at $G$. Label where the two perpendicular bisectors intersect as $H$ and join this point to $A$, $B$, $C$, $D$, and $E$. Now, $AH=DH$ because $FH$ is a perpendicular bisector; similarly $BH = CH$. $AH=EH$ because $GH$ is a perpendicular bisector, so $DH = EH$. And by construction $BA = CD = CE$. So the triangles $ABH$, $DCH$ and $ECH$ are congruent, and so the angles $\angle ABH$, $\angle DCH$ and $\angle ECH$ are equal. But if the angles $\angle DCH$ and $\angle ECH$ are equal then the angle $\angle DCE$ must be zero, which is a contradiction.

Proof: Let $O$ be the intersection of the perpendicular bisector of $[BC]$ and the bisector of the angle $\widehat{BAC}$. Then $OB=OC$ and $\widehat{BAO}=\widehat{CAO}$. So the triangles $BOA$ and $COA$ are the same and $BA=CA$. Another example: From "Pastiches, paradoxes, sophismes, etc.", with the solution on page 23: http://www.scribd.com/JJacquelin/documents A copy of the solution is added below. The translation of the comment is: Explanation: the points A, B and P are not on a straight line (the area of the triangle ABP is 0.5). The graphical highlight is magnified only on the left side of the figure.

I think this could be the goats puzzle (Monty Hall problem), which is nicely visually represented with simple doors. Three doors: behind 2 are goats, behind 1 is a prize. You choose a door to open to try to get the prize, but before you open it, one of the other doors is opened to reveal a goat. You then have the option of changing your mind. Should you change your decision?
From looking at the diagram above, you know for a fact that you have a 1/3 chance of guessing correctly. Next, a door with a goat behind it is opened: A cursory glance suggests that your odds have improved from 1/3 to a 50/50 chance of getting it right. But the truth is different... By calculating all possibilities, we see that if you change, you have a higher chance of winning. The easiest way to think about it, for me, is this: if you choose the car first, switching is guaranteed to give a goat; if you choose a goat first, switching is guaranteed to give the car. You're more likely to choose a goat first because there are more goats, so you should always switch.

A favorite of mine was always the following: \begin{align*} \require{cancel}\frac{64}{16} = \frac{\cancel{6}4}{1\cancel{6}} = 4 \end{align*} I particularly like this one because of how simple it is and how it gets the right answer, though for the wrong reasons of course.

A recent example I found, credited to Martin Gardner, is similar to some of the others posted here, but perhaps wrong for a slightly different reason, as the diagonal cut really is straight. I found the image at a blog belonging to Greg Ross. Spoilers: The triangles being cut out are not isosceles as you might think, but really have base $1$ and height $1.1$ (as they are clearly similar to the larger triangles). This means that the resulting rectangle is really $11\times 9.9$ and not the reported $11\times 10$.

Squaring the circle with Kochanski's approximation.

One of my favorites: \begin{align} x&=y\\ x^2&=xy\\ x^2-y^2&=xy-y^2\\ \frac{x^2-y^2}{x-y}&=\frac{xy-y^2}{x-y}\\ x+y&=y\\ \end{align} Therefore, $1+1=1$. The error here is in dividing by $x-y$, which is zero.

That $\sum_{n=1}^\infty n = -\frac{1}{12}$: http://www.numberphile.com/videos/analytical_continuation1.html The way it is presented in the clip is completely incorrect, and could spark a great discussion as to why.
Some students may notice the hand-waving 'let's intuitively accept $1 -1 +1 -1 \ldots = 0.5$'. If we accept this assumption (and the operations on divergent sums that are usually not allowed), we can get to the result. A discussion that the seemingly nonsensical result directly follows from a nonsensical assumption is useful. This can reinforce why it's important to distinguish between convergent and divergent series, and it can be done within the framework of convergent series. A deeper discussion can consider the implications of allowing such a definition for divergent series (i.e., Ramanujan summation) and can lead to a discussion on whether such a definition is useful, given that it leads to seemingly nonsensical results. I find this interesting for opening up the idea that mathematics is not set in stone, and it can link to the history of irrational and imaginary numbers (which historically have been considered less-than-rigorous or interesting-but-not-useful).

\begin{equation} \log6=\log(1+2+3)=\log 1+\log 2+\log 3 \end{equation}

Here is one I saw on a whiteboard as a kid... \begin{align*} 1=\sqrt{1}=\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}=\sqrt{-1}^2=-1 \end{align*}

I might be a bit late to the party, but here is one which my maths teacher showed me, and which I find to be a very nice example of why one shouldn't solve an equation by looking at hand-drawn plots, or even computer-generated ones. Consider the following equation: $$\left(\frac{1}{16}\right)^x=\log_{\frac{1}{16}}x$$ At least where I live, it is taught in school how the exponential and logarithmic plots look when the base is between $0$ and $1$, so a student should be able to draw a plot which would look like this: Easy, right? Clearly there is just one solution, lying at the intersection of the graphs with the line $x=y$ (the dashed one; note the plots are each other's reflections in that line). Well, this is clear at least until you try some simple values of $x$.
Namely, plugging in $x=\frac{1}{2}$ or $\frac{1}{4}$ gives you two more solutions! So what's going on? In fact, I have intentionally put in incorrect plots (you get the picture above if you replace $16$ by $3$). The real plot looks like this: You might disagree, but to me it still seems like a plot with just one intersection point. But, in fact, the part where the two plots meet contains all three points of intersection. Zooming in on the interval with all the solutions lets one barely see what's going on: The oscillations are truly minuscule there. Here is the plot of the difference of the two functions on this interval: Note the scale of the $y$ axis: the differences are on the order of $10^{-3}$. Good luck drawing that by hand! To get a better idea of what's going on with the plots, here they are with $16$ replaced by $50$:
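If one distrusts every plot, the three solutions for base $16$ can also be pinned down numerically; a small bisection sketch (the middle root is the fixed point where $(1/16)^x = x$, lying on the line $y=x$):

```python
import math

def f(x):
    # (1/16)^x - log_{1/16}(x); its zeros are the solutions
    return (1.0 / 16.0) ** x - math.log(x, 1.0 / 16.0)

def bisect(fn, a, b, tol=1e-13):
    """Plain bisection on a bracket [a, b] with a sign change."""
    fa = fn(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * fn(m) <= 0.0:
            b = m
        else:
            a, fa = m, fn(m)
    return 0.5 * (a + b)

# Each interval brackets exactly one sign change of f
roots = [bisect(f, a, b) for a, b in ((0.2, 0.3), (0.3, 0.4), (0.45, 0.55))]
# roots[0] = 1/4, roots[2] = 1/2; roots[1] is the on-the-line fixed point
```

The middle root, roughly $0.3642$, is where the oscillation described above crosses zero between the two "easy" solutions.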
\begin{align}-20 &= -20\\ 16 - 16 - 20 &= 25 - 25 - 20\\ 16 - 36 &= 25 - 45\\ 16 - 36 + \frac{81}{4} &= 25 - 45 + \frac{81}{4}\\ \left(4 - \frac{9}{2}\right)^2 &= \left(5 - \frac{9}{2}\right)^2\\ 4 - \frac{9}{2} &= 5 - \frac{9}{2}\\ 4 &= 5 \end{align} You can generalize it to get any $a=b$ that you'd like this way: \begin{align}-ab&=-ab\\ a^2 - a^2 - ab &= b^2 - b^2 - ab\\ a^2 - a(a + b) &= b^2 -b(a+b)\\ a^2 - a(a + b) + \left(\frac{a + b}{2}\right)^2 &= b^2 -b(a+b) + \left(\frac{a + b}{2}\right)^2\\ \left(a - \frac{a+b}{2}\right)^2 &= \left(b - \frac{a+b}{2}\right)^2\\ a - \frac{a+b}{2} &= b - \frac{a+b}{2}\\ a &= b\\ \end{align} It's beautiful because visually the "error" is obvious in the line $\left(4 - \frac{9}{2}\right)^2 = \left(5 - \frac{9}{2}\right)^2$, leading the observer to investigate the reverse-FOIL step before it, even though this line is valid. I think part of the problem also stems from the fact that grade-school and high-school math education for the average person teaches that there's only one "right" way to work problems and that you always simplify, so most people are already confused by the un-simplifying process leading up to this point. I've found that fewer than 1 in 4 people can find the error unaided. Disappointingly, I've had several people tell me the problem stems from the fact that I started with negative numbers. :-( Solution: When working with variables, people often remember that $c^2 = d^2 \implies c = \pm d$, but forget it when working with concrete values, because the tendency to simplify everything leads them to turn squares of negatives into squares of positives before applying the square root. The number of people I've shown this to is a small sample size, but I've found that some people can carefully evaluate each line and find the error, and then can't explain it even after they've correctly evaluated $\left(-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$.
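The crux, equal squares with unequal roots, fits in two lines:

```python
# The squares really are equal (both are 1/4) ...
assert (4 - 9 / 2) ** 2 == (5 - 9 / 2) ** 2
# ... but taking square roots silently discards the sign: -1/2 != 1/2
assert (4 - 9 / 2) != (5 - 9 / 2)
```

So the fallacious step is the passage from $c^2 = d^2$ to $c = d$, exactly as the solution above explains.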
To give a contrarian interpretation of the question, I will chime in with Goldbach's comet, which counts the number of ways an integer can be expressed as the sum of two primes: It is mathematically "wrong" because there is no proof that this function doesn't equal zero infinitely often, and it is visually deceptive because it appears to be unbounded, with its lower bound increasing at a linear rate. This is essentially the same as the chocolate puzzle. It's easier to see, however, that the total square shrinks. This is a fake visual proof that a sphere has Euclidean geometry. Strangely enough, in a 3-dimensional hyperbolic space, the curvature of a sphere approaches a nonzero amount as its size grows, and an infinitely large object with exactly the curvature that a sphere approaches would have Euclidean geometry and appear sort of the way that image appears. I don't know about you, but to me it looks like the hexagons are stretched horizontally. If you also see it that way and you trust your eyes, then you could take that as a visual proof that $\arctan\frac{7}{4} < 60^\circ$. If that's how you saw it, then it's an optical illusion, because the hexagons are really stretched vertically. Unlike some optical illusions of images that appear different than they are but are still mathematically possible, this is an optical illusion of a mathematically impossible image. The math shows that $\tan 60^\circ = \sqrt{3}$ and $\sqrt{3} < \frac{7}{4}$, because $7^2 = 49$ but $3 \times 4^2 = 48$. It's just like how it's mathematically impossible for something to not be moving when it is moving, but it's theoretically possible for your eyes to stop sending movement signals to your brain and have you not see movement in something that is moving, which would look creepy for those who have not experienced it, because your brain could still tell by a more complex method than signals from the eyes that it actually is moving.
To draw a hexagonal grid over a square grid accurately, only the math, and not your eye signals, can be trusted. The math shows that the continued fraction of $\sqrt{3}$ is $[1; 1, 2, 1, 2, 1, 2, \ldots]$, which is less than $\frac{7}{4}$, not more.

I do not think this really qualifies as "visually intuitive", but it is definitely funny. They do such a great job at dramatizing these kinds of situations. Who cannot remember an instance in which he has been either a "Billy" or a "Pa' and Ma'"? Maybe more "Pa' and Ma'" instances on my part... ;)
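The continued-fraction claim for $\sqrt{3}$ is easy to verify with the standard $(P, Q)$ recurrence for quadratic irrationals (a quick sketch):

```python
import math

def sqrt_cf_terms(n, count):
    """First `count` continued-fraction terms of sqrt(n), n not a square,
    using the periodic (P, Q) recurrence for quadratic irrationals."""
    a0 = math.isqrt(n)
    terms, P, Q = [a0], 0, 1
    for _ in range(count - 1):
        P = terms[-1] * Q - P
        Q = (n - P * P) // Q
        terms.append((a0 + P) // Q)
    return terms

# sqrt(3) = [1; 1, 2, 1, 2, 1, 2, ...], and indeed sqrt(3) < 7/4:
cf = sqrt_cf_terms(3, 7)
```

The convergents $1, 2, 5/3, 7/4, \ldots$ straddle $\sqrt{3}$, and since $7/4$ is an over-estimating convergent, $\sqrt{3} < 7/4$ follows.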
Let $f(z) = \sum_{n=1}^\infty a(n) e^{2i \pi nz}$ be an eigenform of $S_k(\Gamma_0(N))$. Since the Hecke operator acts by $T_p f = a(p) f$, the Riemann hypothesis for $f$'s L-function is that $$(\sum_{n=1}^\infty a(n) n^{-s})^{-1} =\prod_p (1-a(p) p^{-s}+p^{k-1-2s}) =\frac{\displaystyle[\prod_p (1-T_p p^{-s}+p^{k-1-2s})] f(z)}{f(z)} \tag{1}$$ converges and is analytic for $\Re(s) > k/2$. Then the Riemann hypothesis for all the eigenforms of $S_k(\Gamma_0(N))$ is that, for any $f \in S_k(\Gamma_0(N))$, $$[\prod_p (1-T_p p^{-s}+p^{k-1-2s})] f(z) \quad \text{ is analytic for } \Re(s) > k/2\tag{2} $$ Questions: This suggests defining a Riemann hypothesis for the Hecke operators themselves, and I would like to know if there is a well-known way to think about that. The RH for the Hecke operators could be the convergence in operator norm of $\lim_{x \to \infty} \prod_{p \le x} (1-T_p p^{-s}+p^{k-1-2s})$ for $\Re(s) > k/2$, with the norm coming from the Petersson inner product. But there are other possible norms, for example $\langle f,g\rangle = \int_0^\infty f(ix) \overline{g(ix)} x^{2 \sigma -1}dx$. Indeed, the choice of the norm is a major problem: when defining $\displaystyle T_p f(z) = p^{k-1}\sum_{ad =p,\ b \bmod d} d^{-k} f\left(\frac{az+b}{d}\right) \tag{3}$ the statement in $(1)$ works the same way whenever $f(z) = \sum_{n=1}^\infty a_n e^{2i \pi n z}$ and $\sum_{n=1}^\infty a_n n^{-s} = \prod_p (1-a_p p^{-s}+ p^{k-1-2s})^{-1}$, whether or not $f$ is modular, so that in general $f$ doesn't have a Riemann hypothesis (nor does the $T_p$ operator acting on it). So we really need a norm and a statement specific to modular forms, for example about $\prod_p (1-T_pp^{-s}+\langle p \rangle p^{k-1-2s})$ (which works for $S_k(\Gamma_1(N))$ too). For short, RH is believed true for Dirichlet series with an Euler product and a functional equation, while $\prod_p (1-T_p p^{-s}+p^{k-1-2s})$ is just an Euler product.
So hopefully, we only need to add a reference to modularity (which implies the functional equation) to make a viable RH statement. Assuming we have solved that part, the Hecke operators depend on $N$ only for finitely many $p$, so can we expect the statement to imply the Riemann hypothesis for $\displaystyle\bigcup_N S_k(\Gamma_0(N))$? In that case, what about the dependence on $k$? Also, looking at the weight-$\frac{1}{2}$ forms $\sum_{n \ge 1} \chi(n) e^{2i \pi n^2 z}$ could help.
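One way to see the Euler-product relation in $(1)$ concretely is to check the corresponding coefficient identities on the classical eigenform $\Delta \in S_{12}(\Gamma_0(1))$, whose coefficients are the Ramanujan $\tau$ numbers. The following sketch (my own illustration, not part of the question) computes them from the product $q\prod_{n\ge1}(1-q^n)^{24}$ and verifies $\tau(mn)=\tau(m)\tau(n)$ for coprime $m,n$ and $\tau(p^2)=\tau(p)^2-p^{k-1}$ with $k=12$:

```python
def delta_coeffs(N):
    """Coefficients tau(1), ..., tau(N) of Delta(z) = q * prod_{n>=1} (1-q^n)^24,
    where q = e^{2 i pi z} (naive power-series multiplication, fine for small N)."""
    poly = [0] * N            # coefficients of q^0 .. q^{N-1} of prod (1-q^n)^24
    poly[0] = 1
    for n in range(1, N):
        for _ in range(24):   # multiply by (1 - q^n), 24 times
            for i in range(N - 1, n - 1, -1):
                poly[i] -= poly[i - n]
    return {m: poly[m - 1] for m in range(1, N + 1)}  # shift by one power of q

k = 12
tau = delta_coeffs(30)
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
assert tau[6] == tau[2] * tau[3]              # multiplicativity at coprime 2, 3
assert tau[4] == tau[2] ** 2 - 2 ** (k - 1)   # tau(p^2) = tau(p)^2 - p^{k-1}
```

These identities are exactly what the degree-2 Euler factors $(1-a(p)p^{-s}+p^{k-1-2s})$ encode, eigenvalue by eigenvalue.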
Revista Matemática Iberoamericana, Volume 30, Issue 4, 2014, pp. 1123–1134. DOI: 10.4171/RMI/809. Published online: 2014-12-15. On irreducible divisors of iterated polynomials, by Domingo Gómez-Pérez (Universidad de Cantabria, Santander, Spain), Alina Ostafe (University of New South Wales, Sydney, Australia) and Igor E. Shparlinski (University of New South Wales, Sydney, Australia). Abstract: D. Gómez-Pérez, A. Ostafe, A. P. Nicolás and D. Sadornil have recently shown that for almost all polynomials $f \in \mathbb F_q[X]$ over the finite field of $q$ elements, where $q$ is an odd prime power, their iterates eventually become reducible polynomials over $\mathbb F_q$. Here we combine their method with some new ideas to derive finer results about the arithmetic structure of iterates of $f$. In particular, we prove that the $n$th iterate of $f$ has a square-free divisor of degree of order at least $n^{1+o(1)}$ as $n\to \infty$ (uniformly in $q$). Keywords: iterations of polynomials, irreducible divisors. Citation: Gómez-Pérez Domingo, Ostafe Alina, Shparlinski Igor: On irreducible divisors of iterated polynomials. Rev. Mat. Iberoam. 30 (2014), 1123–1134. doi: 10.4171/RMI/809
Last year I made a post about the universal program, a Turing machine program $p$ that can in principle compute any desired function, if it is only run inside a suitable model of set theory or arithmetic. Specifically, there is a program $p$, such that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$, there is a model $M\models\text{PA}$ — or of $\text{ZFC}$, whatever theory you like — inside of which program $p$ on input $n$ gives output $f(n)$. This theorem is related to a very interesting theorem of W. Hugh Woodin’s, which says that there is a program $e$ such that $\newcommand\PA{\text{PA}}\PA$ proves $e$ accepts only finitely many inputs, but such that for any finite set $A\subset\N$, there is a model of $\PA$ inside of which program $e$ accepts exactly the elements of $A$. Actually, Woodin’s theorem is a bit stronger than this in a way that I shall explain. Victoria Gitman gave a very nice talk today on both of these theorems at the special session on Computability theory: Pushing the Boundaries at the AMS sectional meeting here in New York, which happens to be meeting right here in my east midtown neighborhood, a few blocks from my home. What I realized this morning, while walking over to Vika’s talk, is that there is a very simple proof of the version of Woodin’s theorem stated above. The idea is closely related to an idea of Vadim Kosoy mentioned in my post last year. In hindsight, I see now that this idea is also essentially present in Woodin’s proof of his theorem, and indeed, I find it probable that Woodin had actually begun with this idea and then modified it in order to get the stronger version of his result that I shall discuss below. But in the meantime, let me present the simple argument, since I find it to be very clear and the result still very surprising. Theorem. There is a Turing machine program $e$, such that $\PA$ proves that $e$ accepts only finitely many inputs. 
For any particular finite set $A\subset\N$, there is a model $M\models\PA$ such that inside $M$, the program $e$ accepts all and only the elements of $A$. Indeed, for any set $A\subset\N$, including infinite sets, there is a model $M\models\PA$ such that inside $M$, program $e$ accepts $n$ if and only if $n\in A$. Proof. The program $e$ simply performs the following task: on any input $n$, search for a proof from $\PA$ of a statement of the form “program $e$ does not accept exactly the elements of $\{n_1,n_2,\ldots,n_k\}$.” Accept nothing until such a proof is found. For the first such proof that is found, accept $n$ if and only if $n$ is one of those $n_i$’s. In short, the program $e$ searches for a proof that $e$ doesn’t accept exactly a certain finite set, and when such a proof is found, it accepts exactly the elements of this set anyway. Clearly, $\PA$ proves that program $e$ accepts only a finite set, since either no such proof is ever found, in which case $e$ accepts nothing (and the empty set is finite), or else such a proof is found, in which case $e$ accepts only that particular finite set. So $\PA$ proves that $e$ accepts only finitely many inputs. But meanwhile, assuming $\PA$ is consistent, then you cannot refute the assertion that program $e$ accepts exactly the elements of some particular finite set $A$, since if you could prove that from $\PA$, then program $e$ actually would accept exactly that set (for the shortest such proof), in which case this would also be provable, contradicting the consistency of $\PA$. Since you cannot refute any particular finite set as the accepting set for $e$, it follows that it is consistent with $\PA$ that $e$ accepts any particular finite set $A$ that you like. So there is a model of $\PA$ in which $e$ accepts exactly the elements of $A$. This establishes statement (2). Statement (3) now follows by a simple compactness argument. 
Namely, for any $A\subset\N$, let $T$ be the theory of $\PA$ together with the assertions that program $e$ accepts $n$, for any particular $n\in A$, and the assertions that program $e$ does not accept $n$, for $n\notin A$. Any finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. Any model of this theory realizes statement (3). QED One uses the Kleene recursion theorem to show the existence of the program $e$, which makes reference to $e$ in the description of what it does. Although this may look circular, it is a standard technique to use the recursion theorem to eliminate the circularity. This theorem immediately implies the classical result of Mostowski and Kripke that there is an independent family of $\Pi^0_1$ assertions, since the assertions $n\notin W_e$ are exactly such a family. The theorem also implies a strengthening of the universal program theorem that I proved last year. Indeed, the two theorems can be realized with the same program! Theorem. There is a Turing machine program $e$ with the following properties: $\PA$ proves that $e$ computes a finite function; For any particular finite partial function $f$ on $\N$, there is a model $M\models\PA$ inside of which program $e$ computes exactly $f$. For any partial function $f:\N\to\N$, finite or infinite, there is a model $M\models\PA$ inside of which program $e$ on input $n$ computes exactly $f(n)$, meaning that $e$ halts on $n$ if and only if $f(n)\downarrow$ and in this case $\varphi_e(n)=f(n)$. Proof. The proof of statements (1) and (2) is just as in the earlier theorem. It is clear that $e$ computes a finite function, since either it computes the empty function, if no proof is found, or else it computes the finite function mentioned in the proof. And you cannot refute any particular finite function for $e$, since if you could, it would have exactly that behavior anyway, contradicting $\text{Con}(\PA)$. So statement (2) holds. 
But meanwhile, we can get statement (3) by a simple compactness argument. Namely, fix $f$ and let $T$ be the theory asserting $\PA$ plus all the assertions that $\varphi_e(n)\uparrow$, for each $n$ not in the domain of $f$, and $\varphi_e(n)=k$, whenever $f(n)=k$. Every finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. But any model of this theory exactly fulfills statement (3). QED Woodin’s proof is more difficult than the arguments I have presented, but I realize now that this extra difficulty is because he is proving an extremely interesting and stronger form of the theorem, as follows. Theorem. (Woodin) There is a Turing machine program $e$ such that $\PA$ proves $e$ accepts at most a finite set, and for any finite set $A\subset\N$ there is a model $M\models\PA$ inside of which $e$ accepts exactly $A$. And furthermore, in any such $M$ and any finite $B\supset A$, there is an end-extension $M\subset_{end} N\models\PA$, such that in $N$, the program $e$ accepts exactly the elements of $B$. This is a much more subtle claim, as well as philosophically interesting for the reasons that he dwells on. The program I described above definitely does not achieve this stronger property, since my program $e$, once it finds the proof that $e$ does not accept exactly $A$, will accept exactly $A$, and this will continue to be true in all further end-extensions of the model, since that proof will continue to be the first one that is found.
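The structure of the simple self-referential program $e$ described above can be sketched in ordinary code, with the unbounded search for a PA-proof replaced by a stub oracle. Everything here, including the `proof_found_at` helper, is a hypothetical illustration of my own; real proof enumeration and the recursion-theorem self-reference are not modeled:

```python
from itertools import count

def make_e(proof_found_at):
    """Build the program e.  Here `proof_found_at(step)` is a stub standing in
    for a PA-proof enumerator: it returns a finite set A if step `step` of the
    search uncovers a proof of "e does not accept exactly A", else None."""
    def e(n):
        for step in count():
            A = proof_found_at(step)
            if A is not None:
                # A proof that e does NOT accept exactly A has been found:
                # accept exactly the elements of A anyway.
                return n in A
        # if no proof is ever found, e diverges and accepts nothing
    return e

# Toy run: pretend the first such proof, found at step 3, names A = {2, 5}.
e = make_e(lambda step: {2, 5} if step >= 3 else None)
print(e(2), e(5), e(4))  # True True False
```

The sketch makes the key point visible: whatever finite set the (consistent) theory claims $e$ avoids, $e$ accepts exactly that set, so no such claim can be proved.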
J. D. Hamkins, “Every countable model of set theory embeds into its own constructible universe,” J. Math. Logic, vol. 13, iss. 2, p. 1350006, 27, 2013. @article {Hamkins2013:EveryCountableModelOfSetTheoryEmbedsIntoItsOwnL, AUTHOR = {Hamkins, Joel David}, TITLE = {Every countable model of set theory embeds into its own constructible universe}, JOURNAL = {J. Math. Logic}, FJOURNAL = {J.~Math.~Logic}, VOLUME = {13}, YEAR = {2013}, NUMBER = {2}, PAGES = {1350006, 27}, ISSN = {0219-0613}, MRCLASS = {03C62 (03E99 05C20 05C60 05C63)}, MRNUMBER = {3125902}, MRREVIEWER = {Robert S. Lubarsky}, DOI = {10.1142/S0219061313500062}, eprint = {1207.0963}, archivePrefix = {arXiv}, primaryClass = {math.LO}, URL = {http://wp.me/p5M0LV-jn}, } In this article, I prove that every countable model of set theory $\langle M,{\in^M}\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Another way to say this is that there is an embedding $$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$ that is elementary for quantifier-free assertions in the language of set theory. Main Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. 
When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, thereby answering a question posed by Ewan Delanoy. Main Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$. The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal. The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality. Main Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations. In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like. 
The proof, in brief: for every countable acyclic digraph, consider the partial order induced by the edge relation, and extend this order to a total order, which may be embedded in the rational order $\mathbb{Q}$. Thus, every countable acyclic digraph admits a $\mathbb{Q}$-grading, an assignment of rational numbers to nodes such that all edges point upwards. Next, one can build a countable homogeneous, universal, existentially closed $\mathbb{Q}$-graded digraph, simply by starting with nothing, and then adding finitely many nodes at each stage, so as to realize the finite pattern property. The result is a computable presentation of what I call the countable random $\mathbb{Q}$-graded digraph $\Gamma$. If $M$ is any nonstandard model of finite set theory, then we may run this computable construction inside $M$ for a nonstandard number of steps. The standard part of this nonstandard finite graph includes a copy of $\Gamma$. Furthermore, since $M$ thinks it is finite and acyclic, it can perform a modified Mostowski collapse to realize the graph in the hereditarily finite sets of $M$. By looking at the sets corresponding to the nodes in the copy of $\Gamma$, we find a submodel of $M$ that is isomorphic to $\Gamma$, which is universal for all countable acyclic binary relations. So every countable model of ZFC is isomorphic to a submodel of $M$. The article closes with a number of questions, which I record here (and which I have also asked on mathoverflow: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?) Although the main theorem shows that every countable model of set theory embeds into its own constructible universe $$j:M\to L^M,$$ this embedding $j$ is constructed completely externally to $M$ and there is little reason to expect that $j$ could be a class in $M$ or otherwise amenable to $M$. To what extent can we prove or refute the possibility that $j$ is a class in $M$?
This amounts to considering the matter internally as a question about $V$. Surely it would seem strange to have a class embedding $j:V\to L$ when $V\neq L$, even if it is elementary only for quantifier-free assertions, since such an embedding is totally unlike the sorts of embeddings that one usually encounters in set theory. Nevertheless, I am at a loss to refute the hypothesis, and the possibility that there might be such an embedding is intriguing, if not tantalizing, for one imagines all kinds of constructions that pull structure from $L$ back into $V$. Question 1. Can there be an embedding $j:V\to L$ when $V\neq L$? By embedding, I mean an isomorphism from $\langle V,{\in}\rangle$ to its range in $\langle L,{\in}\rangle$, which is the same as a quantifier-free-elementary map $j:V\to L$. The question is most naturally formalized in Gödel-Bernays set theory, asking whether there can be a GB-class $j$ forming such an embedding. If one wants $j:V\to L$ to be a definable class, then this of course implies $V=\text{HOD}$, since the definable $L$-order can be pulled back to $V$, via $x\leq y\iff j(x)\leq_L j(y)$. More generally, if $j$ is merely a class in Gödel-Bernays set theory, then the existence of an embedding $j:V\to L$ implies global choice, since from the class $j$ we can pull back the $L$-order. For these reasons, we cannot expect every model of ZFC or of GB to have such embeddings. Can they be added generically? Do they have some large cardinal strength? Are they outright refutable? If they are not outright refutable, then it would seem natural that these questions might involve large cardinals; perhaps $0^\sharp$ is relevant. But I am unsure which way the answers will go. The existence of large cardinals provides extra strength, but may at the same time make it harder to have the embedding, since it pushes $V$ further away from $L$.
For example, it is conceivable that the existence of $0^\sharp$ will enable one to construct the embedding, using the Silver indiscernibles to find a universal submodel of $L$; but it is also conceivable that the non-existence of $0^\sharp$, because of covering and the corresponding essential closeness of $V$ to $L$, may make it easier for such a $j$ to exist. Or perhaps it is simply refutable in any case. The first-order analogue of the question is: Question 2. Does every set $A$ admit an embedding $j:\langle A,{\in}\rangle \to \langle L,{\in}\rangle$? If not, which sets do admit such embeddings? The main theorem shows that every countable set $A$ embeds into $L$. What about uncountable sets? Let us make the question extremely concrete: Question 3. Does $\langle V_{\omega+1},{\in}\rangle$ embed into $\langle L,{\in}\rangle$? How about $\langle P(\omega),{\in}\rangle$ or $\langle\text{HC},{\in}\rangle$? It is also natural to inquire about the nature of $j:M\to L^M$ even when it is not a class in $M$. For example, can one find such an embedding for which $j(\alpha)$ is an ordinal whenever $\alpha$ is an ordinal? The embedding arising in the proof of the main theorem definitely does not have this feature. Question 4. Does every countable model $\langle M,{\in^M}\rangle$ of set theory admit an embedding $j:M\to L^M$ that takes ordinals to ordinals? Probably one can arrange this simply by being a bit more careful with the modified Mostowski procedure in the proof of the main theorem. And if this is correct, then numerous further questions immediately come to mind, concerning the extent to which we can ensure more attractive features for the embeddings $j$ that arise in the main theorems. This will be particularly interesting in the case of well-founded models, as well as in the case of $j:V\to L$, as in question 1, if that should be possible. Question 5. Can there be a nontrivial embedding $j:V\to L$ that takes ordinals to ordinals?
Finally, I inquire about the extent to which the main theorems of the article can be extended from the countable models of set theory to the $\omega_1$-like models: Question 6. Does every $\omega_1$-like model of set theory $\langle M,{\in^M}\rangle$ admit an embedding $j:M\to L^M$ into its own constructible universe? Are the $\omega_1$-like models of set theory linearly pre-ordered by embeddability?
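Stepping back to the proof sketch of the main theorem: its first combinatorial step, extending the edge relation of an acyclic digraph to a total order and embedding that order in $\mathbb{Q}$, can be made concrete for finite graphs. This is a toy sketch of my own; none of the names come from the article:

```python
from fractions import Fraction

def q_grading(nodes, edges):
    """Assign rationals to the nodes of a finite ACYCLIC digraph so that every
    edge (u, v) points strictly upward: grade[u] < grade[v].  Toy version:
    topologically sort (Kahn's algorithm), then embed the total order in Q."""
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        indeg[v] += 1
    order = []
    queue = [v for v in nodes if indeg[v] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    # any order-embedding into Q works; integers-as-rationals is simplest
    return {v: Fraction(i) for i, v in enumerate(order)}

g = q_grading(['a', 'b', 'c', 'd'],
              [('a', 'b'), ('b', 'c'), ('a', 'c'), ('a', 'd')])
assert all(g[u] < g[v]
           for u, v in [('a', 'b'), ('b', 'c'), ('a', 'c'), ('a', 'd')])
```

In the countable case one runs the same idea along an enumeration of the graph, refining the rational grades as new nodes appear.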
The well-ordered replacement axiom is the scheme asserting that if $I$ is well-ordered and every $i\in I$ has a unique $y_i$ satisfying a property $\phi(i,y_i)$, then $\{y_i\mid i\in I\}$ is a set. In other words, the image of a well-ordered set under a first-order definable class function is a set. Alfredo had introduced the theory Zermelo + foundation + well-ordered replacement, because he had noticed that it was this fragment of ZF that sufficed for an argument we were mounting in a joint project on bi-interpretation. At first, I had found the well-ordered replacement theory a bit awkward, because one can only apply the replacement axiom with well-orderable sets, and without the axiom of choice, it seemed that there were not enough of these to make ordinary set-theoretic arguments possible. But now we know that in fact, the theory is equivalent to ZF. Theorem. The axiom of well-ordered replacement is equivalent to full replacement over Zermelo set theory with foundation. $$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{well-ordered replacement}$$ Proof. Assume Zermelo set theory with foundation and well-ordered replacement. Well-ordered replacement is sufficient to prove that transfinite recursion along any well-order works as expected. One proves that every initial segment of the order admits a unique partial solution of the recursion up to that length, using well-ordered replacement to put them together at limits and overall. Applying this, it follows that every set has a transitive closure, by iteratively defining $\cup^n x$ and taking the union.
And once one has transitive closures, it follows that the foundation axiom can be taken either as the axiom of regularity or as the $\in$-induction scheme, since for any property $\phi$, if there is a set $x$ with $\neg\phi(x)$, then let $A$ be the set of elements $a$ in the transitive closure of $\{x\}$ with $\neg\phi(a)$; an $\in$-minimal element of $A$ is a set $a$ with $\neg\phi(a)$, but $\phi(b)$ for all $b\in a$. Another application of transfinite recursion shows that the $V_\alpha$ hierarchy exists. Further, we claim that every set $x$ appears in the $V_\alpha$ hierarchy. This is not immediate and requires careful proof. We shall argue by $\in$-induction using foundation. Assume that every element $y\in x$ appears in some $V_\alpha$. Let $\alpha_y$ be least with $y\in V_{\alpha_y}$. The problem is that if $x$ is not well-orderable, we cannot seem to collect these various $\alpha_y$ into a set. Perhaps they are unbounded in the ordinals? No, they are not, by the following argument. Define an equivalence relation $y\sim y’$ iff $\alpha_y=\alpha_{y’}$. It follows that the quotient $x/\sim$ is well-orderable, since the map sending each equivalence class to the corresponding ordinal $\alpha_y$ is injective, and thus we can apply well-ordered replacement in order to know that $\{\alpha_y\mid y\in x\}$ exists as a set. The union of this set is an ordinal $\alpha$ with $x\subseteq V_\alpha$ and so $x\in V_{\alpha+1}$. So by $\in$-induction, every set appears in some $V_\alpha$. The argument establishes the principle: for any set $x$ and any definable class function $F:x\to\text{Ord}$, the image $F\mathrel{\text{”}}x$ is a set. One proves this by defining an equivalence relation $y\sim y’\leftrightarrow F(y)=F(y’)$ and observing that $x/\sim$ is well-orderable. We can now establish the collection axiom, using a similar idea. Suppose that $x$ is a set and every $y\in x$ has a witness $z$ with $\phi(y,z)$.
Every such $z$ appears in some $V_\alpha$, and so we can map each $y\in x$ to the smallest $\alpha_y$ such that there is some $z\in V_{\alpha_y}$ with $\phi(y,z)$. By the observation of the previous paragraph, the set of $\alpha_y$ exists, and so there is an ordinal $\alpha$ larger than all of them; thus $V_\alpha$ serves as a collecting set for $x$ and $\phi$, verifying this instance of collection. From collection and separation, we can deduce the replacement axiom. $\Box$ I’ve realized that this allows me to improve an argument I had made some time ago, concerning Transfinite recursion as a fundamental principle. In that argument, I had proved that ZC + foundation + transfinite recursion is equivalent to ZFC, essentially by showing that the principle of transfinite recursion implies replacement for well-ordered sets. The new realization here is that we do not need the axiom of choice in that argument, since transfinite recursion implies well-ordered replacement, which gives us full replacement by the argument above. Corollary. The principle of transfinite recursion is equivalent to the replacement axiom over Zermelo set theory with foundation. $$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{transfinite recursion}$$ There is no need for the axiom of choice.
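The recursion $V_{\alpha+1}=\mathcal P(V_\alpha)$ used throughout the proof can be computed directly at the finite stages, which makes the hierarchy concrete. This is a toy illustration of my own; the infinite stages are of course not computable:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets (so they can be elements of a set)."""
    lst = list(s)
    return {frozenset(c) for r in range(len(lst) + 1)
            for c in combinations(lst, r)}

def V(n):
    """Finite stages of the cumulative hierarchy: V_0 = {}, V_{n+1} = P(V_n)."""
    stage = set()
    for _ in range(n):
        stage = powerset(stage)
    return stage

print([len(V(n)) for n in range(5)])  # [0, 1, 2, 4, 16]
```

The sizes iterate the exponential: $|V_{n+1}| = 2^{|V_n|}$, which is exactly the power-set step of the recursion.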
LaTeX is an extremely useful typesetting language to learn, especially in a math environment like this. However, the quick instructions Brilliant.org gives just aren't good enough to use for most situations. This is why I've decided to create a beginner's guide. There is a table of contents for easy symbol or format finding. I hope you can refer to this guide later, when writing solutions, problems, or notes. Note: You can also view the LaTeX code of an equation by hovering over it. Read "Seeing actual LaTeX" for more details! To quickly navigate to the part you want via the Table of Contents, press CTRL+F and type in the section you want (including the tildes ~ before and after the section). Table of Contents ~Using LaTeX~ ~Text~ ~Basic Operations~ ~Fractions~ ~Sums, Products, Limits, and Integrals~ ~Modular Arithmetic~ ~Trigonometry~ ~Combinatorics~ ~Geometry~ ~Calculus~ ~Parentheses~ ~Fitting Parentheses~ ~Tables and Arrays~ ~Other~ ~Using LaTeX~ To use LaTeX, put a backslash and a left parenthesis before the math you want to LaTeXify, and a backslash and a right parenthesis after it. However, if you want your math to be more conspicuous and centered, you can use a backslash then a left bracket, then your math, then a backslash and a right bracket. This second option is the display text. A lot of other math operations will look better in this text. To force the first option to also use display text, you can add a \displaystyle at the beginning. ~Text~ To write text in LaTeX use \text{your text here}. This gives upright text. To use bolded text, use \textbf{your text here}. Italicized text is similar: \textit{your text here}.
~Basic Operations~ "x+y" gives $x+y$ "x-y" gives $x-y$ "x=y" gives $x=y$ "x\times y" gives $x\times y$ "x\cdot y" gives $x\cdot y$ "x\div y" gives $x\div y$ "x\pm y" gives $x\pm y$ "x\mp y" gives $x\mp y$ x^{y} gives $x^{y}$ x_{y} gives $x_{y}$ \sqrt{x} gives $\sqrt{x}$ \sqrt[y]{x} gives $\sqrt[y]{x}$ \log_{a}b gives $\log_{a}b$ \ln a gives $\ln a$ (that's a lowercase "l" in the beginning, not an uppercase "i") Note that many of you use "*" or "." for multiplying. These show up as $*$ and $.$, which don't look good. Use $\times$ or $\cdot$ instead. Also, the brackets in x^{y} or x_{y} may be omitted if the index is a single character. However, if it is more than one character, then brackets are needed, or else only the first character will be placed in the index. ~Fractions~ Many people simply put a slash between the numerator and denominator to represent a fraction: $x/y$. However, there are neater ways in LaTeX. \frac{x}{y} is the standard way to write fractions: $\frac{x}{y}$ \dfrac{x}{y} gives a bigger, clearer version: $\dfrac{x}{y}$. However, this takes up more vertical space; the "d" stands for "display text". EXTRA \cfrac{x}{y} is a special type of fraction formatting. This is for continued fractions, hence the "c". Typing \cfrac{x}{x+\cfrac{y}{y+\cfrac{z}{2}}} gives $\cfrac{x}{x+\cfrac{y}{y+\cfrac{z}{2}}}$ ~Sums, Products, Limits, and Integrals~ These four are in the same group because they format differently than other symbols. "\sum" gives $\sum$ "\prod" gives $\prod$ "\lim" gives $\lim$ "\int" gives $\int$ We can add the other elements of each by using _ and ^: \sum_{i=0}^n gives $\sum_{i=0}^n$ \prod_{i=0}^n gives $\prod_{i=0}^n$ \lim_{x\rightarrow n} gives $\lim_{x\rightarrow n}$ \int_{a}^{b} gives $\int_{a}^{b}$ However, these don't look very good inline. Once put in display text, either using the brackets or using \displaystyle as said in the beginning of the guide, they look normal: \displaystyle\sum_{i=0}^n gives $\displaystyle\sum_{i=0}^n$ \displaystyle\prod_{i=0}^n gives $\displaystyle\prod_{i=0}^n$ \displaystyle\lim_{x\rightarrow n} gives $\displaystyle\lim_{x\rightarrow n}$ \displaystyle\int_{a}^{b} gives $\displaystyle\int_{a}^{b}$ ~Modular Arithmetic~ "\equiv" gives $\equiv$ \mod{a} gives $\mod{a}$ \pmod{a} gives $\pmod{a}$ \bmod{a} is \mod{a} without the space before it: $x \bmod a$ versus $x \mod a$ "a\mid b" creates $a\mid b$, which states that $b$ is divisible by $a$.
~Trigonometry~ Many of you simply put "sin" and "cos" and are done with it; however, adding a backslash before them makes them look much better. \sin gives $\sin$ (as opposed to $sin$) \cos gives $\cos$ (as opposed to $cos$) \tan gives $\tan$ \sec gives $\sec$ \csc gives $\csc$ \cot gives $\cot$ \arcsin gives $\arcsin$ \arccos gives $\arccos$ \arctan gives $\arctan$ Putting a ^{-1} after the trigonometric function designates it as the inverse. For example, \sin^{-1} gives $\sin^{-1}$. \sinh gives $\sinh$ \cosh gives $\cosh$ \tanh gives $\tanh$ ~Combinatorics~ \binom{x}{y} gives $\binom{x}{y}$ \dbinom{x}{y} gives $\dbinom{x}{y}$ ~Geometry~ x^{\circ} gives the degree symbol: $x^{\circ}$ \angle gives $\angle$ \Delta gives $\Delta$ \triangle also does the job: $\triangle$ \odot gives $\odot$ AB\parallel CD gives $AB\parallel CD$ AB\perp CD gives $AB\perp CD$ A\cong B gives $A\cong B$ A\sim B gives $A\sim B$ ~Calculus~ We've already learned to use $\int$. However, there is much more to calculus than integrals! There is no command for the total derivative, so you have to use \text{d} to get around it. For example, \dfrac{\text{d}}{\text{d}x} gives $\dfrac{\text{d}}{\text{d}x}$ Fortunately, there is a symbol for partial derivatives: \partial gives $\partial$. So, \dfrac{\partial}{\partial x} gives $\dfrac{\partial}{\partial x}$ Double or even triple integrals can be condensed into \iint and \iiint, respectively. These give $\iint$ and $\iiint$ (in display text). EXTRA Line integrals can be written as \oint: $\oint$. ~Parentheses~ ( and ) are standard for parentheses: $(x)$ [ and ] are used for brackets: $[x]$ \{ and \} are used for curly brackets: $\{x\}$ \lfloor and \rfloor are used for the floor function: $\lfloor x\rfloor$ \lceil and \rceil are used for the ceiling function: $\lceil x\rceil$ \langle and \rangle are used for vectors: $\langle x,y\rangle$ The vertical line symbol | (not a capital "i" or a lowercase "l"!) is used for absolute value: $|x|$ ~Fitting Parentheses~ Suppose you want to put parentheses around a tall expression, such as a big fraction. When you try with plain ( and ), the parentheses stay small and do not fit the expression. How do you stretch the parentheses to fit? Use \left before the left parenthesis and \right before the right one, like this: \left( and \right). When put back into the expression, this yields full-height parentheses, as desired.
This isn't just for parentheses; you can use \left and \right on brackets and the other delimiters as well. You can also use this technique on things that use only one parenthesis/bracket/etc. However, just putting \left or \right alone will yield an error, because \left and \right come in pairs. In order to sidestep this, you can put a period after the one that you do not need (i.e. \left. or \right.). This way it will not produce an error, and it will stretch the delimiter to size. For example, this: \left. \dfrac{x^3+2x}{3x^2}\right|_0^3 gives this: $\left. \dfrac{x^3+2x}{3x^2}\right|_0^3$ ~Tables and Arrays~ To make tables and arrays, use \ begin{array}{[modifiers]} ... \ end{array}. (A space is put before "begin" and before "end" to prevent the LaTeX from rendering prematurely. Even though there are no brackets around it to make it render, it does so anyway; I don't know why.) In the modifiers section, you put either l for left, c for center, or r for right, once per column. For example, to make an array with 3 columns, all aligned along the right edge, you put "rrr" inside the modifier. It would look like this: \ begin{array}{rrr} ... \ end{array}. To add a vertical line between two columns, put the vertical line symbol | between two modifiers: for example, if you wanted a vertical line between the first two columns in the previous example, then you would put \ begin{array}{r|rr} ... \ end{array}. For actual input in the array, there are two rules: put a "&" sign to switch to the next column, and put a "\ \" divider (again, a space is added in between to prevent it from rendering) to switch to the next row. When building the table, always fill in row by row: in the first row, fill in all the corresponding columns, then switch to the next row, and continue in this manner. For example, if I wanted to make a square with the numbers $1$ through $9$, I would put: \ begin{array}{lcr}1 & 2 & 3 \ \ 4 & 5 & 6 \ \ 7 & 8 & 9 \ end{array}. This produces the $3\times 3$ square of numbers.
To insert horizontal lines between any two rows, put \hline after the divider that separates the two rows. For example, if I wanted to add horizontal and vertical lines to the previous example to look like a tic-tac-toe board, this would be my code: \ begin{array}{l|c|r}1 & 2 & 3 \ \ \hline 4 & 5 & 6 \ \ \hline 7 & 8 & 9 \ end{array} ~Other~ To negate any symbol, put \not before the symbol. For example, "\not =" gives $\not =$ Look here for a big list of symbols. If you don't know how to do something or see something missing in this guide, please do comment below so I can add it! Together, we can make a great LaTeX guide!
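As a "putting it together" example of my own (not from the original guide), here is a snippet combining display math, \left/\right stretching, \dfrac, a limit, and a sum:

```latex
% Display math with stretched parentheses around a sum of big fractions:
\[
  \lim_{n \to \infty} \left( \sum_{k=1}^{n} \dfrac{1}{k^{2}} \right)
  = \dfrac{\pi^{2}}{6}
\]
```

Note how \left( and \right) grow to the height of the \dfrac, and how the limits of \sum sit above and below it because the whole expression is in display style.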
Can someone help me to evaluate this integral? $$\int \limits _0 ^\beta \frac{\sin\theta}r\,d\theta$$ I need it in a physics exercise. Given a circle (as shown in diagram), a point $P$ and angle $\theta$ made with the axis. Supposing the circle to be of radius $R$ and its center to be $(0,0)$ and $P=(-a,0)$, we have the point where the ray intersects the circle at $(R\cos\eta,R\sin\eta)$ and so \begin{align} \sin\theta & = \frac{R\sin\eta}{\sqrt{(a+R\cos\eta)^2 + R^2\sin^2\eta}} = \frac{R\sin\eta}{\sqrt{R^2+a^2 + 2aR\cos\eta}}, \\[10pt] r & = \sqrt{(a+R\cos\eta)^2 + R^2\sin^2\eta} = \sqrt{R^2+a^2 + 2aR\cos\eta}, \\[15pt] \text{so } \frac{\sin\theta} r & = \frac{R\sin\eta}{R^2+a^2 + 2aR\cos\eta}, \\[10pt] \text{and } d\theta & = \frac{R^2+aR\cos\eta}{R^2+a^2 + 2aR\cos\eta} \, d\eta. \end{align} It's a rational function of sine and cosine, so if all else fails, the standard tangent half-angle substitution will handle it. PS: Maybe I should be explicit about the derivation of $d\theta$. We have$$\tan\theta = \frac{\text{rise}}{\text{run}} = \frac{R\sin\eta}{a+R\cos\eta}$$so$$\theta = \arctan \frac{R\sin\eta}{a+R\cos\eta}$$and then differentiate.
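As a quick sanity check on the change of variables (with illustrative sample values $R=2$, $a=5$ of my own choosing), one can compare the closed-form $d\theta/d\eta$ above against a numerical derivative:

```python
import math

R, a = 2.0, 5.0  # assumed sample geometry: circle of radius R, P at distance a from the center

def theta(eta):
    # angle at P subtended by the point (R cos eta, R sin eta) on the circle
    return math.atan2(R * math.sin(eta), a + R * math.cos(eta))

def dtheta_deta(eta):
    # the Jacobian derived above: (R^2 + aR cos eta) / (R^2 + a^2 + 2aR cos eta)
    return (R * R + a * R * math.cos(eta)) / (R * R + a * a + 2 * a * R * math.cos(eta))

def sin_theta_over_r(eta):
    # the integrand rewritten in eta: R sin eta / (R^2 + a^2 + 2aR cos eta)
    return R * math.sin(eta) / (R * R + a * a + 2 * a * R * math.cos(eta))

eta, h = 0.7, 1e-6
numeric = (theta(eta + h) - theta(eta - h)) / (2 * h)  # central difference
assert abs(numeric - dtheta_deta(eta)) < 1e-6

# sin(theta)/r should match the simplified expression as well
r = math.sqrt(R * R + a * a + 2 * a * R * math.cos(eta))
assert abs(math.sin(theta(eta)) / r - sin_theta_over_r(eta)) < 1e-12
```

With both pieces checked, the integral in $\eta$ can be attacked with the half-angle substitution as suggested.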
Asaf states in his answer that if for all well-orderable sets $X$ the power set $\mathcal P(X)$ is also well-orderable, then the axiom of choice must hold. There is a proof of this left as an exercise in Kunen's Set Theory, 2011 ed., exercise I.12.17, which comes with a hint, and goes as follows: In $\mathsf{ZF}$, including the axiom of regularity of course, suppose that for every cardinal $\aleph$, $2^{\aleph}$ is well-orderable. This implies that for every ordinal $\delta$, $\mathcal P(\delta)$ is well-orderable, since in $\mathsf{ZF}$ we have $\delta\thickapprox \aleph$, where $\aleph$ is the greatest cardinal with $\aleph\leq \delta.$ Let us prove by transfinite induction on $\gamma$ that $R(\gamma)$, the set of all sets of rank less than $\gamma$, is well-orderable for all ordinals $\gamma$. As we are assuming $V=WF$, this will imply $\mathsf {AC}$. Suppose $\gamma\neq 0$ is such that the property holds for all $\alpha<\gamma$. If $\gamma=\alpha+1$ is a successor, then as $R(\alpha)$ is well-orderable there is some ordinal $\delta$ such that $\delta\thickapprox R(\alpha)$, hence $R(\gamma)=\mathcal P(R(\alpha))\thickapprox \mathcal P(\delta)$; but $\mathcal P(\delta)$ is well-orderable by hypothesis, so $R(\gamma)$ is well-orderable too. Now suppose $\gamma$ is limit. By Hartogs' theorem (which holds in $\mathsf{ZF}$), there is some cardinal $\kappa$ with $\kappa \npreceq R(\gamma)$. Fix a well-order $\sqsubset$ of $\mathcal P(\kappa)$, which exists by hypothesis. Let us define by induction on $\alpha<\gamma$ an explicit well-order $\lhd_{\alpha}$ on $R(\alpha)$ for all $\alpha<\gamma$. Furthermore, let us construct each $\lhd_{\alpha}$ in such a manner that for any $\alpha<\beta<\gamma$ we have $\lhd_{\beta}\cap (R(\alpha)\times R(\alpha))\subseteq \lhd_{\alpha}$. Let $\lhd_0$ be the canonical well-order of $R(0)$ (the empty order on the empty set). Suppose we have defined $\lhd_{\alpha}$ for all $\alpha<\beta$ for some $\beta\leq \gamma$, meeting the conditions above. There are two cases: Case $\beta=\alpha+1$.
Since $\kappa\npreceq R(\alpha)$, we must have that $\bf{type}$$(R(\alpha),\lhd_{\alpha})<\kappa.$ Let $f_{\alpha}:(R(\alpha),\lhd_{\alpha})\rightarrow \kappa$ be the canonical embedding, the one that maps $(R(\alpha),\lhd_{\alpha})$ onto an initial segment of $\kappa$. Then $f_{\alpha}[R(\alpha)]=\mu$ for some $\mu<\kappa$, so that $f^{-1}_{\alpha}[ \ ]:\mathcal P(\mu)\rightarrow \mathcal P(R(\alpha))$ is a bijection, and $\mathcal P(\mu)\subseteq \mathcal P(\kappa)$. Hence there is an explicit well-ordering of $\mathcal P(R(\alpha))$ via $f^{-1}_{\alpha}[ \ ]$ and the well-order $\sqsubset$ on $\mathcal P(\kappa)$. Finally, replace this well-order by the one that lists the elements of $R(\alpha)$ first, in $\lhd_{\alpha}$-order, followed by the elements of $\mathcal P(R(\alpha))\setminus R(\alpha)$ in the order just described, and let $\lhd_{\beta}$ be the result. Then $\lhd_{\beta}\cap (R(\alpha)\times R(\alpha))=\lhd_{\alpha}$, and hence $\lhd_{\beta}\cap (R(\nu)\times R(\nu))\subseteq \lhd_{\nu}$ for every $\nu<\beta$ by the induction hypothesis. Suppose $\beta$ is limit. Since for any $\alpha<\mu<\beta$ we have $\lhd_{\mu}\cap R(\alpha)^2\subseteq \lhd_{\alpha}$, it follows that if we put $\lhd_{\beta}=\bigcup_{\alpha<\beta}\lhd_{\alpha}$, then $\lhd_{\beta}$ is a well-order on $R(\beta)=\bigcup_{\alpha<\beta}R(\alpha)$, and also for any $\mu<\beta$, $\lhd_{\beta}\cap R(\mu)^2\subseteq \lhd_{\mu}.$ Hence by transfinite induction we obtain an explicit well-order $\lhd_{\gamma}$ of $R(\gamma)$. Hence $R(\gamma)$ is well-orderable for all ordinals $\gamma$, and therefore $\mathsf{AC}$ holds.
J. D. Hamkins, “Every countable model of set theory embeds into its own constructible universe,” J. Math. Logic, vol. 13, iss. 2, p. 1350006, 27, 2013. @article {Hamkins2013:EveryCountableModelOfSetTheoryEmbedsIntoItsOwnL, AUTHOR = {Hamkins, Joel David}, TITLE = {Every countable model of set theory embeds into its own constructible universe}, JOURNAL = {J. Math. Logic}, FJOURNAL = {J.~Math.~Logic}, VOLUME = {13}, YEAR = {2013}, NUMBER = {2}, PAGES = {1350006, 27}, ISSN = {0219-0613}, MRCLASS = {03C62 (03E99 05C20 05C60 05C63)}, MRNUMBER = {3125902}, MRREVIEWER = {Robert S. Lubarsky}, DOI = {10.1142/S0219061313500062}, eprint = {1207.0963}, archivePrefix = {arXiv}, primaryClass = {math.LO}, URL = {http://wp.me/p5M0LV-jn}, } In this article, I prove that every countable model of set theory $\langle M,{\in^M}\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Another way to say this is that there is an embedding $$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$ that is elementary for quantifier-free assertions in the language of set theory. Main Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. 
When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, thereby answering a question posed by Ewan Delanoy. Main Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$. The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal. The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality. Main Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations. In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like. 
The proof, in brief: for every countable acyclic digraph, consider the partial order induced by the edge relation, and extend this order to a total order, which may be embedded in the rational order $\mathbb{Q}$. Thus, every countable acyclic digraph admits a $\mathbb{Q}$-grading, an assignment of rational numbers to nodes such that all edges point upwards. Next, one can build a countable homogeneous, universal, existentially closed $\mathbb{Q}$-graded digraph, simply by starting with nothing, and then adding finitely many nodes at each stage, so as to realize the finite pattern property. The result is a computable presentation of what I call the countable random $\mathbb{Q}$-graded digraph $\Gamma$. If $M$ is any nonstandard model of finite set theory, then we may run this computable construction inside $M$ for a nonstandard number of steps. The standard part of this nonstandard finite graph includes a copy of $\Gamma$. Furthermore, since $M$ thinks the graph is finite and acyclic, it can perform a modified Mostowski collapse to realize the graph in the hereditarily finite sets of $M$. By looking at the sets corresponding to the nodes in the copy of $\Gamma$, we find a submodel of $M$ that is isomorphic to $\Gamma$, which is universal for all countable acyclic binary relations. So every countable model of ZFC is isomorphic to a submodel of $M$. The article closes with a number of questions, which I record here (and which I have also asked on mathoverflow: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?) Although the main theorem shows that every countable model of set theory embeds into its own constructible universe $$j:M\to L^M,$$ this embedding $j$ is constructed completely externally to $M$ and there is little reason to expect that $j$ could be a class in $M$ or otherwise amenable to $M$. To what extent can we prove or refute the possibility that $j$ is a class in $M$?
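The first step of this sketch, extending the induced partial order so that every edge points upward, can be illustrated for a finite acyclic digraph: a topological sort already produces integer grades, which embed into the rational order. The example digraph and all names below are my own illustration, not from the paper:

```python
from collections import deque

def q_grading(nodes, edges):
    """Assign a grade to each node so that grade[a] < grade[b] for every edge a -> b.

    Uses Kahn's topological sort; raises ValueError if the digraph has a cycle.
    """
    indeg = {n: 0 for n in nodes}
    out = {n: [] for n in nodes}
    for a, b in edges:
        indeg[b] += 1
        out[a].append(b)
    queue = deque(n for n in nodes if indeg[n] == 0)
    grade, g = {}, 0
    while queue:
        n = queue.popleft()
        grade[n] = g  # integer grades embed into the rational order Q
        g += 1
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(grade) != len(nodes):
        raise ValueError("digraph has a cycle, so no grading exists")
    return grade

# a small diamond-shaped acyclic digraph
nodes = ["w", "x", "y", "z"]
edges = [("w", "x"), ("w", "y"), ("x", "z"), ("y", "z")]
g = q_grading(nodes, edges)
assert all(g[a] < g[b] for a, b in edges)  # every edge points upward
```

The infinite construction in the proof is of course more delicate (it must realize all finite patterns), but the grading invariant it maintains is exactly the one checked here.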
This amounts to considering the matter internally as a question about $V$. Surely it would seem strange to have a class embedding $j:V\to L$ when $V\neq L$, even if it is elementary only for quantifier-free assertions, since such an embedding is totally unlike the sorts of embeddings that one usually encounters in set theory. Nevertheless, I am at a loss to refute the hypothesis, and the possibility that there might be such an embedding is intriguing, if not tantalizing, for one imagines all kinds of constructions that pull structure from $L$ back into $V$. Question 1. Can there be an embedding $j:V\to L$ when $V\neq L$? By embedding, I mean an isomorphism from $\langle V,{\in}\rangle$ to its range in $\langle L,{\in}\rangle$, which is the same as a quantifier-free-elementary map $j:V\to L$. The question is most naturally formalized in Gödel-Bernays set theory, asking whether there can be a GB-class $j$ forming such an embedding. If one wants $j:V\to L$ to be a definable class, then this of course implies $V=\text{HOD}$, since the definable $L$-order can be pulled back to $V$, via $x\leq y\iff j(x)\leq_L j(y)$. More generally, if $j$ is merely a class in Gödel-Bernays set theory, then the existence of an embedding $j:V\to L$ implies global choice, since from the class $j$ we can pull back the $L$-order. For these reasons, we cannot expect every model of ZFC or of GB to have such embeddings. Can they be added generically? Do they have some large cardinal strength? Are they outright refutable? If they are not outright refutable, then it would seem natural that these questions might involve large cardinals; perhaps $0^\sharp$ is relevant. But I am unsure which way the answers will go. The existence of large cardinals provides extra strength, but may at the same time make it harder to have the embedding, since it pushes $V$ further away from $L$.
For example, it is conceivable that the existence of $0^\sharp$ will enable one to construct the embedding, using the Silver indiscernibles to find a universal submodel of $L$; but it is also conceivable that the non-existence of $0^\sharp$, because of covering and the corresponding essential closeness of $V$ to $L$, may make it easier for such a $j$ to exist. Or perhaps it is simply refutable in any case. The first-order analogue of the question is: Question 2. Does every set $A$ admit an embedding $j:\langle A,{\in}\rangle \to \langle L,{\in}\rangle$? If not, which sets do admit such embeddings? The main theorem shows that every countable set $A$ embeds into $L$. What about uncountable sets? Let us make the question extremely concrete: Question 3. Does $\langle V_{\omega+1},{\in}\rangle$ embed into $\langle L,{\in}\rangle$? How about $\langle P(\omega),{\in}\rangle$ or $\langle\text{HC},{\in}\rangle$? It is also natural to inquire about the nature of $j:M\to L^M$ even when it is not a class in $M$. For example, can one find such an embedding for which $j(\alpha)$ is an ordinal whenever $\alpha$ is an ordinal? The embedding arising in the proof of the main theorem definitely does not have this feature. Question 4. Does every countable model $\langle M,{\in^M}\rangle$ of set theory admit an embedding $j:M\to L^M$ that takes ordinals to ordinals? Probably one can arrange this simply by being a bit more careful with the modified Mostowski procedure in the proof of the main theorem. And if this is correct, then numerous further questions immediately come to mind, concerning the extent to which we can ensure more attractive features for the embeddings $j$ that arise in the main theorems. This will be particularly interesting in the case of well-founded models, as well as in the case of $j:V\to L$, as in question , if that should be possible. Question 5. Can there be a nontrivial embedding $j:V\to L$ that takes ordinals to ordinals?
Finally, I inquire about the extent to which the main theorems of the article can be extended from the countable models of set theory to the $\omega_1$-like models: Question 6. Does every $\omega_1$-like model of set theory $\langle M,{\in^M}\rangle$ admit an embedding $j:M\to L^M$ into its own constructible universe? Are the $\omega_1$-like models of set theory linearly pre-ordered by embeddability?
Revista Matemática Iberoamericana, Volume 30, Issue 4, 2014, pp. 1149–1190. DOI: 10.4171/RMI/811. Published online: 2014-12-15. $H^{\infty}$ functional calculus and square function estimates for Ritt operators. Christian Le Merdy (Université de Franche-Comté, Besançon, France). A Ritt operator $T\colon X\to X$ on a Banach space is a power bounded operator satisfying an estimate $n\| T^{n}-T^{n-1}\| \leq C$. When $X=L^p(\Omega)$ for some $1\leq p \leq \infty$, we study the validity of square function estimates $\| (\sum_k k|T^{k}(x) - T^{k-1}(x)|^2)^{1/2}\|_{L^p}\lesssim \|x\|_{L^p}$ for such operators. We show that $T$ and $T^*$ both satisfy such estimates if and only if $T$ admits a bounded functional calculus with respect to a Stolz domain. This is a single-operator analogue of the famous Cowling–Doust–McIntosh–Yagi characterization of bounded $H^\infty$-calculus on $L^p$-spaces by the boundedness of certain Littlewood–Paley–Stein square functions. We also prove a similar result for Hilbert spaces. Then we extend the above to more general Banach spaces, where square functions have to be defined in terms of certain Rademacher averages. We focus on noncommutative $L^p$-spaces, where square functions are quite explicit, and we give applications, examples, and illustrations on such spaces, as well as on classical $L^p$. Keywords: Functional calculus, Ritt operators, $R$-boundedness, square functions. Citation: Le Merdy, Christian: $H^{\infty}$ functional calculus and square function estimates for Ritt operators. Rev. Mat. Iberoam. 30 (2014), 1149–1190. doi: 10.4171/RMI/811
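For intuition about the Ritt condition $n\|T^n - T^{n-1}\|\le C$ (this is my own toy example, not from the paper): for a multiplication operator by $\lambda\in[0,1)$ on $\ell^2$, $\|T^n - T^{n-1}\|$ reduces to $\sup_\lambda|\lambda^n - \lambda^{n-1}|$, and a quick numerical scan suggests the bound holds with $C=1$ for such diagonal operators:

```python
def ritt_sup(lams, nmax=500):
    # sup over n <= nmax and the sampled multipliers of n * |lam^n - lam^(n-1)|
    best = 0.0
    for lam in lams:
        for n in range(1, nmax + 1):
            best = max(best, n * abs(lam ** n - lam ** (n - 1)))
    return best

lams = [i / 100 for i in range(100)]  # sample of [0, 1)
assert ritt_sup(lams) <= 1.0 + 1e-12
```

This only scans a grid of multipliers and finitely many powers, so it is evidence rather than proof; the paper's setting (general power-bounded operators on $L^p$) is of course far broader than normal diagonal operators.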
We need $k$ to be an infinite field. If $k$ is finite, then since point sets are closed in the Zariski topology, we could find two proper closed sets whose union is all of $X$ (e.g., the unions of all but one point of $X$, taken for two different omitted points). Suppose $U, V$ are nonempty open sets with $U\cap V = \emptyset$. Then $X = (U\cap V)^{c} = U^{c} \cup V^{c}$. Since $U$, $V$ are open, $U^{c} = V(I)$, $V^{c} = V(J)$ for some ideals $I,J$ of $k[x_{1},...,x_{n}]$. We have then that $X = V(I) \cup V(J) = V(IJ)$. $I,J\neq (0)$ since $V(I),V(J)$ are proper subsets of $X$. So $IJ \neq (0)$ also, since $k[x_{1},...,x_{n}]$ is an integral domain. Thus there exists an $f\in IJ\setminus \{0\}$ such that $f$ vanishes at all points of $X$. Note that $f$ must in fact be nonconstant: a nonzero constant vanishes nowhere, and $f$ vanishes everywhere. This is now where we use the fact that $k$ is infinite. Since $f$ is nonconstant, there exists an $i\in\{1,...,n\}$ such that $f$ has a nonzero term with a nonzero power of $x_{i}$ in it. Suppose for convenience of notation that $i=1$. We can then write $f = g_{m}(x_{2},...,x_{n})x_{1}^{m}+ ...+ g_{0}(x_{2},...,x_{n})$ for some $g_{0},...,g_{m}\in k[x_{2},...,x_{n}]$, $g_{m}\neq 0$, and $m>0$. Since $g_{m}\neq 0$ and $k$ is infinite, there is (by induction on the number of variables) a point $(a_{2},...,a_{n})\in k^{n-1}$ with $g_{m}(a_{2},...,a_{n})\neq 0$. Then $f(x_{1},a_{2},...,a_{n})\in k[x_{1}]$ is nonzero, so can only have finitely many roots. This is a contradiction since $k$ is infinite, and $f$ must vanish at all points of $X=k^{n}$.
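The fact used at the end, that a nonzero polynomial cannot vanish at every point when there are infinitely many scalars to try, can be sanity-checked over $\mathbb{Q}$ by sampling integer points. The polynomials below are my own illustrative examples:

```python
from itertools import product

def vanishes_on_grid(f, nvars, bound):
    """Return True if f vanishes at every integer point with coordinates in [-bound, bound]."""
    return all(f(*pt) == 0 for pt in product(range(-bound, bound + 1), repeat=nvars))

# a nonzero polynomial: f(x, y) = y*x^2 + 3 -- some sampled point must witness f != 0
f = lambda x, y: y * x * x + 3
assert not vanishes_on_grid(f, 2, 2)

# the zero polynomial, by contrast, vanishes everywhere
zero = lambda x, y: 0
assert vanishes_on_grid(zero, 2, 2)
```

Over a finite field the analogous search can fail: $x^q - x$ vanishes at every point of $\mathbb{F}_q$ without being the zero polynomial, which is exactly why the proof needs $k$ infinite.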
If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+} $ satisfies the following condition $$\forall x, y \in \mathbb{Z}, f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4}$$ then is $f$ a constant function? You can prove this with probability. Let $(X_n)$ be the simple symmetric random walk on $\mathbb{Z}^2$. Since $f$ is harmonic, the process $M_n:=f(X_n)$ is a martingale. Because $f\geq 0$, the process $M_n$ is a non-negative martingale and so must converge almost surely by the Martingale Convergence Theorem. That is, we have $M_n\to M_\infty$ almost surely. But $(X_n)$ is irreducible and recurrent and so visits every state infinitely often. Thus (with probability one) $f(X_n)$ takes on every $f$ value infinitely often. Thus $f$ is a constant function, since the sequence $M_n=f(X_n)$ can't take on distinct values infinitely often and still converge. I can give a proof for the $d$-dimensional case: if $f\colon\mathbb{Z}^d\to\mathbb{R}^+$ is harmonic then it is constant. The following is based on a quick proof that I mentioned in the comments to the same (closed) question on MathOverflow, Liouville property in $\mathbb{Z}^d$. [Edit: I updated the proof, using a random walk, to simplify it] First, as $f(x)$ is equal to the average of the values of $f$ over the $2d$ nearest neighbours of $x$, we have the inequality $f(x)\ge(2d)^{-1}f(y)$ whenever $x,y$ are nearest neighbours.
If $\Vert x\Vert_1$ is the length of the shortest path from $x$ to 0 (the taxicab metric, or $L^1$ norm), this gives $f(x)\le(2d)^{\Vert x\Vert_1}f(0)$. Now let $X_n$ be a simple symmetric random walk in $\mathbb{Z}^d$ starting from the origin and, independently, let $T$ be a random variable with support the nonnegative integers such that $\mathbb{E}[(2d)^{2T}] < \infty$. Then, $X_T$ has support $\mathbb{Z}^d$ and $\mathbb{E}[f(X_T)]=f(0)$, $\mathbb{E}[f(X_T)^2]\le\mathbb{E}[(2d)^{2T}]f(0)^2$ for nonnegative harmonic $f$. By compactness, we can choose $f$ with $f(0)=1$ to maximize $\Vert f\Vert_2\equiv\mathbb{E}[f(X_T)^2]^{1/2}$. Writing $e_i$ for the unit vector in direction $i$, set $f_i^\pm(x)=f(x\pm e_i)/f(\pm e_i)$. Then, $f$ is equal to a convex combination of $f^+_i$ and $f^-_i$ over $i=1,\ldots,d$. Also, by construction, $\Vert f\Vert_2\ge\Vert f^\pm_i\Vert_2$. Comparing with the triangle inequality, we must have equality here, and $f$ is proportional to $f^\pm_i$. This means that there are constants $K_i > 0$ such that $f(x+e_i)=K_if(x)$. The average of $f$ on the $2d$ nearest neighbours of the origin is $$ \frac{1}{2d}\sum_{i=1}^d(K_i+1/K_i). $$ However, for positive $K$, $K+K^{-1}\ge2$ with equality iff $K=1$. So, $K_i=1$ and $f$ is constant. Now, if $g$ is a positive harmonic function, then $\tilde g(x)\equiv g(x)/g(0)$ satisfies $\mathbb{E}[\tilde g(X_T)]=1$. So, $$ {\rm Var}(\tilde g(X_T))=\mathbb{E}[\tilde g(X_T)^2]-1\le\mathbb{E}[f(X_T)^2]-1=0, $$ and $\tilde g$ is constant. Here is an elementary proof assuming we have bounds for $f$ on both sides. Define a random walk on $\mathbb{Z}^2$ which, at each step, stays put with probability $1/2$ and moves to each of the four neighboring vertices with probability $1/8$. Let $p_k(u,v)$ be the probability that the walk travels from $(m,n)$ to $(m+u, n+v)$ in $k$ steps.
Then, for any $(m, n)$ and $k$, we have $$f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} p_k(u,v) f(m+u,n+v).$$ So $$f(m+1, n) - f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} \left( p_k(u-1,v) - p_k(u,v) \right) f(m+u,n+v).$$ If we can show that $$\lim_{k \to \infty} \sum_{(u,v) \in \mathbb{Z}^2} \left| p_k(u-1,v) - p_k(u,v) \right| =0 \quad (\ast)$$ we deduce that $$f(m+1,n) = f(m,n)$$ and we win. Remark: More generally, we could stay put with probability $p$ and travel to each neighbor with probability $(1-p)/4$. If we choose $p$ too small, then $p_k(u,v)$ tends to be larger for $u+v$ even than for $u+v$ odd, rather than depending "smoothly" on $(u,v)$. I believe that $(\ast)$ is true for any $p>0$, but this elementary proof only works for $p > 1/3$. For concreteness, we'll stick to $p=1/2$. We study $p_k(u,v)$ using the generating function expression $$\left( \frac{x+x^{-1}+y+y^{-1}+4}{8} \right)^k = \sum_{u,v} p_k(u,v) x^u y^v.$$ Lemma: For fixed $v$, the quantity $p_k(u,v)$ increases as $u$ climbs from $-\infty$ up to $0$, and then decreases as $u$ continues climbing from $0$ to $\infty$. Proof: We see that $\sum_u p_k(u,v) x^u$ is a positive sum of Laurent polynomials of the form $(x/8+1/2+x^{-1}/8)^j$. So it suffices to prove the same thing for the coefficients of this Laurent polynomial. In other words, writing $(x^2+4x+1)^j = \sum_i e_i x^i$, we want to prove that $e_i$ is unimodal with largest value in the center. Now, $e_i$ is the $i$-th elementary symmetric function in $j$ copies of $2+\sqrt{3}$ and $j$ copies of $2-\sqrt{3}$. By Newton's inequalities, $e_i^2 \geq \frac{(i+1)(2j-i+1)}{i(2j-i)} e_{i-1} e_{i+1} > e_{i-1} e_{i+1}$, so $e_i$ is unimodal; by symmetry, the largest value is in the center. (The condition $p>1/3$ in the above remark is when the quadratic has real roots.)
$\square$ Corollary:$$\sum_u \left| p_k(u-1,v) - p_k(u,v) \right| = 2 p_k(0,v).$$ Proof: The above lemma tells us the signs of all the absolute values; the sum is\begin{multline*} \cdots + (p_k(-1,v) - p_{k}(-2,v)) + (p_k(0,v) - p_{k}(-1,v)) + \\ (p_k(0,v) - p_k(1,v)) + (p_k(1,v) - p_k(2,v)) + \cdots = 2 p_k(0,v). \qquad \square\end{multline*} So, in order to prove $(\ast)$, we must show that $\lim_{k \to \infty} \sum_v p_k(0,v)=0$. In other words, we must show that the coefficient of $x^0$ in $\left( \frac{x}{8}+\frac{3}{4} + \frac{x^{-1}}{8} \right)^k$ goes to $0$. There are probably a zillion ways to do this; here is a probabilistic one. We are rolling an $8$-sided die $k$ times, and we want the probability that the numbers of ones and twos are precisely equal. The probability that we roll fewer than $k/5$ ones and twos approaches $0$ by the law of large numbers (which can be proved elementarily by, for example, Chebyshev's inequality). If we roll $2r > k/5$ ones and twos, the probability that we roll exactly the same number of ones and twos is $$2^{-2r} \binom{2r}{r} < \frac{1}{\sqrt{\pi r}} < \frac{1}{\sqrt{\pi k/10}}$$ which approaches $0$ as $k \to \infty$. See here for elementary proofs of the bound on $\binom{2r}{r}$. I wrote this in two dimensions, but the same proof works in any number of dimensions.
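Both the corollary and the decay of $\sum_v p_k(0,v)$ can be checked numerically for the lazy walk (stay with probability $1/2$, move to each neighbour with probability $1/8$). This is my own sanity check, not part of the original argument:

```python
from collections import defaultdict

def step(p):
    """One step of the lazy walk: stay with probability 1/2, each neighbour 1/8."""
    q = defaultdict(float)
    for (u, v), m in p.items():
        q[(u, v)] += m / 2
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q[(u + du, v + dv)] += m / 8
    return q

def walk(k):
    p = {(0, 0): 1.0}
    for _ in range(k):
        p = step(p)
    return dict(p)

p = walk(20)
for v in range(-3, 4):
    # telescoping + unimodality: sum_u |p(u-1,v) - p(u,v)| = 2 p(0,v)
    lhs = sum(abs(p.get((u - 1, v), 0.0) - p.get((u, v), 0.0)) for u in range(-25, 26))
    assert abs(lhs - 2 * p.get((0, v), 0.0)) < 1e-12

def center_mass(k):
    # sum over v of p_k(0, v); the proof needs this to tend to 0
    return sum(m for (u, v), m in walk(k).items() if u == 0)

assert center_mass(40) < center_mass(10)
```

The decreasing `center_mass` values are consistent with the $O(1/\sqrt{k})$ decay that the die-rolling estimate proves.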
A cylindrical rod of length $L$ is insulated over its curved surface. The end of the rod at $x = 0$ is in contact with a heat bath at temperature $\Theta_0$ and the end of the rod at $x = L$ is in contact with a heat bath at temperature $\Theta_L$. After some time, a steady state is reached. The steady-state (time-independent) solution of the heat equation is $$\Theta(x)=\Theta_0+\frac{\Theta_L-\Theta_0}{L}x$$ The heat equation that describes the temperature profile of the rod is $$\frac{1}{D}\frac{\partial\Theta}{\partial t}=\frac{\partial^2 \Theta}{\partial x^2}$$ Where $D$ is a constant and $\Theta(x,t)$ is the temperature at position $x$ and time $t$. At time $t = 0$, the rod is disconnected from the heat baths. Assuming that no heat $Q$ subsequently leaves or enters the rod, write down the boundary/initial conditions: $$(a) \quad\text{at}\qquad x = 0,$$ $$(b) \quad\text{at}\qquad x = L,$$ $$\qquad\qquad\qquad\qquad(c) \quad\text{at}\qquad t = 0 \quad \text{for}\qquad 0 \le x \le L$$ (Hint: recall Fourier’s law of heat flow $\frac{1}{A}\frac{\partial Q}{\partial t}=-k\frac{\partial \Theta}{\partial x}$ , where $k$ is the conductivity.) The answers given to parts $(a)$, $(b)$ & $(c)$ (respectively) are The boundary condition at $x = 0$ is that no heat is flowing in or out of the end of the rod. This implies that the temperature gradient at $x = 0$ is zero: $$\frac{\partial\Theta(x,t)}{\partial x}\Bigg{\rvert}_{x=0}= 0$$ The boundary condition at $x = L$ is that no heat is flowing in or out of the end of the rod. This implies that the temperature gradient at $x = L$ is zero: $$\frac{\partial\Theta(x,t)}{\partial x}\Bigg{\rvert}_{x=L}= 0$$ The initial condition at $t = 0$ for $0 \le x \le L$ is that the initial temperature distribution is the steady-state temperature distribution: $$\Theta(x)=\Theta_0+\frac{\Theta_L-\Theta_0}{L}x$$ I'm struggling to find physical intuition for these boundary/initial conditions.
I have learnt from reading the comment below this question that steady state in this context means there is as much heat flowing out of the $\Theta_L$ heat bath as there is flowing into the $\Theta_0$ heat bath, and I acknowledge that this is definitely not the same as thermal equilibrium. However, if the temperature gradient at $t=0$ is zero at $x=0,L$ after the rod is disconnected, how can there be any transfer of heat whatsoever (even for $t \gt 0$)? Put another way, I know that there will be no heat leaving either end of the rod (as it is insulated), and there will be no heat entering either end of the rod (as the heat baths are no longer present). But there must be a transfer of heat from $x=0$ and/or $x=L$ along the rod (in the direction towards the rod's centre). If this were not the case, how would the temperature profile ever evolve? And it does evolve, as the final answer for $\Theta(x,t)$ (working omitted) is $$\Theta(x,t)=\frac{\Theta_0+\Theta_L}{2}+ \frac{2 \left(\Theta_L-\Theta_0\right)}{\pi^2} \sum_{n=1}^{\infty}\frac{\left[(-1)^n-1\right]}{n^2}\cos\left(\frac{n \pi x}{L}\right)\exp\left(-\frac{n^2 \pi^2 D}{L^2}t \right)$$ So, put simply, I don't understand physically why at $t=0$ $$\frac{\partial\Theta(x,t)}{\partial x}\Bigg{\rvert}_{x=0/L}= 0$$ as by my logic there must be heat flow along the rod (not out of or into the rod) even at $t=0$.
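The quoted series can be checked numerically (a sketch with illustrative values $L=D=1$, $\Theta_0=0$, $\Theta_L=1$, chosen by me): at $t=0$ a long partial sum reproduces the linear steady-state profile, and for large $t$ the rod settles at the mean temperature $(\Theta_0+\Theta_L)/2$:

```python
import math

def theta(x, t, L=1.0, D=1.0, T0=0.0, TL=1.0, nmax=2000):
    """Partial sum of the quoted Fourier-series solution."""
    s = (T0 + TL) / 2
    for n in range(1, nmax + 1):
        coef = 2 * (TL - T0) / math.pi**2 * ((-1)**n - 1) / n**2
        s += coef * math.cos(n * math.pi * x / L) * math.exp(-n * n * math.pi**2 * D * t / L**2)
    return s

# t = 0: the series reproduces the steady-state profile T0 + (TL - T0) x / L
for x in (0.1, 0.3, 0.5, 0.8):
    assert abs(theta(x, 0.0) - x) < 1e-3

# large t: the whole rod equilibrates to the mean temperature
assert abs(theta(0.2, 10.0) - 0.5) < 1e-9
```

Note also that each cosine mode has zero slope at $x=0$ and $x=L$, so the series satisfies the insulating Neumann conditions at every $t>0$ even while heat redistributes along the rod's interior.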
I wanted to better understand DFAs. I wanted to build upon a previous question: Creating a DFA that only accepts strings whose number of a's is a multiple of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts strings whose number of a's is a multiple of 3 but does NOT have the sub... Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th... I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Consider a non-UFD that only has 2 units ($-1,1$) and the min difference between 2 elements is $1$. Also there are only a finite amount of elements for any given fixed norm. (Maybe that follows from the other 2 conditions?) I wonder about counting the irreducible elements bounded by a lower... How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both. I want to construct a nfa from this, but I'm struggling with the regex part
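For the first question's starting point, the machine from the linked question that accepts strings whose number of a's is a multiple of 3 (before any extra substring restriction, which the excerpt truncates), here is a minimal sketch; the state names are my own:

```python
def count_a_mod3_accepts(w):
    """DFA with states 0, 1, 2 tracking (number of a's seen) mod 3.

    State 0 is both the start state and the only accepting state;
    symbols other than 'a' leave the state unchanged.
    """
    state = 0
    for ch in w:
        if ch == 'a':
            state = (state + 1) % 3
    return state == 0

assert count_a_mod3_accepts("")         # zero a's: 0 is a multiple of 3
assert count_a_mod3_accepts("aaabaaa")  # six a's
assert not count_a_mod3_accepts("aab")  # two a's
```

To additionally forbid a substring, the usual move is to take the product of this machine with the substring-avoiding DFA, so each state is a pair (count mod 3, progress toward the forbidden substring).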
I recently came across this in a textbook (NCERT Class 12, chapter: Wave Optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $$ie(P_A+P_B)^{\mu}$$ External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv... I am currently studying the history of the discovery of electricity, and I am searching for each scientist on Google, but I am not getting good answers about some of them. Can you suggest a good app for studying the history of these scientists? I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under a continuity assumption. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate.
If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... 
Kurzgesagt, Optimistic Nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? 
I just want to have an overview as I might have the possibility of doing a master's thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? 
ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. 
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... 
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad It's enough to show everything works for generating cofaces and codegeneracies the codegeneracies are free, the 0 and nth cofaces are free all of those can be done treating frak{C} as a black box the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase one's chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to one's question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers. You are asking about topics far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. 
(Other than the possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking questions here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags. I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
The thing with this question is that there is a question that seems to prove the opposite claim: Prove the map has a fixed point - someone look into this. How should one go about dealing with this question? Suppose that $f: M \to M$ were onto. Then for every $x,y \in M$ with $x \neq y$, there exist $x',y' \in M$ s.t. $f(x')=x$ and $f(y')=y$. Then $$d(x,y)=d(f(x'),f(y'))\leq c\, d(x',y')<d(x',y').$$ Let $B=\max_{(x,y) \in M^2} d(x,y)$; this exists since $M$ is compact and $d:M^2 \to \mathbb{R}$ is continuous. But, by the above fact, for any $x,y \in M$ there exist $x',y'$ s.t. $$d(x,y)<d(x',y'),$$ which contradicts the existence of a maximizer $B$. This probably works. Define a distance function $r:M\times M\rightarrow \mathbb{R}$ such that $$ r(x,y) = d(x,y). $$ Note that $r(\cdot)$ is a continuous function, whose proof can be seen here: Is the distance function in a metric space (uniformly) continuous? Thus, since $r$ is continuous on the compact set $M\times M$, it attains its maximum, say at $(x^\ast, y^\ast)$. If $f$ were onto, there would be $a,b\in M$ with $f(a)=x^\ast$ and $f(b)=y^\ast$, and then $$ d(x^\ast,y^\ast) = d(f(a),f(b)) < d(a,b) \leq d(x^\ast, y^\ast), $$ resulting in a contradiction. HINT: Let $p$ be the fixed point guaranteed by the earlier question. Show that there is an $x\in X$ that maximizes $d(p,x)$. Then show that $x\notin f[X]$.
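The contraction argument above can be illustrated numerically. Here is a minimal sketch of my own (not from the question), using $f(x)=\cos(x)/2$ on the compact space $[0,1]$, which satisfies $|f(x)-f(y)|\le\frac12|x-y|$:

```python
import math

# A strict contraction on the compact space [0, 1]:
# f(x) = cos(x)/2 satisfies |f(x) - f(y)| <= (1/2)|x - y|.
def f(x):
    return math.cos(x) / 2.0

def iterate_to_fixed_point(g, x0, steps=100):
    """Iterate g starting from x0; for a contraction this converges
    to the unique fixed point regardless of the starting point."""
    x = x0
    for _ in range(steps):
        x = g(x)
    return x
```

Every starting point iterates to the same fixed point, and since $f$ is decreasing on $[0,1]$, its image is $[\cos(1)/2,\,1/2]$, which misses $0$ — so $f$ is indeed not onto, in line with the answers above.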
Trigonometric Form of Ceva's Theorem Ceva's theorem provides a unifying concept for several apparently unrelated results. The theorem states that, in \(\Delta ABC,\) three Cevians \(AD,\) \(BE,\) and \(CF\) are concurrent iff the following identity holds: \(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1. \) The theorem has a lesser-known trigonometric form \(\displaystyle \frac{\mbox{sin}(\angle ABE)}{\mbox{sin}(\angle CBE)} \cdot \frac{\mbox{sin}(\angle BCF)}{\mbox{sin}(\angle ACF)} \cdot \frac{\mbox{sin}(\angle CAD)}{\mbox{sin}(\angle BAD)} = 1, \) or \( \mbox{sin}(\angle ABE) \cdot \mbox{sin}(\angle BCF) \cdot \mbox{sin}(\angle CAD) = \mbox{sin}(\angle CBE) \cdot \mbox{sin}(\angle ACF) \cdot \mbox{sin}(\angle BAD). \) The latter may serve as a source of a great many trigonometric identities - some obvious, some much less so. Some unexpected identities can be obtained by placing the six points \(A,\) \(B,\) \(C,\) \(D,\) \(E,\) \(F\) at the vertices of a regular polygon. If the Cevians are extended to intersect the circumscribed circle, they become three diagonals (or sides) of the regular polygon. We are of course concerned with the case where a regular polygon has three concurrent diagonals. For example, the original configuration in the applet below suggests the following identity: \( \mbox{sin}(20^{\circ})\cdot \mbox{sin}(50^{\circ})\cdot \mbox{sin}(70^{\circ}) = \mbox{sin}(30^{\circ})\cdot \mbox{sin}(30^{\circ})\cdot \mbox{sin}(80^{\circ}), \) which can be easily verified. Play with the applet to find more such identities. The number of sides of the polygon can be modified by clicking on the number - originally \(18\) - in the lower left corner of the applet. The applet also lends itself to discovery of problems of a different kind. Return to the \(18-\mbox{gon}.\) You may observe that the configuration now is reminiscent of a very popular problem. 
Namely, in an isosceles \(\Delta ABC,\) angle \(B\) equals \(20^{\circ}.\) (The same triangle appears in a different configuration inside the regular \(18-\mbox{gon}.\)) Two lines \(AD\) and \(CE\) are drawn such that \(\angle CAD = 60^{\circ},\) whereas \(\angle ACE = 50^{\circ}.\) Find \(\angle ADE.\) (Check a more extensive discussion of this problem and a relevant solution.) From the diagram it is immediate that the answer is \(30^{\circ}.\) In a similar vein consider another problem. In an isosceles \(\Delta ABC,\) \(\angle ABC = 80^{\circ}.\) A point \(M\) is selected so that \(\angle MAC = 30^{\circ}\) and \(\angle MCA = 10^{\circ}.\) Find \(\angle BMC.\) Finally, see what you can make of the diagram below: Copyright © 1996-2018 Alexander Bogomolny Proof of Ceva's Trigonometric Identity \(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1 \) is equivalent to \(\displaystyle \frac{\mbox{Area}(\Delta AFK)}{\mbox{Area}(\Delta BFK)} \cdot \frac{\mbox{Area}(\Delta BKD)}{\mbox{Area}(\Delta CKD)} \cdot \frac{\mbox{Area}(\Delta CKE)}{\mbox{Area}(\Delta AKE)} = 1 \) This is because triangles \(AFK\) and \(BFK\) have the same altitude from the vertex \(K,\) and similarly for the other two pairs of triangles. On the other hand, \(\mbox{Area}(\Delta AFK) = AF\cdot AK\cdot \mbox{sin}(\angle BAD)/2,\) etc., which leads to \(\displaystyle \frac{\mbox{sin}(\angle ABE)}{\mbox{sin}(\angle CBE)} \cdot \frac{\mbox{sin}(\angle BCF)}{\mbox{sin}(\angle ACF)} \cdot \frac{\mbox{sin}(\angle CAD)}{\mbox{sin}(\angle BAD)} = \frac{FB}{AF} \cdot \frac{DC}{BD} \cdot \frac{EA}{CE} = 1, \) and the theorem follows. 
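The identity suggested by the \(18-\mbox{gon}\) configuration above is easy to check numerically; a small sketch (the helper name `s` is mine):

```python
import math

def s(deg):
    """sin of an angle given in degrees."""
    return math.sin(math.radians(deg))

# The identity suggested by the original applet configuration:
# sin 20 * sin 50 * sin 70 = sin 30 * sin 30 * sin 80
lhs = s(20) * s(50) * s(70)
rhs = s(30) * s(30) * s(80)
```

In fact the identity is exact: \(\sin 20^{\circ}\sin 70^{\circ} = \tfrac12\sin 40^{\circ}\) and \(\sin 40^{\circ}\sin 50^{\circ} = \tfrac12\sin 80^{\circ}\), so both sides equal \(\tfrac14\sin 80^{\circ}\).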
Problem: In an isosceles \(\Delta ABC,\) \(\angle ABC = 80^{\circ}.\) A point \(M\) is selected so that \(\angle MAC = 30^{\circ}\) and \(\angle MCA = 10^{\circ}.\) Find \(\angle BMC.\) Now, it's obvious that \(\angle BMC = 70^{\circ}.\) The following solution was found by S. T. Thompson, Tacoma, Washington (see Honsberger, pp. 16-18). With the change of notations, in \(\Delta A_{14}OA_{15},\) two lines \(A_{15}X\) and \(A_{14}Y\) are drawn such that \(\angle A_{14}A_{15}X = 60^{\circ}\) and \(\angle A_{15}A_{14}Y = 50^{\circ}.\) The question in the above configuration is to determine \(\angle A_{15}XY.\) Draw a circle with center \(O\) and radius \(OA_{14}.\) The chord \(A_{14}A_{15}\) subtends a \(20^{\circ}\) arc, so that \(A_{14}A_{15}\) is a side of the regular \(18-\mbox{gon}\) inscribed into that circle. I numbered the vertices of that \(18-\mbox{gon}\) as shown in the diagram above. Two observations are important for the proof: \(A_{14}A_{2}\) passes through \(Y.\) \(A_{10}A_{16}\) passes through both \(X\) and \(Y.\) Indeed, \(A_{10}A_{16} = A_{14}A_{2},\) as chords subtending equal arcs. Furthermore, they are symmetric with respect to radius \(OA_{15}.\) Therefore, they intersect on that radius. In the isosceles triangle \(OA_{14}A_{2},\) the angle at \(O\) is obviously equal to \(120^{\circ}.\) Therefore, \(\angle OA_{14}A_{2} = 30^{\circ}.\) We see that \(A_{14}A_{2}\) passes through \(Y.\) Further, \(A_{13}\) is the middle of the arc \(A_{10}A_{16}.\) Therefore, \(A_{10}A_{16} \perp OA_{13}.\) Let's for the moment denote the point of intersection of \(A_{10}A_{16}\) with \(OA_{14}\) as \(X'.\) Since every point on \(A_{10}A_{16}\) is equidistant from \(O\) and \(A_{13},\) so is \(X':\) \(OX' = X'A_{13}.\) In the isosceles triangle \(OX'A_{13},\) \(\angle OA_{13}X' = \angle A_{13}OX' = 20^{\circ}.\) Therefore, \(X' = X,\) which proves the second of the two observations. 
Now, as we've seen, in the isosceles \(\Delta OXA_{13},\) \(A_{10}A_{16}\) is the height to side \(OA_{13}.\) It then bisects \(\angle OXA_{13},\) which implies \(\angle OXA_{10} = 70^{\circ}.\) But then also \(\angle A_{14}XA_{16} = 70^{\circ}.\) On the other hand, \(\angle A_{14}XA_{15} = 180^{\circ} - \angle OA_{14}A_{15} - \angle XA_{15}A_{14} = 180^{\circ} - 80^{\circ} - 60^{\circ} = 40^{\circ}.\) Finally, \(\angle A_{15}XY = \angle A_{14}XA_{16} - \angle A_{14}XA_{15} = 70^{\circ} - 40^{\circ} = 30^{\circ}.\) References R. Honsberger, Mathematical Gems, II, MAA, 1976
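The claim \(\angle BMC = 70^{\circ}\) in the isosceles-triangle problem above can also be verified with a quick coordinate computation. This is a sketch of my own (the helpers `intersect` and `angle_at` are not from the text):

```python
import math

def ray_dir(deg):
    """Unit direction vector for an angle given in degrees from the x-axis."""
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

def intersect(p, ang_p, q, ang_q):
    """Intersection of the ray from p at angle ang_p with the ray from q
    at angle ang_q (angles in degrees)."""
    dp, dq = ray_dir(ang_p), ray_dir(ang_q)
    # Solve p + t*dp = q + s*dq for t (Cramer's rule on the matrix [dp, -dq]).
    det = dp[0] * (-dq[1]) - dp[1] * (-dq[0])
    t = ((q[0] - p[0]) * (-dq[1]) - (q[1] - p[1]) * (-dq[0])) / det
    return (p[0] + t * dp[0], p[1] + t * dp[1])

def angle_at(v, a, b):
    """Angle in degrees at vertex v between the rays v->a and v->b."""
    ax, ay = a[0] - v[0], a[1] - v[1]
    bx, by = b[0] - v[0], b[1] - v[1]
    cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cosang))

# Isosceles triangle with apex angle B = 80 deg, so base angles A = C = 50 deg.
A, C = (0.0, 0.0), (1.0, 0.0)
B = intersect(A, 50, C, 180 - 50)
# M is determined by angle MAC = 30 deg and angle MCA = 10 deg.
M = intersect(A, 30, C, 180 - 10)
```

Computing `angle_at(M, B, C)` then confirms the synthetic answer of \(70^{\circ}\) to floating-point accuracy.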
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely: different set theories extending ZF are never bi-interpretable! For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss. The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information. To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations. 
(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year. We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006 (pdf, see pp. 52–55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM, in his 2016 article: A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti: A Tribute to Albert Visser, Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.), pp. 99–110, College Publications, London, 2016. ISBN 978-1848902046. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.) Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero. Two theories are thus mutually interpretable, if each of them is interpretable in the other. 
Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other. Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general. By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory. For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order. 
For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackerman encoding of hereditary finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the set $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation. We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF. Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF. Proof. I mean “distinct” here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF. 
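The Ackermann encoding described above ($n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$) is easy to make concrete; here is a short sketch (the function names are mine):

```python
def ack_elem(n, m):
    """n E m iff the n-th binary digit of m is 1 (Ackermann encoding)."""
    return (m >> n) & 1 == 1

def decode(m):
    """The hereditarily finite set coded by m, built as nested frozensets."""
    return frozenset(decode(n) for n in range(m.bit_length()) if ack_elem(n, m))
```

For example, `decode(0)` is the empty set, `decode(1)` is {∅}, and `decode(3)` is {∅, {∅}}, since binary digits 0 and 1 of 3 are both set; iterating `decode` over all naturals enumerates exactly the hereditarily finite sets.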
That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not. I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF. If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. 
This is impossible unless all three models are identical, since, writing $\pi$ for the isomorphism of $M$ with $\bar M$, a simple $\in^M$-induction shows that $\pi(y)=y$ for all $y$: if this is true for the elements of $y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$ and so $N$ and $M$ satisfy the same theory, contrary to assumption. If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski’s theorem on the non-definability of truth. The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$ It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals. A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
Let $R$ be a principal ideal domain but not a field, and let $M$ be an $R$-module. Show the following: (i) Let $p \in R$ be an irreducible element and $r \in R \setminus \{0\}$. Then $(R/ \langle r \rangle)[p] \cong R/ \langle p^n \rangle$, where $n=\max\{k \in \mathbb N_0 : p^k\mid r\}$. (ii) $M$ is simple iff $\exists \space p \in R$ irreducible such that $M \cong R/ \langle p \rangle$. I am pretty stuck on both items. In (i) I've tried to prove it by induction on $\mathbb N_0$, but I could only prove it for the base case $n=0$: If $n=0$, then $R/ \langle p^n \rangle=R/ \langle 1 \rangle=0$. Now, $$(R/ \langle r \rangle)[p]=\{\overline{a} \in R/ \langle r \rangle : p^m\overline{a}=0 \space \text{for some } m \in \mathbb N\},$$ which corresponds to the submodule $$S=\{a \in R : p^ma \in \langle r \rangle \space \text{for some } m \in \mathbb N \}$$ of $R$ containing $\langle r \rangle$, so I would like to conclude $S=\langle r \rangle$. The inclusion $\langle r \rangle \subset S$ is immediate. Now take $s \in S$, so that $p^ms=rq$. Using the fact that $R$ is a UFD and that $p \nmid r$ (since $n=0$), one can deduce that $p^m \mid q$, say $q=p^mt$; then $p^ms=rp^mt$, from which it follows that $s=rt \in \langle r \rangle$. I couldn't prove the induction step; maybe induction is not the best way to attack this problem. As for (ii), I could show that $M \cong R/ \langle p \rangle \implies M$ is simple: since $p$ is irreducible, it is also prime; as we are in a PID, this implies $\langle p \rangle$ is maximal, so $R/ \langle p \rangle$ is simple, and it immediately follows that $M$ is simple. I would appreciate suggestions to prove (i) and the other implication in (ii). Thanks in advance.
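As a sanity check of (i), here is a concrete instance with $R=\mathbb Z$ (my own example, not part of the exercise):

```latex
% Take R = Z, r = 12, p = 2, so n = max{k : 2^k | 12} = 2.
% The 2-power-torsion of Z/12Z:
\[
(\mathbb Z/12\mathbb Z)[2]
  = \{\overline a \in \mathbb Z/12\mathbb Z : 2^m\overline a = 0
      \text{ for some } m\}
  = \{\overline 0,\ \overline 3,\ \overline 6,\ \overline 9\}
  = \langle \overline 3\rangle
  \cong \mathbb Z/4\mathbb Z
  = R/\langle p^n\rangle.
\]
% This matches the CRT splitting Z/12Z = Z/4Z x Z/3Z,
% in which taking [2]-torsion kills the Z/3Z factor.
```

This example also suggests one possible non-inductive strategy for (i): write $r=p^n r'$ with $p\nmid r'$ and use the splitting $R/\langle r\rangle \cong R/\langle p^n\rangle \times R/\langle r'\rangle$, then argue that the $p$-power-torsion of the second factor vanishes, as in the base case.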
The roll and pitch angles that you calculate using the accelerometer measurements will only be correct if (1) the IMU is non-accelerating (e.g., stationary), and (2) the accelerometer measurements are perfect. Thus, they can only be used to initialize the tilt (roll and pitch) of the IMU, not to calculate roll and pitch during acceleration. An external measurement of yaw angle is required to initialize the yaw angle. See these answers for some background. Say the accelerometer measurements are $f_x$, $f_y$, and $f_z$; the gyro measurements are $\omega_x$, $\omega_y$, and $\omega_z$; and the magnetometer measurements are $b_x$, $b_y$, and $b_z$. The roll angle ($\phi$) and pitch ($\theta$) angle can be initialized if the IMU is not accelerating using \begin{eqnarray}\phi_0 &=& \tan^{-1}\left(f_y/f_z\right) \\\theta_0 &=& \tan^{-1}\left(-f_x/\sqrt{f_y^2+f_z^2}\right)\end{eqnarray}The yaw angle ($\psi$) can be initialized using the magnetometer measurements. Given the roll and pitch angles, the magnetic heading ($\psi_m$) can be calculated from $b_x$, $b_y$, and $b_z$. Given the magnetic declination at the system location, the true heading (or initial yaw angle, $\psi_0$) can be calculated. Once the IMU is initialized in an unaccelerated state, the gyro measurements can be used to calculate the rates of change of the Euler angles while the IMU is moving:\begin{eqnarray}\dot\phi &=& \omega_x +\tan\theta\sin\phi\,\omega_y +\tan\theta\cos\phi\,\omega_z \\ \dot\theta &=& \cos\phi\,\omega_y -\sin\phi\,\omega_z \\ \dot\psi &=& \sec\theta\sin\phi\,\omega_y +\sec\theta\cos\phi\,\omega_z \end{eqnarray}The rates of change of the Euler angles are then numerically integrated to propagate the Euler angles. The coordinate transformation matrix at each instant of time can then be obtained from the Euler angles. This will work as long as your IMU never pitches up or down to $\pm90^\circ$; if it can, it is better to calculate and propagate quaternions instead of Euler angles. 
(Euler angles can always be calculated from the quaternions.) Alas, the gyros are not perfect. If the gyros have bias errors, these bias errors will also be integrated over time, causing the Euler angles to "drift". For this reason, an extended Kalman filter is often used to calculate the orientation of the IMU, aided by other measurements (magnetometer, accelerometer, and a GPS, for example). But that's another topic :)
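The initialization and propagation described above can be sketched in Python (an illustrative sketch, not code from the original answer: the function names, the simple forward-Euler integrator, the step size, and the sign conventions are all my assumptions):

```python
import numpy as np

def init_tilt(f):
    """Initialize roll (phi) and pitch (theta) from accelerometer specific
    force f = (fx, fy, fz); valid only while the IMU is not accelerating."""
    fx, fy, fz = f
    phi = np.arctan2(fy, fz)
    theta = np.arctan2(-fx, np.hypot(fy, fz))
    return phi, theta

def euler_rates(phi, theta, w):
    """Euler-angle rates from body angular rates w = (wx, wy, wz)."""
    wx, wy, wz = w
    t, s, c = np.tan(theta), np.sin(phi), np.cos(phi)
    dphi = wx + t * s * wy + t * c * wz
    dtheta = c * wy - s * wz
    dpsi = (s * wy + c * wz) / np.cos(theta)  # sec(theta) = 1/cos(theta)
    return dphi, dtheta, dpsi

# Usage: a stationary, level IMU, then 1 s of pure yaw at 0.1 rad/s,
# propagated by forward-Euler integration with dt = 0.01 s.
phi, theta = init_tilt((0.0, 0.0, 9.81))
psi, dt = 0.0, 0.01
for _ in range(100):
    dphi, dtheta, dpsi = euler_rates(phi, theta, (0.0, 0.0, 0.1))
    phi, theta, psi = phi + dphi * dt, theta + dtheta * dt, psi + dpsi * dt
# phi and theta stay at 0; psi ends near 0.1 rad
```

In practice one would use a better integrator and, as the answer says, switch to quaternions near $\theta=\pm90^\circ$, where $\tan\theta$ and $\sec\theta$ blow up.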
Q. The surface of a certain metal is first illuminated with light of wavelength $\lambda_1 =350 \; nm$ and then by light of wavelength $\lambda_2=540 \; nm$. It is found that the maximum speeds of the photoelectrons in the two cases differ by a factor of 2. The work function of the metal (in eV) is close to: (Energy of photon = $\frac{1240}{\lambda (in \; nm)} eV$) Solution: $\frac{hc}{\lambda_{1}} =\phi + \frac{1}{2} m\left(2v\right)^{2} $ $\frac{hc}{\lambda _{2}} =\phi + \frac{1}{2} mv^{2} $ $\Rightarrow \frac{\frac{hc}{\lambda _{1}}-\phi}{\frac{hc}{\lambda _{2}} -\phi} = 4 $ $\Rightarrow \frac{hc}{\lambda _{1}} - \phi = \frac{4hc}{\lambda _{2}} -4 \phi $ $ \Rightarrow \frac{4hc}{\lambda _{2}} - \frac{hc}{\lambda _{1}} = 3\phi $ $ \Rightarrow \phi = \frac{1}{3} hc \left(\frac{4}{\lambda _{2}} -\frac{1}{\lambda _{1}}\right) $ $= \frac{1}{3} \times1240 \left(\frac{4 \times350 -540}{350\times540} \right) $ $\approx 1.88 \; eV$, which is close to $1.8 \; eV$. Question from JEE Main 2019 Physics.
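The arithmetic in the solution can be checked numerically (illustrative Python, not part of the original solution):

```python
# Work function from the two photoelectric equations:
# hc/lam1 = phi + (1/2)m(2v)^2 and hc/lam2 = phi + (1/2)mv^2;
# eliminating v gives phi = (1/3) * (4*hc/lam2 - hc/lam1).
lam1, lam2 = 350.0, 540.0               # wavelengths in nm
E1, E2 = 1240.0 / lam1, 1240.0 / lam2   # photon energies in eV
phi = (4.0 * E2 - E1) / 3.0
print(phi)  # about 1.88 eV, i.e. close to 1.8 eV as the solution states
```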
I would like the norm symbol \left\| \right\|_p to be linked to its definition, but I don't want its interior (x for example) to be linked. (And of course I want them to adjust their size automatically - that's why I use \left and \right.) Something like \hyperref[def:norm1]{\left\| {\suspendHyperref #1} \continueHyperref \right\|_1 } would be great! (Firefox's built-in PDF reader highlights links on mouseover - so it would be awesome if the right side of the norm symbol \right\|_p also highlights when I move the mouse over the left side of the norm symbol \left\| in Firefox, but this is not my number 1 priority.) At least something like \hyperref[def:norm2]{\left\| } #1 \hyperref[def:norm2]{\right\|_2 } would be quite good, but this doesn't work because \left and \right form a group (as described in this post). The following MWE produces a result that looks like I want it to look, but doesn't link correctly. Especially in eq. (1) the inner l_2-norm in the nested norms isn't linked correctly (because everything inside the outer l_1-norm is linked to the l_1-norm).
\documentclass[a4paper,11pt]{report}
\usepackage[colorlinks=true]{hyperref}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{definition}[theorem]{Definition}
\newcommand{\normOne}[1]{\hyperref[def:norm1]{\left\| {\normalcolor #1} \right\|_1 }}
\newcommand{\normTwo}[1]{\hyperref[def:norm2]{\left\| {\normalcolor #1} \right\|_2 }}
\begin{document}
\begin{definition}[$\ell_1-Norm$]\label{def:norm1}
\[ \normOne{x} := \sum_{i=1}^{n} \left| x_i \right| \]
\end{definition}
\begin{definition}[$\ell_2-Norm$]\label{def:norm2}
\[ \normTwo{x} := \left( \sum_{i=1}^{n} \left| x_i \right|^2 \right)^{\frac{1}{2}} \]
\end{definition}
many pages later...
\begin{equation}
\normOne{ \left( \normTwo{y} - 3 \right) \hat{A}x}
\end{equation}
\end{document}
Maybe the idea of one of the answers to this question can be adapted to my problem, but unfortunately I don't understand all of those solutions well enough to know how to adapt them properly.
Last year I made a post about the universal program, a Turing machine program $p$ that can in principle compute any desired function, if it is only run inside a suitable model of set theory or arithmetic. Specifically, there is a program $p$, such that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$, there is a model $M\models\text{PA}$ — or of $\text{ZFC}$, whatever theory you like — inside of which program $p$ on input $n$ gives output $f(n)$. This theorem is related to a very interesting theorem of W. Hugh Woodin’s, which says that there is a program $e$ such that $\newcommand\PA{\text{PA}}\PA$ proves $e$ accepts only finitely many inputs, but such that for any finite set $A\subset\N$, there is a model of $\PA$ inside of which program $e$ accepts exactly the elements of $A$. Actually, Woodin’s theorem is a bit stronger than this in a way that I shall explain. Victoria Gitman gave a very nice talk today on both of these theorems at the special session on Computability theory: Pushing the Boundaries at the AMS sectional meeting here in New York, which happens to be meeting right here in my east midtown neighborhood, a few blocks from my home. What I realized this morning, while walking over to Vika’s talk, is that there is a very simple proof of the version of Woodin’s theorem stated above. The idea is closely related to an idea of Vadim Kosoy mentioned in my post last year. In hindsight, I see now that this idea is also essentially present in Woodin’s proof of his theorem, and indeed, I find it probable that Woodin had actually begun with this idea and then modified it in order to get the stronger version of his result that I shall discuss below. But in the meantime, let me present the simple argument, since I find it to be very clear and the result still very surprising. Theorem. There is a Turing machine program $e$, such that $\PA$ proves that $e$ accepts only finitely many inputs. 
For any particular finite set $A\subset\N$, there is a model $M\models\PA$ such that inside $M$, the program $e$ accepts all and only the elements of $A$. Indeed, for any set $A\subset\N$, including infinite sets, there is a model $M\models\PA$ such that inside $M$, program $e$ accepts $n$ if and only if $n\in A$. Proof. The program $e$ simply performs the following task: on any input $n$, search for a proof from $\PA$ of a statement of the form “program $e$ does not accept exactly the elements of $\{n_1,n_2,\ldots,n_k\}$.” Accept nothing until such a proof is found. For the first such proof that is found, accept $n$ if and only if $n$ is one of those $n_i$’s. In short, the program $e$ searches for a proof that $e$ doesn’t accept exactly a certain finite set, and when such a proof is found, it accepts exactly the elements of this set anyway. Clearly, $\PA$ proves that program $e$ accepts only a finite set, since either no such proof is ever found, in which case $e$ accepts nothing (and the empty set is finite), or else such a proof is found, in which case $e$ accepts only that particular finite set. So $\PA$ proves that $e$ accepts only finitely many inputs. But meanwhile, assuming $\PA$ is consistent, then you cannot refute the assertion that program $e$ accepts exactly the elements of some particular finite set $A$, since if you could prove that from $\PA$, then program $e$ actually would accept exactly that set (for the shortest such proof), in which case this would also be provable, contradicting the consistency of $\PA$. Since you cannot refute any particular finite set as the accepting set for $e$, it follows that it is consistent with $\PA$ that $e$ accepts any particular finite set $A$ that you like. So there is a model of $\PA$ in which $e$ accepts exactly the elements of $A$. This establishes statement (2). Statement (3) now follows by a simple compactness argument. 
Namely, for any $A\subset\N$, let $T$ be the theory of $\PA$ together with the assertions that program $e$ accepts $n$, for any particular $n\in A$, and the assertions that program $e$ does not accept $n$, for $n\notin A$. Any finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. Any model of this theory realizes statement (3). QED One uses the Kleene recursion theorem to show the existence of the program $e$, which makes reference to $e$ in the description of what it does. Although this may look circular, it is a standard technique to use the recursion theorem to eliminate the circularity. This theorem immediately implies the classical result of Mostowski and Kripke that there is an independent family of $\Pi^0_1$ assertions, since the assertions $n\notin W_e$ are exactly such a family. The theorem also implies a strengthening of the universal program theorem that I proved last year. Indeed, the two theorems can be realized with the same program! Theorem. There is a Turing machine program $e$ with the following properties: $\PA$ proves that $e$ computes a finite function; For any particular finite partial function $f$ on $\N$, there is a model $M\models\PA$ inside of which program $e$ computes exactly $f$. For any partial function $f:\N\to\N$, finite or infinite, there is a model $M\models\PA$ inside of which program $e$ on input $n$ computes exactly $f(n)$, meaning that $e$ halts on $n$ if and only if $f(n)\downarrow$ and in this case $\varphi_e(n)=f(n)$. Proof. The proof of statements (1) and (2) is just as in the earlier theorem. It is clear that $e$ computes a finite function, since either it computes the empty function, if no proof is found, or else it computes the finite function mentioned in the proof. And you cannot refute any particular finite function for $e$, since if you could, it would have exactly that behavior anyway, contradicting $\text{Con}(\PA)$. So statement (2) holds. 
But meanwhile, we can get statement (3) by a simple compactness argument. Namely, fix $f$ and let $T$ be the theory asserting $\PA$ plus all the assertions either that $\varphi_e(n)\uparrow$, if $n$ is not in the domain of $f$, or that $\varphi_e(n)=k$, if $f(n)=k$. Every finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. But any model of this theory exactly fulfills statement (3). QED Woodin’s proof is more difficult than the arguments I have presented, but I realize now that this extra difficulty is because he is proving an extremely interesting and stronger form of the theorem, as follows. Theorem. (Woodin) There is a Turing machine program $e$ such that $\PA$ proves $e$ accepts at most a finite set, and for any finite set $A\subset\N$ there is a model $M\models\PA$ inside of which $e$ accepts exactly $A$. And furthermore, in any such $M$ and any finite $B\supset A$, there is an end-extension $M\subset_{end} N\models\PA$, such that in $N$, the program $e$ accepts exactly the elements of $B$. This is a much more subtle claim, as well as philosophically interesting for the reasons that he dwells on. The program I described above definitely does not achieve this stronger property, since my program $e$, once it finds the proof that $e$ does not accept exactly $A$, will accept exactly $A$, and this will continue to be true in all further end-extensions of the model, since that proof will continue to be the first one that is found.
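The logic of the program $e$ can be caricatured in Python (a loose sketch only, not the actual construction: the Kleene recursion theorem's self-reference and the unbounded search for a PA-proof are abstracted into a callable `find_refuted_set`, my hypothetical stand-in, which returns the finite set named in the first proof found, or `None` to model a search that never succeeds):

```python
def make_program_e(find_refuted_set):
    """find_refuted_set() stands in for the search for a PA-proof of
    'program e does not accept exactly the elements of {n1,...,nk}'."""
    def accepts(n):
        refuted = find_refuted_set()
        if refuted is None:
            # no proof is ever found: e accepts nothing (in reality the
            # search simply never halts; returning False models that)
            return False
        # a proof was found: accept exactly that finite set anyway
        return n in refuted
    return accepts

# In the standard model, PA is consistent and no such proof exists,
# so e accepts nothing:
e_standard = make_program_e(lambda: None)
assert not any(e_standard(n) for n in range(100))

# In a model where a (nonstandard) proof refuting the set {2, 5} is found,
# e accepts exactly {2, 5}:
e_nonstandard = make_program_e(lambda: {2, 5})
assert [n for n in range(10) if e_nonstandard(n)] == [2, 5]
```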
No. It's only LTI (Linear and Time-Invariant) systems that can be modeled with convolution through a unique single impulse response. For example the systems $$ y(t) = g(t) x(t) $$ or $$ y[n] = \sum_{k=0}^{k < n} x[n-k] $$ are both linear but not time-invariant and their output $y[n]$ cannot be computed with the convolution operation ( $\star$ denoting ... This happens frequently if your poles are reasonably close to the unit circle. Consider the following example
%% TF2ZP is problematic
fs = 44100;
% 6th order lowpass, fc = 50Hz, sampled at 44.1kHz
[z,p,k] = cheby2(6,80,50*2/fs);
% to transfer function
[b,a] = zp2tf(z,p,k);
% back to zpk
[z1,p1,k1] = tf2zp(b,a);
display([p p1]);
Displaying the poles side ... Any LTI system can be completely characterized (among other things) by its transfer function or its impulse response. If your filter represents an LTI system, then you can calculate its output by either convolving the input with the impulse response or multiplying the transfer function with the spectrum of the input signal. In theory these things are ... Neither. To me, filter classes using the notion of frequency bands (low-pass, high-pass, etc.) can be used safely in the linear case. And the bilateral filter is nonlinear. Edges are not really high-frequency: they often have sharp variations across the edge, but slow variation along it. I would consider the bilateral filter as an edge-preserving smoother, ... DO NOT PASS GO; DO NOT Collect $200; DO NOT Take Fourier transforms, or worse yet, FFTs. You do the convolution exactly the way you would do any other convolution: start with the basic convolution integral (not what you wrote) and apply the properties of the signals that you are using to come up with an easier calculation. Begin with $$\int_{-\infty}^\infty ... For the biquad section that is cascaded, the quantization issues regarding the pole locations are well understood. 
For a biquad transfer function: $$\begin{align}H(z) &= \frac{b_0+b_1z^{-1}+b_2z^{-2}}{1+a_1z^{-1}+a_2z^{-2}} \\ \\&= \frac{b_0z^2+b_1z+b_2}{z^2+a_1z+a_2} \\ \\&= b_0\frac{z^2+\frac{b_1}{b_0}z+\frac{b_2}{b_0}}{z^2+a_1z+a_2} \... With infinite-precision arithmetic they will be stable. However, since you don't have infinite-precision arithmetic, you will have quantization issues even if you use 64-bit precision. These quantization issues can make your filter unstable. Even if your filter is stable, perhaps you will not get the frequency response wanted because of these quantization ... You have a very narrow stop band which means that all the poles are crammed in a very small area of the complex plane, close to the unit circle. This can result in severe numerical problems, even for relatively small filter orders, even with floating point arithmetic. Another important point that you might not realize is that if you design a band pass or a ... Your confusion is understandable. If you consider the definition of linear phase FIR filter and the associated symmetry conditions on their impulse responses, then you can arrive at the conclusion that the first two cases $$ h_1[n] = [0,0,0,1,0] $$ and $$ h_2[n] = [0,0,0,0,1,0,0,0,0,0,1] $$ are non-symmetric. However, as you use zeros and ones in those ... The convolution integral is a special case of the Fredholm equation of the first kind. https://en.wikipedia.org/wiki/Fredholm_integral_equation I believe that it covers linear time varying systems, as do linear time varying state space equations, so it’s a no but... kind of answer. To unveil part of the mystery, let us recall how the convolution operation and the properties of linearity and time-invariance are related. In other words, if a discrete system $\mathcal{S}$ is linear and time-invariant, what would be the output for a discrete signal $x[n]$? To do that, let us rewrite the signal on the basis of Kronecker symbols $\delta_n$,... 
I suppose that you obtain a transfer function of the desired discrete-time system in the form $$H(z)=\frac{b_0+b_1z^{-1}+\ldots +b_Nz^{-N}}{1+a_1z^{-1}+\ldots +a_Nz^{-N}}\tag{1}$$ From that transfer function you compute the coefficients of the second-order sections. Before doing that, you can make sure that the DC gain of $(1)$ equals $1$ (which of course ... Bilateral Filter is indeed an Edge Preserving Filter. Moreover, due to being a Spatially Variant Non Linear Filter, it cannot be applied using the Fourier Transform. Since it has no representation in the Frequency Domain it is not well defined how to classify it into one of the categories: LPF, HPF, BPF or BSF. Nonetheless, let's try doing some analysis based on ... Your diagram looks correct. Let's call the transfer function in the feedback loop $G(z)$. Consider the signal $w[n]$ at the input to the delay line. Its $\mathcal{Z}$-transform satisfies $$W(z)=X(z)+G(z)Y(z)\tag{1}$$ where $X(z)$ and $Y(z)$ are the $\mathcal{Z}$-transforms of the input and output sequences, respectively. The output is just a delayed ... I actually ran into this same paper and had the same problem. The paper linked below does a good job addressing the issue. Starting on page 13 they discuss the issue with this optical element: https://mstamenk.github.io/assets/files/OpticalRadon.pdf
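The `tf2zp` round-trip issue quoted above is easy to reproduce outside MATLAB as well; here is a sketch using SciPy (the filter specification mirrors the quoted snippet; the exact error magnitude will vary):

```python
import numpy as np
from scipy import signal

fs = 44100
# 6th-order Chebyshev II lowpass, fc = 50 Hz, sampled at 44.1 kHz
z, p, k = signal.cheby2(6, 80, 50 * 2 / fs, output='zpk')
# to transfer function and back to zpk, as in the MATLAB snippet
b, a = signal.zpk2tf(z, p, k)
z1, p1, k1 = signal.tf2zpk(b, a)

# the poles sit extremely close to the unit circle, so the round trip
# through polynomial coefficients perturbs them
err = np.max(np.abs(np.sort_complex(p) - np.sort_complex(p1)))
print(f"max pole radius: {np.max(np.abs(p)):.6f}, round-trip pole error: {err:.2e}")
```

This is why second-order sections (`output='sos'` in SciPy) are preferred for high-order, narrow-band designs.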
By a modified version of Fermat's little theorem (Euler's theorem) we obtain that $a^{\phi(n)} \equiv 1 \pmod n$ whenever $(a,n)=1$. But my professor accidentally gave this question: calculate $2^{9999} \bmod 100$. So everyone in the class calculated it using the theorem, since they forgot to check that $(a,n)=1$. But to my surprise everyone got $88$ as the answer, which is actually the correct answer. So I first looked at what happens in the case of mod $10$. Here $2^4 \equiv 6 \pmod{10}$. But magically enough this number multiplies with powers of $2$ as if it were the identity, i.e. $$2\cdot6 \equiv 2 \pmod {10}$$ $$4\cdot6 \equiv 4 \pmod {10}$$ $$6\cdot6 \equiv 6 \pmod {10}$$ $$8\cdot6 \equiv 8 \pmod {10}$$ After seeing this I checked what happens in the case of $100$. A similar thing happens here, except with $2^{40} \equiv 76\pmod{100}$. Now $76$ acts like $6$: we have $76\cdot 2 \not\equiv 2$, but otherwise $76\cdot 4\equiv 4$ and $76\cdot 8\equiv 8 \pmod{100}$, and so on. Can anyone explain why this is happening here, and does it happen in other cases as well?
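These observations are quick to verify numerically (illustrative Python; the comments record the congruences behind them, which I believe explain the pattern via the Chinese remainder theorem):

```python
assert pow(2, 9999, 100) == 88      # the answer the whole class obtained

# mod 10: 2^4 = 16 = 6, and 6 acts as an identity on the even residues,
# since 6 = 1 (mod 5) and 6 = 0 (mod 2)
assert pow(2, 4, 10) == 6
for a in (2, 4, 6, 8):
    assert (6 * a) % 10 == a

# mod 100: 2^40 = 76, with 76 = 1 (mod 25) and 76 = 0 (mod 4),
# so 76 * a = a (mod 100) exactly when 4 divides a -- hence the one
# exception 76 * 2 != 2 noted in the question
assert pow(2, 40, 100) == 76
assert (76 * 2) % 100 != 2
for a in (4, 8, 12, 16, 32):
    assert (76 * a) % 100 == a
```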
Kleene Star with ‘sed’ is behaving as expected for me, with the exception of a case where the input pattern is “ab” and the regex is “b*”. Does anyone know why this regex is not being matched against the pattern space? This is the failure case:
$ printf "ab\n" | sed -En 's/b*// p' | od -t c
0000000 a b \n
0000003
This case, without Kleene Star, behaves as expected:
$ printf "ab\n" | sed -En 's/b// p' | od -t c
0000000 a \n
0000002
I’m trying to match the ‘b’ and replace it with nothing. The pattern space is ‘ab’, beginning of line is unspecified, so I’m confused why /b*/ would not match a pattern of zero or more ‘b’s, in this case ‘b’. Oddly, this case works with ‘.’ prepended to the ‘*’:
# fails to match
$ printf "abb\n" | sed -En 's/b*// p' | od -t c
0000000 a b b \n
0000004
# matches with .* !
$ printf "abb\n" | sed -En 's/b.*// p' | od -t c
0000000 a \n
0000002
Kleene Star matches zero or more occurrences of the preceding alphabet (in this case a single character). By definition: $ V^0 = \{\epsilon\}$ , $ V^1 = V$ , and $ \forall i \, \bigl( (i \gt 0) \rightarrow V^{i + 1} = \{ wv : w \in V^i \land v \in V \} \bigr) $ . Therefore $ V^0 = \{ \epsilon\}$ , $ V^1 = \{ \epsilon \cdot b\}$ , $ V^2 = \{ \epsilon \cdot b \cdot b\}$ . $ V^* = \bigcup\limits_{i\ge0} V^i = V^0 \cup V^1 \cup …$ which I have specified by /b*/. The following cases agree with my understanding of sed and Kleene Star:
# * : matches 0 or more occurrences
# no match
$ printf "a\n" | sed -En 's/b*// p' | od -t c
0000000 a \n
0000002
# match a
printf "a\n" | sed -En 's/a*// p' | od -t c
0000000 \n
0000001
# match a
printf "ab\n" | sed -En 's/a*// p' | od -t c
0000000 b \n
0000002
# match aa
printf "aab\n" | sed -En 's/a*// p' | od -t c
0000000 b \n
0000002
I tested using BSD and GNU sed, and both give the same results. Thanks!
I am developing a code where I am using the least squares method to compute gradients. Generally, we use least squares to obtain some model based on a set of data (${q_1 \cdots q_N}$) at locations (${x_1 \cdots x_N}$), and use this model to predict $q_c$ at $x_c$. In my case, I am looking for $\nabla{q}$ at $x_c$ using least squares. Should the $q$ value at $x_c$ be included in a least squares model for its gradient at that location, or only the neighboring values? I would like to use the weighted least squares method. It seems the weighting matrix is defined as a diagonal matrix with the inverse of the variance squared along the main diagonal. The popular choice for the variance seems to be the distance between the neighboring data point and the point in question. So if I include the $q_c$ value at $x_c$ in my least squares model, then the variance would be zero for this data point, which would be problematic unless I add some $\epsilon$ value to the variance before taking its inverse. The least squares equation is \begin{align} \nabla q = q^TZM^T\nabla\phi(x) \\ \\ M=(Z^TZ)^{-1}\\ \\ \phi = \begin{bmatrix} 1 & x & y & z & xy & xz & yz & 0.5x^2 & 0.5y^2 & 0.5z^2 \\ \end{bmatrix}\\ \\ Z_{unweight} = \begin{bmatrix} 1 & x_1 & y_1 & z_1 & x_1y_1 & x_1z_1 & y_1z_1 & 0.5x_1^2 & 0.5y_1^2 & 0.5z_1^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\vdots & \vdots \\ 1 & x_N & y_N & z_N & x_Ny_N & x_Nz_N & y_Nz_N & 0.5x_N^2 & 0.5y_N^2 & 0.5z_N^2 \\ \end{bmatrix} \\ \\ q = \begin{bmatrix} q_1 & \cdots & q_N \end{bmatrix} \end{align} $q_c$ is known, but I am not sure if this should be included in the above model.
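As a concrete illustration of the question (a sketch under my own simplifications: 2D with a linear basis instead of the 3D quadratic basis above, and with $q_c$ excluded from the fit, which sidesteps the zero-distance weight problem entirely):

```python
import numpy as np

def wls_gradient(xc, qc, xs, qs):
    """Weighted least-squares gradient of q at xc from neighbor data only,
    fitting q(x) ~ qc + g.(x - xc) with inverse-squared-distance weights."""
    dx = xs - xc                         # (N, 2) offsets from xc
    dq = qs - qc                         # (N,) value differences
    w = 1.0 / np.sum(dx**2, axis=1)      # 1/d^2 weights
    sw = np.sqrt(w)                      # sqrt-weights so lstsq minimizes sum w_i r_i^2
    g, *_ = np.linalg.lstsq(sw[:, None] * dx, sw * dq, rcond=None)
    return g

# Usage: scattered neighbors around xc and an exactly linear field,
# whose gradient (3, 2) the fit must recover exactly.
rng = np.random.default_rng(0)
xc = np.array([0.5, 0.5])
xs = xc + rng.normal(scale=0.1, size=(12, 2))
f = lambda x: 1.0 + 3.0 * x[..., 0] + 2.0 * x[..., 1]
g = wls_gradient(xc, f(xc), xs, f(xs))
print(g)  # close to [3. 2.]
```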
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely, Different set theories extending ZF are never bi-interpretable! For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss. The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information. To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations. 
(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year. We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006 (pdf, see pp. 52-55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM, in his 2016 article, A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti: a tribute to Albert Visser, Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.), pp. 99-110, College Publications, London, ISBN 978-1848902046. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.) Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero. Two theories are thus mutually interpretable, if each of them is interpretable in the other. 
Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other. Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general. By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory. For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order. 
For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackermann encoding of hereditarily finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the set $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation. We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF. Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF. Proof. I mean “distinct” here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF. 
That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not. I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF. If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. 
This is impossible unless all three models are identical, since a simple $\in^M$-induction shows that $\pi(y)=y$ for all $y$, because if this is true for the elements of $y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$ and so $N$ and $M$ satisfy the same theory, contrary to assumption. If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski’s theorem on the non-definability of truth. The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$ It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals. A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
Define the Fibonacci numbers by $f_0 = 0$, $f_1 = 1$, $f_{n+1} = f_n + f_{n-1}$, and the golden ratio by $\phi = \frac{\sqrt{5}+1}{2}$. Then it is easy to check that $f_n = \frac{\phi^n - (\tfrac{-1}{\phi})^n}{\sqrt{5}}$ and $(\phi-1)f_n = f_{n-1}-(\tfrac{-1}{\phi})^n$. We will show that if five terms in a row are all less than $f_{2k}$ for some $k > 1$, then all $a_n$ are less than or equal to $f_{2k}.$ Suppose that $a_{n-1}, a_n$ are both positive. As is well known, we can find sets $I,J \subset \{2,3,4,...\}$ with $I \cap (I+1) = J\cap (J+1) = \emptyset$, such that $a_n = \sum_{i\in I} f_i$ and $a_{n-1} = \sum_{j\in J} f_j$. Put $x = \sum_{i\in I} (\tfrac{-1}{\phi})^i$ and $y = \sum_{j\in J} (\tfrac{-1}{\phi})^j$. By summing a geometric series, it's easy to show that we have $\frac{-1}{\phi^2} < x,y < \frac{1}{\phi}$. Expressing $a_{n+1}, ..., a_{n+5}$ in terms of $x$ and $y$, we get $a_{n+1} = \sum_{i\in I}f_{i-1} - \sum_{j\in J} f_j + \alpha,\ \ \alpha = \lceil -x\rceil,$ $a_{n+2} = -\sum_{i\in I}f_{i-1} - \sum_{j\in J} f_{j-1} + \beta,\ \ \beta = \lceil\phi x + y + (\phi-1)\alpha\rceil,$ $a_{n+3} = -\sum_{i\in I}f_i + \sum_{j\in J} f_{j-1} + \gamma,\ \ \gamma = \lceil-\phi x-\phi y + (\phi-1)\beta - \alpha\rceil,$ $a_{n+4} = \sum_{j\in J} f_j + \delta = a_{n-1} + \delta,\ \ \delta = \lceil x+\phi y + (\phi-1)\gamma - \beta\rceil,$ $a_{n+5} = \sum_{i\in I}f_i + \epsilon = a_n + \epsilon,\ \ \epsilon = \lceil -y + (\phi-1)\delta - \gamma\rceil.$ Note that if we instead had $a_n$ positive and $a_{n-1}$ negative, then on writing $a_{n-1} = -\sum_{j\in J} f_j$ and $y = -\sum_{j\in J} (\frac{-1}{\phi})^j$, we still get $a_{n+4} = a_{n-1} + \delta$ and $a_{n+5} = a_n + \epsilon$, with $\alpha, \beta, \gamma, \delta, \epsilon$ defined as above. Note that in this case, we have the inequalities $\frac{-1}{\phi} < y < \frac{1}{\phi^2}$. Claim: We always have $\delta, \epsilon \in \{-1,0,1\}$. 
Furthermore, if $0 < x < \frac{1}{\phi^2}$ and either $\frac{-1}{\phi^2} \le y \le \frac{1}{\phi} - \frac{x}{\phi}$ or $\frac{-1}{\phi} < y \le \frac{-1}{\phi^2} - \frac{x}{\phi}$, then $\delta \ge 0$ and $\epsilon \le 0$. Proof of Claim: For the first part, note that $a_n - a_{n+5} = (a_n - \frac{a_{n+1}}{\phi} + a_{n+2}) + \frac{1}{\phi}(a_{n+1} - \frac{a_{n+2}}{\phi} + a_{n+3}) - \frac{1}{\phi}(a_{n+2} - \frac{a_{n+3}}{\phi} + a_{n+4}) - (a_{n+3} - \frac{a_{n+4}}{\phi} + a_{n+5}),$ which is between $-1-\frac{1}{\phi}$ and $1+\frac{1}{\phi}$ by assumption. Thus $|\epsilon| = |a_{n+5}-a_n| \le 1$ (since it is an integer). Now suppose that $0 < x < \frac{1}{\phi^2}$ and either $\frac{-1}{\phi^2} \le y \le \frac{1}{\phi} - \frac{x}{\phi}$ or $\frac{-1}{\phi} < y \le \frac{-1}{\phi^2} - \frac{x}{\phi}$. Then we immediately see $\alpha = 0$, and from $-1 < y \le \phi x + y \le \frac{1}{\phi} + \phi x - \frac{x}{\phi} = \frac{1}{\phi} + x < 1$, we see that $\beta = \lceil\phi x + y\rceil = \begin{cases} 1 & \phi x + y > 0,\\ 0 & \phi x + y \le 0. \end{cases}$ From this together with $\gamma = \lceil -\phi x - \phi y + \frac{\beta}{\phi}\rceil = \lceil -(\phi x + y) + \frac{\beta-y}{\phi}\rceil$ and $y > \frac{-1}{\phi}$ we can easily show $1 \ge \gamma \ge \begin{cases} 1 & y \le 0\\ 0 & y > 0.\end{cases}$ That $\delta = \lceil x + \phi y + \frac{\gamma}{\phi} - \beta\rceil$ is at least $0$ follows from $x + \phi y + \frac{\gamma}{\phi} - \beta = (\frac{\phi x + y}{\phi} - \beta) + (y + \frac{\gamma}{\phi}) > -1 + 0.$ Finally, we must show that $\epsilon =\lceil -y + \frac{\delta}{\phi} - \gamma\rceil$ is at most $0$. We split into three cases. 
First case: $\gamma = 0\implies y>0\implies \beta = 1\implies \delta = 0\implies \epsilon = \lceil -y \rceil = 0.$ Second case: $\gamma = 1\ \&\ y \ge \frac{-1}{\phi^2} \implies \epsilon \le \lceil \frac{1}{\phi^2} + \frac{1}{\phi} - 1\rceil = 0.$ Third case: $\gamma = 1\ \&\ y \le \frac{-1}{\phi^2} - \frac{x}{\phi} \implies \delta \le \lceil \frac{-1}{\phi} + \frac{1}{\phi} - \beta\rceil = 0 \implies \epsilon \le \lceil \frac{1}{\phi} - 1\rceil = 0.$ Corollary: If $a_n = f_{2k}$ and $-f_{2k+2} \le a_{n-1} < f_{2k+3}-1$, then $a_{n+5} \le a_n$. Proof: We just have to check that the claim applies. We have $x = (\tfrac{-1}{\phi})^{2k} = \frac{1}{\phi^{2k}}$, so we just need to check that either $\frac{-1}{\phi^2} \le y \le \frac{1}{\phi} - \frac{1}{\phi^{2k+1}}$ or $y \le \frac{-1}{\phi^2} - \frac{1}{\phi^{2k+1}}.$ Suppose first that $a_{n-1} \ge 0$. Then from $a_{n-1} < f_{2k+3} - 1 = \sum_{j=1}^{k+1} f_{2j}$, we see that $y \le \sum_{j=1}^k \frac{1}{\phi^{2j}} = \frac{1}{\phi} - \frac{1}{\phi^{2k+1}}.$ Now suppose that $a_{n-1} < 0$. Note that in this case we automatically have $y \le \frac{1}{\phi^2} \le \frac{1}{\phi} - \frac{1}{\phi^{2k+1}}.$ Write $a_{n-1} = -\sum_{j\in J} f_j$ for some $J \subseteq \{2,3,...\}$ with $J\cap (J+1) = \emptyset.$ If $2 \not\in J$, then $y > -\frac{1}{\phi^4}-\frac{1}{\phi^6} -\cdots = -\frac{1}{\phi^3} > -\frac{1}{\phi^2}$. If $J = \{2\}$ or if $2\in J$ and the second smallest element of $J$ is odd, then we have $y \ge -\frac{1}{\phi^2}$ as well. Now suppose that $2\in J$ and the next smallest element of $J$ is $2l$. 
Since $a_{n-1} \ge -f_{2k+2}$, we must have $l \le k$, so $y < \frac{-1}{\phi^2} - \frac{1}{\phi^{2l}} + \frac{1}{\phi^{2l+3}} + \frac{1}{\phi^{2l+5}} + \cdots = \frac{-1}{\phi^2} - \frac{1}{\phi^{2l+1}} \le \frac{-1}{\phi^2} - \frac{1}{\phi^{2k+1}}.$ To finish: Note that if $a_n = f_{2k}$ and $a_{n-1}, a_{n+1} \le f_{2k}$, then $a_{n-1} + a_{n+1} = f_{2k-1}$, so $-f_{2k-2} \le a_{n-1} \le f_{2k},$ and we can apply the Corollary to see that $a_{n+5} \le f_{2k}.$ Of course, if $a_n < f_{2k}$ then we have $a_{n+5} \le a_n+1 \le f_{2k}$ (by the Claim) as well.
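The Zeckendorf-type decompositions and the bound $\frac{-1}{\phi^2} < x < \frac{1}{\phi}$ used throughout the argument can be spot-checked numerically. A minimal sketch (the greedy decomposition below is the standard construction, which the argument above takes as "well known" rather than spelling out):

```java
import java.util.ArrayList;
import java.util.List;

public class ZeckendorfCheck {
    static final double PHI = (Math.sqrt(5) + 1) / 2;

    // f_0 = 0, f_1 = 1, f_{n+1} = f_n + f_{n-1}
    static long fib(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    // Greedy Zeckendorf decomposition: indices I ⊂ {2,3,...} with
    // I ∩ (I+1) = ∅ and the f_i over I summing to n.
    static List<Integer> zeckendorf(long n) {
        List<Integer> I = new ArrayList<>();
        int k = 2;
        while (fib(k + 1) <= n) k++;
        while (n > 0) {
            while (fib(k) > n) k--;
            I.add(k);
            n -= fib(k);
            k -= 2; // the greedy choice never uses consecutive indices
        }
        return I;
    }

    public static void main(String[] args) {
        // Closed form f_n = (phi^n - (-1/phi)^n) / sqrt(5)
        for (int n = 0; n < 30; n++) {
            double closed = (Math.pow(PHI, n) - Math.pow(-1 / PHI, n)) / Math.sqrt(5);
            if (Math.abs(fib(n) - closed) > 1e-6) throw new AssertionError("closed form");
        }
        // The bound -1/phi^2 < x < 1/phi for x = sum of (-1/phi)^i over I
        for (long n = 1; n < 2000; n++) {
            long sum = 0;
            double x = 0;
            int prev = Integer.MAX_VALUE;
            for (int i : zeckendorf(n)) {
                sum += fib(i);
                x += Math.pow(-1 / PHI, i);
                if (prev - i < 2) throw new AssertionError("consecutive indices");
                prev = i;
            }
            if (sum != n) throw new AssertionError("bad decomposition");
            if (x <= -1 / (PHI * PHI) || x >= 1 / PHI) throw new AssertionError("x out of range");
        }
        System.out.println("all checks passed");
    }
}
```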
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely, Different set theories extending ZF are never bi-interpretable! For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss. The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information. To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations. 
(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year. We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006 (see pp. 52–55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM, in his 2016 article A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti: a tribute to Albert Visser, Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.), pp. 99–110, College Publications, London, 2016, ISBN 978-1848902046. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.) Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero. Two theories are thus mutually interpretable, if each of them is interpretable in the other. 
Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other. Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general. By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory. For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order. 
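The linear-order example can be made concrete. In the small sketch below (my own illustration, with integers standing in for an arbitrary domain), each translation is a definable operation on orders, and composing the two translations returns the original order on the nose:

```java
import java.util.function.BiPredicate;

public class OrderInterp {
    // From a linear order <= , define the strict order: x < y iff x <= y and x != y.
    static BiPredicate<Integer, Integer> strict(BiPredicate<Integer, Integer> leq) {
        return (x, y) -> leq.test(x, y) && !x.equals(y);
    }

    // From a strict linear order < , define the linear order: x <= y iff x < y or x = y.
    static BiPredicate<Integer, Integer> nonStrict(BiPredicate<Integer, Integer> lt) {
        return (x, y) -> lt.test(x, y) || x.equals(y);
    }

    public static void main(String[] args) {
        BiPredicate<Integer, Integer> leq = (x, y) -> x <= y;
        // Interpreting twice is the identity: this is the bi-interpretation condition.
        BiPredicate<Integer, Integer> roundTrip = nonStrict(strict(leq));
        for (int x = -5; x <= 5; x++)
            for (int y = -5; y <= 5; y++)
                if (roundTrip.test(x, y) != leq.test(x, y))
                    throw new AssertionError("round trip changed the order");
        System.out.println("round trip is the identity");
    }
}
```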
For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackermann encoding of hereditarily finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the structure $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation. We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF. Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF. Proof. I mean “distinct” here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF. 
That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not. I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF. If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. 
This is impossible unless all three models are identical, since a simple $\in^M$-induction shows that the isomorphism $\pi\colon M\to\bar M$ must be the identity: if $\pi(x)=x$ for all elements $x\in y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$, and so $N$ and $M$ satisfy the same theory, contrary to assumption. If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski’s theorem on the non-definability of truth. The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$ It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals. A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
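The Ackermann encoding used in the PA example above is easy to compute with: $n\mathrel{E}m$ holds just in case the $n^{th}$ binary digit of $m$ is $1$. A minimal sketch (the decoder and its braces output format are my own illustration, not from the post):

```java
public class Ackermann {
    // n E m iff the n-th binary digit of m is 1 (the encoding from the text).
    static boolean in(int n, int m) {
        return ((m >> n) & 1) == 1;
    }

    // Decode the hereditarily finite set coded by m, written with braces.
    // E.g. 0 codes {} and 3 = binary 11 codes {{},{{}}} = {0, 1} as codes.
    static String decode(int m) {
        StringBuilder sb = new StringBuilder("{");
        for (int n = 0; (m >> n) != 0; n++) {
            if (in(n, m)) {
                if (sb.length() > 1) sb.append(",");
                sb.append(decode(n));
            }
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(decode(0)); // prints {}
        System.out.println(decode(3)); // prints {{},{{}}}
    }
}
```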
Difference between revisions of "Timeline of prime gap bounds"

Line 1,043:
| [http://math.mit.edu/~drew/schinzel_3473955908_80550202480.txt 80,550,202,480]* [m=5] ([http://terrytao.wordpress.com/2014/05/17/polymath-8b-xi-finishing-up-the-paper/#comment-366807 Sutherland])
| Verification of several previous bounds
|}

Revision as of 10:07, 23 June 2014

Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison) 58,885,998? 
(Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? (Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve Jun 6 387,960 (Angeltveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7 26,024? (v08ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. 
Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? 
(Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? 
(Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? 
(Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen) 59,451 [m=2]#? (Nielsen) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? (Sutherland) Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland) Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380? [m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? 
[m=5] (Castryck) 5,511 [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 309,954? [EH] [m=5] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland) 1,512,832,950? [m=4] (Sutherland) 2,186,561,568#? [m=4] (Sutherland) 131,161,149,090#? [m=5] (Sutherland) Dec 24 474,320? [EH] [m=4] (Sutherland) 1,497,901,734? [m=4] (Sutherland) Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54# (Nielsen) 270# (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266 [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Feb 22 51?# (Nielsen) 252?# (Clark-Jarvis) More efficient matrix computation allows for higher degrees to be used Mar 4 Jan 6 computations confirmed Apr 14 50?# (Nielsen) 246?# (Clark-Jarvis) A 2-week computer calculation! Apr 17 35,410 [m=2]* (xfxie) 398,646? [m=2]* (Sutherland) 25,816,462? [m=3]* (Sutherland) 1,541,858,666? [m=4]* (Sutherland) 84,449,123,072? [m=5]* (Sutherland) Redoing the m=2,3,4,5 computations using the confirmed MPZ estimates rather than the unconfirmed ones Apr 18 398,244? [m=2]* (Sutherland) 1,541,183,756? [m=4]* (Sutherland) 84,449,103,908? [m=5]* (Sutherland) Apr 28 398,130 [m=2]* (Sutherland) 1,526,698,470? [m=4]* (Sutherland) 83,833,839,882? [m=5]* (Sutherland) May 1 81,973,172,502? [m=5] (Sutherland) 2,165,674,446#? [m=4] (Sutherland) 130,235,143,908#? 
[m=5] (Sutherland) faster admissibility testing May 3 1,460,493,420? [m=4] (Sutherland) 80,088,836,006? [m=5] (Sutherland) 1,488,227,220?* [m=4] (Sutherland) 81,912,638,914?* [m=5] (Sutherland) 2,111,605,786?# [m=4] (Sutherland) 127,277,395,046?# [m=5] (Sutherland) Fast admissibility testing for Hensley-Richards tuples May 3 3,393,468,735? [m=5] (de Grey) 2,113,163?# [m=3] (de Grey) 105,754,479?# [m=4] (de Grey) 5,274,206,963?# [m=5] (de Grey) Improved hillclimbing; also confirmation of previous k values May 4 79,929,339,154? [m=5] (Sutherland) 2,111,597,632?# [m=4] (Sutherland) 126,630,432,986?# [m=5] (Sutherland) May 5 32,285,928?# [m=3] (Sutherland) May 9 1,460,485,532? [m=4] (Sutherland) 79,929,332,990? [m=5] (Sutherland) 1,488,222,198?* [m=4] (Sutherland) 81,912,604,302?* [m=5] (Sutherland) 2,111,417,340?# [m=4] (Sutherland) 126,630,386,774?# [m=5] (Sutherland) Fast admissibility testing for Hensley-Richards sequences May 14 1,440,495,268? [m=4] (Sutherland) 78,807,316,822 [m=5] (Sutherland) 1,467,584,468?* [m=4] (Sutherland) 80,761,835,464?* [m=5] (Sutherland) 2,082,729,956?# [m=4] (Sutherland) 124,840,189,042?# [m=5] (Sutherland) Fast admissibility testing for Schinzel sequences May 18 1,435,011,318? [m=4] (Sutherland) 1,462,568,450?* [m=4] (Sutherland) 2,075,186,584?# [m=4] (Sutherland) Faster modified Schinzel sieve testing May 23 1,424,944,070? [m=4] (Sutherland) 1,452,348,402?* [m=4] (Sutherland) Fast restricted greedy sieving May 28 52? [m=2] [GEH] (de Grey) 51? [m=2] [GEH] (de Grey) 254? [m=2] [GEH] (Clark-Jarvis) 252? [m=2] [GEH] (Clark-Jarvis) New bounds for [math]M_{k,1/(k-1)}[/math] May 30 1,404,556,152? [m=4] (Sutherland) Heuristically determined shift for the shifted greedy sieve June 8 80,550,202,480* [m=5] (Sutherland) Verification of several previous bounds June 23 78,602,310,160? [m=5] (Sutherland) Legend: ? - unconfirmed or conditional ?? 
- theoretical limit of an analysis, rather than a claimed record
* - is majorized by an earlier but independent result
# - bound does not rely on Deligne's theorems
[EH] - bound is conditional on the Elliott-Halberstam conjecture
[GEH] - bound is conditional on the generalized Elliott-Halberstam conjecture
[m=N] - bound on intervals containing N+1 consecutive primes, rather than two
strikethrough - values relied on a computation that has now been retracted
See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
I have recently come to realize that a number of problems I had a few years ago trying to implement various mathematical theories in Java came down to the fact that the typing system in Java is not sufficiently strong to model all of Martin-Löf dependent type theory. Prior to Java 5 and generics, the only type theory you could do was through classes and interfaces, which give you arbitrary types built out of the ground types int, double, char and so on using product and function types. You can also build recursive types such as lists, though not in a uniform way. Using generics, you can do a bit more. You can now define List<T> as a function$$\DeclareMathOperator{\Type}{Type}\Type\to\Type$$and so we get higher-order types. This is not the end of the story, though. Using a generics trick, we can model some dependent product types. For example, we can define types of the form$$\prod_{T\colon\Type}f(T)$$using the syntax

public interface f<T extends f<T>> {
    // We can now refer to T as much as we like
    // inside the class. T has type f<T>.
}

As an example, we can model the basic underlying structure of a monoid (but not the associativity and unitality conditions) using a term of type $$ \prod_{T\colon\Type}T\times (T\to T\to T) $$ (i.e., a set $T$ with a designated unit element and a binary operation on $T$). Using Java generics, we can model this type:

public interface MonoidElement<T extends MonoidElement<T>> {
    public T unit();
    public T mul(T op1, T op2);
}

However, when we try to model more complicated concepts, the type theory breaks down. Is there a simple description of the fragment of MLTT corresponding to the types that can be built in the Java typing system?
Consider a machine where jobs arrive according to a Poisson stream with a rate of 4 jobs per hour. Half of the jobs have a processing time of exactly 10 minutes, a quarter have a processing time of exactly 15 minutes and the remaining quarter have a processing time of 20 minutes. The jobs with a processing time of 10 minutes are called type 1 jobs, the ones with a processing time of 15 minutes type 2 jobs and the rest type 3 jobs. The jobs are processed in order of arrival. The first question about this queueing system is the following: determine the mean sojourn time (waiting time plus processing time) of a type 1, 2 and 3 job, and also of an arbitrary job. I first tried to calculate the mean sojourn time for an arbitrary job. The arrival rate is $ \lambda = 4 / \text{hour} = \frac{1}{15} / \text{minute} $. Furthermore, for the processing rate $\mu$ we have $ \frac{1}{\mu} = \frac{1}{2} \cdot 10 + \frac{1}{4} \cdot 15 + \frac{1}{4} \cdot 20 = \frac{55}{4} \text{ minutes}$. So the mean processing time is $E(B) = \frac{1}{\mu} = \frac{55}{4}$ minutes, i.e. $\mu = \frac{4}{55}$ jobs per minute. We can now also calculate the probability $\rho$ that the server is busy on arrival: $\rho = \frac{\lambda}{\mu} = \frac{11}{12}$. So far, everything is correct (according to the answer sheet). However, something goes wrong in the following (again, according to the solution manual). To calculate the waiting time, we can use the following formula (which holds for M/G/1 queues, so also for our case: M/D/1 queues (where D stands for "deterministic")): $$ E(W) = \frac{\rho E(R)}{1-\rho} \quad . $$ Here, $R$ denotes the residual service time. We also know that $E(R) = \frac{E(B^{2})}{2E(B)}$. We know that $ Var(B) = E(B^{2}) - E(B)^{2}$, so $E(B^{2}) = Var(B) + E(B)^{2}$. I thought that, since the processing times are deterministic, the variance is always zero. So we have $E(B^{2}) = E(B)^{2} $, which implies that $E(R) = \frac{(55/4)^{2}}{2\cdot (55/4)} = \frac{55}{8} $. According to the solutions manual, however, $E(R) = 15/2$.
So I've probably done something wrong. Do you know who's wrong, the answer sheet or me? If it's me (which, again, is probably the case), do you know what I've done wrong?
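To show exactly where the two answers diverge, here is my arithmetic written out as a small Python script (all variable names are my own): `ER_mixture` applies $E(R)=E(B^2)/(2E(B))$ with $E(B^2)$ computed over the three job types, which reproduces the manual's $15/2$, while `ER_mine` applies my zero-variance step and gives my $55/8$.

```python
# Per-type processing times (minutes) and their probabilities.
times = [10, 15, 20]
probs = [1/2, 1/4, 1/4]

EB = sum(p * t for p, t in zip(probs, times))        # E(B)   = 55/4
EB2 = sum(p * t**2 for p, t in zip(probs, times))    # E(B^2) = 825/4

lam = 1 / 15        # arrival rate per minute
rho = lam * EB      # server utilisation, 11/12

# E(R) = E(B^2) / (2 E(B)) with E(B^2) taken over the three job types:
ER_mixture = EB2 / (2 * EB)     # equals 15/2, the manual's value

# My reasoning: each type is deterministic, so Var(B) = 0 and
# E(B^2) = E(B)^2, which gives a smaller residual time:
ER_mine = EB**2 / (2 * EB)      # equals 55/8, my value
```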
I'm working on the following problem: let $G = (V, E)$ be a connected, planar graph. Our goal is to find a $d$-partition of $G$, $P = \{V_1, \ldots, V_d\}$, such that $G[V_i]$ is connected, and $\min_{V_i \in P} |V_i|$ is maximized. (That is, the partitions are as balanced in size as possible.) This is the balanced connected partition problem on planar graphs, and is known to be NP-complete. I'm trying to use hill-climbing to solve this problem. Let $C$ be a $|V|$-length array, such that $C[i] = j$ implies that vertex $i$ is in $V_j$. This represents a partition $P_C$ of $V$, such that $V_k = \{i \in V \mid C[i] = k\}$. Let $G_C = (V, E_C)$ be the corresponding graph $\bigcup_{V_i \in P_C} G[V_i]$. Thus, $E \setminus E_C$ is the set of all edges $(u, v)$ such that $u$ and $v$ are in different vertex subsets. A "move" in this space (for the purposes of hill-climbing) is performed as follows. Let $N(v) = \{w \in V \mid (w, v) \in E\}$ be the neighbors of a vertex $v$ in the original graph. Let $(u, v)$ be an edge in $E \setminus E_C$. For all $w \in N(v)$, if $C[w] = C[v]$, remove $(w, v)$ from $E_C$; otherwise, if $C[w] = C[u]$, add $(w, v)$ to $E_C$. This induces a different partition, one where $v$ is in $u$'s subset. It's of course possible that the graph induced by these partitions contains more than $d$ connected components. I want to penalize these, proportional to the absolute difference between the number of connected components in the graph and $d$. The problem is that finding these connected components is expensive—$O(|V| + |E|)$ time per step—and I'd like to reduce that. I did some snooping around in the space of fully dynamic graph connectivity algorithms. For general graphs, Holm et al. (see [1]) achieve $O(\log^2 n/\log \log n)$ insert/delete and $O(\log n/\log \log n)$ connectivity queries (i.e., are $u$ and $v$ connected?) by using Henzinger and King's Euler-tour trees (see [2]). For planar graphs, Eppstein et al. 
(see [3]) achieve $O(\log n)$ insert/delete/connectivity. The problem is that I'm not sure that it's trivial to adapt any of these results to my problem, since I'm not sure that transforming connectivity queries into connected-component counts is easy or possible. The other thing is that those results support general edge insertion and deletion, which is good, but I want to know if I can exploit the fact that I know all of the edges that can ever be inserted or deleted ahead of time to do some precomputation and speed up connected component counting. I've also had the following insight. Let's say we move $v$ to $u$'s subset as described above. Call the original graph $G_1$ and the new graph $G_2$. Then the number of connected components in $G_2$ is the number of connected components in $G_1$, minus the number of vertices $w$ in $N(v)$ that weren't connected to $u$ before but are now (i.e., every path from $u$ to $w$ goes through $v$), plus the number of bridge edges $(w, v)$ for $w \in N(v)$ and $C[w] = C[v]$ (i.e., the number of $v$'s incident edges that, upon removal, break the connectedness of $v$'s original subset). Does anyone know of a solution to this problem? Or does anyone have any suggestions for papers to read?
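To make the cost I'm trying to avoid concrete, here is a minimal Python sketch of the move and the naive per-step recount (function names like `count_components` and `move` are mine; this is the $O(|V|+|E|)$ baseline I would like to beat):

```python
from collections import defaultdict, deque

def count_components(n, edges, C):
    """Count connected components of G_C, the union of the induced
    subgraphs G[V_i].  Naive BFS: O(|V| + |E|) per call, which is
    exactly the per-step cost described above."""
    adj = defaultdict(list)
    for u, v in edges:
        if C[u] == C[v]:          # keep only intra-subset edges (E_C)
            adj[u].append(v)
            adj[v].append(u)
    seen, comps = set(), 0
    for s in range(n):
        if s in seen:
            continue
        comps += 1
        seen.add(s)
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    q.append(y)
    return comps

def move(C, u, v):
    """The hill-climbing move: put vertex v into u's subset."""
    C2 = list(C)
    C2[v] = C[u]
    return C2
```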
How to transform the equation $\frac{\partial L(m,\lambda)}{\partial m}=2 \cdot \Sigma m+ \lambda \cdot \mathbf{1}$ into $m=-\frac{1}{2} \cdot \lambda \Sigma^{-1} \mathbf{1}$, where $\mathbf{1} = \left( \begin{matrix} 1 \\ 1 \\ 1 \end{matrix}\ \right)$, $m=(m_a,m_b,m_c)^T$, $\Sigma$ is a $3 \times 3$ matrix, and $\lambda$ is a scalar? This holds if and only if $\frac{\partial L}{\partial m} = 0$. In this case, you obtain the equation \begin{equation} 0 = 2\cdot\Sigma m + \lambda\cdot \mathbf{1}, \end{equation} which is equivalent to your expression for $m$ after shuffling some terms around.
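For what it's worth, the shuffling can be sanity-checked numerically. In this sketch `Sigma` and `lam` are made-up values (any invertible $3\times 3$ matrix works):

```python
import numpy as np

# Made-up symmetric positive-definite Sigma and scalar lambda.
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.5, 0.2],
                  [0.3, 0.2, 1.0]])
lam = 0.8
ones = np.ones(3)

# Solve 0 = 2*Sigma@m + lam*ones for m, i.e. m = -1/2 * lam * Sigma^{-1} 1:
m = -0.5 * lam * np.linalg.solve(Sigma, ones)

# The gradient evaluated at this m should vanish:
grad = 2 * Sigma @ m + lam * ones
```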
Given a system (S) that accepts a tuple of parameters ($\alpha, \beta, \gamma, \dots$). These parameters affect the performance of S in different ways. For instance, decreasing $\alpha$ would improve the performance of S at some rate, whereas decreasing $\beta$ has the reverse effect. How can one define a performance metric that captures the effects of all parameters in a simple, clear and correct manner? If the system exists in reality, so that measurements can be taken arbitrarily, what approach can one use to arrive at an accurate metric? I give an example to clarify the question: Suppose I have a sorting algorithm (A). The performance of A depends on the number of input numbers, the precision of the numbers (32, 64, ... bit), and whether they are partially sorted or not (assuming this can be quantified somehow). Let's limit ourselves to only these 3 factors. If we say A sorts 1000 numbers in 1 millisecond, this doesn't say anything about the precision or the entropy of the input. How can we come up with a standard metric or unit to describe the performance of A, preferably as a single number with a single unit, that captures all the factors mentioned above?
Page numbers are from the paper itself, and not the pdf. From page 3, "An interactive proof system is said to be black-box (computational) zero knowledge if there is a probabilistic polynomial time oracle machine $S$ such that for any probabilistic polynomial time verifier $V^*$ and for all $\:x \in L\:$, the distribution of the output produced by $S^{V^*}$ on input $x$ is computationally indistinguishable from the view of the verifier at the end of the interaction $(P,V)(x)$." I assume they mean $\:\left(P,V^*\right)(x)\:$ at the end, otherwise this just defines Honest Verifier Computational Zero Knowledge. From page 7, "If the simulator does not abort till all the sessions are over (or the verifier terminates), it outputs the view of the verifier at that point." From the third paragraph on page 8, "If the depth counter indicates that we are just one level above the leaves, then the simulator has to wait for the next two preamble messages, i.e., it has to move through the two leaves, and then return. For this the simulator keeps modifying the current view by letting the (modified) prover and the verifier run, until two preamble messages arrive." It seems to me like this can't work, since the prover's message at that stage is just a statistically binding commitment. Let $\: p: \omega \to \omega \:$ be a polynomial that bounds the number of oracle queries $S$ makes when $V^*$ only requests one proof. $\:$ Suppose the statistically binding commitment scheme being used is such that it is easy to compute a predicate which commitments have a probability of approximately $\: \frac1{2\cdot p(k)} \:$ of satisfying (for example, Naor commitments or commitment from an injective pseudo-random generator). $\:$ Have the cheating verifier $V_1$ request only one proof and then behave like the honest verifier except that, at one level above the leaves, it will terminate if the prover's commitment does not satisfy the predicate. 
$\;\;$ The probability of $\:\left(P,V_1\right)(x)\:$ succeeding is obviously approximately $\: \frac1{2\cdot p(k)} \:$, $\:$ whereas, by the union bound, the probability that $S^{V_1}$ outputs a succeeding view is not non-negligibly more than $\: \frac1{4\cdot p(k)} \:$. $\;\;$ This means that for sufficiently large $k$, the probability of $\:\left(P,V_1\right)(x)\:$ succeeding will be more than $\: \frac1{5\cdot p(k)} \:$ greater than the probability $S^{V_1}$ has of outputting a succeeding view. $\;\;$ That means the distribution of $S^{V_1}$ is computationally distinguishable from the distribution of $\:\left(P,V_1\right)(x)\:$. Is this an error in the paper? $\:$ If not, what am I missing?
I am working on ECG signals, with the eventual goal of extracting features in order to detect and classify arrhythmias. I am using the Discrete Wavelet Transform with the biorthogonal wavelet bior6.8. During my research, I came to know that the wavelet transform is the convolution of the input signal with the daughter wavelets, giving the approximation and detail coefficients: $$X(a,b)=\frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} \Psi\left( \frac{t-b}{a}\right)x(t)\,dt $$ where $a$ is the scale and $b$ is the time shift. I can't find the expression of the mother wavelet anywhere; also, when it comes to practice, the DWT is usually presented as a filter bank of high-pass and low-pass filters. The question is: what is the difference between the different wavelets, if the transform is always presented as a bank of filters? In my work, I used the two Butterworth high-pass and low-pass filters below, but I still can't explain my choice. I read that Butterworth is the most used filter in signal processing and that it optimizes the frequency response in the passband, keeping as much as possible of the wanted frequencies. Still, I have no argument why I shouldn't use any other filter, and I am not sure whether this is the correct way to implement the bior6.8 wavelet, since I do not know any other way to implement wavelets and I would implement Daubechies or any other wavelet the same way, which does not make sense.

from scipy.signal import filtfilt, butter

def butter_highpass(cutoff, fs, order=5):
    nyq = 0.5 * fs
    normal_cutoff = cutoff / nyq
    b, a = butter(order, normal_cutoff, btype='high', analog=False)
    return b, a

def butter_highpass_filter(data, cutoff, fs, order=5):
    b, a = butter_highpass(cutoff, fs, order=order)
    y = filtfilt(b, a, data, padlen=0)
    return y

from scipy.signal import butter, lfilter

def butter_lowpass(cutoff, fs, order=5):
    nyq = 0.5 * fs
    normal_cutoff = cutoff / nyq
    b, a = butter(order, normal_cutoff, btype='low', analog=False)
    return b, a

def butter_lowpass_filter(data, cutoff, fs, order=5):
    b, a = butter_lowpass(cutoff, fs, order=order)
    y = lfilter(b, a, data)
    return y

So basically, to summarize: what is the difference between wavelet transforms, how can we account for that during implementation, and is the use of Butterworth filters in this type of wavelet correct? NB: the image inserted is from Wikipedia.
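To make the filter-bank picture concrete, here is a minimal sketch of one DWT level as "filter, then keep every second sample". I use the Haar filters only because their coefficients are short enough to write down; my understanding (an assumption on my part) is that bior6.8 is the same scheme with different, longer FIR coefficient arrays, which is exactly why I am unsure where an IIR Butterworth filter would fit in:

```python
import numpy as np

def dwt_level(x, dec_lo, dec_hi):
    """One DWT level as a two-channel filter bank: convolve with the
    low-/high-pass decomposition filters, then downsample by 2."""
    approx = np.convolve(x, dec_lo)[1::2]
    detail = np.convolve(x, dec_hi)[1::2]
    return approx, detail

# Haar decomposition filters (the shortest possible example; a
# different wavelet just substitutes its own coefficient arrays).
s = 1 / np.sqrt(2)
haar_lo = np.array([s,  s])
haar_hi = np.array([s, -s])

# On a constant signal, all the detail coefficients should vanish.
x = np.array([1.0, 1.0, 1.0, 1.0])
a, d = dwt_level(x, haar_lo, haar_hi)
```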
TLDR: How does one efficiently apply color on every line of every array, to override the color set by \everymath? I.e., from the first screenshot to the second (a compilable example of this is at the bottom). Is there a way to efficiently apply \color{black} on every line for all arrays (rather than doing it line-by-line)? I am a big fan of colors and I would like to color up all my inline maths to make the text more readable: But I would like the color for display maths to remain black. After some googling, I found Stefan's solution, which uses \everymath and \everydisplay from the everysel package. However, there is a known problem with \everydisplay: it does not work with some maths environments in amsmath. To get around it, he suggested redefining displayed math, and I found the following code snippet in the OP's comments to his answer:

\let\originaldisplaystyle\displaystyle
\renewcommand\displaystyle{\color{black}\originaldisplaystyle}
\let\oldeq\equation
\def\equation{\oldeq\color{black}}

I tried redefining cases like this:

\let\oldcase\cases
\def\cases{\oldcase\color{black}}

but it only overrides the color of the first symbol: Here is the LaTeX for the equation above:

\begin{equation}\label{phi}
  \phi(\delta,w)=
  \begin{cases}
    1 & \text{if } \; \quad (a \in div_1 \land b \in div_2) \Leftrightarrow div_1 = div_2 \\
    0 & \text{if } \; \quad (a \in div_1 \land b \in div_2) \Leftrightarrow div_1 \not= div_2 \\
  \end{cases}
  \text{ where }
  \begin{aligned}
    \renewcommand\arraystretch{1.25}
    \begin{array}[t]{|@{\hskip0.6em}l}
      \delta =\{div_1, div_2\} \\
      w = \{a,b\}
    \end{array}
  \end{aligned}
\end{equation}

On the other hand, I get "Error: Illegal Character in array tag" while trying to redefine array by \let\olda\array \def\array{\olda\color{black}}. However, I am able to change one line of the array by putting \color{black} at the start of the line: Is there a way to efficiently apply \color{black} on every line for all arrays (rather than doing it line-by-line)? How do I do the same for cases as well?
Here is a compilable example:

\documentclass[11pt, oneside]{article}
\usepackage{geometry}
\geometry{letterpaper}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{mathtools}
\usepackage{xcolor}
\definecolor{myColor}{HTML}{E93F3C}
\usepackage{everysel}
\everymath{\color{myColor}}
\let\originaldisplaystyle\displaystyle
\renewcommand\displaystyle{\color{black}\originaldisplaystyle}
\let\oldeq\equation
\def\equation{\oldeq\color{black}}
\begin{document}
A \textit{div point set} is any ordered pair $(P,\Theta_P)$ satisfying
\begin{alignat}{2}
  \label{def1_1} &\mathrlap{\lvert\Theta_P\rvert = \binom{\lvert P\rvert}{2} \land P \not= \varnothing} \\[1.5ex]
  \label{def1_2} & \forall D_n \in \Theta_P & \quad & \begin{aligned}[t] \renewcommand\arraystretch{1.25}\begin{array}[t]{|@{\hskip0.6em}l} \color{black} (d_n,\delta_n) \mkern-2mu\coloneqq D_n \\ \lvert d_n\rvert =2\\ d_n \in \mathcal{P}(P)\\ \lvert\delta_n\rvert = 2\\ \bigcup \delta_n = P \setminus d_n \\ \bigcap \delta_n = \varnothing \end{array} \end{aligned}\\[1.5ex]
  \label{def1_3} & \forall D_n, D_m \in \Theta_P & \quad & \begin{aligned}[t] \renewcommand\arraystretch{1.25}\begin{array}[t]{|@{\hskip0.6em}l} (d_n,\delta_n) \mkern-2mu\coloneqq D_n \\ (d_m,\delta_m) \coloneqq D_m\\ d_n= d_m \Leftrightarrow D_n=D_m \end{array} \end{aligned}
\end{alignat}
\begin{equation}\label{phi}
  \phi(\delta,w)= \begin{cases} 1 & \text{if } \; \quad (a \in div_1 \land b \in div_2) \Leftrightarrow div_1 = div_2 \\ 0 & \text{if } \; \quad (a \in div_1 \land b \in div_2) \Leftrightarrow div_1 \not= div_2 \\ \end{cases} \text{ where } \begin{aligned} \renewcommand\arraystretch{1.25}\begin{array}[t]{|@{\hskip0.6em}l} \delta =\{div_1, div_2\} \\ w = \{a,b\} \end{array} \end{aligned}
\end{equation}
\end{document}
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox

\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...

@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
On page 20 of Goldstein's Classical Mechanics (third edition), these two steps are given between eqs. (1.51) and (1.52): $$\sum_i m_i \ddot {\bf r}_i \cdot \frac{\partial {\bf r}_i}{ \partial q_j}= \sum_i \left[\frac {d}{dt}\left(m_i {\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial \dot q_j}\right)-m_i {\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial q_j}\right]$$ and $$\sum_j \left\{ \frac{d}{dt}\left[ \frac{\partial}{\partial \dot q_j}\left(\sum_i \frac{1}{2}m_i v^2_i\right)\right] - \frac{\partial}{\partial q_j}\left(\sum_i \frac{1}{2}m_i v^2_i\right)-Q_j \right\}\delta q_j = 0 .$$ Why does the "$ \frac {1}{2}$" appear in the second formula?
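My own guess (which I would like to see confirmed) is that the factor comes from rewriting $\mathbf v_i\cdot\frac{\partial \mathbf v_i}{\partial \dot q_j}$ as half the derivative of $v_i^2$:

```latex
% Since v_i^2 = \mathbf{v}_i \cdot \mathbf{v}_i, the product rule gives
\frac{\partial v_i^2}{\partial \dot q_j}
   = 2\,\mathbf{v}_i \cdot \frac{\partial \mathbf{v}_i}{\partial \dot q_j},
\qquad\text{hence}\qquad
m_i\,\mathbf{v}_i \cdot \frac{\partial \mathbf{v}_i}{\partial \dot q_j}
   = \frac{\partial}{\partial \dot q_j}\Bigl(\tfrac{1}{2}\,m_i v_i^2\Bigr),
% and similarly with q_j in place of \dot q_j.
```

But I don't see why this is the right way to read the step.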
Finite State Automata
CS390, Fall 2019

Abstract
Finite Automata are a simple, but nonetheless useful, mathematical model of computation. In this module, we look at this model and at the languages that they can accept. These lecture notes are intended to be read in concert with the assigned portions of Chapter 2 of the text (Hopcroft). I will both offer my own commentary on the text as appropriate and will also provide JFLAP files and possibly other study aids to enhance the text’s own examples.

1 Introduction
Finite automata (FA), also widely known as finite state automata (FSA), are a mathematical model of computation based on the ideas of
- a system changing state due to inputs supplied to it;
- transitions from state to state governed by those inputs;
- some of these states being acceptor or final states.
This is, as we will see, a computation model of somewhat limited power. Not all things that we regard as “computation” can be done with FAs. It is, however, powerful enough to have quite a few practical applications, while being simple enough to be easily understood.

2 An Opening Example
A traditional puzzle: A man stands on the side of a small river. He has with him a head of cabbage, a goose, and a dog. On the shore in front of him is a small rowboat. The boat is so small that he can take only himself and one of his accompanying items at a time. But:
- If he leaves the goose alone on either shore with the cabbage, the goose will eat the cabbage.
- If he leaves the dog alone on either shore with the goose, the dog will kill the goose.
How can the man get across the river with all his items intact? We can model this by labeling situations using the characters M C G D | to denote the man, the cabbage, the goose, the dog, and the river, respectively. For example, we start with everyone on one side of the river: CDGM|. (We will choose to always write the characters on either side of the river in alphabetic order.) We want to end up with everything on the other side of the river: |CDGM.
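The notes explore this state space by hand below; as a cross-check, the space is small enough to search mechanically. Here is a breadth-first-search sketch (the state encoding and function names are my own; "x" marks a crossing alone, as in the diagrams that follow):

```python
from collections import deque

ITEMS = {"C", "D", "G"}   # cabbage, dog, goose

def unsafe(bank):
    """A bank without the man is unsafe if the goose is left with
    the cabbage or with the dog."""
    return {"G", "C"} <= bank or {"G", "D"} <= bank

def solve():
    # A state is (items on the left bank, man on the left?).  The
    # puzzle starts with everyone on the left and ends with the
    # left bank empty.
    start = (frozenset(ITEMS), True)
    goal = (frozenset(), False)
    prev = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            # Walk back through predecessors to recover what was
            # carried on each crossing ("x" = rowing alone).
            path = []
            while prev[state] is not None:
                state, cargo = prev[state]
                path.append(cargo if cargo else "x")
            return path[::-1]
        left, man_left = state
        here = left if man_left else ITEMS - left
        for cargo in [None] + sorted(here):
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if man_left else new_left.add)(cargo)
            # The bank the man leaves behind must stay safe.
            behind = new_left if man_left else ITEMS - new_left
            if unsafe(behind):
                continue
            nxt = (frozenset(new_left), not man_left)
            if nxt not in prev:
                prev[nxt] = (state, cargo)
                queue.append(nxt)
    return None
```

Any shortest solution takes seven crossings, ferrying the goose across three times, matching the sequence read off the diagram below.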
Starting from CDGM|, we can consider four possibilities:
1. The man rows across the stream with the cabbage: DG|CM.
2. The man rows across the stream with the goose: CD|GM.
3. The man rows across the stream with the dog: CG|DM.
4. The man rows across the stream alone: CDG|M.
We can diagram that like this: The circles (states) are labeled with the positions of the man and his items. The connecting arrows (transitions) are labeled with what the man took with him on his trip (using ‘x’ to indicate that he rowed back alone). Now looking at this, we can see case 1 ends badly. The dog kills the goose. In case 3, the goose eats the cabbage. In case 4, one or both of those two things happens (depending on how fast the goose is). None of these are desirable, so I will mark them as “final”. That leaves us with one non-final state to explore: CD|GM. From here, we have two possibilities:
1. The man rows back with the goose.
2. The man rows back alone.
Case 1 actually takes us back to our starting state. Case 2 gives us a new possibility, CDM|G. From CDM|G we have three possibilities:
1. The man rows across the stream with the cabbage.
2. The man rows across the stream with the dog.
3. The man rows across the stream alone.
The last case takes us to a state we have already seen. The other two are new, and neither leads to immediate disaster. From D|CGM we get three possibilities:
1. The man rows across with the cabbage.
2. The man rows across with the goose.
3. The man rows across alone.
One of these is a previously inspected state. One of these is a final state. One is new. From C|DGM we get three possibilities:
1. The man rows across with the dog.
2. The man rows across with the goose.
3. The man rows across alone.
Again, one of these is a previously inspected state. One of these is a final state. One is new. From DGM|C we have three possibilities:
1. The man rows across with the dog.
2. The man rows across with the goose.
3. The man rows across alone.
Only one of these yields a new possibility.
From CGM|D we have three possibilities:
1. The man rows across with the cabbage.
2. The man rows across with the goose.
3. The man rows across alone.
This does not introduce any new possibilities. From G|CDM, we have three possibilities:
1. The man rows across with the cabbage.
2. The man rows across with the dog.
3. The man rows across alone.
Only one new possibility is found. From that new state, GM|CD, we have two possibilities:
1. The man rows across with the goose.
2. The man rows across alone.
This finally connects us to the |CDGM state that we were looking for. This is also a “final” state, in the sense that we need look no further. We could, for completeness, fill out the chart by recording what would happen if the man kept rowing back and forth. But, since we have him safely on the other side with all of his possessions, let’s let him continue on with his journey. We can read the series of steps necessary to solve the puzzle by reading off the labels of all the transitions that take us from CDGM| to |CDGM: G x D G C x G

3 Finite Automata: Definition
What we have just built is a Finite Automaton (FA), a collection of states in which we make transitions based upon input symbols.

Definition: A Finite Automaton
A finite automaton (FA) is a 5-tuple $(Q, \Sigma, q_0, A, \delta)$ where
- $Q$ is a finite set of states;
- $\Sigma$ is a finite input alphabet;
- $q_0 \in Q$ is the initial state;
- $A \subseteq Q$ is the set of accepting states; and
- $\delta : Q \times \Sigma \rightarrow Q$ is the transition function.
For any element $q$ of $Q$ and any symbol $\sigma \in \Sigma$, we interpret $\delta(q,\sigma)$ as the state to which the FA moves, if it is in state $q$ and receives the input $\sigma$. This is the key definition in this chapter and is worth a look to be sure you understand it. For example, for the FA shown here, we would say that:
$Q = \{ q_0, q_1, q_2, 0, 1, 2 \}$ These are simply labels for states, so we can use any symbol that is convenient.
$\Sigma = \{ 0, 1 \}$ These are the characters that we can supply as input.
$q_0$ is, well, $q_0$, because we chose to use that matching label for the state.
$A = \{ q_1, 0 \}$ The accepting states.
$\delta = \{ ((q_0, 0), q_1), ((q_0, 1), 1), ((q_1, 0), q_2), ((q_1, 1), q_2), ((q_2, 0), q_2), ((q_2, 1), q_2), ((0, 0), 0), ((0, 1), 1), ((1, 0), 2), ((1, 1), 0), ((2, 0), 1), ((2, 1), 2) \}$
Functions are, underneath it all, sets of pairs. The first element in each pair is the input to the function and the second element is the result. Hence when you see $((q_0, 0), q_1)$, it means that "if the input is $(q_0, 0)$, the result is $q_1$." The input is itself a pair because $\delta$ was defined as a function of the form $Q \times \Sigma \rightarrow Q$, so the input has the form $Q \times \Sigma$, the set of all pairs in which the first element is taken from set $Q$ and the second element from set $\Sigma$. Of course, there may be easier ways to visualize $\delta$. In particular, we could do it via a table with the starting state on one axis and the input character on the other:

                Starting State
  Input   q0   q1   q2    0    1    2
    0     q1   q2   q2    0    2    1
    1      1   q2   q2    1    0    2

The table representation is particularly useful because it suggests an efficient implementation. If we numbered our states instead of using arbitrary labels:

                Starting State
  Input    0    1    2    3    4    5
    0      1    2    2    3    5    4
    1      4    2    2    4    3    5

then we could implement this FA as follows:

char c;
int state = 0;
while (cin >> c) {
    int charNumber = c - '0';
    state = delta[state][charNumber];
}

4 Some Examples
Look at this automaton. Answer: This accepts all strings over $\{a,b\}$ that start with an ‘a’.
Look at this automaton. Answer: This accepts all strings over $\{a,b\}$ that end with two ‘a’s.
A little bit trickier. Answer: This accepts all strings over $\{a,b\}$ that end with b and that never contain the substring ‘aa’.
This automaton accepts strings over $\{0,1\}$, so we can think of it as accepting certain binary numbers.
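One way to explore what an automaton like this accepts is to run its transition table mechanically, just like the C++ loop above. Here is the same idea in Python, using the 0/1/2 portion of the earlier example's table with state 0 taken as both the start and the accepting state (an assumption on my part, since the figure is not reproduced in these notes). Try it on the binary representations of small numbers and see which ones are accepted before reading the answer:

```python
# Transition table delta[(state, input)] for the states labelled
# 0, 1, 2 in the earlier example's table.
delta = {
    (0, "0"): 0, (0, "1"): 1,
    (1, "0"): 2, (1, "1"): 0,
    (2, "0"): 1, (2, "1"): 2,
}

def accepts(s):
    """Run the FA on a binary string; accept iff we end in state 0."""
    state = 0
    for ch in s:
        state = delta[(state, ch)]
    return state == 0

# Which of 0..15 (written in binary) does the automaton accept?
accepted = [n for n in range(16) if accepts(format(n, "b"))]
```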
Answer: All numbers that are evenly divisible by 3. Don’t see it? You’ll confirm this for yourself in the upcoming lab exercise.

5 Creating FSAs

Creating an FSA is much like writing a program. That is, it is a creative process with no simple “recipe” that can be followed to always lead you to a correct design. That said, many of the skills you apply to programming will work when creating FSAs as well.

5.1 Testing

You can test your FSAs, either by desk-checking them or by actually executing them in JFLAP. As when testing a program, you will get the best results by choosing a variety of inputs, with attention paid to the various distinct “cases” that the FSA/program must satisfy, and by paying attention to boundary cases. You should rarely be “surprised” to learn that an automaton you submit for grading is actually incorrect.

5.2 Decomposition

Every programmer quickly learns that the key to success is breaking down complicated problems into simpler ones that can be combined to form an overall solution. The same can be said of designing automata. If I asked you, for example, to create an FA for, say, $\{01,101\}^*$, that breaks down into

- the *, a “loop” that has to come back to its starting state, and
- the body of the loop, consisting of a choice between a concatenation of two characters 0, 1, and a concatenation of three characters 1, 0, 1.

So you could start by writing out the straight-line sequences for the two concatenations. Then tie them together at the beginning by merging their starting states, so that they branch into the appropriate sequence depending on whether the first character is a 0 or a 1. That gives us our “choice”. Note that we really are merging those two states: the total number of states is reduced by one. Then, knowing that the * must loop us back to the beginning of its “body”, merge the endpoints back to the beginning. Finally, choose the initial and final states. In this case, q0 is both.
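The finished automaton can be simulated directly with a table-driven loop like the one shown earlier. The sketch below is our own completion of the construction: we add an explicit trap state for inputs that can never be extended to a string in $\{01,101\}^*$, and the state names (Q0, Q1, Q4, Q5, TRAP) are our own labels for “start/accept”, “dangling 0”, “dangling 1”, “dangling 10”, and “dead”, respectively.

```cpp
#include <cassert>
#include <string>

// Our own completion of the {01,101}* automaton sketched in the text.
// Q0 = start/accept, Q1 = dangling "0", Q4 = dangling "1",
// Q5 = dangling "10", TRAP = no continuation can ever be accepted.
enum State { Q0, Q1, Q4, Q5, TRAP };

const State delta[5][2] = {
    //           on '0'  on '1'
    /* Q0   */ { Q1,     Q4   },
    /* Q1   */ { TRAP,   Q0   },  // "00..." is hopeless; "01" completes a token
    /* Q4   */ { Q5,     TRAP },  // "11..." is hopeless; "10" continues toward 101
    /* Q5   */ { TRAP,   Q0   },  // "101" completes a token
    /* TRAP */ { TRAP,   TRAP },
};

bool inStarLanguage(const std::string& s) {
    State state = Q0;
    for (char c : s) {
        state = delta[state][c - '0'];
    }
    return state == Q0;  // q0 is both the initial and the accepting state
}
```

For example, `inStarLanguage("01101")` is true (the string splits as 01 then 101), while `inStarLanguage("11")` is false, since no string of the language begins with two 1s.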
Not every problem can be solved this way, but many can be approached like this, at least in part. Sometimes this does not yield the simplest possible FA for the language (though it tends to produce one where the correctness of the FA is easily judged), but it’s often easier to simplify a known correct FA than to come up with the simplest possible one from scratch. This whole approach will be easier for the NFAs we discuss later than for DFAs, because they allow sub-solutions to be linked together more easily. In fact, we will use this construction approach to prove that every regular expression can be represented as an FA.

5.3 What’s in a State?

Although we think of each FSA, as a whole, as a means of recognizing a particular language, we can actually think of each state as being associated with a language, a set of strings that would take us to that particular state. A good rule of thumb in creating FAs is that every state should have an obvious “meaning”, a set of strings that are “accepted” by that state. For example, in this problem from above, we clearly intend q0 to be associated with the set of strings $\{01,101\}^*$ – that’s what we designed this FSA to recognize. But we can also say that

- q1 represents the strings $\{01, 101\}^* \{0\}$, i.e., strings that have an “extra” zero that might or might not later be shown to be part of an 01 choice.
- q4 represents the strings $\{01, 101\}^* \{1\}$, i.e., strings that have an “extra” one that might or might not later be shown to be part of a 101 choice.
- q5 represents the strings $\{01, 101\}^* \{10\}$, i.e., strings that have an “extra” 10 that might or might not later be shown to be part of a 101 choice.

Try to introduce new states into an FSA only when they have a specific purpose. If you don’t know what set of strings would take you to a state, you should consider carefully whether you want or need that state at all.

6 Closing Discussion

6.1 Variations

This chapter has looked at a very “pure” form of FSA.
There are some common variations that allow us to “do” things with an FA without substantially altering the computational power beyond that of a regular FA. One worth mentioning is the Mealy machine, which adds to each state transition an optional string to be emitted. Here, for example, is a Mealy machine that translates binary numbers into octal (base 8). Some of the more complicated automata that we will examine in later chapters can be thought of as associating some form of I/O device with a Mealy machine “controller”. Thinking of an FA as a controller may make a little more sense once we see that the FA can issue output beyond simply accepting or rejecting a completed string, output that could be signals to some other device.

6.2 Applications

Although FAs are limited in computational power, they have the virtue of being relatively easy to understand. Consequently, they are often used by software developers and non-software system designers as a means of summarizing the behavior of a complex system. Here, for example, is a state diagram summarizing the behavior of processes in a typical operating system. Most operating systems will be running more processes than can be simultaneously accommodated by the number of physical CPUs. Therefore, newly started processes begin in a Ready state and have to wait until the OS scheduler assigns a CPU to them. At that moment, the process starts Running. It stays in this state until either the scheduler decides to take back the CPU (because a “time slice” has expired) or the process initiates an I/O operation that, compared to the CPU speed, may take a substantial amount of time. In the latter case, the process moves into a Blocked state, surrendering the CPU back to the scheduler. When the I/O operation is completed, the process returns to the Ready state until the scheduler decides to give it some more CPU time. The notation used in this diagram is that of a UML state diagram.
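As a sketch of the idea (we cannot reproduce the figure here, so this is not necessarily the exact machine shown), a Mealy-style binary-to-octal translator can keep as its state the partial value of the current three-bit group and attach an output digit to every third transition. We assume the input arrives most-significant bit first and has a length that is a multiple of 3.

```cpp
#include <cassert>
#include <string>

// Mealy-style translator: the state is the pair (bits seen in the
// current group, partial value of that group); an octal digit is
// emitted as output on every third input transition.
// Assumes the bit string's length is a multiple of 3, MSB first.
std::string binaryToOctal(const std::string& bits) {
    int count = 0, value = 0;          // the machine's state
    std::string out;                   // concatenation of emitted outputs
    for (char c : bits) {
        value = value * 2 + (c - '0'); // state transition on the input bit
        if (++count == 3) {            // this transition also emits output
            out += static_cast<char>('0' + value);
            count = 0;
            value = 0;
        }
    }
    return out;
}
```

For example, `binaryToOctal("110101")` returns `"65"`, since 110 is 6 and 101 is 5.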
But the kinship to the formal model of FAs should be pretty obvious. The very fact that the notation for such state diagrams is part of an industry-standard notation should give you a sense of how common it is for developers to use FAs as a basis for reasoning about software. Pushing a bit further on this theme, model checking is an approach to validating complex systems, particularly concurrent systems:

- An FSA “specification” is given for various portions of a complex system.
- Combinations of states that are considered “dangerous” or that would indicate a failure (e.g., a system going into deadlock) are identified.
- Analysis, some algorithmic and some human, of those state diagrams is performed to see if it is possible to reach any of those undesired states.

The process for doing this is not unlike the process of minimizing the states of an FSA from section 2.6.
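Reachability analysis of this kind can be mechanized as a search of the state graph, just as we traced the river-crossing chart by hand at the start of the chapter. The following breadth-first search over the sixteen bank assignments of that puzzle is our own encoding (a set bit means the item is on the far bank); it reports the least number of crossings needed. We assume the usual constraints: the goose may not be left with the dog or with the cabbage unless the man is present.

```cpp
#include <array>
#include <cassert>
#include <queue>

// Bits: 1 = cabbage, 2 = dog, 4 = goose, 8 = man; a set bit means
// "on the far bank".  State 0 is CDGM| and state 15 is |CDGM.
bool safe(int s) {
    for (int bank : {s, 15 ^ s}) {      // far bank, then near bank
        if (bank & 8) continue;         // man present: always safe
        if ((bank & 4) && (bank & (1 | 2))) return false; // goose + dog/cabbage
    }
    return true;
}

// Breadth-first search for the least number of crossings from
// CDGM| (state 0) to |CDGM (state 15).
int minCrossings() {
    std::array<int, 16> dist;
    dist.fill(-1);
    dist[0] = 0;
    std::queue<int> q;
    q.push(0);
    while (!q.empty()) {
        int s = q.front(); q.pop();
        for (int item : {0, 1, 2, 4}) { // man rows alone, or with one item
            // the item must be on the same side of the river as the man
            if (item && ((s & item) != 0) != ((s & 8) != 0)) continue;
            int t = s ^ 8 ^ item;       // man (and item) cross
            if (safe(t) && dist[t] == -1) {
                dist[t] = dist[s] + 1;
                q.push(t);
            }
        }
    }
    return dist[15];
}
```

Running the search confirms the hand-derived answer: the G x D G C x G solution, seven crossings, is the shortest possible.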
I’d like to discuss my theorem that the collection of models $M[c]$ obtained by adding an $M$-generic Cohen real $c$ over a fixed countable transitive model of set theory $M$ is upwardly countably closed, in the sense that every increasing countable chain has an upper bound. I proved this theorem back in 2011, while at the Young Set Theory Workshop in Bonn and continuing at the London summer school on set theory, in a series of conversations with Giorgio Venturi. The argument has recently come up again in various discussions, and so let me give an account of it. We consider the collection of all forcing extensions of a fixed countable transitive model $M$ of ZFC by the forcing to add a Cohen real, models of the form $M[c]$, and consider the question of whether every countable increasing chain of these models has an upper bound. The answer is yes! (Actually, Giorgio wants to undertake forcing constructions by forcing over this collection of models to add a generic upward directed system of models; it follows from this theorem that this forcing is countably closed.) This theorem fits into the theme of my earlier post, Upward closure in the toy multiverse of all countable models of set theory, where similar theorems are proved, but not this one exactly. Theorem. For any countable transitive model $M\models\text{ZFC}$, the collection of all forcing extensions $M[c]$ by adding an $M$-generic Cohen real is upward-countably closed. That is, for any countable tower of such forcing extensions $$M[c_0]\subset M[c_1]\subset\cdots\subset M[c_n]\subset\cdots,$$ we may find an $M$-generic Cohen real $d$ such that $M[c_n]\subset M[d]$ for every natural number $n$. Proof. $\newcommand\Add{\text{Add}}$Suppose that we have such a tower of forcing extensions $M[c_0]\subset M[c_1]\subset\cdots$, and so on. Note that if $M[b]\subset M[c]$ for $M$-generic Cohen reals $b$ and $c$, then $M[c]$ is a forcing extension of $M[b]$ by a quotient of the Cohen-real forcing. 
But since the Cohen forcing itself has a countable dense set, it follows that all such quotients also have a countable dense set, and so $M[c]$ is actually $M[b][b_1]$ for some $M[b]$-generic Cohen real $b_1$. Thus, we may view the tower as having the form: $$M[b_0]\subset M[b_0\times b_1]\subset\cdots\subset M[b_0\times b_1\times\cdots\times b_n]\subset\cdots,$$ where now it follows that any finite collection of the reals $b_i$ are mutually $M$-generic. Of course, we cannot expect in general that the real $\langle b_n\mid n<\omega\rangle$ is $M$-generic for $\Add(\omega,\omega)$, since this real may be very badly behaved. For example, the sequence of first-bits of the $b_n$’s may code a very naughty real $z$, which cannot be added by forcing over $M$ at all. So in general, we cannot allow that this sequence is added to the limit model $M[d]$. (See further discussion in my post Upward closure in the toy multiverse of all countable models of set theory.) We shall instead undertake a construction by making finitely many changes to each real $b_n$, resulting in a real $d_n$, in such a way that the resulting combined real $d=\oplus_n d_n$ is $M$-generic for the forcing to add $\omega$-many Cohen reals, which is of course isomorphic to adding just one. To do this, let’s get a little more clear with our notation. We regard each $b_n$ as an element of Cantor space $2^\omega$, that is, an infinite binary sequence, and the corresponding filter associated with this real is the collection of finite initial segments of $b_n$, which will be an $M$-generic filter through the partial order of finite binary sequences $2^{<\omega}$, which is one of the standard isomorphic copies of Cohen forcing. We will think of $d$ as a binary function on the plane $d:\omega\times\omega\to 2$, where the $n^{th}$ slice $d_n$ is the corresponding function $\omega\to 2$ obtained by fixing the first coordinate to be $n$. 
Now, we enumerate the countably many open dense subsets in $M$ of the forcing to add a Cohen real $\omega\times\omega\to 2$ as $D_0$, $D_1$, and so on. There are only countably many such dense sets, because $M$ is countable. Now, we construct $d$ in stages. Before stage $n$, we will have completely specified $d_k$ for $k<n$, and we also may be committed to a finite condition $p_{n-1}$ in the forcing to add $\omega$ many Cohen reals. We consider the dense set $D_n$. We may factor $\Add(\omega,\omega)$ as $\Add(\omega,n)\times\Add(\omega,[n,\omega))$. Since $d_0\times\cdots\times d_{n-1}$ is actually $M$-generic (since these are finite modifications of the corresponding $b_k$’s, which are mutually $M$-generic), it follows that there is some finite extension of our condition $p_{n-1}$ to a condition $p_n\in D_n$ which is compatible with $d_0\times\cdots\times d_{n-1}$. Let $d_n$ be the same as $b_n$, except finitely modified to be compatible with $p_n$. In this way, our final real $\oplus_n d_n$ will contain all the conditions $p_n$, and therefore be $M$-generic for $\Add(\omega,\omega)$, yet every $b_n$ will differ only finitely from $d_n$ and hence be an element of $M[d]$. So we have $M[b_0]\cdots[b_n]\subset M[d]$, and we have found our upper bound. QED

Notice that the real $d$ we construct is not only $M$-generic, but also $M[c_n]$-generic for every $n$. My related post, Upward closure in the toy multiverse of all countable models of set theory, which is based on material in my paper Set-theoretic geology, discusses some similar results.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation. Let $(V, b)$ be an $n$-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group $O(V, b)$ is a composition of at most $n$ reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, lands you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$. Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$. Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor, (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $(\nabla_X s)(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the $0$-level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a $0$-level exterior derivative of a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps champ.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say
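The order-6 case (and the others) can also be checked mechanically: in a finite group, closing a generating set under composition alone yields the generated subgroup, since inverses arise as powers. The permutation encoding below (arrays over $\{0,1,2,3\}$, zero-indexed) and the function names are our own sketch, not anyone's library.

```cpp
#include <array>
#include <cassert>
#include <initializer_list>
#include <set>

using Perm = std::array<int, 4>;  // a permutation of {0,1,2,3}

// Composition (a ∘ b)(i) = a[b[i]].
Perm compose(const Perm& a, const Perm& b) {
    Perm c{};
    for (int i = 0; i < 4; ++i) c[i] = a[b[i]];
    return c;
}

// Order of the subgroup of S4 generated by the given permutations,
// computed by naive closure under right-multiplication by generators.
// In a finite group this closure is the whole generated subgroup.
int subgroupSize(std::initializer_list<Perm> gens) {
    std::set<Perm> g = {{0, 1, 2, 3}};  // start from the identity
    bool grew = true;
    while (grew) {
        grew = false;
        std::set<Perm> next = g;
        for (const Perm& a : g)
            for (const Perm& b : gens)
                if (next.insert(compose(a, b)).second) grew = true;
        g = next;
    }
    return static_cast<int>(g.size());
}
```

With $(1,2)$ encoded as `{1,0,2,3}` and $(1,2,3)$ as `{1,2,0,3}`, `subgroupSize({{1,0,2,3}, {1,2,0,3}})` returns 6, confirming the $d=6$ case above.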