Defining parameters

Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 4000.bx (of order \(40\) and degree \(16\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 800 \)
Character field: \(\Q(\zeta_{40})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).

                   Total  New  Old
Modular forms        160   96   64
Cusp forms             0    0    0
Eisenstein series    160   96   64

The following table gives the dimensions of subspaces with specified projective image type.

           \(D_n\)  \(A_4\)  \(S_4\)  \(A_5\)
Dimension      0       0       0       0
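The dimension data above satisfies the obvious consistency checks (Total = New + Old in each row, and modular forms decompose into cusp forms plus Eisenstein series); a quick sketch with the table's numbers hard-coded, just to make the arithmetic explicit:

```python
# Consistency checks on the data above (numbers copied from the tables).

# Level factorization: N = 4000 = 2^5 * 5^3
assert 2**5 * 5**3 == 4000

# Dimension table rows: (Total, New, Old)
dims = {
    "Modular forms":     (160, 96, 64),
    "Cusp forms":        (0, 0, 0),
    "Eisenstein series": (160, 96, 64),
}
for name, (total, new, old) in dims.items():
    assert total == new + old, name   # Total = New + Old

# Modular forms = cusp forms + Eisenstein series, dimension-wise
assert dims["Modular forms"][0] == dims["Cusp forms"][0] + dims["Eisenstein series"][0]
print("consistent")
```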
Optical Properties of Silicon¶ Introduction¶ The purpose of this tutorial is to show how ATK can be used to compute accurate electronic-structure, optical, and dielectric properties of semiconductors from DFT combined with the meta-GGA functional of Tran and Blaha [1] (TB09). TB09 is a semi-empirical functional fitted to give a good description of the band gaps of non-metals. The results obtained with the method are often comparable to those of very advanced many-body calculations, but at a computational expense comparable to LDA, i.e. several orders of magnitude lower. The TB09 meta-GGA functional is thus a very practical tool for obtaining a good description of the electronic structure of insulators and semiconductors. It is important to note that the TB09 functional does not provide accurate total energies [1]; it should therefore only be used for calculating the electronic structure of materials, while the GGA-PBE functional should be used for computing total energies and atomic geometries. It is assumed that you are familiar with the general workflow of QuantumATK, as described in the Basic QuantumATK Tutorial. This tutorial uses silicon as an example. As for most semiconductors, GGA and LDA both severely underestimate the Si band gap (predicting between 0.5 and 0.6 eV), while the experimental band gap of 1.17 eV is reproduced almost exactly by TB09. The experimental dielectric constant is also reproduced by the optical properties module in ATK to within a few percent. Electronic structure and optical properties of silicon¶ Setting up the calculation¶ In the Builder, open the database search tool. Type “silicon” in the search field, and select the silicon standard phase in the list of matches. Information about the lattice, including its symmetries (e.g. that the selected crystal is face-centered cubic), is shown in the lower panel.
Now send the structure to the Script Generator by clicking the “Send To” icon in the lower right-hand corner of the window, and select Script Generator (the default choice, highlighted in bold) from the pop-up menu. In the Script Generator, add a New Calculator block plus the analysis blocks used below (among them DensityOfStates and OpticalSpectrum), and change the output filename to si.nc. The next step is to adjust the parameters of each block by double-clicking it: select the ATK-DFT calculator (selected by default), set the k-points to (4,4,4), set the exchange-correlation functional to MGGA, and, under “Basis set/exchange correlation”, set the Pseudopotential to HGH[Z=4] LDA.PZ and select the Tier 3 basis set for Si. Close the dialogue by clicking OK in the lower right-hand corner. Open the DensityOfStates block, and select 15 x 15 x 15 k-points. Open the OpticalSpectrum block, select 15 x 15 x 15 k-points, and use 10 bands below and 20 bands above the Fermi level (this controls how many bands are included in the calculation of the optical matrix elements). Note For this calculation you will use the Hartwigsen–Goedecker–Hutter (HGH) pseudopotentials [2]. The calculation of optical properties requires a good description of virtual states far above the Fermi level. The “Tier 3” basis set for Si consists of optimized 3s, 3p (2 orbitals), 3d, and 4s orbitals. Going up to “Tier 4” would add another 3s orbital, and so on, but this appears to have no significant influence on the band gap (it just takes longer). With a smaller basis set, however, the band gap comes out incorrectly even with TB09-MGGA. Save the script from the Editor for future reference. Running and analyzing the calculation¶ Transfer the script to the Job Manager and start the calculation. The job will finish after a few minutes. The file si.nc should now be visible under Project Files in the main QuantumATK window. On the LabFloor, select Group by Item Type. DOS of silicon¶ Select the DensityOfStates (gID002) item, then click the 2D Plot... button on the plugin panel.
From the density of states it is possible to determine the band gap of silicon. To read off the band gap, you may zoom in on the plot or export the DOS data to a file (or simply select Text Representation... instead of 2D Plot...). The band edges are located around -0.59 eV and 0.57 eV, resulting in a band gap of 1.16 eV. This is in excellent agreement with the experimental band gap of 1.17 eV (at 0 Kelvin), in contrast to the LDA band gap of 0.55 eV. Optical spectrum¶ Next select the OpticalSpectrum (gID003) item, then click the 2D Plot... button on the plugin panel. The plot is shown below. By zooming in on the figure (right-click on the plot), you can determine the static dielectric constant, \(\mathrm{Re}[\epsilon(\omega=0)]=10.9\), in qualitative agreement with the experimental value of 11.9 (with a somewhat larger basis set we can get values around 12.2, but also note that we did not optimize the k-point sampling). The absorption coefficient and refractive index are related to the dielectric constant; see for instance the ATK Reference Manual. The script below calculates the absorption coefficient and refractive index from the dielectric constant and plots them as a function of the wavelength.
# Load the optical spectrum
spectrum = nlread('si.nc', OpticalSpectrum)[-1]

# Get the energy range
energies = spectrum.energies()

# Get the real and imaginary parts of the e_xx component of the dielectric tensor
d_r = spectrum.evaluateDielectricConstant()[0, 0, :]
d_i = spectrum.evaluateImaginaryDielectricConstant()[0, 0, :]

# Calculate the wavelength
l = (speed_of_light*planck_constant/energies).inUnitsOf(nanoMeter)

# Calculate the real and imaginary parts of the refractive index
n = numpy.sqrt(0.5*(numpy.sqrt(d_r**2 + d_i**2) + d_r))
k = numpy.sqrt(0.5*(numpy.sqrt(d_r**2 + d_i**2) - d_r))

# Calculate the absorption coefficient
alpha = (2*energies/hbar/speed_of_light*k).inUnitsOf(nanoMeter**-1)

# Plot the data
import pylab
pylab.figure()
pylab.subplots_adjust(hspace=0.0)
ax = pylab.subplot(211)
ax.plot(l, n, 'b', label='refractive index')
ax.axis([180, 1000, 2.2, 6.4])
ax.set_ylabel(r"$n$", size=16)
ax.tick_params(axis='x', labelbottom=False, labeltop=True)
ax = pylab.subplot(212)
ax.plot(l, alpha, 'r')
ax.axis([180, 1000, 0, 0.24])
ax.set_xlabel(r"$\lambda$ (nm)", size=16)
ax.set_ylabel(r"$\alpha$ (1/nm)", size=16)
pylab.show()

The script will generate the figure below.
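The n and k expressions in the script are the standard inversion of \(\epsilon = (n + ik)^2\). This can be checked in isolation with plain numpy, without an ATK session; the sample dielectric values below are made up for illustration only:

```python
import numpy as np

# Invert eps = (n + i*k)^2 for n and k, same formulas as in the script above.
def n_and_k(d_r, d_i):
    mod = np.sqrt(d_r**2 + d_i**2)     # |eps|
    n = np.sqrt(0.5*(mod + d_r))       # refractive index
    k = np.sqrt(0.5*(mod - d_r))       # extinction coefficient
    return n, k

# Illustrative (made-up) real and imaginary dielectric values
d_r = np.array([11.9, 15.0, -10.0])
d_i = np.array([0.0, 20.0, 8.0])
n, k = n_and_k(d_r, d_i)

# Consistency: (n + i*k)^2 must reproduce eps component-wise
assert np.allclose(n**2 - k**2, d_r)
assert np.allclose(2*n*k, d_i)
print(np.round(n, 3))
```

For a transparent material (d_i = 0, d_r > 0) this reduces to n = sqrt(d_r) and k = 0, which is why Re[ε(0)] ≈ 11.9 corresponds to a static refractive index of about 3.45 for silicon.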
Converting lattices: Rhombohedral to hexagonal and back¶ Version: 2015.1 As we know, there are 14 Bravais lattices in \(\mathbf{R}^3\) space. These determine the translational symmetry properties of a crystal unit cell, and thereby also the symmetry properties in reciprocal space (high-symmetry k-points, etc.). Additionally, the space group determines the rotational and other internal symmetry operations of the basis (the atomic coordinates). For the complete symmetry specification we need both, although the lattice symmetry is usually determined by the space group, i.e. each space group is associated with one and only one Bravais lattice. However, the trigonal crystal system is special, because one can often choose between a rhombohedral and a hexagonal representation. Trigonal structures found in online databases are often hexagonal, but for computational purposes the smaller rhombohedral cell will usually be more efficient. In that case, conversion between the two representations is advisable. Note hP (hexagonal) and hR (rhombohedral) Bravais lattices The space groups are classified by crystal systems, and any standard book on solid state physics will tell you that there are 7 crystal systems and 6 crystal families. Here we focus on the special situation in the hexagonal family, which is the only one that can be subdivided into two crystal systems: the hexagonal and the trigonal. The hexagonal system comprises 27 space groups. All of these have hexagonal Bravais lattices, labeled hP. The trigonal system is the tricky one, because its 25 space groups (143-167) belong either to the hexagonal (hP, 18 space groups) or the rhombohedral (hR, 7 space groups) Bravais lattice. Now, to make things a bit more confusing, all trigonal crystals with rhombohedral lattices (space groups 146, 148, 155, 160, 161, 166, and 167) can be represented as an equivalent hexagonal system; there is a choice of using a hexagonal or a rhombohedral representation.
Further information on this subject is available in the section Crystal classifications. Conversion between hP and hR representations¶ The hexagonal setting is in fact a supercell with three irreducible rhombohedral units. In the modern so-called obverse representation, the origins of these three subsystems are placed at the \(\boldsymbol{d}_1,\boldsymbol{d}_2,\boldsymbol{d}_3\) fractional coordinates of the hexagonal lattice. Tip \(\{\boldsymbol{A}_H,\boldsymbol{B}_H,\boldsymbol{C}_H\}_{hP}\): hexagonal (hP) lattice vectors. \(\{\boldsymbol{A}_R,\boldsymbol{B}_R,\boldsymbol{C}_R\}_{hR}\): rhombohedral (hR) lattice vectors. Taking into account the symmetries of the crystal structures, we can find the following relations among the above vectors. The lattice vectors are connected via the following transformation (direct and inverse): Choosing now Cartesian coordinates, one obtains the corresponding component relations. Warning The orientation of the last cell with respect to the Cartesian axes is different from the standard QuantumATK choice for the rhombohedral cell! For later use, let us compute the lattice parameters of the rhombohedral cell, i.e. the lengths of the lattice vectors (\(a_H,c_H,a_R\)) and the rhombohedral angle \(\alpha_R\). The three angles of the rhombohedral cell are all the same, as expected for a rhombohedral crystal, and after some algebra we obtain the expressions used in Step 4 below. Now, what to do in practice if you have received a rhombohedral crystal in the hexagonal supercell representation, but want to convert it to the smaller rhombohedral system? We give the recipe in the next section. Converting the hP supercell to the hR primitive cell¶ Assume you have found a structure in some online database: Hilgardite, KS\(_2\)Al\(_3\)O\(_{14}\)H\(_6\). Download the CIF file and save it: AMS_DATA.cif.
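The transformation equations referred to in this section did not survive extraction. As a sketch, the standard obverse-setting relations between the two sets of lattice vectors (the conventional choice, consistent with the lattice-parameter numbers computed in Step 4 below) read:

```latex
% Standard obverse setting; the three rhombohedral sub-lattice origins sit at
% (0,0,0), (2/3,1/3,1/3), (1/3,2/3,2/3) in hexagonal fractional coordinates.
\begin{aligned}
\boldsymbol{A}_R &= \tfrac{1}{3}\bigl( 2\boldsymbol{A}_H + \boldsymbol{B}_H + \boldsymbol{C}_H \bigr), \\
\boldsymbol{B}_R &= \tfrac{1}{3}\bigl( -\boldsymbol{A}_H + \boldsymbol{B}_H + \boldsymbol{C}_H \bigr), \\
\boldsymbol{C}_R &= \tfrac{1}{3}\bigl( -\boldsymbol{A}_H - 2\boldsymbol{B}_H + \boldsymbol{C}_H \bigr),
\end{aligned}
\qquad
\begin{aligned}
\boldsymbol{A}_H &= \boldsymbol{A}_R - \boldsymbol{B}_R, \\
\boldsymbol{B}_H &= \boldsymbol{B}_R - \boldsymbol{C}_R, \\
\boldsymbol{C}_H &= \boldsymbol{A}_R + \boldsymbol{B}_R + \boldsymbol{C}_R.
\end{aligned}
```

With hexagonal lattice constants \(a_H\) (in-plane, 120° apart) and \(c_H\) (perpendicular), these relations give \(a_R = \tfrac{1}{3}\sqrt{3a_H^2 + c_H^2}\), matching the Step 4 numbers.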
Step 2: You can read off the lattice parameters via the Lattice Parameters widget (or read them directly from the CIF file):\[\begin{split}& a_H = 6.96\textrm{ Angstroms} \\ & c_H = 17.35\textrm{ Angstroms}\end{split}\] Step 3: To change the lattice to rhombohedral (hR), start by changing the “Lattice type” to “Unit cell”, and select “Keep Cartesian coordinates constant when changing the lattice”. The rhombohedral primitive vectors are given by the relations from above; inserting the hexagonal lattice constants, you obtain the numerical vectors. Insert these numbers, component by component, in the Primitive Vectors table. (At this point you will see that the K atoms sit close to the corners of the unit cell, indicating that we have indeed made a reasonable transformation.) Step 4: As mentioned above, the orientation of this new cell with respect to the Cartesian axes is different from the standard QuantumATK choice for the rhombohedral cell. The next step is therefore to reorient the cell to fit the standard cell. To do this, switch to “Keep fractional coordinates constant when changing the lattice”. This ensures we do not distort the geometry while reorienting the cell. Change the “Lattice type” to “Rhombohedral”. This operation preserves the lengths of the primitive vectors, but aligns them into the standard orientation (and also uses the Rhombohedral lattice class, so you have access to the relevant symmetry points for band structure calculations, etc.). As a verification, you can compute the lattice constants by hand:\[\begin{split}& \beta = \sqrt{3 + (c_H/a_H)^2} = 3.03548 \\ & a_R = a_H\beta/3 = 7.0423\textrm{ Angstroms}\\ & \alpha_R = 2\arcsin\left(\frac{3}{2\beta}\right) = 59.228^\circ\end{split}\] which is close to the values shown in the Lattice Parameters widget (some rounding error will occur due to the number of decimals used for the lattice vector components). Step 5: There are still too many atoms (three times too many).
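The Step 4 verification is easy to script; a minimal check of the quoted numbers in plain Python, independent of QuantumATK:

```python
import math

# Hexagonal lattice constants read from the CIF file (Angstroms)
a_H, c_H = 6.96, 17.35

# Helper quantity and rhombohedral lattice parameters, as in Step 4
beta = math.sqrt(3.0 + (c_H/a_H)**2)
a_R = a_H*beta/3.0                                    # lattice constant
alpha_R = 2*math.degrees(math.asin(3.0/(2.0*beta)))   # rhombohedral angle

# Compare with the values quoted in the text
assert abs(beta - 3.03548) < 1e-4
assert abs(a_R - 7.0423) < 1e-3
assert abs(alpha_R - 59.228) < 1e-2
print(round(a_R, 4), round(alpha_R, 3))
```

Note that the equivalent form \(a_R = \tfrac{1}{3}\sqrt{3a_H^2 + c_H^2}\) gives the same number, since \(\beta = \sqrt{3 + (c_H/a_H)^2}\).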
These extra atoms are, however, all sitting in symmetry-equivalent positions of the rhombohedral lattice, and thus by wrapping them into the cell, they will fall on top of the minimal basis. Thus, ensure all directions A, B, C are ticked, and click Apply. Step 6: Now it may appear as if the superfluous atoms were magically removed. However, they are still there - there are now atoms on top of each other. Such a configuration is not valid for calculations, so the final step is to remove the equivalent atoms. For this, click the buttons Select and Delete. Inverse case You can of course also convert the other way, i.e. to the hexagonal supercell from the rhombohedral minimal cell. This is simpler. Looking above, we see that the hP vectors are obtained from the hR vectors by the action of the matrix \(\boldsymbol{M}^{-1}\) as defined above. This matrix can be applied directly in ATK by entering this transformation. You now have a hexagonal cell; it's just not oriented in the usual way, but this is easily fixed just like before: choose to keep fractional coordinates constant, then change the “Lattice type” to “Hexagonal”. The structure is now back to the originally imported file. Crystal classifications¶ References¶ [1] ITA: International Tables for Crystallography, Volume A: Space-Group Symmetry, 5th, revised edition, Springer, 2005.
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m\ast(E)$, then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or all) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question, really.
When I look at the comments in Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative - why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$. Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ equals the signed sum of the values of $\omega$ on the boundary of that Lie square. An infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
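The invariant formula can be checked concretely for $k = 1$; a minimal sympy sketch on $\Bbb R^2$ with a made-up test form $\omega = x\,dy$ (the vector fields and form here are my own choices for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A vector field is a pair of components (V^x, V^y); it acts on a function
# as a directional derivative.
def apply_vf(V, f):
    return V[0]*sp.diff(f, x) + V[1]*sp.diff(f, y)

def bracket(X, Y):
    # Componentwise Lie bracket: [X, Y]^i = X(Y^i) - Y(X^i)
    return (apply_vf(X, Y[0]) - apply_vf(Y, X[0]),
            apply_vf(X, Y[1]) - apply_vf(Y, X[1]))

# Test 1-form: omega = x dy, i.e. omega(V) = x * V^y
def omega(V):
    return x*V[1]

X = (sp.Integer(1), sp.Integer(0))   # d/dx
Y = (sp.Integer(0), x)               # x d/dy

# Invariant formula: d(omega)(X, Y) = X omega(Y) - Y omega(X) - omega([X, Y])
dw = apply_vf(X, omega(Y)) - apply_vf(Y, omega(X)) - omega(bracket(X, Y))

# Direct computation: d(x dy) = dx ^ dy, and
# (dx ^ dy)(X, Y) = X^x Y^y - Y^x X^y = x for these fields
assert sp.simplify(dw - x) == 0
print(dw)
```

The $-\omega([X,Y])$ term matters here: $[\partial_x, x\partial_y] = \partial_y$ is nonzero, and dropping it would give $2x$ instead of the correct $x$.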
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof, and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-0 exterior derivative in a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite-dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps, champ.
Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method. I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and $z$ axes. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
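The brute-force claim is easy to machine-check; a short sketch (plain Python, with my own choice of generators for each order) that computes the subgroup generated by each set and confirms a subgroup of every order dividing 24:

```python
from itertools import product

# Permutations of {0,1,2,3} as tuples: p[i] is the image of i.
def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def generated(gens):
    """Closure of the generators under composition (in a finite group,
    closure under products already yields the generated subgroup)."""
    group = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(p, q) for p, q in product(group, repeat=2)} - group
        if not new:
            return group
        group |= new

t   = (1, 0, 2, 3)   # transposition (0 1)
c3  = (1, 2, 0, 3)   # 3-cycle (0 1 2)
c4  = (1, 2, 3, 0)   # 4-cycle (0 1 2 3)
c3b = (0, 2, 3, 1)   # 3-cycle (1 2 3)
s   = (0, 3, 2, 1)   # transposition (1 3)

examples = {
    1: [], 2: [t], 3: [c3], 4: [c4],
    6: [t, c3],      # an S_3 on {0,1,2}
    8: [c4, s],      # a dihedral 2-Sylow
    12: [c3, c3b],   # A_4, generated by two 3-cycles
    24: [t, c4],     # all of S_4
}
for order, gens in examples.items():
    assert len(generated(gens)) == order, order
print("subgroup of every order dividing 24 found")
```

This is exactly the brute-force argument above, just automated; the order-6 pair mirrors the $(1,2)$, $(1,2,3)$ choice in the text (shifted to 0-based labels).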
Where can I find a good introductory text for matrix string theory? Most textbooks don't cover it, or only cover it very superficially. What is the basic idea behind matrix string theory? How can matrices be equivalent to strings? Matrix string theory may be viewed as just a variation of BFSS Matrix Theory, although arguably an important one, and the original papers serve as full introductions at the same time. Some of the few hundred follow-ups deal with more technical issues. The last paper in the list above, which is the newest one, should be the most optimized one. To say the least, it contains the most detailed treatment of the interactions. One may enumerate a couple of reviews of BFSS Matrix Theory. Some of them dedicate some time to matrix string theory, some of them don't. A derivation of BFSS Matrix Theory was given by Seiberg: M-theory in 11 dimensions may be compactified on a nearly light-like (slightly space-like) circle - which is still consistent. $X^-$ becomes a periodic variable, $X^-\approx X^-+2\pi R$.
(This light-cone treatment was automatically used in my paper above but it was Lenny Susskind who took credit for it months later - much ado about nothing. The original BFSS paper was using the "infinite momentum frame".) In the lightlike limit, a Lorentz boost may map the compactification to a compactification of M-theory on a very short spatial circle in 11D Planck units (because the proper length of the nearly light-like circle was tiny) - which is type IIA string theory. Units of momenta along the compact light-like direction become D0-branes. The kinematic regime guarantees that these D0-branes are non-relativistic. They're well-described by the non-relativistic supersymmetric quantum mechanics - the matrix model - which is the dimensional reduction of the 10D supersymmetric Yang-Mills theory to 0+1 dimensions. The gauge group is $U(N)$. It has 16 non-trivial real supercharges. So one can show that all of physics of M-theory, if studied in the light-cone gauge, is equivalent to an ordinary non-gravitational matrix model - a quantum mechanical model with matrix degrees of freedom. The eigenvalues of the $X^i$ matrices may be viewed as the positions of the gravitons (or their superpartners) in 11 dimensions; threshold (zero-binding-energy) bound states of several such eigenvalues (which can be proved to exist, a remarkable property of $SU(N)$ supersymmetric quantum mechanics) are gravitons that carry a higher number of units of the quantized light-like (longitudinal) momentum. All interactions are encoded in the off-diagonal elements of the matrices which are classically zero but whose virtual quantum effects make the eigenvalues interact so that the resulting picture is indistinguishable from 11D supergravity at low energies; much like AdS/CFT, it is an equivalence of a gravitational theory and a non-gravitational one (in some sense, the compact light-like direction $X^-$ of the matrix model is the holographic direction).
The model contains black holes and all other expected objects, too: extended branes may be added. The identical nature of gravitons and gravitinos - with the right Bose-Einstein and Fermi-Dirac statistics - appears because the permutation group is embedded into the $U(N)$ gauge group of the quantum mechanical model, and all physical states must therefore be invariant under this $U(N)$ i.e. also $S_N$. The compact M2-branes (membranes) appear most directly because the whole BFSS matrix model may be viewed as a discretization of the M2-brane world volume theory in M-theory - assuming that the world volume coordinates generate a non-commutative geometry. This equivalence may be derived in a straightforward way, especially for the toroidal and spherical topology of the M2-branes. M5-branes are harder to see but they must be there, too. The BFSS Matrix Theory above gave the first complete definition of M-theory in 11 dimensions (the whole superselection sector of the Hilbert space) that was valid at all energies. It's a light-cone-gauge description where sectors with different values of $p^+ = N/R$ are separated and separately described by the $U(N)$ quantum mechanical models. I forgot to say - to really decompactify the $X^-$ coordinate, one needs to send its radius $R$ to infinity. Because $p^+=N/R$ is fixed (physical momentum), $N$ has to be sent to infinity, too. The infinite-space physics is always obtained as the large $N$ limit of calculations in $U(N)$ matrix models. Matrix string theory One may apply the same derivation to find the matrix model of other superselection sectors besides the 11D vacuum of M-theory, too. It includes some (simple) compactifications; the right matrix model isn't known for all compactifications. In particular, matrix models for type IIA string theory and heterotic $E_8\times E_8$ string theory have a very simple form. Instead of a quantum mechanical model i.e.
0+1-dimensional field theory arising from the D0-branes, one ends up with a 1+1-dimensional supersymmetric gauge theory originating from D1-branes of type IIB (an extra T-duality is added to the derivation), compactified on a cylinder, the so-called matrix string theory (although the historically more correct name is "screwing string theory"). In matrix string theory, again, the eigenvalues of the $U(N)$ matrices $X^i$ are interpreted as positions of points on strings in the transverse 8-dimensional space (the two light-like directions are treated separately in light-cone gauge: one of them, $X^+$, is the light-like time and the other, $X^-$, is compactified). Those eigenvalues $X^i_{nn}(\sigma)$ still depend on $\sigma$, the spatial coordinate of the cylinder on which the gauge theory is defined. However, one may obtain strings of an arbitrary length by applying permutations on the eigenvalues: the length determines the light-like longitudinal momentum $p^+=N/R$ which is quantized because $X^-$ is compactified. All these permutations are allowed because $U(N)$ is gauged as a symmetry in the matrix model. Consequently, perturbative type IIA and HE string theory with arbitrary numbers of strings are defined by an orbifold conformal field theory - a single string propagating on the orbifold $R^{8N}/S_N$, if you wish (with the extra fermionic degrees of freedom, too). The permutations now guarantee not only the indistinguishability of strings in the same vibration states but also the existence of strings with higher values of $p^+$ - it looks like your configuration II on the world volume if you wish (but the path in the spacetime is generic) - as well as the validity of the $L_0=\tilde L_0$ condition in the continuum limit, among other things. Interactions work as expected, too. The perturbative string theories always emerge in the light-cone gauge Green-Schwarz description. 
In the heterotic case, the $E_8$ groups arise from the fermionic representation of the $E_8$ current algebra: those extra fermions transform in the fundamental representation of the gauge group, sixteen of them per single Hořava-Witten boundary, i.e. per single $E_8$. The gauge group has to be changed to $O(N)$, and some degrees of freedom (originally Hermitian matrices) become symmetric real tensors of $O(N)$ while others are antisymmetric; see the original paper and its followups. The main advantage of matrix string theory is that while it may be explicitly shown to agree with type IIA or HE string theory at the weak coupling, it provides one with the exact non-perturbative description at any value of the string coupling. In particular, one may see that when the coupling is sent to infinity, matrix string theory reduces to the original BFSS matrix model for M-theory in large 11 dimensions (with an $E_8$ domain wall, in the heterotic case). Similar matrix models exist for type IIB in ten dimensions, too: one needs the maximally supersymmetric $2+1$-dimensional superconformal field theory which became relevant for the BLG construction (which later transmuted to the ABJM membrane minirevolution). The methods of matrix models become more complicated for backgrounds with additional compact dimensions - by compactifying spacetime dimensions (dimensional reduction), one needs to add dimensions to the matrix model ("dimensional oxidation") - and no matrix models are known if more than 5 transverse spacetime coordinates are compactified (which is why we can't define matrix models for phenomenologically interesting compactifications, at least as of 2011). By the way, a long list of introductory literature about all kinds of string-theoretical topics, most recently updated in 2004, is also available.
Discrete & Continuous Dynamical Systems - B, January 2008, Volume 9, Issue 1 (ISSN: 1531-3492, eISSN: 1553-524X) Abstract: We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances. Abstract: We consider focusing nonlinear Schrödinger equations (NLS), in the $L^2$-critical and supercritical cases. We present a systematic numerical investigation of the dependence of the blow-up time on properties of the data or on the (parameters of the) equation in three cases: dependence on the strength of the nonlinearity in the equation when the initial data is fixed; dependence on the strength of a damping term in the equation when the initial data is fixed; and dependence upon the strength of a quadratic oscillation in the initial data when the equation and the initial profile are fixed. For some cases, analytic results are available and presented. In most situations our numerical counterexamples show that monotonicity in the evolution of the blow-up time does not occur. In addition they show that in certain regimes the blow-up time is very sensitive to the different parameters that we modulate. Our numerical solutions are very reliable since not only we test independence on the precise setting of the numerical problem (size of the periodic domain, discretization etc.) but we compare the same simulations with two different methods in two independent codes: a spectral time splitting code and a relaxation method, with results identical at the order of precision. Abstract: Aging is an abundant property of materials, populations, and networks.
We consider some classes of cellular automata (Deterministic Walks in Random Environments) where the process of aging is described by a time dependent function, called a rigidity of the environment. Asymptotic laws for the dynamics of perturbations propagating in such environments with aging are obtained. Abstract: Stochastic differential equations with Poisson driven jumps of random magnitude are popular as models in mathematical finance. Strong, or pathwise, simulation of these models is required in various settings and long time stability is desirable to control error growth. Here, we examine strong convergence and mean-square stability of a class of implicit numerical methods, proving both positive and negative results. The analysis is backed up with numerical experiments. Abstract: In this paper, we consider a nonlocal parabolic initial value problem that models a single species which is diffusing, aggregating, reproducing and competing for space and resources. We establish a comparison principle and construct monotone sequences to show the existence and uniqueness of the solution to the problem. We also analyze the long-time behavior of the solution. Abstract: The dynamics of the transformation $F: (x,y)\rightarrow (x(4-x-y),xy)$ defined on the plane triangle $\Delta$ of vertices $(0,0)$, $(0,4)$ and $(4,0)$ plays an important role in the behaviour of the Lotka--Volterra map. In 1993, A. N. Sharkovskiĭ (Proc. Oberwolfach 20/1993) stated some problems on it; in particular a question about the transitivity of $F$ was posed. The main aim of this paper is to prove that for every non-empty open set $\mathcal{U} \subset \Delta$ there is an integer $n_{0}$ such that for each $n>n_{0}$ it is $F^{n}(\mathcal{U}) \supseteq \Delta \setminus P_{\varepsilon}$, where $P_{\varepsilon} = \{ (x,y) \in \Delta : y<\beta \}$, with $(\alpha,\beta)=F(t,\varepsilon)$ for some $t \in[0,2]$, and $\varepsilon \rightarrow 0$ as $n \rightarrow \infty$.
Consequently, we show that the map $F$ is transitive, it is not topologically exact and it is almost topologically exact. Additionally, we prove that the union of all preimages of the point $(1,2)$ is a dense subset of $\Delta$. Abstract: We study the coupled map lattice model of tree dispersion. Under quite general conditions on the nonlinearity of the local growth function and the dispersion (coupling) function, we show that when the maximal dispersal distance is finite and the spatial redistribution pattern remains unchanged in time, the moving front will always converge in the strongest sense to an asymptotic state: a traveling wave with finite length of the wavefront. We also show that when the climate becomes more favorable to growth or germination, the front at any nonzero density level will have a positive acceleration. An estimation of the magnitude of the acceleration is given. Abstract: This work presents a hierarchy of mathematical models for describing the motion of phototactic bacteria, i.e., bacteria that move towards light. Based on experimental observations, we conjecture that the motion of the colony towards light depends on certain group dynamics. This group dynamics is assumed to be encoded as an individual property of each bacterium, which we refer to as 'excitation'. The excitation of each individual bacterium changes based on the excitation of the neighboring bacteria. Under these assumptions, we derive a stochastic model for describing the evolution in time of the location of bacteria, the excitation of individual bacteria, and a surface memory effect. A discretization of this model results in an interacting stochastic many-particle system. The third, and last, model is a system of partial differential equations that is obtained as the continuum limit of the stochastic particle system. The main theoretical results establish the validity of the new system of PDEs as the limit dynamics of the multi-particle system.
Abstract: In this paper we consider an evolution problem which models the frictional skin effects in piezoelectricity. The model consists of a system coupling a hemivariational inequality of hyperbolic type for the displacement with a time dependent elliptic equation for the electric potential. In the hemivariational inequality the viscosity term is noncoercive and the friction forces are derived from a nonconvex superpotential through the generalized Clarke subdifferential. The existence of weak solutions is proved by embedding the problem into a class of second order evolution inclusions and by applying a parabolic regularization method. Abstract: Phase-and-frequency-locking phenomena among coupled biological oscillators are a topic of current interest, in particular to neuroscience. In the case of mono-directionally pulse-coupled oscillators, phase-locking is well understood, where the phenomenon is globally described by Arnold tongues. Here, we develop the tools that allow corresponding investigations to be made for more general pulse-coupled networks. For two bi-directionally coupled oscillators, we prove the existence of three-dimensional Arnold tongues that mediate from the mono- to the bi-directional coupling topology. Under this transformation, the coupling strength at which the onset of chaos is observed is invariant. The developed framework also allows us to compare information transfer in feedforward versus recurrent networks. We find that distinct laws govern the propagation of phase-locked spike-time information, indicating a qualitative difference between classical artificial vs. biological computation. Abstract: In this paper, some patch recovery methods are proposed and analyzed for finite element approximation of elasticity problems using quadrilateral meshes. Under a mild mesh condition, superconvergence results are established for the recovered stress tensors.
Consequently, a posteriori error estimators based on the recovered stress tensors are asymptotically exact. Abstract: A digital spiking neuron is used to generate spike-trains of variable spike-intervals. Multiple co-existing periodic spike-trains are observed, depending on initial states. By focusing on a simple parameter case, we clarify the number of co-existing periodic spike-trains and determine their periods theoretically. Using a spike-interval modulation, the spike-train is coded by a digital sequence. We clarify that the set of co-existing periodic spike-trains is in a one-to-one relation to a set of binary numbers. We finally discuss to what extent these theoretical results may provide the mathematical basis for technological applications.
The Bayesian approach to learning starts by choosing a prior probability distribution over the unknown parameters of the world. Then, as the learner makes observations, the prior is updated using Bayes rule to form the posterior, which represents the new …

In an earlier post we analyzed an algorithm called Exp3 for $k$-armed adversarial bandits for which the expected regret is bounded by \begin{align*} R_n = \max_{a \in [k]} \E\left[\sum_{t=1}^n y_{tA_t} - y_{ta}\right] \leq \sqrt{2n k \log(k)}\,. \end{align*} The setting of …

To revive the content on this blog a little we have decided to highlight some of the new topics covered in the book that we are excited about and that were not previously covered in the blog. In this post …

According to the main result of the previous post, given any finite action set $\cA$ with $K$ actions $a_1,\dots,a_K\in \R^d$, no matter how an adversary selects the loss vectors $y_1,\dots,y_n\in \R^d$, as long as the action losses $\ip{a_k,y_t}$ are in …

In the next few posts we will consider adversarial linear bandits, which, up to a crude first approximation, can be thought of as the adversarial version of stochastic linear bandits. The discussion of the exact nature of the relationship between …
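As a concrete companion to the Exp3 excerpt, here is a minimal toy implementation (my own sketch, not the blog's code; the function names and the test scenario are invented for illustration). The learning rate matches the $\eta = \sqrt{2\log k/(nk)}$ choice behind the stated $\sqrt{2nk\log(k)}$ bound:

```python
import math, random

def exp3(reward, n, k, seed=0):
    """Run Exp3 for n rounds on a k-armed bandit; reward(t, a) returns the
    (possibly adversarially chosen) reward in [0, 1] of arm a at round t.
    Returns the final sampling distribution over arms."""
    rng = random.Random(seed)
    eta = math.sqrt(2 * math.log(k) / (n * k))   # learning rate from the bound
    g_hat = [0.0] * k                            # importance-weighted gain estimates
    for t in range(n):
        m = max(g_hat)                           # shift by max for numerical stability
        w = [math.exp(eta * (g - m)) for g in g_hat]
        total = sum(w)
        p = [wi / total for wi in w]
        a = rng.choices(range(k), weights=p)[0]
        # unbiased estimate: observed reward divided by the probability of playing a
        g_hat[a] += reward(t, a) / p[a]
    return p

# Toy check: arm 0 always pays 1, the others pay 0, so Exp3 should
# concentrate its play on arm 0.
probs = exp3(lambda t, a: 1.0 if a == 0 else 0.0, n=2000, k=3)
```

After 2000 rounds the distribution is overwhelmingly concentrated on the best arm, as the exponential weighting predicts.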
De Bruijn-Newman constant
Revision as of 11:27, 26 January 2018

For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes.
One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{2} \Xi_{4t}(2z).[/math] Note: there may be a typo in the definition of [math]\Xi_\lambda[/math] in [KKL2009]; they may instead have intended to write [math]4\lambda (\log x)^2 + 2 it \log x[/math] in place of [math]\frac{\lambda}{4} (\log x)^2 + \frac{it}{2} \log x[/math] in that definition. If so, the relationship would be [math]H_t(z) = \frac{1}{2} \Xi_{t/4}(z/2)[/math] instead of [math]H_t(z) = \frac{1}{2} \Xi_{4t}(2z)[/math]. De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). [math]t=0[/math] When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function.
Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. [math]t\gt0[/math] For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] (formally, at least) obey the system of ODE [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum may have to be interpreted in a principal value sense. (See for instance [CSV1994, Lemma 2.4]. This lemma assumes that [math]t \gt \Lambda[/math], but it is likely that one can extend to other [math]t \geq 0[/math] as well.)
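As a sanity check on the definitions above, one can evaluate [math]H_0[/math] by straightforward quadrature and watch it change sign near [math]z \approx 28.27 = 2 \times 14.1347\ldots[/math], i.e. at twice the ordinate of the first non-trivial zeta zero, as the relation [math]H_0(z) = \frac{1}{8}\xi(\frac12 + \frac{iz}{2})[/math] predicts. This is only a rough sketch; the truncation point and step count are ad hoc choices of mine:

```python
import math

def phi(u):
    """The super-exponentially decaying function Φ(u) from the definition above,
    truncated after a few terms (later terms underflow to zero anyway)."""
    total = 0.0
    for n in range(1, 8):
        a = math.pi * n * n * math.exp(4 * u)
        if a > 700:          # exp(-a) underflows; remaining terms are negligible
            break
        total += (2 * math.pi**2 * n**4 * math.exp(9 * u)
                  - 3 * math.pi * n**2 * math.exp(5 * u)) * math.exp(-a)
    return total

def H0(z, upper=3.0, steps=3000):
    """Composite Simpson's rule for H_0(z) = ∫_0^∞ Φ(u) cos(zu) du;
    Φ is effectively zero well before u = 3, so the truncation is harmless."""
    h = upper / steps
    s = phi(0.0) + phi(upper) * math.cos(z * upper)
    for i in range(1, steps):
        u = i * h
        s += (4 if i % 2 else 2) * phi(u) * math.cos(z * u)
    return s * h / 3
```

Since the zeroes of [math]H_0[/math] are simple here, [math]H_0[/math] is positive at [math]z=28[/math] and negative at [math]z=29[/math], bracketing the first zero.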
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + t \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math]. Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Wikipedia and other references Bibliography [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke Math. J. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 222 (2009), 281–306. [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is negative, preprint. arXiv:1801.05914
Standard error and standard deviation

The standard error of a statistic is the standard deviation of its sampling distribution, most commonly that of the sample mean. Given a sample of size n with sample standard deviation s, the standard error of the mean is estimated as s/√n, so it becomes smaller (the sampling distribution narrower) as the sample size grows. The mean and standard deviation are descriptive statistics, whereas the standard error describes sampling variability: because of random variation in sampling, the proportion or mean calculated from a sample will usually differ from the true proportion or mean in the entire population.

Example: the age of a population of 9,732 runners has mean 33.87 years and standard deviation 9.27 years. One sample of 16 runners has sample mean 37.25, greater than the population mean, and sample standard deviation s = 10.23, greater than the population value. The standard error of the mean estimated from this sample is 10.23/√16 ≈ 2.56. The distribution of 20,000 such sample means has mean 33.88, very close to the population mean, and its spread indicates how far a given sample mean tends to lie from the true value.

Example: for age at first marriage, with population mean 23.44 years and population standard deviation 4.72, samples of size 16 give a distribution of sample means with mean 23.44 and standard deviation 4.72/√16 = 1.18, smaller than the population standard deviation.

Example: of 2,000 voters polled, 1,040 (52%) state a preference for candidate A. The standard error of this sample proportion is √(0.52 × 0.48/2000) ≈ 0.011, so a margin of error of about 2%, i.e. a confidence interval of roughly 50% to 54%, is reported for the proportion who will vote for candidate A in the actual election.

If the population standard deviation σ is unknown and the sample size is small (say less than 60 in each group), the Student t-distribution should be used in place of the normal distribution when forming confidence intervals. Note also that, for a sample drawn from a distribution with median 0, the number of observations that exceed 0 is Binomial(n, 1/2), so its standard error is √(n/4) regardless of σ.
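The conversions described above are one-liners; the sketch below (plain Python, with the numbers taken from the examples in the text) makes them explicit:

```python
import math

def se_mean(s, n):
    """Standard error of the mean from sample standard deviation s and size n."""
    return s / math.sqrt(n)

def se_proportion(p, n):
    """Standard error of a sample proportion p based on n observations."""
    return math.sqrt(p * (1 - p) / n)

# Runners example: sample SD 10.23 with n = 16 gives SE ≈ 2.56.
se_runners = se_mean(10.23, 16)

# Voter example: 1040 of 2000 (52%) favour candidate A; the 95% margin of
# error 1.96 * SE comes out near the quoted 2%.
margin = 1.96 * se_proportion(0.52, 2000)
```

The same `se_mean` call reproduces the marriage-age figure as well: `se_mean(4.72, 16)` is 1.18.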
Kakeya problem
Revision as of 20:56, 15 May 2009

A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements. Basic Estimates Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
Lower Bounds From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n\ge (9/5)^n[/math] (and for general fields of cardinality q, a lower bound of [math](q/(2-1/q))^n[/math]), but this is superseded by the second estimate given below. To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\ge 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the last two bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math].
Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math] Upper Bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
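The closing numerical bounds are easy to check directly; a minimal sketch (plain Python, just evaluating the stated growth rates):

```python
# Numerical check of the asymptotic bounds for k_n in F_3^n:
# lower bound base 3^(6/11), upper bound base (27/4)^(1/3).
lower_base = 3 ** (6 / 11)
upper_base = (27 / 4) ** (1 / 3)

print(f"lower: {lower_base:.4f}")  # ~1.8207
print(f"upper: {upper_base:.4f}")  # ~1.8899

# The simple bound 3^((n+1)/2) grows like (3^(1/2))^n ~ 1.7321^n,
# confirming that the slices argument improves on the bush argument.
bush_base = 3 ** 0.5
assert bush_base < lower_base < upper_base
```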
That means we can have actual fancy mathematical equations expressed as such without resorting to pseudocode, uneditable \$\LaTeX\$ screenshots, etc. MathJax implements a large subset of LaTeX's math syntax, but it is not LaTeX itself. (It's generally off topic on TeX Stack Exchange.) What does this do? Basically, we can fancify our equations: $$ E = mc^2 $$ We can wrap our equations in \$ ... \$ (for an inline equation: \$c^2 = a^2 + b^2\$), or if we want it to take up its own lines or be multiline we can use the $$ ... $$ delimiters instead. This also lets us write equations with some significant visual complexity: $$ \begin{align} \vec{v} &= \begin{pmatrix} x \\ y \\ z \end{pmatrix} \\ \vert{\vec{v}}\vert &= \sqrt{x^2 + y^2 + z^2} \end{align} $$ The rule of thumb is that LaTeX makes extremely complex stuff simple, and extremely simple stuff complex. :) A cheat sheet to the syntax is available here: MathJax basic tutorial and quick reference. A Game Development-specific MathJax guide is being created here by the community: Game Development MathJax Cookbook
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Claim: $L$ is context-free. Proof Idea: There has to be at least one difference between the first and second half; we give a grammar that makes sure to generate one and leaves the rest arbitrary. Proof: For sake of simplicity, assume a binary alphabet $\Sigma = \{a,b\}$. The proof readily extends to other sizes. Consider the grammar $G$: $\qquad\begin{align} S &\to AB \mid BA \\ A &\to a \mid aAa \mid aAb \mid bAa \mid bAb \\ B &\to b \mid aBa \mid aBb \mid bBa \mid bBb \end{align}$ It is quite clear that it generates $\qquad \mathcal{L}(G) = \{ \underbrace{w_1}_k x \underbrace{w_2v_1}_{k+l}y\underbrace{v_2}_l \mid |w_1|=|w_2|=k, |v_1|=|v_2|=l, x\neq y \} \subseteq \Sigma^*;$ the suspicious may perform a nested induction over $k$ and $l$ with case distinction over pairs $(x,y)$. Now, $w_2$ and $v_1$ commute (intuitively speaking, $w_2$ and $v_1$ can exchange symbols because both contain symbols chosen independently from the rest of the word). Therefore, $x$ and $y$ have the same position (in their respective half), which implies $\mathcal{L}(G) = L$ because $G$ imposes no other restrictions on its language. The interested reader may enjoy two follow-up problems: Exercise 1: Come up with a PDA for $L$! Exercise 2: What about $\{xyz \mid |x|=|y|=|z|, x\neq y \lor y \neq z \lor x \neq z\}$?
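The proof's key claim — that $G$ generates exactly $L=\{xy \mid |x|=|y|, x\neq y\}$ — can be checked empirically on short words. A sketch (my own illustration, for the binary alphabet of the proof): enumerate all words derivable from $G$ up to a length bound and compare with the direct membership test.

```python
from itertools import product

def in_L(w: str) -> bool:
    """Membership in L = { xy : |x| = |y|, x != y }."""
    m, r = divmod(len(w), 2)
    return r == 0 and m > 0 and w[:m] != w[m:]

def generated_by_G(max_len: int) -> set:
    """Words of length <= max_len derivable in the grammar G above
    (S -> AB | BA, A -> a | uAv, B -> b | uBv for u, v in {a, b})."""
    # A derives exactly the odd-length words with 'a' at the center
    # (and B the same with 'b'); build these level by level.
    def centered(c: str) -> set:
        words, frontier = {c}, {c}
        while True:
            nxt = {u + w + v
                   for w in frontier
                   for u, v in product("ab", repeat=2)
                   if len(w) + 2 <= max_len}
            if not nxt:
                return words
            words |= nxt
            frontier = nxt
    A, B = centered("a"), centered("b")
    return {x + y for x, y in list(product(A, B)) + list(product(B, A))
            if len(x) + len(y) <= max_len}

gen = generated_by_G(8)
for n in range(1, 9):
    for w in map("".join, product("ab", repeat=n)):
        assert (w in gen) == in_L(w)
print("L(G) and L agree on all words up to length 8")
```

This also illustrates the argument's mechanism: a word of $A\cdot B$ has an 'a' at offset $k$ of its first half and a 'b' at the same offset of its second half, which is exactly the mismatch witnessing $x\neq y$.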
In the last few days I've been studying the tensor algebra $T(V)$ of a vector space $V$ over the field $K$, and I've realised that what I'm not understanding doesn't have to do with tensor products, but rather with graded algebras built from direct sums. Following the definition of Wikipedia, a graded algebra is a graded vector space that is also a graded ring; that is, a vector space $V$ that can be decomposed as a direct sum $$V=\bigoplus_{n\in\mathbb{N}}V_n$$ and with the property that if $\odot$ denotes the multiplication, then $V_n\odot V_m \subseteq V_{n+m}$. That's fine; however, what leaves me in doubt is the following: suppose we have a collection of vector spaces $\{V_i, i\in\mathbb{N}\}$ and we know how to define a multiplication $\odot: V_n\times V_m\to V_{n+m}$ that satisfies the axioms of the multiplication of a ring. Then, we can build the vector space $$V=\bigoplus_{n\in\mathbb{N}}V_n,$$ which is a graded vector space, because if $i_n : V_n\to V$ is the canonical injection then $V$ is the internal direct sum of all $i_n(V_n)$ for $n\in \mathbb{N}$. But how does one define multiplication in $V$? We know how to define multiplication for each two $V_n$ and $V_m$, but how can one use this to define multiplication in $V$? The elements of $V$ are sequences $(v_i)$ where each $v_i \in V_i$ and just finitely many of those $v_i$ are nonzero, but I don't know what to do with this together with the maps $\odot$ to define the multiplication on $V$ that turns it into a graded ring as well. How is that usually done? Thanks very much in advance!
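For what it's worth, the usual recipe (my illustration, not part of the question) extends the componentwise products bilinearly: $(v\odot w)_n=\sum_{i+j=n} v_i\odot w_j$, a finite sum because each sequence has finite support. A toy sketch in Python, modelling an element of the direct sum as a finitely supported map from degree to coefficient (here each $V_n$ is the integer multiples of $x^n$, so the component products are just coefficient products):

```python
from collections import defaultdict

# Toy model: V_n = { c * x^n : c an integer }, with the given maps
# V_i x V_j -> V_{i+j} multiplying coefficients.  An element of the
# direct sum is a finitely supported dict {degree: coefficient}.
def multiply(v: dict, w: dict) -> dict:
    """Bilinear extension: (v ⊙ w)_n = sum over i + j = n of v_i ⊙ w_j."""
    out = defaultdict(int)
    for i, vi in v.items():
        for j, wj in w.items():
            out[i + j] += vi * wj   # the given component map V_i x V_j -> V_{i+j}
    return {n: c for n, c in out.items() if c}

# (1 + x) * (1 + x) = 1 + 2x + x^2; homogeneous pieces land in the
# right graded component, e.g. V_1 ⊙ V_1 ⊆ V_2.
p = {0: 1, 1: 1}
print(multiply(p, p))   # {0: 1, 1: 2, 2: 1}
```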
Background The implied repo rate (IRR) is essentially the carry for going long basis (buying the deliverable bond and selling the futures contract). For this reason, it rises in value day by day as we approach expiry, which can be seen in its formula: $$IRR=\Big(\frac{P_{\text{invoice}}}{P_{\text{bond}}}-1\Big)\Big(\frac{365}{d}\Big),$$ where $P_{\text{invoice}}$ is the invoice price of the bond, $P_{\text{bond}}$ is the cash price of the bond, and $d$ is the number of days left to delivery. Correspondingly, we see a daily drop in net basis, until it reaches roughly zero at delivery; which means there is also a daily drop in gross basis. These are calculated like so: $$b_{\text{gross}}=P_{\text{bond}}-(CF\times P_{\text{futures}})$$ $$b_{\text{net}}=F_{\text{bond}}-(CF\times P_{\text{futures}})$$ where $CF$ is the conversion factor, $P_{\text{futures}}$ is the market price of the futures contract, and $F_{\text{bond}}$ is the forward price of the bond. In trading, it is important to be aware that tradable prices (such as the value of gross basis) will be lower at the open than they were at the close. This is particularly important on days like a Friday, when the deal date moves from $T+1$ to $T+3$, creating a more pronounced rise in IRR, and hence a more pronounced drop in gross basis. This is observed every day in the market. Problem How can we calculate the expected drop in gross basis? For IRR, the corresponding calculation is easy. Just change the value of $d$ in the formula above to $d-1$, and this will give you the expected daily rise in IRR. It is not so clear how to do this for gross and net basis. Obviously their drops are in line with the rise in IRR, but I'm unsure of how to explicitly calculate an expected drop. Is there some other formula that relates basis to IRR? Thanks.
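The part of the question that is well defined — the $d \to d-1$ roll-down of IRR — can be sketched directly from the formulas above. All numbers below are hypothetical round figures, not market data:

```python
# Sketch of the IRR roll-down described above; inputs are hypothetical.
def irr(invoice_price: float, bond_price: float, days: int) -> float:
    """Implied repo rate, annualised act/365 as in the formula above."""
    return (invoice_price / bond_price - 1.0) * 365.0 / days

def gross_basis(bond_price: float, conversion_factor: float,
                futures_price: float) -> float:
    """b_gross = P_bond - CF * P_futures."""
    return bond_price - conversion_factor * futures_price

# Hypothetical deliverable: cash price 99.50, invoice price 100.00,
# 30 days to delivery.
p_bond, p_invoice, d = 99.50, 100.00, 30
today = irr(p_invoice, p_bond, d)
tomorrow = irr(p_invoice, p_bond, d - 1)   # the d -> d-1 shift in the text
print(f"IRR today: {today:.4%}, expected tomorrow: {tomorrow:.4%}")
assert tomorrow > today   # IRR rises as delivery approaches
```

Note that this only reproduces the easy half of the question; it does not supply the missing formula linking the basis drop to the IRR rise.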
Inline Math Editor Problem-Attic has made the writing of math and science questions faster than ever! An option for “live” or inline math provides all kinds of shortcuts plus automatic on-screen preview. The inline math editor is based on MathQuill, a popular program for adding formulas and special symbols to a web page. You might be familiar with MathQuill already if you’ve used the Desmos graphing calculator or various assessment tools by Learnosity. Problem-Attic’s implementation goes a bit further with: shortcuts for fractions and radicals, which greatly speed up typing; an option for regular and “displaystyle” fractions, integrals and summation signs; and a very fast way to switch between math and regular text. Below you’ll find a quick introduction to the editor and all the details for producing beautiful-looking formulas for math and science. Please give it a try. You’ll not find a quicker or more intuitive way to type math on the web! Note: the math editor is a subscription feature in Problem-Attic. However, you can use it as often as you like, free of charge, in what’s called the Play Area. Click here to learn more. Getting started The inline math editor is an alternative to the palette-based, or pop-up, math editor in Problem-Attic. The latter is still available. In fact, you should continue using it for complex, multi-line formulas. For most other formulas and special symbols, and for mixing math and regular text, you’ll find the inline editor to be faster. Here’s an example of what to do: Start or open a document in Problem-Attic. Go to Arrange and click the button Write New Problem. Once you’re inside the problem editor, type some text, like “Simplify.” Press Ctrl-M (Windows) or Cmd-M (Macintosh) to start inline math. Type the following, where ◾ means press either the Spacebar or Tab key. 1 / 2 ◾ x \d ◾ 6 x ^ 2 Press Ctrl-M or Cmd-M to stop inline math. You should see the same formula as in the picture below. Notice how you typed \d for a divide sign?
Most other common symbols can be typed similarly, with a backslash followed by one or more letters, such as \t for a times dot and \x for a times cross. These are called “commands”. A fraction can be created with a forward-slash or the command \f. You can create a radical with \r and an indexed radical with \ir. If you are familiar with LaTeX, many of its commands also work in the inline math editor. You have the option of using LaTeX or abbreviated commands, such as those described above. The results are the same. Full written-out commands, like \fraction, are also available. For a list of all supported commands, abbreviated, written-out and standard LaTeX, please see these charts. For a quick reference, which is printable and which shows the easiest commands to type, download this chart. More details Now that you know the basics of the inline math editor, here are the finer points about how to use it. Ctrl/Cmd-M is used to start and stop the inline math editor. This is not exactly the same as “math mode” in the LaTeX language, but it is close. In Problem-Attic you can use the inline editor anywhere in a problem or answer, including in the cells of a table and in a multiple-choice block. Spaces are ignored when using the inline math editor. Spacebar can be used to get out of an edit field, like an exponent, or to advance the cursor from the numerator to the denominator of a fraction. The Tab key can also be used for that purpose. Commands are case-sensitive. For example, \F is different from \f, and \Int is different from \int. Nearly all commands are lowercase, so you rarely need to be concerned. Uppercase usually signifies a larger symbol, an alternative arrow style, or an uppercase Greek letter. To signal the end of a command, you can press Tab, Spacebar or any non-letter on your keyboard. Technically speaking, it was not necessary to type \d ◾ 6 in the example above.
The sequence \d 6 would have produced the same result, because the ‘6’ would have signaled the end of the command. Nevertheless, pressing Tab or Spacebar is good for consistency. After you stop inline math, there are two ways to return to the formula and make changes. (1) If the cursor is to the right or left, press arrow keys until the formula is highlighted, then press Enter. (2) With your mouse, click somewhere inside the formula. Inside a formula, Backspace and Delete keys will remove elements of it. If the cursor is outside the math formula and immediately to the right, Backspace (Windows) or Delete (Macintosh) will erase it. There’s no undo inside the math editor or if you erase a formula. Helpful Hints Here is a little background information about the editor and some best practices: For consistent appearance of variables, for proper spacing, and for quick access to symbols, we encourage you to use the inline editor whenever something is “inherently mathematical” (even if it could be typed as regular text). For example, if you use the inline editor for an expression like x+2, you’ll automatically get an italicized variable and the right amount of space around the plus symbol. Large fractions, integrals and summations. On the web, equation editors rarely distinguish between large and small fractions (or the size of other symbols). But the difference matters in print. Thus, the inline editor gives you an option. If you start a fraction with a slash or lowercase command like \f, then the fraction will be printed at a smaller size, to look good in the middle of a sentence or for a mixed number. If you use the uppercase variant \F or the command \df, which means “displaystyle fraction”, then the result will be larger and will look better on its own line or between paragraphs. (We recommend this for any rational expression.) Similarly, you can use \Int and \Sum for large-size integrals and summations.
Problem-Attic’s inline math editor is designed for fast typing and to give you a close approximation to the final result. While typing, the math can vary in quality based on your web browser and system fonts. Therefore, you shouldn’t be concerned if some symbols appear too light, do not sit well on the baseline, or don’t match the styling of images from the pop-up math editor. Internally, all text and math is handled the same way. When you click Save for a problem, your formulas will be typeset beautifully, and when you make a PDF or export an online test, you’ll get whatever font you choose in Problem-Attic. (In the editor and on problem thumbnails, the program defaults to a serif font.) You can include regular text in the inline math editor, i.e., letters which are not italicized or which are not variables. Use the command \text followed by Space or Tab, then type normally. This may be useful for specifying units or typing a few words in the numerator or denominator of a fraction. But don’t overdo it. Text in a math formula is non-breaking, so it should usually be no more than 10 or 20 characters. For functions, there’s another technique which gives better results in terms of spacing and is part of the LaTeX language: type a backslash then the name of the function, such as \sin, \arctan, or \log. The inline editor differs from LaTeX in a few ways. Please be aware of the following. (1) The inline editor supports only a subset of LaTeX commands. You’ll find a complete list here. (2) The editor adds some commands which are not standard LaTeX, such as \f for fraction and \r for radical. They are essentially keyboard shortcuts; you won’t be able to use them with other LaTeX-based systems. (3) In the inline editor, only two characters need to be “escaped”: the dollar sign and the backslash. You can get a pound, percent and ampersand character simply by typing #, % and &. If you want the actual symbols ^ and _, use the commands \caret and \underscore. Inline vs. pop-up math editor.
Problem-Attic’s pop-up editor works the same as before. To access it, click the insert math icon in the toolbar of the problem editor. It supports a full range of commands and is palette-based, which means you can click on visual images of symbols if you forget or don’t want to type the commands. Significantly, code is not shared between the inline and pop-up editor, so you have to decide which to use. (At a low level, they work pretty much the same way.) If you’re typing a long formula which is easier to read on multiple lines, or if you need to type a more complex formula like a matrix or system of equations, then you should go straight to the pop-up editor. What’s ahead You’ll certainly agree that “live” math editing is a great improvement to Problem-Attic, and you’ll probably want us to take it even further. We’re with you on that! Here are some things still in the works: Live math editing by students. By late spring we’re planning to add the same math editor to Problem-Attic’s online tests and scoring app. That means students will be able to answer math and science questions with---math! And that’s not all, Problem-Attic will be able to score most formulaic answers as easily as multiple-choice, numeric and plain text answers. Wow! Automatic switching between the inline and pop-up editor. As noted above, formulas are not passed between them. But it is possible for inline formulas to be opened in the pop-up editor and vice versa, and for simple formulas created in the pop-up editor to become “live”. We’re working toward that now. Updating of Problem-Attic’s questions to support inline math. Before now, our data entry methods assumed that all math would be handled by the pop-up editor. We have begun the conversion process. It’s not technically difficult, but it will take time because of the sheer volume of questions: more than 100,000 with formulas. We expect conversion to be completed soon.
In the meantime, to edit the existing formulas, hang in there with the prior techniques. That’s it for the instructions. Please go try the math editor and let us know how you like it. You can write to us at support@problem-attic.com.
This is an open problem. Possibly some weak form of 3SUM-hardness could be proved by adapting a result from Mihai Pătrașcu's STOC 2010 paper "Towards Polynomial Lower Bounds for Dynamic Problems". First, let me define a sequence of closely related problems. The input to each problem is a sorted array $A[1..n]$ of distinct integers. 3SUM: Are there distinct indices $i,j,k$ such that $A[i] + A[j] = A[k]$? Convolution3SUM: Are there indices $i<j$ such that $A[i] + A[j] = A[i+j]$? Average: Are there distinct indices $i,j,k$ such that $A[i] + A[j] = 2 A[k]$? ConvolutionAverage: Are there indices $i<j$ such that $A[i] + A[j] = 2A[(i+j)/2]$? (This is the problem you're asking about.) In my PhD thesis, I proved that all four of these problems require $\Omega(n^2)$ time in a decision-tree model of computation that allows only queries of the form "Is $\alpha A[i] + \beta A[j] + \gamma A[k] + \delta$ positive, negative, or zero?", where $\alpha,\beta,\gamma,\delta$ are real numbers (that don't depend on the input). In particular, any 3SUM algorithm in this model must ask the question "Is $A[i] + A[j]$ bigger, smaller, or equal to $A[k]$?" at least $\Omega(n^2)$ times. That lower bound doesn't rule out subquadratic algorithms in a more general model of computation — indeed, it is possible to shave off some log factors in various integer RAM models. But nobody knows what sort of more general model would help more significantly. Using a careful hashing reduction, Pǎtrașcu proved that if 3SUM requires $\Omega(n^2/f(n))$ expected time, for any function $f$, then Convolution3SUM requires $\Omega(n^2/f^2(n\cdot f(n)))$ expected time. Thus, it's reasonable to say that Convolution3SUM is "weakly 3SUM-hard". For example, if Convolution3SUM can be solved in $O(n^{1.8})$ time, then 3SUM can be solved in $O(n^{1.9})$ time. 
I haven't ground through the details, but I'd bet that a parallel argument implies that if Average requires $\Omega(n^2/f(n))$ expected time, for any function $f$, then ConvolutionAverage requires $\Omega(n^2/f^2(n\cdot f(n)))$ expected time. In other words, ConvolutionAverage is "weakly Average-hard". Unfortunately, it is not known whether Average is (even weakly) 3SUM-hard! I suspect that Average is actually not 3SUM-hard, if only because the $\Omega(n^2)$ lower bound for Average is considerably harder to prove than the $\Omega(n^2)$ lower bound for 3SUM. You also ask about the special case where adjacent array elements differ by less than some fixed integer $K$. For 3SUM and Average, this variant can be solved in $O(n \log n)$ time using Fast Fourier transforms as follows. (This observation is due to Raimund Seidel.) Build a bit vector $B[0..Kn]$, where $B[i]=1$ if and only if the integer $A[1]+i$ appears in the input array $A$. Compute the convolution of $B$ with itself in $O(Kn\log Kn) = O(n\log n)$ time using FFTs. The resulting array has a non-zero value in the $j$th position if and only if some pair of elements in $A$ sum to $2A[1] + j$. Thus, we can extract a sorted list of sums $A[i]+A[j]$ from the convolution in $O(n)$ time. From here, it's easy to solve Average or 3SUM in $O(n)$ time. But I don't know a similar trick for Convolution3SUM or ConvolutionAverage!
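Seidel's bounded-gap trick above can be sketched concretely for the Average problem. In this sketch (my own, for clarity) the convolution of the bit vector with itself is done naively; substituting an FFT for that one step gives the stated $O(n\log n)$ bound:

```python
def average_bounded_gaps(A):
    """Decide Average (distinct i, j, k with A[i] + A[j] = 2*A[k]) for a
    sorted array of distinct integers with bounded adjacent gaps, via the
    bit-vector self-convolution described above.  The convolution is done
    naively here; replace it with an FFT for the O(n log n) bound."""
    base, top = A[0], A[-1]
    m = top - base + 1
    B = [0] * m                      # B[i] = 1 iff base + i is in A
    for a in A:
        B[a - base] = 1
    # conv[s] = number of ordered pairs (x, y) in A x A with x + y = 2*base + s
    conv = [0] * (2 * m - 1)
    for i, bi in enumerate(B):
        if bi:
            for j, bj in enumerate(B):
                if bj:
                    conv[i + j] += 1
    for a in A:
        s = 2 * (a - base)
        # Discard the trivial pair (a, a).  Any surviving pair x + y = 2a
        # has x != y, and then neither x nor y can equal a, so all three
        # indices are distinct.
        if conv[s] - 1 > 0:
            return True
    return False
```

The counting argument in the comment is the only subtle point: if $x+y=2a$ and $x=y$ then $x=a$, so excluding the single pair $(a,a)$ automatically excludes every non-witness.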
The applications of solid codes to r-R and r-D languages Abstract A language S on a free monoid \(A^*\) is called a solid code if S is an infix code and overlap-free. A congruence \(\rho \) on \(A^*\) is called principal if there exists \(L\subseteq A^*\) such that \(\rho =P_L\), where \(P_L\) is the syntactic congruence determined by L. For any solid code S over A, Reis defined a congruence \(\sigma _S\) on \(A^*\) by means of S and showed it is principal (Semigroup Forum 41:291–306, 1990). A new simple proof of the fact that \(\sigma _S\) is principal is given in this paper. Moreover, two congruences \(\rho _S\) and \(\lambda _S\) on \(A^*\) defined by solid code S are introduced and proved to be principal. For every class of the classification of \({{\mathbf {D}}}_{\mathbf{r}}\) and \({{\mathbf {R}}}_{\mathbf{r}}\), languages are given by means of three principal congruences \(\sigma _S\), \(\rho _S\) and \(\lambda _S\). Keywords: Solid code · Principal congruence · Relatively regular language · Relatively disjunctive language Acknowledgements The authors thank the referees for their very careful and in-depth recommendations. This work was supported by the National Natural Science Foundation of China (Grant No. 11861071). Compliance with ethical standards Conflict of interest: Author Zuhua Liu declares that he has no conflict of interest. Author Yuqi Guo declares that he has no conflict of interest. Author Jing Leng declares that she has no conflict of interest. Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors. References Ito M (1993) Dense and disjunctive properties of languages. In: Proceedings of the Fundamentals of Computation Theory, International Symposium, FCT ’93, Szeged, Hungary, August 23–27, 1993, pp 31–49 Jürgensen H, Yu SS (1990) Solid codes.
J Inf Process Cybern 26(10):563–574 Shyr HJ, Thierrin G (1977) Disjunctive languages and codes. In: Fundamentals of computation theory, Proceedings of the 1977 International FCT-Conference, Poznan, Poland. Lecture Notes in Computer Science, No. 56. Springer, Berlin, pp 171–176
Search Now showing items 1-10 of 25 Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... 
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-04) The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...
The Bayesian approach to learning starts by choosing a prior probability distribution over the unknown parameters of the world. Then, as the learner makes observations, the prior is updated using Bayes' rule to form the posterior, which represents the new Continue Reading

In an earlier post we analyzed an algorithm called Exp3 for $k$-armed adversarial bandits for which the expected regret is bounded by \begin{align*} R_n = \max_{a \in [k]} \mathbb{E}\left[\sum_{t=1}^n y_{tA_t} - y_{ta}\right] \leq \sqrt{2n k \log(k)}\,. \end{align*} The setting of Continue Reading

To revive the content on this blog a little we have decided to highlight some of the new topics covered in the book that we are excited about and that were not previously covered in the blog. In this post Continue Reading

Dear readers After nearly two years since starting to write the blog we have at last completed a first draft of the book, which is to be published by Cambridge University Press. The book is available for free as a Continue Reading

This website has been quiet for some time, but we have not given up on bandits just yet. First up, we recently gave a short tutorial at AAAI that covered the basics of finite-armed stochastic bandits and stochastic linear bandits. Continue Reading

According to the main result of the previous post, given any finite action set $\mathcal{A}$ with $K$ actions $a_1,\dots,a_K\in \mathbb{R}^d$, no matter how an adversary selects the loss vectors $y_1,\dots,y_n\in \mathbb{R}^d$, as long as the action losses $\langle a_k,y_t\rangle$ are in Continue Reading

In the next few posts we will consider adversarial linear bandits, which, up to a crude first approximation, can be thought of as the adversarial version of stochastic linear bandits. The discussion of the exact nature of the relationship between Continue Reading

In the last two posts we considered stochastic linear bandits, when the actions are vectors in the $d$-dimensional Euclidean space.
According to our previous calculations, under the condition that the expected rewards of all the actions are in a fixed Continue Reading Continuing the previous post, here we give a construction for confidence bounds based on ellipsoidal confidence sets. We also put things together and show a bound on the regret of the UCB strategy that uses the constructed confidence bounds. Constructing the Continue Reading Lower bounds for linear bandits turn out to be more nuanced than the finite-armed case. The big difference is that for linear bandits the shape of the action-set plays a role in the form of the regret, not just the Continue Reading
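The Exp3 bound quoted above is easy to probe empirically. Below is a minimal sketch (not the blog's own code; the learning rate $\eta = \sqrt{2\log(k)/(nk)}$ is the standard choice behind the $\sqrt{2nk\log(k)}$ bound):

```python
import math
import random

def exp3(losses, k, seed=0):
    """Exp3 for k-armed adversarial bandits with losses in [0, 1].

    losses[t][a] is the loss of arm a in round t; only the pulled
    arm's loss is observed. Returns the learner's total loss.
    """
    n = len(losses)
    eta = math.sqrt(2.0 * math.log(k) / (n * k))  # rate behind sqrt(2nk log k)
    est = [0.0] * k  # importance-weighted cumulative loss estimates
    rng = random.Random(seed)
    total = 0.0
    for t in range(n):
        # exponential-weights distribution (shifted by min for stability)
        m = min(est)
        w = [math.exp(-eta * (l - m)) for l in est]
        z = sum(w)
        p = [x / z for x in w]
        a = rng.choices(range(k), weights=p)[0]
        loss = losses[t][a]
        total += loss
        est[a] += loss / p[a]  # unbiased estimate: only the pulled arm updates
    return total
```

On an easy instance where one arm always has zero loss, the realized regret of a single run typically lands well inside the $\sqrt{2nk\log(k)}$ ballpark.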
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$, assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was so that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
I got this problem in school as preparation for the International Chemistry Olympiad, but I am having some trouble with it. The problem is to calculate, as precisely as possible, the Gibbs free energy of $\ce{CaCO3}$ decomposition at 1200 K. I know that $\Delta G=\Delta H-T\cdot\Delta S$. But $\Delta S$ and $\Delta H$ are temperature dependent, so $$\Delta H=\Delta H_0+\int_{T_0}^T\Delta C_p\,\mathrm{d}T=\Delta H_0+\Delta C_p\cdot\left(T-T_0\right)$$ and $$\Delta S=\Delta S_0+\Delta C_p\cdot\ln\left(T/T_0\right),$$ the last steps holding if $\Delta C_p$ is treated as constant. But the molar heat capacity at constant pressure is itself temperature dependent, via $C_p=a+b\cdot T+c/T^2$, where $a$, $b$, $c$ are coefficients for each compound in a specific phase. The problem is with both solids, because this equation cannot be used for them (at 1200 K they are not gases), and the Einstein and Debye quantum models do not work either: the Einstein model saturates at $C_p\approx3R\approx24.9\ \mathrm{J\,mol^{-1}\,K^{-1}}$, while the tabulated $C_p$ at 298 K, 1 atm for both solids is already $>24.9$. Where is the problem? I would appreciate any lead toward the mistake or the correct answer.
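For what it's worth, the Kirchhoff-style integration described above is easy to carry out numerically once $\Delta C_p(T)=a+bT+c/T^2$ is assembled from the per-compound coefficients. A sketch (the numbers in the test are placeholders, not tabulated data; real $\Delta H^\circ$, $\Delta S^\circ$, $a$, $b$, $c$ must come from thermochemical tables):

```python
import math

def delta_g(dH298, dS298, a, b, c, T, T0=298.15):
    """Delta-G(T) via Kirchhoff's law with Delta-Cp(T) = a + b*T + c/T**2,
    where a, b, c are already reaction differences (products - reactants).

    dH298 in J/mol, dS298 in J/(mol*K), T in K.
    """
    # integral of Delta-Cp dT from T0 to T
    dH = dH298 + a*(T - T0) + b/2.0*(T**2 - T0**2) - c*(1.0/T - 1.0/T0)
    # integral of Delta-Cp/T dT from T0 to T
    dS = dS298 + a*math.log(T/T0) + b*(T - T0) - c/2.0*(1.0/T**2 - 1.0/T0**2)
    return dH - T*dS
```

With $a=b=c=0$ this reduces to the textbook approximation $\Delta G = \Delta H^\circ - T\Delta S^\circ$, which makes it easy to see how much the heat-capacity correction actually shifts the answer at 1200 K.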
Under the auspices of the Computational Complexity Foundation (CCF) We make progress on the following three problems: 1. Constructing optimal seeded non-malleable extractors; 2. Constructing optimal privacy amplification protocols with an active adversary, for any possible security parameter; 3. Constructing extractors for independent weak random sources, when the min-entropy is extremely small (i.e., near logarithmic). For the first two problems, the best known non-malleable extractors by Chattopadhyay, Goyal and Li [CGL16], and by Cohen [Coh16a,Coh16b] all require seed length and min-entropy at least $\log^2 (1/\epsilon)$, where $\epsilon$ is the error of the extractor. As a result, the best known explicit privacy amplification protocols with an active adversary, which achieve 2 rounds of communication and optimal entropy loss in [Li15c,CGL16], can only handle security parameter up to $s=\Omega(\sqrt{k})$, where $k$ is the min-entropy of the shared secret weak random source. For larger $s$ the best known protocol with optimal entropy loss in [Li15c] requires $O(s/\sqrt{k})$ rounds of communication. In this paper we give an explicit non-malleable extractor that only requires seed length and min-entropy $\log^{1+o(1)} (n/\epsilon)$, which also yields a 2-round privacy amplification protocol with optimal entropy loss for security parameter up to $s=k^{1-\alpha}$ for any constant $\alpha>0$. For the third problem, the previously best known extractor supporting the smallest min-entropy, due to Li [Li13a], requires min-entropy $\log^{2+\delta} n$ and uses $O(1/\delta)$ sources, for any constant $\delta>0$. A very recent result by Cohen and Schulman [CS16] improves this, constructing explicit extractors that use $O(1/\delta)$ sources for min-entropy $\log^{1+\delta} n$, for any constant $\delta>0$. In this paper we further improve their result, and give an explicit extractor that uses $O(1)$ (an absolute constant) sources for min-entropy $\log^{1+o(1)} n$.
The key ingredient in all our constructions is a generalized, and much more efficient version of the independence preserving merger introduced in [CS16], which we call the non-malleable independence preserving merger. Our construction of the merger also simplifies that of [CS16], and may be of independent interest.
I was very surprised when I first encountered the Mertens conjecture. Define $$ M(n) = \sum_{k=1}^n \mu(k) $$ The Mertens conjecture was that $|M(n)| < \sqrt{n}$ for $n>1$, in contrast to the Riemann Hypothesis, which is equivalent to $M(n) = O(n^{\frac12 + \epsilon})$. The reason I found this conjecture surprising is that it fails heuristically if you assume the Möbius function is randomly $\pm1$ or $0$. The analogue fails with probability $1$ for a random $-1,0,1$ sequence where the nonzero terms have positive density. The law of the iterated logarithm suggests that counterexamples are large but occur with probability 1. So, it doesn't seem surprising that it's false, and that the first counterexamples are uncomfortably large. There are many heuristics you can use to conjecture that the digits of $\pi$, the distribution of primes, the zeros of $\zeta$, etc. seem random. I believe random matrix theory in physics started when people asked whether the properties of particular high-dimensional matrices were special or just what you would expect of random matrices. Sometimes the right random model isn't obvious, and it's not clear to me when to say that a heuristic is reasonable. On the other hand, if you conjecture that all naturally arising transcendentals have simple continued fractions which appear random, then you would be wrong, since $e = [2;1,2,1,1,4,1,1,6,...,1,1,2n,...]$, and a few numbers algebraically related to $e$ have similar simple continued fraction expansions. What other plausible conjectures or proven results can be framed as heuristically false according to a reasonable probability model?
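For readers who want to experiment: $M(n)$ is cheap to compute for small $n$ with a Möbius sieve, and the conjectured bound $|M(n)| < \sqrt{n}$ indeed holds comfortably in this range (a sketch, not optimized):

```python
def mobius_sieve(n):
    """Mobius function mu(0..n) via a prime sieve."""
    mu = [1] * (n + 1)
    mu[0] = 0
    prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    prime[m] = False  # mark proper multiples composite
                mu[m] *= -1           # one sign flip per prime factor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0             # squarefull -> mu = 0
    return mu

def mertens(n):
    """Return [M(0), M(1), ..., M(n)] as partial sums of mu."""
    mu = mobius_sieve(n)
    M, total = [0], 0
    for k in range(1, n + 1):
        total += mu[k]
        M.append(total)
    return M
```

The first counterexample to the conjecture is known to be astronomically large, so nothing interesting happens at this scale; the point of the snippet is only to make the "random walk" behavior of $M(n)$ tangible.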
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box. There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university. Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
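(The division-box machine described at the top of this exchange is just the Euclidean algorithm; a minimal sketch, with the loop feeding $(b, r)$ back into the division box:)

```python
def divide(a, b):
    """First box: return (q, r) with a = b*q + r and 0 <= r < b."""
    return a // b, a % b

def gcd(a, b):
    """Second box: stop when r = 0, otherwise feed (b, r) back in."""
    while b != 0:
        q, r = divide(a, b)
        a, b = b, r
    return a
```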
Well, assuming that the paper is all correct (or at least correct to a reasonable extent). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Under the auspices of the Computational Complexity Foundation (CCF) We study depth lower bounds against non-monotone circuits, parametrized by a new measure of non-monotonicity: the orientation of a function $f$ is the characteristic vector of the minimum sized set of negated variables needed in any DeMorgan circuit computing $f$. We prove trade-off results between the depth and the weight/structure ... more >>> In 1990 Karchmer and Wigderson considered the following communication problem $Bit$: Alice and Bob know a function $f: \{0, 1\}^n \to \{0, 1\}$, Alice receives a point $x \in f^{-1}(1)$, Bob receives $y \in f^{-1}(0)$, and their goal is to find a position $i$ such that $x_i \neq y_i$. Karchmer ... more >>> One of the important challenges in circuit complexity is proving strong lower bounds for constant-depth circuits. One possible approach to this problem is to use the framework of Karchmer-Wigderson relations: Karchmer and Wigderson (SIDMA 3(2), 1990) observed that for every Boolean function $f$ there is a corresponding communication problem $\mathrm{KW}_{f}$, more >>> One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^1$). Karchmer, Raz, and Wigderson (Computational Complexity 5, 3/4) suggested approaching this problem by proving that depth complexity behaves "as expected" with respect to the composition of functions $f$ ... more >>>
For two probability distributions there is a clear notion of how to say which one is more mixed: $\vec p$ is more mixed than $\vec q$ if it can be obtained from $\vec q$ by a mixing process, that is, a stochastic process described by a doubly stochastic matrix (i.e. one which preserves the flat distribution). Birkhoff's theorem relates this to a concept called majorization, which introduces a partial order on the space of probability distributions. The same concept generalizes to mixed states, allowing us to say which mixed state is more mixed -- for instance, one can establish an order by using the majorization condition on the eigenvalues, and then use Birkhoff's theorem to prove that one state can be converted into the other by a quantum "mixing map" (a unital channel). This is explained in detail e.g. in http://michaelnielsen.org/papers/majorization_review.pdf, and also in the book of Nielsen and Chuang. Specifically, this yields that the state with all eigenvalues equal (or, equivalently, the flat probability distribution) is the most mixed. To relate this to the quantification of mixedness through entropy mentioned in the other answers, the connection comes from the fact that if a state $\rho$ is more random than another state $\sigma$ in the above sense -- i.e., if $\sigma$ can be transformed into $\rho$ by mixing, or equivalently the eigenvalues of $\sigma$ majorize those of $\rho$ -- then the entropy of $\rho$ is larger than that of $\sigma$. This property (monotonicity under majorization) is known as Schur-concavity, a property shared e.g. by all Renyi entropies.
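As a concrete illustration of the majorization order described above, here is a small numerical sketch (the eigenvalue vectors are chosen purely for illustration): if the spectrum of $\sigma$ majorizes that of $\rho$, the Shannon entropy of $\rho$ comes out larger, and the flat spectrum is majorized by everything.

```python
import math

def majorizes(p, q):
    """True if p majorizes q: the descending-sorted partial sums of p
    dominate those of q, with equal totals."""
    p, q = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for x, y in zip(p, q):
        sp += x
        sq += y
        if sp < sq - 1e-12:
            return False
    return abs(sp - sq) < 1e-9

def shannon_entropy(p):
    """H(p) = -sum p_i log p_i (natural log), skipping zero entries."""
    return -sum(x * math.log(x) for x in p if x > 0)
```

Schur-concavity is exactly the statement checked in the test: majorization of the spectra implies the reverse ordering of the entropies.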
I have a system of two equations that describes the position of a robot end-effector ($X_C, Y_C, Z_C$) as a function of the prismatic joint positions ($S_A, S_B$): $S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$ $X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$ where $M$ and $L$ are constants. In the paper, the author states that differentiating this system at a given point ($X_C, Y_C, Z_C$) gives the "differential relationship" in the form: $a_{11}\Delta S_A + a_{12}\Delta S_B = b_{11}\Delta X_C + b_{12}\Delta Y_C + b_{13}\Delta Z_C$ $a_{21}\Delta S_A + a_{22}\Delta S_B = b_{21}\Delta X_C + b_{22}\Delta Y_C + b_{23}\Delta Z_C$ Later on, the author uses these parameters ($a_{11}, a_{12}, b_{11}, \dots$) to construct matrices, and by multiplying them he obtains the Jacobian of the system. I am aware of partial differentiation, but I have never done this for a system of equations, nor do I understand how to get those delta parameters. Can anyone explain the proper steps to perform partial differentiation on this system, and how to calculate the delta parameters? EDIT Following the advice given by N. Staub, I differentiated the equations w.r.t. time.
First equation: $S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$ $=>$ $2S_A \frac{\partial S_A}{\partial t} -\sqrt3S_A \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_A}{\partial t} -\sqrt3S_B \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_B}{\partial t} = 2S_B\frac{\partial S_B}{\partial t} + S_A\frac{\partial Y_C}{\partial t} + Y_C\frac{\partial S_A}{\partial t} - S_B\frac{\partial Y_C}{\partial t} - Y_C\frac{\partial S_B}{\partial t}$ Second equation: $X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$ $=>$ $2X_C \frac{\partial X_C}{\partial t} + 2Y_C \frac{\partial Y_C}{\partial t} + 2Z_C \frac{\partial Z_C}{\partial t} = -2S_A \frac{\partial S_A}{\partial t} + \sqrt3S_A \frac{\partial X_C}{\partial t} +\sqrt3X_C \frac{\partial S_A}{\partial t} + S_A \frac{\partial Y_C}{\partial t} + Y_C \frac{\partial S_A}{\partial t} + 2MS_A \frac{\partial S_A}{\partial t} + MS_B \frac{\partial S_A}{\partial t} + MS_A \frac{\partial S_B}{\partial t} + 2MS_B \frac{\partial S_B}{\partial t}$ then, I multiplied by $\partial t$, and grouped variables: First equation: $(2S_A -\sqrt3X_C - Y_C)\partial S_A +(-2S_B -\sqrt3X_C + Y_C)\partial S_B = (\sqrt3S_A +\sqrt3S_B)\partial X_C + (S_A - S_B)\partial Y_C$ Second equation: $(-2S_A+\sqrt3X_C+Y_C+2MS_A + MS_B)\partial S_A + (MS_A + 2MS_B)\partial S_B = (2X_C-\sqrt3S_A)\partial X_C + (2Y_C-S_A)\partial Y_C + (2Z_C)\partial Z_C$ therefore I assume that required parameters are: $a_{11} = 2S_A -\sqrt3X_C - Y_C$ $a_{12} = -2S_B -\sqrt3X_C + Y_C$ $a_{21} = -2S_A + \sqrt3X_C + Y_C + 2MS_A + MS_B$ $a_{22} = MS_A + 2MS_B$ $b_{11} = \sqrt3S_A +\sqrt3S_B$ $b_{12} = S_A - S_B$ $b_{13} = 0$ $b_{21} = 2X_C - \sqrt3S_A$ $b_{22} = 2Y_C - S_A$ $b_{23} = 2Z_C$ Now. 
According to the paper, the Jacobian of the system can be calculated as $J = A^{-1} B$, where $A=(a_{ij})$ and $B=(b_{ij})$, so if I am thinking about this right, it means: $$ A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{bmatrix} $$ $$ B = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ \end{bmatrix} $$ and the Jacobian is the product of the inverse of $A$ and $B$. Next, the author states that the Jacobian at the given point, where $X_C = 0$, $S_A=S_B=S$, $Y_C = l_t-\Delta\gamma$, is equal to: $$ J = \begin{bmatrix} \frac{\sqrt3S}{2S-l_t+\Delta\gamma} & -\frac{\sqrt3S}{2S-l_t+\Delta\gamma}\\ \frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} & \frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} \\ \frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} & \frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} \\ \end{bmatrix} ^T $$ Everything seems fine. BUT! After multiplying my $A^{-1}$ and $B$ matrices I get some monster matrix that I am unable to paste here because it is so large! Substituting the values given by the author does not give me the proper Jacobian (I tried substituting before multiplying the matrices, on the parameters, and after the multiplication, on the final matrix). So clearly I'm still missing something. Either I've made an error in the differentiation, or an error in the matrix multiplication (I used Maple), or I don't understand how to substitute those values. Can anyone point me in the right direction? EDIT Problem solved! The parameters I calculated were correct; I had just messed up the simplification of the equations in the final matrix. Using the snippet from Petch Puttichai I was able to obtain the full Jacobian of the system. Thanks for the help!
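For completeness, the whole computation above fits in a few lines of code. The sketch below hard-codes the question's $a_{ij}$, $b_{ij}$ and forms $J = A^{-1}B$ with an explicit 2x2 inverse; at the special point $X_C=0$, $S_A=S_B=S$ it reproduces the author's closed-form Jacobian (with $Y_C$ playing the role of $l_t-\Delta\gamma$, and $M$ of $c$).

```python
import math

SQ3 = math.sqrt(3)

def coeff_matrices(SA, SB, XC, YC, ZC, M):
    """The a_ij and b_ij grouped exactly as in the question."""
    A = [[2*SA - SQ3*XC - YC,
          -2*SB - SQ3*XC + YC],
         [-2*SA + SQ3*XC + YC + 2*M*SA + M*SB,
          M*SA + 2*M*SB]]
    B = [[SQ3*(SA + SB), SA - SB,  0.0],
         [2*XC - SQ3*SA, 2*YC - SA, 2*ZC]]
    return A, B

def jacobian(SA, SB, XC, YC, ZC, M):
    """J = A^{-1} B (2x3), mapping (dX_C, dY_C, dZ_C) to (dS_A, dS_B)."""
    A, B = coeff_matrices(SA, SB, XC, YC, ZC, M)
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    Ainv = [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]
    return [[sum(Ainv[i][k]*B[k][j] for k in range(2)) for j in range(3)]
            for i in range(2)]
```

Numerically checking the output against the paper's symbolic special case is a quick way to catch exactly the kind of simplification slip mentioned in the EDIT.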
Dev:Bline Speed Bline's "speed" parameter If you have played enough with Blines and the "BLine Vertex" and "BLine Tangent" converters, you have perhaps noticed that a change in the "Amount" parameter doesn't always step forward/backward by the same amount. For example, adding 0.1 to "Amount" doesn't move a "BLine Vertex" by the same distance all the time. Near the bline's vertices (or near the curved parts) you'll notice that evenly spaced "Amount" values are either compressed together or expanded away from each other. Trying to make an object follow a bline will therefore lead to the object changing speed as it goes along. The problem lies in how Blines are defined and how a position on the Bline changes as the "Amount" parameter changes. I'll refer to this rate of change as the Bline's "speed". Why does the "speed" change? Firstly, a Synfig Bline is composed of several Bezier sections. Each section is a cubic Bezier curve. These sections are joined back to back, allowing for arbitrarily complex shapes. All the properties that hold for a single section also hold for any number of sections, so I'm going to focus on Blines with a single section, in other words, Blines with only two vertices. A Bline with a single section reduces to a cubic Bezier defined like this: $$\mathbf{B}(t)=(1-t)^3\mathbf{P}_0+3t(1-t)^2\mathbf{P}_1+3t^2(1-t)\mathbf{P}_2+t^3\mathbf{P}_3 \mbox{ , } t \in [0,1].$$ This equation describes the shape of the curve.
As the $t$ parameter increases from zero up to one, the point defined by the equation moves from the Bezier's start ($\mathbf{P}_0$) towards its end ($\mathbf{P}_3$). The rate of the motion as $t$ increases is the curve's "speed". Taking the derivative of this equation yields the "speed":

$$\frac{d\mathbf{B}(t)}{dt}= (1-t)^2 [ 3 ( \mathbf{P}_1 - \mathbf{P}_0 ) ] + 2t(1-t) [ 3 ( \mathbf{P}_2 - \mathbf{P}_1 ) ] + t^2 [ 3 ( \mathbf{P}_3 - \mathbf{P}_2 ) ] \mbox{ , } t \in [0,1].$$

You may have noticed that this equation is equivalent to a quadratic Bezier.
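The two formulas above can be evaluated directly to see the effect. A minimal sketch in plain Python (the control points are made up for illustration):

```python
# Cubic Bezier point and "speed" (|dB/dt|), straight from the two
# equations above.

def bezier(p0, p1, p2, p3, t):
    """Point on the cubic Bezier B(t), t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3*a + 3*t*u**2*b + 3*t**2*u*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def bezier_speed(p0, p1, p2, p3, t):
    """Magnitude of dB/dt at parameter t."""
    u = 1.0 - t
    d = tuple(u**2*3*(b - a) + 2*t*u*3*(c - b) + t**2*3*(e - c)
              for a, b, c, e in zip(p0, p1, p2, p3))
    return (d[0]**2 + d[1]**2) ** 0.5

# Hypothetical control points forming an arch.
P0, P1, P2, P3 = (0, 0), (0, 1), (1, 1), (1, 0)
print([bezier_speed(P0, P1, P2, P3, t) for t in (0.0, 0.5, 1.0)])
# → [3.0, 1.5, 3.0]
```

Equal steps in `t` near the endpoints therefore cover twice the distance of equal steps near the middle, which is exactly the uneven "Amount" stepping described above.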
This tells us that the "speed" can and does change as the $t$ parameter changes.

Adjusting a Bline's "speed"

Our objective is now to compensate the derivative to achieve a desired "speed". We cannot change the control points of the curve without changing its shape. The only other thing we can change is the parameter $t$. Therefore, we define a function $g(t)$ so that:

$$\frac{d\mathbf{B}(g(t))}{dt}=\boldsymbol{s}(t)$$

where $\boldsymbol{s}(t)$ is a vector $(s_x(t),s_y(t))$ that defines the desired speed as a function of $t$. The curve needs to move in a whole range of directions as it describes its shape; our objective is only to control the magnitude. This magnitude condition can be expressed as:

$$s_x^2(t)+s_y^2(t)=s_{mag}^2(t)$$

where $s_{mag}(t)$ is a function defining the desired "speed" magnitude.
We can expand our first equation a bit:

$$\frac{d\mathbf{B}(g(t))}{dt}=\frac{d\mathbf{B}(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}=\boldsymbol{s}(t)$$

Expanding the equation like this lets us use the original Bline's derivative definition, by replacing $t$ with $g(t)$. Next we substitute the x and y components into the magnitude condition equation:

$$\Bigg[\frac{dB_x(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}\Bigg]^2+\Bigg[\frac{dB_y(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}\Bigg]^2=s_{mag}^2(t)$$

Rearranging, we obtain an ordinary non-linear differential equation:

$$\frac{dg(t)}{dt}=\frac{s_{mag}(t)}{\sqrt{\Big[\frac{dB_x(g(t))}{d(g(t))}\Big]^2+\Big[\frac{dB_y(g(t))}{d(g(t))}\Big]^2}}$$

Solving this equation yields a function $g(t)$ such that the curve's "speed" is dictated by the function $s_{mag}(t)$.

Solving the equation

All that is left is to solve the equation. It is quite complex and, as I said before, the differential equation we got is non-linear. This makes it hard to find $g(t)$ as a closed formula. But even such a complex equation is easy to solve numerically, keeping in mind that what we want is simply the value of $g(t)$ so we can plug it into "Amount". A numerical solution of the equation gives us just that: the value of $g(t)$ at certain intervals. The Runge-Kutta method serves this purpose quite well, and it's also quite simple. All we need is to evaluate the derivative of the function that we want to find, and feed the values into the Runge-Kutta method. Let's try a simple case: constant speed.
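The constant-speed case can be sketched directly. A minimal plain-Python example (hypothetical control points) that RK4-integrates $dg/dt = s_{mag}/|d\mathbf{B}/dg|$:

```python
# Fixed-step RK4 for dg/dt = s_mag / |B'(g)|, for one cubic Bezier section.

def speed(pts, g):
    """|dB/dg| for a cubic Bezier with control points pts = (P0, P1, P2, P3)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = pts
    u = 1.0 - g
    dx = 3*u*u*(x1 - x0) + 6*g*u*(x2 - x1) + 3*g*g*(x3 - x2)
    dy = 3*u*u*(y1 - y0) + 6*g*u*(y2 - y1) + 3*g*g*(y3 - y2)
    return (dx*dx + dy*dy) ** 0.5

def reparam(pts, s_mag, steps=1000):
    """Integrate dg/dt = s_mag / |B'(g)| from g(0) = 0; return g(1)."""
    g, h = 0.0, 1.0 / steps
    f = lambda g: s_mag / speed(pts, g)
    for _ in range(steps):
        k1 = f(g)
        k2 = f(g + 0.5 * h * k1)
        k3 = f(g + 0.5 * h * k2)
        k4 = f(g + h * k3)
        g += h * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return g

pts = ((0, 0), (0, 1), (1, 1), (1, 0))   # hypothetical control points
# Arc length = integral of |B'(g)|; a midpoint rule is enough for a demo.
length = sum(speed(pts, (i + 0.5) / 1000) for i in range(1000)) / 1000
# With s_mag equal to the arc length, g must reach exactly 1 at t = 1.
print(round(length, 3), round(reparam(pts, length), 3))  # → 2.0 1.0
```

Feeding `reparam`'s intermediate `g` values into "Amount" gives the constant-speed traversal; with `s_mag = 1` instead, the elapsed `t` when `g` reaches 1 is the curve's length, as described below.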
If $s_{mag}(t)$ is a constant value, then it would need to be equal to the Bline's length, so that as $t$ goes from 0.0 up to 1.0 the curve moves from the start to the end. Too little speed and the curve won't reach the end when $t$ reaches 1.0; too much and the curve will go past the end when $t$ reaches 1.0. Conveniently, this method also allows us to find a Bline's length: if we assume $s_{mag}(t)=1$, then the curve will reach its end when $t=LENGTH$, where LENGTH is the Bline's length.
Graphene has two atoms in its primitive unit cell. This makes it intuitive to see that the tight binding Hamiltonian can be constructed as a $ 2 \times 2 $ matrix $H$ acting on a spinor $S$ that consists of the wavefunctions from an atom in sublattice A and B. $H_{monolayer}=\gamma \cdot \begin{pmatrix} 0 & k_x-ik_y \\ k_x+ik_y & 0 \end{pmatrix}$ $S_{monolayer}=\begin{pmatrix} |\psi_A\rangle\\ |\psi_B\rangle \end{pmatrix}$ Bilayer graphene has four atoms in a primitive unit cell and its tight binding Hamiltonian is a $4 \times 4$ matrix whose matrix elements represent the hopping between said lattice sites (depending on how it is stacked and what hopping parameters you wish to involve in the calculation). An example might be as follows: $H_{bilayer}=\begin{pmatrix} 0 & 0 & 0 & v(k_x-ik_y)\\ 0 & 0 & v(k_x+ik_y) & 0\\ 0 & v(k_x-ik_y) & 0 & \gamma'\\ v(k_x+ik_y) & 0 & \gamma' & 0 \end{pmatrix}$ $S_{bilayer}=\begin{pmatrix} |\psi_{A1}\rangle\\ |\psi_{B2}\rangle\\ |\psi_{A2}\rangle\\ |\psi_{B1}\rangle \end{pmatrix}$ where the basis is chosen in an arbitrary order (the 1 and 2 indices refer to the layer number). How does one write this in a "two-component basis" and what does that mean? Also, what is this Hamiltonian acting on in this case? The bilayer Hamiltonian in this basis (which I do not know what it represents) is written as follows: $H'_{bilayer}=-\dfrac{\hbar^2}{2m}\begin{pmatrix} 0 & (k_x-ik_y)^2 \\ (k_x+ik_y)^2 & 0 \end{pmatrix}$
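Not an answer, but a quick numerical check of the claim being asked about (NumPy, with made-up parameter values $v=1$, $\gamma'=0.4$): the $4\times 4$ Hamiltonian above really does have a pair of low-energy bands with the quadratic dispersion $E \approx \pm v^2 k^2/\gamma'$ that the two-component form encodes, with $\hbar^2/2m$ playing the role of $v^2/\gamma'$.

```python
import numpy as np

v, gamma1 = 1.0, 0.4   # hypothetical hopping parameters
k = 0.01               # small momentum; take ky = 0 for simplicity

# The 4x4 bilayer Hamiltonian from the question, basis (A1, B2, A2, B1).
H = np.array([
    [0.0,  0.0,  0.0,    v*k],
    [0.0,  0.0,  v*k,    0.0],
    [0.0,  v*k,  0.0,    gamma1],
    [v*k,  0.0,  gamma1, 0.0],
])

E = np.sort(np.abs(np.linalg.eigvalsh(H)))
print(E[0], v**2 * k**2 / gamma1)  # the two agree to leading order in k
```

The other two eigenvalues sit near $\pm\gamma'$; projecting them out is what produces the effective two-component $k^2$ Hamiltonian.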
I need to calculate the length of a curve $y=2\sqrt{x}$ from $x=0$ to $x=1$. So I started by taking $\int\limits^1_0 \sqrt{1+\frac{1}{x}}\, \text{d}x$, and then doing substitution: $\left[u = 1+\frac{1}{x}, \text{d}u = \frac{-1}{x^2}\text{d}x \Rightarrow -\text{d}u = \frac{1}{x^2}\text{d}x \right]^1_0 = -\int\limits^1_0 \sqrt{u} \,\text{d}u$ but this obviously will not lead to the correct answer, since $\frac{1}{x^2}$ isn't in the original formula. Wolfram Alpha is doing a lot of steps for this integration, but I don't think that many steps are needed. How would I start with this integration?
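For what it's worth, one substitution that does work here (a sketch, not the only route) is $x=u^2$, $dx = 2u\,du$, which clears the $1/x$ from under the square root:

```latex
\int_0^1 \sqrt{1+\frac{1}{x}}\,dx
\;\overset{x=u^2}{=}\;
\int_0^1 \sqrt{1+\frac{1}{u^2}}\;2u\,du
= 2\int_0^1 \sqrt{u^2+1}\,du
= \Big[\,u\sqrt{u^2+1}+\ln\!\big(u+\sqrt{u^2+1}\big)\Big]_0^1
= \sqrt{2}+\ln\!\big(1+\sqrt{2}\big).
```

The singularity at $x=0$ is integrable (the integrand behaves like $1/\sqrt{x}$), so the improper integral is fine.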
DFT-D and basis-set superposition error¶ In this tutorial you will calculate the distance between two graphene layers. To calculate this distance you will need to account properly for van der Waals interactions (also termed dispersion interactions). The latter are underestimated by local (LDA) and semi-local (GGA) exchange-correlation functionals. As a result, using these functionals the layers do not bind (or bind too weakly) and the minimal total energy configuration becomes one where the layers are infinitely separated. The long-range van der Waals interaction will be included through the semi-empirical corrections by Grimme D2 [Gri06] and D3 [GAEK10]. These corrections do not attempt to describe the actual source of the interaction (fluctuating dipoles) but rather its effect on the DFT mean-field effective potential. You will also learn that incompleteness of the LCAO basis set can give rise to a so-called basis set superposition error (BSSE). The BSSE will contribute with an artificial attraction between the graphene layers, which actually partly compensates for the lack of van der Waals attraction, so with some exchange-correlation potentials the graphene layers will bind, but for the wrong reason. Thus, if you introduce the Grimme correction but do not account for BSSE, the layers will instead bind too strongly and a too small layer separation will be obtained. To remove the BSSE you will use the counterpoise correction [+BB70]. The BSSE is mostly noticeable when computing cohesive energies or binding energies where two subsystems can be clearly defined, like in the case of a molecule on a surface [+SS13] or, as in this tutorial, in layered systems. It is however also an issue for vacancy formation energies, where the material surrounding the vacancy has a lower number of basis orbitals than available in the vacancy case [+WZWH12]. 
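To get a feel for the size of the dispersion term at play, the D2 pair energy described in the next section can be sketched in a few lines of plain Python. This is not the QuantumATK API; the functional form is Grimme's 2006 one, and the carbon parameters ($R_0 = 1.452$ Å, $C_6 = 1.75$ J nm⁶/mol ≈ 18.1 eV Å⁶, $S_6 = 0.75$ for PBE) are the ones used later in this tutorial, with the pair cutoff radius taken as $R_0^{AB} = R_0^A + R_0^B$:

```python
import math

def f_damp(r, r0, d=20.0):
    """Fermi-type damping; switches the correction off at short range."""
    return 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))

def e_disp_pair(r, c6, r0, s6=0.75, d=20.0):
    """D2 pair energy -S6 * C6 / R^6 * f_damp(R) (Grimme's 2006 form)."""
    return -s6 * c6 / r**6 * f_damp(r, r0, d)

c6_carbon = 18.1          # eV * Ang^6, converted from 1.75 J nm^6/mol
r0_pair = 2 * 1.452       # Ang, sum of the two carbon R0 values

# Pair energy at a typical graphene interlayer distance of 3.3 Ang:
print(e_disp_pair(3.3, c6_carbon, r0_pair))  # attractive, about -0.01 eV
```

Per atom pair this is only of the order of 10 meV, which is exactly the scale of the interlayer binding energies tabulated at the end of this tutorial — hence the need to treat both the dispersion correction and the BSSE carefully.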
The DFT-D dispersion corrections¶

The corrections developed by Grimme and co-workers [Gri06][GAEK10] add an additional term \({E}_\mathrm{disp}\) to the DFT total energy \({E}_\mathrm{DFT}\) in order to account for van der Waals interactions:

\(E_\mathrm{tot} = E_\mathrm{DFT} + E_\mathrm{disp}\)

The D2 and D3 corrections differ in the way the term \({E}_\mathrm{disp}\) is evaluated.

D2 correction¶

In the D2 correction [Gri06] the \({E}_\mathrm{disp}\) term includes only two-body energies and is given by an attractive semi-empirical pair potential \(V^{\mathrm{PP}}\) which accounts only for the lowest-order dispersion term. The global scaling parameter \(S_6\) has been fitted to reproduce the thermochemical properties of a large training set of molecules and depends on the exchange-correlation functional employed. The pair potential between two atoms \(A\) and \(B\) separated by a distance \(R_{AB}\) involves a damping function and the element-specific parameters \(C_6\) and \(R_0\). Typically, the damping factor \(d_6\) is set to 20.

D3 correction¶

In the D3 correction [GAEK10], the \({E}_\mathrm{disp}\) term includes both two- and three-body energies:

Two-body energy term¶

The two-body energy includes dispersion terms up to 8th order. In this expansion \(S_6 = 1\) for all elements, whereas \(S_8\) depends on the exchange-correlation functional employed. The 6th-order pair potential is built from the pairwise \(C_6\) coefficients, and the 8th-order pair potential is constructed using the 6th-order data. \(C_{6, \mathrm{ref}}^{AB}\) are reference dispersion coefficients derived from time-dependent DFT (TDDFT) calculations using a training set of \(N_A\) (\(N_B\)) molecules for atom A (B). \(\langle r^2 \rangle\) and \(\langle r^4 \rangle\) are geometrically averaged multipole-type expectation values derived from atomic densities.
\(n^A\) and \(n^A_i\) (\(n^B\) and \(n^B_j\)) are the atomic fractional coordination numbers for atom A in the system and in the \(i\)-th (\(j\)-th) molecule of training set \(N_A\) (\(N_B\)), where the covalent radii \(R_{A,\mathrm{cov}}\) and \(R_{B,\mathrm{cov}}\) have been taken from [+PM09]. Finally, the damping functions for the 6th-order (\(f_6(R_{AB})\)) and 8th-order (\(f_8(R_{AB})\)) terms involve \(R_0^{AB}\), the cutoff radius for the \(AB\) element pair, and the damping factors \(d_{6}\) and \(d_{8}\), which depend on the exchange-correlation functional.

Note

As is evident from the equations above, in the D3 method \(V_6^{\mathrm{PP}}\) and \(V_8^{\mathrm{PP}}\) depend not only on the interatomic distance but also on the chemical environment of each atom via the fractional coordination number. This is one of the major differences compared to the D2 method.

Three-body energy term¶

The additional three-body energy contribution \({E}^{(3)}\) is evaluated from a three-body potential \(V^{\mathrm{TBP}}\), which is constructed starting from the two-body terms using a damping function based on geometrically averaged radii.

Note

The evaluation of \({E}^{(3)}\) changes the scaling behavior of the D3 algorithm from \(O(N^2)\) to \(O(N^3)\), where \(N\) is the number of atoms. Computing the three-body energy term can therefore be extremely expensive for large systems. Since the magnitude of the \({E}^{(3)}\) energy correction is much smaller than the two-body energy correction \({E}^{(2)}\), the former is switched off by default in QuantumATK.

BSSE and the counterpoise correction¶

In polyatomic systems, the so-called basis set superposition error (BSSE) arises due to the localised nature of the LCAO basis set. To understand the origin of the BSSE, consider a system formed by two subunits A and B. For simplicity, consider each subunit to be a single atom.
When A and B are far apart, they are described only by the basis orbitals centered at their respective atomic nuclei. However, when A couples to B the basis functions centered at A overlap with those centered at B. As a result, part of the basis orbitals centered at B become available to describe A. This gives rise to an artificial attraction between A and B. The BSSE can be minimized by using the counterpoise (CP) correction [+BB70], in which the corrected total energy is \(E^{\mathrm{CP}}_{AB} = E_{AB} + E^{\mathrm{corr}}\), where \(E_{AB}\) is the total energy of system AB and \(E^{\mathrm{corr}}\) is the counterpoise energy correction. The counterpoise correction is obtained through \(E^{\mathrm{corr}} = (E_{A} - E_{A\tilde{B}}) + (E_{B} - E_{\tilde{A}B})\), where \(E_{A\tilde{B}}\) is the energy of system A in the AB basis, which is obtained by taking the full system at its equilibrium geometry and defining the atoms of B as ghost atoms.

Note

Ghost atoms have no charge and no mass, but they have basis orbitals, defined by whichever element (and basis set) would define the atom if it were not a ghost.

The counterpoise correction is readily extended to several subunits \(A_i\) by summing the analogous terms \(E_{A_i} - E_{A_i\tilde{C_i}}\), where \(C_i\) is the \(A_i\) complement, i.e. the atoms not in region \(A_i\). In the following section you will explicitly see the effects of these two corrections on the calculated equilibrium distance between two graphene layers.

Set-up the graphene bilayer system¶

Rename the structure as “graphene2”, then set the length of the C-vector to 15 Å. This transforms the graphite crystal into a bilayer graphene model. Finally, center the geometry. To finish the setup of the structure you must add a tag to each graphene layer. Tags are used to distinguish the two layers, in order to define the counterpoise correction. To add the tags, select the two atoms in the first graphene layer, type “layer1” in the bottommost field, and press Enter. Then repeat the same steps for the second layer and tag it as “layer2”.

Tip

You can check that the tags are correctly set using the Selection by Tag tool on the toolbar at the top of the Builder.
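The counterpoise bookkeeping above is plain arithmetic once the five total energies are available. A sketch with made-up energy values in eV (plain Python, not the QuantumATK API):

```python
def cp_interaction(e_ab, e_a, e_a_ghostb, e_b, e_ghosta_b):
    """Counterpoise-corrected interaction energy of A with B.

    e_a_ghostb / e_ghosta_b: subsystem energies computed in the full AB
    basis (the partner's atoms kept as basis-only ghost atoms).
    """
    bsse = (e_a - e_a_ghostb) + (e_b - e_ghosta_b)  # >= 0: ghosts lower E
    return (e_ab - e_a - e_b) + bsse  # == e_ab - e_a_ghostb - e_ghosta_b

# Hypothetical totals: the ghost-basis energies are slightly lower because
# the extra orbitals variationally improve each subsystem.
e_ab, e_a, e_b = -200.30, -100.00, -100.00
e_a_ghostb, e_ghosta_b = -100.10, -100.10

uncorrected = e_ab - e_a - e_b
corrected = cp_interaction(e_ab, e_a, e_a_ghostb, e_b, e_ghosta_b)
print(round(uncorrected, 2), round(corrected, 2))  # → -0.3 -0.1
```

In this toy example two thirds of the apparent binding is BSSE, which mirrors what happens in the graphene calculations below: the uncorrected PBE result binds largely for the wrong reason.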
Geometry optimization without counterpoise correction¶

Select the PBE functional and set the k-point sampling to a grid of \(11 \times 11 \times 1\) k-points.

Analysis of the optimized structure¶

To inspect the relaxed configuration, expand graphene on the LabFloor. The file contains two configurations, gID000 and gID001. These are the input and relaxed structures, respectively. In the Builder, click the measurements option for distance, and measure the distance between the two carbon atoms that sit on top of each other in the Z direction. The interatomic distance is 3.47 Å, which is fairly close to the experimental value of 3.32 Å [+GE84]. As you will soon see, this is however the result of a cancellation of two errors!

Note

Most literature experimental values for the graphene interlayer distance are valid for graphite but not for the bilayer graphene system discussed in this tutorial. Hence there can be small differences between the calculated results and experimental data.

Including the counterpoise correction¶

Tip

All active tools are available through the Windows menu at the top of each tool.

Change the name of the output file to graphene2_cp.nc. It is currently not possible to set up the CP correction in the Scripter; the CP correction must therefore be added manually by editing the QuantumATK script. Therefore, transfer the script to the Editor using the Send To icon.
In the Editor, change the definition of the Calculator as follows:

#calculator = LCAOCalculator(
#    basis_set=basis_set,
#    exchange_correlation=exchange_correlation,
#    numerical_accuracy_parameters=numerical_accuracy_parameters,
#    correction_extension=correction_extension,
#)

# BSSE CP correction
bsse_calculator = counterpoiseCorrected(LCAOCalculator, ["layer1", "layer2"])
calculator = bsse_calculator(
    basis_set=basis_set,
    exchange_correlation=exchange_correlation,
    numerical_accuracy_parameters=numerical_accuracy_parameters,
    correction_extension=correction_extension,
)

Note how the system is separated into two parts (“layer1” and “layer2”) using the tags. The system can be divided into an arbitrary number of parts, and the counterpoise correction is applied to each pair of parts. The only requirement is that the parts are disjoint and sum up to the entire system. Save the script as graphene2_cp.py, send it to the Job Manager and start the calculation. The run time will be 2–5 times longer than before, since at each geometry optimization step 5 different systems now need to be calculated, viz. \(AB, A, A\tilde{B}, B, \tilde{A}B\). When the calculation is done, inspect the interlayer distance and you will find it to be 4.23 Å. This is considerably larger than the previous result, and far from the experimental value (3.32 Å). As mentioned in the introduction, the BSSE creates an artificial attraction between the graphene layers, which reduces the interlayer distance. The counterpoise correction eliminates this error, but you are now left with the problem that the layers are essentially unbound, due to the omission of van der Waals forces.

Including the D2 dispersion correction¶

Since the interlayer distance calculated with the CP correction is too large, there must be a missing attractive contribution to the total force. This is the van der Waals (dispersion) force, which is not correctly described by the PBE functional.
The vdW contribution to the total forces can be approximately added using the semi-empirical DFT-D2 approach. Under “Calculator settings” and “Basis set/Exchange correlation”, select the Grimme DFT-D2 van der Waals correction. Note that you can modify all the specific parameters by clicking the Parameters button.

#----------------------------------------
# Grimme DFTD2
#----------------------------------------
correction_extension = GrimmeDFTD2(
    global_scale_factor=0.75,
    damping_factor=20.0,
    maximum_neighbour_distance=30.0*Ang,
    element_parameters={
        Carbon: [1.452*Ang, 1.75*J*nm**6/mol],
    },
)

# BSSE CP correction
bsse_calculator = counterpoiseCorrected(LCAOCalculator, ["layer1", "layer2"])
calculator = bsse_calculator(
    basis_set=basis_set,
    exchange_correlation=exchange_correlation,
    numerical_accuracy_parameters=numerical_accuracy_parameters,
    correction_extension=correction_extension,
)

Run the script and, when the calculation is done, inspect the interlayer distance. You will now find it to be 3.31 Å. Thus, including the dispersion correction and accounting for BSSE, an interlayer distance close to the experimental result can be obtained.

Including the D3 dispersion correction¶

The D3 correction can be applied in a similar way as shown for the D2 correction. However, as discussed in [GAEK10], in this case the best results for the interlayer distance and binding energy are obtained using a revised PBE-type functional (revPBE) instead of the original PBE. Therefore, when adding the D3 correction, one also needs to change the exchange-correlation functional from PBE to revPBE (“RPBE” in the list of predefined functionals). As discussed in the method section, the D3 correction can be supplemented with a three-body energy term \({E}^{(3)}\) for an even more accurate evaluation of the vdW forces. However, evaluation of this term is computationally demanding, and the \({E}^{(3)}\) term is therefore not computed by default.
If one wants to use DFT-D3 including three-body interactions, we suggest to first optimize the structure using DFT-D3 without the \({E}^{(3)}\) term, and then perform a single-point calculation on the optimized geometry including the \({E}^{(3)}\) term.

Summary of the results¶

The results of the calculations are summarized in the following table.

                        E_B (meV/atom)    d (Å)
PBE                     -15               3.47
PBE+CP                  -0.6              4.23
PBE-D2+CP               -22.9             3.31
revPBE-D3+CP            -22.6             3.35
revPBE-D3+CP+3body      -18.6             3.36

The binding energy \(\Delta E\) of the bilayer system was calculated as \(\Delta E = (E_\mathrm{tot}^\mathrm{bilayer} - 2\,E_\mathrm{tot}^\mathrm{monolayer})/N\), where \(E_\mathrm{tot}^\mathrm{bilayer}\) and \(E_\mathrm{tot}^\mathrm{monolayer}\) are the total energies of bilayer and monolayer graphene, respectively, and \(N\) is the number of atoms in the bilayer cell.

Note

The calculation of the binding energies must be consistent in terms of exchange-correlation functional and dispersion correction. For example, to obtain the PBE-D2+CP result, the single-layer graphene reference calculation must be performed using DFT-D2.

References¶

[+BB70] (1, 2) S. F. Boys and F. Bernardi. The calculation of small molecular interactions by the differences of separate total energies. Some procedures with reduced errors. Molecular Physics, 19:553–566, 1970. doi:10.1080/00268977000101561.

[+GE84] N.N. Greenwood and A. Earnshaw. Chemistry of the Elements. Pergamon, 1984.

[Gri06] (1, 2, 3) S. Grimme. Semiempirical GGA-type density functional constructed with a long-range dispersion correction. J. Comput. Chem., 27(15):1787–1799, 2006. doi:10.1002/jcc.20495.

[GAEK10] (1, 2, 3, 4) S. Grimme, J. Antony, S. Ehrlich, and H. Krieg. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H–Pu. J. Chem. Phys., 2010. doi:10.1063/1.3382344.

[+PM09] P. Pyykkö and A. Michiko. Molecular single-bond covalent radii for elements 1–118. Chemistry: A European Journal, 15:186–197, 2009. doi:10.1002/chem.200800987.

[+SS13] Kurt Stokbro and Søren Smidstrup.
Electron transport across a metal-organic interface: simulations using nonequilibrium Green's function and density functional theory. Phys. Rev. B, 88:075317, 2013. doi:10.1103/PhysRevB.88.075317.

[+WZWH12] H. Wu, N. Zhang, H. Wang, and S. Hong. First-principles study of the oxygen-vacancy Cu2O (111) surface. J. Theor. Comput. Chem., 11:1261, 2012. doi:10.1142/S0219633612500848.
I'm having trouble parsing a definition in Lurie's "Rotation Invariance in Algebraic $K$-Theory". The definition is for the notion of the center of an associative algebra object, and occurs in Remark 2.1.3. The setting is as follows. We have a symmetric monoidal $\infty$-category $\mathcal{C}$. We write $\mathrm{Alg}(\mathcal{C})$ for the $\infty$-category of associative algebra objects in $\mathcal{C}$ and $\mathrm{LMod}(\mathcal{C})$ for the $\infty$-category of left modules in $\mathcal{C}$. Informally, we think of objects in $\mathrm{LMod}(\mathcal{C})$ as pairs $(A, M)$ where $A$ is an associative algebra object and $M$ is a left $A$-module. Noting that $\mathrm{Alg}(\mathcal{C})$ and $\mathrm{LMod}(\mathcal{C})$ inherit symmetric monoidal structures, we make the following definitions. We define $\mathrm{Alg}^{(2)}(\mathcal{C}) =\mathrm{Alg}(\mathrm{Alg}(\mathcal{C}))$ and $\mathrm{LMod}^{(2)}(\mathcal{C}) = \mathrm{Alg}(\mathrm{LMod}(\mathcal{C}))$. It is known that $\mathrm{Alg}^{(2)}(\mathcal{C})$ is equivalent to the category of $\mathbb{E}_2$-algebra objects. Informally, we think of objects in $\mathrm{LMod}^{(2)}(\mathcal{C})$ as pairs $(A,M)$ where $A$ is an $\mathbb{E}_2$-algebra object and $M$ is an $A$-algebra. We call $\mathrm{LMod}^{(2)}(\mathcal{C})$ the category of central actions in $\mathcal{C}$. Finally, we arrive at the definition I am stuck on. Fix an associative algebra object $M \in \mathrm{Alg}(\mathcal{C})$. We say that a central action $(A,M) \in \mathrm{LMod}^{(2)}(\mathcal{C})$ exhibits $A$ as a center of $M$ if, for every $\mathbb{E}_2$-algebra $B \in \mathrm{Alg}^{(2)}(\mathcal{C})$, the canonical map $$ \mathrm{Map}_{\mathrm{Alg}^{(2)}(\mathcal{C})}(B, A) \to \mathrm{LMod}^{(2)}(\mathcal{C}) \times_{\mathrm{Alg}(\mathcal{C})} \{M\} $$ is a homotopy equivalence. It is not stated what exactly the canonical map is, but it is probably thought of as follows.
The factor $\mathrm{Map}_{\mathrm{Alg}^{(2)}(\mathcal{C})}(B, A) \to \mathrm{LMod}^{(2)}(\mathcal{C})$ sends a map $\phi : B \to A$ of $\mathbb{E}_2$-algebras to the central action $(B,M)$ where $B$ acts via $\phi$. The other factor is crystal clear. Perhaps I'm not seeing clearly, but I'm not sure how to parse this definition. It seems to be saying the following. Fix an $\mathbb{E}_2$-algebra $B$. Then the data of a central action $(R,M)$ of any $\mathbb{E}_2$-algebra $R$ on $M$ is the same as the data of a map $B \to A$ of $\mathbb{E}_2$-algebras. This doesn't seem quite right to me. Naively translating this into ordinary algebra, it feels quite bizarre. Moreover, Lurie gives another definition of the center in Higher Algebra, Definition 5.3.1.6. It is not clear to me that these two definitions are equivalent. Am I just confused? Does this indeed produce a sensible notion of the center of an associative algebra? A very vague informal argument will suffice.
I am trying to find $\lim_{n \to \infty} \int_{(0,\infty)} \frac{n \sin (x/n)}{x(1+x^2)}dx$ by applying the Dominated Convergence Theorem for Lebesgue Integrals. First, considering $(f_n): f_n=\frac{n \sin (x/n)}{x(1+x^2)}=\frac{\sin(x/n)}{(x/n)} \frac{1}{1+x^2},$ it then follows that $\lim_{n \to \infty}\frac{\sin(x/n)}{(x/n)}\frac{1}{1+x^2}=\frac{1}{1+x^2}\lim_{n \to \infty}\frac{\sin(x/n)}{(x/n)}$. And thus substituting with $u=\frac{x}{n}$, we get that as $n \to \infty$, $u \to 0$. Obviously, $\lim_{u \to 0} \frac{\sin u }{u}=1 \Rightarrow \lim_{n \to \infty} f_n = \frac{1}{1+x^2}=f(x).$ Hence $f_n$ converges pointwise to $f$. Then the requirement is that $(f_n)$ is in $L_1$. I go on to argue that since on $(0,\infty)$ both $\frac{\sin(x/n)}{(x/n)}$ and $\frac{1}{1+x^2}$ are continuous, then both functions are measurable. And because the measurable functions form an algebra, then $f_n$, their product, is also measurable. How do we argue however that they are Lebesgue integrable? I know, for example, that the Lebesgue integral $\int_{(0,\infty)}\frac{\sin x}{x}$ does not exist. Also, I reckon we are going to use $g(x)=\frac{1}{1+x^2} \in L_1$ to dominate $f_n$ such that $|f_n|\leq g$ for all $n$. I argue as follows: on $(0,\infty)$ we have $|\sin (x/n)|\leq|x/n|$ for any $n$. Hence $|f_n|=\left|\frac{\sin(x/n)}{(x/n)}\frac{1}{1+x^2} \right|\leq \frac{1}{1+x^2}$. So, once again we get to the question of the function being in $L_1$. Finally, it is straightforward to apply DCT and get $\lim_{n \to \infty}\int f_n= \int f=\left.\tan^{-1}(x)\right|^{\infty}_{0}=\frac{\pi}{2}$. However, application of the Riemann integral bothers me somewhat, as we know that for the bounded Riemann integrable function, the proper Riemann integral is equivalent to the Lebesgue version. Does this application hold?
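Not a substitute for the measure-theoretic argument, but a quick numerical sanity check (Python with SciPy, assuming it is available) that the integrals do approach $\pi/2 \approx 1.5708$ as $n$ grows:

```python
import math
from scipy.integrate import quad

def f(x, n):
    """Integrand n*sin(x/n) / (x*(1 + x^2)) from the question."""
    if x == 0.0:
        return 1.0  # continuous extension: sinc(0) = 1
    return n * math.sin(x / n) / (x * (1.0 + x * x))

for n in (1, 10, 1000):
    val, _ = quad(f, 0.0, math.inf, args=(n,))
    print(n, round(val, 6))  # increases towards pi/2 = 1.570796...
```

Since $0 \le f_n(x) \le 1/(1+x^2)$ pointwise, each value stays below $\pi/2$, and the sequence creeps up to it, exactly as the DCT predicts.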
ISSN: 1531-3492, eISSN: 1553-524X. Discrete & Continuous Dynamical Systems - B, August 2014, Volume 19, Issue 6. Abstract: This work is concerned with the properties of the traveling wave of the backward and forward parabolic equation \begin{equation*} u_t= [ D(u)u_x]_x + g(u),\quad t\geq 0, x\in \mathbb{R}, \end{equation*} where $D(u)$ changes its sign once, from negative to positive, in the interval $u\in [0,1]$ and $g(u)$ is a monostable nonlinear reaction term. The existence of infinitely many traveling wave solutions is proven. These traveling waves are parameterized by their wave speed and monotonically connect the stationary states $u\equiv0$ and $u\equiv 1$. Abstract: We study three sub-problems of the $N$-body problem that have two degrees of freedom, namely the $n$-pyramidal problem, the planar double-polygon problem, and the spatial double-polygon problem. We prove the existence of several families of symmetric periodic orbits, including ``Schubart-like'' orbits and brake orbits, by using topological shooting arguments. Abstract: This paper considers binomial approximation of continuous-time stochastic processes. It is shown that, under some mild integrability conditions, a process can be approximated in the mean square sense and in other strong metrics by binomial processes, i.e., by processes with fixed-size binary increments at sampling points. Moreover, this approximation can be causal, i.e., at every time it requires only past historical values of the underlying process. In addition, the possibility of approximating solutions of stochastic differential equations by solutions of ordinary equations with binary noise is established. Some consequences for financial modelling and options pricing models are discussed. Abstract: We study three optimal control problems associated with Gompertz-type differential equations, including bound control and integral constraints.
These problems can be interpreted in terms of planning anticancer therapies. Existence of optimal controls is proved and all their possible structures are determined in detail by using Pontryagin's Maximum Principle. The influence of the pharmacokinetic and pharmacodynamic variants, together with the integral constraint, is analyzed. Moreover, numerical results for some illustrative examples and our conclusions are presented. Abstract: We consider a two-species competition system with nonlinear diffusion and exhibit exact solutions of the system. We first show the existence of spatially stationary solutions that are periodic patterns. In a particular case, we also provide a time-dependent solution that approximates this periodic solution. We also show that the system may sustain unbounded wavefronts above the coexistence equilibrium. In the case of equal intrinsic growth rates, we give a sharp wavefront solution with semi-finite support. Abstract: We consider the initial boundary value problem of the one-dimensional full bipolar hydrodynamic model for semiconductors. The existence and uniqueness of the stationary solution are established by the theory of strongly elliptic systems and the Banach fixed point theorem. The exponentially asymptotic stability of the stationary solution is given by means of the energy estimate method. Abstract: In this article, we consider linear hyperbolic Initial and Boundary Value Problems (IBVP) in a rectangle (or possibly curvilinear polygonal domains) in both the constant and variable coefficients cases. We use the semigroup method instead of Fourier analysis to achieve well-posedness of the linear hyperbolic system, and we find by diagonalization that there are only two elementary modes in the system, which we call hyperbolic and elliptic modes. The hyperbolic system in consideration is either symmetric or Friedrichs-symmetrizable.
Abstract: When developing efficient numerical methods for solving parabolic types of equations, severe temporal stability constraints on the time step are often required due to the high-order spatial derivatives and/or stiff reactions. The implicit integration factor (IIF) method, which treats spatial derivative terms explicitly and reaction terms implicitly, can provide excellent stability properties in time with good accuracy. One major challenge for the IIF is the storage and calculation of the dense exponentials of the sparse discretization matrices resulting from the linear differential operators. The compact representation of the IIF (cIIF) can overcome this shortcoming and greatly reduce computational cost and storage. On the other hand, the cIIF is often hard to apply directly to problems involving cross derivatives. In this paper, by treating the discretization matrices in diagonalized forms, we develop an efficient cIIF method for solving a family of semilinear fourth-order parabolic equations, in which the bi-Laplace operator is handled explicitly and the computational cost and storage remain the same as for the classic cIIF for second-order problems. In particular, the proposed method can deal with not only stiff nonlinear reaction terms but also various types of homogeneous or inhomogeneous boundary conditions. Numerical experiments are presented to demonstrate the effectiveness and accuracy of the proposed method. Abstract: We present a numerical study of solutions to the generalized Kadomtsev-Petviashvili equations with critical and supercritical nonlinearity for localized initial data with a single minimum and single maximum. In the cases with blow-up, we use a dynamic rescaling to identify the type of the singularity. We present the first discussion of the observed blow-up scenarios.
We show that the blow-up in solutions to the $L_{2}$-critical generalized Kadomtsev-Petviashvili I case is similar to what is known for the $L_{2}$-critical generalized Korteweg-de Vries equation. No blow-up is observed for solutions to the generalized Kadomtsev-Petviashvili II equations for $n\leq2$. Abstract: In this paper, we apply the method of dynamical systems to a generalized two-component Hunter-Saxton system. Through qualitative analysis, we obtain bifurcations of the phase portraits of the traveling wave system. Under different parameter conditions, exact explicit smooth solitary wave solutions, solitary cusp wave solutions, as well as periodic wave solutions are obtained. To guarantee the existence of these solutions, rigorous parametric conditions are given. Abstract: In this paper, we apply the averaging theory to a class of three-dimensional autonomous quadratic polynomial differential systems of Lorenz type, to show the existence of limit cycles bifurcating from a degenerate zero-Hopf equilibrium. Abstract: We present an unconditionally stable finite difference method for solving the viscous Cahn--Hilliard equation. We prove the unconditional stability of the proposed scheme by using the decrease of a discrete functional. We present numerical results that validate the convergence and unconditional stability properties of the method. Further, we present numerical experiments that highlight the different temporal evolutions of the Cahn--Hilliard and viscous Cahn--Hilliard equations. Abstract: In this paper, a general viral model with virus-driven proliferation of target cells is studied. Global stability results are established by employing the Lyapunov method and a geometric approach developed by Li and Muldowney. It is shown that under certain conditions the model exhibits global threshold dynamics, while if these conditions are not met, then backward bifurcation and bistability are possible.
An example is presented to provide some insights on how the virus-driven proliferation of target cells influences the virus dynamics and the drug therapy strategies. Abstract: In this paper, we answer the question about the existence of the minimal speed of front propagation in a delayed version of the Murray model of the Belousov-Zhabotinsky (BZ) chemical reaction. It is assumed that the key parameter $r$ of this model satisfies $0< r \leq 1$, which makes it formally monostable. By proving that the set of all admissible speeds of propagation has the form $[c_*,+\infty)$, we show here that the BZ system with $r \in (0,1]$ is actually of the monostable type (in general, $c_*$ is not linearly determined). We also establish the monotonicity of wavefronts and present the principal terms of their asymptotic expansions at infinity (the critical case $r=1$ inclusive). Abstract: The environment of HIV-1 infection and treatment can be non-periodically time-varying. The purposes of this paper are to investigate the effects of time-dependent coefficients on the dynamics of a non-autonomous and non-periodic HIV-1 infection model with two delays, and to provide explicit estimates of the lower and upper bounds of the viral load. We establish sufficient conditions for the permanence and extinction of the non-autonomous system based on two positive constants $R^{\ast}$ and $R_{\ast}$ ($R^{\ast}\geq R_{\ast}$) that can be precisely expressed in terms of the coefficients of the system: (i) if $R^{\ast}<1$, then the infection-free steady state is globally attracting; (ii) if $R_{\ast}>1$, then the system is permanent. When the system is permanent, we further obtain detailed estimates of both the lower and upper bounds of the viral load. The results show that both $R^{\ast}$ and $R_{\ast}$ reduce to the basic reproduction ratio of the corresponding autonomous model when all the coefficients become constants.
Numerical simulations have been performed to verify and extend our analytical results. We also provide some numerical results showing that both permanence and extinction are possible when $R_{\ast }< 1 < R^{\ast}$ holds. Abstract: In this paper, we are concerned with the long-time behavior of the following non-autonomous quasi-linear complex Ginzburg-Landau equation with $p$-Laplacian \begin{align*} \frac{\partial u}{\partial t}-(\lambda+i\alpha)\Delta_p u+(\kappa+i\beta)|u|^{q-2}u-\gamma u=g(x,t) \end{align*} without any restriction on $q>2$ under additional assumptions. We first prove the existence of a pullback absorbing set in $L^2(\Omega) \cap W^{1,p}_0(\Omega)\cap L^q(\Omega)$ for the process $\{U(t,\tau)\}_{t\geq \tau}$ corresponding to the non-autonomous quasi-linear complex Ginzburg-Landau equation (1)-(3) with $p$-Laplacian. Next, the existence of a pullback attractor in $L^2(\Omega)$ is established by the Sobolev compact embedding theorem. Finally, we prove the existence of a pullback attractor in $W^{1,p}_0(\Omega)$ for the process $\{U(t,\tau)\}_{t\geq \tau}$ associated with the non-autonomous quasi-linear complex Ginzburg-Landau equation (1)-(3) with $p$-Laplacian by asymptotic a priori estimates.
Interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ such that the $x$-th bit is set to 1, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$; then $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$. For $|z| < |z_0|$ we then have $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, and hence summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
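Not a proof of the $n!\,g_n \to 0$ claim above, but a quick numerical check with the sample choice $g = \cos$ (the grid size, the trapezoid rule, and the choice of $g$ are mine):

```python
import math

# numerically iterate g_{n+1}(t) = \int_0^t g_n(s) ds on [0, 1/2], g = cos
N = 4000                 # grid points on [0, 1/2]
h = 0.5 / N

def cumtrapz(f):
    """cumulative trapezoid antiderivative on the grid (value 0 at t=0)"""
    out = [0.0]
    for i in range(1, len(f)):
        out.append(out[-1] + 0.5 * (f[i - 1] + f[i]) * h)
    return out

gn = [math.cos(i * h) for i in range(N + 1)]   # g_1 = g
vals = []
for n in range(1, 13):
    vals.append(math.factorial(n) * gn[-1])    # n! * g_n(1/2)
    gn = cumtrapz(gn)                          # g_{n+1}
print(vals)  # tends towards 0, consistent with the claim
```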
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space, and I obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional are obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but with no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogorov's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
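The mechanism behind such examples is that the partial-sum operators are unbounded on the continuous functions: their norms are the Lebesgue constants $L_n = \frac{1}{\pi}\int_0^\pi |D_n(t)|\,dt$, which grow like $\frac{4}{\pi^2}\ln n$. A quick numerical look (the grid size here is my own choice):

```python
import math

def lebesgue_constant(n, steps=100000):
    """L_n = (1/pi) * integral over (0, pi) of |D_n(t)| dt, where
    D_n(t) = sin((n+1/2)t) / sin(t/2) is the Dirichlet kernel;
    computed with a composite midpoint rule."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += abs(math.sin((n + 0.5) * t) / math.sin(t / 2)) * h
    return total / math.pi

for n in (10, 100, 1000):
    print(n, lebesgue_constant(n))
# grows like (4/pi^2) * ln(n): the partial-sum operators are unbounded on C(T)
```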
Let $X$ be a smooth projective geometrically connected curve over $\mathbf{Q}$ of genus at least two. Fix an algebraic closure $\overline{\mathbf{Q}}$ of $\mathbf{Q}$ and let $G_{\mathbf{Q}}$ be the absolute Galois group of $\mathbf{Q}$. Moreover, fix a rational base point $x$ in $X(\mathbf{Q})$. Let $\sigma:G_{\mathbf{Q}}\to \pi_1(X)$ be a section of the exact sequence of groups $$ 1\to \pi_1(X_{\overline{\mathbf{Q}}})\to \pi_1(X) \to G_{\mathbf{Q}}\to 1.$$ Let $K\subset \overline{\mathbf{Q}}$ be a finite field extension of $\mathbf{Q}$. (I don't want to assume $K$ to be Galois, but please do if this helps.) Let $G_K$ be the absolute Galois group of $K$. Then $G_K$ is an open subgroup of $G_{\mathbf{Q}}$. Note that $\pi_1(X_K)$ (with the same base point $x$) injects into $\pi_1(X)$. Question. Does there exist a section $\sigma^\prime:G_{\mathbf{Q}}\to \pi_1(X)$ which is a $\pi_1(X_{\overline{\mathbf{Q}}})$-conjugate of $\sigma$ such that the image of $\sigma^\prime|_{G_K}$ lies in $\pi_1(X_K)$? Motivation. If $a\in X(\mathbf{Q})$, then $a\in X(K)$. Thus, if $\sigma$ is a section associated to $a$, then the answer to the above question is positive. My question is really about sections that a priori do not come from a rational point. Note. I always use the base point $x$ to define the fundamental group and we can replace $\mathbf{Q}$ by any number field.
I've been reading Wikipedia's article on continued fractions. A few examples are given for the continued-fraction representation of irrational numbers: $\sqrt{19}=[4;2,1,3,1,2,8,2,1,3,1,2,8,\dots]$ (OEIS). The pattern repeats indefinitely with a period of $6$. $\phi=[1;1,1,1,1,1,1,1,1,1,1,1,\dots]$ (OEIS). The pattern repeats indefinitely. $e=[2;1,2,1,1,4,1,1,6,1,1,8,\dots]$ (OEIS). The pattern repeats indefinitely with a period of $3$, with $2$ added to the 3rd term in each cycle. $\pi=[3;7,15,1,292,1,1,1,2,1,3,1,\dots]$ (OEIS). The terms in this representation are apparently random. The first two are algebraic irrationals, and the last two are transcendental. We differentiate between these two types of irrational numbers by definition: a number is algebraic if and only if it is a root of some non-zero rational-coefficient polynomial. This gives me the impression that transcendental numbers are "more irrational" than algebraic irrational numbers, so to speak (please excuse the non-mathematical terminology here). The continued-fraction examples above give me the additional impression that some transcendental numbers are even "more irrational" than others (i.e., $\pi$ is "more irrational" than $e$ and $\phi$). Has any work been done towards differentiating between those transcendental numbers which can be represented by a "patterned" continued fraction and those which cannot? Moreover, just as the algebraic numbers constitute only a "small" countable part of all real numbers (while the transcendental numbers make up the "much larger" uncountable part), is it possible that the "patterned" transcendental numbers are countable and the "random" transcendental numbers are uncountable? This question can be reduced to a distinction between "patterned" sequences of natural numbers and "random" sequences of natural numbers.
I think that the former set is countable and the latter set is uncountable, but I am not sure how it can be proved (mostly because I am not sure how to mathematically differentiate between "patterned" and "random").
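For anyone who wants to experiment, the expansions quoted above are easy to reproduce with the standard floor-and-invert algorithm (floating point only, so just the first dozen or so terms can be trusted):

```python
import math

def cf_terms(x, k):
    """First k continued-fraction terms of x via the floor/invert
    algorithm.  Pure floating point: reliable only for roughly the
    first dozen terms before rounding error takes over."""
    terms = []
    for _ in range(k):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:       # stop if x is (numerically) an integer
            break
        x = 1.0 / frac
    return terms

print(cf_terms(math.sqrt(19), 7))           # [4, 2, 1, 3, 1, 2, 8]
print(cf_terms(math.pi, 6))                 # [3, 7, 15, 1, 292, 1]
print(cf_terms((1 + math.sqrt(5)) / 2, 6))  # golden ratio: all 1s
```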
Here are a couple of references; in particular the first link, to Andy Lo's paper, contains a list of Sharpe ratios of popular mutual and hedge funds: The Statistics of Sharpe Ratios; Dow Jones Credit Suisse Hedge Fund Index; Generalized Sharpe Ratios and Portfolio Performance Evaluation. I would go with the first paper. The answer you are looking for might be the story in "Benchmarking Measures of Investment Performance with Perfect-Foresight and Bankrupt Asset Allocation Strategies", by Grauer (Journal of Portfolio Management). While this work's main concerns are the differential ranking of various performance measures and negative betas for market timing strategies, ... In this case, the t-statistic is used to determine if the returns are statistically different from zero (the theoretical mean). A small t-statistic would imply that the null hypothesis (no significant excess return) cannot be rejected. Newey-West standard errors are used to correct for the correlations of error terms over time. I have written a Matlab ... I would even stick to the original paper by Sharpe (1966): Mutual Fund Performance. The Journal of Business, Vol. 39, No. 1, Part 2, pp. 119--138. If you look at the numbers on page 6 you can see that the funds' Sharpe ratios are roughly between $0$ and $1$. Since the Sharpe ratio already adjusts for the risk-free rate, you cannot really argue about its ... Edited comments: the Sharpe ratio covers both future and historical time frames (as @Freddy points out). Referencing "Geometric Return and Portfolio Analysis": for the historical calculation, you want to make as few assumptions as possible (in my opinion). Let $m_i \triangleq$ the monthly return for period $i$ and $r_t \triangleq$ annual return, for $i\... This optimization is trivial: $$w^{T,J}_i = \begin{cases} 1 \quad \text{if } i=\arg \max_i R^{T,J}(S_i) \\0 \quad \text{otherwise} \end{cases}$$ That is to say, when you optimize, only one weight will be nonzero.
That's because these ratios incorporate no notion of distributional width, and therefore do not reward diversification. With no concentration ... Yes, you are correct on both terms: it doesn't make much sense, and there exists a well-cited solution by C. Israelsen: "A refinement to the Sharpe ratio and information ratio." Journal of Asset Management 5.6 (2005): 423-427. The adjustment he gives is to define $$SR_{adj} = \frac{r}{\sigma^{\frac{r}{abs(r)}}},$$ which solves the ranking problem during ... Ideally you'd want to use daily returns and just annualise, but if you only have monthly returns then calculating the weighted variance in the following way might do it: $$Var = \frac{\sum_{i=0}^{24}(R_i - \mu)^2}{24 + \frac{21}{31}} + \frac{\frac{21}{31} (R_{25}' - \mu)^2}{24 + \frac{21}{31}}$$ $$Vol = \sqrt{Var}$$ where $R_i$ is the returns of ... No, this is not the same. For example, consider the scenario $$\begin{align*}r_A &= 10\% \quad\quad \sigma_A = 10\% \\r_B &= 1.5\% \quad\quad \sigma_B = 1\% \\\end{align*}$$ If $r_f=1\%$, $$\text{SR}_A=0.90 \quad\quad \text{SR}_B=0.50$$ then $A$ has the higher Sharpe. Now if $r_f=0\%$, $$\text{SR}_A=1.00 \quad\quad \text{SR}_B=1.50$$ then $... A short position is a liability on your books, as the borrowed asset has to be returned to the owner. The return is then the percentage return of that liability. Assume that the shorted asset at initial time $t_0$ has price $p(t_0)$. The initial liability is then $p(t_0)$. At a future time $t$ the liability is $p(t)$. The return at time $t$ is hence $$r(t,... Consider these two simple portfolios: Portfolio 1 returns -10% in month 1 and 10% in month 2. The average arithmetic return is zero, and the cumulative return is $(1-10\%)(1+10\%)=0.99$. Portfolio 2 returns -50% in month 1 and 50% in month 2. The average arithmetic return is still zero, but the cumulative return is $(1-50\%)(1+50\%)=0.75$, a much lower terminal value! In ...
Since you're looking to summarize the performance of a monthly return series in a single number, it is best to compute the annualized return. This is the standard used in the investment management industry. You could also compare your portfolio returns with those of an industry benchmark like the S&P 500 on an annualized basis. Assuming your returns are in ... I don't know that there is a "standard solution crystallized in the community," but there are alternatives. The ones that I prefer are Omega, Sortino, and Kappa. All three of these ratios, unlike Sharpe, do not assume normally distributed returns. Omega Ratio: This is the probability-weighted ratio of gains versus losses for a given minimum acceptable ... There is no universally accepted answer for the main problem here, which is that the denominator for the return calculation is zero or near zero. There are a few common solutions to this issue. The simplest solution is to use the total portfolio notional as the divisor for the PnL. This can be considered the PnL contribution of that long/short sub-portfolio ... Martijn Cremers and Antti Petajisto have a series of papers using the concept of "Active Share", a new measure of active portfolio management which represents the share of portfolio holdings that differ from the benchmark index holdings, to evaluate mutual fund managers. They find that the most active stock pickers have outperformed their benchmark indices ... I think you need to define exactly which ratio you are talking about. For example, the ex-post Sharpe ratio's components are all well known: you have your realized returns, risk-free returns (or whatever other benchmark you define your excess returns against) and the realized volatility of returns. For realized asset returns you should not use log returns but ... The PerformanceAnalytics library reflects several years' worth of development by Brian Peterson and Peter Carl, as well as multiple collaborators.
It is fairly widely used, tested and debugged. Basic software engineering practices suggest that you should strive to re-use it if possible. Options for that include accessing a remote R instance via RServe (... Perhaps check out Poti and Levich (2009), or, in a different setting but from one of the same authors, Poti and Wang (2010), "The coskewness puzzle", in JBF. They directly address the issue of what level of SR is plausible. Pardon the lack of an actual link, and the formatting, but in footnote 6 of "Alpha is Volatility times IC times Score", Grinold, Richard C., Journal of Portfolio Management, Summer 1994, v20 n4, p9(8), Grinold suggests that "a truly outstanding manager" might have an information ratio of 1.33: (6) A rough guideline for determining the required IC comes from ... In quant finance we start with the assumption that (until shown otherwise) no one can outperform a simple, passive benchmark. Such a benchmark might be, for example, the S&P 500 index leveraged up or down by borrowing/lending. To calculate your alpha we would obtain your monthly returns [actually excess returns $r-r_f$] for the past N months and regress ... As @Alex C has pointed out, the CAPM and subsequently Jensen were probably the original motivations for the term $\alpha$. Bear in mind that $\alpha$ and $\beta$ are conventional notation for coefficients in a linear regression model, and just as easily we can understand the intuition by thinking of this as an explanatory linear model of portfolio ... For a single-period return, the squared value of that return approximates the variance (i.e., the absolute value approximates the standard deviation). Standard deviation is defined thus: $$\sigma_X = \sqrt\frac{\Sigma_1^N\mathbb{E}[X-\mu_x]^2}{N}$$ For a non-drifting process, $\mu_x = 0$. Also, in our scenario, $X = (r_a - r_m)$ and $N = 1$. Therefore, an ... Both questions are not as straightforward as @Hui (and most academics and practitioners) would immediately think.
I would try to put in my two cents on answering your question 1. Short answer: it might have to do with the bias-variance tradeoff, as measuring alpha precisely is a tricky task in small samples (and young funds do have short histories). ... I believe that by "luck" you mean that you want to check whether you can attribute the PnL of your strategy to something other than the "alpha" that it's trying to capture. The standard way of doing this is by using standard market factors (such as Barra's standard risk model for equities, say https://www.msci.com/www/research-paper/barra-s-risk-models/014972229 ) ... This is a very common and serious problem among academic papers and with some hedge fund marketing materials; I can almost guarantee that the high ratio of 7 was without transaction costs, and that when they are included this 7 will drop down to somewhere between 0 and 1. The cost of leverage for equity-only long/short investing is a function of the margin deal you can negotiate with your broker, if you have a large amount of capital. If you don't have significant capital to start with, then it's likely you'll only be able to get 2x leverage with a loan rate between 4% and 10% (retail Reg-T margin rates at most brokers). This ... A very well-thought-through exposition on the matter is given in this paper: A Consultant's Perspective on Distinguishing Alpha from Noise, by John R. Minahan. It combines a lot of wisdom and common sense that sometimes seems to get lost in the process... My 2c worth: experience tells me that the better ways to get a feel for whether their strategy is based on something more than luck include: 1) 'getting to know your traders': have a chat, pick their brains, try to get some insight into their methods; 2) seeing how hard the market has been: check whether you have just been part of a bull market which ... For analyzing a series of trades on a single stock over a period of time:
You can understand your market timing contribution by comparing your actual return to the return from consistently holding your average exposure to the stock over that whole period. To then get a feeling for how much you are contributing compared to how much you are messing with a ...
Within a certain system of measures, conversion factors are typically exact. In imperial units, this means that a foot is always twelve inches, a yard is always three feet and a mile is always 1760 yards. With the exact conversions, we can use multiplication to see that: $$1~\mathrm{yd} = 36'' \pm 0''\\1~\mathrm{mi} = 5280' \pm 0'\\1~\mathrm{mi} = 63360'' \pm 0''$$ Likewise, the conversion from cubic imperial units — which are defined as $\text{imperial unit}\times\text{imperial unit}\times\text{imperial unit}$ — to other cubic imperial units is equally exact. $$1~\mathrm{yd^3} = 1~\mathrm{yd}\times 1~\mathrm{yd}\times 1~\mathrm{yd} = 3' \times 3' \times 3' = 27~\mathrm{ft^3}$$ This is akin to the exactness of the conversion of metric units, except that the metric factors are always multiples of 10. But there are also semi-metric units, such as the German ‘metric pound’, by definition $500~\mathrm{g}$. Conversion from grams to German metric pounds is as exact as conversion from grams to kilograms. This becomes different if you leave your system of measures and compare different systems. Except in a few very special cases, where two systems chose physically identical starting and/or ending points but with different step sizes, [1] the conversion factors will be nonexact. Any nonexact conversion factor will introduce an error of its own into the equation, and how to deal with these is best shown in Martin’s answer. Note that even if the transformation is exact, measured values will still have an inherent uncertainty in them, as Martin pointed out. The uncertainty is transformed across the conversion and does not change in relative magnitude. [1]: See for example temperature. A measured temperature of $15~\mathrm{^\circ C}$ can be transformed into an exact value in kelvin because the kelvin and celsius scales are identical except for an additive offset. Thankfully, the error is also identical.
$$(15.00 \pm 0.005)~\mathrm{^\circ C} = (288.15 \pm 0.005)~\mathrm{K}$$ Similarly, the Réaumur temperature scale is defined such that the boiling point of water is $80~\mathrm{^\circ r}$ while its zero point is identical to that of the Celsius scale. The step size is therefore different and $1~\mathrm{^\circ C} = 0.8~\mathrm{^\circ r}$. Upon converting, we get a similarly exact value but a different error. $$(15.00 \pm 0.005)~\mathrm{^\circ C} = (12.00 \pm 0.004)~\mathrm{^\circ r}$$
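The arithmetic above can be sketched in a few lines. This is only an illustrative sketch (the helper names `shift` and `scale` are mine, not any standard API): an exact additive offset leaves the uncertainty unchanged, while an exact multiplicative factor scales value and uncertainty alike.

```python
# Sketch of how measurement uncertainty behaves under exact conversions:
# an additive offset (Celsius -> Kelvin) leaves the error unchanged,
# while a multiplicative factor (Celsius -> Reaumur, yd -> ft) scales
# value and error by the same exact amount.

def shift(value, err, offset):
    """Exact additive conversion, e.g. Celsius -> Kelvin (offset 273.15)."""
    return value + offset, err

def scale(value, err, factor):
    """Exact multiplicative conversion, e.g. Celsius -> Reaumur (factor 0.8)."""
    return value * factor, err * factor

t_c, dt_c = 15.00, 0.005          # (15.00 +/- 0.005) degrees Celsius
print(shift(t_c, dt_c, 273.15))   # ~ (288.15, 0.005) in kelvin
print(scale(t_c, dt_c, 0.8))      # ~ (12.00, 0.004) in degrees Reaumur
print(scale(1.0, 0.0, 3.0))       # 1 yd = 3 ft: a zero error stays exactly zero
```

This reproduces the two temperature conversions worked out above: the kelvin value keeps the error $\pm 0.005$, the Réaumur value gets $\pm 0.004$.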
Consider constructing the Ward identity associated with Lorentz invariance. It is possible to find a rank-3 tensor $B^{\rho \mu \nu}$, antisymmetric in the first two indices, such that the stress-energy tensor can be made symmetric. Once done, the conserved current coming from the classical analysis is of the form $$j^{\mu \nu \rho} = T_B^{\mu \nu}x^{\rho} - T_B^{\mu \rho}x^{\nu}.$$ This ensures the symmetry of the conserved current, which can be seen most easily by invoking the conservation law $$\partial_{\mu}j^{\mu \nu \rho} = 0 $$ and $$\partial_{\mu}T_B^{\mu \nu} =\partial_{\mu} (T^{\mu \nu}_C + \partial_{\rho}B^{\rho \mu \nu}) = 0.$$ Let $X$ denote a set of $n$ fields. The Ward identity associated with Lorentz invariance is then $$\partial_{\mu} \langle (T^{\mu \nu}x^{\rho} - T^{\mu \rho}x^{\nu})X\rangle = \sum_i \delta(x-x_i)\left[ (x^{\nu}_i \partial^{\rho}_i - x^{\rho}_i\partial^{\nu}_i)\langle X \rangle - iS^{\nu \rho}_i \langle X \rangle\right].\tag{1}$$ This then reduces to $$\langle (T^{\rho \nu} - T^{\nu \rho})X \rangle = -i\sum_i \delta (x-x_i)S^{\nu \rho}_i\langle X \rangle,$$ which states that the stress tensor is symmetric within correlation functions, except at the positions of the other fields of the correlator. My question is: how is this last equation and statement derived? I think the Ward identity associated with translation invariance is used after perhaps splitting up the right-hand side of (1) like so: $$\sum_i^n x^{\nu}_i \delta(x-x_i)\partial^{\rho}_i \langle X \rangle - \sum_i^n x^{\rho}_i \delta(x-x_i)\partial^{\nu}_i \langle X \rangle - i\sum_i^n\delta(x-x_i)S^{\nu \rho}_i\langle X \rangle $$ and then replacing $$\partial_{\mu}\langle T^{\mu}_{\,\,\,\rho}X \rangle = -\sum_i \delta (x-x_i)\frac{\partial}{\partial x^{\rho}_i} \langle X \rangle$$ for example.
The result I am getting is that $$\langle ((\partial_{\mu}T^{\mu \nu})x^{\rho} - (\partial_{\mu}T^{\mu \rho})x^{\nu} + T^{\rho \nu} - T^{\nu \rho})X \rangle = \sum_i x^{\nu}_i \partial_{\mu}\langle T^{\mu \rho}X \rangle + \sum_i x^{\rho}_i \partial_{\mu} \langle T^{\mu \nu} X \rangle - i\sum_i\delta(x-x_i)S^{\nu \rho}_i\langle X \rangle.$$ To obtain the required result, this means that e.g. $$ \sum_i x^{\nu}_i \partial_{\mu} \langle T^{\mu \rho}X \rangle = \langle(\partial_{\mu}T^{\mu \rho})x^{\nu} X \rangle,$$ but why is this the case? Regarding the statement at the end, do they mean that when the position $x$ happens to coincide with one of the insertion points $x_i$ of the fields $\Phi_i \in X$ (so $x = x_i$), the r.h.s. tends to infinity and the equation becomes singular there?
Question: True/False? Every differentiable function \(f:(0,1)\to [0,1] \) is uniformly continuous. Hint: \( \sin(1/x) \) is not uniformly continuous. However, its range is not \([0,1]\). But it's a simple matter of scaling. Discussion: Let \(f(x)=\sin(\frac{1}{x})\) for all \(x\in(0,1)\). For simplicity, we first prove that this \(f\) is not uniformly continuous. Then we will scale things down, which won’t change the non-uniform continuity of \(f\). Note that \(f\) is differentiable. To show that \(f\) is not uniformly continuous, we first note that as \(x\) approaches 0, \(1/x\) passes from an odd multiple of \(\frac{\pi}{2}\) to an even multiple of \(\frac{\pi}{2}\) very fast. So in a very small interval close to 0, we can find two points at which \(|f|\) takes the values 1 and 0. Let \(x=\frac{2}{(2n+1)\pi}\) and \(y=\frac{2}{2n\pi}\). Then \(|x-y|=\frac{2}{2n(2n+1)\pi}\). Since the right-hand side goes to zero as \(n\) increases, given any \(\delta > 0\) we can find \(n\) large enough so that \(|x-y|<\delta\). For these \(x\) and \(y\), \(|f(x)-f(y)|=1 \). Choosing \(\epsilon\) as any positive number less than 1 then shows that \(f\) is not uniformly continuous. We now have a function \(f\) which is bounded, differentiable and not uniformly continuous. To match the question's requirements, notice \(-1\le f(x)\le 1\). So \(0\le 1+f(x) \le 2\), and \(0\le \frac{1+f(x)}{2} \le 1\). Define \(g(x)= \frac{1+f(x)}{2} \) for all \(x\in(0,1)\). Since the sum of two uniformly continuous functions is uniformly continuous and a scalar multiple of a uniformly continuous function is uniformly continuous, if \(g\) were uniformly continuous, then \(f(x)=2g(x)-1 \) would also be uniformly continuous. This proves that \(g\) is in fact not uniformly continuous. It is still differentiable, and its range lies in \([0,1]\), which shows that the given statement is false.
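The witness pair in the proof is easy to check numerically. A small sketch, using exactly the points \(x\) and \(y\) from the argument above:

```python
import math

# Check the witness pair from the argument: x = 2/((2n+1)pi) and
# y = 2/(2n pi) get arbitrarily close as n grows, yet
# |f(x) - f(y)| stays at 1 for f(x) = sin(1/x).
def f(x):
    return math.sin(1.0 / x)

for n in (1, 10, 100, 1000):
    x = 2.0 / ((2 * n + 1) * math.pi)
    y = 2.0 / (2 * n * math.pi)
    gap = abs(x - y)            # = 2/(2n(2n+1)pi), tends to 0
    jump = abs(f(x) - f(y))     # stays at 1 (up to float rounding)
    print(f"n={n:4d}  |x-y|={gap:.3e}  |f(x)-f(y)|={jump:.6f}")
```

No \(\delta\) can tame these pairs: the gap shrinks like \(1/n^2\) while the jump never drops below 1, which is exactly the failure of uniform continuity.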
The preface to Mark Srednicki's "Quantum Field Theory" says that to be prepared for the book, one must recognize and understand the following equations: $$\frac{d\sigma}{d\Omega} = |f(\theta,\phi)|^2, \qquad (1)$$ $$a^{\dagger}|n\rangle = \sqrt{n+1} \space |n+1\rangle, \qquad (2)$$ $$J_{\pm} |j,m \rangle = \sqrt{j(j+1)-m(m\pm 1)} \mid j,m \pm 1 \rangle, \qquad (3)$$ $$A(t) = e^{+iHt/\hbar}Ae^{-iHt/\hbar}, \qquad (4)$$ $$H = p\dot{q}-L, \qquad (5)$$ $$ct'=\gamma (ct-\beta x), \qquad (6)$$ $$E=(\mathbf{p}^2c^2+m^2c^4)^{1/2}, \qquad (7)$$ $$\mathbf{E} =-\mathbf{\dot{A}}/c-\mathbf{\nabla} \varphi. \qquad (8)$$ I am certainly not ready to dive into this book, so I would like some help identifying these equations and learning more about their fundamental usefulness. I don't recognize (1), but (2) looks like a quantum mechanical creation operator? I thought those were only really useful in the context of the harmonic oscillator problem, but maybe everything is just a complicated HO problem? (3) has to do with angular momentum? (4) is a plane wave solution to the Schrödinger equation? (5) is the classical mechanics Hamiltonian, with canonical coordinates? (6) is the relativistic Lorentz transformation. (7) is the general form of mass-energy equivalence from SR. (8) is the electric field expressed via vector and scalar potentials? Is that really the only E&M machinery required? Any insight as to why these particular expressions are relevant / important / useful to QFT is also appreciated. Also, where are the statistical mechanics ideas hiding? In the QM?
1. Searches for $\Lambda^0_{b}$ and $\Xi^{0}_{b}$ decays to $K^0_{\rm S} p \pi^{-}$ and $K^0_{\rm S}p K^{-}$ final states with first observation of the $\Lambda^0_{b} \rightarrow K^0_{\rm S}p \pi^{-}$ decay Journal of High Energy Physics, ISSN 1126-6708, 02/2014, Volume 4, p. 087 Journal Article 2. Precision measurement of the $B^0_s$–$\overline{B}{}^0_{\rm s}$ oscillation frequency with the decay $B^0_s \to D^-_s \pi^+$ New Journal of Physics, ISSN 1367-2630, 05/2013, Volume 15, Issue 5, p. 53021 Journal Article 3. Physics Letters B, ISSN 0370-2693, 08/2013, Volume 728, pp. 607 - 615 Journal Article 4. Influence of the Magnetic Anisotropy on the Magnetic Entropy Change of ${\rm Ni}_{2}{\rm Mn(Ga,Bi)}$ Shape Memory Alloy IEEE Transactions on Magnetics, ISSN 0018-9464, 01/2008, Volume 44, Issue 11 Journal Article 5. Journal of High Energy Physics, ISSN 1029-8479, 01/2016, Volume 2, p. 133 Journal Article 6. IEEE Transactions on Magnetics, ISSN 0018-9464, 11/2008, Volume 44, Issue 11, pp. 3036 - 3039 The magnetocaloric effect (MCE) offers a chance to create a more efficient refrigeration technique, both in energy use and environmental friendliness. In the...
{\rm Ni}_{2}{\rm MnGa} | Temperature | Shape memory alloys | memory shape | Entropy | Isothermal processes | Magnetic field measurement | Magnetic anisotropy | magnetocaloric effect | Bismuth | Magnetic materials | Perpendicular magnetic anisotropy | Heusler alloy | Saturation magnetization Journal Article 7. 03/2015 Journal of High Energy Physics 06 (2015) 131 The first measurement of decay-time-dependent CP asymmetries in the decay $B_s^0\rightarrow J/\psi K_{\rm S}^0$... Physics - High Energy Physics - Experiment Journal Article 8. Physics Letters B, ISSN 0370-2693, 01/2014, Volume 728, pp. 607 - 615 The CP-violating asymmetry $a_{\rm sl}^s$ is studied using semileptonic decays of $B_s^0$ and $\overline{B}{}_s^0$ mesons produced in pp collisions at a centre-of-mass energy of 7 TeV... 14.40.Nd | Particle Physics - Experiment | Leptonic semileptonic and radiative decays of bottom mesons | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Nuclear and High Energy Physics | Bottom mesons (|B|>0) | LHCb - Abteilung Hofmann | Charge conjugation parity time reversal and other discrete symmetries | LHCb | High Energy Physics - Experiment | 13.20.He | 11.30.Er Journal Article 9.
Combinations of single-top-quark production cross-section measurements and $|f_{\rm LV}V_{tb}|$ determinations at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS experiments 02/2019 JHEP 05 (2019) 088 This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using... Physics - High Energy Physics - Experiment Journal Article 10. Measurement of the forward $W$ boson cross-section in $pp$ collisions at $\sqrt{s} = 7 {\rm \, TeV}$ Journal of High Energy Physics, ISSN 1126-6708, 08/2014, Volume 2014, Issue 12, p. 079 Journal Article 11. Journal of High Energy Physics, ISSN 1126-6708, 05/2014, Volume 7, p. 140 Journal Article 12. Observation of overlapping spin-1 and spin-3 $\overline{D}^0 K^-$ resonances at mass $2.86 {\rm \, GeV}/c^2$ Physical Review Letters, ISSN 0031-9007, 2014, Volume 113, p. 162001 Journal Article 13. Suppression of $\Upsilon(1S)$, $\Upsilon(2S)$, and $\Upsilon(3S)$ production in PbPb collisions at $\sqrt{s_{NN}} = 2.76$ TeV Physics Letters B, ISSN 0370-2693, 11/2016, Volume 770, p. 357 Journal Article 14. A study of CP violation in $B^\pm \to D K^\pm$ and $B^\pm \to D \pi^\pm$ decays with $D \to K^0_{\rm S} K^\pm \pi^\mp$ final states Physics Letters B, ISSN 0370-2693, 02/2014, Volume 733, pp. 36 - 45 Journal Article 15. A model-independent Dalitz plot analysis of $B^\pm \to D K^\pm$ with $D \to K^0_{\rm S} h^+h^-$ ($h=\pi, K$) decays and constraints on the CKM angle $\gamma$ Physics Letters B, ISSN 0370-2693, 09/2012, Volume 718, pp. 43 - 55 Journal Article 16. Measurement of $B^+$, $B^0$ and $\Lambda_b^0$ production in $p\mkern 1mu\mathrm{Pb}$ collisions at $\sqrt{s_\mathrm{NN}}=8.16\,{\rm TeV}$ 02/2019 Phys. Rev. D99 (2019) 052011 The production of $B^+$, $B^0$ and $\Lambda_b^0$ hadrons is studied in proton-lead collisions at a centre-of-mass energy per... Journal Article 17.
Horizontal bench press exercise: comparison of RM performed on machines and with free weights at different intensities by experienced and less-familiarized individuals / Exercício supino horizontal: comparação de RM executadas em máquinas e pesos livres em diferentes intensidades por indivíduos experientes e pouco familiarizados Revista Brasileira de Prescrição e Fisiologia do Exercício, ISSN 1981-9900, 11/2011, Volume 5, Issue 30, p. 510 This study aimed to compare the number of repetition maximums (RM) accomplished at different percentages of 1RM in the horizontal bench press... Weight training Journal Article 18. Study of $B_{(s)}^0 \to K_{\rm S}^0 h^{+} h^{\prime -}$ decays with first observation of $B_{s}^0 \to K_{\rm S}^0 K^{\pm} \pi^{\mp}$ and $B_{s}^0 \to K_{\rm S}^0 \pi^{+} \pi^{-}$ 07/2013 J. High Energy Phys. 10 (2013) 143 A search for charmless three-body decays of $B^0$ and $B_{s}^0$ mesons with a $K_{\rm S}^0$ meson in the final state is... Physics - High Energy Physics - Experiment Journal Article 19. MRI findings in the diagnosis and monitoring of Rasmussen's encephalitis / Achados de RM no diagnóstico e monitorização da encefalite de Rasmussen Arquivos de Neuro-Psiquiatria, ISSN 0004-282X, 09/2009, Volume 67, Issue 3b, pp. 792 - 797 Journal Article 20. Measurements of the Higgs boson production and decay rates and constraints on its couplings from a combined ATLAS and CMS analysis of the LHC pp collision data at $\sqrt{s}=7$ and 8 TeV Journal of High Energy Physics, ISSN 1029-8479, 07/2016, Issue 8 Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are...
CERN LHC | Higgs physics | CROSS-SECTIONS | HADRON COLLIDERS | Hadron-Hadron scattering (experiments) | SEARCH | NLO QCD CORRECTIONS | TRANSVERSE-MOMENTUM | STANDARD MODEL | BROKEN SYMMETRIES | QUARKS | TO-LEADING ORDER | PHYSICS, PARTICLES & FIELDS Journal Article
Derivation of effective transmission conditions for domains separated by a membrane for different scaling of membrane diffusivity Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 11, 91058 Erlangen, Germany We consider a system of non-linear reaction-diffusion equations in a domain consisting of two bulk regions separated by a thin layer with periodic structure. The thickness of the layer is of order $\epsilon$, and the equations inside the layer depend on the parameter $\epsilon$ and an additional parameter $\gamma \in [-1,1)$, which describes the size of the diffusion in the layer. We derive effective models for the limit $\epsilon \to 0$, when the layer reduces to an interface $\Sigma$ between the two bulk domains. The effective solution is continuous across $\Sigma$ for all $\gamma \in [-1,1)$. For $\gamma \in (-1,1)$, the jump in the normal flux is given by a non-linear ordinary differential equation on $\Sigma$. In the critical case $\gamma = -1$, a dynamic transmission condition of Wentzell type arises at the interface $\Sigma$. Keywords: Thin heterogeneous layer, homogenization, weak and strong two-scale convergence, non-linear reaction-diffusion systems, effective transmission conditions. Mathematics Subject Classification: Primary: 35K57, 35B27, 80M40. Citation: Markus Gahn, Maria Neuss-Radu, Peter Knabner. Derivation of effective transmission conditions for domains separated by a membrane for different scaling of membrane diffusivity. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4): 773-797. doi: 10.3934/dcdss.2017039
In an earlier post we analyzed an algorithm called Exp3 for $k$-armed adversarial bandits for which the expected regret is bounded by \begin{align*} R_n = \max_{a \in [k]} \E\left[\sum_{t=1}^n y_{tA_t} - y_{ta}\right] \leq \sqrt{2n k \log(k)}\,. \end{align*} The setting of… Continue Reading

According to the main result of the previous post, given any finite action set $\cA$ with $K$ actions $a_1,\dots,a_K\in \R^d$, no matter how an adversary selects the loss vectors $y_1,\dots,y_n\in \R^d$, as long as the action losses $\ip{a_k,y_t}$ are in… Continue Reading

Lower bounds for linear bandits turn out to be more nuanced than the finite-armed case. The big difference is that for linear bandits the shape of the action-set plays a role in the form of the regret, not just the… Continue Reading

In the last post we showed that under mild assumptions ($n = \Omega(K)$ and Gaussian noise), the regret in the worst case is at least $\Omega(\sqrt{Kn})$. More precisely, we showed that for every policy $\pi$ and $n\ge K-1$ there exists… Continue Reading

Continuing the previous post, we prove the claimed minimax lower bound. We start with a useful result that quantifies the difficulty of identifying whether or not an observation is drawn from similar distributions $P$ and $Q$ defined over the same… Continue Reading
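The Exp3 bound quoted in the first excerpt can be exercised with a minimal implementation. This is a sketch under my own assumptions (losses in $[0,1]$, learning rate $\eta=\sqrt{2\log(k)/(nk)}$), not the code from the post:

```python
import math
import random

# Minimal Exp3 sketch for k-armed adversarial bandits with losses in [0, 1].
# The learning rate eta = sqrt(2 log(k) / (n k)) is the one matching the
# sqrt(2 n k log k) expected-regret bound quoted above.
def exp3(losses, k, seed=0):
    rng = random.Random(seed)
    n = len(losses)
    eta = math.sqrt(2.0 * math.log(k) / (n * k))
    s = [0.0] * k            # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(n):
        w = [math.exp(-eta * si) for si in s]
        z = sum(w)
        p = [wi / z for wi in w]
        a = rng.choices(range(k), weights=p)[0]
        loss = losses[t][a]
        total += loss
        s[a] += loss / p[a]  # unbiased importance-weighted estimate
    return total

# An adversary that always charges arm 0 and never arm 1:
n, k = 2000, 2
losses = [[1.0, 0.0]] * n
regret = exp3(losses, k) - 0.0   # best fixed arm (arm 1) incurs zero loss
bound = math.sqrt(2 * n * k * math.log(k))
print(f"regret = {regret:.1f}, bound = {bound:.1f}")
```

Against this fixed adversary the realized regret typically lands well below the worst-case bound, since the algorithm's weight on the bad arm decays exponentially once its loss estimates accumulate.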
After installing the markdown editor with LaTeX https://github.com/thibaultduponchelle/q2a-markdown-editor-latex writing math worked nicely, until I tried to use the \begin{array} \end{array} environment, for example as in this question: Looking at $5D$ Kaluza-Klein theory, the Kaluza-Klein metric is given by $$ g_{mn} = \left( \begin{array}{cc} g_{\mu\nu} & g_{\mu 5} \\ g_{5\nu} & g_{55} \\ \end{array} \right) $$ where $g_{\mu\nu}$ corresponds to the ordinary four-dimensional metric, $g_{\mu 5}$ is the ordinary four-dimensional Maxwell gauge field, and $g_{55}$ is the dilaton field. As there is one dilaton for one extra dimension, I naively would expect that the zero-mass states of closed string theory can be written as $$ \sum\limits_{I,J} R_{IJ} a_1^{I\dagger} \bar{a}_1^{J\dagger} |p^{+},\vec{p}_T \rangle $$ and the square matrix $R_{IJ}$ can be separated into a symmetric traceless part corresponding to the graviton field, an antisymmetric part corresponding to a generalized Maxwell gauge field, and the trace which corresponds to the dilaton field. Why is there only one dilaton field given by the trace of $R_{IJ}$, instead of $22$ dilaton fields corresponding to the $22$ extra dimensions of closed string theory, which has critical dimension $D = 26$? For example, why are there not $22$ dilaton fields needed to parameterize the shape of the 22 extra dimensions if they are compactified? Bug: In the preview, all of the math compiles apart from the fact that the array is displayed as a single column instead of a 2x2 matrix. When posting the question, all of the maths gets scrambled up. However, looking at the edit history by applying this plug-in http://www.question2answer.org/qa/17583/edit-history-plugin-improvements-bugs all the maths works, including the matrix. What can be done about this bug?
Let $\Omega\subset \mathbb{R}^n$ be an open bounded set of class $C^1$ and let $\Omega'\subset \subset \Omega ''\subset \subset \Omega$. I'm trying to construct a function $\varphi\in C^\infty (\Omega)$ such that: $0\leq\varphi\leq1$; $\varphi\equiv 1$ in $\Omega'$; $\varphi\equiv 0$ in $\Omega \setminus \Omega''$; $||\nabla \varphi ||_{L^{\infty}(\Omega, \mathbb{R}^n)}\leq \dfrac{2}{{\rm dist}(\Omega',\partial \Omega'')}$. My attempt: My idea is to use the distance function ${\rm dist}$ and mollifiers. Let $\alpha={\rm dist}(\Omega',\partial \Omega'')$. I have in mind this function: $$\varphi(x)=\Big[\dfrac {2}{\alpha} {\rm dist}(x,\tilde{\Omega}^c)\wedge 1\Big]*\rho_\epsilon$$ where $\tilde{\Omega}=\big \{ x\in \Omega \;| \;{\rm dist}(x,\Omega')<\dfrac{\alpha}{2} \big \}$, $\rho_\epsilon$ is the standard mollifier (I use the convolution in order to obtain a $C^\infty$ function) and $\wedge$ is the minimum operator. Is my example correct? My main concern is about the last point. I've constructed this function thinking of the domains as balls, for which the distance function is linear in the radius. Does my function satisfy the last requirement? Edit: What can I say about the gradient of the function $x \mapsto {\rm dist}(x,\tilde{\Omega}^c)$? Can I say that it is linear, or can I make some estimates? Thanks for the help!
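For what it's worth, the key estimate can be sanity-checked numerically in one dimension. This is only an illustrative sketch with made-up concrete domains ($\Omega'=(0.4,0.6)$, $\tilde\Omega=(0.3,0.7)$, so $\alpha=0.2$): the distance function is 1-Lipschitz, so the truncation $(2/\alpha)\,\mathrm{dist}\wedge 1$ is $(2/\alpha)$-Lipschitz, and convolution with a nonnegative normalized kernel cannot increase that constant.

```python
import numpy as np

# 1-D sanity check: dist(., S) is 1-Lipschitz, so
# x -> min((2/alpha) * dist(x, complement of tilde(Omega)), 1)
# has Lipschitz constant <= 2/alpha, and mollifying (here a normalized
# moving average stands in for rho_epsilon) preserves that bound.
alpha = 0.2                       # plays dist(Omega', boundary of Omega'')
xs = np.linspace(0.0, 1.0, 2001)
h = xs[1] - xs[0]

lo, hi = 0.3, 0.7                 # tilde(Omega) = (0.3, 0.7)
dist = np.minimum(np.maximum(xs - lo, 0.0), np.maximum(hi - xs, 0.0))
raw = np.minimum(2.0 / alpha * dist, 1.0)

eps = 0.01                        # mollifier width, small relative to alpha
kernel = np.ones(int(round(2 * eps / h)) + 1)
kernel /= kernel.sum()
phi = np.convolve(raw, kernel, mode="same")

max_slope = np.abs(np.diff(phi)).max() / h
print(max_slope, 2.0 / alpha)     # max slope stays <= 2/alpha = 10
```

The numerics reflect the general fact: $\mathrm{dist}(\cdot,S)$ always satisfies $|\nabla\,\mathrm{dist}|\le 1$ almost everywhere (it is not linear in general, but 1-Lipschitz is exactly what the last requirement needs), and $\|\nabla(u*\rho_\epsilon)\|_\infty \le \|\nabla u\|_\infty$.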
Under the auspices of the Computational Complexity Foundation (CCF) We study the problem of matrix Lie algebra conjugacy. Lie algebras arise centrally in areas as diverse as differential equations, particle physics, group theory, and the Mulmuley--Sohoni Geometric Complexity Theory program. A matrix Lie algebra is a set $\mathcal{L}$ of matrices such that $A,B \in \mathcal{L}$ implies $AB - BA \in \mathcal{L}$. Two matrix Lie algebras are conjugate if there is an invertible matrix $M$ such that $\mathcal{L}_{1} = M \mathcal{L}_{2} M^{-1}$. We show that certain cases of Lie algebra conjugacy are equivalent to graph isomorphism. On the other hand, we give polynomial-time algorithms for other cases of Lie algebra conjugacy, which allow us to mostly derandomize a recent result of Kayal on affine equivalence of polynomials. Affine equivalence is related to many complexity problems such as factoring integers, graph isomorphism, matrix multiplication, and permanent versus determinant. Specifically, we show: Abelian Lie algebra conjugacy is as hard as graph isomorphism. A Lie algebra is abelian if all of its matrices commute pairwise. Abelian diagonalizable Lie algebra conjugacy of $n \times n$ matrices can be solved in $poly(n)$ time when the Lie algebras have dimension $O(1)$. The dimension of a Lie algebra is the maximum number of linearly independent matrices it contains. A Lie algebra $\mathcal{L}$ is diagonalizable if there is a single matrix $M$ such that for every $A$ in $\mathcal{L}$, $MAM^{-1}$ is diagonal. Semisimple Lie algebra conjugacy is equivalent to graph isomorphism. A Lie algebra is semisimple if it is a direct sum of simple Lie algebras. Semisimple Lie algebra conjugacy of $n \times n$ matrices can be solved in polynomial time when the Lie algebras consist of only $O(\log n)$ simple direct summands.
Conjugacy of completely reducible Lie algebras---that is, a direct sum of an abelian diagonalizable and a semisimple Lie algebra---can be solved in polynomial time when the abelian part has dimension $O(1)$ and the semisimple part has $O(\log n)$ simple direct summands.
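The defining closure condition from the abstract is easy to check mechanically. A small illustration (the helper functions and the $\mathfrak{sl}_2$ example are mine, chosen for concreteness): a set of matrices spans a Lie algebra when every bracket $[A,B]=AB-BA$ of basis elements lands back in the span, and an abelian algebra is one where every bracket vanishes.

```python
import numpy as np

# Check the defining condition from the abstract: a set L of matrices is a
# matrix Lie algebra if the bracket [A, B] = AB - BA of any two members
# lies back in the span of L.
def bracket(A, B):
    return A @ B - B @ A

def in_span(M, basis, tol=1e-9):
    """Least-squares membership test: is M a linear combination of basis?"""
    V = np.column_stack([B.ravel() for B in basis])
    coeffs, *_ = np.linalg.lstsq(V, M.ravel(), rcond=None)
    return np.linalg.norm(V @ coeffs - M.ravel()) < tol

# sl(2) with its standard basis e, f, h: closed under the bracket,
# since [e, f] = h, [h, e] = 2e, [h, f] = -2f.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

closed = all(in_span(bracket(A, B), basis) for A in basis for B in basis)
print(closed)  # True: the span of {e, f, h} is a matrix Lie algebra

# An abelian (and diagonalizable) example: diagonal matrices commute,
# so every bracket is the zero matrix.
d1, d2 = np.diag([1., 2.]), np.diag([3., -1.])
print(np.allclose(bracket(d1, d2), 0))  # True
```

The second example is exactly the "abelian diagonalizable" case from the abstract: here a single $M$ (the identity) already diagonalizes every member.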
For discussion directly related to ConwayLife.com, such as requesting changes to how the forums or wiki function. Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X I have some questions: 1. Is there a minimal age for joining conwaylife? 2. Who is unname66609? 3. Does LifeWiki store the IP Addresses of users? Ok! That will be it for now 1. Is there a minimal age for joining conwaylife? 2. Who is unname66609? 3. Does LifeWiki store the IP Addresses of users? Ok! That will be it for now Airy Clave White It Nay The terms and conditions shown when you first register say nothing about age, and I think the youngest member here, Gustavo, is 11 or 12 - so no.Saka wrote:1. Is there a minimum age for joining conwaylife? Just another forum user as far as I can tell. They probably live in or near China. Why do you ask?2. Who is unname66609? Yes.3. Does LifeWiki store the IP Addresses of users? Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X 1. So it's ok if a a six year old joins?M. I. Wright wrote:The terms and conditions shown when you first register say nothing about age, and I think the youngest member here, Gustavo, is 11 or 12 - so no.Saka wrote:1. Is there a minimum age for joining conwaylife?Just another forum user as far as I can tell. They probably live in or near China. Why do you ask?2. Who is unname66609?Yes.3. Does LifeWiki store the IP Addresses of users? 2. Because he never really posts anything... (just things like "what is sesame oil?" 3. Ok And by the the way did you notice the second "a" I put in number 1? There's also something else about this line... Airy Clave White It Nay Hmm... Technically, it's perfectly possible, but I can hardly imagine a six-year-old doing stuff like this. If he/she decides to join, definitely tell me the username. If that person achieves anything, I will clap.Saka wrote:1. So it's ok if a a six year old joins? So do you think unname is a bot? 
It is not probable since he/she used to post some relevant content. Unname is just a strange guy, like Gustavo raised to the 10th power.Saka wrote:2. Because he never really posts anything... (just things like "what is sesame oil?" No. I failed this test for the the second time in my lifeSaka wrote:And by the the way did you notice the second "a" I put in number 1? There are 10 types of people in the world: those who understand binary and those who don't. Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X And I passed.Alexey_Nigin wrote:Hmm... Technically, it's perfectly possible, but I can hardly imagine a six-year-old doing stuff like this. If he/she decides to join, definitely tell me the username. If that person achieves anything, I will clap.Saka wrote:1. So it's ok if a a six year old joins? So do you think unname is a bot? It is not probable since he/she used to post some relevant content. Unname is just a strange guy, like Gustavo raised to the 10th power.Saka wrote:2. Because he never really posts anything... (just things like "what is sesame oil?" No. I failed this test for the the second time in my lifeSaka wrote:And by the the way did you notice the second "a" I put in number 1? Airy Clave White It Nay Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X How young?Sarp wrote:Nope I'm younger1. IThe terms and conditions shown when you first register say nothing about age, and I think the youngest member here, Gustavo, is 11 or 12 - so no. Airy Clave White It Nay Meaning, a collection of syntheses where the recipes involve colliding LWSSes instead of colliding gliders? For the second question, are combinations of LWSS, MWSS, and HWSS allowed, or should a single recipe have only one kind of *WSS?The Turtle wrote:Are there any catalogues of LWSS collisions? Or other *WSS collisions? 
In any case, I think the answer is no -- nothing really organized, anyway, because no one has had a long-term use for it. It's fairly easy to put together the beginnings of such a collection by running gencols for a while, along these general lines. If you just want a table of two-*WSS collisions, that's very easy to generate with gencols. Maybe someone has a stamp collection lying around already. I've been curious for a while now about construction mechanisms using *WSS slow salvos -- though not curious enough yet to do the research myself, apparently. Seems as if slow LWSSes would probably be just about as effective as gliders for building self-constructing circuitry, and Geminoid construction-arm elbows can be programmed to produce *WSSes instead of gliders. It's several times more expensive, but it allows for four new construction directions. I know I found a while back that LWSS slow salvos can move a blinker anywhere, but I can't find it anywhere but in a quote in the Accidental Discoveries thread. dvgrn wrote:I've been curious for a while now about construction mechanisms using *WSS slow salvos -- though not curious enough yet to do the research myself, apparently. Seems as if slow LWSSes would probably be just about as effective as gliders for building self-constructing circuitry, and Geminoid construction-arm elbows can be programmed to produce *WSSes instead of gliders. It's several times more expensive, but it allows for four new construction directions. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Mr.
Missed Her Posts:90 Joined:December 7th, 2016, 12:27 pm Location:Somewhere within [time in years since this was entered] light-years of you. What's with the alternating shades of the posts? Sometimes, tables/lists do that so you can keep track of which item is which, but I don't see how it helps here. Besides, there's almost no difference in the colors. There is life on Mars. We put it there with not-completely-sterilized rovers. And, for that matter, the Moon, Jupiter, Titan, and 67P/Churyumov–Gerasimenko. Mr. Missed Her Posts:90 Joined:December 7th, 2016, 12:27 pm Location:Somewhere within [time in years since this was entered] light-years of you. I only noticed when I tried to make the background of my profile photo the color of the posts. As you can see, the colors match up perfectly here, but they don't in the last post. Mr. Missed Her Posts:90 Joined:December 7th, 2016, 12:27 pm Location:Somewhere within [time in years since this was entered] light-years of you. I might be able to, but I only know how to edit images through Paint, which is like crappy Photoshop that comes with Windows. Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Try GIMP. Mr. Missed Her wrote: I might be able to, but I only know how to edit images through Paint, which is like crappy Photoshop that comes with Windows. Airy Clave White It Nay Mr.
Missed Her Posts:90 Joined:December 7th, 2016, 12:27 pm Location:Somewhere within [time in years since this was entered] light-years of you. Well, thanks. There is life on Mars. We put it there with not-completely-sterilized rovers. And, for that matter, the Moon, Jupiter, Titan, and 67P/Churyumov–Gerasimenko. BlinkerSpawn Posts:1906 Joined:November 8th, 2014, 8:48 pm Location:Getting a snacker from R-Bee's BlinkerSpawn wrote: On this note, how do I delete read private messages? Click the checkbox to the right, then choose "Delete marked" at the bottom, from the dropdown where it normally says "Mark/Unmark as important". Surprisingly well hidden, isn't it? It's a very clever design -- keeps you from deleting things accidentally (or on purpose). gameoflifemaniac wrote: Does Nathaniel have access to private messages? In theory, yes. PMs are stored as plain text in the database and while there is no direct ability within phpBB for the admin to read PMs, any person or program (such as backup software) with access to the database has access to the content of private messages. I see no reason to be concerned about this.
Hi, Can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, since machines are known to perform very well on sign problems, as a recent paper has shown. Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them. I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams. @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs. Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes. orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others. Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction being superimposed on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (One possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge.) Well. I'm a beginner in the study of General Relativity ok?
My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment, for gravitational waves? @JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs... meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university - which means remotely running another environment - I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; and it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Definition of Inverse Functions $$ \Large+ \leftarrow \text{ inverse operation } \rightarrow -$$ $$ \Large \times \leftarrow \text{ inverse operation } \rightarrow \div $$ $$ \Large x^2 \leftarrow \text{ inverse operation } \rightarrow \sqrt{x} $$ The function $y=4x-1$ can be undone by its inverse function $y=\dfrac{x+1}{4}$. We can think of this as two processes [...] We have seen that a linear function has the form $y=mx+b$. When a linear function is divided by another function, the result is a rational function. Rational functions are characterised by asymptotes, which are lines the function gets closer and closer to but never reaches. The rational functions we consider can be written in the [...] A composite function is formed from two functions in the following way. $$(g \circ f)(x) = g(f(x))$$ If $f(x)=x+3$ and $g(x)=2x$ are two functions, then we combine the two functions to form the composite function: \( \begin{align} (g \circ f)(x) &= g(f(x)) \\ &= 2f(x) \\ &= 2(x+3) \\ &= 2x+6 \end{align} \) That is, [...] A relation may be described by: a listed set of ordered pairs a graph a rule The set of all first elements of a set of ordered pairs is known as the domain, and the set of all second elements is known as the range. Alternatively, the domain is [...] Definition of Function Notation Consider the relation $y=3x+2$, which is a function. The $y$-values are determined from the $x$-values, so we say '$y$ is a function of $x$', which is abbreviated to $y=f(x)$. So, the rule $y=3x+2$ can also be written as follows. $$f: x \mapsto 3x+2$$ $$\text{or}$$ $$f(x)=3x+2$$ $$\text{or}$$ $$y=3x+2$$ Function $f$ such that [...] Relations A relation is any set of points which connect two variables. A relation is often expressed in the form of an equation connecting the variables $x$ and $y$. In this case, the relation is a set of points $(x,y)$ in the Cartesian plane. This plane is separated into four quadrants according to the signs [...]
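The worked examples above are easy to check numerically. Here is a small Python sketch (my own illustration, not part of the original notes; I rename the $f(x)=x+3$ of the composite example to `h` so it doesn't clash with the inverse-function example):

```python
# Sketch: checking the inverse and composite function examples above.
def f(x):
    return 4 * x - 1            # y = 4x - 1

def f_inv(y):
    return (y + 1) / 4          # its inverse, y = (x + 1)/4

def g(x):
    return 2 * x                # g(x) = 2x

def h(x):
    return x + 3                # the f(x) = x + 3 of the composite example

def compose(outer, inner):
    """Return the composite function (outer . inner)(x) = outer(inner(x))."""
    return lambda x: outer(inner(x))

assert f_inv(f(10)) == 10               # the inverse undoes the function
assert compose(g, h)(5) == 2 * 5 + 6    # (g . h)(x) = 2(x + 3) = 2x + 6
```

Applying `f` then `f_inv` returns the original input for any value, which is exactly the "two processes" picture of an inverse described above.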
I need to prove the following: There's no continuous function $f:[a,b]\to \mathbb{R}$ that takes each of its values $f(x)$, $x\in [a,b]$ exactly twice. First of all, I didn't understand the question. For example $x^2$ takes $1$ twice, in the interval $[-1,1]$. Is it saying that it does not occur for all $x$ in the interval? But what about $f(x) = c$? Is it saying that it does not occur only exactly $2$ times, then? I have no idea about how to prove it. I know that for $f(x)$ such that $f(a)<f(x)<f(b)$, if $f$ is continuous then there is a $c\in [a,b]$ such that $f(c) = f(x)$. Now, there's the following proof in my book and I really wanted to understand it, instead of just getting a new proof. Since the interval $[a,b]$ has only $2$ extreme points, the maximum or minimum of $f$ must be attained at a point $c\in int([a,b])$ and at another point $d\in [a,b]$. Then, there exists $\delta>0$ such that in the intervals $[c-\delta, c), (c,c+\delta]$ (and, if $d$ is not an endpoint of $[a,b]$, $[d-\delta, d)$) the function takes values that are less than $f(c) = f(d)$. Let $A$ be the greatest of the numbers $f(c-\delta), f(c+\delta), f(d-\delta)$. By the intermediate value theorem, there are $x\in [c-\delta, c), y\in (c, c+\delta]$ and $z\in [d-\delta, d)$ such that $f(x)=f(y)=f(z)=A$. Contradiction. Well, why the last part? Why is it that I can apply the intermediate value theorem to these values? For example, if $f(c-\delta)<p<f(c)$, then by the theorem I know that there exists $m\in [c-\delta, c)$ such that $f(m) = p$. Same for the other intervals. But what guarantees that the value $A$ is attained inside each of the intervals $[c-\delta, c)$, $(c,c+\delta]$ and $[d-\delta, d)$?
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $$ie(P_A+P_B)^{\mu}$$ External boson: $1$ Photon: $\epsilon_{\mu}$ Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers on some of them. So I want to ask you to suggest a good resource for studying the history of these scientists. I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption of continuity. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate. 
If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/headquarters? where? don't see anything relevant on Google yet for "mitsubishi renderer" - do you have a link for that? @ACuriousMind that's exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and they're all part of the mystery/complexity/inscrutability of QM. actually it's QM experts that don't fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and whatnot. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that. And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think you'll like it, thought of you when found it...
Kurzgesagt optimistic nihilism youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one-handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I don't want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have a constant hunger to widen my view of the world. @Slereah It's like the brain has a limited capacity of math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm for submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money. It does give you more of a sense of who actually knows what they're talking about and who doesn't, though. While there's a lot of information available these days, it isn't all good information, and it can be a very difficult thing to judge without some background knowledge. Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
I slightly disagree with Alex’s comment. The CAPM does not read as \begin{align*} r_{i,t} = r_{f,t}+ \beta_{i,t} (r_{m,t}-r_f) + \varepsilon_{i,t}. \end{align*} There is an important difference between the single index model (aka market model) (SIM), which reads as \begin{align*} r_{i,t} = \alpha_{i,t} + \beta_{i,t}(r_{m,t}-r_{f,t}) + \varepsilon_{i,t}, \end{align*} and the capital asset pricing model (CAPM), which reads as \begin{align*} \mathbb{E}_t[r_{i,t+1}] - r_{f,t} &= \frac{\mathbb{C}\text{ov}_t(r_{i,t+1},r_{m,t+1})}{\mathbb{V}\text{ar}_t[r_{m,t+1}]} \cdot (\mathbb{E}_t[r_{m,t+1}]-r_{f,t}) \\&= \beta_{i,t} \cdot (\mathbb{E}_t[r_{m,t+1}]-r_{f,t}). \end{align*} (Subscripts indicate that the conditional expectation/variance/covariance is meant.) So, the CAPM is an equilibrium asset pricing model about expected returns which can be, for instance, derived from a stochastic discount factor (SDF) framework assuming the SDF is linear in the market return. In particular, there is no idiosyncratic risk component $\varepsilon_{i,t}$. You can immediately see how the two CAPM equations agree with the equations you quoted: the CAPM assumes the expected excess return of any asset is proportional to the expected excess return of the market portfolio (value weighted portfolio of all assets), i.e. $\alpha_{i,t}=r_{f,t}$. The SIM is purely and merely a statistical model which regresses historical returns against the returns of some factor (this may be a portfolio mimicking the market portfolio but may be any other factor which is believed to drive the returns). By standard OLS regression of the SIM model, the coefficient $\beta_{i,t}$ is estimated to be $\frac{\mathbb{C}\text{ov}(r_{i,t},r_{m,t})}{\mathbb{V}\text{ar}[r_{m,t}]}$. So, the SIM may be used to test the CAPM empirically (i.e. if the CAPM were true, we would find the pricing error to be not statistically different from zero, etc.) but may not be confused with the CAPM.
As a matter of fact, empirical tests do indeed raise strong doubts that the CAPM is a good model.
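To see the difference in practice, here is a Python sketch (my own synthetic example, not from the answer above) that estimates the SIM by OLS and confirms that the fitted slope equals the moment ratio $\mathbb{C}\text{ov}(r_i - r_f,\, r_m - r_f)/\mathbb{V}\text{ar}(r_m - r_f)$:

```python
# Sketch: OLS estimation of the single index model on simulated returns.
import numpy as np

rng = np.random.default_rng(0)

T = 5_000
r_f = 0.001                                          # constant risk-free rate (assumed)
r_m = r_f + 0.005 + 0.02 * rng.standard_normal(T)    # simulated market returns
true_beta = 1.3
eps = 0.01 * rng.standard_normal(T)                  # idiosyncratic component
r_i = r_f + true_beta * (r_m - r_f) + eps            # asset returns under the SIM

# Regress excess asset returns on excess market returns: y = alpha + beta*x + eps
x = r_m - r_f
y = r_i - r_f
X = np.column_stack([np.ones(T), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# The OLS slope coincides with Cov(y, x) / Var(x):
beta_moment = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
assert abs(beta_hat - beta_moment) < 1e-8
print(round(beta_hat, 2))   # close to the true beta of 1.3
```

This is the statistical-regression side only; the CAPM statement above is about expectations, so it cannot be "run" on a sample the same way, which is exactly the distinction the answer draws.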
Here @gung makes reference to the .632+ rule. A quick Google search doesn't yield an easy to understand answer as to what this rule means and for what purpose it is used. Would someone please elucidate the .632+ rule? I will get to the 0.632 estimator, but it'll be a somewhat long development: Suppose we want to predict $Y$ with $X$ using the function $f$, where $f$ may depend on some parameters that are estimated using the data $(\mathbf{Y}, \mathbf{X})$, e.g. $f(\mathbf{X}) = \mathbf{X}\mathbf{\beta}$ A naïve estimate of prediction error is $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ where $L$ is some loss function, e.g. squared error loss. This is often called training error. Efron et al. call it the apparent error rate or resubstitution rate. It's not very good since we use our data $(x_i,y_i)$ to fit $f$. This results in $\overline{err}$ being downward biased. You want to know how well your model $f$ does in predicting new values. Often we use cross-validation as a simple way to estimate the expected extra-sample prediction error (how well does our model do on data not in our training set?). $$Err = \text{E}\left[ L(Y, f(X))\right]$$ A popular way to do this is to do $K$-fold cross-validation. Split your data into $K$ groups (e.g. 10). For each group $k$, fit your model on the remaining $K-1$ groups and test it on the $k$th group. Our cross-validated extra-sample prediction error is just the average $$Err_{CV} = \dfrac{1}{N}\sum_{i=1}^N L(y_i, f_{-\kappa(i)}(x_i))$$ where $\kappa$ is some index function that indicates the partition to which observation $i$ is allocated and $f_{-\kappa(i)}(x_i)$ is the predicted value of $x_i$ using data not in the $\kappa(i)$th set. This estimator is approximately unbiased for the true prediction error when $K=N$ and has larger variance and is more computationally expensive for larger $K$. So once again we see the bias–variance trade-off at play.
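The $K$-fold procedure described above fits in a few lines of Python. This is a from-scratch sketch on synthetic data (my own example, not from the answer), using squared-error loss and a least-squares line as the model $f$:

```python
# Sketch: K-fold cross-validation estimate Err_CV with squared-error loss.
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 10
x = rng.uniform(-1, 1, N)
y = 2.0 * x + 0.5 + 0.3 * rng.standard_normal(N)    # noise sd = 0.3

def fit_line(x_tr, y_tr):
    """Least-squares fit of y = a + b*x; returns a prediction function."""
    b, a = np.polyfit(x_tr, y_tr, 1)                # polyfit returns [slope, intercept]
    return lambda x_new: a + b * x_new

# kappa: a random partition of the indices into K folds
folds = np.array_split(rng.permutation(N), K)

losses = np.empty(N)
for fold in folds:
    train = np.setdiff1d(np.arange(N), fold)
    f_minus_k = fit_line(x[train], y[train])        # fit with fold k held out
    losses[fold] = (y[fold] - f_minus_k(x[fold])) ** 2

err_cv = losses.mean()
# err_cv estimates the extra-sample prediction error; with noise sd 0.3
# it should come out near 0.09.
```

Every observation is predicted exactly once, by a model that never saw it, which is what distinguishes $Err_{CV}$ from the training error $\overline{err}$.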
Instead of cross-validation we could use the bootstrap to estimate the extra-sample prediction error. Bootstrap resampling can be used to estimate the sampling distribution of any statistic. If our training data is $\mathbf{X} = (x_1,\ldots,x_N)$, then we can think of taking $B$ bootstrap samples (with replacement) from this set $\mathbf{Z}_1,\ldots,\mathbf{Z}_B$ where each $\mathbf{Z}_i$ is a set of $N$ samples. Now we can use our bootstrap samples to estimate extra-sample prediction error: $$Err_{boot} = \dfrac{1}{B}\sum_{b=1}^B\dfrac{1}{N}\sum_{i=1}^N L(y_i, f_b(x_i))$$ where $f_b(x_i)$ is the predicted value at $x_i$ from the model fit to the $b$th bootstrap dataset. Unfortunately, this is not a particularly good estimator because bootstrap samples used to produce $f_b(x_i)$ may have contained $x_i$. The leave-one-out bootstrap estimator offers an improvement by mimicking cross-validation and is defined as: $$Err_{boot(1)} = \dfrac{1}{N}\sum_{i=1}^N\dfrac{1}{|C^{-i}|}\sum_{b\in C^{-i}}L(y_i,f_b(x_i))$$ where $C^{-i}$ is the set of indices for the bootstrap samples that do not contain observation $i$, and $|C^{-i}|$ is the number of such samples. $Err_{boot(1)}$ solves the overfitting problem, but is still biased (this one is upward biased). The bias is due to non-distinct observations in the bootstrap samples that result from sampling with replacement. The average number of distinct observations in each sample is about $0.632N$ (see this answer for an explanation: Why on average does each bootstrap sample contain roughly two thirds of observations?). To solve the bias problem, Efron and Tibshirani proposed the 0.632 estimator: $$ Err_{.632} = 0.368\overline{err} + 0.632Err_{boot(1)}$$ where $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ is the naïve estimate of prediction error often called training error. The idea is to average a downward biased estimate and an upward biased estimate.
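The leave-one-out bootstrap estimator $Err_{boot(1)}$ can likewise be sketched directly from its definition. In this toy version (my own example, not from the answer) the "model" $f_b$ is just the mean of the resample, and the data are synthetic:

```python
# Sketch: leave-one-out bootstrap estimate Err_boot(1) with squared-error loss.
import numpy as np

rng = np.random.default_rng(2)
N, B = 100, 400
y = rng.standard_normal(N)          # toy data; "model" = predict the resample mean

loss_sums = np.zeros(N)             # running sum of losses over b in C^{-i}
counts = np.zeros(N)                # |C^{-i}| for each observation i
for _ in range(B):
    idx = rng.integers(0, N, N)     # bootstrap resample, with replacement
    mu_b = y[idx].mean()            # f_b fitted on the resample
    out = np.setdiff1d(np.arange(N), idx)   # observations NOT in the resample
    loss_sums[out] += (y[out] - mu_b) ** 2
    counts[out] += 1

err_boot1 = np.mean(loss_sums / counts)
```

Each observation's loss is averaged only over the roughly $0.368B$ resamples that left it out, which is exactly what keeps the observation from contaminating its own fit.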
However, if we have a highly overfit prediction function (i.e. $\overline{err}=0$) then even the .632 estimator will be downward biased. The .632+ estimator is designed to be a less-biased compromise between $\overline{err}$ and $Err_{boot(1)}$. $$ Err_{.632+} = (1 - w) \overline{err} + w Err_{boot(1)} $$ with $$w = \dfrac{0.632}{1 - 0.368R} \quad\text{and}\quad R = \dfrac{Err_{boot(1)} - \overline{err}}{\gamma - \overline{err}} $$ where $\gamma$ is the no-information error rate, estimated by evaluating the prediction model on all possible combinations of targets $y_i$ and predictors $x_j$: $$\gamma = \dfrac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N L(y_i, f(x_j)).$$ Here $R$ measures the relative overfitting rate. If there is no overfitting ($R=0$, which happens when $Err_{boot(1)} = \overline{err}$) this is equal to the .632 estimator. You will find more information in Section 3 of the paper cited below [1]. But to summarize, if you call $S$ a sample of $n$ numbers from $\{1:n\}$ drawn randomly and with replacement, $S$ contains on average approximately $(1-e^{-1})\,n \approx 0.63212056\, n$ unique elements. The reasoning is as follows. We populate $S=\{s_1,\ldots,s_n\}$ by sampling $i=1,\ldots,n$ times (randomly and with replacement) from $\{1:n\}$. Consider a particular index $m\in\{1:n\}$. Then: $$P(s_i=m)=1/n$$ and $$P(s_i\neq m)=1-1/n$$ and this is true $\forall 1\leq i \leq n$ (intuitively, since we sample with replacement, the probabilities do not depend on $i$), thus $$P(m\in S)=1-P(m\notin S)=1-P(\cap_{i=1}^n s_i\neq m)=1-\prod_{i=1}^n P(s_i\neq m)=1-(1-1/n)^n\approx 1-e^{-1}.$$ You can also carry out this little simulation to check empirically the quality of the approximation (which depends on $n$):

```r
n <- 100
fx01 <- function(ll, n) {
  a1 <- sample(1:n, n, replace = TRUE)
  length(unique(a1)) / n
}
b1 <- c(lapply(1:1000, fx01, n = 100), recursive = TRUE)
mean(b1)
```

1. Bradley Efron and Robert Tibshirani (1997).
Improvements on Cross-Validation: The .632+ Bootstrap Method. Journal of the American Statistical Association, Vol. 92, No. 438, pp. 548--560. In my experience, primarily based on simulations, the 0.632 and 0.632+ bootstrap variants were needed only because of severe problems caused by the use of an improper accuracy scoring rule, namely the proportion "classified" correctly. When you use proper (e.g., deviance-based or Brier score) or semi-proper (e.g., $c$-index = AUROC) scoring rules, the standard Efron-Gong optimism bootstrap works just fine. Those answers are very useful. I couldn't find a way to demonstrate it with maths so I wrote some Python code which works quite well though:

```python
from numpy import mean
from numpy.random import choice

N = 3000
variables = range(N)
num_loop = 1000

# Proportion of remaining (distinct) variables in each resample
p_var = []
for i in range(num_loop):
    set_var = set(choice(variables, N))
    p = len(set_var) / float(N)
    if i % 50 == 0:
        print("value for", i, "iteration p =", p)
    p_var.append(p)

print("Estimator of the proportion of remaining variables,", mean(p_var))
```
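Assembling the .632+ estimate itself is then a one-liner once $\overline{err}$, $Err_{boot(1)}$ and $\gamma$ are in hand. A minimal Python sketch (my own; clipping $R$ to $[0,1]$ is an implementation convention I am assuming, not something stated above):

```python
# Sketch: the .632+ combination of training error and leave-one-out
# bootstrap error, with the relative overfitting rate R.
def err_632_plus(err_bar, err_boot1, gamma):
    """Return the .632+ estimate from the training error err_bar, the
    leave-one-out bootstrap error err_boot1, and the no-information
    error rate gamma."""
    R = (err_boot1 - err_bar) / (gamma - err_bar)   # relative overfitting rate
    R = min(max(R, 0.0), 1.0)                       # assumed convention: clip R to [0, 1]
    w = 0.632 / (1.0 - 0.368 * R)
    return (1.0 - w) * err_bar + w * err_boot1

# No overfitting (err_boot1 == err_bar gives R = 0): weight is exactly 0.632,
# so the result reduces to the plain .632 estimator.
no_overfit = err_632_plus(1.0, 1.0, 2.0)

# Maximal overfitting (err_bar = 0, err_boot1 = gamma gives R = 1): w = 1,
# so the estimate collapses to err_boot1 alone.
max_overfit = err_632_plus(0.0, 1.0, 1.0)
```

The two endpoint cases match the description in the answer: with $R=0$ the .632+ and .632 estimators coincide, and as overfitting grows the weight $w$ shifts toward the upward-biased $Err_{boot(1)}$.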
In quantum mechanics, the amplitude of wave-function propagation can be found using Feynman's path integral $$ \langle z'|e^{-itH/\hbar}|z\rangle=\int\limits_{x(0)=z\\x(t)=z'} Dx(t')\: \exp\left\{\frac{i}\hbar\int_0^t dt'\:\left[\frac{m\dot{x}^2(t')}2-V(x(t'))\right]\right\}. $$ In the (quasi)classical limit $\hbar\rightarrow0$, the leading contribution to the integral comes from the classical trajectory $$ m\ddot{x}(t')+\frac{dV(x(t'))}{dx}=0, $$ where the action is minimal, and fluctuations around this trajectory provide quantum corrections to the result. In quantum statistical physics, the path integral can be used to calculate matrix elements of a thermal density matrix by switching to the imaginary time $\tau=it/\hbar$: $$ \langle z'|e^{-\beta H}|z\rangle=\int\limits_{x(0)=z\\x(\beta)=z'} Dx(\tau)\: \exp\left\{-\int_0^\beta d\tau\:\left[\frac{m\dot{x}^2(\tau)}2+V(x(\tau))\right]\right\}. $$ What is the physical meaning of a least-action trajectory in the imaginary time? What do fluctuations around this trajectory mean, and how do they qualitatively affect the resulting matrix elements?
Note: My previous question was wrong. This is the opposite of it, and I hope it is more sensible. Here $\displaystyle M(r)=\operatorname{Max}_{\lvert z\rvert=r} \lvert f(z) \rvert$, where $f(z)=p_n(z)$ is a polynomial of degree $n$. My first attempt: maybe this is related to Cauchy's inequality for estimating derivatives. Maybe consider the integral $\displaystyle \int\frac{f(z)}{z^{n+1}} \mathrm{d}z$? Another attempt: the inequality $\displaystyle \frac{M(r)}{r^n} \geq\frac{M(R)}{R^n}$ remotely resembles the Hadamard three-circle theorem.
Defining parameters Level: \( N \) = \( 57 = 3 \cdot 19 \) Weight: \( k \) = \( 2 \) Nonzero newspaces: \( 6 \) Newforms: \( 11 \) Sturm bound: \(480\) Trace bound: \(3\) Dimensions The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(57))\). Total New Old Modular forms 156 107 49 Cusp forms 85 71 14 Eisenstein series 71 36 35 Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(57))\) We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong. (e.g. the missing square puzzle) Do you know other examples? The never ending chocolate bar! If only I knew of this as a child.. The trick here is that the left piece that is three bars wide grows at the bottom when it slides up. In reality, what would happen is that there would be a gap at the right between the three-bar piece and the cut. This gap is three bars wide and one-third of a bar tall, explaining how we ended up with an "extra" piece. Side by side comparison: Notice how the base of the three-wide bar grows. Here's what it would look like in reality$^1$: 1: Picture source https://www.youtube.com/watch?v=Zx7vUP6f3GM A bit surprised this hasn't been posted yet. Taken from this page: Visualization can be misleading when working with alternating series. A classical example is \begin{align*} \ln 2=&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots,\\ \frac{\ln 2}{2}=&\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\ldots \end{align*} Adding the two series, one finds \begin{align*}\frac32\ln 2=&\left(\frac11+\frac13+\frac15+\ldots\right)-2\left(\frac14+\frac18+\frac1{12}+\ldots\right)=\\ =&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots=\\ =&\ln2. \end{align*} Here's how to trick students new to calculus (applicable only if they don't have graphing calculators, at that time): $0$. Ask them to find the inverse of $x+\sin(x)$, which they will be unable to. Then, $1$. Ask them to draw the graph of $x+\sin(x)$. $2$.
Ask them to draw graph of $x-\sin(x)$ $3$. Ask them to draw $y=x$ on both graphs. Here's what they will do : $4$. Ask them, "What do you conclude?". They will say that they are inverses of each other. And then get very confused. Construct a rectangle $ABCD$. Now identify a point $E$ such that $CD = CE$ and the angle $\angle DCE$ is a non-zero angle. Take the perpendicular bisector of $AD$, crossing at $F$, and the perpendicular bisector of $AE$, crossing at $G$. Label where the two perpendicular bisectors intersect as $H$ and join this point to $A$, $B$, $C$, $D$, and $E$. Now, $AH=DH$ because $FH$ is a perpendicular bisector; similarly $BH = CH$. $AH=EH$ because $GH$ is a perpendicular bisector, so $DH = EH$. And by construction $BA = CD = CE$. So the triangles $ABH$, $DCH$ and $ECH$ are congruent, and so the angles $\angle ABH$, $\angle DCH$ and $\angle ECH$ are equal. But if the angles $\angle DCH$ and $\angle ECH$ are equal then the angle $\angle DCE$ must be zero, which is a contradiction. Proof : Let $O$ be the intersection of the bisector $[BC]$ and the bisector of $\widehat{BAC}$. Then $OB=OC$ and $\widehat{BAO}=\widehat{CAO}$. So the triangles $BOA$ and $COA$ are the same and $BA=CA$. Another example : From "Pastiches, paradoxes, sophismes, etc." and solution page 23 : http://www.scribd.com/JJacquelin/documents A copy of the solution is added below. The translation of the comment is : Explanation : The points A, B and P are not on a straight line ( the Area of the triangle ABP is 0.5 ) The graphical highlight is magnified only on the left side of the figure. I think this could be the goats puzzle (Monty Hall problem) which is nicely visually represented with simple doors. Three doors, behind 2 are goats, behind 1 is a prize. You choose a door to open to try and get the prize, but before you open it, one of the other doors is opened to reveal a goat. You then have the option of changing your mind. Should you change your decision? 
From looking at the diagram above, you know for a fact that you have a 1/3 chance of guessing correctly. Next, a door with a goat behind it is opened: A cursory glance suggests that your odds have improved from 1/3 to a 50/50 chance of getting it right. But the truth is different... By calculating all possibilities we see that if you change, you have a higher chance of winning. The easiest way to think about it for me is: if you choose the car first, switching is guaranteed to give you a goat. If you choose a goat first, switching is guaranteed to give you the car. You're more likely to choose a goat first because there are more goats, so you should always switch. A favorite of mine was always the following: \begin{align*} \require{cancel}\frac{64}{16} = \frac{\cancel{6}4}{1\cancel{6}} = 4 \end{align*} I particularly like this one because of how simple it is and how it gets the right answer, though for the wrong reasons of course. A recent example I found, which is credited to Martin Gardner, is similar to some of the others posted here but perhaps with a slightly different reason for being wrong, as the diagonal cut really is straight. I found the image at a blog belonging to Greg Ross. Spoilers The triangles being cut out are not isosceles as you might think but really have base $1$ and height $1.1$ (as they are clearly similar to the larger triangles). This means that the resulting rectangle is really $11\times 9.9$ and not the reported $11\times 10$. Squaring the circle with Kochanski's approximation. One of my favorites: \begin{align} x&=y\\ x^2&=xy\\ x^2-y^2&=xy-y^2\\ \frac{x^2-y^2}{x-y}&=\frac{xy-y^2}{x-y}\\ x+y&=y\\ \end{align} Therefore, $1+1=1$. The error here is in dividing by $x-y$, which is zero since $x=y$. That $\sum_{n=1}^\infty n = -\frac{1}{12}$. http://www.numberphile.com/videos/analytical_continuation1.html The way it is presented in the clip is completely incorrect, and could spark a great discussion as to why.
Some students may notice the hand-waving 'let's intuitively accept $1 -1 +1 -1 ... = 0.5$. If we accept this assumption (and the operations on divergent sums that are usually not allowed) we can get to the result. A discussion that the seemingly nonsense result directly follows a nonsense assumption is useful. This can reinforce why it's important to distinguish between convergent and divergent series. This can be done within the framework of convergent series. A deeper discussion can consider the implications of allowing such a definition for divergent sequences - ie Ramanujan summation - and can lead to a discussion on whether such a definition is useful given it leads to seemingly nonsense results. I find this is interesting to open up the ideas that mathematics is not set in stone and can link to the history of irrational and imaginary numbers (which historically have been considered less-than-rigorous or interesting-but-not-useful). \begin{equation} \log6=\log(1+2+3)=\log 1+\log 2+\log 3 \end{equation} Here is one I saw on a whiteboard as a kid... \begin{align*} 1=\sqrt{1}=\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}=\sqrt{-1}^2=-1 \end{align*} I might be a bit late to the party, but here is one which my maths teacher has shown to me, which I find to be a very nice example why one shouldn't solve an equation by looking at the hand-drawn plots, or even computer-generated ones. Consider the following equation: $$\left(\frac{1}{16}\right)^x=\log_{\frac{1}{16}}x$$ At least where I live, it is taught in school how the exponential and logarithmic plots look like when base is between $0$ and $1$, so a student should be able to draw a plot which would look like this: Easy, right? Clearly there is just one solution, lying at the intersection of the graphs with the $x=y$ line (the dashed one; note the plots are each other's reflections in that line). Well, this is clear at least until you try some simple values of $x$. 
Namely, plugging in $x=\frac{1}{2}$ or $\frac{1}{4}$ gives you two more solutions! So what's going on? In fact, I have intentionally put in incorrect plots (you get the picture above if you replace $16$ by $3$). The real plot looks like this: You might disagree, but to me it still seems like a plot with just one intersection point. But, in fact, the part where the two plots meet contains all three points of intersection. Zooming in on the interval with all the solutions lets one barely see what's going on: The oscillations are truly minuscule there. Here is the plot of the difference of the two functions on this interval: Note the scale of the $y$ axis: the differences are on the order of $10^{-3}$. Good luck drawing that by hand! To get a better idea of what's going on with the plots, here they are with $16$ replaced by $50$: Here is a measure-theoretic one. By 'picture', if we take a cover of $A:=[0,1]\cap\mathbb{Q}$ by open intervals, we have an interval around every rational and so we also cover $[0,1]$; the Lebesgue measure of $[0,1]$ is 1, so the measure of $A$ is 1. As a sanity check, the complement of this cover in $[0,1]$ can't contain any intervals, so its measure is surely negligible. This is of course wrong, as the set of all rationals has Lebesgue measure $0$, and sets with no intervals need not have measure 0: see the fat Cantor set. In addition, if you fix the 'diagonal enumeration' of the rationals and take $\varepsilon$ small enough, the complement of the cover in $[0,1]$ contains $2^{\aleph_0}$ irrationals. I recently learned this from this MSE post. There are two examples on the Wikipedia page Missing_square_puzzle: Sam Loyd's paradoxical dissection, and Mitsunobu Matsuyama's "Paradox". But I cannot think of something that is not a dissection. This is my favorite.
\begin{align}-20 &= -20\\ 16 - 16 - 20 &= 25 - 25 - 20\\ 16 - 36 &= 25 - 45\\ 16 - 36 + \frac{81}{4} &= 25 - 45 + \frac{81}{4}\\ \left(4 - \frac{9}{2}\right)^2 &= \left(5 - \frac{9}{2}\right)^2\\ 4 - \frac{9}{2} &= 5 - \frac{9}{2}\\ 4 &= 5 \end{align} You can generalize it to get any $a=b$ that you'd like this way: \begin{align}-ab&=-ab\\ a^2 - a^2 - ab &= b^2 - b^2 - ab\\ a^2 - a(a + b) &= b^2 -b(a+b)\\ a^2 - a(a + b) + \left(\frac{a + b}{2}\right)^2 &= b^2 -b(a+b) + \left(\frac{a + b}{2}\right)^2\\ \left(a - \frac{a+b}{2}\right)^2 &= \left(b - \frac{a+b}{2}\right)^2\\ a - \frac{a+b}{2} &= b - \frac{a+b}{2}\\ a &= b\\ \end{align} It's beautiful because visually the "error" is obvious in the line $\left(4 - \frac{9}{2}\right)^2 = \left(5 - \frac{9}{2}\right)^2$, leading the observer to investigate the reverse FOIL process from the step before, even though this line is valid. I think part of the problem also stems from the fact that grade school / high school math education for the average person teaches there's only one "right" way to work problems and you always simplify, so most people are already confused by the un-simplifying process leading up to this point. I've found that the number of people who can find the error unaided is something less than 1 in 4. Disappointingly, I've had several people tell me the problem stems from the fact that I started with negative numbers. :-( Solution When working with variables, people often remember that $c^2 = d^2 \implies c = \pm d$, but forget that when working with concrete values because the tendency to simplify everything leads them to turn squares of negatives into squares of positives before applying the square root. The number of people that I've shown this to who can find the error is a small sample size, but I've found some people can carefully evaluate each line and find the error, and then can't explain it even after they've correctly evaluated $\left(-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$.
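A quick numeric check (a Python sketch, not part of the original answer) makes the broken step explicit: every line of the fake proof holds until the square root is taken, where the negative root is silently discarded.

```python
import math

# Each side of the key line (4 - 9/2)^2 = (5 - 9/2)^2 really is equal...
lhs = (4 - 9 / 2) ** 2   # (-0.5)^2 = 0.25
rhs = (5 - 9 / 2) ** 2   # ( 0.5)^2 = 0.25
assert lhs == rhs

# ...but taking square roots only yields c = ±d, not c = d:
assert math.sqrt(lhs) == 0.5 and math.sqrt(rhs) == 0.5

# The fake proof needs 4 - 9/2 == 5 - 9/2, which is false --
# the two bases differ in sign:
assert (4 - 9 / 2) == -0.5 and (5 - 9 / 2) == 0.5
print("squares equal, bases differ in sign")
```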
To give a contrarian interpretation of the question, I will chime in with Goldbach's comet, which counts the number of ways an integer can be expressed as the sum of two primes: It is mathematically "wrong" because there is no proof that this function doesn't equal zero infinitely often, and it is visually deceptive because it appears to be unbounded with its lower bound increasing at a linear rate. This is essentially the same as the chocolate puzzle. It's easier to see, however, that the total square shrinks. This is a fake visual proof that a sphere has Euclidean geometry. Strangely enough, in a 3-dimensional hyperbolic space, the amount of curvature a sphere has approaches a nonzero amount, and if you have an infinitely large object with exactly the amount of curvature a sphere approaches as its size approaches infinity, it will have Euclidean geometry and appear sort of the way that image appears. I don't know about you, but to me it looks like the hexagons are stretched horizontally. If you also see it that way and you trust your eyes, then you could take that as a visual proof that $\arctan\frac{7}{4} < 60^\circ$. If that's how you saw it, then it's an optical illusion, because the hexagons are really stretched vertically. Unlike some optical illusions of images that appear different than they are but are still mathematically possible, this is an optical illusion of a mathematically impossible image. The math shows that $\tan 60^\circ = \sqrt{3}$ and $\sqrt{3} < \frac{7}{4}$, because $7^2 = 49$ but $3 \times 4^2 = 48$. It's just like it's mathematically impossible for something to not be moving when it is moving, but it's theoretically possible for your eyes to stop sending movement signals to your brain and have you not see movement in something that is moving, which would look creepy for those who have not experienced it, because your brain could still tell by a more complex method than signals from the eyes that it actually is moving.
To draw a hexagonal grid over a square grid more accurately, only the math, and not your eye signals, can be trusted to help you do it accurately. The math shows that the continued fraction of $\sqrt{3}$ is $[1; 1, 2, 1, 2, 1, 2, \ldots]$, which shows it is less than $\frac{7}{4}$, not more. I do not think this really qualifies as "visually intuitive", but it is definitely funny. They do such a great job at dramatizing these kinds of situations. Who cannot remember an instance in which he has been either a "Billy" or a "Pa' and Ma'"? Maybe more "Pa' and Ma'" instances on my part... ;)
“The climate has changed and is always changing,” Trump administration spokesman Raj Shah said when asked about the evidence for climate change reported in the Fourth National Climate Assessment from the U.S. Global Change Research Program. Shah echoed prior assertions by climate contrarians that current changes in climate and weather are not unusual. Fluctuations—some of which have been extreme—have occurred prior to the industrial era. In more technical terms, this claim intimates that climate has a stationary statistical distribution, one that does not change with time. Additionally, it suggests that samples of this distribution have manifested as possibly rare, extreme highs in recent years. Shah also implied that the presence of uncertainties makes climate forecasting unreliable. To explore the assertion of a static climate distribution, we propose a null hypothesis: that record values of a stationary time series occur with a specific frequency. A record is defined as the largest (or smallest) value to date. We also examine how incorporation of historically-informed uncertainties in natural and anthropogenic factors, including human-generated greenhouse gases (GHG), modifies climate predictions; we do so via a simple model that captures the essential phenomenology of the radiation balance described by more complete state-of-the-art climate models. This energy balance model (EBM) is used to determine whether uncertainties in GHG emissions or other factors lead to climate projections that differ qualitatively from those obtained without accounting for uncertainties. Records in Time Series We invoke the null hypothesis that surface temperatures are samples from a stationary distribution. We then test whether a theorem that applies to stationary distributions—such as one about record highs and lows—is borne out by the data [2]. 
We draw a time-ordered sequence of independent and identically-distributed samples \(X_1\), \(X_2\), \(\ldots\) from a stationary distribution and define a sample from the sequence as a record high (or low) if its value is higher (or lower) than all preceding samples. The probability of a record high is \(P_n := \mathrm{Prob}[X_n > \max\{ X_1, X_2, \ldots, X_{n-1} \}]\) (with obvious modifications for a record low). In a sample set of size \(n\), each value has an equal chance of being the highest or lowest — thus, \(P_n = 1/n\). The expected number of records for a stationary random sequence of size \(n\), \(\mathbb{E}(R)\), is given by the harmonic series \(\mathbb{E}(R) = 1 + 1\!/\!2 + 1\!/\!3+\cdots + 1\!/\!n\). For large \(n\), \(\mathbb{E}(R) \approx \gamma + \log(n)\), where \(\gamma\) is the Euler constant. If the theorem applies to temperature data, we expect to wait increasingly long intervals for each new record temperature value because the probability declines as \(1\!/\!(t-t_0)\). Here the time \(t\) of each temperature observation takes the place of the statistical index \(n\), and \(t_0\) is the start of the particular temperature observations (we would also expect similar rates of record highs and lows if the probability distribution were symmetric). Figure 1a compares the record highs and lows obtained from a synthetic random time series to the July temperatures measured at the Moscow station, from about 1880 to 2011. The highs and lows are similarly spaced in time for the random time series, but the Moscow temperature data shows many early lows and none after about 1910. By contrast, the record highs are more evenly spaced and continue through the observation period. The data suggests that the theorem is not fulfilled: the rate at which record highs or lows occur at time \(t\) does not follow \(1\!/\!(t-t_0)\). Figure 1. Records in Northern Hemisphere temperature time series. 1a.
Records in a synthetic stationary distribution (left) and of July monthly temperatures at the Moscow station (right). 1b. Temperature data, as a function of time, for 30 arbitrary locations in the Northern Hemisphere. 1c. Record values for the seven temperature time series highlighted in 1b. The adjusted temperature subtracts the first temperature value in the time series. The data is taken from the Goddard Institute for Space Studies (GISS) repository, and temperature is recorded in Celsius. We note that there is a time in each data set beyond which no new lows occur, whereas new highs continue to appear as time progresses. Figure created by Juan Restrepo and Michael Mann using data from GISS. Figures 1b and 1c plot temperature data from 30 Northern Hemisphere locations, chosen at random but mostly concentrated in temperate zones. The time series are not of equal length, and some stations did not report every year. The annual mean temperatures shown in Figure 1b—which highlights seven arbitrarily-chosen temperature time series—indicate a long-term warming trend. But qualitatively at least, it is not obvious that a stationary temperature distribution is an unacceptable statistical model for mean temperatures. Figure 1c highlights the records associated with the seven highlighted time series. To facilitate comparison, we adjust these data sets by subtracting the first temperature data point in the set (leading to an adjusted temperature of zero for each time series). Adding more observations to the top or bottom set does not change the fact that one can expect more high than low records over time (the low records stop occurring). The record highs and lows tell a clearer story: they do not obey the \(1/t\) dependence, indicating that they are not samples from a stationary process.
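The record-counting theorem invoked above is easy to check empirically. The following sketch (illustrative Python, not the authors' code) compares the observed average number of record highs in i.i.d. sequences with the harmonic-number prediction \(\mathbb{E}(R) \approx \gamma + \log n\):

```python
import math
import random

def count_record_highs(xs):
    """Number of running maxima (record highs) in a sequence."""
    records, best = 0, float("-inf")
    for x in xs:
        if x > best:
            records += 1
            best = x
    return records

random.seed(0)
n, trials = 1000, 2000
avg = sum(count_record_highs([random.random() for _ in range(n)])
          for _ in range(trials)) / trials

# For a stationary i.i.d. sequence the expected number of records is the
# harmonic number H_n, which for large n is close to gamma + log(n).
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(avg, harmonic, 0.5772156649 + math.log(n))
```

For \(n = 1000\), \(H_n \approx 7.49\): even a thousand samples from a stationary process should produce only about seven record highs, which is why the steadily accumulating record highs in the Moscow data argue against stationarity.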
Incorporating Uncertainties in GHG Projections Global estimates of GHG emissions are readily available [1] and have tightly-constrained uncertainties, since they are critical to the energy sector of the economy. Uncertainties are associated with changing policies regarding carbon emissions, including international treaties and carbon pricing, and with the potentially time-varying nature of natural carbon sinks and sources. However, we focus here on variability spanning several decades to hundreds of years and the largest of spatial scales. A simple balance is used to estimate the temporal evolution of the global temperature \(T\). We can explore the extent to which GHG uncertainties—derived from a statistical analysis of the historical temperature and forcing data—affect conclusions of future temperature projections. This allows us to compare natural and anthropogenic GHG forcings to determine whether the outcomes’ sensitivity depends on the relative uncertainties in these two GHG components. We can also infer whether natural or anthropogenic forcings are dominant, both prior to and during the industrial era and in the future. Black body radiation theory tells us that Earth’s radiation is proportional to \(T^4\). The surface energy balance, in terms of surface temperature \(T\), is \(C dT\!/\!dt = Q +\kappa \sigma T_{Atm}^4 - \sigma T^{4}\), where \(T_{Atm}\) is the atmospheric temperature, \(t\) is time, \(C\) is the effective heat capacity, and \(\sigma\) is the Stefan-Boltzmann constant. \(Q\) represents the effective incoming radiation. If \(C_a dT_{Atm}\!/\!dt\) is small, where \(C_a\) is the effective atmospheric heat capacity, then the atmospheric balance gives \(\kappa \sigma T^4 - 2 \kappa \sigma T^4_{Atm} \approx 0\) and \(C dT\!/\!dt = Q -(1-\frac{\kappa}{2}) \sigma T^4\). Since the temperature range is not large, \((1-\frac{\kappa}{2}) \sigma T^4 \approx A + BT\), where \(A\) and \(B\) are constants. The energy balance is spectrally dependent.
The high frequency component has one portion that mostly dissipates and another that reflects back to space via clouds and snow/ice. On the other hand, reflectivity and a complex layer of gas, dust, and droplets capable of trapping surface outgoing radiation affect the low frequency component. Let us assume that \(Q\) is a linear combination of the effective solar radiation and GHG-induced radiative forcing. Hence, \(Q = \frac{1}{4}(1-\alpha) S + F_{GHG}\), where the albedo \(\alpha \approx 0.3\) and the global average solar radiation is presently \(S\!/\!4 \approx 1370\!/\!4 \:\textrm{Wm}^{-2}\). The EBM we adopt is thus \[C dT = \frac{S}{4}(1-\alpha) dt + F_{GHG} dt -\:(A+B T)dt + \nu(t) dt,\] where \(T\) is the temperature of Earth’s surface (approximated as a 70-meter-deep, mixed-layer ocean covering 70 percent of the surface area). \(C = 2.08 \times 10^8 \textrm{J}\: \textrm{K}^{-1} \textrm{m}^{-2}\) is the effective heat capacity that accounts for the thermal inertia of the mixed-layer ocean; however, it does not allow for heat exchange with the deep ocean. The last term in the model is a stochastic forcing term, which represents inherent uncertainties and unresolved processes. Figure 2 depicts a single realization of temperature predictions that accounts for natural and anthropogenic forcing and their variability (see Figure 3). The long-wave emissivity’s upward trend still dominates any uncertainties due to natural and man-made forcings during the Industrial Revolution. Ultimately, the steadily-increasing carbon dioxide forcing overwhelms natural factors in temperature prediction during the industrial era and into the future, even when accounting for variability and uncertainty due to natural volcanic and solar forcing of climate. Figure 2. Temperature predictions, including uncertainties, for various equilibrium climate sensitivities (ECS). 2a. Highlight of the composite forcing (see Figure 3) corresponding to the period 1850-2100. 2b. 
Temperature predictions as a function of ECS, taking into account uncertainties due to carbon dioxide emissions, volcanic activity, and solar forcing. Stochastic variability due to temperature uncertainties is included. Historical temperature variability data informs the stochastic model of temperature fluctuations. From left to right, equilibrium climate sensitivity equals 4.5, 3, 2.5, 2, 1.5. Figure courtesy of Juan Restrepo and Michael Mann. Summary Using observational surface temperature data, we show that temperatures around the Northern Hemisphere do not exhibit a time-stationary distribution. To explore the causal factors behind the observed non-stationarity, we drive a simple zero-dimensional EBM with estimated natural and anthropogenic forcings. The key natural forcings are associated with volcanic emissions and insolation changes, while anthropogenic forcing is primarily due to the warming effect of GHG increases from fossil fuel burning, which is accompanied by a secondary offsetting cooling influence from sulphate pollutants. We explain the effect of inherent uncertainties on projections of future global temperature, constructing historically-informed statistical models that represent the forcings' variability as stochastic influences. One must invoke both natural and anthropogenic forcings for the model simulations to agree with instrumental temperature data. Our calculations indicate that warming is a result of anthropogenic increases in GHG concentrations, a finding that is robust with respect to uncertainties in the forcings as represented by stochastic models. Moreover, since the effect of forcing variability is small compared to the upward trend of anthropogenic forcing, inherent variability cannot prevent further increases in global temperature without a slowdown in anthropogenic forcing, i.e., a cessation or decrease in GHG emissions.
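As a minimal numerical sketch of the EBM described above — not the authors' actual code — the balance \(C\, dT = [\frac{S}{4}(1-\alpha) + F_{GHG} - (A+BT)]\, dt + \nu\, dt\) can be integrated with forward Euler. \(C\), \(S\), and \(\alpha\) come from the article; the linearization constants \(A\), \(B\), the toy GHG ramp, and the noise level are assumed for illustration only:

```python
import random

# Forward-Euler integration of C dT = [S/4*(1-alpha) + F_ghg - (A + B*T)] dt + nu dt.
C = 2.08e8          # J K^-1 m^-2, effective heat capacity (from the article)
S = 1370.0          # W m^-2, solar constant (from the article)
alpha = 0.3         # albedo (from the article)
A, B = 210.0, 1.9   # W m^-2, W m^-2 K^-1 -- assumed linearization constants
year = 3.15e7       # seconds per year
dt = year / 52      # weekly time step (stable: B*dt/C << 1)

random.seed(1)
T = 14.0            # deg C, assumed initial surface temperature
for step in range(250 * 52):                # integrate 250 model years
    t_years = step * dt / year
    # Toy anthropogenic ramp: GHG forcing grows linearly after "year 100".
    F_ghg = 0.0 if t_years < 100 else 0.03 * (t_years - 100)
    nu = random.gauss(0.0, 0.5)             # toy stochastic forcing, W m^-2
    flux = S / 4 * (1 - alpha) + F_ghg - (A + B * T) + nu
    T += flux / C * dt

print(T)
```

With these placeholder values the pre-ramp equilibrium is \((S(1-\alpha)/4 - A)/B \approx 15.7\) °C, and the GHG ramp then pulls the temperature upward by a couple of degrees, while the stochastic term produces only small fluctuations around the trend — mirroring the article's point that forcing variability is small relative to the anthropogenic trend.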
Scientists using more sophisticated state-of-the-art climate models reach the same conclusions, and have been unable to find a plausible non-anthropogenic explanation for the observed warming and increase in warm extremes during the anthropogenic era [3]. We find no evidence that future natural radiative forcing contributions could substantially alter projected anthropogenic warming; the impact of their variability would contribute to “known unknowns” in temperature uncertainty. The model’s longtime features agree well with historical data and thus do not require the introduction of epistemic variability (“unknown unknowns”) in the model. Figure 3. Stochastic long-wave, short-wave forcing and composite total forcing, with uncertainties due to carbon dioxide emissions, volcanic activity, and solar forcing. A single stochastic realization is depicted. Figure courtesy of Juan Restrepo and Michael Mann. Shah’s appraisal of the outcomes in the Fourth National Climate Assessment motivated us to demonstrate the ways in which simple, well-established, quantitative methods can address apparent challenges posed by uncertainties in climate assessments. Because key climate change attributes, such as ice sheet collapse and sea level rise, are occurring ahead of schedule [4], uncertainty has in many respects turned against us. Scientific uncertainty is not a reason for inaction. If anything, it should inspire more concerted efforts to limit carbon emissions. Article partially adapted from “This is How ‘Climate is Always Changing,’” published in the American Physical Society Physics GPC Newsletter, Issue 9, February 2018. Acknowledgments: We would like to thank Barbara Levi, who provided invaluable editorial assistance with this article. References [1] Boden, T.A., Marland, G., & Andres, R.J. (2017). Global, Regional, and National Fossil-Fuel CO2 Emissions. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy. 
Retrieved from http://cdiac.ess-dive.lbl.gov/trends/emis/overview_2014.html. [2] Foster, F.G., & Stuart, A. (1954). Distribution-free tests in time-series based on the breaking of records. J. Royal Stat. Soc. Ser. B, 16, 1-22. [3] Mann, M., Miller, S., Rahmstorf, S., Steinman, B., & Tingley, M. (2017). Record temperature streak bears anthropogenic fingerprint. Geophys. Res. Let., 44(15), 7936-7944. [4] Mann, M., & Toles, T. (2016). The Madhouse Effect: How Climate Change Denial Is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy. New York, NY: Columbia University Press.
How is the momentum operator derived in Dirac formalism? I am reading Quantum Mechanics by Sakurai and he gives the following derivation. But I don't understand how he goes from the third equation to the last equation in (1.7.15). What I don't understand is where the partial derivative with respect to $x^{\prime}$ comes from. Here is the derivation from the book. Momentum Operator in the Position Basis We now examine how the momentum operator may look in the $x$-basis - that is, in the representation where the position eigenkets are used as base kets. Our starting point is the definition of momentum as the generator of infinitesimal translations: $$\begin{align} \biggl(1 - \frac{ip\Delta x'}{\hbar}\biggr)\lvert\alpha\rangle &= \int dx' \mathcal{J}(\Delta x')\lvert x'\rangle\langle x'\lvert\alpha\rangle \\ &= \int dx' \lvert x' + \Delta x'\rangle\langle x'\lvert\alpha\rangle \\ &= \int dx' \lvert x'\rangle\langle x' - \Delta x'\lvert\alpha\rangle \\ &= \int dx' \lvert x'\rangle\biggl(\langle x'\lvert\alpha\rangle - \Delta x'\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle\biggr).\tag{1.7.15} \end{align}$$ Comparison of both sides yields $$p\lvert\alpha\rangle = \int dx'\lvert x'\rangle\biggl(-i\hbar\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle\biggr)\tag{1.7.16}$$ or $$\langle x'\rvert p\lvert\alpha\rangle = -i\hbar\frac{\partial}{\partial x'}\langle x'\lvert\alpha\rangle,\tag{1.7.17}$$
Local Submersion Theorem: Suppose that $f:X \to Y$ is a submersion at $x$, and $y=f(x)$. Then there exist local coordinates around $x$ and $y$ such that $f(x_1,\dotsc,x_k)=(x_1,\dotsc,x_{\ell})$. That is, $f$ is locally equivalent to the canonical submersion near $x$. Definitions: The canonical submersion is the standard projection of $\mathbb{R}^k$ onto $\mathbb{R}^{\ell}$ for $k\geq \ell$, in which $(a_1,\dotsc,a_k) \mapsto (a_1,\dotsc,a_{\ell})$. We shall say that two maps $f: X \to Y$ and $f' : X' \to Y'$ are equivalent (or locally equivalent) if there exist diffeomorphisms $\alpha$ and $\beta$ such that $f=\beta\circ f'\circ\alpha$ (i.e., the obvious diagram commutes). I understand roughly that $f$ is locally equivalent to the canonical submersion near $x$. However, it is not clear to me why $f(x_1,\dotsc,x_k)=(x_1,\dotsc,x_{\ell})$. The tuples $(x_1,\dotsc,x_k)$ and $(x_1,\dotsc,x_{\ell})$ are not vectors of $X$ and $Y$: these $X$ and $Y$ are manifolds of dimension $k$ and dimension $\ell$, which does not necessarily imply that $X \subseteq \mathbb{R}^k$ and $Y \subseteq \mathbb{R}^{\ell}$. Rather, "local coordinates around $x$ and $y$" means that there exist coordinate systems $\phi^{-1}=(x_1,\dotsc,x_k)$ and $\psi^{-1}=(x_1,\dotsc,x_{\ell})$ such that $f= \psi \circ (\mathrm{canonical\ submersion}) \circ \phi^{-1}$, but $f\circ \phi^{-1} = f(x_1,\dotsc,x_k)=(x_1,\dotsc,x_{\ell}) =\psi^{-1}$ is not true. Could someone explain the nature of $f(x_1,\dotsc,x_k) = (x_1,\dotsc,x_{\ell})$? It's not clear to me what it means.
Under the auspices of the Computational Complexity Foundation (CCF)

Impagliazzo and Wigderson showed that if $\mathrm{E}=\mathrm{DTIME}(2^{O(n)})$ requires size $2^{\Omega(n)}$ circuits, then every time-$T$ constant-error randomized algorithm can be simulated deterministically in time $\mathrm{poly}(T)$. However, such polynomial slowdown is a deal breaker when $T=2^{\alpha \cdot n}$, for a constant $\alpha>0$, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak [PP] observed that many such algorithms are obtained from randomized time-$T$ algorithms, for $T\leq 2^{o(n)}$, with large one-sided error $1-\epsilon$, for $\epsilon=2^{-\alpha \cdot n}$, that are repeated $1/\epsilon$ times to yield a constant-error randomized algorithm running in time $T/\epsilon=2^{(\alpha+o(1)) \cdot n}$. We show that if E requires size $2^{\Omega(n)}$ nondeterministic circuits, then there is a $\mathrm{poly}(n)$-time $\epsilon$-HSG (Hitting-Set Generator) $H\colon\{0,1\}^{O(\log n) + \log(1/\epsilon)} \to \{0,1\}^n$ (so that every linear-size Boolean circuit $C$ with $\Pr_{x\in\{0,1\}^n}[C(x)=1]\geq \epsilon$ accepts input $H(s)$ for at least one seed $s$), implying that time-$T$ randomized algorithms with one-sided error $1-\epsilon$ can be simulated in deterministic time $\mathrm{poly}(T)/\epsilon$. In particular, under this hardness assumption, the fastest known constant-error randomized algorithm for $k$-SAT (for $k\ge 4$) by Paturi et al. [PPSZ] can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with "few" nondeterministic bits. Applebaum et al.
showed that "black-box techniques" cannot achieve $\mathrm{poly}(n)$-time computable $\epsilon$-PRGs (Pseudo-Random Generators) for $\epsilon=n^{-\omega(1)}$, even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial time hierarchy. We introduce weaker variants of PRGs with relative error, that do follow from the latter hardness assumption. Specifically, we say that a function $G:\{0,1\}^r \to \{0,1\}^n$ is an $(\epsilon,\delta)$-re-PRG for a circuit $C$ if \( (1-\epsilon) \cdot \Pr[C(U_n)=1] - \delta \le \Pr[C(G(U_r))=1] \le (1+\epsilon) \cdot \Pr[C(U_n)=1] + \delta. \) We construct $\mathrm{poly}(n)$-time computable $(\epsilon,\delta)$-re-PRGs with arbitrary polynomial stretch, $\epsilon=n^{-O(1)}$ and $\delta=2^{-n^{\Omega(1)}}$. We also construct PRGs with relative error that fool non-boolean distinguishers (in the sense introduced by Dubrov and Ishai). Our techniques use ideas from [PP, TV, AASY15]. Common themes in our proofs are "composing" a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund [FeigeLund].
Added: note that your computation of the larger triangle's area is double what it should be. Recall that the area $A$ of a triangle is $$\dfrac 12 \text{base}\times\text{height}$$ Its height, as you calculated, is $H = \sqrt{150^2 - 120^2} = 90\;$ and its base is $240$. Hence, $$A = \frac 12 (240) \times 90 = 120 \times 90 = 10800$$ Fortunately, your calculation of the large triangle's height, $90 \implies h + y = 90$, is unaffected. But if you do get more information about the dimensions of the rectangle, or the area of the red/green regions, you'll want to have the correct total area of the large triangle. This is what we can say: $h + y = 90 \implies h = 90 - y$ $$A_{\text{grey triangle}} = \dfrac 12 x \times h = \dfrac 12 x(90 - y) = 45x - \dfrac{xy}{2}$$ That is all we can say. We need to know the height of the rectangle to solve for the area of the grey triangle. Otherwise, the grey equilateral triangle sharing the top vertex of the large triangle could be arbitrarily large (or arbitrarily small). Per comment added: there is also insufficient information to determine the perimeter of the grey triangle: $\quad P = 3x.\;$ In this case, we'd need to know the length of the rectangle, which is equal to $x.\;$ With no information about the dimensions of the rectangle, there is no solution. If the solution you heard is correct, and the perimeter $P$ is supposed to come out to $240$, then $$P = 240 = 3x \implies x = 80,$$ and that would require the length/base of the rectangle/equilateral triangle to be of length $80 = \dfrac 13(240)$, i.e., $1/3$ of the base of the large triangle. This would be consistent with, and imply, that the endpoints of the base of the rectangle partition the base into three segments of equal length.
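The corrected arithmetic is easy to verify directly (illustrative check only):

```python
import math

# Check the corrected numbers for the large triangle (sides 150, 150, base 240).
height = math.sqrt(150**2 - 120**2)   # half the base is 120
area = 0.5 * 240 * height

assert height == 90
assert area == 10800

# If the grey triangle's perimeter is 240, its side is a third of the base.
x = 240 / 3
assert x == 80
```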
A small bead is sliding on a smooth vertical circular hoop of radius $a$, which is constrained to rotate with constant angular velocity $\omega$ about its vertical diameter. $\theta$ is the angle between the downward vertical and the radius to the bead. I've calculated that: the Lagrangian for this motion is $L=\frac{1}{2}m(a^2\dot\theta^2+a^2\omega^2\sin^2\theta)+mga\cos\theta$. There are 4 positions of equilibrium when $g<a\omega^2$: $\theta_1=0$, $\theta_2=\pi$, $\theta_{3,4}=\arccos(\frac{g}{a\omega^2})$, and 2 positions of equilibrium ($\theta_1=0$, $\theta_2=\pi$) when $g>a\omega^2$. For what values of $\omega$ is the equilibrium position at the lowest point of the hoop stable? Why can we apply Lagrange's equations to this problem, thus ignoring the normal reaction between the hoop and the bead?
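One way to probe the stability question numerically is through the effective potential per unit mass, $V_{\mathrm{eff}}(\theta) = -\frac{1}{2}a^2\omega^2\sin^2\theta - ga\cos\theta$, implied by the Lagrangian above: the sign of $V_{\mathrm{eff}}''(0)$ decides stability at the lowest point. An illustrative sketch with the arbitrary choice $a = g = 1$ (so the boundary should sit at $\omega^2 = g/a = 1$):

```python
import numpy as np

# Effective potential from the Lagrangian above, per unit mass, with a = g = 1:
# V_eff(theta) = -(1/2) * w^2 * sin(theta)^2 - cos(theta)
def v_eff(theta, w):
    return -0.5 * w**2 * np.sin(theta)**2 - np.cos(theta)

def curvature_at_zero(w, h=1e-4):
    # Second derivative of V_eff at theta = 0 by central differences.
    return (v_eff(h, w) - 2.0 * v_eff(0.0, w) + v_eff(-h, w)) / h**2

# With a = g = 1 the lowest point should be stable for w^2 < 1.
assert curvature_at_zero(0.5) > 0   # stable: positive curvature
assert curvature_at_zero(2.0) < 0   # unstable: negative curvature
```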
See also: Generalized least squares, Generalized estimating equations, White test.

Heteroskedasticity-consistent standard errors (also called Huber-White, White, or simply robust standard errors, after Friedhelm Eicker, Peter J. Huber, and Halbert White) correct the estimated covariance matrix of the OLS coefficients when the error variance is not constant across observations. The model still assumes the errors are serially uncorrelated but allows heteroskedasticity, \(V(u_i) = \sigma^2_i\). The coefficient estimates themselves are unchanged; but if the estimate of the coefficient covariance matrix is wrong, then so are the standard errors, t-statistics and p-values built on it. Under heteroskedasticity the usual OLS standard errors may be either too large or too small.

To see why, recall that in the homoskedastic case the variance of \(\hat\beta\) is inversely proportional to \(\sum_i (x_i - \bar x)^2\). Under heteroskedasticity, some observations are even more (or less) informative than the OLS variance estimate "thinks", which is why robust standard errors can come out either smaller or larger than the OLS ones.

Several versions of the estimator are in use. White's original estimator (HC0) is consistent in large samples. HC1 adjusts for degrees of freedom by multiplying by n/(n−k−1), though for large n the difference is unimportant. Alternative estimators proposed in MacKinnon & White (1985) (HC2 and HC3) are, unlike the asymptotic White estimator, less biased for smaller samples, and HC3 often gives superior results to HC2. See Greene, William H. (2012), pp. 692-693, and "Heteroscedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation", Behavior Research Methods.

In R, replacing the OLS standard errors with heteroskedasticity-robust ones changes only the standard-error column of the coefficient table. For example, an OLS fit reports

    Estimate Std. Error t value Pr(>|t|)
    (Intercept) 0.259 0.183 0.95 0.34
    x 4.241 0.479 8.86 <2e-16 ***

while the robust version of the same fit reports an intercept standard error of 0.273 in place of 0.183, with the estimates unchanged. Note that calling summary() on the fitted object again shows the original SEs; the robust covariance matrix must be supplied explicitly each time the table is printed.

The Real Statistics Resource Pack implements the same idea in Excel. Its Multiple Linear Regression data analysis tool contains an option for calculating any of the versions of the Huber-White robust standard errors described above; here R1 is an n × k array containing the X sample data and R2 is an n × 1 array containing the Y sample data (the input data are shown in A3:E20 of Figure 2). In that example, the standard error of the Infant Mortality coefficient is 0.42943 using robust standard errors (HC3 version) versus 0.300673 using OLS.

Two cautions. First, when robust standard errors are used, the F-statistic (cell K12 in Figure 2) is not accurate, and so it and the corresponding p-value should not be relied on. Second, Wooldridge notes that the t-statistics obtained when using robust standard errors have only an asymptotic justification, so in small samples they should be interpreted with care. Robust standard errors also do not answer the question of how to deal with outliers, and if heteroskedasticity reflects a misspecified model, it may be better to think about changing the model.
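The sandwich form of these estimators is compact enough to spell out; a minimal numpy sketch (illustrative only, not the Real Statistics or R implementation; the function name and synthetic data are made up):

```python
import numpy as np

def ols_robust(X, y, kind="HC1"):
    """OLS coefficients with heteroskedasticity-consistent standard errors.

    A minimal sketch of the Huber-White sandwich estimator; X is assumed
    to already contain a column of ones for the intercept.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: X' diag(e_i^2) X
    meat = (X * resid[:, None] ** 2).T @ X
    cov = XtX_inv @ meat @ XtX_inv          # HC0, White's original estimator
    if kind == "HC1":
        # Degrees-of-freedom adjustment; with k counting the intercept this
        # is the n/(n - k - 1) factor mentioned above.
        cov *= n / (n - k)
    return beta, np.sqrt(np.diag(cov))

# Usage sketch on synthetic data whose error variance grows with x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + 0.3 * x)
X = np.column_stack([np.ones_like(x), x])
beta, se = ols_robust(X, y)
```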
Semi-local simple connectedness is a property that arises in Algebraic Topology in the study of covering spaces, namely, it is a necessary condition for the existence of the universal cover of a topological space $X$. It means that every point $x \in X$ has a neighborhood $N$ such that every loop in $N$ is nullhomotopic in $X$ (not necessarily through a homotopy of loops in $N$). The way I see it, the prefix "semi-" refers more to "simply connected" than to "locally", since if such $N$ exists, all other neighborhoods of $x$ inside $N$ also have the property, so each point has a fundamental system of (open) neighborhoods for which the property holds (EDIT: see Qiaochu's comment). On the other hand, it isn't true that a semi-locally simply connected space is locally simply connected (i.e. each point has a fundamental system of open, simply connected neighborhoods): take the space $$ X = \frac{H \times I}{ \sim } $$ where $H$ is the "Hawaiian earring" (which is an example of a non semi-locally simply connected space) and $\sim$ is the equivalence that identifies $H \times \{0\}$ to one point. However, I was interested in finding another type of counterexample. Consider the topological property (call it $*$) consisting in the existence, for all $x \in X$, of a simply connected, not necessarily open, neighborhood of $x$. We have $$ \text{semi-local simple connectedness} \Leftarrow * \Leftarrow \text{local simple connectedness} \vee \text{simple connectedness} $$ I am wondering if $$ \text{semi-local simple connectedness} \Rightarrow * $$ holds. Intuitively it shouldn't, but I'm having trouble finding a counterexample. For example, the space $X$ described above won't work because it is simply connected (even contractible).
It seems to me that, if a counterexample does exist, it must have local pathologies (to ensure that a certain point $x$ doesn't have a simply connected neighborhood), while globally the space should allow loops close to $x$ to be nullhomotopic, but in such a way that every neighborhood $N$ of $x$ contains a small enough loop that will not contract in $N$. EDIT: Also, I am looking for a counterexample which is a locally path connected space (a previous answer showed a counterexample without this property... but @answerer, it was interesting anyway, you shouldn't have deleted it!) But perhaps I'm wrong, and the two assertions are equivalent, or maybe I am missing something very simple. If anybody has any ideas, please share, thank you!
The integer linear programming formulation of the multicut problem for a given graph $G = (V,E)$ and distinguished source-sink pairs of vertices $(s_1,t_1),\dotsc,(s_k,t_k)$ is: \begin{alignat}{3} \text{minimize} & \quad \sum_{e \in E} c_ex_e \\ \text{subject to} &\quad \sum_{e\in P} x_e \geq 1, & \quad \forall P \in \mathcal{P}_i , 1 \leq i \leq k \\ & \quad x_e \in \{0,1\}, & \quad \forall e \in E \end{alignat} where $c_e$ is the cost of edge $e$ and $\mathcal{P}_i$ is the set of all paths $P$ joining $s_i$ and $t_i$. The number of paths in the graph may be exponential in the size of the input, so the number of constraints in this formulation may be exponential. In the book The Design of Approximation Algorithms they provide a separation oracle for the LP relaxation of this ILP (obtained by allowing $x_e$ to be a nonnegative real number). But it is also stated in the book that it is possible to give an LP with a polynomial number of constraints that is equivalent to the LP relaxation stated above, i.e. any optimal solution to either one can be transformed in polynomial time into an optimal solution to the other one. What is the main idea to reduce the number of constraints? EDIT: It seems to me that this question is quite relevant to the following post, but I can't understand it clearly: Use complementary slackness to prove the LP formulation of max-flow only need polynomial number of path constraints
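The separation oracle the book describes amounts to a shortest-path computation: treating the fractional values $x_e$ as edge lengths, all constraints for the pair $(s_i,t_i)$ hold iff the shortest $s_i$-$t_i$ distance is at least $1$, and a shorter path is exactly a violated constraint. A sketch (the adjacency-list format and function names here are illustrative assumptions):

```python
import heapq

def shortest_dist(adj, s, t):
    """Dijkstra with nonnegative edge lengths x_e; adj: {u: [(v, x_e), ...]}."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def violated_pair(adj, pairs, tol=1e-9):
    """Separation oracle: return a pair whose shortest path is < 1, else None.

    The shortest s_i-t_i path under lengths x_e is the most-violated
    constraint sum_{e in P} x_e >= 1 for that pair, so checking it
    suffices to check all (exponentially many) path constraints.
    """
    for s, t in pairs:
        if shortest_dist(adj, s, t) < 1.0 - tol:
            return (s, t)
    return None

# Usage sketch: a hypothetical triangle graph with fractional LP values x_e;
# the shortest a-c path has length 0.3 + 0.4 = 0.7 < 1, so (a, c) is violated.
adj = {
    "a": [("b", 0.3), ("c", 0.8)],
    "b": [("a", 0.3), ("c", 0.4)],
    "c": [("a", 0.8), ("b", 0.4)],
}
```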
The probabilistic degree of a Boolean function $f:\{0,1\}^n\rightarrow \{0,1\}$ is defined to be the smallest $d$ such that there is a random polynomial $\mathbf{P}$ of degree at most $d$ that agrees with $f$ at each point with high probability. Introduced by Razborov (1987), upper and lower bounds on probabilistic degrees ...

We study the probabilistic degree over reals of the OR function on $n$ variables. For an error parameter $\epsilon$ in (0,1/3), the $\epsilon$-error probabilistic degree of any Boolean function $f$ over reals is the smallest non-negative integer $d$ such that the following holds: there exists a distribution $D$ of polynomials ...

We show that there is a randomized algorithm that, when given a small constant-depth Boolean circuit $C$ made up of gates that compute constant-degree Polynomial Threshold functions or PTFs (i.e., Boolean functions that compute signs of constant-degree polynomials), counts the number of satisfying assignments to $C$ in significantly better than ...

We study the size blow-up that is necessary to convert an algebraic circuit of product-depth $\Delta+1$ to one of product-depth $\Delta$ in the multilinear setting. We show that for every positive $\Delta = \Delta(n) = o(\log n/\log \log n),$ there is an explicit multilinear polynomial $P^{(\Delta)}$ on $n$ variables that ...

The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within $\mathrm{P}$. In this paper, we study the algebraic formula complexity of multiplying $d$ many $2\times 2$ matrices, denoted $\mathrm{IMM}_{d}$, and show ...

We study approximation of Boolean functions by low-degree polynomials over the ring $\mathbb{Z}/2^k\mathbb{Z}$.
More precisely, given a Boolean function $F:\{0,1\}^n \rightarrow \{0,1\}$, define its $k$-lift to be $F_k:\{0,1\}^n \rightarrow \{0,2^{k-1}\}$ by $F_k(x) = 2^{k-F(x)}$ (mod $2^k$). We consider the fractional agreement (which we refer to as $\gamma_{d,k}(F)$) of $F_k$ with ...

In this note, we prove that there is an explicit polynomial in VP such that any $\Sigma\Pi\Sigma$ arithmetic circuit computing it must have size at least $n^{3-o(1)}$. Up to $n^{o(1)}$ factors, this strengthens a recent result of Kayal, Saha and Tavenas (ICALP 2016) which gives a polynomial in VNP with ...

We continue the study of the shifted partial derivative measure, introduced by Kayal (ECCC 2012), which has been used to prove many strong depth-4 circuit lower bounds starting from the work of Kayal, and that of Gupta et al. (CCC 2013). We show a strong lower bound on the dimension ...

Nisan (STOC 1991) exhibited a polynomial which is computable by linear sized non-commutative circuits but requires exponential sized non-commutative algebraic branching programs. Nisan's hard polynomial is in fact computable by linear sized skew circuits (skew circuits are circuits where every multiplication gate has the property that all but one of ...

We show here a $2^{\Omega(\sqrt{d} \cdot \log N)}$ size lower bound for homogeneous depth four arithmetic formulas. That is, we give an explicit family of polynomials of degree $d$ on $N$ variables (with $N = d^3$ in our case) with $0, 1$-coefficients such that for any representation of ...

We prove improved inapproximability results for hypergraph coloring using the low-degree polynomial code (aka, the "short code" of Barak et al. [FOCS 2012]) and the techniques proposed by Dinur and Guruswami [FOCS 2013] to incorporate this code for inapproximability results. In particular, we prove quasi-NP-hardness of the following problems on ...
We consider the problem of constructing explicit Hitting sets for Combinatorial Shapes, a class of statistical tests first studied by Gopalan, Meka, Reingold, and Zuckerman (STOC 2011). These generalize many well-studied classes of tests, including symmetric functions and combinatorial rectangles. Generalizing results of Linial, Luby, Saks, and Zuckerman (Combinatorica 1997) ...
Expand $f(x) = \log(1 + x)$ around $x = 0$ to all orders. More precisely, find $a_n$ such that for any positive integer $N$, we have $$f(x) = \left(\sum_{n=0}^{N-1} a_nx^n\right) + E_N(x) \text{ for all }\left|x\right| < {1\over2},$$ where $\left|E_N(x)\right| \le C_N\left|x\right|^N$ for $\left| x\right| \le 1/2$. How does the constant $C_N$ depend on $N$? How do I see that we have an infinite Taylor expansion $$f(x) = \sum_{n=0}^\infty a_nx^n \text{ for all }\left|x\right| < {1\over2}?$$ What is the largest interval of validity of this series representation? Can we extend to $\left|x\right| < 1$ or beyond?

Here is an outline of an approach: First, show (induction works well) that the bound on $E_N$ gives $a_n = {1 \over n!} f^{(n)}(0)$. In particular, the $a_n$ are unique, so we can use any technique to find them. Note that since $f$ is only defined on $(-1,\infty)$ (in particular, $\lim_{x \downarrow -1} f(x) = -\infty$), the radius of convergence is at most $1$ (since $|0-(-1)| = 1$). Second, note that $f'(x) = {1 \over 1+x}$, and note that if $|x|<1$, then $f'(x) = 1-x+x^2-x^3+\cdots$. Furthermore, convergence is uniform on any compact subset of $(-1,1)$ (Weierstrass M-test), and hence we can exchange integration and summation to see that $f(x) = x-{1 \over 2}x^2+{1 \over 3} x^3-{1 \over 4} x^4 + \cdots$. It follows from this that $a_0 = 0$, $a_n = (-1)^{n+1}{1 \over n}$ for $n>0$.
Then $$E_N(x) = \sum_{n=N}^\infty (-1)^{n+1}{1 \over n} x^n = x^N \sum_{n=N}^\infty (-1)^{n+1}{1 \over n} x^{n-N} = x^N \sum_{n=0}^\infty (-1)^{n+1+N}{1 \over {n+N}} x^{n}$$ It follows that $C_N = \sup_{|x|< {1 \over 2}} |\sum_{n=0}^\infty (-1)^{n+1+N}{1 \over {n+N}} x^{n}| \le \sum_{n=0}^\infty {1 \over {n+N}} {1 \over 2^{n}}$. An immediate closed form expression for $C_N$ is not clear to me, but it is easy to see the estimate $C_N \le {2 \over N}$. In fact, if we take $|x| < R$, where $R < 1$, we can form a bound $C_N \le {K \over N}$, where $K$ is independent of $N$. In particular, we have $\sup_{|x|<R} |E_N(x)| \le {K \over N} R^N$. Hence we see that for any $R<1$, we have $\lim_{N \to \infty} \sup_{|x|<R} |E_N(x)| = 0$, and hence the Taylor series approximation converges uniformly to $f(x)$ for $|x|<R$. We know that $\log$ is the inverse function to $g(y) = e^y$. We have that $g$ takes only positive values and is continuous, so $g$ is bounded away from $0$ on $[-T, T]$ for all $T > 0$. Consequently, the inverse function $\log$ must be unbounded in any neighborhood of $0$, and so the power series for $\log(1 + x)$ cannot converge to a continuous function on any neighborhood of $x = -1$. Thus, our best hope is to have a series converging to $\log(1 + x)$ for $\left|x\right| < 1$. In fact, it will be continuous on $(-1, 1]$, though we will not worry about $x = 1$. Now, since $$g'(y) = g(y) > 0$$ for all $y$, we have that $g$ is strictly increasing, so if $g(y_n) \to g(y_0)$, we must have $y_n \to y_0$. Thus, if $g(y_0) = x_0$, $h = g^{-1}$, and $g(y_n) = x_n$ with $y_n \to y_0$, we have $$\lim_{n \to \infty} \left({{h(x_n) - h(x_0)}\over{x_n - x_0}}\right)\left({{g(y_n) - g(y_0)}\over{y_n - y_0}}\right) = 1.$$ Since $${{g(y_n) - g(y_0)}\over{y_n - y_0}} \to g'(y_0) = g(y_0) = x_0 \neq 0,$$ we get $$\lim_{n \to \infty} {{h(x_n) - h(x_0)}\over{x_n - x_0}} = {1\over{g(y_0)}} = {1\over{x_0}}.$$ This is true for any sequence $x_n \to x_0$, so we conclude $h'(x_0) = 1/x_0$.
It is clear that $g(y) \to \infty$ as $y \to \infty$, so $g(-y) \to 0$ as $y \to \infty$ by the functional equation. But $g(y) >0$ for all $y \in \mathbb{R}$, so by the Intermediate Value Theorem, we conclude $g$ has range $(0, \infty)$; moreover, $g$ is injective, as $g' > 0$ and the Mean Value Theorem imply $g$ is $1$-$1$. So $h = g^{-1}$ is defined on $(0, \infty)$, and we see $h'(x) = 1/x$ for all $x$. We have that $h(1) = 0$, so $$h(x) = \int_1^x {{dt}\over{t}}.$$ Thus, $$f(x) = h(1+x) = \int_0^x {{dt}\over{1 + t}}.$$ Suppose $\left|x\right| < 1$. Then on $[-\left|x\right|,\left|x\right|]$, the power series for $1/(1 + t)$ converges uniformly, so we may interchange limits, as follows: $$f(x) = \left(\int_0^x \left(\lim_{n \to \infty} \sum_{k=0}^n (-t)^k\right)dt\right) + f(0)$$$$= \left(\lim_{n \to \infty} \int_0^x \left(\sum_{k=0}^n (-t)^k\right) dt\right) + 0$$$$= \lim_{n \to \infty} \left(-\sum_{k=1}^{n+1} {{(-x)^k}\over{k}}\right)$$$$= -\sum_{k=1}^\infty {{(-x)^k}\over{k}},$$ and the series converges to $\log(1+x)$ for $\left|x\right| < 1$. Note that uniform convergence was needed to change the order of the limit and the integral.
Convergence and, in fact, uniform convergence for case (1) follows from the Dirichlet test, since $\displaystyle\left|\int_1^c \cos x \, dx \right| \leqslant 2$ for all $c > 1$ (uniformly bounded) and $x^{-\alpha} < x^{-\alpha_0}$, which implies that $x^{-\alpha} \downarrow 0$ monotonically and uniformly for all $\alpha > \alpha_0$. For case (2), we have convergence since the argument for case (1) applies to any $\alpha_0 > 0$. However, the convergence is not uniform. Given sequences $\displaystyle c_n = -\frac{\pi}{4}+2\pi n$ and $\displaystyle d_n = \frac{\pi}{4}+ 2 \pi n$, we have $\cos x > 1/\sqrt{2}$ for $c_n \leqslant x \leqslant d_n$ and $$\left|\int_{c_n}^{d_n} \frac{\cos x}{x^\alpha} \, dx\right| \geqslant \frac{1}{d_n^\alpha}\int_{c_n}^{d_n} \cos x \, dx \geqslant \frac{1}{d_n^\alpha}\frac{\pi}{2 \sqrt{2}}.$$ Taking the sequence $\alpha_n = ( \log d_n)^{-1},$ we have $d_n^{\alpha_n} = \exp(\log d_n (\log d_n)^{-1})= e$ and, consequently, $$\tag{*}\left|\int_{c_n}^{d_n} \frac{\cos x}{x^{\alpha_n}} \, dx\right| \geqslant \frac{\pi}{2 \sqrt{2}\,e}.$$ Since $c_n , d_n \to \infty$ and $\alpha_n \in (0,\infty)$ as $n \to \infty$, the Cauchy criterion for uniform convergence is violated. Note that uniform convergence would require that for any $\epsilon > 0$ there exists $K > 1$ such that for all $d> c> K$ and for any $\alpha \in (0,\infty)$ we have $$\left|\int_{c}^{d} \frac{\cos x}{x^\alpha} \, dx\right| < \epsilon ,$$ which is contradicted by (*).
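The lower bound (*), $\pi/(2\sqrt{2}\,e) \approx 0.404$, can be illustrated numerically (midpoint-rule integration; the step count is an arbitrary choice):

```python
import math

# Estimate |int_{c_n}^{d_n} cos(x)/x^(alpha_n) dx| for alpha_n = 1/log(d_n),
# illustrating that it stays above pi/(2*sqrt(2)*e) however large n is.
def integral(n, steps=10000):
    c = -math.pi / 4 + 2 * math.pi * n
    d = math.pi / 4 + 2 * math.pi * n
    alpha = 1.0 / math.log(d)
    h = (d - c) / steps
    total = 0.0
    for i in range(steps):
        x = c + (i + 0.5) * h   # midpoint rule
        total += math.cos(x) / x ** alpha * h
    return abs(total)

bound = math.pi / (2 * math.sqrt(2) * math.e)
for n in (1, 10, 1000):
    assert integral(n) > bound
```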
I'm working with a bunch of short exact sequences, and I think they look best when the arrows are about 1cm long. It allows just enough room to throw labels on the arrows if needed, but not so much that it looks totally empty if I choose not to. When the nodes have simple labels, setting the node distances works just fine:

\begin{tikzpicture}[node distance = 1.25cm, auto]
  \node (01) {$0$};
  \node (A) [right of=01] {$A$};
  \node (B) [right of=A] {$B$};
  \node (C) [right of=B] {$C$};
  \node (02) [right of=C] {$0$};
  \draw[->] (01) to node {} (A);
  \draw[->] (A) to node {$\alpha$} (B);
  \draw[->] (B) to node {$\beta$} (C);
  \draw[->] (C) to node {} (02);
\end{tikzpicture}

But when the node labels become more complex, it starts to get cluttered and kind of awkward:

\begin{tikzpicture}[node distance = 1.25cm, auto]
  \node (01) {$0$};
  \node (AM) [right of=01] {$A \oplus M$};
  \node (BM) [right of=AM] {$B \oplus M$};
  \node (C) [right of=BM] {$C$};
  \node (02) [right of=C] {$0$};
  \draw[->] (01) to node {} (AM);
  \draw[->] (AM) to node {$\alpha'$} (BM);
  \draw[->] (BM) to node {$\beta'$} (C);
  \draw[->] (C) to node {} (02);
\end{tikzpicture}

(Sorry, I couldn't figure out how to get the TeX to compile and actually appear in this post.) So, is there a way to set a standard arrow length, or have the node distance measured by the space between nodes and not center-to-center?
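One way to get spacing measured between node borders rather than center-to-center is the `positioning` library, whose `right=1cm of A` syntax measures border to border (a sketch; `right of=` as used above is the older syntax that measures center to center):

```latex
% \usetikzlibrary{positioning} in the preamble
\begin{tikzpicture}[auto]
  \node (01) {$0$};
  % "right=1cm of ..." leaves a 1cm gap between node borders,
  % so wider labels like $A \oplus M$ no longer shrink the arrows.
  \node (AM) [right=1cm of 01] {$A \oplus M$};
  \node (BM) [right=1cm of AM] {$B \oplus M$};
  \node (C)  [right=1cm of BM] {$C$};
  \node (02) [right=1cm of C]  {$0$};
  \draw[->] (01) to (AM);
  \draw[->] (AM) to node {$\alpha'$} (BM);
  \draw[->] (BM) to node {$\beta'$} (C);
  \draw[->] (C)  to (02);
\end{tikzpicture}
```

With the `positioning` library loaded, a plain `node distance=1cm` plus `right=of A` gives the same border-to-border behavior without repeating the length on every node.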
'perturbative' and 'real' particles; the perturbative weight

'perturbative' and 'real' particles

(following text is taken - slightly modified - from: O. Buss, PhD thesis, pdf, Appendix B.1)

Reactions which are so violent that they disassemble the whole target nucleus can be treated only by explicitly propagating all particles, the ones in the target and the ones produced in the collision, on the same footing. For reactions which are not violent enough to disrupt the whole target nucleus, e.g. low-energy πA, γA or neutrino-A collisions at not too high energies, the target nucleus stays very close to its ground state. In this case, one keeps as an approximation the phase-space density of the target nucleons constant in time ('frozen approximation'). In GiBUU this is controlled by the switch freezeRealParticles. The test-particles which represent this constant target nucleus are called real test-particles. However, one also wants to consider the final state particles. Thus one defines another type of test-particles which are called perturbative. The perturbative test-particles are produced in a reaction on one of the target nucleons. They are then propagated and may collide with other real ones in the target. The products of such collisions are perturbative particles again. These perturbative particles can thus react with real target nucleons, but may not scatter among themselves. Furthermore, their feedback on the actual densities is neglected. One can simulate in this fashion the effects of the almost constant target on the outgoing particles without modifying the target. E.g. in πA collisions we initialize all initial state pions as perturbative test-particles. Thus the target automatically remains frozen and all products of the collisions of pions and target nucleons are assigned to the perturbative regime.
Furthermore, since the perturbative particles do not react among themselves or modify the real particles in a reaction, one can also split a perturbative particle into \(N_{test}\) pieces (several perturbative particles) during a run. Each piece is given a corresponding weight \(1/N_{test}\) and one simulates like this \(N_{test}\) possible final state scenarios of the same perturbative particle during one run.

The perturbative weight 'perWeight'

Usually, in the cases mentioned above, where one uses the separation into real and perturbative particles, one wants to calculate some final quantity like \(d\sigma^A_{tot}=\int_{nucleus}d^3r\int \frac{d^3p}{(2\pi)^3} d\sigma^N_{tot}\,\times\,\dots \). Here we are hiding all medium modifications, as e.g. Pauli blocking, flux corrections or medium modifications of the cross section, in the part "\(\,\times\,\dots \)". Now, solving this via the test-particle ansatz (with \(N_{test}\) being the number of test particles), this quantity is calculated as \(d\sigma^A_{tot}=\frac{1}{N_{test}}\sum_{j=1}^{N_{test}\cdot A}d\sigma^j_{tot}\,\times\,\dots \), with \(d\sigma^j_{tot}\) standing for the cross section of the \(j\)-th test-particle. The internal implementation of calculations like this in GiBUU is that a loop runs over all \(N_{test}\cdot A\) target nucleons and creates some event. Thus all these events have the same probability. But since they should be weighted according to \(d\sigma^j_{tot}\), this is corrected by giving all (final state) particles coming out of event \(j\) the weight \(d\sigma^j_{tot}\). This information is stored in the variable perWeight in the definition of the particle type. Thus, in order to get the correct final cross section, one has to sum the perWeight values, and not count the particles. As an example: if you want to calculate the inclusive pion production cross section, you have to loop over all particles and sum the perWeights of all pions. Simply taking the number of all pions would give false results.
The weights can also be negative. This happens, e.g., in the case of pion production on nucleons. In this case the cross section is determined by the square of a coherent sum of resonance and background amplitudes and as such is positive. In the code the resonance contribution is separated out as the square of the resonance amplitude and as such is positive as well. The remainder, i.e. the sum of the square of the background amplitude and the interference term of resonance and background amplitudes, can be negative, however. This latter contribution corresponds to the event types labeled 32 and 33 in the code, which describe the 1pi background plus interference.

How to compute cross sections from the perturbative weights for neutrino-induced reactions

The output file FinalEvents.dat contains all the events generated. For each event, the four-momenta of all final-state particles are listed together with the incoming neutrino energy, the perWeight and various other useful properties (see the documentation for FinalEvents.dat). In each event there is one nucleon with perWeight=0 which represents the hit nucleon; for 2p2h processes the second initial nucleon is not written out. The final-state nucleons may have masses which are spread out around the physical mass in a very narrow distribution. There are two reasons for that:
1. Nucleons may still be inside the potential well and thus have lower masses. These nucleons can be eliminated from the final events file by requiring that they are outside the nuclear potential (the spatial coordinates of all particles are also given in the FinalEvents.dat file).
2. For numerical-practical reasons the nucleons are given a Breit-Wigner mass distribution with a width of typically 1 MeV around the physical mass when calculating the QE cross section.
As an example, we consider here the calculation of the CC inclusive differential cross section dsigma/dE_mu for a neutrino-induced reaction on a nucleus; E_mu is the energy of the outgoing muon.
In FinalEvents.dat the lines with the particle number 902 contain all the muon kinematics as well as the perWeight. In order to produce a spectrum, one first has to bin the muon energies into energy bins. This binning must preserve the connection between energy and perWeight. Then all the perWeights in a given energy bin are summed and divided by the bin width to obtain the differential cross section. If, for better statistics, the GiBUU run used a number of runs > 1 at the same energies, then this number of runs has to be divided out to obtain the final differential cross section. All cross sections in GiBUU, both the precomputed ones and the reconstructed ones, are given per nucleon. The units are 10^-38 cm^2 for neutrinos and 10^-33 cm^2 for electrons.
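The binning procedure just described can be sketched as follows. The actual column layout of FinalEvents.dat has to be taken from the GiBUU documentation, so here we assume the (E_mu, perWeight) pairs of the muon lines (particle number 902) have already been parsed from the file.

```python
# Sketch of the perWeight binning described above; the (E_mu, perWeight)
# pairs are assumed to have been extracted from FinalEvents.dat beforehand.

def dsigma_dE(muon_events, bin_edges, num_runs=1):
    """Differential cross section dsigma/dE_mu from (E_mu, perWeight) pairs.

    The perWeights falling into each energy bin are summed, then divided by
    the bin width and by the number of runs used for statistics.
    """
    hist = [0.0] * (len(bin_edges) - 1)
    for energy, weight in muon_events:
        for k in range(len(hist)):
            if bin_edges[k] <= energy < bin_edges[k + 1]:
                hist[k] += weight
                break
    return [h / ((bin_edges[k + 1] - bin_edges[k]) * num_runs)
            for k, h in enumerate(hist)]
```

For instance, dsigma_dE([(0.5, 2.0), (0.7, 1.0), (1.5, 3.0)], [0.0, 1.0, 2.0]) sums the weights 2.0 and 1.0 into the first bin and returns [3.0, 3.0].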
Why don't they make a ball with irregularities, say the size of a tennis ball, then spin it very rapidly, so it would produce gravitational waves like a spinning star with irregularities on it? Is that not possible with our current technology? Also, since gravitational waves can cause time dilation, wouldn't we be able to make some sort of time machine with that concept?

Calculating the power emitted as gravitational waves is relatively straightforward, and you'll find it described in any advanced work on GR. I found a nice description in Gravitational Waves: Sources, Detectors and Searches. To summarise an awful lot of algebra, the power emitted as gravitational waves by a rotating object is approximately: $$ P = \frac{32}{5} \frac{G}{c^5} I_{zz}^2 \epsilon^2 \omega^6 $$ where $\omega$ is the angular frequency of the rotation and $I_{zz}^2 \epsilon^2$ is related to the geometry (the quadrupole moment) of the rotating object. The problem is that factor of $G/c^5$: $$ \frac{G}{c^5} \approx 3 \times 10^{-53} $$ This is such a tiny number that nothing we could conceivably construct in the lab could produce a detectable quantity of gravitational waves. It takes an object with a huge quadrupole moment, like a neutron star binary, to produce any measurable emission.

Gravitational waves are weak. The reason we try to find them coming from huge astronomical bodies is that we need a lot of mass to produce a measurable gravitational wave. Gravity is very weak compared to the other fundamental forces, which is why we need huge masses to measure its waves. You could not make a time machine with a gravitational wave, for the same reason that you cannot make a time machine with gravity or by moving at close to the speed of light.

I still believe that a black hole merger is the most efficient way. If we could create micro black hole binaries in the laboratory and let them coalesce, they would emit a significant amount of gravitational waves.
By tuning the trajectories properly, the amount of energy radiated via gravitational radiation can be up to 9.95% of their rest mass. Don't let them collide head on, which gives off only 0.2% of the energy. Although it is very hard to create micro black holes and modulate the radiation, it is still the most practical way in my opinion. Since general relativity is a nonlinear theory, a strong gravitational radiation source is more efficient than a weak one. Citation: On the mass radiated by coalescing black hole binaries
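To get a feeling for the numbers, here is the quadrupole formula quoted above evaluated for a deliberately optimistic lab rotor; the rotor parameters (a 100 kg, 1 m bar spun at 1 kHz with maximal asymmetry) are invented purely for illustration.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s

def gw_power(I_zz, eps, omega):
    """Quadrupole-formula power P = (32/5)(G/c^5) I_zz^2 eps^2 omega^6."""
    return (32.0 / 5.0) * (G / c**5) * (I_zz * eps)**2 * omega**6

# Hypothetical lab rotor: 100 kg uniform bar of length 1 m, spun at 1 kHz,
# with asymmetry eps = 1 (far beyond anything a real material survives).
I_bar = 100.0 * 1.0**2 / 12.0                 # kg m^2, I = m L^2 / 12
P_lab = gw_power(I_bar, 1.0, 2 * math.pi * 1000.0)
```

Even for this absurdly strong rotor, P_lab comes out below 10^-27 W, which illustrates why the $G/c^5$ suppression kills any laboratory source.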
I read that in the 16th and 17th century, the question of whether the Earth rotates around its axis or all celestial bodies rotate around it was extensively debated. One of the anti-rotation arguments was that objects dropped from high places should deviate from the true vertical, because the ground travels some distance during the fall. Since the Earth does rotate, I'm trying to quantify the effect and whether it could be measured at the time. Would appreciate my argument being checked for basic sanity and correctness. Suppose I climb a 100 meter tower and, standing near its edge, let go of a brick. The Earth rotates with the angular velocity $\omega = \frac{2\pi}{24\cdot 3600\ \mathrm{s}}$, and the speeds of the base of the tower and of the brick at the top, before I let go of it, are $R\omega$ and $(R+100)\omega$ respectively. This means that horizontally the brick is moving with the speed $100\omega \approx 0.00727\ \mathrm{m/s}$ relative to the ground, while vertically it's dropping with the uniform acceleration $g=9.8\ \mathrm{m/s^2}$ and will hit the ground in $4.52$ seconds, having travelled $0.03\ \mathrm{m}$ horizontally. If I do the same from the Empire State Building ($381\ \mathrm{m}$), it comes out as about $22\,\mathrm{cm}$ (if the height grows by a factor of $X$, the horizontal distance grows by $X^{3/2}$: the speed contributes a factor $X$ and the drop time $\sqrt{X}$). So my questions: Is this analysis basically sound? I realize I'm using the uniform vertical acceleration, which is an approximation, and I'm using the circular motion of the Earth's surface when estimating velocities, but not when using them to find the time and distance traveled. I guess the more exact calculation would be to treat the brick as a satellite on an elliptical orbit around the center of the Earth with the given initial position and velocity, find its orbit equation or simulate numerically, and find its intersection with the Earth's surface. It seems a bit daunting a task at the moment.
Would that give substantially different results? Is air resistance an important factor to take into consideration for estimating the horizontal distance traveled? (If it is, I guess this still answers the same question for the Moon.) Assuming my estimates aren't too far off, is that something that could be tested in a real experiment, either now or in the 17th century? I guess that would depend on how close to a true vertical we can guarantee the tower's wall to be.
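The estimate in the question is easy to evaluate numerically; the sketch below implements exactly the flat-ground approximation described above (24 h day, uniform $g$, equatorial tower).

```python
import math

omega = 2 * math.pi / (24 * 3600)   # Earth's angular velocity, rad/s
g = 9.8                             # m/s^2

def eastward_deflection(h):
    """Naive deflection: horizontal speed surplus h*omega times drop time."""
    v_rel = h * omega               # brick's horizontal speed vs. the ground
    t_fall = math.sqrt(2 * h / g)   # uniform-acceleration drop time
    return v_rel * t_fall

d_100 = eastward_deflection(100.0)  # ~0.033 m, the 3 cm figure above
d_esb = eastward_deflection(381.0)  # ~0.24 m for the Empire State Building
```

For comparison, the standard Coriolis treatment gives an eastward deflection of $\frac{1}{3} g \omega t^3 \cos\lambda$ at latitude $\lambda$, which is two-thirds of this naive estimate at the equator, so the order of magnitude of the answer survives the more careful analysis.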
Brake orbits on compact symmetric dynamically convex reversible hypersurfaces in $\mathbb{R}^{2n}$

School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China

In this paper, we consider the multiplicity of brake orbits on compact symmetric dynamically convex reversible hypersurfaces in $\mathbb{R}^{2n}$. We prove that there exist at least \([\frac{n+1}{2}]\) geometrically distinct closed characteristics on a dynamically convex hypersurface $\Sigma$ in $\mathbb{R}^{2n}$ satisfying the symmetry and reversibility conditions $\Sigma = -\Sigma$ and $N\Sigma = \Sigma$, where $N = \mathrm{diag}(-I_{n}, I_{n})$. For $n\geq2$, we prove that there are at least 2 symmetric brake orbits on $\Sigma$, which generalizes Kang's result in [

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: Zhongjie Liu, Duanzhi Zhang. Brake orbits on compact symmetric dynamically convex reversible hypersurfaces on $\mathbb{R}^{2n}$. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7): 4187-4206. doi: 10.3934/dcds.2019169

References:
[1] A. Borel, Seminar on Transformation Groups, Princeton Univ. Press, Princeton, 1960.
[3] H. Duan and H. Liu, Multiplicity and ellipticity of closed characteristics on compact star-shaped hypersurfaces in ${\Bbb R}^{2n}$.
[6] J. Gutt and J. Kang, On the minimal number of periodic orbits on some hypersurfaces in ${\Bbb R}^{2n}$.
[11] C. Liu and D. Zhang, Iteration theory of $L$-index and multiplicity of brake orbits.
[12] H. Liu, Y. Long and W. Wang, Resonance identities for closed characteristics on compact star-shaped hypersurfaces in ${\Bbb R}^{2n}$.
[18] C. Viterbo, Une
This question already has an answer here: Can we have math markdown on this site (like on math.stackexchange.com)? It will make answering some of the geometry questions easier.

I was editing "Detect mouse click on a bezier curve's neighborhood" and was surprised to find that GameDev lacks TeX support. Having TeX support would hugely improve the readability of some quality answers. Although the majority of questions and answers present equations in code, it's useful to be able to discuss equations in mathematical notation. Mathematical equations may have better documentation, are code agnostic and are probably how people studied the concepts in school. I would love to be able to see this work: $$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$

I do not see how it would really be a benefit; the primary math-related topics that appear are vectors and matrices, both easily enough displayed in either text or code. If a question needs integration as part of the answer, I think it's probably more a maths-based question. (Or it could be shown as pseudocode.)
Let $$$h_n(r) = \displaystyle \sum_{i=1}^{n} i^r$$$. $$$h_n(r), h_m(r)$$$ can be computed for all $$$1 \leq r \leq 2k$$$ for both $$$n, m$$$ in $$$O(k^2)$$$ using the recursion: Let $$$P_{ij}$$$ be the probability that $$$(i, j)$$$ has a $$$1$$$ in it after all the operations. By linearity of expectation, we have to find $$$\displaystyle \sum_{i, j} P_{i,j}$$$. Let $$$p_{i,j}$$$ be the probability that cell $$$(i, j)$$$ is flipped in one such operation. The total number of submatrices is $$$\dfrac{n(n+1)}{2} \dfrac{m(m+1)}{2}$$$. The number of submatrices containing $$$(i, j)$$$ is $$$i(n-i+1)j(m-j+1)$$$. So, Now, $$$P_{i, j} = $$$ probability that this cell is flipped in an odd number of operations: So, we have to find: Let $$$t = - \dfrac{8}{n(n+1)m(m+1)}$$$. Also, let $$$f_n(i) = i (n + 1 - i)$$$ and expand using the binomial theorem: We have: Now, Hence, $$$\displaystyle \sum_{i=1}^{n} f_n(i)^r$$$ can be computed in $$$O(r) = O(k)$$$. Thus the original formula can be computed in $$$O(k^2)$$$.

We convert the problem into: Given a value $$$r$$$, find the number of lines with distance $$$\leq r$$$ from point $$$q$$$. For this, consider a circle $$$C$$$ with radius $$$r$$$ centered at $$$q$$$. Consider two points $$$A$$$ and $$$B$$$, both outside the circle $$$C$$$. The line passing through $$$A$$$ and $$$B$$$ has distance $$$\leq r$$$ from $$$q$$$ iff it intersects the circle $$$C$$$. Let $$$F_A$$$ and $$$G_A$$$ be the points of contact of the tangents drawn from $$$A$$$ to the circle. Similarly define $$$F_B$$$ and $$$G_B$$$. We can prove that the line passing through points $$$A$$$ and $$$B$$$ intersects the circle $$$C$$$ if and only if the line segments $$$F_{A} G_{A}$$$ and $$$F_{B}G_{B}$$$ do NOT intersect. So, we need to draw $$$2n$$$ tangents, and then draw $$$n$$$ chords passing through the respective points of contact, and count the number of intersections of these chords. For this, sort the points by polar angles in $$$O(n \log{n})$$$.
Now we can count the number of intersections of line segments in $$$O(n \log{n})$$$, by iterating on the points. Therefore, this question can be answered in $$$O(n \log{n})$$$. Then we can just binary search to find the answer in complexity $$$O(n \log{n} \log{\frac{R}{\epsilon}})$$$.
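For completeness, the prerequisite power sums $$$h_n(r)$$$ from the first part can be computed in $$$O(k^2)$$$ via the classical identity $$$(n+1)^{r+1}-1=\sum_{j=0}^{r}\binom{r+1}{j}h_n(j)$$$. Whether this matches the exact recursion the editorial had in mind is an assumption, and the sketch works over plain integers rather than the prime modulus a real submission would use.

```python
from math import comb

def power_sums(n, max_r):
    """Return [h_n(0), ..., h_n(max_r)] where h_n(r) = sum_{i=1}^{n} i^r.

    Uses (n+1)^(r+1) - 1 = sum_{j=0}^{r} C(r+1, j) * h_n(j), solved for
    h_n(r); each step costs O(r) ring operations, so O(max_r^2) in total.
    """
    h = [n]                                   # h_n(0) = n
    for r in range(1, max_r + 1):
        total = (n + 1) ** (r + 1) - 1
        for j in range(r):
            total -= comb(r + 1, j) * h[j]
        h.append(total // (r + 1))            # division is exact
    return h
```

For example, power_sums(4, 3) returns [4, 10, 30, 100], matching $$$1^3+2^3+3^3+4^3=100$$$.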
Equations with a $p$-Laplacian and an asymmetric nonlinear term

1. Université Catholique de Louvain, Institut de Mathématique Pure et Appliquée, Chemin du Cyclotron, 2, B-1348 Louvain-la-Neuve
2. Centro de Modelamiento Matemático and Departamento de Ingenieria Matemática, F.C.F.M, Universidad de Chile, Casilla 170, Correo 3, Santiago, Chile

We study the equation $(\phi_p (x'))' + \alpha \phi_p (x^+ ) - \beta \phi_p (x^- ) = f(t,x),$ where $ x^{+}=\max\{x,0\}$ and $x^{-} =\max\{-x,0\},$ in a situation of resonance or near resonance for the period $T,$ i.e. when $\alpha,\beta$ satisfy exactly or approximately the equation $\frac{\pi_p }{\alpha^{1/p}} + \frac{\pi_p}{\beta^{1/p}} = \frac{T}{n},$ for some integer $n.$ We assume that $f$ is continuous, locally Lipschitzian in $x,$ $T$-periodic in $t,$ bounded on $\mathbf R^2,$ and having limits $f_{\pm}(t)$ for $x \to \pm \infty,$ the limits being uniform in $t.$ Denoting by $v$ a solution of the homogeneous equation $(\phi_p (x'))' + \alpha \phi_p (x^+ ) - \beta \phi_p (x^- ) = 0,$ we study the existence of $T$-periodic solutions by means of the function $ Z (\theta) = \int_{\{t\in I | v_{\theta }(t)>0\}} f_{+}(t)v(t + \theta) dt + \int_{\{t\in I | v_{\theta }(t)<0\}} f_{-}(t) v (t + \theta) dt,$ where $ I \stackrel{def}{=} [0,T].$ In particular, we prove the existence of $T$-periodic solutions at resonance when $Z$ has $2z$ zeros in the interval $[0,T/n),$ all zeros being simple, and $z$ being different from $1.$

Keywords: periodic solutions, asymmetric nonlinearities, Fučík spectrum, resonance, averaging method.

Mathematics Subject Classification: 33A34, 34C25, 34C2.

Citation: C. Fabry, Raul Manásevich. Equations with a $p$-Laplacian and an asymmetric nonlinear term. Discrete & Continuous Dynamical Systems - A, 2001, 7 (3): 545-557. doi: 10.3934/dcds.2001.7.545
Newform invariants

Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = \frac{1}{2}(1 + \sqrt{5})\). We also show the integral \(q\)-expansion of the trace form. For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label.

This newform does not admit any (nontrivial) inner twists.

Atkin-Lehner signs:

\( p \)   Sign
\( 5 \)   \(-1\)
\( 241 \)   \(-1\)

This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(6025))\): \(T_{2} - 1\) and \(T_{3}^{2} + T_{3} - 1\).
I think you are confusing complex power with instantaneous power. The instantaneous power is simply $p(t)=v(t)i(t)$, as you've mentioned. The complex power, however, is not the Fourier transform of $P$. For two harmonic signals with frequency $\omega$, the complex power is instead defined as:$$S:=\frac 12 VI^*$$where $V$ and $I$ are the phasors of the voltage and current, respectively. Written in rms values $V_\mathrm{rms}=V/\sqrt2$ and $I_\mathrm{rms}=I/\sqrt2$, this is simply:$$S = V_\mathrm{rms}I^{*}_{\mathrm{rms}}$$which is what you're referring to as $P=UI^*$. But if these quantities are not the same, why are they both called power? This is because the real part of the complex power is the time-averaged instantaneous power. To see why, let $v(t) = |V| \cos(\omega t + \phi_v)$ and $i(t) = |I| \cos(\omega t + \phi_i)$ be our two harmonic voltage and current signals. The time-averaged power is:$$P_\mathrm{avg} = \frac{1}{T}\int_{0}^T v(t)i(t)dt \\ = \frac{1}{T}\int_{0}^T |V| \cos(\omega t + \phi_v)\times |I| \cos(\omega t + \phi_i)dt \\= \frac 12 |V||I| \cos(\phi_v-\phi_i) \\ =\frac 12 \mathrm{Re}\left(|V| e^{i\phi_v}|I|e^{-i\phi_i}\right) \\= \frac 12 \mathrm{Re}(VI^*)$$which finally gives:$$P_\mathrm{avg} = \mathrm{Re}(S)$$Note that the imaginary part of $S$ is usually called the reactive power, which describes a constant back-and-forth transfer of power between the source and the load. In conclusion, the complex power $S$ is not the Fourier transform of the instantaneous power. As hyportnex and David point out in the comments, taking the Fourier transform of $v(t)i(t)$ gives $V(\omega)*I(\omega)$, where $*$ is understood as convolution. You may be curious about what happens if you use a Fourier transform instead of phasors in the above definition of $S$. Even though these two concepts are related, they are not the same. Even dimensionally, the phasor $V$ of a voltage has units of Volts, while its Fourier transform has units of Volts/Hz.
You can easily see that the quantity $\varepsilon(\omega) = V(\omega)I^*(\omega)$, where $V(\omega)$ and $I(\omega)$ are the Fourier transforms of $v$ and $i$ (as opposed to phasors), is nothing but the energy spectral density. This is an immediate consequence of Parseval's theorem, which says:$$\int_{-\infty}^{\infty}\varepsilon(\omega) d\omega = \int_{-\infty}^{\infty} V(\omega)I^*(\omega) d\omega \\= \int_{-\infty}^{\infty} v(t)i(t) dt = \int_{-\infty}^{\infty} p(t) dt= E$$which is the total energy.
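A short numerical check of the identity $P_\mathrm{avg} = \mathrm{Re}(S)$ derived above; the amplitudes and phases are arbitrary illustrative values.

```python
import numpy as np

# Harmonic voltage and current with arbitrary amplitudes and phases.
V_amp, phi_v = 10.0, 0.5
I_amp, phi_i = 2.0, -0.3
omega = 2 * np.pi * 50.0
T = 2 * np.pi / omega

n = 4000                                   # samples over exactly one period
t = np.arange(n) * T / n
v = V_amp * np.cos(omega * t + phi_v)
i = I_amp * np.cos(omega * t + phi_i)
P_avg = np.mean(v * i)                     # time average of p(t) = v(t) i(t)

# Complex power from the phasors V = |V| e^{i phi_v}, I = |I| e^{i phi_i}.
S = 0.5 * V_amp * np.exp(1j * phi_v) * np.conj(I_amp * np.exp(1j * phi_i))
# P_avg coincides with S.real = (1/2)|V||I| cos(phi_v - phi_i)
```

Averaging uniform samples over exactly one period makes the oscillating $\cos(2\omega t + \phi_v + \phi_i)$ term cancel exactly, so the agreement holds to machine precision.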
The time dilation of Alice in Bob's frame is $$ \rm \frac{d t}{d \tau} = 1 \div \sqrt{1-r_s/r} \ \div \sqrt{1-v^2/c^2}$$ while the time dilation of Bob in Alice's frame is $$ \rm \frac{d \tau}{d t} = \sqrt{1-r_s/r} \ \div \sqrt{1-v^2/c^2}$$ where t is the time elapsed on Bob's clock, τ the time on Alice's, and v the local velocity of Alice relative to a stationary probe. Bob is far away from the black hole in this scenario. The gravitational component of the time dilation is absolute, while the kinematic component is relative, see here. Here we are assuming Schwarzschild geometry; in the vicinity of rotating or charged black holes the split of the components is a little more complicated, but follows the same principle. So if Alice were free-falling from infinity (then v would be the negative escape velocity), both components would cancel in her frame, and Bob would age at the same rate as herself. In Bob's frame, Alice would then age at the squared rate of locally stationary observers at the same height above the black hole. If you are asking not only about the time dilation but also about what one sees with one's eyes, you also have to take the regular Doppler shift into account, which makes the received signals redder when the two move away from each other, and bluer when they move towards each other. Which effect is strongest depends not only on her exact position but also on her velocity and direction of motion relative to Bob. For example, if Alice falls from a rest position at r=5rs, and Bob stays fixed at this position, Alice reaches the horizon after a proper time of τ=33.7GM/c³. If she watches Bob, the time she sees on his clock at the moment she crosses the horizon would be t=27.328GM/c³, so Bob's signal would appear visually redshifted in her frame.
Since her free-fall velocity is smaller than the escape velocity (because she fell from rest at a finite height), Bob's clock would still be ticking faster in her frame, though in this scenario the effect of the regular Doppler shift would visually trump the time dilation.
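The τ = 33.7 GM/c³ figure can be cross-checked with the standard cycloid solution for radial free fall from rest at $r_0$ in Schwarzschild geometry, $r = \frac{r_0}{2}(1+\cos\eta)$ with $\tau = \sqrt{r_0^3/(8GM)}\,(\eta+\sin\eta)$:

```python
import math

# Geometric units G = c = M = 1, so r_s = 2 and times come out in GM/c^3.
r0 = 5 * 2.0            # Alice starts at rest at r = 5 r_s
r_hor = 2.0             # the horizon, r = r_s

# Cycloid parameter eta at the horizon: r = (r0/2)(1 + cos eta).
eta = math.acos(2 * r_hor / r0 - 1)
tau = math.sqrt(r0**3 / 8.0) * (eta + math.sin(eta))
# tau evaluates to about 33.7, matching the value quoted above
```

Evaluating at $r = 0$ instead gives the full proper time to the central singularity, $\pi\sqrt{r_0^3/(8GM)}$.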
Let's derive the proof. Consider that you are short an option and long its self-financing delta hedge. You get the portfolio $\Pi$ whose $t$-value verifies$$ \Pi_t = \underbrace{- V_t}_{\text{Short option}} + \underbrace{\Delta_t S_t}_{\text{Long stocks}} + \underbrace{\frac{(V_t - \Delta_t S_t)}{B_t} B_t}_{\text{Residual cash position}} $$At time $t$, this position is worth zero: the option is perfectly replicated by the hedge. As soon as time passes, though, you will need to rebalance your hedge to remain delta-neutral. Let's see what happens between two rebalancing dates when we keep the delta unchanged over, say, $[t,t+dt[$. Because the strategy is self-financing, assuming the risk-free money market account verifies the ODE $dB_t = B_t r dt$ and the stock pays no dividends, the replication error over $[t,t+dt[$ writes$$ d\Pi_t = - dV_t + \Delta_t dS_t + (V_t - \Delta_t S_t) r dt \tag{0} $$Let's expand the different terms at the lowest non-trivial order. If we consider a pure diffusion model (no jumps), this means order 1 in $dt$ and order 2 in $dS_t$. Assuming you're pricing the option $V_t$ under a BS framework with volatility $\sigma$, Itô's lemma gives\begin{align}dV_t &= \frac{\partial V}{\partial t} dt + \frac{\partial V}{\partial S} dS_t + \frac{1}{2} \frac{\partial^2 V}{\partial S^2} d\langle S \rangle_t \\&= \theta_t dt + \Delta_t dS_t + \frac{1}{2}\Gamma_t d\langle S \rangle_t \end{align}Plugging this into $(0)$ yields$$ d\Pi_t = \underbrace{(-\theta_t - rS_t \Delta_t + rV_t)dt}_{\text{(a)}} - \frac{1}{2}\Gamma_t d\langle S \rangle_t \tag{1} $$The first term $(a)$ should look familiar.
Indeed, if you're pricing under the BS framework, then $V_t = V(t,S_t)$ ought to verify the BS pricing PDE$$ \theta_t + r S_t \Delta_t + \frac{1}{2} \Gamma_t S_t^2 \sigma^2 - r V_t = 0$$Using this to rewrite $(1)$ yields the following replication error over the period $[t,t+dt[$\begin{align}d\Pi_t &= \frac{1}{2} \Gamma_t S_t^2 \sigma^2 dt - \frac{1}{2}\Gamma_t d \langle S \rangle_t \\ &= \frac{1}{2} \Gamma_t S_t^2 \left( \sigma^2 - \frac{d \langle S \rangle_t}{S_t^2\, dt} \right) dt \\ &= \frac{1}{2} \Gamma_t S_t^2 \left( \sigma^2 - \beta_t^2 \right) dt\end{align}where $\beta_t^2$ is the "realised" quadratic variation of the log returns, i.e. not the one postulated and priced in by your model but the one the market actually delivers. Now, if you want the total replication error up to the maturity $T$ as seen from inception, all that is left to do is to integrate and discount the infinitesimal P\&L leaks described above$$ P\&L_0 = \int_0^T e^{-r(T-t)} \frac{1}{2} \Gamma_t S_t^2 \left( \sigma^2 - \beta_t^2 \right) dt $$ This is a very interesting and well-known equation because it ties together 3 important concepts:$$ \int_0^T e^{-r(T-t)} \frac{1}{2} \underbrace{\Gamma_t}_{\text{Instrument related}} S_t^2 \left( \underbrace{\sigma^2}_{\text{Model related}} - \underbrace{\beta_t^2}_{\text{Market related}} \right) dt $$ A naive interpretation would go like this. Suppose I sold a vanilla option, pricing in a future volatility of $\sigma$ at inception. If the realised volatility $\beta$ is always higher than $\sigma$, then I expect to lose money, since this amounts to me having effectively underpriced the option (note that the $\Gamma$ of a vanilla option is always positive). The trick here is to observe that selling an option and delta-hedging it dynamically is not a pure volatility trade, though.
The gamma term in the P\&L equation above introduces a path dependence: only along paths where $\Gamma(t,S_t)$ is non-zero will the discrepancy between the priced and realised vol accumulate and P\&L crystallise. You can find more information in this related question. Of course, as mentioned in the comments, this P\&L equation assumes no transaction costs and continuous trading (when dynamically rebalancing the delta). Also, you are right that in practice this is used to monitor the daily evolution of a delta-hedged portfolio ex post; it is usually part of the explained-P\&L calculations, for instance.
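The sign of this P\&L can be illustrated with a small Monte Carlo experiment (not the author's code; all parameters are illustrative): sell a call priced and delta-hedged at $\sigma = 20\%$ while the path actually realises $\beta = 30\%$, rebalance at discrete dates with no transaction costs, and the short hedged book loses money on average.

```python
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sig):
    d1 = (math.log(S / K) + (r + 0.5 * sig * sig) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_delta(S, K, T, r, sig):
    d1 = (math.log(S / K) + (r + 0.5 * sig * sig) * T) / (sig * math.sqrt(T))
    return norm_cdf(d1)

def short_hedged_pnl(sig, beta, rng, S0=100.0, K=100.0, T=1.0, r=0.0, n=100):
    """P&L of selling one call at vol sig and delta-hedging at sig,
    while the stock actually diffuses with vol beta."""
    dt = T / n
    S = S0
    d = bs_delta(S, K, T, r, sig)
    cash = bs_call(S, K, T, r, sig) - d * S   # premium in, hedge bought
    for i in range(1, n + 1):
        S *= math.exp((r - 0.5 * beta * beta) * dt
                      + beta * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        cash *= math.exp(r * dt)
        if i < n:                             # rebalance to the new delta
            nd = bs_delta(S, K, T - i * dt, r, sig)
            cash -= (nd - d) * S
            d = nd
    return cash + d * S - max(S - K, 0.0)     # unwind hedge, pay payoff

rng = random.Random(0)
pnl = [short_hedged_pnl(0.2, 0.3, rng) for _ in range(400)]
mean_pnl = sum(pnl) / len(pnl)                # clearly negative: beta > sig
```

The average loss is close to the premium gap between pricing at $\beta$ and at $\sigma$, while individual paths scatter around it because of the gamma-weighted path dependence discussed above.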
We know that $H_A\otimes H_B\neq H_B\otimes H_A$ (in general). Theoretically, we know the formalism and what observables to construct from the two possible compositions, but we never talk about both possibilities. I wish to know how, experimentally, measurements or evolutions are carried out on such composite systems (let's just assume a bipartition as above). How does the experimentalist know whether he is working in the $A\otimes B$ or the $B\otimes A$ composite Hilbert space?

For many questions that appear on this site, and about quantum information and computation in general, it is possible to ask a completely classical version of the question, and often the (sometimes obvious) answer that one finds in the more familiar classical setting translates directly to the quantum setting. In this case, a reasonable classical version of the question asks what role the non-commutativity of the Cartesian product plays in experimental classical computing (or, let's say, in practical implementations of classical computation). Suppose we have a system $A$ that can be in any classical state drawn from a set $\mathcal{A}$, and a system $B$ that can be in any classical state drawn from the set $\mathcal{B}$. If we put system $A$ and system $B$ next to each other on the table, then we can represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{A}\times\mathcal{B}$. Note that there is an implicit assumption here, which is that the two systems are distinguishable, and we're deciding more or less arbitrarily that when we talk about a state $(a,b)\in\mathcal{A}\times\mathcal{B}$, the state $a$ of system $A$ is listed first and the state $b$ of system $B$ is listed second. We could just as easily have decided to represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{B}\times\mathcal{A}$, with the understanding that the state of system $B$ now gets listed first.
As an aside, if the two systems were indistinguishable, implying that $\mathcal{A} = \mathcal{B}$, and further we placed the two systems in a bag rather than on the table, then I guess there would really be no difference between $(a,b)$ and $(b,a)$. For this reason we would probably not use the Cartesian product to represent states of the bagged systems -- maybe we would use the set of all multisets of size 2 instead -- but let us forget about this situation and assume $A$ and $B$ are distinguishable for simplicity. Now, what role does this play in experiments or practical applications of classical computing? How does an experimenter or programmer know he or she is working in the $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ state space? When you think about the question this way, I believe it may come into focus. My answer, which is consistent with the other answers that concern the quantum setting, is that it really doesn't play any role at all, and the experimenter/programmer knows because it was his or her decision which order to use. We know the difference between the systems $A$ and $B$, and the decision to represent states of the two systems together by elements of $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ is totally arbitrary -- but once the decision is made we stick with it to avoid confusion. The decision will not affect any calculations we do, so long as the calculations are consistent with the decision of which order to use. To my eye, at a fundamental level there is no difference between the classical version of this question and the quantum version. We decide whether to represent states of the compound quantum system using the space $H_A\otimes H_B$ or $H_B\otimes H_A$, and that's all there is to it. You'll get exactly the same results of any calculations you perform, so long as your calculations are consistent with the choice to use $H_A\otimes H_B$ or $H_B\otimes H_A$. 
When you say $\neq$ I presume you are talking about the implied basis in the usual ordering (00, 01, 02, 10, etc.); otherwise you would have an isomorphism of Hilbert spaces rather than an equality statement. That is, $AB$ implies a certain ordered basis and $BA$ a different one. The experiment has its observables on the combined system in a basis-independent way. If the experimentalist wants to write their results down, they can choose whatever basis they like. The distinction enters in the question being asked. "What is the second entry of the vector $v$ in the Hilbert space that combines $A$ and $B$?" is not a well-defined question; "What is the second entry with respect to a given ordered basis?" is. The experimentalist has to ask the second in order to get an answer. You have to ask a sensible question if you want a sensible answer.

The order in the tensor product is a convention and has nothing to do with experiments. As an example, if I have a cavity (with photons in it, $H_A$) and an atom (with internal states, $H_B$), it is clear which is the atom and which is the cavity, regardless of the order one chooses for their Hilbert spaces in the tensor product when describing the setup theoretically. The two spaces $A$ and $B$ are just labels, with arbitrary ordering. For distinguishable qubits (or more generally), the experimentalist can just say "this one's $A$, and this other one's $B$". If you swap the labels, you need to swap them everywhere, in both the Hamiltonian and the state (including eigenvectors, density matrix, etc.). In other words, if I define a swap operator $S$ such that $$ S(H_A\otimes H_B)S=H_B\otimes H_A, $$ then the evolution of states can be calculated either using $$ e^{-i H_A\otimes H_B t}|\psi_{AB}\rangle \quad\text{or}\quad e^{-i H_B\otimes H_A t}|\psi_{BA}\rangle $$ where $|\psi_{BA}\rangle=S|\psi_{AB}\rangle$. Or, if you're working with a density matrix, you have $\rho_{BA}=S\rho_{AB}S$.
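The relabelling argument can be made concrete with a small numerical sketch: two qubits, with a random Hermitian matrix standing in for the Hamiltonian of the joint system. Evolving $|\psi_{AB}\rangle$ under $H$, or the relabelled $|\psi_{BA}\rangle = S|\psi_{AB}\rangle$ under $SHS$, gives the same physical state.

```python
import numpy as np

rng = np.random.default_rng(1)
dA = dB = 2
dim = dA * dB

M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = M + M.conj().T                       # random Hermitian "Hamiltonian"

S = np.zeros((dim, dim))                 # swap operator: |a,b> -> |b,a>
for a in range(dA):
    for b in range(dB):
        S[b * dA + a, a * dB + b] = 1.0

def evolve(H, t):
    """U(t) = exp(-i H t) via the spectral decomposition of Hermitian H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

lhs = evolve(H, 1.3) @ psi                       # work in the A (x) B ordering
rhs = S @ (evolve(S @ H @ S, 1.3) @ (S @ psi))  # work in B (x) A, relabel back
# lhs and rhs coincide to numerical precision
```

Since $S = S^\dagger = S^{-1}$, one has $e^{-iSHSt} = S\,e^{-iHt}\,S$, which is exactly why the choice of ordering can never change a prediction.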
Consider three concentric metallic spheres \(A\), \(B\) and \(C\) of radii \(a\), \(b\), \(c\) respectively, where \(a < b < c\). \(A\) and \(B\) are connected by a wire, while \(C\) is grounded. The potential of the middle sphere \(B\) is raised to \(V\). What is the charge on sphere \(C\)? Solution: Since \(A\) and \(B\) are connected, the charge \(q\) resides on the outer surface of \(B\); let \(Q\) be the charge on \(C\). The potential of \(B\) is $$ V=\frac{Kq}{b}+\frac{KQ}{c}.$$ Since \(C\) is grounded, its potential vanishes: $$ \frac{K(q+Q)}{c}=0 \Rightarrow q+Q=0 \Rightarrow q=-Q. $$ Substituting into the first equation: $$ \frac{K(-Q)}{b}+\frac{KQ}{c}=V \Rightarrow KQ\left(\frac{1}{c}-\frac{1}{b}\right)=V \Rightarrow Q=4\pi\epsilon_0\frac{bcV}{b-c},$$ which is negative, since \(b < c\).
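The algebra can be sanity-checked numerically. A small sketch with illustrative radii and potential (the specific values are mine): plug the closed-form \(Q\) back into both boundary conditions.

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
k = 1 / (4 * math.pi * eps0)

# Illustrative values: radii in metres, potential in volts
a, b, c, V = 0.10, 0.20, 0.30, 100.0

# Closed-form result: Q = 4*pi*eps0*b*c*V/(b-c)  (negative, since b < c)
Q = 4 * math.pi * eps0 * b * c * V / (b - c)
q = -Q  # charge on the connected A+B conductor

# Check both boundary conditions
V_B = k * q / b + k * Q / c   # potential of sphere B: should equal V
V_C = k * (q + Q) / c         # potential of grounded sphere C: should be 0

assert abs(V_B - V) < 1e-9
assert abs(V_C) < 1e-12
```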
Let $E \to X$ be a vector bundle. We can associate to $E$ several invariants: among them are the Stiefel-Whitney classes $w_i(E) \in H^i(X;\mathbb{Z}_2)$. These classes may be defined using the axioms: 0. $w_0(E)=1$ and $w_i(E) \in H^i(X;\mathbb{Z}_2)$. 1. $w(f^*E)=f^*w(E)$ for continuous maps $f$ (here $w=1+w_1+w_2+\cdots$ is the total class) 2. $w(E \oplus F)=w(E) \cup w(F)$ 3. $w_1(\gamma_1) \neq 0$ for the tautological bundle $\gamma_1$ over $\mathbb{R}P^{\infty}=BO(1)$ In particular, we can rephrase these axioms to recognize the first Stiefel Whitney class: 1'. $w_1(f^*E)=f^*w_1(E)$ for continuous maps $f$ 2'. $w_1(E \oplus F)=w_1(E) + w_1(F)$ 3'. $w_1(\gamma_1) \neq 0$ for a tautological bundle $\gamma_1$ over $\mathbb{R}\mathbb{P}^{\infty}=BO(1)$. However, in Lawson-Michelsohn's book "Spin Geometry" it is stated that in order for a cohomology class $v_1$ to equal $w_1$ we only need to check: 1''. $v_1(f^*E)=f^*v_1(E)$ for continuous maps $f$ 2''. $v_1(\gamma_n) \neq 0$ for the tautological bundle $\gamma_n$ over $BO(n)$ for every natural number $n$. How do we prove that these two sets of axioms are equivalent (and therefore characterize $w_1$)? Concerning higher Stiefel-Whitney classes, does a set of axioms like 1''-2'' define $w_k$ for each $k$?
I recently came across this in a textbook (NCERT Class 12, chapter: Wave Optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to $s$ as the "size of source" and then gives a relation involving it, but the accepted answer conveniently assumes $s$ to be the fringe width and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer), only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$. External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers on some of them. So I want to ask you to recommend a good app for studying the history of scientists. I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption that continuity is fulfilled. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate. 
If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt optimistic nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
Can regularization be helpful if we are interested only in estimating (and interpreting) the model parameters, not in forecasting or prediction? I see how regularization/cross-validation is extremely useful if your goal is to make good forecasts on new data. But what if you're doing traditional economics and all you care about is estimating $\beta$? Can cross-validation also be useful in that context? The conceptual difficulty I struggle with is that we can actually compute $\mathcal{L}\left(Y, \hat{Y}\right)$ on test data, but we can never compute $\mathcal{L}\left(\beta, \hat{\beta}\right)$ because the true $\beta$ is by definition never observed. (Take as given the assumption that there even is a true $\beta$, i.e. that we know the family of models from which the data were generated.) Suppose your loss is $\mathcal{L}\left(\beta, \hat{\beta}\right) = \lVert \beta - \hat{\beta} \rVert$. You face a bias-variance tradeoff, right? So, in theory, you might be better off doing some regularization. But how can you possibly select your regularization parameter? I'd be happy to see a simple numerical example of a linear regression model, with coefficients $\beta \equiv (\beta_1, \beta_2, \ldots, \beta_k)$, where the researcher's loss function is e.g. $\lVert \beta - \hat{\beta} \rVert$, or even just $(\beta_1 - \hat{\beta}_1)^2$. How, in practice, could one use cross-validation to improve expected loss in those examples? Edit: DJohnson pointed me to https://www.cs.cornell.edu/home/kleinber/aer15-prediction.pdf, which is relevant to this question. The authors write that Machine learning techniques ... provide a disciplined way to predict $\hat{Y}$ which (i) uses the data itself to decide how to make the bias-variance trade-off and (ii) allows for search over a very rich set of variables and functional forms. 
But everything comes at a cost: one must always keep in mind that because they are tuned for $\hat{Y}$ they do not (without many other assumptions) give very useful guarantees for $\hat{\beta}$. Another relevant paper, again thanks to DJohnson: http://arxiv.org/pdf/1504.01132v3.pdf. This paper addresses the question I was struggling with above: A ... fundamental challenge to applying machine learning methods such as regression trees off-the-shelf to the problem of causal inference is that regularization approaches based on cross-validation typically rely on observing the “ground truth,” that is, actual outcomes in a cross-validation sample. However, if our goal is to minimize the mean squared error of treatment effects, we encounter what [11] calls the “fundamental problem of causal inference”: the causal effect is not observed for any individual unit, and so we don’t directly have a ground truth. We address this by proposing approaches for constructing unbiased estimates of the mean-squared error of the causal effect of the treatment.
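The bias-variance point can be made concrete with a small simulation. This is only a sketch with made-up dimensions and a *fixed* ridge penalty, not a cross-validated one (choosing the penalty is exactly the part the question says is hard): when the noise is large relative to $\lVert\beta\rVert$, ridge regression attains a smaller $\lVert\beta-\hat{\beta}\rVert^2$ than OLS, even though no estimator ever observes the true $\beta$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (made-up) setup: n observations, k coefficients, heavy noise
n, k, sigma, lam, reps = 50, 20, 5.0, 10.0, 200
beta = 0.5 * rng.standard_normal(k)  # the "true" beta, known only inside the simulation

err_ols, err_ridge = [], []
for _ in range(reps):
    X = rng.standard_normal((n, k))
    y = X @ beta + sigma * rng.standard_normal(n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
    err_ols.append(np.sum((b_ols - beta) ** 2))
    err_ridge.append(np.sum((b_ridge - beta) ** 2))

# With this much noise, the shrinkage bias is more than repaid in variance,
# so the ridge estimate tends to be closer to the true beta
print("OLS:  ", np.mean(err_ols))
print("ridge:", np.mean(err_ridge))
```

Cross-validating `lam` on prediction error would optimize a different target than $\lVert\beta-\hat{\beta}\rVert$; this sketch only illustrates that *some* shrinkage can help the estimation loss.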
J/Ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Area of a Square Formed by the Sides of Triangles Inscribed into a Square

The area of the large square is the sum of the area of the smaller square plus four times the area of the triangles. Triangles ABC and ADC are right angled and similar with scale factor one half, since \[\angle BAC= \alpha \rightarrow \angle ABC=90- \alpha \rightarrow \angle CBD= 90-(90- \alpha)= \alpha \rightarrow \angle BCD=90- \alpha .\] If \[AB=x\] then \[BC=\frac{x}{2},\] and the area of triangle ABD is \[\frac{1}{2} \times x \times \frac{x}{2}=\frac{x^2}{4}.\] The areas of triangles ABC and BDC are in the ratio \[2^2= 4,\] so the area of ABC is \[\frac{4}{1+4} \times \frac{x^2}{4}=\frac{x^2}{5}.\] Then the area of the green square is \[x^2 - 4 \times \frac{x^2}{5}=\frac{x^2}{5}.\] The area of the green square is one fifth the area of the big square.
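The one-fifth result can be verified with coordinates, assuming the standard construction (each corner of a unit square joined to the midpoint of a non-adjacent side); exact rational arithmetic avoids any rounding doubt.

```python
from fractions import Fraction as F

def line(p, q):
    # Line through p and q as coefficients (a, b, c) with a*x + b*y = c
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def meet(l1, l2):
    # Intersection of two lines by Cramer's rule (exact, via Fractions)
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d

def shoelace(pts):
    # Unsigned polygon area via the shoelace formula
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

h = F(1, 2)
L1 = line((0, 0), (1, h))   # corner (0,0) to midpoint of the right side
L2 = line((1, 0), (h, 1))   # corner (1,0) to midpoint of the top side
L3 = line((1, 1), (0, h))   # corner (1,1) to midpoint of the left side
L4 = line((0, 1), (h, 0))   # corner (0,1) to midpoint of the bottom side

inner = [meet(L1, L4), meet(L1, L2), meet(L2, L3), meet(L3, L4)]
assert shoelace(inner) == F(1, 5)  # exactly one fifth of the unit square
```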
The Bayesian approach to learning starts by choosing a prior probability distribution over the unknown parameters of the world. Then, as the learner makes observations, the prior is updated using Bayes' rule to form the posterior, which represents the new Continue Reading

In an earlier post we analyzed an algorithm called Exp3 for $k$-armed adversarial bandits for which the expected regret is bounded by \begin{align*} R_n = \max_{a \in [k]} \mathbb{E}\left[\sum_{t=1}^n y_{tA_t} - y_{ta}\right] \leq \sqrt{2n k \log(k)}\,. \end{align*} The setting of Continue Reading

To revive the content on this blog a little we have decided to highlight some of the new topics covered in the book that we are excited about and that were not previously covered in the blog. In this post Continue Reading

Dear readers After nearly two years since starting to write the blog we have at last completed a first draft of the book, which is to be published by Cambridge University Press. The book is available for free as a Continue Reading

This website has been quiet for some time, but we have not given up on bandits just yet. First up, we recently gave a short tutorial at AAAI that covered the basics of finite-armed stochastic bandits and stochastic linear bandits. Continue Reading

According to the main result of the previous post, given any finite action set $\mathcal{A}$ with $K$ actions $a_1,\dots,a_K\in \mathbb{R}^d$, no matter how an adversary selects the loss vectors $y_1,\dots,y_n\in \mathbb{R}^d$, as long as the action losses $\langle a_k,y_t\rangle$ are in Continue Reading

In the next few posts we will consider adversarial linear bandits, which, up to a crude first approximation, can be thought of as the adversarial version of stochastic linear bandits. The discussion of the exact nature of the relationship between Continue Reading

In the last two posts we considered stochastic linear bandits, when the actions are vectors in the $d$-dimensional Euclidean space.
According to our previous calculations, under the condition that the expected rewards of all the actions are in a fixed Continue Reading

Continuing the previous post, here we give a construction for confidence bounds based on ellipsoidal confidence sets. We also put things together and show a bound on the regret of the UCB strategy that uses the constructed confidence bounds. Constructing the Continue Reading

Lower bounds for linear bandits turn out to be more nuanced than the finite-armed case. The big difference is that for linear bandits the shape of the action-set plays a role in the form of the regret, not just the Continue Reading
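The Exp3 algorithm quoted in the excerpts above can be sketched in a few lines. This is only a sketch (losses in $[0,1]$, learning rate matching the $\sqrt{2nk\log(k)}$ bound; the toy adversary is made up for illustration):

```python
import math
import random

def exp3(n, k, loss, seed=0):
    """Minimal Exp3 sketch for k-armed adversarial bandits with losses in [0, 1]."""
    rng = random.Random(seed)
    eta = math.sqrt(2 * math.log(k) / (n * k))  # learning rate for the quoted bound
    L = [0.0] * k  # importance-weighted cumulative loss estimates
    total = 0.0
    for t in range(n):
        m = min(L)
        w = [math.exp(-eta * (l - m)) for l in L]  # shift by min(L) for stability
        Z = sum(w)
        p = [wi / Z for wi in w]
        a = rng.choices(range(k), weights=p)[0]   # sample an arm
        l = loss(t, a)
        total += l
        L[a] += l / p[a]  # unbiased importance-weighted loss estimate
    return total

# Toy adversary: arm 0 always has loss 0, all other arms loss 1
n, k = 2000, 4
total_loss = exp3(n, k, lambda t, a: 0.0 if a == 0 else 1.0)
```

Since the best arm here accumulates zero loss, `total_loss` is the realized regret, which should stay on the order of $\sqrt{2nk\log(k)}$.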
Question: Part of riding a bicycle involves leaning at the correct angle when making a turn, as seen in Figure 6.36. To be stable, the force exerted by the ground must be on a line going through the center of gravity. The force on the bicycle wheel can be resolved into two perpendicular components: friction parallel to the road (this must supply the centripetal force) and the vertical normal force (which must equal the system's weight). (a) Show that $\theta$ (as defined in the figure) is related to the speed $v$ and radius of curvature $r$ of the turn in the same way as for an ideally banked roadway, that is, $\theta = \tan^{-1}\left(\dfrac{v^2}{rg}\right)$. (b) Calculate $\theta$ for a 12.0 m/s turn of radius 30.0 m (as in a race). Final Answer: see video for derivation; $26.1^\circ$
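The part (b) arithmetic can be checked directly, taking $g = 9.80\ \mathrm{m/s^2}$ (the usual textbook value; the answer is insensitive to using 9.81):

```python
import math

def lean_angle_deg(v, r, g=9.80):
    """Part (a) result: theta = arctan(v^2 / (r * g)), returned in degrees."""
    return math.degrees(math.atan(v ** 2 / (r * g)))

theta = lean_angle_deg(12.0, 30.0)  # part (b): 12.0 m/s, radius 30.0 m
```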
Say we've got a logistic regression model $M$ used as a classifier in a binary case. Now we take a test set $\tau=\{(x_1,y_1),...,(x_n,y_n)\}$; each test sample is assigned $\hat{\pi}_i=P(y_i=1|x_i)$ and then, assuming a naive threshold of 0.5, a prediction is made: $\hat{y}_i=\left\{\begin{matrix}1 & \hat{\pi}_i\geq 0.5\\ 0 & \hat{\pi}_i< 0.5\end{matrix}\right.$. We can then discuss $M$'s accuracy rate $\frac{1}{n}\sum_{i}{I\{\hat{y}_i=y_i\}}$ or maybe its certainty rate $\frac{1}{n}\sum_{i}{\max(\hat{\pi}_i,1-\hat{\pi}_i)}$. Wherever I've looked, a logistic regression classifier is assessed either by information criteria or by cross-validation, both eventually relating to the degree of fit to the training dataset. It might look like a total stranger's question, but are there any known methods using accuracy or certainty? It might just be me missing this info somehow.
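The two rates defined in the question are straightforward to compute on a held-out set. A minimal sketch with made-up probabilities and labels:

```python
def accuracy_and_certainty(pi_hat, y, threshold=0.5):
    """Accuracy and 'certainty' rates as defined in the question.
    pi_hat[i] is the model's estimate of P(y_i = 1 | x_i)."""
    y_hat = [1 if p >= threshold else 0 for p in pi_hat]
    n = len(y)
    accuracy = sum(yh == yi for yh, yi in zip(y_hat, y)) / n
    certainty = sum(max(p, 1 - p) for p in pi_hat) / n
    return accuracy, certainty

# Illustrative predicted probabilities and true labels
acc, cert = accuracy_and_certainty([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 1])
```

Note that certainty says nothing about calibration: a badly miscalibrated model can be very "certain" and still inaccurate.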
Wave energy converters in coastal structures

Version of 3 Sep 2012, 12:11

Introduction

Fig 1: Construction of a coastal structure.

Coastal works along European coasts comprise very diverse structures. Many coastal structures are ageing and facing problems of stability, durability and erosion. Moreover, climate change, and especially sea level rise, represents a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights greater than those they were designed to withstand, in particular all the structures built in shallow water, where the depth limits the maximal wave amplitude because of wave breaking. This necessary adaptation will be costly, but it will provide an opportunity to integrate converters of sustainable energy into the new maritime structures along the coasts, and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. The produced energy can be used directly for energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding docked ships with green energy. Nowadays these ships use their motors to produce electric power on board even when they are docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions.
Wave energy and wave energy flux

For regular water waves, the time-mean wave energy density E per unit horizontal area of the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. According to linear wave theory, the potential energy density is equal to the kinetic energy density [1], each contributing half of the time-mean wave energy density E, which is proportional to the wave height squared [1]:

(1) [math]E= \frac{1}{8} \rho g H^2[/math]

where g is the gravitational acceleration and [math]H[/math] the height of the regular waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m), perpendicular to the wave propagation direction, is equal to [1]:

(2) [math] P= Ec_{g}[/math]

with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water, and at intermediate depths [math](\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2})[/math].

Application for wave energy converters

For regular waves in deep water: [math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math]

The time-mean wave energy flux per unit crest length is used as one of the main criteria for choosing a site for wave energy converters. For real seas, whose waves are random in height, period (and direction), spectral parameters have to be used. [math]H_{m0} [/math], the spectral estimate of the significant wave height, is based on the zero-order moment of the spectral function: [math]H_{m0} = 4 \sqrt{m_0} [/math]. Moreover, the energy wave period is derived as follows [2].
[math]T_e = \frac{m_{-1}}{m_0} [/math]

where [math]m_n[/math] represents the spectral moment of order n. An equation similar to that describing the power of regular waves is then obtained [2]:

[math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math]

If local data ([math]H_{m0}, T_e [/math]) are available for a sea state, through in-situ wave buoys, satellite data or numerical modelling for example, the last equation provides a first estimate of the wave energy flux [math]P_{w1}[/math]. Averaged over a season or a year, it represents the maximal energetic resource that can theoretically be extracted from wave energy. If the directional spectrum of the sea-state variance F(f, [math]\theta[/math]) is known, with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used:

[math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta) \, df \, d\theta[/math]

Fig 2: Time-mean wave energy flux along West European coasts [3].

It can easily be shown that equations (5 and 6) reduce to (4) under the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth. From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on numerical modelling of the wave climate over 25 years, has been produced [4]. Using equation (4), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). Equation (4) still presents some limits, like the definition of the bounds of the integration.
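Both flux formulas can be evaluated directly. A small sketch (the sea-state values are illustrative; ρ = 1025 kg/m³ is assumed for seawater); note that for the same numerical height and period, the spectral formula gives exactly half the regular-wave value:

```python
import math

rho, g = 1025.0, 9.81  # seawater density (kg/m^3) and gravity (m/s^2)

def power_regular_deep(H, T):
    """Time-mean flux per metre of crest, regular deep-water waves:
    P = rho * g^2 * H^2 * T / (32 * pi)  [W/m]"""
    return rho * g ** 2 * H ** 2 * T / (32 * math.pi)

def power_irregular(Hm0, Te):
    """Spectral version: P = rho * g^2 * Hm0^2 * Te / (64 * pi)  [W/m]"""
    return rho * g ** 2 * Hm0 ** 2 * Te / (64 * math.pi)

# Illustrative sea state: Hm0 = 2 m, Te = 8 s (roughly 16 kW/m)
P = power_irregular(2.0, 8.0)
```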
Moreover, obtaining data on the wave energy near coastal structures, in shallow or intermediate water, requires the use of numerical models that are able to represent the physical processes of wave propagation, like refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides, and diffraction by islands. The wave energy flux is therefore usually calculated for water depths greater than 20 m. This maximal energetic resource calculated in deep water will be limited in the coastal zone: at low tide by wave breaking; at high tide, in storm events, when the wave height exceeds the maximal operating conditions; by the screening effect due to the presence of capes, spits, reefs, islands, etc.

Technologies

According to the International Energy Agency (IEA), more than a hundred systems of wave energy conversion are under development in the world. Among them, many can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to rank these systems and to determine the most promising solutions. Criteria are in particular:

the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh that is produced by the machine, but not the cost.

the converter survivability: the capacity of the converter to survive extreme conditions. The survivability gives an estimate of the cost, considering that the weaker the extreme loads are in comparison with the mean load, the smaller is the cost. Unfortunately, few data are available in the literature.

In order to determine the characteristics of the different wave energy technologies, it is necessary first to classify them into four main families [3]. An interesting result is that the maximum average wave power [math]P_{abs} [/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [5]. It is theoretically possible to absorb a lot of energy with only a small buoy.
It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave, the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [5]:

[math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math]

Fig 4: Upper limit of mean wave power absorption for a heaving point absorber.

where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy, however, responds efficiently only to a relatively narrow band of wave periods. Babarit and Hals [6] derive this upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in placing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20-30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC, with a capture width between 30-50 m. In order to complete these theoretical results and to describe the efficiency of a WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per metre of wave front times a relevant dimension B (m):

[math]\eta = \frac{P_{abs}}{P_{w}B} [/math]

The choice of the dimension B depends on the working principle of the WEC. Most of the time, it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimates of this ratio [math]\eta[/math] are given in [6]: 33% for OWC devices, 13% for overtopping devices, 9-29% for heaving buoys, 20-41% for pitching devices. For energy converted to electricity, one must moreover take into account the energy losses in the other components of the system.
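The monochromatic upper bound above is easy to evaluate. A sketch assuming deep-water dispersion (λ = gT²/2π); the example resource and period are illustrative, not taken from the text:

```python
import math

def deep_water_wavelength(T, g=9.81):
    # Linear deep-water dispersion relation: lambda = g * T^2 / (2 * pi)
    return g * T ** 2 / (2 * math.pi)

def max_absorbed_power(P_w, T):
    """Upper bound for an axisymmetric heaving point absorber:
    capture width L_max = lambda / (2 pi), hence P_abs <= P_w * lambda / (2 pi)."""
    return P_w * deep_water_wavelength(T) / (2 * math.pi)

def capture_width_ratio(P_abs, P_w, B):
    # eta = P_abs / (P_w * B), with B a relevant device dimension (m)
    return P_abs / (P_w * B)

# Illustrative case: 25 kW/m resource, T = 8 s (bound of roughly 0.4 MW)
P_max = max_absorbed_power(25e3, 8.0)
```

By construction, a device of width equal to the capture width that achieves the bound would have η = 1; real devices, as quoted above, reach a fraction of this.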
Civil engineering

Never forget that energy conversion is only a secondary function of the coastal structure: its primary function is still protection. It is necessary to verify whether the integration of a WEC modifies the performance criteria for overtopping and stability, and to assess the consequences for the construction cost. Integration of a WEC in a coastal structure will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge of the existing coastal structure. Solutions differ according to the sea state but also to the type of structure (rubble mound breakwaters, caisson breakwaters with typically vertical sides). Some types of WEC are more appropriate for some types of coastal structures.

Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro).

Environmental impact

Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, sand deposits can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines.

Fig 6: Finistere area and locations of the six sites (google map).

Study case: Finistere area

The Finistere area is an interesting study case because it is located in the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area, with its very ragged coast, moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from electricity power plants. There is therefore a need for renewable energies produced locally. This issue is particularly important for islands. The production of electricity from wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer.
Consumption peaks in winter due to the heating of buildings, but summer consumption is also high due to the influx of tourists. Six sites are selected (see Fig. 7) for a preliminary study of wave energy flux and of the capacity to integrate wave energy converters. The wave energy flux is expected to be in the range of 1-10 kW/m. The length of each breakwater is at least 200 meters. The wave power along each structure is therefore estimated at between 200 kW and 2 MW. Note that there exist much longer coastal structures, for example at Cherbourg (France), with a length of 6 kilometres. (1) Roscoff (300 meters) (2) Molène (200 meters) (3) Le Conquet (200 meters) (4) Esquibien (300 meters) (5) Saint-Guénolé (200 meters) (6) Lesconil (200 meters) Fig.7: Finistere area, the six coastal structures and their length (google map). The wave power flux along a structure depends on local parameters: the water depth at the structure toe, the presence of headlands, the wave direction, and the orientation of the coastal structure. See Fig. 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires lighthouse. These measurements show that structures well exposed to westerly waves should be chosen first. Consumption peaks often coincide with low winter temperatures accompanied by winds from east-north-east directions. Structures well exposed to easterly waves could therefore also be of interest, even though their mean production is low. Fig 8: Wave measurements at the Pierres Noires Lighthouse.
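The 200 kW to 2 MW estimate above is simply the flux per metre of wave front multiplied by the structure length. A minimal sketch (the function name is an illustration, not from the source, and it assumes a uniform flux incident along the whole breakwater):

```python
def wave_power_along_structure(flux_kw_per_m: float, length_m: float) -> float:
    """Total wave power (kW) incident on a breakwater, assuming a
    uniform wave energy flux per metre of wave front along its length."""
    return flux_kw_per_m * length_m

# Flux of 1-10 kW/m over a 200 m breakwater gives 200 kW to 2 MW:
print(wave_power_along_structure(1.0, 200.0))    # 200.0 kW
print(wave_power_along_structure(10.0, 200.0))   # 2000.0 kW = 2 MW
```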
Besides the energy production, the advantages of such systems are: a "zero emission" port; industrial tourism; testing of WECs for future offshore installations. Acknowledgement This work is in progress within the framework of the national project EMACOP, funded by the French Ministry of Ecology, Sustainable Development and Energy. See also Waves Wave transformation Groynes Seawall Seawalls and revetments Coastal defense techniques Wave energy converters Shore protection, coast protection and sea defence methods Overtopping resistant dikes References
Mei C.C. (1989). The applied dynamics of ocean surface waves. Advanced Series on Ocean Engineering, World Scientific Publishing Ltd.
Vicinanza D., Cappietti L., Ferrante V. and Contestabile P. (2011). Estimation of the wave energy along the Italian offshore. Journal of Coastal Research, Special Issue 64, pp. 613-617.
Mattarolo G., Benoit M. and Lafon F. (2009). Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of wave energy converters. 10th European Wave and Tidal Energy Conference (EWTEC 2009), Uppsala (Sweden).
Benoit M. and Lafon F. (2004). A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years. 29th International Conference on Coastal Engineering (ICCE 2004), Lisbon (Portugal), pp. 714-726.
Falcão A.F. de O. (2010). Wave energy utilization: a review of the technologies. Renewable and Sustainable Energy Reviews, Vol. 14, Issue 3, pp. 899-918.
Babarit A. and Hals J. (2011). On the maximum and actual capture width ratio of wave energy converters. 11th European Wave and Tidal Energy Conference (EWTEC 2011), Southampton (UK).
$$\newcommand{\mm} {\mathbf M}\newcommand{\mk} {\mathbf K}\newcommand{\mi} {\mathbf I}\newcommand{\me} {\mathbf E}\newcommand{\ml} {\mathbf L}$$If you extend your notion of "elementary operation" to include the "do nothing operation" --- call it $N$ for "nothing" --- then the corresponding matrix is $I$. That is to say, applying $N$ to a matrix $\mm$ gives the same result as multiplying the matrix by $\mi$:$$N(\mm) = \mi \cdot \mm$$Now let's look at some other elementary operation $S$; we can compute $$\me = S(\mi), \tag{1}$$right? That is to say, the operation $S$, applied to $\mi$ must produce some matrix, and I'm going to call that matrix $\me$. Now I'm going to define a new operation $T$ on matrices: $$T(\mm) = \me \cdot \mm.$$ And your question, now that I've got the definitions out of the way, is "How do I know that $$S(\mm) = T(\mm)$$for every matrix $\mm$?" The first thing to realize is that what I've written above, without the word 'elementary' in "some other elementary operation $S$", is not enough to guarantee this result. It's possible that $S$ might be defined so that $S(\mi) = \me$, where $\me$ is some non-identity matrix, but $S(\mm) = \mi$ for every $\mm \ne \mi$. In that case, the claim that $S(\mm) = T(\mm)$ would be false. So we're going to need to use the notion of "elementary" somehow. It turns out that the key property that elementary operations share --- the one that makes this proof go through --- is that $$S(\mm \cdot \mk) = S(\mm) \cdot \mk \tag{2}$$where the $\cdot$ here represents matrix multiplication. Supposing, for the moment, that our elementary operation has this property, it's easy to prove that $S(\mm) = T(\mm)$ as follows. 
We know that \begin{align}\mi \cdot \mm &= \mm & \text{property of identity} \\S(\mi \cdot \mm) &= S(\mm) & \text{apply $S$ to both sides} \\S(\mi) \cdot \mm &= S(\mm) & \text{use equation 2 above} \\\me \cdot \mm &= S(\mm) & \text{use equation 1 above} \\\end{align}and that's the result we wanted, because the left hand side is exactly the definition of $T(\mm)$. Now how do we prove that fundamental claim, equation 2? We need to use elementary-ness to do so. It's going to come down to cases, alas. For instance, if $S$ is an operation that swaps rows $p$ and $q$, I need to write out $S$ explicitly: $$S(\mm)_{ab} = \begin{cases}m_{ab} & a \ne p \text{ and } a \ne q \\m_{pb} & a = q \\m_{qb} & a = p \\\end{cases}$$That is to say, the $ab$-entry of $S(\mm)$ is the $ab$-entry of $\mm$, unless the row ($a$) is either $p$ or $q$, in which case you have to swap things around. Now looking at a pair of matrices $\mm$ and $\mk$, we need to know a formula for the $ab$ entry of the product $\mm \cdot \mk$; that's$$(mk)_{ab} = \sum_i m_{ai} k_{ib}.$$I'm following the usual convention that the entries of $\mm$ are $m_{ij}$ and those of $\mk$ are $k_{ij}$, and using the two-letter name "$mk$" (in parens) to indicate entries of the product, so that the $i$th row, $j$th column entry of $\mm \cdot \mk$ is denoted $(mk)_{ij}$. And now we can look at $S(\mm \cdot \mk)_{ab}$ and $ (S(\mm) \cdot \mk)_{ab}$ and compare to see whether they're the same.
Well$$\begin{align}S(\mm \cdot \mk)_{ab} &= \begin{cases}(mk)_{ab} & a \ne p \text{ and } a \ne q \\(mk)_{pb} & a = q \\(mk)_{qb} & a = p \end{cases} & \text{definition of $S$} \\&= \begin{cases}\sum_i m_{ai} k_{ib} & a \ne p \text{ and } a \ne q \\\sum_i m_{pi} k_{ib} & a = q \\\sum_i m_{qi} k_{ib} & a = p \end{cases} & \text{definition of matrix multiply} \\&=\sum_i \left( \begin{cases}m_{ai} & a \ne p \text{ and } a \ne q \\m_{pi} & a = q \\m_{qi} & a = p \end{cases} \right) k_{ib} & \text{all three cases share the sum $\sum_i$ and the factor $k_{ib}$} \\&= \sum_i S(\mm)_{ai} k_{ib} & \text{replace large parens with $S(\mm)_{ai}$, because they match} \\&= (S(\mm) \cdot \mk)_{ab}.\end{align}$$So they are, in fact, equal, and we're done (with this case). That still leaves the "multiply a row by a constant" case, and the "add $r$ times one row to another" case to work out, but the proofs for those cases are nearly identical to this one, so I leave those to you.
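A quick numerical spot-check of the two claims — $S(\mm) = \me\cdot\mm$ with $\me = S(\mi)$, and the commutation property $S(\mm\cdot\mk) = S(\mm)\cdot\mk$ — for the row-swap case, using NumPy (the helper `swap_rows` is my own illustration of the operation $S$):

```python
import numpy as np

def swap_rows(m: np.ndarray, p: int, q: int) -> np.ndarray:
    """The elementary operation S: swap rows p and q of a matrix."""
    s = m.copy()
    s[[p, q]] = s[[q, p]]
    return s

rng = np.random.default_rng(0)
M = rng.integers(-5, 5, size=(3, 3)).astype(float)
K = rng.integers(-5, 5, size=(3, 4)).astype(float)
I = np.eye(3)

E = swap_rows(I, 0, 2)                      # E = S(I), equation (1)
# The main claim: S(M) = E.M
assert np.allclose(swap_rows(M, 0, 2), E @ M)
# Equation (2): S(M.K) = S(M).K
assert np.allclose(swap_rows(M @ K, 0, 2), swap_rows(M, 0, 2) @ K)
print("checks passed")
```

Of course a numerical check on random matrices is not a proof — the case analysis above is — but it is a useful sanity test when working through the other two elementary operations as well.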
On page 9 of "High Dimensional Sparse Econometric Models: An Introduction (2011)," Belloni and Chernozhukov explain in Remark 1 that the expected risk of a sparse estimator is $$\min_{\beta\in\mathbb{R}^{\tilde{T}}} \mathbb{E}_{n}[(f_{i}-x_{i}[\tilde{T}]'\beta)^{2}]+\sigma^{2}\dfrac{k}{n}.$$ The notation is explained in the paper, and is standard. $x_{i}[\tilde{T}]$ denotes a vector of values (say, $k$) of some subset of predictors, generally smaller than $p$, the number of all potential predictors. I fail to understand why. Help would be appreciated. Here is a direct arXiv link to the paper. Let $\Pi$ be a projection matrix (symmetric and idempotent), and write $y = f^* + \epsilon$ with $\mathbb{E}[\epsilon] = 0$ and $\mathbb{E}[\epsilon \epsilon^T] = \sigma^2 I$. Then, \begin{align*} \frac{1}{n} \, \mathbb{E} \| \Pi y - f^* \|_2^2 & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^* + \Pi \epsilon \|_2^2 \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{1}{n} \, \mathbb{E} \|\Pi \epsilon \|_2^2 \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{1}{n} \, \mathbb{E} \, \textrm{trace} \left( (\Pi \epsilon)^T \Pi \epsilon \right) \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{1}{n} \, \mathbb{E} \, \textrm{trace} \left( \epsilon^T \Pi \epsilon \right) \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{1}{n} \, \mathbb{E} \, \textrm{trace} \left( \epsilon \epsilon^T \Pi \right) \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{\sigma^2}{n} \, \textrm{trace} \left( \Pi \right) \\ & = \frac{1}{n} \, \mathbb{E} \|\Pi f^* - f^*\|_2^2 + \frac{\sigma^2}{n} k, \\ \end{align*} where $k$ is the dimension of the space onto which $\Pi$ is projecting. The cross term in the second step vanishes because $\Pi f^* - f^*$ lies in the orthogonal complement of the range of $\Pi$, so it is orthogonal to $\Pi \epsilon$ even before taking expectations; the fourth step uses $\Pi^T \Pi = \Pi$, and the fifth uses the cyclic property of the trace. This is the "bias$^2$ + variance" decomposition of the risk, and it appears to be the computation behind their equation. I should note that this computation only reveals that this decomposition holds when the estimate $x^T \beta$ is a projected form of the signal $f^*$---however, this won't necessarily be the case, since they're using the form $\Pi f^* = x^T \beta$ for an arbitrary $\beta$ (which they then minimize over.)
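The key facts used above — $\Pi$ is symmetric and idempotent, $\textrm{trace}(\Pi) = k$, and the bias and noise parts are exactly orthogonal — can be checked numerically. A sketch with NumPy, taking $\Pi$ as the least-squares projection onto the column space of a random design matrix (my choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 5
X = rng.standard_normal((n, k))
Pi = X @ np.linalg.solve(X.T @ X, X.T)   # projection onto col(X)

# Pi is symmetric, idempotent, and has trace k:
assert np.allclose(Pi, Pi.T)
assert np.allclose(Pi @ Pi, Pi)
assert np.isclose(np.trace(Pi), k)

# (Pi f* - f*) is orthogonal to range(Pi), so the decomposition
# ||Pi y - f*||^2 = ||Pi f* - f*||^2 + ||Pi eps||^2 holds for EVERY eps,
# not just in expectation:
f_star = rng.standard_normal(n)
eps = rng.standard_normal(n)
y = f_star + eps
lhs = np.sum((Pi @ y - f_star) ** 2)
rhs = np.sum((Pi @ f_star - f_star) ** 2) + np.sum((Pi @ eps) ** 2)
assert np.isclose(lhs, rhs)
print("decomposition verified")
```

Taking expectations over $\epsilon$ then replaces $\|\Pi\epsilon\|^2$ by $\sigma^2 k$, which is the $\sigma^2 k/n$ term in the paper.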
Abstract The transparent and conductive zinc oxide co-doped with aluminum and ytterbium (AYZO) is demonstrated. The transmittance of the AYZO films in the visible region changes by less than 1% owing to the stability of the suppressed oxygen vacancies. With the illuminating wavelength red-shifting up to 532 nm, the 45 nm-thick AYZO film slightly enhances its transmittance to 90% for different annealing durations. The resistivity of the AYZO films reaches a minimum of 3.2 $\times {{10}} ^{-4} ~\Omega \cdot{cm}$ after annealing at 450 $^{\circ}{C}$ for 15 min, because the annealing process enhances the activation of the ionized Yb and Al donor states in the AYZO films. With the residual oxygen vacancies rigorously controlled to minimize the transmittance variation during annealing, the ${Yb}^{3+}$ ions added to the AYZO films contribute to the conductivity after activation while also helping to stabilize the films. Nevertheless, annealing for 15 min or longer contributes to a decreased resistivity due to the reduction of oxygen vacancies by crystalline regrowth of the AYZO films. The resistivity of the AYZO films is still dominated by the oxygen vacancies instead of the ionized Al and Yb states even after activation. With the co-doping of Yb ions, the AYZO film effectively decreases its resistivity, making it a competitive candidate to replace ITO as a highly transparent and conductive electrode. © 2014 IEEE
We are given the partial differential equation $\partial_t^2 u = \text{div}(A \nabla u) = \nabla \cdot (A \nabla u), \tag 1$ for which we define the energy at time $t$ via an integral over $\Bbb R^3$: $E(t) = \displaystyle \dfrac{1}{2} \int_{\Bbb R^3} ((\partial_t u)^2 + \langle A\nabla u, \nabla u \rangle) \; dx; \tag 2$ we wish to show that $\dot E(t) = \partial_t E(t) = 0. \tag 3$ Before proceeding with the demonstration of (3), a few remarks are in order: first, there is no need to take the absolute value of $\partial_t u$ when forming the term $(\partial_t u)^2$ appearing in (2), as our OP Axioms does in the text of his question. Since we are taking the square of $\partial_t u$, the absolute value signs are unnecessary: $\vert \partial_t u \vert^2 = (\partial_t u)^2$; this simplification makes taking the $t$-derivative of $\vert \partial_t u \vert^2 = (\partial_t u)^2$ somewhat more transparent, since it makes clear that we needn't be concerned with the failure of $\vert \cdot \vert$ to be differentiable when its argument vanishes; second, the hypothesis that $a^{ij}(x) = a^{ji}(x)$, i.e., that $A(x)$ is a symmetric matrix, $A^T(x) = A(x)$, for all $x \in \Bbb R^3$ implies, as is well known, that for any vectors $v, w$ we have $\langle v, A(x) w \rangle = \langle A(x) v, w \rangle; \tag 4$ indeed, $\langle v, A(x) w \rangle = \langle A^T(x) v, w \rangle = \langle A(x) v, w \rangle; \tag 5$ this property of the coefficient matrix $A(x)$ will enter into the discussion which follows; third, there is no need to write the subscript $x$ when taking $\nabla u = \nabla_x u$, since it is understood that $\nabla$ refers only to the spatial variables $x$; bearing these remarks in mind, we proceed.
From (2) we have $\dot E(t) = \partial_t E(t) = \displaystyle \dfrac{1}{2} \partial_t \int_{\Bbb R^3} ((\partial_t u)^2 + \langle A\nabla u, \nabla u \rangle) \; dx$$= \displaystyle \dfrac{1}{2} \int_{\Bbb R^3} \partial_t ((\partial_t u)^2 + \langle A\nabla u, \nabla u \rangle) \; dx = \dfrac{1}{2} \int_{\Bbb R^3} (\partial_t (\partial_t u)^2 + \partial_t \langle A\nabla u, \nabla u \rangle) \; dx; \tag 6$ now, $\partial_t (\partial_t u)^2 = 2 \partial_t u \; \partial_t^2 u; \tag 7$ also, $\partial_t \langle A\nabla u, \nabla u \rangle = \langle \partial_t(A \nabla u), \nabla u \rangle + \langle A \nabla u, \partial_t \nabla u \rangle, \tag 8$ and $\partial_t \nabla u = \nabla \partial_t u, \tag 9$ since the $t$- and $x$-derivatives commute, and $\partial_t A \nabla u = A \partial_t \nabla u, \tag{10}$ since $A(x)$ is constant with respect to $t$; combining (8), (9) and (10) we see that $\partial_t \langle A\nabla u, \nabla u \rangle = \langle A \nabla \partial_t u, \nabla u \rangle + \langle A \nabla u, \nabla \partial_t u \rangle, \tag{11}$ and using the symmetric property (4) of $A$ and of the inner product ($\langle v, w \rangle = \langle w, v \rangle$), $\partial_t \langle A\nabla u, \nabla u \rangle = \langle \nabla \partial_t u, A \nabla u \rangle + \langle \nabla \partial_t u, A \nabla u \rangle = 2\langle \nabla \partial_t u, A \nabla u \rangle; \tag{12}$ using (7) and (12) we return to (6) and write it in the form $\dot E(t) = \partial_t E(t) = \displaystyle \dfrac{1}{2} \int_{\Bbb R^3} (\partial_t (\partial_t u)^2 + \partial_t \langle A\nabla u, \nabla u \rangle) \; dx$$= \displaystyle \int_{\Bbb R^3} (\partial_t u \; \partial_t^2u + \langle \nabla \partial_t u, A \nabla u \rangle) \; dx, \tag{13}$ where the factors of $2$ occurring in (7) and (12) have been cancelled against the $1/2$ occurring in (6). 
Now $\nabla \cdot (\partial_t u (A \nabla u)) = \langle \nabla \partial_t u, A \nabla u \rangle + \partial_t u \, \nabla \cdot (A \nabla u), \tag{14}$ which follows from the standard formula from vector calculus $\nabla \cdot (fX) = \langle \nabla f, X \rangle + f \nabla \cdot X \tag{15}$ applied with $f = \partial_t u$ and $X = A \nabla u$; re-arranging (14), $\langle \nabla \partial_t u, A \nabla u \rangle = \nabla \cdot (\partial_t u (A \nabla u)) - \partial_t u \nabla \cdot (A \nabla u); \tag{16}$ we integrate (16) over $\Bbb R^3$, as follows: $\displaystyle \int_{\Bbb R^3} \langle \nabla \partial_t u, A \nabla u \rangle \; dx = \int_{\Bbb R^3} \nabla \cdot (\partial_t u (A \nabla u)) \; dx - \int_{\Bbb R^3} \partial_t u \nabla \cdot (A \nabla u) \; dx; \tag{17}$ we evaluate the first integral on the right of (17) by first taking it over a ball $B(R)$, of radius $R$ and centered at $(0, 0, 0)$, and then letting $R \to \infty$; we have, using the divergence theorem, $\displaystyle \int_{B(R)} \nabla \cdot (\partial_t u (A \nabla u)) \; dx = \int_{\partial B(R)} \partial_t u (A \nabla u) \cdot n \; dA, \tag{18}$ where $n$ is the outward normal on the sphere $\partial B(R)$, the boundary of $B(R)$, and $dA$ is its area element; under the assumption that $u$ and its derivatives fall off sufficiently rapidly as $R \to \infty$, we have $\displaystyle \int_{\Bbb R^3} \nabla \cdot (\partial_t u (A \nabla u)) \; dx$$ = \displaystyle \lim_{R \to \infty} \int_{B(R)} \nabla \cdot (\partial_t u (A \nabla u)) \; dx = \lim_{R \to \infty} \int_{\partial B(R)} \partial_t u (A \nabla u) \cdot n \; dA = 0, \tag{19}$ so that (17) yields $\displaystyle \int_{\Bbb R^3} \langle \nabla \partial_t u, A \nabla u \rangle \; dx = -\int_{\Bbb R^3} \partial_t u \nabla \cdot (A \nabla u) \; dx; \tag{20}$ using (20), we may re-assemble (13) to find $\dot E(t) = \displaystyle \int_{\Bbb R^3} (\partial_t u \; \partial_t^2u - \partial_t u \, \nabla \cdot (A \nabla u)) \; dx = \int_{\Bbb R^3}
\partial_t u (\partial_t^2u - \nabla \cdot (A \nabla u)) \; dx = 0, \tag{21}$ by virtue of (1). The conservation of the energy (2) is thus established. In closing, we note that there is nothing inherently $3$-dimensional about this derivation; everything can be carried over to $\Bbb R^n$, provided the conditions on $u$ and $\nabla u$ as $R \to \infty$ are appropriately modified.
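The derivation rests on two pointwise identities: the symmetry step (12) and the product rule (15). A symbolic spot-check with SymPy (the particular symmetric $A(x)$ and test function $u$ below are arbitrary choices for illustration, not from the question):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# A symmetric, t-independent coefficient matrix A(x) and a smooth test u:
A = sp.Matrix([[2 + x**2, x*y, 0],
               [x*y, 3, y*z],
               [0, y*z, 1 + z**2]])
u = sp.exp(-t) * sp.sin(x) * sp.cos(y) * z

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
div = lambda V: sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))

gu = grad(u)                 # grad u
gut = grad(sp.diff(u, t))    # grad u_t  (= d/dt grad u, derivatives commute)

# Equation (12): d/dt <A grad u, grad u> = 2 <grad u_t, A grad u>
lhs12 = sp.diff((A * gu).dot(gu), t)
assert sp.simplify(lhs12 - 2 * gut.dot(A * gu)) == 0

# Equation (15): div(f X) = <grad f, X> + f div X, with f = u_t, X = A grad u
f = sp.diff(u, t)
Xv = A * gu
assert sp.simplify(div(f * Xv) - (gut.dot(Xv) + f * div(Xv))) == 0
print("identities verified")
```

Such a check does not replace the proof, but it catches sign and symmetry slips (e.g. forgetting that (12) needs $A^T = A$) before integrating over $\Bbb R^3$.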