If an electron partially occupies the $1s$ orbital, can another electron also partially occupy the $1s$ orbital? The "grown-up" version of the Pauli exclusion principle reads as follows: Electrons are identical particles which obey Fermi-Dirac statistics, so the wavefunction needs to change sign if two electrons are exchanged. For the specific case of two electrons, the wavefunction $\Psi$ is a function of two positions $\mathbf r_1$ and $\mathbf r_2$, with one electron at $\mathbf r_1$ and the other at $\mathbf r_2$, and the Fermi-Dirac condition is that $$ \Psi(\mathbf r_2, \mathbf r_1) = - \Psi(\mathbf r_1, \mathbf r_2). \tag1 $$ When we say that "one electron is in state $a$ and the other electron is in state $b$", what we normally mean is that the global wavefunction is in the specific form of a Slater determinant,$$\Psi(\mathbf r_1, \mathbf r_2) = \frac{1}{\sqrt{2}} \bigg[ \psi_a(\mathbf r_1)\psi_b(\mathbf r_2) - \psi_b(\mathbf r_1)\psi_a(\mathbf r_2) \bigg].\tag 2$$It should be obvious that this satisfies the condition $(1)$. (However, it is important to remark from the outset that this is not the only possible way for this to happen $-$ there are antisymmetric wavefunctions which satisfy $(1)$ but which cannot be expressed as a single Slater determinant as in $(2)$; those are generally only required in post-Hartree-Fock theories.) This is where the "pre-grown-up" version of the Pauli exclusion principle fits in: trying to have "two electrons in the same state" would require you to have $\psi_b=\psi_a$, and if you try to put that into the Slater determinant in $(2)$ it will lead to a vanishing two-electron wavefunction, so this state is not possible for fermions (as opposed to bosons!). It's unclear what you mean by an electron "partially occupying" the $1s$ orbital, or why you think this is related to "non-stable" states. (Unstable states in quantum mechanics are tricky beasts - see the paper cited at the end of this answer to see just how tricky.)
If by that quote you simply mean, say, that you have one electron in an orbital that has only a partial overlap with the $1s$ orbital, such as e.g.$$\psi_a(\mathbf r) = \frac{1}{\sqrt{2}} \bigg[ \psi_{1s}(\mathbf r) + \psi_{2s}(\mathbf r) \bigg],\tag 3$$say, then the overall answer is yes: it is indeed possible for this orbital to be part of a two-electron state in combination (through a Slater determinant) with a second orbital which includes a nonzero overlap with the $1s$ state; the simplest such state is just$$\psi_b(\mathbf r) = \frac{1}{\sqrt{2}} \bigg[ \psi_{1s}(\mathbf r) - \psi_{2s}(\mathbf r) \bigg].\tag 4$$ However, it is important to note that this type of manipulation should be handled with extreme care, since individual orbitals typically do not have physical meaning in multi-electron states, and indeed if you substitute in $(3)$ and $(4)$ into the formula $(2)$ for the Slater determinant, you will find that $$ \frac{1}{\sqrt{2}} \bigg[ \psi_a(\mathbf r_1)\psi_b(\mathbf r_2) - \psi_b(\mathbf r_1)\psi_a(\mathbf r_2) \bigg] = \frac{1}{\sqrt{2}} \bigg[ \psi_{2s}(\mathbf r_1)\psi_{1s}(\mathbf r_2) - \psi_{1s}(\mathbf r_1)\psi_{2s}(\mathbf r_2) \bigg], \tag 5 $$ or, in other words, "rotating" in this way to "mixed" orbitals does not actually accomplish anything $-$ because of the antisymmetry of the state, and the linearity of quantum mechanics (a.k.a. "the wave nature of matter", as that property is called in the earliest introductory texts and in popular-science presentations), this does not affect the real, physical state in any way.
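Identity $(5)$ can be checked symbolically. The following sketch (mine, not part of the original answer) treats the values of the $1s$ and $2s$ orbitals at the two positions as free symbols and verifies that the Slater determinant built from the "mixed" orbitals $(3)$ and $(4)$ coincides with the one built from $\psi_{1s}$ and $\psi_{2s}$:

```python
import sympy as sp

# Values of the 1s and 2s orbitals at the two electron positions r1 and r2,
# treated as free symbols: s1_r1 = psi_1s(r1), s2_r2 = psi_2s(r2), etc.
s1_r1, s1_r2, s2_r1, s2_r2 = sp.symbols('s1_r1 s1_r2 s2_r1 s2_r2')

a_r1 = (s1_r1 + s2_r1) / sp.sqrt(2)   # psi_a(r1), as in Eq. (3)
a_r2 = (s1_r2 + s2_r2) / sp.sqrt(2)
b_r1 = (s1_r1 - s2_r1) / sp.sqrt(2)   # psi_b(r1), as in Eq. (4)
b_r2 = (s1_r2 - s2_r2) / sp.sqrt(2)

# Left side of Eq. (5): Slater determinant built from the mixed orbitals.
lhs = (a_r1 * b_r2 - b_r1 * a_r2) / sp.sqrt(2)
# Right side of Eq. (5): Slater determinant built from psi_1s and psi_2s.
rhs = (s2_r1 * s1_r2 - s1_r1 * s2_r2) / sp.sqrt(2)

assert sp.expand(lhs - rhs) == 0   # the two states are identical
```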
The Kähler $4$-form constructed from two-forms $\alpha, \beta \in H^2(M,\mathbb Z)$, with $M$ a $4$-manifold, is induced by $\alpha\wedge\beta$ via the cup product map $H^2(M, \mathbb Z)\otimes H^2(M, \mathbb Z) \rightarrow H^4(M, \mathbb Z)$. This defines the topological charge $$ 8\pi k = \langle\omega(\alpha\cup\beta)\rangle = \int\omega. $$ Geometrically, this counts the number of ways these two forms intersect, and the four-form $\omega$ is then an intersection form. Milnor [On Simply Connected 4-manifolds, Symp. Int. Top. Alg., Mexico (1958) 122-128] demonstrated that for $$ M = \{[z_0 : z_1 : z_2 : z_3] \in \mathbb{CP}^3: z_0^4 + z_1^4 + z_2^4 + z_3^4 = 0\}, $$ the Kummer surface, this intersection form is given by $$ [E_8]\oplus[E_8]\oplus3\left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right). $$ Here $[E_8]$ is the Cartan matrix of the exceptional group $E_8$. This leads to a couple of questions or observations. One of them is that $\mathbb{CP}^3$ is projective twistor space $\mathbb P\mathbb T^+$. Projective twistor space is $$ \mathbb P\mathbb T^+ = SU(2,2)/SU(2,1)\times U(1) \simeq SO(4,2)/SO(4,1)\times SO(2). $$ This is a Hermitian symmetric space. The question is then whether the global symmetries given by the Kähler form are a set of global symmetries reduced from the local symmetries of $E_8\times E_8 \sim SO(32)$. This could also be carried over to Witten's supertwistor space $\mathbb C\mathbb T^{3|4}$. String theory requires a background for gravitation, and so demands some global spacetime symmetry, say a set of global symmetries at conformal infinity $i^0$. The rest of string theory involves entirely local symmetries. Along the lines of this question: is this relationship between global and local symmetries (assuming my hypothesis here is correct) the reason string theory requires background dependence, such as type II strings on $AdS_5$?
The twistor space here is related to anti-de Sitter spacetime: both are quotients of $SO(4,2)$ by different subgroups.
Self-Consistent Schrödinger-Poisson Results for a Nanowire Benchmark The Schrödinger-Poisson Equation multiphysics interface simulates systems with quantum-confined charge carriers, such as quantum wells, wires, and dots. Here, we examine a benchmark model of a GaAs nanowire to demonstrate how to use this feature in the Semiconductor Module, an add-on product to the COMSOL Multiphysics® software. The Schrödinger-Poisson Equation Multiphysics Interface The Schrödinger-Poisson Equation multiphysics interface, available as of COMSOL Multiphysics® version 5.4, creates a bidirectional coupling between the Electrostatics interface and the Schrödinger Equation interface to model charge carriers in quantum-confined systems. The electric potential from the electrostatics contributes to the potential energy term in the Schrödinger equation. A statistically weighted sum of the probability densities from the eigenstates of the Schrödinger equation contributes to the space charge density in the electrostatics. All spatial dimensions (1D, 1D axial symmetry, 2D, 2D axial symmetry, and 3D) are supported. Solving the Schrödinger-Poisson System The Schrödinger-Poisson system is special in that a stationary study is necessary for the electrostatics, and an eigenvalue study is necessary for the Schrödinger equation. To solve the two-way coupled system, the Schrödinger equation and Poisson's equation are solved iteratively until a self-consistent solution is obtained. The iterative procedure consists of the following steps: Step 1 To provide a good initial condition for the iterations, we solve Poisson's equation (1) for the electric potential, V, in which \epsilon is the permittivity and \rho is the space charge density. In this initialization step, \rho is given by the best initial estimate from physical arguments; for example, using the Thomas-Fermi approximation.
Step 2 The electric potential, V, from the previous step contributes to the potential energy term, V_e, in the Schrödinger equation (2) where q is the charge of the carrier particle, which is given by (3) where z_q is the charge number and e is the elementary charge. Step 3 With the updated potential energy term given by Eq. 2, the Schrödinger equation is solved, producing a set of eigenenergies, E_i, and a corresponding set of normalized wave functions, \Psi_i. Step 4 The particle density profile, n_\mathrm{sum}, is computed using a statistically weighted sum of the probability densities (4) where the weight, N_i, is given by integrating the Fermi-Dirac distribution for the out-of-plane continuum states (thus depending on the spatial dimension of the model). (5) (6) (7) where g_i is the valley degeneracy factor, E_f is the Fermi level, k_B is the Boltzmann constant, T is the absolute temperature, m_d is the density of state effective mass, and F_0 and F_{-1/2} are Fermi-Dirac integrals. For simplicity, the weighted sum in Eq. 4 shows only one index, i, for the summation. There can be, of course, more than one index in the summation. For example, in the nanowire model discussed here, the summation is over both the azimuthal quantum number and the eigenenergy levels (for each azimuthal quantum number). Step 5 Given the particle density profile, n_\mathrm{sum}, we reestimate the space charge density, \rho , and then re-solve Poisson’s equation to obtain a new electric potential profile, V. The straightforward formula for the new space charge density (8) almost always leads to divergence of the iterations. A much better estimate is given by (9) where V_\mathrm{old} is the electric potential from the previous iteration and \alpha is an additional tuning parameter. The formula is motivated by the observation that the particle density, n_\mathrm{sum}, is the result from V_\mathrm{old} and would change once Poisson’s equation is re-solved to obtain a new V. 
In other words, Eq. 8 can be written more explicitly as (10) since n_\mathrm{sum} is the result from V_\mathrm{old}, and \rho is used to re-solve Poisson's equation to get a new V. To achieve a self-consistent solution, a better formula would be (11) At this point, n_\mathrm{sum,new} is unknown to us, since it comes from the solution to the Schrödinger equation in the next iteration. However, we can formulate a prediction for it using Boltzmann statistics, which provides a simple exponential relation between the potential energy, V_e=qV, and the particle density, n_\mathrm{sum}. (12) This leads to Eq. 9 for the case of \alpha=0. This works well at high temperatures, where Boltzmann statistics is a good approximation. At lower temperatures, setting \alpha to a positive number helps accelerate convergence. Step 6 Once a new electric potential profile, V, is obtained by re-solving Poisson's equation, compare it with the electric potential from the previous iteration, V_\mathrm{old}. If the two profiles agree within the desired tolerance, then self-consistency is achieved; otherwise, go to step 2 to continue the iteration. A dedicated Schrödinger-Poisson study type is available to automatically generate the steps outlined above in the solver sequence. Benchmark Example: The Nanowire Model The GaAs nanowire tutorial model is based on a paper by J.H. Luscombe, A.M. Bouchard, and M. Luban titled "Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory". Given the assumption of an infinite length and cylindrical symmetry, we choose the 1D axisymmetric space dimension. We then select the Schrödinger-Poisson Equation multiphysics interface under the Semiconductor branch, which adds the Schrödinger Equation and Electrostatics interfaces together with the Schrödinger-Poisson Coupling multiphysics coupling in the Model Builder. Selecting the Schrödinger-Poisson Equation interface for the nanowire model.
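As an illustration, the iterative procedure of Steps 1-6 can be sketched as a standalone toy script (a 1D model in dimensionless units with invented parameters, not the COMSOL implementation; for simplicity it uses Boltzmann weights in place of Eq. 5-Eq. 7 and plain damped mixing of the potential instead of the modified Gummel update of Eq. 9):

```python
import numpy as np

# Toy 1D self-consistent Schrodinger-Poisson loop, dimensionless units.
N = 200
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
kT = 1.0                          # thermal energy (assumed)
n_dopant = np.full(N, 50.0)       # uniform ionized dopant density (assumed)
total = h * n_dopant.sum()        # electron number fixed by charge neutrality

def solve_schrodinger(V, n_states=10):
    """Step 3: eigenstates of -1/2 psi'' + V_e psi = E psi, with V_e = -V."""
    H = (np.diag(1.0 / h**2 - V)
         - 0.5 / h**2 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)))
    E, psi = np.linalg.eigh(H)
    return E[:n_states], psi[:, :n_states] / np.sqrt(h)  # sum |psi|^2 h = 1

def solve_poisson(rho):
    """Steps 1 and 5: -V'' = rho with V = 0 at both walls."""
    A = (2 * np.eye(N - 2) - np.diag(np.ones(N - 3), 1)
         - np.diag(np.ones(N - 3), -1)) / h**2
    V = np.zeros(N)
    V[1:-1] = np.linalg.solve(A, rho[1:-1])
    return V

V = np.zeros(N)   # crude initial guess (Step 1 proper would use Thomas-Fermi)
errs = []
for it in range(100):
    E, psi = solve_schrodinger(V)
    w = np.exp(-(E - E[0]) / kT)            # Boltzmann weights (stand-in for N_i)
    n = psi**2 @ w                          # weighted density, as in Eq. 4
    n *= total / (h * n.sum())              # enforce charge neutrality
    V_new = solve_poisson(n_dopant - n)     # electrons carry charge -1
    errs.append(np.max(np.abs(V_new - V)))  # Step 6: convergence check
    V = 0.9 * V + 0.1 * V_new               # damped mixing (not Eq. 9)
    if errs[-1] < 1e-6:
        break
```

The heavy damping plays the role that the modified Gummel update plays in the real solver: the undamped fixed-point map tends to overshoot and diverge, exactly as described for Eq. 8 above.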
Following the description in the paper, the radius of the nanowire is set to 50 nm. The electron effective mass is set to 0.067 times the free electron mass (as suggested by the Fermi-temperature result in the paper), and the dielectric constant is assumed to be 12.9. The Fermi energy level in the model is set to 0 V and the electric potential at the wall to −0.7 V in order to match the Fermi-level-pinning boundary condition described by the researchers. We model the case of 2·10^18 cm^−3 uniform ionized dopants at a temperature of 10 K to compare with Figures 2 and 3 in the paper. The numbers above are entered as global parameters in the model. Global parameters for the nanowire model. Following the approach of the paper, we first solve for the Thomas-Fermi approximate solution, then use it as the initial condition for the fully coupled Schrödinger-Poisson equation. The formulas for the Thomas-Fermi approximation are entered as local variables in the model. Local variables for the nanowire model. With the global parameters and local variables defined, it is straightforward to use them to fill the various input fields in the geometry, material, and physics nodes in the Model Builder.
Here are a few things to note:

- The azimuthal quantum number m is parameterized to allow sweeping and summing over its values, as mentioned above, and is entered in the Settings window of the Schrödinger Equation physics node.
- Recall from a previous blog post on computing the band gap for superlattices that the eigenvalue scale λ_scale works as a multiplication factor for the dimensionless eigenvalue λ to produce the eigenenergy, E_i (E_i = λ_scale·λ). For instance, if λ_scale equals 1 eV, then an eigenvalue of 1.23 indicates an eigenenergy of 1.23 eV.
- For the Electrostatics interface, an Electric Potential boundary condition is added to set the value at the wall of the nanowire, as mentioned above. In addition, two Space Charge Density domain conditions are added, one for the ionized dopants and the other for the Thomas-Fermi approximation (the latter should be turned off for the Schrödinger-Poisson study).

Setting Up the Schrödinger-Poisson Multiphysics Coupling In the Settings window for the Schrödinger-Poisson Coupling multiphysics node, expand the Equation section to see the equations implemented in this node — they should look familiar if you’ve read the Solving the Schrödinger-Poisson System section above. The Coupled Interfaces section in the settings allows the selection of the two coupled physics interfaces. The Model Input section sets the temperature of the system, as shown in the screenshot below: Upper part of the Settings window for the Schrödinger-Poisson Coupling node. The Particle Density Computation section (screenshot below) specifies the statistically weighted sum of the probability densities, as described in Eq. 4. If the default option of Fermi-Dirac statistics, parabolic band is selected, then Eq. 5–Eq. 7 are used to compute the weights, N_i. A user-defined option is also available for entering different expressions for the weights.
To take into account the pairs of degenerate azimuthal quantum numbers (m = ±1, ±2, etc.), we use the formula 1+(m>0) for the Degeneracy factor, g_i, which evaluates to 1 for m = 0 and 2 for m > 0. Lower part of the Settings window for the Schrödinger-Poisson Coupling node. The Charge Density Computation section (screenshot above) takes the input for the Charge number, z_q, for Eq. 3. If the default option of Modified Gummel iteration is selected, then Eq. 9 is used to compute the new space charge density, \rho. Other options are also available, including a user-defined option where you can enter your own mathematical expressions. The default expression for the Global error variable, (schrp1.max(abs(V-schrp1.V_old)))/1[V], computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V. Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node, and the variable name V should match the dependent variable name for the Electrostatics interface. These names may change from the default in a more complicated model, and the expression will turn yellow if the names do not match. In this case, some manual editing is needed. Setting Up the Schrödinger-Poisson Study Step The dedicated Schrödinger-Poisson study step under Study 2 automatically generates the self-consistent iterations in the solver sequence. The iteration scheme is outlined in Solving the Schrödinger-Poisson System above. If we are dealing with a completely new problem, then for the Eigenfrequency search method menu under the Study Settings section, it is often necessary to use the default Manual search option to find the range of the eigenenergies. Once the range is found, we can switch to the Region search option with appropriate settings for the range and number of eigenvalues in order to ensure that all significant eigenstates are found by the solver. 
For this tutorial, the estimated energy range is between -0.15 and 0.05 eV. This corresponds to -0.15 and 0.05 for the unitless eigenvalue, as discussed earlier. The real and imaginary parts of the input fields refer to the real and imaginary parts of the eigenvalue, respectively. To look for the eigenenergies of bound states, we set the input fields for the real parts to the expected energy range and set the input fields for the imaginary parts to a small range around 0 to capture numerical noise or slightly leaky quasibound states, as shown below: Upper part of the Settings window for the Schrödinger-Poisson study step. As we have pointed out earlier, the second Space Charge Density domain condition is only used for the Thomas-Fermi approximation solution in Study 1. It is thus disabled under the Physics and Variables Selection section, as shown in the screenshot above. Under the Iterations section, the default option for the Termination method drop-down menu is Minimization of global variable, which automatically updates a result table that displays the history of the global error variable after each iteration during the solution process. The built-in global error variable schrp1.global_err computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V, as already configured in the Schrödinger-Poisson Coupling multiphysics node. (Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node.) Setting the tolerance to 1E-6 thus means that the iteration ends after the max difference is less than 1 uV. See the screenshot below for these settings: Lower part of the Settings window for the Schrödinger-Poisson study step. Under the Values of Dependent Variables section, we select the Thomas-Fermi approximate solution from Study 1 as the initial condition for this study. 
We then use the Auxiliary sweep functionality to solve for a list of nonnegative azimuthal quantum numbers m. The negative ones are taken into account using the formula 1+(m>0) for the degeneracy factor, g_i, as discussed earlier. The dedicated solver sequence automatically performs the statistically weighted sum of the probability densities for all of the eigenstates. Examining the Self-Consistent Results The solver converges in eight iterations thanks to the good initial condition provided by the Thomas-Fermi approximation and the good forward estimate of the space charge density given by Eq. 9. The plots of the electron density, potential energy, and partial orbital contributions agree well with the figure published in the reference paper. Comparison of the electron density, potential energy, and partial orbital contributions with the figure published in the reference paper. The plot below shows the Friedel-type spatial oscillations present in both the electron density and the potential energy profiles. Zoomed-in plot of the Friedel-type spatial oscillations in the electron density and potential energy profiles. Next Step In this blog post, we have demonstrated that the Schrödinger-Poisson Equation interface and the Schrödinger-Poisson study type make it simple to set up and solve a Schrödinger-Poisson system, using the Self-Consistent Schrödinger-Poisson Results for a GaAs Nanowire benchmark model as an example. To try this model yourself, click the button below to go to the Application Gallery, where you can download the documentation and, with a valid software license, the MPH-file for this tutorial. We hope you find these new features useful and we would love to hear how you apply them to your research. Reference J.H. Luscombe, A.M. Bouchard, and M. Luban, “Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory,” Phys. Rev. B, vol. 46, no. 16, p. 10262, 1992.
(I first recall the definitions, but specialists can probably go directly to the question.) A twist map of the annulus $A=(\mathbb R/\mathbb Z)\times \mathbb R$ is an orientation preserving homeomorphism $f=(f_1,f_2):A\to A$ that satisfies the "twist condition": for every $x_1$, the function $f_1(x_1,x_2)$ is strictly monotone in $x_2$. Here $f_1$ and $f_2$ are the two coordinate components of $f$, and monotonicity in $\mathbb R/\mathbb Z$ should be taken to mean the monotonicity of a lifting of the corresponding function to $\mathbb R$. Twist maps of the annulus have been studied a lot in the measure-preserving case, notably in the development of Mather-Aubry theory. For a map $f$ to be measure preserving means that for every Lebesgue-measurable set $U$, $\mu(U)=\mu(f^{-1}(U))$, where $\mu$ denotes Lebesgue measure. A twist map $f$ is topologically conjugate to another map $g$ if there exists a homeomorphism $\phi : A\to A$ such that $f=\phi^{-1}\circ g\circ \phi$. Question. Do there exist twist maps of the annulus that are not topologically conjugate to a map that preserves the measure? In particular, are there any twist maps where Mather-Aubry theory does not hold? Thank you in advance!
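As a concrete example (mine, not from the question), the Chirikov standard map is the prototypical measure-preserving twist map of the annulus; both the twist condition and the preservation of Lebesgue measure can be checked symbolically:

```python
import sympy as sp

x1, x2, K = sp.symbols('x1 x2 K', real=True)

# Chirikov standard map on the annulus (x1 taken mod 1): first update the
# "momentum" coordinate x2, then shear the angle x1 by it.
f2 = x2 + (K / (2 * sp.pi)) * sp.sin(2 * sp.pi * x1)
f1 = x1 + f2   # mod 1 in the first coordinate

J = sp.Matrix([[sp.diff(f1, x1), sp.diff(f1, x2)],
               [sp.diff(f2, x1), sp.diff(f2, x2)]])

jac = sp.simplify(J.det())   # det = 1: Lebesgue measure is preserved
twist = sp.diff(f1, x2)      # = 1 > 0: f1 is strictly increasing in x2
```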
DTFT of a Cosine Sampled Above and Below the Nyquist Rate Introduction In this Slecture, I will walk you through taking the DTFT of a pure frequency sampled above and below the Nyquist rate. Then I will compare the differences between them. Useful Background Nyquist Condition: $ f_s = 2f_{max} $ DTFT of a Cosine: $ x_d[n] = \cos(\omega_o n) \leftrightarrow X(\omega) = \pi(\delta(\omega-\omega_o) + \delta(\omega+\omega_o)), \text{ for } \omega \in [-\pi,\pi] $ The DTFT of a sampled signal is periodic with period 2π. DTFT of a Cosine Sampled Above the Nyquist Rate For our original pure frequency, let’s choose the E below middle C, which occurs at 330 Hz: $ x(t) = \cos(2\pi \cdot 330 t) $ Now let’s sample this pure cosine at a frequency above the Nyquist rate. The Nyquist rate is $ f_s = 2 f_{max} = 2 \cdot (330 \text{ Hz}) = 660 \text{ Hz} $. Let’s sample at 990 Hz. $ \begin{align} \\ x_d[n] & = x(n*\frac{1}{990Hz})\\ & = cos(2\pi n *\frac{330}{990}) = \frac{e^{j2\pi n \frac{330}{990}} + e^{-j2\pi n \frac{330}{990}}}{2}\\ & = cos(\frac{2\pi n}{3}) \end{align} $ Because $ \left | \frac{2\pi}{3}\right | < \pi $, there is no aliasing in the DTFT, and it can be written as follows: $ \begin{align} \\ X(\omega) & = \frac{1}{2}(2\pi\delta(\omega - 2\pi \frac{330}{990}) + 2\pi\delta(\omega + 2\pi \frac{330}{990})) , \ \omega \in\ [-\pi,\pi]\\ & = \frac{990}{2}(\delta(\frac{990}{2\pi}\omega - 330) + \delta(\frac{990}{2\pi}\omega + 330)) , \ \omega \in\ [-\pi,\pi]\\ & = rep_{2\pi}(\frac{990}{2}(\delta(\frac{990}{2\pi}\omega - 330) + \delta(\frac{990}{2\pi}\omega + 330))), \forall \omega \end{align} $ DTFT of a Cosine Sampled Below the Nyquist Rate Let’s use the same pure frequency as above: $ x(t) = \cos(2\pi \cdot 330 t) $ Now let’s sample this pure cosine at a frequency below the Nyquist rate. From above, the Nyquist rate is 660 Hz.
Let’s sample at 550 Hz. $ \begin{align} \\ x_d[n] & = x(n*\frac{1}{550Hz})\\ & = cos(2\pi n *\frac{330}{550}) = \frac{e^{j2\pi n \frac{330}{550}} + e^{-j2\pi n \frac{330}{550}}}{2}\\ \\ \end{align} $ Because $ \pi < \frac{2\pi \cdot 330}{550} < 2\pi $, aliasing occurs in the DTFT. The DTFT should be calculated with $ \omega \in [-\pi, \pi] $, so we will use the periodicity of cosine to shift $ x_d[n] $ into an appropriate range. $ \begin{align}\\ x_{d}[n] & = cos(2\pi n*\frac{330}{550})\\ & = cos(2\pi n*\frac{330}{550} - 2\pi n)\\ & = cos(2\pi n*(\frac{330}{550} - \frac{550}{550}))\\ & = cos(2\pi n*(\frac{-220}{550}))\\ & = cos(2\pi n*\frac{220}{550}) \end{align} $ Now that the argument of the cosine satisfies $ \left | 2\pi \frac{220}{550}\right | < \pi $, we can take the DTFT of $ x_d[n] $, and the result will fall into the desired range for $ \omega $. $ \begin{align} X(\omega) & = \frac{1}{2}(2\pi\delta(\omega - 2\pi \frac{220}{550}) + 2\pi\delta(\omega + 2\pi \frac{220}{550})) , \ \omega \in\ [-\pi,\pi]\\ & = \frac{550}{2}(\delta(\frac{550}{2\pi}\omega - 220) + \delta(\frac{550}{2\pi}\omega + 220)) , \ \omega \in\ [-\pi,\pi]\\ & = rep_{2\pi}(\frac{550}{2}(\delta(\frac{550}{2\pi}\omega - 220) + \delta(\frac{550}{2\pi}\omega + 220))), \forall \omega \end{align} $ Conclusion The DTFT of a sampled signal is always periodic with period $ 2\pi $. So even though the DTFT of a signal sampled below Nyquist may initially not fall within $ [-\pi,\pi] $, it can be shifted into the window you are interested in. In my derivation, I chose to shift the cosine before taking the DTFT. Looking at Figure 3, you can see the comparison between a cosine sampled above and below the Nyquist rate. The cosine sampled below the Nyquist rate exhibits aliasing. The aliased signal has a decreased magnitude compared to the original, and it also appears at a different frequency. References [1] Mireille Boutin, "ECE 438 Digital Signal Processing with Applications," Purdue University, September 9, 2014.
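The two cases above can also be checked numerically. This sketch (not part of the original slecture) samples the 330 Hz cosine at both rates and locates the FFT peak; below Nyquist the peak appears at the aliased frequency 550 − 330 = 220 Hz:

```python
import numpy as np

f0 = 330.0          # the E below middle C, as above
n_samples = 1650    # chosen so the tone lands exactly on an FFT bin at both rates

def dominant_frequency(fs):
    n = np.arange(n_samples)
    xd = np.cos(2 * np.pi * f0 * n / fs)            # x_d[n] = x(n/fs)
    spectrum = np.abs(np.fft.rfft(xd))              # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)  # bin frequencies in Hz
    return freqs[np.argmax(spectrum)]

above = dominant_frequency(fs=990.0)   # sampled above Nyquist: peak near 330 Hz
below = dominant_frequency(fs=550.0)   # sampled below Nyquist: peak near 220 Hz
```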
If you have any questions, comments, etc. please post them on this page
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why in the definition of algebraic closure do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $\lim_{n\to\infty} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ . Can you give some hint? My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$ If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of independent functions from the proper function space. Using the stationarity condition (all derivatives of the functional with respect to the coefficients equal to zero), I obtain an expression that is bilinear in the coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome with digitsum(z)=digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with fourier series being divergent on a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
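The secular-determinant procedure described in the variational question above can be made concrete with a small sketch (my own example, with an invented polynomial basis): approximating the lowest eigenvalue of $-u'' = \lambda u$ on $[0,1]$ with $u(0)=u(1)=0$, whose exact value is $\pi^2$. The stationary values of the functional come straight from the roots of $\det(H - \lambda S) = 0$, i.e. the generalized eigenvalues, with no need to solve for the coefficients first:

```python
import numpy as np

nb = 4  # number of basis functions phi_k = x^k (1 - x), k = 1..nb

# Gauss-Legendre nodes/weights mapped to [0, 1]; exact for these polynomial
# integrands.
xs, ws = np.polynomial.legendre.leggauss(20)
xs = 0.5 * (xs + 1.0)
ws = 0.5 * ws

def phi(k, x):
    return x**k * (1 - x)

def dphi(k, x):
    return k * x**(k - 1) * (1 - x) - x**k

ks = range(1, nb + 1)
# H_ij = int phi_i' phi_j' dx (the functional), S_ij = int phi_i phi_j dx.
H = np.array([[np.sum(ws * dphi(i, xs) * dphi(j, xs)) for j in ks] for i in ks])
S = np.array([[np.sum(ws * phi(i, xs) * phi(j, xs)) for j in ks] for i in ks])

# Roots of the secular determinant det(H - lam*S) = 0: the permissible
# approximation values of the functional, the lowest close to pi^2.
lams = np.sort(np.linalg.eigvals(np.linalg.solve(S, H)).real)
```

By the Rayleigh-Ritz principle the lowest root bounds the true eigenvalue from above, which is one way to see the "deeper principle" behind the construction.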
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
Strongly compact cardinal The strongly compact cardinals have their origins in the generalization of the compactness theorem of first-order logic to infinitary languages: an uncountable cardinal $\kappa$ is strongly compact if the infinitary logic $L_{\kappa,\kappa}$ exhibits the $\kappa$-compactness property. It turns out that this model-theoretic concept admits fruitful embedding characterizations, which, as with so many large cardinal notions, have become the focus of study. Strong compactness rarefies into a hierarchy: a cardinal $\kappa$ is strongly compact if and only if it is $\theta$-strongly compact for every ordinal $\theta\geq\kappa$. The strongly compact embedding characterizations are closely related to those of supercompact cardinals, which are characterized by elementary embeddings with a high degree of closure: $\kappa$ is $\theta$-supercompact if and only if there is an embedding $j:V\to M$ with critical point $\kappa$ such that $\theta<j(\kappa)$ and every subset of $M$ of size $\theta$ is an element of $M$. By weakening this closure requirement to insist only that $M$ contains a small cover for any subset of size $\theta$, or even just a small cover of the set $j''\theta$ itself, we arrive at the $\theta$-strongly compact cardinals. It follows that every $\theta$-supercompact cardinal is $\theta$-strongly compact, and so every supercompact cardinal is strongly compact. Furthermore, since every ultrapower embedding $j:V\to M$ with critical point $\kappa$ has $M^\kappa\subset M$, for $\theta$-strong compactness we may restrict our attention to the case $\kappa\leq\theta$.
Diverse characterizations

There are diverse equivalent characterizations of the strongly compact cardinals.

Strong compactness characterization

An uncountable cardinal $\kappa$ is strongly compact if every $\kappa$-satisfiable theory in the infinitary logic $L_{\kappa,\kappa}$ is satisfiable. The signature of an $L_{\kappa,\kappa}$ language consists, just as in the first-order context, of a set of finitary function, relation and constant symbols. The $L_{\kappa,\kappa}$ formulas, however, are built up in an infinitary process, by closing under infinitary conjunctions $\wedge_{\alpha<\delta}\varphi_\alpha$ and disjunctions $\vee_{\alpha<\delta}\varphi_\alpha$ of any size $\delta<\kappa$, as well as infinitary quantification $\exists\vec x$ and $\forall\vec x$ over blocks of variables $\vec x=\langle x_\alpha\mid\alpha<\delta\rangle$ of size less than $\kappa$. A theory in such a language is satisfiable if it has a model under the natural semantics. A theory is $\kappa$-satisfiable if every subtheory consisting of fewer than $\kappa$ many of its sentences is satisfiable. First-order logic is precisely $L_{\omega,\omega}$, and the classical compactness theorem asserts that every $\omega$-satisfiable $L_{\omega,\omega}$ theory is satisfiable. Similarly, an uncountable cardinal $\kappa$ is defined to be strongly compact if every $\kappa$-satisfiable $L_{\kappa,\kappa}$ theory is satisfiable (and we call this the $\kappa$-compactness property).
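To make the compactness property concrete, here is a standard illustration (not from the article above, but well known): a countable theory in the infinitary logic $L_{\omega_1,\omega}$ that is finitely satisfiable yet unsatisfiable, showing that the classical compactness theorem already fails once infinitary disjunctions are allowed.

```latex
% Signature: constants c and c_n for each n < \omega.
% The theory T consists of one infinitary disjunction plus countably
% many inequations:
T \;=\; \Big\{ \bigvee_{n<\omega} (c = c_n) \Big\}
   \;\cup\; \big\{\, c \neq c_n \;\big|\; n < \omega \,\big\}
% Every finite subtheory is satisfiable: interpret c as c_m for an m
% not mentioned among its finitely many inequations.
% T itself has no model: the disjunction forces c = c_n for some n,
% which some inequation forbids.
```

The $\kappa$-compactness property for a strongly compact $\kappa$ rules out the analogous failure for $\kappa$-satisfiable theories of $L_{\kappa,\kappa}$.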
The cardinal $\kappa$ is weakly compact, in contrast, if every $\kappa$-satisfiable $L_{\kappa,\kappa}$ theory, in a language having at most $\kappa$ many constant, function and relation symbols, is satisfiable.

Strong compactness embedding characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an elementary embedding $j:V\to M$ of the set-theoretic universe $V$ into a transitive class $M$ with critical point $\kappa$, such that $j''\theta\subseteq s$ for some set $s\in M$ with $|s|^M\lt j(\kappa)$. [1]

Cover property characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an ultrapower embedding $j:V\to M$, with critical point $\kappa$, that exhibits the $\theta$-strong compactness cover property, meaning that for every $t\subset M$ of size $\theta$ there is $s\in M$ with $t\subset s$ and $|s|^M<j(\kappa)$.

Fine measure characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a fine measure on $\mathcal{P}_\kappa(\theta)$. The notation $\mathcal{P}_\kappa(\theta)$ means $\{\sigma\subset\theta\mid |\sigma|<\kappa\}$. [1]

Filter extension characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if every $\kappa$-complete filter of size at most $\theta$ on a set extends to a $\kappa$-complete ultrafilter on that set. [1]

Discontinuous ultrapower characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if there is an ultrapower embedding $j:V\to M$ with critical point $\kappa$, such that $\sup j''\lambda<j(\lambda)$ for every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$. In other words, the embedding is discontinuous at all such $\lambda$.
Discontinuous embedding characterization

A cardinal $\kappa$ is $\theta$-strongly compact if and only if for every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$, there is an embedding $j:V\to M$ with critical point $\kappa$ and $\sup j''\lambda<j(\lambda)$.

Ketonen characterization

An uncountable regular cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a $\kappa$-complete uniform ultrafilter on every regular $\lambda$ with $\kappa\leq\lambda\leq\theta^{\lt\kappa}$. An ultrafilter $\mu$ on a cardinal $\lambda$ is uniform if all final segments $[\beta,\lambda)=\{\alpha<\lambda\mid \beta\leq\alpha\}$ are in $\mu$. When $\lambda$ is regular, this is equivalent to requiring that all elements of $\mu$ have the same cardinality.

Regular ultrafilter characterization

An uncountable cardinal $\kappa$ is $\theta$-strongly compact if and only if there is a $(\kappa,\theta)$-regular ultrafilter on some set. An ultrafilter $\mu$ is $(\kappa,\theta)$-regular if it is $\kappa$-complete and there is a family $\{X_\alpha\mid\alpha<\theta\}\subset \mu$ such that $\bigcap_{\alpha\in I}X_\alpha=\emptyset$ for any $I$ with $|I|=\kappa$.

Strongly compact cardinals and forcing

If there are proper-class-many strongly compact cardinals, then there is a generic model of $\text{ZF}$ + "all uncountable cardinals are singular". If each strongly compact cardinal is a limit of measurable cardinals, and if the limit of any sequence of strongly compact cardinals is singular, then there is a forcing extension $V[G]$ that is a symmetric model of $\text{ZF}$ + "all uncountable cardinals are singular" + "every uncountable cardinal is both almost Ramsey and a Rowbottom cardinal carrying a Rowbottom filter". This also follows directly from the existence of a proper class of supercompact cardinals, as every supercompact cardinal is simultaneously strongly compact and a limit of measurable cardinals.
Relation to other large cardinal notions

Strongly compact cardinals are measurable. The least strongly compact cardinal can be equal to the least measurable cardinal, or to the least supercompact cardinal, by results of Magidor. [2] (It cannot be equal to both at once, because the least measurable cardinal cannot be supercompact.) Even though strongly compact cardinals imply the consistency of the negation of the singular cardinal hypothesis (SCH), for any singular strong limit cardinal $\kappa$ above the least strongly compact cardinal, $2^\kappa=\kappa^+$ (that is, SCH holds above strong compactness). [2] If there is a strongly compact cardinal $\kappa$ then for all $\lambda\geq\kappa$ and $A\subseteq\lambda$, $\lambda^+$ is ineffable in $L[A]$. It is not currently known whether the existence of a strongly compact cardinal is equiconsistent with the existence of a supercompact cardinal. The Ultrapower Axiom gives a positive answer to this, but is itself not known to be consistent with the existence of a supercompact cardinal in the first place. Every strongly compact cardinal is strongly tall, although the existence of a strongly compact cardinal is equiconsistent with "the least measurable cardinal is the least strongly compact cardinal, and therefore the least strongly tall cardinal", so it could be the case that the least of the measurable, tall, strongly tall, and strongly compact cardinals all line up.

References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition)
2. Jech, Thomas J. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer-Verlag, Berlin, 2003.
The Navigation Algorithm

This section describes the navigation algorithm in detail.

Global Path Planner

A global planner problem in the Isaac framework is decomposed into three classes: the Planner Model, the Visibility Graph Algorithm, and the Optimizer.

Planner Model

A planner model (engine/gems/path_planner/PlannerModel.hpp) must provide the following: a set of functions that report whether or not a given state is collision free; information on whether or not a direct path (an easy path, such as a straight line) exists between two states and is collision free; the distance, or length, of the path; and a way to randomly sample in the space of states. In the case of the Carter platform, a differential base is approximated as circular, allowing fast collision detection using a distance map. A direct path is defined as a short path (< 2 m) in a straight line (as we can always rotate in the direction). The planning problem is therefore a 2D problem.

Visibility Graph Algorithm

Inspired by the paper of T. Siméon, J.-P. Laumond and C. Nissoux, "Visibility-based probabilistic roadmaps for motion planning", the visibility graph algorithm provides a very generic way to find a path in a high-dimensional space. The goal is to produce a small graph with high visibility coverage. The graph is built by keeping a set of points (called guards in the paper) that cannot be directly connected to each other. Connections are added whenever there is an intermediate state that directly connects two guards not yet connected by any path. The Isaac implementation also adds a connection between guards not connected by a path of size 2, using a single middle state. This produces a bigger but higher-quality graph. Once the graph is built, the shortest path can be computed by first finding a connection between the query states and a guard and then running Dijkstra's algorithm on the graph.
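The shortest-path step on the finished roadmap is plain Dijkstra. The sketch below (hypothetical graph and weights, not Isaac code) runs the search on a tiny guard/intermediate-state graph:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None, float("inf")
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Tiny roadmap: guards A, D connected through intermediate states B, C.
roadmap = {
    "A": [("B", 1.0), ("C", 1.5)],
    "B": [("A", 1.0), ("D", 2.0)],
    "C": [("A", 1.5), ("D", 0.5)],
    "D": [("B", 2.0), ("C", 0.5)],
}
path, cost = dijkstra(roadmap, "A", "D")
```

On this example the query returns the route through C at cost 2.0, beating the route through B at cost 3.0.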
The same graph can be precomputed, manually adjusted in case of difficult problems, and reused for other shortest-path requests as long as the environment is static. For better performance, build a dense graph by increasing the number of random samples.

Optimizer

The final stage is path optimization. The visibility graph produces a path quickly, but it is usually quite jagged. A better-quality path is then computed using a shortcut technique: two waypoints are selected randomly, and if a direct path between them exists, all waypoints in between are bypassed. In addition, waypoints too close to an obstacle are moved away from the closest obstacle.

Trajectory Planner

The local planner of Isaac is based on a Linear Quadratic Regulator (LQR). Isaac SDK provides a customizable LQR solver. The dynamics of the system as well as the cost function need to be provided to the LQR solver, which performs an iterative gradient descent using a line search to find the best path. In the case of the Carter platform the dynamics of the system are those of a differential base:

State:
\(x(t)\): X position of the base
\(y(t)\): Y position of the base
\(\theta(t)\): Orientation of the base
\(v(t)\): Linear velocity
\(\omega(t)\): Angular velocity

Control:
\(al(t)\): Left wheel angular acceleration
\(ar(t)\): Right wheel angular acceleration

The dynamics are then given by the formulas (L is the base length and R the wheel radius):
\(x'(t) = v(t) \cos( \theta(t) )\)
\(y'(t) = v(t) \sin( \theta(t) )\)
\(\theta'(t) = \omega(t)\)
\(v'(t) = (ar(t) + al(t)) \cdot R / 2\)
\(\omega'(t) = (ar(t) - al(t)) \cdot R / L\)

Here is an example of a path produced by Carter in the local view: The local map is computed in real time, as is the distance map. In red is the path provided by the global planner, and in blue is the plan produced by the LQR, optimizing speed, distance to obstacles, acceleration, and other factors.
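The differential-base dynamics can be integrated directly. The sketch below (made-up wheel radius, base length, and time step; not Isaac SDK code) uses the standard differential-drive relations, in which the sum of the wheel accelerations drives the linear velocity and the difference drives the rotation:

```python
import math

def step(state, al, ar, R=0.1, L=0.5, dt=0.01):
    """One explicit-Euler step of the differential-base dynamics.
    state = (x, y, theta, v, omega); al/ar are wheel angular accelerations.
    R is the wheel radius, L the base length; all values are made up."""
    x, y, theta, v, omega = state
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    v += (ar + al) * R / 2 * dt       # linear acceleration from the wheel sum
    omega += (ar - al) * R / L * dt   # angular acceleration from the difference
    return (x, y, theta, v, omega)

# Equal wheel accelerations: the base speeds up but keeps its heading.
s = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    s = step(s, al=1.0, ar=1.0)
```

With equal accelerations on both wheels, y and θ stay at zero while v grows, i.e. the base accelerates along a straight line.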
Though we have used the term string throughout to refer to a sequence of symbols from an alphabet, an alternative term that is frequently used is word. The analogy seems fairly obvious: strings are made up of “letters” from an alphabet, just as words are in human languages like English. In English, however, there are no particular rules specifying which sequences of letters can be used to form legal English words—even unlikely combinations like ghth and ckstr have their place. While some formal languages may simply be random collections of arbitrary strings, more interesting languages are those where the strings in the language all share some common structure: \(L_{1}=\left\{x \in\{a, b\}^{*} | n_{a}(x)\right\}\); \(L_{2}=\{\text { legal Java identifiers }\}\); \(L_{3}=\{\text { legal } \mathrm{C}++\text { programs }\}\). In all of these languages, there are structural rules which determine which sequences of symbols are in the language and which aren’t. So despite the terminology of “alphabet” and “word” in formal language theory, the concepts don’t necessarily match “alphabet” and “word” for human languages. A better parallel is to think of the alphabet in a formal language as corresponding to the words in a human language; the words in a formal language correspond to the sentences in a human language, as there are rules (grammar rules) which determine how they can legally be constructed. One way of describing the grammatical structure of the strings in a language is to use a mathematical formalism called a regular expression. A regular expression is a pattern that “matches” strings that have a particular form. For example, consider the language (over the alphabet \(\Sigma=\{a, b\}\)) \(L=\{x \mid x \text { starts and ends with } a\}\). What is the symbol-by-symbol structure of strings in this language? Well, they start with an a, followed by zero or more a’s or b’s or both, followed by an a.
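This structure translates directly into a practical membership test; the sketch below uses Python's `re` module syntax, writing `[ab]` for the textbook's \((a|b)\):

```python
import re

# The textbook pattern a·(a|b)*·a in Python's regex syntax.
pattern = re.compile(r"a[ab]*a")

def in_L(x):
    """True iff the whole string x matches, i.e. x starts and ends with 'a'."""
    return pattern.fullmatch(x) is not None
```

Note that, as written, the pattern requires at least two characters, so the one-character string "a" does not match even though it starts and ends with a.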
The regular expression \(a \cdot(a | b)^{*} \cdot a\) is a pattern that captures this structure and matches any string in \(L\) (\(\cdot\) and \(*\) have their usual meanings, and \(|\) designates or). Conversely, consider the regular expression \((a \cdot (a | b)^{*}) \mid ((a | b)^{*} \cdot a)\). This is a pattern that matches any string that either has the form “a followed by zero or more a’s or b’s or both” (i.e. any string that starts with an a) or has the form “zero or more a’s or b’s or both followed by an a” (i.e. any string that ends with an a). Thus the regular expression generates the language of all strings that start or end (or both) with an a: this is the set of strings that match the regular expression. Here are the formal definitions of a regular expression and the language generated by a regular expression:

Definition 3.2. Let \(\Sigma\) be an alphabet. Then the following patterns are regular expressions over \(\Sigma\):
1. \(\Phi\) and \(\varepsilon\) are regular expressions;
2. \(a\) is a regular expression, for each \(a \in \Sigma\);
3. if \(r_1\) and \(r_2\) are regular expressions, then so are \(r_1 | r_2\), \(r_1 \cdot r_2\), \(r_1^{*}\) and \((r_1)\) (and of course \(r_2^{*}\) and \((r_2)\)). As in concatenation of strings, the \(\cdot\) is often left out. (Note: the order of precedence of operators, from lowest to highest, is \(|\), \(\cdot\), \(*\).)
No other patterns are regular expressions.

Definition 3.3. The language generated by a regular expression \(r\), denoted \(L(r)\), is defined as follows:
1. \(L(\Phi)=\emptyset\), i.e. no strings match \(\Phi\);
2. \(L(\varepsilon)=\{\varepsilon\}\), i.e. \(\varepsilon\) matches only the empty string;
3. \(L(a)=\{a\}\), i.e. \(a\) matches only the string \(a\);
4. \(L\left(r_{1} | r_{2}\right)=L\left(r_{1}\right) \cup L\left(r_{2}\right)\), i.e. \(r_{1} | r_{2}\) matches strings that match \(r_{1}\) or \(r_{2}\) or both;
5. \(L\left(r_{1} r_{2}\right)=L\left(r_{1}\right) L\left(r_{2}\right)\), i.e.
\(r_{1} r_{2}\) matches strings of the form "something that matches \(r_{1}\) followed by something that matches \(r_{2}\)";
6. \(L\left(r_{1}^{*}\right)=\left(L\left(r_{1}\right)\right)^{*}\), i.e. \(r_{1}^{*}\) matches sequences of 0 or more strings, each of which matches \(r_{1}\);
7. \(L\left(\left(r_{1}\right)\right)=L\left(r_{1}\right)\), i.e. \(\left(r_{1}\right)\) matches exactly those strings matched by \(r_{1}\).

Example 3.7. Let \(\Sigma=\{a, b\}\) and consider the regular expression \(r = a^{*} b^{*}\). What is \(L(r)\)? Well, \(L(a)=\{a\}\), so \(L\left(a^{*}\right)=(L(a))^{*}=\{a\}^{*}\), and \(\{a\}^{*}\) is the set of all strings of zero or more a's, so \(L\left(a^{*}\right)=\{\varepsilon, a, a a, a a a, \ldots\}\). Similarly, \(L\left(b^{*}\right)=\{\varepsilon, b, b b, b b b, \ldots\}\). Since \(L\left(a^{*} b^{*}\right)=L\left(a^{*}\right) L\left(b^{*}\right)=\{x y \mid x \in L\left(a^{*}\right) \wedge y \in L\left(b^{*}\right)\}\), we have \(L\left(a^{*} b^{*}\right)=\{\varepsilon, a, b, a a, a b, b b, a a a, a a b, a b b, b b b, \dots\}\), which is the set of all strings of the form “zero or more a’s followed by zero or more b’s”.

Example 3.8. Let \(\Sigma=\{a, b\}\), and consider the regular expression \(r=(a|a a| a a a)(b b)^{*}\). Since \(L(a)=\{a\}\), \(L(a a)=L(a) L(a)=\{a a\}\). Similarly, \(L(a a a)=\{a a a\}\) and \(L(b b)=\{b b\}\). Now \(L(a|a a| a a a)=L(a) \cup L(a a) \cup L(a a a)=\{a, a a, a a a\}\), and \(L\left((b b)^{*}\right)=(L((b b)))^{*}=(L(b b))^{*}\) (the last equality is from clause 7 of Definition 3.3), and \((L(b b))^{*}=\{b b\}^{*}=\{\varepsilon, b b, b b b b, \ldots\}\). So \(L(r)\) is the set of strings formed by concatenating \(a\) or \(a a\) or \(a a a\) with zero or more pairs of \(b\)'s.

Definition 3.4. A language is regular if it is generated by a regular expression.
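Examples like 3.8 can be checked by brute force: enumerate all strings over \(\{a, b\}\) up to some length and keep the ones matching the pattern. A Python sketch:

```python
import re
from itertools import product

# Example 3.8's expression (a|aa|aaa)(bb)* in Python regex syntax.
r = re.compile(r"(a|aa|aaa)(bb)*")

# Enumerate every string over {a, b} of length at most 5 and keep
# those that match the whole pattern.
lang = [
    "".join(w)
    for n in range(6)
    for w in product("ab", repeat=n)
    if r.fullmatch("".join(w))
]
```

Up to length 5 this yields exactly a, aa, aaa, abb, aabb, aaabb, and abbbb, agreeing with the description "a or aa or aaa followed by zero or more pairs of b's".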
Clearly the union of two regular languages is regular; likewise, the concatenation of regular languages is regular; and the Kleene closure of a regular language is regular. It is less clear whether the intersection of regular languages is always regular; nor is it clear whether the complement of a regular language is guaranteed to be regular. These are questions that will be taken up in Section 3.6. Regular languages, then, are languages whose strings’ structure can be described in a very formal, mathematical way. The fact that a language can be “mechanically” described or generated means that we are likely to be able to get a computer to recognize strings in that language. We will pursue the question of mechanical language recognition in Section 3.4, and subsequently will see that our first attempt to model mechanical language recognition does in fact produce a family of “machines” that recognize exactly the regular languages. But first, in the next section, we will look at some practical applications of regular expressions.

Exercises

1. Give English-language descriptions of the languages generated by the following regular expressions.
a) \((a | b)^{*}\) b) \(a^{*} | b^{*}\) c) \(b^{*}\left(a b^{*} a b^{*}\right)^{*}\) d) \(b^{*}\left(a b^{*}\right)\)
2. Give regular expressions over \(\Sigma=\{a, b\}\) that generate the following languages.
a) \(L_{1}=\left\{x \mid x \text { contains } 3 \text { consecutive } a\text{'s}\right\}\)
b) \(L_{2}=\{x \mid x \text { has even length }\}\)
c) \(L_{3}=\left\{x \mid n_{b}(x)=2 \bmod 3\right\}\)
d) \(L_{4}=\{x \mid x \text { contains the substring } aaba\}\)
e) \(L_{5}=\left\{x \mid n_{b}(x)<2\right\}\)
f) \(L_{6}=\{x \mid x \text { doesn't end in } aa\}\)
3. Prove that all finite languages are regular.
Limit Ordinals Closed under Ordinal Exponentiation

Theorem

Let $x$ and $y$ be ordinals. Let $y$ be a limit ordinal. Let $x^y$ denote ordinal exponentiation. Then:
If $x > 1$, then $x^y$ is a limit ordinal.
If $x \ne \varnothing$, then $y^x$ is a limit ordinal.

Proof

Suppose $x > 1$. By the definition of ordinal exponentiation for a limit exponent:
$\displaystyle x^y = \bigcup_{z \mathop \in y} x^z$
Then for any $w$:
\(\displaystyle w\) \(\in\) \(\displaystyle x^y\)
\(\displaystyle \leadsto \exists z \in y: \ \ \) \(\displaystyle w\) \(\in\) \(\displaystyle x^z\) by the definition of ordinal exponentiation
\(\displaystyle \leadsto\) \(\displaystyle w^+\) \(\subseteq\) \(\displaystyle x^z\) by Successor of Element of Ordinal is Subset
\(\displaystyle \leadsto\) \(\displaystyle w^+\) \(\in\) \(\displaystyle x^{z^+}\) by Membership is Left Compatible with Ordinal Exponentiation
But $z^+ \in y$ by Successor in Limit Ordinal. So $w^+ \in x^y$. Since every element of $x^y$ has its successor in $x^y$, $x^y$ is a limit ordinal. $\Box$

Now suppose $x \ne \varnothing$. If $x$ is a limit ordinal, then $y^x$ is a limit ordinal by the first part, since $y$, being a limit ordinal, satisfies $y > 1$. If $x$ is the successor of some ordinal $z$, then:
$y^x = y^z \times y$
by the definition of ordinal exponentiation. Then $y^x$ is a limit ordinal by Limit Ordinals Preserved Under Ordinal Multiplication. $\blacksquare$
Let's say we have a mass on a spring being driven by a forcing function. Given Hooke's law, $F = -kx$, and a forcing function of $$F(t) = F_0\sin(\omega t),$$ we can write: $$ m\frac{d^2x}{dt^2} = -kx + F_0\sin(\omega t) $$ All the physics resources I've come across assume that the motion of the spring follows the applied force, and present the solution as some form of: $$ x = C\sin(\omega t) $$ They typically then go on to substitute $x$ into the differential equation and obtain: $$ C = \frac{F_0}{m(\omega_0^2-\omega^2)} $$ This is a pretty cool formula. I really wanted to understand why, if $\omega>\omega_0$, $C$ becomes negative and our motion is exactly out of phase with our force. This was not intuitive to me, and in an effort to better understand it, I decided to run a numerical analysis. I started my mass initially at rest at position zero, and to my horror, my numerical analysis yielded these results: Clearly the motion of the mass cannot be described by a single sine! What's going on here? After pulling my hair out a bit, I realized that my numerical analysis was in fact correct, and it was the analytical solution that was lacking. The full solution to our equation of motion is: $$ x(t) = A\sin(\omega_0 t) + B\cos(\omega_0 t) + \frac{F_0 \sin(\omega t)}{m(\omega_0^2-\omega^2)} $$ And when we set up our initial conditions correctly, this analytical solution agrees with the numerical solution! Our earlier solution is a special case of this solution, but the initial conditions must be set to very specific values for this to happen. So my question is: what the heck is going on here? Is this solution just not important, or not relevant? It seems to me that unless the initial conditions are exactly right, you won't even see the behavior shown in most physics resources. In real applications, does the solution taught typically just not happen, or am I missing something?
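The disagreement is easy to reproduce. The sketch below (with made-up values for $m$, $k$, $F_0$, $\omega$) integrates the equation of motion with RK4 starting from rest and compares it against the full analytical solution with $B=0$ and $A=-C\omega/\omega_0$, the values forced by $x(0)=x'(0)=0$:

```python
import math

m, k = 1.0, 4.0            # mass and spring constant (made-up values)
w0 = math.sqrt(k / m)      # natural frequency, here 2.0
w, F0 = 3.0, 1.0           # driving frequency and amplitude (made up)
C = F0 / (m * (w0**2 - w**2))

def x_analytic(t):
    # Full solution with x(0) = 0, x'(0) = 0: B = 0, and A is chosen so the
    # homogeneous part cancels the particular solution's initial velocity.
    A = -C * w / w0
    return A * math.sin(w0 * t) + C * math.sin(w * t)

def rk4(t_end, dt=1e-3):
    """RK4 integration of m x'' = -k x + F0 sin(w t), starting from rest."""
    def f(t, x, v):
        return v, (-k * x + F0 * math.sin(w * t)) / m
    n = int(round(t_end / dt))
    x, v = 0.0, 0.0
    for i in range(n):
        t = i * dt
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = f(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = f(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return x
```

The numerical and analytical trajectories agree, and the pure single-sine motion appears only for the special start $x(0)=0$, $x'(0)=C\omega$ (or after the transient is killed by damping, which this undamped model omits).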
As an alternative to General Relativity, I hear that it can avoid the initial big bang singularity as well as the singularities in black holes, so why does it appear to be talked about so little? If anyone can enlighten me a bit on its successes and shortcomings I would be very grateful.

One main reason why the ECKS theory (Einstein-Cartan-Kibble-Sciama) is so unpopular even today is its extreme mathematical complexity. The connection isn't symmetric anymore, so it changes a lot of things in the mathematical formalism (for example, covariant derivatives applied to a simple scalar field don't commute anymore):\begin{equation}\tag{1}\Gamma_{\mu \nu}^{\lambda} \ne \Gamma_{\nu \mu}^{\lambda}.\end{equation}The non-symmetric connection implies the existence of a tensor field called torsion, which is a three-index tensor field:\begin{equation}\tag{2}T_{\mu \nu}^{\lambda} \equiv \Gamma_{\mu \nu}^{\lambda} -\Gamma_{\nu \mu}^{\lambda}.\end{equation}This tensor field could be extracted everywhere and added to the Lagrangian as an extra field, so most people prefer to use standard GR (with a symmetric connection and standard formalism) and simply add the new tensor field by hand in the Lagrangian. You may explore the Wikipedia page, but beware: it has a few subtle mistakes.

PROS: It's a natural extension of General Relativity. GR feels more "complete" with spacetime torsion.

CONS: It's terribly more complicated than GR without torsion. Torsion occurs only inside matter, in cases of extremely high density states, so it's very hard to test in the lab! Propagating torsion outside matter is probably too weak to be observable at all (if it could propagate outside at all!). Another con I know of: the Dirac equation becomes nonlinear, and therefore the superposition principle used for quantisation doesn't work anymore. But it should be mentioned that the difference in predictions is so small compared to GR that nowadays we aren't able to measure which one is "correct".
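The non-commuting claim can be made explicit for a scalar field $\phi$ (a short derivation; the overall sign depends on conventions). Since $\nabla_\mu \nabla_\nu \phi = \partial_\mu \partial_\nu \phi - \Gamma_{\mu\nu}^{\lambda}\partial_\lambda\phi$ and partial derivatives commute,

```latex
\begin{equation}\tag{3}
[\nabla_{\mu}, \nabla_{\nu}]\,\phi
  = -\left(\Gamma_{\mu\nu}^{\lambda}-\Gamma_{\nu\mu}^{\lambda}\right)\partial_{\lambda}\phi
  = -T_{\mu\nu}^{\lambda}\,\partial_{\lambda}\phi ,
\end{equation}
```

so when the torsion of equation $(2)$ is nonzero, even a scalar detects the antisymmetric part of the connection, whereas in torsion-free GR this commutator vanishes identically.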
Use of rich ratio and algebraic concepts leads us to the solution in no time.

Usually profit and loss problems are not difficult; variations are few, and with a bit of grounding in the concepts you can solve any problem of this category easily. That's why this one surprised us. At first glance there seemed to be too many variables and the problem seemed unsolvable. But when we looked deeper we discovered the clues to an elegant solution one by one. Not surprisingly, the crucial new concept that provided the breakthrough belongs to the category of rich algebraic concepts as well as rich ratio concepts, both apparently different from the profit and loss topic. This again shows how, at the basic level, the concepts of basic topic areas can be intertwined with each other. We will now see how.

Problem

A trader bought a number of pens at the rate of 5 per Rs. $P$, and on a second occasion he purchased the same number of pens at the rate of 4 per Rs. $P$. Then he mixed the two lots together and sold the pens at the rate of 9 per Rs. $Q$, suffering thus a loss of Rs. $R$. If $P : Q : R = 1 : 2 : 3$, find how many pens the trader bought.
540
545
1080
1090

Problem analysis and first stage simplification using basic ratio concept

At first glance the unknown variables seem too many for any feasible solution. But since the answer choices are simple numbers, we know for sure that all except the single desired variable will be eliminated, and the breakthrough technique lies within the problem. When we examine the problem more closely, we notice the ratio and immediately find the way to reduce the number of unknown variables from the three of $P$, $Q$ and $R$ to one, the HCF of the three.
We then transform the ratio terms to their actual values by introducing their HCF as a product factor in the form of an unknown variable $x$. Thus the ratio is transformed as, $P : Q : R = x : 2x : 3x$, where $x$ is the HCF of the three quantities $P$, $Q$ and $R$, and the actual values of the variables are, $P=x$, $Q=2x$ and $R=3x$. This is a direct application of the most basic concept of ratios.

Rich algebraic ratio concept of Ratio variable elimination property

We state the new concept as,

Ratio variable elimination property: When a number of variables appear in a problem related with each other as a numeric ratio, the ratio variable quantities will be eliminated from consideration if all the variables appear in a linear equation together.

This is primarily a rich algebraic concept of the rich ratio concept variety. It is a powerful hybrid concept. In our case till this point, in transforming the three price/cost variables $P$, $Q$ and $R$ into $x$, $2x$ and $3x$ as their actual values, we have reduced the number of variables from 3 to 1. This is a significant simplification. But we also foresee that these three variables will certainly appear together in a future linear equation, as we know that in basic profit and loss all relations are of linear form. Here we are concerned with one such relation, $CP - SP = Loss$, a perfectly linear equation.

Implementation of first stage simplification

After converting the three price/cost ratio variables to their actual values, we face two choices of paths for further progress: to proceed with per pen purchase cost, per pen sale price and per pen loss, or with total purchase cost, total sale price and total loss. We found it possible to choose the per pen path, as the purchase quantities on both occasions are equal, thus simplifying the computations to an extent. If we had taken the conventional total price/cost calculation path, the deductive load would have increased. Let us see how.
Conventional total cost/price deduction

Let the number of pens purchased on each occasion be $N$. The total cost price on the first occasion is then, $CP_1 = \displaystyle\frac{NP}{5}=\frac{Nx}{5}$. And the total cost price on the second occasion is, $CP_2 = \displaystyle\frac{NP}{4}=\frac{Nx}{4}$. So the grand total of the cost price is, $CP =Nx\left(\displaystyle\frac{1}{5} + \displaystyle\frac{1}{4}\right)=\displaystyle\frac{9Nx}{20}$. Similarly the total sale price is, $SP = 2N\displaystyle\frac{Q}{9} = \displaystyle\frac{4Nx}{9}$. Finally the loss is, $Loss = R = 3x = CP - SP$ $=\displaystyle\frac{9Nx}{20} - \displaystyle\frac{4Nx}{9}$ $=\displaystyle\frac{Nx}{180}$. Eliminating $x$, $N =3\times{180} = 540$, so the total number of pens purchased was, $2N = 1080$, our answer as option c. In this path of solution we had to carry the variable $N$ through a few steps. In the elegant solution instead we will take up the per pen price/cost path and bring in $N$ at the last possible step.

Elegant solution along the per pen deduction path

In the first purchase the per pen cost is, $CPP_1 = \displaystyle\frac{P}{5} = \frac{x}{5}$. In the second purchase the per pen cost is, $CPP_2 = \displaystyle\frac{P}{4} = \frac{x}{4}$. The number of pens purchased on both occasions being the same, the average per pen cost after mixing the lots together is, $CPP = \displaystyle\frac{1}{2}\left(\displaystyle\frac{x}{5} + \displaystyle\frac{x}{4}\right) = \displaystyle\frac{9x}{40}$. Similarly the per pen sale price of the mixed lot of pens is, $SPP = \displaystyle\frac{Q}{9} = \frac{2x}{9}$. Bringing in the number of pens purchased each time as $N$, we now have the per pen loss as, $\text{Per pen loss } = \displaystyle\frac{R}{2N} = \displaystyle\frac{3x}{2N}$ $=CPP - SPP$ $=\displaystyle\frac{9x}{40} - \displaystyle\frac{2x}{9}$ $=\displaystyle\frac{x}{360}$. Eliminating $x$ we get, $N = 360\times{\displaystyle\frac{3}{2}}=540$. The total number of pens purchased is then, $2N = 1080$.
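Both deduction paths can be verified with exact rational arithmetic; the sketch below picks an arbitrary (hypothetical) value for the HCF $x$ and confirms that it cancels out of the answer:

```python
from fractions import Fraction

# Hypothetical concrete HCF; with P : Q : R = 1 : 2 : 3 the answer must
# come out independent of this choice, which is exactly the point of the
# ratio variable elimination property.
x = Fraction(7)
P, Q, R = x, 2 * x, 3 * x

# Cost of N pens at 5 per Rs. P plus N pens at 4 per Rs. P, minus the
# sale of all 2N pens at 9 per Rs. Q, equals the loss R:
#   R = N*(P/5 + P/4) - 2*N*(Q/9) = N * loss_per_N, so solve for N.
loss_per_N = P / 5 + P / 4 - 2 * Q / 9
N = R / loss_per_N
total_pens = 2 * N
```

Changing `x` to any other positive rational leaves `N = 540` and `total_pens = 1080` unchanged, which is the ratio variable elimination property in action.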
Answer: c: 1080.

Summarization

First we simplified the complexity of the problem by introducing the HCF $x$ as a factor for each of the ratio variables to get their actual values in terms of the single variable $x$, thus reducing the number of these variables from 3 to 1. Second, avoiding the conventional path of dealing with the total values of cost price, sale price and loss, we took up the per item values, thus simplifying the deduction. This was possible because the same number of items was purchased on both occasions, so we could work with the average per item purchase cost. In the third stage, using the basic relation of profit or loss, we formed the linear equation in per item cost, per item sale price and per item loss, bringing in the number of items purchased at the last possible stage. Because of the linear relationship, the HCF variable $x$ was eliminated, leaving only the desired unknown variable, the number of items purchased on each occasion. This we named the ratio variable elimination technique, which states (we repeat),

Ratio variable elimination rich concept: When a number of variables appear in a problem related with each other as a numeric ratio, the ratio variable quantities will be eliminated from consideration if all the variables appear in a linear equation together.

In our problem, this powerful hybrid algebraic and ratio rich concept could be applied because basic profit and loss relations are linear.

Inherent abstraction and further exploration

This is an example of direct application of abstraction of simpler problems of this type, where the cost value, sale value and loss value appear as simple numeric terms such as 1, 2, 3 or 100, 200, 300, but always in the ratio of 1 : 2 : 3, giving us the idea that the actual values of these variables are not important at all; only their relative values are significant. Thus it could be imagined that these three values can even be algebraic variables, but still in a ratio of 1 : 2 : 3.
You can tune the ratio values or the rates of purchase or sale to see how the profit or loss behaves. You may ask us: why so much trouble? Our answer will be: with this you will gain an explorative, experimentative approach, arriving at total control of the profit and loss concept domain for quick, elegant solution of any problem in the domain.
Yes, the signal is perfectly reconstructed. Consider the process at each stage as I show using the block diagram below. Consider each sample of the signal at each node in the diagram (each sample is shown using the sample index at the node for each row). (Note: you see the same form of reconstruction in the FFT algorithm.) I will attempt to illustrate how ... I will answer question 2 first, and hopefully that will help explain what is going on with question 1. When you sample a baseband signal, there are implicit aliases of the baseband signal at all integer multiples of the sampling frequency, as shown in the picture below. The solid image is the original baseband signal, and the aliases are represented by the ... Both taking a magnitude spectrogram and applying a Mel filter bank are lossy processes. Important information needed to reconstruct the original will have been lost. Thus you need to go back and use the original audio samples to do the reconstruction, by determining a time- or frequency-domain filter equivalent to your dimensionality reduction. You can make ... The mathematical definition of orthogonality between two vectors is that their dot product is zero. It just means that there is no correlation between the two, at least at that "phase". It is often the case that if you shifted one of the vectors you would get strong correlation. Infinite vectors of different frequencies are always orthogonal, so in an ... Your work is correct. First, just think about how causal digital filters produce an output as a function of the current and previous inputs. Right? Now think about the case where a 'filter' only produces an output as a function of the current input (i.e., not influenced by the previous inputs). We don't typically classify these as filters; instead we ... First, the P. P. Vaidyanathan condition is a sufficient one, not a necessary one. The upper part keeps every even sample.
The lower part converts odds to evens, keeps every (novel) even, and puts the (novel) evens back in their old place. Hence, the delays $z^{-1}$ and $z^{+1}$ exactly interleave the kept evens (top) and odds (bottom). From P. P. ... @jodag, you have not implemented any kind of filter. You implemented a top-path system in parallel with a bottom-path system. In the diagram below, the impulse response of the top path is shown with blue dots, and the impulse response of the bottom path is shown with red asterisks. The combined parallel system's impulse response is the sum of the impulse ... If we stick to the linear and discrete versions of filter banks and wavelets, filter banks represent the generic tool, and wavelets can be implemented as a specific instance of iterated $2$-band filter banks satisfying some additional properties, namely that low-pass spaces are embedded dyadically. In other words: get a single-level $2$-band ... It looks like you're doing everything correctly. For an $M$-channel DFT filter bank, the impulse response of the $m^{th}$ filter is given by$$h_m[n]=h_0[n]e^{j\frac{2\pi mn}{M}},\qquad 0\le m<M,\tag{1}$$where $h_0[n]$ is the prototype filter. For $M=4$, $h_3[n]$ is indeed centered at $\omega=3\pi /2$. Note that due to the $2\pi$-periodicity of discrete-... One advantage of a polyphase filterbank approach is, as you guessed, that you can control the frequency response of each channel. When using a DFT alone, you have limited control over the frequency band covered by each bin (characterized by a Dirichlet kernel in the unwindowed case, or by the frequency response of whichever window function you select). This ... Since the term linear does not appear in the question and the current answers, let me offer a complementary perspective. A kernel in this acceptation (especially for images, which don't always follow linear rules; think about occlusion or saturation) is an array that is applied, somehow, to any input data.
One often distinguishes linear and non-linear ... Orthogonality provides an interesting backbone to the structure of filter banks (FB). First, from an analysis FB, the synthesis FB is very direct, so it can ease implementations. Second, orthogonality often allows faster implementations, as there is "little redundancy" in computation. Third, orthogonality ensures that matrices are well-conditioned, ... Unlike the convolution, the lifting scheme can: map integers to integers (the CDF 5/3 transform in the case of JPEG 2000); asymptotically reduce the computational complexity by a factor of two; be computed in-place; and simply treat signal boundaries (periodic symmetric extension in the case of JPEG 2000). It's input agnostic; anything will work just as it would with any other real-valued prototype filter. I've implemented polyphase filter banks like this on radar systems in practice, where we're operating on complex data, both pulse-compressed and uncompressed. Filter banks like these have loads of applications due to their inherent design and theoretical speed. ... A windowed FFT with a length that corresponds to 2/100ths of a second will give you overlapping filters, each with roughly 100 Hz bandwidth. Just use every other FFT result bin if you want 100 Hz filter spacing. The window shape (Von Hann, Nuttall, etc.) will allow some tuning of the filter shape, but the main passband lobe of a windowed FFT will be about ... If you have a 1D uniform $M$-channel multirate filter bank (FB), and $N$ is the decimation ($M \ge N$), rational oversampling ratios ($M/N$) are indeed possible (see Fig. 2), or from Figure 1 in Optimization of Synthesis Oversampled Complex Filter Banks, J. Gautier et al., IEEE Trans. Signal Processing, 2009 (doi), which provides optimized synthesis FB ... However, it seems rather inefficient to implement two separate filters.
Would it not be more efficient to implement a single filter, and then subtract its output from the original signal to determine the other signal? That is in fact a very common technique, and as Olli pointed out, it can only work if you make sure that the HPF (LPF) version is subtracted ... You are right about the importance of the phase. Subtraction in the time domain is equivalent to subtraction in the frequency domain. If, at some frequency, the frequency-domain phase of the complex number being subtracted differs from that of the complex number it is subtracted from, then even if their magnitudes are the same, the result will be non-zero: Figure 1. ... This is a problem from Multirate Systems and Filter Banks by P. P. Vaidyanathan. In Section 4.6.2.E, there is a discussion of what he calls Euclidean Complementary Functions. Specifically, he discusses one of the results of Euclid's theorem: ... if $H_0(z)$ and $H_1(z)$ are relatively prime, there exist polynomials $F_0(z)$ and $F_1(z)$ such that$$H_0(... Using different filter lengths for different channels is basically what the Wavelet Transform does. I think it basically addresses this issue. Wavelet transforms divide the spectrum up into spectral bins of unequal width. There are efficient options for computing the Wavelet Transform (usually a tree-structured filter bank approach). In the linked Wikipedia ... I have tested your code. There are problems with the decimation and upsample stages. When you run the analysis and synthesis filters at the undecimated rate, you get perfect reconstruction (plus a filter shift). First of all, for applications which are sensitive to group delay, you should use the conv / filter function and then ... Typically what you'll see in an application like this is a polyphase filter bank, which is designed with a single low-pass filter prototype.
This keeps your prototype filter real, and they can be quite efficient in comparison to directly designing an entire bank of individual bandpass filters. The short of it is that this is a multi-rate technique which aims to ... A filter bank really is just what it says: a bank of filters, each of which gets applied to the signal. So, one signal in (signal = image), 3 signals out. You apply each of the kernels separately and don't combine anything. The notes made me wonder as well, and I cannot suspect the author made a mistake. The first graph with periodicity is generic. The second one corresponds to dyadic iteration (wavelets), often displayed in $[0,\pi]$, except when dealing with complex wavelets, where $[-\pi,\pi]$ is used. I agree, $[0,2\pi]$ could have been used instead, but I have never seen ... What many people do not seem to realize is that using a finite-length transform (such as one iteration of an STFT FFT) on a longer signal already alters the longer signal by windowing (i.e., ignoring the portions of the signal that do not fit in the current finite-length FFT). Not using a tapered window means that one is using a rectangular window. Using a ... The factor of 1/2 comes from the fact that you are down-sampling. The down-sampling property of the z-transform states that given:$$ y[n] = x[nL] $$then:$$ Y(z) = \frac{1}{L} \sum_{r=0}^{L-1}X(z^{1/L}e^{-j2{\pi}r/L})$$In your case $L = 2$, so $r$ indexes from 0 to 1, and then:$$ Y(z) = \frac{1}{2} \sum_{r=0}^{1}X(z^{1/2}e^{-j{\pi}r})$$To be more ... You are mixing up the convolution theorem: you cannot multiply the impulse responses, but only their spectra, if you want to go this way. However, a simpler way (especially when you have Matlab available and are looking for a numeric solution) is to calculate the overall impulse response. For example, considering the filter $x[n] \rightarrow x_1[n]$ you have: Let $x[n]=\...
Any spectrum that lies between two DFT bin centers ends up being represented, in various proportions, in all the DFT bins (in the rectangular window case, the proportions decay as in a sinc function, roughly by 1/(i-k)). Those proportions might be needed for your synthesis to be an accurate enough reconstruction. Using 2 bins alone usually creates a ... First, I suggest you take a look at the STFT. Let's say you use an $N$-point FFT; then your filterbank has $N/2$ filters (assuming a real signal). Usually, you divide your signal into $K$ segments, each $N$ samples long (they could be overlapping) and apply the FFT to each segment. Every time you apply the FFT you get a single sample for each filter. Therefore your ...
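The down-sampling factor of $1/L$ discussed above has a direct finite-length counterpart: the $M$-point DFT of $y[n]=x[2n]$ is the average of the two aliased halves of the $2M$-point DFT of $x[n]$. A quick pure-Python sanity check (the signal values are arbitrary):

```python
import cmath

def dft(x):
    """Naive DFT, enough for a sanity check."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [3.0, 1.0, -2.0, 5.0, 0.5, -1.0, 4.0, 2.0]   # arbitrary test signal
X = dft(x)          # 8-point DFT of x
Y = dft(x[::2])     # 4-point DFT of the downsampled signal y[n] = x[2n]

# downsampling by L = 2 averages the two aliased copies of the spectrum,
# which is where the factor of 1/2 comes from
for k in range(4):
    assert abs(Y[k] - (X[k] + X[k + 4]) / 2) < 1e-9
```
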
This question is a duplicate of an existing MO question, but that other MO question has an accepted answer that does not actually answer the question, and I'm not sure how to fix that other than by re-asking the question. On page 76 of Serre's book A Course in Arithmetic, he writes: [T]here exist sets having an analytic density but no natural density. It is the case, for example, of the set $P^1$ of prime numbers whose first digit (in the decimal system, say) is equal to 1. One sees easily, using the prime number theorem, that $P^1$ does not have a natural density and on the other hand Bombieri has shown me a proof that the analytic density of $P^1$ exists (it is equal to $\log_{10}2 = 0.301029995\ldots$). There is a slight misstatement here because literally speaking, $P^1$ has natural density zero, but clearly the intent is to speak of the relative density of $P^1$ inside the set $P$ of all primes. In other words, Bombieri's result is a kind of "Benford's law for primes":$$\lim_{s\to1} {\sum_{m\in P_1} m^{-s} \over \sum_{p\in P} p^{-s}} = \log_{10}2.$$ My question is, how does one prove that the above limit (which goes by various names—relative analytic density, relative Dirichlet density, relative zeta density) exists? Serre does not say anything about this. The accepted answer to the duplicate MO question cites two papers, one by Cohen and Katz, and one by Raimi. But the paper by Cohen and Katz simply restates what Serre says without giving a proof. The paper by Raimi cites a paper by R. E. Whitney (Initial digits for the sequence of primes, Amer. Math. Monthly 79 (1972), 150–152) but Whitney's paper considers logarithmic density rather than Dirichlet density:$$\lim_{N\to\infty} {\sum_{m\in P_1, m\le N} 1/m \over \sum_{p\in P, p\le N} 1/p}.$$It's not clear to me that this implies Bombieri's result. In a comment to a now-deleted MO question, KConrad suggested looking in the book Prime Numbers by Ellison and Ellison, but I did not find the answer there either.
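For what it's worth, the logarithmic-density version considered by Whitney can at least be probed numerically. The convergence is very slow: the single-digit primes 2, 3, 5, 7 contribute a large share of $\sum_{p\le N} 1/p$ and none of them has leading digit 1, so even at $N=10^6$ the partial ratio is still noticeably below $\log_{10}2\approx 0.30103$. A quick sieve-based sketch (not a proof of anything, of course):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

primes = primes_up_to(10 ** 6)
num = sum(1.0 / p for p in primes if str(p)[0] == "1")
den = sum(1.0 / p for p in primes)
print(num / den)  # partial logarithmic-density ratio; still well below 0.30103
```
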
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Article Keywords: signed distance-$k$-domination number; signed distance-$k$-dominating function; signed domination number Summary: The signed distance-$k$-domination number of a graph is a certain variant of the signed domination number. If $v$ is a vertex of a graph $G$, the open $k$-neighborhood of $v$, denoted by $N_k(v)$, is the set $N_k(v)=\lbrace u\mid u\ne v$ and $d(u,v)\le k\rbrace $. $N_k[v]=N_k(v)\cup \lbrace v\rbrace $ is the closed $k$-neighborhood of $v$. A function $f\: V\rightarrow \lbrace -1,1\rbrace $ is a signed distance-$k$-dominating function of $G$ if, for every vertex $v\in V$, $f(N_k[v])=\sum _{u\in N_k[v]}f(u)\ge 1$. The signed distance-$k$-domination number, denoted by $\gamma _{k,s}(G)$, is the minimum weight of a signed distance-$k$-dominating function on $G$. The values of $\gamma _{2,s}(G)$ are found for graphs with small diameter, for paths, and for circuits. At the end it is proved that $\gamma _{2,s}(T)$ is not bounded from below in general for any tree $T$. References: [1] J. H. Hattingh, M. A. Henning, and E. Ungerer: Partial signed domination in graphs. Ars Combin. 48 (1998), 33–42. MR 1623038 [2] T. W. Haynes, S. T. Hedetniemi, and P. J. Slater: Fundamentals of Domination in Graphs. Marcel Dekker, New York, 1998. MR 1605684 [3] T. W. Haynes, S. T. Hedetniemi, and P. J. Slater: Domination in Graphs: Advanced Topics. Marcel Dekker, New York, 1998. MR 1605685
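As a concrete illustration (not from the paper), the definition can be checked by brute force on a small graph. The following sketch enumerates all $\pm 1$ labellings of the path $P_5$ and computes $\gamma_{2,s}(P_5)$:

```python
from itertools import product

def gamma_2s(adj):
    """Brute-force signed distance-2-domination number of a small graph."""
    n = len(adj)
    # closed 2-neighborhoods: v itself plus every u with d(u, v) <= 2
    nbhd = []
    for v in range(n):
        close = {v} | set(adj[v])
        close |= {w for u in adj[v] for w in adj[u]}
        nbhd.append(close)
    best = None
    for f in product((-1, 1), repeat=n):
        if all(sum(f[u] for u in nbhd[v]) >= 1 for v in range(n)):
            w = sum(f)
            best = w if best is None else min(best, w)
    return best

# path P5 with vertices 0-1-2-3-4, given by adjacency lists
path5 = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(gamma_2s(path5))  # 1, attained e.g. by f = (-1, +1, +1, +1, -1)
```
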
Example: The points (x, y) satisfying x² + y² = r² are colored blue. The points (x, y) satisfying x² + y² < r² are colored red. The red points form an open set. The blue points form a boundary set. The union of the red and blue points is a closed set. In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The simplest example is in metric spaces, where open sets can be defined as those sets which contain an open ball around each of their points (or, equivalently, a set is open if it doesn't contain any of its boundary points); however, an open set, in general, can be very abstract: any collection of sets can be called open, as long as the union of an arbitrary number of open sets is open, the intersection of a finite number of open sets is open, and the space itself is open. These conditions are very loose, and they allow enormous flexibility in the choice of open sets. In the two extremes, every set can be open (called the discrete topology), or no set can be open but the space itself (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of open sets for a space is called a topology. Topologies need not reflect geometric nearness; examples include the Zariski topology in algebraic geometry, which reflects the algebraic nature of varieties, and the topology on a differentiable manifold in differential topology, in which each point of the space is contained in an open set that is homeomorphic to an open ball in a finite-dimensional Euclidean space.
Motivation

Intuitively, an open set provides a method to distinguish two points. For example, if about one point in a topological space there exists an open set not containing another (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are "near" without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces.

In the set of all real numbers, one has the natural Euclidean metric; that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number; that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); that is, the set of all real numbers between −1 and 1. However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1.

The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0.
Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x. This innovative idea has far-reaching consequences; in particular, by defining different collections of sets containing 0 (distinct from the sets (−ε, ε)), one may find different results regarding the distance between 0 and other real numbers. For example, if we were to define R as the only such set for "measuring distance", all points are close to 0, since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0! It may help in this case to think of the measure as a binary condition: all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set. In fact, one may generalize these notions to an arbitrary set X, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" (that is, containing) x, used to approximate x. Of course, this collection would have to satisfy certain properties (known as axioms), for otherwise we may not have a well-defined method to measure distance. For example, every point in X should approximate x to some degree of accuracy. Thus X should be in this family. Once we begin to define "smaller" sets containing x, we tend to approximate x to a greater degree of accuracy. Bearing this in mind, one may define the remaining axioms that the family of sets about x is required to satisfy.
Definitions

The concept of open sets can be formalized with various degrees of generality, for example:

Euclidean space

A subset U of the Euclidean n-space Rⁿ is called open if, given any point x in U, there exists a real number ε > 0 such that, given any point y in Rⁿ whose Euclidean distance from x is smaller than ε, y also belongs to U. [1] Equivalently, a subset U of Rⁿ is open if every point in U has a neighborhood in Rⁿ contained in U.

Metric spaces

A subset U of a metric space (M, d) is called open if, given any point x in U, there exists a real number ε > 0 such that, given any point y in M with d(x, y) < ε, y also belongs to U. Equivalently, U is open if every point in U has a neighbourhood contained in U. This generalises the Euclidean space example, since Euclidean space with the Euclidean distance is a metric space.

Topological spaces

In general topological spaces, the open sets can be almost anything, with different choices giving different spaces. Let X be a set and let τ be a family of subsets of X. We say that τ is a topology on X if:

X ∈ τ and ∅ ∈ τ (X and the empty set are in τ);
{O_i}_{i∈I} ⊆ τ ⇒ (⋃_{i∈I} O_i) ∈ τ (any union of sets in τ is in τ);
O₁ ∈ τ and O₂ ∈ τ ⇒ O₁ ∩ O₂ ∈ τ (any finite intersection of sets in τ is in τ).

We call the sets in τ the open sets. Note that infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0}, which is not open in the real line. Sets that can be constructed as the intersection of countably many open sets are denoted G_δ sets. The topological definition of open sets generalises the metric space definition: if one begins with a metric space and defines open sets as before, then the family of all open sets is a topology on the metric space.
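For a finite set, the three topology axioms can be checked mechanically. A small sketch (the helper name and example families are illustrative, not from the article):

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the three open-set axioms for a finite family tau of subsets of X."""
    X = frozenset(X)
    tau = [frozenset(s) for s in tau]
    if X not in tau or frozenset() not in tau:
        return False
    # closure under (all possible) unions
    for r in range(1, len(tau) + 1):
        for fam in combinations(tau, r):
            if frozenset().union(*fam) not in tau:
                return False
    # closure under pairwise, hence finite, intersections
    return all(a & b in tau for a, b in combinations(tau, 2))

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))  # True: a nested ("chain") topology
print(is_topology(X, [set(), {1}, {2}, X]))     # False: {1} | {2} = {1, 2} is missing
```
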
Every metric space is therefore, in a natural way, a topological space. There are, however, topological spaces that are not metric spaces. Properties The empty set is both open and closed (clopen set). [2] The set X that the topology is defined on is both open and closed (clopen set). The union of any number of open sets is open. [3] The intersection of a finite number of open sets is open. [3] Uses Open sets have a fundamental importance in topology. The concept is required to define and make sense of topological space and other topological structures that deal with the notions of closeness and convergence for spaces such as metric spaces and uniform spaces. Every subset A of a topological space X contains a (possibly empty) open set; the largest such open set is called the interior of A. It can be constructed by taking the union of all the open sets contained in A. Given topological spaces X and Y, a function f from X to Y is continuous if the preimage of every open set in Y is open in X. The function f is called open if the image of every open set in X is open in Y. An open set on the real line has the characteristic property that it is a countable union of disjoint open intervals. Notes and cautions "Open" is defined relative to a particular topology Whether a set is open depends on the topology under consideration. Having opted for greater brevity over greater clarity, we refer to a set X endowed with a topology T as "the topological space X" rather than "the topological space ( X, T)", despite the fact that all the topological data is contained in T. If there are two topologies on the same set, a set U that is open in the first topology might fail to be open in the second topology. 
For example, if X is any topological space and Y is any subset of X, the set Y can be given its own topology (called the 'subspace topology'), defined by "a set U is open in the subspace topology on Y if and only if U is the intersection of Y with an open set from the original topology on X." This potentially introduces new open sets: if V is open in the original topology on X but V ∩ Y isn't, then V ∩ Y is open in the subspace topology on Y but not in the original topology on X. As a concrete example of this, if U is defined as the set of rational numbers in the interval (0, 1), then U is an open subset of the rational numbers, but not of the real numbers. This is because when the surrounding space is the rational numbers, for every point x in U, there exists a positive number a such that all rational points within distance a of x are also in U. On the other hand, when the surrounding space is the reals, then for every point x in U there is no positive a such that all real points within distance a of x are in U (since U contains no non-rational numbers). Open and closed are not mutually exclusive A set might be open, closed, both, or neither. For example, we'll use the real line with its usual topology (the Euclidean topology), which is defined as follows: every interval (a, b) of real numbers belongs to the topology, and every union of such intervals, e.g. (a, b) ∪ (c, d), belongs to the topology. In any topology, the entire set X is declared open by definition, as is the empty set. Moreover, the complement of the entire set X is the empty set; since X has an open complement, this means by definition that X is closed. Hence, in any topology, the entire space is simultaneously open and closed ("clopen"). The interval I = (0, 1) is open because it belongs to the Euclidean topology. If I were to have an open complement, it would mean by definition that I were closed.
But I does not have an open complement; its complement is I^c = (−∞, 0] ∪ [1, ∞), which does not belong to the Euclidean topology since it is not a union of intervals of the form (a, b). Hence, I is an example of a set that is open but not closed. By a similar argument, the interval J = [0, 1] is closed but not open. Finally, since neither K = [0, 1) nor its complement K^c = (−∞, 0) ∪ [1, ∞) belongs to the Euclidean topology (neither one can be written as a union of intervals of the form (a, b)), K is neither open nor closed.

See also

References

[1] Ueno, Kenji et al. (2005). "The birth of manifolds". A Mathematical Gift: The Interplay Between Topology, Functions, Geometry, and Algebra. Vol. 3. American Mathematical Society. p. 38.
[3] Taylor, Joseph L. (2011). "Analytic functions". Complex Variables. The Sally Series. American Mathematical Society. p. 29.

External links

Hazewinkel, Michiel, ed. (2001), "Open set". Open Set at PlanetMath.org.
Polulyakh E. O. Ukr. Mat. Zh. - 2016. - 68, № 2. - pp. 254-270 Let $T$ be a forest formed by finitely many locally finite trees. Let $V_0$ be the set of all vertices of $T$ of degree 1. We propose a sufficient condition for the image of an embedding $\Psi : T \setminus V_0 \rightarrow R^2$ to be a level set of a pseudoharmonic function. Ukr. Mat. Zh. - 2015. - 67, № 10. - pp. 1398-1408 We consider continuous functions on two-dimensional surfaces satisfying the following conditions: they have a discrete set of local extrema and if a point is not a local extremum, then there exist its neighborhood and a number $n ∈ ℕ$ such that the function restricted to this neighborhood is topologically conjugate to Re $z^n$ in a certain neighborhood of zero. Given $f : M^2 → ℝ$, let $Γ_{K−R} (f)$ be a quotient space of $M^2$ with respect to its partition formed by the components of level sets of the function $f$. It is known that the space $Γ_{K−R} (f)$ is a topological graph if $M^2$ is compact. In the first part of the paper, we introduced the notion of graph with stalks that generalizes the notion of topological graph. For noncompact $M^2$, we present three conditions sufficient for $Γ_{K−R} (f)$ to be a graph with stalks. In the second part, we prove that these conditions are also necessary in the case $M^2 = ℝ^2$. In the general case, one of our conditions is not necessary. We provide an appropriate example. Ukr. Mat. Zh. - 2015. - 67, № 3. - pp. 375-396 We consider continuous functions on two-dimensional surfaces satisfying the following conditions: they have a discrete set of local extrema; if a point is not a local extremum, then there exist its neighborhood and a number $n ∈ ℕ$ such that a function restricted to this neighborhood is topologically conjugate to Re $z^n$ in a certain neighborhood of zero. Given $f : M^2 → ℝ$, let $Γ_{K−R} (f)$ be a quotient space of $M^2$ with respect to its partition formed by the components of the level sets of $f$.
It is known that, for compact $M^2$, the space $Γ_{K−R} (f)$ is a topological graph. We introduce the notion of graph with stalks, which generalizes the notion of topological graph. For noncompact $M^2$, we establish three conditions sufficient for $Γ_{K−R} (f)$ to be a graph with stalks. Ukr. Mat. Zh. - 2013. - 65, № 7. - pp. 974–995 Let $T$ be a finite or infinite tree and let $V_0$ be the set of all vertices of $T$ of valency 1. We propose a sufficient condition for the image of the imbedding $\psi : T \setminus V_0 \rightarrow \mathbb{R}^2$ to be a level set of a pseudoharmonic function. Ukr. Mat. Zh. - 2006. - 58, № 5. - pp. 705–707 It is proved that the Birkhoff center of a homeomorphism on an arbitrary metric space coincides with the Birkhoff center of its power 2. Ukr. Mat. Zh. - 1997. - 49, № 11. - pp. 1567–1571 We construct a Pontryagin fiber bundle $ξ = (N, p, S^1)$ whose total space $N$ cannot be imbedded into any two-dimensional oriented manifold but can be imbedded into an arbitrary nonoriented two-dimensional manifold.
Specifying Boundary Conditions and Constraints in Variational Problems

In the first part of this blog series, we discussed variational problems and demonstrated how to solve them using the COMSOL Multiphysics® software. In that case, we used simple built-in boundary conditions. Today, we will discuss more general boundary conditions and constraints. We will also show how to implement these boundary conditions and constraints in the COMSOL® software using the same variational problem from Part 1 (the soap film), and just as much math.

Classifying Constraints

There are several schemes for classifying constraints. Here, we will consider those that have the most implications for computational implementation. Based on the geometric entity concerned, we can have point (isolated), distributed, and global constraints. For example, the boundary conditions in a 1D problem are constraints at isolated points, whereas a condition that should hold at every point is a distributed constraint. Global constraints specify some norm (usually an integral) of the solution. For example, specifying the length of a catenary cable or the surface area of a soap film provides a global constraint.

Some people use the term pointwise constraint for a distributed constraint. We would like to make a clear distinction between that and what we are calling a point constraint here. A point constraint is enforced at a single point or a finite number of isolated points. This set of points has no length, area, or volume. Distributed constraints, however, hold at every point of a region. This can be every point of an edge, surface, or domain of a 3D object.

Another classification is equality constraints versus inequality constraints. A familiar inequality constraint in structural mechanics arises in contact mechanics: the gap between contacting objects in an assembly has to be nonnegative. In chemical reaction engineering, lower bounds on species concentrations are also inequalities.
These classifications overlap. For instance, we can have distributed inequality constraints, distributed equality constraints, and so on. Inequality constraints are mathematically more challenging, so we will focus on equality constraints first and move on to inequality constraints a bit later in the series. Brief Introduction to the Theory of Equality Constraints The calculus problem is solved by finding the stationary points of the augmented objective function with respect to both the coordinate \bf{x} and the Lagrange multiplier \lambda. This same idea is used, albeit with appropriate extensions, in variational calculus. Consider the problem Here, the distributed constraint g(x,u,u^{\prime})=0 has to be satisfied at all points in our domain and not just at one point. Thus, each point will have its own Lagrange multiplier, making \lambda a function and not just one number. Accordingly, the augmented functional is (1) Having turned the constrained variational problem for one field, u(x), into an unconstrained problem in two fields, u(x) and \lambda(x), we turn our attention to the optimality criteria. It is important to note here that we can make independent variations in both the solution field and the Lagrange multiplier field. That gives us the first-order optimality criteria \frac{d}{d\epsilon_1}E[u+\epsilon_1 \hat u,\lambda+\epsilon_2 \hat{\lambda}]\bigg|_{(\epsilon_1=0,\epsilon_2=0)} = 0, \qquad \frac{d}{d\epsilon_2}E[u+\epsilon_1 \hat u,\lambda+\epsilon_2 \hat{\lambda}]\bigg|_{(\epsilon_1=0,\epsilon_2=0)} = 0. In terms of F and g, we have These are the equations we need to enter in the Weak Form PDE interface. We will return to that shortly. First, let’s derive the corresponding conditions for global (integral) and point (isolated) constraints. When we have a global constraint the augmented functional is where \lambda is one number and not a field.
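The augmented-function idea above can be sketched in a few lines. The following is a minimal, hypothetical finite-dimensional analogue (the objective and constraint are invented for illustration, not taken from the soap film problem): stationarity of the augmented function with respect to both the coordinates and \lambda yields one system to solve.

```python
import numpy as np

# Hypothetical analogue of the augmented objective: find the stationary
# point of L(x, y, lam) = f(x, y) + lam * g(x, y), where
#   f(x, y) = x**2 + y**2   (objective)
#   g(x, y) = x + y - 1     (equality constraint)
# Setting dL/dx = dL/dy = dL/dlam = 0 gives a linear saddle-point system:
#   2x      + lam = 0
#        2y + lam = 0
#   x + y         = 1
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
# x = y = 0.5 and lam = -1; the multiplier plays the role of the
# "reaction force" needed to enforce the constraint.
```

The same structure (solve simultaneously for the solution and the multiplier) carries over to the variational setting, where \lambda becomes a field.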
The first-order optimality conditions are Finally, using the properties of the Dirac delta function, the point (isolated) constraint g(x,u,u^{\prime})=0, \textrm{ at } x=x_o can be thought of as the global constraint Dirac delta functions can be analogously utilized to derive constraints on edges in 2D and 3D, and surfaces in 3D. Plugging the above result into the formulation for global constraints, we get the first-order optimality conditions (2) (3) If we have more than one isolated point constraint, we will have a Lagrange multiplier for each point. The Lagrange multiplier will not be a field, but a finite set of scalars, one valid at each isolated point. If we have a distributed constraint that is imposed not on the whole domain but over part of a domain, we can define the Lagrange multiplier only over that part. Here, we will have a Lagrange multiplier field. Implementing Constraints in COMSOL Multiphysics® Let us now see how to implement constraints in COMSOL Multiphysics. We will consider the same soap film problem from the previous blog post, but with the following boundary conditions. The first boundary condition is something we could have specified using the Dirichlet Boundary Condition node, but for pedagogic reasons, we will use the more general constraint framework. The two boundary conditions above can be rewritten as We need the partial derivatives of the constraint equations with respect to u and u^{\prime}. Finally, we will plug these in as weak contributions at the corresponding points. The contribution from F remains the same as before. Specifying point constraints using point weak contributions. Note that the point contribution from (Eq. 3) and the second term in (Eq. 2) have been added together in Weak Contribution 1. The logic here is that, since \hat{u} and \hat{\lambda}_a are independent variations, setting the sum of the terms containing those variations to zero is equivalent to setting the terms containing each variation to zero.
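To make the structure of a point constraint concrete, here is a small sketch of the saddle-point system that such a constraint produces after discretization. The setup is hypothetical: a 1D Poisson problem -u'' = 1 on [0, 1] with u(0) = u(1) = 0, discretized by finite differences, plus an extra point constraint u(x0) = 0.2 at an interior node enforced through a scalar Lagrange multiplier, in the spirit of (Eq. 2) and (Eq. 3).

```python
import numpy as np

n = 11
h = 1.0 / (n - 1)
K = np.zeros((n, n))              # stiffness matrix
f = np.full(n, h)                 # lumped load vector
for i in range(1, n - 1):
    K[i, i - 1], K[i, i], K[i, i + 1] = -1.0 / h, 2.0 / h, -1.0 / h
K[0, 0] = K[-1, -1] = 1.0         # Dirichlet rows: u(0) = u(1) = 0
f[0] = f[-1] = 0.0

i0 = n // 2                       # the constrained node (x0 = 0.5)
C = np.zeros(n)
C[i0] = 1.0                       # gradient of the constraint u(x0) - 0.2

# Augmented saddle-point system: [[K, C], [C^T, 0]] [u; lam] = [f; 0.2]
A = np.block([[K, C[:, None]], [C[None, :], np.zeros((1, 1))]])
rhs = np.append(f, 0.2)
sol = np.linalg.solve(A, rhs)
u, lam = sol[:n], sol[n]          # lam is the point reaction force
```

Note the zero block on the diagonal: this is exactly the saddle-point structure discussed later in the post, which destroys positive definiteness.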
The numerical solution for this variational problem with the above constraints is shown in the plot below. Solution with radius 2 at the left end and zero slope at the right end. In the theory section, we discussed different types of constraints. The summary below lists the recommended places in the software where contributions from constraints are specified and where the unknowns (Lagrange multipliers) are defined, based on the type of constraint. Distributed constraint: the contributions containing \hat{u} and \hat{\lambda} are both entered as weak contributions. Global (integral) constraint: the contribution containing \hat{u} is entered as a weak contribution; the contribution containing \hat{\lambda} goes in the Global Equations node, where \lambda is also defined. Isolated point constraint: the contributions containing \hat{u} and \hat{\lambda} are both entered as weak contributions. Specifying Fluxes (Forces) So far, we have discussed specifying constraints. To this end, we introduced unknown Lagrange multipliers. In many physical problems, Lagrange multipliers are the reaction forces or fluxes necessary to enforce a constraint. If, instead of the constraint, you know the applied forces or fluxes, how do you specify that? Go back to the functional to be minimized and reformulate it, including the forces (fluxes). For example, for boundary loads in structural mechanics, this adds the virtual work due to the loads. The expression to enter in the COMSOL Multiphysics interface is similar to what we had to do for constraints. The known value of the force (flux) goes in place of the Lagrange multiplier, and we do not need auxiliary variables for Lagrange multipliers. As a result, the term containing the variation of the Lagrange multiplier disappears. Find more information in our previous blog post on how to add point loads (sources) using weak contributions.
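The "known flux replaces the multiplier" idea can be sketched on the same kind of hypothetical 1D finite-difference system as before: when the point load F is known, it simply enters the right-hand side, and no auxiliary unknown or extra equation is needed.

```python
import numpy as np

# Hypothetical example: -u'' = 0 on [0, 1] with u(0) = u(1) = 0 and a
# known point load F at the midpoint. F plays the role that the unknown
# Lagrange multiplier lam played before, so the system stays square and
# positive definite: no saddle-point structure appears.

n = 11
h = 1.0 / (n - 1)
K = np.zeros((n, n))
for i in range(1, n - 1):
    K[i, i - 1], K[i, i], K[i, i + 1] = -1.0 / h, 2.0 / h, -1.0 / h
K[0, 0] = K[-1, -1] = 1.0         # Dirichlet rows

F = 1.0                           # known point load (in place of lam)
f = np.zeros(n)
f[n // 2] = F                     # weak contribution F * test(u) at the point
u = np.linalg.solve(K, f)         # no auxiliary variable is needed
```

For this configuration the exact solution is the tent function with peak value F/4 = 0.25 at the midpoint, which the discrete solution reproduces.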
Adding Constraints to Points That Are Not in the Geometry Sequence Occasionally, we want to add a constraint to a point in our domain that is not explicitly inserted (or created by the intersection of curves) in the geometry sequence. We cannot select the point and associate it with the point weak contribution as shown in the above example. However, we can add a global weak contribution and use a domain point probe to refer to the solution and its variation. Consider a catenary cable supported at two ends. For a cable with uniform weight per unit length, the variational problem is the same as the axisymmetric soap film problem. Thus, we will use the same COMSOL model. Additionally, we want to constrain the height u to 1.95 at the center without adding a point node to the geometry sequence. First, we add a domain point probe for u. Let the name of the probe be ppb1. The weak contribution corresponding to (Eq. 1) is lam_c*test(comp1.ppb1), where lam_c is a new Lagrange multiplier. We enter this contribution in the global Weak Contribution node as shown in the screenshot below. This node does not allow the creation of auxiliary variables, as opposed to weak contributions on explicitly defined geometric entities. However, we can add a Global Equations node where we can define the Lagrange multiplier and specify the constraint. Adding a constraint on a point not in the geometry. The plots below show the solution including that constraint. Circles are added at mesh nodes. The plot on the left shows the case where the central point does not have a mesh node associated with it. As such, the constraint is satisfied only approximately. On the right, we see the case of a solution on a finer mesh, where the mesher added a node where we want to impose a constraint. Here, the constraint is satisfied exactly. Verifying internal constraints on points not in the geometry sequence. Aspect ratio is not preserved in plots.
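The approximate behavior on the coarser mesh can be illustrated with a small hypothetical sketch: when the constrained point is not a mesh node, the constraint row interpolates between the two neighboring nodes, which is essentially what referring to a domain point probe does.

```python
import numpy as np

# Hypothetical 1D grid deliberately chosen so that no node sits at
# x0 = 0.5; the constraint u(x0) = 1.95 is then enforced only in the
# interpolated sense, mirroring the coarse-mesh plot discussed above.
n = 10
x = np.linspace(0.0, 1.0, n)
x0, target = 0.5, 1.95
j = np.searchsorted(x, x0) - 1    # interval [x[j], x[j+1]] containing x0
w = (x0 - x[j]) / (x[j + 1] - x[j])
C = np.zeros(n)
C[j], C[j + 1] = 1.0 - w, w       # linear interpolation weights

# A trivial identity system stands in for the real discretized problem:
K = np.eye(n)
f = np.zeros(n)
A = np.block([[K, C[:, None]], [C[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(f, target))
u = sol[:n]
# C @ u equals the target exactly, but u at the nearest nodes does not.
```

If the mesh is refined so that a node lands exactly at x0, the weights collapse to a single 1 and the constraint becomes exact at that node, matching the finer-mesh plot.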
A similar strategy can be used to add point loads (sources) for problems such as structural mechanics, heat transfer, or chemical transport. As mentioned earlier, such loads mathematically correspond to Lagrange multipliers. If we know the Lagrange multiplier, we do not add the Global Equation. Rather, we just have the weak contribution and, in place of lam_c above, we type the applied mechanical, thermal, chemical, or other type of load. Stay Tuned! So far in this blog series, we have shown how to solve variational problems using the Weak Form PDE interface and how to include equality constraints. Constraints introduce additional complexity in the numerical solution. Mainly, they lead to saddle point problems, and a Lagrange multiplier implementation destroys the positive definiteness of the stiffness matrix. For nonlinear constraints, there is the additional danger of singular matrices during the nonlinear iteration. This can be especially problematic in global and distributed constraints. In the next installment of this series, we will show numerical strategies for circumventing these issues. After we discuss these strategies using equality constraints, we will proceed to inequality constraints later in the series. View More Blog Posts in the Variational Problems and Constraints Series
N-fold variants (Revision as of 12:25, 6 May 2019) This page is a WIP. The $n$-fold variants of large cardinal axioms were created by Sato Kentaro in [1] in order to study and investigate the double helix phenomenon. The double helix phenomenon is the strange pattern in consistency strength between such cardinals, which can be seen below. This diagram was created by Kentaro. The arrows denote consistency strength, and the double lines denote equivalence. The large cardinals in this diagram will be detailed on this page (unless found elsewhere on this website). This page will only use facts from [1] unless otherwise stated. $n$-fold Variants The $n$-fold variants of large cardinals were given in a very large paper by Sato Kentaro. Most of the definitions involve giving large closure properties to the model $M$ used in the elementary embedding $j:V\rightarrow M$ of the original large cardinal. They are very large, but rank-into-rank cardinals are stronger than most $n$-fold variants of large cardinals. Generally, the $n$-fold variant of a large cardinal axiom is similar to the generalization of superstrong cardinals to $n$-superstrong cardinals, huge cardinals to $n$-huge cardinals, etc. More specifically, if the definition of the original axiom is that $j:V\prec M$ has critical point $\kappa$ and $M$ has some closure property which uses $\kappa$, then the definition of the $n$-fold variant of the axiom is that $M$ has that closure property at $j^n(\kappa)$. $n$-fold Variants Which Are Simply the Original Large Cardinal There were many $n$-fold variants which were simply different names of the original large cardinal.
This was due to the fact that some $n$-fold variants, if only named $n$-variants instead, would be confusing to the reader (for example the $n$-fold extendibles rather than the $n$-extendibles). Here is a list of such cardinals: The $n$-fold superstrong cardinals are precisely the $n$-superstrong cardinals. The $n$-fold almost huge cardinals are precisely the almost $n$-huge cardinals. The $n$-fold huge cardinals are precisely the $n$-huge cardinals. The $n$-fold superhuge cardinals are precisely the $n$-superhuge cardinals. The $\omega$-fold superstrong and $\omega$-fold Shelah cardinals are precisely the I2 cardinals. $n$-fold supercompact cardinals A cardinal $\kappa$ is $n$-fold $\lambda$-supercompact iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\lambda<j(\kappa)$ and $M^{j^{n-1}(\lambda)}\subset M$ (i.e. $M$ is closed under all of its sequences of length $j^{n-1}(\lambda)$). This definition is very similar to that of the $n$-huge cardinals. A cardinal $\kappa$ is $n$-fold supercompact iff it is $n$-fold $\lambda$-supercompact for every $\lambda$. Consistency-wise, the $n$-fold supercompact cardinals are stronger than the $n$-superstrong cardinals and weaker than the $(n+1)$-fold strong cardinals. In fact, if an $n$-fold supercompact cardinal exists, then it is consistent for there to be a proper class of $n$-superstrong cardinals. It is clear that the $(n+1)$-fold $0$-supercompact cardinals are precisely the $n$-huge cardinals. The $1$-fold supercompact cardinals are precisely the supercompact cardinals. The $0$-fold supercompact cardinals are precisely the measurable cardinals. $n$-fold strong cardinals A cardinal $\kappa$ is $n$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^{n-1}(\kappa+\lambda)}\subset M$. A cardinal $\kappa$ is $n$-fold strong iff it is $n$-fold $\lambda$-strong for every $\lambda$.
Consistency-wise, the $(n+1)$-fold strong cardinals are stronger than the $n$-fold supercompact cardinals, equivalent to the $n$-fold extendible cardinals, and weaker than the $(n+1)$-fold Woodin cardinals. More specifically, in the rank of an $(n+1)$-fold Woodin cardinal there is an $(n+1)$-fold strong cardinal. It is clear that the $(n+1)$-fold $0$-strong cardinals are precisely the $n$-superstrong cardinals. The $1$-fold strong cardinals are precisely the strong cardinals. The $0$-fold strong cardinals are precisely the measurable cardinals. $n$-fold extendible cardinals For an ordinal $η$, a class $F$, a positive natural $n$, and $κ+η<κ_1<···<κ_n$: Cardinal $κ$ is $n$-fold $η$-extendible for $F$ with targets $κ_1,...,κ_n$ iff there are $κ+η=ζ_0<ζ_1<···<ζ_n$ and an iteration sequence $\vec e$ through $〈(V_{ζ_i},F∩V_{ζ_i})|i≤n〉$ with $\mathrm{crit}(\vec e)=κ$ and $e_{0,i}(κ)=κ_i$. Cardinal $κ$ is $n$-fold extendible for $F$ iff, for every $η$, $κ$ is $n$-fold $η$-extendible for $F$. Cardinal $κ$ is $n$-fold extendible iff it is $n$-fold extendible for $\varnothing$. The $n$-fold extendible cardinals are precisely the $(n+1)$-fold strong cardinals. $n$-fold $1$-extendibility is implied by $(n+1)$-fold $1$-strongness and implies $n$-fold superstrongness. (To be added) $n$-fold Woodin cardinals A cardinal $\kappa$ is $n$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$, together with an elementary embedding $j:V\rightarrow M$ with critical point $\alpha$ such that $V_{j^{n}(f)(j^{n-1}(\alpha))}\subset M$. Consistency-wise, the $(n+1)$-fold Woodin cardinals are stronger than the $(n+1)$-fold strong cardinals, and weaker than the $(n+1)$-fold Shelah cardinals. Specifically, in the rank of an $(n+1)$-fold Shelah cardinal there is an $(n+1)$-fold Woodin cardinal, and every $(n+1)$-fold Shelah cardinal is also an $(n+1)$-fold Woodin cardinal. The $2$-fold Woodin cardinals are precisely the Vopěnka cardinals (therefore precisely the Woodin for supercompactness cardinals).
In fact, the $(n+1)$-fold Woodin cardinals are precisely the $n$-fold Vopěnka cardinals. The $1$-fold Woodin cardinals are precisely the Woodin cardinals. (More to be added) $\omega$-fold variants The $\omega$-fold variants are very strong versions of the $n$-fold variants, to the point where they even beat some of the rank-into-rank axioms in consistency strength. Interestingly, they follow a somewhat backwards pattern of consistency strength relative to the original double helix. For example, $n$-fold strong is much weaker than $n$-fold Vopěnka (the jump is similar to the jump between a strong cardinal and a Vopěnka cardinal), but $\omega$-fold strong is much, much stronger than $\omega$-fold Vopěnka. $\omega$-fold extendible (To be added) $\omega$-fold Vopěnka (To be added) $\omega$-fold Woodin A cardinal $\kappa$ is $\omega$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$, together with an elementary embedding $j:V\rightarrow M$ with critical point $\alpha$ such that $V_{j^{\omega}(f)(\alpha)}\subset M$. Consistency-wise, the existence of an $\omega$-fold Woodin cardinal is stronger than the I2 axiom, but weaker than the existence of an $\omega$-fold strong cardinal. In particular, if there is an $\omega$-fold strong cardinal $\kappa$ then $\kappa$ is $\omega$-fold Woodin and has $\kappa$-many $\omega$-fold Woodin cardinals below it, and $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold Woodin cardinals. $\omega$-fold strong A cardinal $\kappa$ is $\omega$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^\omega(\kappa+\lambda)}\subset M$. $\kappa$ is $\omega$-fold strong iff it is $\omega$-fold $\lambda$-strong for every $\lambda$.
Consistency-wise, the existence of an $\omega$-fold strong cardinal is stronger than the existence of an $\omega$-fold Woodin cardinal and weaker than the assertion that there is a $\Sigma_4^1$-elementary embedding $j:V_\lambda\prec V_\lambda$ with an uncountable critical point $\kappa<\lambda$ (this is a weakening of the I1 axiom known as $E_2$). In particular, if there is a cardinal $\kappa$ which is the critical point of some elementary embedding witnessing the $E_2$ axiom, then there is a nonprincipal $\kappa$-complete ultrafilter over $\kappa$ which contains the set of all cardinals that are $\omega$-fold strong in $V_\kappa$, and therefore $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold strong cardinals. References Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Automated decision-making is one of the core objectives of artificial intelligence. Not surprisingly, over the past few years, entire new research fields have emerged to tackle that task. This blog post is concerned with regret minimization, one of the central tools in online learning. Regret minimization models the problem of repeated online decision making: an agent is called to make a sequence of decisions, under unknown (and potentially adversarial) loss functions. Regret minimization is a versatile mathematical abstraction that has found a plethora of practical applications: portfolio optimization, computation of Nash equilibria, applications to markets and auctions, submodular function optimization, and more. In this blog post, we will be interested in showing how one can compose regret-minimizing agents—or regret minimizers for short. In other words, suppose that you are given a regret minimizer that can output good decisions on a set \(\mathcal{X}\) and another regret minimizer that can output good decisions on a set \(\mathcal{Y}\). We show how you can combine them and build a good regret minimizer for a composite set obtained from \(\mathcal{X}\) and \(\mathcal{Y}\)—for example their Cartesian product, their convex hull, their intersection, and so on. Our approach will treat the two regret minimizers, one for \(\mathcal{X}\) and one for \(\mathcal{Y}\), as black boxes. This is tricky: we simply combine them without opening the box, so we must account for the possibility of having to combine very different regret minimizers. On the other hand, the benefit is that we are free to pick the best regret minimizer for each individual set. This is important. For example, consider an extensive-form game: we might know how to build specialized regret minimizers for different parts of the game. We figured out how to combine these regret minimizers to build a composite regret minimizer that can handle the whole game.
All material is based on a recent paper that appeared at ICML 2019. By the end of the blog post, I will give several applications of this calculus. It enables one to do several things that were not possible before. It also gives a significantly simpler proof of counterfactual regret minimization (CFR), the state-of-the-art scalable method for computing Nash equilibria in large extensive-form games. The whole exact CFR algorithm falls out naturally, almost trivially, from our calculus. Regret Minimizer: An Abstraction for Repeated Decision Making A regret minimizer is an abstraction of a repeated decision-maker. One way to think about a regret minimizer is as a device that supports two operations: Output the next decision \(\mathbf{x}^t\), drawn from a convex and bounded domain of decisions \(\mathcal{X}\); and Receive/observe a convex loss function \(\ell^{t-1}\) meant to evaluate the last decision \(\mathbf{x}^{t-1}\) that was output. The loss function is revealed after the decision \(\mathbf{x}^{t-1}\) has been made, and it could have been chosen adversarially by the environment. The decision making is online, in the sense that each decision is made by only taking into account the past decisions and their corresponding loss functions; no information about future losses is available to the regret minimizer at any time. For the rest of the post, we focus on linear losses, that is, \(\mathcal{F} = \mathcal{L}\), where \(\mathcal{L}\) denotes the set of all linear functions with domain \(\mathcal{X}\). Regret as a Quality Metric The quality metric for a regret minimizer is its cumulative regret. Intuitively, it measures how well the regret minimizer did against the best fixed decision in hindsight. We can formalize this idea mathematically as the difference between the loss that was accumulated, \(\sum_{t=1}^T \ell^t(\mathbf{x}^t)\), and the minimum possible cumulative loss, \(\min_{\hat{\mathbf{x}}\in\mathcal{X}} \sum_{t=1}^T \ell^t(\hat{\mathbf{x}})\).
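The two-operation abstraction above can be sketched as a tiny class. The instance below is an illustrative stand-in (names and step size are my own, not from the paper): online projected gradient descent over the box [0, 1]^d with linear losses, where observing a loss means taking a gradient step and projecting back.

```python
import numpy as np

class OGDRegretMinimizer:
    """Sketch of the regret-minimizer device: two operations only."""
    def __init__(self, dim, step=0.1):
        self.x = np.full(dim, 0.5)   # current decision in [0, 1]^dim
        self.step = step

    def next_decision(self):
        return self.x.copy()

    def observe_loss(self, grad):
        # linear loss ell(x) = <grad, x>: gradient step, then projection
        # back onto the box (here projection is just clipping)
        self.x = np.clip(self.x - self.step * grad, 0.0, 1.0)

rm = OGDRegretMinimizer(dim=2)
cum = np.zeros(2)
for _ in range(50):
    x = rm.next_decision()
    g = np.array([1.0, -1.0])        # the environment picks the loss
    rm.observe_loss(g)
    cum += g
# best fixed decision in hindsight on a box is a corner of the box
best = np.minimum(cum, 0.0).sum()
```

Here the decisions drift toward the corner (0, 1), which is exactly the best fixed decision in hindsight for this (constant) loss sequence.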
In formulas, the cumulative regret up to time \(T\) is defined as $$\displaystyle R^T := \sum_{t=1}^T \ell^t(\mathbf{x}^t) - \min_{\hat{\mathbf{x}} \in \mathcal{X}} \sum_{t=1}^T \ell^t(\hat{\mathbf{x}}).$$ Good Regret Minimizers “Good” regret minimizers, also called Hannan consistent regret minimizers, are such that their cumulative regret grows sublinearly as a function of \(T\). Several good and general-purpose regret minimizers are known in the literature. Some of them, like follow-the-regularized-leader, online mirror descent, and online (projected) gradient descent, work for any convex domain \(\mathcal{X}\). Others are tailored to specific domains, such as regret matching and regret matching plus, both of which are specifically designed for the case in which \(\mathcal{X}\) is a (probability) simplex. However, these general-purpose regret minimizers typically come with two drawbacks: They need a notion of projection onto the domain of decisions \(\mathcal{X}\), which can be computationally expensive in practice; and They are monolithic: they cannot take advantage of the specific combinatorial structure of their domain. Given the drawbacks of the traditional approaches, we started to wonder about different ways to construct regret minimizers, until we stumbled upon this intriguing thought: can we construct regret minimizers for composite sets by combining regret minimizers for the individual atoms? The answer is yes 🙂 Warm-Up: Cartesian Products Let’s start from a simple example. Suppose we have a regret minimizer that outputs decisions on a convex set \(\mathcal{X}\), and another regret minimizer that outputs decisions on a convex set \(\mathcal{Y}\). How can we combine them to obtain a regret minimizer for their Cartesian product \(\mathcal{X} \times \mathcal{Y}\)?
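As a concrete instance of a simplex-tailored minimizer mentioned above, here is a short sketch of regret matching: it tracks cumulative regrets against each pure action and plays proportionally to their positive parts, with no projection or step size. The loss data below is synthetic, purely to exercise the definition of \(R^T\).

```python
import numpy as np

def regret_matching(loss_seq):
    """Run regret matching on a sequence of loss vectors over the simplex;
    return the cumulative regret R^T against the best fixed action."""
    n = loss_seq.shape[1]
    cum_regret = np.zeros(n)
    total_loss = 0.0
    for ell in loss_seq:
        pos = np.maximum(cum_regret, 0.0)
        # play proportionally to positive regrets (uniform if all <= 0)
        x = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        total_loss += x @ ell
        cum_regret += x @ ell - ell      # regret against each pure action
    best_fixed = loss_seq.sum(axis=0).min()
    return total_loss - best_fixed       # cumulative regret R^T

rng = np.random.default_rng(0)
T = 2000
losses = rng.uniform(0.0, 1.0, size=(T, 3))
R = regret_matching(losses)
# Hannan consistency: R grows like sqrt(T), so R / T shrinks as T grows
```

With losses in [0, 1] and 3 actions, the standard bound gives R^T at most on the order of sqrt(3 T), i.e. well below T.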
The natural idea, in this case, is to let the two regret minimizers operate independently: Every time we need to output a decision on \(\mathcal{X} \times \mathcal{Y}\), we ask the regret minimizers for \(\mathcal{X}\) and \(\mathcal{Y}\) to independently output their next decisions \(\mathbf{x}^{t+1}\) and \(\mathbf{y}^{t+1}\) on \(\mathcal{X}\) and \(\mathcal{Y}\) respectively, and output the pair \((\mathbf{x}^{t+1}, \mathbf{y}^{t+1})\). Every time we receive a linear loss function \(\ell^t\) (defined over \(\mathcal{X}\times\mathcal{Y}\)), we separate the components that refer to \(\mathcal{X}\) and \(\mathcal{Y}\) respectively, and feed them into the regret minimizers for \(\mathcal{X}\) and \(\mathcal{Y}\). This process is represented pictorially in Figure 2. We coin this type of pictorial representation a “regret circuit”. Some simple algebra shows that, at all times \(T\), our strategy guarantees that the cumulative regret \(R^T\) of the composite regret minimizer (as seen from outside of the gray dashed box) satisfies \(R^T = R_\mathcal{X}^T + R_\mathcal{Y}^T\), where \(R_\mathcal{X}^T\) and \(R_\mathcal{Y}^T\) are the cumulative regrets of the regret minimizers for domains \(\mathcal{X}\) and \(\mathcal{Y}\) respectively. Hence, if both of those regret minimizers are “good” (Hannan consistent), then so is the composite regret minimizer. A Harder Example: Convex Hull What about convex hulls? It turns out that this is much trickier! We can try to reuse the same approach as before: we ask the two regret minimizers, one for \(\mathcal{X}\) and one for \(\mathcal{Y}\), to independently output decisions. But now we run into a dilemma as to how we should form a convex combination of the two decisions. Big idea: use a third regret minimizer to decide how to “mix” the recommendations from \(\mathcal{X}\) and \(\mathcal{Y}\) In this case, the regret circuit is shown in Figure 3.
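The Cartesian-product circuit translates almost line-for-line into code: concatenate the two decisions, split the incoming linear loss into its two components. The simple gradient-descent minimizer below is an illustrative stand-in for any black-box regret minimizer.

```python
import numpy as np

class OGD:
    """Illustrative black-box regret minimizer on the box [0, 1]^dim."""
    def __init__(self, dim, step=0.05):
        self.x = np.full(dim, 0.5)
        self.step = step
    def next_decision(self):
        return self.x.copy()
    def observe_loss(self, grad):
        self.x = np.clip(self.x - self.step * grad, 0.0, 1.0)

class CartesianProduct:
    """Regret circuit for X x Y: run the two minimizers independently."""
    def __init__(self, rm_x, rm_y, dim_x):
        self.rm_x, self.rm_y, self.dim_x = rm_x, rm_y, dim_x
    def next_decision(self):
        return np.concatenate([self.rm_x.next_decision(),
                               self.rm_y.next_decision()])
    def observe_loss(self, grad):
        self.rm_x.observe_loss(grad[:self.dim_x])   # X-component of loss
        self.rm_y.observe_loss(grad[self.dim_x:])   # Y-component of loss

prod = CartesianProduct(OGD(2), OGD(3), dim_x=2)
for _ in range(100):
    z = prod.next_decision()            # a point of X x Y
    prod.observe_loss(np.ones(5))       # some adversarial linear loss
```

The composite regret is the sum of the two internal regrets, exactly as in the \(R^T = R_\mathcal{X}^T + R_\mathcal{Y}^T\) identity above.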
If the loss function \(\ell_{\lambda}^{t-1}\) that enters the extra regret minimizer is set up correctly, and if all three internal regret minimizers are good, one can prove that the composite regret minimizer, as seen from outside of the gray dashed box, is also a good regret minimizer. In particular, a natural way to define \(\ell_{\lambda}^{t}\) is as \[ \ell^t_\lambda : \Delta^{2} \ni (\lambda_1,\lambda_2) \mapsto \lambda_1 \ell^t(\mathbf{x}^t) + \lambda_2\ell^t(\mathbf{y}^t), \] which can be seen as a form of counterfactual loss function. Application: Counterfactual Regret Minimization It turns out that the two regret circuits we’ve seen so far—one for the Cartesian product and one for the convex hull of two sets—are already enough to give a very natural proof of the counterfactual regret minimization (CFR) framework, a family of regret minimizers specifically tailored to extensive-form games. CFR has been the de facto state of the art for the past 10+ years for computing approximate Nash equilibria in large games, and has been one of the key technologies that made it possible to solve large Heads-Up Limit and No-Limit Texas Hold’Em. The basic intuition is as follows (all details are in our paper). Consider for example the sequential action space of the first player in the game of Kuhn poker (Figure 4, left): Every time we have to take an action in a game, we are effectively deciding how to break probability mass down onto different subtrees. Intuitively, this corresponds to a convex hull operation. Every time we make an observation, we need to have a contingency plan for each possible outcome and formulate a strategy for each possible observation. This is a Cartesian product operation. In other words, we can represent the strategy of the player by composing convex hulls and Cartesian products, following the structure of the game (Figure 4, right).
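The convex-hull circuit can likewise be sketched in code: a third regret minimizer over the 2-simplex picks the mixing weights, and it is fed the counterfactual loss \((\ell^t(\mathbf{x}^t), \ell^t(\mathbf{y}^t))\) from the formula above. Regret matching serves as the simplex minimizer; the box-domain gradient-descent minimizer is again an illustrative stand-in for the black boxes.

```python
import numpy as np

class OGD:
    def __init__(self, dim, step=0.05):
        self.x = np.full(dim, 0.5); self.step = step
    def next_decision(self): return self.x.copy()
    def observe_loss(self, g): self.x = np.clip(self.x - self.step * g, 0, 1)

class RegretMatching:
    def __init__(self, n):
        self.cum = np.zeros(n); self.n = n
    def next_decision(self):
        pos = np.maximum(self.cum, 0)
        return pos / pos.sum() if pos.sum() > 0 else np.full(self.n, 1 / self.n)
    def observe_loss(self, ell):
        x = self.next_decision(); self.cum += x @ ell - ell

class ConvexHull:
    """Regret circuit for co(X, Y): a simplex minimizer mixes x^t and y^t."""
    def __init__(self, rm_x, rm_y):
        self.rm_x, self.rm_y = rm_x, rm_y
        self.rm_lam = RegretMatching(2)
        self.last = None
    def next_decision(self):
        x, y = self.rm_x.next_decision(), self.rm_y.next_decision()
        lam = self.rm_lam.next_decision()
        self.last = (x, y)
        return lam[0] * x + lam[1] * y
    def observe_loss(self, grad):           # linear loss ell(z) = <grad, z>
        x, y = self.last
        self.rm_x.observe_loss(grad)
        self.rm_y.observe_loss(grad)
        # counterfactual loss for the mixing weights: (ell(x^t), ell(y^t))
        self.rm_lam.observe_loss(np.array([grad @ x, grad @ y]))

hull = ConvexHull(OGD(3), OGD(3))
for _ in range(100):
    z = hull.next_decision()
    hull.observe_loss(np.array([1.0, -1.0, 0.0]))
```

Chaining these two circuits along the game tree is exactly the construction that recovers CFR in the application discussed next.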
Since we can express the set of strategies in the game by composing convex hulls and Cartesian products, it should now be clear how our framework assists us in constructing a regret minimizer for this domain. Intersections and Constraint Satisfaction After having seen Cartesian products and convex hulls, a natural question is: what about intersections and constraint satisfaction? In this case, we assume access to a good regret minimizer for a domain \(\mathcal{X}\), and we want to somehow construct a good regret minimizer for the curtailed set \(\mathcal{X} \cap \mathcal{Y}\). It turns out that, in general, these constraining operations are more costly than “enlarging” operations such as convex hulls, Minkowski sums, Cartesian products, etc. In the paper, we show two different circuits: One is approximate and does not maintain feasibility (that is, some \(\mathbf{x}^t\) might fall outside of the domain of decisions \(\mathcal{X}\)). However, this circuit is very cheap to evaluate and does not use any notion of projection. The other guarantees feasibility, but at the cost of using (generalized) projections. The main idea for both circuits is to let the regret minimizer for \(\mathcal{X}\) output decisions, and then penalize infeasible choices by injecting extra penalization terms into the loss functions that enter the regret minimizer for \(\mathcal{X}\). In the case of the circuit that guarantees feasibility, the decisions are also projected onto \(\mathcal{X} \cap \mathcal{Y}\) before they are output by the composite regret minimizer. Figure 5 shows the resulting regret circuits, where \(d\) is the distance-generating function used in the projection (for example, a good choice could be \(d(\mathbf{x}) = \|\mathbf{x}\|^2_2\)), and \(\alpha^t\) is a penalization coefficient.
While here we are not interested in all the details of this circuit, we point out an interesting observation: the regret circuit is a constructive proof of the fact that we can always turn an infeasible regret minimizer into a feasible one by projecting onto the feasible set, outside the loop! Other Applications Armed with these new intersection circuits, we can show that the recent Constrained CFR algorithm can be constructed as a special case of our framework. Our exact (feasible) intersection construction leads to a new algorithm for the same problem as well. Another application is in the realm of optimistic/predictive regret minimization. This is a recent subfield of online learning, whose techniques can be used to break the learning-theoretic barrier of \(O(T^{-1/2})\) on the convergence rate of regret-based approaches to saddle points (for example, Nash equilibria). In a different ICML 2019 paper, we used our calculus to prove that, under certain hypotheses, CFR can be modified to have a convergence rate of \(O(T^{-3/4})\) to Nash equilibrium, instead of \(O(T^{-1/2})\) as in the original (non-optimistic) version. Future Research Regret circuits have already proved to be useful in several applications, mostly in game theory. The fact that we can combine potentially very different regret minimizers as black boxes is very appealing, because it enables one to choose the best algorithm for each set that is being composed, and to conquer different parts of the design space with different techniques. In the paper, we show regret circuits for several convexity-preserving operations, including convex hull, Cartesian product, affine transformations, intersections, and Minkowski sums. However, several questions remain open: Most of the work assumes linear loss functions. For many applications, this is less restrictive than it might seem, because every time we face a non-linear loss function we can use a well-known linearization trick and fall back on our results.
However, it would still be interesting to see what can be said about general convex loss functions. It would be interesting to derive a full calculus of optimistic/predictive regret minimization. Not all intersections are equally hard. Perhaps we could improve on the intersection construction in special cases. Let’s develop more circuits! If you have a particular application in mind that could benefit from a specially-designed regret circuit, I would love to hear about it. 🙂
Each of the following three examples highlights different ways to go beyond the very simple examples of the previous section. See also The complete source code of this example can be found in spin_orbit.py We begin by extending the simple 2DEG Hamiltonian by a Rashba spin-orbit coupling and a Zeeman splitting due to an external magnetic field: Here \(\sigma_{x,y,z}\) denote the Pauli matrices. It turns out that this well-studied Rashba Hamiltonian has some peculiar properties in (ballistic) nanowires: It was first predicted theoretically in Phys. Rev. Lett. 90, 256601 (2003) that such a system should exhibit non-monotonic conductance steps due to a spin-orbit gap. Only very recently has this non-monotonic behavior reportedly been observed in experiment: Nature Physics 6, 336 (2010). Here we will show that a very simple extension of our previous examples will show exactly this behavior (note, though, that no care was taken to choose realistic parameters). The tight-binding model corresponding to the Rashba Hamiltonian naturally exhibits a 2x2-matrix structure of onsite energies and hoppings. In order to use matrices in our program, we import the Tinyarray package. (NumPy would work as well, but Tinyarray is much faster for small arrays.)
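The 2x2 matrix structure described above can be checked independently of Kwant. The following sketch assembles the onsite and x-direction hopping matrices with plain NumPy for a tiny two-site chain and verifies that the resulting Hamiltonian is Hermitian; the parameter values are arbitrary illustrations, not the ones used in the tutorial script.

```python
import numpy as np

# Pauli matrices (sigma_0 is the unit matrix)
sigma_0 = np.array([[1, 0], [0, 1]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# illustrative parameter values
t, alpha, e_z = 1.0, 0.3, 0.1
onsite = 4 * t * sigma_0 + e_z * sigma_z              # Zeeman in onsite term
hop_x = -t * sigma_0 + 1j * alpha * sigma_y / 2       # Rashba in hopping

# Two-site chain: H = [[onsite, hop_x], [hop_x^dagger, onsite]],
# i.e. specifying one hopping direction and adding its conjugate
# transpose, which is what Builder does automatically.
H = np.block([[onsite, hop_x], [hop_x.conj().T, onsite]])
```

The eigenvalues of H come in Zeeman-split pairs around the band bottom, in line with the spin-orbit-gap discussion above.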
import tinyarray

For convenience, we define the Pauli matrices first (with \(\sigma_0\) the unit matrix):

sigma_0 = tinyarray.array([[1, 0], [0, 1]])
sigma_x = tinyarray.array([[0, 1], [1, 0]])
sigma_y = tinyarray.array([[0, -1j], [1j, 0]])
sigma_z = tinyarray.array([[1, 0], [0, -1]])

Previously, we used numbers as the values of our matrix elements. However, Builder also accepts matrices as values, and we can simply write:

    syst[(lat(x, y) for x in range(L) for y in range(W))] = \
        4 * t * sigma_0 + e_z * sigma_z

    # hoppings in x-direction
    syst[kwant.builder.HoppingKind((1, 0), lat, lat)] = \
        -t * sigma_0 + 1j * alpha * sigma_y / 2

    # hoppings in y-directions
    syst[kwant.builder.HoppingKind((0, 1), lat, lat)] = \
        -t * sigma_0 - 1j * alpha * sigma_x / 2

Note that the Zeeman energy adds to the onsite term, whereas the Rashba spin-orbit term adds to the hoppings (due to the derivative operator). Furthermore, the hoppings in x- and y-direction have a different matrix structure. We now cannot use lat.neighbors() to add all the hoppings at once, since we have to distinguish the x- and y-directions. Because of that, we had to explicitly specify the hoppings in the form expected by HoppingKind, as done above.

Since we are only dealing with a single lattice here, source and target lattice are identical, but still must be specified (for an example with hopping between different (sub)lattices, see Beyond square lattices: graphene). Again, it is enough to specify one direction of the hopping (i.e. when specifying (1, 0) it is not necessary to specify (-1, 0)); Builder assures hermiticity.
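The four matrices defined above satisfy the usual Pauli algebra, which is easy to sanity-check. The check below uses NumPy rather than Tinyarray purely so that it is self-contained (as noted further down, the two expose the same array constructor):

```python
import numpy as np

# NumPy versions of the four matrices defined in the tutorial text.
sigma_0 = np.array([[1, 0], [0, 1]])
sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# Each Pauli matrix squares to the identity ...
for s in (sigma_x, sigma_y, sigma_z):
    assert np.allclose(s @ s, sigma_0)

# ... and they multiply cyclically: sigma_x sigma_y = i sigma_z, etc.
assert np.allclose(sigma_x @ sigma_y, 1j * sigma_z)
assert np.allclose(sigma_y @ sigma_z, 1j * sigma_x)
assert np.allclose(sigma_z @ sigma_x, 1j * sigma_y)
```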
The leads also allow for a matrix structure:

    lead[(lat(0, j) for j in range(W))] = 4 * t * sigma_0 + e_z * sigma_z

    # hoppings in x-direction
    lead[kwant.builder.HoppingKind((1, 0), lat, lat)] = \
        -t * sigma_0 + 1j * alpha * sigma_y / 2

    # hoppings in y-directions
    lead[kwant.builder.HoppingKind((0, 1), lat, lat)] = \
        -t * sigma_0 - 1j * alpha * sigma_x / 2

The remainder of the code is unchanged, and as a result we should obtain the following, clearly non-monotonic conductance steps:

The Tinyarray package, one of the dependencies of Kwant, implements efficient small arrays. It is used internally in Kwant for storing small vectors and matrices. For performance, it is preferable to define small arrays that are going to be used with Kwant using Tinyarray. However, NumPy would work as well:

import numpy
sigma_0 = numpy.array([[1, 0], [0, 1]])
sigma_x = numpy.array([[0, 1], [1, 0]])
sigma_y = numpy.array([[0, -1j], [1j, 0]])
sigma_z = numpy.array([[1, 0], [0, -1]])

Tinyarray arrays behave for most purposes like NumPy arrays, except that they are immutable: they cannot be changed once created. This is important for Kwant: it allows them to be used directly as dictionary keys.

It should be emphasized that the relative hopping used for HoppingKind is given in terms of lattice indices, i.e. relative to the Bravais lattice vectors. For a square lattice, the Bravais lattice vectors are simply (a, 0) and (0, a), and hence the mapping from lattice indices (i, j) to real space and back is trivial. This becomes more involved in more complicated lattices, where the real-space directions corresponding to, for example, (1, 0) and (0, 1) need not be orthogonal any more (see Beyond square lattices: graphene).

See also The complete source code of this example can be found in quantum_well.py

Up to now, all examples had position-independent matrix elements (and thus translational invariance along the wire, which was the origin of the conductance steps).
Now, we consider the case of a position-dependent potential: The position-dependent potential enters in the onsite energies. One possibility would be to again set the onsite matrix elements of each lattice point individually (as in Transport through a quantum wire). However, changing the potential then implies the need to build up the system again. Instead, we use a Python function to define the onsite energies. We define the potential profile of a quantum well as:

def make_system(a=1, t=1.0, W=10, L=30, L_well=10):
    # Start with an empty tight-binding system and a single square lattice.
    # `a` is the lattice constant (by default set to 1 for simplicity).
    lat = kwant.lattice.square(a)

    syst = kwant.Builder()

    #### Define the scattering region. ####
    # Potential profile
    def potential(site, pot):
        (x, y) = site.pos
        if (L - L_well) / 2 < x < (L + L_well) / 2:
            return pot
        else:
            return 0

This function takes two arguments: the first of type Site, from which you can get the real-space coordinates using site.pos, and the value of the potential as the second. Note that in potential we can access variables of the surrounding function: L and L_well are taken from the namespace of make_system.

Kwant now allows us to pass a function as a value to Builder:

    def onsite(site, pot):
        return 4 * t + potential(site, pot)

    syst[(lat(x, y) for x in range(L) for y in range(W))] = onsite
    syst[lat.neighbors()] = -t

For each lattice point, the corresponding site is then passed as the first argument to the function onsite. The values of any additional parameters, which can be used to alter the Hamiltonian matrix elements at a later stage, are specified later during the call to smatrix. Note that we had to define onsite, as it is not possible to mix values and functions as in syst[...] = 4 * t + potential. For the leads, we just use constant values as before.
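The mechanism at work here — storing a callable as a value and evaluating it lazily with parameters supplied only at solve time — can be illustrated without Kwant at all. Everything below (`values`, `evaluate`, the numbers) is a made-up, Kwant-free sketch of the pattern, not Kwant API:

```python
# A Kwant-free sketch of the "function as a value" pattern: the onsite
# entry is a callable, evaluated lazily with parameters that are only
# supplied at solve time. All names here are hypothetical.
L, L_well = 30, 10

def onsite(site, pot):
    x, y = site
    if (L - L_well) / 2 < x < (L + L_well) / 2:
        return 4.0 + pot          # inside the well
    return 4.0                    # outside

# A Builder-like mapping: sites map to numbers *or* callables.
values = {(x, 0): onsite for x in range(L)}

def evaluate(values, **params):
    """Resolve every entry, calling callables with the given parameters."""
    return {site: (v(site, **params) if callable(v) else v)
            for site, v in values.items()}

shallow = evaluate(values, pot=0.0)    # same stored system ...
deep = evaluate(values, pot=-0.5)      # ... different well depth, no rebuild
```

The point of the pattern is visible in the last two lines: the system description is built once, and only the cheap final evaluation is repeated for each parameter value.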
If we passed a function also for the leads (which is perfectly allowed), this function would need to be compatible with the translational symmetry of the lead – this should be kept in mind. Finally, we compute the transmission probability:

    # Compute conductance
    data = []
    for welldepth in welldepths:
        smatrix = kwant.smatrix(syst, energy, params=dict(pot=-welldepth))
        data.append(smatrix.transmission(1, 0))

    pyplot.figure()
    pyplot.plot(welldepths, data)
    pyplot.xlabel("well depth [t]")
    pyplot.ylabel("conductance [e^2/h]")
    pyplot.show()

kwant.smatrix allows us to specify a dictionary, params, that contains the additional arguments required by the Hamiltonian matrix elements. In this example we are able to solve the system for different depths of the potential well by passing the potential value (remember that above we defined our onsite function to take a parameter named pot). We obtain the result:

Starting from no potential (well depth = 0), we observe the typical oscillatory transmission behavior through resonances in the quantum well.

Warning: If functions are used to set values inside a lead, then they must satisfy the same symmetry as the lead does. There is (currently) no check, and wrong results will be the consequence of a misbehaving function.

See also The complete source code of this example can be found in ab_ring.py

Up to now, we only dealt with simple wire geometries. Now we turn to the case of a more complex geometry, namely transport through a quantum ring that is pierced by a magnetic flux \(\Phi\): For a flux line, it is possible to choose a gauge such that a charged particle acquires a phase \(e\Phi/\hbar\) whenever it crosses the branch cut originating from the flux line (branch cut shown as red dashed line) [1]. There are more symmetric gauges, but this one is the most convenient to implement numerically. Defining such a complex structure by adding individual lattice sites is possible, but cumbersome.
Fortunately, there is a more convenient solution: first, define a boolean function defining the desired shape, i.e. a function that returns True whenever a point is inside the shape, and False otherwise:

def make_system(a=1, t=1.0, W=10, r1=10, r2=20):
    # Start with an empty tight-binding system and a single square lattice.
    # `a` is the lattice constant (by default set to 1 for simplicity).
    lat = kwant.lattice.square(a)

    syst = kwant.Builder()

    #### Define the scattering region. ####
    # Now, we aim for a more complex shape, namely a ring (or annulus)
    def ring(pos):
        (x, y) = pos
        rsq = x ** 2 + y ** 2
        return (r1 ** 2 < rsq < r2 ** 2)

Note that this function takes a real-space position as argument (not a Site). We can now simply add all of the lattice points inside this shape at once, using the function shape provided by the lattice:

    syst[lat.shape(ring, (0, r1 + 1))] = 4 * t
    syst[lat.neighbors()] = -t

Here, lat.shape takes as a second parameter a (real-space) point that is inside the desired shape. The hoppings can still be added using lat.neighbors() as before. Up to now, the system contains constant hoppings and onsite energies, and we still need to include the phase shift due to the magnetic flux. This is done by overwriting the values of hoppings in x-direction along the branch cut in the lower arm of the ring.
For this we select all hoppings in x-direction that are of the form (lat(1, j), lat(0, j)) with j < 0:

    def hopping_phase(site1, site2, phi):
        return -t * exp(1j * phi)

    def crosses_branchcut(hop):
        ix0, iy0 = hop[0].tag
        # builder.HoppingKind with the argument (1, 0) below
        # returns hoppings ordered as ((i+1, j), (i, j))
        return iy0 < 0 and ix0 == 1  # ix1 == 0 then implied

    # Modify only those hoppings in x-direction that cross the branch cut
    def hops_across_cut(syst):
        for hop in kwant.builder.HoppingKind((1, 0), lat, lat)(syst):
            if crosses_branchcut(hop):
                yield hop

    syst[hops_across_cut] = hopping_phase

Here, crosses_branchcut is a boolean function that returns True for the desired hoppings. We then use again a generator (this time with an if-conditional) to set the value of all hoppings across the branch cut to hopping_phase. The rationale behind using a function instead of a constant value for the hopping is again that we want to vary the flux through the ring without constantly rebuilding the system – instead, the flux is governed by the parameter phi.

For the leads, we can also use the lat.shape functionality:

    sym_lead = kwant.TranslationalSymmetry((-a, 0))
    lead = kwant.Builder(sym_lead)

    def lead_shape(pos):
        (x, y) = pos
        return (-W / 2 < y < W / 2)

    lead[lat.shape(lead_shape, (0, 0))] = 4 * t
    lead[lat.neighbors()] = -t

Here, the shape must be compatible with the translational symmetry of the lead sym_lead. In particular, this means that it should extend to infinity along the translational symmetry direction (note how there is no restriction on x in lead_shape) [2]. Attaching the leads is done as before:

    syst.attach_lead(lead)
    syst.attach_lead(lead.reversed())

In fact, attaching leads no longer seems so simple for the current structure, with a scattering region very different from the lead shapes.
However, the choice of unit cell together with the translational vector allows us to place the lead unambiguously in real space – the unit cell is repeated infinitely many times in the direction of, and opposite to, the translational vector. Kwant examines the lead starting from infinity and traces it back (going opposite to the direction of the translational vector) until it intersects the scattering region. At this intersection, the lead is attached. After the lead has been attached, the system should look like this:

The computation of the conductance goes in the same fashion as before. Finally, you should get the following result, where one can observe the conductance oscillations with the period of one flux quantum.

Leads have to have proper periodicity. Furthermore, the Kwant format requires the hopping from the leads to the scattering region to be identical to the hoppings between unit cells in the lead. attach_lead takes care of all these details for you! In fact, it even adds points to the scattering region, if proper attaching requires this. This becomes more apparent if we attach the leads a bit further away from the central axis of the ring, as was done in this example:

By default, attach_lead attaches the lead to the “outside” of the structure, by tracing the lead backwards, coming from infinity. One can also attach the lead to the inside of the structure, by providing an alternative starting point from which the lead is traced back: syst.attach_lead(lead1, lat(0, 0)) starts the trace-back in the middle of the ring, resulting in the lead being attached to the inner circle. Note that here the lead is treated as if it would pass over the other arm of the ring, without intersecting it.

Footnotes

[1] The corresponding vector potential is \(A_x(x,y)=\Phi \delta(x) \Theta(-y)\), which yields the correct magnetic field \(B(x,y)=\Phi \delta(x)\delta(y)\).
[2] Despite the “infinite” shape, the unit cell will still be finite; the TranslationalSymmetry takes care of that.
I try to construct a TM that accepts the language $\{ ww \mid w \in \{a,b\}^* \}$. Between the words $w$ there is no delimiter, so I don't know how my TM can tell where the first $w$ ends and the second $w$ begins. Is there a trick for this?

Here is a sketch of how a deterministic machine might work: one way I have read of is to first check the length of the string and, if it is odd, reject the string. Then we could put some special character (e.g. c) at the end of the entire string, copy the entire string to the other side of the special character, and then apply the algorithm for checking $a^nb^n$. I also found another method, explained as pseudocode in this pdf, which changes the first half of the string to some other substitute symbols and then compares them.

An important thing to recognize is that you are only concerned with deciding if your input $x$ can be written as $ww$. That is, you want to decide $\exists w\,(x=ww)$. There are many ways you can go about this, and honestly, there is even a large variety of easy ways to go about it. If you just have to describe the Turing machine in words (like, you don't have to construct a detailed diagram), then pretty much any of them would be sufficient. If you have to draw a diagram, here are some things you might want to consider: If you are allowed to use nondeterminism, as was commented, you should take advantage of this. You can nondeterministically guess where the half-way point of your input is, and then check to see if the first segment matches the second segment.
Likewise, you can always just guess what $w$ is and then check to see that your input tape looks like $ww$. If you can't use nondeterminism, you can always emulate it by trying all possible middle points or writing down all possible $w$'s, stopping at a provably reasonable time, of course. You can, as a previous answer mentions, count the length of your input $x$, then use that to compute the middle point. Some other considerations that are typical when one is assigned problems like these deal with how many tapes your machine has. If your Turing machine only has one tape, guessing $w$ might be foolish, and you would be much better off guessing or finding the middle. I hope this is sufficient. The tricky thing about this kind of question is: either you don't have to fully describe the Turing machine, in which case there are lots of nice and short answers, or you do have to fully describe the Turing machine, in which case you will have to do a nice little bit of work to convert one of these short descriptions into a diagram.
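The deterministic strategy (locate the middle by marking symbols from both ends, then compare the two halves cell by cell) can be simulated in a few lines of Python. This is a sketch of the idea, not a literal transition table:

```python
def accepts_ww(tape: str) -> bool:
    """Simulate the deterministic two-phase strategy on a 'tape'."""
    t = list(tape)
    # Phase 1: find the midpoint by marking one symbol from the left end
    # and one from the right end per pass (a TM would overwrite a/b with
    # marked variants A/B).
    lo, hi = 0, len(t) - 1
    while lo < hi:
        t[lo] = t[lo].upper()
        t[hi] = t[hi].upper()
        lo += 1
        hi -= 1
    if lo == hi:          # the two heads met on a single cell: odd length
        return False
    mid = lo              # first cell of the second half
    # Phase 2: zig-zag between the halves, comparing cell i with cell i + mid.
    return all(tape[i] == tape[i + mid] for i in range(mid))
```

On a single-tape machine the zig-zag comparison in phase 2 is what costs quadratic time; the empty string is accepted, since $\varepsilon = \varepsilon\varepsilon$.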
Let $\mathbf V \colon [0,T] \times \mathbb R^d \to \mathbb R^d$ (for $T>0$) be a given, bounded smooth vector field and let $\mathbf X=\mathbf X(t,x)$ be its flow, i.e. the unique solution to the initial-value problem\begin{equation}\begin{cases}\frac{\partial}{\partial t} \mathbf X(t,x) = \mathbf V(t,\mathbf X(t,x)) & \text{ in } (0,T) \times \mathbb R^d \\\mathbf X(0,x) = x \quad \text{ for all } x \in \mathbb R^d. \end{cases}\end{equation} A well-known result in standard ODE theory says that $$\tag{1} \nabla_x \mathbf X(t,x) = \exp\bigg( \int_0^t \nabla \mathbf V(s,\mathbf X(s,x))\,ds\bigg). $$ Is there an analogous formula to (1) for ODEs driven by (smooth) vector fields on Riemannian manifolds? In particular, does this formula somehow involve the geometry of the Riemannian manifold? A rather precise question could be: consider the $C^1$ norm of $\mathbf X$ (or even its Lipschitz constant) w.r.t. the space variable $x$: does it depend on some known tensors on the manifold (e.g. curvature)? I have gone through books in differential geometry/differential topology (e.g. Lee, Lang) and they prove that $\mathbf X$ is smooth but do not compute the derivative explicitly. References are very much welcome. Thanks.
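A remark on formula (1): as written, the plain exponential is only valid when the matrices $\nabla\mathbf V(s,\mathbf X(s,x))$ at different times commute; in general $\nabla_x\mathbf X$ solves the linear equation $\partial_t J = \nabla\mathbf V(t,\mathbf X)\,J$ and is a time-ordered exponential. In dimension $d=1$ the formula holds exactly, which makes a quick numerical sanity check easy. The vector field $V(x)=\sin x$ and all step sizes below are arbitrary choices for the illustration:

```python
import math

def flow(x0, T=1.0, n=4000):
    """Integrate dX/dt = V(X) with V(x) = sin(x) by RK4; also accumulate
    the integral of V'(X(s)) = cos(X(s)) along the trajectory (trapezoid)."""
    V, dV = math.sin, math.cos
    dt = T / n
    x, I = x0, 0.0
    for _ in range(n):
        I += 0.5 * dt * dV(x)              # left endpoint of the interval
        k1 = V(x)
        k2 = V(x + 0.5 * dt * k1)
        k3 = V(x + 0.5 * dt * k2)
        k4 = V(x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        I += 0.5 * dt * dV(x)              # right endpoint of the interval
    return x, I

x0, h = 0.3, 1e-5
# Finite-difference Jacobian of the flow map x -> X(T, x) ...
J_fd = (flow(x0 + h)[0] - flow(x0 - h)[0]) / (2 * h)
# ... versus the exponential of the integrated derivative of V, as in (1).
J_formula = math.exp(flow(x0)[1])
```

The two numbers agree to high precision, as (1) predicts in one dimension.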
The quick answer is: no. The Hamiltonian is a Hermitian (self-adjoint) operator that maps state vectors to other state vectors in a given Hilbert space, regardless of time; it is the time-evolution operator built from it (e.g. $e^{-i\hat H t/\hbar}$ for time-independent $\hat H$) that is unitary. Lubos's answer in this thread discusses this distinction very clearly: Why $\displaystyle i\hbar\frac{\partial}{\partial t}$ can not be considered as the Hamiltonian operator? Another point that you may be interested in is: Hamiltonians can't contain time derivative operators, but they certainly CAN be time dependent. For any particular interaction, you will have a predetermined Hamiltonian. For example, if a particle is free, then $$\hat H = \hat P^2/2m$$ If a particle is subject to some kind of scalar potential energy V(x), then $$\hat H = \hat P^2/2m + V(x)$$ Most operators you see in introductory quantum mechanics, like the two written above, are time-independent. But in general, operators can be time dependent. For example, you can apply a potential energy that is changing over time (a finite square well that varies in height, maybe). Sometimes, these operators are divided into time-independent and time-dependent parts. See the link below for detailed discussions and examples of important time-dependent Hamiltonians in atomic physics: http://ocw.mit.edu/courses/nuclear-engineering/22-51-quantum-theory-of-radiation-interactions-fall-2012/lecture-notes/MIT22_51F12_Ch5.pdf
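The Hermitian-vs-unitary distinction is easy to see numerically. Below, the free-particle Hamiltonian $\hat P^2/2m$ is discretised on a grid with second-order finite differences ($\hbar = m = 1$; the grid size and time are arbitrary choices for the illustration): the matrix $H$ is Hermitian but not unitary, while $e^{-iHt}$ is unitary.

```python
import numpy as np

# Free particle P^2/2m on N grid points (hbar = m = 1), discretised with
# second-order finite differences; grid details are arbitrary choices.
N, dx = 100, 0.1
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

# H is Hermitian (here: real symmetric) ...
assert np.allclose(H, H.conj().T)
# ... but certainly not unitary:
assert not np.allclose(H @ H.conj().T, np.eye(N))

# The evolution operator U(t) = exp(-i H t) built from H *is* unitary;
# construct it by diagonalising H.
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * 0.7)) @ V.conj().T
assert np.allclose(U @ U.conj().T, np.eye(N))
```

Unitarity of $U$ is exactly the statement that norms (total probability) are preserved under time evolution, which a Hermitian $H$ guarantees.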
The Rope of Dreams – Polynomial

Imagine you are given a length of rope that is 120 meters long, and told that you can go to any place on Earth and whatever you enclose with the rope – is yours to keep, or do with whatever you wish. You can enclose only a single area, a single time, and then must return the rope. What would you do? You may not think of a Polynomial of the Second Order – a Quadratic Equation, but you probably should.

What would you do?

Say a big “Thanks”, take the rope, and start pondering the options. A likely plan is to think of where on Earth you want to go (a tropical island, a bustling city, a countryside retreat, maybe even Fort Knox – it’s your choice), and while en route to your destination figure out how to maximize the area the 120 meter rope can enclose. I’ll leave the destination to your own imagination (you can post in the Comments section below) and turn our attention for now to maximizing the area the rope can enclose once you get there. Did someone say Polynomial!

How long is a piece of string, or 120 meters of rope?

A likely first question you might have is to get an idea of just how long 120 meters is, so some reference examples might help; note that ‘m’ is short for ‘meter’. A soccer pitch is between 90m and 120m in length; a rugby pitch is 100m – the same as the 100m sprint in Athletics (Usain Bolt, Carl Lewis etc.); an American football pitch is 110m long; a CLG/GAA (Cumann Lúthchleas Gael / Gaelic Athletic Association) pitch is between 130m – 145m in length. For petrol heads, 120m is about 24 Nascars end-to-end, or 21 Formula1 cars end-to-end – that’s almost the entire grid – are you heading to Monaco with your rope?

Triangles and Rectangles and Squares, Oh My!

A second question might be what shape to use; that is a great question and goes to the heart of this blog post.
You might think of a triangle like the ancient Egyptians; or a rectangle shape like so many sports pitches and courts; an oval shape like running and race tracks; or a square shape like many public Parks. Or maybe you didn’t think about shape at all, maybe you thought shape doesn’t matter; but shape matters a whole lot and we’re about to find out why.

Let’s try a Triangle – the Egyptian Pyramids stood the test of time reasonably well

So a triangle could be a good shape to try, and let’s make it a long triangle because it seems logical that the longer it is the more area it will have. Let’s say we go with a triangle with sides of length 50m, 50m, and a base of 20m – that totals 120m. This type of triangle, with two sides of equal length, is called an isosceles triangle. Now let’s calculate the area of the triangle. Using the formula below we get an area of \(490m^2\). That seems decent: for a 120m length of rope we can cover an area of 490 sq.m.

For a Triangle: \( Area = \frac {base \cdot height} {2} \)
Heron’s Formula: \( Area = \sqrt {p(p-a)(p-b)(p-c)}\) where p is half the perimeter i.e. \(p = \frac {a+b+c}{2}\)

Now let’s try another configuration of triangle, one that is more symmetric, in fact an equilateral triangle i.e. a triangle with 3 sides of equal length. The equilateral triangle will have sides of 40m, 40m, and 40m – all equal, and totalling 120m. Using Heron’s Formula above we see that we now have an area of \(693m^2\); that’s over 200 sq.m. more than the previous triangle. With a slight change in how we used our rope we’ve bagged ourselves a considerable area with an equilateral triangle of \(693m^2\).

Q.) What do you call a broken Square? A.) A Rectangle. Rectangle – wrecked angle, broken square, get it. Heheheh.

Ok, so if changing the shape of a triangle can bag us a bigger area, what about roping a rectangle! Once again we’ll go for a longer length and shorter width in the hope the longer dimension pays dividends.
Let’s go with a rectangle of length 50m and width 10m; that’s a total perimeter of 120m (50+10+50+10). This rectangle gives us an area of \(500m^2\). Hmm, that’s a bit better than our original isosceles triangle (490 sq.m.) but a lot less than our equilateral triangle (693 sq.m.).

For a Rectangle: \( Area = length \cdot width \)

Now let’s try the same thing we did with the triangles, i.e. go for a more symmetric shape; you can’t get a more symmetric rectangle than – a square. That’s right, a square is just a special case of a rectangle where the length and width are equal. A square will have 4 sides of 30m each, totaling 120m and using all our rope. The area of this square will be 30 times 30 i.e. \(900m^2\). That’s more like it: once again a reconfiguration of our shape has yielded a much greater area, with a square yielding \(900m^2\).

Pentagons and Hexagons and Heptagons, Oh My!

Applying the lessons we learned from Triangles and Rectangles, it would seem that the more symmetric the shape the greater the area for a given circumference. So what shape is the most symmetric of all? Triangles, Squares, Pentagons, Hexagons, Heptagons, Octagons, and so on. But where does it end, what is the most-sided shape you can make? The Circle.

Were the ancient Irish right 5000 years ago – 1000 years before the Egyptian pyramids?

Is the ultimate shape the humble Circle, as used in the World Heritage Site Brú na Bóinne in Ireland over 5200 years ago in the prehistoric passage tombs of Newgrange, Knowth and Dowth? Let us see. The perimeter, i.e. circumference, of the circle we can describe is 120 meters long; using the equations below we can determine that the radius of the circle will be \( \frac {60} {\pi} \), and we can use that to determine the area of the circle, which is \(1146m^2\). Winner! Optimizing our shape by using a circle has yielded us a maximum area to enclose with our rope of \(1146m^2\).
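All of the areas computed so far can be checked with a few lines of Python, using Heron's Formula for the two triangles and, for the circle, the identity Area = C²/(4π) that follows from C = 2πr and Area = πr²:

```python
import math

def heron(a, b, c):
    """Area of a triangle from its three side lengths (Heron's Formula)."""
    p = (a + b + c) / 2          # half the perimeter
    return math.sqrt(p * (p - a) * (p - b) * (p - c))

rope = 120  # meters of rope, i.e. the fixed perimeter

isosceles = heron(50, 50, 20)
equilateral = heron(40, 40, 40)
rectangle = 50 * 10
square = 30 * 30
circle = rope ** 2 / (4 * math.pi)   # Area = C^2 / (4 pi)

print(round(isosceles), round(equilateral), rectangle, square, round(circle))
# prints: 490 693 500 900 1146
```

Same rope every time, five different areas, exactly as in the shape-by-shape walk-through above.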
For a Circle: \( Area = \pi r^2 = \pi {( \frac{d}{2} )}^2 \), and \( Perimeter = \pi d \)
Given the circumference, C: \( Area = \frac {C^2} {4\pi} \)

RESULTS

Remember that all of these areas were obtained with the same 120 meter long rope, no magic, only mathematics!

\(490m^2\) – Isosceles triangle.
\(693m^2\) – Equilateral triangle.
\(500m^2\) – Long Rectangle.
\(900m^2\) – Square.
\(1146m^2\) – Circle.

Polynomial! Where is the link with Polynomial Equations and the Quadratic?

I’m glad you asked. This all started when I was looking at numbers on a number line, squaring them, and seeing how the values changed. Then I wondered: what is the difference between squaring a number (i.e. multiplying a number by itself), and multiplying the number to its left by the number to its right? So for example if I take the number 6, squaring it gives 36 (i.e. 6 * 6). Multiplying the number to the left by the number on the right is 5 * 7 = 35. 35 differs from 36 by 1. Interesting. And what if I take the number 8, squaring gives 64; multiplying 7 * 9 = 63, once again this is one less. Interesting! Then I wondered if this relationship held for all numbers. Now multiplying an infinite amount of numbers is too much work, and that’s why we love Algebra because it saves us from all that extra work. Let me call the number I pick, x. So the number to the left of x is one less i.e. x-1, and the number to the right of x is one more i.e. x+1. My test cases showed that: \( x^2 = (x-1)(x+1) + 1 \) Multiplying this out: \( x^2 = x^2 + x -x -1 +1 \) Simplifying: \( x^2 = x^2 \) Q.E.D.

What if you move two places rather than one?

Next I wondered if there was a relationship between moving two spaces left and two spaces right. Taking 6 * 6 = 36, while 4 * 8 = 32; so a difference of 4. Interesting. Taking 8 * 8 = 64, while 6 * 10 = 60; again it is 4 less. Note that 4 is 2 squared i.e. 2 * 2 = 4.

What if you move three places rather than one?
Then I wondered about the relationship with moving 3 places. Since moving one space left a deficit of 1, and moving 2 spaces left a deficit of 4, would moving 3 spaces leave a deficit of 9; or would it be 8? Let us see. Taking 6 * 6 = 36, while 3 * 9 = 27, so a deficit of 9. Interesting. Taking 8 * 8 = 64, while 5 * 11 = 55, again a deficit of 9. Note that 9 is 3 squared i.e. 3 * 3 = 9.

Generalizing using Algebra for Polynomials of the Second Order

Once again, taking x as the initial number, and n as the number of places to move to the left and right, we can see the above results give us the general identity: \(\boxed{ x^2 = (x-n)(x+n) + n^2 }\) Once again, we should prove the proposed equation. Starting with \({ x^2 = (x-n)(x+n) + n^2 }\) Multiplying out: \({ x^2 = x^2 +nx -nx -n^2 +n^2 }\) Simplifying: \( x^2 = x^2 \) Q.E.D. You can try the formula with your own values; please comment below on whether your calculations agree or disagree – and you can provide the numbers used, calculations, and results. You can confine your calculations to Integers, i.e. positive and negative whole numbers including 0.

Does the Equation explain the choice of Shape?

Great question, and the answer is Yes. The equation tells us algebraically, and in a more general and proven way, what we found out empirically by choosing different types of shape. The ideal scenario is to choose a value of n = 0, i.e. not to wander away from the square condition. The further you wander, i.e. the greater n, the greater \(n^2\) will become. If you move 2, the loss is the square (\(n^2\)) i.e. 4; if you move 3 the loss is 9; if you move 4 the loss is 16; if you move 5, the loss is 25; and so on. This is why the isosceles triangle of 50-50-20 was much worse than the equilateral triangle of 40-40-40. It is also why the rectangle of 50-10-50-10 was much worse than the square of 30-30-30-30.

So why was the Circle the best Shape?

Another great question.
If you look carefully at the boxed equation above you may notice that it is very similar to the equation of a circle, and also very similar to Pythagoras’ Theorem. But really it boils down to symmetry: the way to get a maximum area with a given perimeter/circumference is to use a circle, because it satisfies these equations and gives a maximum value. Corners are inefficient, curves are efficient. Be mindful that the opposite is true when it comes to packing and stacking, i.e. corners are efficient and curves are inefficient. The Hexagon is a nice compromise between minimizing perimeter, maximizing area, and maximizing stacking and packing. Somehow bees seem to know this and build their nests accordingly.

Our equation: \(\boxed{ x^2 = (x-n)(x+n) + n^2 }\)
Eqn. of a Circle: \( r^2 = (x-h)^2 + (y-k)^2 \)
Pythagoras’ Theorem: \( c^2 = a^2 + b^2 \)

Conclusion

Take that Rope of Dreams, decide where you want to go, and if you want to maximize the area you cover – choose a circle. If you liked this post you will probably like the posts on the Monty Hall Problem – Can You Solve This Maths Puzzle? Enjoy! UPDATE: You’ll definitely like the follow-up post to this, which takes things to another Dimension (there’s even mention of the BORG), The Rope of Dreams Recut: Polynomials of the Third Order – Cubic Equations. Triple enjoy!
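The algebraic proof of the boxed identity settles the matter, but a brute-force spot check over a grid of integers costs nothing and is exactly the kind of experiment readers are invited to try in the comments:

```python
# Brute-force spot check of x^2 == (x - n)(x + n) + n^2 on a grid of integers.
checked = 0
for x in range(-50, 51):
    for n in range(0, 26):
        assert x * x == (x - n) * (x + n) + n * n
        checked += 1
print(f"identity verified for {checked} (x, n) pairs")  # 101 * 26 = 2626
```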
In Analytic Mechanics, the Lagrangian is taken to be a function of $x$ and $\dot{x}$, where $x$ stands for position and is a function of time, and $\dot{x}$ is its derivative with respect to time. To set up my question, let's consider the motion of a particle along a line: $$x: \mathbb{R} \to \mathbb{R} ~~as~~ t \mapsto x(t)$$ and take the Lagrangian to be: $$L(x, \dot{x}) := \frac{1}{2}m\dot{x}^2 - V(x)$$ By applying the Euler-Lagrange equations: $$ \frac{d}{dt}\left(\frac{\partial{L}}{\partial\dot{x}}\right) = \frac{\partial{L}}{\partial x}$$ we get back Newton's law of motion. This follows formally when we consider $x$ and $\dot{x}$ as independent, but if we consider $\dot{x}$ as velocity, then it is indeed a function of position, so when we partially differentiate $L$ with respect to $x$ the $\dot{x}$-terms shouldn't vanish, and this messes up the derivation. What am I misunderstanding?

edit: I am beginning to think that this is a non-question, as some people seem to have suggested, and that I am being confused by the hand-wavy math in basic physics textbooks. In the above example, let us just consider $C$ to be the configuration space of the particle. Locally $C$ is given by the co-ordinate function $x: U \to \mathbb{R}$, and the tangent bundle $\pi: TC \to C$, being locally trivial, has as co-ordinate functions above $U$: $(x\circ \pi) \oplus dx: TU \to \mathbb{R^2}$. We take the Lagrangian to be simply a functional: $TC \to \mathbb{R}$, which when written locally is in terms of $x$ and $\frac{\partial}{\partial x}$. But what about the dot in $\dot{x}$? Say the particle traces out $\gamma: I \to C$, where the interval $I$ is time. Let $\gamma(0) = p \in C$. Locally in terms of $x$, since we have $\gamma_{\ast}(\frac{d}{dt}\rvert_0)$ in $T_{p}C$, we get $dx(\gamma_{\ast}(\frac{d}{dt}\rvert_0)) = \dot{x \circ \gamma}(0)$, which we can abuse notation and write as $\dot{x}(p)$, so the corresponding point on the bundle looks like $(x(p), \dot{x}(p))$. Does this make sense, or is there something I am still missing?
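One numerical way to see why treating $x$ and $\dot x$ as independent slots of $L$ is harmless: the Euler–Lagrange equations express stationarity of the action, and along a true trajectory the first-order change of the (discretised) action under any endpoint-fixing perturbation vanishes. A sanity-check sketch for the harmonic oscillator ($m = k = 1$, exact trajectory $x(t)=\cos t$ on $[0,\pi/2]$; all discretisation choices below are mine):

```python
import math

# Harmonic oscillator, m = k = 1: L = xdot^2/2 - x^2/2. The exact
# trajectory x(t) = cos(t) is taken on [0, pi/2]; the perturbation
# sin(2t) vanishes at both endpoints, as required.
N = 2000
T = math.pi / 2
h = T / N
ts = [i * h for i in range(N + 1)]

def action(eps):
    """Discretised action of the path x(t) = cos(t) + eps * sin(2t)."""
    xs = [math.cos(t) + eps * math.sin(2 * t) for t in ts]
    S = 0.0
    for i in range(N):
        v = (xs[i + 1] - xs[i]) / h            # velocity on the interval
        xm = 0.5 * (xs[i] + xs[i + 1])         # midpoint position
        S += h * (0.5 * v * v - 0.5 * xm * xm)
    return S

e = 1e-3
dS = (action(e) - action(-e)) / (2 * e)                  # first variation
d2S = (action(e) - 2 * action(0) + action(-e)) / e**2    # second variation
# dS is ~ 0 (the action is stationary), while d2S is strictly positive.
```

The vanishing first variation is exactly the content of the Euler–Lagrange equation, derived without ever asking whether $\dot x$ "depends on" $x$: the partial derivatives of $L$ are derivatives with respect to the two slots of the function on $TC$, in line with the tangent-bundle picture in the edit.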
This answer responds only to a small part of the OP's post, where he posits that outliers can be detected when the absolute studentised residuals are greater than three.

Diagnosing 'outliers' using studentised residuals: I note that you are diagnosing 'outliers' as points that have an absolute studentised residual greater than three. The hyperlink you provide to James, Witten, Hastie and Tibshirani (2017) notes these points as 'possible outliers' (p. 97). They refer to removing data points only in the case of actual measurement error, and also note that an outlier may indicate a deficiency in the model. This 'rule-of-thumb' is also commonly taught in introductory courses, and is in a lot of textbooks. It is extremely unfortunate that this rule is still being taught; it is an idea that should have died more than fifty years ago. This rule-of-thumb for outliers is completely wrong - it has no basis in statistical theory.

Distribution of maximum absolute studentised residual: When you look at extreme values based on absolute studentised residuals, you must bear in mind that you are cherry-picking a maximum value from a set of underlying random variables; it is not appropriate to compare this value to the tails of the distribution for a single underlying random variable. You need to consider the distribution of the statistic you are actually looking at. Under the standard Gaussian regression assumption that $\varepsilon_1, ..., \varepsilon_n \sim \text{IID N}(0, \sigma^2)$, your (externalised) studentised residuals are only weakly dependent and have distribution: $$S_1, ..., S_n \sim \text{Student T}(df_{Res}).$$ For large $n$, the distribution of the maximum absolute studentised residual is well-approximated by an extreme value distribution which depends on the value $n$: $$M_n \equiv \max_{i = 1,...,n} |S_i| \sim \text{EV}(n),$$ As $n$ gets bigger, this distribution shifts to the right and its expected value becomes bigger, without any upper bound.
It has approximate mean $\mathbb{E}(M_n) \approx \Phi^{-1}(1 - 1/n)$, which tends to infinity as $n \rightarrow \infty$. This is hardly surprising: as you take more and more approximately IID random variables from an unbounded distribution, the (absolute) biggest one tends to get bigger and bigger, without any upper bound. So, the idea that you can diagnose an 'outlier' by observing that $M_n>3$, without any consideration of the sample size, is just absurd.$\dagger$ There are various formal tests for outliers that take account of the sample size and thereby correctly use the fact that the maximum studentised residual has a distribution that increases with $n$. The most well-known test is Grubbs' test, so that is a good place to start. Proper outlier tests operate by comparing the maximum value to an appropriate extreme value distribution that approximates the distribution of that maximum value under the model assumptions. If Grubbs' test diagnoses an outlier, this tells you that the extreme value is so extreme that it is unlikely to have come from an underlying Gaussian regression model, even after taking account of the sample size. What does it actually mean to find one or more 'outliers'? Assuming you have tested correctly, using a test that accounts for the sample size, you may find that you are able to identify one or more points whose values are so extreme that they are 'outliers' with respect to the model assumption of normally distributed error terms. So, what does this actually mean? Unless you have reason to believe that there are actual measurement errors in your data, all this really means is that the underlying errors in your model have a distribution with fatter tails than the normal distribution. The normal distribution has thin tails, and data often does not fit this well.
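As a concrete illustration of a test that does account for $n$, here is a minimal sketch of Grubbs' test (the statistic and the standard critical value built from a Student-t quantile; the function and variable names are my own):

```python
import numpy as np
from scipy import stats

def grubbs(x, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier in a univariate sample.

    Returns (G, G_crit); G > G_crit flags the most extreme point.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    # Critical value from the Student-t quantile; note that it grows with n,
    # unlike a fixed cut-off of three.
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return G, G_crit

rng = np.random.default_rng(1)
clean = rng.normal(size=50)
G1, c1 = grubbs(clean)                   # clean Gaussian sample
G2, c2 = grubbs(np.append(clean, 15.0))  # same sample plus a gross outlier
```

For $n = 50$ and $\alpha = 0.05$ the critical value is roughly 3.1, and it keeps increasing as the sample grows.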
If your data fails Grubbs' test, or some other appropriate outlier test, you might consider changing your model to one with an error distribution that has fatter tails. A simple but effective change is to use a GLM with a generalised error distribution, which adds a parameter that allows for additional kurtosis. It is a very bad idea to remove data points purely because they are 'outliers' in comparison to some assumed error distribution. If you do this, you are effectively requiring the data to meet your model assumptions rather than requiring your model assumptions to conform to the reality of the data. Filtering out data points that have high absolute studentised residuals means that you will systematically underestimate the variability in your data. Unless there is reason to believe that there is actual measurement error in a data point, you should keep it in your data. If outliers remain, you should adapt your model to allow for higher kurtosis in your error distribution. $\dagger$ Again, to be clear, no derogatory comment about this rule should be interpreted as a derogatory comment about the OP. This is a rule that keeps getting taught in statistics courses and is also found in many (otherwise excellent) statistics textbooks. It is rarely corrected when it comes up, and a concerted effort by the statistics profession is needed to kill it off.
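One quick diagnostic before moving to a fatter-tailed error model is the sample excess kurtosis of the residuals. This simulated sketch (my own illustration, not the OP's data) contrasts Gaussian errors with Student-t errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000

gauss_resid = rng.normal(size=n)          # thin-tailed errors
fat_resid = rng.standard_t(df=5, size=n)  # fat-tailed errors (excess kurtosis 6 in theory)

k_gauss = stats.kurtosis(gauss_resid)     # sample excess kurtosis, near 0
k_fat = stats.kurtosis(fat_resid)         # substantially positive
```

A clearly positive residual kurtosis is the kind of evidence that favours an error distribution with an extra kurtosis parameter over simply deleting the extreme points.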
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box. There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university. Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite-dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
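The "division box" machine described at the top of this section is the Euclidean algorithm; a minimal sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: divide a by b, getting a = b*q + r with r < b,
    then feed the pair (b, r) back into the division box until r = 0."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # → 21
```

Each pass through the box strictly decreases the second argument, so the loop terminates, and gcd(a, b) = gcd(b, r) at every step.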
Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
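Returning to the residue definition quoted above: it is easy to check numerically. Here is a sketch (the example function is my own choice) that integrates around the unit circle with a uniform Riemann sum, which is spectrally accurate for smooth periodic integrands:

```python
import numpy as np

# res_0 f = (1/2πi) ∮ f(z) dz over a small loop around the singularity.
# Toy example: f(z) = 2/z + 3 + z, whose residue at 0 is the coefficient
# of 1/z, namely 2.
f = lambda z: 2 / z + 3 + z

N = 2000
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
z = np.exp(1j * theta)              # unit circle, z(θ) = e^{iθ}
dz_dtheta = 1j * z                  # z'(θ)
integral = np.sum(f(z) * dz_dtheta) * (2 * np.pi / N)
residue = integral / (2j * np.pi)
print(residue)  # ≈ (2+0j)
```

The $3$ and $z$ terms integrate to zero around the loop, so only the $2/z$ term survives, matching $\text{res}_0 = a_{-1}$.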
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Periodic solutions in a delayed predator-prey models with nonmonotonic functional response 1. School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, China 2. School of Mathematics and Information Sciences, Ludong University, Yantai, Shandong 264025, China \[ y'(t)=y(t)\left[ \frac{\mu (t)x(t-\tau )}{m^2+x^2(t-\tau )} -d(t)\right] \] is established, where $a(t), b(t), \mu (t)$ and $d(t)$ are all positive periodic continuous functions with period $\omega >0$, and $m>0$ and $\tau \geq 0$ are constants. Keywords: time delay, coincidence degree, positive periodic solution, nonmonotonic functional response, predator-prey model. Mathematics Subject Classification: Primary: 34K15, 34C25; Secondary: 92D2. Citation: Wan-Tong Li, Yong-Hong Fan. Periodic solutions in a delayed predator-prey models with nonmonotonic functional response. Discrete & Continuous Dynamical Systems - B, 2007, 8 (1): 175-185. doi: 10.3934/dcdsb.2007.8.175
Can the uncertainty principle be derived in quantum field theory? If yes, does it have a different interpretation than in quantum mechanics, because the coordinates $x_i$ are now parameters and not operators? There are some possibilities to produce a statement in QFT similar to the one valid for QM. In this case $X$ and $P$ must be replaced by the analogous objects in QFT, the field operator and its conjugate momentum. Consider a quantum scalar field $\phi$ and the equal time CCR: $$[\phi(t, \vec{x}), \pi(t, \vec{y})] = i\hbar \delta(\vec{x}-\vec{y}) I \:.$$ The rigorous version is: $$[\phi(t, f), \pi(t, g)] = i\hbar (f|g) I$$ where $f,g : \mathbb R^3 \to \mathbb R$ are spatial smearing test functions and $$(f|g)= \int _{\mathbb R^3} \overline{f(\vec{x})} g(\vec{x}) d^3x\:.$$ With the same procedure as for the standard CCR you easily get $$\Delta \phi(t, f)_\Psi \:\Delta \pi(t, g)_\Psi \geq \frac{\hbar}{2} |(f|g)|$$ for every normalized vector state $\Psi$ which belongs to the domain of $\phi(t, f)$, $\pi(t, g)$ and their second-order powers. In particular you see that if $f$ and $g$ have disjoint supports, $|(f|g)|=0$, so that $\Delta \phi(t, f)_\Psi \:\Delta \pi(t, g)_\Psi \geq 0$, in accordance with the fact that $\phi(t, f)$ and $\pi(t, g)$ commute in that case. Quantum field theory is essentially modelled on top of the theory of quantum mechanics for finitely many degrees of freedom. With the creation and annihilation operators one can define the analogue of the position and momentum operators $q$ and $p$ as the closures of $$q_0(x) = \frac1{\sqrt2}[a(x) + a(x)^*],\qquad p_0(x) = \frac i{\sqrt2}[a(x)^*-a(x)]$$ respectively. The Heisenberg relations are then $$[q(x),p(y)] = i (x,y)I,$$ for any pair of vectors $x,y$ in the single-particle Hilbert space. The same relations come directly from the canonical fields $\phi$ and $\pi$, which satisfy the Heisenberg relations at a certain time, say $t=0$.
By choosing an orthonormal basis $\{e_n\}$ of the single-particle Hilbert space one can set $$q_k = \phi(e_k),\qquad p_k=\pi(e_k),$$ whence $$[q_i,q_k]=[p_i,p_k] = 0,\qquad [q_i,p_k] = i\delta_{ik}I.$$ As the uncertainty relations stem directly from the Heisenberg relations, one has them for the degrees of freedom of the theory, but there is no link between them and the coordinates of space-time.
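For completeness, the step hidden in "the same procedure as for the standard CCR" is the Robertson uncertainty relation; a sketch (domain subtleties glossed over):

```latex
% Robertson: for self-adjoint A, B and a normalised state \Psi in the
% appropriate domain,
\Delta A_\Psi \, \Delta B_\Psi \;\ge\; \frac{1}{2}\,
  \bigl|\langle \Psi, [A,B]\,\Psi \rangle\bigr|.
% Taking A = \phi(t,f), B = \pi(t,g), with [\phi(t,f), \pi(t,g)] = i\hbar\,(f|g)\,I:
\Delta \phi(t,f)_\Psi \, \Delta \pi(t,g)_\Psi \;\ge\;
  \frac{\hbar}{2}\,|(f|g)|.
```

The commutator is a multiple of the identity here, so its expectation value in any normalised state is just $i\hbar\,(f|g)$, which gives the state-independent bound quoted above.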
How to Use the Beam Envelopes Method for Wave Optics Simulations In the wave optics field, it is difficult to simulate large optical systems in a way that rigorously solves Maxwell's equations. This is because the waves that appear in the system need to be resolved by a sufficiently fine mesh. The beam envelopes method in the COMSOL Multiphysics® software is one option for this purpose. In this blog post, we discuss how to use the Electromagnetic Waves, Beam Envelopes interface and how to handle its restrictions. Comparing Methods for Solving Large Wave Optics Models In electromagnetic simulations, the wavelength always needs to be resolved by the mesh in order to find an accurate solution of Maxwell's equations. This requirement makes it difficult to simulate models that are large compared to the wavelength. There are several methods for stationary wave optics problems that can handle large models. These methods include the so-called diffraction formulas, such as the Fraunhofer, Fresnel-Kirchhoff, and Rayleigh-Sommerfeld diffraction formulas, and the beam propagation method (BPM), such as paraxial BPM and the angular spectrum method (Ref. 1). Most of these methods use certain approximations to the Helmholtz equation. They can handle large models because they are based on a propagation approach that solves for the field in a plane from a known field in another plane, so you don't have to mesh the entire domain; you just need a 2D mesh for the desired plane. Compared to these methods, the Electromagnetic Waves, Beam Envelopes interface in COMSOL Multiphysics (which we will refer to as the Beam Envelopes interface for the rest of the blog post) solves for the exact solution of the Helmholtz equation in a domain. It can handle large models; i.e., the meshing requirement can be significantly relaxed if a certain restriction is satisfied. A beam envelopes simulation for a lens with a millimeter-range focal length for a 1-um wavelength beam.
We discuss the Beam Envelopes interface in more detail below. Theory Behind the Beam Envelopes Interface Let's take a look at the math that the Beam Envelopes interface computes "under the hood". If you add this interface to a model, click the Physics Interface node, and change Type of phase specification to User defined, you'll see the following in the Equation section: Here, \bf E1 is the dependent variable that the interface solves for, called the envelope function. In the phasor representation of a field, \bf E1 corresponds to the amplitude and \phi_1 to the phase, i.e., {\bf E} = {\bf E1}\exp(-i\phi_1). The first equation, the governing equation for the Beam Envelopes interface, can be derived by substituting the second equation, the definition of the electric field, into the Helmholtz equation. If we know \phi_1, the only unknown is \bf E1 and we can solve for it. The phase, \phi_1, needs to be given a priori in order to solve the problem. With the second equation, we assume a form such that the fast oscillation part, the phase, can be factored out from the field. If that's true, the envelope \bf E1 is "slowly varying", so we don't need to resolve the wavelength. Instead, we only need to resolve the slow variation of the envelope. Because of this, simulating large-scale wave optics problems is possible on personal computers. A common question is: "When do you want the envelope rather than the field itself?" Lens simulation is one example. Sometimes you may need the intensity rather than the complex electric field. In fact, the square of the norm of the envelope gives the intensity. In such cases, it suffices to get the envelope function. What Happens If the Phase Function Is Not Accurately Known? The math behind the beam envelopes method introduces more questions: What if the phase is not accurately known? Can we use the Beam Envelopes interface in such cases? Are the results correct? To answer these questions, we need to do a little more math.
1D Example Let’s take the simplest test case: a plane wave, Ez = \exp(-i k_0 x), where k_0 = 2\pi / \lambda_0 for wavelength \lambda_0 = 1 um, it propagates in a rectangular domain of 20 um length. (We intentionally use a short domain for illustrative purposes.) The out-of-plane wave enters from the left boundary and transmits the right boundary without reflection. This can be simulated in the Beam Envelopes interface by adding a Matched boundary condition with excitation on the left and without excitation on the right, while adding a Perfect Magnetic Conductor boundary condition on the top and bottom (meaning we don’t care about the y direction). The correct setting for the phase specification is shown in the figure below. We have the answer Ez = \exp(-i k_0 x), knowing that the correct phase function is k_0 x or the wave vector is (k_0,0) a priori. Substituting the phase function in the second equation, we inversely get E1z = 1, the constant function. How many mesh elements do we need to resolve a constant function? Only one! (See this previous blog post on high-frequency modeling.) The following results show the envelope function \bf E1 and the norm of \bf E, ewbe.normE, which is equal to |{\bf E1}|. Here, we can see that we get the correct envelope function if we give the exact phase function, constant one, for any number of meshes, as expected. For confirmation purposes, the phase of \bf E1z, arg(E1z), is also plotted. It is zero, also as expected. Now, let’s see what happens if our guess for the phase function is a little bit off — say, (0.95k_0,0) instead of the exact (k_0,0). What kind of solutions do we get? Let’s take a look: What we see here for the envelope function is the so-called beating. It’s obvious that everything depends on the mesh size. To understand what’s going on, we need a pencil, paper, and patience. We knew the answer was Ez = \exp(-i k_0 x), but we had “intentionally” given an incorrect estimate in the COMSOL® software. 
Substituting the wrong phase function in the second equation, we get \exp(-i k_0 x)={\bf E1z} \exp(-0.95i k_0 x). This results in {\bf E1z} = \exp(-0.05i k_0 x), which is no longer constant one. This is a wave with a wavelength of \lambda_b= 2\pi/(0.05k_0) = 20 um, which is called the beat wavelength. Let's take a look at the plot above for six mesh elements. We get exactly what is expected (red line), i.e., {\bf E1z} = \exp(-0.05i k_0 x). The plot automatically takes the real part, showing {\bf E1z} = \cos(-0.05 k_0 x). The plots for the lower resolutions still show an approximate solution of the envelope function. This is as expected for finite element simulations: a coarser mesh gives a more approximate result. This shows that if we make a wrong guess for the phase function, we get a wrong (beat-convoluted) envelope function. Because of the wrong guess, the envelope function acquires the phase of the beating (green line), which is -0.05 k_0 x. What about the norm of \bf E? Look at the blue line in the plots above. It looks like the COMSOL Multiphysics software generated a correct solution for ewbe.normE, which is constant one. Let's calculate: Substituting both the wrong (analytical) phase function and the wrong (beat-convoluted) envelope function in the second equation, we get {\bf Ez} = \exp(-0.05i k_0 x) \times \exp(-0.95i k_0 x) = \exp(-i k_0 x), which is the correct fast field! If we take the norm of \bf E, we get the correct solution, constant one. This is what we wanted. Note that we can't display \bf E itself because the domain can be too large, but we can find \bf E analytically and display the norm of \bf E with a coarse mesh. This is not a trick. Instead, we see that if the phase function is off, the envelope function will also be off, since it becomes beat-convoluted. However, the norm of the electric field can still be correct.
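The bookkeeping above is easy to reproduce outside any FEM software. This sketch (plain NumPy, not a COMSOL model) builds the envelope implied by the deliberately wrong phase guess and confirms both the 20 um beat and the constant field norm:

```python
import numpy as np

lam0 = 1.0                        # wavelength, um
k0 = 2 * np.pi / lam0
x = np.linspace(0.0, 20.0, 2001)  # the 20 um domain

E = np.exp(-1j * k0 * x)          # exact field Ez
phi_guess = 0.95 * k0 * x         # deliberately wrong phase function
E1 = E * np.exp(1j * phi_guess)   # envelope implied by E = E1 exp(-i phi)

beat_wavelength = 2 * np.pi / (0.05 * k0)  # 20 um, as derived above
norm_E = np.abs(E1)               # |E| = |E1|, still constant one
```

The envelope oscillates at the beat wavelength, yet its modulus (and hence the field norm) stays exactly one everywhere.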
Therefore, it is important that the beat-convoluted envelope function be correctly computed in order to get the correct electric field. The above plots clearly show that. The six-element mesh case gives the completely correct electric field norm because it fully resolves the beat-convoluted envelope function. The other meshes give an approximate solution to the beat-convoluted envelope function, depending on the mesh size, and they do the same for the field norm. This is a general consequence that holds true for arbitrary cases. No matter what phase function we use in COMSOL Multiphysics, we are okay as long as we correctly solve the first equation for \bf E1 and as long as the phase function is continuous over the domain. When there are multiple materials in a domain, the continuity of the phase function is also critical to the solution accuracy. We may discuss this in a future blog post, but it is also mentioned in this previous blog post on high-frequency modeling. 2D Example So far, we have discussed a scalar wave number. More generally, the phase function is specified by the wave vector. When the wave vector is not guessed correctly, it will have vector-valued consequences. Suppose we have the same plane wave from the first example, but we make a wrong guess for the phase, i.e., k_0(x \cos \theta + y \sin \theta) instead of k_0 x. In this case, the wave number is correct but the wave vector is off. This time, the beating takes place in 2D. Let's start by performing the same calculations as in the 1D example. We have \exp(-i k_0 x)= {\bf E1z}(x,y) \exp(-i k_0 (x \cos \theta+y \sin \theta) ) and the envelope function is now calculated to be {\bf E1z}(x,y) = \exp(-i k_0 (x (1-\cos \theta) -y \sin \theta) ), which is a tilted wave propagating in the direction (1-\cos \theta, -\sin \theta), with the beat wave number k_b = 2 k_0\sin (\theta/2) and the beat wavelength \lambda_b=\lambda_0/(2\sin (\theta/2)).
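The 2D version of the substitution can be sketched the same way; the envelope carries the difference between the true and guessed wave vectors (plain NumPy, not a COMSOL model):

```python
import numpy as np

lam0 = 1.0
k0 = 2 * np.pi / lam0
theta = np.deg2rad(15.0)

# True wave vector (k0, 0); guessed wave vector k0 (cos θ, sin θ).
# The envelope then propagates with the difference wave vector:
kb_vec = np.array([k0 * (1.0 - np.cos(theta)), -k0 * np.sin(theta)])
kb = np.linalg.norm(kb_vec)   # beat wave number, equals 2 k0 sin(θ/2)
lam_b = 2 * np.pi / kb        # beat wavelength, equals λ0 / (2 sin(θ/2))
```

The identity follows from |(1 - cos θ, -sin θ)| = √(2 - 2 cos θ) = 2 sin(θ/2), so a small angular error in the guess gives a long, easily meshed beat.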
The following plots are the results for θ = 15° for a domain of 3.8637 um x 29.348 um for different maximum mesh sizes. The same boundary conditions are given as in the previous 1D example. The only difference is that the incident wave on the left boundary is {\bf E1z}(0,y) = \exp(i k_0 y \sin \theta). (Note that we have to give the correspondingly wrong boundary condition because our phase guess is wrong.) In the result for the finest mesh (rightmost), we can confirm that \bf E1z is computed just as we analyzed in the above calculation and the norm of \bf Ez is computed to be constant one. These results are consistent with the 1D example. The electric field norm (top) and the envelope function (bottom) for the wrong phase function k_0(x \cos\theta +y \sin\theta ), computed for different mesh sizes. The color range represents the values from -1 to 1. Simulating a Lens Using the Beam Envelopes Interface The ultimate goal here is to simulate an electromagnetic beam through optical lenses in a millimeter-scale domain with the Beam Envelopes interface. How can we achieve this? We already discussed how to compute the right solution. The following example is a simulation of a hard-apertured flat-top incident beam on a plano-convex lens with a radius of curvature of 500 um and a refractive index of 1.5 (approximately 1 mm focal length). Here, we use \phi_1 = k_0 x, which is not accurate at all. In the region before the lens, there is a reflection, which creates interference. In the lens, there are multiple reflections. After the lens, the phase is spherical so that the beam focuses into a spot. So this phase function is far from what is actually happening around the lens. Still, we have a clue. If we plot \bf E1z, we see the beating. Plot of \bf E1z. The inset shows the finest beat wavelength inside the lens. As can be seen in the plot, a prominent beating occurs in the lens (see the inset).
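A quick sketch (plain Python, my own arithmetic mirroring the substitution used in the earlier examples) of where this beating comes from, together with the normal-incidence Fresnel reflectance that makes the backward beam negligible:

```python
n = 1.5      # refractive index of the lens
lam0 = 1.0   # vacuum wavelength, um

# Forward beam in the lens: true phase n k0 x vs. the guess k0 x, so the
# envelope oscillates at (n - 1) k0, i.e. a beat wavelength of λ0/(n - 1).
beat_forward = lam0 / (n - 1.0)   # 2 λ0 for n = 1.5

# Incident vs. reflected beam in front of the lens: phases ∓ k0 x differ
# by 2 k0, giving λ0/2 beating where they interfere.
beat_standing = lam0 / 2.0

# Normal-incidence Fresnel power reflectance, roughly 4% for n = 1.5,
# which is why the backward beam is barely visible.
R = ((n - 1.0) / (n + 1.0)) ** 2
```

The forward beat of 2λ0 is what sets the mesh requirement inside the lens; the λ0/2 standing-wave beat carries only the weak (4%) reflection and can be left unresolved.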
Actually, the finest beat wavelength is \lambda_0/2 in front of the lens. To prove this, we can perform the same calculations as in the previous examples. This finest beating is due to the interference between the incident beam and the reflected beam, but we can ignore it because it doesn't contribute to the forward propagation. We can see that the mesh doesn't resolve the beating before the lens, but let's ignore this for now. The beat wavelength in the lens is 2\lambda_0/5 for the backward beam and 2\lambda_0 for the forward beam for n = 1.5, which we can also prove in the same way as in the previous examples. Again, we ignore the backward beam. In the plot, what's visible is the 2\lambda_0 beating for the forward beam. (The backward beam is only a small fraction of the incident beam, approximately 4% for n = 1.5, so it's not visible.) The following figure shows the mesh resolving the beat inside the lens with 10 mesh elements. The beat wavelength inside the lens. The mesh resolves the beat with 10 mesh elements. Other than the beating for the propagating beam in the lens, the beat wavelength in the subsequent air domain is pretty large, so we can use a coarse mesh there. This may not hold for faster lenses, which have a more rapid quadratic phase and can have a very short beat wavelength. In this example, we must use a finer mesh only in the lens domain to resolve the fastest beating. The computed field norm is shown at the top of this blog post. To verify the result, we can compute the field at the lens exit surface by using the Frequency Domain interface and then use the Fresnel diffraction formula to calculate the field at the focus. The results for the field norm agree very well. Comparison between the Beam Envelopes interface and the Fresnel diffraction formula. The mesh resolves the beat inside the lens with 10 mesh elements. The following comparison shows the mesh size dependence.
We get a pretty good result with our standard recommendation, \lambda_b/6, which is equal to \lambda_0/3. This makes it easier to mesh the lens domain. Mesh size dependence of the field norm at the focus. As of version 5.3a of the COMSOL® software, the Fresnel Lens tutorial model includes a computation with the Beam Envelopes interface. Fresnel lenses are typically extremely thin (on the order of a wavelength). Even if there is diffraction in and around the lens surface discontinuities, the fine mesh around the lens does not significantly impact the total number of mesh elements. Concluding Remarks In this blog post, we discussed what the Beam Envelopes interface does "under the hood" and how we can get accurate solutions for wave optics problems. Even if we get beating, the beat wavelength can be much longer than the wavelength, which makes it possible to simulate large optical systems. Although it may seem tedious to check that the mesh resolves the beating, this is not extra work that is only required for the Beam Envelopes interface: when you use the finite element method, you always need to check the mesh size dependence to ensure accurately computed solutions. Next Steps Try it yourself: Download the file for the millimeter-range focal length lens by clicking the button below. References J. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, 2005.
Fig. 5.8 Control volume used to calculate the local averaged velocity in three coordinates. The integral approach is intended to deal with the "big" picture. Indeed the method is used in this part of the book for this purpose. However, very little has been written about the usability of this approach as a way to calculate averaged quantities in the control system. Sometimes it is desirable to find the averaged velocity or the velocity distribution inside a control volume. There is no general way to obtain these quantities. Therefore an example will be provided to demonstrate the use of this approach. Consider a container filled with liquid, with one exit through which the liquid flows out, as shown in Figure 5.8. The velocity has three components in each of the coordinates, under the assumption that the flow is uniform and the surface is straight. The integral approach is used to calculate the averaged velocity of each of the components. To relate the velocity in the \(z\) direction to the flow rate out of the exit, a mass balance is constructed. A similar control volume construction to find the velocity of the boundary (the height) can be carried out. The control volume is bounded by the container wall including the exit of the flow. The upper boundary is a surface parallel to the upper surface but at a distance \(Z\) from the bottom. The mass balance reads \[ \label{mass:eq:containerZs} \int_V \dfrac{d\rho}{dt} \,dV + \int_A U_{bn}\,\rho\,dA + \int_A U_{rn}\,\rho\,dA = 0 \tag{41} \] For constant density (conservation of volume) and (\(h>z\)), this equation reduces to \[ \label{mass:eq:containerZrho} \int_A U_{rn}\,\rho\,dA = 0 \tag{42} \] In the container case, for uniform velocity, equation (42) becomes \[ \label{mass:eq:Zu} U_z\,A = U_{e}\,A_e \Longrightarrow U_z = - \dfrac{A_e}{A} U_e \tag{43} \] It can be noticed that the boundary is not moving and the mass inside this control volume does not change. The velocity \(U_z\) is the averaged velocity downward.
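Equation (43) is simple enough to check with numbers; a minimal sketch (the areas and exit velocity below are made-up illustrative values, not from the text):

```python
# Averaged downward velocity from volume conservation, eq. (43):
#   U_z * A = U_e * A_e  =>  U_z = -(A_e / A) * U_e
A = 0.10     # container cross-section area, m^2 (assumed)
A_e = 0.002  # exit area, m^2 (assumed)
U_e = 3.0    # exit velocity, m/s (assumed)

U_z = -(A_e / A) * U_e   # small downward velocity for a small exit
print(U_z)
```

As expected, a small exit area relative to the container cross section gives a slow downward motion of the liquid, here 0.06 m/s.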
Fig. 5.9 Control volume and system before and after the motion. The \(x\) component of the velocity is obtained by using a different control volume. The control volume is shown in Figure 5.9. Its boundary lies within the container, away from the flow exit, with the blue line marking its projection into the page (an area), as shown in Figure 5.9. The mass conservation for constant density of this control volume is \[ \label{mass:eq:containerXs} - \int_A U_{bn}\,\rho\,dA + \int_A U_{rn}\,\rho\,dA = 0 \tag{44} \] Use of a control volume not included in the previous analysis provides the velocity at the upper boundary, which is the same as the velocity in the \(y\) direction. Substituting into (44) results in \[ \label{mass:eq:containerXU} \dfrac{A_e}{A}\, U_e \int_{{A_{x}}^{-}} \rho\,dA + \int_{A_{yz}} U_{x}\,\rho\,dA = 0 \tag{45} \] Where \({A_{x}}^{-}\) is the area shown in the Figure under this label, and \(A_{yz}\) refers to the area into the page in the Figure under the blue line. Because averaged velocities and constant density are used, equation (45) is transformed into \[ \label{mass:eq:containerXUa} \dfrac{A_e}{A} {A_{x}}^{-} U_e + U_{x}\,\overbrace{Y(x)\,h}^{A_{yz}} = 0 \tag{46} \] Where \(Y(x)\) is the length of the (blue) line of the boundary. It can be noticed that the velocity \(U_x\) generally increases with \(x\) because \({A_{x}}^{-}\) increases with \(x\). The calculations for the \(y\) direction are similar to those done for the \(x\) direction. The only difference is that the velocity has two different directions: one zone, to the right of the exit, with flow to the left, and one zone to the left with averaged velocity to the right. If the volumes on the left and the right are symmetrical, the averaged velocity will be zero. Fig.
5.10 Circular cross section for finding \(U_x\) and various cross sections. Example 5.12 Calculate the velocity, \(U_x\), for a cross section of circular shape (cylinder). Solution 5.12 The relationship for this geometry needs to be expressed. The length of the line \(Y(x)\) is \[ \label{Ccontainer:yLength} Y(x) = 2\,r\, \sqrt{ 1 - \left(1-\dfrac{x}{r}\right)^2 } \tag{47} \] This relationship can also be expressed in terms of \(\alpha\) as \[ \label{Ccontainer:yLengthalpha} Y(x) = 2 \, r\,\sin\alpha \tag{48} \] Since this expression is simpler it will be adopted. The relationship between the radius, the angle \(\alpha\), and \(x\) is \[ \label{Ccontainer:rx} x = r (1 -\sin\alpha) \tag{49} \] The area \({A_{x}}^{-}\) is expressed in terms of \(\alpha\) as \[ \label{Ccontainer:area} {A_{x}}^{-} = \left( \alpha - \dfrac{1}{2}\,\sin(2\alpha) \right) r^2 \tag{50} \] Thus, the velocity, \(U_x\), is \[ \label{Ccontainer:Uxs} \dfrac{A_e}{A} \left( \alpha - \dfrac{1}{2}\,\sin(2\alpha) \right) r^2\, U_e + U_{x}\, 2 \, r\,\sin\alpha\,h = 0 \tag{51} \] \[ \label{Ccontainer:Uxf} U_x = \dfrac{A_e}{A} \dfrac{r}{h} \dfrac{\left( \alpha - \dfrac{1}{2}\,\sin(2\alpha) \right) }{\sin\alpha} \, U_e \tag{52} \] The averaged velocity is defined as \[ \label{Ccontainer:UxaveDef} \overline{U_x} = \dfrac{1}{S}\int_S U \,dS \tag{53} \] where \(S\) here represents some length. It can be represented in the same way for angle calculations; the value of \(dS\) is \(r\cos\alpha\,d\alpha\). Integrating the velocity over the entire container and dividing by the total length provides the averaged velocity: \[ \label{Ccontainer:Uxtotal} \overline{U_x} = \dfrac{1}{2\,r} \int_0^{\pi}\dfrac{A_e}{A} \dfrac{r}{h} \dfrac{\left( \alpha - \dfrac{1}{2}\,\sin(2\alpha) \right) }{\tan\alpha} \, U_e\, r\, d\alpha \tag{54} \] which results in \[ \label{Ccontainer:Uxtotala} \overline{U_x} = \dfrac{\left( \pi -1\right)}{4} \dfrac{A_e}{A} \dfrac{r}{h} \, U_e \tag{55} \] Example 5.13 Fig.
5.11 \(y\) velocity for a circular shape. What is the averaged velocity if only half the section is used? State your assumptions and how this is similar to the previous example. Solution 5.13 The flow out in the \(x\) direction is zero because of symmetry. That is, the flow field is a mirror image; every point has a velocity of the same magnitude in the opposite direction. The flow in half of the cylinder, either the right or the left half, has a non-zero averaged velocity. The calculations are similar to those in the previous example 5.12. The main concept that must be recognized is that half of the flow must come from one side and the other half from the other side. Thus, equation (46) is modified to be \[ \label{ene:eq:containeryU} \dfrac{A_e}{A} {A_{x}}^{-} U_e + U_{x}\,\overbrace{Y(x)\,h}^{A_{yz}} = 0 \tag{56} \] The integral is the same as before, but the upper limit is only \(\pi/2\): \[ \label{Ccontainer:Uytotal} \overline{U_x} = \dfrac{1}{2\,r} \int_0^{\pi/2}\dfrac{A_e}{A} \dfrac{r}{h} \dfrac{\left( \alpha - \dfrac{1}{2}\,\sin(2\alpha) \right) }{\tan\alpha} \, U_e\, r\, d\alpha \tag{57} \] which results in \[ \label{Ccontainer:Uytotala} \overline{U_x} = \dfrac{\left( \pi -2\right)}{8} \dfrac{A_e}{A} \dfrac{r}{h} \, U_e \tag{58} \] Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
From Ravenel's article "Localization and Periodicity in Homotopy Theory": Two spectra $E$ and $F$ are said to be Bousfield equivalent when they give the same localization functor, or equivalently when $E_\ast (X)=0$ iff $F_\ast (X)=0$. The equivalence class of $E$ is denoted by $\langle E \rangle$. There is a partial ordering on the set of Bousfield classes. We say that $\langle E \rangle \geq \langle F \rangle$ if $E_\ast (X)=0$ implies that $F_\ast (X)=0$. Thus $\langle S^0 \rangle$ is the biggest class and $\langle pt \rangle $ is the smallest. Smash products and wedges are well defined on Bousfield classes. A class $\langle F \rangle$ is the complement of $\langle E \rangle$ if $\langle E \rangle \vee \langle F \rangle = \langle S^0 \rangle$ and $\langle E \rangle \wedge \langle F \rangle = \langle pt \rangle$. A class may or may not have a complement. It is easy to find examples of classes (e.g., that of an integer Eilenberg-Mac Lane spectrum) that do not. I was trying to figure out why this last statement is true, and at first I wanted to apply cohomotopy to a hypothetical equivalence $H\mathbb{Z} \vee F \simeq S^0$, but then I realized that of course there's no reason that we should have such an equivalence. Is there some other easy approach?
The methods discussed below are nonparametric analogues of analysis of variance (ANOVA). When the ANOVA assumptions (normality and homogeneity of variance of the residuals) cannot be met, a nonparametric test is appropriate. Notations Independent random samples (independence within and across samples); the $i^{th}$ sample has $n_i$ observations, $i = 1, \cdots, k$. The general question is equality of means or medians, or identical distributions. $x_{ij}$ - observations from sample $i$, $j = 1, \cdots, n_i$, $i = 1, \cdots, k$ $N = \sum\limits_{i=1}^k{n_i}$ - total number of observations across all samples $r_{ij}$ - rank of $x_{ij}$ in the combined sample (ranges from $1$ to $N$) $S_i = \sum\limits_{j=1}^{n_i}{r_{ij}}$ - sum of ranks of observations in the $i^{th}$ sample (group) The Kruskal-Wallis Test The K-W test extends the WMW test via the Wilcoxon formulation. It can be used as an overall test for equality of means / medians at the population level, when samples are from otherwise identical continuous distributions (i.e. shifts of location). It can also be used as a test for equality of distribution against an alternative that at least one of the populations tends to yield larger observations than at least one of the other populations. Under $H_0$ (equality of means / medians), all of the $S_i$ should be "roughly similar" because each sample will have small, medium and large ranks. Under $H_1$ (not all the same), some $S_i$ should be "larger" than others, and some "smaller" - clusters of small / large ranks in some of the samples. The test statistic \[\begin{aligned} T = \frac{12S_k}{N(N+1)} - 3(N+1) && \text{where } S_k = \sum\limits_{i=1}^k{\frac{S_i^2}{n_i}} \end{aligned}\] $S_k$ will be inflated if there's a cluster of large ranks in one or two of the samples. A large $T$ is evidence against $H_0$, as shown in the figure below [1]. If $N$ is moderate or large, then under $H_0$, $T \dot\sim \chi^2_{(k-1)}$.
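A minimal sketch of the computation of $T$ from the rank sums, on made-up data (the values have no ties, so plain ranks $1, \ldots, N$ suffice):

```python
# Kruskal-Wallis statistic computed from scratch:
#   T = 12 / (N (N+1)) * sum_i S_i^2 / n_i  -  3 (N+1)
# Three small illustrative samples (hypothetical data, no ties).
samples = [[6.4, 6.8, 7.2], [8.3, 8.7, 9.1, 9.4], [5.1, 5.6, 6.0]]

pooled = sorted(x for s in samples for x in s)
rank = {x: i + 1 for i, x in enumerate(pooled)}   # rank in the combined sample

N = len(pooled)
S = [sum(rank[x] for x in s) for s in samples]     # rank sums S_i
S_k = sum(s_i ** 2 / len(grp) for s_i, grp in zip(S, samples))
T = 12 * S_k / (N * (N + 1)) - 3 * (N + 1)
print(S, round(T, 3))  # [15, 34, 6] 8.018
```

With $k - 1 = 2$ degrees of freedom, this value of $T$ is well past the usual $\chi^2$ cutoff of $5.99$ at $\alpha = 0.05$, reflecting the clear separation of the three samples.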
In R, the function kruskal.test() can be used in two ways: Input a data vector x and a vector of group indicators g (x and g should both have length $N$). Now we can call kruskal.test(x, g). This method emphasizes the Wilcoxon nature. Use the same input, but with a formula: kruskal.test(x ~ g). This model formulation emphasizes the connection to ANOVA. A very useful R package, coin, includes more functionality for this test. The Multiple Comparison Problem If $H_0$ is rejected, i.e. there's a difference among the $k$ groups, we may be interested in exploring which groups differ. This is the multiple comparisons problem. There are a lot of different ways to address this issue, and we'll look at one of these - consider all $\binom{k}{2}$ pairwise comparisons. For each pair of samples $i$ and $j$, declare a difference if \[\left|\frac{S_i}{n_i} - \frac{S_j}{n_j}\right| > t_{1 - \frac{\alpha}{2}} \left[\frac{N(N+1)}{12} \cdot \frac{N-1-T}{N-k}\right]^{\frac{1}{2}}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)^{\frac{1}{2}}\] where $T$ is the K-W statistic, the LHS is the difference between the average ranks of samples $i$ and $j$, and the RHS is a measure of the variability of that difference. The $t_{1 - \frac{\alpha}{2}}$ is the $1 - \frac{\alpha}{2}$ upper quantile from the $t$-distribution with $N - k$ degrees of freedom. The problem now is, what is the "right" $\alpha$ to use here? A naïve approach is to test all $\binom{k}{2}$ comparisons post-hoc (after rejecting $H_0$ of overall equality of means/medians) at the same level $\alpha$, say $0.05$. The problem with this approach is that our Type I error probability is inflated above $0.05$. The more comparisons there are, the more inflation there is, which makes our inference unreliable. We need to make an adjustment, and there are also lots of ways to do this! One popular way is the Bonferroni correction. The idea is to control the overall Type I error probability at level $\alpha$ by testing each of the post-hoc comparisons at level $\frac{\alpha}{\binom{k}{2}}$ instead of at level $\alpha$.
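The Bonferroni adjustment itself is a one-liner; a sketch with illustrative values for $k$ and $\alpha$ (not tied to any particular data set):

```python
from math import comb
from itertools import combinations

# Bonferroni correction: to keep the overall Type I error at alpha across
# all C(k,2) post-hoc pairwise comparisons, test each pair at alpha / C(k,2).
alpha, k = 0.05, 4            # illustrative values
n_pairs = comb(k, 2)          # 6 pairwise comparisons for k = 4 groups
alpha_adj = alpha / n_pairs   # per-comparison significance level

pairs = list(combinations(range(1, k + 1), 2))  # (1,2), (1,3), ..., (3,4)
print(n_pairs, round(alpha_adj, 5))  # 6 0.00833
```

Each of the six pairwise criteria would then be checked at level $0.00833$ rather than $0.05$.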
With a more stringent cutoff, we make it harder to reject a null hypothesis of "no difference". Ties in the Data We use the mid-ranks for ties as before. Define: \[\begin{aligned} S_r = \sum_{i,j}{r_{ij}^2} && C = \frac{1}{4}N(N+1)^2 \end{aligned}\] and our test statistic: \[T = \frac{(N-1)(S_k - C)}{S_r - C}\] If there are no ties, this reduces to the previous statistic. This will be approximately $\chi^2_{(k-1)}$ for large $N$. The Jonckheere-Terpstra Test The J-T test is a test for a directional alternative. This is more specific than the K-W test, as now there is an a priori ordering, i.e. $H_1: \theta_1 \leq \theta_2 \leq \cdots \leq \theta_k$ with at least one strict inequality. Here we have ordered means/medians. This test has more statistical power than the K-W test. The K-W test can still be used, but we get a more sensitive analysis if we use a test that is specific to this type of alternative. Importantly, as with all cases of ordered alternatives, the order of the groups and the direction of the alternative need to be set prior to data collection, based on theory, prior experience, etc. You can't look at the data and then decide on the order, or decide to test an ordered alternative. The J-T test is designed for this case, and extends the Mann-Whitney two-sample formulation. As usual, this is best explained with an example: braking distances taken by motorists to stop when driving at various speeds. A priori (before seeing any data) we have the idea that braking distance increases as driving speed increases. A subset of the data:

Sample   Speed (mph)   Braking distances (feet)
1        20            48
2        25            33, 48, 56, 59
3        30            60, 67, 101
4        35            85, 107

The speeds are ordered in the data, and we expect braking distance to increase with speed. The procedure, with $k$ samples (here $k=4$), is to compute all pairwise M-W test statistics and add them up, i.e. compute all of the $U_{rs}$ relevant to the $r^{th}$ sample ($r = 1, \cdots, k-1$) and any sample $s$ for which $s > r$.
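The pairwise Mann-Whitney counts in this procedure can be sketched directly on the braking-distance data:

```python
# Jonckheere-Terpstra statistic for the braking-distance data:
# U_rs counts pairs where a value from the later sample s exceeds a value
# from the earlier sample r; ties count one half.
samples = [
    [48],               # sample 1: 20 mph
    [33, 48, 56, 59],   # sample 2: 25 mph
    [60, 67, 101],      # sample 3: 30 mph
    [85, 107],          # sample 4: 35 mph
]

def mw_count(a, b):
    """Mann-Whitney count: pairs (x in a, y in b) with y > x, ties as 1/2."""
    return sum((y > x) + 0.5 * (y == x) for x in a for y in b)

U = sum(mw_count(samples[r], samples[s])
        for r in range(len(samples)) for s in range(r + 1, len(samples)))
print(U)  # 32.5
```

This reproduces the hand computation that follows: $U_{12} = 2.5$ (one tie at 48), and the six pairwise counts sum to $32.5$.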
In our example, $U = U_{12} + U_{13} + U_{14} + U_{23} + U_{24} + U_{34}$, e.g. $U_{12} =$ the number of sample $2$ values that exceed each sample $1$ value $= 2.5$ ($48$ counts as a half since it is equal to the value in sample $1$). Likewise: $U_{13} = 3, U_{14} = 2, U_{23} = 12, U_{24} = 8, U_{34} = 5$. In total, our test statistic is $U = 32.5$. We could potentially do an exact calculation by looking at all the possible configurations. Or we could use simulation, where we randomly sample from all possible configurations. The final way is an asymptotic approximation. In R, the package clinfun has a function called jonckheere.test() that does the job for us. When $N < 100$ and there are no ties, it does an exact calculation. Otherwise, a normal approximation is used: \[\begin{aligned} E(U) &= \frac{1}{4}\left(N^2 - \sum\limits_{i=1}^k{n_i^2}\right) \\ Var(U) &= \frac{1}{72}\left[N^2(2N+3) - \sum\limits_{i=1}^k{\left[n_i^2(2n_i+3)\right]}\right] \end{aligned}\] In our case, since we have an ordered alternative, a one-sided test should be used. Also, since we have very small samples, can we trust a normal approximation? If we had perfect separation (i.e. all observations in sample $1$ < all observations in sample $2$ < $\cdots$ without overlap), the value of the test statistic $U$ would be $35$. Our observed value is pretty close to that, so we have circumstantial evidence against $H_0$. But the questions are: how many configurations are there in total, and how many of them have a value of $U > 32.5$? The exact calculation (which ignores ties) gives a p-value of $0.0011$. The normal approximation (we have small samples) gives a p-value of $0.0022$. Both values are very small, and we have converging evidence against $H_0$. We may conclude that braking distance increases as driving speed increases. The Median Test Another easy generalization is the median test.
We have $k$ independent samples instead of two: \[\begin{aligned} H_0: &\theta_1 = \theta_2 = \cdots = \theta_k \\ H_1: &\text{not all } \theta_i \text{ are equal} \end{aligned}\] Following the same procedure as before, we get a $2 \times k$ contingency table:

         sample 1   sample 2   $\cdots$   sample k
Above M  $a_1$      $a_2$      $\cdots$   $a_k$      $A$
Below M  $b_1$      $b_2$      $\cdots$   $b_k$      $B$
         $n_1$      $n_2$      $\cdots$   $n_k$      $N$

Column margins $n_1, \cdots, n_k$ are fixed by design, and row margins $A$ and $B$ are constrained by the use of the median ($A \approx B$). What doesn't translate from the $2 \times 2$ table is: what does a "more extreme" table mean now? Another issue is that knowing one cell value isn't enough to fill out the whole table, since we have more degrees of freedom here; we need $k-1$ of them. Because of those issues, we'll go straight to an approximation here. Our test statistic is \[T = \sum\limits_{i=1}^k {\frac{\left(a_i - \frac{n_i A}{N}\right)^2}{\frac{n_i A}{N}}} + \sum\limits_{i=1}^k{\frac{\left(b_i - \frac{n_i B}{N}\right)^2}{\frac{n_i B}{N}}}\] where $\frac{n_iA}{N}$ is the expected number of cases above $M$ for group $i$ under $H_0$. This test statistic is essentially the $\chi^2$ test for independence: \[\chi^2 = \sum_{i=1}^m{\frac{(O_i - E_i)^2}{E_i}}\] The Friedman Test The Friedman test makes inference for several related samples. Its parametric analogue is the two-way classification ANOVA from a randomized block design [2]. Data: we have $b$ independent "blocks" (patients, plots of land, etc.) and $t$ "treatments" which are applied to (or measured on) each block. So the data form a $b \times t$ array; usually we put blocks along the rows and treatments along the columns. Blocks are independent; within a block, measurements are dependent. Procedure: within each block, independently rank the observations from $1$ (smallest) to $t$ (largest). The null hypothesis: all treatments have the same effect; the alternative is not $H_0$.
The test statistic: \[T = \frac{12}{bt(t+1)} \sum\limits_{i=1}^t {S_i^2} - 3b(t+1)\] where $S_i$ is the sum over all blocks of the ranks allocated to treatment $i$ (rank along the rows, sum down the columns). The sum is the core of the test statistic that encapsulates the procedure; the other constants come from the mathematical theory and yield a distribution we can work with. For $b, t$ that are "not too small", $T$ is approximately $\chi^2_{(t-1)}$ under $H_0$. The test looks at whether we fall in the far right tail of the $\chi^2$ distribution. Again, it's much easier to explain the test with an example. Example: $7$ students' pulse rate per minute was measured (1) before exercise, (2) immediately after exercise, and (3) 5 min. after exercise. \[\begin{aligned} &H_0: \text{no effect of exercise / rest on pulse rates,} \\ \text{vs. } &H_1: \text{not } H_0 \end{aligned}\] Results (repeated measures):

Student   Before   Right after   5 min. after
1         72       120           76
2         96       120           95
3         88       132           104
4         92       120           96
5         74       101           84
6         76       96            72
7         82       112           76

Our procedure says to rank along each row separately:

Student   Before   Right after   5 min. after
1         1        3             2
2         2        3             1
3         1        3             2
4         1        3             2
5         1        3             2
6         2        3             1
7         2        3             1

then sum the ranks down each column, which gives \[S_1 = 10, S_2 = 21, S_3 = 11\] For these data, $T = 10.57$. Compare to $\chi^2_{(2)}$ at $\alpha = 0.05$, where the critical value is $5.99$. We have strong evidence against $H_0$. The R code used for generating the figure was inspired by this post: ↩︎ library(tidyverse) df <- tibble( x = seq(0, 20, 0.01), y = dchisq(x, 3) ) rejection.region <- df %>% filter(x >= qchisq(0.95, 3)) %>% bind_rows(tibble( x = qchisq(0.95, 3), y = 0 )) ggplot(rejection.region, aes(x, y))+ geom_polygon(show.legend = F)+ geom_line(data = df, aes(x, y))+ theme_minimal()
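The Friedman computation for the pulse-rate example above can be reproduced in a few lines:

```python
# Friedman statistic for the pulse-rate data: rank within each block
# (student), sum the ranks per treatment, then
#   T = 12 / (b t (t+1)) * sum_i S_i^2  -  3 b (t+1).
data = [  # rows: students; columns: before, right after, 5 min after
    [72, 120, 76], [96, 120, 95], [88, 132, 104], [92, 120, 96],
    [74, 101, 84], [76, 96, 72], [82, 112, 76],
]
b, t = len(data), len(data[0])

def row_ranks(row):
    order = sorted(row)                      # no ties in this data set
    return [order.index(v) + 1 for v in row]

S = [sum(col) for col in zip(*(row_ranks(r) for r in data))]
T = 12 / (b * t * (t + 1)) * sum(s ** 2 for s in S) - 3 * b * (t + 1)
print(S, round(T, 2))  # [10, 21, 11] 10.57
```

This matches the rank sums and the value $T = 10.57$ obtained by hand above.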
To read before (following some answers or comments) My background: as a mathematician, probability theory is my field of research; please do not answer/comment just to refer to a course on probability theory. I expose below an erroneous application of probability theory. It is summarized at the end, in the Summary section. The two probabilities $\Pr$ and ${\Pr}^\ast$ are explained just before. If you are not a bit of a specialist in probability theory, please do not downvote if you disagree with this point, unless you show an error in my reasoning at a precise point (after numerous edits, I believe I have detailed my explanation enough that one can point to a precise spot). The previous comments helped me to write a short summary on my blog (rather oriented, in general, towards a mathematical audience) about the prisoner's dilemma. I have just come across some papers and slides about quantum cognition, including: Cold and hot cognition: Quantum probability theory and realistic psychological modeling, by P. J. Corr Applications of quantum probability theory to dynamic decision making, by Busemeyer, Balakrishnan and Wang Quoting the first one about the prisoner's dilemma: The literature shows: (1) knowing that one's partner has defected leads to a higher probability of defection; (2) knowing that one's partner has cooperated also leads to a higher probability of defection; and, most troubling for Classical Probability theory, (3) not knowing one's partner's decision leads to a higher probability of cooperation. The second one provides some empirical data supporting this claim. The data are relative frequencies: one deals with the frequentist interpretation of probability. I disagree with the claim that the law of total probability is violated here. The conditional probabilities are misinterpreted. Let $A$ and $B$ be the two prisoners.
Consider the experiment consisting in asking them to choose between defecting or cooperating, without knowing the choice of the other prisoner. Then, the conditional probability $\Pr(A \textrm{ defects} \mid B \textrm{ defects})$ is the long-term relative frequency of the event "$A$ defects" among all those experiments for which the event "$B$ defects" occurs. This has nothing to do with the probability that $A$ defects when $A$ knows that $B$ defects, hereafter denoted by ${\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ defects})$. The law of total probability says that $$ \Pr(A \textrm{ defects}) = \Pr(A \textrm{ defects} \mid B \textrm{ defects})\Pr(B \textrm{ defects}) + \Pr(A \textrm{ defects} \mid B \textrm{ cooperates})\Pr(B \textrm{ cooperates}), $$ thereby implying that $\Pr(A \textrm{ defects})$, as a weighted average of the two conditional probabilities $\Pr(A \textrm{ defects} \mid B \textrm{ defects})$ and $\Pr(A \textrm{ defects} \mid B \textrm{ cooperates})$, lies between these two conditional probabilities. The above mentioned papers claim that the law of total probability is violated because $\Pr(A \textrm{ defects})$ does not lie between ${\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ defects})$ and ${\Pr}^\ast (A \textrm{ defects} \mid B \textrm{ cooperates})$, where ${\Pr}^\ast (A \textrm{ defects} \mid B \textrm{ defects})$ is the probability that $A$ defects when $A$ knows that $B$ defects, and, as said before, $${\Pr}^\ast (A \textrm{ defects} \mid B \textrm{ defects}) \neq \Pr(A \textrm{ defects} \mid B \textrm{ defects})$$ So, is this an error, or do I misunderstand the purpose behind the modeling based on quantum probability? EDIT: details on the difference between $\Pr$ and ${\Pr}^\ast$ To explain the difference, I give the way to get an empirical estimate of these probabilities. Experiment 1 ($\Pr$) Ask $A$ and $B$ to perform the prisoner's dilemma, without giving any information.
Repeat this experiment a large number of times, independently (with other pairs $A$ and $B$). The estimate of $\Pr(A \textrm{ defects})$ is the relative frequency of the experiments for which $A$ defects. The estimate of $\Pr (A \textrm{ defects} \mid B \textrm{ defects})$ is the relative frequency of the event "$A$ defects" among all those experiments for which the event "$B$ defects" occurs. Experiment 2 (${\Pr}^\ast$) Ask $A$ and $B$ to perform the prisoner's dilemma with $B$ going first, and give the choice of $B$ to $A$. Then ${\Pr}^\ast (A \textrm{ defects})$ and ${\Pr}^\ast (A \textrm{ defects} \mid B \textrm{ defects})$ are estimated in the same way as before. The experiment is not the same; in other words, this is another probability (${\Pr}^\ast$) on the probability space. As you can see, in Experiment 1 the conditional probability has nothing to do with the probability that $A$ defects when $A$ knows that $B$ defects. In this experiment, $A$ never knows whether $B$ defects. Of course, if you follow the above procedure to estimate the empirical probabilities, the law of total probability cannot be violated. This law is not really a principle; it is rather a definition (up to an elementary calculation, it is just the definition of the conditional probability). It makes no sense to say a definition is violated. If it is violated, that is because it has not been correctly used. Summary The law of total probability implies that $\Pr(A \textrm{ defects})$ is a weighted average of the two conditional probabilities $\Pr(A \textrm{ defects} \mid B \textrm{ defects})$ and $\Pr(A \textrm{ defects} \mid B \textrm{ cooperates})$: $$ \Pr(A \textrm{ defects}) = wavg\Bigl(\Pr(A \textrm{ defects} \mid B \textrm{ defects}), \Pr(A \textrm{ defects} \mid B \textrm{ coop.})\Bigr) $$ and therefore, it lies between these two conditional probabilities.
Similarly, for the other probability ${\Pr}^\ast$, $${\Pr}^\ast(A \textrm{ defects}) = wavg\Bigl({\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ defects}), {\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ coop.})\Bigr)$$ The so-called violation of the law of total probability is a consequence of the erroneous formula $$\Pr(A \textrm{ defects}) = wavg\Bigl({\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ defects}), {\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ coop.})\Bigr),$$ which "mixes" the two probabilities. Based on this formula, $\Pr(A \textrm{ defects})$ should lie between ${\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ defects})$ and ${\Pr}^\ast(A \textrm{ defects} \mid B \textrm{ coop.})$. This is intuitively wrong, and it has been observed to be wrong on empirical data. But this formula itself is wrong. As a side note, I think the misunderstanding could have been caused by using the name "probability of $X$ knowing $Y$" for the conditional probability of $X$ given $Y$. This has nothing to do with $X$ knowing something about $Y$: $$\text{Probability of $X$ given $Y$}$$ does not mean $$\text{Probability of $X$ when $X$ knows $Y$}.$$
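To make the Experiment 1 point concrete: for frequencies gathered under simultaneous play, the law of total probability is an algebraic identity, so it cannot fail. A small simulation (the choice probabilities below are arbitrary, made-up values) illustrates this:

```python
import random

# Simulate Experiment 1: on each trial A and B choose simultaneously;
# True means "defects". The choice probabilities are arbitrary.
random.seed(0)
trials = [(random.random() < 0.6, random.random() < 0.5)
          for _ in range(10_000)]

n = len(trials)
p_a = sum(a for a, b in trials) / n
p_b = sum(b for a, b in trials) / n
p_a_b = sum(a and b for a, b in trials) / sum(b for a, b in trials)
p_a_nb = sum(a and not b for a, b in trials) / sum(not b for a, b in trials)

# Law of total probability for the empirical frequencies:
# it holds by definition, up to floating-point rounding.
total = p_a_b * p_b + p_a_nb * (1 - p_b)
print(abs(total - p_a) < 1e-9)  # True
```

No choice of simulated behaviour can break this identity; the empirical "violation" only appears when the conditionals are replaced by the $\Pr^\ast$ quantities from a different experiment.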
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11) We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ... Beauty production in pp collisions at $\sqrt{s}$ = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
I was wondering whether the task of searching for planar 3-colorings is known to be of complexity $O\left(c^{\sqrt{n}}\right)$ or lower? This feels like it would be an intuitive consequence of planar separator results, yet Wikipedia only mentions independent sets, Steiner trees, Hamiltonian cycles, and TSP. Below I include some reasoning which I think almost achieves this bound. With a zero-suppressed decision diagram (ZDD), I believe you can get $O\left(c^{O(\log_2(n)\sqrt{n})}\right)$, and I was curious how I could do better. What I came up with is rather rudimentary. Note: throughout, the ZDD I describe is ternary, but I don't think that greatly matters. For the ZDD, given an ordering, $L = \{v_1 \dots v_n\}$, of vertices to color, the number of nodes at step $i$ will be exponential in the size of the frontier, $F_i = \{v_k \mid k < i \land v_k \sim v_j, j \geq i \}$. To create your ordering $L$, you may create an optimal branch-decomposition tree, $b$, in polynomial time, which has width $O(\sqrt{n})$. Then, select a random leaf $v'$ of $b$ to be your root. With a BFS, weight each edge $e$ by the number of leaves not connected to $v'$ if you were to remove $e$ from $b$. Then, do a DFS to finally create $L$, always going down the edge furthest from $v'$, choosing the one with the least weight if there is a tie, and choosing arbitrarily if there is still a tie. When we reach a leaf $(u,v)$, add $u$/$v$ to $L$ if either is not already in $L$. Let $c_i$ be the component induced in $b$ by the vertices visited when we added $v_i$ to $L$. Then, $F_i$ is bounded by the branch width times the number $x_i$ of edges that need to be removed from $b$ to get the component $c_i$. $x_i$ is bounded roughly by $\log_2$ of the number of vertices in $b$, which is linear in $n$ since we're dealing with planar graphs. With that, you check all three colors for each node for each of the $n$ frontiers and you're done.
Author Message Lonely-Star Tux's lil' helper Joined: 12 Jul 2003 Posts: 82 Posted: Wed Apr 27, 2005 9:12 am Post subject: Mathematical Symbols in Xfig and gnuplot Hi everybody, Studying physics, I am starting to use Xfig and gnuplot. Now I wonder how I can use symbols like omega or phi in text in xfig or in labels in gnuplot. Any help? Thanks! furanku l33t Joined: 08 May 2003 Posts: 902 Location: Hamburg, Germany Posted: Wed Apr 27, 2005 2:40 pm Post subject: Hi! The good old "make-my-graphs-pretty" question You have several possibilities. gnuplot 1) The enhanced postscript driver Start gnuplot. Try Code: gnuplot> plot sin(x) title "{/Symbol F}(x)" You will get a window with your graph labeled verbatim as "{/Symbol F}(x)". But now try Code: gnuplot> set term post enh Terminal type set to 'postscript' Options are 'landscape enhanced monochrome blacktext \ dashed dashlength 1.0 linewidth 1.0 defaultplex \ palfuncparam 2000,0.003 \ butt "Helvetica" 14' gnuplot> set output "test.ps" gnuplot> plot sin(x) title "{/Symbol F}(x)" gnuplot> exit View the resulting file "test.ps" in your favorite postscript viewer: now you have a greek upper Phi as the label. To learn more about the enhanced possibilities of the postscript driver, read the file "/usr/share/doc/gnuplot-4.0-r1/psdoc/ps_guide.ps" (or whatever gnuplot version you use). Advantages: easy to use, output file easily included in almost every document. Disadvantages: limited possibilities, looks ugly, wrong fonts when included in other documents (esp. LaTeX) 2) The LaTeX drivers I guess you want to include your graph in a LaTeX file (I hope you have learned LaTeX; if not, do so quickly, it's essential for all physics publications!) Again several possibilities...
2a) The "latex" driver, which uses the pictex environment Code: gnuplot> set term latex Options are '(document specific font)' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi(x)$" gnuplot> exit Now gnuplot has produced a file "test.tex" which you can include in your LaTeX document with Code: \input{test.tex} Process your LaTeX document and you'll see the graph labeled with the TeX fonts and all the glory you can use to typeset formulas in LaTeX: fractions, integrals, ... everything you can do in LaTeX can be used. Advantage: beautiful output, fonts fitting the rest of your document. Disadvantage: more complicated to use, limited capabilities of the LaTeX picture environment 2b) Combined LaTeX and Postscript. Almost like 2a), but now the graph is in Postscript; just the labels are set by LaTeX: Code: gnuplot> set term pslatex Terminal type set to 'pslatex' Options are 'monochrome dashed rotate' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Now the file "test.tex" will contain postscript specials to draw the graph; the label is still set by LaTeX. Use it in your LaTeX document like before. Advantage: almost unlimited graphics possibilities due to postscript. Disadvantage: using postscript requires converting the whole document to postscript afterwards (but that's normal anyway); pdflatex isn't able to process postscript (well, VTeX's version can, but it's not open source) 3) The fig driver You export your graph from gnuplot into xfig's "fig" file format, which can be useful if you want to modify your graph afterwards, for example to add some text and arrows (which can also be done in gnuplot but is a pain in the ass...)
Code: gnuplot> set term fig textspecial Terminal type set to 'fig' Options are 'monochrome small pointsmax 1000 landscape inches dashed textspecial fontsize 10 thickness 1 depth 10 version 3.2' gnuplot> set output "test.fig" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Open your file "test.fig" in xfig and go on as described below. XFig Xfig offers you, almost like gnuplot, the possibility to add greek symbols as postscript fonts, and it also has a "special" flag which is meant for using LaTeX code in your illustration, which is then set by LaTeX later when compiling your document. 4) The symbol postscript font Click in xfig on the large "T" to get the text tool. Click on "Text font (Default)" in the lower right corner and select "Symbol (Greek)". Click somewhere in the image. Now you can type greek letters. Unlike in gnuplot, they will appear on the screen. Export your file to postscript and you can use it in your documents like the files generated by gnuplot described in 1) above. 5) The special text flag Note the option "textspecial" in the "set term fig textspecial" command in 3) above. This tells xfig that text set with this flag has a special meaning in some exported formats. You can set it manually in xfig with the button "Text flags" in the lower bar. Set "Special flag" to "Special" in the dialog that appears. Now click somewhere and type something like "$\int_{-\infty}^\infty e^{-x^2} dx$". Now go to "File -> Export" and select one of "Latex picture" (which is like gnuplot's "latex" terminal described above) or "Combined PS/LaTeX (both parts)" (which is like gnuplot's "pslatex" driver, with the only exception that the LaTeX and Postscript code are stored in two separate files. Don't worry, you will just have to include the file ending with "_t" into your LaTeX document; this will automatically include the other file).
[Edit:] It may be necessary to set the "hidden" flag in newer versions of xfig to avoid getting both labels, the one set by xfig and the one from LaTeX, on top of each other. You will see that there is also a "Combined PDF/LaTeX (both parts)" export option which is useful if you want to generate pdf files from your LaTeX sources directly using pdflatex, since that can't include PostScript graphics. On the other hand you can still make a dvi file from your LaTeX sources and convert that to pdf using dvipdf, or convert your PostScript files to pdf with epstopdf, or ... You see there are a lot of possibilities; I just mentioned the ones I used, which did a good job for me during my diploma thesis in physics, and still do. Feel free to ask if you still have questions, Frank Last edited by furanku on Thu Feb 14, 2008 10:31 am; edited 2 times in total Lonely-Star Posted: Wed Apr 27, 2005 5:14 pm Post subject: Thanks a lot for your help! (it worked) incognito Posted: Wed Apr 27, 2005 11:30 pm Post subject: lurkers thank you furanku, Great post - hopefully the moderators would consider putting it in the Documents, Tips, and Tricks section. incognito adsmith Posted: Thu Apr 28, 2005 1:11 am Post subject: There's a script which does all that very nicely and automatically... google for "texfig" furanku Posted: Thu Apr 28, 2005 8:29 am Post subject: Thanks, incognito, but I guess it's not gentoo related enough for the tips and tricks section. But almost all of our new students in our workgroup come up with this question after a while, so I thought that's a nice occasion to write down what I learned about it and simply give them the URL (I guess, at least I have to spellcheck it on the weekend; sorry, I'm not a native english speaker and wrote it in a hurry yesterday...)
adsmith, that's a nice little script. It includes your exported fig file in a skeleton LaTeX document and processes and previews that. For larger documents I prefer the method using a makefile which does the necessary conversions (I didn't mention that xfig comes with a separate program called "fig2dev" which can do all the exports xfig can do on the command line). That, combined with the preview-latex mode (which displays all your math and graphics inline in [x]emacs; it's now part of auctex), gives me, for my taste, the most effective document writing environment. But as far as I can see new users are more attracted by kile (a KDE TeX environment) or lyx, and in that case texfig is a good help to get the LaTeX labels in your figures right. Thanks for your tip! nixnut Posted: Thu Feb 14, 2008 7:59 pm Post subject: Moved from Other Things Gentoo to Documentation, Tips & Tricks. Tip/trick, so moved here
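To tie the above together, here is a minimal skeleton document (my own sketch, not from the original post; "test.tex" and "test_t.tex" are the example filenames used above) showing how the exported files get included:

```latex
% Minimal wrapper for the exports described above (my sketch).
% Compile with: latex doc.tex && dvips doc.dvi   (pslatex / PS+LaTeX exports)
\documentclass{article}
\usepackage{graphicx}   % needed for the PostScript half of combined exports
\begin{document}
% gnuplot "latex" or "pslatex" terminal output:
\input{test.tex}
% xfig "Combined PS/LaTeX (both parts)" export: include only the _t file,
% which pulls in the PostScript part itself:
\input{test_t.tex}
\end{document}
```

Remember that anything going through the pslatex or combined PS/LaTeX route has to be compiled with latex + dvips, not pdflatex, as discussed above.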
I have to find the values of the constant factors $c_1$ and $c_2$ and of $n_0$ for which the following holds: $$c_1\leq \frac12 - \frac3n \leq c_2$$ for all $n\geq n_0$. So for what values of $c_1, c_2$ and $n_0$ will this hold? Please help me out here. This is a question from the chapter on asymptotic notation in Cormen's book. Its answer is $c_1 = 1/14$, $c_2=1/2$ and $n_0 =7$, but I am not able to figure out how he found those values of $c_1, c_2$ and $n_0$. Thanks in advance
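A sketch of the reasoning (mine, not from the question): $\frac12 - \frac3n$ is increasing in $n$ and approaches $1/2$ from below, so on $n \ge n_0$ its smallest value is at $n = n_0$. Choosing $n_0 = 7$ (the smallest integer making the expression strictly positive) gives $c_1 = \frac12 - \frac37 = \frac1{14}$, while $c_2 = \frac12$ always works as an upper bound. A quick numeric check:

```python
# Verify c1 <= 1/2 - 3/n <= c2 for a range of n >= n0, with the book's values.
c1, c2, n0 = 1 / 14, 1 / 2, 7
for n in range(n0, 10_000):
    value = 1 / 2 - 3 / n
    assert c1 <= value <= c2, (n, value)
```

Note that equality holds on the left exactly at $n = n_0 = 7$, which is why $1/14$ is the largest valid choice of $c_1$ for that $n_0$.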
This is just a quick question about a misconception I have. I've looked through books and online pretty extensively, and I couldn't find the simple answer I was looking for, so I came here. Gauss's Law is of the form: $$ \Phi_E=\oint \vec E\cdot ~\mathrm d\vec a = \frac {q_\textrm{enc}} {\varepsilon_0}$$ Say we have some charged sphere, with uniform charge distribution $ \rho $, and radius $R.$ We have to draw a Gaussian surface, of radius $r.$ My question is: how do we determine the volume parameters inside $q_\textrm{enc}$? I know it can be defined as $q_\textrm{enc} = \displaystyle \int\rho ~\mathrm dV $, and since the charge distribution is uniform we can pull it out, integrate, and get $q_\textrm{enc} = \rho V $. We also know what $\rho$ is, and that is $\rho = q/V.$ Depending on where the Gaussian surface is placed, these volumes can be different. For example, if we are inside the sphere, we get $ q_\textrm{enc} = \rho V=\frac q {V_1}V_2=\frac {q} {\frac 4 3\pi R^3}(\frac 4 3\pi r^3) =\frac {qr^3} {R^3}$. This will then simplify to the proper electric field, once we set it divided by $\varepsilon_0$ equal to the flux, which is $\vec E =\frac 1 {4\pi \varepsilon_0} \frac {qr} {R^3}$. How do we determine which volume is which, the volume the sphere encloses, or the volume the Gaussian surface encloses? Because when we examine the electric field outside the sphere, $V_1$ and $V_2$ equal each other and cancel, giving $\vec E = \frac 1 {4\pi \varepsilon_0} \frac q {r^2} \hat r\,.$ So those $V_1$ and $V_2$ must have some reasoning behind their choice, or I'm interpreting something wrong, and everything I've written is invalid.
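As a numeric illustration (my own sketch, with made-up values of $q$ and $R$), the two cases in the question — $q_\textrm{enc} = \rho\,\frac43\pi r^3$ inside, $q_\textrm{enc} = q$ outside — give fields that agree at $r = R$, which is one way to see the volumes were chosen consistently:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
q, R = 1e-9, 0.05  # example charge (C) and sphere radius (m); arbitrary choices

def E(r):
    """Field magnitude from Gauss's law for a uniformly charged sphere."""
    if r < R:
        # Gaussian surface inside: rho * (4/3 pi r^3) with rho = q / (4/3 pi R^3)
        q_enc = q * r**3 / R**3
    else:
        # Gaussian surface outside: it encloses the whole charge q
        q_enc = q
    return q_enc / (4 * math.pi * eps0 * r**2)

# The interior formula q r / (4 pi eps0 R^3) meets the exterior q / (4 pi eps0 r^2)
# continuously at r = R.
assert math.isclose(E(R * (1 - 1e-9)), E(R), rel_tol=1e-6)
```

The point the code makes concrete: $V_2$ (the Gaussian surface's volume) only matters while the surface sits inside the charge; once $r \ge R$ the enclosed charge saturates at $q$.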
I've read in David Tong's lecture notes on gauge theory that the Hamiltonian of Yang-Mills theory does not depend on the angular parameter $\theta$, because it can be absorbed in the electric field: $$ \mathcal{H}=\frac{1}{g^2}\text{tr}(\mathbf{E}^2+\mathbf{B}^2)=g^2\text{tr}(\mathbf{\pi}-\frac{\theta}{8\pi^2}\mathbf{B})^2+\frac{1}{g^2}\text{tr}(\mathbf{B}^2). $$ Here, $g$ is the gauge coupling, $E_i=\dot{A}_i$ is the non-Abelian electric field, $B_i=-\frac{1}{2}\epsilon_{ijk}F^{jk}$ the non-Abelian magnetic field, ${F}_{\mu\nu}$ is the gluon field strength, and $$ \mathbf{\pi}=\frac{\partial \mathcal{L}}{\partial \mathbf{\dot{A}}}= \frac{1}{g^2}\mathbf{E}+\frac{\theta}{8\pi^2}\mathbf{B} $$ is the momentum conjugate to $\mathbf{A}$ (see pp. 39 and 40 of the lecture notes). In contrast, it's well known that the Yang-Mills Lagrangian contains a topological $\theta$-term: $$ \mathcal{L}= -\frac{1}{2g^2}\text{tr}(F^{\mu\nu}F_{\mu\nu})+\frac{\theta}{16\pi^2}\text{tr}(F^{\mu\nu}\tilde{F}_{\mu\nu})=\frac{1}{g^2}\text{tr}(\mathbf{\dot{A}}^2-\mathbf{B}^2)-\frac{\theta}{4\pi^2}\text{tr}(\mathbf{\dot{A}} \mathbf{B}), $$ where $\tilde{F}_{\mu\nu}$ is the Hodge dual of $F_{\mu\nu}$, and the last equality holds for $A_0=0$ and $D_iE_i=0$. How is this possible? Tong mentions that the $\theta$-dependence in the Hamiltonian formalism is somehow hidden in the structure of the Poisson bracket, but he gives no detailed explanation. Why doesn't it appear in the Hamiltonian itself? The $\theta$-term gives rise to the infamous strong CP problem, so how can we explicitly compute this $\theta$-dependence of Yang-Mills in this formalism?
Dimensionless numbers, since many of them were formulated within a particular field, tend to be duplicated. For example, the Bond number is referred to in Europe as the Eötvös number. In addition to this confusion, many dimensionless numbers express the same thing under certain conditions. For example, the Mach number and the Eckert number are the same under certain circumstances. Example 9.16 The Galileo number is a dimensionless number which represents the ratio of gravitational to viscous effects: \[ \label{GalileoNumber:def} Ga = \dfrac{\rho^2\,g\,\ell^3}{\mu^2} \tag{64} \] The definition of the Reynolds number contains viscous forces and the definition of the Froude number contains gravitational forces. What is the relation between these numbers? Example 9.17 The Laplace number is another dimensionless number that appears in fluid mechanics and is related to the Capillary number. The Laplace number definition is \[ \label{Laplace:def} La = \dfrac{\rho \, \sigma \, \ell }{\mu^2} \tag{65} \] Show the relationships between the Reynolds number, the Weber number, and the Laplace number. Example 9.18 The rotating Froude number is a somewhat similar number to the regular Froude number. This number is defined as \[ \label{RotatingFr:def} Fr_R = \dfrac{\omega^2\,\ell}{g} \tag{66} \] What is the relationship between the two Froude numbers? Example 9.19 The Ohnesorge number is another dimensionless parameter that deals with surface tension and is similar to the Capillary number; it is defined as \[ \label{ohnesorge:def} Oh = \dfrac{\mu}{\sqrt{\rho\,\sigma\,\ell} } \tag{67} \] Define \(Oh\) in terms of the \(We\) and \(Re\) numbers. Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
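The identities these examples are driving at can be checked numerically. The following sketch (mine, with arbitrary water-like values, and assuming the conventions $Re = \rho V \ell/\mu$, $Fr = V^2/(g\ell)$, $We = \rho V^2 \ell/\sigma$) verifies $Ga = Re^2/Fr$, $La = Re^2/We$, and $Oh = \sqrt{We}/Re = 1/\sqrt{La}$:

```python
import math

# Arbitrary sample values (roughly water at room temperature):
rho, mu, sigma = 998.0, 1.0e-3, 0.072   # density, viscosity, surface tension
g, ell, V = 9.81, 0.01, 0.5             # gravity, length scale, velocity

Re = rho * V * ell / mu                  # Reynolds number
Fr = V**2 / (g * ell)                    # Froude number (one common convention)
We = rho * V**2 * ell / sigma            # Weber number
Ga = rho**2 * g * ell**3 / mu**2         # Galileo number, Eq. (64)
La = rho * sigma * ell / mu**2           # Laplace number, Eq. (65)
Oh = mu / math.sqrt(rho * sigma * ell)   # Ohnesorge number, Eq. (67)

# The algebraic relations the examples ask for hold exactly:
assert math.isclose(Ga, Re**2 / Fr)
assert math.isclose(La, Re**2 / We)
assert math.isclose(Oh, math.sqrt(We) / Re)
assert math.isclose(Oh, 1 / math.sqrt(La))
```

Note that some texts define $Fr = V/\sqrt{g\ell}$ instead, in which case the first relation reads $Ga = Re^2/Fr^2$.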
$\renewcommand{\ket}[1]{\left \lvert #1 \right \rangle}$$\renewcommand{\bra}[1]{\left \langle #1 \right \rvert}$We can see how decoherence really works, why it messes up superposition states, and why it's particularly prone to messing up states of large objects, all through a very simple example $^{[a]}$. Single two-level system Suppose we have a quantum system $S$ with two possible states. $S$ could be a cat and the states could be $\left \lvert \text{alive} \right \rangle$ and $\left \lvert \text{dead} \right \rangle$, but for the sake of generality we label the states as $$\ket{\uparrow} \quad \text{and} \quad \ket{\downarrow} \, .$$ Coherent case Now suppose $S$ is in state $\ket{y}$ defined as $$\ket{y} \equiv \left( \ket{\uparrow} + i \ket{\downarrow} \right) / \sqrt{2} \, .$$ This is a perfectly happy superposition state. Its density matrix is $$ \rho_S = \frac{1}{2} \left(\ket{\uparrow} \bra{\uparrow}+ \ket{\downarrow} \bra{\downarrow}- i \ket{\uparrow} \bra{\downarrow}+ i \ket{\downarrow} \bra{\uparrow}\right)= \frac{1}{2} \left[ \begin{array}{cc} 1 & -i \\ i & 1 \end{array} \right]= \frac{1}{2} \left( \text{Id} + \sigma_y \right) \, ,$$where in the matrix representation we've ordered the states $\{ \ket{\uparrow}, \ket{\downarrow} \}$. We can think of this state as a spin pointed along the $y$ axis (hence the symbol $\ket{y}$). The first two terms are the classical terms (diagonal in the matrix representation) and the other two are the so-called "coherences" (off-diagonal) which disappear via decoherence processes, as we show below.
If we were to prepare $S$ in state $\ket{y}$ many times and each time measure it along the $z$ axis, we would get a random sequence of results where half of them are $\uparrow$ and half are $\downarrow$. Naively you might think that this means that our preparation procedure is giving us a normal "classical" probability distribution where half of the time we prepared $\ket{\uparrow}$ and half of the time we prepared $\ket{\downarrow}$. However, we can see that this is not true if we rotate $S$ about the $x$ axis and then measure it along the $z$ axis. The operator for the rotation is$$U = \cos(\theta / 2) \, 1 + i \sin(\theta / 2) \, \sigma_x= \left[ \begin{array}{cc} \cos(\theta/2) & i \sin(\theta/2) \\ i \sin(\theta / 2) & \cos(\theta / 2) \end{array} \right]$$and the density matrix after the rotation is$$U \rho_S U^\dagger =\frac{1}{2} \left[ \begin{array}{cc} 1 - \sin(\theta) & -i \cos(\theta) \\ i \cos(\theta) & 1 + \sin(\theta) \end{array} \right]= \frac{1}{2} \left( \text{Id} - \sin(\theta) \sigma_z + \cos(\theta) \sigma_y \right) \, .$$ As you can see, for a given angle $\theta$, the probability to find the system in $\ket{\uparrow}$ is $(1/2)(1 - \sin(\theta))$, i.e. it depends on how much we rotated. Another way to say this is that$$\langle \sigma_z \rangle_{U \rho_S U^\dagger} = - \sin(\theta) \, ,$$i.e. the expectation value of $\sigma_z$ oscillates as we rotate the system. This makes perfect sense if you think of the two level system as an arrow oriented in 3D space (e.g. a spin): as we rotate the system about the $x$ axis, its projection along the $z$ axis oscillates. So far, nothing about this example tells us anything about decoherence or why it's hard to make big Schrodinger cat states, so now let's get to that.
Incoherent case Suppose $S$ interacts with some other two level system $E$. The letter $E$ stands for "environment", which will make sense later. Suppose the state of the combined $S + E$ system is $^{[b]}$ $$\left( \ket{\uparrow}\ket{\downarrow} + \ket{\downarrow}\ket{\uparrow} \right) / \sqrt{2}$$ where the first ket labels the state of $S$ and the second ket labels the state of $E$. Now the critical part: what happens if we now do the rotate-and-measure experiment described above on system $S$ without doing anything, including measurement, to $E$? Experimentally, when we try this in the lab, we find that there is no oscillation in the probability to find $S$ in $\ket{\uparrow}$ as a function of $\theta$! This is decoherence. To describe this mathematically we look at the density matrix of the $S+E$ system. The state of the total system is$$\rho_{S+E} =\left[\begin{array}{cccc}0&0&0&0 \\0 & 1/2 & 1/2 & 0 \\0 & 1/2 & 1/2 & 0 \\0&0&0&0\end{array}\right]$$ where we've ordered the states $\{ \ket{\uparrow}\ket{\uparrow}, \ket{\uparrow}\ket{\downarrow}, \ket{\downarrow}\ket{\uparrow},\ket{\downarrow}\ket{\downarrow} \}$. To predict the behaviour of experiments done on $S$ alone, we take the trace of $\rho_{S+E}$ over the part of the space belonging to $E$ $^{[c]}$. Doing this gives$$\tilde{\rho}_S \equiv \text{Tr}_E \left( \rho_{S+E}\right) = \frac{1}{2} \left[\begin{array}{cc}1 & 0 \\ 0 & 1\end{array}\right]= \frac{1}{2} \left( \ket{\uparrow}\bra{\uparrow} + \ket{\downarrow}\bra{\downarrow} \right) = \frac{1}{2}\text{Id} \, .$$The off-diagonal terms are gone - we have a purely classical state! If we now rotate $\tilde{\rho}_S$ by any rotation operator $U$ we find that$$U \tilde{\rho}_S U^\dagger = \frac{1}{2} \left[ \begin{array}{cc} 1&0\\0&1 \end{array} \right] = \tilde{\rho}_S $$and$$ \langle \sigma_z \rangle_{U \tilde{\rho}_S U^\dagger}= \text{Tr}_S (U \tilde{\rho}_S U^\dagger \sigma_z) = 0 \, .$$Rotations no longer do anything - there's no oscillation and the expectation value of
$\sigma_z$ is always zero regardless of rotation angle. This is really, really interesting. Previously we said that a single isolated two level system can be thought of like a spin: it's always pointing in some direction in space, so even if measurements along the $z$ axis give half up and half down, if you rotate the spin and measure, you see oscillation. On the other hand, we just showed that if we let the two level system interact with something else ($E$), the combined system can be left in a state such that the original two level system ($S$) doesn't exhibit that oscillation. What we have just seen is the essence of quantum decoherence. If a quantum system $S$ becomes entangled with its surrounding environment $E$, then $S$ can lose its quantum nature. Of course, if we don't ignore the environment $E$ and instead include it in our measurements, then we would observe the full quantum properties of the combined system. In other words, decoherence is just lack of knowledge of the complete system. If $E$ is really big then keeping track of all its degrees of freedom and measuring them in a controlled way is just impossible. That is the essence of why making big Schrodinger cats is hard; if the system $S$ is big, it interacts with more environmental degrees of freedom, and so observing quantum effects is very difficult. For something as large as a speck of dust interacting with air molecules, the time it takes for decoherence to kill any off diagonal elements in the density matrix is incredibly small $^{[d]}$. Interestingly though, some fairly large systems can be sufficiently isolated from their environments that they exhibit quantum properties for long enough to be useful; this is, for example, a large fraction of what goes into building a quantum computer.
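Both cases above can be reproduced in a few lines of numpy (my own sketch of the algebra, not part of the original answer): the coherent state gives $\langle\sigma_z\rangle = -\sin\theta$ after rotation, while tracing the environment out of the entangled state kills the oscillation entirely.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

def U(theta):
    # rotation about x, as in the text: cos(theta/2) Id + i sin(theta/2) sigma_x
    return np.cos(theta / 2) * Id + 1j * np.sin(theta / 2) * sx

theta = 0.7  # arbitrary rotation angle

# Coherent case: rho_S = (Id + sigma_y)/2 oscillates under rotation.
rho_S = (Id + sy) / 2
rot = U(theta) @ rho_S @ U(theta).conj().T
assert np.isclose(np.trace(rot @ sz).real, -np.sin(theta))

# Incoherent case: trace E out of (|up,down> + |down,up>)/sqrt(2).
up, dn = np.array([1, 0]), np.array([0, 1])
psi = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)
rho_SE = np.outer(psi, psi.conj())
# partial trace over the second (environment) qubit:
rho_tilde = np.trace(rho_SE.reshape(2, 2, 2, 2), axis1=1, axis2=3)
rot = U(theta) @ rho_tilde @ U(theta).conj().T
assert np.isclose(np.trace(rot @ sz).real, 0.0)   # no oscillation left
```

The `reshape(2, 2, 2, 2)` trick indexes the $4\times4$ matrix as $(s, e, s', e')$ so that tracing over axes 1 and 3 implements $\text{Tr}_E$, recovering $\tilde\rho_S = \text{Id}/2$ exactly as in the text.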
Large system So far we showed what decoherence is, and in particular how it makes a quantum system appear classical. To recap, decoherence happens when your system $S$ interacts with the environment $E$; if you don't have access to the environment degrees of freedom, then $S$ can lose its quantum interference properties and appear classical. In the example we gave, we saw that a single two level system interacting with another one can appear classical. We will now show, via an illustrative extension of the same example, that a large system is more prone to decoherence. Suppose $S$ consists of three two level systems in an initial state $\ket{\uparrow \uparrow \uparrow}$ with density matrix$$\rho = \ket{\uparrow \uparrow \uparrow} \bra{\uparrow \uparrow \uparrow} \, .$$Note that there's a sort of redundancy here: we have three separate spins which can be thought of as collectively representing a single spin up.$^{[e]}$ Like in the single particle case, we can measure the projection of the spin along the $z$ axis, but in this case we use the three-particle operator$$Z^{(3)} \equiv \left( \sigma_z \otimes \sigma_z \otimes \sigma_z \right) \, .$$ Coherent case Like before, if we rotate all three spins and measure the average of $Z^{(3)}$ we get a sinusoidal dependence on the rotation angle. In particular, if we rotate each spin by an angle $\theta$ about the $x$ axis, then we get$$\langle Z^{(3)} \rangle_{U \rho U^\dagger} = \cos(\theta)^3 \, .$$ Incoherent case Now consider what happens if just one of our spins interacts with the environment. Suppose the middle spin interacts with the environment such that the initial state $\ket{\uparrow \uparrow \uparrow} \ket{\downarrow}$ (here the separate second ket with one arrow represents the environment) becomes$$\left(\ket{\uparrow \uparrow \uparrow}\ket{\downarrow}+ \ket{\uparrow \downarrow \uparrow}\ket{\uparrow}\right) / \sqrt{2} \, .$$Writing out the complete four particle density matrix would be tedious and unenlightening. However,
the reduced density matrix of the first three particles is$$\tilde{\rho}_S =\frac{1}{2} \left(\ket{\uparrow \uparrow \uparrow}\bra{\uparrow \uparrow \uparrow}+ \ket{\uparrow \downarrow \uparrow}\bra{\uparrow \downarrow \uparrow}\right) \, .$$Note that we have a diagonal density matrix, just like we did in the single particle incoherent case. With this density matrix, the expectation value of $Z^{(3)}$ following a rotation of all spins by $\theta$ is$$\langle Z^{(3)} \rangle_{U \tilde{\rho}_S U^\dagger} = \frac{1}{2} (\underbrace{\cos(\theta)^3}_{\text{from } \ket{\uparrow \uparrow \uparrow}} + \underbrace{-\cos(\theta)^3}_{\text{from } \ket{\uparrow \downarrow \uparrow}}) = 0 \, .$$Here again we've lost the oscillation, and it only took a single spin interacting with the environment to do it. That is why making large Schrodinger cats is hard. Notes $[a]$: This is a simplified version of an example from the introductory chapter of my PhD thesis (pdf). $[b]$: This state can be realized if we start with the system in $\ket{\uparrow}\ket{\downarrow}$ and subject it to the Hamiltonian $H=\sigma_+ \otimes \sigma_- + \sigma_- \otimes \sigma_+$ for the proper amount of time such that the propagator is$$U = \left [ \begin{array}{cccc}1&0&0&0\\0&\frac{1}{\sqrt{2}}&-\frac{i}{\sqrt{2}}&0\\0&-\frac{i}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\0&0&0&1 \end{array}\right ] \, ,$$which maps $\ket{\uparrow}\ket{\downarrow}$ to $\left(\ket{\uparrow}\ket{\downarrow} - i \ket{\downarrow}\ket{\uparrow}\right)/\sqrt{2}$; the relative phase of $-i$ on the second branch does not affect any of the conclusions above. $[c]$: Note that, like any other theoretical description, this procedure is justified because it reproduces the results of experiments. $[d]$: I don't remember the numbers but see Schlosshauer's book for a calculation.
$[e]$: This kind of redundancy is critical in classical machines.For example, a memory bit in a classical computer could be represented by current in a large number of conduction channels in a transistor; if any one of those channels were to change state, that's such a tiny fraction of the total current that the logical state of the transistor is preserved.This redundancy gives classical computers their robustness against errors on the microscopic level.
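The three-spin calculation can also be checked numerically (my own numpy sketch, following the same conventions as the single-spin check): rotating the coherent $\ket{\uparrow\uparrow\uparrow}$ gives $\langle Z^{(3)}\rangle = \cos^3\theta$, while the decohered mixture gives exactly zero.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)
up, dn = np.array([1, 0]), np.array([0, 1])

def U1(theta):
    # single-spin rotation about x, as in the text
    return np.cos(theta / 2) * Id + 1j * np.sin(theta / 2) * sx

theta = 0.7
U3 = np.kron(np.kron(U1(theta), U1(theta)), U1(theta))  # rotate all three spins
Z3 = np.kron(np.kron(sz, sz), sz)                       # the Z^(3) operator

# Coherent case: pure |up,up,up>
uuu = np.kron(np.kron(up, up), up)
rho = np.outer(uuu, uuu)
assert np.isclose(np.trace(U3 @ rho @ U3.conj().T @ Z3).real, np.cos(theta)**3)

# Incoherent case: the reduced mixture (|uuu><uuu| + |udu><udu|)/2
udu = np.kron(np.kron(up, dn), up)
rho_t = (np.outer(uuu, uuu) + np.outer(udu, udu)) / 2
assert np.isclose(np.trace(U3 @ rho_t @ U3.conj().T @ Z3).real, 0.0)
```

The flipped middle spin contributes $-\cos^3\theta$, cancelling the $+\cos^3\theta$ from the other branch: a single entangled spin is enough to erase the interference, which is the whole point of the example.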
I'm trying to get a big cross which I can subscript in order to denote a generalized cartesian product (much like how \bigcup works for generalized unions). How can I accomplish this?

If you don't mind using a different font, kpfonts gives you the \varprod command:
\documentclass{article}
\usepackage{kpfonts}
\begin{document}
$\varprod_{i=1}^n A_i$
\[ \varprod_{i=1}^n A_i \]
\end{document}
I would be more inclined to use \prod to denote a generalised cartesian product, though.

I am sure others will come up with simpler solutions, but here is an overkill solution that might be useful:
\documentclass{article}
\usepackage{amsmath}
\usepackage{tikz}
\newcommand{\Cross}{\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);}}%
\begin{document}
$A \Cross B$
\end{document}
Adjust the x= and y= options to change the size, and the line width= to adjust the thickness of the line. The \mathbin ensures that correct spacing for a binary operator is placed around the symbol.

There is a simple command for that in the mathabx package:
\documentclass{minimal}
\usepackage{mathabx}
\begin{document}
$\bigtimes\limits_{x=1}$
\end{document}
In fact if you find another operator symbol you fancy, you can try the \limits command on it. It may actually work. :) I hope this answers it. :)

If you can use unicode-math, the symbol you are looking for is at U+2A09 'N-ARY TIMES OPERATOR':
\begin{equation}
B ⊂ ⨉_{x∈J}A(x)
\end{equation}

Might I suggest a very simple-minded approach? Why not just use the letter "X"?
Below, I inserted \sf to make the symbol more distinguishable: $${\sf X}^n_{i=1} A_i = A_1\times \cdots \times A_n$$
Under the auspices of the Computational Complexity Foundation (CCF) Let the randomized query complexity of a relation for error probability $\epsilon$ be denoted by $\mathrm{R}_\epsilon(\cdot)$. We prove that for any relation $f \subseteq \{0,1\}^n \times \mathcal{R}$ and Boolean function $g:\{0,1\}^m \rightarrow \{0,1\}$, $\mathrm{R}_{1/3}(f\circ g^n) = \Omega(\mathrm{R}_{4/9}(f)\cdot\mathrm{R}_{1/2-1/n^4}(g))$, where $f \circ g^n$ is the relation obtained by composing $f$ and $g$. … While exponential separations are known between quantum and randomized communication complexity for partial functions, e.g. Raz [1999], the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a … In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total boolean function is given by the function $f$ on $n=2^k$ bits defined by a complete binary tree of NAND gates of depth $k$, which achieves $R_0(f) = O(D(f)^{0.7537\ldots})$. …
PIMS - SFU Theory Seminar: Akbar Rafiey Date: 06/20/2019 Time: 11:00 Simon Fraser University Toward a Dichotomy for Approximation of H-coloring Given two (di)graphs $G$, $H$ and a cost function $c:V(G)\times V(H) \to \mathbb{Q}_{\geq 0}\cup\{+\infty\}$, in the minimum cost homomorphism problem, MinHOM($H$), we are interested in finding a homomorphism $f:V(G)\to V(H)$ (a.k.a. an $H$-coloring) that minimizes $\sum\limits_{v\in V(G)}c(v,f(v))$. The complexity of \emph{exact minimization} of this problem is well understood due to Hell and Rafiey 2012, and the class of digraphs $H$ for which MinHOM($H$) is polynomial time solvable is a small subset of all digraphs. In this paper, we consider the approximation of MinHOM within a constant factor. In terms of digraphs, MinHOM($H$) is not approximable if $H$ contains a \emph{digraph asteroidal triple (DAT)}. We take a major step toward a dichotomy classification of approximable cases. We give a dichotomy classification for approximating MinHOM($H$) when $H$ is a graph (i.e. symmetric digraph). For digraphs, we provide constant factor approximation algorithms for two important classes of digraphs, namely bi-arc digraphs (digraphs with a \emph{conservative semi-lattice polymorphism} or \emph{min-ordering}), and $k$-arc digraphs (digraphs with an \emph{extended min-ordering}). Specifically, we show that: a) \textbf{Dichotomy for Graphs:} MinHOM($H$) has a $2|V(H)|$-approximation algorithm if graph $H$ admits a \emph{conservative majority polymorphism} (i.e. $H$ is a \emph{bi-arc graph}); otherwise, it is inapproximable; b) MinHOM($H$) has a $|V(H)|^2$-approximation algorithm if $H$ is a bi-arc digraph; c) MinHOM($H$) has a $|V(H)|^2$-approximation algorithm if $H$ is a $k$-arc digraph. In conclusion, we show the importance of these results and provide insights for achieving a dichotomy classification of approximable cases. Our constant factors depend on the size of $H$.
However, the implementation of our algorithms provides a much better approximation ratio. It remains open to investigate a classification of digraphs $H$ for which MinHOM($H$) admits a constant factor approximation algorithm that is independent of $|V(H)|$. Joint work with Arash Rafiey and Tiago Santos. Location: SFU TASC 1 9204
A classic statistics problem In machine learning, problems are often framed as optimization problems. For example, let us take one of the simplest applications of supervised learning: linear regression. It's conceptually an easy problem - given a set of data points, create a function \( f(x) = ax + b \) that best approximates these points. If you ever took statistics or have used excel, this is the line of best fit. Conceptually, this operation doesn't seem too difficult. Note that this is a supervised learning problem - we need to have a set of observed data to "train" from. We use this data to create the line of best fit, which we can then subsequently use to make predictions given new parameters. Framing it as a machine learning problem Defining a linear regression Let's rewrite the equation \( f(x) = ax + b \) as \( f(x) = a_1 x + a_0 \). We have two weights that we need to define, \( a_0 \) and \( a_1 \), that correspond to the features \( x_0 \) and \( x_1 \). \( x_0 = 1 \) and is kind of a "dummy variable": we need it to fit a linear regression which has the offset \( a_0 \), but there isn't any actual observation associated with \( x_0 \) and it has no measured effect on the final line of best fit. Our data is defined as a matrix \( X \) where each row of the matrix represents the readings for the \( i^{th} \) observation. Each row, \( x^{(i)T} \), is the set of observed features for a particular observation. Note that it is transposed because \( x^{(i)} \) is a vector with the recorded features for that observation. We have another vector \( Y \) that holds the actual observed outputs for \( X \), so the \( i^{th} \) element in \( Y \) corresponds to the \( i^{th} \) row of \( X \). Our objective is to create some function \( f \) in the form \( f(x) = a_0 + a_1 x \) that best approximates the data in \( X \) and \( Y \).
This means that we need to pick \( a_0, a_1 \) such that \( f \) best approximates *all* of the data in \( X \). So how do we pick the best \( a_0, a_1 \)? The loss function Suppose we randomly guess \( a_0, a_1 \). We want to know how far off from the actual data we are, so we need a quantitative way to determine how wrong we are. We can measure the "wrongness" of our function with something called a loss function. A loss function provides a measure of your wrongness, and our objective is to minimize it as best as possible. There are several loss functions you can use; a very popular one is called mean-squared error (MSE). We define \( \text{MSE} \) as follows: \[ \text{MSE} = \frac{1}{n} \sum_{i = 1}^n (y^{(i)} - f(x^{(i)}))^2 \] In essence, we are taking every row in \( X \), making a prediction with our function \( f(x) = a_0 + a_1 x \), taking the difference between the actual \( y \) value and our prediction \( f(x^{(i)}) \), then squaring the difference. We do this for every row, then we divide by \( n \) (the number of data points we have). This gives us the average squared error in the data set, and lets us quantify how good our line of best fit is. Of course, we want to minimize our loss function. We can find the minimum of a quadratic function by finding the root of its derivative. Note that it has to be convex (concave up); otherwise the root will give us the maximum. Let us define our loss function \( L(a) \) as the MSE and find the minimum. \[ \begin{aligned} L(a) &= \frac{1}{n} \sum_{i = 1}^n (y^{(i)} - f(x^{(i)}))^2 \\ L(a) &= \frac{1}{n} \sum_{i = 1}^n (y^{(i)} - a^T x^{(i)})^2 \\ L'(a) &= \frac{1}{n} \sum_{i = 1}^n 2 \, (y^{(i)} - a^T x^{(i)}) \cdot (-x^{(i)}) \\ L'(a) &= -\frac{2}{n} \sum_{i = 1}^n (y^{(i)} - a^T x^{(i)}) \, x^{(i)} \\ 0 &= -\frac{2}{n} \sum_{i = 1}^n (y^{(i)} - a^T x^{(i)}) \, x^{(i)} \end{aligned} \] We have our derived function; solving for \( a \) will be left as an exercise to the reader.
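For readers who want to check their answer to the exercise: setting the derivative to zero leads to the well-known normal equations \( a = (X^T X)^{-1} X^T y \). A hedged numpy sketch (my own, with synthetic data, not from the original post):

```python
import numpy as np

# Synthetic noisy data around the line y = 3 + 2x (made-up for illustration):
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 + 2.0 * x + rng.normal(0, 0.5, size=50)

# Design matrix with the x_0 = 1 dummy feature described above:
X = np.column_stack([np.ones_like(x), x])

# Normal equations: solve (X^T X) a = X^T y for a = [a_0, a_1].
a = np.linalg.solve(X.T @ X, X.T @ y)

# The fit should land near the true parameters (3, 2):
assert abs(a[0] - 3.0) < 0.6 and abs(a[1] - 2.0) < 0.12
```

Using `np.linalg.solve` rather than explicitly inverting \( X^T X \) is the standard numerically safer choice.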
Gradient Descent With our defined loss function, how do we figure out how to guess \( a \) in an iterative manner? We could start with some random vector for \( a \) and then increment the variables a little, trying to lower \( L(a) \) with every step until we converge on the minimum (so \( L(a) \) cannot be minimized any further). This is the intuition behind gradient descent, except gradient descent has a clever way of incrementing the new \( a \) values so that, for a suitably small step size, \( L(a) \) decreases with every iteration. In gradient descent, take our vector \( a_t \) where \( t \) is our current iteration. With each iteration of gradient descent, we set \( a_t \) equal to some new value as defined below: \[ a_t = a_{t-1} - \alpha L'(a_{t-1}) \] Gradient descent allows us to iteratively minimize the loss function - in other words, this is how we find the value with the least amount of error. Intuitively, imagine a ball rolling down a hill. The hill is drawn by the loss function, and we want to reach the bottom. Because we are subtracting a quantity proportional to the slope of the hill, with a convex function, we should be able to optimize and find the minimum. If we don't have a strictly convex function, there is a chance we will not be able to find the global minimum. Additionally, note the \( \alpha \) term. This is the step size, a constant that we define which affects how fast our gradient descent mechanism rolls the ball down the hill. If we keep the step size small, it could take a while until we reach the bottom of the hill, and it might also get stuck in a local minimum. If the step size is too large, it may oscillate around the minimum but never quite converge. Optimize for big data (stochastic gradient descent) Gradient descent can be an expensive process - imagine our \( n \) is extremely large (this is the definition of big data).
If \( n \) is really big, then every iteration of gradient descent will be extremely expensive. In practice, we must balance the accuracy of our models with computation time. A lot of models that use massive amounts of data mitigate this with a variant called stochastic gradient descent. The main intuition behind stochastic gradient descent is that the model is trained on a small subset of the data at a time. This way, you get a rougher approximation of the optimal value(s), but it is much faster. Because gradient descent is an approximation itself, the penalty in accuracy brought on by stochastic gradient descent isn't big enough to outweigh the performance gains that come from operating on only a batch of the data at a time. If you're using stochastic gradient descent, you'll want to randomly shuffle the data and perform each iteration on a randomly chosen mini-batch.
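The full-batch and mini-batch updates described above can be sketched in a few lines (my own illustration; the step size, batch size, and iteration counts are arbitrary choices, and both variants use the MSE gradient derived earlier):

```python
import numpy as np

# Synthetic data around y = 1.5 + 4x (made-up for illustration):
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=200)
y = 1.5 + 4.0 * x + rng.normal(0, 0.1, size=200)
X = np.column_stack([np.ones_like(x), x])   # dummy feature x_0 = 1

def grad(a, Xb, yb):
    # gradient of MSE: -(2/n) * X^T (y - X a), matching the derivation above
    return -2.0 / len(yb) * Xb.T @ (yb - Xb @ a)

alpha = 0.1                      # step size

a = np.zeros(2)                  # full-batch gradient descent
for t in range(5000):
    a -= alpha * grad(a, X, y)

a_sgd = np.zeros(2)              # mini-batch stochastic gradient descent
for epoch in range(200):
    idx = rng.permutation(len(y))          # shuffle each epoch
    for start in range(0, len(y), 20):     # batches of 20
        batch = idx[start:start + 20]
        a_sgd -= alpha * grad(a_sgd, X[batch], y[batch])

# Both should land near the true parameters (1.5, 4.0):
assert np.allclose(a, [1.5, 4.0], atol=0.2)
assert np.allclose(a_sgd, [1.5, 4.0], atol=0.3)
```

Notice that the SGD estimate jitters around the optimum rather than settling exactly on it; with a constant step size that residual noise is the price paid for the cheaper per-iteration cost.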
Mathematics > Spectral Theory Title: An Abstract Approach to Weak Convergence of Spectral Shift Functions and Applications to Multi-Dimensional Schrödinger Operators (Submitted on 1 Nov 2011) Abstract: We study the manner in which a sequence of spectral shift functions $\xi(\cdot;H_j,H_{0,j})$ associated with abstract pairs of self-adjoint operators $(H_j, H_{0,j})$ in Hilbert spaces $\mathcal{H}_j$, $j\in\mathbb{N}$, converges to a limiting spectral shift function $\xi(\cdot;H,H_0)$ associated with a pair $(H,H_0)$ in the limiting Hilbert space $\mathcal{H}$ as $j\to\infty$ (mimicking the infinite volume limit in concrete applications to multi-dimensional Schrödinger operators). Our techniques rely on a Fredholm determinant approach combined with certain measure theoretic facts. In particular, we show that prior vague convergence results for spectral shift functions in the literature actually extend to the notion of weak convergence. More precisely, in the concrete case of multi-dimensional Schrödinger operators on a sequence of domains $\Omega_j$ exhausting $\mathbb{R}^n$ as $j\to\infty$, we extend the convergence of associated spectral shift functions from vague to weak convergence and also from Dirichlet boundary conditions to more general self-adjoint boundary conditions on $\partial\Omega_j$. Submission history: From Fritz Gesztesy, Tue, 1 Nov 2011 00:51:37 GMT (31kb)
\[ \large \sum_{n=1}^{2015} \frac{1}{(\sqrt{n} + \sqrt{n+1})(\sqrt[4]{n} + \sqrt[4]{n+1})} \] If the above summation can be expressed as $a\sqrt[4]{b}-c$, where $a,b,c$ are positive integers and $b$ is free of fourth powers, find the value of $a+b+c$.
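Each term telescopes: multiplying numerator and denominator by $\sqrt{n+1}-\sqrt{n}$ reduces the term to $\frac{\sqrt{n+1}-\sqrt{n}}{\sqrt[4]{n}+\sqrt[4]{n+1}} = \sqrt[4]{n+1}-\sqrt[4]{n}$, so the sum is $\sqrt[4]{2016} - 1 = 2\sqrt[4]{126} - 1$, suggesting $a+b+c = 2+126+1 = 129$. A quick numerical sanity check of this closed form:

```python
# Direct summation vs. the telescoped closed form 2*126^(1/4) - 1.
s = sum(1 / ((n**0.5 + (n + 1)**0.5) * (n**0.25 + (n + 1)**0.25))
        for n in range(1, 2016))
closed_form = 2 * 126**0.25 - 1   # a*b^(1/4) - c with a=2, b=126, c=1
print(s, closed_form)             # the two values agree
```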
For convenience let $H(X|Y) = \log(n)$, then $$ -\infty ~~\leq~~ H(X|Y) - H(X|Y,X\neq Y) ~~\leq~~ \log\left(\frac{n}{n-1}\right) $$ and both sides have tight examples (i.e. as $p\to 0$ it can be arbitrarily negative, and your example matches the upper bound). More specifically, if $p = \Pr[X \neq Y]$, then: $$ H(X|Y) - H(X|Y,X\neq Y) ~~ \geq ~~ -\frac{(1-p)H(X|Y)}{p} $$and $$ H(X|Y) - H(X|Y,X\neq Y) ~~\leq~~ \log\frac{1}{p} + \frac{1-p}{p}\left[\log\frac{1}{1-p} ~-~ H(X|Y)\right] $$ and we have tight examples for the second, and arbitrarily close to tight examples for the first, for every $p,H(X|Y)$. There will be two steps to the proof: (1) prove an upper bound and matching examples for $H(X|Y,Z)$ where $Z$ is an indicator; (2) convert these to your quantity of interest $H(X|Y,X\neq Y)$. Upper Bound Step 1. Claim 1. Let $Z$ be the indicator for $X\neq Y$ and let $p = \Pr[X\neq Y]$. Then $$H(X|Y) - H(X|Y,Z) \leq H(p) $$ and for any fixed values of $H(X|Y)$ and $p$, we can construct tight examples. Proof. The natural quantity to consider is $$ H(X|Y) - H(X|Y,Z) $$where $Z$ is the indicator, $Z=1$ if $X \neq Y$ and $Z=0$ otherwise. Let $p = \Pr[X\neq Y] = \Pr[Z=1]$. Then as Thomas points out, by the chain rule and the fact that $H(X,Y,Z) = H(X,Y)$, $$ H(X|Y) - H(X|Y,Z) = H(Z|Y) \leq H(p) . ~~~~~~~~ (*) $$ Examples showing tightness: Let $Y$ be distributed arbitrarily; then conditioned on $Y=y$, we let $X=y$ with probability $1-p$ and with probability $p$ we let $X$ be distributed arbitrarily on any set not containing $y$. To be very concrete, you could let $Y=0$ always and let $X=0$ with probability $1-p$ and otherwise $X$ is uniform on $\{1,\dots,m\}$. Choose $m$ to get the desired value of $H(X|Y)$. In these examples, $H(Z|Y) = H(Z) = H(p)$. So we can make the inequality $(*)$ tight for any $p$ and any $H(X|Y)$. $\square$ Step 2. "Theorem" 1. 
$$ H(X|Y) - H(X|Y, X\neq Y) \leq \log\frac{1}{p} + \frac{1-p}{p}\left[\log\frac{1}{1-p} ~ - ~ H(X|Y) \right] $$ and for any fixed $H(X|Y)$ and $p$ there are tight examples. Proof. Now, again as Thomas points out, we have $$ H(X|Y,Z) = p\cdot H(X|Y, X \neq Y) . $$Now, by plugging into $(*)$, we have the inequality $$ H(X|Y) - p\cdot H(X|Y, X\neq Y) \leq H(p) . ~~~~~~~~ (**) $$and we can make this tight for any $p$. Let $H(X|Y) = p\cdot H(X|Y) + (1-p)H(X|Y)$ and rearrange:\begin{align} H(X|Y) - H(X|Y, X\neq Y) &\leq \frac{H(p) - (1-p)H(X|Y)}{p} \\ &= \log\frac{1}{p} + \frac{1-p}{p}\left[\log\frac{1}{1-p} ~ - ~ H(X|Y) \right] .\end{align}and, again, we can make this tight using the examples from before (since we have only renamed things and rearranged the inequality). $\square$ Claim 2. For any fixed $H(X|Y) > 0$, as $p \to 0$, we always have $$ H(X|Y) - H(X|Y,X \neq Y) \to -\infty .$$ Proof. In the bound of the "theorem", for small enough $p$, the upper bound is $\log\frac{1}{p} - \frac{1}{p}\Theta(H(X|Y))$, which approaches $-\infty$ as $p \to 0$ for all fixed $H(X|Y)$. $\square$ Claim 3. For any fixed $H(X|Y)$, we have $$ H(X|Y) - H(X|Y,X\neq Y) \leq \log\frac{2^{H(X|Y)}}{2^{H(X|Y)}-1} , $$ and there are tight examples. In such examples, $p = 1 - 2^{-H(X|Y)}$. Proof. Taking the bound in the "theorem" and taking the derivative with respect to $p$, we find that the upper bound is maximized uniquely at $p = 1 - 2^{-H(X|Y)}$. In that case, the quantity inside the brackets is zero, and we obtain \begin{align} H(X|Y) - H(X|Y, X\neq Y) &\leq \log\frac{1}{p} \\ &=\log\frac{2^{H(X|Y)}}{2^{H(X|Y)}-1} .\end{align} Again, for any $H(X|Y)$, the prior examples with this choice of $p$ make this inequality tight. $\square$ Lower Bound Step 1. Claim 4. For any $p$, $$H(X|Y) - H(X|Y,Z) \geq 0$$ and we can construct examples that are arbitrarily close to $0$. Proof. As stated above, $H(X|Y) - H(X|Y,Z) = H(Z|Y) \geq 0$. To construct examples arbitrarily close to $0$, fix $p$.
The intuition is $H(p)$ is concave, so we will have sometimes $\Pr[Z=1|Y] = \epsilon$ and sometimes $\Pr[Z=1|Y] = 1-\epsilon$, so that $H(Z|Y) = H(\epsilon) \to 0$, yet still $H(Z) = H(p)$. Let $Y = -1$ with probability $1-p$ and $Y = 0$ with probability $p$. If $Y=-1$, then with probability $1-\epsilon$ we have $X=Y$ and otherwise $X$ is uniform on $\{1,\dots,m\}$. If $Y=0$, then with probability $\epsilon$ we have $X=Y$ and otherwise $X$ is uniform on $\{1,\dots,m\}$. Now we can check that $H(Z|Y) = H(\epsilon)$ and $\Pr[Z=1] = (1-p)\cdot \epsilon + p\cdot(1-\epsilon) \to p$ as $\epsilon \to 0$, so $\Pr[X\neq Y]$ is arbitrarily close to $p$. Taking $\epsilon \to 0$ gives the example. $\square$ Step 2. "Theorem" 2. For any $p$ and $H(X|Y)$, $$H(X|Y) - H(X|Y,X\neq Y) \geq -\frac{(1-p)H(X|Y)}{p} $$ and there are examples arbitrarily close. Proof. Again we have $p\cdot H(X|Y,X\neq Y) = H(X|Y,Z)$, so by the previous claim, $$ H(X|Y) - p\cdot H(X|Y,X\neq Y) \geq 0 . $$Again let $H(X|Y) = p\cdot H(X|Y) + (1-p)H(X|Y)$ and rearrange. By the previous claim, we have arbitrarily close examples (since we have only rearranged the inequality). $\square$
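The tight example from Step 1 of the upper bound ($Y=0$ always; $X=0$ with probability $1-p$, otherwise uniform on $\{1,\dots,m\}$) is easy to verify numerically; a small sanity check, with $p=0.3$ and $m=8$ as arbitrary test values:

```python
from math import log2

def H(probs):
    """Shannon entropy in bits."""
    return -sum(q * log2(q) for q in probs if q > 0)

p, m = 0.3, 8
# Y = 0 always, so H(X|Y) = H(X) = H(p) + p*log2(m).
H_X_given_Y = H([1 - p] + [p / m] * m)
# Given Z=0, X is deterministic; given Z=1, X is uniform on m values.
H_X_given_YZ = (1 - p) * 0 + p * H([1 / m] * m)
print(H_X_given_Y - H_X_given_YZ, H([p, 1 - p]))  # both equal H(p): (*) is tight
```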
We call those circuits 'equivalent' because their behaviour is identical in terms of the voltage and current they deliver to an external load $R$ connected to the terminals on the right of each circuit, independently of the value of $R$. So, let's take a look: For the circuit on the left, if we connect a load $R_L$ to the terminals, then the voltage across that load will be $$V_\mathrm{out} = \frac{R_L}{R_L+R_0}V_0,$$where $R_0=5\:\Omega$ and $V_0= 20\:\mathrm{V}$. For the circuit on the right, if we connect a load $R_L$ to the terminals, it will draw a current $I_L$ at a voltage $V_\mathrm{out}=I_LR_L$, which must match the voltage $I_0R_0=V_\mathrm{out}=I_LR_L$ over the internal resistance, with the sum of these matching the source current, i.e.$$I_\mathrm{source} = I_0 + I_L = \frac{V_\mathrm{out}}{R_0} + \frac{V_\mathrm{out}}{R_L} = \frac{R_0+R_L}{R_0R_L}V_\mathrm{out},$$or in other words$$V_\mathrm{out} = \frac{R_LR_0}{R_L+R_0}I_\mathrm{source} = \frac{R_L}{R_L+R_0}R_0I_\mathrm{source},$$where $R_0I_\mathrm{source} = \rm 4\:A \times 5\:\Omega = 20\:V$. In other words, no matter what load resistance we connect to the source, it will deliver the same voltage and current, and we won't be able to distinguish which of the two configurations is on the left-hand side of the terminals. This includes the limit of an open circuit (i.e. $R_L\to\infty$, which is what the currently-accepted answer restricts itself to), but it is much broader than that, including finite loads as well as the circuits' behaviour under a short-circuit induced between the terminals, as the $R_L\to0$ limit.
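The algebra above is easy to spot-check numerically, using the values from this answer ($V_0 = 20\:\mathrm{V}$, $R_0 = 5\:\Omega$, $I_\mathrm{source} = 4\:\mathrm{A}$):

```python
# Both circuits deliver the same V_out for every load R_L.
V0, R0, I_src = 20.0, 5.0, 4.0

for R_L in [0.001, 1.0, 5.0, 100.0, 1e9]:       # near-short to near-open
    v_thevenin = R_L / (R_L + R0) * V0          # voltage-source circuit
    v_norton = (R_L * R0) / (R_L + R0) * I_src  # current-source circuit
    print(R_L, v_thevenin, v_norton)            # the two columns agree
```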
RF Signal Transformation Between the Time and Frequency Domains When we analyze high-frequency electromagnetics problems using the finite element method (FEM), we often compute S-parameters in the frequency domain without reviewing the results in the complementary domain; that is, the time domain. The time domain is where we can find other useful information, such as time-domain reflectometry (TDR). In this blog post, we will demonstrate data conversion between the two domains in order to efficiently obtain results in the desired computation domain through a fast Fourier transform (FFT) process. Very Wide Frequency Range S-Parameter Calculation Say you are simulating a device and want to get a very wideband frequency response with a small frequency step in the frequency domain, or a time-domain reflectometry response over a long time period. Either would take a long time. However, in both cases, the computation performance over a wide range of frequencies and times can be boosted by running the simulation in the complementary domain first and then performing an FFT to generate the results in the preferred domain. For example, you can: Simulate a transient analysis, and then run a time-to-frequency FFT for a wideband frequency response Perform a frequency sweep, and then a frequency-to-time FFT for a time-domain bandpass impulse response Logarithmic surface plot of the electric field norm and arrow plot of time-averaged power flow in a coaxial low-pass filter at 10 GHz. Performing a wideband frequency sweep with a small frequency step size could be a time-consuming and cumbersome task. A sharp resolution of the frequency response of a device can be found from the time-to-frequency FFT, where the ending time of the transient input to the FFT process defines the frequency resolution of the final results. Consider a modulated Gaussian pulse used as an excitation source to drive a time-domain model for a wideband response in the frequency domain.
The excited energy gradually decays in general as time passes, and it eventually vanishes. The longer the time-domain simulation is performed as an input to the FFT, the finer the frequency step size in the FFT output. When the amount of energy in the simulation domain is negligible after a certain time period, there is no need to continue the simulation. Instead, we can stop the transient simulation when the energy is less than a certain threshold and fill the solutions with zeros in the remaining time before executing the FFT. We call this process zero-padding. Left: The time-domain voltage at the excitation (source) lumped port, converging to zero. Right: Reflection property (S11) and insertion loss (S21) are plotted in a 60-GHz bandwidth in the frequency domain. Far-Field Radiation Pattern of Wideband and Multiband Antennas The results of a wideband antenna study, such as an S-parameter and/or far-field radiation pattern analysis, can be obtained by performing a transient simulation and a time-to-frequency FFT. We can run a time-dependent study first and then transform the dependent variable (magnetic vector potential A) to convert a voltage signal at a lumped port from the time domain to the frequency domain. S-parameters and far-field radiation results are computed from the converted frequency domain data. The following dual-band printed antenna shows two resonances, where the computed S11 is below -10 dB in the S-parameter plot for the given frequency range. Left: Surface plot of the electric field norm and far-field radiation pattern of a dual-band printed strip antenna at 2.265 GHz. Right: S-parameter plot shows two resonance regions where the computed S11 is below -10 dB. Two-Step Process with Time-to-Frequency Fourier Transform In the Lumped Port Settings window, clicking the Calculate S-parameter check box on the excitation port sets the voltage excitation type to the modulated Gaussian.
The center frequency (f_0) of the modulating sinusoidal function can also be specified. Lumped port settings in the Electromagnetic Waves, Transient physics interface. The modulated Gaussian excitation voltage is a sinusoidal carrier at the center frequency multiplied by a Gaussian envelope, where \sigma = 1/(2f_0) is the standard deviation of the envelope, f_0 is the center frequency, and \eta_f is the modulating frequency shift ratio. A small value (for example, 3%) of \eta_f may enhance the frequency response around the highest frequency. The center frequency here has to be matched to the center frequency of the S-parameter calculation used in the time-to-frequency FFT study step from the Model Builder tree. Left: Time-dependent study step settings. Center: Time-to-frequency FFT study step settings. Right: Default solver sequence in the Model Builder tree. The end time of the time-dependent study step is set to 100 times the period of the modulating sinusoidal function, which could be long enough for a simple passive device to ensure that the input energy has fully decayed. This works for typical passive circuits, except for closed-cavity-type devices, where the energy decay time can be much longer. The stop condition is automatically added under the time-dependent solver (the Calculate S-parameter check box activates this stop condition in the solver settings). When the sum of the total electric and magnetic energy in the modeling domain is more than 70 dB below the input energy, the time-dependent study is terminated by the stop condition and all time-domain data is passed to the FFT step. To generate the frequency-domain data without significant distortion in the frequency range between 0 and 2f_0, the time step, satisfying the Nyquist criterion, is set to 1/(4f_0) = 1/(2B), where B is the bandwidth 2f_0. To provide a fine frequency resolution, the end time of the FFT study step is much longer than that of the time-dependent study. Zero-padding is automatically applied to the time-dependent study data before the FFT study step.
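The effect of zero-padding on frequency resolution can be sketched outside the software with a plain FFT; a toy NumPy example with an arbitrary 10 GHz modulated Gaussian pulse (illustrative values, not the solver's actual pipeline):

```python
import numpy as np

fs = 100e9                      # 100 GS/s sample rate (illustrative)
N = 200
t = np.arange(N) / fs           # 2 ns record
f0 = 10e9                       # pulse center frequency
sigma = 1 / (2 * f0)            # envelope standard deviation, as in the text
v = np.exp(-(t - 4 * sigma)**2 / (2 * sigma**2)) * np.sin(2 * np.pi * f0 * t)

V_raw = np.fft.rfft(v)              # spectrum of the raw record
V_pad = np.fft.rfft(v, n=8 * N)     # zero-padded to an 8x longer record
df_raw = fs / N                     # 500 MHz frequency step
df_pad = fs / (8 * N)               # 62.5 MHz frequency step
print(df_raw / 1e6, df_pad / 1e6)   # padding refines the frequency grid 8x
```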
Time Domain Bandpass Impulse Response of a Transmission Line While transient analyses are useful for time-domain reflectometry (TDR) to handle signal integrity (SI) problems, many RF and microwave examples are addressed using frequency-domain simulations generating S-parameters. However, from the frequency-domain data alone, it is difficult to identify the sources of signal degradation. By simulating a circuit in the frequency domain and performing a frequency-to-time FFT, a voltage signal computed in the frequency domain can be investigated in the time domain. The computed results can help identify the physical discontinuities and impedance mismatches on the transmission line by analyzing the signal fluctuation in the time domain. Time domain lumped port voltage. The overshoot and undershoot of the signal indicate the discontinuities of the microstrip line. In the above figure, the time-domain results of the voltage bandpass impulse response at lumped port 1 are plotted for a microstrip line that has a couple of line discontinuities. The voltage fluctuations correspond to the propagation times for the incident pulse to be reflected from the two line discontinuities: the defective parts of the 50-ohm microstrip line. The roundtrip travel time from lumped port 1 to each discontinuity agrees with the location of each voltage fluctuation. Two-Step Process with Frequency-to-Time Fourier Transform The time-domain results may vary with the input arguments in each study step. The impacts of the study step input arguments are as follows. Frequency domain study step: the start frequency affects low-frequency envelope noise; the stop frequency affects resolution and high-frequency ripple noise; the frequency step sets the alias period. Frequency-to-time FFT study step: the stop time affects alias visibility. Frequency domain study step settings.
The frequency step, \Delta f (that is, df in the frequency domain study step settings above), is set to make the alias period in the time-domain response greater than the roundtrip travel time from the excitation, lumped port 1, to the line termination, lumped port 2: 1/\Delta f = 1 ns > 2d\sqrt{\epsilon_r}/c_\mathrm{const}, where d is the circuit board length, \epsilon_r is the permittivity, and c_\mathrm{const} is the speed of light in vacuum, predefined as a constant in the COMSOL Multiphysics® software. Frequency-to-time FFT study step settings. While performing the FFT, a Gaussian window function is used. This helps to suppress the noise coming from the limited range of the frequency sweep. Each study step uses the Store fields in output option, which defines the selections where the computed results are stored. By choosing only the lumped port boundaries for the Store fields in output settings, it is possible to greatly reduce the size of the model file. Managing Computed Results Since the FFT only transforms the dependent variable from the first domain, it is only possible to use postprocessing variables directly related to the dependent variable in the second domain. The first domain results are still accessible, typically through the Solution Store 1 data set. The frequency-to-time FFT study step transforms the solution of the dependent variables in the frequency domain to the time domain with a very small time step of ten samples per period, defined by the highest frequency in the model. Only the postprocessing variables that can be expressed with the dependent variables are valid for results analysis. Since the transformed solutions typically contain many time steps, it is recommended to use the Store fields in output option to reduce the size of the model. Try These Application Examples of RF Signal Transformation The simulation methods using fast Fourier transform presented in this blog post make RF and microwave device modeling more efficient.
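The frequency-to-time step can likewise be imitated with a plain inverse FFT. This toy NumPy sketch assumes a hypothetical single reflection with round-trip delay tau, a Gaussian window, and a 1 GHz frequency step (so the alias period is 1 ns); it is an illustration of the idea, not the COMSOL implementation:

```python
import numpy as np

df, N = 1e9, 64                  # 1 GHz step -> 1/df = 1 ns alias period
f = np.arange(N) * df
tau = 0.3e-9                     # hypothetical round-trip delay to a discontinuity
S11 = 0.2 * np.exp(-2j * np.pi * f * tau)   # toy reflection coefficient

# Gaussian window suppresses ripple from the finite sweep range
w = np.exp(-0.5 * ((f - f[N // 2]) / (f[-1] / 4))**2)
h = np.fft.ifft(S11 * w)         # frequency-to-time transform
t = np.arange(N) / (N * df)
t_peak = t[np.argmax(np.abs(h))]
print(t_peak)                    # impulse-response peak near tau = 0.3 ns
```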
Try performing a time-to-frequency FFT for a wideband evaluation of S-parameters in this tutorial model of a coaxial cable: Note that to download the MPH-file, you must be logged into your COMSOL Access account and have a valid software license. Browse additional examples in the Application Library in the COMSOL® software: RF Module > Filters > coaxial_low_pass_filter_transient RF Module > Antennas > dual_band_antenna_transient RF Module > EMI_EMC_Applications > microstrip_line_discontinuity (Frequency-to-time for a quick investigation of TDR)
I have a question about conversion in number systems. For converting binary to decimal, why do we need to multiply each of the coefficients by powers of 2? And while converting decimal (such as 100.12) back to binary, why do we need to multiply the numbers after the decimal point by 2? And this happens in almost all the conversions. Can somebody please explain why? For example, 11 in binary would be equal to 1+2=3 (in decimal). But why do we need to raise them to the powers of two? The whole thing about having different number systems is having a different number of available symbols to express a quantity. The concept of 11 things is not the same in base 3, as it is in base 10, or in base 2. For your question, I am doing a simple computation below. In base 2, you have only two digits (symbols) available, $\big\{0,1\big\}$, while in the decimal system you have 10 digits (symbols) available $\big\{0,1,\cdots,9\big\}$. Now we write the following correspondence with the decimal numbers on the left side, and the binary numbers on the right. 0$\to$0 1$\to$1 What do we do now? We have more symbols left in the decimal system, but no more symbols in the binary system. The solution is, do exactly what we do when the symbols run out in the decimal system. When we reach 9, we write the first symbol 0 again, but since we already have a number 0, we elevate it by writing the next bigger digit beside it. The consequence? We get 10. Now we proceed in the same way as before. So we get. 2$\to$10 3$\to$11 4$\to$100 5$\to$101 . . . So in base b, we are using b digits, and suppose we have an n-digit number in base b, $a_na_{n-1}a_{n-2}\cdots a_1$. Now let us try to find out how we arrived at this number. (Please note that all the following calculations have been done in the familiar decimal system that we use.) Note that we have the symbol $a_n$ in the leftmost place.
From the generic rule outlined above, we can see that $a_n$ comes to play only if we have already used up all the symbols occurring before it. So, for the $n^{th}$ place we have used up all the symbols in $\big\{0,1,\cdots a_n-1\big\}$, which contains $a_n$ symbols. So, we have used up $a_n$ symbols for the $n^{th}$ place. Now, we move on to the $n-1^{st}$ place, and we check that, in a similar way, we have used up all of the $a_{n-1}$ symbols in $\big\{0,1,\cdots a_{n-1}-1\big\}$. But for each of the symbols $a_i$ in the $n^{th}$ place, we have actually used up all the $b$ symbols in the $n-1^{st}$ place, before we could come to $a_{n-1}$ in the $n-1^{st}$ place for $a_n$ in the $n^{th}$ place. Thus to reach $a_n$ in the $n^{th}$ place and $a_{n-1}$ in the $n-1^{st}$ place, we have crossed $b\times a_n + a_{n-1}$ numbers before it. By a similar argument, we reach $a_n$ in the $n^{th}$ place, $a_{n-1}$ in the $n-1^{st}$ place and $a_{n-2}$ in the $n-2^{nd}$ place after crossing $b^2\times a_n + b\times a_{n-1} + a_{n-2}$ numbers before it. We proceed in the same way, till we have crossed $b^{n-1}\times a_n+ b^{n-2}\times a_{n-1} + \cdots + a_1$. Thus, to reach $a_na_{n-1}a_{n-2}\cdots a_1$ in base b, we need to cross $b^{n-1}\times a_n+ b^{n-2}\times a_{n-1} + \cdots + a_1$ numbers (calculation in decimal numbers). So it is equivalent to saying that $0+b^{n-1}\times a_n+ b^{n-2}\times a_{n-1} + \cdots + a_1$ in decimal is same as $a_na_{n-1}a_{n-2}\cdots a_1$ in base b. I hope this answers your question, because b (base) is 2 in the binary system since we use two symbols only, 0 and 1. For your last Q, a sequence $d_n...d_0$ of $1$'s and $0$'s, in base two, represents the number $2^nd_n+...+2^0d_0.$ That follows immediately from the definition of a "base n number system". The numeral "abcd" in base n means $a\cdot n^3+ b\cdot n^2+ c\cdot n^1+ d\cdot n^0$. (Of course, each of a, b, c, d must be less than n and $n^1= n$ and $n^0= 1$.) 
In our "usual" base 10 number system "abcd" would be $a\cdot 10^3+ b\cdot 10^2+ c\cdot 10^1+ d\cdot 10^0= a(1000)+ b(100)+ c(10)+ d(1)$. For example, $3215= 3(1000)+ 2(100)+ 1(10)+ 5$. In base 2, we can use only "0" and "1" as digits, as they are the only non-negative integers less than 2. $11001= 1(2^4)+ 1(2^3)+ 0(2^2)+ 0(2)+ 1(1)$, which would be $1(16)+ 1(8)+ 0(4)+ 0(2)+ 1= 25$ in base 10. "Base 16" is also commonly used in computer science because it is a power of 2 (and computer registers are typically 4, 8, or 16 bits long) and results in smaller numerals than base 2. Of course, we also need symbols for the integers 10, 11, 12, 13, 14, and 15, which are also less than 16. The standard is A= 10, B= 11, C= 12, D= 13, E= 14, and F= 15. The numeral A5B3 is the same as $A(16^3)+ 5(16^2)+ B(16)+ 3(1)= 10(4096)+ 5(256)+ 11(16)+ 3= 42419$ in base 10.
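The positional rule described above is exactly what a base-conversion routine implements; a short Python sketch:

```python
def to_decimal(digits, base):
    """Positional rule: value = a_n*b^(n-1) + ... + a_1*b^0,
    accumulated digit by digit as value = value*base + digit."""
    value = 0
    for d in digits:
        value = value * base + int(d, 16)   # int(d, 16) also maps A-F to 10-15
    return value

print(to_decimal("11001", 2))    # 25
print(to_decimal("A5B3", 16))    # 42419
print(to_decimal("3215", 10))    # 3215
```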
Cards from an ordinary deck are turned face up one at a time. Compute the expected number of cards that need to be turned face up in order to obtain (a) 2 aces; (c) all 13 hearts. This is a homework problem straight from a chapter on Expectation from a probability textbook. The textbook has a section on finding the expected value of a negative binomial random variable, and says $E[X] = E[X_1]+ E[X_2] + \dots + E[X_r] = \frac{r}{p}$. (The textbook defines the negative binomial distribution as the probability that $n$ trials are required until $r$ successes occur. It is assumed that $r$ is constant and $n$, the value of the negative binomial random variable, is unbounded, i.e. may go to $\infty$.) $$P(X = n) = \binom{n-1}{r-1}p^r (1-p)^{n-r}\ \ \ \ \text{for}\ r \le n \lt \infty$$ The problem (a) here seems to want the expectation of a negative binomial random variable; however, the above equation for $E[X]$ assumes that $r \le n \lt \infty$, but in the case of this problem, only up to $50$ cards may actually need to be selected before $2$ are aces (there are $48$ non-aces, so the $49$th and $50$th cards must be aces). In other words, for this problem $2 \le n \le 50$. So instead of following the textbook's equation for $E[X]$, I tried to find $E[X]$ for (a), given $r = 2, p = \frac{4}{52}$, as $$E[X] = \sum_{n=2}^{50} n \binom{n-1}{2-1} \left(\frac{4}{52}\right)^2\left(\frac{48}{52}\right)^{n-2} \approx 19.8134 $$ If I follow the textbook, I get $E[X] = \frac{r}{p} = 2 \left(\frac{4}{52}\right)^{-1} = 26$. Are either of these answers correct? And is problem (c) essentially the same as solving (a)?
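One way to sanity-check candidate answers is a quick Monte Carlo simulation of dealing without replacement (a rough numerical check, not a derivation); the simulated mean can be compared with the symmetry-based value $2\cdot\frac{52+1}{4+1} = 21.2$ for the expected position of the second ace:

```python
import random

deck = ["ace"] * 4 + ["other"] * 48
rng = random.Random(1)

def avg_draws_for_two_aces(trials=100_000):
    """Average position of the second ace when dealing without replacement."""
    total = 0
    for _ in range(trials):
        rng.shuffle(deck)
        aces = 0
        for i, card in enumerate(deck, start=1):
            if card == "ace":
                aces += 1
                if aces == 2:
                    total += i
                    break
    return total / trials

print(avg_draws_for_two_aces())   # about 21.2 = 2*(52+1)/(4+1)
```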
In addition to André's notes, another means of calculating solutions to these recurrence relations is to rephrase them using linear algebra as a single matrix multiply and then apply the standard algorithms for computing large powers of numbers (i.e., via binary representation of the exponent) to computing powers of the matrix; this allows for the $n$th member of the sequence to be computed with $O(\log(n))$ multiplies (of potentially exponentially-large numbers, but the multiplication can also be sped up through more complicated means). In the Fibonacci case, this comes by forming the vector $\mathfrak{F}_n = {F_n\choose F_{n-1}}$ and recognizing that the recurrence relation can be expressed by multiplying this vector with a suitably-chosen matrix:$$\mathfrak{F}_{n+1} = \begin{pmatrix}F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix}F_n + F_{n-1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix} = M_F\mathfrak{F}_n $$ where $M_F$ is the $2\times2$ matrix $\begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix}$. This lets us find $F_n$ by finding $M_F^n\mathfrak{F}_0$, and as I noted above the matrix power is easily computed by finding $M_F^2, M_F^4=(M_F^2)^2, \ldots$ (note that this also gives an easy way of proving the formulas for $F_{2n}$ in terms of $F_n$ and $F_{n-1}$, which are just the matrix multiplication written out explicitly; similarly, the Binet formula itself can be derived by finding the eigenvalues of the matrix $M_F$ and diagonalizing it).
Similarly, for the Tribonacci numbers the same concept applies, except that the matrix is $3\times3$:$$\mathfrak{T}_{n+1} = \begin{pmatrix} T_{n+1} \\ T_n \\ T_{n-1} \end{pmatrix} = \begin{pmatrix} T_n+T_{n-1}+T_{n-2} \\ T_n \\ T_{n-1} \end{pmatrix} = \begin{pmatrix} 1&1&1 \\ 1&0&0 \\ 0&1&0 \end{pmatrix} \begin{pmatrix} T_n \\ T_{n-1} \\ T_{n-2} \end{pmatrix} = M_T\mathfrak{T}_n$$with $M_T$ the $3\times3$ matrix that appears there; this is (probably) the most efficient all-integer means of finding $T_n$ for large values of $n$, and again it provides a convenient way of proving various properties of these numbers.
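A compact implementation of this square-and-multiply idea, in plain Python so the integers stay exact:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, n):
    """Square-and-multiply: O(log n) matrix multiplications."""
    size = len(M)
    R = [[int(i == j) for j in range(size)] for i in range(size)]  # identity
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

M_F = [[1, 1], [1, 0]]   # M_F^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]]
def fib(n):
    return mat_pow(M_F, n)[0][1]

print(fib(10), fib(100))  # 55 354224848179261915075
```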
It's context-free. Here's the grammar $G$: $S \to A | B|AB|BA$ $A \to a|aAa|aAb|bAb|bAa$ $B \to b|aBa|aBb|bBb|bBa$ $A$ generates words of odd length with $a$ in the center. Same for $B$ and $b$. I'll present a proof that this grammar is correct. Let $L = \{a,b\}^* \setminus \{ww \mid w \in \{a,b\}^*\}$ (the language in the question). Theorem. $L = L(G)$. In other words, this grammar generates the language in the question. Proof. This certainly holds for all odd-length words, since this grammar generates all odd-length words, as does $L$. So let's focus on even-length words. Suppose $x \in L$ has even length. I'll show that $x \in L(G)$. In particular, I claim that $x$ can be written in the form $x=uv$, where both $u$ and $v$ have odd length and have different central letters. Thus $x$ can be derived from either $AB$ or $BA$ (according to whether $u$'s central letter is $a$ or $b$). Justification of claim: Let the $i$th letter of $x$ be denoted $x_i$, so that $x = x_1 x_2 \cdots x_n$. Then since $x$ is not in $\{ww \mid w \in \{a,b\}^{n/2}\}$, there must exist some index $i$ such that $x_i \ne x_{i+n/2}$. Consequently we can take $u = x_1 \cdots x_{2i-1}$ and $v = x_{2i} \cdots x_n$; the central letter of $u$ will be $x_i$, and the central letter of $v$ will be $x_{i+n/2}$, so by construction $u,v$ have different central letters. Next suppose $x \in L(G)$ has even length. I'll show that we must have $x \in L$. If $x$ has even length, it must be derivable from either $AB$ or $BA$; without loss of generality, suppose it is derivable from $AB$, and $x=uv$ where $u$ is derivable from $A$ and $v$ is derivable from $B$. If $u,v$ have the same lengths, then we must have $u\ne v$ (since they have different central letters), so $x \notin \{ww \mid w \in \{a,b\}^*\}$. So suppose $u,v$ have different lengths, say length $\ell$ and $n-\ell$ respectively. Then their central letters are $u_{(\ell+1)/2}$ and $v_{(n-\ell+1)/2}$.
The fact that $u,v$ have different central letters means that $u_{(\ell+1)/2} \ne v_{(n-\ell+1)/2}$. Since $x=uv$, this means that $x_{(\ell+1)/2} \ne x_{(n+\ell+1)/2}$. If we attempt to decompose $x$ as $x=ww'$ where $w,w'$ have the same length, then we'll discover that $w_{(\ell+1)/2} = x_{(\ell+1)/2} \ne x_{(n+\ell+1)/2} = w'_{(\ell+1)/2}$, i.e., $w\ne w'$, so $x \notin \{ww \mid w \in \{a,b\}^*\}$. In particular, it follows that $x \in L$.
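The characterization used in the proof (odd length, or an odd/odd split with differing central letters) makes the grammar easy to test by brute force; a small Python check:

```python
from itertools import product

def in_L(x):
    """Direct definition: x is NOT of the form ww."""
    n = len(x)
    return n % 2 == 1 or x[:n // 2] != x[n // 2:]

def grammar_accepts(x):
    """Membership per the grammar: odd-length words derive from A or B;
    even-length words must split as uv with u, v of odd length and
    different central letters (derivable from AB or BA)."""
    n = len(x)
    if n % 2 == 1:
        return True
    return any(x[(l - 1) // 2] != x[l + (n - l - 1) // 2]
               for l in range(1, n, 2))    # odd-length prefix u = x[:l]

# Exhaustive comparison on all words up to length 8
for n in range(0, 9):
    for w in product("ab", repeat=n):
        x = "".join(w)
        assert grammar_accepts(x) == in_L(x)
print("grammar matches L on all words of length <= 8")
```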
We study the following question: Is it easier to construct a hitting-set generator for polynomials $p:\mathbb{F}^n\rightarrow\mathbb{F}$ of degree $d$ if we are guaranteed that the polynomial vanishes on at most an $\epsilon>0$ fraction of its inputs? We will specifically be interested in tiny values of $\epsilon\ll d/|\mathbb{F}|$. This question was …

In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error $\varepsilon$ for $n$-bit sources having min-entropy $\mathrm{poly}\log(n/\varepsilon)$. Unfortunately, the construction's running time is $\mathrm{poly}(n/\varepsilon)$, which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a $\mathrm{poly}(n,\log(1/\varepsilon))$-time computable two-source condenser. For any $k$ …

A code $\mathcal{C}$ is $(1-\tau,L)$ erasure list-decodable if for every codeword $w$, after erasing any $1-\tau$ fraction of the symbols of $w$, the remaining $\tau$-fraction of its symbols has at most $L$ possible completions into codewords of $\mathcal{C}$. Non-explicitly, there exist binary $(1-\tau,L)$ erasure list-decodable codes having rate $O(\tau)$ and …

The question of finding an $\epsilon$-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance $\frac{1-\epsilon}{2}$ and rate close to the Gilbert-Varshamov bound, has attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an …

A recent series of breakthroughs initiated by Spielman and Teng culminated in the construction of nearly-linear-time Laplacian solvers, approximating the solution of a linear system $Lx=b$, where $L$ is the normalized Laplacian of an undirected graph. In this paper we study the space complexity of the problem. …

We show a reduction from the existence of explicit $t$-non-malleable extractors with a small seed length to the construction of explicit two-source extractors with small error for sources with arbitrarily small constant rate. Previously, such a reduction was known either when one source had entropy rate above half [Raz05] or …

The problem of approximating the eigenvalues of a Hermitian operator can be solved by a quantum logspace algorithm. We introduce the problem of approximating the eigenvalues of a given matrix in the context of classical space-bounded computation. We show that approximating the second eigenvalue of stochastic operators (in a certain range of …

We construct explicit two-source extractors for $n$-bit sources, requiring $n^\alpha$ min-entropy and having error $2^{-n^\beta}$, for some constants $0 < \alpha,\beta < 1$. Previously, constructions for exponentially small error required either min-entropy $0.49n$ [Bou05] or three sources [Li15]. The construction combines somewhere-random condensers based on the Incidence Theorem [Zuc06,Li11], …

We explicitly construct extractors for two independent $n$-bit sources of $(\log n)^{1+o(1)}$ min-entropy. Previous constructions required either $\mathrm{polylog}(n)$ min-entropy [CZ15,Meka15] or five sources [Cohen16]. Our result extends the breakthrough result of Chattopadhyay and Zuckerman [CZ15] and uses the non-malleable extractor of Cohen [Cohen16]. The main new ingredient in our construction …

Constructing pseudorandom generators for low-degree polynomials has received considerable attention in the past decade. Viola [CC 2009], following an exciting line of research, constructed a pseudorandom generator for degree-$d$ polynomials in $n$ variables, over any prime field. The seed length used is $O(d \log{n} + d 2^d)$, …

We show a generic, simple way to amplify the error-tolerance of locally decodable codes. Specifically, we show how to transform a locally decodable code that can tolerate a constant fraction of errors into a locally decodable code that can recover from a much higher error rate. We also show how to …

Recently, Efremenko showed locally decodable codes of sub-exponential length. That result showed that these codes can handle up to a $\frac{1}{3}$ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $\frac{1}{2}-\alpha$ for any $\alpha>0$ and locally list-decoded from error rate $1-\alpha$ …

We show that a mild derandomization assumption together with the worst-case hardness of NP implies the average-case hardness of a language in non-deterministic quasi-polynomial time. Previously such connections were only known for high classes such as EXP and PSPACE. There has been a long line of research trying to explain …

Optimal dispersers have better dependence on the error than optimal extractors. In this paper we give explicit disperser constructions that beat the best possible extractors in some parameters. Our constructions are not strong, but we show that having such explicit strong constructions implies a solution to the Ramsey graph construction …

In this note we revisit the construction of high-noise, almost-optimal-rate list-decodable codes of Guruswami ("Better extractors for better codes?"). Guruswami showed that based on optimal extractors one can build $(1-\epsilon,O(\frac{1}{\epsilon}))$ list-decodable codes of rate $\Omega(\frac{\epsilon}{\log\frac{1}{\epsilon}})$ and alphabet size …

Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. This research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error-correcting codes, was noticed before. Yet, …

We deal with the problem of extracting as much randomness as possible from a defective random source. We devise a new tool, a "merger", which is a function that accepts $d$ strings, one of which is uniformly distributed, and outputs a single string that is guaranteed …

We present a logspace many-one reduction from the undirected st-connectivity problem to its complement. This shows that $\mathrm{SL}=\text{co-}\mathrm{SL}$.
I'm having trouble understanding and interpreting the differential equation for the activation variable $m$: $$\dot{m} = (m_\infty(V) - m)/\tau(V)$$ which enters via $$p = m^a h^b$$ into the equation $$I = \bar{g} p (V - E)$$ for the net current $I$ generated by a large population of identical channels, where $p$ is the average proportion of channels in the open state, $\bar{g}$ is the maximal conductance of the population, and $E$ is the reversal potential of the current. $m$ is the probability that one of the $a$ activation gates is open; equivalently, $m^a$ is the proportion of open activation channels (assuming all of a channel's activation gates must be open simultaneously). The differential equation might give us $m$ as an explicit function of time, but it explicitly involves $V$, which is implicitly another function of time, which in turn depends on the number of open gates. Things are horribly complicated! On the other hand — since it's about voltage-gated channels and there are "immediate" voltage-sensitive regions in the channel protein which presumably don't have memory ("the single channel has no memory about the duration of its own state"), but possibly a time lag — I expected the probability of an activation gate being open to be a "pure" (possibly time-lagged) function of $V$. My question comes in two disguises. (1) Given two explicit functions $m_\infty(V)$ and $\tau(V)$, together with $m(0)$ and $V(0)$, how could we ever arrive at an explicit solution for $m(t)$, assuming that $V(t)$ depends somehow on $m(t')$ for $t'\leq t$, but possibly also on some injected currents? (2) How can an intuitive and sensible interpretation of the terms in $$\dot{m} = (m_\infty(V) - m)/\tau(V)$$ be given? What do the time constant $\tau(V)$ and its dependence on $V$ mean? In which respect, and by which hypothesised mechanism, is the gate "faster" when $\tau$ is smaller?
How can the voltage-sensitive steady-state activation function $m_\infty(V)$ be intuitively explained other than by "giving the asymptotic value of $m$ when the potential $V$ is fixed (voltage-clamp)" or by some complicated description of experiments to determine it? What does $m_\infty(V) - m$ mean, i.e. "the deviation of the current activation from the steady-state activation"?
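Under voltage clamp the equation becomes concrete: holding $V$ fixed makes $\dot{m} = (m_\infty(V) - m)/\tau(V)$ a linear ODE whose solution relaxes exponentially toward $m_\infty(V)$ with time constant $\tau(V)$. A small sketch of this (the Boltzmann/Gaussian shapes below are illustrative assumptions, not fitted channel data):

```python
import math

# Voltage clamp: with V held fixed, dm/dt = (m_inf(V) - m)/tau(V) is linear
# and m relaxes exponentially toward m_inf(V) with time constant tau(V).
# m_inf and tau below are illustrative Boltzmann/Gaussian shapes only.

def m_inf(V, V_half=-40.0, k=7.0):
    # sigmoidal steady-state activation
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def tau(V, tau_max=4.0, V_peak=-40.0, sigma=30.0):
    # bell-shaped voltage-dependent time constant (ms)
    return 0.5 + tau_max * math.exp(-((V - V_peak) / sigma) ** 2)

V, m, dt = -20.0, 0.0, 0.01      # clamp voltage (mV), initial m, step (ms)
for _ in range(10000):           # integrate 100 ms with forward Euler
    m += dt * (m_inf(V) - m) / tau(V)

assert abs(m - m_inf(V)) < 1e-3  # m has converged to the steady-state value
```

Smaller $\tau(V)$ simply makes this exponential approach steeper: the gate population re-equilibrates to the new $m_\infty(V)$ faster after a voltage step.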
If you want a rigorous proof, the following lemma is often more useful and handier than the definitions. If $c = \lim_{n\to\infty} \frac{f(n)}{g(n)}$ exists, then

- $c=0 \iff f \in o(g)$,
- $c \in (0,\infty) \iff f \in \Theta(g)$, and
- $c=\infty \iff f \in \omega(g)$.

With this, you should be able to order most of the functions coming up in algorithm analysis¹. As an exercise, prove it! Of course you have to be able to calculate the limits accordingly. Some useful tricks to break complicated functions down to basic ones are:

- Express both functions as $e^{\dots}$ and compare the exponents; if their ratio tends to $0$ or $\infty$, so does the original quotient.
- More generally: if you have a convex, continuously differentiable and strictly increasing function $h$ so that you can re-write your quotient as $\frac{f(n)}{g(n)} = \frac{h(f^*(n))}{h(g^*(n))}$, with $g^* \in \Omega(1)$ and $\lim_{n \to \infty} \frac{f^*(n)}{g^*(n)} = \infty$, then $\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$. See here for a rigorous proof of this rule (in German).
- Consider continuations of your functions over the reals. You can now use L'Hôpital's rule; be mindful of its conditions²!
- Have a look at the discrete equivalent, Stolz–Cesàro.
- When factorials pop up, use Stirling's formula: $n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$.

It is also useful to keep a pool of basic relations you prove once and use often, such as:

- logarithms grow slower than polynomials, i.e. $(\log n)^\alpha \in o(n^\beta)$ for all $\alpha, \beta > 0$;
- order of polynomials: $n^\alpha \in o(n^\beta)$ for all $\alpha < \beta$;
- polynomials grow slower than exponentials: $n^\alpha \in o(c^n)$ for all $\alpha$ and $c > 1$.

It can happen that the above lemma is not applicable because the limit does not exist (e.g. when the functions oscillate). In this case, consider the following characterisation of the Landau classes using the limit superior/inferior:

- With $c_s := \limsup_{n \to \infty} \frac{f(n)}{g(n)}$ we have $0 \leq c_s < \infty \iff f \in O(g)$ and $c_s = 0 \iff f \in o(g)$.
- With $c_i := \liminf_{n \to \infty} \frac{f(n)}{g(n)}$ we have $0 < c_i \leq \infty \iff f \in \Omega(g)$ and $c_i = \infty \iff f \in \omega(g)$.
- Furthermore, $0 < c_i,c_s < \infty \iff f \in \Theta(g) \iff g \in \Theta(f)$ and $c_i = c_s = 1 \iff f \sim g$.

Check here and here if you are confused by my notation.

¹ Nota bene: My colleague wrote a Mathematica function that does this successfully for many functions, so the lemma really reduces the task to mechanical computation.

² See also here.
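As a rough numerical companion to the lemma (an illustration, not a proof; the pair of functions is my own example): for $f(n) = (\log n)^3$ and $g(n) = \sqrt{n}$, the ratio eventually decreases toward $0$, consistent with the basic relation $(\log n)^\alpha \in o(n^\beta)$:

```python
import math

# Illustration of the limit lemma: f(n) = (log n)^3, g(n) = sqrt(n).
# The ratio f(n)/g(n) rises briefly, then decays toward 0 — suggesting
# f in o(g), i.e. polylogs grow slower than any polynomial.
def ratio(n):
    return math.log(n) ** 3 / math.sqrt(n)

samples = [ratio(10 ** k) for k in range(2, 9)]   # n = 10^2 .. 10^8
assert all(a > b for a, b in zip(samples[1:], samples[2:]))  # eventually decreasing
assert samples[-1] < 1.0                                     # already below 1 at n = 10^8
```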
The original Noether's theorem assumes a Lagrangian formulation. Is there a kind of Noether's theorem for the Hamiltonian formalism? Action formulation. It should be stressed that Noether's theorem is a statement about consequences of symmetries of an action functional (as opposed to, e.g., symmetries of equations of motion, or solutions thereof, cf. this Phys.SE post). So to use Noether's theorem, we first of all need an action formulation. How do we get an action for a Hamiltonian theory? Well, let us for simplicity consider point mechanics (as opposed to field theory, which is a straightforward generalization). Then the Hamiltonian action reads $$ S_H[q,p] ~:=~ \int \! dt ~ L_H(q,\dot{q},p,t). \tag1$$ Here $L_H$ is the so-called Hamiltonian Lagrangian $$ L_H(q,\dot{q},p,t) ~:=~\sum_{i=1}^n p_i \dot{q}^i - H(q,p,t). \tag 2$$ We may view the action (1) as a first-order Lagrangian system $L_H(z,\dot{z},t)$ in twice as many variables $$ (z^1,\ldots,z^{2n}) ~=~ (q^1, \ldots, q^n;p_1,\ldots, p_n).\tag{3}$$ The Euler-Lagrange equations of (1) then reproduce Hamilton's equations: $$ 0~\approx~\frac{\delta S_H}{\delta z^I} ~=~\sum_{J=1}^{2n}\omega_{IJ}\dot{z}^J -\frac{\partial H}{\partial z^I} \qquad\Leftrightarrow\qquad \dot{z}^I~\approx~\{z^I,H\} \qquad\Leftrightarrow\qquad $$ $$ \dot{q}^i~\approx~ \{q^i,H\}~=~\frac{\partial H}{\partial p_i}\qquad \text{and}\qquad \dot{p}_i~\approx~ \{p_i,H\}~=~-\frac{\partial H}{\partial q^i}. \tag{4}$$ [Here the $\approx$ symbol means equality on-shell, i.e. modulo the equations of motion (eom), and $\omega_{IJ}$ is the symplectic matrix.] Equivalently, for an arbitrary quantity $Q=Q(q,p,t)$ we may collectively write Hamilton's eoms (4) as $$ \frac{dQ}{dt}~\approx~ \{Q,H\}+\frac{\partial Q}{\partial t}.\tag{5}$$ Returning to OP's question, Noether's theorem may then be applied to the Hamiltonian action (1) to investigate symmetries and conservation laws. Statement 1: "A symmetry is generated by its own Noether charge."
Sketched proof: Let there be given an infinitesimal (vertical) transformation $$ \delta z^I~=~ \epsilon Y^I(q,p,t), \qquad I~\in~\{1, \ldots, 2n\}, \qquad \delta t~=~0,\tag{6}$$ where $Y^I=Y^I(q,p,t)$ are (vertical) generators, and $\epsilon$ is an infinitesimal parameter. Let the transformation (6) be a quasisymmetry of the Hamiltonian Lagrangian $$ \delta L_H~=~\epsilon \frac{d f^0}{dt},\tag{7}$$ where $f^0=f^0(q,p,t)$ is some function. By definition, the bare Noether charge is $$ Q^0~:=~ \sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} Y^I \tag{8}$$ while the full Noether charge is $$ Q~:=~Q^0-f^0. \tag{9} $$ Noether's theorem then guarantees an off-shell Noether identity $$\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I} +\frac{\partial Q}{\partial t} ~=~ \frac{dQ}{dt} ~\stackrel{\text{NI}}{=}~ -\sum_{I=1}^{2n} \frac{\delta S_H}{\delta z^I}Y^I ~\stackrel{(4)}{=}~\sum_{I,J=1}^{2n}\dot{z}^I\omega_{IJ}Y^J + \sum_{I=1}^{2n} \frac{\partial H}{\partial z^I}Y^I . \tag{10}$$ By comparing coefficient functions of $\dot{z}^I$ on the 2 sides of eq. (10), we conclude that the full Noether charge $Q$ generates the quasisymmetry transformation $$ Y^I~=~\{z^I,Q\}.\tag{11}$$ $\Box$ Statement 2: "A generator of symmetry is essentially a constant of motion." Sketched proof: Let there be given a quantity $Q=Q(q,p,t)$ (a priori not necessarily the Noether charge) such that the infinitesimal transformation $$ \delta z^I~=~ \{z^I,Q\}\epsilon,\qquad I~\in~\{1, \ldots, 2n\}, \qquad \delta t~=~0,$$ $$ \delta q^i~=~\frac{\partial Q}{\partial p_i}\epsilon, \qquad \delta p_i~=~ -\frac{\partial Q}{\partial q^i}\epsilon, \qquad i~\in~\{1, \ldots, n\},\tag{12}$$ generated by $Q$, and with infinitesimal parameter $\epsilon$, is a quasisymmetry (7) of the Hamiltonian Lagrangian. 
The bare Noether charge is by definition $$ Q^0~:=~ \sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} \{z^I,Q\} ~\stackrel{(2)}{=}~ \sum_{i=1}^n p_i \frac{\partial Q}{\partial p_i}.\tag{13}$$ Noether's theorem then guarantees an off-shell Noether identity $$ \frac{d (Q^0-f^0)}{dt} ~\stackrel{\text{NI}}{=}~-\sum_{I=1}^{2n}\frac{\delta S_H}{\delta z^I} \{z^I,Q\} $$ $$~\stackrel{(2)}{=}~ \sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I} +\{H,Q\} ~=~\frac{dQ}{dt}-\frac{\partial Q}{\partial t} +\{H,Q\}. \tag{14}$$ Firstly, Noether's theorem implies that the corresponding full Noether charge $Q^0-f^0$ is conserved on-shell $$ \frac{d(Q^0-f^0)}{dt}~\approx~0,\tag{15}$$ which can also be directly inferred from eqs. (5) and (14). Secondly, the off-shell Noether identity (14) can be rewritten as $$ \{Q,H\}+\frac{\partial Q}{\partial t} ~\stackrel{(14)+(17)}{=}~\frac{dg^0}{dt}~=~\sum_{I=1}^{2n}\dot{z}^I \frac{\partial g^0}{\partial z^I}+\frac{\partial g^0}{\partial t},\tag{16} $$ where we have defined the quantity $$ g^0~:=~Q+f^0-Q^0.\tag{17}$$ We conclude from the off-shell identity (16) that (i) $g^0=g^0(t)$ is a function of time only, $$ \frac{\partial g^0}{\partial z^I}~=~0\tag{18}$$ [because $\dot{z}$ does not appear on the lhs. of eq. (16)]; and (ii) that the following off-shell identity holds $$ \{Q,H\} +\frac{\partial Q}{\partial t} ~=~\frac{\partial g^0}{\partial t}.\tag{19}$$ Note that the quasisymmetry and the eqs. (12)-(15) are invariant if we redefine the generator $$ Q ~~\longrightarrow~~ \tilde{Q}~:=~Q-g^0 .\tag{20} $$ Then the new $\tilde{g}^0=0$ vanishes. Dropping the tilde from the notation, the off-shell identity (19) simplifies to $$ \{Q,H\} +\frac{\partial Q}{\partial t}~=~0.\tag{21}$$ Eq. (21) is the defining equation for an off-shell constant of motion $Q$. $\Box$ Statement 3: "A constant of motion generates a symmetry and is its own Noether charge." Sketched proof: Conversely, if there is given a quantity $Q=Q(q,p,t)$ such that eq.
(21) holds off-shell, then the infinitesimal transformation (12) generated by $Q$ is a quasisymmetry of the Hamiltonian Lagrangian $$ \delta L_H ~\stackrel{(2)}{=}~\sum_{i=1}^n\dot{q}^i \delta p_i -\sum_{i=1}^n\dot{p}_i \delta q^i -\delta H +\frac{d}{dt}\sum_{i=1}^np_i \delta q^i \qquad $$ $$~\stackrel{(12)+(13)}{=}~ -\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I}\epsilon -\{H,Q\}\epsilon + \epsilon \frac{d Q^0}{dt}$$ $$~\stackrel{(21)}{=}~ \epsilon \frac{d (Q^0-Q)}{dt} ~\stackrel{(23)}{=}~ \epsilon \frac{d f^0}{dt},\tag{22}$$ because $\delta L_H$ is a total time derivative. Here we have defined $$ f^0~=~ Q^0-Q .\tag{23}$$ The corresponding full Noether charge $$ Q^0-f^0~\stackrel{(23)}{=}~Q \tag{24}$$ is just the generator $Q$ we started with! Finally, Noether's theorem states that the full Noether charge is conserved on-shell $$ \frac{dQ}{dt}~\approx~0.\tag{25}$$ Eq. (25) is the defining equation for an on-shell constant of motion $Q$. $\Box$ Discussion. Note that it is overkill to use Noether's theorem to deduce eq. (25) from eq. (21). In fact, eq. (25) follows directly from the starting assumption (21) by use of Hamilton's eoms (5) without the use of Noether's theorem! For the above reasons, as purists, we disapprove of the common praxis to refer to the implication (21)$\Rightarrow$(25) as a 'Hamiltonian version of Noether's theorem'. Interestingly, an inverse Noether's theorem works for the Hamiltonian action (1), i.e. a on-shell conservation law (25) leads to an off-shell quasisymmetry (12) of the action (1), cf. e.g. my Phys.SE answer here. In fact, one may show that (21)$\Leftrightarrow$(25), cf. my Phys.SE answer here. 
Example 4: The Kepler problem. The symmetries associated with conservation of the Laplace-Runge-Lenz vector in the Kepler problem are difficult to understand via a purely Lagrangian formulation in configuration space $$ L~=~ \frac{m}{2}\dot{q}^2 + \frac{k}{q}.\tag{26}$$

If your Hamiltonian is invariant, that means there should be a vanishing Poisson bracket for some function $F(q,p)$ of your canonical coordinates, so that $$\{ H(q,p), F(q,p)\} = 0.$$ Since the Poisson bracket with the Hamiltonian also gives the time derivative, you automatically have your conservation law. One thing to note: the Lagrangian is a function of position and velocity, whereas the Hamiltonian is a function of position and momentum. Thus, your $T$ and $V$ in $L = T - V$ and $H = T + V$ are not the same functions.
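As a numerical sanity check of the statements above (my own sketch in assumed units $m = k = 1$, not part of the original derivation): for the planar Kepler Hamiltonian $H = \tfrac12|p|^2 - 1/|q|$, the rotation generator $Q = q_x p_y - q_y p_x$ (angular momentum) satisfies $\{Q,H\} = 0$, so eq. (25) predicts it stays constant along the Hamiltonian flow:

```python
# Planar Kepler problem, H = |p|^2/2 - 1/|q| (units m = k = 1).
# Symplectic-Euler integration of Hamilton's equations; the rotation
# generator Q = q_x p_y - q_y p_x should remain constant along the flow.
def step(q, p, dt):
    r3 = (q[0] ** 2 + q[1] ** 2) ** 1.5
    p = (p[0] - dt * q[0] / r3, p[1] - dt * q[1] / r3)  # p_dot = -dH/dq
    q = (q[0] + dt * p[0], q[1] + dt * p[1])            # q_dot = +dH/dp
    return q, p

q, p = (1.0, 0.0), (0.0, 1.1)          # a bound, non-circular orbit
Q0 = q[0] * p[1] - q[1] * p[0]         # initial Noether charge
for _ in range(20000):                  # 20 time units
    q, p = step(q, p, 1e-3)
Q1 = q[0] * p[1] - q[1] * p[0]
assert abs(Q1 - Q0) < 1e-9             # the Noether charge is conserved
```

In fact, since the force term is central ($\Delta p \parallel q$) and the drift term is along $p$, both half-steps of this integrator preserve $q \times p$ exactly, so the check passes to floating-point accuracy.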
At least our ETA is better than M$. http://xkcd.com/612/ Summary: optimizing an algorithm using the Treap data structure and CRC32 properties. After reverse-engineering the binary, we can write the following pseudocode in Python:

from binascii import crc32

def lcg_step():
    global lcg
    lcg = (0x5851F42D4C957F2D * lcg + 0x14057B7EF767814F) % 2**64
    return lcg

def extract(val):
    # magic-constant division mapping the PRNG output to a printable symbol
    res = 32 + val - 95 * ((((val - (0x58ED2308158ED231 * val >> 64)) >> 1)
                            + (0x58ED2308158ED231 * val >> 64)) >> 6)
    return chr(res & 0xff)

buf = []
lcg = 8323979853562951413
crc = 0
for i in range(31415926):
    # append a symbol
    c = extract(lcg_step())
    buf.append(c)
    # reverse an interval
    x = lcg_step() % len(buf)
    y = lcg_step() % len(buf)
    l, r = min(x, y), max(x, y)
    buf[l:r+1] = buf[l:r+1][::-1]
    # update crc
    crc = crc32("".join(buf).encode(), crc) % 2**32

array = [...]  # from binary
flag = ""
for i in range(len(array)):
    if buf[array[i]] == "}":
        flag += "%08x" % crc
    flag += buf[array[i]]

The binary generates a large array using an LCG PRNG, reverses subarrays chosen by the PRNG, and updates the CRC of the whole state after each iteration. There are 31 million iterations in total, and straightforwardly reversing subarrays and computing CRC32 takes quadratic time, so this is infeasible. We have to come up with a better algorithm. One of the data structures which can quickly reverse intervals is the Treap, also known as a randomized binary search tree. Since it is basically a binary search tree, it can easily be modified to maintain and update various sums on intervals. Since CRC32 is not a simple sum, it requires some special care. I took the basic implementation of the Treap from here (the code at the end of the article). A good thing about the Treap is that it allows us to quickly “extract” a node which corresponds to any interval of the array. In conjunction with lazy propagation, it allows us to do cool things.
For example, to reverse an interval we “extract” the node corresponding to that interval and set a “rev” flag. If we later visit this node for some reason, the reversal is “pushed down”: the node's two children are swapped and each child's “rev” flag is flipped. In this lazy way we do a logarithmic number of operations per reversal on average. The main problem here is to update the CRC32 state of the whole array after each reversal. We need to teach our Treap to compute CRC32 even after performing reversals. Note that the CRC32 value is added to the flag only at the end; thus, using this basic Treap, we can already compute the final string and extract the first part of the flag in a few minutes: hitcon{super fast reversing and CRC32 – [FINAL CRC HERE]} Unfortunately, we HAVE to compute the CRC to get the points. Let’s do it! About CRC32 The CRC32 without the initial and the final xors with 0xffffffff (let’s call it $rawCRC32$) is simply multiplication modulo an irreducible polynomial over $GF(2)$: $$rawCRC32(m) = m \times x^{32} \mod P(x),$$ where $$\begin{split} P(X) & = & X^{32}+X^{26}+X^{23}+X^{22}+X^{16}+X^{12}+X^{11}+ \\ & + & X^{10}+X^8+X^7+X^5+X^4+X^2+X+1. \end{split}$$ Such polynomials are nicely stored in 32-bit words. For example, $P(x) = \mathtt{0xEDB88320}$ (the most significant bits are the lowest-degree terms). Multiplications are done in a way similar to fast exponentiation (see Finite field arithmetic). A good thing is that $rawCRC32(m)$ is linear: $$rawCRC32(a \oplus b) = rawCRC32(a) \oplus rawCRC32(b).$$ Shifting the message left by one bit is equivalent to multiplying it by $x$. Therefore, for concatenation we get: $$rawCRC32(a || b) = rawCRC32(a) \times x^{|b|} \oplus rawCRC32(b).$$ Using this formula allows us to combine the CRC values of two large strings quite quickly. Computing $x^{|b|}$ can be done using fast exponentiation or simply precomputed.
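The concatenation formula can be checked directly in a few lines. This is a self-contained sketch mirroring the conventions used in the solution (the high bit of a 32-bit word represents the constant term $x^0$), with a bitwise raw CRC32 written from scratch:

```python
# Verify rawCRC32(a||b) = rawCRC32(a) * x^(8|b|) xor rawCRC32(b).
POLY = 0xEDB88320   # reflected CRC-32 polynomial; bit 31 represents x^0
HI, LO = 1 << 31, 1

def raw_crc32(data, crc=0):
    """Bitwise raw CRC-32: no initial or final xor with 0xffffffff."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ POLY if crc & LO else crc >> 1
    return crc

def poly_mul(a, b):
    """Carry-less multiplication of a and b modulo P(x)."""
    p = 0
    while b:
        if b & HI:
            p ^= a
        b = (b << 1) & 0xFFFFFFFF
        a = (a >> 1) ^ POLY if a & LO else a >> 1
    return p

def x_pow_bytes(n):
    """x^(8n) mod P(x): the effect of shifting a message left by n bytes."""
    ONEBYTE = HI >> 8   # represents x^8
    r = HI              # represents 1
    for _ in range(n):
        r = poly_mul(r, ONEBYTE)
    return r

a, b = b"super fast", b" reversing"
combined = poly_mul(raw_crc32(a), x_pow_bytes(len(b))) ^ raw_crc32(b)
assert combined == raw_crc32(a + b)
```

The identity holds because the CRC shift register is linear: processing $|b|$ more bytes multiplies the existing state by $x^{8|b|}$, while the new bytes contribute $rawCRC32(b)$ on their own.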
Adding CRC32 into the Treap Let’s store in each tree node the $rawCRC32$ of the corresponding segment and, additionally, the $rawCRC32$ of the reversed segment. Then, depending on the “rev” flag, we may retrieve one value or the other. When we “push” the lazy reversal down, we simply swap the two values. The main part is then computing the two $rawCRC32$ values of a node from the values of its child nodes. This is quite easy to code using the concatenation formula given before. The formula is also useful when merging the CRCs of the consecutive states. Here is the CRC-related code:

uint32_t POLY = 0xedb88320L;
uint32_t HI = 1u << 31;
uint32_t LO = 1;
uint32_t ONEBYTE = (1u << 31) >> 8;
uint32_t BYTE_CRC[256];
uint32_t SHIFT_BYTES[100 * 1000 * 1000];

inline uint32_t poly_mul(uint32_t a, uint32_t b) {
    uint32_t p = 0;
    while (b) {
        if (b & HI) p ^= a;
        b <<= 1;
        if (a & LO) a = (a >> 1) ^ POLY;
        else a >>= 1;
    }
    return p;
}

void precompute() {
    SHIFT_BYTES[0] = HI; // 1
    FOR(i, 1, 100 * 1000 * 1000) {
        SHIFT_BYTES[i] = poly_mul(SHIFT_BYTES[i-1], ONEBYTE);
    }
    FORN(c, 256) {
        BYTE_CRC[c] = poly_mul(c, ONEBYTE);
    }
}

inline uint32_t lift(uint32_t crc, LL num) {
    return poly_mul(crc, SHIFT_BYTES[num]);
}

And here is the modification of the Treap related to the CRC:

inline uint32_t crc1(pitem it) {
    if (!it) return 0;
    if (it->rev) return it->crc_backward;
    return it->crc_forward;
}

inline uint32_t crc2(pitem it) {
    if (!it) return 0;
    if (it->rev) return it->crc_forward;
    return it->crc_backward;
}

inline void update_up(pitem it) {
    if (it) {
        it->cnt = cnt(it->l) + cnt(it->r) + 1;
        int left_size = cnt(it->l);
        int right_size = cnt(it->r);
        uint32_t cl, cr, cmid;
        cmid = BYTE_CRC[it->value];
        cl = crc1(it->l); cr = crc1(it->r);
        it->crc_forward = lift(cl, right_size + 1) ^ lift(cmid, right_size) ^ cr;
        cl = crc2(it->l); cr = crc2(it->r);
        it->crc_backward = cl ^ lift(cmid, left_size) ^ lift(cr, left_size + 1);
    }
}

inline void push(pitem it) {
    if (it && it->rev) {
        swap(it->crc_forward, it->crc_backward);
        it->rev = false;
        swap(it->l, it->r);
        if (it->l) it->l->rev ^= true;
        if (it->r) it->r->rev ^= true;
    }
}

The full solution is available here. This code runs for ~40 minutes on my laptop and produces the final CRC: d72a4529. The flag then is hitcon{super fast reversing and CRC32 – d72a4529}.
2. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models (cited 7 times; 0 self-citations). YUE Li & CHEN Xiru, School of Mathematics and Statistics, Wuhan University, Wuhan, China; Graduate School, Chinese Academy of Sciences, Beijing, China. Science in China Series A (English edition), 2004, 47(6): 882-893. Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification and some other smooth conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size $n$ is sufficiently large. The rate of this solution tending to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved anymore.

4. Estimation in the polynomial errors-in-variables model (cited 3 times; 0 self-citations). Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

5. ASYMPTOTIC NORMALITY OF PARAMETERS ESTIMATION IN EV MODEL WITH REPLICATED OBSERVATIONS (cited 2 times; 0 self-citations). Formally, the EV (errors-in-variables) model is just the regression model in which both the dependent and the independent variables are subject to error (see, for example, [2]-[7] and the literature cited there). Considering that in many practical applications taking replicated observations presents no essential difficulties, this case offers a convenient way to avoid artificial assumptions about the model. It was studied in essay [1], in which estimators of a …

6. Consistency of the LS estimate of the simple linear EV model is studied. It is shown that under some common assumptions on the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.

7. ON ASYMPTOTIC NORMALITY OF PARAMETERS IN LINEAR EV MODEL (cited 2 times; 0 self-citations). This paper studies the parameter estimation of one-dimensional linear errors-in-variables (EV) models in the case that replicated observations are available at some experimental points. Asymptotic normality is established under mild conditions, and the parameters entering the asymptotic variance are consistently estimated to render the result usable in the construction of large-sample confidence regions.

8. Consider the generalized linear regression model described in Section 1 of the paper. Let $\underline{\lambda}_n$ and $\overline{\lambda}_n$ denote the smallest and largest eigenvalues of $\sum_{i=1}^{n}Z_iZ_i^{\prime}$, and let $\hat{\beta}_n$ be the maximum likelihood estimate of $\beta_0$. In [1], for bounded $\{Z_i, i\ge 1\}$, sufficient conditions for the strong consistency of $\hat{\beta}_n$ were obtained; under the natural link and a non-natural link these are, respectively, $\underline{\lambda}_n\rightarrow\infty$ with $(\overline{\lambda}_n)^{1/2+\delta}=O(\underline{\lambda}_n)$ (for some $\delta>0$), and $\underline{\lambda}_n\rightarrow\infty$ with $\overline{\lambda}_n=O(\underline{\lambda}_n)$. The author improves the latter result so that only $(\overline{\lambda}_n)^{1/2+\delta}=O(\underline{\lambda}_n)$ is required, bringing it in line with the condition for the natural link case.
I've tried to detail my question using the image shown in this post. Consider an ellipse with 5 parameters $(x_C, y_C, a, b, \psi)$, where $(x_C, y_C)$ is the center of the ellipse, $a$ and $b$ are the semi-major and semi-minor radii, and $\psi$ is the orientation of the ellipse. Now consider a point $(x,y)$ on the circumference of the ellipse. The normal at this point intersects the major axis at a point $(x_D, y_D)$. This normal makes an angle $\phi$ with the major axis. However, the angle subtended by this point at the center of the ellipse is $\theta$. For a circle, $\theta = \phi$ for all points on its circumference, because the normal at any point of the circle lies along the radius through that point. Is there a relationship between the angles $\theta$ and $\phi$ for an ellipse? For some context, I am trying to "extract" points from the circumference of an ellipse given its parameters $(x_C, y_C, a, b, \psi)$. For such an ellipse, I start from $(x_C, y_C)$ with angle $\theta = 0^\circ$ and sweep until $360^\circ$. Using the equation $\left[\begin{array}{c} x \\ y\end{array}\right] = \left[\begin{array}{c} x_C \\ y_C\end{array}\right] + \left[\begin{array}{c c} \cos(\psi) & -\sin(\psi) \\ \sin(\psi) & \cos(\psi) \end{array}\right] \left[\begin{array}{c} a\cos(\theta) \\ b\sin(\theta) \end{array}\right]$, I get the $(x,y)$ location of a point that is supposed to be on the ellipse circumference. I then look up this location in a list of "edge" points. Along with this list of edge points, I also have gradient-angle information for each edge point; this corresponds to the angle $\phi$. Here is the crux of the question: for a circle, I am confident that an edge point lies on the circumference if $|\theta - \phi| < \text{threshold}$. But for an ellipse, how do I get a similar relationship?
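For an axis-aligned ellipse centred at the origin, the gradient of the implicit equation $x^2/a^2 + y^2/b^2 = 1$ gives the normal direction $(x/a^2,\, y/b^2)$, which leads to the candidate relationship $\tan\phi = \frac{a^2}{b^2}\tan\theta$ between the central angle $\theta$ and the normal angle $\phi$. A minimal numerical check of this (my own sketch, not from the post; the rotated, translated case follows by subtracting $(x_C, y_C)$ and rotating by $-\psi$ first):

```python
import math

# Unrotated ellipse x^2/a^2 + y^2/b^2 = 1 centred at the origin.
# Claim to verify: tan(phi) = (a^2/b^2) * tan(theta), where theta is the
# central angle of a point and phi is the angle of the outward normal.
a, b = 5.0, 3.0
for t in (0.3, 0.9, 1.2):                     # parametric angles to sample
    x, y = a * math.cos(t), b * math.sin(t)   # point on the ellipse
    theta = math.atan2(y, x)                  # angle subtended at the centre
    phi = math.atan2(y / b**2, x / a**2)      # direction of the gradient/normal
    assert abs(math.tan(phi) - (a**2 / b**2) * math.tan(theta)) < 1e-9
```

So a circle-style test $|\theta - \phi| < \text{threshold}$ could plausibly be replaced by $|\tan\phi - \frac{a^2}{b^2}\tan\theta| < \text{threshold}$, with care near $\theta = \pm 90^\circ$ where the tangents blow up.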
Here, we have $F(k,n)$ defined as the set of ordered $k$-tuples of linearly independent vectors in $\mathbb{R}^n$. To start, let $X \in F(k,n)$. We can express $X$ as $$X = \begin{bmatrix} x_1^1 & \cdots & x_1^n \\ \vdots & \vdots & \vdots \\ x_k^1 & \cdots & x_k^n \end{bmatrix}$$ where this is a matrix of rank $k$. Having rank $k$ implies that not all $k \times k$ minors are equal to zero. Define a function: $$f(X) = \sum(k\times k \text{ minors of } X)^2$$ This is a continuous function from $\mathbb{R}^{k\cdot n} \to \mathbb{R}$. Using the definition of continuity (the preimage of a closed set under a continuous map is closed), we get that $f^{-1}(\{0\})$ is a closed subset of $\mathbb{R}^{n\cdot k}$. My question is: why does $f^{-1}(\{0\})$ being closed imply that $F(k,n)$ is an open subset of $\mathbb{R}^{k\cdot n}$?
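To make the rank criterion concrete, here is a small illustration of my own (not from the post) computing $f$ directly from its definition for $2\times 3$ matrices: $f$ is positive exactly when the rows are linearly independent, and zero when they are dependent.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for tiny k x k minors
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def f(X):
    # sum of squared k x k minors of a k x n matrix X (k <= n)
    k, n = len(X), len(X[0])
    return sum(det([[X[i][c] for c in cols] for i in range(k)]) ** 2
               for cols in combinations(range(n), k))

assert f([[1, 0, 0], [0, 1, 0]]) > 0   # independent rows: some minor is nonzero
assert f([[1, 2, 3], [2, 4, 6]]) == 0  # dependent rows: every 2x2 minor vanishes
```

This matches the setup of the question: $F(k,n) = f^{-1}\big((0,\infty)\big)$ is exactly the complement of the closed set $f^{-1}(\{0\})$.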
Coconvex Pointwise Approximation

Abstract: Assume that a function $f \in C[-1, 1]$ changes its convexity at a finite collection $Y := \{y_1, \ldots, y_s\}$ of $s$ points $y_i \in (-1, 1)$. For each $n > N(Y)$, we construct an algebraic polynomial $P_n$ of degree $\leq n$ that is coconvex with $f$, i.e., it changes its convexity at the same points $y_i$ as $f$, and $$\left| f(x) - P_n(x) \right| \leqslant c\,\omega_2\!\left(f, \frac{\sqrt{1 - x^2}}{n}\right), \qquad x \in [-1, 1],$$ where $c$ is an absolute constant and $\omega_2(f, t)$ is the second modulus of smoothness of $f$; if $s = 1$, then $N(Y) = 1$. We also give some counterexamples showing that this estimate cannot be extended to the case of higher smoothness.

English version (Springer): Ukrainian Mathematical Journal 54 (2002), no. 9, pp. 1445-1461.

Citation Example: Dzyubenko H. A., Gilewicz J., Shevchuk I. A. Coconvex Pointwise Approximation // Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1200-1212.
Bounded state solutions of Kirchhoff type problems with a critical exponent in high dimension

School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, Guangdong, China

$$\begin{cases}-\Big(a+\lambda\displaystyle\int_{\mathbb{R}^N} |\nabla u|^2\,dx\Big)\Delta u+V(x)u = |u|^{2^*-2}u &{\rm in}\ \mathbb{R}^N,\\ u\in D^{1,2}(\mathbb{R}^N),\end{cases}$$

(The running abstract text was lost in extraction; it concerns the constants $a$ and $\lambda$, a potential $V\in L^{\frac{N}{2}}(\mathbb{R}^N)$, the critical Sobolev exponent $2^*$, and the dimensions $N\geq 4$ and $N\geq 5$.)

Mathematics Subject Classification: Primary: 35J60; Secondary: 47J30, 35J20.

Citation: Qilin Xie, Jianshe Yu. Bounded state solutions of Kirchhoff type problems with a critical exponent in high dimension. Communications on Pure & Applied Analysis, 2019, 18 (1): 129-158. doi: 10.3934/cpaa.2019008
Positive ground state solutions for fractional Laplacian system with one critical exponent and one subcritical exponent. [12] Shu-Zhi Song, Shang-Jie Chen, Chun-Lei Tang. Existence of solutions for Kirchhoff type problems with resonance at higher eigenvalues. [13] Jiafeng Liao, Peng Zhang, Jiu Liu, Chunlei Tang. Existence and multiplicity of positive solutions for a class of Kirchhoff type problems at resonance. [14] Guangze Gu, Xianhua Tang, Youpei Zhang. Ground states for asymptotically periodic fractional Kirchhoff equation with critical Sobolev exponent. [15] [16] Claudianor Oliveira Alves, M. A.S. Souto. On existence and concentration behavior of ground state solutions for a class of problems with critical growth. [17] Jing Zhang, Shiwang Ma. Positive solutions of perturbed elliptic problems involving Hardy potential and critical Sobolev exponent. [18] Davide Guidetti. Convergence to a stationary state of solutions to inverse problems of parabolic type. [19] Jia-Feng Liao, Yang Pu, Xiao-Feng Ke, Chun-Lei Tang. Multiple positive solutions for Kirchhoff type problems involving concave-convex nonlinearities. [20] Natalia O. Babych, Ilia V. Kamotski, Valery P. Smyshlyaev. Homogenization of spectral problems in bounded domains with doubly high contrasts. 2018 Impact Factor: 0.925 Tools Metrics Other articles by authors [Back to Top]
Suppose we know that $n=p_a p_b$ where $a<b$ and $n$ are known integers and $p_a$ is the $a$-th prime (for example $n = p_{102} \cdot p_{2034}$). Then the time $t(n)$ to factor $n$ is greater than or equal to $T(a)$, the time to find the $a$-th prime (edit by comment from Erick Wong:) given $n = p_a \cdot p_b$ and $a,b$ in the decimal system. So what is the time to find the $a$-th prime number given $n = p_a p_b$ and $a,b$ in the decimal system? This question is related to the number system http://oeis.org/A054841 where in the given representation of a number one "knows" its prime factors. One way to compute $p_a$ given $n$, $a$, $b$ is to factor $n$. If $a$ and $b$ are extremely close together (like $b=a+1$) then the fastest method may well be a simple Fermat factorization. If $a$ and $b$ are roughly of the same size, then this would fall under a general-purpose algorithm like the Number Field Sieve, which takes $O(\exp((\log n)^{1/3 + \epsilon}))$ time. However, if $b$ is substantially larger than $a$ (say, on the order of $a^2$, perhaps) then $n$ will also be substantially longer than $a$, and it may be more efficient to use something like Lenstra's ECM factorization, which is faster for small $a$. At some point (when $b$ is much larger than $a$, maybe exponentially larger) there would be a crossover where it makes sense to ignore $n$ and just focus on computing the $a$-th prime. Computing the $a$-th prime is essentially the same as computing $\pi(x)$, the prime-counting function: given either function, you can use binary search to compute the other, so the running times are at most an $O(\log a)$ factor apart. Algorithms for computing the prime-counting function $\pi(x)$ are pretty well-studied.
In practice this is done by modern forms of the Meissel-Lehmer algorithm, which runs in time $O(x^{2/3 + \epsilon})$ and which, at least for larger $x$, is faster than computing all the primes up to $x$, but notably slower than factoring methods for numbers of similar size. See Deleglise and Rivat's paper for more details. In theory it is possible to compute $\pi(x)$ using high-precision complex arithmetic in time $O(x^{1/2 + \epsilon})$, but this may not be very practical. You might find some informative discussion of this method in William Galway's thesis.
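The binary-search equivalence between the $a$-th prime and $\pi(x)$ is easy to make concrete. Here is a minimal Python sketch, with a naive sieve standing in for a Meissel-Lehmer-style $\pi(x)$ (the sieve is for illustration only and is far slower than the algorithms discussed above):

```python
import math

def prime_pi(x):
    # Naive sieve-based prime-counting function pi(x); a real implementation
    # would use a Meissel-Lehmer-style algorithm instead.
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, x + 1, p)))
    return sum(sieve)

def nth_prime(a):
    # Binary search for the smallest x with pi(x) >= a, which is p_a.
    # Upper bound from p_a < a*(ln a + ln ln a), valid for a >= 6.
    hi = 15 if a < 6 else int(a * (math.log(a) + math.log(math.log(a)))) + 1
    lo = 2
    while lo < hi:
        mid = (lo + hi) // 2
        if prime_pi(mid) >= a:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Each binary-search step calls $\pi(x)$ once, and there are $O(\log a)$ steps, which is exactly the logarithmic overhead mentioned above.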
I am trying to optimize the hyperparameters for a Gaussian process. I am using a squared exponential kernel, where I am optimizing three parameters. $$k_y(x_p,x_q) = \sigma^2_f \exp\left(-\frac{1}{2l^2}(x_p-x_q)^2\right) + \sigma^2_n\delta_{pq}$$ As described by Rasmussen, I am maximizing the log marginal likelihood using its derivatives, but for some combinations of initial values of the hyperparameters I occasionally get negative values of $\sigma_f$ and $\sigma_n$. Are the negative values acceptable? People often optimize the parameters in the log domain to avoid negative values of the hyperparameters, but I do not understand how that is done.
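The usual log-domain trick can be sketched as follows (a minimal illustration, not Rasmussen's reference implementation): optimize $\theta = (\log \sigma_f^2, \log l, \log \sigma_n^2)$ as unconstrained real numbers and exponentiate inside the objective, so the optimizer can never produce a negative variance or length scale:

```python
import numpy as np

def neg_log_marginal_likelihood(theta, X, y):
    # theta = (log sf2, log ell, log sn2): unconstrained real numbers.
    # Exponentiating guarantees sf2, ell, sn2 > 0 for ANY theta the
    # optimizer tries, which is the whole point of the log domain.
    sf2, ell, sn2 = np.exp(theta)
    d = X[:, None] - X[None, :]                    # pairwise differences (1-D inputs)
    K = sf2 * np.exp(-0.5 * (d / ell)**2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                      # K is positive definite by construction
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # 0.5 y^T K^-1 y + 0.5 log|K| + (n/2) log(2 pi)
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))
            + 0.5 * len(X) * np.log(2 * np.pi))
```

Any unconstrained optimizer (e.g. scipy.optimize.minimize) can now search over theta freely; the recovered hyperparameters np.exp(theta) are positive by construction even when the raw theta values go negative.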
This is admittedly a rather naively put and vague question, and I'm not sure how much more specific I want or can make it, but I'll try. By "practice" I mean surely in actual programming practice (of which I embarrassingly don't know much), but also in mathematical practice, whenever certain higher-type objects are employed as examples or counterexamples within arguments. By the "height" of a type, I don't mean to include the obvious and natural arbitrariness involved in objects like, say, the fixpoint functional (in fact, in the spirit of the question, I would prefer to understand this as a type $2$ object "up to parameter types", so to speak). A better example of what I mean would be the well-known (from mathematical practice, at least) fan functional, of type $((\mathbb{N} \to \mathbb{B}) \to \mathbb{N}) \to \mathbb{N}$, given by$$\lambda f. \mu m. \forall_{\alpha, \beta}\left(\forall_{n < m} \alpha(n) = \beta(n) \to f(\alpha) = f(\beta)\right) \ ,$$where $f : (\mathbb{N} \to \mathbb{B}) \to \mathbb{N}$ and $\alpha, \beta : \mathbb{N} \to \mathbb{B}$. My questions: Are there any objects of yet higher type than $3$ (possibly "up to parameter types") that are naturally used in the literature? In practice?
There is no way to measure $\rho$ or any function of its matrix elements such as $\gamma$ by a single measurement (in one repetition of the situation) because $\rho$ is a quantum counterpart of the probability distribution. It describes the observer's knowledge of the properties of the physical system. And the knowledge is intrinsically subjective, not objective, so it can't be measured. The same comments apply to the wave function $\psi$. The wave function – and similarly the density matrix (which is the tensor square of the wave function or a linear combination of such squares) – isn't an observable so it isn't observable (without "a"), either. At most, one can make measurements and see that they statistically agree or statistically disagree with a given particular form of the density matrix. But without knowing a good candidate form of the density matrix, one can't measure its properties in isolation. According to basic postulates of QM, everything that can be measured may be interpreted as an observable and each observable has to be represented by a well-defined particular linear Hermitian operator acting on the Hilbert space. Possible results of measurements are the eigenvalues and the probabilities of different outcomes are calculable by the usual formulae. The density matrix isn't a fixed linear operator, however, so it can't be measured. This rule, that only linear operators are physically meaningful and "measurable", is a totally important and universal axiom of all of quantum physics and is often ignored, like most other facts about quantum mechanics, but it's still totally right and crucial. One may measure all of $\rho$ by doing infinitely many measurements on the repetitions of the same situation, however. This general strategy is referred to as the quantum state tomography. 
Just like all features of a probabilistic distribution, $\rho$ may be fully (increasingly accurately) reconstructed from the frequency counts resulting from such infinitely many measurements. $\gamma$ may be calculated from such a $\rho$ at the end. There exist situations in which "well-behaved" states of some kind are mixed and measurements of $\rho$ or some features of $\rho$ similar to $\gamma$ may be "effectively" approximated by the measurements of some observables, and therefore become doable. For example, one may assume that $\rho$ for a given system is thermal. And one may measure some $\gamma$-like function of $\rho$, the temperature, by the thermometer because this temperature may be transformed to the height of the level of mercury, or something like that. Quantum gravity is full of important examples like that because by the Hawking-Bekenstein insights, the entropy and energy are "equivalent" (for black holes) to geometric quantities such as the surface area of the event horizon and the gravitational acceleration at the event horizon. But whenever some features of $\rho$, especially the $U(\infty)$-invariant features such as $\gamma$ (and the set of eigenvalues of $\rho$), may be measured without repeating the same situation, one needs to be sure that we're only working with some very specific forms of $\rho$, e.g. the thermal ones, described by a small number of parameters. If $\rho$ is a priori completely general, nothing about it may be measured by a single measurement.
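For a single qubit, the tomographic reconstruction described above is especially transparent: the density matrix is fully determined by the three Pauli expectation values via $\rho = \tfrac12(I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)$. A minimal numerical sketch, assuming the expectation values have already been estimated from frequency counts over many repetitions:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct_rho(ex, ey, ez):
    # Single-qubit state tomography: rho from the measured Pauli
    # expectation values <X>, <Y>, <Z>, each obtained only in the
    # limit of many repeated measurements, never from a single shot.
    return 0.5 * (I2 + ex * X + ey * Y + ez * Z)
```

For a system repeatedly prepared in $|0\rangle$, the frequency counts give $\langle Z\rangle \to 1$ and $\langle X\rangle, \langle Y\rangle \to 0$, and the reconstruction converges to $\rho = \mathrm{diag}(1,0)$, consistent with the statement above that $\rho$ is only recoverable from infinitely many repetitions.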
While solving a 2D-collision problem, I obtained the following 3 equations: $$0.866\,|\vec v_{Af}|+|\vec v_{Bf}| \cos \theta_{Bf}=6$$ $$0.5\,|\vec v_{Af}|+|\vec v_{Bf}| \sin \theta_{Bf}=0$$ $$|\vec v_{Af}|^2+|\vec v_{Bf}|^2=36$$ Where $$|\vec v_{Af}|=\text{Magnitude of final velocity of object A}$$ $$|\vec v_{Bf}|=\text{Magnitude of final velocity of object B}$$ $$\theta_{Bf}=\text{Angle made by the final velocity of object B with x-axis}$$ The first two equations are obtained by applying the law of conservation of momentum. The third equation is obtained by applying conservation of kinetic energy for an elastic collision. Though there are 3 equations for 3 unknowns, I am not able to solve them. If the masses are equal, I can use the relation $\theta_{Af}-\theta_{Bf}=90^o$, because $\theta_{Af}$ is given in the problem. But I am confused as to how I would proceed when I get problems with two different masses. Please give me a method to solve the three equations without using the relation ($\theta_{Af}-\theta_{Bf}=90^o$). I could give you the following clue: From your second equation: $$Ax + By \sin \theta = 0 $$ You know: $$ \sin \theta = -\frac{Ax}{By}$$ This implies (form a triangle with the opposite side and hypotenuse given): $$\cos \theta = \frac{\sqrt{B^2y^2 - A^2x^2}}{By}.$$ You can now use this, along with your equation number one, to eliminate $\cos \theta$. Your problem now turns into two equations with two variables.
You can now solve this graphically, for example. equation (1) $$a_1\,x+y\,\cos(\alpha)=c_1\tag 1$$ equation (2) $$a_2\,x+y\,\sin(\alpha)=0\tag 2$$ equation (3) $$x^2+y^2=c_2\tag 3$$ from equation (1) you get: $$y\,\cos(\alpha)=c_1-a_1\,x$$ from equation (2) you get: $$y\,\sin(\alpha)=-a_2\,x$$ squaring and adding these gives $\Rightarrow$ $$y^2=(c_1-a_1\,x)^2+(a_2\,x)^2$$ and substituting into equation (3) yields the quadratic $$\left(1+a_1^2+a_2^2\right)x^2-2\,a_1 c_1\,x+c_1^2-c_2=0$$ $\Rightarrow\quad$ $$x_{1,2}=\frac{a_1 c_1\pm\sqrt{a_1^2 c_1^2-\left(1+a_1^2+a_2^2\right)\left(c_1^2-c_2\right)}}{1+a_1^2+a_2^2}$$
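The elimination above reduces the system to a quadratic in $x=|\vec v_{Af}|$, which is easy to check numerically. A short Python sketch with the numbers from the question ($a_1=0.866$, $a_2=0.5$, $c_1=6$, $c_2=36$):

```python
import math

# Coefficients from the question: a1 = 0.866, a2 = 0.5, c1 = 6, c2 = 36.
a1, a2, c1, c2 = 0.866, 0.5, 6.0, 36.0

# Substituting y*cos = c1 - a1*x and y*sin = -a2*x into x^2 + y^2 = c2
# gives the quadratic (1 + a1^2 + a2^2) x^2 - 2 a1 c1 x + (c1^2 - c2) = 0.
A = 1 + a1**2 + a2**2
B = -2 * a1 * c1
C = c1**2 - c2
disc = math.sqrt(B * B - 4 * A * C)
roots = [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]

vA = max(roots)                    # discard the trivial root x = 0
vB = math.sqrt(c2 - vA**2)         # from equation (3)
theta = math.degrees(math.asin(-a2 * vA / vB))   # from equation (2)
```

This gives $|\vec v_{Af}| \approx 5.196$, $|\vec v_{Bf}| \approx 3$ and $\theta_{Bf} \approx -60^\circ$, consistent with the equal-mass relation $\theta_{Af}-\theta_{Bf}=90^\circ$ mentioned in the question; with unequal masses the same elimination works, only the coefficients change.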
It's a side effect of the unreasonable effectiveness of mathematics. You are in good company thinking it is a little strange. Many quantities in physics can be related to each other by a few lines of algebra. These tend to be the models that we think of as "pretty." Terms manipulated by pure algebra tend to pick up integer factors, or factors that are integers raised to integer powers; if only a few algebraic manipulations are involved, the integers and their powers tend to be small ones. Other quantities may be related by a few lines of calculus. From calculus you get the transcendental numbers, which can't be related to the integers by solving an algebraic equation. But there are lots of algebraic transformations you can do to relate one integral to another, and so many of these transcendental numbers can be related to each other by factors of small integers raised to small integer powers. This is why we spend a lot of time talking about $\pi$, $e$, and sometimes Euler's $\gamma$, but don't really have a whole library of irrational constants for people to memorize. Most constants with many significant digits come from unit conversions, and are essentially historical accidents. Carl Witthoft gives the example of $E=mc^2$ having a numerical factor if you want the energy in BTUs. The BTU is the heat that's needed to raise the temperature of a pound of water by one degree Fahrenheit, so in addition to the entirely historical difference between kilograms and pounds and Rankine and Kelvin it's tied up with the heat capacity of water. It's a great unit if you're designing a boiler! But it doesn't have any place in the Einstein equation, because $E\propto mc^2$ is a fact of nature that is much simpler and more fundamental than the rotational and vibrational spectrum of the water molecule.
There are several places where there are real, dimensionless constants of nature that, so far as anyone knows, are not small integers and familiar transcendental numbers raised to small integer powers. The most famous is probably the electromagnetic fine structure constant $\alpha \approx 1/137.036$, defined by the relationship $\alpha \hbar c = e^2/4\pi\epsilon_0$, where this $e$ is the electric charge on a proton. The fine structure constant is the "strength" of electromagnetism, and the fact that $\alpha\ll1$ is a big part of why we can claim to "understand" quantum electrodynamics. "Simple" interactions between two charges, like exchanging one photon, contribute to the energy with a factor of $\alpha$ out front, perhaps multiplied by some ratio of small integers raised to small powers. The interaction of exchanging two photons "at once," which makes a "loop" in the Feynman diagram, contributes to the energy with a factor of $\alpha^2$, as do all the other "one-loop" interactions. Interactions with two "loops" (three photons at once, two photons and a particle-antiparticle fluctuation, etc.) contribute at the scale of $\alpha^3$. Since $\alpha\approx0.01$, each "order" of interactions contributes roughly two more significant digits to whatever quantity you're calculating. It's not until sixth- or seventh-order that there begin to be thousands of topologically-allowed Feynman diagrams, contributing so many hundreds of contributions at the level of $\alpha^{n}$ that it starts to clobber the calculation at $\alpha^{n-1}$. An entry point to the literature. The microscopic theory of the strong force, quantum chromodynamics, is essentially identical to the microscopic theory of electromagnetism, except with eight charged gluons instead of one neutral photon and a different coupling constant $\alpha_s$.
Unfortunately for us, $\alpha_s \approx 1$, so for systems with only light quarks, computing a few "simple" quark-gluon interactions and stopping gives results that are completely unrelated to the strong force that we see. If there is a heavy quark involved, QCD is again perturbative, but not nearly so successfully as electromagnetism. There is no theory which explains why $\alpha$ is small (though there have been efforts), and no theory that explains why $\alpha_s$ is large. It is a mystery. And it will continue to feel like a mystery until some model is developed where $\alpha$ or $\alpha_s$ can be computed in terms of other constants multiplied by transcendental numbers and small integers raised to small powers, at which point it will again be a mystery why mathematics is so effective. A commenter asks Isn't α already expressible in terms of physical constants or did you mean to say mathematical constants like π or e? It's certainly true that $$ \alpha \equiv \frac{e^2}{4\pi\epsilon_0} \frac1{\hbar c} $$defines $\alpha$ in terms of other experimentally measured quantities. However, one of those quantities is not like the other. To my mind, the dimensionless $\alpha$ is the fundamental constant of electromagnetism; the size of the unit of charge and the polarization of the vacuum are related derived quantities. Consider the Coulomb force between two unit charges:$$F = \frac{e^2}{4\pi\epsilon_0}\frac1{r^2} = \alpha\frac{\hbar c}{r^2}$$This is exactly the sort of formulation that badroit was asking about: the force depends on the minimum lump of angular momentum $\hbar$, the characteristic constant of relativity $c$, the distance $r$, and a dimensionless constant for which we have no good explanation.
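The dimensionlessness of $\alpha$ is easy to check numerically: compute it from measured constants and watch every unit cancel. A quick sketch (the numerical values are standard CODATA figures, quoted here as assumptions rather than taken from the text above):

```python
import math

# CODATA values in SI units; e and c are exact by definition since 2019.
e    = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34      # reduced Planck constant, J s
c    = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

# alpha = e^2 / (4 pi eps0 hbar c): all the units cancel,
# leaving a pure number close to 1/137.036.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
```

Changing the unit system changes every input here, but leaves alpha itself untouched, which is exactly why it is the natural candidate for "the" constant of electromagnetism.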
I am trying to write some simple code to perform IK for a 6 DoF redundant robot using the Jacobian pseudo-inverse method. I can solve IK for a desired pose using the iterative method, and I want to now focus on applying constraints to the solution. Specifically I'm interested in: Keep the end effector orientation constant as the robot moves from the initial to the final pose; Avoid obstacles in the workspace. I've read about how the redundancy/null space of the Jacobian can be exploited to cause internal motions that satisfy desired constraints, while still executing the trajectory, but I am having trouble implementing this as an algorithm. For instance, my simple iterative algorithm looks like

error = pose_desired - pose_current;
q_vel = pinv(J)*error;
q = q + q_vel;

where $q$ is 'pushed' towards the right solution, updated until the error is minimized. But for additional constraints, the equation (Siciliano, Bruno, et al. Robotics: Modelling, Planning and Control) specifies $$ \dot{q} = J^\dagger v_e + (I-J^\dagger J)\dot{q}_0 \\ \dot{q}_0 = k_0\left(\frac{\partial w(q)}{\partial q}\right)^T $$ where $w$ is a term that is chosen so as to minimize/maximize a desired secondary objective. I don't understand the real-world algorithmic implementation of this term in the context of my desired constraints: so if I want to keep the orientation of my end effector constant, how can I define the parameters $w$, $\dot{q}_0$, etc.? I can sense that the partial derivative signifies how a change in configuration affects my constraint, and can encourage 'good' choices, but not more than that.
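One minimal numerical sketch of the projected-gradient update (my own illustration, not code from Siciliano et al.): pick a secondary objective $w(q)$, for example distance from assumed joint limits, estimate its gradient numerically, and project $\dot q_0 = k_0 \nabla w$ through $(I - J^\dagger J)$ so the secondary motion cannot disturb the end-effector task:

```python
import numpy as np

def redundancy_update(J, v_e, q, w, k0=1.0, eps=1e-6):
    # One step of qdot = pinv(J) v_e + (I - pinv(J) J) * k0 * grad w(q).
    # The projector (I - pinv(J) J) maps qdot0 into the null space of J,
    # so the secondary term produces zero end-effector velocity.
    Jp = np.linalg.pinv(J)
    grad = np.array([(w(q + eps * d) - w(q - eps * d)) / (2 * eps)
                     for d in np.eye(len(q))])      # central differences
    N = np.eye(len(q)) - Jp @ J
    return Jp @ v_e + N @ (k0 * grad)

# Hypothetical secondary objective: stay near the middle of the joint range.
# q_mid and q_range are assumed joint-limit midpoints/spans, not from the question.
def make_joint_limit_objective(q_mid, q_range):
    return lambda q: -np.mean(((q - q_mid) / q_range)**2) / 2
```

For the constant-orientation requirement specifically, the cleaner route is usually to fold orientation into the primary task (make $v_e$ six-dimensional with zero commanded angular velocity) rather than to encode it in $w$; $w$ is better suited to soft preferences such as joint-limit or obstacle distance.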
I am trying to detect outliers/noise as indicated on the diagram below from sensor data. Can anyone advise how to go about it? I can only do this in Python, so are there libraries in Python that I can leverage? Is there an example that can be given? The sample data I am using has timestamps and values. Also, how can I get the frequency of the noise/outlier occurrence? There are several ways you can do this; however, this is highly specific to the data you are analyzing. I will describe some very general (not domain/data specific) approaches below: One possible approach I can think of that'd deal with this sudden, HF noise is to run a window of desired length through the signal. In every iteration, you'd take several samples for this window (i.e. for t=0 to t=10, then t=1 to t=11 and so on) and measure the rate of change within the window. If the rate of change is below some threshold you defined, you'd draw a line from the last point where the rate of change is below the threshold to the beginning of the next window where the rate of change is below the threshold again. This might or might not work; however, I described the more general and popular ways below: One of the simplest and most naive ways would be to apply a high-order low pass filter that would remove higher frequencies (sudden changes are high frequency peaks in the Fourier spectrum) - this would simply smooth out your signal, but it would also alter it, so it could be no longer useful. Obviously you'd have to set up the cutoff frequency properly to get a satisfactory result. If the HF peaks have some specific, known frequency, you can also apply a band stop filter (or a few of them, pretty much like an audio equalizer!). It's pretty trivial to implement these filters as IIR and a bit more complex if you need FIR. You can find a lot of information about how to implement them on the Internet, and there are most likely ready-made Python scripts that implement them.
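The windowed idea above can be sketched in a few lines of Python. Here is a closely related rolling-median variant (a robust flavor of the same windowed approach; the window size and threshold are assumptions you would tune for your sensor):

```python
import numpy as np

def flag_outliers(values, half_window=5, thresh=3.5):
    # Flag samples whose deviation from a rolling median exceeds
    # `thresh` MAD-based z-scores. The median makes the local baseline
    # robust to the spikes we are trying to detect.
    v = np.asarray(values, dtype=float)
    med = np.array([np.median(v[max(0, i - half_window):i + half_window + 1])
                    for i in range(len(v))])
    resid = v - med
    mad = np.median(np.abs(resid))
    z = 0.6745 * resid / (mad + 1e-12)   # 0.6745 makes MAD comparable to sigma
    return np.abs(z) > thresh
```

np.flatnonzero(flag_outliers(v)) gives the sample indices of the spikes, and the spacing between consecutive flagged indices (multiplied by the sampling period from your timestamps) gives a first estimate of how often the noise occurs.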
Another - and perhaps a better - approach would be to use a Kalman filter, but again, it's not obvious to tune and you'd need some more understanding of your signal and the Kalman filter to implement it successfully. Yet another (more sophisticated) approach is to use sample-based noise removal techniques; however, the signal you presented doesn't seem to have just a single feature that could be processed with such a filter, so you'd have to apply it several times. If you need help understanding/implementing any of the solutions described above, feel free to ask! Regarding the question about how to measure the noise frequency - you can manually select the time range in which the noise occurs and draw an FFT spectrum/histogram of it and find the peak value. That's the most prevalent frequency in your noise. As far as I can see, your signal is very smooth and thus low frequency, so it won't have much influence on this kind of measurement. If you don't feel like doing this in Python, you can always export it to a text format acceptable by Matlab or Scilab (free), and these programs have ready tools for doing this kind of analysis. Regarding your next question, I will try to give you a longer description explaining a general way of transforming filters defined by their transfer function into the discrete domain. If you are not interested in the details, you can skip to the bottom, where I wrote some C code (tested). The transfer function of a first-order low pass filter (with unit gain) is defined as follows: $$K(s) = \frac{1}{1+sT}$$ And its angular cutoff frequency is defined as $$\omega_c = \frac{1}{T}$$ Therefore the cutoff frequency expressed in Hertz becomes $$f_c=\frac{1}{2 \pi T}$$ $$T=\frac{1}{2 \pi f_c}$$ Whatever goes above this frequency will be attenuated, with a roll-off of 20 dB/decade for the first-order low pass filter (the gain at the cutoff frequency itself is -3 dB).
Either using the bilinear transform or by expressing the gain in the form of differential equations and then discretizing them with an Euler scheme, you can obtain a discrete-domain equation that expresses the gain. The bilinear transform is the more accurate route; the Euler scheme gives the simpler formula used in the code below. In order to apply the bilinear transform we perform the following substitution: $$ s = \frac{2}{T_s} \frac{1-z^{-1}}{1+z^{-1}}$$ Where $T_s$ (not to be mistaken with $T$) is the sampling period, so in your case it is the number of seconds between each sample. I.e. for a 44100 Hz sampling frequency, $T_s$ becomes 1/44100 seconds. Now, after performing this substitution and simplifying terms, the discrete-time transfer function (Z transform) of your filter becomes: $$\frac{1+z^{-1}}{(1+2T/T_s) + (1-2T/T_s)z^{-1}}$$ Now let's recall the definition of the Z transform. In general form it is expressed as the following: $$ H(z) = \frac{b_0 + b_1z^{-1}+ b_2z^{-2} + ... + b_Mz^{-M}}{1 + a_1z^{-1}+ a_2z^{-2} + ... + a_Nz^{-N}} = \frac{Y(z)}{X(z)}$$ Where $b$ are the numerator coefficients from 0 to M (highest power) and $a$ are the denominator coefficients. Now, a difference equation can be derived in the following way: $$y(n) = b_0x(n) + b_1x(n-1) + b_2x(n-2) + ... + b_Mx(n-M) - a_1y(n-1) - a_2y(n-2) - ... - a_Ny(n-N) $$ Where $n$ means the $n$-th sample, $y$ is your output and $x$ is your input. Applying this to the bilinear-transformed filter above yields a difference equation with both $x(n)$ and $x(n-1)$ terms. If instead you use the simpler backward-Euler substitution $s = \frac{1}{T_s}(1-z^{-1})$, the difference equation for your filter reduces to: $$y(n) = (1-\alpha)y(n-1) + \alpha x(n)$$ Where $$ \alpha = \frac{T_s}{T + T_s}$$ This means that the $n$-th sample (the current one) of the filtered signal, called $y(n)$ here, equals one minus alpha times the previous filtered sample plus alpha times the $n$-th sample of your source (noised) signal.
In C language:

#include <stdio.h>

double lowpass(double x, double y_prev, double alpha)
{
    return (1.0 - alpha)*y_prev + alpha*x;
}

int main(void)
{
    double x[10] = {1,2,3,4,5,6,7,8,9,10}; /* signal with noise */
    double y[10];                          /* filtered signal (will be generated) */
    double fc = 500.0;                     /* 500 Hz cutoff frequency */
    double Ts = 1.0/44100.0;               /* for example */
    double T = 1.0/(2.0*3.1415*fc);
    double alpha = Ts/(T + Ts);
    int i;

    y[0] = x[0]; /* initialize first y sample as equal to first x sample */
    printf("Input\tOutput\n");
    for (i = 1; i < 10; ++i) {
        y[i] = lowpass(x[i], y[i-1], alpha);
        printf("%.4f\t%.4f\n", x[i], y[i]);
    }
    return 0;
}

Without complicating this code, for higher-order filtering you can swap y with x once it is filtered and repeat what's in the loop. It's slow and naive, but otherwise you'd have to take a power of the transfer function (i.e. $K(s)^2$ for a second-order filter) and derive the discrete equation again. Thankfully, with some practice you will see a pattern in this and (even better!) you can find ready derivations of higher-order low pass filters on the Internet :)
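Since the question asked for Python, here is a direct translation of the same first-order difference equation (only the example cutoff and sampling rate are assumptions):

```python
import math

def lowpass(x, fc, fs):
    # First-order IIR low-pass: y[n] = (1 - alpha)*y[n-1] + alpha*x[n],
    # with alpha = Ts/(T + Ts), T = 1/(2*pi*fc), Ts = 1/fs.
    Ts = 1.0 / fs
    T = 1.0 / (2.0 * math.pi * fc)
    alpha = Ts / (T + Ts)
    y = [x[0]]                       # seed with the first input sample
    for sample in x[1:]:
        y.append((1.0 - alpha) * y[-1] + alpha * sample)
    return y
```

For higher-order filtering you can feed the output back through the same function, exactly as suggested for the C version.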
I thought that induction heaters (as used in cooking) relied on Joule heating losses due to eddy currents induced by an oscillating magnetic field, produced by an inductor through which AC runs. That is basically what I can understand from Wikipedia's article. However, Pieter, a materials scientist, claims that the main mechanism is not the above one, but instead is due to magnetic hysteresis losses. Microscopically, the heat is released when the domain walls are set in motion by the applied magnetic field. This viewpoint conflicts with the one described above, in that non-ferromagnetic materials do not possess magnetic domain walls and therefore are not susceptible to any magnetic hysteresis loss. But it is possible to melt aluminum via induction heating, though I suppose by using a very high frequency AC compared to a regular induction cooker. But still, a model based solely on ferromagnetism seems lacking. Now, a quick analysis. The skin depth goes like $\sim 1/ \sqrt \mu$ where $\mu$ is the magnetic permeability. If we take nickel vs copper, Ni has a skin depth roughly 10 times smaller than that of Cu. Furthermore, it also has a resistivity of about 5 times that of Cu. This means that, for a given AC frequency, Ni has an impedance roughly 50 times that of Cu. Now, if we use Ohm's law $V=RI$, where $V$ is the emf induced by the induction cooker and does not depend on which material one heats, then this means that $I$, the eddy current generated in Ni, will be about 50 times smaller than that generated in Cu. And since the Joule heat is $I^2R$, it means that there is about 50 times more Joule heat loss in Cu than in Ni. This is quite counter-intuitive to me at first; I would have expected Ni to dissipate more heat through Joule losses than Cu, but it is the other way around.
Now, I think I understand: it is because Cu conducts electricity so much better that the current is much higher, and due to how Joule heat scales with current, no wonder Cu dissipates much more heat than Ni via the Joule effect. However this clashes with the common Wikipedia claim that induction heaters work mainly via Joule heating due to eddy currents. Because if that was the case, then copper should work much better than nickel and, I suppose, than stainless steel, since it should be similar to nickel. Hence in the end it looks like Pieter's viewpoint might be the most accurate if we consider a stainless steel pan in the kitchen. But I seek an answer that will show the math very concisely and compare the two views, mention in which case(s) each view is more accurate, etc. Feel free to compute the skin depth for $\sim 1\,\mathrm{cm}$ thick pans with the AC frequency used in houses, to use in your answer.
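The skin-depth comparison in the question is easy to make concrete. A hedged sketch using textbook resistivities, a typical induction-cooker frequency of about 24 kHz, and an assumed relative permeability for nickel (which in reality varies strongly with field strength and alloying):

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
f = 24e3                    # typical induction-cooker frequency, Hz (assumption)

def skin_depth(rho, mu_r, f):
    # delta = sqrt(2 * rho / (omega * mu0 * mu_r)), omega = 2*pi*f
    return math.sqrt(2 * rho / (2 * math.pi * f * MU0 * mu_r))

delta_cu = skin_depth(1.68e-8, 1.0, f)     # copper
delta_ni = skin_depth(6.99e-8, 100.0, f)   # nickel; mu_r ~ 100 is a rough assumption
```

With these numbers the copper skin depth comes out around 0.4 mm and the nickel one around 0.09 mm, so the eddy currents in the ferromagnetic material are squeezed into a much thinner layer; the effective surface resistance scales as $\sqrt{\rho\mu}$, which is the quantitative core of the impedance argument in the question.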
Figure 1: Visualization of supervised neighborhoods for local explanation with MAPLE. When seeing the new point \(X = (1, 0, 1)\), this tree determines that \(X_2\) and \(X_6\) are its neighbors and gives them weight 1 and gives all other points weight 0. MAPLE averages these weights across all the trees in the ensemble. Machine learning is increasingly used to make critical decisions such as a doctor’s diagnosis, a biologist’s experimental design, and a lender’s loan decision. In these areas, mistakes can be the difference between life and death, can lead to wasted time and money, and can have serious legal consequences. Because of the serious potential ramifications of using machine learning in these domains, it falls onto machine learning practitioners to ensure that their models are robust and to foster trust with the people who interact with their models. Broadly speaking, meeting these two goals is the objective of interpretability and is achieved by iteratively: explaining both global and local behavior of a model (increasing understanding), checking that these explanations make sense (developing trust), and fixing any identified problems (preventing bad failures). Meeting these two goals is a very difficult task and interpretability faces many challenges, but we will be focusing on two in particular: The frequently observed accuracy-interpretability trade-off where more interpretable models are often less accurate. Both accuracy and interpretability are desirable properties, so we do not want to have to choose between them. The existence of multiple types of explanations that each have their own advantages and disadvantages. We want to be able to combine their strengths and minimize their weaknesses. 
Our proposed method, MAPLE, couples classical local linear modeling techniques with a dual interpretation of tree ensembles (which aggregate the predictions of multiple decision trees), both as a supervised neighborhood approach and as a feature selection method (see Fig. 1). By doing this, we are able to slightly improve accuracy while producing multiple types of explanations. Before diving into the technical details of MAPLE and how it works as an interpretability system (both for explaining its own predictions and for explaining the predictions of another model), we provide an overview and comparison of the main types of explanations. The Main Types of Explanations At a high level, there are three main types of explanations: Example-based. In the context of an individual prediction, it is natural to ask: Which points in the training set most closely resemble a test point or influenced its prediction? Example: The K neighbors used by K-Nearest Neighbors. Local. We may aim to understand an individual prediction by asking: If the input is changed slightly, how does the model’s prediction change? Example: LIME (Ribeiro et al. 2016) uses a local linear model to approximate the predictive model in a neighborhood around the point being explained. Global. To gain an understanding of a model’s overall behavior we can ask: What are the patterns underlying the model’s behavior? Example: Anchors (Ribeiro et al. 2018) finds a rule-based approximation of as much of the model as possible. Example-based explanations are clearly distinct from the other two explanation types, as the former relies on sample data points and the latter two on features. Furthermore, local and global explanations themselves capture fundamentally different characteristics of the predictive model. To see this, consider the toy datasets in Fig. 2 generated from three univariate functions. Figure 2: Toy datasets from left to right (a) Linear (b) Shifted Logistic (c) Step Function. 
Generally, local explanations are better suited for modeling smooth continuous effects (Fig. 2a). For discontinuous effects (Fig. 2c) or effects that are very strong in a small region (Fig. 2b), local explanations either fail to detect the effect or make unusual predictions, depending on how the local neighborhood is defined (i.e., whether or not it is defined in a supervised manner; more on this in the ‘Supervised vs Unsupervised Neighborhoods’ section). We will call such effects global patterns because they are difficult to detect or model with local explanations. Conversely, global explanations are less effective at explaining continuous effects and more effective at explaining global patterns. This is because they tend to be rule-based models that use feature discretization or binning. This processing doesn’t lend itself easily to modeling continuous effects (you need many small steps to approximate a linear model well) but does lend itself to modeling the abrupt changes around global patterns (because those effects create natural cut-offs for the feature discretization or binning). Most real datasets have both continuous and discontinuous effects; it is therefore crucial to devise explanation systems that can capture, or are at least aware of, both types of effects. We focus on local explanations in this work because they are actionable (they answer the question “what could I have done differently to get the desired outcome?”) and relevant to the people impacted by machine learning systems (it is not particularly helpful to a person to know how the model behaves for an entirely different person). Creating Local Explanations The goal of a local explanation, \(g\), is to approximate our learned model, \(f\), well across some neighborhood of the input space, \(N_x\). Naturally, this leads to the fidelity metric: \(E_{x' \sim N_x}[ (g(x') - f(x'))^2]\). The choices of \(g\) and \(N_x\) are important and should often be problem specific.
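The fidelity metric lends itself to a direct Monte-Carlo estimate: sample from \(N_x\) and average the squared gap between \(g\) and \(f\). A minimal sketch, where the quadratic \(f\), the tangent-line \(g\), and the Gaussian neighborhood are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: x ** 2           # stand-in "learned model" (invented)
x0 = 1.0                       # point being explained

# local linear explanation: the tangent line of f at x0
g = lambda x: f(x0) + 2 * x0 * (x - x0)

# unsupervised neighborhood N_x: a Gaussian centered on x0
xs = rng.normal(loc=x0, scale=0.1, size=10_000)

# fidelity metric E[(g - f)^2] estimated by averaging over the samples
fidelity = np.mean((g(xs) - f(xs)) ** 2)
print(fidelity)  # close to 3 * 0.1**4 = 3e-4 for this f and g
```

A larger value signals that the linear surrogate is missing local structure of \(f\) inside the chosen neighborhood.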
Similar to previous work, we assume that \(g\) is a linear function. Figure 3: A simple way of generating a local explanation that is very similar to LIME. From left to right, 1) Start with a point that you want to explain, 2) Define a neighborhood around that point, 3) Sample points from that neighborhood, and 4) Fit a linear model to the model’s predictions at those sampled points MAPLE: Modifying Tree Ensembles to get Local Explanations MAPLE (Plumb et al. 2018) modifies tree ensembles to produce local explanations that are able to detect global patterns and to produce example-based explanations; these modifications are built on work from (A. Bloniarz et al. 2016) and (S. Kazemitabar et al. 2017). Importantly, we find that doing this typically improves the predictive accuracy of the model and that the resulting local explanations have high fidelity. Model Formulation At a high level, MAPLE uses the tree ensemble to identify which training points are most relevant to a new prediction and uses those points to fit a linear model that is used both to make a prediction and as a local explanation. We will now make this more precise. Given training data \((x_i, y_i)\) for \(i= 1, \ldots, n\), we start by training an ensemble of trees on this data, \(T_i\) for \(i= 1, \ldots, K\). For a point \(x\), let \(T_k(x)\) be the index of the leaf node of \(T_k\) that contains \(x\). Suppose that we want to make a prediction at \(x\) and also give an explanation for that prediction. To do this, we start by assigning a similarity weight to each training point, \(x_i\), based on how often the trees put \(x_i\) and \(x\) in the same leaf node. So we define \(w_i = \frac{1}{K} \sum_{j=1}^K \mathbb{I}[T_j(x_i) = T_j(x)]\). This is how MAPLE produces example-based explanations; training points with a larger \(w_i\) will be more relevant to the prediction/explanation at \(x\) than training points with smaller weights. An example of this process for a single tree is shown in Fig. 1. 
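The weighting scheme just described can be sketched end-to-end with scikit-learn, whose `apply` method returns per-tree leaf indices. Everything here (the data, forest settings, and the added intercept column) is an invented illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 3))                     # toy training data
y = 2.0 * X[:, 0] - X[:, 1] + 0.05 * rng.normal(size=300)

forest = RandomForestRegressor(
    n_estimators=100, min_samples_leaf=10, random_state=0
).fit(X, y)

x = np.array([0.2, -0.4, 0.1])                            # point to explain

# w_i = fraction of trees that place x_i in the same leaf as x
train_leaves = forest.apply(X)                            # shape (n, K)
x_leaves = forest.apply(x.reshape(1, -1))                 # shape (1, K)
w = (train_leaves == x_leaves).mean(axis=1)

# weighted linear fit around x (intercept column added for illustration)
Xb = np.c_[np.ones(len(X)), X]
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)

prediction = beta @ np.r_[1.0, x]
print(np.round(beta[1:], 2), round(prediction, 2))  # coefficients near [2, -1, 0]
```

The coefficients `beta[1:]` play the role of the local explanation, and the training points with large `w` are the example-based explanation.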
To actually make a prediction/explanation, we solve the weighted linear regression problem \(\hat\beta_x = \text{argmin}_\beta \sum_{i=1}^n w_i (\beta^T x_i - y_i)^2\). Then MAPLE makes the prediction \(f_{MAPLE}(x) = \hat\beta_x^T x\) and gives the local explanation \(\hat\beta_x\). Because the \(w_i\) depend on the training data (i.e., the most relevant points depend on the labels \(y_i\)), we say that \(\hat\beta_x\) uses a supervised neighborhood. Supervised vs Unsupervised Neighborhoods Local Explanations When LIME defines its local explanations, it optimizes for the fidelity metric with \(N_x\) set as a probability distribution centered on \(x\). So we say it uses an unsupervised neighborhood. As mentioned earlier, the behavior of local explanations around global patterns depends on whether they use a supervised or an unsupervised neighborhood. Why don’t unsupervised neighborhoods detect global patterns? Near a global pattern, an unsupervised neighborhood will sample points on either side of it. Consequently, if the explanation is linear, it will smooth the global pattern (i.e., fail to detect it). Importantly, the only indication that something might be awry is that the explanation will have lower fidelity. Although sometimes this smoothing is a good enough approximation, it would be better if the explanation detected the global pattern. For example, if we interpret Fig. 2b as the probability of giving someone a loan as their income increases, we can see that smoothing the global effect causes the explanation to give overly optimistic advice. How are supervised neighborhoods different? On the other hand, supervised neighborhoods will tend to sample points only on one side of the global pattern and consequently will not smooth it. For example, in Fig. 2c, MAPLE will predict a slope of zero at almost all points because the function is flat across each one of its three learned neighborhoods.
But this clearly is also not a desirable behavior since it would imply that this feature does not matter for the prediction. Consequently, we introduce a technique to determine if a coefficient is zero/small because it does not matter or if it is zero/small because it is near a global pattern. We do this by examining the probability distribution over the features induced by the weights, \(w_i\), and training points, \(x_i\), and determining where the explanation can be applied. Note that this distribution is defined using the weights learned by MAPLE. When a point is near a global pattern, this distribution becomes skewed and we can detect it. A brief example is shown below in Fig. 4 (see the paper for complete details). Figure 4: An example of the local neighborhoods learned by MAPLE as we perform a grid search across the active feature of each of the toy datasets from Fig. 2. Notice that we can detect the strong effect by the small neighborhood in the steep region of the logistic curve (middle) and the discontinuities in the step function (right). In summary, by using the local training distribution that MAPLE learns around a point, we can determine whether or not that point is near a global pattern. Results When evaluating the effectiveness of MAPLE, there are three main questions: Do we sacrifice accuracy to gain interpretability? How well do its local explanations explain its own predictions? How well can it explain a black-box model? We evaluated these questions on several UCI datasets [Dheeru 2017] and will summarize our results here (for full details, see the paper). 1. Do we sacrifice accuracy to gain interpretability? No, in fact MAPLE is almost always more accurate than the tree ensemble it is built on. 2. How well do its local explanations explain its own predictions?
When comparing MAPLE’s local explanation to an explanation fit by LIME to explain the predictions made by MAPLE, MAPLE produces substantially better explanations (as measured by the fidelity metric). This is not surprising since this is asking MAPLE to explain itself, but it does indicate that MAPLE is an improvement on tree ensembles in terms of both accuracy and interpretability. 3. How well can it explain a black-box model? When we use MAPLE or LIME to explain a black-box model (in this case a Support Vector Regression model), MAPLE often produces better explanations (again, measured by the fidelity metric). Conclusion By using leaf node membership as a form of supervised neighborhood selection, MAPLE is able to modify tree ensembles to be substantially more interpretable without the typical accuracy-interpretability trade-off. Additionally, it is able to provide feedback for all three types of explanations: local explanations via training a linear model, example-based explanations via highly weighted neighbors, and finally, detection of global patterns by using the supervised neighborhoods. References Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why Should I Trust You?: Explaining the Predictions of Any Classifier.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Anchors: High-Precision Model-Agnostic Explanations.” Thirty-Second AAAI Conference on Artificial Intelligence. 2018. Gregory Plumb, Denali Molitor, and Ameet S. Talwalkar. “Model Agnostic Supervised Local Explanations.” Advances in Neural Information Processing Systems. 2018. A. Bloniarz, C. Wu, B. Yu, and A. Talwalkar. “Supervised Neighborhoods for Distributed Nonparametric Regression.” AISTATS, 2016. S. Kazemitabar, A. Amini, A. Bloniarz, and A. Talwalkar. “Variable Importance Using Decision Trees.” NIPS, 2017. Dua Dheeru and Efi Karra Taniskidou.
UCI machine learning repository, 2017. DISCLAIMER: All opinions expressed in this post are those of the author and do not represent the views of CMU.
Dictionary:Q (Quality) (revision as of 13:39, 30 December 2014)

1. Quality factor, the ratio of 2π times the peak energy to the energy dissipated in a cycle; the ratio of 2π times the power stored to the power dissipated. The seismic Q of rocks is of the order of 50 to 300. Q is related to other measures of absorption (see below), where V, f, λ, and T are, respectively, velocity, frequency, wavelength, and period.[1] The absorption coefficient α is the term for the exponential decrease of amplitude with distance because of absorption; the amplitude of plane harmonic waves is often written as

<center><math>A\mathrm{e}^{-\alpha x} \sin 2 \pi f ( t - \tfrac{x}{V} ) </math></center>

where ''x'' is the distance traveled. The [[Dictionary:logarithmic decrement|logarithmic decrement]] ''δ'' is the natural log of the ratio of the amplitudes of two successive cycles.
The last equation above relates Q to the sharpness of a resonance condition; ''f''<sub>r</sub> is the resonance frequency and Δ''f'' is the change in frequency that reduces the amplitude by 1/√2. The [[Dictionary:damping factor|damping factor]] ''h'' relates to the decrease in amplitude with time,

<center><math>A(t) = A_0\mathrm{e}^{-ht} \cos \omega t \ </math></center>

See Figure A-2. 2. The ratio of the reactance of a circuit to the resistance. 3. A term to describe the sharpness of a filter; the ratio of the midpoint frequency to the bandpass width (often at 3 dB). 4. A designation for Love waves (q.v.). 5. Symbol for the Koenigsberger ratio (q.v.). 6. See Q-type section. See also References Sheriff, R. E. and Geldart, L. P., 1995, Exploration Seismology, 2nd Ed., Cambridge Univ. Press.
I'm trying to build a discrete Kalman filter that fuses accelerometer (acceleration) and GPS (position, velocity) measurements. However, I'm finding that my filter can't properly track a constant-acceleration path. My implementation uses the following standard signal model: $A = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} \frac{1}{2}\Delta t^{2}\\ \Delta t \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \quad D = \begin{pmatrix} 0\\0 \end{pmatrix}$ $x_{n+1}=Ax_{n}+Bu_{n}$ $y_{n} = Cx_{n}+Du_{n}$ Where $x = (r_x,v_x)^{T}$, the measurement $y=x$ uses the GPS position/velocity readings, $\Delta t$ is the sample time, and the accelerometer reading is used as the control input $u$. My $Q$ and $R$ matrices are both constant, reflecting each sensor's minimum rated accuracy. Is this poor tracking behavior to be expected, or is there something wrong with my implementation? I also understand that several other alternative signal models can be used for this problem - for example, using the acceleration as an observable state instead of a control input. I've tried implementing this model too, but the filtered result is even noisier than what's shown above. Is one of these models more suitable than the other, and why?
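For reference, the model described above can be simulated directly to see how it behaves on a constant-acceleration path. This is only a sketch: the time step, noise levels, and \(Q\)/\(R\) values below are invented, not taken from any sensor datasheet:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.eye(2)

Q = 1e-3 * np.eye(2)      # process noise (assumed accelerometer quality)
R = np.diag([2.0, 0.5])   # GPS position/velocity noise (assumed)

rng = np.random.default_rng(0)
a_true = 1.0                          # constant acceleration, m/s^2
x_true = np.zeros((2, 1))             # true [position; velocity]
x_est, P = np.zeros((2, 1)), np.eye(2)

for _ in range(200):
    # simulate the truth and the two (noisy) sensors
    x_true = A @ x_true + B * a_true
    u = a_true + rng.normal(scale=0.03)               # accelerometer reading
    z = x_true + rng.normal(scale=[[1.0], [0.5]])     # GPS reading

    # predict, using the accelerometer as the control input
    x_est = A @ x_est + B * u
    P = A @ P @ A.T + Q
    # update with the GPS measurement
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_est = x_est + K @ (z - C @ x_est)
    P = (np.eye(2) - K @ C) @ P

err = np.abs(x_est - x_true).ravel()
print(err)  # position/velocity errors, typically below the raw GPS noise
```

If a filter tuned like this tracks cleanly, poor tracking on real data usually points at an inconsistent \(\Delta t\), mismatched units, or overly optimistic \(Q\)/\(R\) rather than at the choice of signal model.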
The moment of inertia turns out to be an essential part of the calculations for rotating bodies. Furthermore, it turns out that the moment of inertia has much wider applicability. The moment of inertia of a mass is defined as \[I_{rrm} = \int_{m} r^{2} dm \tag{13}\] If the density is constant, then equation 13 can be transformed into \[I_{rrm} = \rho \int_{V} r^{2} dV \tag{14}\] The moment of inertia is independent of the coordinate system used for the calculation, but depends on the location of the axis of rotation relative to the body. The radius of gyration is sometimes defined as an equivalent concept, analogous to the center of mass: it is the distance from the axis at which, if all the mass were concentrated, the body would have the same moment of inertia, \[r_{k} = \sqrt{\frac{I_{m}}{m}}\tag{15}\] The body has a different moment of inertia for every coordinate axis: \[I_{xx} = \int_{m} r_{x}^{2}dm = \int_{m} \left(y^{2} + z^{2}\right)dm\] \[I_{yy} = \int_{m} r_{y}^{2}dm = \int_{m} \left(x^{2} + z^{2}\right)dm\tag{16}\] \[I_{zz} = \int_{m} r_{z}^{2}dm = \int_{m} \left(x^{2} + y^{2}\right)dm\] Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later, or the Potto license.
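The axis integrals in equation 16 can be checked numerically. A Monte-Carlo sketch for a uniform solid sphere, where the known closed form is \(I_{zz} = \tfrac{2}{5} M R^2\) (the mass, radius, and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M, R, N = 2.0, 1.5, 500_000          # mass (kg), radius (m), sample count

# sample points uniformly inside the sphere by rejection from the cube
pts = rng.uniform(-R, R, size=(N, 3))
pts = pts[np.sum(pts**2, axis=1) <= R**2]

dm = M / len(pts)                    # equal mass elements (constant density)
I_zz = np.sum((pts[:, 0]**2 + pts[:, 1]**2) * dm)   # eq. (16), z-axis

print(I_zz, 0.4 * M * R**2)          # both close to 1.8 kg m^2
```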
Given a family $\mathcal{F}$ of sets over ground set $X$, let $\tau(\mathcal{F})$ be the transversal number (aka blocking number), that is, the cardinality of the smallest set of points $E \subseteq X$ such that every set in $\mathcal{F}$ meets $E$. Lovász (Problem 13.25 in his Problems & Exercises book) has proved that if $\mathcal{F}$ is an $r$-uniform family of sets such that every $k$ sets in $\mathcal{F}$ intersect, then: $$ \tau(\mathcal{F}) \leq \frac{r-1}{k-1}+1. $$ I wonder whether it is possible to strengthen the conclusion if we assume that $\mathcal{F}$ is $k$-wise $t$-intersecting (that is, every $k$ sets in $\mathcal{F}$ intersect in at least $t$ points). I can obtain the following by mimicking the proof, but I feel there ought to be a better result somewhere: $$ \tau(\mathcal{F}) \leq \frac{r-t}{k-1}+1. $$
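For small families, $\tau$ can be computed by brute force, which is handy for testing whether a candidate bound is tight on toy instances. A sketch (the triangle family is just an invented example with $r = k = 2$):

```python
from itertools import combinations

def tau(family, ground):
    """Size of the smallest E subset of ground meeting every set in family."""
    for size in range(len(ground) + 1):
        for E in combinations(ground, size):
            if all(set(E) & S for S in family):
                return size

# the three edges of a triangle: 2-uniform, pairwise intersecting (k = 2)
family = [{0, 1}, {1, 2}, {0, 2}]
t = tau(family, {0, 1, 2})
print(t)  # 2, which matches the bound (r-1)/(k-1) + 1 = 2
```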
Here's a plan: Create a kindle-friendly preamble (or a document class) Redefine your page geometry to match your screen ratio (geometry package) Remove most of the margins (also with geometry) Remove or resize headers and footers (perhaps with fancyhdr) Enlarge text until it is readable to you (depends on which TeX friend you are using) Choose appropriate (legible) fonts (depends on which TeX friend you are using) Redefine your section titles to smaller spacing (titlesec package) Reduce other spacing as well (various lengths like \abovecaptionskip) Don't use absolute sizes in your documents: Rescale your figures to fit a fraction of \textwidth Mathematics should automatically break at the end of the line (breqn package might help) If you didn't create a document class: Extract your document content (text body) to a content-file and \include{content-file} after the \begin{document} If you created a document class, you will just have to change your class before compiling. Possible result The figure is the result of the following code: \documentclass[10pt]{article} \usepackage{fontspec} % font selection \setmainfont{Cambria} \usepackage{breqn} % automatic equation breaking \usepackage{microtype} % microtypography, reduces hyphenation \usepackage{polyglossia} % language selection \setmainlanguage{english} \usepackage{graphicx} % graphics support \usepackage[font=small,labelformat=simple,]{caption} % customizing captions \usepackage{titlesec} % customizing section titles \titleformat{\section}{\itshape\large}{}{0em}{} \titlespacing{\section}{0pt}{8pt}{4pt} \titleformat{\subsection}{\itshape}{}{0em}{} \titlespacing{\subsection}{0pt}{4pt}{2pt} \titleformat{\subsubsection}[runin]{\bf\scshape}{}{0em}{} \titlespacing{\subsubsection}{0pt}{5pt}{5pt} \usepackage[papersize={3.6in,4.8in},hmargin=0.1in,vmargin={0.1in,0.1in}]{geometry} % page geometry \usepackage{fancyhdr} % headers and footers \pagestyle{fancy} \fancyhead{} % clear page header \fancyfoot{} % clear page footer
\setlength{\abovecaptionskip}{2pt} % space above captions \setlength{\belowcaptionskip}{0pt} % space below captions \setlength{\textfloatsep}{2pt} % space between last top float or first bottom float and the text \setlength{\floatsep}{2pt} % space left between floats \setlength{\intextsep}{2pt} % space left on top and bottom of an in-text float \begin{document} In another moment down went Alice after it, never once considering how in the world she was to get out again. \section{Wonderful section title} Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look \begin{figure}[htb] \includegraphics[width=\textwidth]{alice} \caption{Quite wide picture, resized to fit} \end{figure} about her and to wonder what was going to happen next. \subsection{Tufte-style subsection} then she looked at the sides of the well \subsubsection{Saving space running in} and noticed that they were filled with cupboards and book\-shel\-ves. \begin{dmath}[label={sna74}] \frac{1}{6} \left(\sigma(k,h,0) +\frac{3(h-1)}{h}\right) +\frac{1}{6} \left(\sigma(h,k,0) +\frac{3(k-1)}{k}\right) =\frac{1}{6} \left(\frac{h}{k} +\frac{k}{h} +\frac{1}{hk}\right) +\frac{1}{2} -\frac{1}{2h} -\frac{1}{2k}, \end{dmath} \end{document} Different font sizes can be easily selected using fontspec.
Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
I am preparing for an exam in neural networks. As an example for self-organizing maps they showed the inverted pendulum problem where you want to keep the pole vertical: Now the part which I don't understand: $$f(\theta) = \alpha \sin(\theta) + \beta \frac{\mathrm{d} \theta}{\mathrm{d} t}$$ Let $x= \theta$, $y=\frac{\mathrm{d} \theta}{\mathrm{d} t}$, $z=f$. Solution with SOM: three-dimensional surface in $(x,y,z)$ adapt two-dimensional SOM to surface Method of control For a given $(x,y)$ find the neuron $k$ whose weight vector $w_k = [w_{k1}, w_{k2}, w_{k3}]$ has $(w_{k1}, w_{k2})$ closest to $(x,y)$ $f(\theta)$ is then $w_{k3}$ I guess we use the SOM to learn the function $f$. However, I would like to understand where $f$ comes from / what it means in this model.
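One way to make the slides concrete is to sample the surface $(x, y, f(x,y))$ and train a small SOM on those samples; the control step then reads $w_{k3}$ off the winning unit. The sketch below uses a 1-D chain of units and invented constants $\alpha, \beta$, so it only illustrates the mechanics, not the actual pendulum controller:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, 0.5                        # invented pendulum constants

def f(x, y):                                  # x = theta, y = dtheta/dt
    return alpha * np.sin(x) + beta * y

# training samples lying on the surface (x, y, z)
X = rng.uniform(-1, 1, size=(2000, 2))
data = np.c_[X, f(X[:, 0], X[:, 1])]

n_units = 25                                  # a small 1-D chain SOM
W = rng.uniform(-1, 1, size=(n_units, 3))

for t, v in enumerate(data):
    lr = 0.5 * (1 - t / len(data))            # decaying learning rate
    # winner chosen by the (x, y) components, as in the control method
    k = int(np.argmin(np.sum((W[:, :2] - v[:2]) ** 2, axis=1)))
    for j in range(max(0, k - 1), min(n_units, k + 2)):  # chain neighbors
        W[j] += lr * (v - W[j])

# control lookup: for a query (x, y), report w_{k3} as the estimate of f
q = np.array([0.3, -0.2])
k = int(np.argmin(np.sum((W[:, :2] - q) ** 2, axis=1)))
print(W[k, 2], f(q[0], q[1]))                 # SOM estimate vs true value
```

The point of the exercise is that the SOM stores a sampled lookup table of $f$; no analytic form of $f$ is ever given to the controller.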
Answer The speed was greater than the speed limit of the train. Work Step by Step We know that the strap is at 15 degrees, so we use trigonometry to relate gravity and the centripetal acceleration that make the angle 15 degrees: $\tan(15^\circ)=\frac{ma_r}{mg}$ $\tan(15^\circ)=\frac{a_r}{g}$ $a_r=2.63\ m/s^2$ We find the velocity: $v=\sqrt{a_r r}=\sqrt{2.63\times 150}=19.86\ m/s=71.5 \ km/h$ This is greater than the speed limit of the train.
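The arithmetic above can be double-checked in a few lines (g = 9.8 m/s² and r = 150 m are the values used in the solution):

```python
import math

g, r = 9.8, 150.0                       # m/s^2, m
a_r = math.tan(math.radians(15)) * g    # centripetal acceleration
v = math.sqrt(a_r * r)                  # required speed

print(round(a_r, 2), round(v, 2), round(v * 3.6, 1))  # 2.63, 19.85, 71.4
```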
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly: 'The "path" only comes into being because we observe it.') Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation; it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
This is a badly phrased question, so let's first make sense of it. I am going to do it in the style of computability theory. Thus I will use numbers instead of strings: a piece of source code is a number, rather than a string of symbols. It does not really matter; you may replace $\mathbb{N}$ with $\mathtt{string}$ throughout below. Let $\langle m, n\rangle$ be a pairing function. Let us say that a programming language $L = (P, ev)$ is given by the following data: a decidable set $P \subseteq \mathbb{N}$ of "valid programs", and a computable and partial function $ev : P \times \mathbb{N} \to \mathbb{N}$. The fact that $P$ is decidable means there is a total computable map $valid : \mathbb{N} \to \{0,1\}$ such that $valid(n) = 1 \iff n \in P$. Informally, we are saying that it is possible to tell whether a given string is a valid piece of code. The function $ev$ is essentially an interpreter for our language: $ev(m,n)$ runs code $m$ on input $n$ – the result may be undefined. We can now introduce some terminology: A language is total if $n \mapsto ev(m,n)$ is a total function for all $m \in P$. A language $L_1 = (P_1, ev_1)$ interprets language $L_2 = (P_2, ev_2)$ if there exists $u \in P_1$ such that $ev_1(u, \langle n, m \rangle) \simeq ev_2(n, m)$ for all $n \in P_2$ and $m \in \mathbb{N}$. Here $u$ is the simulator for $L_2$ implemented in $L_1$. It is also known as the universal program for $L_2$. Other definitions of "$L_1$ interprets $L_2$" are possible, but let me not get into this now. We say that $L_1$ and $L_2$ are equivalent if they interpret each other. There is "the most powerful" language $T = (\mathbb{N}, \varphi)$ of Turing machines (which you refer to as "a Turing machine") in which $n \in \mathbb{N}$ is an encoding of a Turing machine and $\varphi(n,m)$ is the partial computable function that "runs the Turing machine encoded by $n$ on input $m$". This language can interpret all other languages, obviously, since we required $ev$ to be computable.
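The pairing function $\langle m, n\rangle$ can be made concrete; a standard choice is the Cantor pairing function, sketched here (any computable bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ would do):

```python
def pair(m, n):
    """Cantor pairing: a computable bijection N x N -> N."""
    return (m + n) * (m + n + 1) // 2 + n

def unpair(p):
    """Inverse of pair: recover (m, n) from p."""
    w = int(((8 * p + 1) ** 0.5 - 1) // 2)   # index of the diagonal
    n = p - w * (w + 1) // 2
    return w - n, n

print(pair(3, 5), unpair(pair(3, 5)))  # 41 (3, 5)
```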
Our definition of programming languages is very relaxed. For the following to go through, let us require three more conditions: $L$ implements the successor function: there is $succ \in P$ such that $ev(succ,m) = m+1$ for all $m \in \mathbb{N}$, $L$ implements the diagonal function: there is $diag \in P$ such that $ev(diag,m) = \langle m, m \rangle$ for all $m \in \mathbb{N}$, $L$ is closed under composition of functions: if $L$ implements $f$ and $g$ then it also implements $f \circ g$. A classic result is this: Theorem: If a language can interpret itself then it is not total. Proof. Suppose $u$ is the universal program for a total language $L$ implemented in $L$, i.e., for all $m \in P$ and $n \in \mathbb{N}$,$$ev(u, \langle m, n \rangle) \simeq ev(m, n).$$As successor, diagonal, and $ev(u, {-})$ are implemented in $L$, so is their composition $k \mapsto ev(u, \langle k, k \rangle) + 1$. There exists $n_0 \in P$ such that $ev(n_0, k) \simeq ev(u, \langle k, k \rangle) + 1$, but then$$ev(u, \langle n_0, n_0\rangle) \simeq ev(n_0, n_0) \simeq ev(u, \langle n_0, n_0 \rangle) + 1$$As there is no number equal to its own successor, it follows that $L$ is not total or that $L$ does not interpret itself. QED. Observe that we could replace the successor map with any other fixpoint-free map. Here is a little theorem which I think will clean up a misunderstanding. Theorem: Every total language can be interpreted by another total language. Proof. Let $L$ be a total language. We get a total $L'$ which interprets $L$ by adjoining to $L$ its evaluator $ev$. More precisely, let $P' = \{\langle 0, n\rangle \mid n \in P\} \cup \{\langle 1, 0\rangle\}$ and define $ev'$ as$$ev'(\langle b, n \rangle, m) =\begin{cases}ev(n,m) & \text{if $b = 0$},\\ev(m_0, m_1) & \text{if $b = 1$ and $m = \langle m_0, m_1 \rangle$}\end{cases}$$Obviously, $L'$ is total because $L$ is total.
To see that $L'$ can simulate $L$ just take $u = \langle 1, 0\rangle$, since then $ev'(u, \langle m, n\rangle) \simeq ev(m, n)$, as required. QED. Exercise: [added 2014-06-27] The language $L'$ constructed above is not closed under composition. Fix the proof of the theorem so that $L'$ satisfies the extra requirements if $L$ does. In other words, you never need the full power of Turing machines to interpret a total language $L$ – a slightly more powerful total language $L'$ suffices. The language $L'$ is strictly more powerful than $L$ because it interprets $L$, but $L$ does not interpret itself.
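The construction in the second theorem can be played out concretely. In the sketch below a "language" is just a pair of Python functions; the mod-10 toy language $L$ is invented for illustration, and $L'$ adjoins the extra program $u = \langle 1, 0\rangle$ exactly as in the proof:

```python
def pair(m, n):                      # Cantor pairing as the <m, n> encoding
    return (m + n) * (m + n + 1) // 2 + n

def unpair(p):
    w = int(((8 * p + 1) ** 0.5 - 1) // 2)
    n = p - w * (w + 1) // 2
    return w - n, n

# a toy total language L: program p computes n -> (n + p) mod 10
def valid_L(p):
    return 0 <= p < 10

def ev_L(p, n):
    return (n + p) % 10

# L': programs <0, p> run p in L; the one extra program u = <1, 0> runs ev_L
U = pair(1, 0)

def valid_L2(q):
    b, p = unpair(q)
    return q == U or (b == 0 and valid_L(p))

def ev_L2(q, m):
    b, p = unpair(q)
    if b == 0:
        return ev_L(p, m)            # an embedded L-program
    m0, m1 = unpair(m)               # q == U: interpret L on <m0, m1>
    return ev_L(m0, m1)

# L' interprets L: ev'(u, <p, n>) == ev(p, n), and L' is still total
print(ev_L2(U, pair(3, 4)), ev_L(3, 4))  # 7 7
```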
Hello, I am new here. I just took the m25 GMAT Club Test and I don't get the solution of a question. (Q19) If equation \(|\frac{x}{2}| + |\frac{y}{2}| = 5\) encloses a certain region on the coordinate plane, what is the area of this region? 20 50 100 200 400 OA: 200 ME: well, since \(|x| + |y| = 10\) ; X can range from (-10) to (10) (when Y is 0) and the same for Y So the length of the side of the square should be 20. My Answer : 400 I think I am making a silly mistake somewhere but I just can't figure it out. Thanks Hi and welcome to the GMAT Club. Below is the solution for your problem. Hope it's clear. \(|\frac{x}{2}| + |\frac{y}{2}| = 5\) You will have 4 cases: \(x<0\) and \(y<0\) --> \(-\frac{x}{2}-\frac{y}{2}=5\) --> \(y=-10-x\); \(x<0\) and \(y\geq{0}\) --> \(-\frac{x}{2}+\frac{y}{2}=5\) --> \(y=10+x\); \(x\geq{0}\) and \(y<0\) --> \(\frac{x}{2}-\frac{y}{2}=5\) --> \(y=x-10\); \(x\geq{0}\) and \(y\geq{0}\) --> \(\frac{x}{2}+\frac{y}{2}=5\) --> \(y=10-x\); So we have equations of 4 lines. If you draw these four lines you'll see that the figure which is bounded by them is a square which is turned by 45 degrees and has a center at the origin. This square will have a diagonal equal to 20, so the \(Area_{square}=\frac{d^2}{2}=\frac{20*20}{2}=200\). Or the \(Side= \sqrt{200}\) --> \(area=side^2=200\). If equation |x/2| + |y/2| = 5 encloses a certain region (15 Jan 2012) Apex231 wrote: If equation |x/2|+|y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region?
A 20 B 50 C 100 D 200 E 400 First of all, to simplify the given expression a little bit, let's multiply it by 2: \(|\frac{x}{2}|+|\frac{y}{2}|=5\) --> \(|x|+|y|=10\). Now, find the x and y intercepts of the region (an x-intercept is a value(s) of x for y=0 and similarly a y-intercept is a value(s) of y for x=0): \(y=0\) --> \(|x|=10\) --> \(x=10\) and \(x=-10\); \(x=0\) --> \(|y|=10\) --> \(y=10\) and \(y=-10\). So we have 4 points: (10, 0), (-10, 0), (0, 10) and (0, -10). When you join them you'll get the region enclosed by \(|x|+|y|=10\): You can see that it's a square. Why a square? Because the diagonals of the quadrilateral are equal (20 and 20), and also are perpendicular bisectors of each other (as they lie on the X and Y axes), so it must be a square. As this square has a diagonal equal to 20, the \(Area_{square}=\frac{d^2}{2}=\frac{20*20}{2}=200\). Or the \(Side= \sqrt{200}\) --> \(area=side^2=200\). Re: If equation |x/2| + |y/2| = 5 encloses a certain region (20 Jun 2015, 01:53) jayanthjanardhan wrote: Sorry, I don't know what I am missing; how do I get the diagonal to be 20? From the square I got, I have all the sides equal to 20, hence the area = 400. Hi Jayanthjanardhan, The given equation is basically representing FOUR linear equations which represent 4 lines on the plane: One linear equation when x is +ve and y is +ve i.e. X+Y = 10 Second linear equation when x is +ve and y is -ve i.e. X-Y = 10 Third linear equation when x is -ve and y is +ve i.e. -X+Y = 10 Fourth linear equation when x is -ve and y is -ve i.e.
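A quick numerical sanity check of the solution above (my own sketch, not part of the original thread): join the four intercepts in order and compute the enclosed area with the shoelace formula.

```python
# Vertices of the region |x| + |y| = 10, taken in order around the square.
vertices = [(10, 0), (0, 10), (-10, 0), (0, -10)]

def shoelace_area(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

area = shoelace_area(vertices)  # d^2 / 2 = 20 * 20 / 2
print(area)  # 200.0
```

The side length is the distance between adjacent vertices, e.g. (10, 0) and (0, 10), which is \(\sqrt{200}\), consistent with \(side^2 = 200\) and not 400.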
-X-Y = 10 NOTE: PLEASE PLOT THE LINES TO UNDERSTAND THE FIGURE (REFER THE FIGURE) and see that the diagonal of the square is 20. So you need to plot these equations and then take the area of the quadrilateral formed. Also, please note that the four vertices of the quadrilateral are obtained where two lines intersect, and the intersections of the lines are obtained at the points (10,0), (-10,0), (0,10) and (0,-10). Whereas, what you have done is take FOUR RANDOM POINTS on those four lines as per your convenience and then assume that these points form the square. I hope this clears your doubt! Attachment: figure.jpg GMATinsight (Bhoopendra Singh and Dr. Sushma Jha) If equation |x/2|+|y/2| = 5 encloses a certain region (10 Sep 2012, 12:25) CMcAboy wrote: Can someone help me with this question: If equation |x/2| + |y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region? A) 20 B) 50 C) 100 D) 200 E) 400 I believe this is the simplest & the quickest solution: |x/2| + |y/2| = 5 Put x = 0 in the above equation; we get |y/2| = 5, which means y = 10, -10. Put y = 0 in the above equation; we get |x/2| = 5, which means x = 10, -10. If you plot these four points you get a square with two equal diagonals of length 20 units. Thus area = 1/2 * (Diagonal)^2 -----> 1/2 * 400 = 200. I hope this will help many.
If equation |x/2|+|y/2| = 5 encloses a certain region (15 Oct 2014, 19:17) Apex231 wrote: If equation |x/2|+|y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region? A. 20 B. 50 C. 100 D. 200 E. 400 Hello there, The equation of a straight line whose x and y intercepts are a and b respectively is (x/a) + (y/b) = 1, i.e., the coordinates of the two ends of the line are (a,0) and (0,b). Now, from the given question, |x/2|+|y/2| = 5; reducing this to intercept form we get |x/10|+|y/10| = 1. Considering the equation without the modulus, the coordinates are (10,0) and (0,10). Since there is a modulus, the other two coordinates are (-10,0) and (0,-10). Now the coordinates (10,0), (0,10), (-10,0) and (0,-10) form a square with diagonal length = 20. Here the diagonal length can be obtained by calculating the distance between (10,0) and (-10,0), or between (0,10) and (0,-10). In a square, Diagonal = Side * sqrt(2), so Side = 10 * sqrt(2) and Area = Side * Side = 200. Ans: D Hope this helps! Thanks! Regards, Bharat Bhushan Sunkara. Re: If equation |x/2| + |y/2| = 5 encloses a certain region (10 Jun 2015, 09:21) Bunuel wrote (quoting Barkatis's question and repeating the solution given above):
(Bunuel's solution, as quoted above.) If equation |x/2| + |y/2| = 5 encloses a certain region (10 Jun 2015, 09:50) arshu27 wrote: I had another way of solving. The answer is wrong but I wanted to know what is wrong in the method. We can re-write the question as below: \(x^2/4 +y^2/4 = 5\) (since \(|x| = x^2\)) \(x^2 + y^2 = 20\) This is the equation of a circle having the centre at (0,0) (the general form is \(x^2 + y^2= r^2\)), so area = \(3.14 * R^2\) = \(3.14 * 20\) = 62.8. What am I assuming wrong here?? Thanks! The part that I have highlighted above, which is the first step in your solution, is WRONG: |x| is NOT equal to x^2 for all values of x. The function "modulus" only keeps the final sign positive, but that doesn't mean what you wrote in the highlighted step. Alternatively, you can solve this question in this way: Step 1: Substitute y=0, \(|\frac{x}{2}| + |\frac{0}{2}| = 5\) i.e. \(|\frac{x}{2}| = 5\) i.e. \(|x| = 10\) i.e. \(x = \pm 10\) So on the X-Y plane you get two points, (+10,0) and (-10,0). Step 2: Substitute x=0, \(|\frac{0}{2}| + |\frac{y}{2}| = 5\) i.e. \(|\frac{y}{2}| = 5\) i.e.
\(|y| = 10\) i.e. \(y = \pm 10\) So on the X-Y plane you get two points, (0, +10) and (0, -10). Join all the four points: it's a square with side \(10\sqrt{2}\), i.e. Area = \((10\sqrt{2})^2\) = 200. If equation |x/2| + |y/2| = 5 encloses a certain region (10 Jun 2015, 10:02) arshu27 wrote (quoting the question and Bunuel's solution given above):
(arshu27's circle approach, quoted above.) One more clarification: \(|x|\) is NOT equal to \(x^2\). Instead, \(|x| = \sqrt{x^2}\). Re: If equation |x/2| + |y/2| = 5 encloses a certain region (12 Jun 2015, 14:43) Why should I suppose that x or y equals \(\pm 10\) and zeros? What about \(\pm 5\), as follows: |+5| + |-5| = 10, |-5| + |+5| = 10, |-5| + |-5| = 10, |+5| + |+5| = 10. So we have a square with side of length 10; its area is 100. Re: If equation |x/2| + |y/2| = 5 encloses a certain region (12 Jun 2015, 23:02) hatemnag wrote (as above). Hi hatemnag, The given equation is basically representing FOUR linear equations which represent 4 lines on the plane: One linear equation when x is +ve and y is +ve i.e. X+Y = 10 Second linear equation when x is +ve and y is -ve i.e. X-Y = 10 Third linear equation when x is -ve and y is +ve i.e. -X+Y = 10 Fourth linear equation when x is -ve and y is -ve i.e.
-X-Y = 10 So you need to plot these equations and then take the area of the quadrilateral formed. Also, please note that the four vertices of the quadrilateral are obtained where two lines intersect, and the intersections of the lines are obtained at the points (10,0), (-10,0), (0,10) and (0,-10). Whereas, what you have done is take FOUR RANDOM POINTS on those four lines as per your convenience and then assume that these points form the square. I hope this clears your doubt! Re: If equation |x/2| + |y/2| = 5 encloses a certain region (20 Jun 2015, 01:15) jayanthjanardhan wrote (quoting the question and Bunuel's solution given above):
It's a similar problem, but the diagram we end up getting is a square and not a rhombus... what am I missing here? Even that is a square, but never forget that a square is a specific type of rhombus. I hope you can see that the product of the slopes of the adjacent sides is -1 in that figure, which proves that the angle between the adjacent sides is 90 degrees. A square is a "rhombus with all angles 90 degrees", so calling it a rhombus wouldn't be wrong either, but you are right about the figure being a square.
Consider a fermion $\chi$ whose left-handed part is in a triplet representation of $SU(2)_L$: $$ \chi_{L} = (\chi^1,\chi^2,\chi^3)_L^{\ \ \text{T}}. $$ The charged current of $\chi_L$ (i.e. its coupling to $W^\pm$) is $$ - g \left( \overline{\chi^1}\gamma^\mu W_\mu^+\gamma_L \chi^2 + \overline{\chi^2}\gamma^\mu W_\mu^+\gamma_L \chi^3 + h.c.\right).$$ (This can be derived from the term $i\overline{\chi} \gamma^\mu D_\mu \chi$ in a GWS-like Lagrangian with an appropriate Higgs sector.) Question: Are there any constraints on the right-handed part of $\chi$? More specifically, could $\chi_R$ also be in a triplet representation of $SU(2)_L$? (I'm not talking about an $SU(2)_L\times SU(2)_R$ symmetry.) As far as I know, since RH and LH fermions don't interact with each other, we can use any representation for each part. In the Standard Model, LH particles form doublets while RH particles are singlets under $SU(2)_L$, but this simply comes from experimental data. I guess nothing prevents both $\chi_L$ and $\chi_R$ from being triplets? A consequence of this is that both representations would have the same quantum numbers (since the electric charge $Q_{EM}=I_{3W}+Y_W/2$ has to be the same for each component). Therefore, their weak couplings would be identical and the fields would be indistinguishable except for their chirality. But I suppose that's not a problem? Is there a deeper reason that would prevent $\chi_R$ from being a weak isospin triplet? (Perhaps constraints from the charged/neutral currents, parity, Yukawa couplings, etc.?)
Good day! I googled the whole StackExchange, but was not able to find the answer. I'm quite new to LaTeX, so my approach may not be proper. I am trying to define a macro that formats a greek letter as upright bold. I know there are several ways to do it; I would like to avoid the use of isomath and stick to the upgreek package. I want to be able to write \gb{\alpha}\gb{\Alpha} and in both cases get upright bold characters. So, here is the code inside:

\newcommand{\gb}[1]{                               % Imagine #1 = \Psi
  \StrGobbleLeft{\detokenize{#1}}{1}[\chrcodet]    % variable "\Psi"
  \StrGobbleRight{\chrcodet}{1}[\chrcode]          % variable "Psi"
  \StrLeft{\chrcode}{1}[\chrfirst]                 % first character "P"
  \IfSubStr{ABCDEFGHIJKLMNOPQRSTUVWXYZ}{\chrfirst} % Is it capital?
    {\boldsymbol{#1}}                              % Yes - no modification
    {\boldsymbol{\csname up\chrcode\endcsname}}    % No - glue \up+Psi (that is what happens!)
}

I'm trying to get the first character of the control sequence: if it's a capital, I call just \boldsymbol with no modification; otherwise I want to modify the control sequence by placing \up at the beginning. When I do that, it produces the following error:

! Undefined control sequence.
\bm@command ->\upPsi

It looks like the "if" condition always gives false, i.e. "P" is not recognized as a capital, thus the following MWE will write "lowercase" twice:

\documentclass{article}
\usepackage{bm,upgreek}
\usepackage{etoolbox}
\usepackage{xstring}
\newcommand{\gb}[1]{
  \StrGobbleLeft{\detokenize{#1}}{1}[\chrcodet]
  \StrGobbleRight{\chrcodet}{1}[\chrcode]
  \StrLeft{\chrcode}{1}[\chrfirst]
  #1 - \IfSubStr{ABCDEFGHIJKLMNOPQRSTUVWXYZ}{\chrfirst}{uppercase}{lowercase}
}
\begin{document}
\begin{equation}
  \psi,\Psi,\gb{\psi},\gb{\Psi}
\end{equation}
\end{document}

Can anyone give me an idea why it does not work as expected?
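One plausible explanation (my assumption; I have not compiled the original macro to confirm): \detokenize turns the "P" stored in \chrfirst into a catcode-12 ("other") character, while the letters in the literal string ABCDE... are ordinary catcode-11 letters, so xstring never finds a match and the test always takes the lowercase branch. Detokenizing the comparison alphabet as well makes both sides use the same catcodes. A sketch of the modified test, to be placed inside the macro body:

```
% Sketch: compare against a detokenized alphabet so that both strings
% consist of catcode-12 characters (assumption: this is the mismatch).
\IfSubStr{\detokenize{ABCDEFGHIJKLMNOPQRSTUVWXYZ}}{\chrfirst}% Is it capital?
  {\boldsymbol{#1}}%                           capital: just embolden
  {\boldsymbol{\csname up\chrcode\endcsname}}% lowercase: prepend "up"
```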
I am interested in samples of $\theta$ from the posterior distribution $$ P(\theta|x) = \int d\phi P(\theta|\phi)P(\phi|x) $$ where $x$ are data and $\phi$ are nuisance parameters. In principle, I can use a Metropolis Hastings sampler to sample $\theta$ and $\phi$ jointly and discard the samples of the nuisance parameter. In this case, I can sample from $P(\phi|x)$ directly, such that I can approximate the marginal posterior by Monte Carlo integration $$ P(\theta|x)\approx\frac{1}{n}\sum_{i=1}^n P(\theta|\phi_i)\equiv \hat P, $$ where $\phi_i$ are samples from $P(\phi|x)$. Of course, the approximation $\hat P$ is not deterministic because of the sampling error. I seem to remember that running the following algorithm samples from the posterior in this case, but I can no longer find the reference. Is this correct? If so, do you have a reference?

# Sampler in pseudo-python
theta = initial_value
for step in range(num_steps):
    # Sample phi
    phi = sample_phi_given_x(x)
    # Evaluate current posterior
    posterior_estimate = mean(posterior(theta, phi))
    # Sample from proposal and evaluate posterior at proposal
    candidate = sample_from_proposal(theta)
    posterior_candidate_estimate = mean(posterior(candidate, phi))
    # Accept or reject the proposal
    if random_uniform() < posterior_candidate_estimate / posterior_estimate:
        theta = candidate
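For what it's worth, here is a runnable toy version of this scheme (my own sketch, not from any reference; the model, names, and chain settings are all illustrative assumptions). The one change from the pseudocode above is that the noisy estimate for the current state is stored and reused rather than refreshed every iteration, which is the form usually called pseudo-marginal Metropolis-Hastings; refreshing both estimates each step gives a slightly different, only approximately exact, algorithm.

```python
import math, random

random.seed(0)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Toy model (an assumption for illustration):
#   phi | x   ~ N(0, 1)      (stand-in for the nuisance posterior)
#   theta|phi ~ N(phi, 1)    (conditional of interest)
# so the true marginal is theta | x ~ N(0, 2).

def sample_phi(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def p_hat(theta, phis):
    """Unbiased Monte Carlo estimate of P(theta | x)."""
    return sum(normal_pdf(theta, phi, 1.0) for phi in phis) / len(phis)

def pseudo_marginal_mh(num_steps=20000, n_inner=10, step_size=1.0):
    theta = 0.0
    # Key point: keep the estimate attached to the current state.
    current_est = p_hat(theta, sample_phi(n_inner))
    samples = []
    for _ in range(num_steps):
        cand = theta + random.gauss(0.0, step_size)  # symmetric proposal
        cand_est = p_hat(cand, sample_phi(n_inner))
        if random.random() < cand_est / current_est:
            theta, current_est = cand, cand_est
        samples.append(theta)
    return samples

samples = pseudo_marginal_mh()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # should be near 0 and 2
```

With the reused estimate, the chain targets the exact marginal despite the noisy density evaluations; the sample mean and variance land near the true values 0 and 2.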
In discrete time, distinct frequencies are only meaningful within a range of $2\pi$ (where the units are radians per sample). A sinusoid (or complex exponential) of frequency $2\pi$ is indistinguishable from one with frequency 0. If in doubt, evaluate $x[n] = \cos(2\pi \times n)$ and $x[n] = \cos(0 \times n)$. The same holds for any two frequencies separated by $2\pi$ or multiples of $2\pi$. For example, you can easily show that $\cos( \theta_0 \times n ) = \cos((\theta_0 + 2k\pi) \times n)$ for any integer $k$. So a range of size $2\pi$ is enough to avoid repeating frequencies. In general, the range chosen is $-\pi \cdots \pi$, where $\pi$ is the maximum frequency ( $\cos(\pi \times n ) = (-1)^n$ ). In the $z$ domain, the frequency response is the transfer function ($H(z)$) evaluated on the unit circle ($z = e^{j\theta}$), so here again you traverse a range of $2\pi$ before repeating your path.
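The identity $\cos(\theta_0 n) = \cos((\theta_0 + 2k\pi)n)$ is easy to check numerically; a small sketch (the sample count, frequency, and $k$ are arbitrary choices):

```python
import math

theta0 = 0.7        # arbitrary frequency, radians per sample
k = 3               # arbitrary integer multiple of 2*pi
n_samples = 50

x1 = [math.cos(theta0 * n) for n in range(n_samples)]
x2 = [math.cos((theta0 + 2 * k * math.pi) * n) for n in range(n_samples)]

# The two discrete-time sinusoids are the same sequence (up to rounding).
max_diff = max(abs(a - b) for a, b in zip(x1, x2))
print(max_diff)  # tiny, on the order of floating-point error
```

Sampled at integer $n$, the extra $2k\pi n$ in the argument is always a whole number of periods, so the two sequences coincide exactly in exact arithmetic.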
The absolute viscosity of many fluids is relatively insensitive to pressure but very sensitive to temperature. For isothermal flow, the viscosity can be considered constant in many cases. The variations of air and water viscosity as a function of the temperature at atmospheric pressure are plotted in Figures 1.8 and 1.9. Some common materials (pure and mixture) have expressions that provide an estimate. For many gases, Sutherland's equation is used and, according to the literature, provides reasonable results over the range of \(-40°C\) to \(1600°C\). \[\mu = \mu_{0}\, \frac{0.555\, T_{i0} + Suth}{0.555\, T_{in} + Suth} \left(\frac{T_{in}}{T_{i0}}\right)^{\frac{3}{2}}\] Where \(\mu\) viscosity at input temperature \(T_{in}\) \(\mu_{0}\) reference viscosity at reference temperature \(T_{i0}\) \(T_{in}\) input temperature in degrees Kelvin \(T_{i0}\) reference temperature in degrees Kelvin \(Suth\) Sutherland's constant (presented in Table 1.1) Example 1.3 Calculate the viscosity of air at 800K based on Sutherland's equation. Use the data provided in Table 1.1. Solution 1.3 Applying the constants from Sutherland's table provides \[ \mu = 0.00001827 \times \dfrac{ 0.555\times524.07+120}{0.555\times800+120} \times \left( \dfrac{800}{524.07}\right)^{\dfrac{3}{2}} \ \sim 2.51\,{10}^{-5} \left[\dfrac{N\, sec}{m^2}\right] \] The observed viscosity is about \(\sim 3.7\,{10}^{-5}\left[\dfrac{N\, sec}{m^2}\right]\). Table 1.2 Viscosity of selected gases. Substance Chemical formula Temperature, \(T\,[^{\circ}C]\) Viscosity, \(\left[\dfrac{N\, sec}{m^2} \right]\) Isobutane \(i-C_4\,H_{10}\) 23 0.0000076 Methane \(CH_4\) 20 0.0000109 Oxygen \(O_2\) 20 0.0000203 Mercury Vapor \(Hg\) 380 0.0000654 Table 1.3 Viscosity of selected liquids.
Substance Chemical formula Temperature, \(T\,[^{\circ}C]\) Viscosity, \(\left[\dfrac{N\, sec}{m^2} \right]\) Ether \((C_2H_5)O\) 20 0.000245 Benzene \(C_6H_6\) 20 0.000647 Bromine \(Br_2\) 26 0.000946 Ethanol \(C_2H_5OH\) 20 0.001194 Mercury \(Hg\) 25 0.001547 Sulfuric Acid \(H_2SO_4\) 25 0.01915 Olive Oil 25 0.084 Castor Oil 25 0.986 Glucose 25 5-20 Corn Oil 20 0.072 SAE 30 - 0.15-0.200 SAE 50 \(\sim25^{\circ}C\) 0.54 SAE 70 \(\sim25^{\circ}C\) 1.6 Ketchup \(\sim20^{\circ}C\) 0.05 Ketchup \(\sim25^{\circ}C\) 0.098 Benzene \(\sim20^{\circ}C\) 0.000652 Firm glass - \(\sim 1\times10^7\) Glycerol 20 1.069 Fig. 1.10. Liquid metals' viscosity as a function of the temperature. Liquid Metals Liquid metal can be considered a Newtonian fluid for many applications. Furthermore, many aluminum alloys behave as a Newtonian liquid until the first solidification appears (assuming steady state thermodynamic properties). Even when there is solidification (a mushy zone), the metal behavior can be estimated as a Newtonian material (further reading can be done in this author's book ``Fundamentals of Die Casting Design''). Figure 1.10 exhibits several liquid metals (from The Reactor Handbook, Vol. Atomic Energy Commission AECD-3646 U.S. Government Printing Office, Washington D.C. May 1995 p. 258.) The General Viscosity Graphs In the case of ``ordinary'' fluids where information is limited, Hougen et al suggested using a graph similar to the compressibility chart. In this graph, if one point is well documented, other points can be estimated. Furthermore, this graph also shows the trends. In Figure 1.11 the relative viscosity \(\mu_{r} = \mu / \mu_{c}\) is plotted as a function of the relative temperature, \(T_{r}\). \(\mu_{c}\) is the viscosity at the critical condition and \(\mu\) is the viscosity at any given condition. The lines of constant relative pressure \(P_{r} = P / P_{c}\) are drawn. The lowest pressure is, for practical purposes, \(\sim1[bar]\). Table 1.4 The properties at the critical stage.
Chemical component Molecular Weight \(T_c\)[K] \(P_c\)[Bar] \(\mu_c\)\(\left[\dfrac{N\,sec}{m^2}\right]\) \(H_2\) 2.016 33.3 12.9696 3.47 \(He\) 4.003 5.26 2.289945 2.54 \(Ne\) 20.183 44.5 27.256425 15.6 \(Ar\) 39.944 151 48.636 26.4 \(Xe\) 131.3 289.8 58.7685 49. Air "mixed'' 28.97 132 36.8823 19.3 \(CO_2\) 44.01 304.2 73.865925 19.0 \(O_2\) 32.00 154.4 50.358525 18.0 \(C_2H_6\) 30.07 305.4 48.83865 21.0 \(CH_4\) 16.04 190.7 46.40685 15.9 Water 18.01528 647.096 220.64 - The critical viscosity can be evaluated in the following three ways. The simplest way is by obtaining the data from Table 1.4 or similar information. The second way, if the information is available and is close enough to the critical point, is to obtain the critical viscosity as \[\mu_{c} = \frac{\mu}{\mu_{r}}\tag{21}\] The third way, when none is available, is by utilizing the following approximation \[\mu_{c} = \sqrt{M\,T_{c}}\;\tilde{v}_{c}^{\,-2/3}\tag{22}\] Where \(\tilde{v}_c\) is the critical molecular volume and \(M\) is the molecular weight. Or \[\mu_{c} = \sqrt{M}\,P_{c}^{2/3}\,T_{c}^{-1/6}\tag{23}\] Calculate the reduced pressure and the reduced temperature and from Figure 1.11 obtain the reduced viscosity. Example 1.4 Estimate the viscosity of oxygen, \(O_2\), at \(100^{\circ}C\) and 20[Bar]. Solution 1.4 \(P_c = 50.35[Bar]\,\) \(T_c=154.4\) and therefore \(\mu_c=18 \left[ \dfrac{N\,sec}{m^2}\right]\) The value of the reduced temperature is \[T_r \sim \dfrac{373.15}{154.4} \sim 2.42 \tag{24}\] The value of the reduced pressure is \[P_r \sim \dfrac{20}{50.35} \sim 0.4 \tag{25}\] From Figure 1.11 it can be obtained that \(\mu_r\sim 1.2\) and the predicted viscosity is \[\mu = \mu_c \, \overbrace{\left( \dfrac{\mu}{\mu_c}\right)} ^{Table } = 18 \times 1.2 = 21.6[N\, sec/m^2] \tag{26}\] Fig. 1.11. Reduced viscosity as a function of the reduced temperature. Fig. 1.12. Reduced viscosity as a function of the reduced pressure.
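Sutherland's equation from Example 1.3 is straightforward to script; a small sketch (the function and variable names are my own; the constants are taken from the worked example for air: \(\mu_0=1.827\times10^{-5}\), \(T_{i0}=524.07\) K, \(Suth=120\)):

```python
def sutherland_viscosity(t_in, mu0=1.827e-5, t0=524.07, suth=120.0):
    """Sutherland's equation with temperatures in Kelvin.

    The 0.555 factors convert the Kelvin temperatures for use with a
    Sutherland constant tabulated in the Rankine-based form.
    """
    return (mu0
            * (0.555 * t0 + suth) / (0.555 * t_in + suth)
            * (t_in / t0) ** 1.5)

# Reproduces Example 1.3: air at 800 K.
mu_800 = sutherland_viscosity(800.0)
print(mu_800)  # ~2.51e-5 [N sec/m^2]
```

As the text notes, the measured value at 800 K is closer to \(3.7\times10^{-5}\), so this particular set of constants should be used with care at high temperature.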
Viscosity of Mixtures In general the viscosity of a liquid mixture has to be evaluated experimentally. Even for a homogeneous mixture, there is no silver bullet to estimate the viscosity. In this book, only the mixture of low density gases is discussed for an analytical expression. For most cases, the following Wilke's correlation for gas at low density provides a result in a reasonable range. \[\mu_{mix} = \sum_{i=1}^n \frac{x_{i}\,\mu_{i}}{\sum_{j=1}^n x_{j}\,\phi_{ij}}\tag{27}\] where \(\phi_{ij}\) is defined as \[\phi_{ij} = \frac{1}{\sqrt{8}}\left(1+\frac{M_i}{M_j}\right)^{-1/2}\left(1+\sqrt{\frac{\mu_i}{\mu_j}}\left(\frac{M_j}{M_i}\right)^{1/4}\right)^{2}\tag{28}\] Here, \(n\) is the number of chemical components in the mixture, \(x_{i}\) is the mole fraction of component \(i\), and \(\mu_{i}\) is the viscosity of component \(i\). The same definitions apply for the subscript \(j\). The dimensionless parameter \(\phi_{ij}\) is equal to one when \(i=j\). The mixture viscosity is a highly nonlinear function of the fractions of the components. Example 1.5 Calculate the viscosity of a mixture (air) made of 20% oxygen, \(O_2\), and 80% nitrogen, \(N_2\), for the temperature of \(20^{\circ}C\). Solution 1.5 The following table summarizes the known details Table summary 1. Component Molecular Weight, \(M\) Fraction, \(x\) Viscosity, \(\mu\) \(O_2\) 32. 0.2 0.0000203 \(N_2\) 28. 0.8 0.00001754 Table summary 2. i j \(M_i/M_j\) \(\mu_i/\mu_j\) \(\Phi_{ij}\) 1 1 1.0 1.0 1.0 1 2 1.143 1.157 1.0024 2 1 0.875 0.86 0.996 2 2 1.0 1.0 1.0 \[ \mu_{mix} \sim \dfrac{0.2\times 0.0000203}{0.2\times1.0 + 0.8\times 1.0024} + \dfrac{0.8\times 0.00001754}{0.2\times0.996 + 0.8\times 1.0} \sim 0.0000181 \left[\dfrac{N\,sec}{m^2}\right] \] The observed value is \(\sim0.0000182 \left[\dfrac{N\,sec}{m^2}\right]\). At very low pressure, in theory, the viscosity of a gas with a ``simple'' molecular structure is a function of the temperature only. For gases with a very long molecular structure or a complex structure these formulas cannot be applied.
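Equations (27) and (28) translate directly into code; a sketch (function names are my own) that reproduces Example 1.5 for the 20% \(O_2\) / 80% \(N_2\) mixture:

```python
from math import sqrt

def wilke_phi(mu_i, mu_j, m_i, m_j):
    """Dimensionless interaction parameter, Eq. (28)."""
    return (1.0 / sqrt(8.0)
            * (1.0 + m_i / m_j) ** -0.5
            * (1.0 + sqrt(mu_i / mu_j) * (m_j / m_i) ** 0.25) ** 2)

def wilke_mixture_viscosity(x, mu, m):
    """Wilke's low-density gas mixture correlation, Eq. (27)."""
    n = len(x)
    total = 0.0
    for i in range(n):
        denom = sum(x[j] * wilke_phi(mu[i], mu[j], m[i], m[j]) for j in range(n))
        total += x[i] * mu[i] / denom
    return total

# Example 1.5: air as 20% O2 and 80% N2 at 20 C.
x = [0.2, 0.8]
mu = [2.03e-5, 1.754e-5]
m = [32.0, 28.0]
mu_mix = wilke_mixture_viscosity(x, mu, m)
print(mu_mix)  # ~1.81e-5 [N sec/m^2]
```

The computed \(\phi_{12}\) and \(\phi_{21}\) agree with Table summary 2 to within rounding, and the mixture value matches the hand calculation.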
For some mixtures of two liquids it was observed that at low shear stress the viscosity is dominated by the liquid with the high viscosity, and at high shear stress it is dominated by the liquid with the low viscosity. The higher viscosity is more dominant at low shear stress. Reiner and Philippoff suggested the following formula: \[\frac{dU_{x}}{dy} = \left(\frac{1}{\mu_{\infty} + \dfrac{\mu_{0} - \mu_{\infty}}{1 + \left(\dfrac{\tau_{xy}}{\tau_{s}}\right)^2}}\right)\tau_{xy}\tag{29}\] Where the term \(\mu_{\infty}\) is the experimental value at high shear stress. The term \(\mu_0\) is the experimental viscosity at shear stress approaching zero. The term \(\tau_s\) is the characteristic shear stress of the mixture. Example values for this formula, for molten sulfur at temperature \(120^{\circ}C\), are \(\mu_{\infty} = 0.0215 \left({N\,sec}/{m^2}\right)\), \(\mu_{0} = 0.00105 \left({N\,sec}/{m^2}\right)\), and \(\tau_s = 0.0000073 \left({kN}/{m^2}\right)\). This equation (29) provides a reasonable value only up to \(\tau = 0.001 \left({kN}/{m^2}\right)\). Figure 1.12 can be used for a crude estimate of a dense gases mixture. To estimate the viscosity of a mixture with \(n\) components, Hougen and Watson's method for pseudocritical properties is adapted. In this method the mixed critical pressure is defined as \[{P_c}_{mix} = \sum_{i=1}^{n} \, x_i \,{P_c}_i \tag{30}\] the mixed critical temperature is \[{T_c}_{mix} = \sum_{i=1}^{n} \,x_i\, {T_c}_i \tag{31}\] and the mixed critical viscosity is \[{\mu_c}_{mix} = \sum_{i=1}^{n} \,x_i\, {\mu_c}_i \tag{32}\] Example 1.6 A viscometer is made of two concentric cylinders; the inner cylinder is of 0.1 [m] radius, the outer cylinder is of 0.101 [m] radius and the cylinders length is 0.2 [m]. It is given that a moment of 1 [\(N\times m\)] is required to maintain an angular velocity of 31.4 [rad/second] (these numbers represent only an academic question, not real values of an actual liquid).
Estimate the liquid viscosity used between the cylinders. Solution 1.6 The moment or the torque is transmitted through the liquid to the outer cylinder. A control volume around the inner cylinder shows that the moment is a function of the area and the shear stress. The shear stress can be estimated as linear between the two concentric cylinders. The velocity at the inner cylinder's surface is \[ \label{concentricCylinders:Ui} U_i = r\,\omega = 0.1\times 31.4[rad/second] = 3.14 [m/s] \] The velocity at the outer cylinder surface is zero. The velocity gradient may be assumed to be linear, hence, \[ \label{concentricCylinders:dUdr} \dfrac{dU}{dr} \cong \dfrac{3.14 - 0}{0.101 - 0.1} = 3140\, sec^{-1} \] The applied moment is \[ \label{concentricCylinders:M1} M = \overbrace{2\,\pi\,r_i\,h}^{A} \overbrace{\mu \dfrac{dU}{dr}}^{\tau} \,\overbrace{r_i}^{\text{arm}} \] or the viscosity is \[ \label{concentricCylinders:M} \mu = \dfrac{M}{ {2\,\pi\,{r_i}^2\,h} { \dfrac{dU}{dr}} } = \dfrac{1}{2\times\pi\times{0.1}^2 \times 0.2 \times 3140} \approx 0.0253 \left[\dfrac{N\,sec}{m^2}\right] \] Example 1.7 A square block weighing 1.0 [kN] with a side surface area of 0.1 [\(m^2\)] slides down an inclined surface with an angle of \(20^{\circ}\). The surface is covered with an oil film. The oil creates a distance between the block and the inclined surface of \(1\times10^{-6}[m]\). What is the speed of the block at steady state? Assume a linear velocity profile in the oil and that the whole oil film is under steady state. The viscosity of the oil is \(3 \times 10^{-5} [N\,sec/m^2]\).
Solution 1.7 The shear stress at the surface is estimated for steady state by \[ \label{slidingBlock:shear} \tau = \mu \dfrac{dU}{dx} = 3 \times 10^{-5} \times \dfrac{U}{1\times10^{-6}} = 30 \, U \] The total friction force is then \[ \label{slidingBlock:frictionForce} f = \tau\, A = 0.1 \times 30\,U = 3\,U \] The component of the gravity force along the incline acts against the friction and equals it at steady state, hence \[ \label{slidingBlock:solPre1} F_g\,\sin\,20^{\circ} = f = 3\,U\Longrightarrow U = \dfrac{F_g\,\sin\,20^{\circ}}{3} \] Or, with the block weight \(F_g = 1000\,[N]\), the solution is \[ \label{slidingBlock:solPre} U = \dfrac{1000\times\sin\,20^{\circ}}{3} \approx 114\,[m/sec] \] Example 1.8 Develop an expression for the torque required to rotate a disc of radius \(R\) parallel to a stationary surface, with a liquid film filling the gap between them. The edge effects can be neglected. The gap is given and equal to \(\delta\) and the rotation speed is \(\omega\). The shear stress can be assumed to be linear. Solution 1.8 In this case the shear stress is a function of the radius, \(r\), and an expression has to be developed for it. Additionally, the differential area also increases and is a function of \(r\). The shear stress can be estimated as \[ \label{discRotating:tau} \tau \cong \mu \,\dfrac{U}{\delta} = \mu\,\dfrac{\omega \, r}{\delta} \] This torque can be integrated over the entire area as \[ \label{discRotating:F} T = \int_0^R r\, \tau \,dA = \int_0^R \overbrace{r}^{\text{arm}} \, \overbrace{\mu\, \dfrac{\omega \, r}{\delta}}^{\tau} \, \overbrace{2\,\pi\,r\,dr}^{dA} \] The result of the integration is \[ \label{discRotating:I} T = \dfrac{\pi\,\mu\,\omega\,R^4}{2\,\delta} \] Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
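The integration in Example 1.8 can be checked numerically; a sketch (the parameter values are arbitrary assumptions) comparing a midpoint Riemann sum of \(\int_0^R (\mu\,\omega\, r/\delta)\, r \, 2\pi r \, dr\) with the closed form \(\pi\mu\omega R^4/(2\delta)\):

```python
import math

# Arbitrary illustrative parameters.
mu = 0.05        # viscosity [N sec/m^2]
omega = 10.0     # angular speed [rad/sec]
delta = 1e-3     # gap [m]
R = 0.2          # disc radius [m]

# Midpoint-rule integration of the torque integrand 2*pi*mu*omega*r^3/delta.
n = 100_000
dr = R / n
T_numeric = sum(2 * math.pi * mu * omega * ((i + 0.5) * dr) ** 3 / delta * dr
                for i in range(n))

# Closed-form result of the integration.
T_closed = math.pi * mu * omega * R ** 4 / (2 * delta)
print(T_numeric, T_closed)  # the two agree to high accuracy
```

This confirms that shear stress growing linearly in \(r\), lever arm \(r\), and ring area \(2\pi r\,dr\) together give the \(R^4\) dependence of the torque.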
Question: If $x \neq 0$, then prove that $\displaystyle \sum_{n=1}^{\infty}\dfrac1{2^n} \tan\left(\dfrac{x}{2^n}\right) = \dfrac1{x} - \cot x.$ My answer: I proved this result by using the following identity: $$ \prod_{k=1}^n \cos\left(\frac{x}{2^k}\right) = \frac{\sin x}{2^n\sin \frac{x}{2^n}}$$ I took the natural log of both sides of the above equation and then differentiated both sides to get $$\sum_{k=1}^n \frac1{2^k} \tan\left(\frac{x}{2^k}\right) = \frac{1}{2^n}\cot \left(\frac{x}{2^n}\right) - \cot x.$$ Now taking the limit as $n \to \infty$, and noting that $\frac{1}{2^n}\cot\left(\frac{x}{2^n}\right) = \frac1x \cdot \frac{x/2^n}{\tan\left(x/2^n\right)} \to \frac1x$, I get the required identity. $\blacksquare$ I have never seen the above identity before in my life. I was amused and surprised by this identity. That's because I have never seen an infinite trig series summing up to a rational function like $\dfrac1{x}$ before. So my questions are the following: A) First of all, is my derivation correct? Secondly, does anyone know a source for this problem? B) Are there other 'elementary' derivations that have an infinite trig series on one side and a rational function on the other? [I say 'elementary' to avoid Fourier series. I am guessing Fourier series must be full of such results.]
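As a quick numerical sanity check of the identity (my addition, not part of the original question), one can compare a truncated partial sum against $\frac1x - \cot x$:

```python
import math

def partial_sum(x, N):
    """Partial sum sum_{n=1}^{N} (1/2^n) * tan(x/2^n)."""
    return sum(math.tan(x / 2**n) / 2**n for n in range(1, N + 1))

x = 1.3
lhs = partial_sum(x, 50)       # terms decay like x/4^n, so N = 50 is plenty
rhs = 1 / x - 1 / math.tan(x)  # 1/x - cot(x)
print(lhs, rhs)                # the two values agree to within ~1e-15
```

The fast agreement also reflects the telescoped form of the partial sum: the remainder is exactly $\frac{1}{2^N}\cot\left(\frac{x}{2^N}\right) - \frac1x$, which vanishes geometrically.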
I went through many textbooks and websites and found the derivation of the AC resistance of a diode, which is \$r_d=26\,mV/I_{F}\$, whereas for the AC emitter resistance it is \$r_e=25\,mV/I_E\$, for which I didn't find any derivation or explanation. So I wanted to ask the forum for a mathematical derivation, or whether they are the same, with valid reasons. It's just a linearization of the Shockley equation. The large-signal model for a diode is: $$I_F=I_S\cdot\left(e^{\cfrac{V_F}{n k T/q}}-1\right)~~~~~~~~~~~~~(1)$$ \$n\$ is the emission coefficient and is by default set to 1. In many small-signal BJTs, the default is fairly accurate. With many diodes, it's not, and it is usually larger -- often from 1.6 to 3 or more. But in the two cases you mentioned, I'm pretty sure it was taken as the default value. The value of \$\frac{k T}{q}\$ is a matter of basic physics and ambient temperature. It's called the thermal voltage and is, at room temperatures, somewhere between \$25\$ and \$26\:\textrm{mV}\$. \$I_S\$ is known as the saturation current and is typically in the area of \$2\times 10^{-14}\:\textrm{A}\$ for discrete, small-signal BJTs. For discrete diodes, it's usually more, often by a factor of 1000 or more. Linearization of the above equation is pretty easy: $$\begin{align*} D\left(I_F\right)&=D\left(I_S\cdot\left(e^{\cfrac{V_F}{n k T/q}}-1\right)\right) \\ \\ \textrm{d} I_F&=I_s\cdot D\left(e^{\cfrac{V_F}{n k T/q}}-1\right) \\ \\ \textrm{d} I_F&=I_s\cdot e^{\cfrac{V_F}{n k T/q}} \cdot D\left(\cfrac{V_F}{n k T/q}\right) \\ \\ \textrm{d} I_F&=I_s\cdot e^{\cfrac{V_F}{n k T/q}} \cdot \cfrac{\textrm{d} V_F}{n k T/q}~~~~~~~~(2) \end{align*}$$ At this point, it is helpful to recall equation (1) above and note that the "-1" term is negligible in almost all cases.
So we can substitute \$I_F\$ into equation (2) above, giving: $$\begin{align*} \textrm{d} I_F&=I_F \cdot \cfrac{\textrm{d} V_F}{n k T/q}~~~~~~~~~~~~~~~~~~~~~~~~~~(3) \end{align*}$$ Trivial algebra now yields: $$\begin{align*} \frac{\textrm{d} V_F}{\textrm{d}I_F} &= \frac{n k T}{q I_F} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(4) \end{align*}$$ Since \$n=1\$ and thus \$\tfrac{n k T}{q}\approx 26\:\textrm{mV}\$ at room temperature, equation (4) reduces to the equation you've been seeing. Whether it is 25 or 26 is rarely important. (It might be if you are using this equation to measure the ambient temperature, or if you are struggling with circuits that otherwise depend upon this parameter's exact value in exact circumstances.) In the case of the BJT, the usual much-simplified active-region equation is: $$I_C=I_S\cdot\left(e^{\cfrac{V_{BE}}{n k T/q}}-1\right)~~~~~~~~~~~~~(5)$$ But a similar derivation occurs. And since the emitter collects both the base and collector currents, which travel across that PN junction, the use of \$I_E\$ rather than \$I_C\$ is appropriate. But those two values are usually so similar that you may find either form used in practical applications. In the case of a small-signal BJT, I'd be more likely to use the equation, since it more usually applies (absent better specifications). In the case of diodes, excepting diode-connected BJTs, I'd usually be suspicious of it, as the emission coefficients are usually not 1, but larger. Oh. Keep in mind that the thermal voltage is in millivolts. An emitter current of \$1\:\textrm{mA}\$ suggests \$r_e \approx 26\:\Omega\$. You need to keep your multipliers straight. (When you see \$\tfrac{26}{I_E}\$, then you must either read the 26 as being millivolts with \$I_E\$ being in amps, or else you must read the \$I_E\$ as being specified in milliamps.)
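A small numeric sketch of equation (4) (my addition; the physical constants are standard CODATA values, and the function name is my own):

```python
# Dynamic (small-signal) resistance of a PN junction, per equation (4):
# r = n * k * T / (q * I_F)
k = 1.380649e-23      # Boltzmann constant [J/K]
q = 1.602176634e-19   # elementary charge [C]

def dynamic_resistance(i_amps, t_kelvin=300.0, n=1.0):
    """Small-signal resistance in ohms for a DC bias current in amps."""
    return n * k * t_kelvin / (q * i_amps)

Vt = k * 300.0 / q                  # thermal voltage, ~25.85 mV at 300 K
r_e = dynamic_resistance(1e-3)      # ~26 ohms at 1 mA, as noted above
print(Vt, r_e)
```

Note how the resistance scales inversely with bias current: doubling \$I_E\$ halves \$r_e\$, which is exactly what the linearization predicts.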
" ....where as for ac emitter resistance it is r'e=25/IE" From jonk's excellent explanation - and in conjunction with the above quoted sentence - we can derive a very important property of the quantity r'e. The TRANSCONDUCTANCE of a BJT is (with sufficient accuracy) gm=d(Ic)/d(Vbe)=Ic/Vt (with Vt~26mV). Note that we have gm=1/r'e. The transconductance gm plays the major role in finding the voltage gain of a transistor stage; gm can also be calculated using the small-signal parameters beta=d(Ic)/d(Ib) and rpi=d(Vbe)/d(Ib) as gm=beta/rpi. However, for practical design purposes, who knows the values for beta and rpi? Therefore, it is common practice to use gm=Ic/Vt, because our design always starts by selecting a proper Ic value.
Here's my two cents worth. Why Lie Algebras? First I'm just going to talk about Lie algebras. These capture almost all information about the underlying group. The only information omitted is the discrete symmetries of the theory. But in quantum mechanics we usually deal with these separately, so that's fine. The Lorentz Lie Algebra It turns out that the Lie algebra of the Lorentz group is isomorphic to that of $SL(2,\mathbb{C})$. Mathematically we write this (using Fraktur font for Lie algebras) $$\mathfrak{so}(3,1)\cong \mathfrak{sl}(2,\mathbb{C})$$ This makes sense since $\mathfrak{sl}(2,\mathbb{C})$ is non-compact, just like the Lorentz group. Representing the Situation When we do quantum mechanics, we want our states to live in a vector space that forms a representation for our symmetry group. We live in a real world, so we should consider real representations of $\mathfrak{sl}(2,\mathbb{C})$. A bit of thought will convince you of the following. Fact: real representations of a Lie algebra are in one-to-one correspondence (bijection) with complex representations of its complexification. That sounds quite technical, but it's actually simple. It just says that we can have complex vector spaces for our quantum mechanical states! That is, provided we use complex coefficients for our Lie algebra $\mathfrak{sl}(2,\mathbb{C})$. When we complexify $\mathfrak{sl}(2,\mathbb{C})$ we get a direct sum of two copies of it. Mathematically we write $$\mathfrak{sl}(2,\mathbb{C})_{\mathbb{C}} = \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So Where Does $SU(2)$ Come In? So we're looking for complex representations of $\mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$. But these just come from a tensor product of two representations of $\mathfrak{sl}(2,\mathbb{C})$. 
These are usually labelled by a pair of numbers, like so $$|\psi \rangle \textrm{ lives in the } (i,j) \textrm{ representation of } \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So what are the possible representations of $\mathfrak{sl}(2,\mathbb{C})$? Here we can use our fact again. It turns out that $\mathfrak{sl}(2,\mathbb{C})$ is the complexification of $\mathfrak{su}(2)$. But we know that the real representations of $\mathfrak{su}(2)$ are the spin representations! So really the numbers $i$ and $j$ label the angular momentum and spin of particles. From this perspective you can see that spin is a consequence of special relativity! What about Compactness? This tortuous journey shows you that things aren't really as simple as Ryder makes out. You are absolutely right that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) \neq \mathfrak{so}(3,1)$$ since the LHS is compact but the RHS isn't! But my arguments above show that compactness is not a property that survives the complexification procedure. It's my "fact" above that ties everything together. Interestingly in Euclidean signature one does have that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) = \mathfrak{so}(4)$$ You may know that QFT is closely related to statistical physics via Wick rotation. So this observation demonstrates that Ryder's intuitive story is good, even if his mathematical claim is imprecise. Let me know if you need any more help!
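To make the complexified splitting concrete (my addition to the answer): writing $J_i$ for the rotation generators and $K_i$ for the boost generators of the Lorentz algebra, the complex combinations

```latex
N_i^{\pm} = \tfrac{1}{2}\left(J_i \pm i K_i\right), \qquad
[N_i^{+},\,N_j^{+}] = i\,\epsilon_{ijk}\,N_k^{+}, \quad
[N_i^{-},\,N_j^{-}] = i\,\epsilon_{ijk}\,N_k^{-}, \quad
[N_i^{+},\,N_j^{-}] = 0
```

show that each set $N^{+}$ and $N^{-}$ separately satisfies the $\mathfrak{su}(2)$ commutation relations, and the two sets commute with each other. That is precisely the advertised direct-sum structure, and it also makes clear why the split only exists after complexification: the combinations $J_i \pm iK_i$ require complex coefficients.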
Let $Y_1,\ldots, Y_n$ be independent with $Y_i \sim N(x_i\theta,1)$, i.e. each $Y_i$ has mean $x_i\theta$, for known constants $x_1,\ldots,x_n$. In a previous section of this exercise I found that the Cramér–Rao lower bound for estimating $\theta$ is equal to $$1\left/\left(\sum_{i=1}^n x_i^2\right) \right.$$ Now I must find an (UMVU) estimator whose variance equals this lower bound. I found the unbiased estimator $$T=\frac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n x_i}$$ Indeed: $$\operatorname{E}[T]=\frac{1}{\sum_{i=1}^n x_i} \cdot \sum_{i=1}^n \operatorname{E}[Y_i]=\frac{1}{\sum_{i=1}^n x_i} \cdot \sum_{i=1}^n x_i\theta = \theta$$ However, if I'm not mistaken, the variance is $$\operatorname{Var}[T]=\frac{1}{\left(\sum_{i=1}^n x_i \right)^2} \cdot \sum_{i=1}^n\operatorname{Var}[Y_i]=\frac{n}{\left(\sum_{i=1}^n x_i\right)^2}$$ since all the $Y_i$ are independent and all have variance $1$. I suppose this variance does not actually equal the required lower bound. Do I have to use another estimator, or is there actually a way to rewrite the variance above so that it's clearly equal to the lower bound? Or did I just make an error in calculation...
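A Monte Carlo sketch (my own illustrative choices of $x_i$ and $\theta$, not from the exercise) that checks the computed variance of $T$ against the Cramér–Rao bound:

```python
import random

# Illustrative setup (assumed values): n = 3 known constants and a true theta.
random.seed(0)
x = [1.0, 2.0, 3.0]
theta = 2.0
M = 200_000  # number of simulated data sets

samples = []
for _ in range(M):
    # Y_i ~ N(x_i * theta, 1), independent
    y = [xi * theta + random.gauss(0.0, 1.0) for xi in x]
    samples.append(sum(y) / sum(x))  # the estimator T from the question

mean_T = sum(samples) / M
var_T = sum((t - mean_T) ** 2 for t in samples) / (M - 1)

var_analytic = len(x) / sum(x) ** 2         # n / (sum x_i)^2 = 3/36
cr_bound = 1 / sum(xi**2 for xi in x)       # 1 / (sum x_i^2) = 1/14
print(mean_T, var_T, var_analytic, cr_bound)
```

With these numbers the simulated variance of $T$ matches the analytic $n/(\sum x_i)^2 = 1/12$ and sits strictly above the bound $1/\sum x_i^2 = 1/14$, which is consistent with the suspicion in the question that this particular unbiased estimator does not attain the bound.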
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta|$ < 0.8 ...