I know that this answer makes a lot of assumptions, but it at least generalizes your algorithm: Suppose that $\{A_n\}$, $\{B_n\}$, and the seeding matrix, $V_N$, all form a commuting family of normal matrices, where the eigenvalue decompositions of $\{A_n\}$ and $\{B_n\}$ are known a priori, say $U' V_N U = \Lambda_N$, $U' A_n U = \Omega_n$, and $U' B_n U = \Delta_n$, where $U$ is unitary and $\Lambda_N$, $\{\Omega_n\}$, and $\{\Delta_n\}$ are complex-valued diagonal matrices. Once we have said decomposition, by induction, $$V_n = (I - B_n V_{n+1})^{-1} A_n = (I - U \Delta_n U' U\Lambda_{n+1} U')^{-1} U \Omega_n U',$$ which can be rearranged into the form $$V_n = U (I - \Delta_n \Lambda_{n+1})^{-1} \Omega_n U' \equiv U \Lambda_n U',$$ where $\Lambda_n$ is of course still diagonal, so the entire family $\{V_n\}$ will necessarily commute with the other operators, and we have shown that the diagonal values of each $\Lambda_n$ are decoupled, so your fast scalar recursion formula can be applied independently on the eigenvalues of $V_N$ and the coefficient matrices. Note that a special case is when $A_n \equiv \alpha_n I$ and $B_n \equiv \beta_n I$, so that the only requirement is that $V_N$ be a normal matrix.
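A numerical sanity check of this reduction, using random real symmetric matrices sharing an eigenbasis (my own test sizes and value ranges, chosen so the denominators stay away from zero — not data from the question):

```python
# If A_n, B_n, V_N share an eigenbasis U, the matrix recursion
# V_n = (I - B_n V_{n+1})^{-1} A_n reduces to independent scalar
# recursions lambda_n = omega_n / (1 - delta_n * lambda_{n+1}).
import numpy as np

rng = np.random.default_rng(0)
k, steps = 4, 5
U, _ = np.linalg.qr(rng.normal(size=(k, k)))   # shared orthonormal eigenbasis

def sym(d):                                     # build U diag(d) U^T
    return U @ np.diag(d) @ U.T

omegas = rng.uniform(0.1, 0.5, size=(steps, k))  # eigenvalues of A_n
deltas = rng.uniform(0.1, 0.5, size=(steps, k))  # eigenvalues of B_n
lam = rng.uniform(0.1, 0.5, size=k)              # eigenvalues of V_N
V = sym(lam)

for n in reversed(range(steps)):
    V = np.linalg.solve(np.eye(k) - sym(deltas[n]) @ V, sym(omegas[n]))
    lam = omegas[n] / (1.0 - deltas[n] * lam)    # decoupled scalar recursion

assert np.allclose(V, sym(lam))                  # matrix and scalar paths agree
```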
1d diffusion equation

Integrating the diffusion equation, $$ \frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2}, $$ with a constant diffusion coefficient $D$ using forward Euler in time and a finite difference approximation in space, $$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta x^2} ( u_{i+1}^{t} + u_{i-1}^{t} - 2u_i^{t} ), $$ conserves $\bar{u}=\sum_i \Delta x\, u_i$ over time (see animation 1 and figure 1), because reflective Neumann boundary conditions, $\partial_x u=0$, are employed at the borders (forward differences: $u_i = u_{i \pm 1}$): $$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta x^2} \cdot ( u_{i\pm1}^{t} - u_{i}^{t} ) $$ Space and time are discretized with $\Delta x=0.01$ and $\Delta t=10^{-7}$. The diffusion coefficient is $D=10$.

radial diffusion equation

In 2d polar coordinates $(r,\phi)$, the Laplacian is given by: $$ \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial^2 u}{\partial \phi^2}. $$ In case of an axisymmetric distribution $u(r,\phi)=u(r)$, the Laplacian reduces to: $$ \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} $$ Forward Euler and finite difference approximation with central differences for the advective term leads to (derivation): $$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \big( u_{i+1}^{t} + u_{i-1}^{t} - 2u_i^{t} \big) + D\frac{\Delta t}{2 r_i \Delta r} \big( u_{i+1}^{t} - u_{i-1}^{t} \big) \\ = u_i^t + D\frac{\Delta t}{\Delta r^2} \big( (1+0.5/i)u_{i+1}^{t} + (1-0.5/i)u_{i-1}^{t} -2u_i^{t} \big), $$ where $r_i = i\,\Delta r$. The same approximation is obtained with the Finite Volume Method (derivation).
The boundary condition at the origin exploits the rotational symmetry and thus is $\partial_r u = 0$, leading to (central differences: $u_{i-1}=u_{i+1}$) $$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \cdot 4( u_{i+1}^{t} - u_i^{t} ) $$ At the other boundary, Neumann conditions $\partial_r u = 0$ are employed and realized as (central differences: $u_{i-1}=u_{i+1}$): $$ u_i^{t+1} = u_i^t + D\frac{\Delta t}{\Delta r^2} \cdot 2( u_{i-1}^{t} - u_{i}^{t} ) $$ However, there is a problem: conservation of $u$ is violated (see animation 2 and figure 2). Over time the total amount of $u$ grows, as calculated via $$ 2\pi \int_\Omega u(r)\, r\, dr \approx 2\pi \sum_{i=0}^{n} u_i\; (i \,\Delta r) \; \cdot \; \Delta r, $$ where $(i \,\Delta r)$ is the discretized volume element from 2d polar coordinates. The dashed line gives the analytical result and the solid line the numerical result. Numerical parameters $\Delta r=\Delta x$, $\Delta t$ and $D$ are as before.

Question

Why is conservation of $u$ violated when going from the 1d system to the 1d radial system (reduced from 2d polar coordinates)? What can I do to regain conservation? Finite volume methods lead to the same discretization scheme as shown above. Robin boundaries (as used for conservation in advection equations) are not applicable, since the flux $j$ at the boundaries is given by $j=r\partial_r u$, which leads to the already employed Neumann boundary conditions $\partial_r u=0$ at $r=0$ and $r=r_{end}$.

Edit: While testing a few things, I came up with an MWE in C++ for others interested to try. Compilation instructions are given at the top.
It reproduces the conservation violation problem :(

// test program to check conservation in radial diffusion
// compilation: g++ -Wall radial_diffusion.cpp -std=c++11 -fopenmp -O3 -o radial_diffusion.exe
#include <fstream>   // std::ofstream
#include <string>    // std::string
#include <cmath>     // exp, M_PI
#include <cstdio>    // printf, fopen
#include <utility>   // std::swap

int main(){
    // initialize variables
    std::string name = "/tmp/u_sum.dat";
    int nx = 1000;
    size_t nsteps = 100000;
    double d = 10;
    double dr = 0.01;
    double dt = 1e-7;
    double diffcoeff = d*dt/(dr*dr);
    double *u = new double[nx];
    double *unew = new double[nx];
    double *usum = new double[nsteps]();
    double usum0;

    // initial condition: Gaussian centred at the origin
    float mu_x = 0*nx;       // mean x
    float xsigma = 0.01*nx;  // variance x
    for(int x=0; x<nx; x++)
        u[x] = exp(-0.25*((x-mu_x)*(x-mu_x)/(xsigma*xsigma)));

    // time evolution
    for(size_t step=0; step<nsteps; step++){
        #pragma omp parallel for
        for(int x=0; x<nx; x++){
            int left  = x-1;
            int right = x+1;
            // central differences; l'Hospital at r->0 with du/dr -> 0
            if(x==0){
                unew[x] = u[x] + diffcoeff*( 4*(u[right] - u[x]) );
            }else if(x==nx-1){
                unew[x] = u[x] + diffcoeff*( 2*(u[left] - u[x]) );
            }else{
                unew[x] = u[x] + diffcoeff*( u[right]*(1+0.5/x) + u[left]*(1-0.5/x) - 2*u[x] );
            }
        }
        // sum up to check conservation
        usum0 = 0;
        for(int x=0; x<nx; x++) usum0 += 2*M_PI*dr*dr*x*unew[x];
        if(!(step%10000)) printf("%12.12f\n", usum0);
        for(int x=0; x<nx; x++) usum[step] += 2*M_PI*dr*dr*x*unew[x];
        // update u, unew
        std::swap(u, unew);
    }

    // save results
    FILE *fp;
    if((fp=fopen(name.c_str(), "w"))==NULL){
        printf("Cannot open file.\n");
    }
    for(size_t step=0; step<nsteps; step++) fprintf(fp, "%12.12f\n", usum[step]);
    fclose(fp);

    // free memory
    delete[] u;
    delete[] unew;
    delete[] usum;
}

Update: In the paper "High-order schemes for cylindrical/spherical geometries with cylindrical/spherical symmetry" by Wang et al.
(2013), the authors propose the following scheme: $$ \frac{d u_i}{dt} = \frac{1}{\Delta V} (r_{i+\frac{1}{2}}j_{i+\frac{1}{2}} - r_{i-\frac{1}{2}}j_{i-\frac{1}{2}}), \\ \Delta V = \frac{1}{2}(r^2_{i+\frac{1}{2}} - r^2_{i-\frac{1}{2}}), \\ j_i = \nabla u_i = \frac{1}{2\Delta r}(u_{i+1} - u_{i-1}). $$ The half index, $i+\frac{1}{2}$, implies an arithmetic mean. Is this scheme applicable here? Is the gradient discretized correctly?
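A flux (finite-volume) form of exactly this kind does restore conservation: with zero fluxes at both ends, the cell totals telescope. Below is a minimal sketch under my own conventions (face-centred gradients for the flux rather than the paper's central-difference $j_i$; cell 0 spans $[0, \Delta r/2]$), not the paper's scheme verbatim:

```python
# Conservative finite-volume update for u_t = D (1/r) d/dr (r du/dr).
# Cell i spans [r_{i-1/2}, r_{i+1/2}]; zero flux at r=0 and at the outer
# wall, so sum_i u_i * dV_i changes only by round-off.
import numpy as np

def radial_geometry(n, dr):
    """Face radii and per-radian cell volumes dV_i = (r_{i+1/2}^2 - r_{i-1/2}^2)/2."""
    r_face = np.maximum(np.arange(n + 1) - 0.5, 0.0) * dr
    dV = 0.5 * (r_face[1:]**2 - r_face[:-1]**2)
    return r_face, dV

def step(u, r_face, dV, D, dr, dt):
    """One forward-Euler update in flux form; boundary fluxes are zero."""
    flux = np.zeros(u.size + 1)
    flux[1:-1] = r_face[1:-1] * D * np.diff(u) / dr   # r * D * du/dr at faces
    return u + dt * np.diff(flux) / dV

n, D, dr, dt = 200, 10.0, 0.01, 1e-7
r_face, dV = radial_geometry(n, dr)
u = np.exp(-(np.arange(n) * dr / 0.1)**2)             # Gaussian near the axis
total0 = (u * dV).sum()
for _ in range(2000):
    u = step(u, r_face, dV, D, dr, dt)
assert abs((u * dV).sum() / total0 - 1.0) < 1e-9      # conserved to round-off
```

For interior cells this reproduces the $(1\pm0.5/i)$ stencil from the question, but the origin and outer cells get the geometric volume factors that the pointwise Neumann updates are missing, which is where the conservation error comes from.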
Introduction

Theoretically speaking, an optical fiber with a circular core has no birefringence, and the polarization state in such an optical fiber does not change during propagation. In reality, however, a small amount of birefringence is always present in an optical fiber due to external perturbations (load, bend, etc.) or manufacturing imperfections. Such birefringence is inherently random, and thus causes random power coupling between the two polarization modes (polarization crosstalk) in an optical fiber. The output polarization state therefore becomes unpredictable and also varies with time. A Polarization-Maintaining Fiber (PM Fiber, PMF) maintains two polarization modes by intentionally inducing uniform birefringence along the entire fiber length, thereby prohibiting random power coupling between the two polarization modes. Examples of PMFs are schematically shown in Figure 1. Larger birefringence better prohibits the power coupling, and is thus one of the key characteristics of a PMF. A PMF is also called a birefringent fiber.

Figure 1: Schematic of PMFs. (Non-PMF is also shown for comparison.)

Birefringence can be induced by introducing twofold (or less) rotational symmetry, either in the refractive-index profile or in the stress distribution. The former is called form birefringence and the latter stress birefringence. PMFs are an essential device for guiding polarized light, and various types of PMFs are used for optical communication, fiber lasers, fiber sensors, etc.

NOTE: This article only deals with a linearly birefringent PMF, in which two linear (i.e. x- and y-) polarization states are maintained. This is the most common form of PMF, and it is well accepted to call this type of fiber simply a PMF.

Form birefringence

Form birefringence originates from a vector electromagnetic effect in optical fibers possessing twofold (or less) rotational symmetry.
The simplest way to introduce twofold rotational symmetry in the refractive-index profile is to make the shape of the core elliptical. An example of an elliptical-core fiber is shown in Figure 2.

Figure 2: Cross section of elliptical core PMF. (FiberLabs ZSF-2.2×5.5/125-N-PM ZBLAN fluoride fiber)

Stress birefringence

Stress birefringence in optical fibers is thermally induced. Figure 3 schematically shows examples of PMFs based on stress birefringence, most commonly known as PANDA (*) fiber and Bow-Tie fiber. The fiber has two stress-applying parts which are positioned on both sides of the core and have a thermal expansion coefficient different from the rest of the fiber. The stress-applying parts change volume at a different rate compared to the rest of the fiber when the fiber cools down to room temperature after fiber drawing; this induces a large stress in the core of the fiber.

(*) "PANDA" stands for Polarization-maintaining AND Absorption reducing, and is also named after its similarity to the panda bear.

Figure 3: Schematic of PANDA and Bow-Tie fiber.

Key optical characteristics

Birefringence (modal birefringence)

The difference in propagation constants \(\Delta\beta\) between the two polarization modes is called modal birefringence (\(\mathrm{B_m}\)); modal birefringence is usually normalized so that it is dimensionless, and given as: \( \mathrm{B_m} = \displaystyle{\frac{\Delta\beta}{k_0}} \), where \(k_0=2\pi/\lambda_0\) (\(\lambda_0\): wavelength in vacuum). Large modal birefringence reduces polarization crosstalk, and thus enables a better ability to maintain polarization modes. PMFs typically exhibit modal birefringence larger than \(10^{-4}\).

Beat length

Two polarization modes have different propagation constants in a PMF. The beat length (\(\mathrm{L_B}\)) is the length over which the accumulated phase difference reaches 2π, and is given as: \( \mathrm{L_B} = \displaystyle{\frac{2\pi}{\Delta\beta}} = \displaystyle{\frac{\lambda_0}{\mathrm{B_m}}} \).
Beat length is another way to quantify the amount of birefringence, and is inversely proportional to it: the larger the birefringence, the shorter the beat length.

Slow axis and fast axis

There are two polarization modes in a PMF: one polarized along the x-axis and the other polarized along the y-axis. The propagation constants of these two modes are different. The polarization direction of the mode with the larger propagation constant is called the slow axis, as the phase velocity of this mode is slower than that of the other; the polarization direction of the mode with the smaller propagation constant is called the fast axis. Figure 4 shows the slow axis and fast axis of an elliptical-core fiber and a PANDA fiber. The polarization mode polarized along the slow axis is usually better confined and more robust against external perturbations, and thus is more often used.

Figure 4: Slow axis and fast axis of elliptical-core and PANDA fiber.

Polarization Crosstalk

Quantifying polarization crosstalk is one of the most important tasks in the characterization of a PMF. It is usually measured by launching linearly polarized light aligned to one of the polarization axes and measuring the crosstalk at the output (see Figure 5), and is given by the following formula: \( \mathrm{Crosstalk\ (dB)} = 10\log_{10} \left( \displaystyle{\frac{P_1}{P_0}} \right) \), where \(P_0\) is the power of the main polarization mode (at the output), and \(P_1\) is the power of the unwanted polarization mode produced by polarization crosstalk.

Figure 5: Schematic of polarization crosstalk measurement.

Reference

26(8), 488–490 (2001). [http://doi.org/10.1364/OL.26.000488]
1(2), 332–339 (1983). [http://doi.org/10.1109/JLT.1983.1072123]
33(11), 3235–3243 (1962). [http://doi.org/10.1063/1.1931144]
17(1), 15–22 (1981). [http://doi.org/10.1109/JQE.1981.1070626]
19(7), 246–247 (1983). [http://doi.org/10.1049/el:19830169]
4(8), 1071–1089 (1986). [http://doi.org/10.1109/JLT.1986.1074847]
17(10), 2123–2129 (1981) [http://doi.org/10.1109/JQE.1981.1070652].
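The characteristic formulas above can be sanity-checked with illustrative numbers (assumed values, not from a specific fiber datasheet):

```python
# Beat length L_B = lambda_0 / B_m and crosstalk in dB, for assumed values.
import math

lam0 = 1550e-9                     # vacuum wavelength: 1550 nm
Bm = 5e-4                          # assumed modal birefringence (PMFs: > 1e-4)
LB = lam0 / Bm                     # beat length in metres
assert abs(LB - 3.1e-3) < 1e-4     # about 3.1 mm

P0, P1 = 1.0, 1e-3                 # main and unwanted polarization mode powers
crosstalk_db = 10 * math.log10(P1 / P0)
assert abs(crosstalk_db + 30.0) < 1e-9   # -30 dB crosstalk
```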
Hilmar's answer is of course perfectly correct, but I think there are several points that Lyons did not address in the statement quoted by the OP (or maybe he talked about them previously and chose not to repeat himself in the paragraph quoted by the OP). The Discrete Fourier Transform (DFT) is commonly described as a transformation of a sequence $(x[0], x[1], \ldots, x[N-1])$ of finite length $N$ into another sequence $(X[0], X[1], \ldots, X[N-1])$ of length $N$, where $$\begin{align*}X[m] &= \sum_{k=0}^{N-1} x[k]\exp\left(\frac{-j2\pi mk}{N}\right), ~ m = 0, 1, \ldots, N-1,\\x[n] &= \frac{1}{N}\sum_{m=0}^{N-1} X[m]\exp\left(\frac{j2\pi nm}{N}\right), ~ n = 0, 1, \ldots, N-1.\end{align*}$$ But these formulas can also be used when $m, n$ are outside the range $[0, N-1]$, and if we do so, we come to the conclusion that the length-$N$ DFT can be viewed as a transformation from a periodic sequence $x[\cdot]$ to another periodic sequence $X[\cdot]$, both extending to infinity in both directions, and that $(x[0], x[1], \ldots, x[N-1])$ and $(X[0], X[1], \ldots, X[N-1])$ are just one period of these infinitely long sequences. Note that we are insisting that $x[n+iN] = x[n]$ and $X[m+iN] = X[m]$ for all $m, n,$ and $i$. This is, of course, not how data are often handled in practice. We may have a very long sequence of samples, and we break them up into blocks of suitable length $N$. We calculate the DFT of $(x[0], x[1], \ldots, x[N-1])$ as $$X^{(0)}[m] = \sum_{k=0}^{N-1} x[k]\exp\left(\frac{-j2\pi mk}{N}\right), ~ m = 0, 1, \ldots, N-1,$$ the DFT of the next chunk $(x[N], x[N+1], \ldots, x[2N-1])$ as $$X^{(1)}[m] = \sum_{k=0}^{N-1} x[k+N]\exp\left(\frac{-j2\pi mk}{N}\right), ~ m = 0, 1, \ldots, N-1,$$ the DFT of the previous chunk $(x[-N], x[-N+1], \ldots, x[-1])$ as $$X^{(-1)}[m] = \sum_{k=0}^{N-1} x[k-N]\exp\left(\frac{-j2\pi mk}{N}\right), ~ m = 0, 1, \ldots, N-1,$$ etc., and then we play with these various DFTs of the various chunks into which we have subdivided our data.
Of course, if the data are in fact periodic with period $N$, all these DFTs will be the same. Now, when Lyons talks of ...where the input index n is defined over both positive and negative values... he is talking of the periodic case, and when he says that a (real) even function has the property $x[n] = x[-n]$, this property must hold for all integers $n$. Since periodicity also applies, we have not only that $x[-1] = x[1]$ but $x[-1] = x[-1+N] = x[N-1]$, and similarly, $x[-n] = x[n] = x[N-n]$. In other words, the real even sequence $(x[0], x[1], \ldots, x[N-1])$ whose DFT is a real even sequence (as stated by Lyons and explained very nicely by Hilmar) is necessarily of the form $$(x[0], x[1], \ldots, x[N-1]) = (x[0], x[1], x[2], x[3], \ldots, x[3], x[2], x[1]),$$ which is (apart from the leading $x[0]$) a palindromic sequence. If you are partitioning your data into blocks of length $N$ and computing the DFT of each block separately, then these separate DFTs will not have the symmetry properties described above unless the DFT is of a block with this palindromic property.
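The palindrome claim is easy to verify numerically (a small numpy check with my own example sequence):

```python
# A real sequence with x[n] = x[N-n] (indices mod N) has a real, even DFT.
import numpy as np

N = 8
x = np.array([5.0, 3.0, 1.0, 2.0, 7.0, 2.0, 1.0, 3.0])  # x[0], then palindrome
X = np.fft.fft(x)
assert np.allclose(X.imag, 0)                # the DFT is purely real
assert np.allclose(X, np.roll(X[::-1], 1))   # and even: X[m] = X[N-m]
```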
My dynamics lecture notes state that the Earth's equatorial bulge can be approximated as: $$ \approx \frac{\Omega^2R}{g} \approx \frac{1}{300} $$ (Do they mean R/300?) They also include statements like: "The Earth's oblateness $ \frac{I_3-I_1}{I_1}\approx \frac{\Omega^2R}{g} \approx \frac{1}{300} $". Could anybody explain where this comes from? I have tried several ways to balance the gravitational and centripetal forces, but I never seem to arrive at the correct result and cannot get my head around this concept.
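Plugging in standard values (sidereal day, mean radius, surface gravity — my numbers, not the notes'), the dimensionless ratio $\Omega^2 R/g$ indeed comes out near $1/300$; the bulge itself, the excess of the equatorial over the polar radius, is of order $R/300 \approx 21$ km:

```python
# Centripetal acceleration at the equator divided by surface gravity.
import math

Omega = 2 * math.pi / 86164   # Earth's rotation rate (rad/s, sidereal day)
R = 6.371e6                   # mean radius (m)
g = 9.81                      # surface gravity (m/s^2)
ratio = Omega**2 * R / g      # centripetal / gravitational acceleration
print(1 / ratio)              # about 290, i.e. roughly 1/300
```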
In many papers the contact resistance of a metal in contact with a semiconductor is given in units of $\Omega~\mu m$, for example in the paper by Li et al. (Appl. Phys. Lett. 102 (2013), p. 183110): $R_C$ for contacts formed to epitaxial graphene on SiC have been reported to be less than 100 $\Omega~\mu m$ and with specific contact resistivity ($\rho_c$) of order $10^{-7} \Omega~cm^2$. I'm not sure where this unit comes from. From the transfer length method one can determine the contact resistance as a value in $\Omega$. Normalizing it to the contact area, however, should bring the unit to $\frac{\Omega}{\mu m^2}$, i.e. it should be divided by the area rather than multiplied by a length. How does one arrive at the unit $\Omega~\mu m$ for the contact resistance? What assumptions regarding the geometry of the contact enter into this? Why is the correct unit not just $\frac{\Omega}{\mu m^2}$?
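The unit comes from normalizing by contact width rather than area: due to current crowding, current enters the contact only within a transfer length $L_T$ of its edge, so $R_C$ scales like $1/W$ and the width-normalized product $R_C \cdot W$ (in $\Omega~\mu m$) is the geometry-independent figure. A sketch of the standard transmission-line relations (assumed sheet resistance, long-contact limit $\coth(L/L_T)\approx 1$; these numbers are mine, not the paper's):

```python
# Width-normalized contact resistance Rc*W = sqrt(rho_c * R_sh) (long contact),
# transfer length L_T = sqrt(rho_c / R_sh); units kept in Ohm and micrometres.
import math

rho_c = 1e-7 * 1e8      # specific contact resistivity: 1e-7 Ohm*cm^2 -> 10 Ohm*um^2
R_sh = 300.0            # assumed semiconductor sheet resistance (Ohm/square)
L_T = math.sqrt(rho_c / R_sh)     # transfer length, ~0.18 um
RcW = math.sqrt(rho_c * R_sh)     # ~55 Ohm*um, consistent with "< 100 Ohm*um"
```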
Consider the following plot, courtesy of this page: Regarding the $y$-axis, how does this "expected return" relate to the "instantaneous expected return" in a geometric Brownian motion (GBM)? E.g., assume each stock price follows $dS(t) = \mu S(t) dt + \sigma S(t) dW(t)$, and so $S(t) = S(0)\exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma \sqrt{t} Z\right)$ where $Z \sim \mathcal{N}(0,1)$. Then I would calculate the (annual) expected return as $$ \mathrm{E}\left[\frac{S(1)}{S(0)} - 1\right] = \mathrm{E}\left[\exp\left(\mu - \frac{\sigma^2}{2} + \sigma Z\right)\right] - 1 = \exp\left(\mu - \frac{\sigma^2}{2} + \frac{\sigma^2}{2}\right) - 1 = e^\mu - 1, $$ where the second equality is from the moment-generating function of a normal random variable. Take Portfolio A in the plot and suppose it's just a single stock, driven by the GBM above with instantaneous rate of return $\mu$. Portfolio A has an "expected return" of $8\%$. So, which of the following (if any) do we mean? $e^\mu - 1 = 8\%$ $\mu = 8\%$
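A quick Monte-Carlo check of the algebra (my own parameter choices; $\sigma = 20\%$ is illustrative): with $\mu$ as the instantaneous drift, the one-year simple expected return is $e^\mu - 1 \approx 8.33\%$, slightly above $\mu = 8\%$:

```python
# Simulate S(1)/S(0) = exp(mu - sigma^2/2 + sigma*Z) and compare the sample
# mean return against the closed form e^mu - 1.
import math
import random

random.seed(0)
mu, sigma, n = 0.08, 0.20, 200_000
total = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    total += math.exp(mu - 0.5 * sigma**2 + sigma * z)   # gross return S(1)/S(0)
mc_return = total / n - 1
exact = math.exp(mu) - 1                                  # about 0.0833
assert abs(mc_return - exact) < 0.01
```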
I need to find a Givens rotation which zeros a matrix entry when multiplied from the right side. I already looked at this topic https://math.stackexchange.com/questions/945736/givens-rotation-from-right-side but could not really understand the process of getting there. Some more details: I have two matrices, A and B, and I need to get their eigenvalues using the QZ algorithm. I bring B to triangular form using Givens rotations from the left. These transformations are applied to A from the left side, too. After B is in triangular form, I want to get A into triangular form, too. Therefore I need Givens rotations from the right, so that they do not destroy the zeros of matrix B. (Or is there another possibility to do this?) As an equation: $$\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot\begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix}ae+bg&af+bh\\ce+dg&cf+dh\end{bmatrix}$$ with $$ce+dg=0.$$ How do I find appropriate $e,f,g,h$? Can anybody provide me some hints on how to achieve this? Thank you very much in advance!

EDIT: I'm pretty sure it works like this: $$\begin{bmatrix} a&b \end{bmatrix}\cdot\begin{bmatrix} c&-s\\ s&c \end{bmatrix}=\begin{bmatrix} 0&r \end{bmatrix}$$ with $$r = \sqrt{a^2+b^2}, \quad c = \frac{b}{r}, \quad s = \frac{-a}{r}.$$ Can anyone confirm this?

EDIT2: Consider a simple 3x3 matrix which I am trying to triangularize using these Givens rotations from the right. I can zero out elements (3,1) and (2,1) using the same rotation matrix $G$ with different parameters $c$ and $s$. So I can create a zero like this $\begin{bmatrix}*&*&*\\*&*&*\\a&b&*\end{bmatrix}\cdot\begin{bmatrix}c&-s&0\\s&c&0\\0&0&1\end{bmatrix}=\begin{bmatrix}*&*&*\\*&*&*\\0&*&*\end{bmatrix}$ BUT also like that $\begin{bmatrix}*&*&*\\a&b&*\\*&*&*\end{bmatrix}\cdot\begin{bmatrix}c&-s&0\\s&c&0\\0&0&1\end{bmatrix}=\begin{bmatrix}*&*&*\\0&*&*\\*&*&*\end{bmatrix}$ for the same rotation matrix $G$.
This makes it impossible for me to zero out the elements iteratively, as the second rotation will destroy the zero created by the first one. It is no problem to zero out all elements in the last row except the one in the bottom-right corner (as each zero is created by a different rotation matrix $G$). But how can I manage to zero out the elements mentioned in my example, when using the same rotation matrix $G$ destroys already created zero elements? Please, any help is greatly appreciated!
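For what it's worth, the formula in the EDIT checks out numerically (a small numpy sketch with my own helper name):

```python
# Right-multiplied Givens rotation zeroing the first entry of a row vector:
# [a, b] @ G = [0, r] with c = b/r, s = -a/r, r = hypot(a, b).
import numpy as np

def right_givens(a, b):
    """G such that [a, b] @ G = [0, hypot(a, b)]."""
    r = np.hypot(a, b)
    c, s = b / r, -a / r
    return np.array([[c, -s], [s, c]])

a, b = 3.0, 4.0
G = right_givens(a, b)
row = np.array([a, b]) @ G
assert abs(row[0]) < 1e-12 and abs(row[1] - 5.0) < 1e-12
```

On the EDIT2 problem: a right rotation in the $(j,k)$ column plane recombines only columns $j$ and $k$, so a zero at position $(i,j)$ survives a later rotation only if row $i$ is zero in both affected columns. One ordering that respects this (as in an RQ factorization) processes rows from the bottom up and annihilates the entry in column $j$ of row $i$ with a rotation acting on columns $(j,i)$: all rows below $i$ are already zero in both of those columns, so their zeros are preserved.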
Edit: According to comments of Eric Wofsey and Yemon Choi I edit the question. For a (compact) topological space $X$, we put $A=\{f:X\to \mathbb{C}\mid f\text{ is bounded}\}$. We define a semi-norm on $A$ by $\parallel f \parallel=\parallel \omega_{f}\parallel_{\infty}$, where $\omega_{f}(x)$ is the oscillation of $f$ at $x$. It defines a norm on $A/C(X)$. The completion of this normed space is denoted by $DC(X)$. How can $DC(X\times Y)$ be written in terms of $DC(X)$ and $DC(Y)$? (An appropriate completion of the algebraic tensor product $DC(X)\otimes DC(Y)$? Or something else?) Note: We cannot expect to have an algebra structure on $DC(X)$, since $C(X)$ is not an ideal. But in the following particular case we may expect a Banach algebra structure: when $X$ is a compact topological group with its Haar measure, $A$ is the algebra of bounded measurable functions with the convolution product, and again we consider the oscillation norm. For abelian $X$, it would be interesting to study the Gelfand spectrum of the resulting commutative Banach algebra. In particular, what is the resulting algebra and its Gelfand spectrum in the particular case $X=S^{1}$?
Tasks

The tasks of the challenge are two-fold:

To segment vertebrae from the given spine images, which include fractured and non-fractured cases, and provide vertebra segmentation results in the form of corresponding masks.
To classify vertebrae from the given spine images into fractured and non-fractured cases along with specific morphological grades and cases of vertebral fractures, and provide fracture classification results in the form of corresponding fracture scores.

Participants are invited to develop automated or semi-automated computer-assisted algorithms to solve both tasks or an individual task, and to submit their results through this website. The results for the vertebra segmentation task and the fracture classification task will be evaluated and ranked separately, so that the tasks can be approached individually or jointly; a high-quality vertebra segmentation does not necessarily imply a high-quality fracture classification, and vice versa.

Rules

This is an open online challenge, meaning that anyone can participate by entering the challenge (open), and that results are regularly updated and posted through this website (online). If a sufficient number of participants enter the challenge, the organizers may decide to proceed to a live challenge with corresponding paper submissions within a major medical imaging conference (e.g. MICCAI, ISBI, SPIE MI) and/or prepare a joint journal paper summarizing the challenge outcomes for a high-impact journal in the corresponding field (e.g. IEEE TMI, MedIA). The following rules apply:

Anyone can download the database without registering. However, we kindly ask users to register with their contact information before downloading the database, so that we can keep track of its usage. Moreover, each registered user will be informed via e-mail about future developments and news related to the challenge.
Contributions to the challenge are not limited to new and unpublished methods, which means that the application of existing methods is allowed. Participants agree that they will specifically describe any manual operations in their contributions, which may influence their final ranking. Registering with contact information is mandatory at result submission; however, participation in the challenge is anonymous to the extent that the identities of participants are known to the organizers only. When submitting results, each participant will be given a unique ID, the so-called participant ID (PID), by which the participant will be identified in the challenge results posted online. Online challenge results will be reported by PID in the form of ranks only. Ranking will be performed separately for the vertebra segmentation task and the fracture classification task. Along with the PID, each participant will also receive a password for accessing a more detailed report of the corresponding results. Participants that originate from the same group can submit at most two (2) substantially different contributions to the challenge. Participants are allowed to resubmit an improved version of their existing contributions twice (2×), meaning that a maximum of three (3) versions per contribution are allowed; the contribution version resulting in the highest rank will be assigned to the corresponding participant. If the challenge organizers decide to proceed to a live challenge and/or prepare a joint journal paper, they reserve the right to decline selected participants (e.g. participants with trivial contributions producing very poor results, etc.). Accepted participants must agree to disclose their identity and provide a short description of the contributed method. For the joint journal paper, a maximum of two (2) co-authors per contribution will be allowed.
Evaluation Metrics

Evaluation of the results will be performed on the basis of the submitted results, separately for the vertebra segmentation and fracture classification tasks.

Vertebra Segmentation Evaluation Metrics

For vertebra segmentation, the following metrics will be considered to evaluate the quality of volume masks \(M_{seg}\) of segmented vertebrae against corresponding reference volume masks \(M_{ref}\):

DSC

The Dice similarity coefficient (DSC) is defined as: $$DSC = \frac{2N(M_{seg} \cap M_{ref})}{N(M_{seg})+N(M_{ref})}$$ where \(N(M_{seg} \cap M_{ref})\) is the number of voxels in the overlap between volume masks \(M_{seg}\) and \(M_{ref}\), \(N(M_{seg})\) is the number of voxels in the segmented volume mask \(M_{seg}\), and \(N(M_{ref})\) is the number of voxels in the reference volume mask \(M_{ref}\).

MSSD

The mean symmetric surface distance (MSSD) is defined as: $$MSSD = \frac{1}{N(M_{seg}) + N(M_{ref})}\left(\sum_{i \in M_{seg}}\min_{j \in M_{ref}}d(i,j) + \sum_{j \in M_{ref}}\min_{i \in M_{seg}}d(j,i)\right)$$ where \(\min_{j \in M_{ref}}d(i,j)\) is the Euclidean distance from surface voxel \(i\) in the segmented volume mask \(M_{seg}\) to the closest surface voxel \(j\) in the reference volume mask \(M_{ref}\), and \(\min_{i \in M_{seg}}d(j,i)\) is the Euclidean distance from surface voxel \(j\) in the reference volume mask \(M_{ref}\) to the closest surface voxel \(i\) in the segmented volume mask \(M_{seg}\).
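A toy check of the DSC formula on small boolean volumes (illustrative masks, not challenge data); MSSD would additionally require surface-voxel extraction and nearest-neighbour distances, which are omitted here:

```python
# Dice similarity coefficient of two boolean 3-D masks.
import numpy as np

def dsc(seg, ref):
    """DSC = 2 * |seg & ref| / (|seg| + |ref|)."""
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

seg = np.zeros((8, 8, 8), dtype=bool); seg[2:6, 2:6, 2:6] = True
ref = np.zeros((8, 8, 8), dtype=bool); ref[3:7, 2:6, 2:6] = True
# overlap: 3*4*4 = 48 voxels, each mask has 64 -> DSC = 96/128 = 0.75
assert dsc(seg, ref) == 0.75
```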
Fracture Classification Evaluation Metrics

For fracture classification, the following metrics will be considered to evaluate the correctness of detected fracture scores \(S_{det} = (g_{det},c_{det})\) against corresponding reference fracture scores \(S_{ref} = (g_{ref},c_{ref})\), where \(g_{det}\) and \(g_{ref}\) are morphological grades, and \(c_{det}\) and \(c_{ref}\) are morphological cases of vertebral fractures (if either \(g=0\) or \(c=0\), the only possible score is \(S=(0,0)\), representing a non-fractured vertebra):

MSPP

The shortest path penalty (SPP) is defined as the sum of individual penalties accumulated along the shortest path from the detected score \(S_{det}\) to the corresponding reference score \(S_{ref}\). Each individual penalty equals 1, representing one change in morphological grade and/or case on the way from the detected score to the corresponding reference score. Over all vertebrae, the mean shortest path penalty (MSPP) is therefore computed as: $$MSPP = \frac{\sum\left(|g_{det}-g_{ref}| + [c_{det} \neq c_{ref}]\right)}{\sum[c_{ref} \geq 0]}$$ where \([x]=1\) if condition \(x\) is satisfied, and \([x]=0\) otherwise. For example, for a vertebra with the detected score \(S_{det}=(1,3)\) (mild crush fracture) and the corresponding reference score \(S_{ref}=(2,1)\) (moderate wedge fracture), the penalty is \(SPP=2\).
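A toy computation of MSPP (made-up scores, not challenge data), reproducing the worked example above:

```python
# MSPP over (grade, case) score pairs; the denominator counts all reference
# vertebrae, since c_ref >= 0 always holds.
def mspp(detected, reference):
    pen = sum(abs(gd - gr) + (cd != cr)
              for (gd, cd), (gr, cr) in zip(detected, reference))
    return pen / len(reference)

det = [(1, 3), (0, 0), (2, 2)]
ref = [(2, 1), (0, 0), (2, 2)]
# first vertebra: |1-2| + [3 != 1] = 2 (the example in the text); others 0
assert mspp(det, ref) == 2 / 3
```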
F-score

The F-score quantitatively describes the quality of a binary classification and is defined as: $$F = \frac{2 \cdot PPV \cdot TPR}{PPV + TPR}$$ where the positive predictive value (PPV), also known as precision, is defined as the ratio of all correctly detected fractured vertebrae, and the true positive rate (TPR), also known as sensitivity or recall, is defined as the ratio of fractured vertebrae that are correctly detected as such: $$\begin{split} PPV &= \frac{TP}{TP + FP} \\[0.5em] TPR &= \frac{TP}{TP + FN} \end{split}$$ where true positives \(TP = \sum\left([c_{det} > 0]\cdot[c_{ref} > 0]\right)\) represent the number of fractured vertebrae that are correctly detected as fractured, false negatives \(FN = \sum\left([c_{det} = 0]\cdot[c_{ref} > 0]\right)\) represent the number of fractured vertebrae that are incorrectly detected as non-fractured, true negatives \(TN = \sum\left([c_{det} = 0]\cdot[c_{ref} = 0]\right)\) represent the number of non-fractured vertebrae that are correctly detected as non-fractured, and false positives \(FP = \sum\left([c_{det} > 0]\cdot[c_{ref} = 0]\right)\) represent the number of non-fractured vertebrae that are incorrectly detected as fractured. Positives \(P=TP+FN=\sum[c_{ref}>0]\) represent the number of all fractured vertebrae, and negatives \(N=FP+TN=\sum[c_{ref}=0]\) represent the number of all non-fractured vertebrae, where \([x]=1\) if condition \(x\) is satisfied, and \([x]=0\) otherwise.

Ranking

Ranking of the results will be performed on the basis of the evaluation metrics, separately for vertebra segmentation and fracture classification.
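Looking back at the F-score defined above, a toy computation on made-up case labels (fractured iff \(c > 0\); not challenge data):

```python
# F-score from detected and reference case labels.
def f_score(c_det, c_ref):
    tp = sum(d > 0 and r > 0 for d, r in zip(c_det, c_ref))
    fp = sum(d > 0 and r == 0 for d, r in zip(c_det, c_ref))
    fn = sum(d == 0 and r > 0 for d, r in zip(c_det, c_ref))
    ppv = tp / (tp + fp)          # precision
    tpr = tp / (tp + fn)          # sensitivity / recall
    return 2 * ppv * tpr / (ppv + tpr)

# 2 true positives, 1 false positive, 1 false negative -> PPV = TPR = 2/3
assert abs(f_score([1, 2, 3, 0, 0], [1, 1, 0, 2, 0]) - 2 / 3) < 1e-12
```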
Vertebra Segmentation Ranking

Ranking of vertebra segmentation results will be performed in the following order:

for each segmented vertebra in each spine image, the DSC and MSSD values will be computed for each participant,
then, for each segmented vertebra in each spine image, ranks for the DSC and MSSD values will be computed across all participants (if DSC is zero, the participant will be attributed the lowest rank for both DSC and MSSD),
then, for each segmented vertebra in each spine image, the mean of the DSC and MSSD ranks will be computed for each participant, resulting in the participant's rank for that vertebra,
then, the mean of the participant's ranks across all vertebrae will be computed, resulting in the participant's final vertebra segmentation rank.

Ranks for the DSC and MSSD values are natural numbers, while all remaining ranks, including the participant's final vertebra segmentation rank, are rational numbers.

Fracture Classification Ranking

Ranking of fracture classification results will be performed in the following order:

for all classified vertebrae, the MSPP and F-score values will be computed for each participant,
then, ranks for the MSPP and F-score values will be computed across all participants,
then, the mean of the MSPP and F-score ranks will be computed for each participant, resulting in the participant's final fracture classification rank.

Ranks for the MSPP and F-score values are natural numbers, while the participant's final fracture classification rank is a rational number.
Consider the preliminary question of getting a sequence of $N$ heads out of $k$ throws, with probability $p(N,k)$. This is given by the recurrence formula $$ p(N,k) = \begin{cases} 0 &\text{if } k<N\\ \dfrac{1}{2^N} + \sum_{m=0}^{\min(N-1,k-N-1)} \dfrac{1}{2^N}\dfrac{1}{2} + \sum_{m=N}^{k-N-1} \{1-p(N,m)\} \dfrac{1}{2^N}\dfrac{1}{2} &\text{else} \end{cases} $$ (with the convention that sums are equal to zero if the upper bound is strictly less than the lower bound). Indeed, my reasoning is that the first consecutive N heads have to be preceded by a tail and no other consecutive N heads before. If there are $m<N$ steps before, this is not possible so any sequence of $m$ throws can occur. When $m\ge N$, it is possible and has to be excluded. Next, the probability of getting the first consecutive N heads in $m\ge N$ throws is$$q(N,m) =\begin{cases} \dfrac{1}{2^N} &\text{if }m=N\\ \dfrac{1}{2^N}\dfrac{1}{2} &\text{if } N<m<2N+1\\ \{1-p(N,m-N-1)\} \dfrac{1}{2^N}\dfrac{1}{2} &\text{else} \end{cases}$$ Now, the probability to get $M$ heads first and $N$ heads in $m\ge N$ throws is $$ r(M,N,m) = \begin{cases} \dfrac{1}{2^N} &\text{if }m=N\ \end{cases} $$
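The closed form for \(q(N,m)\) can be checked by brute-force enumeration over all \(2^k\) coin sequences (a sketch with small \(k\); it verifies \(q\), on which the recursion for \(p\) is built):

```python
# Probability that the FIRST run of N consecutive heads ends exactly at
# throw m, estimated by exhaustive enumeration (keep k small: 2^k sequences).
from itertools import product

def first_run_end(seq, N):
    """1-based throw index where the first run of N heads completes, else None."""
    run = 0
    for i, h in enumerate(seq):
        run = run + 1 if h else 0
        if run == N:
            return i + 1
    return None

def q_bruteforce(N, m, k):
    seqs = list(product([0, 1], repeat=k))
    hits = sum(first_run_end(s, N) == m for s in seqs)
    return hits / len(seqs)

N = 3
assert q_bruteforce(N, N, 10) == 1 / 2**N            # m = N: 1/2^N
assert q_bruteforce(N, N + 2, 10) == 1 / 2**(N + 1)  # N < m < 2N+1: 1/2^(N+1)
```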
Siril processing tutorial Convert your images to the FITS format Siril uses (image import) Work on a sequence of converted images Pre-processing images Registration (Global star alignment) → Stacking Stacking The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only the selected images, or the best images according to the FWHM value computed previously. Siril proposes several algorithms for stacking computation. Sum Stacking This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing. Average Stacking With Rejection Percentile Clipping: this is a one-step rejection algorithm, ideal for small sets of data (up to 6 images). Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is larger than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]). Median Sigma Clipping: this is the same algorithm, except that the rejected pixels are replaced by the median value of the stack. Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2]. Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations. These rejection algorithms are very efficient at removing satellite/plane tracks. Median Stacking This method is mostly used for dark/flat/offset stacking.
The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math]. Pixel Maximum Stacking This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater. Pixel Minimum Stacking This algorithm is mainly used for cropping a sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if their intensity is lower. In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]). The output console thus gives the following result: 22:26:06: Pixel rejection in channel #0: 0.215% - 1.401% 22:26:06: Pixel rejection in channel #1: 0.185% - 1.273% 22:26:06: Pixel rejection in channel #2: 0.133% - 1.150% 22:26:06: Integration of 12 images: 22:26:06: Normalization ............. additive + scaling 22:26:06: Pixel rejection ........... Winsorized sigma clipping 22:26:06: Rejection parameters ...... low=4.000 high=3.000 22:26:09: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels 22:26:19: Background noise value (channel: #0): 10.013 (1.528e-04) 22:26:19: Background noise value (channel: #1): 6.755 (1.031e-04) 22:26:19: Background noise value (channel: #2): 6.621 (1.010e-04) After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display modes. In our example the file is the stacked result of all files, i.e., 12 files. The images above picture the result in Siril using the Auto-Stretch rendering mode.
Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The measured increase in SNR is [math]21/5.1 \approx 4.1[/math], to be compared with the theoretical [math]\sqrt{12} \approx 3.46[/math]; you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Now the processing of the image can start, with a crop, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril: Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley Juan Conejero, ImageIntegration, PixInsight Tutorial
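For readers who want to experiment outside Siril, here is a rough NumPy sketch of average stacking with sigma-clip rejection. This is only an illustration of the idea, not Siril's actual implementation; in particular it ignores Winsorization and normalization:

```python
import numpy as np

# Per pixel: iteratively reject values farther than sigma_low / sigma_high
# standard deviations below / above the median, then average the survivors.
def sigma_clip_stack(stack, sigma_low=4.0, sigma_high=3.0, iters=3):
    stack = np.asarray(stack, dtype=float)     # shape (n_images, H, W)
    mask = np.ones_like(stack, dtype=bool)     # True = pixel value kept
    for _ in range(iters):
        data = np.where(mask, stack, np.nan)
        med = np.nanmedian(data, axis=0)
        std = np.nanstd(data, axis=0)
        mask &= (stack >= med - sigma_low * std) & (stack <= med + sigma_high * std)
    return np.nanmean(np.where(mask, stack, np.nan), axis=0)
```

A single bright outlier (such as a satellite track pixel) in a stack of otherwise consistent frames is rejected, and the averaged output stays at the background level.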
For each of the functions you mention, the answer to "why it's useful" will be somewhat different. In general, if a function appears in many unrelated contexts, that's a sure sign that it's useful. Think of other mathematical structures that appear in many unrelated contexts: the natural numbers, the real numbers, the complex numbers, polynomials, trigonometric functions, etc. My understanding is that in the olden days, the second half of a complex analysis course - after the theory has been developed - was a detailed study of special functions. (At least this may have been the case in courses for physicists.) That doesn't seem to be so common anymore. Some remarks about specific functions that you mention: Gamma function: Just as it turned out to be very convenient to allow the $x$ in $a^x$ to take non-natural-number values, it is convenient to allow the $n$ in $n!$ to do so. To give but one small example of an occurrence of the Gamma function in the answer to a naturally occurring question, the volume of an $n$-dimensional ball is$$V_n(r)=\frac{\pi^{n/2}r^n}{\Gamma\left(\frac{n}{2}+1\right)}.$$This example may not be altogether convincing, because the Gamma function can be eliminated in favor of more elementary functions, but it does have the advantage of providing a uniform formula for odd and even $n$, which the more elementary formulas do not. This is just the tip of the iceberg as far as appearances of the Gamma function in mathematics and physics go. Functions defined in order to compute integrals: If the $\ln$ function were not already known, it would have been necessary to invent it in order to carry out $\int\frac{1}{x}\,dx$. The inverse of the $\ln$ function would have then led to the exponential function. Similarly, if the inverse trigonometric functions, $\arctan$, $\arcsin$, were not already known, it would have been necessary to invent them in order to carry out integrals such as $\int\frac{1}{\sqrt{1-x^2}}\,dx$, $\int\frac{1}{1+x^2}\,dx$.
This would then have led to the creation of $\tan$ and $\sin$. In the case of elliptic functions, this actually is what occurred. The computation of the arc length of an ellipse (which is of interest in the study of planetary motion) leads to an integral that cannot be expressed in terms of elementary functions. This led to the definition of "elliptic integrals", whose inverses are elliptic functions, to which the theta functions are closely related. Elliptic functions have the remarkable property of double periodicity. That is, they are periodic in two directions in the complex plane. You can imagine how this might be useful in the study of planar lattices, for example. But more generally, any time you have a function whose domain of definition is the torus, you will find that elliptic functions come into play. Certain complex algebraic curves of degree three are topologically tori; elliptic functions can be used to parametrize positions on such curves, much as $\sin$ and $\cos$ parametrize positions on the circle. Solutions to differential equations: One of the first classes of differential equations you come to whose solutions cannot be expressed in terms of elementary functions are second order linear differential equations whose coefficients are polynomials of low degree. Hence the Airy functions, Bessel functions, hypergeometric functions, and so on. The simpler the differential equation, the more often it will come up. The differential equations that lead to the above functions arise repeatedly in classical mechanics, electromagnetism, and quantum mechanics. Orthogonal polynomials: Principles of linear algebra can be brought to bear on the study of spaces of functions. Just as it is useful to write down an orthonormal basis for $\mathbf{R}^n$, it is useful to write down a basis for certain spaces of functions. Depending on the domain and symmetries of the functions of interest, different orthogonal polynomials will come into play. 
Just as periodic functions may be expanded as Fourier series in $\sin$ and $\cos$, series with other symmetries may be expanded in terms of other types of functions. Often the coefficients of the lowest order terms in the expansion have clear physical meaning. You mention the Chebyshev polynomials. Another example is the Zernike polynomials, whose domain of definition is the disk, of which an example is the pupil of the eye. In optometry, the coefficients of some of the low order polynomials relate to certain refractive errors in the eye. In physics and chemistry, the spherical harmonics are used to describe atomic orbitals. They are useful in many other problems with spherical symmetry, including celestial mechanics.
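As a quick numerical check of the ball-volume formula quoted above, one can evaluate it with the standard-library Gamma function (the helper name is mine):

```python
import math

# V_n(r) = pi**(n/2) * r**n / Gamma(n/2 + 1), as quoted in the text.
def ball_volume(n, r=1.0):
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)
```

For $n=1,2,3$ and $r=1$ it reproduces the elementary values $2$, $\pi$, and $\frac{4}{3}\pi$ from one uniform formula, illustrating the point about odd and even $n$.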
A density matrix view of the problem: When $p_\mu$ is near zero, one is considering a momentum near zero and the discrete lattice works fine. It's when $p_\mu$ is near $\pm \pi/a$, or more generally, near $n\pi/a$ for $n\neq 0$ an integer, that one finds problems. Instead of having very large momenta, these values of $p_\mu$ essentially give momenta as small as those near $p_\mu=0$, but with signs in the $\gamma$ matrices negated. This is a type of aliasing problem. As with the usual aliasing, the problem at $\pm \pi/a$ goes away when one makes the sample rate faster, that is, replaces $a$ with a smaller value. And just as with aliasing, making $a$ smaller does not eliminate aliasing entirely, but instead pushes the problem to a higher frequency. This drawing shows the usual aliasing effect. Note that the high frequency black signal (which corresponds to a high momentum) appears as a low frequency red signal (with low momentum): The difference with the usual aliasing is what happens when $p_\mu = (2n+1)\pi/a$. These are the values that give a continuous Dirac equation with negated gamma matrices. To understand better what is going on here, let's consider the density matrix form. Density matrices avoid unphysical complex phases. I'll work in 3+1 dimensions. One obtains a density matrix $\rho$ by multiplying a ket by a bra: $$\rho = |a\rangle\langle a|.$$The reason there are four spin-1/2 particles in 3+1 dimensions is that there are four primitive (i.e. trace=1) solutions to the idempotency equation: $$\rho^2 = \rho$$ One has latitude in how one chooses the four states. In general, one chooses two elements of the Dirac algebra that (a) square to unity, (b) commute, and (c) are independent. These are called a "complete set of commuting roots of unity." The commuting roots of unity are operators; one chooses them according to what operators one wishes to diagonalize. 
Following the wikipedia article on the construction of Dirac spinors, if we choose z-spin and charge $Q$, our complete set of commuting roots of unity is: $$\sigma_z = i\gamma^1\gamma^2,\;\;\; Q = -\gamma^0$$ The four independent states are then: $$\rho = (1\pm \sigma_z)(1\pm Q)/4.$$To get the spinors from a density matrix, one chooses a nonzero column and normalizes. Thus spinors and density matrices are alternative mathematical representations of wave functions; neither is more fundamental. If we discretize the density matrix, we will end up with the usual aliasing problem. From the point of view of lattice type calculations this is acceptable; there will be no duplicated particles. But spinors carry an extra degree of freedom; the arbitrary complex phase. This makes their aliasing behavior more complicated. So consider what happens for a frequency a little larger than $\pi/2$. In the following illustration we color every other sample red or blue: In the above, the density matrix will see this frequency appropriately as a high frequency. But with a spinor, we have arbitrary complex phase freedom. So we can negate the blue dots; the result is a low frequency. Thus the arbitrary complex phases of spinors naturally give aliasing problems at half frequencies.
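The lattice aliasing described above is easy to demonstrate numerically: two momenta differing by $2\pi/a$ produce identical samples at the lattice sites, so the lattice cannot tell them apart (the numbers below are arbitrary illustrative choices):

```python
import numpy as np

# On a lattice with spacing a, the sites are x = n*a, and
# cos((p + 2*pi/a) * n * a) = cos(p*n*a + 2*pi*n) = cos(p*n*a):
# the "high momentum" mode is sampled identically to the low one.
a = 0.1                                      # lattice spacing (arbitrary)
n = np.arange(32)                            # lattice site indices
p = 0.3 * np.pi / a                          # a momentum inside the first zone
low = np.cos(p * n * a)
high = np.cos((p + 2 * np.pi / a) * n * a)   # momentum shifted by 2*pi/a
assert np.allclose(low, high)
```

This is exactly the sample-rate statement in the text: shrinking $a$ pushes the ambiguity to higher momenta but never removes it.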
Let's consider a rigid body B and an arbitrary reference point P ${\it fixed}$ with respect to the body (P can be a particle of the body or it can be a "mathematical point" outside the body but rigidly attached to B; it doesn't matter). As B moves, the point P has a velocity ${\bf v}_P$ that, of course, can change over time. The most general motion of B is the composition of a rigid translation with velocity ${\bf v}_P$ and a rotation with angular velocity $\vec \omega$ around an axis that passes through the point P (this result is known as Chasles' theorem). Accordingly, the velocity of any point A of the body is \begin{equation}{\bf v}_A = {\bf v}_P + {\vec \omega}\wedge \left( {\bf r}_A - {\bf r}_P\right),\;\;\;\;\;\;\;\;\;\;\;\; (1)\end{equation} where ${\bf r}_A$, ${\bf r}_P$ are the position vectors of the points A and P with respect to a given reference frame $S.$ This is a kinematic result; it holds even if the reference frame is non-inertial. When we compute the kinetic energy with respect to $S$, $$T=\frac{1}{2}\sum_{i\in B}m_i {\bf v}_i^2,$$ using Eq. (1), $T$ turns out to be the sum of three terms: one involving only the rigid translation (${\bf v}_P$), another involving only the rotation ($\vec\omega$), and a third one that mixes translation and rotation. In the special case where the center of mass is taken as the point P, the last term vanishes and, then, the kinetic energy is the sum of the translational kinetic energy (the body moving as a whole with the center of mass velocity) and the rotational kinetic energy (the body rotating around its center of mass):$$T= \frac{1}{2}M{\bf v}_{CM}^2 + \frac{1}{2}\sum_{i \in B} m_i \left(\vec \omega \wedge ({\bf r}_i-{\bf r}_{CM})\right)^2.\;\;\;\;(2)$$In the case of a rotation around a fixed direction (like in your problem), the last term can be rewritten as $$T_{rot} = \frac{1}{2}I^{cm}\omega^2,$$where $I^{cm}$ is the body's moment of inertia with respect to the rotation axis that passes through the center of mass of the body.
If one point of the body is fixed with respect to $S,$ all the kinetic energy has a rotational character (see Eq. (1), with ${\bf v}_P={\bf 0}$), $$T = T_{rot} = \frac{1}{2}I^{P}\omega^2,\;\;\;\;(3)$$where now $I^{P}$ is the body's moment of inertia with respect to an axis that passes through the point P (again we are considering rotation around a fixed direction). Let's go to your problem. With respect to the rod, we can take the pivot as the reference point $P$ (this is convenient, as the pivot is fixed with respect to a laboratory frame). So, the kinetic energy of the rod is (see Eq. (3)) $$T_{rod}= \frac{1}{2}I_{rod}^P \dot{\theta}^2,$$ where, for a uniform rod, $I^P_{rod} = \frac{1}{3}M_{rod} L_{rod}^2,$ and $\theta$ is the angle between the rod and the vertical direction (you can recalculate the rod's kinetic energy taking its center of mass as the reference point $P$!). With respect to the square block, it is convenient to take its center of mass as the reference point $P$. In this case, the velocity of the center of mass is $|{\bf v}^{cm}_{block}| = L_{rod} |\dot\theta|$, while its angular velocity will depend on whether this block is allowed to rotate independently of the rod or not. In the first case, if we call $\phi$ the angle that determines the block's orientation, the total block kinetic energy is (see Eq. (2)) $$T_{block} = \frac{1}{2}M_{block} L_{rod}^2 \dot\theta^2 + \frac{1}{2}I^{cm}_{block}\dot\phi^2.$$ If the square block is rigidly tied to the rod (that is, it cannot rotate independently), its angular velocity will be the same as the one for the rod ($\dot \theta$). So, in the last equation you should replace $\dot\phi$ by $\dot\theta$. Don't forget to include the potential energy in the Lagrangian function.
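The value $I^P_{rod}=\frac{1}{3}M_{rod}L_{rod}^2$ used above can be checked by discretizing the rod into point masses and summing $\frac{1}{2}m_i {\bf v}_i^2$ directly (the sample values are arbitrary):

```python
import numpy as np

# Discretize a uniform rod pivoted at one end into N point masses at
# distances r_i from the pivot and sum the kinetic energies
# (1/2) m_i (omega * r_i)**2; compare with (1/2)(M L^2/3) omega^2.
M, L, omega = 2.0, 1.5, 0.8            # assumed sample values
N = 200_000
r = (np.arange(N) + 0.5) * (L / N)     # midpoints of N equal segments
m = M / N                              # mass of each segment
T_sum = 0.5 * np.sum(m * (omega * r) ** 2)
T_formula = 0.5 * (M * L**2 / 3) * omega**2
assert abs(T_sum - T_formula) / T_formula < 1e-6
```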
I am reading a paper on the stability of CG, and I came across the following statement: \begin{equation} \frac{\|A\|\,\|p\|^2}{\langle p,Ap\rangle} \leq \kappa(A) \end{equation} where $\kappa(\cdot)$ is the condition number and $\langle \cdot, \cdot \rangle$ the inner product. Can you please help me understand this bound? I cannot see how it was derived. The authors simply state it as a fact. What obvious fact am I missing? Here $p$ is the search direction vector, and $A$ is the SPD matrix. Thank you! Here is the paper: http://www.cs.cas.cz/tichy/download/public/StTi2004.pdf The statement is on page 13, at the bottom, on the first line after equation 4.27: "Using (4.5)–(4.8) and ...."
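Not an answer, but a quick numerical sanity check (my own sketch, not from the paper). For SPD $A$ one has $\langle p,Ap\rangle \ge \lambda_{\min}\|p\|^2$ and $\|A\|_2=\lambda_{\max}$, so the ratio never exceeds $\lambda_{\max}/\lambda_{\min}=\kappa(A)$:

```python
import numpy as np

# Verify ||A|| ||p||^2 / <p, A p> <= kappa(A) for a random SPD matrix
# and many random search directions p.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)            # SPD by construction
eigs = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
kappa = eigs[-1] / eigs[0]             # 2-norm condition number
for _ in range(100):
    p = rng.standard_normal(6)
    lhs = eigs[-1] * (p @ p) / (p @ (A @ p))   # ||A||_2 = largest eigenvalue
    assert lhs <= kappa + 1e-12
```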
Let \(\gamma(G)\) and \(\iota(G)\) be the domination and independent domination numbers of a graph \(G\), respectively. In this paper, we define the Price of Independence of a graph \(G\) as the ratio \(\frac{\iota(G)}{\gamma(G)}\). Firstly, we bound the Price of Independence by values depending on the number of vertices. Secondly, we consider the question of computing the Price of Independence of a given graph. Unfortunately, the decision version of this question is \(\Theta_2^p\)-complete. The class \(\Theta_2^p\) is the class of decision problems solvable in polynomial time, for which we can make \(O(\log(n))\) queries to an NP-oracle. Finally, we restore the true characterization of domination perfect graphs, i.e. graphs whose Price of Independence is \(1\) for all induced subgraphs, and we propose a conjecture on further problems. Published October 2017, 13 pages
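To make the two invariants in the ratio concrete, here is a brute-force sketch (my own illustration, exponential-time, only sensible for tiny graphs given as adjacency dicts):

```python
from itertools import combinations

# S dominates G if every vertex is in S or has a neighbour in S.
def dominating(G, S):
    return all(v in S or any(u in S for u in G[v]) for v in G)

# S is independent if no two of its vertices are adjacent.
def independent(G, S):
    return all(u not in G[v] for u, v in combinations(S, 2))

# gamma(G): size of a smallest dominating set.
def gamma(G):
    for k in range(1, len(G) + 1):
        if any(dominating(G, set(S)) for S in combinations(G, k)):
            return k

# iota(G): size of a smallest independent dominating set.
def iota(G):
    for k in range(1, len(G) + 1):
        if any(dominating(G, set(S)) and independent(G, set(S))
               for S in combinations(G, k)):
            return k
```

The Price of Independence of a small graph is then simply `iota(G) / gamma(G)`.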
Journal of Mathematics of Kyoto University J. Math. Kyoto Univ. Volume 39, Number 4 (1999), 649-673. Perturbation theorems for supercontractive semigroups Abstract Let $\mu$ be a probability measure on a Riemannian manifold. It is known that if the semigroup $e^{-t\nabla *\nabla}$ is hypercontractive, then any function $g$ for which $\|\nabla g\|_{\infty}\leq 1$ will satisfy a Herbst inequality, $\int \exp (\alpha g^{2})d\mu < \infty$, for small $\alpha > 0$. If the semigroup is supercontractive, then the above inequality will hold for all $\alpha > 0$. For any $\alpha > 0$ for which $Z = \int \exp(\alpha g^{2})d\mu < \infty$, we define a measure $\mu _{g}$ by $d\mu _{g}=Z^{-1}\exp (\alpha g^{2})d\mu$. We show that if $\mu$ is hyper- or supercontractive, then so is $\mu _{g}$. Moreover, under standard conditions on logarithmic Sobolev inequalities which yield ultracontractivity of the semigroup, Gross and Rothaus have shown that $Z = \int \exp (\alpha g^{2}|\log |g||^{c})d\mu < \infty$ for some constants $\alpha ,c$. We in addition show that the perturbed measure $d\mu _{g} = Z^{-1} \exp (\alpha g^{2}|\log |g||^{c})d\mu$ is ultracontractive. First available in Project Euclid: 17 August 2009. Permanent link: https://projecteuclid.org/euclid.kjm/1250517819 Digital Object Identifier: doi:10.1215/kjm/1250517819 Mathematical Reviews number (MathSciNet): MR1740196 Zentralblatt MATH identifier: 0981.47029 Citation: Lewkeeratiyutkul, Wicharn. Perturbation theorems for supercontractive semigroups. J. Math. Kyoto Univ. 39 (1999), no. 4, 649-673.
State A state is something that a system is in. The system is where I perform my measurement; the state is the result of that measurement. When I perform one kind of a measurement, and then I perform another kind of a measurement, the two results are correlated. In particular, the first result may determine the probability that the second measurement will yield a specific value, i.e., that the system will be in a specific state with respect to the second measurement, as opposed to some other state. The abstract symbol for a state \(x\) is \(|x\rangle\). This is just a label; it is not a number. The symbol for the transition from state \(x\) to state \(y\) is \(\langle y|x\rangle\). This, actually, is a number! Amplitude Specifically, it is a complex number called the amplitude. The reason why it is a complex number is experimental: we found that if there are two possible ways for a system to reach state \(y\) starting from state \(x\), it is not the probabilities, but these complex numbers that will need to be summed: \[\langle y|x\rangle=\langle y|x\rangle_{\text{way }1}+\langle y|x\rangle_{\text{way }2}.\] Probability The actual probability that the system in state \(x\) will also be in state \(y\) is computed as the square of the absolute value of the complex number: \[P(x\rightarrow y)=|\langle y|x\rangle|^2.\] Base states It is assumed as an axiom that any state can be expressed as a sum of base states: \[|x\rangle=\sum\limits_i|i\rangle\langle i|x\rangle.\] What we know about the base states is that they are orthogonal: \[\langle i|j\rangle=\delta_{ij}.\] State vector The contribution of each base state \(|i\rangle\) to \(|x\rangle\) is characterized by \(\langle i|x\rangle\), which is just a complex number. The state \(|x\rangle\) can, therefore, be viewed as a vector that is expressed in terms of base vectors \(|i\rangle\) in some complex vector space.
The transition amplitude from state \(x\) to state \(y\) can be expressed through a set of base states as: \[\langle y|x\rangle=\sum\limits_i\langle y|i\rangle\langle i|x\rangle.\qquad(1)\] The amplitude for a system in state \(x\) to be found in state \(x\) is unity: \[\langle x|x\rangle=\sum\limits_i\langle x|i\rangle\langle i|x\rangle=1.\] The probability that a system in state \(x\) is found in some base state is also unity: \[\sum\limits_i|\langle i|x\rangle|^2=\sum\limits_i\langle i|x\rangle\langle i|x\rangle^\star=1.\] From this one can see that \[\langle i|x\rangle=\langle x|i\rangle^\star.\] And since any state \(y\) can be a base state in some set of base states, it is true in general that \[\langle y|x\rangle=\langle x|y\rangle^\star.\] Operators When you do something to a system, you change its state. This is expressed by an operator acting on that state: \[|y\rangle=\hat A|x\rangle.\] This is defined to mean the following: \[\hat A|x\rangle=\sum\limits_{ij}|i\rangle\langle i|\hat A|j\rangle\langle j|x\rangle,\] which means that \(\hat A\) is just a collection of matrix elements \(A_{ij}\), expressed with respect to some set of base states. Expectation value A measurement may be expressed in the form of an operator. If this is the case, the average (expectation) value of that measurement can be expressed as: \[A_\mathrm{av}=\langle x|\hat A|x\rangle,\] which really is just shorthand for \[A_\mathrm{av}=\sum\limits_{ij}\langle x|i\rangle\langle i|\hat A|j\rangle\langle j|x\rangle.\] Wave function What if there is an infinite number of states? For instance, a particle's position \(l\) along a line may be expressed in terms of the set of individual positions as base states. But there is an infinite number of such positions possible. Thus, our sum \[|l\rangle=\sum\limits_x|x\rangle\langle x|l\rangle\] becomes instead the integral \[|l\rangle=\int|x\rangle\langle x|l\rangle dx.\] This equation has little practical meaning since \(|x\rangle\) is just an abstract symbol.
However, the probability that a system in state \(l\) is later found in state \(k\), previously expressed as the sum (1): \[\langle k|l\rangle=\sum\limits_x\langle k|x\rangle\langle x|l\rangle,\] is now the integral \[\langle k|l\rangle=\int\langle k|x\rangle\langle x|l\rangle dx.\] Both \(\langle k|x\rangle\) and \(\langle x|l\rangle\) are just complex numbers; complex-valued functions, in fact, of the continuous variable \(x\): \[\langle k|x\rangle=\langle x|k\rangle^\star=\phi^\star(x),\] \[\langle x|l\rangle=\psi(x).\] These functions are called wave functions mainly because they typically appear in the form of periodic complex-valued functions. With their help, the transition amplitude can now be expressed as: \[\langle k|l\rangle=\int\phi^\star(x)\psi(x)dx\] (the transition probability being the squared modulus of this number), and the expectation value of an operator can be written as \[A_\mathrm{av}=\int\phi^\star(x)\hat A\phi(x)dx.\qquad(2)\] Algebraic operator In this context, \(\hat A\) no longer works as a matrix operator converting a state vector into another state vector, but as an algebraic operator converting a wave function into another wave function. How the matrix operator, expressed in terms of states and amplitudes, and the algebraic operator, expressed usually as a differential operator, relate to each other is another question! Position operator If we know the probability \(P(x)\) that a particle will be at position \(x\), we can compute the average position of the particle after many measurements as follows: \[\bar x=\int xP(x)dx.\] But this is the same as \[\int x\phi^\star(x)\phi(x)dx=\int\phi^\star(x)x\phi(x)dx,\] which is formally identical to the expectation value (2) for a measurement that can be expressed in the form of an operator \(\hat x\). In other words, \(\hat x\) can be viewed as the position operator. When the base states are positions, the position operator is just a multiplication of the wave function by \(x\).
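As a concrete illustration of the position expectation value above, here is a small numerical check with a Gaussian wave packet (the parameter values are arbitrary):

```python
import numpy as np

# <x> = integral of phi*(x) x phi(x) dx for a normalized Gaussian packet
# centered at x0; the result should come out equal to x0.
x0, sigma = 1.5, 0.3
dx = 1e-3
x = np.arange(-10, 10, dx)
phi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2))
norm = np.sum(np.abs(phi) ** 2) * dx           # normalization, should be ~1
x_avg = np.sum(np.conj(phi) * x * phi).real * dx
assert abs(norm - 1) < 1e-6
assert abs(x_avg - x0) < 1e-6
```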
Momentum operator The same computation can be performed for the momentum, using as base states states of definite momentum: \[\int p\phi^\star(p)\phi(p)dp=\int\phi^\star(p)p\phi(p)dp.\] The question is: can the momentum operator be expressed in terms of base states of position? The amplitude for a system, which is in state \(\beta\), to be found in a state of definite momentum \(p\), is just the definite integral \[\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\beta\rangle dx.\] The relationship between position and momentum, specifically the amplitude for a particle to be found at position \(x\) after it has been measured to have momentum \(p\), is assumed to be \[\langle p|x\rangle=e^{-ipx/\hbar}.\] So our integral becomes \[\langle p|\beta\rangle=\int\limits_{-\infty}^\infty e^{-ipx/\hbar}\langle x|\beta\rangle dx.\qquad(3)\] Now let's use, as \(|\beta\rangle\), the state \(\hat p|k\rangle\) (it can be any state after all) where \(\langle x|k\rangle=\phi(x)\), and the \(k\)'s are assumed to be states of definite momentum. This way, our earlier expression becomes the expectation value of the momentum: \[\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\beta\rangle dx=\int\limits_{-\infty}^\infty\langle p|x\rangle\langle x|\hat p|k\rangle dx=\int\limits_{-\infty}^\infty\langle p|x\rangle p\langle x|k\rangle dx.\] Then \(\langle x|\beta\rangle\) is just \(p\langle x|k\rangle=p\phi(x)\), and: \[\langle p|\beta\rangle=\int\limits_{-\infty}^\infty e^{-ipx/\hbar}p\phi(x)dx.\] The integral can be computed by observing that \(de^{-ipx/\hbar}/dx=(-i/\hbar)pe^{-ipx/\hbar}\), integrating by parts, and assuming that \(\phi(x)=0\) when \(x=\pm\infty\): \[\langle p|\beta\rangle=\frac{\hbar}{i}\int\limits_{-\infty}^\infty e^{-ipx/\hbar}\frac{\partial\phi}{\partial x}dx,\] so, from (3): \[\langle x|\beta\rangle=\frac{\hbar}{i}\frac{\partial\phi}{\partial x},\] and we now have an expression for the momentum operator \(\hat p\): \[\hat p=\frac{\hbar}{i}\frac{\partial}{\partial x}.\] Time displacement How does a system evolve over time?
Let's consider the time displacement operator \(\hat U(t_1,t_2)\): \[\langle\chi|\hat U(t_1,t_2)|\phi\rangle.\] S-matrix When \(t_1\rightarrow-\infty\) and \(t_2\rightarrow+\infty\), we call \(\hat U(t_1,t_2)\) the S-matrix. Making \(t_1=t\) and \(t_2=t+\Delta t\), observing that when \(\Delta t=0\), \(U_{ij}\) (in some coordinate representation) must be \(\delta_{ij}\), and assuming that for small \(\Delta t\), the change in \(\phi\) will be linear, we get: \[U_{ij}=\delta_{ij}-\frac{i}{\hbar}H_{ij}(t)\Delta t\] (the factor \(-i/\hbar\) is introduced for reasons of convenience). In other words, the difference between the wave functions of the two states can be expressed as: \[\phi'-\phi=-\frac{i}{\hbar}\Delta t\bar H\phi,\] or, dividing by \(\Delta t\) and recognizing the left-hand side as a time differential: \[i\hbar\frac{\partial\phi}{\partial t}=\bar H\phi.\] Schrödinger equation Schrödinger, that kind chap, then just decided to use in place of \(\hat H\) an operator that he concocted on the basis of the classical expression for energy: \[E=\frac{p^2}{2m}+V.\] His equation: \[i\hbar\frac{\partial\phi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\phi+V\phi\] describes the wave function of a particle moving in a potential field \(V\). A crucial thought is that the Schrödinger equation is not as fundamental as you might have been led to believe. Indeed, there's no single Schrödinger equation; the actual equation of a system depends on the characteristics of that system, and is often derived heuristically, through the process of operator substitution.
Operator substitutions One result is a "rule of thumb": substitution rules that are used to derive quantum operators from the classical quantities of momentum, energy, and position: \[\hat p\rightarrow\frac{\hbar}{i}\frac{\partial}{\partial x},\] \[\hat H\rightarrow i\hbar\frac{\partial}{\partial t},\] \[\hat x\rightarrow x.\] Commutativity The operators \(\hat x\) and \(\hat p\) do not commute: \[(\hat x\circ\hat p)\phi=x\frac{\hbar}{i}\frac{\partial\phi}{\partial x},\] \[(\hat p\circ\hat x)\phi=\frac{\hbar}{i}\frac{\partial(x\phi)}{\partial x}=\frac{\hbar}{i}\frac{\partial x}{\partial x}\phi+\frac{\hbar}{i}x\frac{\partial\phi}{\partial x},\] \[(\hat p\circ\hat x-\hat x\circ\hat p)\phi=\frac{\hbar}{i}\phi.\] Probability Current A simple manipulation of the Schrödinger equation—multiplying on the left by \(\phi^\star\), multiplying the equation's complex conjugate on the left by \(\phi\), and subtracting one from the other—can lead to the continuity equation: \[\phi^\star\left(-\frac{\hbar^2}{2m}\nabla^2\phi+V\phi-i\hbar\frac{\partial\phi}{\partial t}\right)-\phi\left(-\frac{\hbar^2}{2m}\nabla^2\phi^\star+V\phi^\star+i\hbar\frac{\partial\phi^\star}{\partial t}\right)\] \[=-\frac{\hbar^2}{2m}(\phi^\star\nabla^2\phi+\nabla\phi^\star\cdot\nabla\phi-\nabla\phi\cdot\nabla\phi^\star-\phi\nabla^2\phi^\star)-i\hbar\left(\phi^\star\frac{\partial\phi}{\partial t}+\phi\frac{\partial\phi^\star}{\partial t}\right)\] \[=-\frac{\hbar^2}{2m}\nabla\cdot(\phi^\star\nabla\phi-\phi\nabla\phi^\star)-i\hbar\frac{\partial(\phi^\star\phi)}{\partial t},\] or, substituting \[\mathbf{j}=\frac{-i\hbar^2}{2m}(\phi^\star\nabla\phi-\phi\nabla\phi^\star),\] \[\rho=\hbar\phi^\star\phi,\] we get \[-i\left(\nabla\cdot\mathbf{j}+\frac{\partial\rho}{\partial t}\right)=0,\] \[\nabla\cdot\mathbf{j}+\frac{\partial\rho}{\partial t}=0.\] References Feynman, Richard P., The Feynman Lectures on Physics III., Addison-Wesley, 1977 Aitchison, I. J. R. & Hey, A. J. G., Gauge Theories in Particle Physics, Institute of Physics Publishing, 1996
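The commutator identity \((\hat p\circ\hat x-\hat x\circ\hat p)\phi=\frac{\hbar}{i}\phi\) derived above can also be checked numerically with finite differences (a sketch with \(\hbar\) set to 1 and an arbitrary smooth test function):

```python
import numpy as np

# Central-difference check of (p x - x p) phi = (hbar/i) phi on a grid.
hbar = 1.0
dx = 1e-3
x = np.arange(-1.0, 1.0, dx)
phi = np.exp(-x**2)                  # any smooth test function will do

def deriv(f):                        # central difference, interior points only
    return (f[2:] - f[:-2]) / (2 * dx)

p_of_xphi = (hbar / 1j) * deriv(x * phi)           # p acting on (x phi)
x_of_pphi = x[1:-1] * (hbar / 1j) * deriv(phi)     # x acting on (p phi)
comm = p_of_xphi - x_of_pphi
assert np.allclose(comm, (hbar / 1j) * phi[1:-1], atol=1e-5)
```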
Mandelbrot polynomial The Mandelbrot polynomial is a special kind of quadratic polynomial, written in the form \( P_c(z)=z^2+c \), where \(c\) is a parameter. Usually, it is assumed to be a complex number. Mandelbrot set The Mandelbrot polynomial is used to define the Mandelbrot set \( M = \left\{c\in \mathbb C : \exists s\in \mathbb R, \forall n\in \mathbb N, |P_c^n(0)| \le s \right\} \) SuperMandelbrot The superfunction \(\Phi\) for the transfer function \(P_c\) can be constructed from the superfunction \(F\) of the Logistic operator: \(\Phi(z)=p(F(z))\), where \(p(z) = r z -r/2\), \(\displaystyle r =\frac{1}{2}+\sqrt{\frac{1}{4}+c}\). Similarly, the corresponding Abel function \(\Psi=\Phi^{-1}\) can be expressed through the Abel function \(G\) of the Logistic operator, \(\Psi(z) = G(q(z))\), where \(\displaystyle q(z)=\left(\frac{1}{2}-z\right)/r\). Then, following the general rule, through the superfunction \(\Phi\) and the Abel function \(\Psi\), the \(n\)th iteration of the Mandelbrot polynomial can be written as follows: \(P_c^n(z)=\Phi(n+\Psi(z))\) In this expression, the number \(n\) of iterations need not be an integer; the Mandelbrot polynomial can be iterated an arbitrary (even complex) number of times. References http://en.wikipedia.org/wiki/Mandelbrot_set Keywords Iteration, Logistic operator, Mandelbrot set, Superfunction
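The boundedness condition defining \(M\) translates directly into the standard escape-time test (a sketch; the bound \(|z|>2\) suffices because an orbit that leaves the disk of radius 2 must escape to infinity):

```python
# Iterate P_c(z) = z**2 + c from z = 0 and reject c once |z| > 2.
def in_mandelbrot(c, max_iter=500):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False         # orbit escapes: c is not in M
    return True                  # no escape detected within max_iter steps
```

For example, `in_mandelbrot(-1)` holds (the orbit cycles 0, -1, 0, ...), while `in_mandelbrot(1)` does not (0, 1, 2, 5, 26, ... escapes).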
To solve exponential problems with different bases, we take the common logarithm or natural logarithm of each side. Use the properties of logarithms to rewrite the problem. Use Property 5 to move the exponent out front, which turns this into a multiplication problem. Divide each side by log 3. The logarithm of a fraction is equal to the logarithm of the numerator minus the logarithm of the denominator. If we encounter two logarithms with the same base, we can likely combine them. In this case, we can use the reverse of the above identity. A logarithm is just an exponent. To be specific, the logarithm of a number x to a base b is just the exponent you put onto b to make the result equal x. For instance, since 5² = 25, we know that 2 (the power) is the logarithm of 25 to base 5. Symbolically, log_5(25) = 2. More generically, if x = b^y, then y = log_b(x). A worked example: [math]2\log_3(6)-\tfrac{1}{3}\log_6(64) = \log_3(36)-\log_6(\sqrt[3]{64}) = \log_3(36)-\log_6(4) = \log_3(36)+\log_6\!\left(\tfrac{1}{4}\right)[/math]. Solving logarithmic equations with different bases relies on the change of base formula. I was wondering how one would multiply two logarithms together? Say, for example, that I had: $$\log x\cdot\log 2x$$
Mar 25, · ANSWER: x = 23/4 Remember that logs of the same base can be combined by multiplying them together. So for example, log 4 + log 7 = log For your future reference, if they are being subtracted Followers: 2. Mar 25, · ANSWER: x = 23/4 Remember that logs of the same base can be combined by multiplying them together. So for example, log 4 + log 7 = log For your future reference, if they are being subtracted Followers: 2. Feb 01, · Solving, [math]2log_3(6)-\frac{1}{3}log_6(64)[/math] [math]log_3(36)-log_6(\sqrt[3]{64})[/math] [math]log_3(36)-log_6(4)[/math] [math]log_3(36)+log_6(\frac{1}{4. I was wondering how one would multiply two logarithms together? Say, for example, that I had: $$\log x·\log 2x Multiplying two logarithms (Solved) Ask Question Asked 3 years, Use MathJax to format equations. MathJax reference. To . Multiplying logarithms of different bases [closed] Ask Question How to compare logarithms with different bases? 1. Solving logarithmic equation with different bases Logarithm addition with different bases. 1. Question about asymptotic notation with logs of different bases. 0. I was wondering how one would multiply two logarithms together? Say, for example, that I had: $$\log x·\log 2x Multiplying two logarithms (Solved) Ask Question Asked 3 years, Use MathJax to format equations. MathJax reference. To . Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . Multiplying logarithms of different bases [closed] Ask Question How to compare logarithms with different bases? 1. Solving logarithmic equation with different bases Logarithm addition with different bases. 1. Question about asymptotic notation with logs of different bases. 0. Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . 
The logarithm of a fraction is equal to the logarithm of the numerator minus the logarithm of the denominator. If we encounter two logarithms with the same base, we can likely combine them. In this case, we can use the reverse of the above identity. The logarithm of a fraction is equal to the logarithm of the numerator minus the logarithm of the denominator. If we encounter two logarithms with the same base, we can likely combine them. In this case, we can use the reverse of the above identity. Sometimes, however, you may need to solve logarithms with different bases. This is where the change of base formula comes in handy: logbx = log ax/log ab. This formula allows you to take advantage of the essential properties of logarithms by recasting any problem in a form that is more easily solved. To solve exponential problems with different bases we t ake the common logarithm or natural logarithm of each side. Use the properties of logarithms to rewrite the problem. Use Property 5 to move the exponent out front which turns this into a multiplication problem. Divide each s ide by log 3. Mar 25, · ANSWER: x = 23/4 Remember that logs of the same base can be combined by multiplying them together. So for example, log 4 + log 7 = log For your future reference, if they are being subtracted Followers: 2. I was wondering how one would multiply two logarithms together? Say, for example, that I had: $$\log x·\log 2x Multiplying two logarithms (Solved) Ask Question Asked 3 years, Use MathJax to format equations. MathJax reference. To . Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . A logarithm is just an exponent. To be specific, the logarithm of a number x to a base b is just the exponent you put onto b to make the result equal x. For instance, since 5² = 25, we know that 2 (the power) is the logarithm of 25 to base 5. 
Symbolically, log 5 (25) = 2. More generically, if x = by. Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . The logarithm of a fraction is equal to the logarithm of the numerator minus the logarithm of the denominator. If we encounter two logarithms with the same base, we can likely combine them. In this case, we can use the reverse of the above identity. Feb 01, · Solving, [math]2log_3(6)-\frac{1}{3}log_6(64)[/math] [math]log_3(36)-log_6(\sqrt[3]{64})[/math] [math]log_3(36)-log_6(4)[/math] [math]log_3(36)+log_6(\frac{1}{4. To solve exponential problems with different bases we t ake the common logarithm or natural logarithm of each side. Use the properties of logarithms to rewrite the problem. Use Property 5 to move the exponent out front which turns this into a multiplication problem. Divide each s ide by log 3. Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . Jan 17, · This algebra 2 and precalculus video tutorial focuses on solving logarithmic equations with different bases. To do this, you need to understand how to use the change of base . A logarithm is just an exponent. To be specific, the logarithm of a number x to a base b is just the exponent you put onto b to make the result equal x. For instance, since 5² = 25, we know that 2 (the power) is the logarithm of 25 to base 5. Symbolically, log 5 (25) = 2. More generically, if x = by. To solve exponential problems with different bases we t ake the common logarithm or natural logarithm of each side. Use the properties of logarithms to rewrite the problem. Use Property 5 to move the exponent out front which turns this into a multiplication problem. Divide each s ide by log 3. I was wondering how one would multiply two logarithms together? 
Say, for example, that I had: $$\log x·\log 2x Multiplying two logarithms (Solved) Ask Question Asked 3 years, Use MathJax to format equations. MathJax reference. To . A logarithm is just an exponent. To be specific, the logarithm of a number x to a base b is just the exponent you put onto b to make the result equal x. For instance, since 5² = 25, we know that 2 (the power) is the logarithm of 25 to base 5. Symbolically, log 5 (25) = 2. More generically, if x = by. Sometimes, however, you may need to solve logarithms with different bases. This is where the change of base formula comes in handy: logbx = log ax/log ab. This formula allows you to take advantage of the essential properties of logarithms by recasting any problem in a form that is more easily solved. Feb 01, · Solving, [math]2log_3(6)-\frac{1}{3}log_6(64)[/math] [math]log_3(36)-log_6(\sqrt[3]{64})[/math] [math]log_3(36)-log_6(4)[/math] [math]log_3(36)+log_6(\frac{1}{4. Sometimes, however, you may need to solve logarithms with different bases. This is where the change of base formula comes in handy: logbx = log ax/log ab. This formula allows you to take advantage of the essential properties of logarithms by recasting any problem in a form that is more easily solved. Mar 25, · ANSWER: x = 23/4 Remember that logs of the same base can be combined by multiplying them together. So for example, log 4 + log 7 = log For your future reference, if they are being subtracted Followers: 2.Date now...
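As a small illustration (not from the original thread; the helper name `log_base` is my own), the change of base formula maps directly onto Python's `math.log`:

```python
import math

def log_base(x, b):
    """Change of base: log_b(x) = log_a(x) / log_a(b), here with a = e."""
    return math.log(x) / math.log(b)

# Solving 3**x = 7 by taking logs of both sides: x = log(7) / log(3)
x = log_base(7, 3)
```

Note that `math.log` also accepts an optional second argument for the base, which does the same thing.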
Probability Seminar Spring 2015 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu. Thursday, January 15, Miklos Racz, UC-Berkeley Stats Title: Testing for high-dimensional geometry in random graphs Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan. Thursday, January 22, No Seminar Thursday, January 29, Arnab Sen, University of Minnesota Title: Double Roots of Random Littlewood Polynomials Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}. We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions. This is joint work with Ron Peled and Ofer Zeitouni. 
Thursday, February 5, No seminar this week Thursday, February 12, No Seminar this week Thursday, February 19, Xiaoqin Guo, Purdue Title: Quenched invariance principle for random walks in time-dependent random environment Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez. Thursday, February 26, Dan Crisan, Imperial College London Title: Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. The estimates we derive have sharp small-time asymptotics. This is joint work with Terry Lyons (Oxford) and Christian Litterer (Ecole Polytechnique) and is based on the paper D Crisan, C Litterer, T Lyons, Kusuoka–Stroock gradient bounds for the solution of the filtering equation, Journal of Functional Analysis, 2015 Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113 Please note the unusual time and room. Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. 
In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena. Thursday, March 12, Ohad Feldheim, IMA Title: The 3-states AF-Potts model in high dimension Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration with proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?" This model is called the [math]q[/math]-states Potts antiferromagnet(AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model which is relatively well understood. The [math]3[/math]-states case in high dimension has been studied for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three coloring of the domain. Several works, by Galvin, Kahn, Peled, Randall and Sorkin, established the structure of the model showing long-range correlations and phase coexistence. 
In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math]. In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka. Thursday, March 19, Mark Huber, Claremont McKenna Math Title: Understanding relative error in Monte Carlo simulations Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depend on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum. Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply. 
Thursday, March 26, Ji Oon Lee, KAIST Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli. Thursday, April 2, No Seminar, Spring Break Thursday, April 9, Elnur Emrah, UW-Madison Title: The shape functions of certain exactly solvable inhomogeneous planar corner growth models Abstract: I will talk about two kinds of inhomogeneous corner growth models with independent waiting times {W(i, j): i, j positive integers}: (1) W(i, j) is distributed exponentially with parameter [math]a_i+b_j[/math] for each i, j. (2) W(i, j) is distributed geometrically with fail parameter [math]a_ib_j[/math] for each i, j. These generalize exactly-solvable i.i.d. models with exponential or geometric waiting times. The parameters (a_n) and (b_n) are random with a joint distribution that is stationary with respect to the nonnegative shifts and ergodic (separately) with respect to the positive shifts of the indices. Then the shape functions of models (1) and (2) satisfy variational formulas in terms of the marginal distributions of (a_n) and (b_n). For certain choices of these marginal distributions, we still get closed-form expressions for the shape function as in the i.i.d. models. 
Thursday, April 16, Scott Hottovy, UW-Madison Title: An SDE approximation for stochastic differential delay equations with colored state-dependent noise Abstract: In this talk I will introduce a stochastic differential delay equation with state-dependent colored noise which arises from a noisy circuit experiment. In the experimental paper, a small delay and correlation time limit was performed by using a Taylor expansion of the delay. However, a time substitution was first performed to obtain a good match with experimental results. I will discuss how this limit can be proved without the use of a Taylor expansion by using a theory of convergence of stochastic processes developed by Kurtz and Protter. To obtain a necessary bound, the theory of sums of weakly dependent random variables is used. This analysis leads to the explanation of why the time substitution was needed in the previous case. Thursday, April 23, Hoi Nguyen, Ohio State University Title: On eigenvalue repulsion of random matrices Abstract: I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles. Thursday, May 7, Jessica Lin, UW-Madison, 2:25pm, Room TBA Please note the unusual room, which is to be announced. Title: Random Walks in Random Environments and Stochastic Homogenization In this talk, I will draw connections between random walks in random environments (RWRE) and stochastic homogenization of partial differential equations (PDE). I will introduce various models of RWRE and derive the corresponding PDEs to show that the two subjects are intimately related. 
I will then give a brief overview of the tools and techniques used in both approaches (reviewing some classical results), and discuss some recent problems in RWRE which are related to my research in stochastic homogenization. Thursday, May 14, Chris Janjigian, UW-Madison Title: Large deviations of the free energy in the O’Connell-Yor polymer Abstract: The first model of a directed polymer in a random environment was introduced in the statistical physics literature in the mid 1980s. This family of models has attracted substantial interest in the mathematical community in recent years, due in part to the conjecture that they lie in the KPZ universality class. At the moment, this conjecture can only be verified rigorously for a handful of exactly solvable models. In order to further explore the behavior of these models, it is natural to question whether the solvable models have any common features aside from the Tracy-Widom fluctuations and scaling exponents that characterize the KPZ class. This talk considers the behavior of one of the solvable polymer models when it is far away from the behavior one would expect based on the KPZ conjecture. We consider the model of a 1+1 dimensional directed polymer model due to O’Connell and Yor, which is a Brownian analogue of the classical lattice polymer models. This model satisfies a strong analogue of Burke’s theorem from queueing theory, which makes some objects of interest computable. This talk will discuss how, using the Burke property, one can compute the positive moment Lyapunov exponents of the parabolic Anderson model associated to the polymer and how this leads to a computation of the large deviation rate function with normalization n for the free energy of the polymer.
L-functions of signature (0,0,0;) These L-functions satisfy a functional equation with \(\Gamma\)-factors \[\begin{aligned}\Gamma_\R(s + i \mu_1)\Gamma_\R(s + i \mu_2)\Gamma_\R(s + i \mu_3)\end{aligned}\] with \(\mu_j\in \R\) and \(\mu_1 + \mu_2 + \mu_3 = 0\). By permuting and possibly taking the complex conjugate, we may assume \(\mu_1 \ge \mu_2 \ge 0\), so the functional equation can be represented by a point \( (\mu_1, \mu_2) \) below the diagonal in the first quadrant of the Cartesian plane. L-functions of conductor 1 The dots in the plot correspond to L-functions with \((\mu_1,\mu_2)\) in the \(\Gamma\)-factors, colored according to the sign of the functional equation (blue indicates \(\epsilon=1\)). Click on any of the dots for detailed information about the L-function.
It is a question about vector calculus and Maxwell's laws. I put it this way. Let's say we are working in a $3$-dimensional space (e.g. $x\cdot y\cdot z = 4\cdot3\cdot2$, a certain room/classroom of that size). Within this room, the temperature obeys a certain equation (e.g. $T = 25 + 5z$). We know that heat flows from higher temperature regions to lower temperature regions. With this information in mind, how can I determine the amplitude and the direction of travel of the thermal energy with the del operator? I'm not looking for a definitive response, but an equation that could potentially give me the result for the amplitude and the direction. I also want to know whether thermal energy follows a loop pattern within my room (and to be able to explain it mathematically, using the del operator once again, of course). You can find the lecture source here. From those, we are able to get the amplitude: for one-dimensional heat flow, we have $q= k \dfrac{T_1-T_2}{L}$, where $T_1$ and $T_2$ are the terminal temperatures and $L$ is the length of the material. Switching to $3$-D space, we need to look at Fourier's law of heat conduction before proceeding further: $\dfrac{q_x}{A}= - k \dfrac{dT}{dx}$. So integrating we obtain $$\dfrac{q_x}{A}\large \int_{0}^{L} \ dx= -k \int_{T_1}^{T_2} \ dT.$$ So the heat flow (rate of conduction) in the $3$-D space $(x,y,z)$ is given by the following equation $$ q= - k \nabla T = - k \bigg( \hat{i} \dfrac{\partial T}{\partial x}+\hat{j} \dfrac{\partial T}{\partial y}+\hat{k} \dfrac{\partial T}{\partial z}\bigg).$$ The negative sign indicates that heat is transferred from hotter to colder regions. So we can simply substitute the temperature field in the place of $T$ and compute the partial derivatives to get the rate of flow of heat. What about the direction?
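A small numerical sketch of the recipe above (the conductivity value, sample point, and function names are placeholder assumptions): compute \(q = -k\nabla T\) for \(T = 25 + 5z\) by central differences.

```python
def temperature(x, y, z):
    # Temperature field from the question: T = 25 + 5z
    return 25 + 5 * z

def grad(f, p, h=1e-6):
    """Central-difference approximation of the gradient of a scalar field."""
    x, y, z = p
    return (
        (f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2 * h),
    )

k = 1.0  # assumed thermal conductivity (placeholder value)
q = tuple(-k * g for g in grad(temperature, (1.0, 1.0, 1.0)))  # heat flux vector
amplitude = sum(c * c for c in q) ** 0.5                        # its magnitude
```

Here \(q \approx (0, 0, -5k)\): the amplitude is \(5k\) and the direction is \(-z\), i.e. heat flows downward, toward lower temperature. As for the loop pattern: since \(q\) is \(-k\) times a gradient field, \(\nabla \times q = 0\), so the steady flux is irrotational and cannot circulate in loops.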
This article is a summary of common equations and quantities in thermodynamics (see thermodynamic equations for more elaboration). SI units are used for absolute temperature, not Celsius or Fahrenheit. Many of the definitions below are also used in the thermodynamics of chemical reactions. The equations in this article are classified by subject.

Processes of an ideal gas:
- Isothermal work: \(W = NkT \ln(V_2/V_1)\)
- Isobaric process: \(\Delta W = p \Delta V\), \(\Delta Q = \Delta U + p \Delta V\)
- Isochoric process: \(\Delta W = 0\), \(\Delta Q = \Delta U\)
- Adiabatic process: \(T_1 V_1^{\gamma - 1} = T_2 V_2^{\gamma - 1}\)
- Work done by expansion: \(\Delta W = \int_{V_1}^{V_2} p \,\mathrm{d}V\)
- Net work done in cyclic processes: \(\Delta W = \oint_\mathrm{cycle} p \,\mathrm{d}V\)
- Ideal gas law ratios: \(\dfrac{p_1 V_1}{p_2 V_2} = \dfrac{n_1 T_1}{n_2 T_2} = \dfrac{N_1 T_1}{N_2 T_2}\)

Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases. \(K_2\) is the modified Bessel function of the second kind.
- Speed distribution: \(P(v) = 4\pi \left(\dfrac{m}{2\pi k_B T}\right)^{3/2} v^2 e^{-mv^2/2 k_B T}\)
- Relativistic speeds (Maxwell–Jüttner distribution): \(f(p) = \dfrac{1}{4 \pi m^3 c^3 \theta K_2(1/\theta)} e^{-\gamma(p)/\theta}\)
- Probability of a microstate: \(P_i = 1/\Omega\)
- Entropy change: \(\Delta S = k_B N \ln\dfrac{V_2}{V_1} + N C_V \ln\dfrac{T_2}{T_1}\)
- Mean kinetic energy per degree of freedom: \(\langle E_\mathrm{k} \rangle = \frac{1}{2}kT\)
- Internal energy: \(U = d_f \langle E_\mathrm{k} \rangle = \dfrac{d_f}{2}kT\)

For quasi-static and reversible processes, the first law of thermodynamics is \(\mathrm{d}U = \delta Q - \delta W\), where \(\delta Q\) is the heat supplied to the system and \(\delta W\) is the work done by the system. 
The thermodynamic potentials are the internal energy \(U\), the enthalpy \(H = U + PV\), the Helmholtz free energy \(F = U - TS\), and the Gibbs free energy \(G = H - TS\); the corresponding fundamental thermodynamic relations or "master equations"[2] are \(\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V\), \(\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}P\), \(\mathrm{d}F = -S\,\mathrm{d}T - P\,\mathrm{d}V\), and \(\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P\). The four most common Maxwell relations are:

\(\left(\dfrac{\partial T}{\partial V}\right)_S = -\left(\dfrac{\partial P}{\partial S}\right)_V = \dfrac{\partial^2 U}{\partial S \partial V}\)

\(\left(\dfrac{\partial T}{\partial P}\right)_S = +\left(\dfrac{\partial V}{\partial S}\right)_P = \dfrac{\partial^2 H}{\partial S \partial P}\)

\(+\left(\dfrac{\partial S}{\partial V}\right)_T = \left(\dfrac{\partial P}{\partial T}\right)_V = -\dfrac{\partial^2 F}{\partial T \partial V}\)

\(-\left(\dfrac{\partial S}{\partial P}\right)_T = \left(\dfrac{\partial V}{\partial T}\right)_P = \dfrac{\partial^2 G}{\partial T \partial P}\)

Further relations and other differential equations involve the number of particles \(N\), Planck's constant \(h\), the moment of inertia \(I\), and the partition function \(Z\) in its various forms.

Thermal conduction:
- Parallel layers: \(\lambda_\mathrm{net} = \sum_j \lambda_j\)
- Series layers: \(\dfrac{1}{\lambda_\mathrm{net}} = \sum_j \dfrac{1}{\lambda_j}\)

Heat engines and refrigerators:
- Engine efficiency: \(\eta = \left|\dfrac{W}{Q_H}\right|\)
- Carnot engine efficiency: \(\eta_c = 1 - \left|\dfrac{Q_L}{Q_H}\right| = 1 - \dfrac{T_L}{T_H}\)
- Refrigeration performance: \(K = \left|\dfrac{Q_L}{W}\right|\)
- Carnot refrigeration performance: \(K_C = \dfrac{|Q_L|}{|Q_H| - |Q_L|} = \dfrac{T_L}{T_H - T_L}\)
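The Carnot engine and refrigerator formulas translate into one-liners; a minimal sketch (function names are my own):

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot engine efficiency eta_c = 1 - T_L / T_H (temperatures in kelvin)."""
    return 1 - t_cold / t_hot

def carnot_cop_refrigerator(t_hot, t_cold):
    """Carnot refrigeration performance K_C = T_L / (T_H - T_L)."""
    return t_cold / (t_hot - t_cold)
```

A reservoir pair at 500 K and 300 K bounds any engine's efficiency at 0.4, and a refrigerator working between 300 K and 250 K has a maximum coefficient of performance of 5.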
An Introduction to MimeTex For those who wish to format math questions on the web using super and subscripting, try using the Tex tags. Tom has been kind enough to import MimeTeX to this site. One thing I cannot yet do with it is make those clever angle brackets for certain terms. You may see some good samples of how this works at the MimeTex Site, which is a public domain site and I don't believe I am hotlinking. Here's a basic intro. Superscripts are done with ^, e.g. x^2. Greek letters use a backslash, e.g. \mu. Subscripts are done by _. You can put both into a sum or integral. \sum is for summation and \int is for integration. Put special functions like sine, cos and log with backslashes before their name to typeset them correctly. \infty is infinity. More examples can be found on the website, where you can click on an expression and see the tags that generate it. Grouping can be done with curly braces. I hope this brief introduction, combined with the link, will make posting math questions on the web a little bit clearer and easier. This might also be helpful: http://web.ift.uib.no/Fysisk/Teori/K...eX/symALL.html A much more comprehensive (but much longer) list: http://www.ctan.org/tex-archive/info...ols-letter.pdf Any tips on installation? The readme file isn't much help. I'm a little lost about executing the "cc -DAA ..." command, for example. Quote: If you just want to be able to make your notation nicer looking on the Actuarial Outpost, you just need to put {tex} {/tex} tags around it (substitute [] for {}). So {tex}e^x{/tex} becomes the rendered formula, etc. After that, look at the mimeTeX site for examples of how to markup different formulas with LaTeX notation to make them look the way you want. If you want to install a LaTeX distribution on your home computer, get started at www.latex-project.org. If you're running a PC, download ProText and follow along. Installation is relatively painless and there are some good getting-started documents around to get you headed in the right direction. 
If you are running a bulletin board of your own and want to add mimeTeX capabilities to it, you'll need to ask Tom for some help. But as far as I know mimeTeX won't help you do any sort of math typesetting outside of an internet bulletin board, so if you just want to have nice looking formulas in your own work away from the AO, see the previous paragraph and that should get you started. Thanks, nd, that's all I needed to know. Mostly for AO purposes but I'm sure I'll put the rest to use. You can also read about LaTeX here: http://en.wikipedia.org/wiki/LaTeX If you follow all (or most) links at the bottom, you should have a much better idea as to how to use it. A lot of math faculty use LaTeX or something similar to write all their papers / class notes. Some universities even have classes on how to use it. Which is odd. For the life of me I can't imagine how I found it. I usually just use the new posts search and go through them. Odd. 2007 is going to be a great year for me. I can feel it.
This is a quick advert for the excellent online course “Reinforcement Learning” by David Silver. David Silver’s ‘teaching’ home page includes links to the slides (which are particularly helpful for Lecture 7, where there’s no video available - just sound). These notes are mainly for my own consumption…

Musings : Eligibility Traces

Several things that bubbled to the surface during the Reinforcement Learning course:

- Eligibility traces, which are a way of keeping track of how rewards for \(TD(\lambda)\) should be attributed to the states/actions that lead to them, seem very reminiscent of activation potentials
- Aiming for minimum fit error is the same as optimising for least surprise, which is the same as making predictions that are as un-surprising as possible
- Distributing the amount of ‘surprise’ (positive or negative) to exponentially decaying eligibility nodes with a neural network seems like a very natural reinterpretation of the \(TD(\lambda)\) framework
- But current neural networks are dense, with multi-dimensional spaces embedding knowledge in a way that is entangled far more thoroughly than the sparse representations (apparently) found in the brain
- Perhaps the ‘attribution problem’ that sparse & eligibility solves is equivalent to dense & backprop
- But there appears to be a definite disconnect between the rationales behind each of the two methods that requires more justification

Musings : Exploration vs Exploitation

- Interesting that \( (\mu + n\sigma) \) is an effective action-chooser, since that is what I was doing in the 1990s, but using it as a placeholder heuristic until I found something that was ‘more correct’
- But \([0,1]\) bounded i.i.d variables having a decent confidence bound (Hoeffding’s inequality) was a new thing for me
- Also, liked that Thompson sampling (from the 1930s) implicitly creates samples according to a distribution, merely by using samples from the underlying factors - very elegant, and makes me think about the heuristic in Genetic
Algorithms for ‘just sampling’ as potentially solving a (very complex) probability problem, without actually doing any computation

Musings : Games and State-of-the-Art

- The tree optimisations seem ‘obvious’ in retrospect - but clearly each was a major ‘aha!’ when it was first proposed. Very interesting.
- Almost all of the State-of-the-Art methods use binary features, linear approximations to \( v_*() \), search and self-play. Perhaps deeper models will work instead of the linear ones - but it’s interesting that binary (~sparse?) features are basically powerful enough (when there’s some tree-search for trickier strategy planning)

Musings : Gimbal Learning

Lecture 8 of the Reinforcement Learning Course by David Silver is super-relevant to the Gimbal control problem that I’ve been considering. It’s also interesting that the things I had already assumed would have been obvious are currently considered state-of-the-art (but isn’t that always the way? Hindsight, etc). In summary:

- Learn a model from the real world by observing state transitions, and then learning a {state, action}-to-state mapping
- Also learn a {state, action}-to-reward mapping (almost a separate model)
- Apply model-free methods to the model-simulated world
- Once the ‘correct’ action has been selected, actually perform it
- Now we have new real-world learning with which to fine-tune the world model

To apply this to the gimbal, it seems like one could present ‘target trajectories’ to the controller in turn, letting it learn a common world-model, with different reward-models for each goal. And let it self-play…

Apropos of Nothing

DARPA Robotics Fast Track - are they doing anything?
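The eligibility-trace idea from the first set of musings can be sketched as tabular TD(λ) with accumulating traces; the state names, step size, and decay constants below are illustrative assumptions, not taken from the course:

```python
def td_lambda_episode(states, rewards, V, alpha=0.1, gamma=0.9, lam=0.8):
    """One episode of tabular TD(lambda) with accumulating eligibility traces.

    states:  visited states s_0 .. s_T
    rewards: rewards r_1 .. r_T (rewards[t] follows states[t])
    V:       dict mapping state -> value estimate, updated in place
    """
    e = {s: 0.0 for s in V}  # eligibility traces, one per state
    for t in range(len(rewards)):
        s, s_next = states[t], states[t + 1]
        delta = rewards[t] + gamma * V[s_next] - V[s]  # TD error ("surprise")
        e[s] += 1.0                                    # bump trace of current state
        for st in V:                                   # credit all recent states
            V[st] += alpha * delta * e[st]
            e[st] *= gamma * lam                       # exponential trace decay
    return V
```

Each TD error δ is distributed over all recently visited states in proportion to their exponentially decaying traces, which is exactly the attribution mechanism the musings above are circling around.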
Understanding light propagation in an optical fiber by ray optics (see “optical fiber“) gives an intuitive understanding and, in fact, is a good approximation if the size of the core is much larger than the wavelength guided in the optical fiber. The electromagnetic properties of an optical fiber, however, become essential as the size of the waveguide approaches the wavelength. The concept of modes in an optical waveguide is one of the most important concepts when analyzing an optical fiber using electromagnetics.

Figure 1: Schematic of optical fiber and guided modes.

Figure 1 shows schematically a longitudinally uniform optical fiber and examples of modes. Each mode maintains its distribution of the electromagnetic field in the x-y plane (modal-field distribution) as it propagates in the z-direction. Modes are determined once the refractive-index profile and the angular frequency of light \(\omega\) (\(\omega=2\pi C_0/\lambda\), where \(C_0\) is the speed of light in vacuum) are given; the electromagnetic field of a mode at an arbitrary position and time is then given as follows:

\( \mathbf{E}_{\beta}(x,y,z,t)=\mathbf{E}_{\beta}(x,y)e^{i(\omega t - \beta z)}, \)

\( \mathbf{H}_{\beta}(x,y,z,t)=\mathbf{H}_{\beta}(x,y)e^{i(\omega t - \beta z)}. \)

The propagation constant \(\beta\) of each mode is obtained by solving an eigenvalue problem (determined by the refractive-index profile and the angular frequency \(\omega\)), and the modal-field distribution \(\mathbf{E}_{\beta}(x,y)\) or \(\mathbf{H}_{\beta}(x,y)\) is the eigenstate for that eigenvalue. The modal-field distribution is generally either oscillatory or evanescent in the transverse direction. If the modal-field distribution is oscillatory in the core and evanescent in the cladding, the mode is confined to the core, as the modal-field distribution rapidly decays in the cladding. These modes are referred to as guided modes.
Guided modes are the most important type of mode and are often simply referred to as “modes”.

Multimode and single-mode fiber

A set of guided modes with the same (or very close) \(\beta\) generally shows very similar characteristics (e.g. modal-field distribution) apart from polarization state; such a set of modes is called a mode class. An optical fiber with a large core guides multiple mode classes and is called a multimode fiber. The number of mode classes decreases as the size of the core decreases, and when the core becomes sufficiently small, only one mode class is allowed. Such a fiber is called a single-mode fiber.
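As a numerical illustration of when only one mode class survives: for a step-index fiber, the normalized frequency \(V = (2\pi a/\lambda)\,\mathrm{NA}\) governs the number of guided mode classes, and \(V < 2.405\) (the first zero of the Bessel function \(J_0\)) is the standard single-mode condition. The parameter values below are typical textbook numbers, not taken from this article:

```python
import math

def v_number(core_radius_m, wavelength_m, n_core, n_cladding):
    """Normalized frequency V = (2*pi*a / lambda) * NA of a step-index fiber."""
    na = math.sqrt(n_core ** 2 - n_cladding ** 2)  # numerical aperture
    return 2 * math.pi * core_radius_m / wavelength_m * na

# Illustrative values resembling a telecom single-mode fiber at 1550 nm.
V_smf = v_number(4.1e-6, 1.55e-6, 1.4504, 1.4447)
# Same indices but a much larger core: many mode classes are guided.
V_mmf = v_number(25e-6, 1.55e-6, 1.4504, 1.4447)

print(f"small core: V = {V_smf:.2f} (single-mode, since V < 2.405)")
print(f"large core: V = {V_mmf:.2f} (multimode; roughly V^2/2 guided modes)")
```

Shrinking the core radius drives V below 2.405, which is exactly the core-size argument made in the text.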
To evaluate detection performance, we plot the miss rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\#\text{img}}$ in log-log plots. Here $tp(c)$ is the number of true positives, $fp(c)$ the number of false positives, and $fn(c)$ the number of false negatives, all for a given confidence value $c$, such that only detections with a confidence value greater than or equal to $c$ are taken into account. As is common in object detection evaluation, the confidence threshold $c$ is used as a control variable: by decreasing $c$, more detections are taken into account, resulting in more possible true or false positives and possibly fewer false negatives. We define the log-average miss rate (LAMR) as shown, where the 9 fppi reference points are equally spaced in log space:

$\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\left(mr\left(\argmax\limits_{fppi\left(c\right)\leq f} fppi\left(c\right)\right)\right)\right)$

For each fppi reference point, the corresponding mr value is used. If no miss-rate value exists for a given $f$, the highest existing fppi value is used as the new reference point. This definition enables LAMR to serve as a single detection performance indicator at image level.

At each image, the set of all detections is compared to the ground-truth annotations using a greedy matching algorithm. An object is considered detected (a true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding boxes exceeds a pre-defined threshold. Owing to the highly non-rigid nature of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since multiple matches to one ground-truth annotation are not allowed, in the case of multiple matches the detection with the largest score is selected, whereas all other matching detections are considered false positives.
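The LAMR definition above can be sketched as follows; the `(fppi, mr)` sample pairs are invented for illustration and would normally come from sweeping the confidence threshold $c$:

```python
import math

def lamr(fppi, mr):
    """Log-average miss rate over 9 log-spaced fppi reference points in [1e-2, 1].

    `fppi` and `mr` are parallel lists obtained by sweeping the confidence
    threshold c. For each reference point f we take mr at the largest
    fppi <= f; if none exists, the lowest available fppi point is used.
    """
    refs = [10 ** (-2 + 0.25 * i) for i in range(9)]  # 1e-2 ... 1e0
    pairs = sorted(zip(fppi, mr))
    logs = []
    for f in refs:
        below = [m for (fp, m) in pairs if fp <= f * (1 + 1e-9)]
        m = below[-1] if below else pairs[0][1]
        logs.append(math.log(max(m, 1e-10)))  # guard against log(0)
    return math.exp(sum(logs) / len(refs))

# Invented detector curve: miss rate falls as more detections are admitted.
fppi = [0.005, 0.01, 0.03, 0.1, 0.3, 1.0]
mr   = [0.60, 0.45, 0.30, 0.20, 0.12, 0.08]
print(f"LAMR = {lamr(fppi, mr):.3f}")
```

The result is a geometric mean of the sampled miss rates, so it always lies between the best and worst miss rate on the curve.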
After the matching is performed, all non-matched ground-truth annotations and detections count as false negatives and false positives, respectively. Neighboring classes and ignore regions are used during evaluation. Neighboring classes involve entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce), whereas others might not (ignore). In the latter case, correct/false detections are not credited/penalized during matching. If not stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons dataset publication, ignore regions are used for cases where no precise bounding box annotation is possible (either because the objects are too small or because there are too many objects in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or the location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not considered false positives.

Note that submissions with a provided publication link and/or code will be prioritized in the list below (COMING SOON).

| Method | User | LAMR (reasonable)▲ | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.061 | 0.138 | 0.287 | 0.183 | ImageNet | no | no | 2019-08-05 17:11:04 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03 |
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27 |

| Method | User | LAMR (reasonable)▲ | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.079 | 0.156 | 0.265 | 0.153 | ImageNet | no | no | 2019-08-05 17:11:04 |
| FasterRCNN with M... | Qihua Cheng | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13 |
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01 |
If you want a numerical method to satisfy certain properties, you have to prove (or construct the method in such a way) that these properties hold. Examples:

- conservative finite-volume formulations for conservation laws
- Runge-Kutta methods preserve linear invariants
- symplectic integrators preserve symplecticity, which is important for Hamiltonian systems
- symplectic Runge-Kutta methods preserve quadratic invariants
- total variation diminishing (TVD) discretizations are monotonicity-preserving (typically, these are spatial discretizations for hyperbolic problems)
- strong-stability-preserving time discretizations are those that preserve the TVD property whenever the spatial discretization is TVD when integrated in time using forward Euler
- you can force certain integration schemes (example: the BDF implementation in SUNDIALS) to respect bounds on properties of ODE solutions (example: you're integrating an ODE tracking the mass of an object; that object's mass must be nonnegative)

Tightening the tolerances, as David Ketcheson correctly points out, will not necessarily make the numerical solution monotonic. There's no way to guarantee that. To summarize the following discussion: in a practical sense, if you are approaching a steady state monotonically, tightening tolerances will limit spurious oscillations (specifically, it will limit their amplitude, but not necessarily their total variation), but it will not eliminate them for certain.

Your particular ODE is
\begin{align}\dot{\theta} = \frac{1}{\sqrt{\frac{\cos^{4}{\theta}}{\sin^{2}{\theta}} + \cos^{2}{\theta}}}.\end{align}
After some simplification and consulting trigonometric identities, it becomes
\begin{align}\dot{\theta} = \pm\tan{\theta},\end{align}
and since you start at $\theta(0) = -\pi/2$, your right-hand side is singular anyway, and your "solution" is dubious.
(The reason for the ambiguity in sign via $\pm$ involves a trigonometric identity for sine in terms of cotangent, and the choice of sign depends on which quadrant $\theta$ is in.) The only reason you see a solution at all is due to quirks in floating-point arithmetic; when I evaluate your right-hand side at zero, I get 16331239353195370.0, probably a system- and hardware-dependent result. For $\theta \in (-\pi/2, 0]$, you should have $\dot{\theta} = -\tan{\theta}$, and since $-\tan{\theta}$ is nonnegative on $(-\pi/2, 0]$ and $\tan{0} = 0$, any initial condition in $(-\pi/2, 0]$ should yield a monotonically increasing solution. Again, this property isn't guaranteed by the numerical methods you are using. Furthermore, if $\theta \in (0, \pi/2)$, your equation becomes $\dot{\theta} = \tan{\theta}$ to preserve positivity of the right-hand side (the original form of the equation ensures positivity), and $\tan{\pi/2}$ is, of course, again singular. The main function of your tolerance here is to ensure that you get the right steady state, and because floating-point arithmetic is mysterious, you somehow get what looks like a "correct" answer (for $\theta(0) = -\pi/2 + \varepsilon$ for some small $\varepsilon > 0$) despite the initial condition being at a singularity. The deviations you see from the true steady state at $\theta = 0$ arise because tolerances correspond to "roughly zero" in terms of estimating the error in the computed solution. Due to quirks in floating-point arithmetic, your right-hand side won't necessarily evaluate to exactly zero, but because it will be sufficiently small relative to your tolerances, you should see no oscillations of amplitude larger than your tolerances (approximately; this reasoning is heuristic, not rigorous, but it should hold).
Undue demands on accuracy will unnecessarily restrict the time step size and increase simulation execution times without yielding any appreciable improvement in results. As a counterexample, combustion problems are notorious for requiring small tolerances for certain species to get approximately correct numerical results: if certain species are never present, certain reactions cannot occur, even if those species' concentrations are calculated in quantities too small to be measured, so in that case small absolute and relative tolerances are warranted. For your particular problem, I suspect that with sufficiently large tolerances you will see wild oscillations, because you will appreciably overshoot your steady state and reach the singularity at $\theta = \pi/2$; the default tolerance is probably fine, and I see no good reason to decrease the tolerances here.
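A quick numerical sanity check of the simplification above: away from the singular points, the original right-hand side collapses to $|\tan\theta|$, consistent with the $\pm\tan\theta$ form. (This snippet is my own illustration, not from the original question.)

```python
import math

def rhs_original(theta):
    # 1 / sqrt( cos^4(theta)/sin^2(theta) + cos^2(theta) )
    c, s = math.cos(theta), math.sin(theta)
    return 1.0 / math.sqrt(c ** 4 / s ** 2 + c ** 2)

def rhs_simplified(theta):
    # cos^4/sin^2 + cos^2 = cos^2 (cos^2 + sin^2) / sin^2 = cot^2(theta),
    # so the right-hand side is 1/|cot(theta)| = |tan(theta)|.
    return abs(math.tan(theta))

for theta in (-1.2, -0.7, -0.3, 0.4, 1.0):
    assert math.isclose(rhs_original(theta), rhs_simplified(theta), rel_tol=1e-9)
print("rhs(theta) == |tan(theta)| at all sampled points")
```

Note that at $\theta = 0$ or $\theta = \pm\pi/2$ the expression divides by zero, which is exactly the floating-point trouble described above.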
2018-08-25 06:58 Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence) / CERN RD50
The objective of the RD50 collaboration is to develop radiation-hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the Large Hadron Collider (LHC) at CERN. Some of the most recent RD50 results on silicon detectors are reported in this paper, with special reference to: (i) progress in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type-column 3D detectors, after proton and neutron irradiation, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process.
2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Detailed record - Similar records

2018-08-25 06:58 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.) / CERN RD50
Silicon carbide (SiC) is a wide-bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Detailed record - Similar records

2018-08-24 06:19 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna)
Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Geneva). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \ \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors, and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Detailed record - Similar records

2018-08-24 06:19 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.)
Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci.
51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Detailed record - Similar records

2018-08-24 06:19 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa)
Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices exhibit superior radiation hardness compared to present-day silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Detailed record - Similar records

2018-08-24 06:19 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys.
II)
The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e}/cm^2$. The variation of the effective dopant concentration, the current-related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices, have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Detailed record - Similar records

2018-08-24 06:19 Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.)
Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
Detailed record - Similar records

2018-08-24 06:19 First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.)
Heavy hadron irradiation leads to type inversion of n-type silicon detectors.
After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Detailed record - Similar records

2018-08-23 11:31 Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE)
New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Detailed record - Similar records
As Geoff Oxberry points out in his comment, if you know a priori that $x_k=0$ for $k\leq i$ and $k\geq j$, then there's nothing to optimize with respect to these variables and you can just solve the (smaller) reduced unconstrained problem $\min_{x\in\mathbb{R}^{j-i-1}}\hat f(x)$, where $\hat f$ takes only the nonzero variables as input. If this is not feasible for some reason, you might want to update your question to include this in order to get concrete advice. But to address the question in your comment: One possible approach to solve a constrained optimization problem of the form $\min_{x\in C} f(x)$ for some (convex) set $C\subset\mathbb{R}^n$ (in your case, $C=\{x\in\mathbb{R}^n: x_k=0,\ k\leq i \text{ or }k\geq j\}$) is projected gradient descent (which is a special instance of proximal gradient methods), defined by the iteration$$x^{k+1} = P_C \left(x^k - \alpha_k f'(x^k)\right)$$where $P_C$ is the projection onto the convex set $C$ (in your case, simply setting the corresponding $x_k$ to zero) and $\alpha_k$ is a suitable step size (e.g., of Armijo type). If $f$ is smooth, this iteration converges to a stationary point (which is a minimizer if $f$ is convex); see, e.g., PH Calamai, JJ Moré: Projected gradient methods for linearly constrained problems, Mathematical Programming 39, 93-116. In principle, this is what you are doing, except you are using the conjugate gradient as a search direction instead of the gradient. I haven't checked the proofs, but I'd assume that your version should still work as long as you have a descent direction. Whether it is reasonable, though, depends on your definition (it is a bit of a sledgehammer approach...)
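To make the iteration concrete, here is a minimal sketch of projected gradient descent for this particular $C$, with an Armijo-type backtracking step size; the quadratic objective and all parameter values are stand-ins of mine, not from the question:

```python
import numpy as np

def projected_gradient(f, grad, project, x0, alpha0=1.0, sigma=1e-4,
                       tol=1e-10, max_iter=500):
    """Projected gradient descent with a simple Armijo-type backtracking step."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        g = grad(x)
        alpha = alpha0
        while True:
            x_new = project(x - alpha * g)
            step = x - x_new
            # sufficient-decrease test along the projected step
            if f(x_new) <= f(x) - (sigma / alpha) * step.dot(step) or alpha < 1e-12:
                break
            alpha *= 0.5
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# C = {x : x_k = 0 for k <= i or k >= j}: the projection zeroes those entries.
i, j, n = 1, 5, 7

def project(x):
    y = x.copy()
    y[:i + 1] = 0.0
    y[j:] = 0.0
    return y

# Stand-in smooth objective f(x) = 0.5 ||x - b||^2 with gradient x - b.
b = np.arange(n, dtype=float)
f = lambda x: 0.5 * float((x - b).dot(x - b))
grad = lambda x: x - b

x_star = projected_gradient(f, grad, project, np.zeros(n))
print(x_star)  # equals b on the free coordinates i < k < j, zero elsewhere
```

Because the projection onto this $C$ is just coordinate zeroing, the iteration agrees with solving the reduced unconstrained problem on the free coordinates, which is the first point above.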
This is essentially an addition to the list of @4tnemele. I'd like to add some earlier work to this list, namely Discrete Gauge Theory. Discrete gauge theory in 2+1 dimensions arises by breaking a gauge symmetry with gauge group $G$ to some lower discrete subgroup $H$, via a Higgs mechanism. The force carriers ('photons') become massive, which makes the gauge force ultra-short-ranged. However, as the gauge group is not completely broken, we still have the Aharonov-Bohm effect. If $H$ is Abelian, this AB effect is essentially a 'topological force': it gives rise to a phase change when one particle loops around another particle. This is the idea of fractional statistics of Abelian anyons.

The particle types that we can construct in such a theory (i.e. the ones that are "color neutral") are completely determined by the residual, discrete gauge group $H$. To be more precise: a particle is said to be charged if it carries a representation of the group $H$. The number of different charged particle types is then equal to the number of irreducible representations of the group $H$. This is similar to ordinary Yang-Mills theory, where charged particles (quarks) carry the fundamental representation of the gauge group (SU(3)). In a discrete gauge theory we can label all possible charged particle types using the representation theory of the discrete gauge group $H$.

But there are also other types of particles that can exist, namely those that carry flux. These flux-carrying particles are also known as magnetic monopoles. In a discrete gauge theory the flux-carrying particles are labeled by the conjugacy classes of the group $H$. Why conjugacy classes? Well, we could label flux-carrying particles by elements of the group $H$, but a gauge transformation acts through conjugation, $|g_i\rangle \rightarrow |hg_ih^{-1}\rangle$, for all particle states $|g_i\rangle$ (suppressing the coordinate label).
Since states related by gauge transformations are physically indistinguishable, the only distinct flux-carrying particles we have are labeled by conjugacy classes. Is that all, then? Nope. We can also have particles which carry both charge and flux; these are known as dyons. They are labeled by both an irrep and a conjugacy class of the group $H$. But, for reasons I won't go into, we cannot take all possible combinations of charges and fluxes. (It has to do with the distinguishability of the particle types. Essentially, a dyon is labeled by $|\alpha, \Pi\rangle$, where $\alpha$ is a conjugacy class and $\Pi$ is a representation of the associated normalizer $N(\alpha)$ of the conjugacy class $\alpha$.)

The downside of this approach is the rather unequal treatment of flux-carrying particles (labeled by conjugacy classes), charged particles (labeled by representations) and dyons (flux plus compatible charge). A unifying picture is provided by making use of the (quasitriangular) Hopf algebra $D(H)$, also known as the quantum double of the group $H$. In this language all particles are (irreducible) representations of the Hopf algebra $D(H)$. A Hopf algebra is endowed with certain structures which have very physical counterparts. For instance, the existence of a tensor product allows for multiple-particle configurations (each particle labeled by its own representation of the Hopf algebra). The co-multiplication then defines how the algebra acts on this tensored space. The existence of an antipode (a certain mapping from the algebra to itself) ensures the existence of an anti-particle. The existence of a unit labels the vacuum (= the trivial particle). We can also go beyond the structure of a Hopf algebra and include the notion of an R-matrix. In fact, the quasitriangular Hopf algebra (i.e. the quantum double) does precisely this: it adds the R-matrix mapping.
This R-matrix describes what happens when one particle loops around another particle (braiding). For non-Abelian groups $H$ this leads to non-Abelian statistics. These quasitriangular Hopf algebras are also known as quantum groups. Nowadays the language of discrete gauge theory has been replaced by more general structures, referred to as topological field theories, anyon models or even modular tensor categories. The subject is huge, very rich, very physical and a lot of fun (if you're a bit nerdy ;)). Sources: http://arxiv.org/abs/hep-th/9511201 (discrete gauge theory) http://www.theory.caltech.edu/people/preskill/ph229/ (lecture notes: check out chapter 9. Quite accessible!) http://arxiv.org/abs/quant-ph/9707021 (a simple lattice model with anyons. There are definitely more accessible review articles of this model out there though.) http://arxiv.org/abs/0707.1889 (review article, which includes potential physical realizations) This post imported from StackExchange Physics at 2015-11-01 19:23 (UTC), posted by SE-user Olaf
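To make the particle-type counting concrete, here is a small script of my own (not from the cited sources) that counts the particle types of the quantum double $D(H)$ for $H = S_3$. Each type is a pair (conjugacy class, irrep of the centralizer of a class representative - the "normalizer $N(\alpha)$" in the notation above), and the number of irreps of a finite group equals its number of conjugacy classes:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations given as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugacy_classes(H):
    """Conjugacy classes of a permutation group given as a list of elements."""
    classes, seen = [], set()
    for g in H:
        if g in seen:
            continue
        cls = {compose(compose(h, g), inverse(h)) for h in H}
        seen |= cls
        classes.append(cls)
    return classes

G = list(permutations(range(3)))  # the symmetric group S3

total = 0
for cls in conjugacy_classes(G):
    g = sorted(cls)[0]  # a representative flux for this class
    centralizer = [h for h in G if compose(h, g) == compose(g, h)]
    # number of irreps of the centralizer = its number of conjugacy classes
    total += len(conjugacy_classes(centralizer))

print(total)  # 8 particle types (anyons) in D(S3)
```

The three fluxes (identity, transpositions, 3-cycles) contribute 3 + 2 + 3 = 8 types, the well-known anyon count of $D(S_3)$.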
Help:Editing Math Equations using TeX

This is how you edit math equations using the TeX syntax to make nice-looking equations. Please use TeX when writing math: trying to put equations directly into the text doesn't look very nice, and TeX is very easy to learn. If you already use TeX, then all you need to know is that your normal syntax must be surrounded by tags:

For HTML rendering if possible: <math> syntax </math>

For forced TeX rendering (which produces an image): <math> syntax ~</math> or <math> syntax \,\!</math> or <math> syntax \,</math>

The above forced renderings are explained in Forced PNG rendering. If you've never used TeX before, here's a crash course:

Contents

Fractions

To make a fraction use: \frac{foo}{bar} These render as:

Superscript and Subscript

To make superscripts and subscripts, use: x^2 y_0 These render as:

Greek Letters

To make Greek letters, you just need to know their names. Use a capital for the capital letter, a lower-case for the lower-case letter: \pi \theta \omega \Omega \gamma \Gamma \alpha \beta These render like:

Trig Stuff

You can use the following to render trig functions without italics, so they look nicer: \cos(\theta) \sin(\theta) \tan(\theta) These render as: , as opposed to: cos(\theta) sin(\theta) tan(\theta) which look like: In general, if you want to make words non-italic in math, use \text{foo bar} which looks like instead of

Big Brackets

Usually, a normal parenthesis or bracket will do fine: x^2 (2x + y) renders as: But if you have big stuff like fractions, it doesn't always look as nice: (\frac{\pi}{2}) renders as: Instead, use: x^2 \left(2x + y\right) \left( \frac{\pi}{2} \right) which looks like: Notice that these kinds of brackets are always the right size. They also work with square brackets: \left[ \frac{\pi}{2} \right] \left[ x^2 (2x + y) \right] renders as:

Other

These are some other things that may be useful.
In general, if you want to know how to make a type of symbol, you can find many useful lists by searching for TeX or LaTeX math symbols. \sqrt{foo} \int_a^b f(x)dx \pm \mp \approx

One thing you may notice is that adding spaces between symbols will not add more space in the rendered output. a_0 a_1 and a_0    a_1 will both render as: If you want to force spacing between symbols, you have to use the "~" symbol: a_0~~~~~~~~~a_1 renders as:

Also, it can be useful to note that there are two kinds of math font that the wiki will try to use. One is a smaller font which fits better into a line of normal text, and the other is a larger font which looks nicer when you have an equation on its own line. Whenever you make something "big" like a fraction or square root symbol, the wiki will automatically use the bigger font. Sometimes you may want to force a line to be bigger because it looks nicer. I have found no nice way to do this, other than that it just so happens that if you put a "~" at the end of a line, that line will be rendered in the bigger font; since the "~" is at the end, you won't notice that there is technically an extra blank space there.

<math>A\cos(\omega t)</math> <math>A\cos(\omega t)~</math>

These will render as: Lastly, don't be afraid to nest things together to make really complicated-looking expressions:

v_2 = \sqrt{\frac{2\left(P_1 - P_2\right)}{\rho \left(1 - {\left(\frac{A_2}{A_1}\right)}^2\right)}}

will render as:
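Putting several of the pieces above together, here is one complete worked example combining fractions, Greek letters, upright trig functions, and auto-sized brackets (the formula itself is just an illustration, in standard wiki `<math>` markup):

```latex
<math>
x(t) = A \cos\left( \omega t + \frac{\pi}{2} \right), \qquad
\omega = \sqrt{\frac{k}{m}}
</math>
```

Appending the trailing "~" before the closing tag would force the same line into the larger PNG font, as described above.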
№ 9 All Issues Volume 68, № 8, 2016

Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1011-1020
We prove the existence of wave operators for the multidimensional electromagnetic Schrödinger operator in divergent form by the Cook method. Moreover, under certain conditions on the coefficients of the given operator, we establish the isometry of its wave operators and determine the initial domains of these operators.

Jackson-type inequalities with generalized modulus of continuity and exact values of the $n$-widths of the classes of $(\psi,\beta)$-differentiable functions in $L_2$. II
Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1021-1036
In the space $L_2$, we determine the exact values of some $n$-widths for the classes of functions such that the generalized moduli of continuity of their $(\psi, \beta)$-derivatives or their averages with weight do not exceed the values of the majorants $\Phi$ satisfying certain conditions. Specific examples of realization of the obtained results are also analyzed.

Problem without initial conditions for a countable semilinear hyperbolic system of first-order equations
Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1043-1055
We derive sufficient conditions for the solvability of the problem without initial conditions for a countable semilinear hyperbolic system of first-order equations and establish conditions for the classical solvability of the initial-boundary value problem for countable hyperbolic systems of semilinear first-order equations in a semistrip.

Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1056-1067
We study the rate of convergence of the values of analogs of the functionals of strong approximation of Fourier series in generalized $L$-Hölder spaces.

Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1068-1079
We study several problems in the field of modified divisors; more precisely, from the theory of exponential and infinitary divisors. We analyze the behavior of modified divisor, sum-of-divisors, and totient functions.
Our main results are connected with the asymptotic behavior of mean values and explicit estimates of the extremal orders. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1080-1091 We establish the exact-order estimates of Kolmogorov and orthoprojective widths of anisotropic Besov classes of periodic functions of several variables in the spaces $L_q$. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1092-1101 We study the process of transfer of Markov perturbations and control over this process under the condition of existence of the equilibrium point of the quality criterion. For this control, we construct a normalized process and establish its asymptotic normality in the form of the Ornstein–Uhlenbeck process in the case where the transfer process changes under the influence of Markov switchings along a new trajectory of evolution from the state in which it was at the time of switching. Spectral problem for the Sturm–Liouville operator with retarded argument which contains a spectral parameter in the boundary condition Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1102-1114 Nonlocal mixed-value problem for a Boussinesq-type integrodifferential equation with degenerate kernel Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1115-1131 We consider the problem of one-valued solvability of the mixed-value problem for a nonlinear Boussinesq-type fourth-order integrodifferential equation with degenerate kernel and integral conditions. The method of degenerate kernel is developed for the case of a nonlinear Boussinesq-type fourth-order partial integrodifferential equation. The Fourier method of separation of variables is employed.
After redenoting, the integrodifferential equation is reduced to a countable system of algebraic equations with nonlinear and complex right-hand side. As a result of the solution of this countable system of algebraic equations and substitution of the obtained solution in the previous formula, we get a countable system of nonlinear integral equations (CSNIE). To prove the theorem on one-valued solvability of the CSNIE, we use the method of successive approximations. Further, we establish the convergence of the Fourier series to the required function of the mixed-value problem. Our results can be regarded as a subsequent development of the theory of partial integrodifferential equations with degenerate kernel. Properties of the isolated solutions bounded on the entire axis for a system of nonlinear ordinary differential equations Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1132-1138 We establish the conditions of continuous dependence on the right-hand side for the “isolated” solutions of a system of nonlinear ordinary differential equations bounded on the entire axis. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1139-1141 We give new and brief proofs of the results obtained by X. Li and T. Zhao in [$\mathrm{S}\Phi$-supplemented subgroups of finite groups // Ukr. Math. J. – 2012. – 64, № 1. – P. 102–109]. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1142-1146 Let $G$ be a finite group. The prime graph of $G$ is the graph $\Gamma(G)$ with vertex set $\pi(G)$, the set of all prime divisors of $|G|$, where two distinct vertices $p$ and $q$ are adjacent by an edge if $G$ has an element of order $pq$. We prove that if $\Gamma(G) = \Gamma(G_2(5))$, then $G$ has a normal subgroup $N$ such that $\pi(N) \subseteq \{2, 3, 5\}$ and $G/N \cong G_2(5)$. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1147-1151 We establish more general conditions for the univalence of analytic functions in the open unit disk $U$.
In addition, we obtain a refinement to the criterion of quasiconformal extension for the main result.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin\theta = x = \pm\sqrt{R^2 - b^2}\,\sin\sigma, \qquad r\cos\theta = z = R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is: (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1 - \frac{1}{2!z^2} + \frac{1}{4!z^4} - \cdots = (1-y)$, where $y = \frac{1}{2!z^2} - \frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000 and 999999s But I think that to prove the implication for transitivity, a use of the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or the FOL axioms (without equality axioms). This would allow us, in some cases, to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation of why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ be zero, because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
I'm not sure where I could pose a challenge to find the best $f(n)$ so people will join in. $n\ge 5$ will probably never be proven optimal, but some lucky computations or out-of-the-box analysis might give nice results. (Given $n$ fixed digits and operations $(+,-,\times,\div)$, what's the highest $N\in\mathbb N$ such that all numbers $1\dots N$ can be built? $f(n)=N$) @TheSimpliFire You mentioned base; is it true that using digits $\lt b$ means we can represent some number $N$ using $\le (b+1)\log_b N$ digits, if only $+,\times$ are allowed? If $b=2$, the $3\log_2 N$ bound is given: https://arxiv.org/pdf/1310.2894.pdf and explained: "The upper bound can be obtained by writing $N$ in binary and finding a representation using Horner's algorithm." So if we actually allow $\le b$ digits, we have $\log_b N$ digits and that many bases, so the bound would be $2\log_b N$? https://en.wikipedia.org/wiki/Horner%27s_method @TheSimpliFire The problem is inverting the bound, which is not trivial if $b\ne 2$. For example, we can build $1=2-1$ using the digits $1,2$, but adding $5$ and now having the set $1,2,5$ does NOT allow us to rebuild $1$, since all digits must be used. So keeping consecutive integers from the $n-1$ digit case is not guaranteed. This is the issue. The $d$ is fixed at $n$ digits and all need to be used. That's why I took $d_i=2^{i-1}$ digit sets, since we can divide the two largest to get the $n-1$ case, and this eventually allows us to obtain the bound $f(n)\ge2^n-1$. Inductively. $i=1,\dots,n$ This is not the issue if all digits are $1$'s either, for which they give the bound $3\log_2 N\ge a(N)$, which can be translated to $f(n)\ge 2^{n/3}$, since multiplying two $1$'s reduces the case to $n-1$ and allows induction. We need to inductively build digits $d_i$ so the next set can achieve at least what the previous one did. Otherwise, it is hard to prove the next step is better when adding more digits.
For example we can add $d_0,d_0/2,d_0/2$ where $d_0$ can be anything since $d_0-d_0/2-d_0/2$ reduces us to case $n-3$. The comments discuss setting better bounds using similar construction (on my last question) I'm not sure if you have the full context of the question or if this makes sense so sorry for clogging up the chat :P
The only way in which your original grammar produces a string of the form $a^nb^n$ is if the production $S\to aSbb$ is never used. Similarly, the only way in which it produces $a^nb^{2n}$ is if $S \to aSb$ is never used. We thus want to force both of them to be used. Since all these productions "commute" (it doesn't matter in which order you apply them), we might as well assume that the first thing you do is apply these two productions, and the rest is arbitrary. The corresponding grammar is $$\begin{align*}&S \to aaTbbb \\&T \to aTb \mid aTbb \mid \lambda\end{align*}$$ If instead we apply these two productions at the end, we get an even more succinct grammar:$$S \to aSb \mid aSbb \mid aabbb$$ Alternatively, notice that $n < m < 2n$ is equivalent to $n-2 \leq m-3 \leq 2(n-2)$. This is since $n < m$ iff $n-2 \leq m-3$, and similarly $m < 2n$ iff $m-3 \leq 2n-4$. We can therefore write the language as$$\begin{align*}L &= \{ a^nb^m \mid n-2 \leq m-3 \leq 2(n-2) \} \\ &= \{ a^{n+2} b^{m+3} \mid n \leq m \leq 2n \}.\end{align*}$$The grammars above implement the latter form.
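As a quick sanity check of the two claims above (my own script, not part of the original answer), one can enumerate derivations of the succinct grammar and compare them against the defining condition $n < m < 2n$:

```python
# Sanity check (not a proof) that the compact grammar
#   S -> aSb | aSbb | aabbb
# generates exactly { a^n b^m : n < m < 2n }.
def derive(k, l):
    # apply S -> aSb k times, S -> aSbb l times, then S -> aabbb
    return "a" * (k + l + 2) + "b" * (k + 2 * l + 3)

def in_language(w):
    n = len(w) - len(w.lstrip("a"))
    m = len(w) - n
    return w == "a" * n + "b" * m and n < m < 2 * n

# every derivation lands in the language ...
assert all(in_language(derive(k, l)) for k in range(8) for l in range(8))
# ... and every (n, m) with n < m < 2n is reached by k = 2n-m-1, l = m-n-1
for n in range(2, 20):
    for m in range(n + 1, 2 * n):
        assert derive(2 * n - m - 1, m - n - 1) == "a" * n + "b" * m
print("ok")
```

The inverse map $k = 2n-m-1$, $l = m-n-1$ is exactly the change of variables used in the last display.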
Background: We have so far taken the bond $B$ to be deterministic for simplicity, but some reflection shows that this is not in any way necessary. Everything works out the same way with a stochastic bond $B_1(u) \neq B_1(d)$ (except the algebra takes a little more work), as we now describe. The equations defining the hedging portfolio now become $$\phi S(u) + \psi B(u) = X(u) \ \ \ \ \ (1.11)$$ $$\phi S(d) + \psi B(d) = X(d) \ \ \ \ \ (1.12)$$ where we have temporarily dropped the subscript “1” on $S_1$ and $B_1$ for convenience. Assuming that the determinant $\Delta = S(u)B(d) - S(d)B(u)$ is nonzero, the unique solution is $$\phi = \frac{B(d)X(u) - B(u)X(d)}{\Delta} \ \ \ \ (1.13)$$ $$\psi = \frac{S(u)X(d) - S(d)X(u)}{\Delta} \ \ \ \ (1.14)$$ The claim value is again forced by the assumption of no arbitrage to be $$V_0(X) = \phi S_0 + \psi B_0$$ If we define the discount factor (now stochastic) as $\beta(\cdot) = B_0/B(\cdot)$, we obtain $$V_0(X) = E_Q[\beta X] \ \ \ \ \ (1.15)$$ where $Q$ is a candidate probability measure defined by $$Q(u) = \frac{-B(u)S(d) + B(u)B(d)(S_0/B_0)}{\Delta} \ \ \ \ \ (1.16)$$ $$Q(d) = \frac{-B(d)B(u)(S_0/B_0) + S(u)B(d)}{\Delta} \ \ \ \ \ (1.17)$$ Question: Verify formulas (1.15), (1.16), and (1.17) Partial solution: We have \begin{align*} V_0(X) = \phi S_0 + \psi B_0 &= \left(\frac{B(d)X(u) - B(u)X(d)}{\Delta}\right)S_0 + \left(\frac{S(u)X(d) - S(d)X(u)}{\Delta}\right)B_0\\ &= \frac{B(d)X(u)S_0 - B(u)X(d)S_0 + S(u)X(d)B_0 - S(d)X(u)B_0}{\Delta}\\ &= \frac{(B(d)S_0 - S(d)B_0)X(u) + (S(u)B_0 - B(u)S_0)X(d)}{\Delta}\\ &= \left(\frac{B(d)S_0 - S(d)B_0}{\Delta}\right)X(u) + \left(\frac{S(u)B_0 - B(u)S_0}{\Delta}\right)X(d) \end{align*} I believe this is where I need to incorporate the $\beta(\cdot)$ discount factor, but I am really not sure how to do that; I don't really understand the discount factor. Any suggestions are greatly appreciated.
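One way to see where the discount factor enters: writing out $E_Q[\beta X] = Q(u)\,\frac{B_0}{B(u)}X(u) + Q(d)\,\frac{B_0}{B(d)}X(d)$ and matching the coefficients of $X(u)$ and $X(d)$ against the last line of the partial solution gives exactly (1.16) and (1.17). A numerical sanity check of (1.13)–(1.17), with made-up market values (all numbers below are hypothetical):

```python
import math

# Hypothetical one-period market data (any values with nonzero determinant work)
S0, B0 = 100.0, 1.0
Su, Sd = 120.0, 90.0          # S(u), S(d)
Bu, Bd = 1.05, 1.02           # B(u), B(d): a stochastic bond
Xu, Xd = 10.0, 0.0            # claim payoff X(u), X(d)

Delta = Su * Bd - Sd * Bu     # determinant, assumed nonzero

# Hedge ratios (1.13)-(1.14) and the no-arbitrage claim value
phi = (Bd * Xu - Bu * Xd) / Delta
psi = (Su * Xd - Sd * Xu) / Delta
V0 = phi * S0 + psi * B0

# Candidate measure (1.16)-(1.17) and stochastic discount beta(.) = B0/B(.)
Qu = (-Bu * Sd + Bu * Bd * (S0 / B0)) / Delta
Qd = (-Bd * Bu * (S0 / B0) + Su * Bd) / Delta

EQ_betaX = Qu * (B0 / Bu) * Xu + Qd * (B0 / Bd) * Xd

assert math.isclose(Qu + Qd, 1.0)   # Q is a probability measure
assert math.isclose(V0, EQ_betaX)   # verifies (1.15)
print(V0)
```

The check $Q(u)+Q(d)=1$ also falls out algebraically: the $S_0/B_0$ terms cancel and the remainder is $\Delta/\Delta$.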
Thanks in advance for the help. I'm working on a problem that would be greatly simplified if a solution exists for the following optimization problems for some $N \in \mathbb{N}^+$ $$\min \prod_{i \in N} x_i \;\; \text{s.t.} \;\; y = \sum_{i \in N} x_i \; \wedge \; y,x_i \in [0,1]$$ $$\max \prod_{i \in N} x_i \;\; \text{s.t.} \;\; y = \sum_{i \in N} x_i \; \wedge \; y,x_i \in [0,1]$$ At the moment the best I have been able to manage is randomly generating solutions (for my purposes $N$ tends to be small, less than 6 or 7). Does a solution for these problems exist? Alternatively, is there a trivial heuristic solution, or does a set of tight bounds exist that I am just not seeing for some reason? I've noticed that for any given $y$ the interval of solutions tends to be fairly tight. Note that I have included the statistics tag because of what $x_i$ represents in the problem I would like to use this for. Specifically, $x_i$ represents the probability of an event occurring, and the product the probability of a specific outcome of a random variable. The constraint I am using, a sum which can be larger than 1, is the result of a heuristic I have been playing with that seems to work very well. Edit. A bit more information for the eyebrow raises I expect I'll get. I have a discrete random variable whose outcomes depend entirely on a set of events (i.e. $x_i$). Each $x_i$ is independent. What I'm interested in is the behavior of the probabilities of the set of events as they become, so to speak, more concentrated, while the probability of a specific outcome remains constant. For example, with three events two possible solutions would be events with probabilities [.9,.9,.9] and [1,.729,1]. The second solution I would say is more concentrated than the first, since the probability of the outcome I am interested in (i.e. all events occur) is based solely on whether the second event occurs.
A natural statistic to use to measure this, I believe, would be the standard deviation; however, this does not appear to work very well. On a whim, I tried the summation of the events and found that for a specific summation the min and max values of the product tend to be very tight, which, while not perfect, gives me exactly what I want. However, while I know via simulation that the bound is tight, I would prefer a closed-form solution, if one exists. Also, I'm open to any other possible statistic that would capture "concentration", as abstract as this idea is (ideally for a given concentration there would be exactly one $y$ value).
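For what it's worth, the maximization problem does have a closed form: for a fixed sum $y$, the AM–GM inequality says the product is largest when all $x_i = y/N$, giving $(y/N)^N$; the minimum sits at an extreme point of the feasible set (in particular it is $0$ whenever $y \le N-1$, since one coordinate can then be set to zero). A quick Monte-Carlo check of the upper bound (my own script, not from the question):

```python
import random

random.seed(0)
N = 5
for _ in range(10_000):
    x = [random.random() for _ in range(N)]
    y = sum(x)
    prod = 1.0
    for v in x:
        prod *= v
    # AM-GM: the product of N nonnegative numbers with sum y is at most (y/N)^N
    assert prod <= (y / N) ** N + 1e-12
print("bound held for all samples")
```

Equality holds exactly when all $x_i$ are equal, which matches the intuition that the "[.9,.9,.9]" configuration is the least concentrated one for its product.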
The notation is mostly taken from the book "Markov chains and mixing times" by Levin, Peres, and Wilmer. Consider an irreducible, aperiodic, time-reversible, discrete-time Markov chain on a finite state space. The relaxation time $t_{\mathrm{rel}}$ is the reciprocal of the absolute spectral gap. The $\epsilon$-mixing time $t_{\mathrm{mix}}(\epsilon)$ is the shortest time in which, regardless of any initialization, the state becomes $\epsilon$-close to the stationary distribution in the total variation sense. The two are related as: Levin-Peres-Wilmer (Theorems 12.3, 12.4) $$(t_{\mathrm{rel}}-1)\log\frac{1}{2\epsilon}\leq t_{\mathrm{mix}}(\epsilon)\leq t_{\mathrm{rel}}\log\frac{1}{\epsilon\min_x\pi(x)}~,$$ where $\min_x\pi(x)$ is the smallest atom of the stationary distribution. If the Markov chain is a random walk on a 3-regular expander graph, then $t_{\mathrm{rel}}$ is a constant, but $t_{\mathrm{mix}} = \Theta(\log n)$ if the graph has $n$ vertices. Thus, the upper bound here better describes the behavior of the mixing time relative to the relaxation time, since $\pi(x) = \frac 1n$ for all $x$. My question is: How large can the ratio $\frac{t_{\mathrm{mix}}(1/8)}{t_{\mathrm{rel}}}$ be? From the above theorem, it is at most $\log\frac{8}{\min_x\pi(x)}$, but can it indeed be as large as this when $\min_x \pi(x)$ may be very tiny? I conjecture that $$\frac{t_{\mathrm{mix}}(1/8)}{t_{\mathrm{rel}}}\leq O(\log n)~.$$ Any proofs or counterexamples?
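The flavor of these quantities is easy to see on a toy example. The sketch below (my own illustration, not from the book) computes $t_{\mathrm{rel}}$ and $t_{\mathrm{mix}}(1/8)$ for the lazy simple random walk on an $n$-cycle, whose eigenvalues $\tfrac12 + \tfrac12\cos(2\pi k/n)$ are known in closed form, and checks that the two bounds of Theorems 12.3/12.4 bracket the observed mixing time:

```python
import math

# Lazy simple random walk on the n-cycle (illustrative choice):
# P(i,i) = 1/2, P(i, i+-1) = 1/4; the stationary distribution is uniform.
n = 16
pi_st = [1.0 / n] * n

# Eigenvalues are 1/2 + (1/2)cos(2*pi*k/n), all nonnegative, so the
# absolute spectral gap equals 1 - lambda_2.
lam2 = 0.5 + 0.5 * math.cos(2 * math.pi / n)
t_rel = 1.0 / (1.0 - lam2)

def step(mu):
    return [0.5 * mu[i] + 0.25 * mu[(i - 1) % n] + 0.25 * mu[(i + 1) % n]
            for i in range(n)]

# Evolve the distribution from a point mass (by vertex-transitivity every
# start is equally bad) and record when total-variation distance drops below eps.
mu, t, eps = [1.0] + [0.0] * (n - 1), 0, 1 / 8
while 0.5 * sum(abs(m - p) for m, p in zip(mu, pi_st)) > eps:
    mu = step(mu)
    t += 1
t_mix = t

lower = (t_rel - 1) * math.log(1 / (2 * eps))
upper = t_rel * math.log(1 / (eps * min(pi_st)))
assert lower <= t_mix <= upper  # Theorems 12.3/12.4 bracket the mixing time
print(t_rel, t_mix)
```

For the cycle, $\min_x \pi(x) = 1/n$, so the ratio in the question is at most $\log(8n)$ here — consistent with (but far weaker than) the conjectured $O(\log n)$.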
Scientific posters present technical information and are intended for congresses or presentations to colleagues. Since LaTeX is the most natural choice to typeset scientific documents, one should be able to create posters with it. This article explains how to create posters with LaTeX. Contents The two main options when it comes to writing scientific posters are tikzposter and beamerposter. Both offer simple commands to customize the poster and support large paper formats. Below, you can see a side-by-side comparison of the output generated by both packages (tikzposter on the left and beamerposter on the right). Tikzposter is a document class that merges the projects fancytikzposter and tikzposter and it's used to generate scientific posters in PDF format. It accomplishes this by means of the TikZ package, which allows a very flexible layout. The preamble in a tikzposter class has the standard syntax. \documentclass[24pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usetheme{Board} \begin{document} \maketitle \end{document} The first command, \documentclass[...]{tikzposter}, declares that this document is a tikzposter. The additional parameters inside the brackets set the font size, the paper size and the orientation, respectively. The available font sizes are: 12pt, 14pt, 17pt, 20pt and 24pt. The possible paper sizes are: a0paper, a1paper and a2paper. There are some additional options; see the further reading section for a link to the documentation. The commands title, author, date and institute are used to set the author information; they are self-descriptive. The command \usetheme{Board} sets the current theme, i.e. changes the colours and the decoration around the text boxes. See the reference guide for screenshots of the available themes. The command \maketitle prints the title on top of the poster.
The body of the poster is created by means of text blocks. Multi-column placement can be enabled and the width can be explicitly controlled for each column, this provides a lot of flexibility to customize the look of the final output. \documentclass[25pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usepackage{blindtext} \usepackage{comment} \usetheme{Board} \begin{document} \maketitle \block{~} { \blindtext } \begin{columns} \column{0.4} \block{More text}{Text and more text} \column{0.6} \block{Something else}{Here, \blindtext \vspace{4cm}} \note[ targetoffsetx=-9cm, targetoffsety=-6.5cm, width=0.5\linewidth ] {e-mail \texttt{sharelatex@sharelatex.com}} \end{columns} \begin{columns} \column{0.5} \block{A figure} { \begin{tikzfigure} \includegraphics[width=0.4\textwidth]{images/lion-logo.png} \end{tikzfigure} } \column{0.5} \block{Description of the figure}{\blindtext} \end{columns} \end{document} In tikzposter the text is organized in blocks, each block is created by the command \block{}{} which takes two parameters, each one inside a pair of braces. The first one is the title of the block and the second one is the actual text to be printed inside the block. The environment columns enables multi-column text, the command \column{} starts a new column and takes as parameter the relative width of the column, 1 means the whole text area, 0.5 means half the text area and so on. The command \note[]{} is used to add additional notes that are rendered overlapping the text block. Inside the brackets you can set some additional parameters to control the placement of the note, inside the braces the text of the note must be typed. The standard LaTeX commands to insert figures don't work in tikzposter, the environment tikzfigure must be used instead. 
The package beamerposter enhances the capabilities of the standard beamer class, making it possible to create scientific posters with the same syntax as a beamer presentation. So far there are not many themes for this package, and it is slightly less flexible than tikzposter, but if you are familiar with beamer, using beamerposter doesn't require learning new commands. Note: In this article a special beamerposter theme will be used. The theme "Sharelatex" is based on the theme "Dreuw" created by Philippe Dreuw and Thomas Deselaers, but it was modified to make it easier to insert the logo and print the e-mail address at the bottom of the poster. Those are hard-coded in the original themes. Even though this article explains how to typeset a poster in LaTeX, the easiest way is to use a template as a starting point. We provide several in the ShareLaTeX templates page. The preamble of a beamerposter is basically that of a beamer presentation, except for an additional command. \documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University]{The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}} The first command in this file is \documentclass{beamer}, which declares that this is a beamer presentation. The theme "Sharelatex" is set by \usetheme{Sharelatex}. There are some beamer themes on the web; most of them can be found on the web page of the beamerposter authors.
The command \usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter} imports the beamerposter package with some special parameters: the orientation is set to portrait, the poster size is set to a0 and the fonts are scaled by a factor of 1.4. The poster sizes available are a0, a1, a2, a3 and a4, but the dimensions can be arbitrarily set with the options width=x,height=y. The rest of the commands set the standard information for the poster: title, author, institute, date and logo. The command \logo{} won't work in most of the themes, and has to be set by hand in the theme's .sty file. Hopefully this will change in the future. Since the document class is beamer, to create the poster all the contents must be typed inside a frame environment. \documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University] {The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}} \begin{document} \begin{frame}{} \vfill \begin{block}{\large Fontsizes} \centering {\tiny tiny}\par {\scriptsize scriptsize}\par {\footnotesize footnotesize}\par {\normalsize normalsize}\par ... \end{block} \vfill \begin{columns}[t] \begin{column}{.30\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items \item some items ... \end{itemize} \end{block} \end{column} \begin{column}{.48\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items and $\alpha=\gamma, \sum_{i}$ ... \end{itemize} $$\alpha=\gamma, \sum_{i}$$ \end{block} ...
\end{column} \end{columns} \end{frame} \end{document} Most of the content in the poster is created inside a block environment; this environment takes as parameter the title of the block. The environment columns enables multi-column text, the command \column starts a new column and takes as parameter the width of said column. All LaTeX units can be used here; in the example the column width is set relative to the text width. Tikzposter themes Default Rays Basic Simple Envelope Wave Board Autumn Desert For more information see
Given an isolated $N$-particle dynamical system with only two-body interactions, that is $$H=\sum_{i=1}^N\frac{\mathbf{p}_i^2}{2m}+\sum_{i<j}V(\mathbf{r}_i-\mathbf{r}_j)$$ In the thermodynamic limit, that is $N\gg 1$ and $N/V=$ constant, it seems that not every two-body interaction can make the system approach thermal equilibrium automatically. For example, if the interaction is an inverse-square attractive force, we know the system cannot approach thermal equilibrium. Although there is [Boltzmann's H-theorem](https://en.wikipedia.org/wiki/H-theorem) to derive the second law of thermodynamics, it relies on the [Boltzmann equation](https://en.wikipedia.org/wiki/Boltzmann_equation), which is derived from [Liouville's equation](https://en.wikipedia.org/wiki/Liouville's_equation) in the approximation of low density and short-range interactions. My questions: 1. Does it mean that any isolated system with low density and short-range interactions can approach thermal equilibrium automatically? If not, what's the counterexample? 2. For an isolated system with long-range interactions or high density, what are the necessary and sufficient conditions for the system to approach thermal equilibrium automatically? What about the Coulomb interaction? 3. How does one prove rigorously that a pure self-gravitational system cannot approach equilibrium? I have only heard the hand-waving argument that gravity tends to make matter clump, but I have never seen a rigorous proof. I know there is the maximal-entropy postulate for the microcanonical ensemble. I just want to find the range of application of this postulate. I'm always curious about the above questions, but I have never seen them discussed in any textbook of statistical mechanics. You can also cite literature in which I can find the answer.
Let $M_d$ be a $d$-manifold generator of a subgroup of the bordism group $$ \Omega_d^{G}, $$ or its further generalization $$ \Omega_d^{G}(K(\mathcal{G},n+1)), $$ where $G$ is the given structure, including the tangent bundle structure (such as SO or Spin) and the internal gauge bundle structure (such as an additional compact Lie group). The $K(\mathcal{G},n)$ is the Eilenberg–MacLane space of $\mathcal{G}$; if $n > 1$, then $\mathcal{G}$ must be abelian. Generally, $M_d$ cannot be written as $M'_{d-1} \times S^1$, nor $\tilde M_{d-n} \times T^n$, where $T^n$ is the $n$-torus. Here are my questions: (1) However, are there certain cases where $$M_d \overset{?}{=}M_{d-n} \times T^n,$$ such that $M_{d-n}$ is also a $(d-n)$-manifold generator of a subgroup of the bordism group $$ \Omega_{d-n}^{G}(K(\mathcal{G},1))? $$ Now $K(\mathcal{G},1)=B\mathcal{G}$ is the classifying space of $\mathcal{G}$. (2) Are there actually mathematical proofs or theorems stating relations of the kind given above between the bordism group generators of $\Omega_d^{G}(K(\mathcal{G},n+1))$ and the bordism group generators of $\Omega_{d-n}^{G}(K(\mathcal{G},1))$, for whatever integer $n$? If so, please state the results and provide references. Many thanks.
Cost-Complexity Pruning
Post-pruning algorithm for Decision Trees by Breiman, Friedman, Olshen, Stone (1984)
Cost-Complexity Function
need to optimize the cost-complexity function $R_\alpha (T) = R(T) + \alpha \cdot | f(T) |$ where
$R(T)$ is the training/learning error
$f(T)$ a function that returns the set of leaves of tree $T$
$\alpha$ is a regularization parameter
$R(T) = \sum_{t \in f(T)} r(t) \cdot p(t) = \sum_{t \in f(T)} R(t)$
$\sum_{t \in f(T)} R(t)$ - sum of misclassification errors at each leaf
$r(t) = 1 - \max_k p(C_k \mid t)$ - misclassification rate
$p(t) = \cfrac{n(t)}{n}$ with $n(t)$ being the # of records in node $t$ and $n$ - total # of records
Pruning Subtrees
Subtrees: Pruning a subtree $T_{t}$
$R_\alpha(T - T_t) - R_\alpha(T)$ - variation of the cost-complexity function, i.e. the change in cost-complexity when pruning subtree $T_t$
$R_\alpha(T - T_t) - R_\alpha(T) = R(T - T_t) - R(T) + \alpha ( | f(T - T_t) | - |f(T)| ) = R(t) - R(T_t) + \alpha ( 1 - |f(T_t)| )$
let $\alpha' = \cfrac{R(t) - R(T_t)}{|f(T_t)| - 1}$
the variation is
null if $\alpha = \alpha'$
positive if $\alpha < \alpha'$
negative if $\alpha > \alpha'$ (so pruning $T_t$ pays off once $\alpha$ exceeds $\alpha'$)
Algorithm
Pruning Algorithm:
Initialization: let $T^1$ be the tree obtained with $\alpha^1 = 0$ by minimizing $R(T)$
Step 1
select the node $t \in T^1$ that minimizes $g_1(t) = \cfrac{R(t) - R(T^1_t)}{|f(T^1_t)| - 1}$
let $t_1$ be this node
let $\alpha^2 = g_1(t_1)$ and $T^2 = T^1 - T^1_{t_1}$
step $i$
select the node $t \in T^i$ that minimizes $g_i(t) = \cfrac{R(t) - R(T^i_t)}{|f(T^i_t)| - 1}$
let $t_i$ be this node
let $\alpha^{i + 1} = g_i(t_i)$ and $T^{i+1} = T^i - T^i_{t_i}$
Output:
a sequence of trees $T^1 \supseteq T^2 \supseteq \ ... \ \supseteq T^k \supseteq \ ... \ \supseteq \{ \text{root} \}$
a sequence of parameters $\alpha^1 \leqslant \alpha^2 \leqslant \ ... \ \leqslant \alpha^k \leqslant \ ... $
Choosing $\alpha$
The algorithm outputs $\alpha^1 \leqslant \alpha^2 \leqslant \ ... \ \leqslant \alpha^k \leqslant \ ... 
$
need to choose some $\alpha \in [\alpha^k, \alpha^{k+1})$
How to choose $\alpha$
Example
Example 1
Suppose we have the following tree; we want to prune it
we have 3 inner nodes where we can prune: $t_1 \equiv \text{root}, t_2, t_3$
Some formulas:
$R(T_t)$ - training error of a subtree $T_t$ - a tree with root at node $t$
$R(T_t) = \sum_{l \in f(T_t)} R(l)$ - sum of the training errors over all leaves
$R(t)$ - training error of node $t$
$R(t) = r(t) \cdot p(t)$
$r(t)$ - misclassification error at this node (without considering the leaves)
$p(t)$ - proportion of data items that reached $t$ (i.e. # of items that reached $t$ divided by # of training items)
$g(t) = \cfrac{R(t) - R(T_t)}{| f(T_t) | - 1}$
$| f(T_t) | - 1$ - the reduction in the number of leaves when $T_t$ is pruned
Iteration 1:
$t_1$: $R(t_1) = \cfrac{8}{16} \cdot \cfrac{16}{16}$; $T_{t_1}$ is the entire tree and all leaves are pure, so $R(T_{t_1}) = 0$; $g(t_1) = \cfrac{8/16 - 0}{4 - 1} = \cfrac{1}{6}$
$t_2$: $R(t_2) = \cfrac{4}{12} \cdot \cfrac{12}{16} = \cfrac{4}{16}$ (there are 12 records, 4 $\blacksquare$ + 8 $\bigcirc$); $R(T_{t_2}) = 0$; $g(t_2) = \cfrac{4/16 - 0}{3 - 1} = \cfrac{1}{8}$
$t_3$: $R(t_3) = \cfrac{2}{6} \cdot \cfrac{6}{16} = \cfrac{2}{16}$; $R(T_{t_3}) = 0$; $g(t_3) = \cfrac{2/16 - 0}{2 - 1} = \cfrac{1}{8}$
We want to find the minimal $g(t)$: it's $g(t_2)$ and $g(t_3)$
in case of a tie, we choose the one that prunes fewer nodes, i.e. 
$g(t_3)$ so prune at $t_3$ let $\alpha^{(2)} = 1/8$ (the min $g(t)$) Iteration 2: in the tree now we have only candidates: $t_1$ and $t_2$ $t$ $R_(t)$ $R(T_t)$ $g(t)$ $t_1$ $\cfrac{8}{16} \cdot \cfrac{16}{16}$ $\cfrac{2}{16}$ $\cfrac{8/16 - 2/16}{3 - 1} = \cfrac{6}{32}$ $t_2$ $\cfrac{4}{12} \cdot \cfrac{12}{16}$ $\cfrac{2}{16}$ $\cfrac{4/16 - 2/16}{2 - 1} = \cfrac{1}{8}$ Find minimal $g(t)$: it's $g(t_2) = 1/8$ let $\alpha^{(3)} = 1/8$ prune at $t_2$ Iteration 3: only one candidate for pruning: $t_1$ $\alpha^{(4)} = g(t_1) = \cfrac{8/16 - 4/16}{2 - 1} = \cfrac{1}{4}$ Selecting the best: we have these values: $\alpha^{(0)} = 0, \alpha^{(1)} = 1/8, \alpha^{(2)} = 1/8, \alpha^{(3)} = 1/4$ by the theorem we want to find tree such $T$ that minimizes the cost-complexity function if $0 \geqslant \alpha < 1/8$, then $T_1$ is the best if $\alpha = 1/8$, then $T_2$ is the best if $1/8 < \alpha < 1/4$, then $T_3$ is the best if $1/8 < \alpha < 1/4$, then $T_3$ is the best to choose $\alpha$ use Cross-Validation Example 2 From IT4BI 2013 year exam: Sources
I wanted to better understand DFAs. I wanted to build upon a previous question: Creating a DFA that only accepts number of a's that are multiples of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts numbers of a's that are multiples of 3 but does NOT have the sub... Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th... I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Consider a non-UFD that has only 2 units ($-1, 1$) and in which the minimum difference between 2 elements is $1$. Also, there are only a finite number of elements for any given fixed norm. (Maybe that follows from the other 2 conditions?) I wonder about counting the irreducible elements bounded by a lower... How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating means either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both. I want to construct an NFA from this, but I'm struggling with the regex part
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of an interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it, but the accepted answer conveniently assumes 's' to be "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex:$$ie(P_A+P_B)^{\mu}$$External boson: $1$ Photon: $\epsilon_{\mu}$ Multiplying these will give the inv... I am currently studying the history of the discovery of electricity, and I am searching for each scientist on Google, but I am not getting good answers for some of them. Can you suggest a good app for studying the history of these scientists? I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under a continuity assumption. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate.
If you want to experiment with lenses and optics, then you may use the Mitsubishi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt, Optimistic Nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one-handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
2019-10-14 17:21 Performance of VELO clustering and VELO pattern recognition on FPGA / LHCb Collaboration. This document contains plots and tables showing the performance obtained on VELO clustering and VELO pattern recognition using algorithms implementable on FPGA. The data used are simulated with LHCb Upgrade conditions. LHCB-FIGURE-2019-011. Geneva: CERN, 2019 - 16.
2019-10-11 14:20 TURBO stream animation / LHCb Collaboration. An animation illustrating the TURBO stream is provided. It shows events discarded by the trigger in quick sequence, followed by an event that is kept but stripped of all data except four tracks [...] LHCB-FIGURE-2019-010. Geneva: CERN, 2019 - 3.
2019-09-12 16:43 Pending / LHCb Collaboration. LHCB-FIGURE-2019-008. Geneva: CERN.
2019-09-10 11:06 Smog2 VELO tracking efficiency / LHCb Collaboration. The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting in the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm$. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...] LHCB-FIGURE-2019-007. Geneva: CERN - 4.
2019-09-09 14:37 Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ / LHCb Collaboration. A background rejection study has been made using LHCb simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$.
Two variables were explored, and their rejection power was estimated by applying selection criteria. [...] LHCB-FIGURE-2019-006. Geneva: CERN - 4.
2019-09-06 14:56 Tracking efficiencies prior to alignment corrections from 1st data challenges / LHCb Collaboration. These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data challenge tests. In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...] LHCB-FIGURE-2019-005. Geneva: CERN, 2019 - 5.
2019-09-02 15:30 First study of the VELO pixel 2-half alignment / LHCb Collaboration. A first look into the 2-half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum-bias Monte Carlo Run 3 sample in order to investigate its functionality. [...] LHCB-FIGURE-2019-003. Geneva: CERN - 4.
I'm working on a 4-layer PCB with a U-Blox module and I'm trying to calculate the spacing of the fencing vias next to the antenna trace and of the stitching vias. According to the datasheet we have the following possible frequencies: The case which would cause the closest vias is the last one. According to the calculations suggested here https://electronics.stackexchange.com/a/42028 I should ideally place them with the following spacing: $$ \frac{\lambda}{20} = \frac{c/2170\ \mathrm{MHz}}{20} = \frac{138.2\ \mathrm{mm}}{20} \approx 6.9\ \mathrm{mm} $$ but if we look at this image from the datasheet it seems way too much compared to what they did; consider that the side of the module (the white outline) shown in the above figure is 16 mm, so according to the calculation there should be a dot at the beginning and at the end of the track and that's it. My best guess is that they are basing the fence spacing on the greatest common divisor of the mean values of the frequencies reported in the table, to cover each operational mode. Stitching-wise, I report what I found on this page: https://www.edn.com/electronics-blogs/the-practicing-instrumentation-engineer/4406491/Via-spacing-on-high-performance-PCBs $$ \lambda = \frac{300}{F\times\sqrt{\varepsilon_{R}}} = \frac{300}{2170\ \mathrm{MHz}\times 2.097} \approx 65.9\ \mathrm{mm}, $$ so \$\lambda/8\$ is about 8.2 mm, and that should be the necessary spacing for the ground stitching (Er = 4.4 for typical FR-4 PCB material). To sum up: 1) How much space should there be between the fencing vias in my case? 2) How much space should there be between the stitching vias in my case? 3) Does the frequency actually relate to the fencing/stitching spacing, or is placing the vias closer than the smallest \$\lambda/20\$ (for fencing) and \$\lambda/8\$ (for stitching) all that matters? 4) Is all that stitching shown in the figure really necessary from an RF point of view?
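For reference, the two spacings above can be computed with a short script (a sketch: the 2170 MHz worst case and Er = 4.4 come from the question, and the function names are mine):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fence_spacing_mm(f_hz, n=20):
    # Via-fence spacing: free-space wavelength divided by n, in mm
    return C / f_hz / n * 1000.0

def stitch_spacing_mm(f_hz, eps_r=4.4, n=8):
    # Ground-stitching spacing: in-dielectric wavelength / n, in mm
    wavelength_m = C / (f_hz * math.sqrt(eps_r))
    return wavelength_m / n * 1000.0

f = 2170e6  # highest band edge from the datasheet table
print(round(fence_spacing_mm(f), 1))   # -> 6.9
print(round(stitch_spacing_mm(f), 1))  # -> 8.2
```

Using the highest frequency gives the tightest (safest) spacing; vias placed closer than these values only improve the shielding.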
Let $G$ be a group (for now discrete). A subgroup $H$ of $G$ is called a commensurated subgroup of $G$ if $H\cap xHx^{-1}$ is a finite-index subgroup of $H$ for all $x\in G$. These subgroups are also called Hecke subgroups or almost normal subgroups. My question is: Is there any non-elementary, closed, discrete and commensurated subgroup $H$ of a non-discrete locally compact group $G$? Elementary examples are finite subgroups and normal subgroups. Edit: Regarding Jean Raimbault's comment, we consider all closed discrete nearly normal subgroups of $G$ as elementary examples too (nearly normal means commensurable with some normal subgroup). I just noticed that another class of very elementary examples can be obtained as follows: consider $G=\Gamma\times \Delta$, where $\Gamma$ is discrete, $\Delta$ is an arbitrary non-discrete locally compact group, and $H$ is a commensurated subgroup of $\Gamma$.
Some background on (compact) Belyi surfaces $\newcommand{\Ch}{\hat{\mathbb{C}}}$A compact Riemann surface $X$ is called a Belyi surface if there exists a branched covering map $f:X\to \Ch$ such that $f$ is branched over at most three points of $\Ch$. Here $\Ch$ denotes the Riemann sphere; we can and will take the three points to be $0$, $1$ and $\infty$. (Recall that $f$ is a branched covering map if, for every $a\in\Ch$, there is a simply-connected neighborhood $U$ of $a$ such that $f$ maps every component of $f^{-1}(U)$ like $z\mapsto z^d$ for some $d\geq 1$. The function is branched over $a$ if $d>1$ for every such $U$.) Equivalently, $X$ is a Belyi surface if it can be created by glueing together finitely many equilateral triangles (defining a complex structure at the vertices in the obvious manner). Every Belyi function is uniquely determined, up to a conformal change of variable, by its line complex, also known as a dessin d'enfant. This is a finite graph that essentially tells us how to form the surface by glueing together triangles. In particular, the set of Belyi surfaces is countable. (This also follows from Belyi's famous theorem, which states that Belyi surfaces are exactly those that are definable over a number field.) So, in a nontrivial moduli space of Riemann surfaces, such as the space of complex tori, most surfaces are not Belyi. Belyi functions on non-compact surfaces It seems natural to extend this notion to noncompact surfaces. Definition. Let $X$ be a non-compact Riemann surface. An analytic function $f:X\to\Ch$ is called a Belyi function if $f$ is a branched covering whose only branched points lie over $0$, $1$ and $\infty$, and if $f$ has no removable singularities at the punctures of $X$. Question. On which non-compact surfaces do Belyi functions exist? This question seems extremely natural and came up in discussions among Bishop, Epstein, Eremenko and myself. 
Again, a Belyi function is uniquely determined by its line complex, which is now an infinite graph. Since the space of these graphs is totally disconnected, one might expect that not every non-compact surface supports a Belyi function. However, we discovered that this initial intuition is wrong: Theorem. For every non-compact Riemann surface $X$, there is a Belyi function $f:X\to\Ch$. In particular, every non-compact Riemann surface can be built by glueing together equilateral triangles. Note that, for non-compact surfaces, as pointed out by Misha in the comments, the existence of a Belyi function is formally stronger than being built from triangles, assuming that we allow vertices to be incident to infinitely many triangles. (Such vertices would not correspond to points in the resulting surface, as we have no way of defining a complex structure near these.) To get a Belyi function, we should assume that every vertex is incident to only finitely many triangles, so that each triangle is compactly contained in the resulting surface. (We can also prove the existence of what one might call "Shabat functions", which have two critical values and omit $\infty$.) My question: Have such Belyi functions, and in particular the problem of their existence on arbitrary non-compact surfaces, previously appeared in the literature? (I would also be interested to hear whether our results might be of interest outside of one-dimensional complex function theory and complex dynamics. After all, classical Belyi functions and dessins d'enfants are relevant to many areas of mathematics.)
This article is cited in 3 scientific papers.
Estimation of Solutions of Boundary-Value Problems in Domains with Concentrated Masses Located Periodically along the Boundary: Case of Light Masses
G. A. Chechkin, M. V. Lomonosov Moscow State University
Abstract: We study the asymptotic behavior of solutions and eigenelements of boundary-value problems with rapidly alternating type of boundary conditions in the domain $\Omega\subset\mathbb R^n$. The density, which depends on a small parameter $\varepsilon$, is of the order of $O(1)$ outside small inclusions, where the density is of the order of $O((\varepsilon \delta)^{-m})$. These domains, i.e., concentrated masses of diameter $O(\varepsilon \delta)$, are located near the boundary at distances of the order of $O(\delta)$ from each other, where $\delta=\delta(\varepsilon)\to0$. We pose the Dirichlet condition (respectively, the Neumann condition) on the parts of the boundary $\partial\Omega$ that are tangent to (respectively, lie outside) the concentrated masses. We estimate the deviations of the solutions of the limit (averaged) problems from the solutions of the original problems in the norm of the Sobolev space $W_2^1$ for $m<2$.
DOI: https://doi.org/10.4213/mzm152
UDC: 517.956.226. Received: 27.02.2003
Citation: G. A. Chechkin, "Estimation of Solutions of Boundary-Value Problems in Domains with Concentrated Masses Located Periodically along the Boundary: Case of Light Masses", Mat. Zametki, 76:6 (2004), 928–944; English version: Math. Notes, 76:6 (2004), 865–879.
"Early in his life, Schrödinger experimented in the fields of electrical engineering, atmospheric electricity, and atmospheric radioactivity, but he usually worked with his former teacher Franz Exner. He also studied vibrational theory, the theory of Brownian movement, and mathematical statistics." Schrödinger's cat is a thought experiment, sometimes described as a paradox, devised by Austrian physicist Erwin Schrödinger in 1935. It illustrates what he saw as the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects. The scenario presents a cat that may be simultaneously both alive and dead, a state known as a quantum superposition, as a result of being linked to a random subatomic event that may or may not occur. The thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics. Schrödinger coined the... > Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics. "Despite praise from the Soviet Union government, the money that poured in to support his laboratory, and the honours he was given, Pavlov made no attempts to conceal the disapproval and contempt with which he regarded Soviet Communism." 
it's certainly true to say that Schrodinger's conclusion from his cat scenario was not "QM says the world behaves in a way contrary to intuition" but rather "what we think QM says about the world is so contrary to intuition that it can't possibly be the right interpretation" That, however, tells us nothing about how we should understand QM. Hence why I always feel a bit impatient when I hear someone say that Schrodinger presented his cat as an absurdity. It's true, but that's a historical question, not an immediately scientific one. Ok but with Stern-Gerlach, you can put one magnetic field pointing up/down and you get two beams, one going up and one going down. Now you put a left/right field in one of those beams and you get two beams: one going left and one going right. @Blue w/CC @JohnRennie Hey, I wanted to check in on this before it got too late. On further thought, this isn't quite how you should think of Landau levels. Classical orbits in a magnetic field are never cycloids, they're always circles. The heuristic I was thinking of is this: i.e. a superposition of many circular motions, each with some different quantum mechanical phase (that's the $e^{ikx}$ factor, which should be handled with care - you might have a definite canonical momentum in the $x$ direction, but the kinematic momentum will differ from that). @DanielSank: That would be anachronistic. If you popped into a time machine and asked Aristotle about the scientific method I'm pretty sure he'd grok it. Actually, he grumbled somewhere in Physics about his contemporary critics, asking rhetorically 'do they think that we do not observe'. @Blue keep in mind that most useful treatments of Landau quantization tend to be geared towards the solid state, and the 'particles' you're quantizing are really quasiparticles, i.e. joint excitations on the states of many particles, which then act like a single-particle system. 
The interesting bit isn't increasing the size, but increasing the complexity of the particle, i.e. pushing towards skyrmions and whatnot. @DanielSank: I've read his Physics and it doesn't read like he's making things up; it's written like a contemporary paper: he discusses the findings and suggestions of his predecessors, looking for their strengths and weaknesses, and then goes on to give his own view. Stephen Hawking talks about Aristotle in A Brief History of Time without being patronising. I'm not sure why physicists pick on Aristotle so much. It was the introduction of his work by Islamic scholars, like Averroes, which kicked off the scientific revolution in the modern era; it didn't come from nowhere. Anonymous @DanielSank I guess you need to define $\lambda$ in a better manner. "Probability per unit time" isn't a standard term (afaik). According to your treatment, $\lambda$ is the time average of the probability density function. And $\lambda \Delta t$ gives an approximate value, not the exact one (which I'm happy you addressed in your PSE post). Anonymous Also, it would be nice if you used $\Delta t$ instead of $dt$ there, since writing $dt=T/N$ looks a bit handwavy to me ($dt$ being a differential). You can later take $\Delta t \to 0$ and $N\to \infty$. Anonymous Feel free to give any counter-arguments you might have, I'm interested
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Search Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
$\def\pd{\partial}$ $\def\l{\left}\def\r{\right}$ $\def\mdot{{\dot{m}}}$ $\def\eps{\varepsilon}$ Consider a tube with longitudinal coordinate $x$ from $0$ to $l$ and varying cross-section $A(x)$. Derek S. Bale (a former student of Prof. Randall J. LeVeque) gives in his PhD thesis (http://faculty.washington.edu/rjl/students/dbale/thesis.ps.gz) the Euler equations for density $\rho$, momentum density $(\rho u)$ and total energy density $e$ in such a tube in the form $$ \begin{array}{ccccl} \pd_t \rho &+& \frac1{A}\pd_x(A(\rho u)) &=& 0\\ \pd_t (\rho u) &+& \frac1{A}\pd_x\l(A\l(\frac{(\rho u)^2}\rho+p\r)\r) &=& \frac{A'}{A}p\\ \pd_t e &+& \frac1{A}\pd_x\l(A(e+p)\frac{(\rho u)}{\rho}\r)&=& 0 \end{array} $$ as an example of a spatially varying flux function. The pressure is determined by the gamma-law $p =(\gamma-1)\l(e-\frac12 \frac{(\rho u)^2}{\rho}\r)$. Note that I have slightly reformulated the system so that besides the pressure $p$ only conserved quantities are used. But is it not better to change the conserved variables to the mass per length $m':=\rho A$, the mass flow $\mdot:=A\rho u$ and the energy per length $E':=Ae$? As an intermediate variable one could introduce a pressure force $P:=A p=(\gamma-1)\l(E'-\frac12\frac{\mdot^2}{m'}\r)$. Euler's equations reformulated with these new conserved quantities are: $$\begin{array}{ccccl} \pd_t m' &+& \pd_x\mdot &=& 0\\ \pd_t \mdot &+& \pd_x\l(\frac{\mdot^2}{m'}+P\r) &=& \frac{A'}{A}P\\ \pd_t E' &+& \pd_x\l((E'+P)\frac{\mdot}{m'}\r)&=& 0\end{array}$$ In this way one obtains a system with the structure of the original Euler equations (including a source term) where the flux function does depend on the spatial variable. Therefore, the numerics should work fine, and I could use Roe's method on this problem. Am I missing something here? EDIT: Bale uses the first form to calculate supersonic waves through a tube with a narrowing halfway. The speed-up in the narrowing causes a supersonic wave and the corresponding shock wave.
The variation of $A(x)$ is smooth: $$ A(x) = \begin{cases} 1& \text{ for }x<1\\ 1+\frac\eps2\bigl(\cos(\pi(x-1))-1\bigr)&\text{for }1\leq x \leq 3\\ 1&\text{for }x>3 \end{cases} $$ I am aware of the fact that smooth transformations of the conserved quantities, such as $(m',\mdot,E')=(A\rho,A(\rho u),A e)$, have the potential to change the structure of discontinuous solutions. But is this really the case here?
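As a quick consistency check on the reformulated system, here is a small sketch (function and variable names are mine, not from Bale's thesis) of the flux and source term in the new conserved variables $(m', \dot m, E')$, assuming the gamma-law with $\gamma = 1.4$:

```python
import numpy as np

GAMMA = 1.4  # assumed gamma-law exponent

def flux_and_source(mp, mdot, Ep, dA_over_A):
    """Flux and source of the reformulated 1-D Euler system.

    mp, mdot, Ep : conserved quantities m' = A*rho, mdot = A*rho*u, E' = A*e
    dA_over_A    : local value of A'(x)/A(x)
    """
    # pressure force P = A*p from the gamma-law
    P = (GAMMA - 1.0) * (Ep - 0.5 * mdot**2 / mp)
    flux = np.array([mdot,
                     mdot**2 / mp + P,
                     (Ep + P) * mdot / mp])
    source = np.array([0.0, dA_over_A * P, 0.0])
    return flux, source
```

For constant cross-section ($A' = 0$) the source vanishes and the standard Euler flux is recovered, which is the point of the reformulation.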
SolidsWW Flash Applet Sample Problem 2 Flash Applets embedded in WeBWorK questions solidsWW Example Sample Problem 2 with solidsWW.swf embedded A standard WeBWorK PG file with an embedded applet has six sections: A tagging and description section, that describes the problem for future users and authors, An initialization section, that loads required macros for the problem, A problem set-up section, that sets variables specific to the problem, An Applet link section, that inserts the applet and configures it, (this section is not present in WeBWorK problems without an embedded applet) A text section, that gives the text that is shown to the student, and An answer and solution section, that specifies how the answer(s) to the problem is(are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete. The sample file attached to this page shows this; below, the file is shown on the left, with a second column on its right that explains the different parts of the problem indicated above.
A screenshot of the applet embedded in this WeBWorK problem is shown below: There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1 solidsWW Flash Applet Sample Problem 3 And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem USub Applet Sample Problem trigwidget Applet Sample Problem solidsWW Flash Applet Sample Problem 1 GraphLimit Flash Applet Sample Problem 2 Other useful links: Flash Applets Tutorial Things to consider in developing WeBWorK problems with embedded Flash applets PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ########################################## This is the tagging and description section. The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code. All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", ); This is the initialization section, which loads the macro files required by the problem. TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = random(2,10,1); $xy = 'x'; $func1 = "$a*sin(pi*x/8)"; $func2 = '2'; $xmax = Compute("8"); $shapeType = 'circle'; $correctAnswer = Compute("128*$a"); This is the problem set-up section. The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set $xy = 'x' so that the function is given in terms of x. ######################################### # How to use the solidsWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value. # ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do?
# The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle) # the radius of the solid which # varies with the height ######################################### This is the Applet link section. Those portions of the code that begin the line with # are comments. ################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, #answerBoxAlias => 'answerBox', height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' ); You must include the section that follows, which configures the applet. ################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, ))); The configuration of the applet is done in XML. The argument of the function is set to the value held in the variable $func1. Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT The text between the BEGIN_TEXT and END_TEXT markers is the text section, which is displayed to the student. BEGIN_TEXT $BR $BR Find the volume of the solid of revolution formed by rotating the curve \[y=$a\sin\left(\frac{\pi x}{8}\right)\] for \(x=0\) to \(8\) about the \(y\)-axis. \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings; This is the answer and solution section. ###################################### # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT(); The Flash applets are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License.
Inductive Logic Programming (ILP) Algorithm AKA: ILP, Inductive Logic Programming. Context: It can (typically) be used to solve a Knowledge-based Supervised Binary Classification Task. It can (typically) find a Hypothesis [math]H[/math], such that [math]B \cup H \vDash e, \forall e \in E^+[/math] [math]B \cup H \nvDash f, \forall f \in E^-[/math] [math]B \cup H[/math] is consistent. assuming [math]B \nvDash e, \exists e \in E^+[/math] It can be applied by an Inductive Logic Programming System. It can be a Rule Induction Algorithm. Example(s): Counter-Example(s): See: Supervised Learning; First-Order Logic; Inductive Reasoning; Propositional Learner; Multi-Relational Data Mining, Analytical Learning Algorithm, Logic Programming, Mathematical Induction. References 2014 (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Inductive_logic_programming Retrieved:2014-5-6. Inductive logic programming (ILP) is a subfield of machine learning which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples. Schema: positive examples + negative examples + background knowledge => hypothesis. The term Inductive Logic Programming was first introduced [1] in a paper by Stephen Muggleton in 1991. The term “inductive” here refers to philosophical (i.e. suggesting a theory to explain observed facts) rather than mathematical (i.e. proving a property for all members of a well-ordered set) induction. 2011 (De Raedt, 2011a) ⇒ Luc De Raedt. (2011). “Inductive Logic Programming.” In: (Sammut & Webb, 2011) p.529 1991 (Muggleton, 1991) ⇒ Stephen Muggleton. (1991). “Inductive Logic Programming.” In: Journal of New Generation Computing, 8(4). doi:10.1007/BF03037089
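The coverage conditions above can be illustrated with a toy sketch (the family facts, the single clause, and all names here are hypothetical examples, not a real ILP system): the hypothesis clause daughter(X, Y) :- parent(Y, X), female(X), together with the background B, should entail every positive example and no negative one.

```python
# Background knowledge B as a set of ground facts
background = {("parent", "ann", "mary"), ("parent", "ann", "tom"),
              ("female", "mary"), ("female", "ann")}

def hypothesis(facts):
    # Apply the single clause daughter(X, Y) <- parent(Y, X), female(X)
    # once to the given facts and return the enlarged fact set.
    derived = set(facts)
    for (_, y, x) in [f for f in facts if f[0] == "parent"]:
        if ("female", x) in facts:
            derived.add(("daughter", x, y))
    return derived

positives = {("daughter", "mary", "ann")}   # E+
negatives = {("daughter", "tom", "ann")}    # E-

model = hypothesis(background)
covers_all_pos = positives <= model          # B ∪ H ⊨ e for all e in E+
covers_no_neg = negatives.isdisjoint(model)  # B ∪ H ⊭ f for all f in E-
```

Here both checks succeed, so the toy hypothesis satisfies the two entailment conditions stated above.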
As you have already figured out, we have to use the ideal gas law $PV= nRT$, which you have modified as $$\frac{n}{V}= \frac{P}{RT}$$ To find the density, we can further modify the equation as,\begin{align}\frac{m}{MV} &= \frac{P}{RT}\\\frac{\rho}{M}&= \frac{P}{RT}\\\rho &= \frac{PM}{RT}\\\end{align} Now we just have to replace the parameters with their respective values. The value of temperature should be converted to Kelvin as $(0 + 273)~\mathrm{K} = 273~\mathrm{K}$. Likewise, the value of the gas constant $R$ should be chosen as $0.08206~\mathrm{L~atm~K^{-1}{mol}^{-1}}$.\begin{align}\rho &= \frac{1~\mathrm{atm} \cdot 44.09562~\mathrm{g~mol^{-1}}}{{0.08206~\mathrm{L~atm~K^{-1}{mol}^{-1}} \cdot 273~\mathrm{K}}}\\&=1.9683~\mathrm{g/L}\\&=0.0019683~\mathrm{g/cm^3}\\\end{align}
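The arithmetic can be checked in a few lines (the molar mass used is that of propane, matching the 44.09562 g/mol above):

```python
# Density via the ideal gas law, rho = P*M/(R*T)
P = 1.0        # pressure, atm
M = 44.09562   # molar mass, g/mol (propane)
R = 0.08206    # gas constant, L atm K^-1 mol^-1
T = 273.0      # temperature, K

rho_g_per_L = P * M / (R * T)
rho_g_per_cm3 = rho_g_per_L / 1000.0  # 1 L = 1000 cm^3
print(rho_g_per_L, rho_g_per_cm3)     # ≈ 1.9683 g/L and 0.0019683 g/cm^3
```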
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations.
Joint work with Natalie Coston and Sean O'Rourke. April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge. Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems. April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
I have just started learning Riemann surfaces and I am using the book by Rick Miranda: Algebraic Curves and Riemann Surfaces. #F in section 1.3 asks to determine the genus of the curve in $\mathbb{P}^3$ defined by the two equations $x_0x_3=2x_1x_2$ and $x_0^2 + x_1^2 +x_2^2 +x_3^2 = 0$. #G also has a similar question in which he asks to determine the genus of the twisted cubic. Please explain how to approach this type of question. Here's a map from the curve $X$ in $\mathbb{P}^3$ defined by $x_1 x_4 = 2 x_2 x_3$ and $x_1^2+x_2^2+x_3^2+x_4^2 = 0$ (the same curve, with indices shifted by one), to the elliptic curve $E$ in $ \mathbb{P}^2$ given by the Weierstrass equation $$Y^2 Z = X^3 - \frac{17822265625}{62208} X Z^2 + \frac{2269744873046875}{40310784} Z^3,$$ which is topologically a torus: $$ [x_1:x_2:x_3:x_4 ] \mapsto [y_1:y_2:y_3] \mapsto [a:b:c] \mapsto [X:Y:Z]$$ with $$ \begin{align*} y_1 &= x_2 x_3^2 - \frac{i}{2}x_2 x_3 x_4 - \frac{i}{2}x_3^2x_4 - \frac{1}{4} x_3 x_4^2 - \frac{i}{4} x_4^3 \\ y_2 &= x_2 x_3 x_4 - \frac{i}{2} x_2 x_4^2 \\ y_3 &= x_3 x_4^2 - i x_4^3 \end{align*},$$ (note that $y_2$ has a simple pole at $[1:i:0:0]$, $y_1$ has a simple pole at $[1:0:i:1]$ and $y_3$ has a simple pole at $[0:0:i:1]$), $$ \begin{align*} a &= \frac{625}{36} i y_1 - \frac{625}{72} i y_2 \\ b &= \frac{-390625}{1296} i y_1 \\ c &= \frac{-4}{5} i y_1 - \frac{8}{5} i y_2 - i y_3 \end{align*},$$ and $$ [a:b:1] \mapsto \left [\frac{1}{36}a - \frac{71875}{15552} : \frac{1}{216}b - \frac{125}{288} a + \frac{13671875}{124416} : 1 \right ].$$ The above maps then give you an explicit homeomorphism from your curve $X$ to the elliptic curve $E$, which is topologically a torus (these in fact form an algebraic isomorphism between the two curves).
(This was obtained from embedding $X$ in $\mathbb{P}^2$ through the divisor $([1:i:0:0]) + ([1:0:i:0]) + ([0:0:i:1])$, which is very ample, and then messing around to get a nice Weierstrass equation for the resulting elliptic curve.) Not sure if the proof below is totally correct, as I read this book two years ago. Writing this in the local coordinates $(\frac{x_{1}}{x_{3}}, \frac{x_{2}}{x_{3}})=(y_{1},y_{2})$, you should get a curve $$y_{1}^{2}+y_{2}^{2}+4y_{1}^{2}y_{2}^{2}+1=0$$ defined on the affine chart $(x_{1},x_{2},x_{3})$, $x_{3}\not=0$. Locally this surface is singular if and only if $\frac{\partial}{\partial y_{1}}$ and $\frac{\partial}{\partial y_{2}}$ are both 0. Hence the singular points are $(\frac{1}{2}i, \frac{1}{2}i)$ and $(-\frac{1}{2}i, -\frac{1}{2}i)$. Further, both of them are of multiplicity 1. Therefore by the formula $g=\frac{(d-1)(d-2)}{2}-\sum_{p}\delta_{p}$ we should have $g=\frac{3\cdot 2}{2}-2=1$ (but I am not familiar with Plücker's formula myself, so I might be wrong). Another way of looking at this is to define $\displaystyle(y_{1}^{2}+y_{2}^{2}+4y_{1}^{2}y_{2}^{2})^{1/4}=y_{3}$. Then we are essentially working with a ramified map $y_{3}^{4}-1=0$. From Riemann-Hurwitz we should have $g=1$ by $2-2g=4\cdot(2-0)-4\cdot 2$.
Given that the BVP is a second-order inhomogeneous ODE, we find the characteristic equation to be in the form $$r^2+a^2=0\implies r=\pm ai.$$ Thus, the homogeneous solution is $$u_h=C_1\cos(ax)+C_2\sin(ax).$$ The particular solution is $$u_p=C_3x\sin(\pi x)+C_4x\cos(\pi x),$$ $$u_p'=C_3\pi x\cos(\pi x)+C_3\sin(\pi x)-C_4\pi x\sin(\pi x)+C_4\cos(\pi x),$$ $$u_p''=-C_3\pi^2 x\sin(\pi x)+2C_3\pi\cos(\pi x)-2C_4\pi\sin(\pi x)-C_4\pi^2 x\cos(\pi x).$$ Substituting into the given ODE (with $a^2$ replaced by $\pi^2$), we have $$-C_3\pi^2 x\sin(\pi x)+2C_3\pi\cos(\pi x)-2C_4\pi\sin(\pi x)-C_4\pi^2 x\cos(\pi x)+\pi^2(C_3x\sin(\pi x)+C_4x\cos(\pi x))=\sin(\pi x),$$ and we get $C_3=0$ and $C_4=-\frac{1}{2\pi}$. Then the general solution is $$u=u_h+u_p=C_1\cos(ax)+C_2\sin(ax)-\frac{1}{2\pi}x\cos(\pi x)$$ Using the boundary conditions, we get $C_1=1$ and $C_2=\frac{-2-\frac{1}{2\pi}-\cos(a)}{\sin(a)}$, but $a=\pm\pi$ and so $\sin(\pm\pi)=0$, which indicates $u$ is undefined when $a=\pm\pi$. But is it truly undefined? I am not sure I solved this correctly.
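As a quick numerical sanity check (not part of the original derivation), one can verify that $u_p=-\frac{x\cos(\pi x)}{2\pi}$ does satisfy $u''+\pi^2 u=\sin(\pi x)$, i.e. the resonant case $a=\pi$, by approximating the second derivative with a central difference:

```python
import math

def u_p(x):
    # particular solution for u'' + pi^2 u = sin(pi x) (resonant case a = pi)
    return -x * math.cos(math.pi * x) / (2.0 * math.pi)

def residual(x, h=1e-4):
    # central-difference estimate of u_p'' + pi^2 u_p - sin(pi x)
    upp = (u_p(x - h) - 2.0 * u_p(x) + u_p(x + h)) / h**2
    return upp + math.pi**2 * u_p(x) - math.sin(math.pi * x)

worst = max(abs(residual(k / 10.0)) for k in range(1, 10))
print(worst)  # tiny, limited only by the finite-difference error
```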
For a simple and undirected graph $G$, is there a known upper bound on the number of edges it has, given the number of vertices $n$, girth $g$ and maximum degree $\Delta$? No, there is no bound. For instance, consider an $n\times n$ square grid. It has girth $g=4$ and maximum degree $\Delta=4$, but there are $2n(n-1)$ edges, which is unbounded as $n\to\infty$ (even though the girth and maximum degree are bounded). The question was edited to include the number of vertices, but now there is a trivial bound: $e\leq n\Delta/2$, since the total number of edges is half the sum of the vertex degrees. You should have a look at this survey, section 4, where you should be able to find some results. A first upper bound, without taking into account $\Delta$, would be, for odd girth, $$\textrm{ex}(n, \{C_3, C_4, \dots, C_{2k}\}) < \frac{1}{2} n^{1 + 1/k} + \frac{1}{2} n,$$ where $\textrm{ex}(n,H)$ is the maximal number of edges in a graph not containing any graph in $H$.
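Both the grid-graph edge count and the trivial handshake bound are easy to check (the helper names below are mine, chosen for illustration):

```python
def grid_edges(n):
    # closed form for the n x n square grid:
    # n rows and n columns, each contributing n - 1 edges
    return 2 * n * (n - 1)

def grid_edges_enum(n):
    # direct enumeration of the same edges, as a cross-check
    count = 0
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                count += 1  # horizontal edge to the right neighbour
            if r + 1 < n:
                count += 1  # vertical edge to the lower neighbour
    return count

def handshake_bound(num_vertices, max_degree):
    # e <= n * Delta / 2, since the edge count is half the degree sum
    return num_vertices * max_degree / 2
```

For the $n\times n$ grid, $\Delta=4$ and $n^2$ vertices give the (loose but finite) bound $e \leq 2n^2$, consistent with the exact count $2n(n-1)$.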
I have a question concerning computational geometry which arises in the simulation of fields with topological defects, and I'd like to know whether there's an efficient algorithm (or any algorithm) to solve it. The problem is basically the following: consider a grid cell $\mathcal{C}$ in a 3-D cubical grid. On each of the eight vertices of this grid cell, we are given a triplet of numbers $(a_i, b_i, c_i)$ such that $a_i^2 + b_i^2 + c_i^2 = 1$ for each $i = 1 \dots 8$. By triangulating the surface of the cube (which we denote $\partial \mathcal{C}$) and linearly interpolating between the vertices of each triangle in $\partial \mathcal{C}$, we can define a map $f : \partial \mathcal{C} \to \mathbb{R}^3$. Let us assume that the image of this map does not contain 0, i.e., the map so defined is actually a map $f: \partial \mathcal{C} \to \mathbb{R}^3 \setminus\{0\}$. We have thus defined a map from a space homeomorphic to $S^2$ ($\partial \mathcal{C}$) to a space that has a non-trivial second homotopy group $\pi_2$. This map may or may not be contractible, and in fact it should have some notion of a winding number associated with it. My questions are: Is there an algorithm that, given the vertex values $(a_i, b_i, c_i)$ for a triangulated cube, calculates a winding number for the map $f$ so defined? Is there an algorithm that, given the vertex values $(a_i, b_i, c_i)$ for a triangulated cube, calculates whether the map $f$ so defined is null-homotopic? (This is obviously a weaker question than the first one, but if this is all that can be done I wouldn't be too disappointed.) I would not be surprised if this question has been addressed in the literature somewhere, but it's quite hard to google for it: every reference to "winding number" seems to be for maps of 1-D curves into some space rather than 2-D spheres.
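For what it's worth, one common technique for question 1 (a standard approach, not something stated in the post) is to compute the degree of $f$ directly: sum the signed solid angles that the image triangles subtend at the origin, using the Van Oosterom–Strackee formula, and divide by $4\pi$. A sketch:

```python
import numpy as np

def solid_angle(r1, r2, r3):
    # Signed solid angle of the triangle (r1, r2, r3) as seen from the
    # origin, via the Van Oosterom-Strackee formula.
    n1, n2, n3 = (np.linalg.norm(r) for r in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r2, r3) * n1 + np.dot(r3, r1) * n2)
    return 2.0 * np.arctan2(num, den)

def degree(triangles):
    # triangles: consistently oriented triples of image points f(v) != 0.
    # The total signed solid angle divided by 4*pi is the winding number.
    total = sum(solid_angle(a, b, c) for (a, b, c) in triangles)
    return total / (4.0 * np.pi)
```

For a consistently oriented closed triangulation whose image wraps once around the origin, the sum is $4\pi$ and the degree is 1; a null-homotopic map gives 0, which also answers question 2.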
Consider the strictly convex unconstrained optimization problem $\mathcal{O} := \min_{x \in \mathbb{R}^n} f(x).$ Let $x_\text{opt}$ denote its unique minimizer and $x_0$ be a given initial approximation to $x_\text{opt}$. We will call a vector $x$ an $\epsilon-$close solution of $\mathcal{O}$ if \begin{equation} \frac{||x - x_{\text{opt}}||_2}{||x_0 - x_\text{opt}||_2} \leq \epsilon. \end{equation} Suppose that there exist two iterative algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$ to find an $\epsilon-$close solution of $\mathcal{O}$ with the following properties: For any $\epsilon > 0,$ the total computational effort, i.e. effort required per iteration $\times$ total number of iterations, to find an $\epsilon-$close solution is the same for both algorithms. The per-iteration effort for $\mathcal{A}_1$ is $O(n),$ say, while that of $\mathcal{A}_2$ is $O(n^2).$ Are there situations where one would prefer one algorithm over the other? Why?
Answer If you assume your returns are independent (yes, your models might loosen this assumption) then the two models, $Q_1$ and $Q_2$, assign probability distributions to the returns on any given day, $i$: $q_1^i(r^i)$ and $q_2^i(r^i)$. Presumably you are interested in the model that can more accurately predict the state of the market over subsequent returns, i.e. you are interested in: $$ \text{Probability of All Observed Returns} = P(r^1, r^2, .., r^n) $$ Under the assumption of independence: $$ P(r^1, r^2, .., r^n) = P(r^1)P(r^2)..P(r^n)$$ which under the two different models looks like this: $$ Q_1 = q_1^1(r^1) q_1^2(r^2) .. q_1^n(r^n) $$$$ Q_2 = q_2^1(r^1) q_2^2(r^2) .. q_2^n(r^n) $$ This should be familiar territory if you are used to maximum likelihood estimation. Obviously you would like to choose the model with the highest likelihood. Commonly, to avoid floating point rounding error, the logarithm is taken; since the log is a monotonic function the maximiser is unchanged, so consider maximising, instead: $$ \log(Q_1) = \sum_i \log(q_1^i(r^i)) $$$$ \log(Q_2) = \sum_i \log(q_2^i(r^i)) $$ In this case this is also equivalent to the cross entropy between $p$, the true probability distribution which is 1 for the observed state and 0 otherwise, relative to either model $q_1$ or $q_2$. If you do not assume independence of returns then you have a slightly more complicated problem; post more details if that is the case. Just a thought: if your models are uncorrelated (or have limited correlation) you may be able to improve your accuracy by using an ensemble.
Consider the third model: $$Q_3 = \alpha Q_1 + (1-\alpha) Q_2 \quad \text{for} \quad \alpha \in [0,1].$$ Now your probability distribution is $$\alpha q_1^i(r^i) + (1-\alpha)q_2^i(r^i)$$ and ideally you would like to find $$\max_{\alpha} \log(Q_3).$$ This will be at least as good as the better of $Q_1$ and $Q_2$, attained at $\alpha=1$ or $\alpha=0$, but of course you need to cross-validate $\alpha$, otherwise you will just be overfitting this hyperparameter to your observed data.
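A minimal numerical sketch of both steps (the synthetic returns and the two Gaussian models are assumptions for illustration, not part of the answer): compare the two models by total log-likelihood, then grid-search the ensemble weight $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=250)  # synthetic "daily returns"

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two hypothetical models of the daily return distribution
q1 = gauss_pdf(returns, 0.0, 0.02)  # well-specified
q2 = gauss_pdf(returns, 0.0, 0.05)  # overly wide

ll1 = np.sum(np.log(q1))  # log(Q_1)
ll2 = np.sum(np.log(q2))  # log(Q_2)

# Ensemble Q_3 = alpha*Q_1 + (1 - alpha)*Q_2, weight chosen by grid search
alphas = np.linspace(0.0, 1.0, 101)
ll3 = np.array([np.sum(np.log(a * q1 + (1.0 - a) * q2)) for a in alphas])
best_alpha = alphas[int(np.argmax(ll3))]
```

In practice `best_alpha` should be selected on held-out data, exactly as the cross-validation caveat above says.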
It seems that the sample linear correlation coefficient $\hat{\rho}$ of samples generated by a copula that is parametrized by $\rho$ is unequal to $\rho$. For example, I construct a Normal copula with parameter $\rho = 0.9$. I generated a bunch of samples from this copula, and calculated their sample correlation coefficient $\hat{\rho}$. I am finding that $\hat{\rho}$ is slightly, but consistently, less than $\rho$ for large samples (around 0.891). This is confusing to me: shouldn't $E[\hat{\rho}] = \rho$, as the Normal copula is solely parametrized by $\rho$? Any help would be greatly appreciated. Thanks! This may not be a consequence of biased estimators or sampling error. I don't think it is a coincidence that $$\frac{6}{\pi} \arcsin\left(\frac{0.9}{2} \right) = 0.891457\ldots \approx 0.891$$ Copula construction involves applying nonlinear transformations to random variables, which need not preserve correlation. If random variables $X$ and $Y$ are jointly normal with correlation $\rho$, then with $\Phi$ denoting the univariate standard normal distribution function, we have $$\text{corr}(\Phi(X), \Phi(Y)) = \frac{6}{\pi}\arcsin \frac{\rho}{2}.$$ For a proof, we can make a suitable transformation so that $X$ and $Y$ have standard normal marginal distributions and the same correlation. We then have $$\tag{*}E(\Phi(X) \Phi(Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left(\int_{-\infty}^x \phi(u)\, du\right) \left( \int_{-\infty}^y \phi(v)\, dv\right)f(x,y;\rho)\, dx \, dy,$$ where $\phi$ is the standard normal density function and $f$ is the standard bivariate normal density function.
Observe that (*) is equivalent to the joint probability that $U \leqslant X$ and $V \leqslant Y$, where $U$ and $V$ are standard normal random variables, independent of each other and of the pair $(X, Y)$: $$E(\Phi(X)\Phi(Y)) = P(U \leqslant X, V\leqslant Y) = P(X-U \geqslant 0, Y-V \geqslant 0).$$ Now $X_1 = X-U$ and $X_2 = Y-V$ are both normally distributed with mean $0$, standard deviation $\sqrt{2}$, and correlation $$\text{corr}(X_1,X_2) = \frac{E((X-U)(Y-V))}{\sqrt{\text{var}(X-U)}\sqrt{\text{var}(Y-V)}} \\= \frac{E(XY)-E(XV) - E(YU) + E(UV)}{\sqrt{2}\sqrt{2}} \\ = \frac{\rho}{2}.$$ It is well known that if $Z_1$ and $Z_2$ have a standard bivariate normal distribution with correlation $\rho'$, then the orthant probability is $$P(Z_1 \geqslant 0, Z_2 \geqslant 0) = \frac{1}{4} + \frac{1}{2 \pi}\arcsin \rho'.$$ Hence, $$E(\Phi(X)\Phi(Y)) = P(X_1 \geqslant 0, X_2\geqslant 0) = P(X_1/\sqrt{2} \geqslant 0, X_2/\sqrt{2}\geqslant 0) =\frac{1}{4} + \frac{1}{2\pi} \arcsin \frac{\rho}{2}.$$ Since $\Phi(X)$ and $\Phi(Y)$ are uniformly distributed on $[0,1]$, we have $$E(\Phi(X)) = E(\Phi(Y)) = \frac{1}{2}, \\ \text{var}(\Phi(X)) = \text{var}(\Phi(Y)) = \frac{1}{12}.$$ Thus $$\text{corr}(\Phi(X), \Phi(Y)) = \frac{E(\Phi(X)\Phi(Y)) - E(\Phi(X))E(\Phi(Y))}{\sqrt{\text{var}(\Phi(X))}\sqrt{\text{var}(\Phi(Y))}} =\frac{6}{\pi} \arcsin \frac{\rho}{2}.$$ I would guess you are calculating the maximum likelihood estimator $$\hat{\theta} = \frac{1}{N} \sum (x_i - \bar{x}) (y_i - \bar{y})$$ instead of the unbiased estimator $$\hat{\theta} = \frac{1}{N-1} \sum (x_i - \bar{x}) (y_i - \bar{y}).$$ The unbiased estimator has a bias of zero, i.e. $E_{x|\theta}[\hat{\theta}] - \theta = 0$. The unbiased estimator is obviously slightly larger and fits your described scenario; based on my hypothesis, it implies you used 100 samples, which is quite low by the way. This is an interesting observation that you have. The interesting part is "consistently smaller".
The normal copula is based on a multivariate normal distribution. The correlation you get out is the correlation parameter you put in. Everything else is most probably due to an issue in your approach. If you did not say "consistently smaller", I would say it is sampling error. Let's try to find the issue: What are your marginal distributions? What is the dimension of your model? How many samples do you draw?
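The $\frac{6}{\pi}\arcsin(\rho/2)$ effect described above is easy to reproduce numerically (a sketch using NumPy; the explicit loop over `math.erf` is only there to avoid a SciPy dependency):

```python
import numpy as np
from math import asin, erf, pi, sqrt

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(42)
rho = 0.9
n = 100_000

# sample a bivariate normal with correlation rho ...
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
# ... and push both margins through Phi, as in the Gaussian copula construction
u = np.array([normal_cdf(t) for t in xy[:, 0]])
v = np.array([normal_cdf(t) for t in xy[:, 1]])

sample_corr = np.corrcoef(u, v)[0, 1]   # consistently below rho
theory = 6.0 / pi * asin(rho / 2.0)     # Pearson correlation of (Phi(X), Phi(Y))
```

The sample Pearson correlation of the copula output clusters around 0.8915, not 0.9, matching the arcsine formula rather than the copula parameter.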
Calculators

ClusterExpansionCalculator

class mchammer.calculators.ClusterExpansionCalculator(structure, cluster_expansion, name='Cluster Expansion Calculator', scaling=None, use_local_energy_calculator=True)

A ClusterExpansionCalculator object enables the efficient calculation of properties described by a cluster expansion. It is specific to a particular (supercell) structure and is commonly employed when setting up a Monte Carlo simulation, see Ensembles.

Cluster expansions, e.g., of the energy, typically yield property values per site. When running a Monte Carlo simulation, however, one considers changes in the total energy of the system. The default behavior is therefore to multiply the output of the cluster expansion by the number of sites. This behavior can be changed via the scaling keyword parameter.

Parameters:
structure (ase.Atoms) – structure for which to set up the calculator
cluster_expansion (ClusterExpansion) – cluster expansion from which to build the calculator
name (str) – human-readable identifier for this calculator
scaling (Union[int, float, None]) – scaling factor applied to the property value predicted by the cluster expansion
use_local_energy_calculator (bool) – evaluate energy changes using only the local environment; this method is generally much faster; unless you know what you are doing, do not set this option to False

calculate_local_contribution(*, local_indices, occupations)
Calculates and returns the sum of the contributions to the property due to the sites specified in local_indices.
Parameters:
local_indices (List[int]) – sites over which to sum up the local contribution
occupations (List[int]) – entire occupation vector
Return type: float

calculate_total(*, occupations)
Calculates and returns the total property value of the current configuration.
Parameters:
occupations (List[int]) – the entire occupation vector (i.e.
list of atomic species)
Return type: float

cluster_expansion
cluster expansion from which the calculator was constructed
Return type: ClusterExpansion

structure
atomic structure associated with the calculator
Return type: Atoms

sublattices
Sublattices of the calculator's structure.
Return type: Sublattices

update_occupations(indices, species)
Updates the occupation (species) of the associated atomic structure.
Parameters:
indices (List[int]) – sites to update
species (List[int]) – new occupations (species) by atomic number

TargetVectorCalculator

class mchammer.calculators.target_vector_calculator.TargetVectorCalculator(structure, cluster_space, target_vector, weights=None, optimality_weight=1.0, optimality_tol=1e-05, name='Target vector calculator')

A TargetVectorCalculator enables evaluation of the similarity between a structure and a target cluster vector. Such a comparison can be carried out in many ways, and this implementation follows the measure proposed by van de Walle et al. in Calphad 42, 13 (2013) [WalTiwJon13]. Specifically, the objective function \(Q\) is calculated as \[Q = - \omega L + \sum_{\alpha} \left| \Gamma_{\alpha} - \Gamma^{\text{target}}_{\alpha} \right|.\] Here, \(\Gamma_{\alpha}\) are components in the cluster vector and \(\Gamma^\text{target}_{\alpha}\) the corresponding target values. The factor \(\omega\) is the radius of the largest pair cluster such that all clusters with the same or smaller radii have \(\Gamma_{\alpha} - \Gamma^\text{target}_{\alpha} = 0\).
Parameters:
structure (Atoms) – structure for which to set up the calculator
cluster_space (ClusterSpace) – cluster space from which to build the calculator
target_vector (List[float]) – vector to which any vector will be compared
weights (Optional[List[float]]) – weighting of each component in the cluster vector comparison, by default 1.0 for all components
optimality_weight (float) – factor \(L\), a high value of which effectively favors a complete series of optimal cluster correlations for the smallest pairs (see above)
optimality_tol (float) – tolerance for determining whether a perfect match has been achieved (used in conjunction with \(L\))
name (str) – human-readable identifier for this calculator

calculate_local_contribution(local_indices, occupations)
Not yet implemented; forwards the calculation to calculate_total.
Return type: float

calculate_total(occupations)
Calculates and returns the similarity value \(Q\) of the current configuration.
Parameters:
occupations (List[int]) – the entire occupation vector (i.e. list of atomic species)
Return type: float

structure
atomic structure associated with the calculator
Return type: Atoms

sublattices
Sublattices of the calculator's structure.
Return type: Sublattices

update_occupations(indices, species)
Updates the occupation (species) of the associated atomic structure.
Parameters:
indices (List[int]) – sites to update
species (List[int]) – new occupations (species) by atomic number

mchammer.calculators.target_vector_calculator.compare_cluster_vectors(cv_1, cv_2, orbit_data, weights=None, optimality_weight=1.0, tol=1e-05)
Calculate a quantity that measures the similarity between two cluster vectors.
Parameters:
cv_1 (ndarray) – cluster vector 1
cv_2 (ndarray) – cluster vector 2
orbit_data (OrderedDict) – orbit data as obtained by ClusterSpace.orbit_data
weights (Optional[List[float]]) – weight assigned to each cluster vector element
optimality_weight (float) – quantity \(L\) in [WalTiwJon13] (see mchammer.calculators.TargetVectorCalculator)
tol (float) – numerical tolerance for determining whether two elements are exactly equal
Return type: float
Let's say we are given a function $f(x)$ which is not defined at the point $x_0$. How do we find a linear approximation of $f$ near $x_0$? P.S. I wrote "linear" just to make things simpler; I came across this problem while trying to approximate the following function near zero: $\frac{\ln x}{x e^x}$. My problem is that to approximate this function near zero I have to put zero in for $x$ as the first (or zeroth) term of the Maclaurin series, but this way I get zero in the denominator. Well, there are two problems here. Assuming you want to linearize this function in the usual sense of approximating it by the first derivative, then you can use a kind of limiting process. In order to evaluate a function where it is not formally defined, you can use the limiting value: if you want to evaluate a function $f$ at some point $x_0$ where it is not defined, you could try evaluating the function in the limit as $x \to x_0$. Now, in your specific case this will not work, as the function obviously explodes as it approaches zero. The reason is that as $x$ goes to zero from the right, the logarithm goes to negative infinity while $\frac{1}{x e^x}$ goes to positive infinity, so the fraction goes to negative infinity. Further, your function is not defined to the left of zero, so there is no way to build a derivative. The best thing you can hope for is to linearize your function around $0 + \varepsilon$. Usually, even if a function is undefined at some point but is defined on both sides of that point, you can use the symmetric-difference definition $\lim \limits_{h\to 0} \frac{f(x+h) - f(x-h)}{2h}$ to evaluate the derivative there. In your case you could say that at zero, the best "linear" approximation to your function is a vertical line through the origin.
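The blow-up is easy to confirm numerically (a quick check of my own, not part of the original post):

```python
import math

def f(x):
    # f(x) = ln(x) / (x * e^x), defined only for x > 0
    return math.log(x) / (x * math.exp(x))

# as x -> 0+ the values race off to -infinity, so there is no finite
# limiting value to anchor a Maclaurin expansion
values = [f(10.0 ** (-k)) for k in range(1, 5)]  # x = 0.1, 0.01, 0.001, 0.0001
```

Each step closer to zero makes the value far more negative, consistent with the vertical-asymptote picture above.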
Well firstly, how do you define $[a,b]$, a.k.a. $\text{lcm}(a,b)$? I'll define it via the converse of a proposition in Artin's Algebra (Prop 2.3.8). Namely, we'll prove the converse of Prop 2.3.8, where $m:=\text{lcm}(a,b)$ is defined as the integer s.t. (a) $m$ is divisible by both $a$ and $b$; (b) if $n$ is divisible by $a$ and $b$, then $n$ is divisible by $m$. Pf: $(\subseteq)$ Let $n \in \mathbb Z m$. Then there is an integer $n_m$ s.t. $n_m = \frac n m$. Observe that $n_m = \frac{n_a}{m_a} = \frac{n_b}{m_b}$, where we define $n_a := \frac n a, m_a := \frac m a, m_b := \frac m b, n_b := \frac n b$. Observe that $m_a, m_b$ are integers by assumption (a), while we want to show that $n_a, n_b$ are integers, because showing this is equivalent to showing $n \in \mathbb Za \cap \mathbb Zb$. Now, $n_m m_a = n_a$ is a product of integers and hence an integer. The same is true for $n_m m_b = n_b$. Therefore, $n_a, n_b$ are integers and thus $n \in \mathbb Za \cap \mathbb Zb$. $(\supseteq)$ This one is easier. Let $n \in \mathbb Za \cap \mathbb Zb$. Then $n_a, n_b$ as defined earlier are integers, i.e. $n$ is divisible by both $a$ and $b$. By assumption (b), $n$ is divisible by $m$. QED
I am trying to show the following two statements: 1) Let $U$ be a connected open subset of $\mathbb{C}^n$ and let $L$ be a closed subset of $U$. Show that $$accum(U\setminus L)\cap L\neq \emptyset,$$ where accum() denotes the set of accumulation points. 2) Let $L$ be a closed subset of $\mathbb{C}^n$ and let $x\in L$. Show that there exists an open connected neighborhood $U$ of $x$ in $\mathbb{C}^n$ such that $U\setminus L$ is connected and $$x\in accum(U\setminus L).$$ Edit: For (1) $L$ is strictly contained in $U$. For (2) $x\in \partial L$ and the complement of $L$ in $\mathbb{C}^n$ is connected.
In that case, the problem becomes a non-trivial stopping time problem. Consider a filtered probability space $(\Omega, \mathcal{F}, \mathbb{P})$ equipped with the natural filtration of a standard Brownian motion $W_t^\mathbb{P}$. Assuming a geometric Brownian motion for the underlying asset, one gets $$ S_t = S_0 \exp\left(\left(\mu-\frac{1}{2}\sigma^2\right)t + \sigma W_t^\mathbb{P}\right) $$ and the event $A = \left\{ \frac{S_t}{S_0} = a \right\}$, in words "the stock reaches $a$ times its initial value $S_0$ at a certain time $t$", can equivalently be specified as $$ A = \left\{ W_t^\mathbb{P} = \alpha(t)\right\} $$ with $$ \alpha(t) =\frac{\ln(a)}{\sigma} - \frac{\mu-\frac{1}{2}\sigma^2}{\sigma}t. $$ Define the hitting time: $$ \tau = \inf(t \geq 0: W^\mathbb{P}_t = \alpha(t)) \tag{1} $$ Based on the above definitions, your question amounts to: [Part 1] Identifying the distribution of the hitting time $\tau$. [Part 2] Computing $\mathbb{P}(\tau < T)$. [Part 1] First of all, we know that $\tau < \infty$ $\mathbb{P}$-a.s., since the Brownian motion has continuous sample paths and verifies: $$ \limsup_{t \to \infty} W_t^\mathbb{P} = \infty \qquad \qquad \liminf_{t \to \infty} W_t^\mathbb{P} = -\infty $$ The tricky part is that the hitting level $\alpha$ is in fact an affine function of time and not just a constant (the case for which standard results exist).
There is a nice answer to this [Part 1] on math.stackexchange where the following $\color{red}{\text{notations}}$ are used: \begin{align}{\color{red}{X_t}} &= \underbrace{W_t^\mathbb{P}}_{\color{red}{B_t}} + \underbrace{\frac{\mu - \frac{1}{2}\sigma^2}{\sigma}}_{\color{red}{c}} t \end{align}such that the hitting time $(1)$ can be expressed as:$$ \underbrace{\tau}_{\color{red}{H_a}} = \inf\left(t \geq 0: {\color{red}{X_t}} = \underbrace{\frac{\ln(a)}{\sigma}}_{\color{red}{a}} \right) $$for which it is shown that:\begin{align}p_{H_a}(t) &= \frac{d \mathbb{P}(H_a \leq t)}{d t} \\&= \frac{\color{red}{a}}{\sqrt{2\pi t^3}} \exp \left(- \frac{(\color{red}{a}-\color{red}{c}t)^2}{2t} \right).\end{align} [Part 2] Now all that is left is to compute:$$ \mathbb{P}(H_a \leq T) = \int_0^T p_{H_a}(t) dt $$with in your case $T=10$, $a=3$, $\mu=16\%$ and $\sigma = 40\%$
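Plugging in the stated numbers ($T=10$, $a=3$, $\mu=16\%$, $\sigma=40\%$) and cross-checking the integral against the classical first-passage formula for Brownian motion with drift (the closed form is my addition, not part of the answer above):

```python
import math

# parameters from the question
mu, sigma, a_mult, T = 0.16, 0.40, 3.0, 10.0
a = math.log(a_mult) / sigma           # transformed barrier (red "a")
c = (mu - 0.5 * sigma**2) / sigma      # drift of X_t (red "c")

def density(t):
    # hitting-time density p_{H_a}(t) from [Part 1]
    return a / math.sqrt(2 * math.pi * t**3) * math.exp(-(a - c * t) ** 2 / (2 * t))

# P(H_a <= T) by a simple midpoint rule on (0, T]
n = 200_000
h = T / n
p_numeric = sum(density((i + 0.5) * h) for i in range(n)) * h

# standard closed form for the first passage of drifted Brownian motion:
# P(H_a <= T) = Phi((cT - a)/sqrt(T)) + exp(2ac) * Phi(-(cT + a)/sqrt(T))
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
p_closed = (Phi((c * T - a) / math.sqrt(T))
            + math.exp(2 * a * c) * Phi(-(c * T + a) / math.sqrt(T)))
```

Both routes give a probability of roughly 0.6 that the stock triples within ten years under these dynamics.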
I would like to generate an isotropic Gaussian random field described by a power spectrum $P(k)$ on a 3D grid which represents spherical polar coordinates (i.e., the angular separation between pixels at each slice in the radial direction is equal, and each pixel represents a solid angle). I am currently generating a random field on a Cartesian grid by
1. filling a 3D grid in Fourier space $(k_x, k_y, k_z)$ with random numbers drawn from a normal distribution with standard deviation $\sigma = \frac{P(k)}{2}$, where $k = \sqrt{k_x^2 + k_y^2 + k_z^2}$;
2. imposing Hermitian symmetry to ensure a real field;
3. Fourier transforming into real space.
My only thought is that I could do this process on a high-resolution Cartesian grid and average into spherical bins, though this seems convoluted. Is there a more efficient way that I could generate this directly?
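For the Cartesian procedure described in the question, a common shortcut is to colour white noise in Fourier space, which makes the Hermitian symmetry automatic via the real-input FFT routines (a sketch; the amplitude convention $\sqrt{P(k)}$ and the normalisation are assumptions, not taken from the question):

```python
import numpy as np

def gaussian_random_field(n, power_spectrum, boxsize=1.0, seed=0):
    """Real isotropic Gaussian random field on an n^3 Cartesian grid,
    built by multiplying white noise by sqrt(P(k)) in Fourier space."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n, n))
    wk = np.fft.rfftn(white)  # Hermitian half-spectrum of real white noise

    # |k| on the rfftn grid
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, kz, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)

    amp = np.zeros_like(k)
    amp[k > 0] = np.sqrt(power_spectrum(k[k > 0]))  # zero the k=0 (mean) mode
    return np.fft.irfftn(wk * amp, s=(n, n, n))

field = gaussian_random_field(32, lambda k: k**-2)
```

This only covers the Cartesian step; mapping the result onto the spherical-polar grid (or generating it there directly, e.g. shell by shell with spherical harmonics) is the part the question is really asking about.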
I am really confused about how to go about this problem. I know the formula, but I get confused with all the steps and how to use what I have to get to what I need. A student is examining a bacterium under the microscope. The E. coli bacterial cell has a mass of $m = 0.100~\mathrm{fg}$ (where a femtogram, $\mathrm{fg}$, is $10^{−15}~\mathrm{g}$) and is swimming at a velocity of $v = 9.00~\mathrm{{\mu m/s}}$, with an uncertainty in the velocity of $9.00\%$. E. coli bacterial cells are around $1~\mathrm{\mu m}$ ($10^{−6}~\mathrm{m}$) in length. What is the uncertainty of the position of the bacterium? I used the formula $$\Delta x \Delta p \geq \hslash/2.$$ I plugged in these numbers: Mass of bacteria $[m] = 0.100~\mathrm{fg} = 0.1 \times 10^{-15}~\mathrm{g} = 10^{-16}~\mathrm{g}$ Velocity of bacteria $[v] = 2~\mathrm{m/s}$ uncertainty in velocity = $9.00\%$ I am getting the answer $1.67\times10^{-9}$, and it is wrong. I don't know what I am doing wrong.
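For comparison, here is a direct evaluation of $\Delta x \geq \hbar/(2\, m\, \Delta v)$ with every quantity converted to SI units first (my own working, not part of the question; note the mass must be in kilograms, not grams, and $\Delta v$ is 9.00% of $9.00~\mu\mathrm{m/s}$):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 0.100e-15 * 1e-3     # 0.100 fg -> 1.0e-19 kg
v = 9.00e-6              # swimming speed, m/s
dv = 0.0900 * v          # 9.00% velocity uncertainty -> 8.1e-7 m/s

dp = m * dv              # momentum uncertainty, kg*m/s
dx = hbar / (2 * dp)     # minimum position uncertainty, metres
```

This gives roughly $6.5\times10^{-10}~\mathrm{m}$, i.e. well below the $1~\mu\mathrm{m}$ size of the cell, as one would expect for such a massive object.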
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
I am trying to fit a restricted cubic spline (natural cubic spline) with 4 knots to toy data, attempting to follow Hastie, Tibshirani, Friedman 2nd ed., section 5.2.1, pp. 144-146, Eqs. 5.4 and 5.5. Data: basically a transposed 'S' shape. The R code is:

n <- 100
x <- (1:n)/n
true <- ((exp(1.2*x) + 1.5*sin(7*x)) - 1)/3
noise <- rnorm(n, 0, 0.15)
y <- true + noise
plot(x, y)

I set knots at {.2, .4, .6, .8} and am fitting using the non-linear nls() function in R, but I can't get the S-shape of the data no matter what I try. Is my equation wrong? Or am I completely off-base in my approach? Any suggestions? (Book excerpt, my equation, and data plot posted below.) A natural cubic spline with $K$ knots is represented by $K$ basis functions. One can start from a basis for cubic splines, and derive the reduced basis by imposing the boundary constraints. For example, starting from the truncated power series basis described in section 5.2, we arrive at where $$d_k=\frac{(X-\xi_k)^3_+ - (X-\xi_K)^3_+}{\xi_K - \xi_k} \enspace .$$ Each of these basis functions can be seen to have zero second and third derivative for $X\geq \xi_K$. $$y = \beta_0 + \beta_1 x + \beta_2\left(\left[\frac{(x-k_1)^3_+ - (x-k_4)^3_+}{k_4 - k_1}\right] - \left[\frac{(x-k_3)^3_+ - (x-k_4)^3_+}{k_4 - k_3}\right] \right) + \beta_3\left(\left[\frac{(x-k_2)^3_+ - (x-k_4)^3_+}{k_4 - k_2}\right] - \left[\frac{(x-k_3)^3_+ - (x-k_4)^3_+}{k_4 - k_3}\right] \right)$$ Can I ask a simpler question: sites/books say that for my natural cubic spline approach (i.e. restricted cubic spline) with 4 knots, I need 4 basis functions. Is it $\beta_0$, $\beta_1 x$, and '4 more'? Or is it indeed just 4 betas (conceptually, as I have above)? Thank you. Thank you for your guidance. I am fitting the spline along with many other modeling covariates, i.e. explanatory variables, so a simple canned package for the spline alone is not sufficient.
The actual shape I expect, when I fit to my modeling data, is a very positively skewed distributional form (I was trying to fit the fancy-shaped example data, with an inflection, thinking it would be a good test of my coding of the spline functional form). I thought about borrowing the functional form and calibrated parameterization (from your Python above or from R), but that is a cubic spline, not a natural cubic spline. I see how my function is linear to the left of the first knot. I guess the next step is for me to verify that the various terms cancel, so that it is indeed linear to the right of the right-most knot too, and to look for a 4-term basis that is said to be 'stable'.
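Translating Eqs. 5.4-5.5 into code may help check the basis directly (Python rather than R, with the same toy data and knots; the basis is $\{1,\,x,\,N_3,\,N_4\}$ built from the $d_k$ functions, so exactly four betas, and the model is linear in them):

```python
import numpy as np

def ncs_basis(x, knots):
    """Natural cubic spline basis (Hastie et al., Eqs. 5.4-5.5):
    N1 = 1, N2 = x, N_{k+2} = d_k(x) - d_{K-1}(x), k = 1..K-2."""
    x = np.asarray(x, dtype=float)
    K = len(knots)
    def d(k):  # 0-based index for the book's d_k
        num = (np.maximum(x - knots[k], 0.0) ** 3
               - np.maximum(x - knots[K - 1], 0.0) ** 3)
        return num / (knots[K - 1] - knots[k])
    cols = [np.ones_like(x), x]
    for k in range(K - 2):
        cols.append(d(k) - d(K - 2))
    return np.column_stack(cols)

# toy data from the question
rng = np.random.default_rng(1)
n = 100
x = np.arange(1, n + 1) / n
true = ((np.exp(1.2 * x) + 1.5 * np.sin(7 * x)) - 1) / 3
y = true + rng.normal(0, 0.15, n)

B = ncs_basis(x, [0.2, 0.4, 0.6, 0.8])        # 100 x 4 design matrix
beta, *_ = np.linalg.lstsq(B, y, rcond=None)  # ordinary least squares
fit = B @ beta
```

Because the spline is linear in the coefficients, plain least squares (lm() in R) suffices; nls() is unnecessary and may be what is fighting you. And to the simpler question: yes, four basis functions total, i.e. $\beta_0$, $\beta_1 x$, and two spline terms.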
As per the suggestion by Christian in the comments here, as part of my continuing quest to understand the Raviart-Thomas (RT) elements, I'd like to know how exactly the RT elements are defined globally, and in particular how they have compact support. For RT0 on the reference square, one of the basis functions is $\mathbf{\phi}(\mathbf{x}) = \frac{1}{4}\langle 1 + x, 0\rangle^T$. This function depends only on $x$, so it is non-zero over all elements above and below the reference square. Since the RT elements are $H(\text{div})$-conforming, I suppose there is no need to enforce continuity in the solution, or in the basis functions. As I understand it, this means that we could simply set $\mathbf{\phi}$ to be $\mathbf{0}$ outside some domain. As a concrete example, given the edge numbering below (I assume there is one basis function per edge for RT0, but this may be wrong), what basis functions are non-zero over the middle element (A)? As a separate question, for Lagrangian elements of order $k$ we choose the finite-dimensional subspace of $H^1$ to be the set of all piecewise continuous polynomials of order $k$. For the RT elements of order $k$ we take the subspace of $H(\text{div})$: $$P_{k+1,k} \times P_{k,k+1}$$ as defined in the answer to my last question. Does this space have a name?
The action which describes a string propagating in a $D$-dimensional spacetime with given metric $g_{\mu\nu}$ is the Polyakov action $$S_{\text{P}}=-\frac{T}{2}\int \mathrm{d}\sigma\mathrm{d}\tau\sqrt{-h}\,h^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}g_{\mu\nu}\tag{1}$$ where the symbols have their usual meaning. It is not hard to check that the action is invariant under Poincare transformations $$\delta X^{\mu}(\sigma,\tau)=a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}.\tag{2}$$ When all the dust is settled (i.e. after gauge fixing and a Weyl transformation) the Polyakov action becomes $$S_{\text{P}}=\frac{T}{2}\int \mathrm{d}\sigma\mathrm{d}\tau \left((\dot{X})^{2}-(X')^{2}\right)\tag{3}$$ where $\dot{X}^{\mu}=\partial_{\tau}X^{\mu}$ and $X'^{\mu}=\partial_{\sigma}X^{\mu}$. Variation with respect to $X^{\mu}$ yields the equation of motion together with a boundary term, $$(-\partial^{2}_{\tau}+\partial^{2}_{\sigma})X^{\mu}=0,\qquad T\int\mathrm{d}\tau\left[X'_{\mu}\delta X^{\mu}\big|_{\sigma=\pi}-X'_{\mu}\delta X^{\mu}\big|_{\sigma=0}\right]=0.\tag{4}$$ The $\sigma$ boundary terms tell us what type of strings we have, either closed or open strings.
For the open string, equation (4) reduces to $(-\partial^{2}_{\tau}+\partial^{2}_{\sigma})X^{\mu}=0$, where we assume that the endpoints of the string obey Neumann boundary conditions, $$\partial_{\sigma}X^{\mu}(\tau,\sigma)\big|_{\sigma=0,n}=0.\tag{5}$$ One interesting feature is that the Neumann boundary conditions remain invariant under a global Poincare transformation, since \begin{eqnarray} \partial_{\sigma}X'^{\mu}|_{\sigma=0,n} & = & \partial_{\sigma}\left(a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}\right)|_{\sigma=0,n} \\ & = & a^{\mu}_{~~~\nu}~\partial_{\sigma}X^{\nu}|_{\sigma=0,n}\\ & = & 0 \\ \end{eqnarray} whereas the Dirichlet boundary conditions $$X^{\mu}(\tau,\sigma=0)=X^{\mu}_{0}\qquad\qquad X^{\mu}(\tau,\sigma=n)=X^{\mu}_{n}$$ break the Poincare invariance, as $$X'^{\mu}|_{\sigma=0,n}=\left(a^{\mu}_{~~~\nu}X^{\nu}(\sigma,\tau)+b^{\mu}\right)|_{\sigma=0,n}\neq X^{\mu}_{0,n},$$ which simply means that under a Poincare transformation the ends of the string actually change. Does the spectrum of string excitations keep any signature of this (non-)invariance under Poincare transformations? If so, how can that result be interpreted?
Octopress has become a popular blogging framework among programmers since its 2.0 release. We can observe this growth in the Google keyword tool. It serves the needs of most programmers well compared to other existing blogging frameworks. To the author, I give my full compliment and respect. However, as luikore said: > Octopress uses Rdiscount instead of Maruku for its Markdown engine. > Maruku supports (LATEX) math but Rdiscount doesn’t. > Octopress claims itself a blog for hackers, but what hacker doesn’t use (LATEX) math? Making Octopress work with MathJax is a pain in the neck (though technically it is not Octopress, it is Jekyll). Even following the instructions provided by luikore does not completely resolve the problem. I excerpted a few equations from the official site of MathJax to do a display test, and here they are. The Cauchy-Schwarz Inequality \[ \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right) \] \[ \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\ \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0 \end{vmatrix} \] An Identity of Ramanujan \[ \frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} = 1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}} {1+\frac{e^{-8\pi}} {1+\ldots} } } } \] Maxwell’s Equations \[ \begin{aligned} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{aligned} \] Some of the equations are still broken after the hack. I guess it is because \ symbols are escaped.
Since I don’t have decent knowledge about ruby packages, it may take me some time to fix.
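One workaround often suggested for this class of escaping problems (an assumption on my part, not something verified against this particular Octopress setup) is to switch Jekyll's Markdown converter to kramdown, whose math support passes `$$ ... $$` blocks through to MathJax instead of letting the Markdown engine eat the backslashes:

```yaml
# _config.yml (hypothetical snippet)
markdown: kramdown
kramdown:
  math_engine: mathjax
```

This sidesteps the Rdiscount escaping entirely rather than patching around it.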
Given a tree with $$$n$$$ nodes numbered from $$$1$$$ to $$$n$$$. Each node $$$i$$$ has an associated value $$$V_i$$$. If the simple path from $$$u_1$$$ to $$$u_m$$$ consists of $$$m$$$ nodes namely $$$u_1 \rightarrow u_2 \rightarrow u_3 \rightarrow \dots u_{m-1} \rightarrow u_{m}$$$, then its alternating function $$$A(u_{1},u_{m})$$$ is defined as $$$A(u_{1},u_{m}) = \sum\limits_{i=1}^{m} (-1)^{i+1} \cdot V_{u_{i}}$$$. A path can also have $$$0$$$ edges, i.e. $$$u_{1}=u_{m}$$$. Compute the sum of alternating functions of all unique simple paths. Note that the paths are directed: two paths are considered different if the starting vertices differ or the ending vertices differ. The answer may be large so compute it modulo $$$10^{9}+7$$$. The first line contains an integer $$$n$$$ $$$(2 \leq n \leq 2\cdot10^{5} )$$$ — the number of vertices in the tree. The second line contains $$$n$$$ space-separated integers $$$V_1, V_2, \ldots, V_n$$$ ($$$-10^9\leq V_i \leq 10^9$$$) — values of the nodes. The next $$$n-1$$$ lines each contain two space-separated integers $$$u$$$ and $$$v$$$ $$$(1\leq u, v\leq n, u \neq v)$$$ denoting an edge between vertices $$$u$$$ and $$$v$$$. It is guaranteed that the given graph is a tree. Print the total sum of alternating functions of all unique simple paths modulo $$$10^{9}+7$$$.

Example 1, input:
4
-4 1 5 -2
1 2
1 3
1 4
Output:
40

Example 2, input:
8
-2 6 -4 -4 -9 -3 -7 23
8 2
2 3
1 4
6 5
7 6
4 7
5 8
Output:
4

Consider the first example. A simple path from node $$$1$$$ to node $$$2$$$: $$$1 \rightarrow 2$$$ has alternating function equal to $$$A(1,2) = 1 \cdot (-4)+(-1) \cdot 1 = -5$$$. A simple path from node $$$1$$$ to node $$$3$$$: $$$1 \rightarrow 3$$$ has alternating function equal to $$$A(1,3) = 1 \cdot (-4)+(-1) \cdot 5 = -9$$$. A simple path from node $$$2$$$ to node $$$4$$$: $$$2 \rightarrow 1 \rightarrow 4$$$ has alternating function $$$A(2,4) = 1 \cdot (1)+(-1) \cdot (-4)+1 \cdot (-2) = 3$$$.
A simple path from node $$$1$$$ to node $$$1$$$ has a single node $$$1$$$, so $$$A(1,1) = 1 \cdot (-4) = -4$$$. Similarly, $$$A(2, 1) = 5$$$, $$$A(3, 1) = 9$$$, $$$A(4, 2) = 3$$$, $$$A(1, 4) = -2$$$, $$$A(4, 1) = 2$$$, $$$A(2, 2) = 1$$$, $$$A(3, 3) = 5$$$, $$$A(4, 4) = -2$$$, $$$A(3, 4) = 7$$$, $$$A(4, 3) = 7$$$, $$$A(2, 3) = 10$$$, $$$A(3, 2) = 10$$$. So the answer is $$$(-5) + (-9) + 3 + (-4) + 5 + 9 + 3 + (-2) + 2 + 1 + 5 + (-2) + 7 + 7 + 10 + 10 = 40$$$.
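A brute-force cross-check of the first example (my own sketch: an $O(n^2)$ DFS from every start vertex, fine for toy inputs but far too slow for $n = 2\cdot10^5$, where the usual tree DP over subtree path counts is needed instead):

```python
def alternating_sum(n, vals, edges):
    """Sum A(u, v) over all ordered pairs (u, v), modulo 1e9+7,
    by a DFS from every start vertex carrying the alternating sum."""
    MOD = 10**9 + 7
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    for s in range(1, n + 1):
        # (node, parent, alternating sum of path s..node, sign of next node)
        stack = [(s, 0, vals[s - 1], -1)]
        while stack:
            u, p, acc, sign = stack.pop()
            total += acc                      # counts the ordered path (s, u)
            for w in adj[u]:
                if w != p:
                    stack.append((w, u, acc + sign * vals[w - 1], -sign))
    return total % MOD

answer = alternating_sum(4, [-4, 1, 5, -2], [(1, 2), (1, 3), (1, 4)])  # 40, matching the note
```

Each DFS carries the alternating sum of the path so far, so every ordered path, including the zero-edge paths $(u, u)$, is counted exactly once.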
The basic reason that it is justified is that the time integral is a linear operator. So let's step way back into the abstract, what you really have is some configuration space $\mathcal C$ of possible configurations of some system, and some space of paths $\mathcal P$ through that, which is some subset of the functions $[0,1]\to \mathcal C$. Inside of this configuration space we may have a time coordinate, position coordinates, velocity or momentum coordinates... Then we have some functions of these paths, $X:\mathcal P \to \mathbb R.$ Some of them have the format, $$\mathcal A[\mathbf P] = \int_0^1 ds~L\big(\mathbf P(s)\big).$$We want to do calculus with whole paths, so we want to consider the difference $$\delta \mathcal A = \mathcal A[\mathbf P + \epsilon\mathbf p] - \mathcal A[\mathbf P],~~ \epsilon \approx 0$$ Note that this in theory depends on both the path big-$\mathbf P$ that we use to evaluate the action and the path little-$\mathbf p$ that we use to perturb it, but we're typically going to take the approach of trying to find the big-$\mathbf P$ such that this $\delta \mathcal A$ is zero to first order in $\epsilon$ or so, irrespective of little-$\mathbf p$. Just putting those two together and using the linearity of the integral, we have $$\delta\mathcal A = \int_0^1 ds~\Big(L\big(\mathbf P(s) + \epsilon \mathbf p(s)\big) - L\big(\mathbf P(s)\big)\Big),$$and the internal term could meaningfully be called $\delta L$. It doesn't mean exactly the same thing as it is applied "pointwise" to the path, but if we were instead to write this as $L[\mathbf P](s)$ or some similar notation then it would indeed be $\delta L[\mathbf P](s)$. The rest of your expression then follows from the Leibniz property of this $\delta$ operator, which depends on understanding that we are taking a limit etc., which is nontrivial of course.
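As a concrete instance (a standard textbook example, added for illustration): take $\mathcal C = \mathbb R$ with coordinate $x$, and let the integrand be $L = \tfrac12 \dot{\mathbf P}^2 - V(\mathbf P)$. Expanding to first order in $\epsilon$ and integrating by parts,

```latex
\delta \mathcal A
  = \epsilon \int_0^1 ds\,\Big(\dot{\mathbf P}\,\dot{\mathbf p}
      - V'(\mathbf P)\,\mathbf p\Big) + O(\epsilon^2)
  = \epsilon \int_0^1 ds\,\Big(-\ddot{\mathbf P} - V'(\mathbf P)\Big)\,\mathbf p
      + \epsilon \Big[\dot{\mathbf P}\,\mathbf p\Big]_0^1 + O(\epsilon^2),
```

so demanding $\delta \mathcal A = 0$ for every perturbation $\mathbf p$ vanishing at the endpoints recovers $\ddot{\mathbf P} = -V'(\mathbf P)$, exactly the pointwise $\delta L$ bookkeeping described above.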
Problem: Let $ A_1,A_2,...,A_n$ be sets such that $X=\bigcup_{i=1}^n A_i$. Prove that there exists a sequence of sets $ B_1,B_2,...,B_n$ such that a) $B_i \subseteq A_i$ for each $i=1,2,...,n$ b) $B_i \cap B_j = \emptyset$ for $i \neq j$ c) $X=\bigcup_{i=1}^n B_i$ My observation: I look at the "new things" added to $X$ by $A_i$ and call that $B_i$. For example, $B_1$ is $A_1$, $B_2$ is $A_2\setminus A_1$, and $B_3$ is $A_3 \setminus (A_1 \cup A_2)$. In general, the sequence of sets is defined by $B_i=A_i\setminus\bigcup_{k=1}^{i-1} A_k$ for $i=1,2,...,n$. Proof Attempt: Proof (a) Let $x\in B_i$. Since $B_i=A_i\setminus\bigcup_{k=1}^{i-1} A_k$, by the definition of set difference, if something is in $B_i$, it must also be in $A_i$. Since $x\in B_i$ implies $x\in A_i$, we conclude that $B_i\subseteq A_i$ (by the definition of subset). Proof (b) On which, I found an elementary approach from here. Proof (c) To show that $X=\bigcup_{i=1}^n B_i$, we must show that $X\subseteq\bigcup_{i=1}^n B_i$ and $\bigcup_{i=1}^n B_i\subseteq X$. Part 1. Show that $X\subseteq\bigcup_{i=1}^n B_i$. Let $x\in X$; then $x\in\bigcup_{i=1}^n A_i$ as defined. It follows that $x$ is an element of some $A_i$ (by the definition of $\cup$). If $i_0$ is the least value of $i$ such that $x\in A_{i_0}$, then $x\in A_{i_0}\setminus \bigcup_{k=1}^{i_0-1} A_k$ (by the definition of $\setminus$). This means $x\in B_{i_0}$, and hence $x\in\bigcup_{i=1}^n B_i$ by the definition of $\cup$. Since $x\in X$ implies $x\in\bigcup_{i=1}^n B_i$, we conclude that $X\subseteq\bigcup_{i=1}^n B_i$ (by the definition of $\subseteq$). Part 2. Show that $\bigcup_{i=1}^n B_i\subseteq X$. Let $x\in\bigcup_{i=1}^n B_i$; then $x\in B_i$ for some $i$. Since $B_i\subseteq A_i$, which we proved already above, it follows that $x\in A_{i}$, which further implies that $x\in\bigcup_{i=1}^n A_i$ (by the definition of $\cup$). Hence, $x\in X$.
Since $x\in \bigcup_{i=1}^n B_i$ implies that $x\in X$, we conclude that $\bigcup_{i=1}^n B_i\subseteq X$. Conclusion. Since $X\subseteq\bigcup_{i=1}^n B_i$ and $\bigcup_{i=1}^n B_i\subseteq X$, we have $X=\bigcup_{i=1}^n B_i$. Note: The proof is elementary enough to suit beginners like me. Is it right?
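As a sanity check, the construction can also be verified computationally on small examples (a sketch in Python; the sets below are arbitrary):

```python
# Sanity check of the disjointification B_i = A_i minus (A_1 ∪ ... ∪ A_{i-1}).
# The sets below are arbitrary illustrative choices.
def disjointify(A):
    """Given a list of sets A, return B with B[i] = A[i] minus all earlier A's."""
    B, seen = [], set()
    for Ai in A:
        B.append(Ai - seen)  # the "new things" contributed by A_i
        seen |= Ai
    return B

A = [{1, 2, 3}, {2, 3, 4}, {4, 5}, {1, 5, 6}]
B = disjointify(A)
X = set().union(*A)

assert all(Bi <= Ai for Bi, Ai in zip(B, A))                      # (a) B_i ⊆ A_i
assert all(B[i].isdisjoint(B[j])                                  # (b) pairwise disjoint
           for i in range(len(B)) for j in range(i + 1, len(B)))
assert set().union(*B) == X                                       # (c) same union
```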
In an action-packed three pages of Lurie's DAG-XIII: Rational and p-adic Homotopy Theory, section 2.2: Power Operations on $\mathbb{E}_{\infty}$-algebras, one finds a construction of the power operation $P^0$ following a few observations on the $p$-power Tate construction in the category of $k$-module spectra: $\hat{T}_p: X \mapsto ( X^{\otimes p})^{tC_p}$ and its best colimit-preserving approximation, $T_p.$ For any $\mathbb{E}_\infty$ $k$-algebra $X,$ one obtains a map $T_p(X)[-1] \to X.$ $T_p(X)$ is given by tensoring with a $k$-bimodule which is equivalent on one side to $k^{tC_p}$ and this allows us to obtain operations (not $k$-linear) from elements of Tate cohomology, $ \pi_* k^{tC_p} \simeq \hat{H}^{-*}(C_p, k).$ The precise statement is in construction 2.2.6, which applies the observation that for $k$ a discrete ring of characteristic $p$, $1 \in k$ determines a canonical element of $\hat{H}^{-1}(C_p, k),$ precisely because that group is given as the kernel of the norm. This defines a map $k \to k^{tC_p}[-1]$ which upon composition with the map in the previous paragraph gives a map $X \to X.$ This map is supposed to be the derived witness to $P^0.$ Here is remark 2.2.9 Construction 2.2.6 can be generalized: given any class $x \in \hat{H}^{n-1}(\mathbb{Z} / p\mathbb{Z}; k)$, we obtain an associated map $P(x) : A \to A[n]$, which induces group homomorphisms $\pi_m(A) \to \pi_{m-n}(A)$. These operations depend functorially on $A$ and generate an algebra (the extended Steenrod algebra) of “power operations” which act on the homotopy groups of every $\mathbb{E}_\infty$-algebra over $k.$ My questions: is there any reference where this construction of the extended powers is fully elaborated? How much of the elementary structure of the Steenrod algebra (e.g. Adem relations, structure of the dual Steenrod algebra, etc.) can be translated to this point of view? 
Just to fill in a few details, Lurie constructs these power operations by first rotating the fiber sequence defining the Tate construction to yield $\hat{T}_p(X)[-1] \to (X^{\otimes p})_{hC_p} \to (X^{\otimes p})^{hC_p}$ If $X$ is an $\mathbb{E}_\infty$ $k$-algebra, one then computes the composition $T_p(X)[-1] \to \hat{T}_p(X)[-1] \to (X^{\otimes p})_{hC_p} \to (X^{\otimes p})_{h\Sigma_p} \to X$ where the maps are given by approximation, the first map in the rotated fiber sequence, a tautological map between colimits, and the $\mathbb{E}_\infty$ multiplication, respectively.
I am wondering how the confidence interval for the Area Under the Curve statistic (ROC curves) is derived. I have heard that the AUC can be assumed to be normally distributed, but I am looking for a proof of this statement or a derivation of the confidence intervals. The AUC can be viewed as the Wilcoxon-Mann-Whitney test statistic. Here is a demo: in the R code I posted, I first calculate the AUC, then use the Wilcoxon-Mann-Whitney test to calculate the same number, and verify that both numbers are 0.911332. For a hypothesis test, it is not hard to derive a confidence interval, right? Also, I do not remember that the Wilcoxon-Mann-Whitney test requires a normal distribution. As hxd1011 said, the AUC is equivalent to the U statistic calculated for the Mann-Whitney U test, a.k.a. the Wilcoxon rank-sum test, normalized to $[0,1]$ by the product of the numbers of observations in each group. The original paper by Mann and Whitney (1947) contains a proof of approximate normality, and the U statistic is approximately normally distributed even for small samples (~20). In that sense, the AUC could be assumed to be approximately normally distributed. To clarify some comments here, note that the test itself does not make any assumption on the distribution of the scores (usually the probability predictions from the model), i.e. the test is non-parametric. However, the exact variance of the AUC can only be calculated based on the distribution of the scores (Cortes & Mohri 2005), which typically do not follow a known distribution.
There seem to be different suggestions on how to calculate a useful confidence interval; see Cortes & Mohri (2005) for a summary and their (long) formula in Corollary 1 ([4]): $$ \sigma^2(AUC) = \frac{(m+n+1)(m+n)(m+n-1)T\big((m+n-2)Z_4-(2m-n+3k-10)Z_3\big)} {72m^2n^2} + \frac{(m+n+1)(m+n)T\big(m^2-nm+3km-5m+2k^2-nk+12-9k\big)Z_2}{48m^2n^2} - \frac{(m+n+1)^2(m-n)^4 Z_1^2} { 16m^2n^2} - \frac{(m+n+1)Q_1 Z_1}{72m^2n^2} + \frac{kQ_0}{144m^2n^2} $$ with: $$ Z_i = \frac{ \sum^{k-i}_{x=0} \binom{m+n+1-i}{x} } { \sum^k_{x=0} \binom{m+n+1}{x} } $$ $$ T = 3((m - n)^2 + m + n) + 2 $$ $$ Q_0 = (m + n + 1)T k^2 + ((-3n^2 + 3mn + 3m + 1)T - 12(3mn + m + n) - 8)k + (-3m^2 +7m + 10n + 3nm + 10)T - 4(3mn + m + n + 1) $$ $$ Q_1 = T k^3 + 3(m - 1)T k^2 + ((-3n^2 + 3mn - 3m + 8)T - 6(6mn + m + n))k + (-3m^2 +7(m + n) + 3mn)T - 2(6mn + m + n) $$ References Cortes, C., & Mohri, M. (2005). Confidence intervals for the area under the ROC curve. In Advances in Neural Information Processing Systems (pp. 305-312). https://cs.nyu.edu/~mohri/pub/area.pdf
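To illustrate the U-statistic equivalence and a normal-approximation interval, here is a Python sketch. The data is randomly generated, and the variance estimate is the Hanley & McNeil (1982) approximation, one of the simpler alternatives discussed by Cortes & Mohri, not their exact formula:

```python
import math
import random

# A sketch with made-up data: the AUC equals the Mann-Whitney U statistic
# divided by m*n, and an approximate CI comes from a normal approximation.
# The variance estimate below is Hanley & McNeil (1982), not Cortes & Mohri.
def auc_via_u(pos, neg):
    """AUC = (#{score_pos > score_neg} + 0.5 * #ties) / (m * n)."""
    u = sum(1.0 if p > q else 0.5 if p == q else 0.0
            for p in pos for q in neg)
    return u / (len(pos) * len(neg))

def hanley_mcneil_ci(auc, m, n, z=1.96):
    """Approximate 95% CI for the AUC via a normal approximation."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    var = (auc * (1.0 - auc) + (m - 1) * (q1 - auc ** 2)
           + (n - 1) * (q2 - auc ** 2)) / (m * n)
    return auc - z * math.sqrt(var), auc + z * math.sqrt(var)

random.seed(0)
pos = [random.gauss(1.0, 1.0) for _ in range(50)]  # scores of positives
neg = [random.gauss(0.0, 1.0) for _ in range(50)]  # scores of negatives
auc = auc_via_u(pos, neg)
lo, hi = hanley_mcneil_ci(auc, len(pos), len(neg))
assert 0.0 <= auc <= 1.0 and lo < auc < hi
```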
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Bulletin of the American Physical Society 65th Annual Meeting of the APS Division of Fluid Dynamics Volume 57, Number 17 Sunday–Tuesday, November 18–20, 2012; San Diego, California Session L11: Bubbles IV and Drug Delivery Chair: Kausik Sarkar, George Washington University Room: 26A Monday, November 19, 2012 3:35PM - 3:48PM L11.00001: Simulation of Magnetic Particles in the Bloodstream for Magnetic Drug Targeting Applications Erica Cherry, John Eaton Magnetic Drug Targeting (MDT) is a promising new idea for treatment of cancer and other well-localized diseases. An ideal MDT treatment would involve chemically binding the drug particles to magnetic particles, injecting them into the body, and using external magnetic fields to steer the particles towards or hold them near areas of diseased tissue not accessible via injection. However, it would be difficult to implement efficient MDT treatments because we know little about how magnetic particles interact with blood flow. With the goal of understanding these dynamics, a simulation of blood flow containing magnetic particles was performed. The particles were subject to a variety of forces such as gravity, externally-applied magnetic force, and inter-particle magnetic force. A separate simulation was performed to determine how the magnetic particle dispersion coefficient varied with flow properties such as shear and erythrocyte content, and the results of the dispersion simulation were used in the main simulation. Results from these simulations will be presented and used to draw conclusions about the technology required for a successful MDT treatment. Monday, November 19, 2012 3:48PM - 4:01PM L11.00002: Highly-focused high-speed impact on soft material: Application for needle-free injection device Yoshiyuki Tagawa, Nikolai Oudalov, Claas Willem Visser, Chao Sun, Detlef Lohse The development of needle-free drug injection systems is of great importance to global healthcare.
Existing methods use diffusive jets, which suffer from insufficient penetration into the skin. We established a novel method of creating microjets with a very sharp geometry and controlled velocities, even for supersonic speeds up to 850 m/s. In this presentation we demonstrate that it is possible to penetrate human skin using these jets and in this way deliver liquid substances to the human body. The penetration depth is much deeper than those of conventional methods. Penetration dynamics is further studied through experiments performed on gelatin mixtures. A model based on Stokes-like drag is proposed to predict the depth of penetration. Monday, November 19, 2012 4:01PM - 4:14PM L11.00003: Mapping the acoustic scattering behavior of spherical microbubble clouds Miguel A. Parrales, Juan M. Fernandez, Miguel Perez-Saborid Sound scattering and acoustic propagation through bubbly liquids have been studied deeply in the last decades. The main reason for these studies was to explain and analyze the high impact of gas bubbles on sound propagation: the compressibility mismatch and the resonant behavior make the bubble a very efficient sound scatterer, appreciably changing the acoustic properties of the biphasic medium. Here we propose a numerical analysis, based on the self-consistent multiple scattering approach, to compute the linear acoustic response of spherical microbubble clouds excited by an external ultrasonic wave. The calculations have been done for a wide range of the cloud void fraction $\beta$. By varying the excitation frequency $\omega_o$, we are able to map the total scattering intensity from the cloud in a $(\beta-\omega_o)$ phase space. The localization of the collective resonant modes on this map finally reveals the different scattering regimes. Furthermore, the total pressure field is obtained both inside and outside the cloud, making it possible to visualize the acoustic wave propagation in each scattering regime.
Monday, November 19, 2012 4:14PM - 4:27PM L11.00004: Acoustic Excitation of a Micro-bubble Inside a Rigid Tube Adnan Qamar, Ravi Samtaney A theoretical model for acoustic excitation of a single micro-bubble inside a rigid tube is proposed in the present work. The model is derived from the reduced Navier-Stokes equations and by utilizing Poiseuille pipe flow theory. Wall frictional losses, induced by the fluid motion from bubble oscillation in response to the acoustic perturbation, are taken into account. The proposed model is not a variant of the conventional Rayleigh-Plesset (RP) equation and is principally a super-set of all the conventional RP models. The model is the first of its kind to relate the bubble dynamics with the tube geometric and acoustic parameters in a consistent manner. The model predicts bubble oscillation dynamics as well as bubble fragmentation quite well when compared to the available experimental data. Results are computed for three tube diameters of 200, 100 and 12 microns with two initial bubble radii of 1.5 and 2 microns. The response of the micro-bubble is highly non-linear with the driving acoustic frequency. The bubble response for low acoustic peak negative pressure (PNP) is linear, whereas as the PNP is increased, nonlinearities are manifested and eventually bubble fragmentation takes place. For fixed acoustic parameters, an exponential decay in bubble response is observed as the tube length is increased. For very small tube diameters, the predictions are damped, suggesting the breakdown of the inherent model assumptions for these cases. Monday, November 19, 2012 4:27PM - 4:40PM L11.00005: Stably Levitated Large Bubbles in Vertically Vibrating Liquids Timothy O'Hern, Bion Shelden, Louis Romero, John Torczynski Vertical vibration of a liquid can cause small gas bubbles to move downward against the buoyancy force.
Downward bubble motion is caused by the oscillating bubble volume (induced by the oscillating pressure field) interacting with the bubble drag force. The volume-drag asymmetry and the oscillating pressure gradient produce net downward bubble motion analogous to that caused by the Bjerknes force in high-frequency vibrations. Low-frequency (below 300 Hz) experiments demonstrate downward bubble motion over a range of vibration conditions, liquid properties, and pressure in the air above the free surface. Small bubbles deep in a quasi-two-dimensional test cell usually coalesce to form a much larger bubble that is stably levitated well below the free surface. The size and position of this levitated bubble can be controlled by varying the vibration conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Monday, November 19, 2012 4:40PM - 4:53PM L11.00006: Nonlinear dynamics of PLA (poly-lactic acid) encapsulated ultrasound contrast microbubbles Shirshendu Paul, Kausik Sarkar, Margaret Wheatley The presence of the stabilizing encapsulation in microbubble-based ultrasound contrast agents (UCAs) has critical effects on their acoustic properties. Biodegradable polymers like poly-lactic acid (PLA) hold promise to provide better stability and control over properties. Here, we report determination of interfacial rheological properties of PLA microbubbles using \textit{in vitro} experiments and investigation of their non-linear scattering response. The average bubble size measured using DLS is 1.8 $\mu $m. However, the attenuation measured through a suspension of PLA bubbles shows a peak between 2.5-3.2 MHz, much smaller than the resonance frequency of a free bubble of similar size.
This observation is explained by an extremely low surface elasticity (0.02-0.06 N/m) and the polydispersity of the bubble population. The estimated properties lead to an excellent agreement between model prediction and the experimentally measured response (up to 30 dB enhancement of fundamental response). Subharmonic threshold prediction is shown to be critically dependent on the bubble size distribution. Monday, November 19, 2012 4:53PM - 5:06PM L11.00007: Microbubbles as drug-delivery vectors: steering ultrasound contrast agents in arterial flow using the Bjerknes force Alberto Aliseda, Alicia Clark Micron-sized coated microbubbles, commonly referred to as ultrasound contrast agents (UCAs), have been identified as potential targeted drug delivery vectors with applications in cancer chemotherapy and thrombolysis. The Bjerknes force, produced by the fluctuating pressure field created by the ultrasound waves acting on the oscillating bubble with a phase lag induced by the liquid's inertia and viscosity, can be used to direct the microbubbles to specific targeted areas in the circulatory system. While this phenomenon is well understood in a quiescent fluid, we need a better understanding of the dynamics of microbubbles in the complex pulsatile flow found in the human circulatory system. The non-linear interactions of ultrasound volume oscillations and flow-induced stresses are explored via high speed imaging of UCAs under in vitro flow that reproduces conditions in large arteries (relatively high Reynolds and Womersley numbers). This improved understanding will be used to manipulate and steer UCAs with ultrasound, in conjunction with hydrodynamic forces. Monday, November 19, 2012 5:06PM - 5:19PM L11.00008: The Short Time Scale Events of Acoustic Droplet Vaporization David S. Li, Oliver D. Kripfgans, J. Brian Fowlkes, Joseph L.
Bull The conversion of liquid microdroplets to gas bubbles initiated by an acoustic pulse, known as acoustic droplet vaporization (ADV), has been proposed as a method to selectively generate gas emboli for therapeutic purposes (gas embolotherapy), specifically for vascularized tumors. In this study we focused on the first 10 microseconds of the ADV process, namely the gas nucleation site formation and bubble evolution. BSA-encapsulated dodecafluoropentane (CAS: 678-26-2) microdroplets were isolated at the bottom of a degassed water bath held at 37 °C. Microdroplets, diameters ranging from 5-65 microns, were vaporized using a single pulse (4-16 cycles) from a 7.5 MHz focused single element transducer ranging from 2-5 MPa peak negative pressure, and images of the vaporization process were recorded using an ultra-high speed camera (SIM802, Specialised Imaging Ltd). It was observed that typically two gas nuclei were formed in series with one another, on axis with the ultrasound pulse. However, the relative positioning of the nucleation sites within the droplet depended on droplet diameter. Additionally, depending on acoustic parameters, the bubble could deform into a toroidal shape. Such dynamics could suggest acoustic parameters that may result in tissue damage. Monday, November 19, 2012 5:19PM - 5:32PM L11.00009: An Experimental Review on Microbubble Generation to Be Used in Echo-PIV Method to Determine the Pipe Flow Velocity Alinaghi Salari, Mohammad Behshad Shafii, Shapoor Shirani Today microbubbles are broadly used as ultrasound contrast agents. Flow Focusing (FF) in microchannels can pave the way for the generation of same-size bubbles. Microbubbles can be used as tracers in the Echo Particle Image Velocimetry (Echo-PIV) method to determine the velocity profile in main body vessels such as the carotid.
In this paper we use a low-cost microchannel fabrication method for preparing microbubble contrast agents, using surface-active agents and a viscosity-enhancing material to obtain appropriate microbubbles with the desired lifetime and stability for \textit{in vitro} infusion for velocity measurement. All five parameters that govern the bubble size are extracted, and some efforts are made to achieve the smallest bubbles by adding suitable surfactant concentrations. By using these microbubbles with the Echo-PIV method, we experimentally determine the velocity field of two flow types, namely steady-state and pulsatile pipe flows. Monday, November 19, 2012 5:32PM - 5:45PM L11.00010: Development of microbubbles generator using microchannel toward biomedical applications Hironobu Kaji, Rei Masuda, Kazuhito Inoue, Mitsuhisa Ichiyanagi, Ikuya Kinefuchi, Shu Takagi, Yoichiro Matumoto Microbubbles have already been used as ultrasound contrast agents to visualize the microcirculation system. They are also expected to be used as drug delivery agents. For these bubbles, important requirements are their size and functionality, such as carrying drugs and staying stable in vivo. Aiming at the development of microbubbles with well-controlled size and functions, we have been developing a microbubble generation system using microchannels. Advantages of the method using microchannels are the ability to generate small, monodisperse microbubbles with a wide choice of both liquid phase and gas phase, and the capability of surface coating. In the present study, microbubbles are generated using a T-junction type microchannel. We have designed the channel shape to reduce the bubble size. The improvement of the shape has enabled us to generate smaller microbubbles with diameters of 6.1 $\mu $m. Moreover, the effect of the viscosity of the liquid phase is investigated, and it is confirmed that smaller bubbles are generated with increased viscosity.
In addition, we have developed a new type of microchannel for the surface coating of a microbubble. The results will be discussed in the presentation.
I am doing a molecular dynamics simulation. I need to assign initial velocities to the atoms. I want to assign initial velocities which follow the Maxwell-Boltzmann distribution. How do I calculate such initial velocities using a uniform random number generator with range [0,1)? The initial velocities are drawn from a Gaussian distribution with variance $$\sigma_i^2=\frac{k_{\textrm{B}}T}{m_i},$$ where $k_{\textrm{B}}$ denotes Boltzmann's constant, $T$ is the temperature and $m_i$ is the mass of the $i^{\textrm{th}}$ particle. Thus, the problem boils down to generating random numbers from a Gaussian distribution using uniformly distributed random numbers. This is fortunately quite simple: the Wikipedia article https://en.wikipedia.org/wiki/Box–Muller_transform shows some very common algorithms for transforming uniform random numbers into Gaussian random numbers. Let's put everything together: every component of the velocity of the $i^{\textrm{th}}$ particle is computed via $$v_{i,\alpha}=\sqrt{\frac{k_{\textrm{B}}T}{m_i}}\,\mathcal{N}(0,1)\,,\quad\alpha\in\{x,y,z\}\,,$$ where $\mathcal{N}(0,1)$ is a Gaussian random number with variance 1 and mean 0. With this definition, each velocity component follows a Gaussian distribution $$\pi(v_\alpha)\,\textrm{d}v_\alpha\propto\exp\left(-\frac{v_\alpha^2}{2\sigma^2}\right)\textrm{d}v_\alpha\,,$$ but when you write the distribution of the velocity vector in spherical coordinates and integrate out the angular components, you obtain $$\pi(v)\,\textrm{d}v \propto v^2\exp\left(-\frac{v^2}{2\sigma^2}\right)\textrm{d}v\,,$$ which is the desired Maxwell-Boltzmann distribution.
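Putting this together in code (a sketch; $k_{\rm B}T$ and the masses below are placeholders in reduced units):

```python
import math
import random

# Sketch: draw Maxwell-Boltzmann velocities via the Box-Muller transform,
# starting from uniform(0,1) samples only.
def box_muller(u1, u2):
    """Map two uniform(0,1) samples to one standard normal sample."""
    # log(1 - u1) is safe because u1 is in [0, 1).
    return math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)

def initial_velocity(mass, kB_T):
    """One 3D velocity vector with each component ~ N(0, kB*T/m)."""
    sigma = math.sqrt(kB_T / mass)
    return tuple(sigma * box_muller(random.random(), random.random())
                 for _ in range(3))

random.seed(1)
kB_T = 1.0                     # k_B * T in reduced units (placeholder)
masses = [1.0] * 10000         # placeholder masses
vels = [initial_velocity(m, kB_T) for m in masses]

# Sanity check: each component should have mean ~0 and variance ~ kB*T/m.
vx = [v[0] for v in vels]
mean = sum(vx) / len(vx)
var = sum((x - mean) ** 2 for x in vx) / len(vx)
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.1
```

In practice one would also subtract the center-of-mass velocity and rescale to the target temperature after the draw.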
One thing that really helped me understand poles and zeros is to visualize them as amplitude surfaces. Several of these plots can be found in A Filter Primer. Some notes: It's probably easier to learn the analog S plane first, and after you understand it, then learn how the digital Z plane works. A zero is a point at which the gain of the transfer ... The Fourier and the Laplace transform obviously have many things in common. However, there are cases where only one of them can be used, or where it's more convenient to use one or the other. First of all, even though in the definitions you simply replace $s$ by $j\omega$ or vice versa to go from one transform to the other, this cannot generally be done ... I think there are actually 3 questions in your question: Q1: Can I derive the frequency response given the poles of a (linear time-invariant) system? Yes, you can, up to a constant. If $s_{\infty,i}$, $i=1,\ldots,N,$ are the poles of the transfer function, you can write the transfer function as $$H(s)=\frac{k}{(s-s_{\infty,1})(s-s_{\infty,2})\ldots (s-s_{\... Why is the fourier transform a special case of the laplace transform? The Laplace transform produces a 2D surface of complex values, while the Fourier transform produces a 1D line of complex values. The Fourier transform is what you get when you slice the Laplace transform along the jω axis. For instance, a simple lowpass filter $H(s)=\frac{1}{s+1}$ has a ... The system $$y[n]=y[n-1]+x[n]\tag{1}$$ is an ideal accumulator, i.e., it computes the cumulative sum of the input samples: $$y[n]=\sum_{k=-\infty}^nx[k]\tag{2}$$ It is in a way analogous to a continuous-time integrator, but this doesn't mean that you will necessarily obtain an ideal integrator by transforming the discrete-time system to a continuous-time ...
Assume you have an $N\times M$ sized image. If you now take what is classically used, a square filter kernel of, let's say, size $L\times L$, you'd need to convolve that with the picture – which gives you $N\times M$ pixels, each needing $L^2$ multiply-accumulates. So you end up with $A_{2D}=L^2MN$ operations. Now, if you can decompose that filter into an $... As pointed out by Peter K., it is important to distinguish between Laplace and Fourier transforms. The first few transform pairs in your question are Fourier transform pairs, whereas the last pair is a correspondence of the unilateral Laplace transform: $$F(s)=\int_{0}^{\infty}f(t)e^{-st}dt$$ In the last transform pair in your question the $\mathcal{F}$ ... Both notations are common and correct. As pointed out by Yuri Nenakhov, the advantage of the argument $j\omega$ is that it coincides with the complex (Laplace transform) variable $s$ when its real part is zero. Note that in the complex $s$-plane the frequency axis is the imaginary axis. So $j\omega$ has nothing to do with complex frequency (which makes no ... The Laplace transform is more representative of real systems that have a starting point, which is why the integral starts at 0, and also why the unit step function is generally talked about alongside the Laplace transform. With the Laplace transform, we can examine the transient and steady-state behavior of a system. Using $e^{st}$ instead of $e^{iwt}$ ... Laplace transforms can capture the transient behaviors of systems. Fourier transforms only capture the steady-state behavior. Of course, Laplace transforms also require you to think in complex frequency spaces, which can be a bit awkward, and operate using algebraic formulas rather than simply numbers. If you want to see the power distribution of a signal ...
We usually talk of $j\omega$ when we're also interested in the Laplace transform of a signal / system, but want to just talk about the frequency response. The physical meaning of the imaginary part is that it refers to purely sinusoidal signals at constant "amplitude". The real part refers to signals for which the "amplitude" decays or grows ... In engineering practice, the complex inversion integral is hardly ever used. As an engineer, you will almost exclusively need to invert rational functions, and this can be done by partial fraction expansion and elementary inversions. So first I'll show you how to obtain the inverse Laplace transform by partial fraction expansion, then I'll explain the ... The short answer is yes, if you have the Laplace or Z-transform of a function you do not need the Fourier transform. This is because the CFT is a special case of the Laplace transform and the DTFT is a special case of the Z transform. The Fourier transform is used to find the complex sinusoids that compose a function, whereas the Laplace transform finds ... You cannot make conclusions about the stability of a system by only considering its transfer function evaluated on the imaginary axis $s=j\omega$. Replacing $s$ by $j\omega$ in the transfer function only makes sense for a stable system; otherwise you get a function of $\omega$ that does not describe the system, but another (stable) system. Let me explain ... The problem is that you took the derivative of the function $$\hat{x}_u(t)=2e^{-3t}-e^{-4t}\tag{1}$$ whereas using the Laplace transform you implicitly assumed that $x_u(t)$ equals zero for $t<0$: $$x_u(t)=\hat{x}_u(t)u(t)=(2e^{-3t}-e^{-4t})u(t)\tag{2}$$ where $u(t)$ is the unit step function. If you take the derivative of $(2)$ then you'll get the ... Let $H(s)$ be a transfer function of the form $$H(s) = \frac{1}{s-p}$$ where $p$, which is a pole of $H(s)$, can be written as a complex number $a+jb$.
Taking the inverse Laplace transform of $H(s)$ gives the corresponding impulse response $h(t)$ (that is, the output of your system when given $\delta(t)$ as input). Noting $\mathcal{L}^{-1}$ the inverse ... Let $s = \sigma + j\omega$, the inverse Laplace transform of $f(t+a)$ is given by$$f(t+a) = \frac{1}{2\pi j} \int_{\sigma-j\infty}^{\sigma+j\infty} F(s)e^{s(t+a)} \mathrm{d}s = \frac{1}{2\pi j} \int_{\sigma-j\infty}^{\sigma+j\infty} F(s)e^{sa}e^{st} \mathrm{d}s.$$Hence the bilateral Laplace transform of $f(t+a)$ is $F(s)e^{sa}$ where $F(s)$ is the Laplace ... $X(j \omega)$ (frequency response) is a Fourier transform of system's impulse response. It's actually a function of frequency ($\omega$) but usually is written as $X(j \omega)$ because replacing $j \omega$ in the formula with $s$ will give you system's Laplace transform $X(s)$ without any additional conversions. (This works in the opposite direction as well: ... If you have an understanding of Fourier transforms then you probably already have a conceptual model of transforming signals into the frequency domain. The Laplace transform provides an alternative frequency domain representation of the signal - usually referred to as the "S domain" to differentiate it from other frequency domain transforms (such as the Z ... Matt is correct that the sign is convention. I think that there is a reason for it beyond that though.If we look at complex frequencies in the complex plane, they look like a constant vectors that rotate in one direction or another. Positive frequencies rotate counter-clockwise, negative frequencies rotate clockwise, and "0 Hz" frequencies don't rotate ... You are right that the (bilateral) Laplace transform can be interpreted as the Fourier transform of $e^{-\sigma t}f(t)$. However, I think that the significance of the Laplace transform only becomes clear when $s=\sigma+j\omega$ is viewed as a complex variable because then we can study the analytic properties of the system function. 
E.g., electrical networks ... Both transforms are equivalent tools, but the Laplace transform is used for continuous-time signals, whereas the $\mathcal{Z}$-transform is used for discrete-time signals (i.e., sequences). You can see that they are equivalent by using the continuous-time representation of a discrete-time signal, and then applying the Laplace transform to that signal. The ... The answer to your last question is definitely 'no'. The point hotpaw2 makes in his answer is very relevant: the FFT is an efficient implementation of the DFT, and there are no equivalently efficient implementations for the numerical computation of the $\mathcal{Z}$-transform or the Laplace transform. But that's not the only reason. There are important ... It's important to realize that generally the differential equation (DE) alone doesn't tell us anything about causality. You claim that the system given in your question is causal. However, an anti-causal system with impulse response $h(t)=-u(-t)$ (where $u(t)$ is the step function) also satisfies the very same DE! So the given DE actually describes two ... You're comparing the transforms of two different functions. You consider the Fourier transform of the function $x_1(t)=\cos(\omega_0 t)$, but you took the Laplace transform of the function $x_2(t)=\cos(\omega_0t)u(t)$, where $u(t)$ is the unit step function: $$X_1(j\omega)=\int_{-\infty}^{\infty}x_1(t)e^{-j\omega t}dt\\X_2(s)=\int_{0}^{\infty}x_2(t)e^{-st}... This is because the impulse response of an integrator is $h(t)=u(t)$. The output, which is the convolution with the impulse response, is $$y(t)=\int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau$$ and with $h(t)=u(t)$ it becomes $$\begin{align}y(t)&=\int_{-\infty}^{\infty}x(\tau)u(t-\tau)d\tau\\&=\int_{-\infty}^{t}x(\tau)d\tau\tag{1}\end{align}$$ where $(...
Because the region of convergence in the Laplace transform $$X(s) = \int_{-\infty}^{\infty} x(t) e^{-st} dt$$ is related to the weighting provided by the real part of the complex $s = \sigma + j \omega$; this yields the weight $|e^{-st}| = e^{-\sigma t}$ applied to the input signal $x(t)$, and is a function of $\sigma$ alone and is a ... The unilateral Laplace transform is used for analyzing causal linear time-invariant systems, which have an impulse response $h(t)$ that is zero for $t<0$. The unilateral Laplace transform can be used to solve initial value problems, due to the correspondence $$x'(t)\Longleftrightarrow sX(s)-x(0)$$ where $x(0)$ is a given initial value for the function $... Both transforms have a large overlap in their applications. So you can use both to analyze an RC circuit. However, with the unilateral Laplace transform it's much more straightforward to take initial conditions into account, such as an initially charged capacitor. This has to do with the unilateral Laplace transform of the derivative of a function: $$\...
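The separable-filter cost argument in one of the excerpts above (two 1D passes, roughly $2LMN$ multiply-accumulates instead of $L^2MN$) can be checked numerically. A pure-Python sketch with an arbitrary image and an outer-product kernel:

```python
# Sketch (toy sizes): a kernel that is the outer product of two length-L
# vectors can be applied as two 1D passes with the same result as direct
# 2D filtering.  Image and kernel below are arbitrary.
def conv2d(img, ker):
    """Direct 2D correlation with zero padding ('same' output size)."""
    M, N, L = len(img), len(img[0]), len(ker)
    r = L // 2
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            s = 0.0
            for a in range(L):
                for b in range(L):
                    ii, jj = i + a - r, j + b - r
                    if 0 <= ii < M and 0 <= jj < N:
                        s += ker[a][b] * img[ii][jj]
            out[i][j] = s
    return out

def conv_separable(img, v, h):
    """Two 1D passes for a kernel with ker[a][b] = v[a] * h[b]."""
    M, N, L = len(img), len(img[0]), len(v)
    r = L // 2
    tmp = [[sum(v[a] * img[i + a - r][j]
                for a in range(L) if 0 <= i + a - r < M)
            for j in range(N)] for i in range(M)]
    return [[sum(h[b] * tmp[i][j + b - r]
                 for b in range(L) if 0 <= j + b - r < N)
             for j in range(N)] for i in range(M)]

v, h = [1.0, 2.0, 1.0], [1.0, 0.0, -1.0]       # outer product: Sobel-like kernel
ker = [[vi * hj for hj in h] for vi in v]
img = [[float((i * 7 + j * 3) % 5) for j in range(6)] for i in range(5)]
a, b = conv2d(img, ker), conv_separable(img, v, h)
assert all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(5) for j in range(6))
```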
A distribution $\mathcal{D}$ is said to $\epsilon$-fool a function $f$ if $|E_{x\in U}(f(x)) - E_{x\in \mathcal{D}}(f(x))| \leq \epsilon$. And it is said to fool a class of functions if it fools every function in that class. It is known that $\epsilon$-biased spaces fool the class of parities over subsets (see Alon-Goldreich-Hastad-Peralta for some nice constructions of such spaces). The question I want to ask is a generalization of this to arbitrary symmetric functions. Question: Suppose we take the class of arbitrary symmetric functions over some subset; do we have a distribution (with small support) that fools this class? Some small observations: It is sufficient to fool exact thresholds ($\text{ETh}^S_k(x)$ is 1 if and only if $x$ has exactly $k$ ones amongst the indices in $S$). Any distribution that $\epsilon$-fools these exact thresholds will $n\epsilon$-fool all symmetric functions over $n$ bits. (This is because every symmetric function can be written as a real linear combination of these exact thresholds where the coefficients in the combination are either 0 or 1. Linearity of expectation then gives us what we want.) A similar argument also works for general thresholds ($\text{Th}^S_k(x)$ is 1 if and only if $x$ has at least $k$ ones amongst the indices in $S$). There is an explicit construction of a distribution with support $n^{O(\log n)}$ via Nisan's PRG for LOGSPACE. Arbitrary $\epsilon$-biased spaces will not work. For example, if $S$ is the set of all $x$ such that the number of ones in $x$ is non-zero mod 3, this is actually $\epsilon$-biased for very small $\epsilon$ (from a result of Arkadev Chattopadhyay). But clearly this does not fool the MOD3 function. An interesting subproblem may be the following: suppose we just want to fool symmetric functions over all $n$ indices; do we have a nice space? By the above observations, we just need to fool the threshold functions over $n$ bits, which is just a family of $n+1$ functions.
Thus one can just pick the distribution by brute-force. But are there nicer examples of spaces that fool $\text{Th}^{[n]}_k$'s for every $k$?
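To make the definitions concrete, here is a small brute-force experiment (all names and parameters are my own illustration, not from the question): it computes the exact expectation of every threshold $\text{Th}^{[n]}_k$ under the uniform distribution and compares it with the expectation under a small random support, estimating the $\epsilon$ for which that support fools the whole family.

```python
import itertools
import random

n = 8

def th(x, k):
    # threshold function: 1 iff x has at least k ones
    return 1 if sum(x) >= k else 0

# exact expectations over the uniform distribution on {0,1}^n
cube = list(itertools.product([0, 1], repeat=n))
exact = [sum(th(x, k) for x in cube) / len(cube) for k in range(n + 1)]

# a small-support "distribution": m points sampled uniformly at random
random.seed(0)
m = 1000
support = [tuple(random.randrange(2) for _ in range(n)) for _ in range(m)]
approx = [sum(th(x, k) for x in support) / m for k in range(n + 1)]

# the epsilon for which this support fools all n+1 thresholds
eps = max(abs(a - b) for a, b in zip(exact, approx))
```

A random support of this size fools the family with high probability by a Chernoff bound, but it is of course not an *explicit* construction, which is what the question asks for.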
Let $(\Omega,\mathcal A,\operatorname P)$ be a probability space, $E$ be a complete locally compact separable metric space, $(X^n_t)_{t\ge0}$ be an $E$-valued càdlàg process on $(\Omega,\mathcal A,\operatorname P)$ for $n\in\mathbb N$ and $(X_t)_{t\ge0}$ be an $E$-valued continuous process on $(\Omega,\mathcal A,\operatorname P)$ with $$\left(X^n_{t_0},\ldots,X^n_{t_k}\right)\xrightarrow{n\to\infty}\left(X_{t_0},\ldots,X_{t_k}\right)\tag1$$ in distribution for all $k\in\mathbb N_0$ and $0=t_0<\cdots<t_k$. Are we able to conclude $$X^n\xrightarrow{n\to\infty}X\tag2$$ in distribution (with respect to the Skorohod topology)? I know that if $S$ is a Polish space, then a sequence $(\mu_n)_{n\in\mathbb N}$ of probability measures on $(S,\mathcal B(S))$ converges weakly to a probability measure $\mu$ on $(S,\mathcal B(S))$ if and only if $(\mu_n)_{n\in\mathbb N}$ is tight and there is a separating family $\mathcal C$ of bounded continuous functions $S\to\mathbb R$ such that $$\int f\:{\rm d}\mu=\lim_{n\to\infty}\int f\:{\rm d}\mu_n\;\;\;\text{for all }f\in\mathcal C\tag3.$$ In the context of the question, $S$ is the space $D([0,\infty),E)$ of càdlàg functions $[0,\infty)\to E$ equipped with the Skorohod metric. My hope is that we somehow can benefit from the fact that $X$ is not only càdlàg, but continuous. Remark: I don't think that it's of any use, but let me remark that in my application $X^n$ and $X$ are Markov processes. Their transition semigroups, say $(\kappa^n_t)_{t\ge0}$ and $(\kappa_t)_{t\ge0}$, induce strongly continuous contraction semigroups, say $(T_n(t))_{t\ge0}$ and $(T(t))_{t\ge0}$, on the space $C_0(E)$ of continuous functions $E\to\mathbb R$ vanishing at infinity equipped with the supremum norm. Each $(T_n(t))_{t\ge0}$ is even uniformly continuous. Moreover, $T_n(t)\xrightarrow{n\to\infty}T(t)$ with respect to the strong operator topology uniformly for $t$ in a bounded interval. If any of these properties are useful, feel free to use them.
I am trying to find the minimum of the so-called Beale function, given by $f(x_1,x_2) = (1.5-x_1+x_1x_2)^2 + (2.25-x_1+x_1x_2^2)^2 + (2.625-x_1+x_1x_2^3)^2$, using the Newton iteration $x^{(k+1)} = x^{(k)} - \alpha^{(k)} \eta^{(k)}$ with $ \eta^{(k)} = \left( \frac{\partial ^2f}{\partial x^2}(x^{(k)}) \right)^{-1}\frac{\partial f}{\partial x}(x^{(k)})$, starting at the point $x = [4 \ \ 1]^T$ with $\alpha = 0.5$. The result is $x_{num} = [0 \ \ 1]^T$, while the real minimum is at $x_{min} = [3 \ \ 0.5]^T$. I found out that the problem is that $\eta$ is pointing in the wrong direction: it does not point "downhill" like the gradient would. My question is, how is this possible? Why is the Newton iteration failing here? EDIT: By the way, I get the same result if I start at $x = [4 \ \ 0.9]^T$. First some weird stuff happens and $x^{(k)}$ jumps around, but then it again ends up at $x = [0 \ \ 1]^T$.
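The failure can be checked numerically: at $[4 \ \ 1]^T$ the Hessian is indefinite, so the Newton direction need not be a descent direction, and $[0 \ \ 1]^T$ is itself a stationary point of $f$. A finite-difference sketch (function and variable names are my own, not from the question):

```python
import numpy as np

def f(p):
    x1, x2 = p
    return ((1.5 - x1 + x1*x2)**2
            + (2.25 - x1 + x1*x2**2)**2
            + (2.625 - x1 + x1*x2**3)**2)

def grad(p, h=1e-6):
    # central finite-difference gradient
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2*h)
    return g

def hess(p, h=1e-4):
    # central finite-difference Hessian (columns = derivatives of grad)
    H = np.zeros((2, 2))
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2*h)
    return H

p = np.array([4.0, 1.0])
g = grad(p)
H = hess(p)

# the Hessian at [4, 1] is indefinite (det < 0), so H^{-1} g can point anywhere
assert np.linalg.det(H) < 0

eta = np.linalg.solve(H, g)
# the slope of f along -eta is -g.eta; here it is ~0, so the Newton step
# is essentially orthogonal to the gradient, not "downhill"
assert abs(g @ eta) < 1e-2 * np.linalg.norm(g) * np.linalg.norm(eta)

# [0, 1] is a stationary point of f (gradient vanishes), but not the minimum
assert np.linalg.norm(grad(np.array([0.0, 1.0]))) < 1e-4
```

Newton's method converges to stationary points of any kind; far from the minimum, where the Hessian is not positive definite, it can happily steer toward a saddle point, which is what happens here.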
If you converge, you would expect the steps to get small. Ideally, a step $\delta x_k$ in an optimization algorithm would go from the current iterate $x_k$ to the exact solution $x^\ast$, so $\|\delta x_k\| \approx \|x_k-x^\ast\|$ which is, as you approach the solution, going to be small. Now you say that $\delta x_k$ is getting small even though the gradient is still large. That, of course, depends on how you define "large". Is $10^3$ small or large? By itself, there is no way to say this. It depends on the units of the thing, as well as on what you compare with. Compared with the typical size of objects we have around us, $10^3$ light years is clearly large. $10^3$ nanometers is pretty small, however. But if you're a cosmologist, then $10^3$ light years is small. And if you're looking at atomic distances, then $10^3$ nanometers is large. In other words, you need to investigate what exactly it means for the gradient to be large in your case, and whether a number that is, for example, not on the order of $10^{-7}$ but rather $10^7$ really means that you're still far from the solution. In the context of optimization problems, you need to ask "what is small?" when you look at the gradient. One way to approach this is to ask "what is a typical size for the gradient?". To give an example, let us say that you have a spring-mass system for which you'd like to find the minimum energy position. Let's assume that the springs are all around 10cm long, then a typical displacement of springs might be $\Delta x=1cm$. Choose two positions for the bodies and connecting springs that are approximately $\Delta x$ apart and evaluate the energies for these two positions to get a corresponding "typical energy difference" $\Delta E$. Then a "typical size of the gradient" would be $\Delta E/\Delta x$. If your optimization algorithm has produced a position for which the gradient is $\|g_k\| \le 10^{-3} \left|\Delta E/\Delta x \right|$, then you can say that you're converged.
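The recipe in the last paragraph can be condensed into a few lines (the $10^{-3}$ tolerance follows the answer; the function and names are my own sketch):

```python
def converged(grad_norm, typical_gradient, tol=1e-3):
    # scale-aware stopping test: the gradient counts as "small" only
    # relative to a problem-specific reference scale such as |dE/dx|
    return grad_norm <= tol * abs(typical_gradient)

# 10^3 can mean "converged" when the natural gradient scale is 10^7 ...
assert converged(1e3, 1e7)
# ... and "far from the solution" when the scale is itself ~10^3
assert not converged(1e3, 1e3)
```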
In the last lesson, we derived the functions that help us descend down cost functions efficiently. Remember that this technique is not so different from what we saw when using the derivative to tell us the next step size and direction in two dimensions. When descending down our cost curve in two dimensions, we used the slope of the tangent line at each point to tell us how large of a step to take next. Since the cost curve is a function of $m$ and $b$, we use the gradient to determine each step. But really this approach is analogous to what you have seen before. For a single variable function, $f(x)$, the derivative tells you the slope of the line tangent to the plot of $f(x)$ at a given value of $x$. In turn, this tells you the next step size. In the case of a multivariable function, $J(m, b)$, we calculate the partial derivative with respect to both variables. For a regression line, these variables are our slope and y-intercept. The gradient allows you to calculate how much to move in either direction in order to reach the local minimum. Luckily for us, we already did the hard work of deriving these formulas. Now we get to see the fruit of our labor. The formulas below tell us how to update the regression variables $m$ and $b$ to approach a "best fit" line: with them, we can work with any dataset of $x$ and $y$ values. We simply iterate through our dataset and use the formulas to determine an update to $m$ and $b$ that will bring us closer to the minimum. So ultimately, to descend along the cost function, we will use the calculations: current_m = old_m $ - (-2*\sum_{i=1}^n x_i*\epsilon_i )$ current_b = old_b $ - ( -2*\sum_{i=1}^n \epsilon_i )$ where $\epsilon_i = y_i - (mx_i + b)$ is the error of the current line at each point. Ok, let's turn this into code. First, let's initialize some data.
# our data
first_show = {'x': 30, 'y': 45}
second_show = {'x': 40, 'y': 60}
third_show = {'x': 100, 'y': 150}
shows = [first_show, second_show, third_show]

Now we set our initial regression line by initializing $m$ and $b$ to zero. Then to update our regression line, we iterate through each of the points in the dataset, and at each iteration change our update_to_b by $-2*\epsilon$ and our update_to_m by $-2*x*\epsilon$.

# initial variables of our regression line
b_current = 0
m_current = 0

# amount to update our variables for our next step
update_to_b = 0
update_to_m = 0

def error_at(point, b, m):
    # the error epsilon = actual - predicted, matching the formulas above
    return point['y'] - (m*point['x'] + b)

for i in range(0, len(shows)):
    update_to_b += -2*(error_at(shows[i], b_current, m_current))
    update_to_m += -2*(error_at(shows[i], b_current, m_current)*shows[i]['x'])

new_b = b_current - update_to_b
new_m = m_current - update_to_m

In the last two lines of the code above, we calculate our new_b and new_m values by subtracting the respective updates from our current values. We define a function called error_at, which we can use in the error component of our partial derivatives above. The code above represents just one update to our regression line and, therefore, just one step towards our best fit line. We'll just repeat the process to take multiple steps. But first we have to make a couple of other changes. The above code is very close to what we want, but we just need to make a few small tweaks before it's perfect. The first one is obvious if we think about what these formulas are really telling us to do. Look at the graph below, and think about what it means to change each of our $m$ and $b$ variables by the full sum of all of the errors (the differences between the actual $y_i$ values and the values our regression line predicts). That would be an enormous change. To ensure that we do not drastically update our regression line after each step, we multiply each of these partial derivatives by a learning rate.
As we have seen before, the learning rate is just a small number, like $0.0001$, which controls how large our updates to the regression line will be. The learning rate is represented by the Greek letter eta, $\eta$, or alpha, $\alpha$. We'll use eta, so $\eta = 0.0001$ means the learning rate is $0.0001$. Multiplying our step size by our learning rate works fine, so long as we multiply both of the partial derivatives by the same amount. This is because we think of our gradient, $ \nabla J(m,b)$, as steering us in the correct direction. In other words, our derivatives ensure we are making the correct proportional changes to $m$ and $b$. So scaling down these changes to make sure we don't update our regression line too quickly works fine, so long as we keep moving in the correct direction. While we're at it, we can also get rid of multiplying our partials by 2. As mentioned, so long as our changes are proportional, we're in good shape. Before discussing our second tweak, note that as the size of the dataset increases, the sum of the errors will also increase. But this doesn't mean our formulas are less accurate or that they require larger changes. It just means that the total error grows with the size of our dataset. We can correct for this effect by dividing our update by the size of our dataset, $n$. Making these changes, our code looks like the following:

# amount to update our variables for our next step
update_to_b = 0
update_to_m = 0

learning_rate = .0001
n = len(shows)

for i in range(0, n):
    update_to_b += -(1/n)*(error_at(shows[i], b_current, m_current))
    update_to_m += -(1/n)*(error_at(shows[i], b_current, m_current)*shows[i]['x'])

new_b = b_current - (learning_rate*update_to_b)
new_m = m_current - (learning_rate*update_to_m)

So our code now reflects everything we know about our gradient descent process: Start with an initial regression line with values of $m$ and $b$.
Then for each point, calculate how the regression line's prediction compares to the actual point (that is, find the error). Update what our next step to each variable should be by using the partial derivative. And after iterating through all of the points, update the values of $b$ and $m$ appropriately, scaled down by a learning rate. As mentioned earlier, the code above represents just one update to our regression line, and therefore just one step towards our best fit line. To take multiple steps, we'll wrap the process we want to repeat in a function called step_gradient so we can call that function as much as we want.

first_show = {'x': 30, 'y': 45}
second_show = {'x': 40, 'y': 60}
third_show = {'x': 100, 'y': 150}
shows = [first_show, second_show, third_show]

def step_gradient(b_current, m_current, points):
    b_gradient = 0
    m_gradient = 0
    learning_rate = .0001
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i]['x']
        y = points[i]['y']
        b_gradient += -(1/N) * (y - ((m_current * x) + b_current))
        m_gradient += -(1/N) * x * (y - ((m_current * x) + b_current))
    new_b = b_current - (learning_rate * b_gradient)
    new_m = m_current - (learning_rate * m_gradient)
    return {'b': new_b, 'm': new_m}

b = 0
m = 0
step_gradient(b, m, shows)
# {'b': 0.0085, 'm': 0.6249999999999999}

Take a look at the input and output. We begin by setting $b$ and $m$ to 0, 0. Then from our step_gradient function, we receive new values of $b$ and $m$ of .0085 and .625. Now what we need to do is take another step in the correct direction by calling our step_gradient function with our updated values of $b$ and $m$.

updated_b = 0.0085
updated_m = 0.6249
step_gradient(updated_b, updated_m, shows)
# {'b': 0.01345805, 'm': 0.9894768333333332}

Let's do this, say, 10 times.
# set our initial step with m and b values, and the corresponding error
b = 0
m = 0
iterations = []
for i in range(10):
    iteration = step_gradient(b, m, shows)  # {'b': value, 'm': value}
    # update values of b and m
    b = iteration['b']
    m = iteration['m']
    iterations.append(iteration)

Let's take a look at these iterations.

iterations
# [{'b': 0.0085, 'm': 0.6249999999999999},
#  {'b': 0.013457483333333336, 'm': 0.9895351666666665},
#  {'b': 0.016348771640555558, 'm': 1.20215258815},
#  {'b': 0.018034938763874835, 'm': 1.3261630333815368},
#  {'b': 0.01901821141416974, 'm': 1.398492904819568},
#  {'b': 0.019591516465717437, 'm': 1.4406797579467343},
#  {'b': 0.019925705352372706, 'm': 1.4652855068756228},
#  {'b': 0.020120428242875608, 'm': 1.4796369666804499},
#  {'b': 0.02023380672219544, 'm': 1.4880075481368862},
#  {'b': 0.020299740568747532, 'm': 1.4928897448417577}]

As you can see, our $m$ and $b$ values both update with each step. Not only that, but with each step, the size of the changes to $m$ and $b$ decreases, because they are approaching a best fit line. We can use Plotly to see these changes to our regression line visually. We'll write a function called to_line that takes $m$ and $b$ values and produces a corresponding line object. We can then see how our line changes over time.

def to_line(m, b):
    initial_x = 0
    ending_x = 100
    initial_y = m*initial_x + b
    ending_y = m*ending_x + b
    return {'data': [{'x': [initial_x, ending_x], 'y': [initial_y, ending_y]}]}

frames = list(map(lambda iteration: to_line(iteration['m'], iteration['b']), iterations))
frames[0]
# {'data': [{'x': [0, 100], 'y': [0.0085, 62.508499999999984]}]}

Now we can see how our regression line changes, and approaches a better model of our data, with each iteration.
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML

init_notebook_mode(connected=True)

x_values_of_shows = list(map(lambda show: show['x'], shows))
y_values_of_shows = list(map(lambda show: show['y'], shows))
figure = {'data': [{'x': [0], 'y': [0]},
                   {'x': x_values_of_shows, 'y': y_values_of_shows, 'mode': 'markers'}],
          'layout': {'xaxis': {'range': [0, 110], 'autorange': False},
                     'yaxis': {'range': [0, 160], 'autorange': False},
                     'title': 'Regression Line',
                     'updatemenus': [{'type': 'buttons',
                                      'buttons': [{'label': 'Play',
                                                   'method': 'animate',
                                                   'args': [None]}]}]},
          'frames': frames}
iplot(figure)

As you can see, our regression line starts off far away. Using our step_gradient function, the regression line moved closer to the line that produces the lowest error. In this section, we saw our gradient descent formulas in action. The core of the gradient descent functions is understanding the two lines: $$ \frac{\partial}{\partial m}J(m,b) = -2\sum_{i = 1}^n x_i(y_i - (mx_i + b)) = -2\sum_{i = 1}^n x_i*\epsilon_i$$ $$ \frac{\partial}{\partial b}J(m,b) = -2\sum_{i = 1}^n(y_i - (mx_i + b)) = -2\sum_{i = 1}^n \epsilon_i $$ Both equations use the errors of the current regression line to determine how to update the regression line next. These formulas came from our cost function, $J(m,b) = \sum_{i = 1}^n(y_i - (mx_i + b))^2$, and from using the gradient to find the direction of steepest descent. Translating this into code, and seeing how the regression line continued to improve in alignment with the data, we saw the effectiveness of this technique in practice.
I don't know much about the deformation of compact complex manifolds; I've only read chapter 6 of Huybrechts' book Complex Geometry: An Introduction. There are two parts to this chapter. The second goes through the standard approach, that is, considering a family of compact complex manifolds as a proper holomorphic submersion between two connected complex manifolds. My question is about the approach taken in the first section, which I will briefly outline. One can instead consider a deformation of complex structures on a fixed smooth manifold, as opposed to deformations of complex manifolds – by Ehresmann's result, a deformation over a connected base is nothing but a deformation of complex structure on a fixed smooth manifold. This point of view is difficult to work with because a complex structure is a complicated object, so we instead consider almost complex structures – by the Newlander-Nirenberg Theorem, complex structures correspond to integrable almost complex structures. Fix a smooth even-dimensional manifold $M$. Now Huybrechts considers a continuous family of almost complex structures $I(t)$. He does not say where $t$ comes from, but I have interpreted it to be an open neighbourhood of $0$ in $\mathbb{C}$. Now, let $I(0) = I$. The complexified tangent bundle of $M$ splits with respect to $I$. That is, $TM\otimes_{\mathbb{R}}\mathbb{C} = T^{1,0}M\oplus T^{0,1}M$. But this is true of each almost complex structure $I(t)$. Denote the corresponding decompositions by $TM\otimes_{\mathbb{R}}\mathbb{C} = T^{1,0}M_t\oplus T^{0,1}M_t$ – this is deliberately suggestive notation; we can consider the compact (soon-to-be) complex manifold $(M, I(t))$ as the fibre of a complex family over a point $t$ in the base. For small $t$, we can encode the given information by a map $\phi(t) : T^{0,1}M \to T^{1,0}M$ where, for $v \in T^{0,1}M$, $v + \phi(t)v \in T^{0,1}M_t$.
Huybrechts then says: Explicitly, one has $\phi(t) = -\text{pr}_{T^{1,0}M_t}\circ j$, where $j : T^{0,1}M \subset TM\otimes_{\mathbb{R}}\mathbb{C}$ and $\text{pr}_{T^{1,0}M_t} : TM\otimes_{\mathbb{R}}\mathbb{C} \to T^{1,0}M_t$ are the natural inclusion respectively projection. According to this, the codomain of $\phi(t)$ is $T^{1,0}M_t$, not $T^{1,0}M$. Is this a typo or am I missing something? Added later: As Peter Dalakov points out in his answer, it is a typo. Anyway, Huybrechts continues with this approach. Enforcing the integrability condition $[T^{0,1}M_t, T^{0,1}M_t] \subset T^{0,1}M_t$ ensures that each almost complex structure is induced by a complex structure. Under the assumption that $I$ is integrable, $[T^{0,1}M_t, T^{0,1}M_t] \subset T^{0,1}M_t$ is equivalent to the Maurer-Cartan equation $\bar{\partial}\phi(t) + [\phi(t), \phi(t)] = 0$, where $\bar{\partial}$ is the natural operator on the holomorphic vector bundle $T^{1,0}M$, and $[\bullet, \bullet]$ is an extension of the Lie bracket. I like this approach because if you take a power series $\sum_{i=0}^{\infty}\phi_i t^i$ for $\phi(t)$ you can deduce: $\phi_1$ defines the Kodaira-Spencer class of the deformation; all the obstructions to finding the coefficients $\phi_i$ lie in $H^2(M, T^{1,0}M)$. Does anyone know of some other places where I would be able to learn about this approach, or is there some reason why this approach is not that common? Just for the record, I have looked at Kodaira's Complex Manifolds and Deformation of Complex Structures, but I haven't been able to find anything resembling the above.
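The two bulleted claims can be seen directly by expanding the Maurer-Cartan equation order by order in $t$ (a standard computation, sketched here with the bracket normalization used in the question):

```latex
% Write \phi(t) = \sum_{i \ge 1} \phi_i t^i and collect powers of t in
% \bar\partial\phi(t) + [\phi(t),\phi(t)] = 0:
\bar\partial \phi_1 = 0, \qquad
\bar\partial \phi_k = -\sum_{\substack{i+j=k\\ i,j\ge 1}} [\phi_i,\phi_j]
\quad (k \ge 2).
% So \phi_1 is \bar\partial-closed and defines a class in H^1(M, T^{1,0}M)
% (the Kodaira-Spencer class), while at each order k \ge 2 the right-hand
% side must be \bar\partial-exact for \phi_k to exist; the obstruction is
% its class in H^2(M, T^{1,0}M).
```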
I'm trying to understand tensor notation and working with indices in special relativity. I use a book for this purpose in which $\eta_{\mu\nu}=\eta^{\mu\nu}$ is used for the metric tensor and a vector is transformed according to the rule $$x'^\mu= \Lambda^\mu{}_{\alpha}x^\alpha$$ (Lorentz-transformation). I think I understand what is going on up to this point but now, I'm struggling to understand how the following formula works: $$\eta_{\nu\mu}\Lambda^{\mu}{}_{\alpha}\eta^{\alpha\kappa} ~=~ \Lambda_{\nu}{}^{\kappa}$$ Why is this not equal to (for instance) $\Lambda^{\kappa}{}_{\nu}$? In addition, I have trouble understanding what the difference is between $\Lambda_\alpha^{\ \ \beta}$, $\Lambda_{\ \ \alpha}^\beta$, $\Lambda^\alpha_{\ \ \beta}$ and $\Lambda^{\ \ \alpha}_\beta$ (order and position of indices). And if we write tensors as matrices, which indices stand for the rows and which ones stand for the columns? I hope someone can clarify this to me.
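Since all four index placements are easy to mix up, it can help to check the rule numerically. In the sketch below (the boost speed and matrix layout are my own example), the first array index is the row, so $\Lambda^\mu{}_\alpha$ has the upper index labeling rows; lowering the first index and raising the second with $\eta$ then produces the inverse transpose, which is generally not the same matrix:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lambda^mu_alpha for a boost along x (row = upper index mu)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

# eta_{mu nu} and eta^{mu nu} have the same components
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Lambda_nu^kappa = eta_{nu mu} Lambda^mu_alpha eta^{alpha kappa}
L_low_up = eta @ L @ eta

# for a Lorentz transformation this is the inverse transpose ...
assert np.allclose(L_low_up, np.linalg.inv(L).T)
# ... and NOT the original matrix Lambda^mu_alpha
assert not np.allclose(L_low_up, L)
```

The signs of the time-space components flip, which is exactly the difference between $\Lambda_\nu{}^\kappa$ and $\Lambda^\kappa{}_\nu$ for a boost.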
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ (1S) and Υ (2S) are measured in pp collisions at s√=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Why do we care about eigenvalues, eigenvectors, and singular values? Intuitively, what do they tell us about a matrix? When I first studied eigenvalues in college, I regarded them as yet another theoretical math trick that is hardly applicable to my life. Once I passed the final exam, I shelved all my eigen-knowledge in a corner of my memory. Years have passed, and I gradually realize the importance and brilliance of eigenvalues, particularly in the realm of machine learning. In this post, I will discuss how and why we perform eigendecomposition and singular value decomposition in machine learning. In the previous post, I mentioned that there are essentially 2 questions in linear algebra and matrices: solve the linear equation \(Ax = b\); eigendecomposition and singular value decomposition of a matrix \(A\). Here I will focus on the second one.

Eigendecomposition and singular value decomposition

Matrices are all about transformations and changes of coordinate systems. Through rotation, stretch, and scale, we can reveal unique characteristics of a matrix, and decompose a matrix into simple and representative matrices.

1. Eigendecomposition

Also called Eigen Value Decomposition (EVD) [1].

Geometric interpretation of EVD

This video by 3Blue1Brown provides a very informative geometric interpretation of EVD. Essentially, when we apply a linear transformation \(A\) (a combination of stretch and rotation), if a vector \(v \in R^{n}\) in the space changes only by a scalar factor \(\lambda\), we call such a vector an eigenvector: \(Av = \lambda v \tag{1} \) Eigenvalues can be used directly to calculate the determinant of a matrix: \(det(A) = \prod_i^n \lambda_i\tag{2}\) \(A\) is singular if any of its eigenvalues is 0.

Eigendecomposition

Suppose a square matrix \(A \in R^{n \times n}\) has \(n\) linearly independent eigenvectors \(q_i \in R^{n}\).
For a single eigenvector \(q_i\), from equation 1, we have \(Aq_i = \lambda_i q_i, i = 1,2…n \tag{3} \) For all eigenvectors, we can define a square matrix \(Q \in R^{n \times n}\) with \(q_i\) in each column: \(Q = \begin{bmatrix} q_1 & q_2 & … &q_n \end{bmatrix} \tag{4} \) For all eigenvalues, we can define a diagonal matrix \(\Lambda \in R^{n \times n}\) with the eigenvalues \(\lambda_i\) as the diagonal elements: \(\Lambda = diag(\lambda_1, \lambda_2, … \lambda_n) \tag{5} \) Thus equation 3 with all eigenvalues and eigenvectors of \(A\) can be written as: \(AQ = Q \Lambda \tag{7}\) Because all eigenvectors are linearly independent, \(Q\) has full rank and is invertible with inverse \(Q^{-1}\): \(AQQ^{-1} = Q \Lambda Q^{-1} \tag{8}\) Therefore, we can decompose a full rank square matrix \(A\) into a square matrix of eigenvectors and a diagonal matrix of eigenvalues [5]: \(A = Q \Lambda Q^{-1} \tag{9} \) If \(A \in R^{n \times n}\) does not have \(n\) linearly independent eigenvectors, then it is not diagonalizable. In this case, \(A\) is also called a defective matrix, and there is no diagonal matrix \(\Lambda\) for the eigendecomposition in equation 9.

2. Singular value decomposition

LU factorization and the eigendecomposition mentioned before are only applicable to square matrices. We would like to have a more generic decomposition approach for any rectangular matrix.

Geometric interpretation of SVD

We start with a \(R^n\) space defined by orthogonal unit vectors \(v_i\). Shown in the figure above is a 2-dimensional example. Note that \(v_1, v_2\) are unit vectors \(\|v_i\| = 1\), and they define a sphere \(S\) [2][3]. Now we apply a linear transformation \(A \in R^{n \times n} \) to transform the sphere \(S\) into an output space \(AS \in R^{n \times n}\). In this new space, the original sphere \(S\) is transformed into an ellipse defined by 2 unit vectors \(u_1, u_2\), with scalar factors of \(\sigma_1, \sigma_2\) respectively.
Therefore, we have \(Av_i = \sigma_i u_i \tag {10}\) Note that equation 10 looks quite similar to the eigendecomposition relation in equation 3, except that \(v_i \neq u_i\). We can rewrite equation 10 in matrix format with all \(n\) dimensions of \(A\): \(AV = \hat U \hat {\Sigma} \tag{11} \) \(V = \begin{bmatrix} v_1 &v_2 & … &v_n \end{bmatrix} \tag{12} \) \(\hat U = \begin{bmatrix} u_1 & u_2 & … &u_n \end{bmatrix} \tag{13} \) \(\hat {\Sigma} = diag (\sigma_1, \sigma_2, … \sigma_n) \tag{14}\) \(\hat {\Sigma}\) is a diagonal matrix whose sorted diagonal values \(\sigma_i\) are called singular values: \(\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min(m,n)} \geq 0 \tag{15} \) Geometrically, \(V\) is an orthogonal matrix whose columns define the input space: \(VV^T = V^TV = I, V^T = V^{-1}, det(V) = \pm 1 \tag{16.1}\) In theory, \(V\) can take either real or complex values. In the case of complex values, \(V\) is called a unitary matrix with its conjugate transpose \(V^{*}\): \(VV^{*} = V^{*}V = I\tag{16.2}\) \(A\) is a linear transformation matrix. \(\hat U\) contains orthogonal unit vectors giving the rotated axes of the output ellipse, and \(\hat {\Sigma}\) represents the stretch. We can rewrite equation 11 in the real-valued case as: \(AVV^T = \hat U \hat {\Sigma}V^T \tag{17} \) \(A = \hat U \hat {\Sigma}V^T \tag{18.1} \) In the complex-valued case, we can write it in the following format: \(A = \hat U \hat {\Sigma}V^{*}\tag{18.2} \) Equation 18 is called the reduced singular value decomposition, with \(\hat {\Sigma}\) as a square diagonal matrix. Usually, we can pad \(\hat {\Sigma}\) with zeros and add arbitrary orthogonal columns to \(\hat U\) to form a unitary matrix \(U\), \(UU^T = I\), as shown in the following figure: \(A = U {\Sigma}V^T \tag{19.1}\) \(A = U {\Sigma}V^{*} \tag{19.2}\) Equation 19 is the general format of singular value decomposition.
Any matrix \(A \in R^{m \times n}\) can be decomposed into these 3 matrices: \(U \in R^{m \times m}, V \in R^{n \times n}\) are orthogonal square matrices (also called unitary matrices if \(U,V\) take complex values), and \(\Sigma \in R^{m \times n}\) is a diagonal matrix. SVD is commonly used in recommendation systems and matrix factorization.

3. Covariance matrix, EVD, SVD

The covariance matrix \(C = A^TA\) is used to describe the redundancy and noise in data \(A\). For a matrix \(A \in R^{m \times n}\), multiplying it by its transpose gives us a square and symmetric matrix. The diagonal elements of \(C\) represent variances and the off-diagonal elements represent covariances. \(C\) is likely to be a dense matrix unless all columns in the input \(A\) are independent, i.e. the covariances equal 0. Dense matrices are difficult to transform, invert, and compute with. If we can diagonalize a matrix such that all off-diagonal elements are 0, matrix computation becomes easier and more efficient. As discussed above, both EVD and SVD can diagonalize square matrices. Using SVD, we have: \(A^TA \\ = (U \Sigma V^T )^TU \Sigma V^T \\ =V {\Sigma}^T U^T U \Sigma V^T \\ = V{\Sigma}^T \Sigma V^T \\ = V {\Sigma}^2 V^T \tag {20} \) Since \(V\) is a unitary matrix, \(V^T = V^{-1}\), equation 20 can be written as: \(A^TA = V {\Sigma}^2 V^{-1} \tag{21}\) Similarly, \(AA^T = U {\Sigma}^2 U^T = U {\Sigma}^2 U^{-1} \tag{22} \) Equations 21 and 22 have the same format as equation 9: eigendecomposition. Indeed, if \(C\) is a symmetric positive semi-definite matrix (as \(A^TA\) always is), its eigendecomposition coincides with its singular value decomposition. The eigenvectors of \(C\) are not only linearly independent, but also orthogonal. Let the eigenvalues of \(C = A^TA\) be \(\Lambda\); we have: \(\Lambda = {\Sigma}^2 \tag{23.1}\) \(\lambda_i = {\sigma_i}^2 \tag{23.2} \) The covariance matrix with singular value decomposition is used in Principal Component Analysis (PCA) [4], discussed in later posts.
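Equations 19 and 21-23 can be verified directly with numpy (the random matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# reduced SVD: A = U Sigma V^T  (equation 18/19)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)

# eigenvalues of C = A^T A are the squared singular values (equation 23)
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(eigvals, s**2)

# and C has an eigendecomposition of the form in equation 9/21
C = A.T @ A
lam, Q = np.linalg.eigh(C)
assert np.allclose(C, Q @ np.diag(lam) @ np.linalg.inv(Q))
```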
4. Positive definite matrix

Another common matrix concept is positive definiteness. It is a characteristic of a symmetric matrix \(M \in R^{n \times n}\), such as the covariance matrix \(A^TA\) or the second-derivative Hessian matrix \(\frac {\partial^2 J(x)}{\partial x^2}\). In optimization, the objective function is convex if the Hessian matrix is positive definite. One way to evaluate the definiteness of a matrix is to check its eigenvalues: if all eigenvalues are positive, \(M\) is positive definite; if all eigenvalues are negative, \(M\) is negative definite; if all eigenvalues are non-negative, \(M\) is positive semi-definite; if all eigenvalues are non-positive, \(M\) is negative semi-definite; else, \(M\) is indefinite. Definiteness can also be evaluated via the quadratic form \(x^TMx\): if \(x^TMx > 0,\forall x \in R^n\), \(M\) is positive definite. To see why this is equivalent, use the eigendecomposition: \(M = Q \Lambda Q^{-1} \tag{24} \) \(x^TMx = x^TQ \Lambda Q^T x = y^T \Lambda y \tag{25.1}\) \(y= Q^T x \tag{25.2}\) Because \(\Lambda = diag(\lambda_1, \lambda_2, … \lambda_n) \): \(x^TMx \\= y^T \Lambda y \\ = \sum_i^n (y_i)^2 \lambda_i \tag{26}\) For a positive definite matrix \(M\), all eigenvalues are bigger than 0, \(\lambda_i > 0\); as a result: \(x^TMx = \sum_i^n (y_i)^2 \lambda_i > 0 \tag{27}\) Similarly: if \(x^TMx < 0, \forall x \in R^n\), \(M\) is negative definite; if \(x^TMx \geq 0 ,\forall x \in R^n\), \(M\) is positive semi-definite; if \(x^TMx \leq 0, \forall x \in R^n\), \(M\) is negative semi-definite; if \(x^TMx > 0\) for some \(x \in R^n\) and \(x^TMx < 0\) for some other \(x \in R^n\), \(M\) is indefinite.

Take home message

Eigendecomposition and singular value decomposition factorize a matrix using a diagonal matrix, which not only reveals unique characteristics of the data, but also makes matrix computation more efficient.
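The eigenvalue test for definiteness translates directly into code (a sketch; the tolerance is my own assumption to absorb floating-point noise):

```python
import numpy as np

def definiteness(M, tol=1e-10):
    # classify a symmetric matrix by the signs of its eigenvalues
    lam = np.linalg.eigvalsh(M)
    if np.all(lam > tol):
        return 'positive definite'
    if np.all(lam < -tol):
        return 'negative definite'
    if np.all(lam >= -tol):
        return 'positive semi-definite'
    if np.all(lam <= tol):
        return 'negative semi-definite'
    return 'indefinite'

assert definiteness(np.array([[2.0, 0.0], [0.0, 1.0]])) == 'positive definite'
assert definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])) == 'indefinite'
```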
References [1] https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors [2] http://www.cs.cornell.edu/courses/cs322/2008sp/stuff/TrefethenBau_Lec4_SVD.pdf [3] https://www.youtube.com/watch?v=EokL7E6o1AE [4] https://intoli.com/blog/pca-and-svd/ [5] https://en.wikipedia.org/wiki/Orthogonal_matrix
""" Definition of the semi-grand canonical ensemble class. """ import numpy as np from ase import Atoms from ase.data import atomic_numbers, chemical_symbols from ase.units import kB from collections import OrderedDict from typing import Dict, Union, List from .. import DataContainer from ..calculators.base_calculator import BaseCalculator from .thermodynamic_base_ensemble import ThermodynamicBaseEnsemble class SemiGrandCanonicalEnsemble(ThermodynamicBaseEnsemble): """Instances of this class allow one to simulate systems in the semi-grand canonical (SGC) ensemble (:math:`N\\Delta\\mu_i VT`), i.e. at constant temperature (:math:`T`), total number of sites (:math:`N=\\sum_i N_i`), relative chemical potentials (:math:`\\Delta\\mu_i=\\mu_i - \\mu_1`, where :math:`i` denotes the species), and volume (:math:`V`). The probability for a particular state in the SGC ensemble for a :math:`m`-component system can be written .. math:: \\rho_{\\text{SGC}} \\propto \\exp\\Big[ - \\big( E + \\sum_{i>1}^m \\Delta\\mu_i N_i \\big) \\big / k_B T \\Big] with the *relative* chemical potentials :math:`\\Delta\\mu_i = \\mu_i - \\mu_1` and species counts :math:`N_i`. Unlike the :ref:`canonical ensemble <canonical_ensemble>`, the number of the respective species (or, equivalently, the concentrations) are allowed to vary in the SGC ensemble. A trial step thus consists of randomly picking an atom and changing its identity with probability .. math:: P = \\min \\Big\\{ 1, \\, \\exp \\big[ - \\big( \\Delta E + \\sum_i \\Delta \\mu_i \\Delta N_i \\big) \\big / k_B T \\big] \\Big\\}, where :math:`\\Delta E` is the change in potential energy caused by the swap. There exists a simple relation between the differences in chemical potential and the canonical free energy :math:`F`. In a binary system, this relationship reads ..
math:: \\Delta \\mu = - \\frac{1}{N} \\frac{\\partial F}{\\partial c} ( N, V, T, \\langle c \\rangle). Here :math:`c` denotes concentration (:math:`c=N_i/N`) and :math:`\\langle c \\rangle` the average concentration observed in the simulation. By recording :math:`\\langle c \\rangle` while gradually changing :math:`\\Delta \\mu`, one can thus in principle calculate the difference in canonical free energy between the pure phases (:math:`c=0` or :math:`1`) and any concentration by integrating :math:`\\Delta \\mu` over that concentration range. In practice this requires that the average recorded concentration :math:`\\langle c \\rangle` varies continuously with :math:`\\Delta \\mu`. This is not the case for materials with multiphase regions (such as miscibility gaps), because in such regions :math:`\\Delta \\mu` maps to multiple concentrations. In a Monte Carlo simulation, this is typically manifested by discontinuous jumps in concentration. Such jumps mark the phase boundaries of a multiphase region and can thus be used to construct the phase diagram. To recover the free energy, however, such systems require sampling in other ensembles, such as the :ref:`variance-constrained semi-grand canonical ensemble <sgc_ensemble>`. 
Parameters ---------- structure : :class:`Atoms <ase.Atoms>` atomic configuration to be used in the Monte Carlo simulation; also defines the initial occupation vector calculator : :class:`BaseCalculator <mchammer.calculators.ClusterExpansionCalculator>` calculator to be used for calculating the potential changes that enter the evaluation of the Metropolis criterion temperature : float temperature :math:`T` in appropriate units [commonly Kelvin] chemical_potentials : Dict[str, float] chemical potential for each species :math:`\\mu_i`; the key denotes the species, the value specifies the chemical potential in units that are consistent with the underlying cluster expansion boltzmann_constant : float Boltzmann constant :math:`k_B` in appropriate units, i.e. units that are consistent with the underlying cluster expansion and the temperature units [default: eV/K] user_tag : str human-readable tag for ensemble [default: None] data_container : str name of file the data container associated with the ensemble will be written to; if the file exists it will be read, the data container will be appended, and the file will be updated/overwritten random_seed : int seed for the random number generator used in the Monte Carlo simulation ensemble_data_write_interval : int interval at which data is written to the data container; this includes for example the current value of the calculator (i.e. usually the energy) as well as ensemble-specific fields such as temperature or the number of atoms of different species data_container_write_period : float period in units of seconds at which the data container is written to file; writing periodically to file provides both a way to examine the progress of the simulation and to back up the data [default: np.inf] trajectory_write_interval : int interval at which the current occupation vector of the atomic configuration is written to the data container.
sublattice_probabilities : List[float] probability for picking a sublattice when doing a random flip. This should be as long as the number of sublattices and should sum up to 1. Example ------- The following snippet illustrates how to carry out a simple Monte Carlo simulation in the semi-grand canonical ensemble. Here, the parameters of the cluster expansion are set to emulate a simple Ising model in order to obtain an example that can be run without modification. In practice, one should of course use a proper cluster expansion:: from ase.build import bulk from icet import ClusterExpansion, ClusterSpace from mchammer.calculators import ClusterExpansionCalculator from mchammer.ensembles import SemiGrandCanonicalEnsemble # prepare cluster expansion # the setup emulates a second nearest-neighbor (NN) Ising model # (zerolet and singlet ECIs are zero; only first and second neighbor # pairs are included) prim = bulk('Au') cs = ClusterSpace(prim, cutoffs=[4.3], chemical_symbols=['Ag', 'Au']) ce = ClusterExpansion(cs, [0, 0, 0.1, -0.02]) # set up and run MC simulation (T=600 K, delta_mu=0.8 eV/atom) structure = prim.repeat(3) calc = ClusterExpansionCalculator(structure, ce) mc = SemiGrandCanonicalEnsemble(structure=structure, calculator=calc, temperature=600, data_container='myrun_sgc.dc', chemical_potentials={'Ag': 0, 'Au': 0.8}) mc.run(100) # carry out 100 trial swaps TODO ---- * add check that chemical symbols in chemical potentials are allowed """ def __init__(self, structure: Atoms, calculator: BaseCalculator, temperature: float, chemical_potentials: Dict[str, float], user_tag: str = None, data_container: DataContainer = None, random_seed: int = None, data_container_write_period: float = np.inf, ensemble_data_write_interval: int = None, trajectory_write_interval: int = None, boltzmann_constant: float = kB, sublattice_probabilities: List[float] = None) -> None: self._ensemble_parameters = dict(temperature=temperature) # add chemical potentials to ensemble parameters
self._chemical_potentials = get_chemical_potentials(chemical_potentials) for atnum, chempot in self.chemical_potentials.items(): mu_sym = 'mu_{}'.format(chemical_symbols[atnum]) self._ensemble_parameters[mu_sym] = chempot self._boltzmann_constant = boltzmann_constant super().__init__( structure=structure, calculator=calculator, user_tag=user_tag, data_container=data_container, random_seed=random_seed, data_container_write_period=data_container_write_period, ensemble_data_write_interval=ensemble_data_write_interval, trajectory_write_interval=trajectory_write_interval, boltzmann_constant=boltzmann_constant) if sublattice_probabilities is None: self._flip_sublattice_probabilities = self._get_flip_sublattice_probabilities() else: self._flip_sublattice_probabilities = sublattice_probabilities @property def temperature(self) -> float: """ temperature :math:`T` (see parameters section above) """ return self.ensemble_parameters['temperature'] def _do_trial_step(self): """ Carries out one Monte Carlo trial step. """ sublattice_index = self.get_random_sublattice_index( probability_distribution=self._flip_sublattice_probabilities) return self.do_sgc_flip(self.chemical_potentials, sublattice_index=sublattice_index) @property def chemical_potentials(self) -> Dict[int, float]: """ chemical potentials :math:`\\mu_i` (see parameters section above) """ return self._chemical_potentials def _get_ensemble_data(self) -> Dict: """Returns the data associated with the ensemble. For the SGC ensemble this specifically includes the species counts.
""" # generic data data = super()._get_ensemble_data() # species counts data.update(self._get_species_counts()) return data def get_chemical_potentials(chemical_potentials: Dict[Union[int, str], float]): """ Gets values of chemical potentials.""" if not isinstance(chemical_potentials, dict): raise TypeError('chemical_potentials has the wrong type: {}' .format(type(chemical_potentials))) cps = OrderedDict([(key, val) if isinstance(key, int) else (atomic_numbers[key], val) for key, val in chemical_potentials.items()]) return cps
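As a minimal illustration of the acceptance rule quoted in the class docstring, the following standalone sketch evaluates the SGC Metropolis probability \(P = \min\{1, \exp[-(\Delta E + \sum_i \Delta\mu_i \Delta N_i)/k_B T]\}\). The function name and the plain-Python setup are hypothetical; in the module above the actual flip is performed by `do_sgc_flip`, inherited from `ThermodynamicBaseEnsemble`.

```python
import math


def sgc_acceptance(delta_E, delta_mu, delta_N, temperature, kB=8.617333e-5):
    """Hypothetical helper: Metropolis acceptance probability for an
    SGC identity flip, P = min{1, exp[-(dE + sum_i dmu_i*dN_i)/(kB*T)]}.
    Energies in eV, temperature in K, kB in eV/K (the class default).
    """
    cost = delta_E + sum(dmu * dN for dmu, dN in zip(delta_mu, delta_N))
    if cost <= 0.0:
        return 1.0  # downhill moves are always accepted
    return math.exp(-cost / (kB * temperature))


# A flip that lowers E + sum(dmu*dN) is always accepted ...
assert sgc_acceptance(-0.1, [0.0], [1], 600.0) == 1.0
# ... while an uphill flip is accepted with probability exp(-cost/kB T) < 1.
assert 0.0 < sgc_acceptance(0.05, [0.02], [1], 600.0) < 1.0
```

Note how the chemical-potential term biases the simulation: raising \(\Delta\mu\) for a species makes flips that add that species more costly, which is what allows concentration to be controlled indirectly in this ensemble.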
I have two questions. First, given a finite-dimensional complex vector space $V$ and a finite group representation $\rho:G \to GL(V)$, Maschke's theorem tells us that we may decompose $V$ into a direct sum $$V= \bigoplus_{\lambda}m_{\lambda}V^{\lambda}$$ of irreducible representations $V^{\lambda}$ of multiplicity $m_{\lambda}$. Does this result hold for representations of $G$ on an infinite-dimensional vector space? Of particular interest is the case where the vector space $V$ is graded $$V=\bigoplus_{n \geq 0} V_n$$ such that there is a representation $\rho_n: G \to GL(V_n)$ for all $n$. As an example, the symmetric group $S_n$ acts on the polynomial ring $\mathbb{C}[t_1,...,t_n]$ by permuting variables. This gives us a representation of $S_n$ on each graded part of $\mathbb{C}[t_1,...,t_n]$, with the grading given by the degree of the polynomials. In particular, the degree-one graded part carries the defining (permutation) representation of $S_n$. I'm wondering how to talk about this representation. In other words, in a decomposition of a finite-dimensional vector space, the multiplicity of each irreducible representation is a natural number, but in this case we would have infinitely many isomorphic copies of the same representation. Assuming that Maschke's theorem holds in the infinite-dimensional case, how do we write or represent the decomposition into irreducibles?
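One standard bookkeeping device for this situation (not part of the question itself, but common in the combinatorics literature) is the graded multiplicity: since each graded piece $V_n$ is finite dimensional, Maschke's theorem applies degree by degree, and the multiplicity of each irreducible is recorded as a formal power series in a variable $q$ rather than a single natural number. A sketch of the notation:

```latex
% Each graded piece decomposes with finite multiplicities,
%   V_n = \bigoplus_\lambda m_{\lambda,n} V^\lambda, \qquad m_{\lambda,n} \in \mathbb{N},
% and the whole graded module is summarized by the series
%   m_\lambda(q) = \sum_{n \ge 0} m_{\lambda,n}\, q^n,
% so that one writes, as an identity of graded characters,
\operatorname{ch}_q V \;=\; \sum_{n \ge 0} q^n\, \chi_{V_n}
\;=\; \sum_{\lambda} m_\lambda(q)\, \chi_{V^\lambda}.
```

In this language no infinite-dimensional version of Maschke's theorem is needed for the polynomial-ring example: complete reducibility is applied separately in each finite-dimensional degree, and the infinitely many copies of an irreducible are distinguished by the power of $q$ recording the degree in which they occur.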